
2536 Data Engineering Jobs - Page 35

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

What you'll own as Account Executive at Hevo: We are looking for a high-impact Account Executive who thrives in selling complex, technical solutions to mid-market customers. This role requires a proactive sales professional who can drive the full sales cycle, from strategic prospecting to closing high-value deals. You will engage with senior decision-makers, navigate competitive sales cycles, and create demand through outbound efforts and social selling.

Key Responsibilities:
Pipeline Generation & Outbound Sales: Identify, engage, and develop new business opportunities through outbound prospecting, personalized outreach, and strategic social selling.
Building Business Cases: Develop and present clear, data-backed business cases that align with the customer's pain points, priorities, and financial objectives. Drive urgency by quantifying ROI and the cost of inaction.
Driving Proof of Concepts (PoCs): Partner with Solutions Engineers, Product, Engineering, and Support teams to design and execute PoCs that demonstrate the real-world impact of our solution.
Deal Execution: Lead high-stakes conversations with CXOs, overcome objections, negotiate, and drive opportunities to close through a structured and value-driven approach.
Competitive Positioning: Hold your ground in competitive sales cycles, effectively differentiating our solution in a market with well-established players.
Technical Acumen & Continuous Learning: Develop a strong understanding of data engineering, analytics, and modern data stack components. Stay up to date on industry trends, evolving technologies, and customer challenges.
Market Insights & Adaptability: Stay ahead of industry trends, adapt messaging based on competitive dynamics, and continuously refine sales strategies.

What we are looking for:
6+ years of SaaS or B2B technology sales experience, with a track record of successfully selling to mid-market customers.
Proven ability to create and close net-new business while managing multi-stakeholder sales cycles.
Strong outbound sales acumen: comfortable with prospecting, networking, and driving engagement beyond inbound leads.
Experience navigating competitive deal cycles and articulating differentiation in highly contested sales motions.
Exceptional communication, negotiation, and stakeholder management skills.
Experience using CRM and sales automation tools (e.g., Salesforce, HubSpot) to track and manage pipeline performance.
Experience selling to CDO and Head of Data Analytics personas is a plus, but not mandatory.
This role is for someone who is driven, adaptable, and eager to make a tangible impact in a fast-moving SaaS environment. If you're ready to take ownership of your pipeline and drive revenue growth, we'd love to hear from you.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Kochi

Work from Office

This role will focus on developing a comprehensive data infrastructure, ensuring data accuracy, and providing critical insights to support our business goals.

Responsibilities
Data Strategy & Governance: Develop and implement a data strategy that aligns with organizational goals. Establish governance policies to maintain data quality, consistency, and security.
Team Leadership: Provide training and development to enhance the team's skills in data management and reporting.
Reporting & Analytics: Oversee the creation of dashboards and reports, delivering key insights to stakeholders. Ensure reports are accessible, reliable, and relevant, with a focus on performance metrics, customer insights, and operational efficiencies.
Cross-functional Collaboration: Work closely with cross-functional teams (Tech, Finance, Operations, Marketing, Credit, and Analytics) to identify data requirements, integrate data across systems, and support data-driven initiatives.
Data Infrastructure & Tools: Work with Data Engineering to assess, select, and implement data tools and platforms to optimize data storage, processing, and reporting capabilities. Maintain and improve our data infrastructure to support scalability and data accessibility.
Data Compliance: Ensure adherence to data privacy laws and compliance standards, implementing best practices in data security and privacy.

Qualifications
Experience: 5-10 years of experience in data management and reporting, with at least some in a leadership role.
Education: Bachelor's or Master's degree in Data Science, Business Analytics, Statistics, Computer Science, or a related (STEM) field.
Technical Skills: Proficiency in data visualization tools (Metabase, Sisense, Tableau, Power BI), SQL, and data warehousing solutions. Knowledge of ETL processes and familiarity with cloud data platforms is a plus.
Analytical Skills: Strong analytical abilities and a strategic mindset, with proven experience in translating data into actionable business insights.
Leadership & Communication: Excellent leadership, communication, and presentation skills. Ability to communicate complex information clearly to both technical and non-technical stakeholders. Strong strategic thinking and problem-solving skills. Enthusiasm for working across cultures, functions, and time zones.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

10 - 17 Lacs

Chennai

Work from Office

Data Engineer, Chennai, India

About the job: The Data Engineer is a cornerstone of Vendasta's R&D team, driving the efficient processing, organization, and delivery of clean, structured data in support of business intelligence and decision-making. By developing and maintaining scalable ELT pipelines, they ensure data reliability and scalability, adhering to Vendasta's commitment to delivering data solutions aligned with evolving business needs.

Your Impact:
Design, implement, and maintain scalable ELT pipelines within a Kimball Architecture data warehouse.
Ensure robustness against failures and data entry errors, managing data conformation, de-duplication, survivorship, and coercion.
Manage historical and hierarchical data structures, ensuring usability for the Business Intelligence (BI) team and scalability for future growth.
Partner with BI teams to prioritize and deliver data solutions while maintaining alignment with business objectives.
Work closely with source system owners to extract, clean, and integrate data into the data warehouse. Advocate for and influence improvements in source data integrity.
Champion best practices in data engineering, including governance, lineage tracking, and quality assurance.
Collaborate with Site Reliability Engineering (SRE) teams to optimize cloud infrastructure usage.
Operate within an Agile framework, contributing to team backlogs via Kanban or Scrum processes as appropriate.
Balance short-term deliverables with long-term technical investments in collaboration with BI and engineering management.

What you bring to the table:
5-8 years of proficiency in ETL and SQL, and experience with cloud-based platforms like Google Cloud (BigQuery, dbt, Looker).
In-depth understanding of Kimball data warehousing principles, including the 34 subsystems of ETL.
Strong problem-solving skills for diagnosing and resolving data quality issues.
Ability to engage with BI teams and source system owners to prioritize and deliver data solutions effectively.
Eagerness to advocate for data integrity improvements while respecting the boundaries of data mesh principles.
Ability to balance immediate needs with long-term technical investments.
Understanding of cloud infrastructure for effective resource management in partnership with SRE teams.

About Vendasta: So what do we actually do? Vendasta is a SaaS company composed of global brands, including MatchCraft, Yesware, and Broadly, that builds and sells software and services to help small businesses operate more efficiently as a team, meet more client needs, and provide incredible client experiences. We have offices in Saskatoon, Saskatchewan; Boston; Boca Raton, Florida; and Chennai, India.

Perks:
Health insurance benefits
Paid time off
Training & Career Development: professional development plans, leadership workshops, mentorship programs, and more!
Free snacks, hot beverages, and catered lunches on Fridays
Culture comprised of our core values: Drive, Innovation, Respect, and Agility
Night shift premium
Provident Fund
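The de-duplication and survivorship work this listing describes can be sketched at toy scale. The following Python sketch is illustrative only: the natural key, the field names, and the "newest record wins, older duplicates back-fill missing fields" rule are assumptions, not Vendasta's actual logic.

```python
from datetime import date

def deduplicate(records, key="email"):
    """Collapse duplicate records sharing a natural key.

    Survivorship rule (assumed for illustration): the most recently
    updated record wins, but missing fields are back-filled from
    older duplicates.
    """
    best = {}
    # Sort oldest-first so newer records overwrite older ones below.
    for rec in sorted(records, key=lambda r: r["updated_at"]):
        merged = dict(best.get(rec[key], {}))
        # Only non-null values overwrite, so older duplicates
        # "survive" for fields the newer row lacks.
        merged.update({f: v for f, v in rec.items() if v is not None})
        best[rec[key]] = merged
    return list(best.values())

rows = [
    {"email": "a@x.com", "name": "Ann", "phone": None,       "updated_at": date(2024, 1, 1)},
    {"email": "a@x.com", "name": None,  "phone": "555-0101", "updated_at": date(2023, 6, 1)},
    {"email": "b@x.com", "name": "Bob", "phone": "555-0202", "updated_at": date(2024, 2, 2)},
]
print(deduplicate(rows))  # 2 survivors; a@x.com keeps the newer name and the older phone
```

In a real warehouse this logic would live in the ELT layer (e.g. as a windowed SQL merge), but the precedence rules are the same idea.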

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Jaipur

Work from Office

Key Responsibilities:
Design, develop, and maintain data pipelines to support business intelligence and analytics.
Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
Develop and optimize data models for reporting and analytics.
Work with Tableau (advanced level) to create insightful dashboards and visualizations.
Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
Ensure data integrity, security, and compliance with best practices.
Troubleshoot and optimize data workflows for performance improvement.

Required Skills & Qualifications:
5+ years of experience as a Data Engineer.
Advanced proficiency in SSIS, Tableau, and SQL.
Strong understanding of ETL processes and data pipeline development.
Experience with data modeling for analytical and reporting solutions.
Hands-on experience working in Agile development environments.
Excellent problem-solving and troubleshooting skills.
Ability to work independently in a remote setup.
Strong communication and collaboration skills.
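To give a flavor of the "complex SQL for validation" work described above, here is a minimal reconciliation check of the kind an ETL pipeline might run after loading. The tables, values, and tolerance are invented for illustration; sqlite3 stands in for the production database.

```python
import sqlite3

# Illustrative validation query: flag orders whose summed line amounts
# do not reconcile with the order-header total (a common ETL QA check).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders(order_id INTEGER PRIMARY KEY, total REAL);
CREATE TABLE order_lines(order_id INTEGER, amount REAL);
INSERT INTO orders VALUES (1, 30.0), (2, 50.0);
INSERT INTO order_lines VALUES (1, 10.0), (1, 20.0), (2, 45.0);
""")
bad = conn.execute("""
    SELECT o.order_id, o.total, COALESCE(SUM(l.amount), 0) AS line_total
    FROM orders o
    LEFT JOIN order_lines l ON l.order_id = o.order_id
    GROUP BY o.order_id, o.total
    HAVING ABS(o.total - COALESCE(SUM(l.amount), 0)) > 0.01
""").fetchall()
print(bad)  # only order 2 fails reconciliation (50.0 vs 45.0)
```

In SSIS the same check would typically be wired into the control flow so a non-empty result fails the load or routes rows to a quarantine table.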

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Work from Office

Key Responsibilities:
Design, develop, and maintain data pipelines to support business intelligence and analytics.
Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
Develop and optimize data models for reporting and analytics.
Work with Tableau (advanced level) to create insightful dashboards and visualizations.
Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
Ensure data integrity, security, and compliance with best practices.
Troubleshoot and optimize data workflows for performance improvement.

Required Skills & Qualifications:
5+ years of experience as a Data Engineer.
Advanced proficiency in SSIS, Tableau, and SQL.
Strong understanding of ETL processes and data pipeline development.
Experience with data modeling for analytical and reporting solutions.
Hands-on experience working in Agile development environments.
Excellent problem-solving and troubleshooting skills.
Ability to work independently in a remote setup.
Strong communication and collaboration skills.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Raipur

Work from Office

Job Overview
Branch launched in India in 2019 and has seen rapid adoption and growth. As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. We are seeking an experienced and strategic Data and Reporting Lead to shape our data strategy and drive data-driven decision-making across the organization. This role will focus on developing a comprehensive data infrastructure, ensuring data accuracy, and providing critical insights to support our business goals.

Responsibilities
Data Strategy & Governance: Develop and implement a data strategy that aligns with organizational goals. Establish governance policies to maintain data quality, consistency, and security.
Team Leadership: Provide training and development to enhance the team's skills in data management and reporting.
Reporting & Analytics: Oversee the creation of dashboards and reports, delivering key insights to stakeholders. Ensure reports are accessible, reliable, and relevant, with a focus on performance metrics, customer insights, and operational efficiencies.
Cross-functional Collaboration: Work closely with cross-functional teams (Tech, Finance, Operations, Marketing, Credit, and Analytics) to identify data requirements, integrate data across systems, and support data-driven initiatives.
Data Infrastructure & Tools: Work with Data Engineering to assess, select, and implement data tools and platforms to optimize data storage, processing, and reporting capabilities. Maintain and improve our data infrastructure to support scalability and data accessibility.
Data Compliance: Ensure adherence to data privacy laws and compliance standards, implementing best practices in data security and privacy.

Qualifications
Experience: 5-10 years of experience in data management and reporting, with at least some in a leadership role.
Education: Bachelor's or Master's degree in Data Science, Business Analytics, Statistics, Computer Science, or a related (STEM) field.
Technical Skills: Proficiency in data visualization tools (Metabase, Sisense, Tableau, Power BI), SQL, and data warehousing solutions. Knowledge of ETL processes and familiarity with cloud data platforms is a plus.
Analytical Skills: Strong analytical abilities and a strategic mindset, with proven experience in translating data into actionable business insights.
Leadership & Communication: Excellent leadership, communication, and presentation skills. Ability to communicate complex information clearly to both technical and non-technical stakeholders. Strong strategic thinking and problem-solving skills. Enthusiasm for working across cultures, functions, and time zones.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Vadodara

Work from Office

Job Overview
Branch launched in India in 2019 and has seen rapid adoption and growth. As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. We are seeking an experienced and strategic Data and Reporting Lead to shape our data strategy and drive data-driven decision-making across the organization. This role will focus on developing a comprehensive data infrastructure, ensuring data accuracy, and providing critical insights to support our business goals.

Responsibilities
Data Strategy & Governance: Develop and implement a data strategy that aligns with organizational goals. Establish governance policies to maintain data quality, consistency, and security.
Team Leadership: Provide training and development to enhance the team's skills in data management and reporting.
Reporting & Analytics: Oversee the creation of dashboards and reports, delivering key insights to stakeholders. Ensure reports are accessible, reliable, and relevant, with a focus on performance metrics, customer insights, and operational efficiencies.
Cross-functional Collaboration: Work closely with cross-functional teams (Tech, Finance, Operations, Marketing, Credit, and Analytics) to identify data requirements, integrate data across systems, and support data-driven initiatives.
Data Infrastructure & Tools: Work with Data Engineering to assess, select, and implement data tools and platforms to optimize data storage, processing, and reporting capabilities. Maintain and improve our data infrastructure to support scalability and data accessibility.
Data Compliance: Ensure adherence to data privacy laws and compliance standards, implementing best practices in data security and privacy.

Qualifications
Experience: 5-10 years of experience in data management and reporting, with at least some in a leadership role.
Education: Bachelor's or Master's degree in Data Science, Business Analytics, Statistics, Computer Science, or a related (STEM) field.
Technical Skills: Proficiency in data visualization tools (Metabase, Sisense, Tableau, Power BI), SQL, and data warehousing solutions. Knowledge of ETL processes and familiarity with cloud data platforms is a plus.
Analytical Skills: Strong analytical abilities and a strategic mindset, with proven experience in translating data into actionable business insights.
Leadership & Communication: Excellent leadership, communication, and presentation skills. Ability to communicate complex information clearly to both technical and non-technical stakeholders. Strong strategic thinking and problem-solving skills. Enthusiasm for working across cultures, functions, and time zones.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Thiruvananthapuram

Work from Office

We are seeking an experienced and strategic Data and Reporting Lead to shape our data strategy and drive data-driven decision-making across the organization. This role will focus on developing a comprehensive data infrastructure, ensuring data accuracy, and providing critical insights to support our business goals.

Responsibilities
Data Strategy & Governance: Develop and implement a data strategy that aligns with organizational goals. Establish governance policies to maintain data quality, consistency, and security.
Team Leadership: Provide training and development to enhance the team's skills in data management and reporting.
Reporting & Analytics: Oversee the creation of dashboards and reports, delivering key insights to stakeholders. Ensure reports are accessible, reliable, and relevant, with a focus on performance metrics, customer insights, and operational efficiencies.
Cross-functional Collaboration: Work closely with cross-functional teams (Tech, Finance, Operations, Marketing, Credit, and Analytics) to identify data requirements, integrate data across systems, and support data-driven initiatives.
Data Infrastructure & Tools: Work with Data Engineering to assess, select, and implement data tools and platforms to optimize data storage, processing, and reporting capabilities. Maintain and improve our data infrastructure to support scalability and data accessibility.
Data Compliance: Ensure adherence to data privacy laws and compliance standards, implementing best practices in data security and privacy.

Qualifications
Experience: 5-10 years of experience in data management and reporting, with at least some in a leadership role.
Education: Bachelor's or Master's degree in Data Science, Business Analytics, Statistics, Computer Science, or a related (STEM) field.
Technical Skills: Proficiency in data visualization tools (Metabase, Sisense, Tableau, Power BI), SQL, and data warehousing solutions. Knowledge of ETL processes and familiarity with cloud data platforms is a plus.
Analytical Skills: Strong analytical abilities and a strategic mindset, with proven experience in translating data into actionable business insights.
Leadership & Communication: Excellent leadership, communication, and presentation skills. Ability to communicate complex information clearly to both technical and non-technical stakeholders. Strong strategic thinking and problem-solving skills. Enthusiasm for working across cultures, functions, and time zones.

Posted 2 weeks ago

Apply

9.0 - 13.0 years

7 - 17 Lacs

Pune

Work from Office

Required Skills and Qualifications:
Minimum 8+ years of hands-on experience in Data Engineering.
Strong proficiency with Databricks, Azure Data Factory (ADF), SQL (T-SQL or similar), and PySpark.
Experience with cloud-based data platforms, especially Azure.
Strong understanding of data warehousing, data lakes, and data modeling.
Ability to write efficient, maintainable, and reusable code.
Excellent analytical, problem-solving, and communication skills.
Willingness to travel to the customer location in Hinjawadi on all 3 working days.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad, Bengaluru

Work from Office

About our team
DEX is the central data org for Kotak Bank and manages the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters, and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technology fellows a great opportunity to build things from scratch and deliver one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics.

As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and look ahead to systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; cost-optimization solutions such as EMR optimizers; automation; and observability capabilities for Kotak's data platform. The team will also be the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering: This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers, and all analytics use cases.

Data Governance: The central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform.

If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you. Your day-to-day role will include:
Drive business decisions with technical input and lead the team.
Design, implement, and support a data infrastructure from scratch.
Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
Extract, transform, and load data from various sources using SQL and AWS big data technologies.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Build data platforms, data pipelines, and data management and governance tools.

BASIC QUALIFICATIONS (Data Engineer)
Bachelor's degree in Computer Science, Engineering, or a related field.
3-5 years of experience in data engineering.
Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR.
Experience with data pipeline tools such as Airflow and Spark.
Experience with data modeling and data quality best practices.
Excellent problem-solving and analytical skills.
Strong communication and teamwork skills.
Experience in at least one modern scripting or programming language, such as Python, Java, or Scala.
Strong advanced SQL skills.

PREFERRED QUALIFICATIONS
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow.
Prior experience in the Indian banking segment and/or fintech is desired.
Experience with non-relational databases and data stores.
Building and operating highly available, distributed data processing systems for large datasets.
Professional software engineering and best practices for the full software development life cycle.
Designing, developing, and implementing different types of data warehousing layers.
Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions.
Building scalable data infrastructure and understanding distributed systems concepts.
SQL, ETL, and data modelling.
Ensuring the accuracy and availability of data to customers.
Proficiency in at least one scripting or programming language for handling large-volume data processing.
Strong presentation and communication skills.
For Managers: customer centricity and obsession for the customer; ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working; ability to structure and organize teams and streamline communication; prior experience executing large-scale Data Engineering projects.
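The "extract, transform, and load data from various sources" responsibility above can be sketched at toy scale. This illustrative Python version swaps the listing's S3/Spark/Redshift stack for in-memory CSV and sqlite; all data, table names, and the quality rule are invented for the example.

```python
import csv
import io
import sqlite3

# Minimal extract-transform-load sketch. A real pipeline here would
# read from S3 via Spark/Glue and load into Redshift; the shape of
# the three stages is the same.
RAW = """account_id,balance
A1, 1200.50
A2,
A3, 830.00
"""

def extract(text):
    # Extract: parse the raw feed into dict rows.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: coerce types and drop rows failing a basic quality
    # check (empty balance); a real pipeline would quarantine them.
    clean = []
    for r in rows:
        bal = r["balance"].strip()
        if not bal:
            continue
        clean.append((r["account_id"].strip(), float(bal)))
    return clean

def load(rows):
    # Load: write the cleaned rows into the target store.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE balances(account_id TEXT, balance REAL)")
    db.executemany("INSERT INTO balances VALUES (?, ?)", rows)
    return db

db = load(transform(extract(RAW)))
print(db.execute("SELECT COUNT(*), SUM(balance) FROM balances").fetchone())
# 2 rows survive the quality check; A2 is dropped
```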

Posted 2 weeks ago

Apply

4.0 - 7.0 years

25 - 27 Lacs

Chennai

Work from Office

Position Overview
Annalect is currently seeking a Data Engineer to join our technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, and data, and for fusing these disciplines together to build cool things. You will work on one or more software and data products in the Annalect Engineering Team, and participate in technical architecture, design, and development of software products, as well as research and evaluation of new technical solutions.

Responsibilities
Steward data and compute environments to facilitate usage of data assets.
Design, build, test, and deploy scalable and reusable systems that handle large amounts of data.
Manage a small team of developers.
Perform code reviews and provide leadership and guidance to junior developers.
Learn and teach new technologies.

Qualifications
Experience designing and managing data flows.
Experience designing systems and APIs to integrate data into applications.
8+ years of Linux, Bash, Python, and SQL experience.
4+ years using Spark and other Hadoop-ecosystem software.
4+ years using AWS cloud services, esp. EMR, Glue, Athena, and Redshift.
4+ years managing a team of developers.
Passion for Technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.

Posted 2 weeks ago

Apply

4.0 - 5.0 years

7 - 10 Lacs

Pune, Chennai, Bengaluru

Hybrid

Senior Data Engineer
Shift Time: 1-11 pm / 3:30 pm-1:30 am
Start Date: Immediate
Location: Anywhere in India (flexible to WFO hybrid mode)
Salary: Up to 12 LPA

Job Description:
1. 4+ years of experience working in data warehousing systems
2. 3+ years of strong hands-on programming expertise in the Databricks landscape, including Spark SQL and Workflows, for data processing and pipeline development
3. 3+ years of strong hands-on data transformation/ETL skills using Spark SQL, PySpark, and Unity Catalog, working in the Databricks Medallion architecture
4. 2+ years of work experience in one of the cloud platforms: Azure, AWS, or GCP
5. Good exposure to Git version control and CI/CD best practices
6. Experience developing data ingestion pipelines from ERP systems (preferably Oracle Fusion) to a Databricks environment, using Fivetran or alternative data connectors, is a plus
7. Experience in a fast-paced, ever-changing, and growing environment
8. Understanding of metadata management, data lineage, and data glossaries is a plus

Responsibilities:
1. Take part in the design and development of enterprise data solutions in Databricks, from ideation to deployment, ensuring robustness and scalability.
2. Work with the Sr. Data Engineer to build and maintain robust and scalable data pipeline architectures on Databricks using PySpark and SQL.
3. Assemble and process large, complex ERP datasets to meet diverse functional and non-functional requirements.
4. Contribute to continuous optimization efforts, implementing testing and tooling techniques to enhance data solution quality.
5. Focus on improving performance, reliability, and maintainability of data pipelines.
6. Implement and maintain PySpark and Databricks SQL workflows for querying and analyzing large datasets.

Qualifications
• Bachelor's Degree in Computer Science, Engineering, Statistics, Finance, or equivalent experience
• Good communication skills

Please share the following details along with your most up-to-date resume to geeta.negi@compunnel.com if you are interested in the opportunity: Total Experience; Relevant Experience; Current CTC; Expected CTC; Notice Period (last working day if you are serving the notice period); Current Location; Skill 1 rating out of 5; Skill 2 rating out of 5; Skill 3 rating out of 5 (mention the skill).

Posted 2 weeks ago

Apply

2.0 - 5.0 years

18 - 21 Lacs

Bengaluru

Work from Office

Overview
Annalect is currently seeking a Senior Data Engineer to join our Technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, and data, and for fusing these disciplines together to build cool things. You will work on one or more software and data products in the Annalect Engineering Team, and participate in technical architecture, design, and development of software products, as well as research and evaluation of new technical solutions.

Responsibilities
Design, build, test, and deploy data transfers across various cloud environments (Azure, GCP, AWS, Snowflake, etc.).
Develop data pipelines, and monitor, maintain, and tune them.
Write at-scale data transformations in SQL and Python.
Perform code reviews and provide leadership and guidance to junior developers.

Qualifications
Curiosity about the business requirements that are driving the engineering requirements.
Interest in new technologies and eagerness to bring those technologies and out-of-the-box ideas to the team.
3+ years of SQL experience.
3+ years of professional Python experience.
3+ years of professional Linux experience.
Familiarity with Snowflake, AWS, GCP, and Azure cloud environments preferred.
Intellectual curiosity and drive; self-starters will thrive in this position.
Passion for Technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.

Additional Skills
BS, MS, or PhD in Computer Science, Engineering, or equivalent real-world experience.
Experience with big data and/or infrastructure; bonus for experience organizing petabytes of data so they can be easily accessed.
Understanding of data organization, i.e. partitioning, clustering, file sizes, file formats.
Experience working with classical relational databases (Postgres, MySQL, MSSQL).
Experience with Hadoop, Hive, Spark, Redshift, or other data processing tools (lots of time will be spent building and optimizing transformations).
Proven ability to independently execute projects from concept to implementation to launch, and to maintain a live product.

Perks of working at Annalect
We have an incredibly fun, collaborative, and friendly environment, and often host social and learning activities such as game night, speaker series, and so much more!
Halloween is a special day on our calendar since it is our Founding Day; we go all out with decorations, costumes, and prizes!
Generous vacation policy: paid time off (PTO) includes vacation days, personal days, and a Summer Friday program, with extended time off around the holiday season. Our office is closed between Christmas and New Year to encourage our hardworking employees to rest, recharge, and celebrate the season with family and friends.
As part of Omnicom, we have the backing and resources of a global billion-dollar company, but also the flexibility and pace of a "startup": we move fast, break things, and innovate.
Work with a modern stack and environment to keep learning, improving, experimenting with, and shaping the latest technologies.
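The "data organization" point in this listing (partitioning, file sizes, file formats) is easy to show concretely. A minimal, illustrative Python sketch of a Hive-style partitioned layout; the dataset, column names, and paths are invented, and real lakes would use Parquet rather than CSV:

```python
import csv
import os
import tempfile

# Hive-style partitioning: one directory per partition value
# (dt=YYYY-MM-DD), so engines can prune whole partitions when a
# query filters on dt instead of scanning every file.
rows = [
    {"dt": "2024-01-01", "user": "a", "clicks": 3},
    {"dt": "2024-01-01", "user": "b", "clicks": 1},
    {"dt": "2024-01-02", "user": "a", "clicks": 7},
]
root = tempfile.mkdtemp()

by_dt = {}
for r in rows:
    by_dt.setdefault(r["dt"], []).append(r)

for dt, part in by_dt.items():
    d = os.path.join(root, f"dt={dt}")
    os.makedirs(d, exist_ok=True)
    with open(os.path.join(d, "part-0000.csv"), "w", newline="") as f:
        w = csv.DictWriter(f, fieldnames=["dt", "user", "clicks"])
        w.writeheader()
        w.writerows(part)

print(sorted(os.listdir(root)))
```

File sizes matter for the same reason: many tiny files per partition cost an open/seek each, so writers typically compact toward a target part-file size.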

Posted 2 weeks ago

Apply

3.0 - 5.0 years

15 - 25 Lacs

Hyderabad

Work from Office

About the Role: We are seeking a highly skilled and passionate Data Engineer to join our growing team dedicated to building and supporting cutting-edge analytical solutions. In this role, you will play a critical part in designing, developing, and maintaining the data infrastructure and pipelines that power our optimization engines. You will work in close collaboration with our team of data scientists who specialize in mathematical optimization techniques. Your expertise in data engineering will be essential in ensuring seamless data flow, enabling the development and deployment of high-impact solutions across various areas of our business.

Responsibilities:
- Design, build, and maintain robust and scalable data pipelines to support the development and deployment of mathematical optimization models.
- Collaborate closely with data scientists to deeply understand the data requirements for optimization models, including data preprocessing and cleaning, feature engineering and transformation, and data validation and quality assurance.
- Develop and implement comprehensive data quality checks and monitoring systems to guarantee the accuracy and reliability of the data used in our optimization solutions.
- Optimize data storage and retrieval processes for highly efficient model training and execution.
- Work effectively with large-scale datasets, leveraging distributed computing frameworks when necessary to handle data volume and complexity.
- Contribute to the development and maintenance of thorough data documentation and metadata management processes.
- Stay up to date on the latest industry best practices and emerging technologies in data engineering, particularly in the areas of optimization and machine learning.

Qualifications:
- Education: Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related field is required. A Master's degree in a related field is a plus.
- Experience: 3+ years of demonstrable experience working as a data engineer, specifically focused on building and maintaining complex data pipelines, and a proven track record of successfully working with large-scale datasets, ideally in environments utilizing distributed systems.

Technical Skills – Essential:
- Programming: High proficiency in Python is essential. Experience with additional scripting languages (e.g., Bash) is beneficial.
- Databases: Extensive experience with SQL and relational database systems (PostgreSQL, MySQL, or similar). You should be very comfortable writing complex and efficient SQL queries, understanding performance optimization techniques for databases, and applying schema design principles.
- Data Pipelines: Solid understanding and practical experience in building and maintaining data pipelines using modern tools and frameworks. Experience with workflow management tools like Apache Airflow and data streaming systems like Apache Kafka is highly desirable.
- Cloud Platforms: Hands-on experience working with major cloud computing environments such as AWS, Azure, or GCP. You should have a strong understanding of cloud-based data storage solutions (Amazon S3, Azure Blob Storage, Google Cloud Storage), cloud compute services, and cloud-based data warehousing solutions (Amazon Redshift, Google BigQuery, Snowflake).

Technical Skills – Advantageous (Not Required, But Highly Beneficial):
- NoSQL Databases: Familiarity with NoSQL databases like MongoDB, Cassandra, and DynamoDB, along with an understanding of their common use cases.
- Containerization: Understanding of containerization technologies such as Docker and container orchestration platforms like Kubernetes.
- Infrastructure as Code (IaC): Experience using IaC tools such as Terraform or CloudFormation.
- Version Control: Proficiency with Git or similar version control systems.

Soft Skills:
- Communication: Excellent verbal and written communication skills. You'll need to effectively explain complex technical concepts to both technical and non-technical audiences.
- Collaboration: You'll collaborate closely with data scientists and other team members, so strong teamwork and interpersonal skills are essential.
- Problem-Solving: You should possess a strong ability to diagnose and solve complex technical problems related to data infrastructure and data pipelines.
- Adaptability: The data engineering landscape is constantly evolving. A successful candidate will be adaptable, eager to learn new technologies, and embrace change.

Additional Considerations:
- Industry Experience: While not a strict requirement, experience working in industries with a focus on optimization, logistics, supply chain management, or similar domains would be highly valuable.
- Machine Learning Operations (MLOps): Familiarity with MLOps concepts and tools is increasingly important for data engineers in machine learning-focused environments.
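The data-validation responsibilities described above can be sketched as a minimal, hypothetical quality check. The column names ("sku", "demand") and rules are illustrative only, not this team's actual schema:

```python
def validate_rows(rows):
    """Partition rows into (valid, rejected) and count per-rule failures.

    Rules: 'sku' must be non-empty and 'demand' a non-negative number --
    the kind of precondition an optimization model needs before solving.
    """
    failures = {"missing_sku": 0, "bad_demand": 0}
    valid, rejected = [], []
    for row in rows:
        ok = True
        if not row.get("sku"):
            failures["missing_sku"] += 1
            ok = False
        demand = row.get("demand")
        if not isinstance(demand, (int, float)) or demand < 0:
            failures["bad_demand"] += 1
            ok = False
        (valid if ok else rejected).append(row)
    return valid, rejected, failures

rows = [{"sku": "A1", "demand": 10}, {"sku": "", "demand": -5}]
valid, rejected, failures = validate_rows(rows)
```

In production these counts would typically feed a monitoring system rather than be returned inline.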

Posted 2 weeks ago

Apply

6.0 - 10.0 years

11 - 15 Lacs

Pune

Work from Office

About The Role – Senior AI Engineer
At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company where product innovation and mature software engineering are embedded in our core DNA. Our core values of Respect, Fairness, Growth, Agility, and Inclusiveness guide everything we do. We continually expand our expertise in digital strategy, design, architecture, and product management to offer measurable results and outside-the-box thinking.

About the Role: We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and implementation of robust and scalable pipelines and backend systems for our Generative AI applications. In this role, you will be responsible for orchestrating the flow of data, integrating AI services, developing RAG pipelines, working with LLMs, and ensuring the smooth operation of the backend infrastructure that powers our Generative AI solutions.

Responsibilities:
- Generative AI Pipeline Development: Design and implement efficient and scalable pipelines for data ingestion, processing, and transformation, tailored for Generative AI workloads. Orchestrate the flow of data between various AI services, databases, and backend systems. Build and maintain CI/CD pipelines for deploying and updating Generative AI services and pipelines.
- Data and Document Ingestion: Develop and manage systems for ingesting diverse data sources (text, images, code, etc.) used in Generative AI applications. Implement OCR and other preprocessing techniques to prepare data for use in Generative AI pipelines. Ensure data quality, consistency, and security throughout the ingestion process.
- AI Service Integration: Integrate and manage external AI services (e.g., cloud-based APIs for image generation, text generation, LLMs) into our Generative AI applications. Develop and maintain APIs for seamless communication between AI services and backend systems. Monitor and optimize the performance of integrated AI services within the Generative AI pipeline.
- Retrieval Augmented Generation (RAG) Pipelines: Design and implement RAG pipelines to enhance Generative AI capabilities with external knowledge sources. Develop and optimize data retrieval and indexing strategies for RAG systems. Evaluate and improve the accuracy and relevance of RAG-generated responses.
- Large Language Model (LLM) Integration: Develop and manage interactions with LLMs through APIs and SDKs within Generative AI pipelines. Implement prompt engineering strategies to optimize LLM performance for specific Generative AI tasks. Analyze and debug LLM outputs to ensure quality and consistency.
- Backend Services Ownership: Design, develop, and maintain backend services that support Generative AI applications. Ensure the scalability, reliability, and security of backend infrastructure for Generative AI workloads. Implement monitoring and logging systems for backend services and pipelines. Troubleshoot and resolve backend-related issues impacting Generative AI applications.

Required Skills and Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Experience: 5+ years of experience in AI/ML development with a focus on building and deploying AI pipelines and backend systems. Proven experience in designing and implementing data ingestion and processing pipelines. Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services.
- Technical Skills: Expertise in Python and relevant AI/ML libraries. Strong understanding of AI infrastructure and deployment strategies. Experience with data engineering and data processing techniques. Proficiency in software development principles and best practices. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes), version control (Git), RESTful APIs and API development, and vector databases and their application in AI/ML, particularly for similarity search and retrieval.
- Generative AI Specific Skills: Familiarity with Generative AI concepts and techniques (e.g., GANs, Diffusion Models, VAEs, LLMs). Experience with integrating and managing Generative AI services. Understanding of RAG pipelines and their application in Generative AI. Experience with prompt engineering for LLMs.
- Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Ability to work in a fast-paced environment.

Preferred Qualifications:
- Experience with OCR and document processing technologies.
- Experience with MLOps practices for Generative AI.
- Contributions to open-source AI projects.
- Strong experience with vector databases and their optimization for Generative AI applications.

Experience: 5+ years
Shift Time: 2:30 PM to 11:30 PM
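As a rough, hypothetical sketch of the RAG pattern this role describes: retrieve the most relevant documents for a query, then splice them into the prompt sent to an LLM. A toy bag-of-words overlap stands in here for a real vector database and embedding model; all names are illustrative:

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Overlap of query words with document words (toy retrieval stand-in)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(q[w], d[w]) for w in q)

def build_prompt(query: str, corpus: list, k: int = 1) -> str:
    """Retrieve the top-k documents and splice them into an LLM prompt."""
    ranked = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = ["Snowflake stores data in micro-partitions.",
          "Airflow schedules DAGs of tasks."]
prompt = build_prompt("how does airflow schedule tasks", corpus)
```

A production pipeline would replace `score` with embedding similarity against an index, then pass `prompt` to the LLM API.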

Posted 2 weeks ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Bengaluru

Hybrid

Client: Zeta Global | Full-time | Job Location: Bangalore | Experience Required: 8+ years | Mode of Work: Hybrid (3 days from the office, 2 days from home) | Job Title: Data Engineer

As a Senior Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in the on-call rotation in your time zone (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 8 years of software engineering experience.
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired.
- 2+ years of experience/fluency in Python
- Proficiency with relational databases and advanced SQL
- Expertise in the use of services like Spark and Hive; experience working with container-based solutions is a plus.
- Experience using a scheduler such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Proven long-term experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle from inception to production and monitoring.
- Experience in the advertising attribution domain is a plus
- Experience with agile software development processes
- Excellent interpersonal and communication skills.

Please fill in all the essential details given below, attach your updated resume, and send it to ralish.sharma@compunnel.com
1. Total Experience:
2. Relevant Experience in Data Engineering:
3. Experience in Python:
4. Experience in Spark/Airflow/Snowflake/Hive:
5. Experience in FastAPI:
6. Experience in ETL:
7. Experience in SQL:
8. Experience in Apache:
9. Experience in AWS:
10. Current Company:
11. Current Designation:
12. Highest Education:
13. Notice Period:
14. Current CTC:
15. Expected CTC:
16. Current Location:
17. Preferred Location:
18. Hometown:
19. Contact No:
20. If you have an offer from another company, please mention the offer amount and offer location:
21. Reason for looking for a change:
22. PAN Card:

If the job description is suitable for you, please get in touch with me at 9910044363.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

11 - 15 Lacs

Pune

Work from Office

Job Description: We, at Jet2 (the UK's third largest airline and largest tour operator), have set up a state-of-the-art Technology and Innovation Centre in Pune, India. The Lead Visualisation Developer will join our growing Data Visualisation team to deliver impactful data visualisation projects (using Tableau) whilst leading the Jet2TT visualisation function. The team currently works with a range of departments including Pricing & Revenue, Overseas Operations, and Contact Centre. This new role provides a fantastic opportunity to represent visualisation and influence key business decisions. As part of the wider Data function, you will work alongside Data Engineers, Data Scientists, and Business Analysts to understand and gather requirements. In the role, you will scope visualisation projects to deliver yourself or delegate to members of the team, ensuring they have everything they need to start development whilst guiding them through visualisation delivery. You will also support our visualisation Enablement team with the release of new Tableau features.

Roles and Responsibilities – what you'll be doing:
- Work independently on data visualisation projects with zero or minimal guidance; the role is based in Pune and collaborates with stakeholders in Pune, Leeds, and Sheffield.
- Represent visualisation during project scoping.
- Work with Business Analysts and Product Owners to understand and scope requirements.
- Work with Data Engineers and Architects to ensure data models are fit for visualisation.
- Develop Tableau dashboards from start to finish using Tableau Desktop / Cloud – from gathering requirements, to designing dashboards, to presenting to internal stakeholders.
- Present visualisations to stakeholders.
- Support and guide members of the team through visualisation delivery.
- Support feature releases for Tableau.
- Teach colleagues about new Tableau features and visualisation best practices.

What you'll have:
- Extensive experience in the use of Tableau, evidenced by a strong Tableau Public portfolio.
- Expertise in the delivery of data visualisation.
- Experience in requirements gathering and presenting visualisations to internal stakeholders.
- Strong understanding of data visualisation best practices.
- Experience of working in an Agile Scrum framework to deliver high-quality solutions.
- Strong communication skills – written and verbal.
- Knowledge of the delivery of Data Engineering and Data Warehousing on Cloud Platforms.
- Knowledge of or exposure to Cloud Data Warehouse platforms (Snowflake preferred).
- Knowledge and experience of working with a variety of databases (e.g., SQL).

Posted 2 weeks ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Hyderabad

Hybrid

We are looking for a highly skilled Full Stack Developer with expertise in .NET Core and React.js to design, develop, and deploy robust, scalable, and cloud-native applications. The ideal candidate will have a strong understanding of backend and frontend technologies, experience with Microsoft Azure, and a passion for building high-quality software in a collaborative environment. Key Responsibilities: Design, develop, and maintain scalable web applications using .NET Core (backend) and React.js (frontend). Build and integrate RESTful APIs, services, and microservices. Develop and deploy cloud-native applications leveraging Microsoft Azure services such as Azure Functions, App Services, Azure DevOps, and Blob Storage. Collaborate with cross-functional teams including UI/UX designers, product managers, and fellow developers to deliver efficient, user-friendly solutions. Write clean, maintainable, and testable code adhering to industry best practices. Conduct code reviews, enforce coding standards, and mentor junior developers. Ensure application performance, reliability, scalability, and security. Actively participate in Agile/Scrum ceremonies and contribute to team discussions and continuous improvement. Required Skills: Strong experience with .NET Core / ASP.NET Core (Web API, MVC). Proficiency in React.js, JavaScript/TypeScript, HTML5, and CSS3. Solid experience with Microsoft Azure services (e.g., App Services, Azure Functions, Key Vault, Azure DevOps). Hands-on experience with Entity Framework Core, LINQ, and SQL Server. Familiarity with Git, CI/CD pipelines, and modern DevOps practices. Strong understanding of software design patterns, SOLID principles, and clean code methodologies. Basic knowledge of containerization tools like Docker. Nice to Have: Experience with Azure Kubernetes Service (AKS) or Azure Logic Apps. Familiarity with unit testing frameworks (xUnit, NUnit). Exposure to Agile/Scrum methodologies and tools like Jira or Azure Boards.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad

Work from Office

We are seeking a highly experienced and self-driven Senior Data Engineer to design, build, and optimize modern data pipelines and infrastructure. This role requires deep expertise in Snowflake, DBT, Python, and cloud data ecosystems. You will play a critical role in enabling data-driven decision-making across the organization by ensuring the availability, quality, and integrity of data.

Key Responsibilities:
- Design and implement robust, scalable, and efficient data pipelines using ETL/ELT frameworks.
- Develop and manage data models and data warehouse architecture within Snowflake.
- Create and maintain DBT models for transformation, lineage tracking, and documentation.
- Write modular, reusable, and optimized Python scripts for data ingestion, transformation, and automation.
- Collaborate closely with data analysts, data scientists, and business teams to gather and fulfill data requirements.
- Ensure data integrity, consistency, and governance across all stages of the data lifecycle.
- Monitor pipeline performance and implement optimization strategies for queries and storage.
- Follow best practices for data engineering including version control (Git), testing, and CI/CD integration.

Required Skills and Qualifications:
- 8+ years of experience in Data Engineering or related roles.
- Deep expertise in Snowflake: schema design, performance tuning, security, and access controls.
- Proficiency in Python, particularly for scripting, data transformation, and workflow automation.
- Strong understanding of data modeling techniques (e.g., star/snowflake schema, normalization).
- Proven experience with DBT for building modular, tested, and documented data pipelines.
- Familiarity with ETL/ELT tools and orchestration platforms like Apache Airflow or Prefect.
- Advanced SQL skills with experience handling large and complex data sets.
- Exposure to cloud platforms such as AWS, Azure, or GCP and their data services.

Preferred Qualifications:
- Experience implementing data quality checks and governance frameworks.
- Understanding of the modern data stack and CI/CD pipelines for data workflows.
- Contributions to data engineering best practices, open-source projects, or thought leadership.
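As a toy illustration of the star-schema modeling this posting asks for: denormalized rows are split into a dimension table (one row per customer, keyed by a surrogate key) and a fact table that references it. Column names are hypothetical; in practice this would be done in SQL/DBT inside the warehouse:

```python
def to_star(orders):
    """Split denormalized order rows into a customer dimension and a fact table.

    Each distinct customer gets a surrogate key; fact rows reference it
    instead of repeating the customer attributes.
    """
    dim_customer, facts = {}, []
    for o in orders:
        key = dim_customer.setdefault(o["customer"], len(dim_customer) + 1)
        facts.append({"customer_key": key, "amount": o["amount"]})
    return dim_customer, facts

orders = [{"customer": "acme", "amount": 10},
          {"customer": "acme", "amount": 5},
          {"customer": "globex", "amount": 7}]
dims, facts = to_star(orders)
```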

Posted 2 weeks ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Lucknow

Work from Office

Key Responsibilities : Design, develop, and maintain data pipelines to support business intelligence and analytics. Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement. Develop and optimize data models for reporting and analytics. Work with Tableau (advanced level) to create insightful dashboards and visualizations. Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation. Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions. Ensure data integrity, security, and compliance with best practices. Troubleshoot and optimize data workflows for performance improvement. Required Skills & Qualifications : 5+ years of experience as a Data Engineer. Advanced proficiency in SSIS, Tableau, and SQL. Strong understanding of ETL processes and data pipeline development. Experience with data modeling for analytical and reporting solutions. Hands-on experience working in Agile development environments. Excellent problem-solving and troubleshooting skills. Ability to work independently in a remote setup. Strong communication and collaboration skills.
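As a generic illustration of the extract-transform-load flow this role centers on, here is a hypothetical in-memory sketch (the posting's actual stack is SSIS and SQL; all names here are invented for illustration):

```python
def extract(source):
    """Extract: read raw rows (an in-memory list stands in for a source table)."""
    return list(source)

def transform(rows):
    """Transform: drop rows with missing sales and normalize the name column."""
    return [{"name": r["name"].strip().title(), "sales": r["sales"]}
            for r in rows if r.get("sales") is not None]

def load(rows, target):
    """Load: append transformed rows to the target store; return row count."""
    target.extend(rows)
    return len(rows)

source = [{"name": " alice ", "sales": 100}, {"name": "bob", "sales": None}]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
```

An SSIS package expresses the same three stages as a source component, transformation components, and a destination.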

Posted 2 weeks ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Key Responsibilities : Design, develop, and maintain data pipelines to support business intelligence and analytics. Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement. Develop and optimize data models for reporting and analytics. Work with Tableau (advanced level) to create insightful dashboards and visualizations. Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation. Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions. Ensure data integrity, security, and compliance with best practices. Troubleshoot and optimize data workflows for performance improvement. Required Skills & Qualifications : 5+ years of experience as a Data Engineer. Advanced proficiency in SSIS, Tableau, and SQL. Strong understanding of ETL processes and data pipeline development. Experience with data modeling for analytical and reporting solutions. Hands-on experience working in Agile development environments. Excellent problem-solving and troubleshooting skills. Ability to work independently in a remote setup. Strong communication and collaboration skills.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Surat

Work from Office

Key Responsibilities : Design, develop, and maintain data pipelines to support business intelligence and analytics. Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement. Develop and optimize data models for reporting and analytics. Work with Tableau (advanced level) to create insightful dashboards and visualizations. Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation. Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions. Ensure data integrity, security, and compliance with best practices. Troubleshoot and optimize data workflows for performance improvement. Required Skills & Qualifications : 5+ years of experience as a Data Engineer. Advanced proficiency in SSIS, Tableau, and SQL. Strong understanding of ETL processes and data pipeline development. Experience with data modeling for analytical and reporting solutions. Hands-on experience working in Agile development environments. Excellent problem-solving and troubleshooting skills. Ability to work independently in a remote setup. Strong communication and collaboration skills.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Chennai

Work from Office

Key Responsibilities : Design, develop, and maintain data pipelines to support business intelligence and analytics. Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement. Develop and optimize data models for reporting and analytics. Work with Tableau (advanced level) to create insightful dashboards and visualizations. Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation. Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions. Ensure data integrity, security, and compliance with best practices. Troubleshoot and optimize data workflows for performance improvement. Required Skills & Qualifications : 5+ years of experience as a Data Engineer. Advanced proficiency in SSIS, Tableau, and SQL. Strong understanding of ETL processes and data pipeline development. Experience with data modeling for analytical and reporting solutions. Hands-on experience working in Agile development environments. Excellent problem-solving and troubleshooting skills. Ability to work independently in a remote setup. Strong communication and collaboration skills.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Hyderabad

Work from Office

Job Mode Onsite/Work from Office | Monday to Friday | Shift 1 (Morning). Overview : We are seeking an experienced Team Lead to oversee our data engineering and analytics team consisting of data engineers, ML engineers, reporting engineers, and data/business analysts. The ideal candidate will drive end-to-end data solutions from data lake and data warehouse implementations to advanced analytics and AI/ML projects, ensuring timely delivery and quality standards. Key Responsibilities : - Lead and mentor a cross-functional team of data professionals including data engineers, ML engineers, reporting engineers, and data/business analysts. - Manage the complete lifecycle of data projects from requirements gathering to implementation and maintenance. - Develop detailed project estimates and allocate work effectively among team members based on skills and capacity. - Implement and maintain data architectures including data lakes, data warehouses, and lakehouse solutions. - Review team deliverables for quality, adherence to best practices, and performance optimization. - Hold team members accountable for timelines and quality standards through regular check-ins and performance tracking. - Translate business requirements into technical specifications and actionable tasks. - Collaborate with clients and internal stakeholders to understand business needs and define solution approaches. - Ensure proper documentation of processes, architectures, and code. Technical Requirements : - Strong understanding of data engineering fundamentals including ETL/ELT processes, data modeling, and pipeline development. - Proficiency in SQL and data warehousing concepts including dimensional modeling and optimization techniques. - Experience with big data technologies and distributed computing frameworks. - Hands-on experience with at least one major cloud provider (AWS, GCP, or Azure) and their respective data services. - Knowledge of on-premises data infrastructure setup and maintenance. 
- Understanding of data governance, security, and compliance requirements. - Familiarity with AI/ML workflows and deployment patterns. - Experience with BI and reporting tools for data visualization and insights delivery. Management Skills : - Proven experience leading technical teams of 4+ members. - Strong project estimation and resource allocation capabilities. - Excellent code and design review skills. - Ability to manage competing priorities and deliver projects on schedule. - Effective communication skills to bridge technical concepts with business objectives. - Problem-solving mindset with the ability to remove blockers for the team. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field; - 5+ years of experience in data engineering or related roles. - 2+ years of team leadership or management experience. - Demonstrated success in delivering complex data projects. - Certification in relevant cloud platforms or data technologies is a plus. What We Offer : - Opportunity to lead cutting-edge data projects for diverse clients. - Professional development and technical growth path. - A collaborative work environment that values innovation. - Competitive salary and benefits package.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Kolkata

Work from Office

About the job : - As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. - You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. - This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What You'll Do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solutions on the Databricks platform. - Collaborate with architects and other teams to migrate current solution to use Databricks. - Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements. - Use best practices for data governance, security, and compliance on the Databricks platform. - Mentor junior engineers and provide technical guidance. - Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement. You'll Be Expected To Have : - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform. - Proficiency in programming languages such as Python, Scala, or SQL. - Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark. - Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services. - Proven track record of delivering scalable and reliable data solutions in a fast-paced environment. - Excellent problem-solving skills and attention to detail. 
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams. - Good to have experience with containerization technologies such as Docker and Kubernetes. - Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
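As a minimal illustration of the distributed-computing principle behind Spark that this role requires: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce aggregates each group. This pure-Python word count is only a sketch of the idea; a real Databricks job would use the PySpark API:

```python
from collections import defaultdict
from itertools import chain

def map_phase(lines):
    """Map: emit (word, 1) pairs, like Spark's flatMap followed by map."""
    return chain.from_iterable(((w, 1) for w in line.split()) for line in lines)

def shuffle_reduce(pairs):
    """Shuffle pairs by key and reduce with addition, like reduceByKey."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["spark makes big data simple", "big data big wins"]
counts = shuffle_reduce(map_phase(lines))
```

On a cluster, the map runs partition-by-partition across workers and the shuffle moves pairs so all values for one key land on the same worker.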

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies