3.0 - 5.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Excellent knowledge of data pipeline development, data consumption solutions, and maintenance
Excellent knowledge of Spark (PySpark) and SQL
Good to have: knowledge of IT infrastructure
Data modelling and analysis of raw data
Integration with multiple platforms
Developing data warehouse and data lake solutions and maintaining datasets
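To make the day-to-day concrete: a pipeline like the one this posting describes often starts as a small PySpark batch job. The sketch below is illustrative only; the bucket, columns, and aggregation are invented, not taken from the employer.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_sales_etl").getOrCreate()

# Ingest raw CSV files landed in the data lake (path and schema are invented).
raw = spark.read.option("header", True).csv("s3://example-lake/raw/sales/")

# Basic refinement: deduplicate, cast types, drop unusable rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

# Build a consumption-ready dataset for the warehouse layer.
daily = clean.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))

# Persist as partitioned Parquet for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/curated/daily_sales/"
)
```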
Posted 14 hours ago
4.0 - 9.0 years
10 - 18 Lacs
Noida
Work from Office
Precognitas Health Pvt. Ltd., a fully owned subsidiary of Foresight Health Solutions LLC, is seeking a Data Engineer to build and optimize our data pipelines, processing frameworks, and analytics infrastructure that power critical healthcare insights. Are you a bright, energetic, and skilled data engineer who wants to make a meaningful impact in a dynamic environment? Do you enjoy designing and implementing scalable data architectures, ML pipelines, automated ETL workflows, and cloud-native solutions that process large datasets efficiently? Are you passionate about transforming raw data into actionable insights that drive better healthcare outcomes? If so, join us! You'll play a crucial role in shaping our data strategy, optimizing data ingestion, and ensuring seamless data flow across our systems while leveraging the latest cloud and big data technologies. Required Skills & Experience: 4+ years of experience in data engineering, data pipelines, and ETL/ELT workflows. Strong Python programming skills, with expertise in NumPy, Pandas, and data manipulation techniques. Hands-on experience with orchestration tools like Prefect, Apache Airflow, or AWS Step Functions for managing complex workflows. Proficiency in AWS services, including AWS Glue, AWS Batch, S3, Lambda, RDS, Athena, and Redshift. Experience with Docker containerization and Kubernetes for scalable and efficient data processing. Strong understanding of data processing layers, batch and streaming data architectures, and analytics frameworks. Expertise in SQL and NoSQL databases, query optimization, and data modeling for structured and unstructured data. Familiarity with big data technologies like Apache Spark, Hadoop, or similar frameworks. Experience implementing data validation, quality checks, and observability for robust data pipelines. Strong knowledge of Infrastructure as Code (IaC) using Terraform or AWS CDK for managing cloud-based data infrastructure. Ability to work with distributed systems, event-driven architectures (Kafka, Kinesis), and scalable data storage solutions. Experience with CI/CD for data workflows, including version control (Git), automated testing, and deployment pipelines. Knowledge of data security, encryption, and access control best practices in cloud environments. Strong problem-solving skills and the ability to collaborate with cross-functional teams, including data scientists and software engineers. Compensation will be commensurate with experience. If you are interested, please send your application to jobs@precognitas.com. For more information about our work, visit www.caliper.care
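Since the posting highlights orchestration with Prefect, Airflow, or Step Functions, here is a minimal Airflow 2.x-style sketch of an ETL DAG. The DAG id, schedule, and task bodies are placeholders, not Precognitas code.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull source data (e.g., from an API or S3) -- placeholder body.
    ...

def transform():
    # Clean and reshape the extracted data -- placeholder body.
    ...

def load():
    # Write results to the warehouse (e.g., Redshift) -- placeholder body.
    ...

with DAG(
    dag_id="healthcare_metrics_etl",  # invented name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Declare the dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```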
Posted 15 hours ago
5.0 - 7.0 years
15 - 25 Lacs
Pune, Bengaluru
Hybrid
Job Role & Responsibilities: Responsible for architecting, building, and deploying data systems, pipelines, etc. Responsible for designing and implementing agile, scalable, and cost-efficient solutions on cloud data services. Responsible for design, implementation, development, and migration. Migrate data from traditional database systems to the cloud environment. Architect and implement ETL and data movement solutions. Technical Skills, Qualification & Experience Required: 5-7 years of experience in Data Engineering: Azure Cloud Data Engineering, Azure Databricks, Data Factory, PySpark, SQL, Python. Hands-on experience in Azure Databricks, Data Factory, PySpark, SQL. Proficient in cloud services (Azure). Strong hands-on experience working with streaming datasets. Hands-on expertise in data refinement using PySpark and Spark SQL. Familiarity with building datasets using Scala. Familiarity with tools such as Jira and GitHub. Experience leading agile scrum, sprint planning, and review sessions. Good communication and interpersonal skills. Comfortable working in a multidisciplinary team within a fast-paced environment. Immediate joiners will be preferred.
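The streaming-dataset requirement usually translates to Spark Structured Streaming on Databricks. Below is a minimal sketch, assuming a Kafka source, a Delta sink, and the relevant connector libraries on the cluster; topic, servers, and paths are invented.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream_refinement").getOrCreate()

# Read a stream of events from Kafka (connection details are placeholders).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "orders")
         .load()
)

# Kafka delivers key/value as binary; refine the payload with Spark SQL functions.
refined = events.select(F.col("value").cast("string").alias("payload"))

# Continuously append to a Delta table, tracking progress via a checkpoint.
query = (
    refined.writeStream.format("delta")
           .option("checkpointLocation", "/mnt/checkpoints/orders")
           .start("/mnt/delta/orders_refined")
)
query.awaitTermination()
```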
Posted 16 hours ago
7.0 - 9.0 years
27 - 30 Lacs
Bengaluru
Work from Office
We are seeking experienced Data Engineers with over 7 years of experience to join our team at Intuit. The selected candidates will be responsible for developing and maintaining scalable data pipelines, managing data warehousing solutions, and working with advanced cloud environments. The role requires strong technical proficiency and the ability to work onsite in Bangalore. Key Responsibilities: Design, build, and maintain data pipelines to ingest, process, and analyze large datasets using PySpark. Work on Data Warehouse and Data Lake solutions to manage structured and unstructured data. Develop and optimize complex SQL queries for data extraction and reporting. Leverage AWS cloud services such as S3, EC2, EMR, Athena, and Redshift for data storage, processing, and analytics. Collaborate with cross-functional teams to ensure the successful delivery of data solutions that meet business needs. Monitor data pipelines and troubleshoot any issues related to data integrity or system performance. Required Skills: 7+ years of experience in data engineering or related fields. In-depth knowledge of Data Warehouses and Data Lakes. Proven experience in building data pipelines using PySpark. Strong expertise in SQL for data manipulation and extraction. Familiarity with AWS cloud services, including S3, EC2, EMR, Athena, Redshift, and other cloud computing platforms. Preferred Skills: Python programming experience is a plus. Experience working in Agile environments with tools like JIRA and GitHub.
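As one concrete example of the AWS analytics work described here, a pipeline can run SQL against S3-backed tables through Athena's API. The sketch below uses boto3; the database, table, region, and output bucket are invented.

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Kick off a query against data catalogued over S3 (names are illustrative).
run = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) FROM sales GROUP BY order_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = run["QueryExecutionId"]

# Athena is asynchronous: poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```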
Posted 20 hours ago
6.0 - 10.0 years
30 - 35 Lacs
Kochi, Hyderabad, Coimbatore
Work from Office
1. The resource should have knowledge of Data Warehouse and Data Lake concepts
2. Should be aware of building data pipelines using PySpark
3. Should have strong SQL skills
4. Should have exposure to the AWS environment and services like S3, EC2, EMR, Athena, Redshift, etc.
5. Good to have: programming skills in Python
Posted 1 day ago
6.0 - 8.0 years
37 - 40 Lacs
Kochi, Hyderabad, Coimbatore
Work from Office
Key Responsibilities: Design and implement scalable Snowflake data warehouse solutions for structured and semi-structured data. Develop ETL/ELT pipelines using Informatica IICS, dbt, Matillion, Talend, Airflow, or equivalent tools. Optimize query performance and implement best practices for cost and efficiency. Work with cloud platforms (AWS, Azure, GCP) for data integration and storage. Implement role-based access control (RBAC), security policies, and encryption within Snowflake. Perform data modeling (Star Schema, Snowflake Schema, Data Vault) and warehouse design. Collaborate with data engineers, analysts, and business teams to ensure data consistency and availability. Automate Snowflake object creation, pipeline scheduling, and monitoring. Migrate existing on-premise databases (Oracle, SQL Server, Teradata, Redshift, etc.) to Snowflake. Implement data governance, quality checks, and observability frameworks. Required Skills & Qualifications: 6+ years of experience in data engineering/warehousing, with at least 2 years in Snowflake. Strong expertise in Snowflake features such as Virtual Warehouses, Streams, Tasks, Time Travel, and Cloning. Experience in SQL performance tuning, query optimization, and stored procedures (JavaScript UDFs/UDAFs). Hands-on experience with ETL/ELT tools like Informatica, dbt, Matillion, Talend, Airflow, or AWS Glue. Experience with Python, PySpark, or Scala for data processing. Knowledge of CI/CD pipelines, Git, Terraform, or Infrastructure as Code (IaC). Experience with semi-structured data (JSON, Parquet, Avro) and handling ingestion from APIs. Strong understanding of cloud platforms (AWS S3, Azure Data Lake, GCP BigQuery) and data lake architectures. Familiarity with BI/Analytics tools like Tableau, Power BI, Looker, or ThoughtSpot. Strong problem-solving skills and experience working in Agile/Scrum environments.
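Two of the Snowflake features named above, Streams and Time Travel, can be exercised from Python via the Snowflake connector. A minimal sketch; the account, credentials, and table names are placeholders.

```python
import snowflake.connector

# Connection parameters are placeholders, not real credentials.
conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# A stream captures row-level changes on a table for incremental ELT.
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE orders")

# Time Travel: query the table as it looked five minutes ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -60 * 5)")
print(cur.fetchone())

# Consume only newly inserted rows from the stream into a curated table.
cur.execute("""
    INSERT INTO orders_curated
    SELECT order_id, order_date, amount
    FROM orders_stream
    WHERE METADATA$ACTION = 'INSERT'
""")
conn.close()
```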
Posted 1 day ago
10.0 - 15.0 years
40 - 50 Lacs
Hyderabad
Hybrid
Envoy Global is a proven innovator in the global immigration space. Our mission combines our industry-leading tech platform with holistic service to streamline, simplify, and expedite the immigration process for employers and individuals. We are seeking a highly skilled Team Lead or Manager, Data Engineering within Envoy Global's tech team to join us on a full-time, permanent basis. This role is responsible for the end-to-end design, development, and documentation of data pipelines and ETL (Extract, Transform, Load) processes. It focuses on enabling data migration, integration, and warehousing, encompassing the creation of ETL jobs, reports, dashboards, and data pipelines. As our Senior Data Engineering Lead or Manager, you will be required to: Lead and mentor a small team of data engineers, fostering a collaborative and innovative environment. Design, develop, and document robust data pipelines and ETL jobs. Engage in data modeling activities to ensure efficient and effective data structures. Ensure the seamless integration of data across various platforms and systems. Lead all aspects of the design, implementation, and maintenance of data engineering pipelines in our Azure environment, including integration with a variety of data sources. Collaborate with Data Analytics and DataOps teams and other partners in Architecture, Engineering, and DevOps teams to deliver high-quality data platforms that enable analytics solutions for the business. Ensure data engineering standards are in line with established principles of data governance, data quality, and data security. Monitor and optimize the performance of data pipelines, ensuring they meet SLAs in terms of data availability and quality. Hire, manage, and mentor a team of Data Engineers and Data Quality Engineers. Communicate clearly and effectively with stakeholders. To apply for this role, you should possess the following skills, experience, and qualifications: Proven experience in data engineering, with a strong background in designing and developing ETL processes. Excellent collaboration skills, with the ability to work effectively with cross-functional teams. Leadership experience, with a track record of managing and mentoring a team of data engineers. 8+ years of experience as a Data Engineer, with 3+ years of experience in a managerial role. Technical experience in one or more cloud-based data warehouse/data lake platforms such as AWS, Snowflake, or Azure Synapse. ETL experience using SSIS, ADF, or another equivalent tool. Knowledgeable in data modeling and data warehouse concepts. Demonstrated ability to write SQL/T-SQL queries to retrieve and modify data. Know-how to troubleshoot potential issues, and experience with best practices around database operations. Ability to work in an Agile environment. Should you have a deep passion for technology and a desire to thrive in a rapidly evolving and creative environment, we would be delighted to receive your application.
Posted 1 day ago
3.0 - 8.0 years
0 - 3 Lacs
Bengaluru
Remote
If you are passionate about Snowflake, data warehousing, and cloud-based analytics, we'd love to hear from you! Apply now to be a part of our growing team. Perks and benefits: Interested candidates can use the link below to apply directly and complete the first round of technical discussion: https://app.hyrgpt.com/candidate-job-details?jobId=67ecc88dda1154001cc8b88f Job Summary: We are looking for a skilled Snowflake Engineer with 3-10 years of experience in designing and implementing cloud-based data warehousing solutions. The ideal candidate will have hands-on expertise in Snowflake architecture, SQL, ETL pipeline development, and performance optimization. This role requires proficiency in handling structured and semi-structured data, data modeling, and query optimization to support business intelligence and analytics initiatives. The ideal candidate will work on a project for one of our key Big4 consulting customers and will have immense learning opportunities. Key Responsibilities: Design, develop, and manage high-performance data pipelines for ingestion, transformation, and storage in Snowflake. Optimize Snowflake workloads, ensuring efficient query execution and cost management. Develop and maintain ETL processes using SQL, Python, and orchestration tools. Implement data governance, security, and access control best practices within Snowflake. Work with structured and semi-structured data formats such as JSON, Parquet, Avro, and XML. Design and maintain fact and dimension tables, ensuring efficient data warehousing and reporting. Collaborate with data analysts and business teams to support reporting, analytics, and business intelligence needs. Troubleshoot and resolve data pipeline issues, ensuring high availability and reliability. Monitor and optimize Snowflake storage and compute usage to improve efficiency and performance. Required Skills & Qualifications: 3-10 years of experience in Snowflake, SQL, and data engineering. Strong hands-on expertise in Snowflake development, including data sharing, cloning, and time travel. Proficiency in SQL scripting for query optimization and performance tuning. Experience with ETL tools and frameworks (e.g., dbt, Airflow, Matillion, Talend). Familiarity with cloud platforms (AWS, Azure, or GCP) and their integration with Snowflake. Strong understanding of data warehousing concepts, including fact and dimension modeling. Ability to work with semi-structured data formats like JSON, Avro, Parquet, and XML. Knowledge of data security, governance, and access control within Snowflake. Excellent problem-solving and troubleshooting skills. Preferred Qualifications: Experience in Python for data engineering tasks. Familiarity with CI/CD pipelines for Snowflake development and deployment. Exposure to streaming data ingestion and real-time processing. Experience with BI tools such as Tableau, Looker, or Power BI.
Posted 1 day ago
2.0 - 6.0 years
5 - 8 Lacs
Pune
Work from Office
Supports, develops, and maintains a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with Business and IT teams to understand requirements and best leverage technologies to enable agile data delivery at scale. Note: Although the role category in the GPP is listed as Remote, the requirement is for a Hybrid work model. Key Responsibilities: Oversee the development and deployment of end-to-end data ingestion pipelines using Azure Databricks, Apache Spark, and related technologies. Design high-performance, resilient, and scalable data architectures for data ingestion and processing. Provide technical guidance and mentorship to a team of data engineers. Collaborate with data scientists, business analysts, and stakeholders to integrate various data sources into the data lake/warehouse. Optimize data pipelines for speed, reliability, and cost efficiency in an Azure environment. Enforce and advocate for best practices in coding standards, version control, testing, and documentation. Work with Azure services such as Azure Data Lake Storage, Azure SQL Data Warehouse, Azure Synapse Analytics, and Azure Blob Storage. Implement data validation and data quality checks to ensure consistency, accuracy, and integrity. Identify and resolve complex technical issues proactively. Develop reliable, efficient, and scalable data pipelines with monitoring and alert mechanisms. Use agile development methodologies, including DevOps, Scrum, and Kanban. External Qualifications and Competencies Technical Skills: Expertise in Spark, including optimization, debugging, and troubleshooting. Proficiency in Azure Databricks for distributed data processing. Strong coding skills in Python and Scala for data processing. Experience with SQL for handling large datasets. Knowledge of data formats such as Iceberg, Parquet, ORC, and Delta Lake. Understanding of cloud infrastructure and architecture principles, especially within Azure. Leadership & Soft Skills: Proven ability to lead and mentor a team of data engineers. Excellent communication and interpersonal skills. Strong organizational skills with the ability to manage multiple tasks and priorities. Ability to work in a fast-paced, constantly evolving environment. Strong problem-solving, analytical, and troubleshooting abilities. Ability to collaborate effectively with cross-functional teams. Competencies: System Requirements Engineering: Uses appropriate methods to translate stakeholder needs into verifiable requirements. Collaborates: Builds partnerships and works collaboratively to meet shared objectives. Communicates Effectively: Delivers clear, multi-mode communications tailored to different audiences. Customer Focus: Builds strong customer relationships and delivers customer-centric solutions. Decision Quality: Makes good and timely decisions to keep the organization moving forward. Data Extraction: Performs ETL activities and transforms data for consumption by downstream applications. Programming: Writes and tests computer code, version control, and build automation. Quality Assurance Metrics: Uses measurement science to assess solution effectiveness. Solution Documentation: Documents information for improved productivity and knowledge transfer. Solution Validation Testing: Ensures solutions meet design and customer requirements. Data Quality: Identifies, understands, and corrects data flaws. Problem Solving: Uses systematic analysis to address and resolve issues.
Values Differences: Recognizes the value that diverse perspectives bring to an organization. Preferred Knowledge & Experience: Exposure to Big Data open-source technologies (Spark, Scala/Java, MapReduce, Hive, HBase, Kafka, etc.). Experience with SQL and working with large datasets. Clustered compute cloud-based implementation experience. Familiarity with developing applications requiring large file movement in a cloud-based environment. Exposure to Agile software development and analytical solutions. Exposure to IoT technology. Additional Responsibilities Unique to this Position: Qualifications: Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field. Experience: 3 to 5 years of experience in data engineering or a related field. Strong hands-on experience with Azure Databricks, Apache Spark, Python/Scala, CI/CD, Snowflake, and Qlik for data processing. Experience working with multiple file formats like Parquet, Delta, and Iceberg. Knowledge of Kafka or similar streaming technologies. Experience with data governance and data security in Azure. Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments. Deep understanding of Azure Data Services. Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies. Familiarity with data lakes, data warehouses, and modern data architectures. Experience with Qlik Replicate (optional).
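One way to picture the data validation and quality checks this role calls for is a small PySpark check suite run after each load. This is a sketch under invented rules and paths, assuming a Delta-enabled Spark session (e.g., Databricks), not the team's actual framework.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

# Illustrative path; the table and columns are invented.
df = spark.read.format("delta").load("/mnt/lake/curated/orders")

total = df.count()

# Rule 1: the primary key must be unique.
dup_keys = total - df.select("order_id").distinct().count()

# Rule 2: critical columns must not be null.
null_amounts = df.filter(F.col("amount").isNull()).count()

# Rule 3: values must fall inside a sane range (threshold is illustrative).
bad_range = df.filter((F.col("amount") < 0) | (F.col("amount") > 1_000_000)).count()

failures = {
    "duplicate_keys": dup_keys,
    "null_amounts": null_amounts,
    "out_of_range": bad_range,
}
if any(v > 0 for v in failures.values()):
    # In a real pipeline this would trigger an alert or fail the run.
    raise ValueError(f"Data quality checks failed: {failures}")
```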
Posted 1 day ago
8.0 - 12.0 years
0 - 1 Lacs
Hyderabad, Ahmedabad, Bengaluru
Hybrid
Contractual (Project-Based). Notice Period: Immediate to 15 days. Fill in this form: https://forms.office.com/Pages/ResponsePage.aspx?id=hLjynUM4c0C8vhY4bzh6ZJ5WkWrYFoFOu2ZF3Vr0DXVUQlpCTURUVlJNS0c1VUlPNEI3UVlZUFZMMC4u Resume: shweta.soni@panthsoftech.com
Posted 1 day ago
5.0 - 10.0 years
7 - 14 Lacs
Pune
Work from Office
We are looking for a skilled Data Engineer with 5-10 years of experience to join our team in Pune. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibility Design, develop, and implement data pipelines and architectures. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data systems and databases. Ensure data quality, integrity, and security. Optimize data processing and analysis workflows. Participate in code reviews and contribute to improving overall code quality. Job Requirements Strong proficiency in programming languages such as Python or Java. Experience with big data technologies like Hadoop or Spark. Knowledge of database management systems like MySQL or NoSQL. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills. Notice period: Immediate joiners preferred.
Posted 1 day ago
5.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
We are looking for a skilled Senior Data Engineer with 5-8 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibility Design, develop, and implement large-scale data pipelines and architectures. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain complex data systems and databases. Ensure data quality, integrity, and security. Optimize data processing workflows for improved performance and efficiency. Troubleshoot and resolve technical issues related to data engineering. Job Requirements Strong knowledge of data engineering principles and practices. Experience with data modeling, database design, and data warehousing. Proficiency in programming languages such as Python, Java, or C++. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills.
Posted 1 day ago
6.0 - 10.0 years
8 - 18 Lacs
Chennai
Work from Office
Role Overview Are you passionate about building scalable data systems and working on cutting-edge cloud technologies? We're looking for a Senior Data Engineer to join our team and play a key role in transforming raw data into powerful insights. What You'll Do: Design, develop, and optimize scalable ETL/ELT pipelines and data integration workflows. Build and maintain data lakes, warehouses, and real-time streaming pipelines. Work with both structured and unstructured data, ensuring clean, usable datasets for analytics & ML. Collaborate with analytics, product, and engineering teams to implement robust data models. Ensure best practices around data quality, governance, lineage, and security. Code in Python, SQL, PySpark, and work on Databricks. Operate in AWS environments using Redshift, Glue, S3. Continuously monitor and optimize pipeline performance. Document workflows and contribute to engineering standards. What We're Looking For: Strong hands-on experience in modern data engineering tools & platforms. Cloud-first mindset with expertise in the AWS data stack. Solid programming skills and a passion for building high-performance data systems. Excellent communication & collaboration skills.
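Given the AWS Glue, S3, and PySpark stack listed above, a Glue ETL script skeleton might look like the following. The catalog database, table, and bucket are invented, and the awsglue modules are only available inside the Glue job runtime.

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database/table names are placeholders).
src = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db", table_name="raw_events"
)

# Rename and cast fields on the way through.
mapped = ApplyMapping.apply(
    frame=src,
    mappings=[
        ("event_ts", "string", "event_time", "timestamp"),
        ("payload", "string", "payload", "string"),
    ],
)

# Write curated Parquet back to S3.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/events/"},
    format="parquet",
)
job.commit()
```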
Posted 1 day ago
12.0 - 18.0 years
50 - 65 Lacs
Bengaluru
Work from Office
Oversee the delivery of data engagements across a portfolio of client accounts, understanding their specific needs, goals & challenges. Provide mentorship & guidance for the Architects, Project Managers, & technical teams on data engagements. Required candidate profile: 12+ years of experience; should be hands-on in Data Architecture and an expert in Databricks or Azure; should have held data engineering leadership or management roles.
Posted 1 day ago
5.0 - 8.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Key Responsibilities: Design, develop, and optimize data models within the Celonis Execution Management System (EMS). Extract, transform, and load (ETL) data from flat files and UDP into Celonis. Work closely with business stakeholders and data analysts to understand data requirements and ensure accurate representation of business processes. Develop and optimize PQL (Process Query Language) queries for process mining. Collaborate with group data engineers, architects, and analysts to ensure high-quality data pipelines and scalable solutions. Perform data validation, cleansing, and transformation to enhance data quality. Monitor and troubleshoot data integration pipelines, ensuring performance and reliability. Provide guidance and best practices for data modeling in Celonis. Qualifications & Skills: 5+ years of experience in data engineering, data modeling, or related roles. Proficiency in SQL, ETL processes, and database management (e.g., PostgreSQL, Snowflake, BigQuery, or similar). Experience working with large-scale datasets and optimizing data models for performance. Data management experience that spans the data lifecycle and critical functions (e.g., data profiling, data modeling, data engineering, data consumption products and services). Strong problem-solving skills and ability to work in an agile, fast-paced environment. Excellent communication skills and demonstrated hands-on experience communicating technical topics with non-technical audiences. Ability to effectively collaborate and manage the timely completion of assigned activities while working in a highly virtual team environment. Excellent collaboration skills to work with cross-functional teams.
Posted 1 day ago
11.0 - 13.0 years
35 - 50 Lacs
Bengaluru
Work from Office
Principal AWS Data Engineer Location: Bangalore Experience: 9 - 12 years Job Summary: In this key leadership role, you will lead the development of foundational components for a Lakehouse architecture on AWS and drive the migration of existing data processing workflows to the new Lakehouse solution. You will work across the Data Engineering organisation to design and implement scalable data infrastructure and processes using technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue and Glue Data Catalog. The main goal of this position is to ensure successful migration and establish robust data quality governance across the new platform, enabling reliable and efficient data processing. Success in this role requires deep technical expertise, exceptional problem-solving skills, and the ability to lead and mentor within an agile team. Must Have Tech Skills: Prior Principal Engineer experience, leading team best practices in design, development, and implementation, mentoring team members, and fostering a culture of continuous learning and innovation. Extensive experience in software architecture and solution design, including microservices, distributed systems, and cloud-native architectures. Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices. Deep technical knowledge of AWS data services and engineering practices, with demonstrable experience of implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, Athena. Experience of delivering Lakehouse solutions/architectures. Nice To Have Tech Skills: Knowledge of additional programming languages and development tools to provide flexibility and adaptability across varied data engineering projects. A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous. Key Accountabilities: Lead complex projects autonomously, fostering an inclusive and open culture within development teams. Mentor team members and lead technical discussions. Provides strategic guidance on best practices in design, development, and implementation. Leads the development of high-quality, efficient code and develops necessary tools and applications to address complex business needs. Collaborates closely with architects, Product Owners, and Dev team members to decompose solutions into Epics, leading the design and planning of these components. Drive the migration of existing data processing workflows to a Lakehouse architecture, leveraging Iceberg capabilities. Serves as an internal subject matter expert in software development, advising stakeholders on best practices in design, development, and implementation. Key Skills: Deep technical knowledge of data engineering solutions and practices. Expert in AWS services and cloud solutions, particularly as they pertain to data engineering practices. Extensive experience in software architecture and solution design. Specialized expertise in Python and Spark. Ability to provide technical direction, set high standards for code quality, and optimize performance in data-intensive environments. Skilled in leveraging automation tools and Continuous Integration/Continuous Deployment (CI/CD) pipelines to streamline development, testing, and deployment. Exceptional communicator who can translate complex technical concepts for diverse stakeholders, including engineers, product managers, and senior executives.
Provides thought leadership within the engineering team, setting high standards for quality, efficiency, and collaboration. Experienced in mentoring engineers, guiding them in advanced coding practices, architecture, and strategic problem-solving to enhance team capabilities. Educational Background: Bachelor’s degree in computer science, Software Engineering, or a related field is essential. Bonus Skills: Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of Indices.
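For the Iceberg-based Lakehouse migration this role describes, the Spark session is typically configured with an Iceberg catalog backed by the Glue Data Catalog. A configuration sketch, assuming the Iceberg runtime and AWS bundle jars are on the classpath; catalog, namespace, and bucket names are invented.

```python
from pyspark.sql import SparkSession

# Configure an Iceberg catalog backed by the AWS Glue Data Catalog.
spark = (
    SparkSession.builder.appName("lakehouse_migration")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-lakehouse/warehouse")
    .getOrCreate()
)

# Create an Iceberg table (assumes the 'analytics' namespace exists in Glue).
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.analytics.orders (
        order_id BIGINT, order_date DATE, amount DOUBLE
    ) USING iceberg PARTITIONED BY (order_date)
""")

# Migrate an existing Parquet dataset into the new Iceberg table.
legacy = spark.read.parquet("s3://example-legacy/orders/")
legacy.writeTo("lake.analytics.orders").append()
```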
Posted 1 day ago
5.0 - 10.0 years
7 - 17 Lacs
Bengaluru
Work from Office
About this role: Wells Fargo is seeking a Lead Software Engineer (Lead Data Engineer). In this role, you will: Lead complex technology initiatives, including those that are companywide with broad impact. Act as a key participant in developing standards and companywide best practices for engineering complex and large-scale technology solutions for technology engineering disciplines. Design, code, test, debug, and document for projects and programs. Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives, enterprise technological environment, and technical challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented technical factors. Make decisions in developing standard and companywide best practices for engineering and technology solutions, requiring understanding of industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives. Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals. Lead projects and teams, or serve as a peer mentor. Required Qualifications: 5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education. Desired Qualifications: 5+ years of experience in Data Engineering. 5+ years of overall experience in software development. 5+ years of Python development experience, including 3+ years with the Spark framework. 5+ years of Oracle or SQL Server experience in designing, coding, and delivering database applications. Expert knowledge and considerable development experience with at least two or more of the following: Kafka, ETL, Big Data, NoSQL databases, S3 or other object stores. Strong understanding of data flow design and how to implement your designs in Python. Experience in writing and debugging complex PL/SQL or T-SQL stored procedures. Excellent troubleshooting and debugging skills. Ability to analyze a feature story, design a robust solution for it, and create specs for complex business rules and calculations. Ability to understand business problems and articulate a corresponding solution. Excellent verbal, written, and interpersonal communication skills. Job Expectations: Strong knowledge and understanding of the Dremio framework. Database query design and optimization. Strong experience using the development ecosystem of applications (JIRA, ALM, GitHub, uDeploy (Urban Code Deploy), Jenkins, Artifactory, SVN, etc.). Knowledge and understanding of multiple source code version control systems, working with branches, tags, and labels.
Posted 1 day ago
6.0 - 7.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary: Experience: 5 - 8 years. Location: Bangalore. Contribute to building state-of-the-art data platforms in AWS, leveraging Python and Spark. Be part of a dynamic team, building data solutions in a supportive and hybrid work environment. This role is ideal for an experienced data engineer looking to step into a leadership position while remaining hands-on with cutting-edge technologies. You will design, implement, and optimize ETL workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires technical expertise, strong problem-solving skills, and the ability to collaborate effectively within an agile team. Must Have Tech Skills: Demonstrable experience as a senior data engineer. Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices. Experience of implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, Athena. Experience with data services in Lakehouse architecture. Good background and proven experience of data modelling for data platforms. Nice To Have Tech Skills: A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous. Key Accountabilities: Provides guidance on best practices in design, development, and implementation, ensuring solutions meet business requirements and technical standards. Works closely with architects, Product Owners, and Dev team members to decompose solutions into Epics, leading design and planning of these components. Drive the migration of existing data processing workflows to the Lakehouse architecture, leveraging Iceberg capabilities. Communicates complex technical information clearly, tailoring messages to the appropriate audience to ensure alignment. Key Skills: Deep technical knowledge of data engineering solutions and practices. Implementation of data pipelines using AWS data services and Lakehouse capabilities. Highly proficient in Python, Spark, and familiar with a variety of development technologies. Skilled in decomposing solutions into components (Epics, stories) to streamline development. Proficient in creating clear, comprehensive documentation. Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation. Previous Financial Services experience delivering data solutions against financial and market reference data. Solid grasp of Data Governance and Data Management concepts, including metadata management, master data management, and data quality. Educational Background: Bachelor's degree in Computer Science, Software Engineering, or a related field is essential. Bonus Skills: A working knowledge of Indices, Index construction, and Asset Management principles.
Posted 1 day ago
8.0 - 10.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary: Experience: 4 - 8 years. Location: Bangalore. The Data Engineer will contribute to building state-of-the-art data Lakehouse platforms in AWS, leveraging Python and Spark. You will be part of a dynamic team, building innovative and scalable data solutions in a supportive and hybrid work environment. You will design, implement, and optimize workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires previous experience of building data products using AWS services, familiarity with Python and Spark, problem-solving skills, and the ability to collaborate effectively within an agile team. Must Have Tech Skills: Demonstrable previous experience as a data engineer. Technical knowledge of data engineering solutions and practices. Implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, Athena. Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices. Nice To Have Tech Skills: Familiar with data services in a Lakehouse architecture. Familiar with technical design practices, allowing for the creation of scalable, reliable data products that meet both technical and business requirements. A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous. Key Accountabilities: Writes high-quality code, ensuring solutions meet business requirements and technical standards. Works with architects, Product Owners, and Development leads to decompose solutions into Epics, assisting the design and planning of these components. Creates clear, comprehensive technical documentation that supports knowledge sharing and compliance. Experience in decomposing solutions into components (Epics, stories) to streamline development. Actively contributes to technical discussions, supporting a culture of continuous learning and innovation. Key Skills: Proficient in Python and familiar with a variety of development technologies. Previous experience of implementing data pipelines, including use of ETL tools to streamline data ingestion, transformation, and loading. Solid understanding of AWS services and cloud solutions, particularly as they pertain to data engineering practices. Familiar with AWS solutions including IAM, Step Functions, Glue, Lambda, RDS, SQS, API Gateway, Athena. Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation. Experienced in Agile development, including sprint planning, reviews, and retrospectives. Educational Background: Bachelor's degree in Computer Science, Software Engineering, or a related field is essential. Bonus Skills: Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of Indices. Familiar with implementing and optimizing CI/CD pipelines. Understands the processes that enable rapid, reliable releases, minimizing manual effort and supporting agile development cycles.
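AWS Lambda and Step Functions recur throughout this stack. As a hypothetical illustration, a Lambda handler invoked from a Step Functions state could start a Glue job and hand back the run id for later polling; the job name and argument below are invented.

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Triggered by Step Functions; starts a Glue job for the given dataset."""
    # The Glue job name and argument key are placeholders for this sketch.
    run = glue.start_job_run(
        JobName="curate-orders",
        Arguments={"--ingest_date": event.get("ingest_date", "")},
    )
    # A later Step Functions state can poll this run id for completion.
    return {"job_run_id": run["JobRunId"]}
```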
Posted 1 day ago
5.0 - 10.0 years
25 - 35 Lacs
Bengaluru
Work from Office
We are one of Australia's leading integrated media companies, with major operations in broadcast television, publishing, and digital content. We own Channel 7, The West Australian newspaper, and 7plus, a streaming platform. Our portfolio includes partnerships with leading global media brands, reaching millions of Australians across various media channels. Role - Senior Data Engineer. Responsibilities: Data Acquisition: Proactively design and implement processes for acquiring data from both internal systems and external data providers. Understand the various data types involved in the data lifecycle, including raw, curated, and lake data, to ensure effective data integration. SQL Development: Develop advanced SQL queries within database frameworks to produce semantic data layers that facilitate accurate reporting. This includes optimizing queries for performance and ensuring data quality. Linux Command Line: Utilize Linux command-line tools and functions, such as bash shell scripts, cron jobs, grep, and awk, to perform data processing tasks efficiently. This involves automating workflows and managing data pipelines. Data Protection: Ensure compliance with data protection and privacy requirements, including regulations like GDPR. This includes implementing best practices for data handling and maintaining the confidentiality of sensitive information. Documentation: Create and maintain clear documentation of designs and workflows using tools like Confluence and Visio. This ensures that stakeholders can easily communicate and understand technical specifications. API Integration and Data Formats: Collaborate with RESTful APIs and AWS services (such as S3, Glue, and Lambda) to facilitate seamless data integration and automation. Demonstrate proficiency in parsing and working with various data formats, including CSV and Parquet, to support diverse data processing needs. Key Requirements: 5+ years of experience as a Data Engineer, focusing on ETL development. 3+ years of experience in SQL and writing complex queries for data retrieval and manipulation. 3+ years of experience in Linux command-line and bash scripting. Familiarity with data modelling in analytical databases. Strong understanding of backend data structures, with experience collaborating with data engineers (Teradata, Databricks, AWS S3 parquet/CSV). Experience with RESTful APIs and AWS services like S3, Glue, and Lambda. Experience using Confluence for tracking documentation. Strong communication and collaboration skills, with the ability to interact effectively with stakeholders at all levels. Ability to work independently and manage multiple tasks and priorities in a dynamic environment. Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. Good to Have: Experience with Spark and Databricks. Understanding of data visualization tools, particularly Tableau. Knowledge of data clean room techniques and integration methodologies.
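The acquisition and format-handling duties above (REST APIs, CSV/Parquet, S3) can be sketched in a few lines of Python. The endpoint, fields, and bucket below are fictitious, and pandas' Parquet support assumes pyarrow is installed.

```python
import io
import boto3
import pandas as pd
import requests

# Acquire raw data from an external provider's REST API (URL is fictitious).
resp = requests.get("https://api.example-provider.com/v1/ratings", timeout=30)
resp.raise_for_status()
df = pd.DataFrame(resp.json()["records"])

# Light curation: enforce types and drop duplicates and unusable rows.
df["rating"] = pd.to_numeric(df["rating"], errors="coerce")
df = df.dropna(subset=["rating"]).drop_duplicates()

# Convert to Parquet in memory and land it in the S3 data lake.
buf = io.BytesIO()
df.to_parquet(buf, index=False)  # uses pyarrow under the hood
boto3.client("s3").put_object(
    Bucket="example-lake",
    Key="curated/ratings/ratings.parquet",
    Body=buf.getvalue(),
)
```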
Posted 1 day ago
9.0 - 13.0 years
13 - 18 Lacs
Hyderabad
Work from Office
This role involves the development and application of engineering practice and knowledge in defining, configuring, and deploying industrial digital technologies, including but not limited to PLM and MES, for managing continuity of information across the engineering enterprise, including design, industrialization, manufacturing, and supply chain, and for managing manufacturing data. Grade-specific focus: Digital Continuity and Manufacturing. Fully competent in own area. Acts as a key contributor in a more complex, critical environment. Proactively acts to understand and anticipate client needs. Manages costs and profitability for a work area. Manages own agenda to meet agreed targets. Develops plans for projects in own area. Looks beyond the immediate problem to the wider implications. Acts as a facilitator and coach, and moves teams forward.
Posted 1 day ago
3.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs. From advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection. This role is responsible for executing multiple tracks of work to deliver Big Data solutions enabling advanced data science and analytics. This includes working with the team on new Big Data systems for analyzing data; the coding & development of advanced analytics solutions to make/optimize business decisions and processes; integrating new tools to improve descriptive, predictive, and prescriptive analytics. This role contributes to the structured and unstructured Big Data / Data Science tools of Allstate from traditional to emerging analytics technologies and methods. The role is responsible for assisting in the selection and development of other team members. Key Responsibilities: Participate in the development of moderately complex and occasionally complex technical solutions using Big Data techniques in data & analytics processes. Develops innovative solutions within the team. Participates in the development of moderately complex and occasionally complex prototypes and department applications that integrate Big Data and advanced analytics to make business decisions. Uses new areas of Big Data technologies (ingestion, processing, distribution) and research delivery methods that can solve business problems. Understands the Big Data related problems and requirements to identify the correct technical approach. Takes coaching from key team members to ensure efforts within owned tracks of work will meet their needs. Executes moderately complex and occasionally complex functional work tracks for the team. Partners with Allstate Technology teams on Big Data efforts. Partners closely with team members on Big Data solutions for our data science community and analytic users. Leverages and uses Big Data best practices / lessons learned to develop technical solutions. Education: 4-year Bachelor's Degree (Preferred). Experience: 2 or more years of experience (Preferred). Supervisory Responsibilities: This job does not have supervisory duties. Education & Experience (in lieu): In lieu of the above education requirements, an equivalent combination of education and experience may be considered. Primary Skills: Big Data Engineering, Big Data Systems, Big Data Technologies, Data Science, Influencing Others. Shift Time Recruiter Info: Annapurna Jhaajhat@allstate.com About Allstate: The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to be the critical strategic business services arm of the corporation.
With offices in Bengaluru and Pune, the company offers expertise to the parent organization's business areas including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.
Posted 1 day ago
4.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs. From advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection. This role is responsible for driving multiple complex tracks of work to deliver Big Data solutions enabling advanced data science and analytics. This includes working with the team on new Big Data systems for analyzing data; the coding & development of advanced analytics solutions to make/optimize business decisions and processes; integrating new tools to improve descriptive, predictive, and prescriptive analytics; and discovery of new technical challenges that can be solved with existing and emerging Big Data hardware and software solutions. This role contributes to the structured and unstructured Big Data / Data Science tools of Allstate from traditional to emerging analytics technologies and methods. The role is responsible for assisting in the selection and development of other team members. Skills: Primarily Scala & Spark: Strong in functional programming and big data processing using Spark. Java: Proficient in Java 8+, REST API development, multithreading, and OOP concepts. Good hands-on experience with MongoDB. CaaS: Experience with Docker, Kubernetes, and deploying containerized apps. Tools: Git, CI/CD, JSON, SBT/Maven, Agile methodologies. Key Responsibilities: Uses new areas of Big Data technologies (ingestion, processing, distribution) and research delivery methods that can solve business problems. Participates in the development of complex prototypes and department applications that integrate Big Data and advanced analytics to make business decisions. Supports innovation; regularly provides new ideas to help people, process, and technology that interact with the analytic ecosystem. Participates in the development of complex technical solutions using Big Data techniques in data & analytics processes. Influences the team on the effectiveness of Big Data systems to solve their business problems. Leverages and uses Big Data best practices / lessons learned to develop technical solutions used for descriptive analytics, ETL, predictive modeling, and prescriptive "real time decisions" analytics. Partners closely with team members on Big Data solutions for our data science community and analytic users. Partners with Allstate Technology teams on Big Data efforts. Education: Master's Degree (Preferred). Experience: 6 or more years of experience (Preferred). Primary Skills: Apache Spark, Big Data, Big Data Engineering, Big Data Systems, Big Data Technologies, CasaXPS, CI/CD, Data Science, Docker (Software), Git, Influencing Others, Java, MongoDB, Multithreading, RESTful APIs, Scala (Programming Language), ScalaTest, Spring Boot. Shift Time Recruiter Info: rkotz@allstate.com About Allstate: The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation.
The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to be the critical strategic business services arm of the corporation. With offices in Bengaluru and Pune, the company offers expertise to the parent organization's business areas including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.
Posted 1 day ago
4.0 - 9.0 years
18 - 22 Lacs
Bengaluru
Work from Office
About us As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful. Overview about TII At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Team overview Target Global Supply Chain and Logistics (GSCL) is evolving at an incredible pace. We are constantly reimagining how we get the right product to the guest even better, faster, and more cost-effectively than before. We are becoming more intelligent, automated, and algorithmic in our decision-making, so that no matter how guests shop, in stores or on Target.com, we deliver the convenience and immediate gratification they demand and deserve. Operational Intelligence Analytics, within Target's Supply Chain, is responsible for identifying data and empowering users with insight to improve operational performance. The skills mix is a blend of data engineering, data science, and diverse problem-solving capabilities: jack of several trades, master of none. The team currently uses a wide variety of analytics tools including SQL, Python, R, and visualization tools to work with small, sparse datasets as well as big data platforms like Hadoop. Role overview This role will support Data & Analytics for Sales & Operational Planning (S&OP). As a Sr Product Manager, you will work in the product model and will partner to develop a comprehensive product strategy, a related roadmap, and key business objectives (OKRs) for your respective product. You will need to leverage the knowledge of your product, as well as customer feedback, and establish other relevant data points to assess value, develop business cases, and prioritize the direction and desired outcomes for your product. You will lead a product and work in unison with data analysts, engineers, data scientists, and business partners to deliver a product. You will be the voice of the product to key stakeholders to ensure that their needs are met and that the product team is getting the direction and support that it needs to be successful. You will develop and actively understand the market, own a product roadmap and a backlog outlining the customer themes, epics, and stories, while prioritizing the backlog to focus on the highest-impact work for your team and stakeholders. You will encourage the open exchange of information and viewpoints, as well as inspire others to achieve challenging goals and high standards of performance while committing to the organization's direction. You will foster a sense of urgency to achieve goals and leverage resources to overcome unexpected obstacles, and partner with product teams across the organization to help them achieve their goals while pursuing and completing yours.
Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs. 4-year college degree (or equivalent experience). 8+ years of total experience and 6+ years of Product Management experience, or experience within S&OP/Supply Chain. Strong communication skills, building trusted relationships with stakeholders, influencing teams across the organization, managing conflicts, and adapting to a fast-moving environment. Skilled in Excel, Greenfield, Smartsheet, Confluence, Jira, and Data@Target. Experience with analytics and the ability to facilitate communication between business and technical teams. Hands-on experience working in an agile environment and driving team operating model improvements (e.g., leading ceremonies, user stories, iterative development, scrum teams, sprints, personas). Experience working with global teams and openness to meetings in the evenings post 8pm IST. Proven ability in leveraging problem-solving frameworks. Proven ability to lead a body of work with cross-functional partners, specifically Data Engineering, Data Science, Product, and Business Owners. Proven ability to manage a large list of priorities and provide transparency to stakeholders on trade-off decisions and expected time of completion. Useful Links: Life at Target: https://india.target.com/ Benefits: https://india.target.com/life-at-target/workplace/benefits Culture: https://india.target.com/life-at-target/belonging
Posted 1 day ago
3.0 - 8.0 years
4 - 8 Lacs
Kolkata
Work from Office
Project Role: Software Development Engineer. Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work. Must have skills: Databricks Unified Data Analytics Platform. Good to have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education. Summary: As a Software Development Engineer, you will analyze, design, code, and test multiple components of application code across one or more clients. You will perform maintenance, enhancements, and/or development work, contributing to the growth and success of the projects. Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Collaborate with team members to analyze, design, and develop software solutions. Participate in code reviews and provide constructive feedback. Troubleshoot and debug software applications to ensure optimal performance. Research and implement new technologies to enhance existing software. Document software specifications, user manuals, and technical documentation. Professional & Technical Skills: Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform. Strong understanding of data engineering concepts and best practices. Experience with cloud platforms such as AWS or Azure. Hands-on experience with big data technologies like Hadoop or Spark. Knowledge of programming languages such as Python, Java, or Scala. Additional Information: The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. This position is based at our Kolkata office. A 15 years full-time education is required.
Posted 1 day ago
The data engineering job market in India is flourishing, with high demand for professionals who can manage and optimize large amounts of data. Data engineering roles are critical in helping organizations make informed decisions and derive valuable insights from their data.
The average salary range for data engineering professionals in India varies based on experience levels. Entry-level positions typically start around ₹4-6 lakhs per annum, while experienced data engineers can earn upwards of ₹15-20 lakhs per annum.
In the field of data engineering, a typical career path may progress as follows:
- Junior Data Engineer
- Data Engineer
- Senior Data Engineer
- Tech Lead
In addition to data engineering expertise, professionals in this field are often expected to have skills in:
- Data modeling
- ETL processes
- Database management
- Programming languages like Python, Java, or Scala
As you explore data engineering jobs in India, remember to hone your skills, stay updated on the latest technologies, and prepare thoroughly for interviews. With the right mindset and preparation, you can confidently apply for and excel in data engineering roles in the country. Good luck!