2105 Data Engineering Jobs - Page 13

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

14.0 - 19.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Skill: Data Engineer. Role: T2, T1.
Key responsibility: Data Engineer. Must have 9+ years of experience in the skills below.
Must have: Big Data concepts, Python (core Python, able to write code), SQL, Shell Scripting, AWS S3.
Good to have: event-driven/AWS SQS, Microservices, API Development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora.

Posted 1 week ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Seeking a skilled Data Engineer to work on cloud-based data pipelines and analytics platforms. The ideal candidate will have hands-on experience in PySpark and AWS, with proficiency in designing Data Lakes and working with modern data orchestration tools.

Posted 1 week ago

Apply

9.0 - 14.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Responsible for data modelling, design, and development of batch and real-time extract, load, transform (ELT) processes, and for setting up the data integration framework, ensuring best practices are followed during integration development.
Education: Bachelor's degree in CS/IT or a related field (minimum).
Skills: Azure Data Engineer (ADF, ADLS, MS Fabric), Databricks, Azure DevOps, Confluence.

Posted 1 week ago

Apply

2.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact!

IBM's Cloud Services are focused on supporting clients on their cloud journey across any platform to achieve their business goals. This encompasses Cloud Advisory, Architecture, Cloud Native Development, Application Portfolio Migration, Modernization, and Rationalization, as well as Cloud Operations. Cloud Services supports all public/private/hybrid cloud deployments: IBM Bluemix/IBM Cloud/Red Hat/AWS/Azure/Google and client private environments. Cloud Services has the best cloud developer, architect, complex SI, SysOps and delivery talent, delivered through our GEO CIC Factory model.

As a member of our Cloud Practice you will be responsible for defining and implementing application cloud migration, modernisation and rationalisation solutions for clients across all sectors. You will support mobilisation and help to lead the quality of our programmes and services, liaise with clients and provide consulting services, including: creating cloud migration strategies; defining delivery architecture; creating migration plans; designing orchestration plans and more. You will assist in creating and executing migration run books, and evaluate source (physical, virtual and cloud) and target workloads.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Cloud data engineers with GCP PDE certification and working experience with GCP. Building end-to-end data pipelines in GCP using Pub/Sub, BigQuery, Dataflow, Cloud Workflows/Cloud Scheduler, Cloud Run, Dataproc, and Cloud Functions. Experience in logging and monitoring of GCP services, and experience in Terraform and infrastructure automation. Expertise in the Python coding language. Develops, supports and maintains data engineering solutions on the Google Cloud ecosystem.

Preferred technical and professional experience: Stay updated with the latest trends and advancements in cloud technologies, frameworks, and tools. Conduct code reviews and provide constructive feedback to maintain code quality and ensure adherence to best practices. Troubleshoot and debug issues, and deploy applications to the cloud platform.

Posted 1 week ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Neo4j, Stardog. Good-to-have skills: Java. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge in data engineering. Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills: Must-have: proficiency in Neo4j. Good to have: experience with Java. Strong understanding of data modeling and graph database concepts. Experience with data integration tools and ETL processes. Familiarity with data quality frameworks and best practices. Proficient in programming languages such as Python or Scala for data manipulation.
Additional Information: The candidate should have a minimum of 5 years of experience in Neo4j. This position is based at our Bengaluru office. A 15 years full-time education is required.

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Microsoft Azure Data Services. Good-to-have skills: Microsoft Azure Databricks. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your role involves creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems.
Roles & Responsibilities: Data Integration: Develop and implement data pipelines using Azure Data Factory, Azure Fabric, and MDMF (Metadata Management Framework) to ingest, transform, and store data from various sources. Data Modeling: Maintain data models, ensuring data quality and consistency across different databases and systems. Database Management: Manage Azure SQL Databases and other storage solutions to optimize performance and scalability. ETL Processes: Design and optimize ETL (extract, transform, load) processes to ensure efficient data flow and availability for analytics and reporting. Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and provide actionable insights. Documentation: Maintain clear documentation of data architectures, data flows, and pipeline processes.
Professional & Technical Skills: Proficiency in Azure services such as Azure Data Factory, Azure Fabric, MDMF and Azure SQL Database. Good knowledge of SQL and experience with programming languages such as Python and PySpark. Familiarity with data modeling techniques and data warehousing concepts. Experience with cloud architecture and data architecture best practices. Understanding of data governance and security principles. Excellent problem-solving skills and attention to detail. Strong communication skills for collaborating with technical and non-technical stakeholders.
Additional Information: The candidate should have a minimum of 3 years of experience in Microsoft Azure Data Services. This position is based at our Bengaluru office. A 15 years full-time education is required.

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Be involved in the end-to-end data management process.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Develop and maintain data pipelines for efficient data processing. Implement ETL processes to ensure seamless data migration. Collaborate with cross-functional teams to optimize data solutions. Conduct data quality assessments and implement improvements. Stay updated on industry trends and best practices for data management.
Professional & Technical Skills: Must-have: proficiency in Snowflake Data Warehouse. Strong understanding of data modeling and database design. Experience with cloud-based data platforms like AWS or Azure. Hands-on experience with SQL and scripting languages like Python. Knowledge of data governance principles and practices.
Additional Information: The candidate should have a minimum of 3 years of experience in Snowflake Data Warehouse. This position is based at our Bengaluru office. A 15 years full-time education is required.

Posted 1 week ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark. Good-to-have skills: AWS Glue. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and optimize data workflows, ensuring that the data infrastructure supports the organization's analytical needs effectively.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge in data engineering. Continuously evaluate and improve data processing workflows to enhance efficiency and performance.
Professional & Technical Skills: Must-have: proficiency in Apache Spark. Good to have: experience with AWS Glue. Strong understanding of data pipeline architecture and design. Experience with ETL processes and data integration techniques. Familiarity with data quality frameworks and best practices.
Additional Information: The candidate should have a minimum of 5 years of experience in Apache Spark. This position is based at our Hyderabad office. A 15 years full-time education is required.

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform. Good-to-have skills: Business Agility. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.
Roles & Responsibilities: Need a Databricks resource with Azure cloud experience. Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Collaborate with data architects and analysts to design scalable data solutions. Implement best practices for data governance and security throughout the data lifecycle.
Professional & Technical Skills: Must-have: proficiency in the Databricks Unified Data Analytics Platform. Good to have: experience with Business Agility. Strong understanding of data modeling and database design principles. Experience with data integration tools and ETL processes. Familiarity with cloud platforms and services related to data storage and processing.
Additional Information: The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform. This position is based at our Pune office. A 15 years full-time education is required.

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark. Good-to-have skills: AWS Glue. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives. You will also monitor and optimize existing data processes to enhance performance and reliability, making data accessible and actionable for stakeholders.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Collaborate with data architects and analysts to design data models that meet business needs. Develop and maintain documentation for data processes and workflows to ensure clarity and compliance.
Professional & Technical Skills: Must-have: proficiency in Apache Spark. Good to have: experience with AWS Glue. Strong understanding of data processing frameworks and methodologies. Experience in building and optimizing data pipelines for performance and scalability. Familiarity with data warehousing concepts and best practices.
Additional Information: The candidate should have a minimum of 3 years of experience in Apache Spark. This position is based at our Bengaluru office. A 15 years full-time education is required.

Posted 1 week ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Navi Mumbai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Data Modeling Techniques and Methodologies. Good-to-have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while also troubleshooting any issues that arise in the data flow.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge in data engineering. Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills: Must-have: proficiency in Data Modeling Techniques and Methodologies. Good to have: experience with data warehousing solutions. Strong understanding of ETL processes and tools. Familiarity with data governance and data quality frameworks. Experience in programming languages such as Python or SQL for data manipulation.
Additional Information: The candidate should have a minimum of 7.5 years of experience in Data Modeling Techniques and Methodologies. This position is based in Mumbai. A 15 years full-time education is required.

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

Noida

Work from Office

About the Role: We are looking for a Staff Engineer, Real-time Data Processing, to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.
A Day in the Life: Architect, build, and maintain a large-scale real-time data processing platform. Collaborate with data scientists, product managers, and engineering teams to define system architecture and design. Optimize systems for scalability, reliability, and low-latency performance. Implement robust monitoring, alerting, and failover mechanisms to ensure high availability. Evaluate and integrate open-source and third-party streaming frameworks. Contribute to the overall engineering strategy and promote best practices for stream and event processing. Mentor junior engineers and lead technical initiatives.
What You Need: 8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms. Hands-on experience with stream processing frameworks like Apache Flink, Apache Kafka Streams, or Apache Spark Streaming. Proficiency in Java, Scala, Python, or Go for building high-performance services. Strong understanding of distributed systems, event-driven architecture, and microservices. Experience with Kafka, Pulsar, or other distributed messaging systems. Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes. Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry. Experience with cloud-native architectures and services (AWS, GCP, or Azure). Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Here's What We Offer: Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days. Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition. Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered. Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury. Pet-Friendly Office (*Noida office only): Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. Creche Facility for Children (*India offices): Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first.

Posted 1 week ago

Apply

2.0 - 4.0 years

5 - 8 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Job Summary, Data Operations: We are looking for a motivated and detail-oriented Data Operations Specialist with 1-4 years of experience to join our data team in Chennai. In this role, you will be responsible for monitoring, supporting, and maintaining data platforms and pipelines built on Microsoft Azure as well as on premises. You will play a key role in ensuring the reliability, performance, and availability of our data systems and workflows. The ideal candidate will have a solid understanding of SQL, ETL processes, and Azure data services, with hands-on experience troubleshooting data issues and supporting analytics products. Strong analytical thinking, communication skills, and a proactive mindset are essential.
Responsibilities: Investigate and resolve data issues using SQL, Python, and Azure tools. Monitor, manage, and troubleshoot Azure-based data pipelines and workflows. Support data ingestion, transformation, and integration processes. Collaborate with engineering and analytics teams to maintain data quality and availability. Assist users with data-related inquiries and reporting issues. Document support interactions, issue resolutions, pipelines, and operational standards. Contribute to internal knowledge bases and share best practices. Support database operations such as backups, health checks, and schema documentation. Communicate findings and progress clearly within the team. Continuously develop skills in Azure data services, ETL tools, and analytics.
Required Skills & Qualifications: 1-4 years of experience in data operations, support, or analytics roles. Proficient in SQL with hands-on experience in Microsoft SQL Server; familiarity with Python scripting is a plus. Practical knowledge of Azure data services such as Azure Data Factory, Azure Data Lake, Azure SQL Database and Azure Monitor. Strong understanding of ETL concepts and data warehousing, and exposure to BI/reporting tools like Power BI. Solid grasp of data flow and monitoring in production environments. Strong analytical and problem-solving skills with a focus on performance and operational efficiency. Ability to write basic to intermediate SQL queries and work with relational databases. Effective communication skills, both verbal and written. Collaborative mindset with the ability to work independently and in cross-functional teams. A bachelor's degree in Computer Science, Data Science, Statistics, or a related field. Eagerness to learn, adapt, and grow in a dynamic data-driven environment.

Posted 1 week ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Hyderabad

Work from Office

Urgent requirement for Big Data. Notice period: immediate. Location: Hyderabad/Pune. Employment type: C2H.
Primary Skills: 6-8 years of experience working as a big data developer and supporting environments. Strong knowledge of Unix/big data scripting. Strong understanding of the big data (CDP/Hive) environment. Hands-on with GitHub and CI/CD implementations. Attitude to learn and understand every task being done, and the reason for it. Ability to work independently on specialized assignments within the context of project deliverables. Take ownership of providing solutions and tools that iteratively increase engineering efficiencies. Excellent communication skills and a team player. Good to have: Hadoop and Control-M tooling knowledge. Good to have: automation experience and knowledge of any monitoring tools.
Role: You will work with the team handling an application developed using Hadoop/CDP and Hive. You will work within the Data Engineering team and with the Lead Hadoop Data Engineer and Product Owner. You are expected to support the existing application as well as design and build new data pipelines. You are expected to support evergreening or upgrade activities of CDP/SAS/Hive. You are expected to participate in service management of the application, support issue resolution, and improve processing performance to keep issues from recurring. Ensure the use of Hive, Unix scripting, and Control-M reduces lead time to delivery. Support the application in the UK shift as well as providing on-call support overnight and at weekends; this is mandatory.

Posted 1 week ago

Apply

2.0 - 5.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Apply your engineering expertise using the latest Data Engineering and Data Science technologies as a Business Intelligence Development Analyst at the JLL Technologies Global Centre of Expertise in Bangalore, India. The JLL Technologies Product Engineering team aims to bring successful technology-based products to market in a high-growth environment. The team's mission is focused on accelerating technology adoption in commercial real estate by bringing creative, innovative and technical solutions to solve large, complex problems for our clients. Shape the future of real estate for a better world by contributing to the creation of globally scalable products used by JLL's client customers, the most respected brands in the world.
Experience & Education: Experience with one or more public clouds such as Azure, AWS and GCP (Azure preferred). Reliable, self-motivated and self-disciplined individual capable of planning and executing multiple projects simultaneously within a fast-paced environment. Bachelor's degree in Computer Science or a related discipline, or Electronics & Communication Engineering; advanced degree preferred. 6+ months of experience. Capable of rapid self-learning of new software applications and programming languages. Effective written and verbal communication skills, including technical documentation. Excellent technical, analytical, time management, and organizational skills. Requires excellent collaboration, presentation and communication skills.
Technical Skills & Competencies: Strong experience in data tools and technologies, particularly MS Azure Databricks and SQL. Strong experience in Python and PySpark. Strong experience in building and maintaining data pipelines. Strong knowledge and working experience in DW/BI, Data Engineering and/or Data Science using different tools and in different domains. Good knowledge of GitHub, Agile methodologies and tools. Nice to have: experience in Data Science and AI/ML, with a good understanding and demonstrated application of the concepts, various models and algorithms. Nice to have: experience in Tableau, Power BI or other reporting tools.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and also focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.
Process Manager roles and responsibilities: Designing and implementing scalable, reliable, and maintainable data architectures on AWS. Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments. Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. Integrating AWS data solutions with existing systems and third-party services. Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval. Implementing data security and encryption best practices in AWS environments. Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed. Working closely with cross-functional teams, including data scientists, analysts, and stakeholders, to understand data requirements and deliver solutions.
Technical and Functional Skills: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift. Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Ability to analyze complex technical problems and propose effective solutions. Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

Process Manager - AWS Data Engineer. Mumbai/Pune | Full-time (FT) | Technology Services. Shift timings: EMEA (1pm-9pm). Management level: PM. Travel: NA.
The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role enables one to identify discrepancies and propose optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.
Process Manager roles and responsibilities: Understand the client's requirements and provide effective and efficient solutions in AWS using Snowflake. Assemble large, complex sets of data that meet non-functional and functional business requirements. Use Snowflake/Redshift architecture and design to create data pipelines and consolidate data in the data lake and data warehouse. Demonstrate strength and experience in data modeling, ETL development and data warehousing concepts. Understand data pipelines and modern ways of automating data pipelines using cloud-based tooling. Test and clearly document implementations, so others can easily understand the requirements, implementation, and test conditions. Perform data quality testing and assurance as a part of designing, building and implementing scalable data solutions in SQL.
Technical and Functional Skills: AWS Services: Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc. Programming Languages: Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Data Warehousing: Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift. ETL Tools: Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Database Management: Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Big Data Technologies: Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Version Control: Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Problem-solving Skills: Ability to analyze complex technical problems and propose effective solutions. Communication Skills: Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders. Education and Experience: Typically, a bachelor's degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments.
About eClerx: eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.
About eClerx Technology: eClerx's Technology Group collaboratively delivers Analytics, RPA, AI, and Machine Learning digital technologies that enable our consultants to help businesses thrive in a connected world. Our consultants and specialists partner with our global clients and colleagues to build and implement digital solutions through a broad spectrum of activities. To know more about us, visit https://eclerx.com
eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Noida

Hybrid

Data Engineer (SaaS-based). Immediate joiners preferred. Shift: 3 PM to 12 AM IST. Good to have: GCP Certified Data Engineer.
Overview of the Role: As a GCP Data Engineer, you'll focus on solving problems and creating value for the business by building solutions that are reliable and scalable to work with the size and scope of the company. You will be tasked with creating custom-built pipelines as well as migrating on-prem data pipelines to the GCP stack. You will be part of a team tackling intricate problems by designing and deploying reliable and scalable solutions tailored to the company's data landscape.
Required Skills: 5+ years of industry experience in software development, data engineering, business intelligence, or a related field, with experience in manipulating, processing, and extracting value from datasets. Extensive experience in requirement discovery, analysis and data pipeline solution design. Design, build and deploy internal applications to support our technology life cycle, collaboration and spaces, service delivery management, and data and business intelligence, among others. Build modular code for reusable pipelines or any kind of complex ingestion framework that eases the job of loading data into the data lake or data warehouse from multiple sources. Work closely with analysts and business process owners to translate business requirements into technical solutions. Coding experience in scripting and languages (Python, SQL, PySpark). Expertise in Google Cloud Platform (GCP) technologies in the data warehousing space (BigQuery, GCP Workflows, Cloud Scheduler, Secret Manager, Batch, Cloud Logging, Cloud SDK, Google Cloud Storage, IAM). Exposure to Google Dataproc and Dataflow. Maintain the highest levels of development practice, including: technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, and writing clean, modular and self-sustaining code, with repeatable quality and predictability. Understanding of CI/CD processes using Pulumi, GitHub, Cloud Build, Cloud SDK, and Docker. Experience with SAS/SQL Server/SSIS is an added advantage.
Qualifications: Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience. GCP Certified Data Engineer (preferred). Excellent verbal and written communication skills with the ability to effectively advocate technical solutions to other engineering teams and business audiences.

Posted 1 week ago

Apply

14.0 - 22.0 years

35 - 50 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Role & responsibilities
Job Summary: We are seeking a highly experienced Azure Databricks Architect to design and implement large-scale data solutions on Azure. The ideal candidate will have a strong background in data architecture, data engineering, and data analytics, with a focus on Databricks.
Key Responsibilities: Design and implement end-to-end data solutions on Azure, leveraging Databricks, Azure Data Factory, Azure Storage, and other Azure services. Lead data architecture initiatives, ensuring alignment with business objectives and best practices. Collaborate with stakeholders to define data strategies, architectures, and roadmaps. Develop and maintain data pipelines, ensuring seamless data integration and processing. Optimize performance, scalability, and cost efficiency for Databricks clusters and data pipelines. Ensure data security, governance, and compliance across Azure data services. Provide technical leadership and mentorship to junior team members. Stay up to date with industry trends and emerging technologies, applying knowledge to improve data solutions.
Requirements: 15+ years of experience in data architecture, data engineering, or a related field. 6+ years of experience with Databricks, including Spark, Delta Lake, and other Databricks features. Databricks certification (e.g., Databricks Certified Data Engineer or Databricks Certified Architect). Strong understanding of data architecture principles, data governance, and data security. Experience with Azure services, including Azure Data Factory, Azure Storage, and Azure Synapse Analytics. Programming skills in languages such as Python, Scala, or R. Excellent communication and collaboration skills.
Nice to Have: Experience with data migration from on-premises data warehouses (e.g., Oracle, Teradata) to Azure. Knowledge of data analytics and machine learning use cases. Familiarity with DevOps practices and tools (e.g., Azure DevOps, Git).
What We Offer: Competitive salary and benefits package. Opportunity to work on large-scale data projects and contribute to the development of cutting-edge data solutions. Collaborative and dynamic work environment. Professional development and growth opportunities.

Posted 1 week ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Hyderabad

Hybrid

Immediate openings for ITSS - Senior Azure Developer / Data Engineer, Bangalore, Contract.
Experience: 5+ years. Skill: ITSS - Senior Azure Developer / Data Engineer. Location: Bangalore. Notice period: Immediate. Employment type: Contract. Working mode: Hybrid.
Job Description - Senior Azure Developer. Locations: Bangalore, Hyderabad, Chennai and Noida. Role: Data Engineer. Experience: Mid-level.
Primary skillsets: Azure (ADF/ADLS/Key Vault). Secondary skillsets: Databricks.
Good to have: Ability to communicate well. Experience in cloud applications, especially in Azure (essentially the areas covered in the primary skillsets above). Work in an agile framework. Experience in ETL, SQL and PySpark. Able to run with a task without waiting for direction all the time. Experience with git repositories and release pipelines. Any certifications on Azure; any certification on Databricks will be the icing on the cake.

Posted 1 week ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Telangana

Work from Office

Education: Bachelor's degree in Computer Science, Engineering, or a related field; a Master's degree is preferred.
Experience: Minimum of 4+ years of experience in data engineering or a similar role. Strong programming skills in Python and advanced SQL. Strong experience with NumPy, Pandas and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.

Posted 1 week ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Hyderabad, Bengaluru

Work from Office

Skill: Snowflake Developer with Data Build Tool (dbt), ADF and Python.
Job Description: We are looking for a Data Engineer with experience in data warehouse projects, strong expertise in Snowflake, and hands-on knowledge of Azure Data Factory (ADF) and dbt (Data Build Tool). Proficiency in Python scripting will be an added advantage.
Key Responsibilities: Design, develop, and optimize data pipelines and ETL processes for data warehousing projects. Work extensively with Snowflake, ensuring efficient data modeling and query optimization. Develop and manage data workflows using Azure Data Factory (ADF) for seamless data integration. Implement data transformations, testing, and documentation using dbt. Collaborate with cross-functional teams to ensure data accuracy, consistency, and security. Troubleshoot data-related issues. (Optional) Utilize Python for scripting, automation, and data processing tasks.
Required Skills & Qualifications: Experience in data warehousing with a strong understanding of best practices. Hands-on experience with Snowflake (data modeling, query optimization). Proficiency in Azure Data Factory (ADF) for data pipeline development. Strong working knowledge of dbt (Data Build Tool) for data transformations. (Optional) Experience in Python scripting for automation and data manipulation. Good understanding of SQL and query optimization techniques. Experience in cloud-based data solutions (Azure). Strong problem-solving skills and the ability to work in a fast-paced environment. Experience with CI/CD pipelines for data engineering.

Posted 1 week ago

Apply

5.0 - 9.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Looking for a senior PySpark developer with 6+ years of hands-on experience. Build and manage large-scale data solutions using tools like PySpark, Hadoop, Hive, Python and SQL. Create workflows to process data using IBM TWS. Able to use PySpark to create different reports and handle large datasets. Use HQL/SQL/Hive for ad-hoc data queries, generate reports, and store data in HDFS. Able to deploy code using Bitbucket, PyCharm and TeamCity. Can manage people, communicate with several teams, and explain problems and solutions to the business team in a non-technical manner.
Primary skills: PySpark, Hadoop, Spark (one to three years). Role: Developer / Software Engineer.

Posted 1 week ago

Apply

3.0 - 7.0 years

20 - 30 Lacs

Bengaluru

Hybrid

Role & responsibilities: Design, develop, and optimize complex SQL queries, stored procedures, and data models for Oracle-based systems. Create and maintain efficient data pipelines for extract, transform, and load (ETL) processes using Informatica or Python. Implement data quality controls and validation processes to ensure data integrity. Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Document database designs, procedures, and configurations to support knowledge sharing and system maintenance. Troubleshoot and resolve database performance issues through query optimization and indexing strategies. Integrate Oracle systems with cloud services, particularly AWS S3 and related technologies. Participate in code reviews and contribute to best practices for database development. Support migration of data and processes from legacy systems to modern cloud-based solutions. Work within an Agile framework, participating in sprint planning, refinement, and retrospectives.
Required Qualifications: 3+ years of experience with Oracle databases, including advanced SQL and PL/SQL development. Strong knowledge of data modelling principles and database design. Proficiency with Python for data processing and automation. Experience implementing and maintaining data quality controls. Experience with AI-assisted development (GitHub Copilot, etc.). Ability to reverse engineer existing database schemas and understand complex data relationships. Experience with version control systems, preferably Git/GitHub. Excellent written communication skills for technical documentation. Demonstrated ability to work within Agile development methodologies. Knowledge of domain concepts, particularly security reference data, fund reference data, transactions, orders, holdings, and fund accounting.
Additional Qualifications: Experience with ETL tools like Informatica and Control-M. Unix shell scripting skills for data processing and automation. Familiarity with CI/CD pipelines for database code. Experience with AWS services, particularly S3, Lambda, and Step Functions. Knowledge of database security best practices. Experience with data visualization tools (Power BI). Familiarity with the relevant domains (Security Reference, Trades, Orders, Holdings, Funds, Accounting, Index, etc.).

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.