
144 Data Engineer Jobs - Page 3

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

16 - 25 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Source: Naukri

Greetings from Accion Labs! We are looking for a Sr. Data Engineer.
Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days (any references would be appreciated)
Job Description / Skill set: Python, Spark/PySpark, Pandas, SQL, AWS (EMR, Glue, S3, RDS, Redshift, Lambda, SQS, Step Functions, EventBridge)

Posted 2 weeks ago

Apply

5.0 - 10.0 years

16 - 31 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Source: Naukri

Greetings from Accion Labs! We are looking for a Sr. Data Engineer.
Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days (any references would be appreciated)
Job Description / Skill set: Python, Spark/PySpark, Pandas, SQL, AWS (EMR, Glue, S3, RDS, Redshift, Lambda, SQS, Step Functions, EventBridge), real-time analytics

Posted 2 weeks ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Hyderabad, Pune

Work from Office

Source: Naukri

Python + PySpark (5 to 10 years); joining locations: Hyderabad and Pune.

Posted 2 weeks ago

Apply

1.0 - 6.0 years

2 - 7 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Work from Office

Source: Naukri

Position Name: Data Engineer
Total Exp: 3-5 years
Notice Period: Immediate joiner
Work Location: Mumbai, Kandivali
Work Type: Work from Office
Job Description
Must have:
- Data Engineer with 3 to 5 years of experience
- Should be an individual contributor able to deliver the feature/story within the given time and at the expected quality
- Should be good with the Agile process
- Should be strong in programming and SQL queries
- Should be able to learn new tools and technologies to scale in data engineering
- Should have good communication and client-interaction skills
Technical Skills (Must have):
- Data engineering using Java/Python, Spark/PySpark, Big Data (Hadoop, Hive, YARN, Oozie, etc.), cloud warehouse (Snowflake), cloud services (AWS EMR, S3, Lambda, RDS/Aurora)
- Unit testing frameworks: JUnit/Mockito/PowerMock
- Strong experience with SQL queries (MySQL/SQL Server/Oracle/Hadoop/Snowflake)
- Source control: GitHub
- Project management tool: VSTS
- Build management tool: Maven/Gradle
- CI/CD: Azure DevOps
Added advantage (Good to have): Shell scripting, Linux commands

Posted 2 weeks ago

Apply

8.0 - 12.0 years

15 - 27 Lacs

Mumbai, Pune, Bengaluru

Work from Office

Source: Naukri

Role & responsibilities
Job Description: Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark/Spark SQL on cloud distributions like AWS. Must have: AWS, Databricks. Good to have: PySpark, Snowflake, Talend.
Requirements:
• Candidate must be experienced working in projects involving the following; other ideal qualifications include experience in:
• Processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Should be very proficient in large-scale data operations using Databricks and overall very comfortable using Python
• Familiarity with AWS compute, storage, and IAM concepts
• Experience working with S3 Data Lake as the storage tier
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
• Cloud warehouse experience (Snowflake, etc.) is a huge plus
• Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit
Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting
• Exceptionally strong analytical and problem-solving skills
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
• Strong experience with relational databases and data access methods, especially SQL
• Excellent collaboration and cross-functional leadership skills
• Excellent communication skills, both written and verbal
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment
• Ability to leverage data assets to respond to complex questions that require timely answers
• Working knowledge of migrating relational and dimensional databases to the AWS cloud platform
Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.
Note: Only immediate joiners / candidates serving notice period. Interested candidates can apply.
Regards, HR Manager

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad

Work from Office

Source: Naukri

Dear all, greetings of the day! We have an opening with one of the top MNCs.
Experience: 5-15 years
Notice Period: Immediate to 45 days
Job Description:
- Design, build, and maintain data pipelines (ETL/ELT) using BigQuery, Python, and SQL
- Optimize data flow, automate processes, and scale infrastructure
- Develop and manage workflows in Airflow/Cloud Composer and Ascend (or similar ETL tools)
- Implement data quality checks and testing strategies
- Support CI/CD (DevSecOps) processes, conduct code reviews, and mentor junior engineers
- Collaborate with QA/business teams and troubleshoot issues across environments
- dbt for transformation; Collibra for data quality
- Working with unstructured datasets
- Strong analytical and SQL expertise
Interested candidates can send their updated CV to sushma.b@technogenindia.com

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 40 Lacs

Pune

Work from Office

Source: Naukri

Role: Consultant / Sr. Consultant
Mandatory Skills: GCP & Hadoop
Location: Pune
Budget: 7-10 years - up to 30 LPA; 10-12 years - up to 40 LPA (non-negotiable)
Interested candidates can share resumes at: Kashif@d2nsolutions.com

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 40 Lacs

Pune

Work from Office

Source: Naukri

Experience as a Data Analyst with GCP & Hadoop is mandatory. Work From Office

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Chennai

Work from Office

Source: Naukri

5+ years of experience in ETL development with strong proficiency in Informatica BDM. Hands-on experience with big data platforms like Hadoop, Hive, HDFS, and Spark. Proficiency in SQL and working knowledge of Unix/Linux shell scripting. Experience in performance tuning of ETL jobs in a big data environment. Familiarity with data modeling concepts and working with large datasets. Strong problem-solving skills and attention to detail. Experience with job scheduling tools (e.g., Autosys, Control-M) is a plus.

Posted 3 weeks ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Pune

Work from Office

Source: Naukri

Perydot is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

20 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Source: Naukri

Role & responsibilities
As a Senior Data Engineer, you will work to solve some of the organizational data management problems that enable a data-driven organization, seamlessly switching between the roles of individual contributor, team member, and data modeling lead as demanded by each project to define, design, and deliver actionable insights.
On a typical day, you might:
- Engage clients and understand the business requirements to translate them into data models.
- Create and maintain a Logical Data Model (LDM) and Physical Data Model (PDM) by applying best practices to provide business insights.
- Contribute to data modeling accelerators.
- Create and maintain the source-to-target data mapping document, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
- Gather and publish data dictionaries.
- Maintain data models, as well as capture data models from existing databases and record descriptive information.
- Use the data modeling tool to create appropriate data models.
- Contribute to building data warehouses & data marts (on cloud) while performing data profiling and quality analysis.
- Use version control to maintain versions of data models.
- Collaborate with data engineers to design and develop data extraction and integration code modules.
- Partner with data engineers to strategize ingestion logic and consumption patterns.
Preferred candidate profile
- 6+ years of experience in the data space.
- Decent SQL skills.
- Significant experience in one or more RDBMSs (Oracle, DB2, and SQL Server).
- Real-time experience working with OLAP & OLTP database models (dimensional models).
- Good understanding of Star schema, Snowflake schema, and Data Vault modelling, as well as any ETL tool, data governance, and data quality.
- An eye for analyzing data and comfort with following agile methodology.
- A good understanding of any of the cloud services (Azure, AWS & GCP) is preferred.
You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

20 - 35 Lacs

Kolkata, Pune, Chennai

Work from Office

Source: Naukri

Senior Data Engineer
Job description:
- Demonstrate hands-on expertise in Ab Initio GDE, Metadata Hub, Co>Operating System & Control Center.
- Must demonstrate high proficiency in SQL.
- Develop and implement solutions for metadata management and data quality assurance.
- Able to identify, analyze, and resolve technical issues related to the Ab Initio solution.
- Perform unit testing and ensure the quality of developed solutions.
- Provide Level 3 support and troubleshoot issues with Ab Initio applications deployed in production.
- Working knowledge of Azure Databricks & Python will be an advantage.
- Any past experience working on the SAP HANA data layer would be good to have.
Other traits:
- Proficient communication skills required, as she/he will be directly engaging with client teams.
- Technical leadership; open to learning and adopting the complex landscape of data technologies in a new environment.

Posted 3 weeks ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Pune, Bengaluru

Work from Office

Source: Naukri

Job Role & responsibilities:
- Responsible for architecting, designing, building, and deploying data systems, pipelines, etc.
- Responsible for designing and implementing agile, scalable, and cost-efficient solutions on cloud data services.
- Responsible for design, implementation, development & migration; migrate data from traditional database systems to the cloud environment.
- Architect and implement ETL and data movement solutions.
Technical skills, qualifications & experience required:
- 4.5-7 years of experience in Data Engineering, Azure Cloud Data Engineering, Azure Databricks, Data Factory, PySpark, SQL, Python
- Hands-on experience in Azure Databricks, Data Factory, PySpark, SQL
- Proficient in cloud services: Azure
- Strong hands-on experience working with streaming datasets
- Hands-on expertise in data refinement using PySpark and Spark SQL
- Familiarity with building datasets using Scala
- Familiarity with tools such as Jira and GitHub
- Experience leading agile scrum, sprint planning, and review sessions
- Good communication and interpersonal skills
- Comfortable working in a multidisciplinary team within a fast-paced environment
* Immediate joiners will be preferred

Posted 3 weeks ago

Apply

4.0 - 9.0 years

3 - 8 Lacs

Pune

Work from Office

Source: Naukri

Design, develop, and maintain ETL pipelines using Informatica PowerCenter or Talend to extract, transform, and load data into EDW systems and the data lake. Optimize and troubleshoot complex SQL queries and ETL jobs to ensure efficient data processing and high performance. Technologies: SQL, Informatica PowerCenter, Talend, Big Data, Hive

Posted 3 weeks ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Hyderabad

Remote

Source: Naukri

Lead Data Engineer - Healthcare Domain
Role & responsibilities
Position: Lead Data Engineer
Experience: 7+ years
Location: Hyderabad | Chennai | Remote
SUMMARY: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities, maintaining existing systems/processes and developing new features, along with reviewing, presenting, and implementing performance improvements.
Duties and Responsibilities
- Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies.
- Monitor active ETL jobs in production.
- Build out data lineage artifacts to ensure all current and future systems are properly documented.
- Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
- Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies.
- Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations.
- Assist the Data Quality Analyst in implementing checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs.
- Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults.
Required Skills
- This job has no supervisory responsibilities.
- Strong experience with Snowflake and Azure Data Factory (ADF).
- Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work.
- 5+ years of experience with strong SQL query/development skills.
- Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
- Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory).
- Experience working in the healthcare industry with PHI/PII.
- Creative, lateral, and critical thinker; excellent communicator with well-developed interpersonal skills.
- Good at prioritizing tasks and time management.
- Ability to describe, create, and implement new solutions.
- Experience with related or complementary open-source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef).
- Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau).
- Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).

Posted 3 weeks ago

Apply

6.0 - 9.0 years

15 - 20 Lacs

Chennai

Work from Office

Source: Naukri

Skills Required:
- Minimum 6+ years in Data Engineering / Data Analytics platforms.
- Strong hands-on design and engineering background in AWS, across a wide range of AWS services, with the ability to demonstrate work on large engagements.
- Involved in requirements gathering and transforming requirements into functional and technical designs.
- Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
- Design, build, and maintain batch or real-time data pipelines in production.
- Develop ETL/ELT data pipeline (extract, transform, load) processes to help extract and manipulate data from multiple sources.
- Automate data workflows such as data ingestion, aggregation, and ETL processing; should have good experience with different types of data ingestion techniques: file-based, API-based, streaming data sources (OLTP, OLAP, ODS, etc.), and heterogeneous databases.
- Prepare raw data in data warehouses into consumable datasets for both technical and non-technical stakeholders.
- Strong experience with, and implementation of, data lake, data warehousing, and data lakehouse architectures.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
- Experience with AWS tools (S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
- Strong experience with Python, SQL, PySpark, Scala, shell scripting, etc.
- Strong experience with workflow management & orchestration tools (e.g., Airflow).
- Decent experience with, and understanding of, data manipulation/wrangling techniques.
- Demonstrable knowledge of applying data engineering best practices (coding practices, unit testing, version control, code review).
- Big data ecosystems: Cloudera/Hortonworks, AWS EMR, etc.
- Snowflake data warehouse/platform.
- Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub, and Spark Streaming.
- Experience working with CI/CD technologies: Git, Jenkins, Spinnaker, Ansible, etc.
- Experience building and deploying solutions to AWS Cloud.
- Good experience with NoSQL databases like DynamoDB, Redis, Cassandra, MongoDB, or Neo4j.
- Experience working with large data sets and distributed computing (e.g., Hive/Hadoop/Spark/Presto/MapReduce).
- Good to have: working knowledge of data visualization tools like Tableau, Amazon QuickSight, Power BI, QlikView, etc.
- Experience in the Insurance domain preferred.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

4 - 9 Lacs

Chennai

Work from Office

Source: Naukri

Skills Required:
- Minimum 3+ years in Data Engineering / Data Analytics platforms.
- Strong hands-on design and engineering background in AWS, across a wide range of AWS services, with the ability to demonstrate work on large engagements.
- Involved in requirements gathering and transforming requirements into functional and technical designs.
- Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
- Design, build, and maintain batch or real-time data pipelines in production.
- Develop ETL/ELT data pipeline (extract, transform, load) processes to help extract and manipulate data from multiple sources.
- Automate data workflows such as data ingestion, aggregation, and ETL processing; should have good experience with different types of data ingestion techniques: file-based, API-based, streaming data sources (OLTP, OLAP, ODS, etc.), and heterogeneous databases.
- Prepare raw data in data warehouses into consumable datasets for both technical and non-technical stakeholders.
- Strong experience with, and implementation of, data lake, data warehousing, and data lakehouse architectures.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
- Experience with AWS tools (S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
- Strong experience with Python, SQL, PySpark, Scala, shell scripting, etc.
- Strong experience with workflow management & orchestration tools (e.g., Airflow).
- Decent experience with, and understanding of, data manipulation/wrangling techniques.
- Demonstrable knowledge of applying data engineering best practices (coding practices, unit testing, version control, code review).
- Big data ecosystems: Cloudera/Hortonworks, AWS EMR, etc.
- Snowflake data warehouse/platform.
- Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub, and Spark Streaming.
- Experience working with CI/CD technologies: Git, Jenkins, Spinnaker, Ansible, etc.
- Experience building and deploying solutions to AWS Cloud.
- Good experience with NoSQL databases like DynamoDB, Redis, Cassandra, MongoDB, or Neo4j.
- Experience working with large data sets and distributed computing (e.g., Hive/Hadoop/Spark/Presto/MapReduce).
- Good to have: working knowledge of data visualization tools like Tableau, Amazon QuickSight, Power BI, QlikView, etc.
- Experience in the Insurance domain preferred.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

15 - 25 Lacs

Hyderabad, Bengaluru

Hybrid

Source: Naukri

Looking for a Snowflake developer for a US client. The candidate should be strong with Snowflake & dbt, able to do impact analysis on the current ETLs (Informatica/DataStage), and provide solutions based on that analysis. Exp: 7-12 yrs

Posted 3 weeks ago

Apply

2.0 - 7.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Source: Naukri

Design, develop, and deploy ETL workflows and mappings using Informatica PowerCenter. Extract data from various source systems and transform/load it into target systems. Troubleshoot ETL job failures and resolve data issues promptly. Optimize and tune complex SQL queries.
Required Candidate Profile: Maintain detailed documentation of ETL design, mapping logic, and processes. Ensure data quality and integrity through validation and testing. Experience with Informatica PowerCenter and strong SQL knowledge.
Perks and Benefits

Posted 3 weeks ago

Apply

5.0 - 10.0 years

11 - 21 Lacs

Kochi, Bengaluru

Work from Office

Source: Naukri

Hiring for a Senior Python Developer - Data Engineering with Mage.ai (Mage.ai experience mandatory)
Location: Remote/Bangalore/Kochi
Experience: 6+ years | Role Type: Full-time
About the Role
We are looking for a Senior Python Developer with strong data engineering expertise to help us build and optimize data workflows, manage large-scale pipelines, and enable efficient data operations across the organization. This role requires hands-on experience with Mage.AI, PySpark, and cloud-based data engineering workflows, and will play a critical part in our data infrastructure modernization efforts.
Required Skills & Experience
- 6+ years of hands-on Python development with a strong data engineering focus.
- Deep experience in Mage.AI for building and managing data workflows.
- Advanced proficiency in PySpark for distributed data processing and pipeline orchestration.
- Strong understanding of ETL/ELT best practices, data architecture, and pipeline design patterns.
- Familiarity with data warehouse technologies (PostgreSQL, Redshift, Snowflake, etc.).
- Experience integrating APIs, databases, and file-based sources into scalable pipelines.
- Strong problem-solving, debugging, and performance tuning skills.
Preferred Qualifications
- Experience with cloud platforms (AWS, GCP, Azure) and deploying pipelines on EMR, EKS, or GKE.
- Exposure to streaming data workflows (e.g., Kafka, Spark Streaming).
- Experience working in Agile teams, contributing to sprint planning and code reviews.
- Contributions to open-source projects or community engagement in the data engineering space.
If interested, apply for this role.
With regards, Rathna (rathna@trinityconsulting.asia)

Posted 3 weeks ago

Apply

5.0 - 10.0 years

22 - 35 Lacs

Kochi, Bengaluru

Hybrid

Source: Naukri

Greetings from Trinity!! We are looking for a Python Developer with proficiency in Mage.AI.
Senior Python Developer - Data Engineering Focus
Location: Bangalore/Kochi/Remote
Budget: 15-25 LPA for 5-7 years and 24-32 LPA for 7-9 years
Mode of hiring: FTE
About the Role
We are looking for a Senior Python Developer with strong data engineering expertise to help us build and optimize data workflows, manage large-scale pipelines, and enable efficient data operations across the organization. This role requires hands-on experience with Mage.AI and PySpark.
Required Skills & Experience
- 6+ years of hands-on Python development with a strong data engineering focus.
- Deep experience in Mage.AI for building and managing data workflows.
- Advanced proficiency in PySpark for distributed data processing and pipeline orchestration.
- Strong understanding of ETL/ELT best practices, data architecture, and pipeline design patterns.
- Familiarity with data warehouse technologies (PostgreSQL, Redshift, Snowflake, etc.).

Posted 3 weeks ago

Apply

6.0 - 11.0 years

5 - 15 Lacs

Hyderabad

Hybrid

Source: Naukri

Dear candidates, we are conducting a face-to-face drive on 7th June 2025. Whoever is interested in the F2F drive, kindly share your updated resume ASAP.
JD Details:
Role: Data Engineer with Python, Apache Spark, HDFS
Experience: 6 to 12 years
Location: Hyderabad
Shift Timings: General shift
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Python and Spark.
• Ingest, process, and transform large datasets from various sources into usable formats.
• Manage and optimize data storage using HDFS and MongoDB.
• Ensure high availability and performance of data infrastructure.
• Implement data quality checks, validations, and monitoring processes.
• Collaborate with cross-functional teams to understand data needs and deliver solutions.
• Write reusable and maintainable code with strong documentation practices.
• Optimize performance of data workflows and troubleshoot bottlenecks.
• Maintain data governance, privacy, and security best practices.
Required qualifications to be successful in this role:
• Minimum 6 years of experience as a Data Engineer or in a similar role.
• Strong proficiency in Python for data manipulation and pipeline development.
• Hands-on experience with Apache Spark for large-scale data processing.
• Experience with HDFS and distributed data storage systems.
• Strong understanding of data architecture, data modeling, and performance tuning.
• Familiarity with version control tools like Git.
• Experience with workflow orchestration tools (e.g., Airflow, Luigi) is a plus.
• Knowledge of cloud services (AWS, GCP, or Azure) is preferred.
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Preferred Skills:
• Experience with containerization (Docker, Kubernetes).
• Knowledge of real-time data streaming tools like Kafka.
• Familiarity with data visualization tools (e.g., Power BI, Tableau).
• Exposure to Agile/Scrum methodologies.
Note: If interested, please share your updated resume with jamshira@srinav.net along with the details below ASAP.
Details Needed: Full Name, Mail ID, Contact Number, Current Exp, Relevant Exp, CTC, Expected CTC/month, Current Location, Relocation (Yes/No), Official Notice Period, LWD, Holding offer in hand, Tentative DOJ, PAN ID, DOB (DD/MM/YYYY), LinkedIn profile link.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

8 - 16 Lacs

Bengaluru

Work from Office

Source: Naukri

Role & responsibilities / Qualifications
Experience: 3-6 years
Education: B.E/B.Tech/MCA/M.Tech
Minimum Qualifications
• Bachelor's degree in Computer Science, CIS, or a related field (or equivalent work experience in a related field)
• 3 years of experience in software development or a related field
• 2 years of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
You will be responsible for designing, building, and maintaining our data infrastructure, ensuring data quality, and enabling data-driven decision-making across the organization. The ideal candidate will have a strong background in data engineering, excellent problem-solving skills, and a passion for working with data.
Responsibilities:
• Design, build, and maintain our data infrastructure, including data pipelines, warehouses, and databases
• Ensure data quality and integrity by implementing data validation, testing, and monitoring processes
• Collaborate with cross-functional teams to understand data needs and translate them into technical requirements
• Develop and implement data security and privacy policies and procedures
• Optimize data processing and storage performance, ensuring scalability and reliability
• Stay up-to-date with the latest data engineering trends and technologies
• Provide mentorship and guidance to junior data engineers and analysts
• Contribute to the development of data-driven solutions and products
Requirements:
• 3+ years of experience in data engineering, with a Bachelor's degree in Computer Science, Engineering, or a related field
• Strong knowledge of data engineering tools and technologies, including SQL and GCP
• Experience with big data processing frameworks such as Spark, Hadoop, or Python
• Experience with data warehousing solutions: BigQuery
• Strong problem-solving skills, with the ability to analyze complex data sets and identify trends and insights
• Excellent communication and collaboration skills, with the ability to work with cross-functional teams and stakeholders
• Strong data security and privacy knowledge and experience
• Experience with agile development methodologies is a plus
Preferred candidate profile / budget: 3-4 yrs - max 12 LPA; 4-6 yrs - max 14-16 LPA

Posted 3 weeks ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Title: Data Engineer (Java + Hadoop/Spark)
Location: Bangalore (WFO)
Type: Full Time
Experience: 8-12 years
Notice Period: Immediate joiners to 30 days
Virtual drive on 1st June '25
Job Description:
We are looking for a skilled Data Engineer with strong expertise in Java and hands-on experience with Hadoop or Spark. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and processing systems.
Key Responsibilities:
• Develop and maintain data pipelines using Java.
• Work with big data technologies such as Hadoop or Spark to process large datasets.
• Optimize data workflows and ensure high performance and reliability.
• Collaborate with data scientists, analysts, and other engineers on data-related initiatives.
Requirements:
• Strong programming skills in Java.
• Hands-on experience with Hadoop or Spark.
• Experience with data ingestion, transformation, and storage solutions.
• Familiarity with distributed systems and big data architecture.
If interested, send your updated resume to rosalin.m@genxhire.in or 8976791986 and share the following details: Current CTC, Expected CTC, Notice Period, Age, Reason for leaving last job.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

10 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Source: Naukri

Data Pipelines: Proven experience in building scalable and reliable data pipelines
BigQuery: Expertise in writing complex SQL transformations; hands-on with indexing and performance optimization
Ingestion: Skilled in data scraping and ingestion through RESTful APIs and file-based sources
Orchestration: Familiarity with orchestration tools like Prefect and Apache Airflow (nice to have)
Tech Stack: Proficient in Python, FastAPI, and PostgreSQL
End-to-End Workflows: Capable of owning ingestion, transformation, and delivery processes
Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
