2897 Data Engineering Jobs - Page 30

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 10.0 years

10 - 12 Lacs

Bengaluru

Work from Office

Location: Bangalore/Hyderabad/Pune
Experience level: 8+ years

About the Role
We are looking for a technical, hands-on Lead Data Engineer to help drive the modernization of our data transformation workflows. We currently rely on legacy SQL scripts orchestrated via Airflow, and we are transitioning to a modular, scalable, CI/CD-driven DBT-based data platform. The ideal candidate has deep experience with DBT and modern data stack design, and has previously led similar migrations that improved code quality, lineage visibility, performance, and engineering best practices.

Key Responsibilities
- Lead the migration of legacy SQL-based ETL logic to DBT-based transformations
- Design and implement a scalable, modular DBT architecture (models, macros, packages)
- Audit and refactor legacy SQL for clarity, efficiency, and modularity
- Improve CI/CD pipelines for DBT: automated testing, deployment, and code quality enforcement
- Collaborate with data analysts, platform engineers, and business stakeholders to understand current gaps and define future data pipelines
- Own the Airflow orchestration redesign where needed (e.g., DBT Cloud/API hooks or airflow-dbt integration); a minimal sketch follows this posting
- Define and enforce coding standards, review processes, and documentation practices
- Coach junior data engineers on DBT and SQL best practices
- Improve lineage and impact analysis using DBT's built-in tools and metadata

Must-Have Qualifications
- 8+ years of experience in data engineering
- Proven success migrating legacy SQL to DBT, with visible results
- Deep understanding of DBT best practices, including model layering, Jinja templating, testing, and packages
- Proficiency in SQL performance tuning, modular SQL design, and query optimization
- Experience with Airflow (Composer, MWAA), including DAG refactoring and task orchestration
- Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery)
- Familiarity with data testing and CI/CD for analytics workflows
- Strong communication and leadership skills; comfortable working cross-functionally

Nice-to-Have
- Experience with DBT Cloud or DBT Core integrations with Airflow
- Familiarity with data governance and lineage tools (e.g., dbt docs, Alation)
- Exposure to Python (for custom Airflow operators/macros or utilities)
- Previous experience mentoring teams through modern data stack transitions
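To make the Airflow-to-DBT integration mentioned above concrete, here is a minimal sketch of a DAG that shells out to DBT with the BashOperator. The DAG name, project path, schedule, and model selectors are hypothetical placeholders; the sketch assumes Airflow 2.x with the dbt CLI installed on the worker.

```python
# Minimal sketch: orchestrating layered DBT runs from Airflow.
# Paths, selectors, and the schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_transformations",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run staging models first, then marts, mirroring a layered DBT project.
    dbt_staging = BashOperator(
        task_id="dbt_run_staging",
        bash_command="dbt run --select staging --project-dir /opt/dbt/project",
    )
    dbt_marts = BashOperator(
        task_id="dbt_run_marts",
        bash_command="dbt run --select marts --project-dir /opt/dbt/project",
    )
    # Data tests gate the pipeline, matching the CI/CD emphasis above.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/project",
    )
    dbt_staging >> dbt_marts >> dbt_test
```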

Posted 2 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Bengaluru

Work from Office

Sr. Python Developer
Experience: 5+ years
Location: Bangalore/Hyderabad

Job Overview:
We are seeking an experienced and highly skilled Senior Data Engineer to join our team. This role requires a combination of software development and data engineering expertise. The ideal candidate will have advanced knowledge of Python and SQL, a solid understanding of API creation (specifically REST APIs and FastAPI), and experience building reusable and configurable frameworks.

Key Responsibilities:
- Develop APIs & Microservices: Design, build, and maintain scalable, high-performance REST APIs using FastAPI and other frameworks (see the sketch after this posting).
- Data Engineering: Work on data pipelines, ETL processes, and data processing for robust data solutions.
- System Architecture: Collaborate on the design and implementation of configurable and reusable frameworks to streamline processes.
- Collaborate with Cross-Functional Teams: Work closely with software engineers, data scientists, and DevOps teams to build end-to-end solutions that cater to both application and data needs.
- Slack App Development: Design and implement Slack integrations and custom apps as required for team productivity and automation.
- Code Quality: Ensure high coding standards through rigorous testing, code reviews, and maintainable code.
- SQL Expertise: Write efficient, optimized SQL queries for data storage, retrieval, and analysis.
- Microservices Architecture: Build and manage microservices that are modular, scalable, and decoupled.

Required Skills & Experience:
- Programming Languages: Expert in Python, with solid experience building APIs and microservices.
- Web Frameworks & APIs: Strong hands-on experience designing RESTful APIs with FastAPI (Flask optional).
- Data Engineering Expertise: Strong knowledge of SQL, relational databases, and ETL processes. Experience with cloud-based data solutions is a plus.
- API & Microservices Architecture: Proven ability to design, develop, and deploy APIs and microservices architectures.
- Slack App Development: Experience integrating Slack apps or creating custom Slack workflows.
- Reusable Framework Development: Ability to design modular, configurable frameworks that can be reused across teams and systems.
- Problem-Solving Skills: Ability to break down complex problems and deliver practical solutions.
- Software Development Experience: Strong software engineering fundamentals, including version control, debugging, and deployment best practices.

Why Join Us?
- Growth Opportunities: You'll work with cutting-edge technologies and continuously improve your technical skills.
- Collaborative Culture: A dynamic and inclusive team where your ideas and contributions are valued.
- Competitive Compensation: We offer a competitive salary, comprehensive benefits, and a flexible work environment.
- Innovative Projects: Be part of projects that have real-world impact and help shape the future of data and software development.

If you're passionate about working on both data and software engineering, and enjoy building scalable and efficient systems, apply today and help us innovate!
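For context on the FastAPI work this role describes, below is a minimal, self-contained sketch of a REST API with one resource. The Record model, routes, and in-memory store are hypothetical stand-ins for a real service and database layer.

```python
# Minimal FastAPI REST sketch; model fields and routes are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-service")

class Record(BaseModel):
    id: int
    name: str
    value: float

# In-memory store standing in for a real database layer.
DB: dict[int, Record] = {}

@app.post("/records", status_code=201)
def create_record(record: Record) -> Record:
    DB[record.id] = record
    return record

@app.get("/records/{record_id}")
def read_record(record_id: int) -> Record:
    if record_id not in DB:
        raise HTTPException(status_code=404, detail="record not found")
    return DB[record_id]
```

Assuming the file is saved as app.py, it would be served with `uvicorn app:app --reload`.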

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- Minimum 4 years of relevant experience.
- Proficient in Python, with hands-on experience building ETL pipelines for data extraction, transformation, and validation (see the sketch after this posting).
- Strong SQL skills for working with structured data.
- Familiar with Grafana or Kibana for data visualization and monitoring dashboards.
- Experience with databases such as MongoDB, Elasticsearch, and MySQL.
- Comfortable working in Linux environments using common Unix tools.
- Hands-on experience with Git, Docker, and virtual machines.
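As a rough illustration of the ETL skills listed above, here is a small extract-transform-validate-load sketch. It uses the standard-library sqlite3 module in place of the MySQL/MongoDB/Elasticsearch stores named in the posting, and the table names and cleansing rule are invented for the example.

```python
# Minimal extract-transform-validate-load sketch; sqlite3 stands in for
# the production databases named in the posting.
import sqlite3

def extract(conn: sqlite3.Connection) -> list[tuple]:
    return conn.execute("SELECT id, amount FROM raw_orders").fetchall()

def transform(rows: list[tuple]) -> list[tuple]:
    # Hypothetical rule: drop negative amounts and round to 2 decimals.
    return [(i, round(a, 2)) for i, a in rows if a >= 0]

def validate(rows: list[tuple]) -> None:
    assert all(a >= 0 for _, a in rows), "negative amount slipped through"

def load(conn: sqlite3.Connection, rows: list[tuple]) -> None:
    conn.executemany("INSERT INTO clean_orders VALUES (?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL)")
    conn.execute("CREATE TABLE clean_orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                     [(1, 10.5), (2, -3.0), (3, 7.25)])
    rows = transform(extract(conn))
    validate(rows)
    load(conn, rows)
    print(conn.execute("SELECT * FROM clean_orders").fetchall())
```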

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

About the Role:
We are seeking a skilled and detail-oriented Data Migration Specialist with hands-on experience in Alteryx and Snowflake. The ideal candidate will be responsible for analyzing existing Alteryx workflows, documenting the logic and data transformation steps, and converting them into optimized, scalable SQL queries and processes in Snowflake (a hedged example follows this posting). The candidate should have solid SQL expertise and a strong understanding of data warehousing concepts. This role plays a critical part in our cloud modernization and data platform transformation initiatives.

Key Responsibilities:
- Analyze and interpret complex Alteryx workflows to identify data sources, transformations, joins, filters, aggregations, and output steps.
- Document the logical flow of each Alteryx workflow, including inputs, business logic, and outputs.
- Translate Alteryx logic into equivalent SQL scripts optimized for Snowflake, ensuring accuracy and performance.
- Write advanced SQL queries and stored procedures, and use Snowflake-specific features like Streams, Tasks, Cloning, Time Travel, and Zero-Copy Cloning.
- Implement data ingestion strategies using Snowpipe, stages, and external tables.
- Optimize Snowflake performance through query tuning, partitioning, clustering, and caching strategies.
- Collaborate with data analysts, engineers, and stakeholders to validate transformed logic against expected results.
- Handle data cleansing, enrichment, aggregation, and business logic implementation within Snowflake.
- Suggest improvements and automation opportunities during migration.
- Conduct unit testing and support UAT (User Acceptance Testing) for migrated workflows.
- Maintain version control, documentation, and an audit trail for all converted workflows.

Required Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- At least 4 years of hands-on experience designing and developing scalable data solutions on the Snowflake Data Cloud platform.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- 1+ years of experience with Alteryx Designer, including advanced workflow development and debugging.
- Strong proficiency in SQL, with 3+ years specifically working with Snowflake or other cloud data warehouses.
- Python programming experience focused on data engineering.
- Experience with data APIs and batch/stream processing.
- Solid understanding of data transformation logic: joins, unions, filters, formulas, aggregations, pivots, and transpositions.
- Experience in performance tuning and optimization of SQL queries in Snowflake.
- Familiarity with Snowflake features like CTEs, Window Functions, Tasks, Streams, Stages, and External Tables.
- Exposure to migration or modernization projects from ETL tools (like Alteryx/Informatica) to SQL-based cloud platforms.
- Strong documentation skills and attention to detail.
- Experience working in Agile/Scrum development environments.
- Good communication and collaboration skills.
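To illustrate the Alteryx-to-Snowflake translation this role centers on, here is a hedged sketch that runs a SQL equivalent of an Alteryx Join + Summarize through the snowflake-connector-python package. The connection parameters and table names are hypothetical placeholders.

```python
# Sketch of executing a Snowflake SQL translation of an Alteryx
# Join + Summarize step. All connection details are hypothetical.
import snowflake.connector

MIGRATED_SQL = """
    -- Equivalent of an Alteryx Join + Summarize: total sales per region
    -- over the last 30 days.
    SELECT c.region, SUM(o.amount) AS total_sales
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.order_date >= DATEADD(day, -30, CURRENT_DATE)
    GROUP BY c.region
"""

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical placeholders
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
try:
    for region, total in conn.cursor().execute(MIGRATED_SQL):
        print(region, total)
finally:
    conn.close()
```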

Posted 2 weeks ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Lead Python Developer
Experience: 7+ years
Location: Bangalore/Hyderabad

Job Overview:
We are seeking an experienced and highly skilled Senior Data Engineer to join our team. This role requires a combination of software development and data engineering expertise. The ideal candidate will have advanced knowledge of Python and SQL, a solid understanding of API creation (specifically REST APIs and FastAPI), and experience building reusable and configurable frameworks.

Key Responsibilities:
- Develop APIs & Microservices: Design, build, and maintain scalable, high-performance REST APIs using FastAPI and other frameworks.
- Data Engineering: Work on data pipelines, ETL processes, and data processing for robust data solutions.
- System Architecture: Collaborate on the design and implementation of configurable and reusable frameworks to streamline processes.
- Collaborate with Cross-Functional Teams: Work closely with software engineers, data scientists, and DevOps teams to build end-to-end solutions that cater to both application and data needs.
- Slack App Development: Design and implement Slack integrations and custom apps as required for team productivity and automation.
- Code Quality: Ensure high coding standards through rigorous testing, code reviews, and maintainable code.
- SQL Expertise: Write efficient, optimized SQL queries for data storage, retrieval, and analysis.
- Microservices Architecture: Build and manage microservices that are modular, scalable, and decoupled.

Required Skills & Experience:
- Programming Languages: Expert in Python, with solid experience building APIs and microservices.
- Web Frameworks & APIs: Strong hands-on experience designing RESTful APIs with FastAPI (Flask optional).
- Data Engineering Expertise: Strong knowledge of SQL, relational databases, and ETL processes. Experience with cloud-based data solutions is a plus.
- API & Microservices Architecture: Proven ability to design, develop, and deploy APIs and microservices architectures.
- Slack App Development: Experience integrating Slack apps or creating custom Slack workflows.
- Reusable Framework Development: Ability to design modular, configurable frameworks that can be reused across teams and systems.
- Problem-Solving Skills: Ability to break down complex problems and deliver practical solutions.
- Software Development Experience: Strong software engineering fundamentals, including version control, debugging, and deployment best practices.

Why Join Us?
- Growth Opportunities: You'll work with cutting-edge technologies and continuously improve your technical skills.
- Collaborative Culture: A dynamic and inclusive team where your ideas and contributions are valued.
- Competitive Compensation: We offer a competitive salary, comprehensive benefits, and a flexible work environment.
- Innovative Projects: Be part of projects that have real-world impact and help shape the future of data and software development.

If you're passionate about working on both data and software engineering, and enjoy building scalable and efficient systems, apply today and help us innovate!

Posted 2 weeks ago

Apply

7.0 - 11.0 years

30 - 35 Lacs

Bengaluru

Hybrid

Lead Data Engineer

We're Hiring: Lead Data Engineer | Bangalore | 7-11 Years Experience
Location: Bangalore (Hybrid)
Position Type: Permanent
Mode of Interview: Face to Face
Experience: 7-11 years
Skills: Snowflake, ETL tools (Informatica/BODS/DataStage), scripting (Python/PowerShell/Shell), SQL, data warehousing

Candidates who are available for a face-to-face discussion can apply.
Interested? Send your updated CV to: radhika@theedgepartnership.com
Connect with me on LinkedIn: https://www.linkedin.com/in/radhika-gm-00b20a254/

Skills and Qualifications (Functional and Technical)

Functional Skills:
- Team Player: Support peers, team, and department management.
- Communication: Excellent verbal, written, and interpersonal communication skills.
- Problem Solving: Excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality.
- Partnership and Collaboration: Develop and maintain partnerships with business and IT stakeholders.
- Attention to Detail: Ensure accuracy and thoroughness in all tasks.

Technical/Business Skills:
- Data Engineering: Experience designing and building data warehouses and data lakes. Good knowledge of data warehouse principles and concepts. Technical expertise in large-scale data warehousing applications and databases such as Oracle, Netezza, Teradata, and SQL Server. Experience with public cloud-based data platforms, especially Snowflake and AWS.
- Data Integration: Expertise in the design and development of complex data pipelines using industry-leading ETL tools such as SAP BusinessObjects Data Services (BODS), Informatica Cloud Data Integration Services (IICS), or IBM DataStage. Experience with ELT tools such as DBT, Fivetran, and AWS Glue.
- SQL and Scripting: Expert in SQL, with development experience in at least one scripting language (e.g., Python); adept at tracing and resolving data integrity issues.
- Architecture: Strong knowledge of data architecture, data design patterns, modeling, and cloud data solutions (Snowflake, AWS Redshift, Google BigQuery).
- Data Modeling: Expertise in logical and physical data models using relational or dimensional modeling practices, and in high-volume ETL/ELT processes.
- Performance: Tuning of data pipelines and DB objects to deliver optimal performance.
- Experience with GitLab version control and CI/CD processes.
- Experience in the financial industry is a plus.

Posted 2 weeks ago

Apply

11.0 - 14.0 years

13 - 16 Lacs

Hyderabad

Work from Office

We are looking for a skilled Senior Azure/Fabric Data Engineer with 11-14 years of experience to join our team at Apps Associates (I) Pvt. Ltd (IT Services & Consulting).

Roles and Responsibilities
- Design and implement scalable data pipelines using Azure and Fabric.
- Develop and maintain large-scale data architectures to support business intelligence and analytics.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Ensure data quality and integrity by implementing robust testing and validation procedures.
- Optimize data storage and retrieval processes for improved performance and efficiency.
- Provide technical guidance and mentorship to junior team members.

Requirements
- Strong understanding of data engineering principles and practices.
- Experience with Azure and Fabric technologies is highly desirable.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced environment.
- Strong communication and interpersonal skills.
- Experience with agile development methodologies is preferred.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 5 Lacs

Dhule

Work from Office

The Development Lead will oversee the design, development, and delivery of advanced data solutions using Azure Databricks, SQL, and data visualization tools like Power BI. The role involves leading a team of developers, managing data pipelines, and creating insightful dashboards and reports to drive data-driven decision-making across the organization. The individual will ensure best practices are followed in data architecture, development, and reporting while maintaining alignment with business objectives.

Key Responsibilities:
- Data Integration & ETL Processes: Design, build, and optimize ETL pipelines to manage the flow of data from various sources into data lakes, data warehouses, and reporting platforms.
- Data Visualization & Reporting: Lead the development of interactive dashboards and reports using Power BI, ensuring that business users have access to actionable insights and performance metrics.
- SQL Development & Optimization: Write, optimize, and review complex SQL queries for data extraction, transformation, and reporting, ensuring high performance and scalability across large datasets.
- Azure Cloud Solutions: Implement and manage cloud-based solutions using Azure services (Azure Databricks, Azure SQL Database, Data Lake) to support business intelligence and reporting initiatives (a brief PySpark sketch follows this posting).
- Collaboration with Stakeholders: Work closely with business leaders and cross-functional teams to understand reporting and analytics needs, translating them into technical requirements and actionable data solutions.
- Quality Assurance & Best Practices: Implement and maintain best practices in development, ensuring code quality, version control, and adherence to data governance standards.
- Performance Monitoring & Tuning: Continuously monitor the performance of data systems, reporting tools, and dashboards to ensure they meet SLAs and business requirements.
- Documentation & Training: Create and maintain comprehensive documentation for all data solutions, including architecture diagrams, ETL workflows, and data models. Provide training and support to end users on Power BI reports and dashboards.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Development Lead or Senior Data Engineer with expertise in Azure Databricks, SQL, Power BI, and data reporting/visualization.
- Hands-on experience in Azure Databricks for large-scale data processing and analytics, including Delta Lake, Spark SQL, and integration with Azure Data Lake.
- Strong expertise in SQL for querying, data transformation, and database management.
- Proficiency in Power BI for developing advanced dashboards, data models, and reporting solutions.
- Experience in ETL design and data integration across multiple systems, with a focus on performance optimization.
- Knowledge of Azure cloud architecture, including Azure SQL Database, Data Lake, and other relevant services.
- Experience leading agile development teams, with a strong focus on delivering high-quality, scalable solutions.
- Strong problem-solving skills, with the ability to troubleshoot and resolve complex data and reporting issues.
- Excellent communication skills, with the ability to interact with both technical and non-technical stakeholders.

Preferred Qualifications:
- Knowledge of additional Azure services (e.g., Azure Synapse, Data Factory, Logic Apps) is a plus.
- Experience in Power BI for data visualization and custom calculations.
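As a hedged illustration of the Databricks work described above, here is a short PySpark sketch that aggregates orders into a daily summary table written in Delta format. Table names and columns are hypothetical, and Delta availability is assumed (it is the default on Databricks clusters).

```python
# Sketch of a Databricks-style PySpark aggregation written to Delta Lake.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_daily_summary").getOrCreate()

orders = spark.read.table("raw.orders")  # hypothetical source table

daily = (
    orders
    .withColumn("order_day", F.to_date("order_ts"))
    .groupBy("order_day", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Partitioned by day so downstream Power BI queries can prune partitions.
(daily.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_day")
      .saveAsTable("analytics.sales_daily_summary"))
```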

Posted 2 weeks ago

Apply

4.0 - 6.0 years

1 - 2 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Responsibilities:
- Design and implement scalable data pipelines to ingest, process, and analyze large volumes of structured and unstructured data from various sources.
- Develop and optimize data storage solutions, including data warehouses, data lakes, and NoSQL databases, to support efficient data retrieval and analysis.
- Implement data processing frameworks and tools such as Apache Hadoop, Spark, Kafka, and Flink to enable real-time and batch data processing (see the streaming sketch after this posting).
- Collaborate with data scientists and analysts to understand data requirements and develop solutions that enable advanced analytics, machine learning, and reporting.
- Ensure data quality, integrity, and security by implementing best practices for data governance, metadata management, and data lineage.
- Monitor and troubleshoot data pipelines and infrastructure to ensure reliability, performance, and scalability.
- Develop and maintain ETL (Extract, Transform, Load) processes to integrate data from various sources and transform it into usable formats.
- Stay current with emerging technologies and trends in big data and cloud computing, and evaluate their applicability to enhance our data engineering capabilities.
- Document data architectures, pipelines, and processes to ensure clear communication and knowledge sharing across the team.

Requirements:
- Strong programming skills in Java, Python, or Scala.
- Strong understanding of data modeling, data warehousing, and ETL processes.
- 4 to 6 years of relevant experience.
- Strong understanding of Big Data technologies and their architectures, including Hadoop, Spark, and NoSQL databases.

Locations: Mumbai, Delhi NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
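To ground the real-time processing requirement above, here is a minimal Spark Structured Streaming sketch that reads JSON events from Kafka and lands them as Parquet. The broker address, topic, schema, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Sketch of a Spark Structured Streaming job reading from Kafka.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("event_type", StringType())
          .add("value", DoubleType()))

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical
    .option("subscribe", "events")
    .load()
    # Kafka delivers bytes; decode and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/events")              # hypothetical sink
    .option("checkpointLocation", "/chk/events")  # enables recovery
    .outputMode("append")
    .start()
)
query.awaitTermination()
```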

Posted 2 weeks ago

Apply

6.0 - 8.0 years

15 - 18 Lacs

Bengaluru

Work from Office

We are seeking an experienced professional in AI and machine learning with a strong focus on large language models (LLMs) for a 9-month project. The role involves hands-on expertise in developing and deploying agentic systems and working with transformer architectures, fine-tuning, prompt engineering, and task adaptation. Responsibilities include leveraging reinforcement learning or similar methods for goal-oriented autonomous systems, deploying models using MLOps practices, and managing large datasets in production environments. The ideal candidate should excel in Python, ML libraries (Hugging Face Transformers, TensorFlow, PyTorch), data engineering principles, and cloud platforms (AWS, GCP, Azure). Strong analytical and communication skills are essential to solve complex challenges and articulate insights to stakeholders.
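As a small, hedged illustration of the Hugging Face Transformers skills this posting mentions, the snippet below loads a small open model and generates text from a prompt; the model choice is purely illustrative and any small causal LM could be substituted.

```python
# Minimal text-generation sketch with Hugging Face Transformers.
# The model name is illustrative, not a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="distilgpt2",  # small model chosen only for illustration
)

prompt = "Summarize the goal of an agentic data pipeline in one sentence:"
out = generator(prompt, max_new_tokens=40, do_sample=False)
print(out[0]["generated_text"])
```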

Posted 2 weeks ago

Apply

3.0 - 6.0 years

25 - 30 Lacs

Chennai

Work from Office

Zalaris is looking for a Senior Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

25 - 30 Lacs

Pune

Work from Office

Diverse Lynx is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Total years of experience: 3-5 years
Relevant years of experience: 3+ years

Detailed JD (Roles and Responsibilities)

Must-Have Skills:
- Proficient in data engineering.
- Hands-on experience in Python, Azure Data Factory, Azure Databricks (PySpark), and ETL.
- Knowledge of Data Lake storage (storage containers) and MS SQL.
- A quick, enthusiastic learner, willing to work on new technologies as requirements demand.
- Configuring and deploying using Azure DevOps pipelines.
- Airflow.

Good-to-Have Skills:
- SQL knowledge and experience working with relational databases.
- Understanding of banking domain concepts.
- Understanding of project lifecycles: waterfall and agile.

Work Experience: 3-5 years of experience in data engineering.
Mandatory skills: Azure Databricks, Azure Data Factory, and Python coding skills.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

8 - 9 Lacs

Hyderabad, Pune, Chennai

Work from Office

Total years of experience: 3-5 years
Relevant years of experience: 3+ years

Detailed JD (Roles and Responsibilities)

Must-Have Skills:
- Proficient in data engineering.
- Hands-on experience in Python, Azure Data Factory, Azure Databricks (PySpark), and ETL.
- Knowledge of Data Lake storage (storage containers) and MS SQL.
- A quick, enthusiastic learner, willing to work on new technologies as requirements demand.
- Configuring and deploying using Azure DevOps pipelines.
- Airflow.

Good-to-Have Skills:
- SQL knowledge and experience working with relational databases.
- Understanding of banking domain concepts.
- Understanding of project lifecycles: waterfall and agile.

Work Experience: 3-5 years of experience in data engineering.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Technical Skills Required:
- ETL Concepts: Strong understanding of Extract, Transform, Load (ETL) processes; ability to design, develop, and maintain robust ETL pipelines.
- Database Fundamentals: Proficiency in working with relational databases (e.g., MySQL, PostgreSQL, Oracle, or MS SQL Server); knowledge of database design and optimization techniques.
- Basic Data Visualization: Ability to create simple dashboards or reports using visualization tools (e.g., Tableau, Power BI, or similar).
- Query Optimization: Expertise in writing efficient, optimized queries to handle large datasets.
- Testing and Documentation: Experience validating data accuracy and integrity through rigorous testing; ability to document data workflows, processes, and technical specifications clearly.

Key Responsibilities:
- Data Engineering Tasks: Design, develop, and implement scalable data pipelines to support business needs. Ensure data quality and integrity through testing and monitoring. Optimize ETL processes for performance and reliability.
- Database Management: Manage and maintain databases, ensuring high availability and security. Troubleshoot database-related issues and optimize performance.
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to understand and deliver on data requirements. Provide support for data-related technical issues and propose solutions.
- Documentation and Reporting: Create and maintain comprehensive documentation for data workflows and technical processes. Develop simple reports or dashboards to visualize key metrics and trends.
- Learning and Adapting: Stay updated on new tools, technologies, and methodologies in data engineering. Adapt quickly to new challenges and project requirements.

Additional Requirements:
- Strong communication skills, both written and verbal.
- Analytical mindset with the ability to solve complex data problems.
- Quick learner, willing to adopt new tools and technologies as needed.
- Flexibility to work in shifts, if required.

Preferred Skills (Not Mandatory):
- Experience with cloud platforms (e.g., AWS, Azure, or GCP).
- Familiarity with big data technologies such as Hadoop or Spark.
- Basic understanding of machine learning concepts and data science workflows.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

New Delhi, Chennai, Bengaluru

Work from Office

We are looking for an experienced Data Engineer with a strong background in data engineering, storage, and cloud technologies. The role involves designing, building, and optimizing scalable data pipelines, ETL/ELT workflows, and data models for efficient analytics and reporting. The ideal candidate must have strong SQL expertise, including complex joins, stored procedures, and certificate-auth-based queries. Experience with NoSQL databases such as Firestore, DynamoDB, or MongoDB is required, along with proficiency in data modeling and warehousing solutions like BigQuery (preferred), Redshift, or Snowflake. The candidate should have hands-on experience working with ETL/ELT pipelines using Airflow, dbt, Kafka, or Spark. Proficiency in scripting languages such as PySpark, Python, or Scala is essential. Strong hands-on experience with Google Cloud Platform (GCP) is a must. Additionally, experience with visualization tools such as Google Looker Studio, LookerML, Power BI, or Tableau is preferred. Good-to-have skills include exposure to Master Data Management (MDM) systems and an interest in Web3 data and blockchain analytics.
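As a hedged sketch of the SQL-plus-BigQuery combination this posting asks for, the snippet below runs a 30-day join-and-aggregate through the google-cloud-bigquery client. Project, dataset, and table names are hypothetical, and credentials are assumed to come from the environment.

```python
# Sketch of an analytical join + aggregate via google-cloud-bigquery.
# Project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical

QUERY = """
    SELECT u.country, COUNT(*) AS orders, SUM(o.amount) AS revenue
    FROM `my-analytics-project.sales.orders` AS o
    JOIN `my-analytics-project.sales.users` AS u
      ON u.user_id = o.user_id
    WHERE o.created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY u.country
    ORDER BY revenue DESC
"""

# result() blocks until the query job finishes, then yields rows.
for row in client.query(QUERY).result():
    print(row["country"], row["orders"], row["revenue"])
```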

Posted 2 weeks ago

Apply

5.0 - 6.0 years

8 - 14 Lacs

Hyderabad

Work from Office

Responsibilities:
- Architect and optimize distributed data processing pipelines leveraging PySpark for high-throughput, low-latency workloads.
- Utilize the Apache big data stack (Hadoop, Hive, HDFS) to orchestrate ingestion, transformation, and governance of massive datasets.
- Engineer fault-tolerant, production-grade ETL frameworks ensuring seamless scalability and system resilience.
- Interface cross-functionally with Data Scientists and domain experts to translate analytical needs into performant data solutions.
- Enforce rigorous data quality controls and lineage mechanisms to uphold auditability and regulatory compliance (see the sketch after this posting).
- Contribute to core architectural design, implement clean and modular Python/Java code, and drive performance benchmarking at scale.

Required Skills:
- 5-7 years of experience.
- Strong hands-on experience with PySpark for distributed data processing.
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
- Solid grasp of data warehousing, ETL principles, and data modeling.
- Experience working with large-scale datasets and performance optimization.
- Familiarity with SQL and NoSQL databases.
- Proficiency in Python and basic to intermediate knowledge of Java.
- Experience using version control tools like Git and CI/CD pipelines.

Nice-to-Have Skills:
- Working experience with Apache NiFi for data flow orchestration.
- Experience building real-time streaming data pipelines.
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.

If you are interested in the above role and responsibilities, please share your updated resume along with the following details:
- Name as per Aadhaar card
- Mobile number
- Alternative mobile
- Mail ID
- Alternative mail ID
- Date of birth
- Total experience
- Relevant experience
- Current CTC
- Expected CTC
- Notice period (LWD)
- Updated resume
- Holding offer (if any)
- Interview availability
- PF/UAN number
- Any career/education gap
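To illustrate the data-quality controls this posting emphasizes, here is a minimal PySpark sketch that gates publication of a dataset on null-key and duplicate-id checks. Paths, column names, and rules are hypothetical.

```python
# Sketch of simple data-quality gates on a PySpark DataFrame.
# Paths, columns, and rules are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.parquet("/data/staging/transactions")  # hypothetical path

total = df.count()

# Rule 1: key columns must never be null.
null_keys = df.filter(
    F.col("txn_id").isNull() | F.col("account_id").isNull()
).count()

# Rule 2: txn_id must be unique across the dataset.
dupes = total - df.select("txn_id").distinct().count()

if null_keys > 0 or dupes > 0:
    raise ValueError(f"DQ failure: {null_keys} null keys, {dupes} duplicates")

# Checks passed; publish to the curated zone.
df.write.mode("overwrite").parquet("/data/curated/transactions")
```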

Posted 2 weeks ago

Apply

10.0 - 15.0 years

35 - 40 Lacs

Noida

Work from Office

Description:
We are seeking a seasoned Manager - Data Engineering with strong experience in Databricks or the Apache data stack to lead complex data platform implementations. You will be responsible for leading high-impact data engineering engagements for global clients, delivering scalable solutions, and driving digital transformation.

Required Skills & Experience:
- 12-18 years of total experience in data engineering, including 3-5 years in a leadership/managerial role.
- Hands-on experience in Databricks or the core Apache stack: Spark, Kafka, Hive, Airflow, NiFi, etc.
- Expertise in one or more cloud platforms (AWS, Azure, or GCP), ideally with Databricks on cloud.
- Strong programming skills in Python, Scala, and SQL.
- Experience building scalable data architectures, delta lakehouses, and distributed data processing.
- Familiarity with modern data governance, cataloging, and data observability tools.
- Proven experience managing delivery in an onshore-offshore or hybrid model.
- Strong communication, stakeholder management, and team mentoring capabilities.

Key Responsibilities:
- Lead the architecture, development, and deployment of modern data platforms using Databricks, Apache Spark, Kafka, Delta Lake, and other big data tools.
- Design and implement data pipelines (batch and real-time), data lakehouses, and large-scale ETL frameworks.
- Own delivery accountability for data engineering programs across BFSI, telecom, healthcare, or manufacturing clients.
- Collaborate with global stakeholders, product owners, architects, and business teams to understand requirements and deliver data-driven outcomes.
- Ensure best practices in DevOps, CI/CD, infrastructure-as-code, data security, and governance.
- Manage and mentor a team of 10-25 engineers, conducting performance reviews, capability building, and coaching.
- Support presales activities, including solutioning, technical proposals, and client workshops.

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities.
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can have coffee or tea with colleagues over a game of table tennis, plus discounts at popular stores and restaurants.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Do
1. Be instrumental in understanding the requirements and design of the product/software
- Develop software solutions by studying information needs, systems flow, data usage, and work processes.
- Investigate problem areas throughout the software development life cycle.
- Facilitate root cause analysis of system issues and problem statements.
- Identify ideas to improve system performance and availability.
- Analyze client requirements and convert them into feasible designs.
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements.
- Confer with project managers to obtain information on software capabilities.

2. Perform coding and ensure optimal software/module development
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software.
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them.
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
- Analyze information to recommend and plan the installation of new systems or modifications of an existing system.
- Ensure that code is error free and has no bugs or test failures.
- Prepare reports on programming project specifications, activities, and status.
- Ensure all issues are raised per the norms defined for the project/program/account, with clear descriptions and replication patterns.
- Compile timely, comprehensive, and accurate documentation and reports as requested.
- Coordinate with the team on daily project status and progress, and document it.
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to concerned stakeholders.

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution
- Capture all requirements and clarifications from the client for better-quality work.
- Take feedback regularly to ensure smooth and on-time delivery.
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members.
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements.
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
- Document necessary details and reports formally, for proper understanding of the software from client proposal to implementation.
- Ensure good quality of interaction with the customer (e-mail content, fault report tracking, voice calls, business etiquette, etc.).
- Respond to customer requests in a timely manner, with no complaints either internally or externally.

Deliver
No. | Performance Parameter | Measure
1 | Continuous integration, deployment, and monitoring of software | 100% error-free onboarding and implementation, throughput %, adherence to the schedule/release plan
2 | Quality & CSAT | On-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3 | MIS & Reporting | 100% on-time MIS and report generation

Mandatory Skills: Data Engineering Full Stack.
Experience: 3-5 years.

Posted 2 weeks ago

Apply

10.0 - 12.0 years

25 - 30 Lacs

Noida, Hyderabad

Work from Office

We’re hiring an Azure Data Architect with 10+ years of experience in designing end-to-end data solutions using ADF, Synapse, Databricks, Data Lake, and Python/SQL.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Role: SSAS Data Engineer

We are looking for an experienced SSAS Data Engineer with strong expertise in SSAS (Tabular and/or Multidimensional models), SQL, MDX/DAX, and data modeling. The ideal candidate will have a solid background in designing and developing BI solutions, working with large datasets, and building scalable SSAS cubes for reporting and analytics. Experience with ETL processes and reporting tools like Power BI is a strong plus.

Key Responsibilities
- Design, develop, and maintain SSAS models (Tabular and/or Multidimensional).
- Build and optimize MDX or DAX queries for advanced reporting needs.
- Create and manage data models (star/snowflake schemas) supporting business KPIs.
- Develop and maintain ETL pipelines for efficient data ingestion (preferably using SSIS or similar tools).
- Implement KPIs, aggregations, partitioning, and performance tuning in SSAS cubes.
- Collaborate with data analysts, business stakeholders, and Power BI teams to deliver accurate and insightful reporting solutions.
- Maintain data quality and consistency across data sources and reporting layers.
- Implement RLS/OLS and manage report security and governance in SSAS and Power BI.

Required Skills

Primary:
- SSAS Tabular & Multidimensional.
- SQL Server (advanced SQL, views, joins, indexes).
- DAX & MDX.
- Data modeling & OLAP concepts.

Secondary:
- ETL tools (SSIS or equivalent).
- Power BI or similar BI/reporting tools.
- Performance tuning & troubleshooting in SSAS and SQL.
- Version control (TFS/Git), deployment best practices.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries as per the SLAs defined in the contract.
- Develop an understanding of the process/product so team members can interact with clients and troubleshoot more effectively.
- Document and analyze call logs to spot the most frequent trends and prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after call/email requests.
- Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements.
- If unable to resolve an issue, escalate it to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target, and inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes, and updates.
- Enroll in product-specific and other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, technical test performance

Mandatory Skills: DataBricks - Data Engineering.
Experience: 5-8 years.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
