6.0 - 11.0 years
15 - 30 Lacs
Noida, Greater Noida, Delhi / NCR
Hybrid
Role & responsibilities: Senior Data Engineer (C2H position). Looking for 6+ years of experience with Hadoop, Spark, SQL, Databricks, and Python scripting.
Posted 1 month ago
2.0 - 5.0 years
8 - 12 Lacs
Chennai
Work from Office
OneMagnify is looking for a Databricks AI Engineer to join our dynamic team and embark on a rewarding career journey.
- Build scalable AI models using Databricks and ML frameworks
- Optimize data pipelines for model training and deployment
- Collaborate with data scientists and engineering teams
- Ensure reliability and performance of AI systems
Posted 1 month ago
2.0 - 5.0 years
3 - 7 Lacs
Chennai
Work from Office
OneMagnify is looking for a Databricks Engineer to join our dynamic team and embark on a rewarding career journey.
- Develop and optimize big data solutions using Databricks
- Implement ETL workflows and manage Spark environments
- Ensure performance tuning and security compliance
- Collaborate with analytics and data science teams
Posted 1 month ago
3.0 - 7.0 years
5 - 9 Lacs
Hyderabad
Work from Office
We are looking for an experienced Azure Data Engineer with 2+ years of hands-on experience in Azure Data Lake and Azure Data Factory. The ideal candidate will have a strong background in connecting data sources to the Data Lake, writing PySpark SQL code, and building SSIS packages. Additionally, experience in data architecture, data modeling, and creating visualizations is essential.
Key Responsibilities:
- Work with Azure Data Lake and Azure Data Factory to design, implement, and manage data pipelines.
- Connect various data sources (applications, databases, etc.) to the Azure Data Lake for storage and processing.
- Write PySpark SQL code and SSIS packages for data retrieval and transformation from different data sources (see the illustrative sketch below).
- Design and develop efficient data architecture and data modeling solutions to support business requirements.
- Create data visualizations to communicate insights to stakeholders and decision-makers.
- Optimize data workflows and pipelines for better performance and scalability.
- Collaborate with cross-functional teams to ensure seamless data integration and delivery.
- Ensure data integrity, security, and compliance with best practices.
Skills and Qualifications:
- 2+ years of experience working with Azure Data Lake, Azure Data Factory, and related Azure services.
- Proficiency in writing PySpark SQL code for data extraction and transformation.
- Experience in developing SSIS packages for data integration and automation.
- Strong understanding of data architecture and data modeling concepts.
- Experience in creating effective and insightful data visualizations using tools like Power BI or similar.
- Familiarity with cloud-based storage and computing concepts and best practices.
- Strong problem-solving skills with an ability to troubleshoot and optimize data workflows.
- Ability to collaborate effectively in a team environment and communicate with stakeholders.
Preferred Qualifications:
- Certifications in Azure (e.g., Azure Data Engineer or similar) would be a plus.
- Experience with other Azure tools like Azure Synapse, Databricks, etc.
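As context for the PySpark SQL work described above, here is a minimal, illustrative sketch of an extract-transform-load step: source data is read over JDBC, transformed with Spark SQL via a temporary view, and written to a data lake path. All names and paths (the JDBC URL, table, and ADLS locations) are hypothetical placeholders, not details from this posting.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-etl-sketch").getOrCreate()

# Read a hypothetical source table over JDBC (connection details are placeholders).
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://example-host:1433;database=sales")
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")
          .option("password", "<from-key-vault>")
          .load())

# Transform with Spark SQL via a temporary view.
orders.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date, region, SUM(amount) AS total_amount
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date, region
""")

# Land the curated result in the data lake as Parquet, partitioned by date.
(daily.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("abfss://curated@examplelake.dfs.core.windows.net/sales/daily_orders"))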
Posted 1 month ago
14.0 - 24.0 years
35 - 55 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
About the role: We are seeking a Sr. Practice Manager. At Insight, you will be involved in different phases of the Software Development Lifecycle, including analysis, design, development, and deployment. We will count on you to be proficient in software design and development, data modeling, data processing, and data visualization.
Along the way, you will get to:
- Help customers leverage existing data resources and implement new technologies and tooling to enable data science and data analytics.
- Track the performance of our resources and related capabilities.
- Mentor and manage other data engineers and ensure data engineering best practices are being followed.
- Constantly evolve and scale our capabilities along with the growth of the business and the needs of our customers.
Be ambitious: This opportunity is not just about what you do today but also about where you can go tomorrow. As a Practice Manager, you are positioned for swift advancement within our organization through a structured career path. When you bring your hunger, heart, and harmony to Insight, your potential will be met with continuous opportunities to upskill, earn promotions, and elevate your career.
What we're looking for: a Sr. Practice Manager with:
- 14+ years of total relevant experience, with at least 5-6 years in people management, managing teams of 20+.
- Minimum 12 years of experience in data technology.
- Experience in data warehousing and excellent command of SQL, data modeling, and ETL development.
- Hands-on experience in SQL Server and Microsoft Azure (Data Factory, Data Lake, Databricks).
- Experience in MSBI (SSRS, SSIS, SSAS), writing queries and stored procedures. (Good to have)
- Experience using Power BI, MDX, DAX, MDS, DQS. (Good to have)
- Experience developing designs related to predictive analytics models.
- Ability to handle performance improvement tasks and data archiving.
- Proficiency in provisioning the relevant Azure resources, forecasting hardware usage, and managing to a budget.
Posted 1 month ago
3.0 - 7.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Ability to take full ownership of and deliver a component or functionality. Support the team to deliver project features with high quality and provide technical guidance. Work effectively both individually and with team members toward customer satisfaction and success.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: SQL, ADF, Azure Databricks
Preferred technical and professional experience: PostgreSQL, MSSQL, Eureka, Hystrix, Zuul/API gateway, in-memory storage
Posted 1 month ago
2.0 - 7.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Job Title: Decision Science Practitioner Analyst, S&C GN
Management Level: Analyst
Location: Bangalore/Kolkata/Hyderabad
Must-have skills: Data engineering with Python or PySpark
Good-to-have skills: Gen AI
Job Summary: We are seeking a highly skilled and motivated Data Science Analyst to lead innovative projects and drive impactful solutions in domains such as Consumer Tech, Enterprise Tech, and Semiconductors. This role combines designing, building, and maintaining scalable data pipelines and infrastructure with client delivery management to execute cutting-edge projects in data science and data engineering.
Key Responsibilities
Generative AI expertise:
- Develop and fine-tune models for NLP, computer vision, and multimodal applications, leveraging GenAI frameworks.
- Design and implement evaluation strategies to optimize model performance (e.g., BLEU, ROUGE, FID); see the illustrative sketch below.
- Architect deployment solutions, including API development and seamless integration with existing systems.
Data science and engineering:
- Design, build, and maintain robust, scalable, and efficient data pipelines (ETL/ELT).
- Work with structured and unstructured data across a wide variety of data sources.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Optimize data systems and architecture for performance, scalability, and reliability.
- Monitor data quality and support initiatives to ensure clean, accurate, and consistent data.
- Develop and maintain data models and metadata.
- Implement and maintain best practices in data governance, security, and compliance.
Required Qualifications
- Experience: 2+ years in data engineering and/or data science.
- Education: B.Tech or M.Tech in Computer Science, Statistics, Applied Mathematics, Engineering, or a related field.
Technical Skills
- Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Strong programming skills in Python, Scala, or Java.
- Languages & frameworks: Python, SQL, scikit-learn, TensorFlow or PyTorch.
- Data tools: Pandas, NumPy, Matplotlib, Seaborn.
- Orchestration: dbt, Apache Airflow.
- GenAI & LLM tooling: LangChain, LlamaIndex, Hugging Face Transformers, vector databases (e.g., FAISS, Pinecone).
- Good knowledge of MLOps best practices and processes.
- Experience with big data technologies such as Spark or Hive.
- Familiarity with cloud platforms like AWS, Azure, or GCP, especially services like S3, Redshift, BigQuery, or Azure Data Lake.
- Experience with orchestration tools like Airflow, Luigi, or similar.
- Solid understanding of data warehousing concepts and data modeling techniques.
- Good problem-solving skills and attention to detail.
Preferred Skills
- Experience with modern data stack tools like dbt, Snowflake, or Databricks.
- Knowledge of CI/CD pipelines and version control (e.g., Git).
- Exposure to containerization (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation).
Additional Information
- The ideal candidate will possess a strong educational background in a quantitative discipline and experience working with Hi-Tech clients.
- This position is based at our Bengaluru (preferred), Kolkata, and Hyderabad offices.
About Our Company | Accenture
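To make the evaluation-strategy point above concrete, here is a minimal, hedged sketch of scoring generated text against a reference with BLEU and ROUGE, using the nltk and rouge-score packages as one common tooling choice. The packages and the example strings are illustrative assumptions, not tools mandated by this posting.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "the model deployment completed without errors"
candidate = "the deployment of the model completed with no errors"

# BLEU expects token lists; smoothing avoids zero scores on short texts.
bleu = sentence_bleu(
    [reference.split()], candidate.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE-1 / ROUGE-L F-measures on the raw strings.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)

print(f"BLEU: {bleu:.3f}")
print(f"ROUGE-1 F1: {rouge['rouge1'].fmeasure:.3f}")
print(f"ROUGE-L F1: {rouge['rougeL'].fmeasure:.3f}")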
Posted 1 month ago
2.0 - 7.0 years
9 - 13 Lacs
Chennai
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. You will collaborate with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Your typical day will involve working on the data platform blueprint and design, collaborating with architects, and ensuring seamless integration between systems and data models.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist with the data platform blueprint and design.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Work on data platform components.
Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have skills: Experience with data integration tools.
- Strong understanding of data platform components and architecture.
- Experience in designing and implementing data models.
- Knowledge of data integration best practices.
- Familiarity with data governance and security.
- Hands-on experience with data platform implementation.
- Ability to troubleshoot and resolve data platform issues.
Additional Information:
- The candidate should have a minimum of 2 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
Posted 1 month ago
8.0 - 13.0 years
4 - 8 Lacs
Pune
Work from Office
Project Role: Software Configuration Engineer
Project Role Description: Implement the configuration management plan as directed by the Configuration Lead. Assist in the design of software configuration and customization to meet the business process design and application requirements.
Must-have skills: Spring Boot
Good-to-have skills: Java
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education
Summary: As a Software Configuration Engineer, you will be responsible for implementing the configuration management plan as directed by the Configuration Lead. You will assist in the design of software configuration and customization to meet the business process design and application requirements. Your day will involve collaborating with the team to ensure smooth configuration processes and customization.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Assist in the design of software configuration and customization.
- Excellent presentation and communication skills.
Professional & Technical Skills:
- Must-have skills: Proficiency in Spring Boot and Java.
- 5-12 years of experience working in Java Spring Boot, implementing projects with high SLAs for data availability and data quality; exposure to cloud technologies (Azure preferred) is a major plus.
- 8+ years of strong delivery experience in backend development: Java Spring Boot, J2EE, REST. Experience using DevOps toolsets like GitLab and Jenkins, and TDD/BDD tools like PyTest and Cucumber.
- Hands-on experience with build tools like Maven/Gradle. Experience working with Kubernetes/OpenShift, containerization (Docker, Podman, or similar), and cloud-native technologies and frameworks (e.g., Spring Boot).
- Hands-on experience with PostgreSQL or similar (RDBMS concepts).
- Experience working on any cloud platform, preferably Azure development including Databricks, Azure services, ADLS, etc.
- While not necessary, experience working with Kafka and Elasticsearch will be a plus.
- A real passion for and experience of Agile working practices, with a strong desire to work with baked-in quality subject areas such as TDD, BDD, test automation, and DevOps principles.
Additional Information:
- The candidate should have a minimum of 8 years of experience in Spring Boot.
- This position is based at our client's office in Pune (Kharadi). Looking for candidates who are willing to work from the client office 3 days a week.
- 15 years of full-time education is required.
Posted 1 month ago
2.0 - 5.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Microsoft Azure Data Services, Microsoft Azure Analytics Services
Minimum 12 year(s) of experience is required.
Educational Qualification: Full-time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with the team to develop and implement solutions that align with the organization's goals and objectives. You will utilize your expertise in Databricks Unified Data Analytics Platform to create efficient and effective applications that enhance business processes and drive innovation.
Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Collaborate with stakeholders to gather requirements and understand business needs.
- Design and develop applications using Databricks Unified Data Analytics Platform.
- Configure and customize applications to meet specific business process requirements.
- Perform code reviews and ensure adherence to coding standards.
- Provide technical guidance and mentorship to junior team members.
Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have skills: Experience with Microsoft Azure Data Services and Microsoft Azure Analytics Services.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information:
- The candidate should have a minimum of 12 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- Full-time education is required.
Posted 1 month ago
4.0 - 6.0 years
1 - 5 Lacs
Gurugram, Bengaluru
Hybrid
Role & responsibilities:
Data analysis & insights:
- Analyze and transform large datasets to identify actionable insights into customer experience touchpoints.
- Use statistical techniques and tools to improve success metrics and answer business questions (see the illustrative sketch below).
Reporting & visualization:
- Create dashboards, reports, and visualizations to communicate analytical findings effectively.
- Present insights to senior leaders using data storytelling and actionable recommendations.
Consultation & collaboration.
Preferred candidate profile: 4-6 years of experience in analytics, statistics, or informatics.
Technical skills:
- Proficiency in SQL, Python, and data visualization tools (e.g., Power BI).
- Experience with Databricks and handling large datasets.
- Bonus: Familiarity with Azure Analysis Services, NLP, or machine learning techniques.
Behavioral skills: Strong analytical mindset with excellent problem-solving and storytelling abilities.
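As a flavor of the analysis work above, here is a small, illustrative pandas sketch that aggregates customer-experience touchpoint data into a monthly success metric suitable for a dashboard. The file name, columns, and metric definition are hypothetical placeholders.

import pandas as pd

# Hypothetical interaction-level data: one row per customer touchpoint.
df = pd.read_csv("cx_touchpoints.csv", parse_dates=["event_date"])

# Example success metric: share of interactions resolved on first contact,
# tracked monthly per channel.
summary = (
    df.assign(month=df["event_date"].dt.to_period("M"))
      .groupby(["month", "channel"])
      .agg(interactions=("interaction_id", "count"),
           fcr_rate=("resolved_first_contact", "mean"))
      .reset_index()
)

print(summary.head())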
Posted 1 month ago
12.0 - 20.0 years
22 - 37 Lacs
Bengaluru
Hybrid
12+ years of experience in Data Architecture. Strong in Azure Data Services and Databricks, including Delta Lake and Unity Catalog. Experience in Azure Synapse, Purview, ADF, dbt, Apache Spark, DWH, data lakes, NoSQL, and OLTP. Notice period: Immediate. Contact: sachin@assertivebs.com
Posted 1 month ago
5.0 - 10.0 years
9 - 19 Lacs
Bengaluru
Remote
5+ years of experience with Python, PySpark, SQL, and Spark SQL
Posted 1 month ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Location: Bangalore, Hyderabad, Chennai
Notice Period: Immediate to 20 days
Experience: 5+ years
Relevant Experience: 5+ years
Skills: Data Engineer, Azure, Python, Pandas, SQL, PySpark, Databricks, data pipelines, Synapse
Posted 1 month ago
3.0 - 8.0 years
20 - 30 Lacs
Chennai
Hybrid
Job Title: Senior Data Engineer, Data Products
Location: Chennai, India
Open Roles: 2
Mode: Hybrid
About the Role: Are you a hands-on data engineer who thrives on solving complex data challenges and building modern cloud-native solutions? We're looking for two experienced Senior Data Engineers to join our growing Data Engineering team. This is an exciting opportunity to work on cutting-edge data platform initiatives that power advanced analytics, AI solutions, and digital transformation across a global enterprise. In this role, you'll help design and build reusable, scalable, and secure data pipelines on a multi-cloud infrastructure, while collaborating with cross-functional teams in a highly agile environment.
What You'll Do:
- Design and build robust data pipelines and ETL frameworks using modern tools and cloud platforms.
- Implement lakehouse architecture (Bronze/Silver/Gold layers) and support data product publishing via Unity Catalog (see the sketch after this posting).
- Work with structured and unstructured enterprise data, including ERP, CRM, and product data systems.
- Optimize pipeline performance, reliability, and security across AWS and Azure environments.
- Automate infrastructure using IaC tools like Terraform and AWS CDK.
- Collaborate closely with data scientists, analysts, and platform teams to deliver actionable data products.
- Participate in agile ceremonies, conduct code reviews, and contribute to team knowledge sharing.
- Ensure compliance with data privacy, cybersecurity, and governance policies.
What You Bring:
- 3+ years of hands-on experience in data engineering roles.
- Strong command of SQL and Python; experience with Scala is a plus.
- Proficiency in cloud platforms (AWS, Azure), Databricks, dbt, Airflow, and version control tools like GitLab.
- Hands-on experience implementing lakehouse architectures and multi-hop data flows using Delta Lake.
- Background in working with enterprise data systems like SAP, Salesforce, and other business-critical platforms.
- Familiarity with DevOps, DataOps, and agile delivery methods (Jira, Confluence).
- Strong understanding of data security, privacy compliance, and production-grade pipeline management.
- Excellent communication skills and ability to work in global, multicultural teams.
Why Join Us?
- Opportunity to work with modern data technologies in a complex, enterprise-scale environment.
- Be part of a collaborative, forward-thinking team that values innovation and continuous learning.
- Hybrid work model that offers both flexibility and team engagement.
- A role where you can make a real impact by contributing to digital transformation and data-driven decision-making.
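For readers unfamiliar with the Bronze/Silver/Gold pattern mentioned above, here is a minimal, illustrative PySpark sketch of one multi-hop step: raw Bronze records are cleaned and deduplicated into a Silver Delta table. The paths and column names are hypothetical, and the snippet assumes a Databricks-style environment with Delta Lake available.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver-sketch").getOrCreate()

# Bronze: raw ingested events, possibly with duplicates and bad records.
bronze = spark.read.format("delta").load("/lakehouse/bronze/orders")

# Silver: validated, deduplicated, conformed records.
silver = (bronze
          .filter(F.col("order_id").isNotNull())
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .dropDuplicates(["order_id"]))

(silver.write.format("delta")
       .mode("overwrite")
       .save("/lakehouse/silver/orders"))

# A Gold step would typically aggregate Silver into business-level marts,
# e.g. daily revenue per region, published via Unity Catalog.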
Posted 1 month ago
4.0 - 7.0 years
13 - 17 Lacs
Pune
Hybrid
Role: Performance Testing Specialist, Databricks Pipelines
Job Seniority: Advanced (4-6 years) or Experienced (3-4 years)
Location: Magarpatta City, Pune
Unit: Amdocs Data and Intelligence
Mandatory skills (all must be reflected in the resume's roles and responsibilities):
- Strong understanding of Databricks, Apache Spark, and performance tuning techniques for distributed data processing systems.
- Hands-on experience in Spark (PySpark/Scala) performance profiling, partitioning strategies, and job parallelization (see the illustrative sketch below).
- 2+ years of experience in performance testing and load simulation of data pipelines.
- Solid skills in SQL and Snowflake, and in analyzing performance via query plans and optimization hints.
- Familiarity with Azure Databricks, Azure Monitor, Log Analytics, or similar observability tools.
- Proficient in scripting (Python/Shell) for test automation and pipeline instrumentation.
- Experience with DevOps tools such as Azure DevOps, GitHub Actions, or Jenkins for automated testing.
- Comfortable working in Unix/Linux environments and writing shell scripts for monitoring and debugging.
- Excellent communication skills.
Notice Period: Only candidates serving notice period who can join in June (15 days to immediate).
This is a C2H role. Interested candidates, share your resume at dipti.bhaisare@in.experis.com
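As an illustration of the profiling and partitioning work listed above, here is a small, hedged PySpark sketch that inspects a job's partitioning and physical plan, adjusts shuffle parallelism, and times an action. The table name and partition counts are placeholder assumptions for demonstration.

import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-perf-sketch").getOrCreate()

df = spark.read.table("sales.transactions")  # hypothetical table

# Inspect partitioning and the physical plan before tuning.
print("input partitions:", df.rdd.getNumPartitions())
df.groupBy("region").count().explain()

# Tune shuffle parallelism and repartition on the aggregation key.
spark.conf.set("spark.sql.shuffle.partitions", "200")
repartitioned = df.repartition(200, "region")

# Time the action; repeated runs under different settings form a simple
# load/performance test harness.
start = time.perf_counter()
repartitioned.groupBy("region").count().collect()
print(f"elapsed: {time.perf_counter() - start:.2f}s")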
Posted 1 month ago
2.0 - 4.0 years
12 - 15 Lacs
Navi Mumbai
Work from Office
Define, design, develop, and test software components/applications using Microsoft Azure (Databricks, Data Factory, Data Lake Storage, Logic Apps, Azure Key Vault, ADLS). Strong SQL skills; structured and unstructured datasets; data modeling.
Required candidate profile. Must have:
- Databricks, Python, SQL, PySpark
- Big data ecosystem, Spark ecosystem
- Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse)
- AWS
- Data modeling, ETL methodology
Posted 1 month ago
5.0 - 10.0 years
5 - 15 Lacs
Chennai
Work from Office
About the Role: We are seeking a highly skilled Senior Azure Data Solutions Architect to design and implement scalable, secure, and efficient data solutions supporting enterprise-wide analytics and business intelligence initiatives. You will lead the architecture of modern data platforms, drive cloud migration, and collaborate with cross-functional teams to deliver robust Azure-based solutions.
Key Responsibilities:
- Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB).
- Design robust and scalable data models, including relational, dimensional, and NoSQL schemas.
- Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg.
- Integrate data governance, quality, and security best practices into all architecture designs.
- Support analytics and machine learning initiatives through structured data pipelines and platforms.
- Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs.
- Drive CI/CD integration with Databricks using Azure DevOps and tools like dbt.
- Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability.
- Stay current with Azure platform advancements and recommend improvements.
Required Skills & Experience:
- Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse.
- Expertise in data modeling and design (relational, dimensional, NoSQL).
- Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures.
- Proficiency in Python, SQL, Scala, and/or Java.
- Strong knowledge of data governance, security, and compliance frameworks.
- Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates).
- Familiarity with BI and analytics tools such as Power BI or Tableau.
- Excellent communication, collaboration, and stakeholder management skills.
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
Preferred Qualifications:
- Experience in regulated industries (finance, healthcare, etc.).
- Familiarity with data cataloging, metadata management, and machine learning integration.
- Leadership experience guiding teams and presenting architectural strategies to leadership.
Why Join Us?
- Work on cutting-edge cloud data platforms in a collaborative, innovative environment.
- Lead strategic data initiatives that impact enterprise-wide decision-making.
- Competitive compensation and opportunities for professional growth.
Posted 1 month ago
8.0 - 12.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Good-to-have skills: cloud, SQL, data analysis.
Location: Pune (Kharadi), work from office 3 days/week.
Job Description: We are seeking a highly skilled and experienced Python Lead to join our team. The ideal candidate will have strong expertise in Python coding and development, along with good-to-have skills in cloud technologies, SQL, and data analysis.
Key Responsibilities:
- Lead the development of high-quality, scalable, and robust Python applications.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Develop RESTful applications using frameworks like Flask, Django, or FastAPI (see the illustrative sketch below).
- Utilize Databricks, PySpark SQL, and strong data analysis skills to drive data solutions.
- Implement and manage modern data solutions using Azure Data Factory, Data Lake, and Databricks.
Mandatory Skills:
- Proven experience with cloud platforms (e.g., AWS).
- Strong proficiency in Python, PySpark, and R, and familiarity with additional programming languages such as C++, Rust, or Java.
- Expertise in designing ETL architectures for batch and streaming processes, database technologies (OLTP/OLAP), and SQL.
- Experience with Apache Spark and multi-cloud platforms (AWS, GCP, Azure).
- Knowledge of data governance and GxP data contexts; familiarity with the pharma value chain is a plus.
Good-to-Have Skills:
- Experience with modern data solutions via Azure.
- Knowledge of the principles summarized in the Microsoft Cloud Adoption Framework.
- Additional expertise in SQL and data analysis.
Educational Qualifications: Bachelor's/Master's degree or equivalent with a focus on software engineering.
If you are a passionate Python developer with a knack for cloud technologies and data analysis, we would love to hear from you. Join us in driving innovation and building cutting-edge solutions!
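As a small illustration of the RESTful work mentioned above, here is a hedged FastAPI sketch exposing two endpoints; FastAPI is one of the frameworks the posting names, and the routes, model, and in-memory store are hypothetical.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-python-lead-api")

class Order(BaseModel):
    order_id: str
    amount: float

# In-memory store purely for demonstration.
ORDERS: dict[str, Order] = {}

@app.post("/orders")
def create_order(order: Order) -> dict:
    ORDERS[order.order_id] = order
    return {"status": "created", "order_id": order.order_id}

@app.get("/orders/{order_id}")
def get_order(order_id: str) -> Order:
    return ORDERS[order_id]

# Run locally with: uvicorn main:app --reload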
Posted 1 month ago
5.0 - 7.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer / Technical Lead
Location: Bangalore
Employment Type: Full-time
Role Summary: We are seeking a highly skilled and motivated Senior Data Engineer/Technical Lead to take ownership of the end-to-end delivery of a key project involving data lake transitions, data warehouse maintenance, and enhancement initiatives. The ideal candidate will bring strong technical leadership, excellent communication skills, and hands-on expertise with modern data engineering tools and platforms. Experience with Databricks and JIRA is highly desirable. Knowledge of the supply chain and finance domains is a plus, or a willingness to quickly ramp up in these areas is expected.
Key Responsibilities
Delivery management:
- Lead and manage data lake transition initiatives under the Gold framework.
- Oversee delivery of enhancements and defect fixes related to the enterprise data warehouse.
Technical leadership:
- Design and develop efficient, scalable data pipelines using Python, PySpark, and SQL.
- Ensure adherence to coding standards, performance benchmarks, and data quality goals.
- Conduct performance tuning and infrastructure optimization for data solutions.
- Provide code reviews, mentorship, and technical guidance to the engineering team.
Collaboration & stakeholder engagement:
- Collaborate with business stakeholders (particularly the Laboratory Products team) to gather, interpret, and refine requirements.
- Communicate technical solutions and project progress clearly to both technical and non-technical audiences.
Tooling and technology use:
- Leverage tools such as Databricks, Informatica, AWS Glue, Google DataProc, and Airflow for ETL and data integration.
- Use JIRA to manage project workflows, track defects, and report progress.
Documentation and best practices:
- Create and review documentation including architecture, design, testing, and deployment artifacts.
- Define and promote reusable templates, checklists, and best practices for data engineering tasks.
Domain adaptation:
- Apply or gain knowledge in the supply chain and finance domains to enhance project outcomes and align with business needs.
Skills and Qualifications
Technical proficiency:
- Strong hands-on experience in Python, PySpark, and SQL.
- Expertise with ETL tools such as Informatica, AWS Glue, Databricks, and Google Cloud DataProc.
- Deep understanding of data warehousing solutions (e.g., Snowflake, BigQuery, Delta Lake, lakehouse architectures).
- Familiarity with performance tuning, cost optimization, and data modeling best practices.
Platform & tools:
- Proficient in working with cloud platforms like AWS, Azure, or Google Cloud.
- Experience in version control and configuration management practices.
- Working knowledge of JIRA and Agile methodologies.
Certifications (preferred but not required):
- Certifications in cloud technologies, ETL platforms, or a relevant domain (e.g., AWS Data Engineer, Databricks Data Engineer, supply chain certification).
Expected Outcomes:
- Timely and high-quality delivery of data engineering solutions.
- Reduction in production defects and improved pipeline performance.
- Increased team efficiency through reuse of components and automation.
- Positive stakeholder feedback and high team engagement.
- Consistent adherence to SLAs, security policies, and compliance guidelines.
Performance Metrics:
- Adherence to project timelines and engineering standards.
- Reduction in post-release defects and production issues.
- Improvement in data pipeline efficiency and resource utilization.
- Resolution time for pipeline failures and data issues.
- Completion of required certifications and training.
Preferred Background:
- Background or exposure to the supply chain or finance domains.
- Willingness to work during morning US East hours.
- Ability to work independently and drive initiatives with minimal oversight.
Required Skills: Databricks, Data Warehousing, ETL, SQL
Posted 1 month ago
5.0 - 7.0 years
7 - 9 Lacs
Pune
Work from Office
New Opportunity: Full Stack Engineer
Location: Pune (Onsite)
Company: Apptware Solutions (hiring)
Experience: 4+ years
We're looking for a skilled Full Stack Engineer to join our team. If you have experience in building scalable applications and working with modern technologies, this role is for you.
Role & Responsibilities:
- Develop product features to help customers easily transform data.
- Design, implement, deploy, and support client-side and server-side architectures, including web applications, CLI, and SDKs.
Minimum Requirements:
- 4+ years of experience as a Full Stack Developer or similar role.
- Hands-on experience in a distributed engineering role with direct operational responsibility (on-call experience preferred).
- Proficiency in at least one back-end language (Node.js, TypeScript, Python, or Go).
- Front-end development experience with Angular or React, HTML, CSS.
- Strong understanding of web applications, backend APIs, CI/CD pipelines, and testing frameworks.
- Familiarity with NoSQL databases (e.g., DynamoDB) and AWS services (Lambda, API Gateway, Cognito, etc.).
- Bachelor's degree in Computer Science, Engineering, Math, or equivalent experience.
- Strong written and verbal communication skills.
Preferred Skills:
- Experience with AWS Glue, Spark, or Athena.
- Strong understanding of SQL and data engineering best practices.
- Exposure to analytical EDWs (Snowflake, Databricks, BigQuery, Cloudera, Teradata).
- Experience in B2B applications, SaaS offerings, or startups is a plus.
(ref:hirist.tech)
Posted 1 month ago
2.0 - 5.0 years
3 - 12 Lacs
Kolkata, Pune, Bengaluru
Work from Office
Company Name: Tech Mahindra
Experience: 2-5 years
Location: Bangalore/Hyderabad
Interview Mode: Virtual
Interview Rounds: 2-3 rounds
Notice Period: Immediate to 30 days
Roles and Responsibilities:
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks.
- Collaborate with cross-functional teams to understand business requirements and design scalable solutions for big data processing using PySpark on Azure Data Lake Storage.
- Develop complex SQL queries to optimize database performance and troubleshoot issues in real time.
- Ensure high availability of the system by implementing monitoring tools and performing regular maintenance tasks.
Job Requirements:
- 2-5 years of experience in designing and developing large-scale data systems on the Microsoft Azure platform.
- Strong understanding of Azure Data Factory (ADF), Azure Databricks, and Azure Data Lake Storage concepts.
- Proficiency in writing efficient Python code using PySpark for big data processing.
Posted 1 month ago
5.0 - 7.0 years
5 - 16 Lacs
Hyderabad, Bengaluru
Work from Office
Company Name: Tech Mahindra
Experience: 5-7 years
Location: Bangalore/Hyderabad
Interview Mode: Virtual
Interview Rounds: 2-3 rounds
Notice Period: Immediate to 30 days
Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Data Factory (ADF) to integrate various data sources into a centralized data lake.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business objectives.
- Develop complex SQL queries to extract insights from large datasets stored in Azure Databricks or other relational databases.
- Troubleshoot issues related to ADF pipeline failures, data quality problems, and performance optimization.
Job Requirements:
- 5-7 years of experience in designing and developing large-scale data pipelines using ADF.
- Strong understanding of Azure Databricks, including its architecture, features, and best practices.
- Proficiency in writing complex SQL queries for querying large datasets stored in relational databases.
- Experience working with PySpark on AWS EMR clusters.
Posted 1 month ago
5.0 - 8.0 years
5 - 15 Lacs
Kochi
Work from Office
Job Summary: We are looking for a seasoned Data Engineer with 5-8 years of experience, specializing in Microsoft Fabric. The ideal candidate will play a key role in designing, building, and optimizing scalable data pipelines and models. You will work closely with analytics and business teams to drive data integration, ensure quality, and support data-driven decision-making in a modern cloud environment.
Key Responsibilities:
- Design, develop, and optimize end-to-end data pipelines using Microsoft Fabric (Data Factory, Dataflows Gen2).
- Create and maintain data models, semantic models, and data marts for analytical and reporting purposes (see the illustrative sketch below).
- Develop and manage SQL-based ETL processes, integrating various structured and unstructured data sources.
- Collaborate with BI developers and analysts to develop Power BI datasets, dashboards, and reports.
- Implement robust data integration solutions across diverse platforms and sources (on-premises, cloud).
- Ensure data integrity, quality, and governance through automated validation and error-handling mechanisms.
- Work with business stakeholders to understand data requirements and translate them into technical specifications.
- Optimize data workflows for performance and cost-efficiency in a cloud-first architecture.
- Provide mentorship and technical guidance to junior data engineers.
Required Skills:
- Strong hands-on experience with Microsoft Fabric, including Dataflows Gen2, Pipelines, and OneLake.
- Proficiency in Power BI, including building reports and dashboards and working with semantic models.
- Solid understanding of data modeling techniques: star schema, snowflake, normalization/denormalization.
- Deep experience with SQL, stored procedures, and query optimization.
- Experience in data integration from diverse sources such as APIs, flat files, databases, and streaming data.
- Knowledge of data governance, lineage, and data catalog capabilities within the Microsoft ecosystem.
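To illustrate the kind of incremental ETL step behind the data marts described above, here is a hedged PySpark sketch of an upsert into a dimension table using a Delta MERGE, one common pattern in Spark-based lakehouses (including Fabric notebooks, which run Spark over Delta tables). Table names and keys are hypothetical placeholders.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dim-upsert-sketch").getOrCreate()

# Hypothetical staged changes for a customer dimension.
updates = spark.read.table("staging.customer_changes")

dim = DeltaTable.forName(spark, "gold.dim_customer")

# Upsert: update matching customers, insert new ones.
(dim.alias("d")
    .merge(updates.alias("u"), "d.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())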
Posted 1 month ago
10.0 - 15.0 years
15 - 30 Lacs
Pallavaram
Work from Office
Data Engineering Lead
Company Name: Blackstraw.ai
Office Location: Chennai (work from office)
Job Type: Full-time
Experience: 10-15 years
Candidates who can join immediately will be preferred.
Job Description: As a lead data engineer you will oversee data architecture, ETL processes, and analytics pipelines, ensuring efficiency, scalability, and quality.
Key Responsibilities:
- Work with clients to understand their data; based on that understanding, build the data structures and pipelines.
- Work on the application end to end, collaborating with UI and other development teams.
- Work with various cloud providers such as Azure and AWS.
- Engineer data using the Hadoop/Spark ecosystem.
- Design, build, optimize, and support new and existing data pipelines.
- Orchestrate jobs using tools such as Oozie, Airflow, etc. (see the illustrative sketch below).
- Develop programs for cleaning and processing data.
- Build the data pipelines to migrate and load the data into HDFS, either on-premises or in the cloud.
- Develop data ingestion/processing/integration pipelines effectively.
- Create Hive data structures and metadata, and load the data into data lakes / big data warehouse environments.
- Optimize (performance-tune) data pipelines effectively to minimize cost.
- Keep code versioning under control and the Git repository up to date.
- Explain the data pipeline to internal and external stakeholders.
- Build and maintain CI/CD for the data pipelines.
- Manage the unit testing of all data pipelines.
Tech Stack:
- Minimum of 5+ years of working experience with the Spark and Hadoop ecosystems.
- Minimum of 4+ years of working experience designing data streaming pipelines.
- Expert in Python, Scala, or Java.
- Experience in data ingestion and integration into a data lake using Hadoop-ecosystem tools such as Sqoop, Spark, SQL, Hive, Airflow, etc.
- Experience optimizing (performance-tuning) data pipelines.
- Minimum of 3+ years of experience with NoSQL and Spark Streaming.
- Knowledge of Kubernetes and Docker is a plus.
- Experience with cloud services, either Azure or AWS.
- Experience with on-premises distributions such as Cloudera/Hortonworks/MapR.
- Basic understanding of CI/CD pipelines.
- Basic knowledge of the Linux environment and commands.
Preferred Qualifications:
- Bachelor's degree in computer science or a related field.
- Proven experience with big data ecosystem tools such as Sqoop, Spark, SQL, API, Hive, Oozie, Airflow, etc.
- Solid experience in all phases of the SDLC, with 10+ years of experience (plan, design, develop, test, release, maintain, and support).
- Hands-on experience using Azure's data engineering stack.
- Should have implemented projects using programming languages such as Scala or Python.
- Working experience with complex SQL data-merging techniques, such as windowing functions.
- Hands-on experience with on-premises distribution tools such as Cloudera/Hortonworks/MapR.
- Excellent communication, presentation, and problem-solving skills.
Key Traits:
- Excellent communication skills.
- Self-motivated and willing to work as part of a team.
- Able to collaborate and coordinate with onshore and offshore teams.
- A proactive problem solver who tackles the challenges that come their way.
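As a small illustration of the job orchestration mentioned above, here is a hedged Airflow 2.x-style sketch that chains an ingestion step and a Spark transform. The DAG name, schedule, paths, and spark-submit arguments are hypothetical placeholders, not details from this posting.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal daily pipeline: ingest raw files, then run a Spark transform.
with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_files",
        bash_command="python /opt/jobs/ingest.py --date {{ ds }}",
    )
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit --master yarn /opt/jobs/transform.py --date {{ ds }}",
    )
    ingest >> transform  # run the transform only after ingestion succeeds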
Posted 1 month ago