Home
Jobs

1899 Data Engineering Jobs - Page 10

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 5.0 years

18 - 21 Lacs

Bengaluru

Work from Office


Overview
Annalect is currently seeking a Senior Data Engineer to join our Technology team. In this role you will build Annalect products that sit atop cloud-based data infrastructure. We are looking for people who share a passion for technology, design and development, and data, and for fusing these disciplines together to build cool things. You will work on one or more software and data products in the Annalect Engineering Team, participating in technical architecture, design, and development of software products as well as research and evaluation of new technical solutions.

Responsibilities:
- Design, build, test, and deploy data transfers across various cloud environments (Azure, GCP, AWS, Snowflake, etc.).
- Develop, monitor, maintain, and tune data pipelines.
- Write at-scale data transformations in SQL and Python.
- Perform code reviews and provide leadership and guidance to junior developers.

Qualifications:
- Curiosity about the business requirements that drive the engineering requirements.
- Interest in new technologies and eagerness to bring those technologies and out-of-the-box ideas to the team.
- 3+ years of SQL experience.
- 3+ years of professional Python experience.
- 3+ years of professional Linux experience.
- Familiarity with Snowflake, AWS, GCP, or Azure cloud environments preferred.
- Intellectual curiosity and drive; self-starters will thrive in this position.
- Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.

Additional Skills:
- BS, MS, or PhD in Computer Science, Engineering, or equivalent real-world experience.
- Experience with big data and/or infrastructure; bonus for experience organizing petabytes of data so they can be easily accessed.
- Understanding of data organization, i.e., partitioning, clustering, file sizes, file formats (see the PySpark sketch after this listing).
- Experience working with classical relational databases (Postgres, MySQL, MSSQL).
- Experience with Hadoop, Hive, Spark, Redshift, or other data processing tools (lots of time will be spent building and optimizing transformations).
- Proven ability to independently execute projects from concept to implementation to launch, and to maintain a live product.

Perks of working at Annalect:
- An incredibly fun, collaborative, and friendly environment, with social and learning activities such as game nights and speaker series.
- Halloween is a special day on our calendar since it is our Founding Day; we go all out with decorations, costumes, and prizes.
- Generous vacation policy: paid time off (PTO) includes vacation days, personal days, and a Summer Friday program, plus extended time off around the holiday season; our office is closed between Christmas and New Year to encourage our hardworking employees to rest, recharge, and celebrate the season with family and friends.
- As part of Omnicom, we have the backing and resources of a global billion-dollar company, along with the flexibility and pace of a startup: we move fast, break things, and innovate.
- Work with a modern stack and environment, with room to keep learning, experimenting, and shaping the latest technologies.
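To make the data-organization point above concrete, here is a minimal PySpark sketch of an at-scale transformation written to partitioned Parquet. This is an illustrative sketch only; the bucket paths and column names (event_ts, campaign_id, spend) are hypothetical, not from the posting.

```python
# Minimal PySpark sketch: aggregate raw events and write partitioned Parquet.
# Partitioning by date keeps scans cheap for date-bounded queries and helps
# control per-partition file sizes. All paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_rollup").getOrCreate()

events = spark.read.parquet("s3://bucket/raw/events/")  # placeholder path

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "campaign_id")
    .agg(F.count("*").alias("events"), F.sum("spend").alias("spend"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://bucket/marts/campaign_daily/"
)
```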

Posted 3 days ago

Apply

3.0 - 5.0 years

15 - 25 Lacs

Hyderabad

Work from Office


About the Role:
We are seeking a highly skilled and passionate Data Engineer to join our growing team dedicated to building and supporting cutting-edge analytical solutions. In this role, you will play a critical part in designing, developing, and maintaining the data infrastructure and pipelines that power our optimization engines. You will work in close collaboration with our team of data scientists who specialize in mathematical optimization techniques. Your expertise in data engineering will be essential in ensuring seamless data flow, enabling the development and deployment of high-impact solutions across various areas of our business.

Responsibilities:
- Design, build, and maintain robust and scalable data pipelines to support the development and deployment of mathematical optimization models.
- Collaborate closely with data scientists to deeply understand the data requirements for optimization models, including data preprocessing and cleaning, feature engineering and transformation, and data validation and quality assurance.
- Develop and implement comprehensive data quality checks and monitoring systems to guarantee the accuracy and reliability of the data used in our optimization solutions.
- Optimize data storage and retrieval processes for highly efficient model training and execution.
- Work effectively with large-scale datasets, leveraging distributed computing frameworks when necessary to handle data volume and complexity.
- Contribute to the development and maintenance of thorough data documentation and metadata management processes.
- Stay up to date on the latest industry best practices and emerging technologies in data engineering, particularly in the areas of optimization and machine learning.

Qualifications:
- Education: A Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related field is required; a Master's degree in a related field is a plus.
- Experience: 3+ years of demonstrable experience as a data engineer, specifically focused on building and maintaining complex data pipelines, with a proven track record of working with large-scale datasets, ideally in environments utilizing distributed systems.

Technical Skills (Essential):
- Programming: High proficiency in Python is essential; experience with additional scripting languages (e.g., Bash) is beneficial.
- Databases: Extensive experience with SQL and relational database systems (PostgreSQL, MySQL, or similar), including writing complex and efficient SQL queries, database performance optimization techniques, and schema design principles.
- Data Pipelines: Solid understanding and practical experience in building and maintaining data pipelines using modern tools and frameworks; experience with workflow management tools like Apache Airflow and data streaming systems like Apache Kafka is highly desirable (see the Airflow sketch after this listing).
- Cloud Platforms: Hands-on experience with major cloud environments such as AWS, Azure, or GCP, including cloud-based data storage (Amazon S3, Azure Blob Storage, Google Cloud Storage), cloud compute services, and cloud data warehousing solutions (Amazon Redshift, Google BigQuery, Snowflake).

Technical Skills (Advantageous, Not Required):
- NoSQL Databases: Familiarity with NoSQL databases like MongoDB, Cassandra, and DynamoDB, along with an understanding of their common use cases.
- Containerization: Understanding of containerization technologies such as Docker and container orchestration platforms like Kubernetes.
- Infrastructure as Code (IaC): Experience using IaC tools such as Terraform or CloudFormation.
- Version Control: Proficiency with Git or similar version control systems.

Soft Skills:
- Communication: Excellent verbal and written communication skills; you'll need to explain complex technical concepts to both technical and non-technical audiences.
- Collaboration: You'll work closely with data scientists and other team members, so strong teamwork and interpersonal skills are essential.
- Problem-Solving: A strong ability to diagnose and solve complex technical problems related to data infrastructure and data pipelines.
- Adaptability: The data engineering landscape is constantly evolving; a successful candidate will be adaptable, eager to learn new technologies, and embrace change.

Additional Considerations:
- Industry Experience: While not a strict requirement, experience in industries focused on optimization, logistics, supply chain management, or similar domains is highly valuable.
- Machine Learning Operations (MLOps): Familiarity with MLOps concepts and tools is increasingly important for data engineers in machine learning-focused environments.
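As an illustration of the pipeline tooling named above, here is a minimal Apache Airflow DAG sketch with an extract, validate, load flow and a simple data-quality gate. The task logic, table fields, and DAG name are placeholders, not part of the posting.

```python
# Minimal Airflow 2.x DAG sketch: extract -> validate -> load, where the
# validate task fails the run if a not-null check is violated.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Stand-in for pulling raw records from a source system.
    return [{"id": 1, "qty": 5}, {"id": 2, "qty": None}]


def validate(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    bad = [r for r in rows if r["qty"] is None]
    if bad:
        raise ValueError(f"{len(bad)} rows failed the not-null check on 'qty'")


def load(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    print(f"loading {len(rows)} validated rows")  # stand-in for a warehouse write


with DAG(
    dag_id="optimization_feed",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_validate >> t_load
```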

Posted 3 days ago

Apply

6.0 - 10.0 years

11 - 15 Lacs

Pune

Work from Office


About The Role: Senior AI Engineer

At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company where product innovation and mature software engineering are embedded in our core DNA. Our core values of Respect, Fairness, Growth, Agility, and Inclusiveness guide everything we do. We continually expand our expertise in digital strategy, design, architecture, and product management to offer measurable results and outside-the-box thinking.

About the Role:
We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and implementation of robust and scalable pipelines and backend systems for our Generative AI applications. In this role, you will be responsible for orchestrating the flow of data, integrating AI services, developing RAG pipelines, working with LLMs, and ensuring the smooth operation of the backend infrastructure that powers our Generative AI solutions.

Responsibilities:
- Generative AI Pipeline Development: Design and implement efficient and scalable pipelines for data ingestion, processing, and transformation, tailored for Generative AI workloads. Orchestrate the flow of data between AI services, databases, and backend systems, and build and maintain CI/CD pipelines for deploying and updating Generative AI services and pipelines.
- Data and Document Ingestion: Develop and manage systems for ingesting diverse data sources (text, images, code, etc.). Implement OCR and other preprocessing techniques to prepare data for use in Generative AI pipelines, and ensure data quality, consistency, and security throughout the ingestion process.
- AI Service Integration: Integrate and manage external AI services (e.g., cloud-based APIs for image generation, text generation, and LLMs). Develop and maintain APIs for seamless communication between AI services and backend systems, and monitor and optimize the performance of integrated AI services.
- Retrieval Augmented Generation (RAG) Pipelines: Design and implement RAG pipelines to enhance Generative AI capabilities with external knowledge sources. Develop and optimize data retrieval and indexing strategies, and evaluate and improve the accuracy and relevance of RAG-generated responses (a minimal sketch follows this listing).
- Large Language Model (LLM) Integration: Develop and manage interactions with LLMs through APIs and SDKs. Implement prompt engineering strategies to optimize LLM performance for specific Generative AI tasks, and analyze and debug LLM outputs to ensure quality and consistency.
- Backend Services Ownership: Design, develop, and maintain backend services that support Generative AI applications. Ensure the scalability, reliability, and security of the backend infrastructure, implement monitoring and logging systems, and troubleshoot and resolve backend-related issues.

Required Skills and Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Experience: 5+ years of experience in AI/ML development with a focus on building and deploying AI pipelines and backend systems; proven experience designing and implementing data ingestion and processing pipelines; strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services.
- Technical Skills: Expertise in Python and relevant AI/ML libraries; strong understanding of AI infrastructure and deployment strategies; experience with data engineering and data processing techniques; proficiency in software development principles and best practices; experience with containerization and orchestration tools (e.g., Docker, Kubernetes), version control (Git), RESTful APIs and API development, and vector databases and their application in AI/ML, particularly for similarity search and retrieval.
- Generative AI Specific Skills: Familiarity with Generative AI concepts and techniques (e.g., GANs, Diffusion Models, VAEs, LLMs); experience integrating and managing Generative AI services; understanding of RAG pipelines and their application; experience with prompt engineering for LLMs.
- Soft Skills: Strong problem-solving and analytical skills, excellent communication and collaboration skills, and the ability to work in a fast-paced environment.

Preferred Qualifications:
- Experience with OCR and document processing technologies.
- Experience with MLOps practices for Generative AI.
- Contributions to open-source AI projects.
- Strong experience with vector databases and their optimization for Generative AI applications.

Experience: 5+ years
Shift Time: 2:30 PM to 11:30 PM
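Since the posting centres on RAG pipelines, here is a minimal sketch of the retrieve-then-generate pattern: embed a query, rank stored documents by cosine similarity, and assemble an augmented prompt. The embed() function is a stand-in for a real embedding model and the final LLM call is left as a placeholder, so treat this as an illustration of the flow rather than any specific stack.

```python
# Minimal RAG sketch: retrieval by cosine similarity, then prompt assembly.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder: a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)


def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 2):
    # Cosine similarity of the query against every stored document vector.
    norms = np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    scores = doc_vecs @ query_vec / norms
    return np.argsort(scores)[::-1][:k]


documents = [
    "Refund requests are processed within 5 business days.",
    "Flights can be rebooked free of charge up to 24 hours before departure.",
    "Baggage allowance is 20 kg for economy fares.",
]
doc_vecs = np.stack([embed(d) for d in documents])

query = "How long do refunds take?"
context = "\n".join(documents[i] for i in top_k(embed(query), doc_vecs))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# response = generate(prompt)  # placeholder for the LLM API call
```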

Posted 3 days ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Bengaluru

Hybrid


Client name: Zeta Global
Full-time | Job Location: Bangalore | Experience Required: 8+ years
Mode of Work: Hybrid (3 days from the office, 2 days from home)
Job Title: Data Engineer

As a Senior Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure.
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc. (see the FastAPI sketch after this listing).
- Maintaining data quality and accuracy across production data systems.
- Working with Data Analysts to develop ETL processes for analysis and reporting.
- Working with Product Managers to design and build data products.
- Working with our DevOps team to scale and optimize our data infrastructure.
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects.
- Participating in the on-call rotation in your time zone (be available by phone or email in case something goes wrong).

Desired Characteristics:
- Minimum 8 years of software engineering experience.
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired.
- 2+ years of experience/fluency in Python.
- Proficiency with relational databases and advanced SQL.
- Expertise in the use of services like Spark and Hive; experience working with container-based solutions is a plus.
- Experience with a scheduler such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale.
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle, from inception to production and monitoring.
- Experience in the Advertising Attribution domain is a plus.
- Experience with agile software development processes.
- Excellent interpersonal and communication skills.

Please fill in all the essential details below, attach your updated resume, and send it to ralish.sharma@compunnel.com:
1. Total Experience
2. Relevant Experience in Data Engineering
3. Experience in Python
4. Experience in Spark/Airflow/Snowflake/Hive
5. Experience in FastAPI
6. Experience in ETL
7. Experience in SQL
8. Experience in Apache
9. Experience in AWS
10. Current Company
11. Current Designation
12. Highest Education
13. Notice Period
14. Current CTC
15. Expected CTC
16. Current Location
17. Preferred Location
18. Hometown
19. Contact No.
20. If you have an offer from another company, please mention the offer amount and offer location
21. Reason for looking for a change
22. PAN card

If the job description is suitable for you, please get in touch with me at 9910044363.
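For a flavour of the FastAPI work mentioned in the stack above, here is a minimal sketch of a data-service endpoint. The row-count dictionary stands in for a real warehouse query, and the route names are hypothetical.

```python
# Minimal FastAPI sketch: a health check plus a table row-count endpoint.
from fastapi import FastAPI

app = FastAPI()

# Stand-in for a Snowflake/Hive query in a real service.
ROW_COUNTS = {"orders": 1_204_331, "clicks": 98_221_004}


@app.get("/health")
def health() -> dict:
    return {"status": "ok"}


@app.get("/tables/{name}/rows")
def row_count(name: str) -> dict:
    return {"table": name, "rows": ROW_COUNTS.get(name, 0)}
```

Run locally with `uvicorn main:app --reload` (module name assumed).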

Posted 3 days ago

Apply

10.0 - 15.0 years

11 - 15 Lacs

Pune

Work from Office


Job Description:
We, at Jet2 (the UK's third largest airline and largest tour operator), have set up a state-of-the-art Technology and Innovation Centre in Pune, India. The Lead Visualisation Developer will join our growing Data Visualisation team, delivering impactful data visualisation projects (using Tableau) whilst leading the Jet2TT visualisation function. The team currently works with a range of departments including Pricing & Revenue, Overseas Operations, and Contact Centre. This new role provides a fantastic opportunity to represent visualisation and influence key business decisions. As part of the wider Data function, you will work alongside Data Engineers, Data Scientists, and Business Analysts to understand and gather requirements. You will scope visualisation projects, delivering them yourself or delegating to members of the team, ensuring they have everything they need to start development whilst guiding them through visualisation delivery. You will also support our visualisation Enablement team with the release of new Tableau features.

Roles and Responsibilities (what you'll be doing):
The successful candidate will work independently on data visualisation projects with zero or minimal guidance, operate out of the Pune location, and collaborate with stakeholders in Pune, Leeds, and Sheffield.
- Representing visualisation during project scoping.
- Working with Business Analysts and Product Owners to understand and scope requirements.
- Working with Data Engineers and Architects to ensure data models are fit for visualisation.
- Developing Tableau dashboards from start to finish using Tableau Desktop / Cloud, from gathering requirements and designing dashboards through to presenting to internal stakeholders.
- Presenting visualisations to stakeholders.
- Supporting and guiding members of the team through visualisation delivery.
- Supporting feature releases for Tableau.
- Teaching colleagues about new Tableau features and visualisation best practices.

What you'll have:
- Extensive experience in the use of Tableau, evidenced by a strong Tableau Public portfolio.
- Expertise in the delivery of data visualisation.
- Experience in requirements gathering and presenting visualisations to internal stakeholders.
- Strong understanding of data visualisation best practices.
- Experience of working in an Agile Scrum framework to deliver high-quality solutions.
- Strong communication skills, written and verbal.
- Knowledge of the delivery of Data Engineering and Data Warehousing to cloud platforms.
- Knowledge of or exposure to Cloud Data Warehouse platforms (Snowflake preferred).
- Knowledge and experience of working with a variety of databases (e.g., SQL).

Posted 3 days ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad

Work from Office


We are seeking a highly experienced and self-driven Senior Data Engineer to design, build, and optimize modern data pipelines and infrastructure. This role requires deep expertise in Snowflake, DBT, Python, and cloud data ecosystems. You will play a critical role in enabling data-driven decision-making across the organization by ensuring the availability, quality, and integrity of data.

Key Responsibilities:
- Design and implement robust, scalable, and efficient data pipelines using ETL/ELT frameworks.
- Develop and manage data models and data warehouse architecture within Snowflake.
- Create and maintain DBT models for transformation, lineage tracking, and documentation.
- Write modular, reusable, and optimized Python scripts for data ingestion, transformation, and automation (see the ingestion sketch after this listing).
- Collaborate closely with data analysts, data scientists, and business teams to gather and fulfill data requirements.
- Ensure data integrity, consistency, and governance across all stages of the data lifecycle.
- Monitor pipeline performance and implement optimization strategies for queries and storage.
- Follow best practices for data engineering, including version control (Git), testing, and CI/CD integration.

Required Skills and Qualifications:
- 8+ years of experience in Data Engineering or related roles.
- Deep expertise in Snowflake: schema design, performance tuning, security, and access controls.
- Proficiency in Python, particularly for scripting, data transformation, and workflow automation.
- Strong understanding of data modeling techniques (e.g., star/snowflake schema, normalization).
- Proven experience with DBT for building modular, tested, and documented data pipelines.
- Familiarity with ETL/ELT tools and orchestration platforms like Apache Airflow or Prefect.
- Advanced SQL skills with experience handling large and complex data sets.
- Exposure to cloud platforms such as AWS, Azure, or GCP and their data services.

Preferred Qualifications:
- Experience implementing data quality checks and governance frameworks.
- Understanding of the modern data stack and CI/CD pipelines for data workflows.
- Contributions to data engineering best practices, open-source projects, or thought leadership.
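As one concrete shape the "modular, reusable Python scripts" bullet could take, here is a minimal ingestion sketch using the snowflake-connector-python package. The account, credentials, and table are placeholders; a real pipeline would pull credentials from a secrets manager rather than hard-coding them.

```python
# Minimal Snowflake ingestion sketch: batch-insert rows into a raw table.
import snowflake.connector


def load_orders(rows):
    """Insert (order_id, amount) tuples into a raw staging table."""
    conn = snowflake.connector.connect(
        account="my_account",   # placeholder
        user="etl_user",        # placeholder
        password="***",         # pull from a secrets manager in practice
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    cur = conn.cursor()
    try:
        cur.executemany(
            "INSERT INTO raw_orders (order_id, amount) VALUES (%s, %s)",
            rows,
        )
    finally:
        cur.close()
        conn.close()


load_orders([(1, 19.99), (2, 5.50)])
```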

Posted 3 days ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Hyderabad

Hybrid


We are looking for a highly skilled Full Stack Developer with expertise in .NET Core and React.js to design, develop, and deploy robust, scalable, and cloud-native applications. The ideal candidate will have a strong understanding of backend and frontend technologies, experience with Microsoft Azure, and a passion for building high-quality software in a collaborative environment.

Key Responsibilities:
- Design, develop, and maintain scalable web applications using .NET Core (backend) and React.js (frontend).
- Build and integrate RESTful APIs, services, and microservices.
- Develop and deploy cloud-native applications leveraging Microsoft Azure services such as Azure Functions, App Services, Azure DevOps, and Blob Storage.
- Collaborate with cross-functional teams including UI/UX designers, product managers, and fellow developers to deliver efficient, user-friendly solutions.
- Write clean, maintainable, and testable code adhering to industry best practices.
- Conduct code reviews, enforce coding standards, and mentor junior developers.
- Ensure application performance, reliability, scalability, and security.
- Actively participate in Agile/Scrum ceremonies and contribute to team discussions and continuous improvement.

Required Skills:
- Strong experience with .NET Core / ASP.NET Core (Web API, MVC).
- Proficiency in React.js, JavaScript/TypeScript, HTML5, and CSS3.
- Solid experience with Microsoft Azure services (e.g., App Services, Azure Functions, Key Vault, Azure DevOps).
- Hands-on experience with Entity Framework Core, LINQ, and SQL Server.
- Familiarity with Git, CI/CD pipelines, and modern DevOps practices.
- Strong understanding of software design patterns, SOLID principles, and clean code methodologies.
- Basic knowledge of containerization tools like Docker.

Nice to Have:
- Experience with Azure Kubernetes Service (AKS) or Azure Logic Apps.
- Familiarity with unit testing frameworks (xUnit, NUnit).
- Exposure to Agile/Scrum methodologies and tools like Jira or Azure Boards.

Posted 3 days ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Lucknow

Work from Office


Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.

Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.

Posted 3 days ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office


Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.

Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.

Posted 3 days ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Surat

Work from Office


Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.

Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.

Posted 3 days ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Chennai

Work from Office


Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.

Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.

Posted 3 days ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Hyderabad

Work from Office


Job Mode: Onsite/Work from Office | Monday to Friday | Shift 1 (Morning)

Overview:
We are seeking an experienced Team Lead to oversee our data engineering and analytics team consisting of data engineers, ML engineers, reporting engineers, and data/business analysts. The ideal candidate will drive end-to-end data solutions, from data lake and data warehouse implementations to advanced analytics and AI/ML projects, ensuring timely delivery and quality standards.

Key Responsibilities:
- Lead and mentor a cross-functional team of data professionals including data engineers, ML engineers, reporting engineers, and data/business analysts.
- Manage the complete lifecycle of data projects, from requirements gathering to implementation and maintenance.
- Develop detailed project estimates and allocate work effectively among team members based on skills and capacity.
- Implement and maintain data architectures including data lakes, data warehouses, and lakehouse solutions.
- Review team deliverables for quality, adherence to best practices, and performance optimization.
- Hold team members accountable for timelines and quality standards through regular check-ins and performance tracking.
- Translate business requirements into technical specifications and actionable tasks.
- Collaborate with clients and internal stakeholders to understand business needs and define solution approaches.
- Ensure proper documentation of processes, architectures, and code.

Technical Requirements:
- Strong understanding of data engineering fundamentals including ETL/ELT processes, data modeling, and pipeline development.
- Proficiency in SQL and data warehousing concepts, including dimensional modeling and optimization techniques.
- Experience with big data technologies and distributed computing frameworks.
- Hands-on experience with at least one major cloud provider (AWS, GCP, or Azure) and their respective data services.
- Knowledge of on-premises data infrastructure setup and maintenance.
- Understanding of data governance, security, and compliance requirements.
- Familiarity with AI/ML workflows and deployment patterns.
- Experience with BI and reporting tools for data visualization and insights delivery.

Management Skills:
- Proven experience leading technical teams of 4+ members.
- Strong project estimation and resource allocation capabilities.
- Excellent code and design review skills.
- Ability to manage competing priorities and deliver projects on schedule.
- Effective communication skills to bridge technical concepts with business objectives.
- Problem-solving mindset with the ability to remove blockers for the team.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering or related roles.
- 2+ years of team leadership or management experience.
- Demonstrated success in delivering complex data projects.
- Certification in relevant cloud platforms or data technologies is a plus.

What We Offer:
- Opportunity to lead cutting-edge data projects for diverse clients.
- Professional development and technical growth path.
- A collaborative work environment that values innovation.
- Competitive salary and benefits package.

Posted 3 days ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Kolkata

Work from Office


About the job:
As a Mid-level Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks (see the Delta Lake sketch after this listing).
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
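To illustrate the Databricks pipeline work described above, here is a minimal Delta Lake upsert sketch using the MERGE pattern. The table path and join key are hypothetical, and it assumes a Delta-enabled Spark session such as a Databricks cluster provides.

```python
# Minimal Delta Lake sketch: upsert a batch into a Delta table with MERGE.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incoming batch of changed rows (illustrative data).
updates = spark.createDataFrame(
    [(1, "alice", "2024-01-02"), (3, "carol", "2024-01-02")],
    ["id", "name", "updated_at"],
)

target = DeltaTable.forPath(spark, "/mnt/silver/customers")  # placeholder path

(
    target.alias("t")
    .merge(updates.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()      # update existing keys
    .whenNotMatchedInsertAll()   # insert new keys
    .execute()
)
```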

Posted 3 days ago

Apply

6.0 - 8.0 years

19 - 25 Lacs

Pune, Chennai

Hybrid


Role: Data Engineer
Experience: 6-9 years (relevant experience as a Data Engineer: 5+ years)
Notice Period: Immediate joiners only
Job Location: Pune and Chennai

Mandatory skills: Spark, SQL, and Python

Must Have:
- Relevant experience of 6-9 years as a Data Engineer.
- Experience in a programming language like Python.
- Good understanding of ETL (Extract, Transform, Load) concepts.
- Good analytical and problem-solving skills.
- Knowledge of a ticketing tool like JIRA/SNOW.
- Good communication skills to interact with customers on issues and requirements.

Reach us: If you are interested in this position and meet the above qualifications, please reach out to me directly at swati@cielhr.com and share your updated resume highlighting your relevant experience.

Posted 3 days ago

Apply

6.0 - 11.0 years

25 - 35 Lacs

Valsad

Remote


Job Timing:
Monday-Friday: 3:00 PM to 12:00 AM (8:00 PM to 9:00 PM dinner break)
Saturday: 9:30 AM to 2:30 PM (1:00 PM to 1:30 PM lunch break)

Job Description:
As a Data Scientist specializing in AI and Machine Learning, you will play a key role in developing and deploying state-of-the-art machine learning models. You will work closely with cross-functional teams to create solutions leveraging AI technologies, including OpenAI models, Google Gemini, Copilot, and other cutting-edge AI tools.

Key Responsibilities:
- Design, develop, and implement advanced AI and machine learning models focusing on generative AI and NLP technologies.
- Work with large datasets, applying statistical and machine learning techniques to extract insights and develop predictive models.
- Collaborate with engineering teams to integrate models into production systems.
- Apply best practices for model training, tuning, evaluation, and optimization (see the tuning sketch after this listing).
- Develop and maintain pipelines for data ingestion, feature engineering, and model deployment.
- Leverage tools like OpenAI's GPT models, Google Gemini, Microsoft Copilot, and other available platforms for AI-driven solutions.
- Build and experiment with large language models, recommendation systems, computer vision models, and reinforcement learning systems.
- Continuously stay up to date with the latest AI/ML technologies and research trends.

Qualifications (Required):
- Proven experience as a Data Scientist, Machine Learning Engineer, or similar role.
- Strong expertise in building and deploying machine learning models across various use cases.
- In-depth experience with AI frameworks and tools such as OpenAI (e.g., GPT models), Google Gemini, Microsoft Copilot, and others.
- Proficiency in machine learning techniques, including supervised/unsupervised learning, reinforcement learning, and deep learning.
- Expertise in model training, fine-tuning, and hyperparameter optimization.
- Strong programming skills in Python, R, or similar languages.
- Solid understanding of model evaluation metrics and performance tuning.
- Familiarity with cloud platforms (AWS, Azure, Google Cloud) and ML frameworks such as TensorFlow, PyTorch, and Keras.
- Experience with MLOps tools such as MLflow, Kubeflow, and DataRobot.
- Strong experience with data wrangling, feature engineering, and preprocessing techniques.
- Excellent problem-solving skills and the ability to communicate complex ideas to non-technical stakeholders.

Qualifications (Preferred):
- PhD or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Experience with large-scale data processing frameworks (Hadoop, Spark, Databricks).
- Expertise in Natural Language Processing (NLP) techniques and frameworks like Hugging Face, BERT, T5, etc.
- Familiarity with deploying AI solutions on cloud services, including AWS SageMaker, Azure ML, or Google AI Platform.
- Experience with distributed machine learning techniques, multi-GPU setups, and optimizing large-scale models.
- Knowledge of reinforcement learning (RL) algorithms and practical application experience.
- Familiarity with AI interpretability tools such as SHAP, LIME, and Fairness Indicators.
- Proficiency with collaboration tools such as Jupyter Notebooks, Git, and Docker for version control and deployment.

Additional Tools & Technologies (Preferred Experience):
- Natural Language Processing (NLP): OpenAI GPT, BERT, T5, spaCy, NLTK, Hugging Face
- Machine Learning Frameworks: TensorFlow, PyTorch, Keras, Scikit-Learn
- Big Data Processing: Hadoop, Spark, Databricks, Dask
- Cloud Platforms: AWS SageMaker, Google AI Platform, Microsoft Azure ML, IBM Watson
- Automation & Deployment: Docker, Kubernetes, Terraform, Jenkins, CircleCI, GitLab CI/CD
- Visualization & Analysis: Tableau, Power BI, Plotly, Matplotlib, Seaborn, NumPy, Pandas
- Databases: RDBMS, NoSQL
- Version Control: Git, GitHub, GitLab

Why Join Us:
- Innovative Projects: Work on groundbreaking AI solutions and cutting-edge technology.
- Collaborative Team: Join a passionate, highly skilled, and collaborative team that values creativity and new ideas.
- Growth Opportunities: Develop your career in an expanding AI-focused company with continuous learning opportunities.
- Competitive Compensation: We offer a competitive salary and benefits package.
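As a small, concrete instance of the hyperparameter-optimization skill listed above, here is a minimal scikit-learn GridSearchCV sketch on synthetic data. The model and parameter grid are arbitrary choices for illustration.

```python
# Minimal hyperparameter-tuning sketch with scikit-learn's GridSearchCV.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic binary-classification data stands in for a real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [4, 8, None]},
    cv=5,            # 5-fold cross-validation per candidate
    scoring="f1",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```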

Posted 4 days ago

Apply

5.0 - 7.0 years

19 Lacs

Kolkata, Mumbai, Hyderabad

Work from Office


Reporting to: Global Head of Data Operations

Role purpose:
As a Data Engineer, you will be a driving force towards data engineering excellence. Working with other data engineers, analysts, and the architecture function, you'll be involved in building out a modern data platform using a number of cutting-edge technologies, in a multi-cloud environment. You'll get the opportunity to spread your knowledge and skills across multiple areas, with involvement in a range of different functional areas. As the business grows, we want our staff to grow with us, so there'll be plenty of opportunity to learn and upskill in areas such as data pipelines, data integrations, data preparation, data models, and analytical and reporting marts. Also, whilst work often follows business requirements and design concepts, you'll play a huge part in the continuous development and maturing of design patterns and automation processes for others to follow.

Accountabilities and main responsibilities:
In this role, you will be delivering solutions and patterns through Agile methodologies as part of a squad. You'll be collaborating with customers, partners, and peers, and will help to identify data requirements. We'd also rely on you to:
- Help break down large problems into smaller iterative steps.
- Contribute to defining the prioritisation of your squad's backlog.
- Build out the modern data platform (data pipelines, data integrations, data preparation, data models, analytical and reporting marts) based on business requirements using agreed design patterns.
- Help determine the most appropriate tool, method, and design pattern to satisfy the requirement.
- Proactively suggest improvements where you see issues.
- Learn how to prepare our data in order to surface it for use within APIs.
- Learn how to document, support, manage, and maintain the modern data platform built within your squad.
- Learn how to provide guidance and training to downstream consumers of data on how best to use the data in our platform.
- Learn how to support and build new data APIs.
- Contribute to evangelising and educating within Sanne about the better use and value of data.
- Comply with all Sanne policies.
- Any other duties in the scope of the role that the company requires.

Qualifications and skills

Technical skills:
- Data Warehousing and Data Modelling
- Data Lakes (AWS Lake Formation, Azure Data Lake)
- Cloud Data Warehouses (AWS Redshift, Azure Synapse, Snowflake)
- ETL/ELT/pipeline tools (AWS Glue, Azure Data Factory, FiveTran, Stitch)
- Data message bus / pub-sub systems (AWS SNS & SQS, Azure ASQ, Kafka, RabbitMQ)
- Data programming languages (SQL, Python, Scala, Java)
- Cloud workflow services (AWS Step Functions, Azure Logic Apps, Camunda)
- Interactive query services (AWS Athena, Azure DL Analytics)
- Event and schedule management (AWS Lambda Functions, Azure Functions)
- Traditional Microsoft BI stack (SQL Server, SSIS, SSAS, SSRS)
- Reporting and visualisation tools (Power BI, QuickSight, Mode)
- NoSQL & graph DBs (AWS Neptune, Azure Cosmos, Neo4j) (desirable)
- API management (desirable)

Core skills:
- Excellent communication and interpersonal skills
- Critical thinking and research capabilities
- Strong problem-solving skills
- Ability to plan and manage your own workloads
- Works well on own initiative as well as part of a bigger team
- Working knowledge of Agile software development lifecycles

Posted 4 days ago

Apply

0.0 - 2.0 years

5 - 15 Lacs

Bengaluru

Work from Office


Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon

About Tredence
Tredence focuses on last-mile delivery of powerful insights into profitable actions by uniting its strengths in business analytics, data science, and software engineering. The largest companies across industries are engaging with us and deploying their prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia. Tredence is an equal opportunity employer. We celebrate and support diversity and are committed to creating an inclusive environment for all employees. Visit our website for more details.

Role Overview
We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure or GCP. As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.

Key Responsibilities
- Develop robust and scalable data pipelines using PySpark on cloud platforms like Azure Databricks or GCP Dataflow.
- Write optimized SQL queries for data transformation, analysis, and validation.
- Implement and support data warehouse models and principles, including fact and dimension modeling, star and snowflake schemas, Slowly Changing Dimensions (SCD), Change Data Capture (CDC), and Medallion Architecture (a minimal sketch follows this listing).
- Monitor, troubleshoot, and improve pipeline performance and data quality.
- Work with teams across analytics, business, and IT functions to deliver data-driven solutions.
- Communicate technical updates and contribute to sprint-level delivery.

Mandatory Skills
- Strong hands-on experience with SQL and Python.
- Working knowledge of PySpark for data transformation.
- Exposure to at least one cloud platform: Azure or GCP.
- Good understanding of data engineering and warehousing fundamentals.
- Excellent debugging and problem-solving skills.
- Strong written and verbal communication skills.

Preferred Skills
- Experience working with Databricks Community Edition or the enterprise version.
- Familiarity with data orchestration tools like Airflow or Azure Data Factory.
- Exposure to CI/CD processes and version control (e.g., Git).
- Understanding of Agile/Scrum methodology and collaborative development.
- Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.).

Required Skills: Azure Databricks / GCP, Python, SQL, PySpark
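To ground the medallion and CDC concepts listed above, here is a minimal PySpark sketch of a bronze-to-silver step that keeps the latest change per key and drops deletes. The paths, column names, and the op/change_ts fields are assumptions for illustration.

```python
# Minimal medallion-style sketch: bronze (raw CDC feed) -> silver (current
# state), deduplicating to the latest record per key.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

bronze = spark.read.json("gs://bucket/bronze/customers/")  # placeholder path

# Rank each customer's change records newest-first.
latest = Window.partitionBy("customer_id").orderBy(F.col("change_ts").desc())

silver = (
    bronze
    .withColumn("rn", F.row_number().over(latest))
    .filter("rn = 1")            # keep only the latest change per key
    .filter("op != 'DELETE'")    # drop tombstone records
    .drop("rn", "op")
)

silver.write.mode("overwrite").parquet("gs://bucket/silver/customers/")
```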

Posted 4 days ago

Apply

5.0 - 10.0 years

15 - 27 Lacs

Bengaluru

Work from Office


Job Summary:
We are seeking a skilled Azure Databricks Developer with strong Terraform expertise to join our data engineering or cloud team. This role involves building, automating, and maintaining scalable data pipelines and infrastructure in the Azure cloud environment using Databricks and Infrastructure as Code (IaC) practices. The ideal candidate has hands-on experience with data processing in Databricks and cloud provisioning using Terraform.

Key Responsibilities:
- Develop and optimize data pipelines using Azure Databricks (Spark, Delta Lake, notebooks, jobs).
- Design and automate infrastructure provisioning on Azure using Terraform.
- Collaborate with data engineers, analysts, and cloud architects to integrate Databricks with other Azure services (e.g., Data Lake, Synapse, Key Vault).
- Maintain CI/CD pipelines for deploying Databricks and Terraform configurations.
- Apply best practices for security, scalability, cost optimization, and performance.
- Monitor and troubleshoot jobs and infrastructure components.
- Document architecture, processes, and configuration standards.

Required Skills & Experience:
- 5+ years of experience with Azure Databricks, including PySpark, notebooks, cluster management, and Delta Lake.
- Strong hands-on experience with Terraform for managing cloud infrastructure (especially Azure).
- Proficiency in Python and SQL.
- Experience with Azure services: Azure Data Lake, Azure Data Factory, Azure Key Vault, Azure DevOps.
- Familiarity with CI/CD pipelines and version control (e.g., Git).
- Good understanding of data engineering concepts and cloud-native architecture.

Preferred Qualifications:
- Azure certifications (e.g., DP-203, AZ-104, or AZ-400).
- Knowledge of the Databricks CLI, REST API, and workspace automation.
- Experience with monitoring and alerting for data pipelines and cloud resources.
- Understanding of cost management for Databricks and Azure services.

Posted 4 days ago

Apply

7.0 - 10.0 years

20 - 30 Lacs

Hyderabad, Chennai

Work from Office


Proficient in designing and delivering data pipelines in Cloud Data Warehouses (e.g., Snowflake, Redshift), using various ETL/ELT tools such as Matillion, dbt, Striim, etc. Solid understanding of database systems (relational/NoSQL) and data modeling techniques.

Required Candidate Profile: We are looking for candidates with strong experience in data architecture. Potential companies: Tiger Analytics, Tredence, Quantiphi, the Data Engineering group within Infosys/TCS/Cognizant, Deloitte Consulting.

Perks and benefits: 5 working days - Onsite

Posted 4 days ago

Apply

8.0 - 13.0 years

17 - 20 Lacs

Bengaluru

Work from Office


Project description
We are seeking an experienced Senior Project Manager with a strong background in delivering data engineering and Python-based development projects. In this role, you will manage cross-functional teams and lead Agile delivery for high-impact, cloud-based data initiatives. You'll work closely with data engineers, scientists, architects, and business stakeholders to ensure projects are delivered on time, within scope, and aligned with strategic objectives. The ideal candidate combines technical fluency, strong leadership, and Agile delivery expertise in data-centric environments.

Responsibilities
- Lead and manage data engineering and Python-based development projects, ensuring timely delivery and alignment with business goals.
- Work closely with data engineers, data scientists, architects, and product owners to gather requirements and define project scope.
- Translate complex technical requirements into actionable project plans and user stories.
- Oversee sprint planning, backlog grooming, daily stand-ups, and retrospectives in Agile/Scrum environments.
- Ensure best practices in Python coding, data pipeline design, and cloud-based data architecture are followed.
- Identify and mitigate risks, manage dependencies, and escalate issues when needed.
- Own stakeholder communications, reporting, and documentation of all project artifacts.
- Track KPIs and delivery metrics to ensure accountability and continuous improvement.

Skills (must have)
- Experience: Minimum 8+ years of project management experience, including 3+ years managing data and Python-based development projects.
- Agile Expertise: Strong experience delivering projects in Agile/Scrum environments with distributed or hybrid teams.
- Technical Fluency: Solid understanding of Python, data pipelines, and ETL/ELT workflows; familiarity with cloud platforms such as AWS, Azure, or GCP; exposure to tools like Airflow, dbt, Spark, Databricks, or Snowflake is a plus.
- Tools: Proficiency with JIRA, Confluence, Git, and project dashboards (e.g., Power BI, Tableau).
- Soft Skills: Strong communication, stakeholder management, and leadership skills; ability to translate between technical and non-technical audiences; skilled in risk management, prioritization, and delivery tracking.

Nice to have
N/A

Other
Languages: English C1 (Advanced)
Seniority: Senior

Posted 4 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Lucknow

Work from Office


Functional Area: Data Engineering / SAP BW/4HANA / Automation
Qualification: Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field

Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.

Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark); experience integrating with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports is a plus.
- SAP HANA (underlying database): Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.

Functional Skills:
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.

Qualifications:
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.

Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.

Posted 4 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Ludhiana

Work from Office


Functional Area: Data Engineering / SAP BW/4HANA / Automation

Qualification: Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field

Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems (an illustrative sketch follows this listing).
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up to date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.

Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports with SAP BW Query Designer and BEx Analyzer is a plus.
- SAP HANA (underlying database): Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.

Functional Skills:
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to communicate effectively with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.

Qualifications:
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.

Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
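The listing above calls for hands-on work with file transfer protocols such as SFTP for data exchange. Purely as a minimal, hedged illustration of that task, the following Python sketch pulls a remote extract with the paramiko library; the host, credentials, and file paths are hypothetical placeholders, and a production job would typically use key-based authentication and run under a scheduler such as Automic.

# Minimal SFTP pull sketch using paramiko (hypothetical host and paths).
# A scheduled production job would add key-based auth, retries, and logging.
import paramiko

HOST = "sftp.example.com"    # hypothetical endpoint
PORT = 22
USER = "etl_user"            # hypothetical service account
REMOTE_FILE = "/outbound/sales_extract.csv"
LOCAL_FILE = "/data/inbound/sales_extract.csv"

def fetch_extract() -> None:
    # Open a transport, download the extract, and close the session cleanly.
    transport = paramiko.Transport((HOST, PORT))
    try:
        transport.connect(username=USER, password="change-me")
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.get(REMOTE_FILE, LOCAL_FILE)
        sftp.close()
    finally:
        transport.close()

if __name__ == "__main__":
    fetch_extract()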

Posted 4 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Naukri logo

Functional Area: Data Engineering / SAP BW/4HANA / Automation

Qualification: Bachelor's or Master's Degree in Computer Science, Information Technology, or a related field

Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources (an illustrative sketch follows this listing).
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up to date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.

Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark) and experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports with SAP BW Query Designer and BEx Analyzer is a plus.
- SAP HANA (underlying database): Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.

Functional Skills:
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to communicate effectively with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.

Qualifications:
- Bachelor's or Master's Degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.

Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
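This listing (posted for Hyderabad with the same description as the Ludhiana role above) also mentions pipelines that integrate with Hadoop-based systems. As a hedged illustration of that integration point, the following Python sketch reads a Hive table with PySpark; the database, table, and column names are hypothetical, and the actual cluster, catalogs, and security setup would differ per project.

# Hedged PySpark sketch: query a Hive table as one possible Hadoop touchpoint.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("bw-hadoop-bridge")   # hypothetical job name
    .enableHiveSupport()           # lets Spark query the Hive metastore
    .getOrCreate()
)

# Pull a slice of data that a downstream BW/4HANA load might consume.
df = spark.sql(
    "SELECT order_id, amount, load_date "
    "FROM staging.sales_orders "   # hypothetical Hive table
    "WHERE load_date = current_date()"
)
df.show(10)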

Posted 4 days ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Chennai

Work from Office

Naukri logo

About the job:

Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading (a short sketch follows this listing).
- Experience ingesting data from SAP systems such as SAP ECC, S/4HANA, or SAP BW is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
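Since the role centers on Fabric pipelines built with PySpark notebooks and Spark SQL, here is a minimal, hedged sketch of the ingest-transform-load pattern it describes. The source path, column names, and table name are hypothetical, and in a Fabric notebook the spark session is normally pre-created, making the builder line unnecessary there.

# Minimal ingest -> transform -> load sketch in PySpark.
# Paths, columns, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fabric-etl-sketch").getOrCreate()

# Ingest: read a raw CSV landed in the lakehouse Files area.
raw = spark.read.option("header", True).csv("Files/raw/orders.csv")

# Transform: type the amount column and keep current-year rows only.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.year(F.to_date("order_date")) == 2024)  # hypothetical cutoff
)

# Load: persist as a Delta table for downstream Power BI reports.
orders.write.format("delta").mode("overwrite").saveAsTable("orders_clean")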

Posted 4 days ago

Apply

7.0 - 12.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Naukri logo

As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII: At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Team Overview: Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects, and product managers striving to make Target the most convenient, safe, and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation, and continuous learning.

Position Overview: As a Lead Data Engineer, you will serve as the technical anchor for the engineering team, responsible for designing and developing scalable, high-performance data solutions. You will own and drive data architecture that supports both functional and non-functional business needs, ensuring reliability, efficiency, and scalability. Your expertise in big data technologies, distributed systems, and cloud platforms will help shape the engineering roadmap and best practices for data processing, analytics, and real-time data serving. You will play a key role in architecting and optimizing data pipelines using Hadoop, Spark, Scala/Java, and cloud technologies to support enterprise-wide data initiatives. Additionally, experience with API development for serving low-latency data and with Customer Data Platforms (CDP) will be a strong plus.

Key Responsibilities:
- Architect and build scalable, high-performance data pipelines and distributed data processing solutions using Hadoop, Spark, Scala/Java, and cloud platforms (AWS/GCP/Azure).
- Design and implement real-time and batch data processing solutions, ensuring data is efficiently processed and made available for analytical and operational use.
- Develop APIs and data services to expose low-latency, high-throughput data for downstream applications, enabling real-time decision-making (see the streaming sketch after this listing).
- Optimize and enhance data models, workflows, and processing frameworks to improve performance, scalability, and cost-efficiency.
- Drive data governance, security, and compliance best practices.
- Collaborate with data scientists, product teams, and business stakeholders to understand requirements and deliver data-driven solutions.
- Lead the design, implementation, and lifecycle management of data services and solutions.
- Stay up to date with emerging technologies and drive adoption of best practices in big data engineering, cloud computing, and API development.
- Provide technical leadership and mentorship to engineering teams, promoting best practices in data engineering and API design.

About You:
- 7+ years of experience in data engineering, software development, or distributed systems.
- Expertise in big data technologies such as Hadoop, Spark, and distributed processing frameworks.
- Strong programming skills in Scala and/or Java (Python is a plus).
- Experience with cloud platforms (AWS, GCP, or Azure) and their data ecosystems (e.g., S3, BigQuery, Databricks, EMR, Snowflake).
- Proficiency in API development using REST, GraphQL, or gRPC to serve real-time and batch data.
- Experience with real-time and streaming data architectures (Kafka, Flink, Kinesis, etc.).
- Strong knowledge of data modeling, ETL pipeline design, and performance optimization.
- Understanding of data governance, security, and compliance in large-scale data environments.
- Experience with Customer Data Platforms (CDP) or customer-centric data processing is a strong plus.
- Strong problem-solving skills and the ability to work in complex, unstructured environments.
- Excellent communication and collaboration skills, with experience working in cross-functional teams.

Why Join Us:
- Work with cutting-edge big data, API, and cloud technologies in a fast-paced, collaborative environment.
- Influence and shape the future of data architecture and real-time data services at Target.
- Solve high-impact business problems using scalable, low-latency data solutions.
- Be part of a culture that values innovation, learning, and growth.

Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
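The role above pairs streaming platforms like Kafka with APIs that serve low-latency data. Purely as a hedged illustration of the streaming half, this Python sketch consumes JSON events with the kafka-python client; the topic name, broker address, and payload fields are hypothetical, and a real deployment would tune offsets, error handling, and serialization per use case.

# Hedged sketch: consume JSON events from Kafka with kafka-python.
# Topic, brokers, and payload fields are hypothetical placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "guest-activity",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="lead-de-demo",               # hypothetical consumer group
)

# Each event could feed a low-latency store that a serving API reads from.
for message in consumer:
    event = message.value
    print(event.get("guest_id"), event.get("action"))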

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
