5.0 - 8.0 years
10 - 20 Lacs
Hyderabad
Hybrid
We are looking for a highly skilled Full Stack Developer with expertise in .NET Core and React.js to design, develop, and deploy robust, scalable, cloud-native applications. The ideal candidate will have a strong understanding of backend and frontend technologies, experience with Microsoft Azure, and a passion for building high-quality software in a collaborative environment.
Key Responsibilities:
- Design, develop, and maintain scalable web applications using .NET Core (backend) and React.js (frontend).
- Build and integrate RESTful APIs, services, and microservices.
- Develop and deploy cloud-native applications leveraging Microsoft Azure services such as Azure Functions, App Services, Azure DevOps, and Blob Storage.
- Collaborate with cross-functional teams including UI/UX designers, product managers, and fellow developers to deliver efficient, user-friendly solutions.
- Write clean, maintainable, and testable code adhering to industry best practices.
- Conduct code reviews, enforce coding standards, and mentor junior developers.
- Ensure application performance, reliability, scalability, and security.
- Actively participate in Agile/Scrum ceremonies and contribute to team discussions and continuous improvement.
Required Skills:
- Strong experience with .NET Core / ASP.NET Core (Web API, MVC).
- Proficiency in React.js, JavaScript/TypeScript, HTML5, and CSS3.
- Solid experience with Microsoft Azure services (e.g., App Services, Azure Functions, Key Vault, Azure DevOps).
- Hands-on experience with Entity Framework Core, LINQ, and SQL Server.
- Familiarity with Git, CI/CD pipelines, and modern DevOps practices.
- Strong understanding of software design patterns, SOLID principles, and clean code methodologies.
- Basic knowledge of containerization tools like Docker.
Nice to Have:
- Experience with Azure Kubernetes Service (AKS) or Azure Logic Apps.
- Familiarity with unit testing frameworks (xUnit, NUnit).
- Exposure to Agile/Scrum methodologies and tools like Jira or Azure Boards.
Posted 1 hour ago
5.0 - 8.0 years
6 - 10 Lacs
Lucknow
Work from Office
Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.
Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.
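The SQL requirement above is concrete enough to illustrate. Below is a minimal sketch, not part of the original listing, of the kind of advanced validation query described: a window-function duplicate check run from Python via pyodbc. The table and column names (dbo.stg_orders, order_id, load_ts) and the connection string are hypothetical.

```python
# Illustrative only: a duplicate-detection validation query of the kind the
# posting describes, executed from Python via pyodbc.
import pyodbc

VALIDATION_SQL = """
WITH ranked AS (
    SELECT order_id, load_ts,
           ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY load_ts DESC) AS rn
    FROM dbo.stg_orders
)
SELECT order_id, COUNT(*) AS duplicate_rows
FROM ranked
WHERE rn > 1          -- everything beyond the newest row per order is a duplicate
GROUP BY order_id;
"""

# Hypothetical connection string; adjust driver/server/database for your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=dw;Trusted_Connection=yes"
)
try:
    for order_id, dupes in conn.cursor().execute(VALIDATION_SQL).fetchall():
        print(f"order {order_id}: {dupes} duplicate staged rows")
finally:
    conn.close()
```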
Posted 2 hours ago
5.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.
Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.
Posted 2 hours ago
5.0 - 8.0 years
6 - 10 Lacs
Surat
Work from Office
Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.
Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.
Posted 3 hours ago
5.0 - 8.0 years
6 - 10 Lacs
Chennai
Work from Office
Key Responsibilities:
- Design, develop, and maintain data pipelines to support business intelligence and analytics.
- Implement ETL processes using SSIS (advanced level) to ensure efficient data transformation and movement.
- Develop and optimize data models for reporting and analytics.
- Work with Tableau (advanced level) to create insightful dashboards and visualizations.
- Write and execute complex SQL (advanced level) queries for data extraction, validation, and transformation.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality data solutions.
- Ensure data integrity, security, and compliance with best practices.
- Troubleshoot and optimize data workflows for performance improvement.
Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Advanced proficiency in SSIS, Tableau, and SQL.
- Strong understanding of ETL processes and data pipeline development.
- Experience with data modeling for analytical and reporting solutions.
- Hands-on experience working in Agile development environments.
- Excellent problem-solving and troubleshooting skills.
- Ability to work independently in a remote setup.
- Strong communication and collaboration skills.
Posted 3 hours ago
5.0 - 7.0 years
7 - 9 Lacs
Hyderabad
Work from Office
Job Mode: Onsite/Work from Office | Monday to Friday | Shift 1 (Morning)
Overview:
We are seeking an experienced Team Lead to oversee our data engineering and analytics team, consisting of data engineers, ML engineers, reporting engineers, and data/business analysts. The ideal candidate will drive end-to-end data solutions, from data lake and data warehouse implementations to advanced analytics and AI/ML projects, ensuring timely delivery and quality standards.
Key Responsibilities:
- Lead and mentor a cross-functional team of data professionals including data engineers, ML engineers, reporting engineers, and data/business analysts.
- Manage the complete lifecycle of data projects, from requirements gathering to implementation and maintenance.
- Develop detailed project estimates and allocate work effectively among team members based on skills and capacity.
- Implement and maintain data architectures including data lakes, data warehouses, and lakehouse solutions.
- Review team deliverables for quality, adherence to best practices, and performance optimization.
- Hold team members accountable for timelines and quality standards through regular check-ins and performance tracking.
- Translate business requirements into technical specifications and actionable tasks.
- Collaborate with clients and internal stakeholders to understand business needs and define solution approaches.
- Ensure proper documentation of processes, architectures, and code.
Technical Requirements:
- Strong understanding of data engineering fundamentals including ETL/ELT processes, data modeling, and pipeline development.
- Proficiency in SQL and data warehousing concepts, including dimensional modeling and optimization techniques.
- Experience with big data technologies and distributed computing frameworks.
- Hands-on experience with at least one major cloud provider (AWS, GCP, or Azure) and their respective data services.
- Knowledge of on-premises data infrastructure setup and maintenance.
- Understanding of data governance, security, and compliance requirements.
- Familiarity with AI/ML workflows and deployment patterns.
- Experience with BI and reporting tools for data visualization and insights delivery.
Management Skills:
- Proven experience leading technical teams of 4+ members.
- Strong project estimation and resource allocation capabilities.
- Excellent code and design review skills.
- Ability to manage competing priorities and deliver projects on schedule.
- Effective communication skills to bridge technical concepts with business objectives.
- Problem-solving mindset with the ability to remove blockers for the team.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering or related roles.
- 2+ years of team leadership or management experience.
- Demonstrated success in delivering complex data projects.
- Certification in relevant cloud platforms or data technologies is a plus.
What We Offer:
- Opportunity to lead cutting-edge data projects for diverse clients.
- Professional development and technical growth path.
- A collaborative work environment that values innovation.
- Competitive salary and benefits package.
Posted 3 hours ago
3.0 - 6.0 years
5 - 8 Lacs
Kolkata
Work from Office
About the job:
As a mid-level Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.
What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.
You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
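As an illustration of the pipeline work this listing describes, here is a minimal PySpark sketch of a bronze-to-silver Delta Lake flow on Databricks; the mount paths and column names are hypothetical placeholders, not details from the posting.

```python
# A minimal sketch of a Databricks-style medallion pipeline, assuming a
# workspace with Delta Lake enabled. Paths and columns are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-bronze-to-silver").getOrCreate()

# Ingest raw landing-zone files into a bronze Delta table, stamping load time.
raw = spark.read.json("/mnt/landing/events/")
(raw.withColumn("ingest_ts", F.current_timestamp())
    .write.format("delta").mode("append").save("/mnt/bronze/events"))

# Clean and conform into a silver table, partitioned for downstream queries.
silver = (spark.read.format("delta").load("/mnt/bronze/events")
          .dropDuplicates(["event_id"])
          .withColumn("event_date", F.to_date("event_time")))
(silver.write.format("delta").mode("overwrite")
       .partitionBy("event_date").save("/mnt/silver/events"))
```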
Posted 3 hours ago
6.0 - 8.0 years
19 - 25 Lacs
Pune, Chennai
Hybrid
Role: Data Engineer
Experience: 6-9 years
Relevant Experience as a Data Engineer: 5+ years
Notice Period: Immediate joiners only
Job Location: Pune and Chennai
Mandatory Skills: Spark, SQL, and Python
Must Have:
- Relevant experience of 6-9 years as a Data Engineer.
- Experience in a programming language such as Python.
- Good understanding of ETL (Extract, Transform, Load) concepts.
- Good analytical and problem-solving skills.
- Knowledge of a ticketing tool like JIRA/SNOW.
- Good communication skills to interact with customers on issues and requirements.
Reach us: If you are interested in this position and meet the above qualifications, please reach out to me directly at swati@cielhr.com and share your updated resume highlighting your relevant experience.
Posted 3 hours ago
6.0 - 11.0 years
25 - 35 Lacs
Valsad
Remote
Job Timing: Monday-Friday, 3:00 PM to 12:00 AM (8:00 PM to 9:00 PM dinner break); Saturday, 9:30 AM to 2:30 PM (1:00 PM to 1:30 PM lunch break).
Job Description:
As a Data Scientist specializing in AI and Machine Learning, you will play a key role in developing and deploying state-of-the-art machine learning models. You will work closely with cross-functional teams to create solutions leveraging AI technologies, including OpenAI models, Google Gemini, Copilot, and other cutting-edge AI tools.
Key Responsibilities:
- Design, develop, and implement advanced AI and machine learning models focusing on generative AI and NLP technologies.
- Work with large datasets, applying statistical and machine learning techniques to extract insights and develop predictive models.
- Collaborate with engineering teams to integrate models into production systems.
- Apply best practices for model training, tuning, evaluation, and optimization.
- Develop and maintain pipelines for data ingestion, feature engineering, and model deployment.
- Leverage tools like OpenAI's GPT models, Google Gemini, Microsoft Copilot, and other available platforms for AI-driven solutions.
- Build and experiment with large language models, recommendation systems, computer vision models, and reinforcement learning systems.
- Continuously stay up-to-date with the latest AI/ML technologies and research trends.
Qualifications:
Required:
- Proven experience as a Data Scientist, Machine Learning Engineer, or similar role.
- Strong expertise in building and deploying machine learning models across various use cases.
- In-depth experience with AI frameworks and tools such as OpenAI (e.g., GPT models), Google Gemini, Microsoft Copilot, and others.
- Proficiency in machine learning techniques, including supervised/unsupervised learning, reinforcement learning, and deep learning.
- Expertise in model training, fine-tuning, and hyperparameter optimization.
- Strong programming skills in Python, R, or similar languages.
- Solid understanding of model evaluation metrics and performance tuning.
- Familiarity with cloud platforms (AWS, Azure, Google Cloud) and ML frameworks such as TensorFlow, PyTorch, and Keras.
- Experience with MLOps tools such as MLflow, Kubeflow, and DataRobot.
- Strong experience with data wrangling, feature engineering, and preprocessing techniques.
- Excellent problem-solving skills and the ability to communicate complex ideas to non-technical stakeholders.
Preferred:
- PhD or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Experience with large-scale data processing frameworks (Hadoop, Spark, Databricks).
- Expertise in Natural Language Processing (NLP) techniques and frameworks like Hugging Face, BERT, T5, etc.
- Familiarity with deploying AI solutions on cloud services, including AWS SageMaker, Azure ML, or Google AI Platform.
- Experience with distributed machine learning techniques, multi-GPU setups, and optimizing large-scale models.
- Knowledge of reinforcement learning (RL) algorithms and practical application experience.
- Familiarity with AI interpretability tools such as SHAP, LIME, and Fairness Indicators.
- Proficiency in collaboration tools such as Jupyter Notebooks, Git, and Docker for version control and deployment.
Additional Tools & Technologies (Preferred Experience):
- Natural Language Processing (NLP): OpenAI GPT, BERT, T5, spaCy, NLTK, Hugging Face
- Machine Learning Frameworks: TensorFlow, PyTorch, Keras, Scikit-Learn
- Big Data Processing: Hadoop, Spark, Databricks, Dask
- Cloud Platforms: AWS SageMaker, Google AI Platform, Microsoft Azure ML, IBM Watson
- Automation & Deployment: Docker, Kubernetes, Terraform, Jenkins, CircleCI, GitLab CI/CD
- Visualization & Analysis: Tableau, Power BI, Plotly, Matplotlib, Seaborn, NumPy, Pandas
- Databases: RDBMS, NoSQL
- Version Control: Git, GitHub, GitLab
Why Join Us:
- Innovative Projects: Work on groundbreaking AI solutions and cutting-edge technology.
- Collaborative Team: Join a passionate, highly skilled, and collaborative team that values creativity and new ideas.
- Growth Opportunities: Develop your career in an expanding AI-focused company with continuous learning opportunities.
- Competitive Compensation: We offer a competitive salary and benefits package.
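To make the training-and-tuning workflow this listing mentions concrete, here is a small, self-contained scikit-learn sketch of hyperparameter optimization; it uses a synthetic dataset and illustrative parameter ranges rather than anything from the posting.

```python
# A hedged sketch of model training, hyperparameter search, and evaluation
# using scikit-learn on synthetic data. Parameter grid values are arbitrary.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Grid search over a few candidate hyperparameters with 3-fold CV.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid={
        "n_estimators": [100, 200],
        "learning_rate": [0.05, 0.1],
        "max_depth": [2, 3],
    },
    scoring="f1",
    cv=3,
    n_jobs=-1,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print(classification_report(y_test, search.best_estimator_.predict(X_test)))
```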
Posted 5 hours ago
5.0 - 7.0 years
19 Lacs
Kolkata, Mumbai, Hyderabad
Work from Office
Reporting to: Global Head of Data Operations
Role purpose:
As a Data Engineer, you will be a driving force towards data engineering excellence. Working with other data engineers, analysts, and the architecture function, you'll be involved in building out a modern data platform using a number of cutting-edge technologies, in a multi-cloud environment. You'll get the opportunity to spread your knowledge and skills across multiple areas, with involvement in a range of different functional areas. As the business grows, we want our staff to grow with us, so there'll be plenty of opportunity to learn and upskill in areas such as data pipelines, data integrations, data preparation, data models, and analytical and reporting marts. Also, whilst work often follows business requirements and design concepts, you'll play a huge part in the continuous development and maturing of design patterns and automation processes for others to follow.
Accountabilities and main responsibilities:
In this role, you will be delivering solutions and patterns through Agile methodologies as part of a squad. You'll be collaborating with customers, partners, and peers, and will help to identify data requirements. We'd also rely on you to:
- Help break down large problems into smaller iterative steps
- Contribute to defining the prioritisation of your squad's backlog
- Build out the modern data platform (data pipelines, data integrations, data preparation, data models, analytical and reporting marts) based on business requirements, using agreed design patterns
- Help determine the most appropriate tool, method, and design pattern to satisfy the requirement
- Proactively suggest improvements where you see issues
- Learn how to prepare our data in order to surface it for use within APIs
- Learn how to document, support, manage, and maintain the modern data platform built within your squad
- Learn how to provide guidance and training to downstream consumers of data on how best to use the data in our platform
- Learn how to support and build new data APIs
- Contribute to evangelising and educating within Sanne about the better use and value of data
- Comply with all Sanne policies
- Any other duties in the scope of the role that the company requires
Qualifications and skills:
Technical skills:
- Data warehousing and data modelling
- Data lakes (AWS Lake Formation, Azure Data Lake)
- Cloud data warehouses (AWS Redshift, Azure Synapse, Snowflake)
- ETL/ELT/pipeline tools (AWS Glue, Azure Data Factory, FiveTran, Stitch)
- Data message bus/pub-sub systems (AWS SNS & SQS, Azure ASQ, Kafka, RabbitMQ)
- Data programming languages (SQL, Python, Scala, Java)
- Cloud workflow services (AWS Step Functions, Azure Logic Apps, Camunda)
- Interactive query services (AWS Athena, Azure DL Analytics)
- Event and schedule management (AWS Lambda Functions, Azure Functions)
- Traditional Microsoft BI stack (SQL Server, SSIS, SSAS, SSRS)
- Reporting and visualisation tools (Power BI, QuickSight, Mode)
- NoSQL and graph DBs (AWS Neptune, Azure Cosmos, Neo4j) (desirable)
- API management (desirable)
Core skills:
- Excellent communication and interpersonal skills
- Critical thinking and research capabilities
- Strong problem-solving skills
- Ability to plan and manage your own workloads
- Works well on own initiative as well as part of a bigger team
- Working knowledge of Agile software development lifecycles
Posted 16 hours ago
0.0 - 2.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon
About Tredence:
Tredence focuses on last-mile delivery of powerful insights into profitable actions by uniting its strengths in business analytics, data science, and software engineering. The largest companies across industries are engaging with us and deploying their prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia. Tredence is an equal opportunity employer. We celebrate and support diversity and are committed to creating an inclusive environment for all employees. Visit our website for more details.
Role Overview:
We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure or GCP. As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.
Key Responsibilities:
- Develop robust and scalable data pipelines using PySpark on cloud platforms like Azure Databricks or GCP Dataflow.
- Write optimized SQL queries for data transformation, analysis, and validation.
- Implement and support data warehouse models and principles, including: fact and dimension modeling; star and snowflake schemas; Slowly Changing Dimensions (SCD); Change Data Capture (CDC); Medallion Architecture.
- Monitor, troubleshoot, and improve pipeline performance and data quality.
- Work with teams across analytics, business, and IT functions to deliver data-driven solutions.
- Communicate technical updates and contribute to sprint-level delivery.
Mandatory Skills:
- Strong hands-on experience with SQL and Python
- Working knowledge of PySpark for data transformation
- Exposure to at least one cloud platform: Azure or GCP
- Good understanding of data engineering and warehousing fundamentals
- Excellent debugging and problem-solving skills
- Strong written and verbal communication skills
Preferred Skills:
- Experience working with Databricks Community Edition or the enterprise version
- Familiarity with data orchestration tools like Airflow or Azure Data Factory
- Exposure to CI/CD processes and version control (e.g., Git)
- Understanding of Agile/Scrum methodology and collaborative development
- Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.)
Required Skills: Azure Databricks / GCP, Python, SQL, PySpark
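One of the warehouse patterns this listing names, Slowly Changing Dimensions (Type 2), can be sketched briefly in PySpark with Delta Lake. The table paths and columns (dim_customer, customer_id, address) are assumptions for illustration only.

```python
# A hedged, simplified SCD Type 2 sketch with Delta Lake: close out current
# dimension rows whose attributes changed, then append new current versions.
# (A production job would first filter `updates` to rows that actually changed.)
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
updates = spark.read.format("delta").load("/mnt/silver/customers_changed")

dim = DeltaTable.forPath(spark, "/mnt/gold/dim_customer")

# Step 1: expire the currently active row when a tracked attribute changed.
(dim.alias("d")
    .merge(updates.alias("u"),
           "d.customer_id = u.customer_id AND d.is_current = true")
    .whenMatchedUpdate(
        condition="d.address <> u.address",
        set={"is_current": "false", "end_date": "current_date()"})
    .execute())

# Step 2: append the new current versions with open-ended validity.
(updates.withColumn("is_current", F.lit(True))
        .withColumn("start_date", F.current_date())
        .withColumn("end_date", F.lit(None).cast("date"))
        .write.format("delta").mode("append").save("/mnt/gold/dim_customer"))
```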
Posted 18 hours ago
5.0 - 10.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Job Summary:
We are seeking a skilled Azure Databricks Developer with strong Terraform expertise to join our data engineering or cloud team. This role involves building, automating, and maintaining scalable data pipelines and infrastructure in the Azure cloud environment using Databricks and Infrastructure as Code (IaC) practices. The ideal candidate has hands-on experience with data processing in Databricks and cloud provisioning using Terraform.
Key Responsibilities:
- Develop and optimize data pipelines using Azure Databricks (Spark, Delta Lake, notebooks, jobs)
- Design and automate infrastructure provisioning on Azure using Terraform
- Collaborate with data engineers, analysts, and cloud architects to integrate Databricks with other Azure services (e.g., Data Lake, Synapse, Key Vault)
- Maintain CI/CD pipelines for deploying Databricks and Terraform configurations
- Apply best practices for security, scalability, cost optimization, and performance
- Monitor and troubleshoot jobs and infrastructure components
- Document architecture, processes, and configuration standards
Required Skills & Experience:
- 5+ years of experience in Azure Databricks, including PySpark, notebooks, cluster management, and Delta Lake
- Strong hands-on experience in Terraform for managing cloud infrastructure (especially Azure)
- Proficiency in Python and SQL
- Experience with Azure services: Azure Data Lake, Azure Data Factory, Azure Key Vault, Azure DevOps
- Familiarity with CI/CD pipelines and version control (e.g., Git)
- Good understanding of data engineering concepts and cloud-native architecture
Preferred Qualifications:
- Azure certifications (e.g., DP-203, AZ-104, or AZ-400)
- Knowledge of the Databricks CLI, REST API, and workspace automation
- Experience with monitoring and alerting for data pipelines and cloud resources
- Understanding of cost management for Databricks and Azure services
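Terraform itself is written in HCL, so as a uniform-language illustration here is a short PySpark/SQL sketch of the Delta Lake maintenance side of this role (file compaction and history cleanup, which bear directly on the cost and performance goals above); the table name silver.events is a hypothetical placeholder.

```python
# A minimal Delta Lake maintenance sketch for Databricks, assuming a table
# named silver.events exists. Retention and Z-order columns are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate frequently filtered columns.
spark.sql("OPTIMIZE silver.events ZORDER BY (event_date, customer_id)")

# Remove files no longer referenced by the table, retaining 7 days of history.
spark.sql("VACUUM silver.events RETAIN 168 HOURS")
```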
Posted 18 hours ago
7.0 - 10.0 years
20 - 30 Lacs
Hyderabad, Chennai
Work from Office
Proficient in designing and delivering data pipelines in cloud data warehouses (e.g., Snowflake, Redshift), using various ETL/ELT tools such as Matillion, dbt, Striim, etc. Solid understanding of database systems (relational/NoSQL) and data modeling techniques.
Required Candidate Profile: Looking for candidates with strong experience in data architecture. Potential companies: Tiger Analytics, Tredence, Quantiphi, Data Engineering Group within Infosys/TCS/Cognizant, Deloitte Consulting.
Perks and benefits: 5 working days - Onsite
Posted 18 hours ago
8.0 - 13.0 years
17 - 20 Lacs
Bengaluru
Work from Office
Project description:
We are seeking an experienced Senior Project Manager with a strong background in delivering data engineering and Python-based development projects. In this role, you will manage cross-functional teams and lead Agile delivery for high-impact, cloud-based data initiatives. You'll work closely with data engineers, scientists, architects, and business stakeholders to ensure projects are delivered on time, within scope, and aligned with strategic objectives. The ideal candidate combines technical fluency, strong leadership, and Agile delivery expertise in data-centric environments.
Responsibilities:
- Lead and manage data engineering and Python-based development projects, ensuring timely delivery and alignment with business goals.
- Work closely with data engineers, data scientists, architects, and product owners to gather requirements and define project scope.
- Translate complex technical requirements into actionable project plans and user stories.
- Oversee sprint planning, backlog grooming, daily stand-ups, and retrospectives in Agile/Scrum environments.
- Ensure best practices in Python coding, data pipeline design, and cloud-based data architecture are followed.
- Identify and mitigate risks, manage dependencies, and escalate issues when needed.
- Own stakeholder communications, reporting, and documentation of all project artifacts.
- Track KPIs and delivery metrics to ensure accountability and continuous improvement.
Skills
Must have:
- Experience: Minimum 8+ years of project management experience, including 3+ years managing data and Python-based development projects.
- Agile Expertise: Strong experience delivering projects in Agile/Scrum environments with distributed or hybrid teams.
- Technical Fluency: Solid understanding of Python, data pipelines, and ETL/ELT workflows. Familiarity with cloud platforms such as AWS, Azure, or GCP. Exposure to tools like Airflow, dbt, Spark, Databricks, or Snowflake is a plus.
- Tools: Proficiency with JIRA, Confluence, Git, and project dashboards (e.g., Power BI, Tableau).
- Soft Skills: Strong communication, stakeholder management, and leadership skills. Ability to translate between technical and non-technical audiences. Skilled in risk management, prioritization, and delivery tracking.
Nice to have: N/A
Other: Languages: English, C1 Advanced. Seniority: Senior.
Posted 19 hours ago
5.0 - 10.0 years
7 - 12 Lacs
Lucknow
Work from Office
Functional Area: Data Engineering / SAP BW/4HANA / Automation
Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark); experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports with these tools is a plus.
- SAP HANA: Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills:
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
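The ABAP/AMDP work above is SAP-specific, but the file-transfer responsibility can be sketched generically. Below is a hedged Python example of pulling a daily extract over SFTP with paramiko; the host, credentials, and file paths are invented for illustration.

```python
# Illustrative only: fetching a daily flat-file extract over SFTP with
# paramiko, of the kind a BW/4HANA inbound interface might consume.
import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="bw_feed", password="***")  # credentials are placeholders
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    # Pull the remote extract into the local staging directory.
    sftp.get("/outbound/sales_20240101.csv", "/data/inbound/sales_20240101.csv")
finally:
    sftp.close()
    transport.close()
```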
Posted 19 hours ago
5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
Job Title: Sr. Data Engineer - Ontology & Knowledge Graph Specialist
Department: Platform Engineering
Summary:
We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.
Responsibilities:
Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.
Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.
Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.
Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.
Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.
Qualifications:
Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.
Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
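As a small illustration of the semantic-web stack this listing names (RDF, SPARQL), here is a hedged Python sketch using rdflib: it builds a tiny class hierarchy and queries it. The example namespace and classes are hypothetical, not actual BFO/CCO terms.

```python
# A minimal knowledge-graph sketch with rdflib: declare a small ontology
# fragment, add one instance, and query it with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.com/ontology/")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Declare a tiny class hierarchy and one instance.
g.add((EX.Supplier, RDF.type, RDFS.Class))
g.add((EX.Organization, RDF.type, RDFS.Class))
g.add((EX.Supplier, RDFS.subClassOf, EX.Organization))
g.add((EX.acme, RDF.type, EX.Supplier))
g.add((EX.acme, RDFS.label, Literal("Acme Corp")))

# Query the graph with SPARQL for all suppliers and their labels.
results = g.query(
    """
    SELECT ?s ?label WHERE {
        ?s a ex:Supplier ;
           rdfs:label ?label .
    }
    """,
    initNs={"ex": EX, "rdfs": RDFS},
)
for s, label in results:
    print(s, label)
```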
Posted 19 hours ago
5.0 - 10.0 years
7 - 12 Lacs
Ludhiana
Work from Office
Functional Area: Data Engineering / SAP BW/4HANA / Automation
Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark); experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports with these tools is a plus.
- SAP HANA: Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills:
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 19 hours ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Functional Area: Data Engineering / SAP BW/4HANA / Automation
Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Responsibilities:
- Design, develop, and maintain high-performance data solutions within the SAP BW/4HANA environment, ensuring data quality and integrity.
- Utilize ABAP and AMDP to develop efficient data extraction, transformation, and loading (ETL) processes within SAP BW/4HANA.
- Optimize existing data models, data flows, and query performance in SAP BW/4HANA.
- Implement and manage data automation workflows using tools such as Automic or similar scheduling platforms.
- Support and troubleshoot data pipelines, which may include integration with Hadoop-based systems and other data sources.
- Collaborate effectively with international team members across different time zones, participating in meetings and sharing knowledge.
- Actively participate in all phases of the project lifecycle, from requirements gathering and design to development, testing, and deployment, utilizing tools like Micro Focus ALM and MF Service Manager for project tracking and issue management.
- Write clear and concise technical documentation for developed data solutions and processes.
- Work with file transfer protocols (e.g., SFTP, FTP) to manage data exchange with various systems.
- Utilize collaboration tools such as Jira and Confluence for task management, knowledge sharing, and documentation.
- Ensure adherence to data governance policies and standards.
- Proactively identify and resolve performance bottlenecks and data quality issues within the SAP BW/4HANA system.
- Stay up-to-date with the latest advancements in SAP BW/4HANA, data engineering technologies, and automation tools.
- Contribute to the continuous improvement of our data engineering processes and methodologies.
Technical Skills:
- SAP BW/4HANA: Extensive hands-on experience in designing, developing, and administering SAP BW/4HANA systems, including data modeling (LSA++, virtual data models), data extraction from various sources (SAP and non-SAP), transformations, and loading processes.
- ABAP for BW/4HANA: Strong proficiency in ABAP programming, specifically for developing routines, transformations, and other custom logic within SAP BW/4HANA.
- AMDP: Solid experience in developing and optimizing data transformations using ABAP Managed Database Procedures (AMDP) for performance enhancement.
- Data Engineering Principles: Strong understanding of data warehousing concepts, data modeling techniques, ETL/ELT processes, and data quality principles.
- Automation Tools: Hands-on experience with automation tools, preferably Automic, for scheduling and managing data workflows.
- Hadoop (beneficial): Familiarity with Hadoop ecosystems and technologies (e.g., HDFS, Hive, Spark); experience integrating them with SAP BW/4HANA is a plus.
- File Transfer Protocols: Experience working with various file transfer protocols (e.g., SFTP, FTP, HTTPS) for data exchange.
- SQL: Strong SQL skills for data querying and analysis.
- SAP BW Query Designer and BEx Analyzer (beneficial): Experience creating and troubleshooting reports with these tools is a plus.
- SAP HANA: Good understanding of SAP HANA as the underlying database for SAP BW/4HANA.
Functional Skills:
- Strong analytical and problem-solving skills with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, both written and verbal, with the ability to effectively communicate with technical and non-technical stakeholders in an international setting.
- Proven ability to collaborate effectively within a geographically distributed team.
- Experience working with project management tools like Micro Focus ALM and MF Service Manager for issue tracking and project lifecycle management.
- Familiarity with collaboration tools such as Jira and Confluence.
- Ability to work independently and manage tasks effectively in a remote environment.
- Adaptability to different cultural norms and communication styles within a global team.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-10 years of professional experience as a Data Engineer with a focus on SAP BW/4HANA.
- Proven track record of successful implementation and support of SAP BW/4HANA data solutions.
- Strong understanding of data warehousing principles and best practices.
- Excellent communication and collaboration skills in an international environment.
Bonus Points:
- SAP BW/4HANA certification.
- Experience with other data warehousing technologies.
- Knowledge of SAP Analytics Cloud (SAC) and its integration with SAP BW/4HANA.
- Experience with agile development methodologies.
Posted 19 hours ago
6.0 - 9.0 years
8 - 11 Lacs
Chennai
Work from Office
About the job:
Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.
Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.
Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW will be a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.
Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
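To ground the Fabric notebook work described above, here is a brief, hedged PySpark sketch of a lakehouse transformation: reading a bronze table, conforming SAP-style fields with Spark SQL, and writing a silver Delta table. The lakehouse and table names are assumptions, not details from the posting.

```python
# A hedged sketch of a Fabric-style notebook transformation. The SAP field
# names (vbeln = sales document, erdat = created-on date, netwr = net value)
# are standard SAP columns used here for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.read.table("lakehouse_bronze.sap_sales_raw").createOrReplaceTempView("sales_raw")

conformed = spark.sql("""
    SELECT CAST(vbeln AS STRING)          AS sales_doc,
           CAST(erdat AS DATE)            AS created_on,
           CAST(netwr AS DECIMAL(15, 2))  AS net_value
    FROM sales_raw
    WHERE vbeln IS NOT NULL
""")

conformed.write.format("delta").mode("overwrite").saveAsTable("lakehouse_silver.sales")
```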
Posted 19 hours ago
7.0 - 12.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about TII:
At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
Team Overview:
Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects, and product managers striving to make Target the most convenient, safe, and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation, and continuous learning.
Position Overview:
As a Lead Data Engineer, you will serve as the technical anchor for the engineering team, responsible for designing and developing scalable, high-performance data solutions. You will own and drive data architecture that supports both functional and non-functional business needs, ensuring reliability, efficiency, and scalability. Your expertise in big data technologies, distributed systems, and cloud platforms will help shape the engineering roadmap and best practices for data processing, analytics, and real-time data serving. You will play a key role in architecting and optimizing data pipelines using Hadoop, Spark, Scala/Java, and cloud technologies to support enterprise-wide data initiatives. Additionally, experience with API development for serving low-latency data and Customer Data Platforms (CDP) will be a strong plus.
Key Responsibilities:
- Architect and build scalable, high-performance data pipelines and distributed data processing solutions using Hadoop, Spark, Scala/Java, and cloud platforms (AWS/GCP/Azure).
- Design and implement real-time and batch data processing solutions, ensuring data is efficiently processed and made available for analytical and operational use.
- Develop APIs and data services to expose low-latency, high-throughput data for downstream applications, enabling real-time decision-making.
- Optimize and enhance data models, workflows, and processing frameworks to improve performance, scalability, and cost-efficiency.
- Drive data governance, security, and compliance best practices.
- Collaborate with data scientists, product teams, and business stakeholders to understand requirements and deliver data-driven solutions.
- Lead the design, implementation, and lifecycle management of data services and solutions.
- Stay up to date with emerging technologies and drive adoption of best practices in big data engineering, cloud computing, and API development.
- Provide technical leadership and mentorship to engineering teams, promoting best practices in data engineering and API design.
About You:
- 7+ years of experience in data engineering, software development, or distributed systems.
- Expertise in big data technologies such as Hadoop, Spark, and distributed processing frameworks.
- Strong programming skills in Scala and/or Java (Python is a plus).
- Experience with cloud platforms (AWS, GCP, or Azure) and their data ecosystems (e.g., S3, BigQuery, Databricks, EMR, Snowflake).
- Proficiency in API development using REST, GraphQL, or gRPC to serve real-time and batch data.
- Experience with real-time and streaming data architectures (Kafka, Flink, Kinesis, etc.).
- Strong knowledge of data modeling, ETL pipeline design, and performance optimization.
- Understanding of data governance, security, and compliance in large-scale data environments.
- Experience with Customer Data Platforms (CDP) or customer-centric data processing is a strong plus.
- Strong problem-solving skills and ability to work in complex, unstructured environments.
- Excellent communication and collaboration skills, with experience working in cross-functional teams.
Why Join Us:
- Work with cutting-edge big data, API, and cloud technologies in a fast-paced, collaborative environment.
- Influence and shape the future of data architecture and real-time data services at Target.
- Solve high-impact business problems using scalable, low-latency data solutions.
- Be part of a culture that values innovation, learning, and growth.
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
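The real-time serving side of this role can be illustrated with a short Spark Structured Streaming sketch, reading events from Kafka into a Delta table. This is a generic example, not Target's actual stack; the broker, topic, schema, and paths are hypothetical.

```python
# A minimal Structured Streaming sketch: parse JSON events from Kafka and
# continuously write them to a Delta table for low-latency downstream serving.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("guest-events-stream").getOrCreate()

# Hypothetical event schema.
schema = StructType([
    StructField("guest_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "guest-events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Checkpointing makes the stream restartable with exactly-once sink semantics.
query = (events.writeStream.format("delta")
         .option("checkpointLocation", "/chk/guest-events")
         .start("/gold/guest_events"))
query.awaitTermination()
```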
Posted 19 hours ago
2.0 - 6.0 years
5 - 9 Lacs
Pune
Work from Office
Data Engineer
Common Skills: SQL, GCP BigQuery, ETL pipelines using Python/Airflow, experience with Spark/Hive/HDFS, and data modeling for data conversion. Resources: 4.
Prior experience working on a conversion/migration HR project is an additional skill needed along with the skills mentioned above. The Data Engineer should know the HR domain; all other requirements for the functional area are given by the customer.
Customer Name: Uber
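As an illustration of the Python/Airflow-into-BigQuery pipelines this listing implies, here is a hedged Airflow DAG sketch using the standard Google provider operator; the project, dataset, and table names are hypothetical, and the schedule syntax assumes Airflow 2.4+.

```python
# Illustrative only: a daily Airflow DAG that runs a BigQuery load query.
# Dataset/table names (hr_dw, hr_staging) are invented placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="hr_conversion_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # The `configuration` field is templated, so {{ ds }} resolves to the
    # logical run date at execution time.
    load_employees = BigQueryInsertJobOperator(
        task_id="load_employees",
        configuration={
            "query": {
                "query": """
                    INSERT INTO `hr_dw.employees`
                    SELECT * FROM `hr_staging.employees_delta`
                    WHERE load_date = DATE('{{ ds }}')
                """,
                "useLegacySql": False,
            }
        },
    )
```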
Posted 20 hours ago
8.0 - 10.0 years
7 - 11 Lacs
Hyderabad, Pune
Work from Office
Sr Data Engineer
We are looking for a highly skilled Senior Data Engineer with strong expertise in Data Warehousing & Analytics to join our team. The ideal candidate will have extensive experience in designing and managing data solutions, advanced SQL proficiency, and hands-on expertise in Python.
Key Responsibilities:
- Design, develop, and maintain scalable data warehouse solutions.
- Write and optimize complex SQL queries for data extraction, transformation, and reporting.
- Develop and automate data pipelines using Python.
- Work with AWS cloud services for data storage, processing, and analytics.
- Collaborate with cross-functional teams to provide data-driven insights and solutions.
- Ensure data integrity, security, and performance optimization.
- Work in UK shift hours to align with global stakeholders.
Required Skills & Experience:
- 8-10 years of experience in Data Warehousing & Analytics.
- Strong proficiency in writing complex SQL queries, with a deep understanding of query optimization, stored procedures, and indexing.
- Hands-on experience with Python for data processing and automation.
- Experience working with AWS cloud services.
- Ability to work independently and collaborate with teams across different time zones.
Good to Have:
- Experience in the finance domain and understanding of financial data structures.
- Hands-on experience with reporting tools like Power BI or Tableau.
Posted 20 hours ago
3.0 - 8.0 years
5 - 13 Lacs
Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)
Work from Office
Design and implement Python AI/ML/Gen AI models, algorithms, and applications. Coordinate with data scientists to translate their ideas into working solutions. Apply ML techniques to explore and analyze data for patterns and insights. Support deployment of models to production environments.
Required Candidate Profile: Strong proficiency in Python, including its ML libraries. Experience with machine learning algorithms and techniques. Understanding of AI concepts and neural networks. Experience with one of the cloud platforms and cloud-based machine learning.
Perks and benefits: 10% additional variable on top of fixed pay, plus mediclaim.
Posted 20 hours ago
3.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Work from Office
3+ years of data engineering experience, preferably with one or more of these technologies: SQL, ANSI SQL, LookML, Data Modeling Language (DML), or BigQuery. Understanding of how to structure and consume data in Looker to provide optimal performance. Telco background. Experience developing data solutions in the cloud using GCP, AWS, or Azure.
Primary skills: SQL, BigQuery, Looker
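A brief sketch of the BigQuery side of this role, using the official google-cloud-bigquery Python client; the dataset, table, and columns are invented telco-style placeholders.

```python
# Illustrative only: run an aggregate query against a hypothetical BigQuery
# table and print the results. Requires google-cloud-bigquery and GCP auth.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

sql = """
    SELECT region, COUNT(DISTINCT subscriber_id) AS active_subs
    FROM `telco_dw.daily_usage`
    WHERE usage_date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
    GROUP BY region
    ORDER BY active_subs DESC
"""

for row in client.query(sql).result():
    print(row.region, row.active_subs)
```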
Posted 20 hours ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata
Work from Office
This role involves the development and application of engineering practice and knowledge in defining, configuring, and deploying industrial digital technologies (including but not limited to PLM and MES) for managing continuity of information across the engineering enterprise, including design, industrialization, manufacturing, and supply chain, and for managing manufacturing data.
Grade-Specific Focus: Digital Continuity and Manufacturing. Develops competency in own area of expertise. Shares expertise and provides guidance and support to others. Interprets clients' needs. Completes own role independently or with minimal supervision. Identifies problems and relevant issues in straightforward situations and generates solutions. Contributes to teamwork and interacts with customers.
Posted 20 hours ago