
984 ADF Jobs - Page 9

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Vadodara, Gujarat, India

On-site


Xylem is a Fortune 500 global water solutions company dedicated to advancing sustainable impact and empowering the people who make water work every day. As a leading water technology company with 23,000 employees operating in over 150 countries, Xylem is at the forefront of addressing the world's most critical water challenges. We invite passionate individuals to join our team, dedicated to exceeding customer expectations through innovative and sustainable solutions. As a Data Engineer, you will design, develop, and optimize scalable data pipelines and workflows to support advanced analytics and business intelligence needs. You will collaborate with cross-functional teams to ensure data accessibility, integrity, and security.

Core Responsibilities
- Design, develop, and implement robust data pipelines for data collection, transformation, and integration.
- Collaborate with senior engineers to architect scalable data solutions using Azure services, including Azure Data Factory and Databricks.
- Integrate data from SAP ERP systems and other enterprise platforms into modern cloud-based data ecosystems.
- Leverage Databricks for big data processing and workflow optimization.
- Work with stakeholders to understand data requirements, ensuring data quality and consistency.
- Maintain data governance practices to support compliance and security protocols.
- Support analytics teams by providing well-structured, reliable data for reporting and machine learning projects.
- Troubleshoot and resolve data pipeline and workflow issues.

Qualifications
- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3–5 years of experience in data engineering or a related role.
- Proficiency in Azure technologies, including Azure Data Factory, Azure SQL Database, and Databricks.
- Experience with SAP data integration is a plus.
- Strong SQL and Python programming skills for data engineering tasks.
- Familiarity with data modeling concepts (e.g., star and snowflake schemas) and best practices.
- Experience with CI/CD pipelines for deploying data workflows and infrastructure.
- Knowledge of distributed file systems like Azure Data Lake or equivalent cloud storage solutions.
- Basic understanding of Apache Spark for distributed data processing.
- Strong problem-solving skills and a collaborative mindset.

Technical Knowledge
- Deep understanding of Azure cloud infrastructure and services, particularly those related to data management (e.g., Azure Data Lake, Azure Blob Storage, Azure SQL Database).
- Experience with Azure Data Factory (ADF) for orchestrating ETL pipelines and automating data workflows.
- Familiarity with Azure Databricks for big data processing, machine learning, and collaborative analytics.
- Expertise in Apache Spark for distributed data processing and large-scale analytics.
- Familiarity with Databricks, including managing clusters and optimizing performance for big data workloads.
- Understanding of the Databricks Bronze, Silver, and Gold (medallion) model.
- Understanding of distributed file systems like HDFS and cloud-based equivalents like Azure Data Lake.
- Proficiency in SQL and NoSQL databases, including designing schemas, query optimization, and managing large datasets.
- Experience with data warehousing solutions such as Databricks, Azure Synapse Analytics, or Snowflake.
- Familiarity with connecting data lakehouses to Power BI.
- Understanding of OLAP (Online Analytical Processing) and OLTP (Online Transaction Processing) systems.
- Strong grasp of data modeling techniques, including conceptual, logical, and physical data models.
- Experience with star schema, snowflake schema, and normalization for designing scalable, performant databases.
- Knowledge of data architecture best practices, ensuring efficient data flow, storage, and retrieval.
- Knowledge of CI/CD pipelines for automating the deployment of data pipelines, databases, and infrastructure.
- Experience with infrastructure-as-code tools like Terraform or Azure Resource Manager to manage cloud resources.

Preferred Qualifications
- Familiarity with tools like Apache Airflow or other workflow orchestration tools.
- Knowledge of Azure Monitor or similar tools for system performance tracking.
- Certifications in Azure Data Engineering or related cloud platforms.

Join the global Xylem team to be a part of innovative technology solutions transforming water usage, conservation, and re-use. Our products impact public utilities, industrial sectors, residential areas, and commercial buildings, with a commitment to providing smart metering, network technologies, and advanced analytics for water, electric, and gas utilities. Partner with us in creating a world where water challenges are met with ingenuity and dedication, and where we recognize the power of inclusion and belonging in driving innovation and allowing us to compete more effectively around the world.
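For illustration, a minimal PySpark sketch of the Bronze → Silver refinement step in the medallion model referenced above; the table names, columns, and partitioning choices are invented placeholders, not this employer's actual pipeline.

```python
# Hypothetical bronze -> silver refinement on Databricks (names are placeholders).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw (bronze) meter readings ingested as-is from the source system.
bronze_df = spark.read.table("bronze.meter_readings")

# Apply basic cleansing and typing to produce the silver layer.
silver_df = (
    bronze_df
    .dropDuplicates(["reading_id"])                       # remove duplicate ingests
    .filter(F.col("reading_value").isNotNull())           # drop incomplete rows
    .withColumn("reading_date", F.to_date("reading_ts"))  # derive a partition-friendly date
)

# Persist as a Delta table partitioned by date for downstream gold aggregations.
(silver_df.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("reading_date")
    .saveAsTable("silver.meter_readings"))
```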

Posted 1 week ago


5.0 years

0 Lacs

In, Tandjilé, Chad

On-site


Job Description
Job Title – Azure Data Engineer
Candidate Specification – 5+ years, Notice Period – Immediate to 30 days, Hybrid.

Job Description
- Strong in Azure Data Factory (ADF) and Azure Databricks.
- Experience in Azure Synapse Analytics and Azure Data Lake Storage (Gen2).
- Database experience – Azure SQL Database / SQL Server.
- Proficiency in writing complex SQL queries and working with large datasets.
- Experience with Python, Scala, and PySpark for data transformations.
- Knowledge of DevOps practices and tools (e.g., Azure DevOps, CI/CD for data pipelines).

Skills Required
Role: Azure Data Engineer
Industry Type: IT / Computers - Software
Functional Area:
Required Education: Bachelor Degree
Employment Type: Full Time, Permanent
Key Skills: AZURE DATAFACTORY, AZURE DATABRICKS, PYTHON

Other Information
Job Code: GO/JC/186/2025
Recruiter Name:

Posted 1 week ago


3.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site


Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Oracle Cloud Visual Builder Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and troubleshooting to guarantee that the applications function seamlessly within the business environment, contributing to the overall success of the projects you are involved in. Roles & Responsibilities: - Expected to be an SME. - Analyze requirements, determine technical level of effort and prepare technical design and specifications. - Conversant in deploying and troubleshooting, analyzing, and resolving technical problems - Conduct Design review to provide guidance and Quality assurance around best practices and frameworks Professional & Technical Skills: - Must To Have Skills: Proficiency in Oracle Cloud Visual Builder. - Overall 4+ years of experience in Web App development (Oracle ADF) - 2 to 3 years of experience in Oracle VBCS (Visual Builder Cloud Service) - Knowledge of Oracle database and PL/SQL - Experience in GIT-HUB, Oracle Developer Cloud and UCD tools for build and deployment - Good hands on knowledge in JavaScript, CSS3, XML/JSON/WSDL, Consuming Web Services(SOAP/REST), Testing Tools(Postman/SoapUI/JMeter) -Experience with building different types of application in VBCS using Business Object, ORDS -Knowledge and experience in integration with other Oracle PaaS services. -Experience with integrating VBCS applications with Oracle SaaS Applications -Work experience on development of SaaS extensions using VBCS -Experience of various web service related technologies such as WSDL/XML/SOAP/REST/JSON standards -Hands on writing SQL Queries -Good communication interpersonal skills. Good analytical and debugging skills Additional Information: - The candidate should have minimum 4 years of experience in Oracle Cloud Visual Builder. - A 15 years full time education is required. Show more Show less

Posted 1 week ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Engineer Location: Hyderabad, Kochi, Trivandrum Experience Required: 10-19 Yrs Skills: Primary - Scala, Pyspark, Python / Secondary - ETL, SQL, Azure Role Proficiency The role demands expertise in building robust, scalable data pipelines that support ingestion, wrangling, transformation, and integration of data from multiple sources. The ideal candidate should have hands-on experience with ETL tools (e.g., Informatica, AWS Glue, Databricks, GCP DataProc), and strong programming skills in Python, PySpark, SQL, and optionally Scala. Proficiency across various data domains and familiarity with modern data warehouse and lakehouse architectures (Snowflake, BigQuery, Delta Lake, Lakehouse) is essential. A solid understanding of DevOps and infrastructure cost optimization is required. Key Responsibilities & Outcomes Technical Development Develop high-performance data pipelines and applications. Optimize development using design patterns and reusable solutions. Create and tune code using best practices for performance and scalability. Develop schemas, data models, and data storage solutions (SQL/NoSQL/Delta Lake). Perform debugging, testing, and validation to ensure solution quality. Documentation & Design Produce high-level and low-level design (HLD, LLD, SAD) and architecture documentation. Prepare infra costing, source-target mappings, and business requirement documentation. Contribute to and govern documentation standards/templates/checklists. Project & Team Management Support Project Manager in planning, delivery, and sprint execution. Estimate effort and provide input on resource planning. Lead and mentor junior team members, define goals, and monitor progress. Monitor and manage defect lifecycle including RCA and proactive quality improvements. Customer Interaction Gather and clarify requirements with customers and architects. Present design alternatives and conduct product demos. Ensure alignment with customer expectations and solution architecture. Testing & Release Design and review unit/integration test cases and execution strategies. Provide support during system/integration testing and UAT. Oversee and execute release cycles and configurations. Knowledge Management & Compliance Maintain compliance with configuration management plans. Contribute to internal knowledge repositories and reusable assets. Stay updated and certified on relevant technologies/domains. Measures of Success (KPIs) Adherence to engineering processes and delivery schedules. Number of post-delivery defects and non-compliance issues. Reduction in recurring defects and faster resolution of production bugs. Timeliness in detecting, responding to, and resolving pipeline/data issues. Improvements in pipeline efficiency (e.g., runtime, resource utilization). Team engagement and upskilling; completion of relevant certifications. Zero or minimal data security/compliance breaches. Expected Deliverables Code High-quality data transformation scripts and pipelines. Peer-reviewed, optimized, and reusable code. Documentation Design documents, technical specifications, test plans, and infra cost estimations. Configuration & Testing Configuration management plans and test execution results. Knowledge Sharing Contributions to SharePoint, internal wikis, client university platforms. Skill Requirements Mandatory Technical Skills Languages : Python, PySpark, Scala ETL Tools : Apache Airflow, Talend, Informatica, AWS Glue, Databricks, DataProc Cloud Platforms : AWS, GCP, Azure (esp. 
BigQuery, DataFlow, ADF, ADLS) Data Warehousing : Snowflake, BigQuery, Delta Lake, Lakehouse architecture Performance Tuning : For large-scale distributed systems and pipelines Additional Skills Experience in data model design and optimization. Good understanding of data schemas, window functions, and data partitioning strategies. Awareness of data governance, security standards, and compliance. Familiarity with DevOps, CI/CD, infrastructure cost estimation. Certifications (Preferred) Cloud certifications (e.g., AWS Data Analytics, GCP Data Engineer) Informatica or Databricks certification Domain-specific certifications based on project/client need Soft Skills Strong analytical and problem-solving capabilities Excellent communication and documentation skills Ability to work independently and collaboratively in cross-functional teams Stakeholder management and customer interaction Show more Show less
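As an illustration of the window-function and data-partitioning idioms this role calls for, here is a small, hedged PySpark sketch; the table, columns, and output path are hypothetical.

```python
# Hedged sketch: keep the latest record per customer, then write partitioned output.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
orders = spark.read.table("sales.orders")  # placeholder table

# Rank rows within each customer by recency and keep only the newest one.
latest = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
current_orders = (
    orders
    .withColumn("rn", F.row_number().over(latest))
    .filter("rn = 1")
    .drop("rn")
)

# Partition by order_date so downstream reads can prune partitions.
(current_orders.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/orders_current"))
```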

Posted 1 week ago


6.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


About Beyond Key
We are a Microsoft Gold Partner and a Great Place to Work-certified company. "Happy Team Members, Happy Clients" is a principle we hold dear. We are an international IT consulting and software services firm committed to providing cutting-edge services and products that satisfy our clients' global needs. Our company was established in 2005, and since then we've expanded our team to more than 350 talented, skilled software professionals. Our clients come from the United States, Canada, Europe, Australia, the Middle East, and India, and we create and design IT solutions for them. If you need any more details, you can get them at https://www.beyondkey.com/about.

Job Summary
We're looking for a hands-on Azure DevOps & Data Engineer who can bridge the gap between platform automation and data engineering. You'll work on automating and optimizing our Azure data pipelines and deployments using Azure DevOps, Logic Apps, Data Factory, and SQL-based solutions. The role requires strong command over T-SQL and experience managing workflows and releases in a modern Azure setup.

Experience Required: 6+ years

Key Responsibilities
Azure DevOps
- Build and maintain CI/CD pipelines for deploying ADF, SQL scripts, Logic Apps, and other data components.
- Manage Azure DevOps Repos, Pipelines, and Releases for consistent deployments.
- Set up deployment automation and rollback mechanisms across dev, test, and prod.
Azure Data Services
- Design and manage data pipelines using Azure Data Factory (ADF) linked services, triggers, and parameterized workflows.
- Develop and maintain Azure SQL Database and Azure SQL Managed Instance objects.
- Leverage Azure Logic Apps to orchestrate workflows, alerting, approvals, and integrations with other systems.
Database
- Write and optimize complex SQL queries, stored procedures, and functions.
- Perform query tuning, indexing, and data integrity checks.
- Work with large datasets and troubleshoot performance issues.
Monitoring & Maintenance
- Set up monitoring and alerting using Azure Monitor, Log Analytics, or custom alerts in ADF and Logic Apps.
- Handle data job failures, pipeline errors, and CI/CD release troubleshooting.
Collaboration & Documentation
- Collaborate with data analysts, business users, and platform engineers.
- Maintain up-to-date documentation of pipeline workflows, release notes, and known issues.

Required Skills
- Solid experience with Azure DevOps (Pipelines, Repos, Releases).
- Hands-on expertise in Azure Data Factory, Azure Logic Apps, Azure SQL Database, and SQL Managed Instance.
- Strong command over SQL (SPs, UDFs, performance tuning, query plans).
- Good understanding of Git-based source control and branching models.
- Experience in troubleshooting integration flows and ETL/ELT processes.

Nice-to-Have (Not Mandatory)
- Exposure to Power BI, Data Lake.
- Basic scripting in PowerShell or Python.
- Understanding of RBAC, resource tagging, and cost monitoring in Azure.

Soft Skills
- Strong analytical and debugging skills.
- Proactive communicator and collaborator.
- Able to handle multiple deployments and shifting priorities.
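One way the CI/CD-plus-ADF combination described above is often exercised is by triggering a pipeline run programmatically after a deployment. The hedged sketch below uses the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline, and parameter names are placeholders, not this employer's actual setup.

```python
# Hedged sketch: kick off an ADF pipeline run and check its status from Python.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

# Start the pipeline (e.g., as a post-deployment smoke test in a release stage).
run = adf_client.pipelines.create_run(
    resource_group_name="rg-data-platform",   # placeholder
    factory_name="adf-analytics",             # placeholder
    pipeline_name="pl_load_sales",            # placeholder
    parameters={"load_date": "2024-01-01"},
)

# Retrieve the run status for logging or gating the release.
status = adf_client.pipeline_runs.get(
    "rg-data-platform", "adf-analytics", run.run_id
).status
print(f"Pipeline run {run.run_id} is {status}")
```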

Posted 1 week ago


6.0 - 8.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


Senior Data Engineer (Remote, Contract 6 Months) Databricks, ADF, and PySpark. We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions. #KeyResponsibilities Build scalable ETL pipelines and implement robust data solutions in Azure. Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults. Design and maintain secure and efficient data lake architecture. Work with stakeholders to gather data requirements and translate them into technical specs. Implement CI/CD pipelines for seamless data deployment using Azure DevOps. Monitor data quality, performance bottlenecks, and scalability issues. Write clean, organized, reusable PySpark code in an Agile environment. Document pipelines, architectures, and best practices for reuse. #MustHaveSkills Experience: 6+ years in Data Engineering Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance Agile, SDLC, Containerization (Docker), Clean coding practices #GoodToHaveSkills Event Hubs, Logic Apps Power BI Strong logic building and competitive programming background #ContractDetails Role: Senior Data Engineer Mode: Remote Duration: 6 Months Locations : Mumbai, Delhi / NCR, Bengaluru , Kolkata, Chennai, Hyderabad, Ahmedabad, Pune

Posted 1 week ago


3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Oracle Cloud Visual Builder Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and troubleshooting to ensure that the applications function as intended, contributing to the overall success of the projects you are involved in. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Analyze requirements, determine technical level of effort and prepare technical design and specifications. - Conversant in deploying and troubleshooting, analyzing, and resolving technical problems - Hands on writing SQL Queries - Conduct Design review to provide guidance and Quality assurance around best practices and frameworks Professional & Technical Skills: - Overall 4+ years of experience in Web App development (Oracle ADF) - 2 to 3 years of experience in Oracle VBCS (Visual Builder Cloud Service) - Good hands on knowledge in JavaScript, CSS3, XML/JSON/WSDL,Consuming Web Services(SOAP/REST),Testing Tools(Postman/SoapUI/JMeter) - Experience with building different types of application in VBCS using Business Object, ORDS - Knowledge and experience in integration with other Oracle PaaS services. - Experience with integrating VBCS applications with Oracle SaaS Applications - Work experience on development of SaaS extensions using VBCS - Experience of various web service related technologies such as WSDL/XML/SOAP/REST/JSON standards - Knowledge of Oracle database and PL/SQL - Experience in GIT-HUB, Oracle Developer Cloud and UCD tools for build and deployment - Good communication interpersonal skills. Good analytical and debugging skills Additional Information: - The candidate should have minimum 3 years of experience in Oracle Cloud Visual Builder. .- A 15 years full time education is required. Show more Show less

Posted 1 week ago


7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


1. Job Title: Senior Azure Engineer (Azure Platform Operations & Automation) Experience: 5–7 years Location: Onsite/Remote (Noida) Reports To: Technical Manager / Architect Budget: Max. 12 LPA Responsibilities: · Manage and troubleshoot ADF and Databricks workflows, ensuring triggers, linked services, parameters, and pipelines function correctly end-to-end. · Investigate and resolve complex job failures; debug Spark jobs, and analyze notebook execution graphs and logs. · Lead performance optimization for ADF pipelines, partitioning strategies, and ADLS data formats (e.g., Parquet tuning). · Execute and automate data pipeline deployment using Azure DevOps, ARM templates, PowerShell scripts, and Git repositories. · Govern data lifecycle rules, partition retention, and enforce consistency across raw/curated zones in ADLS. · Monitor resource consumption (clusters, storage, pipelines) and advise on cost-saving measures (auto-scaling, tiering, concurrency). · Prepare RCA for P1/P2 incidents and support change deployment validation, rollback strategy, and UAT coordination. · Review Power BI refresh bottlenecks, support L1 Power BI developer with dataset tuning and refresh scheduling improvements. · Validate SOPs and support documentation prepared by L1s, and drive process improvement via automation or standardization. Required Skills · Expert in Azure Data Factory, Databricks (PySpark), Azure Data Lake Storage, Synapse. · Proficient in Python, PySpark, SQL/SparkSQL, and JSON configurations. · Familiar with Azure DevOps, Git for version control, and CI/CD automation. · Hands-on with monitoring (Azure Monitor), diagnostics, and cost governance. · Strong understanding of data security practices, IAM, RBAC, and audit trail enforcement Show more Show less
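A minimal sketch of the Parquet and partition tuning theme above: compacting many small files in a curated ADLS zone before downstream consumption. The storage path, partition count, and output location are assumptions for illustration only.

```python
# Hedged sketch: rewrite a folder of many small Parquet files into fewer, larger ones.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

source_path = "abfss://curated@datalake.dfs.core.windows.net/sales/daily"  # placeholder

df = spark.read.parquet(source_path)

# Repartition to a smaller, size-appropriate number of output files before rewriting,
# so downstream ADF copies and Power BI refreshes scan fewer objects.
(df.repartition(32)
   .write
   .mode("overwrite")
   .parquet(source_path + "_compacted"))
```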

Posted 1 week ago


6.0 years

0 Lacs

India

Remote


Job Title: Senior Data Engineer Location: Remote Experience: 6+ years Job Summary: We are seeking a highly skilled Senior Data Engineer with deep expertise in C#, Azure Data Factory (ADF), Databricks, SQL Server, and Python . The ideal candidate will have a strong understanding of modern CI/CD practices and experience in designing, developing, and optimizing complex data pipelines . Key Responsibilities: Design, develop, and maintain robust, scalable, and efficient data pipelines using Azure Data Factory, Databricks, and SQL Server. Write clean, scalable, and efficient code in C# and Python . Build and manage ETL/ELT processes and ensure data integrity and quality. Optimize SQL queries and database performance. Implement best practices in data engineering , including CI/CD pipelines and version control. Work closely with data scientists, analysts, and business stakeholders to understand data needs. Troubleshoot and resolve issues related to data processing and performance. Document technical solutions and processes clearly and concisely. Required Skills & Experience: 6+ years of experience in Data Engineering . Proficiency in C# and Python for data processing and automation. Strong hands-on experience with Azure Data Factory and Azure Databricks . In-depth experience with SQL Server and writing optimized SQL queries. Solid understanding of CI/CD practices and tools (Azure DevOps, Git, etc.). Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Nice to Have: Experience with Delta Lake , Azure Synapse , or Power BI . Knowledge of big data concepts and tools. Familiarity with data governance , security , and compliance standards . Show more Show less

Posted 1 week ago


5.0 years

0 Lacs

India

Remote


Job Title: Senior Data Engineer Experience: 5+ Years Location: Remote Contract Duration: Short Term Work Time: IST Shift Job Description We are seeking a skilled and experienced Senior Data Engineer to develop scalable and optimized data pipelines using the Databricks Lakehouse platform. The role requires proficiency in Apache Spark, PySpark, cloud data services (AWS, Azure, GCP), and solid programming knowledge in Python and Java. The engineer will collaborate with cross-functional teams to design and deliver high-performing data solutions. Responsibilities Data Pipeline Development Build efficient ETL/ELT workflows using Databricks and Spark for batch and streaming data Utilize Delta Lake and Unity Catalog for structured data management Optimize Spark jobs using tuning techniques such as caching, partitioning, and serialization Cloud-Based Implementation Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery) Manage and optimize data storage, access control, and orchestration using native cloud tools Implement data ingestion and querying with Databricks Auto Loader and SQL Warehousing Programming and Automation Write clean, reusable, and production-grade code in Python and Java Automate workflows using orchestration tools like Airflow, ADF, or Cloud Composer Implement testing, logging, and monitoring mechanisms Collaboration and Support Work closely with data analysts, scientists, and business teams to meet data requirements Support and troubleshoot production workflows Document solutions, maintain version control, and follow Agile/Scrum methodologies Required Skills Technical Skills Databricks: Experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration Spark: Proficient in transformations, joins, window functions, and tuning Programming: Strong in PySpark and Java, with data validation and error handling expertise Cloud: Experience with AWS, Azure, or GCP data services and security frameworks Tools: Familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools Experience 5–8 years in data engineering or backend development Minimum 1–2 years of hands-on experience with Databricks and Spark Experience with large-scale data migration, processing, or analytics projects Certifications (Optional but Preferred) Databricks Certified Data Engineer Associate Working Conditions Full-time remote work with availability during IST hours Occasional on-site presence may be required during client visits No regular travel required On-call support expected during deployment phases Show more Show less
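To illustrate the Databricks Auto Loader ingestion mentioned above, a hedged sketch follows; the landing container, schema location, checkpoint path, and target table are placeholders, and the `cloudFiles` source requires the Databricks runtime.

```python
# Hedged sketch: incremental ingestion with Databricks Auto Loader into a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw_stream = (
    spark.readStream
        .format("cloudFiles")                       # Auto Loader source (Databricks only)
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/_schemas/events")
        .load("abfss://landing@datalake.dfs.core.windows.net/events/")
)

(raw_stream.writeStream
    .option("checkpointLocation", "/mnt/_checkpoints/events_bronze")
    .trigger(availableNow=True)                     # batch-style incremental run
    .toTable("bronze.events"))
```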

Posted 1 week ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Engineer – Databricks, Delta Live Tables, Data Pipelines Location: Bhopal / Hyderabad / Pune (On-site) Experience Required: 5+ Years Employment Type: Full-Time Job Summary: We are seeking a skilled and experienced Data Engineer with a strong background in designing and building data pipelines using Databricks and Delta Live Tables. The ideal candidate should have hands-on experience in managing large-scale data engineering workloads and building scalable, reliable data solutions in cloud environments. Key Responsibilities: Design, develop, and manage scalable and efficient data pipelines using Databricks and Delta Live Tables . Work with structured and unstructured data to enable analytics and reporting use cases. Implement data ingestion , transformation , and cleansing processes. Collaborate with Data Architects, Analysts, and Data Scientists to ensure data quality and integrity. Monitor data pipelines and troubleshoot issues to ensure high availability and performance. Optimize queries and data flows to reduce costs and increase efficiency. Ensure best practices in data security, governance, and compliance. Document architecture, processes, and standards. Required Skills: Minimum 5 years of hands-on experience in data engineering . Proficient in Apache Spark , Databricks , Delta Lake , and Delta Live Tables . Strong programming skills in Python or Scala . Experience with cloud platforms such as Azure , AWS , or GCP . Proficient in SQL for data manipulation and analysis. Experience with ETL/ELT pipelines , data wrangling , and workflow orchestration tools (e.g., Airflow, ADF). Understanding of data warehousing , big data ecosystems , and data modeling concepts. Familiarity with CI/CD processes in a data engineering context. Nice to Have: Experience with real-time data processing using tools like Kafka or Kinesis. Familiarity with machine learning model deployment in data pipelines. Experience working in an Agile environment. Show more Show less
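A minimal Delta Live Tables sketch in the style this role describes; the source path, expectation rule, and table names are invented, and the code runs only inside a Databricks DLT pipeline, where `spark` is provided by the runtime.

```python
# Hedged DLT sketch: bronze ingestion plus a cleansed silver table with a quality rule.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders landed from the source system")
def orders_bronze():
    # `spark` is supplied by the Databricks runtime inside a DLT pipeline.
    return (
        spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/landing/orders/")           # placeholder landing path
    )

@dlt.table(comment="Cleansed orders with basic quality rules applied")
@dlt.expect_or_drop("valid_amount", "order_amount > 0")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
           .withColumn("order_date", F.to_date("order_ts"))
    )
```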

Posted 1 week ago


3.0 years

0 Lacs

India

Remote


Title: Data Engineer Location: Remote Employment type: Full Time with BayOne We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics. What You'll Do Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable Work on modern data lakehouse architectures and contribute to data governance and quality frameworks Tech Stack Azure | Databricks | PySpark | SQL What We’re Looking For 3+ years experience in data engineering or analytics engineering Hands-on with cloud data platforms and large-scale data processing Strong problem-solving mindset and a passion for clean, efficient data design Job Description: Min 3 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Data Bricks etc. Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design and dimensional data modelling Solid knowledge of data warehouse best practices, development standards and methodologies Experience with ETL/ELT tools like ADF, Informatica, Talend etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon redshift, Snowflake, Google Big Query etc. Strong experience with big data tools (Databricks, Spark etc..) and programming skills in PySpark and Spark SQL. Be an independent self-learner with “let’s get this done” approach and ability to work in Fast paced and Dynamic environment. Excellent communication and teamwork abilities. Nice-to-Have Skills: Event Hub, IOT Hub, Azure Stream Analytics, Azure Analysis Service, Cosmo DB knowledge. SAP ECC /S/4 and Hana knowledge. Intermediate knowledge on Power BI Azure DevOps and CI/CD deployments, Cloud migration methodologies and processes BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description. Show more Show less

Posted 1 week ago


8.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

Remote


Job Title : Senior Azure Data Factory Engineer Experience Required : 8+ Years Location : Remote Department : Data Engineering / Analytics / IT Job Summary We are looking for a highly experienced and motivated Senior Azure Data Factory Developer with 8+ years of experience in data engineering and cloud data integration. The ideal candidate should have a strong background in Azure Data Services, ETL/ELT pipelines, and modern data warehousing solutions. Key Responsibilities Design, develop, and deploy scalable data pipelines using Azure Data Factory (ADF). Integrate various structured and unstructured data sources from on-premises and cloud environments. Implement and manage data ingestion workflows and orchestration pipelines. Develop ADF pipelines, datasets, linked services, and triggers for data flow and control flow. Collaborate with data architects, analysts, and other stakeholders to gather data requirements. Implement data transformation and mapping logic using Data Flows, Databricks, SQL, or Azure Functions. Ensure data quality, security, and performance optimization across solutions. Monitor, troubleshoot, and optimize existing data pipelines. Work with Azure services such as Azure SQL Database, Azure Synapse Analytics, Azure Blob Storage, and Azure Data Lake. Leverage CI/CD tools and DevOps practices for deployment and version control. Provide mentorship and code reviews to junior developers. Required Skills & Qualifications 8+ years of experience in data engineering, ETL development, or related roles. 4+ years of hands-on experience with Azure Data Factory. Strong expertise in SQL Server, T-SQL, and performance tuning. Experience with Azure Synapse Analytics, Azure Data Lake Storage (ADLS), and Blob Storage. Working knowledge of Azure DevOps, Git, and CI/CD pipelines. Strong understanding of data modeling, data governance, and metadata management. Knowledge of Power BI or other BI tools is a plus. Familiarity with Databricks, Python, or Spark is an added advantage. Excellent problem-solving, analytical, and communication skills. Education & Certifications Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Microsoft Certified: Azure Data Engineer Associate (preferred). Nice To Have Experience working in Agile environments. Exposure to real-time data streaming (e.g., Azure Event Hubs, Kafka). (ref:hirist.tech) Show more Show less

Posted 1 week ago


3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Summary
We are looking for a Data Engineer with solid hands-on experience in Azure-based data pipelines and Snowflake to help build and scale data ingestion, transformation, and integration processes.

Responsibilities
- Develop and maintain data pipelines using ADF, Snowflake, and Azure Storage.
- Perform data integration from various sources including APIs, flat files, and databases.
- Write clean, optimized SQL and support data modeling efforts in Snowflake.
- Monitor and troubleshoot pipeline issues and data quality concerns.
- Contribute to documentation and promote best practices across the team.

Requirements
- 3 to 5 years of experience in data engineering or a related role.
- Strong hands-on knowledge of Snowflake, Azure Data Factory, SQL, and Azure Data Lake.
- Proficient in scripting (Python preferred) for data manipulation and automation.
- Understanding of data warehousing concepts and ETL/ELT patterns.
- Experience with Git, JIRA, and agile delivery environments is a plus.
- Strong attention to detail and eagerness to learn in a collaborative team setting.
(ref:hirist.tech)
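As a hedged illustration of the ADF-plus-Snowflake pattern above, the sketch below loads files that a pipeline has already landed in Azure storage into a Snowflake table via the Python connector; the account, stage, table, and credentials are placeholders.

```python
# Hedged sketch: COPY staged Parquet files into Snowflake from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",   # placeholder
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Assumes an external stage (@azure_stage) pointing at the ADLS/Blob container
    # that the ADF pipeline writes to.
    cur.execute("""
        COPY INTO staging.customer
        FROM @azure_stage/customer/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())   # per-file load results
finally:
    conn.close()
```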

Posted 1 week ago


5.0 - 8.0 years

9 - 13 Lacs

Bengaluru

Work from Office


Azure backend expert (ADLS, ADF and Azure SQL DW)4+Yrs/Immediate Joiners only One Azure backend expert (Strong SC or Specialist Senior) Should have hands-on experience of working with ADLS, ADF and Azure SQL DW Should have minimum 3 Years working experience of delivering Azure projects. Must Have:- 3 to 8 years of experience working on design, develop, and deploy ETL processes on Databricks to support data integration and transformation. Optimize and tune Databricks jobs for performance and scalability. Experience with Scala and/or Python programming languages. Proficiency in SQL for querying and managing data. Expertise in ETL (Extract, Transform, Load) processes. Knowledge of data modeling and data warehousing concepts. Implement best practices for data pipelines, including monitoring, logging, and error handling. Excellent problem-solving skills and attention to detail. Excellent written and verbal communication skills Strong analytical and problem-solving abilities. Experience in version control systems (e.g., Git) to manage and track changes to the codebase. Document technical designs, processes, and procedures related to Databricks development. Stay current with Databricks platform updates and recommend improvements to existing process. Good to Have:- Agile delivery experience. Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP). Knowledge of Agile and Scrum Software Development Methodologies. Understanding of data lake architectures. Familiarity with tools like Apache NiFi, Talend, or Informatica. Skills in designing and implementing data models. Skills: adf,sql,adls,azure,azure sql dw

Posted 1 week ago


9.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description Job Description : We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0. Responsibilities Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF). Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling. Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer). Develop and maintain bronze → silver → gold data layers using DBT or Coalesce. Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery. Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata. Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams). Work closely with QA teams to integrate test automation and ensure data quality. Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases. Document architectures, pipelines, and workflows for internal stakeholders. Requirements Essential Skills: Job Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions,Event Grid). Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python. Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts. Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modeling. Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers. Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX). Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection. Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates. Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations. Personal Excellent communication and interpersonal skills, with the ability to engage with teams. Strong problem-solving, decision-making, and conflict-resolution abilities. Proven ability to work independently and lead cross-functional teams. Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism. Ability to maintain confidentiality and handle sensitive information with attention to detail with discretion. The candidate must have strong work ethics and trustworthiness Must be highly collaborative and team oriented with commitment to excellence. Preferred Skills Job Proficiency in SQL and at least one programming language (e.g., Python, Scala). Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services. Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica). Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs). Experience with data modeling, data structures, and database design. 
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka). Personal Demonstrate proactive thinking Should have strong interpersonal relations, expert business acumen and mentoring skills Have the ability to work under stringent deadlines and demanding client conditions Ability to work under pressure to achieve the multiple daily deadlines for client deliverables with a mature approach Other Relevant Information Bachelor’s in Engineering with specialization in Computer Science or Artificial Intelligence or Information Technology or a related field. 9+ years of experience in data engineering and data architecture. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
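A minimal, hedged sketch of the schema-validation and data-profiling step described in this listing, written in plain PySpark rather than any specific tool such as Great Expectations; the input path and expected columns are hypothetical.

```python
# Hedged sketch: basic schema check plus a simple null-count profile for a landed extract.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("s3://raw-bucket/claims/2024/")   # placeholder path

# Fail fast if expected columns are missing (crude schema-drift detection).
expected_cols = {"claim_id", "policy_id", "claim_amount", "claim_date"}
missing = expected_cols - set(df.columns)
if missing:
    raise ValueError(f"Schema drift detected, missing columns: {missing}")

# Simple profile: row count plus null counts per required column.
profile = df.select(
    F.count(F.lit(1)).alias("rows"),
    *[F.sum(F.col(c).isNull().cast("int")).alias(f"{c}_nulls") for c in expected_cols],
)
profile.show()
```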

Posted 1 week ago


5.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: Senior Azure Engineer (Azure Platform Operations & Automation) Experience: 5-7 years Location: Onsite (Noida) Reports To: Technical Manager / Architect Responsibilities Manage and troubleshoot ADF and Databricks workflows, ensuring triggers, linked services, parameters, and pipelines function correctly end-to-end. Investigate and resolve complex job failures; debug Spark jobs, and analyze notebook execution graphs and logs. Lead performance optimization for ADF pipelines, partitioning strategies, and ADLS data formats (e.g., Parquet tuning). Execute and automate data pipeline deployment using Azure DevOps, ARM templates, PowerShell scripts, and Git repositories. Govern data lifecycle rules, partition retention, and enforce consistency across raw/curated zones in ADLS. Monitor resource consumption (clusters, storage, pipelines) and advise on cost-saving measures (auto-scaling, tiering, concurrency). Prepare RCA for P1/P2 incidents and support change deployment validation, rollback strategy, and UAT coordination. Review Power BI refresh bottlenecks, support L1 Power BI developer with dataset tuning and refreshing scheduling improvements. Validate SOPs and support documentation prepared by L1s, and drive process improvement via automation or standardization. Required Skills Expert in Azure Data Factory, Databricks (PySpark), Azure Data Lake Storage, Synapse. Proficient in Python, PySpark, SQL/SparkSQL, and JSON configurations. Familiar with Azure DevOps, Git for version control, and CI/CD automation. Hands-on with monitoring (Azure Monitor), diagnostics, and cost governance. Strong understanding of data security practices, IAM, RBAC, and audit trail enforcement. Show more Show less

Posted 1 week ago


10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: Lead Technical Architect (Strategy & Optimization – Data Lake & Analytics)
Experience: 10+ years
Location: Onsite (Noida)
Reports To: Client Stakeholders / Delivery Head

Responsibilities
- Manage project delivery: scope, timelines, budget, resource allocation, and risk mitigation.
- Develop and maintain robust data ingestion pipelines (batch, streaming, API).
- Provide architectural inputs during incident escalations and act as the final authority for RCA documentation and closure across ADF, Power BI, and Databricks.
- Define and enforce data governance, metadata, and quality standards across zones.
- Monitor performance, optimize data formats (e.g., Parquet), and tune for cost-efficiency.
- Tune query performance for Databricks and Power BI datasets using optimization techniques (e.g., caching, BI Engine, materialized views).
- Lead and mentor a team of data engineers, fostering skills in Azure services and DevOps.
- Guide schema designs for new datasets and integrations aligned with Diageo’s analytics strategy.
- Coordinate cross-functional stakeholders (security, DevOps, business) for aligned execution.
- Oversee incident and change management with SLA adherence and continuous improvement.
- Serve as the governance owner for SLA compliance, IAM policies, encryption standards, and data retention strategies.
- Ensure compliance with policies (RBAC, ACLs, encryption) and regulatory audits.
- Perform initial data collection for RCA.
- Report project status, KPIs, and business value to senior leadership.
- Lead monthly and quarterly reviews, presenting insights, improvements, and roadmap alignment to Diageo stakeholders.

Required Skills
- Strong architecture-level expertise in the Azure Data Platform (ADLS, ADF, Databricks, Synapse, Power BI).
- Deep understanding of data lake zone structuring, data lineage, metadata governance, and compliance (e.g., GDPR, ISO).
- Expert in Spark, PySpark, SQL, JSON, and automation tooling (ARM, Bicep; Terraform optional).
- Capable of aligning technical designs with business KPIs and change control frameworks.
- Excellent stakeholder communication, team mentoring, and leadership capabilities.

Posted 1 week ago


3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: Power BI Developer Experience: 3-5 years Location: Onsite (Noida) Reports To: Senior Azure Engineer / Technical Manager / Architect Responsibilities Monitor and develop Power BI Dashboards/Reports along with Power BI dataset refreshes (scheduled/manual) and gateway status using Power BI Service logs and alerts. Identify failures in dashboard visuals, data models, or refresh schedules and initiate first-level remediation (e.g., gateway restart, re-publishing). Collaborate with Azure Engineers to validate backend refresh (ADF/Databricks) issues affecting Power BI performance. Conduct daily dashboard validation checks post refresh cycles for critical reports. Respond to report-related SRs (new access, broken visuals, workspace moves), log and track issues, and resolve within SLA. Maintain refresh logs and summary dashboards for support traceability and performance reporting. Assist with visual enhancements and dataset changes in collaboration with the L2/Architect for larger CRs. Document report issues and corrective actions, contributing to the Power BI support knowledge base. Required Skills Proficient in DAX, Power Query (M), and Power BI Service features (gateways, workspaces, sharing). Strong SQL querying and data modeling experience. Exposure to Azure SQL, Synapse, or Databricks as backend sources is a plus. Familiarity with row-level security, role-based sharing, and Power BI governance best practices. Basic understanding of ticketing tools (e.g., ServiceNow). Show more Show less

Posted 1 week ago


3.0 - 4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: Azure Engineer (Azure Data Lake Operations) Experience: 3-4 years Location: Onsite (Noida) Shift: Rotational Shifts support including Weekends on need basis Reports To: Technical Manager / Architect Responsibilities Perform daily / monthly monitoring of ADF pipelines, Databricks notebooks, and ADLS health using Azure Monitor, Log Analytics, and ServiceNow. Querying data, basic troubleshooting of dataset issues, pipeline validation (in ADF/Databricks) using SQL/PySpark. Execute and track daily job health checks, validate schedule adherence, and ensure successful data ingestion runs. Triage alerts and incidents from automation tools and manual tickets; perform first-level diagnostics and routing. Monitor ADF/Databricks refreshes, failure alerts. Re-run failed jobs, validate trigger configurations, and raise SRs or incidents when beyond scope. Perform manual file availability checks and escalate delays to application or business stakeholders. Maintain operational logs and checklists for daily activities with consistent timestamping and status remarks. Acknowledge and act on alerts within SLA; provide inputs to L2 for RCA or escalation-worthy cases. Raise and track service requests in ServiceNow and maintain traceability until closure. Required Skills Proficiency in Azure Data Factory, Azure Data Lake Storage, Azure Monitor, Log Analytics. Strong SQL knowledge in writing complex queries for data validation and job verification. Working knowledge of ServiceNow or equivalent ticketing system. Exposure to Databricks and PySpark is preferred. Good understanding of Azure CLI/PowerShell is a plus. Show more Show less

Posted 1 week ago


0 years

0 Lacs

Kolkata, West Bengal, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Leadership Skills: Should be able to manage a large team of 20+ professionals, demonstrating strategic leadership and team collaboration. Will be responsible for broader discretion of hiring and firing, recommendation of ratings and promotions basis the performance. Technical Skills: Advanced expertise in data analytics and data visualization techniques. Proficient in data engineering, including data transformation, integration, acquisition, preparation, modeling, and master data management. Extensive experience in implementing visualization projects using tools like MS Power BI, Tableau, Spotfire, and others. Proficient with Microsoft BI technologies (SQL, SSIS, SSAS, SSRS) and Azure-BI solutions such as ADF, Synapse, and Databricks. Programming skills in SQL, DAX, MDX, Python, Power Shell Scripting, Node JS, React JS, C++, with proficiency in databases like MS-SQL Server, Access, and MySQL. Proficiency in developing and managing Excel macro-based dashboards. Proficiency in performance tuning and optimization for processing large datasets, ensuring efficient data retrieval and minimal response times. Experience with partitioning strategies, indexing, and query optimization to manage and expedite data access in high-volume environments. Skilled in using big data technologies and distributed computing frameworks to address scalability and performance challenges in large-scale data processing. Expertise in designing and optimizing data pipelines and ETL processes to improve data flow and reduce bottlenecks in extensive datasets. Familiarity with advanced analytics and machine learning algorithms for efficient processing and analysis of massive datasets to derive actionable insights. Knowledge of cloud-based data services and tools for scalable storage, analysis, and management of large volumes of data, including Azure Synapse, Snowflake, and Amazon Redshift. Soft Skills: Effective communication, analytical thinking, and problem-solving abilities. Managerial Roles: As a Manager at EY GDS, one should be capable of designing and delivering analytics foundations, managing a team, constructing dashboards, and employing critical thinking to resolve complex audit and non-audit issues. The role involves developing, reviewing, and analyzing solution architecture, gathering and defining requirements, leading project design, and overseeing implementation. He/She is responsible for owning the engagement economics of the team, updating key findings to the leadership, and assisting in alignment in case of discrepancies. It is essential to align and collaborate with the Service Delivery Manager (SDM) from various Digital delivery regions to perform project scoping, estimations, and strategically drive the deliveries to success. The Manager should identify the high and low outliers in the team and help align low outliers with the right learning paths to support their career alignments. Own the learning and development of the team and periodically revisit the learnings and advice the team and align them as per the emerging market trends. 
Perform R&D and produce POC that can prove the various capability of the team in implementing advanced concepts in visualizations and organize calls with various groups to explain the features and benefits. Try and implement it in engagements to lead the success. Should have periodical alignment on resource performance deployed in various engagements. Prioritise and assist team in generating the automation savings with unique ideas and with the help of cutting-edge implementations. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less

Posted 1 week ago


3.0 - 6.0 years

11 - 20 Lacs

Hyderabad

Work from Office


Work From Office location :- Hyderabad - Raheja Mindspace Rotational Shifts Exp:- 3 to 6 yrs Must have : Azure(3) Key Responsibilities : 3 to 6 years of experience in Production Support in the areas of Azure Data Pipeline, and Azure Data Factory. Experience in monitoring the Daily batch, following P1 & P2 Protocols. Strong oral and written communication skills are a must for this role. Speak confidently and establish a good relationship with stakeholders and the team Good appetite to learn the domain knowledge. Experience in writing medium complex SQL queries. Ability to troubleshoot the issues and provide detailed analysis for next steps. Ability to document the issues and follow-up for closure. Experience in monitoring tools Splunk and Dynatrace. Must work from Raheja all 5 days. Need to work on weekend and holiday shifts on Roster basis. Experience in Financial background and .NET experience is nice-to-have. Why Join Us? This is a fantastic opportunity to be at the forefront of cloud data architecture, working in a dynamic environment that fosters innovation and professional growth. Join us to make a significant impact in the way we handle data and drive business success. Please share the details below : CTC:- ECTC:- Exp:- NP/LWD:- Preferred job location:-

Posted 1 week ago


0 years

0 Lacs

Gurugram, Haryana, India

On-site


Gurgaon/Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality but also actionable, enabling AXA XL's executive leadership team to maximize benefits and sustain competitive advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics (IDA) team, is focused on driving innovation by optimizing how we leverage data to drive strategy and create a new business model, disrupting the insurance market. As we develop an enterprise-wide data and digital strategy with a greater focus on data and data-driven insights, we are seeking an Assistant Scientist for our Data Engineering team. The role will support the team's efforts to create, enhance, and stabilize the enterprise data lake through the development of data pipelines. This role requires a team player who can work well with members of other disciplines to deliver data in an efficient and strategic manner.

What You'll Be DOING
What will your essential responsibilities include?
- Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills.
- Relevant years of programming experience using Databricks.
- Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse and ADLS).
- Solid knowledge of network and firewall concepts.
- Provide technical guidance and mentorship to junior and mid-level developers.
- Lead design discussions and code reviews to ensure high-quality development.
- Utilize SQL to query and manipulate data from various sources, ensuring data accuracy and integrity.
- Develop and maintain data pipelines using DBT and Databricks to facilitate data processing and transformation (a hedged sketch of such a pipeline follows this listing).
- Extensive experience in SQL performance tuning and a proven track record of writing complex SQL queries.
- Demonstrated ability to optimize database performance, enhance data retrieval efficiency, and ensure the integrity and security of data across various applications.
- Collaborate with project managers to define project scope, timelines, and deliverables.
- Oversee the development process, ensuring projects are delivered on time and meet quality standards.
- Assist in deploying and managing data solutions on Azure Cloud, ensuring optimal performance and security.
- Create and maintain documentation for data processes, data models, and system architecture.
- Participate in data quality checks and troubleshooting to resolve data-related issues.
- Maintain integrity and quality across all pipelines and environments.
- Participate in the architectural design and decision-making processes for new projects and enhancements.
- Bring ideas to the table that help streamline and rationalize BTL jobs.
- Lead small teams of strategic partner/vendor team members.
- Work with business users to bring requests to closure.
You will report to the Lead Scientist.

What You Will BRING
We're looking for someone who has these abilities and skills:

Required Skills And Abilities
- Proficiency in SQL and experience with DBT is essential.
- Bachelor's degree in Computer Science, Mathematics, Statistics, Finance, a related technical field, or equivalent work experience.
- Proficiency in SQL for database querying and management.
- Excellent programming skills in Python, with experience in data manipulation and analysis.
- Hands-on experience designing and developing ETL pipelines is a must.
- Relevant years of exposure to and good proficiency in data warehousing concepts.
- Proficient in SQL and database design concepts.
- Good knowledge of unit testing and documentation (low-level designs).

Desired Skills And Abilities
- Understanding of the Azure cloud computing platform, specifically Azure Synapse and Azure Data Lake Storage (ADLS), is a plus.
- Experience with Databricks, Azure Data Factory (ADF), and PySpark is a must.
- A passion for data and experience in a data-driven organizational environment.
- A commitment to excellence and a genuine care for both your work and the overall mission of the organization.
- Knowledge of GitHub and build management practices is an added advantage.

Who WE are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals, we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business: property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What we OFFER

Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That's why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and to create a culture where everyone can bring their full selves to work and reach their highest potential. It's about helping one another, and our business, to move forward and succeed.
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe.
- Robust support for flexible working arrangements.
- Enhanced family-friendly leave benefits.
- Named to the Diversity Best Practices Index.
- Signatory to the UK Women in Finance Charter.
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
- Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We're committed to protecting and restoring nature, from mangrove forests to the bees in our backyard, by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
- Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
- Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
- AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day, the Global Day of Giving.
For more information, please see axaxl.com/sustainability.
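To make the DBT/Databricks pipeline work described in this listing more concrete, here is a minimal PySpark sketch of the kind of Databricks job such a role might build. It is illustrative only: the ADLS path, column names, and the curated.policies table are hypothetical placeholders, not details taken from the posting.

```python
# Minimal sketch (not AXA XL's actual codebase): a Databricks-style PySpark job that
# reads raw policy data from ADLS, applies basic cleansing, and writes a Delta table.
# All paths, column names, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("policy_pipeline_sketch").getOrCreate()

# Read raw CSV files landed in the data lake (assumed example path).
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/policies/")
)

# Basic cleansing: trim keys, drop duplicates, and stamp the load time.
clean = (
    raw.withColumn("policy_id", F.trim(F.col("policy_id")))
       .dropDuplicates(["policy_id"])
       .withColumn("loaded_at", F.current_timestamp())
)

# Persist as a Delta table for downstream DBT models and reporting.
clean.write.format("delta").mode("overwrite").saveAsTable("curated.policies")
```

In practice, a job like this would typically be orchestrated by ADF or Databricks workflows, with DBT models building the reporting layers on top of the curated table.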

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About The Role
We are looking for a Senior Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful; a Sr. Data Engineer who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot and support ETL pipelines and the cloud infrastructure involved in the process, and will also support the visualization team.

Roles and Responsibilities
- Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals.
- Demonstrate deep technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options.
- Determine the solutions best suited to develop a pipeline for a particular data source.
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development.
- Efficient in ETL/ELT development using Azure cloud services and Snowflake, testing, and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance).
- Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery (a hedged sketch of such a load appears after this listing).
- Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders.
- Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability).
- Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions.
- Build a cross-platform data strategy to aggregate multiple sources and process development datasets.
- Be proactive in stakeholder communication; mentor and guide junior resources through regular KT/reverse KT, help them identify production bugs and issues when needed, and provide resolution recommendations.

Job Requirements
- Bachelor’s degree in Computer Engineering, Computer Science or a related discipline; Master’s degree preferred.
- 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment.
- 5+ years of experience setting up and operating data pipelines using Python or SQL.
- 5+ years of advanced SQL programming: PL/SQL, T-SQL.
- 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization.
- Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads.
- 5+ years of strong, extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses and big data.
- 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions.
- 5+ years of experience defining and enabling data quality standards for auditing and monitoring.
- Strong analytical abilities and strong intellectual curiosity.
- In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts.
- Understanding of REST and good API design.
- Experience working with Apache Iceberg, Delta tables and distributed computing frameworks.
- Strong collaboration and teamwork skills, and excellent written and verbal communication skills.
- Self-starter, motivated, and able to work in a fast-paced development environment.
- Agile experience highly desirable.
- Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing and defect management tools.

Knowledge
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management).
- Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques.
- Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks.
- Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM) and data quality tools.
- Strong experience in ETL/ELT development, QA and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance).
- Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting.
- ADF, Databricks and Azure certifications are a plus.

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
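As a purely illustrative companion to the Snowflake and Databricks items above, the following sketch shows one common way to push a transformed PySpark DataFrame into Snowflake via the Snowflake Spark connector. All connection values, the curated.store_sales source, and the target table name are hypothetical, and the connector library is assumed to be installed on the cluster.

```python
# Minimal sketch (not Circle K's implementation): loading a transformed DataFrame
# from Databricks/PySpark into Snowflake. Connection values and table names are
# hypothetical; the Snowflake Spark connector must already be installed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("snowflake_load_sketch").getOrCreate()

# Assume an upstream step produced a curated Delta table of daily store sales.
daily_sales = (
    spark.table("curated.store_sales")
         .groupBy("store_id", "sale_date")
         .agg(F.sum("amount").alias("total_amount"))
)

# Hypothetical Snowflake connection options (normally read from a secret scope).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "LOAD_USER",
    "sfPassword": "<from-secret-store>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "RETAIL",
    "sfWarehouse": "LOAD_WH",
}

# Append the aggregate into a Snowflake table for downstream reporting.
(
    daily_sales.write
    .format("snowflake")          # shorthand format name available on Databricks
    .options(**sf_options)
    .option("dbtable", "DAILY_STORE_SALES")
    .mode("append")
    .save()
)
```

In a real pipeline the credentials would come from a secret scope or key vault rather than appearing in code, and the job would typically be triggered by ADF or a Databricks workflow.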

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About The Role
We are looking for a Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful; a Data Engineer who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by transforming data to achieve actionable business outcomes.

Roles and Responsibilities
- Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals.
- Demonstrate technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options.
- Determine the solutions best suited to develop a pipeline for a particular data source.
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development.
- Efficient in ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance).
- Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery.
- Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders.
- Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability).
- Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions.
- Build a cross-platform data strategy to aggregate multiple sources and process development datasets.
- Be proactive in stakeholder communication; mentor and guide junior resources through regular KT/reverse KT, help them identify production bugs and issues when needed, and provide resolution recommendations.

Job Requirements
- Bachelor’s degree in Computer Engineering, Computer Science or a related discipline; Master’s degree preferred.
- 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment.
- 3+ years of experience setting up and operating data pipelines using Python or SQL.
- 3+ years of advanced SQL programming: PL/SQL, T-SQL.
- 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization.
- Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads.
- 3+ years of strong, extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses and big data.
- 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions.
- 3+ years of experience defining and enabling data quality standards for auditing and monitoring.
- Strong analytical abilities and strong intellectual curiosity.
- In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts.
- Understanding of REST and good API design.
- Experience working with Apache Iceberg, Delta tables and distributed computing frameworks.
- Strong collaboration and teamwork skills, and excellent written and verbal communication skills.
- Self-starter, motivated, and able to work in a fast-paced development environment.
- Agile experience highly desirable.
- Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing and defect management tools (a hedged unit-test sketch appears after this listing).

Preferred Skills
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management).
- Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques.
- Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks.
- Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM) and data quality tools.
- Strong experience in ETL/ELT development, QA and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance).
- Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting.
- ADF, Databricks and Azure certifications are a plus.

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
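Since this listing highlights unit-testing and CI/CD tooling alongside pipeline development, here is a small, hypothetical pytest sketch showing how a single PySpark transformation might be tested locally before being promoted through a CI pipeline. The function, column names, and values are invented for illustration and are not part of the posting.

```python
# Minimal sketch, assuming pytest and a local SparkSession: a unit test for one
# small transformation of the kind described above. Names are hypothetical.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_fuel_margin(df):
    """Derive a margin column from hypothetical price and cost columns."""
    return df.withColumn("margin", F.col("price") - F.col("cost"))


@pytest.fixture(scope="module")
def spark():
    # A small local session is enough for transformation-level tests.
    return (
        SparkSession.builder
        .master("local[1]")
        .appName("unit_test_sketch")
        .getOrCreate()
    )


def test_add_fuel_margin(spark):
    df = spark.createDataFrame([(1, 100.0, 80.0)], ["txn_id", "price", "cost"])
    result = add_fuel_margin(df).collect()[0]
    assert result["margin"] == pytest.approx(20.0)
```

A CI job (for example in Jenkins or Azure DevOps) could run such tests on every commit before the pipeline code is deployed to the cluster.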

Posted 1 week ago

Apply


Featured Companies