0 years
0 Lacs
Hyderābād
On-site
Key Responsibilities: A day in the life of an Infoscion: as part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation and design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products, understand any issues, diagnose the root cause of such issues, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you. Technical Requirements: Primary skills: Technology: AWS DevOps; Technology: Cloud Integration - Azure Data Factory (ADF); Technology: Cloud Platform - AWS Database; Technology: Cloud Platform - Azure DevOps - Azure Pipelines; Technology: DevOps - Continuous Integration - Mainframe. Additional Responsibilities: Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills along with an ability to collaborate. Ability to assess current processes, identify improvement areas, and suggest technology solutions. Knowledge of one or two industry domains. Preferred Skills: Technology->DevOps->Continuous integration - Mainframe,Technology->Cloud Platform->Azure Devops->Azure Pipelines,Technology->Cloud Platform->AWS Database,Technology->Cloud Integration->Azure Data Factory (ADF)
Posted 3 weeks ago
6.0 years
0 Lacs
Delhi
Remote
Full time | Work From Office This Position is Currently Open Department / Category: ENGINEER Listed on Jun 30, 2025 Work Location: NEW DELHI BANGALORE HYDERABAD Job Description of Databricks Engineer 6 to 8 Years Relevant Experience We are looking for an experienced Databricks Engineer to join our data engineering team and contribute to designing and implementing scalable data solutions on the Azure platform. This role involves working closely with cross-functional teams to build high-performance data pipelines and maintain a modern Lakehouse architecture. Key Responsibilities: Design and develop scalable data pipelines using Spark-SQL and PySpark in Azure Databricks. Build and maintain Lakehouse architecture using Azure Data Lake Storage (ADLS) and Databricks. Perform comprehensive data preparation tasks, including: Data cleaning and normalization Deduplication Type conversions Collaborate with the DevOps team to deploy and manage solutions in production environments. Partner with Data Science and Business Intelligence teams to share insights, align on best practices, and drive innovation. Support change management through training, communication, and documentation during upgrades, data migrations, and system changes. Required Qualifications: 5+ years of IT experience with strong exposure to cloud technologies, particularly in Microsoft Azure. Hands-on experience with: Databricks, Azure Data Factory (ADF), and Azure Data Lake Storage (ADLS) Programming with PySpark, Python, and SQL Solid understanding of data engineering concepts, data modeling, and data processing frameworks. Ability to work effectively in distributed, remote teams. Excellent communication skills in English (both written and verbal). Preferred Skills: Strong working knowledge of distributed computing frameworks, especially Apache Spark and Databricks. Experience with Delta Lake and Lakehouse architecture principles. Familiarity with data tools and libraries such as Pandas, Spark-SQL, and PySpark. Exposure to on-premise databases such as SQL Server, Oracle, etc. Experience with version control tools (e.g., Git) and DevOps practices including CI/CD pipelines. Required Skills for Databricks Engineer Job Spark-SQL and PySpark in Azure Databricks Python SQL Our Hiring Process Screening (HR Round) Technical Round 1 Technical Round 2 Final HR Round
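As a rough illustration of the data preparation tasks this posting lists (cleaning, normalization, deduplication, type conversions) in PySpark on Azure Databricks, a minimal sketch follows. The storage paths, container names, and column names are hypothetical placeholders, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical raw table in ADLS-backed Bronze storage.
raw = spark.read.format("delta").load("abfss://bronze@account.dfs.core.windows.net/customers")

prepared = (
    raw
    # Normalization: trim whitespace and lower-case emails.
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    # Type conversions: cast string timestamps and amounts to proper types.
    .withColumn("signup_ts", F.to_timestamp("signup_ts"))
    .withColumn("lifetime_value", F.col("lifetime_value").cast("double"))
    # Cleaning: drop rows missing the business key.
    .dropna(subset=["customer_id"])
    # Deduplication: keep one row per customer.
    .dropDuplicates(["customer_id"])
)

# Write the curated result to the Silver layer as Delta.
prepared.write.format("delta").mode("overwrite").save(
    "abfss://silver@account.dfs.core.windows.net/customers"
)
```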
Posted 3 weeks ago
2.0 - 6.0 years
10 - 13 Lacs
Noida
On-site
Requirements Gathering & Data Analysis (~15%) Uncover Customer Needs: Actively gather customer requirements and analyze user needs to ensure software development aligns with real-world problems. Transform Needs into Action: Translate these requirements into clear and actionable software development tasks. Deep Collaboration: Collaborate daily with stakeholders across the project, including internal and external teams, to gain a comprehensive understanding of business objectives. Building the Foundation: System Architecture (~10%) Prototype & Analyze: Develop iterative prototypes while analyzing upstream data sources to ensure the solution aligns perfectly with business needs. Evaluate & Validate: Assess design alternatives, technical feasibility, and build proofs of concept to gather early user feedback and choose the most effective approach. Design for Scale: Craft a robust, scalable, and efficient database schema, documenting all architectural dependencies for future reference. Optimize Implementation: Translate functional specifications Write clean, well-documented, and efficient code (~55%): Technologies: Microsoft Fabric, Azure Synapse, Azure Data Explorer, along with other Azure services, Power BI, Machine Learning, Power Apps, Dynamic 365, HTML 5, and React. Azure Data Platform Specialist: Develop, maintain, and enhance data pipelines using Azure Data Factory (ADF) to streamline data flow. Analyze data models in Azure Analysis Services for deeper insights. Leverage the processing muscle of Azure Databricks for complex data transformations. Data Visualization Wizard: Craft compelling reports, dashboards, and analytical models using BI tools like Power BI to transform raw data into actionable insights. AI & Machine Learning Powerhouse: Craft and maintain cutting-edge machine learning models using Python to uncover hidden insights in data, predict future trends, and even integrate with powerful Large Language Models (LLMs) to unlock new possibilities. Full-Stack Rockstar: Build beautiful and interactive user interfaces (UIs) with the latest front-end frameworks like React, and craft powerful back-end code based on system specifications. Level up your coding with cutting-edge AI: Write code faster and smarter with AI-powered copilots that suggest code completions and help you learn the latest technologies. Quality Champion: Implement unit testing to ensure code quality and functionality. Utilize the latest frameworks and libraries to develop and maintain web applications that are efficient and reliable. Data-Driven Decisions: Analyze reports generated from various tools to identify trends and incorporate those findings into ongoing development for continuous improvement. Collaborative Code Craftsmanship: Foster a culture of code excellence through peer and external code reviews facilitated by Git and Azure DevOps. Automation Advocate: Automate daily builds for efficient verification and customer feedback, ensuring a smooth development process. Ensuring Seamless User Experience: Bridge the gap between defined requirements, business logic implemented in the database, and user experience to ensure users can easily interact with the data. Proactive Problem Solver: Proactively debug, monitor, and troubleshoot solutions to maintain optimal performance and a positive user experience. Quality Control and Assurance (10%) Code Excellence: Ensure code quality aligns with industry standards, best practices, and automated quality tools for maintainable and efficient development. 
Proactive Debugging: Continuously monitor, debug, and troubleshoot solutions to maintain optimal performance and reliability. End-to-End & Automated Testing: Implement automated testing frameworks to streamline testing processes, enhance coverage, and improve efficiency. Conduct comprehensive manual and automated tests across all stages of development to validate functionality, security, and user experience. AI-Powered Testing: Leverage AI-driven testing tools for intelligent test case generation. Collaborative Code Reviews: Foster a culture of excellence by conducting peer and external code reviews to enhance code quality and maintainability. Seamless Deployment: Oversee the deployment process, ensuring successful implementation and validation of live solutions. Continuous Learning & Skill Development (10%) Community & Training: Sharpen your skills by actively participating in technical learning communities and internal training programs. Industry Certifications: Earn industry-recognized certifications to stay ahead of the curve in in-demand technologies like data analysis, Azure development, data engineering, AI engineering, and data science (as applicable). Online Learning Platforms: Expand your skillset through online courses offered by platforms like Microsoft Learn, Coursera, edX, Udemy, and Pluralsight. Candidate Profile Eligible Branches: B. Tech./B.E. (CSE/IT) M. Tech./ M.E. (CSE/IT) Eligibility criteria: 60% plus or equivalent in Computer Science/Information Technology 2 to 6 years of software development experience Job Type: Full-time Pay: ₹1,000,000.00 - ₹1,300,000.00 per year Application Question(s): Notice Period Work Location: In person
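As a small, hedged illustration of the unit-testing responsibility described in this posting ("Quality Champion" and the automated-testing items), the sketch below tests a hypothetical Python data transformation with pytest; the function, column names, and conversion rule are invented for the example, not the employer's code.

```python
# Minimal pytest-style sketch of unit testing a data transformation.
import pandas as pd


def normalize_revenue(df: pd.DataFrame) -> pd.DataFrame:
    """Convert revenue in lakhs to absolute rupees and drop negative rows."""
    out = df.copy()
    out["revenue_inr"] = out["revenue_lakhs"] * 100_000
    return out[out["revenue_inr"] >= 0].reset_index(drop=True)


def test_normalize_revenue():
    # Arrange: one valid row, one invalid (negative) row, one boundary row.
    df = pd.DataFrame({"revenue_lakhs": [1.5, -2.0, 0.0]})
    # Act
    result = normalize_revenue(df)
    # Assert: negative row dropped, conversions correct, invariant holds.
    assert list(result["revenue_inr"]) == [150_000.0, 0.0]
    assert (result["revenue_inr"] >= 0).all()
```

Run with `pytest test_transform.py`; in a CI/CD pipeline the same test would gate the automated daily build mentioned above.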
Posted 3 weeks ago
15.0 - 20.0 years
20 - 25 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Locations: Offices in Austin (USA), Singapore, Hyderabad, Indore, Ahmedabad (India) Primary Job Location: Hyderabad / Indore / Ahmedabad (India) Role Type: Full-time | Onsite What You Will Do Role Overview As a Data Governance Architect, you will define and lead enterprise-wide data governance strategies, design robust governance architectures, and enable seamless implementation of tools like Microsoft Purview, Informatica, and other leading data governance platforms. This is a key role bridging compliance, data quality, security, and metadata management across cloud and enterprise ecosystems. Key Responsibilities 1. Strategy, Framework, and Operating Model Define governance strategies, standards, and policies for compliance and analytics readiness. Establish a governance operating model with clear roles and responsibilities. Conduct maturity assessments and lead change management efforts. 5. Metadata, Lineage & Glossary Management Architect technical and business metadata workflows. Validate end-to-end lineage across ADF → Synapse → Power BI. Govern glossary approvals and term workflows. 6. Policy & Data Classification Management Define and enforce rules for: Classification, Access, Retention, and Sharing. Leverage Microsoft Information Protection (MIP) for automation. Ensure alignment with GDPR, HIPAA, CCPA, SOX. 7. Data Quality Governance Define quality KPIs, validation logic, and remediation rules. Build scalable frameworks embedded in pipelines and platforms. 8. Compliance, Risk & Audit Oversight Establish compliance standards, dashboards, and alerts. Enable audit readiness and reporting through governance analytics. 9. Automation & Integration Automate workflows using: PowerShell, Azure Functions, Logic Apps, REST APIs. Integrate governance into: Azure Monitor, Synapse Link, Power BI, and third-party tools. Primary Skills Microsoft Purview Architecture & Administration Data Governance Framework Design Metadata & Data Lineage Management (ADF → Synapse → Power BI) Data Quality and Compliance Governance Informatica / Collibra / BigID / Alation / Atlan PowerShell, REST APIs, Azure Functions, Logic Apps RBAC, Glossary Governance, Classification Policies MIP, Insider Risk, DLP, Compliance Reporting Azure Data Factory, Agile Methodologies #Tags #DataGovernance #MicrosoftPurview #GovernanceArchitect #MetadataManagement #DataLineage #DataQuality #Compliance #RBAC #PowerShell #RESTAPI #Informatica #Collibra #BigID #AzureFunctions #ADF #Synapse #PowerBI #GDPR #HIPAA #CCPA #SOX #OnsiteJobs #HyderabadJobs #IndoreJobs #AhmedabadJobs #HiringNow #DataPrivacy #EnterpriseArchitecture #DSPM #GovernanceStrategy #InformationSecurity
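For the "Automation & Integration" responsibility above, a minimal sketch of calling the Microsoft Purview (Apache Atlas) REST API from Python is shown below. The account name is a placeholder and the endpoint path is an assumption that should be verified against the current Purview REST API documentation before use.

```python
# Hedged sketch: list Purview glossaries via the Atlas v2 REST API.
import requests
from azure.identity import DefaultAzureCredential

account = "contoso-purview"  # hypothetical Purview account name
credential = DefaultAzureCredential()
# Purview data-plane token scope.
token = credential.get_token("https://purview.azure.net/.default").token

resp = requests.get(
    # Endpoint path assumed from the Atlas v2 API; confirm against Purview docs.
    f"https://{account}.purview.azure.com/catalog/api/atlas/v2/glossary",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

# Print glossary names as a simple audit of governed terms.
for glossary in resp.json():
    print(glossary.get("name"), "-", glossary.get("shortDescription"))
```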
Posted 3 weeks ago
7.0 - 11.0 years
12 - 18 Lacs
Mumbai, Indore, Hyderabad
Work from Office
Key Responsibilities 1. Governance Strategy & Stakeholder Enablement Define and drive enterprise-level data governance frameworks and policies. Align governance objectives with compliance, analytics, and business priorities. Work with IT, Legal, Compliance, and Business teams to drive adoption. Conduct training, workshops, and change management programs. 2. Microsoft Purview Implementation & Administration Administer Microsoft Purview: accounts, collections, RBAC, and scanning policies. Design scalable governance architecture for large-scale data environments (>50TB). Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, and Snowflake. 3. Metadata & Data Lineage Management Design metadata repositories and workflows. Ingest technical/business metadata via ADF, REST APIs, PowerShell, Logic Apps. Validate end-to-end lineage (ADF → Synapse → Power BI), impact analysis, and remediation. 4. Data Classification & Security Implement and govern sensitivity labels (PII, PCI, PHI) and classification policies. Integrate with Microsoft Information Protection (MIP), DLP, Insider Risk, and Compliance Manager. Enforce lifecycle policies, records management, and information barriers. Working knowledge of GDPR, HIPAA, SOX, CCPA. Strong communication and leadership to bridge technical and business governance. Location: Mumbai, Hyderabad, Indore, Ahmedabad
Posted 3 weeks ago
3.0 years
20 - 23 Lacs
Chennai, Tamil Nadu, India
On-site
We are hiring a detail-oriented and technically strong ETL Test Engineer to validate, verify, and maintain the quality of complex ETL pipelines and Data Warehouse systems. The ideal candidate will have a solid understanding of SQL, data validation techniques, regression testing, and Azure-based data platforms including Databricks. Key Responsibilities Perform comprehensive testing of ETL pipelines, ensuring data accuracy and completeness across systems. Validate Data Warehouse (DWH) objects including fact and dimension tables. Design and execute test cases and test plans for data extraction, transformation, and loading processes. Conduct regression testing to validate enhancements and ensure no breakage of existing data flows. Work with SQL to write complex queries for data verification and backend testing. Test data processing workflows in Azure Data Factory and Databricks environments. Collaborate with developers, data engineers, and business analysts to understand requirements and raise defects proactively. Perform root cause analysis for data-related issues and suggest improvements. Create clear and concise test documentation, logs, and reports. Required Technical Skills Strong knowledge of ETL testing methodologies and tools Excellent skills in SQL (joins, aggregation, subqueries, performance tuning) Hands-on experience with Data Warehousing and data models (Star/Snowflake) Experience in test case creation, execution, defect logging, and closure Proficient in regression testing, data validation, data reconciliation Working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks Experience with test management tools like JIRA, TestRail, or HP ALM Nice to Have Exposure to automation testing for data pipelines Scripting knowledge in Python or PySpark Understanding of CI/CD in data testing Experience with data masking, data governance, and privacy rules Qualifications Bachelor’s degree in Computer Science, Information Systems, or related field 3+ years of hands-on experience in ETL/Data Warehouse testing Excellent analytical and problem-solving skills Strong attention to detail and communication skills Skills: regression,azure,data reconciliation,test management tools,data validation,azure databricks,etl testing,data warehousing,dwh,etl pipeline,test case creation,azure data factory,test cases,etl tester,regression testing,sql,databricks
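As a hedged sketch of the data verification work this role describes (completeness, key reconciliation, measure reconciliation), the example below runs three simple source-to-target checks on Databricks with PySpark; the table names, keys, and tolerance are hypothetical.

```python
# Minimal source-to-target reconciliation checks an ETL tester might automate.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

src = spark.table("staging.orders")       # hypothetical source extract
tgt = spark.table("dwh.fact_orders")      # hypothetical warehouse fact table

# 1. Row-count completeness check.
src_count, tgt_count = src.count(), tgt.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# 2. Key-level reconciliation: source keys missing from the target.
missing = src.select("order_id").subtract(tgt.select("order_id"))
assert missing.count() == 0, "Some source order_ids were not loaded"

# 3. Measure reconciliation: totals should agree within a small tolerance.
src_total = src.agg({"amount": "sum"}).first()[0] or 0.0
tgt_total = tgt.agg({"amount": "sum"}).first()[0] or 0.0
assert abs(src_total - tgt_total) < 0.01, "Amount totals diverge between layers"
```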
Posted 3 weeks ago
6.0 - 11.0 years
16 - 27 Lacs
Noida, Pune, Bengaluru
Hybrid
Location: Mumbai / Pune / Bangalore / Noida / Kochi Job Description: Key Responsibilities: Collaborate with stakeholders to identify and gather reporting requirements, translating them into Power BI dashboards (in collaboration with Power BI developers). Monitor, troubleshoot, and optimize data pipelines and Azure services for performance and reliability. Follow best practices in DevOps to implement CI/CD pipelines. Document pipeline architecture, infrastructure changes, and operational procedures. Required Skills Strong understanding of DevOps principles and CI/CD in Azure environments. Proven hands-on experience with: Azure Data Factory Azure Synapse Analytics Azure Function Apps Azure Infrastructure Services (Networking, Storage, RBAC, etc.) PowerShell scripting Experience in designing data workflows for reporting and analytics, especially integrating with Azure DevOps (ADO). Good to Have Experience with Azure Service Fabric. Hands-on exposure to Power BI or close collaboration with Power BI developers. Familiarity with Azure Monitor, Log Analytics, and other observability tools.
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Required Skills: YOE: 5+ Mode of work: Remote Design, develop, modify, and test software applications for the healthcare industry in an agile environment. Duties include: Develop, support/maintain, and deploy software to support a variety of business needs Provide technical leadership in the design, development, testing, deployment and maintenance of software solutions Design and implement platform and application security for applications Perform advanced query analysis and performance troubleshooting Coordinate with senior-level stakeholders to ensure the development of innovative software solutions to complex technical and creative issues Re-design software applications to improve maintenance cost, testing functionality, platform independence and performance Manage user stories and project commitments in an agile framework to rapidly deliver value to customers Deploy and operate software solutions using a DevOps model. Required skills: Azure Delta Lake, ADF, Databricks, PySpark, Oozie, Airflow, Big Data technologies (HBase, Hive), CI/CD (GitHub/Jenkins)
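Since Airflow is listed among the required skills, a minimal sketch of an Airflow DAG orchestrating an ingest step and a validation step is shown below; the DAG id, schedule, and task logic are placeholders rather than the employer's actual pipeline.

```python
# Hedged sketch of a daily Airflow DAG (Airflow 2.4+ style `schedule` argument).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_to_delta_lake(**context):
    # Placeholder: in practice this step might call Databricks or ADF operators.
    print("Ingesting daily extract for", context["ds"])


def validate_counts(**context):
    # Placeholder data-quality gate run after ingestion.
    print("Validating row counts for", context["ds"])


with DAG(
    dag_id="healthcare_daily_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_to_delta_lake)
    validate = PythonOperator(task_id="validate", python_callable=validate_counts)
    ingest >> validate  # validation only runs after a successful ingest
```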
Posted 3 weeks ago
3.0 years
20 - 23 Lacs
Gurugram, Haryana, India
On-site
We are hiring a detail-oriented and technically strong ETL Test Engineer to validate, verify, and maintain the quality of complex ETL pipelines and Data Warehouse systems. The ideal candidate will have a solid understanding of SQL, data validation techniques, regression testing, and Azure-based data platforms including Databricks. Key Responsibilities Perform comprehensive testing of ETL pipelines, ensuring data accuracy and completeness across systems. Validate Data Warehouse (DWH) objects including fact and dimension tables. Design and execute test cases and test plans for data extraction, transformation, and loading processes. Conduct regression testing to validate enhancements and ensure no breakage of existing data flows. Work with SQL to write complex queries for data verification and backend testing. Test data processing workflows in Azure Data Factory and Databricks environments. Collaborate with developers, data engineers, and business analysts to understand requirements and raise defects proactively. Perform root cause analysis for data-related issues and suggest improvements. Create clear and concise test documentation, logs, and reports. Required Technical Skills Strong knowledge of ETL testing methodologies and tools Excellent skills in SQL (joins, aggregation, subqueries, performance tuning) Hands-on experience with Data Warehousing and data models (Star/Snowflake) Experience in test case creation, execution, defect logging, and closure Proficient in regression testing, data validation, data reconciliation Working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks Experience with test management tools like JIRA, TestRail, or HP ALM Nice to Have Exposure to automation testing for data pipelines Scripting knowledge in Python or PySpark Understanding of CI/CD in data testing Experience with data masking, data governance, and privacy rules Qualifications Bachelor’s degree in Computer Science, Information Systems, or related field 3+ years of hands-on experience in ETL/Data Warehouse testing Excellent analytical and problem-solving skills Strong attention to detail and communication skills Skills: regression,azure,data reconciliation,test management tools,data validation,azure databricks,etl testing,data warehousing,dwh,etl pipeline,test case creation,azure data factory,test cases,etl tester,regression testing,sql,databricks
Posted 3 weeks ago
3.0 years
20 - 23 Lacs
Pune, Maharashtra, India
On-site
We are hiring a detail-oriented and technically strong ETL Test Engineer to validate, verify, and maintain the quality of complex ETL pipelines and Data Warehouse systems. The ideal candidate will have a solid understanding of SQL, data validation techniques, regression testing, and Azure-based data platforms including Databricks. Key Responsibilities Perform comprehensive testing of ETL pipelines, ensuring data accuracy and completeness across systems. Validate Data Warehouse (DWH) objects including fact and dimension tables. Design and execute test cases and test plans for data extraction, transformation, and loading processes. Conduct regression testing to validate enhancements and ensure no breakage of existing data flows. Work with SQL to write complex queries for data verification and backend testing. Test data processing workflows in Azure Data Factory and Databricks environments. Collaborate with developers, data engineers, and business analysts to understand requirements and raise defects proactively. Perform root cause analysis for data-related issues and suggest improvements. Create clear and concise test documentation, logs, and reports. Required Technical Skills Strong knowledge of ETL testing methodologies and tools Excellent skills in SQL (joins, aggregation, subqueries, performance tuning) Hands-on experience with Data Warehousing and data models (Star/Snowflake) Experience in test case creation, execution, defect logging, and closure Proficient in regression testing, data validation, data reconciliation Working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks Experience with test management tools like JIRA, TestRail, or HP ALM Nice to Have Exposure to automation testing for data pipelines Scripting knowledge in Python or PySpark Understanding of CI/CD in data testing Experience with data masking, data governance, and privacy rules Qualifications Bachelor’s degree in Computer Science, Information Systems, or related field 3+ years of hands-on experience in ETL/Data Warehouse testing Excellent analytical and problem-solving skills Strong attention to detail and communication skills Skills: regression,azure,data reconciliation,test management tools,data validation,azure databricks,etl testing,data warehousing,dwh,etl pipeline,test case creation,azure data factory,test cases,etl tester,regression testing,sql,databricks
Posted 3 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a Java & Oracle ADF (Application Development Framework) Developer with 5+ years of experience to design, develop, and maintain enterprise applications using Java & Oracle's ADF technology stack and related technologies. Location - Ramanujan IT City, Chennai (Onsite) Contract Duration - 3+ months (Extendable) Immediate Joiner Role and Responsibilities Design and develop enterprise applications using Java & Oracle ADF framework and related technologies Create and maintain ADF Business Components (Entity Objects, View Objects, Application Modules) Develop user interfaces using ADF Faces components and ADF Task Flows Implement business logic and data validation rules using ADF BC Design and develop reports using Jasper Reports Configure and maintain application servers (Tomcat, JBoss) Integrate applications with MySQL databases and web services Handle build and deployment processes Perform code reviews and ensure adherence to coding standards Debug and resolve production issues Collaborate with cross-functional teams including business analysts, QA, and other developers Provide technical documentation and maintain project documentation Core Technical Skills: Strong expertise in Oracle ADF framework (5 years hands-on experience) Proficient in Java/J2EE technologies Advanced knowledge of ADF Business Components (Entity Objects, View Objects, Application Modules) Strong experience with ADF Faces Rich Client components Expertise in ADF Task Flows (bounded and unbounded) Proficient in MySQL database design, optimization, and query writing Strong experience with Jasper Reports for report development and customization Application Server & Build Experience: Experience in deploying and maintaining applications on Tomcat Server Experience with JBoss/WildFly application server configuration and deployment Expertise in build tools (Maven/Ant) and build automation Experience with continuous integration and deployment processes Knowledge of application server clustering and load balancing Database Skills: Strong knowledge of MySQL database administration Experience in writing complex SQL queries and stored procedures Understanding of database optimization and performance tuning Knowledge of database backup and recovery procedures Reporting Skills: Expertise in Jasper Reports design and development Experience in creating complex reports with sub-reports Knowledge of JasperReports Server administration Ability to integrate reports with web applications
Posted 4 weeks ago
5.0 years
20 - 24 Lacs
Chennai, Tamil Nadu, India
On-site
Exp: 5 - 12 Yrs Work Mode: Hybrid Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica. We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions. Key Responsibilities Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines. Skills & Qualifications Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP. Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java and Scala are a plus). Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field. Skills: data modeling,business intelligence,python,dbt,performance tuning,airflow,informatica,azkaban,luigi,power bi,etl,dwh,fivetran,data quality,snowflake,sql
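As a hedged illustration of the SCD Type-2 modeling mentioned above, the sketch below runs a simplified expire-and-insert pattern against Snowflake from Python. Connection details, table names, and tracked columns are placeholders, and in the role itself this logic would more likely live in a DBT snapshot or model rather than hand-written statements.

```python
# Simplified SCD Type-2 sketch: expire changed current rows, then insert fresh versions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",   # placeholders
    warehouse="ETL_WH", database="ANALYTICS", schema="DIM",
)

close_changed_rows = """
UPDATE dim_customer d
SET valid_to = CURRENT_TIMESTAMP(), is_current = FALSE
FROM staging.customer s
WHERE d.customer_id = s.customer_id
  AND d.is_current = TRUE
  AND (d.email <> s.email OR d.segment <> s.segment)
"""

insert_new_versions = """
INSERT INTO dim_customer (customer_id, email, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
FROM staging.customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL  -- brand-new or just-expired keys get a fresh current row
"""

cur = conn.cursor()
cur.execute(close_changed_rows)
cur.execute(insert_new_versions)
cur.close()
conn.close()
```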
Posted 4 weeks ago
5.0 years
0 Lacs
Delhi, Delhi
Remote
Full time | Work From Office This Position is Currently Open Department / Category: ENGINEER Listed on Jun 30, 2025 Work Location: NEW DELHI BANGALORE HYDERABAD Job Description of Databricks Engineer 6 to 8 Years Relevant Experience We are looking for an experienced Databricks Engineer to join our data engineering team and contribute to designing and implementing scalable data solutions on the Azure platform. This role involves working closely with cross-functional teams to build high-performance data pipelines and maintain a modern Lakehouse architecture. Key Responsibilities: Design and develop scalable data pipelines using Spark-SQL and PySpark in Azure Databricks. Build and maintain Lakehouse architecture using Azure Data Lake Storage (ADLS) and Databricks. Perform comprehensive data preparation tasks, including: Data cleaning and normalization Deduplication Type conversions Collaborate with the DevOps team to deploy and manage solutions in production environments. Partner with Data Science and Business Intelligence teams to share insights, align on best practices, and drive innovation. Support change management through training, communication, and documentation during upgrades, data migrations, and system changes. Required Qualifications: 5+ years of IT experience with strong exposure to cloud technologies, particularly in Microsoft Azure. Hands-on experience with: Databricks, Azure Data Factory (ADF), and Azure Data Lake Storage (ADLS) Programming with PySpark, Python, and SQL Solid understanding of data engineering concepts, data modeling, and data processing frameworks. Ability to work effectively in distributed, remote teams. Excellent communication skills in English (both written and verbal). Preferred Skills: Strong working knowledge of distributed computing frameworks, especially Apache Spark and Databricks. Experience with Delta Lake and Lakehouse architecture principles. Familiarity with data tools and libraries such as Pandas, Spark-SQL, and PySpark. Exposure to on-premise databases such as SQL Server, Oracle, etc. Experience with version control tools (e.g., Git) and DevOps practices including CI/CD pipelines. Required Skills for Databricks Engineer Job Spark-SQL and PySpark in Azure Databricks Python SQL Our Hiring Process Screening (HR Round) Technical Round 1 Technical Round 2 Final HR Round
Posted 4 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra
On-site
Who are we? Fulcrum Digital is an agile and next-generation digital accelerating company providing digital transformation and technology services right from ideation to implementation. These services have applicability across a variety of industries, including banking & financial services, insurance, retail, higher education, food, healthcare, and manufacturing. Position Overview We are seeking a highly skilled Senior Data Engineer to lead the migration, optimization, and governance of data pipelines in Azure Data Factory (ADF) and SQL. The ideal candidate will have extensive experience in Change Data Capture (CDC), performance tuning, data security, and compliance within a cloud-based architecture. Key Responsibilities Architect & optimize CDC pipelines in ADF, Data Sync Services, and SQL, ensuring efficient data ingestion. Implement performance tuning strategies such as parallel processing, indexing, partitioning, and data archiving. Enforce security & compliance by implementing RBAC, data encryption, audit logging, and access control best practices. Define and implement data governance models, including roles, metadata management, and compliance policies. Develop robust error handling & monitoring with automated alerts, logging strategies, and retry mechanisms for CDC failures. Provide technical documentation & best practices for CDC implementation, security, and performance optimization. Requirements Qualifications & Skills 5+ years of experience in data engineering with expertise in Azure (ADF, Synapse), SQL, and CDC implementation. Strong knowledge of performance tuning techniques, including parallel processing and indexing strategies. Hands-on experience with data security, governance, and compliance best practices in a cloud environment. Experience with error logging, monitoring, and notification setup for data pipelines. Ability to troubleshoot performance bottlenecks and implement scalable solutions. Excellent communication and documentation skills. Job Opening ID: RRF_5456 | Job Type: Permanent | Industry: IT Services | Date Opened: 01/07/2025 | City: Pune City | Province: Maharashtra | Country: India | Postal Code: 411057
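As a rough sketch of the Change Data Capture (CDC) work this role centres on, the example below enables CDC on a hypothetical SQL Server table and polls the captured changes from Python; the server, database, and table names are placeholders, and the ADF/Data Sync side of the pipeline (incremental copies driven by these change rows) is not shown.

```python
# Hedged sketch of SQL Server CDC setup and polling via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver.database.windows.net;"
    "DATABASE=SalesDb;UID=etl_user;PWD=***;Encrypt=yes"   # placeholder connection
)
conn.autocommit = True
cur = conn.cursor()

# Enable CDC at the database level, then on a hypothetical source table.
cur.execute("EXEC sys.sp_cdc_enable_db")
cur.execute("""
    EXEC sys.sp_cdc_enable_table
         @source_schema = N'dbo',
         @source_name   = N'Orders',
         @role_name     = NULL
""")

# Poll all changes captured so far for the default dbo_Orders capture instance.
cur.execute("""
    DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Orders');
    DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();
    SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');
""")
for row in cur.fetchmany(10):
    print(row)
conn.close()
```

A production pipeline would track the last processed LSN as a watermark and hand the change window to ADF or a downstream loader rather than selecting everything each run.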
Posted 4 weeks ago
5.0 years
20 - 24 Lacs
Pune, Maharashtra, India
On-site
Exp: 5 - 12 Yrs Work Mode: Hybrid Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica. We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions. Key Responsibilities Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines. Skills & Qualifications Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Data Bricks, ADF), with AWS or GCP Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java, Scala are a plus). Education- Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field. Skills: data modeling,business intelligence,python,dbt,performene tuning,airflow,informatica,azkaban,luigi,power bi,etl,dwh,fivetran,data quality,snowflake,sql
Posted 4 weeks ago
5.0 years
20 - 24 Lacs
Gurugram, Haryana, India
On-site
Exp: 5 - 12 Yrs Work Mode: Hybrid Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica. We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions. Key Responsibilities Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines. Skills & Qualifications Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Data Bricks, ADF), with AWS or GCP Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java, Scala are a plus). Education- Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field. Skills: data modeling,business intelligence,python,dbt,performene tuning,airflow,informatica,azkaban,luigi,power bi,etl,dwh,fivetran,data quality,snowflake,sql
Posted 4 weeks ago
4.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Description Experience: 4+ years of experience Key Responsibilities Help define/improvise actionable and decision-driving management information Ensure streamlining, consistency and standardization of MI within the handled domain Build and operate flexible processes/reports that meet changing business needs Would be required to prepare the detailed documentation of schemas Any other duties commensurate with position or level of responsibility Desired Profile Prior experience in insurance companies/insurance sector would be an added advantage Experience of Azure Technologies (including SSAS, SQL Server, Azure Data Lake, Synapse) Hands-on experience with SQL and Power BI. Excellent understanding of developing stored procedures, functions, views and T-SQL programs. Experience developing and maintaining ETL (Data Extraction, Transformation and Loading) mappings using ADF to extract data from multiple source systems. Analyze existing SQL queries for performance improvements. Excellent written and verbal communication skills Ability to create and design MI for the team Expected to handle multiple projects, with stringent timelines Good interpersonal skills Actively influences strategy by creative and unique ideas Exposure to documentation activities Key Competencies Technical Learning - Can learn new skills and knowledge, as per business requirement Action Oriented - Enjoys working; is action oriented and full of energy for the things he/she sees as challenging; not fearful of acting with a minimum of planning; seizes more opportunities than others. Decision Quality - Makes good decisions based upon a mixture of analysis, wisdom, experience, and judgment. Detail Orientation: High attention to detail, especially in data quality, documentation, and reporting Communication & Collaboration: Strong interpersonal skills, effective in team discussions and stakeholder interactions Qualifications B.Com/BE/BTech/MCA from reputed College/Institute
Posted 4 weeks ago
7.0 - 10.0 years
0 Lacs
Greater Madurai Area
On-site
Job Requirements Why work for us? Alkegen brings together two of the world’s leading specialty materials companies to create one new, innovation-driven leader focused on battery technologies, filtration media, and specialty insulation and sealing materials. Through global reach and breakthrough inventions, we are delivering products that enable the world to breathe easier, live greener, and go further than ever before. With over 60 manufacturing facilities with a global workforce of over 9,000 of the industry’s most experienced talent, including insulation and filtration experts, Alkegen is uniquely positioned to help customers impact the environment in meaningful ways. Alkegen offers a range of dynamic career opportunities with a global reach. From production operators to engineers, technicians to specialists, sales to leadership, we are always looking for top talent ready to bring their best. Come grow with us! Key Responsibilities Lead and manage the Data Operations team, including BI developers and ETL developers, to deliver high-quality data solutions. Oversee the design, development, and maintenance of data models, data transformation processes, and ETL pipelines. Collaborate with business stakeholders to understand their data needs and translate them into actionable data insights solutions. Ensure the efficient and reliable operation of data pipelines and data integration processes. Develop and implement best practices for data management, data quality, and data governance. Utilize SQL, Python, and Microsoft SQL Server to perform data analysis, data manipulation, and data transformation tasks. Build and deploy data insights solutions using tools such as PowerBI, Tableau, and other BI platforms. Design, create, and maintain data warehouse environments using Microsoft SQL Server and the data vault design pattern. Design, create, and maintain ETL packages using Microsoft SQL Server and SSIS. Work closely with cross-functional teams in a matrix organization to ensure alignment with business objectives and priorities. Lead and mentor team members, providing guidance and support to help them achieve their professional goals. Proactively identify opportunities for process improvements and implement solutions to enhance data operations. Communicate effectively with stakeholders at all levels, presenting data insights and recommendations in a clear and compelling manner. Implement and manage CI/CD pipelines to automate the testing, integration, and deployment of data solutions. Apply Agile methodologies and Scrum practices to ensure efficient and timely delivery of projects. Skills & Qualifications Master's or Bachelor’s degree in computer science, Data Science, Information Technology, or a related field. 7 to 10 years of experience in data modelling, data transformation, and building and managing ETL processes. Strong proficiency in SQL, Python, and Microsoft SQL Server for data manipulation and analysis. Extensive experience in building and deploying data insights solutions using BI tools such as PowerBI and Tableau. At least 2 years of experience leading BI developers or ETL developers. Experience working in a matrix organization and collaborating with cross-functional teams. Proficiency in cloud platforms such as Azure, AWS, and GCP. Familiarity with data engineering tools such as ADF, Databricks, Power Apps, Power Automate, and SSIS. Strong stakeholder management skills with the ability to communicate complex data concepts to non-technical audiences. 
Proactive and results-oriented, with a focus on delivering value aligned with business objectives. Knowledge of CI/CD pipelines and experience implementing them for data solutions. Experience with Agile methodologies and Scrum practices. Relevant certifications in Data Analytics, Data Architecture, Data Warehousing, and ETL are highly desirable. At Alkegen, we strive every day to help people – ALL PEOPLE – breathe easier, live greener and go further than ever before. We believe that diversity and inclusion is central to this mission and to our impact. Our diverse and inclusive culture drives our growth & innovation and we nurture it by actively embracing our differences and using our varied perspectives to solve the complex challenges facing our changing and diverse world. Employment selection and related decisions are made without regard to sex, race, ethnicity, nation of origin, religion, color, gender identity and expression, age, disability, education, opinions, culture, languages spoken, veteran’s status, or any other protected class.
Posted 4 weeks ago
4.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Description Experience: 4+ years of experience Key Responsibilities Help define/improvise actionable and decision-driving management information Ensure streamlining, consistency and standardization of MI within the handled domain Build and operate flexible processes/reports that meet changing business needs Would be required to prepare the detailed documentation of schemas Any other duties commensurate with position or level of responsibility Desired Profile Prior experience in insurance companies/insurance sector would be an added advantage Experience of Azure Technologies (including SSAS, SQL Server, Azure Data Lake, Synapse) Hands-on experience with SQL and Power BI. Excellent understanding of developing stored procedures, functions, views and T-SQL programs. Experience developing and maintaining ETL (Data Extraction, Transformation and Loading) mappings using ADF to extract data from multiple source systems. Analyze existing SQL queries for performance improvements. Excellent written and verbal communication skills Ability to create and design MI for the team Expected to handle multiple projects, with stringent timelines Good interpersonal skills Actively influences strategy by creative and unique ideas Exposure to documentation activities Key Competencies Technical Learning - Can learn new skills and knowledge, as per business requirement Action Oriented - Enjoys working; is action oriented and full of energy for the things he/she sees as challenging; not fearful of acting with a minimum of planning; seizes more opportunities than others. Decision Quality - Makes good decisions based upon a mixture of analysis, wisdom, experience, and judgment. Detail Orientation: High attention to detail, especially in data quality, documentation, and reporting Communication & Collaboration: Strong interpersonal skills, effective in team discussions and stakeholder interactions Qualifications B.Com/BE/BTech/MCA from reputed College/Institute
Posted 4 weeks ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
Role & responsibilities 8-12 years of professional work experience in a relevant field Proficient in Azure Databricks, ADF, Delta Lake, SQL Data Warehouse, Unity Catalog, Mongo DB, Python Experience/ prior knowledge on semi structure data and Structured Streaming, Azure synapse analytics, data lake, data warehouse. Proficient in creating Azure Data Factory pipelines for ETL/ELT processing ; copy activity, custom Azure development etc. Lead the technical team of 4-6 resource. Prior Knowledge in Azure DevOps and CI/CD processes including Github . Good knowledge of SQL and Python for data manipulation, transformation, and analysis knowledge on Power bi would be beneficial. Understand business requirements to set functional specifications for reporting applications
Posted 4 weeks ago
6.0 - 8.0 years
3 - 8 Lacs
Bengaluru
On-site
Country/Region: IN Requisition ID: 26961 Work Model: Position Type: Salary Range: Location: INDIA - BENGALURU - HP Title: Technical Lead-Data Engg Description: Area(s) of responsibility Azure Data Lead - 5A (HP Role – Senior Data Engineer) Experience: 6 to 8 Years Azure Lead with experience in Azure ADF, ADLS Gen2, Databricks, PySpark and Advanced SQL Responsible for designing and implementing secure, scalable, and highly available cloud-based solutions and estimation on Azure Cloud 4 years of experience in Azure Databricks and PySpark Experience in performance tuning Experience with integration of different data sources with Data Warehouse and Data Lake is required Experience in creating data warehouses and data lakes Understanding of data modelling and data architecture concepts Able to clearly articulate pros and cons of various technologies and platforms Experience with supporting tools such as GitHub, Jira, Teams, and Confluence Collaborate with clients to understand their business requirements and translate them into technical solutions that leverage AWS and Azure cloud platforms. Mandatory Skillset: Azure Databricks, PySpark and Advanced SQL
Posted 4 weeks ago
12.0 - 14.0 years
9 - 10 Lacs
Bengaluru
On-site
Country/Region: IN Requisition ID: 26981 Work Model: Position Type: Salary Range: Location: INDIA - BENGALURU - AUTOMOTIVE Title: Project Manager Description: Area(s) of responsibility Job description - Azure Tech Project Manager Experience Required: 12-14 years The Project Manager will be responsible for driving project management activities in the Azure Cloud Services (ADF, Azure Databricks, PySpark, ADLS Gen2) Strong understanding of Azure Services process execution, from acquiring data from source systems to visualization Experience in Azure DevOps Experience in Data Warehouse, Data Lake, and Visualizations Project management skills including time and risk management, resource prioritization and project structuring Responsible for end-to-end project execution and delivery across multiple clients Understand ITIL processes related to incident management, problem management, application life cycle management, operational health management. Strong in Agile and the Jira tool Strong customer service, problem solving, organizational and conflict management skills Should be able to prepare weekly/monthly reports for both internal and client management Should be able to help team members on technical issues Should be a good learner, open to learning new functionalities
Posted 4 weeks ago
0 years
1 - 9 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. This role is crucial for us on the Cloud Data Engineering team (Data Exchange) for all the cloud development, migration, and support work related to WellMed Data Services. The team and role maintain and support the EDW and IS cloud modernization in WellMed, which involves cloud development on data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake, and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation. Primary Responsibility: Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Hands-on experience in cloud data engineering Contribution to all cloud development, migration, and support work Proven comfort independently maintaining and supporting the EDW, cloud modernization, SQL development, Azure cloud development, and ETL using Azure Data Factory Proven success implementing data streaming using Apache Kafka, Kubernetes, Databricks, Snowflake, and SQL Server, with Airflow for monitoring and support, and Git and DevOps for pipeline automation At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
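As a hedged sketch of the Kafka-based data streaming this role supports, the example below consumes JSON events with the kafka-python client and prints them; the topic, brokers, and payload fields are placeholders, and the real pipeline would land events in Databricks or Snowflake rather than printing them.

```python
# Hedged sketch: consume JSON change events from Kafka with kafka-python.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "claims.changes",                              # hypothetical topic name
    bootstrap_servers=["broker1:9092"],            # placeholder broker list
    group_id="edw-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Placeholder handling: a real consumer would batch and write to the warehouse.
    print(message.topic, message.partition, message.offset, event.get("claim_id"))
```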
Posted 4 weeks ago
12.0 - 14.0 years
0 Lacs
Greater Bengaluru Area
On-site
Area(s) of responsibility Job Description - Azure Tech Project Manager Experience Required: 12-14 years The Project Manager will be responsible for driving project management activities in the Azure Cloud Services (ADF, Azure Databricks, PySpark, ADLS Gen2) Strong understanding of Azure Services process execution, from acquiring data from source systems to visualization Experience in Azure DevOps Experience in Data Warehouse, Data Lake, and Visualizations Project management skills including time and risk management, resource prioritization and project structuring Responsible for end-to-end project execution and delivery across multiple clients Understand ITIL processes related to incident management, problem management, application life cycle management, operational health management. Strong in Agile and the Jira tool Strong customer service, problem solving, organizational and conflict management skills Should be able to prepare weekly/monthly reports for both internal and client management Should be able to help team members on technical issues Should be a good learner, open to learning new functionalities
Posted 4 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities • Build and optimize ETL/ELT pipelines using Databricks and ADF, ingesting data from diverse sources including APIs, flat files, and operational databases. • Develop and maintain scalable PySpark jobs for batch and incremental data processing across Bronze, Silver, and Gold layers. • Write clean, production-ready Python code for data processing, orchestration, and integration tasks. • Contribute to the medallion architecture design and help implement data governance patterns across data layers. • Collaborate with analytics, data science, and business teams to design pipelines that meet performance and data quality expectations. • Monitor, troubleshoot, and continuously improve pipeline performance and reliability. • Support CI/CD for data workflows using Git, Databricks Repos, and optionally Terraform for infrastructure-as-code. • Document pipeline logic, data sources, schema transformations, and operational playbooks. ⸻ Required Qualifications • 3–5 years of experience in data engineering roles with increasing scope and complexity. • Strong hands-on experience with Databricks, including Spark, Delta Lake, and SQL-based transformations. • Proficiency in PySpark and Python for large-scale data manipulation and pipeline development. • Hands-on experience with Azure Data Factory for orchestrating data workflows and integrating with Azure services. • Solid understanding of data modeling concepts and modern warehousing principles (e.g., star schema, slowly changing dimensions). • Comfortable with Git-based development workflows and collaborative coding practices. ⸻ Preferred / Bonus Qualifications • Experience with Terraform to manage infrastructure such as Databricks workspaces, ADF pipelines, or storage resources. • Familiarity with Unity Catalog, Databricks Asset Bundles (DAB), or Delta Live Tables (DLT). • Experience with Azure DevOps or GitHub Actions for CI/CD in a data environment. • Knowledge of data governance, role-based access control, or data quality frameworks. • Exposure to real-time ingestion using tools like Event Hubs, Azure Functions, or Auto Loader.
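As a rough sketch of the Bronze-layer ingestion pattern referenced above (medallion architecture with Auto Loader), the example below incrementally loads raw JSON files into a Bronze Delta table on Databricks; the paths and table name are hypothetical, and the `cloudFiles` source is Databricks-specific.

```python
# Bronze-layer ingestion sketch using Databricks Auto Loader (cloudFiles source).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

bronze_stream = (
    spark.readStream
    .format("cloudFiles")                      # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/bronze/_schemas/orders")
    .load("/mnt/landing/orders/")
    .withColumn("_ingested_at", F.current_timestamp())  # audit/lineage column
)

(
    bronze_stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/orders")
    .trigger(availableNow=True)                # process available files, then stop
    .toTable("bronze.orders")
)
```

Silver and Gold jobs would then read `bronze.orders`, apply cleansing and business aggregations, and write their own Delta tables, following the layering described in the responsibilities.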
Posted 4 weeks ago