
1022 Databricks Jobs - Page 10

Set Up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 6.0 years

9 - 13 Lacs

Gurugram

Work from Office

Dentsply Sirona is the world's largest manufacturer of professional dental products and technologies, with a 130-year history of innovation and service to the dental industry and patients worldwide. Dentsply Sirona develops, manufactures, and markets a comprehensive solutions offering, including dental and oral health products as well as other consumable medical devices, under a strong portfolio of world-class brands. Its products provide innovative, high-quality, and effective solutions to advance patient care and deliver better and safer dentistry. Dentsply Sirona's global headquarters is located in Charlotte, North Carolina, USA. The company's shares are listed in the United States on NASDAQ under the symbol XRAY.

Bringing out the best in people: As advanced as dentistry is today, we are dedicated to making it even better. Our people have a passion for innovation and are committed to applying it to improve dental care. We live and breathe high performance, working as one global team, bringing out the best in each other for the benefit of dental patients and the professionals who serve them. If you want to grow and develop as part of a team that is shaping an industry, then we're looking for the best to join us.

Working at Dentsply Sirona, you are able to:
Develop faster with our commitment to the best professional development.
Perform better as part of a high-performance, empowering culture.
Shape an industry with a market leader that continues to drive innovation.
Make a difference by helping improve oral health worldwide.

Scope of Role: In the role of Azure Data Engineer, you will join the team that develops, enhances, and maintains our Data Engagement Platform (DEP). You will work with advanced analytics and the latest technology as part of our passionate team. If this sounds like something that would energize you, come join us! Our Global Data and Analytics department handles the collection and streamlining of data into the DEP and the development of BI solutions and reports across the Dentsply Sirona group. The team consists of 20+ members and works cross-functionally, which means you will interact with many functions such as finance, marketing, sales, commercial, supply, and operations. We use Azure tools together with Databricks and dbt.

Responsibilities:
Develop and maintain a high-quality data warehouse solution.
Collaborate with BI developers and business analysts for successful development of BI reporting and analysis.
Develop a scalable and maintainable data layer for BI applications to meet business objectives.
Work in a small, smart, agile team: design, develop, and own the full solution for an assigned data area.
Perform maintenance and troubleshooting activities in the Azure data platform.
Contribute to accurate and complete technical architecture documents.
Work closely with other members of the Global Data and Analytics team.
Maintain clear and coherent communication, both verbal and written, to understand data requirements.
Additional responsibilities as assigned.

Education: An academic background, with a relevant university degree in Management Information Systems or similar.

Years and Type of Experience: Minimum 5 years' work experience in a BI position. Experience with Databricks and dbt is desirable. Experience with Azure DevOps code repository, version control, and task management. Strong proficiency with SQL and its variation among popular databases. Knowledge of best practices when dealing with relational databases.

Key Skills, Knowledge & Capabilities: Capable of troubleshooting common database issues. Motivated by analyzing and understanding business needs and translating them into technical solutions, ensuring that both business and technical needs are met. Strong analytical and logical thinking. Communicative skills, verbal and written; proficiency in English.

How We Lead the DS Way: Actively articulates and promotes Dentsply Sirona's vision, mission, and values. Advocates on behalf of the customer. Promotes high performance, innovation, and continual improvement. Consistently meets Company standards, ethics, and compliance requirements. Communicates clearly and effectively with stakeholders across multiple levels, socio-geographic areas, and areas of functional expertise.

Dentsply Sirona is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, sexual orientation, disability, or protected Veteran status. We appreciate your interest in Dentsply Sirona. If you need assistance with completing the online application due to a disability, please send an accommodation request to careers@dentsplysirona.com. Please be sure to include "Accommodation Request" in the subject.

Posted 1 week ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Job Title: Backend Developer
Job Type: Full-time
Location: On-site, Hyderabad, Telangana, India

About us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: Join our customer's team as a Backend Developer and play a pivotal role in building high-impact backend solutions at the forefront of AI and data engineering. This is your chance to work in a collaborative, on-site environment where your technical expertise and communication skills will drive the success of next-generation AI/ML applications.

Key Responsibilities:
Develop, test, and maintain scalable backend components and microservices using Python and PySpark.
Build and optimize advanced data pipelines leveraging Databricks and distributed computing platforms.
Design and administer efficient MySQL databases, focusing on data integrity, availability, and performance.
Integrate machine learning models into production-grade backend systems powering innovative AI features.
Collaborate with data scientists and engineering peers to deliver comprehensive, business-driven solutions.
Monitor, troubleshoot, and enhance system performance, using Redis for caching and scalability.
Create clear technical documentation and communicate proactively with the team, emphasizing both written and verbal skills.

Required Skills and Qualifications:
Proficient in Python for backend development with strong coding standards.
Practical experience with Databricks and PySpark in live production environments.
Advanced knowledge of MySQL database design, query optimization, and maintenance.
Solid foundation in machine learning concepts and deploying ML models in backend systems.
Experience using Redis for effective caching and state management.
Outstanding written and verbal communication abilities with strong attention to detail.
Demonstrated success working collaboratively in a fast-paced on-site setting in Hyderabad.

Preferred Qualifications:
Background in high-growth AI/ML or complex data engineering projects.
Familiarity with additional backend technologies or cloud-based platforms.
Experience mentoring or leading technical teams.

Be a key contributor to our customer's team, delivering backend systems that seamlessly bridge data engineering and AI innovation. We value professionals who thrive on clear communication, technical excellence, and collaborative problem-solving.
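As a hedged illustration of the Redis caching work this role describes, here is a minimal sketch of a read-through cache in front of a MySQL query. The function name, connection settings, and TTL are invented for illustration, not details from the posting.

```python
import json
import redis                      # assumed dependency: pip install redis
import mysql.connector            # assumed dependency: pip install mysql-connector-python

# Hypothetical connection settings; replace with your environment's values.
cache = redis.Redis(host="localhost", port=6379, db=0)
db = mysql.connector.connect(host="localhost", user="app", password="***", database="appdb")

def get_user_profile(user_id: int, ttl_seconds: int = 300) -> dict:
    """Read-through cache: serve from Redis when possible, else query MySQL."""
    key = f"user:profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    cursor = db.cursor(dictionary=True)
    cursor.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
    row = cursor.fetchone() or {}
    cursor.close()

    # Cache the result with a TTL so stale entries expire on their own.
    cache.set(key, json.dumps(row), ex=ttl_seconds)
    return row
```

The TTL-based expiry is one simple invalidation strategy; real systems often pair it with explicit cache busting on writes.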

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Job Title: Data Quality & Automation Engineer
Job Type: Full-time, Contractor

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: We are seeking a skilled and innovative Data Quality & Automation Engineer to join our customer's dynamic team. In this role, you will leverage your expertise to ensure the quality and reliability of our data processing systems, playing a crucial role in our commitment to excellence. We are looking for a candidate with a keen eye for detail and a strong ability to communicate both verbally and in writing.

Key Responsibilities:
Develop and execute automated test scripts using Python and Selenium to validate data processing systems.
Perform rigorous data validation and ensure data integrity across various data platforms.
Collaborate with data engineers and developers to identify and troubleshoot issues.
Maintain and enhance existing automation frameworks and scripts.
Use SQL for advanced data querying and validation tasks.
Implement and manage workflows using Apache Airflow.
Work with Databricks to test data pipelines and transformations.

Required Skills and Qualifications:
Proven experience in automation testing with a focus on data quality.
Proficiency in Python programming and Selenium automation tools.
Strong understanding of SQL for data validation and reporting.
Experience with ALM.
Knowledge of data warehousing and data lake architectures.
Experience leading and mentoring teams.
Experience with data testing tools (e.g., dbt Test).
Experience with Apache Airflow for workflow management.
Familiarity with Databricks for data processing and analytics.
Exceptional written and verbal communication skills.
Attention to detail and a proactive approach to problem-solving.

Preferred Qualifications:
Experience with cloud platforms (AWS, Azure) and big data technologies.
Knowledge of continuous integration and deployment processes.
Certification in data engineering or software testing is a plus.
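To make the data-validation responsibilities concrete, here is a minimal, hedged sketch of the kind of automated check such a role might script: comparing row counts and null rates between a source and a target table. SQLite is used purely as a self-contained stand-in for the real warehouses; table names and thresholds are illustrative assumptions.

```python
import sqlite3

def row_count(conn, table: str) -> int:
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def null_rate(conn, table: str, column: str) -> float:
    total = row_count(conn, table)
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    return nulls / total if total else 0.0

def test_target_matches_source():
    # In-memory SQLite stands in for the real source/target systems.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE src (id INTEGER, amount REAL)")
    conn.execute("CREATE TABLE tgt (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO src VALUES (?, ?)", [(1, 10.0), (2, 20.0)])
    conn.executemany("INSERT INTO tgt VALUES (?, ?)", [(1, 10.0), (2, 20.0)])

    # Parity check: the load should neither drop nor duplicate rows.
    assert row_count(conn, "src") == row_count(conn, "tgt")
    # Integrity check: no unexpected NULLs introduced by the transformation.
    assert null_rate(conn, "tgt", "amount") == 0.0

if __name__ == "__main__":
    test_target_matches_source()
    print("data validation checks passed")
```

Checks like these are typically collected into a pytest suite and scheduled from Airflow after each load.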

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Gurugram, Bengaluru

Hybrid

Skill Expectation

Skills (Must have):
Python: Hands-on experience using Python programming to automate basic tasks.
SQL: Proficiency in SQL, with the ability to write complex queries for data validation.
ETL Testing: Experience with ETL data validation testing and exposure to ETL processes/data lakes.

Skills (Good to have):
Data Platform: Exposure to the Databricks/Azure platform and Databricks-specific testing tools and frameworks.
Load Testing: Hands-on experience with load testing tools and techniques.
CI/CD Basics: Basic understanding of and experience with CI/CD tooling (GitLab, GitHub, Azure, etc.).

Posted 1 week ago

Apply

2.0 - 5.0 years

8 - 12 Lacs

Pune

Work from Office

Job Summary: QA Specialist, Data & Analytics

We're looking for a meticulous, detail-oriented QA Specialist who is passionate about data quality. You will collaborate with our analytics team to develop and execute comprehensive QA processes, validate data pipelines, and automate recurring QA work. Your work will be key to ensuring our data and analytics deliverables meet the highest standards of accuracy and reliability.

Responsibilities:
Develop and execute comprehensive QA processes for data and analytics deliverables.
Validate the entire data pipeline, including data sources, ETL processes, extracts, calculations, visualizations, and application interfaces.
Perform functional, regression, performance, and tolerance-range testing across reports and data systems.
Simulate end-user journeys to ensure a seamless user experience with analytics outputs.
Validate application tracking functionality (data collection through application usage).
Validate calculations and metrics in Tableau, Power BI, and other BI tools.
Conduct database validations using SQL (Oracle, BigQuery) and NoSQL (MongoDB) systems.
Automate recurring QA processes in the analytics/BI environment when feasible.
Identify and document data quality issues and discrepancies.
Collaborate with cross-functional teams, including data engineers, BI developers, and product managers, to ensure analytics quality.

Experience:
3+ years of experience in QA, data validation, or analytics testing.
Hands-on experience testing in BI tool environments.
Proficiency in SQL (advantage: experience with Oracle and BigQuery databases).
Experience with NoSQL databases (advantage: MongoDB).

Technical Skills:
Familiarity with regression testing and simulating user interactions with BI tools.

Nice-to-Have Qualifications:
Familiarity with scripting languages like R or Python.
Experience in automation testing within analytics or BI environments.
Experience in a Databricks environment.

Collaboration and Leadership:
Excellent communication skills with the ability to collaborate effectively across departments.
Strong ability to present complex findings to both technical and non-technical audiences.

About Aumni Techworks: Aumni Techworks, established in 2016, is a software services company that partners with product companies to build and manage their dedicated teams in India. So, while you are working for a services company, you are working within a product team and growing with them. We do not take projects, and we have long-term (open-ended) contracts with our clients. When our clients sign up with us, they are looking at a multi-year relationship; for example, some of the clients who signed up 6 or 8 years ago are still with us. We do not move people across client teams, and there is no concept of a bench. At Aumni, we believe in quality work, and we truly believe that Indian talent is on par with talent in NY, London, or Germany. We are 300+ and growing.

Benefits of Working at Aumni Techworks:
Our award-winning culture reminds us of our engineering days.
Medical insurance (including parents), life and disability insurance.
24 leaves + 10 public holidays + leaves for hospitalization, maternity, paternity, and bereavement.
On-site gym, table tennis, carrom, foosball, and pool table.
Hybrid work culture.
Fitness groups / rewards.
Friday socials, annual parties, treks.
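The tolerance-range testing named above can be sketched very simply: a metric shown in a BI extract should match the source database within a small relative margin. The figures, tolerance, and data below are illustrative assumptions, not values from the posting.

```python
# A minimal sketch of a tolerance-range check: the revenue total shown in a
# BI extract should match the source database within a small relative margin.

def within_tolerance(expected: float, actual: float, rel_tol: float = 0.001) -> bool:
    """True when actual deviates from expected by at most rel_tol (0.1% default)."""
    if expected == 0:
        return actual == 0
    return abs(actual - expected) / abs(expected) <= rel_tol

source_revenue = 1_254_310.75      # e.g., SUM(amount) queried from the source DB
dashboard_revenue = 1_254_309.90   # e.g., the figure exported from the BI tool

assert within_tolerance(source_revenue, dashboard_revenue), (
    f"dashboard revenue {dashboard_revenue} outside tolerance of {source_revenue}"
)
print("tolerance-range check passed")
```

A small tolerance absorbs rounding differences between the BI layer and the warehouse while still catching genuine calculation errors.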

Posted 1 week ago

Apply

12.0 - 22.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid

Role & responsibilities:
Lead (hands-on) a team of data engineers.
Good communication and strong technical design decision-making.
Strong experience in ETL / data warehousing.
Strong experience in Databricks, Unity Catalog, medallion architecture, data lineage, PySpark, and CTEs.
Good experience in data analysis.
Strong experience in SQL queries, stored procedures, views, functions, and UDFs (user-defined functions).
Experience in Azure Cloud: ADF, storage and containers, Azure DB, Azure Data Lake.
Experience in SQL Server.
Experience in data migration and production support.

Posted 1 week ago

Apply

3.0 - 5.0 years

10 - 15 Lacs

Noida

Hybrid

Key Responsibilities:
Design, develop, and maintain scalable and efficient data pipelines using Azure Databricks.
Optimize and troubleshoot existing data pipelines to enhance performance and reliability.
Ensure data quality, integrity, and consistency across various data sources.
Implement ETL processes and manage data flows into data warehouses and data marts.
Develop and optimize SQL queries on Snowflake for data processing and reporting.
Use Python for data processing, transformation, and automation tasks.
Monitor pipeline performance, proactively identify issues, and conduct necessary maintenance and updates.
Maintain comprehensive documentation of data processes, architectures, and technical specifications.

Required Skills: Azure Databricks, Power BI, SSRS and MSSQL, Snowflake, Python, ETL development, GitHub for version control and collaboration, JIRA for work management.

Experience Range: 3 to 5 years.

Interpersonal Skills: Strong problem-solving and analytical abilities. Excellent written and verbal communication skills. Ability to work effectively within a team and collaborate.
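As a hedged sketch of the pipeline work described above, the snippet below cleans a raw dataset with PySpark and writes it to Snowflake via the Spark-Snowflake connector (the short "snowflake" format alias is what Databricks bundles; elsewhere the full connector class name is used). All paths, credentials, and table names are placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Illustrative source path; in Databricks this would typically be mounted cloud storage.
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_date").isNotNull())
)

# Hypothetical Snowflake connection options; supply real values via a secrets manager.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "ETL_WH",
}

(cleaned.write.format("snowflake")          # connector alias available on Databricks
        .options(**sf_options)
        .option("dbtable", "ORDERS_CLEAN")
        .mode("overwrite")
        .save())
```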

Posted 1 week ago

Apply

3.0 - 6.0 years

10 - 17 Lacs

Pune

Hybrid

Software Engineer
Baner, Pune, Maharashtra
Department: Software & Automation
Employee Type: Permanent
Experience Range: 3 - 6 Years
Qualification: Bachelor's or master's degree in computer science, IT, or a related field.

Roles & Responsibilities:
Facilitate Agile ceremonies and lead Scrum practices.
Support the Product Owner in backlog management and team organization.
Promote Agile best practices (Scrum, SAFe) and continuous delivery improvements.
Develop and maintain scalable data pipelines using AWS and Databricks (secondary focus).
Collaborate with architects and contribute to solution design (support role).
Occasionally travel for global team collaboration.

Requirements:
Scrum Master or Agile team facilitation experience.
Familiarity with Python and Databricks (PySpark, SQL).
Good AWS cloud exposure (S3, EC2 basics).

Good to Have:
Certified Scrum Master (CSM) or equivalent.
Experience with ETL pipelines or data engineering concepts.
Multi-cultural team collaboration experience.

Software Skills: JIRA, Confluence, Python (basic to intermediate), Databricks (basic).

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Chennai

Remote

Who We Are: For 20 years, we have been working with organizations large and small to help solve business challenges through technology. We bring a unique combination of engineering and strategy to Make Data Work for organizations. Our clients range from the travel and leisure industry to publishing, retail, and banking. The common thread between our clients is their commitment to making data work, as seen through their investment in those efforts. In our quest to solve data challenges for our clients, we work with large enterprise, cloud-based, and marketing technology suites. We have a deep understanding of these solutions, so we can help our clients make the most of their investment in an efficient way and have a data-driven business. Softcrylic now joins forces with Hexaware to Make Data Work in bigger ways!

Why Work at Softcrylic? Softcrylic provides an engaging, team-focused, and rewarding work environment where people are excited about the work they do and passionate about delivering creative solutions to our clients.

Work Timing: 12:30 pm to 9:30 pm (flexible).

How to approach the interview: All technical interview rounds will be conducted virtually. The final round will be a face-to-face interview with HR in Chennai; it includes a 15-minute in-person technical assessment/discussion. Make sure to prepare for both the virtual and in-person components.

Job Description:
5+ years of experience working as a Data Engineer.
Experience migrating existing datasets from BigQuery to Databricks using Python scripts.
Conduct thorough data validation and QA to ensure accuracy, completeness, parity, and consistency in reporting.
Monitor the stability and status of migrated data pipelines, applying fixes as needed.
Migrate data pipelines from Airflow to Airbyte/Dagster based on provided frameworks.
Develop Python scripts to facilitate data migration and pipeline transformation.
Perform rigorous testing on migrated data and pipelines to ensure quality and reliability.

Required Skills:
Strong experience working with Python for scripting.
Good experience working with Databricks and BigQuery.
Familiarity with data pipeline tools such as Airflow, Airbyte, and Dagster.
Strong understanding of data quality principles and validation techniques.
Ability to work collaboratively with cross-functional teams.

Contact: Dinesh M, dinesh.m@softcrylic.com, +91 89255 18191
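The migration work this posting describes might look roughly like the sketch below: read a BigQuery table through the spark-bigquery connector, land it as a Delta table in Databricks, and run a first row-count parity check. Project, dataset, and table names are invented, and the connector is assumed to be installed on the cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq_to_databricks").getOrCreate()

# Placeholder identifiers; assumes the spark-bigquery connector is on the cluster.
source = (spark.read.format("bigquery")
          .option("table", "my-project.analytics.orders")
          .load())

# Land the data as a managed Delta table in Databricks.
(source.write.format("delta")
       .mode("overwrite")
       .saveAsTable("analytics_migrated.orders"))

# First-pass parity check: row counts must match before deeper QA begins.
migrated = spark.table("analytics_migrated.orders")
assert source.count() == migrated.count(), "row-count mismatch after migration"
```

Deeper QA would typically extend this with column-level checksums and distribution comparisons, as the posting's validation bullets suggest.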

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 45 Lacs

Hyderabad

Work from Office

Role: We're looking for a skilled Databricks Solution Architect to lead the design and implementation of data migration strategies and cloud-based data and analytics transformation on the Databricks platform. This role involves collaborating with stakeholders, analyzing data, defining architecture, building data pipelines, ensuring security and performance, and implementing Databricks solutions for machine learning and business intelligence.

Key Responsibilities:
Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks.
Design, implement, and optimize scalable, high-performance data architectures using Databricks.
Build and manage data pipelines and workflows within Databricks.
Ensure that best practices for security, scalability, and performance are followed.
Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads.
Oversee the technical aspects of the migration process, from planning through execution.
Create documentation of the architecture, migration processes, and solutions.
Provide training and support to teams post-migration to ensure they can leverage Databricks.

Preferred candidate profile:

Experience:
7+ years of experience in data engineering, cloud architecture, or related fields.
3+ years of hands-on experience with Databricks, including implementation of data engineering solutions, migration projects, and workload optimization.
Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their integration with Databricks.
Experience in end-to-end data migration projects involving large-scale data infrastructure.
Familiarity with ETL tools, data lakes, and data warehousing solutions.

Skills:
Expertise in Databricks architecture and best practices for data processing.
Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other recent Databricks components.
Proficiency in Databricks Asset Bundles.
Expertise in the design and development of migration frameworks using Databricks.
Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks.
Familiarity with data governance, security, and compliance in cloud environments.
Solid understanding of cloud-native data solutions and services.
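As a rough illustration of the lakehouse pipeline work named here (not the employer's actual design), this sketch ingests raw files into a bronze Delta table with Databricks Auto Loader. Paths and table names are invented; the cloudFiles source and availableNow trigger are Databricks features.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Hypothetical landing zone and checkpoint paths.
raw_path = "abfss://landing@mystorage.dfs.core.windows.net/events/"
checkpoint = "/mnt/checkpoints/bronze_events"

# Auto Loader incrementally discovers new files as they arrive.
bronze_stream = (spark.readStream.format("cloudFiles")
                 .option("cloudFiles.format", "json")
                 .option("cloudFiles.schemaLocation", checkpoint)
                 .load(raw_path))

# Write to the bronze layer of a medallion architecture; availableNow
# processes the current backlog and stops, which suits scheduled batch runs.
(bronze_stream.writeStream
              .format("delta")
              .option("checkpointLocation", checkpoint)
              .trigger(availableNow=True)
              .toTable("lakehouse.bronze_events"))
```

Silver and gold layers would then read from this table, applying cleansing and business-level aggregation respectively.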

Posted 1 week ago

Apply

8.0 - 12.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Key Responsibilities:
Design and develop migration strategies and processes.
Collaborate with stakeholders to understand business requirements and technical challenges.
Analyze current data and scope for optimization during the migration process.
Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks.
Design, implement, and optimize scalable, high-performance data architectures using Databricks.
Build and manage data pipelines and workflows within Databricks.
Ensure that best practices for security, scalability, and performance are followed.
Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads.
Oversee the technical aspects of the migration process, from planning through execution.
Work closely with engineering and data teams to ensure proper migration of ETL processes, data models, and analytics workloads.
Troubleshoot and resolve issues related to migration, data quality, and performance.
Create documentation of the architecture, migration processes, and solutions.
Provide training and support to teams post-migration to ensure they can leverage Databricks.

Experience:
7+ years of experience in data engineering, cloud architecture, or related fields.
3+ years of hands-on experience with Databricks, including implementation of data engineering solutions, migration projects, and workload optimization.
Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their integration with Databricks.
Experience in end-to-end data migration projects involving large-scale data infrastructure.
Familiarity with ETL tools, data lakes, and data warehousing solutions.

Skills:
Expertise in Databricks architecture and best practices for data processing.
Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other recent Databricks components.
Proficiency in Databricks Asset Bundles.
Expertise in the design and development of migration frameworks using Databricks.
Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks.
Familiarity with data governance, security, and compliance in cloud environments.
Solid understanding of cloud-native data solutions and services.

Posted 1 week ago

Apply

5.0 - 10.0 years

11 - 20 Lacs

Hyderabad

Work from Office

Mandatory skills: AWS, Python, PySpark, SQL, Databricks

Role & responsibilities:
Design, develop, and maintain robust and scalable data pipelines using AWS services and Databricks.
Implement data processing solutions using PySpark and SQL to handle large volumes of data efficiently.
Collaborate with cross-functional teams to gather requirements and deliver data solutions that meet business needs.
Ensure data quality and integrity through rigorous testing and validation processes.
Optimize data workflows for performance and cost-efficiency.
Document data processes and provide support for data-related issues.

Preferred candidate profile:
AWS Services: Proficiency in AWS services such as S3, EC2, Lambda, and Redshift.
Programming: Strong experience in Python for data manipulation and scripting.
Big Data Processing: Hands-on experience with PySpark for distributed data processing.
SQL: Expertise in writing complex SQL queries for data extraction and transformation.
Databricks: Experience developing and managing workflows in a Databricks environment.
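A hedged sketch of the S3-based PySpark work this profile describes: read raw Parquet from S3, aggregate, and write the results back. Bucket names and columns are invented, and the cluster is assumed to carry AWS credentials (e.g., via an instance profile) plus the Hadoop AWS libraries for s3a:// access.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3_daily_aggregate").getOrCreate()

# Invented bucket/prefix; s3a:// access assumes Hadoop AWS libs and credentials.
events = spark.read.parquet("s3a://my-data-lake/raw/events/")

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "event_type")
         .agg(F.count("*").alias("event_count"),
              F.countDistinct("user_id").alias("unique_users")))

# Partitioning by date keeps downstream reads cheap and scans small.
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://my-data-lake/curated/daily_events/"))
```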

Posted 1 week ago

Apply

6.0 - 9.0 years

25 - 30 Lacs

Pune, Mumbai (All Areas)

Work from Office

Role & responsibilities:
Develop and maintain scalable data pipelines using Databricks and PySpark.
Collaborate with cross-functional teams to deliver effective data solutions.
Optimize ETL processes for enhanced performance and reliability.
Ensure adherence to data quality and governance best practices.
Deploy and manage data solutions in cloud environments (Azure/AWS).

Preferred candidate profile:
Proven experience as a Data Engineer, with a focus on Databricks and PySpark.
Strong proficiency in Python and SQL.
Experience with cloud platforms such as Azure (mainly) or AWS.
Familiarity with data warehousing and integration technologies.

Posted 1 week ago

Apply

3.0 - 6.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job Description: We are looking for a talented and motivated Data Analyst / BI Developer with 3-5 years of experience to join our team. The ideal candidate will have a strong background in SQL, experience with dashboard creation using Tableau, and hands-on knowledge of either AWS Redshift (or other AWS cloud data warehouse services) or Databricks. A problem-solver with excellent solution-finding abilities and a proactive, independent work ethic is essential. As a key contributor to the team, you will work with various business stakeholders to deliver actionable insights and drive data-driven decision-making within the organization. A strong understanding of the US healthcare ecosystem is an added advantage.

Key Responsibilities:
Develop, design, and maintain dashboards and reports using Tableau to support business decision-making.
Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources.
Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions.
Work with AWS Redshift and/or Databricks for data extraction, transformation, and loading (ETL) processes.
Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements.
Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines.
Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment.
Apply knowledge of US healthcare systems to help build relevant data solutions and insights.

Required Skills & Qualifications:
Minimum 3 years of experience in data analysis, business intelligence, or related roles.
Strong expertise in SQL for data querying and manipulation.
Extensive experience creating dashboards and reports using Tableau and Power BI.
Hands-on experience working with AWS Redshift and/or Databricks.
Proven problem-solving skills with a focus on providing actionable data solutions.
Self-motivated and able to work independently, while being a proactive team player.
Experience with or a strong understanding of US healthcare systems and data-related needs is a plus.
Excellent communication skills with the ability to work across different teams and stakeholders.

Posted 1 week ago

Apply

5.0 - 7.0 years

9 - 10 Lacs

Bengaluru

Work from Office

Seeking a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. Requires strong skills in SQL, Python, ETL, and cloud platforms (AWS/GCP/Azure). Experience with big data tools like Spark and Kafka preferred.

Posted 1 week ago

Apply

3.0 - 5.0 years

8 - 15 Lacs

Hyderabad

Work from Office

Understanding the requirements and developing ADF pipelines
Good knowledge of Databricks
Strong understanding of the existing ADF pipelines and enhancements
Deployment and monitoring of ADF jobs
Good understanding of SQL concepts and strong SQL query writing
Understanding and writing stored procedures
Performance tuning

Roles and Responsibilities:
Understand business and data integration requirements.
Design, develop, and implement scalable and reusable ADF pipelines for ETL/ELT processes.
Leverage Databricks for advanced data transformations within ADF pipelines.
Collaborate with data engineers to integrate ADF with Azure Databricks notebooks for big data processing.
Analyze and understand existing ADF workflows; implement improvements, optimize data flows, and incorporate new features based on evolving requirements.
Manage deployment of ADF solutions across development, staging, and production environments.
Set up monitoring, logging, and alerts to ensure smooth pipeline executions and troubleshoot failures.
Write efficient and complex SQL queries to support data analysis and ETL tasks.
Tune SQL queries for performance, especially in large-volume data scenarios.
Design, develop, and maintain stored procedures for data transformation and business logic; ensure procedures are optimized and modular for reusability and performance.
Identify performance bottlenecks in queries and data processing routines; apply indexing strategies, query refactoring, and execution plan analysis to enhance performance.

Posted 1 week ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are hiring a Cloud Data Scientist to build and scale data science solutions in cloud-native environments. This role is ideal for candidates who specialize in analytics and machine learning using cloud ecosystems.

Key Responsibilities:
Design predictive and prescriptive models using cloud ML tools.
Use BigQuery, SageMaker, or Azure ML Studio for scalable experimentation.
Collaborate on data sourcing, transformation, and governance in the cloud.
Visualize insights and present findings to stakeholders.

Required Skills & Qualifications:
Strong Python/R skills and experience with cloud ML stacks (AWS, GCP, or Azure).
Familiarity with cloud-native data warehousing and storage (Redshift, BigQuery, Data Lake).
Hands-on experience with model deployment, CI/CD, and A/B testing in the cloud.
Bonus: background in NLP, time series, or geospatial analysis.

Soft Skills:
Strong troubleshooting and problem-solving skills.
Ability to work independently and in a team.
Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies

Posted 1 week ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Bengaluru, Karnataka, India

On-site

Our new member - who are you?

You are driven by curiosity and are passionate about partnering with a diverse range of business and tech colleagues to deeply understand their customers, uncover new opportunities, advise and support them in the design, execution, and analysis of experiments, or to develop ML solutions for ML-driven personalisation (e.g., supervised or unsupervised) that drive substantial customer and business impact. You will use your expertise in experiment design, data science, causal inference, and machine learning to stimulate data-driven innovation. This is an incredibly exciting role with high impact. You are, like us, a team player who cares about your team members, about growing professionally and personally, about helping your teammates grow, and about having fun together.

Basic Qualifications:
Bachelor's or master's degree in Computer Science, Software Engineering, Data Science, or a related field.
3-5 years of professional experience in designing, building, and maintaining scalable data pipelines, in both on-premises and cloud (Azure preferred) environments.
Strong expertise in working with large datasets from Salesforce, port operations, cargo tracking, enterprise systems, etc.
Proficient in writing scalable, high-quality SQL queries and Python code with object-oriented programming, with a solid grasp of data structures and algorithms.
Experience in software engineering best practices, including version control (Git), CI/CD pipelines, code reviews, and writing unit/integration tests.
Familiarity with containerization and orchestration tools (Docker, Kubernetes) for data workflows and microservices.
Hands-on experience with distributed data systems (e.g., Spark, Kafka, Delta Lake, Hadoop).
Experience in data modelling and workflow orchestration tools like Airflow.
Ability to support ML engineers and data scientists by building production-grade data pipelines.
Demonstrated experience collaborating with product managers, domain experts, and stakeholders to translate business needs into robust data infrastructure.
Strong analytical and problem-solving skills, with the ability to work in a fast-paced, global, and cross-functional environment.

Preferred Qualifications:
Experience deploying data solutions in enterprise-grade environments, especially in the shipping, logistics, or supply chain domain.
Familiarity with Databricks, Azure Data Factory, Azure Synapse, or similar cloud-native data tools.
Knowledge of MLOps practices, including model versioning, monitoring, and data drift detection.
Experience building or maintaining RESTful APIs for internal ML/data services using FastAPI, Flask, or similar frameworks.
Working knowledge of ML concepts, such as supervised learning, model evaluation, and retraining workflows.
Understanding of data governance, security, and compliance practices.
Passion for clean code, automation, and continuously improving data engineering systems to support machine learning and analytics at scale.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing .
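Since this role leans on workflow orchestration with Airflow, here is a minimal, hedged sketch of a daily DAG wiring an extract step to a validation step. The DAG id, task logic, and names are invented for illustration; the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_shipments(**context):
    # Placeholder: pull the day's shipment events from a source system.
    print("extracting shipment events")

def validate_shipments(**context):
    # Placeholder: run row-count and null checks before publishing.
    print("validating extracted data")

with DAG(
    dag_id="daily_shipment_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_shipments)
    validate = PythonOperator(task_id="validate", python_callable=validate_shipments)

    extract >> validate         # validation runs only after a successful extract
```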

Posted 1 week ago

Apply

8.0 - 13.0 years

22 - 37 Lacs

Noida, Pune, Bengaluru

Work from Office

Desired Profile:
Collect, analyse, and document all business and functional requirements for the Data Lake infrastructure.
Support advancements in Business Analytics to ensure the system meets evolving business needs.
Profile new and existing data sources to define and refine the data warehouse model.
Collaborate with architects and stakeholders to define data workflows and strategies.
Drive process improvements to optimize data handling and performance.
Perform deep data analysis to ensure accuracy, consistency, and quality of data.
Work with QA resources on test planning to ensure quality and consistency within the data lake and data warehouse.
Gather data governance requirements and ensure implementation of data governance practices within the Data Lake infrastructure.
Collaborate with functional users to gather and define metadata for the Data Lake.

Key Skills: Azure Data Factory, Synapse, Power BI, Data Lake, SQL, KQL, Azure Security, data integration, Oracle EBS, cloud computing, data visualization, CI/CD pipelines, communication skills.

Please share your CV at parul@mounttalent.com

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 35 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Roles & Responsibilities:
Build strong relationships and channels of communication with other team members.
When necessary, challenge the team on their estimation values to gain a deeper understanding of the product from a business, design, and technical perspective.
Support the team in building a trusting and respectful environment where issues can be discussed openly and in a calm and friendly way.
Facilitate all reporting on scrum health and help to identify key learnings and areas of improvement.
Actively help the team in becoming self-organized and support them in aligning to the 12 principles of agile.
Display strong communication skills and be comfortable dealing with conflict resolution to facilitate continuous improvement and empowerment.
Manage dependencies, mitigate them, and support the team in accomplishing the sprint goal.
Collaborate effectively with Scrum Leads on standards and best practices.
Report accurately to management, depicting the true picture, and resolve impediments on a daily basis.
Deliver facilitation of all Scrum rituals, including Daily Stand-ups, Backlog Grooming, Estimation Sessions, Sprint Planning, and Retrospectives.
Candidates should have experience in Business Analysis and should have worked as a Business Analyst in the past.

Key Skills:
8+ years of experience working as a Scrum Master.
Experienced in working with Atlassian tools.
Experienced in assisting product owners with product backlogs.
Experienced in coaching team members.
Excellent verbal and written communication skills.

Posted 1 week ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Pune, Ahmedabad

Hybrid

Key Responsibilities:
Design, develop, and optimize data pipelines and ETL/ELT workflows using Microsoft Fabric, Azure Data Factory, and Azure Synapse Analytics.
Implement Lakehouse and Warehouse architectures within Microsoft Fabric, supporting medallion (bronze-silver-gold) data layers.
Collaborate with business and analytics teams to build scalable and reliable data models (star/snowflake) using Azure SQL, Power BI, and DAX.
Utilize Azure Analysis Services, Power BI semantic models, and Microsoft Fabric Dataflows for analytics delivery.
Apply strong hands-on Python skills for data transformation and processing.
Apply CI/CD best practices and manage code through Git version control.
Ensure data security, lineage, and quality using data governance best practices and Microsoft Purview (if applicable).
Troubleshoot and improve the performance of existing data pipelines and models.
Participate in code reviews, testing, and deployment activities.
Communicate effectively with stakeholders across geographies and time zones.

Required Skills:
Hands-on experience with Microsoft Fabric (Lakehouse, Warehouse, Dataflows, Pipelines).
Strong knowledge of Azure Synapse Analytics, Azure Data Factory, Azure SQL, and Azure Analysis Services.
Proficiency in Power BI and DAX for data visualization and analytics modeling.
Strong Python skills for scripting and data manipulation.
Experience in dimensional modeling, star/snowflake schemas, and Kimball methodologies.
Familiarity with CI/CD pipelines, DevOps, and Git-based versioning.
Understanding of data governance, data cataloging, and quality management practices.
Excellent verbal and written communication skills.
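As a hedged sketch of the star-schema modeling step mentioned above, the snippet below splits a flat silver-layer dataset into a product dimension and a sales fact using PySpark, which Fabric's Spark notebooks support. The dataset, columns, and surrogate-key approach are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Fabric/Spark runtime

# Invented silver-layer input with one row per sale.
sales = spark.createDataFrame(
    [("2024-01-05", "C001", "Widget", 2, 19.99),
     ("2024-01-05", "C002", "Gadget", 1, 49.50)],
    ["sale_date", "customer_id", "product_name", "qty", "unit_price"],
)

# Dimension: distinct products with a surrogate key.
dim_product = (sales.select("product_name").distinct()
               .withColumn("product_key", F.monotonically_increasing_id()))

# Fact: measures plus foreign keys, following a simple star schema.
fact_sales = (sales.join(dim_product, "product_name")
              .select("sale_date", "customer_id", "product_key", "qty",
                      (F.col("qty") * F.col("unit_price")).alias("revenue")))

dim_product.show()
fact_sales.show()
```

In production, stable surrogate keys usually come from a key table or identity mechanism rather than monotonically_increasing_id, which is only unique within a single job.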

Posted 1 week ago

Apply

5.0 - 10.0 years

13 - 23 Lacs

Mumbai, Hyderabad, Bengaluru

Hybrid

Primarily looking for a Data Engineer with expertise in building data pipelines using Databricks, PySpark, and Spark SQL on cloud/Hadoop distributions such as AWS EMR, Databricks, and Cloudera.

Must have: AWS, Databricks.
Good to have: PySpark, Snowflake, Talend.

Requirements:
Should be very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
Familiarity with AWS compute, storage, and IAM concepts.
Experience working with an S3 data lake as the storage tier.
Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
Cloud warehouse experience (Snowflake, etc.) is a huge plus.
Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
Hands-on experience with Databricks and Spark SQL.
AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
Experience in shell scripting.
Exceptionally strong analytical and problem-solving skills.
Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
Strong experience with relational databases and data access methods, especially SQL.
Excellent collaboration and cross-functional leadership skills.
Excellent communication skills, both written and verbal.
Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
Ability to leverage data assets to respond to complex questions that require timely answers.
Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform.

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.

Posted 1 week ago

Apply

3.0 - 6.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time and batch data systems across analytics, ML, and product teams. A hybrid work option is available.

Required candidate profile: 3+ years in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL, and CDC experience. Must know data lakes, warehousing, and orchestration tools.
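As a hedged sketch of the CDC work this profile mentions, the snippet below consumes Debezium change events from Kafka and applies inserts, updates, and deletes to a toy in-memory view. The topic name and broker address are invented; it assumes the kafka-python package and Debezium's standard JSON envelope (op codes c/u/d/r with before/after row images).

```python
import json
from kafka import KafkaConsumer  # assumed dependency: pip install kafka-python

# Invented topic/broker; Debezium names topics like <server>.<schema>.<table>.
consumer = KafkaConsumer(
    "dbserver1.public.orders",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")) if b else None,
)

orders = {}  # toy materialized view keyed by primary key

for message in consumer:
    event = message.value
    if event is None:                        # tombstone record after a delete
        continue
    payload = event.get("payload", event)    # envelope shape depends on converter config
    op = payload.get("op")
    if op in ("c", "u", "r"):                # create, update, snapshot read
        row = payload["after"]
        orders[row["order_id"]] = row
    elif op == "d":                          # delete: drop the row via its before-image
        orders.pop(payload["before"]["order_id"], None)
```

A production sink would write these changes to Redshift or a Delta table instead of a dict, with idempotent upserts keyed on the primary key.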

Posted 1 week ago

Apply

5.0 - 10.0 years

11 - 21 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Job Title: Senior Data Engineer - ADF | Snowflake | DBT | Databricks
Experience: 5 to 8 Years
Locations: Pune / Hyderabad / Gurgaon / Bangalore (Hybrid)
Job Type: Full Time, Permanent

Job Description: We are hiring for a Senior Data Engineer role with strong expertise in Azure Data Factory (ADF), Snowflake, DBT, and Azure Databricks. The ideal candidate will be responsible for designing, building, and maintaining scalable cloud-based data pipelines and enabling high-quality data delivery for analytics and reporting.

Key Responsibilities:
Build and manage ETL/ELT pipelines using ADF, Snowflake, DBT, and Databricks.
Create parameterized, reusable components within ADF pipelines.
Perform data transformations and modeling in Snowflake using DBT.
Use Databricks for data processing with PySpark/SQL.
Collaborate with stakeholders to define and implement data solutions.
Optimize data workflows for performance, scalability, and cost-efficiency.
Ensure data quality, governance, and documentation standards.

Mandatory Skills:
Azure Data Factory (ADF)
Snowflake
DBT (Data Build Tool)
Azure Databricks
Strong SQL and data modeling experience

Good-to-Have Skills:
Azure Data Lake, Azure Synapse, Blob Storage
CI/CD using Azure DevOps or GitHub
Python scripting, PySpark
Power BI/Tableau integration
Experience with metadata/data governance tools

Role Requirements:
Education: Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field.
Certifications: Azure or Snowflake certification is a plus.
Strong problem-solving and communication skills.

Keywords: Azure Data Factory, ADF, Snowflake, DBT, Azure Databricks, PySpark, SQL, Data Engineer, Azure Data Lake, ETL, ELT, Azure Synapse, Power BI, CI/CD

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies