
995 Databricks Jobs - Page 40

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2 - 7 years

11 - 15 Lacs

Hyderabad

Work from Office


About the role
We are seeking a highly skilled and experienced Data Architect to join our team. The ideal candidate will have at least 12 years of experience in software and data engineering and analytics, and a proven track record of designing and implementing complex data solutions. You will be expected to design, create, deploy, and manage Blackbaud's data architecture. This role has considerable technical influence within the Data Platform and Data Engineering teams and the Data Intelligence Center of Excellence at Blackbaud. This individual acts as an evangelist for proper data strategy with other teams at Blackbaud and assists with the technical direction, specifically with data, of other projects.

What you'll be doing
- Develop and direct the strategy for all aspects of Blackbaud's Data and Analytics platforms, products, and services.
- Set, communicate, and facilitate technical direction more broadly for the AI Center of Excellence and collaboratively beyond the Center of Excellence.
- Design and develop breakthrough products, services, or technological advancements in the Data Intelligence space that expand our business.
- Work alongside product management to craft technical solutions to solve customer business problems.
- Own the technical data governance practices and ensure data sovereignty, privacy, security, and regulatory compliance.
- Continuously challenge the status quo of how things have been done in the past.
- Build a data access strategy to securely democratize data and enable research, modelling, machine learning, and artificial intelligence work.
- Help define the tools and pipeline patterns our engineers and data engineers use to transform data and support our analytics practice.
- Work in a cross-functional team to translate business needs into data architecture solutions.
- Ensure data solutions are built for performance, scalability, and reliability.
- Mentor junior data architects and team members.
- Keep current on technology: distributed computing, big data concepts, and architecture.
- Promote internally how data within Blackbaud can help change the world.

What we want you to have:
- 10+ years of experience in data and advanced analytics.
- At least 8 years of experience working on data technologies in Azure/AWS.
- Experience building modern products and infrastructure.
- Experience working with .NET/Java and microservice architecture.
- Expertise in SQL and Python.
- Expertise in SQL Server, Azure Data Services, and other Microsoft data technologies.
- Expertise in Databricks and Microsoft Fabric.
- Strong understanding of data modeling, data warehousing, data lakes, data mesh, and data products.
- Experience with machine learning.
- Excellent communication and leadership skills.
- Ability to work flexible hours as required by business priorities.
- Ability to deliver software that meets consistent standards of quality, security, and operability.

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.

Posted 1 month ago

Apply

1 - 3 years

3 - 6 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description
We are seeking an MDM Associate Analyst with 2-5 years of development experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills, and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong MDM experience across configuration (L3 configuration, asset creation, data modeling, etc.), ETL and data mappings (CAI, CDI), data mastering (match/merge and survivorship rules), and source and target integrations (REST API, batch integration, integration with Databricks tables, etc.).

Roles & Responsibilities
- Analyze and manage customer master data using Reltio or Informatica MDM solutions.
- Perform advanced SQL queries and data analysis to validate and ensure master data integrity.
- Leverage Python, PySpark, and Databricks for scalable data processing and automation.
- Collaborate with business and data engineering teams for continuous improvement in MDM solutions.
- Implement data stewardship processes and workflows, including approval and DCR mechanisms.
- Utilize AWS cloud services for data storage and compute processes related to MDM.
- Contribute to metadata and data modeling activities.
- Track and manage data issues using tools such as JIRA and document processes in Confluence.
- Apply Life Sciences/Pharma industry context to ensure data standards and compliance.

Basic Qualifications and Experience
- Master’s degree with 1-3 years of experience in Business, Engineering, IT or related field, OR
- Bachelor’s degree with 2-5 years of experience in Business, Engineering, IT or related field, OR
- Diploma with 6-8 years of experience in Business, Engineering, IT or related field

Functional Skills
Must-Have Skills:
- Strong experience with Informatica or Reltio MDM platforms in building configurations from scratch (L3 configuration, data modeling, asset creation, setting up API integrations, orchestration).
- Strong experience in building data mappings, data profiling, and creating and implementing business rules for data quality and data transformation.
- Strong experience in implementing match and merge rules and survivorship of golden records.
- Expertise in integrating master data records with downstream systems.
- Very good understanding of DWH basics and good knowledge of data modeling.
- Experience with IDQ, data modeling, and approval workflow/DCR.
- Advanced SQL expertise and data wrangling.
- Exposure to Python and PySpark for data transformation workflows.
- Knowledge of MDM, data governance, stewardship, and profiling practices.

Good-to-Have Skills:
- Familiarity with Databricks and AWS architecture.
- Background in Life Sciences/Pharma industries.
- Familiarity with project tools like JIRA and Confluence.
- Basics of data engineering concepts.

Professional Certifications
- Any ETL certification (e.g., Informatica)
- Any data analysis certification (SQL, Python, Databricks)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
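
For illustration, the match/merge and survivorship concepts above can be sketched in PySpark. This is a minimal sketch with a toy, hypothetical customers dataset; in practice Informatica and Reltio express these rules as platform configuration rather than hand-written code.

```python
# Minimal sketch of match/merge + survivorship logic on a hypothetical
# "customers" dataset. Real MDM platforms configure such rules
# declaratively; this only illustrates the concepts.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("mdm-sketch").getOrCreate()

customers = spark.createDataFrame(
    [(1, "Asha Rao", "asha.rao@example.com ", "2024-01-10"),
     (2, "A. Rao",   "ASHA.RAO@example.com",  "2024-06-02"),
     (3, "Vikram S", "vikram@example.com",    "2024-03-15")],
    ["id", "name", "email", "updated_at"],
)

# Match rule: records sharing a normalized email are candidate duplicates.
matched = customers.withColumn("match_key", F.lower(F.trim("email")))

# Survivorship rule: within each match group, keep the most recently
# updated record as the "golden" record.
w = Window.partitionBy("match_key").orderBy(F.col("updated_at").desc())
golden = (matched.withColumn("rn", F.row_number().over(w))
                 .filter("rn = 1")
                 .drop("rn", "match_key"))

golden.show(truncate=False)
```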

Posted 1 month ago

Apply

- 3 years

2 - 5 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description
We are seeking an MDM Associate Data Steward who will be responsible for ensuring the accuracy, completeness, and reliability of master data across critical business domains such as Customer, Product, Affiliations, and Payer. This role involves actively managing and curating master data through robust data stewardship processes, comprehensive data cataloging, and data governance frameworks utilizing Informatica or Reltio MDM platforms. Additionally, the incumbent will perform advanced data analysis, data validation, and data transformation tasks through SQL queries and Python scripts to enable informed, data-driven business decisions. The role emphasizes cross-functional collaboration with various teams, including Data Engineering, Commercial, Medical, Compliance, and IT, to align data management activities with organizational goals and compliance standards.

Roles & Responsibilities
- Responsible for master data stewardship, ensuring data accuracy and integrity across key master data domains (e.g., Customer, Product, Affiliations).
- Conduct advanced data profiling, cataloging, and reconciliation activities using Informatica or Reltio MDM platforms.
- Manage the reconciliation of potential matches, ensuring accurate resolution of data discrepancies and preventing duplicate data entries.
- Effectively manage Data Change Request (DCR) processes, including reviewing, approving, and documenting data updates in compliance with established procedures and SLAs.
- Execute and optimize SQL queries for validation and analysis of master data.
- Perform basic Python scripting for data transformation, quality checks, and automation.
- Collaborate effectively with cross-functional teams including Data Engineering, Commercial, Medical, Compliance, and IT to fulfill data requirements.
- Support user acceptance testing (UAT) and system integration tests for MDM-related system updates.
- Implement data governance processes ensuring compliance with enterprise standards, policies, and frameworks.
- Document and maintain accurate SOPs, data catalogs, playbooks, and SLAs.
- Identify and implement process improvements to enhance data stewardship and analytic capabilities.
- Perform regular audits and monitoring to maintain high data quality and integrity.

Basic Qualifications and Experience
- Master’s degree with 1-3 years of experience in Business, Engineering, IT or related field, OR
- Bachelor’s degree with 2-5 years of experience in Business, Engineering, IT or related field, OR
- Diploma with 6-8 years of experience in Business, Engineering, IT or related field

Functional Skills
Must-Have Skills:
- Direct experience in data stewardship, data profiling, and master data management.
- Hands-on experience with Informatica or Reltio MDM platforms.
- Proficiency in SQL for data analysis and querying.
- Knowledge of data cataloging techniques and tools.
- Basic proficiency in Python scripting for data processing.

Good-to-Have Skills:
- Experience with PySpark and Databricks for large-scale data processing.
- Background in the pharmaceutical, healthcare, or life sciences industries.
- Familiarity with AWS or other cloud-based data solutions.
- Strong project management and agile workflow familiarity (e.g., using Jira, Confluence).
- Understanding of regulatory compliance related to data protection (GDPR, CCPA).

Professional Certifications
- Any ETL certification (e.g., Informatica)
- Any data analysis certification (SQL)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions.
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
- Effective problem-solving skills to address data-related issues and implement scalable solutions.
- Ability to work effectively with global, virtual teams.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 month ago

Apply

3 - 6 years

14 - 18 Lacs

Bengaluru

Work from Office


As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies.
- Develop streaming pipelines.
- Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 5-8 years of experience in data management (DW, DL, data platform, lakehouse) and data engineering.
- Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server DB.
- Good to excellent SQL skills.

Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developer.
- Knowledge or experience of Snowflake is an added advantage.
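
For illustration, a minimal PySpark sketch of the ingest, process, and transform pattern described above. File paths and column names are hypothetical placeholders, not part of the listing.

```python
# Minimal sketch of an ingest -> transform -> persist batch pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

# Ingest: read raw CSV files landed in the data lake (hypothetical path).
raw = (spark.read.option("header", True)
            .option("inferSchema", True)
            .csv("/mnt/landing/orders/"))

# Transform: basic cleansing and type derivation.
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_date", F.to_date("order_date"))
            .withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount") > 0))

# Persist: write partitioned Parquet for downstream Hive/SQL access.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("/mnt/curated/orders/"))
```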

Posted 1 month ago

Apply

2 - 6 years

9 - 13 Lacs

Mumbai

Work from Office


About The Role
The leader must demonstrate an ability to anticipate, understand, and act on evolving customer needs, both stated and unstated. Through this, the candidate must create a customer-centric organization and use innovative thinking frameworks to foster value-added relations. With the right balance of bold initiatives, continuous improvement, and governance, the leader must adhere to the delivery standards set by the client and eClerx by leveraging the knowledge of market drivers and competition to effectively anticipate trends and opportunities. Besides, the leader must demonstrate a capacity to transform, align, and energize organization resources, and take appropriate risks to lead the organization in a new direction. As a leader, the candidate must build engaged and high-impact direct, virtual, and cross-functional teams, and take the lead towards raising the performance bar, building capability, and bringing out the best in their teams. By collaborating and forging partnerships both within and outside the functional area, the leader must work towards a shared vision and achieve positive business outcomes.

Associate Program Manager: Role and responsibilities
- Represent eClerx in client pitches and external forums.
- Own platform and expertise through various COE activities and content generation to promote practice and business development.
- Lead continuous research and assessments to explore the best and latest platforms, approaches, and methodologies.
- Contribute to developing the practice area through best practices, ideas, and points of view in the form of white papers and micro articles.
- Lead/partner in multi-discipline assessments and workshops at client sites to identify new opportunities.
- Lead key projects and provide development/technical leadership to junior resources.
- Drive solution design and build to ensure scalability, performance, and reuse.
- Design robust data architectures, considering performance, data quality, scalability, and data latency requirements.
- Recommend and drive consensus around preferred data integration and platform approaches, including Azure and Snowflake.
- Anticipate data bottlenecks (latency, quality, speed) and recommend appropriate remediation strategies.

This is a hands-on position with a significant development component, and the ideal candidate is expected to lead the technical development and delivery of highly visible and strategic projects.

Technical and Functional skills:
- Bachelor's Degree with at least 2-3 large-scale cloud implementations within the Retail, Manufacturing, or Technology industries.
- 10+ years of overall experience with data management and cloud engineering.
- Expertise in Azure Cloud, Azure Data Lake, Databricks, Snowflake, Teradata, and compatible ETL technologies.
- Strong attention to detail and ability to collaborate with multiple parties, including analysts, data subject matter experts, external labs, etc.

Posted 1 month ago

Apply

3 - 7 years

5 - 9 Lacs

Mumbai

Work from Office


About The Role
The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Senior Process Manager: Roles and responsibilities
We are seeking a talented and motivated Data Engineer to join our dynamic team. The ideal candidate will have a deep understanding of data integration processes and experience in developing and managing data pipelines using Python, SQL, and PySpark within Databricks. You will be responsible for designing robust backend solutions, implementing CI/CD processes, and ensuring data quality and consistency.
- Data Pipeline Development: Use Databricks features to explore raw datasets and understand their structure; create and optimize Spark-based workflows; build end-to-end data processing pipelines, including ingesting raw data, transforming it, and running analyses on the processed data; create and maintain data pipelines using Python and SQL.
- Solution Design and Architecture: Design and architect backend solutions for data integration, ensuring they are robust, scalable, and aligned with business requirements; implement data processing pipelines using various technologies, including cloud platforms, big data tools, and streaming frameworks.
- Automation and Scheduling: Automate data integration processes and schedule jobs on servers to ensure seamless data flow.
- Data Quality and Monitoring: Develop and implement data quality checks and monitoring systems to ensure data accuracy and consistency.
- CI/CD Implementation: Use Jenkins and Bitbucket to create and maintain metadata and job files; implement continuous integration and continuous deployment (CI/CD) processes in both development and production environments to deploy data pipelines efficiently.
- Collaboration and Documentation: Work effectively with cross-functional teams, including software engineers, data scientists, and DevOps, to ensure successful project delivery; document data pipelines and architecture to ensure knowledge transfer and maintainability; participate in stakeholder interviews, workshops, and design reviews to define data models, pipelines, and workflows.

Technical and Functional Skills:
- Education and Experience: Bachelor's Degree with 7+ years of experience, including at least 3+ years of hands-on experience in SQL and Python.
- Technical Proficiency: Proficiency in writing and optimizing SQL queries in MySQL and SQL Server; expertise in Python for writing reusable components and enhancing existing ETL scripts; solid understanding of ETL concepts and data pipeline architecture, including CDC, incremental loads, and slowly changing dimensions (SCDs); hands-on experience with PySpark; knowledge and experience with Databricks is a bonus; familiarity with data warehousing solutions and ETL processes; understanding of data architecture and backend solution design.
- Cloud and CI/CD Experience: Experience with cloud platforms such as AWS, Azure, or Google Cloud; familiarity with Jenkins and Bitbucket for CI/CD processes.
- Additional Skills: Ability to work independently and manage multiple projects simultaneously.
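
For illustration, a minimal sketch of an incremental (CDC-style) load such as the listing describes, using the Delta Lake merge API available on Databricks. Table paths, keys, and columns are hypothetical; a full SCD Type 2 design would also maintain effective-date columns rather than updating rows in place.

```python
# Minimal sketch of an incremental upsert into a Delta table, one common
# way to apply a CDC batch on Databricks. Paths and keys are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

updates = spark.read.parquet("/mnt/staging/customer_changes/")  # CDC batch
target = DeltaTable.forPath(spark, "/mnt/curated/dim_customer/")

(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()     # existing keys: apply latest values
       .whenNotMatchedInsertAll()  # new keys: insert
       .execute())
```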

Posted 1 month ago

Apply

3 - 8 years

12 - 16 Lacs

Mumbai

Work from Office


About The Role
The leader must demonstrate an ability to anticipate, understand, and act on evolving customer needs, both stated and unstated. Through this, the candidate must create a customer-centric organization and use innovative thinking frameworks to foster value-added relations. With the right balance of bold initiatives, continuous improvement, and governance, the leader must adhere to the delivery standards set by the client and eClerx by leveraging the knowledge of market drivers and competition to effectively anticipate trends and opportunities. Besides, the leader must demonstrate a capacity to transform, align, and energize organization resources, and take appropriate risks to lead the organization in a new direction. As a leader, the candidate must build engaged and high-impact direct, virtual, and cross-functional teams, and take the lead towards raising the performance bar, building capability, and bringing out the best in their teams. By collaborating and forging partnerships both within and outside the functional area, the leader must work towards a shared vision and achieve positive business outcomes.

Associate Program Manager: Role and responsibilities
- Represent eClerx in client pitches, external forums, and COE (Center of Excellence) activities to promote cloud engineering expertise.
- Lead research, assessments, and development of best practices to keep our cloud engineering solutions at the forefront of technology.
- Contribute to the growth of the cloud engineering practice through thought leadership, including the creation of white papers and articles.
- Lead and collaborate on multi-discipline assessments at client sites to identify new cloud-based opportunities.
- Provide technical leadership in the design and development of robust, scalable cloud architectures.
- Drive key cloud engineering projects, ensuring high performance, scalability, and adherence to best practices.
- Design and implement data architectures that address performance, scalability, and data latency requirements.
- Lead the development of cloud-based solutions, ensuring they are scalable, robust, and aligned with business needs.
- Anticipate and mitigate data bottlenecks, proposing strategies to enhance data processing efficiency.
- Provide mentorship and technical guidance to junior team members.

Technical and Functional skills:
- Bachelor's degree with 10+ years of experience in data management and cloud engineering.
- Proven experience in at least 2-3 large-scale cloud implementations within industries such as Retail, Manufacturing, or Technology.
- Expertise in Azure Cloud, Azure Data Lake, Databricks, Teradata, and ETL technologies.
- Strong problem-solving skills with a focus on performance optimization and data quality.
- Ability to collaborate effectively with analysts, subject matter experts, and external partners.

Posted 1 month ago

Apply

5 - 7 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office


Job: Data Engineer
Experience: 5+ years
Mandatory Skills: Python, PySpark, Linux Shell Scripting
Location: Trivandrum

Required Skills & Experience:
- Experience with large-scale distributed data processing systems.
- Expertise in data modelling, testing, quality, access, and storage.
- Proficiency in Python and SQL, and experience with Databricks and DBT.
- Experience implementing cloud data technologies (GCP, Azure, or AWS).
- Knowledge of improving the data development lifecycle and shortening lead times.
- Agile delivery experience.

Posted 1 month ago

Apply

11 - 20 years

20 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Warm greetings from SP Staffing Services Private Limited! We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.

Relevant Experience: 11-20 years
Location: Pan India

Job Description: Minimum 2 years of hands-on experience as a Solution Architect (AWS Databricks).

If interested, please forward your updated resume to sankarspstaffings@gmail.com

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 1 month ago

Apply

10 - 17 years

20 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Hiring across India. We are looking for someone with strong expertise in Azure (ADF pipelines), Databricks, Python, FastAPI, and SQL/Snowflake [must have].
- Hands-on experience with data architecture and design techniques (local/abstract), including API techniques and development.
- Hands-on experience with real-time data pipelining technologies (e.g., Kafka, Event Hubs) [must have].
- Experience with Delta Lake is preferable.
- Experience with architecting, designing, and developing big data processing pipelines.
- Extensive experience with data platforms: working with large data sets, large amounts of data in motion, and numerous big data technologies.
- Experience with cloud-based software, specifically Microsoft Azure (including but not limited to Databricks and Azure Functions).
- Experience with agile project delivery techniques (e.g., Scrum, Kanban).
- Good interpersonal, communication, facilitation, and presentation skills.
- Comfortable communicating with business colleagues and working in cross-functional global teams.
- Ability to work under pressure with tight deadlines while balancing multiple priorities.
- Excellent analytical, troubleshooting, and problem-solving skills, with a willingness to take ownership and resolve technical challenges.
- Excellent communication and stakeholder management skills.
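
For illustration, a minimal Spark Structured Streaming sketch of the real-time pipelining this listing calls for, reading from Kafka. Broker, topic, fields, and paths are hypothetical; Azure Event Hubs can be consumed the same way through its Kafka-compatible endpoint.

```python
# Minimal sketch of a streaming pipeline: Kafka -> parse JSON -> Delta sink.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (spark.readStream.format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical
               .option("subscribe", "events")                      # hypothetical topic
               .load()
               .selectExpr("CAST(value AS STRING) AS json"))

# Extract a couple of hypothetical fields from the JSON payload.
parsed = events.select(
    F.get_json_object("json", "$.id").alias("id"),
    F.get_json_object("json", "$.amount").cast("double").alias("amount"),
)

# Append to a Delta table with checkpointing for exactly-once recovery.
query = (parsed.writeStream.format("delta")
               .option("checkpointLocation", "/mnt/chk/events/")
               .outputMode("append")
               .start("/mnt/curated/events/"))
query.awaitTermination()
```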

Posted 1 month ago

Apply

5 - 10 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Azure Databricks (Python), Azure DevOps, Azure Data Factory

Posted 1 month ago

Apply

8 - 12 years

20 - 25 Lacs

Gandhinagar

Remote


Requirements:
- 8+ years of professional experience as a data engineer, including 2+ years as a senior data engineer.
- Strong working experience in Python and its data analysis packages (Pandas/NumPy).
- Strong understanding of prevalent cloud ecosystems and experience in one of the cloud platforms: AWS, Azure, or GCP.
- Strong working experience in one of the leading MPP databases: Snowflake, Amazon Redshift, Azure Synapse, or Google BigQuery.
- Strong working experience in one of the leading cloud data orchestration tools: Azure Data Factory, AWS Glue, or Apache Airflow (see the sketch after this list).
- Experience working with Agile methodologies and Test-Driven Development, and implementing CI/CD pipelines using one of the leading services: GitLab, Azure DevOps, Jenkins, AWS CodePipeline, or Google Cloud Build.
- Data governance / data management / data quality project implementation experience.
- Experience in big data processing using Spark.
- Strong experience with SQL databases (SQL Server, Oracle, Postgres, etc.).
- Stakeholder management experience and very good communication skills.
- Working experience on end-to-end project delivery, including requirement gathering, design, development, testing, deployment, and warranty support.
- Working experience with various testing levels, such as unit testing, integration testing, and system testing.
- Working experience with large, heterogeneous datasets in building and optimizing data pipelines and pipeline architectures.

Nice-to-have skills:
- Working experience in Databricks notebooks and managing Databricks clusters.
- Experience in a data modelling tool such as Erwin or ER/Studio.
- Experience in one of the data architectures, such as Data Mesh or Data Fabric.
- Has handled real-time or near-real-time data.
- Experience in one of the leading reporting and analysis tools, such as Power BI, Qlik, Tableau, or Amazon QuickSight.
- Working experience with API integration.
- General insurance / banking / finance domain understanding.
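
For illustration, a minimal Apache Airflow sketch of the kind of orchestration the listing mentions: a daily three-step ETL DAG. Task names and bodies are hypothetical placeholders.

```python
# Minimal sketch of a daily extract -> transform -> load DAG in Airflow 2.x.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():    # pull the day's increment from the source system
    print("extracting...")

def transform():  # apply business rules and data quality checks
    print("transforming...")

def load():       # upsert into the warehouse (e.g., Snowflake/Redshift)
    print("loading...")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain
```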

Posted 1 month ago

Apply

20 - 30 years

70 - 90 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office


Job Title: Global Head - Data Practice
Location: India
Reports to: CEO

About Our Client:
Our client is a trusted digital engineering and cloud transformation partner, enabling businesses across 40 countries to unlock value from technology at scale. With a legacy of delivering complex programs in the UK, US, Middle East, and APAC, they are committed to engineering excellence that drives agility, resilience, and customer-centric innovation. Their portfolio spans Cloud-Native & Application Development, Oracle Cloud Implementations, Digital Commerce, Data & AI, Application Modernization, Agile Consulting, and Digital Assurance. At the heart of their transformation capability is the ability to harness data for smarter, faster, and more informed decision-making.

Role Summary
As the Global Head - Data Practice, you will be responsible for shaping and scaling the company's data practice into a globally recognized, high-impact capability. Based in India and working closely with global leadership, this is a strategic and visible role that combines thought leadership, delivery excellence, and business accountability. You will spearhead the end-to-end evolution of the data offerings, from data engineering and platform modernization to AI-driven insights and next-gen governance, partnering with enterprise clients on their most complex transformation journeys. This is not just a practice leadership role. It's a global change-agent mandate for someone who can influence at the C-suite, innovate with purpose, and scale with precision.

Key Responsibilities
Strategic Leadership:
- Own and evolve the global strategy for the Data Practice across key markets (UK, Europe, North America, Middle East, APAC), ensuring alignment with industry trends, client needs, and the company's growth priorities.
- Translate the practice vision into an actionable roadmap covering go-to-market, talent, partnerships, and capability building.

Practice Development:
- Spearhead the growth and evolution of the company's data capabilities, with a strong emphasis on Snowflake, Databricks, and related modern data platforms.
- Drive the continuous innovation and improvement of service offerings in data management, analytics, and cloud-native solutions.

Client & Market Engagement:
- Serve as a senior advisor to CXOs on data transformation initiatives, shaping outcomes that blend business insight with technical excellence.
- Develop compelling, differentiated propositions for data modernization, cloud data platforms, AI/ML, governance, and analytics-led innovation.
- Build deep client relationships with existing and new accounts, ensuring that the data practice is viewed as a value creator, not just a service line.

Practice & People Leadership:
- Build and lead a high-caliber team of data engineers, architects, data scientists, and consultants across geographies.
- Invest in developing next-generation leadership within the practice, creating a bench of future-ready talent aligned to global demand.
- Create a strong culture of innovation, accountability, and continuous learning.

Operational & Financial Excellence:
- Own the P&L of the global data practice, with responsibility for revenue growth, margin optimization, and delivery effectiveness.
- Work with sales, delivery, and marketing to create scalable solution accelerators, reusable assets, and market-ready offerings.

Innovation & Ecosystem:
- Drive the adoption of emerging technologies with a strategic focus on Snowflake and Databricks as core platforms, while also exploring areas like DataOps, GenAI, MLOps, metadata automation, and Data Mesh that complement and enhance the modern data stack.
- Build deep ecosystem partnerships with hyperscalers (AWS, Azure, GCP), niche ISVs, and platform providers, especially those aligned with the Snowflake and Databricks ecosystems.
- Represent the organization at global industry forums, partner events, and thought leadership platforms to showcase capabilities and reinforce the company's leadership in next-gen data transformation.

Ideal Candidate Profile
- 15-20+ years of experience in data consulting, digital transformation, or analytics leadership roles with global delivery exposure.
- Experience in scaling global data practices within IT services, consulting, or cloud-native companies.
- Proven expertise in cloud data platforms (Azure, AWS, GCP), modern data architectures, and advanced analytics (ML/AI).
- Executive presence and gravitas to engage and influence CXO-level stakeholders across industries.
- Track record of building data products, platforms, or frameworks that have driven measurable business impact.
- Familiarity with verticalized solutions (e.g., healthcare data, financial services analytics, supply chain intelligence) is a plus.
- Strong understanding of data compliance, privacy, and ethical AI considerations in global markets.
- Bachelor's in Engineering/Technology or a relevant field; Master's or MBA preferred.

What We Offer
- A strategic, board-visible role shaping the future of the company's global growth agenda.
- Competitive compensation, including performance incentives and Long-Term Incentive Plans (RSUs).
- A platform to influence enterprise-wide transformation and lead from the front on innovation.
- A collaborative, values-driven culture that respects autonomy and champions impact.
- The opportunity to represent the organization at international platforms and contribute to global thought leadership.

Posted 1 month ago

Apply

8 - 13 years

15 - 20 Lacs

Pune

Hybrid


Job Description:
- Strong experience in Python programming.
- Experience with Databricks.
- Experience with databases such as SQL Server; perform database performance tuning and optimization.
- Work with the Databricks platform for big data processing and analytics.
- Develop and maintain ETL processes using Databricks notebooks.
- Implement and optimize data pipelines for data transformation and integration.
- Design, develop, test, and deploy high-performance and scalable data solutions using Python.
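
For illustration, a minimal PySpark sketch of one common Databricks performance-tuning technique implied above: broadcasting a small dimension table to avoid a shuffle join. The table names are hypothetical.

```python
# Minimal sketch of a broadcast join plus partitioned write, two frequent
# Spark optimization levers. Table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.table("sales_facts")   # large fact table
dims = spark.read.table("product_dim")    # small lookup table

# Broadcast the small side so each executor joins locally, avoiding a shuffle.
joined = facts.join(F.broadcast(dims), on="product_id", how="left")

# Repartition on the write key so output files stay balanced per partition.
(joined.repartition("sale_date")
       .write.mode("overwrite")
       .partitionBy("sale_date")
       .saveAsTable("sales_enriched"))
```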

Posted 1 month ago

Apply

2 - 6 years

12 - 16 Lacs

Pune

Work from Office


As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies.
- Develop streaming pipelines.
- Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server DB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers such as Kafka.

Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developer.

Posted 1 month ago

Apply

2 - 4 years

12 - 17 Lacs

Chennai, Pune

Work from Office


Data Quality/Governance Analyst - Data & AI

KEY ACCOUNTABILITIES
- Investigate, troubleshoot, and resolve data-related production issues.
- Provide timely reporting on data quality metrics and trends.
- Document and maintain support procedures for data quality processes.
- Collaborate with IT and business teams to implement data quality improvements.
- Ensure data validation and reconciliation processes are followed.
- Engage with stakeholders to establish procedures for data validation and quality metrics.
- Track data issues using incident tickets and ensure timely resolution, escalating issues for immediate attention if unresolved.
- Maintain and update production support dashboards (Microsoft Power BI) to ensure accuracy and meet monitoring requirements.
- Develop data quality health reports for stakeholders to monitor and observe data reliability across the platform.
- Create and maintain documentation of procedures and best practices for data governance and related processes.
- Provide training to users on tools to promote awareness and adherence.
- Collaborate with data owners and data stewards to ensure data governance is implemented and followed.
- Work with vendors on technical platform issues that require coordination and resolution.
- Deliver consistent, accurate, and high-quality work while communicating findings and insights clearly.

EXPERIENCE / QUALIFICATIONS
- At least 4 years of hands-on experience with a data quality tool (Collibra is preferred), Databricks, and Microsoft Power BI.
- Strong technical skills in data and database management, with proficiency in data wrangling, analytics, and transformation using Python and SQL.
- Asset management experience is beneficial for understanding and recommending the required data quality rules and remediation plans to stakeholders.

Other Attributes
- Curious, analytical, and able to think critically to solve problems.
- Detail-oriented and comfortable dealing with complex structured and unstructured datasets.
- Customer-centric, striving to deliver value by effectively and proactively engaging stakeholders.
- Clear and effective communication skills, with an ability to communicate complex ideas and manage stakeholder expectations.
- Strong organisational and prioritisation skills; adaptable and able to work independently as required.
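
For illustration, a minimal PySpark sketch of automated data quality checks like those described above (completeness, uniqueness, range). The dataset and rules are hypothetical; a governance tool such as Collibra would express similar rules declaratively rather than in hand-written code.

```python
# Minimal sketch of rule-based data quality checks over a hypothetical table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-demo").getOrCreate()
df = spark.read.table("curated.positions")  # hypothetical dataset

total = df.count()
checks = {
    # Completeness: key fields must not be null.
    "null_account_id": df.filter(F.col("account_id").isNull()).count(),
    # Uniqueness: no duplicate business keys.
    "duplicate_keys": total - df.dropDuplicates(["account_id", "as_of_date"]).count(),
    # Validity: quantities must be non-negative.
    "negative_quantity": df.filter(F.col("quantity") < 0).count(),
}

for name, failures in checks.items():
    status = "PASS" if failures == 0 else f"FAIL ({failures} rows)"
    print(f"{name}: {status}")
```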

Posted 1 month ago

Apply

3 - 6 years

9 - 13 Lacs

Bengaluru

Work from Office


Location: Tower 02, Manyata Embassy Business Park, Racenahali & Nagawara Villages, Outer Ring Rd, Bangalore 540065
Time type: Full time
Posted on: 5 days ago
Job requisition ID: R0000388711

About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII:
At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Team Overview:
Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects, and product managers striving to make Target the most convenient, safe, and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation, and continuous learning. At Target, we are gearing up for exponential growth and continuously expanding our guest experience. To support this expansion, Data Engineering is building robust warehouses and enhancing existing datasets to meet business needs across the enterprise. We are looking for talented individuals who are passionate about innovative technology and data warehousing and are eager to contribute to data engineering.

Position Overview
- Assess client needs and convert business requirements into a business intelligence (BI) solutions roadmap relating to complex issues involving long-term or multi-work streams.
- Analyze technical issues and questions, identifying data needs and delivery mechanisms.
- Implement data structures using best practices in data modeling, ETL/ELT processes, Spark, Scala, SQL, database, and OLAP technologies.
- Manage the overall development cycle, driving best practices and ensuring development of high-quality code for common assets and framework components.
- Heavily contribute to and provide technical guidance for a team of high-caliber data engineers by developing test-driven solutions and BI applications that can be deployed quickly and in an automated fashion.
- Manage and execute against agile plans and set deadlines based on client, business, and technical requirements.
- Drive resolution of technology roadblocks, including code, infrastructure, build, deployment, and operations.
- Ensure all code adheres to development and security standards.

About you
- 4-year degree or equivalent experience.
- 5+ years of software development experience, preferably in data engineering/Hadoop development (Hive, Spark, etc.).
- Hands-on experience in object-oriented or functional programming such as Scala, Java, or Python.
- Knowledge of or experience with a variety of database technologies (Postgres, Cassandra, SQL Server).
- Knowledge of data integration design using API and streaming technologies (Kafka), as well as ETL and other data integration patterns.
- Experience with cloud platforms like Google Cloud, AWS, or Azure; hands-on experience with BigQuery is an added advantage.
- Good understanding of distributed storage (HDFS, Google Cloud Storage, Amazon S3) and processing (Spark, Google Dataproc, Amazon EMR, or Databricks).
- Experience with a CI/CD toolchain (Drone, Jenkins, Vela, Kubernetes) is a plus.
- Familiarity with data warehousing concepts and technologies; maintains technical knowledge within areas of expertise.
- Constant learner and team player who enjoys solving tech challenges with a global team.
- Hands-on experience in building complex data pipelines and flow optimizations.
- Able to understand the data, draw insights, make recommendations, and identify any data quality issues upfront.
- Experience with test-driven development and software test automation.
- Follows best coding practices and engineering guidelines as prescribed.
- Strong written and verbal communication skills, with the ability to present complex technical information clearly and concisely to a variety of audiences.
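
For illustration, a minimal sketch of the test-driven development practice the listing asks for: a pure PySpark transformation function with a pytest unit test. The function and column names are hypothetical, and running the test requires a local PySpark installation.

```python
# Minimal sketch of TDD for a data transformation: the logic lives in a
# pure function, and a pytest test verifies it on a tiny in-memory frame.
from pyspark.sql import SparkSession, functions as F

def add_revenue(df):
    """Derive revenue = price * quantity; this is the unit under test."""
    return df.withColumn("revenue", F.col("price") * F.col("quantity"))

def test_add_revenue():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([(2.0, 3)], ["price", "quantity"])
    out = add_revenue(df).collect()[0]
    assert out["revenue"] == 6.0
```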

Posted 1 month ago

Apply

5 - 10 years

5 - 9 Lacs

Pune

Work from Office


Project Role: Data Governance Practitioner
Project Role Description: Establish and enforce data governance policies to ensure the accuracy, integrity, and security of organizational data. Collaborate with key stakeholders to define data standards; facilitate effective data collection, storage, access, and usage; and drive data stewardship initiatives for comprehensive and effective data governance.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Governance Practitioner, you will establish and enforce data governance policies to ensure the accuracy, integrity, and security of organizational data. You will collaborate with key stakeholders to define data standards; facilitate effective data collection, storage, access, and usage; and drive data stewardship initiatives for comprehensive and effective data governance.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead data governance initiatives within the organization.
- Develop and implement data governance policies and procedures.
- Ensure compliance with data governance regulations and standards.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data governance principles.
- Experience in implementing data quality and data stewardship programs.
- Knowledge of data privacy regulations and compliance requirements.
- Experience in data management and data security practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

18 - 23 years

5 - 9 Lacs

Navi Mumbai

Work from Office


Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 18 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. You will play a crucial role in ensuring the smooth functioning of applications.

Roles & Responsibilities:
- Expected to be an SME with deep knowledge and experience.
- Should have influencing and advisory skills.
- Engage with multiple teams and be responsible for team decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Provide solutions to business-area problems.
- Lead and mentor junior professionals in the team.
- Collaborate with stakeholders to gather requirements.
- Conduct code reviews and ensure best practices are followed.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data architecture principles.
- Experience in designing and implementing data solutions.
- Knowledge of cloud platforms like AWS or Azure.
- Hands-on experience with ETL processes.
- Familiarity with data modeling and database design.

Additional Information:
- The candidate should have a minimum of 18 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Mumbai office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

3 - 8 years

9 - 13 Lacs

Hyderabad

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in the development and maintenance of the data platform components, contributing to the overall success of the project.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist with the data platform blueprint and design.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data platform components.
- Contribute to the overall success of the project.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply
