
6325 Databricks Jobs - Page 26

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Us: Celebal Technologies is a leading solutions and services company in the fields of Data Science, Big Data, Enterprise Cloud & Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.

Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities:
• Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming.
• Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
• Implement efficient ingestion using Databricks Autoloader for high-throughput data loads.
• Work with large volumes of structured and unstructured data, ensuring high availability and performance.
• Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization.
• Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
• Establish best practices for code versioning, deployment automation, and data governance.
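As a rough illustration of the Bronze/Silver/Gold layering named in the responsibilities, the sketch below uses plain Python in place of Spark/Delta; the record fields and cleaning rules are hypothetical, not Celebal's actual pipeline.

```python
# Minimal illustration of Medallion layering: Bronze keeps raw events as
# received, Silver cleans and deduplicates them, Gold aggregates for BI.
# Plain Python stands in for Spark/Delta; field names are hypothetical.

bronze = [  # raw ingested events, duplicates and bad rows included
    {"id": "1", "amount": "10.5", "city": "Mumbai"},
    {"id": "1", "amount": "10.5", "city": "Mumbai"},   # duplicate
    {"id": "2", "amount": "bad", "city": "Pune"},      # malformed
    {"id": "3", "amount": "4.0", "city": "Mumbai"},
]

def to_silver(rows):
    """Clean and deduplicate: parse types, drop malformed rows, dedupe on id."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine malformed records
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        out.append({"id": r["id"], "amount": amount, "city": r["city"]})
    return out

def to_gold(rows):
    """Aggregate: total amount per city, ready for reporting."""
    totals = {}
    for r in rows:
        totals[r["city"]] = totals.get(r["city"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'Mumbai': 14.5}
```

In Databricks the same shape is typically three Delta tables, with Autoloader feeding Bronze and Structured Streaming jobs promoting data between layers.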
Required Technical Skills:
• Strong expertise in Azure Databricks and Spark Structured Streaming
• Processing modes (append, update, complete)
• Output modes (append, complete, update)
• Checkpointing and state management
• Experience with Kafka integration for real-time data pipelines
• Deep understanding of Medallion Architecture
• Proficiency with Databricks Autoloader and schema evolution
• Deep understanding of Unity Catalog and foreign catalogs
• Strong knowledge of Spark SQL, Delta Lake, and DataFrames
• Expertise in performance tuning (query optimization, cluster configuration, caching strategies)
• Solid data management strategies
• Strong governance and access management skills
• Strong data modelling and data warehousing concepts; Databricks as a platform
• Solid understanding of window functions

Proven experience in:
• Merge/upsert logic
• Implementing SCD Type 1 and Type 2
• Handling CDC (Change Data Capture) scenarios
• Industry expertise in at least one of Retail, Telecom, or Energy
• Real-time use case execution
• Data modelling
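Since merge/upsert logic and SCD Type 2 are called out explicitly, here is a minimal sketch of the bookkeeping SCD Type 2 involves, with plain Python standing in for the Delta MERGE a Databricks implementation would use; the `city` attribute and date columns are illustrative.

```python
from datetime import date

# Sketch of SCD Type 2 upsert logic: instead of overwriting a changed row,
# close out the current version and insert a new one, preserving history.
# In Databricks this is typically a Delta MERGE; plain Python shown here.

def scd2_upsert(dim, updates, today):
    """dim: list of dicts with key, attributes, valid_from, valid_to, is_current."""
    current = {r["key"]: r for r in dim if r["is_current"]}
    for u in updates:
        old = current.get(u["key"])
        if old is None:                    # brand-new key: plain insert
            dim.append({**u, "valid_from": today, "valid_to": None,
                        "is_current": True})
        elif old["city"] != u["city"]:     # changed attribute: expire + insert
            old["valid_to"] = today
            old["is_current"] = False
            dim.append({**u, "valid_from": today, "valid_to": None,
                        "is_current": True})
        # unchanged rows are left untouched
    return dim

dim = [{"key": 1, "city": "Mumbai", "valid_from": date(2024, 1, 1),
        "valid_to": None, "is_current": True}]
dim = scd2_upsert(dim, [{"key": 1, "city": "Pune"}], date(2025, 1, 1))
print([(r["city"], r["is_current"]) for r in dim])
# [('Mumbai', False), ('Pune', True)]
```

SCD Type 1, by contrast, would simply overwrite `city` in place, losing the Mumbai history.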

Posted 1 week ago

Apply

10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

We have been retained by a global client, an AI/ML platform designed for data science teams to build, deploy, and manage AI solutions. The client is looking for someone who can express passion about new technologies and the possibilities of Data and Advanced Analytics to enterprise customers across a range of industries, backed by an extensive network in India. This role consists of engaging prospects and customers about their related initiatives to help them develop a more efficient approach leveraging the client's platform. A strong sales track record in India is preferred.

Enterprise Sales Leadership: Looking for someone who has led large, consultative sales cycles in AI/ML or cloud platforms, ideally with experience selling to CIOs, CTOs, and Chief Data Officers in BFSI and Manufacturing.

C-Suite Fluency: The ideal candidate should be able to speak the language of business outcomes, not just tech. Think: ROI on AI investments, operational efficiency, regulatory compliance, and digital transformation narratives.

How you'll make an impact: Meet/exceed sales revenue quota and renewal goals. Work extensively with channel partners and RSIs to sell with/through them and drive customer success. Build and execute a territory sales plan for the assigned sales territory in India. Drive customer engagement and build a joint pipeline of sales opportunities. Ensure the development of joint sales and technical skills within the partner ecosystem to drive demand and customer success. Oversee the development and delivery of joint marketing campaigns and field engagement with partners and the company's field teams to generate pipeline. Align internal stakeholders across the business in Sales, Marketing, Services, Customer Success, and Product. Build executive relationships with the leadership of key customers and facilitate customer engagement. Collaborate with cross-functional company leadership to scale the ecosystem of partnerships.
Represent the company at industry events in a manner that reflects and upholds its core values.

What you'll need to be successful: A strong sales track record in India. Strong experience in successfully building and scaling a channel reseller program and network in the region that contributes significantly to sales growth. Extensive experience working with local partners. Domain expertise in modern AI/ML and cloud computing technologies. Fluency in what's relevant to the C-suite today. Connections within our target technology, system integrator, and consulting partners, which include Snowflake, Databricks, AWS, Google, Microsoft, Accenture, Deloitte, PwC, KPMG, Capgemini, DXC, Tech Mahindra, Wipro, and others; know their businesses and the people who drive them. 10+ years of track record in software sales within the Data/AI industry, technology/cloud computing, SI, and consulting companies in India. A collaborative, connecting, and influential leadership style that drives results; able to align stakeholders and bring people along on the journey to achieve the desired objectives and results for the region.

Interested to know more? Please share your number or CV here or at swati.singh@gateway-search.com for a detailed discussion.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Sr. AWS Data Engineer
Years of experience: 5-10 years (with a minimum of 5 years of relevant experience)
Work mode: WFO - Chennai (mandatory)
Type: Permanent
Key skills: Python, SQL, PySpark, AWS, Databricks, Data Modelling

Essential Skills / Experience: 4 to 6 years of professional experience in Data Engineering or a related field. Strong programming experience with Python, including data wrangling, pipeline automation, and scripting. Deep expertise in writing complex and optimized SQL queries on large-scale datasets. Solid hands-on experience with PySpark and distributed data processing frameworks. Expertise working with Databricks for developing and orchestrating data pipelines. Experience with AWS cloud services such as S3, Glue, EMR, Athena, Redshift, and Lambda. Practical understanding of ETL/ELT development patterns and data modeling principles (Star/Snowflake schemas). Experience with job orchestration tools like Airflow, Databricks Jobs, or AWS Step Functions. Understanding of data lake, lakehouse, and data warehouse architectures. Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions). Strong troubleshooting and performance optimization skills in large-scale data processing environments. Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams.

Desirable Skills / Experience: AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional). Exposure to data observability, monitoring, and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch). Experience working in healthcare, life sciences, finance, or another regulated industry. Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.). Knowledge of modern data architectures (Data Mesh, Data Fabric). Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming. Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
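To make the Star-schema modeling point above concrete, here is a minimal sketch using SQLite in place of Redshift or Databricks SQL; the table and column names are invented for illustration.

```python
import sqlite3

# Minimal star schema: one fact table keyed to a dimension table, queried
# with a join + aggregate. SQLite stands in for Redshift/Databricks SQL;
# all table and column names are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_store (store_id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE fact_sales (
        sale_id  INTEGER PRIMARY KEY,
        store_id INTEGER REFERENCES dim_store(store_id),
        amount   REAL
    );
    INSERT INTO dim_store VALUES (1, 'Chennai'), (2, 'Mumbai');
    INSERT INTO fact_sales VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
""")
rows = con.execute("""
    SELECT d.city, SUM(f.amount) AS revenue
    FROM fact_sales f JOIN dim_store d USING (store_id)
    GROUP BY d.city ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('Chennai', 150.0), ('Mumbai', 75.0)]
```

A Snowflake schema would further normalize `dim_store` (e.g., a separate city dimension); the join-and-aggregate query pattern stays the same.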

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Why Join 7-Eleven Global Solution Center? When you join us, you'll embrace ownership as teams within specific product areas take responsibility for end-to-end solution delivery, supporting local teams and integrating new digital assets. Challenge yourself by contributing to products deployed across our extensive network of convenience stores, processing over a billion transactions annually. Build solutions for scale, addressing the diverse needs of our 84,000+ stores in 19 countries. Experience growth through cross-functional learning, encouraged and applauded at 7-Eleven GSC. With our size, stability, and resources, you can navigate a rewarding career. Embody leadership and service as 7-Eleven GSC remains dedicated to meeting the needs of customers and communities.

Why We Exist, Our Purpose and Our Transformation: 7-Eleven is dedicated to being a customer-centric, digitally empowered organization that seamlessly integrates our physical stores with digital offerings. Our goal is to redefine convenience by consistently providing top-notch customer experiences and solutions in a rapidly evolving consumer landscape. Anticipating customer preferences, we create and implement platforms that empower customers to shop, pay, and access products and services according to their preferences. To achieve success, we are driving a cultural shift anchored in leadership principles, supported by the realignment of organizational resources and processes.

At 7-Eleven we are guided by our Leadership Principles. Each principle has a defined set of behaviours which help guide the 7-Eleven GSC team to Serve Customers and Support Stores: Be Customer Obsessed; Be Courageous with Your Point of View; Challenge the Status Quo; Act Like an Entrepreneur; Have an "It Can Be Done" Attitude; Do the Right Thing; Be Accountable.

About This Opportunity: We are seeking a highly skilled senior AI/ML engineer to design, implement and deploy AI/ML solutions that drive innovation and efficiency.
The ideal candidate will have extensive experience in LangChain, NLP, RAG-based systems, prompt engineering, agentic systems, and cloud platforms (Azure, AWS), and will be adept at building AI-driven applications.

Responsibilities: Design and implement AI-driven solutions using advanced frameworks and technologies, ensuring scalability and efficiency. Develop and optimize LangChain (agents, chains, memories, parsers, document loaders), Gen AI, and NLP models for specific use cases. Quickly experiment with different machine learning models for specific use cases. Strong problem-solving capabilities and the ability to quickly propose feasible solutions and effectively communicate strategy and risk mitigation approaches to leadership.

Required Qualifications: 3-5 years of experience in AI/ML engineering, with exposure to both classical machine learning methods and language model-based applications. Must have experience with Azure cloud and Databricks setup. Proficiency in Python and machine learning frameworks like TensorFlow, PyTorch, and scikit-learn. Strong understanding of machine learning algorithms, including supervised and unsupervised learning, reinforcement learning, and deep learning. Strong expertise in Generative AI, NLP, and conversational AI technologies. Experience in building and deploying AI-powered applications at scale. Expertise in working with structured and unstructured data, including data cleaning and feature engineering, with data stores like vector, relational, and NoSQL databases and data lakes accessed through APIs. Model evaluation and metrics: proficiency in evaluating both classical ML models and LLMs using relevant metrics. Excellent written and verbal communication skills. Eagerness to explore and implement the latest advancements in LLMs and ML, integrating them with existing solutions and enhancing their capabilities. Ability to understand business requirements and translate them into technical requirements.
Educational Background: Bachelor's or master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Familiarity with code versioning tools - Git (GitLab). Exposure to the retail industry and experience with e-commerce applications.

7-Eleven Global Solution Center is an Equal Opportunity Employer committed to diversity in the workplace. Our strategy focuses on three core pillars: workplace culture, diverse talent and how we show up in the communities we serve. As the recognized leader in convenience, the 7-Eleven family of brands embraces diversity, equity and inclusion (DE+I). It's not only the right thing to do for customers, Franchisees and employees; it's a business imperative.

Privileges & Perquisites: 7-Eleven Global Solution Center offers a comprehensive benefits plan tailored to meet the needs and improve the overall experience of our employees, aiding in the management of both their professional and personal aspects.
Work-Life Balance: Encouraging employees to unwind, recharge, and find balance, we offer flexible and hybrid work schedules along with diverse leave options. Supplementary allowances and compensatory days off are provided for specific work demands.
Well-Being & Family Protection: Comprehensive medical coverage for spouses, children, and parents/in-laws, with voluntary top-up plans, OPD coverage, day care services, and access to health coaches. Additionally, an Employee Assistance Program with free, unbiased and confidential expert consultations for personal and professional issues.
Wheels and Meals: Free transportation and cafeteria facilities with diverse menu options including breakfast, lunch, snacks, and beverages, with customizable and health-conscious choices.
Certification & Training Program: Sponsored training for specialized certifications, and investment in employee development through labs and learning platforms.
Hassle-free Relocation: Support and reimbursement for newly hired employees relocating to Bangalore, India.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Finance
Job Family Group: Business Support Group

Job Description:
About us: At bp, we're reimagining energy for people and our planet. With operations working across almost every part of the energy system, we're leading the way in reducing carbon emissions and developing more sustainable methods for solving the energy challenge. We're a team with multi-layered strengths of engineers, scientists, traders and business professionals determined to find answers to problems. And we know we can't do it alone. We're looking for people who share our passion for reinvention, who bring fresh ideas and ambition, and who challenge our thinking in our goal to achieve net zero! We believe our portfolio of businesses and investments in growth and transformation will result in a company with the scale, brand, capabilities, talent, and values to succeed as the digital revolution transforms our society, our industry and our planet.

Key Accountabilities
Data Quality / Modelling / Design Thinking: Drawing on SAP MDG/ECC experience, the candidate is able to investigate and perform root cause analysis for assigned use cases, and to work with Azure Data Lake (via Databricks) using SQL/Python. A data model (conceptual and physical) will need to be identified and built that provides an automated mechanism to supervise ongoing DQ issues. Multiple workshops may also be needed to work through various options and identify the one that is most efficient and effective. Works with the business (Data Owners/Data Stewards) to profile data, exposing patterns that indicate data quality issues, and identifies the impact on specific CDEs deemed important by each individual business. Identifies the financial impact of data quality issues, and identifies the business benefit (quantitative/qualitative) of remediation along with leading implementation timelines.
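The profiling step described above (exposing patterns that indicate data quality issues in critical data elements) reduces to a small amount of code. A minimal sketch follows; the column names and the 95% completeness threshold are invented for illustration, and a real implementation would run as SQL/Python over the data lake.

```python
# Sketch of column-level data quality profiling: compute completeness per
# critical data element (CDE) and flag those below a threshold for root
# cause analysis. Field names and the 95% threshold are assumptions.

def profile_completeness(rows, cdes):
    """Return {column: fraction of non-null, non-blank values}."""
    total = len(rows)
    return {
        c: sum(1 for r in rows if r.get(c) not in (None, "")) / total
        for c in cdes
    }

def flag_dq_issues(rows, cdes, threshold=0.95):
    """Keep only the CDEs whose completeness falls below the threshold."""
    scores = profile_completeness(rows, cdes)
    return {c: s for c, s in scores.items() if s < threshold}

rows = [
    {"vendor_id": "V1", "bank_account": "123", "country": "IN"},
    {"vendor_id": "V2", "bank_account": "",    "country": "IN"},
    {"vendor_id": "V3", "bank_account": "456", "country": None},
    {"vendor_id": "V4", "bank_account": "789", "country": "IN"},
]
print(flag_dq_issues(rows, ["vendor_id", "bank_account", "country"]))
# {'bank_account': 0.75, 'country': 0.75}
```

Completeness is only one dimension; the same pattern extends to validity, uniqueness, and conformity rules, whose scores then feed the Power BI dashboards and workflows described below.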
Schedules regular working groups with businesses that have identified DQ issues and ensures progression of RCA/remediation, or presents them in DGFs. Identifies business DQ rules on the basis of which critical metrics/measures are stood up that feed into the dashboarding/workflows for BAU monitoring; red flags are raised and investigated. An understanding of the data quality value chain is needed, starting with Critical Data Element concepts, data quality issues, and data quality metrics/measures, along with experience owning and completing data quality issue assessments to aid improvements to operational processes and BAU initiatives. Highlights risks and hidden DQ issues to the Lead/Manager for further guidance/escalation. Communication skills are significant in this role, as it is outward-facing and the focus has to be on clearly articulating messages.

Dashboarding & Workflow: Builds and maintains effective analytics and escalation mechanisms which detect poor data and help business lines drive resolution. Supports the design, build and deployment of data quality dashboards via Power BI. Resolves critical escalation paths and constructs workflows and alerts which advise process and data owners of unresolved data quality issues. Collaborates with IT & analytics teams to drive innovation (AI, ML, cognitive science etc.).

DQ Improvement Plans: Creates, embeds and drives business ownership of DQ improvement plans. Works with business functions and projects to create data quality improvement plans. Sets targets for data improvements/maturity, and monitors and intervenes when sufficient progress is not being made. Supports initiatives which are driving data clean-up of the existing data landscape.

Project Delivery: Oversees and advises Data Quality Analysts and participates in delivery of data quality activities including profiling, establishing conversion criteria and resolving technical and business DQ issues. Owns and develops relevant data quality work products as part of the DAS data change methodology. Ensures data quality aspects are delivered as part of Gold and Silver data-related change projects. Supports the creation of business cases with insight into the cost of poor data.

Essential Experience and Job Requirements: 11-15 total years of experience in the Oil & Gas or Financial Services/Banking industry within the Data Management space. Experience of working with data models/structures and investigating to design and fine-tune them. Experience of Data Quality Management, i.e. governance, DQI management (root cause analysis, remediation/solution identification), governance forums (paper production, quorum maintenance, minutes publication), CDE identification, and data lineage (identification of authoritative data sources) preferred; an understanding of the metrics/measures needed as well. Experience of having worked with senior partners across multiple data domains/business areas, the CDO, and Technology. Ability to operate in global teams across multiple time zones. Ability to operate in an evolving and changing setup, to identify priorities, and to operate independently without much direction.

Desirable criteria: SAP MDG/SAP ECC experience (T-codes, table structures etc.). Azure Data Lake/AWS/Databricks. Crafting dashboards & workflows (Power BI, QlikView or Tableau etc.). Crafting analytics and insight in a DQ setting (Power BI/Power Query). Profiling and analysis skills (SAP DI, Informatica or Collibra). Persuading, influencing and communicating at a senior management level. Certification in Data Management, Data Science, or Python/R desirable.

Travel Requirement: No travel is expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Primary Skill: Azure Databricks

A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Knowledge of more than one technology. Basics of architecture and design fundamentals. Knowledge of testing tools. Knowledge of agile methodologies. Understanding of project life cycle activities on development and maintenance projects. Understanding of one or more estimation methodologies and knowledge of quality processes. Basics of the business domain, to understand the business requirements. Analytical abilities, strong technical skills, good communication skills. Good understanding of the technology and domain. Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods. Awareness of the latest technologies and trends. Excellent problem solving, analytical and debugging skills.

Posted 1 week ago

Apply

7.0 - 12.0 years

0 Lacs

Telangana

On-site

About Chubb: Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.

About Chubb India: At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

Job Title: Lead Data Engineer
Location: Hyderabad
Job Type: Full-Time
Position: Lead Data Engineer

Job Overview: This is a technical, hands-on role in the implementation of data solutions and related downstream systems.
The role involves working within a Data Management team towards establishing data reporting pipelines and automation using a combination of Python scripts, Databricks, and APIs, along with the ability to model a smart solution.

Primary Responsibilities: Collaborate with the Architect, Product Manager, and other development teams to build the data pipeline. Advise the product manager on the right use of technological components. Work with stakeholders, including management and domain leads, to address data-related technical issues and support data infrastructure needs. Develop and program database management solutions, which may include ETL processes, data modeling, and infrastructure, including using and developing APIs, front-end applications and automated data pipelines. Ensure enterprise data policies, best practices, standards, and processes are followed. Communicate, coordinate, and collaborate effectively with business, IT architecture, and data teams across multi-functional areas to complete deliverables.

Technical Skills / Experience: 7 to 12 years of proven working experience in building data pipelines using scripts and ETL tools, and performing data integration and data migration. Primary skill: strong hands-on development skills in Python, Databricks, and API knowledge. Secondary skill: working knowledge of database systems such as Azure SQL DB, Azure Synapse and Snowflake. Secondary skill: experience working in a DevSecOps environment. Good to have: knowledge of data governance tools and integration with enterprise data platforms. Good to have: experience with data management components such as IDMC/IICS is a plus.

Other Skills / Experience: Experience being part of high-performance agile teams in a fast-paced environment. Strong team emphasis and relationship-building skills; partners well with business and other IT/Data areas. Excellent communication and negotiation skills.
Up to date with the latest developments around AI and development frameworks and technologies, and able to actively participate in innovation initiatives. Ability to organize, plan, and implement work assignments and juggle competing demands.

Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence.
A Great Place to Work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026.
Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness, where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results.
Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter.
Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment.

Employee Benefits: Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment.

Our benefits include:
Savings and investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances.
Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, like Education Reimbursement Programs, Certification programs and access to global learning programs.
Health and welfare benefits: We care about our employees' well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns and comprehensive insurance benefits.

Application Process: Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments and technical/functional interviews (if applicable).
Step 4: Final interaction with Chubb leadership.

Join Us: With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.
Apply Now: Chubb External Careers
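The scripted ETL pipelines the Lead Data Engineer role above centres on follow a simple extract-transform-load shape: pull raw records from a source, clean and type them, and write them to a target. A minimal sketch; the CSV layout and column names are hypothetical, and a list stands in for the database or API target.

```python
import csv
import io

# Minimal extract-transform-load sketch of a scripted pipeline: read raw
# CSV, normalize and filter records, and load the cleaned rows. Source
# format and column names are hypothetical.

RAW = """policy_id,premium,status
P-001,1200.50,ACTIVE
P-002,,ACTIVE
P-003,890.00,lapsed
"""

def extract(text):
    """Parse the raw feed into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop incomplete records, coerce types, normalize casing."""
    out = []
    for r in rows:
        if not r["premium"]:            # drop incomplete records
            continue
        out.append({"policy_id": r["policy_id"],
                    "premium": float(r["premium"]),
                    "status": r["status"].upper()})
    return out

def load(rows, target):
    """Append to the target; a stand-in for a DB insert or API write."""
    target.extend(rows)
    return len(rows)

warehouse = []
print(load(transform(extract(RAW)), warehouse))  # 2
```

In the role itself, the same three stages would typically be Databricks notebooks or jobs, with the load step targeting Azure SQL DB, Synapse, or Snowflake.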

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Telangana

On-site

Role: ML Engineer (Associate / Senior)
Experience: 2-3 years (Associate); 4-5 years (Senior)
Mandatory skills: Python, MLOps, Docker and Kubernetes, FastAPI or Flask, CI/CD, Jenkins, Spark, SQL, RDB, Cosmos, Kafka, ADLS, API, Databricks
Other skills: Azure, LLMOps, ADF, ETL
Location: Bangalore
Notice period: less than 60 days

Job Description: We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges. You will collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions. Experience of having deployed ML models to production is required. You will create high-performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders, integrate machine learning models seamlessly into existing production systems, continuously monitor and evaluate model performance and retrain models automatically or periodically, streamline existing ML pipelines to increase throughput, identify and address security vulnerabilities in existing applications proactively, and design, develop, and implement machine learning models, preferably for insurance-related applications. You should be well versed with the Azure ecosystem. Knowledge of NLP and Generative AI techniques, and relevant experience, will be a plus, as will knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch). Stay up to date on the latest advancements in machine learning and contribute to ongoing innovation within the team.

Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence A Great Place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026 Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results Start-Up Culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment Employee Benefits Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling like Education Reimbursement Programs, Certification programs and access to global learning programs. 
Health and Welfare Benefits: We care about our employees’ well-being in and out of work and have benefits like Employee Assistance Program (EAP), Yearly Free Health campaigns and comprehensive Insurance benefits. Application Process Our recruitment process is designed to be transparent and inclusive. Step 1: Submit your application via the Chubb Careers Portal. Step 2: Engage with our recruitment team for an initial discussion. Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable). Step 4: Final interaction with Chubb leadership. Join Us With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India’s journey. Apply Now: Chubb External Careers

Posted 1 week ago

Apply

4.0 years

7 - 10 Lacs

Hyderābād

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. 
Job Description & Summary: The primary purpose of this role is to translate business requirements and functional specifications into logical program designs and to deliver dashboards, schema, data pipelines, and software solutions. Design, develop, and maintain scalable data pipelines to process and transform large volumes of structured and unstructured data. Build and maintain ETL/ELT workflows for data ingestion from various sources (APIs, databases, files, cloud). Ensure data quality, integrity, and governance across the pipeline. This includes developing, configuring, or modifying data components within various complex business and/or enterprise application solutions in various computing environments. Responsibilities: We are currently seeking a Senior Data Engineer who can perform data integration to build custom data pipelines and manage data transformation, performance optimization, automation, data governance, and data quality. Mandatory skill sets: ‘Must have’ knowledge, skills and experiences · GCP – Dataproc (PySpark, Spark SQL), Dataflow (Apache Beam), Cloud Composer, BigQuery, API Management Preferred skill sets: ‘Good to have’ knowledge, skills and experiences · Experience in building data pipelines · Experience with software lifecycle tools for CI/CD and version control systems such as Git · Familiarity with Agile methodologies is a plus. 
Years of experience required: Experience and Qualifications · Experience - 4 Years to 12 years · NP - Immediate to 30 days · Location - Hyderabad · 3 days / week work from client office Education qualification: · BE, B.Tech, ME, M.Tech, MBA, MCA (60% or above) Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Business Administration Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Data Engineering Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
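For context, the extract-transform-load flow at the heart of this role can be sketched in a few lines of tool-agnostic Python (a toy example with made-up data; at scale the same steps would run on Dataproc/PySpark or Dataflow/Apache Beam):

```python
# Toy ETL: extract raw rows, transform (standardize + filter), load (aggregate).
raw_rows = [
    {"region": " south ", "revenue": "1200"},
    {"region": "North",   "revenue": "800"},
    {"region": "south",   "revenue": "300"},
    {"region": "north",   "revenue": ""},   # bad record, dropped by a quality rule
]

def transform(row):
    """Standardize a raw row; return None for rows failing data-quality rules."""
    region = row["region"].strip().lower()
    revenue = row["revenue"].strip()
    if not revenue:
        return None                          # quality rule: revenue must be present
    return {"region": region, "revenue": int(revenue)}

clean = [r for r in (transform(row) for row in raw_rows) if r is not None]

# "Load": aggregate revenue per standardized region.
totals = {}
for r in clean:
    totals[r["region"]] = totals.get(r["region"], 0) + r["revenue"]
```

The same standardize-filter-aggregate shape maps directly onto Beam's `ParDo`/`Filter`/`CombinePerKey` or Spark's `map`/`filter`/`groupBy`.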

Posted 1 week ago

Apply

10.0 years

1 - 10 Lacs

Hyderābād

On-site

If you are looking for a game-changing career, working for one of the world's leading financial institutions, you’ve come to the right place. As a Principal Software Engineer at JP Morgan Chase within the Consumer & Community Banking Technology Team, you provide expertise and engineering excellence as an integral part of an agile team to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. Leverage your advanced technical capabilities and collaborate with colleagues across the organization to drive best-in-class outcomes across various technologies to support one or more of the firm’s portfolios. Job responsibilities Creates complex and scalable coding frameworks using appropriate software design frameworks Develops secure and high-quality production code, and reviews and debugs code written by others Advises cross-functional teams on technological matters within your domain of expertise Serves as the function’s go-to subject matter expert Contributes to the development of technical methods in specialized fields in line with the latest product development methodologies Creates durable, reusable software frameworks that are leveraged across teams and functions Influences leaders and senior stakeholders across business, product, and technology teams Champions the firm’s culture of diversity, opportunity, inclusion, and respect Required qualifications, capabilities, and skills Formal training or certification on data management concepts and 10+ years applied experience. Experience leading technologists to manage, anticipate and solve complex technical items within your domain of expertise. 
Proven experience in designing and developing large scale data pipelines for batch & stream processing Strong understanding of Data Warehousing, Data Lake, ETL processes and Big Data technologies (e.g., Hadoop, Snowflake, Databricks, Apache Spark, PySpark, Airflow, Apache Kafka, Java, open file & table formats, Git, CI/CD pipelines, etc.) Expertise with public cloud platforms (e.g., AWS, Azure, GCP) and modern data processing & engineering tools Excellent communication, presentation, and interpersonal skills Experience developing or leading large or cross-functional teams of technologists Demonstrated prior experience influencing across highly matrixed, complex organizations and delivering value at scale Experience leading complex projects supporting system design, testing, and operational stability Experience with hiring, developing, and recognizing talent Extensive practical cloud native experience Expertise in Computer Science, Computer Engineering, Mathematics, or a related technical field Preferred qualifications, capabilities, and skills Experience working at code level and ability to be hands-on performing PoCs, code reviews Experience in Data Modeling (ability to design Conceptual, Logical and Physical Models, ERDs and proficiency in data modeling software like ERwin) Experience with Data Governance, Data Privacy & Subject Rights, Data Quality & Data Security practices Strong understanding of Data Validation / Data Quality Experience with supporting large scale AI/ML Data requirements Experience in Data visualization & BI tools is a huge plus
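As a small illustration of the data validation / data quality practices listed above, a null-rate profile over incoming records might look like the following (a toy sketch, not any particular firm's tooling):

```python
def profile_nulls(rows, columns):
    """Return the fraction of missing (None or empty-string) values per column,
    a basic data-quality metric computed before loading into a warehouse."""
    counts = {c: 0 for c in columns}
    for row in rows:
        for c in columns:
            v = row.get(c)
            if v is None or v == "":
                counts[c] += 1
    n = len(rows) or 1
    return {c: counts[c] / n for c in columns}

rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": ""},      # empty value
    {"id": 3},                   # column missing entirely
]
report = profile_nulls(rows, ["id", "email"])
```

A pipeline would typically compare such a report against agreed thresholds and fail the load, or quarantine the batch, when a column's null rate is out of bounds.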

Posted 1 week ago

Apply

0 years

4 - 8 Lacs

Hyderābād

On-site

ABOUT FLUTTER ENTERTAINMENT Flutter Entertainment is the world’s largest sports betting and iGaming operator with 13.9 million Average Monthly Players worldwide and an annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME’s 100 Most Influential Companies under the 'Pioneers' category—a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game. Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed. FLUTTER ENTERTAINMENT INDIA Our Hyderabad office, located in one of India’s premier technology parks is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 900+ talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we’re dedicated to creating a brighter future for our customers, colleagues, and communities. ROLE PURPOSE: At Flutter, we are embarking on an ambitious global finance transformation programme throughout 2025, 2026 and 2027. The Technology Enablement and Automation Manager will be responsible for delivering elements of the ICFR pillar of the global finance transformation programme. 
The Technology Enablement and Automation Transformation Manager will report directly, or indirectly, to the Head of Technology Enablement and Automation Transformation. Flutter consists of two commercial divisions (FanDuel and International) and our central Flutter Functions; COO, Finance & Legal. Here in Flutter Functions we work with colleagues across all our divisions and regions to deliver something we call the Flutter Edge. It’s our competitive advantage, our ‘secret sauce’ which plays a key part in our ongoing success and powers our brands and divisions, through Product, Tech, Expertise and Scale. In Flutter Finance we pride ourselves on providing global expertise to ensure Flutter is financially healthy, utilizing our Flutter Edge to turbo-charge our capabilities. KEY RESPONSIBILITIES Design, develop, launch and maintain custom technical solutions including workflow automations, reporting pipelines / dashboards and cloud systems integrations, focused on improving and enabling Flutter’s Internal Controls over Financial Reporting (ICFR) annual cycle Bring your technical know-how to continuously improve Finance and IT processes and controls (for example, balance sheet reconciliations, GRC tool enablement and analytical reviews). Prepare and maintain high quality documentation related to your automation and reporting deliverables. Contribute to robust technical delivery processes for the ICFR Transformation Technology Enablement & Automation team. Collaborate closely with Internal Controls Transformation, Internal Controls Assurance teams and with colleagues across Finance and IT (Group and Divisional teams) to ensure seamless delivery of the technical solutions, automations and reporting that you own. Contribute to regular status reporting to senior leaders, highlighting potential challenges and opportunities for improvement. TO EXCEL IN THIS ROLE, YOU WILL NEED TO HAVE Passion for technical solution delivery, and for learning new technologies. 
Strong technology architecture, design, development, deployment and maintenance skills. Demonstrable coding experience launching workflow automations and reporting solutions using SQL and Python (or equivalent programming languages) with measurable business impact Proficiency with databases, data pipelines, data cleansing and data visualization / business intelligence (including ETL) - using tools such as KNIME, Pentaho, Alteryx, Power Automate, Databricks, Tableau or PowerBI (or equivalent) Hands-on technical experience and confidence in implementing at least one of: System integrations - ideally across both on-premises and cloud-based applications, (including Application Integration Patterns and Microservices orchestration) Robotic process automation - such as Alteryx, UIPath, BluePrism (or equivalent) Low-code application development - such as Retool (or equivalent) Business process orchestration / business process management - such as Appian, Pega, Signavio, Camunda (or equivalent) Business process mining and continuous controls monitoring - such as Celonis, Soroco or Anecdotes (or equivalent) Ability to operate in a fast-paced environment and successfully deliver technical change. Strong communication skills, clearly articulating technical challenges and potential solutions. It will be advantageous, but not essential to have one or more of: Experience improving processes focused on reducing risk (e.g. ICFR / internal controls / audit / risk and compliance). Experience of betting, gaming or online entertainment businesses. Experience bringing Artificial Intelligence (AI) solutions to improve enterprise business processes. Knowledge of Oracle ERP (e.g. Oracle Fusion and Oracle Governance, Risk and Compliance modules). Knowledge of Governance, Risk and Compliance systems. BENEFITS WE OFFER Access to Learnerbly, Udemy , and a Self-Development Fund for upskilling. Career growth through Internal Mobility Programs . 
Comprehensive Health Insurance for you and dependents. Well-Being Fund and 24/7 Assistance Program for holistic wellness. Hybrid Model : 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals. Free Meals, Cab Allowance , and a Home Office Setup Allowance. Employer PF Contribution , gratuity, Personal Accident & Life Insurance. Sharesave Plan to purchase discounted company shares. Volunteering Leave and Team Events to build connections. Recognition through the Kudos Platform and Referral Rewards . WHY CHOOSE US: Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
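The balance sheet reconciliation automation this role describes can be illustrated with a minimal Python sketch (the account names and tolerance are invented for the example; a real implementation would pull balances from the ERP and a GRC tool):

```python
def reconcile(gl_balances, subledger_balances, tolerance=0.01):
    """Compare general-ledger vs sub-ledger balances per account and
    return the accounts whose difference exceeds `tolerance`."""
    exceptions = {}
    for account in set(gl_balances) | set(subledger_balances):
        gl = gl_balances.get(account, 0.0)
        sl = subledger_balances.get(account, 0.0)
        if abs(gl - sl) > tolerance:
            exceptions[account] = round(gl - sl, 2)
    return exceptions

gl = {"1000-cash": 5000.00, "2000-ap": -1250.00}
sl = {"1000-cash": 5000.00, "2000-ap": -1100.00}
# Only the accounts-payable balance fails to reconcile.
breaks = reconcile(gl, sl)
```

The output of a check like this is what would feed the exception dashboards and controls evidence mentioned in the responsibilities.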

Posted 1 week ago

Apply

2.0 years

3 - 6 Lacs

Hyderābād

On-site

Become our next FutureStarter Are you ready to make an impact? ZF is looking for talented individuals to join our team. As a FutureStarter, you’ll have the opportunity to shape the future of mobility. Join us and be part of something extraordinary! Technical Lead- AI/ML Expert Country/Region: IN Location: Hyderabad, TG, IN, 500032 Req ID 81032 | Hyderabad, India, ZF India Pvt. Ltd. Job Description About the team: AIML is used to create chatbots, virtual assistants, and other forms of artificial intelligence software. AIML is also used in research and development of natural language processing systems. What you can look forward to as an AI/ML expert: Lead Development: Own end-to-end design, implementation, deployment and maintenance of both traditional ML and Generative AI solutions (e.g., fine-tuning LLMs, RAG pipelines) Project Execution & Delivery: Translate business requirements into data-driven and GenAI-driven use cases; scope features, estimates, and timelines Technical Leadership & Mentorship: Mentor, review and coach junior/mid-level engineers on best practices in ML, MLOps and GenAI Programming & Frameworks: Expert in Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow) Cloud & MLOps: Deep experience with Azure Machine Learning (SDK, Pipelines, Model Registry, hosting GenAI endpoints) Proficient in Azure Databricks: Spark jobs, Delta Lake, MLflow for tracking both ML and GenAI experiments Data & GenAI Engineering: Strong background in building ETL/ELT pipelines, data modeling, orchestration (Azure Data Factory, Databricks Jobs) Experience with embedding stores, vector databases, prompt optimization, and cost/performance tuning for large GenAI models Your profile as a Technical Lead: Bachelor’s, Master’s, or Ph.D. in Computer Science, Engineering, Data Science, or a related field Minimum of 2 years of professional experience in AI/ML engineering, including at least 2 years of hands-on Generative AI project delivery Track record of production deployments 
using Python, Azure ML, Databricks, and GenAI frameworks Hands‑on data engineering experience—designing and operating robust pipelines for both structured data and unstructured (text, embeddings) Preferred: Certifications in Azure AI/ML, Databricks, or Generative AI specialties Experience working in Agile/Scrum environments and collaborating with cross‑functional teams. Why you should choose ZF in India Innovative Environment : ZF is at the forefront of technological advancements, offering a dynamic and innovative work environment that encourages creativity and growth. Diverse and Inclusive Culture : ZF fosters a diverse and inclusive workplace where all employees are valued and respected, promoting a culture of collaboration and mutual support. Career Development: ZF is committed to the professional growth of its employees, offering extensive training programs, career development opportunities Global Presence : As a part of a global leader in driveline and chassis technology, ZF provides opportunities to work on international projects and collaborate with teams worldwide. Sustainability Focus: ZF is dedicated to sustainability and environmental responsibility, actively working towards creating eco-friendly solutions and reducing its carbon footprint. Employee Well-being : ZF prioritizes the well-being of its employees, providing comprehensive health and wellness programs, flexible work arrangements, and a supportive work-life balance. Be part of our ZF team as Technical Lead- AI/ML Expert and apply now! Contact Veerabrahmam Darukumalli What does DEI (Diversity, Equity, Inclusion) mean for ZF as a company? At ZF, we continuously strive to build and maintain a culture where inclusiveness is lived and diversity is valued. We actively seek ways to remove barriers so that all our employees can rise to their full potential. We aim to embed this vision in our legacy through how we operate and build our products as we shape the future of mobility. 
Find out how we work at ZF: Job Segment: R&D Engineer, Cloud, R&D, Computer Science, Engineer, Engineering, Technology, Research
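The retrieval step of the RAG pipelines this role covers reduces to nearest-neighbour search over embeddings. A dependency-free toy sketch (the documents and vectors are made up; a real system would use learned embeddings and a vector database rather than a dict):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector store": document id -> pretend embedding.
store = {
    "doc_warranty": [0.9, 0.1, 0.0],
    "doc_gearbox":  [0.1, 0.9, 0.2],
    "doc_hr":       [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    """Return the k most similar document ids (the retrieval step of RAG)."""
    ranked = sorted(store, key=lambda d: cosine(query_embedding, store[d]),
                    reverse=True)
    return ranked[:k]

best = retrieve([0.2, 0.8, 0.1])  # query vector closest to the gearbox doc
```

The retrieved documents would then be injected into the LLM prompt as grounding context, which is the "augmented generation" half of the pipeline.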

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Worker Location: Hyderabad - Hybrid Job Title: Data Engineer Job Description: Designs and develops complex software that processes, stores and serves data for use by others. Designs and develops complex and large-scale data structures and pipelines to organize, collect and standardize data to generate insights and addresses reporting needs. Writes complex ETL (Extract / Transform / Load) processes, designs database systems and develops tools for real-time and offline analytic processing. Ensures that data pipelines are scalable, repeatable and secure. Improves data consistency and integrity. Integrates data from a variety of sources, assuring that they adhere to data quality and accessibility standards. Has knowledge of large-scale search applications and building high-volume data pipelines. Knowledge of Databricks, Unity Catalog, event ingestion, PySpark, and customer analytics including churn, funnel, and loyalty; Power BI is a plus. Complexity & Problem Solving: - Learns routine assignments of limited scope and complexity. - Follows practices and procedures to solve standard or routine problems. Autonomy & Supervision: - Receives general instructions on routine work and detailed guidance from more senior members on all new tasks. - Work is typically reviewed in detail at frequent intervals for accuracy. Communication & Influence: - Builds stable internal working relationships. - Communicates and seeks guidance/feedback regularly from more senior members of the team. - Primarily interacts with supervisors, project leads, mentors, or other professionals in the same discipline. - Explains facts, policies, and practices related to discipline. Knowledge & Experience: - Typically requires a college degree (or equivalent) with up to one year of experience but may not have any. - Has conceptual knowledge of theories, principles, and practices within discipline and industry. 
Nice to Have: DP-203 certification, Databricks Certified Data Engineer Associate Job Types: Full-time, Permanent Benefits: Health insurance Provident Fund Work Location: In person
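The funnel analytics mentioned above boil down to counting how many users reach each step and the step-to-step conversion rate. A minimal sketch with toy data (step names and users are invented for the example):

```python
def funnel_conversion(events, steps):
    """Per-step user counts and step-to-step conversion for a simple funnel.
    `events` maps user id -> set of funnel steps the user reached."""
    counts = [sum(1 for reached in events.values() if step in reached)
              for step in steps]
    rates = [counts[i] / counts[i - 1] if counts[i - 1] else 0.0
             for i in range(1, len(counts))]
    return counts, rates

events = {
    "u1": {"visit", "signup", "purchase"},
    "u2": {"visit", "signup"},
    "u3": {"visit"},
    "u4": {"visit", "signup"},
}
counts, rates = funnel_conversion(events, ["visit", "signup", "purchase"])
```

At warehouse scale the same computation is typically expressed as grouped aggregations in Spark SQL over an events table rather than in-memory Python.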

Posted 1 week ago

Apply

2.0 years

3 - 6 Lacs

Hyderābād

On-site

Become our next FutureStarter Are you ready to make an impact? ZF is looking for talented individuals to join our team. As a FutureStarter, you’ll have the opportunity to shape the future of mobility. Join us and be part of something extraordinary! Specialist- AI/ML Expert Country/Region: IN Location: Hyderabad, TG, IN, 500032 Req ID 81033 | Hyderabad, India, ZF India Pvt. Ltd. Job Description About the team: AIML is used to create chatbots, virtual assistants, and other forms of artificial intelligence software. AIML is also used in research and development of natural language processing systems. What you can look forward to as an AI/ML expert: Lead Development: Own end-to-end design, implementation, deployment and maintenance of both traditional ML and Generative AI solutions (e.g., fine-tuning LLMs, RAG pipelines) Project Execution & Delivery: Translate business requirements into data-driven and GenAI-driven use cases; scope features, estimates, and timelines Technical Leadership & Mentorship: Mentor, review and coach junior/mid-level engineers on best practices in ML, MLOps and GenAI Programming & Frameworks: Expert in Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow) Cloud & MLOps: Deep experience with Azure Machine Learning (SDK, Pipelines, Model Registry, hosting GenAI endpoints) Proficient in Azure Databricks: Spark jobs, Delta Lake, MLflow for tracking both ML and GenAI experiments Data & GenAI Engineering: Strong background in building ETL/ELT pipelines, data modeling, orchestration (Azure Data Factory, Databricks Jobs) Experience with embedding stores, vector databases, prompt optimization, and cost/performance tuning for large GenAI models Your profile as a Specialist: Bachelor’s, Master’s, or Ph.D. in Computer Science, Engineering, Data Science, or a related field Minimum of 2 years of professional experience in AI/ML engineering, including at least 2 years of hands-on Generative AI project delivery Track record of production deployments using 
Python, Azure ML, Databricks, and GenAI frameworks Hands‑on data engineering experience—designing and operating robust pipelines for both structured data and unstructured (text, embeddings) Preferred: Certifications in Azure AI/ML, Databricks, or Generative AI specialties Experience working in Agile/Scrum environments and collaborating with cross‑functional teams. Why you should choose ZF in India Innovative Environment : ZF is at the forefront of technological advancements, offering a dynamic and innovative work environment that encourages creativity and growth. Diverse and Inclusive Culture : ZF fosters a diverse and inclusive workplace where all employees are valued and respected, promoting a culture of collaboration and mutual support. Career Development: ZF is committed to the professional growth of its employees, offering extensive training programs, career development opportunities Global Presence : As a part of a global leader in driveline and chassis technology, ZF provides opportunities to work on international projects and collaborate with teams worldwide. Sustainability Focus: ZF is dedicated to sustainability and environmental responsibility, actively working towards creating eco-friendly solutions and reducing its carbon footprint. Employee Well-being : ZF prioritizes the well-being of its employees, providing comprehensive health and wellness programs, flexible work arrangements, and a supportive work-life balance. Be part of our ZF team as Specialist- AI/ML Expert and apply now! Contact Veerabrahmam Darukumalli What does DEI (Diversity, Equity, Inclusion) mean for ZF as a company? At ZF, we continuously strive to build and maintain a culture where inclusiveness is lived and diversity is valued. We actively seek ways to remove barriers so that all our employees can rise to their full potential. We aim to embed this vision in our legacy through how we operate and build our products as we shape the future of mobility. 
Find out how we work at ZF: Job Segment: R&D Engineer, R&D, Cloud, Computer Science, Engineer, Engineering, Research, Technology

Posted 1 week ago

Apply

7.0 years

3 - 8 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Design and implement IaC (Infrastructure as Code) solutions using tools such as Terraform, CloudFormation, or Ansible. Manage provisioning, configuration, and maintenance of servers and containers on cloud platforms (AWS, Azure, GCP, etc.). Ensure infrastructure is scalable, secure, and cost-effective. Architect, build, and maintain automated CI/CD pipelines using Jenkins, GitHub Actions, or other tools. Manage infrastructure for data engineering teams – Databricks and Snowflake. Establish development standards, automate builds and tests, and ensure seamless code deployments. Evaluate, select, and integrate services and tools that fit the organization’s cloud strategy. Optimize cloud services for cost and efficiency. Monitor and maintain cloud environments for performance and availability. Set up and configure monitoring tools (Prometheus, Grafana etc.) to track system health, performance, and security. Implement and maintain robust logging and alerting strategies to minimize downtime. Collaborate with data/software engineering teams, data analysts, and technology leads to streamline delivery processes and resolve issues. 
Mentor other team members on standard tools, process, automation, general DevOps practice maturity Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor’s or master’s degree in computer science, Information Technology, or a related field. Equivalent work experience is also acceptable 7+ years of experience in DevOps, Site Reliability Engineering (SRE), or related roles Proven track record of managing and automating large-scale cloud infrastructure and architecture Experience in designing Cloud Infrastructure workflows Hands-on experience with Docker and container orchestration platforms like Kubernetes Demonstrated expertise with Terraform, CloudFormation, Ansible, or similar tools In-depth knowledge of Linux/UNIX environments Familiarity with tools like Prometheus, Grafana, Splunk Proficiency in at least one major cloud provider (AWS, Azure, GCP) Proven solid scripting skills (Bash, Python, PowerShell, Go, etc.) 
and familiarity with Git version control Preferred Qualifications: Certification(s) in DevOps, Cloud, or Security (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert) Familiarity with microservices architecture and how CI/CD pipelines integrate with microservices deployments Working knowledge of serverless computing (Azure Functions) Python and Shell Scripting Expert in automating Infrastructure provisioning and maintenance At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
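Threshold-based alerting of the kind Prometheus rules encode declaratively can be illustrated in a few lines of Python (the metric names, thresholds, and severities are invented for the example):

```python
def evaluate_alerts(metrics, rules):
    """Return alert messages for every metric that breaches its threshold,
    mimicking what a monitoring stack's alerting rules express declaratively."""
    alerts = []
    for name, value in metrics.items():
        rule = rules.get(name)
        if rule and value > rule["threshold"]:
            alerts.append(f"{rule['severity']}: {name}={value} "
                          f"exceeds {rule['threshold']}")
    return alerts

rules = {
    "cpu_percent":  {"threshold": 90, "severity": "critical"},
    "disk_percent": {"threshold": 80, "severity": "warning"},
}
alerts = evaluate_alerts({"cpu_percent": 97, "disk_percent": 42}, rules)
```

In a real SRE setup these messages would route to a pager or chat channel via an alert manager rather than being returned as strings.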

Posted 1 week ago

Apply

8.0 - 12.0 years

4 - 8 Lacs

Hyderābād

On-site

ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Modeler position is responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Modeler drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data.
Roles & Responsibilities: Develop and maintain conceptual, logical, and physical data models to support business needs Contribute to and enforce data standards, governance policies, and best practices Design and manage metadata structures to enhance information retrieval and usability Maintain comprehensive documentation of the architecture, including principles, standards, and models Evaluate and recommend technologies and tools that best fit the solution requirements Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency Basic Qualifications and Experience: Doctorate / Master’s / Bachelor’s degree with 8-12 years of experience in Computer Science, IT or related field Functional Skills: Must-Have Skills Data Modeling: Proficiency in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs. Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality. Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), and with performance tuning of big data processing. Experience implementing data testing and data quality strategies. Good-to-Have Skills: Experience with graph technologies such as Stardog, AllegroGraph, MarkLogic Professional Certifications (please mention if the certification is preferred or mandatory for the role): Certifications in Databricks are desired Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule.
Candidates must be willing and able to work during evening or night shifts, as required based on business requirements. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
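Physical data design and query performance optimization of the kind this role describes can be seen in miniature with SQLite's query planner: the same query goes from a full table scan to an index search once the physical design matches the access path. The table, column, and index names below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient (id INTEGER PRIMARY KEY, mrn TEXT, name TEXT)")
conn.executemany("INSERT INTO patient (mrn, name) VALUES (?, ?)",
                 [(f"MRN{i:05d}", f"patient-{i}") for i in range(1000)])

query = "SELECT name FROM patient WHERE mrn = 'MRN00042'"

# Without an index, the planner must scan the whole table.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(before)  # the detail column reads like 'SCAN patient'

# An index on the lookup column turns the scan into a direct search.
conn.execute("CREATE INDEX idx_patient_mrn ON patient (mrn)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(after)   # detail reads like 'SEARCH patient USING INDEX idx_patient_mrn (mrn=?)'
```

The same reasoning, scaled up, is what partitioning, clustering keys, and Z-ordering do on platforms like Databricks.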

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Requisition Number: 101628 Architect I - Data Location: This is a hybrid opportunity in Delhi-NCR, Bangalore, Hyderabad, Gurugram area. Insight at a Glance 14,000+ engaged teammates globally with operations in 25 countries across the globe. Received 35+ industry and partner awards in the past year $9.2 billion in revenue #20 on Fortune’s World's Best Workplaces™ list #14 on Forbes World's Best Employers in IT – 2023 #23 on Forbes Best Employers for Women in IT- 2023 $1.4M+ total charitable contributions in 2023 by Insight globally Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organisations through complex digital decisions. About The Role As an Architect I , you will focus on leading our Business Intelligence (BI) and Data Warehousing (DW) initiatives. We will count on you to be involved in designing and implementing end-to-end data pipelines using cloud services and data frameworks. Along the way, you will get to: Architect and implement end-to-end data pipelines, data lakes, and warehouses using modern cloud services and architectural patterns. Develop and build analytics tools that deliver actionable insights to the business. Integrate and manage large, complex data sets to meet strategic business requirements. Optimize data processing workflows using frameworks such as PySpark. Establish and enforce best practices for data quality, integrity, security, and performance across the entire data ecosystem. Collaborate with cross-functional teams to prioritize deliverables and design solutions. Develop compelling business cases and return on investment (ROI) analyses to support strategic initiatives. 
Drive process improvements for enhanced data delivery speed and reliability. Provide technical leadership, training, and mentorship to team members, promoting a culture of excellence. What We’re Looking For 8+ years in Business Intelligence (BI) solution design, with 6+ years specializing in ETL processes and data warehouse architecture. 6+ years of hands-on experience with Azure Data services including Azure Data Factory, Azure Databricks, Azure Data Lake Gen2, Azure SQL DB, Synapse, Power BI, and MS Fabric. Strong Python and PySpark software engineering proficiency, coupled with a proven track record of building and optimizing big data pipelines, architectures, and datasets. Proficient in transforming, processing, and extracting insights from vast, disparate datasets, and building robust data pipelines for metadata, dependency, and workload management. Familiarity with software development lifecycles/methodologies, particularly Agile. Experience with SAP/ERP/Datasphere data modeling is a significant plus. Excellent presentation and collaboration skills, capable of creating formal documentation and supporting cross-functional teams in a dynamic environment. Strong problem-solving, time management, and organizational abilities. Keen to learn new languages and technologies continually. Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or an equivalent field What You Can Expect We’re legendary for taking care of you and your family, and for helping you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include: Freedom to work from another location—even an international destination—for up to 30 consecutive calendar days per year.
Medical Insurance Health Benefits Professional Development: Learning Platform and Certificate Reimbursement Shift Allowance But what really sets us apart are our core values of Hunger, Heart, and Harmony, which guide everything we do, from building relationships with teammates, partners, and clients to making a positive impact in our communities. Join us today, your ambITious journey starts here. When you apply, please tell us the pronouns you use and any reasonable adjustments you may need during the interview process. At Insight, we celebrate diversity of skills and experience so even if you don’t feel like your skills are a perfect match - we still want to hear from you! Today's talent leads tomorrow's success. Learn more about Insight: https://www.linkedin.com/company/insight/ Insight is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation or any other characteristic protected by law. Insight India Location: Level 16, Tower B, Building No 14, DLF Cyber City IT/ITES SEZ, Sector 24 & 25A, Gurugram (Gurgaon), Haryana 122002, India
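The "data quality, integrity, security, and performance" practices this role enforces reduce, at the pipeline level, to a validate-cast-deduplicate step between raw and curated data. A deliberately small, framework-free sketch; in production this would be PySpark on Azure Databricks, and the field names and rows are invented:

```python
raw_events = [  # raw ingested rows, possibly dirty or duplicated
    {"order_id": "A1", "amount": "100.0", "ts": "2024-01-01T10:00"},
    {"order_id": "A1", "amount": "100.0", "ts": "2024-01-01T10:00"},  # duplicate
    {"order_id": "A2", "amount": None,    "ts": "2024-01-01T11:00"},  # invalid
    {"order_id": "A3", "amount": "250.5", "ts": "2024-01-01T12:00"},
]

def curate(rows):
    """Drop invalid rows, cast types, and deduplicate on the business key."""
    seen, clean = set(), []
    for r in rows:
        if r["amount"] is None:       # validation rule: amount is required
            continue
        if r["order_id"] in seen:     # deduplication on the business key
            continue
        seen.add(r["order_id"])
        clean.append({"order_id": r["order_id"],
                      "amount": float(r["amount"]),  # type conformance
                      "ts": r["ts"]})
    return clean

curated = curate(raw_events)
print(curated)  # two clean rows remain: A1 and A3
```

In PySpark the same step would typically be a `filter`, a cast, and a `dropDuplicates` on the key columns.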

Posted 1 week ago

Apply


5.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Sr. AWS Data Engineer Years of experience: 5-10 years (with a minimum of 5 years of relevant experience) Work mode: WFO - Chennai (mandatory) Type: Permanent Key skills: Python, SQL, PySpark, AWS, Databricks, Data Modelling JD: Essential Skills / Experience: 4 to 6 years of professional experience in Data Engineering or a related field. Strong programming experience with Python and experience using Python for data wrangling, pipeline automation, and scripting. Deep expertise in writing complex and optimized SQL queries on large-scale datasets. Solid hands-on experience with PySpark and distributed data processing frameworks. Expertise working with Databricks for developing and orchestrating data pipelines. Experience with AWS cloud services such as S3, Glue, EMR, Athena, Redshift, and Lambda. Practical understanding of ETL/ELT development patterns and data modeling principles (Star/Snowflake schemas). Experience with job orchestration tools like Airflow, Databricks Jobs, or AWS Step Functions. Understanding of data lake, lakehouse, and data warehouse architectures. Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions). Strong troubleshooting and performance optimization skills in large-scale data processing environments. Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams. Desirable Skills / Experience: AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional). Exposure to data observability, monitoring, and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch). Experience working in healthcare, life sciences, finance, or another regulated industry. Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.). Knowledge of modern data architectures (Data Mesh, Data Fabric). Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming.
Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
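The star-schema modeling this posting asks about is easiest to see with a toy fact/dimension pair: descriptive attributes live in the dimension, measures in the fact, joined by a surrogate key. The tables and rows below are invented, and in practice this SQL would run on Redshift or Athena rather than in-memory SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Dimension table: one row per product, holding descriptive attributes.
conn.execute("CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT)")
# Fact table: one row per sale, referencing the dimension by surrogate key.
conn.execute("CREATE TABLE fact_sales (product_key INTEGER, qty INTEGER, revenue REAL)")

conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(1, "widget", "hardware"), (2, "gizmo", "hardware"), (3, "ebook", "media")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(1, 2, 20.0), (2, 1, 15.0), (1, 3, 30.0), (3, 5, 25.0)])

# The canonical star-schema query: aggregate the fact, slice by a dimension attribute.
rows = conn.execute("""
    SELECT d.category, SUM(f.revenue) AS total_revenue
    FROM fact_sales f
    JOIN dim_product d ON d.product_key = f.product_key
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('hardware', 65.0), ('media', 25.0)]
```

A snowflake schema simply normalizes the dimension further (e.g., category into its own table), trading join depth for storage.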

Posted 1 week ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description At CGA by NIQ, we deliver the most complete and clear understanding of consumer buying behavior that reveals new pathways to growth for the largest food and drink businesses and most iconic brands. With offices in the United Kingdom and the USA, it is our vision to be the world’s leading business intelligence and strategic consultancy provider for the out-of-home leisure market. CGA was acquired by NIQ, the world's leading consumer intelligence provider for off-premise data and insights, bringing together on- and off-trade services that offer clients an unparalleled opportunity to grow brand and market share. NIQ operates in more than one hundred countries, delivering alcoholic beverage measurement and consumer insights that power growth. Job Description We are seeking a talented Data Engineering Manager to join our Data Engineering team. The successful candidate will have 6+ years of experience in SQL Server, Azure, Databricks, and Python. They will also need experience in line management and in working with business stakeholders.
Responsibilities Lead the support and management of our Global Outlet processing platform, ensuring it is robust, reliable, and cost-effective in providing high-quality data Design, develop, and uphold data integration processes, including the creation and execution of scalable and effective data pipelines for managing and transforming large datasets Identify and implement improvements, working with the business to ensure they are deployed and working correctly Ensure timely and correct data deliveries Provide line management and mentorship to data engineers, supporting their development and skill-building to ensure they become self-sufficient Lead the technical aspects of our GOI strategy, ensuring alignment with organisational goals and objectives Qualifications Requirements Bachelor’s degree in computer science or a related field, or equivalent experience 6+ years of experience in SQL Server support and maintenance Experience in development and data manipulation in Python Proficient in Azure data services (e.g., SQL, Data Factory, Data Lake, Databricks) - Databricks is essential Ability to create and troubleshoot SQL queries and stored procedures Proven experience in designing and implementing data pipelines and ETL processes Knowledge of data integration and data modelling concepts, and familiarity with cloud data platforms and storage technologies, ideally within Azure Strong problem-solving skills and attention to detail Strong communication and collaboration skills Ability to prioritize and manage multiple tasks effectively Line management experience Nice-to-Have Understanding of programming languages such as C# or PowerShell Additional Information Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms Recharge and revitalize with help of wellness plans made for you and your family Plan your future with financial wellness tools Stay relevant and upskill yourself with career development opportunities Our Benefits Flexible working
environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
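The data integration processes this role owns typically hinge on idempotent upserts: insert new keys, update changed ones, and leave the table unchanged on a re-run. On SQL Server that is usually a MERGE statement; the same pattern can be sketched with SQLite's upsert syntax (the outlet table and rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE outlet (outlet_id TEXT PRIMARY KEY, name TEXT, status TEXT)")
conn.execute("INSERT INTO outlet VALUES ('O1', 'The Crown', 'open')")

incoming = [("O1", "The Crown", "closed"),  # existing key: update its status
            ("O2", "Red Lion", "open")]     # new key: insert a fresh row

# Idempotent upsert: re-running the same batch leaves the table unchanged,
# which makes the load safe to retry after a partial failure.
conn.executemany("""
    INSERT INTO outlet (outlet_id, name, status) VALUES (?, ?, ?)
    ON CONFLICT(outlet_id) DO UPDATE SET name = excluded.name, status = excluded.status
""", incoming)

rows = conn.execute("SELECT * FROM outlet ORDER BY outlet_id").fetchall()
print(rows)  # [('O1', 'The Crown', 'closed'), ('O2', 'Red Lion', 'open')]
```

Retry-safety of this kind is what lets a scheduled pipeline "ensure timely and correct data deliveries" without manual cleanup after failures.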

Posted 1 week ago

Apply

3.0 years

6 - 7 Lacs

Vadodara

On-site

Job Description Join a dynamic and diverse global team dedicated to developing innovative solutions that uncover the complete consumer journey for our clients. We are seeking a highly skilled Data Scientist with strong development skills in programming languages such as Python, along with expertise in statistics, mathematics, and econometrics, and experience with panel data, to revolutionize the way we measure consumer behavior both online and in-store. Looking ahead, we are excited to find someone who will join our team in developing a tool that can simulate the impact of production process changes on client data. This tool, which runs outside of the production factory, will allow the wider Data Science team to drive innovation with unprecedented efficiency. About the Role Collaborative Environment: Work with an international team in a flexible and supportive setting, fostering cross-functional collaboration between data scientists, engineers, and product stakeholders. Tool Ownership and Development: Take ownership of a core Python-based tool, ensuring its continued development, scalability, and maintainability. Use robust engineering practices such as version control, testing, and PRs. Innovative Solution Development: Collaborate closely with subject matter experts to understand complex methodologies. Translate these into scalable, production-ready implementations within the Python tool. Design and implement new features and enhancements to the tool to address evolving market challenges and improve team efficiency. Methodology Enhancement: Evaluate and improve current methodologies, including data cleaning, preparation, quality tracking, and consumer projection, with a strong focus on automation and reproducibility. Documentation & Code Quality: Maintain comprehensive documentation of the tool’s architecture, usage, and development roadmap. Ensure high code quality through peer reviews and adherence to best practices.
Research and Analysis: Conduct rigorous research and analysis to inform tool improvements and ensure alignment with business needs. Communicate findings and recommendations clearly to both technical and non-technical audiences. Deployment and Support: Support the production deployment of new features and enhancements. Monitor tool performance and address issues proactively to ensure reliability and user satisfaction. Cross-Team Coordination: Coordinate efforts across multiple teams and stakeholders to ensure seamless integration of the tool into broader workflows and systems. Qualifications About You Ideally, you possess a good understanding of consumer behavior, panel-based projections, and consumer metrics and analytics. You have successfully designed and developed software applying statistical and data-analytical methods and demonstrated your ability to handle complex data sets. Experience with (un)managed crowdsourced panels and receipt capture methodologies is an advantage. Educational Background: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Mathematics, Statistics, Socioeconomics, Data Science, or a related field, with a minimum of 3 years of relevant experience. Programming Proficiency: Proficient in Python, or in another programming language (R, C++, or Java) with a willingness to learn Python. Software Engineering Skills: Strong software engineering skills, including experience designing and developing software; optionally, experience with version control systems such as GitHub or Bitbucket Data Analysis Skills: Proficiency in manipulating, analyzing, and interpreting large data sets. Data Handling: Experience using Spark, specifically the PySpark package, and working with large-scale datasets. Optionally, experience with SQL and writing queries. Continuous Learning: Eagerness to adopt and develop evolving technologies and tools.
Statistical Expertise: Statistical and logical skills, experience in data cleaning, and data aggregation techniques. Communication and Collaboration: Strong communication, writing, and collaboration skills. Nice to Have Consumer Insights: Knowledge of consumer behavior and (un)managed consumer-related crowdsourced panels. Technology Skills: Familiarity with technology stacks for cloud computing (Azure AI, Databricks, Snowflake). Production Support: Experience or interest in supporting technology teams in production deployment. Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
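The panel-based projection mentioned throughout this posting boils down to weighting sample cells up to population totals: each panelist "stands for" population / sample-size people in their demographic cell. A deliberately small sketch with invented numbers:

```python
# Panelists per demographic cell, and the population each cell represents.
panel_counts = {"18-34": 200, "35-54": 300, "55+": 100}
population   = {"18-34": 2_000_000, "35-54": 3_300_000, "55+": 1_500_000}

# Projection weight: how many people each panelist stands for in that cell.
weights = {cell: population[cell] / panel_counts[cell] for cell in panel_counts}

# Observed purchases in the panel, projected to the full population.
panel_purchases = {"18-34": 50, "35-54": 90, "55+": 40}
projected = sum(panel_purchases[c] * weights[c] for c in panel_purchases)

print(weights["55+"])  # 15000.0 people per 55+ panelist
print(projected)       # 50*10000 + 90*11000 + 40*15000 = 2,090,000 purchases
```

Real panel methodologies refine this with calibration/raking across multiple demographics and with corrections for panel churn, but the cell-weight idea is the core of it.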

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description YOU’LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Senior Data Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins and Bitbucket/GitHub. WHAT YOU’LL DO: Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies Deploy application components using CI/CD pipelines Build utilities for monitoring and automating repetitive functions Collaborate with Agile cross-functional teams - internal and external clients including Operations, Infrastructure, Tech Ops Collaborate with the Data Science team to productionize ML models Participate in a rotational support schedule to provide responses to customer queries and deploy bug fixes in a timely and accurate manner Qualifications 3-6 years of applicable software engineering experience Strong fundamentals with experience in big data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, SQL Must have experience in cloud technologies, preferably Microsoft Azure Must have experience in performance optimization of Spark workloads Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, Docker Good to have knowledge of relational databases, preferably PostgreSQL Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business Minimum B.S.
degree in Computer Science, Computer Engineering or related field Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
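One of the Spark performance optimizations this role calls for is the broadcast (map-side) hash join: build the small table into an in-memory map on every executor and stream the large table past it, avoiding a shuffle entirely. The idea, stripped of Spark and with invented data:

```python
# Small dimension table: broadcast-sized, fits in each executor's memory.
dim_country = [("IN", "India"), ("DE", "Germany"), ("US", "United States")]

# Large fact table: in Spark this would be a partitioned DataFrame.
fact_orders = [("o1", "IN", 10.0), ("o2", "US", 25.0),
               ("o3", "IN", 5.0),  ("o4", "FR", 8.0)]  # 'FR' has no match

# Build phase: hash the small side once (what broadcasting triggers in Spark).
lookup = {code: name for code, name in dim_country}

# Probe phase: a single pass over the large side, no shuffle required.
joined = [(oid, lookup[code], amt)
          for oid, code, amt in fact_orders
          if code in lookup]  # inner-join semantics: unmatched rows dropped

print(joined)  # [('o1', 'India', 10.0), ('o2', 'United States', 25.0), ('o3', 'India', 5.0)]
```

In PySpark the equivalent is hinting the join with `broadcast()` on the small DataFrame; Spark also applies it automatically below its broadcast-size threshold.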

Posted 1 week ago

Apply

3.0 years

4 - 6 Lacs

Vadodara

On-site

Analytics Posted on Jul 22, 2025 Vadodara-Gujarat Minimum Required Experience: 3 years Full Time Skills Machine Learning TensorFlow PyTorch NLP Description Designation/Role Name Machine Learning Engineer Org Structure Data Science & AI Team Work Schedule According to the business needs Job Description With excellent analytical and problem-solving skills, you should understand customers' business problems and translate them into a scope of work and technical specifications for development into Data Science projects. Efficiently utilize cutting-edge technologies in AI areas (Machine Learning, NLP, Computer Vision) for the development of solutions to business problems. Good exposure to technology platforms for Data Science, AI, Gen AI, and cloud, with implementation experience. Ability to understand data and requirements, and to design and develop a Machine Learning model for the requirements. This job requires the following: Designing, developing, and implementing end-to-end machine learning production pipelines (data exploration, sampling, training data generation, feature engineering, model building, and performance evaluation) Experience in predictive analytics and statistical modeling Experience in successfully making use of the following: Logistic Regression, Multivariate Regression, Support Vector Machines, Stochastic Processes, Decision Trees, lifetime analysis, common clustering algorithms, optimization, CNN Essential Qualifications B.Tech or BE - Computer/IT, or MCA or MSc - Computer Science, along with necessary certifications, is preferred Technical Qualifications (Essential) Hands-on programming experience Hands-on technical design experience Hands-on prompt engineering experience Design and development of at least 3 Data Science/AI projects, including design and development of Machine Learning models; 1 Generative AI project designed, developed and delivered to production is desirable Primary Skills Hands-on coding experience in Python, PyTorch, Spark/PySpark, SQL, TensorFlow, NLP
Frameworks and similar tools/frameworks Good understanding of business and domain of the applications Hands-on experience in design and development of Gen AI applications using Open Source LLMs and cloud platforms Hands-on experience in design and development of API based applications for AI and Data Science Projects Understanding in GenAI concepts, RAG and Models fine-tuning techniques is desirable Understand the concepts of major AI models such as OpenAI, Llama, Hugging Face, Mistral AI etc., Understanding of DevOps pipelines for deployment Good understanding of Data Engineering lifecycle – data pipelines, data warehouse, data lake Secondary Skills Experience using Databricks and Azure Data platform Knowledge of any configuration management tools is desirable Familiarity with containerization and container orchestration services like Docker and Kubernetes Experience 3+ years in Machine Learning Model development in Data Science/AI Projects. Awareness of LLM integrations / development is desirable. Description of Responsibility Understand customer’s requirements (Business, Functional, Non-Functional etc.,), design and develop Machine Learning Models Design and implement Machine Learning Models using major technology and computing platforms (open source and cloud) Possess excellent analytical and problem-solving skills and be able to understand various forms of data, patterns and derive insights. 
Collaborate internal and external stakeholders for deriving solution that requires cross functional teams and smoother execution of the projects Knowledge of data modeling and understanding of different data structures Experience with design of AI/ML solutions either as standalone or integrated with other applications Experience in Generative AI solutions for the business/automation requirements using open source LLMs (Open AI, LLama, Mistral etc.,) is desirable Sills / Competencies requirement Research Orientation Proactive & Clear Communication Collaboration Solution Orientation Solution Articulation Accountability Adaptability / Flexibility Analytical Skills Listening Skills Customer Service Orientation
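The pipeline stages named above (training data generation, feature engineering, model building, performance evaluation) can be sketched end to end in plain Python. This is a minimal illustration only: logistic regression fit by batch gradient descent on a hypothetical toy dataset, not a production pipeline and not code from any of these roles.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit logistic regression weights by batch gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # derivative of log-loss w.r.t. the logit
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Training data generation: toy, linearly separable data (hypothetical).
random.seed(0)
X = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(200)]
y = [1 if x1 + x2 > 1.0 else 0 for x1, x2 in X]

# Model building on a train split, performance evaluation on a held-out split.
split = 150
w, b = train_logistic(X[:split], y[:split])
accuracy = sum(predict(w, b, xi) == yi
               for xi, yi in zip(X[split:], y[split:])) / (len(X) - split)
```

In practice each stage would be a separate, tested component (e.g. scikit-learn or PySpark transformers); the point here is only the shape of the end-to-end flow.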

Posted 1 week ago

Apply

3 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the Company
JMAN Group is a growing technology-enabled management consultancy that empowers organizations to create value through data. Founded in 2010, we are a team of 450+ consultants based in London, UK, and a team of 300+ engineers in Chennai, India. Having delivered multiple projects in the US, we are now opening a new office in New York to help us support and grow our US client base. We approach business problems with the mindset of a management consultancy and the capabilities of a tech company. We work across all sectors and have in-depth experience in private equity, pharmaceuticals, government departments, and high-street chains. Our team is as cutting edge as our work. We pride ourselves on being great to work with: no jargon or corporate-speak, flexible to change, and receptive to feedback. We place a huge focus on investing in the training and professional development of our team, to ensure they can deliver high-quality work and shape our journey to becoming a globally recognised brand. The business has grown quickly in the last 3 years with no signs of slowing down.

About the Role
7+ years of experience in managing Data & Analytics service delivery, preferably within a Managed Services or consulting environment.

Responsibilities
• Serve as the primary owner for all managed service engagements across all clients, ensuring SLAs and KPIs are met consistently.
• Continuously improve the operating model, including ticket workflows, escalation paths, and monitoring practices.
• Coordinate triaging and resolution of incidents and service requests raised by client stakeholders.
• Collaborate with client and internal cluster teams to manage operational roadmaps, recurring issues, and enhancement backlogs.
• Lead a 40+ member team of Data Engineers and Consultants across offices, ensuring high-quality delivery and adherence to standards.
• Support the transition from project mode to Managed Services, including knowledge transfer, documentation, and platform walkthroughs.
• Ensure documentation is kept up to date for architecture, SOPs, and common issues.
• Contribute to service reviews, retrospectives, and continuous improvement planning.
• Report on service metrics, root cause analyses, and team utilization to internal and client stakeholders.
• Participate in resourcing and onboarding planning in collaboration with engagement managers, resourcing managers, and internal cluster leads.
• Act as a coach and mentor to junior team members, promoting skill development and a strong delivery culture.

Qualifications
• ETL or ELT: Azure Data Factory, Databricks, Synapse, dbt (any two – mandatory).
• Data Warehousing: Azure SQL Server, Redshift, BigQuery, Databricks, Snowflake (any one – mandatory).
• Data Visualization: Looker, Power BI, Tableau (basic understanding to support stakeholder queries).
• Cloud: Azure (mandatory); AWS or GCP (good to have).
• SQL and Scripting: Ability to read/debug SQL and Python scripts.
• Monitoring: Azure Monitor, Log Analytics, Datadog, or equivalent tools.
• Ticketing & Workflow Tools: Freshdesk, Jira, ServiceNow, or similar.
• DevOps: Containerization technologies (e.g., Docker, Kubernetes), Git, CI/CD pipelines (exposure preferred).

Required Skills
• Strong understanding of data engineering and analytics concepts, including ELT/ETL pipelines, data warehousing, and reporting layers.
• Experience in ticketing, issue triaging, SLAs, and capacity planning for BAU operations.
• Hands-on understanding of SQL and scripting languages (Python preferred) for debugging/troubleshooting.
• Proficiency with cloud platforms such as Azure and AWS; familiarity with DevOps practices is a plus.
• Familiarity with orchestration and data pipeline tools such as ADF, Synapse, dbt, Matillion, or Fabric.
• Understanding of monitoring tools, incident management practices, and alerting systems (e.g., Datadog, Azure Monitor, PagerDuty).
• Strong stakeholder communication, documentation, and presentation skills.
• Experience working with global teams and collaborating across time zones.
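Reporting on service metrics such as SLA compliance, as this role requires, often reduces to comparing ticket resolution times against an agreed target. A minimal stdlib sketch; the tickets and the four-hour target are hypothetical, not taken from the posting:

```python
from datetime import datetime, timedelta

SLA_TARGET = timedelta(hours=4)  # hypothetical resolution target

# Hypothetical tickets: (opened, resolved) timestamps.
tickets = [
    (datetime(2025, 7, 1, 9, 0), datetime(2025, 7, 1, 11, 30)),  # 2.5 h -> within SLA
    (datetime(2025, 7, 1, 10, 0), datetime(2025, 7, 1, 15, 0)),  # 5 h   -> breached
    (datetime(2025, 7, 2, 8, 0), datetime(2025, 7, 2, 9, 45)),   # 1.75 h -> within SLA
]

def sla_compliance(tickets, target):
    """Fraction of tickets resolved within the target window."""
    met = sum(1 for opened, resolved in tickets if resolved - opened <= target)
    return met / len(tickets)

rate = sla_compliance(tickets, SLA_TARGET)
print(f"SLA compliance: {rate:.0%}")  # -> SLA compliance: 67%
```

In a real engagement the timestamps would come from the ticketing tool's export or API (Freshdesk, Jira, ServiceNow), and the target would vary by priority band.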

Posted 1 week ago

Apply

5 years

0 Lacs

Pune, Maharashtra, India

On-site

About the Position
We are looking for a seasoned Azure Data Engineer with strong expertise in Python programming and cloud-based data solutions. The ideal candidate will have hands-on experience in designing, building, and maintaining scalable data pipelines and analytics solutions.

Role: Azure Data Engineer
Location: All PSL Locations
Experience: 5+ years
Job Type: Full-Time Employment

What You'll Do
• Design and develop ETL/ELT pipelines using Azure Data Factory, Databricks, and other Azure services.
• Build and maintain data lakes, data warehouses, and real-time data streaming solutions.
• Write efficient and reusable Python scripts for data transformation, automation, and orchestration.
• Optimize and manage SQL queries and database performance across Azure SQL, Synapse, and other platforms.
• Collaborate with data scientists, analysts, and business stakeholders to deliver actionable insights.
• Implement CI/CD pipelines for data workflows and ensure robust data governance and security.
• Monitor and troubleshoot data pipelines to ensure high availability and reliability.

Expertise You'll Bring
• Minimum 5 years of experience in data engineering or related roles.
• Strong hands-on experience with Azure services: Data Factory, Databricks, etc.
• Proficiency in Python for data processing and automation.
• Solid understanding of SQL and relational/non-relational databases.
• Excellent problem-solving and communication skills.

Benefits
• Competitive salary and benefits package
• Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
• Opportunity to work with cutting-edge technologies
• Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
• Annual health check-ups
• Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with a disability and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to:
• Accelerate growth, both professionally and personally
• Impact the world in powerful, positive ways, using the latest technologies
• Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
• Unlock global opportunities to work and learn with the industry's best
Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
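The "efficient and reusable Python scripts for data transformation" this role calls for typically normalize, validate, and deduplicate records before loading them downstream. A minimal stdlib sketch of that pattern; the field names, cleaning rules, and sample CSV are hypothetical, not from the posting:

```python
import csv
import io

def transform(records):
    """Normalize, validate, and deduplicate raw records before loading."""
    seen = set()
    cleaned = []
    for row in records:
        key = row.get("id", "").strip()
        if not key or key in seen:  # drop rows with no id, and duplicates
            continue
        seen.add(key)
        cleaned.append({
            "id": key,
            "name": row.get("name", "").strip().title(),
            "amount": round(float(row.get("amount", 0) or 0), 2),
        })
    return cleaned

# Hypothetical raw extract with whitespace, a duplicate, and a missing id.
raw_csv = """id,name,amount
1, alice ,10.50
2,bob,20
2,bob,20
,ghost,5
"""
rows = list(csv.DictReader(io.StringIO(raw_csv)))
clean = transform(rows)
```

In an Azure pipeline the same function would sit inside a Databricks notebook or an ADF-invoked job, reading from a data lake path instead of an inline string; keeping the transform pure (records in, records out) is what makes it reusable and easy to test.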

Posted 1 week ago

Apply