
6460 Databricks Jobs - Page 31

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description: This role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric, which covers data flow design, data modeling, physical data design, and query performance optimization. The Data Modeler develops business information models by studying the business, our data, and the industry, and creates data models to realize a connected data ecosystem that empowers consumers. The Data Modeler drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data.
Roles & Responsibilities:
• Develop and maintain conceptual, logical, and physical data models to support business needs
• Contribute to and enforce data standards, governance policies, and best practices
• Design and manage metadata structures to enhance information retrieval and usability
• Maintain comprehensive documentation of the architecture, including principles, standards, and models
• Evaluate and recommend technologies and tools that best fit the solution requirements
• Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency

Basic Qualifications and Experience: Doctorate / Master’s / Bachelor’s degree with 8-12 years of experience in Computer Science, IT, or a related field

Must-Have Skills:
• Data Modeling: proficiency in creating conceptual, logical, and physical data models to represent information structures; ability to interview business subject-matter experts and develop data models useful for their analysis needs
• Metadata Management: knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality
• Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), including performance tuning of big data processing
• Implementing data testing and data quality strategies

Good-to-Have Skills:
• Experience with graph technologies such as Stardog, AllegroGraph, or MarkLogic

Professional Certifications: certifications in Databricks are desired

Soft Skills:
• Excellent critical-thinking and problem-solving skills
• Strong communication and collaboration skills
• Demonstrated ability to work in a team setting

Shift Information: This position requires you to work a later shift and may be assigned a second- or third-shift schedule.
Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
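The conceptual-to-physical modeling the role describes can be pictured with a toy star schema: a fact table holding measures and surrogate keys, and a dimension table resolving those keys to attributes. This is a minimal plain-Python sketch with made-up table and column names; in practice the physical model would live as Delta tables on Databricks.

```python
# Toy physical realization of a dimensional model: one dimension table
# keyed by a surrogate key, and a fact table referencing it.
# All names (aspirin, qty, product_key) are invented for illustration.

dim_product = {101: {"name": "aspirin", "category": "analgesic"}}
fact_sales = [
    {"product_key": 101, "qty": 3},
    {"product_key": 101, "qty": 2},
]

# Resolve the surrogate key at query time, as a star-schema join would.
report = [
    {"product": dim_product[f["product_key"]]["name"], "qty": f["qty"]}
    for f in fact_sales
]
total = sum(r["qty"] for r in report)
print(total)  # 5
```

Separating descriptive attributes (dimension) from measures (fact) is what lets the same model serve many analysis needs without duplicating data.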

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill and passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Summary
This position provides input and support for full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, and implementation of systems and applications software). He/she performs tasks within planned durations and established deadlines, collaborates with teams to ensure effective communication and support the achievement of objectives, and provides knowledge, development, maintenance, and support for applications.

Responsibilities
• Generates application documentation
• Contributes to systems analysis and design
• Designs and develops moderately complex applications
• Contributes to integration builds
• Contributes to maintenance and support
• Monitors emerging technologies and products

Technical Skills
• Cloud Platforms: Azure (Databricks, Data Factory, Data Lake Storage, Synapse Analytics)
• Data Processing: Databricks (PySpark, Spark SQL), Apache Spark
• Programming Languages: Python, SQL
• Data Engineering Tools: Delta Lake, Azure Data Factory, Apache Airflow
• Other: Git, CI/CD

Professional Experience
• Design and implementation of a scalable data lakehouse on Azure Databricks, optimizing data ingestion, processing, and analysis for improved business insights
• Develop and maintain efficient data pipelines using PySpark and Spark SQL for extracting, transforming, and loading (ETL) data from diverse sources (Azure and GCP)
• Strong proficiency in SQL; develop SQL stored procedures for data integrity, ensuring data accuracy and consistency across all layers
• Implement Delta Lake for ACID transactions and data versioning, ensuring data quality and reliability
• Create frameworks using Databricks and Data Factory to process incremental data for external vendors and applications
• Implement Azure Functions to trigger and manage data processing workflows
• Design and implement data pipelines to integrate various data sources and manage Databricks workflows for efficient data processing
• Conduct performance tuning and optimization of data processing workflows
• Provide technical support and troubleshooting for data processing issues
• Experience with successful migrations from legacy data infrastructure to Azure Databricks, improving scalability and cost savings
• Collaborate with data scientists and analysts to build interactive dashboards and visualizations on Databricks for data exploration and analysis
• Effective oral and written management communication skills

Qualifications
• Minimum 8 years of relevant experience
• Bachelor’s degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
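The "process incremental data" frameworks mentioned above usually follow a watermark pattern: remember the highest timestamp loaded so far and append only newer rows on each run. A minimal plain-Python sketch, with invented field names; on Databricks this logic would be PySpark reading a source table and appending to a Delta table.

```python
# Watermark-based incremental load: append only rows newer than the
# last recorded watermark, then advance the watermark. Field names
# (id, updated_at) are illustrative assumptions.

def incremental_load(source_rows, target, last_watermark):
    """Append rows newer than the stored watermark; return the new watermark."""
    new_rows = [r for r in source_rows if r["updated_at"] > last_watermark]
    target.extend(new_rows)
    return max((r["updated_at"] for r in new_rows), default=last_watermark)

source = [
    {"id": 1, "updated_at": 10},
    {"id": 2, "updated_at": 25},
    {"id": 3, "updated_at": 30},
]
target = [{"id": 1, "updated_at": 10}]  # row 1 was loaded on a previous run

wm = incremental_load(source, target, last_watermark=10)
print(wm)           # 30
print(len(target))  # 3  (rows 2 and 3 appended, row 1 not re-loaded)
```

Persisting the watermark between runs (e.g., in a control table) is what makes the pipeline restartable without reprocessing history.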

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

Remote

About Us
We are an innovative AI SaaS venture that develops cutting-edge AI solutions and provides expert consulting services. Our mission is to empower businesses with state-of-the-art AI technologies and data-driven insights. We're seeking a talented Data Engineer to join our team and help drive our product development and consulting initiatives.

Job Overview
For our Q4 2025 and 2026+ ambition, we are looking for a motivated Intern in Data Engineering (Azure). You will assist in building and maintaining foundational data pipelines and architectures under the guidance of senior team members. This role focuses on learning Azure tools (ADF, Databricks, PySpark, Scala, Python), supporting data ingestion/transformation workflows, and contributing to scalable solutions for AI-driven projects.

Tasks
• Develop basic data pipelines using Azure Data Factory, Azure Synapse Analytics, or Azure Databricks
• Assist in ingesting structured/semi-structured data from sources (e.g., APIs, databases, files) into Azure Data Lake Storage (ADLS)
• Write simple SQL queries and scripts for data transformation and validation
• Write simple PySpark, Scala, and Python code as required
• Monitor pipeline performance and troubleshoot basic issues
• Collaborate with AI/ML teams to prepare datasets for model training
• Document workflows and adhere to data governance standards

Preferred Qualifications
• Basic knowledge of AI/ML concepts
• Bachelor's degree in any stream (Engineering, Science, or Commerce)
• Basic understanding of Azure services (Data Factory, Synapse, ADLS, SQL Database, Databricks, Azure ML)
• Familiarity with SQL, Python, PySpark, or Scala for scripting
• Exposure to data modeling and ETL/ELT processes
• Ability to work in Agile/Scrum teams

What We Offer
• Cutting-edge technology: work on cutting-edge AI projects and shape the future of data visualization
• Rapid growth: be part of a high-growth startup with ample opportunities for career advancement
• Impactful work: see your contributions make a real difference in how businesses operate
• Collaborative culture: join a diverse team of brilliant minds from around the world
• Flexible work environment: enjoy remote work options and a healthy work-life balance
• Competitive compensation at market rates

We're excited to welcome passionate, driven individuals who are eager to learn and grow with our team. If you're ready to gain hands-on experience, contribute to meaningful projects, and take the next step in your professional journey, we encourage you to apply. We look forward to exploring the possibility of having you onboard.

Follow us for more updates: https://www.linkedin.com/company/ingeniusai/posts/
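The ingest/transform/validate tasks listed above can be illustrated with a tiny example: parse semi-structured JSON, drop records that fail a validation rule, and cast types. A hedged plain-Python sketch with invented field names; in the role itself this logic would run inside an ADF pipeline or a Databricks notebook.

```python
import json

# Minimal ingest -> validate -> transform step. The payload, the
# null-key validation rule, and the field names are illustrative.

raw = '[{"order_id": "A1", "amount": "19.99"}, {"order_id": null, "amount": "5"}]'

def ingest(payload):
    rows = json.loads(payload)
    valid = [r for r in rows if r["order_id"] is not None]  # reject bad keys
    # Cast string amounts to numeric during transformation.
    return [{"order_id": r["order_id"], "amount": float(r["amount"])} for r in valid]

print(ingest(raw))  # [{'order_id': 'A1', 'amount': 19.99}]
```

Keeping validation separate from transformation, as here, makes it easy to route rejected records to a quarantine location later.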

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within one of the world's 500 largest organizations. Envision innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, both today and tomorrow.

Job Summary
This position provides input and support for full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, and implementation of systems and applications software). He/she performs tasks within planned durations and established deadlines, collaborates with teams to ensure effective communication and support the achievement of objectives, and provides knowledge, development, maintenance, and support for applications.

Responsibilities
• Generates application documentation
• Contributes to systems analysis and design
• Designs and develops moderately complex applications
• Contributes to integration builds
• Contributes to maintenance and support
• Monitors emerging technologies and products

Technical Skills
• Cloud Platforms: Azure (Databricks, Data Factory, Data Lake Storage, Synapse Analytics)
• Data Processing: Databricks (PySpark, Spark SQL), Apache Spark
• Programming Languages: Python, SQL
• Data Engineering Tools: Delta Lake, Azure Data Factory, Apache Airflow
• Other: Git, CI/CD

Professional Experience
• Design and implementation of a scalable data lakehouse on Azure Databricks, optimizing data ingestion, processing, and analysis for improved business insights
• Develop and maintain efficient data pipelines using PySpark and Spark SQL for extracting, transforming, and loading (ETL) data from diverse sources (Azure and GCP)
• Strong proficiency in SQL; develop SQL stored procedures for data integrity, ensuring data accuracy and consistency across all layers
• Implement Delta Lake for ACID transactions and data versioning, ensuring data quality and reliability
• Create frameworks using Databricks and Data Factory to process incremental data for external vendors and applications
• Implement Azure Functions to trigger and manage data processing workflows
• Design and implement data pipelines to integrate various data sources and manage Databricks workflows for efficient data processing
• Conduct performance tuning and optimization of data processing workflows
• Provide technical support and troubleshooting for data processing issues
• Experience with successful migrations from legacy data infrastructure to Azure Databricks, improving scalability and cost savings
• Collaborate with data scientists and analysts to build interactive dashboards and visualizations on Databricks for data exploration and analysis
• Effective oral and written management communication skills

Qualifications
• Minimum 8 years of relevant experience
• Bachelor’s degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field

Employee Type: Permanent (CDI)

At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.

Posted 1 week ago

Apply

5.0 - 15.0 years

13 - 37 Lacs

Kolkata, Pune, Bengaluru

Work from Office

Roles and Responsibilities:
• Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure storage solutions such as Blob Storage, Cosmos DB, etc.
• Collaborate with cross-functional teams to gather requirements for new data engineering projects and ensure successful implementation of ADF workflows.
• Troubleshoot ADF pipeline failures by analyzing logs, applying debugging techniques, and working closely with stakeholders to resolve problems efficiently.
• Develop automated testing frameworks for ADF pipelines using PySpark or other tools to ensure high-quality project delivery.

Job Requirements:
• 5-15 years of experience in data engineering with expertise in Azure Data Factory (ADF).
• Strong understanding of Hadoop-ecosystem big data technologies, including Hive, Pig, and Spark.
• Proficiency in writing complex SQL queries on the Azure Databricks platform.
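The automated-testing idea in the responsibilities above can be sketched as a unit test: run a pipeline transformation against a small fixture and assert on the result before deploying. Plain Python for illustration; a real framework would exercise a PySpark job triggered by ADF, and the dedupe transform and field names here are assumptions.

```python
# Unit-testing a pipeline transform against a fixture. The transform
# keeps the most recent record per key, a common step in ADF/Spark
# pipelines; id/ts field names are invented.

def dedupe_latest(rows):
    """Keep the most recent record per id."""
    latest = {}
    for r in rows:
        k = r["id"]
        if k not in latest or r["ts"] > latest[k]["ts"]:
            latest[k] = r
    return sorted(latest.values(), key=lambda r: r["id"])

# Small fixture exercising both the "duplicate" and "unique" cases.
fixture = [{"id": 1, "ts": 1}, {"id": 1, "ts": 5}, {"id": 2, "ts": 2}]
result = dedupe_latest(fixture)
assert result == [{"id": 1, "ts": 5}, {"id": 2, "ts": 2}]
print("transform test passed")
```

Running such fixtures in CI before each deployment catches regressions long before a full pipeline run would.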

Posted 1 week ago

Apply

4.0 years

10 - 25 Lacs

Faridabad, Haryana, India

On-site

Role Overview
We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment. The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.

Key Responsibilities
• Design and develop scalable real-time data streaming solutions using Apache Kafka and Python.
• Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
• Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
• Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
• Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
• Mentor junior engineers, perform code reviews, and promote engineering best practices.
• Stay current with evolving technologies in cloud, big data, and healthcare data standards.
• Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).

Required Skills & Qualifications
• 4+ years of hands-on experience in data engineering roles.
• Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, and Schema Registry).
• Proficient in Python for data processing and automation.
• Experience with Azure Databricks (or readiness to ramp up quickly).
• Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
• Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
• Familiarity with containerization tools like Docker and orchestration using Kubernetes.
• Exposure to CI/CD pipelines for data applications.
• Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
• Excellent problem-solving abilities and a proactive mindset.
• Strong communication and interpersonal skills for work in cross-functional teams.

Skills: Microsoft Azure, Data Engineering, Python, Apache Kafka
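The consume-transform-produce loop at the heart of the Kafka work described above can be sketched with in-memory queues standing in for topics. A real implementation would use a Kafka client library's consumers and producers; the heart-rate fields and the 120 bpm alert threshold are invented for illustration.

```python
from queue import Queue

# In-memory stand-ins for a source and sink Kafka topic.
source_topic, sink_topic = Queue(), Queue()
for msg in ({"patient_id": "p1", "hr": 61}, {"patient_id": "p2", "hr": 188}):
    source_topic.put(msg)

def process(msg):
    # Enrich each reading with an alert flag before publishing downstream.
    return {**msg, "alert": msg["hr"] > 120}

# The streaming loop: consume, transform, produce.
while not source_topic.empty():
    sink_topic.put(process(source_topic.get()))

out = [sink_topic.get() for _ in range(2)]
print(out)
```

In production the same shape appears as a poll loop with offset commits, so each message is processed at least once even across consumer restarts.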

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Description
Role Purpose
The purpose of this role is to prepare test cases and perform testing of the product/platform/solution to be deployed at the client end, ensuring it meets 100% of quality assurance parameters.

Responsibilities
• Understand the test requirements and test case design of the product
• Author test plans with appropriate knowledge of business requirements and the corresponding testable requirements
• Implement Wipro's way of testing using model-based testing to achieve efficient test generation
• Ensure test cases are peer reviewed to reduce rework
• Work with the development team to identify and capture test cases, and ensure versioning
• Set the criteria, parameters, and scope/out-of-scope of testing, and participate in UAT (User Acceptance Testing)
• Automate the test life cycle process at the appropriate stages through VB macros, scheduling, GUI automation, etc.
• Design and execute the automation framework and reporting
• Develop and automate tests for software validation by setting up test environments, designing test plans, and developing and executing test cases/scenarios/usage cases
• Ensure test defects are raised per the norms defined for the project/program/account, with clear descriptions and replication patterns
• Detect bugs, prepare defect reports, and report test progress
• No instances of rejection/slippage of delivered work items; all items within Wipro/customer SLAs and norms
• Design and release the test status dashboard on time at the end of every test execution cycle to stakeholders
• Provide feedback on usability and serviceability, trace results to quality risk, and report to concerned stakeholders

Status Reporting and Customer Focus (ongoing, with respect to testing and its execution)
• Ensure good-quality customer interaction: e-mail content, fault report tracking, voice calls, business etiquette, etc.
• On-time deliveries: WSRs, test execution reports, and relevant dashboard updates in the test management repository
• Accurate effort updates in eCube, TMS, and other project trackers
• Timely response to customer requests, with no instances of internal or external complaints

Performance Parameters
1. Understanding the test requirements and test case design of the product. Measure: error-free testing solutions, minimum process exceptions, 100% SLA compliance, number of automations done using VB/macros.
2. Execute test cases and reporting. Measure: testing efficiency and quality, on-time delivery, troubleshooting of queries within TAT, CSAT score.

Mandatory Skills: Databricks Data Quality
Experience: 5-8 years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 week ago

Apply

9.0 - 14.0 years

0 - 3 Lacs

Bengaluru

Remote

Job Description:
As a GCP Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Google Cloud Platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Databricks, Python, SQL, PySpark/Scala, and Informatica will be essential for the following key responsibilities:

Key Responsibilities
• Designing and developing data pipelines: design and implement scalable, efficient pipelines using GCP-native services (e.g., Cloud Composer, Dataflow, BigQuery) and tools like Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading (ETL/ELT).
• Data modeling and database design: develop data models and schema designs to support efficient data storage and analytics using BigQuery, Cloud Storage, or other GCP-compatible storage solutions.
• Data integration and orchestration: orchestrate and schedule complex data workflows using Cloud Composer (Apache Airflow) or similar orchestration tools; manage end-to-end data integration across cloud and on-premises systems.
• Data quality and governance: implement data quality checks, validation rules, and governance processes to ensure data accuracy, integrity, and compliance with organizational standards and external regulations.
• Performance optimization: optimize pipelines and queries to enhance performance and reduce processing time, including tuning Spark jobs and SQL queries and leveraging caching mechanisms or parallel processing in GCP.
• Monitoring and troubleshooting: monitor data pipeline performance using the GCP operations suite (formerly Stackdriver) or other monitoring tools; identify bottlenecks and troubleshoot ingestion, transformation, or loading issues.
• Documentation and collaboration: maintain clear, comprehensive documentation for data flows, ETL logic, and pipeline configurations; collaborate closely with data scientists, business analysts, and product owners to understand requirements and deliver data engineering solutions.

Skills and Qualifications
• 5+ years of experience in a Data Engineer role with exposure to large-scale data processing.
• Strong hands-on experience with Google Cloud Platform (GCP), particularly services like BigQuery, Cloud Storage, Dataflow, and Cloud Composer.
• Proficient in Python and/or Scala, with a strong grasp of PySpark.
• Experience working with Databricks in a cloud environment.
• Solid experience building and maintaining big data pipelines, architectures, and data sets.
• Strong knowledge of Informatica for ETL/ELT processes.
• Proven track record of manipulating, processing, and extracting value from large-scale, unstructured datasets.
• Working knowledge of stream processing and scalable data stores (e.g., Kafka, Pub/Sub, BigQuery).
• Solid understanding of data modeling concepts and best practices in both OLTP and OLAP systems.
• Familiarity with data quality frameworks, governance policies, and compliance standards.
• Skilled in performance tuning, job optimization, and cost-efficient cloud architecture design.
• Excellent communication and collaboration skills for cross-functional and client-facing roles.
• Bachelor's degree in Computer Science, Information Systems, or a related field (Mathematics, Engineering, etc.).
• Bonus: experience with distributed computing frameworks like Hadoop and Spark.
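What Cloud Composer (Apache Airflow) contributes, conceptually, is running tasks in dependency order. The sketch below mimics a three-task ingest/transform/load DAG in plain Python; a real DAG would use Airflow's `DAG` and operator classes, and the task names here are illustrative.

```python
# A toy DAG runner: execute each task only after all of its
# dependencies have completed, as an Airflow scheduler would.

log = []
tasks = {
    "ingest":    {"deps": [],            "run": lambda: log.append("ingest")},
    "transform": {"deps": ["ingest"],    "run": lambda: log.append("transform")},
    "load":      {"deps": ["transform"], "run": lambda: log.append("load")},
}

def run_dag(tasks):
    done = set()
    while len(done) < len(tasks):
        for name, t in tasks.items():
            if name not in done and all(d in done for d in t["deps"]):
                t["run"]()
                done.add(name)

run_dag(tasks)
print(log)  # ['ingest', 'transform', 'load']
```

Expressing the pipeline as explicit dependencies, rather than a fixed script, is what lets the orchestrator retry, backfill, and parallelize independent branches.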

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Us:
Celebal Technologies is a leading solutions and services company in the fields of Data Science, Big Data, Enterprise Cloud, and Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.

Job Summary:
We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities
• Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming.
• Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
• Implement efficient ingestion using Databricks Autoloader for high-throughput data loads.
• Work with large volumes of structured and unstructured data, ensuring high availability and performance.
• Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization.
• Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
• Establish best practices for code versioning, deployment automation, and data governance.

Required Technical Skills:
• Strong expertise in Azure Databricks and Spark Structured Streaming, including processing and output modes (append, update, complete) and checkpointing and state management
• Experience with Kafka integration for real-time data pipelines
• Deep understanding of the Medallion Architecture
• Proficiency with Databricks Autoloader and schema evolution
• Deep understanding of Unity Catalog and foreign catalogs
• Strong knowledge of Spark SQL, Delta Lake, and DataFrames
• Expertise in performance tuning (query optimization, cluster configuration, caching strategies)
• Solid data management strategies, governance, and access management
• Strong data modeling and data warehousing concepts, and Databricks as a platform
• Solid understanding of window functions

Proven experience in:
• Merge/upsert logic
• Implementing SCD Type 1 and Type 2
• Handling CDC (Change Data Capture) scenarios
• Domain expertise in at least one of retail, telecom, or energy
• Real-time use case execution
• Data modeling
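The SCD Type 2 requirement above boils down to: when a tracked attribute changes, expire the current dimension row and insert a new current version, preserving history. Sketched here in plain Python with invented columns; on Databricks this is typically expressed as a Delta Lake `MERGE INTO` with matched/not-matched clauses.

```python
# SCD Type 2 upsert over an in-memory dimension table. Columns
# (key, value, valid_from, valid_to, is_current) are illustrative.

def scd2_merge(dim, updates, ts):
    """Close out changed rows and insert new current versions."""
    for u in updates:
        current = next((r for r in dim if r["key"] == u["key"] and r["is_current"]), None)
        if current is None:
            # Brand-new key: insert as the current version.
            dim.append({**u, "valid_from": ts, "valid_to": None, "is_current": True})
        elif current["value"] != u["value"]:
            current["valid_to"] = ts           # expire the old version
            current["is_current"] = False
            dim.append({**u, "valid_from": ts, "valid_to": None, "is_current": True})
    return dim

dim = [{"key": "c1", "value": "Mumbai", "valid_from": 1, "valid_to": None, "is_current": True}]
scd2_merge(dim, [{"key": "c1", "value": "Pune"}, {"key": "c2", "value": "Delhi"}], ts=2)
print(len(dim))  # 3: expired c1 row, new current c1 row, new c2 row
```

SCD Type 1 is the simpler variant: overwrite `value` in place and keep no history row at all.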

Posted 1 week ago

Apply

10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

We have been retained by a global client with an AI/ML platform designed for data science teams to build, deploy, and manage AI solutions. The client is looking for someone who can convey passion for new technologies and the possibilities of Data and Advanced Analytics to enterprise customers across a range of industries, backed by an extensive network in India. The role involves engaging prospects and customers about their related initiatives to help them develop a more efficient approach leveraging the client's platform. A strong sales track record in India is preferred.

Enterprise Sales Leadership: we are looking for someone who has led large, consultative sales cycles in AI/ML or cloud platforms, ideally with experience selling to CIOs, CTOs, and Chief Data Officers in BFSI and Manufacturing.

C-Suite Fluency: the ideal candidate should be able to speak the language of business outcomes, not just tech. Think ROI on AI investments, operational efficiency, regulatory compliance, and digital transformation narratives.

How you'll make an impact:
• Meet or exceed sales revenue quota and renewal goals.
• Work extensively with channel partners and RSIs to sell with and through them and drive customer success.
• Build and execute a territory sales plan for the assigned sales territory in India.
• Drive customer engagement and build a joint pipeline of sales opportunities.
• Ensure the development of joint sales and technical skills within the partner ecosystem to drive demand and customer success.
• Oversee the development and delivery of joint marketing campaigns and field engagement with partners and the company's field teams to generate pipeline.
• Align internal stakeholders across Sales, Marketing, Services, Customer Success, and Product.
• Build executive relationships with the leadership of key customers and facilitate customer engagement.
• Collaborate with cross-functional company leadership to scale the ecosystem of partnerships.
• Represent the company at industry events in a manner that reflects and upholds its core values.

What you'll need to be successful:
• A strong sales track record in India.
• Experience successfully building and scaling a channel reseller program and network in the region that contributes significantly to sales growth, including extensive experience working with local partners.
• Domain expertise in modern AI/ML and cloud computing technologies; fluency in what's relevant to the C-suite today.
• Connections within our target technology, system integrator, and consulting partners, including Snowflake, Databricks, AWS, Google, Microsoft, Accenture, Deloitte, PwC, KPMG, Capgemini, DXC, Tech Mahindra, Wipro, and others; know their businesses and the people who drive them.
• 10+ years of software sales within the Data/AI industry, technology and cloud computing, SI, and consulting companies in India.
• A collaborative, influential leadership style that drives results; able to align stakeholders and bring people along on the journey to achieve the desired objectives and results for the region.

Interested to know more? Please share your number or CV here or at swati.singh@gateway-search.com for a detailed discussion.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Sr. AWS Data Engineer Years of experience: 5-10 years (with minimum 5 years of relevant experience) Work mode: WFO- Chennai (mandate) Type: Permanent Key skills: Python,SQL,Pyspark, AWS, Databricks, SQL, Data Modelling Essential Skills / Experience: 4 to 6 years of professional experience in Data Engineering or a related field. Strong programming experience with Python and experience using Python for data wrangling, pipeline automation, and scripting. Deep expertise in writing complex and optimized SQL queries on large-scale datasets. Solid hands-on experience with PySpark and distributed data processing frameworks. Expertise working with Databricks for developing and orchestrating data pipelines. Experience with AWS cloud services such as S3, Glue, EMR, Athena, Redshift, and Lambda. Practical understanding of ETL/ELT development patterns and data modeling principles (Star/Snowflake schemas). Experience with job orchestration tools like Airflow, Databricks Jobs, or AWS Step Functions. Understanding of data lake, lakehouse, and data warehouse architectures. Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions). Strong troubleshooting and performance optimization skills in large-scale data processing environments. Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams. Desirable Skills / Experience: AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional). Exposure to data observability, monitoring, and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch). Experience working in healthcare, life sciences, finance, or another regulated industry. Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.). Knowledge of modern data architectures (Data Mesh, Data Fabric). Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming. 
Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
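The ETL/ELT patterns and star-schema modeling listed above can be sketched in a few lines. A minimal, hypothetical illustration in plain Python (a real pipeline would be PySpark on Databricks, and every table and column name here is invented):

```python
# Minimal star-schema enrichment step: join fact rows to a dimension
# table by surrogate key, then aggregate - the same shape a PySpark
# or SQL join + GROUP BY would take.

# Hypothetical dimension table keyed by product_id.
dim_product = {
    1: {"name": "Widget", "category": "Hardware"},
    2: {"name": "Gizmo", "category": "Electronics"},
}

# Hypothetical fact rows as they might land from an ingestion job.
fact_sales = [
    {"product_id": 1, "qty": 3, "amount": 30.0},
    {"product_id": 2, "qty": 1, "amount": 99.0},
    {"product_id": 1, "qty": 2, "amount": 20.0},
]

def enrich(facts, dim):
    """Denormalize fact rows with dimension attributes (a star-schema join)."""
    for row in facts:
        attrs = dim.get(row["product_id"], {"name": "UNKNOWN", "category": "UNKNOWN"})
        yield {**row, **attrs}

def revenue_by_category(facts, dim):
    """Aggregate enriched rows, as GROUP BY category would in SQL."""
    totals = {}
    for row in enrich(facts, dim):
        totals[row["category"]] = totals.get(row["category"], 0.0) + row["amount"]
    return totals

print(revenue_by_category(fact_sales, dim_product))
# {'Hardware': 50.0, 'Electronics': 99.0}
```

In Databricks the same step would typically be a `fact.join(dim, "product_id").groupBy("category").sum("amount")` over Delta tables; the sketch only shows the shape of the transformation.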

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Why Join 7-Eleven Global Solution Center? When you join us, you'll embrace ownership as teams within specific product areas take responsibility for end-to-end solution delivery, supporting local teams and integrating new digital assets. Challenge yourself by contributing to products deployed across our extensive network of convenience stores, processing over a billion transactions annually. Build solutions for scale, addressing the diverse needs of our 84,000+ stores in 19 countries. Experience growth through cross-functional learning, encouraged and applauded at 7-Eleven GSC. With our size, stability, and resources, you can navigate a rewarding career. Embody leadership and service as 7-Eleven GSC remains dedicated to meeting the needs of customers and communities.

Why We Exist, Our Purpose and Our Transformation? 7-Eleven is dedicated to being a customer-centric, digitally empowered organization that seamlessly integrates our physical stores with digital offerings. Our goal is to redefine convenience by consistently providing top-notch customer experiences and solutions in a rapidly evolving consumer landscape. Anticipating customer preferences, we create and implement platforms that empower customers to shop, pay, and access products and services according to their preferences. To achieve success, we are driving a cultural shift anchored in leadership principles, supported by the realignment of organizational resources and processes.

At 7-Eleven we are guided by our Leadership Principles. Each principle has a defined set of behaviours which help guide the 7-Eleven GSC team to Serve Customers and Support Stores.
Be Customer Obsessed
Be Courageous with Your Point of View
Challenge the Status Quo
Act Like an Entrepreneur
Have an "It Can Be Done" Attitude
Do the Right Thing
Be Accountable

About This Opportunity: We are seeking a highly skilled Senior AI/ML Engineer to design, implement and deploy AI/ML solutions that drive innovation and efficiency.
The ideal candidate will have extensive experience in LangChain, NLP, RAG-based systems, Prompt Engineering, Agentic Systems and cloud platforms (Azure, AWS), and be adept at building AI-driven applications.

Responsibilities:
Design and implement AI-driven solutions using advanced frameworks and technologies, ensuring scalability and efficiency
Develop and optimize LangChain (agents, chains, memories, parsers, document loaders), Gen AI and NLP models for specific use cases
Quickly experiment with different machine learning models for specific use cases
Strong problem-solving capabilities and ability to quickly propose feasible solutions and effectively communicate strategy and risk-mitigation approaches to leadership

Required Qualifications:
3-5 years of experience in AI/ML engineering, with exposure to both classical machine learning methods and language model-based applications
Must have experience with Azure cloud and Databricks setup
Proficiency in Python and machine learning frameworks like TensorFlow, PyTorch, scikit-learn
Strong understanding of machine learning algorithms, including supervised and unsupervised learning, reinforcement learning, and deep learning
Strong expertise in Generative AI, NLP, and conversational AI technologies
Experience in building and deploying AI-powered applications at scale
Expertise in working with structured and unstructured data, including data cleaning and feature engineering, with data stores like vector, relational and NoSQL databases and data lakes accessed through APIs
Model Evaluation and Metrics: proficiency in evaluating both classical ML models and LLMs using relevant metrics
Excellent written and verbal communication skills
Eagerness to explore and implement the latest advancements in LLMs and ML, integrating them with existing solutions and enhancing their capabilities
Ability to understand business requirements and translate them into technical requirements
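As a rough illustration of the RAG pattern named above: retrieve the documents most relevant to a query, then assemble them into a prompt for generation. This stdlib-only sketch uses bag-of-words cosine similarity as a stand-in for a real embedding model and vector store; all documents and function names are invented for illustration, and LangChain's actual APIs differ.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Toy bag-of-words 'embedding'; a real system calls an embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query (the 'R' in RAG)."""
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble retrieved context plus the question for the generator."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

docs = [
    "Databricks clusters run Apache Spark workloads.",
    "7-Eleven operates convenience stores worldwide.",
    "Azure provides managed cloud services.",
]
prompt = build_prompt("What runs Spark workloads?", docs)
print(prompt)
```

In a production system the retrieval step would hit a vector database and the prompt would go to an LLM; the sketch only shows the retrieve-then-assemble flow.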
Educational Background: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Familiarity with code versioning tools - Git (GitLab). Exposure to the retail industry and experience with e-commerce applications.

7-Eleven Global Solution Center is an Equal Opportunity Employer committed to diversity in the workplace. Our strategy focuses on three core pillars - workplace culture, diverse talent and how we show up in the communities we serve. As the recognized leader in convenience, the 7-Eleven family of brands embraces diversity, equity and inclusion (DE+I). It's not only the right thing to do for customers, Franchisees and employees - it's a business imperative.

Privileges & Perquisites: 7-Eleven Global Solution Center offers a comprehensive benefits plan tailored to meet the needs and improve the overall experience of our employees, aiding in the management of both their professional and personal aspects.
Work-Life Balance: Encouraging employees to unwind, recharge, and find balance, we offer flexible and hybrid work schedules along with diverse leave options. Supplementary allowances and compensatory days off are provided for specific work demands.
Well-Being & Family Protection: Comprehensive medical coverage for spouses, children, and parents/in-laws, with voluntary top-up plans, OPD coverage, day care services, and access to health coaches. Additionally, an Employee Assistance Program with free, unbiased and confidential expert consultations for personal and professional issues.
Wheels and Meals: Free transportation and cafeteria facilities with diverse menu options including breakfast, lunch, snacks, and beverages, with customizable and health-conscious choices.
Certification & Training Program: Sponsored training for specialized certifications. Investment in employee development through labs and learning platforms.
Hassle-free Relocation: Support and reimbursement for newly hired employees relocating to Bangalore, India.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Finance
Job Family Group: Business Support Group

Job Description:
About us: At bp, we're reimagining energy for people and our planet. With operations working across almost every part of the energy system, we're leading the way in reducing carbon emissions and developing more sustainable methods for solving the energy challenge. We're a team with multi-layered strengths of engineers, scientists, traders and business professionals determined to find answers to problems. And we know we can't do it alone. We're looking for people who share our passion for reinvention, to bring fresh opinions, ambition, and to challenge our thinking in our goal to achieve net zero! We believe our portfolio of businesses and investments in growth and transformation will result in a company with the scale, brand, capabilities, talent, and values to succeed as the digital revolution transforms our society, our industry and our planet.

Key Accountabilities
Data Quality/Modelling/Design thinking:
Drawing on SAP MDG/ECC experience, investigates and performs root cause analysis for assigned use cases. Also works with the Azure data lake (via Databricks) using SQL/Python.
Identifies and builds the data model (conceptual and physical) needed to provide an automated mechanism for monitoring ongoing DQ issues. Multiple workshops may also be needed to work through various options and identify the one that is most efficient and effective.
Works with the business (Data Owners/Data Stewards) to profile data, exposing patterns that indicate data quality issues. Also identifies the impact on specific CDEs deemed important for each individual business.
Identifies the financial impact of data quality issues, as well as the business benefit (quantitative/qualitative) of remediation, and leads implementation timelines.
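The profiling accountability above (exposing patterns that indicate data quality issues) amounts to basic completeness and uniqueness checks over a dataset. A stdlib-only sketch; in this role it would be SQL/Python in Databricks against the Azure data lake, and the SAP-style field names below are invented:

```python
def profile(rows, key_field):
    """Per-column null rate plus duplicate-key count - the basic DQ
    signals (completeness and uniqueness) a profiling pass reports."""
    n = len(rows)
    columns = {c for row in rows for c in row}
    null_rate = {
        c: sum(1 for row in rows if row.get(c) in (None, "")) / n
        for c in columns
    }
    seen, dupes = set(), 0
    for row in rows:
        k = row.get(key_field)
        dupes += k in seen
        seen.add(k)
    return {"rows": n, "null_rate": null_rate, "duplicate_keys": dupes}

# Hypothetical master-data extract with deliberate issues.
records = [
    {"material_id": "M1", "plant": "P01", "uom": "EA"},
    {"material_id": "M2", "plant": "",    "uom": "EA"},
    {"material_id": "M1", "plant": "P02", "uom": None},  # duplicate key
]
report = profile(records, "material_id")
print(report)
```

The same checks map directly onto Spark SQL (`COUNT(*) - COUNT(col)` for completeness, `GROUP BY key HAVING COUNT(*) > 1` for uniqueness), which is how they would run at data-lake scale.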
Schedules regular working groups with businesses that have identified DQ issues, and ensures progress on RCA/remediation or on presenting in DGFs.
Identifies business DQ rules, on the basis of which critical metrics/measures are stood up that feed into dashboarding/workflows for BAU monitoring. Red flags are raised and investigated.
Understanding of the Data Quality value chain is needed, starting with Critical Data Element concepts, Data Quality Issues, and Data Quality metrics/measures. Also has experience owning and completing Data Quality Issue assessments to aid improvements to operational processes and BAU initiatives.
Highlights risks/hidden DQ issues to the Lead/Manager for further guidance/escalation.
Communication skills are significant in this role as it is outward-facing and the focus has to be on clearly articulating messages.

Dashboarding & Workflow:
Builds and maintains effective analytics and partner concern mechanisms which detect poor data and help business lines drive resolution.
Supports crafting, building and deployment of data quality dashboards via Power BI.
Resolves critical issue paths and constructs workflows and alerts which advise process and data owners of unresolved data quality issues.
Collaborates with IT & analytics teams to drive innovation (AI, ML, cognitive science etc.).

DQ Improvement Plans:
Creates, embeds and drives business ownership of DQ improvement plans.
Works with business functions and projects to create data quality improvement plans.
Sets targets for data improvements/maturity.
Monitors and intervenes when sufficient progress is not being made.
Supports initiatives which drive data clean-up of the existing data landscape.

Project Delivery:
Oversees and advises Data Quality Analysts, and participates in the delivery of data quality activities including profiling, establishing conversion criteria and resolving technical and business DQ issues.
Owns and develops relevant data quality work products as part of the DAS data change methodology.
Ensures data quality aspects are delivered as part of Gold and Silver data-related change projects.
Supports the creation of cases with insight into the cost of poor data.

Essential Experience and Job Requirements:
11-15 total years of experience in the Oil & Gas or Financial Services/Banking industry within the Data Management space
Experience of working with data models/structures and investigating, designing and fine-tuning them
Experience of Data Quality Management, i.e. governance, DQI management (root cause analysis, remediation/solution identification), governance forums (papers production, quorum maintenance, minutes publication), CDE identification, and data lineage (identification of authoritative data sources) preferred; understanding of the required metrics/measures is needed as well
Experience of having worked with senior partners across multiple data domains/business areas, the CDO and Technology
Ability to operate in global teams within multiple time zones
Ability to operate in an evolving and changing setup, to identify priorities, and to operate independently without much direction

Desirable criteria
SAP MDG/SAP ECC experience (T-codes, table structures etc.)
Azure Data Lake/AWS/Databricks
Crafting dashboards & workflows (Power BI, QlikView or Tableau etc.)
Crafting analytics and insight in a DQ setting (Power BI/Power Query)
Profiling and analysis skills (SAP DI, Informatica or Collibra)
Persuading, influencing and communicating at senior management level
Certification in Data Management, Data Science, or Python/R desirable

Travel Requirement: No travel is expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.
Skills:
Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Primary Skills: Azure Databricks

A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized high-quality code deliverables, continual knowledge management and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Knowledge of more than one technology
Basics of Architecture and Design fundamentals
Knowledge of Testing tools
Knowledge of agile methodologies
Understanding of Project life cycle activities on development and maintenance projects
Understanding of one or more Estimation methodologies; Knowledge of Quality processes
Basics of business domain to understand the business requirements
Analytical abilities, Strong Technical Skills, Good communication skills
Good understanding of the technology and domain
Awareness of latest technologies and trends
Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods
Excellent problem solving, analytical and debugging skills

Posted 1 week ago

Apply

7.0 - 12.0 years

0 Lacs

Telangana

On-site

About Chubb: Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.

About Chubb India: At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

Job Title: Lead Data Engineer
Location: Hyderabad
Job Type: Full-Time
Position: Lead Data Engineer

Job Overview: This is a technical, hands-on role in the implementation of data solutions and related downstream systems.
The role involves working with a Data Management team towards establishing a data reporting pipeline and automations, using a combination of Python scripts, Databricks and APIs, along with the ability to model a smart solution.

Primary Responsibilities:
Collaborate with the Architect, Product Manager, and other development team members to build the data pipeline
Advise the product manager on the right use of technological components
Work with stakeholders, including management and domain leads, to address data-related technical issues and support data infrastructure needs
Develop and program database management, which may include ETL processes, data modeling, and infrastructure, including using and developing APIs, front-end applications and automated data pipelines
Ensure enterprise data policies, best practices, standards, and processes are followed
Communicate, coordinate, and collaborate effectively with business, IT architecture, and data teams across multi-functional areas to complete deliverables

Technical Skills / Experience:
7 to 12 years of proven working experience in building data pipelines using scripts and ETL tools, and doing data integration and data migration
Primary skill: Strong hands-on development skills in Python, Databricks, and API knowledge
Secondary skill: Working knowledge of database systems such as Azure SQL DB, Azure Synapse and Snowflake
Secondary skill: Experience working in a DevSecOps environment
Good to have: Knowledge of data governance tools and integration with enterprise data platforms
Good to have: Experience with data management components such as IDMC/IICS is a plus

Other Skills / Experience:
Experience being part of high-performance agile teams in a fast-paced environment
Strong team emphasis and relationship-building skills; partners well with business and other IT/Data areas
Excellent communication and negotiation skills
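A rough sketch of the kind of Python pipeline step this role describes: normalizing an API payload into flat, load-ready rows with a basic completeness gate. The endpoint shape and field names are entirely hypothetical; in practice this would run inside a Databricks job against the actual reporting API.

```python
def normalize(payload):
    """Flatten a nested API response into load-ready rows, dropping
    records that fail a basic completeness check."""
    rows = []
    for item in payload.get("results", []):
        row = {
            "claim_id": item.get("id"),
            "amount": item.get("totals", {}).get("amount"),
            "currency": item.get("totals", {}).get("currency", "USD"),
        }
        # Completeness gate: both the key and the measure must be present.
        if row["claim_id"] is not None and row["amount"] is not None:
            rows.append(row)
    return rows

# Hypothetical API response with two deliberately bad records.
payload = {"results": [
    {"id": "C-1", "totals": {"amount": 120.5, "currency": "INR"}},
    {"id": "C-2", "totals": {}},            # rejected: no amount
    {"id": None, "totals": {"amount": 10}}, # rejected: no id
]}
rows = normalize(payload)
print(rows)  # one valid row survives
```

Keeping the transform a pure function of the payload (no I/O inside) is what makes steps like this easy to unit-test before they are wired into the scheduled pipeline.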
Up to date with the latest developments around AI, development frameworks and technologies, and able to actively participate in innovation initiatives
Ability to organize, plan, and implement work assignments and juggle competing demands

Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence
A Great Place to Work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026
Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results
Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter
Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment

Employee Benefits: Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment.
Our benefits include:
Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances
Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling like Education Reimbursement Programs, Certification programs and access to global learning programs
Health and Welfare Benefits: We care about our employees' well-being in and out of work and have benefits like Employee Assistance Program (EAP), Yearly Free Health campaigns and comprehensive Insurance benefits

Application Process: Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable).
Step 4: Final interaction with Chubb leadership.

Join Us: With you Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.

Apply Now: Chubb External Careers

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Telangana

On-site

About Chubb: Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.

About Chubb India: At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.
Role: ML Engineer (Associate / Senior)
Experience: 2-3 years (Associate), 4-5 years (Senior)
Mandatory skills: Python/MLOps/Docker and Kubernetes/FastAPI or Flask/CICD/Jenkins/Spark/SQL/RDB/Cosmos/Kafka/ADLS/API/Databricks
Other skills: Azure/LLMOps/ADF/ETL
Location: Bangalore
Notice period: less than 60 days

Job Description: We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges.
Collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions
Experience of having deployed ML models to production
Create high-performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders
Integrate machine learning models seamlessly into existing production systems
Continuously monitor and evaluate model performance, and retrain the models automatically or periodically
Streamline existing ML pipelines to increase throughput
Identify and address security vulnerabilities in existing applications proactively
Design, develop, and implement machine learning models, preferably for insurance-related applications
Well versed with the Azure ecosystem
Knowledge of NLP and Generative AI techniques; relevant experience will be a plus
Knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch) will be a plus
Stay up to date on the latest advancements in machine learning and contribute to ongoing innovation within the team

Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
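The batch inferencing pipeline mentioned in the responsibilities can be sketched roughly as: score records in fixed-size chunks so memory stays bounded and throughput is predictable. A minimal stdlib-only illustration with a stub model (invented for the example); a real deployment would serve this behind FastAPI/Flask and read from Kafka or ADLS.

```python
def chunked(items, size):
    """Yield fixed-size batches, the unit of work for batch scoring."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def stub_model(batch):
    """Stand-in for a trained model: 'predict' 1 when feature x > 0."""
    return [1 if row["x"] > 0 else 0 for row in batch]

def batch_infer(records, model, batch_size=2):
    """Run the model over chunks and collect predictions in input order."""
    preds = []
    for batch in chunked(records, batch_size):
        preds.extend(model(batch))
    return preds

data = [{"x": 0.5}, {"x": -1.0}, {"x": 2.0}, {"x": 0.0}, {"x": 3.3}]
print(batch_infer(data, stub_model))  # [1, 0, 1, 0, 1]
```

The same chunking shape is what Spark gives you for free via partitions; the point of the sketch is that the model is called per batch, not per record.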
Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence
A Great Place to Work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026
Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results
Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter
Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment

Employee Benefits: Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment.

Our benefits include:
Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances
Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling like Education Reimbursement Programs, Certification programs and access to global learning programs
Health and Welfare Benefits: We care about our employees' well-being in and out of work and have benefits like Employee Assistance Program (EAP), Yearly Free Health campaigns and comprehensive Insurance benefits

Application Process: Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable).
Step 4: Final interaction with Chubb leadership.

Join Us: With you Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.

Apply Now: Chubb External Careers

Posted 1 week ago

Apply

4.0 years

7 - 10 Lacs

Hyderābād

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: The primary purpose of this role is to translate business requirements and functional specifications into logical program designs, and to deliver dashboards, schemas, data pipelines, and software solutions. Design, develop, and maintain scalable data pipelines to process and transform large volumes of structured and unstructured data. Build and maintain ETL/ELT workflows for data ingestion from various sources (APIs, databases, files, cloud). Ensure data quality, integrity, and governance across the pipeline. This includes developing, configuring, or modifying data components within various complex business and/or enterprise application solutions in various computing environments.

Responsibilities: We are currently seeking a Sr. Data Engineer who can perform data integration to build custom data pipelines, and manage data transformation, performance optimization, automation, data governance and data quality.

Mandatory skill sets ('must have' knowledge, skills and experiences):
GCP: Dataproc (PySpark, Spark SQL), Dataflow (Apache Beam), Cloud Composer, BigQuery, API Management

Preferred skill sets ('good to have' knowledge, skills and experiences):
Experience in building data pipelines
Experience with software lifecycle tools for CI/CD and version control systems such as Git
Familiarity with Agile methodologies is a plus
Years of experience required:
Experience: 4 to 12 years
NP: Immediate to 30 days
Location: Hyderabad
3 days/week work from client office

Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above)
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Data Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
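The Dataflow/Apache Beam skill named above comes down to expressing a pipeline as a chain of transforms (map, filter, group, combine) over a collection. A stdlib-only sketch of that shape; Beam's real API (`beam.Filter`, `beam.GroupByKey`, `beam.CombinePerKey`) differs, and the event data is invented for illustration.

```python
from itertools import groupby
from operator import itemgetter

# An in-memory pipeline mirroring the Filter -> GroupByKey ->
# CombinePerKey shape of a Beam/Dataflow job over (key, value) events.
events = [
    ("web", 3), ("app", 1), ("web", 2), ("app", 4), ("web", 1),
]

def run_pipeline(events):
    filtered = [e for e in events if e[1] > 1]           # ~ beam.Filter
    keyed = sorted(filtered, key=itemgetter(0))          # GroupByKey needs keyed input
    return {k: sum(v for _, v in g)                      # ~ beam.CombinePerKey(sum)
            for k, g in groupby(keyed, key=itemgetter(0))}

print(run_pipeline(events))  # {'app': 4, 'web': 5}
```

In Beam the same logic runs unchanged on a local runner or on Dataflow; this sketch only shows why pipelines are written as composable per-element transforms rather than loops over the whole dataset.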

Posted 1 week ago

Apply

10.0 years

1 - 10 Lacs

Hyderābād

On-site

If you are looking for a game-changing career, working for one of the world's leading financial institutions, you’ve come to the right place. As a Principal Software Engineer at JP Morgan Chase within the Consumer & Community Banking Technology Team, you provide expertise and engineering excellence as an integral part of an agile team to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. Leverage your advanced technical capabilities and collaborate with colleagues across the organization to drive best-in-class outcomes across various technologies to support one or more of the firm’s portfolios. Job responsibilities Creates complex and scalable coding frameworks using appropriate software design frameworks Develops secure and high-quality production code, and reviews and debugs code written by others Advises cross-functional teams on technological matters within your domain of expertise Serves as the function’s go-to subject matter expert Contributes to the development of technical methods in specialized fields in line with the latest product development methodologies Creates durable, reusable software frameworks that are leveraged across teams and functions Influences leaders and senior stakeholders across business, product, and technology teams Champions the firm’s culture of diversity, opportunity, inclusion, and respect Required qualifications, capabilities, and skills Formal training or certification on data management concepts and 10+ years of applied experience. Experience leading technologists to manage, anticipate, and solve complex technical items within your domain of expertise. 
Proven experience in designing and developing large-scale data pipelines for batch and stream processing Strong understanding of Data Warehousing, Data Lake, ETL processes, and Big Data technologies (e.g., Hadoop, Snowflake, Databricks, Apache Spark, PySpark, Airflow, Apache Kafka, Java, Open File & Table Formats, Git, CI/CD pipelines, etc.) Expertise with public cloud platforms (e.g., AWS, Azure, GCP) and modern data processing & engineering tools Excellent communication, presentation, and interpersonal skills Experience developing or leading large or cross-functional teams of technologists Demonstrated prior experience influencing across highly matrixed, complex organizations and delivering value at scale Experience leading complex projects supporting system design, testing, and operational stability Experience with hiring, developing, and recognizing talent Extensive practical cloud-native experience Expertise in Computer Science, Computer Engineering, Mathematics, or a related technical field Preferred qualifications, capabilities, and skills Experience working at code level and ability to be hands-on performing PoCs and code reviews Experience in Data Modeling (ability to design Conceptual, Logical, and Physical Models and ERDs, and proficiency in data modeling software like ERwin) Experience with Data Governance, Data Privacy & Subject Rights, Data Quality & Data Security practices Strong understanding of Data Validation / Data Quality Experience with supporting large-scale AI/ML data requirements Experience in data visualization & BI tools is a huge plus
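The batch-and-stream pipeline experience listed above centres on operations like windowed aggregation. A toy sketch of a tumbling-window count, the kind of computation engines such as Spark Structured Streaming or Kafka Streams run at scale; the event shape (epoch seconds, event key) is hypothetical:

```python
# Tumbling-window count over a finite event list. A real streaming engine
# does the same grouping incrementally, with watermarks for late data.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (epoch_seconds, key) events into fixed windows and count keys."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (5, "click"), (12, "view"), (14, "click")]
print(tumbling_window_counts(events, 10))
# windows: [0,10) -> click:2 ; [10,20) -> view:1, click:1
```

The same window key, `(window_start, key)`, is what a production job would group by before writing to a warehouse table.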

Posted 1 week ago

Apply

0 years

4 - 8 Lacs

Hyderābād

On-site

ABOUT FLUTTER ENTERTAINMENT Flutter Entertainment is the world’s largest sports betting and iGaming operator with 13.9 million Average Monthly Players worldwide and an annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME’s 100 Most Influential Companies under the 'Pioneers' category—a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game. Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed. FLUTTER ENTERTAINMENT INDIA Our Hyderabad office, located in one of India’s premier technology parks is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 900+ talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we’re dedicated to creating a brighter future for our customers, colleagues, and communities. ROLE PURPOSE: At Flutter, we are embarking on an ambitious global finance transformation programme throughout 2025, 2026 and 2027. The Technology Enablement and Automation Manager will be responsible for delivering elements of the ICFR pillar of the global finance transformation programme. 
The Technology Enablement and Automation Transformation Manager will report directly, or indirectly, to the Head of Technology Enablement and Automation Transformation. Flutter consists of two commercial divisions (FanDuel and International) and our central Flutter Functions: COO, Finance & Legal. Here in Flutter Functions we work with colleagues across all our divisions and regions to deliver something we call the Flutter Edge. It’s our competitive advantage, our ‘secret sauce’, which plays a key part in our ongoing success and powers our brands and divisions through Product, Tech, Expertise and Scale. In Flutter Finance we pride ourselves on providing global expertise to ensure Flutter is financially healthy, utilizing our Flutter Edge to turbo-charge our capabilities. KEY RESPONSIBILITIES Design, develop, launch and maintain custom technical solutions including workflow automations, reporting pipelines / dashboards and cloud systems integrations, focused on improving and enabling Flutter’s Internal Controls over Financial Reporting (ICFR) annual cycle Bring your technical know-how to continuously improve Finance and IT processes and controls (for example, balance sheet reconciliations, GRC tool enablement and analytical reviews). Prepare and maintain high-quality documentation related to your automation and reporting deliverables. Contribute to robust technical delivery processes for the ICFR Transformation Technology Enablement & Automation team. Collaborate closely with Internal Controls Transformation and Internal Controls Assurance teams, and with colleagues across Finance and IT (Group and Divisional teams), to ensure seamless delivery of the technical solutions, automations and reporting that you own. Contribute to regular status reporting to senior leaders, highlighting potential challenges and opportunities for improvement. TO EXCEL IN THIS ROLE, YOU WILL NEED TO HAVE Passion for technical solution delivery, and for learning new technologies. 
Strong technology architecture, design, development, deployment and maintenance skills. Demonstrable coding experience launching workflow automations and reporting solutions using SQL and Python (or equivalent programming languages) with measurable business impact Proficiency with databases, data pipelines, data cleansing and data visualization / business intelligence (including ETL) - using tools such as KNIME, Pentaho, Alteryx, Power Automate, Databricks, Tableau or PowerBI (or equivalent) Hands-on technical experience and confidence in implementing at least one of: System integrations - ideally across both on-premises and cloud-based applications, (including Application Integration Patterns and Microservices orchestration) Robotic process automation - such as Alteryx, UIPath, BluePrism (or equivalent) Low-code application development - such as Retool (or equivalent) Business process orchestration / business process management - such as Appian, Pega, Signavio, Camunda (or equivalent) Business process mining and continuous controls monitoring - such as Celonis, Soroco or Anecdotes (or equivalent) Ability to operate in a fast-paced environment and successfully deliver technical change. Strong communication skills, clearly articulating technical challenges and potential solutions. It will be advantageous, but not essential to have one or more of: Experience improving processes focused on reducing risk (e.g. ICFR / internal controls / audit / risk and compliance). Experience of betting, gaming or online entertainment businesses. Experience bringing Artificial Intelligence (AI) solutions to improve enterprise business processes. Knowledge of Oracle ERP (e.g. Oracle Fusion and Oracle Governance, Risk and Compliance modules). Knowledge of Governance, Risk and Compliance systems. BENEFITS WE OFFER Access to Learnerbly, Udemy , and a Self-Development Fund for upskilling. Career growth through Internal Mobility Programs . 
Comprehensive Health Insurance for you and dependents. Well-Being Fund and 24/7 Assistance Program for holistic wellness. Hybrid Model : 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals. Free Meals, Cab Allowance , and a Home Office Setup Allowance. Employer PF Contribution , gratuity, Personal Accident & Life Insurance. Sharesave Plan to purchase discounted company shares. Volunteering Leave and Team Events to build connections. Recognition through the Kudos Platform and Referral Rewards . WHY CHOOSE US: Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
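The balance-sheet reconciliation automation this role describes pairs SQL with Python. A minimal sketch using the stdlib sqlite3 module as a stand-in for the reporting database; the table and column names are hypothetical, not Flutter's schema:

```python
# Find entries present in ledger A but missing (or differing) in ledger B,
# the core of an automated reconciliation. sqlite3 stands in for the real
# finance database; "account"/"amount" are made-up column names.
import sqlite3

def unmatched_entries(ledger_a, ledger_b):
    """Return (account, amount) rows in ledger A with no exact match in B."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE a (account TEXT, amount REAL)")
    con.execute("CREATE TABLE b (account TEXT, amount REAL)")
    con.executemany("INSERT INTO a VALUES (?, ?)", ledger_a)
    con.executemany("INSERT INTO b VALUES (?, ?)", ledger_b)
    rows = con.execute(
        "SELECT account, amount FROM a EXCEPT SELECT account, amount FROM b"
    ).fetchall()
    con.close()
    return rows

print(unmatched_entries(
    [("1000-cash", 250.0), ("2000-ap", -90.0)],
    [("1000-cash", 250.0)],
))
```

In a real control, the unmatched rows would feed an exception report or a GRC-tool workflow rather than a print statement.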

Posted 1 week ago

Apply

2.0 years

3 - 6 Lacs

Hyderābād

On-site

Become our next FutureStarter Are you ready to make an impact? ZF is looking for talented individuals to join our team. As a FutureStarter, you’ll have the opportunity to shape the future of mobility. Join us and be part of something extraordinary! Technical Lead- AI/ML Expert Country/Region: IN Location: Hyderabad, TG, IN, 500032 Req ID 81032 | Hyderabad, India, ZF India Pvt. Ltd. Job Description About the team: AIML is used to create chatbots, virtual assistants, and other forms of artificial intelligence software. AIML is also used in research and development of natural language processing systems. What you can look forward to as an AI/ML expert Lead Development: Own end‑to‑end design, implementation, deployment and maintenance of both traditional ML and Generative AI solutions (e.g., fine‑tuning LLMs, RAG pipelines) Project Execution & Delivery: Translate business requirements into data‑driven and GenAI‑driven use cases; scope features, estimates, and timelines Technical Leadership & Mentorship: Mentor, review and coach junior/mid‑level engineers on best practices in ML, MLOps and GenAI Programming & Frameworks: Expert in Python (pandas, NumPy, scikit‑learn, PyTorch/TensorFlow) Cloud & MLOps: Deep experience with Azure Machine Learning (SDK, Pipelines, Model Registry, hosting GenAI endpoints) Proficient in Azure Databricks: Spark jobs, Delta Lake, MLflow for tracking both ML and GenAI experiments Data & GenAI Engineering: Strong background in building ETL/ELT pipelines, data modeling, orchestration (Azure Data Factory, Databricks Jobs) Experience with embedding stores, vector databases, prompt‑optimization, and cost/performance tuning for large GenAI models Your profile as a Technical Lead: Bachelor’s, Master’s, or Ph.D. in Computer Science, Engineering, Data Science, or a related field Minimum of 2 years of professional experience in AI/ML engineering, including at least 2 years of hands‑on Generative AI project delivery Track record of production deployments 
using Python, Azure ML, Databricks, and GenAI frameworks Hands‑on data engineering experience—designing and operating robust pipelines for both structured data and unstructured (text, embeddings) Preferred: Certifications in Azure AI/ML, Databricks, or Generative AI specialties Experience working in Agile/Scrum environments and collaborating with cross‑functional teams. Why you should choose ZF in India Innovative Environment : ZF is at the forefront of technological advancements, offering a dynamic and innovative work environment that encourages creativity and growth. Diverse and Inclusive Culture : ZF fosters a diverse and inclusive workplace where all employees are valued and respected, promoting a culture of collaboration and mutual support. Career Development: ZF is committed to the professional growth of its employees, offering extensive training programs, career development opportunities Global Presence : As a part of a global leader in driveline and chassis technology, ZF provides opportunities to work on international projects and collaborate with teams worldwide. Sustainability Focus: ZF is dedicated to sustainability and environmental responsibility, actively working towards creating eco-friendly solutions and reducing its carbon footprint. Employee Well-being : ZF prioritizes the well-being of its employees, providing comprehensive health and wellness programs, flexible work arrangements, and a supportive work-life balance. Be part of our ZF team as Technical Lead- AI/ML Expert and apply now! Contact Veerabrahmam Darukumalli What does DEI (Diversity, Equity, Inclusion) mean for ZF as a company? At ZF, we continuously strive to build and maintain a culture where inclusiveness is lived and diversity is valued. We actively seek ways to remove barriers so that all our employees can rise to their full potential. We aim to embed this vision in our legacy through how we operate and build our products as we shape the future of mobility. 
Find out how we work at ZF: Job Segment: R&D Engineer, Cloud, R&D, Computer Science, Engineer, Engineering, Technology, Research
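The RAG pipelines this role mentions hinge on a retrieval step: score stored documents against the query and hand the best matches to the LLM. A toy illustration using bag-of-words vectors and cosine similarity; production systems use learned embeddings and a vector database, and the documents here are made up:

```python
# Toy retrieval step of a RAG pipeline: bag-of-words vectors plus cosine
# similarity. Real systems embed with a neural model and query a vector DB.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    qv = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(qv, Counter(d.lower().split())))

docs = [
    "spark tunes big data jobs",
    "llm fine tuning guide",
    "kafka stream basics",
]
print(retrieve("how to fine tune an llm", docs))
```

Swapping `Counter` vectors for model embeddings (and `max` over a list for an index lookup) is exactly the cost/performance tuning surface the posting alludes to.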

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Worker Location: Hyderabad - Hybrid Job Title: Data Engineer – Data Engineering Job Description: Designs and develops complex software that processes, stores, and serves data for use by others. Designs and develops complex and large-scale data structures and pipelines to organize, collect, and standardize data to generate insights and address reporting needs. Writes complex ETL (Extract / Transform / Load) processes, designs database systems and develops tools for real-time and offline analytic processing. Ensures that data pipelines are scalable, repeatable and secure. Improves data consistency and integrity. Integrates data from a variety of sources, ensuring that they adhere to data quality and accessibility standards. Has knowledge of large-scale search applications and building high-volume data pipelines. Knowledge of Databricks, Unity Catalog, Event Ingestion, PySpark, customer analytics including churn, funnel, loyalty, Power BI (plus). Complexity & Problem Solving: - Learns routine assignments of limited scope and complexity. - Follows practices and procedures to solve standard or routine problems. Autonomy & Supervision: - Receives general instructions on routine work and detailed guidance from more senior members on all new tasks. - Work is typically reviewed in detail at frequent intervals for accuracy. Communication & Influence: - Builds stable internal working relationships. - Communicates and seeks guidance/feedback regularly from more senior members of the team. - Primarily interacts with supervisors, project leads, mentors, or other professionals in the same discipline. - Explains facts, policies, and practices related to discipline. Knowledge & Experience: - Typically requires a college degree (or equivalent) with up to one year of experience, but may not have any. - Has conceptual knowledge of theories, principles, and practices within discipline and industry. 
Nice to Have: DP-203 certification Databricks Data Engineer Associate Job Types: Full-time, Permanent Benefits: Health insurance Provident Fund Work Location: In person
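The customer analytics this posting mentions (churn, funnel, loyalty) typically start from a funnel computation: how many users reach each ordered stage. A hedged sketch in plain Python; the stage and user names are hypothetical:

```python
# Funnel analysis sketch: a user counts for a stage only if they also
# reached every earlier stage. Stage/user names are illustrative.
def funnel_counts(stages, user_events):
    """stages: ordered list; user_events: user -> set of stages reached."""
    counts = []
    for i, stage in enumerate(stages):
        required = set(stages[: i + 1])
        counts.append(sum(1 for ev in user_events.values() if required <= ev))
    return dict(zip(stages, counts))

events = {
    "u1": {"visit", "signup", "purchase"},
    "u2": {"visit", "signup"},
    "u3": {"visit"},
}
print(funnel_counts(["visit", "signup", "purchase"], events))
# {'visit': 3, 'signup': 2, 'purchase': 1}
```

At warehouse scale the same logic becomes a grouped aggregation in PySpark or SQL, with the falling counts between stages feeding a Power BI funnel chart.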

Posted 1 week ago

Apply

2.0 years

3 - 6 Lacs

Hyderābād

On-site

Become our next FutureStarter Are you ready to make an impact? ZF is looking for talented individuals to join our team. As a FutureStarter, you’ll have the opportunity to shape the future of mobility. Join us and be part of something extraordinary! Specialist- AI/ML Expert Country/Region: IN Location: Hyderabad, TG, IN, 500032 Req ID 81033 | Hyderabad, India, ZF India Pvt. Ltd. Job Description About the team: AIML is used to create chatbots, virtual assistants, and other forms of artificial intelligence software. AIML is also used in research and development of natural language processing systems. What you can look forward to as an AI/ML expert Lead Development: Own end‑to‑end design, implementation, deployment and maintenance of both traditional ML and Generative AI solutions (e.g., fine‑tuning LLMs, RAG pipelines) Project Execution & Delivery: Translate business requirements into data‑driven and GenAI‑driven use cases; scope features, estimates, and timelines Technical Leadership & Mentorship: Mentor, review and coach junior/mid‑level engineers on best practices in ML, MLOps and GenAI Programming & Frameworks: Expert in Python (pandas, NumPy, scikit‑learn, PyTorch/TensorFlow) Cloud & MLOps: Deep experience with Azure Machine Learning (SDK, Pipelines, Model Registry, hosting GenAI endpoints) Proficient in Azure Databricks: Spark jobs, Delta Lake, MLflow for tracking both ML and GenAI experiments Data & GenAI Engineering: Strong background in building ETL/ELT pipelines, data modeling, orchestration (Azure Data Factory, Databricks Jobs) Experience with embedding stores, vector databases, prompt‑optimization, and cost/performance tuning for large GenAI models Your profile as a Specialist: Bachelor’s, Master’s, or Ph.D. in Computer Science, Engineering, Data Science, or a related field Minimum of 2 years of professional experience in AI/ML engineering, including at least 2 years of hands‑on Generative AI project delivery Track record of production deployments using 
Python, Azure ML, Databricks, and GenAI frameworks Hands‑on data engineering experience—designing and operating robust pipelines for both structured data and unstructured (text, embeddings) Preferred: Certifications in Azure AI/ML, Databricks, or Generative AI specialties Experience working in Agile/Scrum environments and collaborating with cross‑functional teams. Why you should choose ZF in India Innovative Environment : ZF is at the forefront of technological advancements, offering a dynamic and innovative work environment that encourages creativity and growth. Diverse and Inclusive Culture : ZF fosters a diverse and inclusive workplace where all employees are valued and respected, promoting a culture of collaboration and mutual support. Career Development: ZF is committed to the professional growth of its employees, offering extensive training programs, career development opportunities Global Presence : As a part of a global leader in driveline and chassis technology, ZF provides opportunities to work on international projects and collaborate with teams worldwide. Sustainability Focus: ZF is dedicated to sustainability and environmental responsibility, actively working towards creating eco-friendly solutions and reducing its carbon footprint. Employee Well-being : ZF prioritizes the well-being of its employees, providing comprehensive health and wellness programs, flexible work arrangements, and a supportive work-life balance. Be part of our ZF team as Specialist- AI/ML Expert and apply now! Contact Veerabrahmam Darukumalli What does DEI (Diversity, Equity, Inclusion) mean for ZF as a company? At ZF, we continuously strive to build and maintain a culture where inclusiveness is lived and diversity is valued. We actively seek ways to remove barriers so that all our employees can rise to their full potential. We aim to embed this vision in our legacy through how we operate and build our products as we shape the future of mobility. 
Find out how we work at ZF: Job Segment: R&D Engineer, R&D, Cloud, Computer Science, Engineer, Engineering, Research, Technology

Posted 1 week ago

Apply

7.0 years

3 - 8 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Design and implement IaC (Infrastructure as Code) solutions using tools such as Terraform, CloudFormation, or Ansible. Manage provisioning, configuration, and maintenance of servers and containers on cloud platforms (AWS, Azure, GCP, etc.). Ensure infrastructure is scalable, secure, and cost-effective. Architect, build, and maintain automated CI/CD pipelines using Jenkins, GitHub Actions, or other tools. Manage infrastructure for data engineering teams – Databricks and Snowflake. Establish development standards, automate builds and tests, and ensure seamless code deployments. Evaluate, select, and integrate services and tools that fit the organization’s cloud strategy. Optimize cloud services for cost and efficiency. Monitor and maintain cloud environments for performance and availability. Set up and configure monitoring tools (Prometheus, Grafana etc.) to track system health, performance, and security. Implement and maintain robust logging and alerting strategies to minimize downtime. Collaborate with data/software engineering teams, data analysts, and technology leads to streamline delivery processes and resolve issues. 
Mentor other team members on standard tools, processes, automation, and general DevOps practice maturity Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor’s or master’s degree in computer science, Information Technology, or a related field. Equivalent work experience is also acceptable 7+ years of experience in DevOps, Site Reliability Engineering (SRE), or related roles Proven track record of managing and automating large-scale cloud infrastructure and architecture Experience in designing Cloud Infrastructure workflows Hands-on experience with Docker and container orchestration platforms like Kubernetes Demonstrated expertise with Terraform, CloudFormation, Ansible, or similar tools In-depth knowledge of Linux/UNIX environments Familiarity with tools like Prometheus, Grafana, Splunk Proficiency in at least one major cloud provider (AWS, Azure, GCP) Proven scripting skills (Bash, Python, PowerShell, Go, etc.) 
and familiarity with Git version control Preferred Qualifications: Certification(s) in DevOps, Cloud, or Security (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert) Familiarity with microservices architecture and how CI/CD pipelines integrate with microservices deployments Working knowledge of serverless computing (Azure Functions) Python and Shell Scripting Expert in automating Infrastructure provisioning and maintenance At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
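IaC tools such as Terraform, which this role calls for, implement a declarative reconcile loop: diff the desired state against the actual state and plan create/update/delete actions. A minimal sketch of that pattern in plain Python; the resource names are hypothetical:

```python
# Declarative reconcile pattern behind IaC tools: compare desired vs actual
# resource maps and emit a plan. Resource/attribute names are illustrative.
def plan(desired, actual):
    """Both args map resource name -> config dict; returns sorted actions."""
    actions = []
    for name, cfg in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != cfg:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return sorted(actions)

desired = {"vm-web": {"size": "m"}, "bucket-logs": {"region": "eu"}}
actual = {"vm-web": {"size": "s"}, "vm-old": {"size": "s"}}
print(plan(desired, actual))
# [('create', 'bucket-logs'), ('delete', 'vm-old'), ('update', 'vm-web')]
```

Terraform's `plan`/`apply` split follows the same shape: compute the diff first, review it, then execute, which is what makes the workflow idempotent and auditable.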

Posted 1 week ago

Apply

40.0 years

4 - 8 Lacs

Hyderābād

On-site

ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. ABOUT THE ROLE Role Description: The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data Architecture includes the activities required for data flow design, data modeling, physical data design, query performance optimization. The Data Modeler position is responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Modeler drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of Foundational Data. 
Roles & Responsibilities: Develop and maintain conceptual, logical, and physical data models to support business needs Contribute to and enforce data standards, governance policies, and best practices Design and manage metadata structures to enhance information retrieval and usability Maintain comprehensive documentation of the architecture, including principles, standards, and models Evaluate and recommend technologies and tools that best fit the solution requirements Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency Basic Qualifications and Experience: Doctorate / Master’s / Bachelor’s degree with 8-12 years of experience in Computer Science, IT or related field Functional Skills: Must-Have Skills Data Modeling: Proficiency in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs. Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality. Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), performance tuning on big data processing Implementing data testing and data quality strategies. Good-to-Have Skills: Experience with Graph technologies such as Stardog, Allegrograph, MarkLogic Professional Certifications (please mention if the certification is preferred or mandatory for the role): Certifications in Databricks are desired Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. 
Candidates must be willing and able to work during evening or night shifts, as required based on business requirements. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
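Logical data models of the kind this Data Modeler role produces are often prototyped in code before the physical design is committed. A sketch using dataclass entities plus a referential-integrity check; the entity and attribute names are hypothetical, not Amgen's model:

```python
# Prototype of a two-entity logical model with a foreign-key check.
# Entity/attribute names are made up for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Study:
    study_id: str
    title: str

@dataclass(frozen=True)
class Sample:
    sample_id: str
    study_id: str  # logical foreign key to Study

def dangling_samples(studies, samples):
    """Return ids of samples whose study_id matches no known study."""
    known = {s.study_id for s in studies}
    return [s.sample_id for s in samples if s.study_id not in known]

studies = [Study("ST1", "Oncology baseline")]
samples = [Sample("SM1", "ST1"), Sample("SM2", "ST9")]
print(dangling_samples(studies, samples))  # ['SM2']
```

The same check becomes a declared constraint in the physical model, and a data-quality rule in the pipeline layer, so all three layers stay consistent.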

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Requisition Number: 101628 Architect I - Data Location: This is a hybrid opportunity in Delhi-NCR, Bangalore, Hyderabad, Gurugram area. Insight at a Glance 14,000+ engaged teammates globally with operations in 25 countries across the globe. Received 35+ industry and partner awards in the past year $9.2 billion in revenue #20 on Fortune’s World's Best Workplaces™ list #14 on Forbes World's Best Employers in IT – 2023 #23 on Forbes Best Employers for Women in IT- 2023 $1.4M+ total charitable contributions in 2023 by Insight globally Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organisations through complex digital decisions. About The Role As an Architect I , you will focus on leading our Business Intelligence (BI) and Data Warehousing (DW) initiatives. We will count on you to be involved in designing and implementing end-to-end data pipelines using cloud services and data frameworks. Along the way, you will get to: Architect and implement end-to-end data pipelines, data lakes, and warehouses using modern cloud services and architectural patterns. Develop and build analytics tools that deliver actionable insights to the business. Integrate and manage large, complex data sets to meet strategic business requirements. Optimize data processing workflows using frameworks such as PySpark. Establish and enforce best practices for data quality, integrity, security, and performance across the entire data ecosystem. Collaborate with cross-functional teams to prioritize deliverables and design solutions. Develop compelling business cases and return on investment (ROI) analyses to support strategic initiatives. 
Drive process improvements for enhanced data delivery speed and reliability. Provide technical leadership, training, and mentorship to team members, promoting a culture of excellence. What We’re Looking For 8+ years in Business Intelligence (BI) solution design, with 6+ years specializing in ETL processes and data warehouse architecture. 6+ years of hands-on experience with Azure Data services including Azure Data Factory, Azure Databricks, Azure Data Lake Gen2, Azure SQL DB, Synapse, Power BI, and MS Fabric. Strong Python and PySpark software engineering proficiency, coupled with a proven track record of building and optimizing big data pipelines, architectures, and datasets. Proficient in transforming, processing, and extracting insights from vast, disparate datasets, and building robust data pipelines for metadata, dependency, and workload management. Familiarity with software development lifecycles/methodologies, particularly Agile. Experience with SAP/ERP/Datasphere data modeling is a significant plus. Excellent presentation and collaboration skills, capable of creating formal documentation and supporting cross-functional teams in a dynamic environment. Strong problem-solving, time management, and organizational abilities. Keen to learn new languages and technologies continually. Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or an equivalent field What You Can Expect We’re legendary for taking care of you, your family and to help you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include: Freedom to work from another location—even an international destination—for up to 30 consecutive calendar days per year. 
Medical Insurance Health Benefits Professional Development: Learning Platform and Certificate Reimbursement Shift Allowance But what really sets us apart are our core values of Hunger, Heart, and Harmony, which guide everything we do, from building relationships with teammates, partners, and clients to making a positive impact in our communities. Join us today, your ambITious journey starts here. When you apply, please tell us the pronouns you use and any reasonable adjustments you may need during the interview process. At Insight, we celebrate diversity of skills and experience, so even if you don’t feel like your skills are a perfect match, we still want to hear from you! Today's talent leads tomorrow's success. Learn more about Insight: https://www.linkedin.com/company/insight/ Insight is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation or any other characteristic protected by law. Insight India Location: Level 16, Tower B, Building No 14, DLF Cyber City IT/ITES SEZ, Sector 24 & 25A, Gurugram, Gurgaon, HR 122002, India
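The metadata, dependency, and workload management this Architect role describes reduces to a core operation: ordering pipeline tasks so each runs only after its upstream dependencies. A sketch with the stdlib graphlib module; the task names are hypothetical, and orchestrators like Airflow or Azure Data Factory do this at scale:

```python
# Dependency-ordered execution plan for pipeline tasks, the scheduling
# core of orchestrators like Airflow/ADF. Task names are illustrative.
from graphlib import TopologicalSorter

def execution_order(dependencies):
    """dependencies maps task -> set of tasks it depends on."""
    return list(TopologicalSorter(dependencies).static_order())

deps = {
    "load_warehouse": {"transform"},
    "transform": {"ingest_sales", "ingest_crm"},
    "ingest_sales": set(),
    "ingest_crm": set(),
}
order = execution_order(deps)
print(order)  # both ingests precede transform, which precedes the load
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, the same failure mode an orchestrator flags when a DAG is miswired.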

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
