
6216 Databricks Jobs - Page 33

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Ciklum is looking for a Senior Data Scientist to join our team full-time in India.

We are a custom product engineering company that supports both multinational organizations and scaling startups in solving their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:
As a Senior Data Scientist, you will become part of a cross-functional development team working for a healthcare technology company that provides platforms and solutions to improve the management of, and access to, cost-effective pharmacy benefits. Our technology helps enterprise and partnership clients simplify their businesses and helps consumers save on prescriptions. Our client is a leader in SaaS technology for healthcare, offering innovative solutions with integrated intelligence on a single enterprise platform that connects the pharmacy ecosystem. With their expertise and modern, modular platform, our partners use real-time data to transform their business performance and optimize their innovative models in the marketplace.

Responsibilities:
- Develop prototype solutions, mathematical models, algorithms, machine learning techniques, and robust analytics to support analytic insights and visualization of complex data sets
- Perform exploratory data analysis: navigate a dataset and draw broad conclusions from initial appraisals
- Provide optimization recommendations that drive KPIs established by product, marketing, operations, PR, and other teams
- Interact with engineering teams and ensure that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability
- Work directly with business analysts and data engineers to understand and support their use cases
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions
- Drive innovation by exploring new experimentation methods and statistical techniques that could sharpen or speed up our product decision-making processes
- Cross-train other team members on technologies being developed, while continuously learning new technologies from other team members
- Contribute to the unit's activities and community building, participate in conferences, and promote best practices

Requirements:
We know that sometimes you can't tick every box. We would still love to hear from you if you think you're a good fit!
- 5+ years of experience developing Data Science solutions, with a proven track record of leveraging analytics to drive significant business impact
- Bachelor's/Master's degree in Mathematics, Statistics, Computer Science, Operations Research, Econometrics, or a related field
- Proven ability to frame and solve business problems through machine learning and statistics
- 4+ years of experience applying various machine learning techniques: regression, classification, clustering, dimensionality reduction, time series prediction, outlier detection, and/or recommendation systems
- Understanding of the advantages and drawbacks of machine learning algorithms, as well as their usage constraints, including performance
- 4+ years of experience in Python development of machine learning solutions and statistical analysis: Pandas, SciPy, scikit-learn, XGBoost, LightGBM, statsmodels, and/or imbalanced-learn; ML libraries such as scikit-learn, TensorFlow, and PyTorch; data wrangling and visualization (e.g., Pandas, NumPy, Matplotlib, Seaborn)
- Experience working with large-scale datasets, including time series and healthcare data
- Experience with NLP, deep learning, and GenAI
- Experience diving into data to uncover hidden patterns and conducting error analysis
- 2+ years of experience in data visualization: Power BI, Tableau, and/or Python libraries such as Matplotlib and Seaborn
- Experience with SQL for data processing, manipulation, sampling, and reporting
- 3+ years of experience creating and maintaining OOP machine learning solutions
- Understanding of the CRISP-ML(Q) / TDSP concepts
- 1+ year of experience with MLOps: integrating reliable machine learning pipelines in production, Docker, containerization, orchestration
- 2+ years of experience with clouds (AWS, Azure, GCP) and cloud AI and ML services (e.g., Amazon SageMaker, Azure ML)
- Excellent time and project management skills, with the ability to manage detailed work and communicate project status effectively at all levels

Desirable:
- Probability theory and statistics knowledge and intuition, as well as an understanding of the mathematics behind machine learning
- 1+ year of experience in deep learning solution development with TensorFlow or PyTorch
- Data Science / Machine Learning certifications, or research experience with published papers
- Experience with Kubernetes
- Experience with the Databricks and Snowflake platforms
- 1+ year of Big Data experience, i.e. Hadoop / Spark
- Experience with NoSQL and/or columnar/graph databases

What's in it for you?
- Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation
- Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), a Udemy license, language courses, and company-paid certifications
- Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
- Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally, and fulfil your potential
- Global impact: work on large-scale projects that redefine industries with international and fast-growing clients
- Welcoming environment: feel empowered with a friendly team, an open-door policy, an informal atmosphere within the company, and regular team-building events

About us:
India is a strategic growth market for Ciklum. Be a part of a big story created right now.
Let’s grow our delivery center in India together! Boost your skills and knowledge: create and innovate with like-minded professionals — all of that within a global company with a local spirit and start-up soul. Supported by Recognize Partners and expanding globally, we will engineer the experiences of tomorrow! Be bold, not bored! Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
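For illustration, a minimal sketch of the kind of scikit-learn workflow the requirements above describe: training and cross-validating a classifier on an imbalanced dataset. The data, features, and parameters are synthetic stand-ins, not part of the posting.

```python
# Illustrative only: synthetic stand-in data, not from the posting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Imbalanced binary classification problem (roughly a 90/10 split)
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)

clf = GradientBoostingClassifier(random_state=42)

# ROC-AUC is more informative than raw accuracy on imbalanced classes
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```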

Posted 1 week ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: CPG Commercial Analytics SME
Level: Associate Director/Director
Location: Flexible / Hybrid
Function: Commercial / Data & Analytics / Revenue Growth Management

Role Overview:
We are looking for a senior leader to drive the data product strategy and value realization across both the Trade Promotion Management (TPM) and Revenue Growth Management (RGM) domains. This role is focused on enabling high-impact commercial analytics through robust, well-architected TPM and RGM data pipelines. This is not an execution or reporting role; it is a cross-functional, strategic leadership position responsible for translating commercial growth priorities into scalable, trusted, and reusable data assets for use in pricing, promotion, investment optimization, and forecasting.

Key Responsibilities:

Data Strategy & Ownership
- Define and own the strategic roadmap for TPM and RGM data products
- Translate commercial use cases into enterprise-ready data product requirements
- Serve as the business-facing lead across pricing, trade promotion, investment ROI, and pack architecture analytics

Analytics & Value Enablement
- Partner with RGM, Finance, and Sales Analytics teams to activate data in key use cases: promotion ROI and trade spend effectiveness; Net Revenue Management (NRM) performance tracking; pack-price architecture and elasticity modeling; Market Mix Modeling (MMM) and promo uplift forecasting
- Drive data product adoption across business teams through aligned KPIs, semantic layers, and business-friendly design

Cross-functional Leadership
- Act as the single point of accountability for TPM and RGM data needs across commercial, finance, and digital teams
- Lead the definition of commercial data standards, hierarchies, and quality expectations
- Represent the TPM/RGM data domains in data councils, transformation programs, and platform initiatives

📐 Governance & Quality Management

Required Experience & Qualifications:

Professional Background
- 10+ years in CPG, retail, or a related industry in roles spanning Revenue Growth Management, TPM, or Commercial Analytics
- Deep experience with TPM and RGM data, systems, and analytics
- Demonstrated leadership in building or governing data products for commercial use cases

Domain & Technical Fluency
- In-depth knowledge of TPM data (spend types, uplift models, claims/deductions, ROI) and RGM data (price ladders, pack architecture, NRM metrics, elasticities)
- Proven experience designing data layers in a lakehouse or enterprise data platform
- Familiarity with tools such as SAP TPM, Exceedra, Oracle Demantra, Power BI, Databricks, dbt, or Looker

Leadership & Communication
- Strategic thinker with the ability to align data priorities to business goals
- Skilled in stakeholder management across commercial, finance, and technology organizations
- Strong communication and storytelling skills for both executive and operational audiences
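To make the elasticity-modeling use case concrete, here is a hedged Python sketch of a log-log own-price elasticity regression on synthetic data; the variable names and the elasticity value are hypothetical, not drawn from the posting.

```python
# Hypothetical log-log regression for own-price elasticity on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
price = rng.uniform(1.0, 5.0, 500)
# Simulate demand with a true elasticity of -1.8 plus noise
units = np.exp(3.0 - 1.8 * np.log(price) + rng.normal(0, 0.1, 500))

df = pd.DataFrame({"log_units": np.log(units), "log_price": np.log(price)})
model = sm.OLS(df["log_units"], sm.add_constant(df["log_price"])).fit()
# The coefficient on log_price recovers the elasticity (about -1.8)
print(model.params)
```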

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Power BI
Location: Bangalore
Experience: 5+ years

- Lead the identification and delivery of automated reporting using modern data visualization techniques.
- Design and create interactive dashboards, reports, and visualizations using Power BI tools to deliver actionable insights to business users.
- Collaborate with stakeholders to understand their reporting requirements, translate them into technical specifications, and ensure timely delivery of high-quality BI/analytical solutions.
- Identify and resolve data quality issues and inconsistencies, working closely with data stakeholders to ensure data consistency and trustworthiness.
- Expert knowledge of Databricks, Azure SQL Server, query authoring (SQL), and common data modelling, analytics, and visualization toolsets (predominantly in the Azure stack).

Soft Skills:
- Ability to work independently, manage multiple projects simultaneously, and meet deadlines in a fast-paced environment.
- Effectively prioritize and deliver a clear roadmap of deliverables.
- Agile methodology experience would be an advantage.
- Coach, mentor, and inspire a team of other Power BI developers.
- Communicate effectively with all stakeholders, including technical and non-technical audiences.

Preferred:
- Experience with the Maximo data structure and asset management concepts.
- Knowledge of financial systems and cost modeling.
- Experience with AI/ML tools for predictive analytics.
- Familiarity with integration frameworks.

About Encora
Encora is a global company that offers Software and Digital Engineering solutions, with more than 9,000 Encorians around the world. Our technology practices include Cloud Services, Product Engineering & Development, Data Modernization & Engineering, Digital Experience, DevSecOps, Cybersecurity, Quality Engineering, and Generative AI, among others. At Encora Inc. we hire professionals based solely on their skills and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role
Are you looking for an exciting opportunity in Solution Architecture? Are you passionate about everything Azure Cloud? Then join us as a Senior Cloud Architect.

Your Main Responsibilities
- Design and deliver Azure solution architecture for application workloads.
- Design disaster recovery and backup plans based on RTO, RPO, and other non-functional requirements to deliver resilient solutions on the public cloud.
- Assist engineering teams in delivering infrastructure architecture and designing cloud solutions.
- Build and maintain relationships with application teams, understanding their context and assisting them in achieving their cloud transformation roadmaps.
- Engage with subject matter experts in the Security, Enterprise Architecture, and Governance teams to contribute to and develop the cloud technology roadmap and ensure adherence to best practices.

About You
The following proven technical skills will be required:
- Expertise in designing application workloads using cloud platform services (SaaS and PaaS).
- Expertise in technology selection based on architecture decision records and other standards within the organization.
- Expertise in Azure AI Services (Foundry, ML, OpenAI, Anomaly Detection, Bot Services, LUIS), AKS (Azure Kubernetes Service), App Services, Databricks, ADF (Azure Data Factory), ASB (Azure Service Bus), Event Hub, KV (Key Vault), SA (Storage Account), Container Registry, Azure Functions, Redis, Logic Apps, Azure Firewall, VNET (Virtual Network), Private Endpoint, Service Endpoint, SQL Server, Cosmos DB, MongoDB.
- Experience designing IaC using Terraform on Azure DevOps.
- Expertise in designing Azure disaster recovery and backup scenarios that meet NFRs.
- Familiarity with the Azure Well-Architected Framework or cloud design patterns.
- Experience in one or more programming languages: .NET / C#, Java, Python, or Ruby.
- Experience in Azure landing zone design, platform automation design, DevSecOps tooling, network topology and connectivity, access management and privileged identity design, platform monitoring, security architecture, and high-availability architecture design.
- Experience in DevOps methodology (preferably DevSecOps), both technically and organisationally, including continuous deployment, delivery pipelines, and test environments.
- Good stakeholder communication; ability to work with Product Owners, Product Managers, Architects, and engineering teams.
- At ease working in a transformational, complex, fast-paced environment and getting things done.
- Proficiency in English is required.

About Swiss Re
Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics.
In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.

Reference Code: 133521

Posted 1 week ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Hyderabad, Gurugram, Bengaluru

Hybrid

- Design and build data pipelines using Spark SQL and PySpark in Azure Databricks.
- Design and build ETL pipelines using ADF.
- Build and maintain a Lakehouse architecture in ADLS / Databricks.
- Perform data preparation tasks including data cleaning, normalization, deduplication, type conversion, etc.
- Work with the DevOps team to deploy solutions in production environments.
- Control data processes and take corrective action when errors are identified. Corrective action may include executing a workaround process and then identifying the cause of, and solution for, the data errors.
- Participate as a full member of the global Analytics team, providing solutions for and insights into data-related items.
- Collaborate with your Data Science and Business Intelligence colleagues across the world to share key learnings, leverage ideas and solutions, and propagate best practices.
- Lead projects that include other team members, and participate in projects led by other team members.
- Apply change management tools, including training, communication, and documentation, to manage upgrades, changes, and data migrations.

Job Requirements
Must-Have Skills: Azure Databricks, Azure Data Factory, PySpark, Spark SQL, ADLS
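A minimal PySpark sketch of the data-preparation tasks listed above (cleaning, deduplication, type conversion), assuming a Databricks-style environment; the paths and column names are hypothetical.

```python
# Hypothetical data-prep pipeline; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("prep").getOrCreate()

raw = spark.read.format("csv").option("header", True).load("/mnt/raw/orders")

clean = (
    raw.dropDuplicates(["order_id"])                         # deduplication
       .withColumn("amount", F.col("amount").cast("double"))  # type conversion
       .filter(F.col("order_id").isNotNull())                 # basic cleaning
)

# Persist curated data in Delta format for the Lakehouse layer
clean.write.format("delta").mode("overwrite").save("/mnt/curated/orders")
```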

Posted 1 week ago

Apply

2.0 - 4.0 years

6 - 12 Lacs

Hyderabad

Work from Office

We are seeking experienced Data Analysts / Data Engineers with strong expertise in U.S. pharmaceutical commercial datasets to support critical Data Operations initiatives. This role will be focused on onboarding third-party data, ensuring data quality, and implementing outlier detection techniques. Familiarity with ML/AI approaches for anomaly detection is highly desirable.

Key Responsibilities:

Pharma Data Integration:
- Work extensively with U.S. pharmaceutical commercial datasets.
- Ingest and onboard third-party data sources such as IQVIA, Symphony Health, Komodo Health, etc.
- Ensure alignment of data schemas, dictionary mapping, and metadata integrity.

Data Quality & Governance:
- Design and implement QC protocols for data integrity and completeness.
- Track data lineage and maintain proper documentation of data flows and transformations.

Outlier Detection & Analytics:
- Apply statistical or algorithmic techniques to identify anomalies in data related to sales, claims, or patient-level records.
- Utilize ML/AI tools (if applicable) for automated outlier detection and trend analysis.

Collaboration & Reporting:
- Work cross-functionally with business teams, data scientists, and IT to ensure timely delivery of reliable data.
- Provide detailed reports and insights to support stakeholders' commercial decision-making.

Required Skills & Qualifications:
- 3+ years of experience in Pharmaceutical Data Operations, preferably with U.S. market data.
- Strong hands-on experience with third-party commercial healthcare data sources (IQVIA, Symphony, Komodo, etc.).
- Solid understanding of ETL pipelines, data ingestion frameworks, and metadata management.
- Proficiency in SQL, Python, or R for data processing and quality checks.
- Experience with outlier detection techniques, both statistical (Z-score, IQR, etc.) and ML-based (Isolation Forest, autoencoders, etc.).
- Familiarity with Snowflake, Databricks, AWS, or similar cloud platforms is a plus.
- Excellent problem-solving, documentation, and communication skills.
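For illustration, a short Python sketch of the two outlier-detection styles the posting names, statistical (Z-score and IQR) and ML-based (Isolation Forest), run on a synthetic weekly-sales series; the data is a stand-in.

```python
# Synthetic stand-in series with two injected anomalies.
import numpy as np
from scipy import stats
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
weekly_sales = np.append(rng.normal(1000, 50, 200), [1500, 400])

# Z-score rule: flag points more than 3 standard deviations out
z_flags = np.abs(stats.zscore(weekly_sales)) > 3

# IQR rule: flag points beyond 1.5 * IQR of the quartiles
q1, q3 = np.percentile(weekly_sales, [25, 75])
iqr = q3 - q1
iqr_flags = (weekly_sales < q1 - 1.5 * iqr) | (weekly_sales > q3 + 1.5 * iqr)

# ML-based: Isolation Forest labels outliers as -1
iso_flags = IsolationForest(random_state=0).fit_predict(
    weekly_sales.reshape(-1, 1)) == -1

print(z_flags.sum(), iqr_flags.sum(), iso_flags.sum())
```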

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Responsibilities
- Develop system integration scopes and objectives, involving all relevant stakeholders and ensuring technical feasibility
- Develop and implement strategies to connect different systems, ensuring they can share data and functionality
- Define and manage APIs that allow different software applications to communicate with each other
- Define and manage asynchronous data exchange between applications and as part of data pipelines
- Ensure high data quality and integrity to avoid operational failures and improve system accuracy
- Dive deep into the details to solve complex technical challenges
- Lead a partner team of engineers, providing guidance, mentorship, and support to help them grow and succeed in their roles
- Maintain comprehensive system integration documentation
- Oversee the technical aspects of backend development, including code quality, system architecture, and performance optimization
- Coordinate cross-functional teams to optimize processes and ensure that backend systems integrate seamlessly
- Implement process improvements, especially cross-team processes, to enhance productivity and efficiency
- Undertake technical and functional evaluation of third-party COTS applications
- Support project management activities, resource monitoring, and technical risk identification and mitigation

Qualifications

Education Qualification
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- MBA or other relevant post-graduate qualification is desirable

Certifications
- Specific certifications in cloud computing or data management would be advantageous
- Professional certifications in technology, project management, or related fields would be advantageous

Experience
- 10-15 years of experience in technology engineering roles, preferably within the aviation industry
- Demonstrated track record of successfully implementing, integrating, and managing technology systems and applications
- Proven experience in managing digital transformation initiatives and leveraging technology to enable business growth
- Strong background in coordinating with cross-functional teams and managing complex technical projects

Behavioural Skills
- Change management and adaptability
- Problem-solving and analytical mindset
- Outcome orientation and a go-getter approach
- Stakeholder management and partnering skills
- Team building and mentorship capabilities

Technical Skills
- Proficiency in one or more programming languages such as Node.js, Python, Java, or JavaScript
- Expertise in databases like PostgreSQL, MS SQL, and MongoDB
- Expertise in APIs, ESB (Enterprise Service Bus), and Kafka
- Experience with data warehouse solutions like Spark, Databricks, etc.
- Familiarity with cloud platforms like Azure Cloud
- Knowledge of data analytics, business intelligence, and AI/ML technologies
- Understanding of Agile methodologies, Jira, and Confluence
- Knowledge of tools like Docker and nginx
- Knowledge of cybersecurity principles and practices

Posted 1 week ago

Apply

8.0 years

0 Lacs

Greater Hyderabad Area

Remote

Job Title: AI/ML Engineer / Data Scientist, with a Databricks focus
Experience: 8+ years
Work type: Remote (India)

Key Responsibilities:
• Develop, deploy, and maintain scalable MLOps pipelines for both traditional ML and Generative AI use cases, leveraging Databricks (Unity Catalog, Delta Tables, Inference Tables, Mosaic AI).
• Operationalize large language models (LLMs) and other GenAI models, ensuring efficient prompt engineering, fine-tuning, and serving.
• Implement model tracking, versioning, and experiment management using MLflow.
• Build robust CI/CD pipelines for ML and GenAI workloads to automate testing, validation, and deployment to production.
• Use Vertex AI to manage training, deployment, and monitoring of ML and GenAI models in the cloud.
• Integrate high-quality, governed data pipelines that enable ML and Generative AI solutions with strong lineage and reproducibility.
• Design and enforce AI governance frameworks covering model explainability, bias monitoring, data access, compliance, and audit trails.
• Collaborate with data scientists and GenAI teams to productionize prototypes and research into reliable, scalable products.
• Monitor model performance, usage, and drift, including considerations specific to GenAI systems such as hallucination checks, prompt/response monitoring, and user feedback loops.
• Stay current with best practices and emerging trends in MLOps and Generative AI.

Key Qualifications:

Must-Have Skills:
• 3+ years of experience in MLOps, ML Engineering, or a related field.
• Hands-on experience operationalizing ML and Generative AI models in production.
• Proficiency with Databricks (Unity Catalog, Delta Tables, Mosaic AI, Inference Tables).
• Experience with MLflow for model tracking, registry, and reproducibility.
• Strong understanding of Vertex AI pipelines and deployment services.
• Expertise in CI/CD pipelines for ML and GenAI workloads (e.g., GitHub Actions, Azure DevOps, Jenkins).
• Proven experience integrating and managing data pipelines for AI, ensuring data quality, versioning, and lineage.
• Solid understanding of AI governance, model explainability, and responsible AI practices.
• Proficiency in Python, SQL, and distributed computing frameworks.
• Excellent communication and collaboration skills.

Nice to Have:
• Experience deploying and monitoring Large Language Models (LLMs) and prompt-driven AI workflows.
• Familiarity with vector databases, embeddings, and retrieval-augmented generation (RAG) architectures.
• Infrastructure-as-Code experience (Terraform, CloudFormation).
• Experience working in regulated industries (e.g., finance, retail) with compliance-heavy AI use cases.
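A minimal sketch of MLflow experiment tracking of the kind the responsibilities describe, assuming a scikit-learn model; the dataset and hyperparameters are placeholders, not taken from the posting.

```python
# Illustrative MLflow tracking run with placeholder data and parameters.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = Ridge(alpha=0.5).fit(X_tr, y_tr)
    mlflow.log_param("alpha", 0.5)                             # hyperparameter
    mlflow.log_metric("r2", r2_score(y_te, model.predict(X_te)))  # evaluation
    mlflow.sklearn.log_model(model, "model")                   # versioned artifact
```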

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

At Amaris Consulting, we're on the lookout for bold, versatile, and forward-thinking individuals to join our Data & AI Center of Excellence as Data Consultants. Whether your strength lies in analytics, engineering, or machine learning, your expertise belongs here.

What does it mean to be a Data Consultant at Amaris?
As a Data Consultant, you'll be at the heart of strategic and technical projects for top-tier organizations. From building scalable data pipelines to deploying cutting-edge ML models, your work will directly shape how clients turn raw data into real-world impact. You'll collaborate across teams, industries, and geographies, delivering solutions that matter.

Who we're looking for: Data Engineer
You don't just work with data: you build the engines that power data-driven products. You're fluent in Python and SQL, and you know how to architect clean, scalable pipelines that deliver results. You've worked with AI-enabled solutions, integrating pre-trained models, embeddings, and computer vision into production environments. You love solving problems, thrive in fast-paced product teams, and feel right at home in client-facing settings and global, cross-functional collaborations.

🔥 What You'll Do as a Data Consultant:
- Work in cross-functional teams with engineers, scientists, analysts, and project managers
- Build and optimize data pipelines for AI and product development use cases
- Collaborate with AI teams to operationalize models, including vision and NLP-based pre-trained systems
- Participate in client discussions to translate technical needs into valuable solutions
- Ensure code quality and scalability using best practices in Python and SQL
- Shape and implement technical solutions across cloud, hybrid, or on-prem environments
- Support product development initiatives by embedding data capabilities into features
- Contribute to internal R&D and knowledge-sharing efforts within the CoE

Our Environment & Tech Stack:
We're tech-agnostic and pragmatic: we adapt our stack to each client's needs. Some of the most used technologies include:
- Languages: Python, SQL
- AI & ML: pre-trained models, embedding models, computer vision frameworks
- Cloud platforms: Azure, AWS, GCP
- Orchestration & transformation: Airflow, dbt, Kedro
- Big data & storage: Spark, Databricks, Snowflake
- MLOps & DevOps: MLflow, Docker, Git, CI/CD pipelines
- Product & API development: REST APIs, microservices (bonus)

🎯 Your Profile:
- 4-5 years of experience as a Data Engineer
- Excellent skills in Python and SQL
- Experience with pre-trained models, embeddings, and computer vision
- Exposure to product development and AI integration in live environments
- Comfortable in client-facing roles and interacting with international teams
- Strong communicator with the ability to explain complex topics to both technical and business audiences
- Fluent in English; additional languages are a plus
- Autonomous, proactive, and a continuous learner

🚀 Why Join our Data & AI Center of Excellence?
- Work with major clients across Europe and globally on impactful projects
- Join a community of 600+ data professionals in our Center of Excellence
- Access continuous upskilling, tech exchanges, and mentorship opportunities
- Grow into technical leadership, architecture, or specialized AI domains

💡 We are an independent company that values:
- Agility: thrive in a flexible, dynamic, and stimulating environment
- International scope: engage in daily cross-border collaboration and mobility in 60+ countries
- Intrapreneurship: contribute to transversal topics or launch your own initiatives
- Attentive management: benefit from personalized support and career development

Amaris Consulting is proud to be an equal-opportunity workplace. We are committed to promoting diversity and creating an inclusive work environment. We welcome applications from all qualified individuals, regardless of gender, orientation, background, or ability.
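As a concrete example of the orchestration work named in the tech stack above, here is a hypothetical Airflow 2.x DAG skeleton; the DAG id, schedule, and task bodies are placeholders, not part of the posting.

```python
# Hypothetical Airflow 2.x DAG skeleton; ids and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull source data
    print("pulling source data")

def transform():
    # Placeholder: clean and join the extracted data
    print("cleaning and joining")

with DAG(dag_id="client_pipeline", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # extract runs before transform
```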

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Nashik Area

On-site

Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources, and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Title: Data Scientist
Location: Bangalore
Reporting to: Senior Manager Analytics

Purpose of the role
Contribute to the Data Science efforts of AB InBev's global forecasting team. The candidate will be required to contribute to building, interpreting, and scaling forecasting models across multiple ABI markets.

Key tasks & accountabilities
- Preferred industry exposure: CPG or Consulting, with 3+ years of experience (in the case of consulting, the typical profile would be a Lead Consultant with the relevant experience mentioned below).
- Experience working in the Forecasting Analytics domain, preferably in a CPG organization, with a demonstrated capability of successfully deploying analytics solutions and products for internal or external clients.
- Has interacted with senior internal or external stakeholders around project/service conceptualization and delivery planning.
- Exposure to AI/ML methodologies, with previous hands-on experience in ML concepts like forecasting, clustering, regression, classification, optimization, and deep learning.
- Experience with data manipulation using tools such as Excel and Python.
- Strong proficiency in Object-Oriented Programming (OOP) principles and design patterns.
- Good understanding of data structures and algorithms as they relate to machine learning tasks.
- Experience with version control tools such as Git.
- Familiarity with MLOps and containerization tools like Docker would be a plus.
- Consistently displays an intent for problem solving.

Qualifications, Experience, Skills

Level of Educational Attainment Required
- B.Tech/BE, a Masters in Statistics or Economics/Econometrics, or an MBA

Previous Work Experience
- Minimum 3 years of relevant experience.

Technical Skills Required
- Hands-on experience in data manipulation using Excel and Python.
- Expert-level proficiency in Python (knowledge of writing end-to-end ML or data pipelines in Python).
- Proficient in applying ML concepts and forecasting techniques to solve end-to-end business problems.
- Familiarity with the Azure tech stack, Databricks, and MLflow in any cloud platform.

Other Skills Required
- Passion for solving problems using data
- Detail-oriented, analytical, and inquisitive
- Ability to effectively communicate and present information at various levels of an organization
- Ability to work independently and with others
- And above all of this, an undying love for beer!

We dream big to create a future with more cheers.
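To illustrate the forecasting work described above, a small statsmodels sketch fitting a Holt-Winters model to a synthetic monthly sales series; the data and parameters are stand-ins, not from the posting.

```python
# Illustrative Holt-Winters forecast on a synthetic monthly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("2021-01-01", periods=36, freq="MS")
rng = np.random.default_rng(7)
# Trend + yearly seasonality + noise
sales = pd.Series(100 + np.arange(36) * 2
                  + 10 * np.sin(np.arange(36) * 2 * np.pi / 12)
                  + rng.normal(0, 3, 36), index=idx)

model = ExponentialSmoothing(sales, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(6))  # forecast the next six months
```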

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology
Job Family Group: IT&S Group

Job Description:

You will work with
You will lead a high-energy, top-performing team of engineers and product managers, working alongside technology and business leaders to shape the vision and drive the execution of transformative data initiatives that make a real impact.

Let me tell you about the role
As a Senior Master Data Platform Services Manager, you will play a strategic role in shaping and securing enterprise-wide technology landscapes, ensuring their resilience, performance, and compliance. You will provide deep expertise in security, infrastructure, and operational excellence, driving large-scale transformation and automation initiatives. Your role will encompass platform architecture, system integration, cybersecurity, and operational continuity. You will be collaborating with engineers, architects, and business partners, working to establish robust governance models, technology roadmaps, and innovative security frameworks to safeguard critically important enterprise applications.

What you will deliver
- Design and implement enterprise technology architecture, security frameworks, and platform engineering.
- Strengthen platform security and ensure compliance with industry standards and regulations.
- Optimize system performance, availability, and scalability.
- Advance enterprise modernization and drive seamless integration with enterprise IT.
- Establish governance, security standards, and risk management strategies.
- Develop automated security monitoring, vulnerability assessments, and identity management solutions.
- Drive adoption of CI/CD, DevOps, and Infrastructure-as-Code methodologies.
- Enhance disaster recovery and resilience planning for enterprise platforms.
- Partner with technology teams and external vendors to align enterprise solutions with business goals.
- Lead and mentor engineering teams, fostering a culture of innovation and excellence.
- Shape strategies for enterprise investments, cybersecurity risk mitigation, and operational efficiency.
- Collaborate across teams to implement scalable solutions and long-term technology roadmaps.

What you will need to be successful (experience and qualifications)

Technical Skills We Need From You
- Bachelor's degree in technology, engineering, or a related technical discipline.
- 6+ years of experience in enterprise technology, security, and operations in large-scale global environments.
- Experience implementing CI/CD pipelines, DevOps methodologies, and Infrastructure-as-Code (AWS Cloud Development Kit, Azure Bicep, etc.).
- Deep knowledge of ITIL, Agile, and enterprise IT governance frameworks.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with data pipeline frameworks (e.g., Apache Airflow, Kafka, Spark) and cloud-based data platforms (AWS, GCP, Azure).
- Expertise in database technologies (SQL, NoSQL, data lakes) and data modeling principles.

Essential Skills
- Proven technical expertise in Microsoft Azure, AWS, Databricks, and Palantir.
- Strong understanding of data ingestion, pipelines, governance, security, and visualization.
- Experience designing, deploying, and optimizing multi-cloud data platforms that support large-scale, cloud-native workloads, balancing cost efficiency with performance and resilience.
- Hands-on performance tuning, data indexing, and distributed query optimization.
- Experience with real-time and batch data streaming architectures.
Skills That Set You Apart
- Proven success navigating global, highly regulated environments, ensuring compliance, security, and enterprise-wide risk management.
- AI/ML-driven data engineering expertise, applying intelligent automation to optimize workflows.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

Travel Requirement: Up to 10% travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSQL data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management {+ 4 more}

Legal Disclaimer:
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology
Job Family Group: IT&S Group

Job Description:

You will work with
You will be part of a high-energy, top-performing team of engineers and product managers, working alongside technology and business leaders to support the execution of transformative data initiatives that make a real impact.

Let me tell you about the role
As a Senior Data Platform Services Engineer, you will play a strategic role in shaping and securing enterprise-wide technology landscapes, ensuring their resilience, performance, and compliance. You will provide deep expertise in security, infrastructure, and operational excellence, driving large-scale transformation and automation initiatives. Your role will encompass platform architecture, system integration, cybersecurity, and operational continuity. You will be collaborating with engineers, architects, and business partners, working to establish robust governance models, technology roadmaps, and innovative security frameworks to safeguard critically important enterprise applications.

What You Will Deliver
- Contribute to enterprise technology architecture, security frameworks, and platform engineering for our core data platform.
- Support end-to-end security implementation across our unified data platform, ensuring compliance with industry standards and regulatory requirements.
- Help drive operational excellence by supporting system performance, availability, and scalability.
- Contribute to modernization and transformation efforts, assisting in integration with enterprise IT systems.
- Assist in the design and execution of automated security monitoring, vulnerability assessments, and identity management solutions.
- Apply DevOps, CI/CD, and Infrastructure-as-Code (IaC) approaches to improve deployment and platform consistency.
- Support disaster recovery planning and high availability for enterprise platforms.
- Collaborate with engineering and operations teams to ensure platform solutions align with business needs.
- Provide guidance on platform investments, security risks, and operational improvements.
- Partner with senior engineers to support long-term technical roadmaps that reduce operational burden and improve scalability.

What you will need to be successful (experience and qualifications)

Technical Skills We Need From You
- Bachelor's degree in technology, engineering, or a related technical discipline.
- 3-5 years of experience in enterprise technology, security, or platform operations in large-scale environments.
- Experience with CI/CD pipelines, DevOps methodologies, and Infrastructure-as-Code (e.g., AWS CDK, Azure Bicep).
- Knowledge of ITIL, Agile delivery, and enterprise governance frameworks.
- Proficiency with big data technologies such as Apache Spark, Hadoop, Kafka, and Flink.
- Experience with cloud platforms (AWS, GCP, Azure) and cloud-native data solutions (BigQuery, Redshift, Snowflake, Databricks).
- Strong skills in SQL, Python, or Scala, and hands-on experience with data platform engineering.
- Understanding of data modeling, data warehousing, and distributed systems architecture.

Essential Skills
- Technical experience in Microsoft Azure, AWS, Databricks, and Palantir.
- Understanding of data ingestion pipelines, governance, security, and data visualization.
- Experience supporting multi-cloud data platforms at scale, balancing cost, performance, and resilience.
- Familiarity with performance tuning, data indexing, and distributed query optimization.
- Exposure to both real-time and batch data streaming architectures.

Skills That Set You Apart
- Proven success navigating global, highly regulated environments, ensuring compliance, security, and enterprise-wide risk management.
- AI/ML-driven data engineering expertise, applying intelligent automation to optimize workflows.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

Travel Requirement: Up to 10% travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSQL data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management {+ 4 more}

Legal Disclaimer:
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
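A brief PySpark sketch contrasting the batch and real-time (streaming) reads mentioned in the skills above, assuming Delta-format data; the paths are hypothetical placeholders.

```python
# Hypothetical batch vs. streaming reads over the same Delta location.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streams").getOrCreate()

# Batch: a one-off read of curated Delta data
batch_df = spark.read.format("delta").load("/mnt/curated/events")

# Real-time: a continuous micro-batch stream from the same location
stream_df = spark.readStream.format("delta").load("/mnt/curated/events")
query = (stream_df.writeStream.format("console")
         .outputMode("append")
         .trigger(processingTime="30 seconds")  # micro-batch every 30s
         .start())
```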

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology
Job Family Group: IT&S Group

Job Description:

You will work with
You will be part of a high-energy, top-performing team of engineers and product managers, working alongside technology and business leaders to support the execution of transformative data initiatives that make a real impact.

Let me tell you about the role
As a Senior Data Tooling Services Engineer, you will play a strategic role in shaping and securing enterprise-wide technology landscapes, ensuring their resilience, performance, and compliance. You will provide deep expertise in security, infrastructure, and operational excellence, driving large-scale transformation and automation initiatives. Your role will encompass platform architecture, system integration, cybersecurity, and operational continuity. You will be collaborating with engineers, architects, and business partners, working to establish robust governance models, technology roadmaps, and innovative security frameworks to safeguard critically important enterprise applications.

What You Will Deliver
- Contribute to enterprise technology architecture, security frameworks, and platform engineering for our core data platform.
- Support end-to-end security implementation across our unified data platform, ensuring compliance with industry standards and regulatory requirements.
- Help drive operational excellence by supporting system performance, availability, and scalability.
- Contribute to modernization and transformation efforts, assisting in integration with enterprise IT systems.
- Assist in the design and execution of automated security monitoring, vulnerability assessments, and identity management solutions.
- Apply DevOps, CI/CD, and Infrastructure-as-Code (IaC) approaches to improve deployment and platform consistency.
- Support disaster recovery planning and high availability for enterprise platforms.
- Collaborate with engineering and operations teams to ensure platform solutions align with business needs.
- Provide guidance on platform investments, security risks, and operational improvements.
- Partner with senior engineers to support long-term technical roadmaps that reduce operational burden and improve scalability.

What you will need to be successful (experience and qualifications)

Technical Skills We Need From You
- Bachelor's degree in technology, engineering, or a related technical discipline.
- 3-5 years of experience in enterprise technology, security, or platform operations in large-scale environments.
- Experience with CI/CD pipelines, DevOps methodologies, and Infrastructure-as-Code (e.g., AWS CDK, Azure Bicep).
- Knowledge of ITIL, Agile delivery, and enterprise governance frameworks.
- Proficiency with big data technologies such as Apache Spark, Hadoop, Kafka, and Flink.
- Experience with cloud platforms (AWS, GCP, Azure) and cloud-native data solutions (BigQuery, Redshift, Snowflake, Databricks).
- Strong skills in SQL, Python, or Scala, and hands-on experience with data platform engineering.
- Understanding of data modeling, data warehousing, and distributed systems architecture.

Essential Skills
- Technical experience in Microsoft Azure, AWS, Databricks, and Palantir.
- Understanding of data ingestion pipelines, governance, security, and data visualization.
- Experience supporting multi-cloud data platforms at scale, balancing cost, performance, and resilience.
- Familiarity with performance tuning, data indexing, and distributed query optimization.
- Exposure to both real-time and batch data streaming architectures.

Skills That Set You Apart
- Proven success navigating global, highly regulated environments, ensuring compliance, security, and enterprise-wide risk management.
- AI/ML-driven data engineering expertise, applying intelligent automation to optimize workflows.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

Travel Requirement: Up to 10% travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSQL data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management {+ 4 more}

Legal Disclaimer:
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Company Description
Decision Point develops analytics and big data solutions for CPG, Retail, and consumer-focused industries, working with global Fortune 500 clients. We provide analytical insights and solutions that help develop sales and marketing strategies in the Retail and CPG industry by leveraging diverse sources of data, including point-of-sale data, syndicated category data, primary shipments, and other similar sources. Decision Point was founded by Ravi Shankar along with his classmates from IIT Madras, who have diverse experience across the CPG and Marketing Analytics domain. At Decision Point, you will meet data scientists, business consultants, and tech-savvy engineers passionate about extracting every ounce of value from data for our clients.

Role Description
This is a full-time on-site role for a Lead Data Engineer, located in Gurugram. The Lead Data Engineer will be responsible for designing, developing, and maintaining data pipelines, building data models, implementing ETL processes, and managing data warehousing solutions. The role also includes data analytics responsibilities to derive actionable insights for our clients. The candidate will engage with cross-functional teams to understand data requirements and deliver robust data solutions.

Key Responsibilities:
- Design and build scalable data pipelines using Databricks and Microsoft Fabric
- Develop and maintain robust ETL processes for efficient data movement
- Implement and optimize data models (Dimensional and Data Vault)
- Work with data warehousing solutions to ensure clean, consumable data for analytics teams
- Collaborate with cross-functional teams to deliver end-to-end data flows (ingestion → transformation → consumption)
- Ensure data accuracy, performance tuning, and problem resolution

Required Skills:
- Databricks (hands-on)
- Microsoft Fabric (strong working knowledge)
- SQL: advanced querying, stored procedures, and tuning
- PySpark / Python for data transformation
- Basic knowledge of Azure Data Factory (ADF)
- Experience with data modeling and data warehousing
- Strong analytical and problem-solving abilities
- Ability to work in a fast-paced, collaborative environment
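To make the dimensional-modeling responsibility concrete, here is a hedged sketch of a Databricks SQL MERGE upsert into a dimension table, a common pattern behind such work; the table and column names are hypothetical.

```python
# Hypothetical upsert into a dimension table via Databricks SQL MERGE.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dim_load").getOrCreate()

# dim_customer and stg_customer are placeholder table names.
spark.sql("""
    MERGE INTO dim_customer AS t
    USING stg_customer AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED THEN
      UPDATE SET t.name = s.name, t.segment = s.segment
    WHEN NOT MATCHED THEN
      INSERT (customer_id, name, segment)
      VALUES (s.customer_id, s.name, s.segment)
""")
```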

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Your Role
The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. Data is our bread and butter for innovation. We are looking for a Staff Data Scientist who understands healthcare data and can leverage it to build algorithms that personalize treatments based on the clinical and behavioral history of patients. We are looking for a superstar who will define and build the next generation of predictive analytics tools in healthcare.

Analytics at Innovaccer
Our analytics team is dedicated to weaving analytics and data science magic across our products. They are the owners and custodians of the intelligence behind our products. With their expertise and innovative approach, they play a crucial role in building various analytical models (including descriptive, predictive, and prescriptive) to help our end-users make smart decisions. Their focus on continuous improvement and cutting-edge methodologies ensures that they're always creating market-leading solutions that propel our products to new heights of success.

A Day in the Life
- Design and lead the development of various artificial intelligence initiatives to help improve the health and wellness of patients
- Work with business leaders and customers to understand their pain points and build large-scale solutions for them
- Define the technical architecture to productize Innovaccer's machine-learning algorithms and take them to market through partnerships with different organizations
- Break down complex business problems into machine learning problems and design solution workflows
- Work with our data platform and applications teams to help them successfully integrate data science capabilities or algorithms into their products and workflows
- Work with development teams to build tools for repeatable data tasks that will accelerate and automate the development cycle
- Define and execute on the quarterly roadmap

What You Need
- Masters in Computer Science, Computer Engineering, or another relevant field (PhD preferred)
- 7+ years of experience in Data Science (healthcare experience is a plus)
- Strong written and spoken communication skills
- Strong hands-on experience in Python, building enterprise applications along with optimization techniques
- Strong experience with deep learning techniques to build NLP/computer vision models as well as state-of-the-art GenAI pipelines; knowledge of implementing agentic workflows is a plus
- Demonstrable experience deploying deep learning models in production at scale with iterative improvements; this requires hands-on expertise with at least one deep learning framework such as PyTorch or TensorFlow
- Keen interest in research, staying updated with key advancements in AI and ML in the industry
- Deep understanding of classical ML techniques (Random Forests, SVM, Boosting, Bagging) and of building training and evaluation pipelines
- Demonstrable experience with global and local model explainability using LIME, SHAP, and associated techniques
- Hands-on experience with at least one ML platform among Databricks, Azure ML, and SageMaker
- Experience in developing and deploying production-ready models
- Knowledge of implementing an MLOps framework
- A customer-focused attitude in conversations and documentation

We offer competitive benefits to set you up for success in and outside of work.
We offer competitive benefits to set you up for success in and outside of work.

Here's What We Offer
Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days
Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition
Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered
Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury
Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only
Creche Facility for Children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices

Where and How We Work
Our Noida office is situated in a posh tech space, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our Px department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

About Innovaccer
Innovaccer Inc. is the data platform that accelerates innovation. The Innovaccer platform unifies patient data across systems and care settings, and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes. Innovaccer's EHR-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. The Innovaccer platform is the #1-rated Best in KLAS data and analytics platform by KLAS, and the #1-rated population health technology platform by Black Book. For more information, please visit innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, and innovaccer.com.

Posted 1 week ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

We are seeking a Senior .NET Full Stack Developer with 8-10 years of experience for a full-time, 6-month remote role. The ideal candidate must possess a strong mix of backend, frontend, data, mobile, and CMS skill sets. Proficiency in Azure, .NET 8, C#, React JS, Tailwind CSS, Python, Power BI, Azure SQL, React Native, and CMS platforms such as Sitecore, Tridion, or WordPress is essential. The role demands hands-on development experience and the ability to deliver scalable, end-to-end solutions. Strong analytical thinking, adaptability, and collaboration skills are a must. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai

Posted 1 week ago

Apply

7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Analytics – JD (Azure DE)
EXL (NASDAQ: EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning Business EXLerator Framework™, which integrates analytics, automation, benchmarking, BPO, consulting, industry best practices and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries. Headquartered in New York, New York, EXL has more than 61,000 professionals in locations throughout the United States, Europe, Asia (primarily India and the Philippines), Latin America, Australia and South Africa.

EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients' decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 12,000 data scientists and analysts assists client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics.

Job Title: Consultant / Senior Consultant – Azure Data Engineering
Location: India – Gurgaon preferred
Industry: Insurance Analytics & AI Vertical

Role Overview:
We are seeking a hands-on Consultant / Senior Consultant with strong expertise in Azure-based data engineering to support end-to-end development and delivery of data pipelines for our insurance clients. The ideal candidate will have a deep understanding of Azure Data Factory, ADLS, Databricks (preferably with DLT and Unity Catalog), SQL, and Python, and be comfortable working in a dynamic, client-facing environment. This is a key offshore role requiring both technical execution and solution-oriented thinking to support modern data platform initiatives.
Collaborate with data scientists, analysts, and stakeholders to gather requirements and define data models that effectively support business requirements
Demonstrate decision-making, analytical and problem-solving abilities
Strong verbal and written communication skills to manage client discussions
Familiarity with Agile methodologies: daily scrum, sprint planning, backlog refinement

Key Responsibilities & Skillsets:
Design and develop scalable and efficient data pipelines using Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS)
Build and maintain Databricks notebooks for data ingestion, transformation, and quality checks, using Python and SQL
Work with Delta Live Tables (DLT) and Unity Catalog (preferred) to improve pipeline automation, governance, and performance (see the sketch below)
Collaborate with data architects, analysts, and onshore teams to translate business requirements into technical specifications
Troubleshoot data issues, ensure data accuracy, and apply best practices in data engineering and DevOps
Support the migration of legacy SQL pipelines to modern Python-based frameworks
Ensure adherence to data security, compliance, and performance standards, especially within insurance domain constraints
Provide documentation, status updates, and technical insights to stakeholders as required
Excellent communication skills and stakeholder management

Required Skills & Experience:
3–7 years of strong hands-on experience in data engineering with a focus on Azure cloud technologies
Proficient in Azure Data Factory, Databricks, ADLS Gen2, and working knowledge of Unity Catalog
Strong programming skills in both SQL and Python, especially within Databricks notebooks; PySpark expertise is good to have
Experience with Delta Lake / Delta Live Tables (DLT) is a plus
Good understanding of ETL/ELT concepts, data modeling, and performance tuning
Exposure to insurance or financial services data projects is highly preferred
Strong communication and collaboration skills in an offshore delivery model

Preferred Skills & Experience:
Experience working in Agile/Scrum teams
Familiarity with Azure DevOps, Git, and CI/CD practices
Certifications in Azure Data Engineering (e.g., DP-203) or Databricks

What we offer:
EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses our clients engage in. You will also learn effective teamwork and time-management skills - key aspects for personal and professional growth.
Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance and coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. The sky is the limit for our team members. The unique experience gathered at EXL Analytics sets the stage for further growth and development in our company and beyond.
"EOE/Minorities/Females/Vets/Disabilities"
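
Since the role highlights Delta Live Tables with built-in quality gates, a minimal illustrative DLT sketch follows. It only runs inside a Databricks DLT pipeline (where `dlt` and `spark` are provided by the runtime), and the landing path and claim columns are assumptions:

```python
# Delta Live Tables sketch: raw ingestion plus a validated layer with expectations.
# The ADLS path, storage account, and claim_id/claim_amount columns are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw claims landed from ADLS via Auto Loader.")
def claims_raw():
    return (
        spark.readStream.format("cloudFiles")  # `spark` is injected by the DLT runtime
        .option("cloudFiles.format", "json")
        .load("abfss://landing@<storage_account>.dfs.core.windows.net/claims/")
    )

@dlt.table(comment="Validated claims with basic quality gates applied.")
@dlt.expect_or_drop("valid_claim_id", "claim_id IS NOT NULL")
@dlt.expect_or_drop("positive_amount", "claim_amount > 0")
def claims_clean():
    return dlt.read_stream("claims_raw").withColumn("ingested_at", F.current_timestamp())
```

Expectations such as `expect_or_drop` keep governance rules next to the transformation code, which is part of what makes DLT attractive for regulated insurance data.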

Posted 1 week ago

Apply

9.0 - 14.0 years

30 - 45 Lacs

Chennai, Bengaluru

Work from Office

Roles & Responsibilities
Lead and mentor the team, define goals, ensure timely project delivery, and manage code reviews.
Design scalable data architectures, select appropriate technologies, and ensure compliance with data security regulations.
Build and optimize ETL/ELT pipelines, automate workflows, and ensure data quality.
Hands-on experience with Databricks for building and managing scalable data pipelines.
Manage cloud-based infrastructure, implement IaC, and optimize costs and availability.
Work with business stakeholders to translate requirements into technical solutions, track project milestones, and manage risks using Agile.
Stay updated on new technologies, drive innovation, and optimize existing systems.
Maintain documentation, share knowledge, and conduct team training sessions.

Educational Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
9+ years of experience in Data Engineering, with at least 3+ years in an architectural role
Strong expertise in data engineering tools and technologies (e.g., Apache Spark, a cloud platform (AWS, GCP, or Azure), SQL, Python)
Proficiency in any cloud platform (e.g., AWS, Azure, GCP) and its data services
Experience with data modeling, ETL/ELT processes, and data warehousing solutions
Knowledge of distributed systems, big data technologies, and real-time data processing
Strong leadership, communication, and problem-solving skills
Familiarity with DevOps practices and tools (e.g., Docker, Kubernetes, Terraform)
Understanding of data governance, security, and compliance requirements

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Noida, Pune, Bengaluru

Hybrid

Notice period: strictly immediate to 30 days.
Experience range: 5 to 12 years.
Location: PAN India
JD:
5+ years of experience using Azure
Strong proficiency in Databricks
Experience with PySpark
Proficiency in SQL or T-SQL
Experience with Azure service components such as Azure Data Factory, Azure Data Lake, Databricks, and SQL DB / SQL Server
Databricks Jobs for efficient data processing, ETL tasks, and report generation
Hands-on experience with scripting languages such as Python for data processing and manipulation
Key responsibilities:
Leverage Databricks to set up scalable data pipelines that integrate with a variety of data sources and cloud platforms (see the sketch below)
Participate in code and design reviews to maintain high development standards
Optimize data querying layers to enhance performance and support analytical requirements
Develop end-to-end automations in the Azure stack for ETL workflows and data quality validations
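
As a rough illustration of such a pipeline, here is a minimal batch ETL sketch for Azure Databricks. The storage account, container, and table and column names are placeholders, not a client configuration:

```python
# Batch ETL sketch: read Parquet from ADLS Gen2, apply basic validation,
# and write a partitioned Delta table. All names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied by the cluster on Databricks

orders = (
    spark.read.format("parquet")
    .load("abfss://raw@<storage_account>.dfs.core.windows.net/orders/")
)

validated = (
    orders
    .filter(F.col("order_id").isNotNull())        # basic data quality validation
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])
)

(
    validated.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")                    # partition for query performance
    .saveAsTable("analytics.orders_clean")
)
```

A job like this can be scheduled as a Databricks Job and orchestrated from Azure Data Factory, matching the ETL and report-generation duties listed above.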

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position: Technical Data Analyst (Snowflake + Python + SQL + Databricks)
Client: One of our prestigious clients
Locations: Pune/Hyderabad
Mode of hiring: Full-time/Permanent
Mode of interview: Face to face
Experience: 6+ years
Budget: 28-33 LPA (based on experience)
Notice period: 0-15 days (only candidates serving notice period)
Share your CV 📧: sathish.m@tekgence.com
Note: PF (UAN) is mandatory (no dual employment or overlap)

We are seeking a highly skilled Technical Data Analyst to join our team and play a key role in building a single source of truth for our high-volume, direct-to-consumer accounting and financial data warehouse. The ideal candidate will have a strong background in data analysis, SQL, and data transformation, with experience in financial data warehousing and reporting. This role will involve working closely with finance and accounting teams to gather requirements, build dashboards, and transform data to support month-end accounting, tax reporting, and financial forecasting. The financial data warehouse is currently built in Snowflake and will be migrated to Databricks. The candidate will be responsible for transitioning reporting and transformation processes to Databricks while ensuring data accuracy and consistency.

Key Responsibilities:
1. Data Analysis & Reporting:
- Build and maintain month-end accounting and tax dashboards using SQL and Snowsight in Snowflake
- Transition reporting processes to Databricks, creating dashboards and reports to support finance and accounting teams
- Gather requirements from finance and accounting stakeholders to design and deliver actionable insights
2. Data Transformation & Aggregation:
- Develop and implement data transformation pipelines in Databricks to aggregate financial data and create balance sheet look-forward views (see the sketch below)
- Ensure data accuracy and consistency during the migration from Snowflake to Databricks
- Collaborate with the data engineering team to optimize data ingestion and transformation processes
3. Data Integration & ERP Collaboration:
- Support the integration of financial data from the data warehouse into NetSuite ERP by ensuring data is properly transformed and validated
- Work with cross-functional teams to ensure seamless data flow between systems
4. Data Ingestion & Tools:
- Understand and work with Fivetran for data ingestion (no need to be an expert, but familiarity is required)
- Troubleshoot and resolve data-related issues in collaboration with the data engineering team

Additional Qualifications:
- 3+ years of experience as a Data Analyst or in a similar role, preferably in a financial or accounting context
- Strong proficiency in SQL and experience with Snowflake and Databricks
- Experience building dashboards and reports for financial data (e.g., month-end close, tax reporting, balance sheets)
- Familiarity with Fivetran or similar data ingestion tools
- Understanding of financial data concepts (e.g., general ledger, journals, balance sheets, income statements)
- Experience with data transformation and aggregation in a cloud-based environment
- Strong communication skills to collaborate with finance and accounting teams
- Nice to have: Experience with NetSuite ERP or similar financial systems
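
As a sketch of the kind of aggregation this role describes, here is a month-end running-balance query in Databricks. The `finance.gl_journal` table and its columns are hypothetical stand-ins for a general-ledger feed:

```python
# Month-end aggregation sketch: net change and running balance per account,
# of the kind used for a balance-sheet look-forward view. All names are assumed.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

monthly_balances = spark.sql("""
    SELECT
        account_id,
        date_trunc('month', posting_date) AS period,
        SUM(amount)                       AS net_change,
        SUM(SUM(amount)) OVER (
            PARTITION BY account_id
            ORDER BY date_trunc('month', posting_date)
        )                                 AS running_balance
    FROM finance.gl_journal
    GROUP BY account_id, date_trunc('month', posting_date)
""")

monthly_balances.write.mode("overwrite").saveAsTable("finance.balance_lookforward")
```

Because the window function is evaluated after the GROUP BY, the nested `SUM(SUM(amount))` accumulates each month's net change into a per-account running balance.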

Posted 1 week ago

Apply

4.0 years

6 - 10 Lacs

Gurgaon

On-site

About Us
We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners.

Job Summary:
We are seeking a Senior Data Engineer – Databricks with a strong development background in Azure Databricks and Python, who will be instrumental in building and optimising scalable data pipelines and solutions across the Azure ecosystem. This role requires hands-on development experience with PySpark, data modelling, and Azure Data Factory. You will collaborate closely with data architects, analysts, and business stakeholders to ensure reliable and high-performance data solutions.
Experience Required: 4+ years
Lead/Senior Data Engineer (Microsoft Azure, Databricks, Data Factory, Data Engineering, Data Modelling)

Key Responsibilities:
Develop and Maintain Data Pipelines: Design, implement, and optimise scalable data pipelines using Azure Databricks (PySpark) for both batch and streaming use cases.
Azure Platform Integration: Work extensively with Azure services including Data Factory, ADLS Gen2, Delta Lake, and Azure Synapse for end-to-end data pipeline orchestration and storage.
Data Transformation & Processing: Write efficient, maintainable, and reusable PySpark code for data ingestion, transformation, and validation processes within the Databricks environment.
Collaboration: Partner with data architects, analysts, and data scientists to understand requirements and deliver robust, high-quality data solutions.
Performance Tuning and Optimisation: Optimise Databricks cluster configurations, notebook performance, and resource consumption to ensure cost-effective and efficient data processing.
Testing and Documentation: Implement unit and integration tests for data pipelines. Document solutions, processes, and best practices to enable team growth and maintainability.
Security and Compliance: Ensure data governance, privacy, and compliance are upheld across all engineered solutions, following Azure security best practices.

Preferred Skills:
Strong hands-on experience with Delta Lake, including table management, schema evolution, and implementing ACID-compliant pipelines.
Skilled in developing and maintaining Databricks notebooks and jobs for large-scale batch and streaming data processing.
Experience writing modular, production-grade PySpark and Python code, including reusable functions and libraries for data transformation.
Experience in streaming data ingestion and Structured Streaming in Databricks for near real-time data solutions (see the sketch below).
Knowledge of performance tuning techniques in Spark, including job optimization, caching, and partitioning strategies.
Exposure to data quality frameworks and testing practices (e.g., pytest, data validation libraries, custom assertions).
Basic understanding of Unity Catalog for managing data governance, access controls, and lineage tracking from a developer's perspective.
Familiarity with Power BI: able to structure data models and views in Databricks or Synapse to support BI consumption.
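
For the streaming side of the role, here is a minimal Structured Streaming sketch that lands incoming files in a Delta table while tolerating additive schema drift. Auto Loader (`cloudFiles`) is Databricks-specific, and the paths and table name are placeholders:

```python
# Structured Streaming sketch: ingest JSON with Auto Loader and write to Delta,
# allowing additive schema evolution. Paths and table names are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("cloudFiles")              # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events_schema")
    .load("abfss://events@<storage_account>.dfs.core.windows.net/in/")
)

query = (
    events.withColumn("processed_at", F.current_timestamp())
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .option("mergeSchema", "true")                     # tolerate new upstream columns
    .trigger(availableNow=True)                        # incremental, batch-style run
    .toTable("bronze.events")
)
query.awaitTermination()
```

`trigger(availableNow=True)` processes everything outstanding and then stops, which keeps cluster costs down when true 24/7 streaming is not required.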

Posted 1 week ago

Apply

0 years

2 - 9 Lacs

Gurgaon

On-site

Data Engineering Specialist
Years of experience: 7+

Role Purpose
The purpose of the Data Engineer role is to build and unit test code for projects and programmes on the Azure Cloud Data and Analytics Platform.

Key Accountabilities
Analyse business requirements and support/create designs for those requirements
Build and deploy new or updated data mappings, sessions, and workflows in the Azure Cloud Platform, with a key focus on Azure Databricks
Develop performant and scalable code
Perform ETL routines including performance tuning, troubleshooting, support, and capacity estimation
Conduct thorough testing of ETL code changes to ensure quality deliverables
Provide day-to-day support and mentoring to end users interacting with the data
Profile and understand large volumes of source data, including structured and semi-structured/web activity data
Analyse defects and provide timely fixes
Provide release notes for deployments
Support release activities
Demonstrate a problem-solving attitude
Continuously develop technical skills, especially within the Azure platform

Functional / Technical Skills
Experienced in ETL tools and data projects
Recent Azure experience with strong knowledge of Azure Databricks (Python/SQL)
Good knowledge of SQL and Python
Strong analytical skills
Knowledge of Azure DevOps
Experience with Azure Databricks and Logic Apps (highly desirable)
Experience with Python programming (highly desirable)
Experience with Azure Functions (a plus)

Decision Making Authority / Impact
Responsible for day-to-day decisions related to coding and unit testing
Immediate joiners or candidates serving a notice period of up to one month preferred
If you are interested in applying for this role, send your CV to bhavya.vemuri@invokhr.com

Posted 1 week ago

Apply

1.0 - 3.0 years

2 - 7 Lacs

Gurgaon

On-site

Donaldson is committed to solving the world's most complex filtration challenges. Together, we make cool things. As an established technology and innovation leader, we are continuously evolving to meet the filtration needs of our changing world. Join a culture of collaboration and innovation that matters and a chance to learn, effect change, and make meaningful contributions at work and in communities.

We are seeking a skilled and motivated Data Engineer II to join the Corporate Technology Data Engineering Team. This role is important for developing and sustaining our data infrastructure, which supports a wide range of R&D, sensor-based, and modeling technologies. The Data Engineer II will design and maintain pipelines that enable the use of complex datasets. This position directly empowers faster decision making by building trustworthy data flows and access for engineers and scientists.

Primary Role Responsibilities:
Develop and maintain data ingestion and transformation pipelines across on-premise and cloud platforms.
Develop scalable ETL/ELT pipelines that integrate data from a variety of sources (e.g., form-based entries, SQL databases, Snowflake, SharePoint).
Collaborate with data scientists, data analysts, simulation engineers and IT personnel to deliver data engineering and predictive data analytics projects.
Implement data quality checks, logging, and monitoring to ensure reliable operations (see the sketch below).
Follow and maintain data versioning, schema evolution, and governance controls and guidelines.
Help administer Snowflake environments for cloud analytics.
Work with more senior staff to improve solution architectures and automation.
Stay updated with the latest data engineering technologies and trends.
Participate in code reviews and knowledge-sharing sessions.
Participate in and plan new data projects that impact business and technical domains.

Required Qualifications & Relevant Experience:
Bachelor's or master's degree in computer science, data engineering, or a related field.
1-3 years of experience in data engineering, ETL/ELT development, and/or backend software engineering.
Demonstrated expertise in Python and SQL.
Demonstrated experience working with data lakes and/or data warehouses (e.g., Snowflake, Databricks, or similar).
Familiarity with source control and development practices (e.g., Git, Azure DevOps).
Strong problem-solving skills and eagerness to work with cross-functional, globalized teams.

Preferred Qualifications:
The required qualifications, plus:
Working experience and knowledge of scientific and R&D workflows, including simulation data and LIMS systems.
Demonstrated ability to balance operational support and longer-term project contributions.
Experience with Java.
Strong communication and presentation skills.
Motivated, self-driven learner.

Employment opportunities for positions in the United States may require use of information which is subject to the export control regulations of the United States. Hiring decisions for such positions are required by law to be made in compliance with these regulations. Applicants for employment opportunities in other countries must be able to meet the comparable export control requirements of that country and of the United States.

Donaldson Company has been made aware that there are several recruiting scams targeting job seekers. These scams have attempted to solicit money for job applications and/or collect confidential information; Donaldson will never solicit money during the application or recruiting process. Donaldson only accepts online applications through our Careers | Donaldson Company, Inc. website, and any communication from a Donaldson recruiter would be sent using a donaldson.com email address. If you have any questions about the legitimacy of an employment opportunity, please reach out to talentacquisition@donaldson.com to verify that the communication is from Donaldson.

Our policy is to provide equal employment opportunities to all qualified persons without regard to race, gender, color, disability, national origin, age, religion, union affiliation, sexual orientation, veteran status, citizenship, gender identity and/or expression, or other status protected by law.
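
As an illustration of the data-quality, logging, and monitoring duties above, here is a small quality-gate sketch; the table names, column, and threshold are assumptions, not Donaldson's systems:

```python
# Data-quality gate sketch: log a null-rate metric and fail fast on bad feeds.
# Table names, the sensor_id column, and the 1% threshold are illustrative.
import logging
from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sensor_ingest")

spark = SparkSession.builder.getOrCreate()
readings = spark.read.table("lab.sensor_readings_raw")

total = readings.count()
nulls = readings.filter(F.col("sensor_id").isNull()).count()
null_rate = nulls / max(total, 1)
log.info("rows=%d null_sensor_id_rate=%.4f", total, null_rate)

# Stop the pipeline rather than propagate a degraded feed downstream.
if null_rate > 0.01:
    raise ValueError(f"sensor_id null rate {null_rate:.2%} exceeds 1% threshold")

(readings.filter(F.col("sensor_id").isNotNull())
         .write.mode("append").saveAsTable("lab.sensor_readings_clean"))
```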

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurgaon

On-site

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum 3 years of experience is required
Educational Qualification: Any B.Tech degree

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for creating efficient and scalable applications using Microsoft Azure Databricks. Your typical day will involve collaborating with the team to understand business requirements, designing and developing applications, and ensuring the applications meet quality standards and performance expectations.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with the team to understand business requirements and translate them into technical specifications.
- Design, develop, and test applications using Microsoft Azure Databricks.
- Ensure the applications meet quality standards and performance expectations.
- Troubleshoot and debug applications to identify and resolve issues.
- Provide technical guidance and support to junior developers.
- Stay updated with the latest industry trends and technologies related to application development.

Professional & Technical Skills:
- Must-have: Proficiency in Microsoft Azure Databricks.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms (see the sketch below).
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- Any B.Tech degree is required.
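
To make the listed skills concrete, here is a minimal scikit-learn sketch that chains data cleaning, normalization, and logistic regression in one pipeline; the synthetic dataset is purely illustrative:

```python
# Data munging + classical ML sketch: impute, scale, and fit logistic regression.
# The synthetic data stands in for a real feature table.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
X[rng.random(X.shape) < 0.05] = np.nan          # inject missing values to clean
y = (np.nan_to_num(X[:, 0]) > 0).astype(int)    # simple, fully defined labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # cleaning: fill missing values
    ("scale", StandardScaler()),                   # normalization
    ("model", LogisticRegression()),
]).fit(X_train, y_train)

print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```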

Posted 1 week ago

Apply

3.0 years

6 - 10 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Analyze business requirements and functional specifications
Determine the impact of changes on current functionality of the system
Interact with diverse business partners and technical workgroups
Be flexible to collaborate with onshore business during US business hours
Be flexible to support project releases during US business hours
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Undergraduate degree or equivalent experience
3+ years of working experience in Python, PySpark, Scala
3+ years of experience working with MS SQL Server and NoSQL DBs like Cassandra
Hands-on working experience in Azure Databricks
Solid healthcare domain knowledge
Exposure to DevOps methodology and creating CI/CD deployment pipelines
Exposure to Agile methodology, specifically using tools like Rally
Ability to understand the existing application codebase, perform impact analysis and update the code when required based on business logic or for optimization
Proven excellent analytical and communication skills (both verbal and written)

Preferred Qualification:
Experience with streaming applications (Kafka, Spark Streaming, etc.); see the sketch below

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

#Gen #NJP
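
For the preferred streaming experience, here is a hedged sketch of consuming a Kafka topic with Spark Structured Streaming and landing parsed records in Delta. The broker, topic, schema, and target table are placeholders, and the job needs the spark-sql-kafka connector on the cluster:

```python
# Kafka -> Structured Streaming -> Delta sketch. Broker address, topic name,
# message schema, and target table are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("member_id", StringType()),
    StructField("claim_amount", DoubleType()),
])

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "<broker:9092>")
    .option("subscribe", "claims-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")                                  # flatten the parsed payload
)

query = (
    stream.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/claims")
    .toTable("bronze.claims_events")
)
query.awaitTermination()
```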

Posted 1 week ago

Apply
