3752 Databricks Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 years

0 Lacs

Indore

On-site

Your IT Future, Delivered. Principal DevOps Engineer

With a global team of 5,600+ IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the world's biggest logistics company. All our locations have earned the #GreatPlaceToWork certification, reflecting our commitment to an exceptional employee experience. Digitalization. Simply delivered.

At DHL IT Services, we design, build and run IT solutions for the whole of DPDHL globally. The AI & Analytics team builds and runs solutions to get much more value out of our data. We help our business colleagues all over the world with machine learning algorithms, predictive models and visualizations. We manage more than 46 AI & Big Data applications, 3,000 active users, 87 countries and up to 100,000,000 daily transactions. Integrating AI & Big Data into business processes to compete in a data-driven world requires state-of-the-art technology. Our infrastructure, hosted on-prem and in the cloud (Azure and GCP), includes MapR, Airflow, Spark, Kafka, Jupyter, Kubeflow, Jenkins, GitHub, Tableau, Power BI, Synapse (Analytics), Databricks and further interesting tools. We like to do everything in an Agile/DevOps way. No more throwing the "problem code" to support, no silos. Our teams are completely product-oriented, with end-to-end responsibility for the success of our product.

Grow together. Currently, we are looking for a Principal DevOps Engineer. In this role, you will have the opportunity to design and develop solutions, contribute to roadmaps of Big Data architectures, and provide mentorship and feedback to more junior team members. We are looking for someone to help us manage the petabytes of data we have and turn them into value. Does that sound a bit like you?
Let’s talk! Even if you don’t tick all the boxes below, we’d love to hear from you; our new department is growing rapidly and we’re looking for many people with a can-do mindset to join us on our digitalization journey. Thank you for considering DHL as the next step in your career – we do believe we can make a difference together! Ready to embark on the journey? Here’s what we are looking for:

University degree in Computer Science, Information Systems, Business Administration, or a related field. 10+ years of IT experience, with 5+ years in Data Engineering. Strong analytical skills related to working with structured, semi-structured and unstructured datasets. Hands-on experience implementing Data Lake/Big Data projects on on-premises and cloud platforms (preferably Azure/GCP). Hands-on experience with Docker and Kubernetes and related ecosystem tooling, on-prem and in public clouds. Hands-on experience with public clouds (preferred: GCP, Azure), with a specific focus on using them for data lakes. Experience working with Big Data technologies – e.g. Spark, Kafka, HDFS, Hive, Hadoop distributions (Cloudera or MapR). Experience with streaming platforms/frameworks such as Kafka, Spark Streaming, Flink. Good programming skills (Java/Scala/Python). Advanced working SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases. Proven experience in building and optimizing big data pipelines, architectures and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Experience building processes supporting data transformation, data structures, metadata, dependency and workload management. A successful history of manipulating, processing and extracting value from large, disconnected datasets.
Experience with Git and CI/CD; containerization experience (e.g. Docker or OpenShift) is good to have. You should have: certifications in some of the core technologies; the ability to collaborate across different teams, geographies, stakeholders and levels of seniority; customer focus with an eye on continuous improvement; an energetic, enthusiastic and results-oriented personality; the ability to coach other team members – you must be a team player!; a strong will to overcome the complexities involved in developing and supporting data pipelines. Language requirements: English – fluent spoken and written (C1 level).

An array of benefits for you: Hybrid work arrangements to balance in-office collaboration and home flexibility. Annual leave: 42 days off apart from public/national holidays. Medical insurance: self + spouse + 2 children, with an option to opt for voluntary parental insurance (parents/parents-in-law) at a nominal premium covering pre-existing diseases. In-house training programs: professional and technical training certifications.

Posted 2 hours ago

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description and Requirements

Position Summary: The MetLife Corporate Technology (CT) organization is evolving to enable MetLife’s New Frontier strategy. With a strong vision in place, we are a global function focused on driving digital technology strategies for key corporate functions within MetLife, including Finance, Actuarial, Reinsurance, Legal, Human Resources, Employee Experience, Risk, Treasury, Audit and Compliance. In partnership with our business leaders, we develop and deliver seamless technology experiences to our employees across the entire employee lifecycle. Our vision and mission are to create innovative, transformative and contemporary technology solutions that empower our leaders and employees so they can focus on what matters most: our customers. We are technologists with strong business acumen, focused on developing our talent to continually transform and innovate.

As part of the Tech Talent Transformation (T3) agenda, MetLife is establishing a Technology Center in India. This technology center will operate as an integrated organization between onshore, offshore, and strategic vendor partners in an Agile delivery model. We are seeking a highly skilled, hands-on delivery engineer who will partner with Internal Audit leaders, third-party vendors and IT executives to lead global transformation projects with the goal of attracting, developing and retaining talent across the organization. This position will be part of a fast-paced IT team leveraging technology expertise that spans Java, React, Python, Azure and AI. He/she should be a strategic thinker, an effective communicator, and an expert in technological development.
Key Relationships: Internal stakeholders – Corporate Technology Control Functions Leader, Control Functions leadership team, India Corporate Technology AVP, and business process owners for Internal Audit.

Key Responsibilities: Stakeholder Management – Manage key business stakeholders to deliver the technology capabilities required to support the digital transformation agenda, and drive prioritization of the product backlog; this includes managing key vendors providing resources, SaaS and other capabilities. Technology Implementation – Implement and support projects on Internal Audit technology platforms, specifically Azure Cloud. Ways of Working – Adopt Agile ways of working in the software delivery lifecycle. End-to-end software lifecycle management (architecture, design, development, testing and production). Evaluate and implement technical solutions supporting Internal Audit and SaaS-based solutions, talent development, performance management, and workforce analytics. Work with functional experts to translate user requirements into technical specifications. Partner with internal business process owners, technical team members, and senior management throughout the project lifecycle. Act as the intermediary to facilitate a clear understanding among all parties of business assumptions and requirements, design, technical, testing, and production migration requirements. Drive the resolution and troubleshooting of issues during development and post-production support. Support day-to-day business enhancements.

Knowledge, Skills, and Abilities – Education: A bachelor's/master's degree in computer science or an equivalent engineering degree.
Candidate Qualifications – Education: Bachelor's degree in computer science, information systems or a related field. Experience – Required: 8+ years of experience in Controls Technology (Compliance, Audit, Legal, Risk) implementation and support, preferably with cloud-based solutions. Global SaaS-based Internal Audit or other control-functions technology implementation experience. Familiarity with the technology landscape supporting integration solutions such as Azure, Databricks and API Management. Prior lead role or project management experience. Experience in both front-end (e.g. React) and back-end technologies (e.g. Node.js, Python, Java), including RESTful API design and microservices architecture. Experience with MS Project, Visio, Excel, PowerPoint and related project delivery utilities. Preferred: Azure cloud certifications. OTBI and BI report development. Ability to manage systems testing, including unit, QA, end-to-end and user acceptance testing. Experience managing vendors to SLAs. Proven experience collaborating with peers to establish best practices that achieve high service levels.
Skills and Competencies – Communication: ability to influence, help communicate the organization’s direction and ensure results are achieved. Collaboration: proven track record of building collaborative partnerships and ability to operate effectively in a global environment. People management: inspiring, motivating and leading diverse and distributed teams. Diverse environment: can-do attitude and ability to work in a fast-paced environment.

Tech Stack – Development and delivery methods: Agile (Scaled Agile Framework), DevOps and CI/CD, Azure DevOps, JFrog. Development frameworks and languages: Java, React, SQL, Python. Azure: functional knowledge of cloud-based solutions. Development tools and platforms: test automation. Security and monitoring: authentication/authorization (CA SiteMinder, MS Entra, PingOne).

About MetLife: Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!

Posted 2 hours ago

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Company Description: Cint is a pioneer in research technology (ResTech). Our customers use the Cint platform to post questions and get answers from real people to build business strategies, confidently publish research, accurately measure the impact of digital advertising, and more. The Cint platform is built on the world’s largest programmatic marketplace, with nearly 300 million respondents in over 150 countries who consent to sharing their opinions, motivations, and behaviours. We are feeding the world’s curiosity!

Job Description: We are seeking a skilled and driven Business Intelligence Engineer to join our data team in India. In this role, you will collaborate with cross-functional teams to design, build, and maintain scalable data solutions that drive business insights and decision-making. This is a great opportunity to work on complex data challenges, contribute to infrastructure improvements, and help shape the future of BI within our organization.

What You Will Do: Collaborate with business and technical teams to gather and understand data requirements. Design, develop, test, and deploy robust business intelligence solutions. Define and document data models, structures, and data flows. Ensure data quality, completeness, and consistency across systems. Contribute to the growth and scalability of our data infrastructure. Perform troubleshooting and provide support for existing BI tools and systems. Participate in code reviews and promote best practices within the team. Provide 24/7 production support as part of the team’s on-call rotation. Work effectively across multiple time zones with minimal supervision.

Qualifications: Bachelor's degree in Computer Science, Information Systems, or equivalent work experience. 3+ years of experience in business intelligence, data engineering, or data warehousing. Strong proficiency in ETL pipelines and Python. Advanced SQL skills, including complex queries and performance tuning. Hands-on experience with cloud technologies such as AWS and Snowflake. Experience integrating and processing large volumes of structured and unstructured data from multiple sources. Excellent analytical, problem-solving, and data design skills. Ability to work collaboratively in a fast-paced, team-oriented environment.

Preferred Qualifications: Experience with star schema design, including fact tables, grain levels, and slowly changing dimensions (SCDs). Knowledge of multiple database programming languages and both relational and columnar database systems. Familiarity with the Microsoft BI Stack (SSRS, Power BI, Azure). Experience with Matillion, Databricks and PySpark. Experience with CI/CD pipelines and Infrastructure as Code (IaC) tools such as Terraform.

Additional Information – Our Values: Collaboration is our superpower: we uncover rich perspectives across the world; success happens together; we deliver across borders. Innovation is in our blood: we’re pioneers in our industry; our curiosity is insatiable; we bring the best ideas to life. We do what we say: we’re accountable for our work and actions; excellence comes as standard; we’re open, honest and kind, always. We are caring: we learn from each other’s experiences; stop and listen – every opinion matters; we embrace diversity, equity and inclusion.

More About Cint: We’re proud to be recognised in Newsweek’s 2025 Global Top 100 Most Loved Workplaces®, reflecting our commitment to a culture of trust, respect, and employee growth. In June 2021, Cint acquired Berlin-based GapFish – the world’s largest ISO-certified online panel community in the DACH region – and in January 2022 completed the acquisition of US-based Lucid, a programmatic research technology platform that provides access to first-party survey data in over 110 countries. Cint Group AB (publ) is listed on Nasdaq Stockholm, and this growth has made Cint a strong global platform with teams across its many global offices, including Stockholm, London, New York, New Orleans, Singapore, Tokyo and Sydney. (www.cint.com)

Posted 2 hours ago

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description – Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations and developing technical capability within the Production Specialists.

Do: Oversee and support the process by reviewing daily transactions on performance parameters. Review the performance dashboard and the scores for the team. Support the team in improving performance parameters by providing technical support and process guidance. Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions. Ensure standard processes and procedures are followed to resolve all client queries. Resolve client queries as per the SLAs defined in the contract. Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting. Document and analyze call logs to spot the most frequently occurring trends and prevent future problems. Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution. Ensure all product information and disclosures are given to clients before and after the call/email requests. Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries. Manage and resolve technical roadblocks/escalations as per SLA and quality requirements. If unable to resolve an issue, escalate it to TA & SES in a timely manner. Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions. Troubleshoot all client queries in a user-friendly, courteous and professional manner. Offer alternative solutions to clients (where appropriate) with the objective of retaining customers’ and clients’ business. Organize ideas and effectively communicate oral messages appropriate to listeners and situations. Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client. Mentor and guide Production Specialists on improving technical knowledge. Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists. Develop and conduct trainings (triages) within products for Production Specialists as per target. Inform the client about the triages being conducted. Undertake product trainings to stay current with product features, changes and updates. Enroll in product-specific and any other trainings per client requirements/recommendations. Identify and document the most common problems and recommend appropriate resolutions to the team. Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver – Performance Parameters and Measures:
1. Process – No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT.
2. Team Management – Productivity, efficiency, absenteeism.
3. Capability Development – Triages completed, Technical Test performance.

Mandatory Skills: Databricks - Data Engineering. Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention – of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA – as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 hours ago

3.0 - 6.0 years

0 - 3 Lacs

Pune, Jaipur, Mumbai (All Areas)

Work from Office

Designation: Sr. Machine Learning Engineer. Location: Mumbai/Pune. Experience: 3+ years.

We are seeking a skilled and enthusiastic Machine Learning Engineer with a minimum of 3 years of experience to join our dynamic team. The ideal candidate will have a strong background in Python programming, classical machine learning algorithms, and a solid understanding of core mathematical concepts. Additionally, proficiency in Databricks, PySpark, Azure ML pipelines and SDLC practices is highly desirable. The candidate should also possess hands-on experience with Azure and/or other cloud platforms.

Job Description:
1. Develop and implement machine learning models using Python and classical ML algorithms to address business challenges.
2. Collaborate with cross-functional teams to understand requirements and translate them into actionable technical solutions.
3. Utilize Databricks PySpark for scalable data processing and analysis tasks.
4. Monitor model performance and implement drift management strategies to ensure model effectiveness over time.
5. Stay updated with advancements in machine learning and contribute to the adoption of best practices within the team.
6. Conduct experiments with deep learning techniques and integrate them into the existing workflow where appropriate.

Must Have:
1. Bachelor’s or master’s degree in Computer Science, Engineering, Mathematics, or a related field.
2. Minimum of 3 years of experience in machine learning engineering or related roles.
3. Proficiency in the Python programming language and SQL.
4. Strong understanding of classical machine learning algorithms for classification, regression, time series forecasting and recommendation systems, and their applications.
5. Experience with Databricks PySpark for big data processing and analysis.
6. Experience with Azure Machine Learning pipelines and deploying REST APIs over AKS/ACR/App Service.
7. Familiarity with CI/CD pipelines, time series forecasting and predictive maintenance.
8. Familiarity with drift management techniques and tools.
9. Hands-on experience with Azure and other cloud platforms.
10. Knowledge of deep learning concepts and frameworks (e.g., TensorFlow, PyTorch).
11. Solid grasp of core mathematical concepts such as algebra, calculus, coordinate geometry and statistics.
12. Excellent problem-solving skills and the ability to work in a fast-paced environment.
13. Strong communication and interpersonal skills, with the ability to collaborate effectively within a team.

Preferred Qualifications:
1. Previous experience in the manufacturing industry.
2. Contributions to open-source projects or publications in the field of machine learning.

Posted 2 hours ago

8.0 - 13.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

We are seeking a detail-oriented and analytical Business Analyst to join our team. The ideal candidate will act as a bridge between business stakeholders and the technology team to ensure a clear understanding of business needs and successful delivery of technical solutions. This role involves gathering and documenting requirements, analyzing data and processes, and helping to deliver strategic and operational improvements.

Key Responsibilities: Work with stakeholders to understand business needs, objectives, and challenges. Elicit, document, and prioritize functional and non-functional requirements. Translate business needs into clear, actionable user stories or requirement specifications. Conduct gap analyses, SWOT assessments, and feasibility studies. Collaborate with cross-functional teams including developers, QA, and project managers. Create process models, workflows, and diagrams to support business understanding. Analyze and interpret large datasets to provide actionable insights. Support user acceptance testing (UAT) and validate that solutions meet business expectations. Monitor and report project progress, risks, and issues to stakeholders. Recommend process improvements and assist in change management initiatives.

Requirements & Qualifications: 5-8 years of experience as a Business Analyst or in a related role. Strong understanding of business process modeling, data analysis, and requirements gathering. Proficiency with tools like MS Excel, PowerPoint, JIRA, Confluence, Visio, or similar. Knowledge of SQL and data visualization tools (Power BI, Tableau) is a plus. Excellent verbal and written communication skills. Strong analytical thinking, problem-solving, and organizational abilities. Ability to work effectively both independently and as part of a team. Experience with Agile/Scrum or Waterfall methodologies. Domain knowledge in Healthcare & Life Sciences.

Tech Stack: AWS Cloud (S3, EC2, EMR, Lambda, IAM), Snowflake DB, Databricks (Spark/PySpark, Python). Good knowledge of Bedrock and Mistral AI.

Posted 2 hours ago

0.0 - 3.0 years

15 - 22 Lacs

Bengaluru, Karnataka

On-site

Job Title: Big Data Engineer. Experience: 3+ years. Location: Bengaluru (hybrid – 2-3 days onsite per week). Salary: ₹15–22 LPA (depending on experience and skill set). Employment Type: Full-time.

About the Role: We are seeking a skilled and detail-oriented Big Data Engineer with strong expertise in Apache Spark to join our dynamic data team in Bengaluru. This role offers the opportunity to work on large-scale data engineering projects that drive business decisions and innovation across the organization.

Key Responsibilities: Design and develop scalable data processing solutions using Apache Spark (Core, SQL, Streaming, MLlib). Build and optimize robust data pipelines and ETL processes for both structured and unstructured data. Collaborate with data scientists, analysts, and software engineers to integrate Spark-based data products into broader systems. Ensure data quality, integrity, and security across the data lifecycle. Monitor and troubleshoot Spark jobs, and continuously improve performance in production environments. Integrate Spark with major cloud platforms such as AWS, Azure, or GCP.

Required Skills and Qualifications: Bachelor’s or master’s degree in Computer Science, Engineering, or a related field. 3+ years of hands-on experience with Apache Spark in large-scale data environments. Strong programming proficiency in Scala, Python, or Java (preference for Scala). Experience with data storage systems like HDFS, Hive, HBase, S3, etc. Solid knowledge of SQL, Kafka, Airflow, and NoSQL databases. Understanding of distributed systems and parallel computing principles. Exposure to Docker/Kubernetes is a strong plus.

Preferred Qualifications: Certification in Big Data or Apache Spark (e.g., Databricks Certified Developer). Experience working in a DevOps/CI-CD environment. Familiarity with data warehousing tools like Snowflake or Redshift.

Job Types: Full-time, Permanent. Pay: ₹1,500,000.00 – ₹2,200,000.00 per year. Schedule: Day shift, Monday to Friday. Ability to commute/relocate: Bengaluru, Karnataka – reliably commute or plan to relocate before starting work (required). Education: Bachelor's (required). Experience: Big Data – 3 years (required); data storage – 3 years (required); Apache Hive – 3 years (preferred); Apache Spark – 3 years (required). Work Location: In person.

Posted 3 hours ago

3.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greetings from Colan Infotech!

Role: Data Scientist. Experience: 3-10 years. Job Location: Chennai/Bangalore. Notice Period: Immediate to 30 days.

Primary Skills Needed: AI/ML, TensorFlow, Django, PyTorch, NLP, image processing, Gen AI, LLMs. Secondary Skills Needed: Keras, OpenCV, Azure or AWS.

Job Description: Practical knowledge and working experience with statistics and operations research methods. Practical knowledge and working experience with tools and frameworks like Flask, PySpark, PyTorch, TensorFlow, Keras, Databricks, OpenCV, Pillow/PIL, Streamlit, D3.js, Dash/Plotly, Neo4j. Good understanding of how to apply predictive and machine learning techniques like regression models, XGBoost, random forest, GBM, neural nets, SVM, etc. Proficiency with NLP techniques like RNN, LSTM and attention-based models, and the ability to effectively handle readily available Stanford, IBM, Azure and OpenAI NLP models. Good understanding of SQL from the perspective of writing efficient queries to pull data from a database. Hands-on experience with a version control tool (GitHub, Bitbucket). Experience deploying ML models into a production environment (MLOps) on a cloud platform such as Azure or AWS. Comprehend business issues and propose valuable business solutions. Design statistical or AI/deep learning models to address business issues, and deploy statistical/ML/DL models to production. Determine what information is accessible, from where, and how to augment it. Develop innovative graphs for data comprehension using D3.js, Dash/Plotly and Neo4j.

Interested candidates, please send an updated resume to kumudha.r@colanonline.com

Posted 3 hours ago

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: SSE – DevOps Engineer. Mode of work: Work from Office. Experience: 4-10 years.

Know your team: At ValueMomentum’s Engineering Center, we are a team of passionate engineers who thrive on tackling complex business challenges with innovative solutions while transforming the P&C insurance value chain. We achieve this through a strong engineering foundation and by continuously refining our processes, methodologies, tools, agile delivery teams, and core engineering archetypes. Our core expertise lies in six key areas: Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise. Join a team that invests in your growth. Our Infinity Program empowers you to build your career with role-specific skill development, leveraging immersive learning platforms. You'll have the opportunity to showcase your talents by contributing to impactful projects.

Requirements – Must Have: 5+ years in DevOps with strong data pipeline experience. Build and maintain CI/CD pipelines for Azure Data Factory and Databricks notebooks. The role demands deep expertise in Databricks, including the automation of unit, integration, and QA testing workflows. Strong data architecture skills are also essential, as the position involves implementing CI/CD pipelines for schema updates. Strong experience with Azure DevOps Pipelines, YAML builds, and release workflows. Proficiency in scripting and infrastructure-as-code tooling such as Python, PowerShell and Terraform. Working knowledge of Azure services: ADF, Databricks, DABs, ADLS Gen2, Key Vault, ADO. Maintain infrastructure-as-code practices. Collaborate with Data Engineers and Platform teams to maintain development, staging, and production environments. Monitor and troubleshoot pipeline failures and deployment inconsistencies.

About ValueMomentum: ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities.
We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry. Our culture – Our fuel At ValueMomentum, we believe in making employees win by nurturing them from within, collaborating and looking out for each other. People first - Empower employees to succeed. Nurture leaders - Nurture from within. Enjoy wins – Recognize and celebrate wins. Collaboration – Foster a culture of collaboration and people-centricity. Diversity – Committed to diversity, equity, and inclusion. Fun – Create a fun and engaging work environment. Warm welcome – Provide a personalized onboarding experience. Company Benefits Compensation - Competitive compensation package comparable to the best in the industry. Career Growth - Career development, comprehensive training & certification programs, and fast track growth for high potential associates. Benefits: Comprehensive health benefits and life insurance.

Posted 4 hours ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Business Agility Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders. Roles & Responsibilities: - Databricks experience with Azure cloud is required. - Expected to perform independently and become an SME. - Active participation and contribution in team discussions is required. - Contribute to providing solutions for work-related problems. - Collaborate with data architects and analysts to design scalable data solutions. - Implement best practices for data governance and security throughout the data lifecycle. Professional & Technical Skills: - Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Business Agility. - Strong understanding of data modeling and database design principles. - Experience with data integration tools and ETL processes. - Familiarity with cloud platforms and services related to data storage and processing. 
Additional Information: - The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform. - This position is based at our Pune office. - 15 years of full-time education is required.
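The pipeline work described above boils down to extract–transform–load stages with a quality gate in the middle. A minimal, library-free sketch of that shape (the column names and the rejection rule are invented for illustration; on Databricks the same stages would typically be PySpark transformations writing to Delta tables):

```python
import csv
import io

def extract(csv_text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: standardize types and drop rows failing basic quality checks."""
    out = []
    for r in rows:
        if not r["id"] or not r["amount"]:
            continue                          # quality gate: reject incomplete rows
        out.append({"id": int(r["id"]), "amount": round(float(r["amount"]), 2)})
    return out

def load(rows, target):
    """Load: append to the sink (a stand-in for a Delta/warehouse write)."""
    target.extend(rows)
    return len(rows)

raw = "id,amount\n1,10.5\n,3.0\n2,7.25\n"      # second row is incomplete
sink = []
loaded = load(transform(extract(raw)), sink)
print(loaded, sink)
```

The row with a missing `id` is silently dropped here; a production pipeline would more likely quarantine rejects into a separate table for review.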

Posted 4 hours ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


I am thrilled to share an exciting opportunity with one of our esteemed clients! 🚀 Join me in exploring new horizons and unlocking potential, if you're ready for a challenge and growth. Experience: 7+ years Location: Chennai, Hyderabad Immediate joiners only; work from office. Mandatory skills: SQL, Python, PySpark, Databricks (strong in core Databricks), AWS (AWS is mandatory) JD: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations. Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data. Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3). Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks. Regards, R Usha usha@livecjobs.com
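The data quality checks and validation rules mentioned in the JD can be sketched as small rule functions that each return the offending row indexes, so a batch passes only when every rule comes back empty. A toy stdlib version (the `order_id` column is hypothetical; in practice these checks would run in PySpark or a framework such as Great Expectations):

```python
def check_not_null(rows, col):
    """Return indexes of rows where the column is missing or empty."""
    return [i for i, r in enumerate(rows) if r.get(col) in (None, "")]

def check_unique(rows, col):
    """Return indexes of rows whose column value was already seen."""
    seen, dupes = set(), []
    for i, r in enumerate(rows):
        v = r.get(col)
        if v in seen:
            dupes.append(i)
        seen.add(v)
    return dupes

def run_checks(rows):
    # each rule maps to the offending row indexes; all-empty means the batch passes
    return {
        "order_id_not_null": check_not_null(rows, "order_id"),
        "order_id_unique": check_unique(rows, "order_id"),
    }

batch = [{"order_id": "A1"}, {"order_id": "A1"}, {"order_id": ""}]
print(run_checks(batch))
```

Failing indexes rather than a bare pass/fail makes it easy to quarantine bad rows instead of rejecting the whole batch.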

Posted 4 hours ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Data Scientist II - Analytics At Expedia Group, our mission is to power global travel for everyone, everywhere. If you're passionate about creating exceptional customer experiences and thrive in a collaborative, fast-paced, high-growth environment, you’ll love being part of Expedia Group’s InsurTech team. In InsurTech, we empower travelers around the world with confidence to book, peace of mind, and protection when the unexpected happens—through an innovative portfolio of customer-centric, risk-based products. We’re looking for an exceptional Data Scientist II, Analytics to join our team. If you have a strong analytical mindset, a strategic approach, and a bias for action, this is your opportunity to make a real impact. In this role, you’ll collaborate closely with product partners to optimize and launch best-in-class insurance and FinTech products. This is an exciting opportunity to join a dynamic, international team where data is at the heart of every decision, and customer centricity drives everything we do. 
What You Will Do Collaborate with product teams to harness data from diverse sources—uncovering new product opportunities or identifying friction points in existing experiences. Translate insights into actionable strategies that drive product innovation and enhancement. Design and execute A/B tests to rigorously evaluate product performance and feature improvements. Quantify outcomes into clear financial impact, and perform in-depth analysis to extract insights that inform the next iteration of development. Define, track, and visualize key metrics through intuitive dashboards to enable real-time business monitoring. Proactively surface metric shifts and conduct root-cause analyses to identify underlying drivers. Deliver impactful product and customer insights to cross-functional stakeholders, empowering leadership and product teams to make informed, data-driven decisions. Support product roadmap planning and go-to-market (GTM) strategy by providing analytical guidance and strategic recommendations that align with business objectives and customer needs. Who You Are Must Have: Educational Background: Bachelor’s degree (or equivalent) in Statistics, Mathematics, Economics, Data Science, or a related field. Experience: 2+ years of analytics experience with a focus on product analytics and measurements. Technical Expertise: Strong proficiency in big data tools (SQL, Python, Databricks, AWS, etc.) for working with large, complex datasets. Cross-functional Collaboration: Proven track record of collaborating with teams across Product, Finance, and other departments to integrate data insights into business strategies. Problem-Solving & General Management: Strong analytical, problem-solving, and leadership skills to manage complex initiatives and cross-functional relationships. 
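Designing and evaluating A/B tests, as described above, usually starts with a two-proportion z-test on conversion rates between control and treatment. A self-contained sketch (the sample sizes and conversion counts are made up for illustration):

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Z-test for the difference of two conversion rates, using the
    pooled-proportion standard error; returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF: Phi(x) = 0.5*(1+erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# hypothetical experiment: 5.0% vs 6.5% conversion on 2,400 users per arm
z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 2), round(p, 4))
```

As the posting notes, significance is only the starting point; the analysis would then quantify the lift's financial impact and dig into segment-level drivers.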
Good To Have Advanced Analytical Skills: Expertise in experimentation, causal inference, and advanced analytics, including the ability to dive deeper into A/B test data beyond statistical significance. Economic Modeling: Knowledge of economic modeling techniques such as price elasticity and time series analysis is beneficial. Multivariate Analysis: Familiarity with regression, classification, and other multivariate analysis methods. Machine Learning Models: Exposure to machine learning models such as recommendations, multi-armed bandit algorithms, or reinforcement learning. Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. 
Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 4 hours ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Greetings from Fresh Gravity About Fresh Gravity: Founded in 2015, Fresh Gravity helps businesses make data-driven decisions. We are driven by data and its potential as an asset to drive business growth and efficiency. Our consultants are passionate innovators who solve clients' business problems by applying best-in-class data and analytics solutions. We provide a range of consulting and systems integration services and solutions to our clients in the areas of Data Management, Analytics and Machine Learning, and Artificial Intelligence. In the last 10 years, we have put together an exceptional team and have delivered 200+ projects for over 80 clients ranging from startups to several Fortune 500 companies. We are on a mission to solve some of the most complex business problems for our clients using some of the most exciting new technologies, providing the best of learning opportunities for our team. We are focused and intentional about building a strong corporate culture in which individuals feel valued, supported, and cared for. We foster an environment where creativity thrives, paving the way for groundbreaking solutions and personal growth. Our open, collaborative, and empowering work culture is the main reason for our growth and success. To know more about our culture and employee benefits, visit our website https://www.freshgravity.com/employee-benefits/ . We promise rich opportunities for you to succeed, to shine, to exceed even your own expectations. We are data driven. We are passionate. We are innovators. We are Fresh Gravity. Requirements Strong foundation in machine learning theory and the full model development lifecycle Proficient in Python and libraries: scikit-learn, pandas, numpy, transformers. Hands-on with TensorFlow and PyTorch for deep learning Experience using MLflow for tracking, versioning, and deployment. 
Working knowledge of ETL, SQL, and data modeling. Practical experience with Azure services, including Azure Databricks, Azure Machine Learning / MLflow, Azure Data Factory, Azure Data Lake / Blob Storage, and Azure OpenAI. Familiar with MLOps, version control, and pipeline automation. Strong communication skills and experience in Agile, cross-functional teams. Benefits In addition to a competitive package, we promise rich opportunities for you to succeed, to shine, to exceed even your own expectations. In keeping with Fresh Gravity's challenger ethos, we have developed the 5Dimensions (5D) benefits program. This program recognizes the multiple dimensions within each of us and seeks to provide opportunities for deep development across these dimensions: Enrich Myself; Enhance My Client; Build My Company; Nurture My Family; and Better Humanity.
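The MLflow requirement above is about tracking runs: parameters logged once per run, metrics logged as a per-step history. The minimal stand-in below shows that pattern in plain Python; a real project would call MLflow's `mlflow.log_param` / `mlflow.log_metric` against a tracking server rather than keep a dict in memory, and the run name and metric values here are hypothetical.

```python
import time

class RunTracker:
    """Minimal stand-in for the experiment-tracking pattern MLflow provides:
    params are set once per run, metrics accumulate a (step, value) history."""

    def __init__(self, run_name):
        self.run = {"run_name": run_name, "start_time": time.time(),
                    "params": {}, "metrics": {}}

    def log_param(self, key, value):
        self.run["params"][key] = value

    def log_metric(self, key, value, step=0):
        # metrics form a time series, mirroring MLflow's metric history
        self.run["metrics"].setdefault(key, []).append((step, value))

tracker = RunTracker("churn-model-v1")          # hypothetical run name
tracker.log_param("max_depth", 6)
for step, auc in enumerate([0.71, 0.74, 0.76]):
    tracker.log_metric("val_auc", auc, step=step)
print(tracker.run["metrics"]["val_auc"][-1])    # latest (step, value) pair
```

Keeping metrics as a history rather than a single value is what lets a tracking UI plot learning curves and compare runs side by side.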

Posted 4 hours ago

Apply

15.0 years

0 Lacs

India

Remote


Company Description At Simbus, we bring Richard Branson’s saying to life: “Train people well enough so they can leave, treat them well enough so they don't want to.” We empower businesses through results-driven IT consulting services focusing on Kinaxis Maestro Supply Chain Planning and Databricks platforms. With 15 years of expertise in Supply Chain Planning, we deliver high-impact solutions to enhance agility, boost efficiency, and enable smarter decision-making. Our strategic blend of supply chain and data capabilities ensures seamless transformation for our clients. We have aggressive plans to expand globally, with a specific focus on the USA, and are looking for an experienced Sales Head to drive our next phase of growth. Role Description This is a full-time remote role for a GM/VP - IT Services Sales at Simbus Technologies Pvt. Ltd. The candidate will be responsible for driving sales and revenue growth within the IT services domain, particularly focusing on Kinaxis Maestro and Databricks platforms. Key deliverables include developing and executing sales strategies, identifying new sales opportunities and closing them, and managing client relationships. The candidate will work closely with the consulting and delivery teams to ensure client satisfaction and success. The role is part of the senior management team of the company and will play a critical role in our scaling-up journey. WHAT YOU’LL BRING: Bachelor’s degree in Business, Marketing, or a related field; MBA preferred. Minimum of 10 years of experience in sales, with at least 5 years in a leadership role. Proven track record of achieving sales targets in the IT services industry, preferably in the Supply Chain / Data Engineering / Data Analytics domain. Experience in selling to mid-sized companies in India and the US. Strong understanding of the end-to-end sales cycle in the IT services industry. 
Proven track record of exceeding sales targets of more than USD 1.5 million annually. Excellent communication, negotiation, and interpersonal skills. Ability to work independently and manage multiple priorities. Proficiency in CRM software and sales analytics tools. Exceptional English communication skills, both spoken and written. WHAT YOU’LL DO: Develop and execute strategic sales plans to achieve revenue targets for IT services in Kinaxis and Databricks. Identify and pursue new business opportunities in the Indian and US markets. Manage the entire sales cycle from lead generation to closing deals. Build and maintain strong relationships with key clients and stakeholders. Provide regular sales forecasts and reports to senior management. Stay updated on industry trends and competitor activities. Network and build contacts by attending industry events. WHAT WE’LL OFFER YOU: An open, transparent work culture with a focus on employee delight. Opportunity to work on challenging projects. A strong process-oriented work environment using modern work tools to drive efficiency in service delivery. A work-from-home policy focused on optimum work-life balance. While the role is remote, there will be both domestic and international travel to the extent of 7 to 10 days in a month. Industry-leading pay scale with benefits. A collaborative work culture with emphasis on learning and innovation.

Posted 5 hours ago

Apply

4.0 - 7.0 years

15 - 20 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid


Must have excellent coding skills in either Python or Scala, preferably Python. Must have at least 5+ years of experience in the Data Engineering domain, with 7+ years in total. Must have implemented at least 2 projects end-to-end in Databricks. Must have at least 2+ years of experience on Databricks, covering components such as: Delta Lake, dbConnect, DB API 2.0, and Databricks workflows orchestration.

Posted 5 hours ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data Healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. We are looking for a Data Scientist with industry experience in applying a variety of machine learning solutions to real-world large-scale data to build intelligent systems. Healthcare background is a plus. Passion for travel can help you score some brownie points. THE THINGS YOU'LL BE DOING ▶ Design scalable solutions for real-time performance on a significantly large data set. Use big data technologies to optimally use infrastructure and improve performance. ▶ Build intelligent systems to capture and model the vast amount of behavioral data to enrich the content understanding with behavioral information ▶ Work with the business leaders and customers to understand their pain-points and build large-scale solutions for them. ▶ Define technical architecture to productize Innovaccer's machine-learning algorithms and take them to market with partnerships with different organizations ▶ Work with our data platform and applications team to help them successfully integrate the data science capability or algorithms in their product/workflows. ▶ Work with customers and BI experts to build out reports and dashboards that are most useful to customers ▶ Work with development teams to build tools for repeatable data tasks that will accelerate and automate development cycle. 
▶ Define and execute on the roadmap Requirements ▶ Master's in Computer Science, Computer Engineering or other relevant fields (PhD preferred) ▶ 2+ years of experience in Data Science (healthcare experience will be a plus) ▶ Strong written and spoken communication skills ▶ Strong hands-on experience in SQL and Python (Pandas, Scikit-learn) ▶ Experience working with classical ML techniques - XGBoost, clustering, feature engineering ▶ Experience in at least one deep learning framework like PyTorch/TensorFlow, and using LLMs in GenAI workflows ▶ Good to have - knowledge of SageMaker/Databricks ▶ Experience containerizing models with Docker, Git, APIs, etc. ▶ Ability to work with and influence multiple stakeholders and deliver solutions in a given time frame Benefits Here's What We Offer Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days. Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition. Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered. Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. 
If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details. About Innovaccer Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure, extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube , Glassdoor , LinkedIn , Instagram , and the Web .
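Among the requirements above, feature engineering for classical ML often begins with one-hot encoding categorical columns. A tiny pure-Python sketch of the idea (the column and category names are invented; in practice `pandas.get_dummies` or scikit-learn's `OneHotEncoder` would do this at scale):

```python
def one_hot(rows, col):
    """One-hot encode a categorical column across a list of row dicts,
    replacing the original column with one 0/1 indicator per category."""
    categories = sorted({r[col] for r in rows})    # stable, sorted category order
    encoded = []
    for r in rows:
        new = {k: v for k, v in r.items() if k != col}
        for c in categories:
            new[f"{col}_{c}"] = 1 if r[col] == c else 0
        encoded.append(new)
    return encoded

# hypothetical healthcare-flavored rows
rows = [{"age": 34, "plan": "hmo"}, {"age": 51, "plan": "ppo"}]
print(one_hot(rows, "plan"))
```

Deriving the category set from the data, as here, means unseen categories at inference time need separate handling; real encoders fit the category list on training data and hold it fixed.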

Posted 5 hours ago

Apply


3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Designation – Sr. Machine Learning Engineer
Locations – Pune, Mumbai and Jaipur
Experience – 3+ years
Positions – Multiple

We are seeking a skilled and enthusiastic Machine Learning Engineer with a minimum of 3 years of experience to join our dynamic team. The ideal candidate will have a strong background in Python programming, classical machine learning algorithms, and a solid understanding of core mathematical concepts. Additionally, proficiency in Databricks, PySpark, Azure ML pipelines, and SDLC practices is highly desirable. The candidate should also possess hands-on experience with Azure and/or other cloud platforms.

Job Description:
1. Develop and implement machine learning models using Python and classical ML algorithms to address business challenges.
2. Collaborate with cross-functional teams to understand requirements and translate them into actionable technical solutions.
3. Utilize Databricks PySpark for scalable data processing and analysis tasks.
4. Monitor model performance and implement drift management strategies to ensure model effectiveness over time.
5. Stay updated with advancements in machine learning and contribute to the adoption of best practices within the team.
6. Conduct experiments with deep learning techniques and integrate them into the existing workflow where appropriate.

Must Have:
1. Bachelor's or master's degree in Computer Science, Engineering, Mathematics, or a related field.
2. Minimum of 3 years of experience in machine learning engineering or related roles.
3. Proficiency in the Python programming language and SQL.
4. Strong understanding of classical machine learning algorithms for classification, regression, time series forecasting, and recommendation systems, and their applications.
5. Experience with Databricks PySpark for big data processing and analysis.
6. Experience with Azure Machine Learning Pipelines and deploying REST APIs over AKS/ACR/App Service.
7. Familiarity with CI/CD pipelines, time series forecasting, and predictive maintenance.
8. Familiarity with drift management techniques and tools.
9. Hands-on experience with Azure and other cloud platforms.
10. Knowledge of deep learning concepts and frameworks (e.g., TensorFlow, PyTorch).
11. Solid grasp of core mathematical concepts such as algebra, calculus, coordinate geometry, and statistics.
12. Excellent problem-solving skills and ability to work in a fast-paced environment.
13. Strong communication and interpersonal skills, with the ability to collaborate effectively within a team.
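Drift management, mentioned in both the responsibilities and the requirements above, typically starts by comparing the live feature distribution against the training distribution. One common signal is the Population Stability Index (PSI); a self-contained sketch follows (the bin count and the ">0.2 flags drift" threshold are conventional rules of thumb, not fixed standards, and the data is synthetic):

```python
from math import log

def psi(expected, actual, bins=4):
    """Population Stability Index between a training (expected) sample and a
    live (actual) sample of one numeric feature. Values above ~0.2 are
    commonly treated as a drift alert."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]  # equal-width bins

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # floor fractions to avoid log(0) on empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * log(ai / ei) for ei, ai in zip(e, a))

train = [float(i) for i in range(100)]
live_shifted = [x + 60 for x in train]        # synthetic upward drift
print(round(psi(train, train), 4), round(psi(train, live_shifted), 4))
```

In a Databricks setting this comparison would usually run as a scheduled job over recent scoring data, alerting when the index crosses the chosen threshold.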

Posted 5 hours ago

Apply

2.0 - 4.0 years

0 Lacs

Gurgaon, Haryana, India

Remote


Job Profile : Data Engineer -I/II - IN (Operations/ Support) Work Timings : 24x7 (IST) Work Location : Remote Job Description Summary The Data Engineer is responsible for managing and operating Tableau, Tableau Bridge server, Databricks, dbt, SQL, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI. The engineer will work closely with the customer and team lead to manage and operate the cloud data platform. JOB COMPLEXITY: This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research of various sources such as Databricks/AWS/Tableau documentation may be required to identify and resolve issues. Must have the ability to prioritize issues and multi-task. EXPERIENCE/EDUCATION: Requires a Bachelor’s degree in computer science or another related field, plus 2-4 years of hands-on experience in configuring and managing Tableau/Databricks and SQL-based data analytics solutions. Experience with a Tableau/Databricks and SQL data warehouse environment is desired. Knowledge/Skills: Good hands-on experience with Tableau, Tableau Bridge server, Databricks, SSRS/SSIS, AWS DWS, AWS AppFlow, and Power BI. Ability to read and write SQL and stored procedures. Experience on AWS. Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills. Excellent written and verbal communication skills. Ability to communicate technical info and ideas so others will understand. Ability to successfully work and promote inclusiveness in small groups. Job Responsibilities: Troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example failed batches, and infra-related items such as connectivity to source, network issues, etc.) 
Knowledge Management: Create/update runbooks as needed / Entitlements Governance: Watch all the configuration changes to batches and infrastructure (cloud platform) along with mapping it with proper documentation and aligning resources Communication: Lead and act as a POC for customer from off-site, handling communication, escalation, isolating issues and coordinating with off-site resources while level setting expectation across stakeholders Change Management: Align resources for on-demand changes and coordinate with stakeholders as required Request Management: Handle user requests – if the request is not runbook-based create a new KB or update runbook accordingly Incident Management and Problem Management, Root cause Analysis, coming up with preventive measures and recommendations such as enhancing monitoring or systematic changes as needed

Posted 6 hours ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together Primary Responsibilities Machine Learning Model Development: Assist in designing and developing machine learning models to address business challenges Perform data preprocessing and feature engineering Support model selection and initial optimization Model Deployment and Monitoring: Assist in deploying machine learning models into production environments Monitor model performance and suggest updates and improvements Ensure models are reliable and maintainable Data Science and Data Engineering: Conduct exploratory data analysis and visualize data insights Help develop and maintain data pipelines for efficient data processing Work with data engineers to ensure data quality and integrity Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Bachelor's degree in Computer Science, Data Science, Engineering, or a related field 3+ years of experience in AI/ML engineering, data science, or a related role Experience in developing and deploying machine learning models Experience with natural language processing (NLP) and large language models (LLMs) Experience in Data Manipulation and Analysis (e.g., Pandas, NumPy, Matplotlib, Jupyter Notebooks, Databricks, AI Studio) Proficiency in data manipulation and analysis using SQL/NoSQL, Pandas, or similar tools/technologies Familiarity with cloud platforms (e.g., AWS, Azure, GCP) for model deployment Solid problem-solving skills and attention to detail Proven good communication skills Knowledge of big data technologies (e.g., Hadoop, Spark, Hive, Kafka) Solid programming skills in Python, R, Spark/PySpark, Scala, SQL Familiarity with Machine Learning Frameworks and Libraries (e.g., PyTorch, scikit-learn, TensorFlow, MLflow, Keras, XGBoost) Basic understanding of data engineering, ETL processes, and data warehousing Knowledge of visualization tools (e.g., Tableau, Power BI) At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
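The preprocessing and feature-engineering duties this listing describes can be illustrated with a minimal sketch. It is written in plain Python so it is self-contained; in practice the listing's named tools (Pandas, scikit-learn) would do this work, and the function and column names here are invented for illustration.

```python
# Minimal preprocessing sketch: min-max scaling for a numeric feature and
# one-hot encoding for a categorical one. Plain Python stand-ins for what
# Pandas/scikit-learn would normally handle; names are illustrative only.

def min_max_scale(values):
    """Scale numeric values into [0, 1]; a constant column maps to 0.0."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return [(v - lo) / span if span else 0.0 for v in values]

def one_hot(values):
    """Encode categories as 0/1 indicator vectors, in sorted category order."""
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

ages = [25, 40, 55]
plans = ["basic", "premium", "basic"]
scaled = min_max_scale(ages)   # [0.0, 0.5, 1.0]
encoded = one_hot(plans)       # [[1, 0], [0, 1], [1, 0]]
```

The same two steps, applied per column of a DataFrame, are what "data preprocessing and feature engineering" typically means before model selection.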

Posted 6 hours ago


8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: Senior Data Engineer Location: Noida | Gurgaon | Hyderabad | Bangalore (Hybrid – 2 days/month in office) Experience: 8+ years Employment Type: Full-time | Hybrid Skills: PySpark | Databricks | ADF | Big Data | Hadoop | Hive About the Role: We are looking for a highly experienced and results-driven Senior Data Engineer to join our growing team. This role is ideal for a data enthusiast who thrives in managing and optimizing big data pipelines using modern cloud and big data tools. You’ll play a key role in designing scalable data architectures and enabling data-driven decision-making across the organization. Key Responsibilities: Design, build, and maintain scalable and efficient data pipelines using PySpark and Databricks Develop ETL workflows and orchestrate data pipelines using Azure Data Factory (ADF) Work with structured and unstructured data across the Hadoop ecosystem (HDFS, Hive, Spark) Optimize data processing and storage for high performance and reliability Collaborate with data scientists, analysts, and business teams to ensure data availability and quality Implement data governance, data quality, and security best practices Monitor and troubleshoot production data pipelines and jobs Document technical solutions and standard operating procedures Required Skills & Qualifications: 8+ years of hands-on experience in data engineering and big data technologies Proficiency in PySpark, Databricks, and Azure Data Factory (ADF) Strong experience with Big Data technologies: Hadoop, Hive, Spark, HDFS Solid understanding of data modeling, warehousing concepts, and performance tuning Familiarity with cloud data platforms, preferably Azure Strong SQL skills and experience in managing large-scale data systems Excellent problem-solving, debugging, and communication skills Nice to Have: Experience with Delta Lake, Apache Airflow, or Kafka Exposure to CI/CD for data pipelines Knowledge of data lake architectures and data mesh principles
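The core pipeline work this role describes, filtering invalid records and aggregating the rest, can be sketched as follows. This is a hedged, plain-Python stand-in so it runs anywhere; on the job the same logic would live in a PySpark job on Databricks (roughly `df.filter(...).groupBy(...).agg(...)`), and the field names (`day`, `amount`) are invented for the example.

```python
# Toy ETL transform: drop invalid rows, then sum `amount` per `day`.
# Mirrors a PySpark filter + groupBy/agg, but in plain Python for portability.
from collections import defaultdict

def daily_totals(rows, min_amount=0):
    """Filter out null/out-of-range amounts, then aggregate totals per day."""
    totals = defaultdict(float)
    for row in rows:
        if row.get("amount") is None or row["amount"] < min_amount:
            continue  # filter step: discard nulls and out-of-range amounts
        totals[row["day"]] += row["amount"]  # aggregate step
    return dict(totals)

events = [
    {"day": "2024-01-01", "amount": 10.0},
    {"day": "2024-01-01", "amount": 5.0},
    {"day": "2024-01-02", "amount": None},   # dropped by the filter
]
daily_totals(events)  # {"2024-01-01": 15.0}
```

The data-quality checks the listing asks for are the `filter` half of this pattern; the reporting and warehousing work is the `aggregate` half.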

Posted 6 hours ago


7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Us: Chryselys is a Pharma Analytics & Business consulting company that delivers data-driven insights leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. We specialize in digital technologies and advanced data science techniques that provide strategic and operational insights. Who we are: People - Our team of industry veterans, advisors and senior strategists have diverse backgrounds and have worked at top tier companies. Quality - Our goal is to deliver the value of a big five consulting company without the big five cost. Technology - Our solutions are business centric, built on cloud-native technologies. Key Responsibilities and Core Competencies: · You will be responsible for managing and delivering multiple Pharma projects. · Leading a team of at least 8 members, resolving their technical and business-related problems and other queries. · Responsible for client interaction: requirements gathering, creating required documents, development, and quality assurance of the deliverables. · Collaborate closely with onshore teams and senior stakeholders. · Should have a fair understanding of Data Capabilities (Data Management, Data Quality, Master and Reference Data). · Exposure to project management methodologies, including Agile and Waterfall. · Experience working on RFPs would be a plus. Required Technical Skills: · Proficient in Python, PySpark, SQL · Extensive hands-on experience in big data processing and cloud technologies like AWS and Azure services, Databricks, etc. · Strong experience working with cloud data warehouses like Snowflake, Redshift, Azure, etc. · Good experience in ETL, data modelling, and building ETL pipelines. · Conceptual knowledge of relational database technologies, Data Lakes, Lakehouses, etc. · Sound knowledge of data operations, quality, and data governance. Preferred Qualifications: · Bachelor's or Master's in Engineering/MCA or an equivalent degree. 
· 7+ years of experience as a Data Engineer, with at least 2 years managing medium- to large-scale programs. · Minimum 5 years of Pharma and Life Science domain exposure with IQVIA, Veeva, Symphony, IMS, etc. · High motivation, good work ethic, maturity, self-organization, and personal initiative. · Ability to work collaboratively and provide support to the team. · Excellent written and verbal communication skills. · Strong analytical and problem-solving skills. Location · Preferably Hyderabad, India How to Apply: Ready to make an impact? Apply now by clicking [here] or visit our careers page at https://chryselys.com/chryselys-career/ Please include your resume and a cover letter detailing why you’re the perfect fit for this role. Equal Employment Opportunity: Chryselys is proud to be an Equal Employment Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Connect with Us: Follow us for updates and more opportunities: https://www.linkedin.com/company/chryselys/mycompany/ Discover more about our team and culture: www.chryselys.com

Posted 7 hours ago


3.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Overview Impetus Technologies enables the Intelligent Enterprise™ with innovative data engineering, cloud, and enterprise AI services. Recognized as an AWS Advanced Consulting Partner, Elite Databricks Consulting Partner, Data & AI Solutions Microsoft Partner, and Elite Snowflake Services Partner, Impetus offers a comprehensive suite of cutting-edge IT services and solutions to drive innovation and transformation for businesses across various industries. With a proven track record with Fortune 500 clients, Impetus drives growth, enhances efficiency, and ensures a competitive edge through continuous innovation and flawless, zero-defect delivery. Website http://www.impetus.com Position - BI Developer Nature of the job - Contract to Hire Skills & Qualifications: Education: Bachelor’s degree in Computer Science, Information Systems, Business Analytics, or a related field. Technical Skills: Proficiency in SQL and query optimization for complex data sets. Experience with BI tools such as Power BI, Tableau, QlikView, or similar. Experience in data warehousing, ETL development, and data modeling. Familiarity with programming/scripting languages such as Python or R is a plus. Knowledge of cloud platforms like AWS, Azure, or Google Cloud is beneficial. Experience: 3+ years of experience in BI development or similar roles. Proven track record in developing BI solutions and delivering actionable business insights. Analytical Skills: Strong ability to interpret data and provide actionable insights. Experience in identifying trends, patterns, and anomalies in large data sets. Soft Skills: Strong communication skills, with the ability to explain complex data insights to non-technical stakeholders. Strong problem-solving abilities and attention to detail.
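The "SQL and query optimization" proficiency this listing asks for can be shown with a small self-contained example. It uses Python's built-in SQLite so it runs without any setup; the table, column, and index names are invented for the illustration, not taken from any real schema.

```python
# Hedged SQL sketch: a grouped aggregation of the kind BI dashboards are built
# on, plus an index on the grouping column (a common first optimization step).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("south", 250.0), ("north", 50.0)],
)
# Index the column used for grouping/filtering to avoid full-table scans.
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")

totals = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
# totals == [("north", 150.0), ("south", 250.0)]
```

On an engine like SQLite, `EXPLAIN QUERY PLAN` before and after creating the index is the usual way to confirm the optimization actually changed the access path.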

Posted 7 hours ago


0.0 - 2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Roles & Responsibilities Understanding of Azure data services, including Azure SQL Database, Azure Data Lake Storage, and Azure Databricks Knowledge of data integration and ETL concepts Familiarity with SQL and programming languages such as Python or Scala Basic understanding of data modeling and database design. Good communication skills. Experience 0-2 Years Skills Primary Skill: Data Engineering Sub Skill(s): Data Engineering Additional Skill(s): databricks, Azure Data Factory, Pyspark About The Company Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.

Posted 7 hours ago


5.0 years

0 Lacs

Vadodara, Gujarat, India

On-site


About Rearc At Rearc, we empower engineers to build incredible products and experiences. We believe in challenging the status quo and fostering a culture of ownership, innovation, and impact. Since our founding in 2016, we’ve built a team of problem-solvers who thrive in collaborative environments and value the freedom to think creatively. If you're a cloud professional who wants to make a difference, grow your career, and work on impactful projects in a supportive and innovative culture, we’d love to hear from you. What Makes Rearc Special? Engineering-Driven Culture: We prioritize technical excellence and invest in our engineers through certifications, courses, and professional development. Work-Life Balance: No micromanagement, no on-call requirements, and a supportive, down-to-earth team. Cutting-Edge Technology: Work on high-impact projects using modern cloud tools and services Career Growth: A structured path to help you achieve your professional goals in a fast-growing company. Collaborative Environment: Be part of a global team solving complex challenges in regulated enterprise environments. As a Senior Cloud Engineer at Rearc, you will play a key role in designing and building modern, cloud-native platforms that are reliable, scalable, and secure. This is a hands-on role that requires a deep understanding of cloud technologies, particularly AWS, and the ability to implement solutions in highly regulated enterprise environments. You’ll work closely with our engineering team and customers to solve complex challenges while maintaining a strong focus on scalability, security, and best practices. Your creativity and problem-solving skills will be instrumental in delivering real impact, and your passion for innovation will thrive in our engineering-first, collaborative culture. What You Bring 5+ years of hands-on cloud engineering experience (AWS preferred; GCP or Azure as a bonus), including expertise in compute, storage, networking, and security services. 
Strong knowledge of AWS CDK, IAM, and S3. Proficiency in Kubernetes and containerization tools such as Docker or Amazon ECS. Familiarity with cloud-native architectures and experience implementing infrastructure-as-code using tools like Terraform, CloudFormation, or AWS CDK. Solid programming skills in a high-level language such as Python or JavaScript. Experience building CI/CD pipelines using tools such as GitHub Actions, GitLab CI, or CodePipeline. Bonus: Knowledge of data engineering tools such as Snowflake, Databricks, or Glue, and experience with large-scale data ingestion pipelines. A proactive attitude with a passion for continuous learning and innovation. Strong communication skills and a collaborative, team-oriented mindset. What You'll Do Lead the design and implementation of cloud-native platforms, focusing on reliability, scalability, and security. Evaluate customers’ existing infrastructure and provide recommendations for improving their cloud architecture, security posture, and developer experience. Develop and manage CI/CD pipelines, ensuring secure and efficient deployment of applications to cloud environments. Utilize infrastructure-as-code tools to automate the deployment of robust and immutable infrastructure. Stay current with emerging technologies and best practices in cloud engineering, sharing insights and knowledge with your team and the broader engineering community. Collaborate with cross-functional teams to deliver innovative solutions that solve real-world problems.
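The infrastructure-as-code skills this role names (AWS CDK, CloudFormation, Terraform) all reduce to the same idea: declare resources as data rather than clicking them into existence. The sketch below builds a minimal CloudFormation-style template in plain Python; the logical ID and bucket name are invented, and a real project would use `aws-cdk-lib` constructs or Terraform HCL instead of hand-built dictionaries.

```python
# Minimal infrastructure-as-code sketch: a CloudFormation template declaring
# one S3 bucket, assembled as plain data. Names here are illustrative only.
import json

def s3_bucket_template(bucket_name):
    """Return a minimal CloudFormation template declaring one S3 bucket."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {  # logical ID, referenced by other resources
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

template = s3_bucket_template("example-data-bucket")
print(json.dumps(template, indent=2))
```

The resulting JSON is what a CI/CD pipeline would hand to `aws cloudformation deploy`; CDK's value is generating exactly this kind of document from type-checked code.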

Posted 7 hours ago
