7.0 - 10.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
We’re reinventing the market research industry. Let’s reinvent it together. At Numerator, we believe tomorrow’s success starts with today’s market intelligence. We empower the world’s leading brands and retailers with unmatched insights into consumer behavior and the influencers that drive it.

We are seeking a highly skilled Senior Data Engineer with extensive experience in designing, building, and optimizing high-volume data pipelines. The ideal candidate will have strong expertise in Python, Databricks on Azure Cloud services, DevOps, and CI/CD tools, along with a solid understanding of AI/ML techniques and big data processing frameworks like Apache Spark and PySpark.

Responsibilities
- Adhere to coding and Numerator technology standards
- Build suitable automation test suites within Azure DevOps
- Maintain and update automation test suites as required
- Carry out manual testing, load testing, and exploratory testing as required
- Work closely with Business Analysts and Senior Data Developers to consistently achieve sprint goals
- Assist in estimation of sprint-by-sprint stories and tasks
- Proactively take a responsible approach to product delivery

What You'll Bring to Numerator

Requirements
- 7-10 years of experience in data engineering roles
- Good Python skills
- Experience working with Microsoft Azure Cloud
- Experience in Agile methodologies (Scrum/Kanban)
- Experience with Apache Spark, PySpark, and Databricks
- Experience working with DevOps pipelines, preferably Azure DevOps

Preferred Qualifications
- Bachelor's or master's degree in Computer Science, Information Technology, Data Science, or a related field
- Experience working in a support-focused role
- Knowledge of or experience with AI/ML techniques
- Certification in a relevant data engineering discipline or related field
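As an illustration of the kind of work this role describes, here is a minimal sketch of a PySpark pipeline step paired with the sort of automated check that could run in an Azure DevOps test suite. The dataset, column names, and deduplication rule are hypothetical, not Numerator's actual pipeline.

```python
# Minimal PySpark pipeline step with a pytest-style check.
# All table and column names here are hypothetical illustrations.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


def deduplicate_purchases(df: DataFrame) -> DataFrame:
    """Keep the most recent record per (household_id, receipt_id)."""
    w = Window.partitionBy("household_id", "receipt_id").orderBy(F.col("updated_at").desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )


def test_deduplicate_purchases():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    rows = [
        ("h1", "r1", "2024-01-01"),
        ("h1", "r1", "2024-01-02"),  # later duplicate should win
        ("h2", "r9", "2024-01-01"),
    ]
    df = spark.createDataFrame(rows, ["household_id", "receipt_id", "updated_at"])
    out = deduplicate_purchases(df)
    assert out.count() == 2
```

A test like this can run on a local Spark session inside a CI pipeline, which is how transformation logic is commonly kept verifiable without touching production data.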
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe action

Job Title: Data Scientist/Machine Learning Engineer

Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.

Key Responsibilities:
- Deliver large-scale DS/ML end-to-end projects across multiple industries and domains
- Liaise with on-site and client teams to understand various business problem statements, use cases and project requirements
- Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
- Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members

Qualifications:
- 3-6 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
- Bachelor’s or Master’s degree in a quantitative field
- Strong hands-on experience with programming languages like Python, PySpark and SQL, and frameworks such as NumPy, Pandas, Scikit-learn, etc.
- Expertise in classification, regression, time series, decision trees, optimization, etc.
- Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
- Model deployment on cloud or on-prem will be an added advantage
- Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
- Should follow research papers, and comprehend, innovate on and present the best approaches/solutions related to DS/ML
- AI/Cloud certification from a premier institute is preferred

Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300100
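To make the classification skills in the qualifications above concrete, here is a minimal scikit-learn sketch on synthetic data. The model choice and parameters are illustrative assumptions, not Deloitte's methodology.

```python
# Minimal classification workflow with scikit-learn on synthetic data.
# Features, labels, and hyperparameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Generate a toy binary-classification dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit a baseline model and report held-out performance.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In practice the same train/evaluate pattern extends to the regression and time-series problems the posting lists, with the estimator and metrics swapped accordingly.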
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe action

Job Title: Data Scientist/Machine Learning Engineer

Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.

Key Responsibilities:
- Deliver large-scale DS/ML end-to-end projects across multiple industries and domains
- Liaise with on-site and client teams to understand various business problem statements, use cases and project requirements
- Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
- Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members

Qualifications:
- 3-6 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
- Bachelor’s or Master’s degree in a quantitative field
- Strong hands-on experience with programming languages like Python, PySpark and SQL, and frameworks such as NumPy, Pandas, Scikit-learn, etc.
- Expertise in classification, regression, time series, decision trees, optimization, etc.
- Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
- Model deployment on cloud or on-prem will be an added advantage
- Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
- Should follow research papers, and comprehend, innovate on and present the best approaches/solutions related to DS/ML
- AI/Cloud certification from a premier institute is preferred

Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300100
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe action

Job Title: Data Scientist/Machine Learning Engineer

Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.

Key Responsibilities:
- Deliver large-scale DS/ML end-to-end projects across multiple industries and domains
- Liaise with on-site and client teams to understand various business problem statements, use cases and project requirements
- Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
- Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members

Qualifications:
- 3-6 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
- Bachelor’s or Master’s degree in a quantitative field
- Strong hands-on experience with programming languages like Python, PySpark and SQL, and frameworks such as NumPy, Pandas, Scikit-learn, etc.
- Expertise in classification, regression, time series, decision trees, optimization, etc.
- Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
- Model deployment on cloud or on-prem will be an added advantage
- Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
- Should follow research papers, and comprehend, innovate on and present the best approaches/solutions related to DS/ML
- AI/Cloud certification from a premier institute is preferred

Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300100
Posted 1 week ago
15.0 years
30 - 50 Lacs
Pune, Maharashtra, India
On-site
💼 Job Title: Head – Data & Analytics as a Service
📍 Location: Pune or Bangalore, India
⏰ Shift: General

🔹 Role Overview
Infosys BPM is seeking a Head of Data & Analytics as a Service to lead a bold transformation in enterprise data and AI within its BPM Digital Unit. This role demands a strategic leader with deep domain knowledge and a passion for creating intelligent, scalable data solutions that unlock business value for clients. You will drive innovation across AI, machine learning, data productization, and cloud-native analytics while fostering a high-performance, customer-first analytics organization.

🔧 Key Responsibilities

Strategic Leadership
- Define and lead the roadmap for the Data & Analytics as a Service practice, aligned to Infosys BPM’s AI-first strategy
- Build next-generation data platforms using cloud-native technologies (e.g., Snowflake, Databricks)
- Champion innovation, aligning digital transformation with GTM and business goals
- Represent Infosys BPM in industry and analyst forums, establishing it as a thought leader

Client-Focused Innovation
- Act as a trusted advisor for key clients, co-creating bespoke data solutions
- Deliver AI-driven insights that enable clients to improve efficiency, agility, and CX
- Build strategic client partnerships and establish Analytics Centers of Excellence (CoEs)

Advanced Analytics Delivery
- Guide teams in executing projects involving machine learning, predictive analytics, and data visualization
- Drive development of interactive dashboards and real-time decision tools
- Ensure analytics deliver measurable impact by uncovering trends and optimization opportunities

Pre-Sales & Business Growth
- Collaborate with sales and GTM teams to craft high-impact proposals and solutions
- Support technical assessments, RFP responses, and demonstrations for key opportunities
- Monitor analytics market trends to ensure relevance and competitive advantage

Data Product Leadership
- Own the lifecycle of AI/ML-driven, cloud-native data products
- Develop commercialization strategies and integrate solutions with client ecosystems
- Leverage Infosys BPM’s digital assets marketplace for scalable deployment

People Leadership
- Build and mentor a team of data scientists, BI developers, and analytics consultants
- Promote a culture of learning, agility, and design thinking
- Drive delivery excellence while meeting timelines, budgets, and quality KPIs

🔑 Key Competencies
- Advanced analytical and problem-solving abilities
- Strong strategic execution mindset and business acumen
- Expertise in cloud-native analytics platforms (Snowflake, Databricks, etc.)
- Proven track record in AI/ML-led transformation
- Ability to lead in high-change, cross-functional environments
- Exceptional communication, influence, and stakeholder engagement skills

🎓 Qualifications
- Bachelor’s or Master’s in Data Science, Analytics, Statistics, or related fields
- 15+ years of experience in analytics leadership, with a focus on large-scale, cloud-based projects
- Proficient in tools like SQL, Python, Power BI, and enterprise data integration
- Demonstrated experience delivering data products, analytics platforms, and AI capabilities across industries

Skills: predictive analytics, snowflake, enterprise data integration, python, ai, data, analytics, cloud-native technologies, business intelligence, databricks, sql, data science, machine learning, data visualization, power bi, statistics, leadership
Posted 1 week ago
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
💼 Job Title: Head – Data & Analytics as a Service
📍 Location: Pune or Bangalore, India
🏢 Department: Infosys BPM – Digital Unit
⏰ Shift: General

🔹 Role Overview
Infosys BPM is seeking a visionary Head of Data & Analytics as a Service to lead a bold transformation in enterprise data and AI within its BPM Digital Unit. This role demands a strategic leader with deep domain knowledge and a passion for creating intelligent, scalable data solutions that unlock business value for clients. You will drive innovation across AI, machine learning, data productization, and cloud-native analytics while fostering a high-performance, customer-first analytics organization.

🔧 Key Responsibilities

Strategic Leadership
- Define and lead the roadmap for the Data & Analytics as a Service practice, aligned to Infosys BPM’s AI-first strategy
- Build next-generation data platforms using cloud-native technologies (e.g., Snowflake, Databricks)
- Champion innovation, aligning digital transformation with GTM and business goals
- Represent Infosys BPM in industry and analyst forums, establishing it as a thought leader

Client-Focused Innovation
- Act as a trusted advisor for key clients, co-creating bespoke data solutions
- Deliver AI-driven insights that enable clients to improve efficiency, agility, and CX
- Build strategic client partnerships and establish Analytics Centers of Excellence (CoEs)

Advanced Analytics Delivery
- Guide teams in executing projects involving machine learning, predictive analytics, and data visualization
- Drive development of interactive dashboards and real-time decision tools
- Ensure analytics deliver measurable impact by uncovering trends and optimization opportunities

Pre-Sales & Business Growth
- Collaborate with sales and GTM teams to craft high-impact proposals and solutions
- Support technical assessments, RFP responses, and demonstrations for key opportunities
- Monitor analytics market trends to ensure relevance and competitive advantage

Data Product Leadership
- Own the lifecycle of AI/ML-driven, cloud-native data products
- Develop commercialization strategies and integrate solutions with client ecosystems
- Leverage Infosys BPM’s digital assets marketplace for scalable deployment

People Leadership
- Build and mentor a team of data scientists, BI developers, and analytics consultants
- Promote a culture of learning, agility, and design thinking
- Drive delivery excellence while meeting timelines, budgets, and quality KPIs

🔑 Key Competencies
- Advanced analytical and problem-solving abilities
- Strong strategic execution mindset and business acumen
- Expertise in cloud-native analytics platforms (Snowflake, Databricks, etc.)
- Proven track record in AI/ML-led transformation
- Ability to lead in high-change, cross-functional environments
- Exceptional communication, influence, and stakeholder engagement skills

🎓 Qualifications
- Bachelor’s or Master’s in Data Science, Analytics, Statistics, or related fields
- 15+ years of experience in analytics leadership, with a focus on large-scale, cloud-based projects
- Proficient in tools like SQL, Python, Power BI, and enterprise data integration
- Demonstrated experience delivering data products, analytics platforms, and AI capabilities across industries
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary

Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe action

Job Title: Data Scientist/Machine Learning Engineer

Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.

Key Responsibilities:
- Deliver large-scale DS/ML end-to-end projects across multiple industries and domains
- Liaise with on-site and client teams to understand various business problem statements, use cases and project requirements
- Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
- Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members

Qualifications:
- 3-6 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
- Bachelor’s or Master’s degree in a quantitative field
- Strong hands-on experience with programming languages like Python, PySpark and SQL, and frameworks such as NumPy, Pandas, Scikit-learn, etc.
- Expertise in classification, regression, time series, decision trees, optimization, etc.
- Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
- Model deployment on cloud or on-prem will be an added advantage
- Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
- Should follow research papers, and comprehend, innovate on and present the best approaches/solutions related to DS/ML
- AI/Cloud certification from a premier institute is preferred

Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300100
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Lead Analytics Engineer

We are seeking a talented, motivated and self-driven professional to join the HH Digital, Data & Analytics (HHDDA) organization and play an active role in the Human Health transformation journey to become the premier “Data First” commercial biopharma organization.

As a Lead Analytics Engineer, you will be part of the HHDDA Commercial Data Solutions team, providing technical and data expertise for the development of analytical data products that enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space, developing best-in-class data pipelines and products, and working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery.

Your specific responsibilities will include:
- Design and implementation of last-mile data products using the most up-to-date technologies and software/data/DevOps engineering practices
- Enable data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way
- Develop deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for
- Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts and visualization developers on how to use these data models
- Develop analytical data products for reusability, governance and compliance by design
- Align with organization strategy and implement a semantic layer for analytics data products
- Support data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks

Education
- B.Tech/B.S., M.Tech/M.S. or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or a related field

Required Experience
- 8+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience in analyzing, modeling and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets)
- High proficiency in SQL, Python and AWS
- Experience creating/adopting data models to meet requirements from Marketing, Data Science, and Visualization stakeholders
- Experience with feature engineering
- Experience with cloud-based (AWS/GCP/Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.)
- Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. Dataiku)
- Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders
- Experience in analytics use cases of pharmaceutical products and vaccines
- Experience in market analytics and related use cases

Preferred Experience
- Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines
- Experience with Agile ways of working, leading or working as part of scrum teams
- Certifications in AWS and/or modern data technologies
- Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors
- Experience in building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers and business stakeholders
- Experience with data visualization technologies (e.g., Power BI)

Our Human Health Division maintains a “patient first, profits later” ideology. The organization is comprised of sales, marketing, market access, digital analytics and commercial professionals who are passionate about their role in bringing our medicines to our customers worldwide.

We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another’s thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives - Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:

Job Posting End Date: 04/30/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.

Requisition ID: R323237
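As a small illustration of the feature engineering work the role describes, here is a hedged pandas sketch. The table, column names, and aggregates are hypothetical stand-ins, not an actual HHDDA schema.

```python
# Illustrative feature engineering over a hypothetical prescriptions table;
# all column names and aggregations are assumptions for demonstration.
import pandas as pd

rx = pd.DataFrame({
    "hcp_id": ["a1", "a1", "b2", "b2", "b2"],
    "fill_date": pd.to_datetime(
        ["2024-01-05", "2024-02-10", "2024-01-20", "2024-03-01", "2024-03-15"]
    ),
    "units": [30, 30, 60, 30, 90],
})

# Per-prescriber features an analytics data product might expose.
features = (
    rx.groupby("hcp_id")
      .agg(total_units=("units", "sum"),
           n_fills=("units", "size"),
           last_fill=("fill_date", "max"))
      .reset_index()
)
features["days_since_last_fill"] = (
    pd.Timestamp("2024-04-01") - features["last_fill"]
).dt.days
print(features)
```

The same aggregate-and-derive pattern scales to PySpark or warehouse SQL when the source tables are too large for pandas.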
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Kinaxis

Elevate your career journey by embracing a new challenge with Kinaxis. We are experts in tech, but it’s really our people who give us passion to always seek ways to do things better. As such, we’re serious about your career growth and professional development, because people matter at Kinaxis.

In 1984, we started out as a team of three engineers based in Ottawa, Canada. Today, we have grown to become a global organization with over 2000 employees around the world, and support 40,000+ users in over 100 countries. As a global leader in end-to-end supply chain management, we enable supply chain excellence for all industries. We are expanding our team in Chennai and around the world as we continue to innovate and revolutionize how we support our customers.

Our journey in India began in 2020 and we have been growing steadily since then! Building a high-trust and high-performance culture is important to us and we are proud to be Great Place to Work® Certified™. Our state-of-the-art office, located in the World Trade Centre in Chennai, offers our growing team space for expansion and collaboration.

Location
Chennai, India

About The Team
Kinaxis is looking for a talented data engineer to work within the Machine Learning R&D team. The team is responsible for applying machine learning algorithms to develop intelligent supply chains. The uniqueness of the team is that it performs at the intersection of technology and real business problems. You will contribute to the product that delights customers world-wide!

What you will do
If you love solving complex problems, analyzing complex datasets, finding insights from data, creating data models and learning new technologies, this role is for you. As a software developer, you are passionate about shipping large-scale software systems in a fast-paced environment but can balance longer-term issues such as maintainability, scalability, and quality. You are an experienced software engineer who is passionate about delivering software that supports and facilitates the business operations of ML & AI solutions.

You have a strong understanding of cloud technologies and cloud-agnostic software architecture, and have experience troubleshooting high-scale solutions that are deployed and upgraded on a regular cadence. You have a passion for software reliability and know how to ensure user needs are met through cross-functional stakeholder understanding and engagement. You enjoy understanding both the details of the use cases that end-users are performing using the solution as well as the architecture and implementation of the system end to end.

You have a strong interest in resolving issues as well as designing effective methods for troubleshooting, preventing, and debugging problems in software systems, getting to the root cause of issues, meeting the users’ needs and influencing the product development roadmap. You are excited about finding ways to develop product capabilities and tools that increase the robustness of the user experience, reduce the cost of troubleshooting, or reduce the time required to address issues.

You are fluent in Python, have experience working with distributed computing and big data frameworks, and are very knowledgeable about Kubernetes and Docker. You also have experience working with and building machine learning pipelines and models. You have the ability and enthusiasm to learn new technologies, whether infrastructure, language or platform, and easily adapt to change.
You excel as a team player, a quick starter, and a problem solver. You thrive in cross-functional teams, actively listening and contributing to discussions. Your expertise lies in engineering solutions for complex machine learning challenges, developing Python-based applications, containerizing apps with Docker, orchestrating container swarms in Kubernetes, and building Argo Workflows. These efforts play a key role in creating ML software systems that deliver critical value to the business and its customers.

What We Are Looking For
- BS or MS in Computer Science/Software Engineering or equivalent work experience
- 6 to 9 years of relevant experience
- Excellent communication skills, with the ability to clearly explain technical terms to a non-technical audience
- Strong software engineering skills and strong programming skills in Python/Pandas/ML libraries
- Experience working with any cloud provider: Azure/GCP/AWS
- Strong expertise in Docker, Kubernetes, Argo Workflows and Helm
- Expertise in version control systems (GitHub)
- Experience in developing CI/CD pipelines (GitHub Actions)
- Experience in developing REST APIs (Flask/FastAPI)
- Proven understanding of distributed computing architectures
- Experience with machine learning solutions and productization
- Understanding of the ML/modelling process: feature generation, training, hyper-parameter tuning, predictions (scoring)
- You enjoy solving puzzles and troubleshooting issues
- You enjoy multi-tasking and providing significant positive impact to the business through your work

Nice To Have
- Data manipulation in Python: wrangling and manipulating large datasets in Spark and pandas dataframes
- Supply chain domain knowledge: Supply Chain Management, especially demand planning aspects, CPG, Manufacturing, etc.
- Knowledge of how drivers influence demand, e.g., pricing, promotions, initiatives, external factors like weather patterns, etc.
- Exposure to Databricks

#Associate #Fulltime

Work With Impact: Our platform directly helps companies power the world’s supply chains. We see the results of what we do out in the world every day, when we see store shelves stocked, when medications are available for our loved ones, and so much more.

Work with Fortune 500 Brands: Companies across industries trust us to help them take control of their integrated business planning and digital supply chain. Some of our customers include Ford, Unilever, Yamaha, P&G, Lockheed-Martin, and more.

Social Responsibility at Kinaxis: Our Diversity, Equity, and Inclusion Committee weighs in on hiring practices, talent assessment training materials, and mandatory training on unconscious bias and inclusion fundamentals. Sustainability is key to what we do and we’re committed to a net-zero operations strategy for the long term. We are involved in our communities and support causes where we can make the most impact.

People matter at Kinaxis, and these are some of the perks and benefits we created for our team:
- Flexible vacation and Kinaxis Days (company-wide day off on the last Friday of every month)
- Flexible work options
- Physical and mental well-being programs
- Regularly scheduled virtual fitness classes
- Mentorship programs and training and career development
- Recognition programs and referral rewards
- Hackathons

For more information, visit the Kinaxis web site at www.kinaxis.com or the company’s blog at http://blog.kinaxis.com.

Kinaxis welcomes candidates to apply to our inclusive community.
We provide accommodations upon request to ensure fairness and accessibility throughout our recruitment process for all candidates, including those with specific needs or disabilities. If you require an accommodation, please reach out to us at recruitmentprograms@kinaxis.com. Please note that this contact information is strictly for accessibility requests and cannot be used to inquire about application statuses. Kinaxis is committed to ensuring a fair and transparent recruitment process. We use artificial intelligence (AI) tools in the initial step of the recruitment process to compare submitted resumes against the job description, to identify candidates whose education, experience and skills most closely match the requirements of the role. After the initial screening, all subsequent decisions regarding your application, including final selection, are made by our human recruitment team. AI does not make any final hiring decisions.
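Since the role above lists REST API development with Flask/FastAPI for ML productization, here is a minimal FastAPI scoring-endpoint sketch. The payload schema and the placeholder "model" are hypothetical assumptions, not Kinaxis's actual service.

```python
# Minimal FastAPI scoring endpoint; the payload schema and naive forecast
# logic are hypothetical stand-ins for a real trained-model service.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demand-forecast-scoring")


class ScoreRequest(BaseModel):
    sku: str
    site: str
    trailing_demand: List[float]  # recent weekly demand observations


class ScoreResponse(BaseModel):
    sku: str
    site: str
    forecast: float


@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Placeholder "model": a moving average over up to 4 recent points.
    # A real service would load and invoke a trained model artifact.
    recent = req.trailing_demand[-4:] or [0.0]
    forecast = sum(recent) / len(recent)
    return ScoreResponse(sku=req.sku, site=req.site, forecast=forecast)

# Run locally with: uvicorn app:app --reload
```

A service like this is typically containerized with Docker and deployed to Kubernetes, matching the stack the posting names.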
Posted 1 week ago
9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Data Engineer
Location: Bengaluru

L&T Technology Services is seeking a Data Engineer (experience range: 9+ years), proficient in:
- 9+ years of relevant hands-on data engineering experience across data ingestion, processing, and exploratory analysis, building solutions that deliver value through data as an asset
- Building, testing, and deploying data pipelines that efficiently and reliably move data across systems, while staying on top of the latest architectural trends on the Azure cloud
- Parallel and distributed processing, storage, concurrency, and fault-tolerant systems
- Thriving on new technologies, adapting and learning easily to meet the needs of next-generation engineering challenges

Technical Skills (Must-Have)
- Applied experience with distributed data processing frameworks: Spark and Databricks with Python and SQL
- At least 2 end-to-end data analytics projects with Databricks configuration, Unity Catalog, Delta Sharing, and medallion architecture
- Applied experience with Azure Data services: ADLS, Delta

Required Skills: Azure Data Lake Storage (ADLS), Advanced SQL and Python Programming, Databricks Expertise with Medallion Architecture, Data Governance and Security

#AzureDataEngineer, #AzureCloud, #AzureDatabricks, #AzureDataLake, #AzureSynapse, #AzureDataFactory, #AzureSQL, #Databricks, #DataEngineering, #Python, #Flask
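To make the medallion architecture requirement concrete, here is a sketch of a bronze-to-silver refinement step using PySpark with Delta Lake. The storage paths, columns, and cleaning rules are hypothetical, and a Databricks runtime (or the delta-spark package) is assumed.

```python
# Sketch of a bronze -> silver step in a medallion architecture with
# PySpark and Delta Lake. Paths, columns, and rules are hypothetical;
# a Databricks runtime (or delta-spark locally) is assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested events, stored as-is on ADLS.
bronze = spark.read.format("delta").load(
    "abfss://lake@account.dfs.core.windows.net/bronze/orders"
)

# Silver: cleaned, deduplicated, and typed for downstream consumers.
silver = (
    bronze
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

(silver.write
       .format("delta")
       .mode("overwrite")
       .save("abfss://lake@account.dfs.core.windows.net/silver/orders"))
```

A gold layer would then aggregate silver tables into business-level marts, typically governed through Unity Catalog.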
Posted 1 week ago
5.0 - 9.0 years
1 - 3 Lacs
Kolkata, Chennai, Bengaluru
Hybrid
Location: Pune, Mumbai, Nagpur, Goa, Noida, Gurgaon, Ahmedabad, Jaipur, Indore, Kolkata, Kochi, Hyderabad, Bangalore, Chennai
Experience: 5-7 years
Notice: 0-15 days
Open positions: 6

JD:
- Proven experience with DataStage for ETL development
- Strong understanding of data warehousing concepts and best practices
- Hands-on experience with Apache Airflow for workflow management
- Proficiency in SQL and Python for data manipulation and scripting
- Solid knowledge of Unix/Linux shell scripting
- Experience with Apache Spark and Databricks for big data processing
- Expertise in Snowflake for cloud data warehousing
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines
- Excellent problem-solving and communication skills
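As an illustration of the Airflow workflow management this posting lists, here is a minimal DAG sketch. The task names, schedule, and Airflow 2.x API are assumptions for demonstration; the tasks are placeholders rather than a real DataStage-to-Snowflake pipeline.

```python
# Minimal Apache Airflow (2.x) DAG with two dependent tasks.
# Task bodies are placeholders, not a real ETL implementation.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data (e.g., files produced upstream by DataStage)")


def load():
    print("load transformed data into the warehouse (e.g., Snowflake)")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # extract must finish before load runs
```

The `>>` operator declares task ordering; Airflow's scheduler then runs the DAG once per day and retries or alerts according to its configuration.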
Posted 1 week ago
10.0 years
0 Lacs
Kochi, Kerala, India
On-site
The Data Architect is responsible for defining and leading the Data Architecture, Data Quality, and Data Governance for systems ingesting, processing, and storing millions of rows of data per day. This hands-on role helps solve real big data problems. You will be working with our product, business, and engineering stakeholders, understanding our current ecosystems, and then building consensus to design solutions, write code and automation, define standards, establish best practices across the company, and build world-class data solutions and applications that power crucial business decisions throughout the organization. We are looking for an open-minded, structured thinker passionate about building systems at scale.

Role
- Design, implement and lead Data Architecture, Data Quality, and Data Governance
- Define data modeling standards and foundational best practices
- Develop and evangelize data quality standards and practices
- Establish data governance processes, procedures, policies, and guidelines to maintain the integrity and security of the data
- Drive the successful adoption of organizational data utilization and self-serviced data platforms
- Create and maintain critical data standards and metadata that allow data to be understood and leveraged as a shared asset
- Develop standards and write template code for sourcing, collecting, and transforming data for streaming or batch processing
- Design data schemas, object models, and flow diagrams to structure, store, process, and integrate data
- Provide architectural assessments, strategies, and roadmaps for data management
- Apply hands-on subject matter expertise in the architecture and administration of big data platforms and Data Lake technologies (AWS S3/Hive), and experience with ML and Data Science platforms
- Implement and manage industry best practice tools and processes such as Data Lake, Databricks, Delta Lake, S3, Spark ETL, Airflow, Hive Catalog, Redshift, Kafka, Kubernetes, Docker, and CI/CD
- Translate big data and analytics requirements into data models that will operate at large scale and high performance, and guide the data analytics engineers on these data models
- Define templates and processes for the design and analysis of data models, data flows, and integration
- Lead and mentor Data Analytics team members in best practices, processes, and technologies in data platforms

Qualifications
- B.S. or M.S. in Computer Science, or equivalent degree
- 10+ years of hands-on experience in Data Warehouse, ETL, Data Modeling & Reporting
- 7+ years of hands-on experience in productionizing and deploying Big Data platforms and applications
- Hands-on experience working with: relational/SQL databases, distributed columnar data stores/NoSQL databases, time-series databases, Spark streaming, Kafka, Hive, Delta, Parquet, Avro, and more
- Extensive experience in understanding a variety of complex business use cases and modeling the data in the data warehouse
- Highly skilled in SQL, Python, Spark, AWS S3, Hive Data Catalog, Parquet, Redshift, Airflow, and Tableau or similar tools
- Proven experience in building a custom enterprise data warehouse or implementing tools like data catalogs, Spark, Tableau, Kubernetes, and Docker
- Knowledge of infrastructure requirements such as networking, storage, and hardware optimization, with hands-on experience in Amazon Web Services (AWS)
- Strong verbal and written communication skills; must work effectively across internal and external organizations and virtual teams
- Demonstrated industry leadership in the fields of Data Warehousing, Data Science, and Big Data related technologies
- Strong understanding of distributed systems and container-based development using the Docker and Kubernetes ecosystem
- Deep knowledge of data structures and algorithms
- Experience working in large teams using CI/CD and agile methodologies

Unique ID -
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description

PayPay's rapid growth necessitates the expansion of its product teams and underscores the critical need for a resilient Data Engineering Platform. This platform is vital to support our increasing business demands. The Data Pipeline team is tasked with creating, deploying, and managing this platform, utilizing leading technologies like Databricks, Delta Lake, Spark, PySpark, Scala, and the AWS suite. We are actively seeking skilled Data Engineers to join our team and contribute to scaling our platform across the organization.

Main Responsibilities
- Create and manage robust data ingestion pipelines leveraging Databricks, Airflow, Kafka, and Terraform
- Ensure high performance, reliability, and efficiency by optimizing large-scale data pipelines
- Develop data processing workflows using Databricks, Delta Lake, and Spark technologies
- Maintain and improve the Data Lakehouse, utilizing Unity Catalog for efficient data management and discovery
- Construct automation, frameworks, and enhanced tools to streamline data engineering workflows
- Collaborate across teams to facilitate smooth data flow and integration
- Enforce best practices in observability, data governance, security, and regulatory compliance

Qualifications
- Minimum 5 years as a Data Engineer or in a similar role
- Hands-on experience with Databricks, Delta Lake, Spark, and Scala
- Proven ability to design, build, and operate Data Lakes or Data Warehouses
- Proficiency with data orchestration tools (Airflow, Dagster, Prefect)
- Familiarity with Change Data Capture tools (Canal, Debezium, Maxwell)
- Strong command of at least one primary language (Scala, Python, etc.) and SQL
- Experience with data catalog and metadata management (Unity Catalog, Lake Formation)
- Experience in Infrastructure as Code (IaC) using Terraform
- Excellent problem-solving and debugging abilities for complex data challenges
- Strong communication and collaboration skills
- Capability to make informed decisions, learn quickly, and consider complex technical contexts
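To illustrate the Kafka-based ingestion this posting describes, here is a sketch of a Kafka-to-Delta stream using PySpark Structured Streaming. The topic, event schema, and storage paths are hypothetical, and the Spark-Kafka connector plus a Delta-enabled runtime are assumed.

```python
# Sketch of a Kafka -> Delta Lake ingestion stream with Structured
# Streaming. Topic, schema, and paths are hypothetical; a Databricks
# runtime (or delta-spark) and the Spark-Kafka connector are assumed.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.getOrCreate()

event_schema = StructType([
    StructField("payment_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

# Read raw Kafka records; the value column arrives as bytes.
raw = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("subscribe", "payment-events")
            .load())

# Parse JSON payloads into typed columns.
events = (raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
             .select("e.*"))

# Append to a bronze Delta table; the checkpoint makes the stream
# restartable with exactly-once sink semantics.
query = (events.writeStream
               .format("delta")
               .option("checkpointLocation", "s3://lake/_checkpoints/payment_events")
               .outputMode("append")
               .start("s3://lake/bronze/payment_events"))
```

Downstream jobs can then refine this bronze table within the lakehouse, with Unity Catalog handling discovery and access control.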
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Good day,

We have an immediate opportunity for an Azure Data Engineer.

Job Role: Azure Data Engineer
Job Location: Kharadi, Pune
Experience: 6-12 years
Notice Period: Immediate to 30 days

About the Company:
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron’s progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honoured with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,700+ and has 55 offices in 20 countries within key global markets. For more information on the company, please visit our website or LinkedIn community.

Diversity, Equity, and Inclusion:
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative-action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture, promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Job Description:
As an Azure Data Engineer, you will be responsible for designing, implementing, and maintaining data management systems on the Azure cloud platform. You will be responsible for creating and handling scalable data pipelines, assuring data quality, and maximizing data processing performance. You will also work closely with other teams, such as data analysts, data scientists, and software developers, to provide them with the data they need to perform their job functions. We are looking for candidates with 8+ years of overall experience and a minimum of 4+ years’ experience in Azure.

Technical Skills (Must Have): Azure Databricks, Spark, ADF (Azure Data Factory)
Optional Skills (Good to Have): Spark Structured Streaming, SQL, GitLab

Responsibilities:
- Designing and implementing data storage solutions on Azure
- Building and maintaining data pipelines for data integration and processing
- Ensuring data quality and accuracy through testing and validation
- Developing and maintaining data models and schemas
- Collaborating with other teams to provide data for analytics and reporting
- Ensuring data security and privacy prerequisites are followed

Primary Skills:
We require a professional with a career reflecting technical abilities coupled with hands-on experience in a diverse range of software projects:
- Strong exposure to Databricks, Azure Data Factory, and ADLS
- Strong exposure to Spark and Structured Streaming
- Exposure to cloud integration and container services (Azure)
- Oracle and MS-SQL experience; Terraform will be an asset
- Expertise in managing repositories (GitLab)
- Clean coding and refactoring skills and Test-Driven Development (TDD)
- Performance optimization and scalability
- Know-how of Agile development practices (Scrum, XP, Kanban, etc.)
- Adaptable, able to work across teams, functions, and applications
- Enthusiastic, self-motivated and client-focused
- Prior financial/banking experience is desirable

Secondary Skills:
- Familiarity with data processing frameworks such as Apache Spark and Hadoop
- Understanding of data modeling and schema design principles
- Ability to work with large datasets and perform data analysis
- Strong problem-solving and troubleshooting skills

If you find this opportunity interesting, kindly share the below details (mandatory):
- Total Experience
- Experience in Azure
- Experience in Power BI
- Experience in Databricks
- Current CTC
- Expected CTC
- Notice period
- Current Location
- Have you gone through any interviews at Synechron before? If yes, when?

Regards,
Recruitment Team, Pune
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary

Job Title: Technical Team Lead
Location: TechM TMLW Hyderabad
Years of Experience: 5-7 years

Job Summary:
We are seeking a highly skilled and motivated Technical Team Lead with a strong background in Databricks to join our dynamic team in Hyderabad. The ideal candidate will have 5-7 years of experience in leading technical teams and delivering high-quality software solutions. This role requires a deep understanding of Databricks and its ecosystem, as well as the ability to mentor and guide team members in best practices and innovative solutions.

Responsibilities:
- Lead and manage a team of software engineers, ensuring timely delivery of projects and adherence to quality standards
- Design, develop, and implement scalable data solutions using Databricks
- Collaborate with cross-functional teams to define project requirements and deliverables
- Provide technical guidance and mentorship to team members, fostering a culture of continuous learning and improvement
- Conduct code reviews and ensure best practices in software development are followed
- Monitor project progress and report on key metrics to stakeholders
- Stay updated with the latest trends and technologies in data engineering and analytics
- Facilitate communication between technical and non-technical stakeholders

Mandatory Skills:
- Strong expertise in Databricks, including data engineering and analytics capabilities
- Proficiency in programming languages such as Python, Scala, or Java
- Experience with cloud platforms (AWS, Azure, or GCP) and their data services
- Solid understanding of data warehousing concepts and ETL processes
- Excellent problem-solving skills and the ability to work under pressure
- Strong leadership and team management skills

Preferred Skills:
- Experience with Apache Spark and big data technologies
- Familiarity with machine learning frameworks and libraries
- Knowledge of DevOps practices and CI/CD pipelines
- Experience in Agile methodologies and project management tools

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 5-7 years of experience in software development and team leadership
- Proven track record of successful project delivery in a technical lead role
- Strong communication and interpersonal skills

If you are passionate about leading technical teams and have a strong foundation in Databricks, we encourage you to apply for this exciting opportunity to make a significant impact in our organization.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
You will develop a deep understanding of internal customer data and analytical needs, focusing on customer-facing model leadership. You will champion the use of local customer insight in strategic and resourcing decisions across the business and identify and prioritize enhancements and new analytic capabilities. You should be able to lead the development, deployment, and embedding of such capabilities across the organization while instilling a culture of continuous improvement, testing, and deployment of new capabilities. Developing innovative analytical practices to create and sustain a competitive advantage is expected, as is accessing appropriate information via a variety of tools and sources, and summarizing and presenting findings through various communication channels. You should be capable of applying deeper technical skills and knowledge that is shared across the functional area, including deep knowledge of key data sets and modeling capabilities. A high level of understanding of the points of integration between your work and that of colleagues is important. Monitoring the external environment to stay current on leading analytic capabilities, both within and outside of pharma, and applying those insights within the organization will also be key.
Essential For The Role
A quantitative bachelor's degree from an accredited college or university is required in one of the following or related fields: Engineering, Operations Research, Management Science, Economics, Statistics, Applied Math, Computer Science, or Data Science. An advanced degree (Master's, MBA, or PhD) is preferred. You should have 3+ years of experience applying advanced methods and statistical procedures on large and disparate datasets, along with recent experience and proficiency in PySpark, Python, R, and SQL. Working knowledge of data visualization tools such as Power BI, MicroStrategy, Tableau, QlikView, D3.js, or similar is expected, as well as familiarity with platforms like Databricks and experience with IQVIA data sets. Strong skills in Excel and PowerPoint are required, along with expertise in managing and analyzing a range of large, transactional databases. A background in statistical analysis and modeling is a plus, as is experience with machine learning. You should be able to derive, summarize, and communicate insights from analyses and have strong organization and time management skills.
Desirable
Strong leadership and interpersonal skills are important, with a demonstrated ability to work collaboratively with business leaders and cross-functional partners. You should have excellent communication and influencing skills and the ability to develop and present clear, compelling reviews of independently developed analyses, including insights and business implications. Strategic and critical thinking skills are essential to engage and maintain credibility with the Commercial Leadership Team.
Posted 1 week ago
3.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
You will develop a deep understanding of internal customer data and analytical needs, focusing on customer-facing model leadership. You will champion the use of local customer insight in strategic and resourcing decisions across the business and identify and prioritize enhancements and new analytic capabilities. You should be able to lead the development, deployment, and embedding of such capabilities across the organization while instilling a culture of continuous improvement, testing, and deployment of new capabilities. Developing innovative analytical practices to create and sustain a competitive advantage is expected, as is accessing appropriate information via a variety of tools and sources, and summarizing and presenting findings through various communication channels. You should be capable of applying deeper technical skills and knowledge that is shared across the functional area, including deep knowledge of key data sets and modeling capabilities. A high level of understanding of the points of integration between your work and that of colleagues is important. Monitoring the external environment to stay current on leading analytic capabilities, both within and outside of pharma, and applying those insights within the organization will also be key.
Essential For The Role
A quantitative bachelor's degree from an accredited college or university is required in one of the following or related fields: Engineering, Operations Research, Management Science, Economics, Statistics, Applied Math, Computer Science, or Data Science. An advanced degree (Master's, MBA, or PhD) is preferred. You should have 3+ years of experience applying advanced methods and statistical procedures on large and disparate datasets, along with recent experience and proficiency in PySpark, Python, R, and SQL. Working knowledge of data visualization tools such as Power BI, MicroStrategy, Tableau, QlikView, D3.js, or similar is expected, as well as familiarity with platforms like Databricks and experience with IQVIA data sets. Strong skills in Excel and PowerPoint are required, along with expertise in managing and analyzing a range of large, transactional databases. A background in statistical analysis and modeling is a plus, as is experience with machine learning. You should be able to derive, summarize, and communicate insights from analyses and have strong organization and time management skills.
Desirable
Strong leadership and interpersonal skills are important, with a demonstrated ability to work collaboratively with business leaders and cross-functional partners. You should have excellent communication and influencing skills and the ability to develop and present clear, compelling reviews of independently developed analyses, including insights and business implications. Strategic and critical thinking skills are essential to engage and maintain credibility with the Commercial Leadership Team.
Posted 1 week ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Generative AI Engineer
Experience: 6–9 years
About the Role
We are seeking a Generative AI Engineer with 6–9 years of experience who can independently explore, prototype, and present the art of the possible using LLMs, agentic frameworks, and emerging GenAI techniques. This role combines deep technical hands-on development with non-technical influence and presentation skills. You will contribute to key GenAI innovation initiatives, help define new protocols (like MCP and A2A), and deliver fully functional prototypes that push the boundaries of enterprise AI — not just in Jupyter notebooks, but as real applications ready for production exploration.
Key Responsibilities
LLM Applications & Agentic Frameworks
Design and implement end-to-end LLM applications using OpenAI, Claude, Mistral, Gemini, or LLaMA on AWS, Databricks, Azure, or GCP.
Build intelligent, autonomous agents using LangGraph, AutoGen, LlamaIndex, Crew.ai, or custom frameworks.
Develop multi-model, multi-agent Retrieval-Augmented Generation (RAG) applications with secure context embedding and tracing with reports (a minimal RAG sketch follows this posting).
Rapidly explore and showcase the art of the possible through functional, demonstrable POCs.
Advanced AI Experimentation
Fine-tune LLMs and Small Language Models (SLMs) for domain-specific use.
Create and leverage synthetic datasets to simulate edge cases and scale training.
Evaluate agents using custom agent evaluation frameworks (success rates, latency, reliability).
Evaluate emerging agent communication standards — A2A (Agent-to-Agent) and MCP (Model Context Protocol).
Business Alignment & Cross-Team Collaboration
Translate ambiguous requirements into structured, AI-enabled solutions.
Clearly communicate and present ideas, outcomes, and system behaviors to technical and non-technical stakeholders.
Good To Have
Microsoft Copilot Studio
DevRev
Codium
Cursor
Atlassian AI
Databricks Mosaic AI
Qualifications
6–9 years of experience in software development or AI/ML engineering.
At least 3 years working with LLMs, GenAI applications, or agentic frameworks.
Proficient in AI/ML and MLOps concepts, Python, embeddings, prompt engineering, and model orchestration.
Proven track record of developing functional AI prototypes beyond notebooks.
Strong presentation and storytelling skills to clearly convey GenAI concepts and value.
Ability to independently drive AI experiments from ideation to working demo.
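For illustration only, a minimal RAG sketch of the pattern this posting describes: embed a small corpus, retrieve the closest document to a query, and ground the LLM answer in it. The model names, toy corpus, and use of the OpenAI client are illustrative assumptions, not the employer's actual stack.

```python
import numpy as np
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY in the environment

client = OpenAI()
docs = [
    "Databricks clusters are billed per DBU consumed.",
    "Delta Lake adds ACID transactions on top of Parquet files.",
]

def embed(texts):
    # Embed a batch of texts; the embedding model name is an example choice.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
query = "How does Delta Lake relate to Parquet?"
q_vec = embed([query])[0]

# Retrieve the document with the highest cosine similarity to the query.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = docs[int(scores.argmax())]

# Ground the generation step in the retrieved context.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{context}\n\nQ: {query}"}],
)
print(answer.choices[0].message.content)
```

In production this pattern would typically swap the in-memory cosine search for a vector database and add tracing, which is where the orchestration frameworks named above come in.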
Posted 1 week ago
5.0 years
0 Lacs
Delhi, India
On-site
Enablemining is a global mining consultancy headquartered in Australia. We specialize in providing a full spectrum of services, including strategy, mine planning, and technical evaluations for coal and metalliferous mines. Our expertise in structured problem-solving and innovative thinking helps clients maximize the value of their assets. We collaborate as trusted partners, delivering impactful solutions that drive success in the mining industry.
As a Power BI Developer, you will play a key role in creating data-driven solutions that empower decision-making. This position involves designing interactive dashboards, automating reporting systems, and developing tools for productivity, cost reporting, and forecasting.
Responsibilities
Dashboard Development: Design, build, and maintain interactive Power BI dashboards to provide actionable insights.
Data Querying: Query and extract information from Databricks and other database systems to ensure accurate reporting (a minimal sketch follows this posting).
Report Automation: Automate reporting processes to streamline data delivery and improve efficiency.
System Development: Develop productivity, cost reporting, and forecasting systems to support business objectives.
Data Transformation: Use Python for advanced analytics and data transformation.
Optimization: Ensure data accuracy, performance optimization, and scalability of reports and systems.
Support: Assist stakeholders and end users in understanding reports and outputs, troubleshooting issues, and responding to queries.
Qualifications
Over 5 years of experience in Power BI development
Proficient in Python, SQL, and Databricks for data analytics and pipeline creation
Strong analytical and problem-solving skills with keen attention to detail
Excellent communication and collaboration skills to work effectively with cross-functional teams
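For illustration only, a minimal sketch of querying Databricks from Python, as in the "Data Querying" duty above, using the databricks-sql-connector package. The connection settings, schema, and table name (prod.daily_output) are placeholders, not Enablemining's actual environment.

```python
import os
from databricks import sql  # pip install databricks-sql-connector

# Connection details come from environment variables rather than hard-coding.
with sql.connect(
    server_hostname=os.environ["DATABRICKS_HOST"],
    http_path=os.environ["DATABRICKS_HTTP_PATH"],
    access_token=os.environ["DATABRICKS_TOKEN"],
) as conn:
    with conn.cursor() as cursor:
        # Aggregate production output per site; the schema is hypothetical.
        cursor.execute(
            "SELECT site, SUM(tonnes) AS total_tonnes "
            "FROM prod.daily_output GROUP BY site"
        )
        for site, total_tonnes in cursor.fetchall():
            print(site, total_tonnes)
```

A Power BI dataset would usually read the same table directly through the Databricks connector; the Python path is useful for automated, scheduled extracts feeding the reporting systems described above.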
Posted 1 week ago
4.0 years
0 Lacs
Mumbai, Maharashtra
Remote
Solution Engineering - Cloud & AI - Data
Mumbai, Maharashtra, India
Date posted: Jul 16, 2025
Job number: 1847893
Work site: Up to 50% work from home
Travel: 25-50%
Role type: Individual Contributor
Profession: Technology Sales
Discipline: Solution Engineering
Employment type: Full-Time
Overview
Are you insatiably curious, deeply passionate about the realm of databases and analytics, and ready to tackle complex challenges in a dynamic environment in the era of AI? If so, we invite you to join our team as a Cloud & AI Solution Engineer in Innovative Data Platform for commercial customers at Microsoft. Here, you'll be at the forefront of innovation, working on cutting-edge projects that leverage the latest technologies to drive meaningful impact. Join us and be part of a team that thrives on collaboration, creativity, and continuous learning.
Databases & Analytics is a growth opportunity for Microsoft Azure, as well as its partners and customers. It includes a rich portfolio of products, including IaaS and PaaS services on the Azure platform in the age of AI. These technologies empower customers to build, deploy, and manage database and analytics applications in a cloud-native way.
As an Innovative Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements like Proof of Concepts, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, and hone your technical and solution design skills. As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities.
Qualifications
10+ years technical pre-sales or technical consulting experience, OR
Bachelor's degree in Computer Science, Information Technology, or a related field AND 4+ years technical pre-sales or technical consulting experience, OR
Master's degree in Computer Science, Information Technology, or a related field AND 3+ years technical pre-sales or technical consulting experience, OR
Equivalent experience
Expert on Azure Databases (SQL DB, Cosmos DB, PostgreSQL), from migration and modernization to creating new AI apps.
Expert on Azure Analytics (Fabric, Azure Databricks, Purview) and competitors (BigQuery, Redshift, Snowflake) in data warehouse, data lake, big data, analytics, real-time intelligence, and reporting using integrated data security and governance.
Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes.
6+ years technical pre-sales, technical consulting, technology delivery, or related experience, OR equivalent experience
4+ years experience with cloud and hybrid or on-premises infrastructure, architecture design, migrations, industry standards, and/or technology management
Proficient in data warehouse and big data migration, including on-prem appliances (Teradata, Netezza, Oracle), Hadoop (Cloudera, Hortonworks), and Azure Synapse Gen2
Responsibilities
Drive technical sales with decision makers using demos and PoCs to influence solution design and enable production deployments.
Lead hands-on engagements—hackathons and architecture workshops—to accelerate adoption of Microsoft's cloud platforms.
Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions.
Resolve technical blockers and objections, collaborating with engineering to share insights and improve products.
Maintain deep expertise in the Analytics Portfolio: Microsoft Fabric (OneLake, DW, real-time intelligence, BI, Copilot), Azure Databricks, Purview Data Governance, and Azure Databases: SQL DB, Cosmos DB, PostgreSQL.
Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions.
Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums.
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
Industry leading healthcare
Educational resources
Discounts on products and services
Savings and investments
Maternity and paternity leave
Generous time away
Giving programs
Opportunities to network and connect
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 week ago
1.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At PwC, our people in audit and assurance focus on providing independent and objective assessments of financial statements, internal controls, and other assurable information, enhancing the credibility and reliability of this information with a variety of stakeholders. They evaluate compliance with regulations, including assessing governance and risk management processes and related controls. Those in data, analytics and technology solutions at PwC will assist clients in developing solutions that help build trust, drive improvement, and detect, monitor, and predict risk. Your work will involve using advanced analytics, data wrangling technology, and automation tools to leverage data, and will focus on establishing the right processes and structures to enable our clients to make efficient and effective decisions based on accurate information that is complete and trustworthy.
Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities.
Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
Apply a learning mindset and take ownership for your own development.
Appreciate diverse perspectives, needs, and feelings of others.
Adopt habits to sustain high performance and develop your potential.
Actively listen, ask questions to check understanding, and clearly express ideas.
Seek, reflect, act on, and give feedback.
Gather information from a range of sources to analyse facts and discern patterns.
Commit to understanding how the business works and building commercial awareness.
Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements.
About The Job
Introduction to PwC Service Delivery Center
PricewaterhouseCoopers Acceleration Centre (Kolkata) Private Limited is a joint venture in India among members of the PricewaterhouseCoopers Network that will leverage the scale and capabilities of its network. It is a member firm of PricewaterhouseCoopers International Limited and has its registered office in Kolkata, India. The Delivery Center will provide a professional with an opportunity to work in a dynamic environment where you will have the ability to develop process- and quality-based skills.
To really stand out and make us fit for the future in a constantly changing world, each one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this we have the PwC Professional, our global leadership development framework. It gives us a single set of expectations across our lines, geographies and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future.
The DET supports client engagement teams in the planning, execution, and delivery of various data-enabled firm solutions. Within the DET, Data and ERP competencies are strategically organized to take advantage of natural synergies that drive more efficient, cost-effective solutions. The DET leverages professionals to optimize capabilities that drive the innovation and delivery of solutions that are standardized, repeatable, and enabled with the best data and automation to ensure continued quality outcomes and the ability to deliver sustained, continuous improvement and innovation at scale.
As an Associate, you will work as part of a team of problem solvers and help clients solve their complex business issues from strategy to execution. The candidate will report to an AC Manager. The AC team works as an extension of our overseas Engagement Teams and works closely with those teams as well as clients directly.
Requirements
Preferred Knowledge/Skills:
Candidates must have a bachelor's degree from a reputable tertiary institution to join as campus hires.
Basic knowledge and understanding of financial risk management, operational risk management, and compliance requirements.
Strong verbal, written, and interpersonal communication skills.
Good analytical skills with high attention to detail and accuracy.
Good knowledge of Microsoft suite tools (e.g. Word, Excel, Access, PowerPoint).
Functional Skills
Hands-on experience with data management as per business requirements for analytics or audit analytics.
Experience in dealing with financial data, journal entry testing, and data analytics for business processes.
Experience in performing data transformation (ETL), data quality checks, and data blending.
Demonstrates good knowledge and understanding of performing on project teams and providing deliverables, involving multiphase data analysis related to the evaluation of compliance, finance, and risk issues.
Technical Tools
Must have:
Hands-on experience with MS-SQL / ACL or other structured query languages.
Demonstrates good knowledge and/or a proven record of success leveraging data manipulation and analysis technologies, inclusive of Microsoft SQL Server, SQL, Oracle, or DB2.
Demonstrates knowledge in Excel and its functionality.
Good To Have
Experience in a similar role in their current profile.
Good accounting knowledge and experience in dealing with financial data are a plus.
Knowledge of Azure Databricks / Alteryx / Python / SAS / Knime.
Demonstrates good knowledge and/or a proven record of success leveraging data visualization tools such as Power BI and Tableau.
Education/Qualification
Preferred: BCA / MCA / B.Tech / M.Tech or equivalent
1+ years of experience in Data Analytics / Data Migration / Data Transformation
Certification in Data Analytics / Data Science
Posted 1 week ago
4.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In quality engineering at PwC, you will focus on implementing leading practice standards of quality in software development and testing processes. In this field, you will use your experience to identify and resolve defects, optimise performance, and enhance user experience.
The Opportunity
When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax and Business Services. In our innovative hubs, you'll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You'll also participate in dynamic and digitally enabled training that is designed to grow your technical and professional skills.
As part of the AI Engineering team you will design, develop, and scale AI-driven web applications and platforms. As a Senior Associate you will analyze complex problems, mentor others, and maintain rigorous standards while building meaningful client connections and navigating increasingly complex situations. This role is well-suited for engineers eager to blend their full stack development skills with the emerging world of AI and machine learning in a fast-paced, cross-functional environment.
Responsibilities
Design and implement AI-driven web applications and platforms
Analyze complex challenges and develop impactful solutions
Mentor junior team members and foster their professional growth
Maintain exemplary standards of quality in every deliverable
Build and nurture meaningful relationships with clients
Navigate intricate situations and adapt to evolving requirements
Collaborate in a fast-paced, cross-functional team environment
Leverage full stack development skills in AI and machine learning projects
What You Must Have
Bachelor's degree in Computer Science, Software Engineering, or a related field
4-9 years of experience
Oral and written proficiency in English required
What Sets You Apart
Skilled in modern frontend frameworks like React or Angular
Demonstrated hands-on experience with GenAI applications
Familiarity with LLM orchestration tools
Understanding of Responsible AI practices
Experience with DevOps tools like Terraform and Kubernetes
Knowledge of MLOps capabilities
Security experience with OpenID Connect and OAuth2
Experience in AI/ML R&D or cross-functional teams
Preferred Knowledge/Skills
Role Overview
We are looking for a skilled and proactive Full Stack Engineer to join our AI Engineering team. You will play a pivotal role in designing, developing, and scaling AI-driven web applications and platforms. This role is ideal for engineers who are passionate about blending full stack development skills with the emerging world of AI and machine learning, and who thrive in cross-functional, fast-paced environments.
Key Responsibilities
Develop and maintain scalable web applications and APIs using Python (FastAPI, Flask, Django) and modern frontend frameworks (React.js, Angular.js); a minimal FastAPI sketch follows this posting.
Build intuitive, responsive UIs using JavaScript/TypeScript, CSS3, Bootstrap, and Material UI for AI-powered products.
Collaborate closely with product teams to deliver GenAI/RAG-based solutions.
Design backend services for:
Data pipelines (Azure Data Factory, Data Lake, Delta Lake)
Model inference
Embedding and metadata storage (SQL, NoSQL, vector DBs)
Optimize application performance for AI inference and data-intensive workloads.
Integrate third-party APIs, model-hosting platforms (OpenAI, Azure ML, AWS SageMaker), and vector databases.
Implement robust CI/CD pipelines using Azure DevOps, GitHub Actions, or Jenkins.
Participate in architectural reviews and contribute to design best practices across the engineering organization.
Required Skills & Experience
4–9 years of professional full-stack engineering experience.
Bachelor's degree in Computer Science, Engineering, or a related technical field (BE/BTech/MCA).
Strong Python development skills, particularly with FastAPI, Flask, or Django.
Experience with data processing using Pandas.
Proficient in JavaScript/TypeScript with at least one modern frontend framework (React, Angular).
Solid understanding of RESTful and GraphQL API design.
Experience with at least one cloud platform:
Azure: Functions, App Service, AI Search, Service Bus, AI Foundry
AWS: Lambda, S3, SageMaker, EC2
Hands-on experience building GenAI applications using RAG and agent frameworks.
Database proficiency with:
Relational databases: PostgreSQL, SQL Server
NoSQL databases: MongoDB, DynamoDB
Vector stores for embedding retrieval
Familiarity with LLM orchestration tools: LangChain, AutoGen, LangGraph, Crew AI, A2A, MCP
Understanding of Responsible AI practices and working knowledge of LLM providers (OpenAI, Anthropic, Google PaLM, AWS Bedrock)
Good To Have Skills
DevOps & infrastructure: Terraform, Kubernetes, Docker, Jenkins
MLOps capabilities: model versioning, inference monitoring, automated retraining
Security experience with OpenID Connect, OAuth2, JWT
Deep experience with data platforms: Databricks, Microsoft Fabric
Prior experience in AI/ML R&D or working within cross-functional product teams
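For illustration only, a minimal FastAPI sketch of the backend pattern this posting lists: a typed REST endpoint wrapping a model-inference call. The route, request schema, and stand-in keyword "model" are assumptions for the example, not PwC's implementation.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str

@app.post("/infer")
def infer(q: Query) -> dict:
    # A real service would call a hosted model (OpenAI, Azure ML, SageMaker) here;
    # this keyword rule is only a stand-in so the sketch runs end to end.
    label = "positive" if "good" in q.text.lower() else "neutral"
    return {"input": q.text, "label": label}

# Run locally with: uvicorn main:app --reload
```

Pydantic gives the endpoint request validation for free, which is the main reason FastAPI pairs well with model-serving backends like those described above.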
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a Senior / Lead Data Scientist with hands-on experience in ML/DL, NLP, GenAI, LLMs, and Azure Databricks to lead strategic data initiatives. In this role, you'll lead end-to-end project execution, work on advanced AI/ML models, and guide junior team members. Your expertise will help align data-driven solutions with business goals and deliver real impact.
Key Responsibilities
Collect, clean, and validate large volumes of structured and unstructured data
Design, develop, and implement ML/DL models, NLP solutions, and LLM-based applications (a minimal NLP sketch follows this posting)
Leverage Generative AI techniques to drive innovation in product development
Lead Azure Databricks-based workflows and scalable model deployment pipelines
Interpret data, analyze results, and present actionable insights to stakeholders
Own the delivery of full data science project lifecycles, from problem scoping to production deployment
Collaborate cross-functionally with engineering, product, and business teams
Create effective visualizations, dashboards, and reports
Conduct experiments, research new techniques, and continuously enhance model performance
Ensure alignment of data initiatives with organizational strategy and KPIs
Mentor junior data scientists and contribute to team growth
Required Skills & Qualifications
6+ years of hands-on experience in Data Science or Machine Learning roles
Strong command of Python (R/MATLAB is a plus)
Hands-on experience with NLP, chatbots, GenAI, and LLMs (e.g., GPT, LLaMA)
Proficient in Azure Databricks (mandatory)
Experience with both SQL and NoSQL databases
Solid grasp of statistical modeling, data mining, and predictive analytics
Familiarity with data visualization tools (e.g., Power BI, Tableau, Matplotlib, Seaborn)
Experience in deploying ML models in production environments
Bachelor's or Master's degree in Computer Science, Mathematics, Data Science, or a related field
Nice To Have
Experience in RPA tools, MLOps, or AI product development
Exposure to cloud platforms (Azure preferred; AWS/GCP is a bonus)
Familiarity with version control and CI/CD practices
(ref:hirist.tech)
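For illustration only, a minimal scikit-learn sketch of the NLP classification work this posting mentions; the toy dataset and labels are assumptions for the example, not project data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny toy corpus standing in for real customer text.
texts = ["great product", "terrible support", "love the dashboard", "refund please"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# TF-IDF features feeding a linear classifier, the classic NLP baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["support was great"]))
```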
Posted 1 week ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Sr. Data Engineer (AWS)
Location: Ahmedabad, Gujarat
Job Type: Full Time
Experience: 4+ years
Department: Data Engineering
About Simform
Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market. Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.
Role Overview
The Sr. Data Engineer (AWS/Azure) will be responsible for building and managing robust, scalable, and secure data pipelines across cloud-based infrastructure. The role includes designing ETL/ELT workflows, implementing data lake and warehouse solutions, and integrating with real-time and batch data systems using AWS/Azure services. You will work closely with data scientists, ML engineers, and software teams to power data-driven applications and analytics.
Key Responsibilities
Design, develop, and maintain scalable end-to-end data pipelines on AWS/Azure.
Build robust ETL/ELT workflows for both batch and streaming data workloads.
Design high-performance data models and manage large-scale structured and unstructured datasets (100GB+).
Develop distributed data processing solutions using Apache Kafka, Spark, Flink, and Airflow (a minimal streaming sketch follows this posting).
Implement best practices for data transformation, data quality, and error handling.
Optimize SQL queries and implement indexing, partitioning, and tuning strategies for performance improvement.
Integrate various data sources, including PostgreSQL, SQL Server, MySQL, MongoDB, Cassandra, and Neptune.
Collaborate with software developers, ML engineers, and stakeholders to support business and analytics initiatives.
Ensure adherence to data governance, security, and compliance standards.
Participate in client meetings, provide technical guidance, and document architecture decisions.
Preferred Qualifications (Nice To Have)
Exposure to data lake architecture and lakehouse frameworks.
Understanding of integrating data pipelines with ML workflows.
Experience in CI/CD automation for data pipeline deployments.
Familiarity with data observability and monitoring tools.
(ref:hirist.tech)
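For illustration only, a minimal sketch of the streaming side of this role: reading events from Kafka with Spark Structured Streaming and landing them in a Delta table. The broker address, topic name, and file paths are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Subscribe to a Kafka topic and decode the raw message payload.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
         .option("subscribe", "events")                     # assumed topic
         .load()
         .select(F.col("value").cast("string").alias("payload"),
                 F.col("timestamp"))
)

# Append the stream to a Delta table; the checkpoint directory is what
# lets Structured Streaming recover and avoid duplicate writes.
(events.writeStream
       .format("delta")
       .option("checkpointLocation", "/tmp/chk/events")
       .outputMode("append")
       .start("/tmp/delta/events")
       .awaitTermination())
```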
Posted 1 week ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market. Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.
AWS: AWS Glue, Lambda, Redshift, RDS (experience with EMR is a plus)
Azure: Azure Data Factory, Synapse Analytics, Databricks, Azure Functions
Build robust ETL/ELT workflows for both batch and streaming data workloads.
Design high-performance data models and manage large-scale structured and unstructured datasets (100GB+).
Develop distributed data processing solutions using Apache Kafka, Spark, Flink, and Airflow.
Implement best practices for data transformation, data quality, and error handling.
Optimize SQL queries and implement indexing, partitioning, and tuning strategies for performance improvement.
Integrate various data sources, including PostgreSQL, SQL Server, MySQL, MongoDB, Cassandra, and Neptune.
Collaborate with software developers, ML engineers, and stakeholders to support business and analytics initiatives.
Ensure adherence to data governance, security, and compliance standards.
Participate in client meetings, provide technical guidance, and document architecture decisions.
(ref:hirist.tech)
Posted 1 week ago