4.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In quality engineering at PwC, you will focus on implementing leading-practice standards of quality in software development and testing processes. In this field, you will use your experience to identify and resolve defects, optimise performance, and enhance user experience.

The Opportunity
When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax and Business Services. In our innovative hubs, you'll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You'll also participate in dynamic and digitally enabled training that is designed to grow your technical and professional skills.

As part of the AI Engineering team, you will design, develop, and scale AI-driven web applications and platforms. As a Senior Associate, you will analyze complex problems, mentor others, and maintain rigorous standards while building meaningful client connections and navigating increasingly complex situations. This role is well-suited for engineers eager to blend their full stack development skills with the emerging world of AI and machine learning in a fast-paced, cross-functional environment.
Responsibilities
- Design and implement AI-driven web applications and platforms
- Analyze complex challenges and develop impactful solutions
- Mentor junior team members and foster their professional growth
- Maintain exemplary standards of quality in every deliverable
- Build and nurture meaningful relationships with clients
- Navigate intricate situations and adapt to evolving requirements
- Collaborate in a fast-paced, cross-functional team environment
- Leverage full stack development skills in AI and machine learning projects

What You Must Have
- Bachelor's Degree in Computer Science, Software Engineering, or a related field
- 4-9 years of experience
- Oral and written proficiency in English

What Sets You Apart
- Bachelor's Degree in Computer Science or Engineering
- Skilled in modern frontend frameworks such as React or Angular
- Hands-on experience with GenAI applications
- Familiarity with LLM orchestration tools
- Understanding of Responsible AI practices
- Experience with DevOps tools such as Terraform and Kubernetes
- Knowledge of MLOps capabilities
- Security experience with OpenID Connect and OAuth2
- Experience in AI/ML R&D or cross-functional teams

Role Overview
We are looking for a skilled and proactive Full Stack Engineer to join our AI Engineering team. You will play a pivotal role in designing, developing, and scaling AI-driven web applications and platforms. This role is ideal for engineers who are passionate about blending full stack development skills with the emerging world of AI and machine learning, and who thrive in cross-functional, fast-paced environments.

Key Responsibilities
- Develop and maintain scalable web applications and APIs using Python (FastAPI, Flask, Django) and modern frontend frameworks (React.js, Angular.js).
- Build intuitive, responsive UIs using JavaScript/TypeScript, CSS3, Bootstrap, and Material UI for AI-powered products.
- Collaborate closely with product teams to deliver GenAI/RAG-based solutions.
- Design backend services for:
  - Data pipelines (Azure Data Factory, Data Lake, Delta Lake)
  - Model inference
  - Embedding and metadata storage (SQL, NoSQL, Vector DBs)
- Optimize application performance for AI inference and data-intensive workloads.
- Integrate third-party APIs, model-hosting platforms (OpenAI, Azure ML, AWS SageMaker), and vector databases.
- Implement robust CI/CD pipelines using Azure DevOps, GitHub Actions, or Jenkins.
- Participate in architectural reviews and contribute to design best practices across the engineering organization.

Required Skills & Experience
- 4-9 years of professional full-stack engineering experience.
- Bachelor's degree in Computer Science, Engineering, or a related technical field (BE/BTech/MCA)
- Strong Python development skills, particularly with FastAPI, Flask, or Django.
- Experience with data processing using Pandas.
- Proficient in JavaScript/TypeScript with at least one modern frontend framework (React, Angular).
- Solid understanding of RESTful and GraphQL API design.
- Experience with at least one cloud platform:
  - Azure: Functions, App Service, AI Search, Service Bus, AI Foundry
  - AWS: Lambda, S3, SageMaker, EC2
- Hands-on experience building GenAI applications using RAG and agent frameworks.
- Database proficiency with:
  - Relational databases: PostgreSQL, SQL Server
  - NoSQL databases: MongoDB, DynamoDB
  - Vector stores for embedding retrieval
- Familiarity with LLM orchestration tools: LangChain, AutoGen, LangGraph, Crew AI, A2A, MCP
- Understanding of Responsible AI practices and working knowledge of LLM providers (OpenAI, Anthropic, Google PaLM, AWS Bedrock)

Good To Have Skills
- DevOps & Infrastructure: Terraform, Kubernetes, Docker, Jenkins
- MLOps capabilities: model versioning, inference monitoring, automated retraining
- Security experience with OpenID Connect, OAuth2, JWT
- Deep experience with data platforms: Databricks, Microsoft Fabric
- Prior experience in AI/ML R&D or working within cross-functional product teams
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a Senior / Lead Data Scientist with hands-on experience in ML/DL, NLP, GenAI, LLMs, and Azure Databricks to lead strategic data initiatives. In this role, you'll lead end-to-end project execution, work on advanced AI/ML models, and guide junior team members. Your expertise will help align data-driven solutions with business goals and deliver real impact.

Key Responsibilities
- Collect, clean, and validate large volumes of structured and unstructured data
- Design, develop, and implement ML/DL models, NLP solutions, and LLM-based applications
- Leverage Generative AI techniques to drive innovation in product development
- Lead Azure Databricks-based workflows and scalable model deployment pipelines
- Interpret data, analyze results, and present actionable insights to stakeholders
- Own the delivery of full data science project lifecycles, from problem scoping to production deployment
- Collaborate cross-functionally with engineering, product, and business teams
- Create effective visualizations, dashboards, and reports
- Conduct experiments, research new techniques, and continuously enhance model performance
- Ensure alignment of data initiatives with organizational strategy and KPIs
- Mentor junior data scientists and contribute to team growth

Skills & Qualifications
- 6+ years of hands-on experience in Data Science or Machine Learning roles
- Strong command of Python (R/MATLAB is a plus)
- Hands-on experience with NLP, chatbots, GenAI, and LLMs (e.g., GPT, LLaMA)
- Proficient in Azure Databricks (mandatory)
- Experience with both SQL and NoSQL databases
- Solid grasp of statistical modeling, data mining, and predictive analytics
- Familiarity with data visualization tools (e.g., Power BI, Tableau, Matplotlib, Seaborn)
- Experience in deploying ML models in production environments
- Bachelor's or Master's degree in Computer Science, Mathematics, Data Science, or a related field

Nice To Have
- Experience in RPA tools, MLOps, or AI product development
- Exposure to cloud platforms (Azure preferred; AWS/GCP is a bonus)
- Familiarity with version control and CI/CD practices

(ref:hirist.tech)
Posted 1 week ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Sr. Data Engineer (AWS)
Location: Ahmedabad, Gujarat
Job Type: Full Time
Experience: 4+ years
Department: Data Engineering

About Simform
Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market. Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.

Role Overview
The Sr. Data Engineer (AWS/Azure) will be responsible for building and managing robust, scalable, and secure data pipelines across cloud-based infrastructure. The role includes designing ETL/ELT workflows, implementing data lake and warehouse solutions, and integrating with real-time and batch data systems using AWS/Azure services. You will work closely with data scientists, ML engineers, and software teams to power data-driven applications and analytics.

Key Responsibilities
- Design, develop, and maintain scalable end-to-end data pipelines on AWS/Azure.
- Build robust ETL/ELT workflows for both batch and streaming data workloads.
- Design high-performance data models and manage large-scale structured and unstructured datasets (100GB+).
- Develop distributed data processing solutions using Apache Kafka, Spark, Flink, and Airflow.
- Implement best practices for data transformation, data quality, and error handling.
- Optimize SQL queries and implement indexing, partitioning, and tuning strategies for performance improvement.
- Integrate various data sources including PostgreSQL, SQL Server, MySQL, MongoDB, Cassandra, and Neptune.
- Collaborate with software developers, ML engineers, and stakeholders to support business and analytics initiatives.
- Ensure adherence to data governance, security, and compliance standards.
- Participate in client meetings, provide technical guidance, and document architecture decisions.

Preferred Qualifications (Nice To Have)
- Exposure to data lake architecture and lakehouse frameworks.
- Understanding of integrating data pipelines with ML workflows.
- Experience in CI/CD automation for data pipeline deployments.
- Familiarity with data observability and monitoring tools.
Posted 1 week ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market. Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.

Cloud Stack
- AWS: AWS Glue, Lambda, Redshift, RDS (experience with EMR is a plus)
- Azure: Azure Data Factory, Synapse Analytics, Databricks, Azure Functions

Key Responsibilities
- Build robust ETL/ELT workflows for both batch and streaming data workloads.
- Design high-performance data models and manage large-scale structured and unstructured datasets (100GB+).
- Develop distributed data processing solutions using Apache Kafka, Spark, Flink, and Airflow.
- Implement best practices for data transformation, data quality, and error handling.
- Optimize SQL queries and implement indexing, partitioning, and tuning strategies for performance improvement.
- Integrate various data sources including PostgreSQL, SQL Server, MySQL, MongoDB, Cassandra, and Neptune.
- Collaborate with software developers, ML engineers, and stakeholders to support business and analytics initiatives.
- Ensure adherence to data governance, security, and compliance standards.
- Participate in client meetings, provide technical guidance, and document architecture decisions.
Posted 1 week ago
5.0 years
0 Lacs
Greater Bengaluru Area
On-site
Galileo.ai is the leading platform for Gen AI evaluation and observability, with a mission to democratize building safe, reliable and robust applications in the new era of AI-powered software development. Our foundation is built on pioneering the early technology behind the world's most ubiquitous AI applications, including Apple's Siri and Google Speech. We firmly believe that AI developers require meticulously crafted, research-driven tools to create trustworthy and high-quality generative AI applications that will revolutionize our work and lifestyle.

Galileo addresses the complexities inherent in implementing, evaluating, and monitoring GenAI applications, optimizing the development process for both individual developers and teams by offering a comprehensive platform that spans the full AI development lifecycle. Galileo bridges critical gaps, significantly enhancing developers' ability to refine and deploy reliable and precise GenAI applications. Since its inception, Galileo has rapidly gained traction, serving Fortune 100 banks, Fortune 50 telecom companies, as well as AI teams at prominent organizations such as Reddit and Headspace Health, among dozens of others.

Galileo has AI research at its core, with the founders coming from Google and Uber, where they solved challenging AI/ML problems in the Speech, Evaluation and ML Infra domains. It is now a Series B business backed by tier 1 investors including Battery Ventures, Scale Venture Partners, and Databricks Ventures, with $68M in total funding. We are headquartered in the San Francisco Bay Area, with locations such as New York and Bangalore, India forming our areas of future growth.

Role Description
Galileo AI is seeking a Technical Recruiter to drive high-impact hiring across the engineering, product, marketing, and sales teams.
In this role, you'll serve as a true partner to hiring managers: owning the search, shaping team structures, and playing a critical role in building our technical talent foundation.

Main Responsibilities
- Full-cycle recruitment across teams (Engineering, Product, Marketing, Sales), from sourcing through close
- Collaborate with managers to define hiring needs, build role requirements, and shape ideal team configurations
- Drive top-of-funnel strategies, including passive sourcing, referrals, and agency coordination
- Provide an exceptional candidate experience and act as a brand ambassador for Galileo AI
- Partner with Operations leaders to ensure efficient processes and reporting
- Maintain regular check-ins with hiring managers to track progress, adjust strategies, and keep hiring on pace
- Leverage market insights to advise hiring teams and adjust tactics as needed

Minimum Qualifications
- 5+ years of full-cycle technical recruiting experience at a fast-growing tech company or startup, preferably a developer-tooling company
- Excellent sourcing skills and a strong network in the engineering/AI talent market
- Proven experience managing agencies, recruiting resources, and referral programs
- Confident in managing multiple reqs and stakeholders simultaneously
- A thoughtful, consultative partner who can influence decisions and drive outcomes
- Passionate about candidate experience and building diverse, high-performing teams
- Familiarity with ATS tools (Rippling ATS, LinkedIn Recruiter, Pave, etc.) and data reporting

Why Galileo
- Join a seasoned founding team that has previously led product and engineering teams from 0 to $100M+ in revenue and from 0 to 1B+ users globally
- We obsess over our team's culture, driven by inclusivity, empathy and curiosity
- We invest in our team's development and happiness because our employees are the keys to our success and to ensuring happy customers. Towards that end, we offer:
  - Unlimited PTO 🌊
  - Parental leave for birthing or non-birthing parents: 100% pay for 8 weeks 🚼
  - Employee Stock Participation Plan 📈
  - Commuter Benefits 🚖
  - Mental and Physical Wellness 🧘
  - Company Paid Lunches 🌯
  - Headquarters Office in San Francisco 🌉 and a hub in New York 🌇
- Build the company with the Founders 🧑‍💻
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Telangana, India
On-site
About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.

About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb, where we believe in fostering an environment where everyone can thrive, innovate, and grow.

With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.
Position Details
Role: MLOps Engineer
Experience: 5-10 Years
Mandatory Skills: Python/MLOps/Docker and Kubernetes/FastAPI or Flask/CI-CD/Jenkins/Spark/SQL/RDB/Cosmos/Kafka/ADLS/API/Databricks
Other Skills: Azure/LLMOps/ADF/ETL
Location: Bangalore
Notice Period: less than 60 days

Job Description
We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges.

Responsibilities
- Collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions.
- Create high-performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders.
- Integrate machine learning models seamlessly into existing production systems.
- Continuously monitor and evaluate model performance, and retrain models automatically or periodically.
- Streamline existing ML pipelines to increase throughput.
- Identify and address security vulnerabilities in existing applications proactively.
- Design, develop, and implement machine learning models, preferably for insurance-related applications.
- Stay up to date on the latest advancements in machine learning and contribute to ongoing innovation within the team.

Requirements
- Experience deploying ML models to production
- Well versed with the Azure ecosystem
- Knowledge of NLP and Generative AI techniques; relevant experience will be a plus
- Knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch) will be a plus

Why Chubb?
Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
- Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence
- A Great Place to Work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026
- Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness, where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results
- Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter
- Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment

Employee Benefits
Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include:
- Savings and investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits and car lease that help employees optimally plan their finances
- Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, like education reimbursement programs, certification programs and access to global learning programs.
- Health and welfare benefits: We care about our employees' well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns and comprehensive insurance benefits.

Application Process
Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments and technical/functional interviews (if applicable).
Step 4: Final interaction with Chubb leadership.

Join Us
With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.

Apply Now: Chubb External Careers
Posted 1 week ago
8.0 years
0 Lacs
Telangana, India
On-site
JOB DESCRIPTION
Role: Senior Data Engineer
Experience: 8+ Years
Mandatory Skills: Azure + Python + Databricks/Spark + SQL
Location: Bangalore/Hyderabad
Notice Period: less than 30 days

Job Description
Skills: Azure, Python/PySpark, SQL, Databricks
- At least 8-10 years of experience in development of data solutions using cloud platforms
- Strong programming skills in Python
- Strong SQL skills and experience writing complex yet efficient SPROCs/functions/views using T-SQL
- Solid understanding of Spark architecture and experience with performance tuning of big data workloads in Spark
- Building complex data transformations on both structured and semi-structured data (XML/JSON) using PySpark and SQL
- Familiarity with the Azure Databricks environment
- Good understanding of the Azure cloud ecosystem; Azure data certification (DP-200/201/203) will be an advantage
- Proficient in source control using Git
- Good understanding of Agile, DevOps and CI/CD automated deployment (e.g. Azure DevOps, Jenkins)
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
As an AI/ML Manager at our Pune location, you will be responsible for leading the development of machine learning proof of concepts (PoCs) and demos using structured/tabular data for use cases such as forecasting, risk scoring, churn prediction, and optimization. Your role will involve collaborating with sales engineering teams to understand client requirements and presenting ML solutions during pre-sales calls and technical workshops.

You will be expected to build ML workflows using tools such as SageMaker, Azure ML, or Databricks ML, managing training, tuning, evaluation, and model packaging. Applying supervised, unsupervised, and semi-supervised techniques such as XGBoost, CatBoost, k-Means, PCA, and time-series models will be a key part of your responsibilities. Working closely with data engineering teams, you will define data ingestion, preprocessing, and feature engineering pipelines using Python, Spark, and cloud-native tools. Packaging and documenting ML assets for scalability and transition into delivery teams post-demo will be essential. Staying updated with the latest best practices in ML explainability, model performance monitoring, and MLOps is also expected. Participation in internal knowledge sharing, tooling evaluation, and continuous improvement of lab processes rounds out the role.

To qualify for this position, you should have at least 8 years of experience in developing and deploying classical machine learning models in production or PoC environments. Strong hands-on experience with Python, pandas, scikit-learn, and ML libraries such as XGBoost, CatBoost, and LightGBM is required. Familiarity with cloud-based ML environments such as AWS SageMaker, Azure ML, or Databricks is preferred. A solid understanding of feature engineering, model tuning, cross-validation, and error analysis is necessary. Experience with unsupervised learning, clustering, anomaly detection, and dimensionality reduction techniques will be beneficial.
You should be comfortable presenting models and insights to both technical and non-technical stakeholders during pre-sales engagements. Working knowledge of MLOps concepts, including model versioning, deployment automation, and drift detection, will be an advantage. If you are interested in this opportunity, please apply or share your resume at kanika.garg@austere.co.in.
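The unsupervised techniques named above (k-Means, PCA, anomaly detection) are typically pulled from scikit-learn in practice; purely as an illustration of the clustering idea, here is a toy k-Means in pure Python. The data points and cluster count are invented for the example.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Toy k-Means on 2-D points; real PoCs would use scikit-learn's KMeans."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: (p[0] - centroids[i][0]) ** 2
                                    + (p[1] - centroids[i][1]) ** 2)
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = []
        for i, cluster in enumerate(clusters):
            if cluster:
                new_centroids.append((sum(p[0] for p in cluster) / len(cluster),
                                      sum(p[1] for p in cluster) / len(cluster)))
            else:
                new_centroids.append(centroids[i])  # keep empty clusters in place
        if new_centroids == centroids:  # converged: centroids stopped moving
            break
        centroids = new_centroids
    return centroids, clusters

# Two well-separated blobs, e.g. low-usage vs high-usage customers.
pts = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(pts, k=2)
```

Each iteration alternates an assignment step and a centroid-update step until the centroids stop moving; a production PoC would add feature scaling, silhouette scoring, and the like on top of a library implementation.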
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Quality Engineer, you will collaborate with product, engineering, and customer teams to gather requirements and develop a comprehensive data quality strategy. You will lead data governance processes, including data preparation, obfuscation, integration, slicing, and quality control. Testing data pipelines, ETL processes, APIs, and system performance to ensure reliability and accuracy will be a key responsibility. Additionally, you will prepare test data sets, conduct data profiling, and perform benchmarking to identify inconsistencies or inefficiencies. Creating and implementing strategies to verify the quality of data products and ensuring alignment with business standards will be crucial. You will set up data quality environments and applications in compliance with defined standards, contributing to CI/CD process improvements. Participation in the design and maintenance of data platforms, as well as building automation frameworks for data quality testing and resolving potential issues, will be part of your role. Providing support in troubleshooting data-related issues to ensure timely resolution is also expected. It is essential to ensure that all data quality processes and tools align with organizational goals and industry best practices. Collaboration with stakeholders to enhance data platforms and optimize data quality workflows will be necessary to drive success in this role. 
Requirements: - Bachelor's degree in Computer Science or a related technical field involving coding, such as physics or mathematics - At least three years of hands-on experience in Data Management, Data Quality verification, Data Governance, or Data Integration - Strong understanding of data pipelines, Data Lakes, and ETL testing methodologies - Proficiency in CI/CD principles and their application in data processing - Comprehensive knowledge of SQL, including aggregation and window functions - Experience in scripting with Python or similar programming languages - Databricks and Snowflake experience is a must, with good exposure to notebooks, SQL editors, etc. - Experience in developing test automation frameworks for data quality assurance - Familiarity with Big Data principles and their application in modern data systems - Experience in data analysis and requirements validation, including gathering and interpreting business needs - Experience in maintaining QA environments to ensure smooth testing and deployment processes - Hands-on experience in Test Planning, Test Case design, and Test Result Reporting in data projects - Strong analytical skills, with the ability to approach problems methodically and communicate solutions effectively - English proficiency at B2 level or higher, with excellent verbal and written communication skills Nice to have: - Familiarity with advanced data visualization tools to enhance reporting and insights - Experience in working with distributed data systems and frameworks like Hadoop.
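As a sketch of the SQL-based data quality checks this role describes, the following uses Python's built-in sqlite3 as a stand-in for a warehouse such as Databricks or Snowflake; the orders table and its columns are hypothetical.

```python
import sqlite3

# In-memory stand-in for a real warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES
        (1, 101, 50.0), (2, 102, NULL), (3, 103, 75.5),
        (3, 103, 75.5),  -- duplicate row a quality check should flag
        (4, NULL, 20.0);
""")

# Check 1: duplicate business keys, via aggregation.
dupes = conn.execute("""
    SELECT order_id, COUNT(*) AS n
    FROM orders GROUP BY order_id HAVING COUNT(*) > 1
""").fetchall()

# Check 2: null-rate profiling of critical columns in one pass.
total, null_cust, null_amt = conn.execute("""
    SELECT COUNT(*),
           SUM(CASE WHEN customer_id IS NULL THEN 1 ELSE 0 END),
           SUM(CASE WHEN amount IS NULL THEN 1 ELSE 0 END)
    FROM orders
""").fetchone()
```

In an automation framework these two queries would be parameterized per table and the results asserted against thresholds as part of the CI/CD pipeline.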
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
The company is looking for a Lead SQL Server DBA with over 10 years of experience to manage and support DBA operations in various environments, including on-premises and Azure cloud infrastructure. The ideal candidate will have expertise in cloud migration, performance tuning, database security, and leading support teams across different levels. As a Lead SQL Server DBA, you will be responsible for overseeing a team of 10+ DBAs supporting SQL Server, Oracle, and MySQL databases in multiple entities. You will lead cloud migration efforts and manage Azure-based database systems while ensuring high availability, disaster recovery, and optimized backup/restore procedures. Additionally, you will be involved in overseeing structured and unstructured database systems, optimizing performance, enhancing security, and ensuring compliance with cybersecurity regulations. Your role will also involve reviewing and managing database code quality, enforcing best practices, and supporting development teams. You will be responsible for managing DevOps database deployment pipelines using tools like Redgate and acting as a technical lead in support engagements, handling escalations and critical issues. Collaboration in onshore/offshore models and Agile environments will also be a part of your responsibilities. The ideal candidate should have at least 10 years of experience in SQL Server administration, with a strong focus on cloud migration and support, especially on Microsoft Azure. Hands-on experience with Databricks, lakehouse concepts, and data warehousing is essential. In-depth knowledge of HA/DR architecture, security optimization, compliance standards, and performance tuning is required. Familiarity with support across different service tiers, expertise in DevOps practices, and excellent communication skills are also necessary. 
Desirable qualifications include experience in the insurance domain, familiarity with Active Directory and Windows Server environments, and Microsoft certifications in relevant database/cloud technologies. This is a full-time position with Provident Fund benefits. The work location is in person.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be joining Papigen, a fast-growing global technology services company that focuses on delivering innovative digital solutions through deep industry experience and cutting-edge expertise. The company specializes in technology transformation, enterprise modernization, and dynamic areas such as Cloud, Big Data, Java, React, DevOps, and more. The client-centric approach of Papigen combines consulting, engineering, and data science to help businesses evolve and scale efficiently. Your role as a Senior Data QA Analyst will involve supporting data integration, transformation, and reporting validation for enterprise-scale systems. This position requires close collaboration with data engineers, business analysts, and stakeholders to ensure the quality, accuracy, and reliability of data workflows, particularly in Azure Databricks and ETL pipelines. Key responsibilities include collaborating with Business Analysts and Data Engineers to understand requirements and translating them into test scenarios and test cases. You will need to develop and execute comprehensive test plans and scripts for data validation, as well as log and manage defects using tools like Azure DevOps. Your role will also involve supporting UAT and post-go-live smoke testing. You will be responsible for understanding data architecture and workflows, including ETL processes and data movement. Writing and executing complex SQL queries to validate data accuracy, completeness, and consistency will be crucial. Additionally, ensuring the correctness of data transformations and mappings based on business logic is essential. As a Senior Data QA Analyst, you will validate the structure, metrics, and content of BI reports. Performing cross-checks of report outputs against source systems and ensuring that reports reflect accurate calculations and align with business requirements will be part of your responsibilities.
To be successful in this role, you should have a Bachelor's degree in IT, Computer Science, MIS, or a related field. You should also possess 8+ years of experience in QA, especially in data validation or data warehouse testing. Strong hands-on experience with SQL and data analysis is required, along with proven experience working with Azure Databricks, Python, and PySpark (preferred). Familiarity with data models like Data Marts, EDW, and Operational Data Stores is also necessary. Excellent understanding of data transformation, mapping logic, and BI validation is crucial, as well as experience with test case documentation, defect tracking, and Agile methodologies. Strong verbal and written communication skills are essential, along with the ability to work in a cross-functional environment. Working at Papigen will provide you with the opportunity to work with leading global clients, exposure to modern technology stacks and tools, a supportive and collaborative team environment, and continuous learning and career development opportunities.
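One common shape for the complex validation SQL this role calls for is source-to-target reconciliation after an ETL run. A minimal sketch follows, using Python's built-in sqlite3 in place of Azure Databricks; the table and column names are invented for the example.

```python
import sqlite3

# SQLite stands in for the warehouse; src/tgt tables mimic pre- and post-ETL data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_sales (id INTEGER, region TEXT, amount REAL);
    CREATE TABLE tgt_sales (id INTEGER, region TEXT, amount REAL);
    INSERT INTO src_sales VALUES (1,'N',100.0),(2,'S',200.0),(3,'N',50.0);
    INSERT INTO tgt_sales VALUES (1,'N',100.0),(2,'S',200.0),(3,'N',55.0);
""")

# Row-count and control-total reconciliation between source and target.
src_cnt, src_sum = conn.execute("SELECT COUNT(*), SUM(amount) FROM src_sales").fetchone()
tgt_cnt, tgt_sum = conn.execute("SELECT COUNT(*), SUM(amount) FROM tgt_sales").fetchone()

# Row-level diff: records whose amounts disagree after the transformation.
mismatches = conn.execute("""
    SELECT s.id, s.amount AS src_amount, t.amount AS tgt_amount
    FROM src_sales s JOIN tgt_sales t ON s.id = t.id
    WHERE s.amount <> t.amount
""").fetchall()
```

Counts match here but the control totals do not, and the row-level diff pinpoints the offending record, which is exactly the evidence a QA analyst would attach to a defect in Azure DevOps.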
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
As an Expert Software Engineer Java at SAP, you will play a critical role in leading strategic initiatives within the App2App Integration team in SAP Business Data Cloud. Your primary responsibility will be to accelerate the development and adoption of seamless, low-latency integration patterns across SAP applications and the BDC data fabric. Your expertise in Java, ETL, distributed data processing, Kafka, cloud-native development, and DevOps will be essential in driving architectural direction, overseeing key integration frameworks, and providing hands-on leadership to build real-time, event-driven, and secure communication solutions across a distributed enterprise landscape. In this role, you will collaborate closely with stakeholders across SAP's data platform initiatives, guiding the evolution of reusable integration patterns, automation practices, and platform consistency while mentoring teams, conducting code reviews, and contributing to team-level architectural decisions. Your responsibilities will include leading and designing App2App integration components and services using Java, RESTful APIs, and messaging frameworks such as Apache Kafka. You will architect and build scalable ETL and data transformation pipelines for both real-time and batch processing needs, integrating data workflows with platforms like Databricks, Apache Spark, or other modern data engineering tools. You will drive the evolution of reusable integration patterns, automation practices, and platform consistency across services, architect and build distributed data processing pipelines, support large-scale data ingestion, transformation, and routing, guide the DevOps strategy to define and improve CI/CD pipelines, monitoring, and deployment strategies using modern GitOps practices, and guide cloud-native secure deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP). 
Additionally, you will collaborate on SAP's broader Data Platform efforts, mentor junior developers, and contribute to team-level architectural and technical decisions. To be successful in this role, you should hold a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field, with 10+ years of hands-on experience in backend development using Java, strong object-oriented design skills, and integration patterns expertise. Proven experience in designing and building ETL pipelines and large-scale data processing frameworks, and familiarity with platforms like Databricks, Spark, or other data engineering tools, is highly desirable. Proficiency in SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud, or HANA; experience designing CI/CD pipelines with containerization, Kubernetes, and DevOps best practices; familiarity with Hyperscaler environments (AWS, Azure, GCP); and a record of driving engineering excellence within complex enterprise systems are key qualifications that you should bring to this role. Join us at SAP, where our culture of inclusion, focus on health and well-being, and flexible working models ensure that everyone, regardless of background, feels included and can perform at their best. We believe in unleashing all talent, investing in our employees, and creating a better, more equitable world. SAP is an equal opportunity workplace and an affirmative action employer, committed to Equal Employment Opportunity and providing accessibility accommodations to applicants with physical and/or mental disabilities. If you are ready to bring out your best and contribute to SAP's mission of helping the world run better, we encourage you to apply for this exciting opportunity.
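The posting's stack is Java and Apache Kafka; purely to illustrate the event-driven App2App pattern in a runnable form, here is a Python sketch in which an in-memory queue stands in for a Kafka topic. The event names and payloads are invented for the example.

```python
import json
import queue

# In-memory stand-in for a Kafka topic; a real deployment would use a Kafka
# producer/consumer client and partitioned topics instead.
topic = queue.Queue()

def publish(event_type, payload):
    """Producer side: an application emits a serialized domain event."""
    topic.put(json.dumps({"type": event_type, "payload": payload}))

def consume_all(handlers):
    """Consumer side: route each event to the handler registered for its type."""
    results = []
    while not topic.empty():
        event = json.loads(topic.get())
        handler = handlers.get(event["type"])
        if handler:
            results.append(handler(event["payload"]))
    return results

# Hypothetical App2App flow: one app publishes order events, another enriches them.
publish("order.created", {"order_id": 1, "amount": 120.0})
publish("order.created", {"order_id": 2, "amount": 80.0})
processed = consume_all({"order.created": lambda p: {**p, "status": "ingested"}})
```

The decoupling shown here (producers know only the event schema, consumers only their handler) is what lets real-time and batch pipelines evolve independently across a distributed landscape.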
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
navi mumbai, maharashtra
On-site
As an I&F Decision Sci Practitioner Sr Analyst at Accenture, you will be responsible for defining warranty offerings, running outsourced after-sales warranty support and entitlement programs, evaluating customer feedback and planned versus actual costs of warranty coverage, utilizing warranty data analytics to reduce costs and enhance product quality, increasing recoveries from suppliers, and designing and deploying warranty solutions. To excel in this role, you should have expertise in Warranty Analytics, Automotive Warranty, Scripting, Data Analysis & Interpretation, Business Intelligence, Data Engineering/SQL, Databricks, and ML. Your commitment to quality, adaptability, agility for quick learning, ability to work well in a team, and strong written and verbal communication skills are essential for success. In this position, you will be required to analyze and solve increasingly complex problems, interact with peers within Accenture, and potentially engage with clients and/or Accenture management. You will receive minimal instruction on daily tasks and a moderate level of guidance on new assignments. Your decisions will impact your own work and may also influence the work of others. This role may involve serving as an individual contributor and/or overseeing a small work effort and/or team. A minimum qualification of Any Graduation is required for this position at Accenture.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
At Medtronic, you can embark on a life-long career dedicated to exploration and innovation, all while contributing to the cause of advancing healthcare access and equity for all. Your role will be pivotal in leading with purpose to break down barriers to innovation in a more connected and compassionate world. As a PySpark Data Engineer at Medtronic's new Minimed India Hub, you will play a crucial part in designing, developing, and maintaining data pipelines using PySpark. Collaborating closely with data scientists, analysts, and other stakeholders, your responsibilities will revolve around ensuring the efficient processing and analysis of large datasets, managing complex transformations, and aggregations. This opportunity allows you to make a significant impact within Medtronic's Diabetes business. With the announcement of the intention to separate the Diabetes division to drive future growth and innovation, you will have the chance to operate with increased speed and agility. This move is expected to unlock potential and drive innovation to enhance the impact on patient care. Key Responsibilities: - Design, develop, and maintain scalable and efficient ETL pipelines using PySpark. - Collaborate with data scientists and analysts to understand data requirements and deliver high-quality datasets. - Implement data quality checks, ensure data integrity, and troubleshoot data pipeline issues. - Stay updated with the latest trends and technologies in big data and distributed computing. Required Knowledge and Experience: - Bachelor's degree in computer science, Engineering, or related field. - 4-5 years of experience in data engineering with a focus on PySpark. - Proficiency in Python and Spark, strong coding and debugging skills. - Strong knowledge of SQL and experience with relational databases. - Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform. 
- Experience with data warehousing solutions like Redshift, Snowflake, Databricks, or Google BigQuery. - Familiarity with data lake architectures, big data technologies, and data storage solutions. - Excellent problem-solving skills and ability to troubleshoot complex issues. - Strong communication and collaboration skills. Preferred Skills: - Experience with Databricks and orchestration tools like Apache Airflow or AWS Step Functions. - Knowledge of machine learning workflows and data security best practices. - Familiarity with streaming data platforms, real-time data processing, and CI/CD pipelines. Medtronic offers a competitive salary and flexible benefits package. The company values its employees and provides resources and compensation plans to support their growth at every career stage. This position is eligible for the Medtronic Incentive Plan (MIP). About Medtronic: Medtronic is a global healthcare technology leader committed to addressing the most challenging health problems facing humanity. With a mission to alleviate pain, restore health, and extend life, the company unites a team of over 95,000 passionate individuals who work tirelessly to generate real solutions for real people through engineering and innovation.
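The extract-transform-load shape of such a pipeline can be sketched in plain Python; in the actual role each stage would operate on Spark DataFrames via PySpark, and the device records below are invented for the example.

```python
# Pure-Python stand-in for the PySpark pipeline shape described above; in the
# real pipeline each stage would read from and write to distributed storage.

def extract():
    # Source records; in practice read from cloud storage or a streaming source.
    return [
        {"device_id": "d1", "glucose": 110, "ts": "2024-01-01T00:00"},
        {"device_id": "d2", "glucose": None, "ts": "2024-01-01T00:05"},
        {"device_id": "d1", "glucose": 145, "ts": "2024-01-01T00:10"},
    ]

def transform(rows):
    # Quality check: drop records failing completeness, then aggregate per device.
    clean = [r for r in rows if r["glucose"] is not None]
    agg = {}
    for r in clean:
        stats = agg.setdefault(r["device_id"], {"n": 0, "total": 0})
        stats["n"] += 1
        stats["total"] += r["glucose"]
    return {dev: s["total"] / s["n"] for dev, s in agg.items()}

def load(result, sink):
    # In practice: write to a warehouse table; here, an in-memory dict.
    sink.update(result)

warehouse = {}
load(transform(extract()), warehouse)
```

Keeping the three stages as separate functions is what makes the pipeline testable stage-by-stage and easy to schedule from an orchestrator such as Airflow.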
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer in Pune, your responsibilities will include designing, implementing, and optimizing end-to-end data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data. You will be developing data pipelines to extract and transform data in near real-time using cloud-native technologies. Implementing data validation and quality checks to ensure accuracy and consistency will also be part of your role. Monitoring system performance, troubleshooting issues, and implementing optimizations to enhance reliability and efficiency will be crucial tasks. Collaboration with business users, analysts, and other stakeholders to understand data requirements and deliver tailored solutions is an essential aspect of this position. Documentation of technical designs, workflows, and best practices to facilitate knowledge sharing and maintain system documentation will be expected. Providing technical guidance and support to team members and stakeholders as needed will also be a key responsibility. Desirable competencies for this role include having 8+ years of work experience, proficiency in writing complex SQL queries on MPP systems such as Snowflake or Redshift, experience in Databricks and Delta tables, data engineering experience with Spark, Scala, or Python, familiarity with the Microsoft Azure stack including Azure Storage Accounts, Data Factory, and Databricks, experience in Azure DevOps and CI/CD pipelines, working knowledge of Python, and being comfortable participating in 2-week sprint development cycles.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer at our organization, you will have the opportunity to work on building smart, automated testing solutions. We are seeking individuals who are passionate about data engineering and eager to contribute to our growing team. Ideally, you should hold a Bachelor's or Master's degree in Computer Science, IT, or an equivalent field, with 4 to 8 years of experience in building and deploying complex data pipelines and data solutions. For junior profiles, a similar educational background is preferred. Your responsibilities will include deploying data pipelines using technologies like Databricks, as well as demonstrating hands-on experience with Java and Databricks. Additionally, experience with visualization software such as Splunk (or alternatives like Grafana, Prometheus, PowerBI, Tableau) is desired. Proficiency in SQL and Java, along with hands-on experience in data modeling, is essential for this role. Familiarity with PySpark or Spark for managing distributed data is also expected. Knowledge of Splunk (SPL), data schemas (e.g., JSON/XML/Avro), and deploying services as containers (e.g., Docker, Kubernetes) will be beneficial. Experience working with cloud services, particularly Azure, is advantageous. Familiarity with streaming and/or batch storage technologies like Kafka and data quality management and monitoring will be considered a plus. Strong communication skills in English are essential for effective collaboration within our team. If you are excited about this opportunity and possess the required qualifications, we encourage you to connect with us by sending your updated CV to nivetha.s@eminds.ai. Join us and become a part of our exciting journey!
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
We are looking for a highly skilled Data Quality Manager with expertise in SQL, PySpark, Databricks, Snowflake, and CI/CD processes. As a Data Quality Manager, you will be responsible for designing, developing, and maintaining scalable data pipelines and infrastructure to support our data analytics and business intelligence requirements. You will collaborate closely with data scientists, analysts, and stakeholders to ensure the efficient processing and delivery of high-quality data. Your key responsibilities will include designing, developing, and optimizing data pipelines using PySpark, writing complex SQL queries for data extraction, transformation, and loading (ETL), working with Databricks to build and maintain collaborative and scalable data solutions, implementing and managing CI/CD processes for data pipeline deployments, collaborating with data scientists and business analysts to understand data requirements, ensuring data quality, integrity, and security, and monitoring and troubleshooting data pipelines and workflows. To qualify for this role, you should have a Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. You must possess proven experience with PySpark, advanced proficiency in SQL, hands-on experience with Databricks, a strong understanding of CI/CD pipelines, familiarity with cloud platforms, excellent problem-solving skills, attention to detail, and strong communication and collaboration skills. Preferred skills for this role include knowledge of data warehousing concepts and tools (e.g., Snowflake, Redshift) and familiarity with the kedro framework. Novartis is dedicated to helping people with diseases and their families, and we believe this requires a community of smart, passionate individuals like you. If you are ready to collaborate, support, and inspire each other to achieve breakthroughs that positively impact patients' lives, we invite you to join us in creating a brighter future together.
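Monitoring data quality across pipeline runs often reduces to comparing a fresh batch's distribution against a baseline; one standard metric for this is the Population Stability Index. A simplified pure-Python sketch follows, with illustrative bucket edges, data, and alert threshold.

```python
import math

def psi(expected, actual, buckets=((0, 50), (50, 100), (100, float("inf")))):
    """Population Stability Index between a baseline and a fresh batch.
    Bucket edges and the 0.2 alert threshold used below are illustrative."""
    def frac(values, lo, hi):
        n = sum(1 for v in values if lo <= v < hi)
        return max(n / len(values), 1e-6)  # floor avoids log(0) on empty buckets
    score = 0.0
    for lo, hi in buckets:
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        score += (a - e) * math.log(a / e)
    return score

baseline = [10, 20, 60, 70, 110, 120]      # distribution at pipeline sign-off
stable   = [15, 25, 55, 75, 105, 125]      # similar fresh batch: no alert
shifted  = [110, 120, 130, 140, 150, 160]  # drifted batch: should alert

low = psi(baseline, stable)
high = psi(baseline, shifted)
```

A common convention treats PSI below 0.1 as stable and above 0.2 as significant drift worth investigating, which is the kind of check that slots naturally into a CI/CD-managed pipeline.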
If you are interested in this opportunity or want to stay connected for future career opportunities at Novartis, please visit our talent community at https://talentnetwork.novartis.com/network. For more information about the benefits and rewards we offer, please refer to our handbook at https://www.novartis.com/careers/benefits-rewards. Novartis is committed to an inclusive work environment and diverse teams that reflect the patients and communities we serve.
Posted 1 week ago
6.0 years
0 Lacs
Greater Madurai Area
On-site
Kyndryl Data Science: Bengaluru, Gurugram, Hyderabad, Mumbai, Pune, and Chennai, India; Berlin, Germany. Posted on Jul 15, 2025. Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there.
You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills And Experience
- 6 years of experience as a Data Engineer
- 2 to 3 years of relevant experience with the ELK stack
- Expertise in data mining, data storage and Extract-Transform-Load (ETL) processes
- Experience in data pipeline development and tooling, e.g., Glue, Databricks, Synapse, or Dataproc
- Experience with both relational and NoSQL databases, e.g., PostgreSQL, DB2, MongoDB
- Excellent problem-solving, analytical, and critical thinking skills
- Ability to manage multiple projects simultaneously, while maintaining a high level of attention to detail
- Communication skills: must be able to communicate with both technical and non-technical colleagues, to derive technical requirements from business needs and problems
Preferred Skills And Experience
- Experience working as a Data Engineer and/or in cloud modernization
- Experience in Data Modelling, to create a conceptual model of how data is connected and how it will be used in business processes
- Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization
- Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate
- Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio
- Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology
Being You Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice.
This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address. Apply now See more open positions at Kyndryl
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Job Sanofi is a pioneering global healthcare company committed to advancing the miracles of science to enhance the well-being of individuals worldwide. Operating in over 100 countries, our dedicated team is focused on reshaping the landscape of medicine, transforming the seemingly impossible into reality. We strive to provide life-changing treatment options and life-saving vaccines, placing sustainability and social responsibility at the forefront of our aspirations. Embarking on an expansive digital transformation journey, Sanofi is committed to accelerating its data transformation and embracing artificial intelligence (AI) and machine learning (ML) solutions. This strategic initiative aims to expedite research and development, enhance manufacturing processes, elevate commercial performance, and deliver superior drugs and vaccines to patients faster, ultimately improving global health and saving lives. What you will be doing: As a dynamic Data Science practitioner, you are passionate about challenging the status quo and ensuring the development and impact of Sanofi's AI solutions for the patients of tomorrow. You are an influential leader with hands-on experience deploying AI/ML and GenAI solutions, applying state-of-the-art algorithms with technically robust lifecycle management. Your keen eye for improvement opportunities and demonstrated ability to deliver solutions in cross-functional environments make you an invaluable asset to our team. Main Responsibilities This role demands a dynamic and collaborative individual with a strong technical background, capable of leading the development and deployment of advanced machine learning while maintaining a focus on meeting business objectives and adhering to industry best practices. Key highlights include: Model Design and Development: Lead the development of custom Machine Learning (ML) and Large Language Model (LLM) components for both batch and stream processing-based AI/ML pipelines.
Create model components, including data ingestion, preprocessing, search and retrieval, Retrieval Augmented Generation (RAG), and fine-tuning, ensuring alignment with technical and business requirements. Develop and maintain full-stack applications that integrate ML models, focusing on both backend processes and frontend interfaces. Collaborative Development: Work closely with data engineers, MLOps engineers, software engineers, and other tech team members to collaboratively design, develop, and implement ML model solutions, fostering a cross-functional and innovative environment. Contribute to both backend and frontend development tasks to ensure seamless user experiences. Model Evaluation: Collaborate with other data science team members to develop, validate, and maintain robust evaluation solutions and tools for assessing model performance, accuracy, consistency, and reliability during development and User Acceptance Testing (UAT). Implement model optimizations to enhance system efficiency based on evaluation results. Model Deployment: Work closely with the MLOps team to facilitate the deployment of ML and Gen AI models into production environments, ensuring reliability, scalability, and seamless integration with existing systems. Contribute to the development and implementation of deployment strategies for ML and Gen AI models. Implement frontend interfaces to monitor and manage deployed models effectively. Internal Collaboration: Collaborate closely with product teams, business stakeholders, and data science team members to ensure the smooth integration of machine learning models into production systems. Foster strong communication channels and cooperation across different teams for successful project outcomes. Problem Solving: Proactively troubleshoot complex issues related to machine learning model development and data pipelines. Innovatively develop solutions to overcome challenges, contributing to continuous improvement in model performance and system efficiency.
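The retrieval step of a RAG pipeline like the one described can be sketched with bag-of-words cosine similarity standing in for an embedding model and vector store; the documents and query below are invented for the example.

```python
import math
from collections import Counter

# Toy retrieval step of a RAG pipeline: word-count vectors stand in for the
# learned embeddings a production system would use.

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Return the top-k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "dosage guidelines for the new vaccine",
    "quarterly financial report for investors",
    "manufacturing process controls and batch records",
]
top = retrieve("vaccine dosage instructions", docs)
# The retrieved passage would then be inserted into the LLM prompt, which is
# the generation half of Retrieval Augmented Generation.
```

Swapping `embed` for a real embedding model and the list scan for an approximate-nearest-neighbor index is what scales this sketch to production corpora.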
Key Functional Requirements & Qualifications Education and experience: PhD in mathematics, computer science, engineering, physics, statistics, economics, operations research or a related quantitative discipline with strong coding skills, OR Master’s Degree in a relevant domain with 3+ years of data science experience Technical skills: Disciplined AI/ML development, including CI/CD and orchestration Cloud and high-performance computing proficiency (AWS, GCP, Databricks, Apache Spark) Experience deploying models in agile, product-focused environments Full-stack AI application expertise preferred, including experience with front-end frameworks (e.g., React) and backend technologies Communication and collaboration: Excellent written and verbal communication skills A demonstrated ability to collaborate with cross-functional teams (e.g., business, product, and digital) Why Choose Us? Bring the miracles of science to life alongside a supportive, future-focused team Discover endless opportunities to grow your talent and drive your career, whether it’s through a promotion or lateral move, at home or internationally Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs Sanofi achieves its mission, in part, by offering rewarding career opportunities which inspire employee growth and development. Our 6 Recruitment Principles clarify our commitment to you and your role in driving your career. 
Our people are responsible for managing their career Sanofi posts all non-executive opportunities for our people We give priority to internal candidates Managers provide constructive feedback to all internal interviewed candidates We embrace diversity to hire the best talent We expect managers to encourage career moves across the whole organization Pursue Progress Discover Extraordinary Better is out there. Better medications, better outcomes, better science. But progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let’s be those people. Pursue Progress. Discover Extraordinary. Join Sanofi and step into a new era of science - where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what’s never been done before. You’ll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve. Ready to chase the miracles of science and improve people’s lives? Let’s Pursue Progress and Discover Extraordinary – together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status or other characteristics protected by law.
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role. You are a strategic thinker passionate about driving solutions in business architecture and data management. You have found the right team. As a Banking Book Product Owner Analyst in our Firmwide Finance Business Architecture (FFBA) team, you will spend each day defining, refining, and delivering set goals for our firm. You will partner with stakeholders across various lines of business and subject matter experts to understand products, data, source system flows, and business requirements related to Finance and Risk applications and infrastructure. As a Product Owner on the Business Architecture team, you will work closely with Line of Business stakeholders, data Subject Matter Experts, Consumers, and technology teams across Finance, Credit Risk & Treasury, and various Program Management teams. Your primary responsibilities will include prioritizing the traditional credit product book of work, developing roadmaps, and delivering on multiple projects and programs during monthly releases. Your expertise in data analysis and knowledge will be instrumental in identifying trends, optimizing processes, and driving business growth. As our organization grows, so does our reliance on insightful, data-driven decisions. You will dissect complex datasets to unearth actionable insights while possessing a strong understanding of data governance, data quality, and data management principles. Job Responsibilities Utilize Agile Framework to write business requirements in the form of user stories to enhance data, test execution, reporting automation, and digital analytics toolsets. Engage with development teams to translate business needs into technical specifications, ensuring acceptance criteria are met. Drive adherence to product and Release Management standards and operating models. 
Manage the release plan, including scope, milestones, sourcing requirements, test strategy, execution, and stakeholder activities. Collaborate with lines of business to understand products, data capture methods, and strategic data sourcing into a cloud-based big data architecture. Identify and implement solutions for business process improvements, creating supporting documentation and enhancing end-user experience. Collaborate with Implementation leads, Release managers, Project managers, and data SMEs to align data and system flows with Finance and Risk applications. Oversee the entire Software Development Life Cycle (SDLC) from requirements gathering to testing and deployment, ensuring seamless integration and execution. Required Qualifications, Capabilities, And Skills Bachelor’s degree with 3+ years of experience in Project Management or Product Ownership, with a focus on process re-engineering. Proven experience as a Product Owner with a strong understanding of agile principles and delivering complex programs. Strong analytical and problem-solving abilities, with the capacity to quickly assimilate business and technical knowledge. Experience in Finance, Risk, or Operations as a Product Lead. Familiarity with Traditional Credit Products and Liquidity and Credit reporting data. Highly responsible, detail-oriented, and able to work with tight deadlines. Excellent written and verbal communication skills, with the ability to articulate complex concepts to diverse audiences. Strong organizational abilities to manage multiple work streams concurrently, maintaining sound judgment and a risk mindset. Solid understanding of financial and regulatory reporting processes. Energetic, adaptable, self-motivated, and effective under pressure. Basic knowledge of cloud technologies (e.g., AWS). 
Preferred Qualifications, Capabilities, And Skills Knowledge of JIRA, SQL, the Microsoft suite of applications, Databricks and data visualization/analytical tools (Tableau, Alteryx, Python) is a plus. Knowledge and experience of Traditional Credit Products (Loans, Deposits, Cash, etc.) and Trading Products (Derivatives and Securities) a plus. About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success. 
Global Finance & Business Management works to strategically manage capital, drive growth and efficiencies, maintain financial reporting and proactively manage risk. By providing information, analysis and recommendations to improve results and drive decisions, teams ensure the company can navigate all types of market conditions while protecting our fortress balance sheet.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Manager - Data Engineering Career Level - E Introduction To Role Join our Commercial IT Data Analytics & AI (DAAI) team as a Product Quality Leader, where you will play a pivotal role in ensuring the quality and stability of our data platforms built on AWS services, Databricks, and SnapLogic. Based in Chennai GITC, you will drive the quality engineering strategy, lead a team of quality engineers, and contribute to the overall success of our data platform. Accountabilities As the Product Quality Team Leader for data platforms, your key accountabilities will include leadership and mentorship, quality engineering standards, collaboration, technical expertise, and innovation and process improvement. You will lead the design, development, and maintenance of scalable and secure data infrastructure and tools to support the data analytics and data science teams. You will also develop and implement data and data engineering quality assurance strategies and plans tailored to data product build and operations. Essential Skills/Experience Bachelor’s degree or equivalent in Computer Engineering, Computer Science, or a related field Proven experience in a product quality engineering or similar role, with at least 3 years of experience in managing and leading a team. Experience working within a quality and compliance environment and applying policies, procedures, and guidelines A broad understanding of cloud architecture (preferably in AWS) Strong experience in Databricks, PySpark and the AWS suite of applications (like S3, Redshift, Lambda, Glue, EMR). Proficiency in programming languages such as Python Experienced in Agile Development techniques and methodologies. Solid understanding of data modelling, ETL processes and data warehousing concepts Excellent communication and leadership skills, with the ability to collaborate effectively with technical and non-technical stakeholders. 
Experience with big data technologies such as Hadoop or Spark. Certification in AWS or Databricks. Prior significant experience working in a Pharmaceutical or Healthcare industry IT environment. When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, we are committed to disrupting an industry and changing lives. Our work has a direct impact on patients, transforming our ability to develop life-changing medicines. We empower the business to perform at its peak and lead a new way of working, combining cutting-edge science with leading digital technology platforms and data. We dare to lead, applying our problem-solving mindset to identify and tackle opportunities across the whole enterprise. Our spirit of experimentation is lived every day through our events like hackathons. We enable AstraZeneca to perform at its peak by delivering world-class technology and data solutions. Are you ready to be part of a team that has the backing to innovate, disrupt an industry and change lives? Apply now to join us on this exciting journey! Date Posted 15-Jul-2025 Closing Date 20-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. 
We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Responsibilities Lead 4-8 data scientists to deliver ML capabilities within a Databricks-Azure platform Guide delivery of complex ML systems that align with product and platform goals Balance scientific rigor with practical engineering Define model lifecycle, tooling, and architectural direction Requirements Skills & Experience Advanced ML: Supervised/unsupervised modeling, time-series, interpretability, MLflow, Spark, TensorFlow/PyTorch Engineering: Feature pipelines, model serving, CI/CD, production deployment Leadership: Mentorship, architectural alignment across subsystems, experimentation strategy Communication: Translate ML results into business impact Benefits What you get Best in class salary: We hire only the best, and we pay accordingly Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day This is a contract role based in Abu Dhabi. If relocation from India is required, the company will cover travel and accommodation expenses in addition to your salary About Us Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies. We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams. 
Here's a quick guide to getting to know us better: Watch our CEO, Hardik Jagda, tell you all about Proximity Read about Proximity's values and meet some of our Proxonauts here Explore our website, blog, and the design wing — Studio Proximity Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Responsibilities Act as both a hands-on tech lead and product manager Deliver data/ML platforms and pipelines in a Databricks-Azure environment Lead a small delivery team and coordinate with enabling teams for product, architecture, and data science Translate business needs into product strategy and technical delivery with a platform-first mindset Requirements Skills & Experience Technical: Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, Azure Product: Agile delivery, discovery cycles, outcome-focused planning, trunk-based development Collaboration: Able to coach engineers, work with cross-functional teams, and drive self-service platforms Communication: Clear in articulating decisions, roadmap, and priorities Benefits What you get Best in class salary: We hire only the best, and we pay accordingly Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day This is a contract role based in Abu Dhabi. If relocation from India is required, the company will cover travel and accommodation expenses in addition to your salary About Us Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies. We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. 
You'll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams. Here's a quick guide to getting to know us better: Watch our CEO, Hardik Jagda, tell you all about Proximity Read about Proximity's values and meet some of our Proxonauts here Explore our website, blog, and the design wing — Studio Proximity Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Location HYDERABAD OFFICE INDIA Job Description Are you looking to take your career to the next level? We’re looking for a DevOps Engineer to join our Data & Analytics Core Data Lake Platform engineering team. We are searching for self-motivated candidates, who will leverage modern Agile and DevOps practices to design, develop, test and deploy IT systems and applications, delivering global projects in multinational teams. P&G Core Data Lake Platform is a central component of the P&G data and analytics ecosystem. The CDL Platform is used to deliver a broad scope of digital products and frameworks used by data engineers and business analysts. In this role you will have an opportunity to leverage your data engineering skillset to deliver solutions enriching data cataloging and data discoverability for our users. With our approach to building solutions that fit the scale at which the P&G business operates, we combine data engineering best practices (Databricks) with modern software engineering standards (Azure, DevOps, SRE) to deliver value for P&G. RESPONSIBILITIES: Writing and testing code for Data & Analytics platform applications and building E2E cloud native (Azure) solutions. Engineering applications throughout their entire lifecycle, from development and deployment through upgrade and replacement/termination Ensuring that development and architecture adhere to established standards, including modern software engineering practices (CI/CD, Agile, DevOps) Collaborate with internal technical specialists and vendors to develop the final product to improve overall performance and efficiency and/or to enable adoption of new business processes. Qualifications Job Qualifications Bachelor’s degree in computer science or related technical field. 
8+ years of experience working as Software/Data Engineer (with focus on developing in Python, PySpark, Databricks, ADF) Experience leveraging modern software engineering practices (code standards, Gitflow, automated testing, CICD, DevOps) Experience working with Cloud infrastructure (Azure preferred) Strong verbal, written, and interpersonal communication skills. A strong desire to produce high quality software through cross functional collaboration, testing, code reviews, and other best practices. YOU ALSO SHOULD HAVE: Strong written and verbal English communication skills to influence others Demonstrated use of data and tools Ability to handle multiple priorities Ability to work collaboratively across different functions and geographies Job Schedule Full time Job Number R000134774 Job Segmentation Experienced Professionals (Job Segmentation)
Posted 1 week ago