4.0 - 9.0 years
10 - 20 Lacs
Pune
Hybrid
Hi, Greetings! This is regarding a job opportunity for the position of Data Modeller with a US-based MNC in the healthcare domain. This opportunity is on the direct payroll of the US-based MNC.

Job Location: Pune, Mundhwa
Mode of work: Hybrid (3 days work from office)
Shift timings: 1 pm to 10 pm

About the Company: The global MNC is a mission-driven startup transforming the healthcare payer industry. Its secure, cloud-enabled platform empowers health insurers to unlock siloed data, improve patient outcomes, and reduce healthcare costs. With deep expertise in cloud-enabled technologies and knowledge of the healthcare industry, the company has built an innovative data integration and management platform that gives healthcare payers access to data that has historically been siloed and inaccessible. As a result, these payers can ingest and manage all the information they need to transform their business, supporting their analytical, operational, and financial needs through the platform. Since its founding in 2017, the company has built a highly successful SaaS business, raising more than $80 million from leading VC firms with deep expertise in the healthcare and technology industries. We are solving massive, complex problems in an industry ready for disruption. We're building powerful momentum and would love for you to be a part of it!

Interview Process (5 rounds): 4 rounds of technical interview, 1 round of HR or fitment discussion.

Job Description: Data Modeller

About the Role: We're seeking a Data Modeller to join our global data modeling team. You'll play a key role in translating business requirements into conceptual and logical data models that support both operational and analytical use cases. This is a high-impact opportunity to work with cutting-edge technologies and contribute to the evolution of healthcare data platforms.

What You'll Do
- Design and build conceptual and logical data models aligned with enterprise architecture and healthcare standards.
- Perform data profiling and apply data integrity principles using SQL (see the sketch after this posting).
- Collaborate with cross-functional teams to ensure models meet client and business needs.
- Use tools like Erwin, ER/Studio, DBT, or similar for enterprise data modeling.
- Maintain metadata, business glossaries, and data dictionaries.
- Support client implementation teams with data model expertise.

What We're Looking For
- 2+ years of experience in data modeling and cloud-based data engineering.
- Proficiency in enterprise data modeling tools (Erwin, ER/Studio, DBSchema).
- Experience with Databricks, Snowflake, and data lakehouse architectures.
- Strong SQL skills and familiarity with schema evolution and data versioning.
- Deep understanding of healthcare data domains (Claims, Enrollment, Provider, FHIR, HL7, etc.).
- Excellent collaboration and communication skills.

If you have any queries, please feel free to reach out via the email, WhatsApp, or phone number below.

Thanks & Regards,
Priyanka Das
Email: priyanka.das@dctinc.com
Contact Number: 74399 37568
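As context for the data-profiling duty above, here is a minimal sketch of profiling a table with PySpark in a Databricks notebook. The table name `claims_raw` and its columns are hypothetical, chosen only to illustrate the kind of integrity checks a modeller runs before drafting a logical model.

```python
# Minimal data-profiling sketch, assuming a hypothetical claims_raw table.
# In Databricks notebooks a SparkSession is predefined as `spark`.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.table("claims_raw")  # hypothetical source table

# Overall row count plus null counts per column
exprs = [F.count(F.lit(1)).alias("row_count")]
exprs += [F.sum(F.col(c).isNull().cast("int")).alias(f"{c}_null_count")
          for c in df.columns]
df.agg(*exprs).show()

# Distinct counts, useful for spotting candidate keys in a logical model
for c in df.columns:
    print(c, df.select(c).distinct().count())
```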
Posted 1 week ago
3.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Databricks Developer
Location: Pune, Maharashtra (Hybrid, 2 to 3 days from office)
Experience: 3 to 10 Years
Employment Type: Full-Time / Contract
Notice Period: Immediate to 30 Days Preferred

About the Role: We are seeking a highly skilled and motivated Databricks Developer to join our dynamic data engineering team in Pune. As a Databricks expert, you will be responsible for designing, developing, and maintaining robust big data pipelines and solutions using Databricks, Spark, and other modern data technologies.

Key Responsibilities:
- Design and develop scalable data pipelines using Databricks, Apache Spark, and Delta Lake (a minimal example follows this posting).
- Implement ETL/ELT workflows for processing large volumes of structured and unstructured data.
- Collaborate with data scientists, analysts, and stakeholders to define data models and deliver business insights.
- Optimize queries and performance on big data platforms.
- Integrate data from various sources including Azure Data Lake, SQL, and NoSQL systems.
- Build reusable code and libraries for future use in the data pipeline framework.
- Maintain and enhance data governance, quality, and security standards.
- Troubleshoot and resolve technical issues and support production pipelines.

Required Skills & Experience:
- 3 to 10 years of experience in data engineering or big data development.
- Strong hands-on experience in Databricks and Apache Spark (PySpark preferred).
- Proficient in Python and/or Scala for data processing.
- Solid understanding of data warehousing concepts, data lakes, and data lakehouse architecture.
- Experience working with Azure Cloud services (ADF, ADLS, Synapse) or AWS Glue/EMR is a plus.
- Strong experience in SQL and performance tuning of queries.
- Experience in CI/CD integration and version control (e.g., Git, Azure DevOps).
- Good understanding of Delta Lake, MLflow, and Notebooks.

Nice to Have:
- Databricks certification (Developer or Data Engineer Associate/Professional).
- Knowledge of streaming frameworks like Kafka or Structured Streaming.
- Exposure to Airflow, Azure Data Factory, or similar orchestration tools.
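To make the first responsibility concrete, here is a hedged sketch of a batch ETL step on Databricks: raw JSON lands in a data lake, gets cleaned, and is written out as a partitioned Delta table. The mount path, column names, and target table are assumptions for illustration, not the employer's actual schema.

```python
# Batch ETL sketch: raw JSON -> cleaned Delta table (names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

raw = (spark.read.format("json")
       .load("/mnt/datalake/raw/events/"))        # hypothetical ADLS mount

cleaned = (raw
           .withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])          # basic de-duplication
           .filter(F.col("event_id").isNotNull()))

(cleaned.write.format("delta")
        .mode("append")
        .partitionBy("event_date")                # partition for query pruning
        .saveAsTable("silver.events"))            # Delta Lake managed table
```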
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
About the company: NIIT is a leading Skills and Talent Development Corporation, building a manpower pool for global industry requirements. Founded in 1981 to help the nascent IT industry overcome its human resource challenges, the company today ranks among the world's leading training companies owing to its vast yet comprehensive array of talent development programs. With a footprint across 40 nations, NIIT offers training and development solutions to Individuals, Enterprises, and Institutions.

Link to our LinkedIn page: https://www.linkedin.com/company/niit-limited/
Link to our website: https://www.niit.com/en/learning-outsourcing/

Position: Power BI Consultant
Duration: 6-month contract

Responsibilities

🔹 Power BI (Advanced)
- Strong in dashboard development and data modelling.
- Manages datasets, refresh schedules, workspace roles, and permissions in Power BI Service.
- Excellent eye for dashboard design for senior leaders (layout, user experience, colour palette).
- Minimum 5 years of development experience.

🔹 SQL (Advanced)
- Writes and optimises queries for Power BI data ingestion and performance.
- Ability to create views and stored procedures (see the sketch after this posting).
- Load data into SQL from various sources.
- Validate raw data; familiar with SQL Server Management Studio (SSMS).

🔹 Databricks
- Databricks certification.
- Workspace and account management, pipeline management.
- Experience in implementation and/or usage.

🔹 Excel (Intermediate to Advanced)
- Proficient with pivot tables, XLOOKUP, INDEX-MATCH, data validation, and cleansing.

🔹 Interpersonal skills
- Maintain a positive working relationship with stakeholders.
- A self-starter who can work independently to achieve the desired objectives.
- Craft and send out high-quality communication emails and PowerPoint packs to leaders.
- Can manage multiple urgent priority items from leaders.
- May require ad hoc additional hours during busy periods.

NIIT is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other protected characteristic.

Thanks & Regards
NIIT GS Team
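As a rough illustration of the SQL tasks above (loading data and creating a view for Power BI to ingest), here is a hedged sketch driven from Python via pyodbc. The server, database, credentials, and table names are all assumptions for the example, and secrets would normally come from a vault rather than a literal string.

```python
# Sketch: load rows into SQL Server and publish a view for Power BI.
# Connection details and object names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=reporting;"
    "UID=etl_user;PWD=<secret>"  # placeholder; use a secret store in practice
)
cur = conn.cursor()

# Load a batch of rows into a staging table
rows = [("2024-01-01", 125.50), ("2024-01-02", 98.10)]
cur.executemany("INSERT INTO stg_sales (sale_date, amount) VALUES (?, ?)", rows)

# Create (or replace) the view that the Power BI dataset reads
cur.execute("""
CREATE OR ALTER VIEW dbo.v_daily_sales AS
SELECT sale_date, SUM(amount) AS total_amount
FROM stg_sales
GROUP BY sale_date
""")
conn.commit()
```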
Posted 1 week ago
5.0 years
0 Lacs
Kochi, Kerala, India
On-site
Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference: nitin.patil@ust.com. Act fast for immediate attention! ⏳📩

5+ years of experience.

Roles and Responsibilities
- Design, develop, and maintain scalable data pipelines using Spark (PySpark or Spark with Scala).
- Build data ingestion and transformation frameworks for structured and unstructured data sources.
- Collaborate with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions.
- Work with large volumes of data and ensure quality, integrity, and consistency.
- Optimize data workflows for performance, scalability, and cost efficiency on cloud platforms (AWS, Azure, or GCP).
- Implement data quality checks and automation for ETL/ELT pipelines (a sketch follows this posting).
- Monitor and troubleshoot data issues in production and perform root cause analysis.
- Document technical processes, system designs, and operational procedures.

Must-Have Skills
- 3+ years of experience as a Data Engineer or similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Experience with version control systems like Git.

Good-to-Have Skills
- Experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools (Docker/Kubernetes).
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.
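For the data-quality responsibility above, here is a minimal sketch of a quality gate inside a PySpark pipeline. The staging table, columns, and checks are hypothetical; real pipelines would typically externalize rules and thresholds.

```python
# Illustrative data-quality gate for an ETL pipeline (names are made up).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("staging.orders")  # hypothetical staging table

checks = {
    "non_empty": df.count() > 0,
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
    "valid_amounts": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Halting on failure keeps bad data out of downstream tables
    raise ValueError(f"Data quality checks failed: {failed}")
```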
Posted 1 week ago
5.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference: nitin.patil@ust.com. Act fast for immediate attention! ⏳📩

5+ years of experience.

Roles and Responsibilities
- Design, develop, and maintain scalable data pipelines using Spark (PySpark or Spark with Scala).
- Build data ingestion and transformation frameworks for structured and unstructured data sources.
- Collaborate with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions.
- Work with large volumes of data and ensure quality, integrity, and consistency.
- Optimize data workflows for performance, scalability, and cost efficiency on cloud platforms (AWS, Azure, or GCP).
- Implement data quality checks and automation for ETL/ELT pipelines.
- Monitor and troubleshoot data issues in production and perform root cause analysis.
- Document technical processes, system designs, and operational procedures.

Must-Have Skills
- 3+ years of experience as a Data Engineer or similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Experience with version control systems like Git.

Good-to-Have Skills
- Experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools (Docker/Kubernetes).
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.
Posted 1 week ago
0.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
Senior Data Engineer - Azure Synapse (5 to 8 Years of Experience)
Location: Ahmedabad (India)
Work mode: Hybrid
Job type: Full-time

Must Have: Azure Synapse and deep Azure data engineering experience; solid understanding of data platform, infrastructure, and security.

Job Description
As a Senior Data Engineer, you will provide expertise in building data technology and modern data platform solutions in Azure. You will play a crucial role in developing complex data solutions, supporting our Data Architects, Presales Architects, and Cloud Engineers, and mentoring junior team members.

Key Responsibilities
- Design and build complex data solutions leveraging Azure data services.
- Support and collaborate with Data Architects, Presales Architects, and Cloud Engineers to deliver top-notch solutions.
- Mentor and guide junior team members, fostering a culture of continuous learning and improvement.
- Conduct R&D to stay ahead of industry trends and integrate new technologies.
- Develop and enforce best practices in data engineering and platform development.

What We Need From You
- Substantial experience in data engineering and Azure data services.
- Strong analytical and problem-solving skills.
- Proven experience working with a variety of customers.
- In-depth knowledge of engineering practices and processes.
- Expertise in developing data pipelines, working with APIs, multiple file formats, and databases.

Specific Technologies and Disciplines
- Fabric (nice to have, not essential)
- Synapse
- ADLS Gen2
- Databricks
- Azure Data Factory (including metadata-driven pipelines; a rough sketch of the pattern follows this posting)
- Azure SQL
- Key Vault
- Azure Security (e.g., use of private endpoints)
- CI/CD, especially within Azure DevOps
- Agile delivery

Job Type: Full-time
Pay: Up to ₹3,400,000.00 per year
Schedule: Day shift, Monday to Friday

Application Question(s):
- What is your notice period (in days)?
- What is your current annual salary?
- What is your expected annual salary?
- In which city do you currently live?

Experience:
- Azure data services: 6 years (Required)
- Azure Synapse: 6 years (Required)
- Databricks: 6 years (Required)
- Azure Data Factory: 6 years (Required)

Location: Ahmedabad, Gujarat (Required)
Work Location: In person
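Metadata-driven pipelines in Azure Data Factory are authored in the ADF designer itself, but the underlying pattern is easy to sketch: a control table lists the sources, and a loop ingests each one. Below is a hedged PySpark analogue of that pattern; the control table, its columns, and the paths are assumptions for illustration only.

```python
# Rough PySpark analogue of a metadata-driven ingestion pattern.
# Control-table and target names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical control table: one row per source (path, format, target table).
control = spark.table("config.ingestion_metadata").collect()

for row in control:
    df = spark.read.format(row["source_format"]).load(row["source_path"])
    (df.write.format("delta")
       .mode("overwrite")
       .saveAsTable(row["target_table"]))
    print(f"Loaded {row['source_path']} -> {row['target_table']}")
```

Adding a new source then becomes a row in the control table rather than a new pipeline, which is the main appeal of the metadata-driven approach.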
Posted 1 week ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference: nitin.patil@ust.com. Act fast for immediate attention! ⏳📩

5+ years of experience.

Roles and Responsibilities
- Design, develop, and maintain scalable data pipelines using Spark (PySpark or Spark with Scala).
- Build data ingestion and transformation frameworks for structured and unstructured data sources.
- Collaborate with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions.
- Work with large volumes of data and ensure quality, integrity, and consistency.
- Optimize data workflows for performance, scalability, and cost efficiency on cloud platforms (AWS, Azure, or GCP).
- Implement data quality checks and automation for ETL/ELT pipelines.
- Monitor and troubleshoot data issues in production and perform root cause analysis.
- Document technical processes, system designs, and operational procedures.

Must-Have Skills
- 3+ years of experience as a Data Engineer or similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Experience with version control systems like Git.

Good-to-Have Skills
- Experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools (Docker/Kubernetes).
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow.

As a QA Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Cloud/Big Data automation, Python, PySpark, SQL, Hive, Databricks, Airflow, performance testing, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins, and Bitbucket/GitHub.

Responsibilities
- Develop, troubleshoot, debug, and make application enhancements, and create an automation framework leveraging Python/PySpark and SQL/NoSQL as the core development languages.
- Develop and build scalable automation frameworks and test suites working across technologies (see the validation-test sketch after this posting).
- Develop Gen AI automation solutions using AI/ML frameworks.
- Develop, document, and maintain test plans, procedures, and scripts, and perform well-defined product-level integration tests.
- Implement, execute, and debug automated test scripts using various technologies and tools.
- Perform manual testing, the scope of which will encompass all functionalities of services, as a prequel to automation.
- Work closely with other quality and development engineers to build, evolve, and maintain a scalable continuous build and deployment pipeline.
- Deploy application components using CI/CD pipelines.
- Build utilities for monitoring and automating repetitive functions.
- Collaborate with Agile cross-functional teams, internal and external clients, including Operations, Infrastructure, and Tech Ops.
- Research and evaluate a variety of software products and development tools.
- Provide technical guidance and support to colleagues, such as code reviews, testing, and software documentation, as required.

Qualifications
- 6-10 years of applicable software engineering experience.
- Strong fundamentals with experience in Cloud/Big Data automation testing, Python, PySpark, machine learning, SQL, Hive, Databricks, Airflow, UI and service testing, and performance testing.
- Must have PySpark/Python experience, proficient enough to create a scalable big data automation testing framework in the cloud.
- Hands-on experience in test automation, creating automation frameworks in the cloud, and API microservice automation.
- Must have SQL knowledge and knowledge of relational databases, preferably PostgreSQL.
- Must have experience in cloud technologies, preferably Microsoft Azure.
- Must have experience in UI and services functional validations.
- Good to have experience in AI/ML and validation using AI/ML frameworks.
- Good to have experience in Selenium, TestNG, and Java.
- Good to have experience in performance testing using LoadRunner or JMeter.
- Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, and Docker.
- Good to have experience in the retail domain.
- Solid understanding of software testing principles, methodologies, and best practices.
- Strong analytical and problem-solving skills and the ability to learn and apply new technologies quickly.
- Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business.
- Minimum B.S. degree in Computer Science, Computer Engineering, or a related field.

Additional Information

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
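As a flavor of the data-automation testing this role describes, here is a hedged sketch of automated validation tests using pytest with PySpark. The source path, table names, and expected columns are hypothetical; a real framework would parameterize these and report results to a CI/CD pipeline.

```python
# Sketch of automated data-validation tests (names are hypothetical).
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Small local session; CI would point at a real cluster instead
    return SparkSession.builder.master("local[2]").getOrCreate()

def test_target_matches_source_row_count(spark):
    source = spark.read.parquet("/data/source/sales/")   # hypothetical input
    target = spark.table("reporting.sales")              # hypothetical output
    assert source.count() == target.count()

def test_schema_has_required_columns(spark):
    target = spark.table("reporting.sales")
    for col in ("sale_id", "sale_date", "amount"):
        assert col in target.columns
```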
Posted 1 week ago
5.0 years
8 - 9 Lacs
Hyderābād
On-site
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.

As a Lead Software Engineer at JPMorgan Chase within Corporate Technology, you play a vital role in an agile team dedicated to enhancing, building, and delivering reliable, market-leading technology products in a secure, stable, and scalable manner. As a key technical contributor, you are tasked with implementing essential technology solutions across diverse technical domains, supporting various business functions to achieve the firm's strategic goals.

Job responsibilities
- Develop appropriate level designs and ensure consensus from peers where necessary.
- Collaborate with software engineers and cross-functional teams to design and implement deployment strategies using AWS Cloud and Databricks pipelines.
- Work with software engineers and teams to design, develop, test, and implement solutions within applications.
- Engage with technical experts, key stakeholders, and team members to resolve complex problems effectively.
- Understand leadership objectives and proactively address issues before they impact customers.
- Design, develop, and maintain robust data pipelines to ingest, process, and store large volumes of data from various sources.
- Implement ETL (Extract, Transform, Load) processes to ensure data quality and integrity using tools like Apache Spark and PySpark (a minimal upsert sketch follows this posting).
- Monitor and optimize the performance of data systems and pipelines.
- Implement best practices for data storage, retrieval, and processing.
- Maintain comprehensive documentation of data systems, processes, and workflows.
- Ensure compliance with data governance and security policies.

Required qualifications, capabilities, and skills
- Formal training or certification in software engineering concepts and 5+ years of applied experience.
- Formal training or certification in AWS/Databricks with 10+ years of applied experience.
- Expertise in programming languages such as Python and PySpark.
- 10+ years of professional experience in designing and implementing data pipelines in a cloud environment.
- Proficient in design, architecture, and development using AWS services, Databricks, Spark, Snowflake, etc.
- Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform.
- Familiarity with container and container orchestration technologies such as ECS, Kubernetes, and Docker.
- Ability to troubleshoot common Big Data and cloud technologies and issues.
- Practical cloud-native experience.

Preferred qualifications, capabilities, and skills
- 5+ years of experience in leading and developing data solutions in the AWS cloud.
- 10+ years of experience in building, implementing, and managing data pipelines using Databricks on Spark or similar cloud technologies.
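One common way to meet the "data quality and integrity" goal in incremental ETL is an upsert with Delta Lake's MERGE, sketched below under assumed names: the landing path, target table, and key column are illustrative only, and the cluster needs the Delta Lake library available.

```python
# Incremental upsert sketch using Delta Lake MERGE (names are hypothetical).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New batch of changed/added records
updates = spark.read.format("delta").load("/mnt/landing/customers_delta")

target = DeltaTable.forName(spark, "curated.customers")
(target.alias("t")
   .merge(updates.alias("s"), "t.customer_id = s.customer_id")
   .whenMatchedUpdateAll()      # refresh records that changed
   .whenNotMatchedInsertAll()   # insert records seen for the first time
   .execute())
```

MERGE keeps the target free of duplicates without a full rewrite, which is why it is a staple of lakehouse ETL.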
Posted 1 week ago
4.0 years
8 Lacs
Hyderābād
On-site
Job Title: Data Engineer
Experience: 4-6 Years
Location: Hyderabad (Onsite)
Employment Type: Contract (6 Months to 1 Year)
Industry: Information Technology / Data Engineering / Cloud
Working Model: Onsite

About the Role:
We are seeking a highly skilled and motivated Data Engineer with strong experience in building scalable data pipelines and working with modern cloud-based data ecosystems. The ideal candidate will have hands-on experience with Databricks, Apache Spark, and Google Cloud Platform (GCP), especially BigQuery, and a passion for driving data initiatives that power intelligent decision-making across the organization.

Key Responsibilities:
- Design, build, and optimize large-scale, reliable data pipelines using Databricks, GCP (BigQuery), and other modern tools.
- Perform advanced SQL querying, data wrangling, and complex data transformations to support analytics and machine learning initiatives (see the query sketch after this posting).
- Handle structured and semi-structured data, and apply Exploratory Data Analysis (EDA) techniques to derive insights.
- Work closely with data scientists to implement and deploy data models and pipelines into production environments.
- Ensure data quality, reliability, lineage, and security across the entire data pipeline lifecycle.
- Participate in data architecture discussions and influence decisions around data design and storage strategy.
- Contribute to data democratization by ensuring business users have access to clean and usable data.
- Create detailed documentation and reusable frameworks for data ingestion, transformation, and operational workflows.

Required Skills & Qualifications:
- 3-6+ years of experience in a Data Engineering role or similar.
- Strong expertise in Databricks and Apache Spark.
- Deep hands-on experience with GCP BigQuery, including performance tuning, partitioning, and optimization.
- Proficiency in advanced SQL, including complex joins, CTEs, window functions, and query optimization.
- Solid experience with Python for data manipulation and developing robust pipelines.
- Familiarity with data science concepts such as feature engineering, basic model implementation, and evaluation metrics.
- Knowledge of data profiling, EDA, and statistical analysis.
- Sound understanding of data structures, normalization/denormalization, and metadata management.
- Demonstrated understanding of how data impacts business decisions and product development.
- Strong problem-solving, communication, and collaboration skills.

Education:
Bachelor's degree in Computer Science, Information Systems, Engineering, Computer Applications, or a related technical discipline.

Preferred Qualifications (Nice to Have):
- Exposure to modern data orchestration tools (e.g., Airflow, dbt).
- Experience working in Agile environments and cross-functional teams.

For more information or to apply, contact us at:
- Email: career@munificentresource.in
- Call/WhatsApp: +91 9064363461
- Subject Line: Application for Data Engineer - Hyderabad

Job Types: Full-time, Contractual / Temporary
Contract length: 6 months
Pay: Up to ₹70,000.00 per month
Work Location: In person
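To illustrate the "advanced SQL" the role calls for, here is a hedged sketch of a CTE plus window function run through the BigQuery Python client. The project, dataset, and column names are assumptions; credentials are resolved via application-default authentication.

```python
# BigQuery sketch: latest order per customer via a CTE + ROW_NUMBER().
# Project/dataset/columns are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
WITH ranked AS (
  SELECT
    customer_id,
    order_ts,
    amount,
    ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
  FROM `my_project.sales.orders`
)
SELECT customer_id, order_ts, amount
FROM ranked
WHERE rn = 1  -- latest order per customer
"""

for row in client.query(query).result():
    print(row.customer_id, row.amount)
```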
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Telangana
On-site
About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.

About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

Position Details
Role: MLOps Engineer
Experience: 5-10 Years
Mandatory Skills: Python / MLOps / Docker and Kubernetes / FastAPI or Flask / CI/CD / Jenkins / Spark / SQL / RDB / Cosmos / Kafka / ADLS / API / Databricks
Other Skills: Azure / LLMOps / ADF / ETL
Location: Bangalore
Notice Period: Less than 60 days

Job Description:
We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges.

Responsibilities and expectations:
- Collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions.
- Experience of having deployed ML models to production.
- Create high-performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders (a minimal API sketch follows this posting).
- Integrate machine learning models seamlessly into existing production systems.
- Continuously monitor and evaluate model performance, and retrain the models automatically or periodically.
- Streamline existing ML pipelines to increase throughput.
- Identify and address security vulnerabilities in existing applications proactively.
- Design, develop, and implement machine learning models, preferably for insurance-related applications.
- Well versed with the Azure ecosystem.
- Knowledge of NLP and Generative AI techniques; relevant experience will be a plus.
- Knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch) will be a plus.
- Stay up to date on the latest advancements in machine learning and contribute to ongoing innovation within the team.

Why Chubb?
Join Chubb to be part of a leading global insurance company!
Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
- Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence.
- A great place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026.
- Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results.
- Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter.
- Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment.

Employee Benefits
Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include:
- Savings and investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances.
- Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, like Education Reimbursement Programs, certification programs, and access to global learning programs.
- Health and welfare benefits: We care about our employees' well-being in and out of work and have benefits like Employee Assistance Program (EAP), yearly free health campaigns, and comprehensive insurance benefits.

Application Process
Our recruitment process is designed to be transparent and inclusive.
- Step 1: Submit your application via the Chubb Careers Portal.
- Step 2: Engage with our recruitment team for an initial discussion.
- Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable).
- Step 4: Final interaction with Chubb leadership.

Join Us
With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.

Apply Now: Chubb External Careers
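For the real-time inferencing APIs mentioned in the responsibilities, here is a minimal FastAPI sketch, one of the two frameworks the posting names. The model artifact, feature names, and endpoint are assumptions for illustration, assuming a scikit-learn-style model saved with joblib.

```python
# Minimal real-time inference API sketch (model and features are hypothetical).
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.pkl")  # hypothetical pre-trained model artifact

class ClaimFeatures(BaseModel):
    age: int
    premium: float
    prior_claims: int

@app.post("/predict")
def predict(features: ClaimFeatures):
    # Pydantic validates the request body before it reaches the model
    x = [[features.age, features.premium, features.prior_claims]]
    return {"prediction": float(model.predict(x)[0])}

# Run locally with: uvicorn main:app --reload
```

In production such a service would typically be containerized with Docker and deployed behind Kubernetes, matching the mandatory skills listed above.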
Posted 1 week ago
2.0 - 5.0 years
6 - 9 Lacs
Hyderābād
On-site
Data Engineer, DT US PxE

The Data Engineer is an integral part of the technical application development team and is primarily responsible for analyzing, planning, designing, developing, and implementing Azure data engineering solutions to meet the strategic, usability, performance, reliability, control, and security requirements of data science processes. The role requires demonstrable knowledge in the areas of data engineering, AI/ML, data warehousing, and reporting applications, and a capacity for innovation.

Work you will do
A unique opportunity to be a part of a growing team that works on a premier unified data science analytics platform within Deloitte. You will be responsible for implementing, delivering, and supporting data engineering and AI/ML solutions to support the Deloitte US Member Firm.

Outcome-Driven Accountability
- Collaborate with business and IT leaders to develop and refine ideas for integrating predictive and prescriptive analytics within business processes, ensuring measurable customer and business outcomes.
- Decompose complex business problems into manageable components, facilitating the use of multiple analytic modeling methods for holistic and valuable solutions.
- Develop and refine prototypes and proofs of concepts, presenting results to business and IT leaders, and demonstrating the impact on customer needs and business outcomes.

Technical Leadership and Advocacy
- Engage in data analysis, generating and testing hypotheses, preparing and analyzing historical data, identifying patterns, and applying statistical methods to formulate solutions that deliver high-quality outcomes.
- Develop project plans, including resource needs and task dependencies, to meet project deliverables with a focus on incremental and iterative delivery.

Engineering Craftsmanship
- Participate in defining project scope, objectives, and quality controls for new projects, ensuring alignment with customer-centric engineering principles.
- Present and communicate project deliverable results, emphasizing the value delivered to customers and the business.

Customer-Centric Engineering
- Assist in recruiting and mentoring team members, fostering a culture of engineering craftsmanship and continuous learning.

Incremental and Iterative Delivery
- Stay abreast of changes in technology, leading new technology evaluations for predictive and statistical analytics, and advocating for innovative, lean, and feasible solutions.

Education: Bachelor's degree in Computer Science or Business Information Systems, or MCA or equivalent degree.

Qualifications:
- 2 to 5 years of advanced-level experience in Azure data engineering.
- Expertise in the development, deployment, and monitoring of ADF pipelines (using Visual Studio and browsers).
- Expertise in Azure Databricks programming using PySpark, SparkR, and SparkSQL, or Amazon EMR (Elastic MapReduce).
- Expertise in managing Azure storage (Azure Data Lake Gen2, Azure Blob Storage, Azure SQL Database), or Azure Blob Storage, Azure Data Lake Storage, Azure Synapse Analytics, and Azure Data Factory.
- Advanced programming skills in Python, R, and SQL (SQL for HANA, MS SQL).
- Hands-on experience with visualization tools (Tableau/Power BI).
- Hands-on experience with data science studios such as Dataiku, Azure ML Studio, and Amazon SageMaker.

The Team
Information Technology Services (ITS) helps power Deloitte's success. ITS drives Deloitte, which serves many of the world's largest, most respected organizations. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market.
Our reputation is built on a tradition of delivering with excellence. The ~3,000 professionals in ITS deliver services including:
- Security, risk & compliance
- Technology support
- Infrastructure
- Applications
- Relationship management
- Strategy
- Deployment
- PMO
- Financials
- Communications

Product Engineering (PxE)
Product Engineering (PxE) is the internal software and applications development team responsible for delivering leading-edge technologies to Deloitte professionals. Their broad portfolio includes web and mobile productivity tools that empower our people to log expenses, enter timesheets, book travel and more, anywhere, anytime. PxE enables our client service professionals through a comprehensive suite of applications across the business lines. In addition to application delivery, PxE offers full-scale design services, a robust mobile portfolio, cutting-edge analytics, and innovative custom development.

How you will grow
At Deloitte, we have invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities, including exposure to leaders, sponsors, coaches, and challenging assignments, to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources, including live classrooms, team-based learning, and eLearning. Deloitte University (DU): The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad office, is an extension of the DU in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship
Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.

Disclaimer: Please note that this description is subject to change basis business/engagement requirements and at the discretion of the management.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are.
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 302720
Posted 1 week ago
3.0 years
7 - 10 Lacs
Hyderābād
On-site
About the Job:
Sanofi is a pioneering global healthcare company committed to advancing the miracles of science to enhance the well-being of individuals worldwide. Operating in over 100 countries, our dedicated team is focused on reshaping the landscape of medicine, transforming the seemingly impossible into reality. We strive to provide life-changing treatment options and life-saving vaccines, placing sustainability and social responsibility at the forefront of our aspirations.

Embarking on an expansive digital transformation journey, Sanofi is committed to accelerating its data transformation and embracing artificial intelligence (AI) and machine learning (ML) solutions. This strategic initiative aims to expedite research and development, enhance manufacturing processes, elevate commercial performance, and deliver superior drugs and vaccines to patients faster, ultimately improving global health and saving lives.

What you will be doing:
As a dynamic Data Science practitioner, you are passionate about challenging the status quo and ensuring the development and impact of Sanofi's AI solutions for the patients of tomorrow. You are an influential leader with hands-on experience deploying AI/ML and GenAI solutions, applying state-of-the-art algorithms with technically robust lifecycle management. Your keen eye for improvement opportunities and demonstrated ability to deliver solutions in cross-functional environments make you an invaluable asset to our team.

Main Responsibilities:
This role demands a dynamic and collaborative individual with a strong technical background, capable of leading the development and deployment of advanced machine learning while maintaining a focus on meeting business objectives and adhering to industry best practices. Key highlights include:

- Model Design and Development: Lead the development of custom Machine Learning (ML) and Large Language Model (LLM) components for both batch and stream-processing-based AI/ML pipelines. Create model components, including data ingestion, preprocessing, search and retrieval, Retrieval Augmented Generation (RAG), and fine-tuning, ensuring alignment with technical and business requirements. Develop and maintain full-stack applications that integrate ML models, focusing on both backend processes and frontend interfaces.
- Collaborative Development: Work closely with data engineers, MLOps engineers, software engineers, and other tech team members to collaboratively design, develop, and implement ML model solutions, fostering a cross-functional and innovative environment. Contribute to both backend and frontend development tasks to ensure seamless user experiences.
- Model Evaluation: Collaborate with other data science team members to develop, validate, and maintain robust evaluation solutions and tools for assessing model performance, accuracy, consistency, and reliability during development and User Acceptance Testing (UAT); a minimal metrics sketch follows this posting. Implement model optimizations to enhance system efficiency based on evaluation results.
- Model Deployment: Work closely with the MLOps team to facilitate the deployment of ML and Gen AI models into production environments, ensuring reliability, scalability, and seamless integration with existing systems. Contribute to the development and implementation of deployment strategies for ML and Gen AI models. Implement frontend interfaces to monitor and manage deployed models effectively.
- Internal Collaboration: Collaborate closely with product teams, business stakeholders, and data science team members to ensure the smooth integration of machine learning models into production systems. Foster strong communication channels and cooperation across different teams for successful project outcomes.
- Problem Solving: Proactively troubleshoot complex issues related to machine learning model development and data pipelines. Innovatively develop solutions to overcome challenges, contributing to continuous improvement in model performance and system efficiency.

Key Functional Requirements & Qualifications:

Education and experience:
- PhD in mathematics, computer science, engineering, physics, statistics, economics, operations research, or a related quantitative discipline with strong coding skills, OR a Master's degree in a relevant domain with 3+ years of data science experience.

Technical skills:
- Disciplined AI/ML development, including CI/CD and orchestration.
- Cloud and high-performance computing proficiency (AWS, GCP, Databricks, Apache Spark).
- Experience deploying models in agile, product-focused environments.
- Full-stack AI application expertise preferred, including experience with front-end frameworks (e.g., React) and backend technologies.

Communication and collaboration:
- Excellent written and verbal communication.
- A demonstrated ability to collaborate with cross-functional teams (e.g., business, product, and digital).

Why Choose Us?
- Bring the miracles of science to life alongside a supportive, future-focused team.
- Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally.
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
- Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs.

Sanofi achieves its mission, in part, by offering rewarding career opportunities which inspire employee growth and development. Our 6 Recruitment Principles clarify our commitment to you and your role in driving your career:
- Our people are responsible for managing their career.
- Sanofi posts all non-executive opportunities for our people.
- We give priority to internal candidates.
- Managers provide constructive feedback to all internal interviewed candidates.
- We embrace diversity to hire the best talent.
- We expect managers to encourage career moves across the whole organization.

Pursue Progress. Discover Extraordinary.
Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one thing, a desire to make miracles happen. So, let's be those people.
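For the model-evaluation responsibility above, here is a hedged sketch of the kind of batch metrics step that might gate a UAT release, using scikit-learn. The labels are toy data and the function name is illustrative, not Sanofi's tooling.

```python
# Minimal batch model-evaluation sketch (toy data, hypothetical names).
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate(y_true, y_pred):
    """Compute core classification metrics used to compare model versions."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }

# Example with toy labels
print(evaluate([1, 0, 1, 1], [1, 0, 0, 1]))
```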
Posted 1 week ago
0 years
0 Lacs
Telangana
On-site
About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.

About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

Position Details
Job Title: Data Engineer
Function/Department: Technology
Location: Bhubaneswar
Employment Type: Full Time
Reports To: Mayank Gupta

Role Overview and Key Responsibilities
Must Have: IICS, Snowflake, SQL
Good to Have: Python, Databricks
Experience: 8 to 10 years

Why Join Us?
- Be at the forefront of digital transformation in the insurance industry.
- Lead impactful initiatives that simplify claims processing and enhance customer satisfaction.
- Work alongside experienced professionals in a collaborative, innovation-driven environment.

Why Chubb?
Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
- Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence.
- A great place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026.
- Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results.
- Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter.
- Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment.

Employee Benefits
Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth.
Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling like Education Reimbursement Programs, Certification programs and access to global learning programs. Health and Welfare Benefits: We care about our employees’ well-being in and out of work and have benefits like Hybrid Work Environment, Employee Assistance Program (EAP), Yearly Free Health campaigns and comprehensive Insurance benefits. Application Process Our recruitment process is designed to be transparent, and inclusive. Step 1 : Submit your application via the Chubb Careers Portal. Step 2 : Engage with our recruitment team for an initial discussion. Step 3 : Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable). Step 4 : Final interaction with Chubb leadership. Join Us With you Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion , and are ready to make a difference, we invite you to be part of Chubb India’s journey . Apply Now : https://www.chubb.com/emea-careers/ About Chubb Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com . About Chubb India At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning. 
Position Details Job Title : Data Engineer Function/Department : Technology Location : Bubaneshwar Employment Type : Full Time Reports To : Mayank Gupta Role Overview Key Responsibilities Must Have – IICS, Snowflake, SQL Good to Have – Python, Databricks Exp :: 8 to 10 Yrs Why Join Us? Be at the forefront of digital transformation in the insurance industry. Lead impactful initiatives that simplify claims processing and enhance customer satisfaction. Work alongside experienced professionals in a collaborative, innovation-driven environment. Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results. Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence A Great Place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026 Laser focus on excellence : At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results Start-Up Culture : Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter Growth and success : As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment Employee Benefits Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling like Education Reimbursement Programs, Certification programs and access to global learning programs. Health and Welfare Benefits: We care about our employees’ well-being in and out of work and have benefits like Hybrid Work Environment, Employee Assistance Program (EAP), Yearly Free Health campaigns and comprehensive Insurance benefits. Application Process Our recruitment process is designed to be transparent, and inclusive. Step 1 : Submit your application via the Chubb Careers Portal. Step 2 : Engage with our recruitment team for an initial discussion. Step 3 : Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable). Step 4 : Final interaction with Chubb leadership. Join Us With you Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. 
If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.
Apply Now: https://www.chubb.com/emea-careers/
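To give a sense of the SQL-on-Snowflake work this role names as a must-have, here is a minimal, hedged sketch of a data-quality check run from Python with the snowflake-connector-python library. Account credentials, warehouse, database, and table/column names are all placeholders for illustration, not references to any real Chubb system.

import snowflake.connector

# Hypothetical connection details; in practice these come from a secrets store.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="CLAIMS_DB",
    schema="CURATED",
)
cur = conn.cursor()
# A typical integrity check: count rows missing a mandatory business key.
cur.execute("""
    SELECT COUNT(*) AS null_policy_ids
    FROM claims
    WHERE policy_id IS NULL
""")
null_count = cur.fetchone()[0]
print(f"Rows with missing policy_id: {null_count}")
conn.close()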
Posted 1 week ago
8.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Chennai (Work from Office) | Experience Level: 8-10 years | Tier: T2
We are seeking a highly skilled and experienced Senior Data Engineer to lead the design and development of scalable, secure, and high-performance data pipelines hosted on a cloud platform. The ideal candidate will have deep expertise in Databricks, Data Fabric, MDM, Informatica, and Unity Catalog, and a strong foundation in data modelling, software engineering, and DevOps practices. This role is critical to building a next-generation healthcare data platform that will power advanced analytics, operational efficiency, and business innovation.
Key Responsibilities
1. Data Pipeline Design and Development: Translate business requirements into actionable technical specifications, defining application components, enhancement needs, data models, and integration workflows. Design, develop, and optimize end-to-end data pipelines using Databricks and related cloud-native tools (a brief sketch follows this posting). Create and maintain detailed technical design documentation and provide accurate estimations for storage, compute resources, cost efficiency, and operational readiness. Implement reusable and scalable ingestion, transformation, and orchestration patterns for structured and unstructured data sources. Ensure pipelines meet functional and non-functional requirements such as latency, throughput, fault tolerance, and scalability.
2. Cloud Platform Architecture: Build and deploy data solutions on Microsoft Azure (Azure Fabric), leveraging Data Lake and Unity Catalog. Integrate pipelines with Data Fabric and Master Data Management (MDM) platforms for consistent and governed data delivery. Follow best practices in cloud security, encryption, access controls, and identity management.
3. Data Modeling and Metadata Management: Design robust and extensible data models supporting analytics, AI/ML, and operational reporting. Ensure metadata is cataloged, documented, and accessible through Unity Catalog and MDM frameworks. Collaborate with data architects and analysts to ensure alignment with business requirements.
4. DevOps, CI/CD and Automation: Adopt DevOps best practices for data pipelines, including automated testing, deployment, monitoring, and rollback strategies. Work closely with platform engineers to manage infrastructure as code, containerization, and CI/CD pipelines. Ensure compliance with enterprise SDLC, security, and data governance policies.
5. Collaboration and Continuous Improvement: Partner with data analysts and product teams to understand data needs and translate them into technical solutions. Continuously evaluate and integrate new tools, frameworks, and patterns to improve pipeline performance and maintainability.
Key Skills and Technologies
Required: Databricks (Delta Lake, Spark, Unity Catalog); Azure Data Platform (Data Factory, Data Lake, Azure Functions, Azure Fabric); Unity Catalog for metadata and data governance; strong programming skills in Python and SQL; experience with data modeling, data warehousing, and star/snowflake schema design; proficiency in DevOps tools (Git, Azure DevOps, Jenkins, Terraform, Docker).
Preferred: Experience with healthcare or regulated-industry data environments; familiarity with data security standards (e.g., HIPAA, GDPR).
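As a concrete illustration of the kind of pipeline work described above, here is a minimal PySpark sketch of a Bronze-to-Silver Delta Lake step. The lake paths, column names, and deduplication key are illustrative assumptions, not the employer's actual design.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-bronze-to-silver").getOrCreate()

# Bronze: raw ingested claims, landed as-is in Delta format (assumed path).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/claims")

# Silver: typed, deduplicated, conformed records.
silver = (
    bronze
    .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["claim_id"])            # assumed business key
    .filter(F.col("claim_id").isNotNull())
)

(silver.write.format("delta")
       .mode("overwrite")
       .option("overwriteSchema", "true")
       .save("/mnt/lake/silver/claims"))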
Posted 1 week ago
6.0 years
1 Lacs
Hyderābād
On-site
About us: Where elite tech talent meets world-class opportunities!
At Xenon7, we work with leading enterprises and innovative startups on exciting, cutting-edge projects that leverage the latest technologies across various domains of IT, including Data, Web, Infrastructure, AI, and many others. Our expertise in IT solutions development and on-demand resources allows us to partner with clients on transformative initiatives, driving innovation and business growth. Whether it's empowering global organizations or collaborating with trailblazing startups, we are committed to delivering advanced, impactful solutions that meet today's most complex challenges. We are building a community of top-tier experts and we're opening the doors to an exclusive group of exceptional AI & ML professionals ready to solve real-world problems and shape the future of intelligent systems.
Structured Onboarding Process
We ensure every member is aligned and empowered:
Screening – We review your application and experience in Data & AI, ML engineering, and solution delivery.
Technical Assessment – A two-step process that includes an interactive problem-solving test and a verbal interview about your skills and experience.
Matching You to Opportunity – We explore how your skills align with ongoing projects and innovation tracks.
Who We're Looking For
As a Data Analyst, you will work closely with business stakeholders, data engineers, and data scientists to analyze large datasets, build scalable queries and dashboards, and provide deep insights that guide strategic decisions. You'll use Databricks for querying, transformation, and reporting across Delta Lake and other data sources, and help teams act on data with confidence.
Requirements
6+ years of experience in data analysis, BI, or analytics roles
Strong experience with Databricks Notebooks, SQL, and Delta Lake
Proficiency in writing complex SQL queries (joins, CTEs, window functions; illustrated below)
Experience with data profiling, data validation, and root-cause analysis
Comfortable working with large-scale datasets and performance tuning
Solid understanding of data modeling concepts and ETL workflows
Experience with business intelligence tools (e.g., Power BI, Tableau)
Familiarity with Unity Catalog and data access governance (a plus)
Exposure to Python or PySpark for data wrangling (a plus)
Benefits
At Xenon7, we're not just building AI systems—we're building a community of talent with the mindset to lead, collaborate, and innovate together.
Ecosystem of Opportunity: You'll be part of a growing network where client engagements, thought leadership, research collaborations, and mentorship paths are interconnected. Whether you're building solutions or nurturing the next generation of talent, this is a place to scale your influence.
Collaborative Environment: Our culture thrives on openness, continuous learning, and engineering excellence. You'll work alongside seasoned practitioners who value smart execution and shared growth.
Flexible & Impact-Driven Work: Whether you're contributing from a client project, innovation sprint, or open-source initiative, we focus on outcomes—not hours. Autonomy, ownership, and curiosity are encouraged here.
Talent-Led Innovation: We believe communities are strongest when built around real practitioners. Our Innovation Community isn't just a knowledge-sharing forum—it's a launchpad for members to lead new projects, co-develop tools, and shape the direction of AI itself.
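For candidates gauging the SQL level this role expects, a small sketch of a CTE plus window function of the kind mentioned in the requirements, run from a Databricks notebook (where a `spark` session is predefined). The table and column names are assumptions for illustration only.

query = """
WITH daily AS (
    SELECT user_id, CAST(event_ts AS DATE) AS day, COUNT(*) AS events
    FROM analytics.user_events            -- assumed table
    GROUP BY user_id, CAST(event_ts AS DATE)
)
SELECT user_id,
       day,
       events,
       AVG(events) OVER (
           PARTITION BY user_id
           ORDER BY day
           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
       ) AS rolling_7d_avg               -- trailing 7-day average per user
FROM daily
"""
spark.sql(query).show()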
Posted 1 week ago
4.0 years
20 Lacs
Hyderābād
On-site
Job Description:
We are seeking a skilled and dynamic Azure Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and maintaining data pipelines and working with large datasets on the Azure cloud platform. The Azure Data Engineer will be responsible for developing and implementing efficient ETL processes, working with data warehouses, and leveraging cloud technologies such as Azure Data Factory (ADF), Azure Databricks, PySpark, and SQL to process and transform data for analytical purposes.
Key Responsibilities:
Data Pipeline Development: Design, develop, and implement scalable, reliable, and high-performance data pipelines using Azure Data Factory (ADF), Azure Databricks, and PySpark.
Data Processing: Develop complex data transformations, aggregations, and cleansing processes using PySpark and Databricks for big data workloads (see the sketch after this posting).
Data Integration: Integrate and process data from various sources such as databases, APIs, cloud storage (e.g., Blob Storage, Data Lake), and third-party services into Azure Data Services.
Optimization: Optimize data workflows and ETL processes to ensure efficient data loading, transformation, and retrieval while ensuring data integrity and high performance.
SQL Development: Write complex SQL queries for data extraction, aggregation, and transformation. Maintain and optimize relational databases and data warehouses.
Collaboration: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and design solutions that meet business and analytical needs.
Automation & Monitoring: Implement automation for data pipeline deployment and ensure monitoring, logging, and alerting mechanisms are in place for pipeline health.
Cloud Infrastructure Management: Work with cloud technologies (e.g., Azure Data Lake, Blob Storage) to store, manage, and process large datasets.
Documentation & Best Practices: Maintain thorough documentation of data pipelines, workflows, and best practices for data engineering solutions.
Job Type: Full-time
Pay: Up to ₹2,000,000.00 per year
Experience: Azure: 4 years (Required); Python: 4 years (Required); SQL: 4 years (Required)
Work Location: In person
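A hedged sketch of the cleansing-and-aggregation step this posting describes, reading from Azure Data Lake Storage with PySpark. The storage account, container, path, schema, and cleansing rules are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-cleanse").getOrCreate()

# Assumed ADLS Gen2 landing path; authentication setup is omitted here.
raw = spark.read.option("header", True).csv(
    "abfss://landing@mystorageacct.dfs.core.windows.net/orders/"
)

clean = (
    raw
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("order_date", F.to_date("order_date"))
    .na.drop(subset=["order_id", "amount"])   # drop rows missing key fields
    .dropDuplicates(["order_id"])
)

daily = clean.groupBy("order_date").agg(
    F.sum("amount").alias("revenue"),
    F.countDistinct("order_id").alias("orders"),
)
daily.write.mode("overwrite").parquet(
    "abfss://curated@mystorageacct.dfs.core.windows.net/orders_daily/"
)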
Posted 1 week ago
0 years
0 Lacs
Hyderābād
On-site
Are you looking to take your career to the next level? We're looking for a DevOps Engineer to join our Data & Analytics Core Data Lake Platform engineering team. We are searching for self-motivated candidates who will leverage modern Agile and DevOps practices to design, develop, test and deploy IT systems and applications, delivering global projects in multinational teams.
P&G Core Data Lake Platform is a central component of P&G's data and analytics ecosystem. The CDL Platform is used to deliver a broad scope of digital products and frameworks used by data engineers and business analysts. In this role you will have an opportunity to leverage your data engineering skillset to deliver solutions enriching data cataloging and data discoverability for our users. With our approach to building solutions that fit the scale at which P&G operates, we combine data engineering best practices (Databricks) with modern software engineering standards (Azure, DevOps, SRE) to deliver value for P&G.
RESPONSIBILITIES:
Writing and testing code for Data & Analytics platform applications and building end-to-end cloud-native (Azure) solutions.
Engineering applications throughout their entire lifecycle: development, deployment, upgrade, and replacement/termination.
Ensuring that development and architecture conform to established standards, including modern software engineering practices (CI/CD, Agile, DevOps).
Collaborating with internal technical specialists and vendors to develop the final product, improve overall performance and efficiency, and/or enable adoption of new business processes.
Posted 1 week ago
8.0 years
3 - 8 Lacs
Gurgaon
On-site
We are seeking a Digital Architect to lead the design and implementation of our Azure-based digital and AI platform that enables scalable and secure product delivery across IT and OT domains. This individual will shape the platform architecture in close collaboration with the Enterprise Architect, ensuring alignment with our overall digital ecosystem. The role bridges plant-floor operations with cloud innovation, integrating OT sensor data from PLCs, SCADA, and IoT devices into a centralized, governed Lakehouse environment.
Key Job Functions:
1) Architect and implement the Azure digital platform using IoT Hub, IoT Edge, Synapse, Databricks, and Purview
2) Collaborate with the Enterprise Architect to align platform capabilities with broader enterprise architecture and the digital roadmap
3) Design data ingestion flows and edge-to-cloud integration from OT systems (SCADA, PLC, MQTT, OPC-UA; see the sketch after this posting)
4) Define platform standards for ingestion, transformation (Bronze, Silver, Gold layers), and downstream AI/BI consumption
5) Ensure security, governance, and compliance (ISA-95, Purdue Model)
6) Lead technical validation of platform components and guide platform scaling across global sites
7) Apply microservices architecture patterns using containers (Docker) and orchestration (Kubernetes) for platform modularity and scalability
Requirements:
1) 8+ years in architecture or platform engineering roles
2) Strong hands-on experience with Azure services (Data Lake, Synapse, Databricks, IoT Edge/Hub)
3) Deep understanding of industrial data protocols (OPC-UA, MQTT, Modbus)
4) Proven experience designing IT/OT integration solutions in manufacturing environments
5) Familiarity with Medallion architecture, time-series data, and Azure security best practices
6) TOGAF or Azure Solutions Architect certification required
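To make the edge-to-cloud ingestion idea concrete, a minimal edge-side sketch of subscribing to OT sensor telemetry over MQTT with the paho-mqtt library (1.x-style client construction). The broker address, topic hierarchy, and payload shape are assumptions; a production design would route telemetry through Azure IoT Edge/IoT Hub rather than printing alerts.

import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Assumed JSON payload, e.g. {"sensor": "pump-7", "temp_c": 81.4}
    reading = json.loads(msg.payload)
    if reading.get("temp_c", 0) > 80:
        print(f"High temperature on {reading['sensor']}: {reading['temp_c']} C")

client = mqtt.Client()                       # paho-mqtt 1.x style
client.on_message = on_message
client.connect("edge-broker.local", 1883)    # hypothetical plant-floor broker
client.subscribe("plant/line1/+/telemetry")  # assumed topic layout
client.loop_forever()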
Posted 1 week ago
10.0 years
1 - 8 Lacs
Gurgaon
On-site
Requisition Number: 101627 | Architect II
Location: Hybrid position located in Delhi NCR, Hyderabad, Pune, Trivandrum or Bangalore, India
Insight at a Glance:
14,000+ engaged teammates globally
#20 on Fortune's World's Best Workplaces™ list
$9.2 billion in revenue
Received 35+ industry and partner awards in the past year
$1.4M+ total charitable contributions in 2023 by Insight globally
Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organisations through complex digital decisions.
About the role
The Architect II (Data) will focus on leading our Business Intelligence (BI) and Data Warehousing (DW) initiatives. This role involves designing and implementing end-to-end data pipelines using cloud services and data frameworks, collaborating with stakeholders and ETL/BI developers in an agile environment to create scalable, secure data architectures that align with business requirements, industry best practices, and regulatory compliance.
Responsibilities:
Architect and implement end-to-end data pipelines, data lakes, and warehouses using modern cloud services and architectural patterns.
Develop and build analytics tools that deliver actionable insights to the business.
Integrate and manage large, complex data sets to meet strategic business requirements.
Optimize data processing workflows using frameworks such as PySpark.
Establish and enforce best practices for data quality, integrity, security, and performance across the entire data ecosystem.
Collaborate with cross-functional teams to prioritize deliverables and design solutions.
Develop compelling business cases and return on investment (ROI) analyses to support strategic initiatives.
Drive process improvements for enhanced data delivery speed and reliability.
Provide technical leadership, training, and mentorship to team members, promoting a culture of excellence.
Qualification:
10+ years in Business Intelligence (BI) solution design, with 8+ years specializing in ETL processes and data warehouse architecture.
8+ years of hands-on experience with Azure Data services including Azure Data Factory, Azure Databricks, Azure Data Lake Gen2, Azure SQL DB, Synapse, Power BI, and MS Fabric (knowledge).
Strong Python and PySpark software engineering proficiency, coupled with a proven track record of building and optimizing big data pipelines, architectures, and datasets.
Proficient in transforming, processing, and extracting insights from vast, disparate datasets, and building robust data pipelines for metadata, dependency, and workload management.
Familiarity with software development lifecycles/methodologies, particularly Agile.
Experience with SAP/ERP/Datasphere data modeling is a significant plus.
Excellent presentation and collaboration skills, capable of creating formal documentation and supporting cross-functional teams in a dynamic environment.
Strong problem-solving, time management, and organizational abilities.
Keen to learn new languages and technologies continually.
Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or an equivalent field.
What you can expect
We're legendary for taking care of you, your family, and helping you engage with your local community.
We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include:
Freedom to work from another location—even an international destination—for up to 30 consecutive calendar days per year.
But what really sets us apart are our core values of Hunger, Heart, and Harmony, which guide everything we do, from building relationships with teammates, partners, and clients to making a positive impact in our communities. Join us today; your ambITious journey starts here.
At Insight, we celebrate diversity of skills and experience, so even if you don't feel like your skills are a perfect match, we still want to hear from you! When you apply, please tell us the pronouns you use and any reasonable adjustments you may need during the interview process.
Insight is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation or any other characteristic protected by law.
Insight India Location: Level 16, Tower B, Building No 14, DLF Cyber City IT/ITES SEZ, Sector 24 & 25A, Gurugram, Gurgaon, HR 122002, India
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon
On-site
We are looking for an AI/ML Specialist who can build intelligent systems using OT sensor data and Azure ML tools. You will work closely with data scientists, engineers, and operations teams to create scalable AI solutions that solve critical manufacturing challenges like predictive maintenance, process optimization, and anomaly detection. This role bridges the edge and cloud environments, deploying AI to run either in the cloud or on industrial edge devices.
Key Functions:
Design and develop ML models using time-series sensor data from OT systems (a sketch follows this posting)
Collaborate with engineering and data science teams to translate manufacturing problems into AI use cases
Implement MLOps pipelines on Azure ML and integrate with Databricks/Delta Lake
Deploy and monitor models at the edge using Azure IoT Edge
Conduct model validation, retraining, and performance monitoring
Work with plant operations to contextualize insights and embed them into workflows
Qualifications needed:
5+ years of experience in machine learning and AI
Hands-on experience with Azure ML, MLflow, Databricks, and PyTorch/TensorFlow
Proven ability to work with OT sensor data (temperature, vibration, flow, etc.)
Strong background in time-series modeling, edge inferencing, and MLOps
Familiarity with manufacturing KPIs and predictive modeling use cases
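A hedged sketch of the anomaly-detection work this role describes: fitting a simple detector to vibration telemetry and tracking the run with MLflow. The data file, feature columns, and model choice (Isolation Forest) are illustrative assumptions, not a production design.

import mlflow
import pandas as pd
from sklearn.ensemble import IsolationForest

# Assumed export of historian data with per-reading vibration features.
df = pd.read_parquet("sensor_vibration.parquet")
features = df[["rms_velocity", "peak_accel", "temp_c"]]

with mlflow.start_run(run_name="pump-anomaly-baseline"):
    model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
    model.fit(features)
    scores = model.decision_function(features)   # lower = more anomalous
    mlflow.log_param("contamination", 0.01)
    mlflow.log_metric("mean_anomaly_score", float(scores.mean()))
    mlflow.sklearn.log_model(model, "model")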
Posted 1 week ago
0 years
8 - 9 Lacs
Gurgaon
On-site
Engineer, Quality Engineering
Gurgaon, India | Information Technology | 314958
Job Description
About The Role: Grade Level (for internal use): 09
The Team: The team works in an Agile environment and adheres to all basic principles of Agile. As a Quality Engineer, you will work with a team of intelligent, ambitious, and hard-working software professionals. The team is independent in driving all decisions and is responsible for the architecture, design and development of our products with high quality.
The Impact: Achieve individual objectives and contribute to the achievement of team objectives. Work on problems of moderate scope where analysis of situations or data requires a review of a variety of factors. Perform ETL testing from various feeds on server (Oracle, SQL, Hive server, Databricks) using different testing strategies to ensure data quality, data consistency, and timeliness. Achieve the above intelligently and economically using QA best practices.
What is in it for you: Be part of a successful team which works on delivering top-priority projects which directly contribute to the company's strategy. This is the place to enhance your testing skills while adding value to the business. As an experienced member of the team, you will have the opportunity to own and drive a project end to end and collaborate with developers, business analysts and product managers who are experts in their domain, which can help you to build multiple skillsets.
Responsibilities: As a Quality Engineer, you are responsible for:
Defining Quality Metrics: Defining quality standards and metrics for the current project/product. Working with all stakeholders to ensure that the quality metrics are reviewed, closed, and agreed upon. Creating a list of milestones and checkpoints and setting measurable criteria to check quality on a timely basis.
Defining Testing Strategies: Defining processes for test plans and the several phases of the testing cycle. Planning and scheduling milestones and tasks like alpha and beta testing. Ensuring all development tasks meet quality criteria through test planning, test execution, quality assurance and issue tracking. Working closely to the deadlines of the project. Raising the bar and standards of all quality processes with every project, with an eye toward continuous innovation.
Managing Risks: Understanding and defining areas to calculate the overall risk to the project. Creating strategies to mitigate those risks and taking necessary measures to control them. Communicating and creating awareness among all stakeholders of the various risks; understanding and reviewing current risks and escalating as needed.
Process Improvements: Challenging yourself continuously to move towards automation for all daily work and helping others with automation. Creating milestones for yearly improvement projects. Working with the development team to ensure that quality engineers get apt support, like automation hooks or debug builds, wherever and whenever possible.
What we are looking for:
Basic Qualifications:
Bachelor's/PG degree in Computer Science, Information Systems or equivalent.
3-6 years of intensive experience in database and ETL testing.
Experience in running queries, data management, managing large data sets and dealing with databases.
Strong in creating SQL queries that can parse and validate business rules/calculations (a reconciliation-test sketch follows this posting).
Experience in writing complex SQL scripts, stored procedures, and integration packages.
Experience in tuning and improving DB performance of complex enterprise-class applications.
Ability to develop comprehensive test strategies, test plans and test cases to test big data implementations.
Proficient with software development lifecycle (SDLC) methodologies like Agile, QA methodologies, defect management systems, and documentation.
Good at setting quality standards for various new testing technologies in the industry.
Good at identifying and defining areas to calculate the overall risk to the project, creating strategies to mitigate those risks, and escalating as necessary.
Excellent analytical and communication skills are essential, with strong verbal and writing proficiencies.
Preferred Qualifications:
Strong in ETL and Big Data testing
Proficiency in SQL
About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.
What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories - United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 314958
Posted On: 2025-07-15
Location: Gurgaon, India
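Returning to the ETL-testing responsibilities in this role: a hedged sketch of a typical source-to-target reconciliation check, comparing row counts and a summed measure between staging and mart tables. The pyodbc driver, DSN, table names, and tolerance are all assumptions; any DB-API-compatible driver would work the same way.

import pyodbc  # assumed driver; swap for the database actually under test

def fetch_one(conn, sql):
    cur = conn.cursor()
    cur.execute(sql)
    return cur.fetchone()[0]

conn = pyodbc.connect("DSN=warehouse")  # hypothetical DSN

# Completeness: every source row should land in the target.
src_count = fetch_one(conn, "SELECT COUNT(*) FROM staging.trades")
tgt_count = fetch_one(conn, "SELECT COUNT(*) FROM mart.fact_trades")
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# Accuracy: an aggregated measure should reconcile within tolerance.
src_sum = fetch_one(conn, "SELECT SUM(notional) FROM staging.trades")
tgt_sum = fetch_one(conn, "SELECT SUM(notional) FROM mart.fact_trades")
assert abs(src_sum - tgt_sum) < 0.01, "Notional totals diverge beyond tolerance"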
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Position: Senior Data Engineer
Location: Bhubaneswar/Hyderabad
Experience Required: 7 to 11 years
Job Type: Full-time
Job Description – Requirement 1 (Azure Focus)
We are looking for a Data Engineer with strong expertise in Microsoft Azure to help us build and manage scalable, secure, and high-performance data pipelines and storage solutions.
Key Responsibilities:
Design and implement robust data pipelines using Azure Data Factory, Azure Data Lake, and other Azure services.
Develop and optimize large-scale data processing using Databricks and PySpark.
Work with both relational (SQL) and NoSQL databases to manage structured and semi-structured data.
Write clean, maintainable code using Python and/or Scala.
Collaborate with cross-functional teams including data scientists, analysts, and DevOps engineers.
Required Skills:
7-11 years of professional experience as a Data Engineer.
Deep hands-on expertise with Azure Data Services.
Solid experience with Databricks and PySpark.
Proficiency in SQL and NoSQL databases.
Strong background in Python, Scala, and object-oriented programming.
Job Description – Requirement 2 (GCP & Databricks Specialist)
We are seeking a highly motivated and technically skilled Data Engineer with GCP and Databricks experience to join our growing team. This role requires strong data pipeline development skills, advanced SQL capabilities, and an understanding of data science workflows (a BigQuery query sketch follows this posting).
Key Responsibilities:
Build, maintain, and optimize scalable data pipelines using Databricks and Google Cloud (BigQuery, Cloud Storage, etc.).
Collaborate with analytics and data science teams to enable data-driven decision-making.
Perform exploratory data analysis (EDA) to understand and prepare data for modeling and business use.
Ensure best practices in data modeling, governance, and lifecycle management.
Required Skills:
7-11 years of professional experience in data engineering.
Expert-level proficiency in Databricks, BigQuery, and Python.
Strong command over SQL with large datasets.
Exposure to data science concepts and EDA methodologies.
Familiarity with CI/CD processes and version control systems.
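As a minimal illustration of the GCP-focused requirement, a sketch of running an analytical query against BigQuery from Python with the google-cloud-bigquery client. The project, dataset, and table names are hypothetical.

from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # assumed project

sql = """
SELECT DATE(event_ts) AS day, COUNT(DISTINCT user_id) AS dau
FROM `my-analytics-project.events.app_events`   -- assumed table
WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY day
ORDER BY day
"""
# Daily active users over the trailing 30 days.
for row in client.query(sql).result():
    print(row.day, row.dau)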
Posted 1 week ago
0 years
1 - 6 Lacs
Noida
On-site
Summary: We are seeking a talented and motivated AI Engineer to join our team and focus on building cutting-edge Generative AI applications. The ideal candidate will possess a strong background in data science, machine learning, and deep learning, with specific experience in developing and fine-tuning Large Language Models (LLMs) and Small Language Models (SLMs). You should be comfortable managing the full lifecycle of AI projects, from initial design and data handling to deployment and production monitoring. A foundational understanding of software engineering principles is also required to collaborate effectively with engineering teams and ensure robust deployments.
Responsibilities:
Design, develop, and implement Generative AI solutions, including applications leveraging Retrieval-Augmented Generation (RAG) techniques (see the retrieval sketch after this posting).
Fine-tune existing Large Language Models (LLMs) and potentially develop smaller, specialized language models (SLMs) for specific tasks.
Manage the end-to-end lifecycle of AI model development, including data curation, feature extraction, model training, validation, deployment, and monitoring.
Research and experiment with state-of-the-art AI/ML/DL techniques to enhance model performance and capabilities.
Build and maintain scalable production pipelines for AI models.
Collaborate with data engineering and IT teams to define deployment roadmaps and integrate AI solutions into existing systems.
Develop AI-powered tools to solve business problems, such as summarization, chatbots, recommendation systems, or code assistance.
Stay updated with the latest advancements in Generative AI, machine learning, and deep learning.
Qualifications:
Proven experience as a Data Scientist, Machine Learning Engineer, or AI Engineer with a focus on LLMs and Generative AI.
Strong experience with Generative AI techniques and frameworks (e.g., RAG, fine-tuning, LangChain, LlamaIndex, PEFT, LoRA).
Solid foundation in machine learning (e.g., regression, classification, clustering, XGBoost, SVM) and deep learning (e.g., ANN, LSTM, RNN, CNN) concepts and applications.
Proficiency in Python and relevant libraries (e.g., Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch).
Experience with data science principles, including statistics, hypothesis testing, and A/B testing.
Experience deploying and managing models in production environments (e.g., using platforms like AWS, Databricks, MLflow).
Familiarity with data handling and processing tools (e.g., SQL, Spark/PySpark).
Basic understanding of software engineering practices, including version control (Git) and containerization (Docker).
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related quantitative field.
Preferred Skills:
Experience building RAG-based chatbots or similar applications.
Experience developing custom SLMs.
Experience with MLOps principles and tools (e.g., MLflow, Airflow).
Experience migrating ML workflows between cloud platforms.
Familiarity with vector databases and indexing techniques.
Experience with Python web frameworks (e.g., Django, Flask).
Experience building and integrating APIs (e.g., RESTful APIs).
Basic experience with front-end development or UI building for showcasing AI applications.
Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
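A stripped-down sketch of the retrieval half of a RAG pipeline, of the kind this role builds. The model name is a common public sentence-transformers checkpoint, the corpus is toy data, and a real system would use a vector database for retrieval and an LLM call for generation.

import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise plans.",
    "Passwords can be reset from the account settings page.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

def retrieve(question, k=2):
    # With normalized embeddings, the dot product is cosine similarity.
    q = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    return [docs[i] for i in np.argsort(-scores)[:k]]

context = "\n".join(retrieve("How long do refunds take?"))
prompt = f"Answer using only this context:\n{context}\n\nQ: How long do refunds take?"
# `prompt` would then be sent to an LLM so the answer stays grounded in the context.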
Posted 1 week ago
1.0 years
0 Lacs
Noida
On-site
Do you want to be on the leading edge of using big data and help drive engineering and product decisions for the biggest productivity software on the planet? Office Product Group (OPG) has embarked on a mission to delight our customers by using data-informed engineering to develop compelling products and services. OPG is looking for an experienced professional with a passion for delivering business value with data insights and analytics to join our team as a Data & Applied Scientist.
We are looking for a strong Senior Data Scientist with a proven track record of solving large, complex data analysis problems in a real-world software product development setting. Ideal candidates should be able to take a business or engineering problem from a Product Manager or Engineering leader and translate it to a data problem. This includes all the steps to identify and deeply understand potential data sources, conduct the appropriate analysis to reveal actionable insights, and then operationalize the metrics or solution into PowerBI dashboards. You will be delivering results through innovation and persistence when similar candidates have given up.
Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
Dashboard Development and Maintenance: Design, build, and maintain interactive dashboards and reports in PowerBI to visualize key business metrics and insights. Work closely with stakeholders to understand their data visualization needs and translate business requirements into technical specifications.
Data Extraction and Analysis: Perform ad-hoc data extraction and analysis from various data sources, including SQL databases, cloud-based data storage solutions, and external APIs. Ensure data accuracy and integrity in reporting and analysis. Deliver high-impact analysis to diagnose and drive business-critical insights to guide product and business development.
Metric Development and Tracking: Be the SME who understands the landscape of what data (telemetry) is and should be captured. Advise feature teams on telemetry best practices to ensure business needs for data are met. Collaborate with product owners and other stakeholders to define and track key performance indicators (KPIs) and other relevant metrics for business performance. Identify trends and insights in the data to support decision-making processes.
User Journey and Funnel Analysis: Assist product owners in mapping out user journeys and funnels to understand user behavior and identify opportunities for feature improvement (a simple funnel sketch follows this posting). Develop and implement ML models to analyze user journeys and funnels. Utilize a variety of techniques to uncover patterns in user behavior that can help improve the product.
Forecasting and Growth Analysis: Support the forecasting of key results (KRs) and growth metrics through data analysis and predictive modeling. Provide insights and recommendations to help drive strategic planning and execution.
Qualifications
Required Qualifications:
Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 1+ year(s) of data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results), OR
Master's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 3+ years of data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results), OR
Bachelor's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 5+ years of data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results), OR
equivalent experience.
2+ years of customer-facing, project-delivery, professional services, and/or consulting experience.
Preferred Qualifications:
7+ years of experience involving programming with languages such as Python/R and hands-on experience using technologies such as SQL, Kusto, Databricks, Spark, etc.
7+ years of experience working with data exploration and data visualization tools like PowerBI or similar.
Ability to communicate complex ideas and concepts to leadership and deliver results.
Comfort manipulating and analyzing complex, high-dimensional data from varying sources to solve difficult problems.
Bachelor's or higher degree in Computer Science, Statistics, Mathematics, Physics, Engineering, or related disciplines.
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
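To illustrate the funnel-analysis responsibility above, a deliberately simple pandas sketch that counts distinct users at each funnel step and the step-to-step conversion rate. The event-log file, schema, and step names are assumptions, and a real analysis would also enforce step ordering per user.

import pandas as pd

events = pd.read_parquet("feature_events.parquet")  # hypothetical telemetry export
steps = ["opened_doc", "clicked_feature", "completed_action"]

# Distinct users reaching each step (order-agnostic simplification).
users_at_step = [
    events.loc[events["event"] == step, "user_id"].nunique() for step in steps
]
funnel = pd.DataFrame({"step": steps, "users": users_at_step})
funnel["conversion_vs_prev"] = funnel["users"] / funnel["users"].shift(1)
print(funnel)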
Posted 1 week ago