
3866 Databricks Jobs - Page 13

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

15.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Description
Lead the design, development, and implementation of scalable data pipelines and ELT processes using Databricks, DLT, dbt, Airflow, and other tools (see the sketch below).
Collaborate with stakeholders to understand data requirements and deliver high-quality data solutions.
Optimize and maintain existing data pipelines to ensure data quality, reliability, and performance.
Develop and enforce data engineering best practices, including coding standards, testing, and documentation.
Mentor junior data engineers, providing technical leadership and fostering a culture of continuous learning and improvement.
Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption to business operations.
Stay up to date with the latest industry trends and technologies, and proactively recommend improvements to our data engineering practices.

Qualifications
Degree in Management Information Systems (MIS), Data Science, or a related field.
15 years of experience in data engineering and/or architecture, with a focus on big data technologies.
Extensive production experience with Databricks, Apache Spark, and related technologies.
Familiarity with orchestration and ELT tools such as Airflow and dbt.
Expert SQL knowledge.
Proficiency in programming languages such as Python, Scala, or Java.
Strong understanding of data warehousing concepts.
Experience with cloud platforms such as Azure, AWS, or Google Cloud.
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
Strong communication and leadership skills, with the ability to effectively mentor and guide junior engineers.
Experience with machine learning and data science workflows.
Knowledge of data governance and security best practices.
Certification in Databricks, Azure, Google Cloud, or related technologies.

Job: Information Technology
Primary Location: India-Maharashtra-Mumbai
Schedule: Full-time
Travel: No
Req ID: 250903
Job Hire Type: Experienced
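For readers unfamiliar with the orchestration stack this role names, here is a minimal, hypothetical sketch of an Airflow DAG that triggers a Databricks job as one stage of an ELT pipeline. The DAG id, connection id, and job id are invented for illustration; they are not details from the posting.

```python
# Hypothetical sketch only: a daily Airflow DAG that triggers an existing
# Databricks job. Requires apache-airflow plus the Databricks provider.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="elt_daily",                 # invented name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    run_transform = DatabricksRunNowOperator(
        task_id="run_databricks_transform",
        databricks_conn_id="databricks_default",  # assumed Airflow connection
        job_id=123,                               # placeholder Databricks job id
    )
```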

Posted 4 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Title: Databricks & AWS Lakehouse Engineer
Budget: Max 25 LPA
Location: Gurugram
Client: HCL

Strong hands-on skills in Python (primary), Spark SQL, pipeline engineering, CI/CD automation, observability, and platform governance.
7–10 years of data engineering experience, with 5+ years on Databricks and Apache Spark.
Expert-level hands-on experience with Databricks and AWS (S3, Glue, EMR, Kinesis, Lambda, IAM, CloudWatch).
Primary language: Python; strong skills in Spark SQL.
Deep understanding of Lakehouse architecture, Delta Lake, Parquet, and Iceberg.
Strong experience with Databricks Workflows, Unity Catalog, runtime upgrades, and cost optimization.
Experience with Databricks-native monitoring tools and Datadog integration.
Security and compliance expertise across data governance and infrastructure layers.
Experience with CI/CD automation using Terraform, CloudFormation, and Git.
Hands-on experience with disaster recovery and multi-region architecture.
Strong problem-solving, debugging, and documentation skills.
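As a hedged illustration of the Lakehouse pattern this posting centers on, the sketch below lands raw S3 data in a Delta table with PySpark. The bucket and paths are placeholders, not details from the role, and a Delta-enabled Spark runtime (such as Databricks) is assumed.

```python
# Minimal bronze-layer ingest: read raw JSON from S3, stamp it, write Delta.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_ingest").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")   # placeholder path
bronze = raw.withColumn("ingest_date", F.current_date())   # partition column

(bronze.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("s3://example-bucket/bronze/events/"))           # placeholder path
```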

Posted 4 days ago

Apply

7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Lead Data Engineer (Databricks)
Experience: 7-10 years
Salary: Competitive
Preferred Notice Period: Within 30 days
Opportunity Type: Hybrid (Ahmedabad)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Databricks, SQL or Python, ETL tools or Data Modelling or Data Warehousing

About Inferenz (one of Uplers' clients):
At Inferenz, our team of innovative technologists and domain experts helps accelerate business growth through digital enablement, navigating industries with data, cloud, and AI services and solutions. We dedicate our resources to increasing efficiency and gaining a greater competitive advantage by leveraging next-generation technologies. Our technology expertise has helped us deliver innovative solutions in key industries such as Healthcare & Life Sciences, Consumer & Retail, Financial Services, and emerging industries.

Our main capabilities and solutions: Data Strategy & Architecture, Data & Cloud Migration, Data Quality & Governance, Data Engineering, Predictive Analytics, Machine Learning/Artificial Intelligence, Generative AI.

Specialties: Data and Cloud Strategy, Data Modernization, On-Premise to Cloud Migration, SQL to Snowflake Migration, Hadoop to Snowflake Migration, Cloud Data Platforms and Warehouses, Data Engineering and Pipelines, Data Virtualization, Business Intelligence, Data Democratization, Marketing Analytics, Attribution Modelling, Machine Learning, Computer Vision, Natural Language Processing, and Augmented Reality.

Job Description
Key Responsibilities:
Lead the design, development, and optimization of data solutions using Databricks, ensuring they are scalable, efficient, and secure.
Collaborate with cross-functional teams to gather and analyse data requirements, translating them into robust data architectures and solutions.
Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory as needed.
Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation.
Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions.
Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth.
Stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.

Required Skills & Qualifications:
Bachelor's or master's degree in computer science, Information Technology, or a related field.
7 to 10 years of experience in data engineering, with a focus on Databricks.
Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory/AWS Glue.
Proficiency in SQL and programming languages such as Python or Scala.
Strong understanding of data modelling, ETL processes, and Data Warehousing/Data Lakehouse concepts.
Familiarity with cloud platforms, particularly Azure, and containerization technologies such as Docker.
Excellent analytical, problem-solving, and communication skills.
Demonstrated leadership ability with experience mentoring and guiding junior team members.

Preferred Qualifications:
Experience with Generative AI technologies and their applications.
Familiarity with other cloud platforms, such as AWS or GCP.
Knowledge of data governance frameworks and tools.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in to our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Location: India
Job Type: Full-time
Experience Level: Mid-Level/Senior

Must-have skills: Strong proficiency in PySpark, Python, SQL, and Azure Data Factory.
Good-to-have skills: Working knowledge of Azure Synapse Analytics, Azure Functions, Logic Apps workflows, Log Analytics, and Azure DevOps.

Job Summary
We are looking for a highly skilled Azure Data Engineer / Databricks Developer to join our data and analytics team. The ideal candidate will have deep expertise in building robust, scalable, and efficient data solutions using Azure cloud services and Apache Spark on Databricks. You will be instrumental in developing end-to-end data pipelines that support advanced analytics and business intelligence initiatives.

Key Responsibilities
Design and implement scalable data pipelines using Databricks, Azure Data Factory, Azure SQL, and other Azure services.
Write efficient PySpark / Spark SQL code for data transformation, cleansing, and enrichment.
Implement data ingestion from various sources, including structured, semi-structured, and unstructured data.
Optimize data processing workflows for performance, cost, and reliability.
Collaborate with data analysts and stakeholders to understand data needs and deliver high-quality datasets.
Ensure data governance, security, and compliance using Azure-native tools.
Participate in code reviews, documentation, and deployment of data solutions using DevOps practices.
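For illustration, a minimal sketch of the kind of PySpark cleansing and enrichment step the responsibilities describe. Table and column names are assumptions for the example, not part of the posting.

```python
# Cleanse (dedupe, drop bad rows), transform, and enrich via a join.
# Source/target tables are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.read.table("raw.orders")
customers = spark.read.table("raw.customers")

clean = (orders
    .dropDuplicates(["order_id"])                      # cleansing: remove duplicates
    .filter(F.col("amount") > 0)                       # cleansing: drop invalid rows
    .withColumn("order_date", F.to_date("order_ts"))   # transformation
    .join(customers.select("customer_id", "segment"),  # enrichment
          on="customer_id", how="left"))

clean.write.mode("overwrite").saveAsTable("curated.orders")
```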

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

We're looking for a Senior Data Engineer to design and build scalable data solutions using Azure Data Services, Power BI, and modern data engineering best practices. You'll work across teams to create efficient data pipelines, optimise databases, and enable smart, data-driven decisions. If you enjoy solving complex data challenges, collaborating globally, and making a real impact, we'd love to hear from you. Be a part of our Data Management & Reporting team and help us deliver innovative solutions that make a real impact.

At SkyCell, we're on a mission to change the world by revolutionizing the global supply chain. Our cutting-edge temperature-controlled container solutions are designed to ensure the safe and secure delivery of life-saving pharmaceuticals, with sustainability at the core of everything we do. We're a fast-growing, purpose-driven scale-up where you'll make an impact, feel empowered, and thrive in a diverse, innovative environment.

Why SkyCell?
🌱 Purpose-Driven Work: Make a real difference by contributing to a more sustainable future in global logistics and healthcare
🚀 Innovation at Heart: Work with cutting-edge technology and be at the forefront of supply chain innovation
🌎 Stronger Together: Join a supportive team of talented individuals from over 40 countries, where we work together every step of the way
💡 Growth Opportunities: We believe in investing in our people – continuous learning and development are key pillars of SkyCell
🏆 Award-Winning Culture: Join a workplace recognized for its commitment to excellence, with a 'Great Place to Work' award as well as a Platinum EcoVadis rating highlighting our sustainability and employee well-being

What You'll Do:
Design, build, and maintain scalable data pipelines and databases using Azure Data Services for both structured and unstructured data
Optimise and monitor database performance, ensuring efficient data retrieval and storage
Develop, optimise, and maintain complex SQL queries, stored procedures, and data transformation logic
Develop efficient workflows, automate data processes, and provide analytical support through data extraction, transformation, and interpretation
Create and optimise Power BI dashboards and models, including DAX tuning and data modelling; explore additional reporting tools as needed
Integrate data from multiple sources and ensure accuracy through quality checks, validation, and consistency measures
Implement data security measures and ensure compliance with data governance policies
Investigate and support the business in providing solutions for data issues
Collaborate with cross-functional teams, contribute to code reviews, and uphold coding standards
Continuously evaluate tools, document data flows and system architecture, and improve engineering practices
Provide technical leadership, mentor junior engineers, and support hiring, onboarding, and training

Requirements
What You'll Bring:
Bachelor's degree in Computer Science or a related field (Master's degree is a plus)
Proven expertise in designing and implementing data solutions using Azure Data Factory, Azure Databricks, and Azure SQL Database; certifications are a plus
Strong proficiency in SQL development and optimization, plus good knowledge of NoSQL databases
Extensive experience in developing complex dashboards and reports, including DAX
Knowledge of at least one data analysis language (Python, R, etc.)
Knowledge of SAP Analytics Cloud is advantageous
Ability to design and implement data models to ensure efficient storage, retrieval, and analysis of structured and unstructured data

Benefits
What's In It For You?
⚡ Flexibility & Balance: Flexible working hours and work-life balance allow you to tailor work to fit your life
🌟 Recognition & Growth: Opportunities for career advancement in a company that values your contributions
💼 Hybrid Workplace: Modern workspaces (in Zurich, Zug, and Hyderabad, as well as our Skyhub in Basel) and a remote-friendly culture to inspire collaboration amongst a globally diverse team
🎉 Company-wide Events: Join us for company events to celebrate successes, build teams, and share our vision. Plus, new joiners experience SkyWeek, our immersive onboarding program
👶 Generous Maternity & Paternity Leave: Support for new parents with competitive maternity and paternity leave
🏖️ Annual Leave & Bank Holidays: Enjoy a generous annual leave package, plus local bank holidays to recharge and unwind

Ready to Make an Impact?
We're not just offering a job; we're offering a chance to be part of something bigger. At SkyCell, you'll help build a future where pharmaceutical delivery is efficient, sustainable, and transformative.

Stay Connected with SkyCell
Visit http://www.skycell.ch and explore #WeAreSkyCell on LinkedIn

How To Apply
Simply click 'apply for this job' below! We can't wait to meet you and discuss how you can contribute to our mission! Please note, we are unable to consider applications sent via email. If you have any questions, you can contact our Talent Team (talent@skycell.ch).

SkyCell AG is an equal opportunity employer that values diversity and is committed to creating an inclusive environment for all. We do not discriminate based on race, religion, colour, national origin, gender, sexual orientation, gender identity, age, disability, or any other legally protected characteristic. For this position, if you are not located in, or able to relocate (without sponsorship) to, one of the above locations, your application cannot be considered.

Posted 4 days ago

Apply

0 years

0 Lacs

Thiruporur, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description
The Finance Data Steward supports the Finance Data Services team – governance and operations – in measuring and reporting master data consumption, consolidated reports, volumes (BVI), process performance, and quality metrics (KPIs) such as consistency, accuracy, and validity for all relevant finance data objects covered by the team (e.g. cost centers, project WBS, GL accounts, internal business partners).

This is achieved in close collaboration with the finance data team (data stewards, data SMEs, data maintainers, reporting teams, data curators, business analysts, and other stakeholders), with the Finance Data Steward responsible and accountable for gathering data requirements; setting up the data model design, architecture, and documentation; and developing scalable data models for finance data objects via Power BI dashboards, SQL programming, Power Automate, and other data analysis and processing tools.

How You Will Contribute And What You Will Learn
Connect and integrate various data sources (e.g., MDG, ERP systems, EDP, Redbox, Project Cube, ngOCt) to create a unified view of Finance and Business Partner master data consolidated reports, volumes (BVI), process performance, and quality metrics (KPIs)
Design and implement data models and transformations to prepare Finance and Business Partner master data for performance and quality analysis, reporting, consolidation, and visualization
Build interactive and insightful dashboards using data visualization tools (Power BI – data flows and design, SQL, Power Automate, Databricks, Azure)
Manage and execute technical activities and projects for Finance and Business Partner master data analytics use cases
Prepare the schedule and technical activities plan for Finance and Business Partner master data analytics use cases
Seek and communicate cost-efficient solutions for Finance and Business Partner data analytics
Write functional and technical specifications and other guiding documentation for Finance and Business Partner master data analytics
Proactively identify new data insights and opportunities for improvement based on master data analysis
Maintain and continuously improve the current Finance and Business Partner master data dashboards, addressing business user feedback
Work closely with business stakeholders to understand their data needs and translate them into effective dashboards
Adhere to data governance policies and regulations, ensuring data security and privacy
Develop and implement data quality checks and validation processes to ensure the accuracy and completeness of master data (an illustrative sketch follows this posting)

Key Skills And Experience
Deep understanding of master data management principles, processes, and tools, including data governance, data quality, data cleansing, and data integration
Programming and data visualization skills: knowledge of Power BI – dashboards, data flows, and design (advanced), SQL, Databricks, Power Automate, HTML, Python, SharePoint, etc.
Experience with data repositories such as EDP and Azure
Excellent written and oral communication in English
Hands-on experience with data analytics tools
Problem-solving aptitude and an analytical mindset
Effective networking capabilities; comfortable in multicultural environments and virtual teams
Team player

About Us
Come create the technology that helps the world act together. Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people's lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work.

What we offer
Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs, and highly diverse teams with an inclusive culture where people thrive and are empowered.

Nokia is committed to inclusion and is an equal opportunity employer. Nokia has received the following recognitions for its commitment to inclusion & equality:
One of the World's Most Ethical Companies by Ethisphere
Gender-Equality Index by Bloomberg
Workplace Pride Global Benchmark

At Nokia, we act inclusively and respect the uniqueness of people. Nokia's employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed.
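Returning to the data-quality responsibilities in this posting: below is a minimal sketch of completeness, validity, and uniqueness KPIs computed over a master-data extract. The file name and field layout are invented for illustration, not Nokia's actual data model.

```python
# Toy data-quality KPIs for a cost-center extract; all names are assumptions.
import pandas as pd

df = pd.read_csv("cost_centers.csv")  # hypothetical master-data extract

kpis = {
    # completeness: share of rows with a non-null owner
    "owner_completeness": df["owner"].notna().mean(),
    # validity: ids matching an assumed 8-digit pattern
    "id_validity": df["cost_center_id"].astype(str).str.fullmatch(r"\d{8}").mean(),
    # uniqueness: share of non-duplicated ids
    "id_uniqueness": 1 - df["cost_center_id"].duplicated().mean(),
}
print(pd.Series(kpis).round(3))
```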

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Key Responsibilities:
Develop, manage, design, and performance-tune solutions in SQL and PL/SQL.
Design, develop, and maintain robust Databricks pipelines to support data processing and analytics.
Implement and optimize Spark engine concepts for efficient data processing.
Collaborate with business stakeholders to understand and translate business requirements into technical solutions.
Utilize Informatica for data integration and ETL processes.
Write complex SQL queries to extract, manipulate, and analyze data.
Perform data analysis to support business decision-making and identify trends and insights.
Ensure data quality and integrity across various data sources and platforms.
Communicate effectively with cross-functional teams to deliver data solutions that meet business needs.
Stay updated with the latest industry trends and technologies in data engineering and analytics.

Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
5+ years of experience in data engineering or a related role.
Strong expertise in Databricks and Spark engine concepts.
Proficiency in Informatica for ETL processes.
Advanced SQL skills for data extraction and analysis.
Excellent analytical skills with the ability to interpret complex data sets.
Strong communication skills to effectively collaborate with business stakeholders and technical teams.
Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
Knowledge of data warehousing concepts and tools is desirable.
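As a sketch of the "complex SQL" work this role describes, here is a window-function query run through Spark SQL on a Databricks-style runtime. The table and columns are invented for the example.

```python
# Rank orders per customer and keep only the most recent one.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

latest_orders = spark.sql("""
    SELECT *
    FROM (
        SELECT o.*,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY order_ts DESC) AS rn
        FROM curated.orders o        -- hypothetical table
    )
    WHERE rn = 1
""")
latest_orders.show()
```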

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Company Description
NielsenIQ is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. We provide consumer packaged goods / fast-moving consumer goods manufacturers and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary NielsenIQ data with other data sources to help clients around the world understand what's happening now, what's happening next, and how to best act on this knowledge. We like to be in the middle of the action. That's why you can find us at work in over 90 countries, covering more than 90% of the world's population. For more information, visit www.niq.com.

Job Description
YOU'LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES
Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Senior Data Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Python, PySpark, SQL, Hive, Databricks, and Airflow. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, GitHub Actions, and GitHub.

WHAT YOU'LL DO:
Develop, troubleshoot, debug, make application enhancements, and create code using Python and SQL as the core development languages.
Develop new back-end functionality, working closely with the front-end team.
Contribute to the expansion of NRPS scope.

Qualifications
WE'RE LOOKING FOR PEOPLE WHO HAVE:
5-10 years of applicable software engineering experience
Strong experience in Python
Strong fundamentals with experience in big data, Python, PySpark, SQL, Hive, and Airflow
SQL knowledge (must have)
Good to have: experience in Scala and Databricks
Good to have: experience in Linux and KSH
Good to have: experience with DevOps technologies such as GitHub, GitHub Actions, and Docker
Good to have: experience in the retail domain
Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business
Minimum B.S. degree in Computer Science, Computer Engineering, or a related field

Additional Information
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 4 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
You will help our clients navigate the complex world of modern data science and analytics. We'll look to you to provide advice to our clients on how best to design, implement, stabilise, and optimise internal controls utilising cutting-edge and scalable AI & Big Data technologies.

Your Key Responsibilities
As a Senior AI/ML Engineer, you will:
Build and leverage cutting-edge Gen AI & Big Data platforms to deliver insights that build comprehensive Risk & Controls monitoring mechanisms.
Create executive reporting, proposals, and marketing material, ensuring the highest-quality deliverables.
Work within the team and leverage your knowledge and expertise to solve some of the most intriguing technical challenges in the design and development of a next-generation AI & Data-led risk technology platform.

Skills and Summary of Accountabilities:
Develop robust AI/ML models to drive valuable insights and enhance decision-making across multiple business lines.
Strong technical knowledge of Big Data platforms, AI technologies around Large Language Models and Vector Databases, comprehensive DevOps services, and full-stack application development.
Lead the design and development of bespoke machine learning algorithms to achieve business objectives.
Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
Good understanding of machine learning, NLP, LLM, GenAI, deep learning, and generative AI techniques.
Apply advanced machine learning techniques (such as SVM, Decision Trees, Neural Networks, Random Forests, and Gradient Boosting) to complex problems (illustrated below).
Ensure adherence to ethical AI guidelines and data governance policies.
Utilize expertise in prompt engineering to design and implement innovative solutions for workflow orchestration, leveraging tools such as Python, Airflow, and Terraform.
Collaborate closely with cross-functional teams, including data scientists, analysts, and other engineers, to identify opportunities for AI- and analytics-driven improvements in the bank's operations and services.
Stay up to date with the latest advancements in AI, analytics, and prompt engineering, ensuring the organisation remains at the forefront of innovation and maintains a competitive edge in the industry.
Maintain and improve existing AI/ML architectures and ensure they continue to deliver significant results.
Intellectual strength and flexibility to resolve complex problems and rationalise them into a workable solution which can then be delivered.
Develop current and relevant client propositions, delivering timely, high-quality output against stated project objectives.
Willingness to learn and apply new, cutting-edge AI technologies to deliver insight-driven business solutions.

To qualify for the role you must have
4+ years of working experience in large-scale AI/ML models and data science.
Deep understanding of statistics, AI/ML algorithms, and predictive modeling.
Proficiency in AI/ML programming languages like Python, R, and SQL.
Proficiency with a deep learning framework such as TensorFlow, PyTorch, or Keras.
Expertise in machine learning algorithms and data mining techniques (such as SVM, Decision Trees, and deep learning algorithms).
Experience implementing monitoring and logging tools to ensure AI model performance and reliability.
Strong programming skills in Python, including machine learning libraries such as scikit-learn, Pandas, and NumPy.
Experience using tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
Ability to automate tasks through Python scripting, databases, and other technologies such as Databricks, Synapse, ML, AI, ADF, etc.
Good understanding of Git, JIRA, change/release management, build/deploy, CI/CD, Azure DevOps, and SharePoint.
Strong understanding of large enterprise applications such as SAP, Oracle ERP, and Microsoft Dynamics.

Ideally, you'll also have
A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data science, or related disciplines; relevant certifications are considered a plus.
A self-driven, creative problem-solving mindset: someone who enjoys the fast-paced world of software development and performs well in a team.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
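For concreteness, here is a toy example of one technique the posting names (gradient boosting), using a dataset bundled with scikit-learn so it runs as-is. It is an illustration only, not EY's methodology.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Bundled binary-classification dataset; stands in for a real risk problem.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```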

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Full-time

Company Description
NielsenIQ is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. We provide consumer packaged goods / fast-moving consumer goods manufacturers and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary NielsenIQ data with other data sources to help clients around the world understand what's happening now, what's happening next, and how to best act on this knowledge. We like to be in the middle of the action. That's why you can find us at work in over 90 countries, covering more than 90% of the world's population. For more information, visit www.niq.com.

Job Description
YOU'LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES
Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Senior Data Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Python, PySpark, SQL, Hive, Databricks, and Airflow. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, GitHub Actions, and GitHub.

What You'll Do
Develop, troubleshoot, debug, make application enhancements, and create code using Python and SQL as the core development languages.
Develop new back-end functionality, working closely with the front-end team.
Contribute to the expansion of NRPS scope.

Qualifications
WE'RE LOOKING FOR PEOPLE WHO HAVE:
5-10 years of applicable software engineering experience
Strong experience in Python
Strong fundamentals with experience in big data, Python, PySpark, SQL, Hive, and Airflow
SQL knowledge (must have)
Good to have: experience in Scala and Databricks
Good to have: experience in Linux and KSH
Good to have: experience with DevOps technologies such as GitHub, GitHub Actions, and Docker
Good to have: experience in the retail domain
Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business
Minimum B.S. degree in Computer Science, Computer Engineering, or a related field

Additional Information
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

This requirement is for a client.

Job Overview:
We are seeking a skilled Machine Learning Engineer with a strong grasp of ML algorithms, techniques, and best practices. This role offers the opportunity to design, build, and deploy scalable machine learning solutions in a dynamic environment.

Responsibilities:
Strong understanding of ML algorithms, techniques, and best practices.
Strong understanding of Databricks, Azure AI services, other ML platforms, cloud computing platforms (e.g., AWS, Azure, GCP), and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
Strong understanding of MLflow or Kubeflow frameworks.
Strong programming skills in Python and data analytics expertise.
Experience in building GenAI-based solutions, such as chatbots using RAG approaches.
Expertise in any of the GenAI frameworks such as LangChain/LangGraph, AutoGen, CrewAI, etc.

Requirements:
Proven experience as a Machine Learning Engineer, Data Scientist, or similar role, with a focus on product matching, image matching, and LLMs.
Solid understanding of machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
Hands-on experience with product matching algorithms and image recognition techniques.
Experience with natural language processing and large language models (LLMs) such as GPT, BERT, or similar architectures.
Ability to optimize and fine-tune models for performance and scalability.
Ability to collaborate with cross-functional teams to integrate ML solutions into products.
Staying updated with the latest advancements in AI and machine learning.

Please feel free to send your resume to pushpa.belliappa@tekworks.in
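Since the posting asks for MLflow experience, here is a minimal, hedged sketch of MLflow experiment tracking. The experiment name, parameters, and metric values are invented for illustration.

```python
import mlflow

mlflow.set_experiment("product-matching")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("embedding_model", "all-MiniLM-L6-v2")  # assumed choice
    mlflow.log_param("similarity_threshold", 0.82)
    mlflow.log_metric("match_precision", 0.91)               # made-up numbers
    mlflow.log_metric("match_recall", 0.87)
```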

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Role: Data Engineer (Full time / Contract)
Experience: 2 to 6 years
Mode of Work: WFO only
Location: Chennai

Job description:
Key Skills: SQL, ETL Tools, ADF, ADB, SSIS, Reporting Tools

Key Requirements: The day-to-day development activities will need knowledge of the below concepts.
 Expert-level knowledge of RDBMS (SQL Server), with a clear understanding of SQL query writing, object creation and management, and performance and optimisation of DB/DWH operations.
 Good understanding of transactional and dimensional data modelling, star schemas, facts/dimensions, and relationships.
 Good understanding of ETL concepts and exposure to tools such as Azure Data Factory, Azure Databricks, and Airflow.
 In-depth expertise in Azure Data Factory and Databricks, including building scalable data pipelines, orchestrating complex workflows, implementing dynamic and parameterized pipelines, and optimizing Spark-based data transformations for large-scale integrations.
 Hands-on experience with Databricks Unity Catalog for centralized data governance, fine-grained access control, auditing, and managing data assets securely across multiple workspaces.
 Should have worked on at least one development lifecycle of an end-to-end ETL project (involving the ETL tools mentioned above).
 Ability to write and review test cases, test code, and validate code.
 Good understanding of SDLC practices such as source control, version management, and usage of Azure DevOps and CI/CD practices.

Project context:
 Should be able to fully understand the context and use case of a project and have a personal vision for it; plays the role of interfacing with the customer directly on a daily basis.
 Should be able to converse with functional users and convert requirements into tangible processes/models and documentation in available templates.
 Should be able to provide consultative options to the customer on the best way to execute projects.
 Should have a good understanding of project dynamics: scoping, setting estimates, setting timelines, working around timelines in case of exceptions, etc.

Preferred skills:
 Knowledge of Python is a bonus
 Knowledge of SSIS is a bonus
 Knowledge of Azure DevOps and source control/repos is good to have
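One common way the "dynamic and parameterized pipelines" requirement is met in practice is sketched below, under the assumption of a Databricks notebook invoked by an ADF pipeline (inside such a notebook, `dbutils` and `spark` are predefined). The parameter and table names are invented.

```python
# Read ADF-supplied parameters via Databricks widgets, then load and persist.
dbutils.widgets.text("source_path", "")    # populated by the ADF pipeline run
dbutils.widgets.text("target_table", "")

source_path = dbutils.widgets.get("source_path")
target_table = dbutils.widgets.get("target_table")

df = spark.read.format("parquet").load(source_path)
df.write.mode("overwrite").saveAsTable(target_table)
```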

Posted 4 days ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

About the Role:
We are looking for a delivery leader to drive our team of data and domain enthusiasts in healthcare payment integrity. You will get the opportunity to work with various payers and providers, and learn how we reduce provider abrasion and improve provider engagement with our innovative and highly scalable solutions.

Location: Chennai, MRC Nagar; 5 days working from office; shift starts from 12/12:30 pm.

Summary:
The candidate should have 12+ years of overall experience in analytics delivery and/or consulting, with at least 8 years of experience leading analytics teams, creating solutions, and defining roadmaps. 5+ years of experience is preferred from top-tier analytics/consulting/analytics startup companies (MathCo, Tredence, Mu Sigma, Tiger Analytics, LatentView, etc.). Candidates with experience on healthcare data problems are preferred (e.g., those from United/Optum, ABCO, Truven/IBM, AstraZeneca, Zoom Rx, Buddi.ai, Sagitec Solutions (Health), or Cognizant's Healthcare Payer practice). An MBA from a top-tier B-school, working in operations delivery, is preferred.

Responsibilities:
Delivery Management: Overall supervision of various data/business initiatives and deliverables, with special emphasis on operational excellence. EXL's Payment Integrity business is outcomes-driven: revenue is achieved only when KPI targets are met; when we find savings for the client, our revenue is earned as commission.
Responsible for monitoring and delivering on the KPIs while working with operations, client partners, MIS, technology reporting teams, and the internal platform and data management team.
Proactive identification of risks to the business KPIs through steerage from analytics and data science.
Overseeing a multi-dimensional team: analytics production (rules and model runs, monthly and daily, for 5+ clients), shared R&D/data mining, ML and NLP development, and operational analytics end to end.
A deeper understanding and experience of the analytics levers available: SQL rule-based, ML-based, data mining and/or optimization initiatives, and NLP automation inventory.
Demonstrate/consult on data insights and enable change management with key stakeholders.
Assembling a core team to steer the delivery: hire, train, and mentor team members; hold regular cadences with team members; own the hiring and attrition management strategy.
Define and review the analytics roadmap with team members, stakeholders, and collaborators to resolve open items within agreed timelines.

Stakeholder Management:
Build senior and strategic cross-functional relationships through delivery excellence and interpersonal stakeholder management.
Close cooperation with global teams and onshore counterparts on cross-pollinating ideas and governing key stakeholder and client communications.
Centralize and prioritize requirements from Analytics for technology/platform and data management counterparts.

Skills and Roles:
MBA from a top-tier B-school is preferred.
Strong problem-solving and analytical skills.
Experienced team manager, from hiring to career pathing.
5+ years of experience in data analytics and consulting using SAS, SQL, Python, and the MS suite (mandatory); AWS/Azure, SaaS, product (functional and technical) design, and digital frameworks are preferred.
Strong understanding of business optimization frameworks and techniques.
Demonstrated experience in handling analytics delivery and ROI frameworks.
Ability to design data-driven solutions and frameworks (descriptive and predictive) from scratch and consult in a leadership capacity on potential solutions/storyboards and POCs.
Drives business metrics that add to the top line and/or profitability for EXL's revenue optimization business.
Develops descriptive (reporting) through to prescriptive analytics frameworks.
Identifies and translates business problems into data analytics/data science and communicates insights back to stakeholders.
Domain understanding of the US healthcare value chain of payers and providers is preferred.
Excellent written and verbal communication in English.

About EXL Health Payments Analytics:
At the EXL Health Payments Analytics Center of Excellence, we are looking for passionate individuals with a growth/startup mindset to experiment, fail fast, learn, and contribute to our five-fold growth story from $200M to $1B. EXL serves as a Special Investigation Unit for 6 of the top 10 US health insurance companies (roughly one third of US healthcare data is handled by us), helping with error/overpayment detection on hospital/doctor claims. Unlike typical analytics services/consulting companies, we make our revenue from the savings we identify for the client (i.e., on a commission/outcome basis). We develop and maintain algorithms and R&D accelerators that are intended to be used across multiple health insurance clients for the above business case.

So expect an ecosystem that has:
1. A 100+ member analytics team of data enthusiasts, decision scientists, and business/subject matter experts.
2. Massive data assets (millions of structured records and thousands of unstructured records processed monthly).
3. Tech investment (on-prem GPUs, Azure, AWS, Databricks, on-prem Hadoop/Hive, etc.).
4. A leadership push toward digitization, data-led decisions, and AI.

Our typical day:
Monitoring business performance and operations, and problem-solving by applying the different analytics levers or involving different teams (ML models, SQL rules, hospital profiling, pattern mining, etc.) to meet client savings targets. The analytics team acts as the R&D and operational excellence arm, constantly finding new patterns using state-of-the-art libraries and technologies, from SQL queries to LLM agents.

Posted 4 days ago

Apply

2.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

RCG Data Engineer – Databricks & Python ETL

Job Summary:
We are seeking a skilled Data Engineer with expertise in Databricks and Python scripting to enhance our ETL (Extract, Transform, Load) processes. The ideal candidate will have a proven track record of developing and optimizing data pipelines, implementing data solutions, and contributing to the overall data architecture.

Key Responsibilities:
Design, build, and maintain scalable and efficient data pipelines using Databricks and Python.
Develop ETL processes that ingest and transform data from various sources into a structured and usable format.
Collaborate with cross-functional teams to gather requirements and deliver data engineering solutions that support business objectives.
Write and optimize Python scripts for data extraction, transformation, and loading tasks.
Ensure data quality and integrity by implementing best practices and standards for data engineering (one testing approach is sketched below).
Monitor and troubleshoot ETL processes, performing root cause analysis and implementing fixes to improve performance and reliability.
Document data engineering processes, creating clear and concise technical documentation for data pipelines and architectures.
Stay current with industry trends and advancements in data engineering technologies and methodologies.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
Minimum of 2 years of experience in data engineering, with a focus on Databricks and Python scripting for ETL implementation.
Strong understanding of data warehousing concepts and experience with SQL and NoSQL databases.
Proficiency in Python and familiarity with data engineering libraries and frameworks.
Experience with cloud platforms (e.g., AWS, Azure) and big data technologies is a plus.
Excellent problem-solving skills and attention to detail.
Ability to work independently and as part of a collaborative team.

Working Conditions:
Innovative and dynamic work environment with a strong emphasis on delivering high-quality data solutions.
Opportunity to work with a diverse team of data professionals and contribute to impactful projects.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
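As a hedged sketch of the testing practice implied by "ensure data quality and integrity": keeping transformations as plain Python functions lets them be unit-tested without a cluster. The transform and test below are invented examples, not the employer's code.

```python
import pandas as pd

def normalize_emails(df: pd.DataFrame) -> pd.DataFrame:
    """Trim and lowercase emails, dropping rows where the email is missing."""
    out = df.copy()
    out["email"] = out["email"].str.strip().str.lower()
    return out.dropna(subset=["email"])

def test_normalize_emails():
    raw = pd.DataFrame({"email": ["  Alice@Example.COM ", None]})
    assert normalize_emails(raw)["email"].tolist() == ["alice@example.com"]

test_normalize_emails()  # or run under pytest in CI
```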

Posted 4 days ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
You will help our clients navigate the complex world of modern data science and analytics. We'll look to you to provide advice to our clients on how best to design, implement, stabilise, and optimise internal controls utilising cutting-edge and scalable AI & Big Data technologies.

Your Key Responsibilities
As a Senior AI/ML Engineer, you will:
Build and leverage cutting-edge Gen AI & Big Data platforms to deliver insights that build comprehensive Risk & Controls monitoring mechanisms.
Create executive reporting, proposals, and marketing material, ensuring the highest-quality deliverables.
Work within the team and leverage your knowledge and expertise to solve some of the most intriguing technical challenges in the design and development of a next-generation AI & Data-led risk technology platform.

Skills and Summary of Accountabilities:
Develop robust AI/ML models to drive valuable insights and enhance decision-making across multiple business lines.
Strong technical knowledge of Big Data platforms, AI technologies around Large Language Models and Vector Databases, comprehensive DevOps services, and full-stack application development.
Lead the design and development of bespoke machine learning algorithms to achieve business objectives.
Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
Good understanding of machine learning, NLP, LLM, GenAI, deep learning, and generative AI techniques.
Apply advanced machine learning techniques (such as SVM, Decision Trees, Neural Networks, Random Forests, and Gradient Boosting) to complex problems.
Ensure adherence to ethical AI guidelines and data governance policies.
Utilize expertise in prompt engineering to design and implement innovative solutions for workflow orchestration, leveraging tools such as Python, Airflow, and Terraform.
Collaborate closely with cross-functional teams, including data scientists, analysts, and other engineers, to identify opportunities for AI- and analytics-driven improvements in the bank's operations and services.
Stay up to date with the latest advancements in AI, analytics, and prompt engineering, ensuring the organisation remains at the forefront of innovation and maintains a competitive edge in the industry.
Maintain and improve existing AI/ML architectures and ensure they continue to deliver significant results.
Intellectual strength and flexibility to resolve complex problems and rationalise them into a workable solution which can then be delivered.
Develop current and relevant client propositions, delivering timely, high-quality output against stated project objectives.
Willingness to learn and apply new, cutting-edge AI technologies to deliver insight-driven business solutions.

To qualify for the role you must have
4+ years of working experience in large-scale AI/ML models and data science.
Deep understanding of statistics, AI/ML algorithms, and predictive modeling.
Proficiency in AI/ML programming languages like Python, R, and SQL.
Proficiency with a deep learning framework such as TensorFlow, PyTorch, or Keras.
Expertise in machine learning algorithms and data mining techniques (such as SVM, Decision Trees, and deep learning algorithms).
Experience implementing monitoring and logging tools to ensure AI model performance and reliability.
Strong programming skills in Python, including machine learning libraries such as scikit-learn, Pandas, and NumPy.
Experience using tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
Ability to automate tasks through Python scripting, databases, and other technologies such as Databricks, Synapse, ML, AI, ADF, etc.
Good understanding of Git, JIRA, change/release management, build/deploy, CI/CD, Azure DevOps, and SharePoint.
Strong understanding of large enterprise applications such as SAP, Oracle ERP, and Microsoft Dynamics.

Ideally, you'll also have
A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data science, or related disciplines; relevant certifications are considered a plus.
A self-driven, creative problem-solving mindset: someone who enjoys the fast-paced world of software development and performs well in a team.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 4 days ago

Apply

3.0 years

0 Lacs

Cochin

On-site

Source: Glassdoor

Minimum Required Experience: 3 years (Full Time)

Skills: SQL, SSAS, MDX, SSIS, ETL

Description
Software Developer with 3 to 5 years' overall experience.

Soft skills:
Strong analytical and problem-solving skills.
Excellent communication and interpersonal skills.
Ability to collaborate with business stakeholders to understand their needs and requirements.
Ability to work independently and meet deadlines.

Technical skills:
Big data and data modelling (understanding of big data concepts, data warehousing, and data modelling techniques).
Databricks (including working with Delta Lake and the Unity Catalog).
Knowledge of Python.
SQL (writing and optimizing complex SQL queries for data manipulation and analysis).

Posted 4 days ago

Apply

15.0 years

0 Lacs

Delhi

On-site

Source: Glassdoor

Essential Skills
Enthusiasm for building analytics and data science solutions and keeping up with the latest trends
Analytical mindset and experience in building data science solutions
Experience in solving business problems using data
Strategic mindset with a customer focus
Ability to articulate complex issues and the desired outcomes of analytical output
Proven analytical skills and evidence-based decision making
Excellent problem-solving, troubleshooting, and documentation skills
Excellent written and verbal communication skills
Excellent collaboration and interpersonal skills
Ability to innovate, adapt, and grasp new skills and content
Ability to work closely with key stakeholders and build collaborative partnerships
Ability to share complex data and data science approaches with business stakeholders
Ability to self-manage tasks in alignment with business outcomes, proactively gathering feedback and continuously improving
Proven ability to experiment with different data science packages to deliver business value
Ability to modify data science solutions based on the enterprise technology stack

Qualifications and Experience
It is expected that the role holder will most likely have the following qualifications and experience:
15+ years' experience leading machine learning engineers and data scientists
Leading teams to build and deploy machine learning models that achieve business outcomes
Proficiency in SQL, Python, PySpark, and Spark ML
Good understanding of cloud platforms such as Databricks (preferred), AWS, Azure, or GCP
Proficiency in source control using GitHub
Highly developed data analytics skills, working with structured, semi-structured, and unstructured datasets
Experience in using different data science packages to solve business problems
Good understanding of customer data, digital data, and the data science landscape
Ability to provide data-driven insights that support decision making or improve customer experience
Exploring new methodologies to solve business problems collaboratively
Strong stakeholder management and influencing skills to shape the data science backlog with stakeholders
Strong communication skills, able to convey complex messages simply and effectively
Ability to visualise data with different data visualisation tools to share insights with stakeholders
Leading the use of data science techniques across Marketing to provide data-driven insights and support strategic decision making across teams
Accountability for identifying, embedding, promoting, and ensuring continuous improvement in the use of new data and advanced analytics across the teams
Presenting analytical findings using data visualisation tools and creating insight presentations for various audiences
Developing new or improved analytic techniques to support customers' met and unmet financial needs

Qualification: A bachelor's or postgraduate degree in Statistics, Mathematics, Computer Science, Economics, Engineering, or another relevant field of study, with evidence of short courses to update skills as the data and data science landscape changes.

Key Accountabilities
Lead the use of advanced data science techniques across Marketing use cases to provide data-driven insights and support strategic decision making across teams
Lead the enhancement of the feature store and feature engineering techniques for Marketing use cases
Leverage online (Adobe Analytics) data in Marketing use cases
Deliver on the AI/ML roadmap for Marketing based on the architecture and platform capability
Accountable for identifying, embedding, promoting, and ensuring continuous improvement in the use of new data and data science techniques across the teams
Present analytical findings using data visualisation tools and create insight presentations for senior stakeholders
Develop new or improved analytic techniques to support customers' unmet financial needs

Job Types: Full-time, Permanent
Pay: ₹6,500,000.00 - ₹75,000,000.00 per year
Benefits: Paid sick time, Paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Yearly bonus
Application Question(s): Do you have experience in delivering an AI/ML roadmap for Marketing based on the architecture and platform capability?
Experience: Data science: 10 years (Required)
Work Location: In person
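To gauge the hands-on PySpark/Spark ML fluency a role like this expects, here is a minimal illustrative sketch of a Spark ML training pipeline. All dataset and column names are hypothetical assumptions, not taken from the posting; it sketches the pattern, not the employer's actual workflow.

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("marketing-propensity-sketch").getOrCreate()

# Hypothetical customer engagement data; in practice this would come from a feature store.
df = spark.createDataFrame(
    [(0.2, 3.0, 0), (1.5, 1.0, 1), (0.7, 2.0, 0), (2.1, 0.5, 1)],
    ["recency_score", "engagement_score", "responded"],
)

# Assemble feature columns into a single vector, then fit a simple classifier.
assembler = VectorAssembler(
    inputCols=["recency_score", "engagement_score"], outputCol="features"
)
lr = LogisticRegression(featuresCol="features", labelCol="responded")
model = Pipeline(stages=[assembler, lr]).fit(df)

model.transform(df).select("responded", "prediction").show()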

Posted 4 days ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Full Stack Developer with Automation Expertise | India

About The Job
Experience: 3+ years
Work Location: EXL

Company Overview
EXL is the indispensable partner for leading businesses in data-led industries such as insurance, banking and financial services, healthcare, retail, and logistics. We bring a unique combination of data, advanced analytics, digital technology, and industry expertise to help our clients turn data into insights, streamline operations, improve customer experience, and transform their business. Please visit www.exlservice.com for more information about EXL.

Why Join Us
Join EXL for transformative outcomes: At EXL, we prioritize our employees, placing them at the heart of our commitment to transformative outcomes. From upskilling programs to cutting-edge technologies like generative AI, we empower our team for success. Collaborate with industry leaders and advance your career with the support you need.
Work-life balance matters: EXL recognizes the significance of work-life balance. Benefit from our flexible work hours, enabling you to manage your time efficiently and seamlessly integrate work and life for a healthier, more fulfilling experience.
Collaborative and inclusive work culture: We are committed to providing a supportive environment where your ideas are heard and your skills are honed.

Job Description
We are looking for a Full Stack Developer with strong automation and cloud skills to build robust applications and integrate intelligent automation tools such as Microsoft Power Platform (Power Apps, Power Automate) and UiPath. The ideal candidate should have strong technical expertise across the full tech stack and experience deploying scalable solutions on cloud platforms.

Role Description
Must have:
Strong command of JavaScript fundamentals (ES6+), including closures, promises, async/await, and modular architecture
Proficiency in front-end frameworks: React.js / Angular / Vue.js
Back-end development using Node.js, Express.js, and REST APIs
Experience building and integrating microservices-based architectures
Exposure to AI/ML concepts and applying ML models in production environments
Basic understanding of working with LLMs (e.g., OpenAI, Azure OpenAI) in automation workflows
Experience with cloud databases such as Azure SQL, AWS RDS, and Databricks
Automation experience with tools such as UiPath, Power Automate, and Power Apps
Familiarity with DevOps tools and practices (Git, CI/CD, Docker)
Strong business acumen and experience building client-ready decks and presentations

Good To Have
Experience integrating Power Platform with other cloud services
Familiarity with Python and SQL for backend scripting and data integration
Working knowledge of serverless architecture (Azure Functions, AWS Lambda)
Prior experience building scalable, secure applications in agile teams

Qualification
Bachelor's degree in Computer Science, Information Systems, or a related field
3+ years in full stack development with automation exposure
Prior work on client-facing digital projects and internal tools automation
Experience working with cloud platforms (AWS, Azure)
Certifications in cloud (Azure/AWS) or automation (UiPath/Power Platform) are a plus

Posted 4 days ago

Apply

5.0 years

4 - 8 Lacs

Hyderābād

On-site

GlassDoor logo

Business Unit: Cubic Transportation Systems

Company Details:
When you join Cubic, you become part of a company that creates and delivers technology solutions in transportation to make people's lives easier by simplifying their daily journeys, and defense capabilities to help promote mission success and safety for those who serve their nation. Led by our talented teams around the world, Cubic is committed to solving global issues through innovation and service to our customers and partners. We have a top-tier portfolio of businesses, including Cubic Transportation Systems (CTS) and Cubic Defense (CD). Explore more on Cubic.com.

Job Details:

Job Summary:
We are seeking a hands-on and highly skilled Principal Data Analyst to join our dynamic DMAP team. The ideal candidate will have extensive experience in SQL, Power BI, and data modeling, and a strong understanding of analytics and ETL processes. This role requires a proactive individual with strong analytical thinking, an appetite for learning emerging technologies, and a commitment to delivery excellence.

Key Responsibilities:
Design, develop, and optimize Power BI reports and dashboards
Write and tune complex SQL queries and joins for data extraction and analysis
Develop advanced DAX measures and optimize performance
Implement Row-Level Security (RLS) and create scalable data models (star/snowflake schemas)
Perform data transformations and integrate data from various sources
Collaborate with engineering and business teams to translate requirements into reporting solutions
Support Power BI Service administration and deployment best practices
Contribute to cloud-based data solutions using Azure services
Support change requests and provide production-level analytics support
Maintain data governance standards, including GDPR and PII handling
Work closely with the backend engineering team on a daily basis to ensure seamless data integration and alignment of deliverables

Required Skills and Qualifications:
Minimum 5+ years of hands-on experience in:
SQL (advanced concepts, joins, regular querying)
Power BI (report development, DAX, RLS, performance tuning)
Data modeling (tabular, star, snowflake)
Analytics and ETL concepts
Proficiency in Power BI Fabric, data connectors, and transformation techniques
Strong understanding of Azure components: Active Directory, Azure SQL, Azure Data Factory
Experience with data governance practices (GDPR, PII)

Preferred Skills (Good to Have):
Familiarity with Power BI license types and Service admin tasks
Knowledge of Delta Tables, Databricks, Synapse, Data Lakes, and Warehouses
Exposure to Qlik Replicate and Oracle GoldenGate (OGG)
Working knowledge of Power Automate and Logic Apps
Basic understanding of Python
Experience with other BI tools such as Tableau and Google Data Studio
Microsoft Certified: Power BI Data Analyst Associate
Awareness of data engineering and advanced analytics concepts

Soft Skills & Expectations:
Strong logical and analytical problem-solving abilities
High levels of initiative and proactiveness
Good communication skills: must be able to express ideas clearly and confidently in front of stakeholders
Willingness to learn new technologies and adapt to changing requirements
A committed and dependable work ethic; should not exhibit complacency

Worker Type: Employee

Posted 4 days ago

Apply

5.0 - 7.0 years

20 - 22 Lacs

Hyderābād

On-site

GlassDoor logo

Job Description: Big Data Engineer
Experience: 5 to 7 years
Work location: Chennai, Bangalore, and Hyderabad
Shift timing: 2 to 11 PM
Interview process: L1, L2, and client rounds
Budget: 22 LPA
Mandatory skills: AWS, Python, PySpark (this is essentially an AWS data engineer role with PySpark experience)

Job Summary:
We are seeking a skilled and detail-oriented Big Data Engineer with strong expertise in Apache Spark to join our data team.

Key Responsibilities:
Design and develop scalable data processing solutions using Apache Spark (Core, SQL, Streaming, MLlib)
Build and optimize data pipelines and ETL processes for structured and unstructured data
Collaborate with data scientists, analysts, and software engineers to integrate Spark-based data products
Ensure data quality, integrity, and security across the data lifecycle
Monitor, troubleshoot, and improve the performance of Spark jobs in production
Integrate Spark with cloud platforms such as AWS, Azure, or GCP

Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
5 to 7 years of hands-on experience with Apache Spark in large-scale data environments
Strong proficiency in Scala, Python, or Java (with a preference for Scala)
Experience with data storage technologies such as HDFS, Hive, HBase, and S3
Familiarity with SQL, Kafka, Airflow, and NoSQL databases
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus
Solid understanding of distributed systems and parallel computing

Preferred Qualifications:
Certification in Big Data or Spark (e.g., Databricks Certified Developer)
Experience working in a DevOps/CI-CD environment
Knowledge of data warehousing concepts and tools like Snowflake or Redshift

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,200,000.00 per year
Benefits: Health insurance
Schedule: Day shift
Work Location: In person
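Illustrative of the pipeline work described above, here is a minimal PySpark batch-ETL sketch. The S3 paths, schema, and aggregation are hypothetical assumptions for demonstration, not details from the posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl-sketch").getOrCreate()

# Read raw JSON events from S3 (hypothetical bucket and path).
raw = spark.read.json("s3a://example-bucket/raw/events/")

# Cleanse and aggregate: drop malformed rows, count events per user per day.
daily = (
    raw.dropna(subset=["user_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("user_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Write partitioned Parquet back to S3 for downstream consumers.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_event_counts/"
)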

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderābād

Remote

GlassDoor logo

Company Description
It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today: ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.

Job Description

About the Team
The Finance Analytics & Insights (FA&I) team is transforming how Finance operates by embedding AI/ML into the core of our decision-making processes. We are building intelligent, scalable data products that power use cases across forecasting, anomaly detection, case summarization, and agentic automation. Our global team includes data product managers, analysts, and engineers who are passionate about delivering measurable business value.

Role Overview
We are seeking a highly motivated and analytically strong ML Engineer to join our India-based team. This role will support the development and scaling of AI/ML-powered data products that drive strategic insights across Finance. As an IC3-level individual contributor, you will work closely with the Data Product Manager and Insights Analyst to build AI/ML solutions that deliver measurable business value.

Key Responsibilities
Design, build, and deploy machine learning models that support use cases such as forecasting, anomaly detection, case summarization, and agentic AI assistants
Partner with the Insights Analyst on feature engineering, exploratory data analysis, and hypothesis testing
Build and iterate on proof-of-concepts (POCs) to validate model design and demonstrate business value
Collaborate with the Data Product Manager to align model development with product strategy and business outcomes
Own and manage the Databricks instance for the FA&I team, partnering with the DT Data & Analytics team to define a roadmap of capabilities, test and validate new features, and ensure the platform supports scalable ML development and deployment
Ensure models are production-ready, scalable, and maintainable, working closely with DT and D&A teams to integrate them into enterprise platforms
Monitor model performance, implement feedback loops, and retrain models as needed
Contribute to agile product development processes, including sprint planning, backlog grooming, and user story creation

Qualifications

Required Skills & Experience
3-5 years of experience in machine learning engineering, data science, or applied AI roles
Strong proficiency in Python and ML libraries (e.g., scikit-learn, XGBoost, TensorFlow, PyTorch)
Solid understanding of feature engineering, model evaluation, and MLOps practices
Experience working with large datasets using SQL and Snowflake
Familiarity with Databricks for model development and orchestration
Experience with CI/CD pipelines, version control (Git), and ML workflow tools
Ability to translate business problems into ML solutions and communicate technical concepts to non-technical stakeholders
Experience working in agile teams and collaborating with product managers, analysts, and engineers

Preferred Qualifications
Experience working in or supporting Finance or Accounting teams
Prior experience deploying models in production environments and integrating with enterprise systems
Familiarity with GenAI, prompt engineering, or LLM-based applications
Experience with MLflow, Azure ML, or similar platforms
Comfort with async collaboration tools and practices, including Teams, recorded video demos, and documentation-first communication
Experience working in a global, cross-functional environment with stakeholders across time zones

Key Behaviors & Mindsets
Builder's mentality: you love turning ideas into working models and iterating quickly to improve them
Collaborative engineer: you work closely with analysts and product managers to co-create solutions that solve real business problems
Customer-centric: you care deeply about the end user and build models that are interpretable, actionable, and aligned with business needs
Bias for action: you move fast, test often, and focus on delivering value, not just code
Global mindset: you thrive in a distributed team and proactively align to US morning hours (PST overlap) to keep momentum across geographies
Async-first communicator: you're comfortable working in a hybrid async environment, leveraging Teams, recorded demos, and documentation to keep work moving forward
Growth-oriented: you're always learning, whether it's a new algorithm, tool, or business domain, and you help others grow too

Additional Information

Work Personas
We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories assigned to ServiceNow employees depending on the nature of their work and their assigned work location. To determine eligibility for a work persona, ServiceNow may confirm the distance between your primary residence and the closest ServiceNow office using a third-party service.

Equal Opportunity Employer
ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements.

Accommodations
We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact globaltalentss@servicenow.com for assistance.

Export Control Regulations
For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities.

From Fortune. ©2025 Fortune Media IP Limited. All rights reserved. Used under license.
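As a small, hedged illustration of the anomaly-detection work this role describes, here is a minimal scikit-learn sketch. The data is synthetic and every parameter choice (including the assumed outlier fraction) is illustrative, not the team's actual pipeline.

from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily spend amounts with a few injected outliers.
normal = rng.normal(loc=100.0, scale=10.0, size=(200, 1))
outliers = np.array([[250.0], [5.0], [300.0]])
X = np.vstack([normal, outliers])

# Fit an Isolation Forest; `contamination` is the assumed outlier fraction.
model = IsolationForest(contamination=0.02, random_state=42).fit(X)
labels = model.predict(X)  # -1 = anomaly, 1 = normal

print("flagged anomalies:", X[labels == -1].ravel())

In practice a feedback loop like the one the posting mentions would periodically re-fit the model on fresh data and compare flagged records against analyst-confirmed anomalies.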

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderābād

On-site

GlassDoor logo

Job Title: Senior Analyst
Preferred Location: Hyderabad
Full Time / Part Time: Full Time

Build a career with confidence
Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.

Responsibilities
Creating reports in Power BI, Excel, and potentially other systems
Adapting existing reports
Supporting monthly, quarterly, and annual KPI closing (Material Productivity, Average Weighted Payment Terms, Supplier On-Time Delivery, etc.)
Ad-hoc reporting and analysis support for Supply Chain colleagues
Providing mass data analysis
Analyzing data and providing initial comments on performance
Initiating and participating in continuous improvement projects
Harmonizing and optimizing master data quality to improve KPI quality

Skills and Experience

Technical Expertise
Sound understanding of SAP systems (S/4HANA and R/3)
Experience with Databricks and Power BI
Knowledge of data technologies (e.g., SQL, data modeling, data analysis)

Analytics
Ability to analyze mass data/KPIs and translate the results of analysis into improvement measures
Strong understanding of purchasing and procurement processes

Communication and Collaboration
Excellent communication and interpersonal skills
Strong analytical skills and ability to solve problems
Strong team player
Ability to work independently with a very strong sense of quality and accuracy

Benefits
We are committed to offering competitive benefits programs for all of our employees and enhancing our programs when necessary.
Have peace of mind and body with our health insurance
Make yourself a priority with flexible schedules and leave policy
Drive your career forward through professional development opportunities
Achieve your personal goals with our Employee Assistance Program

Our commitment to you
Our greatest assets are the expertise, creativity, and passion of our employees. We strive to provide a great place to work that attracts, develops, and retains the best talent, promotes employee engagement, fosters teamwork, and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine of growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback, and always challenging ourselves to do better. This is The Carrier Way.

Join us and make a difference. Apply now!

Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age, or any other federally protected class.

Job Applicant's Privacy Notice: Click on this link to read the Job Applicant's Privacy Notice

Posted 4 days ago

Apply

0 years

7 - 9 Lacs

Gurgaon

On-site

GlassDoor logo

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal Consultant - Databricks Lead Developer!

In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
Maintain close awareness of new and emerging technologies and their potential application to service offerings and products
Work with architects and lead engineers on solutions that meet functional and non-functional requirements
Demonstrate knowledge of relevant industry trends and standards
Demonstrate strong analytical and technical problem-solving skills
Must have experience in the data engineering domain

Qualifications we seek in you!

Minimum qualifications
Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience
Overall <<>>> years of experience in IT
Must have excellent coding skills in either Python or Scala, preferably Python
Must have experience in the data engineering domain
Must have implemented at least 2 end-to-end projects in Databricks
Must have experience with Databricks components including:
Delta Lake
dbConnect
db API 2.0
Databricks workflows orchestration
Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments
Must have a good understanding of how to create complex data pipelines
Must have good knowledge of data structures and algorithms
Must be strong in SQL and Spark SQL
Must have strong performance optimization skills to improve efficiency and reduce cost
Must have worked on both batch and streaming data pipelines
Must have extensive knowledge of the Spark and Hive data processing frameworks
Must have worked on at least one cloud (Azure, AWS, GCP) and its most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases
Must be strong in writing unit and integration tests
Must have strong communication skills and have worked on teams of size 15 plus
Must have a great attitude towards learning new skills and upskilling existing skills

Preferred qualifications
Good to have Unity Catalog and basic governance knowledge
Good to have an understanding of Databricks SQL Endpoints
Good to have CI/CD experience building pipelines for Databricks jobs
Good to have worked on a migration project to build a unified data platform
Good to have knowledge of dbt
Good to have knowledge of Docker and Kubernetes
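As a hedged indication of the Delta Lake fluency listed above, here is a minimal PySpark sketch. It assumes a Delta-enabled Spark environment such as a Databricks cluster, and the table and column names are illustrative rather than taken from any real project.

from pyspark.sql import SparkSession

# Assumes a Spark session with Delta Lake available, as on a Databricks cluster.
spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

orders = spark.createDataFrame(
    [(1, "open"), (2, "shipped")],
    ["order_id", "status"],
)

# Write a managed Delta table, update it in place via SQL, and read it back:
# the append/update/time-travel pattern Delta Lake is built for.
orders.write.format("delta").mode("overwrite").saveAsTable("orders_sketch")
spark.sql("UPDATE orders_sketch SET status = 'closed' WHERE order_id = 1")
spark.table("orders_sketch").show()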
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Principal Consultant
Primary Location: India-Gurugram
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jun 25, 2025, 5:06:38 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time

Posted 4 days ago

Apply

0 years

3 - 9 Lacs

Gurgaon

On-site

GlassDoor logo

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for an AI/ML Specialist with expertise in clustering algorithms, dimension reduction algorithms, and anomaly detection techniques on large datasets. The ideal candidate will be well versed in unsupervised learning techniques and have solid proficiency in Python and ML modules such as PyTorch, TensorFlow, or scikit-learn. Additionally, the candidate should have extensive knowledge of Snowflake, Databricks, and Azure services such as Azure ML and AKS, along with knowledge of healthcare and FHIR standards.

Primary Responsibilities:
Develop and implement clustering algorithms to analyze large datasets and detect anomalies
Apply dimension reduction techniques to enhance data processing and model performance
Utilize unsupervised learning techniques to uncover patterns and insights from data
Select, write, train, and test AI/ML models to ensure optimal accuracy and performance
Collaborate with cross-functional teams to integrate AI/ML solutions into existing systems
Utilize Python and ML modules (PyTorch, TensorFlow, scikit-learn) for model development and deployment
Leverage Snowflake and Databricks for data warehousing and management
Utilize Azure services such as Azure ML and AKS for model deployment and management
Continuously monitor and refine models to improve their effectiveness
Document processes and results, and present findings to stakeholders
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, or a related field
Proven experience in developing and implementing clustering algorithms, dimension reduction techniques, and anomaly detection
Experience with Azure services (Azure ML, AKS)
Knowledge of Snowflake and Databricks
Proficiency in Python and ML modules (PyTorch, TensorFlow, scikit-learn)
Solid understanding of unsupervised learning techniques
Proven excellent problem-solving skills and attention to detail
Proven ability to work independently and as part of a team
Proven solid communication skills to convey complex technical concepts to non-technical stakeholders

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
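As a hedged illustration of the clustering-plus-dimension-reduction pattern this listing describes, here is a minimal scikit-learn sketch on synthetic data. The dataset, component count, and cluster count are all illustrative assumptions, not details of any actual Optum pipeline.

from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic high-dimensional data standing in for a large real dataset.
X, _ = make_blobs(n_samples=500, n_features=20, centers=4, random_state=0)

# Reduce dimensionality first, then cluster in the reduced space.
X_reduced = PCA(n_components=2, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_reduced)

print("cluster sizes:", {int(c): int((labels == c).sum()) for c in set(labels)})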

Posted 4 days ago

Apply

5.0 - 8.0 years

4 - 4 Lacs

Noida

On-site

GlassDoor logo

Assistant Vice President
EXL/AVP/1402351
Services, Noida
Posted On: 24 Jun 2025
End Date: 08 Aug 2025
Required Experience: 5 - 8 Years

Basic Section
Number Of Positions: 1
Band: D1
Band Name: Assistant Vice President
Cost Code: D009428
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 2000000.0000 - 4000000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office

Organisational
Group: Analytics
Sub Group: Healthcare
Organization: Services
LOB: Healthcare Analytics
SBU: Healthcare Management
Country: India
City: Noida
Center: Noida-SEZ BPO Solutions

Skills: AZURE DATABRICKS, AZURE DATA LAKE, AZURE DEVOPS, POWER BI, DATA FACTORY
Minimum Qualification: B.TECH/B.E
Certification: No data available

Job Description
Job Summary:
We are looking for an experienced Azure Databricks Architect with a strong background in designing and implementing end-to-end data solutions on Azure Cloud. The ideal candidate should have expertise in Databricks, Azure Data Factory, Azure DevOps, Power BI, Unity Catalog, and data governance, cataloging, and modelling. The Azure Databricks Architect will work closely with our data engineering, data science, and business stakeholders to design and implement scalable, secure, and efficient data architectures that meet our business requirements.

Key Responsibilities:
1. Design and implement scalable, secure, and efficient data architectures on Azure Cloud using Databricks, Azure Data Factory, and other related technologies.
2. Lead the development of end-to-end data solutions on Azure Cloud, including data ingestion, processing, storage, and visualization.
3. Collaborate with data engineering, data science, and business stakeholders to identify business requirements and design data architectures that meet those requirements.
4. Develop and maintain data governance, cataloging, and modelling frameworks to ensure data quality, security, and compliance.
5. Implement data security and access controls using Azure Active Directory, Unity Catalog, and other related technologies.
6. Develop and maintain data pipelines using Azure Data Factory, Databricks, and other related technologies.
7. Develop and maintain data visualizations using Power BI, Databricks, and other related technologies.
8. Collaborate with DevOps engineers to develop and maintain CI/CD pipelines using Azure DevOps, Databricks, and other related technologies.
9. Stay up to date with the latest Azure Cloud technologies and trends, and apply that knowledge to improve our data architectures and solutions.

Requirements:
1. 5-6 years of experience in designing and implementing data architectures on Azure Cloud using Databricks, Azure Data Factory, and other related technologies.
2. Strong experience in data governance, cataloging, and modelling, including data quality, security, and compliance.
3. Experience in developing and maintaining data pipelines using Azure Data Factory, Databricks, and other related technologies.
4. Experience in developing and maintaining data visualizations using Power BI or Tableau.
5. Experience in collaborating with DevOps engineers to develop and maintain CI/CD pipelines using Azure DevOps, Databricks, and other related technologies.
6. Strong understanding of Azure Cloud security, including Azure Active Directory, Unity Catalog, and other related technologies.
7. Strong understanding of data architecture principles, including scalability, security, and efficiency.
8. Bachelor's degree in Computer Science, Information Technology, or a related field.

Key Technical Expertise: Python, PySpark, Spark SQL, Workflows

Workflow Type: Digital Solution Center
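As a hedged sketch of the Unity Catalog governance work referenced in this listing, the snippet below uses Spark SQL's three-level catalog.schema.table namespace. It assumes a Databricks environment where `spark` is provided and Unity Catalog is enabled, with sufficient privileges; every catalog, schema, table, and group name is hypothetical.

# Assumes a Databricks notebook/job where `spark` is provided and Unity
# Catalog is enabled; all object and group names are hypothetical.
spark.sql("CREATE CATALOG IF NOT EXISTS analytics_dev")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics_dev.sales")
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics_dev.sales.orders (
        order_id BIGINT,
        status STRING
    )
""")
# Governance lives with the catalog rather than with individual clusters:
# grant read access on the table to an account-level group.
spark.sql("GRANT SELECT ON TABLE analytics_dev.sales.orders TO `data-analysts`")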

Posted 4 days ago

Apply