3.0 years
5 - 40 Lacs
Gurugram, Haryana, India
On-site
Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake, Azure Data Factory, PySpark, Databricks, Snowpipe
Good to Have: SnowPro Certification

Primary Roles and Responsibilities
- Develop Modern Data Warehouse solutions using Snowflake, Databricks, and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight, and Snowflake connectors (see the Snowpipe sketch below).
- Deep understanding of star and snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience building ETL/data warehouse transformation processes.
- Experience with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting, and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Skills: pipelines, PySpark, PL/SQL, Snowflake, reporting, Snowpipe, Databricks, Spark, Azure Data Factory, data warehouse, Unix shell scripting, SnowSQL, troubleshooting, RDBMS, SQL, data management principles, query optimization, NoSQL databases, CircleCI, Git, Terraform, Snowflake utilities, performance tuning, ETL, Azure
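As a concrete illustration of the Snowpipe setup this posting calls for, here is a minimal, hedged sketch using the snowflake-connector-python library: an external stage over Azure Blob storage plus an auto-ingest pipe into a landing table. The account, container, credentials, and table names are hypothetical placeholders, not details from the posting.

```python
# Hedged sketch: create an external stage and a Snowpipe that auto-ingests
# files landed by ADF into a raw table. All identifiers are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",          # hypothetical account
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="RAW",
    schema="LANDING",
)
cur = conn.cursor()

# External stage pointing at the container ADF writes files into.
cur.execute("""
    CREATE STAGE IF NOT EXISTS landing_stage
      URL = 'azure://myaccount.blob.core.windows.net/landing'   -- hypothetical
      CREDENTIALS = (AZURE_SAS_TOKEN = '***')
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")

# Snowpipe: continuously copies newly arriving files into the landing table.
cur.execute("""
    CREATE PIPE IF NOT EXISTS landing_pipe AUTO_INGEST = TRUE AS
      COPY INTO raw.landing.orders FROM @landing_stage
""")
cur.close()
conn.close()
```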
Posted 1 week ago
3.0 years
5 - 40 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake, Azure Data Factory, PySpark, Databricks, Snowpipe
Good to Have: SnowPro Certification

Primary Roles and Responsibilities
- Develop Modern Data Warehouse solutions using Snowflake, Databricks, and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight, and Snowflake connectors.
- Deep understanding of star and snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience building ETL/data warehouse transformation processes.
- Experience with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting, and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Skills: pipelines, PySpark, PL/SQL, Snowflake, reporting, Snowpipe, Databricks, Spark, Azure Data Factory, data warehouse, Unix shell scripting, SnowSQL, troubleshooting, RDBMS, SQL, data management principles, query optimization, NoSQL databases, CircleCI, Git, Terraform, Snowflake utilities, performance tuning, ETL, Azure
Posted 1 week ago
12.0 years
10 - 45 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Pune, Bangalore, Gurgaon, Chennai, Kolkata
Experience: 8-12 Years
Work Mode: Hybrid
Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, Architecture Design

Overview
We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure data engineering. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you!

Primary Roles and Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in the scheduler via Airflow (see the DAG sketch below).

Skills and Qualifications
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Deep understanding of star and snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience with the AWS/Azure stack is required.
- ETL with batch and streaming (e.g., Kinesis) is desirable.
- Experience building ETL/data warehouse transformation processes.
- Experience with Apache Kafka for streaming/event-based data.
- Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala).
- Experience with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Skills: pipelines, PySpark, technical architecture, Azure Databricks, data lake, architecture design, data warehouse, Azure Synapse, ETL, SQL, Azure Data Factory, data pipeline, data engineering, Azure, Airflow, Python
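The "orchestrate the data pipelines via Airflow" responsibility above is easiest to picture with a small example. The following is a hedged sketch for Airflow 2.x using the Databricks provider: one notebook task followed by a lightweight validation step. The connection id, cluster spec, and notebook path are illustrative assumptions.

```python
# Minimal Airflow DAG sketch: run a Databricks notebook, then validate the load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

def validate_load(**_):
    # Placeholder for a row-count or freshness check against the warehouse.
    print("validating load ...")

with DAG(
    dag_id="daily_dw_load",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # nightly at 02:00
    catchup=False,
) as dag:
    transform = DatabricksSubmitRunOperator(
        task_id="run_transformations",
        databricks_conn_id="databricks_default",   # assumed connection
        json={
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 2,
            },
            "notebook_task": {"notebook_path": "/etl/daily_transform"},  # hypothetical
        },
    )
    validate = PythonOperator(task_id="validate_load", python_callable=validate_load)

    transform >> validate
```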
Posted 1 week ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description
NielsenIQ is a consumer intelligence company that delivers the Full View™, the world's most complete and clear understanding of consumer buying behavior that reveals new pathways to growth. Since 1923, NIQ has moved measurement forward for industries and economies across the globe. We are putting the brightest and most dedicated minds together to accelerate progress. Our diversity brings out the best in each other so we can leave a lasting legacy on the work that we do and the people that we do it with. NielsenIQ offers a range of products and services that leverage machine learning and artificial intelligence to provide insights into consumer behavior and market trends. This position opens the opportunity to apply the latest state of the art in AI/ML and data science to global and key strategic projects.

Job Description
At NielsenIQ Technology, we are evolving the Discover platform, a unified, global, open data cloud ecosystem. Organizations around the world rely on our data and insights to innovate and grow. As a Platform Architect, you will play a crucial role in defining the architecture of our platforms to realize the company's business strategy and objectives. You will collaborate closely with colleagues in Architecture and Engineering to take architecture designs from concept to delivery. As you gain knowledge of our platform, the scope of your role will expand to include an end-to-end focus.

Key Responsibilities:
- Architect new products using NIQ's core platforms.
- Assess the viability of new technologies as alternatives to existing platform selections.
- Drive innovations through proof of concepts and support technology migration from planning to production.
- Produce high-level approaches for platform components to guide component architects.
- Create, maintain, and promote reference architectures for key areas of the platform.
- Collaborate with Product Managers, Engineering Managers, Tech Leads, and Site Reliability Engineers (SREs) to govern the architecture design.
- Create High-Level Architectures (HLAs) and Architecture Decision Records (ADRs) for new requirements.
- Maintain architecture designs and diagrams.
- Provide architecture reviews for new intakes.

Qualifications
- 15+ years of experience, including a strong engineering background with 5+ years in architecture/design roles.
- Hands-on experience building scalable enterprise platforms.
- Proficiency with SQL. Experience with relational databases such as PostgreSQL, document-oriented databases such as MongoDB, and search engines such as Elasticsearch.
- Proficiency in Java, Python, and/or JavaScript. Familiarity with common frameworks like Spring, OpenAPI, PySpark, React, Angular, etc. Background in TypeScript and Node.js is a plus.
- Bachelor's degree in computer science or a related field (required; master's preferred).
- Strong knowledge of the Azure and GCP public cloud platforms is desirable.
- Good knowledge of Azure cloud technologies, including Azure Databricks, Azure Data Factory, and Azure cloud storage (ADLS/Azure Blob). Experience with Snowflake is a definite plus.
- Good knowledge of Google Cloud Platform (GCP) services, including BigQuery, Workflows, Kubernetes Engine, and Cloud Storage.
- Good understanding of containers/Kubernetes and CI/CD.
- Knowledge of BI tools and analytics features is a plus.
- Advanced knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
- Experience in agile software development practices.
Additional Information

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our Commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Andhra Pradesh
On-site
Software Engineering Senior Analyst - HIH - Evernorth

About Evernorth:
Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Position Overview:
Software Engineer supporting Cigna's Provider Technology organization.

Responsibilities:
- Design and implement software for the provider experience group on various initiatives.
- Support end users by resolving their issues, responding to queries, and helping them analyze/interpret the results from the models.
- Develop, code, and unit test with a variety of cloud services; write infrastructure code using Terraform; and build ETL using Python/PySpark and a test automation pipeline (see the sketch below).
- Participate in peer code reviews.
- Develop reusable infrastructure code for commonly occurring work across multiple processes and services.
- Participate in planning and technical design discussions with other developers, managers, and architects to meet application requirements and performance goals.
- Manage the Jenkins pipeline to move the application to higher environments such as System Testing, User Acceptance Testing, Release Testing, and User Training environments.
- Contribute to production support to resolve application production issues.
- Follow the guidelines of the Cloud COE and other teams for production deployment and maintenance activities for all applications running in AWS.
- Run application demos for business users and Product Owners regularly in Sprint and PI demos.
- Work with business users and Product Owners to understand business requirements.
- Participate in Program Increment (PI) planning and user story grooming with Scrum Masters, developers, QA analysts, and Product Owners.
- Participate in daily stand-up meetings to provide work status updates to the Scrum Master and Product Owner, following Agile methodology.
- Write Structured Query Language (SQL) stored procedures and SQL queries for create, read, update, and delete (CRUD) database operations.
- Write and maintain technical and design documents.
- Understand best practices for using Guarantee Management's tools and applications.

Required Skills:
- Excellent debugging, analytical, and problem-solving skills.
- Excellent communication skills.

Required Experience & Education:
- Bachelor's in computer science or a related field, or equivalent relevant work experience and technical knowledge.
- 3-5 years of total related experience.
- Experience as a full-stack Python/PySpark developer with hands-on experience in AWS cloud services.
- Hands-on experience in AWS cloud development.
- Experience with CI/CD tools such as AWS CloudFormation, Jenkins, Conduits, and GitHub.
- Experience with microservice architecture.
- Exposure to SOLID principles, architectural patterns, and development best practices.
- Experience in unit-test automation, test-driven development, and the use of mocking frameworks.
- Experience working in Agile/Scrum teams.
- Hands-on experience with infrastructure as code using Terraform.

Desired Experience:
- Experience building in event-driven architecture is a plus.
- Security engineering or knowledge of AWS IAM principles is a plus.
- Kafka knowledge is a plus.
- NoSQL solutions are a plus.
- Databricks experience is good to have.
- Application development experience is also needed.
- Experience in software development with Java and an open-source tech stack.
- Proficiency in client-side languages and frameworks such as React or Node.js.

Equal Opportunity Statement:
Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform, and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
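The "build ETL using Python/PySpark and unit test it" responsibility above can be made concrete with a small sketch: a transformation kept as a pure function so pytest can exercise it against a local SparkSession. Column names and the deduplication rule are illustrative assumptions, not the team's actual logic.

```python
# Hedged sketch: a testable PySpark transformation plus its unit test.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def dedupe_latest_claims(claims: DataFrame) -> DataFrame:
    """Keep only the most recent record per claim_id (illustrative rule)."""
    w = Window.partitionBy("claim_id").orderBy(F.col("updated_at").desc())
    return (
        claims.withColumn("rn", F.row_number().over(w))
        .filter(F.col("rn") == 1)
        .drop("rn")
    )

def test_dedupe_latest_claims():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame(
        [("c1", "2024-01-01"), ("c1", "2024-02-01"), ("c2", "2024-01-15")],
        ["claim_id", "updated_at"],
    )
    out = dedupe_latest_claims(df).collect()
    assert {(r.claim_id, r.updated_at) for r in out} == {
        ("c1", "2024-02-01"),
        ("c2", "2024-01-15"),
    }
```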
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Andhra Pradesh
On-site
HIH - Software Engineering Associate Advisor

Position Summary:
Evernorth, a leading health services company, is looking for exceptional data engineers/developers for our Data and Analytics organization. In this role, you will actively participate with your development team on initiatives that support Evernorth's strategic goals, working with subject matter experts to understand the business logic you will be engineering. As a software engineer, you will help develop an integrated architectural strategy to support next-generation reporting and analytical capabilities on an enterprise-wide scale. You will work in an agile environment, delivering user-oriented products which will be available both internally and externally to our customers, clients, and providers.

Candidates will be provided the opportunity to work on a range of technologies and data manipulation concepts. Specifically, this may include developing healthcare data structures and data transformation logic to allow for analytics and reporting for customer journeys, personalization opportunities, proactive actions, text mining, action prediction, fraud detection, text/sentiment classification, collaborative filtering/recommendation, and/or signal detection. This position will involve taking these skills and applying them to some of the most exciting and massive health data opportunities that exist here at Evernorth.

The ideal candidate will work in a team environment that demands technical excellence, whose members are expected to hold each other accountable for the overall success of the end product. The focus for this team is on the delivery of innovative solutions to complex problems, but also with a mind to drive simplicity in refining and supporting the solution by others.

Job Description & Responsibilities:
- Be accountable for delivery of business functionality.
- Work on the AWS cloud to migrate/re-engineer data and applications from on-premise to cloud.
- Engineer solutions conformant to enterprise standards, architecture, and technologies.
- Provide technical expertise through a hands-on approach, developing solutions that automate testing between systems.
- Perform peer code reviews, merge requests, and production releases.
- Implement design/functionality using Agile principles.
- Bring a proven track record of quality software development and an ability to innovate outside of traditional architecture/software patterns when needed.
- Collaborate in a high-performing team environment, with an ability to influence and be influenced by others.
- Have a quality mindset: not just code quality, but ensuring ongoing data quality by monitoring data to identify problems before they have business impact.
- Be entrepreneurial and business-minded; ask smart questions, take risks, and champion new ideas.
- Take ownership and accountability.

Experience Required:
- 8-11 years of experience in application program development.

Experience Desired:
- Knowledge and/or experience with healthcare information domains.
- Documented experience in a business intelligence or analytic development role on a variety of large-scale projects.
- Documented experience working with databases larger than 5 TB and excellent data analysis skills.
- Experience with TDD/BDD.
- Experience working with Spark and real-time analytic frameworks.

Education and Training Required:
- Bachelor's degree in Engineering or Computer Science.

Primary Skills:
- Python, Databricks, Teradata, SQL, Unix, ETL, data structures, Looker, Tableau, Git, Jenkins, RESTful & GraphQL APIs.
- AWS services such as Glue, EMR, Lambda, Step Functions, CloudTrail, CloudWatch, SNS, SQS, S3, VPC, EC2, RDS, and IAM (see the Glue job skeleton below).

Additional Skills:
- Ability to rapidly prototype and storyboard/wireframe development as part of application design.
- Write referenceable and modular code.
- Willingness to continuously learn and share learnings with others.
- Ability to communicate design processes, ideas, and solutions clearly and effectively to teams and clients.
- Ability to manipulate and transform large datasets efficiently.
- Excellent troubleshooting skills to root-cause complex issues.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
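For the Glue/S3 portion of the stack listed above, a minimal job skeleton looks roughly like the following. It is a hedged sketch: the awsglue modules exist only inside the Glue runtime, and the bucket paths, columns, and partitioning are made-up placeholders.

```python
# Illustrative AWS Glue PySpark job: read raw JSON from S3, clean it,
# write partitioned Parquet to a curated bucket. Names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

raw = spark.read.json("s3://example-raw/events/")            # hypothetical bucket
cleaned = raw.dropDuplicates(["event_id"]).filter("event_ts IS NOT NULL")
(
    cleaned.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated/events/")                 # hypothetical bucket
)
```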
Posted 1 week ago
11.0 - 13.0 years
0 Lacs
Andhra Pradesh
On-site
Application Development Advisor – Data Engineering

Position Overview
We're seeking a versatile Data Engineer to contribute to all aspects of our customer data platform. You'll work on various data engineering tasks, from designing data pipelines to implementing data integration solutions, playing a crucial role in enhancing our data infrastructure.

Roles & Responsibilities
- Develop and maintain full-stack applications using our cutting-edge tech stack.
- Develop and maintain data pipelines using Snowflake, Databricks, and AWS services.
- Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.
- Optimize data storage and retrieval for performance and scalability.
- Implement data quality checks and monitoring to ensure data integrity (see the sketch below).
- Participate in code reviews and contribute to best practices.
- Experiment with new technologies and tools to improve our data platform.

Qualifications

Required Experience & Skills:
- 11 to 13 years of development experience.
- Strong proficiency in Python.
- Extensive experience with Snowflake and Databricks.
- Proficiency with AWS services (Lambda, EC2, Glue, Kinesis, SQS, SNS, SES, Pinpoint).
- Strong analytical skills and the ability to work independently.
- Familiarity with SQL and database management.
- Proficiency with version control systems (e.g., Git).

Preferred Experience & Skills:
- Advanced proficiency with Snowflake and Databricks.
- Knowledge of cloud architecture and best practices.
- Ability to experiment with and adopt new technologies quickly.
- Experience with end-user messaging platforms.
- Familiarity with containerization and orchestration (e.g., Docker, Kubernetes).
- Understanding of DevOps practices and CI/CD pipelines.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
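The "data quality checks and monitoring" responsibility above is the kind of gate that runs before a table is published. Here is a minimal, hedged sketch in PySpark; the key column, null threshold, and failure behavior are assumptions for illustration.

```python
# Hedged sketch: assert volume, key uniqueness, and null rate before publishing.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def run_quality_checks(df: DataFrame, key: str, max_null_pct: float = 0.01) -> None:
    total = df.count()
    if total == 0:
        raise ValueError("quality check failed: table is empty")

    # Duplicate business keys usually signal an upstream join or dedupe bug.
    dupes = df.groupBy(key).count().filter("count > 1").count()
    if dupes:
        raise ValueError(f"quality check failed: {dupes} duplicate {key} values")

    # Tolerate a small fraction of null keys; fail loudly beyond the threshold.
    null_pct = df.filter(F.col(key).isNull()).count() / total
    if null_pct > max_null_pct:
        raise ValueError(f"quality check failed: {null_pct:.2%} null {key}")
```

In practice a check like this would be wired into the pipeline as a final task, so a failure blocks the publish step rather than silently shipping bad data.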
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Principal Data Engineer (Remote)

Location: Remote (Global)
Employment Type: Full-time

About The Role
As a Principal Data Engineer, you will be instrumental in driving the evolution and scalability of our data platform, leveraging Databricks on AWS. You'll engineer reliable, production-grade data pipelines and collaborate across teams to build data systems that are secure, efficient, and maintainable. We embrace CI/CD, DataOps principles, and treat data pipelines like software. Our platform applies modern data modeling techniques such as the Medallion Architecture and Data Vault, promoting governed, modular, and scalable data structures (see the bronze-to-silver sketch below). You'll help us evolve our platform toward a Data-as-a-Product approach, ensuring that data is discoverable, trustworthy, and owned. You'll work in a fully remote team that prioritises clear documentation, async-first communication, and a culture of mentorship, innovation, and autonomy.

Key Responsibilities
- Architect and evolve the overall data engineering strategy and platform capabilities.
- Drive adoption of best practices in modular pipeline design, data quality, and observability.
- Partner with leadership to define and implement data platform roadmaps.
- Provide technical leadership across engineering squads and champion innovation.
- Contribute to foundational frameworks and tooling for DataOps and CI/CD.
- Mentor senior and junior engineers and foster a culture of continuous improvement.

Required Skills
- Expert-level knowledge of Databricks, Spark, and distributed data systems.
- Strong Python and SQL expertise with a deep understanding of data engineering patterns.
- Experience designing high-throughput, low-latency pipelines and resilient data architectures.
- Proven ability to lead technical initiatives and influence cross-functional teams.
- Familiarity with AWS infrastructure and modern CI/CD tooling.
- Strong technical writing and ability to communicate complex ideas to varied audiences.

Nice to Have
- Thought leadership in data mesh, contracts, or federated governance.
- Experience with event-driven architectures (Kafka, Flink, or Kinesis).
- Understanding of lineage, cataloguing, and metadata management tools.
- Experience with cost optimisation and scaling data workloads efficiently.
- Public contributions to open source or knowledge sharing (blogs, talks, etc.).

What We Offer
- Fully remote and flexible working environment
- Mentorship and opportunities for technical leadership
- Defined career path in data engineering, architecture, or ML infrastructure
- Learning budget for courses, certifications, or conferences
- Culture that values transparency, autonomy, and technical excellence
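As a concrete picture of the Medallion Architecture this posting references, here is a hedged sketch of one hop: raw bronze events cleaned and conformed into a silver Delta table. Paths, schema, and the cleaning rules are illustrative; a Databricks/Delta runtime is assumed.

```python
# Hedged sketch of a bronze-to-silver Medallion hop on Delta Lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("s3://lake/bronze/events")  # hypothetical
silver = (
    bronze.filter(F.col("event_id").isNotNull())        # drop unusable rows
    .withColumn("event_ts", F.to_timestamp("event_ts"))  # conform types
    .dropDuplicates(["event_id"])                         # enforce key uniqueness
    .select("event_id", "customer_id", "event_type", "event_ts")
)
(
    silver.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .save("s3://lake/silver/events")                      # hypothetical
)
```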
Posted 1 week ago
12.0 years
10 - 45 Lacs
Greater Kolkata Area
On-site
Location: Pune, Bangalore, Gurgaon, Chennai, Kolkata
Experience: 8-12 Years
Work Mode: Hybrid
Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, Architecture Design

Overview
We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure data engineering. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you!

Primary Roles and Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Deep understanding of star and snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience with the AWS/Azure stack is required.
- ETL with batch and streaming (e.g., Kinesis) is desirable.
- Experience building ETL/data warehouse transformation processes.
- Experience with Apache Kafka for streaming/event-based data.
- Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala).
- Experience with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Skills: pipelines, PySpark, technical architecture, Azure Databricks, data lake, architecture design, data warehouse, Azure Synapse, ETL, SQL, Azure Data Factory, data pipeline, data engineering, Azure, Airflow, Python
Posted 1 week ago
3.0 years
5 - 40 Lacs
Greater Kolkata Area
On-site
Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake, Azure Data Factory, PySpark, Databricks, Snowpipe
Good to Have: SnowPro Certification

Primary Roles and Responsibilities
- Develop Modern Data Warehouse solutions using Snowflake, Databricks, and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience, with 3+ years in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight, and Snowflake connectors.
- Deep understanding of star and snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience building ETL/data warehouse transformation processes.
- Experience with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting, and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Skills: pipelines, PySpark, PL/SQL, Snowflake, reporting, Snowpipe, Databricks, Spark, Azure Data Factory, data warehouse, Unix shell scripting, SnowSQL, troubleshooting, RDBMS, SQL, data management principles, query optimization, NoSQL databases, CircleCI, Git, Terraform, Snowflake utilities, performance tuning, ETL, Azure
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview
Cvent is a leading meetings, events, and hospitality technology provider with more than 4,800 employees and ~22,000 customers worldwide, including 53% of the Fortune 500. Founded in 1999, Cvent delivers a comprehensive event marketing and management platform for marketers and event professionals and offers software solutions to hotels, special event venues, and destinations to help them grow their group/MICE and corporate travel business. Our technology brings millions of people together at events around the world. In short, we're transforming the meetings and events industry through innovative technology that powers the human connection.

The DNA of Cvent is our people, and our culture has an emphasis on fostering intrapreneurship - a system that encourages Cventers to think and act like individual entrepreneurs and empowers them to take action, embrace risk, and make decisions as if they had founded the company themselves. At Cvent, we value the diverse perspectives that each individual brings. Whether working with a team of colleagues or with clients, we ensure that we foster a culture that celebrates differences and builds on shared connections.

In This Role, You Will
- Work with a talented group of data scientists, software developers, and product owners to identify possible applications for AI and machine learning.
- Understand Cvent's product lines, their business models, and the data collected.
- Perform machine learning research on various types of data (numeric, text, audio, video, image).
- Deliver machine learning models in a state that can be picked up by a software development team and operationalized via Cvent's products.
- Thoroughly document your work as well as results, to the extent that another data scientist can replicate them.
- Progressively enhance your skills in machine learning, writing production-quality Python code, and communication.

Here's What You Need
- A Bachelor's degree in a quantitative field (natural sciences, math, statistics, computer science).
- At least 3 years of experience working as a data scientist in industry.
- In-depth familiarity with the Linux operating system and command-line work.
- Conceptual and technical understanding of machine learning, including model training and evaluation (see the sketch below).
- Experience with formal Python coding.
- Proficiency in machine learning packages in Python.
- Familiarity with generative-AI-based system development.
- Experience with relational databases and query writing in SQL.
- Knowledge of linear algebra and statistics.
- Skills in data exploration and interpretation.

It Will Be Excellent If You Also Have
- A Master's or PhD degree in a quantitative field.
- Experience with the Databricks and/or Snowflake platforms.
- Ability to write production-quality Python code with test coverage.
- Experience working on a cloud platform (AWS/Azure/Google Cloud), especially machine learning R&D on a cloud platform.
- Knowledge of the software development lifecycle, including Git processes, code reviews, test-driven development, and CI/CD.
- Experience with A/B testing.
- Skills in data visualization and dashboarding.
- Knowledge of how to interact with a REST API.
- Proven ability for proactive, independent work with some supervision.
- Strong verbal and written communication.
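To ground the "model training and evaluation" requirement above, here is a small, hedged scikit-learn baseline on a synthetic dataset; the model family, metric, and split are illustrative choices, not Cvent's actual workflow.

```python
# Hedged sketch: train a classifier on synthetic data and report holdout AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature matrix and labels.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```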
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Your Role
Atlas Copco is a Swedish multinational industrial company, founded in 1873, that manufactures industrial tools and equipment. The Atlas Copco Group is a global industrial group of companies headquartered in Nacka, Sweden.

In alignment with our strategic objectives at Atlas Copco Group to enhance our focus on digital technology initiatives, and following the recent announcement of the Global IT Hub, we have now established a Global IT Hub in Hinjawadi, Pune, India, effective July 1, 2025. This initiative is designed to empower the Digital Technology Practice to reach its maximum potential. At the Global IT Hub India (IPW), we boast formidable data analytics expertise committed to delivering state-of-the-art analytical services to various business areas, divisions, product companies, customer centers, and distribution centers across the globe.

Job Description
As a Senior BI Engineer for Data Analytics, you will execute the data visualization projects assigned to you and coordinate their execution and delivery, working within the Data Analytics Competence team in the Global IT Hub. You will work closely with the team in Atlas Copco CTS.

The Role
We are looking for a passionate and committed Senior Power BI developer to join the data analytics team. As a Senior Power BI developer, you will be part of the Data Analytics Competence team, which continuously delivers valuable services to the Compressor Technique business area of Atlas Copco. You will work together with product owners and data engineers. You are responsible for the Power BI front-end design, development, and delivery of highly visible data-driven applications in the Atlas Copco Group. You always take a quality-first approach, ensuring the data is visualized in a clear, accurate, and user-friendly manner. You ensure standards and best practices are followed and ensure documentation is created and maintained. Where needed, you take initiative and make recommendations to drive improvements. In this role you will also be involved in the tracking, monitoring, and performance analysis of production issues and the implementation of bug fixes and enhancements. You have strong technical knowledge, including specific knowledge of data analysis tools, methodologies, reporting, etc.

To succeed, you will need:

Key Responsibilities and Accountabilities
- Understand business requirements in a BI context and design data models to transform raw data into meaningful insights.
- Perform data modelling and write DAX for reporting KPIs.
- Create complex dashboards and interactive visual reports using Power BI.
- Analyse data and present it through reports that aid decision-making.
- Create relationships between data and develop tabular and other multidimensional data models.
- Create charts and document data, explaining algorithms, parameters, models, and relations.
- Promote the use of standard Microsoft visualizations.
- Mentor and guide junior team members.

Skills and Experience
- You are a Power BI front-end developer who can develop performant data sets and appealing, insightful visualizations.
- You have a good understanding of data warehousing and relational database concepts and technologies.
- You are able to assemble large, complex data sets from multiple data sources that meet business requirements.

Competences
- You are a team player and able to motivate people.
- You are customer-focused, flexible, and enthusiastic.
- You have a strong drive, and you take initiative.
- You are flexible, eager to learn new things, and able to adapt in a fast-changing world.
- You are result-oriented and quality-focused, both in terms of deliveries and processes.

Stakeholder Management
- You can discuss requirements with stakeholders and clearly communicate back your understanding of those requirements.
- You can work with stakeholders to understand the big picture and work towards it.
- Educate users regarding the functionality provided and train them in using the visualization tool efficiently.
- Work with team members and other users across the organization at different levels on performance improvements and suggestions.

Must Have
- Power BI, SQL, data modeling, DAX, reporting KPIs
- Knowledge of Databricks data engineering
- Python, SQL; acquainted with Databricks
- Excellent knowledge (oral and written) of English

Nice to Have
- Knowledge/experience of Microsoft Azure applications and Azure data pipelines
- Experience with a source control system such as Git or SVN
- Working knowledge of Agile Scrum ceremonies
- Ability to take end-to-end ownership: analyzing requirements and building the basic backend/front-end structure
- Use data products/sources to build specific consumption data products based on the use case

Education
Any graduate (B.E/B.Tech/MCA/M.Tech/BSc/MSc/MCS) in engineering with 3-6 years of experience in the BI engineering field, projects, and tools.

Personality Requirements
- You like open communication and enjoy working with different cultures.
- You are decisive (not afraid of making errors and willing to learn by doing).
- Open-minded: expect the unexpected and stay sceptical when viewing data and information.
- Not afraid to approach other stakeholders (customer centre specialists, local engineering, or technical support specialists).
- You work with a structured approach when collecting information and data, but also when closing the feedback loop to further increase your learning.

In Return, We Offer
- Flexible working hours
- Flexible office and home working policies
- A modern infrastructure providing you with the latest tools and technologies at your disposal
- A challenging environment which contributes to your constant learning and professional growth
- The chance to be part of a data competence team which is constantly growing
- Depending on the country you are enrolled in, group healthcare insurance plans covering your medical needs
- A chance to become part of a global, innovative company, supporting sustainable productivity
- The opportunity to bring revolutionary ideas, foster innovation, and execute qualified ideas
- A friendly culture with immense professional and personal development, education, and opportunities for career growth
- Free access to LinkedIn Learning and many other internal and external trainings; Atlas Copco offers trainings on a regular basis to acquire new skills
- The opportunity to make the world a better place through our sustainability goals, contributing to and being part of our Water for All projects
- The friendly and open culture of a Swedish company
- Very high visibility in the organization with a "no door" culture: you can always talk to anyone in the organization
- Free canteen and transport facility

Job Location
Hybrid. This role offers a hybrid working arrangement, allowing you to split your time between working remotely and being on-site in Pune.

Contact Information
Talent Acquisition Team: Shreya Pore

Uniting curious minds
Behind every innovative solution, there are people working together to transform the future.
With careers sparked by initiative and lifelong learning, we unite curious minds, and you could be one of them.
Posted 1 week ago
8.0 years
0 Lacs
New Delhi, Delhi, India
On-site
In this strategic role, you'll:
🔹 Act as the key contact for data engineering tracks supporting AI/ML, Snowflake, and Databricks platforms
🔹 Architect solutions for large-scale data science and analytics projects
🔹 Lead high-impact data engineering initiatives across global pharma use cases
🔹 Bridge business and IT, influencing leadership and guiding teams with best-in-class engineering practices
🔹 Evangelize modern data stewardship, architecture, and ownership in senior forums
🔹 Supervise POCs on emerging technologies and drive real-world impact across domains

💡 If you're passionate about scalable data pipelines, cloud-native platforms, and being a thought leader in the pharma analytics space, this is your opportunity to make a mark.

📍 Location: Bangalore
🌐 Industry: Pharma | Analytics | AI/ML | Enterprise Data Platforms
Experience Level: 8+ Years

Let's build what's next—together.
👉 Know someone perfect for this? Drop an email at prcha@7n.com.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Staff Technical Product Manager

Are you excited about the opportunity to lead a team within an industry leader in energy technology? Are you passionate about improving capabilities, efficiency, and performance? Join our Digital Technology team! As a Staff Technical Product Manager, you will operate in lock-step with product management to create a clear strategic direction for build needs and convey that vision to the service's scrum team. You will direct the team with a clear and descriptive set of requirements captured as stories, and partner with the team to determine what can be delivered through balancing the need for new features, defects, and technical debt.

Partner with the best
We are seeking a strong background in business analysis, team leadership, and data architecture, along with hands-on development skills. The ideal candidate will excel in creating roadmaps, planning with prioritization, resource allocation, key item delivery, and seamless integration of perspectives from various stakeholders, including Product Managers, Technical Anchors, Service Owners, and Developers. You are a results-oriented leader, capable of building and executing an aligned strategy and leading the data team and cross-functional teams to meet delivery timelines.

As a Staff Technical Product Manager, you will be responsible for:
- Demonstrating wide and deep knowledge in data engineering, data architecture, and data science, with the ability to guide, lead, and work with the team to drive to the right solution.
- Engaging frequently (80%) with the development team: facilitating discussions, providing clarification, story acceptance and refinement, testing, and validation; contributing to design activities and decisions; familiarity with the waterfall and Agile scrum frameworks.
- Owning and managing the backlog; continuously ordering and prioritizing to ensure that 1-2 sprints/iterations of backlog are always ready.
- Collaborating with UX in design decisions, demonstrating deep understanding of the technology stack and its impact on the final product.
- Conducting customer and stakeholder interviews and elaborating on personas.
- Demonstrating expert-level skill in problem decomposition and the ability to navigate through ambiguity.
- Partnering with the Service Owner to ensure a healthy development process and clear tracking metrics to form a standard and trustworthy way of providing customer support.
- Designing and implementing scalable and robust data pipelines to collect, process, and store data from various sources.
- Developing and maintaining data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
- Optimizing and tuning the performance of data systems to ensure efficient data processing and analysis.
- Collaborating with product managers and analysts to understand data requirements and implement solutions for data modeling and analysis.
- Identifying and resolving data quality issues, ensuring data accuracy, consistency, and completeness.
- Implementing and maintaining data governance and security measures to protect sensitive data.
- Monitoring and troubleshooting data infrastructure, performing root cause analysis, and implementing necessary fixes.

Fuel your passion
To be successful in this role you will require:
- A Bachelor's or higher degree in Computer Science, Information Systems, or a related field.
- A minimum of 6-10 years of proven experience as a Data Engineer or in a similar role, working with large-scale data processing and storage systems.
- Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
- Extensive knowledge of SAP systems, T-codes, data pipelines in SAP, and Databricks-related technologies.
- Experience building complex jobs for SCD-type mappings using ETL tools like PySpark, Talend, Informatica, etc. (see the SCD Type 2 sketch after this posting).
- Experience with data visualization and reporting tools (e.g., Tableau, Power BI).
- Strong problem-solving and analytical skills, with the ability to handle complex data challenges.
- Excellent communication and collaboration skills to work effectively in a team environment.
- Experience in data modeling, data warehousing, and ETL principles.
- Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
- Advanced knowledge of distributed computing and parallel processing.
- Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
- Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Certification in relevant technologies or data engineering disciplines.
- Working knowledge of Databricks, Dremio, and SAP is highly preferred.

Work in a way that works for you
We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns (where applicable): working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive.

Working with us
Our people are at the heart of what we do at Baker Hughes. We know we are better when all of our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other.

Working for you
Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we must push the boundaries today. We prioritize rewarding those who embrace challenge with a package that reflects how much we value their input. Join us, and you can expect contemporary work-life balance policies and wellbeing activities.

About Us
We are an energy technology company that provides solutions to energy and industrial customers worldwide. With operations in over 120 countries and built on a century of experience, our innovative technologies and services are taking energy forward - making it safer, cleaner, and more efficient for people and the planet. As a leading partner to the energy industry, we're committed to achieving net-zero carbon emissions by 2050, and we're always looking for the right people to help us get there - people who are as passionate as we are about making energy safer, cleaner, and more efficient.

Join Us
Are you seeking an opportunity to make a real difference in a company that values innovation and progress, with a global reach and exciting services and clients? Come join us and grow with a team of people who will challenge and inspire you! Let's come together and take energy forward.
Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. R147951
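The SCD-type mappings mentioned in the qualifications above follow a well-known pattern. Below is a hedged sketch of SCD Type 2 using the Delta Lake merge API: current rows are expired, then the incoming versions are appended as new current records. It assumes the staging table already holds only new or changed records; the table paths and columns are illustrative, and a Databricks/Delta runtime is assumed.

```python
# Hedged SCD Type 2 sketch on Delta Lake. All paths/columns are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
dim = DeltaTable.forPath(spark, "s3://lake/dim_customer")         # hypothetical
# Assumption: staging contains only new or changed customer records.
updates = spark.read.format("delta").load("s3://lake/stg_customer")

# Step 1: expire the current row for every customer present in staging.
(
    dim.alias("d")
    .merge(updates.alias("u"), "d.customer_id = u.customer_id AND d.is_current")
    .whenMatchedUpdate(
        set={"is_current": "false", "end_date": "current_date()"},
    )
    .execute()
)

# Step 2: append the incoming rows as the new current versions.
(
    updates.withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
    .withColumn("is_current", F.lit(True))
    .write.format("delta")
    .mode("append")
    .save("s3://lake/dim_customer")
)
```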
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position: Data Architect – Telecom Domain
Job Type: Full Time
Location: Noida, Gurgaon, Hyderabad
Experience: 12+ Years
Notice Period: Immediate / max 15 days

About the Role:
We are seeking an experienced Telecom Data Architect to join our team. In this role, you will be responsible for designing comprehensive data architecture and technical solutions specifically for telecommunications industry challenges, leveraging TMforum frameworks and modern data platforms. You will work closely with customers and technology partners to deliver data solutions that address complex telecommunications business requirements, including customer experience management, network optimization, revenue assurance, and digital transformation initiatives.

Responsibilities:
- Design and articulate enterprise-scale telecom data architectures incorporating TMforum standards and frameworks, including SID (Shared Information/Data Model), TAM (Telecom Application Map), and eTOM (enhanced Telecom Operations Map).
- Develop comprehensive data models aligned with TMforum guidelines for telecommunications domains such as Customer, Product, Service, Resource, and Partner management.
- Create data architectures that support telecom-specific use cases including customer journey analytics, network performance optimization, fraud detection, and revenue assurance.
- Design solutions leveraging Microsoft Azure and Databricks for telecom data processing and analytics.
- Conduct technical discovery sessions with telecom clients to understand their OSS/BSS architecture, network analytics needs, customer experience requirements, and digital transformation objectives.
- Design and deliver proof of concepts (POCs) and technical demonstrations showcasing modern data platforms solving real-world telecommunications challenges.
- Create comprehensive architectural diagrams and implementation roadmaps for telecom data ecosystems spanning cloud, on-premises, and hybrid environments.
- Evaluate and recommend appropriate big data technologies, cloud platforms, and processing frameworks based on telecom-specific requirements and regulatory compliance needs.
- Design data governance frameworks compliant with telecom industry standards and regulatory requirements (GDPR, data localization, etc.).
- Stay current with the latest advancements in data technologies, including cloud services, data processing frameworks, and AI/ML capabilities.
- Contribute to the development of best practices, reference architectures, and reusable solution components to accelerate proposal development.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Telecommunications Engineering, Data Science, or a related technical field.
- 10+ years of experience in data architecture, data engineering, or solution architecture roles, with at least 5 years in the telecommunications industry.
- Deep knowledge of TMforum frameworks including SID (Shared Information/Data Model), eTOM, and TAM, and their practical implementation in telecom data architectures.
- Demonstrated ability to estimate project efforts, resource requirements, and implementation timelines for complex telecom data initiatives.
- Hands-on experience building data models and platforms aligned with TMforum standards and telecommunications business processes.
- Strong understanding of telecom OSS/BSS systems, network management, customer experience management, and revenue management domains.
- Hands-on experience with data platforms including Databricks and Microsoft Azure in telecommunications contexts.
- Experience with modern data processing frameworks such as Apache Kafka, Spark, and Airflow for real-time telecom data streaming (see the streaming sketch below).
- Proficiency in the Azure cloud platform and its data services, with an understanding of telecom-specific deployment requirements.
- Knowledge of system monitoring and observability tools for telecommunications data infrastructure.
- Experience implementing automated testing frameworks for telecom data platforms and pipelines.
- Familiarity with telecom data integration patterns, ETL/ELT processes, and data governance practices specific to telecommunications.
- Experience designing and implementing data lakes, data warehouses, and machine learning pipelines for telecom use cases.
- Proficiency in programming languages commonly used in data processing (Python, Scala, SQL) with telecom domain applications.
- Understanding of telecommunications regulatory requirements and data privacy compliance (GDPR, local data protection laws).
- Excellent communication and presentation skills, with the ability to explain complex technical concepts to telecom stakeholders.
- Strong problem-solving skills and the ability to think creatively to address telecommunications industry challenges.
- Good to have: TMforum certifications or telecommunications industry certifications.
- Relevant data platform certifications such as Databricks or Azure Data Engineer are a plus.
- Willingness to travel as required.
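The Kafka + Spark streaming combination listed above typically looks like the following hedged sketch: network events read from a Kafka topic, parsed against a schema loosely aligned with a TMforum SID Resource entity, and written to Delta. The broker, topic, schema, and paths are assumptions for illustration (the Spark Kafka connector package must be on the classpath).

```python
# Hedged sketch: stream telecom network events from Kafka into a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.getOrCreate()

# Minimal schema loosely modeled on a SID "Resource" status event.
schema = StructType([
    StructField("resource_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "network-events")              # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/chk/network-events")  # hypothetical path
    .outputMode("append")
    .start("/silver/network_events")                      # hypothetical path
)
```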
Posted 1 week ago
4.0 years
0 Lacs
India
Remote
Greetings!!!

Role: Senior Data Engineer with Databricks
Experience: 4+ Years
Location: Remote
Duration: 3 Months Contract

Required Skills & Experience:
o 3+ years of hands-on experience in data structures, distributed systems, Spark, SQL, PySpark, and NoSQL databases
o Strong software development skills in at least one of Python, PySpark, or Scala
o Develop and maintain scalable and modular data pipelines using Databricks and Apache Spark
o Experience working with at least one of the cloud storage services – AWS S3, Azure Data Lake, GCS buckets
o Exposure to the ELT tools offered by the cloud platforms, such as ADF, AWS Glue, and Google Dataflow
o Integrate Databricks with other cloud services such as AWS, Azure, or Google Cloud

If you are interested, please share your resume to prachi@iitjobs.com
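As a hedged illustration of the "scalable and modular data pipelines using Databricks and Apache Spark" requirement, the sketch below splits a small pipeline into extract/transform/load functions. The paths, columns, and table names are invented for the example:

```python
from pyspark.sql import SparkSession, DataFrame, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

def extract(path: str) -> DataFrame:
    """Read raw JSON events from cloud storage (an S3, ADLS, or GCS path works here)."""
    return spark.read.json(path)

def transform(raw: DataFrame) -> DataFrame:
    """Deduplicate and standardize the raw feed."""
    return (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .where(F.col("amount") > 0)
    )

def load(df: DataFrame, table: str) -> None:
    """Append into a Delta table, partitioned by day for efficient pruning."""
    (df.withColumn("order_date", F.to_date("order_ts"))
       .write.format("delta").mode("append")
       .partitionBy("order_date").saveAsTable(table))

# Hypothetical bucket and table; Delta writes assume Databricks or delta-spark.
load(transform(extract("s3://raw-bucket/orders/")), "bronze.orders")
```

Keeping each stage a pure function of DataFrames is one common way to make such pipelines testable and reusable across jobs.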
Posted 1 week ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.

Outcomes:
Interpret the application/feature/component design and develop it in accordance with specifications.
Code, debug, test, document and communicate product/component/feature development stages.
Validate results with user representatives; integrate and commission the overall solution.
Select appropriate technical options for development, such as reusing, improving or reconfiguring existing components, or creating own solutions.
Optimise efficiency, cost and quality.
Influence and improve customer satisfaction.
Set FAST goals for self/team; provide feedback on FAST goals of team members.

Measures of Outcomes:
Adherence to engineering process and standards (coding standards)
Adherence to project schedule/timelines
Number of technical issues uncovered during the execution of the project
Number of defects in the code
Number of defects post delivery
Number of non-compliance issues
On-time completion of mandatory compliance trainings

Outputs Expected:
Code: Code as per design; follow coding standards, templates and checklists; review code for team and peers.
Documentation: Create/review templates, checklists, guidelines and standards for design/process/development; create/review deliverable documents, design documentation, requirements, and test cases/results.
Configure: Define and govern the configuration management plan; ensure compliance from the team.
Test: Review and create unit test cases, scenarios and execution; review the test plan created by the testing team; provide clarifications to the testing team.
Domain Relevance: Advise software developers on the design and development of features and components with a deep understanding of the business problem being addressed for the client; learn more about the customer domain, identifying opportunities to provide valuable additions to customers; complete relevant domain certifications.
Manage Project: Manage delivery of modules and/or manage user stories.
Manage Defects: Perform defect RCA and mitigation; identify defect trends and take proactive measures to improve quality.
Estimate: Create and provide input for effort estimation for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities; review the reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models.
Interface with Customer: Clarify requirements and provide guidance to the development team; present design options to customers; conduct product demos.
Manage Team: Set FAST goals and provide feedback; understand the aspirations of team members and provide guidance, opportunities, etc.; ensure the team is engaged in the project.
Certifications: Obtain relevant domain/technology certifications.

Skill Examples:
Explain and communicate the design/development to the customer.
Perform and evaluate test results against product specifications.
Break down complex problems into logical components.
Develop user interfaces and business software components.
Use data models.
Estimate the time and effort required for developing/debugging features/components.
Perform and evaluate tests in the customer or target environment.
Make quick decisions on technical/project-related challenges.
Manage a team; mentor and handle people-related issues in the team.
Maintain high motivation levels and positive dynamics in the team.
Interface with other teams, designers and other parallel practices.
Set goals for self and team; provide feedback to team members.
Create and articulate impactful technical presentations.
Follow a high level of business etiquette in emails and other business communication.
Drive conference calls with customers, addressing customer questions.
Proactively ask for and offer help.
Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
Build confidence with customers by meeting deliverables on time and with quality.
Estimate the time, effort and resources required for developing/debugging features/components.
Make appropriate use of software and hardware.
Strong analytical and problem-solving abilities.

Knowledge Examples:
Appropriate software programs/modules
Functional and technical design
Programming languages – proficient in multiple skill clusters
DBMS
Operating systems and software platforms
Software Development Life Cycle
Agile – Scrum or Kanban methods
Integrated development environments (IDE)
Rapid application development (RAD)
Modelling technology and languages
Interface definition languages (IDL)
Knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved

Additional Comments:
5+ years of experience in Azure Databricks development
Strong knowledge of SQL constraints and operators, and of modifying and querying data in tables
Interact with the customer/client or counterparts to understand the requirements
Analyse the requirements (IRD/IDD, extraction SQLs) and be able to develop independently
Mentor and guide juniors in the team; review code and provide valuable suggestions
Collaborate with counterparts to explore existing systems, determine areas of complexity and potential risks to successful completion of the project, and bring added value to the customer
Share knowledge and best practices around the quality assurance process with the team

Skills: Azure Databricks, Azure, SQL
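To make the "SQL constraints, operators, modifying and querying data" requirement concrete, here is a small sketch of Databricks SQL run through PySpark: it creates a Delta table with a NOT NULL column, adds a CHECK constraint, upserts with MERGE, and queries the result. The `staging` schema and every name in it are hypothetical, and the constraint/MERGE syntax assumes Delta Lake on Databricks:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("databricks-sql-demo").getOrCreate()

# Create a Delta table with a NOT NULL column constraint (schema assumed to exist).
spark.sql("""
    CREATE TABLE IF NOT EXISTS staging.orders (
        order_id BIGINT NOT NULL,
        amount   DOUBLE,
        status   STRING
    ) USING DELTA
""")

# Enforce a business rule with a CHECK constraint (errors if added twice).
spark.sql("ALTER TABLE staging.orders ADD CONSTRAINT positive_amount CHECK (amount > 0)")

# A tiny in-memory source registered as a view for the MERGE below.
spark.createDataFrame(
    [(1, 25.0, "NEW"), (2, 40.5, "SHIPPED")],
    "order_id BIGINT, amount DOUBLE, status STRING",
).createOrReplaceTempView("updates")

# Modify data: upsert incoming rows into the table.
spark.sql("""
    MERGE INTO staging.orders AS t
    USING updates AS s
    ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.status = s.status
    WHEN NOT MATCHED THEN INSERT *
""")

# Query it back.
spark.sql("SELECT status, COUNT(*) AS n FROM staging.orders GROUP BY status").show()
```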
Posted 1 week ago
10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
We are seeking a highly experienced and motivated Azure Cloud Administrator with a strong background in Windows Server infrastructure, Azure IaaS/PaaS services, and cloud networking. The ideal candidate will have over 10 years of relevant experience and will be responsible for managing and optimizing our Azure environment while ensuring high availability, scalability, and security of our infrastructure.

Key Responsibilities
Administer and manage Azure Cloud infrastructure, including both IaaS and PaaS services.
Deploy, configure, and maintain Windows Servers (2016/2019/2022).
Manage Azure resources such as Virtual Machines, Storage Accounts, SQL Managed Instances, Azure Functions, Logic Apps, App Services, Azure Monitor, Azure Key Vault, Azure Recovery Services, Databricks, ADF, Synapse, and more.
Ensure security and network compliance through effective use of Azure networking features, including NSGs, Load Balancers, and VPN gateways.
Monitor and troubleshoot infrastructure issues using tools such as Log Analytics, Application Insights, and Azure Metrics.
Perform server health checks, patch management, upgrades, backup/restoration, and DR testing.
Implement and maintain Group Policies, DNS, IIS, Active Directory, and Entra ID (formerly Azure AD).
Collaborate with DevOps teams to support infrastructure automation using Terraform and Azure DevOps.
Support ITIL-based processes, including incident, change, and problem management.
Deliver Root Cause Analysis (RCA) and post-incident reviews for high-severity issues.
Provide after-hours support as required during outages or maintenance windows.

Required Technical Skills
Windows Server Administration – deep expertise in Windows Server 2016/2019/2022.
Azure Administration – strong hands-on experience with Azure IaaS/PaaS services.
Azure Networking – solid understanding of cloud networking principles and security best practices.
Azure Monitoring – familiarity with Azure Monitor, Log Analytics, and Application Insights.
Infrastructure Tools – experience with Microsoft IIS, DNS, AD, Group Policy, and Entra ID Connect.
Cloud Automation – good to have: working knowledge of Terraform and Azure DevOps pipelines.
Troubleshooting & RCA – proven ability to analyze, resolve, and document complex technical issues.

Skills: Azure, Windows, Monitoring
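A brief, hedged sketch of the kind of routine check this role performs, using the Azure SDK for Python (not a tool named in the posting): list every VM in a subscription with its power state. It assumes `azure-identity` and `azure-mgmt-compute` are installed, credentials are available to `DefaultAzureCredential`, and the subscription id is in the environment:

```python
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
compute = ComputeManagementClient(credential, subscription_id)

for vm in compute.virtual_machines.list_all():
    # The resource group is embedded in the resource ID:
    # /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/...
    rg = vm.id.split("/")[4]
    view = compute.virtual_machines.instance_view(rg, vm.name)
    power = next(
        (s.display_status for s in view.statuses if s.code.startswith("PowerState/")),
        "unknown",
    )
    print(f"{rg}/{vm.name}: {power}")
```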
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Staff Technical Product Manager

Are you excited about the opportunity to lead a team within an industry leader in Energy Technology? Are you passionate about improving capabilities, efficiency and performance? Join our Digital Technology Team!

As a Staff Technical Product Manager, this position will operate in lock-step with product management to create a clear strategic direction for build needs, and convey that vision to the service’s scrum team. You will direct the team with a clear and descriptive set of requirements captured as stories and partner with the team to determine what can be delivered through balancing the need for new features, defects, and technical debt.

Partner with the best
As a Staff Technical Product Manager, we are seeking a strong background in business analysis, team leadership, and data architecture, along with hands-on development skills. The ideal candidate will excel in creating roadmaps, planning with prioritization, resource allocation, key item delivery, and seamless integration of perspectives from various stakeholders, including Product Managers, Technical Anchors, Service Owners, and Developers. A results-oriented leader, capable of building and executing an aligned strategy, leading the data team and cross-functional teams to meet delivery timelines.

As a Staff Technical Product Manager, you will be responsible for:
Demonstrating wide and deep knowledge in data engineering, data architecture, and data science, with the ability to guide, lead, and work with the team to drive to the right solution
Engaging frequently (80%) with the development team; facilitating discussions, providing clarification, story acceptance and refinement, testing and validation; contributing to design activities and decisions; familiarity with waterfall and the Agile scrum framework
Owning and managing the backlog; continuously ordering and prioritizing to ensure that 1-2 sprints/iterations of backlog are always ready
Collaborating with UX in design decisions, demonstrating deep understanding of the technology stack and its impact on the final product
Conducting customer and stakeholder interviews and elaborating on personas
Demonstrating expert-level skill in problem decomposition and the ability to navigate through ambiguity
Partnering with the Service Owner to ensure a healthy development process and clear tracking metrics to form a standard and trustworthy way of providing customer support
Designing and implementing scalable and robust data pipelines to collect, process, and store data from various sources
Developing and maintaining data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation
Optimizing and tuning the performance of data systems to ensure efficient data processing and analysis
Collaborating with product managers and analysts to understand data requirements and implement solutions for data modeling and analysis
Identifying and resolving data quality issues, ensuring data accuracy, consistency, and completeness
Implementing and maintaining data governance and security measures to protect sensitive data
Monitoring and troubleshooting data infrastructure, performing root cause analysis, and implementing necessary fixes

Fuel your passion: To be successful in this role you will:
Have a Bachelor's or higher degree in Computer Science, Information Systems, or a related field
Have a minimum of 6-10 years of proven experience as a Data Engineer or in a similar role, working with large-scale data processing and storage systems
Have proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle)
Have extensive knowledge of SAP systems, T-codes, data pipelines in SAP, and Databricks-related technologies
Have experience building complex jobs for SCD-type mappings using ETL tools like PySpark, Talend, Informatica, etc. (a minimal sketch follows this posting)
Have experience with data visualization and reporting tools (e.g., Tableau, Power BI)
Have strong problem-solving and analytical skills, with the ability to handle complex data challenges
Have excellent communication and collaboration skills to work effectively in a team environment
Have experience in data modeling, data warehousing, and ETL principles
Have familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery)
Have advanced knowledge of distributed computing and parallel processing
Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink), knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes), certification in relevant technologies or data engineering disciplines, and working knowledge of Databricks, Dremio, and SAP are highly preferred

Work in a way that works for you
We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns (where applicable): working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive.

Working with us
Our people are at the heart of what we do at Baker Hughes. We know we are better when all our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other.

Working for you
Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we must push the boundaries today. We prioritize rewarding those who embrace challenge with a package that reflects how much we value their input. Join us, and you can expect contemporary work-life balance policies and wellbeing activities.

About Us
We are an energy technology company that provides solutions to energy and industrial customers worldwide. With operations in over 120 countries and built on a century of experience, we provide better solutions for our customers and richer opportunities for our people, and our innovative technologies and services are taking energy forward – making it safer, cleaner and more efficient for people and the planet. As a leading partner to the energy industry, we're committed to achieving net-zero carbon emissions by 2050 and we're always looking for the right people to help us get there: people who are as passionate as we are about making energy safer, cleaner, and more efficient.

Join Us
Are you seeking an opportunity to make a real difference in a company with a global reach, exciting services and clients, and a culture that values innovation and progress? Come join us and grow with a team of people who will challenge and inspire you! Let’s come together and take energy forward.
Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. R147951
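The SCD-type mapping experience mentioned above can be illustrated with a minimal Delta Lake sketch in PySpark: an SCD Type 2 upsert that expires the current dimension row when a tracked attribute changes, then appends the new version. The dimension schema (customer_id, address, is_current, start_ts, end_ts) and all table names are assumptions made for the example:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-demo").getOrCreate()

dim = DeltaTable.forName(spark, "dw.dim_customer")
updates = spark.table("staging.customer_changes").withColumn("load_ts", F.current_timestamp())

# Step 1: close out current rows whose tracked attribute (address) changed.
(dim.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.address <> s.address",
        set={"is_current": "false", "end_ts": "s.load_ts"},
    )
    .execute())

# Step 2: insert new versions for changed or brand-new customers. Customers whose
# current row survives step 1 (no change) are filtered out by the anti-join.
still_current = spark.table("dw.dim_customer").where("is_current = true").select("customer_id")
new_rows = (updates.join(still_current, "customer_id", "left_anti")
            .withColumn("is_current", F.lit(True))
            .withColumn("start_ts", F.col("load_ts"))
            .withColumn("end_ts", F.lit(None).cast("timestamp"))
            .drop("load_ts"))

# Assumes the column set now matches the dimension table exactly.
new_rows.write.format("delta").mode("append").saveAsTable("dw.dim_customer")
```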
Posted 1 week ago
3.0 years
5 - 40 Lacs
Pune, Maharashtra, India
On-site
Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks / Snowpipe
Good to Have: SnowPro Certification

Primary Roles And Responsibilities
Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF.
Ability to provide solutions that are forward-thinking in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix the issues.
Work with the business to understand the needs of the reporting layer and develop a data model to fulfill reporting needs.
Help junior team members resolve issues and technical challenges.
Drive technical discussions with the client architect and team members.
Orchestrate the data pipelines in the scheduler via Airflow (see the sketch below).

Skills And Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience.
Must have a total of 6+ years of IT experience and 3+ years of experience in Data Warehouse/ETL projects.
Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
Deep understanding of Star and Snowflake dimensional modeling.
Strong knowledge of Data Management principles.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture.
Should have hands-on experience in SQL and Spark (PySpark).
Experience in building ETL / data warehouse transformation processes.
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4J).
Experience working with structured and unstructured data, including imaging & geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix Shell Scripting, performance tuning, troubleshooting and query optimization.
Databricks Certified Data Engineer Associate/Professional Certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Should have experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with a high attention to detail.

Skills: pipelines,pyspark,pl/sql,snowflake,reporting,snowpipe,databricks,spark,azure data factory,projects,data warehouse,unix shell scripting,snowsql,troubleshooting,rdbms,sql,data management principles,data,query optimization,snow pipe,skills,azure datafactory,nosql databases,circleci,git,terraform,snowflake utilities,performance tuning,etl,azure,architect
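As referenced in the responsibilities above, a minimal Airflow sketch of pipeline orchestration. The task bodies are placeholders for Snowpipe/ADF and warehouse-SQL logic, and the DAG id and schedule are invented for the example (the `schedule` argument assumes Airflow 2.4+):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_stage(**context):
    print("land raw files into the stage area")  # placeholder for Snowpipe/ADF logic

def transform_in_warehouse(**context):
    print("run SQL transformations")  # placeholder for Snowflake SQL / PySpark logic

with DAG(
    dag_id="daily_dw_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_stage", python_callable=extract_to_stage)
    transform = PythonOperator(task_id="transform_in_warehouse", python_callable=transform_in_warehouse)

    # Dependencies read left to right: extract must succeed before transform runs.
    extract >> transform
```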
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Only Immediate Joiners (Within 12-15 days)
4+ Years
Bangalore, Hyderabad, Pune, Coimbatore (Hybrid - Onsite)
Face-to-face interview

About the Role
We’re looking for a Cloud Migration Consultant with hands-on experience assessing and migrating complex applications to Azure. You'll work closely with Microsoft business units, participating in Intake & Assessment and Planning & Design phases, creating migration artifacts, and leading client interactions. You’ll also support application modernization efforts in Azure, with exposure to AWS as needed.

Key Responsibilities
Assess application readiness and document architecture, dependencies, and migration strategy.
Conduct interviews with stakeholders and generate discovery insights using tools like Azure Migrate, CloudockIt, and PowerShell (a simple inventory sketch follows this posting).
Create architecture diagrams and migration playbooks, and maintain Azure DevOps boards.
Set up applications both on-premises and in cloud environments (primarily Azure).
Support proofs of concept (PoCs) and advise on migration options.
Collaborate with application, database, and infrastructure teams to enable a smooth transition to migration factory teams.
Track progress, blockers, and risks, reporting timely status to project leadership.

Required Skills
4+ years of experience in cloud migration and assessment
Strong expertise in Azure IaaS/PaaS (VMs, App Services, ADF, etc.)
Familiarity with AWS IaaS/PaaS (EC2, RDS, Glue, S3)
Experience with Java (Spring Boot)/C#, .NET/Python, Angular/React.js, REST APIs
Working knowledge of Kafka, Docker/Kubernetes, Azure DevOps
Network infrastructure understanding (VNets, NSGs, Firewalls, WAFs)
IAM knowledge: OAuth, SAML, Okta/SiteMinder
Experience with big data tools like Databricks, Hadoop, Oracle, DocumentDB

Preferred Qualifications
Azure or AWS certifications
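As referenced above, a hedged Python sketch of a quick discovery snapshot with the Azure SDK. This is not Azure Migrate or CloudockIt; it simply counts resources by type to help scope a migration wave, and it assumes `azure-identity` and `azure-mgmt-resource` are installed with a subscription id in the environment:

```python
import os
from collections import Counter

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

# Count resources by type to see what the migration wave actually contains.
by_type = Counter(r.type for r in client.resources.list())
for rtype, n in by_type.most_common(15):
    print(f"{n:5d}  {rtype}")
```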
Posted 1 week ago
12.0 years
10 - 45 Lacs
Pune, Maharashtra, India
On-site
Location: Pune, Bangalore, Gurgaon, Chennai, Kolkata
Experience: 8-12 Years
Work Mode: Hybrid
Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and Architect Designing, Architect.

Overview
We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure Data Engineering. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you!

Primary Roles And Responsibilities
Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
Ability to provide solutions that are forward-thinking in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix the issues.
Work with the business to understand the needs of the reporting layer and develop a data model to fulfill reporting needs.
Help junior team members resolve issues and technical challenges.
Drive technical discussions with the client architect and team members.
Orchestrate the data pipelines in the scheduler via Airflow.

Skills And Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience.
Must have a total of 6+ years of IT experience and 3+ years of experience in Data Warehouse/ETL projects.
Deep understanding of Star and Snowflake dimensional modelling.
Strong knowledge of Data Management principles.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture.
Should have hands-on experience in SQL, Python and Spark (PySpark).
Candidate must have experience in the AWS/Azure stack.
Desirable to have ETL experience with batch and streaming (Kinesis).
Experience in building ETL / data warehouse transformation processes.
Experience with Apache Kafka for streaming / event-based data (a minimal streaming sketch follows this posting).
Experience with other open-source big data products, e.g., Hadoop (incl. Hive, Pig, Impala).
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4J).
Experience working with structured and unstructured data, including imaging & geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix Shell Scripting, performance tuning and troubleshooting.
Databricks Certified Data Engineer Associate/Professional Certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Should have experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with a high attention to detail.

Skills: pipelines,pyspark,technical architecture,azure databricks,data lake,architect designing,data warehouse,azure synapse,etl,sql,data,skills,azure datafactory,data pipeline,azure synapses,data engineering,azure,airflow,architect,python
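The streaming sketch referenced above: a minimal Spark Structured Streaming job that reads a Kafka topic and appends parsed events to a Delta table. The broker address, topic, schema, and checkpoint path are illustrative assumptions, and the spark-sql-kafka connector package must be on the classpath:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-events").getOrCreate()

# Assumed shape of the JSON payload on the topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
          .option("subscribe", "orders")                      # hypothetical topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Incrementally land the parsed events; checkpointing makes the job restartable.
(events.writeStream
       .format("delta")
       .option("checkpointLocation", "/chk/orders")
       .outputMode("append")
       .toTable("bronze.order_events"))
```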
Posted 1 week ago
12.0 years
10 - 45 Lacs
Gurugram, Haryana, India
On-site
Location: Pune, Bangalore, Gurgaon, Chennai, Kolkata
Experience: 8-12 Years
Work Mode: Hybrid
Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipeline, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and Architect Designing, Architect.

Overview
We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure Data Engineering. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you!

Primary Roles And Responsibilities
Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
Ability to provide solutions that are forward-thinking in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix the issues.
Work with the business to understand the needs of the reporting layer and develop a data model to fulfill reporting needs.
Help junior team members resolve issues and technical challenges.
Drive technical discussions with the client architect and team members.
Orchestrate the data pipelines in the scheduler via Airflow.

Skills And Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience.
Must have a total of 6+ years of IT experience and 3+ years of experience in Data Warehouse/ETL projects.
Deep understanding of Star and Snowflake dimensional modelling.
Strong knowledge of Data Management principles.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake Architecture.
Should have hands-on experience in SQL, Python and Spark (PySpark).
Candidate must have experience in the AWS/Azure stack.
Desirable to have ETL experience with batch and streaming (Kinesis).
Experience in building ETL / data warehouse transformation processes.
Experience with Apache Kafka for streaming / event-based data.
Experience with other open-source big data products, e.g., Hadoop (incl. Hive, Pig, Impala).
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4J).
Experience working with structured and unstructured data, including imaging & geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix Shell Scripting, performance tuning and troubleshooting.
Databricks Certified Data Engineer Associate/Professional Certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Should have experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with a high attention to detail.

Skills: pipelines,pyspark,technical architecture,azure databricks,data lake,architect designing,data warehouse,azure synapse,etl,sql,data,skills,azure datafactory,data pipeline,azure synapses,data engineering,azure,airflow,architect,python
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Staff Technical Product Manager

Are you excited about the opportunity to lead a team within an industry leader in Energy Technology? Are you passionate about improving capabilities, efficiency and performance? Join our Digital Technology Team!

As a Staff Technical Product Manager, this position will operate in lock-step with product management to create a clear strategic direction for build needs, and convey that vision to the service’s scrum team. You will direct the team with a clear and descriptive set of requirements captured as stories and partner with the team to determine what can be delivered through balancing the need for new features, defects, and technical debt.

Partner with the best
As a Staff Technical Product Manager, we are seeking a strong background in business analysis, team leadership, and data architecture, along with hands-on development skills. The ideal candidate will excel in creating roadmaps, planning with prioritization, resource allocation, key item delivery, and seamless integration of perspectives from various stakeholders, including Product Managers, Technical Anchors, Service Owners, and Developers. A results-oriented leader, capable of building and executing an aligned strategy, leading the data team and cross-functional teams to meet delivery timelines.

As a Staff Technical Product Manager, you will be responsible for:
Demonstrating wide and deep knowledge in data engineering, data architecture, and data science, with the ability to guide, lead, and work with the team to drive to the right solution
Engaging frequently (80%) with the development team; facilitating discussions, providing clarification, story acceptance and refinement, testing and validation; contributing to design activities and decisions; familiarity with waterfall and the Agile scrum framework
Owning and managing the backlog; continuously ordering and prioritizing to ensure that 1-2 sprints/iterations of backlog are always ready
Collaborating with UX in design decisions, demonstrating deep understanding of the technology stack and its impact on the final product
Conducting customer and stakeholder interviews and elaborating on personas
Demonstrating expert-level skill in problem decomposition and the ability to navigate through ambiguity
Partnering with the Service Owner to ensure a healthy development process and clear tracking metrics to form a standard and trustworthy way of providing customer support
Designing and implementing scalable and robust data pipelines to collect, process, and store data from various sources
Developing and maintaining data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation
Optimizing and tuning the performance of data systems to ensure efficient data processing and analysis
Collaborating with product managers and analysts to understand data requirements and implement solutions for data modeling and analysis
Identifying and resolving data quality issues, ensuring data accuracy, consistency, and completeness
Implementing and maintaining data governance and security measures to protect sensitive data
Monitoring and troubleshooting data infrastructure, performing root cause analysis, and implementing necessary fixes

Fuel your passion: To be successful in this role you will:
Have a Bachelor's or higher degree in Computer Science, Information Systems, or a related field
Have a minimum of 6-10 years of proven experience as a Data Engineer or in a similar role, working with large-scale data processing and storage systems
Have proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle)
Have extensive knowledge of SAP systems, T-codes, data pipelines in SAP, and Databricks-related technologies
Have experience building complex jobs for SCD-type mappings using ETL tools like PySpark, Talend, Informatica, etc.
Have experience with data visualization and reporting tools (e.g., Tableau, Power BI)
Have strong problem-solving and analytical skills, with the ability to handle complex data challenges
Have excellent communication and collaboration skills to work effectively in a team environment
Have experience in data modeling, data warehousing, and ETL principles
Have familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery)
Have advanced knowledge of distributed computing and parallel processing
Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink), knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes), certification in relevant technologies or data engineering disciplines, and working knowledge of Databricks, Dremio, and SAP are highly preferred

Work in a way that works for you
We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns (where applicable): working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive.

Working with us
Our people are at the heart of what we do at Baker Hughes. We know we are better when all our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other.

Working for you
Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we must push the boundaries today. We prioritize rewarding those who embrace challenge with a package that reflects how much we value their input. Join us, and you can expect contemporary work-life balance policies and wellbeing activities.

About Us
We are an energy technology company that provides solutions to energy and industrial customers worldwide. With operations in over 120 countries and built on a century of experience, we provide better solutions for our customers and richer opportunities for our people, and our innovative technologies and services are taking energy forward – making it safer, cleaner and more efficient for people and the planet. As a leading partner to the energy industry, we're committed to achieving net-zero carbon emissions by 2050 and we're always looking for the right people to help us get there: people who are as passionate as we are about making energy safer, cleaner, and more efficient.

Join Us
Are you seeking an opportunity to make a real difference in a company with a global reach, exciting services and clients, and a culture that values innovation and progress? Come join us and grow with a team of people who will challenge and inspire you! Let’s come together and take energy forward.
Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. R147951
Posted 1 week ago
6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description
Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride.

Together with analytics team leaders you will support our business with excellent data models to uncover trends that can drive long-term business results.

How You Will Contribute
You will:
Work in close partnership with the business leadership team to execute the analytics agenda
Identify and incubate best-in-class external partners to drive delivery on strategic projects
Develop custom models/algorithms to uncover signals/patterns and trends to drive long-term business performance
Execute the business analytics program agenda using a methodical approach that conveys to stakeholders what business analytics will deliver

What You Will Bring
A desire to drive your future and accelerate your career and the following experience and knowledge:
Using data analysis to make recommendations to senior leaders
Technical experience in roles in best-in-class analytics practices
Experience deploying new analytical approaches in a complex and highly matrixed organization
Savvy in usage of analytics techniques to create business impacts

As part of the Global MSC (Mondelez Supply Chain) Data & Analytics team, you will support our business to uncover trends that can drive long-term business results. In this role, you will be a key technical leader in developing our cutting-edge Supply Chain Data Product ecosystem. You'll have the opportunity to design, build, and automate data ingestion, harmonization, and transformation processes, driving advanced analytics, reporting, and insights to optimize Supply Chain performance across the organization. You will play an instrumental part in engineering robust and scalable data solutions, acting as a hands-on expert for Supply Chain data, and contributing to how these data products are visualized and interacted with.

What You Will Bring
A desire to drive your future and accelerate your career and the following experience and knowledge:
SAP Data Expertise: Deep hands-on experience in extracting, transforming, and modeling data from SAP ECC/S4HANA (modules like MM, SD, PP, QM, FI/CO) and SAP BW/HANA. Proven ability to understand SAP data structures and business processes within Supply Chain.
Cloud Data Engineering (GCP Focused): Strong proficiency and hands-on experience in data warehousing solutions and data engineering services within the Google Cloud Platform (GCP) ecosystem (e.g., BigQuery, Dataflow, Dataproc, Cloud Composer, Pub/Sub).
Data Pipeline Development: Design, build, and maintain robust and efficient ETL/ELT processes for data integration, ensuring data accuracy, integrity, and timeliness.
BI & Analytics Enablement: Collaborate with data scientists, analysts, and business users to provide high-quality, reliable data for their analyses and models. Support the development of data consumption layers, including dashboards (e.g., Tableau, Power BI).
Hands-on experience with Databricks (desirable): ideally deployed on GCP or with GCP integration for large-scale data processing, Spark-based transformations, and advanced analytics.
System Monitoring & Optimization (desirable): Monitor data processing systems and pipelines to ensure efficiency, reliability, performance, and uptime; proactively identify and resolve bottlenecks.
Industry Knowledge: Solid understanding of the consumer goods industry, particularly Supply Chain processes and relevant key performance indicators (KPIs).

What extra ingredients you will bring:
Excellent communication and collaboration skills to facilitate effective teamwork and Supply Chain stakeholder engagement.
Ability to explain complex data concepts to both technical and non-technical individuals.
Experience delegating work and assignments to team members and guiding them through technical issues and challenges.
Ability to thrive in an entrepreneurial, fast-paced setting, managing complex data challenges with a solutions-oriented approach.
Strong problem-solving skills and business acumen, particularly within the Supply Chain domain.
Experience working in Agile development environments with a Product mindset is a plus.

Education / Certifications:
Bachelor's degree in Information Systems/Technology, Computer Science, Analytics, Engineering, or a related field.
6+ years of hands-on experience in data engineering, data warehousing, or a similar technical role, preferably in CPG or manufacturing with a strong focus on Supply Chain data.

Within Country Relocation support is available, and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy.

Business Unit Summary
At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen—and happen fast.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular
Analytics & Modelling
Analytics & Data Science
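As a hedged illustration of the GCP/BigQuery analytics enablement this posting describes, the sketch below runs an invented supply-chain KPI query (on-time rate by plant) and pulls it into a DataFrame for reporting. The project, dataset, table, and columns are assumptions, and `to_dataframe()` requires the pandas/db-dtypes extras of `google-cloud-bigquery`:

```python
from google.cloud import bigquery

# Uses application-default GCP credentials; project comes from the environment.
client = bigquery.Client()

sql = """
    SELECT plant_id,
           SUM(on_time_orders) / SUM(total_orders) AS otif_rate
    FROM `mysc_project.supply_chain.daily_orders`  -- hypothetical table
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY plant_id
    ORDER BY otif_rate
"""

df = client.query(sql).result().to_dataframe()
print(df.head())
```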
Posted 1 week ago