
3717 Data Quality Jobs - Page 29

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 7.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
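As a brief, hedged illustration of the semantic-web skills this listing asks for (RDF, SPARQL, class hierarchies), the following Python sketch builds a tiny RDF graph with rdflib and runs a SPARQL query over it. rdflib and the example namespace are illustrative assumptions, not tools named by the employer.

```python
# Minimal sketch: model two entities and a relationship as RDF triples,
# then query them with SPARQL. Assumes the rdflib package is installed.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/ontology#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Declare a small class hierarchy and one instance (toy stand-in for a BFO/CCO-aligned model).
g.add((EX.Equipment, RDF.type, RDFS.Class))
g.add((EX.Pump, RDFS.subClassOf, EX.Equipment))
g.add((EX.pump42, RDF.type, EX.Pump))
g.add((EX.pump42, RDFS.label, Literal("Feed pump 42")))

# SPARQL: find every instance of Equipment or any of its subclasses.
query = """
PREFIX ex: <http://example.org/ontology#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item ?label WHERE {
    ?cls rdfs:subClassOf* ex:Equipment .
    ?item a ?cls ;
          rdfs:label ?label .
}
"""
for row in g.query(query):
    print(row.item, row.label)
```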

Posted 1 week ago

Apply

7.0 - 10.0 years

10 - 14 Lacs

Kolkata

Work from Office

About the Job: We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply.

Key Responsibilities:
- Dataiku Leadership: Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions.
- Data Pipeline Development: Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources.
- Data Modeling & Architecture: Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance.
- ETL/ELT Expertise: Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility.
- Gen AI Integration: Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features.
- Programming & Scripting: Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions.
- Cloud Platform Deployment: Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency.
- Data Quality & Governance: Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices.
- Collaboration & Mentorship: Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members.
- Performance Optimization: Continuously monitor and optimize the performance of data pipelines and data systems.

Required Skills & Experience:
- Proficiency in Dataiku: Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications.
- Expertise in Data Modeling: Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures.
- ETL/ELT Processes & Tools: Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.).
- Familiarity with LLM Mesh: Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration.
- Programming Languages: Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions.
- Cloud Platforms: Knowledge of and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.).
- Gen AI Concepts: Basic understanding of Generative AI concepts and their potential applications in data engineering.
- Problem-Solving: Excellent analytical and problem-solving skills with a keen eye for detail.
- Communication: Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.

Bonus Points (Nice to Have):
- Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake).
- Familiarity with data governance and data security best practices.
- Experience with MLOps principles and tools.
- Contributions to open-source projects related to data engineering or AI.

Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
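Dataiku pipelines are usually assembled in its visual interface, but the Python and SQL skills this listing calls out map onto ordinary ETL code. Below is a minimal, generic extract-transform-load sketch in Python using pandas and SQLite; the file, column, and table names are illustrative assumptions, not part of the employer's stack.

```python
# Generic ETL sketch (not Dataiku-specific): extract a CSV, apply a simple
# transformation, and load the result into a local SQLite table.
import sqlite3
import pandas as pd

def run_etl(csv_path: str = "orders.csv", db_path: str = "warehouse.db") -> int:
    # Extract: read the raw file (path is a placeholder).
    df = pd.read_csv(csv_path)

    # Transform: standardize column names and derive a revenue column.
    df.columns = [c.strip().lower() for c in df.columns]
    df["revenue"] = df["quantity"] * df["unit_price"]
    df = df.dropna(subset=["order_id"])

    # Load: write into a target table, replacing any previous load.
    with sqlite3.connect(db_path) as conn:
        df.to_sql("fct_orders", conn, if_exists="replace", index=False)
    return len(df)

if __name__ == "__main__":
    print(f"Loaded {run_etl()} rows")
```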

Posted 1 week ago

Apply

6.0 - 9.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Career Category: Engineering

Job Description

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas - Oncology, Inflammation, General Medicine, and Rare Disease - we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

Basic Qualifications:
- Minimum experience of 6-9 years
- Must-have skills: hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL); proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, and cloud data platforms
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
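As a hedged illustration of the PySpark skills this listing emphasizes (Databricks-style pipelines with data-quality checks built in), here is a minimal local sketch; the column names and the null/duplicate rules are illustrative assumptions, not the employer's actual pipeline.

```python
# Minimal PySpark sketch: load a dataset, apply two simple data-quality
# checks (no null keys, no duplicate keys), and write the clean rows out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-demo").getOrCreate()

# Illustrative input path and schema; replace with real source tables.
df = spark.read.option("header", True).csv("records_raw.csv")

# Rule 1: primary key must be present.
missing_key = df.filter(F.col("record_id").isNull()).count()

# Rule 2: primary key must be unique.
dup_keys = (df.groupBy("record_id").count()
              .filter(F.col("count") > 1).count())

if missing_key or dup_keys:
    print(f"DQ failures: {missing_key} null keys, {dup_keys} duplicate keys")

clean = df.filter(F.col("record_id").isNotNull()).dropDuplicates(["record_id"])
clean.write.mode("overwrite").parquet("records_clean")
```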

Posted 1 week ago

Apply

3.0 - 8.0 years

8 - 11 Lacs

Pune

Work from Office

Job purpose

The Service Contract Support Specialist will have ownership of a designated service contract base and will be responsible for driving the renewal process for those contracts with business stakeholders. They will also have ownership of any changes occurring within the contract lifecycle: parts updates, change orders, cancellations, and monitoring data quality.

The main objectives are:
- Create pricing and documentation for the assigned contract base, accurately and on time, to ensure proposals are sent to business stakeholders on time, following GSM processes.
- Drive contract renewals correctly in CRM with SOLs, with all data and required documentation, to ensure no delays/errors are incurred during booking.
- Take ownership of any changes occurring within the contract lifecycle: parts updates, change orders, cancellations, and monitoring data quality. This will include the first year of newly booked contracts, which have been handed over from the Contract Proposal Team.

Primary responsibilities
- Develop a clear understanding of the assigned contract base
- Develop and maintain good working relationships with key personnel within relevant Honeywell teams, including Service Operation Leaders, Field Service Managers and any other supporting functions (GCC, ISLC, CPT)
- Proactively drive assigned renewals with self and other stakeholders, to adhere to renewal task due dates and RNOC given to SOL SLAs
- Maintain accurate and timely information in CRM for renewals, including attaching documentation for all stages of the renewal process
- Update opportunity Next Step comments weekly for all renewals in progress and against a CSS renewal milestone
- Provide accurate updates on each contract renewal and any issues during the weekly MOS call with the Service Contract Support Pole Lead
- Escalate issues which may delay the renewal process to the Service Contract Support Pole Lead in a timely manner - do not wait for the next MOS call
- Maintain good knowledge of the renewal process SOP and Work Instructions
- Ensure that a renewal opportunity exists and is linked to any renewal case/PSC in progress and is also linked to the service contract in CRM
- Identify scope for renewal of the designated service contract base and work with the Service Operations Leader to validate that scope during the weekly MOS with the SOL
- Ensure a renewal case is created for each active renewal entitlement in CRM
- Price scope accurately and obtain proposals from other departments (Cyber, HCP, Third Party) when needed for inclusion in the pricing tool
- Ensure pricing matches between the pricing tool and the PSC
- Obtain financial approval for all renewals before issuing the proposal to the Service Operations Leader
- Create accurate proposals and/or other documentation for the Service Operations Leader
- When a customer PO is received, check details on the PO against pricing tools and the proposal, including sold-to party, payment terms, and invoicing frequency
- Create an accurate and complete booking package to hand over the renewal for financial booking in CRM and ERP and follow-on activities (critical spare parts setup, third-party vendor purchase orders, SOFs and any other special instructions)
- Continuously learn the renewal process, pricing tools and CRM to identify possible improvement areas within the renewal process/tools
- Create and issue a Welcome Packet to the SOL within 7 days of contract booking (excluding exceptions)
- Take part in tools development and UAT when needed, to support enhancements and to continuously learn new functionality
- Cover absences for CSS colleagues as and when needed, to keep renewals moving forward
- Ensure in-progress work is handed over to the CSS backup when taking planned leave
- Be involved with the training of new employees, including a buddy system for support with live renewals
- Agree deadlines for tasks/actions required by other stakeholders and keep track of those actions/deadlines/owners via CRM or RAIL
- Continually develop own knowledge and skills to support the current role and career path
- Ensure any changes made to the VRW asset list during booking are communicated back to the Asset Support Team, to ensure correct data alignment
- Contact the Service Contract Pole Lead as the first point of contact on any issues or questions
- Proactively drive own IDP, goals and KPIs to meet targets
- Hold quarterly meetings with the Direct Manager to drive own Individual Development Plan
- Use dashboards available in SF and Power BI to drive renewal tasks to on-time completion
- Drive CSS pricing with the SOL, so that local pricing is not used, excluding agreed countries
- Support standardization in the Contract Renewal process by developing reusable standard documents like Standard Operating Procedures (SOP), Self Learning Packages (SLP), checklists, guidelines, etc.
- Provide technical guidance to other team members for different Contract Renewal entitlements and steps
- Collect overall contract renewal data, prepare status/progress reports and present to the GBE team

3. Principal Networks & Contact Links
Internal: Service Contract Pole Operations Manager; Service Contract Support Pole Lead (Matrix Manager - first point of escalation); Service Operation Leaders; Regional Service Operations Managers; Field Service Manager; Global Customer Care; A360 Performance Managers; ISA Managers; Asset Support Team; Contract Proposal Team; ISLC
External: None

4. Supervisory Responsibilities
None

5. Geographic Scope & Travel Requirements
- Located within a central location (Hadapsar, Pune, India); adherence to local office working policy
- Typically assigned to a particular pole, handling # service contracts within the pole
- Working hours: afternoon to midnight shift (2 PM to 6 PM from office and 8 PM to 12:00 AM from home). This can be changed based on organization policy and the pole in which the candidate is working
- Travel not required for primary tasks; on an exception basis for secondary tasks (e.g., training/workshops)

6. Key Performance Measures
- RNOC given to SOL as per current SLA
- Zero renewal cases without a renewal opportunity
- 100% welcome packets issued where needed, excluding exceptions
- 100% renewal cases for active renewal entitlements
- CPQ adoption as per plan
- PSC rejections due to CSS error - corrective actions
- Weekly update of Next Step Comment

1. Education Required
Bachelor's Degree - administrative or technical; OR 3-4 years Honeywell Process Solutions / LSS experience in similar positions

2. Work Experience Required
- 7-8 years of experience in a process controls/pricing-proposal environment
- 3-4 years of experience in the Honeywell LSS organization (preferred, not required)
- Excellent working knowledge of SFDC, CPQ and SAP, MS Word and MS Excel

3. Technical Skills & Specific Knowledge Required
- Strong math skills, including basic commercial awareness (booking margins, cash flow)
- Basic knowledge of pricing of service agreements

4. Behavioural Competencies Required
- Able to forge strong internal business relationships and deliver on commitments
- Demonstrates a strong commercial awareness
- Excellent interpersonal skills as well as good verbal, written and presentation skills
- Ability to multi-task and prioritise work
- Self-motivated and able to work with minimum supervision
- Demonstrates a high level of planning & organisation skills daily
- Highly customer-focused approach, demonstrating success through a Voice of the Customer approach daily
- Highly self-aware, recognising the impact of approach and behaviours on peers, direct reports, customers and other internal and external contacts
- Ability to work within a remote team and support each other when needed
- Daily demonstration of the Honeywell Behaviours

5. Language Requirements
Fluent in English

Posted 1 week ago

Apply

7.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Remote

Employment Type: Contract (Remote)

Job Summary: We are looking for a highly skilled Data Engineer / Data Modeler with strong experience in Snowflake, DBT, and GCP to support our data infrastructure and modeling initiatives. The ideal candidate should possess excellent SQL skills, hands-on experience with Erwin Data Modeler, and a strong background in modern data architectures and data modeling techniques.

Key Responsibilities:
- Design and implement scalable data models using Snowflake and Erwin Data Modeler.
- Create, maintain, and enhance data pipelines using DBT and GCP (BigQuery, Cloud Storage, Dataflow).
- Perform reverse engineering on existing systems (e.g., Sailfish/DDMS) using DBeaver or similar tools to understand and rebuild data models.
- Develop efficient SQL queries and stored procedures for data transformation, quality, and validation.
- Collaborate with business analysts and stakeholders to gather data requirements and convert them into physical and logical models.
- Ensure performance tuning, security, and optimization of the Snowflake data warehouse.
- Document metadata, data lineage, and business logic behind data structures and flows.
- Participate in code reviews, enforce coding standards, and provide best practices for data modeling and governance.

Must-Have Skills:
- Snowflake architecture, schema design, and data warehouse experience.
- DBT (Data Build Tool) for data transformation and pipeline development.
- Strong expertise in SQL (query optimization, complex joins, window functions, etc.).
- Hands-on experience with Erwin Data Modeler (logical and physical modeling).
- Experience with GCP (BigQuery, Cloud Composer, Cloud Storage).
- Experience in reverse engineering legacy systems like Sailfish or DDMS using DBeaver.

Good to Have:
- Experience with CI/CD tools and DevOps for data environments.
- Familiarity with data governance, security, and privacy practices.
- Exposure to Agile methodologies and working in distributed teams.
- Knowledge of Python for data engineering tasks and orchestration scripts.

Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and stakeholder management.
- Self-driven with the ability to work independently in a remote setup.
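The "window functions" called out under SQL expertise can be shown with a short, hedged sketch: the query below ranks each customer's orders by amount. The table and column names are invented for illustration, and the snippet runs against SQLite purely so it is self-contained; the same SQL pattern applies in Snowflake or BigQuery.

```python
# Self-contained demo of a SQL window function (ROW_NUMBER) using SQLite,
# purely to keep the example runnable; the SQL itself is portable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer_id INT, order_id INT, amount REAL);
INSERT INTO orders VALUES (1, 10, 120.0), (1, 11, 80.0), (2, 12, 200.0), (2, 13, 350.0);
""")

# Rank each customer's orders by amount, largest first.
rows = conn.execute("""
SELECT customer_id,
       order_id,
       amount,
       ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rnk
FROM orders
""").fetchall()

for r in rows:
    print(r)
```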

Posted 1 week ago

Apply

8.0 - 10.0 years

7 Lacs

Pune

Work from Office

About Tarento: Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions. We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you'll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.

Job Details:

Experience: 8+ years of experience predominantly in data-related disciplines such as data governance, data quality and data cleansing in the oil and gas or financial services domain

Roles & Responsibilities:
- Experience of working on data management tools such as Alation and MDG
- Demonstrate deep understanding of the data governance framework and play a key SME role supporting the Data Governance Manager in designing processes for consistent implementation
- Good understanding of data visualization platforms such as Power BI, Tableau or Qlikview
- Exposure to data analytics, machine learning, and artificial intelligence
- In-depth understanding of procurement, finance, and customer business processes
- Expert knowledge of data governance concepts around data definition and catalog, data ownership, data lineage, data policies and controls, data monitoring and data governance forums
- Partner with the business and program teams to document the business data glossary for the assigned domain by capturing data definitions, data standards, data lineage, data quality rules and KPIs. Ensure the data glossary always remains up to date by following stringent change governance.
- Ensure smooth onboarding for data owners and data stewards by providing them the necessary training to carry out their roles effectively. Engage with them on a regular basis to provide progress updates and to seek support to eliminate impediments, if any.
- Extensive knowledge of Customer master and Material master data, including integration with upstream and downstream legacy systems
- Ensure adherence to policies related to data privacy, data lifecycle management and data quality management for the assigned data asset
- Build a rapport with business stakeholders, the technology team, the program team and the wider digital solution and transformation team to identify opportunities and areas to make a difference through the implementation of the data governance framework
- Deep knowledge of SAP ERP and associated data structures
- Must have been part of large, multi-year transformational change across multiple geographies and multiple data domains
- Comfortable interacting with senior stakeholders and chairing meetings/trainings related to data governance

Soft Skills: Active listening, communication and collaboration, presentation, problem solving, stakeholder management, project management. Domain knowledge [Procurement, Finance, Customer], business acumen, critical thinking, storytelling. Awareness of best practices and emerging technologies in the data management and data analytics space.

Posted 1 week ago

Apply

9.0 - 11.0 years

12 - 16 Lacs

Pune

Work from Office

About Tarento: Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions. We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you'll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.

Job Details:

Experience: 9+ years of experience predominantly in data-related disciplines such as data governance, SAP master data and data quality in the oil and gas or financial services domain

Technology: Deep knowledge of SAP ERP and associated data structures

Job Description:
- Coordinate with Data Stewards/Data Owners to enable identification of critical data elements for SAP master data - Supplier/Finance/Bank master.
- Develop and maintain a business-facing data glossary and data catalog for SAP master data (Supplier, Customer, Finance (GL, Cost Center, Profit Center, etc.)), capturing data definitions, lineage, and usage for the relevant SAP master data.
- Develop and implement data governance policies, standards, and processes to ensure data quality, data management, and compliance for the relevant SAP master data (Finance, Supplier and Customer master data).
- Develop both end-state and interim-state architecture for master data, ensuring alignment with business requirements and industry best practices.
- Define and implement data models that align with business needs, and gather requirements for master data structures. Design scalable and maintainable data models by ensuring data creation through a single source of truth.
- Conduct data quality assessments and implement corrective actions to address data quality issues.
- Collaborate with cross-functional teams to ensure data governance practices are integrated into all relevant SAP business processes.
- Manage data cataloging and lineage to provide visibility into data assets, their origins, and transformations in the SAP environment.
- Facilitate governance forums, data domain councils, and change advisory boards to review data issues, standards, and continuous improvements.
- Collaborate with the Data Governance Manager to advance the data governance agenda.
- Prepare data documentation, including data models, process flows, governance policies, and stewardship responsibilities.
- Collaborate with IT, data management, and business units to implement data governance best practices and migrate from ECC to S/4 MDG.
- Monitor data governance activities, measure progress, and report on key metrics to senior management.
- Conduct training sessions and create awareness programs to promote data governance within the organization.
- Demonstrate deep understanding of SAP (and other ERP systems such as JD Edwards) master data structures such as Vendor, Customer, Cost Center, Profit Center, GL Accounts, etc.

Primary Skills:
- Experience of implementing data governance in an SAP environment for both transactional and master data
- Expert knowledge of data governance concepts around data definition and catalog, data ownership, data lineage, data policies and controls, data monitoring and data governance forums
- Ability to influence senior stakeholders and key business contacts to gather and review the requirements for MDG
- Proven experience in driving master data solutioning to implement S/4HANA greenfield
- Strong knowledge of SAP peripheral systems and good understanding of the upstream and downstream impact of master data
- Strong understanding of master data attributes and their impact
- Strong analytical and problem-solving abilities

Soft Skills: Active listening, communication and collaboration, presentation, problem solving, team management, stakeholder management, project management. Domain knowledge [Procurement, Finance, Customer], business acumen, critical thinking, storytelling. Stay updated with industry trends, best practices and emerging technologies in the data management and data analytics space.

Posted 1 week ago

Apply

2.0 - 5.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Some careers have more impact than others. If you're looking for a career where you can make a real impression, join HSBC and discover how valued you'll be. HSBC is one of the largest banking and financial services organizations in the world, with operations in 62 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Cross Controls Business Analyst.

Principal responsibilities
- The role holder will report to their local Functional Manager within the Data and Analytics Office (DAO). He/she will support the design, refinement, implementation, and operation of controls, with a particular focus on deliverables that span across the data control suite. This may include areas such as end-user computing, third-party-related data risk management, and automation.
- The role holder will support the Global Work Stream while operating from India to engage stakeholders and SMEs from Businesses and Functions to understand and assess regulatory and HSBC's Data Management policy requirements.
- The role holder will be responsible for analysis and documentation of data management best practices, including data standards, lineage, quality and process-level controls, and will support key control design decisions. The role holder is expected to apply and uphold HSBC's standards and guidelines at all times.
- Support the delivery of key cross data control initiatives, including supporting operating model changes to achieve the team objectives.
- Develop and maintain key control documentation, and support the execution of end-user computing controls and governance, including working with businesses and functions to ensure policy adherence.
- Run and contribute to delivery pods and internal/external staff supporting control design and operation.
- Drive business and data conversations to close process gaps through business and technology improvement; support the defining of requirements and propose optimizations to controls and processes.

Requirements
- 8+ years of relevant experience.
- Bachelor's or Master's degree from a reputed university with specialization in a numerical discipline and concentration in computer science, information systems or other engineering specializations.
- Detailed knowledge of the data management framework: data governance, data privacy, business architecture and data quality.
- Strong analytical skills with business analysis aptitude. Ability to comprehend an intricate and diverse range of business problems, analyze them with limited or complex data, and provide a feasible solution framework.
- Experience of working on Data/Information Management projects of varying complexities.
- Knowledge and understanding of financial-services/banking operations in a global bank.
- Understanding of accounting principles, data flows and the regulatory landscape: BCBS 239, B3R, CCAR, IFRS 9, etc.

You'll achieve more at HSBC. HSBC is an equal opportunity employer committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. We encourage applications from all suitably qualified persons irrespective of, but not limited to, their gender or genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, people with disabilities, color, national origin, veteran status, etc. We consider all applications based on merit and suitability to the role. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Posted 1 week ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our client's business requirements
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Design, develop, and maintain Ab Initio graphs for extracting, transforming, and loading (ETL) data from diverse sources to various target systems. Implement data quality and validation processes within Ab Initio.
- Data modelling and analysis: collaborate with data architects and business analysts to understand data requirements and translate them into effective ETL processes. Analyse and model data to ensure optimal ETL design and performance.
- Ab Initio components: utilize Ab Initio components such as Transform Functions, Rollup, Join, Normalize, and others to build scalable and efficient data integration solutions. Implement best practices for reusable Ab Initio components.

Preferred technical and professional experience:
- Optimize Ab Initio graphs for performance, ensuring efficient data processing and minimal resource utilization. Conduct performance tuning and troubleshooting as needed.
- Collaboration: work closely with cross-functional teams, including data analysts, database administrators, and quality assurance, to ensure seamless integration of ETL processes. Participate in design reviews and provide technical expertise to enhance overall solution quality.
- Documentation

Posted 1 week ago

Apply

3.0 - 5.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Educational qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities

A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure effective design, development, validation and support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate the same into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Skills: Technology - Data Management - Data Integration Administration - Informatica Administration
Preferred Skills: Technology - Data Management - Data Integration Administration - Informatica Administration

Posted 1 week ago

Apply

3.0 - 4.0 years

5 - 6 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Job Description

Circle K (part of Alimentation Couche-Tard Inc. (ACT)) is a global Fortune 200 company. A leader in the convenience store and fuel space, it has a footprint across 31 countries and territories. The Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Data Scientist will be a key player on this team that will help grow analytics globally at ACT. This is a unique opportunity to be a part of an experienced team of data scientists and analysts within a large organization. The Data Scientist is responsible for delivering advanced analytics and insights that drive business results and operational excellence for our dynamic and forward-thinking Merchandise team in Europe. The ideal candidate should possess both technical capabilities and commercial savviness, and should be able to drive independent analysis as well as work effectively in a group.

About the role

We are looking for an individual who is a proven problem solver with exceptional critical thinking ability. The candidate should have a high sense of curiosity and be comfortable with ambiguity when faced with a difficult challenge. Additionally, the candidate should possess excellent communication skills, the ability to collaborate with others, and the ability to simply and effectively communicate complex concepts to a non-technical audience.

Roles & Responsibilities

Analytics (Data & Insights)
- Evaluate performance of categories and activities, using proven and advanced analytical methods
- Support stakeholders with actionable insights based on transactional, financial or customer data on an ongoing basis
- Oversee the design and measurement of experiments and pilots
- Initiate and conduct advanced analytics projects such as clustering, forecasting, and causal impact
- Build highly impactful and intuitive dashboards that bring the underlying data to life through insights

Operational Excellence
- Improve data quality by using and improving tools to automatically detect issues
- Develop analytical solutions or dashboards using user-centric design techniques in alignment with ACT's protocol
- Study industry/organization benchmarks and design/develop analytical solutions to monitor or improve business performance across retail, marketing, and other business areas

Stakeholder Management
- Work with peers, Functional Consultants, Data Engineers, and cross-functional teams to lead/support the complete lifecycle of analytical applications, from development of mock-ups and storyboards to complete production-ready applications
- Provide regular updates to stakeholders to simplify and clarify complex concepts, and communicate the output of work to the business
- Create compelling documentation or artefacts that connect the business to the solutions
- Coordinate internally to share key learnings with other teams and lead to accelerated business performance
- Be an advocate for a data-driven culture among the stakeholders

Job Requirements

Education: A higher degree in an analytical discipline like Finance, Mathematics, Statistics, Engineering, or similar

Relevant Experience:
- 3-4 years for Data Scientist
- Relevant working experience in a quantitative/applied analytics role
- Experience with programming, and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g. by using Spark / SQL / Python
- Excellent communication skills in English, both verbal and written

Behavioural Skills: Delivery excellence, business disposition, social intelligence, innovation and agility

Knowledge:
- Functional analytics (retail analytics, supply chain analytics, marketing analytics, customer analytics, etc.)
- Working understanding of statistical modelling and time series analysis using analytical tools (Python, PySpark, R, etc.)
- Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference)
- Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference
- Enterprise reporting systems, relational database management systems (MySQL, Microsoft SQL Server, etc.)
- Business intelligence & reporting (Power BI)
- Cloud computing services in Azure/AWS/GCP for analytics

#LI-DS1
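The A/B testing and hypothesis-testing knowledge this listing mentions can be illustrated with a short, hedged sketch: a two-proportion z-test comparing conversion rates between a control and a variant. The counts are made up for illustration, and statsmodels is an assumed library choice rather than one named by the employer.

```python
# Minimal A/B test sketch: compare conversion rates of two store groups
# with a two-proportion z-test. All numbers are illustrative.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 480]   # successes in control, variant
visitors = [10000, 10000]  # sample sizes

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A common (assumed) decision rule: reject the null of equal rates at alpha = 0.05.
if p_value < 0.05:
    print("Statistically significant difference between variants")
else:
    print("No significant difference detected")
```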

Posted 1 week ago

Apply

7.0 - 13.0 years

9 - 15 Lacs

Bengaluru

Work from Office

Required Skills
- Technology | Infrastructure Monitoring Tool - Splunk
- Domain | IT in Banking | Customer Support
- Behavioral | Aptitude | Communication

Education Qualification: Any Graduate (Engineering / Science)
Certification Mandatory / Desirable: Technology | SESC/SE

As a Level 3 Splunk Administrator, you will be responsible for advanced configuration, optimization, and management of Splunk environments for data analytics, log management, and security monitoring. You will lead the development of strategies, provide expert support, and ensure the effectiveness of our Splunk solutions.

Key Responsibilities:

1. Splunk Environment Design and Optimization:
- Lead the design, architecture, and advanced optimization of Splunk Enterprise, Universal Forwarders, and Splunk apps.
- Customize Splunk settings, indexes, and data sources for maximum performance, scalability, and reliability.

2. Data Ingestion and Indexing:
- Design and implement advanced data ingestion strategies from various sources into Splunk, ensuring data quality and reliability.
- Oversee data indexing and categorization for efficient search, analysis, and correlation.

3. Advanced Searches and Alerts:
- Perform complex searches, queries, and correlations in Splunk to retrieve and analyze data.
- Configure advanced alerts, notifications, and incident response workflows for comprehensive security and performance monitoring.

4. Data Analysis and Reporting:
- Utilize advanced data analysis techniques, statistical analysis, and machine learning to derive actionable insights from Splunk data.
- Create advanced reports, dashboards, and predictive analytics for improved data analysis and incident management.

5. Automation and Scripting:
- Develop and maintain advanced automation scripts and apps using Splunk SPL, REST API, and other relevant technologies to streamline data collection and incident response.
- Implement automation for proactive issue resolution and resource provisioning.

6. Documentation and Knowledge Sharing:
- Maintain comprehensive documentation of Splunk configurations, changes, and best practices.
- Mentor and train junior administrators, sharing expertise, best practices, and providing advanced training.
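As a hedged illustration of the REST-API automation mentioned in item 5, the sketch below submits an SPL search to Splunk's search-jobs endpoint with the requests library and polls for results. Host, credentials, and the SPL query are placeholders, and the exact parameters should be checked against the Splunk REST API documentation for the deployed version.

```python
# Sketch: kick off a Splunk search job over the management REST API and poll
# for results. Host, credentials, and the SPL query are placeholders.
import time
import requests

BASE = "https://splunk.example.com:8089"          # assumed management port
AUTH = ("svc_automation", "REDACTED")             # placeholder credentials
QUERY = "search index=_internal log_level=ERROR earliest=-15m | stats count by component"

# Submit the search job (verify=False only because this is a local sketch).
resp = requests.post(f"{BASE}/services/search/jobs",
                     data={"search": QUERY, "output_mode": "json"},
                     auth=AUTH, verify=False)
sid = resp.json()["sid"]

# Poll until the job is done, then fetch results as JSON.
while True:
    status = requests.get(f"{BASE}/services/search/jobs/{sid}",
                          params={"output_mode": "json"}, auth=AUTH, verify=False)
    if status.json()["entry"][0]["content"]["isDone"]:
        break
    time.sleep(2)

results = requests.get(f"{BASE}/services/search/jobs/{sid}/results",
                       params={"output_mode": "json"}, auth=AUTH, verify=False)
for row in results.json().get("results", []):
    print(row)
```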

Posted 1 week ago

Apply

7.0 - 9.0 years

9 - 11 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Job Description

Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company, a leader in the convenience store and fuel space with over 16,700 stores in 31 countries, serving more than 9 million customers each day. At Circle K, we are building a best-in-class global data engineering practice to support intelligent business decision-making and drive value across our retail ecosystem. As we scale our engineering capabilities, we're seeking a Lead Data Engineer to serve as both a technical leader and people coach for our India-based Data Enablement pod. This role will oversee the design, delivery, and maintenance of critical cross-functional datasets and reusable data assets while also managing a group of talented engineers in India. This position plays a dual role: contributing hands-on to engineering execution while mentoring and developing engineers in their technical careers.

About the role

The ideal candidate combines deep technical acumen, stakeholder awareness, and a people-first leadership mindset. You'll collaborate with global tech leads, managers, platform teams, and business analysts to build trusted, performant data pipelines that serve use cases beyond traditional data domains.

Responsibilities
- Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
- Lead the technical execution of non-domain-specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
- Architect data models and reusable layers consumed by multiple downstream pods
- Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
- Mentor and coach the team
- Partner with product and platform leaders to ensure engineering consistency and delivery excellence
- Act as an L3 escalation point for operational data issues impacting foundational pipelines
- Own engineering best practices, sprint planning, and quality across the Enablement pod
- Contribute to platform discussions and architectural decisions across regions

Job Requirements

Education: Bachelor's or master's degree in computer science, engineering, or a related field

Relevant Experience:
- 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
- Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse

Knowledge and Preferred Skills:
- Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
- Solid grasp of data governance, metadata tagging, and role-based access control
- Proven ability to mentor and grow engineers in a matrixed or global environment
- Strong verbal and written communication skills, with the ability to operate cross-functionally
- Certifications in Azure, Databricks, or Snowflake are a plus
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
- Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control tools, Master Data Management (MDM) and data quality tools
- Strong experience in ETL/ELT development, QA and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
- Hands-on experience with databases (Azure SQL DB, Snowflake, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting
- ADF, Databricks and Azure certification is a plus

Technologies we use: Databricks, Azure SQL DW/Synapse, Snowflake, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI

#LI-DS1
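The "parameterization" and "auditability frameworks" points above can be illustrated with a hedged sketch: a small Python wrapper that runs a pipeline step from a parameter dictionary and records an audit row (run id, row counts, timings). Everything here, including the function names and the audit record layout, is an illustrative assumption rather than the employer's actual framework.

```python
# Sketch of a parameterized, auditable pipeline step. The parameters and
# audit record layout are illustrative assumptions.
import json
import time
import uuid
from datetime import datetime, timezone

def run_step(params: dict, transform) -> dict:
    """Run one pipeline step described by `params` and return an audit record."""
    run_id = str(uuid.uuid4())
    started = datetime.now(timezone.utc).isoformat()
    t0 = time.time()

    rows_in = params.get("rows", [])          # stand-in for a real source read
    rows_out = [transform(r) for r in rows_in]

    audit = {
        "run_id": run_id,
        "step": params["step_name"],
        "started_at": started,
        "duration_s": round(time.time() - t0, 3),
        "rows_in": len(rows_in),
        "rows_out": len(rows_out),
        "status": "succeeded",
    }
    print(json.dumps(audit))                  # in practice, write to an audit table
    return audit

# Example usage with a trivial transform.
run_step({"step_name": "standardize_tlog", "rows": [{"amt": "1.5"}, {"amt": "2"}]},
         transform=lambda r: {"amount": float(r["amt"])})
```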

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

About Atos

Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. 10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is a SE (Societas Europaea) and listed on Euronext Paris.

Data Streaming Engineer - Required Skills and Competencies:
- Experience: 3+ years
- Expertise in Python is a must
- SQL (should be able to write complex SQL queries) is a must
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a must
- Hands-on expertise in Apache Kafka is a must
- Data lake development experience
- Orchestration (Apache Airflow is preferred)
- Spark and Hive: optimization of Spark/PySpark and Hive apps
- Trino / AWS Athena (good to have)
- Snowflake (good to have)
- Data quality (good to have)
- File storage (S3 is good to have)

Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment
- Wellbeing programs & work-life balance - integration and passion-sharing events
- Attractive salary and company initiative benefits
- Courses and conferences
- Hybrid work culture

Here at Atos, diversity and inclusion are embedded in our DNA. Read more about our commitment to a fair work environment for all.
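As a hedged sketch of the Spark Streaming plus Kafka combination this listing requires, the PySpark snippet below reads a Kafka topic with Structured Streaming and writes a running count to the console. The broker address and topic name are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Structured Streaming sketch: consume a Kafka topic and count events per key.
# Requires the spark-sql-kafka package on the Spark classpath; broker/topic are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("kafka-streaming-demo")
         .getOrCreate())

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
          .option("subscribe", "orders")                      # placeholder topic
          .load())

# Kafka messages arrive as binary key/value; cast the key and aggregate.
counts = (events
          .select(F.col("key").cast("string").alias("order_key"))
          .groupBy("order_key")
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```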

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

About Atos

Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. 10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is a SE (Societas Europaea) and listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers, employees, and members of societies at large to live, work and develop sustainably in a safe and secure information space.

Data Streaming Engineer - Required Skills and Competencies:
- Experience: 3+ years
- Expertise in Python is a must
- SQL (should be able to write complex SQL queries) is a must
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a must
- Hands-on expertise in Apache Kafka is a must
- Data lake development experience
- Orchestration (Apache Airflow is preferred)
- Spark and Hive: optimization of Spark/PySpark and Hive apps
- Trino / AWS Athena (good to have)
- Snowflake (good to have)
- Data quality (good to have)
- File storage (S3 is good to have)

Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment
- Wellbeing programs & work-life balance - integration and passion-sharing events
- Attractive salary and company initiative benefits
- Courses and conferences
- Hybrid work culture

Here at Atos, diversity and inclusion are embedded in our DNA. Read more about our commitment to a fair work environment for all. Atos is a recognized leader in its industry across Environment, Social and Governance (ESG) criteria. Find out more about our CSR commitment. Choose your future. Choose Atos.

Posted 1 week ago

Apply

1.0 - 6.0 years

3 - 8 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Job Description: Circle K (part of the Alimentation Couche-Tard group) is a global leader in the convenience store and fuel space, with a footprint across 31 countries and territories. At the Circle K Business Centre in India, we are #OneTeam using the power of data analytics to drive our decisions and strengthen Circle K's global capabilities. We make it easy for our customers all over the world - we partner with the business to empower the right decisions and deliver effectively, while rapidly unlocking value for our customers across the enterprise. Our team in India is an integral part of our talent ecosystem that helps advance us on our journey to becoming a data-centric company. The future of data analytics at Circle K is bright - and we're only just getting started.

About the role: The India Data & Analytics Global Capability Centre is an integral part of ACT's Global Data & Analytics Team, and the Associate Data Analyst will be a key player on this team, helping grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units. The incumbent will be responsible for deploying analytics algorithms and tools on the chosen tech stack for efficient and effective delivery. Responsibilities include delivering insights and targeted action plans, addressing specific areas of risk and opportunity, working cross-functionally with business and technology teams, and leveraging the support of global teams for analysis and data.

Roles & Responsibilities

Analytics (Data & Insights)
- Clean and organize large datasets for analysis and visualization using statistical methods; verify and ensure accuracy, integrity, and consistency of data
- Identify trends and patterns in data and use this information to drive business decisions
- Create requirement artefacts, e.g., functional specification documents, use cases, requirement traceability matrices, business test cases, process mapping documents, and user stories for analytics projects
- Build highly impactful and intuitive dashboards that bring the underlying data to life through insights
- Generate ad-hoc analysis for leadership to deliver relevant, action-oriented, and innovative recommendations

Operational Excellence
- Improve data quality by using and improving tools to automatically detect issues
- Develop analytical solutions or dashboards using user-centric design techniques in alignment with ACT's protocol
- Study industry/organization benchmarks and design/develop analytical solutions to monitor or improve business performance across retail, marketing, and other business areas

Stakeholder Management
- Work with high-performing Functional Consultants, Data Engineers, and cross-functional teams to lead/support the complete lifecycle of visual analytical applications, from development of mock-ups and storyboards to complete production-ready applications
- Provide regular updates to stakeholders to simplify and clarify complex concepts, and communicate the output of work to the business
- Create compelling documentation or artefacts that connect the business to the solutions
- Coordinate internally to share key learnings with other teams and accelerate business performance

Behavioral Skills
- Delivery excellence
- Business disposition
- Social intelligence
- Innovation and agility

Knowledge
- Functional Analytics (Retail Analytics, Supply Chain Analytics, Marketing Analytics, Customer Analytics, etc.)
- Working understanding of statistical modelling using analytical tools (Python, PySpark, R, etc.)
- Enterprise reporting systems; relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems
- Business intelligence & reporting (Power BI, Tableau, Alteryx, etc.)
- Cloud computing services in Azure/AWS/GCP for analytics

Education
- Bachelor's degree in Computer Science, Information Management, or related technical fields

Experience
- 1+ years for Associate Data Analyst
- Relevant working experience in a quantitative/applied analytics role
- Experience with programming and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g., Spark / SQL / Python
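For illustration only (not part of the posting): a minimal sketch of the kind of data cleaning and trend analysis this role describes, using pandas. The file name and column names (store_id, txn_date, sales_amount) are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical transaction extract; column names are illustrative placeholders.
df = pd.read_csv("transactions.csv", parse_dates=["txn_date"])

# Basic cleaning: drop exact duplicates, remove rows missing key fields,
# and filter out impossible values before any analysis.
df = df.drop_duplicates()
df = df.dropna(subset=["store_id", "txn_date", "sales_amount"])
df = df[df["sales_amount"] >= 0]

# Simple trend check: monthly sales per store, with month-over-month change.
monthly = (
    df.set_index("txn_date")
      .groupby("store_id")["sales_amount"]
      .resample("MS")
      .sum()
      .reset_index()
)
monthly["mom_change_pct"] = (
    monthly.groupby("store_id")["sales_amount"].pct_change() * 100
)
print(monthly.head())
```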

Posted 1 week ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Educational: Bachelor of Engineering | Service Line: Infosys Quality Engineering

Responsibilities: A day in the life of an Infoscion - as part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products, investigate any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Preferred Skills: Technology-ETL & Data Quality-ETL - Others

Posted 1 week ago

Apply

3.0 - 12.0 years

5 - 14 Lacs

Gurugram

Work from Office

Responsibilities
- Serve as a core member of an agile team that drives user story analysis and elaboration, and designs and develops responsive web applications using the best engineering practices
- Perform hands-on software development, typically spending most of the time writing code and unit tests, building proofs of concept, conducting code reviews, and testing in ongoing sprints
- Perform ongoing refactoring of code and deliver continuous improvement
- Develop a deep understanding of integrations with other systems and platforms within the supported domains
- Manage your own time, and work well both independently and as part of a team
- Bring a culture of innovation, ideas, and continuous improvement
- Challenge the status quo, demonstrate risk taking, and implement creative ideas
- Work closely with product managers, back-end, and other front-end engineers to implement versatile solutions to tricky web development problems
- Embrace emerging standards while promoting best practices and consistent framework usage

Qualifications:
- BS or MS degree in computer science, computer engineering, or other technical discipline
- Total experience: 3-12 years
- 2+ years of experience working in Java and able to demonstrate good Java knowledge; Java 7 and Java 8 preferred
- Able to demonstrate good web fundamentals and HTTP protocol knowledge
- Good attitude, communication, and willingness to learn and collaborate
- 2+ years of development experience building Java applications in an enterprise setting
- 2+ years of experience developing Java applications with frameworks such as Spring or Spring Boot; Dropwizard experience is a plus
- 2+ years of experience with Test Driven Development (TDD) / Behavior Driven Development (BDD) practices, unit testing, functional testing, system integration testing, regression testing, GUI testing, web service testing, and browser compatibility testing, including frameworks such as Selenium, WebDriverIO, Cucumber, JUnit, and Mockito
- Experience with a continuous integration and continuous delivery environment
- 2+ years working in an Agile or SAFe development environment is a plus

Data Engineer:

Responsibilities
- Design, develop, and maintain data pipelines
- Serve as a core member of an agile team that drives user story analysis and elaboration, and designs and develops responsive web applications using the best engineering practices
- Work closely with data scientists, analysts, and other partners to ensure the flawless flow of data
- Build and optimize reports for analytical and business purposes
- Monitor and resolve data pipeline issues to ensure smooth operation
- Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of data
- Implement data governance policies, access controls, and security measures to protect critical data and ensure compliance
- Develop a deep understanding of integrations with other systems and platforms within the supported domains
- Bring a culture of innovation, ideas, and continuous improvement
- Challenge the status quo, demonstrate risk taking, and implement creative ideas
- Manage your own time, and work well both independently and as part of a team
- Adopt emerging standards while promoting best practices and consistent framework usage
- Work with Product Owners to define requirements for new features and plan increments of work
Qualifications
- BS or MS degree in computer science, computer engineering, or other technical subject area, or equivalent
- 3+ years of work experience (total experience 3-12 years)
- At least 5 years of hands-on experience with SQL, including schema design, query optimization, and performance tuning
- Experience with distributed computing frameworks like Hadoop, Hive, and Spark for processing large-scale data sets
- Proficiency in a programming language such as Python or PySpark for building data pipelines and automation scripts
- Understanding of cloud computing and exposure to any cloud platform (GCP, AWS, or Azure)
- Knowledge of CI/CD, Git commands, and deployment processes
- Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues and optimize data processing workflows
- Excellent communication and collaboration skills

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally.
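For illustration only (not part of the posting): a minimal sketch of the kind of data quality checks this Data Engineer role describes, written in PySpark. The table name, column names, and failure behaviour are hypothetical assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data-quality-checks").getOrCreate()

# Hypothetical input table; in practice this would be the pipeline's source.
df = spark.table("raw.orders")

total = df.count()

# Completeness: key fields must not be null.
null_keys = df.filter(F.col("order_id").isNull() | F.col("customer_id").isNull()).count()

# Accuracy: amounts should never be negative.
negative_amounts = df.filter(F.col("order_amount") < 0).count()

# Consistency: order_id should be unique.
duplicate_ids = total - df.select("order_id").distinct().count()

checks = {
    "null_keys": null_keys,
    "negative_amounts": negative_amounts,
    "duplicate_ids": duplicate_ids,
}

# Fail the pipeline step (or route to quarantine) if any check is violated.
failed = {name: count for name, count in checks.items() if count > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```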

Posted 1 week ago

Apply

5.0 - 12.0 years

7 - 14 Lacs

Hyderabad

Work from Office

The SAP HANA DB Modeler/Developer is responsible for designing, developing, and maintaining data models within the SAP HANA database, utilizing advanced modeling techniques to optimize data access and analysis for reporting and applications, often collaborating with business analysts to translate requirements into efficient database structures while ensuring data integrity and performance within the HANA platform. Must-have skills include: HANA views (analytical, calculation, etc.), SAP HANA XS JavaScript and XS OData services, advanced DB modelling for HANA, and SAP HANA Data Services ETL-based replication, among others. Minimum 2-3 end-to-end implementations.

Key responsibilities may include:

Data Modeling
- Designing and creating complex data models in SAP HANA Studio using analytical views, attribute views, calculation views, and hierarchies to represent business data effectively
- Implementing data transformations, calculations, and data cleansing logic within the HANA model
- Optimizing data structures for fast query performance and efficient data access

Development
- Writing SQL scripts and stored procedures to manipulate and retrieve data from the HANA database
- Developing custom HANA functions (CE functions) for advanced data processing and calculations
- Implementing data loading and ETL processes to populate the HANA database

Performance Tuning
- Analyzing query performance and identifying bottlenecks to optimize data access and query execution
- Implementing indexing strategies and data partitioning for improved query performance

Collaboration
- Working closely with business analysts to understand data requirements and translate them into technical data models
- Collaborating with application developers to integrate HANA data models into applications

Security and Governance
- Implementing data security measures within the HANA database, defining user roles and permissions
- Maintaining data quality and consistency by defining data validation rules

Required Skills:

Technical Skills
- Strong understanding of relational database concepts and data modeling principles
- Expertise in SAP HANA modeling tools and features (HANA Studio, calculation views, analytical views)
- Proficiency in SQL and SQL optimization techniques
- Knowledge of data warehousing concepts and best practices

Soft Skills
- Excellent analytical and problem-solving abilities
- Strong communication skills to collaborate with business users and technical teams
- Ability to work independently and as part of a team
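For illustration only (not part of the posting): a minimal sketch of querying a HANA calculation view from Python via SAP's hdbcli client, the kind of consumption the modeling work above supports. The host, credentials, schema path, and view/column names are hypothetical assumptions.

```python
from hdbcli import dbapi  # SAP's Python client for HANA

# Connection details and the calculation view name are hypothetical placeholders.
conn = dbapi.connect(address="hana-host", port=30015, user="DEV_USER", password="***")
cursor = conn.cursor()

# Activated calculation views are exposed under the _SYS_BIC schema and can be
# queried like tables; filters and aggregations are pushed down into the view.
cursor.execute(
    'SELECT "Region", SUM("NetSales") AS "NetSales" '
    'FROM "_SYS_BIC"."sales.models/CV_SALES_SUMMARY" '
    'WHERE "FiscalYear" = ? '
    'GROUP BY "Region"',
    (2024,),
)
for region, net_sales in cursor.fetchall():
    print(region, net_sales)

cursor.close()
conn.close()
```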

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Mumbai

Work from Office

- Operational excellence with seamless, cost-effective, and efficient lean operations aligned to customer needs and organization strategy, by synergizing the services of partners with quality
- Operational excellence with a strong interlock with Service Providers (SPs) such as OPT, Westcon, Octopian, OEMs, and internal stakeholders
- Ensure KPIs and SLAs are met, SC spend stays within budget, and customer deliverables are maintained by service providers
- Translate customer needs into operational requirements for effective execution by SPs and/or OEMs
- Govern the quality of services delivered by service providers against standards, with robust mechanisms
- Drive transformation and CI projects for cost reductions, optimization, and state-of-the-art lean operations
- Evolution and change management - transition of transformation projects into the operational environment
- SME for organization-wide or functional transformation projects, including digital and data
- Align vendor processes and execution policies to the OBS organizational strategies
- Escalation and exception management with OPT and internal stakeholders
- Identification and execution of repair and reuse opportunities, contributing to the circular economy and green initiatives
- Anticipate changes and skills-enhancement needs for the team and drive such programs
- Stakeholder collaboration and alignment; team leadership, development, and evolution
- Ensure supply chain data quality and a business-analytics-driven culture

Key focus areas: Customer Satisfaction; Spend Control, Cost Reduction & Avoidance; Lean Operations; Transformation & Continuous Improvement Initiatives; SC Risk Management; Lean Methodologies; Governance through Data Analytics; Customer Orientation; Global Delivery & Operations

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Mumbai

Work from Office

The SME - Global Procurement Operations is responsible for overseeing outsourced procurement activities, ensuring operational excellence, managing User Acceptance Testing (UAT), handling critical escalations, and supporting process improvements. The role acts as a key point of contact for outsourced vendors and internal stakeholders, ensuring seamless execution of procurement processes.

- Operational excellence with seamless, cost-effective, and efficient lean operations aligned to customer needs and organization strategy, by synergizing the services of partners with quality
- Operational excellence with a strong interlock with Service Providers (SPs) such as OPT, Westcon, Octopian, OEMs, and internal stakeholders
- Ensure KPIs and SLAs are met, SC spend stays within budget, and customer deliverables are maintained by service providers
- Translate customer needs into operational requirements for effective execution by SPs and/or OEMs
- Govern the quality of services delivered by service providers against standards, with robust mechanisms
- Drive transformation and CI projects for cost reductions, optimization, and state-of-the-art lean operations
- Evolution and change management - transition of transformation projects into the operational environment
- SME for organization-wide or functional transformation projects, including digital and data
- Align vendor processes and execution policies to the OBS organizational strategies
- Escalation and exception management with OPT and internal stakeholders
- Identification and execution of repair and reuse opportunities, contributing to the circular economy and green initiatives
- Anticipate changes and skills-enhancement needs for the team and drive such programs
- Stakeholder collaboration and alignment; team leadership, development, and evolution
- Ensure supply chain data quality and a business-analytics-driven culture

Key focus areas: Customer Satisfaction; Spend Control, Cost Reduction & Avoidance; Lean Operations; Transformation & Continuous Improvement Initiatives; SC Risk Management; Lean Methodologies; Governance through Data Analytics; Customer Orientation; Global Delivery & Operations

Posted 1 week ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Greater Noida

Work from Office

SQL Developer:
- Design and implement relational database structures optimized for performance and scalability
- Develop and maintain complex SQL queries, stored procedures, triggers, and functions
- Optimize database performance through indexing, query tuning, and regular maintenance
- Ensure data integrity, consistency, and security across multiple environments
- Collaborate with cross-functional teams to integrate SQL databases with applications and reporting tools
- Develop and manage ETL (Extract, Transform, Load) processes for data ingestion and transformation
- Monitor and troubleshoot database performance issues
- Automate routine database tasks using scripts and tools
- Document database architecture, processes, and procedures for future reference
- Stay updated with the latest SQL best practices and database technologies
- Data Retrieval: SQL Developers must be able to query large and complex databases to extract relevant data for analysis or reporting
- Data Transformation: They often clean, join, and reshape data using SQL to prepare it for downstream processes like analytics or machine learning
- Performance Optimization: Writing queries that run efficiently is key, especially when dealing with big data or real-time systems
- Understanding of Database Schemas: Knowing how tables relate and how to navigate normalized or denormalized structures is essential

QE:
- Design, develop, and execute test plans and test cases for data pipelines, ETL processes, and data platforms
- Validate data quality, integrity, and consistency across various data sources and destinations
- Automate data validation and testing using tools such as PyTest, Great Expectations, or custom Python/SQL scripts
- Collaborate with data engineers, analysts, and product managers to understand data requirements and ensure test coverage
- Monitor data pipelines and proactively identify data quality issues or anomalies
- Contribute to the development of data quality frameworks and best practices
- Participate in code reviews and provide feedback on data quality and testability
- Strong SQL skills and experience with large-scale data sets
- Proficiency in Python or another scripting language for test automation
- Experience with data testing tools
- Familiarity with cloud platforms and data warehousing solutions
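For illustration only (not part of the posting): a minimal sketch of automated data validation with pytest, one of the tools the QE responsibilities name. The SQLite connection, table names, and column names are hypothetical stand-ins for a real warehouse.

```python
# Illustrative pytest data-validation sketch; the database, tables, and columns
# are hypothetical placeholders for a real source system.
import sqlite3

import pytest


@pytest.fixture(scope="module")
def conn():
    connection = sqlite3.connect("warehouse.db")
    yield connection
    connection.close()


def test_no_null_primary_keys(conn):
    # Completeness check: every row must have a customer_id.
    nulls = conn.execute(
        "SELECT COUNT(*) FROM customers WHERE customer_id IS NULL"
    ).fetchone()[0]
    assert nulls == 0


def test_primary_keys_are_unique(conn):
    # Consistency check: customer_id must be unique.
    total, distinct = conn.execute(
        "SELECT COUNT(customer_id), COUNT(DISTINCT customer_id) FROM customers"
    ).fetchone()
    assert total == distinct


def test_row_counts_match_source(conn):
    # Reconciliation check: the loaded table should not lose rows versus staging.
    staged = conn.execute("SELECT COUNT(*) FROM staging_customers").fetchone()[0]
    loaded = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    assert loaded == staged
```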

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

Bengaluru

Work from Office

Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a wide variety of storage and computation technologies to handle diverse data types and volumes in support of data architecture design.

Key Responsibilities: A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.
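For illustration only (not part of the posting): a minimal sketch of a validate-then-publish step one might run in Azure Databricks before writing curated data to Azure Data Lake Storage Gen2. The storage account, container paths, and column names are hypothetical assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 paths; storage account and container names are placeholders.
source_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/"
target_path = "abfss://curated@examplelake.dfs.core.windows.net/sales/"

df = spark.read.format("parquet").load(source_path)

# Simple validation gate before publishing to the curated zone:
# reject the batch if it is empty or if required columns contain nulls.
required = ["sale_id", "sale_date", "amount"]
if df.count() == 0:
    raise ValueError("Empty batch - nothing to publish")
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in required]
).first().asDict()
bad = {c: n for c, n in null_counts.items() if n and n > 0}
if bad:
    raise ValueError(f"Null values found in required columns: {bad}")

# Publish as Delta for downstream consumers.
df.write.format("delta").mode("overwrite").save(target_path)
```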

Posted 1 week ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Location: Bangalore/Hyderabad/Pune
Experience level: 7+ years

About the Role
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
- Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 8+ years of experience in data architecture, data engineering, or a related field.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- Must have hands-on exposure to Airflow.
- Proven track record of contributing to data projects and working in complex environments.
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
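For illustration only (not part of the posting): a minimal sketch of orchestrating a Snowflake load and transformation from Airflow, since the role combines both. It assumes the apache-airflow-providers-snowflake package; the connection id, stage, tables, and schedule are hypothetical.

```python
# Minimal sketch assuming apache-airflow-providers-snowflake is installed;
# connection id, database objects, and schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="daily_sales_to_snowflake",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_raw = SnowflakeOperator(
        task_id="load_raw_sales",
        snowflake_conn_id="snowflake_default",
        sql="COPY INTO raw.sales FROM @raw.sales_stage FILE_FORMAT = (TYPE = CSV)",
    )

    build_summary = SnowflakeOperator(
        task_id="build_daily_summary",
        snowflake_conn_id="snowflake_default",
        sql="""
            CREATE OR REPLACE TABLE analytics.daily_sales AS
            SELECT sale_date, region, SUM(amount) AS total_amount
            FROM raw.sales
            GROUP BY sale_date, region
        """,
    )

    load_raw >> build_summary
```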

Posted 2 weeks ago

Apply

2.0 - 5.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Cigna TTK Health Insurance Company Limited is looking for a Data Measurement & Reporting Analyst to join our dynamic team and embark on a rewarding career journey.
- Data Collection: Collect and extract data from various sources, such as databases, spreadsheets, and software applications
- Data Analysis: Analyze data to identify trends, patterns, and anomalies, using statistical and data analysis techniques
- Report Development: Create, design, and develop reports and dashboards using reporting and data visualization tools, such as Excel, Tableau, Power BI, or custom-built solutions
- Data Cleansing: Ensure data accuracy and consistency by cleaning and validating data, addressing missing or incomplete information
- Data Interpretation: Translate data findings into actionable insights and recommendations for management or stakeholders
- KPI Monitoring: Track key performance indicators (KPIs) and metrics, and report on performance against goals and targets
- Trend Analysis: Monitor and report on long-term trends and make predictions based on historical data
- Ad Hoc Reporting: Generate ad hoc reports and analyses in response to specific business questions or requests
- Data Automation: Develop and implement automated reporting processes to streamline and improve reporting efficiency
- Data Visualization: Create visually appealing charts, graphs, and presentations to make data more understandable and accessible to non-technical stakeholders
- Data Governance: Ensure data quality and compliance with data governance and security policies
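For illustration only (not part of the posting): a minimal sketch of KPI monitoring against targets with pandas, the kind of automated check that could feed a report or dashboard. The KPI names, values, and targets are hypothetical.

```python
import pandas as pd

# Hypothetical KPI actuals and targets; in practice these would come from
# the reporting database or an extract refreshed by an automated job.
actuals = pd.DataFrame(
    {
        "kpi": ["claims_settled_pct", "avg_turnaround_days", "data_completeness_pct"],
        "actual": [92.5, 6.2, 97.1],
    }
)
targets = pd.DataFrame(
    {
        "kpi": ["claims_settled_pct", "avg_turnaround_days", "data_completeness_pct"],
        "target": [95.0, 5.0, 99.0],
        "higher_is_better": [True, False, True],
    }
)

# Join actuals to targets, compute variance, and flag whether each KPI is on track.
report = actuals.merge(targets, on="kpi")
report["variance"] = report["actual"] - report["target"]
report["on_track"] = report.apply(
    lambda r: r["actual"] >= r["target"] if r["higher_is_better"] else r["actual"] <= r["target"],
    axis=1,
)
print(report)
```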

Posted 2 weeks ago

Apply
