
3436 Data Quality Jobs - Page 17

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

25.0 - 30.0 years

12 - 16 Lacs

Gurugram

Work from Office

At McCormick, we bring our passion for flavor to work each day. We encourage growth, respect everyone's contributions and do what's right for our business, our people, our communities and our planet. Join us on our quest to make every meal and moment better. Founded in Baltimore, MD in 1889 in a room and a cellar by 25-year-old Willoughby McCormick with three employees, McCormick is a global leader in flavor. With over 14,000 employees around the world and more than $6 billion in annual sales, the Company manufactures, markets, and distributes spices, seasoning mixes, condiments and other flavorful products to the entire food industry: retail outlets, food manufacturers, food service businesses and consumers. While our global headquarters are in the Baltimore, Maryland, USA area, McCormick operates and serves customers from nearly 60 locations in 25 countries and 170 markets in Asia-Pacific, China, Europe, the Middle East and Africa, and the Americas, including North, South and Central America, with recognized brands. At McCormick, we have over a 100-year legacy based on our Power of People principle. This principle fosters an unusually dedicated workforce requiring a culture of respect, recognition, inclusion and collaboration based on the highest ethical values.

POSITION OVERVIEW: The M2D/D2D Data Lead will perform a variety of tasks related to data activities within the process areas of manufacturing and/or distribution on Program Leo. This encompasses Data Validation, Data Construction, Data Conversion, and Data Governance activities. The Data Lead is responsible for understanding the data requirements for their assigned workstream.

RESPONSIBILITIES: The M2D/D2D Data Lead reports to a Global Process Manager and demonstrates strong business and technical acumen in the space of manufacturing and distribution. The Data Lead will be responsible for the following activities:

Data Conversion
- Functional specification mapping
- Conversion action rules
- Maintaining cross-reference files
- Designing and creating business rules
- Functional unit testing

Data Validation
- Pre-load and post-load validation activities
- Coordination with the cutover team according to the project plan
- Point of contact / liaison for part-time data validators
- Completion of data validation reports
- Determination of data quality/accuracy percentages

Data Construction
- Populate data construction templates (DCTs) based on process design decisions
- Work with non-SAP legacy sites to bring them up to speed on process design
- Consistently review and update DCT files leading up to load cycles

Data Governance
- Functional specification review with the GD team
- Perform user acceptance testing (UAT)
- Identify SLAs and metrics for MDG workflows
- Steward data through MDG workflows

REQUIRED QUALIFICATION & EXPERIENCE
- Master's Degree in a related professional area and a minimum of 6 years of business or professional experience; or
- Bachelor's Degree in a related professional area and a minimum of 8 years of business or professional experience; or
- Associate's Degree in a related professional area and a minimum of 10 years of business or professional experience; or
- In lieu of a degree in a related professional area, a minimum of 12 years of business or professional experience
- Demonstrated knowledge and understanding of McCormick's current business processes, their assigned functional area, and ways of working
- Demonstrated buy-in for transformational change at McCormick and specifically Program Leo
- Demonstrated SAP experience and the ability to quickly develop working knowledge of SAP S/4HANA
- Well-respected as a go-to person with the ability to quickly learn computer applications
- Proficient with Microsoft Office tools (e.g., PowerPoint, Word, Teams)
- Strong written and verbal communication skills, including presentation skills
- Global mindset and cross-cultural competency
- Strong organizational, analytical, and administrative skills in a deadline-driven environment
- Aptitude and enthusiasm for learning and teaching
- Highly flexible and adaptable to change
- Strong team player with the ability to work well within a process team

WHY WORK AT MCCORMICK? United by flavor. Driven by results. As a McCormick employee you'll be empowered to focus on more than your individual responsibilities. You'll have the opportunity to be part of something bigger than yourself: to have a say in where the company is going and how it's growing. Between our passion for flavor, our 130-year history of leadership and integrity, the competitive and comprehensive benefits we offer, and our culture, which is built on respect and opportunities for growth, there are many reasons to join us at McCormick.
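The data conversion and pre-load validation duties above lend themselves to a small illustration. The following is a purely illustrative sketch (not McCormick's actual tooling or templates) of applying a legacy-to-target cross-reference file and running a simple pre-load completeness check with pandas; the file names, column names, and rules are hypothetical.

```python
# Illustrative sketch only: apply a legacy-to-S/4HANA cross-reference file and
# run simple pre-load checks. File names, column names, and rules are
# hypothetical, not an actual data construction template.
import pandas as pd

legacy = pd.read_csv("legacy_materials.csv")   # hypothetical legacy extract
xref = pd.read_csv("material_xref.csv")        # hypothetical cross-reference file

# Map legacy material numbers to target numbers via the cross-reference file.
converted = legacy.merge(xref, on="legacy_material_no", how="left")

# Pre-load validation: every record must resolve to a target number and a plant.
missing_target = converted["target_material_no"].isna()
missing_plant = converted["plant"].isna() | (converted["plant"].str.strip() == "")

issues = converted[missing_target | missing_plant]
accuracy_pct = 100 * (1 - len(issues) / len(converted))

print(f"{len(issues)} records need remediation; load-file accuracy {accuracy_pct:.1f}%")
issues.to_csv("preload_validation_issues.csv", index=False)
```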

Posted 6 days ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Chennai

Work from Office

At WPP, technology is at the heart of everything we do, and it is WPP IT's mission to enable everyone to collaborate, create and thrive. WPP IT is undergoing a significant transformation to modernise ways of working, shift to cloud and micro-service-based architectures, drive automation, digitise colleague and client experiences and deliver insight from WPP's petabytes of data. WPP Media is the world's leading media investment company, responsible for more than $63B in annual media investment through agencies Mindshare, MediaCom, Wavemaker, Essence and m/SIX, as well as the outcomes-driven programmatic audience company Xaxis and the data and technology company Choreograph. WPP Media's portfolio includes Data & Technology, Investment and Services, all united in a vision to shape the next era of media where advertising works better for people. By leveraging all the benefits of scale, the company innovates, differentiates and generates sustained value for our clients wherever they do business. The WPP Media team in WPP IT are the technology solutions partner for the WPP Media group of agencies and are accountable for co-ordinating and assuring end-to-end change delivery, managing the WPP Media IT technology life-cycle and innovation pipeline.

As part of the global Data & Measure team, this role will play a key part in building scalable, insightful and user-centric data products. Working closely with global stakeholders and cross-functional teams, the Senior Power BI Developer will lead both the frontend development and backend data modeling for business-critical dashboards and analytics solutions. This is not a pure report-builder role: success depends on a solid understanding of data architecture, ETL, and integration, paired with sharp visual storytelling and dashboard design.

What you'll be doing:

Primary Responsibilities:
- Design and build interactive dashboards and reports using Power BI
- Work closely with product teams to understand reporting needs and translate them into scalable data products
- Own data transformation and modeling in Power Query and DAX
- Maintain and optimize data flows, datasets, and semantic models
- Ensure data accuracy, usability, and access control across published solutions
- Collaborate with backend teams to shape data sources for frontend consumption
- Document BI solutions, including business logic, KPIs, and metadata
- Partner with global teams to define standards and reuse patterns

Additional Responsibilities:
- Support Power BI integration into existing or evolving global platforms, such as Measure
- Contribute to defining global best practices for BI and self-service enablement
- Participate in data quality reviews and source system integration discussions
- Guide junior developers or offshore partners when necessary

What you'll need:

Required Skills:
- Bachelor's degree in computer science, engineering, or a related field
- 5+ years of experience with Power BI development (incl. DAX, Power Query)
- Experience working with large datasets and complex data models
- Strong understanding of SQL and backend data principles
- Ability to bridge business needs with technical implementation
- Excellent data visualization and storytelling skills
- Experience working in cross-functional, global teams
- Strong English communication skills (verbal and written)

Preferred Skills:
- Experience with Azure SQL, Synapse, or similar data platforms
- Familiarity with Azure Data Factory (ADF) for orchestrating data pipelines and ETL processes
- Exposure to data warehousing or MDM concepts
- Familiarity with DevOps processes and version control (e.g., Git)
- Experience working in an agile or product-based environment
- Power Platform exposure (Power Automate, Power Apps) is a plus

Posted 6 days ago

Apply

3.0 - 8.0 years

6 - 10 Lacs

Pune

Work from Office

About Atos: Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. 10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is a SE (Societas Europaea) and listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large, to live, work and develop sustainably in a safe and secure information space.

Data Streaming Engineer - Required Skills and Competencies:
- Experience: 3+ years
- Expertise in the Python language is a MUST
- SQL (should be able to write complex SQL queries) is a MUST
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a MUST
- Hands-on expertise in Apache Kafka is a MUST
- Data lake development experience
- Orchestration (Apache Airflow is preferred)
- Spark and Hive: optimization of Spark/PySpark and Hive apps
- Trino / AWS Athena (good to have)
- Snowflake (good to have)
- Data Quality (good to have)
- File storage (S3 is good to have)

Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment
- Wellbeing programs and work-life balance: integration and passion-sharing events
- Attractive salary and company initiative benefits
- Courses and conferences
- Hybrid work culture

Here at Atos, diversity and inclusion are embedded in our DNA. Read more about our commitment to a fair work environment for all. Atos is a recognized leader in its industry across Environment, Social and Governance (ESG) criteria. Find out more on our CSR commitment. Choose your future. Choose Atos.
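As a rough illustration of the streaming skills listed above, the sketch below shows a minimal Spark Structured Streaming job that reads events from a Kafka topic and writes them to a data lake path. It is an assumption-laden example, not an Atos project: the topic, schema, and paths are invented, and running it requires a Spark installation with the Kafka connector on the classpath.

```python
# Minimal sketch of a Spark Structured Streaming job reading from Kafka and
# writing to a data lake path. Topic name, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("status", StringType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "orders")              # hypothetical topic
       .load())

orders = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("o"))
          .select("o.*")
          .filter(col("order_id").isNotNull()))    # basic data-quality filter

query = (orders.writeStream
         .format("parquet")
         .option("path", "s3a://example-lake/orders/")            # hypothetical path
         .option("checkpointLocation", "s3a://example-lake/_chk/orders/")
         .start())
query.awaitTermination()
```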

Posted 6 days ago

Apply

10.0 - 15.0 years

11 - 16 Lacs

Bengaluru

Work from Office

About Netskope: Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About the role: Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Netskope Data Security team is focused on providing unrivaled visibility and real-time data protection. Our products ensure that customers' data is protected wherever it lives. As a Sr. Staff Software Engineer on the Data Security team, you will be working on a greenfield initiative that complements the existing data protection technologies. If you enjoy innovation, building brand-new software applications or shipping new products, this is the position for you.

What's in it for you: As a Staff Software Engineer in the Data Security team, you will be responsible for designing, implementing, and enhancing client software for Netskope Cloud Security solutions, and solving some of the most challenging cloud access and data security problems for enterprise customers.

What you will be doing:
- Wear multiple hats, adapting to challenges as they arise, with a focus on the seamless, end-to-end functionality of the product
- Build and maintain robust and scalable ETL processes to ingest, transform, and load data from various sources into data warehouses, data lakes, or other storage systems
- Implement data validation, monitoring, and cleansing techniques to ensure high data accuracy and consistency across all systems
- Work with a group of brilliant engineers to build service technologies that enable a holistic data security solution
- Work with Product Management to understand requirement specs, research and find appropriate solutions, and design and implement features with high quality, ensuring reliability, sustainability, scalability, and observability
- Work closely with multiple dependency teams to align on system boundaries, APIs, etc.
- Work with QA to create and maintain robust test cases

Required skills and experience:
- 10+ years of industry experience building high-performance, scalable, and secure data plane services
- Strong analytical skills and a meticulous approach to identifying and resolving data quality issues
- Knowledge of data modeling, schema design and query optimization in relational and non-relational databases
- Experience with big data technologies such as Kafka or Spark
- Programming mastery in Golang, C++, or Python
- Strong ability to think out of the box to solve challenging and unforeseen problems and implement solutions
- Strong verbal and written communication skills; able to communicate in an open, transparent and consistent manner with team and co-workers
- Experience in building ML classifiers is a plus
- Experience with Docker and Kubernetes is a plus
- Experience in CI/CD using tools like Drone, Jenkins, or other similar technologies is a plus

Education: BS in Computer Science required; MS in Computer Science strongly preferred.
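To make the data validation and cleansing responsibilities above concrete, here is a minimal, hypothetical sketch of a pre-load validation step in plain Python; the field names and rules are invented for illustration and are not Netskope's implementation.

```python
# Illustrative sketch of the kind of validation and cleansing step an ETL
# pipeline might apply before loading records. Field names and rules are
# hypothetical.
from datetime import datetime

REQUIRED_FIELDS = {"event_id", "tenant_id", "timestamp", "bytes_scanned"}

def validate(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one record."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if "bytes_scanned" in record and record["bytes_scanned"] < 0:
        problems.append("bytes_scanned must be non-negative")
    if "timestamp" in record:
        try:
            datetime.fromisoformat(record["timestamp"])
        except (TypeError, ValueError):
            problems.append("timestamp is not ISO-8601")
    return problems

def cleanse(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into loadable rows and rejects routed to quarantine."""
    good, rejected = [], []
    for r in records:
        issues = validate(r)
        if issues:
            rejected.append({**r, "issues": issues})
        else:
            good.append(r)
    return good, rejected

good, rejected = cleanse([
    {"event_id": "e1", "tenant_id": "t1", "timestamp": "2024-05-01T10:00:00", "bytes_scanned": 123},
    {"event_id": "e2", "tenant_id": "t1", "timestamp": "not-a-time", "bytes_scanned": -5},
])
print(len(good), "loadable,", len(rejected), "quarantined")
```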

Posted 6 days ago

Apply

15.0 - 20.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Job Title: Marketing Data Science - General Manager

Job Responsibilities:
- Analyzes and synthesizes complex data sets to develop innovative solutions, formulate advanced hypotheses, and create sophisticated proofs of concept to validate and enhance analytical models.
- Ensures the integrity and reliability of data through the implementation of robust validation protocols, advanced data governance frameworks, and comprehensive quality assurance processes, working under limited supervision.
- Participates in large-scale data mining projects, utilizing cutting-edge algorithms, AI, and machine learning techniques to uncover deep insights and actionable intelligence from vast, multi-source data sets.
- Coordinates the end-to-end deployment and rigorous validation of data science solutions, ensuring optimal performance, seamless integration, and alignment with enterprise systems and business processes.
- Builds and optimizes scalable, high-performance data pipelines and workflows using state-of-the-art big data technologies, ensuring efficient handling and processing of large-scale data for advanced analytics.
- Develops, documents, and disseminates advanced analytical methodologies and best practices, ensuring reproducibility and transparency and fostering cross-functional collaboration and knowledge sharing.
- Evaluates and implements cutting-edge AI and machine learning techniques, pioneering innovative approaches to address complex business challenges and drive strategic advancements in analytics.
- Monitors, maintains, and continuously enhances the performance of deployed models, conducting regular reviews, updates, and optimizations to ensure sustained accuracy, relevance, and business impact.
- Identifies, assesses, and leads the implementation of advanced data science methodologies and technologies, driving continuous innovation and elevating Philips' analytics capabilities to new heights.
- Guides junior data scientists and analysts, providing expert advice and training and fostering a culture of excellence, continuous learning, and professional growth within the data science team.
- Participates in strategic initiatives to evaluate and integrate emerging technologies, open-source tools, and advanced data science frameworks, ensuring Philips stays at the forefront of innovation and maintains a competitive edge in the analytics domain.

Minimum required education: Bachelor's / Master's Degree in Computer Science, Econometrics, Artificial Intelligence, Applied Mathematics, Statistics or equivalent.

Minimum required experience: Minimum 15 years of experience with a Bachelor's, or minimum 13 years of experience with a Master's, in areas such as Data Analytics, Data Science, Data Mining, Artificial Intelligence, Pattern Recognition or equivalent.

Preferred skills: Data Analysis & Interpretation, Data Governance, Statistical Methods, Statistical Programming Software, Business Intelligence Tools, Data Mining, Machine Learning, Engineering Fundamentals, Research & Analysis, Requirements Analysis, Root Cause Analysis (RCA), Data Quality Management Systems, Regulatory Compliance.

How we work together: We believe that we are better together than apart. For our office-based teams, this means working in-person at least 3 days per week. Onsite roles require full-time presence in the company's facilities. Field roles are most effectively done outside of the company's main facilities, generally at the customers' or suppliers' locations.

If you're interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our culture of impact with care here.

Posted 6 days ago

Apply

1.0 - 2.0 years

9 - 10 Lacs

Hyderabad

Work from Office

Req ID: 332357. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a SAP BP/SD Consultant to join our team in Hyderabad, Telangana (IN-TG), India (IN).

SAP BP/SD Functional Consultant - Position Overview: Our SAP Implementation BP/SD Functional Consultant will lead others through the implementation of SAP. Duties include but may not be limited to:
- Lead business workshops for blueprinting activities
- Analyze legacy data, quality-check submitted load files, test load quality, lead user testing and acceptance of the load process, provide input into specifications for automated data migration transformation rules, analyze and identify duplicate records, and perform other activities required for successful and on-time data migration
- Perform high-level analysis of customer business processes and requirements to support project data migration requirements with proposed solutions
- Perform detailed analysis of customer requirements to produce custom solution specifications
- Provide direction to SAP BODS / ADM development teams for solution realization and participate, as necessary, during coding, testing, documentation, and maintenance activities
- Work closely with the Data Migration team to document data migration requirements, and on project planning and load scheduling, testing, troubleshooting, and issue resolution

Job Requirements:
- 8+ years of overall experience in SAP BP/SD (Sales and Distribution processes) implementations/rollouts
- 1-2 years of experience with S/4HANA implementations/rollouts
- SAP S/4HANA data migration expertise in SAP Sales and Distribution master/transaction data, including Customer Master (BP), Routes, Route Determination, Pricing Conditions, Rebates, Customer Credit Data, Customer Material info records, etc.
- Experience with creating functional documentation for data migration field mapping, rules, etc.

Posted 6 days ago

Apply

3.0 - 10.0 years

20 - 27 Lacs

Kolkata

Work from Office

Not Applicable. Specialism: Data, Analytics & AI. Management Level: Senior Associate.

Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Position responsibilities and expectations:
- Designing and building analytical/DL/ML algorithms using Python, R and other statistical tools.
- Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, PowerPoint, Excel, etc.
- Ability to learn a new scripting language or analytics platform.

Technical skills required (must have):
- Hands-on exposure to Generative AI (design and development of GenAI applications in production)
- Strong understanding of RAG, vector databases, LangChain and multimodal AI applications
- Strong understanding of deploying and optimizing AI applications in production
- Strong knowledge of statistical and data mining techniques like linear and logistic regression analysis, decision trees, bagging, boosting, time series and non-parametric analysis
- Strong knowledge of DL and neural network architectures (CNN, RNN, LSTM, Transformers, etc.)
- Strong knowledge of SQL and R/Python and experience with distributed data/computing tools and IDEs
- Experience in advanced text analytics (NLP, NLU, NLG)
- Strong hands-on experience of end-to-end statistical model development and implementation
- Understanding of LLMOps and MLOps for scalable ML development
- Basic understanding of DevOps and deployment of models into production (PyTorch, TensorFlow, etc.)
- Expert-level proficiency in algorithm-building languages like SQL, R and Python, and data visualization tools like Shiny, Qlik, Power BI, etc.
- Exposure to cloud platform (Azure, AWS or GCP) technologies and services like Azure AI / SageMaker / Vertex AI, AutoML, Azure Index, Azure Functions, OCR, OpenAI, storage, scaling, etc.

Technical skills required (any one or more):
- Experience in video/image analytics (computer vision)
- Experience in IoT / machine logs data analysis
- Exposure to data analytics platforms like Domino Data Lab, c3.ai, H2O, Alteryx or KNIME
- Expertise in cloud analytics platforms (Azure, AWS or Google)
- Experience in process mining with expertise in Celonis or other tools
- Proven capability in using Generative AI services like OpenAI, Google (Gemini)
- Understanding of agentic AI frameworks (LangGraph, AutoGen, etc.)
- Understanding of fine-tuning for pre-trained models like GPT, LLaMA, Claude, etc. using LoRA, QLoRA and PEFT techniques
- Proven capability in building customized models from open-source distributions like Llama, Stable Diffusion

Mandatory skill sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Preferred skill sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Years of experience required: 3-10 years
Education qualification: BE, B.Tech, M.Tech, M.Stat, Ph.D., M.Sc. (Stats/Maths)
Education Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master Degree. Degrees/Field of Study preferred:
Required Skills: Generative AI, Accepting Feedback, Active Listening, AI Implementation, Analytical Thinking, C++ Programming Language, Communication, Complex Data Analysis, Creativity, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Embracing Change, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Learning Agility, Machine Learning {+ 25 more}
Travel Requirements. Government Clearance Required?
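For readers unfamiliar with the RAG pattern this listing mentions, the toy sketch below shows the retrieval step only: embed documents and a query, rank by cosine similarity, and build a grounded prompt. The embed() function is a deliberately crude stand-in; a production system would use a real embedding model and a vector database, as the listing describes.

```python
# Toy sketch of the retrieval step in a RAG pipeline. The hashed bag-of-words
# embedding is a stand-in for a real embedding model, used only so the example
# runs with nothing but numpy.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Hashed bag-of-words embedding - a stand-in for a real embedding model."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

docs = [
    "RAG combines retrieval over a document store with LLM generation.",
    "Vector databases index embeddings for fast similarity search.",
    "Time-series models forecast demand from historical sales.",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "How does retrieval-augmented generation use a vector database?"
scores = doc_vecs @ embed(query)        # cosine similarity (vectors are unit-norm)
top = np.argsort(scores)[::-1][:2]      # keep the two most relevant passages

context = "\n".join(docs[i] for i in top)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```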

Posted 6 days ago

Apply

3.0 - 5.0 years

8 - 12 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Job Duties - Counterparty Credit Risk Reporting (CCRR): Counterparty Credit Risk Reporting is responsible for generating risk reports used for identifying, assessing, and mitigating risks associated with counterparties, including financial institutions, corporations, and other entities. The role involves supporting counterparty credit risk managers by providing risk reports that monitor counterparty credit risk metrics in a timely manner. This covers generating Financial Markets Credit Exposure reports covering risk metrics like Potential Future Exposure (PFE), Derivative Loan Exposure (DLE), Current Exposure, etc. The team monitors pre-settlement, settlement and post-settlement risk. Risk reporting involves daily, weekly, bi-weekly, monthly and quarterly reporting that is circulated within the bank, as well as reporting that goes for regulatory submission. Global, regional, local and legal-entity-wise reporting is handled by the reporting team members. OTC, SFT and ETD product exposure monitoring is done on a daily basis. The reporting team monitors risk exposures against limits and Risk Appetite, and works closely with risk managers on excess remediation and risk reporting queries. Reports prepared by the team provide meaningful aggregation for users to review the top and regional data where required.

The CCR Reporting team members are also responsible for preparing reports across various financial market products, such as Asset Backed Securities, across desks for monitoring risk against portfolio limits. Risk reports to monitor repo and reverse repo products, capturing all Fixed Income Securities financing trades covered under repo/reverse repo/stock borrowing/stock lending businesses, are generated by the team on a daily basis. The reporting team also monitors risk appetite across FM for the Secured Financing business, monitoring Cash Out on a bi-weekly basis. Counterparty credit risk stress monitoring reports are also handled by the team to monitor CCR NCE increases and cover global and topical scenarios. The CCR Localised Stress Dashboard is used to monitor the Red, Amber, and EANPP counterparties from localised scenarios on a weekly basis. CCR reporting team members work closely with RFO, FM Operations, Risk Data Quality teams, etc. to ensure accurate risk reporting, data sourcing and approval of risk numbers. The team also monitors SCB's exposure to various clearing houses and clearing brokers. Top 100 Counterparty Credit Exposure Reporting is a weekly/monthly reporting requirement from the PRA which is handled by the team; the data is collated by the PRA on behalf of the European Banking Authority, which is keen on understanding the movement of funds between large corporate groups, FIs and government bodies as a result of transactions done on global market products. Apart from the above-mentioned risk reporting, the team covers an array of different exposure management and MIS reports that reach a gamut of TRM personnel and are often under scrutiny and subject to change. The role covers the production and distribution of daily, weekly and monthly Counterparty Credit Risk reports within defined KPI targets, and record keeping of any data adjustments made.

Qualification/Desired Skills:
- Post-graduation in Finance with 3+ years of relevant experience
- Risk certifications from established institutions are recommended; FRM-certified candidates will be preferred
- Any technical certification in MS Office advanced tools / SQL / Tableau / SAS will be preferred
- Strong knowledge of banking products across asset classes (FX, Rates, Equity, Credit and Commodities)
- Strong knowledge of counterparty credit risk and market risk
- Strong knowledge of data analytics (including ML capabilities)
- Awareness of the trading environment, macroeconomic fundamentals, market moves and their potential impact on exposures
- Excellent communication skills, verbal and written
- Ability to work under pressure
- Excellent attention to detail
- Able to work in a global team
- Expertise in a programming language is not required but good to have
- Strong knowledge of MS Excel and Access
- Documentation skills
- Working knowledge of the Group's businesses, governance structure and approach to risk management
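As a small illustration of the exposure-versus-limit monitoring described above, the sketch below flags limit excesses in a table of counterparty exposures using pandas; the counterparties, figures, and limits are invented for the example and do not reflect any actual bank data or methodology.

```python
# Illustrative sketch of a daily exposure-versus-limit check. All names and
# numbers are invented for the example.
import pandas as pd

exposures = pd.DataFrame({
    "counterparty": ["Fund A", "Bank B", "Corp C"],
    "pfe_usd_m":    [120.0, 45.5, 310.0],   # Potential Future Exposure, USD millions
    "limit_usd_m":  [150.0, 40.0, 300.0],
})

exposures["utilisation_pct"] = 100 * exposures["pfe_usd_m"] / exposures["limit_usd_m"]
exposures["excess"] = exposures["pfe_usd_m"] > exposures["limit_usd_m"]

# Records flagged here would be escalated to risk managers for remediation.
print(exposures.sort_values("utilisation_pct", ascending=False))
print("Excesses to escalate:", exposures.loc[exposures["excess"], "counterparty"].tolist())
```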

Posted 6 days ago

Apply

4.0 - 6.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Associate Manager - Data Privacy: We are seeking a highly motivated and detail-oriented Associate Manager to join our Data Privacy team. This role involves assisting in the development, implementation, and maintenance of our data privacy program. As an Associate Manager - Data Privacy, you will help drive privacy initiatives and ensure organizational compliance with applicable data protection regulations, including India's Digital Personal Data Protection Act (DPDPA). The ideal candidate will have a strong understanding of data privacy laws, regulations and best practices, coupled with excellent communication, critical thinking, and analytical skills. On a day-to-day basis, you will be expected to leverage your expertise in privacy laws, concepts, and industry standards to understand and navigate complex environments and ensure robust privacy practices.

Location: Bangalore, HQ, full-time
Reports to: Senior Manager - Data Privacy
Function: Data Privacy, Data Governance, and Data Ethics | Compliance

Key Responsibilities:
- Assist in the development and implementation of privacy program components, including drafting and refining policies, procedures, and templates; assist in DPDPA implementation efforts.
- Collaborate with business and product teams to assess and understand the organization's use of personal data; serve as the key privacy advisor and point of contact for internal stakeholders, guiding privacy-related decisions.
- Research and monitor evolving privacy laws, regulations, and industry practices and trends, and analyse their impact on our business.
- Conduct Data Protection Impact Assessments (DPIAs) and risk assessments to proactively address and mitigate privacy risks.
- Enhance the organization's existing consent management framework in alignment with legal and regulatory requirements, ensuring seamless integration across functions.
- Review privacy clauses in vendor contracts and ensure compliance with data protection requirements; assist with the third-party risk management process to mitigate privacy risks.
- Help investigate and manage privacy incidents, including support with reporting and resolution.
- Support the handling of data principal requests and ensure timely responses; assist in addressing queries related to data privacy rights and data protection practices.
- Assist with continuous monitoring and testing of existing privacy controls implemented across businesses, identify gaps, and initiate corrective measures.
- Assist data discovery activities across the enterprise; ensure maintenance of the personal data inventory.
- Assist in the design and delivery of mandatory privacy training and role-specific awareness programs to increase privacy awareness.

Qualifications:
- Bachelor's / Master's Degree in Computer Science, Law, Information Security, Privacy, or related fields
- 4-6 years of overall experience working in the privacy domain
- In-depth knowledge of global privacy laws and frameworks, including GDPR, DPDPA, CCPA, the Information Technology Act, and regulations from RBI, SEBI, IRDAI and other regulators
- Strong documentation and communication skills and proficiency with framework development, assessments, and policy and procedure documentation; advanced working knowledge of MS Office applications
- Team player; ability to interface and build relationships with stakeholders
- Strong bias for action with exceptional problem-solving skills

Good to have:
- Professional certifications such as DCPP, DCPLA, CIPP/E, CIPM, CIPT, or equivalent
- Experience working in the FinTech industry and understanding of various products and services

PhonePe Full Time Employee Benefits (not applicable for intern or contract roles):
- Insurance Benefits: Medical Insurance, Critical Illness Insurance, Accidental Insurance, Life Insurance
- Wellness Program: Employee Assistance Program, Onsite Medical Center, Emergency Support System
- Parental Support: Maternity Benefit, Paternity Benefit Program, Adoption Assistance Program, Day-care Support Program
- Mobility Benefits: Relocation benefits, Transfer Support Policy, Travel Policy
- Retirement Benefits: Employee PF Contribution, Flexible PF Contribution, Gratuity, NPS, Leave Encashment
- Other Benefits: Higher Education Assistance, Car Lease, Salary Advance Policy

Working at PhonePe is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. Read more about PhonePe on our blog. Life at PhonePe. PhonePe in the news.

Posted 6 days ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Chennai

Work from Office

Responsibilities:

Data Exploration and Insights:
- Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and improving overall data quality within the SCI solution.
- This includes working with large datasets from various sources, including Excel files and databases.

Data Quality Improvement:
- Perform various analyses specifically aimed at improving data quality within the SCI system.
- This will involve identifying data quality issues, proposing solutions, and implementing improvements.

Weekly Playback and Collaboration:
- Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis.
- Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks.

Project Scaling and Support:
- Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets.
- This includes pre-requisites for batch ingestion and post-batch-ingestion analysis and validation of SCI records.

Data Analysis and Validation:
- Perform thorough data analysis and validation of SCI records after batch ingestion.
- Proactively identify insights and implement solutions to improve data quality.

Stakeholder Collaboration:
- Coordinate with business stakeholders to facilitate the manual validation of records flagged for manual intervention.
- Communicate findings and recommendations clearly and effectively.

Technical Requirements:
- 5+ years of experience as a Data Scientist.
- Strong proficiency in Python and SQL.
- Extensive experience using Jupyter Notebook for data analysis and visualization.
- Working knowledge of data matching techniques, including fuzzy logic.
- Experience working with large datasets from various sources (Excel, databases, etc.).
- Solid understanding of data quality principles and methodologies.

Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical skills.

Preferred Qualifications (optional):
- Experience with specific data quality tools and techniques.
- Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP).
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of statistical modeling and machine learning algorithms.
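As a minimal illustration of the fuzzy data-matching work described in this listing, the sketch below compares incoming names against a master list using only Python's standard library; the names and threshold are invented, and a production SCI-style pipeline would likely use a dedicated matching library plus blocking and manual-review queues.

```python
# Minimal fuzzy-matching sketch for duplicate detection, standard library only.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalised string similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

incoming = ["Acme Industries Pvt Ltd", "Globex Corporation", "ACME Industries Private Limited"]
master = ["Acme Industries Private Ltd", "Initech Solutions", "Globex Corp"]

THRESHOLD = 0.8   # in practice, tuned on labelled match/non-match pairs
for name in incoming:
    best = max(master, key=lambda m: similarity(name, m))
    score = similarity(name, best)
    verdict = "match" if score >= THRESHOLD else "manual review"
    print(f"{name!r} -> {best!r} ({score:.2f}, {verdict})")
```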

Posted 6 days ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Kolkata

Work from Office

Job Title: Sr. Data Engineer - Ontology & Knowledge Graph Specialist
Department: Platform Engineering

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
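To illustrate the semantic web stack named in this listing, here is a small, self-contained sketch that builds a toy graph with rdflib and queries it with SPARQL; the namespace, classes, and triples are invented for the example and are not an actual BFO/CCO ontology.

```python
# Toy knowledge-graph sketch: a few RDF triples plus one SPARQL query.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/onto/")
g = Graph()
g.bind("ex", EX)

# Declare a tiny class hierarchy and one instance with a relationship.
g.add((EX.Supplier, RDF.type, RDFS.Class))
g.add((EX.Organization, RDF.type, RDFS.Class))
g.add((EX.Supplier, RDFS.subClassOf, EX.Organization))
g.add((EX.acme, RDF.type, EX.Supplier))
g.add((EX.acme, RDFS.label, Literal("Acme Industries")))
g.add((EX.acme, EX.supplies, EX.widgetPlant))

# SPARQL query: which organizations supply the widget plant?
q = """
PREFIX ex: <http://example.org/onto/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?name WHERE {
    ?org ex:supplies ex:widgetPlant ;
         rdfs:label ?name .
}
"""
for row in g.query(q):
    print(row.name)
```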

Posted 6 days ago

Apply

7.0 - 10.0 years

10 - 14 Lacs

Mumbai

Work from Office

About the Job : We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply. Key Responsibilities : - Dataiku Leadership : Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions. - Data Pipeline Development : Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources. - Data Modeling & Architecture : Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance. - ETL/ELT Expertise : Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility. - Gen AI Integration : Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features. - Programming & Scripting : Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions. - Cloud Platform Deployment : Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency. - Data Quality & Governance : Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices. - Collaboration & Mentorship : Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members. - Performance Optimization : Continuously monitor and optimize the performance of data pipelines and data systems. Required Skills & Experience : - Proficiency in Dataiku : Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications. - Expertise in Data Modeling : Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures. - ETL/ELT Processes & Tools : Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.).
- Familiarity with LLM Mesh : Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration. - Programming Languages : Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions. - Cloud Platforms : Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.). - Gen AI Concepts : Basic understanding of Generative AI concepts and their potential applications in data engineering. - Problem-Solving : Excellent analytical and problem-solving skills with a keen eye for detail. - Communication : Strong communication and interpersonal skills to collaborate effectively with cross-functional teams. Bonus Points (Nice to Have) : - Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake). - Familiarity with data governance and data security best practices. - Experience with MLOps principles and tools. - Contributions to open-source projects related to data engineering or AI. Education : Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.

Posted 6 days ago

Apply

7.0 - 10.0 years

9 - 12 Lacs

Bengaluru

Work from Office

About the Job : We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply. Key Responsibilities : - Dataiku Leadership : Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions. - Data Pipeline Development : Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources. - Data Modeling & Architecture : Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance. - ETL/ELT Expertise : Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility. - Gen AI Integration : Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features. - Programming & Scripting : Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions. - Cloud Platform Deployment : Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency. - Data Quality & Governance : Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices. - Collaboration & Mentorship : Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members. - Performance Optimization : Continuously monitor and optimize the performance of data pipelines and data systems. Required Skills & Experience : - Proficiency in Dataiku : Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications. - Expertise in Data Modeling : Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures. - ETL/ELT Processes & Tools : Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.).
- Familiarity with LLM Mesh : Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration. - Programming Languages : Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions. - Cloud Platforms : Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.). - Gen AI Concepts : Basic understanding of Generative AI concepts and their potential applications in data engineering. - Problem-Solving : Excellent analytical and problem-solving skills with a keen eye for detail. - Communication : Strong communication and interpersonal skills to collaborate effectively with cross-functional teams. Bonus Points (Nice to Have) : - Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake). - Familiarity with data governance and data security best practices. - Experience with MLOps principles and tools. - Contributions to open-source projects related to data engineering or AI. Education : Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.

Posted 6 days ago

Apply

6.0 - 10.0 years

30 - 35 Lacs

Mumbai

Work from Office

Job Title: ESG Data / Operations Analyst, AVP
Location: Mumbai, India

Role Description

About DWS: DWS, the publicly listed asset management division of Deutsche Bank, is one of the world's leading investment management organizations with billions of assets under management. DWS provides flexible products and solutions covering a wide range of investment opportunities across all asset classes, from pooled funds to highly customized portfolios for a wide range of investors, and offers individuals and institutions traditional and alternative investments across all major asset classes.

About the DWS ESG Engine team: Our dedicated DWS ESG team is part of the DWS Platform Sustainability team, with activities integrated into the investment process and designed to go beyond fiduciary duties. Our proprietary DWS ESG Engine uses data from multiple vendors, which allows us to rely on amongst the best data sources available for analysis. The wealth of information available from these providers means that the DWS ESG Engine has access to more than 50 million data points for over 10,000 companies. The software translates the available data into a consistent evaluation process to derive DWS ESG ratings, which are used for the investment process.

About DWS India (P) Ltd: DWS India (P) Ltd. is a wholly owned subsidiary of DWS Group GmbH & Co. KGaA (part of Deutsche Bank Group). Our teams in Mumbai focus on Research & Analytics, Portfolio Management Support, Business Management Support, etc., across all divisions of DWS Group.

What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities: As an ESG Analyst you are part of the DWS ESG Engine's ESG Data & Research Team. The ESG Engine Team is part of DWS's international Platform Sustainability Team, which has a dual reporting line into the DWS Chief Investment Officer and the Head of the Product Division.
- Ensure operational excellence, that is, timely and accurate ESG signal production.
- Run the monthly result validation / quality assurance procedures, which require in-depth analysis of ESG assessments and changes at different levels of detail, and coordination of review loops within the Engine team, research and other internal/external functions.
- Engage with ESG data vendors in case of concerns about data quality.
- Engage with the DWS teams monitoring the compliance of DWS investments with guidelines based on our ESG data, and ensure that all actions are based on validated results.
- Engage with the ESG Engine engineers who design and build the platform on which the ESG results are algorithmically evaluated.
- Provide a one-stop service for ad-hoc queries from the business on ESG data and results. This requires understanding of the implemented methodology.
- Enhance operational excellence by continuously challenging the status quo and proposing changes to and/or further automation of existing procedures.
- Support the documentation base of implemented ESG processes and methodology, and train/support the platform.

Your skills and experience

Education and Experience:
- Experience in production support roles, i.e. executing (semi-)manual procedures with a zero failure rate
- Master's degree in Commerce/Science or similar
- Experience in quality assurance, i.e. reviewing, challenging and/or evidencing obtained data
- Experience working with high volumes of complex data
- Experience in analyzing company reports and filings
- Advanced proficiency in MS Office
- Strong analytical skills and mindset
- Passionate about responsible investing/stewardship and having a positive impact

Competencies:
- Highly motivated, self-sufficient and able to learn quickly
- Intercultural competence, superior communication skills and strict attention to detail
- Highly structured and dedicated approach to work
- High interest in taking on new challenges
- Ability to work independently and as part of the team
- High level of comfort in dealing with different levels of management

How we'll support you

About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We at DWS are committed to creating a diverse and inclusive workplace, one that embraces dialogue and diverse views, and treats everyone fairly to drive a high-performance culture. The value we create for our clients and investors is based on our ability to bring together various perspectives from all over the world and from different backgrounds. It is our experience that teams perform better and deliver improved outcomes when they are able to incorporate a wide range of perspectives. We call this #ConnectingTheDots.

Posted 6 days ago

Apply

1.0 - 2.0 years

7 - 11 Lacs

Pune

Work from Office

In Scope of Position-based Promotions (INTERNAL only)

Job Title: Senior Risk Analyst, NCT
Location: Pune, India

Role Description: Market Valuation and Risk Management (MVRM) is responsible for managing market risk and ensuring fair value assessment of Books & Records within Deutsche Bank. The primary market risk objective is to ensure that business units of the bank optimize the risk-reward relationship and do not expose it to unacceptable losses. We work closely with risk takers, risk managers and control and support groups in the process to ensure these objectives can be met. Market Data Strategy and Analytics (MDSA) is a function within MVRM that is responsible and accountable for the procurement, analysis and governance of historical market data used as inputs in the key risk metrics (e.g. Value at Risk, Economic Capital, Credit Counterparty Risk, Portfolio Stress Test).

What we'll offer you:
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities:
- Apply experience and subject matter expertise to perform RTB tasks such as VaR/SVaR impact analysis for both Hist Sim Sensi-Based and Hist Sim Full Reval, continuous improvement of processes, and coordination of changes in market data.
- Perform data quality checks to ensure the completeness and accuracy of EOD data and time series before they are used for risk calculations.
- Take an active part in weekly Scenario Set generation for VaR/SVaR/Economic Capital calculation and ensure Scenario Sets are released within the cut-off time.
- Analyze the impact of time-series changes on group-level VaR/SVaR and communicate and coordinate effectively with wider audiences.
- Liaise with market risk managers, FO quants, change teams and Methodology to perform deep dives on data challenges in new market risk models, methodology changes and RNIV, and on the implementation of new regulations such as FRTB and IHC examination.
- Actively take part in proxy decision-making and come up with appropriate proxy choices for a time series.
- Perform stressed-period selection and analyze results for accuracy and reliability.
- Help specify requirements and test functionality for new feed set-ups and processes, and coordinate with Risk-IT for seamless implementation of new data requirements and process enhancements.

Your skills and experience:
- 1-2 years of experience in investment banking, specifically market risk in controlled production environments.
- Good knowledge of financial instruments and markets across all asset classes.
- Strong analytical skills with basic knowledge of numerical techniques and applied econometrics.
- Basic knowledge of derivatives pricing and risk calculation, VaR, PFE.
- Hands-on experience using external data sources such as Bloomberg, MarkIT and Reuters.
- Basic knowledge of web-based technology will be a plus.
- Programming skills in at least one of C/C++, Java, Python, MatLab, etc. will be a plus.

Education / Certification (recommended): Engineering or bachelor's degree in finance from an accredited college or university with excellent project experience and grades in quantitative and numerical coursework. Certification in Financial Risk Management will be a plus.

Business Competencies: Communication - Experienced; Industry Knowledge - Experienced; Risk Management - Basic; Innovation - Basic; Managing Complexity - Basic.
Technical Competencies: Risk Management - Experienced; Financial Product Knowledge - Experienced.

How we'll support you
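For context on the VaR metrics referenced above, the sketch below computes a one-day 99% Value at Risk by historical simulation from a synthetic P&L series, together with a trivial completeness check of the kind the responsibilities describe; the numbers are randomly generated and the example is illustrative only, not Deutsche Bank's methodology.

```python
# Simplified one-day VaR from historical simulation, with a basic time-series
# completeness check. The P&L history is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1_000_000, size=520)   # simulated daily P&L history

confidence = 0.99
var_99 = -np.percentile(pnl, (1 - confidence) * 100)    # loss threshold, reported positive

print(f"1-day 99% VaR: {var_99:,.0f}")

# Basic completeness check before the series feeds a VaR run.
assert not np.isnan(pnl).any(), "time series has gaps - investigate before release"
```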

Posted 6 days ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Hyderabad

Work from Office

Job Summary Join our dynamic team as TL-Product Info & MDM, where you will leverage your expertise in Customer Service Management and Supply Chain Analytics within the retail domain. With a hybrid work model and rotational shifts, you will play a crucial role in optimizing order management processes and enhancing customer service experiences. This position offers an exciting opportunity to contribute to our company's growth and impact the retail industry positively. Responsibilities Lead the development and implementation of product information management strategies to enhance data accuracy and accessibility. Oversee the integration of customer service management systems to improve service delivery and customer satisfaction. Provide insights and analytics on supply chain processes to optimize retail operations and drive efficiency. Collaborate with cross-functional teams to streamline order management processes and ensure timely fulfillment. Analyze customer feedback and service metrics to identify areas for improvement and implement corrective actions. Coordinate with IT teams to ensure seamless integration of MDM solutions with existing systems. Develop and maintain documentation for product information and MDM processes to ensure consistency and compliance. Monitor industry trends and best practices to continuously improve customer service and supply chain strategies. Facilitate training sessions for team members to enhance their understanding of MDM and customer service management tools. Support the development of KPIs to measure the effectiveness of customer service and supply chain initiatives. Ensure data governance and quality standards are met across all product information and MDM activities. Drive initiatives to enhance customer experience and loyalty through improved service delivery. Collaborate with stakeholders to align MDM strategies with business objectives and customer needs. Qualifications Possess strong analytical skills with experience in supply chain analytics within the retail domain. Demonstrate expertise in customer service management and order management processes. Exhibit proficiency in MDM tools and technologies to support data management initiatives. Have a solid understanding of retail industry trends and best practices. Show excellent communication and collaboration skills to work effectively in a hybrid work model. Display the ability to work in rotational shifts and adapt to changing priorities. Hold a bachelor's degree in a relevant field or equivalent work experience.

Posted 6 days ago

Apply

7.0 - 10.0 years

10 - 14 Lacs

Chennai

Work from Office

About the Job : We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply. Key Responsibilities : - Dataiku Leadership : Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions. - Data Pipeline Development : Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources. - Data Modeling & Architecture : Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance. - ETL/ELT Expertise : Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility. - Gen AI Integration : Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features. - Programming & Scripting : Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions. - Cloud Platform Deployment : Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency. - Data Quality & Governance : Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices. - Collaboration & Mentorship : Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members. - Performance Optimization : Continuously monitor and optimize the performance of data pipelines and data systems. Required Skills & Experience : - Proficiency in Dataiku : Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications. - Expertise in Data Modeling : Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures. - ETL/ELT Processes & Tools : Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.).
- Familiarity with LLM Mesh : Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration. - Programming Languages : Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions. - Cloud Platforms : Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.). - Gen AI Concepts : Basic understanding of Generative AI concepts and their potential applications in data engineering. - Problem-Solving : Excellent analytical and problem-solving skills with a keen eye for detail. - Communication : Strong communication and interpersonal skills to collaborate effectively with cross-functional teams. Bonus Points (Nice to Have) : - Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake). - Familiarity with data governance and data security best practices. - Experience with MLOps principles and tools. - Contributions to open-source projects related to data engineering or AI. Education : Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
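
To make the data-quality and ETL/ELT responsibilities above concrete, the following is a minimal, hedged sketch of a pre-load quality gate written in plain pandas; the table and column names are hypothetical and the code is not specific to Dataiku.

    # Minimal sketch of a pre-load data-quality gate in a Python ETL step
    # (generic pandas code; tables and thresholds are hypothetical).
    import pandas as pd

    def quality_gate(df: pd.DataFrame, key_cols: list[str], required_cols: list[str]) -> dict:
        # Return simple quality metrics; a pipeline could fail the load if thresholds are breached
        return {
            "row_count": len(df),
            "duplicate_keys": int(df.duplicated(subset=key_cols).sum()),
            "null_rate": {c: float(df[c].isna().mean()) for c in required_cols},
        }

    orders = pd.DataFrame({
        "order_id": [1, 2, 2, 4],
        "customer_id": [10, 11, 11, None],
        "amount": [99.0, 45.5, 45.5, 12.0],
    })
    metrics = quality_gate(orders, key_cols=["order_id"], required_cols=["customer_id", "amount"])
    assert metrics["duplicate_keys"] == 1
    print(metrics)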

Posted 6 days ago

Apply

7.0 - 10.0 years

9 - 12 Lacs

Mumbai

Remote

Employment Type : Contract (Remote). Job Summary : We are looking for a highly skilled Data Engineer / Data Modeler with strong experience in Snowflake, DBT, and GCP to support our data infrastructure and modeling initiatives. The ideal candidate should possess excellent SQL skills, hands-on experience with Erwin Data Modeler, and a strong background in modern data architectures and data modeling techniques. Key Responsibilities : - Design and implement scalable data models using Snowflake and Erwin Data Modeler. - Create, maintain, and enhance data pipelines using DBT and GCP (BigQuery, Cloud Storage, Dataflow). - Perform reverse engineering on existing systems (e.g., Sailfish/DDMS) using DBeaver or similar tools to understand and rebuild data models. - Develop efficient SQL queries and stored procedures for data transformation, quality, and validation. - Collaborate with business analysts and stakeholders to gather data requirements and convert them into physical and logical models. - Ensure performance tuning, security, and optimization of the Snowflake data warehouse. - Document metadata, data lineage, and business logic behind data structures and flows. - Participate in code reviews, enforce coding standards, and provide best practices for data modeling and governance. Must-Have Skills : - Snowflake architecture, schema design, and data warehouse experience. - DBT (Data Build Tool) for data transformation and pipeline development. - Strong expertise in SQL (query optimization, complex joins, window functions, etc.) - Hands-on experience with Erwin Data Modeler (logical and physical modeling). - Experience with GCP (BigQuery, Cloud Composer, Cloud Storage). - Experience in reverse engineering legacy systems like Sailfish or DDMS using DBeaver. Good To Have : - Experience with CI/CD tools and DevOps for data environments. - Familiarity with data governance, security, and privacy practices. - Exposure to Agile methodologies and working in distributed teams. - Knowledge of Python for data engineering tasks and orchestration scripts. Soft Skills : - Excellent problem-solving and analytical skills. - Strong communication and stakeholder management. - Self-driven with the ability to work independently in a remote setup.
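
As a small illustration of the SQL window-function and transformation skills listed above, here is a hedged sketch of a deduplication query that keeps only the latest record per key; the schema and column names are hypothetical, and in practice such logic would typically live in a dbt model or be executed through a Snowflake client.

    # Minimal sketch (hypothetical table/column names): Snowflake-style deduplication
    # with ROW_NUMBER(); shown here only as a SQL string for illustration.
    DEDUP_CUSTOMERS_SQL = """
    SELECT *
    FROM (
        SELECT
            c.*,
            ROW_NUMBER() OVER (
                PARTITION BY customer_id
                ORDER BY updated_at DESC
            ) AS rn
        FROM raw.customers AS c
    )
    WHERE rn = 1
    """

    if __name__ == "__main__":
        print(DEDUP_CUSTOMERS_SQL)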

Posted 6 days ago

Apply

5.0 - 10.0 years

11 - 16 Lacs

Kolkata

Work from Office

Responsibilities : Data Exploration and Insights : - Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and improving overall data quality within the SCI solution. - This includes working with large datasets from various sources, including Excel files and databases. Data Quality Improvement : - Perform various analyses specifically aimed at improving data quality within the SCI system. - This will involve identifying data quality issues, proposing solutions, and implementing improvements. Weekly Playback and Collaboration : - Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. - Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks. Project Scaling and Support : - Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets. - This includes pre-requisites for batch ingestion and post-batch ingestion analysis and validation of SCI records. Data Analysis and Validation : - Perform thorough data analysis and validation of SCI records after batch ingestion. - Proactively identify insights and implement solutions to improve data quality. Stakeholder Collaboration : - Coordinate with business stakeholders to facilitate the manual validation of records flagged for manual intervention. - Communicate findings and recommendations clearly and effectively. Technical Requirements : - 5+ years of experience as a Data Scientist. - Strong proficiency in Python and SQL. - Extensive experience using Jupyter Notebook for data analysis and visualization. - Working knowledge of data matching techniques, including fuzzy logic. - Experience working with large datasets from various sources (Excel, databases, etc.). - Solid understanding of data quality principles and methodologies. Skills : - SQL - Machine Learning - Data Analysis - Jupyter Notebook - Data Cleansing - Fuzzy Logic - Python - Data Quality Improvement - Data Validation - Data Acquisition - Communication and Collaboration - Problem-solving and Analytical skills Preferred Qualifications (Optional) : - Experience with specific data quality tools and techniques. - Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP). - Experience with data visualization tools (e.g., Tableau, Power BI). - Knowledge of statistical modeling and machine learning algorithms.
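
To illustrate the kind of fuzzy matching referred to above, here is a minimal sketch using only the Python standard library; the reference names and the 0.85 threshold are illustrative and are not the SCI solution's actual matching logic.

    # Minimal fuzzy record-matching sketch with difflib (standard library);
    # threshold and reference data are hypothetical.
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        # Normalized similarity ratio between two cleaned strings
        return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

    def match_candidates(name: str, reference: list[str], threshold: float = 0.85) -> list[tuple[str, float]]:
        # Score every reference entry and keep those above the threshold, best first
        scored = [(ref, similarity(name, ref)) for ref in reference]
        return sorted([s for s in scored if s[1] >= threshold], key=lambda s: s[1], reverse=True)

    reference = ["Acme Industries Ltd", "Acme Industrie Limited", "Globex Corporation"]
    print(match_candidates("ACME Industries Limited", reference))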

Posted 6 days ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
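
As a small, hedged illustration of the ontology and knowledge-graph work described above, the sketch below builds and queries a tiny RDF graph with rdflib; the namespace, classes and triples are illustrative only and are not BFO/CCO-aligned.

    # Minimal sketch of building and querying a small knowledge graph with rdflib
    # (illustrative namespace and triples; not an enterprise ontology).
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/ontology#")
    g = Graph()
    g.bind("ex", EX)

    # A tiny class hierarchy plus one instance
    g.add((EX.Supplier, RDF.type, RDFS.Class))
    g.add((EX.Organization, RDF.type, RDFS.Class))
    g.add((EX.Supplier, RDFS.subClassOf, EX.Organization))
    g.add((EX.acme, RDF.type, EX.Supplier))
    g.add((EX.acme, RDFS.label, Literal("Acme Industries")))

    # SPARQL: list instances of Supplier with their labels
    results = g.query("""
        SELECT ?s ?label WHERE {
            ?s a ex:Supplier ;
               rdfs:label ?label .
        }
    """, initNs={"ex": EX, "rdfs": RDFS})
    for row in results:
        print(row.s, row.label)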

Posted 6 days ago

Apply

7.0 - 10.0 years

10 - 14 Lacs

Kolkata

Work from Office

About the Job : We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply. Key Responsibilities : - Dataiku Leadership : Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions. - Data Pipeline Development : Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources. - Data Modeling & Architecture : Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance. - ETL/ELT Expertise : Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility. - Gen AI Integration : Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features. - Programming & Scripting : Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions. - Cloud Platform Deployment : Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency. - Data Quality & Governance : Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices. - Collaboration & Mentorship : Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members. - Performance Optimization : Continuously monitor and optimize the performance of data pipelines and data systems. Required Skills & Experience : - Proficiency in Dataiku : Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications. - Expertise in Data Modeling : Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures. - ETL/ELT Processes & Tools : Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.).
- Familiarity with LLM Mesh : Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration. - Programming Languages : Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions. - Cloud Platforms : Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.). - Gen AI Concepts : Basic understanding of Generative AI concepts and their potential applications in data engineering. - Problem-Solving : Excellent analytical and problem-solving skills with a keen eye for detail. - Communication : Strong communication and interpersonal skills to collaborate effectively with cross-functional teams. Bonus Points (Nice to Have) : - Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake). - Familiarity with data governance and data security best practices. - Experience with MLOps principles and tools. - Contributions to open-source projects related to data engineering or AI. Education : Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
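
To complement the data-modeling emphasis above (dimensional modeling in the Kimball style), here is a minimal pandas sketch that splits a flat extract into a dimension table and a fact table; the columns and surrogate-key scheme are hypothetical.

    # Minimal Kimball-style sketch: split a flat extract into a customer dimension
    # and an orders fact table (hypothetical columns; surrogate keys from the dimension index).
    import pandas as pd

    flat = pd.DataFrame({
        "order_id": [1, 2, 3],
        "customer_name": ["Acme", "Globex", "Acme"],
        "customer_city": ["Pune", "Mumbai", "Pune"],
        "amount": [100.0, 250.0, 75.0],
    })

    # Dimension: one row per distinct customer, with a generated surrogate key
    dim_customer = (flat[["customer_name", "customer_city"]]
                    .drop_duplicates()
                    .reset_index(drop=True)
                    .rename_axis("customer_key")
                    .reset_index())

    # Fact: measures plus the foreign key to the dimension
    fact_orders = (flat.merge(dim_customer, on=["customer_name", "customer_city"])
                       [["order_id", "customer_key", "amount"]])

    print(dim_customer)
    print(fact_orders)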

Posted 6 days ago

Apply

6.0 - 9.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Career Category Engineering Job Description Join Amgen's Mission of Serving Patients At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas - Oncology, Inflammation, General Medicine, and Rare Disease - we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What you will do Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Design, develop, and maintain data solutions for data generation, collection, and processing Be a key team member that assists in design and development of the data pipeline Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate and communicate effectively with product teams Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions Identify and resolve complex data-related challenges Adhere to best practices for coding, testing, and designing reusable code/components Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation Basic Qualifications: Minimum experience of 6-9 years Must-have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing Proficiency in data analysis tools (e.g., SQL) Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development Strong understanding of data modeling, data warehousing, and data integration concepts Proven ability to optimize query performance on big data platforms Preferred Qualifications: Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing Knowledge of Python/R, Databricks, cloud data platforms Strong understanding of data governance frameworks, tools, and best practices. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA) Professional Certifications: AWS Certified Data Engineer preferred Databricks Certificate preferred Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
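
As a brief illustration of the kind of data-quality step such a pipeline might include, here is a hedged PySpark sketch that profiles null counts and filters incomplete rows; it uses a local Spark session and hypothetical columns rather than any Amgen pipeline.

    # Minimal PySpark data-quality sketch (local session, hypothetical columns).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.master("local[1]").appName("dq-check").getOrCreate()

    df = spark.createDataFrame(
        [(1, "A", 10.0), (2, None, 5.0), (3, "C", None)],
        ["id", "category", "value"],
    )

    # Null count per column, collected as a single summary row
    null_counts = df.select([
        F.sum(F.when(F.col(c).isNull(), 1).otherwise(0)).alias(c) for c in df.columns
    ])
    null_counts.show()

    # Keep only rows that pass the completeness rule
    clean = df.dropna(subset=["category", "value"])
    print("rows kept:", clean.count())
    spark.stop()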

Posted 6 days ago

Apply

3.0 - 8.0 years

8 - 11 Lacs

Pune

Work from Office

Job purpose The Service Contract Support Specialist will have ownership of a designated service contract base and will be responsible for driving the renewal process for those contracts with business stakeholders. They will also have ownership of any changes occurring within the contract lifecycle: parts updates, change orders, cancellations, and monitoring data quality. The main objectives are: Create pricing and documentation for the assigned contract base, accurately and on time, to ensure proposals are sent to business stakeholders on time, following GSM Processes. Drive contract renewals correctly in CRM with SOLs, with all data and required documentation, to ensure no delays/errors are incurred during booking. Ownership of any changes occurring within the contract lifecycle: parts updates, change orders, cancellations, and monitoring data quality. This will include the first year of newly booked contracts, which have been handed over from the Contract Proposal Team. Primary responsibilities: Develop a clear understanding of the assigned contract base Develop and maintain good working relationships with key personnel within relevant Honeywell teams, including Service Operation Leaders, Field Service Managers and any other supporting functions (GCC, ISLC, CPT) Proactively drive assigned renewals with self and other stakeholders, to adhere to renewal task due dates and the RNOC-given-to-SOL SLAs Maintain accurate and timely information in CRM for renewals, including attaching documentation for all stages of the renewal process Update opportunity Next Step comments weekly for all renewals in progress and against a CSS renewal milestone Provide accurate updates on each contract renewal and any issues during the weekly MOS call with the Service Contract Support Pole Lead Escalate issues that may delay the renewal process in a timely manner to the Service Contract Support Pole Lead - do not wait for the next MOS call Maintain good knowledge of the renewal process SOP and Work Instructions Ensure that a renewal opportunity exists and is linked to any renewal case/PSC in progress and is also linked to the service contract in CRM Identify scope for renewal of the designated service contract base and work with the Service Operations Leader to validate that scope during the weekly MOS with the SOL Ensure a renewal case is created for each active renewal entitlement in CRM Price scope accurately and obtain proposals from other departments (Cyber, HCP, Third Party), when needed, for inclusion in the pricing tool Ensure pricing matches between the pricing tool and PSC Obtain financial approval for all renewals before issuing the proposal to the Service Operations Leader Create accurate proposals and/or other documentation for the Service Operations Leader When a customer PO is received, check details on the PO against the pricing tools and proposal, including sold-to party, payment terms and invoicing frequency Create an accurate and complete booking package to hand over the renewal for financial booking in CRM and ERP and follow-on activities (critical spare parts setup, third-party vendor purchase orders, SOFs and any other special instructions).
Continuously learn the renewal process, pricing tools and CRM to identify possible improvement areas within the renewal process/tools Create and issue the Welcome Packet to the SOL within 7 days of contract booking (excluding exceptions) Take part in tools Dev and UAT when needed, to support enhancements and to continuously learn new functionality Cover absences for CSS colleagues as and when needed, to keep renewals moving forward Ensure in-progress work is handed over to the CSS backup when taking planned leave Be involved in the training of new employees, including a buddy system for support with live renewals Agree deadlines for tasks/actions required from other stakeholders and keep track of those actions/deadlines/owners via CRM or RAIL Continually develop own knowledge and skills to support the current role and career path Ensure any changes made to the VRW asset list during booking are communicated back to the Asset Support Team, to ensure correct data alignment Contact the Service Contract Pole Lead as the first point of contact on any issues or questions Proactively drive own IDP, goals and KPIs to meet targets Hold quarterly meetings with the Direct Manager to drive own Individual Development Plan Use dashboards available in SF and Power BI to drive renewal tasks to on-time completion Drive CSS pricing with the SOL, so that local pricing is not used, excluding agreed countries. Support standardization in the Contract Renewal process by developing reusable standard documents like Standard Operating Procedures (SOP), Self Learning Packages (SLP), checklists, guidelines, etc. Provide technical guidance to other team members for different Contract Renewal entitlements and steps. Collect overall contract renewal data, prepare status/progress reports and present to the GBE team. 3. Principal Networks & Contact Links Internal Service Contract Pole Operations Manager Service Contract Support Pole Lead - Matrix Manager - first point of escalation Service Operation Leaders Regional Service Operations Managers Field Service Manager Global Customer Care A360 Performance Managers ISA Managers Asset Support Team Contract Proposal Team ISLC External None 4. Supervisory Responsibilities None 5. Geographic Scope & Travel Requirements Located within a central location (Hadapsar, Pune, India) Adherence to local office working policy Typically assigned to a particular pole, handling # service contracts within the pole. Working hours afternoon to midnight shift (2PM to 6PM from office and 8PM to 12:00AM from home). This can be changed based on organization policy and the pole in which the candidate is working. Travel not required for primary tasks; on an exception basis for secondary tasks (e.g. training/workshops) 6. Key Performance Measures RNOC given to SOL as per current SLA Zero renewal cases without renewal opportunity 100% welcome packets issued where needed, excluding exceptions 100% renewal case for active renewal entitlements CPQ adoption as per plan PSC rejections due to CSS error Corrective actions Weekly update Next Step Comment 1. Education Required Bachelor's Degree - Administrative or technical; OR 3-4 years Honeywell Process Solutions / LSS experience in similar positions 2. Work Experience Required 7-8 years of experience in a process controls/pricing-proposal environment 3-4 years of experience in the Honeywell LSS organization (Preferred, not required) Excellent working knowledge of SFDC, CPQ and SAP, MS Word and MS Excel 3.
Technical Skills & Specific Knowledge Required Strong math skills, including basic commercial awareness (booking margins, cash flow) Basic knowledge of pricing of service agreements. 4. Behavioural Competencies Required Able to forge strong internal business relationships and deliver on commitments. Demonstrates a strong commercial awareness. Excellent interpersonal skills as well as good verbal, written and presentation skills. Ability to multi-task and prioritise work. Self-motivated and able to work with minimum supervision. Demonstrates a high level of planning & organisation skills daily. Highly customer-focused approach, demonstrating success through a Voice of the Customer approach daily. Highly self-aware, recognising the impact of approach and behaviours on peers, direct reports, customers and other internal and external contacts. Ability to work within a remote team and support each other when needed Daily demonstration of the Honeywell Behaviours. 5. Language Requirements Fluent in English

Posted 6 days ago

Apply

7.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Remote

Employment Type : Contract (Remote). Job Summary : We are looking for a highly skilled Data Engineer / Data Modeler with strong experience in Snowflake, DBT, and GCP to support our data infrastructure and modeling initiatives. The ideal candidate should possess excellent SQL skills, hands-on experience with Erwin Data Modeler, and a strong background in modern data architectures and data modeling techniques. Key Responsibilities : - Design and implement scalable data models using Snowflake and Erwin Data Modeler. - Create, maintain, and enhance data pipelines using DBT and GCP (BigQuery, Cloud Storage, Dataflow). - Perform reverse engineering on existing systems (e.g., Sailfish/DDMS) using DBeaver or similar tools to understand and rebuild data models. - Develop efficient SQL queries and stored procedures for data transformation, quality, and validation. - Collaborate with business analysts and stakeholders to gather data requirements and convert them into physical and logical models. - Ensure performance tuning, security, and optimization of the Snowflake data warehouse. - Document metadata, data lineage, and business logic behind data structures and flows. - Participate in code reviews, enforce coding standards, and provide best practices for data modeling and governance. Must-Have Skills : - Snowflake architecture, schema design, and data warehouse experience. - DBT (Data Build Tool) for data transformation and pipeline development. - Strong expertise in SQL (query optimization, complex joins, window functions, etc.) - Hands-on experience with Erwin Data Modeler (logical and physical modeling). - Experience with GCP (BigQuery, Cloud Composer, Cloud Storage). - Experience in reverse engineering legacy systems like Sailfish or DDMS using DBeaver. Good To Have : - Experience with CI/CD tools and DevOps for data environments. - Familiarity with data governance, security, and privacy practices. - Exposure to Agile methodologies and working in distributed teams. - Knowledge of Python for data engineering tasks and orchestration scripts. Soft Skills : - Excellent problem-solving and analytical skills. - Strong communication and stakeholder management. - Self-driven with the ability to work independently in a remote setup.
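
To illustrate the reverse-engineering work mentioned above, here is a hedged sketch of a metadata inventory query against INFORMATION_SCHEMA, a common starting point for rebuilding a legacy data model; the database and schema names are hypothetical, and the query would normally be run through a Snowflake/BigQuery client or DBeaver rather than printed.

    # Minimal sketch (hypothetical database/schema names): column-level metadata inventory
    # used as input for reverse-engineering a legacy model in Erwin or a similar tool.
    COLUMN_INVENTORY_SQL = """
    SELECT
        table_schema,
        table_name,
        column_name,
        data_type,
        is_nullable
    FROM legacy_db.information_schema.columns
    WHERE table_schema = 'SALES'
    ORDER BY table_name, ordinal_position
    """

    if __name__ == "__main__":
        print(COLUMN_INVENTORY_SQL)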

Posted 6 days ago

Apply

8.0 - 10.0 years

7 Lacs

Pune

Work from Office

About Tarento: Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions. We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you'll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose. Job Details: Experience: 8+ years of experience predominantly in data-related disciplines such as data governance, data quality and data cleansing in the oil and gas or financial services domain Roles & Responsibilities: Experience of working on data management tools such as Alation and MDG Demonstrate deep understanding of the data governance framework and play a key SME role supporting the Data Governance manager in designing processes for consistent implementation Good understanding of data visualization platforms such as Power BI, Tableau or Qlikview Exposure to data analytics, machine learning, artificial intelligence In-depth understanding of procurement, finance, customer business processes Solid knowledge of data governance concepts around data definition and catalog, data ownership, data lineage, data policies and controls, data monitoring and data governance forums Partner with the business and program teams to document the business data glossary for the assigned domain by capturing data definitions, data standards, data lineage, data quality rules and KPIs. Ensure the data glossary always remains up to date by following stringent change governance. Ensure smooth onboarding for data owners and data stewards by providing them the necessary training to carry out their role effectively. Engage with them on a regular basis to provide progress updates and to seek support to eliminate impediments, if any. Extensive knowledge of Customer master and Material master data, including integration with upstream and downstream legacy systems Ensure adherence to policies related to data privacy, data lifecycle management and data quality management for the assigned data asset Build a rapport with business stakeholders, the technology team, the program team and the wider digital solution and transformation team to identify opportunities and areas to make a difference through the implementation of the data governance framework. Expert knowledge of data governance concepts around data definition and catalog, data ownership, data lineage, data policies and controls, data monitoring and data governance forums Deep knowledge of SAP ERP and associated data structures Must have been part of large, multi-year transformational change across multiple geographies and multiple data domains Comfortable interacting with senior stakeholders and chairing meetings/trainings related to data governance Soft Skills: Active listening, communication and collaboration, presentation, problem solving, stakeholder management, project management.
Domain knowledge [Procurement, Finance, Customer], business acumen, critical thinking, storytelling. Awareness of best practices and emerging technologies in the data management and data analytics space.
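
As a small, hedged illustration of capturing a business-glossary entry with its ownership and data-quality rules, the sketch below uses a plain Python dataclass; the field names and example values are illustrative and are not the Alation or SAP MDG schema.

    # Minimal sketch of a business-glossary entry as a structured record
    # (illustrative fields only; real glossaries live in governance tools).
    from dataclasses import dataclass, field

    @dataclass
    class GlossaryEntry:
        term: str
        definition: str
        data_owner: str
        data_steward: str
        source_system: str
        quality_rules: list[str] = field(default_factory=list)

    customer_id = GlossaryEntry(
        term="Customer ID",
        definition="Unique identifier of a sold-to customer in the ERP customer master.",
        data_owner="Head of Customer Master Data",
        data_steward="Customer Data Steward - EMEA",
        source_system="SAP ERP",
        quality_rules=["Must be unique", "Must not be null", "Must match pattern ^[0-9]{10}$"],
    )
    print(customer_id)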

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies