
210 Metadata Management Jobs - Page 4

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 10.0 years

10 - 12 Lacs

Hyderabad

Work from Office

Details of the role: 8 to 10 years' experience as an Informatica Admin (IICS).

Key responsibilities:
- Understand the program's service catalog and document the list of tasks to be performed for each.
- Lead the design, development, and maintenance of ETL processes to extract, transform, and load data from various sources into our data warehouse.
- Implement best practices for data loading, ensuring optimal performance and data quality.
- Utilize your expertise in IDMC to establish and maintain data governance, data quality, and metadata management processes.
- Implement data controls to ensure compliance with data standards, security policies, and regulatory requirements.
- Collaborate with data architects to design and implement scalable and efficient data architectures that support business intelligence and analytics requirements.
- Work on data modeling and schema design to optimize database structures for ETL processes.
- Identify and implement performance optimization strategies for ETL processes, ensuring timely and efficient data loading.
- Troubleshoot and resolve issues related to data integration and performance bottlenecks.
- Collaborate with cross-functional teams, including data scientists, business analysts, and other engineering teams, to understand data requirements and deliver effective solutions.
- Provide guidance and mentorship to junior members of the data engineering team.
- Create and maintain comprehensive documentation for ETL processes, data models, and data flows; keep documentation up to date with any changes to data architecture or ETL workflows.
- Use Jira for task tracking and project management.
- Implement data quality checks and validation processes to ensure data integrity and reliability.

Required skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Senior ETL Data Engineer, with a focus on IDMC/IICS.
- Strong proficiency in ETL tools and frameworks (e.g., Informatica Cloud, Talend, Apache NiFi).
- Expertise in IDMC principles, including data governance, data quality, and metadata management.
- Solid understanding of data warehousing concepts and practices.
- Strong SQL skills and experience working with relational databases.
- Excellent problem-solving and analytical skills.
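As an illustration of the data quality checks and validation processes this posting calls for, here is a minimal, tool-agnostic sketch of post-load validation; the staging table, columns, and thresholds are hypothetical, and in a real IICS deployment these queries would run against the target warehouse rather than an in-memory SQLite database.

```python
import sqlite3

# Hypothetical post-load validation: row counts, null rates, and duplicates.
# SQLite stands in for the warehouse so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO stg_orders VALUES (1, 10, 99.5), (2, NULL, 15.0), (2, 11, 15.0);
""")

# Each check is a SQL expression that must evaluate truthy to pass.
checks = {
    "row_count_nonzero": "SELECT COUNT(*) > 0 FROM stg_orders",
    "customer_id_null_pct_lt_5":
        "SELECT 100.0 * SUM(customer_id IS NULL) / COUNT(*) < 5.0 FROM stg_orders",
    "order_id_unique": """
        SELECT NOT EXISTS (
            SELECT order_id FROM stg_orders GROUP BY order_id HAVING COUNT(*) > 1
        )
    """,
}

failures = [name for name, sql in checks.items()
            if not conn.execute(sql).fetchone()[0]]
print("failed checks:", failures or "none")
```

With the sample rows above, the null-rate and uniqueness checks fail, which is exactly the kind of signal that would block a load from being promoted.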

Posted 2 weeks ago

Apply

9.0 - 14.0 years

15 - 19 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.
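For context on the streaming stack this role names (Debezium, Kafka, Spark, Delta), the sketch below shows the general shape of a Structured Streaming job that consumes change events from Kafka and lands them in a bronze Delta table. It assumes a Spark runtime with the Kafka and Delta connectors available; the broker address, topic, schema, and S3 paths are all hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("cdc-ingest").getOrCreate()

# Hypothetical flattened Debezium payload; real CDC envelopes carry
# before/after images and source metadata.
schema = StructType([
    StructField("op", StringType()),        # c/u/d operation flag
    StructField("order_id", LongType()),
    StructField("status", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # assumption
    .option("subscribe", "orders.cdc")                  # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land raw events in a bronze Delta table; the checkpoint location holds
# the offset bookkeeping Structured Streaming relies on for fault tolerance.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "s3://lake/checkpoints/orders")  # hypothetical
    .start("s3://lake/bronze/orders")                              # hypothetical
)
query.awaitTermination()
```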

Posted 2 weeks ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 2 weeks ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with at least 9 years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Kolkata

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Informatica Data Quality
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and implement Informatica Data Quality solutions.
- Collaborate with cross-functional teams to analyze and address data quality issues.
- Create and maintain documentation for data quality processes.
- Participate in data quality improvement initiatives.
- Assist in training junior professionals in data quality best practices.

Professional & Technical Skills:
- Must-have: Proficiency in Informatica Data Quality.
- Strong understanding of data quality principles.
- Experience with ETL processes and data integration.
- Knowledge of data profiling and cleansing techniques.
- Familiarity with data governance and metadata management.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Informatica Data Quality.
- This position is based at our Kolkata office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Job Title: Data Governance & Quality Specialist
Experience: 3-8 Years
Location: Bangalore (Hybrid)
Domain: Financial Services
Notice Period: Immediate to 30 Days

What You'll Do:
- Define and enforce data-governance policies (BCBS 239/GDPR) across credit-risk datasets.
- Design, monitor, and report on data-quality KPIs; perform profiling and root-cause analysis in SAS/SQL.
- Collaborate with data stewards, risk teams, and auditors to remediate data issues.
- Develop governance artifacts: data-lineage maps, stewardship RACI, council presentations.

Must Have:
- 3-8 years in data-governance or data-quality roles (financial services).
- Advanced SAS for data profiling and reporting; strong SQL skills.
- Hands-on experience with governance frameworks and regulatory requirements.
- Excellent stakeholder-management and documentation abilities.

Nice to Have:
- Experience with Collibra, Informatica, or Talend.
- Exposure to credit-risk model inputs (PD/LGD/EAD).
- Automation via SAS macros or Python scripting.

If interested, please share your resume with sunidhi.manhas@portraypeople.com.
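The profiling and root-cause work this role describes would normally run in SAS or SQL; as a rough illustration of the underlying checks, here is a small pandas sketch computing null rates and out-of-range counts on a hypothetical credit-risk extract (column names and thresholds are invented for the example).

```python
import pandas as pd

# Hypothetical credit-risk extract; PD estimates should sit in [0, 1].
df = pd.DataFrame({
    "account_id": [1, 2, 3, 4],
    "pd_estimate": [0.02, 0.15, None, 1.3],
    "exposure":    [1000.0, 250.0, 40.0, None],
})

# Null-rate profile per column, the staple data-quality KPI.
report = pd.DataFrame({"null_pct": df.isna().mean() * 100})

# Range check on the PD column: count populated values outside [0, 1].
report.loc["pd_estimate", "out_of_range"] = (
    (~df["pd_estimate"].between(0, 1)) & df["pd_estimate"].notna()
).sum()

print(report)
```

The same predicates translate directly into SQL WHERE clauses or SAS data steps once the rules are agreed with stewards.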

Posted 2 weeks ago

Apply

5.0 - 10.0 years

9 - 19 Lacs

Bengaluru

Work from Office

Job Title: Data Governance Specialist
Experience: 5-7 Years
Location: Bangalore, India
Domain: Financial Services
Notice Period: Immediate to 30 Days

Job Description: We are seeking a skilled Data Governance Specialist to join our data management team in Bangalore. This role will focus on implementing and maintaining data governance frameworks, ensuring high-quality data assets, and enabling consistent use of metadata across the organization.

Key Responsibilities:
- Establish and maintain data governance policies, standards, and processes.
- Develop and manage the enterprise data glossary and metadata repositories.
- Monitor and improve data quality metrics, ensuring accuracy and consistency across systems.
- Work closely with business and technical teams to ensure data lineage and traceability.
- Support Agile delivery using tools like JIRA and Confluence.
- Collaborate across departments to promote data stewardship and governance awareness.

Key Requirements:
- 5-7 years of experience in data governance, metadata management, or data quality roles.
- Strong knowledge of data glossary, lineage, and metadata practices.
- Experience working in Agile environments; familiarity with JIRA and Confluence.
- Excellent communication and stakeholder management skills.
- Prior experience in the financial services or banking domain is preferred.

Preferred Skills:
- Knowledge of data governance tools (e.g., Collibra, Informatica, Alation) is a plus.
- Understanding of regulatory data requirements (e.g., BCBS 239, GDPR) is an advantage.

Keywords: data governance, data glossary, metadata management, data quality, Agile, JIRA, Confluence.

If interested, please share your resume with sunidhi.manhas@portraypeople.com.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

19 - 25 Lacs

Bengaluru

Work from Office

In this role, you will play a key part in designing, building, and optimizing scalable data products within the Telecom Analytics domain. You will collaborate with cross-functional teams to implement AI-driven analytics, autonomous operations, and programmable data solutions. This position offers the opportunity to work with cutting-edge Big Data and Cloud technologies, enhance your data engineering expertise, and contribute to advancing Nokia's data-driven telecom strategies. If you are passionate about creating innovative data solutions, mastering cloud and big data platforms, and working in a fast-paced, collaborative environment, this role is for you!

You have:
- Bachelor's or master's degree in Computer Science, Data Engineering, or a related field, with 8+ years of experience in data engineering focused on Big Data, Cloud, and Telecom Analytics.
- Hands-on expertise in Ab Initio for data cataloguing, metadata management, and lineage.
- Skills in data warehousing, OLAP, and modelling using BigQuery, ClickHouse, and SQL.
- Experience with data persistence technologies like S3, HDFS, and Iceberg.
- Hands-on experience with Python and scripting languages.

It would be nice if you also had:
- Experience with data exploration and visualization using Superset or BI tools.
- Knowledge of ETL processes and streaming tools such as Kafka.
- A background in building data products for the telecom domain and an understanding of AI and machine learning pipeline integration.

Responsibilities:
- Data Governance: Manage source data within the Metadata Hub and Data Catalog.
- ETL Development: Develop and execute data processing graphs using Express>It and the Co>Operating System.
- ETL Optimization: Debug and optimize data processing graphs using the Graphical Development Environment (GDE).
- API Integration: Leverage Ab Initio APIs for metadata and graph artifact management.
- CI/CD Implementation: Implement and maintain CI/CD pipelines for metadata and graph deployments.
- Team Leadership & Mentorship: Mentor team members and foster best practices in Ab Initio development and deployment.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

4 - 8 Lacs

Mumbai

Work from Office

Subject matter experts in Marketing and Comms provide business stakeholders with specialized advice on their subjects, and act as advisors leveraging specific MC expertise. The SME is a person with in-depth, unique knowledge and expertise on a specific subject or in a particular industry (e.g., digital marketing, internal comms, telecom).

Requirements:
- Familiarity with metadata management and tagging best practices.
- Exceptional attention to detail, with a strong ability to spot errors and inconsistencies in large datasets or digital assets.
- Strong analytical skills with the ability to identify data quality issues and root causes and implement corrective actions.
- Ability to work effectively with cross-functional teams, including marketing, creative, IT, and product teams, to resolve data issues and ensure alignment across the organization.
- Strong problem-solving skills to address data discrepancies, identify issues within workflows, and propose effective solutions.
- Proven track record of optimizing data management processes, improving workflows, and implementing data quality initiatives.

Primary Skills:
- 4-6 years of experience in digital asset management, with a focus on maintaining data accuracy and consistency across systems.
- 2+ years with at least one Digital Asset Management tool: Sitecore, Aprimo, AEM, or Veeva.

Secondary Skills:
- Familiarity with data validation tools, reporting platforms (e.g., Excel, Power BI), and basic SQL or query languages for managing and analyzing data.
- Excellent written and verbal communication skills, with the ability to document processes, provide training, and explain data issues clearly to both technical and non-technical stakeholders.

Posted 2 weeks ago

Apply

5.0 - 6.0 years

8 - 10 Lacs

Mumbai

Work from Office

We are seeking a skilled SAS Administrator with at least 5 years of experience in managing SAS platforms, including installation, configuration, and administration. The ideal candidate should have hands-on expertise in SAS Viya 3.4, SAS Viya 3.5, SAS Viya 4, SAS Management Console (SMC), and server-level configurations. Experience working in government or large enterprise environments is preferred.

Key Responsibilities:
- Perform installation, configuration, and maintenance of SAS 9.x and SAS Viya 4 on Linux/Unix server environments.
- Manage SAS Environment Manager, SAS Management Console (SMC), metadata administration, and user/group/role permissions.
- Monitor and tune system performance, ensuring platform availability and integrity.
- Administer SAS server security, backups, and patching.
- Collaborate with IT teams to troubleshoot server-level or configuration issues.
- Perform regular upgrades, migrations, and license updates.
- Coordinate with SAS Tech Support for critical issues or escalations.
- Prepare and maintain technical documentation and SOPs.

Required Skills:
- Minimum 5 years of hands-on experience in SAS platform administration.
- Strong experience in SAS Viya 4 administration and traditional SAS (9.x).
- Good knowledge of SAS SMC, metadata management, and server architecture.
- Experience in installation/configuration on Linux/Unix environments.
- Familiarity with security setup, resource management, and system health monitoring.
- Knowledge of shell scripting is a plus.

Preferred Qualifications:
- BE/BTech/MTech/MCA/MSc (Statistics).
- Prior experience working with government or public sector clients is a plus.
- SAS certifications (e.g., SAS Certified Platform Administrator) are a bonus.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram, Delhi / NCR

Work from Office

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & Responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark.
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations.
- Take full ownership from ingestion through transformation, validation, metadata documentation, and dashboard-ready output.
- Build pipelines that are not just performant but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules.
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging.
- BI readiness with Sigma, with exposure to Power BI/Tableau (nice to have).
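This posting names Great Expectations for null-percentage, range, and referential checks. The sketch below uses the classic pandas-backed Great Expectations API (pre-1.0 releases; the interface changed substantially in newer versions, so treat the exact calls as version-dependent), with a hypothetical shipment feed standing in for a real batch.

```python
import great_expectations as ge
import pandas as pd

# Hypothetical shipment feed; in the pipeline described above this would
# be a Spark or Athena-backed batch, not an in-memory frame.
df = ge.from_pandas(pd.DataFrame({
    "shipment_id": [1, 2, 3],
    "weight_kg": [12.0, None, 48.5],
    "status": ["IN_TRANSIT", "DELIVERED", "LOST?"],
}))

# Null, range (with a 95% tolerance), and set-membership expectations.
df.expect_column_values_to_not_be_null("shipment_id")
df.expect_column_values_to_be_between("weight_kg", 0, 1000, mostly=0.95)
df.expect_column_values_to_be_in_set(
    "status", ["CREATED", "IN_TRANSIT", "DELIVERED"]
)

results = df.validate()
print("suite passed:", results.success)
```

In a production pipeline the validation result (not just the boolean) would be written out as the validation log the posting requires before handoff to QA or BI.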

Posted 3 weeks ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram

Hybrid

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & Responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark.
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations.
- Take full ownership from ingestion through transformation, validation, metadata documentation, and dashboard-ready output.
- Build pipelines that are not just performant but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules.
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging.
- BI readiness with Sigma, with exposure to Power BI/Tableau (nice to have).
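This hybrid variant of the role also stresses performant Athena SQL with CTEs and window functions. Below is a hedged sketch of submitting such a query through boto3; the database, table, region, and results bucket are hypothetical.

```python
import boto3

# CTE + window function of the kind the posting describes: latest status
# per shipment, then a daily aggregate. Table and schema are hypothetical.
SQL = """
WITH ranked AS (
    SELECT shipment_id, status, event_ts,
           ROW_NUMBER() OVER (PARTITION BY shipment_id
                              ORDER BY event_ts DESC) AS rn
    FROM logistics.shipment_events
)
SELECT DATE(event_ts) AS day, status, COUNT(*) AS shipments
FROM ranked
WHERE rn = 1
GROUP BY DATE(event_ts), status
"""

athena = boto3.client("athena", region_name="ap-south-1")
resp = athena.start_query_execution(
    QueryString=SQL,
    QueryExecutionContext={"Database": "logistics"},               # hypothetical
    ResultConfiguration={"OutputLocation": "s3://query-results/"},  # hypothetical
)
print("query id:", resp["QueryExecutionId"])
```

Ranking in a CTE and filtering on the row number is usually cheaper in Presto than a correlated subquery, which is the kind of optimization the posting is alluding to.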

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 15 Lacs

Pune

Work from Office

Ab Initio Admin

Must-have skills: code deployment; Ab Initio product suite installation (Co>Operating System, Express>It, EME, Control Center, Authorization Gateway, Metadata Hub); Linux commands; shell scripting; Ab Initio environment administration; MFS file system.

Required candidate profile, good-to-have skills: networking and security, ServiceNow, Jenkins, Ansible, exposure to Ab Initio product suite upgrades, Oracle GoldenGate, Oracle, Grafana/Kibana.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

We are looking for a skilled SAP Data Intelligence Application Support Specialist with 8 to 15 years of experience. The ideal candidate will have a strong background in managing complex data integration and providing advanced technical support for the SAP Data Intelligence platform.

Roles and Responsibilities:
- Provide advanced technical support for the SAP Data Intelligence platform, managing complex data integration.
- Resolve incidents and manage problems, developing comprehensive root cause analyses and implementing proactive solutions.
- Assist in designing, configuring, and maintaining the SAP Data Intelligence architecture, collaborating with cross-functional teams to optimize data integration strategies.
- Conduct regular system performance assessments, implement monitoring tools, and develop optimization strategies.
- Ensure robust security protocols within the SAP Data Intelligence environment, implementing and maintaining data access controls.
- Manage complex data source integrations, including enterprise systems, cloud platforms, databases, and applications.

Job Requirements:
- Minimum 8 years of experience in a related field.
- Strong knowledge of SAP Data Intelligence, metadata management, and system performance.
- Experience in incident resolution, problem management, and root cause analysis.
- Ability to design, configure, and maintain SAP Data Intelligence architecture.
- Strong understanding of data flow transformation, governance, and scalability.
- Excellent communication and collaboration skills.

Posted 3 weeks ago

Apply

5.0 - 12.0 years

25 - 30 Lacs

Mumbai

Work from Office

Job Summary / Objective: Act as a strategic advisor and engagement lead, providing executive oversight and direction for the client's OCC-driven data remediation initiatives. Ensure alignment of data management, governance, and quality improvement strategies with regulatory requirements and business objectives.

Key Responsibilities / Duties:
- Define and communicate the strategic vision for data governance remediation to client executives.
- Guide the client in modernizing data architecture, risk aggregation, and regulatory reporting processes.
- Advise on the development and enforcement of enterprise-wide data policies, standards, and controls.
- Support executive and Board-level reporting and engagement with the OCC or other regulators.
- Lead efforts to foster a culture of data accountability and continuous improvement within the client organization.

Required Skill Sets & Requirements:
- Enterprise Data Analysis and Management: Extensive experience designing and implementing data analysis and management programs in large financial institutions. Strong understanding of data quality metrics, master data management, and metadata management.
- Regulatory & Risk Management: Experience in operational risk domains including, but not limited to, data risk, fraud risk, tech risk, cyber risk, operational resiliency risk, third-party risk, processing risk, services and enterprise ops risk, and regulatory management reporting and financial statement reporting risk. Responsibilities include requirements gathering, data acquisition, data quality assessment, and building risk monitoring tools. Deep knowledge of regulatory frameworks (BCBS 239) and experience supporting regulatory remediation.
- Technical & Analytical: Programming proficiency in Python, SQL, and reporting tools like Tableau, Power BI, and Jira. Experience guiding IT modernization, system integration, and process optimization. Advanced problem-solving, decision-making, and client advisory skills.
- Communication & Board Reporting: Excellent communication, negotiation, and presentation skills with demonstrated experience in Board-level engagement.

Qualifications:
- Master's or advanced degree preferred.
- 12+ years' experience in consulting or executive roles in financial services.
- Professional certifications (CDMP, PMP) highly desirable.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

12 - 16 Lacs

Mumbai

Work from Office

Operational Risk Data Management

Job Summary / Objective: Act as a strategic advisor and engagement lead, providing executive oversight and direction for the client's OCC-driven data remediation initiatives. Ensure alignment of data management, governance, and quality improvement strategies with regulatory requirements and business objectives.

Key Responsibilities / Duties:
- Define and communicate the strategic vision for data governance remediation to client executives.
- Guide the client in modernizing data architecture, risk aggregation, and regulatory reporting processes.
- Advise on the development and enforcement of enterprise-wide data policies, standards, and controls.
- Support executive and Board-level reporting and engagement with the OCC or other regulators.
- Lead efforts to foster a culture of data accountability and continuous improvement within the client organization.

Required Skill Sets & Requirements:
- Enterprise Data Analysis and Management: Extensive experience designing and implementing data analysis and management programs in large financial institutions. Strong understanding of data quality metrics, master data management, and metadata management.
- Regulatory & Risk Management: Experience in operational risk domains including, but not limited to, data risk, fraud risk, tech risk, cyber risk, operational resiliency risk, third-party risk, processing risk, services and enterprise ops risk, and regulatory management reporting and financial statement reporting risk. Responsibilities include requirements gathering, data acquisition, data quality assessment, and building risk monitoring tools. Deep knowledge of regulatory frameworks (BCBS 239) and experience supporting regulatory remediation.
- Technical & Analytical: Programming proficiency in Python, SQL, and reporting tools like Tableau, Power BI, and Jira. Experience guiding IT modernization, system integration, and process optimization. Advanced problem-solving, decision-making, and client advisory skills.
- Communication & Board Reporting: Excellent communication, negotiation, and presentation skills with demonstrated experience in Board-level engagement.

Qualifications:
- Master's or advanced degree preferred.
- 6+ years' experience in consulting or executive roles in financial services.
- Professional certifications (CDMP, PMP) highly desirable.
- ORM Level 1 support experience required.
- Indian passport with at least 1 year of validity is mandatory.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

11 - 20 Lacs

Pune

Hybrid

A Day in the Life: Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes. Medtronic is hiring a Senior Data Governance Engineer. As a Senior Engineer, you will be a key member of our Data Governance team, defining scope, assessing current data governance capabilities, and building roadmaps to mature those capabilities. This role offers a dynamic opportunity to join Medtronic's Diabetes business. Medtronic has announced its intention to separate the Diabetes division to promote future growth and innovation within the business and reallocate investments and resources across Medtronic, subject to applicable information and consultation requirements. While you will start your employment with Medtronic, upon establishment of SpinCo or the transition of the Diabetes business to another company, your employment may transfer to either SpinCo or the other company, at Medtronic's discretion and subject to any applicable information and consultation requirements in your jurisdiction.

Responsibilities may include the following, and other duties may be assigned:
- Data governance strategy development: responsible for the strategic development, architectural definition, and enterprise-wide data governance and data management initiatives supporting the delivery of data as a service.
- Provide data governance and data management advisory expertise.
- Identify and evaluate metadata platform alternatives, develop the logical metadata framework architecture, and define and implement metadata maintenance processes.
- Define data governance operating models, roles, and responsibilities in collaboration with Medtronic requirements.
- Assess and select the data management tools landscape, including data profiling, data quality, metadata, MDM, rules engines, and pattern matching.
- Refine and develop the data-centric approach and methodologies, which may include areas such as data strategy, data governance, data lineage, analytics, business intelligence, data architecture, data quality, master data management, and data integration and delivery.
- Assess current-state capabilities, identify gaps versus leading practices, and recommend future-state data requirements.
- Identify opportunities for new workflows, third-party glossary/metadata tools, database architectures, and third-party pre-existing data models.
- Work closely with business and technology stakeholders to assist in understanding and documenting data requirements and lineage, partnering with IT data integration and data warehouse teams to ensure requirements are effectively executed.
- Help define data modeling naming standards, abbreviations, guidelines, and best practices.
- Enhance or design data model review processes based on business requirements.

Minimum Qualifications:
- At least 5 years of experience developing and structuring an enterprise-wide data governance organization and business process (operating models, roles, partner organizations, responsibilities).
- Hands-on experience on both the business side and the IT side implementing or supporting MDM and/or Data Warehouse and Reporting IT solutions.
- Strong business knowledge of the investment management industry and common data management operations.
- Broad understanding of the role of data management within global markets organizations, information flow, and data governance issues.
- Domain expertise in specific areas of data management such as data strategy, data governance, data lineage, analytics, business intelligence, data architecture, data quality, master data management, and data integration and delivery.

Note: Candidates who can join immediately or are serving their notice period are preferred. If interested, please share your updated CV with ashwini.ukekar@medtronic.com.

Posted 3 weeks ago

Apply

6.0 - 7.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary: Experience: 5-8 years. Location: Bangalore. Contribute to building state-of-the-art data platforms in AWS, leveraging Python and Spark. Be part of a dynamic team building data solutions in a supportive and hybrid work environment. This role is ideal for an experienced data engineer looking to step into a leadership position while remaining hands-on with cutting-edge technologies. You will design, implement, and optimize ETL workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires technical expertise, strong problem-solving skills, and the ability to collaborate effectively within an agile team.

Must-Have Tech Skills:
- Demonstrable experience as a senior data engineer.
- Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices.
- Experience implementing data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena.
- Experience with data services in a Lakehouse architecture.
- Good background and proven experience in data modelling for data platforms.

Nice-to-Have Tech Skills:
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous.

Key Accountabilities:
- Provides guidance on best practices in design, development, and implementation, ensuring solutions meet business requirements and technical standards.
- Works closely with architects, Product Owners, and dev team members to decompose solutions into Epics, leading the design and planning of these components.
- Drives the migration of existing data processing workflows to the Lakehouse architecture, leveraging Iceberg capabilities.
- Communicates complex technical information clearly, tailoring messages to the appropriate audience to ensure alignment.

Key Skills:
- Deep technical knowledge of data engineering solutions and practices.
- Implementation of data pipelines using AWS data services and Lakehouse capabilities.
- Highly proficient in Python and Spark, and familiar with a variety of development technologies.
- Skilled in decomposing solutions into components (Epics, stories) to streamline development.
- Proficient in creating clear, comprehensive documentation.
- Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation.
- Previous financial services experience delivering data solutions against financial and market reference data.
- Solid grasp of data governance and data management concepts, including metadata management, master data management, and data quality.

Educational Background: Bachelor's degree in Computer Science, Software Engineering, or a related field is essential.

Bonus Skills: A working knowledge of indices, index construction, and asset management principles.
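As context for the Glue-based pipelines this role lists, here is the standard skeleton of a Glue PySpark job. It only executes inside the AWS Glue runtime (which provides the awsglue library), and the catalog database, table, filter, and output path are hypothetical.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue bootstrap: Glue injects JOB_NAME (plus any custom
# arguments) into sys.argv at run time.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue = GlueContext(sc)
spark = glue.spark_session
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Hypothetical catalog table -> cleaned parquet output.
src = glue.create_dynamic_frame.from_catalog(
    database="lake_bronze", table_name="trades"   # hypothetical names
).toDF()

cleaned = src.dropDuplicates(["trade_id"]).filter("notional > 0")
cleaned.write.mode("overwrite").parquet("s3://lake/silver/trades/")  # hypothetical

job.commit()
```

In the Lakehouse migration the posting describes, the parquet write would typically become an Iceberg table write instead; the surrounding job structure stays the same.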

Posted 3 weeks ago

Apply

8.0 - 10.0 years

20 - 35 Lacs

Greater Noida

Work from Office

We are seeking a seasoned Informatica CDGC expert to work with the Informatica team and lead the implementation and optimization of Informatica Cloud Data Governance and Catalog solutions. The ideal candidate will establish best practices, drive data governance initiatives, and mentor a team of data professionals to ensure a scalable and efficient governance framework aligned with business objectives.

Roles and Responsibilities:
- Lead the end-to-end implementation of Informatica Cloud Data Governance and Catalog (CDGC) solutions, ensuring timely and high-quality delivery.
- Design, configure, and deploy data governance frameworks using Informatica CDGC, aligned with organizational standards and compliance requirements.
- Develop and implement best practices for metadata management, data lineage, data quality, and stewardship within the Informatica CDGC environment.
- Collaborate with cross-functional teams, including data architects, engineers, analysts, and business stakeholders, to drive data governance adoption.
- Provide expert guidance on data governance policies, workflows, and tool utilization to maximize the value of Informatica CDGC.
- Mentor and coach team members on technical and governance aspects of Informatica CDGC, fostering skill development and knowledge sharing.
- Troubleshoot and resolve complex technical issues related to Informatica CDGC deployment and integrations.
- Stay current with Informatica CDGC product updates, industry trends, and data governance best practices to continuously enhance governance capabilities.
- Create and maintain documentation, including architecture diagrams, configuration guides, and training materials.
- Support audit and compliance activities related to data governance and metadata management.

Requirements:
- Proven experience working with Informatica Data Governance and Catalog tools, preferably Cloud Data Governance and Catalog (CDGC).
- Strong understanding of data governance concepts, metadata management, data lineage, and data quality principles.
- Hands-on experience implementing and configuring Informatica CDGC solutions in enterprise environments.
- Proficiency with ETL/ELT processes, metadata integration, and data cataloging.
- Solid knowledge of data management frameworks and regulatory compliance (e.g., GDPR, CCPA).
- Excellent problem-solving and analytical skills, with the ability to mentor and lead a team.
- Strong communication skills, with experience working across technical and business stakeholders.
- Ability to create and deliver training sessions, workshops, and detailed technical documentation.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Collibra Data Quality & Observability
Good-to-have skills: Collibra Data Governance
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are functioning optimally. You will also engage in problem-solving activities, providing support and enhancements to existing applications while maintaining a focus on quality and efficiency.

Key Responsibilities:
- Configure and implement Collibra Data Quality (CDQ) rules, workflows, dashboards, and data quality scoring metrics.
- Collaborate with data stewards, data owners, and business analysts to define data quality KPIs and thresholds.
- Develop data profiling and rule-based monitoring using CDQ's native rule engine or integrations (e.g., with Informatica, Talend, or BigQuery).
- Build and maintain data quality dashboards and issue management workflows within Collibra.
- Integrate CDQ with Collibra Data Intelligence Cloud for end-to-end governance visibility.
- Drive root cause analysis and remediation plans for data quality issues.
- Support metadata and lineage enrichment to improve data traceability.
- Document standards, rule logic, and DQ policies in the Collibra Catalog.
- Conduct user training and promote data quality best practices across teams.

Required Skills and Experience:
- 3+ years of experience in data quality, metadata management, or data governance.
- Hands-on experience with the Collibra Data Quality & Observability (CDQ) platform.
- Knowledge of Collibra Data Intelligence Cloud, including Catalog, Glossary, and Workflow Designer.
- Proficiency in SQL and an understanding of data profiling techniques.
- Experience integrating CDQ with enterprise data sources (Snowflake, BigQuery, Databricks, etc.).
- Familiarity with data governance frameworks and data quality dimensions (accuracy, completeness, consistency, etc.).
- Excellent analytical, problem-solving, and communication skills.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Collibra Data Quality & Observability.
- This position is based in Mumbai.
- 15 years of full-time education is required.
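Collibra CDQ rules are authored in the product's own rule engine, so the sketch below is deliberately tool-agnostic: it shows only the underlying idea of scoring a dataset against rule predicates, using a hypothetical table in SQLite. It is not Collibra's rule DSL.

```python
import sqlite3

# Hypothetical customer table; in CDQ each rule would be registered and
# scored by the platform rather than computed ad hoc like this.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, email TEXT, country TEXT);
    INSERT INTO customers VALUES
        (1, 'a@x.com', 'IN'), (2, NULL, 'IN'), (3, 'c@x.com', 'ZZ');
""")

# Each rule is a SQL predicate; the score is the share of rows passing it.
rules = {
    "email_complete": "email IS NOT NULL",
    "country_valid":  "country IN ('IN', 'US', 'GB')",
}

total = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
for name, predicate in rules.items():
    passing = conn.execute(
        f"SELECT COUNT(*) FROM customers WHERE {predicate}"
    ).fetchone()[0]
    print(f"{name}: {100.0 * passing / total:.1f}% pass")
```

Per-rule pass rates like these are what roll up into the dimension scores (completeness, validity, and so on) that a CDQ dashboard surfaces.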

Posted 3 weeks ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Responsibilities:

Data Quality Implementation & Monitoring (Acceldata & DemandTools):
- Design, develop, and implement data quality rules and checks using Acceldata to monitor data accuracy, completeness, consistency, and timeliness.
- Configure and utilize Acceldata to profile data, identify anomalies, and establish data quality thresholds.
- Investigate and resolve data quality issues identified by Acceldata, working with relevant teams on remediation.
- Leverage DemandTools within our Salesforce environment to identify, merge, and prevent duplicate records across Leads, Contacts, and Accounts.
- Implement data standardization and cleansing processes within Salesforce using DemandTools.
- Develop and maintain data quality dashboards and reports using Acceldata to provide visibility into data health.

Data Onboarding & Integration Quality:
- Collaborate with engineers and platform teams to understand data sources and pipelines built using Fivetran or another ingestion tool.
- Validate data transformations within Fivetran to maintain data integrity and quality.
- Develop and execute test plans and test cases to validate the successful and accurate onboarding of data into our Snowflake environment.

Metadata Management & Data Governance:
- Work with the Atlan platform to understand and contribute to the establishment of a comprehensive data catalog.
- Assist in defining and implementing data governance policies and standards within Atlan.
- Validate the accuracy and completeness of metadata within Atlan to ensure data discoverability and understanding.
- Collaborate on data lineage tracking and impact analysis using Atlan.

Collaboration & Communication:
- Work closely with data engineers, the platform team, data analysts, business stakeholders, and Salesforce administrators.
- Clearly communicate data quality findings, risks, and remediation steps.
- Participate in data governance meetings and contribute to the development of data quality best practices.
- Document data quality rules, processes, and monitoring procedures.

Required Skills & Experience:
- Proven experience (e.g., 3+ years) as a Data Quality Engineer or in a similar role.
- Hands-on experience with Fivetran or a comparable data ingestion application, including its data transformation capabilities.
- Familiarity with Atlan or other modern data catalog and metadata management tools.
- Strong practical experience with Acceldata or similar data quality monitoring and observability platforms.
- Familiarity with using DemandTools for data quality management within Salesforce.
- Solid understanding of data quality principles, methodologies, and best practices.
- Strong SQL skills for data querying and analysis.
- Experience with data profiling and data analysis techniques.
- Excellent analytical, problem-solving, and troubleshooting skills.
- Strong communication and collaboration skills.
- Ability to work independently and manage tasks effectively in a remote environment.

Preferred Skills & Experience:
- Experience with other data quality tools or frameworks.
- Knowledge of data warehousing concepts and technologies (e.g., Snowflake, BigQuery).
- Experience with scripting languages like Python for data manipulation and automation.
- Familiarity with the Salesforce data model and administration.
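DemandTools performs Salesforce deduplication through its own configurable matching rules; as a rough, hypothetical illustration of the matching logic such tools automate, here is a pandas sketch that groups lead records on a normalized email key (field names invented for the example).

```python
import pandas as pd

# Hypothetical lead export; DemandTools would operate on live Salesforce
# objects, with configurable matching and merge rules.
leads = pd.DataFrame({
    "lead_id": ["L1", "L2", "L3", "L4"],
    "email": ["A@x.com", "a@x.com ", "b@y.com", None],
    "company": ["Acme", "Acme Inc", "Beta", "Gamma"],
})

# Normalize the match key: trim whitespace and lowercase the address.
leads["match_key"] = leads["email"].str.strip().str.lower()

# Keep only populated keys, then surface every record in a duplicate group.
dupes = leads[leads["match_key"].notna()]
dupes = dupes[dupes.duplicated("match_key", keep=False)]
print(dupes.sort_values("match_key"))
```

Real matching rules typically add fuzzy comparisons on company and name fields; the normalize-then-group pattern stays the same.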

Posted 3 weeks ago

Apply

2.0 - 7.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering, Bachelor of Technology (Integrated), Bachelor of Technology, Bachelor of Business Administration, Master of Business Administration, Master of Science (Technology), Master of Technology, or Master of Technology (Integrated).

Service Line: Enterprise Package Application Services

Responsibilities: A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products; understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data.
- Awareness of the latest technologies and trends.
- Logical thinking and problem-solving skills, along with an ability to collaborate.
- Ability to assess current processes, identify improvement areas, and suggest technology solutions.
- Knowledge of one or two industry domains.

Technical and Professional Requirements:
- At least 2 years of configuration and development experience with the implementation of OFSAA solutions (such as ERM, EPM, etc.).
- Expertise in implementing OFSAA technical areas covering OFSAAI and frameworks: Data Integrator, Metadata Management, and Data Modelling.
- Perform and understand data mapping from source systems to OFSAA staging; execute OFSAA batches and analyze result-area tables and derived entities.
- Perform data analysis using OFSAA metadata (i.e., technical metadata, rule metadata, business metadata), identify any data mapping gaps, and report them to stakeholders.
- Participate in requirements workshops, help implement the designed solution, perform testing (UT, SIT), and coordinate user acceptance testing.
- Knowledge of and experience with the full SDLC lifecycle.
- Experience with Lean/Agile development methodologies.

Preferred Skills: Technology - Oracle Industry Solutions - Oracle Financial Services Analytical Applications (OFSAA)

Posted 3 weeks ago

Apply

10.0 - 15.0 years

12 - 16 Lacs

Pune

Work from Office

To be successful in this role, you should meet the following requirements (must have):
- Payments and banking experience is a must.
- Experience in implementing and monitoring data governance using standard methodology throughout the data life cycle, within a large organisation.
- Up-to-date knowledge of data governance theory, standard methodology, and practical considerations.
- Knowledge of data governance industry standards and tools.
- Overall experience of 10+ years in data governance, encompassing data quality management, master data management, data privacy and compliance, data cataloguing and metadata management, data security, maturity, and lineage.
- Prior experience in implementing an end-to-end data governance framework.
- Experience in automating data cataloguing, ensuring accurate, consistent metadata, and making data easily discoverable and usable.
- Domain experience across the payments and banking lifecycle.
- An analytical mind and an inclination for problem-solving, with attention to detail.
- Ability to effectively navigate and deliver transformation programmes in large global financial organisations, amidst the challenges posed by bureaucracy, globally distributed teams, and local data regulations.
- Strong communication skills, coupled with the ability to present complex information and data.
- A first-class degree in Engineering or a relevant field, with two or more of the following subjects as a major: Mathematics, Computer Science, Statistics, Economics.

The successful candidate will also meet the following requirements (good to have):
- Database types: relational, NoSQL, DocumentDB.
- Databases: Oracle, PostgreSQL, BigQuery, Bigtable, MongoDB, Neo4j.
- Experience in conceptual/logical/physical data modeling.
- Experience in Agile methodology and leading Agile delivery, aligned with organisational needs.
- An effective leader as well as a team player, with a strong commitment to quality and efficiency.

Posted 3 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Informatica MDM
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Advanced Application Engineer, you will develop innovative technology solutions for emerging industries and products. Your typical day will involve interpreting system requirements into design specifications, collaborating with various teams to ensure alignment on project goals, and actively participating in problem-solving discussions to enhance project outcomes. You will engage in continuous learning to stay updated with industry trends and technologies, ensuring that your contributions are both relevant and impactful.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have: Proficiency in Informatica MDM.
- Strong understanding of data integration and data quality management.
- Experience with data modeling and metadata management.
- Familiarity with ETL processes and data warehousing concepts.
- Ability to troubleshoot and resolve technical issues related to MDM.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Informatica MDM.
- This position is based in Hyderabad.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

8.0 - 10.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Urgent opening for Solution Architect - Data Warehouse, Bangalore
Posted On: 04th Jul 2019, 12:25 PM
Location: Bangalore
Role / Position: Solution Architect - Data Warehouse
Experience (required): 8+ years

Description:
- 8-10 years of experience in consulting or IT supporting Enterprise Data Warehouse and Business Intelligence environments, including experience with data warehouse architecture and design, ETL design/development, and analytics.
- Responsible for defining the data strategy and for ensuring that programs and projects align to that strategy.
- Provides thought leadership in the following areas: data access, data integration, data visualization, data modeling, data quality and metadata management; analytics, data discovery, statistical methods, database design and implementation.
- Expertise in database appliances, RDBMS, Teradata, and Netezza.
- Hands-on experience with data architecting, data mining, large-scale data modeling, and business requirements gathering/analysis.
- Experience with ETL and data migration tools.
- Direct experience in implementing enterprise data management processes, procedures, and decision support.
- Strong understanding of relational data structures, theories, principles, and practices.
- Strong familiarity with metadata management and associated processes.
- Hands-on knowledge of enterprise repository tools, data modeling tools, data mapping tools, and data profiling tools.
- Demonstrated expertise with repository creation and with data and information system life cycle methodologies.
- Experience with business requirements analysis, entity relationship planning, database design, reporting structures, and so on.
- Ability to manage data and metadata migration.
- Experience with data processing flowcharting techniques.
- Hands-on experience (5 years) with Big Data technologies (Hadoop, MapReduce, MongoDB) and integration with legacy environments is preferred.
- Experience with Spark using Scala or Python is a big plus.
- Experience with cloud technologies (primarily AWS and Azure) and integration with existing on-premise data warehouse technologies; good knowledge of S3, Redshift, Blob Storage, Presto DB, etc.
- Willingness to learn and adopt emerging technologies.

Send resumes to girish.expertiz@gmail.com

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies