214 Metadata Management Jobs - Page 4

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a highly skilled Ab Initio Developer with over 6 years of total experience and at least 4 years of relevant experience. Your primary responsibilities will include leading the design, development, and optimization of ETL processes using Ab Initio's Graphical Development Environment (GDE). It is essential to ensure data accuracy, consistency, and availability throughout the data integration workflows. You will be tasked with building, maintaining, and optimizing data integration workflows to facilitate seamless data flow across various systems and platforms. Your expertise in designing intricate data transformations, data cleansing, and data enrichment logic within Ab Initio graphs will be critical.

Utilizing Ab Initio's metadata capabilities for documenting data lineage, transformations, and data definitions is essential to ensure transparency and compliance. Monitoring and optimizing Ab Initio ETL processes for efficiency, scalability, and performance will be part of your routine, and you must address and resolve any bottlenecks that arise. Developing robust error handling and logging mechanisms to track and manage ETL job failures and exceptions is crucial to maintaining the integrity of data processes.

Collaboration with cross-functional teams, including data engineers, data analysts, data scientists, and business stakeholders, is necessary; understanding requirements and ensuring successful delivery of data integration projects will be a key aspect of your role. Using version control systems such as Git to manage Ab Initio code and collaborate effectively with team members is essential. You will create and maintain comprehensive documentation of Ab Initio graphs, data integration processes, best practices, and standards for the team. You will also be responsible for investigating and resolving complex ETL-related issues, providing support to team members and users, and conducting root cause analysis when problems arise.

Overall, as an Ab Initio Developer, you will be a vital part of the data engineering team, contributing to the design, development, and maintenance of data integration and ETL solutions using Ab Initio's suite of tools.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Key Skills Required for the Data Modeler Role:
- Data Modeling Expertise: Ability to analyze and translate business needs into long-term data models.
- Metadata Management: Strong knowledge of metadata management and related tools.
- Machine Learning Experience: 5-8+ years of experience in machine learning in production.
- Statistical Analysis: Knowledge of mathematical foundations and statistical methods.
- Database Systems: Evaluating and optimizing existing data systems.
- Data Flow Design: Creating conceptual data models and data flows.
- Coding Best Practices: Developing best practices for data coding to ensure consistency.
- System Optimization: Updating and troubleshooting data systems for efficiency.
- Collaboration Skills: Working with cross-functional teams (Product Owners, Data Scientists, Engineers, Analysts, Developers, Architects).
- Technical Documentation: Preparing training materials, SOPs, and knowledge base articles.
- Communication & Presentation: Strong interpersonal, communication, and presentation skills.
- Multi-Stakeholder Engagement: Ability to work with multiple stakeholders in a multicultural environment.
- Data Modeling Certification: Desirable but not mandatory.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Data Governance Engineer at Aviatrix, you will play a crucial role in establishing and maintaining our data governance framework. Your responsibilities will involve designing and implementing an efficient and scalable data governance framework to ensure data quality, security, and compliance. You will work towards promoting best practices, enhancing data transparency, and fostering seamless cross-functional collaboration within the organization.

One of your key responsibilities will be to develop and enforce data quality and validation processes across various data sources, including MySQL, Snowflake, and other platforms, to maintain high standards of data reliability and usability. You will also be tasked with establishing and maintaining data cataloging practices to ensure complete data lineage across all sources.

Collaboration will be a significant aspect of your role: you will be the central point of contact for data governance, working closely with teams such as Data Engineering, Product Innovation, and Security to implement effective data governance practices. Furthermore, you will collaborate with DevOps and Security teams to ensure data security and compliance with relevant regulations like GDPR and CCPA by implementing appropriate access controls and permissions.

In addition, you will be responsible for creating and maintaining comprehensive documentation on data governance policies, data lineage, and best practices. Proactive monitoring of data pipelines and recommending tools and technologies that support data cataloging, quality monitoring, and metadata management will also be part of your role.

To excel in this position, you should have strong knowledge of SQL and experience with data warehousing platforms like Snowflake, AWS Redshift, and MySQL. Proficiency in Python for data analysis, scripting, and automation of data processes is essential. You should also possess knowledge of data quality frameworks, experience working with large-scale data in cloud environments, and familiarity with data governance tools and metadata management platforms. Ideally, you hold a Bachelor's degree in Information Systems, Computer Science, or a related field, along with at least 3 years of relevant experience in Data Governance, Data Engineering, or a similar role. Your total compensation package will be based on your job-related knowledge, education, certifications, and location, per the aligned ranges.

Aviatrix is a cloud networking expert dedicated to simplifying cloud networking for enterprises. If you are passionate about making a difference, growing your career, and being part of a dynamic community, we encourage you to apply. At Aviatrix, we value diversity and welcome candidates who bring unique perspectives and skills to our team. Your journey and background matter to us, and we are committed to helping you achieve your goals and unleash your full potential.
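By way of illustration, here is a minimal sketch of the kind of automated data-quality validation this role describes, assuming a pandas DataFrame extracted from a source such as MySQL or Snowflake; the table and column names are hypothetical:

```python
import pandas as pd

def run_basic_quality_checks(df: pd.DataFrame, key_column: str, required_columns: list[str]) -> dict:
    """Run simple completeness and uniqueness checks on a source extract."""
    results = {}
    # Completeness: fraction of nulls per required column.
    for col in required_columns:
        results[f"null_rate::{col}"] = float(df[col].isna().mean())
    # Uniqueness: duplicate keys break downstream joins and lineage.
    results["duplicate_keys"] = int(df[key_column].duplicated().sum())
    # Row count sanity check.
    results["row_count"] = int(len(df))
    return results

if __name__ == "__main__":
    # Hypothetical customer extract standing in for a MySQL/Snowflake table.
    sample = pd.DataFrame(
        {"customer_id": [1, 2, 2, 4], "email": ["a@x.com", None, "c@x.com", "d@x.com"]}
    )
    print(run_basic_quality_checks(sample, key_column="customer_id", required_columns=["email"]))
```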

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Work from Office

Description: The Data & Analytics Team is seeking a Data Engineer with a hybrid skillset in data integration and application development. This role is crucial for designing, engineering, governing, and improving our entire Data Platform, which serves customers, partners, and employees through self-service access. You'll demonstrate expertise in data and metadata management, data integration, data warehousing, data quality, machine learning, and core engineering principles.

Requirements:
• 5+ years of experience with system/data integration, development, or implementation of enterprise and/or cloud software.
• Strong experience with Web APIs (RESTful and SOAP).
• Strong experience setting up data warehousing solutions and associated pipelines, including ETL tools (preferably Informatica Cloud).
• Demonstrated proficiency with Python.
• Strong experience with data wrangling and query authoring in SQL and NoSQL environments for both structured and unstructured data.
• Experience in a cloud-based computing environment, specifically GCP.
• Expertise in documenting Business Requirement, Functional, and Technical documentation.
• Expertise in writing Unit & Functional Test Cases, Test Scripts, and Run Books.
• Expertise in incident management systems like Jira, ServiceNow, etc.
• Working knowledge of Agile software development methodology.
• Strong organizational and troubleshooting skills with attention to detail.
• Strong analytical ability, judgment, and problem analysis techniques.
• Excellent interpersonal skills with the ability to work effectively in a cross-functional team.

Job Responsibilities:
• Lead system/data integration, development, or implementation efforts for enterprise and/or cloud software.
• Design and implement data warehousing solutions and associated pipelines for internal and external data sources, including ETL processes.
• Perform extensive data wrangling and author complex queries in both SQL and NoSQL environments for structured and unstructured data.
• Develop and integrate applications, leveraging strong proficiency in Python and Web APIs (RESTful and SOAP); see the sketch after this listing.
• Provide operational support for the data platform and applications, including incident management.
• Create comprehensive Business Requirement, Functional, and Technical documentation.
• Develop Unit & Functional Test Cases, Test Scripts, and Run Books to ensure solution quality.
• Manage incidents effectively using systems like Jira, ServiceNow, etc.
• Prepare change management packages and implementation plans for migrations across different environments.
• Actively participate in Enterprise Risk Management processes.
• Work within an Agile software development methodology, contributing to team success.
• Collaborate effectively within cross-functional teams.

What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can have coffee or tea with your colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!
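As an illustration of the Web API and Python requirements above, here is a minimal sketch of pulling records from a paginated REST endpoint and staging them for a warehouse load; the endpoint URL, pagination scheme, and output path are assumptions, not part of the posting:

```python
import json
import requests

# Hypothetical REST endpoint; real pipelines here would load into a warehouse
# (e.g., via Informatica Cloud or a GCP service) rather than a local file.
API_URL = "https://api.example.com/v1/orders"

def fetch_orders(page_size: int = 100) -> list[dict]:
    """Pull all pages from a paginated REST API."""
    records, page = [], 1
    while True:
        resp = requests.get(API_URL, params={"page": page, "page_size": page_size}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

if __name__ == "__main__":
    orders = fetch_orders()
    # Newline-delimited JSON is a convenient staging format for warehouse loads.
    with open("orders.ndjson", "w", encoding="utf-8") as fh:
        for rec in orders:
            fh.write(json.dumps(rec) + "\n")
    print(f"staged {len(orders)} records")
```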

Posted 2 weeks ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Hyderabad

Work from Office

Details of the role: 8 to 10 years of experience as an Informatica Admin (IICS).

Key responsibilities:
- Understand the program's service catalog and document the list of tasks to be performed for each.
- Lead the design, development, and maintenance of ETL processes to extract, transform, and load data from various sources into our data warehouse.
- Implement best practices for data loading, ensuring optimal performance and data quality.
- Utilize your expertise in IDMC to establish and maintain data governance, data quality, and metadata management processes.
- Implement data controls to ensure compliance with data standards, security policies, and regulatory requirements.
- Collaborate with data architects to design and implement scalable and efficient data architectures that support business intelligence and analytics requirements.
- Work on data modeling and schema design to optimize database structures for ETL processes.
- Identify and implement performance optimization strategies for ETL processes, ensuring timely and efficient data loading.
- Troubleshoot and resolve issues related to data integration and performance bottlenecks.
- Collaborate with cross-functional teams, including data scientists, business analysts, and other engineering teams, to understand data requirements and deliver effective solutions.
- Provide guidance and mentorship to junior members of the data engineering team.
- Create and maintain comprehensive documentation for ETL processes, data models, and data flows, and keep it up to date with any changes to data architecture or ETL workflows.
- Use Jira for task tracking and project management.
- Implement data quality checks and validation processes to ensure data integrity and reliability.
- Maintain detailed documentation of data engineering processes and solutions.

Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Senior ETL Data Engineer, with a focus on IDMC/IICS.
- Strong proficiency in ETL tools and frameworks (e.g., Informatica Cloud, Talend, Apache NiFi).
- Expertise in IDMC principles, including data governance, data quality, and metadata management.
- Solid understanding of data warehousing concepts and practices.
- Strong SQL skills and experience working with relational databases.
- Excellent problem-solving and analytical skills.

Posted 2 weeks ago

Apply

9.0 - 14.0 years

15 - 19 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with 9+ years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a good understanding of open table formats such as Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 2 weeks ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with 9+ years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a good understanding of open table formats such as Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 2 weeks ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Bengaluru

Work from Office

About the Role: We are looking for an Associate Architect with 9+ years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a good understanding of open table formats such as Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build or enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience in building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Kolkata

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Informatica Data Quality
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and implement Informatica Data Quality solutions.
- Collaborate with cross-functional teams to analyze and address data quality issues.
- Create and maintain documentation for data quality processes.
- Participate in data quality improvement initiatives.
- Assist in training junior professionals in data quality best practices.

Professional & Technical Skills:
- Must-have skills: Proficiency in Informatica Data Quality.
- Strong understanding of data quality principles.
- Experience with ETL processes and data integration.
- Knowledge of data profiling and cleansing techniques.
- Familiarity with data governance and metadata management.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Informatica Data Quality.
- This position is based at our Kolkata office.
- A 15 years full-time education is required.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Job Title: Data Governance & Quality Specialist
Experience: 3-8 Years
Location: Bangalore (Hybrid)
Domain: Financial Services
Notice Period: Immediate to 30 Days

What You'll Do:
- Define and enforce data-governance policies (BCBS 239/GDPR) across credit-risk datasets.
- Design, monitor, and report on data-quality KPIs; perform profiling and root-cause analysis in SAS/SQL (see the sketch after this listing).
- Collaborate with data stewards, risk teams, and auditors to remediate data issues.
- Develop governance artifacts: data-lineage maps, stewardship RACI, council presentations.

Must Have:
- 3-8 years in data-governance or data-quality roles (financial services).
- Advanced SAS for data profiling and reporting; strong SQL skills.
- Hands-on experience with governance frameworks and regulatory requirements.
- Excellent stakeholder-management and documentation abilities.

Nice to Have:
- Experience with Collibra, Informatica, or Talend.
- Exposure to credit-risk model inputs (PD/LGD/EAD).
- Automation via SAS macros or Python scripting.

If interested, please share your resume with sunidhi.manhas@portraypeople.com.
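As a sketch of the profiling and KPI reporting described above (the role names SAS/SQL, with Python scripting only as a nice-to-have), the example below computes completeness and validity KPIs in SQL from Python against an in-memory SQLite table; the table, columns, and thresholds are hypothetical:

```python
import sqlite3

# Hypothetical credit-risk extract; column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loan_exposures (loan_id TEXT, pd REAL, lgd REAL, ead REAL);
    INSERT INTO loan_exposures VALUES
        ('L1', 0.02, 0.45, 100000),
        ('L2', NULL, 0.40, 250000),
        ('L3', 0.15, 1.30, 50000);
""")

# Completeness KPI: share of rows with a populated PD value.
completeness = conn.execute(
    "SELECT 100.0 * SUM(pd IS NOT NULL) / COUNT(*) FROM loan_exposures"
).fetchone()[0]

# Validity KPI: share of rows whose LGD falls inside the expected 0-1 range.
validity = conn.execute(
    "SELECT 100.0 * SUM(lgd BETWEEN 0 AND 1) / COUNT(*) FROM loan_exposures"
).fetchone()[0]

print(f"PD completeness: {completeness:.1f}% | LGD validity: {validity:.1f}%")
```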

Posted 2 weeks ago

Apply

5.0 - 10.0 years

9 - 19 Lacs

Bengaluru

Work from Office

Job Title: Data Governance Specialist
Experience: 5-7 Years
Location: Bangalore, India
Domain: Financial Services
Notice Period: Immediate to 30 Days

Job Description: We are seeking a skilled Data Governance Specialist to join our data management team in Bangalore. This role will focus on implementing and maintaining data governance frameworks, ensuring high-quality data assets, and enabling consistent use of metadata across the organization.

Key Responsibilities:
- Establish and maintain data governance policies, standards, and processes.
- Develop and manage the enterprise data glossary and metadata repositories.
- Monitor and improve data quality metrics, ensuring accuracy and consistency across systems.
- Work closely with business and technical teams to ensure data lineage and traceability.
- Support Agile delivery using tools like JIRA and Confluence.
- Collaborate across departments to promote data stewardship and governance awareness.

Key Requirements:
- 5-7 years of experience in data governance, metadata management, or data quality roles.
- Strong knowledge of data glossary, lineage, and metadata practices.
- Experience working in Agile environments; familiarity with JIRA and Confluence.
- Excellent communication and stakeholder management skills.
- Prior experience in the financial services or banking domain is preferred.

Preferred Skills:
- Knowledge of data governance tools (e.g., Collibra, Informatica, Alation) is a plus.
- Understanding of regulatory data requirements (e.g., BCBS 239, GDPR) is an advantage.

Keywords: data governance, data glossary, metadata management, data quality, Agile, JIRA, Confluence.

If interested, please share your resume with sunidhi.manhas@portraypeople.com.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

19 - 25 Lacs

Bengaluru

Work from Office

In this role, you will play a key part in designing, building, and optimizing scalable data products within the Telecom Analytics domain. You will collaborate with cross-functional teams to implement AI-driven analytics, autonomous operations, and programmable data solutions. This position offers the opportunity to work with cutting-edge Big Data and Cloud technologies, enhance your data engineering expertise, and contribute to advancing Nokia's data-driven telecom strategies. If you are passionate about creating innovative data solutions, mastering cloud and big data platforms, and working in a fast-paced, collaborative environment, this role is for you!

You have:
- Bachelor's or master's degree in computer science, Data Engineering, or a related field, with 8+ years of experience in data engineering focused on Big Data, Cloud, and Telecom Analytics.
- Hands-on expertise in Ab Initio for data cataloguing, metadata management, and lineage.
- Skills in data warehousing, OLAP, and modelling using BigQuery, ClickHouse, and SQL.
- Experience with data persistence technologies like S3, HDFS, and Iceberg.
- Hands-on experience with Python and scripting languages.

It would be nice if you also had:
- Experience with data exploration and visualization using Superset or BI tools.
- Knowledge of ETL processes and streaming tools such as Kafka.
- Background in building data products for the telecom domain and understanding of AI and machine learning pipeline integration.

Responsibilities:
- Data Governance: Manage source data within the Metadata Hub and Data Catalog.
- ETL Development: Develop and execute data processing graphs using Express>It and the Co>Operating System.
- ETL Optimization: Debug and optimize data processing graphs using the Graphical Development Environment (GDE).
- API Integration: Leverage Ab Initio APIs for metadata and graph artifact management.
- CI/CD Implementation: Implement and maintain CI/CD pipelines for metadata and graph deployments.
- Team Leadership & Mentorship: Mentor team members and foster best practices in Ab Initio development and deployment.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

4 - 8 Lacs

Mumbai

Work from Office

Subject matter experts in Marketing and Comms provide business stakeholders with specialized advice on their subjects and act as advisors leveraging specific MC expertise. She/he is a person with in-depth, unique knowledge and expertise on a specific subject or in a particular industry (e.g., digital marketing, internal comms, telecom).

- Familiarity with metadata management and tagging best practices.
- Exceptional attention to detail, with a strong ability to spot errors and inconsistencies in large datasets or digital assets.
- Strong analytical skills, with the ability to identify data quality issues and root causes and implement corrective actions.
- Ability to work effectively with cross-functional teams, including marketing, creative, IT, and product teams, to resolve data issues and ensure alignment across the organization.
- Strong problem-solving skills to address data discrepancies, identify issues within workflows, and propose effective solutions.
- Proven track record of optimizing data management processes, improving workflows, and implementing data quality initiatives.

Primary Skills:
- 4-6 years of experience in digital asset management, with a focus on maintaining data accuracy and consistency across systems.
- 2+ years with at least one Digital Asset Management tool such as Sitecore, Aprimo, AEM, or Veeva.

Secondary Skills:
- Familiarity with data validation tools, reporting platforms (e.g., Excel, Power BI), and basic SQL or query languages for managing and analyzing data.
- Excellent written and verbal communication skills, with the ability to document processes, provide training, and explain data issues clearly to both technical and non-technical stakeholders.

Posted 3 weeks ago

Apply

5.0 - 6.0 years

8 - 10 Lacs

Mumbai

Work from Office

We are seeking a skilled SAS Administrator with at least 5 years of experience in managing SAS platforms, including installation, configuration, and administration. The ideal candidate should have hands-on expertise in SAS Viya 3.4, SAS Viya 3.5, SAS Viya 4, SAS Management Console (SMC), and server-level configurations. Experience working in government or large enterprise environments is preferred.

Key Responsibilities:
- Perform installation, configuration, and maintenance of SAS 9.x and SAS Viya 4 on Linux/Unix server environments.
- Manage SAS Environment Manager, SAS Management Console (SMC), metadata administration, and user/group/role permissions.
- Monitor and tune system performance, ensuring platform availability and integrity.
- Administer SAS server security, backups, and patching.
- Collaborate with IT teams to troubleshoot server-level or configuration issues.
- Perform regular upgrades, migrations, and license updates.
- Coordinate with SAS Tech Support for critical issues or escalations.
- Prepare and maintain technical documentation and SOPs.

Required Skills:
- Minimum 5 years of hands-on experience in SAS Platform Administration.
- Strong experience in SAS Viya 4 administration and traditional SAS (9.x).
- Good knowledge of SAS SMC, metadata management, and server architecture.
- Experience in installation and configuration on Linux/Unix environments.
- Familiarity with security setup, resource management, and system health monitoring.
- Knowledge of shell scripting is a plus.

Preferred Qualifications:
- BE/B.Tech/M.Tech/MCA/MSc (Statistics).
- Prior experience working with government or public sector clients is a plus.
- SAS certifications (e.g., SAS Certified Platform Administrator) are a bonus.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram, Delhi / NCR

Work from Office

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python and PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & Responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark (see the sketch after this listing).
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations.
- Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output.
- Build pipelines that are not just performant but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null percentages, ranges, and referential rules.
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging.
- BI readiness with Sigma, with exposure to Power BI/Tableau (nice to have).
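For illustration, a minimal PySpark sketch of the bronze-to-silver step such a Glue job might contain: deduplication with a window function, simple null and range checks before promotion, and a partitioned Parquet write. The S3 paths and column names are assumptions, and the validation shown in plain PySpark stands in for the Great Expectations suites the role mentions:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("orders-silver-sketch").getOrCreate()

# Hypothetical bronze-zone input; in a Glue job this would be an S3 path or catalog table.
orders = spark.read.parquet("s3://example-lake/bronze/orders/")

# Deduplicate to the latest record per order_id using a window function.
latest = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
silver = (
    orders
    .withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Lightweight validation before promoting to the silver zone: null % and range checks.
total = silver.count()
null_amount = silver.filter(F.col("order_amount").isNull()).count()
negative_amount = silver.filter(F.col("order_amount") < 0).count()
print(f"null order_amount: {100.0 * null_amount / max(total, 1):.2f}% | negative rows: {negative_amount}")

silver.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-lake/silver/orders/")
```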

Posted 3 weeks ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram

Hybrid

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python and PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & Responsibilities:
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark.
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations.
- Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output.
- Build pipelines that are not just performant but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch.
- Manage schemas and metadata using the AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null percentages, ranges, and referential rules.
- Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile:
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing.
- Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging.
- BI readiness with Sigma, with exposure to Power BI/Tableau (nice to have).

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 15 Lacs

Pune

Work from Office

Ab Initio Admin

Must-Have Skills: code deployment; Ab Initio Product Suite installation (Co>Operating System, Express>It, EME, Control Center, Authorization Gateway, Metadata Hub); Linux commands; shell scripting; Ab Initio environment administration; MFS file system.

Required Candidate Profile (Good-to-Have Skills): Networking and Security, ServiceNow, Jenkins, Ansible, exposure to Ab Initio Product Suite upgrades, Oracle GoldenGate, Oracle, Grafana/Kibana.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

We are looking for a skilled SAP Data Intelligence Application Support Specialist with 8 to 15 years of experience. The ideal candidate will have a strong background in managing complex data integration and providing advanced technical support for the SAP Data Intelligence platform.

Roles and Responsibilities:
- Provide advanced technical support for the SAP Data Intelligence platform, managing complex data integration.
- Resolve incidents and manage problems, developing comprehensive root cause analyses and implementing proactive solutions.
- Assist in designing, configuring, and maintaining the SAP Data Intelligence architecture, collaborating with cross-functional teams to optimize data integration strategies.
- Conduct regular system performance assessments, implement monitoring tools, and develop optimization strategies.
- Ensure robust security protocols within the SAP Data Intelligence environment, implementing and maintaining data access controls.
- Manage complex data source integrations, including enterprise systems, cloud platforms, databases, and applications.

Job Requirements:
- Minimum 8 years of experience in a related field.
- Strong knowledge of SAP Data Intelligence, metadata management, and system performance.
- Experience in incident resolution, problem management, and root cause analysis.
- Ability to design, configure, and maintain SAP Data Intelligence architecture.
- Strong understanding of data flow transformation, governance, and scalability.
- Excellent communication and collaboration skills.

Posted 3 weeks ago

Apply

5.0 - 12.0 years

25 - 30 Lacs

Mumbai

Work from Office

Job Summary / Objective: Act as a strategic advisor and engagement lead, providing executive oversight and direction for the client's OCC-driven data remediation initiatives. Ensure alignment of data management, governance, and quality improvement strategies with regulatory requirements and business objectives.

Key Responsibilities / Duties:
- Define and communicate the strategic vision for data governance remediation to client executives.
- Guide the client in modernizing data architecture, risk aggregation, and regulatory reporting processes.
- Advise on development and enforcement of enterprise-wide data policies, standards, and controls.
- Support executive and Board-level reporting and engagement with the OCC or other regulators.
- Lead efforts to foster a culture of data accountability and continuous improvement within the client organization.

Required Skill Sets & Requirements:
Enterprise Data Analysis and Management:
- Extensive experience designing and implementing data analysis and management programs in large financial institutions.
- Strong understanding of data quality metrics, master data management, and metadata management.
Regulatory & Risk Management:
- Experience in operational risk domains including, but not limited to, data risk, fraud risk, tech risk, cyber risk, operational resiliency risk, third-party risk, processing risk, services and enterprise ops risk, regulatory management reporting, and financial statement reporting risk.
- Responsibilities include requirements gathering, data acquisition, data quality assessment, and building risk monitoring tools.
- Deep knowledge of regulatory frameworks (BCBS 239) and experience supporting regulatory remediation.
Technical & Analytical:
- Programming proficiency in Python, SQL, and reporting tools like Tableau, Power BI, and Jira.
- Experience guiding IT modernization, system integration, and process optimization.
- Advanced problem-solving, decision-making, and client advisory skills.
Communication & Board Reporting:
- Excellent communication, negotiation, and presentation skills with demonstrated experience in Board-level engagement.

Qualifications:
- Master's or advanced degree preferred.
- 12+ years' experience in consulting or executive roles in financial services.
- Professional certifications (CDMP, PMP) highly desirable.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

12 - 16 Lacs

Mumbai

Work from Office

Operational Risk Data Management

Job Summary / Objective: Act as a strategic advisor and engagement lead, providing executive oversight and direction for the client's OCC-driven data remediation initiatives. Ensure alignment of data management, governance, and quality improvement strategies with regulatory requirements and business objectives.

Key Responsibilities / Duties:
- Define and communicate the strategic vision for data governance remediation to client executives.
- Guide the client in modernizing data architecture, risk aggregation, and regulatory reporting processes.
- Advise on development and enforcement of enterprise-wide data policies, standards, and controls.
- Support executive and Board-level reporting and engagement with the OCC or other regulators.
- Lead efforts to foster a culture of data accountability and continuous improvement within the client organization.

Required Skill Sets & Requirements:
Enterprise Data Analysis and Management:
- Extensive experience designing and implementing data analysis and management programs in large financial institutions.
- Strong understanding of data quality metrics, master data management, and metadata management.
Regulatory & Risk Management:
- Experience in operational risk domains including, but not limited to, data risk, fraud risk, tech risk, cyber risk, operational resiliency risk, third-party risk, processing risk, services and enterprise ops risk, regulatory management reporting, and financial statement reporting risk.
- Responsibilities include requirements gathering, data acquisition, data quality assessment, and building risk monitoring tools.
- Deep knowledge of regulatory frameworks (BCBS 239) and experience supporting regulatory remediation.
Technical & Analytical:
- Programming proficiency in Python, SQL, and reporting tools like Tableau, Power BI, and Jira.
- Experience guiding IT modernization, system integration, and process optimization.
- Advanced problem-solving, decision-making, and client advisory skills.
Communication & Board Reporting:
- Excellent communication, negotiation, and presentation skills with demonstrated experience in Board-level engagement.

Qualifications:
- Master's or advanced degree preferred.
- 6+ years' experience in consulting or executive roles in financial services.
- Professional certifications (CDMP, PMP) highly desirable.
- ORM Level 1 Support experience required.
- Indian passport with 1 year validity mandatory.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

11 - 20 Lacs

Pune

Hybrid

A Day in the Life: Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes. Medtronic is hiring a Senior Data Governance Engineer. As a senior engineer, you will be a key member of our Data Governance team, defining scope, assessing current data governance capabilities, and building roadmaps to mature data governance capabilities. This role offers a dynamic opportunity to join Medtronic's Diabetes business. Medtronic has announced its intention to separate the Diabetes division to promote future growth and innovation within the business and reallocate investments and resources across Medtronic, subject to applicable information and consultation requirements. While you will start your employment with Medtronic, upon establishment of SpinCo or the transition of the Diabetes business to another company, your employment may transfer to either SpinCo or the other company, at Medtronic's discretion and subject to any applicable information and consultation requirements in your jurisdiction.

Responsibilities may include the following, and other duties may be assigned:
- Data Governance Strategy Development: responsible for the strategic development, architectural definition, and enterprise-wide data governance and data management initiatives supporting the delivery of data as a service.
- Provide Data Governance and Data Management advisory expertise.
- Identify and evaluate metadata platform alternatives, develop the logical metadata framework architecture, and define and implement metadata maintenance processes.
- Define Data Governance operating models, roles, and responsibilities in collaboration with Medtronic requirements.
- Assess and select the Data Management tools landscape, including data profiling, data quality, metadata, MDM, rules engines, and pattern matching.
- Refine and develop the data-centric approach and methodologies, which may include areas such as data strategy, data governance, data lineage, analytics, business intelligence, data architecture, data quality, master data management, and data integration and delivery.
- Assess current-state capabilities, identify gaps versus leading practices, and recommend future-state data requirements.
- Identify opportunities for new workflows, third-party glossary/metadata tools, database architectures, and third-party pre-existing data models.
- Work closely with business and technology stakeholders to assist in understanding and documenting data requirements and lineage, partnering with IT data integration and data warehouse teams to ensure requirements are effectively executed.
- Help define data modeling naming standards, abbreviations, guidelines, and best practices.
- Enhance or design data model review processes based on business requirements.

Minimum Qualifications:
- At least 5 years of experience developing and structuring an enterprise-wide data governance organization and business process (operating models, roles, partner organizations, responsibilities).
- Hands-on experience on both the business side and the IT side implementing or supporting MDM and/or Data Warehouse and Reporting IT solutions.
- Strong business knowledge of the investment management industry and common data management operations.
- Broad understanding of the role of data management within global markets organizations, information flow, and data governance issues.
- Domain expertise in specific areas of Data Management such as data strategy, data governance, data lineage, analytics, business intelligence, data architecture, data quality, master data management, and data integration and delivery.

Note: Candidates who can join immediately or are serving their notice period are preferred. If interested, please share your updated CV at ashwini.ukekar@medtronic.com.

Posted 3 weeks ago

Apply

6.0 - 7.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary:
Experience: 5-8 Years
Location: Bangalore

Contribute to building state-of-the-art data platforms in AWS, leveraging Python and Spark. Be part of a dynamic team building data solutions in a supportive and hybrid work environment. This role is ideal for an experienced data engineer looking to step into a leadership position while remaining hands-on with cutting-edge technologies. You will design, implement, and optimize ETL workflows using Python and Spark, contributing to our robust data Lakehouse architecture on AWS. Success in this role requires technical expertise, strong problem-solving skills, and the ability to collaborate effectively within an agile team.

Must-Have Tech Skills:
- Demonstrable experience as a senior data engineer.
- Expert in Python and Spark, with a deep focus on ETL data processing and data engineering practices.
- Experience implementing data pipelines using tools such as EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena (see the sketch after this listing).
- Experience with data services in a Lakehouse architecture.
- Good background and proven experience in data modelling for data platforms.

Nice-to-Have Tech Skills:
- A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous.

Key Accountabilities:
- Provides guidance on best practices in design, development, and implementation, ensuring solutions meet business requirements and technical standards.
- Works closely with architects, Product Owners, and dev team members to decompose solutions into Epics, leading design and planning of these components.
- Drives the migration of existing data processing workflows to the Lakehouse architecture, leveraging Iceberg capabilities.
- Communicates complex technical information clearly, tailoring messages to the appropriate audience to ensure alignment.

Key Skills:
- Deep technical knowledge of data engineering solutions and practices.
- Implementation of data pipelines using AWS data services and Lakehouse capabilities.
- Highly proficient in Python and Spark, and familiar with a variety of development technologies.
- Skilled in decomposing solutions into components (Epics, stories) to streamline development.
- Proficient in creating clear, comprehensive documentation.
- Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation.
- Previous Financial Services experience delivering data solutions against financial and market reference data.
- Solid grasp of Data Governance and Data Management concepts, including metadata management, master data management, and data quality.

Educational Background: Bachelor's degree in computer science, Software Engineering, or a related field is essential.

Bonus Skills: A working knowledge of indices, index construction, and asset management principles.
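As a sketch of working with one of the AWS services listed above, the example below runs an Athena query from Python with boto3 and polls for completion. It assumes AWS credentials are configured, and the database, table, and results bucket are placeholders rather than anything named in the posting:

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical query against a catalogued Lakehouse table.
query = """
    SELECT trade_date, COUNT(*) AS trades
    FROM market_data.trades
    GROUP BY trade_date
    ORDER BY trade_date DESC
    LIMIT 10
"""

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "market_data"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Poll until the query finishes, then print the first page of results.
query_id = run["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```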

Posted 3 weeks ago

Apply

8.0 - 10.0 years

20 - 35 Lacs

Greater Noida

Work from Office

We are seeking a seasoned Informatica CDGC expert to work with the Informatica team and lead the implementation and optimization of Informatica Cloud Data Governance and Catalog solutions. The ideal candidate will establish best practices, drive data governance initiatives, and mentor a team of data professionals to ensure a scalable and efficient governance framework aligned with business objectives.

Roles and Responsibilities:
- Lead the end-to-end implementation of Informatica Cloud Data Governance and Catalog (CDGC) solutions, ensuring timely and high-quality delivery.
- Design, configure, and deploy data governance frameworks using Informatica CDGC, aligned with organizational standards and compliance requirements.
- Develop and implement best practices for metadata management, data lineage, data quality, and stewardship within the Informatica CDGC environment.
- Collaborate with cross-functional teams, including data architects, engineers, analysts, and business stakeholders, to drive data governance adoption.
- Provide expert guidance on data governance policies, workflows, and tool utilization to maximize the value of Informatica CDGC.
- Mentor and coach team members on technical and governance aspects of Informatica CDGC, fostering skill development and knowledge sharing.
- Troubleshoot and resolve complex technical issues related to Informatica CDGC deployment and integrations.
- Stay current with Informatica CDGC product updates, industry trends, and data governance best practices to continuously enhance governance capabilities.
- Create and maintain documentation, including architecture diagrams, configuration guides, and training materials.
- Support audit and compliance activities related to data governance and metadata management.

Requirements:
- Proven experience working with Informatica Data Governance and Catalog tools, preferably Cloud Data Governance and Catalog (CDGC).
- Strong understanding of data governance concepts, metadata management, data lineage, and data quality principles.
- Hands-on experience implementing and configuring Informatica CDGC solutions in enterprise environments.
- Proficiency with ETL/ELT processes, metadata integration, and data cataloging.
- Solid knowledge of data management frameworks and regulatory compliance (e.g., GDPR, CCPA).
- Excellent problem-solving and analytical skills with the ability to mentor and lead a team.
- Strong communication skills with experience working across technical and business stakeholders.
- Ability to create and deliver training sessions, workshops, and detailed technical documentation.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Collibra Data Quality & Observability
Good to have skills: Collibra Data Governance
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are functioning optimally. You will also engage in problem-solving activities, providing support and enhancements to existing applications while maintaining a focus on quality and efficiency.

Key Responsibilities:
- Configure and implement Collibra Data Quality (CDQ) rules, workflows, dashboards, and data quality scoring metrics.
- Collaborate with data stewards, data owners, and business analysts to define data quality KPIs and thresholds.
- Develop data profiling and rule-based monitoring using CDQ's native rule engine or integrations (e.g., with Informatica, Talend, or BigQuery).
- Build and maintain Data Quality Dashboards and Issue Management workflows within Collibra.
- Integrate CDQ with Collibra Data Intelligence Cloud for end-to-end governance visibility.
- Drive root cause analysis and remediation plans for data quality issues.
- Support metadata and lineage enrichment to improve data traceability.
- Document standards, rule logic, and DQ policies in the Collibra Catalog.
- Conduct user training and promote data quality best practices across teams.

Required Skills and Experience:
- 3+ years of experience in data quality, metadata management, or data governance.
- Hands-on experience with the Collibra Data Quality & Observability (CDQ) platform.
- Knowledge of Collibra Data Intelligence Cloud, including Catalog, Glossary, and Workflow Designer.
- Proficiency in SQL and understanding of data profiling techniques.
- Experience integrating CDQ with enterprise data sources (Snowflake, BigQuery, Databricks, etc.).
- Familiarity with data governance frameworks and data quality dimensions (accuracy, completeness, consistency, etc.).
- Excellent analytical, problem-solving, and communication skills.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Collibra Data Quality & Observability.
- This position is based in Mumbai.
- A 15 years full-time education is required.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Responsibilities:

Data Quality Implementation & Monitoring (Acceldata & DemandTools):
- Design, develop, and implement data quality rules and checks using Acceldata to monitor data accuracy, completeness, consistency, and timeliness.
- Configure and utilize Acceldata to profile data, identify anomalies, and establish data quality thresholds.
- Investigate and resolve data quality issues identified by Acceldata, working with relevant teams for remediation.
- Leverage DemandTools within our Salesforce environment to identify, merge, and prevent duplicate records across Leads, Contacts, and Accounts (see the sketch after this listing).
- Implement data standardization and cleansing processes within Salesforce using DemandTools.
- Develop and maintain data quality dashboards and reports using Acceldata to provide visibility into data health.

Data Onboarding & Integration Quality:
- Collaborate with engineers and platform teams to understand data sources and pipelines built using Fivetran or a similar ingestion tool.
- Validate data transformations within Fivetran to maintain data integrity and quality.
- Develop and execute test plans and test cases to validate the successful and accurate onboarding of data into our Snowflake environment.

Metadata Management & Data Governance:
- Work with the Atlan platform to understand and contribute to the establishment of a comprehensive data catalog.
- Assist in defining and implementing data governance policies and standards within Atlan.
- Validate the accuracy and completeness of metadata within Atlan to ensure data discoverability and understanding.
- Collaborate on data lineage tracking and impact analysis using Atlan.

Collaboration & Communication:
- Work closely with data engineers, the platform team, data analysts, business stakeholders, and Salesforce administrators.
- Clearly communicate data quality findings, risks, and remediation steps.
- Participate in data governance meetings and contribute to the development of data quality best practices.
- Document data quality rules, processes, and monitoring procedures.

Required Skills & Experience:
- Proven experience (e.g., 3+ years) as a Data Quality Engineer or in a similar role.
- Hands-on experience with Fivetran or a similar data ingestion application, including an understanding of its data transformation capabilities.
- Familiarity with Atlan or other modern data catalog and metadata management tools.
- Strong practical experience with Acceldata or similar data quality monitoring and observability platforms.
- Familiarity with DemandTools for data quality management within Salesforce.
- Solid understanding of data quality principles, methodologies, and best practices.
- Strong SQL skills for data querying and analysis.
- Experience with data profiling and data analysis techniques.
- Excellent analytical, problem-solving, and troubleshooting skills.
- Strong communication and collaboration skills.
- Ability to work independently and manage tasks effectively in a remote environment.

Preferred Skills & Experience:
- Experience with other data quality tools or frameworks.
- Knowledge of data warehousing concepts and technologies (e.g., Snowflake, BigQuery).
- Experience with scripting languages like Python for data manipulation and automation.
- Familiarity with the Salesforce data model and administration.
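For illustration, a minimal pandas sketch of the duplicate-identification idea that DemandTools automates inside Salesforce: normalize a matching key, then flag groups of records sharing it. The contact data and field names here are hypothetical and stand in for an exported Salesforce object:

```python
import pandas as pd

# Hypothetical export of Salesforce contacts; DemandTools works inside Salesforce itself,
# so this only sketches the matching idea (normalize, then group on a candidate key).
contacts = pd.DataFrame(
    {
        "Id": ["003A", "003B", "003C"],
        "Email": ["Jane.Doe@Example.com", "jane.doe@example.com ", "sam@corp.io"],
        "LastName": ["Doe", "Doe", "Patel"],
    }
)

# Normalize the matching key before grouping.
contacts["email_key"] = contacts["Email"].str.strip().str.lower()

# Flag groups with more than one record as merge candidates.
dupes = contacts[contacts.duplicated("email_key", keep=False)].sort_values("email_key")
print(dupes[["Id", "Email", "LastName"]])
```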

Posted 3 weeks ago

Apply