
3 PHI Jobs

JobPe aggregates results for easy browsing, but applications are submitted directly on the original job portal.

3.0 - 6.0 years

5 - 8 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office

Source: Naukri

Role Overview
As a Data Governance Developer at Kanerika, you will be responsible for developing and managing robust metadata, lineage, and compliance frameworks using Microsoft Purview and other leading tools. You'll work closely with engineering and business teams to ensure data integrity, regulatory compliance, and operational transparency.

Key Responsibilities
• Set up and manage Microsoft Purview: accounts, collections, RBAC, and policies.
• Integrate Purview with Azure Data Lake, Synapse, SQL DB, Power BI, and Snowflake.
• Schedule and monitor metadata scanning, classification, and lineage-tracking jobs.
• Build ingestion workflows for technical, business, and operational metadata.
• Tag, enrich, and organize assets with glossary terms and metadata.
• Automate lineage, glossary, and scanning processes via REST APIs, PowerShell, ADF, and Logic Apps.
• Design and enforce classification rules for PII, PCI, and PHI.
• Collaborate with domain owners on glossary and metadata quality governance.
• Generate compliance dashboards and lineage maps in Power BI.

Tools & Technologies
• Governance Platforms: Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM IG Catalog
• Integration Tools: Azure Data Factory, dbt, Talend
• Automation & Scripting: PowerShell, Azure Functions, Logic Apps, REST APIs
• Compliance Areas in Purview: Sensitivity Labels, Policy Management, Auto-labeling, Data Loss Prevention (DLP), Insider Risk Management, Records Management, Compliance Manager, Lifecycle Management, eDiscovery, Audit, DSPM, Information Barriers, Unified Catalog

Required Qualifications
• 4–6 years of experience in Data Governance / Data Management.
• Hands-on experience with Microsoft Purview, especially lineage and classification workflows.
• Strong understanding of metadata management, glossary governance, and data classification.
• Familiarity with Azure Data Factory, dbt, and Talend.
• Working knowledge of data compliance regulations: GDPR, CCPA, SOX, and HIPAA.
• Strong communication skills to collaborate across technical and non-technical teams.
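The responsibilities above include automating scan jobs via the Purview REST APIs. A minimal Python sketch of triggering a scan run is shown below; the endpoint shape and api-version are assumptions based on the public Purview scanning data-plane API (verify against the current REST reference), and the account, data source, and scan names are placeholders:

```python
# Hedged sketch: start a Microsoft Purview scan run over the scanning REST API.
# Endpoint shape and api-version are assumptions -- check the current Purview
# REST API reference before relying on them.
import uuid
import urllib.request

API_VERSION = "2022-02-01-preview"  # assumed version string


def build_scan_run_url(account: str, datasource: str, scan: str, run_id: str) -> str:
    """Build the PUT URL that starts a scan run on a registered data source."""
    return (
        f"https://{account}.purview.azure.com/scan"
        f"/datasources/{datasource}/scans/{scan}"
        f"/runs/{run_id}?api-version={API_VERSION}"
    )


def trigger_scan(account: str, datasource: str, scan: str, token: str) -> str:
    """Start a scan run and return its id.

    `token` is assumed to be an Azure AD bearer token with data-plane
    rights on the Purview account (e.g. obtained via azure-identity).
    """
    run_id = str(uuid.uuid4())
    req = urllib.request.Request(
        build_scan_run_url(account, datasource, scan, run_id),
        method="PUT",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:  # real network call; needs credentials
        resp.read()
    return run_id
```

In practice a Logic App or Azure Function would call `trigger_scan` on a schedule and poll the run status from the same API family.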

Posted 3 days ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Remote

Source: Naukri

Design, develop, and implement Salesforce Health Cloud solutions; integrate with external systems; support call-center technology; ensure compliance; manage enhancements; and communicate with business teams to align technology with patient-service goals.

Required Candidate Profile
Experienced Salesforce Health Cloud developer (5–7 years) with Platform Developer II certification, Apex/Visualforce expertise, integration experience, pharma/healthcare domain understanding, and an Agile development background.

Posted 1 month ago

Apply

10 - 20 years

30 - 40 Lacs

Chennai, Bengaluru

Hybrid

Source: Naukri

Title: Sr Data and MLOps Engineer
Location: Hybrid (Bangalore/Chennai/Trichy)

Description:
• Experience within the Azure ecosystem, including Azure AI Search, Azure Storage Blob, and Azure Postgres, with expertise in leveraging these tools for data processing, storage, and analytics tasks.
• Proficiency in preprocessing and cleaning large datasets efficiently using Azure tools, Python, and other data manipulation tools.
• Strong background in Data Science/MLOps, with hands-on experience in DevOps, CI/CD, Azure cloud computing, and model monitoring.
• Expertise in healthcare data standards such as HIPAA and FHIR, with a deep understanding of sensitive-data handling and data-masking techniques to protect PII and PHI.
• In-depth knowledge of search algorithms, indexing techniques, and retrieval models for effective information retrieval. Experience with chunking techniques and with vectors and vector databases such as Pinecone.
• Ability to design, develop, and maintain scalable data pipelines for processing and transforming large volumes of structured and unstructured data, ensuring performance and scalability.
• Implement best practices for data storage, retrieval, and access control to maintain data integrity, security, and compliance with regulatory requirements.
• Implement efficient data processing workflows to support the training and evaluation of solutions using large language models (LLMs), ensuring that models are reliable, scalable, and performant.
• Proactively identify and resolve data quality issues, pipeline failures, or resource contention to minimize disruption to systems.
• Experience with large language model frameworks, such as LangChain, and the ability to integrate them into data pipelines for natural language processing tasks.
• Familiarity with Snowflake for data management and analytics, with the ability to work within the Snowflake ecosystem to support data processes.
• Knowledge of cloud computing principles and hands-on experience deploying, scaling, and monitoring AI solutions on platforms like Azure, AWS, and Snowflake.
• Ability to communicate complex technical concepts effectively to both technical and non-technical stakeholders, and to collaborate with cross-functional teams.
• Analytical mindset with attention to detail, coupled with the ability to solve complex problems efficiently and effectively.
• Knowledge of cloud cost management principles and best practices to optimize cloud resource usage and minimize costs.
• Experience with ML model deployment, including testing, validation, and integration of machine learning models into production systems.
• Knowledge of model versioning and management tools, such as MLflow, DVC, or Azure Machine Learning, for tracking experiments, versions, and deployments.
• Model monitoring and performance optimization, including tracking model drift and addressing performance issues to keep models accurate and reliable.
• Automation of ML workflows through CI/CD pipelines, enabling smooth model training, testing, validation, and deployment.
• Monitoring and logging of AI/ML systems post-deployment to ensure consistent reliability, scalability, and performance.
• Collaboration with data scientists and engineering teams to facilitate model retraining, fine-tuning, and updating.
• Familiarity with containerization technologies, like Docker and Kubernetes, for deploying and scaling machine learning models in production environments.
• Ability to implement model governance practices to ensure compliance and auditability of AI/ML systems.
• Understanding of model explainability and the use of tools and techniques to provide transparent insight into model behavior.

Must Have:
• Minimum of 10 years' experience as a data engineer.
• Hands-on experience with the Azure cloud ecosystem.
• Hands-on experience using Python for data manipulation.
• Deep understanding of vectors and vector databases.
• Hands-on experience scaling a POC to production.
• Hands-on experience with tools such as Document Intelligence, Snowflake, Azure Function Apps, and Azure AI Search.
• Experience working with PII/PHI.
• Hands-on experience working with unstructured data.
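The description above asks for experience with chunking techniques ahead of vector-database work. A minimal Python sketch of fixed-size chunking with overlap — the preprocessing step typically run before embedding text into a store such as Pinecone — where the chunk size and overlap values are illustrative, not prescribed by the posting:

```python
# Hedged sketch of fixed-size text chunking with overlap, a common
# preprocessing step before embedding chunks into a vector database.
# chunk_size and overlap defaults are illustrative assumptions.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split `text` into overlapping character windows.

    Overlap repeats the tail of each chunk at the head of the next, so a
    retrieval query matching text near a boundary still lands on a chunk
    containing the surrounding context.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far each window advances
    return [
        text[i:i + chunk_size]
        for i in range(0, max(len(text) - overlap, 1), step)
    ]
```

Each chunk would then be embedded and upserted with its source metadata; production pipelines often chunk on token or sentence boundaries instead of raw characters, but the overlap idea is the same.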

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies