Role : AI Governance Director
Department : Legal, AI Governance
Job Location : Bangalore/Mumbai
Experience Range : 12+ Years
Job Summary
The AI Governance Director at LTIMindtree will play a pivotal role in shaping and safeguarding the organization’s enterprise-wide AI Governance and Compliance Program.
This position acts as a critical bridge between business, technology, cybersecurity, data privacy, and governance functions, ensuring that AI is deployed responsibly, ethically, and in alignment with global regulatory standards.
This role will drive the development and continuous evolution of AI policies, conduct Responsible AI assessments, ensure regulatory compliance, and champion stakeholder education. By embedding Responsible AI (RAI) principles into the organization’s DNA, the Director will ensure LTIMindtree remains a trusted and forward-thinking leader in the IT services and consulting industry.
This role owns the enterprise accountability framework for Responsible AI, with binding authority to enforce compliance. It mandates collaboration with various stakeholders and the drafting of standards, toolkits, and frameworks required for Responsible AI adoption. The Director will champion adoption of governance practices by embedding controls into business workflows, driving cultural change, and measuring policy uptake across teams.
Key Responsibilities
1) AI Compliance Strategy & Governance
- Design and lead the enterprise-wide Responsible AI governance framework adoption.
- Develop compliance roadmaps for AI/ML initiatives in collaboration with business and technical stakeholders.
- Collaborate and coordinate with business and IT leadership on AI risk and ethics governance.
- Serve on and provide inputs to the AI governance board.
- Define and institutionalize “AI risk appetite” and “compliance thresholds” for AI/ML deployments.
- As part of the AI governance office charter, manage the enterprise-wide AI governance framework aligned with the EU AI Act, NIST AI RMF, OECD AI Principles, and other emerging regulations.
- Implement, manage, and govern the AI assurance framework.
2) Policy Development & Implementation
- Map and maintain the regulatory landscape in line with the Responsible AI framework.
- Draft and maintain AI-related policies, procedures, and controls across the organization.
- Work with the AI governance office to maintain regulatory compliance.
- Ensure AI governance aligns with internal policies and with external standards and regulations such as ISO standards, GDPR, HIPAA, AI regulations, and client-specific requirements.
- Build and manage standard operating procedures (SOPs) and toolkits for AI lifecycle management and risk controls.
- Collaborate with and assist InfoSec to integrate AI compliance into DevSecOps and MLOps pipelines.
3) Responsible AI Framework Implementation, Governance & Oversight
- Manage and refine the Responsible AI assessment frameworks tailored for AI use cases (e.g., bias, security, explainability, and related risks).
- Collaborate with Technology teams to assess AI models and recommend mitigations.
- Collaborate with Technology and Quality Assurance teams to implement the Responsible AI testing framework.
- Own and represent AI governance for internal and external audits.
- Maintain the AI risk register, including use case risk profiling and residual risk monitoring.
- Implement AI audit mechanisms (model monitoring, impact assessments).
- Institutionalize AI impact assessments spanning the AI inventory, risk categorization, and AI assurance assessments.
- Ensure all AI systems adopt the AI impact assessment framework throughout the AI lifecycle.
- Implement, institutionalize, and monitor the AI system approval process.
4) Regulatory Monitoring and Engagement
- Track and analyze global regulatory developments (e.g., EU AI Act, NIST AI RMF, OECD Guidelines, India’s DPDP Act) along with the Privacy office and AI governance office.
- Act as liaison to legal and government affairs teams to assess impact of evolving laws.
- Engage with industry bodies (Partnership on AI, IEEE, ISO) to shape AI standards.
- Prepare compliance documentation and assist in regulatory or client audits involving AI.
5) Training and Culture Building
- Own the design and roll out of Responsible AI training modules across technical, business, and executive audiences.
- Promote awareness of AI ethics and responsible innovation culture across the organization.
- Drive change management and accountability culture through internal campaigns and workshops.
- Create AI playbooks and AI toolkits for AI development and deployment teams.
6) Client Engagement & Advisory
- Advise clients on Responsible AI and AI governance frameworks.
- Support pre-sales and proposal efforts with AI governance insights.
- Collaborate with the Delivery excellence team and Project teams to ensure AI solutions meet client contractual and regulatory obligations.
7) Accountability & Enforcement
- Own end-to-end accountability for implementing the Responsible AI framework, AI governance, AI assurance, AI literacy, Responsible AI toolkit adoption, AI risk management, and the handling of AI compliance breaches.
- Escalate AI deployments that fail risk or compliance thresholds to the AI governance office/AIGB.
8) Adoption & Change Management
- Drive enterprise-wide adoption of Responsible AI practices and AI policies through:
  - AI impact assessments
  - Mandatory compliance gates in AI project lifecycles (e.g., ethics review before model deployment)
  - Integration with existing workflows (e.g., SDLC, procurement, sales)
- Define and track adoption KPIs (e.g., "% of AI projects passing RAI audits").
Key Competencies
- Domain: Strong understanding of Responsible AI framework and AI governance
- Domain: Understanding of AI regulations (EU AI Act, NIST RMF), AI ethics
- Technical: AI/ML lifecycle, MLOps, XAI, AI security, Agentic AI, GRC tools
- Technical: AI systems assessments and defining assessment parameters and standards
- Leadership: Stakeholder influence, compliance strategy, cross-functional collaboration
- Ability to adopt new technologies, with experience building compliance frameworks.
- Ability to understand frameworks, translate them into processes, and enable effective organizational adoption via frameworks, toolkits, guidelines, etc.
- Excellent communication skills
- Excellent presentation skills
- Excellent collaborative skills
- Excellent research skills
- Ability to develop frameworks for new technology adoption.
- Proactively take ownership of tasks and drive them to closure.
Required Qualifications
- 12-18 years in Information Technology, compliance, technical governance, or risk management, with 3+ years in AI/ML-related domains.
- Strong knowledge of AI regulatory frameworks (EU AI Act, NIST AI RMF, OECD AI Principles).
- Experience working with cross-functional teams (Delivery, InfoSec, Legal, Data Privacy).
- Familiarity with AI/ML model lifecycle (training, validation, testing, deployment, monitoring).
Preferred Qualifications
- Background in Law, Public Policy, Data Governance, or AI Ethics.
- Certifications in AI Governance (AIGB, IAPP CIPM/CIPT, MIT RAII) or Privacy (CIPP/E).
- Experience in Global IT services/consulting firms/product companies
- Exposure to data-centric AI product governance or AI MLOps platforms (e.g., Azure ML, SageMaker, DataRobot), Agentic AI implementation, etc.