AI Regulation India 2026: What Legal Teams Must Prepare For

LexiReview Editorial Team · 29 March 2026 · 24 min read

India's artificial intelligence regulatory landscape is shifting beneath the feet of every legal professional in the country. With the Digital Personal Data Protection Act now in active enforcement, MeitY's advisory framework gaining statutory teeth, and sector regulators from RBI to SEBI issuing increasingly specific AI directives, the era of treating AI governance as a future concern is over.

For legal teams — whether in-house counsel at a fintech startup, partners at a litigation powerhouse, or compliance officers at a multinational operating in India — the question is no longer whether AI regulation will affect your practice. It is how soon you will need to demonstrate compliance, and how deeply you will need to restructure your contracts, workflows, and vendor relationships.

This guide breaks down every dimension of AI regulation that Indian legal professionals need to prepare for in 2026 and beyond.

Key Takeaway

  • India's AI regulatory environment is moving from advisory guidelines to enforceable obligations under the DPDP Act and forthcoming Digital India Act
  • Sector-specific regulators (RBI, SEBI, IRDAI) have issued binding AI usage rules that carry real penalties for non-compliance
  • The EU AI Act has extraterritorial reach that affects Indian IT services, GCC operations, and export-facing businesses
  • Legal teams must prepare AI audit frameworks, algorithmic transparency documentation, bias testing protocols, and updated consent mechanisms
  • Contracts involving AI — from vendor agreements to client engagement letters — need new clauses covering output liability, IP ownership, and data processing
  • AI-powered legal tools themselves must comply with these regulations, making vendor due diligence essential

The Current State of AI Regulation in India

Unlike the European Union, which chose a single omnibus regulation, India's approach to governing artificial intelligence has been multi-layered — a patchwork of existing statutes, executive advisories, and sector-specific rules that together form an increasingly cohesive framework. Understanding each layer is essential before you can build a compliance strategy.

The Digital Personal Data Protection Act (DPDP Act): The Foundation

The DPDP Act, which received presidential assent in August 2023 and whose rules have been progressively notified through 2025 and into 2026, is the primary legislative anchor for AI regulation in India. While the Act does not mention "artificial intelligence" by name in its core provisions, its impact on AI systems is sweeping.

Automated decision-making provisions. The DPDP Act's requirements around purpose limitation (Section 4), consent (Section 6), and the rights of Data Principals create direct obligations for any organisation deploying AI that processes personal data. The right of a Data Principal to obtain information about the processing of their data logically extends to understanding when and how an AI system made a decision about them.

Data Fiduciary obligations. Any entity that determines the purpose and means of processing personal data — which includes training AI models, running inference, or using AI tools that process client data — qualifies as a Data Fiduciary. The compliance obligations are significant: maintaining records of processing activities, implementing reasonable security safeguards, conducting Data Protection Impact Assessments for high-risk processing, and appointing Data Protection Officers where required.

Consent architecture for AI. The DPDP Act's consent framework demands that consent be free, specific, informed, unconditional, and unambiguous. For AI systems, this means that blanket consent clauses buried in terms of service are insufficient. Users must be told, in clear language, that their data will be processed by an AI system, what the purpose of that processing is, and what decisions may be influenced by the AI output.

Children's data and AI. Section 9 of the DPDP Act places strict restrictions on processing children's data, including an outright prohibition on tracking, behavioural monitoring, and targeted advertising directed at children. Any AI system that might process data belonging to individuals under 18 — from edtech platforms to healthcare apps — faces heightened compliance requirements.

The Data Protection Board of India has begun accepting complaints under the DPDP Act. Legal teams should assume enforcement is active and treat compliance deadlines as firm, not aspirational. Penalties for non-compliance can reach up to ₹250 crore per instance.

The Information Technology Act: Existing Hooks

The IT Act, 2000, and its associated rules — particularly the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 — continue to provide regulatory hooks for AI governance.

MeitY's March 2024 advisory requiring platforms to label AI-generated content and seek government approval before deploying under-tested or unreliable AI models on the Indian internet was an early signal. While the legal enforceability of that specific advisory was debated, it set the policy direction. Subsequent amendments to the IT Rules have formalised several of these requirements, particularly around deepfakes, AI-generated misinformation, and platform accountability for algorithmic amplification.

Section 43A of the IT Act, which imposes liability for failure to protect sensitive personal data, also applies to AI systems that handle such data — providing an additional, and often overlooked, basis for regulatory action.

MeitY's AI Governance Framework

The Ministry of Electronics and Information Technology has been progressively building an AI governance framework that sits alongside the DPDP Act. Key elements that legal teams should track include:

  • The IndiaAI Mission's responsible AI principles, which emphasise safety, transparency, accountability, and inclusivity. While currently advisory, these principles are being referenced in government procurement standards and are likely to become binding for entities providing AI services to the government
  • Sector-specific AI safety guidelines issued in coordination with line ministries, covering areas like AI in healthcare diagnostics, AI in education assessment, and AI in public service delivery
  • The AI incident reporting framework, which is expected to require organisations to report significant AI failures, bias incidents, or safety events to a designated authority

The Proposed Digital India Act

The Digital India Act (DIA), which has been under active development since 2023, represents the most significant potential shift in India's AI regulatory landscape. Intended to replace the IT Act, 2000, the DIA's draft provisions include:

  • A risk-based classification for AI systems, drawing partial inspiration from the EU AI Act but tailored to Indian contexts
  • Mandatory algorithmic impact assessments for high-risk AI deployments
  • Algorithmic transparency requirements, including the right to meaningful explanation for automated decisions that significantly affect individuals
  • A dedicated institutional mechanism for AI oversight, potentially under MeitY or as a standalone body

While the DIA has not yet been enacted, its trajectory is clear enough that legal teams should begin preparing for its provisions now. Waiting for final notification risks being caught unprepared.

Sector-Specific AI Rules: Where the Rubber Meets the Road

While the DPDP Act and the forthcoming DIA provide the broad framework, sector regulators have moved faster and more specifically. For legal teams advising clients in regulated industries, these rules demand immediate attention.

RBI: AI in Banking and Financial Services

The Reserve Bank of India has been among the most active regulators on AI governance. Key requirements include:

AI model governance. RBI's guidelines on IT governance and risk management (updated through circulars in 2025) now explicitly address AI and machine learning models used in credit scoring, fraud detection, and customer service. Banks and NBFCs must maintain model inventories, conduct regular model validation, and ensure human oversight of AI-driven credit decisions.

Explainability in lending. The RBI has signalled — through its discussion papers and supervisory expectations — that AI-driven lending decisions must be explainable to borrowers. A customer denied credit cannot simply be told "the algorithm decided." The lending institution must be able to articulate the key factors that influenced the decision in terms the customer can understand.

Outsourcing and third-party AI. RBI's outsourcing framework applies to banks and NBFCs that use third-party AI tools. This means that if your client uses an external AI platform for KYC verification, transaction monitoring, or customer analytics, the regulatory obligations remain with the regulated entity, not the vendor. Contracts with AI vendors must reflect this.

Data localisation. RBI's data localisation requirements for payment data interact with AI deployments. If an AI model is trained on or processes payment data, the data residency requirements apply — which has implications for cloud-hosted AI services.

SEBI: AI in Capital Markets

The Securities and Exchange Board of India has addressed AI through multiple channels:

  • Algorithmic trading regulations now encompass AI-driven trading strategies, with requirements for pre-trade risk controls, audit trails, and kill switches
  • AI in investment advisory falls under SEBI's registered investment adviser framework. AI-powered robo-advisory platforms must comply with suitability requirements and cannot hide behind algorithmic opacity
  • Market surveillance. SEBI itself uses AI for market surveillance, but has also imposed obligations on market intermediaries to ensure their AI systems do not facilitate market manipulation or insider trading

IRDAI: AI in Insurance

The Insurance Regulatory and Development Authority of India has issued guidelines covering:

  • AI in underwriting, where insurers must ensure that AI models do not introduce unfair discrimination based on protected characteristics
  • Claims processing automation, which must maintain human review mechanisms for disputed or high-value claims
  • AI-driven product recommendations, which must comply with suitability and disclosure requirements

Sector-specific AI rules often carry their own penalty frameworks, separate from the DPDP Act. An AI compliance failure in banking could trigger both DPDP Act penalties and RBI supervisory action — a compounding risk that legal teams must account for in their risk assessments.

Healthcare, Education, and Other Emerging Sectors

While not yet as codified as financial services regulation, AI rules are rapidly emerging in:

  • Healthcare, where the Central Drugs Standard Control Organisation (CDSCO) is developing frameworks for AI-based medical devices and diagnostic tools, and the National Medical Commission has weighed in on AI in clinical practice
  • Education, where AI-driven assessment, proctoring, and personalised learning tools face scrutiny under both the DPDP Act (particularly regarding children's data) and sector-specific quality standards
  • Employment, where AI in recruitment and performance evaluation is attracting regulatory attention, particularly around bias and discrimination

Global Context: The EU AI Act and Its Reach Into India

No discussion of AI regulation in India is complete without addressing the EU AI Act, which entered into force in August 2024 and whose provisions are being phased in through 2026.

Extraterritorial Application

The EU AI Act applies to providers of AI systems that are placed on the market or put into service in the EU, regardless of where those providers are established. It also applies to deployers of AI systems located within the EU, and to providers and deployers located outside the EU where the output produced by their AI system is used in the EU.

For Indian companies, this means:

  • IT services companies delivering AI solutions to European clients must comply with the EU AI Act's requirements for their risk-tier classification
  • GCCs (Global Capability Centres) in India developing AI systems for their European parent entities are within scope
  • Export-facing businesses whose AI-generated outputs (reports, analyses, recommendations) are consumed by EU-based entities may fall under the Act's provisions

Legal teams advising Indian companies with EU exposure must:

  1. Classify AI systems according to the EU AI Act's risk tiers (unacceptable, high-risk, limited, minimal)
  2. Implement conformity assessments for high-risk AI systems
  3. Maintain technical documentation to EU standards, which are more prescriptive than current Indian requirements
  4. Establish AI governance structures that satisfy both Indian and EU regulatory expectations
  5. Review contracts with EU clients and partners to ensure AI-related obligations are clearly allocated
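As a first-pass triage aid, the classification step can be sketched in code. This is a hypothetical, heavily simplified screening helper, not a substitute for legal analysis: the EU AI Act's actual classification turns on Annex III use cases, exemptions, and context, and the domain lists below are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative domain lists -- the real Act enumerates use cases in Annex III
# and prohibited practices in Article 5; a lawyer must confirm the final tier.
HIGH_RISK_DOMAINS = {"credit_scoring", "recruitment", "biometric_id", "education_assessment"}
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}

@dataclass
class AISystem:
    name: str
    domain: str                  # primary use case, e.g. "credit_scoring"
    interacts_with_humans: bool  # chatbots etc. trigger transparency duties

def screen_risk_tier(system: AISystem) -> str:
    """First-pass triage into the EU AI Act's four tiers
    (unacceptable, high-risk, limited, minimal)."""
    if system.domain in PROHIBITED_PRACTICES:
        return "unacceptable"
    if system.domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if system.interacts_with_humans:
        return "limited"  # transparency obligations apply
    return "minimal"

print(screen_risk_tier(AISystem("LoanScorer", "credit_scoring", False)))  # high-risk
```

Even a rough screen like this, run across an AI inventory, tells the legal team where conformity assessments and technical documentation effort should concentrate first.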

The convergence between Indian and EU regulatory approaches also means that companies investing in EU AI Act compliance today will likely find themselves ahead of the curve when India's own framework matures.

What Legal Teams Must Prepare in 2026

Moving from regulatory analysis to actionable preparation, here is what legal teams — whether in-house or external advisors — need to build in 2026.

1. AI Audit and Inventory Requirements

Before you can comply, you must know what you are dealing with. Legal teams should lead or participate in an AI audit covering:

  • AI system inventory. Catalogue every AI system used within the organisation, including purpose, data inputs, decision outputs, vendor relationships, and risk classification
  • Data flow mapping for AI. Trace how personal and sensitive data flows into, through, and out of each AI system — from training data to inference outputs to storage
  • Vendor AI assessment. Evaluate every third-party AI tool against regulatory requirements, including data processing agreements, security certifications, and jurisdictional considerations
  • Legacy system review. Identify AI or automated decision-making embedded in older systems that may not have been built with current compliance standards in mind

2. Algorithmic Transparency and Explainability

The trend across Indian and global regulation is clear: organisations that deploy AI must be able to explain what their AI does and why it does it. Legal teams should prepare:

  • Transparency statements for each AI system, written in clear language, explaining the system's purpose, the types of data it processes, and the nature of its outputs
  • Explainability protocols for AI-driven decisions that affect individuals, particularly in lending, insurance, employment, and public services
  • Disclosure templates for client-facing communications, informing data subjects when AI is used in decisions affecting them

3. Bias Documentation and Fairness Testing

Algorithmic bias is a legal risk, not merely an ethical concern. Documentation requirements are tightening, and legal teams should ensure:

  • Bias testing protocols are established for AI systems, particularly those used in decision-making about individuals
  • Fairness metrics are defined, documented, and regularly measured
  • Remediation procedures exist for when bias is detected
  • Records of bias testing are maintained for regulatory inspection
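One widely used screening metric can illustrate what "fairness metrics defined and regularly measured" looks like in practice: comparing selection rates across groups and applying the common four-fifths threshold. This is a toy sketch with made-up numbers; real bias testing requires statistically meaningful samples, appropriate group definitions, and legal judgment about which metric fits the use case.

```python
def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of positive outcomes (e.g. loan approvals) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical approval outcomes for two demographic groups
approvals_group_a = [True, True, True, False]    # 75% approved
approvals_group_b = [True, False, False, False]  # 25% approved

ratio = disparate_impact_ratio(approvals_group_a, approvals_group_b)
print(round(ratio, 2))  # 0.33
print(ratio >= 0.8)     # False -> flag for remediation, and record the finding
```

Whatever metric is chosen, the legal requirement is the same: the test, its result, and any remediation taken must be documented and retained for regulatory inspection.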

4. Consent Mechanism Updates

Under the DPDP Act, consent for AI processing must be granular and informed. Legal teams need to:

  • Audit existing consent mechanisms to ensure they adequately cover AI processing
  • Draft AI-specific consent language that complies with the DPDP Act's requirements for specificity and clarity
  • Implement layered consent where AI processing is secondary to the primary purpose of data collection
  • Build consent withdrawal mechanisms that can operationally stop AI processing of a specific individual's data
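The last two points have an operational shape worth making concrete: consent must be recorded per purpose, and withdrawal must actually stop processing. The sketch below shows one minimal way to model that; the record structure, purpose names, and identifiers are all hypothetical assumptions, not a prescribed DPDP Act format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIConsentRecord:
    """Granular, per-purpose consent for a Data Principal (illustrative model)."""
    data_principal_id: str
    purposes: set[str] = field(default_factory=set)  # e.g. {"contract_review_ai"}
    withdrawn: bool = False
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def withdraw(self) -> None:
        """Withdrawal must operationally halt AI processing, not just flip a flag."""
        self.withdrawn = True
        self.purposes.clear()

    def permits(self, purpose: str) -> bool:
        return not self.withdrawn and purpose in self.purposes

record = AIConsentRecord("DP-001", purposes={"contract_review_ai"})
assert record.permits("contract_review_ai")
assert not record.permits("model_training")      # no blanket consent to training
record.withdraw()
assert not record.permits("contract_review_ai")  # AI processing must now stop
```

The design choice that matters legally is the per-purpose set: a single boolean "consent given" field cannot satisfy the DPDP Act's specificity requirement, and every downstream AI pipeline should check `permits()` for its own purpose before processing.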

Contractual Implications: Rewriting the Rules of Engagement

AI regulation does not just change compliance workflows — it fundamentally alters contractual relationships. Legal teams must update templates, negotiate new terms, and advise clients on a rapidly shifting landscape.

AI Output Liability

Who is liable when an AI system produces an incorrect, harmful, or misleading output? This question is at the heart of dozens of contract disputes emerging across jurisdictions, and Indian law is still catching up.

Current position. Under Indian contract law and the IT Act, liability for AI outputs generally falls on the entity that deploys the AI system and presents its output to the affected party. However, contracts between AI vendors and deployers can — and should — allocate this risk more precisely.

What contracts should address:

  • Warranties (or disclaimers) regarding AI output accuracy
  • Indemnification for losses arising from AI errors, hallucinations, or bias
  • Limitations of liability specific to AI-related claims
  • Insurance requirements for AI-related risks
  • Notification obligations when AI output quality degrades

IP Ownership of AI-Generated Content

The intellectual property status of AI-generated content remains legally unsettled in India. The Copyright Act, 1957, is built around human authorship; while Section 2(d)(vi) attributes authorship of a computer-generated work to "the person who causes the work to be created", how that provision applies to generative AI remains untested, and the Copyright Office has not yet issued definitive guidance on AI-generated works.

Contractual solutions legal teams should implement:

  • Clear IP assignment clauses that address AI-generated and AI-assisted outputs separately
  • Work-for-hire provisions that contemplate AI involvement in the creative process
  • Licensing terms that account for the possibility that AI-generated content may not be copyrightable
  • Representations regarding AI use in content creation, particularly for creative, legal, and advisory deliverables

Vendor AI Usage Clauses

When your organisation engages a vendor, you need to know whether and how that vendor uses AI. Equally, when your organisation is the vendor, your clients will increasingly demand transparency about AI use.

Key clauses to include in vendor agreements:

  • AI disclosure requirements — vendors must disclose when AI is used to deliver services
  • AI processing restrictions — limitations on using client data for AI model training
  • Sub-processor transparency — disclosure of AI sub-processors and their data practices
  • Right to audit — the ability to audit vendor AI systems for compliance, bias, and security
  • Change notification — obligation to notify before materially changing AI systems used in service delivery
  • Exit provisions — data portability and model isolation requirements upon contract termination

When negotiating AI vendor contracts, insist on a separate AI addendum or schedule that specifically addresses AI governance obligations. Burying AI terms in general data processing agreements leads to gaps and ambiguity that will surface at the worst possible moment.

Client Engagement Letters and Advisory Disclaimers

Law firms and legal service providers using AI tools must update their own client-facing documents:

  • Engagement letters should disclose the use of AI tools in legal research, document review, or contract analysis
  • Confidentiality provisions must account for data processed through AI platforms, including cloud-hosted tools
  • Professional responsibility disclosures should address the role of AI in legal work product, consistent with Bar Council of India expectations
  • Limitation of liability clauses should contemplate AI-assisted work

How AI Contract Review Tools Comply with Emerging Regulations

Legal teams evaluating AI-powered contract review and intelligence platforms must ask how those very tools comply with the regulations they are designed to help navigate. This is not academic — it is a practical due diligence necessity.

Data residency and localisation. Does the platform store and process data within India, or does it rely on overseas cloud infrastructure? For sensitive legal documents, Indian data residency is increasingly a baseline requirement.

Purpose limitation. Does the AI tool use your contract data only for the purpose you consented to — reviewing your contracts — or does it also use your data to train its models, improve its algorithms, or generate insights for other clients?

Security architecture. What security certifications does the platform hold? Is data encrypted at rest and in transit? Are there access controls, audit logs, and breach notification procedures?

Transparency about AI methods. Does the vendor explain how its AI works — what models it uses, how they were trained, what their known limitations are? Opacity in an AI tool designed for legal compliance is a red flag.

Human oversight mechanisms. Does the platform build in human review touchpoints, or does it encourage fully automated workflows? For high-stakes legal decisions, human oversight is both a regulatory requirement and a professional responsibility.

Bias and accuracy reporting. Does the vendor provide data on its AI's accuracy, error rates, and any known biases? Is there a feedback mechanism for reporting and correcting errors?

At LexiReview, these questions are not afterthoughts — they are foundational to how the platform is built. From Indian-first data residency to transparent AI methods and built-in human oversight, LexiReview's contract intelligence platform is engineered for the regulatory environment legal teams are operating in today, not the one that existed five years ago.

When evaluating any AI legal tool, request the vendor's data processing agreement, AI governance documentation, and security certifications before uploading a single document. If the vendor cannot provide these, treat it as a disqualifying factor.

AI Compliance Readiness Checklist

Use this checklist to assess your organisation's readiness for India's AI regulatory landscape in 2026:

Governance and Policy

  • [ ] Designated AI governance lead or committee within the legal team
  • [ ] AI usage policy approved and communicated across the organisation
  • [ ] AI risk classification framework aligned with Indian and (where applicable) EU requirements
  • [ ] Incident response plan covering AI failures, bias events, and data breaches involving AI

Compliance Infrastructure

  • [ ] Complete inventory of all AI systems in use, including vendor tools
  • [ ] Data flow maps for every AI system processing personal data
  • [ ] Data Protection Impact Assessments completed for high-risk AI processing
  • [ ] Consent mechanisms audited and updated for AI-specific processing

Documentation and Transparency

  • [ ] Algorithmic transparency statements for each AI system
  • [ ] Bias testing protocols established and documented
  • [ ] Explainability frameworks for AI-driven decisions affecting individuals
  • [ ] Records of AI model validation and performance monitoring

Contracts and Agreements

  • [ ] AI-specific clauses added to vendor and procurement templates
  • [ ] Client engagement letters updated to address AI tool usage
  • [ ] IP ownership provisions revised for AI-generated content
  • [ ] Data processing agreements updated with AI-specific terms
  • [ ] AI liability and indemnification clauses standardised

Sector-Specific Compliance

  • [ ] RBI AI guidelines mapped to banking and fintech operations (if applicable)
  • [ ] SEBI algorithmic trading and advisory requirements addressed (if applicable)
  • [ ] IRDAI AI in insurance requirements reviewed (if applicable)
  • [ ] Healthcare AI device and diagnostic regulations tracked (if applicable)

Training and Awareness

  • [ ] Legal team trained on AI regulation fundamentals
  • [ ] Business stakeholders briefed on AI compliance obligations
  • [ ] Regular regulatory update mechanism established
  • [ ] External expert engagement for complex AI regulatory questions

Key upcoming dates to track: DPDP Act rules are being notified in phases through 2026. The Digital India Act's draft is expected to advance significantly this year. The EU AI Act's high-risk system requirements become fully applicable in August 2026. Do not wait for final deadlines — start preparing now, because demonstrating a good-faith compliance trajectory matters to regulators.

Frequently Asked Questions

Is there a dedicated AI regulation law in India as of 2026?

Not yet in the form of a single, standalone AI Act. India's AI regulatory framework is currently composed of multiple overlapping instruments: the Digital Personal Data Protection Act (which governs data processing by AI systems), MeitY's AI governance guidelines, sector-specific rules from regulators like RBI, SEBI, and IRDAI, and provisions of the IT Act. The proposed Digital India Act is expected to introduce more unified AI-specific provisions, including risk-based classification and mandatory algorithmic impact assessments, but it has not yet been enacted. Legal teams should treat the current multi-layered framework as binding and prepare for the DIA's provisions proactively.

How does the DPDP Act apply to AI systems specifically?

The DPDP Act applies to AI systems whenever they process personal data — which covers the vast majority of commercial AI deployments. Key provisions include: consent requirements that demand clear disclosure of AI processing, purpose limitation rules that restrict how data collected for one purpose can be used in AI training or inference, Data Fiduciary obligations for maintaining processing records and implementing security safeguards, and the rights of Data Principals to access information about how their data is processed. For organisations classified as Significant Data Fiduciaries, additional obligations include Data Protection Impact Assessments and appointing a Data Protection Officer — both of which directly apply to high-risk AI processing activities.

Do Indian companies need to comply with the EU AI Act?

Indian companies may need to comply with the EU AI Act if they place AI systems on the EU market, provide AI services to EU-based clients, or produce AI outputs that are used within the EU. This is particularly relevant for Indian IT services companies, GCCs developing AI for European parent entities, and businesses exporting AI-powered solutions to EU markets. The EU AI Act's extraterritorial reach is broad and mirrors the approach taken by GDPR. Legal teams should conduct a jurisdictional assessment of their AI deployments to determine EU AI Act exposure and build compliance into their EU-facing operations.

What should legal teams include in contracts involving AI vendors?

At minimum, contracts with AI vendors should include: AI disclosure and transparency requirements, restrictions on using client data for model training, sub-processor disclosure obligations, audit rights covering AI systems, change notification requirements, data processing agreements specific to AI operations, liability and indemnification clauses for AI output errors or bias, security and certification requirements, data portability and exit provisions, and compliance representations covering applicable Indian regulations (DPDP Act, sector-specific rules) and, where relevant, the EU AI Act. A separate AI addendum or schedule is recommended to ensure comprehensive coverage without ambiguity.

Who owns the intellectual property in AI-generated content under Indian law?

This remains legally unsettled in India. The Copyright Act, 1957, is built around human authorship; while Section 2(d)(vi) deems "the person who causes the work to be created" to be the author of a computer-generated work, its application to generative AI is untested, and the Copyright Office has not issued definitive guidance. This creates uncertainty about whether purely AI-generated content is copyrightable at all. The practical solution is contractual: parties should use clear IP assignment and licensing clauses that specifically address AI-generated and AI-assisted outputs, distinguish between human-directed AI use and fully autonomous AI generation, and allocate rights and risks accordingly. Until the law is clarified through legislation or authoritative judicial decisions, contractual clarity is the legal team's primary tool for managing AI IP risk.

How can legal teams ensure their own use of AI tools is compliant?

Legal teams should: conduct due diligence on any AI tool before adoption — reviewing the vendor's data processing agreement, security certifications, AI governance documentation, and data residency practices; update client engagement letters to disclose AI tool usage; ensure confidentiality and privilege protections extend to data processed through AI platforms; maintain human oversight of AI outputs, particularly for advice, filings, and high-stakes work product; document their AI tool usage and the governance frameworks they apply; and stay informed about evolving Bar Council of India guidance on technology in legal practice. Choosing AI tools built for Indian regulatory compliance — such as LexiReview — reduces the compliance burden significantly.

Looking Ahead: The Regulatory Trajectory

India's AI regulatory framework is on an unmistakable trajectory toward greater specificity, enforceability, and institutional capacity. The legal teams that will thrive in this environment are those that begin building compliance infrastructure now — not when the final rules are notified, not when the first enforcement action is publicised, but today.

The convergence between India's domestic framework and global standards like the EU AI Act also means that investments in AI governance are not jurisdiction-specific. A robust AI compliance programme built for India will largely satisfy — or can be readily adapted to satisfy — requirements in other markets.

For legal professionals, AI regulation is not a threat to practice — it is an expansion of it. Every AI compliance obligation creates demand for legal expertise: drafting policies, negotiating contracts, conducting audits, advising on risk, and representing clients in regulatory matters. The legal teams that understand AI regulation deeply will be the ones that capture this growing demand.

Start with the checklist above. Map your AI landscape. Update your contracts. Build your governance framework. And choose tools — for contract review, for compliance, for practice management — that are built for the regulatory reality you are operating in.

LexiReview is India's AI-powered contract intelligence platform, built from the ground up for Indian law and Indian regulatory compliance. To learn how LexiReview helps legal teams navigate AI regulation while using AI responsibly, schedule a demo today.

LexiReview Editorial Team

Our editorial team comprises legal tech experts, compliance specialists, and AI researchers focused on transforming contract management for Indian businesses.

Ready to automate your contract workflows?

Join leading Indian legal teams using LexiReview to streamline compliance, reduce risk, and close contracts faster.