
AI and Privacy – Navigating the DPDP Act in the Age of Emerging Tech

  • Sunidhi & Satviki
  • Jan 13
  • 7 min read

Sunidhi Khabya and Satviki Agnihotri, 3rd-year B.A. LL.B. students, NLU Jodhpur


  1. The Challenge of AI to Data Protection


The advent of Artificial Intelligence in the contemporary world has led to data accumulation and processing at a substantially larger scale. Predictive models continuously learn from patterns of past user behaviour, often extending beyond what is covered by the direct consent of the data subject. AI systems process not only data directly provided by users but also data derived or inferred from such inputs, which largely constitutes "personal data" under Section 2(t) of the Digital Personal Data Protection Act, 2023 ["DPDP Act"], defined as "any data about an individual who is identifiable by or in relation to such data." These systems build user profiles that evolve with new behaviour patterns and each additional data point.

This model is particularly in conflict with the object and scheme of the DPDP Act, which relies heavily on the consent-centric framework of notice and consent under Section 6, along with the specificity of consent under Section 7. Section 6 proceeds on the assumption that personal data is clearly defined, purpose-specific and subject to clear time-bound limits under the Act (generally one year, unless a specific law requires otherwise), and that this must be disclosed to the data principal through prior notice and accepted through an informed and affirmative act of consent. "Affirmative" here means not merely "I Agree" boxes, but clear yes/no checkboxes enabling specific consent. However, where data processing is continuous and layered, or is repurposed across multiple operational functions, it undermines and dilutes the foundational premise of purpose-specific consent, placing such models at odds with the Act's reliance on traditional notice-and-consent mechanisms.


AI models, however, operate through continuous data flows, repeated uses and opaque decision-making processes, directly challenging the principles of purpose limitation and data minimisation. An inescapable dilemma thus arises: can the obligations under the DPDP Act extend to algorithmic and automated data processing that lies beyond explicit consent frameworks?


  2. Automated Decision-Making and Profiling


One of the most crucial lacunae in the DPDP Act is its silence on automated decision-making and profiling by AI models. Unlike Article 22 of the European Union's General Data Protection Regulation ["GDPR"], which grants individuals the right not to be subjected to decisions based solely on automated processing, the DPDP Act recognises no such right. This legislative absence leaves the risks of algorithmic bias, opacity and error that typically accompany AI-driven profiling unaddressed and unchecked, endangering the informational privacy and autonomy of the data principal.


Sections 4 and 10 of the DPDP Act impose obligations upon data fiduciaries and significant data fiduciaries to ensure lawful, fair and transparent processing of personal data. Although these provisions do not expressly refer to algorithmic systems, they must be read purposively to accommodate duties of explainability and human oversight. A significant data fiduciary under Section 10 is required to undertake data protection impact assessments and periodic audits, which can extend to assessing the ethical and fairness implications of automated decision-making systems. However, these obligations are merely regulatory and compliance mechanisms, and do not automatically translate into independent or enforceable rights for data principals under the DPDP Act.


The interpretive challenge, therefore, is whether the judiciary can read an obligation of algorithmic accountability into these sections to fill the legislative gap. The mischief rule of statutory interpretation, as laid down in Heydon's Case, supports this approach: the mischief to be remedied is unaccountable algorithmic profiling, and the purposive solution is an expanded construction of fiduciary duties to ensure transparency and fairness.


In the landmark judgement of Maneka Gandhi v. Union of India, the Supreme Court held that reasonableness and fairness are inherent in every administrative action that affects personal liberty. Applying this doctrine of fair and just administrative action to data processing, algorithmic decisions that materially affect the personal freedom or the right to consent of individuals must be informed by procedural fairness and a right to explanation. While the doctrine was developed in the context of state action, the Indian judiciary has progressively extended the horizontal application of fundamental rights where private actors such as data fiduciaries (especially significant data fiduciaries) have a direct and substantial impact on an individual's autonomy, dignity and liberty. The same principle was reiterated in Justice K.S. Puttaswamy v. Union of India, where the Court established the right to informational self-determination, meaning that data processing must satisfy the thresholds of legality, necessity and proportionality.


These constitutional standards must be given contemporary meaning and must therefore extend to algorithmic governance. Indian courts should accordingly interpret the DPDP Act to require human oversight in cases of profiling and automated decision-making, especially when such decisions have significant personal consequences.


The GDPR provides another model: Article 22 recognises the individual's right to obtain human intervention and a right to explanation for automated decisions. Similar provisions exist in the UK Data Protection Act, 2018 and the Canadian Consumer Privacy Protection Act ["Bill C-27"], which treat algorithmic fairness as a substantive right. Indian jurisprudence should similarly apply these protections through judicial interpretation, ensuring that algorithmic systems align with the rights to dignity and privacy under the Constitution of India. Until legislative reform explicitly incorporates such safeguards, the courts must interpret Sections 4 and 10 of the DPDP Act so as to balance privacy with automated decision-making.


  3. Algorithmic Bias, Discrimination and Equality


While profiling and automated decision-making primarily raise concerns over privacy and consent, more complex implications emerge when these systems begin distinguishing between individuals and groups, thereby invoking the equality jurisprudence under the Constitution of India.


AI algorithms are inherently prone to bias: while an algorithm may appear prima facie neutral, it reproduces existing prejudices by learning from biased datasets. Predictive AI models used for hiring or policing often amplify the gender, caste and religious prejudices embedded within the data. Such bias runs against the constitutional mandate of equality and fairness. The DPDP Act lacks an explicit mandate for fair or non-discriminatory processing; Section 8(5) of the Act requires reasonable security safeguards, but focuses solely on protecting data from breaches rather than on substantive fairness in processing. There is therefore no legal mechanism for challenging decisions that produce unequal impacts.


Article 14 of the Indian Constitution, as interpreted in Anuj Garg v. Hotel Association of India, requires that the State prevent indirect discrimination arising from structural bias. In Navtej Singh Johar v. Union of India, the Court recognised that equality is not merely formal but substantive, and must protect individuals from both explicit and implicit discrimination. Following this reasoning, profiling that disproportionately affects marginalised groups must be treated as a violation of equality and fairness, and the DPDP Act must be purposively read to align with the spirit of Article 14.


In State v. Loomis, the Wisconsin Supreme Court upheld the use of a criminal sentencing tool accused of bias against African-American defendants, even while acknowledging due process and transparency concerns in automated decisions. The case serves as a cautionary example of how algorithmic systems can undermine fairness when left unchecked by legal safeguards.


Section 2(h) of the DPDP Act defines 'harm' to include financial loss, identity theft and reputational injury, but omits discrimination. Indian courts, applying purposive interpretation, could expand this definition to recognise algorithmic bias as a form of dignitary harm under Articles 14 and 21. Such a reading harmonises the Act with constitutional morality and ensures that AI systems respect the right to equality.


The DPDP Act, while a landmark in India's privacy regulation, remains underprepared for the complex challenges posed by AI. The static structure of the Act does not allow it to address automated decision-making, profiling and algorithmic bias. Judicial activism thus becomes necessary: Indian courts must interpret the DPDP Act as a living instrument, capable of evolving in response to technological change.


  4. Conclusion


This analysis shows that the DPDP Act does not address AI-based decision-making and algorithmic bias, which pose new challenges to the principles of equality and transparency. Even though AI may be a source of efficiency, its use without human oversight risks discriminatory profiling and the gradual erosion of individual autonomy. At this point, judicial intervention is of utmost importance. By interpreting Sections 4 and 10 purposively, courts can impose procedural obligations on data fiduciaries, such as issuing breach and consent notices, clearly delineating the purpose for which consent is obtained, and retaining data only for the required time frame rather than hoarding data principals' data. Courts can thereby recognise data principals' substantive rights, including protection against arbitrary profiling, discriminatory automated outcomes and unfair decision-making, ensuring that algorithmic governance aligns with constitutional values of, inter alia, fairness, transparency and accountability. In this way, Indian jurisprudence would be in line with the rest of the world, in keeping with regulations such as Article 22 of the GDPR and cases such as Digital Rights Ireland and Carpenter v. United States, which emphasise the need for proportionality and oversight in digital regulation.

The DPDP Act, in the end, is an occasion for India to articulate principles of digital constitutionalism: a system in which technology is compatible with freedom, and governance with accountability. It is the task of the judiciary to develop an interpretative tradition that maintains this equilibrium. Interpreted purposively and in line with constitutional morality, the law can evolve into a living instrument that safeguards citizens' rights while facilitating responsible innovation.

