AI Governance in an Age of “Technological Somnambulism”
- Varun Pathak & Rudraditya
- 8 hours ago
- 7 min read
Written by - Varun Pathak (Partner at Shardul Amarchand Mangaldas) & Rudraditya (Student at RGNUL)
I The Purpose Problem
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose we really desire and not merely a colorful imitation of it. - Norbert Wiener (1960)
The Dutch tax office employed an automated fraud-detection and data-analysis tool that wrongly accused nearly 35,000 parents of fraud, driving families into debt, breaking households apart, and producing injustice on a large scale. The scandal was serious enough to force the resignation of the Dutch cabinet in January 2021. It is a real-life illustration of how automated decision systems can amplify both the reach of the state and the scale of its mistakes.
As the capabilities of artificial intelligence have grown, so have the challenges of governing its use and application. A regulatory model that looks only at the state’s role is structurally incomplete, because frontier AI is mostly built and deployed by private actors yet has an outsized impact on public life. The way AI power is distributed, and the democratic values at stake, are better served by a “co-governance model”, in which the government retains authority while developers, auditors, researchers, and members of civil society have structured ways to participate.
AI is already being deployed, in impactful and hopefully meaningful ways, across transportation, health, energy, water, social deliberation, education, science, manufacturing, employment, surveillance, policing, and the military: a range of interdependent sectors where it promises gains in productivity and the upliftment of human civilization. Langdon Winner called the prevailing attitude “technological somnambulism”: we “sleepwalk” into new technologies, debating their qualities and efficiency while disregarding how they reshape power.
Today, digitally mediated and AI-filtered communication can fuel inequality, turbulence, and authoritarianism. As AI advances, systems may become more complex and unpredictable, and may enable escalation faster than humans can manage.
Contemporary legal systems often allocate liability and permissions on the basis of mental states: intention, knowledge, and negligence. AI agents are “risky actors” capable of causing harm without human-like intent, which calls for governance that emphasizes ex ante risk controls (audits, licensing, safety cases) instead of relying only on retrospective accountability. Arguments about whether AI is a “person” may be interesting to think about, but the issue that needs to be dealt with right away is usually governance: systems can do terrible things without being persons. The immediate legal question is not one of moral status but of responsibility: who answers for harm (developers, deployers, procuring states), and how can victims obtain redress?
Automated systems also create a “moral crumple zone”: when something goes wrong, the person closest to the output absorbs the blame, while the design decisions that shaped the system escape scrutiny.
There is a "alignment problem" with AI governance approaches as well. Disclosure, registration, licensing, and auditing all sound like they can be done, but how well they work relies on how easy they are to measure and how well institutions can handle them. If that layer of possibility is taken away, regulation might just be a sign; a lot of papers for not much safety.
While many analyses of AI focus more on risks than on opportunities, the potential of AI to robustly improve welfare, wealth, and sustainability, among larger societal goods, cannot be ruled out. Given that common interest, AI governance ought to look for workable solutions to pressing challenges.
II AI as Political Infrastructure
AI as Decision Infrastructure - AI’s governance relevance lies less in “intelligence” and more in delegation: once a model is plugged into a workflow, the system starts allocating benefits, burdens, suspicion, and credibility at scale. Human oversight often becomes symbolic because people tend to over-rely on automated recommendations, a dynamic Parasuraman & Riley describe as automation “misuse” and “abuse,” where decision aids quietly reshape human judgment rather than merely assisting it. Two public-law illustrations show why this matters. In State v. Loomis, a sentencing court relied on a proprietary risk assessment whose methodology was not disclosed to the defendant, spotlighting how automation can degrade contestability even when formal due-process boxes appear checked. And in Australia’s “Robodebt” scheme, automated debt calculations at scale produced systemic illegality and harm, later examined in detail by the Royal Commission, demonstrating what happens when automation becomes administration without robust institutional safeguards.
This is the deeper point Citron captures with “technological due process”: when code mediates state decisions, ordinary administrative safeguards (notice, reasons, meaningful review) can collapse unless deliberately rebuilt for automated systems.
AI as Informational Power - AI does not merely “process information”; it concentrates informational power by compressing messy reality into scores, predictions, and thresholds that are easily operationalized. This matters because what looks like neutral engineering often becomes “policy-by-parameters”: which variables count as signals, what error rates are tolerated, which populations are “high risk,” and where the decision threshold is set. Lessig’s core insight, often simplified as “code is law,” fits here: code architecture can regulate behaviour as effectively as formal legal rules, sometimes more quietly and more pervasively. Karen Yeung describes this as algorithmic regulation: systems that govern a domain by collecting data, generating knowledge, and refining interventions toward pre-specified goals, often in continuous feedback loops. The human-rights challenge is that “governance” can occur through technical design choices that are politically consequential yet procedurally insulated: the parameters shape the political landscape, while the people who set them go scot-free.
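To make “policy-by-parameters” concrete, consider a minimal, purely illustrative Python sketch of a risk-flagging pipeline; every name, score, and number below is hypothetical and does not describe any real system. Nothing in the code announces a policy, yet the single threshold parameter decides how many innocent people are flagged and how much fraud slips through.

```python
# Illustrative toy example: a risk-scoring pipeline where the decision
# threshold, nominally an "engineering parameter", is in fact a policy choice.
# All applicants, scores, and labels are hypothetical.

from dataclasses import dataclass

@dataclass
class Case:
    applicant_id: str
    risk_score: float      # model output in [0, 1]
    actually_fraud: bool   # ground truth, known only in hindsight

def flag_high_risk(cases: list[Case], threshold: float) -> dict:
    """Flag every case at or above the threshold and count both kinds of error."""
    flagged = [c for c in cases if c.risk_score >= threshold]
    false_positives = sum(1 for c in flagged if not c.actually_fraud)
    false_negatives = sum(1 for c in cases
                          if c.risk_score < threshold and c.actually_fraud)
    return {"flagged": len(flagged),
            "false_positives": false_positives,   # innocent people wrongly flagged
            "false_negatives": false_negatives}   # fraud the system misses

# Hypothetical applicants scored by some model.
cases = [
    Case("A", 0.92, True),
    Case("B", 0.81, False),
    Case("C", 0.77, False),
    Case("D", 0.80, True),
    Case("E", 0.30, False),
]

# The same model under two different settings: the lower threshold wrongly
# flags two innocent applicants; the higher one lets a fraud case through.
print(flag_high_risk(cases, threshold=0.75))  # {'flagged': 4, 'false_positives': 2, 'false_negatives': 0}
print(flag_high_risk(cases, threshold=0.90))  # {'flagged': 1, 'false_positives': 0, 'false_negatives': 1}
```

Moving one number from 0.75 to 0.90 redistributes error between innocent applicants and the defrauded exchequer; that is a distributional, and therefore political, choice dressed up as a configuration value.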
The SyRI litigation in the Netherlands makes the risk concrete. SyRI was used for welfare-fraud risk indication; the Hague District Court held that the legal framework governing SyRI violated higher law, including privacy protections under the ECHR, underscoring how data-driven risk systems can expand state capacity while weakening transparency and proportionality. The Bridges litigation makes the governance point vivid: the Court of Appeal scrutinised the lawfulness of police use of automated facial recognition through a bundle of constraints (legal basis, proportionality, and safeguards) rather than accepting “efficiency” or “innovation” as self-justifying. The lesson for policymakers is straightforward: where AI reshapes public power or materially affects individuals, legitimacy must be engineered through structured safeguards (clear rules, constrained discretion, documented reasoning, and redress), not promised through “ethics language”.
III AI in Critical Infrastructure and Prospective Remedies
To erase the line between man and machine is to obscure the line between men and gods. - From the movie “Ex Machina”
Automated decision systems don’t just “assist” governance; they re-allocate public power by turning eligibility, suspicion, and risk into outputs that are hard to challenge. This becomes a rule-of-law problem when the person affected cannot see the decisive features, cannot meaningfully contest them, and cannot identify the human decision-maker behind the machine. Scholars have warned that administrative agencies must not apply machine learning “cavalierly” without safeguards aligned with good-governance values.
The infrastructure risk is not only bias or opacity; it is speed. When complex systems make consequential choices on machine time, humans become supervisors of processes they cannot realistically interrupt. Financial markets already show the pattern: the “Flash Crash” literature is often read as a warning about feedback loops and automation interacting at high velocity, i.e., high-speed algorithms can amplify errors into sudden market crashes. In military and security contexts, the worry is sharper: autonomy can shorten deliberation windows and increase the likelihood of inadvertent escalation. The ICRC (International Committee of the Red Cross) has repeatedly emphasized the dangers of unpredictable effects and the need for meaningful human control over weapon systems.
AI governance is also distributional governance: who gets screened, priced, targeted, audited, hired, or denied, and who bears the errors. Barocas and Selbst’s foundational work shows how data-driven systems can generate disparate impact even without explicit discriminatory intent, because the pipeline learns from and reproduces structural inequalities. In labour markets, the IMF (International Monetary Fund) has warned that AI can reshape work and bargaining power in ways that widen inequality unless institutions adapt. And beyond economics, AI can become persuasion infrastructure. Yeung’s “hypernudge” captures this: unlike a one-time design choice (such as a default option), a hypernudge is a data-driven nudge that changes in real time, continuously personalizing what you see (rankings, prompts, recommendations) on the basis of your data and behaviour, steering decisions subtly but persistently. AI thus supplies the infrastructure for psychological persuasion at scale.
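As a minimal, purely illustrative sketch of that feedback loop (the scoring rule, weights, and item names below are assumptions for illustration, not a description of any real platform), a hypernudge can be pictured as an ordinary ranking function that re-weights itself after every interaction:

```python
# Illustrative "hypernudge"-style loop: the ranking is re-personalized after
# every click, so the choice architecture itself keeps shifting.
# The update rule and all numbers are hypothetical.

def rank(base_scores: dict[str, float], interests: dict[str, float]) -> list[str]:
    """Order items by base relevance weighted by the user's inferred interests."""
    return sorted(base_scores,
                  key=lambda item: base_scores[item] * interests.get(item, 0.1),
                  reverse=True)

def record_click(interests: dict[str, float], clicked: str, step: float = 0.5) -> None:
    """Shift the inferred interest profile toward whatever was just clicked."""
    interests[clicked] = interests.get(clicked, 0.1) + step

base_scores = {"news": 0.6, "shopping": 0.5, "politics": 0.4}
inferred: dict[str, float] = {}

print(rank(base_scores, inferred))      # ['news', 'shopping', 'politics']
record_click(inferred, "politics")      # a single interaction...
print(rank(base_scores, inferred))      # ['politics', 'news', 'shopping'] - the choice architecture has shifted
```

Unlike a static default, there is no fixed “design” for a regulator or user to inspect: what is presented tomorrow depends on what was clicked today, which is precisely what makes this form of influence persistent and hard to contest.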
The Industrial Revolution Lesson - What the Industrial Revolution teaches is not merely that transformative technology produces social harm, but that societies only began to control that harm once they built regulatory machinery capable of operating at the technology’s scale. Early factory reform moved beyond moral outrage by creating enforcement capacity: the UK’s Factory Act of 1833 is remembered as a turning point largely because it established a small factory inspectorate with powers to enforce compliance and impose penalties. Even though early enforcement was thin and widely evaded, the institutional template had been set.
Modern safety governance followed the same pattern. In the United States, the Occupational Safety and Health Act (1970) did not just announce a right to safer workplaces; it authorised the enforcement of standards, inspections, and penalties, explicitly building an administrative system to prevent harm rather than only compensating it afterward. The broader design point is that high-risk domains tend to shift toward ex ante assurance as they mature: aviation safety relies on safety-management systems and continuous risk management embedded into operations, not only post-crash blame. Nuclear safety uses layered safeguards (“defence in depth”) because single-point human intention is an unreliable barrier against complex failure. High-risk medical devices require rigorous premarket review oriented to safety and effectiveness, reflecting a similar “prove it before deployment” logic.
If AI is political infrastructure, then AI governance must borrow this institutional DNA: purpose constraints, measurable safety thresholds, independent oversight, incident reporting, and lifecycle monitoring; not because principles are wrong, but because principles without enforceable processes become symbolic compliance. Administrative-law scholarship on “regulating by robot” makes the same institutional point: government can use ML tools, but only if their deployment is disciplined by legality, reason-giving, accountability, and workable oversight structures.