
Algorithmic Accountability Without Algorithms: A Technical Response


Written By Chirkankshit Bulani (Rajiv Gandhi National University Of Law) and Shivansh Bajpai



Introduction


A recent blog piece published on the Tech Law Forum at NALSAR University addressed the algorithmic manipulation of political information and proposed a listener-centric approach to accountability. While the normative concerns raised in that blog undoubtedly warrant serious attention, this response identifies technical deficiencies that hamper the operationalization of its framework as regulation.

This response limits itself to technological claims and concedes the constitutional arguments. It argues that when legal scholarship proposes regulation of technical systems, the proposal must satisfy three prerequisites: first, a coherent description of how the system functions; second, a clear causal model linking conduct to harms; and third, operational definitions that enable enforcement. The original piece falls short on these prerequisites, not through appropriate legal abstraction, but through technical claims that are internally contradictory or empirically unsupported.

This response takes up three issues: first, the logical tension in treating algorithmic systems as simultaneously opaque and manipulable; second, the absence of any mechanistic explanation; and third, the failure to define “manipulation” in enforceable terms. Addressing these problems would satisfy the prerequisites set out above for technically sound regulation.


I. The Opacity-Exploitability Tension


The Problem Stated


The original piece repeatedly describes algorithms as “black boxes”, calling them opaque systems whose operations resist accountability. At the same time, it asserts that political actors engage in “manipulation” through strategic conduct, using "bots, algorithmic information curation systems, profile optimisation, organised trolls, and account buying." This creates a tension with respect to liability.


Intentional manipulation, by definition, requires that actors predict system responses, develop causal models of outcomes, and test strategies against feedback. These prerequisites demand that a system have “functional transparency”: input-output correlations that are observable and learnable through experimentation. If the systems are genuinely functionally opaque, “intentional manipulation” is the wrong terminology; what occurs is accidental discovery.


The issue is not whether exploitation through trial and error is possible. Rather, it is whether actors possess the predictive knowledge necessary to establish the required mens rea. Intentional exploitation of known vulnerabilities is very different, both legally and ethically, from accidentally learning what works.


Distinguishing Forms of Opacity


Computer science currently distinguishes three types of opacity. The first is technical opacity, where computational complexity makes decision paths difficult to trace. The second is institutional opacity, where organizations withhold system design information, often for competitive reasons. The third is functional opacity, where even after observing inputs and outputs one cannot predict system behaviour.


Crucially, political actors need not understand implementation details to use systems strategically. What they require is functional transparency: the ability to learn observable patterns through use. Search engine optimization is a useful example: users systematically improve rankings not by reading the algorithm's code but through experimentation. The algorithm is opaque in the sense the original article invokes, yet it is functionally learnable at the same time.
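A minimal sketch of this dynamic, using an invented scoring rule and invented feature names purely for illustration: the ranker's code stays hidden, yet an actor who experiments with content variants can still learn which observable features raise reach.

```python
import random

random.seed(0)

def hidden_ranker(features):
    # Hidden scoring rule the actor never sees; the noise term stands in for
    # everything unpredictable about real systems. Weights are invented.
    return 3.0 * features["emotional_tone"] + 1.0 * features["length"] + random.gauss(0, 0.5)

def random_variant():
    # A content variant described only by features the actor can observe.
    return {"emotional_tone": random.random(), "length": random.random()}

# The actor experiments: generate variants, observe the reach each receives.
variants = [random_variant() for _ in range(200)]
scored = [(v, hidden_ranker(v)) for v in variants]

# Crude learning step: compare average reach of high- vs low-emotion variants.
high = [s for v, s in scored if v["emotional_tone"] > 0.5]
low = [s for v, s in scored if v["emotional_tone"] <= 0.5]
print(f"avg reach, high-emotion variants: {sum(high) / len(high):.2f}")
print(f"avg reach, low-emotion variants:  {sum(low) / len(low):.2f}")
# The actor never reads the ranker's code, yet reliably learns that emotional
# tone raises reach: the input-output pattern is learnable through use alone.
```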


The original article conflates institutional opacity (platforms remaining opaque to regulators) with functional opacity (the impossibility of learning through use). This obscures the precise regulatory challenge. If systems are truly functionally opaque, the problem lies in platform design that enables inadvertent exploitation, and the solution is not actor liability but design standards. A similar approach is reflected in the EU Digital Services Act, which requires assessment of systemic risks without full transparency. Whether EU regulation can be directly applied to the Indian context is debatable, but that discussion is outside the scope of this blog.


Implications for Liability Design


Legal liability ordinarily requires a culpable mental state: intention, knowledge, recklessness, or negligence. However, if actors cannot predict system responses because the system is genuinely opaque, the intent to manipulate cannot be formed. The framework in the original piece therefore enters a contradiction: it seeks to assign liability for intentional exploitation while simultaneously arguing that the systems are unknowable.


The piece acknowledges this: "Although the role of algorithms in disseminating harmful content is significant, it is often complemented by human manipulation." This framing treats algorithmic functioning and human intent as separate. But if manipulation is intentional, they are not separate; intent must be formed about how the system behaves. The question is whether actors exploited learned system behaviours, which requires functional transparency, or merely created content that happened to perform well under unpredictable criteria, which would be non-culpable conduct.


Unless this tension is resolved, any framework for assigning liability rests on an unsound foundation, because it cannot establish the mental state and conduct that legal liability requires.


II. The Missing Mechanism


No Specification of Technical Process


The original piece describes “algorithmic manipulation” only inadequately. It does not identify the actors' inputs, the system's process, or how outputs are generated, which prevents the causal attribution needed for proportionate liability. On inputs, it does not explain whether actors optimize content features, generate artificial engagement through coordination, or purchase targeted advertising. Each form of conduct has different legal implications, yet all are treated alike under the “manipulation” umbrella.

On processes, it does not specify which signals the algorithms actually weigh: engagement velocity, sentiment, or network structure. Nor does it distinguish between supervised and reinforcement learning, even though research shows that approaches differ across platforms. Without such specification, both over-regulation and under-regulation become likely. Further, the piece does not clarify whether algorithms amplify existing organic reach multiplicatively or create exposure that would not occur through social networks alone. The answer matters: if algorithms only scale content, their causal role differs from that of creating new exposure.
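To make the point concrete, here is a hedged sketch of the kind of specification the critique is asking for. The signal names and weights are assumptions for illustration only, not any real platform's formula; the point is that which signal dominates determines which conduct could even influence the ranking.

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    engagement_velocity: float  # 0..1, interactions per hour shortly after posting (normalized)
    sentiment_intensity: float  # 0..1, strength of emotional language
    author_centrality: float    # 0..1, author's position in the follow graph

def rank_score(s: PostSignals, w_velocity=0.5, w_sentiment=0.3, w_network=0.2) -> float:
    # Hypothetical composite score. If velocity dominates, buying engagement is
    # the relevant conduct; if sentiment dominates, emotional framing is; if
    # network structure dominates, cultivating influential accounts is.
    return (w_velocity * s.engagement_velocity
            + w_sentiment * s.sentiment_intensity
            + w_network * s.author_centrality)

# Same content strategy, very different scores depending on the signal mix.
print(rank_score(PostSignals(engagement_velocity=0.9, sentiment_intensity=0.2, author_centrality=0.1)))
print(rank_score(PostSignals(engagement_velocity=0.1, sentiment_intensity=0.9, author_centrality=0.1)))
```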


Causal Structure Unspecified


While the blog mentions the strategic use of algorithmic systems, it does not specify the model through which algorithms causally contribute. Three models are possible. The first is amplification: algorithms multiply organic spread, making them scaling factors rather than independent causes, and regulatory intervention would target amplification coefficients and viral thresholds. The second is gatekeeping: algorithms determine exposure and are necessary conditions for content to spread, so intervention would target ranking criteria and inclusion decisions. The third is mediation: algorithms shape exposure, but humans determine effects; content reaches audiences through the algorithm, yet humans shape the environments in which it is received, leaving it unclear whether the law should target exposure or interpretation.
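The first two models can be contrasted in a short illustrative sketch with invented numbers: under amplification the algorithm scales reach that already exists, while under gatekeeping it decides whether exposure occurs at all, which is why the appropriate regulatory target differs.

```python
def amplification_model(organic_reach: int, coefficient: float = 3.0) -> int:
    # Algorithm as a scaling factor: regulation would target the coefficient
    # and viral thresholds, not the existence of exposure.
    return int(organic_reach * coefficient)

def gatekeeper_model(organic_reach: int, ranking_score: float, threshold: float = 0.7) -> int:
    # Algorithm as a necessary condition: regulation would target the
    # inclusion decision (ranking criteria and the admission threshold).
    return organic_reach if ranking_score >= threshold else 0

print(amplification_model(1000))                          # 3000: reach scaled, not created
print(gatekeeper_model(1000, ranking_score=0.65))         # 0: excluded, no exposure at all
print(gatekeeper_model(1000, ranking_score=0.80))         # 1000: admitted into feeds
```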


Liability would differ depending on which model is at play and should be proportionate to involvement. Without specifying which model applies, the framework cannot determine how much causal responsibility lies with which actor. Research shows that content spread depends on a complex web of factors, including content features, network structure, platform affordances, and user behaviour. Attributing spread to algorithms without specifying which elements are algorithmic and which are social is both a technological oversimplification and poor regulatory design.


Empirical Assumptions Unexamined


The piece argues that "Political disinformation travels to a larger audience within a few hours due to an emotional pinch," suggesting that emotional content spreads rapidly because algorithms amplify it. This is contested in the empirical literature: research has shown that the rapid spread of false news is driven primarily by human sharing rather than algorithmic recommendation. It also suggests, contrary to the blog's assumptions, that algorithmic recommendations can increase exposure diversity and that misinformation consumption is usually concentrated among a few user segments.


The blog also treats filter bubbles as established fact, when they are equally contested. Building regulatory responses on such premises without acknowledging the risks arising from this uncertainty is not an appropriate solution. If systems primarily surface content that reflects existing user preferences rather than creating new ones, a reasonable regulatory response would address factors such as media literacy rather than algorithmic curation.


III. Manipulation Without Operational Definition


The Definitional Problem


The blog piece uses the word “manipulation” several dozen times, yet it never specifies when legitimate political communication becomes prohibited manipulation. That distinction is a strict prerequisite for enforceable regulation. There are four widely used definitions, each with different regulatory consequences.


The first is deception-based: manipulation means creating false beliefs through hidden influence. This definition permits personalized targeting so long as it is transparently disclosed. The second is autonomy-based: cognitive biases are exploited to bypass rational deliberation. The test is whether the technique undermines agency, which could support prohibiting certain persuasion methods. The third is intent-based: systems are used contrary to their stated purpose. This raises a question: if a platform is built for engagement and a creator makes engaging content, is that really misuse? The fourth is outcome-based: manipulation is whatever distorts the informational environment. But who decides what counts as distorted, and against what baseline? This imports normative choices into technical measures.


The blog does not clarify which definition it relies on, creating uncertainty along each of these dimensions.


The Enforcement Problem


Consider this passage: "Political actors benefit from the audience's existing bias by tailoring their content to these identities through personalisation." Applying the four definitions above raises significant concerns. Under a deception-based definition, personalization is not deceptive if targeting methods are disclosed. Under an autonomy-based definition, does personalization really affect deliberation more than traditional politics does? Rhetorical adaptation is not new to politics; it is foundational to democratic discourse.


Under an intent-based definition, if targeting tools are offered by the platform, political actors are using the system as designed, not exploiting unintended functionality. Under an outcome-based definition, personalization may reduce exposure diversity, but it can also increase political information among previously disengaged citizens; it must be specified which outcome separates communication from manipulation. The blog treats personalization as inherently manipulative but fails to explain when and why, leaving every political campaign under suspicion.


Why This Prevents Implementation


Operational definitions make enforcement possible by specifying, in observable terms, the conduct that is prohibited, the mental state required, and the specific causal connection between that conduct and the harm. As the blog stands, its notion of “algorithmic manipulation” would sweep in purchasing targeted advertising (a standard commercial practice), A/B testing messages, creating emotional content, and coordinating fake accounts for false engagement, all under one label. Without distinguishing these practices, enforcement collapses into arbitrary discretion: regulators would declare conduct manipulative post hoc rather than applying ex ante standards.
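By way of contrast, an operational rule can be expressed as observable, ex ante criteria. The sketch below encodes invented placeholder thresholds for one practice, coordinated fake-account engagement; it is not a proposed legal standard, only an illustration that a disclosed advertising buy or an A/B test would not meet such criteria while a bot-driven burst from throwaway accounts would.

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    accounts_sharing_within_60s: int    # accounts posting identical content within one minute
    median_account_age_days: float      # how established the posting accounts are

def flags_coordinated_inauthentic_behaviour(c: Campaign) -> bool:
    # Observable conduct criteria applied ex ante, rather than a post hoc
    # judgment that a given piece of content was "manipulative".
    burst_coordination = c.accounts_sharing_within_60s >= 50
    throwaway_accounts = c.median_account_age_days < 30
    return burst_coordination and throwaway_accounts

# A bot-driven engagement burst from new accounts meets the criteria;
# a disclosed ad buy or an A/B test of messaging does not.
print(flags_coordinated_inauthentic_behaviour(
    Campaign(accounts_sharing_within_60s=120, median_account_age_days=5)))    # True
print(flags_coordinated_inauthentic_behaviour(
    Campaign(accounts_sharing_within_60s=3, median_account_age_days=400)))    # False
```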


Conclusion


This response has identified three crucial technical shortcomings. The first is a liability-relevant tension: the framework treats systems as opaque yet intentionally manipulable by political actors, and this tension must be resolved before mens rea can be established. The argument is not that no exploitation occurs, but that the framework cannot ground it sufficiently. The second is a mechanistic vacancy: the blog offers no account of the inputs, processes, or causal chains linking algorithms to harms. This is not a demand for engineering specifications, but for some mechanistic account to support the claimed link; without it, a technically sound framework is impossible. The third is a definitional gap: without a chosen, operational definition of manipulation, prohibited and permitted conduct cannot be distinguished, making enforcement arbitrary. These gaps are not minor; they affect the feasibility of the regulation itself.


This critique does not dismiss concerns about computational propaganda, which the authors concede are real and warrant serious attention. Nor does it argue against the listener-centric approach, which is a normative question. The claim is narrower: the framework suggested in the blog would fail in implementation because of its internal contradictions.


A sound framework would not require engineering-level detail, but it would require coherent descriptions of the implicated algorithms, specific causal pathways, enforceable criteria through operational definitions, and acknowledgement of empirical uncertainty. These requirements reflect that when law regulates technical systems it must support implementation: it can maintain legal abstraction, but it cannot afford technical inaccuracy.


Alternatives might include transparency mandates for platform disclosures, disclosure requirements for computational propaganda services, and standards for coordinated inauthentic behaviour. Each, however, must be grounded in technical accuracy to be implementable. Before allocating liability, the onus lies on legal scholarship to explain what occurred, through what mechanisms, and why it constitutes a cognizable harm. This is not an overreaching requirement; it is the foundation of enforceable, proportionate regulation that distinguishes legitimate from illegitimate conduct without arbitrary discretion.


