Why Regulating AI Feels Premature
Not rules, but regulation as a control system in which humans remain responsible for defending their own dignity
There is a growing consensus—at least in public discourse—that artificial intelligence must be regulated. Governments are drafting frameworks while simultaneously pouring all available resources into winning the AI wars. The sense of urgency is understandable, but urgency should not be confused with readiness.
The question I want to explore is a simple one, though its implications are not:
What does it mean to regulate a system whose failure modes have not yet stabilized into recognizable patterns?
Let’s unpack this briefly to set some context for the argument. The word regulation derives from the Latin regula, “rule, straight piece of wood,” with derivatives meaning “to direct in a straight line,” thus “to lead, rule” (https://www.etymonline.com/word/regulation). Regulation, therefore, implies the definition of a measuring stick: something that unequivocally gives a definite direction.
An example of a simple regulation is this: in soccer (or football, as only the true connoisseurs of the sport know it), if the ball fully clears the goal line, a goal is awarded; if there is even a slight overlap between the ball and the line, the goal is not awarded. This is a clear-cut rule, and in fact it has been automated via what is called goal-line technology, which sends a signal to the referee’s smartwatch: no need to debate it. At the opposite extreme sits the offside rule (which I won’t even try to reproduce here, given its length and complexity). Despite all technological advances, offside decisions still require Video Assistant Referee (VAR) review, and possibly on-field review, because they involve elements of human judgment. The offside rule is probably the single rule that has changed the most over time: it is complex, non-linear, and feels almost arbitrary. And do not get me started on the arbitrariness of penalty-kick rules!
Therefore, given the complexities of AI as a system, it seems wise to treat regulation not primarily as an absolute moral statement or a legal artifact, but as a control system designed to constrain unethical behavior and correct deviations over time.
Regulation as a Control System
In engineering, a control system presupposes a few basic elements: a system whose behavior can be observed; a notion of undesirable states; a feedback mechanism capable of correction; and a model—explicit or implicit—that links intervention to outcome.
Regulation plays an analogous role in social systems. Laws, norms, and enforcement mechanisms form feedback loops meant to discourage certain behaviors and encourage others. When they work, regulations reduce volatility and channel activity into relatively stable regimes.
But control systems function only to the extent that the system being controlled is understood. Poor models lead to unstable control. Excessive gain produces oscillations; insufficient gain leads to drift. And premature control—applied before the relevant variables have even been identified—tends to constrain the wrong degrees of freedom.
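To make the gain metaphor concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: a one-dimensional system, a proportional controller, and hand-picked gain values; it is a toy, not a model of any real regulatory process.

```python
# A minimal sketch, illustrative numbers only: a proportional controller
# nudges a simple one-dimensional system toward a setpoint by applying
# a correction proportional to the observed error.

def simulate(gain: float, steps: int = 12, setpoint: float = 1.0) -> list[float]:
    """Return the state trajectory under proportional feedback."""
    state = 0.0
    trajectory = []
    for _ in range(steps):
        error = setpoint - state
        state += gain * error              # corrective action
        trajectory.append(round(state, 3))
    return trajectory

print("balanced gain    :", simulate(gain=0.5))   # settles smoothly near 1.0
print("excessive gain   :", simulate(gain=2.1))   # overshoots, oscillates harder each step
print("insufficient gain:", simulate(gain=0.05))  # barely moves: drift
```

A well-chosen gain makes the error decay; too much gain makes the state overshoot and oscillate with growing amplitude; too little leaves the system wandering far from the target. The regulatory analogues are not hard to imagine.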
Some argue that AI’s general-purpose nature makes harm intrinsic rather than contingent: misinformation, surveillance, or weaponization would then not be accidental side effects, but natural expressions of the technology itself. That may turn out to be true. Yet distinguishing intrinsic from extrinsic harms is itself an empirical problem. It requires observing how such systems behave across contexts, incentives, and constraints—not merely extrapolating from first impressions.
This is the perspective from which I want to examine AI regulation.
A Blank Slate: Regulating Theft Before Theft Exists
Consider this deliberately simple analogy.
Imagine a blank-slate primitive society with people, possessions, and interactions—but no concept of theft, no legal prohibition against it, and no shared notion of justice associated with it.
Now imagine what we might call patient zero: the first act of theft.
Person A takes something belonging to person B. Person B is upset and appeals to the community authority—person C—seeking justice. But there is a problem: there is no concept of theft (yet), no category under which the grievance can be processed. There is only a complaint and a request for intervention.
At this stage, several things are true:
An act has occurred, but it is isolated.
There is no evidence of recurrence.
There is no pattern from which to generalize.
Any response will necessarily be ad hoc.
Now, suppose the act never repeats. Months pass. Years pass. No second theft occurs.
In such a world, constructing a comprehensive regulatory framework for theft would be difficult to justify. A control system designed to suppress a non-recurring phenomenon is not merely unnecessary; it risks distorting behavior by solving a problem that does not, in fact, exist.
Regulation Requires Patterns, Not Incidents
Regulation does not arise from isolated events. It arises from patterns. Theft law exists not because theft happened once, but because it happened repeatedly, in varied forms, across contexts that forced societies to draw distinctions.
Over time, legal systems learned to differentiate:
opportunistic versus premeditated theft,
theft by force versus theft by deception,
physical theft versus digital theft,
fraud, embezzlement, and accounting manipulation.
These distinctions were not imagined in advance; they were extracted from experience. They emerged through failures of earlier rules, exploitation of loopholes, and the gradual accumulation of cases that made certain variables salient and others irrelevant. New distinctions continue to be introduced as new technologies advance.
Crucially, many forms of theft could not even be conceived until the systems enabling them existed. Digital theft presupposes digital infrastructure. Financial fraud presupposes complex accounting. Regulatory categories followed the realized space of behavior, not the hypothetical one.
This matters because regulation carries costs that go beyond money. Every rule encodes assumptions about incentives, failure modes, and which variables deserve attention. When those assumptions are incomplete or mistaken, regulation does more than fail to prevent harm: it misdirects attention toward the wrong factors, obscures the system’s actual dynamics, and locks in unhelpful abstractions.
The AI Parallel
Now return to artificial intelligence.
We are no longer at absolute “patient zero,” but we are still in an early and unstable phase. Biased decision systems, large-scale scams, deepfakes, and brittle automation in high-stakes settings represent real harms that have already been identified. But the distribution of these harms remains noisy, context-dependent, and tightly coupled to specific implementations that change rapidly.
Much of what is cited in support of sweeping regulation still falls into three categories:
speculative future harms,
isolated or poorly characterized incidents,
harms more plausibly attributed to human misuse than to autonomous system behavior.
Critics rightly point out that some patterns are emerging—particularly in misinformation and fraud. But even here, the regularities are weak, the mechanisms contested, and the technological substrate in constant flux. We are attempting to regulate a system whose effective degrees of freedom are still being discovered.
The Problem of Hypothetical Cases
The standard response is that regulation must anticipate not only what has happened, but also what could happen: the infamous unknown unknowns.
This sounds prudent, but it conceals a difficulty: hypothetical cases are unconstrained by reality. They proliferate combinatorially and tend to privilege abstract possibilities over empirically grounded risks.
In control-theoretic terms, this is equivalent to designing a controller for a system whose dynamics are largely speculative. The result is often an overbroad constraint applied to the wrong variables.
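To illustrate, in the same toy Python style as before (every quantity here is hypothetical, chosen only to exhibit the failure mode), suppose a regulator tunes its controller against an assumed harm model and clamps the variable that model highlights, while real harm is driven by something else:

```python
# A toy illustration, all numbers hypothetical: the regulator’s model says
# harm is driven by an observable proxy, so the control action clamps the
# proxy. In the true system, harm is mostly driven by a different variable.

def assumed_harm(proxy: float, activity: float) -> float:
    return 2.0 * proxy                       # the speculative model

def true_harm(proxy: float, activity: float) -> float:
    return 0.1 * proxy + 1.5 * activity      # the actual dynamics

proxy, activity = 5.0, 5.0
initial = true_harm(proxy, activity)
for _ in range(10):
    proxy -= 0.8 * proxy    # heavy constraint on the modeled variable...
    activity *= 1.1         # ...while the real driver keeps growing

print(f"assumed harm: {assumed_harm(proxy, activity):.2f}")   # ~0.00: looks solved
print(f"true harm   : {true_harm(proxy, activity):.2f} (was {initial:.2f})")
```

The controller reports success on the variable it was designed around, while the quantity that actually matters grows unchecked.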
Premature Regulation and Misaligned Control
When regulation is applied too early, several predictable failure modes appear:
regulation targets proxies (such as model size or architecture) rather than behaviors;
transient design choices become legally entrenched;
actors optimize for compliance rather than safety;
beneficial variation and exploration are suppressed.
It is true that delayed regulation can allow harms to become entrenched. Yet premature regulation carries a comparable risk: rules that are primarily symbolic, giving the appearance of control while exerting little real influence on outcomes.
Feedback Is Not Optional
Effective regulation depends on feedback. Rules must be tested against behavior, revised in response to circumvention, and adjusted as systems evolve.
AI complicates this because the system is changing faster than the regulatory feedback loop can close. Delayed correction applied to a fast-moving target is a classic recipe for instability. Meaningful feedback requires scale, time, and repeated interaction with failure.
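Extending the earlier toy controller (again, purely illustrative numbers), we can watch what delay alone does to an otherwise stable loop:

```python
# The same proportional loop as before, but the controller now acts on a
# measurement that is `delay` steps old while the system keeps moving.

def simulate_with_delay(gain: float, delay: int, steps: int = 40) -> float:
    state, setpoint = 0.0, 1.0
    history = [state] * (delay + 1)            # stale readings seen by the loop
    for _ in range(steps):
        observed = history[0]                  # oldest available measurement
        state += gain * (setpoint - observed)  # correction based on old data
        history = history[1:] + [state]
    return abs(setpoint - state)               # final tracking error

for delay in (0, 2, 5):
    print(f"delay={delay}: final error = {simulate_with_delay(0.8, delay):.3f}")
```

A gain that is perfectly stable with fresh feedback becomes destabilizing once the loop closes too slowly relative to the system’s own motion. That is roughly the position a slow regulatory process occupies relative to a fast-moving technology.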
A More Modest Starting Point
If regulation is understood as a control system, then the appropriate early intervention may not be a heavy constraint, but improved instrumentation:
better measurement of real-world harms,
clearer attribution of responsibility,
transparent reporting of failures,
taxonomies that evolve with observed behavior rather than speculation.
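As a concrete sketch of that last point (the field names and categories below are entirely hypothetical), incidents can be recorded with open-ended labels, and the taxonomy derived from whatever actually recurs rather than fixed in advance:

```python
# A deliberately small sketch; every field name and category is hypothetical.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Incident:
    """One observed harm, recorded before any fixed taxonomy exists."""
    description: str
    context: str                                      # deployment setting
    labels: list[str] = field(default_factory=list)   # tags added on review

log = [
    Incident("chatbot invented a legal citation", "law", ["fabrication"]),
    Incident("cloned voice used in a phone scam", "finance", ["impersonation", "fraud"]),
    Incident("fake endorsement video of a public figure", "media", ["impersonation"]),
    Incident("screening model rejected qualified applicants", "hiring", ["bias"]),
]

# Categories earn their place by recurring in the record,
# instead of being legislated before any pattern exists.
counts = Counter(label for incident in log for label in incident.labels)
recurring = sorted(label for label, n in counts.items() if n >= 2)
print(recurring)   # ['impersonation']: the first category to stabilize
```

Nothing in this sketch forecloses stronger intervention later; it simply lets categories be extracted from observed behavior, much as theft law extracted its distinctions from accumulated cases.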
This approach is often criticized as insufficient or overly deferential to industry. Perhaps it is. We are not arguing that AI should roam free and wipe out all human civilization: we are arguing that, without reliable data and a body of case law, stronger control is not just difficult; it is likely to be misguided.
Historically, durable regulatory frameworks are discovered through iteration, not imposed ex nihilo. AI is unlikely to be an exception. The real danger may not be regulating too little, but regulating too confidently, too early, and on the basis of assumptions that have not yet earned their authority.
In a future post, we will continue to refine the arguments presented in this analysis and, hopefully, propose a framework whose solutions first and foremost respect the dignity of the human being.