---
title: The Specification Dilemma
date: 2026-04-20
abstract: >
  As we approach AGI, the increasing ability of AI models to infer a robust
  specification from a sparse prompt will lead to a devastating trend of
  homogeneity. We argue that this, rather than blanket claims that "AI reduces
  human cognitive ability," is the primary concern regarding the interaction
  of AI and human intelligence.
tags:
  - ai
  - tech
status: Draft
confidence: 85
importance: 5
evidence: 3
scope: civilizational
novelty: idiosyncratic
practicality: high
confidence-history:
---

There are at least two distinct ways to reduce the search space over which AGI will have to operate. The first involves a harmonious interaction of agent and human: not transactional in origin, neither fully autonomous nor fully human-driven, but collaborative in nature. The agent augments the capacity of the human, just as any other good tool for thought does, by working within the scope of something well specified and carefully ideated. This is not to say that the agent can have no place in such planning, but rather that the human is ultimately the driver of the actions and tasks, defining the scope of what is to be done in as much detail as possible without being the one to actually do it.

The second is a starkly different picture: the human, who has only a vague idea of their own intentions and has not thought them over much, jumps straight into the work of creating via the agent, without a thought for the nature of their specification. The agent is forced to infer the majority of the details and make the majority of the decisions, while the human makes none. We may already be seeing this with vibe coding, but as we continue scaling to AGI, I foresee it happening widely across all sorts of domains^[Some have argued of late that ["only the humanities will survive"](TODO: find source), but I am not so optimistic. If AGI does interact with us in the latter reductive manner that I describe here, then the humanities will be stripped of anything that actually makes them human, at least for the majority of participants.].

These two represent diverging definitions of intelligence, both for the models and for their users, or, if you prefer, their collaborators. The first is a definition of intelligence that depends both on what one has the capacity to specify and on what one has the capacity to see through. The latter depends wholly on what one has the capacity to see through, and places even more emphasis on this metric than the first, for the recalibration and prompt adjustment required to rebuild a specification continuously throughout a task almost always cost more than paying the upfront price of developing a strong specification from the outset. We programmers have known this for years. The first future is clearly preferable, and the second, which seems to be the unfortunate reality we are racing towards, is not only a realization of the worst effect that AI could have on our cognition, but may also unnecessarily constrain the breadth of intelligence that AGI can achieve.

## What does "Autonomy" really mean?