---
title: "The Specification Dilemma"
date: 2026-04-20 # required; used for ordering, feed, and display
abstract: > # optional; shown in the metadata block and link previews
  We should not consider AI entities as mere tools, though they may be the raw foundation from which exceptional tools for thought are constructed to augment the human mind. Rather, we should consider AI as the ultimate distillation and consolidation of humanity's achievements - the ultimate progeny of our civilization.
tags: # optional; see Tags section
- ai
- tech
# Epistemic profile — all optional; the entire section is hidden unless `status` is set
status: "Draft" # Draft | Working model | Durable | Refined | Superseded | Deprecated
confidence: 100 # 0–100 integer (%)
importance: 5 # 1–5 integer (rendered as filled/empty dots ●●●○○)
evidence: 1 # 1–5 integer (same)
scope: civilizational # personal | local | average | broad | civilizational
novelty: idiosyncratic # conventional | moderate | idiosyncratic | innovative
practicality: moderate # abstract | low | moderate | high | exceptional
confidence-history: # list of integers; trend arrow derived from last two entries
---
TODO: block quote about Richard Feynman and the beauty of science - idea "it's more beautiful this way"
I have often felt there has been a loss of wonder from the world, and I lament this fact.
---
title: "The Specification Dilemma"
date: 2026-04-20 # required; used for ordering, feed, and display
abstract: > # optional; shown in the metadata block and link previews
  As we approach AGI, the increasing ability of models to infer a robust specification from a sparse prompt will drive a devastating trend toward homogeneity. We argue that this, rather than blanket claims that "AI reduces human cognitive ability," is the primary concern regarding the interaction of AI and human intelligence.
tags: # optional; see Tags section
- ai
- tech
# Epistemic profile — all optional; the entire section is hidden unless `status` is set
status: "Draft" # Draft | Working model | Durable | Refined | Superseded | Deprecated
confidence: 85 # 0–100 integer (%)
importance: 5 # 1–5 integer (rendered as filled/empty dots ●●●○○)
evidence: 3 # 1–5 integer (same)
scope: civilizational # personal | local | average | broad | civilizational
novelty: idiosyncratic # conventional | moderate | idiosyncratic | innovative
practicality: high # abstract | low | moderate | high | exceptional
confidence-history: # list of integers; trend arrow derived from last two entries
---
There are at least two distinct ways to reduce the search space over which AGI will have to operate. The first involves a harmonious interaction of agent and human: not transactional in origin, not fully autonomous nor fully human-driven, but collaborative in nature. The agent augments the capacity of the human, just as any other good tool for thought does, by working within the scope of something well specified and ideated upon. This is not to say that the agent cannot have a place in such planning, but rather that the human is ultimately the driver of the actions and tasks, defining the scope of what is to be done in as much detail as possible without being the one to actually do it.
The second is a starkly different picture: the human, who has only a vague idea of their own intentions and has not thought them over much, jumps straight into the work of creating via the agent, without reflecting on the nature of their specification. The agent is forced to infer the majority of the details and make the majority of the decisions, while the human makes almost none. We may already be seeing this with [Vibe Coding](https://en.wikipedia.org/wiki/Vibe_coding), but as we continue scaling to AGI, I foresee it happening widely across all sorts of domains^[Some have argued of late that ["only the humanities will survive"](TODO: find source), but I am not so optimistic. If AGI does interact with us in the latter reductive manner that I describe here, then the humanities will be stripped of anything that actually makes them human, at least for the majority of participants.].
These two represent diverging definitions of *intelligence*, both for the models and for their users - or, if you prefer, their collaborators. The first is a definition of intelligence that depends both on what one has the capacity to specify and what one has the capacity to see through. The latter depends wholly on what one has the capacity to see through, and places even more emphasis on this metric than the first, for the amount of recalibration and prompt adjustment necessary to build a specification continuously throughout the duration of a task is always greater than paying the upfront cost of developing a strong specification from the outset. [We programmers have known this for years](https://en.wikipedia.org/wiki/Hofstadter%27s_law). The first future is clearly preferable; the second, which seems to be the unfortunate reality we are racing towards, is not only a realization of the worst effect that AI could have on our cognition, but may also unnecessarily constrain the breadth of intelligence that AGI can achieve.
## What does "Autonomy" really mean?