auto: 2026-05-02T13:07:26Z [skip ci]
This commit is contained in:
parent 0fc9fec708
commit b12f6cc387
@@ -22,7 +22,7 @@ practicality: moderate
 confidence-history:
 ---
 
-There are at least two distinct ways to reduce the search space over which AGI will have to operate. The first involves a harmonious interaction of agent and human, not transactional in origin, not fully autonomous nor fully human-driven, but rather collaborative in nature - the agent augments the capacity of the human, just as any other good tool for thought does, by working within the scope of something well specified and ideated upon. This is not to say that the agent cannot have a place in such planning, but rather that the human is ultimately the driver of the actions and tasks, defining the scope of what is to be done in as much detail as possible without being the one to actually do it.
+There are at least two distinct ways to reduce the search space over which AGI^[The definition of "Artificial General Intelligence", or whether such a definition exists, is contentious. My use of the term is not intended to endorse any proposed timeline for AGI, nor to suggest that it is inevitable. It is rather to provide calibration through a hypothetical goal that clearly justifies pursuit.] will have to operate. The first involves a harmonious interaction of agent and human, not transactional in origin, not fully autonomous nor fully human-driven, but rather collaborative in nature - the agent augments the capacity of the human, just as any other good tool for thought does, by working within the scope of something well specified and ideated upon. This is not to say that the agent cannot have a place in such planning, but rather that the human is ultimately the driver of the actions and tasks, defining the scope of what is to be done in as much detail as possible without being the one to actually do it.
 
 The second is a starkly different picture: the human, who only has a vague idea of their own intentions and has not thought over this much, jumps straight into the work of creating via the agent, without thought on the nature of their specification. The agent is forced to infer the majority of the details, make the majority of the decisions, and the human makes none. We may already be seeing this with [Vibe Coding](https://en.wikipedia.org/wiki/Vibe_coding), but as we continue scaling to AGI, I foresee it happening widely across all sorts of domains^[Some have argued of late that "only the humanities will survive", but I am not so optimistic. If AGI does interact with us in the latter reductive manner that I describe here, then the humanities will be stripped of anything that actually makes them human, at least for the majority of participants.].
 