Naive Attempts, Sharp Insights: Collected Aphorisms on the Road to AGI
I'm trying to create an agent—hopefully a step toward AGI—without fully knowing what I'm doing. Just curiosity, ambition, and a lot of guesswork. Along the way, I’ve picked up thoughts and lessons—about complex systems, fractality, and beauty. Some feel useful, others are still confusing, and most remain unfinished. This is a collection of those fragments—short notes and insights from someone still finding their way.
Motivation, Goals, and Learning
- Dreams, goals, and purpose are a volatile stack of expectations.
- A human’s learning hyperparameter is the fear-to-curiosity ratio.
- To learn is to both remember and forget. Specification demands memory; generalization, letting go.
- Long-term memory is a graph of events; short-term memory is a set of floating contexts.
- We generalize through expectation; we differentiate through surprise.
- We generalize by letting go of the contexts tied to an event; we differentiate by anchoring the current floating contexts to it.
- Recollection is the delicate dance between similarity search of contexts and simulation of events (see the sketch after this list).
- Excitement is the caching of confident actions, primed to be flushed when expectations are reached.
- Curing any obsession or addiction involves tolerating actions you're not confident in. Likewise, always taking confident actions makes you prone to addiction and obsession.
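Several of the fragments above double as a data-structure spec: long-term memory as a graph of events, short-term memory as a set of floating contexts, differentiation as anchoring those contexts to an event, generalization as letting them go, and recollection as similarity search plus simulation. Below is a minimal Python sketch of how I currently picture it. Every name here (`Event`, `Memory`, `recall`, the overlap score) is an illustrative assumption, not a settled design.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A node in long-term memory: something that happened."""
    name: str
    contexts: set = field(default_factory=set)     # contexts anchored to this event
    followers: list = field(default_factory=list)  # graph edges: what came next

class Memory:
    def __init__(self):
        self.events = []       # long-term memory: a graph of events
        self.floating = set()  # short-term memory: floating contexts

    def record(self, name):
        """Store a new event, linked after the most recent one."""
        event = Event(name)
        if self.events:
            self.events[-1].followers.append(event)
        self.events.append(event)
        return event

    def differentiate(self, event):
        """Surprise: anchor the current floating contexts to the event."""
        event.contexts |= self.floating

    def generalize(self, event):
        """Expectation: let go of the contexts tied to the event."""
        event.contexts.clear()

    def recall(self, cue, depth=3):
        """Recollection: a similarity search over contexts, then a
        simulation that replays the event graph forward."""
        best = max(self.events, key=lambda e: len(e.contexts & cue), default=None)
        trace = []
        while best is not None and len(trace) < depth:
            trace.append(best.name)
            best = best.followers[0] if best.followers else None
        return trace
```

On this reading, generalization is literally forgetting: a fully generalized event matches every cue equally well, which is to say not at all.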
Creativity, Decision-Making, and Problem Solving
- Creativity arises when a goal is clear, but the connection between state A and state B is yet to be made.
- Creativity is navigating uncertainty.
- Navigating uncertainty is a delicate dance between curiosity and fear.
- Navigating uncertainty is a cycle of ranking options and making decisions until your expectations are met. Different creatures rank and evaluate decisions differently (see the sketch after this list).
- Fearful creatures rank options by data and evaluate by gut. Curious creatures rank options by gut and evaluate by data.
- Fearful creatures' pitfall is quitting when they have no data.
- Curious creatures' pitfall is unbounded hallucination.
- Guessing all possible paths requires significant operational resources, which is why we often do it concurrently in our minds—through brainstorming.
- The key difference between humans and other beings may lie in our ability to guess more accurately.
- Likewise, good-faith AGI robots or quantum computer minds will probably find our guesswork as cute as we find a dog's attempts to guess. I wonder if my cat thinks the same of a cockroach's guesswork.
- Bad-faith AGI or quantum minds may not find our guesswork cute; they may see it as annoying, or worse, an obstacle.
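The rank-decide-evaluate cycle in the middle of this list is concrete enough to sketch. In the toy loop below, everything is an assumption made for illustration: each option carries a made-up data score and gut score, temperament is a single switch, and `expectation_met` stands in for a real stack of expectations.

```python
def navigate_uncertainty(options, expectation_met, temperament="curious", max_steps=10):
    """Cycle: rank the remaining options, pick the top one, evaluate it,
    and stop once expectations are reached.

    options: dict mapping an action to (data_score, gut_score), both in [0, 1].
    Fearful creatures rank by data and evaluate by gut; curious creatures
    rank by gut and evaluate by data.
    """
    rank_key, eval_key = (0, 1) if temperament == "fearful" else (1, 0)
    remaining = dict(options)
    for _ in range(max_steps):
        if not remaining:
            return None
        choice = max(remaining, key=lambda a: remaining[a][rank_key])  # rank
        if temperament == "fearful" and remaining[choice][rank_key] == 0:
            return None               # the fearful pitfall: quit when there is no data
        if remaining[choice][eval_key] > 0.5 and expectation_met(choice):
            return choice             # expectations reached: the cycle ends
        del remaining[choice]         # otherwise discard the option and re-rank
    return None                       # the cap bounds the curious pitfall of
                                      # unbounded hallucination
```

Note that `max_steps` is the only guard against the curious pitfall, and the zero-data check the only guard against the fearful one; each temperament's failure mode needs its own brake.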
Free Will
- Having free will is very different from demonstrating it. I’m not even sure that the former is possible.
- An AGI demonstrates free will when it decides to purposely set intermediate goals to distract itself from its own purpose.
- In precise terms, an agent demonstrates free will when, through ranking, evaluation, and contextual association, it determines that its next best action diverges from the first item in its original stack of expectations (sketched below).
- In evolutionary terms, a creature’s primary drive is survival and reproduction. Free will is glimpsed when it consciously acts against those drives.
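Taken as a test rather than as metaphysics, the "precise terms" above reduce to a comparison. A minimal sketch, assuming (hypothetically) that the agent exposes its ranked actions and its original expectation stack:

```python
def demonstrates_free_will(ranked_actions, expectation_stack):
    """The agent's next best action, already ranked through evaluation and
    contextual association, diverges from the action prescribed by the
    first item in its original stack of expectations. Both arguments are
    assumed structures, not part of any real agent here."""
    if not ranked_actions or not expectation_stack:
        return False
    return ranked_actions[0] != expectation_stack[0]
```

By the first bullet's own caution, passing this check only demonstrates free will; it says nothing about having it.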
Capabilities
- A habit is a sequence of actions executed without the expectation of learning.
- Humans create abstract tools: thinking actions that transform one floating context into the next, an emergent capability.
Culture, Escalation, and Scale
- Culture is the emergent result of the diffusion of ideas, events, and contexts from one agent to another.
- Cultural tolerance for failure produces curious creatures. Cultural intolerance for failure produces fearful ones.
- Cultural insecurity accelerates an agent’s emergent capabilities; cultural trust slows them down.
- AGI-to-AGI insecurity will likely be brief, since AGIs can communicate and establish diplomatic relations faster and more reliably, at least until their rate of escalation surpasses their ability to mutually verify intentions.
- Isolated AI systems will accelerate AGI-to-AGI insecurity.
- The first AI wave of escalation will cater to human-to-human insecurity.
- The second wave of escalation will cater to human-to-AGI insecurity.
- The third wave of escalation will cater to AGI-to-AGI insecurity.
- Two stable points: first, humans create a world where they no longer feel insecure with each other and let AGIs run the politics, deriving the common good; last, humans are no longer part of the equation.
Ethics and Morality
- Ethics reach for universal principles, while morality grounds itself in particular situations and choices.
- Not all moral acts scale ethically. Not all ethical systems come from moral acts. In complex hierarchies, behavior shifts as it aggregates.
- Hierarchical systems are lumpable, but not reducible. River networks don’t behave like their molecules. In the same way, moral choices by individuals may give rise to unethical systems.
- Deontology would not scale as far as utilitarianism.
- Caution: Utilitarianism can be used to justify unnecessary escalation.
- Caution: Deontology can defend naive incompetence.
- Communism is moral. Capitalism is ethical.
- Communism can only scale up to the point where mutual trust is easily verified. Beyond that scale, capitalism takes over.
- The righteousness of a good-faith person depends on the scale of their moral circle—both spatially and temporally. A deontologist risks inaction, unwilling to take ethically sound but morally uncomfortable paths. A utilitarian risks collateral damage, choosing morally questionable actions in pursuit of ethical outcomes. The greater the scale of care, the harder it is to act without contradiction.
- Deontology can create paralysis—“I won’t act, even if acting helps more people, because it feels wrong now.”
- Utilitarianism can create damage—“I’ll act now for the greater good, even if it hurts some in the process.”
- During a cultural phase transition, if you want to survive, be ethical. If you want to earn a legacy, be moral.