An essay written by Vernor Vinge in 1993, predicting a technological singularity by 2030
https://edoras.sdsu.edu/~vinge/misc/singularity.html
- the potential pathways
- The development of computers that are “awake” and superhumanly intelligent. (if this is possible, then there is little doubt that beings more intelligent can be constructed shortly thereafter)
- Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
- Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
- Biological science may find ways to improve upon the natural human intellect.
- the singularity
- machines more intelligent than humans design even more intelligent machines, a runaway feedback loop (toy model sketched after this block)
- superhumanity
- the first ultraintelligent machine will be humanity’s last invention (Vinge is quoting I. J. Good here)
- ideas will spread incredibly fast; even the most radical ones will quickly become commonplace
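- side note (mine, not Vinge’s): the “last invention” loop is basically a geometric series. If each generation is some fixed factor smarter and designs its successor proportionally faster, the total design time converges to a finite horizon. A toy sketch with made-up numbers:

```python
# Toy model (my illustration, not from the essay): generation g+1 is
# `gain` times more capable than generation g, and a smarter designer
# finishes its successor proportionally faster. The design times form a
# geometric series, so capability diverges within a finite horizon;
# that finite-time blowup is the "singularity" metaphor.

def intelligence_explosion(gain=2.0, first_design_years=10.0, generations=12):
    capability, year = 1.0, 0.0
    for g in range(generations):
        year += first_design_years / capability  # smarter -> faster design
        capability *= gain
        print(f"gen {g + 1:2d}: year {year:6.2f}, capability {capability:8.1f}x")
    # Total time converges to first_design_years * gain / (gain - 1),
    # i.e. 20 years here, no matter how many generations you add.

intelligence_explosion()
```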
- people’s potential reactions
- will they accept it? it’s reasonable to doubt superhuman AI is possible, since current hardware doesn’t come close to matching the human brain’s capabilities
- technological unemployment creeps in gradually, with machines automating higher- and higher-level jobs
anti-singularity arguments
- interesting phrase: Golden age that becomes the end of progress
- actually very interesting to think about… kind of haunting, but also calming; it’s better than the other scenarios
- critics
- Roger Penrose - argues human consciousness is non-computational; it might depend on quantum mechanics and can’t be replicated by algorithms
- John Searle - famous for the “Chinese Room” argument: a computer might simulate understanding without actually having it
- some thought we were 3 orders of magnitude away from brain-level hardware, but some thought 10 orders (quick arithmetic after this list)
- 3 orders = 1,000x less powerful than the brain → kind of close
- 10 orders = 10,000,000,000x less powerful → nowhere near close
- if this is true, then the Singularity won’t happen, although we can still get cool digital tech - fast processors, good signal processing - but no sentient machines/post-human transformation
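- quick arithmetic on those estimates (my numbers; the doubling period is an assumption, not from the essay): closing k orders of magnitude takes k·log2(10) ≈ 3.3k doublings of compute

```python
import math

# Back-of-the-envelope (my arithmetic, not Vinge's): how long to close a
# hardware gap of k orders of magnitude if compute doubles every
# `doubling_years` years? 10^k = 2^(k * log2(10)), so count doublings.

def years_to_close_gap(orders_of_magnitude, doubling_years=1.5):
    doublings = orders_of_magnitude * math.log2(10)
    return doublings * doubling_years

for k in (3, 10):
    print(f"{k} orders (~{10 ** k:,}x) ≈ {years_to_close_gap(k):.0f} years")
# 3 orders  -> ~10 doublings -> ~15 years: "kind of close"
# 10 orders -> ~33 doublings -> ~50 years: "nowhere near close"
```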
but here are the big ideas
- You can’t safely “box” something that’s smarter and faster than you. It’ll out-think you.
- strong superhumanity VS weak superhumanity
- Most people imagine superintelligence as “fast humans” (weak superhumanity), but a real Singularity probably involves strong superhumanity: minds that are alien, not just quick.
- weak superhumanity - human-like minds, just running much faster; turbocharged intelligence
- strong superhumanity - not just faster but structurally, fundamentally different; new mental architectures, minds beyond humanity’s understanding
- If we want to understand the post-Singularity world, we should think in terms of strong superintelligence, not just fast AIs.
bad scenarios
- displacement, or extinction - we might become like animals or subservient tools, still “alive” but treated in a way that fundamentally changes what it means to be human
- the ethics of the AI-human relationship - how will they treat us?
good scenarios
- we are the initiators and can create the initial conditions
- Intelligence Amplification (IA) = enhancing human intelligence through technology
- The Internet, human-computer symbiosis, and group collaboration are already evolving into superhuman systems.
- maybe an easier road to superhumanity
- IA projects
- Human/computer team automation for complex problem solving
- Ubiquitous computing and mobile interfaces (think: smartphones, AR glasses)
- Symmetrical decision support systems – where users also train or guide the system, not just consume its outputs (a minimal sketch after this list)
- Groupware as an augmented organism – collaboration tools like Slack, Notion, GitHub are crude versions of this
- Internet as a global mind – a shared, chaotic but evolving intelligence formed by human-machine coevolution (think Reddit, StackOverflow, USENET).
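- to make “symmetrical” concrete (my own sketch; the class and scenario below are hypothetical, not from the essay): the key property is that corrections flow back into the system, so the human trains the tool while using it

```python
from collections import defaultdict

# Minimal sketch of a symmetrical decision-support loop (hypothetical
# names): the system proposes an action, the human accepts or rejects it,
# and that feedback updates the scores the next proposal is drawn from,
# so information flows both ways.

class SymmetricAssistant:
    def __init__(self):
        self.scores = defaultdict(float)  # (situation, action) -> score

    def propose(self, situation, actions):
        # system -> human: recommend the best-scored action so far
        return max(actions, key=lambda a: self.scores[(situation, a)])

    def feedback(self, situation, action, accepted):
        # human -> system: acceptance reinforces, rejection penalizes
        self.scores[(situation, action)] += 1.0 if accepted else -1.0

assistant = SymmetricAssistant()
actions = ["escalate", "auto-resolve", "ask for info"]
for _ in range(3):  # simulated sessions: user rejects "escalate" for minor bugs
    choice = assistant.propose("minor bug", actions)
    assistant.feedback("minor bug", choice, accepted=(choice != "escalate"))
print(assistant.propose("minor bug", actions))  # learned to avoid "escalate"
```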
- biological-computer symbiosis
- computers don’t just augment human intelligence externally (via networks, tools, etc.), but also interface directly with our brains and bodies
- brain computer interfaces
- prosthetics
- embryonic brains grow around artificial interfaces:
- Instead of inserting electrodes into a fully formed brain (which is difficult and damaging), let developing brains integrate with technology
- animals with extended senses or novel cognitive abilities (holy moly)
- this could be the path to the technological singularity
- instead of building separate machine intelligences, we enhance ourselves → creates a powerful trajectory
let’s say everything goes well
- what if the Singularity is controlled and humans become their own successors, evolving into superintelligent beings while preserving our humanity
- people left behind are treated kindly (or flattered with the illusion of control lol)
- golden age of technological transcendence & human dignity coexisting
- immortality might be possible?
- the philosophical cost
- where no one is killed, everyone is uplifted, and the notion of selfhood becomes unrecognizable → you become your own ancestor
- what even is “you”?
- Growth will outpace memory and empathy with the past
- even if the best possible outcome happens (no war, extinction, kindness & mutuality), post-human world will be utterly alien
========== lol what a ride sheesh 4.22.25
- So I basically stumbled onto this article randomly while reading about AI development. It turns out this was a pretty monumental essay that made real ripple effects. Just reading through it gave me existential dread/horror paragraph after paragraph, peaking right at the ending. lol
- But as of April 2025, it seems like we’re not in the “clean” timeline that Vinge suggested. Rather, it’s messier, more disorganized and chaotic. A few things to list:
- open source vs corporate AIs
- devs building ai agents
- governments making laws without even fundamentally understanding AI
- ai slop filling the internet
- All that to say… it’s chaotic. I don’t think a clean timeline where we reach AGI before 2030 is likely, given the current state of the economy.