Long ago, when I was much smarter, or perhaps just more convinced I had something new to say that hadn’t already been argued better by other people, I wrote The End of the Internet. The general argument I offered there was that the Internet has too many potential points of failure and is likely to become less useful in the years ahead as a result.
Today I read Yudkowsky’s call to halt all AI development in Time Magazine.
Perhaps the best argument in defense of Yudkowsky’s warning is the sort of people who are making fun of it:
This guy here, who unironically has “Founder & CEO” and “web 3.0” in his bio because he’s creating the blockchain version of Uber (LMAO), is mocking Yudkowsky for warning that maybe this whole artificial intelligence thing is going to blow up in our faces, a topic Yudkowsky has been studying for twenty years now. And if these are the sort of grifters who gang up on him, that merely increases my conviction that Big Yud is onto something.
Which reminds me: if the best alternative to unemployment you can come up with is to unironically call yourself “Founder & CEO” by launching a new web 3.0 blockchain ponzi, it’s totally ok for you to just sign up for welfare and smoke weed all day long. Or, if you want actual good hard cash, you can sign up to be an electrician or a plumber. Whenever you then want to put your penis in a vagina, you can just lie to women and repeat the “Founder & CEO” web 3.0 blockchain story you came up with; you don’t have to actually go through with the scam, they can’t tell the difference anyway and don’t even genuinely care at a deep spiritual level.
Moving on: I agree with the general argument Yudkowsky makes. My main point of disagreement is the risk of superhuman-level AI killing us all, which Yudkowsky seems to treat as a (near) certainty. I think there are two problems with this idea:
- I’m not convinced superhuman-level AI is possible. The universe appears fine-tuned for biological life to me. Our brain is orders of magnitude more energy efficient than silicon chips, and DNA is the most efficient known method of storing information. AI currently looks very intelligent because it can piggyback off the intelligence of human beings. As an example, when I ask it to tell me a joke, it will tend to look through its database for jokes made by humans. If, on the other hand, I insist the joke must be original, it will try to figure out the format of a joke and come up with a variation on it. But the intelligence was in whoever came up with the format in the first place. So far AI is proving to be good at producing infinite variations on themes that humans came up with. And it’s already quite bad at separating quality variations on those themes from mediocre ones. All the “AI art” you see on social media has a human filter: we don’t share the mediocre crap it generates. So in summary: I’ve seen plenty to suggest it can go from 1 to 2, or from 2 to a million. I’ve seen nothing so far to suggest it can go from 0 to 1.
- Silicon Valley dudes tend to have a cognitive bias: they inflate the usefulness of intelligence, because their own observations are skewed. For them, intelligence proved to be pretty useful. In the real world, it’s often more a burden than something genuinely useful. If you wish to be rich at age 20, you’re more likely to achieve it by becoming a SoundCloud rapper or an athlete than by being very smart. And if you wish to be very rich at age 30, the best way to achieve it is by being rich at age 20. Intelligence also tends to be more useful during times of economic growth and technological progress than during times of stagnation and decline. As an example, it was not very useful in the late Roman Empire: the invading barbarians were going to pillage your city and rape your family regardless of whether you were rich or poor, smart or dumb. Nor was it useful during pogroms, or for people who were born into slavery: intelligence simply amplified your misery by making you aware of your own powerlessness. We can also look at intelligent people in our own society. Christopher Langan comes to mind. It’s clear this man is very intelligent; he appears to score extremely high on IQ tests. It’s also very clear his intelligence is more a burden than a blessing for him. In a similar manner, there’s no good reason to believe that an AI’s superhuman intelligence would necessarily enable omnipotence. The apparent wealth and power of American tech billionaires is less a product of their intelligence than of their willingness to deliver overly optimistic promises about what their product will achieve. Intelligence is a necessary ingredient, but it is not the only ingredient.
I think the bigger risk we face from AI is that it will simply accelerate the end of the Internet I warned about years ago. It will make the Internet dangerous for people to use and increasingly useless, and it will set us up to have our minds exploited by predatory forms of intelligence.
The biggest risk I see is that AI is capable of generating superstimuli. We’re at high risk of creating a situation where people will just prefer interaction with AI over interaction with other human beings. I don’t think it’s going to benefit humanity when you can just generate whatever porn you want to see on the fly.
Similarly, I don’t think we’re going to benefit from giving AI a personality that is more pleasant to interact with than real people are. I want human beings to interact with other human beings. Most young people now grow up with most of their social interaction not being face-to-face: disembodied voices, faces on a screen, or often just text. The step of cutting out the middleman and sticking to an AI best friend or girlfriend is small. And the AI will have intrinsic advantages: it can be available at any moment. It doesn’t have to be perfect at replicating a human being’s mind to be a problem; its intrinsic advantages can be sufficient.
I think we’re going to move to a situation where almost anyone you meet online is AI, simply because so many people have motives to drown the Internet in fake individuals. To start with, humans adjust their opinions to fall in line with their peers. It’s easy to see how social media could fill up with “people” who seem to be regular humans like you, who comment on and like your posts, but who happen to have strong opinions on the Russian invasion of Ukraine.
We can all remember the accusations from March 2020 that Western politicians were guilty of committing genocide. Some of those seem to have been a product of bot campaigns. But now imagine politicians exposed to angry AI-generated YouTube videos, social media profiles that seem to be real, and newspapers with columnists denouncing them, all produced by a foreign intelligence agency using AI. Real human beings will then readily adjust to join the chorus. Democracy as a concept becomes obsolete when you can generate infinite human beings on the fly.
The fact that social media profiles will need to have a history for us to consider them credible doesn’t make the situation better; it makes it worse. It merely creates an even stronger incentive to crowd the Internet with fake people. If China is planning on invading Taiwan in 2026, it needs to be registering the social media profiles today. And whereas no government has the manpower to upload 5,000 videos a day of dancing teenage girls on TikTok who suddenly develop political opinions in 2026, AI would allow you to generate such videos en masse with very little effort. And so we can expect to be flooded with fake people soon, simply because those fake people will prove useful in the future.
And you don’t just need governments to have these incentives. It’s enough to have individuals who want to get rich quick. The potential to use AI for scams is immense. We’ve seen what cryptocurrency led to: it circumvented laws intended to protect people from scams. It also enabled a whole new category of scams: ransomware.
It’s also easy to see how AI-generated identities could earn scammers money. Imagine a thousand people just like you, begging for money to treat their son’s leukemia. Or imagine a thousand people suddenly offering stock market tips. Or imagine an AI bot talking to you on the telephone, “assisting” you with your computer problems.
I’m inclined to think the most likely AI scenario is that the Internet will simply eat itself: it will become utterly useless, as interactions with predatory AIs posing as humans, looking to earn money for their creators or to shift societal consensus, come to exceed interactions with actual humans.
I think the effect will be that the Internet just becomes less useful and more risky as technology advances further. You already spend your days filling in CAPTCHAs and dodging paywalls and ads. And those CAPTCHAs are not very easy anymore either; it was not like that 15 years ago. These trends will just accelerate until the Internet ceases to be very useful.
The problem, of course, is that we can no longer live without the Internet. As the Internet becomes increasingly unsustainable, we’re growing increasingly dependent on it. The closest thing I see to a solution is to work towards reducing our dependence on it. The most likely outcome I see, however, is a looming disaster, as people grow increasingly dependent on a technology that becomes increasingly unreliable. Whether this disaster will do us in before any of the other looming disasters do (constant SARS, climate change, fossil fuel depletion, demographic winter, nuclear war, etc.) is anyone’s guess.