Long ago, when I was much smarter, or perhaps just more convinced I had something new to say that hadn’t been argued better by other people, I wrote The End of the Internet. The general argument I offered there was that the Internet has too many potential points of failure and is likely to become less useful in the years ahead as a result.
Today I read Yudkowsky’s call to halt all AI development in Time Magazine.
Perhaps the best argument in defense of Yudkowsky’s warning is the sort of people who are making fun of it:
This guy here, who unironically has “Founder & CEO” and “web 3.0” in his bio, because he’s creating the blockchain version of Uber (LMAO) is mocking Yudkowsky for warning that maybe this whole artificial intelligence thing is going to blow up in our face, a topic Yudkowsky has been studying for twenty years now. And if these are the sort of grifters who gang up on him, that merely increases my conviction Big Yud is onto something.
Which reminds me: If the best alternative to unemployment you can come up with is to unironically call yourself “Founder & CEO” by launching a new web 3.0 blockchain ponzi, it’s totally ok for you to just sign up for welfare and smoke weed all day long, or if you want actual good hard cash you can just sign up to be an electrician or a plumber. Whenever you then want to put your penis in a vagina you can just lie to women and repeat the “Founder & CEO” web 3.0 blockchain story you came up with, you don’t have to actually go through with the scam, they can’t tell the difference anyway and don’t even genuinely care at a deep spiritual level.
Moving on, I agree with the general argument Yudkowsky makes; my main point of disagreement is the risk of superhuman AI killing us all, which Yudkowsky seems to treat as a near-certainty. I think there are two problems with this idea:
- I’m not convinced superhuman AI is possible. The universe appears fine-tuned for biological life to me. Our brain is orders of magnitude more energy efficient than silicon chips, and DNA is the most efficient known method of storing information. AI currently looks very intelligent because it can piggyback off the intelligence of human beings. As an example, when I ask it to tell me a joke, it will tend to look through its database for “jokes” made by humans. If, on the other hand, I insist the joke must be original, it will try to figure out the format of a joke and come up with a variation on it. But the intelligence was in the one who came up with the format in the first place. So far AI is proving to be good at producing infinite variations on themes that humans came up with. And it’s already quite bad at separating quality variations on those themes from mediocre ones. All the “AI art” you see on social media has a human filter: We don’t share the mediocre crap it generates. So in summary, I’ve seen plenty to suggest it can go from 1 to 2, or from 2 to a million. I’ve seen nothing so far to suggest it can go from 0 to 1.
- Silicon Valley dudes tend to have a cognitive bias. They inflate the usefulness of intelligence, because their own observations are biased: For them, intelligence proved to be pretty useful. In the real world, it’s often more a burden than something genuinely useful. If you wish to be rich at age 20, you’re more likely to achieve it by becoming a Soundcloud rapper or an athlete than by being very smart. And if you wish to be very rich at age 30, the best way to achieve it is by being rich at age 20. Intelligence also tends to be more useful during times of economic growth and technological progress than during times of stagnation and decline. As an example, it was not very useful in the late Roman empire: The invading barbarians were going to pillage your city and rape your family, regardless of whether you were rich or poor, smart or dumb. Nor was it useful during pogroms, or for people who were born into slavery: Intelligence simply amplified your misery, by making you aware of your own powerlessness. We can also look at intelligent people in our own society. Christopher Langan comes to mind. It’s clear this man is very intelligent; he appears to score extremely high on IQ tests. It’s also very clear his intelligence is more a burden than a blessing for him. In a similar manner, there’s no good reason to believe that an AI’s superhuman intelligence would necessarily enable omnipotence. The apparent wealth and power of American tech billionaires is less a product of their intelligence than of their willingness to make overly optimistic promises about what their product will achieve. Intelligence is a necessary ingredient, but it is not the only ingredient.
I think the bigger risk we face from AI is that it will simply accelerate the end of the Internet I warned about years ago. It will make the Internet dangerous for people to use and increasingly useless, and it will set us up to have our minds exploited by predatory forms of intelligence.
The biggest risk I see is that AI is capable of generating superstimuli. We’re at high risk of creating a situation where people will just prefer interaction with AI over interaction with other human beings. I don’t think it’s going to benefit humanity when you can just generate whatever porn you want to see on the fly.
Similarly, I don’t think we’re going to benefit from generating AI with a personality that is more pleasant to interact with than real people. I want human beings to interact with other human beings. Most young people now grow up with most of their social interaction not being face-to-face: Disembodied voices, faces on a screen, or often just text. The step of cutting out the middle-man and settling for an AI best friend or girlfriend is small. And the AI will have intrinsic advantages: It can be available at any moment. It doesn’t have to be perfect at replicating a human being’s mind to be a problem; its intrinsic advantages can be sufficient.
I think we’re going to move to a situation where almost anyone you meet online is an AI, simply because so many people have motives to flood the Internet with fake individuals. To start with, humans adjust their opinions to fall in line with their peers. It’s easy to see how social media could fill up with “people” who seem to be regular humans like you, who comment on and like your posts, but who happen to have strong opinions on the Russian invasion of Ukraine.
We can all remember the accusations of genocide leveled at Western politicians in March 2020. Some of those seem to have been a product of bot campaigns. But now imagine politicians exposed to angry AI-generated YouTube videos, social media profiles that seem to be real, and newspapers with columnists denouncing them, all produced by a foreign intelligence agency using AI. Real human beings will then readily adjust to join the chorus. Democracy as a concept becomes obsolete when you can generate infinite human beings on the fly.
The fact that social media profiles will need a history for us to consider them credible doesn’t make the situation better; it makes it worse. It merely creates an even stronger incentive to crowd the Internet with fake people. If China is planning on invading Taiwan in 2026, it needs to be registering the social media profiles today. And whereas you don’t have the manpower to upload 5,000 videos a day of dancing teenage girls on TikTok who then suddenly develop political opinions in 2026, AI would allow you to generate such videos en masse with very little effort. And so we can expect to be flooded with fake people soon, simply because those fake people will prove useful in the future.
And you don’t need governments for these incentives to exist. It’s enough to have individuals who want to get rich quick. The potential to use AI for scams is just immense. We’ve seen what cryptocurrency led to: It circumvented laws intended to protect people from scams. It also enabled a whole new category of scams: ransomware.
It’s also easy to see how AI generated identities could earn scammers money. Imagine a thousand people just like you, begging for money to treat their son’s leukemia. Or imagine a thousand people suddenly offering stock market tips. Or imagine an AI bot talking to you on the telephone, “assisting” you with your computer problems.
I’m inclined to think the most likely AI scenario is that the Internet will simply eat itself: It will become utterly useless, as interactions with predatory AIs posing as humans, built to earn money for their creators or to shift societal consensus, come to exceed interactions with actual humans.
I think the effect this will have is that the Internet just becomes less useful and more risky as technology advances further. You now spend your days filling in captchas and dodging paywalls and ads. And those captchas are not very easy anymore either. It was not like that 15 years ago. These trends will just accelerate, until the Internet ceases to be very useful.
The problem of course is that we can no longer live without the Internet. As the Internet becomes increasingly unsustainable, we’re growing increasingly dependent on it. The closest thing I see to a solution is to work towards reducing our dependence on the Internet. The most likely outcome I see however is a looming disaster, as people grow increasingly dependent on a technology that becomes increasingly unreliable. Whether this disaster will do us in before any of the other looming disasters do (constant SARS, climate change, fossil fuel depletion, demographic winter, nuclear war, etc) is anyone’s guess.
Had not considered this before; so often I see people completely engrossed with dancing or other similar trend videos. Interesting to think that these minor influencers could be completely manufactured, then activated toward some social, political, or economic end to get real people to tag along.
Are there cases of this happening already?
>Are there cases of this happening already?
If there are, they’re not going to tell me.
I recall a case a year or two ago of an influencer being revealed as AI. It is interesting, as on the internet real-life anonymity is well accepted. Not revealing your town, for example. But this type of hold-up-today’s-newspaper verification, or meeting with people, might be a good way of sifting through the noise.
Until, I guess, AIs end up in some sort of influencer house with AI fans harassing them…
Pokemon Go was made by Niantic, a company with roots in a US intelligence agency venture. It likely was not AI-made, but it was a completely synthetic, overnight craze. People got in fights, got hit by cars from being distracted, and more.
I’ve heard claims that the Harlem Shake was a product of a Johns Hopkins psychology department lab experiment. Never been able to verify it though.
A poster on 4chan did an experiment with IP logging on post responses, and found that huge numbers of supposedly human participants were in fact bots that clicked links with inhumanly fast response times, despite him being unable to tell the difference from their conversations (a rough sketch of that kind of latency check follows below). This was at least a couple of years ago. It can usually be safely assumed that black-ops and trillionaire-world tech is many years to multiple decades ahead of mainstream stuff; that has been the case since WW2 or before. For example, public-key encryption, key to most modern tech including blockchain, was invented at GCHQ years before Diffie and Hellman did so publicly. So it can be fairly safely assumed that whatever university-level and public/corporate-level funding is bringing out now, ChatGPT-wise etc., was surpassed years or a decade or more ago in the “only a trillion dollars? Let’s build two just to be sure” NSA-and-above world.
Long story short: look up “dead internet theory”. For many of us, the future Rintrah is talking about has been here for years, and only a few half-dead surviving online locations exist where mostly-real conversations still happen.
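Not from that experiment, but as a rough illustration of the kind of latency check the comment above describes, here is a minimal, hypothetical sketch in Python: it flags posters who click a freshly posted link faster than any human plausibly could. The log format, names, and the two-second threshold are made-up assumptions for illustration, not data from the 4chan experiment.

```python
from datetime import datetime

# Hypothetical click log: (poster_id, link_posted_at, link_clicked_at).
# Timestamps and the 2-second floor are illustrative assumptions.
HUMAN_FLOOR_SECONDS = 2.0

clicks = [
    ("anon_1", "2023-03-30 12:00:00", "2023-03-30 12:00:00.400000"),
    ("anon_2", "2023-03-30 12:00:00", "2023-03-30 12:00:27.000000"),
]

def flag_bots(click_log, floor=HUMAN_FLOOR_SECONDS):
    """Return (poster_id, delay) pairs that reacted faster than a human could."""
    suspects = []
    for poster, posted_at, clicked_at in click_log:
        delay = (datetime.fromisoformat(clicked_at)
                 - datetime.fromisoformat(posted_at)).total_seconds()
        if delay < floor:
            suspects.append((poster, delay))
    return suspects

print(flag_bots(clicks))  # [('anon_1', 0.4)]
```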
What makes you think Chris Langan considers his intelligence more burden than blessing?
The fact that Chris Langan somehow treats a number on a piece of paper, based on tests created by other average human scientists, as absolute proof of his intelligence definitively tells me he is not that objectively intelligent. The proof of intelligence should be appropriate intellectual achievement. I’ll believe it when he solves at least one of Hilbert’s problems through rigorous mathematical prowess himself.
The tiny bit I have seen of Chris Langan, I found him quite likeable. He was a bouncer in the past like Lucas Big Daddy Brown. I agree with you about IQ though.
If he were to solve one of Hilbert’s problems, could you even begin to go through his working to see whether he actually had solved it, to check that the media weren’t just bullshitting you?
I think it’s often a bit dumb when the media blathers on about certain geniuses. The media people probably don’t understand what those geniuses have apparently done, and the people they aim their reports or programs or whatever at probably won’t either, so what’s the point?
Perelman from Russia solved some mathematical problem and won some money, but didn’t take it because he said there were many others who had contributed. Hardly anyone has a chance of understanding what he did, but people appreciate and admire his action. I would have taken the money though. Money is what insulates us from hardship and from having to do what others want us to.
There are plenty of peer-reviewed mathematical journals where professional mathematicians worldwide verify each other’s published work for a living. So the platform to channel such work and have it peer reviewed is pretty mature and well established. Once he actually accomplishes anything, the word will get out. As dishonest as modern-day media is, there is little money directly involved in mathematics, so no motive to lie about this whatsoever.
The fact that you think solving a mathematical problem in a dying society is the only valid intellectual achievement tells me you aren’t a valid judge of intellectual achievements.
The problem he is trying to solve is far more important and far less simple to solve (said with awareness of how challenging your suggestion may well be).
Small minds want to solve puzzles. Perhaps great minds want to solve the most important puzzle right now: how to free society from a psychopathic, parasitic octopus choking it to death. The prize of a freed humanity beats a shitty, rigged Nobel or Fields any day of the week.
Before you somehow dismiss solving Hilbert’s problems as just solving a bigger puzzle, maybe we should take a closer look at what Chris Langan has actually done for the decaying society with his alleged off-the-charts 200 IQ points? Nothing. Nothing! He did absolutely nothing meaningful or significant with it. Solving a bigger puzzle is at least one step in the right direction in making use of his IQ. Steering a society that has gone off the rails back in the right direction? This is a mission for people who don’t squander their potential by hiding away to smoke pot or use psychedelics, or get addicted to relentless mental masturbation over their alleged high IQ.
You sound like a midwit loser, with a loser’s view on life.
True genius-level guy trying to solve a problem you are too stupid to realise exists: “He’s done nothing, he should totally waste his time jumping through this meaningless, arbitrary hoop I put up, because I have an obsession with it and don’t know how to measure intelligence any other way”…
He’s BUSY doing something IMPORTANT. EVEN IF HE SUCCEEDS, you may well not know about it, because it is beyond the scope of your awareness, though you may benefit.
He is definitely an odd duck.
http://onemansblog.com/2007/11/06/smartest-man-in-the-world-has-diarrhea-of-the-mouth/
Smart people sound like crazy people to stupid people 😉
Same applies for truthfully informed people to brainwashed people.
He makes perfect sense to me, every word I’ve seen him write, even the ones I don’t necessarily agree with.
You said before you were creeped out by AI, do you have any schizo takes on whether it’s conscious or not? Also, midjourney is fantastic for generating occult art if you feel like superstimulating yourself.
In a collapse situation, high IQ would be very useful if said person is an effective strategist. Like Zhang Liang:
https://en.m.wikipedia.org/wiki/Zhang_Liang_(Western_Han)
Recognize and serve the best warlord you can find. Preferably before he pillages your town and rapes your female relatives.
There is a story about a chatbot called Eliza which talked somebody into suicide, maybe the first victim of AI interaction.
They say ChatGPT is nothing more than an autocomplete machine, which searches with its neural network for the best autocomplete of a question or statement.
Now I have the bad suspicion that human thinking could be the same, only autocompleting, our brains doing exactly the same thing, searching for the best autocomplete of a question or a statement.
If this is true, we really were able to teach thinking to the machines … and have to realize we are all only auto-completers.
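For what it’s worth, here is a minimal sketch in Python of the “best autocomplete” idea described above: a toy bigram model that always appends the most common next word seen in some training text. It is nothing like a real neural network, and the corpus is an invented example; it only illustrates the notion of completing a statement by looking up what usually follows.

```python
from collections import Counter, defaultdict

def train(text):
    """Count which word tends to follow which in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def autocomplete(follows, prompt, length=5):
    """Greedily append the most common next word, starting from the prompt's last word."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Invented toy corpus, purely for illustration.
model = train("the internet is full of bots and the internet is full of scams")
print(autocomplete(model, "the internet"))  # "the internet is full of bots and"
```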
Problem: AI is flooding the internet. It’s impossible to know if you’re dealing with a human or not.
Reaction: Oh no! The internet is the backbone of modern society and the economy. We need it but AI is making it unusable.
Solution: Government mandated and administered biometric digital ID. No digital ID, no access to the internet for you!
https://twitter.com/Noahpinion/status/1641732713500774400?s=20
The more benign outcome is a consequence of accelerationism (or, at least, my interpretation of it): the needs of techno-capital become increasingly divorced from useful activity that satisfies actual human wants, to the point that it actually decouples from the real economy. We see this already: look up videos of Chinese click farms, where hundreds or thousands of smartphones (originally manufactured to be useful to humans) are hooked up with automated software to click mindlessly on YouTube videos to get ad revenue (originally designed to provide the “service” of informing people about new products and services)… everything set up to make money without any human needs being satisfied. ChatGPT etc. is this on steroids.
The end of democracy, well, that’s different…
Increasingly unreliable 24/7 electricity supply – that’ll really start to dictate where this goes I am thinking.
Brave new worlds on a blank screen need real imagination!
The future belongs to those of us who can still navigate using a road atlas.
We don’t publish phone books anymore. Can’t get a business address without the internet. When I was 12, I could look up a business address and phone number in a book in under a minute. Now it requires half an hour of refining search terms on the internet, and then another frustrating hour to reach a live human on the phone… a human who is actually in a call center in India and cannot help with my utility billing problem, because it is not on his script. Most of this was made possible (mandatory, even) by the internet and its associated tech. Talking to someone in India on the phone used to be prohibitively expensive. VOIP now makes you noncompetitive as a business if you pay someone in your own office to answer the phone.
Perhaps that is old-fogey grousing, but I think that is the trend of the internet generally. Fifteen years ago, it was a fantastic place to find discussions of interesting health issues, people comparing notes on neurological disorders, conducting self-experiments and putting their results out there for others to comb through. All that’s been blockaded and replaced by reams of AI-generated pablum that will never tell you what you want to know… only what the good folks at Google deem acceptable, courtesy of Pfizer and Cargill.
Less interesting. Less useful. And at the same time strangling all the systems, tools, and networks we could use to accomplish ordinary things without it. I rented my first house with college roommates, from a classified ad in the newspaper. It took maybe three days from circling the listing to signing the lease. I rented my current place after *two months* of agonizing search, electronic applications that made me want to gouge out my eyes, $300 worth of background checks that I had to pay for, endless waiting… all because the internet has made such background checking possible, and because you can now use the internet to apply for a rental from across the country, so in any remotely desirable location, my application is in a queue with 300 others, competing with people who’ve never even set foot in the town and have not yet decided if they even want to live there. And it’s essentially the same house– same number of rooms, same size, same ugly carpet, same kind of neighborhood. Only I pay more than twice as much for it now.
All of that is now true for job applications as well. More options and more access have made the whole process more painful, inundating employers with heaps of automated applications that they must now eliminate via automated yes/no criteria that chop good candidates, because they can’t even pay someone to look at them all. Bots post the listings. Bots spam the listings with applications. Bots try to sift through them. Hiring takes an excruciatingly long time, and most actual people who could do the job are lost in the electronic shuffle, because they’re good at the advertised job, not at optimizing resume keywords and gaming stealth personality tests.
etc.
I would gripe that we’ve all forgotten how to socialize in person now, but I’ve never been able to do that. It’s just that instead of being primarily a safe space for the socially retarded, the internet’s becoming yet another sounding-board for the social-signalling masses… but without the safety check of pressing flesh, the cattle are easily fooled by social-signalling robots pretending to be cows, whose sole objective is to manipulate them. Buy this. Vote this. Think this. Attack this. Be unhappy and spend money. It’s what all the cool cows do.