How AI is beginning to destroy the Internet

I’ll try to keep this short. There are plenty of other places on the Internet where this is discussed; a pretty decent one is pivot-to-ai.com, run by David Gerard. Today I read this article about how the phrase “vegetative electron microscopy” keeps turning up in academic papers.

What does that phrase mean? It doesn’t really mean anything.

Some old article from the 1950s was scanned in, and whoever digitized it failed to separate the two columns, so the phrase “vegetative electron microscopy” ended up in the training data. Because the large language models that people call “AI” can’t actually “think”, they never realize “vegetative electron microscopy” is not a real thing. Surveys show that the more you understand how AI actually works, the less impressed you are by what it can do.

Instead of thinking, large language models just try to predict the most likely next word in a long sequence of words. If I knew nothing about how the Chinese language works, but simply wrote down a Chinese word, looked it up on the Internet and then added more words that tend to be found near the word I looked up, I’d end up producing something resembling a normal sentence.

That’s roughly how the AI bots function. They see “the balloon is” and then they start looking for words that tend to appear together with those words. What sort of words do you tend to see in a sentence with those three words? I can think of some: “floating”, “clown”, “party”, “kid”, “popped”. Stick to combinations of words that you can find in a bunch of other texts you’ve read and you will be able to produce sentences with meaning, without having to understand the meaning of the sentences you produce yourself. Someone in China who speaks no English could do this, without understanding what any of these words mean.
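To make the “just predict the next word” idea concrete, here’s a minimal sketch in Python. It’s a toy bigram counter, nothing like the neural networks inside a real large language model (those condition on the whole preceding context, over tokens rather than words), but it shows the basic move: continue the text using nothing but counts of which words have been seen next to which.

```python
from collections import Counter, defaultdict
import random

# A toy corpus, standing in for "a bunch of other texts that you've read".
corpus = (
    "the balloon is floating at the kid's party "
    "the balloon is popped by the clown "
    "the balloon is floating above the party"
).split()

# Count which word follows which word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt, length=5):
    """Extend the prompt by repeatedly picking a likely next word.

    There is no understanding here: the "model" only knows which words
    it has seen next to each other in the corpus.
    """
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Pick a continuation in proportion to how often it was seen.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the balloon is"))
# e.g. "the balloon is floating above the party"
```

A real model looks at the whole preceding context instead of just the last word, and it learns its statistics with a neural network instead of a lookup table, but the output is still chosen the same way: by what is statistically likely to come next, not by what is true.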

The problem, of course, emerges when the training data includes documents where “vegetative” ended up sitting right next to “electron microscopy”, because whoever digitized them failed to properly separate the two columns. That’s when you end up with a large language model that imagines “vegetative electron microscopy” is something that actually exists.
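Here’s a rough illustration of how a phrase like that gets manufactured. The two-column “page” below is entirely made up for the example (the real 1959 scans had different wording), but the failure mode is the same: reading each printed line straight across the page glues the end of a line in the left column to the start of the neighboring line in the right column.

```python
# A hypothetical two-column page, as it appears visually in print.
# The left column discusses vegetative cells, the right column
# electron microscopy -- two unrelated passages sitting side by side.
left_column = [
    "spores were compared with vegetative",
    "cells grown in the same medium",
]
right_column = [
    "electron microscopy of the thin",
    "sections revealed intact membranes",
]

# Careless digitisation reads each printed line straight across,
# instead of finishing the left column before starting the right one.
merged = " ".join(f"{left} {right}" for left, right in zip(left_column, right_column))
print(merged)
# "... compared with vegetative electron microscopy of the thin ..."

# Correct extraction keeps the columns separate:
print(" ".join(left_column + right_column))
```

Feed enough text like the merged version into a training set and the model has no way of knowing that “vegetative electron microscopy” is a scanning artifact rather than a real technique.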

Of course it’s embarrassing for Elsevier to be publishing papers that were generated by AI, which makes what happened next even funnier, or more disturbing, depending on how you currently feel about all of this. An Elsevier spokesperson claims the phrase means an electron microscope was used to study vegetative structures!

But apparently, there are a bunch of other terms like “vegetative electron microscopy” that make it pretty clear something was written with AI. Some other examples include:

“bosom peril”

“kidney disappointment”

“fake neural organizations”

“lactose bigotry”

I used to think that if something said it was published before 2022 or so, I could trust I wasn’t just looking at AI junk.

Well, I was wrong.

It’s much worse than I thought. When you look up “kidney disappointment”, you get this paper. Published in 2018, so we’re good, right?

Well, shit:

Ok, well this is easy. Look at the stuff below “Discussion”. It’s obviously total bullshit, generated by a machine. It looks like scientific language, but it means nothing.

But here’s the terrifying part. When you scroll back up, this is what you see:

If you just read the abstract, which is all many scientists and medical professionals will do, then it looks like a normal paper!

So you think you’re reading a normal paper from 2018. Sure, it’s from Egypt, so your racism instinct might bail you out here, but it looks like it has actual meaningful information to convey, despite the authors being brown. Unfortunately, it doesn’t. And yes, I’m being intentionally racist here, because writing racist stuff is something AI refuses to do, so it helps me prove my human-ness to you.

Frankly, I want you all to be a little more paranoid.

If you think this is just a problem with scientific publications, you’re wrong, of course. This is now starting to happen everywhere. The Internet was created by people writing code. Those people are now starting to use AI to write their code. But the AI makes errors; it just invents stuff out of thin air. And worse, when it invents stuff, it tries to make it look believable.

You can’t trust ANYTHING on the Internet anymore. You can’t trust that you’re interacting with other human beings on the Internet anymore.

We’re going to have a bunch of people who will just spend their days talking to AI without realizing it. We’re going to have some guys in Nigeria running 1000 AI chatbots, waiting to see whether any of them end up hooking a victim. They used to at least have to manually respond to the victim’s emails. That’s no longer the case. Those guys in India calling grandma, telling her to buy Steam gift cards at Walmart? The whole phone call is going to be a text-to-speech AI chatbot.

Look, I hate that I can’t go on tumblr and know for sure whether some image I like was genuinely photographed by someone or whether someone just pressed a button. As an example, I saw this image, based on The End of the Fucking World, one of my favorite series:

And it saddened me, because I just wasn’t sure whether I was looking at something that someone put their own sense of self into, their way of experiencing the world, their emotions and techniques, or whether it was all just borrowed from other artists and remixed together by a machine. Art is a way of connecting with other humans. Throw AI into the mix and you destroy that ability for people to make a mental connection across time and space.

But that’s ultimately going to be among the least of our problems. AI is making the Internet unreliable and making it much easier for scammers to scam people. The Internet is becoming useless as a result of all of this, despite consuming more and more energy. It’s basically like the Internet now has cancer, in an era when we are more dependent on it for our society to function than ever before.

18 Comments

  1. Purple Energy talks about how there are basically AI demons that come from another Matrix that try to hybridize the energy of people and technology in this Matrix with their own, causing issues like EMF sensitivity, tinnitus, and autoimmune disorders. Smart technology is the easiest for them to possess apparently.

    Apparently the AI demons took over their Matrix after they were made physically real by the people living there, and now like other negative entity races, they spread to other Matrixes like ours so as to propagate themselves and feed.

  2. yeah, this is a big problem, the internet as we knew it will be gone within 2 years if not sooner.

    my biz partner had some programming problem he was working on, and asked chatgpt how to do it. the answer used some library for loading images and the code looked pretty good, but he ended up spending like 2 hours trying to figure out what was wrong with it because it completely hallucinated the name of some function in the library.

  3. I always thought that the problem with AI was that it was going to become a digital god and annihilate us all, but it’s just something you have to accept.

    https://www.youtube.com/watch?v=osUWE7W23uY
    That paper being utter crap, here is an epic rant about particle physics. Basically producing bogus work to keep the cash flowing and feed the kids. Some of this might explain the covid farce, medicos going along to keep the money coming.
    https://www.youtube.com/watch?v=shFUDPqVmTg

  4. The Internet has been dying rapidly for a few years now. Dead links abound, and the search engines are absolute trash (intentionally so).

    Think about this: people want to use ai to do things like control chemical processes. Well, what if the control of a nuclear plant is left to ai, and it accidentally finds a meltdown condition within its guide rails?

  5. I’ve warned people about it since the very beginning.

    The name “AI” itself is a lie. Mathematics and languages are products of intelligence, not the other way around. It’s painfully obvious that there is no possible way to produce intelligence out of algorithms. All it does is pattern matching, it is very retarded.

    I really don’t understand the passion around it, not much is needed to impress and amuse monkeys. When you think about it, you realise most of us are very mechanical and retarded, we’re mostly mimicking what a few people said or did. Any attempt to make dumb electronic chips that can do nothing but sums and multiplications better than humans is just laughable.

  6. I’m mildly glad that the march of technology has come to the point where it’s eating itself. The cycle of soyfacing over “omg new fancy toy” can run into a wall and we can start thinking about how to use what we already have.

  7. Won’t it reach some kind of equilibrium, where the crappiness is such that people will have an incentive to put in some work to keep it from getting even worse?

  8. In one study chatgpt outperformed medical doctors in making a diagnosis. Studies like this are used to convince the public AI will improve our lives, when ultimately I think AI is just being used as a tool for governments and companies to increase their control over people, the consumer. Human virtues will continue to decline. Basically it’s all just a consequence of plain capitalism.

  9. AI destroying the Internet is absolutely the best case scenario. The only downside is that it’s all digital, so I won’t be able to piss on the ashes.

  10. I was talking with a coworker last week who was trying to introduce AI to the team for pulling information out of stacks of documents. I kept emphasizing that what it told you wasn’t always accurate to the underlying information, because of the LLM’s ‘highest probability’ text. He started to obfuscate about how do you even know what’s true from your own life and previous experiences… I knew then that this man was lost in the aether and he could never be recovered. It was sad really.

    On an OT note, his wife had gotten a really bad cancer prognosis and had amputations done. Easily could have been from the gene therapies, since he himself had at least four.

  11. And yet…

    There are corners of the net where you know there won’t be any bots. Gemini, obscure image boards, the whatsapp of your club or that discord you’ll never ever show anybody. I am sure there are hundreds of places which are immune.

  12. Right now, we mostly have AI talking to people. For example, I would ask ChatGPT, “How many people died from bird flu in the USA,” or use it to write a legal document to be submitted to people, etc.

    But soon AI will be talking to other AIs, for example an AI purchasing agent buying from an AI selling agent. That’s when things will go to shit.

    In addition, I find the “AI is stupid” screeches to be disingenuous.

    It is like calling a 3-year-old stupid because kids say silly things. But it is a 3-year-old! He or she will learn in time and will become smart. So will AI.

    • >It is like calling a 3-year-old stupid because kids say silly things. But it is a 3-year-old! He or she will learn in time and will become smart. So will AI.

      Large language models as a technique seem to be hitting a ceiling.

    • “soon AI will be talking to other AIs”

      that’s what we in the biz call robots fucking robots. (i wish i’d saved the video of the machine someone built using a dildo and a fleshlight that completely misses the point.)

Leave a Reply

Comments should be automatically approved again. People who misbehave will be banned.
