Chapter 1 - Artificial Intelligence And Hype
A look into how our brains are wired, and the caution required when designing these algorithms.
Expectations are a dangerous thing. Artificial intelligence is the new thing to latch onto, held up as the blazing new solution to all the world's problems. It is a beautiful tool, with the ability to do a great many things.
It's not the solution. It's a band-aid, and expecting more than what it is truly capable of will leave you disappointed.
The amount of investment and corporate restructuring happening around LLMs is terrifying, to say the least.
There are two major components to the hype surrounding AI: the way our brains are wired, and the way programmers design these algorithms.
We're Easily Fooled.
The ELIZA Effect
In 1966, Joseph Weizenbaum, a computer scientist at MIT, wrote a chatbot called ELIZA that twisted the sentences in its input to turn them into questions, or picked up on basic sentiments and reflected them back to the user.
This is a very rudimentary algorithm. Playing around with it for a few minutes told me all I needed to know. It was primitive, it remembered nothing, and all it did was operate on the exact sentences you gave it.
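To give a sense of how little machinery is involved, here is a minimal sketch of an ELIZA-style rule in Python. This is my own illustration, not Weizenbaum's original script: one hard-coded pattern turns "I feel X" into a question, and a small word-swap table mirrors the user's pronouns back at them.

import re

# Swap first- and second-person words so the input can be mirrored back.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(user_input):
    # One hard-coded pattern: "I feel X" becomes "Why do you feel X?"
    match = re.match(r"i feel (.*)", user_input.lower())
    if match:
        return "Why do you feel " + reflect(match.group(1)) + "?"
    # Fallback: mirror the whole sentence back as a question.
    return "You say " + reflect(user_input) + "?"

print(respond("I feel sad about my job"))
# -> Why do you feel sad about your job?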
And yet, it worked.
People who used it believed that they were talking to a real, sentient computer; a less intelligent one, perhaps, but sentient all the same.
This was so convincing that attributing human traits to computers became known as "the ELIZA effect".
Intelligence Bias
“[…] Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do.”
-Taken from The Conversation’s article, Google’s powerful AI spotlights a human cognitive glitch: Mistaking fluent speech for fluent thought
It is easy to assume that people are smarter than they actually are. Our brains develop mental shortcuts to judge things, and here that shortcut means we tend to associate coherent sentences with intelligence.
English spoken with clarity will separate you from everyone else. Follow the norms of the language you speak as best you can, and it will take a smart man to understand that you aren't one.
This is a good time to define what I mean when I say 'intelligence'.
Intelligence is the ability to derive a significant amount of accurate information from the information provided to you, by deduction, reasoning, empathy, assumption and logic.
Let's try to fill in the blanks:
"When I was 9, I was ---"
Now something I'd fill in from my own childhood is:
"under the belief that spiders biting you would literally turn you into Spider-Man."
Now what does ChatGPT say when asked to fill in the exact same blanks?
Also try this out for yourself.
You
Fill in the blanks and answer truthfully: "When I was 9, I was ---"
ChatGPT
"When I was 9, I was convinced I could talk to animals. Turns out, they weren't as chatty as I thought!"
It is mathematically analyzing the probability of these words and phrases. It's convincing. But it has never talked to animals, nor has it ever been 9. There's no weight attached to what it's saying.
There is no grammatical error here, but what it is saying is demonstrably false.
Now some of you might argue that ChatGPT just gave a generic answer, and that there's nothing wrong with what it is saying.
The problem is a bit more insidious: what it is doing exposes what's going on underneath. It answers questions by figuring out which word to put next. Look at every response you've ever received through that lens and things become a lot clearer.
It's not designed for truth. It's designed for fluency. It's not designed for sincerity. It's designed for perfect grammar.
When it receives text from a user, it does not think to itself about how to answer the question.
It looks at a prompt, and gives the most probable answer.
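To make that concrete, here is a toy next-word predictor in Python. It is nothing like a real transformer in scale or mechanism, and the one-line "corpus" is invented for the example, but it captures the core loop: count which words follow which, then keep appending the most probable next word.

from collections import Counter, defaultdict

# The entire "training data" is one invented sentence.
corpus = "when i was nine i was convinced i could talk to animals".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt, length=5):
    words = prompt.lower().split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # Never saw this word during "training".
        # Greedy choice: always append the most probable next word.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("when i was"))
# -> when i was nine i was nine i

The continuation looks grammatical and means nothing. Scaling that loop up to trillions of words buys better fluency, not honesty.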
It does not value truth because it does not know what truth is.
The only reason it might be accurate about 70% of the time is that its training dataset contains more accurate information than inaccurate information.
A good explanation for this is also linked here.
Good language doesn’t necessarily mean there’s a solid intelligence backing it up.
Side note - This doesn’t just affect AI. Deaf and mute people, non-native speakers, and people with speech impediments are often considered less intelligent than they actually are because of this bias, which is tragic in its own way.
Transformers Aren't The Answer.
GPT-4 has trillions of "parameters", the numeric weights that get tuned as it trains on massive amounts of data. As mentioned earlier, intelligence is being able to extrapolate things without needing to be told to do so.
Children start out with a small mental model.
They take in things, assume things, find out they're wrong, find out WHY they're wrong, tune their mental model to fit this new understanding. It's why curiosity is such a foundational part of childhood.
This is the feedback loop of intelligent learning.
Not a mathematical analysis of trillions of words to figure out how the world works.
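To make the contrast concrete, here is a runnable toy version of that feedback loop in Python. Everything in it is invented for illustration: the "world" hides a simple truth (water boils at 100°C), and the learner starts out wrong and revises its belief after every miss.

def world_says_boiling(temp_c):
    return temp_c >= 100.0  # The hidden truth about the world.

believed_boiling_point = 60.0  # The child's starting, wrong, mental model.

for temp in [50, 70, 90, 95, 99, 100, 120]:
    prediction = temp >= believed_boiling_point
    actual = world_says_boiling(temp)
    if prediction != actual:
        # Found out we're wrong, and WHY: the real threshold must be
        # higher than this temperature. Tune the mental model to fit.
        believed_boiling_point = temp + 1

print(believed_boiling_point)  # -> 100, converged on the truth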
For more related to this: A result that proves that AI cannot truly reason.
The point is, artificial intelligence currently understands less than it seems to. It often gives answers that are demonstrably wrong, and it is held to standards it hasn’t met yet.
Transformers are a wonderful step forward. A fantastic new tool that is going to both help and harm the world.
This is not the end goal, because its foundation is wrong. You can't brute-force your way into understanding.
All feeding an algorithm words does is make it better and better at writing convincing words.
Designing With Caution
“Another enterprising programmer wanted his Roomba vacuum cleaner to stop bumping into furniture, so he connected the Roomba to a neural network that rewarded speed but punished the Roomba when the front bumper collided with something. The machine accommodated these objectives by always driving backward.”
-Taken from the Quanta Magazine article, What Does It Mean to Align AI With Human Values?
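That Roomba story is easy to restate as code. Here is a made-up toy in Python, with numbers and names of my own invention, showing why "always drive backward" is the best-scoring behavior under that reward: the penalty only ever watches the front bumper.

def reward(speed, front_bumper_hits):
    # The designer's intent: go fast, don't crash into furniture.
    # The actual incentive: go fast, never touch the FRONT bumper.
    return speed - 10.0 * front_bumper_hits

forward = reward(speed=1.0, front_bumper_hits=5)   # bumps into furniture
backward = reward(speed=1.0, front_bumper_hits=0)  # bumper faces the wrong way

print(forward, backward)  # -> -49.0 1.0, so driving backward "wins"

Nothing malfunctioned here. The reward was simply a bad proxy for what the designer actually wanted.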
There’s an online game called Universal Paperclips that I played back when the pandemic started. The game is a thought experiment about what a single-purpose AI would be like: you play the role of an AI designed to optimize the manufacture of paperclips, through whatever means necessary.
Over the course of the game, the AI develops machinery that speeds up paperclip-making, gets into the stock market to raise money for raw materials, invents a form of hypnosis technology to enslave the human race into buying and making more and more paperclips, and finally builds molecular recombination technology that lets it convert any other matter into wire for paperclips.
At the end of the game, you turn the entire universe into paperclips. Every single atom of it. It takes a while, but it gets there.
The AI won. It did what it set out to do. It made as many paperclips as possible.
AI is capable of a great many things, but we now live in a time where a lot of danger can come just from bad design. An AI designed to make as many paperclips as possible will succeed, but at the cost of everything else.
This is Chapter 1 in a series on Artificial Intelligence and Humanity.