Friday, December 8, 2023

The human brain: Is it actually intelligent?

The following is a satire of trendy articles explaining that deep neural nets such as LLMs are "just" predicting the next token, as if that were some sort of grand philosophical insight.

Interviewer: The human brain has captivated the masses. A lot of people are thinking it exhibits true intelligence. What are your thoughts?

Expert Bob: Well, it can certainly seem that way. But if you're familiar with how the human brain actually works, you'll know it's just a façade.

Interviewer: Can you explain to us how it works?

Expert Bob: Of course. There are input sensory neurons taking in signals from the eyes, ears, nose, and skin. All these electrical signals get processed in the brain, which is, at the end of the day, just a giant network of neurons. Finally, the brain outputs signals that get sent out as muscle activations.

Interviewer: So you're saying all it does is predict the next best set of muscle activations?

Expert Bob: If you peer into the brain with a microscope or high-tech scanner, you can trace every electrical signal down to its physical causes. It's a very complex biological machine, but ultimately predictable and devoid of true intelligence.

Interviewer: What are your thoughts on the fact that they seem to be capable of reasoning about morality, long-term planning, object recognition, and other tasks thought to require intelligence?

Expert Bob: Imagine you're talking to a human and you ask them a question about an image you showed them. They respond with a seemingly intelligent answer. You have to understand that all their words are just the result of a clever series of muscle activations in the tongue, mouth, and larynx, timed correctly. Basically, the brain is a clever invention that's programmed to answer only a single question: "What is the next best muscle activation?"

Interviewer: So when the ears hear something, is the brain not hearing actual sound? When the eyes see something, is it not seeing a real image?

Expert Bob: The brain takes sound waves as input, but it isn't hearing sounds. What's fed into the brain isn't sound; it's just encoded electrons and ions. And when the eyes "see" a picture, they aren't actually seeing an image the way we experience it. They pass it to the brain as a rapid-fire series of "on" and "off" voltages, similar to how a computer works.

Disclaimer: This piece of satire is not meant to claim that deep neural nets exhibit true consciousness or are as smart as human brains. Rather, I'm simply debunking the notion that you can prove something has absolutely no "true intelligence/understanding" just because it's programmed to predict the next word.

Friday, January 13, 2023

Stop saying ChatGPT doesn’t actually know anything

A common refrain is to say GPT-3 and ChatGPT “don’t actually know anything” because they’re only programmed to predict the next word. That’s like an alien observing that the human brain is only programmed to maximize long-term happiness, and concluding that it therefore doesn’t actually know anything.

The correct way to determine whether something “knows stuff” isn’t to theorize about what is or isn’t possible based on how it works, but rather to scientifically test its knowledge. And GPT-3 technology has already been shown to perform at the state-of-the-art level on common-sense benchmarks such as question answering, reading comprehension, and SAT questions. It’s nowhere near the human level, but it often outperforms previous AIs specifically engineered for those tasks, even though its only directive was to “predict the next word”. In short, it turns out that when something gets good enough at “predicting the next word”, it starts to exhibit emergent logic and reasoning that conventional wisdom would’ve held was “impossible” for something just programmed to predict the next word.
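To make “predicting the next word” concrete, here is a deliberately toy sketch of the generation loop. This is an assumption-laden simplification: real LLMs use neural networks over subword tokens, not bigram counts over a tiny corpus. It only illustrates that the entire directive is “given the words so far, pick the next one”, applied repeatedly.

```python
# Toy autoregressive "next-word predictor": a bigram count model.
# NOT how GPT works internally -- this only demonstrates the loop
# of repeatedly predicting and appending the next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

def generate(start, n_words):
    """Repeatedly 'predict the next word' -- the model's whole directive."""
    out = [start]
    for _ in range(n_words):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("cat", 3))  # prints "cat sat on the"
```

Swapping the bigram table for a trillion-parameter network changes what the predictor can capture, but not the shape of this loop.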

Instead of asserting something can’t possibly know stuff if it’s only programmed to predict the next word, we should be amazed that something only programmed to predict the next word can already know so much stuff.

Expecting quantum mechanics in the brain to explain consciousness is a fallacy

A very popular theory holds that the mysteriousness of our consciousness lies in quantum processes within the brain, and that science will eventually discover these processes are the true source of consciousness, finally explaining why consciousness happens.

The overarching motivation that seems to drive most proponents of this idea is the observation that "consciousness is unsolved and mysterious, and quantum physics is unsolved and mysterious; therefore the two must be related".

To understand why this specific line of reasoning is a fallacy, let’s review what the Hard Problem of Consciousness really is (please read the linked post). Now that we’ve established why the hard problem of consciousness is considered unsolvable, can you come up with a hypothetical way for quantum mechanics in the brain to explain consciousness? Spoiler alert: no, you cannot, because no matter what objective process you observe in the world, it still has to bridge the gap to the subjective side. No matter what science discovers as the “true” source of consciousness, be it a quantum field, spirit metamaterial, or literal magic dust, we’re back at square one, asking, “That’s nice, but why did that thing cause your subjective inner mind to appear out of nothing?”

Tuesday, January 3, 2023

Rant: Naysayers focus too much on what AI can't yet do

On almost every comment section about generative AI technology (be it text, images, audio, or video), one doesn't have to scroll very far to see comments like "Look how it still can't draw the hands correctly; I'm not worried about art any time soon". Okay, so it still has a ways to go. But what about the things it did accomplish over the last five years that many people thought would be impossible? If you want to predict a trajectory, don't you need to take into account how much has been accomplished so far, instead of fixating only on what remains to be accomplished? Did people just magically forget that conventional wisdom held for decades that computers would never be able to create new, interesting images at all?

To me, when people say this, it sounds like someone looking at a snapshot of a car headed toward a cliff and saying, "Oh, it's still 1,000 miles away, so we've got a lot of time before we need to start worrying," without taking into account that only an hour ago it was 2,000 miles away.