Friday, January 13, 2023

Stop saying ChatGPT doesn’t actually know anything

A common refrain is to say GPT-3 and ChatGPT “don’t actually know anything” because they’re only programmed to predict the next word. That’s like an alien observing that the human brain is only programmed to maximize long-term happiness and concluding that it therefore doesn’t actually know anything.

The correct way to determine whether something “knows stuff” isn’t to theorize about what is or isn’t possible based on how it works, but rather to scientifically test its knowledge. And GPT-3 technology has already been shown to perform at a state-of-the-art level on common-sense benchmarks such as Q&A, reading comprehension, and SAT questions. It’s nowhere near the human level, but it often outperforms previous AIs specifically engineered for those tasks, even though its only directive was to “predict the next word”. In short, it turns out that when something is good enough at “predicting the next word”, it starts to gain emergent logic and reasoning that conventional wisdom would’ve held was “impossible” for something just programmed to predict the next word.

Instead of asserting something can’t possibly know stuff if it’s only programmed to predict the next word, we should be amazed that something only programmed to predict the next word can already know so much stuff.

Expecting quantum mechanics in the brain to explain consciousness is a fallacy

A very popular theory floating around is that the mysteriousness of our consciousness lies in quantum processes within the brain, and that eventually science will discover these processes are the true source of consciousness and thereby explain why consciousness happens.

The overarching motivation that seems to drive most proponents of this idea is the observation: “consciousness is unsolved/mysterious, and quantum physics is unsolved/mysterious; therefore the two must be related.”

In order to understand why this specific line of reasoning is a fallacy, let’s review what the Hard Problem of Consciousness really is (please read the linked post). Now that we’ve established why the Hard Problem of Consciousness is considered unsolvable, can you come up with a hypothetical way for quantum mechanics in the brain to explain consciousness? Spoiler alert: no, you cannot, because no matter what objective process you observe in the world, it still has to bridge the gap to the subjective side. No matter what science discovers to be the “true” source of consciousness, be it a quantum field, spirit metamaterial, or literal magic dust, we’re back at square one, asking, “that’s nice, but why did that thing cause your subjective inner mind to appear out of nothing?”

Tuesday, January 3, 2023

Rant: Naysayers focus too much on what AI can't yet do

On almost every comment section about generative AI technology (be it with text, images, audio, or video), one doesn't have to scroll very far to see comments like "Look how it still can't draw the hands correctly; I'm not worried about art any time soon". Okay, so it still has a ways to go. But what about the stuff it did accomplish over the last 5 years that many people thought would be impossible? If you want to predict a trajectory, don't you need to take into account how much it's accomplished so far, instead of fixating only on what remains to be accomplished? Did people just magically forget that conventional wisdom held for decades that computers should never be able to create new, interesting images at all?

To me, when people say this, it sounds like someone looking at a snapshot of a car headed toward a cliff and saying, "oh, it's still 1,000 miles away, so we've got a lot of time before we need to start worrying", without taking into account that only an hour ago it was 2,000 miles away.