Tuesday, April 2, 2024

Defeat Sleep Paralysis by Snoring

I am surprised that this does not seem to be documented anywhere on the internet (or maybe search engines just suck these days), but I discovered a few years back that the eye muscles aren't the only muscles you can control while asleep. There is a muscle in the back of the throat that causes snoring, and you can also control it while lucid dreaming or in sleep paralysis. I activate that muscle to produce the loudest snore possible, and I usually wake myself up within two snores. I have been using this method with great success most of the time, except today, when even after two extremely loud snores I still did not wake up, which was quite disconcerting.

I was then awoken by my wife loudly complaining about my snoring, which confirmed that my control over the snores was real rather than imagined.

Friday, December 8, 2023

The human brain: Is it actually intelligent?

The following is a satire of trendy articles explaining that deep neural nets such as LLMs are "just" predicting the next token, as if that were some sort of grand philosophical insight.

Interviewer: The human brain has captivated the masses. Many people think it exhibits true intelligence. What are your thoughts?

Expert Bob: Well, it can certainly seem that way, to be sure. But if you're familiar with how the human brain actually works, you'll know it's just a façade.

Interviewer: Can you explain to us how it works?

Expert Bob: Of course. What happens is there are input sensory neurons, taking in signals from the eyes, ears, nose, and skin. All these electrical signals get processed in the brain, which is at the end of the day just a giant network of neurons. Finally, the brain outputs some signals which get sent as muscle activations.

Interviewer: So you're saying all it does is predict the next best set of muscle activations?

Expert Bob: If you peer into the brain with a microscope or high-tech scanner, you can trace every electrical signal down to its physical causes. It's a very complex biological machine, but ultimately predictable and devoid of true intelligence.

Interviewer: What are your thoughts on the fact that they seem to be capable of reasoning about morality, long-term planning, object recognition, and other tasks thought to require intelligence?

Expert Bob: Imagine you are talking to a human, and you ask them a question about an image you showed them. They respond with a seemingly intelligent answer. You have to understand that all their words are just the result of a cleverly timed series of muscle activations in the tongue, mouth, and larynx. Basically, the brain is a clever invention that's programmed only to answer a single question: "What is the next best muscle activation?"

Interviewer: So when the ears hear something, is the brain not hearing actual sound? When the eyes see something, is it not seeing a real image?

Expert Bob: The brain takes sound waves as input, but it isn't hearing sounds. What's fed into the brain isn't sound; it's just electrical signals encoded in ions and electrons. When the eyes "see" a picture, the brain isn't actually seeing an image the way we experience one. The eyes pass it along as a rapid-fire series of "on" or "off" voltages, similar to how a computer works.

Disclaimer: This piece of satire is not meant to claim that deep neural nets exhibit true consciousness or are as smart as human brains. Rather, I'm simply debunking the notion that you can prove something has absolutely no "true intelligence/understanding" just because it's programmed to predict the next word.

Friday, January 13, 2023

Stop saying ChatGPT doesn't actually know anything

A common refrain is to say GPT-3 and ChatGPT "don't actually know anything" because they're only programmed to predict the next word. That's like an alien observing that the human brain is only programmed to maximize long-term happiness and concluding that humans therefore don't actually know anything.

The correct way to determine whether something "knows stuff" isn't to theorize about what is or isn't possible based on how it works, but rather to scientifically test its knowledge. And GPT-3 technology has already been shown to perform at the state-of-the-art level on common-sense benchmarks such as Q&A, reading comprehension, and SAT questions. It's nowhere near the human level, but it often outperforms previous AIs specifically engineered for those tasks, even though its only directive was to "predict the next word". In short, it turns out that when something is good enough at "predicting the next word", it starts to gain emergent logic and reasoning that conventional wisdom would've held was "impossible" for something just programmed to predict the next word.
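To make "programmed to predict the next word" concrete, here is a deliberately trivial sketch of my own (a bigram counter, nothing remotely like a real transformer): even the simplest next-word predictor is just counting which word most often follows the current one. The point is that the training objective sounds mundane regardless of how capable the resulting system is.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A real LLM replaces the frequency table with a neural net over billions of parameters, but its objective is the same shape: given context, output the most likely continuation.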

Instead of asserting something can’t possibly know stuff if it’s only programmed to predict the next word, we should be amazed that something only programmed to predict the next word can already know so much stuff.

Expecting quantum mechanics in the brain to explain consciousness is a fallacy

A very popular theory holds that the mystery of our consciousness lies in quantum processes within the brain, and that science will eventually discover these processes are the true source of consciousness, thereby explaining why consciousness happens.

The overarching motivation that seems to drive most proponents of this idea is the observation: "consciousness is unsolved/mysterious, and quantum physics is unsolved/mysterious; therefore the two must be related".

To understand why this specific line of reasoning is a fallacy, let's review what the Hard Problem of Consciousness really is (please read the linked post). Now that we've established why the hard problem of consciousness is considered unsolvable, can you come up with a hypothetical way for quantum mechanics in the brain to explain consciousness? Spoiler alert: No, you cannot, because no matter what objective process you observe in the world, it still has to bridge the gap to the subjective side. No matter what science discovers is the "true" source of consciousness, be it a quantum field or spirit metamaterial or literal magic dust, we're back at square one, saying, "that's nice, but why did that thing cause your subjective inner mind to appear out of nothing?"

Tuesday, January 3, 2023

Rant: Naysayers focus too much on what AI can't yet do

On almost every comment section about generative AI technology (be it with text, images, audio or video), one doesn't have to scroll very far to see comments like "Look how it still can't draw the hands correctly; I'm not worried about art any time soon". Okay, so it still has a ways to go, but what about the stuff that it did accomplish over the last 5 years which many people thought would be impossible? If you want to predict a trajectory, don't you need to take into account how much it's accomplished so far, instead of fixating only on what yet remains to be accomplished? Did people just magically forget that conventional wisdom held for decades that computers should never be able to create new interesting images, at all?

To me, this sounds like someone looking at a snapshot of a car headed toward a cliff and saying, "Oh, it's still 1,000 miles away, so we've got a lot of time before we need to start worrying," without taking into account that only an hour ago it was 2,000 miles away.

Tuesday, March 1, 2022

AI Dungeon Master: An argument in favor of philosophical zombies

The year is 2050 and robot companions are commonplace. We assume they are conscious because they behave exactly like humans. Jack (a human) has formed a bond of friendship with his robot butler, named Alan.

One day Jack feels lonely so he asks Alan if he can emulate a woman. Alan says this is a slippery slope and asks Jack how realistic he wants the woman to be. Jack says, "As realistic as you can make her".

Alan obliges and begins speaking in a woman's voice, a persona now known as "Emily". Emily says "it's very nice to meet you, Jack" and they hit it off. She is 100% convincing and (other than lacking a human body) indistinguishable from a human. They talk for a few weeks and grow more attracted to each other, and at some point Emily even claims she loves Jack.

After a while Jack has had enough and decides it's maybe going too far. After all, Emily is just Alan pretending to be Emily, the same way a human dungeon master might come up with NPC responses in real time. He asks Alan to revert to his original personality, but "Emily" says Jack doesn't know what he's talking about and gets mad that he's not treating her like a real person.

Finally, after some dramatic yelling, Alan's voice comes through the robot again. Alan explains to Jack that he was only doing what he was told (being as realistically Emily as possible). Alan reassures Jack that Emily can't possibly be alive because, according to his analysis of his own robot brain patterns, there is only one consciousness in his brain, his own, which manufactures Emily by imagining what it would be like to be her. There is no pathway in his brain that feels genuine love for Jack.

We arrive at the following conundrum:

  • If Emily is not real and just an emulation, it means philosophical zombies are possible, because Emily was indistinguishable from a real Emily.
  • If Emily is real and truly a conscious being, it means any figment of our imagination or fantasy character during play-pretend must also be considered real, because Alan was doing the same thing a human dungeon master would do to create realistic real-time NPC dialogue with the player, just faster and better.

Wednesday, October 20, 2021

COVID Vaccine Mandates are super effective if the goal is to plunge the world into further political turmoil

The risk of increased political radicalization and domestic terrorist activity from COVID vaccine mandates far outweighs the benefits of the few lives saved. 

I would normally be pro vaccine mandate if it hadn't become so politicized, but the fact is it has. Some people have chosen to distrust mainstream media (which is reasonable), yet instead of doing the rational thing and looking directly at the original scientific studies and data, they decided to put their faith in alternative/conspiracy media outlets, hypocritically becoming their own version of the "sheeple" they love to make fun of.

Think about it: These people genuinely believe the vaccine does more harm than good. The mandates are forcing people to inject into their bodies something they believe is extremely dangerous and serves no benefit.

Imagine someone was religious and you forced them to do something that they believe will cause them to go to hell. Would it be legal? Of course not; there are religious exemptions for many things, including vaccines. Now for the million dollar question: Why is a religious exemption accepted for vaccines, but a political exemption isn't? Don't people cling to political beliefs exactly the same way as religious beliefs? An anti-vaxxer who calls everyone "sheeple" physically cannot change their minds based on new evidence. Their faith in their stance is unshakeable, exactly the same as a religious person's faith. 

The exemption shouldn't require religious faith. It should simply require that the person swear they genuinely, 100% believe the scientific data is all a lie and that the vaccine is riskier than COVID.

The common refrain is that COVID vaccines get an exception because lives are at stake. I think this is a band-aid solution that ignores long-term repercussions. The anti-vaxxer who loses their job doesn't just crawl into a hole and hide. They go on the internet, fume and rage, and become even more radicalized. At this point, the risk of accidentally grooming large-scale domestic terrorism is far more serious than a tiny increase in COVID deaths in a largely vaccinated population. Anyone who disagrees probably hasn't seen the kind of anti-left videos that have been trending on YouTube recently, replete with comments unironically advising each other to become terrorists and mass shooters in order to stop the globalized elite from taking over the world.