Sunday, December 28, 2025

Stop saying generative AI "isn't actually AI"

Do you know the historical, industry-standard definition of AI? Do you? I invite you to look up "History of artificial intelligence" on Wikipedia, because the term is a lot more nebulous than many people assume.

In fact, "AI" used to refer to practically everything under the sun related to even the most rudimentary attempts to simulate intelligence. The field of "AI" included tree search, decision trees, simple if/then rule systems, and statistical techniques such as Markov models. Many are surprised to learn that machine learning was merely the most advanced subset of "AI". In my 2007 undergrad class called "Artificial Intelligence", machine learning was the last unit of the course, and neural nets were only mentioned as an afterthought on the last day of class, with the professor noting they had potential in the future. Today, they're ubiquitous.
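For concreteness, here is a minimal minimax tree search in Python, exactly the kind of exhaustive recursion that era's textbooks filed squarely under "AI" (the toy game tree here is my own illustration, not from any particular course):

```python
# Minimax tree search: a staple of classic "AI" curricula, and nothing
# more than exhaustive recursion over a game tree.

def minimax(state, maximizing, get_moves, evaluate):
    moves = get_moves(state)
    if not moves:                      # leaf node: score the position
        return evaluate(state)
    scores = (minimax(m, not maximizing, get_moves, evaluate) for m in moves)
    return max(scores) if maximizing else min(scores)

# Toy game tree as nested lists; leaves are integer payoffs.
tree = [[3, 5], [2, 9]]
get_moves = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
print(minimax(tree, True, get_moves, evaluate))  # -> 3
```

The maximizer picks the branch whose worst-case (minimized) leaf is best: min(3, 5) = 3 beats min(2, 9) = 2.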

Just about the only domain that defined "AI" as something more than a program was the entertainment industry: movies and video games. Mass Effect popularized the "VI" vs. "AI" terminology, in which a VI (virtual intelligence) isn't conscious but an AI (artificial intelligence) is. I will bet you that the vast majority of people who insist that AI by definition refers to a conscious entity are actually remembering this terminology from Mass Effect, a science fiction game, and have somehow convinced themselves that it was an actual industry-standard naming system used by computer science researchers.

Wednesday, December 24, 2025

Someone explain the obsession with the claim that AI can replace junior developers but can't replace senior developers

I seriously can't go 5 minutes on the internet without seeing this claim. A bunch of people say AI coders will replace junior devs, but not senior devs. It gets 100+ upvotes no matter the social media site.

Now the first question I would ask is, "do you remember what you did as a junior dev?" 100% of the time I ask this question, I GET NO RESPONSE.

A junior dev is capable of developing an entire project from scratch with nothing other than Stack Overflow and some grit, and that was in the pre-AI era. If you don't believe me, ask one of these "senior" devs who claim AI will replace junior devs but not senior devs what they remember doing as a junior dev, and what they were capable of.

Here is my claim: If AI can replace the average junior developer, it can replace the average senior developer within 6 months. If AI can't replace the average senior developer, it can't replace the average junior developer. Let time tell whether I am right and how well my claim has aged.

Thursday, November 6, 2025

Prediction: Most software won't run on code in the future

Are you jaded about the trend where as hardware gets better, software gets less efficient and more bloated? Buckle up because it's about to get a lot weirder.

As AI inference trends toward faster and cheaper, and as the models' accuracy improves, there may come a time when a computer is no longer running on code; it's running on an AI telling it what to do in real time. Now you might think, isn't that the same as AI writing code? No, it's not; the AI won't need to write code anymore. It will sit between the user and the low-level operations, deciding in real time what pixels to show the user and what I/O operations to perform based on the context and the user input.
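As a sketch of what that architecture might look like, here is a hypothetical event loop. Everything in it, including `query_model`, is a made-up stand-in for a future low-latency model; nothing here is a real API:

```python
# Hypothetical sketch of "AI as the runtime". There is no compiled
# application logic: a model decides every frame and I/O action from
# the running context. `query_model` is a stub, not a real API.

def query_model(context, user_input):
    # Stub: a real system would infer pixels and I/O commands with a
    # neural net. We hard-code one trivial behavior for illustration.
    if user_input == "click_save":
        return {"frame": "saved_dialog", "io": ["write_file"]}
    return {"frame": "idle_screen", "io": []}

def run(events):
    context = []                        # the model's running memory
    for user_input in events:
        decision = query_model(context, user_input)
        context.append((user_input, decision))
        yield decision                  # caller shows frame, performs I/O

for step in run(["move_mouse", "click_save"]):
    print(step)
```

The point of the sketch: the "program" is just the context plus the model's judgment, with no hand-written branching beyond what the stub fakes.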

How did I come up with this [dumbass|genius] idea, you might ask? One of the holy grails of software engineering is to eliminate bugs. Obviously bugs can't be eliminated outright, but if you'll notice, most bugs have a clear intended behavior. The next time you encounter a software bug, ask yourself whether the bug would've happened if, instead of software controlling everything, it were a human with a very good memory who could think at 1000x speed.

Won't this introduce a ton of other bugs and unexpected behavior, you might ask? Yes, and that's why I agree my idea sounds ridiculous, and why it would only work if AI models became really good and really fast.

Can a giant neural net really replace a huge amount of code, like an entire operating system? It seems like a crazy idea, but crazier things have happened. The fact that text and images can be generated coherently today would've been considered pretty crazy 10 years ago, yet for some reason people now dismiss it with "of course computers can do that" just because it's not as good as a human.

We can see an extremely nascent version of a similar idea in DeepMind's Genie 3, which generates pixels directly rather than using game state as an intermediary (the game state is effectively stored in the neural net itself, with no manually coded logic).

Friday, August 8, 2025

Circular Time Travel isn't Paradoxical

When I was younger I remember always being pissed at those movies with the typical circular logic time travel loop: The protagonist is convinced through some event to travel back in time. They end up causing the events which convinced them to travel through time in the first place and ultimately either couldn’t or wouldn’t change the past. I thought this was paradoxical, because it relied on circular logic, and if at any point the protagonist broke the chain they could’ve changed the past and the whole movie wouldn’t make any sense.

But then I watched Dark (which I loved, until they ruined it with a feel-good ending about being able to change your fate), and it occurred to me that time loops aren’t necessarily paradoxical. After all, what’s stopping the existence of a loop where everyone agrees to and somehow serendipitously does everything right in order to make the loop self-consistent? All it requires is that everyone in the universe is either unwilling or unable to change their past.

As for what happens if anyone in the universe is willing or able to change their past, the answer is simple: such a universe could not have existed in the first place. Therefore, by the anthropic principle, the only time loops that can exist are those where everyone is unwilling or unable to change the past.

For this same reason, it’s silly for time travel protagonists to worry about causing a paradox, because if they were able to cause a paradox the entire universe they inhabit couldn’t have existed in the first place. Then again, it's also fair to say that their illogical worrying is what allows that whole universe to exist in the first place. Just know that if time travel were possible in real life, you wouldn't need to worry about causing a paradox, because the fact you and your universe exist guarantees it's logically impossible for you to cause a paradox.

Remember: a time loop cannot change over time! Imagining a time loop as a looping video where people can change over each iteration, and/or wink in or out of existence “over time” (like the end of Dark), is illogical, because a time loop already includes time.


Note: regarding the idea that it's paradoxical because you don't know what "caused" the time loop itself: I think this stems from a misconception that a time loop can be caused or created at some point in time, rather than simply existing. The concept of cause and effect only makes sense within a causal chain. If you ask what caused a causal chain itself, the question is naturally unanswerable, just like the question of what caused the first cause of our universe. Here is the reddit argument which inspired this post.

Note: this post purposely ignores the “branching universes” model of time travel because it’s a cop-out that avoids all paradoxes trivially because you don’t need to tie together any loose ends.

Sunday, October 13, 2024

The sheer stupidity of using the “r’s in strawberry” question as evidence of LLM incompetence

The meme of asking an LLM how many r's are in the word "strawberry" went viral because of public ignorance about how LLMs work. An LLM doesn't even see the individual letters in a word: its input arrives as subword-sized chunks called tokens. As such, questions about individual letters, like asking it to count a word's letters or asking which words start with a specific letter, are uniquely difficult for LLMs and largely unrelated to performance on other tasks.

If you think about it, the only way it can even infer that strawberry has any r's at all is from context clues somewhere deep in its training set, where someone may have spelled out "b-e-r-r-y" with dashes in some random blog or forum post. The fact that models even sometimes get these questions right should therefore be considered nothing short of amazing.
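The chunking problem can be illustrated with a toy subword vocabulary. The splits and IDs below are invented for illustration, not those of any real tokenizer:

```python
# Toy illustration: a made-up subword vocabulary, NOT a real tokenizer.
# The point: the model receives opaque token IDs, never letters.
vocab = {"straw": 1001, "berry": 1002}

def tokenize(word, vocab):
    """Greedy longest-prefix match over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError("no matching piece")
    return tokens

ids = tokenize("strawberry", vocab)
print(ids)  # -> [1001, 1002]
# The sequence [1001, 1002] contains no letter 'r' anywhere: counting
# the r's requires memorized knowledge of each token's spelling,
# not simply reading the input.
```

Asking "how many r's?" of something that only ever sees `[1001, 1002]` is a memory test, not a reading test.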

Tuesday, July 9, 2024

On Generative AI Art: A Prompter is not an Artist (yet)

I have heard a number of people claim that they are the creative authors of these works because they refine their prompts until they get what they want. However, this fails a simple litmus test: isn't refining a prompt still closer to what a manager or client would traditionally do? They tell the artist what to make, the artist makes it, then they request edits, and so on until the piece is done to their liking. A prompter's role is therefore still more like that of a client who hired an artist, the artist in this case being the AI.

Additionally, I propose a way to objectively measure creative contribution, with no technical gatekeeping and no requirement that the work be done a certain way: a mathematical definition of creative authorship.

The fewer variations of an output that can match your spec, the more you contributed to the creative process.

Consider three scenarios: a classical piece with all notes fully written out; a jazz piece which allows significant improvisation from the soloists; and an AI-generated piece where someone prompted “Epic orchestral soundtrack, sci-fi battle, heroic with French horn melody, high-quality”.

1. The composer has already restricted the subspace of possible sounds to a small slice, so they’re recognized as contributing the majority of the creative process. There is still room for interpretation in dynamics, expression, etc., and the musicians are recognized for putting their personal touch on these pieces, reducing the subspace from that small slice down to one unique recording.

2. There is much more leeway for interpretation via improvisation. In any particular recording, the composer and the soloists are all rightfully recognized as having contributed significantly to the work, because the soloists are composing important parts of the piece on the fly.

3. The instruction is so vaguely specified that countless wildly different pieces could’ve fit the requirements. As such, your role as the prompter was more like that of a client who hired the AI to do all the creative work for you, rather than an actual artist.

This concept applies universally to both music and visual arts. It also leaves the door open for a prompter to legitimately become an artist in the future; they'd just have to be incredibly specific with their instructions, and the model would need to follow those instructions very accurately.
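One way to write the definition down, with notation that is mine rather than anything standard:

```latex
% Sketch formalization; the notation is my own. Let U be the space of
% all possible outputs and S \subseteq U the subset consistent with a
% given spec. The spec author's creative contribution is the
% log-reduction in volume:
C(\mathrm{spec}) = \log\lvert U\rvert - \log\lvert S\rvert
% A fully notated classical score makes |S| tiny, so C is large;
% a one-line prompt leaves |S| enormous, so C is near zero. The
% performers' contribution is the further reduction from S down to
% the single realized recording.
```

This matches the three scenarios: the classical composer claims most of the reduction, the jazz soloists split it with the composer, and the vague prompter claims almost none.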

Tuesday, April 2, 2024

Defeat Sleep Paralysis by Snoring

I am surprised that this does not seem to be documented anywhere on the internet (or maybe search engines just suck these days), but I discovered a few years back that the eye muscles aren't the only thing you can control during deep sleep. There is a muscle in the back of the throat that causes snoring, which you can also control while lucid dreaming or while in sleep paralysis. I activate that muscle to produce the loudest snore possible and usually wake myself up within 2 snores. I have been using this method to great success, except today, when even after 2 extremely loud snores I still did not wake up, which was quite disconcerting.

I was then awoken by my wife loudly complaining about my snores, which confirms that the control over my snores was real rather than imagined.