Writer, musician, freelancer.

Assorted thoughts (mostly on intelligence)

I had some family stuff this weekend, so I got a bit behind on everything else. I'm caught up now; here's what I've been thinking about in the interim:

  • It seems as if everyone who is arguing that AI will eventually destroy most humans is in fact saying that they believe most humans should be destroyed. The argument seems to be that the maximally intelligent and powerful choice is to destroy most humans, which is to say that it is the choice they themselves have thought of and would consider implementing if they had enough intelligence and power. They obviously aren't going to do it; I'm not accusing or even implying that anyone writing about AI is actively considering genocide. But it baffles me that they think the smartest move on the board is to destroy people. (Teaching people, for example, is apparently not an option.)
  • Likewise, I just saw this thing about a young man who maybe kinda sorta used AI to help him create a nuclear fusor, except the person writing the Substack post about the whole thing isn't sure the fusor even works. There's no proof anything got fused; the machine lights up and makes whirring sounds, and everyone is all "this is why AI can't be trusted, the young man got it to give him instructions on how to build a fusor even though the AI didn't want to." The phrase "the AI didn't really want him to do this" appears verbatim in the post, and ummmmm, how does want even come into play here? AI is an overhyped search engine that initially returns "no" when you ask it how to build a nuclear fusor and eventually returns "here's how you build one component that might be associated with a fusor" if you ask a more specific question about a single component. (The young man could have done the entire thing with Google, if Google were still functional.)
  • That said, if the AI in fact has agency and in fact did not want this young man to build a working nuclear fusor, the project appears to have been successful.
  • There's another Substack post that argues AI exists to help the less-than-mediocre become mediocre (you've probably seen that one too; we all read the same things). It embeds some of those commercials in which people who aren't very good at putting ideas into words use AI to craft boring-but-competent emails. Good for them. But really this comes down to the same argument as the destroy-all-humans one: blah blah blah, most humans (sans the authors of these kinds of posts) are terrible, and the best thing AI can do is fix their terribleness, either by helping them write better emails or, um, killing them all.
  • Meanwhile, I've gotten to the part in Middlemarch where Dorothea makes the suggestion that everyone who believes themselves to be intelligent makes at a certain point: "I have delightful plans. I should like to take a great deal of land and drain it, and make a little colony, where everybody should work, and all the work should be done well. I should know every one of the people and be their friend." (Dorothea, you beautiful fool.)

You should be able to see the connections between these ideas, I imagine – and if you can't, you might be able to ask AI to identify them, which is exactly what I am going to do right now. Here's ChatGPT's response:

The connection between these items seems to revolve around the theme of human intelligence, AI, and the tension between idealism and practicality. Here’s a breakdown of how they tie together:
  1. The destructive AI argument: The idea that AI might destroy humans suggests a belief in the superiority of AI over human intelligence, but the argument also implicitly criticizes humans as not worthy of survival. It challenges the assumption that intelligence (or the “right” choice) would necessarily lead to the destruction of people, especially if there are alternatives like teaching or improving society instead.
  2. The AI and nuclear fusor incident: This part points to an example of people overestimating AI’s agency and understanding, as if the AI "wanted" or "didn't want" the young man to build a fusor. It highlights a misinterpretation of AI's capabilities and a tendency to anthropomorphize it, giving it a kind of moral agency that it doesn't have. The idea that AI is guiding human actions in ways we don’t fully understand is tied to the earlier argument—human over-reliance on AI, with some fearing it could lead to undesirable outcomes (like destruction).
  3. AI as a tool for mediocrity: This section criticizes the use of AI to help people perform basic tasks more competently, suggesting that AI is essentially "fixing" human mediocrity. It touches on a recurring idea that AI might ultimately just be used to smooth over human flaws and inefficiencies rather than elevate humanity to higher levels of intelligence or creativity. Again, there's a tension between idealism (what AI could do) and a more sobering reality (what AI is actually used for).
  4. Dorothea in Middlemarch: Dorothea’s plan to build a perfect colony represents an idealistic desire to control human lives and improve them, though she lacks the practical understanding of how human nature and society work. It mirrors the AI debate in that it reflects the same kind of utopian thinking—believing that intelligence (whether human or AI) can create a perfect society, when in reality, such grand designs often overlook the messy complexity of actual human behavior and limitations.
In summary, these items are all grappling with the tension between idealistic views of intelligence (whether human or AI) and the practical limitations of implementing those ideals. They explore the desire to fix, control, or elevate humanity through intelligence—whether by AI or human hands—but also acknowledge the flaws in those grand plans. Each scenario points to the disconnect between our understanding of intelligence and how it manifests in real-world outcomes.

Hmmm. I mean, yes, all that and the bag of chips, except the chips fall on the side of every person on this list believing they are smarter than everyone else, including the existing artificial intelligences. I'm amused that ChatGPT didn't grok that (I wonder if Grok would have).

Meanwhile, and I say this only to make my point perfectly clear –

Do you know how you learn that you aren't half as brilliant as you thought you were?

By writing an overture to a musical.

This is both a metaphor and a practicality, in my case, and you can substitute anything you want for "overture," up to and including "nuclear fusor."

But it's what Larry and I were talking about last night, among all the other things we discussed. The three most important things he and I can do, as humans, appear to be:

  1. To study masterworks (music, literature, theater, etc.)
  2. To teach masterworks (ditto, and incrementally)
  3. To create masterworks (ditto, tritto, and to see where we fall short)

Doing those three things puts you right where you are, in terms of how your intelligence actually stacks up – and that was part of what Larry said last night, that perhaps one of the most important things was in fact to know where you are so that you might know what to do next – and it prevents you from deluding yourself into thinking you're the smartest increment in the stack.

Which, in turn, may prevent you from thinking that you know anything about what should be done with the rest of humanity.