Philosophy as a form of systems thinking, plus a few thoughts about AI
I only recently learned that philosophy was a form of systems thinking, and that to be a philosopher meant to actively develop a system through which one could understand the world.
I suppose it isn't surprising that I hadn't learned this until I was 43 years old; I took a philosophy class in college, an honors seminar for first-year students that was meant to make us feel both privileged and important, and learned that everything was relative and subjective and nothing could be proven.
(This was, in fact, our first assignment – to write an essay in which we attempted to prove something was true. I knew that we were meant to be tricked into writing about God or love or science or math, at which point the professor would wield some gotcha argument [which in fact he did], so I compared two recordings of Leonard Bernstein's Overture to Candide and still believe I successfully proved that the second one was more technically accurate.)
The fact that there was no class consensus on how to solve the trolley problem, or whether we would choose to press an infinite pleasure button, seemed to both demonstrate and verify that an objective best answer, whether or not that answer could be considered truth, did not exist.
I hope I am not misremembering this, because I recall being fond of the professor, who was the perfect combination of gruff and genial. However, I left the class with the impression that philosophy was worthless, in terms of developing a mental framework around which you could organize your decisions and actions.
I have since changed my mind.
(I also looked up the professor and discovered that he has written specifically about the nature of first-year college students defaulting to relativism, and now I'm wondering what might have happened if I had signed on for a second-year philosophy class.)
The point is that I did not know that philosophy was a form of systems thinking, which disappoints me because systems thinking is my very favorite thing.
(This is, by the way, one of the reasons Larry fell in love with me.)
It also disappoints me because more and more people are spending more and more time with AI programs that are both anti-systemic and anti-thought.
"Did you know that AI chatbots give different answers to different people?" I asked Larry the other night.
"No," he said, "but it doesn't surprise me."
"And once you've been talking to one of these things for a while," I continued, "it begins to anticipate the kinds of responses you want to get."
"The same way that YouTube knows what kinds of chess training videos I like to watch."
"Exactly," I said. "And I'm not sure about this one, but you can ask these things for shopping recommendations, and the answers they give you have to be bought-and-paid-for, right? Just like the recommendations at the top of Google or Amazon?"
"Probably," Larry said. "It seems like a reasonable assumption."
"I mean, maybe the AI isn't bought, but even then it just picks the answers at the top of Google or Amazon, which is the same thing," I said. "Which means these things aren't intelligence at all."
"We already knew that."
"Hang on," I said, "I'm making a point here. Intelligence is defined by its replicability. Something that is known stays known; it's only what's unclear or unknown that wibbles and wobbles. We know this from piano practice."
"Yes," Larry said. "We do."
"And something like a calculator," I said, "gives the same answer to every single person, and it gives the same answer every single day. That's an artificial intelligence, in the sense that it gives you both time and brainspace to focus on larger-scale problems."
(We did not discuss the ways in which calculators were also amputations, atrophying the parts of our brains that could have known arithmetic; we've already had that conversation.)
"But this," I concluded, "is something else. It's anti-intelligence. It's telling you that you don't have to think, because it's just going to give you the answers, only the answers are all decided by someone else, and they're subjective depending on whatever it takes to keep you hooked, and there's no reason why those answers won't eventually be sold to the highest bidder if they aren't already. AI is designed not to amplify your thinking or support your thinking but to actively keep you from thinking."
"So ignore it," Larry said.
But to do that –
and I didn't say this last night, but I may this evening –
to successfully ignore AI you have to develop your own consistent system of understanding the world –
and you have to verify this system against reality to ensure that it is accurate –
and you have to accept that this system may necessarily be incomplete while continuously working towards its completion.
In other words, you need to become a philosopher.