Yeah, I know it’s only a few more weeks before that all-important November election, but let’s take a break and discuss something that’s probably more important in the long run than politics (and it still could affect that election!): That ubiquitous AI fad.
First, let me tell you why AI’s generally dangerous. AI is certainly “artificial”: all software is artificial, no matter how cleverly it’s written. Think about it: someone is creating a bunch of instructions (basically a sequence of ones and zeroes) to do something on machines that run on ones and zeroes. (These include your smartphone and laptop.) Sometimes that “something” might be useful; other times (more and more with every passing month!) it’s just annoying or basically crap. And the human beings creating it can be either one too, which means we should always be wary of both the software and its creators!
AI is certainly not “intelligence.” I’m using a very restrictive definition of intelligence, of course: I mean that current AI is far from even beginning to have the abilities of HAL (even HAL’s good version 2.0 from 2010, not 2001). Most current AI programs are little more than sophisticated browsers that can search speedily through information available on the internet and sometimes (emphasis on “sometimes”!) use that information to answer users’ questions, correctly or incorrectly. (While it can’t dream, sometimes it can hallucinate.) In other words, current AI is just a super-fast version of that old pre-internet Q&A program you could load onto your Color Computer and play with long ago. (Interestingly enough, because I couldn’t remember its name—my Lord, that was only the nineties!—I asked the Google browser what the name of that program was…and it didn’t know either!)
So far, so good, so let’s get to some of the many dangers of current AI:
Can current AI help inform medical patients? Oh yes, it certainly can, depending on what you mean by “inform.” Patients shouldn’t necessarily believe the information or recommendations it comes up with, though. A recent item in The New York Times (home edition, 9/26/2024) underlines the problem: during the Covid pandemic, telemedicine took off…and probably for good reasons. (After all, most patients were smart enough not to believe what Trump and his cronies said to do, e.g., inject disinfectant as a Covid cure.) Patients started using their computers to flood doctors’ offices with all sorts of questions because they were reluctant to have in-person appointments…and many doctors and nurses were reluctant to offer them! What happened here mimicked what occurred in many social interactions, even after Covid: the questions continued. Epic Systems, the creator of MyChart (why isn’t this damned software patient-specific instead of doctor- and institution-specific, so I don’t need a handful of MyChart applications?), saw that doctors were swamped with health queries. That’s mostly the doctors’ fault, of course: healthcare is a billion-dollar business, and doctors and the AMA want to keep it that way, so they apply that old economic bludgeon (which is mostly true) that high demand and short supply (in this case, of practicing doctors) guarantee lots of profit for all the greedy bastards. So what did Epic do? They added AI to MyChart, so at best patients get answers that are the AI’s responses lightly edited by doctors…and at worst, pure AI. “Danger, danger, Will Robinson!”
(This Times article doesn’t begin to discuss the problems associated with AI software reading x-rays, CAT scans, MRIs, and so forth…or with providers of those procedures forcing patients to pay even more money for questionable AI reads, sometimes intended to completely replace the very doctors the AIs supposedly outperform. The motivating factor for all of this seems to be greed in the healthcare business, not improving patients’ diagnoses!)
Can current AI be used to simplify political advertising? Oh my yes! All advertising even! But let’s move on to just politics for now.
In fact, especially in political campaigning, AI can be used to create lies one candidate can use to attack another’s integrity and record. Or, even worse, it can be used by a US adversary to create the kinds of conspiracy theories that some gullibly stupid voters just love to believe. That’s all political advertising on steroids!
With AI, a political campaign can turn the opposition into monsters. And China, Russia, Iran, North Korea, and any other autocratic and/or economic foes of the US can create even more chaos in US elections, from the presidential race down to your local school board. In fact, AI is very good at mimicking a candidate’s voice and body language: those are just ones and zeroes too, so not much cleverness is required, just the dumb AI software.
What about creating infrastructure problems? The aging, automated systems running power plants, shipping and plane schedules, law-enforcement operations, and so forth were already very susceptible to hacking. Now the bad actors can do a lot more damage by using AI during or after the hack. Current AI is quite capable even now of doing what the machines achieved in the Terminator movies…and it doesn’t need old Arnold to do it!
Energy concerns. This is yet another negative for AI, but no one seems to mention it very much. AI needs a lot of energy to cast its broad nets across the world: it’s estimated that driving today’s AI already consumes as much energy as a modern country like Sweden does. Here’s another easy way to describe this danger: using AI works against finding any solution to the global-warming problem because it forces us to use even more energy, not less! And this will only get worse as more and more countries turn to AI for whatever perceived advantages they think it brings.
The more we let computers invade and run our lives, the bigger the chance there is for AI to cause a disaster. That’s a world disaster, perhaps a world-ending one, not just one for the US! That is the ultimate danger of current AI: it’s still very primitive, but it’s already so very dangerous. Let’s not be too quick to trust it. That old maxim from The X-Files deserves some thoughtful modernization: trust no one and nothing, human or AI, and always verify! And if you don’t feel competent enough to do that, don’t ask AI to do it for you!