Back when I was in grad school, I was listening to the NPR news on the way to class, and heard a story about the military meeting with ethics profs to discuss using robots in war, notably autonomous robots. There was some mention of concerns about “rogue A.I.,” and I grinned a little as the closing music clip came on. It was the theme to the first Terminator movie. (That’s also when I discovered that my Advisor didn’t know about things like Terminator. I was mildly surprised.)
I’ve been listening to the last month’s breathless reportage about A.I. and what it can do and how it will eliminate jobs (for what, the tenth time already?) and how perhaps the singularity is coming soon and so on and so forth, and how A.I. will do it all. First, it confirms my belief that 99% of journalists don’t know anything about computers. How to use programs, yes, perhaps, but not how the things work and the basic way programs do their thing. Second, I get the sense that these people have never, ever read dystopian techno-fiction or early cyberpunk, or watched things like Terminator or that TV show for kids (with the interactive way to shoot at the bad robots on the screen.)
Very early on, Isaac Asimov developed the Three Laws of Robotics, and used various short stories and then novels to explore their ramifications. The movie 2001: A Space Odyssey guaranteed that no one of a Certain Generation will use “HAL” as a key term for a voice-activated system, unless they are warped. Really warped. When people started talking about how wonderful it would be to have computers in our minds and cybernetic augmentations to our bodies, along came the Cybermen from Doctor Who. And a few other things. All are about computers that got a “wee bit” out of hand, and either decided that humans were superfluous, or that humans were actively antithetical to the computers’ needs and should be eliminated. The Cybermen traded physical survival for their humanity, with really bad results for everyone else around them.
I tend to be untrusting of technology in the first place, so I latched onto the dystopian-technology stories. Yes, computers and bionics and other things could do wonderful things in fiction. But … I’ve had computers die at awkward moments. I’ve had GPS systems get migraines when I really needed them (in the weather, when my hands were full of “first fly the plane”, just as the last ground-based beacon went out of range.) Computers are literal. Yes, we program them to deal with hundreds of variables, and some models for things look very good. But we programmed them. And truly complex systems? Go look at the percentage of success retrocasting weather and climate using climate models and supercomputers. I’ll wait.
So when the latest breathless claim comes along that “A.I. will revolutionize writing! It will make cover artists obsolete! It will replace humans for [whatever]!”, I don’t believe it. Artificial intelligence programs are still programs. They adapt and process data quickly, but thus far, they can’t make the leaps people do. They can improve, as Midjourney has with anatomy (although human hands are still a challenge, among other things), but those are programs with inputs and patient corrections. ChatGPT likewise, and as people play with it, it becomes obvious that it can’t analyze literature worth a fig. It is programmed to have a certain bias and to have blind spots, because it’s a program. It’s a creation of humans who want it to have a bias.
Computers and robots work for some things, like delicate and repeating tasks (welding certain things, taking burger orders.) If you have a limited range of parameters, computers and robots are great. “Two beef patties, no lettuce, white cheese, no mayo, and a medium fry” the things can deal with, as long as a person is around to make sure that the right patties went into the hopper and that the other things are where they should be. Writing ad copy? Perhaps, since the psychology of advertising is fairly well known, even if it is not always aimed properly, as recent misadventures have shown.
Artificial intelligence dealing with weapons? Autonomous police robots that are programmed to deal with violent crime? Ah, I saw Robocop. I’ve read a few other things too. What one person can program, another can hack and reverse. Or too many variables will overload the system and it will react in ways the programmers didn’t anticipate. You know, like the in-flight computer that did a reboot after the plane experienced turbulence outside the program parameters. The software designers wanted to save space, so they assumed that the plane would never exceed X degrees of bank, Y degrees of roll, and a certain ascent-descent rate in cruise. The plane did (ah, CAT, how I hate thee) and the pilots became passengers until the system rebooted. Rare? Yes. Bad? Very yes!
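That failure mode can be sketched in a few lines of entirely hypothetical Python. The names, limits, and reboot-counting here are my invention, not anything from actual avionics software; the point is only that a hard-coded envelope assumption turns an out-of-range input into a fault instead of a handled case.

```python
# Hypothetical sketch of an envelope assumption baked into flight software.
# None of these names or numbers come from a real system.

MAX_BANK_DEG = 60       # assumed: "the plane will never exceed this bank"
MAX_ROLL_RATE_DPS = 30  # assumed: degrees per second of roll in cruise

def attitude_update(bank_deg, roll_rate_dps):
    """Process one sensor frame, trusting the designers' assumed envelope."""
    if abs(bank_deg) > MAX_BANK_DEG or abs(roll_rate_dps) > MAX_ROLL_RATE_DPS:
        # To save space, the out-of-envelope branch was never written,
        # so the system faults instead of degrading gracefully.
        raise ValueError("attitude outside programmed envelope")
    return bank_deg  # normal processing would continue here

def flight_computer(frames):
    """Count reboots: each fault resets the system while the pilots wait."""
    reboots = 0
    for bank, rate in frames:
        try:
            attitude_update(bank, rate)
        except ValueError:
            reboots += 1  # the computer restarts instead of flying the plane
    return reboots

# Smooth air, then clear-air turbulence past the assumed limits, then smooth:
print(flight_computer([(10, 5), (75, 40), (20, 3)]))  # one reboot
```

The bug isn’t in the arithmetic; it’s in the assumption that the inputs would stay polite.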
A.I. is a program, or at least all the A.I. stuff I’ve seen and heard of to date is just programs. They process data quickly, and seem to think, but they don’t. Yet. I still have doubts about them. I’ve read sci-fi. I know what people are like. Terminator is just one possibility.