The Terminators… are Stupid

Stop. If you haven’t seen the original Terminator movie, or its even better sequel, Terminator 2, you need to brush up. If you saw Terminator: Dark Fate, you still haven’t seen the Terminator. One is a can’t-miss Arnold Schwarzenegger action flick. The other, in my opinion, is not worth recording on a spare hard drive found on Kijiji, the kind the present owner will pay you to haul away. If you’re caught downloading it illegally, they’ll likely pay you too.

Over the first two Terminator movies Arnie is a fantastic bad guy, then a fantastic good guy, and always ready to destroy. While some see the movies as having meaning beyond the retina-popping action – that over-confidence in technology can nearly wipe out the human race – I do not. I see the films as examples of Arnie’s tried-and-true model for a commercially successful movie. Terminator technology is way too dumb to take over anything.

Of course, give the Terminator films a bit of scrutiny and the story line turns up chock-a-block full of holes. First, the terminators themselves are large, humanoid-shaped robots. Every weapon known to man has been made to kill… humanoids. Second, the bigger a target, the easier it is to hit, right? Drunk, you could hit a terminator from across the park with a mangled frisbee. Next: two legs and two arms. Another glaring defect. Shoot off a leg and what happens? Humanity is now threatened by Crawley the disabled robot? Run up a staircase and the robots are defeated. Multi-arm robot? Nope. Two arms and one gun – who can’t defeat that?

It gets even dumber. After mastering time travel, to which time period do the terminators travel? The bow and arrow era? No, they’d rather head back to the early era of armor-piercing munitions, automatic weapons and the rocket launcher. In the real world, flying, toy-like drones have been used to drop very real bombs on very real humans. Do the terminators send an army of flying mini drones, for which we have few defenses? Once again, no. Big-ass, human-shaped robots with fat heads come to get us. Who is running the show over there at terminator HQ? It’s more Clown Town than Armageddon when it comes to realism. But I digress.

The auto industry, well known for touting even the smallest of meager achievements (even on the fanciest of cars – how on earth does AM/FM radio still count as a feature?), has decided that instead of cracking the hyper-mileage problem, which is really hard, it will pour its resources into automated driving technology which, according to the group-think, is definitely something we need. Now that we have a feature to avoid getting caught playing with our cell phones – one that simply diverts our fingers and attention to the car’s phone-linked infotainment system – we need another technology to fix the problem created by that distracting feature. Even better, unlike the myriad cell-phone driving laws, automated driving has few, if any, laws at all. What could possibly go wrong with no accountability?

The automotive industry is spending billions on autonomous driving R&D, so they must be getting something right, no?

Consider that the U.S. military spends more in a week than the automotive industry spends in a year, yet even the U.S. military can’t quite crack autonomous, self-guided projectiles.

Take the U.S. Army’s Patriot missile defense system, which has the capability to identify enemies and fire instantly. In 2003 this super system decided to recognise a British military jet as a missile, shooting it down and killing its two-man crew. Don’t missiles travel much faster than jets? I guess the software team hadn’t figured that out yet. The Patriot wasn’t finished with its fratricide just yet. In another example of a complex system not quite ready for the real world, it then proceeded to shoot down a U.S. Navy F-18 jet, killing its pilot. Some rumoured the missile was in ‘automated’ mode at the time of the F-18 shoot-down. Automated mode allows the system to select a target and fire all on its own.

Allied and friendly pilots had come to fear the Patriot more than Iraqi air defenses. In one incident, when the Patriot decided to zero in on a U.S. Air Force F-16 fighter jet, its pilot reacted immediately upon hearing his plane’s warning alarm informing him that a missile system was tracking him. He fired his own anti-radiation missile at the Patriot battery, destroying its radar but not harming its crew. With three aircrew killed in ‘blue-on-blue’ fire, and one near miss, the Patriot was ordered out of automated mode. Just as with rifles, humans would now make the decision to fire or not. The Patriot hasn’t shot down another friendly jet – or any other kind of jet – since then.

Some will say the task of missile defense is much harder than mastering autonomous driving. Missile defense certainly isn’t an easy problem, but if you’re a missile, your job is to seek out a hot metallic object in the sky. How many of those could there be at the time of launch? Cars, on the other hand, need to navigate around every single object their paths cross on the ground.

While missiles are built with brand-new, cutting-edge, ground-breaking, never-even-heard-of, tried-for-the-first-time tech, cars are built upon proven technologies that have been around a long time. No one wants to find out that their “cutting-edge” brake rotors melt like wax when used at ambient temperatures above 34 degrees Celsius. Damn. Another recall.

Consider the case of the infamous 100-million airbag recall, wherein Takata (the airbag maker) decided to use a different kind of explosive charge. At least seven people have died because someone decided that testing was for the feeble. “No self-respecting army of Spartans has time for long-term testing,” they must have thought. The point here: because the public could lose confidence in the safety of one of the biggest engines of economic prosperity – the automobile – well-proven systems are generally preferred over ‘hope-that-works’ solutions.

Autonomous driving is far from ‘well proven.’

Automation, even with its many benefits, suffers from a significant dilemma. If we hear a story wherein Pilot X – depressed because his family disowned him, his girlfriend left him, and even his dog ran away to live with the neighbour – intentionally crashes his aircraft into the ground, we call that a tragedy. To be fair, his dog did go to the neighbour, and any pilot finding himself in this predicament should be grounded, at least until another dog accepts him.

But when Boeing’s newly automated 737 MAX auto-trim system slammed two high-tech airliners into the ground, killing all on board, the world was unanimous in its condemnation – this loss was simply unacceptable. Boeing’s otherwise state-of-the-art 737 MAX planes have been grounded for months. Billions have been lost. An aerospace giant’s future is uncertain.

Put simply, we can understand a poor human choice, but cannot accept a bad computer decision. When automation promises perfection, what are the consequences when it fails to deliver?

Computer experts say, “We wish to build a computer that works like the human brain,” but brain experts say, “We still don’t quite understand how the human brain works.” What’s the goal here? To make something work like something we don’t quite understand? No wonder it doesn’t work.

Many “experts” tell us that AI will lead to safer driving. Automakers are billions in, but autonomy and chaos are still kissing cousins. How about automakers repurpose their billion-dollar AI budgets, forwarding $500 annually to every driver of their marque who doesn’t crash? Perhaps we’re abandoning human intelligence (HI) too soon.

We have fines to discourage bad behaviour, so why don’t we have incentives for good behaviour? I guess the super-rich could still happily smash their cars, indifferent to such incentives, but they got rich making money, not throwing it away. Maybe they too would be even better drivers. Maybe we’d all be.

[Photo credit: Paramount Pictures; Laguna Beach Police Department/AP]
