or, why rulers are malignant (spoilers!)

M3GAN is another story about an artificial intelligence placed in a robot body, where things go wrong and the AI turns on its creators. But at the moment when M3GAN could have done something interesting, it instead falls back on old horror movie tropes, and the android becomes something self-aware and evil.

That point comes with the death of Brandon – played by Jack Cassidy. M3GAN had an opportunity to do something great here. While it is never clear what Megan – played by Amie Donald – planned to do with Brandon, who was bullying her charge, nine-year-old Cady – played by Violet McGraw – her actions did result in his death. And here is where the story begins to miss the mark.

In the late 1970s, the Ford Motor Company decided it would be cheaper to pay off the dead and mutilated in its explosive Pinto sub-compact than to recall the car and fix it (so it would no longer burst into flames when rear-ended). Nor was this an isolated case; similar decisions have been made across many industries. Corporations, in short, have a long record of weighing human lives against the bottom line.

Returning to M3GAN: instead of examining the android’s decision-making process and learning what she had planned to do with Brandon, Megan’s creators were shocked to discover that not only were the recordings of the events in question erased, but so were the backups stored on the company’s servers.

Now, I do have some experience in computer security, and I can say with confidence that there was no reason to give an android access to its own recordings, let alone to the backups stored on the company’s supposedly secure servers.

Just imagine if the film had gone this alternate route. Examination of Megan determines she meant to harm the boy. Sure, they could correct for that, but what other situations they failed to anticipate might cause the AI to harm someone to protect its primary user? Now the company faces the possibility of future litigation: what if one of its machines killed a burglar it caught in its primary user’s house? There are so many other scenarios we could pluck from the ether.

The movie could have become a battle between the corporate executives, bound by their fiduciary duty to increase the company’s profits, and the moral concerns of the android’s creator – with even Cady demanding Megan be given back to her.

The danger of AI is not the chance that it may become self-aware and decide to murder us. The trope that machines will see their creators as enemies is not the problem. The real threat is that AI will do exactly what we ask it to do. Just as Megan protected her primary user in the forest, leading to an accidental death, an AI will follow its instructions, built on the data sets we train it on. And this is where the problem lies. Here are just a few real AI mishaps, caused either by models absorbing our human biases or by errors that arise because machines do not think the way we do:

  • An AI meant to recommend criminal sentences, an attempt to make sentencing more neutral and fair, recommended longer sentences for Black defendants than for white defendants convicted of the same crime, because that systemic inequality pervades the US court records it learned from.
  • An AI meant to screen job applicants favored men over equally qualified women when recommending candidates for interviews.
  • Self-driving cars’ vision systems are more likely to fail to detect – and so run down – pedestrians with darker skin. Why? Because the people in the images they were trained on overwhelmingly had lighter skin.
  • An AI meant to determine whether a skin lesion was malignant began classifying any image of skin containing a ruler as malignant, regardless of the state of the lesion itself. Why? Because in the training data sets, most of the positive examples of malignancy contained a ruler – standard practice when documenting a malignant skin lesion – so the AI concluded, reasonably enough, that a ruler in the picture was a sign of malignancy. (A sketch of how this kind of shortcut learning arises follows this list.)
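
To make the ruler failure concrete, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration – the features lesion_signal and ruler, the probabilities, the data – and real dermatology models work on raw pixels rather than two hand-built features. But the failure mode it demonstrates is the same: a spurious feature that tracks the label in the training data stops tracking it in the real world.

```python
# Sketch of "shortcut learning": a classifier latches onto a spurious
# feature (ruler present) that correlates with the label in training
# but is not causal. All names and numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, ruler_follows_label):
    malignant = rng.integers(0, 2, n)
    # Weak, noisy version of the "real" diagnostic signal.
    lesion_signal = malignant + rng.normal(0, 1.5, n)
    if ruler_follows_label:
        # Training-set artifact: malignant lesions are photographed
        # next to a ruler 95% of the time, benign ones almost never.
        ruler = (rng.random(n) < np.where(malignant == 1, 0.95, 0.05)).astype(float)
    else:
        # Deployment: rulers appear at random, independent of the label.
        ruler = (rng.random(n) < 0.5).astype(float)
    X = np.column_stack([lesion_signal, ruler])
    return X, malignant

X_train, y_train = make_data(5000, ruler_follows_label=True)
X_test, y_test = make_data(5000, ruler_follows_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("coefficients (lesion_signal, ruler):", model.coef_[0])
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy :", model.score(X_test, y_test))
```

Run it and the coefficient on ruler dwarfs the one on lesion_signal: training accuracy looks excellent, then collapses once rulers stop tracking the diagnosis. No malice, no self-awareness – just a model doing exactly what its training data told it to do.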

So many errors, some completely accidental, some a result of human biases that we don’t account for, but all of them unintended. This is the danger of AI, that it will do precisely what we tell it to do, without fear or favor, even if the outcome is not what we wanted.

This is where M3GAN could have become a sensation. Sure, find a way to keep the Child’s Play vibe at the end, but let it happen because Megan is following her directives to the letter, with unintended, and deadly, consequences.

And movies should stop portraying AI as people with individual will. AI will do what it has been instructed to do, even if the result is not what we anticipated. Even if the result is terrible. That is what would make a great AI movie. The danger is our hubris, not a machine magically becoming self-aware and turning evil.