Is It Possible to Create an Ethical AI System?
We may be jumping the gun a bit, but it’s time to talk about the end of the world.
Sure, artificial intelligence was cute when Siri told you a joke or when IBM’s ‘Deep Blue’ beat the reigning world chess champion, but what happens when Siri develops a mind of her own?
We truly are in a revolutionary era. Computers are beginning to think for themselves. If it weren’t for AI, we wouldn’t have self-driving cars, ‘Alexa’ or Snapchat’s face filters.
Fortunately, we are not at the point where AI possesses what the Machine Intelligence Research Institute (MIRI) refers to as “Artificial General Intelligence”. This means that current AI systems are restricted to a single domain, such as voice or face recognition, rather than being able to think beyond what they’re programmed to do.
However, researchers at MIRI are thinking ahead and exploring the ethical implications of the era of artificial intelligence. In doing so, they identified three fundamental factors that must be built into an ethical AI system: predictability, transparency and morality.
Designing a machine has ethical implications in and of itself, but as artificial intelligence begins to take on social dimensions, its algorithm requires social specifications. It’s one thing to program a robotic prosthetic arm to not destroy things and another to program a car capable of deciding to continue driving or swerve when something unexpectedly gets in its way.
When dealing with AI, the programmer must know to some extent what the program is going to decide, even if it can think for itself. Therefore, making AI systems predictable is one of the first steps in ensuring an ethical system, especially if that system might have to make life-endangering decisions.
Just as each of us is able to look under the hood of a car to check its components, the same should go for an artificially intelligent system. We must be able to understand why an AI system acts a certain way. The humans programming these systems are not perfect, which means their code isn’t perfect either.
Therefore, this transparency factor becomes increasingly important as we give AI more responsibility. If computers start making important decisions, such as determining who qualifies for a loan, it is imperative that programmers understand how the system reaches its conclusions.
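To make the transparency idea concrete, here is a minimal sketch of one way a decision system can explain itself. The loan rules and thresholds below are entirely hypothetical (not drawn from any real underwriting model); the point is only that the function returns its reasons alongside its verdict, so a programmer or auditor can trace exactly why it decided as it did.

```python
def decide_loan(income: float, debt: float, credit_score: int):
    """Return (approved, reasons) so every decision is explainable."""
    reasons = []
    approved = True

    if credit_score < 620:  # hypothetical minimum score
        approved = False
        reasons.append(f"credit score {credit_score} is below the minimum of 620")
    if income > 0 and debt / income > 0.4:  # hypothetical debt-to-income cap
        approved = False
        reasons.append(f"debt-to-income ratio {debt / income:.2f} exceeds 0.40")
    if not reasons:
        reasons.append("all checks passed")
    return approved, reasons

# The reasons list is the "hood" we can look under:
approved, reasons = decide_loan(income=50_000, debt=30_000, credit_score=700)
```

An opaque system would return only `approved`; attaching the reasons is the difference between a decision we can audit and one we simply have to trust.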
It is generally agreed that current forms of AI possess no moral status. We can manipulate, create and delete computer programs as we choose. Why? Because we have no moral obligation to these systems.
However, many people live by the mantra, “Treat others as you would like to be treated.” If we get to a point in our society where computers have the ability and responsibility to make ethical judgements, do we then have a moral obligation to these systems? It may sound like science fiction, but this is an interesting question to think about, and one we’ll have to ask more and more as technology and AI continue to advance.