Some of today’s tech giants believe that artificial intelligence (AI) should be more widely used. However, there are many ethical and risk-assessment issues to consider before this can become reality. We discuss these below.
1. How do we deal with unemployment?
The majority of people sell most of their waking time just to earn enough income to keep themselves and their families alive. If artificial intelligence succeeds, the time it saves could give people the opportunity to spend more time caring for their families, becoming involved in their communities, and finding new ways of contributing to human society.
Take, for example, the trucking industry, which currently employs millions of people in the United States alone. If Tesla’s Elon Musk delivers on his promise of self-driving trucks and they become widely available within the next decade, what happens to those millions of people? Yet self-driving trucks also look like an ethical option when we consider their potential to lower accident rates.
2. How can we equitably distribute the wealth created by machines?
Artificial intelligence, if it becomes widely used, can reduce a company’s reliance on a human workforce, which means that revenues will flow primarily to the people who own AI-driven companies.
Already, we are seeing start-up founders take home the majority of the economic surplus they generate. So how do we equitably distribute the wealth created by machines?
3. Can machines influence our behavior and interaction?
AI bots are becoming more effective at imitating human relationships and conversations. In 2014, a chatbot named Eugene Goostman was claimed to be the first to pass a Turing test. In this kind of challenge, human judges use text input to chat with an unknown entity and then guess whether that entity is human or machine. Eugene Goostman convinced a third of the judges that it was human.
While this can prove very useful in nudging society towards more beneficial behavior, it can also prove detrimental in the wrong hands.
4. How do we guard against possible detrimental mistakes?
Intelligence results from learning, whether you’re human or machine. Systems normally have a training phase in which they “learn” to detect the right patterns and act on their input. After the training phase, the system moves to a test phase, where further scenarios are thrown at it to see how it performs.
Because the training phase is highly unlikely to cover every scenario the system may encounter in the real world, the system can be fooled in ways that humans wouldn’t be. Therefore, if we are to rely on AI to replace human labor, we need to ensure it performs as planned and cannot be manipulated by people with selfish intentions.
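The training-then-testing workflow described above can be sketched in a few lines. This is a deliberately toy example (a hypothetical one-feature threshold classifier, invented here for illustration, not any real system): it “learns” a decision boundary from training examples, is evaluated on held-out test examples, and then confidently classifies an input far outside anything it ever saw, which is exactly the kind of blind spot the text warns about.

```python
# Toy sketch of the train/test workflow: a hypothetical threshold
# classifier over a single numeric feature, with labels 0 and 1.

def train(examples):
    """'Learn' a decision threshold from labeled (value, label) pairs."""
    positives = [x for x, label in examples if label == 1]
    negatives = [x for x, label in examples if label == 0]
    # Place the boundary midway between the two classes seen in training.
    return (max(negatives) + min(positives)) / 2

def predict(threshold, x):
    return 1 if x >= threshold else 0

# Training phase: the system only ever sees values between 1 and 9.
training_data = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
threshold = train(training_data)  # midpoint of 3 and 7 -> 5.0

# Test phase: held-out scenarios probe how well the system generalizes.
test_data = [(2.5, 0), (7.5, 1)]
accuracy = sum(predict(threshold, x) == y for x, y in test_data) / len(test_data)

# An input far outside the training distribution is still classified
# with no warning — the model has no notion of "I've never seen this."
outlier_prediction = predict(threshold, -100)
```

The point of the sketch is the last line: nothing in the train/test loop flags inputs the system was never prepared for, which is why real-world coverage gaps can be exploited.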
5. Can we eliminate AI bias?
Let’s not forget that AI systems are created by humans, who can be judgmental and biased. Yes, AI, used right, can become a catalyst for positive change, but it can also fuel discrimination. AI can process information at a speed and scale that far exceed human capabilities; however, because it learns from human-generated data and decisions, it cannot always be trusted to be neutral and fair.
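The mechanism behind this is worth making concrete: a model trained to reproduce historical decisions reproduces the bias baked into them. The snippet below uses entirely made-up hiring records (the groups, counts, and outcomes are hypothetical, invented only for illustration) to show how equally qualified candidates can have very different historical outcomes, and why a model fit to those labels inherits the same gap.

```python
# Hypothetical historical hiring records: (qualified, group, hired).
# The data is fabricated for illustration; the pattern it shows is
# the one the text describes — bias carried in the training labels.
history = [
    (True, "A", True),  (True, "A", True), (True, "A", True),
    (True, "B", False), (True, "B", True), (True, "B", False),
]

def hire_rate(records, group):
    """Fraction of qualified candidates in `group` who were hired."""
    outcomes = [hired for qualified, g, hired in records
                if g == group and qualified]
    return sum(outcomes) / len(outcomes)

# Same qualifications, very different outcomes. A model trained to
# mimic these labels would mimic the disparity as well.
rate_a = hire_rate(history, "A")
rate_b = hire_rate(history, "B")
```

Here `rate_a` is 1.0 while `rate_b` is roughly 0.33, even though every candidate is equally qualified: the “neutral” statistics faithfully encode a biased process.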
6. How do we protect AI from adversaries?
The more powerful a technology is, the more it can be used for good as well as for nefarious purposes. AI is no exception, which makes cybersecurity all the more important.
7. How can unintended consequences be avoided?
There’s also the possibility that AI could turn against us, not maliciously, but unintentionally. Take, for example, an AI system asked to rid the world of cancer. After all of its computing, it spits out a formula that does just that: it kills everyone on the planet. Yes, the goal was achieved, but not in the way that humans intended.
8. Is there any way we could remain in total control of AI?
Human dominance is not due to strong muscles and sharp teeth, but rather intelligence and ingenuity. We are able to defeat stronger, bigger, and faster animals because we’re able to create and use not only physical but also cognitive tools to control them.
This presents a real concern that AI will one day have the same advantage over us. Sufficiently trained machines may be able to anticipate our every move and defend themselves against us “pulling the plug”.
9. Should humane treatment of AI be considered?
Machines imitate us so well that they’re becoming more and more like humans by the day. Soon we’re going to get to the point where we consider machines as entities that can feel, perceive, and act. Once we get there we might ponder their legal status. Can “feeling” machines really suffer?
So how do we address those ethical issues?
Many believe that because AI is so powerful and ubiquitous, it must be tightly regulated; however, there is little consensus on how this should be done. Who makes the rules? So far, companies that develop and use AI systems are mostly self-policed, relying on existing laws and on negative reactions from consumers and shareholders to keep them in line. Is it realistic to continue this way? Obviously not, but as it stands, regulatory bodies lack the AI expertise necessary to oversee those companies.
Jason Furman, a professor of the practice of economic policy at the Kennedy School and a former top economic adviser to President Barack Obama, suggests that “The problem is these big tech companies are neither self-regulating nor subject to adequate government regulation. I think there needs to be more of both…We can’t assume that market forces by themselves will sort it out. That’s a mistake, as we’ve seen with Facebook and other tech giants…We have to enable all students to learn enough about tech and about the ethical implications of new technologies so that when they are running companies or when they are acting as democratic citizens, they will be able to ensure that technology serves human purposes rather than undermines a decent civic life.”
While technological progress can translate into better lives for everyone, we should bear in mind that it brings ethical concerns with it, centered on mitigating suffering and avoiding negative outcomes.