Chappie suggests it’s time to think about the rights of robots


From The Terminator to The Matrix, science-fiction movies have captured our fear of dystopian futures in which we are ruled or subjugated by our own robotic creations. But Neill Blomkamp’s new film Chappie features a far more humanised robot. Together with other recent films such as Ex Machina, it shows that Hollywood has acknowledged that our future struggle with the ethics of artificial intelligence (AI) will be much more complex than previously imagined. Why? Because of the issue of robot rights.

Chappie tells the tale of the world’s first robot police force, deployed to tackle spiralling crime in Johannesburg in 2016. One of these robots is damaged during an operation and is then reprogrammed by its creator, Deon (Dev Patel), to think for itself. This robot, christened Chappie, grows from a meek child learning to speak and paint into an adolescent “gangsta-robot”. Deon’s colleague Vincent (Hugh Jackman) sees Deon’s thinking robots as unnatural and tries to destroy them all with his own heavily armed, human-controlled droid, the Moose.

As such, Deon and Vincent represent the two extremes of human concern over artificial intelligence. Should Chappie be treated with the same concern as any other intelligent being? Or is he unnatural and dangerous, something to be eliminated?

Dev Patel as the robot’s creator. Sony Pictures

Biggest threat to humanity?

Many of the ethical issues raised in Chappie have been echoed by world-leading scientists and engineers. Professor Stephen Hawking recently warned:

The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate.

Bill Gates has also expressed concern at the advance of AI. And Elon Musk, who has called AI our “biggest existential threat”, has donated $10 million to research aimed at keeping it “beneficial to humanity”.

Their fears are based in part on the theory of the technological singularity, which suggests that advances in AI will outpace human evolution. In such a scenario, an AI able to rewrite and improve its own programming could become all-powerful, leaving us as unwanted competitors for scarce resources.

Such use of robots is perhaps not as far in the future as it may seem. Robots and AI are already with us, from factory floors to voice recognition in smartphones. And Predator drone strikes in Yemen, Pakistan and Syria demonstrate that human-controlled robots are already being used to kill other human beings.

Robot laws

Human Rights Watch and other human rights advocates support the Stop Killer Robots campaign, which seeks to outlaw autonomous robots that can target humans. They represent Vincent’s side of the debate in Chappie. The thinking is that robots are unable to respect the laws of war, which require humans to distinguish civilians from combatants and to use only force that is proportionate and militarily necessary.

While advanced AIs could have complex algorithms to process such issues, these are very human, subjective judgments – ones that value human life. The campaigners argue that robots lack the emotions and compassion that humane decisions require. And although some argue that the use of robots in conflict will limit the risks to human soldiers, it may also increase the likelihood of conflict, and of civilians being caught in the crossfire.

These fears are realised in Chappie when the Moose robot uses missiles and cluster bombs in violation of the principles of the laws of war. The scene also reminds us that it is not only robots that threaten humanity, but the human controllers who wield such great power.

So laws would need to be created to govern these killing robots. In his 1950 book I, Robot, Isaac Asimov suggested that future robots could be governed by the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

This might seem simple enough. But Asimov goes on to describe various situations in which these rules conflict. Writing laws for humans is difficult enough – writing laws that robots, with their vast processing power, would comply with and could not find loopholes in would be a very different thing.
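To see how quickly even three rules run into trouble, consider a toy sketch of them as a priority-ordered check, written here in Python. Everything in it – the Action record, the permitted function – is invented purely for illustration; it is not how any real robot is, or could be, programmed.

    # A purely illustrative sketch of Asimov's Three Laws as a
    # priority-ordered rule check. All names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool           # would acting injure a human?
        inaction_harms_human: bool  # would *not* acting let a human come to harm?
        ordered_by_human: bool      # was the action ordered by a human?
        endangers_robot: bool       # does the action risk the robot itself?

    def permitted(action: Action) -> bool:
        # First Law: a robot may not injure a human being...
        if action.harms_human:
            return False
        # ...or, through inaction, allow a human being to come to harm.
        if action.inaction_harms_human:
            return True  # the robot is obliged to act
        # Second Law: obey human orders (First Law already satisfied above).
        if action.ordered_by_human:
            return True
        # Third Law: protect its own existence, subject to the laws above.
        return not action.endangers_robot

    # A First/Second Law conflict: an order to harm a human is refused.
    assert not permitted(Action(harms_human=True, inaction_harms_human=False,
                                ordered_by_human=True, endangers_robot=False))

Note where the difficulty hides: every hard question – what counts as “harm”, whose harm, over what timescale – has been buried inside four tidy true-or-false judgments. A sufficiently powerful robot would find its loopholes precisely in how those judgments get made, which is the engine of many of Asimov’s stories.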

Anything goes. Sony Pictures

Robot rights

And so the only answer would be to programme robots with an innate respect for human life. When it comes to autonomous robots in warfare, though, current technology has yet even to pass the Turing test of human communication; building robots that can appreciate nuanced human needs is an extraordinarily long way off. The difficulty of holding robots, rather than their human controllers, accountable for killing humans also raises serious questions for their future development.

But the film Chappie doesn’t ask only this one legal question of whether and how we could curtail the power of AI. It also asks about the laws that might be written to protect robots. The robot Chappie, despite not being bound by any such rules, more often than not tries to protect his human maker and adopted family. This is a robot with some innate respect for human life.

And so while Chappie revisits familiar concerns about AI’s impact on humanity, its robot’s human-likeness raises moral questions about the worth of existence, consciousness and dignity beyond our own mortal coil.

The question remains how we can best regulate such technology to our benefit while potentially developing sentient robotic life. Robots could be our greatest achievement, or they could mark our downfall. But if we succeed in creating conscious robots like Chappie, we need to ensure that they have some basic rights of existence, as well as responsibilities to protect humanity.

This article was originally published on The Conversation.
Read the original article.
