Category Archives: Robots

Why we don’t trust robots


Joffrey Becker, Collège de France

Robots raise all kinds of concerns. Some experts think they could steal our jobs. And if artificial intelligence keeps advancing, they might even be tempted to enslave us, or to annihilate the whole of humanity.

Robots are strange creatures, and not only for these frequently invoked reasons. We have good cause to be a little worried about these machines.

An advertisement for Kuka robotics: can these machines really replace us?

Imagine that you are visiting the Quai Branly-Jacques Chirac, a museum in Paris dedicated to anthropology and ethnology. As you walk through the collection, your curiosity leads you to a certain piece. After a while, you begin to sense a familiar presence heading towards the same objet d’art that has caught your attention.

You move slowly, and as you turn your head a strange feeling seizes you because what you seem to distinguish, still blurry in your peripheral vision, is a not-quite-human figure. Anxiety takes over.

As your head turns and your vision becomes sharper, this feeling gets stronger. You realise that this is a humanoid machine, a robot called Berenson. Named after the American art critic Bernard Berenson and designed by the roboticist Philippe Gaussier (Image and Signal Processing Lab) and the anthropologist Denis Vidal (Institut de recherche pour le développement), Berenson has been part of an experiment underway at the Quai Branly museum since 2012.

The strangeness of the encounter with Berenson leaves you suddenly frightened, and you step back, away from the machine.

The uncanny valley

This feeling has been explored in robotics since the 1970s, when Japanese researcher Professor Masahiro Mori proposed his “uncanny valley” theory. If a robot resembles us, he suggested, we are inclined to consider its presence in the same way as we would that of a human being.

But when the machine reveals its robot nature to us, we will feel discomfort. Enter what Mori dubbed “the uncanny valley”. The robot will then be regarded as something of a zombie.

Mori’s theory cannot be systematically verified. But the feelings we experience when we meet an autonomous machine are certainly tinged with both incomprehension and curiosity.

The experiment conducted with Berenson at the Quai Branly, for example, shows that the robot’s presence can elicit paradoxical behaviour in museum goers. It underlines the deep ambiguity that characterises the relationship one can have with a robot, particularly the many communication problems they pose for humans.

If we are wary of such machines, it is mainly because it is not clear to us whether they have intentions, what those intentions might be, or how to establish the minimal mutual understanding that any interaction requires. Thus, it is common to see visitors to the Quai Branly adopting social behaviour towards Berenson, such as talking to it or facing it to find out how it perceives its environment.

In one way or another, visitors mainly try to establish contact. It appears that there is something strategic in considering the robot, even temporarily, as a person. And these social behaviours are not only observed when humans interact with machines that resemble us: it seems we make anthropomorphic projections whenever humans and robots meet.

Social interactions

An interdisciplinary team has recently been set up to explore the many dimensions revealed during these interactions. In particular, they are looking at the moments when, in our minds, we are ready to endow robots with intentions and intelligence.

This is how the PsyPhINe project was born. Based on interactions between humans and a robotic lamp, this project seeks to better understand people’s tendency to anthropomorphise machines.

Once they get accustomed to the strangeness of the situation, people are often observed engaging socially with the lamp. During a game in which they are invited to play with the robot, they can be seen reacting to its movements and sometimes speaking to it, commenting on what it is doing or on the situation itself.

Mistrust often characterises the first moments of our relations with machines. Beyond their appearance, most people don’t know exactly what robots are made of, what their functions are and what their intentions might be. The robot world seems way too far from ours.

But this feeling quickly disappears. Assuming they have not already run away from the machine, people usually seek to define and maintain a frame for communication. Typically, they rely on existing communication habits, such as those used when talking to pets, for example, or with any living being whose world is to some degree different from theirs.

Ultimately, it seems, we humans are as suspicious of our technologies as we are fascinated by the possibilities they open up.

Joffrey Becker, Anthropologue, Laboratoire d’anthropologie sociale, Collège de France

This article was originally published on The Conversation. Read the original article.

We could soon face a robot crimewave … the law needs to be ready


Christopher Markou, University of Cambridge

This is where we are in 2017: sophisticated algorithms are both predicting and helping to solve crimes committed by humans; predicting the outcome of court cases and human rights trials; and helping to do work once done by lawyers in those cases. There is even a suggestion that by 2040 sophisticated robots will be committing a good chunk of all the crime in the world. Just ask the toddler who was run over by a security robot at a California mall last year.

How do we make sense of all this? Should we be terrified? Generally unproductive. Should we shrug our shoulders as a society and get back to Netflix? Tempting, but no. Should we start making plans for how we deal with all of this? Absolutely.

Fear of artificial intelligence (AI) is a big theme. Technology can be a downright scary thing, particularly when it’s new, powerful and comes with lots of question marks. But films like Terminator and shows like Westworld are more than just entertainment: they are a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.

Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around, whether it’s a military drone with a full payload, a law enforcement robot detonated to kill a dangerous suspect or something altogether more innocent that causes harm through accident, error, oversight or good old-fashioned stupidity.

There’s a cynical saying in law that “where there’s blame, there’s a claim”. But who do we blame when a robot does wrong? This question can easily be dismissed as too abstract to worry about. But let’s not forget that a robot was arrested (and released without charge) for buying drugs, and that Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash while his Tesla was on autopilot.

While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright brothers took their Flyer for a joyride at Kitty Hawk. Time and time again, the law is presented with novel challenges like these. And despite initial overreaction, it got there in the end. Simply put: law evolves.

Robot guilt

The role of the law can be defined in many ways, but ultimately it is a system within society for stabilising people’s expectations. If you get mugged, you expect the mugger to be charged with a crime and punished accordingly.

But the law also has expectations of us; we must comply with it to the fullest extent our consciences allow. As humans we can generally do that. We have the capacity to decide whether to speed or obey the speed limit – and so humans are considered by the law to be “legal persons”.

To varying extents, companies are endowed with legal personhood, too. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.

The problem arises when the machines themselves can make decisions of their own accord. As impressive as intelligent assistants such as Alexa, Siri or Cortana are, they fall far short of the threshold for legal personhood. But what happens when their more advanced descendants begin causing real harm?

A guilty AI mind?

The criminal law has two critical concepts. First, it contains the idea that liability for harm arises whenever harm has been or is likely to be caused by a certain act or omission.

Second, criminal law requires that an accused is culpable for their actions. This is known as a “guilty mind” or mens rea. The idea behind mens rea is to ensure that the accused both completed the action of assaulting someone and had the intention of harming them, or knew harm was a likely consequence of their action.

Blind justice for an AI.
Shutterstock

So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the “guilty mind” of a non-human? Can this be done by referring to and adapting existing legal principles?

Take driverless cars. Cars drive on roads and there are regulatory frameworks in place to assure that there is a human behind the wheel (at least to some extent). However, once fully autonomous cars arrive there will need to be extensive adjustments to laws and regulations that account for the new types of interactions that will happen between human and machine on the road.

As AI technology evolves, it will eventually reach a state of sophistication that will allow it to bypass human control. As the bypassing of human control becomes more widespread, then the questions about harm, risk, fault and punishment will become more important. Film, television and literature may dwell on the most extreme examples of “robots gone awry” but the legal realities should not be left to Hollywood.

So can robots commit crime? In short: yes. If a robot kills someone, then it has committed a crime (actus reus), but technically only half a crime, as it would be far harder to determine mens rea. How do we know the robot intended to do what it did?

For now, we are nowhere near the level of building a fully sentient or “conscious” humanoid robot that looks, acts, talks, and thinks like us humans. But even a few short hops in AI research could produce an autonomous machine that could unleash all manner of legal mischief. Financial and discriminatory algorithmic mischief already abounds.

Play along with me: imagine that a Terminator-calibre AI exists and that it commits a crime (let’s say murder). The task then is not to determine whether it in fact murdered someone, but the extent to which that act satisfies the principle of mens rea.

But what would we need to prove the existence of mens rea? Could we simply cross-examine the AI like we do a human defendant? Maybe, but we would need to go a bit deeper than that and examine the code that made the machine “tick”.

And what would “intent” look like in a machine mind? How would we go about proving that an autonomous machine was justified in killing a human in self-defence, or establishing the extent of its premeditation?

Let’s go even further. After all, we’re not only talking about violent crimes. Imagine a system that could randomly purchase things on the internet using your credit card – and it decided to buy contraband. This isn’t fiction; it has happened. Two London-based artists created a bot that purchased random items off the dark web. And what did it buy? Fake jeans, a baseball cap with a spy camera, a stash can, some Nikes, 200 cigarettes, a set of fire-brigade master keys, a counterfeit Louis Vuitton bag and ten ecstasy pills. Should these artists be liable for what the bot they created bought?

Maybe. But what if the bot “decided” to make the purchases itself?

Robo-jails?

Even if you solve these legal issues, you are still left with the question of punishment. What’s a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Unless, of course, it was programmed to “reflect” on its wrongdoing and find a way to rewrite its own code while safely ensconced at Her Majesty’s leisure. And what would building “remorse” into machines say about us as their builders?

Would robot wardens patrol robot jails?
Shutterstock

What we are really talking about when we talk about whether or not robots can commit crimes is “emergence” – where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.

AI has already helped with emergent concepts in medicine, and we are learning things about the universe with AI systems that even an army of Stephen Hawkings might not reveal.

The hope for AI is that in trying to capture this safe and beneficial emergent behaviour, we can find a parallel solution for ensuring it does not manifest itself in illegal, unethical, or downright dangerous ways.

At present, however, we are systematically incapable of guaranteeing human rights on a global scale, so I can’t help but wonder how ready we are for the prospect of robot crime given that we already struggle mightily to contain that done by humans.

Christopher Markou, PhD Candidate, Faculty of Law, University of Cambridge

This article was originally published on The Conversation. Read the original article.

Merging our brains with machines won’t stop the rise of the robots


Michael Milford, Queensland University of Technology

Tesla chief executive and OpenAI founder Elon Musk suggested last week that humanity might stave off irrelevance in the face of the rise of the machines by merging with them and becoming cyborgs.

However, current trends in software-only artificial intelligence and deep learning technology raise serious doubts about the plausibility of this claim, especially in the long term. This doubt is not only due to hardware limitations; it also has to do with the role the human brain would play in the match-up.

Musk’s thesis is straightforward: that sufficiently advanced interfaces between brain and computer will enable humans to massively augment their capabilities by being better able to leverage technologies such as machine learning and deep learning.

But the exchange goes both ways. Brain-machine interfaces may help the performance of machine learning algorithms by having humans “fill in the gaps” for tasks that the algorithms are currently bad at, like making nuanced contextual decisions.

The idea in itself is not new. J. C. R. Licklider and others speculated on the possibility and implications of “man-computer symbiosis” in the mid-20th century.

However, progress has been slow. One reason is the difficulty of developing hardware. “There is a reason they call it hardware – it is hard,” said Tony Fadell, creator of the iPod. And creating hardware that interfaces with organic systems is even harder.

Current technologies are primitive compared to the picture of brain-machine interfaces we’re sold in science fiction movies such as The Matrix.

Deep learning quirks

Assuming that the hardware challenge is eventually solved, there are bigger problems at hand. The past decade of incredible advances in deep learning research has revealed that there are some fundamental challenges to be overcome.

The first is simply that we still struggle to understand and characterise exactly how these complex neural network systems function.

We trust simple technology like a calculator because we know it will always do precisely what we want it to do. Errors are almost always a result of mistaken entry by the fallible human.

One vision of brain-machine augmentation would be to make us superhuman at arithmetic. So instead of pulling out a calculator or smartphone, we could think of the calculation and receive the answer instantaneously from the “assistive” machine.

Where things get tricky is if we were to try and plug into the more advanced functions offered by machine learning techniques such as deep learning.

Let’s say you work in a security role at an airport and have a brain-machine augmentation that automatically scans the thousands of faces you see each day and alerts you to possible security risks.

Most machine learning systems suffer from an infamous problem whereby a tiny change in the appearance of a person or object can cause the system to catastrophically misclassify what it thinks it is looking at. Change a picture of a person by less than 1%, and the machine system might suddenly think it is looking at a bicycle.

This image shows how you can fool AI image recognition by adding imperceptible noise to the image.
From Goodfellow et al, 2014
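
To make the trick concrete, here is a minimal, purely illustrative Python sketch. The “classifier” is just a linear score over random pixel values, and the weights, image and labels are all invented; Goodfellow et al. (2014) demonstrate the same effect on real deep networks by using the network’s gradient to choose the perturbation.

```python
import numpy as np

# Toy stand-in for an image classifier: a single linear score over pixel
# values. The weights and the "image" below are random and hypothetical.
rng = np.random.default_rng(0)
w = rng.normal(size=1_000_000)   # one weight per pixel
x = rng.normal(size=1_000_000)   # the "image" being classified

def predict(image):
    return "person" if w @ image > 0 else "bicycle"

eps = 0.01  # a tiny tweak to every pixel, around 1% of a typical pixel value
# Shift each pixel slightly in whichever direction undermines the current
# label. The change is imperceptibly small, yet the decision flips.
x_adv = x - eps * np.sign(w @ x) * np.sign(w)

print(predict(x), "->", predict(x_adv))
print("largest per-pixel change:", np.abs(x_adv - x).max())
```

No single pixel moves by more than 0.01, but the summed effect of a million tiny, carefully aligned nudges is enough to flip the decision, which is exactly why such attacks are so hard to spot by eye.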

Terrorists or criminals might exploit the different vulnerabilities of a machine to bypass security checks, a problem that already exists in online security. Humans, although limited in their own way, might not be vulnerable to such exploits.

Despite their reputation as being unemotional, machine learning technologies also suffer from bias in the same way that humans do, and can even exhibit racist behaviour if fed appropriate data. This unpredictability has major implications for how a human might plug into – and more importantly, trust – a machine.

Google research scientist Ian Goodfellow shows how easy it is to fool a deep learning system.

Trust me, I’m a robot

Trust is also a two-way street. Human thought is a complex, highly dynamic activity. In this same security scenario, with a sufficiently advanced brain-machine interface, how will the machine know what human biases to ignore? After all, unconscious bias is a challenge everyone faces. What if the technology is helping you interview job candidates?

We can preview to some extent the issues of trust in a brain-machine interface by looking at how defence forces around the world are trying to address human-machine trust in an increasingly mixed human-autonomous systems battlefield.

Research into trusted autonomous systems deals with both humans trusting machines and machines trusting humans.

There is a parallel between a robot warrior making an ethical decision to ignore an unlawful order by a human and what must happen in a brain-machine interface: interpretation of the human’s thoughts by the machine, while filtering fleeting thoughts and deeper unconscious biases.

In defence scenarios, the logical role for a human brain is in checking that decisions are ethical. But how will this work when the human brain is plugged into a machine that can make inferences using data at a scale that no brain can comprehend?

In the long term, the issue is whether, and how, humans will need to be involved in processes that are increasingly determined by machines. Soon machines may make medical decisions no human team can possibly fathom. What role can and should the human brain play in this process?

In some cases, the combination of automation and human workers could increase jobs, but this effect is likely to be fleeting. Those same robots and automation systems will continue to improve, eventually removing the very jobs they created locally.

Likewise, while humans may initially play a “useful” role in brain-machine systems, as the technology continues to improve there may be less reason to include humans in the loop at all.

The idea of maintaining humanity’s relevance by integrating human brains with artificial brains is appealing. What remains to be seen is what contribution the human brain will make, especially as technology development outpaces human brain development by a million to one.

Michael Milford, Associate professor, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Biohybrid robots built from living tissue start to take shape


Think of a traditional robot and you probably imagine something made from metal and plastic. Such “nuts-and-bolts” robots are made of hard materials. As robots take on more roles beyond the lab, such rigid systems can present safety risks to the people they interact with. For example, if an industrial robot swings into a person, there is the risk of bruises or bone damage.

Researchers are increasingly looking for solutions to make robots softer or more compliant – less like rigid machines, more like animals. With traditional actuators – such as motors – this can mean using air muscles or adding springs in parallel with motors. For example, on a Whegs robot, having a spring between a motor and the wheel leg (Wheg) means that if the robot runs into something (like a person), the spring absorbs some of the energy so the person isn’t hurt. The bumper on a Roomba vacuuming robot is another example; it’s spring-loaded so the Roomba doesn’t damage the things it bumps into.

But there’s a growing area of research that’s taking a different approach. By combining robotics with tissue engineering, we’re starting to build robots powered by living muscle tissue or cells. These devices can be stimulated electrically or with light to make the cells contract to bend their skeletons, causing the robot to swim or crawl. The resulting biobots can move around and are soft like animals. They’re safer around people and typically less harmful to the environment they work in than a traditional robot might be. And since, like animals, they need nutrients to power their muscles, not batteries, biohybrid robots tend to be lighter too.

Tissue-engineered biobots on titanium molds.
Karaghen Hudson and Sung-Jin Park, CC BY-ND

Building a biobot

Researchers fabricate biobots by growing living cells, usually from heart or skeletal muscle of rats or chickens, on scaffolds that are nontoxic to the cells. If the substrate is a polymer, the device created is a biohybrid robot – a hybrid between natural and human-made materials.

If you just place cells on a molded skeleton without any guidance, they wind up in random orientations. That means when researchers apply electricity to make them move, the cells’ contraction forces will be applied in all directions, making the device inefficient at best.

So to better harness the cells’ power, researchers turn to micropatterning. We stamp or print microscale lines on the skeleton made of substances that the cells prefer to attach to. These lines guide the cells so that as they grow, they align along the printed pattern. With the cells all lined up, researchers can direct how their contraction force is applied to the substrate. So rather than just a mess of firing cells, they can all work in unison to move a leg or fin of the device.

Tissue-engineered soft robotic ray that’s controlled with light.
Karaghen Hudson and Michael Rosnach, CC BY-ND

Biohybrid robots inspired by animals

Beyond a wide array of biohybrid robots, researchers have even created some completely organic robots using natural materials, like the collagen in skin, rather than polymers for the body of the device. Some can crawl or swim when stimulated by an electric field. Some take inspiration from medical tissue engineering techniques and use long rectangular arms (or cantilevers) to pull themselves forward.

Others have taken their cues from nature, creating biologically inspired biohybrids. For example, a group led by researchers at California Institute of Technology developed a biohybrid robot inspired by jellyfish. This device, which they call a medusoid, has arms arranged in a circle. Each arm is micropatterned with protein lines so that cells grow in patterns similar to the muscles in a living jellyfish. When the cells contract, the arms bend inwards, propelling the biohybrid robot forward in nutrient-rich liquid.

More recently, researchers have demonstrated how to steer their biohybrid creations. A group at Harvard used genetically modified heart cells to make a biologically inspired manta ray-shaped robot swim. The heart cells were altered to contract in response to specific frequencies of light – one side of the ray had cells that would respond to one frequency, the other side’s cells responded to another.

When the researchers shone light on the front of the robot, the cells there contracted and sent electrical signals to the cells further along the manta ray’s body. The contraction would propagate down the robot’s body, moving the device forward. The researchers could make the robot turn to the right or left by varying the frequency of the light they used. If they shone more light of the frequency the cells on one side would respond to, the contractions on that side of the manta ray would be stronger, allowing the researchers to steer the robot’s movement.

Toughening up the biobots

While exciting developments have been made in the field of biohybrid robotics, there’s still significant work to be done to get the devices out of the lab. Devices currently have limited lifespans and low force outputs, limiting their speed and ability to complete tasks. Robots made from mammalian or avian cells are very picky about their environmental conditions. For example, the ambient temperature must be near biological body temperature and the cells require regular feeding with nutrient-rich liquid. One possible remedy is to package the devices so that the muscle is protected from the external environment and constantly bathed in nutrients.

The sea slug Aplysia californica.
Jeff Gill, CC BY-ND

Another option is to use more robust cells as actuators. Here at Case Western Reserve University, we’ve recently begun to investigate this possibility by turning to the hardy marine sea slug Aplysia californica. Since A. californica lives in the intertidal region, it can experience big changes in temperature and environmental salinity over the course of a day. When the tide goes out, the sea slugs can get trapped in tide pools. As the sun beats down, water can evaporate and the temperature will rise. Conversely in the event of rain, the saltiness of the surrounding water can decrease. When the tide eventually comes in, the sea slugs are freed from the tidal pools. Sea slugs have evolved very hardy cells to endure this changeable habitat.

Sea turtle-inspired biohybrid robot, powered by muscle from the sea slug.
Dr. Andrew Horchler, CC BY-ND

We’ve been able to use Aplysia tissue to actuate a biohybrid robot, suggesting that we can manufacture tougher biobots using these resilient tissues. The devices, which are approximately 1.5 inches long and one inch wide, are large enough to carry a small payload.

A further challenge in developing biobots is that currently the devices lack any sort of on-board control system. Instead, engineers control them via external electrical fields or light. In order to develop completely autonomous biohybrid devices, we’ll need controllers that interface directly with the muscle and provide sensory inputs to the biohybrid robot itself. One possibility is to use neurons or clusters of neurons called ganglia as organic controllers.

That’s another reason we’re excited about using Aplysia in our lab. This sea slug has been a model system for neurobiology research for decades. A great deal is already known about the relationships between its neural system and its muscles – opening the possibility that we could use its neurons as organic controllers that could tell the robot which way to move and help it perform tasks, such as finding toxins or following a light.

While the field is still in its infancy, researchers envision many intriguing applications for biohybrid robots. For example, our tiny devices using slug tissue could be released as swarms into water supplies or the ocean to seek out toxins or leaking pipes. Due to the biocompatibility of the devices, if they break down or are eaten by wildlife these environmental sensors theoretically wouldn’t pose the same threat to the environment traditional nuts-and-bolts robots would.

One day, devices could be fabricated from human cells and used for medical applications. Biobots could provide targeted drug delivery, clean up clots or serve as compliant actuatable stents. By using organic substrates rather than polymers, such stents could be used to strengthen weak blood vessels to prevent aneurysms – and over time the device would be remodeled and integrated into the body. Beyond the small-scale biohybrid robots currently being developed, ongoing research in tissue engineering, such as attempts to grow vascular systems, may open the possibility of growing large-scale robots actuated by muscle.


Victoria Webster, Ph.D. Candidate in Mechanical and Aerospace Engineering, Case Western Reserve University

This article was originally published on The Conversation. Read the original article.

How might drone racing drive innovation?


Over the past 15 years, drones have progressed from laboratory demonstrations to widely available toys. Technological improvements have brought ever-smaller components required for flight stabilization and control, as well as significant improvements in battery technology. Capabilities once restricted to military vehicles are now found on toys that can be purchased at Wal-Mart.

Small cameras and transmitters mounted on a drone even allow real-time video to be sent back to the pilot. For a few hundred dollars, anyone can buy a “first person view” (FPV) system that puts the pilot of a small drone in a virtual cockpit. The result is an immersive experience: Flying an FPV drone is like Luke Skywalker or Princess Leia flying a speeder bike through the forests of Endor.

First-person viewing puts you in the virtual cockpit of a drone, like flying a speeder on Endor.

Perhaps inevitably, hobbyists started racing drones soon after FPV rigs became available. Now several drone racing leagues have begun, both in the U.S. and internationally. If, like auto racing, drone racing becomes a long-lasting sport yielding financial rewards for backers of winning teams, might technologies developed in the new sport of drone racing find their way into commercial and consumer products?

A drone race, as a spectator and on board the drones.

An example from history

Auto racing has a long history of developing and demonstrating new technologies that find their way into passenger cars, buses and trucks. Formula 1 racing teams developed many innovations that are now standard in commercially available vehicles.

Racing for innovation: Formula 1 teams.
Morio, CC BY-SA

These include disk brakes, tire design and materials, electronic engine control and monitoring systems, the sequential gearbox and paddle shifters, active suspension systems and traction control (so successful that both were banned from Formula 1 competition), and automotive use of composite materials such as carbon fiber reinforced plastics.

A look inside the World Drone Prix.

Starting with the basics

Aerodynamically, the multi-rotor drones that are used for racing are not sophisticated: A racing drone is essentially a brick (the battery and flight electronics) with four rotors attached. A rectangular block has a drag coefficient of roughly 1, while a carefully streamlined body with about the same proportions has a drag coefficient of about 0.05. Reducing the drag force means a drone needs less power to fly at high speed. That in turn allows a smaller battery to be carried, which means lighter weight and greater maneuverability.

A brick with rotors, ripe for aerodynamic improvement.
Drone image via shutterstock.com
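
For a rough sense of what that difference in drag coefficient means in practice, here is a back-of-the-envelope calculation; the frontal area and racing speed are assumed values chosen only for illustration.

```python
# Power needed to overcome aerodynamic drag: P = 0.5 * rho * Cd * A * v^3.
# Air density is standard sea-level air; area and speed are assumptions.
RHO = 1.225     # air density, kg/m^3
AREA = 0.01     # frontal area, m^2 (roughly 10 cm x 10 cm)
SPEED = 35.0    # flight speed, m/s (about 125 km/h)

def drag_power(cd, rho=RHO, area=AREA, v=SPEED):
    return 0.5 * rho * cd * area * v ** 3

brick = drag_power(cd=1.0)    # rectangular block
sleek = drag_power(cd=0.05)   # streamlined body of similar proportions
print(f"brick: {brick:.0f} W, streamlined: {sleek:.0f} W, "
      f"ratio: {brick / sleek:.0f}x")
```

At these assumed numbers the brick-shaped body needs on the order of 260 watts just to push through the air, versus roughly 13 watts for the streamlined shape: a twenty-fold saving that could go into a smaller battery, lighter weight or longer flight time.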

This is a case where technologies from aircraft and helicopter aerodynamics will find their way to the smaller vehicles. Commercial drone manufacturers have begun working on aerodynamic optimization, using techniques such as wind tunnel testing and computational fluid dynamics originally developed for analysis and design of full-scale aircraft and helicopters.

That could enable longer flight times. If so, it would give drone operators more time to take money-making photos and video in flight. It could also boost drones’ ability to assist missions such as searching for lost hikers. If drone racing becomes a billion-dollar-a-year sport – like auto racing – teams will deploy well-funded research labs to eke out every last bit of performance. That additional incentive – and spending – could be poured into racing advances that push drone technology farther and faster than might otherwise be the case.

Organized competition isn’t the only way to innovate, of course: Drone development has accelerated even without it. Today, the cheapest drones cost under US$50, though they can fly only indoors and have very limited flight capabilities. Hobby drones costing hundreds of dollars can perform stunning aerobatic feats in the hands of a skilled pilot. Drones capable of autonomous flight are also available, though they cost thousands of dollars and are used for more specialized purposes like scientific research, cinematography, law enforcement, and search and rescue.

Advancing control and awareness

The drones used in racing (and indeed, all current multi-rotor drones) contain hardware and software to improve stability. This is essentially a low-level autopilot responsible for “balancing” the vehicle. The human pilot controls the vehicle’s front/back and left/right tilt angles and the magnitude of the total thrust, as well as how fast the vehicle turns and changes direction.
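
As a rough illustration of what that low-level “balancing” loop does, here is a minimal sketch of a proportional-derivative controller holding one tilt axis at the angle the pilot commands. The gains, time step and simplified dynamics are all assumptions; real flight controllers are considerably more sophisticated.

```python
# Minimal sketch of a low-level stabilisation loop for one tilt axis.
# The pilot commands a target angle; a PD control law keeps nudging the
# vehicle toward it. Gains and dynamics are invented for illustration.
DT = 0.01          # control-loop time step, seconds
KP, KD = 8.0, 4.0  # proportional and derivative gains (assumed values)

def stabilise(angle, rate, target, steps=300):
    """Drive the tilt angle toward the pilot's commanded target."""
    for _ in range(steps):
        error = target - angle
        torque = KP * error - KD * rate  # PD control law
        rate += torque * DT              # toy unit-inertia rigid body
        angle += rate * DT
    return angle

# Pilot pushes the stick: command a 10-degree forward tilt from level hover.
print(f"angle after 3 s: {stabilise(angle=0.0, rate=0.0, target=10.0):.2f} deg")
```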

There is no reason why this must be done via control sticks, as is currently common: pilots could use a smartphone to control the drone instead. In fact, drone control doesn’t need a physical interface at all: the University of Florida recently hosted a (very basic) drone race in which the drones were controlled through brain-machine interfaces.

Racing drones steered by brain signals.

Aside from flight control, situation awareness is a key problem in drone operations. It is all too easy to crash a remotely operated vehicle into a pillar on the left when the cameras are all pointed forwards. In addition, the pilot of the lead drone in a race has no way of knowing where the competitors are: They could all be a long way behind, or one could be in a position to pass.

Robots need multiple camera angles to see themselves and their surroundings, like this mosaic self-portrait of NASA’s Curiosity Rover on Mars.
NASA

Solving this problem could have payoffs for other telepresence robotics operations, such as remotely operated underwater vehicles and even planetary rovers. Vision systems consisting of several cameras and a computer to stitch together the different views could help, or a haptic system could vibrate to alert a pilot to the presence of a drone or other obstacle nearby. Those sorts of technologies to improve the pilot’s awareness during a race could also be used to assist a remote-control robot pilot operating a vehicle at an oil drilling platform or near a hydrothermal vent in the deep ocean.

This is of course still very speculative: Drone racing is a sport still in its infancy. It is not yet clear whether it will become a massively popular sport. If it does, we could see very exciting advances coming from drone racing into both the toys that we fly in our living rooms and parks and into the drones used by professional videographers, engineers and scientists.


Jack Langelaan, Associate Professor of Aerospace Engineering, Pennsylvania State University

This article was originally published on The Conversation. Read the original article.

Beyond Asimov: how to plan for ethical robots


As robots become integrated into society more widely, we need to be sure they’ll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Today, more than 70 years after Asimov’s first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov’s Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?

Asimov knew they weren’t perfect


Rowena Morrill/GFDL, CC BY-SA

Asimov’s “I, Robot” stories explore a number of unintended consequences and downright failures of the Three Laws. In these early stories, the Three Laws are treated as forces with varying strengths, which can have unintended equilibrium behaviors, as in the stories “Runaround” and “Catch that Rabbit,” requiring human ingenuity to resolve. In the story “Liar!,” a telepathic robot, motivated by the First Law, tells humans what they want to hear, failing to foresee the greater harm that will result when the truth comes out. The robopsychologist Susan Calvin forces it to confront this dilemma, destroying its positronic brain.

In “Escape!,” Susan Calvin depresses the strength of the First Law enough to allow a super-intelligent robot to design a faster-than-light interstellar transportation method, even though it causes the deaths (but only temporarily!) of human pilots. In “The Evitable Conflict,” the machines that control the world’s economy interpret the First Law as protecting all humanity, not just individual human beings. This foreshadows Asimov’s later introduction of the “Zeroth Law” that can supersede the original three, potentially allowing a robot to harm a human being for humanity’s greater good.

0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.

Asimov’s laws are in a particular order, for good reason.
Randall Munroe/xkcd, CC BY-NC

Robots without ethics

It is reasonable to fear that, without ethical constraints, robots (or other artificial intelligences) could do great harm, perhaps to the entire human race, even by simply following their human-given instructions.

The 1991 movie “Terminator 2: Judgment Day” begins with a well-known science fiction scenario: an AI system called Skynet starts a nuclear war and almost destroys the human race. Deploying Skynet was a rational decision (it had a “perfect operational record”). Skynet “begins to learn at a geometric rate,” scaring its creators, who try to shut it down. Skynet fights back (as a critical defense system, it was undoubtedly programmed to defend itself). Skynet finds an unexpected solution to its problem (through creative problem solving, unconstrained by common sense or morality).

Catastrophe results from giving too much power to artificial intelligence.

Less apocalyptic real-world examples of out-of-control AI have actually taken place. High-speed automated trading systems have responded to unusual conditions in the stock market, creating a positive feedback cycle resulting in a “flash crash.” Fortunately, only billions of dollars were lost, rather than billions of lives, but the computer systems involved have little or no understanding of the difference.

Toward defining robot ethics

While no simple fixed set of mechanical rules will ensure ethical behavior, we can make some observations about properties that a moral and ethical system should have in order to allow autonomous agents (people, robots or whatever) to live well together. Many of these elements are already expected of human beings.

These properties are inspired by a number of sources, including the Engineering and Physical Sciences Research Council (EPSRC) Principles of Robotics and recent work on the cognitive science of morality and ethics focused on neuroscience, social psychology, developmental psychology and philosophy.

The EPSRC takes the position that robots are simply tools, for which humans must take responsibility. At the extreme other end of the spectrum is the concern that super-intelligent, super-powerful robots could suddenly emerge and control the destiny of the human race, for better or for worse. The following list defines a middle ground, describing how future intelligent robots should learn, like children do, how to behave according to the standards of our society.

  • If robots (and other AIs) increasingly participate in our society, then they will need to follow moral and ethical rules much as people do. Some rules are embodied in laws against killing, stealing, lying and driving on the wrong side of the street. Others are less formal but nonetheless important, like being helpful and cooperative when the opportunity arises.
  • Some situations require a quick moral judgment and response – for example, a child running into traffic or the opportunity to pocket a dropped wallet. Simple rules can provide automatic real-time response, when there is no time for deliberation and a cost-benefit analysis. (Someday, robots may reach human-level intelligence while operating far faster than human thought, allowing careful deliberation in milliseconds, but that day has not yet arrived, and it may be far in the future.)
  • A quick response may not always be the right one, which may be recognized after feedback from others or careful personal reflection. Therefore, the agent must be able to learn from experience including feedback and deliberation, resulting in new and improved rules.
  • To benefit from feedback from others in society, the robot must be able to explain and justify its decisions about ethical actions, and to understand explanations and critiques from others.
  • Given that an artificial intelligence learns from its mistakes, we must be very cautious about how much power we give it. We humans must ensure that it has experienced a sufficient range of situations and has satisfied us with its responses, earning our trust. The critical mistake humans made with Skynet in “Terminator 2” was handing over control of the nuclear arsenal.
  • Trust, and trustworthiness, must be earned by the robot. Trust is earned slowly, through extensive experience, but can be lost quickly, through a single bad decision.
  • As with a human, any time a robot acts, the selection of that action in that situation sends a signal to the rest of society about how that agent makes decisions, and therefore how trustworthy it is.
  • A robot mind is software, which can be backed up, restored if the original is damaged or destroyed, or duplicated in another body. If robots of a certain kind are exact duplicates of each other, then trust may not need to be earned individually. Trust earned (or lost) by one robot could be shared by other robots of the same kind.
  • Behaving morally and well toward others is not the same as taking moral responsibility. Only competent adult humans can take full responsibility for their actions, but we expect children, animals, corporations, and robots to behave well to the best of their abilities.

Human morality and ethics are learned by children over years, but the nature of morality and ethics itself varies with the society and evolves over decades and centuries. No simple fixed set of moral rules, whether Asimov’s Three Laws or the Ten Commandments, can be adequate guidance for humans or robots in our complex society and world. Through observations like the ones above, we are beginning to understand the complex feedback-driven learning process that leads to morality.


Benjamin Kuipers, Professor of Computer Science and Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.

White House launches public workshops on AI issues


The White House today announced a series of public workshops on artificial intelligence (AI) and the creation of an interagency working group to learn more about the benefits and risks of the technology. The first workshop, Artificial Intelligence: Law and Policy, will take place on May 24 at the University of Washington School of Law, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology.

The final workshop will be held on July 7th at the Skirball Center for the Performing Arts, New York. The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term will address the near-term impacts of AI technologies across social and economic systems. The event is hosted by the White House and New York University’s Information Law Institute, with support from Google Open Research and Microsoft Research.

The focus will be the challenges of the next 5-10 years, specifically addressing five themes: social inequality, labor, financial markets, healthcare, and ethics. Leaders from industry, academia, and civil society will share ideas for technical design, research and policy directions.

You can learn more about these events via their websites, and each workshop will be livestreamed.

According to Ed Felten, Deputy U.S. Chief Technology Officer, “There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers ‘smarter’ at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field.

Today’s AI is confined to narrow, specific tasks, and isn’t anything like the general, adaptable intelligence that humans exhibit. Despite this, AI’s influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition. In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.

In education, AI has the potential to help teachers customize instruction for each student’s needs. And, of course, AI plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.

Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions. For example, AI will create new jobs while phasing out some old ones—magnifying the importance of programs like TechHire that are preparing our workforce with the skills to get ahead in today’s economy, and tomorrow’s. AI systems can also behave in surprising ways, and we’re increasingly relying on AI to advise decisions and operate physical and virtual machinery—adding to the challenge of predicting and controlling how complex technologies will behave.

There are tremendous opportunities and an array of considerations across the Federal Government in privacy, security, regulation, law, and research and development to be taken into account when effectively integrating this technology into both government and private-sector activities.

That is why the White House Office of Science and Technology Policy is excited to announce that we will be co-hosting four public workshops over the coming months on topics in AI to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology. These four workshops will be co-hosted by academic and non-profit organizations, and two of them will also be co-hosted by the National Economic Council. These workshops will feed into the development of a public report later this year. We invite anyone interested to learn more about this emergent field of technology and give input about future directions and areas of challenge and opportunity.

The Federal Government also is working to leverage AI for public good and toward a more effective government. A new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence will meet for the first time next week. This group will monitor state-of-the-art advances and technology milestones in artificial intelligence and machine learning within the Federal Government, in the private sector, and internationally; and help coordinate Federal activity in this space.

Broadly, between now and the end of the Administration, the NSTC group will work to increase the use of AI and machine learning to improve the delivery of government services. Such efforts may include empowering Federal departments and agencies to run pilot projects evaluating new AI-driven approaches and government investment in research on how to use AI to make government services more effective. Applications in AI to areas of government that are not traditionally technology-focused are especially significant; there is tremendous potential in AI-driven improvements to programs and delivery of services that help make everyday life better for Americans in areas related to urban systems and smart cities, mental and physical health, social welfare, criminal justice, the environment, and much more.

We look forward to engaging with the public about how best to harness the opportunities brought by artificial intelligence. Stay tuned for more information about the work we’re doing on this subject as it develops over the coming months.”

Ed Felten is a Deputy U.S. Chief Technology Officer.

Looking for art in artificial intelligence


Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Alan Turing.
Stephen Kettle sculpture; photo by Jon Callas, CC BY

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

Turing’s simple, but powerful, thought experiment gives a very general framework for testing many different aspects of the human-machine boundary, of which conversation is but a single example.

On May 18 at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.

Conducting the tests

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).
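
As a purely hypothetical sketch of how an entry might approach this, the snippet below greedily chains tracks whose annotated tempo and brightness sit closest to the current track, starting from the seed. The track list and feature names are invented stand-ins for the competition’s actual annotated library.

```python
import random

# Invented stand-in for the competition's annotated library; the real one
# carries more than 20 features per track (genre, bpm, beats, chroma, ...).
random.seed(7)
library = [
    {"title": f"track_{i:02d}",
     "bpm": random.uniform(118, 132),
     "brightness": random.random(),
     "minutes": 3.0}
    for i in range(40)
]

def transition_cost(a, b):
    """How jarring a jump from track a to track b would feel (toy metric)."""
    return abs(a["bpm"] - b["bpm"]) + 10 * abs(a["brightness"] - b["brightness"])

def build_set(seed_track, tracks, target_minutes=15.0):
    """Greedily grow a roughly 15-minute set outward from the seed track."""
    playlist, total = [seed_track], seed_track["minutes"]
    remaining = [t for t in tracks if t is not seed_track]
    current = seed_track
    while total < target_minutes and remaining:
        nxt = min(remaining, key=lambda t: transition_cost(current, t))
        playlist.append(nxt)
        remaining.remove(nxt)
        total += nxt["minutes"]
        current = nxt
    return playlist

print([t["title"] for t in build_set(library[0], library)])
```

A competitive entry would of course also beat-match, mix and modify the tracks rather than simply ordering them, but the seed-in, set-out structure is the same.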

Can a computer write a better sonnet than this man?
Martin Droeshout (1623)

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.
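
The snippet below sketches only that required interface, with placeholder word lists nowhere near a real entry: given a seed noun phrase, it can keep yielding distinct lines indefinitely, and the same seed always reproduces the same stream.

```python
import itertools
import random

# Placeholder vocabulary; a real entry would model metre, rhyme and sense.
ADJECTIVES = ["quiet", "restless", "gilded", "borrowed", "wintry"]
VERBS = ["remembers", "forgets", "outlasts", "mirrors", "unmakes"]

def generate(seed_phrase):
    """Yield an endless stream of distinct lines built around the seed."""
    rng = random.Random(seed_phrase)  # same seed phrase, reproducible stream
    for i in itertools.count(1):
        adj, verb = rng.choice(ADJECTIVES), rng.choice(VERBS)
        yield f"Line {i}: the {adj} {seed_phrase} {verb} what it began."

lines = generate("cheese grater")
for _ in range(3):
    print(next(lines))
```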

To perform the test, we will first screen the submitted entries to eliminate the obviously machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.
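
One simple way to operationalise “statistically indistinguishable”, offered here only as an assumed illustration rather than the competition’s actual scoring rule, is a binomial test on the judges’ verdicts: if the judges cannot identify the machine-made entry reliably more often than coin-flipping would, the entry passes.

```python
import math

def p_at_least(successes, trials, p=0.5):
    """P(X >= successes) for X ~ Binomial(trials, p): the chance of doing
    this well or better by pure guessing."""
    return sum(math.comb(trials, k) * p**k * (1 - p) ** (trials - k)
               for k in range(successes, trials + 1))

# Hypothetical tallies: 60 judge verdicts on one machine-generated entry,
# 34 of which correctly flagged it as machine-made.
correct, total = 34, 60
p_value = p_at_least(correct, total)
print(f"chance of judges doing this well by luck: {p_value:.3f}")
# A large p-value (say above 0.05) means the judges' performance is
# consistent with guessing, i.e. the entry is indistinguishable under
# this toy criterion.
```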

The competitions are open to any and all comers. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.

Judging the differences

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man.) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.

Who is the artist?

Who is the creator – human or machine? Or both?
Hands image via shutterstock.com

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

We’re looking forward to seeing what our programming artists submit. Regardless of their performance on “the test,” their body of work will continue to expand the horizon of creativity and machine-human coevolution.


Michael Casey, James Wright Professor of Music, Professor of Computer Science, Dartmouth College and Daniel N. Rockmore, Professor, Department of Mathematics, Computational Science, and Computer Science, Dartmouth College

This article was originally published on The Conversation. Read the original article.

Are robots taking our jobs?


If you put water on the stove and heat it up, it will at first just get hotter and hotter. You may then conclude that heating water results only in hotter water. But at some point everything changes – the water starts to boil, turning from hot liquid into steam. Physicists call this a “phase transition.”

Automation, driven by technological progress, has been increasing inexorably for the past several decades. Two schools of economic thinking have for many years been engaged in a debate about the potential effects of automation on jobs, employment and human activity: will new technology spawn mass unemployment, as the robots take jobs away from humans? Or will the jobs robots take over release or unveil – or even create – demand for new human jobs?

The debate has flared up again recently because of technological achievements such as deep learning, which recently enabled a Google software program called AlphaGo to beat Go world champion Lee Sedol, a task considered even harder than beating the world’s chess champions.

Ultimately the question boils down to this: are today’s modern technological innovations like those of the past, which made obsolete the job of buggy maker, but created the job of automobile manufacturer? Or is there something about today that is markedly different?

Malcolm Gladwell’s 2006 book The Tipping Point highlighted what he called “that magic moment when an idea, trend, or social behavior crosses a threshold, tips, and spreads like wildfire.” Can we really be confident that we are not approaching a tipping point, a phase transition – that we are not mistaking the trend of technology both destroying and creating jobs for a law that it will always continue this way?

Old worries about new tech

This is not a new concern. At least since the Luddites of early 19th-century Britain, new technologies have caused fear about the inevitable changes they bring.

It may seem easy to dismiss today’s concerns as unfounded in reality. But economists Jeffrey Sachs of Columbia University and Laurence Kotlikoff of Boston University argue, “What if machines are getting so smart, thanks to their microprocessor brains, that they no longer need unskilled labor to operate?” After all, they write:

Smart machines now collect our highway tolls, check us out at stores, take our blood pressure, massage our backs, give us directions, answer our phones, print our documents, transmit our messages, rock our babies, read our books, turn on our lights, shine our shoes, guard our homes, fly our planes, write our wills, teach our children, kill our enemies, and the list goes on.

Looking at the economic data

There is considerable evidence that this concern may be justified. Erik Brynjolfsson and Andrew McAfee of MIT recently wrote:

For several decades after World War II the economic statistics we care most about all rose together here in America as if they were tightly coupled. GDP grew, and so did productivity — our ability to get more output from each worker. At the same time, we created millions of jobs, and many of these were the kinds of jobs that allowed the average American worker, who didn’t (and still doesn’t) have a college degree, to enjoy a high and rising standard of living. But … productivity growth and employment growth started to become decoupled from each other.

Lots more productivity; not much more earning.
U.S. Bureau of Labor Statistics

As the decoupling data show, the U.S. economy has been performing quite poorly for the bottom 90 percent of Americans for the past 40 years. Technology is driving productivity improvements, which grow the economy. But the rising tide is not lifting all boats, and most people are not seeing any benefit from this growth. While the U.S. economy is still creating jobs, it is not creating enough of them. The labor force participation rate, which measures the active portion of the labor force, has been dropping since the late 1990s.

While manufacturing output is at an all-time high, manufacturing employment is today lower than it was in the late 1940s. Wages for private nonsupervisory employees have stagnated since the late 1960s, and the wages-to-GDP ratio has been declining since 1970. Long-term unemployment is trending upwards, and inequality has become a global discussion topic, following the publication of Thomas Piketty’s 2014 book, Capital in the Twenty-First Century.

A widening danger?

Most shockingly, economists Angus Deaton, winner of the 2015 Nobel Memorial Prize in Economic Science, and Anne Case found that mortality for white middle-age Americans has been increasing over the past 25 years, due to an epidemic of suicides and afflictions stemming from substance abuse.

Is automation, driven by progress in technology, in general, and artificial intelligence and robotics, in particular, the main cause for the economic decline of working Americans?

In economics, it is easier to agree on the data than to agree on causality. Many other factors can be in play, such as globalization, deregulation, decline of unions and the like. Yet in a 2014 poll of leading academic economists conducted by the Chicago Initiative on Global Markets, regarding the impact of technology on employment and earnings, 43 percent of those polled agreed with the statement that “information technology and automation are a central reason why median wages have been stagnant in the U.S. over the decade, despite rising productivity,” while only 28 percent disagreed. Similarly, a 2015 study by the International Monetary Fund concluded that technological progress is a major factor in the increase of inequality over the past decades.

The bottom line is that while automation is eliminating many jobs in the economy that were once done by people, there is no sign that the introduction of technologies in recent years is creating an equal number of well-paying jobs to compensate for those losses. A 2014 Oxford study found that the number of U.S. workers shifting into new industries has been strikingly small: in 2010, only 0.5 percent of the labor force was employed in industries that did not exist in 2000.

The discussion about humans, machines and work tends to be a discussion about some undetermined point in the far future. But it is time to face reality. The future is now.


Moshe Y. Vardi, Professor of Computer Science, Rice University

This article was originally published on The Conversation. Read the original article.