Category Archives: Robots

Why robots need to be able to say ‘No’


Should you always do what other people tell you to do? Clearly not. Everyone knows that. So should future robots always obey our commands? At first glance, you might think they should, simply because they are machines and that’s what they are designed to do. But then think of all the times you would not mindlessly carry out others’ instructions – and put robots into those situations.

Just consider:

  • An elder-care robot tasked by a forgetful owner to wash the “dirty clothes,” even though the clothes had just come out of the washer
  • A preschooler who orders the daycare robot to throw a ball out the window
  • A student commanding her robot tutor to do all the homework instead of doing it herself
  • A household robot instructed by its busy and distracted owner to run the garbage disposal even though spoons and knives are stuck in it

There are plenty of benign cases where robots receive commands that ideally should not be carried out because they lead to unwanted outcomes. But not all cases will be that innocuous, even if the commands initially appear to be.

Consider a robot car instructed to back up while the dog is sleeping in the driveway behind it, or a kitchen aid robot instructed to lift a knife and walk forward when positioned behind a human chef. The commands are simple, but the outcomes are significantly worse.

How can we humans avoid such harmful results of robot obedience? If driving around the dog were not possible, the car would have to refuse to drive at all. And similarly, if avoiding stabbing the chef were not possible, the robot would have to either stop walking forward or not pick up the knife in the first place.

In either case, it is essential for these autonomous machines to detect the potential harm their actions could cause and to react by either attempting to avoid it or, if the harm cannot be avoided, refusing to carry out the human instruction. How do we teach robots when it’s OK to say no?

How can robots know what will happen next?

In our lab, we have started to develop robotic controls that make simple inferences based on human commands. These will determine whether the robot should carry them out as instructed or reject them because they violate an ethical principle the robot is programmed to obey.

A robot that can reject unsafe orders.

Telling robots how and when – and why – to disobey is far easier said than done. Figuring out what harm or problems might result from an action is not simply a matter of looking at direct outcomes. A ball thrown out a window could end up in the yard, with no harm done. But the ball could end up on a busy street, never to be seen again, or even causing a driver to swerve and crash. Context makes all the difference.

It is difficult for today’s robots to determine when it is okay to throw a ball – such as to a child playing catch – and when it’s not – such as out the window or in the garbage. Even harder is if the child is trying to trick the robot, pretending to play a ball game but then ducking, letting the ball disappear through the open window.

Explaining morality and law to robots

Understanding those dangers involves a significant amount of background knowledge (including the prospect that playing ball in front of an open window could send the ball through the window). It requires the robot not only to consider action outcomes by themselves, but also to contemplate the intentions of the humans giving the instructions.

To handle these complications of human instructions – benevolent or not – robots need to be able to explicitly reason through consequences of actions and compare outcomes to established social and moral principles that prescribe what is and is not desirable or legal. As seen above, our robot has a general rule that says, “If you are instructed to perform an action and it is possible that performing the action could cause harm, then you are allowed to not perform it.” Making the relationship between obligations and permissions explicit allows the robot to reason through the possible consequences of an instruction and whether they are acceptable.
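The shape of that rule can be sketched in a few lines of code. To be clear, this is an illustrative toy, not our actual system: the predicates, the outcome predictions and the context dictionary below are all hypothetical placeholders standing in for much harder perception and reasoning problems.

```python
# Toy sketch of the permission rule: an instructed action may be
# refused if it could cause harm and no safe alternative exists.
# All predicates and data here are hypothetical, for illustration only.

def could_cause_harm(action, context):
    """Return True if any predicted outcome of the action is harmful."""
    outcomes = context["predicted_outcomes"].get(action, [])
    return any(outcome["harmful"] for outcome in outcomes)

def decide(action, context):
    """Execute an instruction, substitute a safe alternative, or refuse."""
    if not could_cause_harm(action, context):
        return "execute"
    # Try a safe alternative first (e.g. drive around the sleeping dog)...
    for alt in context.get("alternatives", {}).get(action, []):
        if not could_cause_harm(alt, context):
            return f"execute:{alt}"
    # ...otherwise the robot is permitted to refuse.
    return "refuse"

# The robot car example: backing up would hit the dog, but a detour is safe.
context = {
    "predicted_outcomes": {
        "back_up": [{"harmful": True}],        # dog asleep in the driveway
        "drive_around": [{"harmful": False}],
    },
    "alternatives": {"back_up": ["drive_around"]},
}

print(decide("back_up", context))  # → execute:drive_around
```

The hard part, of course, is everything this sketch assumes away: predicting outcomes, judging which ones count as harm, and generating candidate alternatives in the first place.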

In general, robots should never perform illegal actions, nor should they perform legal actions that are not desirable. Hence, they will need representations of laws, moral norms and even etiquette in order to be able to determine whether the outcomes of an instructed action, or even the action itself, might be in violation of those principles.

While our programs are still a long way from what we will need to allow robots to handle the examples above, our current system already proves an essential point: robots must be able to disobey in order to obey.

The Conversation

Matthias Scheutz, Professor of Cognitive and Computer Science, Tufts University

This article was originally published on The Conversation. Read the original article.

12 Amazing New Technologies To Check Out This Year


It seems that every year there are new technologies coming out that are even more amazing than what the previous year held. From self-driving cars to robots entering daily lives, the world is changing at an incredibly fast rate. Click the ‘Next’ button below to see 12 new technologies on the scene in 2016!

Fast navigate:

  1. This 3D-Printing Pothole Repairing Robot
  2. This Hoverboard Actually Hovers
  3. Self-Driving Cars Are Coming To A Highway Near You
  4. Turning Segways Into Robots
  5. This Passenger Drone Takes You For A Ride
  6. This Shirt Is A Biometric Wearable Computer
  7. This Spoon Tells You When To Stop Eating
  8. This Refrigerator Can Also Help You With Your Diet
  9. Into Mountain Biking? Then This May Be A Godsend
  10. This Basketball Trains You To Be A Better Player
  11. Snoring Just Got A Whole Lot Quieter
  12. These Contact Lenses Eliminate The Need For Bifocals

This 3D-Printing Pothole Repairing Robot


Robert Flitsch, inventor of the Addibot, wants to fix potholes on city streets. In the near future, your local public works department might end up using this small, multi-wheeled robot to help repair those nasty holes your bikes and cars plop over, damaging your wheels and axles. It works by laying down material layer over layer, the same way desktop 3D printers work, and can steer itself or be driven via a remote control console. Popular Science reports,

“One of the main limitations with 3D printers is you typically have it printing inside this box, and you can really only print objects of the size of the workspace you’re printing in,” says the 22-year-old Flitsch, a mechanical engineer who graduated from the Harvard John A. Paulson School of Engineering and Applied Sciences last May. “If you take additive manufacturing implements and make them mobile, you can print objects of arbitrary size.”

What’s great about this invention is that it is designed to repair a variety of surfaces with its array of nozzles. The tar chemicals are kept on-board in a heated container, and could be powered by sunlight and/or battery.


Self-driving Cars Are Coming To A Highway Near You

As much of a fad as Google's inventions tend to be, the self-driving car looks like it could actually be here to stay. But is it safe?

Chris Urmson, lobbyist for, and director of, Google’s Self-driving Car division, will soon be lobbying senators for federal help in getting driver-less cars to the public market. He is expected to pitch to the Senate Commerce Committee that the technology will improve safety, and cut costs for roads, trains and buses with the notion that robot cars will save us from ourselves. But will they?

The president seems to think so, as he offered $4 billion of tax-payer money to help fund the project, and the U.S. Transportation Department tends to agree, saying that automated vehicles would be able to drive closer together which would allow for more cars on the road and higher speeds without the risk of human error. Plus, congestion would be decreased due to the fact that each car can be GPS’d to a server to look for open parking spaces.

However, a few skeptics wonder if this new invention would make congestion better or worse. The fad of having a self-driving car may deter people from using public transportation and there may inevitably be more independent cars on the road.


Travel Agents Could Be Replaced With Robots


This article is part of a series:
jobs replaced by robots

2. Travel Agents

Between travel websites increasingly being used to book trips from mobile phones and new advances in artificial intelligence rearing their heads, the need for a human to help plan business trips and vacations is quickly going out the window. There are so many travel booking websites out there, it would be weird if booking a flight to San Francisco took longer than 5 or 10 minutes these days. There are still plenty of travel agents around the U.S., but they are indeed a dying breed, as the majority of their clientele are of the older generation.


Bank Clerks Are Being Replaced By Robots



3. Bank Clerks

The only real reason you have a clerk at a bank is to handle complicated transactions that would otherwise hold up the line at the ATM. I know we want to think that it’s because ATM computers aren’t very sophisticated, but it’s really because humans are impatient. At the moment, ATM computers are set up in the simplest way in order to get the transaction done and over with, resulting in more transactions in less time and more potential money being collected from fees. Until an ATM is equipped with artificial intelligence units, we’re stuck dealing with people through the glass, but that won’t be happening much longer. In Japan this past April, a robot teller was introduced to the public for the first time in the history of the planet.


6 Jobs That Could Be Replaced By Robots in 15 Years



When I first read that an accountant or bookkeeper would be unnecessary, I thought, “Nah, there’s a lot of reasons to have a human go over my financials,” but as I thought about it further I realized how off-base I was. Why? Because computers are more efficient at number crunching. It soon hit me that almost all jobs could eventually be replaced by robots and computers.

1. Delivery (Mail, cargo, even food!)

We already know about drones, of course. They’re used primarily for dropping bombs on innocent people in the Middle East, but they’re now also beginning to be used for delivering cargo over there. In fact, they’re so prevalent in Afghanistan that news reporters are actually giving drone forecasts (I shit you not). While flying drones are the current standard, it’s becoming increasingly clear that self-driving cars will be handling package delivery in the near future as well.


How 3D printing helped robots tackle their greatest obstacle: stairs


We’ve long attempted to recreate living creatures in robot form. From the very early days of robotics, there have been attempts to reproduce systems similar to human arms and hands. This has been extended to flexible, mobile platforms reproducing different animals, from dogs to snakes to climbing spider octopods, and even entire humanoids.

One of the key actions performed by animals from mantises to kangaroos is jumping. But incorporating a jumping mechanism into autonomous robots requires much more effort from designers. One of the main challenges for robots is still travelling efficiently over rugged surfaces and obstacles. Even the simple task of going up or down a staircase has proven to be rather difficult for robot engineers.

A jumping robot could provide access to areas that are inaccessible to traditional mobile wheeled or legged robots. In the case of some search-and-rescue or exploration missions, in collapsed buildings for example, such a robot might even be preferable to unmanned aerial vehicles (UAVs) or quadcopter “drones”.

There has been increasing research in the robotics field to take on the challenges of designing a mobile platform capable of jumping. Different techniques have been implemented for jumping robots such as using double jointed hydraulic legs or a carbon dioxide-powered piston to push the robot off the ground. Other methods include using “shape memory alloy” – metal that alters its shape when heated with electrical current to create a jumping force – and even controlled explosions. But currently there is no universally accepted standard solution to this complex task.

A new approach explored by researchers at the University of California San Diego and Harvard University uses a robot with a partially soft body. Most robots have largely rigid frames incorporating sensors, actuators and controllers, but a specific branch of robotic design aims to make robots that are soft, flexible and compliant with their environment – just like biological organisms. Soft frames and structures help to produce complex movements that could not be achieved by rigid frames.

Soft landing
Jacobs School of Engineering/UC San Diego/Harvard University

The new robot was created using 3D printing technology to produce a design that seamlessly integrates rigid and soft parts. The main segment comprises two hemispheres nestled one inside the other to create a flexible compartment. Oxygen and butane are injected into the compartment and ignited, causing it to expand and launching the robot into the air. Pneumatic legs are used to tilt the robot body in the intended jump direction.

Unlike many other mechanisms, this allows the robot to jump continuously without a pause between each movement as it recharges. For example, a spring-and-clutch mechanism would require the robot to wait for the spring to recompress and then release. The downside is that this mechanism would be difficult to mass-manufacture because of its reliance on 3D printing.

The use of a 3D printer to combine the robot’s soft and hard elements in a single structure is a big part of what makes it possible. There are now masses of different materials for different purposes in the world of 3D printing, from flexible NinjaFlex to high-strength Nylon and even traditional materials such as wood and copper.

The creation of “multi-extrusion” printers with multiple print heads means that two or more materials can be used to create one object using whatever complex design the engineer can come up with, including animal-like structures. For example, NinjaFlex, with its high flexibility, could be used to create a skin or muscle-like outer material combined with Nylon near the core to protect vital inner components, just like a rib cage.

In the new robot, the top hemisphere is printed as a single component but with nine different layers of stiffness, from rubber-like flexibility on the outside to full rigidity on the inside. This gives it the necessary strength and resilience to survive the impact when it lands. By 3D printing and trialling multiple versions of the robot with different material combinations, the engineers realised a fully rigid model would jump higher but would be more likely to break and so went with the more flexible outer shell.

Once robots are capable of performing more tasks with the skill of humans or animals, such as climbing stairs, navigating on their own and manipulating objects, they will start to become more integrated into our daily lives. This latest project highlights how 3D printing can help engineers design and test different ideas along the road to that goal.

The Conversation

Ahmad Lotfi is Reader in Computational Intelligence at Nottingham Trent University.

This article was originally published on The Conversation. Read the original article.