Category Archives: Artificial Intelligence

New and ongoing developments in computers that can think for themselves.

Why we don’t trust robots


Joffrey Becker, Collège de France

Robots raise all kinds of concerns. Some experts think they could steal our jobs. And if artificial intelligence keeps advancing, they might even be tempted to enslave us, or to annihilate the whole of humanity.

Robots are strange creatures, and not only for these frequently invoked reasons. We have good cause to be a little worried about these machines.

An advertisement for Kuka robotics: can these machines really replace us?

Imagine that you are visiting the Quai Branly-Jacques Chirac, a museum in Paris dedicated to anthropology and ethnology. As you walk through the collection, your curiosity leads you to a certain piece. After a while, you begin to sense a familiar presence heading towards the same objet d’art that has caught your attention.

You move slowly, and as you turn your head a strange feeling seizes you because what you seem to distinguish, still blurry in your peripheral vision, is a not-quite-human figure. Anxiety takes over.

As your head turns and your vision becomes sharper, this feeling gets stronger. You realise that this is a humanoid machine, a robot called Berenson. Named after the American art critic Bernard Berenson and designed by the roboticist Philippe Gaussier (Image and Signal processing Lab) and the anthropologist Denis Vidal (Institut de recherche sur le développement), Berenson is part of an experiment underway at the Quai Branly museum since 2012.

The strangeness of the encounter with Berenson leaves you suddenly frightened, and you step back, away from the machine.

The uncanny valley

This feeling has been explored in robotics since the 1970s, when Japanese researcher Professor Masahiro Mori proposed his “uncanny valley” theory. If a robot resembles us, he suggested, we are inclined to consider its presence in the same way as we would that of a human being.

But when the machine reveals its robot nature to us, we will feel discomfort. Enter what Mori dubbed “the uncanny valley”. The robot will then be regarded as something of a zombie.

Mori’s theory cannot be systematically verified. But the feelings we experience when we meet an autonomous machine are certainly tinged with both incomprehension and curiosity.

The experiment conducted with Berenson at the Quai Branly, for example, shows that the robot’s presence can elicit paradoxical behaviour in museum goers. It underlines the deep ambiguity that characterises the relationship one can have with a robot, particularly the many communication problems they pose for humans.

If we are wary of such machines, it is mainly because it is not clear to us whether they have intentions and, if so, what those intentions are, or how to establish the minimal understanding that is essential in any interaction. Thus, it is common to see visitors to the Quai Branly adopting social behaviour with Berenson, such as talking to it, or facing it to find out how it perceives its environment.

In one way or another, visitors mainly try to establish contact. It appears that there is something strategic in considering the robot, even temporarily, as a person. And these social behaviours are not only observed when humans interact with machines that resemble us: it seems we make anthropomorphic projections whenever humans and robots meet.

Social interactions

An interdisciplinary team has recently been set up to explore the many dimensions revealed during these interactions. In particular, they are looking at the moments when, in our minds, we are ready to endow robots with intentions and intelligence.

This is how the PsyPhINe project was born. Based on interactions between humans and a robotic lamp, this project seeks to better understand people’s tendency to anthropomorphise machines.

Once they get accustomed to the strangeness of the situation, it is not uncommon to see people engaging socially with the lamp. During a game in which people are invited to play with this robot, they can be seen reacting to its movements and sometimes speaking to it, commenting on what it is doing or on the situation itself.

Mistrust often characterises the first moments of our relations with machines. Beyond their appearance, most people don’t know exactly what robots are made of, what their functions are and what their intentions might be. The robot world seems way too far from ours.

But this feeling quickly disappears. Assuming they have not already run away from the machine, people usually seek to define and maintain a frame for communication. Typically, they rely on existing communication habits, such as those used when talking to pets, for example, or with any living being whose world is to some degree different from theirs.

Ultimately, it seems, we humans are as suspicious of our technologies as we are fascinated by the possibilities they open up.

Joffrey Becker, Anthropologue, Laboratoire d’anthropologie sociale, Collège de France

This article was originally published on The Conversation. Read the original article.

We could soon face a robot crimewave … the law needs to be ready


Christopher Markou, University of Cambridge

This is where we are at in 2017: sophisticated algorithms are both predicting and helping to solve crimes committed by humans; predicting the outcome of court cases and human rights trials; and helping with the work done by lawyers in those cases. By 2040, there is even a suggestion that sophisticated robots will be committing a good chunk of all the crime in the world. Just ask the toddler who was run over by a security robot at a California mall last year.

How do we make sense of all this? Should we be terrified? That would be generally unproductive. Should we shrug our shoulders as a society and get back to Netflix? Tempting, but no. Should we start making plans for how we deal with all of this? Absolutely.

Fear of Artificial Intelligence (AI) is a big theme. Technology can be a downright scary thing, particularly when it’s new, powerful, and comes with lots of question marks. But films like Terminator and shows like Westworld are more than just entertainment: they are a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.

Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around, whether it’s a military drone with a full payload, a law enforcement robot that explodes to kill a dangerous suspect, or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.

There’s a cynical saying in law that “where there’s blame, there’s a claim”. But who do we blame when a robot does wrong? This proposition can easily be dismissed as something too abstract to worry about. But let’s not forget that a robot was arrested (and released without charge) for buying drugs; and Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash while his Tesla was in Autopilot mode.

While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright brothers took their first flight at Kitty Hawk. Time and time again, the law is presented with these novel challenges. And despite initial overreaction, it got there in the end. Simply put: law evolves.

Robot guilt

The role of the law can be defined in many ways, but ultimately it is a system within society for stabilising people’s expectations. If you get mugged, you expect the mugger to be charged with a crime and punished accordingly.

But the law also has expectations of us; we must comply with it to the fullest extent our consciences allow. As humans we can generally do that. We have the capacity to decide whether to speed or obey the speed limit – and so humans are considered by the law to be “legal persons”.

To varying extents, companies are endowed with legal personhood, too. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.

The problem arises when the machines themselves can make decisions of their own accord. As impressive as intelligent assistants such as Alexa, Siri or Cortana are, they fall far short of the threshold for legal personhood. But what happens when their more advanced descendants begin causing real harm?

A guilty AI mind?

The criminal law has two critical concepts. First, it contains the idea that liability for harm arises whenever harm has been or is likely to be caused by a certain act or omission.

Second, criminal law requires that an accused is culpable for their actions. This is known as a “guilty mind” or mens rea. The idea behind mens rea is to ensure that the accused both completed the action of assaulting someone and had the intention of harming them, or knew harm was a likely consequence of their action.

Blind justice for an AI.
Shutterstock

So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the “guilty mind” of a non-human? Can this be done by referring to and adapting existing legal principles?

Take driverless cars. Cars drive on roads and there are regulatory frameworks in place to assure that there is a human behind the wheel (at least to some extent). However, once fully autonomous cars arrive there will need to be extensive adjustments to laws and regulations that account for the new types of interactions that will happen between human and machine on the road.

As AI technology evolves, it will eventually reach a state of sophistication that will allow it to bypass human control. As the bypassing of human control becomes more widespread, then the questions about harm, risk, fault and punishment will become more important. Film, television and literature may dwell on the most extreme examples of “robots gone awry” but the legal realities should not be left to Hollywood.

So can robots commit crime? In short: yes. If a robot kills someone, then it has committed a crime (actus reus), but technically only half a crime, as it would be far harder to determine mens rea. How do we know the robot intended to do what it did?

For now, we are nowhere near the level of building a fully sentient or “conscious” humanoid robot that looks, acts, talks, and thinks like us humans. But even a few short hops in AI research could produce an autonomous machine that could unleash all manner of legal mischief. Financial and discriminatory algorithmic mischief already abounds.

Play along with me: imagine that a Terminator-calibre AI exists and that it commits a crime (let’s say murder). The task then is not to determine whether it in fact murdered someone, but the extent to which that act satisfies the principle of mens rea.

But what would we need to prove the existence of mens rea? Could we simply cross-examine the AI like we do a human defendant? Maybe, but we would need to go a bit deeper than that and examine the code that made the machine “tick”.

And what would “intent” look like in a machine mind? How would we go about proving an autonomous machine was justified in killing a human in self-defense or the extent of premeditation?

Let’s go even further. After all, we’re not only talking about violent crimes. Imagine a system that could randomly purchase things on the internet using your credit card – and it decided to buy contraband. This isn’t fiction; it has happened. Two London-based artists created a bot that purchased random items off the dark web. And what did it buy? Fake jeans, a baseball cap with a spy camera, a stash can, some Nikes, 200 cigarettes, a set of fire-brigade master keys, a counterfeit Louis Vuitton bag and ten ecstasy pills. Should these artists be liable for what the bot they created bought?

Maybe. But what if the bot “decided” to make the purchases itself?

Robo-jails?

Even if you solve these legal issues, you are still left with the question of punishment. What’s a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Unless, of course, it was programmed to “reflect” on its wrongdoing and find a way to rewrite its own code while safely ensconced at Her Majesty’s leisure. And what would building “remorse” into machines say about us as their builders?

Would robot wardens patrol robot jails?
Shutterstock

What we are really talking about when we talk about whether or not robots can commit crimes is “emergence” – where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.

AI has already helped with emergent concepts in medicine, and we are learning things about the universe with AI systems that even an army of Stephen Hawkings might not reveal.

The hope for AI is that in trying to capture this safe and beneficial emergent behaviour, we can find a parallel solution for ensuring it does not manifest itself in illegal, unethical, or downright dangerous ways.

At present, however, we are systematically incapable of guaranteeing human rights on a global scale, so I can’t help but wonder how ready we are for the prospect of robot crime given that we already struggle mightily to contain that done by humans.

Christopher Markou, PhD Candidate, Faculty of Law, University of Cambridge

This article was originally published on The Conversation. Read the original article.

Merging our brains with machines won’t stop the rise of the robots


Michael Milford, Queensland University of Technology

Tesla chief executive and OpenAI founder Elon Musk suggested last week that humanity might stave off irrelevance from the rise of the machines by merging with the machines and becoming cyborgs.

However, current trends in software-only artificial intelligence and deep learning technology raise serious doubts about the plausibility of this claim, especially in the long term. This doubt is not only due to hardware limitations; it is also to do with the role the human brain would play in the match-up.

Musk’s thesis is straightforward: that sufficiently advanced interfaces between brain and computer will enable humans to massively augment their capabilities by being better able to leverage technologies such as machine learning and deep learning.

But the exchange goes both ways. Brain-machine interfaces may help the performance of machine learning algorithms by having humans “fill in the gaps” for tasks that the algorithms are currently bad at, like making nuanced contextual decisions.

The idea in itself is not new. J. C. R. Licklider and others speculated on the possibility and implications of “man-computer symbiosis” in the mid-20th century.

However, progress has been slow. One reason is the difficulty of developing the hardware: “There is a reason they call it hardware – it is hard,” said Tony Fadell, creator of the iPod. And creating hardware that interfaces with organic systems is even harder.

Current technologies are primitive compared to the picture of brain-machine interfaces we’re sold in science fiction movies such as The Matrix.

Deep learning quirks

Assuming that the hardware challenge is eventually solved, there are bigger problems at hand. The past decade of incredible advances in deep learning research has revealed that there are some fundamental challenges to be overcome.

The first is simply that we still struggle to understand and characterise exactly how these complex neural network systems function.

We trust simple technology like a calculator because we know it will always do precisely what we want it to do. Errors are almost always a result of mistaken entry by the fallible human.

One vision of brain-machine augmentation would be to make us superhuman at arithmetic. So instead of pulling out a calculator or smartphone, we could think of the calculation and receive the answer instantaneously from the “assistive” machine.

Where things get tricky is if we were to try and plug into the more advanced functions offered by machine learning techniques such as deep learning.

Let’s say you work in a security role at an airport and have a brain-machine augmentation that automatically scans the thousands of faces you see each day and alerts you to possible security risks.

Most machine learning systems suffer from an infamous problem whereby a tiny change in the appearance of a person or object can cause the system to catastrophically misclassify what it thinks it is looking at. Change a picture of a person by less than 1%, and the machine system might suddenly think it is looking at a bicycle.

This image shows how you can fool AI image recognition by adding imperceptible noise to the image.
From Goodfellow et al, 2014
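The mechanism behind these “adversarial examples” can be sketched with a toy linear classifier standing in for a deep network. This is only an illustration of the fast-gradient-sign idea from Goodfellow et al.; the dimensions, labels and numbers below are all invented for the sketch:

```python
import numpy as np

# Toy sketch of the "fast gradient sign" idea. A real attack targets a deep
# network; here a linear scorer stands in for one, and every number is invented.

def score(weights, x):
    """Positive score = class A ('person'), negative = class B ('bicycle')."""
    return float(weights @ x)

def fgsm(weights, x, epsilon):
    """Nudge every input dimension by at most epsilon, each in the direction
    that most decreases the score. For a linear model, that direction is simply
    the sign of the corresponding weight."""
    return x - epsilon * np.sign(weights)

rng = np.random.default_rng(0)
n = 10_000                         # stand-in for an image's pixel count
weights = rng.normal(size=n)
x = rng.normal(size=n)
if score(weights, x) <= 0:         # make sure the clean input starts in class A
    x = -x

clean = score(weights, x)
adv = score(weights, fgsm(weights, x, epsilon=0.05))
print(clean, adv)                  # the small per-pixel nudge typically flips the sign
```

Because the perturbation is aligned with the gradient in every one of the 10,000 dimensions, a tiny per-pixel budget adds up to a large change in the overall score, which is why noise imperceptible to a human can flip a classifier’s decision.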

Terrorists or criminals might exploit the different vulnerabilities of a machine to bypass security checks, a problem that already exists in online security. Humans, although limited in their own way, might not be vulnerable to such exploits.

Despite their reputation as being unemotional, machine learning technologies also suffer from bias in the same way that humans do, and can even exhibit racist behaviour if fed appropriate data. This unpredictability has major implications for how a human might plug into – and more importantly, trust – a machine.

Google research scientist Ian Goodfellow shows how easy it is to fool a deep learning system.

Trust me, I’m a robot

Trust is also a two-way street. Human thought is a complex, highly dynamic activity. In this same security scenario, with a sufficiently advanced brain-machine interface, how will the machine know what human biases to ignore? After all, unconscious bias is a challenge everyone faces. What if the technology is helping you interview job candidates?

We can preview to some extent the issues of trust in a brain-machine interface by looking at how defence forces around the world are trying to address human-machine trust in an increasingly mixed human-autonomous systems battlefield.

Research into trusted autonomous systems deals with both humans trusting machines and machines trusting humans.

There is a parallel between a robot warrior making an ethical decision to ignore an unlawful order by a human and what must happen in a brain-machine interface: interpretation of the human’s thoughts by the machine, while filtering fleeting thoughts and deeper unconscious biases.

In defence scenarios, the logical role for a human brain is in checking that decisions are ethical. But how will this work when the human brain is plugged into a machine that can make inferences using data at a scale that no brain can comprehend?

In the long term, the issue is whether, and how, humans will need to be involved in processes that are increasingly determined by machines. Soon machines may make medical decisions no human team can possibly fathom. What role can and should the human brain play in this process?

In some cases, the combination of automation and human workers could increase jobs, but this effect is likely fleeting. Those same robots and automation systems will continue to improve, likely eventually removing the jobs they created locally.

Likewise, while humans may initially play a “useful” role in brain-machine systems, as the technology continues to improve there may be less reason to include humans in the loop at all.

The idea of maintaining humanity’s relevance by integrating human brains with artificial brains is appealing. What remains to be seen is what contribution the human brain will make, especially as technology development outpaces human brain development by a million to one.

Michael Milford, Associate professor, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Beyond Asimov: how to plan for ethical robots


As robots become integrated into society more widely, we need to be sure they’ll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
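The strict precedence among the Laws can be caricatured as a lexicographic comparison over candidate actions. This is only a toy sketch with an invented rescue scenario, and, as the stories discussed below show, all the real difficulty hides inside terms like “harm” and “order”:

```python
from dataclasses import dataclass

# Asimov's Three Laws rendered as a lexicographic preference over candidate
# actions. The Action fields and the scenario are invented for illustration.

@dataclass
class Action:
    name: str
    harms_human: bool = False       # Law 1: injures a human, or lets one come to harm
    disobeys_order: bool = False    # Law 2: conflicts with an order from a human
    self_destructive: bool = False  # Law 3: endangers the robot's own existence

def choose(actions):
    """Prefer actions by Law 1, then Law 2, then Law 3. False sorts before True,
    so any Law 2 violation is accepted to avoid a Law 1 violation, and so on."""
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.self_destructive))

options = [
    Action("stand by while a human is in danger", harms_human=True),  # harm by inaction
    Action("follow the 'stay put' order", harms_human=True),          # obeying also harms
    Action("break the order and shield the human",
           disobeys_order=True, self_destructive=True),
]
print(choose(options).name)  # → "break the order and shield the human"
```

Even in this toy form the ordering does real work: the robot sacrifices obedience and self-preservation the moment a human is at risk.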

Today, more than 70 years after Asimov’s first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov’s Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?

Asimov knew they weren’t perfect


Rowena Morrill/GFDL, CC BY-SA

Asimov’s “I, Robot” stories explore a number of unintended consequences and downright failures of the Three Laws. In these early stories, the Three Laws are treated as forces with varying strengths, which can have unintended equilibrium behaviors, as in the stories “Runaround” and “Catch that Rabbit,” requiring human ingenuity to resolve. In the story “Liar!,” a telepathic robot, motivated by the First Law, tells humans what they want to hear, failing to foresee the greater harm that will result when the truth comes out. The robopsychologist Susan Calvin forces it to confront this dilemma, destroying its positronic brain.

In “Escape!,” Susan Calvin depresses the strength of the First Law enough to allow a super-intelligent robot to design a faster-than-light interstellar transportation method, even though it causes the deaths (but only temporarily!) of human pilots. In “The Evitable Conflict,” the machines that control the world’s economy interpret the First Law as protecting all humanity, not just individual human beings. This foreshadows Asimov’s later introduction of the “Zeroth Law” that can supersede the original three, potentially allowing a robot to harm a human being for humanity’s greater good.

0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.

Asimov’s laws are in a particular order, for good reason.
Randall Munroe/xkcd, CC BY-NC

Robots without ethics

It is reasonable to fear that, without ethical constraints, robots (or other artificial intelligences) could do great harm, perhaps to the entire human race, even by simply following their human-given instructions.

The 1991 movie “Terminator 2: Judgment Day” begins with a well-known science fiction scenario: an AI system called Skynet starts a nuclear war and almost destroys the human race. Deploying Skynet was a rational decision (it had a “perfect operational record”). Skynet “begins to learn at a geometric rate,” scaring its creators, who try to shut it down. Skynet fights back (as a critical defense system, it was undoubtedly programmed to defend itself). Skynet finds an unexpected solution to its problem (through creative problem solving, unconstrained by common sense or morality).

Catastrophe results from giving too much power to artificial intelligence.

Less apocalyptic real-world examples of out-of-control AI have actually taken place. High-speed automated trading systems have responded to unusual conditions in the stock market, creating a positive feedback cycle resulting in a “flash crash.” Fortunately, only billions of dollars were lost, rather than billions of lives, but the computer systems involved have little or no understanding of the difference.
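The positive feedback cycle can be sketched in a few lines: automated sell rules respond to a falling price by selling, each sale pushes the price down further, and that triggers the next rule. Every threshold and magnitude below is invented for illustration:

```python
def flash_crash(start=100.0, shock=0.04,
                thresholds=(0.03, 0.05, 0.08, 0.12, 0.17),
                sell_impact=0.04):
    """Apply an initial price shock, then let each automated trader sell (once)
    whenever the fall from `start` exceeds its trigger threshold. Each sale
    deepens the fall, which can trip the next trader: a positive feedback loop."""
    price = start * (1 - shock)
    sold = set()
    while True:
        fall = (start - price) / start
        triggered = [t for t in thresholds if t not in sold and fall > t]
        if not triggered:
            return price, fall
        for t in triggered:
            sold.add(t)
            price *= 1 - sell_impact  # each sale knocks the price down further

final_price, total_fall = flash_crash()
print(final_price, total_fall)  # a 4% shock cascades into a fall of more than 20%
```

None of the rules here is individually unreasonable; the crash emerges from their interaction, which is exactly the kind of behaviour no single system designer anticipated.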

Toward defining robot ethics

While no simple fixed set of mechanical rules will ensure ethical behavior, we can make some observations about properties that a moral and ethical system should have in order to allow autonomous agents (people, robots or whatever) to live well together. Many of these elements are already expected of human beings.

These properties are inspired by a number of sources, including the Engineering and Physical Sciences Research Council (EPSRC) Principles of Robotics and recent work on the cognitive science of morality and ethics focused on neuroscience, social psychology, developmental psychology and philosophy.

The EPSRC takes the position that robots are simply tools, for which humans must take responsibility. At the extreme other end of the spectrum is the concern that super-intelligent, super-powerful robots could suddenly emerge and control the destiny of the human race, for better or for worse. The following list defines a middle ground, describing how future intelligent robots should learn, like children do, how to behave according to the standards of our society.

  • If robots (and other AIs) increasingly participate in our society, then they will need to follow moral and ethical rules much as people
    do. Some rules are embodied in laws against killing, stealing, lying and driving on the wrong side of the street. Others are less formal but nonetheless important, like being helpful and cooperative when the opportunity arises.
  • Some situations require a quick moral judgment and response – for example, a child running into traffic or the opportunity to pocket a dropped wallet. Simple rules can provide automatic real-time response, when there is no time for deliberation and a cost-benefit analysis. (Someday, robots may reach human-level intelligence while operating far faster than human thought, allowing careful deliberation in milliseconds, but that day has not yet arrived, and it may be far in the future.)
  • A quick response may not always be the right one, which may be recognized after feedback from others or careful personal reflection. Therefore, the agent must be able to learn from experience including feedback and deliberation, resulting in new and improved rules.
  • To benefit from feedback from others in society, the robot must be able to explain and justify its decisions about ethical actions, and to understand explanations and critiques from others.
  • Given that an artificial intelligence learns from its mistakes, we must be very cautious about how much power we give it. We humans must ensure that it has experienced a sufficient range of situations and has satisfied us with its responses, earning our trust. The critical mistake humans made with Skynet in “Terminator 2” was handing over control of the nuclear arsenal.
  • Trust, and trustworthiness, must be earned by the robot. Trust is earned slowly, through extensive experience, but can be lost quickly, through a single bad decision.
  • As with a human, any time a robot acts, the selection of that action in that situation sends a signal to the rest of society about how that agent makes decisions, and therefore how trustworthy it is.
  • A robot mind is software, which can be backed up, restored if the original is damaged or destroyed, or duplicated in another body. If robots of a certain kind are exact duplicates of each other, then trust may not need to be earned individually. Trust earned (or lost) by one robot could be shared by other robots of the same kind.
  • Behaving morally and well toward others is not the same as taking moral responsibility. Only competent adult humans can take full responsibility for their actions, but we expect children, animals, corporations, and robots to behave well to the best of their abilities.

Human morality and ethics are learned by children over years, but the nature of morality and ethics itself varies with the society and evolves over decades and centuries. No simple fixed set of moral rules, whether Asimov’s Three Laws or the Ten Commandments, can be adequate guidance for humans or robots in our complex society and world. Through observations like the ones above, we are beginning to understand the complex feedback-driven learning process that leads to morality.


Benjamin Kuipers, Professor of Computer Science and Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.

White House launches public workshops on AI issues


The White House today announced a series of public workshops on artificial intelligence (AI) and the creation of an interagency working group to learn more about the benefits and risks of artificial intelligence. The first workshop, “Artificial Intelligence: Law and Policy”, will take place on May 24 at the University of Washington School of Law, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology.


The final workshop will be held on July 7th at the Skirball Center for the Performing Arts, New York. The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term will address the near-term impacts of AI technologies across social and economic systems. The event is hosted by the White House and New York University’s Information Law Institute, with support from Google Open Research and Microsoft Research.

The focus will be the challenges of the next 5-10 years, specifically addressing five themes: social inequality, labor, financial markets, healthcare, and ethics. Leaders from industry, academia, and civil society will share ideas for technical design, research and policy directions.

You can learn more about these events via the event websites, and each workshop will be livestreamed.

According to Ed Felten, Deputy U.S. Chief Technology Officer: “There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers ‘smarter’ at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field.

Today’s AI is confined to narrow, specific tasks, and isn’t anything like the general, adaptable intelligence that humans exhibit. Despite this, AI’s influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition. In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.

In education, AI has the potential to help teachers customize instruction for each student’s needs. And, of course, AI plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.

Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions. For example, AI will create new jobs while phasing out some old ones—magnifying the importance of programs like TechHire that are preparing our workforce with the skills to get ahead in today’s economy, and tomorrow’s. AI systems can also behave in surprising ways, and we’re increasingly relying on AI to advise decisions and operate physical and virtual machinery—adding to the challenge of predicting and controlling how complex technologies will behave.

There are tremendous opportunities and an array of considerations across the Federal Government in privacy, security, regulation, law, and research and development to be taken into account when effectively integrating this technology into both government and private-sector activities.

That is why the White House Office of Science and Technology Policy is excited to announce that we will be co-hosting four public workshops over the coming months on topics in AI to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology. These four workshops will be co-hosted by academic and non-profit organizations, and two of them will also be co-hosted by the National Economic Council. These workshops will feed into the development of a public report later this year. We invite anyone interested to learn more about this emergent field of technology and give input about future directions and areas of challenge and opportunity.

The Federal Government also is working to leverage AI for public good and toward a more effective government. A new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence will meet for the first time next week. This group will monitor state-of-the-art advances and technology milestones in artificial intelligence and machine learning within the Federal Government, in the private sector, and internationally; and help coordinate Federal activity in this space.

Broadly, between now and the end of the Administration, the NSTC group will work to increase the use of AI and machine learning to improve the delivery of government services. Such efforts may include empowering Federal departments and agencies to run pilot projects evaluating new AI-driven approaches and government investment in research on how to use AI to make government services more effective. Applications in AI to areas of government that are not traditionally technology-focused are especially significant; there is tremendous potential in AI-driven improvements to programs and delivery of services that help make everyday life better for Americans in areas related to urban systems and smart cities, mental and physical health, social welfare, criminal justice, the environment, and much more.

We look forward to engaging with the public about how best to harness the opportunities brought by artificial intelligence. Stay tuned for more information about the work we’re doing on this subject as it develops over the coming months.”

Ed Felten is a Deputy U.S. Chief Technology Officer.

Looking for art in artificial intelligence


Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Alan Turing.
Stephen Kettle sculpture; photo by Jon Callas, CC BY

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

Turing’s simple, but powerful, thought experiment gives a very general framework for testing many different aspects of the human-machine boundary, of which conversation is but a single example.

On May 18 at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.

Conducting the tests

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).
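The competition’s actual software interface isn’t described here, so the following Python sketch is purely illustrative: the track metadata, the `build_set` function, and the greedy tempo-matching rule are all invented assumptions about how a single seed track might grow into a 15-minute set.

```python
# Hypothetical sketch of a seed-driven set builder. The library and its
# annotations (genre, bpm, duration) are invented stand-ins for the
# competition's database.
TRACK_LIBRARY = [
    {"title": "A", "genre": "house",  "bpm": 124, "minutes": 4.0},
    {"title": "B", "genre": "house",  "bpm": 126, "minutes": 3.5},
    {"title": "C", "genre": "techno", "bpm": 140, "minutes": 5.0},
    {"title": "D", "genre": "house",  "bpm": 122, "minutes": 4.5},
    {"title": "E", "genre": "dnb",    "bpm": 174, "minutes": 4.0},
]

def build_set(seed_title, library, target_minutes=15):
    """Greedily chain tracks whose tempo is closest to the previous track."""
    seed = next(t for t in library if t["title"] == seed_title)
    dance_set = [seed]
    remaining = [t for t in library if t is not seed]
    total = seed["minutes"]
    while remaining and total < target_minutes:
        prev_bpm = dance_set[-1]["bpm"]
        # Smallest tempo jump keeps the mix smooth.
        nxt = min(remaining, key=lambda t: abs(t["bpm"] - prev_bpm))
        dance_set.append(nxt)
        remaining.remove(nxt)
        total += nxt["minutes"]
    return dance_set

print([t["title"] for t in build_set("A", TRACK_LIBRARY)])
```

Matching on tempo alone is the most naive heuristic imaginable; a competitive entry would presumably also exploit the library’s beat locations, chroma and brightness annotations to plan transitions.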

Can a computer write a better sonnet than this man?
Martin Droeshout (1623)

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.
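As a toy illustration of that requirement (a common-noun seed plus an unbounded number of distinct outputs), here is a hedged Python sketch; the templates, word lists, and function name are all invented, and a real entry would of course be far more sophisticated.

```python
import random

# Toy illustration of the "one prompt, unbounded outputs" requirement;
# the templates and word lists are invented, not from the contest.
TEMPLATES = [
    "Shall I compare my {noun} to a summer's day?",
    "My {noun} doth {verb} where gentle shadows play.",
    "O {noun}, thou art more {adj} than the dawn.",
]
VERBS = ["wander", "linger", "tremble"]
ADJS = ["lovely", "temperate", "radiant"]

def generate_poem(noun_phrase, seed=None):
    rng = random.Random(seed)  # different seeds -> different poems
    lines = []
    for template in rng.sample(TEMPLATES, k=3):
        lines.append(template.format(
            noun=noun_phrase,
            verb=rng.choice(VERBS),
            adj=rng.choice(ADJS),
        ))
    return "\n".join(lines)

print(generate_poem("cheese grater", seed=1))
print(generate_poem("cheese grater", seed=2))  # same prompt, new poem
```

Varying the random seed over an unbounded range is one simple way to satisfy the “infinite number of different works from a single given prompt” condition.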

To perform the test, we will screen the computer-made entries to eliminate obvious machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.
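The organizers don’t spell out the statistics behind “statistically indistinguishable,” so the sketch below assumes one plausible reading: a one-sided binomial test on how many judges correctly identify an entry’s origin.

```python
from math import comb

# Illustrative only: assumes a one-sided binomial test on the judges'
# correct calls, with 50% accuracy expected under pure guessing.
def binomial_p_value(correct, judges, chance=0.5):
    """P(at least `correct` right answers out of `judges` by guessing)."""
    return sum(comb(judges, k) * chance**k * (1 - chance)**(judges - k)
               for k in range(correct, judges + 1))

def indistinguishable(correct, judges, alpha=0.05):
    # The entry "passes" if the judges' accuracy could plausibly be luck.
    return binomial_p_value(correct, judges) > alpha

# 12 of 20 judges correct is consistent with guessing; 18 of 20 is not.
print(indistinguishable(12, 20))  # True
print(indistinguishable(18, 20))  # False
```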

The competitions are open to any and all comers. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.

Judging the differences

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man.) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.

Who is the artist?

Who is the creator – human or machine? Or both?
Hands image via shutterstock.com

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

We’re looking forward to seeing what our programming artists submit. Regardless of their performance on “the test,” their body of work will continue to expand the horizon of creativity and machine-human coevolution.

The Conversation

Michael Casey, James Wright Professor of Music, Professor of Computer Science, Dartmouth College and Daniel N. Rockmore, Professor, Department of Mathematics, Computational Science, and Computer Science, Dartmouth College

This article was originally published on The Conversation. Read the original article.

Self-driving Cars Are Coming To A Highway Near You

As much of a fad as Google's inventions tend to be, the self-driving car looks like it could actually be here to stay. But is it safe?

Chris Urmson, director of Google’s self-driving car division, will soon be lobbying senators for federal help in getting driverless cars to the public market. He is expected to pitch to the Senate Commerce Committee that the technology will improve safety and cut costs for roads, trains and buses, on the notion that robot cars will save us from ourselves. But will they?

The president seems to think so, as he offered $4 billion of taxpayer money to help fund the project, and the U.S. Transportation Department tends to agree, saying that automated vehicles would be able to drive closer together, allowing more cars on the road and higher speeds without the risk of human error. Plus, congestion could decrease, since each car could query a server by GPS to look for open parking spaces.

However, a few skeptics wonder whether this new invention would make congestion better or worse. The novelty of owning a self-driving car may deter people from using public transportation, putting more private cars on the road.


Showtime’s ‘Dark Net’ Uncovers Deep Web Internet Culture

Through the internet, the impact of technology on our lives is both unprecedented and undeniable.

From cyber relationships, S&M culture and child abuse to biohacking, content moderation and nootropics, Dark Net finally puts into moving pictures what blogs have been typing up a storm about for the past few years.

At first glance the show seems like your run-of-the-mill cyber culture documentary, but the topics being explored are of a much more taboo persuasion — and it’s not just the underground pedophile networks accessed via Tor we’re talking about.

While Dark Net covers a lot of ground in technology subculture, it also serves as a bit of a transhumanist playground, discussing cutting edge and controversial topics such as RFID chip implants and other biohacks, nootropics, artificial intelligence girlfriends, and more. The main topic, however, seems to be the nature of human relationships being altered, augmented, and even hindered by technology, and it’s not difficult to understand why.

Exploring subcultures and trends such as sadomasochism, porn addiction, and even internet addiction, Dark Net attempts to bring to light some otherwise undisclosed topics that most people refuse to talk about openly.

Dark Net is on Showtime, Thursday nights.

Max Klaassen
Public enema xenomorphic robot from the dimension Zrgauddon.

Understanding Cognitive Bias Helps Decision Making


in·tu·i·tion
ˌint(y)o͞oˈiSH(ə)n/
noun
noun: intuition
  1. the ability to understand something immediately, without the need for conscious reasoning.

People tend to trust their own intuition. Has there been much formal study about the veracity of intuition?

Brain science itself is a young field, and its terminology has yet to mature into a solid academic lexicon. To further increase your chances of being confused, modern life is rife with distractions, misinformation, and addictive escapisms, leaving the vast majority of society with no real idea what the hell is happening.

To illustrate my point, I’m going to do something kind of recursive. I am going to document my mind being changed about a deeply held belief as I explore my own cognitive bias. I am not here to tell you what’s REALLY going on or change your mind about your deeply held beliefs. This is just about methods of problem solving and how cognitive bias can become a positive aspect of critical thought.

Image: “Soft Bike” sculpture by Mashanda Lazarus
http://www.ilovemashanda.com/

I’m advocating what I think is the best set of decision making skills, Critical Thought. The National Council for Excellence in Critical Thinking defines critical thinking as the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. (I’m torn between the terms Critical Thinking and Critical Thought, although my complaint is purely aesthetic.)

Ever since taking an introduction to logic course at Fitchburg State College, I have been convinced that logic is a much more reliable, proven way to make decisions. Putting logic into practice when making decisions is difficult, though. Just as a math problem can be done incorrectly, logic can be misapplied, and some logical conclusions are even counterintuitive. My favorite example of intuition failing against logic is always chess. Even as I write this I can’t convince myself otherwise: I have regretted every intuitive chess move. It’s statistically improbable that all my intuitive moves have been bad moves, yet logic works so much better in the game that my mind has overcompensated in its favor. In the microcosm of chess rules, logic really is the better decision-making tool. Often the kernel of a good move jumps out at me as intuition, but it must still be thoroughly vetted with logic before I can confidently say it’s a good move.

In high school, I was an underachiever. I could pass computer science and physics classes without cracking a book, but my attempts to coast through math classes left me struggling because I could not intuitively grasp the increasingly abstract concepts. The part of my mind that controls logic was healthy and functioning, but my distrust of my own intuition was a handicap. I would take make-up mathematics courses in the summer while collecting debate team trophies during the school year.


Photograph of Marcel Duchamp and Eve Babitz posing for the photographer Julian Wasser during the Duchamp retrospective at the Pasadena Museum of Art, 1963 © 2000 Succession Marcel Duchamp, ARS, N.Y./ADAGP, Paris.

I’m not just reminiscing; everyone’s decision-making process is a constantly updating algorithm of intuitive and logical reasoning. No one’s process is exactly the same, but we all want to make the best decisions possible. For me it’s easy to rely on logic and ignore even a nagging sense of intuition. Some people trust intuition strongly yet struggle to find the most logical decision; everyone is most comfortable with a particular balance of intuition and logic. People argue on behalf of their decisions and the methodology behind them because a different method is useful in each paradigm.

In chess, intuition is necessary but should be used sparingly and tempered with logic. It’s my favorite example because the game can be played without any intuition. Non-AI computers are able to beat the average human at chess. Some AI can beat chess masters. So, I’m biased towards logic. Chess is just a game, though. People are always telling me I should have more faith in intuitive thinking.

“But,” you should be asking, “Isn’t there an example of reliance on intuition as the best way to decide how to proceed?”

At least that’s what I have to ask myself. The best example I found of valuable intuition is the ability to ride a bike. It is almost impossible to learn to ride a bike in one session; it takes several tries over a week or longer to create the neural pathways needed to operate this biomechanical device. Samurai trained to feel that their weapon was part of themselves, an extension of their very arm. The mechanical motion of the human body as it drives a bicycle becomes ingrained, literally, in the physical brain. The casual, ubiquitous expression “It’s like riding a bike” is used idiomatically to describe anything that can be easily mastered at an intermediate level, forgotten for years, but recalled at near-perfect fidelity when encountered once again.

The Backwards Brain Bicycle – Smarter Every Day episode 133

Destin at Smarter Every Day put together a video that shows the duality of intuitive thinking. It is entirely possible to train the human mind with complicated decision-making algorithms that embrace diversification and even contradictory modes of thinking.


After watching this video, I embraced a moment of doubt and realized that there are very positive and useful aspects to intuition that I often don’t acknowledge. In this case of reversed bicycle steering, a skill that seems to only work after it has been made intuitive can be “lost” and only regained with a somewhat cumbersome level of concentration.

The video demonstrates the undeniable usefulness of what essentially amounts to anecdotal proof that neural pathways can be hacked, that contradictory new skills can be learned. It also shows that a paradigm of behavior can gain a tenacious hold on the mind via intuitive skill. It casts doubt on intuition in one respect but without at least some reliance on this intuitive paradigm of behavior it seems we wouldn’t be able to ride a bike at all.

This video forced me to acknowledge the usefulness of ingrained, intuitive behaviors while also reminding me how strong a hold intuition can have over the mind. Paradigms can be temporarily or perhaps permanently lost. In the video, Destin has trouble switching back and forth between the two seemingly all-engrossing thought systems, but the transition itself can become part of a more complicated thought algorithm, allowing the mind to master and embrace contradictory paradigms by trusting the integrity of the overall algorithm.

Including Confirmation Bias in a greater algorithm.

These paradigms can be turned on and off, just as a worker might get used to driving an automatic-transmission car to work, operating a stick-shift truck at the job site, and driving home in the automatic again after the shift.

This ability to turn intuitive paradigms on and off as a controlled feature of a greater logical algorithm requires the mind to acknowledge confirmation bias. I get a feeling of smug satisfaction anytime I see evidence supporting my belief that logic comprises the greater framework of a decision-making process. There are just as many people who would view intuition as the framework of a complex decision-making process, with logical thought as merely a contributing part of a superior whole. If my personal bias toward logic over intuition is erroneous in some situations, can I trust the mode of thinking I am in? Using myself as an example, the relief I feel when data confirms what I have already accepted as true is powerful.

That feeling of relief must always be noted and kept in check before it overshadows the ability to acknowledge data that opposes the belief. Understanding confirmation bias is the key to adding that next level to the algorithm. In the video example from Smarter Every Day, steering a normal bike is so ingrained in the neural pathways that when the backwards steering fails to confirm expectations, the mind fills in the blank and sends an incorrect set of mechanical instructions to the body. Understanding the dynamics of confirmation bias would let the mind embrace a greater thought system that could move back and forth between those conflicting behavioral paradigms. I’m positing that it should be possible to master both a regular bike and the “backwards bike” and switch between them in quick succession; the neural pathways for both behavior paradigms can be trained and made stronger than the video shows.

I believe that with practice, someone could alternate steering mechanisms quickly and without the awkwardness we see in the video, just as my initial confirmation bias, now identified, doesn’t have to dictate my decisions, leaving me more open to an intuitive interpretation when it leads to the best decision in certain situations.

An inability to acknowledge that one’s own mind might be susceptible to confirmation bias paradoxically makes one more susceptible. Critical thinking is a method of building immunity to this common trap of confidence, and identifying the experience of one’s own confirmation bias is a great way to understand and control this intuitive tendency. Whatever your thoughts on logic and intuition, examining and accounting for your confirmation biases should lead to better decision-making skills.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY