Category Archives: Artificial Intelligence

New and ongoing developments in computers that can think for themselves.

To stop the machines taking over we need to think about fuzzy logic


Amid all the dire warnings that machines run by artificial intelligence (AI) will one day take over from humans, we need to think more about how we program them in the first place.

The technology may be too far off to seriously entertain these worries – for now – but much of the distrust surrounding AI arises from misunderstandings of what it means to say a machine is “thinking”.

One of the current aims of AI research is to design machines, algorithms, input/output processes or mathematical functions that can mimic human thinking as much as possible.

We want to better understand what goes on in human thinking, especially when it comes to decisions that cannot be justified other than by drawing on our “intuition” and “gut-feelings” – the decisions we can only make after learning from experience.

Consider the human manager who hires you after first comparing you with other job applicants on your work history, skills and presentation. This human-manager is able to make a decision and identify the successful candidate.

If we can design a computer program that takes exactly the same inputs as the human-manager and reproduces the same outputs, then we can make inferences about what the human-manager really values, even if they cannot articulate their decision on whom to appoint other than to say “it comes down to experience”.

This kind of research is being carried out today and applied to understand risk-aversion and risk-seeking behaviour of financial consultants. It’s also being looked at in the field of medical diagnosis.

These human-emulating systems are not yet being asked to make decisions, but they are certainly being used to help guide human decisions and reduce the level of human error and inconsistency.

Fuzzy sets and AI

One promising area of research is to utilise the framework of fuzzy sets. Fuzzy sets and fuzzy logic were formalised by Lotfi Zadeh in 1965 and can be used to mathematically represent our knowledge pertaining to a given subject.

In everyday language, what we mean when accusing someone of “fuzzy logic” or “fuzzy thinking” is that their ideas are contradictory, biased or perhaps just not very well thought out.

But in mathematics and logic, “fuzzy” is a name for a research area that has quite a sound and straightforward basis.

The starting point for fuzzy sets is this: many decision processes that can be managed by computers traditionally involve binary truth values. Something is either true or false, and any action is based on that answer (in computing, this is typically encoded as 0 or 1).

For example, our human-manager from the earlier example may say to human resources:

  • IF the job applicant is aged 25 to 30
  • AND has a qualification in philosophy OR literature
  • THEN arrange an interview.

This information can all be written into a hiring algorithm based on true-or-false answers, because an applicant either is aged between 25 and 30 or is not, and either does have the qualification or does not.
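To make the contrast concrete, here is a minimal sketch (in Python) of how such a crisp, true-or-false rule might be encoded. The function name and the way applicants are represented are illustrative assumptions, not anything prescribed by the hiring example itself.

    # A minimal sketch of the crisp (binary) hiring rule above.
    # The function name and applicant fields are illustrative only.
    def should_interview(age: int, qualifications: set) -> bool:
        """Return True if the applicant meets the manager's crisp criteria."""
        right_age = 25 <= age <= 30
        right_subject = bool({"philosophy", "literature"} & qualifications)
        return right_age and right_subject

    print(should_interview(27, {"philosophy"}))   # True
    print(should_interview(32, {"literature"}))   # False: fails the age test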

But what if the human-manager is somewhat more vague in expressing their requirements? Instead, the human-manager says:

  • IF the applicant is tall
  • AND attractive
  • THEN the salary offered should be higher.

The problem HR faces in encoding these requests into the hiring algorithm is that they involve a number of subjective concepts. Even though height is something we can objectively measure, how tall should someone be before we call them tall?

Attractiveness is also subjective, even if we only account for the taste of the single human-manager.

Grey areas and fuzzy sets

In fuzzy sets research we say that such characteristics are fuzzy. By this we mean that the degree to which something belongs to a set, or the degree to which a statement is true, can increase gradually from 0 to 1 over a given range of values.

One of the hardest things in any fuzzy-based software application is how best to convert observed inputs (such as someone’s height) into a fuzzy degree of membership, and then to establish the rules governing connectives such as AND and OR for that fuzzy set.

To this day, and likely for years or decades into the future, the rules for this conversion are human-defined. For example, to specify how tall someone is, I could design a function that says a 190cm person is tall (with a truth value of 1) and a 140cm person is not tall (or tall with a truth value of 0).

Then from 140cm, for every increase of 5cm in height the truth value increases by 0.1. So a key feature of any AI system is that we, normal old humans, still govern all the rules concerning how values or words are defined. More importantly, we define all the actions that the AI system can take – the “THEN” statements.
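As a rough illustration only, the “tall” function just described, together with Lotfi Zadeh’s classic choices of minimum for AND and maximum for OR, might be written in Python as follows. The “attractive” score is a hypothetical stand-in for the manager’s subjective rating, not something defined in the example above.

    # Sketch of the 'tall' membership function described above:
    # truth value 0 at 140cm, 1 at 190cm, rising by 0.1 for every 5cm in between.
    def tall(height_cm: float) -> float:
        return min(1.0, max(0.0, (height_cm - 140.0) / 50.0))

    # Zadeh's classic fuzzy connectives: AND as minimum, OR as maximum.
    def fuzzy_and(a: float, b: float) -> float:
        return min(a, b)

    def fuzzy_or(a: float, b: float) -> float:
        return max(a, b)

    attractive = 0.6  # hypothetical subjective rating on a 0-1 scale
    print(tall(175))                         # 0.7
    print(fuzzy_and(tall(175), attractive))  # 0.6: degree to which the salary rule applies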

Human–robot symbiosis

An area of research called “computing with words” takes the idea further by aiming for seamless communication between a human user and an AI computer algorithm.

For the moment, we still need to come up with mathematical representations of subjective terms such as “tall”, “attractive”, “good” and “fast”. Then we need to design a function for combining such comments or commands, followed by another mathematical definition for turning the result we get back into an output like “yes he is tall”.
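As a hedged sketch of that last step, the numeric degree could be turned back into an everyday phrase with a simple mapping like the one below; the thresholds and wording are illustrative assumptions only.

    # Illustrative mapping from a fuzzy degree of 'tallness' back into words.
    # The cut-off values are arbitrary choices made for the sake of the example.
    def describe_tallness(degree: float) -> str:
        if degree >= 0.8:
            return "yes, he is tall"
        if degree >= 0.4:
            return "he is fairly tall"
        return "no, he is not tall"

    print(describe_tallness(0.9))  # e.g. the degree for a 185cm person in the earlier sketch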

In conceiving the idea of computing with words, researchers envisage a time where we might have more access to base-level expressions of these terms, such as the brain activity and readings when we use the term “tall”.

This would be an amazing leap, although mainly in terms of the technology required to observe such phenomena (the number of neurons in the brain, let alone synapses between them, is somewhere near the number of galaxies in the universe).

Even so, designing machines and algorithms that can emulate human behaviour to the point of mimicking communication with us is still a long way off.

In the end, any system we design will behave as it is expected to, according to the rules we have designed and the program that governs it.

An irrational fear?

This brings us back to the big fear of AI machines turning on us in the future.

The real danger is not in the birth of genuine artificial intelligence – that we will somehow manage to create a program that can become self-aware, such as HAL 9000 in the movie 2001: A Space Odyssey or Skynet in the Terminator series.

The real danger is that we make errors in encoding our algorithms or that we put machines in situations without properly considering how they will interact with their environment.

These risks, however, are the same that come with any human-made system or object.

So if we were to entrust, say, the decision to fire a weapon to AI algorithms (rather than just the guidance system), then we might have something to fear.

Not a fear that these intelligent weapons will one day turn on us, but rather that we programmed them – given a series of subjective options – to decide the wrong thing and turn on us.

Even if there is some uncertainty about the future of “thinking” machines and what role they will have in our society, a sure thing is that we will be making the final decisions about what they are capable of.

When programming artificial intelligence, the onus is on us (as it is when we design skyscrapers, build machinery, develop pharmaceutical drugs or draft civil laws) to make sure it will do what we really want it to.


This article was originally published on The Conversation.
Read the original article.

Are the Dark Ages Really in the Past? How an Internet Pioneer Thinks They Could Happen Again


The phrase ‘Dark Ages’ conjures up a medieval nightmare of violent battles, unruly mobs and heretics: a bleak past that wasn’t necessarily as ominous, or as firmly in the past, as movies like to portray it. After all, plenty of barbaric and superstitious people are still alive today. The term was first used to describe the millennium between 500 and 1500 AD, a period from which few historical records remain, or at least few reliable ones. Consequently, we are often given a picture of the Middle Ages that is romanticized, for better or worse, rather than accurate.

With the advent of computers and centuries of documents preserved on microfiche, it seems unlikely that such a loss of records could ever happen again, with an ocean of articles and pictures available at our fingertips. Unfortunately, Vint Cerf, one of the few people who can proudly rank himself among the founding fathers of the Internet, is not so optimistic. He proposes a hypothetical nightmare scenario that programmers have talked about for many years: a Digital Dark Age, in which a wealth of written history may be lost, and with it so many of the things we depend on computers for.

The twenty-first century alone has already presented several ways in which our data is in great peril, according to Cerf, and there is the possibility that it may be a century that hardly crosses anyone’s mind in the year 3000. In the near future, companies such as SpaceX hope to bring internet connections to higher speeds by moving wireless links off the planet and into space by means of micro-satellites, where even signals travelling at the speed of light face unavoidable delays over such distances. The International Space Station has already shown that high-speed internet in space is attainable, though not without obstacles.

Cerf, one of Google’s vice presidents and a co-designer of the TCP/IP protocols that move data packets across the internet, made waves at a recent meeting of the American Association for the Advancement of Science. There he warned that, because file formats evolve so rapidly, even if our data survives – stored safely in iCloud or elsewhere – we may accumulate a wealth of documents and files we can no longer open, because their formats will have become obsolete. Software continually updates, and new versions cannot always read older files, a problem already exemplified by the differences between WordPerfect and Microsoft Word; nor is the problem confined to applications, since the machines themselves change too. Consider the death of the floppy disk, or the disappearance of CD and DVD drives from computers as movies move to downloads, another medium currently facing a format war. The issue is backwards compatibility, something software upgrades do not always guarantee. If 2015 already looks very different from 2005, when streaming video was only beginning to catch on, the technological gap from century to century could be vast.

Fortunately Cerf, who openly expressed a healthy skepticism about whether Google would still be around in the fourth millennium, is not all doom and gloom in his forecast. He outlined the basics of a plan, already demonstrated at Carnegie Mellon University, known as ‘digital vellum’. Although the concept has not yet been commercialised, Cerf has left open the possibility that companies in the future may, for a fee, offer to take X-ray snapshots of all your digital files along with the software and operating system they run on, preserved inside a virtual machine – so that, say, a Word file created in Microsoft Office 97 could be opened in an emulation of its original environment.

While the algorithms have been established and discussed, the ethical concerns have yet to be addressed – a particularly prominent issue today, as politicians continue to propose net neutrality rules and other regulations. If the internet of the future is heavily governed by laws descended from legislation like SOPA or PIPA, this too may complicate how we archive and access our data in the years to come. There is also the problem of intellectual property, which may prevent users from copying software and operating systems that are still actively in use. And there is the question of who can access the virtual machine copies, since the technology may have to be user-specific to prevent the leakage of personal data.

James Sullivan
James Sullivan is the assistant editor of Brain World Magazine and a contributor to Truth Is Cool and OMNI Reboot. He can usually be found on TVTropes or RationalWiki when not exploiting life and science stories for another blog article.

Artificial Intelligence should benefit society, not create threats


By Toby Walsh, NICTA

Some of the biggest players in Artificial Intelligence (AI) have joined together to call for research to focus on the benefits we can reap from AI “while avoiding potential pitfalls”. Research into AI continues to seek out new ways to develop technologies that can take on tasks currently performed by humans, but it is not without criticisms and concerns.

I am not sure the famous British theoretical physicist Stephen Hawking does irony, but it was somewhat ironic that he recently welcomed the arrival of the smarter predictive computer software that controls his speech by warning us that:

The development of full artificial intelligence could spell the end of the human race.

Of course, Hawking is not alone in this view. The serial entrepreneur and technologist Elon Musk also warned last year that:

[…] we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.

Both address an issue that taps into deep psychological fears that have haunted mankind for centuries. What happens if our creations eventually cause our own downfall? This fear is expressed in stories like Mary Shelley’s Frankenstein.

An open letter for AI

In response to such concerns, an open letter has just been signed by top AI researchers in industry and academia (as well as by Hawking and Musk).

Signatures include those of the president of the Association for the Advancement of Artificial Intelligence, the founders of AI startups DeepMind and Vicarious, and well-known researchers at Google, Microsoft, Stanford and elsewhere.

In the interests of full disclosure, mine is also one of the early signatures on the list, which continues to attract more support by the day.

The open letter argues that there is now a broad consensus that AI research is progressing steadily and its impact on society is likely to increase.

For this reason, the letter concludes we need to start to research how to ensure that increasingly capable AI systems are robust (in their behaviours) and beneficial (to humans). For example, we need to work out how to build AI systems that result in greater prosperity within society, even for those put out of work.

The letter includes a link to a document outlining some interdisciplinary research priorities that should be tackled in advance of developing artificial intelligence. These include short-term priorities such as optimising the economic benefits and long-term priorities such as being able to verify the formal properties of AI systems.

The AI threat to society

Hollywood has provided many memorable visions of the threat AI might pose to society, from Arthur C. Clarke’s 2001: A Space Odyssey through Robocop and Terminator to recent movies such as Her and Transcendence, all of which paint a dystopian view of a future transformed by AI.

My opinion (and one many of my colleagues share) is that AI that might threaten our society’s future is likely still some way off.

AI researchers have been predicting for the last 30 or 40 years that it will take another 30 or 40 years. And if you ask most of them today, they (as I) will still say it is likely to take another 30 or 40 years.

Making computers behave intelligently is a tough scientific nut to crack. The human brain is the most complex system we know of by orders of magnitude. Replicating the sort of intelligence that humans display will likely require significant advances in AI.

The human brain does all its magic with just 20 watts of power. This is a remarkable piece of engineering.

Other risks to society

There are also more imminent dangers facing mankind such as climate change or the ongoing global financial crisis. These need immediate attention.

The Future of Humanity Institute at the University of Oxford has a long list of risks besides AI that threaten our society, including:

  • nanotechnology
  • biotechnology
  • resource depletion
  • overpopulation.

This doesn’t mean that there are not aspects of AI that need attention in the near future.

The AI debate for the future

The Campaign to Stop Killer Robots is advancing the debate on whether we need to ban fully autonomous weapons.

I am organising a debate on this topic at the next annual conference of the Association for the Advancement of Artificial Intelligence later this month in Austin, Texas, in the US.

Steve Goose, director of Human Rights Watch’s Arms Division, will speak for a ban, while Ron Arkin, an American roboticist and robo-ethicist, will argue against it.

Another issue that requires more immediate attention is the impact that AI will have on the nature of work. How does society adapt to more automation and fewer people needed to work?

If we can get this right, we could remove much of the drudgery from our lives. If we get it wrong, the increasing inequalities documented by the French economist Thomas Piketty will only get worse.

We will discuss all these issues and more at the first International Workshop on AI and Ethics, also being held in the US within the AAAI Conference on Artificial Intelligence.

It’s important we start to have these debates now, not just to avoid the potential pitfalls, but to construct a future where AI improves the world for all of us.


This article was originally published on The Conversation.
Read the original article.