Category Archives: Artificial Intelligence

New and ongoing developments in computers that can think for themselves.

Virtual distance: technology is rewriting the rulebook for human interaction


Consider the following two situations.

In the first scenario, a man and a woman sit across from each other at a romantically lit table in a fancy restaurant, texting – looking down and talking to others, maybe each other – but rarely glancing up except to place drink and food orders.

In the second, a mother walks into a diner to join friends for lunch, carrying her 2-year-old. She sets him down at the table, hands him a tablet device, takes out her smartphone, searches her messages, and half listens, with only occasional moments of adult conversation squeezed in between swooshes across their collective screens.

What ties them together? The distance between them. Both scenarios reflect a phenomenon of the digital age that is growing ever more common. It’s called “virtual distance.”

Changing the rules of interaction

Virtual distance is a psychological and emotional sense of detachment that accumulates little by little, at the subconscious or unconscious level, as people trade off time interacting with each other for time spent “screen skating” (swiping, swishing, pinching, tapping, and so on).

It is also a measurable phenomenon and can cause some surprising effects. For example, when virtual distance is relatively high, people become distrustful of one another. One result: they keep their ideas to themselves instead of sharing them with others in the workplace – a critical exchange that’s necessary for taking risks needed for innovation, collaboration and learning.

Another unintended consequence: people disengage from helping behaviors, leaving others to fend for themselves. Those left isolated often report lower job satisfaction and weaker organizational commitment.

Virtual distance research underscores that the rules of interaction have changed. Virtual distance changes the way people feel – about each other, about themselves, and about how they fit into the world around them.

But the demonstrated impacts measured among adults seem comparatively benign when considered against what it might be doing to children.

Virtual distance and the growing child

Kids learn by looking at loved ones closely, watching what they do and listening to how they say things. The actions and behaviors parents model have a profound and lasting impact upon a child’s development. For example, the “serve and return” of interactions between children and adults is a key factor in child cognitive development.

If much of what the child notices about the world comes from a small screen where only a shallow representation is available, what do children have to mimic? How much practice do they get developing human capacities crucial to establishing emotional ease and social sensibilities?

Virtual distance is a game-changer when it comes to human relations. When technology is used as an agent for relationships, it can in some cases be beneficial. However, when technology is used purposelessly, as a default, it doesn’t just squeeze out sophisticated interpersonal interactions; it changes the nature of what’s left.

Purposeful use of technology can support children’s learning, but when technology becomes a substitute or a proxy for relationships, language development in children can be held back. Communication becomes the transfer of impersonal information instead of the sharing of a passion. This affects not only language development in kids; it can have effects on other aspects of our lives as well.

Taking a risk and having a go at that tricky math problem seems more difficult when a child is on their own than when with a friend. The same goes for sticking with a difficult task (a real gym buddy is more effective than an app).

These kinds of skills – self-discipline, ethical understanding and interpersonal communication, as well as social ability and critical thinking (among others) – are what UNESCO calls “transversal competencies.” And they can be impaired through virtual distance.

When the ripple effects of actions and inactions seem to go no further than the screen, empathy and collaborative skills can be difficult to develop. For example, children seem to have trouble looking into other people’s eyes and are less able to hold conversations.

As connectivity increases, connectedness can lose out.

Screens are everywhere, but we don’t need to let them get in the way

If two adults spend the night texting over dinner, they are likely to feel emotionally disconnected. However, they can get over it, because adults have the ability to look at themselves from afar and make changes to improve their lives.

They can reflect on their night out and come to realize that virtual distance got in the way of what they really wanted, what they really needed – to hold the other’s hand instead of touching the lifeless screen.

This kind of self-awareness and understanding about how we think and learn is called metacognitive insight. It allows the adults in this example to choose to change their behavior. They can intentionally turn their attention toward reviewing the situation. Most adults today grew up in an era before digital technology was as ubiquitous as it is now. They may have an instinctive understanding of what virtual distance is, even if they don’t have a term for it. And they have the experience to know that interactions can be different.

Kids, on the other hand, have less experience with the world. And many children in technologically advanced societies have grown up with smartphones, tablets and other forms of digital technology within arm’s reach. If they grow up with virtual distance as the norm, they might not know that interactions can be different, or how.

Knowing that virtual distance can affect their small ones in profound ways, grown-ups can stop and consider which path to choose. Educators can also act intentionally: they can create curricula to “teach back” some of what we know kids need to learn but may miss as they mature in the digital age.

Virtual distance is simply a new facet of life that has to be deliberately folded into the way people live their lives and raise their kids. It’s not good. It’s not bad. It just is – much like the technologies that sparked its inception.

This article was originally published on The Conversation.
Read the original article.

Global Smart Infrastructure – Smart Cities and Artificial Intelligence the Way Forward


Smart Cities and Artificial Intelligence – The Global Transformation has Begun. LONDON, March 12, 2015 /PRNewswire/ — Transforming our cities into the Smart Cities of the future will involve incorporating new technologies and key digital developments, all linked by machine-to-machine (M2M) solutions and real-time data analytics, which sit under the umbrella term of the Internet of Things. … Continue reading

To stop the machines taking over we need to think about fuzzy logic


Amid all the dire warnings that machines run by artificial intelligence (AI) will one day take over from humans we need to think more about how we program them in the first place.

The technology may be too far off to seriously entertain these worries – for now – but much of the distrust surrounding AI arises from misunderstandings in what it means to say a machine is “thinking”.

One of the current aims of AI research is to design machines, algorithms, input/output processes or mathematical functions that can mimic human thinking as much as possible.

We want to better understand what goes on in human thinking, especially when it comes to decisions that cannot be justified other than by drawing on our “intuition” and “gut-feelings” – the decisions we can only make after learning from experience.

Consider the human who hires you after first comparing you with other job applicants in terms of your work history, skills and presentation. This human-manager is able to make a decision identifying the successful candidate.

If we can design a computer program that takes exactly the same inputs as the human-manager and can reproduce its outputs, then we can make inferences about what the human-manager really values, even if he or she cannot articulate their decision on who to appoint other than to say “it comes down to experience”.

This kind of research is being carried out today and applied to understand risk-aversion and risk-seeking behaviour of financial consultants. It’s also being looked at in the field of medical diagnosis.

These human-emulating systems are not yet being asked to make decisions, but they are certainly being used to help guide human decisions and reduce the level of human error and inconsistency.

Fuzzy sets and AI

One promising area of research is to utilise the framework of fuzzy sets. Fuzzy sets and fuzzy logic were formalised by Lotfi Zadeh in 1965 and can be used to mathematically represent our knowledge pertaining to a given subject.

In everyday language what we mean when accusing someone of “fuzzy logic” or “fuzzy thinking” is that their ideas are contradictory, biased or perhaps just not very well thought out.

But in mathematics and logic, “fuzzy” is a name for a research area that has quite a sound and straightforward basis.

The starting point for fuzzy sets is this: many decision processes that can be managed by computers traditionally involve binary truth values. Something is either true or false, and any action is based on the answer (in computing this is typically encoded as 0 or 1).

For example, our human-manager from the earlier example may say to human resources:

  • IF the job applicant is aged 25 to 30
  • AND has a qualification in philosophy OR literature
  • THEN arrange an interview.

This information can all be written into a hiring algorithm, based on true or false answers, because an applicant either is between 25 and 30 or is not, they either do have the qualification or they do not.
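To make the contrast concrete, here is a minimal sketch of that crisp rule in Python (the field names and data structure are made up for illustration; the article itself specifies only the rule):

```python
def arrange_interview(applicant):
    """Crisp (binary) version of the manager's rule:
    every condition is simply true or false."""
    right_age = 25 <= applicant["age"] <= 30
    qualified = applicant["qualification"] in {"philosophy", "literature"}
    return right_age and qualified

# A 27-year-old literature graduate satisfies both conditions.
print(arrange_interview({"age": 27, "qualification": "literature"}))  # True
```

Every applicant lands squarely on one side of each condition; there are no in-between answers.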

But what if the human-manager is somewhat more vague in expressing their requirements? Instead, the human-manager says:

  • IF the applicant is tall
  • AND attractive
  • THEN the salary offered should be higher.

The problem HR faces in encoding these requests into the hiring algorithm is that it involves a number of subjective concepts. Even though height is something we can objectively measure, how tall should someone be before we call them tall?

Attractiveness is also subjective, even if we only account for the taste of the single human-manager.

Grey areas and fuzzy sets

In fuzzy sets research we say that such characteristics are fuzzy. By this we mean that whether something belongs to a set or not, whether a statement is true or false, can gradually increase from 0 to 1 over a given range of values.

One of the hardest things in any fuzzy-based software application is how best to convert observed inputs (someone’s height) into a fuzzy degree of membership, and then further establish the rules governing the use of connectives such as AND and OR for that fuzzy set.

To this day, and likely for years or decades into the future, the rules for this transition are human-defined. For example, to specify how tall someone is, I could design a function that says a 190cm person is tall (with a truth value of 1) and a 140cm person is not tall (or tall with a truth value of 0).

Then from 140cm, for every increase of 5cm in height the truth value increases by 0.1. So a key feature of any AI system is that we, normal old humans, still govern all the rules concerning how values or words are defined. More importantly, we define all the actions that the AI system can take – the “THEN” statements.
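As a rough sketch of what such a human-defined rule might look like in code, using the numbers above for “tall” (treating AND as min and OR as max is one common convention, not something the article prescribes):

```python
def tall_membership(height_cm):
    """Degree to which someone counts as 'tall', following the example above:
    0 at 140 cm, 1 at 190 cm, rising by 0.1 for every 5 cm in between."""
    if height_cm <= 140:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 140) / 50

print(tall_membership(165))  # 0.5 -- halfway between 'not tall' and 'tall'

# One widely used (but still human-chosen) rule for the connectives:
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)
```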

Human–robot symbiosis

An area called computing with words takes the idea further by aiming for seamless communication between a human user and an AI computer algorithm.

For the moment, we still need to come up with mathematical representations of subjective terms such as “tall”, “attractive”, “good” and “fast”. Then we need to design a function for combining such comments or commands, followed by another mathematical definition for turning the result we get back into an output like “yes he is tall”.
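A toy illustration of that round trip, reusing the made-up “tall” membership from the earlier sketch and picking arbitrary cut-off points for turning a degree back into words:

```python
def tall_degree(height_cm):
    # Same illustrative 'tall' membership as before: 0 at 140 cm, 1 at 190 cm.
    return max(0.0, min(1.0, (height_cm - 140) / 50))

def to_words(name, degree):
    """Turn a fuzzy degree back into an everyday phrase.
    The cut-off points here are arbitrary, not part of the theory."""
    if degree >= 0.8:
        return f"yes, he is {name}"
    if degree >= 0.5:
        return f"he is fairly {name}"
    return f"no, he is not {name}"

print(to_words("tall", tall_degree(185)))  # -> "yes, he is tall"
```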

In conceiving the idea of computing with words, researchers envisage a time where we might have more access to base-level expressions of these terms, such as the brain activity and readings when we use the term “tall”.

This would be an amazing leap, although mainly in terms of the technology required to observe such phenomena (the number of neurons in the brain, let alone synapses between them, is somewhere near the number of galaxies in the universe).

Even so, designing machines and algorithms that can emulate human behaviour to the point of mimicking communication with us is still a long way off.

In the end, any system we design will behave as it is expected to, according to the rules we have designed and the program that governs it.

An irrational fear?

This brings us back to the big fear of AI machines turning on us in the future.

The real danger is not in the birth of genuine artificial intelligence – that we will somehow manage to create a program that becomes self-aware, such as HAL 9000 in the movie 2001: A Space Odyssey or Skynet in the Terminator series.

The real danger is that we make errors in encoding our algorithms or that we put machines in situations without properly considering how they will interact with their environment.

These risks, however, are the same that come with any human-made system or object.

So if we were to entrust, say, the decision to fire a weapon to AI algorithms (rather than just the guidance system), then we might have something to fear.

Not a fear that these intelligent weapons will one day turn on us, but rather that we programmed them – given a series of subjective options – to decide the wrong thing and turn on us.

Even if there is some uncertainty about the future of “thinking” machines and what role they will have in our society, a sure thing is that we will be making the final decisions about what they are capable of.

When programming artificial intelligence, the onus is on us (as it is when we design skyscrapers, build machinery, develop pharmaceutical drugs or draft civil laws), to make sure it will do what we really want it to.

This article was originally published on The Conversation.
Read the original article.

Are the Dark Ages Really in the Past? How an Internet Pioneer Thinks it Could Happen Again


The sound of the ‘Dark Ages’ conjures up a medieval nightmare of violent battles, unruly mobs and heretics – a bleak past that wasn’t necessarily as ominous, or as firmly in the past, as movies like to portray it; after all, there are plenty of barbaric and superstitious people still alive today. The term was first used to describe the millennium between 500 and 1500 AD, a period from which few historical records survive, or at least few reliable ones. Consequently, we are often given a picture of the Middle Ages that is romanticized, for better or worse, rather than accurate.

With the advent of computers and centuries of documents preserved on microfiche, it seems unlikely that such an era of lost records could ever happen again, with an ocean of articles and pictures available at our fingertips. Unfortunately, Vint Cerf, one of the few people who can proudly rank himself among the founding fathers of the Internet, is not so optimistic. He proposes a hypothetical nightmare scenario that programmers have talked about for many years: a Digital Dark Age, in which a wealth of written history may be lost, and with it so many of the things we depend on computers for.

The twenty-first century alone has already presented several ways in which our data is in great peril, according to Cerf, and there is a real possibility that it will be a century that hardly crosses anyone’s mind in the year 3000. In the near future, companies such as SpaceX hope to bring internet connections to higher speeds by moving wireless links off the planet and into space by means of micro-satellites, although over those distances even the speed of light imposes unavoidable delays in service. Experiments aboard the ISS have already shown that high-speed internet in space is attainable, though not without obstacles.

Cerf, one of Google’s vice presidents and a co-designer of the TCP/IP protocols that carry data packets across the internet, made waves at a recent convention of the American Association for the Advancement of Science. He proposed that, because file formats evolve so rapidly, even if our data survives, stored safely in iCloud or elsewhere, we may accumulate a wealth of documents and files we are unable to open because their formats have become obsolete. Software continually updates, and new versions cannot always read older files – a problem already exemplified by the differences between WordPerfect and Microsoft Word. Nor is it only applications: the machines themselves change. Consider the death of the floppy disk, and the disappearance of CD and DVD drives from computers in favor of downloadable movies, another medium currently facing a format war. The underlying issue is backwards compatibility, something software upgrades do not always guarantee. Given that 2015 is already a much different place than 2005 was, when streaming video was just beginning to catch on, the technological difference from century to century could be vastly greater.

Fortunately, Cerf, who openly expressed a healthy skepticism about whether Google will still be around in the fourth millennium, is not all doom and gloom in his forecast. He described the basics of a plan already demonstrated at Carnegie Mellon University, known as ‘digital vellum.’ Although the commercial side has not yet come to fruition, Cerf has left open the possibility that companies may one day, for a fee, offer to create X-ray snapshots of all your digital files on a virtual machine that also replicates the software environment in which your data was created – imitating, say, Microsoft Office 97 running on its original operating system if you had a Word file from that era.

While the algorithms have been established and discussed, the ethical concerns have yet to be addressed, a particularly prominent issue today as politicians continue to propose net neutrality laws and other regulations. If the internet comes to be governed by laws stemming from legislation similar to SOPA or PIPA, this too may complicate how we archive and access our data in the years to come. There is also the problem of intellectual property, which may prevent users from copying software and operating systems that are still actively in use, and the question of who can access the virtual machine copies, as the technology may have to be user-specific to prevent the leakage of personal data.

James Sullivan
James Sullivan is the assistant editor of Brain World Magazine and a contributor to Truth Is Cool and OMNI Reboot. He can usually be found on TVTropes or RationalWiki when not exploiting life and science stories for another blog article.