Category Archives: Technology

Using Your Windows Phone For Media Only


If you end up not using your Windows Phone as an actual phone anymore because you bought a new one, you don’t need to let it collect dust on the shelf.

You can still use it for all sorts of media, including ebooks, music and video, on long car trips or in bed. Just because you don’t put cell phone service on it doesn’t mean it can’t still be used over Wi-Fi.


Tekoso Media Provides Web Hosting For Small Businesses


For Immediate Release –

Tekoso Media Web Hosting

Internet company Tekoso Media has recently begun offering web hosting services, including shared hosting, cloud hosting and storage, VPS hosting and dedicated servers, for both Windows and Linux platforms. Prices are low and plans come with generous software packages, security features and cutting-edge computer technology.

Also, servers in the shared, VPS and managed server packages come with cPanel and a whole host of free add-ons that every internet marketer needs for a launch, including ticket systems, multiple email accounts, billing systems and membership functionality.

WordPress Hosting at Tekoso Media

WordPress is obviously a part of what they offer, but they go the extra mile by offering WordPress hosting at a discounted rate as well. On top of that, their point-and-click, drag-and-drop website creators and integrations make it easy to get up and running within minutes.

If you’re launching a new product or service, or even hosting a blog network for search engine optimization such as authority blogs, Tekoso Web Hosting is your best bet for getting started quickly and cheaply. For just a few dollars you can have lightning-fast servers, more hard drive space and RAM than you’ll ever need and plenty of bandwidth, all without breaking the bank the way you would with competing web hosting providers.

Try Tekoso and take them for a test drive. They even offer a thirty-day money-back guarantee on any server that proves unsatisfactory. This offer may be limited, so check them out today!

Learn more about Tekoso Web Hosting at tekoso.com or on YouTube at https://www.youtube.com/watch?v=Wu1Qo8e6hvU

Why North Korea’s nuclear threat must be taken more seriously than ever


Graham Ong-Webb, Nanyang Technological University

During what was the 2017 Easter weekend for most of the world, North Koreans celebrated the “Day of the Sun”. It was the 105th birthday of the country’s late founding leader and “eternal president” Kim Il-sung (1912-1994).

Thousands of soldiers, military vehicles and, most notably, various ballistic missiles were paraded for the inspection of current supreme leader Kim Jong-Un (Kim Il-sung’s grandson).

But it wasn’t the parade that signalled North Korea’s belligerence; numerous other countries hold military parades to mark some significant occasion or another.

Instead, what was clearly aggressive was the presentation of a mock-up video of the country’s ballistic missiles destroying an American city during a national musical performance.

This video is the most visceral expression yet of Pyongyang’s intentions. Its telecast was likely timed to coincide with the expected arrival of the US Navy’s aircraft carrier, the USS Carl Vinson, and its accompanying fleet of warships in Korean waters.

Inching closer

On April 8, US President Donald Trump and other American officials told the media that the Carl Vinson had been ordered to make its way towards the Korean peninsula. The likely plan was to demonstrate American resolve in managing the crisis that North Korea’s nuclear weapons program has created.

Subsequent revelations that the warship was actually heading south for exercises with the Australian Navy at the time showed a series of blunders in internal communication. But the fact that the Carl Vinson has arrived off Korean waters two weeks later does not change the prospect of a military conflict between North Korea and the United States.

The key question is whether North Korea does have nuclear weapons that it can readily use against the United States and its regional allies, South Korea and Japan. It’s still unlikely North Korea has the current capability to launch a nuclear-tipped intercontinental ballistic missile that can destroy an American city.

North Korea’s scientists have yet to master the technology to build missiles that can traverse this distance and to construct warheads that can survive re-entry into the Earth’s atmosphere after space flight.

But years of testing have allowed North Korea to inch closer to getting right the extremely demanding science of building and launching viable intercontinental nuclear weapons. And this is why the United States is against further testing, to the point that the Trump administration seems serious about justifying pre-emptive strikes on the basis of further nuclear and missile tests.

What is of immediate concern is that previous tests have allowed North Korea to achieve the relatively easier task of building workable medium-range ballistic missiles, with small enough warheads, to strike American bases in South Korea and Japan. These bases host about 80,000 US military personnel in total.

Approaching catastrophe

North Korea may already have as many as 20 nuclear warheads that are small enough to be carried on its Nodong (or Rodong-1) medium-range missiles that can reach these bases. And the Trump administration seems not to want to risk the lives of American soldiers by assuming that North Korea doesn’t already have this nuclear capability.

The cost of that mistake would be not just the lives of 80,000 American military personnel but also countless South Korean and Japanese lives. In fact, a North Korean nuclear attack, which would likely escalate into war, can be expected to create a humanitarian, environmental and economic catastrophe that would set back the international community.

This is what’s immediately at stake for everyone. And it explains why the United States is putting pressure on China, as an ally of North Korea, to influence it to stop its nuclear weapons program.

But if China and other countries fail to stop North Korea building nuclear weapons, the United States will feel pressured to use military force to destroy whatever nuclear weapons and ballistic missiles sites it can locate by satellite surveillance.

The decision to divert the Carl Vinson to waters near the Korean peninsula may also be driven by new intelligence on North Korea’s nuclear threat. The challenge is that sending an American naval armada towards North Korea risks triggering the very nuclear attack against US bases that the Trump administration is trying to avoid in the first place.

This could explain why the administration said it was sending its naval vessels two weeks ago when it really did so later. It may have been to test North Korea’s attitude without escalating the situation by the actual presence of American naval forces that could trigger military action by Kim Jong-Un’s regime.

A worrying stand-off

Why would North Korea want to use nuclear weapons against American bases in Northeast Asia in the first place? It is helpful to remember that, technically, North and South Korea have been at war since 1950 (the Korean War ended in 1953 with an armistice rather than peace). And that the United States has chosen to provide military assistance to the South to help protect it from any aggression by the North.

North Korea may have a very large army of about one million soldiers. South Korea effectively has half that number. Although the majority of South Korea’s able-bodied male citizens may contribute to a military reserve of a few million soldiers, mobilising them in time to respond to a conflict is another question and their role is often excluded from analyses.

As such, the American military personnel and the superior equipment, aircraft and ships that they operate provide the South with a better chance of avoiding defeat should war break out.

Pyongyang’s intention in using nuclear weapons would be to destroy these American bases to remove the advantage they give to South Korea’s national defence. This is why the threat of nuclear use, especially by a more brazen regime under Kim Jong-Un, needs to be taken very seriously.

Such is the current quagmire as the world waits to see how the geopolitics of the Korean peninsula will unfold over the next few months, and as strategists and policymakers scramble to find other approaches for halting North Korea’s growing nuclear threat.

Graham Ong-Webb, Research Fellow, S. Rajaratnam School of International Studies, Nanyang Technological University

Facebook’s new anti-fake news strategy is not going to work – but something else might


Have you seen some “tips to spot fake news” on your Facebook newsfeed recently?

Over the past year, the social media company has been scrutinized for influencing the US presidential election by spreading fake news (propaganda). Obviously, the ability to spread completely made-up stories about politicians trafficking child sex slaves and imaginary terrorist attacks with impunity is bad for democracy and society.

Something had to be done.

Enter Facebook’s new, depressingly incompetent strategy for tackling fake news. The strategy has three frustratingly ill-considered parts.

New products

The first part of the plan is to build new products to curb the spread of fake news stories. Facebook says it’s trying “to make it easier to report a false news story” and find signs of fake news such as “if reading an article makes people significantly less likely to share it.”

It will then send the story to independent fact checkers. If fake, the story “will get flagged as disputed and there will be a link to a corresponding article explaining why.”

This sounds pretty good, but it won’t work.

If non-experts could tell the difference between real news and fake news (which is doubtful), there would be no fake news problem to begin with.

What’s more, Facebook says: “We cannot become arbiters of truth ourselves — it’s not feasible given our scale, and it’s not our role.” Nonsense.

Facebook is like a megaphone. Normally, if someone says something horrible into the megaphone, it’s not the megaphone company’s fault. But Facebook is a very special kind of megaphone that listens first and then changes the volume.

The company’s algorithms largely determine both the content and order of your newsfeed. So if Facebook’s algorithms spread some neo-Nazi hate speech far and wide, yes, it is the company’s fault.

Worse yet, even if Facebook accurately labels fake news as contested, it will still affect public discourse through “availability cascades.”

Each time you see the same message repeated from (apparently) different sources, the message seems more believable and reasonable. Bold lies are extremely powerful because repeatedly fact-checking them can actually make people remember them as true.

These effects are exceptionally robust; they cannot be fixed with weak interventions such as public service announcements, which brings us to the second part of Facebook’s strategy: helping people make more informed decisions when they encounter false news.

Helping you help yourself

Facebook is releasing public service announcements and funding the “news integrity initiative” to help “people make informed judgments about the news they read and share online”.

This – also – doesn’t work.

A vast body of research in cognitive psychology concerns correcting systematic errors in reasoning such as failing to perceive propaganda and bias. We have known since the 1980s that simply warning people about their biased perceptions doesn’t work.

Similarly, funding a “news integrity” project sounds great until you realise the company is really talking about critical thinking skills.

Improving critical thinking skills is a key aim of primary, secondary and tertiary education. If four years of university barely improves these skills in students, what will this initiative do? Make some Youtube videos? A fake news FAQ?

Funding a few research projects and “meetings with industry experts” doesn’t stand a chance of changing anything.

Disrupting economic incentives

The third prong of this non-strategy is cracking down on spammers and fake accounts, and making it harder for them to buy advertisements. While this is a good idea, it’s based on the false premise that most fake news comes from shady con artists rather than major news outlets.

You see, “fake news” is Orwellian newspeak — carefully crafted to mean a totally fabricated story from a fringe outlet masquerading as news for financial or political gain. But these stories are the most suspicious and therefore the least worrisome. Bias and lies from public figures, official reports and mainstream news are far more insidious.

And what about astrology, homeopathy, psychics, anti-vaccination messages, climate change denial, intelligent design, miracles, and all the rest of the irrational nonsense bandied about online? What about the vast array of deceptive marketing and stealth advertising that is core to Facebook’s business model?

As of this writing, Facebook doesn’t even have an option to report misleading advertisements.

What is Facebook to do?

Facebook’s strategy is vacuous, evanescent lip service: a public relations exercise that makes no substantive attempt to address a serious problem.

But the problem is not unassailable. The key to reducing inaccurate perceptions is to redesign technologies to encourage more accurate perception. Facebook can do this by developing a propaganda filter — something like a spam filter for lies.

Facebook may object to becoming an “arbiter of truth”. But coming from a company that censors historic photos and comedians calling for social justice, this sounds disingenuous.

Nonetheless, Facebook has a point. To avoid accusations of bias, it should not create the propaganda filter itself. It should simply fund researchers in artificial intelligence, software engineering, journalism and design to develop an open-source propaganda filter that anyone can use.

Why should Facebook pay? Because it profits from spreading propaganda, that’s why.

Sure, people will try to game the filter, but it will still work. Spam is frequently riddled with typos, grammatical errors and circumlocution not only because it’s often written by non-native English speakers but also because the weird writing is necessary to bypass spam filters.

If the propaganda filter has a similar effect, weird writing will make the fake news that slips through more obvious. Better yet, an effective propaganda filter would actively encourage journalistic best practices such as citing primary sources.
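To make the spam-filter analogy concrete, here is a minimal sketch of what the core of such a filter could look like: a bag-of-words classifier that scores text as more or less propaganda-like. The example headlines and labels are invented purely for illustration, and scikit-learn is just one convenient way to build it; a real filter would need a large, carefully curated corpus and far richer features than word counts.

```python
# A minimal sketch of the "spam filter for lies" idea: a bag-of-words
# naive Bayes classifier trained on texts labelled as propaganda or news.
# The headlines and labels below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "Politician secretly runs child sex ring from pizza shop",        # fabricated story
    "Miracle cure the government does not want you to know about",    # fabricated story
    "Central bank raises interest rates by 0.25 percentage points",   # sourced report
    "Study in peer-reviewed journal links smoking to lung cancer",     # sourced report
]
train_labels = ["propaganda", "propaganda", "news", "news"]

classifier = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
classifier.fit(train_texts, train_labels)

headline = "Secret miracle cure suppressed by the government"
probabilities = classifier.predict_proba([headline])[0]
for label, p in zip(classifier.classes_, probabilities):
    print(f"{label}: {p:.2f}")
```

A deployed version would work the same way in outline: score each story before it is amplified by the newsfeed algorithm, and down-rank or flag the ones that look like fabrications.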

Developing such a tool won’t be easy. It could take years and several million dollars to refine. But Facebook made over US$8 billion last quarter, so Mark Zuckerberg can surely afford it.

Paul Ralph, Senior Lecturer in Computer Science, University of Auckland

Police around the world learn to fight global-scale cybercrime


Frank J. Cilluffo, George Washington University; Alec Nadeau, George Washington University, and Rob Wainwright, University of Exeter

From 2009 to 2016, a cybercrime network called Avalanche grew into one of the world’s most sophisticated criminal syndicates. It resembled an international conglomerate, staffed by corporate executives, advertising salespeople and customer service representatives.

Its business, though, was not standard international trade. Avalanche provided a hacker’s delight of a one-stop shop for all kinds of cybercrime to criminals without their own technical expertise but with the motivation and ingenuity to perpetrate a scam. At the height of its activity, the Avalanche group had hijacked hundreds of thousands of computer systems in homes and businesses around the world, using them to send more than a million criminally motivated emails per week.

Our study of Avalanche, and of the groundbreaking law enforcement effort that ultimately took it down in December 2016, gives us a look at how the cybercriminal underground will operate in the future, and how police around the world must cooperate to fight back.

Cybercrime at scale

Successful cybercriminal enterprises need strong and reliable technology, but what increasingly separates the big players from the smaller nuisances is business acumen. Underground markets, forums and message systems, often hosted on the deep web, have created a service-based economy of cybercrime.

Just as regular businesses can hire online services – buying Google products to handle their email, spreadsheets and document sharing, and hosting websites on Amazon with payments handled by PayPal – cybercriminals can do the same. Sometimes these criminals use legitimate service platforms like PayPal in addition to others specifically designed for illicit marketplaces.

And just as the legal cloud-computing giants aim to efficiently offer products of broad use to a wide customer base, criminal computing services do the same. They pursue technological capabilities that a wide range of customers want to use more easily. Today, with an internet connection and some currency (bitcoin preferred), almost anyone can buy and sell narcotics online, purchase hacking services or rent botnets to cripple competitors and spread money-making malware.

The Avalanche network excelled at this, selling technically advanced products to its customers while using sophisticated techniques to evade detection and identification as the source by law enforcement. Avalanche offered, in business terms, “cybercrime as a service,” supporting a broad digital underground economy. By leaving to others the design and execution of innovative ways to use them, Avalanche and its criminal customers efficiently split the work of planning, executing and developing the technology for advanced cybercrime scams.

With Avalanche, renters – or the network’s operators themselves – could communicate with, and take control of, some or all of the hijacked computers to conduct a wide range of cyberattacks. The criminals could then, for example, knock websites offline for hours or longer. That in turn could let them extract ransom payments, disrupt online transactions to hurt a business’ bottom line or distract victims while accomplices employed stealthier methods to steal customer data or financial information. The Avalanche group also sold access to 20 unique types of malicious software. Criminal operations facilitated by Avalanche cost businesses, governments and individuals around the world hundreds of millions of dollars.

Low risk, high reward

To date, cybercrime has offered high profits – like the US$1 billion annual ransomware market – with low risk. Cybercriminals often use technical means to obscure their identities and locations, making it challenging for law enforcement to effectively pursue them.

That makes cybercrime very attractive to traditional criminals. With a lower technological bar, huge amounts of money, manpower and real-world connections have come flooding into the cybercrime ecosystem. For instance, in 2014, cybercriminals hacked into major financial firms to get information about specific companies’ stocks and to steal investors’ personal information. They first bought stock in certain companies, then sent false email advertisements to specific investors, with the goal of artificially inflating those companies’ stock prices. It worked: Stock prices went up, and the criminals sold their holdings, raking in profits they could use for their next scam.

In addition, the internet allows criminal operations to function across geographic boundaries and legal jurisdictions in ways that are simply impractical in the physical world. Criminals in the real world must be at a crime’s actual site and may leave physical evidence behind – like fingerprints on a bank vault or records of traveling to and from the place the crime occurred. In cyberspace, a criminal in Belarus can hack into a vulnerable server in Hungary to remotely direct distributed operations against victims in South America without ever setting foot below the Equator.

A path forward

All these factors present significant challenges for police, who must also contend with limited budgets and manpower with which to conduct complex investigations, the technical challenges of following sophisticated hackers through the internet and the need to work with officials in other countries.

The multinational cooperation involved in successfully taking down the Avalanche network can be a model for future efforts in fighting digital crime. Coordinated by Europol, the European Union’s police agency, the plan takes inspiration from the sharing economy.

Uber owns very few cars and Airbnb has no property; they help connect drivers and homeowners with customers who need transportation or lodging. Similarly, while Europol has no direct policing powers or unique intelligence, it can connect law enforcement agencies across the continent. This “uberization” of law enforcement was crucial to synchronizing the coordinated action that seized, blocked and redirected traffic for more than 800,000 domains across 30 countries.

Through those partnerships, various national police agencies were able to collect pieces of information from their own jurisdictions and send it, through Europol, to German authorities, who took the lead on the investigation. Analyzing all of that collected data revealed the identities of the suspects and untangled Avalanche’s complex network of servers and software. The nonprofit Shadowserver Foundation and others assisted with the actual takedown of the server infrastructure, while anti-virus companies helped victims clean up their computers.

Using the network against the criminals

Police are increasingly learning – often from private sector experts – how to detect and stop criminals’ online activities. Avalanche’s complex technological setup lent itself to a technique called “sinkholing,” in which malicious internet traffic is sent into the electronic equivalent of a bottomless pit. When a hijacked computer tried to contact its controller, the police-run sinkhole captured that message and prevented it from reaching the actual central controller. Without control, the infected computer couldn’t do anything nefarious.
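As a rough illustration of the idea (not the actual tooling used against Avalanche), the sketch below shows the core logic of a sinkhole resolver: lookups for domains on a blocklist are answered with the address of a police-run server instead of the criminals’ controller, and every attempt is logged as evidence. The domain names and addresses are invented.

```python
# A minimal sketch of DNS sinkholing: queries for domains known to belong to
# a botnet's command-and-control infrastructure are diverted to a monitoring
# server, so infected machines never reach their real controller.
# Domain names and IP addresses below are invented for illustration.
BOTNET_DOMAINS = {"update-check.example", "cdn-sync.example"}  # hypothetical blocklist
SINKHOLE_IP = "192.0.2.1"        # documentation-range address standing in for the sinkhole

captured = []                    # records of infected machines that phoned home

def resolve(domain: str, client_ip: str, real_dns: dict) -> str:
    """Answer a DNS query, diverting known-bad domains to the sinkhole."""
    if domain in BOTNET_DOMAINS:
        captured.append((client_ip, domain))   # evidence for investigators
        return SINKHOLE_IP                     # the bot's instructions never arrive
    return real_dns.get(domain, "0.0.0.0")

real_dns = {"example.org": "93.184.216.34"}
print(resolve("update-check.example", "10.0.0.7", real_dns))  # -> 192.0.2.1
print(resolve("example.org", "10.0.0.8", real_dns))           # -> normal answer
print(captured)
```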

However, interrupting the technological systems isn’t enough, unless police are able to stop the criminals too. Three times since 2010, police tried to take down the Kelihos botnet. But each time the person behind it escaped and was able to resume criminal activities using more resilient infrastructure. In early April, however, the FBI was able to arrest Peter Levashov, allegedly its longtime operator, while on a family vacation in Spain.

The effort to take down Avalanche also resulted in the arrests of five people who allegedly ran the organization. Their removal from action likely led to a temporary disruption in the broader global cybercrime environment. It forced the criminals who were Avalanche’s customers to stop and regroup, and may offer police additional intelligence, depending on what investigators can convince the people arrested to reveal.

The Avalanche network was just the beginning of the challenges law enforcement will face when it comes to combating international cybercrime. To keep their enterprises alive, the criminals will share their experiences and learn from the past. Police agencies around the world must do the same to keep up.

Frank J. Cilluffo, Director, Center for Cyber and Homeland Security, George Washington University; Alec Nadeau, Presidential Administrative Fellow, Center for Cyber and Homeland Security, George Washington University, and Rob Wainwright, Director of Europol; Honorary Fellow, Strategy and Security Institute, University of Exeter

Deep sea mining could help develop mass solar energy – is it worth the risk?


Jon Major, University of Liverpool

Scientists have just discovered massive amounts of a rare metal called tellurium, a key element in cutting-edge solar technology. As a solar expert who specialises in exactly this, I should be delighted. But here’s the catch: the deposit is found at the bottom of the sea, in an undisturbed part of the ocean.

People often have an idealised view of solar as the perfect clean energy source: direct conversion of sunlight to electricity, no emissions, no oil spills or contamination, perfectly clean. This, however, overlooks the messy reality of how solar panels are produced.

While the energy produced is indeed clean, some of the materials required to generate that power are toxic or rare. In the case of one particular technology, cadmium telluride-based solar cells, the cadmium is toxic and the tellurium is hard to find.

Cadmium telluride is one of the second generation “thin-film” solar cell technologies. It’s far better at absorbing light than silicon, on which most solar power is currently based, and as a result its absorbing layer doesn’t need to be as thick. A layer of cadmium telluride just one thousandth of a millimetre thick will absorb around 90% of the light that hits it. It’s cheap and quick to set up, compared to silicon, and uses less material.
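A rough back-of-the-envelope check of that thickness figure can be made with the Beer-Lambert law, where the absorbed fraction is 1 − e^(−αd). The absorption coefficient used below is an assumed, order-of-magnitude value for cadmium telluride above its band gap, not a figure taken from this article.

```python
# A back-of-the-envelope check of the thin-film claim using the Beer-Lambert
# law: absorbed fraction = 1 - exp(-alpha * d). The absorption coefficient is
# a rough, assumed figure for CdTe above its band gap, for illustration only.
import math

alpha_per_cm = 2.3e4             # assumed absorption coefficient for CdTe (per cm)
thickness_cm = 1.0e-4            # one thousandth of a millimetre = 1 micrometre

absorbed = 1.0 - math.exp(-alpha_per_cm * thickness_cm)
print(f"fraction of light absorbed: {absorbed:.0%}")   # roughly 90%
```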

As a result, it’s the first thin-film technology to effectively make the leap from the research laboratory to mass production. Cadmium telluride solar modules now account for around 5% of global installations and, depending on how you do the sums, can produce lower cost power than silicon solar.

Topaz Solar Farm in California is the world’s fourth largest. It uses cadmium telluride panels.
Sarah Swenty/USFWS, CC BY

But cadmium telluride’s Achilles heel is the tellurium itself, one of the rarest metals in the Earth’s crust. Serious questions must be asked about whether technology based on such a rare metal is worth pursuing on a massive scale.

There has always been a divide in opinion about this. The abundance data for tellurium suggests a real issue, but the counter-argument is that no one has been actively looking for new reserves of the material. After all, platinum and gold are similarly rare, but demand for jewellery and catalytic converters (the primary use of platinum) means in practice we are able to find plenty.

The discovery of a massive new tellurium deposit in an underwater mountain in the Atlantic Ocean certainly supports the “it will turn up eventually” theory. And this is a particularly rich ore, according to the British scientists involved in the MarineE-Tech project which found it. While most tellurium is extracted as a by-product of copper mining and so is relatively low yield, their seabed samples contain concentrations 50,000 times higher than on land.

The submerged mountain, ‘Tropic Seamount’, lies off the coast of north-west Africa.
Google Earth

Extracting any of this will be formidably hard and very risky for the environment. The top of the mountain where the tellurium has been discovered is still a kilometre below the waves, and the nearest land is hundreds of miles away.

Even on dry land, mining is never a good thing for the environment. It can uproot communities, decimate forests and leave huge scars on the landscape. It often leads to groundwater contamination, despite whatever safeguards are put in place.

And on the seabed? Given the technical challenges and the pristine ecosystems involved, I think most people can intuitively guess at the type of devastation that deep-sea mining could cause. No wonder it has yet to be implemented anywhere, despite plans off the coast of Papua New Guinea and elsewhere. Indeed, there’s no suggestion that tellurium mining is liable to occur at this latest site any time soon.

Is deep sea mining worth the risk?

However, the mere presence of such resources, like the wind turbines or electric car batteries that rely on scarce materials or risky industrial processes, raises an interesting question. These are useful low-carbon technologies, but do they also need to be environmentally ethical?

There is often the perception that everyone working in renewable energy is a lovely tree-hugging, sandal-wearing leftie, but this isn’t the case. After all, this is now a huge industry, one that is aiming to eventually supplant fossil fuels, and there are valid concerns over whether such expansion will be accompanied by a softening of regulations.

We know that solar power is ultimately a good thing, but do the ends always justify the means? Or, to put it more starkly: could we tolerate mass production of solar panels if it necessitated mining and drilling on a similar scale to the fossil fuels industry, along with the associated pitfalls?

Tolerable – as long as it’s for solar panels.
Peter Gudella / shutterstock

To my mind the answer is undoubtedly yes, we have little choice. After all, mass solar would still wipe out our carbon emissions, helping curb global warming and the associated apocalypse.

What’s reassuring is that, even as solar becomes a truly mature industry, it has started from a more noble and environmentally sound place. Cadmium telluride modules for example include a cost to cover recycling, while scarce resources such as tellurium can be recovered from panels at the end of their 20-year or more lifespan (compare this with fossil fuels, where the materials that produce the power are irreparably lost in a bright flame and a cloud of carbon).

The impact of mining for solar panels will likely be minimal in comparison to the oil or coal industries, but it will not be zero. As renewable technology becomes more crucial, we perhaps need to start calibrating our expectations to account for this.

At some point mining operations in search of solar or wind materials will cause damage or else some industrial production process will go awry and cause contamination. This may be the Faustian pact we have to accept, as the established alternatives are far worse. Unfortunately nothing is perfect.

Jon Major, Research Fellow, Stephenson Institute for Renewable Energy, University of Liverpool

Why we don’t trust robots


Joffrey Becker, Collège de France

Robots raise all kinds of concerns. They could steal our jobs, as some experts think. And if artificial intelligence grows, they might even be tempted to enslave us, or to annihilate the whole of humanity.

Robots are strange creatures, and not only for these frequently invoked reasons. We have good cause to be a little worried about these machines.

An advertisement for Kuka robotics: can these machines really replace us?

Imagine that you are visiting the Quai Branly-Jacques Chirac, a museum in Paris dedicated to anthropology and ethnology. As you walk through the collection, your curiosity leads you to a certain piece. After a while, you begin to sense a familiar presence heading towards the same objet d’art that has caught your attention.

You move slowly, and as you turn your head a strange feeling seizes you because what you seem to distinguish, still blurry in your peripheral vision, is a not-quite-human figure. Anxiety takes over.

As your head turns and your vision becomes sharper, this feeling gets stronger. You realise that this is a humanoid machine, a robot called Berenson. Named after the American art critic Bernard Berenson and designed by the roboticist Philippe Gaussier (Image and Signal Processing Lab) and the anthropologist Denis Vidal (Institut de recherche sur le développement), Berenson has been part of an experiment underway at the Quai Branly museum since 2012.

The strangeness of the encounter with Berenson leaves you suddenly frightened, and you step back, away from the machine.

The uncanny valley

This feeling has been explored in robotics since the 1970s, when Japanese researcher Professor Masahiro Mori proposed his “uncanny valley” theory. If a robot resembles us, he suggested, we are inclined to consider its presence in the same way as we would that of a human being.

But when the machine reveals its robot nature to us, we will feel discomfort. Enter what Mori dubbed “the uncanny valley”. The robot will then be regarded as something of a zombie.

Mori’s theory cannot be systematically verified. But the feelings we experience when we meet an autonomous machine are certainly tinged with both incomprehension and curiosity.

The experiment conducted with Berenson at the Quai Branly, for example, shows that the robot’s presence can elicit paradoxical behaviour in museum goers. It underlines the deep ambiguity that characterises the relationship one can have with a robot, particularly the many communication problems they pose for humans.

If we are wary of such machines, it is mainly because it is not clear to us whether they have intentions and, if so, what those intentions are and how to establish the basis for the minimal understanding that is essential in any interaction. Thus, it is common to see visitors of the Quai Branly adopting social behaviour with Berenson, such as talking to it, or facing it, to find out how it perceives its environment.

In one way or another, visitors mainly try to establish contact. It appears that there is something strategic in considering the robot, even temporarily, as a person. And these social behaviours are not only observed when humans interact with machines that resemble us: it seems we make anthropomorphic projections whenever humans and robots meet.

Social interactions

An interdisciplinary team has recently been set up to explore the many dimensions revealed during these interactions. In particular, they are looking at the moments when, in our minds, we are ready to endow robots with intentions and intelligence.

This is how the PsyPhINe project was born. Based on interactions between humans and a robotic lamp, this project seeks to better understand people’s tendency to anthropomorphise machines.

After they get accustomed to the strangeness of the situation, it is not uncommon to observe that people are socially engaging with the lamp. During a game in which people are invited to play with this robot, they can be seen reacting to its movements and sometimes speaking to it, commenting on what it is doing or on the situation itself.

Mistrust often characterises the first moments of our relations with machines. Beyond their appearance, most people don’t know exactly what robots are made of, what their functions are and what their intentions might be. The robot world seems way too far from ours.

But this feeling quickly disappears. Assuming they have not already run away from the machine, people usually seek to define and maintain a frame for communication. Typically, they rely on existing communication habits, such as those used when talking to pets, for example, or with any living being whose world is to some degree different from theirs.

Ultimately, it seems, we humans are as suspicious of our technologies as we are fascinated by the possibilities they open up.

Joffrey Becker, Anthropologue, Laboratoire d’anthropologie sociale, Collège de France

This article was originally published on The Conversation. Read the original article.

We could soon face a robot crimewave … the law needs to be ready


Christopher Markou, University of Cambridge

This is where we are at in 2017: sophisticated algorithms are both predicting and helping to solve crimes committed by humans; predicting the outcome of court cases and human rights trials; and helping to do the work done by lawyers in those cases. By 2040, there is even a suggestion that sophisticated robots will be committing a good chunk of all the crime in the world. Just ask the toddler who was run over by a security robot at a California mall last year.

How do we make sense of all this? Should we be terrified? Generally unproductive. Should we shrug our shoulders as a society and get back to Netflix? Tempting, but no. Should we start making plans for how we deal with all of this? Absolutely.

Fear of Artificial Intelligence (AI) is a big theme. Technology can be a downright scary thing, particularly when it’s new, powerful, and comes with lots of question marks. But films like Terminator and shows like Westworld are more than just entertainment; they are a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.

Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around, whether it’s a military drone with a full payload, a law enforcement robot that explodes to kill a dangerous suspect or something altogether more innocent that causes harm through accident, error, oversight, or good old-fashioned stupidity.

There’s a cynical saying in law that “where there’s blame, there’s a claim”. But who do we blame when a robot does wrong? This proposition can easily be dismissed as something too abstract to worry about. But let’s not forget that a robot was arrested (and released without charge) for buying drugs; and Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash while his Tesla was in autopilot mode.

While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright Brothers took the Kitty Hawk for a joyride. Time and time again, the law is presented with these novel challenges. And despite initial overreaction, it got there in the end. Simply put: law evolves.

Robot guilt

The role of the law can be defined in many ways, but ultimately it is a system within society for stabilising people’s expectations. If you get mugged, you expect the mugger to be charged with a crime and punished accordingly.

But the law also has expectations of us; we must comply with it to the fullest extent our consciences allow. As humans we can generally do that. We have the capacity to decide whether to speed or obey the speed limit – and so humans are considered by the law to be “legal persons”.

To varying extents, companies are endowed with legal personhood, too. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.

The problem arises when the machines themselves can make decisions of their own accord. As impressive as intelligent assistants like Alexa, Siri or Cortana are, they fall far short of the threshold for legal personhood. But what happens when their more advanced descendants begin causing real harm?

A guilty AI mind?

The criminal law has two critical concepts. First, it contains the idea that liability for harm arises whenever harm has been or is likely to be caused by a certain act or omission.

Second, criminal law requires that an accused is culpable for their actions. This is known as a “guilty mind” or mens rea. The idea behind mens rea is to ensure that the accused both completed the action of assaulting someone and had the intention of harming them, or knew harm was a likely consequence of their action.

Blind justice for an AI.
Shutterstock

So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the “guilty mind” of a non-human? Can this be done by referring to and adapting existing legal principles?

Take driverless cars. Cars drive on roads and there are regulatory frameworks in place to assure that there is a human behind the wheel (at least to some extent). However, once fully autonomous cars arrive there will need to be extensive adjustments to laws and regulations that account for the new types of interactions that will happen between human and machine on the road.

As AI technology evolves, it will eventually reach a state of sophistication that will allow it to bypass human control. As the bypassing of human control becomes more widespread, then the questions about harm, risk, fault and punishment will become more important. Film, television and literature may dwell on the most extreme examples of “robots gone awry” but the legal realities should not be left to Hollywood.

So can robots commit crime? In short: yes. If a robot kills someone, then it has committed a crime (actus reus), but technically only half a crime, as it would be far harder to determine mens rea. How do we know the robot intended to do what it did?

For now, we are nowhere near the level of building a fully sentient or “conscious” humanoid robot that looks, acts, talks, and thinks like us humans. But even a few short hops in AI research could produce an autonomous machine that could unleash all manner of legal mischief. Financial and discriminatory algorithmic mischief already abounds.

Play along with me: just imagine that a Terminator-calibre AI exists and that it commits a crime (let’s say murder). The task is then not determining whether it in fact murdered someone, but the extent to which that act satisfies the principle of mens rea.

But what would we need to prove the existence of mens rea? Could we simply cross-examine the AI like we do a human defendant? Maybe, but we would need to go a bit deeper than that and examine the code that made the machine “tick”.

And what would “intent” look like in a machine mind? How would we go about proving that an autonomous machine was justified in killing a human in self-defense, or establishing the extent of premeditation?

Let’s go even further. After all, we’re not only talking about violent crimes. Imagine a system that could randomly purchase things on the internet using your credit card – and it decided to buy contraband. This isn’t fiction; it has happened. Two London-based artists created a bot that purchased random items off the dark web. And what did it buy? Fake jeans, a baseball cap with a spy camera, a stash can, some Nikes, 200 cigarettes, a set of fire-brigade master keys, a counterfeit Louis Vuitton bag and ten ecstasy pills. Should these artists be liable for what the bot they created bought?

Maybe. But what if the bot “decided” to make the purchases itself?

Robo-jails?

Even if you solve these legal issues, you are still left with the question of punishment. What’s a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Unless, of course, it was programmed to “reflect” on its wrongdoing and find a way to rewrite its own code while safely ensconced at Her Majesty’s leisure. And what would building “remorse” into machines say about us as their builders?

Would robot wardens patrol robot jails?
Shutterstock

What we are really talking about when we talk about whether or not robots can commit crimes is “emergence” – where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.

AI has already helped with emergent concepts in medicine, and we are learning things about the universe with AI systems that even an army of Stephen Hawkings might not reveal.

The hope for AI is that in trying to capture this safe and beneficial emergent behaviour, we can find a parallel solution for ensuring it does not manifest itself in illegal, unethical, or downright dangerous ways.

At present, however, we are systematically incapable of guaranteeing human rights on a global scale, so I can’t help but wonder how ready we are for the prospect of robot crime given that we already struggle mightily to contain that done by humans.

Christopher Markou, PhD Candidate, Faculty of Law, University of Cambridge

This article was originally published on The Conversation. Read the original article.

How to make an Internet of Intelligent Things work for Africa


Martin Hall, University of Cape Town

Late in 2016 Senegal’s Banque Regionale De Marches announced the launch of the eCFA Franc, a cryptocurrency for the countries of the West African Monetary Union – Senegal, Cote d’Ivoire, Benin, Burkina Faso, Mali, Niger, Togo and Guinea-Bissau. This and similar innovations mark the coming of age of a new generation of applications – an Internet of Intelligent Things – that could provide a new infrastructure for economic development across Africa.

The Internet of Things is a network of physical devices, vehicles, buildings and other items. They are equipped with electronics, software, sensors and network connectivity so they can collect and exchange data. There’s wide enthusiasm about spectacular innovations such as intelligent refrigerators and driverless cars. But a quieter revolution is underway in everyday systems and facilities, such as financial services.

There are particular possibilities here for Africa. The potential for the continent’s economic growth is well established. There’s also an abundance of opportunity for digital innovation. This was clear from a recent continent-wide entrepreneurship competition organised by the University of Cape Town’s Graduate School of Business.

More broadly, the new Internet of Things has the potential to compensate for Africa’s legacies of underdevelopment. The key here is the development of the blockchain from a fringe concept into a mainstream digital innovation.

The blockchain and Africa

The blockchain, mostly known as the technology that underpins digital currency Bitcoin, is an almost incorruptible digital ledger of transactions, agreements and contracts that is distributed across thousands of computers, worldwide.

It has the potential to be both foundation and springboard for a new developmental infrastructure.
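The “almost incorruptible” property comes from the way each block commits to the hash of the one before it. The toy ledger below sketches just that chaining (it leaves out the distributed consensus across thousands of computers that real blockchains rely on); the land-registry entries are invented examples.

```python
# A toy hash-chained ledger illustrating why a blockchain is hard to tamper
# with: every block commits to the hash of the previous block, so altering an
# old record breaks every hash that follows and is immediately detectable.
# This sketch omits the distributed consensus used by real blockchains.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: str) -> None:
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": previous})

def verify(chain: list) -> bool:
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, "Plot 17 registered to owner A")           # invented entry
append_block(ledger, "Loan of 500 eCFA issued against plot 17")  # invented entry

print(verify(ledger))                                   # True
ledger[0]["record"] = "Plot 17 registered to owner B"   # attempted tampering
print(verify(ledger))                                   # False: chain no longer checks out
```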

New blockchain platforms such as Ethereum are supporting the development of distributed applications. These “DApps” can provide accessible ways to use the blockchain. They act like “autonomous agents” – little brains that receive and process information, make decisions and take actions. These new capabilities will have widespread implications when linked to cryptocurrencies through “smart contracts” that are also securely recorded in the blockchain.

DApps provide a practical and affordable means of making Things intelligent and able to interact directly with other Things. They can be programmed to take data-informed actions without human intervention.

These innovations will have particular benefits across Africa. Economic growth is underpinned and enabled by appropriate financial services. Early internet-based innovations such as Kenya’s M-PESA have clearly demonstrated the appetite for accessible, internet-based financial services. But many small and medium businesses are still restricted. Their owners usually can’t access standard loan financing. Banks will not extend credit facilities without traditional title deeds to land and buildings, or a conventional payslip.

Don and Alex Tapscott have shown in their recent book that the new blockchain can be “the ledger of everything”. A house can become an intelligent entity registered on a secure, distributed database once it’s tagged with a geospatial reference and sensors that monitor its continuing existence.

The owner of the asset can, through an Ethereum-based smart contract, secure a loan to expand a start-up enterprise. Intermediary arrangements become unnecessary. Economist Hernando de Soto has suggested this could create “a revolution in property rights”.

Water and energy

Property and financing aren’t the only areas where the new Internet of Intelligent Things has the potential to compensate for Africa’s legacies of underdevelopment.

Economic growth also depends on affordable and reliable services like water and energy. Water is an increasingly scarce resource in many parts of Africa. This is particularly true in cities. Rapid population increases are making old precepts of urban planning redundant.

Technology can help. Autonomous agents positioned across all aspects of water reticulation systems can monitor supplies of potable, storm and waste water. These “little brains” can take appropriate actions to detect and report damage and leakage and close off supply lines. Smart devices can also monitor water quality to detect health hazards. They can regulate and charge for water consumption.

Similarly, for the supply of energy, smart devices are already being deployed across conventional and ageing power grids in other parts of the world. In Australia, for instance, intelligent monitors detect when an individual pole is in trouble. They then report the fault and call out a repair crew. They can also communicate with other poles to redirect the supply and preserve the grid’s integrity.

In parallel with conventional supply systems, new digital technologies can enable full integration with renewable sources of energy and the intelligent management of supply at the household level. The new blockchain is designed for secure peer-to-peer transactions combined with incorruptible contracts between multiple parties. Individual households can manage their own supply and demand to incorporate self-generated energy. A house equipped with a simple windmill and a roof made up of photovoltaic tiles could sell surplus power to a neighbour in need. They could also buy from another house to meet a shortfall.

Such microgrids are already in development. The combination of ubiquitous and affordable bandwidth and low-cost autonomous agents could bring affordable energy to communities that have never enjoyed reliable electricity supply.
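As a simple sketch of what those household “little brains” might do, the snippet below matches surpluses to shortfalls across a handful of hypothetical houses. The names, quantities and matching rule are invented for illustration; a real microgrid would settle such trades through smart contracts and price them dynamically.

```python
# A minimal sketch of peer-to-peer matching on a household microgrid: each
# agent reports its net energy position for the next hour, and surpluses are
# matched to shortfalls. Household names and quantities are invented; a real
# system would record and settle these trades via smart contracts.
positions_kwh = {"house_a": +3.0, "house_b": -1.5, "house_c": -2.0, "house_d": +0.5}

sellers = {h: q for h, q in positions_kwh.items() if q > 0}    # surplus to sell
buyers = {h: -q for h, q in positions_kwh.items() if q < 0}    # shortfall to cover

trades = []
for buyer, need in buyers.items():
    for seller in list(sellers):
        if need <= 0:
            break
        amount = min(need, sellers[seller])
        if amount > 0:
            trades.append((seller, buyer, round(amount, 2)))
            sellers[seller] -= amount
            need -= amount

for seller, buyer, kwh in trades:
    print(f"{seller} sells {kwh} kWh to {buyer}")
```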

A new infrastructure built up in this way could be a springboard for economic development – from small enterprises that would have the resources to take innovations to scale, to significant household efficiencies and increases in consumer purchasing power. As has been the pattern with previous digital technologies, costs of production will fall dramatically as the global market for intelligent things explodes. That which seems extraordinary today will be everyday tomorrow.

So what’s standing in the way?

Established interests

It’s not the technology that’s holding Africa back from embracing the Internet of Things. Rather, it’s the established interests in play. These include state enterprises and near-monopolies that are heavily invested in conventional systems, local patronage networks and conventional banks, and the failure of political vision.

What’s needed is effective public policy and business engagement to ensure that the potential of this next wave of digital innovation is realised. Government and civil society innovators need to be directing much of their attention here.

This is why the West African Monetary Union’s cryptocurrency initiative is encouraging. It’s a step towards the future that Don and Alex Tapscott envision; a move towards an Internet that’s driven by the falling costs of bargaining, policing, and enforcing social and commercial agreements.

In this new space, integrity, security, collaboration and the privacy of all transactions will be the name of the game. So too will the creation and distribution of value. And that’s great news for Africa.

Martin Hall, Emeritus Professor, MTN Solution Space Graduate School of Business, University of Cape Town

This article was originally published on The Conversation. Read the original article.

Merging our brains with machines won’t stop the rise of the robots


Michael Milford, Queensland University of Technology

Tesla chief executive and OpenAI founder Elon Musk suggested last week that humanity might stave off irrelevance from the rise of the machines by merging with the machines and becoming cyborgs.

However, current trends in software-only artificial intelligence and deep learning technology raise serious doubts about the plausibility of this claim, especially in the long term. This doubt is not only due to hardware limitations; it is also to do with the role the human brain would play in the match-up.

Musk’s thesis is straightforward: that sufficiently advanced interfaces between brain and computer will enable humans to massively augment their capabilities by being better able to leverage technologies such as machine learning and deep learning.

But the exchange goes both ways. Brain-machine interfaces may help the performance of machine learning algorithms by having humans “fill in the gaps” for tasks that the algorithms are currently bad at, like making nuanced contextual decisions.

The idea in itself is not new. J. C. R. Licklider and others speculated on the possibility and implications of “man-computer symbiosis” in the mid-20th century.

However, progress has been slow. One reason is development of hardware. “There is a reason they call it hardware – it is hard,” said Tony Fadell, creator of the iPod. And creating hardware that interfaces with organic systems is even harder.

Current technologies are primitive compared to the picture of brain-machine interfaces we’re sold in science fiction movies such as The Matrix.

Deep learning quirks

Assuming that the hardware challenge is eventually solved, there are bigger problems at hand. The past decade of incredible advances in deep learning research has revealed that there are some fundamental challenges to be overcome.

The first is simply that we still struggle to understand and characterise exactly how these complex neural network systems function.

We trust simple technology like a calculator because we know it will always do precisely what we want it to do. Errors are almost always a result of mistaken entry by the fallible human.

One vision of brain-machine augmentation would be to make us superhuman at arithmetic. So instead of pulling out a calculator or smartphone, we could think of the calculation and receive the answer instantaneously from the “assistive” machine.

Where things get tricky is if we were to try and plug into the more advanced functions offered by machine learning techniques such as deep learning.

Let’s say you work in a security role at an airport and have a brain-machine augmentation that automatically scans the thousands of faces you see each day and alerts you to possible security risks.

Most machine learning systems suffer from an infamous problem whereby a tiny change in the appearance of a person or object can cause the system to catastrophically misclassify what it thinks it is looking at. Change a picture of a person by less than 1%, and the machine system might suddenly think it is looking at a bicycle.

This image shows how you can fool AI image recognition by adding imperceptible noise to the image.
From Goodfellow et al, 2014
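The sketch below illustrates the mechanism behind that figure, the fast gradient sign method, on a toy logistic-regression classifier with random weights standing in for a real image model (the bias is set so the clean input starts out confidently classified). Every input value is nudged slightly in the direction that increases the loss, and the prediction flips even though no single value changes by more than the small perturbation budget.

```python
# A self-contained sketch of the fast gradient sign method (FGSM): nudge each
# input feature slightly in the direction that increases the classifier's
# loss, and the prediction flips. The "image" and weights are random toy data
# standing in for a real image model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(784)            # stand-in for a flattened 28x28 image, values in [0, 1]
w = rng.normal(size=784)       # weights of a toy logistic-regression "image model"
b = 2.0 - w @ x                # bias chosen so the clean input is confidently class 1

def predict_proba(image):
    """Probability the toy model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ image + b)))

true_label = 1.0
p = predict_proba(x)

# For logistic regression with cross-entropy loss, the gradient of the loss
# with respect to the input is (p - y) * w.
grad_x = (p - true_label) * w

epsilon = 0.05                 # perturbation budget: each value moves by at most 0.05
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print(f"clean prediction:       {predict_proba(x):.3f}")     # ~0.88, classed as "1"
print(f"adversarial prediction: {predict_proba(x_adv):.3f}")  # ~0.00, classed as "0"
print(f"largest single change:  {np.abs(x_adv - x).max():.3f}")
```

Deep networks are fooled by far smaller, visually imperceptible perturbations than this toy model, which is what makes the attack so worrying for security applications.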

Terrorists or criminals might exploit the different vulnerabilities of a machine to bypass security checks, a problem that already exists in online security. Humans, although limited in their own way, might not be vulnerable to such exploits.

Despite their reputation as being unemotional, machine learning technologies also suffer from bias in the same way that humans do, and can even exhibit racist behaviour if fed appropriate data. This unpredictability has major implications for how a human might plug into – and more importantly, trust – a machine.

Google research scientist Ian Goodfellow shows how easy it is to fool a deep learning system.

Trust me, I’m a robot

Trust is also a two-way street. Human thought is a complex, highly dynamic activity. In this same security scenario, with a sufficiently advanced brain-machine interface, how will the machine know what human biases to ignore? After all, unconscious bias is a challenge everyone faces. What if the technology is helping you interview job candidates?

We can preview to some extent the issues of trust in a brain-machine interface by looking at how defence forces around the world are trying to address human-machine trust in an increasingly mixed human-autonomous systems battlefield.

Research into trusted autonomous systems deals with both humans trusting machines and machines trusting humans.

There is a parallel between a robot warrior making an ethical decision to ignore an unlawful order by a human and what must happen in a brain-machine interface: interpretation of the human’s thoughts by the machine, while filtering fleeting thoughts and deeper unconscious biases.

In defence scenarios, the logical role for a human brain is in checking that decisions are ethical. But how will this work when the human brain is plugged into a machine that can make inferences using data at a scale that no brain can comprehend?

In the long term, the issue is whether, and how, humans will need to be involved in processes that are increasingly determined by machines. Soon machines may make medical decisions no human team can possibly fathom. What role can and should the human brain play in this process?

In some cases, the combination of automation and human workers could increase jobs, but this effect is likely fleeting. Those same robots and automation systems will continue to improve, likely eventually removing the jobs they created locally.

Likewise, while humans may initially play a “useful” role in brain-machine systems, as the technology continues to improve there may be less reason to include humans in the loop at all.

The idea of maintaining humanity’s relevance by integrating human brains with artificial brains is appealing. What remains to be seen is what contribution the human brain will make, especially as technology development outpaces human brain development by a million to one.

Michael Milford, Associate professor, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.