Category Archives: Technology

Facebook’s new anti-fake news strategy is not going to work – but something else might


Have you seen some “tips to spot fake news” on your Facebook newsfeed recently?

Over the past year, the social media company has been scrutinized for influencing the US presidential election by spreading fake news (propaganda). Obviously, the ability to spread completely made-up stories about politicians trafficking child sex slaves and imaginary terrorist attacks with impunity is bad for democracy and society.

Something had to be done.

Enter Facebook’s new, depressingly incompetent strategy for tackling fake news. The strategy has three frustratingly ill-considered parts.

New products

The first part of the plan is to build new products to curb the spread of fake news stories. Facebook says it’s trying “to make it easier to report a false news story” and find signs of fake news such as “if reading an article makes people significantly less likely to share it.”

It will then send the story to independent fact checkers. If fake, the story “will get flagged as disputed and there will be a link to a corresponding article explaining why.”

This sounds pretty good, but it won’t work.

If non-experts could tell the difference between real news and fake news (which is doubtful), there would be no fake news problem to begin with.

What’s more, Facebook says: “We cannot become arbiters of truth ourselves — it’s not feasible given our scale, and it’s not our role.” Nonsense.

Facebook is like a megaphone. Normally, if someone says something horrible into the megaphone, it’s not the megaphone company’s fault. But Facebook is a very special kind of megaphone that listens first and then changes the volume.

The company’s algorithms largely determine both the content and order of your newsfeed. So if Facebook’s algorithms spread some neo-Nazi hate speech far and wide, yes, it is the company’s fault.

Worse yet, even if Facebook accurately labels fake news as contested, it will still affect public discourse through “availability cascades.”

Each time you see the same message repeated from (apparently) different sources, the message seems more believable and reasonable. Bold lies are extremely powerful because repeatedly fact-checking them can actually make people remember them as true.

These effects are exceptionally robust; they cannot be fixed with weak interventions such as public service announcements, which brings us to the second part of Facebook’s strategy: helping people make more informed decisions when they encounter false news.

Helping you help yourself

Facebook is releasing public service announcements and funding the “news integrity initiative” to help “people make informed judgments about the news they read and share online”.

This – also – doesn’t work.

A vast body of research in cognitive psychology concerns correcting systematic errors in reasoning such as failing to perceive propaganda and bias. We have known since the 1980s that simply warning people about their biased perceptions doesn’t work.

Similarly, funding a “news integrity” project sounds great until you realise that what the company is really talking about is critical thinking skills.

Improving critical thinking skills is a key aim of primary, secondary and tertiary education. If four years of university barely improves these skills in students, what will this initiative do? Make some YouTube videos? A fake news FAQ?

Funding a few research projects and “meetings with industry experts” doesn’t stand a chance of changing anything.

Disrupting economic incentives

The third prong of this non-strategy is cracking down on spammers and fake accounts, and making it harder for them to buy advertisements. While this is a good idea, it’s based on the false premise that most fake news comes from shady con artists rather than major news outlets.

You see, “fake news” is Orwellian newspeak — carefully crafted to mean a totally fabricated story from a fringe outlet masquerading as news for financial or political gain. But these stories are the most suspicious and therefore the least worrisome. Bias and lies from public figures, official reports and mainstream news are far more insidious.

And what about astrology, homeopathy, psychics, anti-vaccination messages, climate change denial, intelligent design, miracles, and all the rest of the irrational nonsense bandied about online? What about the vast array of deceptive marketing and stealth advertising that is core to Facebook’s business model?

As of this writing, Facebook doesn’t even have an option to report misleading advertisements.

What is Facebook to do?

Facebook’s strategy is vacuous, evanescent lip service: a public relations exercise that makes no substantive attempt to address a serious problem.

But the problem is not unassailable. The key to reducing inaccurate perceptions is to redesign technologies to encourage more accurate perception. Facebook can do this by developing a propaganda filter — something like a spam filter for lies.

Facebook may object to becoming an “arbiter of truth”. But coming from a company that censors historic photos and comedians calling for social justice, this sounds disingenuous.

Nonetheless, Facebook has a point. To avoid accusations of bias, it should not create the propaganda filter itself. It should simply fund researchers in artificial intelligence, software engineering, journalism and design to develop an open-source propaganda filter that anyone can use.

Why should Facebook pay? Because it profits from spreading propaganda, that’s why.

Sure, people will try to game the filter, but it will still work. Spam is frequently riddled with typos, grammatical errors and circumlocution not only because it’s often written by non-native English speakers but also because the weird writing is necessary to bypass spam filters.

If the propaganda filter has a similar effect, weird writing will make the fake news that slips through more obvious. Better yet, an effective propaganda filter would actively encourage journalistic best practices such as citing primary sources.
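
To make the idea concrete, here is a minimal sketch of what the core of such a filter could look like: a spam-filter-style text classifier trained on articles labelled by independent fact checkers. The training examples, labels and threshold below are invented for illustration; this is a sketch of the technique, not anyone’s production system.

```python
# A minimal sketch of a "propaganda filter", in the spirit of a spam filter.
# The training data here is invented; a real system would need a large corpus
# of articles labelled by independent fact checkers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = fabricated/propaganda, 0 = legitimate.
texts = [
    "SHOCKING: politician runs secret child sex ring, media silent!",
    "You won't BELIEVE what this terrorist attack cover-up reveals",
    "City council approves budget after public consultation, officials say",
    "Study published in peer-reviewed journal finds modest effect",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus naive Bayes: the same recipe early spam filters used.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
classifier.fit(texts, labels)

# Score a new article; anything above a chosen threshold would be flagged
# as disputed and routed to fact checkers rather than silently deleted.
score = classifier.predict_proba(["You won't believe this shocking cover-up"])[0][1]
print(f"propaganda score: {score:.2f}")
```

Naive Bayes over word frequencies is the recipe the first generation of spam filters used; a serious propaganda filter would add exactly the signals discussed above, such as whether an article cites primary sources.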

Developing such a tool won’t be easy. It could take years and several million dollars to refine. But Facebook made over US$8 billion last quarter, so Mark Zuckerberg can surely afford it.

Paul Ralph, Senior Lecturer in Computer Science, University of Auckland

Police around the world learn to fight global-scale cybercrime


Frank J. Cilluffo, George Washington University; Alec Nadeau, George Washington University, and Rob Wainwright, University of Exeter

From 2009 to 2016, a cybercrime network called Avalanche grew into one of the world’s most sophisticated criminal syndicates. It resembled an international conglomerate, staffed by corporate executives, advertising salespeople and customer service representatives.

Its business, though, was not standard international trade. Avalanche provided a hacker’s delight of a one-stop shop for all kinds of cybercrime to criminals without their own technical expertise but with the motivation and ingenuity to perpetrate a scam. At the height of its activity, the Avalanche group had hijacked hundreds of thousands of computer systems in homes and businesses around the world, using them to send more than a million criminally motivated emails per week.

Our study of Avalanche, and of the groundbreaking law enforcement effort that ultimately took it down in December 2016, gives us a look at how the cybercriminal underground will operate in the future, and how police around the world must cooperate to fight back.

Cybercrime at scale

Successful cybercriminal enterprises need strong and reliable technology, but what increasingly separates the big players from the smaller nuisances is business acumen. Underground markets, forums and message systems, often hosted on the deep web, have created a service-based economy of cybercrime.

Just as regular businesses can hire online services – buying Google products to handle their email, spreadsheets and document sharing, and hosting websites on Amazon with payments handled by PayPal – cybercriminals can do the same. Sometimes these criminals use legitimate service platforms like PayPal in addition to others specifically designed for illicit marketplaces.

And just as the legal cloud-computing giants aim to efficiently offer products of broad use to a wide customer base, criminal computing services do the same. They pursue technological capabilities that a wide range of customers want to use more easily. Today, with an internet connection and some currency (bitcoin preferred), almost anyone can buy and sell narcotics online, purchase hacking services or rent botnets to cripple competitors and spread money-making malware.

The Avalanche network excelled at this, selling technically advanced products to its customers while using sophisticated techniques to evade detection and identification as the source by law enforcement. Avalanche offered, in business terms, “cybercrime as a service,” supporting a broad digital underground economy. By building the tools and leaving to others the design and execution of innovative ways to use them, Avalanche and its criminal customers efficiently split the work of planning, executing and developing the technology for advanced cybercrime scams.

With Avalanche, renters – or the network’s operators themselves – could communicate with, and take control of, some or all of the hijacked computers to conduct a wide range of cyberattacks. The criminals could then, for example, knock websites offline for hours or longer. That in turn could let them extract ransom payments, disrupt online transactions to hurt a business’ bottom line or distract victims while accomplices employed stealthier methods to steal customer data or financial information. The Avalanche group also sold access to 20 unique types of malicious software. Criminal operations facilitated by Avalanche cost businesses, governments and individuals around the world hundreds of millions of dollars.

Low risk, high reward

To date, cybercrime has offered high profits – like the US$1 billion annual ransomware market – with low risk. Cybercriminals often use technical means to obscure their identities and locations, making it challenging for law enforcement to effectively pursue them.

That makes cybercrime very attractive to traditional criminals. With a lower technological bar, huge amounts of money, manpower and real-world connections have come flooding into the cybercrime ecosystem. For instance, in 2014, cybercriminals hacked into major financial firms to get information about specific companies’ stocks and to steal investors’ personal information. They first bought stock in certain companies, then sent false email advertisements to specific investors, with the goal of artificially inflating those companies’ stock prices. It worked: Stock prices went up, and the criminals sold their holdings, raking in profits they could use for their next scam.

In addition, the internet allows criminal operations to function across geographic boundaries and legal jurisdictions in ways that are simply impractical in the physical world. Criminals in the real world must be at a crime’s actual site and may leave physical evidence behind – like fingerprints on a bank vault or records of traveling to and from the place the crime occurred. In cyberspace, a criminal in Belarus can hack into a vulnerable server in Hungary to remotely direct distributed operations against victims in South America without ever setting foot below the Equator.

A path forward

All these factors present significant challenges for police, who must also contend with limited budgets and manpower with which to conduct complex investigations, the technical challenges of following sophisticated hackers through the internet and the need to work with officials in other countries.

The multinational cooperation involved in successfully taking down the Avalanche network can be a model for future efforts in fighting digital crime. Coordinated by Europol, the European Union’s police agency, the plan takes inspiration from the sharing economy.

Uber owns very few cars and Airbnb has no property; they help connect drivers and homeowners with customers who need transportation or lodging. Similarly, while Europol has no direct policing powers or unique intelligence, it can connect law enforcement agencies across the continent. This “uberization” of law enforcement was crucial to synchronizing the coordinated action that seized, blocked and redirected traffic for more than 800,000 domains across 30 countries.

Through those partnerships, various national police agencies were able to collect pieces of information from their own jurisdictions and send it, through Europol, to German authorities, who took the lead on the investigation. Analyzing all of that collected data revealed the identities of the suspects and untangled the group’s complex network of servers and software. The nonprofit Shadowserver Foundation and others assisted with the actual takedown of the server infrastructure, while anti-virus companies helped victims clean up their computers.

Using the network against the criminals

Police are increasingly learning – often from private sector experts – how to detect and stop criminals’ online activities. Avalanche’s complex technological setup lent itself to a technique called “sinkholing,” in which malicious internet traffic is sent into the electronic equivalent of a bottomless pit. When a hijacked computer tried to contact its controller, the police-run sinkhole captured that message and prevented it from reaching the actual central controller. Without control, the infected computer couldn’t do anything nefarious.
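
As a rough illustration of the mechanics (not the actual Avalanche operation), the toy Python server below plays the role of a police-run sinkhole. The port and message handling are assumptions; the point is that redirected bot traffic gets logged and answered with nothing.

```python
# Toy sinkhole endpoint: once seized domains are pointed here, bots'
# check-in traffic terminates in this "bottomless pit" instead of
# reaching the criminals' command-and-control server.
import socketserver
import datetime

class SinkholeHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)  # read the bot's check-in message
        # Log the infected host so victims can later be notified and cleaned up.
        print(f"{datetime.datetime.utcnow().isoformat()} "
              f"check-in from {self.client_address[0]}: {data[:64]!r}")
        # Crucially, send no commands back: without instructions from a
        # controller, the hijacked computer can't do anything nefarious.

if __name__ == "__main__":
    # Port 8080 is an arbitrary stand-in for whatever port the malware uses.
    with socketserver.TCPServer(("0.0.0.0", 8080), SinkholeHandler) as server:
        server.serve_forever()
```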

However, interrupting the technological systems isn’t enough, unless police are able to stop the criminals too. Three times since 2010, police tried to take down the Kelihos botnet. But each time the person behind it escaped and was able to resume criminal activities using more resilient infrastructure. In early April, however, the FBI was able to arrest Peter Levashov, allegedly its longtime operator, while on a family vacation in Spain.

The effort to take down Avalanche also resulted in the arrests of five people who allegedly ran the organization. Their removal from action likely led to a temporary disruption in the broader global cybercrime environment. It forced the criminals who were Avalanche’s customers to stop and regroup, and may offer police additional intelligence, depending on what investigators can convince the people arrested to reveal.

The Avalanche network was just the beginning of the challenges law enforcement will face when it comes to combating international cybercrime. To keep their enterprises alive, the criminals will share their experiences and learn from the past. Police agencies around the world must do the same to keep up.

Frank J. Cilluffo, Director, Center for Cyber and Homeland Security, George Washington University; Alec Nadeau, Presidential Administrative Fellow, Center for Cyber and Homeland Security, George Washington University, and Rob Wainwright, Director of Europol; Honorary Fellow, Strategy and Security Institute, University of Exeter

Deep sea mining could help develop mass solar energy – is it worth the risk?


Jon Major, University of Liverpool

Scientists have just discovered massive amounts of a rare metal called tellurium, a key element in cutting-edge solar technology. As a solar expert who specialises in exactly this, I should be delighted. But here’s the catch: the deposit is found at the bottom of the sea, in an undisturbed part of the ocean.

People often have an idealised view of solar as the perfect clean energy source. Direct conversion of sunlight to electricity, no emissions, no oil spills or contamination, perfectly clean. This, however, overlooks the messy reality of how solar panels are produced.

While the energy produced is indeed clean, some of the materials required to generate that power are toxic or rare. In the case of one particular technology, cadmium telluride-based solar cells, the cadmium is toxic and the tellurium is hard to find.

Cadmium telluride is one of the second generation “thin-film” solar cell technologies. It’s far better at absorbing light than silicon, on which most solar power is currently based, and as a result its absorbing layer doesn’t need to be as thick. A layer of cadmium telluride just one thousandth of a millimetre thick will absorb around 90% of the light that hits it. It’s cheap and quick to set up, compared to silicon, and uses less material.
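
That figure is consistent with the Beer–Lambert law for absorption in a film of thickness d. As a rough back-of-envelope check (the absorption coefficient used here is an assumed, order-of-magnitude value for cadmium telluride above its bandgap):

```latex
A = 1 - e^{-\alpha d}, \qquad
\alpha \approx 2.3 \times 10^{4}\,\mathrm{cm}^{-1}, \quad
d = 1\,\mu\mathrm{m} = 10^{-4}\,\mathrm{cm}
\;\Rightarrow\; \alpha d \approx 2.3, \quad A \approx 1 - e^{-2.3} \approx 0.90
```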

As a result, it’s the first thin-film technology to effectively make the leap from the research laboratory to mass production. Cadmium telluride solar modules now account for around 5% of global installations and, depending on how you do the sums, can produce lower cost power than silicon solar.

Topaz Solar Farm in California is the world’s fourth largest. It uses cadmium telluride panels.
Sarah Swenty/USFWS, CC BY

But cadmium telluride’s Achilles heel is the tellurium itself, one of the rarest metals in the Earth’s crust. Serious questions must be asked about whether technology based on such a rare metal is worth pursuing on a massive scale.

There has always been a divide in opinion about this. The abundance data for tellurium suggests a real issue, but the counter-argument is that no-one has been actively looking for new reserves of the material. After all, platinum and gold are similarly rare, but demand for jewellery and catalytic converters (the primary use of platinum) means in practice we are able to find plenty.

The discovery of a massive new tellurium deposit in an underwater mountain in the Atlantic Ocean certainly supports the “it will turn up eventually” theory. And this is a particularly rich ore, according to the British scientists involved in the MarineE-Tech project which found it. While most tellurium is extracted as a by-product of copper mining and so is relatively low yield, their seabed samples contain concentrations 50,000 times higher than on land.

The submerged mountain, ‘Tropic Seamount’, lies off the coast of north-west Africa.
Google Earth

Extracting any of this will be formidably hard and very risky for the environment. The top of the mountain where the tellurium has been discovered is still a kilometre below the waves, and the nearest land is hundreds of miles away.

Even on dry land, mining is never a good thing for the environment. It can uproot communities, decimate forests and leave huge scars on the landscape. It often leads to groundwater contamination, despite whatever safeguards are put in place.

And on the seabed? Given the technical challenges and the pristine ecosystems involved, I think most people can intuitively guess at the type of devastation that deep-sea mining could cause. No wonder it has yet to be implemented anywhere, despite plans off the coast of Papua New Guinea and elsewhere. Indeed, there’s no suggestion that tellurium mining is liable to occur at this latest site any time soon.

Is deep sea mining worth the risk?

However, the mere presence of such resources – and of the wind turbines or electric car batteries that rely on scarce materials or risky industrial processes – raises an interesting question. These are useful low-carbon technologies, but do they also have an obligation to be environmentally ethical?

There is often the perception that everyone working in renewable energy is a lovely tree-hugging, sandal-wearing leftie, but this isn’t the case. After all, this is now a huge industry, one that is aiming to eventually supplant fossil fuels, and there are valid concerns over whether such expansion will be accompanied by a softening of regulations.

We know that solar power is ultimately a good thing, but do the ends always justify the means? Or, to put it more starkly: could we tolerate mass production of solar panels if it necessitated mining and drilling on a similar scale to the fossil fuels industry, along with the associated pitfalls?

Tolerable – as long as it’s for solar panels.
Peter Gudella / shutterstock

To my mind the answer is undoubtedly yes, we have little choice. After all, mass solar would still wipe out our carbon emissions, helping curb global warming and the associated apocalypse.

What’s reassuring is that, even as solar becomes a truly mature industry, it has started from a more noble and environmentally sound place. Cadmium telluride modules, for example, include a cost to cover recycling, while scarce resources such as tellurium can be recovered from panels at the end of their lifespan of 20 years or more (compare this with fossil fuels, where the materials that produce the power are irreparably lost in a bright flame and a cloud of carbon).

The impact of mining for solar panels will likely be minimal in comparison to the oil or coal industries, but it will not be zero. As renewable technology becomes more crucial, we perhaps need to start calibrating our expectations to account for this.

At some point mining operations in search of solar or wind materials will cause damage or else some industrial production process will go awry and cause contamination. This may be the Faustian pact we have to accept, as the established alternatives are far worse. Unfortunately nothing is perfect.

Jon Major, Research Fellow, Stephenson Institute for Renewable Energy, University of Liverpool

Why we don’t trust robots


Joffrey Becker, Collège de France

Robots raise all kinds of concerns. They could steal our jobs, as some experts think. And if artificial intelligence keeps advancing, they might even be tempted to enslave us, or to annihilate the whole of humanity.

Robots are strange creatures, and not only for these frequently invoked reasons. We have good cause to be a little worried about these machines.

An advertisement for Kuka robotics: can these machines really replace us?

Imagine that you are visiting the Quai Branly-Jacques Chirac, a museum in Paris dedicated to anthropology and ethnology. As you walk through the collection, your curiosity leads you to a certain piece. After a while, you begin to sense a familiar presence heading towards the same objet d’art that has caught your attention.

You move slowly, and as you turn your head a strange feeling seizes you because what you seem to distinguish, still blurry in your peripheral vision, is a not-quite-human figure. Anxiety takes over.

As your head turns, and your vision becomes sharper, this feeling gets stronger. You realise that this is a humanoid machine, a robot called Berenson. Named after the American art critic Bernard Berenson and designed by the roboticist Philippe Gaussier (Image and Signal Processing Lab) and the anthropologist Denis Vidal (Institut de recherche sur le développement), Berenson has been part of an experiment underway at the Quai Branly museum since 2012.

The strangeness of the encounter with Berenson leaves you suddenly frightened, and you step back, away from the machine.

The uncanny valley

This feeling has been explored in robotics since the 1970s, when Japanese researcher Professor Masahiro Mori proposed his “uncanny valley” theory. If a robot resembles us, he suggested, we are inclined to consider its presence in the same way as we would that of a human being.

But when the machine reveals its robot nature to us, we will feel discomfort. Enter what Mori dubbed “the uncanny valley”. The robot will then be regarded as something of a zombie.

Mori’s theory cannot be systematically verified. But the feelings we experience when we meet an autonomous machine are certainly tinged with both incomprehension and curiosity.

The experiment conducted with Berenson at the Quai Branly, for example, shows that the robot’s presence can elicit paradoxical behaviour in museum goers. It underlines the deep ambiguity that characterises the relationship one can have with a robot, particularly the many communication problems they pose for humans.

If we are wary of such machines, it is mainly because it is not clear to us whether they have intentions – and, if so, what those intentions are and how to establish the minimal mutual understanding that is essential in any interaction. Thus, it is common to see visitors of the Quai Branly adopting social behaviour with Berenson, such as talking to it, or facing it, to find out how it perceives its environment.

In one way or another, visitors mainly try to establish contact. It appears that there is something strategic in considering the robot, even temporarily, as a person. And these social behaviours are not only observed when humans interact with machines that resemble us: it seems we make anthropomorphic projections whenever humans and robots meet.

Social interactions

An interdisciplinary team has recently been set up to explore the many dimensions revealed during these interactions. In particular, they are looking at the moments when, in our minds, we are ready to endow robots with intentions and intelligence.

This is how the PsyPhINe project was born. Based on interactions between humans and a robotic lamp, this project seeks to better understand people’s tendency to anthropomorphise machines.

After they get accustomed to the strangeness of the situation, it is not uncommon to observe that people are socially engaging with the lamp. During a game in which people are invited to play with this robot, they can be seen reacting to its movements and sometimes speaking to it, commenting on what it is doing or on the situation itself.

Mistrust often characterises the first moments of our relations with machines. Beyond their appearance, most people don’t know exactly what robots are made of, what their functions are and what their intentions might be. The robot world seems way too far from ours.

But this feeling quickly disappears. Assuming they have not already run away from the machine, people usually seek to define and maintain a frame for communication. Typically, they rely on existing communication habits, such as those used when talking to pets, for example, or with any living being whose world is to some degree different from theirs.

Ultimately, it seems, we humans are as suspicious of our technologies as we are fascinated by the possibilities they open up.

Joffrey Becker, Anthropologue, Laboratoire d’anthropologie sociale, Collège de France

This article was originally published on The Conversation. Read the original article.

We could soon face a robot crimewave … the law needs to be ready


Christopher Markou, University of Cambridge

This is where we are at in 2017: sophisticated algorithms are both predicting and helping to solve crimes committed by humans; predicting the outcome of court cases and human rights trials; and helping with the work done by lawyers in those cases. By 2040, there is even a suggestion that sophisticated robots will be committing a good chunk of all the crime in the world. Just ask the toddler who was run over by a security robot at a California mall last year.

How do we make sense of all this? Should we be terrified? That’s generally unproductive. Should we shrug our shoulders as a society and get back to Netflix? Tempting, but no. Should we start making plans for how we deal with all of this? Absolutely.

Fear of Artificial Intelligence (AI) is a big theme. Technology can be a downright scary thing, particularly when it’s new, powerful and comes with lots of question marks. But films like Terminator and shows like Westworld are more than just entertainment; they are a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.

Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around – whether it’s a military drone with a full payload, a law enforcement robot detonated to kill a dangerous suspect or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.

There’s a cynical saying in law that “where there’s blame, there’s a claim”. But who do we blame when a robot does wrong? This proposition can easily be dismissed as something too abstract to worry about. But let’s not forget that a robot was arrested (and released without charge) for buying drugs, and that Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash while his Tesla was driving in Autopilot mode.

While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright brothers made their first flights at Kitty Hawk. Time and time again, the law is presented with these novel challenges. And despite initial overreaction, it got there in the end. Simply put: law evolves.

Robot guilt

The role of the law can be defined in many ways, but ultimately it is a system within society for stabilising people’s expectations. If you get mugged, you expect the mugger to be charged with a crime and punished accordingly.

But the law also has expectations of us; we must comply with it to the fullest extent our consciences allow. As humans we can generally do that. We have the capacity to decide whether to speed or obey the speed limit – and so humans are considered by the law to be “legal persons”.

To varying extents, companies are endowed with legal personhood, too. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.

The problem arises when the machines themselves can make decisions of their own accord. As impressive as intelligent assistants such as Alexa, Siri and Cortana are, they fall far short of the threshold for legal personhood. But what happens when their more advanced descendants begin causing real harm?

A guilty AI mind?

The criminal law has two critical concepts. First, it contains the idea that liability for harm arises whenever harm has been or is likely to be caused by a certain act or omission.

Second, criminal law requires that an accused is culpable for their actions. This is known as a “guilty mind” or mens rea. The idea behind mens rea is to ensure that the accused both completed the action of assaulting someone and had the intention of harming them, or knew harm was a likely consequence of their action.

Blind justice for an AI.
Shutterstock

So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the “guilty mind” of a non-human? Can this be done by referring to and adapting existing legal principles?

Take driverless cars. Cars drive on roads and there are regulatory frameworks in place to assure that there is a human behind the wheel (at least to some extent). However, once fully autonomous cars arrive there will need to be extensive adjustments to laws and regulations that account for the new types of interactions that will happen between human and machine on the road.

As AI technology evolves, it will eventually reach a state of sophistication that will allow it to bypass human control. As the bypassing of human control becomes more widespread, then the questions about harm, risk, fault and punishment will become more important. Film, television and literature may dwell on the most extreme examples of “robots gone awry” but the legal realities should not be left to Hollywood.

So can robots commit crime? In short: yes. If a robot kills someone, then it has committed the guilty act (actus reus) – but technically that is only half a crime, as it would be far harder to determine mens rea. How do we know the robot intended to do what it did?

For now, we are nowhere near the level of building a fully sentient or “conscious” humanoid robot that looks, acts, talks, and thinks like us humans. But even a few short hops in AI research could produce an autonomous machine that could unleash all manner of legal mischief. Financial and discriminatory algorithmic mischief already abounds.

Play along with me: imagine that a Terminator-calibre AI exists and that it commits a crime (let’s say murder). The task then is not determining whether it in fact murdered someone, but the extent to which that act satisfies the principle of mens rea.

But what would we need to prove the existence of mens rea? Could we simply cross-examine the AI like we do a human defendant? Maybe, but we would need to go a bit deeper than that and examine the code that made the machine “tick”.

And what would “intent” look like in a machine mind? How would we go about proving that an autonomous machine was justified in killing a human in self-defense, or establishing the extent of its premeditation?

Let’s go even further. After all, we’re not only talking about violent crimes. Imagine a system that could randomly purchase things on the internet using your credit card – and it decided to buy contraband. This isn’t fiction; it has happened. Two London-based artists created a bot that purchased random items off the dark web. And what did it buy? Fake jeans, a baseball cap with a spy camera, a stash can, some Nikes, 200 cigarettes, a set of fire-brigade master keys, a counterfeit Louis Vuitton bag and ten ecstasy pills. Should these artists be liable for what the bot they created bought?

Maybe. But what if the bot “decided” to make the purchases itself?

Robo-jails?

Even if you solve these legal issues, you are still left with the question of punishment. What’s a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Unless, of course, it was programmed to “reflect” on its wrongdoing and find a way to rewrite its own code while safely ensconced at Her Majesty’s leisure. And what would building “remorse” into machines say about us as their builders?

Would robot wardens patrol robot jails?
Shutterstock

What we are really talking about when we talk about whether or not robots can commit crimes is “emergence” – where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.

AI has already helped with emergent concepts in medicine, and we are learning things about the universe with AI systems that even an army of Stephen Hawkings might not reveal.

The hope for AI is that in trying to capture this safe and beneficial emergent behaviour, we can find a parallel solution for ensuring it does not manifest itself in illegal, unethical, or downright dangerous ways.

At present, however, we are systematically incapable of guaranteeing human rights on a global scale, so I can’t help but wonder how ready we are for the prospect of robot crime given that we already struggle mightily to contain that done by humans.

Christopher Markou, PhD Candidate, Faculty of Law, University of Cambridge

This article was originally published on The Conversation. Read the original article.

How to make an Internet of Intelligent Things work for Africa


Martin Hall, University of Cape Town

Late in 2016 Senegal’s Banque Regionale De Marches announced the launch of the eCFA Franc; a cryptocurrency for the countries of the West African Monetary Union – Senegal, Cote d’Ivoire, Benin, Burkina Faso, Mali, Niger, Togo and Guinea-Bissau. This and similar innovations mark the coming of age of a new generation of applications – an Internet of Intelligent Things – that could provide a new infrastructure for economic development across Africa.

The Internet of Things is a network of physical devices, vehicles, buildings and other items. They are equipped with electronics, software, sensors and network connectivity so they can collect and exchange data. There’s wide enthusiasm about spectacular innovations such as intelligent refrigerators and driverless cars. But a quieter revolution is underway in everyday systems and facilities, such as financial services.

There are particular possibilities here for Africa. The potential for the continent’s economic growth is well established. There’s also an abundance of opportunity for digital innovation. This was clear from a recent continent-wide entrepreneurship competition organised by the University of Cape Town’s Graduate School of Business.

More broadly, the new Internet of Things has the potential to compensate for Africa’s legacies of underdevelopment. The key here is the development of the blockchain from a fringe concept into a mainstream digital innovation.

The blockchain and Africa

The blockchain, mostly known as the technology that underpins the digital currency Bitcoin, is an almost incorruptible digital ledger of transactions, agreements and contracts that is distributed across thousands of computers, worldwide.

It has the potential to be both foundation and springboard for a new developmental infrastructure.
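
The property doing the work in that “almost incorruptible” description is tamper evidence. The minimal Python sketch below (a toy, not any production blockchain) shows the core mechanism: each block commits to the hash of the previous block, so rewriting any past transaction breaks every link after it.

```python
# Minimal hash-chained ledger: each block commits to the previous block's
# hash, so tampering with any past record breaks the whole chain.
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain):
    # Recompute every link; any edit to an earlier block shows up here.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"from": "alice", "to": "bob", "amount": 10})
append_block(chain, {"from": "bob", "to": "carol", "amount": 4})
print(verify(chain))                  # True
chain[0]["record"]["amount"] = 1000   # an attacker rewrites history...
print(verify(chain))                  # False: the tampering is detected
```

A real blockchain adds distributed consensus across thousands of machines, so that no single party can quietly rebuild the chain after tampering with it.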

New blockchain platforms such as Ethereum are supporting the development of distributed applications. These “DApps” can provide accessible ways to use the blockchain. They act like “autonomous agents” – little brains that receive and process information, make decisions and take actions. These new capabilities will have widespread implications when linked to cryptocurrencies through “smart contracts” that are also securely recorded in the blockchain.

DApps provide a practical and affordable means of making Things intelligent and able to interact directly with other Things. They can be programmed to take data-informed actions without human intervention.

These innovations will have particular benefits across Africa. Economic growth is underpinned and enabled by appropriate financial services. Early internet-based innovations such as Kenya’s M-PESA have clearly demonstrated the appetite for accessible, internet-based financial services. But many small and medium businesses are still restricted. Their owners usually can’t access standard loan financing. Banks will not extend credit facilities without traditional title deeds to land and buildings, or a conventional payslip.

Don and Alex Tapscott have shown in their recent book that the new blockchain can be “the ledger of everything”. A house can become an intelligent entity registered on a secure, distributed database once it’s tagged with a geospatial reference and sensors that monitor its continuing existence.

The owner of the asset can, through an Ethereum-based smart contract, secure a loan to expand a start-up enterprise. Intermediary arrangements become unnecessary. Economist Hernando de Soto has suggested this could create “a revolution in property rights”.

Water and energy

Property and financing aren’t the only areas where the new Internet of Intelligent Things has the potential to compensate for Africa’s legacies of underdevelopment.

Economic growth also depends on affordable and reliable services like water and energy. Water is an increasingly scarce resource in many parts of Africa. This is particularly true in cities. Rapid population increases are making old precepts of urban planning redundant.

Technology can help. Autonomous agents positioned across all aspects of water reticulation systems can monitor supplies of potable, storm and waste water. These “little brains” can take appropriate actions to detect and report damage and leakage and close off supply lines. Smart devices can also monitor water quality to detect health hazards. They can regulate and charge for water consumption.
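
A hedged sketch of one of these “little brains” in code is below, assuming hypothetical flow meters at each end of a supply segment and a controllable valve; the sensor interface, threshold and units are all invented for illustration.

```python
# Toy autonomous agent for one segment of a water reticulation system.
# Sensor readings, thresholds and the valve/reporting interfaces are
# hypothetical; real deployments would talk to actual hardware.
FLOW_TOLERANCE = 0.10  # assume >10% loss between meters indicates a leak

def check_segment(inflow_lpm, outflow_lpm, valve, notify):
    """Compare metered inflow/outflow and act without human intervention."""
    if inflow_lpm <= 0:
        return
    loss = (inflow_lpm - outflow_lpm) / inflow_lpm
    if loss > FLOW_TOLERANCE:
        valve.close()  # close off the damaged supply line
        notify(f"Leak suspected: {loss:.0%} of flow unaccounted for")

class Valve:
    def close(self):
        print("valve closed")

check_segment(inflow_lpm=120.0, outflow_lpm=95.0, valve=Valve(), notify=print)
# -> valve closed
# -> Leak suspected: 21% of flow unaccounted for
```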

Similarly, for the supply of energy, smart devices are already being deployed across conventional and ageing power grids in other parts of the world. In Australia, for instance, intelligent monitors detect when an individual pole is in trouble. They then report the fault and call out a repair crew. They can also communicate with other poles to redirect the supply and preserve the grid’s integrity.

In parallel with conventional supply systems, new digital technologies can enable full integration with renewable sources of energy and the intelligent management of supply at the household level. The new blockchain is designed for secure peer-to-peer transactions combined with incorruptible contracts between multiple parties. Individual households can manage their own supply and demand to incorporate self-generated energy. A house equipped with a simple windmill and a roof made up of photovoltaic tiles could sell surplus power to a neighbour in need. They could also buy from another house to meet a shortfall.

Such microgrids are already in development. The combination of ubiquitous and affordable bandwidth and low-cost autonomous agents could bring affordable energy to communities that have never enjoyed reliable electricity supply.
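
A minimal sketch of the household-level matching such a microgrid implies is below. The household figures and the flat price are invented, and a real system would settle each trade as a blockchain-recorded smart contract rather than a print statement.

```python
# Toy peer-to-peer energy matching for a neighbourhood microgrid.
# Households with surplus solar/wind power sell to neighbours in deficit.
households = {
    "house_a": {"generated_kwh": 6.0, "consumed_kwh": 4.0},  # surplus
    "house_b": {"generated_kwh": 1.0, "consumed_kwh": 3.5},  # deficit
    "house_c": {"generated_kwh": 2.0, "consumed_kwh": 2.0},  # balanced
}
PRICE_PER_KWH = 0.10  # flat illustrative price; a real market would use bids

sellers = {h: d["generated_kwh"] - d["consumed_kwh"]
           for h, d in households.items() if d["generated_kwh"] > d["consumed_kwh"]}
buyers = {h: d["consumed_kwh"] - d["generated_kwh"]
          for h, d in households.items() if d["consumed_kwh"] > d["generated_kwh"]}

for buyer, need in buyers.items():
    for seller in list(sellers):
        traded = min(need, sellers[seller])
        if traded > 0:
            # In a deployed system this line would execute a smart contract.
            print(f"{seller} -> {buyer}: {traded:.1f} kWh "
                  f"(${traded * PRICE_PER_KWH:.2f})")
            sellers[seller] -= traded
            need -= traded
        if need <= 0:
            break
```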

A new infrastructure built up in this way could be a springboard for economic development – from small enterprises that would have the resources to take innovations to scale, to significant household efficiencies and increases in consumer purchasing power. As has been the pattern with previous digital technologies, costs of production will fall dramatically as the global market for intelligent things explodes. That which seems extraordinary today will be everyday tomorrow.

So what’s standing in the way?

Established interests

It’s not the technology that’s holding Africa back from embracing the Internet of Things. Rather, it’s the established interests in play. These include state enterprises and near-monopolies that are heavily invested in conventional systems, local patronage networks and conventional banks, and the failure of political vision.

What’s needed is effective public policy and business engagement to ensure that the potential of this next wave of digital innovation is realised. Government and civil society innovators need to be directing much of their attention here.

This is why the West African Monetary Union’s cryptocurrency initiative is encouraging. It’s a step towards the future that Don and Alex Tapscott envision; a move towards an Internet that’s driven by the falling costs of bargaining, policing, and enforcing social and commercial agreements.

In this new space, integrity, security, collaboration and the privacy of all transactions will be the name of the game. So too will the creation and distribution of value. And that’s great news for Africa.

Martin Hall, Emeritus Professor, MTN Solution Space Graduate School of Business, University of Cape Town

This article was originally published on The Conversation. Read the original article.

Merging our brains with machines won’t stop the rise of the robots


Michael Milford, Queensland University of Technology

Tesla chief executive and OpenAI founder Elon Musk suggested last week that humanity might stave off irrelevance from the rise of the machines by merging with the machines and becoming cyborgs.

However, current trends in software-only artificial intelligence and deep learning technology raise serious doubts about the plausibility of this claim, especially in the long term. This doubt is not only due to hardware limitations; it is also to do with the role the human brain would play in the match-up.

Musk’s thesis is straightforward: that sufficiently advanced interfaces between brain and computer will enable humans to massively augment their capabilities by being better able to leverage technologies such as machine learning and deep learning.

But the exchange goes both ways. Brain-machine interfaces may help the performance of machine learning algorithms by having humans “fill in the gaps” for tasks that the algorithms are currently bad at, like making nuanced contextual decisions.

The idea in itself is not new. J. C. R. Licklider and others speculated on the possibility and implications of “man-computer symbiosis” in the mid-20th century.

However, progress has been slow. One reason is the difficulty of developing hardware. “There is a reason they call it hardware – it is hard,” said Tony Fadell, creator of the iPod. And creating hardware that interfaces with organic systems is even harder.

Current technologies are primitive compared to the picture of brain-machine interfaces we’re sold in science fiction movies such as The Matrix.

Deep learning quirks

Assuming that the hardware challenge is eventually solved, there are bigger problems at hand. The past decade of incredible advances in deep learning research has revealed that there are some fundamental challenges to be overcome.

The first is simply that we still struggle to understand and characterise exactly how these complex neural network systems function.

We trust simple technology like a calculator because we know it will always do precisely what we want it to do. Errors are almost always a result of mistaken entry by the fallible human.

One vision of brain-machine augmentation would be to make us superhuman at arithmetic. So instead of pulling out a calculator or smartphone, we could think of the calculation and receive the answer instantaneously from the “assistive” machine.

Where things get tricky is if we were to try and plug into the more advanced functions offered by machine learning techniques such as deep learning.

Let’s say you work in a security role at an airport and have a brain-machine augmentation that automatically scans the thousands of faces you see each day and alerts you to possible security risks.

Most machine learning systems suffer from an infamous problem whereby a tiny change in the appearance of a person or object can cause the system to catastrophically misclassify what it thinks it is looking at. Change a picture of a person by less than 1%, and the machine system might suddenly think it is looking at a bicycle.

This image shows how you can fool AI image recognition by adding imperceptible noise to the image.
From Goodfellow et al, 2014
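
The image above illustrates the “fast gradient sign method” from Goodfellow et al. The sketch below applies the same attack to a toy logistic-regression classifier rather than a deep network (the weights, bias and input are all synthetic): every pixel moves by just 0.1%, yet the model’s confidence collapses.

```python
# Fast gradient sign method (FGSM) on a toy logistic-regression "image
# classifier". Weights, bias and input are invented; with a deep network
# the input gradient comes from backpropagation, but the attack is identical.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 10_000
x = rng.uniform(0.0, 1.0, n_pixels)  # a fake 100x100 "image", pixels in [0, 1]
w = rng.normal(size=n_pixels)        # stand-in for a trained model's weights
b = 4.0 - w @ x                      # bias chosen so the model starts confident

def predict(image):
    return 1.0 / (1.0 + np.exp(-(w @ image + b)))  # P(class = "person")

# Gradient of the loss (true label y=1) with respect to the input is (p-1)*w,
# so nudge each pixel by at most epsilon in the direction increasing the error.
epsilon = 0.001                      # 0.1% change per pixel: imperceptible
p = predict(x)
x_adv = np.clip(x + epsilon * np.sign((p - 1.0) * w), 0.0, 1.0)

print(f"clean:       P('person') = {predict(x):.3f}")     # ~0.982, confident
print(f"adversarial: P('person') = {predict(x_adv):.3f}") # collapses toward 0
```

The attack works because thousands of tiny per-pixel nudges all push the model’s decision in the same direction, and their effect accumulates.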

Terrorists or criminals might exploit the different vulnerabilities of a machine to bypass security checks, a problem that already exists in online security. Humans, although limited in their own way, might not be vulnerable to such exploits.

Despite their reputation as being unemotional, machine learning technologies also suffer from bias in the same way that humans do, and can even exhibit racist behaviour if fed appropriate data. This unpredictability has major implications for how a human might plug into – and more importantly, trust – a machine.

Google research scientist Ian Goodfellow shows how easy it is to fool a deep learning system.

Trust me, I’m a robot

Trust is also a two-way street. Human thought is a complex, highly dynamic activity. In this same security scenario, with a sufficiently advanced brain-machine interface, how will the machine know what human biases to ignore? After all, unconscious bias is a challenge everyone faces. What if the technology is helping you interview job candidates?

We can preview to some extent the issues of trust in a brain-machine interface by looking at how defence forces around the world are trying to address human-machine trust in an increasingly mixed human-autonomous systems battlefield.

Research into trusted autonomous systems deals with both humans trusting machines and machines trusting humans.

There is a parallel between a robot warrior making an ethical decision to ignore an unlawful order by a human and what must happen in a brain-machine interface: interpretation of the human’s thoughts by the machine, while filtering fleeting thoughts and deeper unconscious biases.

In defence scenarios, the logical role for a human brain is in checking that decisions are ethical. But how will this work when the human brain is plugged into a machine that can make inferences using data at a scale that no brain can comprehend?

In the long term, the issue is whether, and how, humans will need to be involved in processes that are increasingly determined by machines. Soon machines may make medical decisions no human team can possibly fathom. What role can and should the human brain play in this process?

In some cases, the combination of automation and human workers could increase jobs, but this effect is likely fleeting. Those same robots and automation systems will continue to improve, and will likely eliminate the jobs they created locally.

Likewise, while humans may initially play a “useful” role in brain-machine systems, as the technology continues to improve there may be less reason to include humans in the loop at all.

The idea of maintaining humanity’s relevance by integrating human brains with artificial brains is appealing. What remains to be seen is what contribution the human brain will make, especially as technology development outpaces human brain development by a million to one.

Michael Milford, Associate professor, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

The factories of the past are turning into the data centers of the future


We live in a data-driven world. From social media to smart cities to the internet of things, we now generate huge volumes of information about nearly every detail of life. This has revolutionized everything from business to government to the pursuit of romance.

We tend to focus our attention on what is new about the era of big data. But our digital present is in fact deeply connected to our industrial past.

In Chicago, where I teach and do research, I’ve been looking at the transformation of the city’s industrial building stock to serve the needs of the data industry. Buildings where workers once processed checks, baked bread and printed Sears catalogs now stream Netflix and host servers engaged in financial trading.

The buildings themselves are a kind of witness to how the U.S. economy has changed. By observing these changes in the landscape, we get a better sense of how data exist in the physical realm. We are also struck with new questions about what the rise of an information-based economy means for the physical, social and economic development of cities. The decline of industry can actually create conditions ripe for growth – but the benefits of that growth may not reach everyone in the city.

‘Factories of the 21st century’

Data centers have been described as the factories of the 21st century. These facilities contain servers that store and process digital information. When we hear about data being stored “in the cloud,” those data are really being stored in a data center.

Servers inside a data center.
By Global Access Point, via Wikimedia Commons

But contrary to the ephemeral-sounding term “cloud,” data centers are actually incredibly energy- and capital-intensive infrastructure. Servers use tremendous amounts of electricity and generate large amounts of heat, which in turn requires extensive investments in cooling systems in order to keep servers operating. These facilities also need to be connected to fiber optic cables, which deliver information via beams of light. In most places, these cables – the “highway” part of the “information superhighway” – are buried along the rights of way provided by existing road and railroad networks. In other words, the pathways of the internet are shaped by previous rounds of development.

The interior of the Schulze Baking Company facility in 2016 showing some of the utility connections.
Graham Pickren

An economy based on information, just like one based on manufacturing, still requires a human-made environment. For the data industry, taking advantage of the places that have the power capacity, the building stock, the fiber optic connectivity and the proximity to both customers and other data centers is often central to their real estate strategy.

From analog to digital

As this real estate strategy plays out, what is particularly fascinating is the way in which infrastructure constructed to meet the needs of a different era is now being repurposed for the data sector.

In Chicago’s South Loop sits the former R.R. Donnelley & Sons printing factory. At one time, it was one of the largest printers in the U.S., producing everything from Bibles to Sears catalogs. Now, it is the Lakeside Technology Center, one of the largest data centers in the world and the second-largest consumer of electricity in the state of Illinois.

The eight-story Gothic-style building is well-suited to the needs of a massive data center. Its vertical shafts, formerly used to haul heavy stacks of printed material between floors, are now used to run fiber optic cabling through the building. (Those cables come in from the railroad spur outside.) Heavy floors built to withstand the weight of printing presses are now used to support rack upon rack of server equipment. What was once the pinnacle of the analog world is now a central node in global financial networks.

Photograph of printing press #D2, 1949. R.R. Donnelley & Sons Company.
R.R. Donnelley & Sons Company. Archive, Special Collections Research Center, University of Chicago Library

Just a few miles south of Lakeside Technology Center is the former home of Schulze Baking Company in the South Side neighborhood of Washington Park. Once famous for its butternut bread, the five-story terra cotta bakery is currently being renovated into the Midway Technology Center, a data center. Like the South Loop printing factory, the Schulze bakery contains features useful to the data industry. The building also has heavy load-bearing floors as well as louvered windows designed to dissipate the heat from bread ovens – or, in this case, servers.

It isn’t just the building itself that makes Schulze desirable, but the neighborhood as a whole. A developer working on the Schulze redevelopment project told me that, because the surrounding area had been deindustrialized, and because a large public housing project had closed down in recent decades, the nearby power substations actually had plenty of idle capacity to meet the data center’s needs.

Examples of this “adaptive reuse” of industrial building stock abound. The former Chicago Sun-Times printing facility became a 320,000-square-foot data center last year. A Motorola office building and former television factory in the suburbs has been bought by one of the large data center companies. Even the once mighty retailer Sears, which has one of the largest real estate portfolios in the country, has created a real estate division tasked with spinning off some of its stores into data center properties. Beyond Chicago, Amazon is in the process of turning an old biscuit factory in Ireland into a data center, and in New York, some of the world’s most significant data center properties are housed in the former homes of Western Union and the Port Authority, two giants of 20th-century modernity.

What we see here in these stories is the seesaw of urban development. As certain industries and regions decline, some of the infrastructure retains its value. That provides an opportunity for savvy investors to seize upon.

Schulze Baking Company advertisement.
University of Illinois Chicago Digital Collections
The Schulze Baking Company operated on Chicago’s South Side from 1914–2004. The historic building is being turned into a data center.
Graham Pickren

Data centers and public policy

What broader lessons can be drawn about the way our data-rich lives will transform our physical and social landscape?

First, there is the issue of labor and employment. Data centers generate tax revenues but don’t employ many people, so their relocation to places like Washington Park is unlikely to change the economic fortunes of local residents. If the data center is the “factory of the 21st century,” what will that mean for the working class?

Data centers are crucial to innovations such as machine learning, which threatens to automate many routine tasks in both high- and low-skilled jobs. By one measure, as much as 47 percent of U.S. employment is at risk of being automated. Both low- and high-skilled jobs that are nonroutine – in other words, difficult to automate – are growing in the U.S. Some of these jobs will be supported by data centers, freeing up workers from repetitive tasks so that they can focus on other skills.

On the flip side, employment in the manufacturing sector – which has provided so many people with a ladder into the middle class – is in decline. The data center embodies that economic shift, as data management enables the displacement of workers through offshoring and automation.

So buried within the question of what these facilities will mean for working people is the larger issue of the relationship between automation and the polarization of incomes. To paraphrase Joseph Schumpeter, data centers seem likely to both create and destroy.

Bakers working the conveyor belt at Schulze Baking Company, circa 1920. The new data center will employ significantly fewer workers than the bakery.
By Fred A. Behmer for the Jeffrey Manufacturing Company, via Wikimedia Commons

Second, data centers present a public policy dilemma for local and state governments. Public officials around the world are eager to grease the skids of data center development.

Generous tax incentives are often used to entice new data centers. As the Associated Press reported last year, state governments extended nearly US$1.5 billion in tax incentives to hundreds of data center projects across the U.S. during the past decade. For example, an Oregon law targeting data centers provides property tax relief on facilities, equipment and employment for up to five years in exchange for the creation of a single job. The costs and benefits of these kinds of subsidies have not been systematically studied.

More philosophically, as a geographer, I’ve been influenced by people like David Harvey and Neil Smith, who have theorized capitalist development as inherently uneven across time and space. Boom and bust, growth and decline: They are two sides of the same coin.

The implication here is that the landscapes we construct to serve the needs of today are always temporary. The smells of butternut bread defined part of everyday life in Washington Park for nearly a century. Today, data is in the ascendancy, constructing landscapes suitable to its needs. But those landscapes will also be impermanent, and predicting what comes next is difficult. Whatever the future holds for cities, we can be sure that what comes next will be a reflection of what came before it.

The Conversation

Graham Pickren, Assistant Professor of Sustainability Studies, Roosevelt University

This article was originally published on The Conversation. Read the original article.

How Nootropics Have Changed The Trading Landscape


If you’re a trader who isn’t biohacking, you could be missing the trade of the century.

Traders who aren’t using nootropics daily are behind the millions of other traders worldwide who are. I’m not saying this to make you feel bad; it’s just a fact that you can’t perform your best trades on low energy, and it means you don’t have the competitive edge that so many other traders have. The good news is that the power to take charge of your trading is in your hands, and you can start by familiarizing yourself with nootropic “smart drugs”.

Nootropics have come to the forefront of the trading market and the tech arena in recent years due to their beneficial effects on cognitive function. And in the world of business and finance, outsmarting each other and the market is what it’s all about. This can be especially true when trading due to the massive impact a whale (or heavy investor) can have on a market. Understanding how to read ever-changing markets and charts is crucial and, if you’re struggling just to stay awake, you’re not going to make the best trades. Even if you’re wide awake, most days you have a limit as to how much brain power you can put into your work. That’s why nootropics are rapidly changing how trading is done in exchanges across the globe by giving traders “limitless” brain power to carry out the best trading strategies.

The Competitive Edge

Since nootropics rose to prominence in recent years, they’ve sparked a particular frenzy in the world of online traders. Much as trading bots made trading faster and more automated, biohacking with nootropics has made trading strategies more considered.

Traders who use nootropics report higher success rates and returns of around 30% ROI – that is, better trades without falling victim to Fear, Uncertainty and Doubt (FUD) or Fear Of Missing Out (FOMO), two common problems even veteran traders face. That’s why, if you aren’t taking nootropics, you’re simply that much further behind everyone who is.

Making profitable trades isn’t just about whether you make money on your trade or not, it’s also about how much money you make in profit. Someone could have a 70% success rate on their trades, but only make a 3% ROI. This means they always need to invest large sums in order for that 3% to be worth the time, energy, fees and stress involved in the trade.

Now, what if you were making closer to a 33% return on your investment? You could make more money faster and thus reach your financial goals that much sooner. Not a bad deal for taking a couple of pills every day.
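
To put those percentages in perspective, here’s a rough back-of-the-envelope comparison in Python. The bankroll, trade count and compounding assumption are invented purely for illustration; nothing here is trading advice or a figure from any study.

```python
# Illustrative only: compare a 3% ROI per trade with a 33% ROI per trade,
# compounding the same hypothetical bankroll over ten winning trades.
bankroll = 10_000  # hypothetical starting capital in dollars

for roi in (0.03, 0.33):
    capital = bankroll
    for _ in range(10):        # ten consecutive trades, profits reinvested
        capital *= 1 + roi
    print(f"{roi:.0%} ROI per trade -> ${capital:,.0f} after 10 trades")

# Roughly $13,400 at 3% per trade versus roughly $173,000 at 33%.
```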

What Are Nootropics?

If you don’t know what nootropics are, all you need to know is that they are time-tested, safe chemicals that can help your brain work at peak performance, often called “smart drugs” or “brain pills”. Best of all, these smart drugs are non-prescription and most of them can be bought over the counter throughout the U.S., Canada, Europe and Australia. A basic definition of a nootropic would be a “cognitive enhancer”: something that boosts your memory, motivation, concentration and even mood.

It’s obvious why traders recommend them now, isn’t it?

Nootropics aren’t just a fad; they are a competitive edge, the kind you couldn’t get away with in sports. They turn novice traders into trading monsters within days. They allow traders to work around the clock with virtually no side effects. And they help traders find the best signals without falling victim to trading traps like buying too late or selling too early.

And that’s why traders around the world are using nootropic products to biohack their way to riches.

Take Anthony B. from Manhattan, New York, for example. When he first started taking the nootropic stack Nooflex, he said, his entire trading strategy changed, and he now makes a 48% return on most of his investments.

“Before I was into biohacking, I was constantly stressed out, tired and just generally miserable after every day of trading. I ended up almost closing all my open margin positions and moving to a cheap city to just live for a while and relax,” he says.

“But after taking nootropics, within a week or so I started noticing just how much better my trades were becoming. At first I thought it was a coincidence, but then I noticed that I hadn’t been stressed for a while and felt calm and focused almost every minute I was at work. I even forgot to drink coffee because I never felt like I needed it.”

When asked what his routine was, he replied, “I was taking noopept and pramiracetam with every meal, but later on I tried a few stacks until I found the one that worked the best, which was Nooflex.”

What’s a Nootropic Stack?

A nootropic “stack” is a combination of these drugs meant to give the best brain boost for your buck. Some nootropics are stimulants while others are depressants, and then there are some that simply help your body function better as well as your mind. Stacks have a mix of all of these, and there are some creative ones on the market today.

The important thing to remember about a stack is that it’s meant to increase the benefits of a daily supplement regimen without adding more pills. Many vitamin and herbal supplements are large pills that contain more than your body will actually use; you can replace some of that with cognitive enhancers and cut your daily dosages in half.

Another benefit to taking a stack is that the few benign side effects that some nootropics have can be offset with another nootropic or herb. That’s why Nooflex is quickly becoming the preferred nootropic stack of many people today.

Nooflex: A Mind-Body Stack

Traders endure more than just stress and anxiety on a daily basis. They also get fatigued and depressed more easily because they are working at peak capacity all day long. That’s why taking only one type of nootropic isn’t advisable: a stimulant alone could wear you out even more once it leaves your system (much like caffeine and sugar crashes), and a depressant could outright put you to sleep. Nooflex is the only stack on the market that protects not just your brain but your body as well.

What you get in every bottle of Nooflex is a well-known memory stimulant and cognitive enhancer, a choline source that helps bind these two chemicals to proteins in your digestive system, and a prodrug that is converted into a strong body stimulant during digestion. In addition, this stack contains omega-3 fatty acids, antioxidants and fiber, all of which help slow the aging process and promote a strong digestive system.

Simply put, this is the healthiest nootropic stack on the market, so if you’re looking to try nootropics but are wary of the health effects, Nooflex is your answer.

You can currently purchase Nooflex from their website at a 75% discount from the list price, along with free bottles during their flash sales if you catch them in time (they show up sporadically). And as brain pills go, I can tell you that most on the market aren’t cheap, so 75% off is probably the best deal you’re going to get.

What’s more, if you remain a customer for multiple orders, they’ll start sending you free samples of other products as a “thank you”, something most companies wouldn’t dream of doing. I guess they really like their customers! And it shows in their stellar customer support.

Check out Nooflex and other nootropic stacks at http://nooflex.com before their flash sale is over!

Lars Beniger
Lars is a freelance journalist, part-time activist, copywriter and technical writer residing in Manhattan, New York. For seven years, Lars has reported on current events, political sparring, technology and environmental issues.

Turning diamonds’ defects into long-term 3-D data storage


With the amount of data storage required for our daily lives growing and growing, and currently available technology being almost saturated, we’re in desperate need of a new method of data storage. The standard magnetic hard disk drive (HDD) – like what’s probably in your laptop computer – has reached its limit, holding a maximum of a few terabytes. Standard optical disk technologies, like compact disc (CD), digital video disc (DVD) and Blu-ray disc, are restricted by their two-dimensional nature – they just store data in one plane – and also by a physical law called the diffraction limit, based on the wavelength of light, that constrains our ability to focus light to a very small volume.
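
For reference, the textbook form of that limit – the Abbe criterion; the specific numbers below are ours for illustration, not the authors’ – ties the smallest focusable spot size d to the wavelength λ of the light and the numerical aperture NA of the optics:

    d ≈ λ / (2 × NA)

With green light (λ ≈ 532 nm) and a high-quality objective (NA ≈ 1.4), a focused spot can’t be much smaller than about 190 nm – enormous compared with a single atom.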

And then there’s the lifetime of the memory itself to consider. HDDs, as we’ve all experienced in our personal lives, may last only a few years before things start to behave strangely or just fail outright. DVDs and similar media are advertised as having a storage lifetime of hundreds of years. In practice this may be cut down to a few decades, assuming the disk is not rewritable. Rewritable disks degrade on each rewrite.

Without better solutions, we face financial and technological catastrophes as our current storage media reach their limits. How can we store large amounts of data in a way that’s secure for a long time and can be reused or recycled?

In our lab, we’re experimenting with a perhaps unexpected memory material you may even be wearing on your ring finger right now: diamond. On the atomic level, these crystals are extremely orderly – but sometimes defects arise. We’re exploiting these defects as a possible way to store information in three dimensions.

Focusing on tiny defects

One approach to improving data storage has been to continue in the direction of optical memory, but extend it to multiple dimensions. Instead of writing the data to a surface, write it to a volume; make your bits three-dimensional. The data are still limited by the physical inability to focus light to a very small space, but you now have access to an additional dimension in which to store the data. Some methods also polarize the light, giving you even more dimensions for data storage. However, most of these methods are not rewritable.

Here’s where the diamonds come in.

The orderly structure of a diamond, but with a vacancy and a nitrogen replacing two of the carbon atoms.
Zas2000

A diamond is supposed to be a pure, well-ordered array of carbon atoms. Under an electron microscope it usually looks like a neatly arranged three-dimensional lattice. But occasionally there is a break in the order and a carbon atom is missing. This is what is known as a vacancy. Even further tainting the diamond, sometimes a nitrogen atom will take the place of a carbon atom. When a vacancy and a nitrogen atom are next to each other, the composite defect is called a nitrogen vacancy, or NV, center. These types of defects are always present to some degree, even in natural diamonds. In large concentrations, NV centers can impart a characteristic red color to the diamond that contains them.

This defect is having a huge impact in physics and chemistry right now. Researchers have used it to detect the unique nuclear magnetic resonance signatures of single proteins and are probing it in a variety of cutting-edge quantum mechanical experiments.

Nitrogen vacancy centers have a tendency to trap electrons, but the electron can also be forced out of the defect by a laser pulse. For many researchers, the defects are interesting only when they’re holding on to electrons. So for them, the fact that the defects can release the electrons, too, is a problem.

But in our lab, we instead look at these nitrogen vacancy centers as a potential benefit. We think of each one as a nanoscopic “bit.” If the defect has an extra electron, the bit is a one. If it doesn’t have an extra electron, the bit is a zero. This electron yes/no, on/off, one/zero property opens the door for turning the NV center’s charge state into the basis for using diamonds as a long-term storage medium.

Starting from a blank ensemble of NV centers in a diamond (1), information can be written (2), erased (3), and rewritten (4).
Siddharth Dhomkar and Carlos A. Meriles, CC BY-ND

Turning the defect into a benefit

Previous experiments with this defect have demonstrated some properties that make diamond a good candidate for a memory platform.

First, researchers can selectively change the charge state of an individual defect so it either holds an electron or not. We’ve used a green laser pulse to assist in trapping an electron and a high-power red laser pulse to eject an electron from the defect. A low-power red laser pulse can help check if an electron is trapped or not. If left completely in the dark, the defects maintain their charged/discharged status virtually forever.
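
To make that write/erase/read cycle concrete, here is a toy model in Python. It is purely our illustration of the protocol described above: the class and method names are invented, and it idealizes away real complications such as imperfect charge initialization or read-induced ionization.

```python
# Toy model of using NV-center charge states as bits.
# Illustrative only; this is not laboratory control code.

class NVCenter:
    """One nitrogen-vacancy defect treated as a single bit."""

    def __init__(self):
        self.has_electron = False  # no trapped electron -> bit 0

    def green_pulse(self):
        """Green laser pulse assists in trapping an electron: write a 1."""
        self.has_electron = True

    def strong_red_pulse(self):
        """High-power red pulse ejects the electron: erase back to 0."""
        self.has_electron = False

    def weak_red_pulse(self):
        """Low-power red pulse reads the bit (ideally without changing it)."""
        return 1 if self.has_electron else 0


# Write the pattern 1, 0, 1, 1 into four defects, then read it back.
ensemble = [NVCenter() for _ in range(4)]
for nv, bit in zip(ensemble, [1, 0, 1, 1]):
    nv.green_pulse() if bit else nv.strong_red_pulse()

print([nv.weak_red_pulse() for nv in ensemble])  # -> [1, 0, 1, 1]
```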

The NV centers can encode data on various levels.
Siddharth Dhomkar and Carlos A. Meriles, CC BY-ND

Our method is still diffraction limited, but it is 3-D in the sense that we can charge and discharge the defects at any point inside the diamond. We also gain a sort of fourth dimension. Since the defects are so small and our laser is diffraction limited, we are technically charging and discharging many defects in a single pulse. By varying the duration of the laser pulse in a single region, we can control the number of charged NV centers and consequently encode multiple bits of information, as sketched below.
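
In that respect the scheme resembles multi-level-cell flash memory, which stores several bits per cell by distinguishing more than two charge levels. The sketch below is hypothetical: the four levels, their target charged fractions and the nearest-level decoding rule are invented for illustration, not taken from the experiments.

```python
# Hypothetical multi-level encoding: the fraction of charged NV centers
# within one diffraction-limited spot represents a two-bit symbol.
LEVELS = {0.00: "00", 0.33: "01", 0.66: "10", 1.00: "11"}

def target_fraction(symbol):
    """Charged fraction to aim for (via pulse duration) for a 2-bit symbol."""
    return {sym: frac for frac, sym in LEVELS.items()}[symbol]

def decode(measured_fraction):
    """Snap a (noisy) measured charged fraction to the nearest symbol."""
    nearest = min(LEVELS, key=lambda frac: abs(frac - measured_fraction))
    return LEVELS[nearest]

print(target_fraction("10"))  # 0.66: charge about two-thirds of the defects
print(decode(0.71))           # a noisy readout of 0.71 still decodes to "10"
```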

Though one could use natural diamonds for these applications, we use lab-grown diamonds. That way we can efficiently control the concentration of nitrogen vacancy centers in the diamond.

All these improvements add up to roughly a 100-fold enhancement in bit density relative to current DVD technology. That means we can encode all the information from a DVD into a diamond that takes up about one percent of the space.

Past just charge, to spin as well

If we could get beyond the diffraction limit of light, we could improve storage capacities even further. We have one novel proposal on this front.

A human cell, imaged on the right with a super-resolution microscope.
Dr. Muthugapatti Kandasamy, CC BY-NC-ND

Nitrogen vacancy centers have also been used to perform what is called super-resolution microscopy, which images things much smaller than the wavelength of light. However, since the super-resolution technique works on the same principle of charging and discharging the defect, it would unintentionally alter the pattern one wants to encode. We therefore can’t use it as-is for memory storage; we’d need some way to back up the already written data during each read or write step.

Here we propose the idea of what we call charge-to-spin conversion: we temporarily encode the charge state of the defect in the spin state of the defect’s host nitrogen nucleus. Spin is a fundamental property of any elementary particle; like charge, it is intrinsic, and it can be imagined as a very tiny magnet permanently attached to the particle.

While the charges are being adjusted to read or write information as desired, the previously written information is well protected in the nitrogen spin state. Once the charges have been set, the information can be converted back from the nitrogen spin to the charge state through another mechanism, which we call spin-to-charge conversion.

With these advanced protocols, the storage capacity of a diamond would surpass what existing technologies can achieve. This is just a beginning, but these initial results give us a potential way of storing huge amounts of data in a brand-new medium. We look forward to transforming this beautiful quirk of physics into a vastly useful technology.

The Conversation

Siddharth Dhomkar, Postdoctoral Associate in Physics, City College of New York and Jacob Henshaw, Teaching Assistant in Physics, City College of New York