Category Archives: Forensics

What if the FBI tried to crack an Android phone? We attacked one to find out


The Justice Department has managed to unlock an iPhone 5c used by the gunman Syed Rizwan Farook, who with his wife killed 14 people in San Bernardino, California, last December. The high-profile case has pitted federal law enforcement agencies against Apple, which fought a legal order to work around its passcode security feature to give law enforcement access to the phone’s data. The FBI said it relied on a third party to crack the phone’s encrypted data, raising questions about iPhone security and whether federal agencies should disclose their method.

But what if the device had been running Android? Would the same technical and legal drama have played out?

We are Android users and researchers, and the first thing we did when the FBI-Apple dispute hit popular media was read Android’s Full Disk Encryption documentation.

We attempted to replicate what the FBI had wanted to do on an Android phone and found some useful results. Beyond the fact that the Android ecosystem involves more companies, we discovered some technical differences, including a way to remotely update and therefore unlock encryption keys, something the FBI was not able to do for the iPhone 5c on its own.

The easy ways in

Data encryption on smartphones involves a key that the phone creates by combining 1) a user’s unlock code, if any (often a four- to six-digit passcode), and 2) a long, complicated number specific to the individual device being used. Attackers can try to crack either the key directly – which is very hard – or combinations of the passcode and device-specific number, which is hidden and roughly equally difficult to guess.
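As a rough sketch of that combination (the passcode, the KDF, and the device secret below are all illustrative; real phones use hardware-backed keys and vendor-specific derivation):

```python
import hashlib
import os

def derive_disk_key(passcode: str, device_key: bytes, salt: bytes) -> bytes:
    """Illustrative only: combine the user's passcode with a hidden,
    device-specific secret via a slow, salted KDF."""
    # PBKDF2 stands in here for whatever vendor-specific KDF a real
    # phone uses; the point is that both inputs are needed.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode() + device_key, salt, 100_000)

device_key = os.urandom(32)   # unique to the device, hidden from the user
salt = os.urandom(16)
key = derive_disk_key("123456", device_key, salt)
assert len(key) == 32         # a 256-bit encryption key
```

Without the device-specific secret, knowing the passcode alone is not enough to reproduce the key.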

Decoding this strong encryption can be very difficult. But sometimes getting access to encrypted data from a phone doesn’t involve any code-breaking at all. Here’s how:

  • A custom app could be installed on a target phone to extract information. In March 2011, Google remotely installed a program that cleaned up phones infected by malicious software. It is unclear if Android still allows this.
  • Many applications use Android’s Backup API. The information that is backed up, and thereby accessible from the backup site directly, depends on which applications are installed on the phone.
  • If the target data are stored on a removable SD card, it may be unencrypted. Only the most recent versions of Android allow the user to encrypt an entire removable SD card; not all apps encrypt data stored on an SD card.
  • Some phones have fingerprint readers, which can be fooled with an image of the phone owner’s fingerprint.
  • Some people have modified their phones’ operating systems to give them “root” privileges – access to the device’s data beyond what is allowed during normal operations – potentially weakening security.

But if these options are not available, code-breaking is the remaining way in. In what is called a “brute force” attack, a phone can be unlocked by trying every possible encryption key (i.e., all character combinations possible) until the right one is reached and the device (or data) unlocks.
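To make the idea concrete, here is a toy brute-force loop (the four-digit PIN, the stand-in KDF, and its round count are invented for illustration; real attacks face far larger keyspaces and far slower KDFs):

```python
import hashlib
import itertools
import os

def derive_key(passcode: str, device_key: bytes, salt: bytes) -> bytes:
    # A deliberately cheap stand-in KDF so the demo finishes quickly.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode() + device_key, salt, 1000)

device_key, salt = os.urandom(32), os.urandom(16)
target = derive_key("4831", device_key, salt)   # pretend this PIN is unknown

# Brute force: try every four-digit PIN until the derived key matches.
found = None
for guess in ("".join(d) for d in itertools.product("0123456789", repeat=4)):
    if derive_key(guess, device_key, salt) == target:
        found = guess
        break

assert found == "4831"
```

With only 10,000 candidates and a cheap KDF this finishes in seconds; slow KDFs and larger passcode spaces exist precisely to make this loop impractical.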

Starting the attack

A very abstract representation of the derivation of the encryption keys on Android.
William Enck and Adwait Nadkarni, CC BY-ND

There are two types of brute-force attacks: offline and online. In some ways an offline attack is easier – by copying the data off the device and onto a more powerful computer, specialized software and other techniques can be used to try all different passcode combinations.

But offline attacks can also be much harder, because they require either trying every single possible encryption key, or figuring out the user’s passcode and the device-specific key (the unique ID on Apple, and the hardware-bound key on newer versions of Android).

To try every potential solution to a fairly standard 128-bit AES key means trying all 100 undecillion (10³⁸) potential solutions – enough to take a supercomputer more than a billion billion years.

Guessing the passcode could be relatively quick: for a six-digit PIN with only numbers, that’s just a million options. If letters and special symbols like “$” and “#” are allowed, there would be more options, but still only in the hundreds of billions. However, guessing the device-specific key would likely be just as hard as guessing the encryption key.
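The arithmetic behind those figures can be checked directly:

```python
# Checking the keyspace arithmetic above.
aes_keys = 2 ** 128        # every possible 128-bit AES key
pin_6 = 10 ** 6            # six-digit numeric PIN
printable_6 = 95 ** 6      # six characters drawn from ~95 printable symbols

assert aes_keys > 10 ** 38                 # the "100 undecillion" order of magnitude
assert pin_6 == 1_000_000                  # "just a million options"
assert 10 ** 11 < printable_6 < 10 ** 12   # "hundreds of billions"

# Even a machine testing a billion keys per second cannot touch the AES keyspace:
years = aes_keys / 1e9 / (60 * 60 * 24 * 365)
assert years > 1e18                        # more than a billion billion years
```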

Considering an online attack

That leaves the online attack, which happens directly on the phone. Because the device-specific key is readily available to the operating system, the task reduces to the much smaller burden of trying only the potential passcodes.

However, the phone itself can be configured to resist online attacks. For example, the phone can insert a time delay between a failed passcode guess and allowing another attempt, or even delete the data after a certain number of failed attempts.
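A minimal sketch of those two countermeasures (the constants mirror the behaviors described in this article, not any vendor's actual code):

```python
import time

class LockScreen:
    """Toy model of online-attack defenses: a forced delay after every
    few failures, and a data wipe after too many."""

    def __init__(self, pin: str, delay_every: int = 5, delay_s: int = 30,
                 wipe_after: int = 30):
        self._pin = pin
        self.failures = 0
        self.wiped = False
        self.delay_every, self.delay_s, self.wipe_after = delay_every, delay_s, wipe_after

    def try_pin(self, guess: str) -> bool:
        if self.wiped:
            raise RuntimeError("device wiped")
        if guess == self._pin:
            return True
        self.failures += 1
        if self.failures >= self.wipe_after:
            self.wiped = True            # "factory reset": all user data gone
        elif self.failures % self.delay_every == 0:
            time.sleep(self.delay_s)     # forced pause before the next guess
        return False

screen = LockScreen("7391", delay_s=0)   # zero delay only to keep the demo fast
assert not screen.try_pin("0000")
assert screen.try_pin("7391")
```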

Apple’s iOS has both of these capabilities, automatically introducing increasingly long delays after each failure, and, at a user’s option, wiping the device after 10 passcode failures.

Attacking an Android phone

What happens when one tries to crack into a locked Android phone? Different manufacturers set up their Android devices differently; Nexus phones run Google’s standard Android configuration. We used a Nexus 4 device running stock Android 5.1.1 with full disk encryption enabled.

Android adds 30-second delays after every five failed attempts; snapshot of the 40th attempt.
William Enck and Adwait Nadkarni, CC BY-ND

We started with a phone that was already running but had a locked screen. Android allows PINs, passwords and pattern-based locking, in which a user must connect a series of dots in the correct sequence to unlock the phone; we conducted this test with each type. We had manually assigned the actual passcode on the phone, but our unlocking attempts were randomly generated.

After five failed passcode attempts, Android imposed a 30-second delay before allowing another try. Unlike the iPhone, the delays did not get longer with subsequent failures; over 40 attempts, we encountered only a 30-second delay after every five failures. The phone kept count of how many successive attempts had failed, but did not wipe the data. (Android phones from other manufacturers may insert increasing delays similar to iOS.)

These delays impose a significant time penalty on an attacker. Brute-forcing a six-digit PIN (one million combinations) could incur a worst-case delay of just more than 69 days. If the passcode were six characters, even using only lowercase letters, the worst-case delay would be more than 58 years.
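Those worst-case figures follow directly from the 30-seconds-per-five-failures policy we observed:

```python
# Worst-case time an attacker spends in forced delays, given the observed
# Android 5.1.1 policy of a 30-second pause after every five failed attempts.
def worst_case_delay_days(keyspace: int, delay_every: int = 5, delay_s: int = 30) -> float:
    return keyspace // delay_every * delay_s / 86_400   # 86,400 seconds per day

assert round(worst_case_delay_days(10 ** 6)) == 69       # six-digit PIN: ~69 days
assert worst_case_delay_days(26 ** 6) / 365 > 58         # six lowercase letters: ~58 years
```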

When we repeated the attack on a phone that had been turned off and was just starting up, we were asked to reboot the device after 10 failed attempts. After 20 failed attempts and two reboots, Android started a countdown of the failed attempts that would trigger a device wipe. We continued our attack, and at the 30th attempt – as warned on the screen and in the Android documentation – the device performed a “factory reset,” wiping all user data.

Just one attempt remaining before the device wipes its data.
William Enck and Adwait Nadkarni, CC BY-ND

In contrast to offline attacks, there is a difference between Android and iOS for online brute force attacks. In iOS, both the lock screen and boot process can wipe the user data after a fixed number of failed attempts, but only if the user explicitly enables this. In Android, the boot process always wipes the user data after a fixed number of failed attempts. However, our Nexus 4 device did not allow us to set a limit for lock screen failures. That said, both Android and iOS have options for remote management, which, if enabled, can wipe data after a certain number of failed attempts.

Using special tools

The iPhone 5c in the San Bernardino case is owned by the employer of one of the shooters, and has mobile device management (MDM) software installed that lets the company track it and perform other functions on the phone by remote control. Such an MDM app is usually installed as a “Device Administrator” application on an Android phone, and set up using the “Apple Configurator” tool for iOS.

Our test MDM successfully resets the password. Then, the scrypt key derivation function (KDF) is used to generate the new key encryption key (KEK).
William Enck and Adwait Nadkarni, CC BY-ND

We built our own MDM application for our Android phone, and verified that the passcode can be reset without the user’s explicit consent; this also updated the phone’s encryption keys. We could then use the new passcode to unlock the phone from the lock screen and at boot time. (For this attack to work remotely, the phone must be on and have Internet connectivity, and the MDM application must already be programmed to reset the passcode on command from a remote MDM server.)
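The re-keying step named in the caption above can be sketched like this (the scrypt cost parameters and the passcode are illustrative, not Android's actual values or key-wrapping details):

```python
import hashlib
import os

def rederive_kek(new_passcode: str, salt: bytes) -> bytes:
    """Derive a key encryption key (KEK) from a freshly reset passcode
    with scrypt, echoing the KDF named in the caption above."""
    # Cost parameters (n, r, p) are illustrative only.
    return hashlib.scrypt(new_passcode.encode(), salt=salt,
                          n=2 ** 14, r=8, p=1, dklen=32)

salt = os.urandom(16)
kek = rederive_kek("0000", salt)   # the disk key itself never changes;
assert len(kek) == 32              # it is simply re-wrapped under the new KEK
```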

Figuring out where to get additional help

If an attacker needed help from a phone manufacturer or software company, Android presents a more diverse landscape.

Generally, operating system software is signed with a digital code that proves it is genuine, and which the phone requires before actually installing it. Only the company with the correct digital code can create an update to the operating system software – which might include a “back door” or other entry point for an attacker who had secured the company’s assistance. For any iPhone, that’s Apple. But many companies build and sell Android phones.

Google, the primary developer of the Android operating system, signs the updates for its flagship Nexus devices. Samsung signs for its devices. Cellular carriers (such as AT&T or Verizon) may also sign. And many users install a custom version of Android (such as Cyanogenmod). The company or companies that sign the software would be the ones the FBI needed to persuade – or compel – to write software allowing a way in.

Comparing iOS and Android

Overall, devices running the most recent versions of iOS and Android are comparably protected against offline attacks, when configured correctly by both the phone manufacturer and the end user. Older versions may be more vulnerable; one system could be cracked in less than 10 seconds. Additionally, configuration and software flaws introduced by phone manufacturers may compromise the security of both Android and iOS devices.

But we found differences for online attacks, based on user and remote management configuration: Android has a more secure default for online attacks at start-up, but our Nexus 4 did not allow the user to set a maximum number of failed attempts from the lock screen (other devices may vary). Devices running iOS have both of these capabilities, but a user must enable them manually in advance.

Android security may also be weakened by remote control software, depending on the software used. Though the FBI was unable to gain access to the iPhone 5c by resetting the password this way, we were successful with a similar attack on our Android device.

The Conversation

William Enck, Assistant Professor of Computer Science, North Carolina State University and Adwait Nadkarni, Ph.D. Student of Computer Science, North Carolina State University

This article was originally published on The Conversation. Read the original article.

Revisiting the Death of Michael Hastings


Could emerging tech present new forensic evidence in the suspicious early demise of controversial Rolling Stone reporter Michael Hastings? Cheaper hardware and open-source code could shed new light on a possible murder as the possibility of remotely hacking today’s cars gains traction.

Hacking your car might already be possible. This tweet by NYT tech writer Nick Bilton is a great example:

A few weeks back, I wrote a short piece about CANtact, a $60 device that lets you interface with a car’s onboard computer through your laptop’s USB port. Eric Evenchick presented CANtact at the Black Hat Asia 2015 security conference in Singapore. The onboard network that links a vehicle’s computers is called the CAN bus, for Controller Area Network. Evenchick hopes his device’s affordability will spur programmers to reverse engineer the firmware and proprietary protocols various CAN systems use.
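For a sense of what "interfacing with the CAN" looks like at the byte level, here is how Linux's SocketCAN represents a classic CAN frame (the OBD-II request shown is a standard diagnostic query; most other IDs and payloads are vendor-proprietary):

```python
import struct

def socketcan_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame in Linux SocketCAN's struct can_frame
    layout: 32-bit ID, 1-byte data length code, 3 pad bytes, 8 data bytes."""
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# OBD-II broadcast query (mode 0x01, PID 0x0D: vehicle speed).
frame = socketcan_frame(0x7DF, bytes([0x02, 0x01, 0x0D]))
assert len(frame) == 16 and frame[4] == 3   # 16-byte frame, DLC of 3
```

Tools like CANtact shuttle frames in this spirit between a laptop and the bus; the hard part Evenchick hopes the community will tackle is decoding what each ID and payload means on a given vehicle.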

Read more about CANtact: CANtact Device Lets you Hack a Car’s CPU for $60

I got feedback on the CANtact story about a seemingly unrelated topic: the death of Michael Hastings. Hastings was a Rolling Stone and BuzzFeed contributor who became very vocal about the surveillance state when the U.S. Department of Justice started investigating reporters in 2013. Hastings coined the term “war on journalism” after the Obama administration sanctioned limitations on journalists’ ability to report when the White House considered a story a security risk. BuzzFeed ran his last story, “Why Democrats Love to Spy On Americans,” on June 7, 2013. Many Americans consider Hastings’ death suspicious: he died in an explosive, high-speed automobile accident on June 18, 2013, in Los Angeles, CA.

Check out one of the last interviews with Michael Hastings and scroll down for a description of the oft-repeated conspiracy theory surrounding his untimely death.

The Michael Hastings Conspiracy Theory:

Unlike a lot of post-millennium conspiracy theories, which usually start online, this one actually began on television. Reporters were already at odds with the limitations the Obama administration was attempting to impose, and it seemed like extremely suspicious timing that one of the leading critics of censorship was suddenly killed. The internet ran with it, and some Americans considered the crash suspicious at the time. Public opinion often lacks the support of hard evidence, though, and this case was no different. Not everyone considered the media coverage unbiased, given the political stake journalists had in the issue.

The first solid argument that Hastings didn’t die by accident came from Richard A. Clarke, a former U.S. National Coordinator for Security, Infrastructure Protection, and Counter-terrorism (what a title!), who called the crash “consistent with a car cyber attack”. The conspiracy theory gestating around water coolers and message boards was truly born when Clarke went public with this outright accusation:

“There is reason to believe that intelligence agencies for major powers—including the United States—know how to remotely seize control of a car. So if there were a cyber attack on [Hastings’] car—and I’m not saying there was, I think whoever did it would probably get away with it.”

Next, WikiLeaks announced that Hastings had reached out to a WikiLeaks lawyer, Jennifer Robinson, only a few hours before the crash.

Army Staff Sergeant Joe Biggs came forward with an email he thought might help in a murder investigation. The email had been CCed to a few of Hastings’ colleagues, stating he was “onto a big story” and planned to “go off the radar”. Perhaps the most incriminating detail is that he warned the addressees to expect a visit from the FBI. The FBI denied in a formal press release that Hastings was being investigated.

LA Weekly admitted Hastings was preparing a new installment of what had been an ongoing story involving the CIA. Hastings’ wife, Elise Jordan, confirmed he had been working on a story profiling CIA Director John O. Brennan.

 

The case against foul play:

I have to admit, I got sucked in for a second but Cosmoso is a science blog and I personally believe an important part of science is to maintain rational skepticism. The details I listed above are the undisputed facts. You can research online and verify them. It might seem really likely that Hastings was onto something and silenced by some sort of foul play leading to a car accident but there is no hard evidence, no smoking gun, no suspects and nothing really proving he was a victim of murder.

The rumor online has always been that there are suspicious aspects to the explosion. Cars don’t always explode when they crash, but Frank Markus, director of Motor Trend, said the ensuing fire was consistent with most high-speed car crashes. The usual conspiracy-theorist reaction is to suspect such testimony of bias, assuming the speaker has some advantage or involvement. That’s pretty difficult to argue in the case of Frank Markus, who just directs a magazine and website about cars.

Hastings’ own family doesn’t seem to think the death was suspicious. His brother, Jonathan, later revealed Michael seemed “manic” in the days leading up to the crash. Elise Jordan, his wife, told the press it was “just a really tragic accident.”

A host of The Young Turks who was close with Hastings once said Hastings’ friends had noticed he was agitated and tense. Michael often complained that he was being followed and watched. It’s easy to dismiss the conspiracy theory when you consider it may have stemmed from the line of work he chose.

Maybe the government conspiracy angle is a red herring.

Reporting on the FBI, the military, the White House, or the CIA is what reporters do. People did it before and have done it since. Those government organizations have accountability in ways that would make an assassination pretty unlikely.

If it wasn’t the government, who would have wanted to kill Hastings?

A lot of people, it turns out. Hastings had publicly confirmed he received several death threats after his infamous Rolling Stone article criticizing and exposing General McChrystal. Considering the United States’ long history of reactionary violence, an alternate theory is that military personnel performed an unsanctioned hit on Hastings at a time when many right-wing Americans considered the journalist unpatriotic.

Here’s where the tech comes into play:

Hastings had told USA Today his car had recently been “tampered with”, without any real explanation of what that meant, though most people in 2013 would have assumed physical tampering with the brakes or a planted bug. In any case, he said he was scared and planned to leave town.

Now it’s only two years later, and people are starting to see how a little inside knowledge of how the CAN bus works in a modern vehicle can be used to do some serious harm. We might never know whether this was a murder, an assassination or an accident, but hacking a car remotely seemed like a joke at the time; two years later, no one is laughing.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Hybrid Fern’s Parents Last Shared a Common Ancestor 60 Million Years Ago


Cystocarpium roskamianum, a common fern in parts of France, appears to be a naturally cross-bred product of two separate ferns. Ferns reproduce via spores. Read the DNA analysis here, describing Cystocarpium roskamianum as a hybrid fern whose parents’ last common ancestor lived nearly 60 million years ago. This hybrid arose in the French Pyrenees but can be found in flower shops and greenhouses throughout Europe. It’s noteworthy as an intergeneric hybrid between lineages that have not cross-fertilized since that common ancestor.

The DNA report is entitled: Natural Hybridization between Genera That Diverged From Each Other Approximately 60 Million Years Ago. (Carl J. Rothfels, Anne K. Johnson, Peter H. Hovenkamp, David L. Swofford, Harry C. Roskam, Christopher R. Fraser-Jenkins, Michael D. Windham, and Kathleen M. Pryer, The American Naturalist, Vol. 185, No. 3 (March 2015), pp. 433-442)

This is an extraordinarily deep hybridization event, roughly akin to an elephant hybridizing with a manatee or a human with a lemur.

The research team went on to acknowledge that fern populations develop new adaptations slowly and with much overlap and shared DNA. Much of our planet’s biodiversity depends on cross-pollination of species with complementary traits, as opposed to adaptive mutations that spur evolution in lifeforms of more divergent lineage. In other words, the history of interbreeding based on desirable traits runs 60 million years deep for ferns.

Carl Rothfels headed the study. He pointed out that Cystocarpium roskamianum is the current record-holder for this type of hybrid: “A 60 million year divergence is approximately equivalent to a human mating with a lemur.”

Just because it’s the oldest common-relative hybrid on record doesn’t mean Cystocarpium roskamianum is hard to find. It’s a common hybrid found in areas where the spores of one parent species can blow on the wind into the fronds of the other. It has an unconventional look for a fern, but it’s easily identifiable and commonly found.

This fern story might be part of the trending science at the root of this ridiculous ban on human-animal hybrids proposed to be written into actual bona fide American law by Georgia Republican State Rep. Tom Kirby. Alarmism is a bad way to react to new science, and the conversation got pretty silly. Check out some of Kirby’s gems:

Mermaids: “Y’know the mermaids in the ocean, that’s been around for a long time,” Kirby said. “I don’t think we should create them. But if they exist, that’s fine.”

Centaurs: “Y’know I really don’t like centaurs,” Kirby said of the half-man, half-horse mythological character. “They really have bad attitudes most of the time and we’ve got enough people with bad attitudes as it is.”

Bird-men: “I think man has been trying to fly forever,” Kirby said. He approves of bird-men, he says, “if it’s a natural genetic mutation.” He acknowledges such a mutation could help solve Georgia’s transportation issues.

Werewolves: “We don’t want to laboratorily create the werewolf,” Kirby said. But “naturally occurring in the environment, absolutely.”

The research seems to point out that ferns can easily interbreed with other ferns simply because they have not evolved genetic barriers that stop them from doing so. The implication is that other lifeforms have evolved an aversion or set of obstacles that stop crossbreeding, fine-tuning the speed of adaptation to suit each lifeform’s unique evolutionary path.

 

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Panspermia “Alien Seed” Theory Still Unproven Despite Claims of New Evidence


Panspermia is the (hilarious) name given to a theoretical discussion about terrestrial life originating from someplace extraterrestrial, beyond Earth. The panspermia argument began with Francis Crick, the co-discoverer of the structure of the DNA molecule. If “panspermia” sounds like a science fiction idea from the ’70s, you aren’t thinking too far off. While Crick is famous for the DNA work, he also held a far less plausible idea about aliens and the origin of life, one that never stood up to scientific scrutiny but makes its way back into scientific debates from time to time.


Over the years there have been several discoveries that panspermia supporters point to excitedly, claiming proof of alien life. The latest piece of evidence comes from Milton Wainwright at the University of Buckingham. While the object was found embedded in a weather balloon designed to collect upper-atmosphere debris, and it is undeniably a biosignature of some sort, it doesn’t definitively support the panspermia theory, despite recent, high-profile headlines.

Wainwright himself admits this: “Unless, of course, we can find details of the civilization that is supposed to have sent it in this respect, it is probably an unprovable theory.”

Um, yeah. So the guy quoted all over the new evidence is actually the first in line to warn against jumping to the conclusion that we are all aliens. Still, it’s a very exciting theory. Let’s take a look at other samples from various parts of Earth and beyond that have allowed dreamers to fabricate theories of life’s supposed extraterrestrial origin.

Biosignatures don’t have to be chemical. They can be magnetic, as suggested in this space.com article from 2011, or morphological, meaning the shape and size of fossilized evidence could indicate a living thing once left its mark. Biosignatures that support the idea of alien life or panspermic origins of terrestrial life are inconclusive, but that doesn’t stop enthusiasts from pointing and claiming they have proven the theory correct. Here’s why the evidence supporting panspermia is still inconclusive:

Meteorite ALH84001


Tiny, microscopic magnetite crystals were found in meteorite ALH84001. Not a lot can be proven from this undeniably interesting piece of space rock. The meteorite is likely Martian in origin. It’s famously debated because of a handful of potential biosignatures. Some scientists insisted only bacteria could have caused the crystal formations. They turned out to be wrong; similar formations can arise through complex physics, without life intervening.

Several other “possible biosignatures” have been investigated in the sample. There is a working hypothesis but no empirical confirmation of life. Proof of an extraterrestrial form of life would require that these so-called biosignatures could have been formed by a living thing and only a living thing – which is clearly not the case. One such biosignature was a small-pattern texture resembling one left by a known bacterium; a scientific majority ultimately decided these textures were too small to be fossilized cells. Meteorite ALH84001 is a curiosity, a rare find and an amazing natural occurrence, but it is not proof of the panspermia theory.

Then there is the Kerala red rain phenomenon, which occurred in Kerala, India from 25 July to 23 September 2001. Heavy showers brought a peculiar, red-coloured liquid. The “blood rains” fell across the southern Indian state of Kerala, staining fabrics and causing alarm. Other colours were reported, but the majority of reports and samples were red. It has happened several times since, most recently in June 2012.

Kerala Red Rain

A photo-microscopy examination brought an initial rumor to the media: the source of the red color was a meteor shower or an exploding asteroid’s particles heating up on entry into Earth’s atmosphere. Early misreports like that often spawn rumors or conspiracy theories when the official story gets redacted. In this case, a detailed study commissioned by the Government of India concluded the rains had been dyed by airborne spores originating from a prolific colony of terrestrial forest algae.

It’s still a mysterious phenomenon but the genetic makeup of the cells found in red rain is far too common for the sample to be extraterrestrial.

Tardigrades

Tardigrades are so durable they seem to be able to survive for a long time by entering a strange, dehydrated state. They are among the only animals that can suspend their metabolism and go into a state of cryptobiosis. Several varieties of tardigrade can stay dormant for nearly 10 years. While in this state, tardigrade metabolism falls to 0.01% of normal and their water content drops to 1% of normal.

Tardigrades would make excellent space travellers because they can withstand extreme environments most other lifeforms would be destroyed in, including extremes of temperature, pressure, dehydration and radiation, environmental toxins, and outer space vacuum conditions.

Wikipedia points out that tardigrades are the first known animals to survive in space. In September 2007, dehydrated tardigrades were taken into low Earth orbit on the FOTON-M3 mission carrying the BIOPAN astrobiology payload. For 10 days, groups of tardigrades were exposed to the hard vacuum of outer space, or to vacuum plus solar UV radiation. After being rehydrated back on Earth, over 68% of the subjects protected from high-energy UV radiation revived within 30 minutes, though subsequent mortality was high; many of these produced viable embryos. In contrast, dehydrated samples exposed to the combined effect of vacuum and full solar UV radiation had significantly reduced survival, with only three subjects of Milnesium tardigradum surviving. In May 2011, Italian scientists sent tardigrades to the International Space Station along with other extremophiles on STS-134, the final flight of Space Shuttle Endeavour. Their conclusion was that microgravity and cosmic radiation “did not significantly affect survival of tardigrades in flight, confirming that tardigrades represent a useful animal for space research.” In November 2011, they were among the organisms sent by the US-based Planetary Society on the Russian Fobos-Grunt mission’s Living Interplanetary Flight Experiment to Phobos; however, the launch failed. It remains unclear whether tardigrade specimens survived the failed launch.

Tardigrades can survive in space but that doesn’t mean they came from space. They have strong genetic ties with several other animals in the Panarthropoda group. They appear to have evolved on Earth but will likely be studied for years to come because of the adaptable nature of Earth life they represent.

Like a lot of pseudo-science, there are elements of hope and truth in many of the details. Labeling bad science or non-science for what it is enables us to dream bigger and keep a better-informed, watchful eye on the available data. If you are feeling the sting of yet another science news story letting you down, recharge your creative side with this ’90s CGI classic that illustrates the crucial principles of panspermia:

Jonathan Howard is a skeptic and freelance writer working for Cosmoso.net

You can reach him at this email address: [email protected]

or find him on facebook: https://www.facebook.com/contact.jonhoward

 

 

 


How we can each fight cybercrime with smarter habits


Hackers gain access to computers and networks by exploiting the weaknesses in our cyber behaviors. Many attacks use simple phishing schemes – the hacker sends an email that appears to come from a trusted source, encouraging the recipient to click a seemingly innocuous hyperlink or attachment. Clicking will launch malware and open backdoors that can be used for nefarious actions: accessing a company’s network or serving as a virtual zombie for launching attacks on other computers and servers.
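One classic tell of such phishing emails is a link whose visible text shows one address while the underlying href points somewhere else. A toy checker for that single pattern (a sketch only, nowhere near a substitute for real email filtering):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAudit(HTMLParser):
    """Flag links whose visible text looks like a URL on one host
    while the href actually points at a different host."""

    def __init__(self):
        super().__init__()
        self.href, self.text, self.suspicious = None, "", []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href, self.text = dict(attrs).get("href", ""), ""

    def handle_data(self, data):
        if self.href is not None:          # collect the link's visible text
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown, real = self.text.strip(), urlparse(self.href).hostname or ""
            if shown.startswith("http") and urlparse(shown).hostname != real:
                self.suspicious.append((shown, self.href))
            self.href = None

audit = LinkAudit()
audit.feed('<a href="http://evil.example/login">http://mybank.example</a>')
assert audit.suspicious == [("http://mybank.example", "http://evil.example/login")]
```

Real phishing is far more varied than this one trick, which is exactly why the research below focuses on the human habits behind the click rather than on filters alone.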

No one is safe from such attacks. Not companies at the forefront of technology, such as Apple and Yahoo, whose security flaws were recently exploited. Not even sophisticated national networks are home free; for instance, Israel’s was compromised by a phishing attack in which an email purportedly from Shin Bet, Israel’s internal security service, with a phony PDF attachment, gave hackers remote access to its defense network.

To figure out why we fall for hackers’ tricks, I use them myself to see which kinds of attacks are successful and with whom. In my research, I simulate real attacks by sending different types of suspicious emails, friend-requests on social media, and links to spoofed websites to research subjects. Then I use a variety of direct, cognitive and psychological measures as well as unobtrusive behavioral measures to understand why individuals fall victim to such attacks.

What is apparent over the many simulations is how seemingly simple attacks, crafted with minimal sophistication, achieve a staggering victimization rate. As a case in point, merely incorporating the university’s logo and some brand markers into a phishing email resulted in close to 70% of the research subjects falling prey to the attack. Ultimately, the goal of my research is to figure out how best to teach the public to ward off these kinds of cyberattacks when they come up in their everyday lives.

Wise advice.
Julia Wolf, CC BY-NC-SA

Clicking without thinking

Many of us fall for such deception because we misunderstand the risks of online actions. I call these our cyber-risk beliefs, and more often than not, I’ve found people’s risk beliefs are inaccurate. For instance, individuals mistakenly equate their inability to manipulate a PDF document with its inherent security, and quickly open such attachments. Similarly flawed beliefs lead individuals to cavalierly open webpages and attachments on their mobile devices or on certain operating systems.

Compounding such beliefs are people's email and social media habits. Habits are the brain's way of automating repeatedly enacted, predictable behaviors. Over time, frequently checking email, social media feeds and messages becomes a routine. People grow unaware of when – and at times why – they perform these actions. Consequently, when in the groove, people click links or open attachments without much forethought. In fact, I've found certain Facebook habits – such as repeatedly checking newsfeeds, frequently posting status updates, and maintaining a large Facebook friend network – to be the biggest predictors of whether a person would accept a friend request from a stranger and reveal personal information to that stranger.

Such habitual reactions are further catalyzed by the smartphones and tablets that most of us use. These devices foster quick and reactive responses to messages through widgets, apps and push notifications. Not only do smartphone screen sizes and compressed app layouts reduce the amount of detailed information visible, but many of us also use such devices while on the go, when distraction further compromises our ability to detect deceptive emails.

These automated cyber routines and reactive responses are, in my opinion, the reasons why the current approach of training people to be vigilant about suspicious emails remains largely ineffective. Changing people’s media habits is the key to reducing the success of cyberattacks — and therein also lies an opportunity for all of us to help.

Harnessing habits to fight cybercrime

Emerging research suggests that the best way to correct a habit is to replace it with another, what writer Charles Duhigg calls a keystone habit. This is a simple positive action that can replace an existing pattern. For instance, people who wish to lose weight are instructed to exercise, reduce sugar intake, read food labels and count calories. Doing so many challenging things consistently is daunting, and people are often too intimidated to even begin. Many people find greater success when they instead focus on one key attainable action, such as walking half a mile each day. Repeatedly accomplishing this simple goal feels good, builds confidence and encourages more cognitive assessments – processes that quickly snowball into massive change.


We could apply the same principle to improve cybersecurity by making it a keystone habit to report suspicious emails. After all, many people receive such emails. Some inadvertently fall for them, while many who are suspicious don’t. Clearly, if more of us were reporting our suspicions, many more breaches could be discovered and neutralized before they spread. We could transform the urge to click on something suspicious into a new habit: reporting the dubious email.

We need a centralized, national clearing house — perhaps an email address or phone number similar to the 911 emergency system — where anyone suspicious of a cyberthreat can quickly and effortlessly report it. This information could be collated regionally and tracked centrally, in the same way the Department of Health tracks public health and disease outbreaks.

Of course, we also need to make reporting suspicious cyber breaches gratifying, so people feel vested and receive something in return. Rather than simply collecting emails, as is presently done by the many different institutions combating cyber threats, submissions could be vetted by a centralized cybersecurity team, who, in addition to redressing the threat, would publicize how a person's reporting helped thwart an attack. Reporting a cyber intrusion could become easy, fun, something we can all do. More importantly, the mere act of habitually reporting our suspicions could in time lead to more cybersecurity consciousness among all of us.

This article is part of an ongoing series on cybersecurity. More articles will be published in the coming weeks.


This article was originally published on The Conversation.