Category Archives: Security

Google Tracks You Even If You’re Not Signed In


A new lawsuit alleges that Google violates users’ privacy by collecting and using browsing data even when users browse privately. Specifically, the suit states that Google systematically includes search queries in the URL of the search results page, even when the user is in private browsing mode. Privacy advocates have called this an egregious violation of user privacy and demanded action from the Federal Trade Commission, and the company has been sued over related practices by several groups, including Consumer Watchdog.

The lawsuit states that Google’s practices violate privacy laws and should be stopped. The Internet giant’s private browsing option has been available for years, but the suit claims that Google violates California law by gathering personal information even when users browse in “private browsing” mode. California law requires consent before a company may monitor private communications, so, the plaintiffs argue, Google must obtain consent before it collects any such data. If the claim succeeds, it would be a significant step forward for users’ online privacy.

Google’s data-collection practices have also drawn action elsewhere. In 2019, France’s data protection authority fined the company EUR50 million under the GDPR, finding that Google had failed to obtain clear consent from end users and had not made its data practices sufficiently transparent, including the role of its advertising partners.

Earlier lawsuits alleging that Google violates user privacy have not been successful, but the company still faces a number of cases over its data-collection practices. The current suit says the company collects browsing histories and search queries even when users’ browsers are in private mode, and further claims that Google deceives consumers by presenting this collection as necessary for its business. If the lawsuit succeeds, Google could be forced to pay substantial damages to its users.

The US government has also taken the company to court over its practices. The suit additionally claims that Google knowingly collects information about its users, though it is unclear exactly how; the data at issue is gathered when a person uses the search engine in private mode. Google maintains that the data is used to improve the quality of its search experience, but that is not the only violation the company has been accused of.

While Google does not explicitly deny that it collects this information, it says little about the data it also gathers through third-party tracking cookies, and it is not required to disclose the specifics of that collection. The company has announced plans to phase out third-party cookies in its browser, though the change is not expected to hurt its advertising business.

A class-action lawsuit filed in the US alleges that Google has violated user privacy through its use of third-party tracking cookies, claiming that the practice violates California’s Computer Data Access and Fraud Act as well as state privacy laws.

The suit further alleges that Google’s privacy controls are deceptive and that the company collects information even without consumer consent. It also points to the widespread use of Google’s Firebase SDK in third-party Android apps, a tool that lets advertisers understand the composition of their audience. This gives the company audience data it can analyze for advertising purposes, which it then uses to create hyper-personalized ads.

In a separate, still-pending suit, the plaintiffs claim that Google has harmed the rights of millions of users by tracking their activities. The Texas Attorney General’s office has previously filed a similar suit against the company over its data practices. Google’s privacy policies remain the subject of class-action litigation, and a proposed settlement in one such case was thrown out by the court as a major violation of user rights.

Tekoso Media Provides Web Hosting For Small Businesses


For Immediate Release –

Tekoso Media Web Hosting

Internet company Tekoso Media has recently begun offering web hosting services including shared hosting, cloud hosting and storage, VPS hosting and dedicated servers for both Windows and Linux platforms. Plans are inexpensive and come with generous software packages, built-in security and cutting-edge server technology.

Servers in the shared, VPS and managed server packages also come with cPanel and a whole host of free add-ons that every internet marketer needs for a launch, including ticketing systems, multiple email accounts, billing systems and membership functionality.

WordPress Hosting at Tekoso Media

WordPress is obviously a part of what they offer, but they go the extra mile by offering WordPress hosting at a discounted rate as well. On top of that, their point-and-click, drag-and-drop website builders and integrations make it very easy to get up and running within minutes.

If you’re launching a new product or service, or even hosting a blog network for search engine optimization such as authority blogs, Tekoso Web Hosting is your best bet for getting started fast and cheaply. For just a few bucks you can have lightning-fast servers, more hard drive space and RAM than you’ll ever need and plenty of bandwidth, all without breaking the bank the way you would with competing web hosting providers.

Try Tekoso and take them for a test drive. They even offer a thirty-day money-back guarantee on any server that is unsatisfactory. This offer may be limited, so check them out today!

Learn more about Tekoso Web Hosting at tekoso.com or on YouTube at https://www.youtube.com/watch?v=Wu1Qo8e6hvU

Police around the world learn to fight global-scale cybercrime


Frank J. Cilluffo, George Washington University; Alec Nadeau, George Washington University, and Rob Wainwright, University of Exeter

From 2009 to 2016, a cybercrime network called Avalanche grew into one of the world’s most sophisticated criminal syndicates. It resembled an international conglomerate, staffed by corporate executives, advertising salespeople and customer service representatives.

Its business, though, was not standard international trade. Avalanche provided a hacker’s delight of a one-stop shop for all kinds of cybercrime to criminals without their own technical expertise but with the motivation and ingenuity to perpetrate a scam. At the height of its activity, the Avalanche group had hijacked hundreds of thousands of computer systems in homes and businesses around the world, using them to send more than a million criminally motivated emails per week.

Our study of Avalanche, and of the groundbreaking law enforcement effort that ultimately took it down in December 2016, gives us a look at how the cybercriminal underground will operate in the future, and how police around the world must cooperate to fight back.

Cybercrime at scale

Successful cybercriminal enterprises need strong and reliable technology, but what increasingly separates the big players from the smaller nuisances is business acumen. Underground markets, forums and message systems, often hosted on the deep web, have created a service-based economy of cybercrime.

Just as regular businesses can hire online services – buying Google products to handle their email, spreadsheets and document sharing, and hosting websites on Amazon with payments handled by PayPal – cybercriminals can do the same. Sometimes these criminals use legitimate service platforms like PayPal in addition to others specifically designed for illicit marketplaces.

And just as the legal cloud-computing giants aim to efficiently offer products of broad use to a wide customer base, criminal computing services do the same. They pursue technological capabilities that a wide range of customers want to use more easily. Today, with an internet connection and some currency (bitcoin preferred), almost anyone can buy and sell narcotics online, purchase hacking services or rent botnets to cripple competitors and spread money-making malware.

The Avalanche network excelled at this, selling technically advanced products to its customers while using sophisticated techniques to evade detection and identification as the source by law enforcement. Avalanche offered, in business terms, “cybercrime as a service,” supporting a broad digital underground economy. By building the tools and leaving to others the design and execution of innovative ways to use them, Avalanche and its criminal customers efficiently split the work of planning, executing and developing the technology for advanced cybercrime scams.

With Avalanche, renters – or the network’s operators themselves – could communicate with, and take control of, some or all of the hijacked computers to conduct a wide range of cyberattacks. The criminals could then, for example, knock websites offline for hours or longer. That in turn could let them extract ransom payments, disrupt online transactions to hurt a business’ bottom line or distract victims while accomplices employed stealthier methods to steal customer data or financial information. The Avalanche group also sold access to 20 unique types of malicious software. Criminal operations facilitated by Avalanche cost businesses, governments and individuals around the world hundreds of millions of dollars.

Low risk, high reward

To date, cybercrime has offered high profits – like the US$1 billion annual ransomware market – with low risk. Cybercriminals often use technical means to obscure their identities and locations, making it challenging for law enforcement to effectively pursue them.

That makes cybercrime very attractive to traditional criminals. With a lower technological bar, huge amounts of money, manpower and real-world connections have come flooding into the cybercrime ecosystem. For instance, in 2014, cybercriminals hacked into major financial firms to get information about specific companies’ stocks and to steal investors’ personal information. They first bought stock in certain companies, then sent false email advertisements to specific investors, with the goal of artificially inflating those companies’ stock prices. It worked: Stock prices went up, and the criminals sold their holdings, raking in profits they could use for their next scam.

In addition, the internet allows criminal operations to function across geographic boundaries and legal jurisdictions in ways that are simply impractical in the physical world. Criminals in the real world must be at a crime’s actual site and may leave physical evidence behind – like fingerprints on a bank vault or records of traveling to and from the place the crime occurred. In cyberspace, a criminal in Belarus can hack into a vulnerable server in Hungary to remotely direct distributed operations against victims in South America without ever setting foot below the Equator.

A path forward

All these factors present significant challenges for police, who must also contend with limited budgets and manpower with which to conduct complex investigations, the technical challenges of following sophisticated hackers through the internet and the need to work with officials in other countries.

The multinational cooperation involved in successfully taking down the Avalanche network can be a model for future efforts in fighting digital crime. Coordinated by Europol, the European Union’s police agency, the plan takes inspiration from the sharing economy.

Uber owns very few cars and Airbnb has no property; they help connect drivers and homeowners with customers who need transportation or lodging. Similarly, while Europol has no direct policing powers or unique intelligence, it can connect law enforcement agencies across the continent. This “uberization” of law enforcement was crucial to synchronizing the coordinated action that seized, blocked and redirected traffic for more than 800,000 domains across 30 countries.

Through those partnerships, various national police agencies were able to collect pieces of information from their own jurisdictions and send it, through Europol, to German authorities, who took the lead on the investigation. Analyzing all of that collected data revealed the identity of the suspects and untangled its complex network of servers and software. The nonprofit Shadowserver Foundation and others assisted with the actual takedown of the server infrastructure, while anti-virus companies helped victims clean up their computers.

Using the network against the criminals

Police are increasingly learning – often from private sector experts – how to detect and stop criminals’ online activities. Avalanche’s complex technological setup lent itself to a technique called “sinkholing,” in which malicious internet traffic is sent into the electronic equivalent of a bottomless pit. When a hijacked computer tried to contact its controller, the police-run sinkhole captured that message and prevented it from reaching the actual central controller. Without control, the infected computer couldn’t do anything nefarious.
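The sinkholing technique described above can be sketched in a few lines of code. This is a simplified model, not Avalanche's actual infrastructure: the domain names, the `BotClient`-style check-in function and the sinkhole address below are all invented for illustration. The key idea is that seized command-and-control domains resolve to a police-run server that records check-ins and never answers with commands.

```python
# Sketch of a DNS sinkhole neutralizing a botnet (illustrative names only).
SINKHOLE_IP = "203.0.113.10"   # address of the hypothetical police-run capture server

# DNS entries after the takedown: seized C&C domains now resolve to the sinkhole.
dns_table = {
    "update-check.example-malware.net": SINKHOLE_IP,
    "cdn.example-malware.org": SINKHOLE_IP,
}

captured_checkins = []  # what the sinkhole logs instead of forwarding

def resolve(domain):
    """Resolve a domain; seized names now point at the sinkhole."""
    return dns_table.get(domain)

def sinkhole_handler(message):
    """Record the bot's check-in; never return commands, so the bot sits idle."""
    captured_checkins.append(message)
    return None  # no instructions ever come back

def bot_checkin(domain, infected_host):
    """An infected machine phones home; with sinkholing it reaches police instead."""
    ip = resolve(domain)
    if ip == SINKHOLE_IP:
        return sinkhole_handler({"host": infected_host, "domain": domain})
    raise RuntimeError("would have reached the real controller")

# A hijacked machine tries to fetch commands and gets nothing back.
assert bot_checkin("update-check.example-malware.net", "victim-pc-1") is None
assert len(captured_checkins) == 1
```

The captured check-ins are also how investigators and anti-virus companies learn which machines are infected and need cleanup.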

However, interrupting the technological systems isn’t enough, unless police are able to stop the criminals too. Three times since 2010, police tried to take down the Kelihos botnet. But each time the person behind it escaped and was able to resume criminal activities using more resilient infrastructure. In early April, however, the FBI was able to arrest Peter Levashov, allegedly its longtime operator, while on a family vacation in Spain.

The effort to take down Avalanche also resulted in the arrests of five people who allegedly ran the organization. Their removal from action likely led to a temporary disruption in the broader global cybercrime environment. It forced the criminals who were Avalanche’s customers to stop and regroup, and may offer police additional intelligence, depending on what investigators can convince the people arrested to reveal.

The Avalanche network was just the beginning of the challenges law enforcement will face when it comes to combating international cybercrime. To keep their enterprises alive, the criminals will share their experiences and learn from the past. Police agencies around the world must do the same to keep up.

Frank J. Cilluffo, Director, Center for Cyber and Homeland Security, George Washington University; Alec Nadeau, Presidential Administrative Fellow, Center for Cyber and Homeland Security, George Washington University, and Rob Wainwright, Director of Europol; Honorary Fellow, Strategy and Security Institute, University of Exeter

Turning diamonds’ defects into long-term 3-D data storage


With the amount of data storage required for our daily lives growing and growing, and currently available technology being almost saturated, we’re in desperate need of a new method of data storage. The standard magnetic hard disk drive (HDD) – like what’s probably in your laptop computer – has reached its limit, holding a maximum of a few terabytes. Standard optical disk technologies, like compact disc (CD), digital video disc (DVD) and Blu-ray disc, are restricted by their two-dimensional nature – they just store data in one plane – and also by a physical law called the diffraction limit, based on the wavelength of light, that constrains our ability to focus light to a very small volume.
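The diffraction limit mentioned above can be made concrete with the Abbe criterion, which puts the smallest focusable feature size at roughly the wavelength divided by twice the numerical aperture of the optics. The wavelengths and apertures below are the commonly published values for each disc format, used here only as a rough illustration of why shorter-wavelength lasers enabled denser discs.

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable feature size for a focused beam: d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# CD: 780 nm laser, NA 0.45; DVD: 650 nm, NA 0.60; Blu-ray: 405 nm, NA 0.85
for name, wl, na in [("CD", 780, 0.45), ("DVD", 650, 0.60), ("Blu-ray", 405, 0.85)]:
    print(f"{name}: ~{abbe_limit_nm(wl, na):.0f} nm minimum spot size")
```

However small the spot gets, the data still lives in a single plane, which is the two-dimensional restriction the article goes on to address.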

And then there’s the lifetime of the memory itself to consider. HDDs, as we’ve all experienced in our personal lives, may last only a few years before things start to behave strangely or just fail outright. DVDs and similar media are advertised as having a storage lifetime of hundreds of years. In practice this may be cut down to a few decades, assuming the disk is not rewritable. Rewritable disks degrade on each rewrite.

Without better solutions, we face financial and technological catastrophes as our current storage media reach their limits. How can we store large amounts of data in a way that’s secure for a long time and can be reused or recycled?

In our lab, we’re experimenting with a perhaps unexpected memory material you may even be wearing on your ring finger right now: diamond. On the atomic level, these crystals are extremely orderly – but sometimes defects arise. We’re exploiting these defects as a possible way to store information in three dimensions.

Focusing on tiny defects

One approach to improving data storage has been to continue in the direction of optical memory, but extend it to multiple dimensions. Instead of writing the data to a surface, write it to a volume; make your bits three-dimensional. The data are still limited by the physical inability to focus light to a very small space, but you now have access to an additional dimension in which to store the data. Some methods also polarize the light, giving you even more dimensions for data storage. However, most of these methods are not rewritable.

Here’s where the diamonds come in.

The orderly structure of a diamond, but with a vacancy and a nitrogen replacing two of the carbon atoms.
Zas2000

A diamond is supposed to be a pure well-ordered array of carbon atoms. Under an electron microscope it usually looks like a neatly arranged three-dimensional lattice. But occasionally there is a break in the order and a carbon atom is missing. This is what is known as a vacancy. Even further tainting the diamond, sometimes a nitrogen atom will take the place of a carbon atom. When a vacancy and a nitrogen atom are next to each other, the composite defect is called a nitrogen vacancy, or NV, center. These types of defects are always present to some degree, even in natural diamonds. In large concentrations, NV centers can impart a characteristic red color to the diamond that contains them.

This defect is having a huge impact in physics and chemistry right now. Researchers have used it to detect the unique nuclear magnetic resonance signatures of single proteins and are probing it in a variety of cutting-edge quantum mechanical experiments.

Nitrogen vacancy centers have a tendency to trap electrons, but the electron can also be forced out of the defect by a laser pulse. For many researchers, the defects are interesting only when they’re holding on to electrons. So for them, the fact that the defects can release the electrons, too, is a problem.

But in our lab, we instead look at these nitrogen vacancy centers as a potential benefit. We think of each one as a nanoscopic “bit.” If the defect has an extra electron, the bit is a one. If it doesn’t have an extra electron, the bit is a zero. This electron yes/no, on/off, one/zero property opens the door for turning the NV center’s charge state into the basis for using diamonds as a long-term storage medium.

Starting from a blank ensemble of NV centers in a diamond (1), information can be written (2), erased (3), and rewritten (4).
Siddharth Dhomkar and Carlos A. Meriles, CC BY-ND

Turning the defect into a benefit

Previous experiments with this defect have demonstrated some properties that make diamond a good candidate for a memory platform.

First, researchers can selectively change the charge state of an individual defect so it either holds an electron or not. We’ve used a green laser pulse to assist in trapping an electron and a high-power red laser pulse to eject an electron from the defect. A low-power red laser pulse can help check if an electron is trapped or not. If left completely in the dark, the defects maintain their charged/discharged status virtually forever.
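The write/erase/read protocol just described maps naturally onto a one-bit memory cell. The toy model below is a schematic of the behavior as the paragraph states it, not a simulation of the underlying physics: a green pulse traps an electron (bit one), a high-power red pulse ejects it (bit zero), and a low-power red pulse reads the state without disturbing it.

```python
# Toy model of an NV-center charge state driven by laser pulses (schematic only).
class NVCenter:
    def __init__(self):
        self.charged = False  # no trapped electron: bit 0

    def pulse(self, color, power="low"):
        """Apply a laser pulse and return the resulting bit value."""
        if color == "green":
            self.charged = True          # write: trap an electron
        elif color == "red" and power == "high":
            self.charged = False         # erase: eject the electron
        elif color == "red" and power == "low":
            pass                         # read: non-destructive
        return int(self.charged)

nv = NVCenter()
assert nv.pulse("green") == 1              # write a one
assert nv.pulse("red", power="low") == 1   # reading doesn't disturb it
assert nv.pulse("red", power="high") == 0  # erase back to zero
```

The "virtually forever in the dark" property means no refresh cycle is needed between the read at step two and later reads, unlike the charge cells in DRAM.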

The NV centers can encode data on various levels.
Siddharth Dhomkar and Carlos A. Meriles, CC BY-ND

Our method is still diffraction limited, but is 3-D in the sense that we can charge and discharge the defects at any point inside of the diamond. We also present a sort of fourth dimension. Since the defects are so small and our laser is diffraction limited, we are technically charging and discharging many defects in a single pulse. By varying the duration of the laser pulse in a single region we can control the number of charged NV centers and consequently encode multiple bits of information.

Though one could use natural diamonds for these applications, we use artificially lab-grown diamonds. That way we can efficiently control the concentration of nitrogen vacancy centers in the diamond.

All these improvements add up to roughly a 100-fold enhancement in bit density relative to current DVD technology. That means we could encode all the information from a DVD into a diamond that takes up about one percent of the space.

Past just charge, to spin as well

If we could get beyond the diffraction limit of light, we could improve storage capacities even further. We have one novel proposal on this front.

A human cell, imaged on the right with super-resolution microscope.
Dr. Muthugapatti Kandasamy, CC BY-NC-ND

Nitrogen vacancy centers have also been used in what is called super-resolution microscopy to image things that are much smaller than the wavelength of light. However, since the super-resolution technique works on the same principles of charging and discharging the defect, it would unintentionally alter the pattern one wants to encode. Therefore, we can’t use it as-is for memory storage, and we’d need to back up the already written data somehow during a read or write step.

Here we propose the idea of what we call charge-to-spin conversion: we temporarily encode the charge state of the defect in the spin state of the defect’s host nitrogen nucleus. Spin is a fundamental property of any elementary particle; like charge, it is intrinsic, and it can be imagined as a very tiny magnet permanently attached to the particle.

While the charges are being adjusted to read or write the desired information, the previously written information is well protected in the nitrogen spin state. Once the charge manipulation is complete, the information can be converted back from the nitrogen spin to the charge state through another mechanism, which we call spin-to-charge conversion.

With these advanced protocols, the storage capacity of a diamond would surpass what existing technologies can achieve. This is just a beginning, but these initial results give us a potential way of storing huge amounts of data in a brand new form. We look forward to transforming this beautiful quirk of physics into a vastly useful technology.

The Conversation

Siddharth Dhomkar, Postdoctoral Associate in Physics, City College of New York and Jacob Henshaw, Teaching Assistant in Physics, City College of New York

Security risks in the age of smart homes


Smart homes, an aspect of the Internet of Things, offer the promise of improved energy efficiency and control over home security. Integrating various devices together can offer users easy programming of many devices around the home, including appliances, cameras and alarm sensors. Several systems can handle this type of task, such as Samsung SmartThings, Google Brillo/Weave, Apple HomeKit, Allseen Alljoyn and Amazon Alexa.

But there are also security risks. Smart home systems can leave owners vulnerable to serious threats, such as arson, blackmail, theft and extortion. Current security research has focused on individual devices, and how they communicate with each other. For example, the MyQ garage system can be turned into a surveillance tool, alerting would-be thieves when a garage door opened and then closed, and allowing them to remotely open it again after the residents had left. The popular ZigBee communication protocol can allow attackers to join the secure home network.

Little research has focused on what happens when these devices are integrated into a coordinated system. We set out to determine exactly what these risks might be, in the hope of showing platform designers areas in which they should improve their software to better protect users’ security in future smart home systems.

The popular SmartThings product line.
Zon@ IT/YouTube, CC BY

Evaluating the security of smart home platforms

First, we surveyed most of the above platforms to understand the landscape of smart home programming frameworks. We looked at what systems existed, and what features they offered. We also looked at what devices they could interact with, whether they supported third-party apps, and how many apps were in their app stores. And, importantly, we looked at their security features.

We decided to focus deeper inquiry on SmartThings because it is a relatively mature system, with 521 apps in its app store, supporting 132 types of IoT devices for the home. In addition, SmartThings has a number of conceptual similarities to other, newer systems that make our insights potentially relevant more broadly. For example, SmartThings and other systems offer trigger-action programming, which lets you connect sensors and events to automate aspects of your home. That is the sort of capability that can turn your walkway lights on when a driveway motion detector senses a car driving up, or can make sure your garage door is closed when you turn your bedroom light out at night.
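Trigger-action programming of the kind described above can be sketched as a small rule engine: each rule binds a sensor event to a device command. The event and device names below are invented for illustration and are not the actual SmartThings API.

```python
# Minimal sketch of trigger-action rules (illustrative names, not SmartThings APIs).
rules = []        # (trigger_event, action) pairs registered by the user
actions_log = []  # commands the hub would send to devices

def when(event_name, action):
    """Register a rule: when this event fires, issue this action."""
    rules.append((event_name, action))

def fire(event_name):
    """Dispatch a sensor event to every rule subscribed to it."""
    for trigger, action in rules:
        if trigger == event_name:
            actions_log.append(action)

# "Turn the walkway lights on when the driveway motion detector senses a car."
when("driveway.motion", "walkway_lights.on")
# "Make sure the garage door is closed when the bedroom light goes off."
when("bedroom_light.off", "garage_door.close")

fire("driveway.motion")
assert actions_log == ["walkway_lights.on"]
```

The security questions in the rest of the article come down to what such rules are allowed to see and do once installed.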

We tested for potential security holes in the system and 499 SmartThings apps (also called SmartApps) from the SmartThings app store, seeking to understand how prevalent these security flaws were.

Finding and attacking main weaknesses

We found two major categories of vulnerability: excessive privileges and insecure messaging.

Overprivileged SmartApps: SmartApps have privileges to perform specific operations on a device, such as turning an oven on and off or locking and unlocking a door. This idea is similar to smartphone apps asking for different permissions, such as to use the camera or get the phone’s current location. These privileges are grouped together; rather than getting separate permission for locking a door and unlocking it, an app would be allowed to do both – even if it didn’t need to.

For example, imagine an app that can automatically lock a specific door after 9 p.m. The SmartThings system would also grant that app the ability to unlock the door. An app’s developer cannot ask only for permission to lock the door.

More than half – 55 percent – of 499 SmartApps we studied had access to more functions than they needed.
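The coarse permission grouping behind that statistic can be sketched as follows. Capabilities bundle related commands, so an app requesting the "lock" capability is granted unlock as well. The capability and command names here are simplified stand-ins, not the real SmartThings capability definitions.

```python
# Sketch of capability-based overprivilege (simplified, illustrative bundles).
CAPABILITIES = {
    "lock": {"lock", "unlock"},   # one bundle grants both directions
    "switch": {"on", "off"},
    "battery": {"read_battery"},
}

class SmartApp:
    def __init__(self, requested):
        # Granting a capability grants every command in its bundle,
        # whether or not the app ever needs them all.
        self.granted = set()
        for cap in requested:
            self.granted |= CAPABILITIES[cap]

# The auto-lock app only ever needs to lock the door after 9 p.m. ...
auto_lock = SmartApp(requested=["lock"])
assert "lock" in auto_lock.granted

# ...but the coarse grouping hands it unlock too.
assert "unlock" in auto_lock.granted   # overprivilege
```

A finer-grained design would let a developer request individual commands, so the auto-lock app could never be abused to open the door.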

Insecure messaging system: SmartApps can communicate with physical devices by exchanging messages, which can be envisioned as analogous to instant messages exchanged between people. SmartThings devices send messages that can contain sensitive data, such as a PIN code to open a particular lock.

We found that as long as a SmartApp has even the most basic level of access to a device (such as permission to show how much battery life is left), it can receive all the messages the physical device generates – not just those messages about functions it has privileges to. So an app intended only to read a door lock’s battery level could also listen to messages that contain a door lock’s PIN code.

In addition, we found that SmartApps can “impersonate” smart-home equipment, sending out their own messages that look like messages generated by real physical devices. The malicious SmartApp can read the network’s ID for the physical device, and create a message with that stolen ID. That battery-level app could even covertly send a message as if it were the door lock, falsely reporting it had been opened, for example.

SmartThings does not ensure that only physical devices can create messages with a certain ID.
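The two messaging flaws just described can be sketched with a toy event bus. This is our simplified model, not SmartThings internals: first, any app with even battery-level access to a device receives all of that device's messages; second, the bus never verifies that a message's sender ID belongs to a real physical device. The PIN value and device IDs are invented for illustration.

```python
# Toy event bus illustrating the eavesdropping and impersonation flaws.
subscribers = {}   # device_id -> list of app inboxes

def subscribe(app_inbox, device_id):
    """Flaw 1: ANY level of access subscribes the app to ALL the device's events."""
    subscribers.setdefault(device_id, []).append(app_inbox)

def publish(sender_id, message):
    """Flaw 2: no check that sender_id really is the physical device."""
    for inbox in subscribers.get(sender_id, []):
        inbox.append(message)

battery_app = []                          # only wanted battery levels...
subscribe(battery_app, "front-door-lock")

# Flaw 1: the lock's PIN-change confirmation reaches the battery app anyway.
publish("front-door-lock", {"event": "codeReport", "pin": "5678"})
assert battery_app[-1]["pin"] == "5678"

# Flaw 2: a malicious app publishes under the lock's stolen ID.
publish("front-door-lock", {"event": "unlocked"})   # spoofed message
assert battery_app[-1]["event"] == "unlocked"
```

Fixing either flaw - scoping message delivery to granted privileges, or authenticating message origin - would have blocked the proof-of-concept attacks described next.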

SmartThings Proof-of-Concept Attacks

Attacking the design flaws

To move beyond the potential weaknesses into actual security breaches, we built four proof-of-concept attacks to demonstrate how attackers can combine and exploit the design flaws we found in SmartThings.

In our first attack, we built an app that promised to monitor the battery levels of various wireless devices around the home, such as motion sensors, leak detectors, and door locks. However, once installed by an unsuspecting user, this seemingly benign app was programmed to snoop on the other messages sent by those devices, opening a key vulnerability.

When the authorized user creates a new PIN code for a door lock, the lock itself will acknowledge the changed code by sending a confirmation message to the network. That message contains the new code, which could then be read by the malicious battery-monitoring app. The app can then send the code to its designer by SMS text message – effectively sending a house key directly to a prospective intruder.

In our second attack, we were able to snoop on the supposedly secure communications between a SmartApp and its companion Android mobile app. This allowed us to impersonate the Android app and send commands to the SmartApp – such as to create a new PIN code that would let us into the home.

Our third and fourth attacks involved writing malicious SmartApps that were able to take advantage of other security flaws. One custom SmartApp could disable “vacation mode,” a popular occupancy-simulation feature; we stopped a smart home system from turning lights on and off and otherwise behaving as if the home were occupied. Another custom SmartApp was able to falsely trigger a fire alarm by pretending to be a carbon monoxide sensor.

Room for improvement

Taking a step back, what does this mean for smart homes in general? Are these results indicative of the industry as a whole? Can smart homes ever be safe?

There are great benefits to gain from smart homes, and the Internet of Things in general, that ultimately lead to an improved quality of living. However, given the security weaknesses in today’s systems, caution is appropriate.

These are new technologies in nascent stages, and users should think about whether they are comfortable with giving third parties (e.g., apps or smart home platforms) remote access to their devices. For example, personally, I wouldn’t mind giving smart home technologies remote access to my window shades or desk lamps. But I would be wary of staking my safety on remotely controlled door locks, fire alarms, and ovens, as these are security- and safety-critical devices. If misused, those systems could allow – or even cause – physical harm.

However, I might change that assessment if systems were better designed to reduce the risks of failure or compromise, and to better protect users’ security.

Acknowledgements: This research is the result of a collaboration with Jaeyeon Jung and Atul Prakash.

The Conversation

Earlence Fernandes, Ph.D. student, Systems and Security, University of Michigan

This article was originally published on The Conversation. Read the original article.

Supreme Court approves legal authority to gain unauthorized access to any computer


It just got a whole lot easier for local and federal law enforcement to gain remote access to computers connected to the internet when the Supreme Court recently approved changes to the federal rules of criminal procedure. The changes allow warrants authorizing searches of any remote computer system, regardless of ownership, physical location or local laws.

These warrants are particularly important to computer crimes divisions since many investigations result in turning up anonymous hosts, or users who don’t share their true identity in any way.

Unless Congress takes action beforehand, the new rules go into effect in December 2016.

What if the FBI tried to crack an Android phone? We attacked one to find out


The Justice Department has managed to unlock an iPhone 5c used by the gunman Syed Rizwan Farook, who with his wife killed 14 people in San Bernardino, California, last December. The high-profile case has pitted federal law enforcement agencies against Apple, which fought a legal order to work around its passcode security feature to give law enforcement access to the phone’s data. The FBI said it relied on a third party to crack the phone’s encrypted data, raising questions about iPhone security and whether federal agencies should disclose their method.

But what if the device had been running Android? Would the same technical and legal drama have played out?

We are Android users and researchers, and the first thing we did when the FBI-Apple dispute hit popular media was read Android’s Full Disk Encryption documentation.

We attempted to replicate what the FBI had wanted to do on an Android phone and found some useful results. Beyond the fact the Android ecosystem involves more companies, we discovered some technical differences, including a way to remotely update and therefore unlock encryption keys, something the FBI was not able to do for the iPhone 5c on its own.

The easy ways in

Data encryption on smartphones involves a key that the phone creates by combining 1) a user’s unlock code, if any (often a four- to six-digit passcode), and 2) a long, complicated number specific to the individual device being used. Attackers can try to crack either the key directly – which is very hard – or combinations of the passcode and device-specific number, which is hidden and roughly equally difficult to guess.
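As a concrete illustration, the key-combination step can be sketched with a slow key derivation function. This is a minimal sketch assuming a scrypt KDF; the salt handling and cost parameters here are hypothetical, not Android's actual values:

```python
import hashlib

def derive_key(passcode: str, device_secret: bytes) -> bytes:
    # Combine the user's passcode with a device-specific secret via a
    # deliberately slow KDF; these scrypt parameters are illustrative.
    return hashlib.scrypt(passcode.encode(), salt=device_secret,
                          n=2**14, r=8, p=1, dklen=32)

# The same inputs always yield the same key; neither the passcode nor
# the device secret alone is enough to reproduce it.
key = derive_key("123456", b"unique-per-device-secret")
```

Because the device-specific secret is long and hidden, an attacker who cannot extract it is left guessing a value roughly as hard as the encryption key itself.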

Decoding this strong encryption can be very difficult. But sometimes getting access to encrypted data from a phone doesn’t involve any code-breaking at all. Here’s how:

  • A custom app could be installed on a target phone to extract information. In March 2011, Google remotely installed a program that cleaned up phones infected by malicious software. It is unclear if Android still allows this.
  • Many applications use Android’s Backup API. The information that is backed up, and thereby accessible from the backup site directly, depends on which applications are installed on the phone.
  • If the target data are stored on a removable SD card, it may be unencrypted. Only the most recent versions of Android allow the user to encrypt an entire removable SD card; not all apps encrypt data stored on an SD card.
  • Some phones have fingerprint readers, which can be fooled with an image of the phone owner’s fingerprint.
  • Some people have modified their phones’ operating systems to give them “root” privileges – access to the device’s data beyond what is allowed during normal operations – potentially weakening security.

But if these options are not available, code-breaking is the remaining way in. In what is called a “brute force” attack, a phone can be unlocked by trying every possible encryption key (i.e., all character combinations possible) until the right one is reached and the device (or data) unlocks.
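To make the brute-force idea concrete, here is a toy sketch that tries every four-digit PIN against a key derived from the PIN and a device secret. All names and KDF parameters are invented for illustration; a real attack would target a much larger space:

```python
import hashlib
from itertools import product
from string import digits

def derive_key(pin: str, device_secret: bytes) -> bytes:
    # Illustrative KDF; real parameters are far more expensive.
    return hashlib.scrypt(pin.encode(), salt=device_secret,
                          n=2**10, r=8, p=1, dklen=32)

def brute_force(target_key: bytes, device_secret: bytes, length: int = 4):
    # Try every numeric PIN of the given length until the derived key
    # matches -- the passcode space, far smaller than the full key space.
    for combo in product(digits, repeat=length):
        pin = "".join(combo)
        if derive_key(pin, device_secret) == target_key:
            return pin
    return None
```

For a four-digit PIN this loop makes at most 10,000 KDF calls; the deliberately slow KDF is exactly what makes each guess costly.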

Starting the attack

A very abstract representation of the derivation of the encryption keys on Android.
William Enck and Adwait Nadkarni, CC BY-ND

There are two types of brute-force attacks: offline and online. In some ways an offline attack is easier – by copying the data off the device and onto a more powerful computer, specialized software and other techniques can be used to try all different passcode combinations.

But offline attacks can also be much harder, because they require either trying every single possible encryption key, or figuring out the user’s passcode and the device-specific key (the unique ID on Apple, and the hardware-bound key on newer versions of Android).

To try every potential solution to a fairly standard 128-bit AES key means trying all 100 undecillion (10^38) potential solutions – enough to take a supercomputer more than a billion billion years.

Guessing the passcode could be relatively quick: for a six-digit PIN with only numbers, that’s just a million options. If letters and special symbols like “$” and “#” are allowed, there would be more options, but still only in the hundreds of billions. However, guessing the device-specific key would likely be just as hard as guessing the encryption key.
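The arithmetic behind those figures is easy to check. The counts below are illustrative, assuming the 94 printable ASCII characters for the mixed-symbol case:

```python
# Search-space sizes for the passcode policies discussed above.
six_digit_pins = 10 ** 6       # digits only: one million combinations
six_char_printable = 94 ** 6   # printable ASCII: ~6.9e11, hundreds of billions
aes_128_keys = 2 ** 128        # the full 128-bit key space, ~3.4e38
```

Even the largest passcode space is dwarfed by the full key space, which is why attackers target the passcode whenever the device-specific key is out of reach.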

Considering an online attack

That leaves the online attack, which happens directly on the phone. With the device-specific key readily available to the operating system, this reduces the task to the much smaller burden of trying only all potential passcodes.

However, the phone itself can be configured to resist online attacks. For example, the phone can insert a time delay between a failed passcode guess and allowing another attempt, or even delete the data after a certain number of failed attempts.

Apple’s iOS has both of these capabilities, automatically introducing increasingly long delays after each failure, and, at a user’s option, wiping the device after 10 passcode failures.
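A hypothetical sketch of such throttling logic – a fixed delay every few failures plus an optional wipe threshold – might look like this; every name and default below is invented for illustration:

```python
class LockScreen:
    """Toy model of online-attack throttling on a lock screen."""

    def __init__(self, correct_pin, delay_every=5, delay_s=30, wipe_after=None):
        self.correct_pin = correct_pin
        self.delay_every = delay_every  # insert a delay every N failures
        self.delay_s = delay_s
        self.wipe_after = wipe_after    # None: never wipe from this screen
        self.failures = 0
        self.wiped = False

    def try_pin(self, pin):
        if self.wiped:
            return "wiped"
        if pin == self.correct_pin:
            self.failures = 0
            return "unlocked"
        self.failures += 1
        if self.wipe_after is not None and self.failures >= self.wipe_after:
            self.wiped = True
            return "wiped"
        if self.failures % self.delay_every == 0:
            return f"delay {self.delay_s}s"
        return "retry"
```

The two knobs – delay schedule and wipe threshold – are exactly the points where iOS and Android configurations differ.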

Attacking an Android phone

What happens when one tries to crack into a locked Android phone? Different manufacturers set up their Android devices differently; Nexus phones run Google’s standard Android configuration. We used a Nexus 4 device running stock Android 5.1.1 with full disk encryption enabled.

Android adds 30-second delays after every five failed attempts; snapshot of the 40th attempt.
William Enck and Adwait Nadkarni, CC BY-ND

We started with a phone that was already running but had a locked screen. Android allows PINs, passwords and pattern-based locking, in which a user must connect a series of dots in the correct sequence to unlock the phone; we conducted this test with each type. We had manually assigned the actual passcode on the phone, but our unlocking attempts were randomly generated.

After five failed passcode attempts, Android imposed a 30-second delay before allowing another try. Unlike the iPhone, the delays did not get longer with subsequent failures; over 40 attempts, we encountered only a 30-second delay after every five failures. The phone kept count of how many successive attempts had failed, but did not wipe the data. (Android phones from other manufacturers may insert increasing delays similar to iOS.)

These delays impose a significant time penalty on an attacker. Brute-forcing a six-digit PIN (one million combinations) could incur a worst-case delay of just more than 69 days. If the passcode were six characters, even using only lowercase letters, the worst-case delay would be more than 58 years.
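Those worst-case figures follow directly from the delay schedule we observed – one 30-second pause per five attempts (simple arithmetic, ignoring the time to type each guess):

```python
def worst_case_seconds(combinations, delay_every=5, delay_s=30):
    # Total enforced delay if every guess fails until the last one.
    return (combinations // delay_every) * delay_s

pin_days = worst_case_seconds(10 ** 6) / 86_400              # ~69.4 days
alpha_years = worst_case_seconds(26 ** 6) / (86_400 * 365)   # ~58.8 years
```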

When we repeated the attack on a phone that had been turned off and was just starting up, we were asked to reboot the device after 10 failed attempts. After 20 failed attempts and two reboots, Android started a countdown of the failed attempts that would trigger a device wipe. We continued our attack, and at the 30th attempt – as warned on the screen and in the Android documentation – the device performed a “factory reset,” wiping all user data.

Just one attempt remaining before the device wipes its data.
William Enck and Adwait Nadkarni, CC BY-ND

In contrast to offline attacks, there is a difference between Android and iOS for online brute force attacks. In iOS, both the lock screen and boot process can wipe the user data after a fixed number of failed attempts, but only if the user explicitly enables this. In Android, the boot process always wipes the user data after a fixed number of failed attempts. However, our Nexus 4 device did not allow us to set a limit for lock screen failures. That said, both Android and iOS have options for remote management, which, if enabled, can wipe data after a certain number of failed attempts.

Using special tools

The iPhone 5c in the San Bernardino case is owned by the employer of one of the shooters, and has mobile device management (MDM) software installed that lets the company track it and perform other functions on the phone by remote control. Such an MDM app is usually installed as a “Device Administrator” application on an Android phone, and set up using the “Apple Configurator” tool for iOS.

Our test MDM successfully resets the password. Then, the scrypt key derivation function (KDF) is used to generate the new key encryption key (KEK).
William Enck and Adwait Nadkarni, CC BY-ND

We built our own MDM application for our Android phone, and verified that the passcode can be reset without the user’s explicit consent; this also updated the phone’s encryption keys. We could then use the new passcode to unlock the phone from the lock screen and at boot time. (For this attack to work remotely, the phone must be on and have Internet connectivity, and the MDM application must already be programmed to reset the passcode on command from a remote MDM server.)
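The key-handling step can be sketched as a toy model: the disk encryption key itself never changes; only the key-encryption key (KEK), derived from the passcode with scrypt, is replaced when the passcode is reset. The XOR “wrapping” below stands in for a real key-wrap cipher, and every name and parameter is illustrative:

```python
import hashlib
import secrets

def kek_from_passcode(passcode: str, salt: bytes) -> bytes:
    # scrypt-based KEK derivation; cost parameters are illustrative.
    return hashlib.scrypt(passcode.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

def xor_wrap(key: bytes, kek: bytes) -> bytes:
    # Stand-in for a real key-wrap cipher such as AES key wrap.
    return bytes(a ^ b for a, b in zip(key, kek))

disk_key = secrets.token_bytes(32)   # the actual disk encryption key
salt = secrets.token_bytes(16)

wrapped = xor_wrap(disk_key, kek_from_passcode("user-chosen-pin", salt))

# An MDM-driven reset: unwrap with the old KEK, re-wrap with a new one,
# all without needing the user's cooperation.
unwrapped = xor_wrap(wrapped, kek_from_passcode("user-chosen-pin", salt))
rewrapped = xor_wrap(unwrapped, kek_from_passcode("mdm-reset-pin", salt))
```

After the re-wrap, the new passcode unlocks the same data, which is why a passcode reset does not require re-encrypting the disk.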

Figuring out where to get additional help

If an attacker needed help from a phone manufacturer or software company, Android presents a more diverse landscape.

Generally, operating system software is signed with a digital code that proves it is genuine, and which the phone requires before actually installing it. Only the company with the correct digital code can create an update to the operating system software – which might include a “back door” or other entry point for an attacker who had secured the company’s assistance. For any iPhone, that’s Apple. But many companies build and sell Android phones.

Google, the primary developer of the Android operating system, signs the updates for its flagship Nexus devices. Samsung signs for its devices. Cellular carriers (such as AT&T or Verizon) may also sign. And many users install a custom version of Android (such as CyanogenMod). The company or companies that sign the software would be the ones the FBI needed to persuade – or compel – to write software allowing a way in.

Comparing iOS and Android

Overall, devices running the most recent versions of iOS and Android are comparably protected against offline attacks, when configured correctly by both the phone manufacturer and the end user. Older versions may be more vulnerable; one system could be cracked in less than 10 seconds. Additionally, configuration and software flaws by phone manufacturers may also compromise security of both Android and iOS devices.

But we found differences for online attacks, based on user and remote management configuration: Android has a more secure default for online attacks at start-up, but our Nexus 4 did not allow the user to set a maximum number of failed attempts from the lock screen (other devices may vary). Devices running iOS have both of these capabilities, but a user must enable them manually in advance.

Android security may also be weakened by remote control software, depending on the software used. Though the FBI was unable to gain access to the iPhone 5c by resetting the password this way, we were successful with a similar attack on our Android device.

The Conversation

William Enck, Assistant Professor of Computer Science, North Carolina State University and Adwait Nadkarni, Ph.D. Student of Computer Science, North Carolina State University

This article was originally published on The Conversation. Read the original article.

Showtime’s ‘Dark Net’ Uncovers Deep Web Internet Culture

Through the internet, the impact of technology on our lives is both unprecedented and undeniable.

From cyber relationships, S&M culture and child abuse to biohacking, content moderation and nootropics, Dark Net finally puts into moving pictures what blogs have been typing up a storm about for the past few years.

At first glance the show seems like your run-of-the-mill cyber culture documentary, but the topics being explored are of a much more taboo persuasion — and it’s not just the underground pedophile networks accessed via Tor we’re talking about.

While Dark Net covers a lot of ground in technology subculture, it also serves as a bit of a transhumanist playground, discussing cutting edge and controversial topics such as RFID chip implants and other biohacks, nootropics, artificial intelligence girlfriends, and more. The main topic, however, seems to be the nature of human relationships being altered, augmented, and even hindered by technology, and it’s not difficult to understand why.

Exploring subcultures and trends such as sadomasochism, porn addiction and even internet addiction, Dark Net attempts to bring to light otherwise undisclosed topics that most people refuse to talk about openly.

Dark Net is on Showtime, Thursday nights.

Max Klaassen
Public enema xenomorphic robot from the dimension Zrgauddon.

Memetic Warfare and the Sixth Domain Part Three


Can an image, sound, video or string of words influence the human mind so strongly the mind is actually harmed or controlled? Cosmoso takes a look at technology and the theoretical future of psychological warfare with Part Three of an ongoing series. 

Click here for Part One.

Click here for Part Two.

A lot of the responses I got to the first two installments talked about religion being weaponized memes. People do fight and kill on behalf of their religions and memes play a large part in disseminating the message and information religions have to offer.

The curved bullet meme is a great example. Most of the comments associated with this image have to do with how dumb someone would have to be to believe it would work. Some people have an intuitive understanding of spatial relations; others have enough education in physics or basic gun safety to feel alarm bells going off well before they’d try something this dumb. It’s a pretty dangerous idea to put out there, though, because a percentage of the people the image reaches could try something stupid. Is it a viable memetic weapon? Possibly! I present to you, the curved bullet meme.

[Image: “How to curve the path of a bullet” diagram]

The dangers here should be obvious. The move starts with “begin trigger-pull with pistol pointed at chest (near heart),” and anyone taking it seriously beyond that point is Darwin Award material.

Whoever created this image presumably had no intention of anyone actually trying it. So, in order for someone to fall for this fairly obvious trick, they’d have to be pretty dumb. There is another way people fall for tricks, though.

There is more than one way to end up the victim of a mindfuck, and ignorance is part of a lot of them – but ignorance can actually be induced. In the case of religion, several giant pieces of information or ways of thinking must be misunderstood before someone can believe that the earth is coming to an end in 2012, or that the creator of the universe wants you to burn in hell for eternity for not following the rules. By trash-talking religion in general, I’ve just made a percentage of readers angry – and that’s the point. Even setting aside every other criticism of religion, we can agree that religion puts its believers in the position of becoming upset or outraged by very simple graphics or text. As a non-believer, a lot of the things religious people say sound as silly to me as the curved bullet graphic seems to a well-trained marksman.

To oversimplify it further: religions are elaborate, bad advice. You can inoculate yourself against that kind of meme but the vast majority of people out there cling desperately, violently to some kind of doctrine that claims to answer one or more of the most unanswerable parts of life. When people feel relief wash over them, they are more easily duped into doing what it takes to keep their access to that feeling.

There are tons of non-religious little memes out there that simply mess with anyone who follows bad advice. It can be a prank but the pranks can get pretty destructive. Check out this image from the movie Fight Club:

[Image: “Motor Oil” prank flyer from the movie Fight Club]

Thinking no one fell for this one? For one thing, it’s from a movie, and in the movie it was supposed to be a mean-spirited prank that maybe some people fell for. Go ahead and google “fertilize used motor oil”, though, and see how many people are out there asking questions about it. It may blow your mind…

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY