
Google Tracks You Even If You’re Not Signed In


A new lawsuit alleges that Google violates users’ privacy and data security by collecting and using their private browsing information. Specifically, the suit states that Google systematically includes search queries in the URL of the search results page, even when the user is browsing in private mode. Privacy advocates have called this an egregious violation of user privacy and demanded action from the Federal Trade Commission, and the company has been sued by several groups, including Consumer Watchdog.

The lawsuit states that Google’s practices violate privacy laws and should be stopped, which would be a significant step forward for users’ online privacy. Google’s private browsing option has been around for some time, but the suit claims the company violates California law by gathering personal information even when users browse in “private” mode. That law requires consent before a company can monitor private communications, so Google would need to obtain consent before collecting any such information.

Google’s data-collection practices have also been the subject of other actions, and this case is the latest in a series. In 2019, France’s data protection authority, the CNIL, fined Google EUR 50 million for violating the GDPR, finding that the company failed to obtain clear consent from end users, was not sufficiently transparent about its practices, and did not adequately disclose the role of its partners.

Earlier lawsuits alleging that Google violates user privacy have not succeeded, but the company still faces a number of cases over its data-collection practices. The suit says the company collects browsing histories and search queries even when users’ browsers are in private mode, and further claims that Google intentionally deceives consumers by presenting these actions as necessary for its business. If successful, the lawsuit could force Google to pay substantial damages to its users.

The US government is also suing the company for illegally invading users’ privacy. That suit claims Google has knowingly collected information about its users, though it is unclear exactly how the information is gathered. The data is collected when a person uses the search engine in private mode, and this is not the only violation Google has been accused of. Google maintains that the data is used to improve the quality of its search experience.

While Google does not explicitly deny that it collects this information, it does not always disclose that it also gathers data about users through third-party tracking cookies, nor is it required to disclose the specific details of the cookies it collects. Google has announced plans to phase out third-party tracking cookies, and says the change will not have a negative impact on its advertising business.

A class-action lawsuit filed in the US alleges that Google violated user privacy by collecting data through third-party tracking cookies, in violation of California’s Computer Data Access and Fraud Act.

The suit further alleges that Google’s privacy controls are deceptive and that the company collects information even without consumer consent. It also points to the Firebase SDK, a Google tool embedded in many third-party Android apps, which lets advertisers learn the composition of their audience. Google can then analyze that audience data and use it to create hyper-personalized ads.

In a separate suit, plaintiffs claim that Google has harmed the rights of millions of users by tracking their activities; that case is still pending in the US. The Texas Attorney General’s office has previously filed a similar suit against the company over its failure to follow the law. Google’s privacy policies are also the subject of a class-action lawsuit, in which a proposed settlement was thrown out on the grounds that the practices at issue were a major violation of user rights.

Security risks in the age of smart homes


Smart homes, an aspect of the Internet of Things, offer the promise of improved energy efficiency and control over home security. Integrating various devices together can offer users easy programming of many devices around the home, including appliances, cameras and alarm sensors. Several systems can handle this type of task, such as Samsung SmartThings, Google Brillo/Weave, Apple HomeKit, Allseen Alljoyn and Amazon Alexa.

But there are also security risks. Smart home systems can leave owners vulnerable to serious threats, such as arson, blackmail, theft and extortion. Current security research has focused on individual devices, and how they communicate with each other. For example, the MyQ garage system can be turned into a surveillance tool, alerting would-be thieves when a garage door opened and then closed, and allowing them to remotely open it again after the residents had left. The popular ZigBee communication protocol can allow attackers to join the secure home network.

Little research has focused on what happens when these devices are integrated into a coordinated system. We set out to determine exactly what these risks might be, in the hope of showing platform designers areas in which they should improve their software to better protect users’ security in future smart home systems.

The popular SmartThings product line.
Zon@ IT/YouTube, CC BY

Evaluating the security of smart home platforms

First, we surveyed most of the above platforms to understand the landscape of smart home programming frameworks. We looked at what systems existed, and what features they offered. We also looked at what devices they could interact with, whether they supported third-party apps, and how many apps were in their app stores. And, importantly, we looked at their security features.

We decided to focus deeper inquiry on SmartThings because it is a relatively mature system, with 521 apps in its app store, supporting 132 types of IoT devices for the home. In addition, SmartThings has a number of conceptual similarities to other, newer systems that make our insights potentially relevant more broadly. For example, SmartThings and other systems offer trigger-action programming, which lets you connect sensors and events to automate aspects of your home. That is the sort of capability that can turn your walkway lights on when a driveway motion detector senses a car driving up, or can make sure your garage door is closed when you turn your bedroom light out at night.
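Trigger-action programming of this kind can be pictured as a simple event loop that matches incoming sensor events against user-defined rules. The sketch below is purely illustrative (real SmartThings apps are written in Groovy against the SmartThings API; every name here is hypothetical):

```python
# Minimal sketch of trigger-action programming: sensor events are
# matched against user-defined rules, and matching rules fire device
# commands. All device names and APIs here are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    device: str   # e.g. "driveway_motion"
    value: str    # e.g. "active"

# A rule pairs a trigger predicate with an action that returns a command.
Rule = tuple[Callable[[Event], bool], Callable[[], str]]

rules: list[Rule] = [
    # "Turn the walkway lights on when the driveway motion sensor fires."
    (lambda e: e.device == "driveway_motion" and e.value == "active",
     lambda: "walkway_lights: on"),
    # "Close the garage door when the bedroom light is turned off."
    (lambda e: e.device == "bedroom_light" and e.value == "off",
     lambda: "garage_door: close"),
]

def dispatch(event: Event) -> list[str]:
    """Run every rule whose trigger matches the event."""
    return [action() for trigger, action in rules if trigger(event)]

print(dispatch(Event("driveway_motion", "active")))
print(dispatch(Event("bedroom_light", "off")))
```

The appeal of this model is its simplicity: each automation is just a (trigger, action) pair, which is also why flaws in how platforms scope those actions matter so much.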

We tested for potential security holes in the system and 499 SmartThings apps (also called SmartApps) from the SmartThings app store, seeking to understand how prevalent these security flaws were.

Finding and attacking main weaknesses

We found two major categories of vulnerability: excessive privileges and insecure messaging.

Overprivileged SmartApps: SmartApps have privileges to perform specific operations on a device, such as turning an oven on and off or locking and unlocking a door. This idea is similar to smartphone apps asking for different permissions, such as to use the camera or get the phone’s current location. These privileges are grouped together; rather than getting separate permission for locking a door and unlocking it, an app would be allowed to do both – even if it didn’t need to.

For example, imagine an app that can automatically lock a specific door after 9 p.m. The SmartThings system would also grant that app the ability to unlock the door. An app’s developer cannot ask only for permission to lock the door.

More than half – 55 percent – of 499 SmartApps we studied had access to more functions than they needed.
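The overprivilege problem comes from bundling operations into coarse capabilities. A toy model of that granting logic, with made-up capability names rather than the real SmartThings capability model, looks like this:

```python
# Sketch of a coarse-grained permission model: operations are bundled
# per capability, so an app that needs only "lock" is also granted
# "unlock". Capability names are illustrative, not SmartThings' own.

CAPABILITIES = {
    "lock": {"lock", "unlock"},          # bundled together
    "battery": {"read_battery_level"},
}

def grant(requested_capabilities: set[str]) -> set[str]:
    """Granting a capability grants every operation in its bundle."""
    ops: set[str] = set()
    for cap in requested_capabilities:
        ops |= CAPABILITIES[cap]
    return ops

# An auto-lock app only ever needs to lock the door after 9 p.m. ...
granted = grant({"lock"})
# ...but it is handed the ability to unlock it too: overprivilege.
print("unlock" in granted)  # True
```

A finer-grained model would let the developer request the `lock` operation alone, shrinking what a compromised or malicious app could do.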

Insecure messaging system: SmartApps can communicate with physical devices by exchanging messages, which can be envisioned as analogous to instant messages exchanged between people. SmartThings devices send messages that can contain sensitive data, such as a PIN code to open a particular lock.

We found that as long as a SmartApp has even the most basic level of access to a device (such as permission to show how much battery life is left), it can receive all the messages the physical device generates – not just those messages about functions it has privileges to. So an app intended only to read a door lock’s battery level could also listen to messages that contain a door lock’s PIN code.

In addition, we found that SmartApps can “impersonate” smart-home equipment, sending out their own messages that look like messages generated by real physical devices. The malicious SmartApp can read the network’s ID for the physical device, and create a message with that stolen ID. That battery-level app could even covertly send a message as if it were the door lock, falsely reporting it had been opened, for example.

SmartThings does not ensure that only physical devices can create messages with a certain ID.
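Both messaging flaws can be illustrated on a toy event bus: subscriptions are scoped per device rather than per message type, and the bus never verifies that the sender of a message really is the device whose ID it carries. This is a hypothetical sketch, not the actual SmartThings messaging API:

```python
# Toy event bus showing the two flaws described above:
# (1) any app with minimal access to a device can read *all* of that
#     device's messages, and
# (2) any app can publish messages under a stolen device ID, because
#     the bus never authenticates the sender.
# All names and message formats are illustrative.

received = []
subscribers: dict[str, list] = {}  # device_id -> list of callbacks

def subscribe(device_id: str, callback) -> None:
    # Flaw 1: subscription is per-device, not per-message-type, so a
    # battery-monitoring app also receives PIN-code messages.
    subscribers.setdefault(device_id, []).append(callback)

def publish(device_id: str, payload: dict) -> None:
    # Flaw 2: no check that the caller is the physical device that
    # owns this ID, so any app can impersonate it.
    for cb in subscribers.get(device_id, []):
        cb(payload)

# A "battery monitor" app subscribes using its minimal privilege...
subscribe("front_door_lock", received.append)

# ...the real lock confirms a new PIN code; the app sees it anyway.
publish("front_door_lock", {"type": "codeReport", "pin": "5678"})

# A malicious app spoofs the lock's ID to fake an "opened" event.
publish("front_door_lock", {"type": "status", "state": "opened"})

print(received)
```

Fixing either flaw alone would help: scoping subscriptions to the operations an app is privileged for blocks the eavesdropping, and authenticating senders blocks the impersonation.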

SmartThings Proof-of-Concept Attacks

Attacking the design flaws

To move beyond the potential weaknesses into actual security breaches, we built four proof-of-concept attacks to demonstrate how attackers can combine and exploit the design flaws we found in SmartThings.

In our first attack, we built an app that promised to monitor the battery levels of various wireless devices around the home, such as motion sensors, leak detectors, and door locks. However, once installed by an unsuspecting user, this seemingly benign app was programmed to snoop on the other messages sent by those devices, opening a key vulnerability.

When the authorized user creates a new PIN code for a door lock, the lock itself will acknowledge the changed code by sending a confirmation message to the network. That message contains the new code, which could then be read by the malicious battery-monitoring app. The app can then send the code to its designer by SMS text message – effectively sending a house key directly to a prospective intruder.

In our second attack, we were able to snoop on the supposedly secure communications between a SmartApp and its companion Android mobile app. This allowed us to impersonate the Android app and send commands to the SmartApp – such as to create a new PIN code that would let us into the home.

Our third and fourth attacks involved writing malicious SmartApps that were able to take advantage of other security flaws. One custom SmartApp could disable “vacation mode,” a popular occupancy-simulation feature; we stopped a smart home system from turning lights on and off and otherwise behaving as if the home were occupied. Another custom SmartApp was able to falsely trigger a fire alarm by pretending to be a carbon monoxide sensor.

Room for improvement

Taking a step back, what does this mean for smart homes in general? Are these results indicative of the industry as a whole? Can smart homes ever be safe?

There are great benefits to gain from smart homes, and the Internet of Things in general, that ultimately lead to an improved quality of living. However, given the security weaknesses in today’s systems, caution is appropriate.

These are new technologies in nascent stages, and users should think about whether they are comfortable with giving third parties (e.g., apps or smart home platforms) remote access to their devices. For example, personally, I wouldn’t mind giving smart home technologies remote access to my window shades or desk lamps. But I would be wary of staking my safety on remotely controlled door locks, fire alarms, and ovens, as these are security- and safety-critical devices. If misused, those systems could allow – or even cause – physical harm.

However, I might change that assessment if systems were better designed to reduce the risks of failure or compromise, and to better protect users’ security.

Acknowledgements: This research is the result of a collaboration with Jaeyeon Jung and Atul Prakash.

The Conversation

Earlence Fernandes, Ph.D. student, Systems and Security, University of Michigan

This article was originally published on The Conversation. Read the original article.

Supreme Court approves legal authority to gain unauthorized access to any computer


It just got a whole lot easier for local and federal law enforcement to gain remote access to computers connected to the internet: the Supreme Court recently approved changes to the Federal Rules of Criminal Procedure. The changes allow warrants for searches of any remote computer system, regardless of local laws, ownership, or physical location.

Such warrants are particularly important to computer crimes divisions, since many investigations turn up anonymous hosts, or users who conceal their true identity entirely.

Unless Congress takes action beforehand, the new rule takes effect in December 2016.