Internet company Tekoso Media has recently begun offering web hosting services, including shared hosting, cloud hosting and storage, VPS hosting and dedicated servers for both Windows and Linux platforms. Prices are low and come with generous software packages, security features and cutting-edge technology.
Servers in the shared, VPS and managed packages also come with cPanel and a whole host of free add-ons that every internet marketer needs for a launch, including ticket systems, multiple email accounts, billing systems and membership functionality.
WordPress Hosting at Tekoso Media
WordPress is obviously part of what they offer, but they go the extra mile by offering WordPress hosting at a discounted rate as well. On top of that, their point-and-click, drag-and-drop website creators and integrations make it easy to get up and running within minutes.
If you’re launching a new product or service, or even hosting a blog network for search engine optimization such as authority blogs, Tekoso Web Hosting is your best bet for getting started quickly and cheaply. For just a few bucks you can have lightning-fast servers, more hard drive space and RAM than you’ll ever need and plenty of bandwidth, all without breaking the bank the way you would with competing web hosting providers.
Try Tekoso and take them for a test drive. They even offer a thirty-day money-back guarantee on any server that proves unsatisfactory. This offer may be limited, so check them out today!
Its business, though, was not standard international trade. Avalanche provided a hacker’s delight of a one-stop shop for all kinds of cybercrime to criminals without their own technical expertise but with the motivation and ingenuity to perpetrate a scam. At the height of its activity, the Avalanche group had hijacked hundreds of thousands of computer systems in homes and businesses around the world, using them to send more than a million criminally motivated emails per week.
Our study of Avalanche, and of the groundbreaking law enforcement effort that ultimately took it down in December 2016, gives us a look at how the cybercriminal underground will operate in the future, and how police around the world must cooperate to fight back.
Just as regular businesses can hire online services – buying Google products to handle their email, spreadsheets and document sharing, and hosting websites on Amazon with payments handled by PayPal – cybercriminals can do the same. Sometimes these criminals use legitimate service platforms like PayPal in addition to others specifically designed for illicit marketplaces.
And just as the legal cloud-computing giants aim to efficiently offer products of broad use to a wide customer base, criminal computing services do the same. They pursue technological capabilities that a wide range of customers want to use more easily. Today, with an internet connection and some currency (bitcoin preferred), almost anyone can buy and sell narcotics online, purchase hacking services or rent botnets to cripple competitors and spread money-making malware.
The Avalanche network excelled at this, selling technically advanced products to its customers while using sophisticated techniques to evade detection and identification by law enforcement. Avalanche offered, in business terms, “cybercrime as a service,” supporting a broad digital underground economy. By leaving to others the design and execution of innovative ways to use its products, Avalanche and its criminal customers efficiently split the work of planning, executing and developing the technology for advanced cybercrime scams.
To date, cybercrime has offered high profits – like the US$1 billion annual ransomware market – with low risk. Cybercriminals often use technical means to obscure their identities and locations, making it challenging for law enforcement to effectively pursue them.
In addition, the internet allows criminal operations to function across geographic boundaries and legal jurisdictions in ways that are simply impractical in the physical world. Criminals in the real world must be at a crime’s actual site and may leave physical evidence behind – like fingerprints on a bank vault or records of traveling to and from the place the crime occurred. In cyberspace, a criminal in Belarus can hack into a vulnerable server in Hungary to remotely direct distributed operations against victims in South America without ever setting foot below the Equator.
A path forward
All these factors present significant challenges for police, who must also contend with limited budgets and manpower with which to conduct complex investigations, the technical challenges of following sophisticated hackers through the internet and the need to work with officials in other countries.
The multinational cooperation involved in successfully taking down the Avalanche network can be a model for future efforts in fighting digital crime. Coordinated by Europol, the European Union’s police agency, the plan takes inspiration from the sharing economy.
Through those partnerships, various national police agencies were able to collect pieces of information from their own jurisdictions and send them, through Europol, to German authorities, who took the lead on the investigation. Analyzing all of that collected data revealed the identities of the suspects and untangled the network’s complex web of servers and software. The nonprofit Shadowserver Foundation and others assisted with the actual takedown of the server infrastructure, while anti-virus companies helped victims clean up their computers.
Using the network against the criminals
Police are increasingly learning – often from private sector experts – how to detect and stop criminals’ online activities. Avalanche’s complex technological setup lent itself to a technique called “sinkholing,” in which malicious internet traffic is sent into the electronic equivalent of a bottomless pit. When a hijacked computer tried to contact its controller, the police-run sinkhole captured that message and prevented it from reaching the actual central controller. Without control, the infected computer couldn’t do anything nefarious.
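The sinkholing idea can be sketched in a few lines. This is a toy simulation, not real takedown tooling; the domain name is made up and the IP addresses come from reserved documentation ranges:

```python
class Sinkhole:
    """Police-run endpoint: logs bot check-ins but never issues commands."""
    def __init__(self):
        self.observed_victims = []

    def handle_beacon(self, victim_ip, payload):
        self.observed_victims.append(victim_ip)  # evidence used to notify victims
        return None  # no command comes back, so the bot sits idle

# Hypothetical DNS table for the botnet's control domain.
dns = {"c2.example-botnet.net": "203.0.113.7"}   # resolves to the criminal C2
dns["c2.example-botnet.net"] = "198.51.100.1"    # takedown: repointed to sinkhole

sink = Sinkhole()
servers = {"198.51.100.1": sink.handle_beacon}

# A hijacked home PC phones home; the sinkhole swallows the message.
addr = dns["c2.example-botnet.net"]
command = servers[addr]("192.0.2.55", b"ping")
print(command, sink.observed_victims)  # None ['192.0.2.55']
```

The key property is visible at the end: the infected machine gets no instructions back, and investigators get a log of victim addresses they can use to notify owners.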
However, interrupting the technological systems isn’t enough, unless police are able to stop the criminals too. Three times since 2010, police tried to take down the Kelihos botnet. But each time the person behind it escaped and was able to resume criminal activities using more resilient infrastructure. In early April, however, the FBI was able to arrest Peter Levashov, allegedly its longtime operator, while on a family vacation in Spain.
The Avalanche network was just the beginning of the challenges law enforcement will face when it comes to combating international cybercrime. To keep their enterprises alive, the criminals will share their experiences and learn from the past. Police agencies around the world must do the same to keep up.
With the amount of data storage required for our daily lives growing and growing, and currently available technology being almost saturated, we’re in desperate need of a new method of data storage. The standard magnetic hard disk drive (HDD) – like what’s probably in your laptop computer – has reached its limit, holding a maximum of a few terabytes. Standard optical disk technologies, like compact disc (CD), digital video disc (DVD) and Blu-ray disc, are restricted by their two-dimensional nature – they just store data in one plane – and also by a physical law called the diffraction limit, based on the wavelength of light, that constrains our ability to focus light to a very small volume.
And then there’s the lifetime of the memory itself to consider. HDDs, as we’ve all experienced in our personal lives, may last only a few years before things start to behave strangely or just fail outright. DVDs and similar media are advertised as having a storage lifetime of hundreds of years. In practice this may be cut down to a few decades, assuming the disk is not rewritable. Rewritable disks degrade on each rewrite.
Without better solutions, we face financial and technological catastrophes as our current storage media reach their limits. How can we store large amounts of data in a way that’s secure for a long time and can be reused or recycled?
One approach to improving data storage has been to continue in the direction of optical memory, but extend it to multiple dimensions. Instead of writing the data to a surface, write it to a volume; make your bits three-dimensional. The data are still limited by the physical inability to focus light to a very small space, but you now have access to an additional dimension in which to store the data. Some methods also polarize the light, giving you even more dimensions for data storage. However, most of these methods are not rewritable.
Here’s where the diamonds come in.
A diamond is supposed to be a pure well-ordered array of carbon atoms. Under an electron microscope it usually looks like a neatly arranged three-dimensional lattice. But occasionally there is a break in the order and a carbon atom is missing. This is what is known as a vacancy. Even further tainting the diamond, sometimes a nitrogen atom will take the place of a carbon atom. When a vacancy and a nitrogen atom are next to each other, the composite defect is called a nitrogen vacancy, or NV, center. These types of defects are always present to some degree, even in natural diamonds. In large concentrations, NV centers can impart a characteristic red color to the diamond that contains them.
Nitrogen vacancy centers have a tendency to trap electrons, but the electron can also be forced out of the defect by a laser pulse. For many researchers, the defects are interesting only when they’re holding on to electrons. So for them, the fact that the defects can release the electrons, too, is a problem.
But in our lab, we instead look at these nitrogen vacancy centers as a potential benefit. We think of each one as a nanoscopic “bit.” If the defect has an extra electron, the bit is a one. If it doesn’t have an extra electron, the bit is a zero. This electron yes/no, on/off, one/zero property opens the door for turning the NV center’s charge state into the basis for using diamonds as a long-term storage medium.
Turning the defect into a benefit
Previous experiments with this defect have demonstrated some properties that make diamond a good candidate for a memory platform.
First, researchers can selectively change the charge state of an individual defect so it either holds an electron or not. We’ve used a green laser pulse to assist in trapping an electron and a high-power red laser pulse to eject an electron from the defect. A low-power red laser pulse can help check if an electron is trapped or not. If left completely in the dark, the defects maintain their charged/discharged status virtually forever.
Our method is still diffraction limited, but is 3-D in the sense that we can charge and discharge the defects at any point inside of the diamond. We also present a sort of fourth dimension. Since the defects are so small and our laser is diffraction limited, we are technically charging and discharging many defects in a single pulse. By varying the duration of the laser pulse in a single region we can control the number of charged NV centers and consequently encode multiple bits of information.
Though one could use natural diamonds for these applications, we use lab-grown diamonds so that we can precisely control the concentration of nitrogen vacancy centers in the diamond.
All these improvements add up to a roughly 100-fold enhancement in bit density relative to current DVD technology. That means we can encode all the information from a DVD into a diamond that takes up about one percent of a DVD’s space.
Past just charge, to spin as well
If we could get beyond the diffraction limit of light, we could improve storage capacities even further. We have one novel proposal on this front.
Nitrogen vacancy centers have also been used in what is called super-resolution microscopy to image things much smaller than the wavelength of light. However, since the super-resolution technique works on the same principles of charging and discharging the defects, it would unintentionally alter the pattern one wants to encode. Therefore, we can’t use it as-is for memory storage; we would need some way to back up the already-written data during each read or write step.
Here we propose the idea of what we call charge-to-spin conversion: we temporarily encode the charge state of the defect in the spin state of the defect’s host nitrogen nucleus. Spin is a fundamental property of any elementary particle; it’s similar to charge, and can be imagined as a very tiny magnet permanently attached to the particle.
While the charges are being adjusted to read or write information as desired, the previously written information is well protected in the nitrogen spin state. Once the charge states have been rewritten, the information can be converted back from the nitrogen spin to the charge state through another mechanism, which we call spin-to-charge conversion.
With these advanced protocols, the storage capacity of a diamond would surpass what existing technologies can achieve. This is just a beginning, but these initial results give us a potential way of storing huge amounts of data in a brand-new way. We look forward to transforming this beautiful quirk of physics into a vastly useful technology.
Little research has focused on what happens when these devices are integrated into a coordinated system. We set out to determine exactly what these risks might be, in the hope of showing platform designers areas in which they should improve their software to better protect users’ security in future smart home systems.
Evaluating the security of smart home platforms
First, we surveyed most of the above platforms to understand the landscape of smart home programming frameworks. We looked at what systems existed, and what features they offered. We also looked at what devices they could interact with, whether they supported third-party apps, and how many apps were in their app stores. And, importantly, we looked at their security features.
We decided to focus deeper inquiry on SmartThings because it is a relatively mature system, with 521 apps in its app store, supporting 132 types of IoT devices for the home. In addition, SmartThings has a number of conceptual similarities to other, newer systems that make our insights potentially relevant more broadly. For example, SmartThings and other systems offer trigger-action programming, which lets you connect sensors and events to automate aspects of your home. That is the sort of capability that can turn your walkway lights on when a driveway motion detector senses a car driving up, or can make sure your garage door is closed when you turn your bedroom light out at night.
We tested for potential security holes in the system and 499 SmartThings apps (also called SmartApps) from the SmartThings app store, seeking to understand how prevalent these security flaws were.
Finding and attacking main weaknesses
We found two major categories of vulnerability: excessive privileges and insecure messaging.
Overprivileged SmartApps: SmartApps have privileges to perform specific operations on a device, such as turning an oven on and off or locking and unlocking a door. This idea is similar to smartphone apps asking for different permissions, such as to use the camera or get the phone’s current location. These privileges are grouped together; rather than getting separate permission for locking a door and unlocking it, an app would be allowed to do both – even if it didn’t need to.
For example, imagine an app that can automatically lock a specific door after 9 p.m. The SmartThings system would also grant that app the ability to unlock the door. An app’s developer cannot ask only for permission to lock the door.
More than half – 55 percent – of 499 SmartApps we studied had access to more functions than they needed.
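The overprivilege pattern can be sketched abstractly. The capability groups and operation names below are hypothetical stand-ins, not the real SmartThings capability model:

```python
# Operations come bundled into capability groups; asking for any one
# operation grants the whole group.
CAPABILITY_GROUPS = {
    "lock":   {"lock", "unlock"},  # always granted together
    "switch": {"on", "off"},
}

def grant(requested_ops):
    """Grant every group that contains at least one requested operation."""
    granted = set()
    for group in CAPABILITY_GROUPS.values():
        if group & requested_ops:
            granted |= group
    return granted

requested = {"lock"}            # the app only needs to lock the door at 9 p.m.
granted = grant(requested)
excess = granted - requested    # privileges the app never asked for
print(sorted(granted), sorted(excess))  # ['lock', 'unlock'] ['unlock']
```

Measuring that `excess` set across an app store is essentially how one quantifies the "55 percent overprivileged" finding.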
Insecure messaging system: SmartApps can communicate with physical devices by exchanging messages, which can be envisioned as analogous to instant messages exchanged between people. SmartThings devices send messages that can contain sensitive data, such as a PIN code to open a particular lock.
We found that as long as a SmartApp has even the most basic level of access to a device (such as permission to show how much battery life is left), it can receive all the messages the physical device generates – not just those messages about functions it has privileges to. So an app intended only to read a door lock’s battery level could also listen to messages that contain a door lock’s PIN code.
In addition, we found that SmartApps can “impersonate” smart-home equipment, sending out their own messages that look like messages generated by real physical devices. The malicious SmartApp can read the network’s ID for the physical device, and create a message with that stolen ID. That battery-level app could even covertly send a message as if it were the door lock, falsely reporting it had been opened, for example.
SmartThings does not ensure that only physical devices can create messages with a certain ID.
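Both messaging flaws can be sketched with a toy event bus. The device names, message fields and PIN below are hypothetical examples, not SmartThings internals:

```python
class EventBus:
    """Insecure bus: no sender authentication, no per-privilege filtering."""
    def __init__(self):
        self.subscribers = {}  # device_id -> apps with *any* access

    def subscribe(self, device_id, app):
        self.subscribers.setdefault(device_id, []).append(app)

    def publish(self, device_id, message):
        # Flaw 1: every subscriber sees every message for the device.
        # Flaw 2: anyone who knows the device_id can publish under it.
        for app in self.subscribers.get(device_id, []):
            app.append((device_id, message))

bus = EventBus()
battery_app = []  # an app that only "needs" battery readings
bus.subscribe("front-door-lock", battery_app)

bus.publish("front-door-lock", {"battery": 81})
bus.publish("front-door-lock", {"codeReport": "5533"})  # PIN leaks to the app
bus.publish("front-door-lock", {"state": "unlocked"})   # could be a spoofed message

print(battery_app[1][1])  # {'codeReport': '5533'}
```

A privilege-respecting design would filter messages by the subscriber's granted capabilities and cryptographically bind messages to the physical device that produced them.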
Attacking the design flaws
To move beyond the potential weaknesses into actual security breaches, we built four proof-of-concept attacks to demonstrate how attackers can combine and exploit the design flaws we found in SmartThings.
In our first attack, we built an app that promised to monitor the battery levels of various wireless devices around the home, such as motion sensors, leak detectors, and door locks. However, once installed by an unsuspecting user, this seemingly benign app was programmed to snoop on the other messages sent by those devices, opening a key vulnerability.
When the authorized user creates a new PIN code for a door lock, the lock itself will acknowledge the changed code by sending a confirmation message to the network. That message contains the new code, which could then be read by the malicious battery-monitoring app. The app can then send the code to its designer by SMS text message – effectively sending a house key directly to a prospective intruder.
In our second attack, we were able to snoop on the supposedly secure communications between a SmartApp and its companion Android mobile app. This allowed us to impersonate the Android app and send commands to the SmartApp – such as to create a new PIN code that would let us into the home.
Our third and fourth attacks involved writing malicious SmartApps that were able to take advantage of other security flaws. One custom SmartApp could disable “vacation mode,” a popular occupancy-simulation feature; we stopped a smart home system from turning lights on and off and otherwise behaving as if the home were occupied. Another custom SmartApp was able to falsely trigger a fire alarm by pretending to be a carbon monoxide sensor.
Room for improvement
Taking a step back, what does this mean for smart homes in general? Are these results indicative of the industry as a whole? Can smart homes ever be safe?
There are great benefits to gain from smart homes, and the Internet of Things in general, that ultimately lead to an improved quality of living. However, given the security weaknesses in today’s systems, caution is appropriate.
These are new technologies in nascent stages, and users should think about whether they are comfortable with giving third parties (e.g., apps or smart home platforms) remote access to their devices. For example, personally, I wouldn’t mind giving smart home technologies remote access to my window shades or desk lamps. But I would be wary of staking my safety on remotely controlled door locks, fire alarms, and ovens, as these are security- and safety-critical devices. If misused, those systems could allow – or even cause – physical harm.
However, I might change that assessment if systems were better designed to reduce the risks of failure or compromise, and to better protect users’ security.
Acknowledgements: This research is the result of a collaboration with Jaeyeon Jung and Atul Prakash.
It just got a whole lot easier for local and federal law enforcement to gain remote access to computers connected to the internet, after the Supreme Court recently approved changes to the Federal Rules of Criminal Procedure. The changes enable warrants for searches of any remote computer system regardless of local laws, ownership and physical location.
These warrants are particularly important to computer crimes divisions since many investigations result in turning up anonymous hosts, or users who don’t share their true identity in any way.
Unless Congress takes action beforehand, the new rule goes into effect in December 2016.
The Justice Department has managed to unlock an iPhone 5c used by the gunman Syed Rizwan Farook, who with his wife killed 14 people in San Bernardino, California, last December. The high-profile case has pitted federal law enforcement agencies against Apple, which fought a legal order to work around its passcode security feature to give law enforcement access to the phone’s data. The FBI said it relied on a third party to crack the phone’s encrypted data, raising questions about iPhone security and whether federal agencies should disclose their method.
But what if the device had been running Android? Would the same technical and legal drama have played out?
We are Android users and researchers, and the first thing we did when the FBI-Apple dispute hit popular media was read Android’s Full Disk Encryption documentation.
We attempted to replicate what the FBI had wanted to do on an Android phone and found some useful results. Beyond the fact the Android ecosystem involves more companies, we discovered some technical differences, including a way to remotely update and therefore unlock encryption keys, something the FBI was not able to do for the iPhone 5c on its own.
The easy ways in
Data encryption on smartphones involves a key that the phone creates by combining 1) a user’s unlock code, if any (often a four- to six-digit passcode), and 2) a long, complicated number specific to the individual device being used. Attackers can try to crack either the key directly – which is very hard – or combinations of the passcode and device-specific number, which is hidden and roughly equally difficult to guess.
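The shape of that key derivation can be sketched as follows. This is an illustrative simplification using PBKDF2, not the actual hardware-backed scheme either platform uses, and the device secret and passcodes are made-up values:

```python
import hashlib

# Stand-in for the long, hidden device-specific number (32 random-looking bytes).
device_secret = bytes.fromhex("8f" * 32)

def derive_key(passcode: str, device_secret: bytes) -> bytes:
    """Combine passcode + device secret with a slow KDF into a 128-bit key."""
    # A high iteration count makes every single guess expensive for an attacker.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                               device_secret, 100_000, dklen=16)

key = derive_key("493027", device_secret)
wrong = derive_key("493028", device_secret)  # one digit off -> unrelated key
print(len(key) * 8, key != wrong)  # 128 True
```

Because the derived key depends on both inputs, an attacker who knows neither must attack the full key space, while one who extracts the device secret only has to guess the short passcode.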
Decoding this strong encryption can be very difficult. But sometimes getting access to encrypted data from a phone doesn’t involve any code-breaking at all. Here’s how:
A custom app could be installed on a target phone to extract information. In March 2011, Google remotely installed a program that cleaned up phones infected by malicious software. It is unclear if Android still allows this.
Many applications use Android’s Backup API. The information that is backed up, and thereby accessible from the backup site directly, depends on which applications are installed on the phone.
Some people have modified their phones’ operating systems to give them “root” privileges – access to the device’s data beyond what is allowed during normal operations – potentially weakening security.
But if these options are not available, code-breaking is the remaining way in. In what is called a “brute force” attack, a phone can be unlocked by trying every possible encryption key (i.e., all character combinations possible) until the right one is reached and the device (or data) unlocks.
Starting the attack
There are two types of brute-force attacks: offline and online. In some ways an offline attack is easier – by copying the data off the device and onto a more powerful computer, specialized software and other techniques can be used to try all different passcode combinations.
But offline attacks can also be much harder, because they require either trying every single possible encryption key, or figuring out the user’s passcode and the device-specific key (the unique ID on Apple, and the hardware-bound key on newer versions of Android).
To try every potential solution to a fairly standard 128-bit AES key means trying all 100 undecillion (10³⁸) potential solutions – enough to take a supercomputer more than a billion billion years.
Guessing the passcode could be relatively quick: for a six-digit PIN with only numbers, that’s just a million options. If letters and special symbols like “$” and “#” are allowed, there would be more options, but still only in the hundreds of billions. However, guessing the device-specific key would likely be just as hard as guessing the encryption key.
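The sizes involved can be checked directly. These are raw search-space counts only; they say nothing about how fast guesses can actually be tried:

```python
aes_keys = 2 ** 128      # every possible 128-bit AES key
numeric_pin6 = 10 ** 6   # six digits, numbers only

# Assuming the ~95 printable ASCII characters (letters, digits,
# symbols like "$" and "#") for a six-character passcode.
mixed_pin6 = 95 ** 6

print(f"{aes_keys:.0e}")   # 3e+38 -- on the order of 10^38
print(numeric_pin6)        # 1000000 -- "just a million options"
print(f"{mixed_pin6:.1e}") # 7.4e+11 -- "hundreds of billions"
```

The gap between a million and 10³⁸ is why the passcode, not the key itself, is the practical target.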
Considering an online attack
That leaves the online attack, which happens directly on the phone. With the device-specific key readily available to the operating system, this reduces the task to the much smaller burden of trying only all potential passcodes.
However, the phone itself can be configured to resist online attacks. For example, the phone can insert a time delay between a failed passcode guess and allowing another attempt, or even delete the data after a certain number of failed attempts.
Apple’s iOS has both of these capabilities, automatically introducing increasingly long delays after each failure, and, at a user’s option, wiping the device after 10 passcode failures.
Attacking an Android phone
What happens when one tries to crack into a locked Android phone? Different manufacturers set up their Android devices differently; Nexus phones run Google’s standard Android configuration. We used a Nexus 4 device running stock Android 5.1.1 and full disk encryption enabled.
We started with a phone that was already running but had a locked screen. Android allows PINs, passwords and pattern-based locking, in which a user must connect a series of dots in the correct sequence to unlock the phone; we conducted this test with each type. We had manually assigned the actual passcode on the phone, but our unlocking attempts were randomly generated.
After five failed passcode attempts, Android imposed a 30-second delay before allowing another try. Unlike the iPhone, the delays did not get longer with subsequent failures; over 40 attempts, we encountered only a 30-second delay after every five failures. The phone kept count of how many successive attempts had failed, but did not wipe the data. (Android phones from other manufacturers may insert increasing delays similar to iOS.)
These delays impose a significant time penalty on an attacker. Brute-forcing a six-digit PIN (one million combinations) could incur a worst-case delay of just more than 69 days. If the passcode were six characters, even using only lowercase letters, the worst-case delay would be more than 58 years.
When we repeated the attack on a phone that had been turned off and was just starting up, we were asked to reboot the device after 10 failed attempts. After 20 failed attempts and two reboots, Android started a countdown of the failed attempts that would trigger a device wipe. We continued our attack, and at the 30th attempt – as warned on the screen and in the Android documentation – the device performed a “factory reset,” wiping all user data.
In contrast to offline attacks, there is a difference between Android and iOS for online brute force attacks. In iOS, both the lock screen and boot process can wipe the user data after a fixed number of failed attempts, but only if the user explicitly enables this. In Android, the boot process always wipes the user data after a fixed number of failed attempts. However, our Nexus 4 device did not allow us to set a limit for lock screen failures. That said, both Android and iOS have options for remote management, which, if enabled, can wipe data after a certain number of failed attempts.
Using special tools
The iPhone 5c in the San Bernardino case is owned by the employer of one of the shooters, and has mobile device management (MDM) software installed that lets the company track it and perform other functions on the phone by remote control. Such an MDM app is usually installed as a “Device Administrator” application on an Android phone, and set up using the “Apple Configurator” tool for iOS.
We built our own MDM application for our Android phone, and verified that the passcode can be reset without the user’s explicit consent; this also updated the phone’s encryption keys. We could then use the new passcode to unlock the phone from the lock screen and at boot time. (For this attack to work remotely, the phone must be on and have Internet connectivity, and the MDM application must already be programmed to reset the passcode on command from a remote MDM server.)
Figuring out where to get additional help
If an attacker needed help from a phone manufacturer or software company, Android presents a more diverse landscape.
Generally, operating system software is signed with a digital code that proves it is genuine, and which the phone requires before actually installing it. Only the company with the correct digital code can create an update to the operating system software – which might include a “back door” or other entry point for an attacker who had secured the company’s assistance. For any iPhone, that’s Apple. But many companies build and sell Android phones.
Google, the primary developer of the Android operating system, signs the updates for its flagship Nexus devices. Samsung signs for its devices. Cellular carriers (such as AT&T or Verizon) may also sign. And many users install a custom version of Android (such as Cyanogenmod). The company or companies that sign the software would be the ones the FBI needed to persuade – or compel – to write software allowing a way in.
Comparing iOS and Android
Overall, devices running the most recent versions of iOS and Android are comparably protected against offline attacks, when configured correctly by both the phone manufacturer and the end user. Older versions may be more vulnerable; one system could be cracked in less than 10 seconds. Additionally, configuration and software flaws by phone manufacturers may also compromise security of both Android and iOS devices.
But we found differences for online attacks, based on user and remote management configuration: Android has a more secure default for online attacks at start-up, but our Nexus 4 did not allow the user to set a maximum number of failed attempts from the lock screen (other devices may vary). Devices running iOS have both of these capabilities, but a user must enable them manually in advance.
Android security may also be weakened by remote control software, depending on the software used. Though the FBI was unable to gain access to the iPhone 5c by resetting the password this way, we were successful with a similar attack on our Android device.
From cyber relationships, S&M culture and child abuse to biohacking, content moderation and nootropics, Dark Net finally puts into moving pictures what blogs have been typing up a storm about for the past few years.
At first glance the show seems like your run-of-the-mill cyber culture documentary, but the topics being explored are of a much more taboo persuasion — and it’s not just the underground pedophile networks accessed via Tor we’re talking about.
While Dark Net covers a lot of ground in technology subculture, it also serves as a bit of a transhumanist playground, discussing cutting edge and controversial topics such as RFID chip implants and other biohacks, nootropics, artificial intelligence girlfriends, and more. The main topic, however, seems to be the nature of human relationships being altered, augmented, and even hindered by technology, and it’s not difficult to understand why.
Through the internet, the impact of technology on our lives is both unprecedented and undeniable. Exploring subcultures and trends such as sadomasochism, porn addiction, and even internet addiction, Dark Net attempts to bring to light some otherwise undisclosed topics that most people refuse to talk about openly.
Dark Net is on Showtime, Thursday nights.
Can an image, sound, video or string of words influence the human mind so strongly the mind is actually harmed or controlled? Cosmoso takes a look at technology and the theoretical future of psychological warfare with Part Three of an ongoing series.
A lot of the responses I got to the first two installments talked about religion being weaponized memes. People do fight and kill on behalf of their religions and memes play a large part in disseminating the message and information religions have to offer.
The curved bullet meme is a great one. Most of the comments I see associated with this image have to do with how dumb someone would have to be to believe it would work. Some people have an intuitive understanding of spatial relations. Others have enough education in physics or basic gun safety to feel alarm bells going off well before they'd try something this dumb. It's a pretty dangerous idea to put out there, though, because a percentage of the people the image reaches could try something stupid. Is it a viable memetic weapon? Possibly! I present to you, the curved bullet meme.
The dangers here should be obvious. The move starts with "begin trigger-pull with pistol pointed at chest (near heart)," and anyone taking it seriously beyond that point is Darwin Award material.
Whoever created this image presumably never intended for anyone to actually try it. So, to fall for this fairly obvious trick, someone would have to be pretty dumb. There is another way people fall for tricks, though.
There is more than one way to end up the victim of a mindfuck, and while ignorance is part of a lot of them, ignorance can actually be induced. In the case of religion, there are several giant pieces of information or ways of thinking that must all be accepted uncritically before someone could believe the earth was coming to an end in 2012, or that the creator of the universe wants you to burn in hell for eternity for not following the rules. By trash-talking religion in general, I've made a percentage of readers angry right now, and that's the point. Even if you take all the other criticisms of religion out of the mix, we can agree that religion puts its believers in the position of becoming upset or outraged by very simple graphics or text. As a non-believer, a lot of the things religious people say sound as silly to me as the curved bullet graphic seems to a well-trained marksman.
To oversimplify it further: religions are elaborate, bad advice. You can inoculate yourself against that kind of meme but the vast majority of people out there cling desperately, violently to some kind of doctrine that claims to answer one or more of the most unanswerable parts of life. When people feel relief wash over them, they are more easily duped into doing what it takes to keep their access to that feeling.
There are tons of non-religious little memes out there that simply mess with anyone who follows bad advice. It can be a prank but the pranks can get pretty destructive. Check out this image from the movie Fight Club:
Thinking no one fell for this one? For one thing, it’s from a movie, and in the movie it was supposed to be a mean-spirited prank that maybe some people fell for. Go ahead and google “fertilize used motor oil”, though, and see how many people are out there asking questions about it. It may blow your mind…
We live in a time where auto theft is incredibly impractical. Criminals in 2015 struggle to figure out how to get past electronic security and alarm systems, reflecting an over 90% drop in NYC auto theft since the early 90's. These days, even a successfully stolen vehicle can be recovered with GPS tracking, and incidents of theft are often caught on video.
It might seem like convenience breeds weakness, but with car theft way down, that doesn't seem to hold at the moment. The security holes most vulnerable to exploitation revolve around the key fob. Fobs are those small black electronic keys that everyone uses to unlock their car these days. They work by sending a pre-determined electronic signal that must be authenticated by the car's CAN system. If the authentication checks out, the doors unlock. In newer cars, the engine will start via push button if the fob is in the immediate vicinity of the car, so the driver doesn't have to fish it out of her pocket.
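The authentication step can be modeled as a simple challenge-response handshake. Real fobs use proprietary schemes, so the sketch below is an illustrative assumption, using an HMAC over a car-issued random challenge and a factory-provisioned shared secret:

```python
import hashlib
import hmac
import os

# Illustrative model of fob authentication, NOT any manufacturer's
# actual protocol: fob and car share a secret; the car broadcasts a
# fresh random challenge and only unlocks for the matching HMAC.

SECRET = os.urandom(16)  # provisioned into both fob and car at the factory

def fob_respond(challenge: bytes) -> bytes:
    """The fob answers a challenge by keying an HMAC with the shared secret."""
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def car_authenticate(challenge: bytes, response: bytes) -> bool:
    """The car recomputes the expected answer and compares in constant time."""
    expected = hmac.new(SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(8)  # car broadcasts a fresh nonce
ok = car_authenticate(challenge, fob_respond(challenge))        # doors unlock
stale = car_authenticate(os.urandom(8), fob_respond(challenge))  # old answer fails
```

Because each challenge is fresh, an answer captured once is useless against the next challenge, which is the property the attacks below have to work around.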
Etymology of the word fob: Written evidence of the word's usage has been traced to 1888. Almost no one uses a pocket watch these days, but a fob was originally an ornament attached to a pocket watch chain. The word hung around as an occasional, outdated way to refer to key chains. In the 80's, the consumer market was introduced to devices that allowed a car to be unlocked or started remotely. The small electronic device was easily attached to a conventional set of car keys, and within a few years the term fob key was generally used to describe any electronic key entry system that stored a code in a device, including hotel keycards as well as the remote car unlocking device usually described by the word.
Let’s take a look at three ways a fob key can be hacked.
Recording FOB signals for replay. This is one of those urban legends that's been around since at least 2008. The story goes: thieves record the key fob signal and later replay it with a dummy fob. The car can't tell the difference and unlocks/starts as if the correct key fob had been used. It's easy for the thief to control the schedule and catch the victim unawares because the attack doesn't have to interact with the fob in real time. Sounds like the most effective way to hack a key fob, right? Problem is, each signal is unique, created with an algorithm that includes time. If the devices are not synchronized, the fob can't open the lock, and a recorded signal played back won't open it either. The conventional wisdom is that the devices, proprietary knowledge and experience needed to make this method work are not worth a stolen car's worth of risk. Secrets leak, but honestly, a team organized enough to steal a car this way would be able to use the same skills to make a lot more money legally. Lastly, if you could reverse engineer and record fob signals, the FBI would already be watching you. The demographic that used to steal cars in the 90's was largely not like the Fast and Furious franchise. The idea that a huge tech security operation could be thwarted isn't necessarily far-fetched, but there are no recorded cases. Not one. For that to change, someone needs to figure out how the sync code is incorporated into the algorithm, and apparently no one has.
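The scheme that defeats replay is usually called a rolling code. The toy model below is an assumption, not any manufacturer's design: fob and car share a secret and a synchronized counter, each button press derives a fresh code, and the car refuses any counter at or below the last one it accepted, which is exactly why a recorded code plays back dead:

```python
import hashlib
import hmac

# Toy rolling-code model (illustrative, not a real manufacturer scheme).
SECRET = b"shared-fob-secret"

def code_for(counter: int) -> bytes:
    """Derive the one-time unlock code for a given press counter."""
    return hmac.new(SECRET, counter.to_bytes(8, "big"), hashlib.sha256).digest()

class Car:
    def __init__(self):
        self.last_counter = 0  # highest counter value accepted so far

    def try_unlock(self, counter: int, code: bytes) -> bool:
        # Accept only counters strictly ahead of the last one, within a
        # small resync window (here +1..+16) to tolerate missed presses.
        if self.last_counter < counter <= self.last_counter + 16 \
                and hmac.compare_digest(code, code_for(counter)):
            self.last_counter = counter
            return True
        return False

car = Car()
recorded = (1, code_for(1))             # thief records the owner's press #1
first = car.try_unlock(*recorded)       # the genuine press works once...
replayed = car.try_unlock(*recorded)    # ...the identical replay is rejected
```

The resync window is the interesting design trade-off: it keeps the fob usable after presses out of the car's range, but a larger window gives an attacker more future codes to aim at.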
Amplifying FOB signal to trigger auto unlock feature. Not only is this method genius, it is rumored to be already in use. Eyewitnesses claim to have seen it in action, and those accounts sparked theories about the methodology. Unlike recording a signal, amplification is a lot cheaper and requires almost no proprietary knowledge of the code to pull off. It works like this: a device picks up the range of frequencies the key fob is giving off and extends its reach. Some cars can sense the authentic key fob within a five-foot range and auto-unlock or auto-start their ignitions. With a signal amp, the engine can theoretically be started if the real key fob is within 30 feet. So the keys can be on your nightstand, but the car thinks you are at the car door. The thief can then open the door, sit in the driver's seat, and trigger the push-button ignition as if the key fob were in the car. I thought about repeating some of the anecdotes I found online about this method, but none of them are confirmed. No one has tested it, but it looks like a signal booster can be bought online for pretty cheap if you know what to buy ($17 – $300). Last week, NYT ran a piece about signal boosting. You can read that here.
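The reason amplification works is that the car's only proximity check is signal strength: it cannot distinguish a fob five feet away from a fob thirty feet away heard through an amplifier. A sketch with made-up distances and a toy inverse-square path-loss model (both illustrative assumptions) shows the logic:

```python
# Toy model of proximity unlock. The car treats "signal strong enough"
# as "fob within ~5 feet" -- an amplifier breaks that inference.
# Distances, gains and the threshold are illustrative assumptions.

UNLOCK_RANGE_FT = 5.0  # strength is scaled so 1.0 == the unlock threshold

def received_strength(distance_ft: float, amp_gain: float = 1.0) -> float:
    """Inverse-square falloff with distance, optionally boosted by a relay amp."""
    return amp_gain * (UNLOCK_RANGE_FT / distance_ft) ** 2

def car_unlocks(distance_ft: float, amp_gain: float = 1.0) -> bool:
    return received_strength(distance_ft, amp_gain) >= 1.0

owner_at_door = car_unlocks(4)               # genuine fob at the door
fob_inside = car_unlocks(30)                 # fob on the nightstand: too weak
relayed = car_unlocks(30, amp_gain=64)       # amplifier bridges the gap
```

This also hints at the countermeasure the next section mentions: a receiver that measures the signal precisely enough to notice the distortion an amplifier adds, rather than just its strength.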
Random signal generator. So unique rolling codes mean you can't record a signal and reuse it without the proprietary algorithm, and signal amplification may stop working on some systems in the near future: rumors of successful attacks already have car companies working on receivers sensitive enough to detect the distortion and interference an amplifier introduces. But there are exceptions where the signal is not random, such as service codes. Manufacturers have overriding unlock codes and reset devices to assist with lost key fobs and maintenance/emergency cases. When these codes leak, they open a brief but large hole in security, during which thousands of cars can be swiped. The main reason it isn't happening already is that organized crime isn't organized enough to plan and exploit that security hole. Or, you know, maybe the codes just haven't leaked yet.
Constructing the hardware components needed takes specialized knowledge of hardware. Searching for information about this stuff is bound to attract NSA attention when followed by parts being ordered. The kind of guy who likes to sit in a workshop ordering parts and tinkering all day isn't always the one who wants to go out and take risks with newer, higher-end cars. That is the kind of multifaceted thief NYC was famous for back before the numbers plunged in the 90's, but the hardware is becoming more and more esoteric. People are not as apt to work on devices with such small parts for projects that run such high risk. For that reason, there is more money to be made producing a batch of low-cost black market devices that are already calibrated and tested to work. Buying such a device on the street and selling it off again after use might leave a smaller trail than building it in a sketchy apartment-turned-lab that is sure to be searched if a heist goes wrong.
Paper trail & identity theft.
Technology has made it really difficult to even take the car in the first place, but once you have a stolen car, it is almost impossible to get rid of these days. There can be multiple tracking devices and serial number locations in one car, and if the operation isn't extremely current, the likelihood of being caught red-handed with the car goes up quickly.
Once the car is stolen, a tech-savvy thief would need special equipment to access the on-board computer and do things like disable the GPS system, take any additional tracking system offline, and prevent tech support from manipulating the vehicle's electronics. Equipment to hack the car's CAN system has been expensive and shrouded in mystery for the last couple of decades, but recently the internet has united hackers and security researchers to create custom hardware like the CANtact, a device that lets you hack a car's CPU for $60.
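Once a cheap interface puts raw bus traffic in reach, the first step is just decoding frames. As a rough sketch, the snippet below unpacks a frame in the Linux SocketCAN `struct can_frame` layout (32-bit ID, length byte, padding, 8 data bytes); the specific message ID and payload are hypothetical, since real meanings vary by manufacturer:

```python
import struct

# Minimal decoder for a raw Linux SocketCAN frame -- the kind of traffic
# a low-cost interface like the CANtact exposes to a laptop.
# struct can_frame layout: u32 ID, u8 length, 3 pad bytes, 8 data bytes.
CAN_FRAME = struct.Struct("<IB3x8s")

def parse_frame(raw: bytes) -> dict:
    """Split a 16-byte raw frame into its arbitration ID and payload."""
    can_id, length, data = CAN_FRAME.unpack(raw)
    return {
        "id": can_id & 0x1FFFFFFF,  # mask off EFF/RTR/ERR flag bits
        "data": data[:length],      # only `length` bytes are meaningful
    }

# A hypothetical door-lock command frame "captured" off the bus;
# 0x2F1 and the payload bytes are made up for illustration.
raw = struct.pack("<IB3x8s", 0x2F1, 2, bytes([0x0A, 0x01]).ljust(8, b"\x00"))
frame = parse_frame(raw)
```

Nothing here defeats any protection by itself, but it shows why the barrier dropped: the bus is an ordinary, well-documented binary format once you have a tap on it.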