Category Archives: Security

The future of Auto Theft


We live in a time when auto theft is incredibly impractical. Criminals in 2015 struggle to get past electronic security and alarm systems, which is reflected in a drop of over 90% in NYC auto theft since the early 90's. These days, even a successfully stolen vehicle can be recovered with GPS tracking, and thefts are often caught on video.

It might seem like convenience is weakness, but with car theft way down, that doesn't seem to hold true at the moment. The security holes most vulnerable to exploitation revolve around the key fob, those small black electronic keys everyone uses to unlock their cars these days. A fob works by transmitting an electronic signal that must be authenticated by the car's CAN system. If the authentication checks out, the doors unlock. In newer cars, the engine will even start via push button if the fob is in the immediate vicinity of the car, so the driver doesn't have to fish it out of her pocket.

Etymology of the word fob: Written evidence of the word's usage has been traced to 1888. Almost no one uses a pocket watch these days, but a fob was originally an ornament attached to a pocket watch chain. The word hung around as an occasional, outdated way to refer to key chains. In the 80's, the consumer market was introduced to devices that allowed a car to be unlocked or started remotely. The small electronic device was easily attached to a conventional set of car keys, and within a few years the term fob key was generally used to describe any electronic key-entry system that stored a code in a device, from hotel keycards to the remote car unlocker the word usually describes today.
Let’s take a look at three ways a fob key can be hacked.

Recording FOB signals for replay. This is one of those urban legends that's been around since at least 2008. The story goes: thieves record the key fob signal and later replay it with a dummy fob. The car can't tell the difference and unlocks or starts as if the correct key fob had been used. Because the replay doesn't have to interact with the fob in real time, it's easy for the thief to control the schedule and catch the victim unawares. Sounds like the most effective way to hack a key fob, right? Problem is, each signal is unique, created with an algorithm that includes a time-based sync code. If the devices are not synchronized, the fob can't open the lock, and a recorded signal played back won't either. The conventional wisdom is that the equipment, proprietary knowledge and experience needed to make this method work are not worth a stolen car's worth of risk. Secrets leak, but honestly, a team organized enough to steal a car this way would be able to use the same skills to make a lot more money legally. Lastly, if you could reverse engineer and record fob signals, the FBI would already be watching you. The demographic that stole cars in the 90's was largely not like the Fast and Furious franchise. The idea that a huge tech-security operation could be thwarted isn't necessarily far-fetched, but there are no recorded cases. Not one. For that to change, someone needs to figure out how the sync code is incorporated into the algorithm, and apparently no one has.
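Here's a minimal sketch in Python of why replay fails, assuming a simplified counter-based rolling code (real fobs use proprietary ciphers, and the shared secret and message format below are made up for illustration):

```python
import hmac, hashlib

SECRET = b"shared-fob-secret"  # hypothetical key shared by fob and car

def fob_press(counter: int) -> tuple[int, str]:
    """The fob transmits its press counter plus a MAC over it."""
    code = hmac.new(SECRET, str(counter).encode(), hashlib.sha256).hexdigest()
    return counter, code

class Car:
    def __init__(self) -> None:
        self.last_counter = 0  # highest counter value already accepted

    def try_unlock(self, counter: int, code: str) -> bool:
        expected = hmac.new(SECRET, str(counter).encode(), hashlib.sha256).hexdigest()
        # A replayed recording carries a stale counter, so it fails the
        # freshness check even though its MAC is perfectly valid.
        if hmac.compare_digest(code, expected) and counter > self.last_counter:
            self.last_counter = counter
            return True
        return False

car = Car()
recorded = fob_press(1)                # thief records the first press
print(car.try_unlock(*fob_press(1)))   # True: legitimate unlock
print(car.try_unlock(*recorded))       # False: replayed signal is rejected
```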

Amplifying FOB signal to trigger auto unlock feature. Not only is this method genius, it is rumored to be already in use. Eyewitnesses claim to have seen it done, which has sparked theories about the methodology. Unlike recording a signal, amplification is a lot cheaper and requires almost no proprietary knowledge of the code to pull off. It works like this: a device picks up the range of frequencies the key fob is giving off and increases their reach. Some cars can sense the authentic key fob within about five feet and auto-unlock or push-button-start. With a signal amp, the car can theoretically be convinced the fob is present from 30 feet away. So the keys can be on your nightstand while the car thinks you are at the car door. The thief can then open the door, sit in the driver's seat, and trigger the push-button ignition as if the key fob were in the car. I thought about repeating some of the anecdotes I found online about this method, but none of them are confirmed. No one has publicly tested it, but a signal booster can be bought online for pretty cheap if you know what to buy ($17 – $300). Last week, the NYT ran a piece about signal boosting. You can read that here.
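To see why this works, here's a toy model in Python of a proximity check based on received signal strength. This is a simplification: real keyless-entry systems use a challenge-response exchange, and the threshold and gain figures below are invented for illustration.

```python
# Toy model: the car grants passive unlock when received signal power implies
# the fob is within ~5 feet. Units, threshold and gain are made-up numbers.
UNLOCK_THRESHOLD = 1.0 / 5.0**2   # power at ~5 ft, arbitrary inverse-square units

def received_power(distance_ft: float, amp_gain: float = 1.0) -> float:
    """Signal power falls off with the square of distance; an amp multiplies it."""
    return amp_gain * (1.0 / distance_ft**2)

print(received_power(30) >= UNLOCK_THRESHOLD)               # False: fob on the nightstand
print(received_power(30, amp_gain=50) >= UNLOCK_THRESHOLD)  # True: booster makes 30 ft look like 5
```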

Random signal generator. So unique rolling codes mean you can't record a signal and reuse it without the proprietary algorithm, and signal amplification may stop working on some systems in the near future: the rumors of it succeeding already have car companies working on receivers sensitive enough to detect the distortion and interference an amp introduces. But there are exceptions where the signal is not random, such as service codes. Manufacturers keep overriding unlock codes and reset devices to assist with lost key fobs and maintenance or emergency cases. When these codes leak, they open up a brief but large hole in security, during which thousands of cars could be swiped. The main reason it isn't happening already probably says more about organized crime not being organized enough to plan and exploit that security hole. Or, you know, maybe the codes just haven't leaked yet.

Hardware construction.

Constructing the necessary hardware components takes specialized knowledge. Searching for information about this stuff is bound to attract NSA attention when it's followed by parts being ordered. The kind of guy who likes to sit in a workshop ordering parts and tinkering all day isn't always the one who wants to go out and take risks with newer, higher-end cars. That's the kind of multifaceted thief NYC was famous for before the numbers plunged in the 90's, but the hardware is becoming more and more esoteric. People are not as apt to work on devices with such small parts for projects that run such high risk. For that reason, there is more money to be made in producing a batch of low-cost black-market devices that are already calibrated and tested to work. Buying such a device on the street and using it before selling it off again might leave a smaller trail than building it in a sketchy apartment-turned-lab that is sure to be searched if a heist goes wrong.

Paper trail & identity theft.

Technology has made it really difficult to even take the car in the first place, but once you have a stolen car, it is almost impossible to get rid of these days. There can be multiple tracking devices and serial number locations in one car, and if the operation isn't extremely current, the likelihood of the thief being caught red-handed goes up quickly.

Once the car is stolen, a tech-savvy thief would need special equipment to access the on-board computer and do things like disable the GPS, take any additional tracking systems offline, and block tech support from manipulating the vehicle's electronics. Equipment to hack the car's CAN system has been expensive and shrouded in mystery for the last couple of decades, but recently the internet has united hackers and security researchers to create custom hardware; see CANtact Device Lets you Hack a Car's CPU for $60.


Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Revisiting the Death of Michael Hastings


Could emerging tech present new forensics in the suspicious early demise of controversial Rolling Stone reporter Michael Hastings? As the possibility of remotely hacking today's cars gains traction, cheaper hardware and open-sourced coding could shed new light on an alleged murder.

Hacking your car might already be possible. A tweet by NYT tech writer Nick Bilton is a great example.

Weeks back, I wrote a short piece about CANtact, a $60 device that lets you interface with a car's onboard computer through your laptop's USB port. Eric Evenchick presented CANtact at the Black Hat Asia 2015 security conference in Singapore. The network linking a modern vehicle's onboard computers is called the CAN, for Controller Area Network. Evenchick hopes his device's affordability will spur programmers to reverse engineer the firmware and proprietary languages various CAN systems use.

Read more about CANtact: CANtact Device Lets you Hack a Car’s CPU for $60

I got feedback on the CANtact story about a seemingly unrelated topic: the death of Michael Hastings. Hastings was a Rolling Stone and Buzzfeed contributor who became very vocal about the surveillance state when the U.S. Department of Justice started investigating reporters in 2013. Hastings coined the term "war on journalism" after the Obama Administration sanctioned limitations on journalists' ability to report whenever the White House considered it a security risk. Buzzfeed ran his last story, "Why Democrats Love to Spy On Americans", June 7, 2013. Many Americans consider his death suspicious: Hastings died in an explosive, high-speed automobile accident on June 18, 2013, in Los Angeles, CA.

Check out one of the last interviews with Michael Hastings, and scroll down for a description of the oft-repeated conspiracy theory surrounding his untimely death.

The Michael Hastings Conspiracy Theory:

Unlike a lot of post-millennium conspiracy theories, which usually start online, this one actually began on television. Reporters were already at odds with the limitations the Obama administration was attempting to impose, and the timing seemed extremely suspicious: one of the leaders of the criticism against censorship was suddenly killed. The internet ran with it, and some Americans considered the crash suspicious at the time. Public opinion often runs ahead of hard evidence, though, and this case was no different. Not everyone considered the media coverage unbiased, given the political stake journalists had in the issue.

The first solid argument that Hastings didn't die by accident came from Richard A. Clarke, a former U.S. National Coordinator for Security, Infrastructure Protection, and Counter-terrorism (what a title!), who called the crash "consistent with a car cyber attack". The conspiracy theory gestating around water coolers and message boards was truly born when Clarke went public with this outright accusation:

“There is reason to believe that intelligence agencies for major powers—including the United States—know how to remotely seize control of a car. So if there were a cyber attack on [Hastings’] car—and I’m not saying there was, I think whoever did it would probably get away with it.”

Next, WikiLeaks announced that Hastings had reached out to WikiLeaks lawyer Jennifer Robinson only a few hours before the crash.

Army Staff Sergeant Joe Biggs came forward with an email he thought might help in a murder investigation. The email, CCed to a few of Hastings' colleagues, stated he was "onto a big story" and planned to "go off the radar". Perhaps the most incriminating detail is that he warned the addressees to expect a visit from the FBI. In a formal press release, the FBI denied Hastings was being investigated.

LA Weekly reported Hastings had been preparing a new installment of an ongoing story involving the CIA. Hastings' wife, Elise Jordan, confirmed he had been working on a story profiling CIA Director John O. Brennan.


The case against foul play:

I have to admit, I got sucked in for a second. But Cosmoso is a science blog, and I personally believe an important part of science is maintaining rational skepticism. The details listed above are the undisputed facts; you can research them online and verify them. It might seem likely that Hastings was onto something and was silenced by some sort of foul play staged as a car accident, but there is no hard evidence, no smoking gun, no suspects, and nothing really proving he was a victim of murder.

The rumor online has always been that there are suspicious aspects to the explosion. Cars don't always explode when they crash, but Frank Markus, director of Motor Trend, said the fire after the crash was consistent with most high-speed car crashes. The usual conspiracy-theorist reaction is to suspect such testimony of bias through some hidden advantage or involvement. That's pretty difficult in the case of Frank Markus, who just directs a magazine and website about cars.

Hastings' own family doesn't seem to think the death was suspicious. His brother, Jonathan, later revealed Michael seemed "manic" in the days leading up to the crash. Elise Jordan, his wife, told the press it was "just a really tragic accident."

A host of The Young Turks who was close with Hastings said his friends had noticed he was agitated and tense, and that Michael often complained he was being followed and watched. It's easier to dismiss the conspiracy theory when you consider it may have stemmed from the line of work he chose.

Maybe the government conspiracy angle is a red herring.

Reporting on the FBI, the military, the White House, or the CIA is what reporters do. People did it before Hastings and have done it since. Those government organizations have accountability in ways that would make an assassination pretty unlikely.

If it wasn't the government, who would have wanted to kill Hastings?

A lot of people, it turns out. Hastings had publicly confirmed he received several death threats after his infamous Rolling Stone article criticizing and exposing General McChrystal. Considering the United States' long history of reactionary violence, an alternate theory is that military personnel performed an unsanctioned hit on Hastings at a time when many right-wing Americans considered the journalist unpatriotic.

Here’s where the tech comes into play:

Hastings had told USA Today his car had recently been "tampered with", without any real explanation of what that meant; most people in 2013 would have assumed physical tampering with the brakes, or a planted bug. In any case, he said he was scared and planned to leave town.

Now it's only two years later, and people are starting to see how a little inside knowledge of the CAN computer in a modern vehicle can be used to do serious harm. We might never know whether this was a murder, an assassination or an accident, but hacking a car remotely seemed like a joke at the time; two years later, no one is laughing.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

American Revolutionary Edward Snowden in the News this Week


Controversial hero of the information revolution to some, traitor to the American empire to others, Edward Snowden is popping up in headlines again.

A lot of Cosmoso readers might have caught the John Oliver interview.

The interview is an instant classic and will be talked about for a long time, but it was also genuinely funny, with some unexpected chemistry between Oliver and Snowden. It also featured probably the second-best extended tech metaphor involving dicks.

Silicon Valley

That’s right. I said second best.

Other superficial highlights include John Oliver losing his mind during the half hour before Snowden showed up late, the alarming but totally unsurprising ignorance of Americans in the man-on-the-street interviews about privacy, and a concise but fleeting description of Snowden's patriotism for the layman.


Snowden Bust 2

The morning after the interview aired, another iconic moment in revolutionary journalism happened: three anonymous street artists erected a bust of Edward Snowden in Brooklyn, with video and still pics documented exclusively by AnimalNewYork. The work was covered with a tarp and removed within twelve hours because it was put atop an existing war monument without permission.

Update:

A hologram of Snowden is currently being shown in the spot where the bust was removed, courtesy of The Illuminator Art Collective, who used two projections and a cloud of smoke to show a likeness of Edward Snowden at the Revolutionary War memorial, releasing an accompanying statement:

“While the State may remove any material artifacts that speak in defiance against incumbent authoritarianism, the acts of resistance remain in the public consciousness, and it is in sharing that act of defiance that hope resides.”

Snowden Hologram

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

CANtact Device Lets you Hack a Car’s CPU for $60


Right now, Eric Evenchick is presenting CANtact at the Black Hat Asia 2015 security conference in Singapore. CANtact is a hardware interface that attaches to a car's onboard computer at one end and a regular laptop at the other. He's already figured out several simple hacks with it. It may sound like a simple device, but the pricey commercially-available onboard interfaces have been a consistent obstacle to car security research.

Car companies have a huge security hole that they have not publicly addressed. One reason people don't regularly hack motor vehicles is a lack of commercially available hardware. Hacking a car's electronic system is something only a few people would even have the equipment to learn. To become a specialized security researcher in this area, you need a car you are willing to seriously mess with, which is expensive in and of itself. Some people might have access to a clunker made recently enough to have an onboard computer, but they can't afford the $1,200 stock cable your local car mechanic uses to run the pre-fab software provided by the manufacturer. Eric Evenchick spent the last year figuring out exactly what makes that hardware tick, so he could put it in the hands of security researchers for the price of a dinner at a fancy restaurant.

24-year-old Eric Evenchick calls the controversial device CANtact, and he's presenting it today at the Black Hat Asia security conference in Singapore, whether car companies like it or not. The code that comes on the board attached to the cable is open source. He can make it for as little as $60, and it may sell through third parties for $100. CANtact uses any USB interface to adapt to a car or truck's OBD2 port at the other end. OBD2 ports usually connect under the dashboard and talk to the vehicle's onboard computers. In most modern vehicles, the Controller Area Network, or CAN, carries traffic for the windows, the brakes, the power steering, the dashboard indicators and more. It's something that can disable your car, and most people shouldn't mess with it just yet. Once peer-collaborated info breaks into the mainstream, Evenchick hopes customized CAN systems will be common practice.
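To give a flavor of what a cheap interface opens up, here's a sketch using the open-source python-can library over a Linux SocketCAN adapter. The channel name is an assumption, and the query shown is a standard OBD2 request for vehicle speed, not anything CANtact-specific:

```python
import can  # pip install python-can (4.x style shown here)

# Assumes a Linux SocketCAN interface named can0, e.g. a CANtact-style
# adapter already brought up with `ip link`. Don't try this on a car you
# depend on: writing to the bus can do real harm.
bus = can.interface.Bus(channel="can0", interface="socketcan")

# A standard OBD2 request: mode 0x01 (current data), PID 0x0D (vehicle
# speed), sent to the 0x7DF broadcast ID and padded to 8 bytes.
request = can.Message(arbitration_id=0x7DF,
                      data=[0x02, 0x01, 0x0D, 0x00, 0x00, 0x00, 0x00, 0x00],
                      is_extended_id=False)
bus.send(request)

reply = bus.recv(timeout=1.0)  # ECUs normally answer from IDs 0x7E8-0x7EF
if reply is not None and reply.data[1] == 0x41:
    print(f"vehicle speed: {reply.data[3]} km/h")
```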

“Auto manufacturers are not up to speed. They’re just behind the times. Car software is not built to the same standards as, say, a bank application. Or software coming out of Microsoft.” Ed Adams at Security Innovation, 2014

Is CAN hacking a security threat we'll see in the future? Quite probably. Back in 2013, security researchers Chris Valasek and Charlie Miller used DARPA funding to demonstrate how possible it really is to affect steering and brakes once the CAN system is accessed.

In the controversial death of journalist Michael Hastings, some people suspected car-hacking. It’s never been proven but you can read a detailed examination of the evidence in the Cosmoso.net article: Revisiting the Death of Michael Hastings

Evenchick is not trying to help hackers hack cars more easily. Instead, he claims more affordable gadgetry will improve security, which is the way the tenuous relationship between security culture and hacking has always gone. In the test described in the Forbes article linked above, Valasek and Miller rewired a $150 ECOM cable to access and test vehicles' OBD2 ports. CANtact comes out of the box ready to do what Valasek and Miller had to stay up late nights perfecting.

Anyone who attended Black Hat Asia, or can get hold of video of Evenchick's presentation, can contact Jon Howard: [email protected]
Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

When your body becomes your password…


Passwords are a pain. I’ve just had to rummage around for the password required in order to post this article. I seem to have 100 or more different identities on different websites to manage. Whenever I book a flight or buy a concert ticket this often means setting up yet another persona and coming up with a password to authenticate it.

It’s got so bad I’ve resorted to a password manager program to suggest secure, truly random passwords and then keep track of them for me. Of course if I forget the password to that program, or worse still if someone else guesses that password, I’ll be in all sorts of trouble.

Your phone is the key

This is a recognised problem, so it's no surprise firms are looking at ways to make this easier. In the US, Yahoo has announced plans to move to a password-on-demand system, where a new, one-time password is generated and texted to your mobile phone whenever one of its services requires authentication.
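The server-side logic for a scheme like this is simple. Here's a toy sketch in Python (not Yahoo's actual implementation; the code length and expiry window are assumptions):

```python
import secrets, time

_pending = {}  # phone number -> (code, expiry timestamp)

def send_code(phone: str) -> str:
    """Mint a short-lived random code; in real life it goes out via SMS."""
    code = f"{secrets.randbelow(10**6):06d}"     # six random digits
    _pending[phone] = (code, time.time() + 300)  # valid for five minutes
    return code

def verify(phone: str, attempt: str) -> bool:
    """Codes are single-use: pop() removes the entry whatever happens."""
    code, expiry = _pending.pop(phone, (None, 0.0))
    return code is not None and time.time() < expiry \
        and secrets.compare_digest(code, attempt)

code = send_code("+1-555-0100")
print(verify("+1-555-0100", code))  # True the first time...
print(verify("+1-555-0100", code))  # ...False on any reuse
```

Note that nothing in this design protects the code once it lands on the handset, which is exactly the weakness described next.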

This makes things easier for the user, whose phone is now a key as well as everything else. But some security experts have been less than impressed. For example, many phones show the text of incoming messages automatically, popping up even when the phone is locked. All that would be required is five minutes alone with your phone and your Yahoo account could be hijacked. And who hasn't left their phone unattended for even just a short while?

How about your body?

All this hassle with usernames and passwords has led many to think biometrics are the answer, in which uniquely identifying elements of our physical body are used as authentication keys.

Obviously this still needs to be miniaturised.
scanner by aurin/www.shutterstock.com

Fingerprints, the most common biometric, have been used to authenticate users for some time. Fingerprint-based access control can be made to work reasonably well, although it is not immune to successful attack. When you consider that Sherlock Holmes was cracking cases involving forged fingerprints in 1903, you might be forgiven for wondering whether we really can provide security on the basis of our fingertips and thumbs. However, modern biometric security goes further in trying to provide greater security.

Goodbye Windows password

Microsoft is building biometric password support into the forthcoming Windows 10, due to arrive later this year. The Windows Hello component, essentially a login screen, will be able to use a webcam to examine the user's face or iris, or a fingerprint scanner, to unlock devices and provide Windows logon. Microsoft is also touting a mechanism built into its Passport service that will provide authentication on your behalf to other sites once you have successfully logged on to your computer and it has recognised you.

Halifax, the bank, has gone one step further for its online banking services. It is currently testing a smart wristband called Nymi which reads the wearer's heartbeat – another biometric measure that provides a rhythmic pattern usable as a unique identifier. Heartbeat biometrics are touted as harder to fake or fool than other biometrics, although when I consider what happens to my heartbeat when I check my bank balance, I'd imagine it will need considerable testing.

Give me convenience or give me death

All this is a step toward the Holy Grail of authentication: security with convenience. Microsoft's moves in this direction are part of the FIDO Alliance, which aims to improve the way we approach security for devices and online services, improving protection while reducing the burden on users – a burden that tends to lead to corner-cutting, weak or re-used passwords, and security compromises.

The good news for us password jugglers is that there is now a greater imperative behind building higher levels of security into systems from the outset, rather than trying to add it on afterwards, and that new and better ways of doing this are being explored. Modern devices, the latest Dell tablet for example, have 3D cameras which can generate images containing depth information as well as a visible picture. The wider introduction of these sorts of components and their successors will offer a whole new way to authenticate, to the point that in the not-too-distant future our smile really will be our passport.

The Conversation

This article was originally published on The Conversation.
Read the original article.

The Computer of the Future is…. Vague.


Quantum computer prototypes make mistakes. It's in their nature. Can redundancy correct them?

Quantum memory promises speed combined with energy efficiency. If made viable, it will be used in phones, laptops and other devices, giving us all faster, more trustworthy tech that requires less power to operate. But before we see it applied, the hardware needs redundant memory cells to check and double-check its own errors.

All indications suggest quantum tech is poised to usher in the next round of truly revolutionary devices, but first scientists must solve the problem of memory cells saving the wrong answer. Quantum physicists must design circuitry that exploits quantum behavior without being corrupted by it. The current memory cell is called a Qubit. The Qubit takes advantage of quantum mechanics to transfer data at an almost instantaneous rate, but the data sometimes picks up errors, because the Qubit is physically sensitive to small changes in its environment. It's been difficult to solve this problem because it is a hardware issue, not a software design issue. UC Santa Barbara physics professor John Martinis' lab is dedicated to finding a workaround that can move forward without tackling the underlying errors: a self-correcting Qubit array.

The latest design developed at Martinis' lab is quantum circuitry that repeatedly self-checks for errors and suppresses the statistical mistake, saving data to multiple Qubits and giving the overall system the kind of reliability we've come to expect from non-quantum digital computers. Since an error-free Qubit has so far been a difficult hurdle, this new design seems to put us amazingly close to a far-reaching breakthrough.

Julian Kelly, a grad student and co-lead author of the paper published in Nature:

“One of the biggest challenges in quantum computing is that qubits are inherently faulty so if you store some information in them, they’ll forget it.”

Bit flipping is the problem du jour in smaller, faster computers.

Last week I wrote about a hardware design problem called bit flipping, where a classic, non-quantum computer has this same problem of unreliable data. In the effort to make a smaller DRAM chip, designers created an environment where the field around one bit-storage location could be strong enough to actually change the value of the bit stored next to it. You can read about that design flaw, and the hackers who proved it could be exploited to gain system admin privileges on otherwise secure servers, here.

Bit flipping also applies to this issue in quantum computing. Quantum computers don't just save information in binary ("yes/no", or "true/false") positions; Qubits can be in any or even all positions at once, because they store value across multiple dimensions. It's called "superposition," and it's the very reason quantum computers have the computational prowess they do, but ironically this characteristic also makes Qubits prone to bit flipping. Just being near atoms and energy transference is enough to create an unstable environment, and thus unreliable data storage.

“It’s hard to process information if it disappears.” ~ Julian Kelly.

Along with Rami Barends, staff scientist Austin Fowler, and others in the Martinis Group, Julian Kelly is developing a data storage scheme in which several qubits work in conjunction to redundantly preserve information. Information is stored across several qubits in a chip that is hard-wired to check for the odd-man-out error. So, while each Qubit is unreliable, the chip itself can be trusted to store data for longer and with fewer – hopefully no – errors.
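A classical cartoon of the idea looks like this in Python: store the bit three times, and use parity checks between neighboring cells to find the odd man out without ever treating a single cell as the truth. (The real chip measures quantum parities without collapsing the stored state; this toy and its 10% error rate are just for illustration.)

```python
import random

def encode(bit: int) -> list[int]:
    """Store the value redundantly across three cells."""
    return [bit, bit, bit]

def noisy(cells: list[int], p: float = 0.1) -> list[int]:
    """Each cell independently bit-flips with probability p."""
    return [c ^ (random.random() < p) for c in cells]

def correct(cells: list[int]) -> int:
    # Parity checks between neighbouring cells locate the odd man out
    # without directly reading any single cell as the answer.
    s1 = cells[0] ^ cells[1]
    s2 = cells[1] ^ cells[2]
    if s1 and not s2:
        cells[0] ^= 1   # first cell is the odd one out
    elif s1 and s2:
        cells[1] ^= 1   # middle cell is the odd one out
    elif s2:
        cells[2] ^= 1   # last cell is the odd one out
    return cells[0]

trials = 100_000
ok = sum(correct(noisy(encode(1))) == 1 for _ in range(trials))
print(f"recovered the bit in {ok / trials:.1%} of trials vs ~90% for one cell")
```

With these numbers the redundant version survives roughly 97% of the time, because it only fails when two or more cells flip at once.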

It isn't a new idea, but this is the first time it's been applied. The device they designed is small in terms of data storage, but it works as designed: it corrects its own errors. The vision we all have of a working quantum computer able to process a sick amount of data in an impressively short time? That will require something in the neighborhood of a hundred million Qubits, each of them redundantly self-checking to prevent errors.

Austin Fowler spoke to Phys.org about the scheme embedded in this new quantum error detection system, called surface code. It relies on measuring the change between a duplicate and the original bit, as opposed to simply comparing copies of the same info. This measurement of change instead of comparison of duplicates is called parity recognition, and it is unique to quantum data storage. The original info being preserved in the Qubits is never directly observed, which is a key requirement of quantum data.

“You can’t measure a quantum state, and expect it to still be quantum,” explained Barends.

As in any discussion of quantum physics, the act of observation has the power to change the value of the bit. To truly duplicate data the way classical error detection does, the bit would have to be examined, which in and of itself could cause a bit flip, corrupting the original bit. The device developed at Martinis' UC Santa Barbara lab sidesteps this by measuring parity rather than reading the data itself.

This project is a groundbreaking way of applying physical and theoretical quantum computing because it uses the physical Qubit chip with a logic circuit that applies quantum theory as an algorithm. The result, a viable way of storing data, proves that several otherwise untested quantum theories are real and not just logically sound. Ideas in quantum theory that have been pondered for decades are now proven to work in the real world!

What happens next?

Phase flips:

Martinis Lab will continue testing in an effort to refine and develop this approach. While bit-flip errors seem to have been solved by this new design, there is another type of error, not found in classical computing, that has yet to be solved: the phase flip. Phase flips might be a whole other article, and until quantum physicists solve them there is no rush for the layman to understand.

Stress tests:

The team is also currently running the error correction cycle for longer and longer periods while monitoring the device's integrity and behavior to see what will happen. Suffice to say, there are a few more types of errors than it may appear, despite this breakthrough.

Corporate sponsorship:

As if there were any doubt about funding: Google has approached Martinis Lab and offered support, in an effort to speed up the day when quantum computers stomp into the mainstream.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Cyber CSI: The Challenges of Digital Forensics


Forensics is changing in the digital age, and the legal system is still catching up when it comes to properly employing digital evidence.

Broadly speaking, digital evidence is information found on a wide range of electronic devices that is useful in court because of its probative value. It’s like the digital equivalent of a fingerprint or a muddy boot.

However, digital evidence tendered in court often fails to meet the same high standards expected of more established forensics practices, particularly in ensuring the evidence is what it purports to be.

Technology changes evidence

This is not the first time that technology has impacted the way evidence is gathered and presented in courts. And it’s not the first time that there have been problems in the way new evidence is used.

You might remember the case of the death of Azaria Chamberlain at Ayers Rock (Uluru) more than 30 years ago. Forensics played a key role in the conviction of Lindy Chamberlain in 1982. However, her conviction was later reversed in 1988 following closer scrutiny of the evidence.

Subsequent coronial inquests, a court case featuring controversial DNA forensic evidence, and the subsequent Australian Royal Commission into Azaria’s death, resulted in a fundamental reconsideration of Australian forensic practices.

There is still a vigorous debate in the legal world over the usage and reliability of DNA evidence, for example. This is now being mirrored in more recent court challenges over the use of digital evidence.

The special properties and technical complexity of digital evidence often makes it even more challenging, as courts find it difficult to understand the true nature and value of that evidence.

In fact, my first role as a digital forensics consultant is typically to act as an interpreter, explaining what the evidence means in a legal context.

Cyber evidence

It is increasingly common for criminal trials to rely on digital evidence. And, regrettably, it is not uncommon for innocents to be convicted and guilty people acquitted because of digital evidence.

There are several reasons for this. Firstly, the evidence might be compelling at first glance, but it could be misleading. The defendant may also have limited financial resources to rebut the evidence. The defence lawyers might also misread the evidence. Plea-bargaining offers can also lessen sentences.

Conversely, other investigations may not get to trial because of the complexity or incompleteness of the evidence.

Worryingly, some defendants are pleading guilty based on what appears to be overwhelming hearsay digital evidence without robust defence rebuttal. In these cases, the defence lawyer – whose job it is to analyse the evidence – may simply not understand it. This is why external digital forensics consultants can be so important.

However, the high cost of mounting a defence using forensic practitioners is often beyond the financial reach of many. For those qualified to receive legal aid, it is increasingly hard to obtain sufficient funding because of stringent budgeting regimes in various Australian jurisdictions.

Other factors can affect the validity of the evidence, including: failure of the prosecution or a plaintiff to report exculpatory data; evidence taken out of context and misinterpreted; failure to identify relevant evidence; system and application processing errors; and so forth.

Investigators undertaking these important but tedious tasks are often under-resourced and over-burdened with complex cases and increasingly large and complex datasets.

Forensic analyses and evidence presentations are sometimes confounded by inexperienced investigators and communicators, which is further exacerbated by faulty case management.

Another problem is the paucity of reliable forensic tools and processes that meet the needs of investigators and the expectations of the courts. I also suspect some courts in Australia and elsewhere may be unaware of these undercurrents, or of what standards they should expect of the evidence.

Getting it right

Digital forensics is still in its infancy; it remains more of an art form, lacking the broad scientific standards needed to support its use as evidence.

There is a call among researchers to test and trial better forensic practices and forensic tools. This is especially important due to the increasing size of data storage on some personal computing devices, let alone cloud and network storage, which presents greater recovery and jurisdictional challenges to practitioners.

We also need new tools and processes capable of locating and recovering sufficient evidence from larger data sets quickly, efficiently and thoroughly. Forensic tools are often commercial products, thus profit-driven rather than science-based, and do not fulfil real forensic needs. They increasingly fail to identify all evidence from larger datasets in a timely manner. The processes used by law enforcement tend to be agency-centric with little consensus on practice, standards and processes and sharing of case knowledge.

Cyber security threats to governments, businesses and individuals highlight our vulnerability to malicious attacks on our information assets and networks. Prevention and threat mitigation is topical, but we often overlook the simple act of bringing miscreants to justice and proving the innocence of those framed by their actions.

There is an old adage in forensics (thanks to Arthur Conan Doyle’s fictional detective Sherlock Holmes): “There is nothing more deceptive than an obvious fact.” This also applies to digital forensics, where I have all too often encountered cases of investigator bias and a laziness when seeking the truth.

Encouragingly, sounder tools and processes are emerging that I expect will rejuvenate this emerging discipline.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Examining The Apple iPhone Planned Obsolescence Conspiracy


Apple has the money and the know-how… are they making your old iPhone suck through planned obsolescence just to force you into the checkout line for a new one?

Planned obsolescence isn't just a conspiracy theory. You can read the 1932 pamphlet widely considered the origin of the concept here. The argument in its favor is its effect on the economy: more products being produced and sold means an active, thriving market. Of course, there is an obvious ethical problem with selling people a product that won't continue to work as it should for as long as it should. Several companies openly admit they do it. For Apple, it works like this: whenever a new iPhone comes out, the previous model gets buggy, slow and unreliable. Apple dumps money into a new, near-perfect ad campaign, and the entire first world and beyond irrationally feels silly for not already owning one, even before it's available. Each release marks a more expensive iPhone with capabilities the last one can't touch. This is already a great marketing plan, and I'm not criticizing Apple's ability to pull it off as described. The problem is planned obsolescence: some iPhone owners notice the older model craps out on them JUST as the newest iPhone hits the retail shops. Apple has the money and the know-how… are they making your old iPhone suck just to force you into the checkout line for a new one?

Full disclosure, I'm biased: I owned an iPhone long enough to live through a new product release, and mine did indeed crap out as described above. Slow, buggy, and unreliable it was. With that anecdote under my belt, I might be satisfied to call this e-rumor totally true, but in the interest of science I collected further evidence. I combed the message boards to see who had good points and who was just the regular internet nutjob with a stupid theory. To examine the evidence, I'm gonna start with this fact:

Fact 1: Apple's product announcements and new product releases come at regular intervals, so if old iPhones stop working correctly at that same interval, there would be a coinciding pattern. The tricky part is finding the data, but the pattern of release dates is a good place to start because it is so clear. Other companies could be pulling this type of fuckery, but it would be harder to track. Not only does Apple time its releases, it does so at a faster pace than most. New iPhones tend to come out once a year, but studies show people keep their phones for about 2-3 years when they are not prompted or coerced into purchasing a newer model.

Fact 2: Yes, it's possible. There are plenty of ways the company could slow or disable last year's iPhone. It could happen via an automatic download that can't be opted out of, such as an "update" from the company. Apple could ship iPhones with pre-programmed software that can't be accessed through any usual menu on the phone. There could even be a hardware component designed to decay or change with average use, or some combination of these methods. The thing is, so many people jailbreak iPhones that it seems like someone would be able to catch malicious software. There are some protocols that force updates, though. Hmmm.

Fact 3: Apple has been accused of doing this with every new release since the iPhone 4 came out. This really doesn't look like an accident, guys. This 2013 article in the New York Times Magazine by Catherine Rampell describes her personal anecdote, which, incidentally, is exactly how my iPhone failed me. When Catherine contacted Apple tech support, they informed her that the iOS 7 platform didn't work as well on older phones, which led her to wonder why the phones automatically installed the operating system upgrade in the first place.

Earlier on the timeline, Apple released iOS 4, offering features that were new and hot in 2010: a tap-to-focus camera, multitasking and faster image loading. The iPhone was the most popular phone in the country at the time, but older models that took the update suddenly didn't work right, crashing and becoming too slow to be useful.

The iOS 4 release made the previous-generation iPhone so horrible it was basically garbage, and Apple appears to have realized the potential for lost loyalty and toned it down. The pattern of buggy, slow products remained, though. When iOS 7 came out in 2013, the complaint was common online, and people started to feel very sure Apple was doing it on purpose.

Fact 4: Google Trends shows telltale spikes in complaints that match up perfectly with the release dates. The New York Times (2014) called this one, publishing the spikes in Google queries for "iphone slow". Look at Google Trends forecasting further spikes because the pattern is just that obvious:

Does Apple Ruin Your iPhone on Purpose? The Conspiracy, Explained
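You can eyeball the pattern yourself with the unofficial pytrends library (a third-party Google Trends client whose API may change; this sketch assumes it's installed via pip along with matplotlib):

```python
from pytrends.request import TrendReq  # unofficial Google Trends client
import matplotlib.pyplot as plt

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["iphone slow"], timeframe="all")
interest = pytrends.interest_over_time()  # weekly interest, scaled 0-100

interest["iphone slow"].plot(title='Google searches for "iphone slow"')
plt.show()  # the spikes should line up with September iPhone/iOS releases
```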

Apple has a very loyal customer base, though. Rene Ritchie wrote for iMore that this planned obsolescence argument is "sensational," a campaign of "misinformation" by people who don't actually understand how great an iPhone really is (barf). Even though the motive is crystal clear, the argument that Apple is innocent isn't complete nonsense, either: ruining iPhones could damage customer loyalty. People espousing this argument claim an intentional slowdown is less likely than plain old incompatibility between new software features and old hardware. That's a fair point, considering how almost all software manufacturers have a hard time fitting new software to old operating systems. Cooler software usually needs faster hardware, and for some ridiculous reason no one has ever come out with an appropriately customizable smartphone; Apple would likely be the last on that list.

Christopher Mims pointed out on Quartz that "there is no smoking gun here, no incriminating memo" proving an intentional slowdown on Apple's part.

There is really no reason to believe Apple would be against this kind of thing, even if planned obsolescence were just a happy accident for the mega-corporation. Basically, if this is happening by accident, it's even better for Apple: they don't have to take responsibility, and it still helps push the new line. Apple is far from deserving the trustworthy reputation it cultivated under Steve Jobs, as the glitzy marketing plan behind the pointless new Apple Watch demonstrates.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

“Rowhammering” Attack Gives Hackers Admin Access


A piece of code can actually manipulate a physical memory chip by repeatedly accessing nearby capacitors in a burgeoning new hack called rowhammering. Rowhammer hacking is so new that no one has seen it used in the wild yet. Google's Project Zero security initiative figured out how to exploit an aspect of a physical component in some types of DDR memory chips. The hack can give a user increased system rights regardless of untrusted status. Any Intel-compatible PC with this kind of chip running Linux is vulnerable – in theory. Project Zero pulled it off, but it isn't exactly something to panic about unless you are doing both of those things: using vulnerable DRAM and running Linux.

A lot of readers might be susceptible to this security hack, but most won't want to read the technical details. If you are interested, you can check out the Project Zero blog piece about it. The security flaw is in a specific kind of chip: DRAM, or dynamic random-access memory. The chip is supposed to just store information, in the form of bits saved in a series of capacitors. The hack works by switching the value of bits stored in DDR3 memory modules known as DIMMs; each DIMM houses several DRAM chips. Hackers researching on behalf of Project Zero designed a program to repeatedly access sections of data stored on the vulnerable DRAM until the statistical odds of one or more bits retaining a charge they shouldn't became a reality.

In 2014, this kind of hack was only theoretical, until scientists proved this kind of "bit flipping" is completely possible: repeatedly accessing an area of a specific DIMM can become so reliable as to allow the hacker to predict the change of contents stored in that section of memory. Last Monday (March 9th, 2015), Project Zero demonstrated exactly how a piece of software can translate this flaw into an effective security attack.

“The thing that is really impressive to me in what we see here is in some sense an analog- and manufacturing-related bug that is potentially exploitable in software,” David Kanter, senior editor of the Microprocessor Report, told Ars. “This is reaching down into the underlying physics of the hardware, which from my standpoint is cool to see. In essence, the exploit is jumping several layers of the stack.”

Why it’s called Rowhammering.

The memory in a DDR-style chip is configured in an array of rows and columns. Each row is grouped with others into large blocks which handle the accessible memory for a specific application, including the memory resources used to run the operating system. There is a security feature called a "sandbox", designed to protect data integrity and ensure the overall system stays secure; a sandbox can only be accessed through its corresponding application or the operating system. Bit-flipping a DDR chip works when a hacker writes an application that can access two chosen rows of memory, then accesses those same two rows hundreds of thousands of times, aka hammering. When the targeted bits flip from ones to zeros, matching a dummy list of data in the application, the target bits are left alone with the new value.
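To get a feel for the statistics, here's a toy Python model – not an exploit – where each hammering pass gives an adjacent victim cell some tiny, made-up chance of leaking charge before the next refresh:

```python
import random

# Toy statistical model of DRAM disturbance errors -- not an exploit.
# Assume each activation of the aggressor rows gives a physically adjacent
# victim cell a tiny chance of leaking charge; the real rate depends on
# the silicon, so this number is invented.
P_DISTURB = 1e-5
HAMMER_COUNT = 540_000   # accesses within one 64 ms refresh window

def refresh_window() -> bool:
    """Return True if the victim bit flips before the next refresh."""
    return any(random.random() < P_DISTURB for _ in range(HAMMER_COUNT))

trials = 50
flips = sum(refresh_window() for _ in range(trials))
print(f"victim bit flipped in {flips}/{trials} refresh windows")
# With these numbers a flip is near-certain: 1 - (1 - 1e-5)**540_000 ≈ 0.995
```

The point of the toy: individually negligible disturbances become a near-certainty when you can repeat them half a million times between refreshes.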

The implications of this style of hack are hard to see for the layman but profound in the security world. Most data networks allow a limited list of administrators to have special privileges. Using a rowhammer attack, an existing account could suddenly gain administrative privileges to the system. In the vast majority of systems, that kind of access would open up several other accounts. Administrative access would also allow some hackers to alter existing security features. The bigger the data center, and the more users with accounts accessing the database, the more useful this vulnerability is.

The Physics of a Vulnerability

We're all used to newer tech coming with unforeseen security problems. Ironically, this vulnerability is present in newer DDR3 memory chips, a result of the ever-smaller dimensions of the silicon. The DRAM cells are so close together in this kind of chip that it's possible to access a nearby row repeatedly, flipping it back and forth, until the cell next to it – the target bit that is not directly accessible – eventually flips.

Note: The rowhammer attack being described doesn't work against newer DDR4 silicon or DIMMs that contain ECC (error-correcting code) capabilities.

The Players and the Code:

Mark Seaborn and Thomas Dullien are the guys who finally wrote code able to take advantage of this flaw. They created two rowhammer attacks which run as processes. Those processes have no security privileges whatsoever, yet can end up gaining administrative access to an x86-64 Linux system. The first exploit was a Native Client module, incorporating itself into the platform as part of Google Chrome. Google developers caught this attack and disallowed an instruction called CLFLUSH, and the exploit stopped working. Seaborn and Dullien were psyched to have gotten that far, and wrote the second attempt shortly thereafter.

The second exploit looks like a totally normal Linux process. It allowed Seaborn and Dullien to access all physical memory, which proved the vulnerability is actually a threat to any machine with this type of DRAM.

The Ars article about this has a great quote from Irene Abezgauz, a product VP at Dyadic Security:

The Project Zero guys took on the challenge of leveraging the concept of rowhammer into an actual exploit. What’s impressive is the combination of lots of deep technical knowledge with quite a bit of hacker creativity. What they did was create attack techniques in which flipping just a single bit in a specific location allows them to execute any code they want with root privileges or escape a sandbox. This is impressive by itself, but they added to this quite a few creative solutions to make it more likely to succeed in a real world scenario and not just in the lab. They figured out ways for better targeting of the specific locations in memory they needed to flip, improved the chances of the attack to succeed by creating (“spraying”) multiple locations where a flipped bit would make the right impact, and came up with several ideas to leverage this into actual privileged code execution. This combination makes for one of the coolest exploits I’ve seen in a while.

Project Zero didn't name which models of DDR3 are susceptible to rowhammering. They also claim the attack could work on a variety of operating platforms, even though they only tried it on a Linux computer running x86-64 hardware – something they didn't technically prove, but which seems very believable given their success and expertise.

So, is Rowhammering a real threat or just some BS?

There isn't an obvious, practical application for this yet. Despite how powerful the worst-case scenario would be, this threat doesn't really come with a guarantee of sweeping the internet like some other, less-recent vulnerability exploits. The overwhelming majority of hacks are attempted from remote computers, but Seaborn and Dullien apparently needed physical access to get their otherwise unprivileged code onto the targeted system. Also, because the physical layout of the chip dictates which rows are vulnerable, users who want to protect against this exploit may be able to reconfigure where administrative privileges are stored and manipulated on the chip. Thirdly, rowhammering as Project Zero describes it requires over 540,000 memory accesses in under 64 milliseconds – roughly one access every 118 nanoseconds, a memory-speed demand some systems can't even meet. Hijacking a system using rowhammering with these limitations is presently not a real threat.

People used to say the same thing about memory corruption exploits, though. For example, a buffer overflow or a use-after-free both allow hack attempts to squeeze malicious shell code into the protected memory of a computer. Rowhammering is different because it is so simple: it only grants increased privileges to the hacker or piece of code, which becomes a real threat if it is developed as thoroughly as memory corruption exploits have been. The subtle difference might be hard to grasp now, but the groundwork has been laid, and it's the usual race between security analysts who would love to protect against it and the criminal world trying to dream up ways to make it more viable. Rob Graham, CEO of Errata Security, wrote further on the subject here.

In short, this is noteworthy because a physical design flaw in a chip is being exploited, as opposed to a software oversight or coding problem. A piece of code is actually affecting the physical internals of the computer during the attack.

Or, as Kanter, of the Microprocessor Report, said:

“This is not like software, where in theory we can go patch the software and get a patch distributed via Windows update within the next two to three weeks. If you want to actually fix this problem, we need to go out and replace, on a DIMM by DIMM basis, billions of dollars’ worth of DRAM. From a practical standpoint that’s not ever going to happen.”

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY