Category Archives: Internet

Examining The Apple iPhone Planned Obsolescence Conspiracy


Apple has the money and the know-how… are they making your old iPhone suck through planned obsolescence just to force you into the checkout line for a new one?

Planned obsolescence isn’t just a conspiracy theory. You can read the 1932 pamphlet widely considered the origin of the concept here. The argument in favor of it is its effect on the economy; more products being produced and sold means an active, thriving market. Of course, there is an obvious ethical problem with selling people a product that won’t continue to work as it should for as long as it should. Several companies openly admit they do it. For Apple, it works like this: whenever a new iPhone comes out, the previous model gets buggy, slow and unreliable. Apple dumps money into a new, near-perfect ad campaign, and the entire first world and beyond irrationally feels silly for not already owning one, even before it’s available. Each release brings a more expensive iPhone with capabilities the last one can’t touch. This is already a great marketing plan, and I’m not criticizing Apple’s ability to pull it off as described. The problem is planned obsolescence: some iPhone owners notice the older model craps out on them JUST as the newest iPhone hits the retail shops. Apple has the money and the know-how… are they making your old iPhone suck just to force you into the checkout line for a new one?

Full disclosure, I’m biased: I owned an iPhone long enough to live through a new product release, and mine did, indeed, crap out as described above. Slow, buggy, and unreliable it was. With that anecdote under my belt I might be satisfied to call this e-rumor totally true, but in the interest of science I collected further evidence. I combed the message boards to see who had good points and who was just the regular internet nutjob with a stupid theory. To examine the evidence, I’m gonna start with this fact:

Fact 1: Apple’s product announcements and new product releases come at regular intervals. So, if old iPhones stop working correctly at that same interval, there would be a coinciding pattern. The tricky part is finding the data, but the pattern of release dates is a good place to start because it is so clear. Other companies could be doing this type of fuckery, but it would be harder to track. Not only does Apple time their releases, but they do it at a faster pace than most: new iPhones tend to come out once a year, but studies show people keep their phones for about 2-3 years if they are not prompted or coerced into purchasing a newer model.

Fact 2: Yes, it’s possible. There are several ways the company would be able to slow or disable last year’s iPhone. It could happen through an automatic download that can’t be opted out of, such as an “update” from the company. Apple could ship iPhones with pre-programmed software that can’t be accessed through any of the usual menus on the iPhone. There could even be a hardware component that decays or changes based on the average amount of use, or some combination of these methods. The thing is, so many people jailbreak iPhones that it seems like someone would be able to catch malicious software. There are some protocols that force updates, though. Hmmm.

Fact 3: They’ve been accused of doing this with every new release since the iPhone 4 came out. This really doesn’t look like an accident, guys. This 2013 article in the New York Times Magazine by Catherine Rampell describes her personal anecdote, which, incidentally, is exactly the same as the way my iPhone failed me. When Catherine contacted Apple tech support, they informed her the iOS 7 platform didn’t work as well on the older phones, which led her to wonder why the phones automatically installed the operating system upgrade in the first place.

Earlier on the timeline, Apple released iOS 4 offering features that were new and hot in 2010: a tap-to-focus camera, multitasking and faster image loading. The iPhone 3G was one of the most popular phones in the country at the time, but after the update it suddenly didn’t work right, crashing and becoming too slow to be useful.

The iOS 4 release made the iPhone 3G so horrible it was basically garbage, and Apple appeared to realize the potential lost loyalty and toned it down. The pattern of buggy, slow products remained, though. When iOS 7 came out in 2013, slowed-down older iPhones were a common complaint online, and people started to feel very sure Apple was doing it on purpose.

Fact 4: Google Trends shows telltale spikes in complaints that match up perfectly with the release dates. The New York Times (2014) called this one, publishing data showing Google queries for “iphone slow” spiking in traffic around each release. Look at Google Trends forecasting further spikes because the pattern is just that obvious:

Does Apple Ruin Your iPhone on Purpose? The Conspiracy, Explained

Apple has a very loyal customer base, though. Rene Ritchie wrote for iMore, saying this planned obsolescence argument is “sensational” and a campaign of “misinformation” by people who don’t actually understand how great an iPhone really is (barf). Even though the motive is crystal clear, the argument that Apple is innocent isn’t complete nonsense, either: Apple ruining iPhones could damage customer loyalty. People espousing this argument claim an intentional slowdown is less likely than plain incompatibility between new software features and old hardware. That point is a good one, considering how almost all software manufacturers have a hard time adapting new software to old operating systems. Cooler software usually needs faster hardware, and for some ridiculous reason no one has ever come out with an appropriately customizable smartphone; Apple would likely be the last on that list.

Christopher Mims pointed out on Quartz that “there is no smoking gun here, no incriminating memo” proving an intentional slowdown on Apple’s part.

There is really no reason to believe Apple would be against this kind of thing, even if planned obsolescence were just a happy accident for the mega-corporation. Basically, if this is happening by accident, it’s even better for Apple: they don’t have to take responsibility, and it likely helps push the new line. Apple is far from deserving the trustworthy reputation they’ve cultivated since Steve Jobs, as the glitzy marketing plan behind the pointless new Apple Watch demonstrates.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

“Rowhammering” Attack Gives Hackers Admin Access


A piece of code can actually manipulate a physical memory chip by repeatedly accessing nearby capacitors in a burgeoning new hack called rowhammering. Rowhammer hacking is so new that no one has actually used it in the wild yet. Google’s Project Zero security initiative figured out how to exploit an aspect of a physical component in some types of DDR memory chips. The hack can give a user increased system rights regardless of an untrusted status. Any Intel-compatible PC with this kind of chip and running Linux is vulnerable – in theory. Project Zero pulled it off, but it isn’t exactly something to panic about unless you are doing both of those things: using vulnerable DRAM and running Linux.

A lot of readers’ machines might be susceptible to this security hack, but most readers won’t want all the technical details. If you are interested, you can check out the Project Zero blog post about it. The security flaw is in a specific kind of chip: DRAM, or dynamic random-access memory. The chip is supposed to just store information in the form of bits saved on a series of capacitors. The hack works by flipping the value of bits stored in DDR3 memory modules known as DIMMs; DRAM is the style of chip, and each DIMM module houses several DRAM chips. Hackers researching on behalf of Project Zero basically designed a program to repeatedly access sections of data stored on the vulnerable DRAM until the odds of one or more nearby bits retaining a charge when they shouldn’t becomes a statistical reality.

This kind of hack was only theoretical until 2014, when scientists proved this kind of “bit flipping” is completely possible: repeatedly accessing an area of a specific DIMM can become so reliable as to allow the hacker to predict the change of contents stored in a neighboring section of DIMM memory. Last Monday (March 9th, 2015), Project Zero demonstrated exactly how a piece of software can translate this flaw into an effective security attack.

“The thing that is really impressive to me in what we see here is in some sense an analog- and manufacturing-related bug that is potentially exploitable in software,” David Kanter, senior editor of the Microprocessor Report, told Ars. “This is reaching down into the underlying physics of the hardware, which from my standpoint is cool to see. In essence, the exploit is jumping several layers of the stack.”

Why it’s called Rowhammering.

The memory in a DDR-style chip is configured in an array of rows and columns. Each row is grouped with others into large blocks which handle the accessible memory for a specific application, including the memory resources used to run the operating system. There is a security feature called a “sandbox,” designed to protect data integrity and ensure the overall system stays secure; a sandbox can only be accessed through its corresponding application or the operating system. Bit-flipping a DDR chip works when a hacker writes an application that can access two chosen rows of memory, then accesses those same two rows hundreds of thousands of times, aka hammering. When the targeted bits flip from ones to zeros, matching a dummy list of data in the application, the target bits are left alone with the new value.
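To make the mechanics concrete, here’s a minimal sketch in C of that access-and-flush loop, loosely based on Project Zero’s description. Everything in it is illustrative: a real exploit first has to find two addresses that map to different rows of the same DRAM bank, which depends on the memory controller, so the buffer offsets and file name below are hypothetical stand-ins.

```c
/* A minimal sketch of the "hammer" loop at the heart of the attack,
 * loosely based on Project Zero's description. Illustrative only:
 * a real exploit must first find two addresses that map to different
 * rows of the SAME DRAM bank, which depends on the memory controller.
 * Build with: gcc -O2 hammer_sketch.c */
#include <emmintrin.h>   /* _mm_clflush */
#include <stdint.h>
#include <stdlib.h>

static void hammer(volatile uint8_t *row_a, volatile uint8_t *row_b,
                   long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*row_a;                      /* activate row A in DRAM      */
        (void)*row_b;                      /* activate row B in DRAM      */
        _mm_clflush((const void *)row_a);  /* evict from CPU cache so the */
        _mm_clflush((const void *)row_b);  /* next reads hit DRAM again   */
    }
}

int main(void)
{
    volatile uint8_t *buf = malloc(2 * 1024 * 1024);
    if (buf == NULL)
        return 1;
    /* Hammer two offsets 1 MB apart, using the per-window access count
     * Project Zero cites. On real hardware the address pair must be
     * chosen far more carefully than this. */
    hammer(buf, buf + 1024 * 1024, 540000);
    free((void *)buf);
    return 0;
}
```

The whole trick lives in those two _mm_clflush calls: without flushing between reads, the CPU would serve every access from its cache, and the underlying DRAM rows would never be touched, let alone hammered.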

The implications of this style of hack are hard to see for the layman but profound in the security world. Most data networks allow a limited list of administrators to have special privileges. It would be possible, using a rowhammer attack, for an existing account to suddenly gain administrative privileges on the system. In the vast majority of systems, that kind of access would open up several other accounts. Administrative access would also allow some hackers to alter existing security features. The bigger the data center, and the more users with accounts accessing the database, the more useful this vulnerability is.

The Physics of a Vulnerability

We’re all used to newer tech coming with unforeseen security problems. Ironically, this vulnerability is present in newer DDR3 memory chips, and it is the result of the ever smaller dimensions of the silicon. The DRAM cells are too close together in this kind of chip, making it possible to take a nearby row of cells, flip it back and forth repeatedly, and eventually cause the bit next to it (the target bit that is not directly accessible) to flip as well.

Note: The rowhammer attack being described doesn’t work against newer DDR4 silicon or against DIMMs that contain ECC (error-correcting code) capabilities.

The Players and the Code:

Mark Seaborn and Thomas Dullien are the guys who finally wrote a piece of code able to take advantage of this flaw. They created two rowhammer attacks which can run as ordinary processes. Those processes have no security privileges whatsoever but can end up gaining administrative access to an x86-64 Linux system. The first exploit was a Native Client module, incorporating itself into the platform as part of Google Chrome. Google developers caught this attack and disallowed an instruction called CLFLUSH in Native Client, and the exploit stopped working. Seaborn and Dullien were psyched they were able to get that far, and wrote the second exploit shortly thereafter.

The second exploit looks like a totally normal Linux process. It allowed Seaborn and Dullien access to all physical memory, which proved the vulnerability is a genuine threat to any machine with this type of DRAM.

The Ars Technica article about this has a great quote from Irene Abezgauz, a product VP at Dyadic Security:

The Project Zero guys took on the challenge of leveraging the concept of rowhammer into an actual exploit. What’s impressive is the combination of lots of deep technical knowledge with quite a bit of hacker creativity. What they did was create attack techniques in which flipping just a single bit in a specific location allows them to execute any code they want with root privileges or escape a sandbox. This is impressive by itself, but they added to this quite a few creative solutions to make it more likely to succeed in a real world scenario and not just in the lab. They figured out ways for better targeting of the specific locations in memory they needed to flip, improved the chances of the attack to succeed by creating (“spraying”) multiple locations where a flipped bit would make the right impact, and came up with several ideas to leverage this into actual privileged code execution. This combination makes for one of the coolest exploits I’ve seen in a while.

Project Zero didn’t name which models of DDR3 are susceptible to rowhammering. They also claim the attack could work on a variety of operating platforms, even though they only tried it on a Linux computer running x86-64 hardware. That claim isn’t technically proven, but it seems very believable considering the success and expertise behind the opinion.

So, is Rowhammering a real threat or just some BS?

There isn’t an obvious, practical application for this yet. Despite how powerful the worst-case scenario would be, this threat doesn’t come with a guarantee of sweeping the internet like some other, less recent vulnerability exploits. The overwhelming majority of hacks are attempted from remote computers, but Seaborn and Dullien apparently needed physical access to get their otherwise unprivileged code onto the targeted system. Also, because the physical layout of the chip dictates which rows are vulnerable, users who want to protect against this exploit may be able to reconfigure where administrative privileges are stored and manipulated on the chip. Thirdly, rowhammering as Project Zero describes it requires over 540,000 memory accesses in less than 64 milliseconds – a memory-speed demand that means some systems can’t even run the necessary code. Hijacking a system using rowhammering with these limitations is presently not a real threat.
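To put that number in perspective, 64 milliseconds divided by 540,000 accesses is a budget of roughly 118 nanoseconds per access. Here’s a hypothetical little timing harness in C (my own back-of-the-envelope sketch, not Project Zero’s code) that checks whether the machine it runs on can even sustain that flush-and-read rate:

```c
/* A back-of-the-envelope timing harness (my sketch, not Project Zero's
 * code): can this machine sustain ~540,000 uncached DRAM reads in 64 ms,
 * i.e. roughly 118 ns per flush-and-read round trip?
 * Build with: gcc -O2 hammer_budget.c */
#include <emmintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ACCESSES 540000L

int main(void)
{
    volatile uint8_t *buf = malloc(4096);
    if (buf == NULL)
        return 1;

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < ACCESSES; i++) {
        (void)*buf;                      /* read: forced to DRAM after flush */
        _mm_clflush((const void *)buf);  /* evict so the next read misses    */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    printf("%ld accesses in %.1f ms (%.1f ns each; budget is ~118 ns)\n",
           ACCESSES, ns / 1e6, ns / ACCESSES);
    free((void *)buf);
    return 0;
}
```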

People used to say the same thing about memory corruption exploits, though. For example, a buffer overflow or a use-after-free both allow hack attempts to squeeze malicious shell code into the protected memory of a computer. Rowhammering is different because it is so simple: it only grants increased privileges to the hacker or piece of code, which is a real threat if it becomes as thoroughly developed as memory corruption exploits have. The subtle difference might be hard to grasp now, but with the groundwork done, it’s the usual race between security analysts who would love to protect against it and the criminal world trying to dream up a way to make it more viable. Rob Graham, CEO of Errata Security, wrote further on the subject here.
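If those terms are unfamiliar, here’s what a textbook use-after-free looks like in C: a deliberately broken toy showing the bug class this post compares rowhammer against, strictly an illustration and not an exploit.

```c
/* A deliberately broken toy showing the use-after-free bug class
 * (illustration only, not an exploit). After free(), the old pointer
 * still holds its address; a later allocation may receive that same
 * memory, so reads through the stale pointer see someone else's data. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *session = malloc(32);
    if (session == NULL)
        return 1;
    strcpy(session, "user:guest");
    free(session);                /* block goes back to the allocator */

    char *attacker = malloc(32);  /* very likely reuses the same block */
    if (attacker == NULL)
        return 1;
    strcpy(attacker, "user:admin");

    /* Undefined behavior: dereferencing the dangling pointer. If the
     * allocator reused the block, "guest" silently became "admin". */
    printf("session says: %s\n", session);

    free(attacker);
    return 0;
}
```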

In short, this is noteworthy because a physical design flaw in a chip is being exploited, as opposed to a software oversight or coding error. A piece of code actually affects the physical internals of the computer during the attack.

Or, as Kanter, of the Microprocessor Report, said:

“This is not like software, where in theory we can go patch the software and get a patch distributed via Windows update within the next two to three weeks. If you want to actually fix this problem, we need to go out and replace, on a DIMM by DIMM basis, billions of dollars’ worth of DRAM. From a practical standpoint that’s not ever going to happen.”

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

East Coast Internet Speeds Are The Fastest


Smaller states seem to do the best at providing fast Internet but the Internet connections in these states pale in comparison to the speeds in other countries, particularly South Korea.


The latest Akamai State of the Internet report ranked states by their average Internet connection speed, and suggested that smaller states do the best at providing fast Internet to residents. However, the Internet connections in these states pale in comparison to the speeds in other countries, particularly South Korea.

First in average speed came Vermont, at 12.7 Mbps. It was followed by New Hampshire, Delaware, the District of Columbia (counted as a state for the sake of the study) and Utah, all resting above 11 Mbps. The next five states, Massachusetts, Virginia, Maryland, New Jersey and Connecticut, came in slightly above 10 Mbps.

On the other hand, South Korean Internet averages a connection speed of 14.2 Mbps.

The thing to note about the faster states, before comparing them to Korea, is that most of them are small in size, aside from Utah. Utah, on the other hand, is home to the new NSA data center, which might skew the data. With a lot less physical land to connect, it is likely that the small size of these states, and their level of urbanization, is responsible for the higher speeds. Rural regions have much more dispersed populations, and there is less incentive to build out infrastructure there.

A second thing of note is that the Internet may have a bigger emphasis in Korean society – and thus an expectation of higher speeds and more power than we expect in the United States. Competitive gaming for instance is huge in Korea, to the point that some games are almost entirely dominated by Korean players. Online gaming, of course, is taxing on bandwidth. It is a world where a slower connection can cost points, and cash prizes.

David Belson, the editor of the report, suggests that it may not just be physiography or population, or anything like gamers, driving up the demand for high-speed net. He suggests that competition between providers has helped raise Internet speeds, at least on a more local level.

“At least on a community level, if there’s only one provider, the chances of getting very high-speed services for reasonable prices is low. But when you put Verizon FiOS and Comcast and RCN and others in the same neighborhood, there’s more contention for the customer dollar. So you’ll see them start getting more competitive for speed and pricing,” he wrote.

Consumers seem to agree with him.

A survey by the PPIC (Public Policy Institute of California) focused on that state’s use of the Internet showed that a majority of the population, particularly Latinos and blacks, felt that high-speed Internet should be a public utility. Over 50% of Latinos and almost 60% of blacks surveyed also felt that children without broadband access were at a disadvantage, with 67% of voters favoring a government program to provide broadband Internet to lower-income and rural residents of the state.

Higher connection speeds could finally give areas stuck with a single provider an alternative, and competition for cheaper rates and faster Internet. However, for some, it may be a while until they get access to Internet services near the power of South Korea’s.

[Image Via NASA]

No, the FCC Didn’t make the Internet a Public Utility – So what actually happened?


The internet buzzed with celebration at the FCC’s 3–2 vote to approve the new Title II-backed “net neutrality” regulations last week. You can read about it in the press release dated February 26, 2015. Despite how the story’s been spun, it isn’t exactly the win it appears to be.

Don’t get me wrong; this decision is important, and it is a historic, significant win for the internet as we know it. After the D.C. Circuit ruled in favor of Verizon in Verizon v. FCC, it looked like the concept of net neutrality was in very real peril. Last week’s win is crucial to the war, but there are likely to be many more battles before an outcome arrives. Still, there are a ton of heroes here: advocacy groups, small businesses, progressive big businesses and a host of supporters comprising one of the largest online movements ever.

While it’s a big win, it’s frustrating to research and discover that the internet wasn’t exactly made into a “public utility,” as reported, like, everywhere. Internet providers (what we currently refer to as Internet Service Providers or ISPs, aka Comcast and Verizon) are now going to be reclassified as “telecommunications services” under Title II. Most people don’t immediately grasp that classifying ISPs as telecommunications services doesn’t make them the legal equivalent of a utility.

It’s a subtle enough distinction that almost no one online reported this important detail. In fact it was most commonly misreported.

You can brush up on the history of the net neutrality fight by following the links and reading quotes gathered at The Verge.

The New York Times
The Hill

CNBC

Engadget

The above examples are only a sample of the long list of unintentionally yet blatantly wrong headlines. Nilay Patel wrote perhaps the most convincing argument for making the internet a utility last winter (The Verge, 2014).

The Title II decision allows the FCC to reclassify all ISPs as “common carriers,” which sounds like a public utility – but it isn’t. John Bergmayer from Public Knowledge put it like this:

This misapprehension comes about because the most prominent telecommunications common carriage service of the past—telephone service—also was regulated as a utility. But utility regulation typically carries with it a number of features not present in any current proposals for broadband—most notably, thorough price regulation and detailed local regulation of service quality, customer service responsiveness, and so forth.

Common carriage sounds almost exactly like a public utility, which is exactly why the misreporting spread so easily.

As Bergmayer wrote, “even full common carrier regulation is not identical to utility regulation.”

Aspects of Title II do seem like utility-style regulations, but the FCC is using a concept called forbearance to make Title II-backed net neutrality different from “utility-style regulation.” The differences are all in favor of the ISPs rather than the consumers. There’s going to be no rate regulation under this change. ISPs won’t file tariffs or be subject to monitoring that could slow business, if not internet speeds. ISPs can keep bundling services, and they won’t be forced to lease competitors access to the networks they control. And ISPs are not obligated to contribute to the Universal Service Fund, or collect the associated taxes and fees a utility would.

If you are for net neutrality (you should be), you need to recognize that last week’s victory was real and a valid win for the cause. But if you want to argue or advocate for that cause, you need to abandon the misinformation that the internet is now a utility. Saying the FCC’s decision gave service providers utility status benefits the very companies who would love to destroy net neutrality, effectively making the internet shitty for everyone you know!

/end rant

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

LA Startup The DIME is the Video Version of Craigslist


The video version of Craigslist lets users upload or view videos when trying to sell or buy online.

What if you could make your own commercial for the stuff you want to sell online? The DIME is an app created by Latinos that does exactly that. The app allows smartphone users to create their own commercials for cars, apartments, jobs, roommates, puppies, tickets or anything else they want to sell or buy online.

We were lucky enough to nab an interview with Isaac Cuchilla, CEO of The DIME. The 26-year-old LA native is Nicaraguan and Salvadoran and found himself working with technology early, in elementary school; later, when he became interested in journalism and worked at La Opinión, he found a way to fuse these two interests.

MW: Tell me about The DIME and how you all came up with the idea.

IC: During a brainstorming session with Erik Caballero, we were thinking about ways to bolster the newspaper industry’s revenue, and we started talking about its advertising revenue, in particular the classifieds section. We got to evaluating how Craigslist came in to poach the newspapers’ classifieds revenue and did it to the extent that newspapers were left scrambling. We wanted to create a product that made the overall local buying/selling experience safer and more interactive.

MW: What are 3 lessons you have learned and would like to share with other entrepreneurs?

IC: It takes a lot of work to get it from idea to execution. Then it takes an inhumane amount of work to make a company out of it. Be ready.

There will be plenty of “no” before you get a “yes” when it comes to people believing in the product and raising capital for it. So just keep at it and don’t stop.

Persevere. You can’t score if you don’t shoot.

MW: Where are you with DIME right now?

IC: We’re in the Apple App Store with our first version. Pretty much a Beta that has taught us a lot about where to improve and focus our future versioning goals. It’s a listening period where we improve upon the user experience. We’re working on our next versions and look forward to doing a full public launch.

MW: Where do you see the company going in the next 2-5 years?

IC: Providing our users the safest and most interactive experience in local buying, selling and promotion. Basically, the go-to platform for local content and goods— all in real-time video and everywhere you like to consume it.

You can download The DIME here.

How the digital age has changed our approach to death and grief


In the days and weeks leading up to his death, Leonard Nimoy, the actor and director best known for playing the gravel-voiced Vulcan Mr Spock in Star Trek, knew he was dying. He used Twitter as a means to make peace with this fact, and to say goodbye to his friends, family and fans around the world with sayings, poetry, and wise words.

So is a new ars moriendi, or craft of dying, emerging in the digital age? Historians have argued that dying was a more public affair before the 20th century, when most people were cramped together in one-room hovels. Even the rich in their grand houses lived more public-facing lives than we might tolerate today.

Improved housing offers greater privacy for living, including that provided by hospital or residential care, which is where our dying takes place – removed from most people’s sight. The result is not that death is taboo, but it has certainly become hidden – what historian Philippe Ariès called “unfamiliar”.

But that has been changing for a while now. The past 30 years have witnessed an explosion of auto-pathography: published autobiographies about the writer’s own dying, almost always of cancer. Art photographers also have got in on the act, documenting the withering bodies of people dying of cancer or AIDS, or portraits taken before and after death.

Nobody was obliged to read or view these offerings, but in the UK that changed in 2009. Jade Goody, who had come to fame as a contestant on Big Brother, did a deal with the tabloids and OK Magazine to cover her death from cancer, day-by-day, week-by-week. She wanted to die as she lived, in the full glare of the media. For several weeks it was impossible to go into any newsagent without being confronted with front-page images of a bald-headed Goody on her final and very public journey.

Jade Goody, who lived and died in the public eye.
Stefan Rousseau/PA

These days, the pervasive nature of social media can carry this several steps further. Anybody can now blog or tweet about their own dying – which can be remarkably educational for the doctors who read their patients’ blogs. Online mutual help groups also enable those with a fatal condition to communicate, anywhere, any time. Online, they can find emotional and practical support from one another.

After death, social media enable grief to become more shared, more public, than it generally was in the 20th century. Sufferers can express their suffering. And in so doing, they educate others about dying and mourning.

A mixed blessing

All this is not without its problems. In the 20th century, many people actually valued the privacy that removed their dying or grieving from the sight of others. Visibility, offline or online, creates the possibility of support, but it also requires the sufferer to put on a public face which may not mirror their internal torment.

Visibility also increases the chances of unhelpful comments and even censure. This is apparent in grieving, where mourners may be criticised for grieving too much or too little, too long or not long enough, for being too stoical or too expressive. Facebook, with its upbeat ethos, may not be where young people dying of cancer want to share their worst fears and deepest anxieties.

In the US, split between religious conservatism and liberal humanism, people’s very different ways of dealing with suffering and finding hope in mortality might once have stayed within their communities. But in the borderless online world they bang up against each other, often adding to the suffering. Fundamentalist sites discussing euthanasia or post-abortion grief can be profoundly unhelpful to those seeking advice and counsel. Liberal humanist sites may not be welcomed by some who are religious and after pastoral help.

This is why online groups restricted to particular age groups, particular conditions or particular religious beliefs can be valuable. But online sites run by people living with certain life-threatening conditions – notably depression and anorexia – can disturb friends and family. Such sites may even embrace suicide pacts or a pro-anorexia ethos, and may get shut down, adding to their members’ feeling of not being understood.

The one certainty

Humans have always been mortal, but cultures and subcultures around dying have never been static. New communication technologies have always offered new ways of familiarising ourselves and others with death: printing, photography, sound recording, television, email, Facebook, Twitter, and so on. Each new technology impacts existing tensions such as privacy versus sharing, freedom to die or grieve one’s own way versus surveillance and censure by others, power versus resistance to power.

We can all be certain we will die. But we cannot be at all certain, when our time comes, how technology and society will offer to accompany us on that final journey.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Technology as a social lifeline for kids with Asperger’s


Technology is often maligned for having a negative influence on young people, particularly on their ability to develop healthy social relations and a sense of identity. But technology can also be a force for good.

For some people, such as those on the higher-functioning end of the autism spectrum, technology can be a vehicle for personal and social growth.

Many young people with autism spectrum disorder (ASD) are drawn to computers. That’s not surprising given technology’s potential for exploring the deep, intense interests that many ASD kids have.

What is surprising, though, is how little we know about the ways in which ASD kids use technology to navigate the pitfalls of a world that, more often than not, doesn’t understand them.

Social challenges

Young people on the autism spectrum find it hard to navigate social norms, often with costly consequences. So there’s much to be learnt from how they use technology to foster self-worth and establish contact with others.

Many of them yearn for social connection, but find it difficult to initiate. So they seek it in settings such as online communities, where there’s less pressure to respond in ways that “neurotypical” peers find acceptable and where they’re more likely to find like-minded people with similar interests.

Through our involvement with The Lab – a network of “technology clubs” connecting school-age ASD kids with IT mentors and each other in after-school spaces – we’re beginning to see just how complex and creative ASD kids’ networked lives can be.

At The Lab, online and offline spaces are deployed in rich combinations that help young people with ASD retain control over how they are perceived, demonstrate their skills and celebrate, rather than hide, their unique qualities. We see these spaces as “differentiated” – created or appropriated by ASD kids, for ASD kids.

Lab members bring their own laptops. They decide where and with whom they sit, and the terms under which they share what they do. The setting is designed to be unlike the typical school classroom, with its rigid expectations and arrangement of space.

Third places

The differentiated space of The Lab evokes the playful, non-judgemental “third places” observed by Ray Oldenburg in the 1980s: communal hangouts like cafés and social clubs where people turn up because they feel valued for who they are and what they can bring.

An internet-era version is LAN cafés, which perform a similar function for school-age gamers, including, no doubt, many people with ASD.

At The Lab, interactions happen both in person and through technology, sometimes at the same time. This is where Homi Bhabha’s concept of the “third space” comes in.

The idea of third space can be understood in many ways, but we use it to describe a transitional state where ideas and thoughts meet to produce new knowledge. The environments created by “third places” enable “third space” states to emerge.

Third space knowledge incorporates tacit understandings about oneself and one’s relationship to others. But in everyday life, kids with ASD don’t always learn the unwritten rules because they process the world differently to neurotypicals.

It’s like asking an international visitor to understand Australian slang on her first day in the country. Over time, she may understand and even use this slang, but it doesn’t happen naturally.

There is, then, a need for forms of interaction that help young people with ASD develop the everyday social skills others take for granted, and which matter for future employment. But these settings also need to value ASD kids for who they already are, so they can see their own strengths and build on them.

So far, The Lab does seem to be making a difference to the lives of participants. Much of this seems to flow from the combination of non-judgement, social connection and skills sharing. From here, participants forge their own learning paths based on their interests.

Life’s a game

Computer games have played a large part in this process. There’s been much debate lately on the pros and cons of games use, with claims of increased aggression and anti-social behaviour countered by other views that see games as useful tools for improving socialisation.

We’ve seen schools banning computer games that aren’t defined by teachers as “educational”, ignoring the sophisticated forms of informal learning and socialisation that occur during “screen time”.

Video games like Mario Bros are often seen to be harmful at worst or time wasters at best. However, sometimes they can provide a social nexus that brings young people together, particularly those on the autism spectrum.
Sam Howzit/Flickr, CC BY

We’ve also seen student-led proposals for lunchtime technology groups rejected by school councils. While not dismissing issues connected with too much gaming, we do worry that some children – especially those who, like kids with ASD, don’t “fit in” – are unnecessarily missing out on unique opportunities to learn and belong.

One such child joined The Lab at age 12. Lonely and isolated at school, and obsessed with Super Mario Brothers, he quickly progressed from playing Super Mario games to programming his own. Then he learnt to make his own Super Mario music tracks, teaching himself violin in the process.

From there, he learnt animation and video making from a new Lab friend and soon created his own popular YouTube channel on how to draw Mario characters. Since then he’s expanded his interests, but his sense of competence, self-worth and agency has stayed with him.

Given the woefully low employment rate of people with autism, and the stories we continually hear about self-harm and depression by young people with ASD, we think it’s worth some schools rethinking their approach to technology usage, and considering the creation of informal social settings where ASD kids’ IT interests can be harnessed to improve their quality of life, as well as their future prospects.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Kaspersky vs. Equation Group: Private Corporate Security Links Malware to NSA


In a story that is abstract, hard to grasp and full of details and names science fiction writers might be jealous of, Kaspersky is finally able to point an indirect but definite finger at the NSA.

Last Monday, February 16th, at Kaspersky’s Security Analyst Summit, Kaspersky security researchers were finally prepared to present their findings linking the fifteen-year-old NSA-linked handle “Equation Group” to hundreds of files, including plug-ins and upgraded variations going back those fifteen years. Kaspersky researchers were initially able to identify the group’s work by correlating a list of hard drive vendors found in part of the code with a list of hardware commonly infected by a piece of code identified five years ago, dubbed the nls_933w.dll module.

The nls_933w.dll module was very likely written by the same people who worked on the equally ubiquitous malware of initially baffling origin, Stuxnet. If you follow this sort of security news, you may have read about Stuxnet before. In both cases, this type of malware remains dormant unless called upon by an autonomous piece of code to stop hibernating and perform an unknown set of actions. It’s notoriously difficult to reverse engineer these complex pieces of code.

Vitaly Kamluk is the public voice of Kaspersky Lab’s Global Research and Analysis Team. He gave the now week-old-but-already-infamous talk, offering several long-coming answers to questions anyone interested in high-level cybersecurity has been otherwise fruitlessly asking for years. Kamluk explained that the module is in many ways the “ultimate cyberattack tool,” possibly the crowning achievement of the so-called Equation Group. He explained how the available evidence implies Equation Group is about 15 years old, and gave detailed reasons why the malware is evidence that the group responsible for the nls_933w.dll module must have had confident and confidential knowledge of Stuxnet and Flame.

Personally, I have trouble vetting the information to verify Kaspersky’s accusation, and it is difficult to link Equation Group to the NSA. This is the nature of information warfare, though; the people who are great at concealing intentions and information are going to be shrouded in mystery even after someone is able to accuse them. What makes Kaspersky vs. Equation Group so noteworthy is that a private security firm seems to have the clearest understanding of cyberwarfare out of everyone with the guts to openly discuss such a formidable potential enemy. Equation Group is known to be behind several security operations of dubious benefit to anyone other than the United States, wielding some of the most-feared zero-day exploits, the kind that can literally ruin computers, including systems running critical military or utility functions for states. Equation Group has been accused, without concrete proof, of espionage against increasingly sensitive targets. The current list of victims includes governments, energy companies, embassies, telecoms and many other entities, mostly based in Russia, Syria, Iran and Pakistan.

The targets imply Equation Group is acting on behalf of US interests but until people know the endgame of such security violations or the true identity of Equation, there are more questions than answers – probably by design.

Read more about the internet:

World Cyberwar: Six Internet News Stories in 2015 Blur the Line Between Sci Fi and Reality

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Before decrying the latest cyberbreach, consider your own cyberhygiene


By Arun Vishwanath, University at Buffalo, The State University of New York

The theft of 80 million customer records from health insurance company Anthem earlier this month would be more shocking if it were not part of a larger trend. In 2013, the Department of Defense and some US states were receiving 10–20 million cyberattacks per day. By 2014, there was a 27% increase in successful attacks, culminating with the infamous hack of Sony Pictures.

Much of the media focus is on the losses rather than the process by which such breaches take place. Consequently, instead of talking about how we could stop the next attack, people and policymakers are discussing punitive actions. But not enough attention is given to the actions of individual end users in these cyberattacks.

We are the unintentional insiders

Many of these hacking attacks employ simple phishing schemes, such as an e-card on Valentine’s Day or a notice from the IRS about your tax refund. They look innocuous but when clicked, they open virtual back doors into our organizations.

It is you and I who click on these links and become the “unintentional insiders” giving the hackers access and helping spread the infection. Such attacks are hard to detect using existing anti-virus programs that, like vaccines, are good at protecting systems from known external threats — not threats from within.

Clearly, this virtual battle cannot be won using software alone. In the same way personal hygiene stymies the spread of infectious disease, fixing this cyber quandary will require all of us to develop better cyberhygiene. We need to begin by considering the cyberbehaviors that lead to breaches.

My research on phishing points to three. Firstly, most of us pay limited attention to email content, focusing instead on quick clues that help expedite judgment. A picture of an inexpensive heart-shaped valentine gift gets attention, oftentimes at the cost of looking at the sender’s email address.

This is compounded by the ritualized media habits that our always-on, ever-accessible smartphones and tablets enable. Many of us check email throughout the day whenever an opportunity or notification arises, even when we know it is dangerous to do so, such as while driving. Such habitual usage significantly increases the likelihood of someone opening an email as a matter of routine.

And finally, many of us just aren’t knowledgeable about online risks. We tend to hold what I call “cyber risk beliefs” about the security of an operating system, the safety of a program, or the vulnerability of an online action, most of which are flawed.

Sit on down and get educated.
Matt Grimm, CC BY-NC-SA

Cleaning up our cyberhygiene act

Developing cyberhygiene requires all of us — netizens, educators, local government, and federal policymakers — to actively engage in creating it.

To begin, we must focus on educating everyone about the risks of online actions. Most children don’t learn about cybersafety until they reach high school; many not until college. More troublingly, some learn through risky trial and error, or through reports of someone else’s errors.

In an age where online data remain on servers perpetually, the consequences of a privacy breach could haunt a victim forever. Expanding federal programs such as the National Initiative for Cybersecurity Education, which presently aims to inspire students to pursue cybersecurity careers, could help achieve universal cybersecurity education.

Second, we must train people to become better at detecting online fraud. At the very least, all of us must be made aware of online security protocols, safe browsing practices, secure password creation and storage, and on procedures for sequestering or reporting suspicious activity. Flawed cyber-risk beliefs must be replaced with objective knowledge through training.

Although some training programs address these issues, most target businesses that can pay for training. Left out are households and other vulnerable groups, which, given the recent “bring your own device to work” (BYOD) trend, increases the chances that a compromised personal device brings a virus into the workplace. Initiatives such as the Federal Cybersecurity Training Events that presently offer free workshops to IT professionals are steps in this direction, but the emphasis must move beyond training specialists to training the average netizen.

President Obama calls for beefing up cybersecurity laws on Feb 13, 2015.
Kevin Lamarque / Reuters

Finally, we must centralize the reporting of cyber breaches. The President’s proposed Personal Data Notification and Protection Act would make it mandatory for companies to report data breaches within 30 days. But it still doesn’t address who within the vast network of enforcement agencies is responsible for resolution. Having a single clearing house that centralizes and tracks breaches, just like the Centers for Disease Control and Prevention tracks disease outbreaks across the nation, would make remediation and resource allocation easier.

Across the Atlantic, the City of London Police created a system called Action Fraud, which serves as a single site for reporting all types of cyberattacks, along with a specialized team called FALCON to quickly respond to and even head off impending cyberattacks. Our city and state police forces could do likewise by channeling some resources away from fighting offline crime. After all, real-world crime is at a historically low rate while cybercrime has grown exponentially.

The Conversation

This article was originally published on The Conversation.
Read the original article.