Category Archives: Feature

Your Interpretation of Quantum Physics is Probably Wrong


Quantum theory can be misinterpreted to support false claims.


There is legit science behind quantum theory, but misinterpretations of it are used to justify an assortment of pseudoscience. Let’s examine why.

Quantum science isn’t a young science anymore. This year, 2015, the term “quantum”, as it relates to quantum physics, turns 113 years old. The term as we know it first appeared “in a 1902 article on the photoelectric effect by Philipp Lenard, who credited Hermann von Helmholtz for using the word in reference to electricity” (Wikipedia). During its first century of life, attempts to understand quantum particle behavior have led to a bunch of discoveries. Quantum physics has furthered understanding of key physical aspects of the universe, and that complex understanding has been used to develop new technologies.

Quantum physics is enigmatic in that it pushes the limits of conceptualization itself, leaving it seemingly open to interpretation. While it has been used to predict findings and improve human understanding, it’s also been used by charlatans who have a shaky-at-best understanding of science. Quantum physics has been misappropriated to support a bunch of downright unscientific ideas.

It’s easy to see why it can be misunderstood by well-intentioned people and foisted upon an unsuspecting public by new age hacks. The best minds in academia don’t always agree on secondary implications of quantum physics. No one has squared quantum theory with the theory of relativity,  for example.

Most people are not smart enough to parse all the available research on quantum physics. The public’s research skills are notoriously flawed on any subject, and the internet is rife with misinformation, pitting casual researchers against their own lack of critical thinking skills. Anti-science and pseudoscience alike get a surprising amount of traction online, with Americans believing in a wide variety of superstitions and erroneous claims.

In addition to the public simply misinterpreting or misunderstanding the science, there is money to be made in taking advantage of gullible people. Here are some false claims that have erroneously used quantum theory as supporting evidence:

Many Interacting Worlds

The internet loves this one. Contemporary multiple-universe theories are philosophy, not science, but that didn’t stop Australian physicists Howard Wiseman and Dr. Michael Hall from collaborating with UC Davis mathematician Dr. Dirk-Andre Deckert to publish the “many interacting worlds” theory as legit science in the otherwise respectable journal, Physical Review X. This is the latest in a train of thought that forgoes scientific reliance on evidence and simply supposes the existence of other universes, taking it a step further by insisting we live in an actual multiverse, with alternate universes constantly influencing each other. Um, that’s awesome but it’s not science. You can read their interpretation of reality for yourself.

Deepak Chopra

Deepak Chopra is a celebrated new age guru whose views on the human condition and spirituality are respected by large numbers of the uneducated. By misinterpreting quantum physics he has made a career of stitching together a nonsensical belief system from disjointed but seemingly actual science. Chopra’s false claims can seem very true when first investigated, but science has already explained key details that Chopra nonetheless considers mysterious.

The Secret

‘The Power’ and ‘The Secret’ are best-selling books that claim science supports what can be interpreted as an almost maniacal selfishness. The New York Times once described the books as “larded with references to magnets, energy and quantum mechanics.”

The Secret’s author, Rhonda Byrne, borrows heavily from important-sounding terminology found in psychology and neuroscience to build a confused metaphysics not rooted in any known or current study of consciousness. Byrne’s pseudoscientific jargon is surprisingly readable and comforting, but that doesn’t make the science behind it any less bogus.

Scientology


There isn’t anything in quantum physics implying a solipsism or subjective experience of reality but that doesn’t stop Scientology from pretending we each have our own “reality” – and yours is broken.

Then there is the oft-headlining, almost postmodern pseudoscientific masterpiece of utter bullshit: Scientology.

Scientology uses this same type of claim to control its cult following. Scientology relies on a re-fabrication of the conventional vocabulary normal, English-speaking people use. The religion drastically redefined the word reality: L. Ron Hubbard called reality the “agreement.” Scientologists believe the universe is a construct of the spiritual beings living within it. The real world we all share is, to them, a product of consensus. Scientology describes, for example, mentally ill people as those who no longer accept an “agreed upon apparency” that has been “mocked up” by us spiritual beings, to use their reinvented terminology. Scientologists misuse the word reality by asking, “what’s your reality?”

In conclusion…

The struggle to connect quantum physics to spirituality is a humorous metaphor for subjectivity itself.

If you find yourself curious to learn more about quantum theory, you should read up and keep an open mind, no doubt. The nature of a mystery is that it hasn’t been explained. Whatever evidence might help humanity understand the way reality is constructed is not going to come from religion or superstition; it will come from science. Regardless of claims to the contrary, quantum theory only points out a gap in understanding and doesn’t explain anything about existence, consciousness or subjective reality.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Why is CRISPR the Science Buzzword of Early 2015?


CRISPR isn’t just the cutting edge of genetic modification – it is re-framing our understanding of evolution.

 What is CRISPR?
CRISPR is a DNA sequence that can do something most other genes can’t: it changes based on the experience of the cell it’s written in. It works because of a natural ability of cells to rewrite their own genetic code, first observed in 1987. The name CRISPR was coined in 2002, and it stands for “clustered regularly interspaced short palindromic repeats”. These sequences function as a method of inserting recognizable DNA of questionable or dangerous viruses into a cell’s own DNA, so that the offspring of the cell can recognize what its ancestors have encountered and defeated in the past. By inserting a CRISPR-associated protein into a cell along with a piece of RNA code the cell didn’t write, DNA can be edited.

A 2012 breakthrough involved, in part, the work of Dr. Jennifer A. Doudna. Doudna and the rest of the team at UC Berkeley showed the CRISPR system could be programmed to cut DNA at chosen locations, opening the door to editing human genes. Recently, in March 2015, she warned this new genome-editing technique comes with dangers and ethical quandaries, as new tech often does. In a NYT article, Dr. Doudna called for a worldwide moratorium on human germline editing, to allow humanity time to better understand the complicated subset of issues we all now face.
CRISPR-related tech isn’t only about editing human genes, though. It affects cloning and the reactivation of otherwise extinct species. It isn’t immediately clear what purpose this type of species revival would have without acknowledging the scary, rapidly increasing list of animals that are going extinct because of human activity. Understanding and utilizing species revival could allow humans to undo or reverse some of our environmental wrongs. The technique may be able to revive the long-lost wooly mammoth by editing existing elephant DNA to match the mammoth’s, for instance. Mammoths likely died out because they couldn’t adapt to natural climate change at the end of their era, which makes them a politically uncontroversial choice, and the implications for future environmentalism are promising.
Each year, mosquitoes are responsible for the largest planetary human death toll. Editing DNA with CRISPR bio-techniques could help control or even wipe out malaria someday. The goal of this controversial tech is to make mosquitoes resistant to the malaria parasite, or to steer their breeding based on how susceptible they are to carrying the disease. The controversy around this approach to pest and disease control involves the relatively young research behind horizontal gene transfer, where DNA is passed from one organism to an unrelated species. A gene that interferes with the ability of mosquitoes to reproduce could unintentionally cause other organisms to have trouble reproducing. This info is based on work published on bioRxiv: http://www.biorxiv.org/content/early/2014/12/27/013276
Even more controversial are the startups claiming they can create new life forms, and own the publishing rights. Austen Heinz’s firm, Cambrian Genomics, grows genetically controlled and edited plants. The most amazing example is the creation of a rose species that literally glows in the dark. Cambrian is collaborating with the rose’s designer, a company called Glowing Plant, whose projects were eventually banned from Kickstarter for violating a rule about owning lifeforms. Eventually, Heinz wants to let customers request and create creatures: http://www.sfgate.com/business/article/Controversial-DNA-startup-wants-to-let-customers-5992426.php#photo-7342819
The final example in an ongoing list of 2015 breakthroughs involving CRISPR: CRISPR-mediated direct mutation of cancer genes in the mouse liver might be able to combat cancer. It’s the second cancer-related breakthrough in 2015 that affects the immune system; the first was covered on Cosmoso about a week back: Accidental Discovery Could Turn Cancer Cells Into Cancer-Attacking Immune Cells.

Other Related Cosmoso.net articles:

Pre-Darwinian Theory of Heredity Wasn’t Too Far Off

Wooly Mammoth Poised to be the First De-Extincted Animal, Son~!

 

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

CANtact Device Lets you Hack a Car’s CPU for $60


Right now, Eric Evenchick is presenting CANtact at the Black Hat Asia 2015 security conference in Singapore. CANtact is a hardware interface that attaches to the car’s CPU at one end and a regular laptop at the other. He’s already figured out how to do several simple hacks. It may sound like a simple device, but the pricey commercially-available on-board CPU interfaces have been a consistent obstacle to car security research.

Car companies have a huge security hole that they have not publicly addressed. The only reason people don’t regularly hack motor vehicles is a lack of commercially available hardware. Hacking a car’s electronic system is something only a few people would even have the equipment to learn. To become a specialized security researcher in this area you would have to have a car you are willing to seriously mess with, which is expensive in and of itself. Some people might have access to a clunker that was made recently enough to have a CPU, but they can’t afford the $1,200 stock cable that your local car mechanic would use to run the pre-fab software provided by the manufacturer. Eric Evenchick spent the last year figuring out exactly what makes the hardware tick, so he could put it in the hands of security researchers for the price of a dinner at a fancy restaurant.

24-year-old Eric Evenchick calls the controversial device CANtact, and he’s going to present it today at the Black Hat Asia security conference in Singapore, whether car companies like it or not. The code that comes on the board attached to the cable is open source. He can get it as cheap as $60, and maybe it will sell through third parties for $100. CANtact uses any USB interface at one end to adapt to a car or truck’s OBD2 port at the other. OBD2 ports usually sit under the dashboard and talk to the car or truck’s CPU. In most modern vehicles, the complicated Controller Area Network, or CAN, controls the windows, the brakes, the power steering, the dashboard indicators and more. It’s something that can disable your car, and most people shouldn’t mess with it just yet. Once peer-collaborated info breaks into the mainstream, Evenchick hopes customized CAN systems will be common practice.
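
For readers curious what poking at a CAN bus looks like in code, here is a minimal sketch. It assumes a Linux box where the adapter (CANtact or any other) has already been brought up as a SocketCAN interface named can0 (that setup is an assumption for illustration, not a description of CANtact’s own tooling), and it only listens to traffic; it sends nothing to the vehicle.

```c
/* Minimal CAN sniffer using Linux SocketCAN. Assumes an interface "can0"
 * already exists (e.g. brought up with the usual ip/slcand tooling). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void) {
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);       /* raw CAN socket */

    struct ifreq ifr;
    strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);
    ifr.ifr_name[IFNAMSIZ - 1] = '\0';
    ioctl(s, SIOCGIFINDEX, &ifr);                    /* look up interface index */

    struct sockaddr_can addr = {0};
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct can_frame frame;
    while (read(s, &frame, sizeof(frame)) == (ssize_t)sizeof(frame)) {
        printf("ID 0x%03X  [%d] ", frame.can_id & CAN_SFF_MASK, frame.can_dlc);
        for (int i = 0; i < frame.can_dlc; i++)
            printf("%02X ", frame.data[i]);
        printf("\n");
    }
    close(s);
    return 0;
}
```

Dumping frames this way is the boring but necessary first step of the research Evenchick is trying to make affordable; figuring out which IDs map to which vehicle functions is where the real work starts.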

“Auto manufacturers are not up to speed. They’re just behind the times. Car software is not built to the same standards as, say, a bank application. Or software coming out of Microsoft.” Ed Adams at Security Innovation, 2014

Is CAN hacking a security threat we’ll see in the future? Quite probably. Back in 2013, security researchers Chris Valasek and Charlie Miller used DARPA funding to demonstrate how possible it really is to affect steering and brakes once the CAN system is accessed.

In the controversial death of journalist Michael Hastings, some people suspected car-hacking. It’s never been proven but you can read a detailed examination of the evidence in the Cosmoso.net article: Revisiting the Death of Michael Hastings

Evenchick is not trying to make it easier for hackers to attack cars. Instead he claims more affordable gadgetry will improve security, which seems to be the way the tenuous relationship between security culture and hacking has always gone. In the test described in the Forbes article linked above, Valasek and Miller rewired a $150 ECOM cable to access and test vehicles’ OBD2 ports. CANtact comes out of the box ready to do what Valasek and Miller had to stay up late nights perfecting.

Anyone who attended Black Hat Asia, or can get a hold of any video of Evenchick’s presentation can contact Jon Howard: [email protected]
Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Accidental Discovery Could Turn Cancer Cells Into Cancer-Attacking Immune Cells


Unexpected results are sort of the point of lab experiments. Laboratory studies reveal the unforeseen, and if they didn’t, there would seldom be a reason to perform lab studies. It can be problematic when scientists don’t get the results they wanted or thought to expect, but other times new data can be the result of the unexpected, and lead to discoveries no one thought to check for in the beginning. Some famous discoveries throughout scientific history happened by total accident. The latest unintentional discovery might make one of the most aggressive types of cancer more treatable than ever before.

Scientists at Stanford recently discovered a way to force leukemia cells to become mature immune cells that do something amazing. The researchers were actually trying to stabilize cancer cells so they could keep them alive longer in order to study them. The method of keeping the cells alive allowed the cells to develop into immune cells that may one day help the immune system attack cancerous tumor cells!

You can read the study in full at Proceedings of the National Academy of Sciences.

Acute lymphocytic leukemia (ALL) is the name for a particularly rapidly-progressing cancer where the immature cells that should differentiate and become white blood cells or lymphocytes instead become cancerous.  ALL has several classifications based on which kind of lymphocyte (B cell or T cell) the mutated cancer cell originated from.

The scientists were simply investigating a common type of lymphoblastic leukemia, an acute cancer called precursor B cell ALL, aka B-ALL. B-ALL starts as a rogue B cell mutating away from usefulness during an early part of its maturation. The immature cells can’t fully differentiate and become the B cells they were otherwise destined to be. The flawed B cells lack the transcription factors required for normal development. Transcription factors are basically proteins that attach themselves to sections of DNA and are then supposed to switch designated genes on or off, depending on the type of transcription factor. Did you follow that? It’s a bit technical for the layman, but most of us understand DNA. A transcription factor is basically a DNA reader that helps the cell decide which part of your DNA it should use to become a specific type of cell.

So, when a transcription factor messes up and activates the wrong section of DNA, or doesn’t activate the correct section, it can cause mutations where the cell doesn’t develop or develops poorly. B-ALL is one of the nastiest types of cancer and the prognosis for victims is not good. The Stanford team wanted to study this villain but had trouble keeping the cancer cells alive outside of the victim’s body.

Lead researcher Ravi Majeti reported in the lab’s news release: “We were throwing everything at the cells to help them survive.”

One of the techniques they used to attempt to keep the cancer cells from dying involved exposure to a certain transcription factor. The exposed cells began to grow and change shape, and the new morphology was a type of white blood cell called a macrophage, normally responsible for attacking  damaged, mutated cells or foreign material.

The team recognized that the cancerous cells behaved like macrophages in various ways, such as surrounding and engulfing bacteria. Most notably, pseudomacrophages made from mouse cancer cells did not behave as cancer cells when added back into a cancerous mouse, and mice that did not have cancer did not develop it after being exposed.

The Stanford researchers believe the newly converted cells are no longer cancerous. Furthermore, they might even help the body’s immune system regroup and attack other, still cancerous cells. It could work because macrophages normally collect DNA tags from abnormal cells they encounter and also mark foreign material so that other cells in the immune system know what to attack. Since the false macrophages were originally cancerous cells, they will, in theory,  already possess the correct signals that recognize the same kind of  cancer.

Now that this principle has been identified as a possible method of treating one cancer, it might open the door to helping the immune system combat other cancers.

Related Cosmoso article: Pre-Darwinian Theory of Heredity Wasn’t Too Far Off

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Carbon3D’s CLIP: Faster than Any Other 3D Printing System – and Cooler Looking!


A picture is worth a thousand words with CLIP 3D’s light-cured liquid printing.

3D printing is one of the best up-and-coming tech fields to follow, and CLIP 3D printing is the fastest device to date. Designers and engineers are starting to rely on 3D printing to stay competitive, but the process is far from streamlined. Companies like Carbon3D are ahead of the pack with the coolest-looking printing process, which just happens to also be faster than anyone else out there. By rethinking the way the resin is cured, Carbon3D got their newest printer to produce 25-100 times faster than any other resin printing technique, as of early 2015. It’s like they just couldn’t decide between fast and beautiful.

Peep the video at the bottom of this article~!

3Dprint.com broke the story, announcing Carbon3D’s Continuous Liquid Interface Production technique. CLIP builds on the most innovative ideas that have already been tried with 3D printing by using a photosensitive resin and precisely projected ultraviolet light to cure the liquid into a solid from the bottom of a clear pan. Inspired by techniques which print and cure layer by layer, CLIP instead cures in conjunction with oxygen, which inhibits the curing process and lets the machine control exactly where the resin solidifies. This allows the printer to print in three dimensions simultaneously.

DIAGRAM OF CLIP

You can see the liquid from the top in the promotional media, but the action happens underneath the pool. The transparent window that holds the pool of liquid “ink” is also oxygen-permeable. This allows controlled amounts of oxygen and UV light to hit the bottom of the liquid layer. Carbon3D explains the process can leave uncured spots on the bottom layer as little as a few dozen microns thick. As the oxygenated areas of the resin are decided, the light cures the unoxygenated areas, leaving a layer of solid that is attached to the layer above. This amazing GIF speaks for itself. DAYUM:

Carbon 3D has managed to keep a proprietary amount of this technique secret while still nailing down $41 million in funding from venture capital firms. It’s almost like they 3D printed themselves from liquid into the solid competitive start-up they are today.

As the fastest printer on the scene, Carbon3D is the hottest new player. Slow production speed is one of the biggest reasons 3D printing hasn’t become the manufacturing norm, and CLIP printing is expected to change that moving forward from early 2015. Cosmoso.net is watching this fascinating development on the edge of our 3D printed seats.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

The Computer of the Future is…. Vague.


Quantum Computer prototypes make mistakes. It’s in their nature. Can redundancy correct them?

Quantum memory promises speed combined with energy efficiency. If made viable it will be used in phones, laptops and other devices and give us all faster, more trustworthy tech which will require less power to operate. Before we see it applied, the hardware requires redundant memory cells to check and double-check its own errors.

All indications show quantum tech is poised to usher in the next round of truly revolutionary devices, but first, scientists must solve the problem of the memory cells saving the wrong answer. Quantum physicists must redesign circuitry that exploits quantum behavior. The current memory cell is called a Qubit. The Qubit takes advantage of quantum mechanics to transfer data at an almost instantaneous rate, but the data is sometimes corrupted with errors. The Qubit is vulnerable to errors because it is physically sensitive to small changes in its environment. It’s been difficult to solve this problem because it is a hardware issue, not a software design issue. UC Santa Barbara physics professor John Martinis’ lab is dedicated to finding a workaround that can move forward without tackling the actual errors. They are working on a self-correcting Qubit.

The latest design they’ve developed at Martinis’ lab is quantum circuitry that repeatedly self-checks for errors and suppresses the statistical mistakes, saving data to multiple Qubits and giving the overall system the kind of reliability we’ve come to expect from non-quantum digital computers. Since an error-free Qubit has so far been an elusive goal, this new design means we are amazingly close to a far-reaching breakthrough.

Julian Kelly is a grad student and co-lead author of the study published in Nature:

“One of the biggest challenges in quantum computing is that qubits are inherently faulty so if you store some information in them, they’ll forget it.”

Bit flipping is the problem du jour in smaller, faster computers.

Last week I wrote about a hardware design problem called bit flipping, where a classic, non-quantum computer has this same problem of unreliable data. In an effort to make a smaller DRAM chip, designers created an environment where the field around one bit storage location could be strong enough to actually change the value of the bit storage location next to it. You can read about that design flaw and the hackers who proved it could be exploited to gain system admin privileges in otherwise secure servers, here.

Bit flipping also applies to this issue in quantum computing. Quantum computers don’t just save information in binary (“yes/no” or “true/false”) positions. Qubits can be in any or even all positions at once, because they are storing value in multiple dimensions. It’s called “superposition,” and it’s the very reason why quantum computers have the kind of computational prowess they do, but ironically this characteristic also makes Qubits prone to bit flipping. Just being around atoms and energy transference is enough to create unstable environments and thus unreliable data storage.

“It’s hard to process information if it disappears.” ~ Julian Kelly.

Along with Rami Barends, staff scientist Austin Fowler and others in the Martinis Group, Julian Kelly is building a data storage scheme where several qubits work in conjunction to redundantly preserve information. Information is stored across several qubits in a chip that is hard-wired to also check for the odd-man-out error. So, while each Qubit is unreliable, the chip itself can be trusted to store data for longer and with fewer errors, hopefully none.

It isn’t a new idea but this is the first time it’s been applied. The device they designed is small, in terms of data storage, but it works as designed. It corrects its own errors. The vision we all have of a working quantum computer able to process a sick amount of data in an impressively short time? That will require something in the neighborhood of  a hundred million Qubits and each of the Qubits will be redundantly  self-checking to prevent errors.

Austin Fowler spoke to Phys.org about the firmware embedded in this new quantum error detection system, calling it surface code. It relies on measuring the change between a duplicate and the original bit, as opposed to simply comparing a copy of the same info. This measurement of change instead of comparison of duplicates is called parity recognition, and it is unique to quantum data storage. The original info being preserved in the Qubits is actually unobserved, which is a key aspect of quantum data.
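
A rough classical analogue can make the redundancy-plus-parity idea concrete. The sketch below is plain C and deliberately not a quantum simulation: one logical bit is copied across three bits, and two parity checks (the “syndrome”) locate a single flipped bit so it can be repaired, without any step that compares whole copies directly. The real chip does the analogous thing with qubits and quantum parity measurements, never reading the protected quantum state itself.

```c
/* Classical toy analogue of syndrome-based error correction:
 * a 3-bit repetition code with parity checks and repair. */
#include <stdio.h>

/* Encode one logical bit into three physical bits. */
static void encode(int logical, int d[3]) {
    d[0] = d[1] = d[2] = logical;
}

/* Compute the two parity checks (the "syndrome") and repair a single flip.
 * Which bit flipped is deduced from the parities alone. */
static void correct(int d[3]) {
    int s1 = d[0] ^ d[1];            /* do bits 0 and 1 agree? */
    int s2 = d[1] ^ d[2];            /* do bits 1 and 2 agree? */
    if (s1 && !s2)      d[0] ^= 1;   /* only bit 0 is the odd man out */
    else if (s1 && s2)  d[1] ^= 1;   /* bit 1 disagrees with both neighbors */
    else if (!s1 && s2) d[2] ^= 1;   /* only bit 2 is the odd man out */
}

int main(void) {
    int d[3];
    encode(1, d);
    d[2] ^= 1;                                     /* simulate a stray bit flip */
    correct(d);
    printf("recovered logical bit: %d\n", d[0]);   /* prints 1 */
    return 0;
}
```

The surface code mentioned above generalizes this kind of syndrome-based repair to a two-dimensional grid of qubits.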

“You can’t measure a quantum state, and expect it to still be quantum,” explained Barends.

As in any discussion of quantum physics, the act of observation has the power to change the value of the bit. In order to truly duplicate the data the way classical computing does in error detection, the bit would have to be examined, which in and of itself would potentially cause a bit flip, corrupting the original bit. The device developed at Martinis’ UC Santa Barbara lab gets around this by measuring parity rather than examining the stored data directly.

This project is a groundbreaking way of applying physical and theoretical quantum computing because it uses the physical Qubit chip and a logic circuit that applies quantum theory as an algorithm. The result, a viable way of storing data, proves that several otherwise untested quantum theories are real and not just logically sound. Ideas in quantum theory that have been pondered for decades are now proven to work in the real world!

What happens next?

Phase flips:

Martinis Lab will be continuing its tests in an effort to refine and develop this approach. While the bit-flip errors seem to have been solved with this new design, there is a new type of error not found in classical computing that has yet to be solved: the phase flip. Phase flips might be a whole other article, and until quantum physicists solve them there is no rush for the layman to understand.

Stress tests:

The team is also currently running the error correction cycle for longer and longer periods while monitoring the device’s integrity and behavior to see what will happen. Suffice it to say, there are a few more types of errors than it may appear, despite this breakthrough.

Corporate sponsorship:

As if there was any doubt about funding… Google has approached Martinis Lab and offered support in an effort to speed up the day when quantum computers stomp into the mainstream.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

You Are What You Eat: Horizontal Gene Transfer in Evolution


One of the most basic and widely understood concepts in modern biology is Darwinian evolution, wherein DNA is inherited by offspring from the parent organism. This view of evolution, also called vertical gene transfer, is increasingly becoming too simple to explain sections of the human genome. The journal Genome Biology published a study last September, 2014, describing the evidence and conclusions behind an alternative mechanism of gene acquisition called horizontal gene transfer, or HGT. HGT is when genetic material transfers between otherwise non-genetically-related organisms. HGT was already evidenced in several single-celled bacteria, but this study argues that higher organisms display traits and DNA evidence from outside their ancestry. Last fall’s Genome Biology report was able to show signature DNA in humans that did not come from the understood human lineage.

HGT is sometimes called lateral gene transfer. The concept is that organisms ingest or otherwise absorb DNA from the organisms they encounter in their environment. Horizontal gene transfer was first described in Seattle in 1951 by Victor J. Freeman in the Journal of Bacteriology. This new study is significant because it shows human genealogy may include fungal DNA through such lateral transfer.

The 2014 study shifts the conventional POV that animal evolution solely uses genes passed down from parental lineage. Because evolution is an ongoing process, this new aspect of that process is implied to be ongoing. We are still absorbing DNA from other organisms right now~!

Alastair Crisp at the University of Cambridge:

“This is the first study to show how widely horizontal gene transfer (HGT) occurs in animals, including humans, giving rise to tens or hundreds of active ‘foreign’ genes.”

There are many advantages for organisms which evolve laterally. One long-referenced example in the genetics community is quickly-evolving bacteria that can resist antibiotics after only a few generations. In the Darwinian model, only the bacteria that had a chance mutation would resist the drug. By transferring genetic material from other related bacteria, a bacterium can adopt an immunity it did not originally have a mutation for. HGT might be an important part of the evolution of complex organisms, too, including animals. We now have evidence that nematode worms can acquire genes from the microorganisms and plants they live in cohabitation with. Another example from the new study shows a type of beetle displaying bacterial genes which enable it to digest coffee berries.

HGT in humans is controversial because it implies organisms in our natural – and unnatural – environments can affect our DNA, and that of our offspring. Humanity has created a planet where organisms are repeatedly exposed to organisms which would not otherwise be part of our environment.

Researchers were able to calculate the likelihood that  similar genes from other species were transferred this way. Last September’s study showed the genomes of 12 species of fruit fly, four species of nematode worm, and 10 species of primates, had overwhelming evidence of HGT. The primate list included humans!

The numbers also help estimate when the genes were acquired. Most of the genes in humans that can be shown as a match to non-human organisms are blood or enzyme related. Most importantly, the ABO blood group gene in humans is confirmed to have been acquired by vertebrates through HGT. Most of the other genes were related to metabolism. This might lead to discoveries about the human diet that can change or modify metabolism.

Past studies have attempted to prove this same phenomenon but had only attributed 17 genes to HGT. This latest study shows 128 additional foreign genes in the human genome.

Nematode

HGT likely plays an integral part in the evolution of all animals, but the evidence here shows nematode worms have acquired genes by coexisting with microorganisms and plants. The genes enabled nematodes to metabolize lipids, breaking down fatty acids and helping form glycolipids. Some genes affected the immune system, such as the antimicrobial response, immune cell signals, and inflammatory responses. Others affected amino-acid metabolism and enabled nematodes to modify proteins and engage in antioxidant activities.

Researchers even identified which organisms the transferred genes came from. Bacteria and protists, another class of microbes, are the most common lateral gene donors. Perhaps future studies might show that the trait of HGT is one possessed by simple single-celled life, rather than one controlled by the expression of multicellular organisms.

Even more strange: most of the foreign genes in primates, including humanity, seemed to come from viruses. Some genes even seemed to have originated from fungi, which explains why previous studies focused on bacteria missed the most likely lateral lineage.

Despite the ongoing nature of evolution, the mutation and gradual adaptation is slow. The roots of HGT expressed in primates are ancient. Some of the DNA may have been with our species since a common ancestor was shared between Chordata & Primates.

The implications of this are far-reaching, such as rethinking the invasiveness and shared DNA in host-parasite relationships. Mistakes or incompatible DNA that is nonetheless laterally transferred could be responsible for genetic mutations that are not desirable such as a tendency for heart disease or cancer. Solid proof of HGT that can be reproduced and relied on can also mean new cures for those exact same diseases.

Most importantly HGT will change the way we discuss and teach evolution.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

“Rowhammering” Attack Gives Hackers Admin Access


A piece of code can actually manipulate the physical memory chip by repeatedly accessing nearby capacitors in a burgeoning new hack called rowhammering. Rowhammer hacking is so brand new that no one has used it in the wild yet. Google’s Project Zero security initiative figured out how to exploit an aspect of a physical component in some types of DDR memory chips. The hack can give the user increased system rights regardless of an untrusted status. Any Intel-compatible PC with this chip and running Linux is vulnerable – in theory. Project Zero pulled it off, but it isn’t exactly something to panic about unless you are doing both of those things: using vulnerable DRAM and running Linux.

A lot of readers might be susceptible to this security hack, but most won’t want to read the technical details. If you are interested, you can check out the Project Zero blog piece about it. The security flaw is in a specific chip, the DRAM, or dynamic random-access memory chip. The chip is supposed to just store information in the form of bits saved on a series of capacitors. The hack works by switching the value of bits stored in DDR3 modules known as DIMMs. So: DRAM is the style of chip, and each DIMM module houses several DRAM chips. Hackers researching on behalf of Project Zero basically designed a program to repeatedly access sections of data stored on the vulnerable DRAM until the odds of one or more nearby bits flipping when they shouldn’t become a statistical reality.

In 2014, this kind of hack went from theoretical to proven when scientists showed this kind of “bit flipping” is completely possible. Repeatedly accessing an area of a specific DIMM can become so reliable as to allow the hacker to predict the change of contents stored in that section of DIMM memory. Last Monday (March 9th, 2015), Project Zero demonstrated exactly how a piece of software can translate this flaw into an effective security attack.

“The thing that is really impressive to me in what we see here is in some sense an analog- and manufacturing-related bug that is potentially exploitable in software,” David Kanter, senior editor of the Microprocessor Report, told Ars. “This is reaching down into the underlying physics of the hardware, which from my standpoint is cool to see. In essence, the exploit is jumping several layers of the stack.”

Why it’s called Rowhammering.

The memory in a DDR-style chip is configured in an array of rows and columns. Each row is grouped with others into large blocks which handle the accessible memory for a specific application, including the memory resources used to run the operating system. There is a security feature called a “sandbox”, designed to protect data integrity and ensure the overall system stays secure. A sandbox can only be accessed through a corresponding application or the operating system. Bit-flipping a DDR chip works when a hacker writes an application that can access two chosen rows of memory. The app then accesses those same two rows hundreds of thousands of times, aka hammering. When the targeted bits flip from ones to zeros, matching a dummy list of data in the application, the target bits are left alone with the new value.
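
Described in code, the hammering loop itself is almost embarrassingly simple. The sketch below is plain C for an x86 machine and shows only the access pattern: two addresses, assumed to map to different rows of the same DRAM bank, are read and flushed from the cache over and over so that every access reaches the chip itself. Picking addresses that actually sit next to a victim row, and turning a flip into something useful, is the hard part Project Zero solved and is not shown here.

```c
/* Sketch of the rowhammer access pattern, not a working exploit.
 * Assumes x86 (_mm_clflush) and addresses that map to two different
 * rows of the same DRAM bank; this demo just uses a heap buffer. */
#include <stdint.h>
#include <stdlib.h>
#include <emmintrin.h>   /* _mm_clflush */

static void hammer(volatile uint8_t *row_a, volatile uint8_t *row_b, long iterations) {
    for (long i = 0; i < iterations; i++) {
        uint8_t a = *row_a;                 /* read row A, forcing a row activation */
        uint8_t b = *row_b;                 /* read row B, which closes row A */
        (void)a; (void)b;
        _mm_clflush((const void *)row_a);   /* flush so the next read hits DRAM, not cache */
        _mm_clflush((const void *)row_b);
    }
}

int main(void) {
    size_t len = 1 << 20;
    volatile uint8_t *buf = malloc(len);    /* stand-in memory for the demo */
    if (!buf) return 1;
    hammer(buf, buf + len / 2, 1000000);    /* real attacks pick addresses far more carefully */
    free((void *)buf);
    return 0;
}
```

The 540,000-plus accesses in under 64 milliseconds mentioned later in this article are just this loop running flat out against a deliberately chosen pair of rows.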

The implications of this style of hack are hard to see for the layman but profound in the security world. Most data networks allow a limited list of administrators to have special privileges. It would be possible, using a rowhammer attack, for an existing account to suddenly gain administrative privileges on the system. In the vast majority of systems, that kind of privilege would open the way into several other accounts. Administrative access would also allow some hackers to alter existing security features. The bigger the data center, and the more users with accounts accessing the database, the more useful this vulnerability is.

The Physics of a Vulnerability

We’re all used to newer tech coming with unforeseen security problems. Ironically, this vulnerability is present in newer DDR3 memory chips, and it is the result of the ever smaller dimensions of the silicon. The DRAM cells are too close together in this kind of chip, making it possible to take a nearby bit, flip it back and forth repeatedly, and eventually make the one next to it – the target bit that is not directly accessible – flip as well.

Note: The rowhammer attack being described doesn’t work against newer DDR4 silicon or DIMMs that contain ECC (error-correcting code) capabilities.

The Players and the Code:

Mark Seaborn and Thomas Dullien are the guys who finally wrote a piece of code able to take advantage of this flaw. They created two rowhammer attacks which can run as ordinary processes. Those processes have no security privileges whatsoever but can end up gaining administrative access to an x86-64 Linux system. The first exploit was a Native Client module, incorporating itself into the platform as part of Google Chrome. Google developers caught this attack and disallowed an instruction called CLFLUSH in Native Client, and the exploit stopped working. Seaborn and Dullien were psyched that they were able to get that far, and wrote the second attempt shortly thereafter.

The second exploit looks like a totally normal Linux process. It allowed Seaborn and Dullien access to all physical memory, which proved the vulnerability is actually a threat to any machine with this type of DRAM.

The ARS article about this has a great quote by Irene Abezgauz, a product VP at Dyadic Security:

The Project Zero guys took on the challenge of leveraging the concept of rowhammer into an actual exploit. What’s impressive is the combination of lots of deep technical knowledge with quite a bit of hacker creativity. What they did was create attack techniques in which flipping just a single bit in a specific location allows them to execute any code they want with root privileges or escape a sandbox. This is impressive by itself, but they added to this quite a few creative solutions to make it more likely to succeed in a real world scenario and not just in the lab. They figured out ways for better targeting of the specific locations in memory they needed to flip, improved the chances of the attack to succeed by creating (“spraying”) multiple locations where a flipped bit would make the right impact, and came up with several ideas to leverage this into actual privileged code execution. This combination makes for one of the coolest exploits I’ve seen in a while.

Project Zero didn’t name which models of DDR3 are susceptible to rowhammering. They also claim that this attack could work on a variety of operating platforms, even though they only tried it on a Linux computer running x86-64 hardware, something that they didn’t technically prove but seems very believable considering the success and expertise they seem to carry behind that opinion.

So, is Rowhammering a real threat or just some BS?

There isn’t an obvious, practical application for this yet. Despite how powerful the worst-case scenario would be, this threat doesn’t really come with a guarantee of sweeping the internet like some other, less recent vulnerability exploits. The overwhelming majority of hacks are attempted from remote computers, but Seaborn and Dullien apparently needed physical access to incorporate their otherwise unprivileged code into the targeted system. Also, because the physical shape of the chip dictates which rows are vulnerable, it may be the case that users who want to protect against this exploit can just reconfigure where administrative privileges are stored and manipulated on the chip. Thirdly, rowhammering as Project Zero describes it actually requires over 540,000 memory accesses in less than 64 milliseconds – a memory speed demand that means some systems can’t even run the necessary code. Hijacking a system using rowhammering with these limitations is presently not a real threat.

People used to say the same thing about memory corruption exploits, though. For example: a buffer overflow or a use-after-free both allow hack attempts to squeeze malicious shell code into protected memory of a computer. Rowhammering is different because it is so simple. It only allows increased privileges for the hacker or piece of code, which is a real threat if it becomes developed as thoroughly as memory corruption exploits have been. The subtle difference might even be hard to grasp now, but now that the work has been done it’s the usual race between security analysts who would love to protect against it and the criminal world trying to dream up a way to make it more viable. Rob Graham, CEO of Errata Security, wrote further on the subject, here.

In short, this is noteworthy because a physical design flaw in a chip is being exploited, as opposed to a software oversight or code efficacy problem. A piece of code is actually affecting the physical inside of the computer during the attack.

Or, as Kanter, of the Microprocessor Report, said:

“This is not like software, where in theory we can go patch the software and get a patch distributed via Windows update within the next two to three weeks. If you want to actually fix this problem, we need to go out and replace, on a DIMM by DIMM basis, billions of dollars’ worth of DRAM. From a practical standpoint that’s not ever going to happen.”

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Is the new Apple Watch Missing the Mark or Ahead of its Time?


My friend Pat O’dea wrote:

Asked a guy showing off his expensive new smartwatch “What do you like most about it?” He replied “You don’t realize how many times during the day you have to reach into your pocket and pull your phone out just to see what time it is, so this like, totally solves that.” Speechless.

Steve Jobs’ legendary status as a businessman and a tech pioneer didn’t die with him, but Apple is pretty far from bulletproof. No one’s talent for anticipating the wants and needs of the consumer base is infallible and the ability to cultivate a brand reputation is arguably too rare to even study or accurately speculate about.  Apple has gone through ups and downs and had some spectacular failures in the past. Since his death, everyone has pondered at least once: Can the brand progress into a new era of product development without Jobs?

Apple tenaciously conceptualized the personal computer, but the real ability to stay afloat and eventually thrive depended on financial support from investors and even competitors who were simply eager to keep the market of new ideas alive with the competition that spurred the personal computer’s development in the first place. The story behind Apple is one that discusses the future of branding itself. A few years ago Apple cultivated a lifestyle. The iPod and the iPhone were as American as Coca-Cola or Warner Brothers. From hardware design, to software design, to intuitive user experience, Apple made devices that people found easy to use and extremely, surprisingly useful – and they did it with confidence and subtlety. Never before had a company so convincingly proven its finger to be on the pulse of the market. Period.

The millions behind Apple’s multiple ad campaigns were spent to capture a market that may not be able to afford the type of products that forged the rep. Missing Steve Jobs’ leadership might not be the problem behind Apple repeatedly missing the mark, but it’s hard to imagine him supporting a product like the Apple Watch.

The “Think different” campaign was aimed at regular, middle-class people. Apple products took existing tech and put it in a format anyone could just pick up, figure out, and use without any real instruction or coaching. Most of all, Apple products were effective and useful. Despite the target audience, the products have always come with a price tag that was a tad high for the intended consumers.

Apple Watch is following only one aspect of the marketing plan in this beginning of the post-Jobs era: the pitch. They are trying to push the watch as a product that is affordable once its usefulness is taken into consideration. The problem is: it isn’t very useful.

People supported and even coveted iPods and iPhones because of their groundbreaking design and aesthetic, but the accusation that they are expensive and frivolous has always plagued the company. The atmosphere Jobs cultivated put a spell on the world, but the products often did live up to the hype – or at least come close. The days of Americans buying $2,000 laptops and considering it a bargain are damn near over. Being able to take a unit out of the box and find a pre-programmed piece of tech that the everyman could (almost) afford and operate was apparently harder than Jobs made it seem. The days of Apple being able to brag about how useful these devices are, are seriously numbered.

It’s not just the watch. Apple press-released new laptops available in gold. They released videos of Christy Turlington Burns doing the things millionaires do. The Apple Watch also comes plated in 18 Karat Gold. Tim Cook quoted the starting price at $10,000.

Over the past year, various people speculated or confirmed that this jump to a new target audience was in the works. John Gruber, who blogs about Apple, predicted the highest of this new high-end material would not even be affordable. Kevin Roose wrote for Fusion, saying Apple is likely to market toward the high end of the wealth inequality spectrum, pointing out how wrong designer Jony Ive was when he quipped, “Apple products are for everyone.”

So the new prices are out and they are as ridiculous as expected. The new product reviews are in and the watch isn’t really doing anything that a phone can’t already pull off. The lower-end model of the Apple Watch is still $350, and if all it really offers is the difference between a pocket watch and a wristwatch, I think it’s safe to say: Apple fell off. There is no technological difference between the low-end and high-end models; the computer inside is identical in functionality. The higher-end model is not useful except for people who want to brag about it as a status symbol or convert their money into an asset that may not even appreciate in value. In short, it seems like a seriously bad investment.

I might be out of line by imagining what a dead man would say, but gold-plated anything is not something I would have expected Jobs’ reputation to support. The other side of this debate goes something like this: Apple has had a long and storied history and changed its mission several times, so there is no reason to see this as the end of Apple. It’s possible that the company is acting on economic information that has been vetted and tested extensively and knows full well that an expensive, sort of silly watch is going to push profit margins appropriately toward their goals. That doesn’t make this round of new products any less disappointing.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

An Interview With 3D Printed Food Artist Chloe Rutzerveld


Chloé shines in this interview about the future of food design and her upcoming year, including SXSW and developing 3D-printed prototypes into a culinary reality.

Eindhoven University of Technology graduate Chloé Rutzerveld designed a food I don’t quite know how to categorize. I first saw pictures of her most recent work, Edible Growth, last week and immediately wrote to her. Her Edible Growth concept involves a bunch of hot topics in current scientific thought, but the pictures don’t put the technology first – they just look great. In fact, the pictures are currently the point of the project. There are tons of details that need to be worked out, and Rutzerveld is spending the upcoming year getting the funding, awareness and support to develop this project into a realistic restaurant menu item. 3D printing technology is a frontier she is willing to jump way into. Read more about Edible Growth on Rutzerveld’s website.

Chloé answered a ton of questions below

Sketches

The current concept art looks great. What was the initial idea behind these great looking confections?

The shape of the edible developed and changed throughout the design process, influenced by development in the technological and biotechnological parts of the project. For example, at first, I made drawings of Edible Growth in which the entire ball was filled with holes. Which doesn’t make sense because cresses and mushrooms don’t grow down, only up 😉


Chloé’s initial, all-plastic design showed plants and mushrooms growing in all directions but the final design with real food had to accommodate gravity with a modified design.

Also, when the product is printed, you see straight lines, showing the technology part.. when the product matures these straight technological lines become invisible by the organic growth of the product. Showing the collaboration between technology and nature. Technology in this project is merely used as a means to enhance natural processes like photosynthesis and fermentation.

What inspired you?

My skepticism towards printing food, and the urge to find some way to use this new technology to create healthy, natural food with a good taste and structure, in which the printer would add something to the product, as well as the environment.


A 3d printer arranges dough for the first step of an edible growth prototype.

Once you had the idea, how long did it take you to produce the prototypes and pastries we can see in the photos?

At first I made a lot of drawings and prototypes from clay. After that I started using nylon 3D-printed structures. When I gained more knowledge about 3D printing and the material composition inside the structure, the design of the product changed along with that. The mushrooms and cress inside the prototypes, as well as the savory pie dough, are just a visualization; the final product might be totally different. It’s meant as inspiration and to show that we should think beyond printing sugar, chocolate and dough if we want to use this technology to create future ‘food’.

The prototyping process took about 2 months I think.. and when multiple museums asked if they could exhibit it, I made non-food, food products that would last longer.


What are you doing for a living? 

Haha great question, because as you probably understand, media attention is great but does not help me pay my bills unfortunately 😉 But it does make it easier to get assignments for the development of workshops, dinners etc.

Basically at this point, I give lectures, presentations, and organize events and dinners. One upcoming event I’m organizing is about my new project called “Digestive Food”. I will not say too much about it, but I’ll update my website soon;)

To have a more stable income, I started working for the Next Nature Network in February, to organize the Next Nature Fellow program! Next Nature explores the technosphere and the co-evolving relationship with technology

How did you find the project so far?

Well I personally think it looks beautiful and I’m quite proud that so many people are inspired and fascinated by it! It would be great if such a product would come on the market.

I wonder what the pastry and edible soil are made of. Can you talk about the ingredients? 

I don’t call it edible soil, but a breeding ground. Because everything must be edible (like a fully edible eco-system) we experimented with a lot of different materials. But in the end, we found that agar-agar is a very suitable breeding ground on which also certain species of fungi and cress (like the velvet-paw and watercress for example) can grow very easily within a few days without growing moldy!


Agar-agar breeding ground turned out to be the right mix of versatility and food-safe materials to make Edible Growth go from plastic prototype to edible reality.

How do you feel about copyright and patented ideas?

I am not very interested in that part.. of course it’s good to get credits for the idea and the photos but I will not buy a patent. I don’t have the knowledge or employees to develop this concept into a real product. So I actually hope someone steals the idea and starts developing it further :)! I’m often asked by big tech-companies or chefs if I wanted an investment to develop it… but to be honest.. I’ve many other ideas and things I would like to do.


Do  you have secret ingredients?

Haha not in the product, but in my work it would be passion, creativity and a pinch of excessive work ethos 😉

What types of foods have you experimented with?

For Edible Growth? A dozen of cresses, and other seeds, dried fruits and vegetables for the breeding ground, agar-agar, gelatins, some spores..

But for my other projects also with mice, muskrat, organ meat, molecular enzymes etc.


Who have you been working with? 

Waag Society (Open Wetlab, Amsterdam), Next Nature (Amsterdam), TNO (Eindhoven & Zeist), Eurest at the High Tech Campus (Eindhoven)

What is your studio environment like? 

I actually still live in a huge student-home which I share with 9 other people. But because I almost graduated one year ago I will need to move out. So I work a lot at home, in my 16m2 room, in the big-ass kitchen downstairs,  if I have appointments somewhere I afterwards work in a café or restaurant with wi-fi, or at flex work places, my parents house.. I’m very flexible and can work almost everywhere 🙂 Practical work I’ll do mostly at home obviously.

But I am looking for a nice studio in Eindhoven, that’s easier to receive guests or people from companies.

What steps need to happen before we start seeing 3D printed food become commercially available?

Development of software, hardware and material composition.

I noticed on your website you have other projects in the works. What are you doing currently? What are your upcoming plans and goals for 2015? 

Next week I’ll go to SXSW. In the summer I’m going to Matthew Kenney Culinary academy to learn more practical and theoretical things about food (and secretly just because I absolutely love to learn about plating and menu planning). I’m developing this event I told you about for the Museum Boerhaave in Leiden and the E&R platform. And when I return from Maine, I actually want to set up a temporary pop-up restaurant at the Ketelhuisplein during the Dutch Design Week 2015 about a social or cultural food issue.

Thanks again, Chloé~! This was fun!!!

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY