Category Archives: Physics

The Ancient Technology of Mirrors


Mirrors outperform most modern imaging technologies in terms of resolution, efficiency and user experience.

The technology to understand and manipulate light took centuries to develop, and emerged independently across many different cultures. Mirrors have helped to shape the modern human mind while also furthering our understanding of math and science. They are an impressive holdover from the analogue age: they require no electricity, yet produce replicated, moving images at a resolution higher than the human eye can perceive. Cosmoso takes a look at the history of this uniquely high-tech piece of our global, human ancestry.

Developing Reflective Tech Before Recorded History

Like a lot of ancient technologies, image reflection was developed and perfected by people who didn't fully understand the principles and materials they were manipulating. Creating a mirror in a time before modern science meant working within different paradigms of understanding, such as alchemy, superstition and religious belief.

Human technologies are often inspired by nature. Our ancestors often pondered the meaning of the reflective properties they no doubt noticed in pools of water. When water is flowing, falling or otherwise in turmoil, the light it reflects is scattered but a calm pool of water with a dark surface below it shows a reflection.

Technologies often lead to further inspiration, and the advent of metal smelting and the discovery of glass and crystal led to a variety of reflective properties humans were able to control. Archaeologists have found man-made mirrors of polished obsidian, a natural volcanic glass, dating back to 6000 BC in ancient Turkey.

Thousands of years after those stone mirrors, ancient Mesopotamians were making mirrors of polished copper, dated to 4000 BC, one thousand years before Egyptians developed copper smelting and discovered copper's reflective properties on their own, around 3000 BC.

Other types of polished stone mirrors have been found in Central and South America from much later, around 2000 BC. Ancient Americans developed the technology on a later timeline because the Americas were settled later by nomadic peoples, who often abandoned technology to live off the land as they explored previously uninhabited territory.

Aztec mirror, Museo de América, Madrid.

In times when the only access to your own reflection was an enigmatic piece of polished obsidian, the sense of self was a psychological leap away from modern man’s. Obsidian mirrors were used by various cultures to scry or predict the future, and mirrors of stone were thought to possess magic powers.

Chinese Technology: Far Ahead of the West

Around the time when the Americas were still developing stone technologies, bronze mirrors were being manufactured in China, circa 2000 BC. China was very technologically developed at this time, able to smelt and create a variety of metals, compounds and amalgams, including bronze. There are many archaeological finds attributed to the Chinese "Qijia" culture. Proprietary secrets forced mirror tech to diverge, and it's possible to find examples of mirrors made from various metal alloys, such as copper and tin, at a time when other parts of Asia were still simply polishing copper smelted from the earth. The tin-copper alloy found in China and India is called speculum metal, which would have been very expensive to produce in its time.

Speculum-metal mirrors brought such high analogue resolution that people could finally see what they looked like, which affected fashion and hairstyles and began to influence other art forms like dance and martial arts. Philosophical concepts such as duality, other worlds and multiplicity were suddenly easy to explain via analogy with the help of a mirror.

Manipulation of one's own facial expression, sleight of hand and other practiced mannerisms could now be studied and documented, adding new layers to the fabric of civilization.

For all of this cultural development, there was no scientific analysis of why a mirror worked or of the light it was reflecting. Before mirror technology could be advanced, there needed to be written, thoughtful investigation of why the tech worked in the first place. This was a slow process for any ancient tech, but in a time before light waves and chemistry were understood, it was extremely slow. The earliest written work studying the way light reflects came from Diocles, a Greek mathematician and author of On Burning Mirrors, who lived c. 240 BC – c. 180 BC. Illiteracy and language barriers slowed the technological development of concave and convex curved mirrors another few hundred years.

Mathematics and mirrors will always have a reciprocal relationship, with math and science allowing humanity to dream up new ways of manipulating light and mirrors allowing that manipulation to inspire new questions and explanations. What was once considered magic became the study of the world we inhabit as  technology took root in the physical and psychological world humans are trapped in.

Another technological breakthrough happened in ancient Lebanon, where metal-backed glass mirrors were finally invented in the first century AD. Roman author Pliny the Elder wrote his famous work, "Natural History", in 77 AD, where he mentions gold-leafed glass mirrors, though none from that time have survived. Most Roman mirrors were glass coated with lead, which might have used the same technological process while being much cheaper than gold.

A diagram from Ptolemy's Optics.

Discovering the text On Burning Mirrors, the Greco-Egyptian writer Ptolemy began to experiment with curved polished iron mirrors. He was able to pique the interest of wealthy benefactors and study with impunity. His writings, from the second century AD, discuss plane, convex spherical, and concave spherical mirrors. The image above describes light passing through a cylindrical glass of water.

Silvered Glass & the Modern Age:

Silver-mercury amalgams have been found in archaeological digs and antique collections dating back to 500 AD China. Renaissance Europeans traded with the known world, perfecting a tin-mercury amalgam and producing the most reflective surfaces known until the 1835 invention of silvered-glass mirrors. Historical records seem to credit German chemist Justus von Liebig with silvered glass, though glassworkers' guild records obscure the story behind it. Silvering coats a thin layer of metallic silver onto the back of the glass by chemically reducing silver nitrate, an early triumph of applied chemistry. Silver is expensive, but the layer is so thin and the process so reliable that affordable mirrors have been showing up in working-class households across the planet ever since.

 

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

“Many Interactive Worlds” Quantum Theory Still Doesn’t Make Sense


A lot of people really want alternate universes to make sense, but they don't. They make for great sci-fi and fun thought experiments, but alternate universes might be based on too much assumption to be considered good science. Back in October 2014, Wiseman and Deckert suggested a new take on the Many Worlds Interpretation of quantum theory: Many Interactive Worlds. It's hard to see what sets their work apart from its predecessors.

You can read more about misinterpreted implications of quantum mechanics: Your Interpretation of Quantum Physics is Probably Wrong

I was initially excited by their work, published Winter 2014, but the more I read about the Many Worlds Interpretation the less I bought it. Quantum theory is hard for most people to understand, which makes sifting through conflicting theories and rationalizations a daunting task. I’m going to try and be concise but thorough in my critique of Wiseman and Deckert’s work. I’m sure they are fine people and they’ve certainly put a lot of thought into a very abstract, difficult concept.

What's in a Name?

First let me get this superficial complaint out of the way: Wiseman and Deckert seem to have just dropped the word "interpretation" from their interpretation. Why? Well, it certainly wasn't for clarity's sake. The Many Worlds Interpretation and Many Interacting Worlds have awkwardly similar acronyms, MWI and MIW. Because quantum theory isn't confusing enough!

The Many Worlds Interpretation was the work of Hugh Everett III back in 1957. It gets called the parallel universe theory, the alternate universe theory, and the "many universes" interpretation. It comes back up in science fiction periodically, but most quantum physicists don't count it as a viable explanation of quantum mechanics' many unanswered questions. Everett postulated that all possible outcomes happen, causing reality to branch at each decision or quantum observation and creating infinite parallel universes as more and more branches are formed. Everett imagined the observer splitting into what he described as "clones" who live in the different universes. It's really easy now, in 2015, for a version of the Many Worlds Interpretation to gain traction, because so many people are familiar with the concept from decades of science fiction examples.

So Wiseman and Deckert didn't make up the idea of multiple universes. What are they saying is different about their new interpretation? In the Everettian model, universes branch off like a tree, never to meet again. Wiseman and Deckert describe a multiverse where particles seem to be able to influence each other and interact despite existing in separate universes. It makes more classical-looking math work out in the examples they chose. Many Interactive Worlds explains "Ehrenfest's theorem, wave packet spreading, barrier tunneling, and zero-point energy—as a direct consequence of mutual repulsion between worlds."

The equation they provided can successfully calculate quantum ground states and explains the notorious double-slit interference phenomenon. It sounds so impressive that most science news outlets ran with it despite there being absolutely no evidence of these other universes.

So the Griffith University academics turned heads, but they kind of sidestepped many foundational aspects of quantum science. Physical Review X published the work, which is basically a proposal that parallel universes not only exist, but constantly interact. They explain this interaction as a force of repulsion between alternate universes. Their equations show this type of interaction explains some of the most bizarre parts of quantum mechanics – and that is a mathematical breakthrough. It just doesn't offer any explanation of what this "force of repulsion" is or how it can be measured. They are basically talking about philosophy, not science, but it's really hard to prove them wrong because it's so complicated, and most people want a solution to a century of unexplainable quantum dynamics.
The bottom line: There is still no experimental evidence to support any multiple universe model, and the Many Interactive World interpretation didn’t change that.
Update: I found a video that explains my point! Check it out.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Your Interpretation of Quantum Physics is Probably Wrong


Quantum theory can be misinterpreted to support false claims.


There is legit science to quantum theory but misinterpretations justify an assortment of pseudoscience. Let’s examine why.

Quantum science isn't a young science anymore. This year, 2015, the term "quantum", as it relates to quantum physics, turns 113 years old. The term as we know it first appeared "in a 1902 article on the photoelectric effect by Philipp Lenard, who credited Hermann von Helmholtz for using the word in reference to electricity" (Wikipedia). During its first century of life, attempts to understand quantum particle behavior have led to a bunch of discoveries. Quantum physics has furthered understanding of key physical aspects of the universe, and that complex understanding has been used to develop new technologies.

Quantum physics is enigmatic in that it pushes the limits of conceptualization itself, leaving it seemingly open to interpretation. While it has been used to predict findings and improve human understanding, it's also been used by charlatans with a shaky-at-best understanding of science. Quantum physics has been misappropriated to support a bunch of downright unscientific ideas.

It’s easy to see why it can be misunderstood by well-intentioned people and foisted upon an unsuspecting public by new age hacks. The best minds in academia don’t always agree on secondary implications of quantum physics. No one has squared quantum theory with the theory of relativity,  for example.

Most people are not smart enough to parse all the available research on quantum physics. The public’s research skills are notoriously flawed on any subject. The internet is rife with misinformation pitting researchers against their own lack of critical thinking skills. Anti-science and pseudoscience alike get a surprising amount of traction online, with Americans believing in a wide variety of superstitions and erroneous claims.

In addition to the public simply misinterpreting or misunderstanding the science, there is money to be made in taking advantage of gullible people. Here are some false claims that have erroneously used quantum theory as supporting evidence:

Many Interacting Worlds

The internet loves this one. Contemporary multiple-universe theories are philosophy, not science, but that didn't stop Australian physicists Howard Wiseman and Dr. Michael Hall from collaborating with UC Davis mathematician Dr. Dirk-Andre Deckert to publish the "many interacting worlds" theory as legit science in the otherwise respectable journal Physical Review X. This is the latest in a train of thought that forgoes scientific reliance on evidence and simply supposes the existence of other universes, taking it a step further by insisting we live in an actual multiverse, with alternate universes constantly influencing each other. Um, that's awesome, but it's not science. You can read their interpretation of reality for yourself.

Deepak Chopra

Deepak Chopra is a celebrated new age guru whose views on the human condition and spirituality are respected by large numbers of the uneducated. By misinterpreting quantum physics he has made a career of stitching together a nonsensical belief system from disjointed but seemingly actual science. Chopra's false claims can seem very true when first investigated, but science has explained key details that Chopra nonetheless considers mysterious.

The Secret

'The Power' and 'The Secret' are best-selling books that claim science supports what can be interpreted as an almost maniacal selfishness. The New York Times once described the books as "larded with references to magnets, energy and quantum mechanics."

The Secret's author, Rhonda Byrne, uses confusing metaphysics not rooted in any known or current study of consciousness, borrowing heavily from important-sounding terminology found in psychology and neuroscience. Byrne's pseudoscientific jargon is surprisingly readable and comforting, but that doesn't make the science behind it any less bogus.

Scientology

L._Ron_Hubbard_in_1950

There isn’t anything in quantum physics implying a solipsism or subjective experience of reality but that doesn’t stop Scientology from pretending we each have our own “reality” – and yours is broken.

Then there is the oft-headlining, almost postmodern pseudoscientific masterpiece of utter bullshit: Scientology.

Scientology uses this same type of claim to control its cult following. Scientology relies on a re-fabrication of the conventional vocabulary normal, English-speaking people use. The religion drastically redefined the word reality: L. Ron Hubbard called reality the "agreement." Scientologists believe the universe is a construct of the spiritual beings living within it. The real world we all share is, to them, a product of consensus. Scientology describes, for example, mentally ill people as those who no longer accept an "agreed upon apparency" that has been "mocked up" by we spiritual beings, to use their reinvented terminology. Scientologists misuse the word reality when they ask humans, "what's your reality?"

In conclusion…

The struggle to connect quantum physics to spirituality is a humorous metaphor for subjectivity itself.

If you find yourself curious to learn more about quantum theory, you should read up and keep an open mind, no doubt. The nature of a mystery is that it hasn't been explained. Whatever evidence might eventually help humanity understand the way reality is constructed is not going to come from religion or superstition; it will come from science. Regardless of claims to the contrary, quantum theory only points out a gap in understanding and doesn't explain anything about existence, consciousness or subjective reality.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Finding an affordable way to use graphene is the key to its success


Graphene is a remarkably strong material given it's only a single carbon atom thick. But finding ways to do something with it – ways that are also affordable – has always been a challenge.

Scientists have long been excited about the potential for graphene to revolutionise technologies, and even consider it a technology in itself. Graphene is the best conductor of electricity and heat known. It is also the thinnest surface, and represents the next-generation wonder material for everyday applications in electronics.

The 2010 Nobel Prize in Physics was awarded to Konstantin Novoselov and Andre Geim for their pioneering work on the electronic properties of graphene.

There followed much hype in the science world with concepts to revolutionise electronic displays and circuitry. These two areas form the basis of many technologies so the impact of graphene was extensive.

How to make graphene

For such applications, graphene had to be industrially produced as large thin films on a supporting material. This highlighted two avenues where graphene could be directed: as an electronic component; or as the chief technology.

But these directions were rather narrow, as they only focused on potential commercial exploits involving the electronics industries.

Exaggerated demand for graphene to be commercialised quickly outpaced the overlapping challenges concerning the processing of nanomaterials. As such, despite all the excitement, graphene has not yet found widespread use because it is chemically difficult to process.

Let chemistry find a use

In 2009 I developed the first technique to chemically produce graphene in industrial scale quantities.

It was clear chemistry had a key role to play in the future use of the material. We could now create gram and kilogram quantities of the graphene sheets atom by atom using chemical reactions.

My work has led to many attempts by researchers world-wide to find more viable techniques to produce graphene, each attempt striving to be more inventive, quirky or innovative than the prior art.

We found an avenue where expensive apparatus was no longer required and graphene powder could be transported with an extended shelf life. This is now a common goal among researchers.

This development addressed a key tenet that was overlooked during the physics era: graphene is essentially a material that is all surface. The interface at a surface is where exciting things occur and where chemists operate.

To do something useful with a surface you need a lot of it, and we now had a lot of graphene. The options to obtain a lot of graphene material are simple. Either start by digging graphite out of the ground from natural deposits, or you make it chemically in the lab.

Chemically produced graphene offers a relatively large amount of surface to perform exciting chemical reactions. This is equivalent to having a nice smooth football field to move a football around on.

Non-sticky stuff this graphene

But changing the chemical structure of graphene while retaining its superb physical properties is incredibly difficult. This is due to a paradox that allows for the very existence of graphene: the remarkable stability of the graphene surface.

Molecules like metals and gases required for energy storage simply do not stick to graphene. Imagine if everything you placed on your table simply kept falling off – the table would not be of much use.

Attempts to change the chemical nature of graphene focused on attaching a small number of molecules. This has limited the utilisation of graphene in nanotechnologies, as the next generation of batteries, solar energy films and fuel cells involve more complex chemical reactions.

Applications that would see graphene used in these technologies would require molecules with versatile chemistry stuck to graphene.

Get boron onboard

Together with my colleagues, we have created a new graphene hybrid material by directly attaching boron clusters to the graphene surface.

The trick was to use the stable conjugated network in graphene to trap a highly reactive boron cluster. Attaching these kinds of chemicals unlocks entirely new and interesting material properties, such as improved functionality and hierarchically organised responsiveness.

For example, the material may now soon be used to interact with biological molecules, harvest sunlight for use in solar cells, and anchor metals for efficient hydrogen storage.

The work will provide an insight into how graphene materials retain their function after large scale processing. We can now perform exact chemical reactions on graphene that will ultimately translate into more reliable and affordable graphene-based technologies.

We have pushed the boundaries at the nanoscale and started to find new ways to create materials from the ground up with fascinating properties that can be commercialised.

The Conversation

This article was originally published on The Conversation.
Read the original article.

The Computer of the Future is…. Vague.


Quantum Computer prototypes make mistakes. It’s in their nature. Can redundancy correct them?

Quantum memory promises speed combined with energy efficiency. If made viable, it will be used in phones, laptops and other devices, giving us all faster, more trustworthy tech that requires less power to operate. Before we see it applied, the hardware requires redundant memory cells to check and double-check its own errors.

All indications show quantum tech is poised to usher in the next round of truly revolutionary devices, but first scientists must solve the problem of the memory cells saving the wrong answer. Quantum physicists must redesign circuitry that exploits quantum behavior. The current memory cell is called a qubit. The qubit takes advantage of quantum mechanics to transfer data at an almost instantaneous rate, but the data is sometimes corrupted with errors. The qubit is vulnerable to errors because it is physically sensitive to small changes in the environment it exists in. It's been difficult to solve this problem because it is a hardware issue, not a software design issue. UC Santa Barbara physics professor John Martinis' lab is dedicated to finding a workaround that can move forward without tackling the actual errors: a self-correcting qubit.

The latest design developed at Martinis' lab is quantum circuitry that repeatedly self-checks for errors and suppresses the statistical mistake, saving data to multiple qubits and giving the overall system the kind of reliability we've come to expect from non-quantum digital computers. Since an error-free qubit has seemed a distant hurdle, this new development means we are amazingly close to a far-reaching breakthrough.

Julian Kelly is a grad student and co-lead author of the paper published in Nature:

“One of the biggest challenges in quantum computing is that qubits are inherently faulty so if you store some information in them, they’ll forget it.”

Bit flipping is the problem du jour in smaller, faster computers.

Last week I wrote about a hardware design problem called bit flipping, where a classic, non-quantum computer has this same problem of unreliable data. In an effort to make a smaller DRAM chip, designers created an environment where the field around one bit storage location could be strong enough to actually change the value of the bit storage location next to it. You can read about that design flaw, and the hackers who proved it could be exploited to gain system admin privileges in otherwise secure servers, here.

Bit flipping also applies to this issue in quantum computing. Quantum computers don't just save information in binary ("yes/no" or "true/false") positions. Qubits can be in any or even all positions at once, because they are storing value in multiple dimensions. It's called "superposition," and it's the very reason quantum computers have the kind of computational prowess they do, but ironically this characteristic also makes qubits prone to bit flipping. Just being around atoms and energy transference is enough to create an unstable environment, one unreliable for data storage.
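If you like to see ideas in code, here is a minimal Python sketch (my own illustration, not anything from the Martinis group) of a qubit as a two-amplitude state vector: a bit flip is the Pauli-X operation, and measuring the state destroys the superposition.

    import numpy as np

    rng = np.random.default_rng()

    def measure(state):
        # Born rule: |amplitude|^2 gives the probability of each outcome,
        # and the act of measuring collapses the state to what was read out.
        p0 = abs(state[0]) ** 2
        outcome = 0 if rng.random() < p0 else 1
        collapsed = np.zeros(2, dtype=complex)
        collapsed[outcome] = 1.0
        return outcome, collapsed

    X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X: the bit flip

    zero = np.array([1, 0], dtype=complex)         # a qubit storing |0>
    outcome, _ = measure(X @ zero)
    print(outcome)                                 # 1: the stored bit flipped

    both = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition
    outcome, collapsed = measure(both)
    print(outcome)     # 0 or 1, 50/50 -- and the superposition is gone

Once a measurement happens the superposition is gone, which is exactly why an error check can't just read the data qubits directly.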

“It’s hard to process information if it disappears.” ~ Julian Kelly.

Along with Rami Barends, staff scientist Austin Fowler and others in the Martinis Group, Julian Kelly is developing a data storage scheme where several qubits work in conjunction to redundantly preserve information. Information is stored across several qubits in a chip that is hard-wired to also check for the odd-man-out error. So, while each qubit is unreliable, the chip itself can be trusted to store data for longer and with fewer – hopefully no – errors.

It isn't a new idea, but this is the first time it's been applied. The device they designed is small in terms of data storage, but it works as designed: it corrects its own errors. The vision we all have of a working quantum computer able to process a sick amount of data in an impressively short time? That will require something in the neighborhood of a hundred million qubits, each of them redundantly self-checking to prevent errors.

Austin Fowler spoke to Phys.org about the firmware embedded in this new quantum error detection system, calling it surface code. It relies on measuring the change between a duplication and the original bit, as opposed to simply comparing a copy of the same info. This measurement of change instead of comparison of duplicates is called parity recognition, and it is unique to quantum data storage. The original info being preserved in the qubits is actually unobserved, which is a key aspect of quantum data.

“You can’t measure a quantum state, and expect it to still be quantum,” explained Barends.

As in any discussion of quantum physics, the act of observation has the power to change the value of the bit. In order to truly duplicate the data the way classical computing does in error detection, the bit would have to be examined, which in and of itself would potentially cause a bit flip, corrupting the original bit. The device developed at Martinis' UC Santa Barbara lab avoids this by checking parities rather than reading the stored values themselves.
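The redundancy idea is easier to grasp in classical form. Here is a toy Python simulation (a deliberate simplification of mine, nothing like the group's actual surface code firmware) of a three-cell repetition code, where two parity checks reveal which cell flipped without revealing the stored value:

    import random

    def encode(bit):
        return [bit, bit, bit]               # redundant storage, three cells

    def noisy(codeword, p=0.1):
        # Each cell independently flips with probability p.
        return [b ^ (random.random() < p) for b in codeword]

    def correct(codeword):
        s1 = codeword[0] ^ codeword[1]       # parity of neighbours: value-blind
        s2 = codeword[1] ^ codeword[2]
        if s1 and not s2:
            codeword[0] ^= 1                 # syndrome (1,0): first cell flipped
        elif s1 and s2:
            codeword[1] ^= 1                 # syndrome (1,1): middle cell flipped
        elif s2:
            codeword[2] ^= 1                 # syndrome (0,1): last cell flipped
        return codeword

    trials = 100_000
    failures = sum(correct(noisy(encode(1)))[0] != 1 for _ in range(trials))
    print(failures / trials)   # ~0.028: better than the raw 0.1 error rate

The parities say only whether neighbouring cells agree, never what they actually hold – the classical analogue of the parity recognition described above.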

This project is a groundbreaking marriage of physical and theoretical quantum computing because it uses the physical qubit chip with a logic circuit that applies quantum theory as an algorithm. The result – a viable way of storing data – proves that several otherwise untested quantum theories are real and not just logically sound. Ideas in quantum theory that have been pondered for decades are now proven to work in the real world!

What happens next?

Phase flips:

Martinis Lab will continue its tests in an effort to refine and develop this approach. While bit-flip errors seem to have been solved with this new design, there is another type of error, not found in classical computing, that has yet to be solved: the phase flip. Phase flips might be a whole other article, and until quantum physicists solve them there is no rush for the layman to understand.

Stress tests:

The team is also currently running the error correction cycle for longer and longer periods while monitoring the device's integrity and behavior to see what will happen. Suffice it to say, there are a few more types of errors than there may appear to be, despite this breakthrough.

Corporate sponsorship:

As if there was any doubt about funding… Google has approached Martinis Lab and offered support in an effort to speed up the day when quantum computers stomp into the mainstream.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Look, your eyes are wired backwards: here’s why


The human eye is optimised to have good colour vision at day and high sensitivity at night. But until recently it seemed as if the cells in the retina were wired the wrong way round, with light travelling through a mass of neurons before it reaches the light-detecting rod and cone cells. New research presented at a meeting of the American Physical Society has uncovered a remarkable vision-enhancing function for this puzzling structure.

Section through the retina and its layers.
Labin/Safuri/Perlman/Ribak/Nature

About a century ago, the fine structure of the retina was discovered. The retina is the light-sensitive part of the eye, lining the inside of the eyeball. The back of the retina contains cones to sense the colours red, green and blue. Spread among the cones are rods, which are much more light-sensitive than cones, but which are colour-blind.

Before arriving at the cones and rods, light must traverse the full thickness of the retina, with its layers of neurons and cell nuclei. These neurons process the image information and transmit it to the brain, but until recently it has not been clear why these cells lie in front of the cones and rods, not behind them. This is a long-standing puzzle, even more so since the same structure, of neurons before light detectors, exists in all vertebrates, showing evolutionary stability.

Researchers in Leipzig found that glial cells, which also span the retinal depth and connect to the cones, have an interesting attribute. These cells are essential for metabolism, but they are also denser than other cells in the retina. In the transparent retina, this higher density (and corresponding refractive index) means that glial cells can guide light, just like fibre-optic cables.

Selective vision

In view of this, my colleague Amichai Labin and I built a model of the retina, and showed that the directionality of glial cells helps increase the clarity of human vision. But we also noticed something rather curious: the colours that best passed through the glial cells were green to red, which the eye needs most for daytime vision. The eye usually receives too much blue – and thus has fewer blue-sensitive cones.

Further computer simulations showed that green and red are concentrated five to ten times more by the glial cells, and into their respective cones, than blue light. Instead, excess blue light gets scattered to the surrounding rods.

This surprising result of the simulation now needed an experimental proof. With colleagues at the Technion Medical School, we tested how light crosses guinea pig retinas. Like humans, these animals are active during the day and their retinal structure has been well-characterised, which allowed us to simulate their eyes just as we had done for humans. Then we passed light through their retinas and, at the same time, scanned them with a microscope in three dimensions. This we did for 27 colours in the visible spectrum.

Beady-eyed guinea pigs make great…well… guinea pigs, for optical research
Jg4817, CC BY-SA

The result was easy to notice: in each layer of the retina we saw that the light was not scattered evenly, but concentrated in a few spots. These spots were continued from layer to layer, thus creating elongated columns of light leading from the entrance of the retina down to the cones at the detection layer. Light was concentrated in these columns up to ten times, compared to the average intensity.

Even more interesting was the fact that the colours that were best guided by the glial cells matched nicely with the colours of the cones. The cones are not as sensitive as the rods, so this additional light allowed them to function better – even under lower light levels. Meanwhile, the bluer light, that was not well-captured in the glial cells, was scattered onto the rods in its vicinity.

These results mean that the retina of the eye has been optimised so that the sizes and densities of glial cells match the colours to which the eye is sensitive (which is in itself an optimisation process suited to our needs). This optimisation is such that colour vision during the day is enhanced, while night-time vision suffers very little. The effect also works best when the pupils are contracted at high illumination, further adding to the clarity of our colour vision.

The Conversation

This article was originally published on The Conversation.
Read the original article.

This Year's Pi Day: A Once-a-century Celebration


Pi Day – on March 14 – will be particularly memorable this year: the date can be written 3/14 by those who opt for the month-then-day format, which is Pi to two decimal places, 3.14. Include the year and this year gives 3/14/15, which is Pi to four decimal places, 3.1415.

This happens only once a century, and the Museum of Mathematics in New York City, among others, is taking Pi Day 2015 one step further, by celebrating at 9:26pm, adding three more digits to Pi, 3.1415926.

You can personally celebrate the event 12 hours earlier, at 9:26am, then wait a further 53 seconds to get Pi to nine decimal places: 3.141592653. That's probably the best time-and-date approximation to Pi you can get with your typical time piece, though the digits of Pi continue on indefinitely – but more on that later.
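If you're sceptical, here is a two-line Python sanity check (standard library only) that the timestamp digits really are a truncation of Pi:

    from math import pi

    digits = f"{pi:.12f}"     # '3.141592653590'
    print(digits[:11])        # '3.141592653' -> 3/14/15, 9:26:53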

Chicagoans plan to celebrate Pi Day this year by running in a Pi-K race of 3.14 miles. Numerous city bakeries are offering special pies for the occasion at US$3.14 per slice.

Another celebration

Not as well known perhaps is the fact that March 14 this year is also the 136th birthday of physicist Albert Einstein, and that 2015 is the 100th anniversary of the publication of Einstein’s paper on general relativity.

To commemorate this doubly significant event, Princeton University is planning its usual gala event, including a pie eating contest, a performance by the Princeton Symphony, a contest to see who can recite the most correct digits of Pi (the current Guinness world record is 67,890 places), a guided Einstein tour and even an Einstein look-a-like contest.

Young entrants in an Einstein look-a-like competition as Princeton celebrates Albert Einstein's birthday, which coincides with Pi Day.
Flickr/Princeton Public Library, CC BY-NC

Pi in the popular culture

Pi Day long ago extended its reach beyond a handful of mathematical zealots to become a widely celebrated event, even the subject of a resolution to mark the day each year passed by the US House of Representatives in 2009.

This may well be the first legislation on Pi Day to have been adopted by a national governmental body. Pi Day even has its own following on Twitter through the hashtag #piday.

In general, Pi is much more in the public eye than it was even five or ten years ago, as we wrote last year.

Pi continues to fascinate and made another appearance on the US quiz show Jeopardy! on May 9, 2013 when it featured in an entire category of questions. The clues provided were:

  1. (US$200) Pi is the ratio of this measurement of a circle to its diameter.
  2. (US$400) Numerically, pi is considered this, like a type of “meditation”.
  3. (US$600) For about $19,100 x pi, this “Black Swan” director made “Pi”, his 1998 debut film about a math whiz.
  4. (US$800) In the 100s AD this Alexandrian astronomer calculated a more precise value of pi, the equivalent of 3.14166.
  5. (US$1,000) You can find the area of this oval geometric shape with pi x A x B, if A & B are half of its longest & shortest diameter.

The clues and the answers (all were answered correctly by various contestants) are given here in the J-archive, an independent repository of clues and answers maintained by Jeopardy! fans.

Current record for computing Pi

Ever since the dawn of the computing age, researchers have plied their craft at computing Pi, by a variety of often exotic techniques.

As we explained earlier, if we count things to the second, this year’s Pi Day gives the number down to nine decimal places, at 3.141592653. But this is still only an approximation to the true value of Pi.

Pi is a transcendental number, which implies (among other things) that its decimal expansion continues forever with no repeating pattern.

The current record for calculating digits of Pi is 13.3 trillion decimal digits, ascribed to someone known only as "houkouonchi", using software by Alexander J Yee.
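Record computations rest on fast-converging series plus heroic engineering. Just to illustrate the series part (a bare-bones sketch, nothing like the optimised record-setting software), the Chudnovsky formula used by modern record holders gains roughly 14 digits per term and fits in a few lines of Python:

    from decimal import Decimal, getcontext

    def chudnovsky_pi(digits):
        getcontext().prec = digits + 10        # working precision, with margin
        C = 426880 * Decimal(10005).sqrt()
        M, L, X, K = 1, 13591409, 1, 6
        S = Decimal(13591409)                  # the k = 0 term of the sum
        for i in range(1, digits // 14 + 2):   # ~14 new digits per term
            M = M * (K**3 - 16 * K) // i**3
            L += 545140134
            X *= -262537412640768000
            S += Decimal(M * L) / X
            K += 12
        return +(C / S)                        # unary + rounds to the context

    print(chudnovsky_pi(50))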

What is Pi good for?

So what is Pi good for, anyway? Does Pi or the digits of Pi ever really enter the day-to-day world? It does, actually, quite a bit.

For example, Pi is central to digital signal processing, which is pervasive in our modern wireless world. The digits of Pi (in binary) are probably somewhere in the programming of your smartphone, used in digitally decoding multichannel, gigahertz signals while you casually chat with your friend about the local weather and politics.

Mathematically speaking, your smartphone is performing a fast Fourier transform, which involves Pi.
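To make that concrete, here is a tiny numpy example (illustrative only): the FFT's internal "twiddle factors" are powers of e^(-2πi/N), so Pi does the work every time a spectrum is computed.

    import numpy as np

    fs = 64                                # samples per second
    t = np.arange(fs) / fs                 # one second of time stamps
    signal = np.sin(2 * np.pi * 5 * t)     # a pure 5 Hz tone: Pi again

    spectrum = np.fft.rfft(signal)         # twiddle factors e^(-2*pi*i*k*n/N)
    print(np.argmax(np.abs(spectrum)))     # 5 -- the tone found, via Pi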

Pi even appears in the field equations of Einstein’s general relativity. So when you read reports about tests of Einstein’s general relativity, such as the recent dramatic discovery of a four-way gravitational lens, keep in mind that Pi is behind the equations governing these mind-blowing phenomena.

For those interested in looking over some of the original technical papers on Pi that have appeared over the past 120 years in the American Mathematical Monthly, see the Pi Day anthology by one of us and Scott Chapman in the March 2015 Monthly. While many of these articles are targeted to mathematical researchers, quite a few are readable by those with relatively modest mathematical training.

For the rest of us, perhaps it is enough just to know – for this year’s Pi Day purposes – that Pi = 3.141592653 so set your alarm clocks now for 9:26am, Saturday March 14, 2015, in your favourite timezone, and enjoy the 53 second countdown.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Pi Day is silly, but π itself is fascinating and universal


Math students everywhere will be eating pies in class this week in celebration of what is known as Pi Day, the 14th day of the 3rd month.

Larry Shaw initiated the first Pi Day in 1988.
Ron Hipschman, CC BY-SA

The symbol π (pronounced paɪ in English) is the sixteenth letter of the Greek alphabet and is used in mathematics to stand for a real number of special significance. When π is written in decimal notation, it begins 3.14, suggesting the date 3/14. In fact, the decimal expansion of π begins 3.1415, so this year’s Pi Day, whose date we can abbreviate as 3/14/15, is said to be of special significance, a once-per-century coincidence. (Yet we might anticipate a similar claim next year on 3/14/16, since 3.1416 is a closer approximation to π than is 3.1415.)

Besides a reason to enjoy baked goods while feeling mathematically in-the-know, just what is π anyway?

A circle’s measurements define π.
Robotics Academy

It’s defined to be the ratio between the circumference of a circle and the diameter of that circle. This ratio is the same for any size circle, so it’s intrinsically attached to the idea of circularity. The circle is a fundamental shape, so it’s natural to wonder about this fundamental ratio. People have been doing so going back at least to the ancient Babylonians.

The hexagon’s perimeter is shorter than the circle’s, while the square’s is longer.
CC BY

You can see that π is greater than 3 if you look at a hexagon inscribed within a circle. The perimeter of the hexagon is shorter than the circumference of the circle, and yet the ratio of the hexagon’s perimeter to the circle’s diameter is 3. And you can see that π is less than 4 if you look at the square that circumscribes a circle. The square’s perimeter is longer than the circle’s circumference, and yet the ratio of this perimeter to the diameter of the circle is 4. So π is somewhere in there between 3 and 4. OK, but what number is it?

A little experimentation with a measuring tape and a dinner plate suggests that π might be 22/7, a number whose decimal expansion begins 3.14. But it turns out that 22/7 is approximately 3.1429, while even 2,250 years ago Archimedes knew that π is approximately 3.1416. The fraction 355/113 is much closer to π but still not exactly equal to it.
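Archimedes' method can be replayed in a few lines of Python – a sketch of the classic perimeter-doubling recurrence, starting from inscribed and circumscribed hexagons (a variation on the hexagon-and-square picture above):

    from math import sqrt

    lower = 3.0           # inscribed hexagon: the "pi > 3" bound above
    upper = 2 * sqrt(3)   # circumscribed hexagon, roughly 3.4641

    sides = 6
    while sides < 100:    # Archimedes stopped at the 96-gon
        upper = 2 * upper * lower / (upper + lower)  # doubled circumscribed
        lower = sqrt(upper * lower)                  # doubled inscribed
        sides *= 2
        print(f"{sides:2d} sides: {lower:.5f} < pi < {upper:.5f}")

    print(22 / 7)         # 3.142857...: the dinner-plate guess, a bit high
    print(355 / 113)      # 3.141593...: good to six decimal places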

Fractionally closer?

So this raises the question: is there some other fraction out there that equals π, not merely approximately but exactly? The answer is no. In 1761, Swiss mathematician Johann Lambert proved that no fraction exactly equals π. This implies that its decimal expansion is never-ending, with no repeated pattern.

The German mathematician Ferdinand Lindemann proved in 1882 that π is in fact transcendental, which means that it does not solve any polynomial equation with integer coefficients. This implies in some sense that there isn’t ever going to be a simple way of describing π arithmetically. Nowadays, machines can compute trillions of decimal digits of π, but that in no way helps us understand what π is exactly. It’s easiest just to say that, to be exact, π is equal to … π.

No one knows whether each of the ten digits – 0 through 9 – appears with equal frequency in the decimal expansion of π, as we would expect if the digits of π were produced by a random digit generator. This illustrates that a strikingly elementary question can be out of reach of modern mathematics. Perhaps in a century mankind will know the answer to this question, but it’s not even clear at this time how to attack it effectively.

You could measure circumference and diameter of these pies to get π.
Dennis Wilkinson, CC BY-NC-SA

Everything’s coming up π

What is astonishing about π is that it appears in many different mathematical contexts and across all mathematical areas. It turns out that π is the ratio of the area of a circle to the area of the square built on the radius of the circle. That seems like a coincidence, because π was defined to be a different ratio. But the two ratios are the same. π is also the ratio of the surface area of a sphere to the area of the square built on the diameter of the sphere. And what about the ratio of the volume of a sphere to the volume of the cube built on the sphere's diameter? That's π/6.

The area under the bell-shaped curve y = 1/(1+x²) is π. But this curve isn't actually the well-known and universal bell-shaped curve seen in statistics, which has the formula y = e^(−x²). The area under that curve is the square root of π! If you drop a pin of length one centimeter on a sheet of lined paper with lines spaced at centimeter intervals, the probability that the pin crosses one of the lines is 2/π. If you choose two whole numbers at random, the probability that they will have no common factor is 6/π².
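That last claim is easy to test empirically. A quick Monte Carlo sketch in Python (illustrative; a large but arbitrary range stands in for "random whole numbers"):

    import random
    from math import gcd, pi

    trials = 200_000
    coprime = sum(
        gcd(random.randrange(1, 10**9), random.randrange(1, 10**9)) == 1
        for _ in range(trials)
    )
    print(coprime / trials)   # hovers around 0.608
    print(6 / pi**2)          # 0.60792...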

There are thousands of formulas for π of one sort or another, although it isn’t clear whether any of them will satisfy the desire to know what π is exactly. One such formula is

Ramanujan’s equation for π.
Author provided

where the sigma symbol indicates that one must plug in all the whole numbers in place of the symbol "k" in the subsequent formula and add up the resulting infinitely many fractions. What is remarkable about this expression is that it was discovered by the legendary Indian genius Srinivasa Ramanujan in 1914, working alone. No one knows how Ramanujan came up with this amazing formula. Moreover, his formula wasn't even shown to be correct until 1985 – and that demonstration used high-speed computers to which Ramanujan had no access.
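Assuming the formula in the image is Ramanujan's celebrated 1914 series (the best known of them), a direct Python translation shows off the convergence – roughly eight new correct digits per term:

    from decimal import Decimal, getcontext
    from math import factorial

    def ramanujan_pi(digits):
        getcontext().prec = digits + 10
        total = Decimal(0)
        for k in range(digits // 8 + 2):           # ~8 digits per term
            num = factorial(4 * k) * (1103 + 26390 * k)
            den = factorial(k) ** 4 * 396 ** (4 * k)
            total += Decimal(num) / Decimal(den)
        inv_pi = 2 * Decimal(2).sqrt() / 9801 * total
        return 1 / inv_pi

    print(ramanujan_pi(40))   # 3.14159265358979323846...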

π is beyond universal

The number π is a universal constant that is ubiquitous across mathematics. In fact, it is an understatement to call it “universal,” because π lives not only in this universe but in any conceivable universe. It existed even prior to the Big Bang. It is permanent and unchanging.

Math enthusiasts need to cut loose sometimes too.
Vancouver Island University, CC BY-NC-ND

That’s why the celebration of Pi Day seems so silly. The Gregorian calendar, the decimal system, the Greek alphabet, and pies are relatively modern, human-made inventions, chosen arbitrarily among many equivalent choices. Of course a mood-boosting piece of lemon meringue could be just what many math lovers need in the middle of March at the end of a long winter. But there’s an element of absurdity to celebrating π by noting its connections with these ephemera, which have themselves no connection to π at all, just as absurd as it would be to celebrate Earth Day by eating foods that start with the letter “E.”

The Conversation

This article was originally published on The Conversation.
Read the original article.

“Rowhammering” Attack Gives Hackers Admin Access


A piece of code can actually manipulate a physical memory chip by repeatedly accessing nearby capacitors, in a burgeoning new hack called rowhammering. Rowhammer hacking is so new that no one has used it in the wild yet. Google's Project Zero security initiative figured out how to exploit an aspect of a physical component in some types of DDR memory chips. The hack can give a user increased system rights regardless of an untrusted status. Any Intel-compatible PC with this kind of chip and running Linux is vulnerable – in theory. Project Zero pulled it off, but it isn't exactly something to panic about unless you are doing both of those things: using vulnerable DRAM and running Linux.

A lot of readers might be susceptible to this security hack, but most won't want to read the technical details. If you are interested you can check out the Project Zero blog piece about it. The security flaw is in a specific chip, the DRAM, or dynamic random-access memory chip. The chip is supposed to just store information in the form of bits saved on a series of capacitors. The hack works by switching the value of bits stored in DDR3 modules known as DIMMs; DRAM is the style of chip, and each DIMM carries several DRAM chips. Hackers researching on behalf of Project Zero basically designed a program to repeatedly access sections of data stored on the vulnerable DRAM until the statistical odds of one or more nearby cells retaining or losing a charge when it shouldn't become a reality.

This kind of hack was only theoretical until 2014, when scientists proved this kind of "bit flipping" is completely possible. Repeatedly accessing an area of a specific DIMM can become so reliable as to allow the hacker to predict the change of contents stored in that section of DIMM memory. Last Monday (March 9, 2015) Project Zero demonstrated exactly how a piece of software can translate this flaw into an effective security attack.

“The thing that is really impressive to me in what we see here is in some sense an analog- and manufacturing-related bug that is potentially exploitable in software,” David Kanter, senior editor of the Microprocessor Report, told Ars. “This is reaching down into the underlying physics of the hardware, which from my standpoint is cool to see. In essence, the exploit is jumping several layers of the stack.”

Why it’s called Rowhammering.

The memory in a DDR-style chip is configured in an array of rows and columns. Each row is grouped with others into large blocks which handle the accessible memory for a specific application, including the memory resources used to run the operating system. There is a security feature called a "sandbox", designed to protect data integrity and ensure the overall system stays secure. A sandbox can only be accessed through a corresponding application or the operating system. Bit flipping a DDR chip works when a hacker writes an application that can access two chosen rows of memory, then accesses those same two rows hundreds of thousands of times, aka hammering. When the targeted bits flip from ones to zeros, matching a dummy list of data in the application, the target bits are left alone with the new value.

The implications of this style of hack are hard to see for the layman but profound in the security world. Most data networks allow a limited list of administrators to have special privileges. It would be possible, using a rowhammer attack, for an existing account to suddenly gain administrative privileges to the system. In the vast majority of systems, that kind of access would open the way into several other accounts. Administrative access would also allow some hackers to alter existing security features. The bigger the data center, and the more users with accounts accessing the database, the more useful this vulnerability is.

The Physics of a Vulnerability

We're all used to newer tech coming with unforeseen security problems. Ironically, this vulnerability is present in newer DDR3 memory chips, and it is the result of the ever smaller dimensions of the silicon. The DRAM cells are too close together in this kind of chip, making it possible to access a nearby row repeatedly, flipping it back and forth, and eventually make the cell next to it – the target bit that is not directly accessible – flip as well.

Note: The rowhammer attack being described doesn't work against newer DDR4 silicon or DIMMs that contain ECC (error-correcting code) capabilities.

The Players and the Code:

Mark Seaborn and Thomas Dullien are the guys who finally wrote a piece of code able to take advantage of this flaw. They created two rowhammer attacks which can run as processes. Those processes have no security privileges whatsoever but can end up gaining administrative access to an x86-64 Linux system. The first exploit was a Native Client module, incorporating itself into the platform as part of Google Chrome. Google developers caught this attack and disallowed an instruction called CLFLUSH in Chrome's Native Client, and the exploit stopped working. Seaborn and Dullien were psyched that they were able to get that far, and wrote the second attempt shortly thereafter.

The second exploit looks like a totally normal Linux process. It allowed Seaborn and Dullien access to all physical memory, which proved the vulnerability is actually a threat to any machine with this type of DRAM.

The ARS article about this has a great quote by Irene Abezgauz, a product VP at Dyadic Security:

The Project Zero guys took on the challenge of leveraging the concept of rowhammer into an actual exploit. What’s impressive is the combination of lots of deep technical knowledge with quite a bit of hacker creativity. What they did was create attack techniques in which flipping just a single bit in a specific location allows them to execute any code they want with root privileges or escape a sandbox. This is impressive by itself, but they added to this quite a few creative solutions to make it more likely to succeed in a real world scenario and not just in the lab. They figured out ways for better targeting of the specific locations in memory they needed to flip, improved the chances of the attack to succeed by creating (“spraying”) multiple locations where a flipped bit would make the right impact, and came up with several ideas to leverage this into actual privileged code execution. This combination makes for one of the coolest exploits I’ve seen in a while.

Project Zero didn't name which models of DDR3 are susceptible to rowhammering. They also claim that this attack could work on a variety of operating platforms, even though they only tried it on a Linux computer running x86-64 hardware – something they didn't technically prove, but which seems very believable considering the success and expertise behind that opinion.

So, is Rowhammering a real threat or just some BS?

There isn't an obvious, practical application for this yet. Despite how powerful the worst-case scenario would be, this threat doesn't really come with a guarantee of sweeping the internet like some other, less recent vulnerability exploits. The overwhelming majority of hacks are attempted from remote computers, but Seaborn and Dullien apparently needed physical access to incorporate their otherwise unprivileged code into the targeted system. Also, because the physical layout of the chip dictates which rows are vulnerable, it may be that users who want to protect against this exploit can just reconfigure where administrative privileges are stored and manipulated on the chip. Thirdly, rowhammering as Project Zero describes it actually requires over 540,000 memory accesses in less than 64 milliseconds – a memory speed demand that means some systems can't even run the necessary code. Hijacking a system using rowhammering with these limitations is presently not a real threat.

People used to say the same thing about memory corruption exploits, though. For example: buffer overflows and use-after-free bugs both allow hack attempts to squeeze malicious shell code into the protected memory of a computer. Rowhammering is different because it is so simple. It only allows increased privileges for the hacker or piece of code, which is a real threat if it becomes as thoroughly developed as memory corruption exploits have. The subtle difference might even be hard to grasp now, but now that the work has been done it's the usual race between security analysts who would love to protect against it and the criminal world trying to dream up a way to make it more viable. Rob Graham, CEO of Errata Security, wrote further on the subject, here.

In short, this is noteworthy because a physical design flaw in a chip is being exploited, as opposed to a software oversight or coding problem. A piece of code actually affects the physical insides of the computer during the attack.

Or, as Kanter, of the Microprocessor Report, said:

“This is not like software, where in theory we can go patch the software and get a patch distributed via Windows update within the next two to three weeks. If you want to actually fix this problem, we need to go out and replace, on a DIMM by DIMM basis, billions of dollars’ worth of DRAM. From a practical standpoint that’s not ever going to happen.”

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY