Category Archives: Electronics

Lots of hot air about heat, but why is no one talking about sustainable cooling?


Without cooling, the supply of food, medicine and data would simply break down. We consume large amounts of energy and cause a great deal of pollution keeping things cool, yet compared to electricity, transport or heat, cold has received very little attention in the energy debate: neither the UK, the US nor the EU yet has an explicit policy on cold.

Global demand is booming – and incremental efficiency improvements are unlikely to contain the resulting environmental damage. We need radically new technology.

In rapidly developing nations, investment in cooling is starting to boom as rising incomes, urbanisation and population growth boost demand. But the industry remains rudimentary and has enormous headroom to grow: in India, just 4% of fresh produce is currently transported in refrigerated vehicles, compared with more than 90% in the UK; China has an estimated 66,000 refrigerated vehicles to serve a population of 1.3 billion, whereas France has 140,000 for 66 million. These disparities seem unlikely to persist: India projects that it needs to spend more than US$15 billion on the cold chain over the next five years.

For industrial use, cold must generally be maintained through a whole supply chain – think of how seafood can remain frozen from trawler to supermarket. We call this the cold chain.

Been a while since they were last warm.
Brian Smith, CC BY

Cold chain growth is currently based on diesel-powered technologies that produce grossly disproportionate emissions of nitrogen oxides (NOx) and particulate matter (PM). The fridge you might find on a supermarket home delivery van consumes up to 20% of the vehicle’s diesel, but emits up to six times as much NOx and 29 times as much PM as the engine. It also uses HFC refrigerants harmful to the atmosphere.

At the same time, however, vast amounts of cold are wasted, for example when liquid natural gas (LNG) is turned back into gas at import terminals. This cold could potentially be stored as liquid air or liquid nitrogen then recycled to reduce the cost and environmental impact of cooling in buildings and vehicles.

This insight has stimulated new thinking aimed at creating business and environmental value from the efficient integration of cold into the wider energy system, now known as the “cold economy”. The cold economy crucially involves recycling waste cold and “wrong-time” energy, such as excess wind power generated at night when demand is low, to provide low-carbon, zero-emission cooling and power through novel forms of energy storage.

Big changes in the energy market over the next decade will spur the adoption of tidal power, solar power, offshore wind and other novel technologies. This in turn will require far greater integration of different forms of energy generation and consumption – and it is increasingly clear this now means joining up not only heat, power and transport, but also cold.

This is an important opportunity: with the right support, the cold economy could develop into a large industry that simultaneously reduces greenhouse gas emissions, improves air quality and replaces environmentally destructive refrigerants with benign alternatives – as well as generating thousands of new manufacturing jobs.

The cold economy is the subject of a new policy commission entitled Doing Cold Smarter, launched by the University of Birmingham this month. It will assess not only how the growing demand for cooling can be met without causing environmental ruin, but also the potential benefits both in the UK and emerging markets.

What will we come up with? You’ll have to wait until the commission’s final report is published this autumn. But it should be full of thought-provoking – even cool – ideas.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Is the new Apple Watch Missing the Mark or Ahead of its Time?


My friend Pat O’dea wrote:

Asked a guy showing off his expensive new smartwatch “What do you like most about it?” He replied “You don’t realize how many times during the day you have to reach into your pocket and pull your phone out just to see what time it is, so this like, totally solves that.” Speechless.

Steve Jobs’ legendary status as a businessman and tech pioneer didn’t die with him, but Apple is far from bulletproof. No one’s talent for anticipating the wants and needs of consumers is infallible, and the ability to cultivate a brand reputation is arguably too rare to study or speculate about with any accuracy. Apple has gone through ups and downs and had some spectacular failures in the past. Since Jobs’ death, everyone has pondered at least once: can the brand progress into a new era of product development without him?

Apple tenaciously pioneered the personal computer, but its ability to stay afloat and eventually thrive depended on financial support from investors and even competitors, who were simply eager to keep the market of new ideas alive with the competition that spurred the personal computer’s development in the first place. The story behind Apple is one about the future of branding itself. A few years ago Apple cultivated a lifestyle. The iPod and the iPhone were as American as Coca-Cola or Warner Brothers. From hardware design to software design to intuitive user experience, Apple made devices that people found easy to use and surprisingly useful – and it did so with confidence and subtlety. Never before had a company so convincingly proven its finger to be on the pulse of the market. Period.

The millions behind Apple’s ad campaigns were spent to capture a market that may not be able to afford the type of products that forged the reputation. Missing Steve Jobs’ leadership might not be the problem behind Apple repeatedly missing the mark, but it’s hard to imagine him supporting a product like the Apple Watch.

The “Think different” campaign was aimed at regular, middle-class people. Apple products took existing tech and put it in a format anyone could pick up, figure out and use without any real instruction or coaching. Most of all, Apple products were effective and useful. Despite the target audience, the products have always come with a price tag a tad high for the intended consumers.

In this beginning of the post-Jobs era, the Apple Watch is following only one aspect of that marketing plan: the pitch. Apple is trying to push the watch as an affordable product once its usefulness is taken into consideration. The problem is: it isn’t very useful.

People supported and even coveted iPods and iPhones because of their groundbreaking tech and aesthetics, but the accusation that they are expensive and frivolous has always plagued the company. The atmosphere Jobs cultivated put a spell on the world, but the products often did live up to the hype – or at least came close. The days of Americans buying $2,000 laptops and considering it a bargain are damn near over. Being able to take a unit out of the box and find a pre-programmed piece of tech that the everyman could (almost) afford and operate was apparently harder than Jobs made it seem. The days of Apple being able to brag about how useful its devices are seem seriously numbered.

It’s not just the watch. Apple press-released new laptops available in gold. It released videos of Christy Turlington Burns doing the things millionaires do. The Apple Watch also comes plated in 18-karat gold; Tim Cook quoted the starting price at $10,000.

Over the past year, various people speculated or confirmed that this jump to a new target audience was in the works. John Gruber, who blogs about Apple, predicted that the highest of this new high-end line would be unaffordable for most. Kevin Roose, writing for Fusion, said Apple is likely to market toward the high end of the wealth-inequality spectrum, pointing out how wrong designer Jony Ive was when he quipped, “Apple products are for everyone.”

So the new prices are out, and they are as ridiculous as expected. The new product reviews are in, and the watch isn’t really doing anything a phone can’t already pull off. The lower-end model of the Apple Watch is still $350, and if all it really offers is the difference between a pocket watch and a wristwatch, I think it’s safe to say: Apple fell off. There is no technological difference between the low-end and high-end models; the computer is identical in functionality. The higher-end model is not useful except to people who want to brag about it as a status symbol or convert their money into an asset that may not even appreciate in value. In short, it seems like a seriously bad investment.

I might be out of line imagining what a dead man would say, but gold-plated anything is not something I would have expected Jobs’ reputation to support. The other side of this debate goes something like: Apple has had a long and storied history and has changed its mission several times, and there is no reason to see this as the end of Apple. It’s possible the company is acting on economic information that has been vetted and tested extensively, and knows full well that an expensive, somewhat silly watch will push profit margins appropriately toward its goals. That doesn’t make this round of new products any less disappointing.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Soon smartwatches will listen to your body to work out how you’re feeling


Final details of Apple’s new smartwatch have arrived at the firm’s glitzy Spring Forward event. But while the hype machine steps up another notch, there are other issues to consider regarding health and self-tracking and, possibly even more important, wearable tech companies’ interest in our emotional lives.

Apple’s Watch records exercise, tracks our movements throughout the day, assesses how much time we spend standing and reminds us to get up and move around if we have been sitting for too long – let’s not forget Tim Cook’s “sitting is the new cancer” line. It achieves this by means of an accelerometer, a heart rate sensor, WiFi and GPS. There are already many smartwatches on the market, such as the Pebble and offerings from LG, Sony, Samsung and Motorola, among others. Of course, these haven’t had the Apple marketing Midas touch.

Whether the Watch will be a flop or success, Apple’s entry is a significant contribution to industry-wide attempts to get us using wearable devices. The market is predicted to grow from 9.7m units in 2013 to 135m in 2018, according to CCS Insight, while a report from UK retailer John Lewis also records steady growth in wearables for health and well-being: sales were up 395% from 2013. This is notable because John Lewis is not aimed at the tech-savvy, and therefore presents a reasonable indicator of mass-market take-up of wearables.
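To put that CCS Insight forecast in perspective, here is a quick back-of-the-envelope sketch of the growth rate it implies (the calculation is mine, not from the report):

```python
# Back-of-the-envelope check on CCS Insight's forecast: growth from
# 9.7m wearable units (2013) to 135m (2018) implies a steep compound
# annual growth rate (CAGR).
def cagr(start, end, years):
    """Compound annual growth rate, as a fraction."""
    return (end / start) ** (1 / years) - 1

rate = cagr(9.7e6, 135e6, 2018 - 2013)
print(f"Implied CAGR: {rate:.1%}")  # roughly 69% per year
```

In other words, the forecast assumes the wearables market grows by roughly two-thirds every year of the period.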

Information is power

To understand the significance of Watch and other self-tracking wearables, we should look to Silicon Valley and the Quantified Self movement. This began in San Francisco around 2007 as the editors of Wired magazine, Gary Wolf and Kevin Kelly, initiated a group of like-minded people interested in “self-knowledge through numbers”, a motto and philosophy of sorts for the Quantified Self movement. It entails a deeply libertarian outlook of de-centralisation, a shrunken state, autonomy and self-reliance, and pre-emptive and preventative measures based on the use of data.

Apple’s move into wearables is inevitable as the market grows, but the broader interest in health is also notable. It reflects an interest from corporations and national health providers alike in promoting preventative and anticipatory technologies. The promise wearable technology offers is information: about consumers’ and patients’ behaviour, their health, and whether they stick to prescribed treatments.

This has ushered in an age of medical self-interrogation, in real time and real life contexts, whether this be from office pressure, in relationships, or the impact of disease or physical stresses on the body. Wearables are only part of the health story, as advocates of digital health care foresee how the doctor-patient approach would be radically altered by means of wearable monitors and sensors in the home. Technology behemoths such as Apple and Google alongside many startups would clearly be interested in the possibilities offered by reorganising health provision along these lines.

Think and act

Beyond health, Apple’s interest in emotion is key to understanding the significance of its watch. Apple’s website promises that we will reach out and connect in ways we never have. Watch will allow us to draw doodles and observe others as they create theirs, give loved ones a “tap” on the wrist to show we are thinking of them, send real-time heartbeats to others, and so on.

The message is to use connectivity to be intimate even at a distance, with the language Apple uses an attempt to claim intimacy and sociability from afar, and to humanise and make palatable what are essentially tracking technologies.

There is however a more literal emotional dimension to biometric technologies: the Watch is an example of what I term empathic media – machines able to assess, collect and make use of data about our emotions. This can be achieved through interpretation of speech patterns and tone, gesture, gaze direction, facial cues, heart rates, and respiration patterns. While Apple’s product does not offer all this (although earlier iterations of Watch made similar promises), it still sits within a wider context of technologies that quite literally feel our bodily reactions.

Until now the online world has understood our preferences through the search term keywords we use and what we click; empathic media will quite literally feel our reactions. This is important because if companies can understand moods, emotions or states of arousal, they have access to information that may sway the decisions we make.

We have yet to see Apple’s privacy policy for the watch. While I’m sure it will state that no personally identifiable information will be disclosed to third-parties, what remains to be seen is what can be drawn from aggregated biometric and emotional data, and where that data ends up. This is a key revenue stream for other empathic media and wearable companies. Will Apple be doing the same?

The Conversation

This article was originally published on The Conversation.
Read the original article.

Simultaneous Observation Might Change Our Understanding of Quantum Mechanics


New data could shed light on a decades-old gap in our understanding of quantum mechanics – but how?

There is no shame in struggling to conceptualize quantum mechanics, considering some of the best minds on the planet struggle as well. In fact, the field has been confusing for even the most forward-thinking, capable scientists. The new piece of data can be gleaned from a complicated but relatively easy-to-grasp experiment, published March 2, 2015, entitled Simultaneous observation of the quantization and the interference pattern of a plasmonic near-field.

The experiment was made possible through a collaboration of the Laboratory for Ultrafast Microscopy and Electron Scattering of EPFL, the Department of Physics of Trinity College (US) and the Physical and Life Sciences Directorate of the Lawrence Livermore National Laboratory. The image was rendered by EPFL’s ultrafast energy-filtered transmission electron microscope, one of only two such microscopes in the world.

Until now, the traditional understanding of quantum mechanics has not been able to explain why some subatomic features can behave simultaneously as a particle and a wave. The often-referenced experiments demonstrating the effect of observation always left me asking why you can’t try both at the same time; no experiment had been able to capture both states of light simultaneously. Science had only been able to record evidence of light as waves or as particles; this new photograph captures an image of both from the exact same moment in time. Finally, a novel experiment was devised, developed and executed: simultaneous observation.


Traditional particle/wave observation works like this: ultraviolet light hits a metal surface, causing the metal to emit electrons in a predictable, observable time frame. Until Albert Einstein wrote about what he called the photoelectric effect, light was thought to be a wave. Once the logic is understood, the photoelectric effect is hard proof of light behaving as a particle, able to knock into other particles.

Researcher Fabrizio Carbone led his team at EPFL as they performed a modified version of this experiment: using electrons to image light. The researchers captured, for the first time ever, a single snapshot of light behaving simultaneously as both a wave and a stream of particles.

Carbone’s team was able to use nanotechnology to exploit the wave aspect of light to create a standing wave. They used a laser to direct a short pulse of light at a nano-thin metal wire. The light travels along the wire’s surface to create a standing wave on the other side. By running electricity through the wire and measuring the speed of that electron flow they were able to create an image of the wave aspect of the light during the pulse. The same electron-flow that was used to create the image of the wave traveled so close to the standing light-wave it actually had a measurable exchange of energy, as only a particle can do.

Fabrizio Carbone explains the significance: “This experiment demonstrates that, for the first time ever, we can film quantum mechanics – and its paradoxical nature – directly.”

I wonder what this new way of observing the same exact quantity of light in both states will mean for developing applications that involve quantum theory. Carbone gives a great example: “Being able to image and control quantum phenomena at the nanometer scale like this opens up a new route towards quantum computing.”

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Can Information be Weaponized? Memetic Warfare and the Sixth Domain: Part One


Can an image, sound, video or string of words influence the human mind so strongly that the mind is actually harmed or controlled? Cosmoso takes a look at technology and the theoretical future of psychological warfare with Part One of an ongoing series, which examines a possible delivery system for harmful memes. You can click here to jump to Part Two.

Chloe Diggins and Clint Arizmendi wrote an article for Wired Magazine back in December 2012 entitled Hacking the Human Brain: The Next Domain of Warfare. The piece began:

It’s been fashionable in military circles to talk about cyberspace as a “fifth domain” for warfare, along with land, space, air and sea. But there’s a sixth and arguably more important warfighting domain emerging: the human brain. ~Hacking the Human Brain by Chloe Diggins and Clint Arizmendi, 2012, Wired Magazine

Hacking the Human Brain concentrated on the vulnerabilities of the Brain-Computer Interface, or BCI, giving some examples of how ever-increasing human reliance on computer-aided decision making in modern warfare opens users to security risks from weaponized hacking attempts. It’s a great article, but it does not actually discuss the sixth domain it claims to in the opening paragraph quoted above. The attacks described by Diggins and Arizmendi – exosuits and mind-controlled drones being overridden by hackers – still belong to the fifth domain of the given paradigm. What kind of attack would truly compromise and subjugate the sixth domain, the domain of the mind?

“Wait a minute, Juanita. Make up your mind. This Snow Crash thing—is it a virus, a drug, or a religion?”
Juanita shrugs. “What’s the difference?” ~ From Neal Stephenson’s Snow Crash, 1992

In Neal Stephenson‘s 1992 novel, Snow Crash, the hero unravels a complicated conspiracy to control minds using a complex image file that taps into the innate, hardwired firmware language the human brain uses as an operating system. By simply viewing an image, any human could be susceptible to a contagious, self-replicating idea. The novel was ahead of its time in describing the power of media and the potential dangers posed by creating immersive, interactive virtual worlds and memes with harmful messages or ideas that can spread virally via social media. In the world of Snow Crash, a simple 2D image was the only technology needed to infect the human mind, forcing the victim to comply. The word “meme” was not yet in popular use in 1992, and much of the concept had yet to be developed, but as the above quote points out, there were already several well-tested mind control systems in existence, including viruses, drugs and religions. (Check out Snow Crash by Neal Stephenson at Amazon.com.)

Stephenson waxed academic about language, history and the idea that the ancient Sumerians had already uncovered this ability to hack the human mind. He later credited a 1976 book by Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind, as an influence and inspiration for Snow Crash. In Origin of Consciousness, Jaynes coined the term bicameralism, a hypothetical psychological supposition that the human mind was once divided into two main language functions: one part for speaking and the other for listening. Jaynes claimed this state was normal in primates until a relatively recent change in language and cognition happened to humanity, supposedly about 3,000 years ago. Stephenson’s fictional technology attacks modern man’s anthropologically latent compulsion to automatically accept orders when they are presented in the correct language.


Is a mind-control meme only the stuff of science fiction? In real life, how susceptible are humans to this kind of attack? Check out Can Information be Weaponized? Memetic Warfare and the Sixth Domain Part Two.

Thanks for reading Can Information be Weaponized? Memetic Warfare and the Sixth Domain: Part One~! Any suggestions, contradictions, likes, shares or comments are welcome.

Jonathan Howard posted this on Monday, February 9th, 2015

[email protected]

 

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Are you addicted to your smartphone?


There is no arguing that smartphones are amazing pieces of technology, but how many of us have crossed the line from using to abusing them?

Photographer Eric Smith captured a man off the coast of Redondo Beach, California, so engrossed in his phone that he apparently missed a humpback whale casually lounging just feet from his boat. The image was posted to Instagram on Feb. 2, 2015.

Smartphones are now an established, pervasive part of humanity. You probably have a pattern of checking your phone compulsively throughout the day. Do you ever wonder if you’ve become an addict? What makes something addictive? Can media be addictive even though it isn’t a drug? Let’s look at the facts.

According to Pew Research, as of January 2014, 90% of American adults have some kind of device they check regularly. Back in 2012, 67% of cell owners found themselves regularly checking their phone for messages, alerts or calls even when no alerts or notifications compelled them. (More stats on cell phone use from Pew Research here.) People are nuts for their phones, no doubt.

The numbers here are so high it seems like I’m probably telling you something you already know, but is checking our phones this often addiction? Dr. David Greenfield, psychologist and author of Virtual Addiction: Help for Netheads, Cyber Freaks, and Those Who Love Them, writes: “We already know that the Internet and certain forms of computer use are addictive and while we’re not seeing actual smartphone addictions now, the potential is certainly there.”

Greenfield argues that any media technology could be potentially addictive, calling it “psychoactive” – an adjective usually reserved for prescription, shamanic and recreational drugs. Greenfield claims that smartphones and other interactive media modify the user’s mood and trigger positive feelings. Anything you enjoy instigates a dopamine reaction in the brain that can become a sought-after sensation and cause compulsive, over-indulgent behavior.

The unprecedented speed with which the smartphone took hold of humanity can be distressing. Some people feel downright paranoid about it (but maybe not enough to give up their phones~!). Back in 2011, Amber Case argued that reliance on smartphone technology is one of a short list of developments that technically makes us all cyborgs (see video below).

Probably the best illustration of the frustrating, problematic but undeniable appeal of smartphones hogging our attention is the 2013 short film and smash-hit YouTube video I Forgot My Phone, starring actress Charlene deGuzman, who also wrote and directed it. The main character wanders, slightly unsettled, through various social situations wherein everyone around her is so captivated by their phones they seem oblivious to what’s happening in the physical world. I wonder how many of its 40+ million views involved people watching on their phones?

From the moment they wake up until the moment they go to bed, the devices provide an almost continuous stream of messages and alerts as well as easy access to a myriad of compelling information sources.

By design, it’s an environment of almost constant interruptions and distractions. The smartphone, more than any other gadget, steals from us the opportunity to maintain our attention, to engage in contemplation and reflection, or even to be alone with our thoughts.

Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains.

So, chances are you are an addict. What can you do about it? It’s going to take self-awareness and discipline to use a smartphone responsibly. In the book Sleeping with Your Smartphone, author Leslie Perlow recommends taking Predictable Time Off (PTO). In small-scale experiments she claims to have shown an increase in efficiency, better collaboration, heightened job satisfaction and better work-life balance. Anecdotal evidence suggests that effort put into reduced or more structured smartphone use noticeably improves mood and concentration.

Try it yourself~!
You can try reducing your dependency on handheld devices by going 15 minutes without checking your phone, even if there are alerts or messages pending. After a few days, try adding another fifteen minutes. Every few days you can add more time, until you can comfortably go over an hour without looking at your device.
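That incremental routine can be written out as a simple schedule. The 15-minute step and the one-hour target come straight from the text above; the three-day cadence is my assumption for “every few days”:

```python
# Generate the incremental phone-free schedule described above: start at
# 15 minutes and add 15 more every few days until the phone-free stretch
# comfortably exceeds an hour.
def phone_free_schedule(step_min=15, target_min=60, days_per_step=3):
    """Yield (day, minutes) pairs until the target is exceeded."""
    day, minutes = 1, step_min
    while True:
        yield day, minutes
        if minutes > target_min:
            break
        day += days_per_step
        minutes += step_min

for day, minutes in phone_free_schedule():
    print(f"From day {day}: go {minutes} minutes without checking your phone")
```

With the default settings this works out to five steps over about two weeks, ending at 75 phone-free minutes.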

Does it work? For now, there is not enough research on this subject to tell. Until a more scientific approach to studying the issue happens, you’re going to have to follow your gut. Mine is telling me we are all addicted already.

Cosmoso.net will be publishing more data as it is made available. Until then… put the phone down for a second, will ya?

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Light technologies illuminate global challenges


During these dark winter months, spare a thought for artificial lights. From strings of lights adding holiday cheer to artificial sunlamps alleviating seasonal affective disorder, they brighten our days. And light’s applications can go much further than that. The United Nations designated 2015 as the International Year of Light and Light-Based Technologies to raise awareness of how photonic technologies offer solutions to international challenges. Light technology is now an active area of research in energy, health and agriculture.

First lighting the way

Thomas Edison with some of his incandescent bulbs.

In the late 1800s, Thomas Edison created a practical light bulb: an electrically powered, long-lasting light source that significantly changed our work, play and sleep habits. The ability to control light in new ways transformed how we experience and see the world. Light-based technologies such as optical fiber networks allow us to connect rapidly with people worldwide over the internet. Light-emitting diodes (LEDs) are now everywhere, from consumer electronics like smartphones to light bulbs for home lighting.

CoeLux’s artificial skylight harnesses technology to mimic our most vital light source: the sun.
James Holloway, CC BY-NC

One recent example is the artificial skylight invented by researchers who spent over ten years refining the CoeLux system. This invention, which received the Lux Awards 2014 Light Source Innovation of the Year, can fill a room’s ceiling, mimicking sunlight from different latitudes, from the equator to northern Europe. The key to its success in replicating a sunny sky is the use of nanostructured materials to scatter light from LEDs in the same way tiny particles scatter sunlight in the atmosphere – so-called Rayleigh scattering. Funding for this project from the European Commission enabled scientific advances in light management and nanotechnology as well as the completion of a device that may improve quality of life in indoor settings, from hospitals to underground parking garages.
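The Rayleigh mechanism the system mimics has a simple quantitative signature: scattered intensity scales as the inverse fourth power of wavelength, which is why short (blue) wavelengths dominate scattered skylight. A minimal sketch of that scaling (the 450 nm and 650 nm wavelengths are illustrative values of mine, not figures from the CoeLux work):

```python
# Rayleigh scattering strength scales as 1 / wavelength**4, so blue light
# (shorter wavelength) scatters far more strongly than red.
def rayleigh_ratio(lambda_a_nm, lambda_b_nm):
    """Relative scattered intensity of wavelength a versus wavelength b."""
    return (lambda_b_nm / lambda_a_nm) ** 4

# Blue (~450 nm) versus red (~650 nm):
print(f"Blue scatters {rayleigh_ratio(450, 650):.1f}x more strongly than red")
```

That factor of roughly four is what gives a clear sky, and an artificial one, its blue cast.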

Blue LEDs were the missing link.
Pete Brown, CC BY

Illuminating research

Only recently has the full utility of LEDs been realized for general lighting. While red and green LEDs had been in commercial use for more than a decade, the missing color for producing white light was blue. Isamu Akasaki, Hiroshi Amano, and Shuji Nakamura cracked the blue conundrum in the early 1990s. Now, thanks to their work, white light LEDs are ubiquitous. In recognition of this energy-saving invention, they received the Nobel Prize in Physics last year.

Light was also recognized in the Nobel Prize category of Chemistry last year for light-based microscopy tools that use a few tricks to sense the presence of a single molecule. Microscopy had been limited by diffraction, where two adjacent objects can only be resolved if they are separated by more than half the wavelength of light used for imaging. But Nobel laureates Eric Betzig, Stefan Hell and W.E. Moerner all took different approaches using similar principles to get beyond the diffraction barrier in order to control the fluorescence of individual molecules to view them in high detail. By turning the light emitted from the molecules on or off, the scientists could reconstruct the location of the molecules at the nanometer scale.

Microscope images of human protein vimentin. Note the higher resolution on the right.
Fabian Göttfert, Christian Wurm, CC BY-SA

Here’s how it works: a fraction of fluorescent molecules or proteins is first excited by a weak light pulse. Then, after their emission fades, another subgroup of fluorescent molecules is excited. This cycle of on and off continues, and the images are then processed and superimposed to form a high-resolution map of individual proteins. The ability to peer into the nano-world of living cells to observe, for example, how proteins aggregate in the earliest stages of diseases like Alzheimer’s and Huntington’s has only just begun. Understanding disease progression at the single-molecule level could help identify when early intervention might be advantageous.
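The on/off cycle described above can be caricatured in a few lines of code. This is a toy simulation of the localization idea only, not the actual microscopy software; every number here (molecule positions, localization precision, subset size) is an illustrative assumption:

```python
import random

# Toy sketch of single-molecule localization: in each cycle only a sparse
# random subset of molecules fluoresces, each active molecule is localized
# with some finite precision, and the localizations accumulate into a map
# built up over many cycles.
random.seed(42)

# Hypothetical true molecule positions, in nanometres.
molecules = [(x, 50.0) for x in range(0, 500, 10)]

def localize(pos, precision_nm=20.0):
    """One noisy localization of an active molecule."""
    x, y = pos
    return (random.gauss(x, precision_nm), random.gauss(y, precision_nm))

localization_map = []
for cycle in range(200):
    # A weak pulse activates a small random subset; the rest stay dark,
    # so the active molecules are sparse enough to localize individually.
    active = random.sample(molecules, k=3)
    localization_map.extend(localize(m) for m in active)

print(f"Collected {len(localization_map)} localizations over 200 cycles")
```

Superimposing all the accumulated localizations reconstructs the molecule positions far more finely than any single diffraction-limited exposure could.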

Let there be light in the darkness.
martinak15, CC BY

Investors must see the light

Light is a unifying science across fields like chemistry and physics, improving our lives and the world. But learning how to manipulate light is costly and takes time. Technologies are largely built on investments in basic science research as well as, of course, serendipity and circumstantial opportunities. Take LEDs for example. Research in blue LEDs started more than 40 years ago at Radio Corporation of America, but changes in the company’s funding structure stymied their development for two decades — until last year’s Nobel Prize winners solved the materials problem and the scale-up process.

Continued and sustained support of fundamental research is critical for future technologies not yet imagined or seen but that could have a transformative impact on our daily lives. For example, in agriculture, more effective harvesting of solar energy and its conversion into heat via greenhouses could enable year-round production as well as access to crops not currently available in certain climates.

(Left) Cartoon of nanoparticle lasers. (Right) Electron microscopy image of an array of bow-tie nanolasers.
Teri Odom, CC BY-ND

In my own work as a chemistry researcher, my group invented a laser the size of a virus particle, a feat that traditional approaches to controlling light say should be impossible, but which is achievable thanks to metal nanoparticles that squeeze light into tiny volumes. These tiny lasers are promising light sources that can be used to send and receive data at high bandwidths as well as to detect trace molecules or bio-agents.

Construction of our nano-laser required precise control over the shape and location of the adjacent gold nanoparticles. That such nanostructures can be made at all is down to the electronics industry’s decades-long investment in the nanofabrication tools used to make the tiny components in computers. Investments in both fundamentals and applications are critical, as last year’s Nobel Prizes in Chemistry and Physics highlight.

The UN’s designation of this International Year of Light will spotlight the potential of these kinds of innovations and the need to continue investing in future technologies. From new ways to shake off those winter blues to manipulating light in small spaces, the trajectory for artificial light is bright indeed.

The Conversation

This article was originally published on The Conversation.
Read the original article.

How to deal with electronic waste? Make it a national security issue


We’re in the midst of fevered discussions about communications and security. Cybertarian campaigners want to stop collusion between corporations and governments to intercept citizen chat; attention-grabbing adolescents at Anonymous want to disrupt murderers who dislike mockery of their principal prophet; and the gilt-edged grown-ups in national security services want to listen in on plans to avenge such blasphemy.

But away from these dramatic debates over speech, privacy and the state, a less exciting conversation is underway, beyond the third-sector moralism of cybertarians, the attention span of adolescents, and the Olympian speechifying of spymasters. This conversation touches on security and communications in a less spectacular way.

Do you know what your old phone is up to?

Electronic waste (or e-waste) is the largest source of materials left in municipal dumps around the world.

A high proportion of it is derived from the gadgets you are reading this article on: phones, tablets, and computers, which quickly move from being vital tools of everyday life to discarded garbage once an upgrade becomes available. Where did that old fat-screen analogue television go when it was replaced by the slim, flat-screen digital version? Where are those phones you threw out?

The ghosts of computers past.
Francisco Delatorre, CC BY-NC-SA

A vast proportion of these deadly gizmos, with their lethal cocktails of carcinogenic gases and chemicals, end up being unsafely recycled by the poorest of the poor, the most vulnerable of the vulnerable. Pre-teen girls in Chinese and Indian villages are expert at the dangerous work of extracting recyclable minerals from our detritus.

Increasingly, of course, the trade in e-waste is domestic. Asian middle classes are booming and as keen as their so-called “Western” counterparts to fetishise the fresh and new by dumping the toxic and the old in the villages and bodies of the desperate. The result is horrendous disease, a poisoned water table, and drifting air pollution.

Because this trade occurs in what is politely known as “the informal sector” – translation: no taxes, benefits, health and safety protections, or retirement payments – it is difficult to trace the precise dimensions of e-waste. But the International Telecommunication Union, a technology booster if ever there was one, acknowledges the problem: annual e-waste reached a reported 53 million metric tonnes in 2013, with an additional 67 million metric tonnes of electronics sold in various forms.

And after recycling? E-waste ends up in some very interesting places.

E-waste in the military

Two years ago, the Pentagon and the US Senate Armed Services Committee reported that vital components of the nation’s military hardware routinely include “counterfeit” electronic materials, from China (70%), the UK (11%), and Canada (9%). The Pentagon defines counterfeiting as recycled materials wrongly sold as new, or misuse of others’ intellectual property.

Often these counterfeits come from recycled e-waste – the committee estimated more than a million counterfeit parts were in service in US planes. It’s no surprise much of this can be traced back to China, where counterfeiting operates at an industrial level, with factory floors populated by thousands of workers dedicated to the task.

Take the Boeing P8 Poseidon, a plane used by the US Navy to drop torpedoes, depth charges and carry surveillance equipment. In 2011, Boeing reported it had discovered a faulty ice detection system in the aircraft, according to the senate committee report. Further investigation revealed the part was previously used, and made to appear new.

After tracing the parts through companies in California and then Florida, it turned out the ice detection equipment had originally come from “an affiliate of A Access Electronics in Shenzhen, China.” And before that? Who knows? Investigators from the senate committee wanted to find out but were denied Chinese visas.

Poseidon: made in California. Or Florida. Eventually.
Tataquax, CC BY-SA

Now the story is in the news again. Forbes has just published a column on the topic, as did National Defense magazine, an obvious mouthpiece for the military-industrial complex. The principal beneficiaries of that complex are warning that this malfeasance continues unabated, and on a massive scale, despite 2012 legislation designed to quell it.

The senate committee report refers to “risks to national security and the safety of US military personnel” posed by this trade in counterfeit e-waste. There is no mention of the risks posed to people all over the world by the United States’ very use of matériel, of course.

But the fact that e-waste is on the agenda in such powerful quarters bodes well for real reform towards managing it properly.

The technology is available to recycle our electronic pleasures in a much safer way than is generally the case. In many instances, laws exist to mandate that. What we need is proper use of the technology and serious enforcement of such legislation. But beyond that, we need an end to the built-in obsolescence and advertising-fuelled clamour surrounding innovation.

It would help if the cybertarians, adolescents, and spies were able to let go of their shared obsessions for a moment and question the very devices and systems on which they rely for their self-anointed roles. Then they might recognise that the much-vaunted internet of things is also and equally an internet of junk.

The Conversation

This article was originally published on The Conversation.
Read the original article.