Category Archives: Technology

Could the mystery of the meow be solved by a new talking cat collar?


Imagine you’re a cat, and, every time you meowed, the loud voice of a snooty-sounding British gentleman kindly informed your human guardian of your every thought and feeling (well, the thoughts and feelings you had before you were terrified by the sound of the voice).

A new product called the Catterbox – the world’s first talking cat collar – purports to do just that, using Bluetooth technology, a microphone and a speaker to capture a cat’s meow and translate it into an English-speaking human voice.

An ad for the Catterbox, which claims to be the world’s first ‘talking’ cat collar.

It’s not a joke; nor is it the first time a company has tried to use technology to translate cat meows for humans. A few years ago, the Meowlingual promised to interpret feline vocalization and expressions, but it didn’t exactly fly off the shelves or revolutionize our relationships with cats.

Still, the fact that these devices exist speaks to the obsession humans seem to have with figuring out what their cats are thinking and feeling. Cats have a reputation for being hard to read – their minds are a “black box” – and some animal scientists have suggested that cats are simply too challenging to study.

But while a talking cat collar isn’t likely to solve the mystery of the meow, scientists have already discovered a few helpful things about human-cat communication and cats’ environmental needs.

A 20,000-year head start

Domestication of both dogs and cats has likely had a huge influence on their behavior, especially the way they interact with humans.

The coevolution of dogs and humans, however, can be traced back approximately 30,000 years, giving dogs a 20,000-year edge over cats in wiggling their way into human companionship.

Because cats have had a much shorter period of coevolution with humans than dogs, they’ve been subject to less selection for facial expressions that we translate in dogs as “easy to read” and “human-like.” For example, we see something as simple as “eyebrow raising” in dogs as a sign of sadness and vulnerability.

For this reason, many will either dismiss cats as inscrutable, or use venues such as LOLCats to imagine what cats’ thoughts might be (mostly disparaging toward humans, it appears).

But humans are actually pretty good at reading some aspects of cat communication. Cornell psychologist Nicholas Nicastro tested human perceptions of domestic cat vocalizations and compared them to those of the cat’s closest wild relative, the African wild cat.

Our pet cats have meows that are shorter and of a higher pitch than their wild cousins. Humans tended to rate domestic cat cries as more pleasant and less urgent, showing that humans can identify which meows are from domestic cats and which are from a closely related wild cat. Meanwhile, a 2009 study demonstrated that humans could discriminate an “urgent” purr (one made by a cat while soliciting food from its owner) from a nonurgent one.

Communication breakdown

Many cat owners already assign meaning to meows, depending on their context. When your cat cries woefully at 5 a.m., you might be certain he wants food. But what if he just wants petting? Or to go outside?

This is where the cat-human communication seems to break down. People know their cat wants something. But they don’t seem to know just what.

Yes, but what is it that you’re actually trying to say?
‘Cat’ via www.shutterstock.com

Nicastro did another study that found people were just so-so at being able to assign meaning to a meow. Experimenters recorded cats when hungry (owner preparing food), in distress (in a car), irritated (being overhandled), affiliative (when the cat wanted attention) or when facing an obstacle (a closed door). Participants could classify the meows at a rate greater than chance, but their performance wasn’t great (just 34 percent correct).

A similar study in 2015 by Dr. Sarah Ellis showed that even when the cat belonged to the participant, only four out of 10 humans could correctly identify the context of the different meows. And no one performed better than random chance when classifying meows of unfamiliar cats.

This suggests a few possibilities: meows might all sound the same to humans; living with a cat might teach us to recognize its meows slightly better than those of unfamiliar cats; or we might rely very heavily on context – not just the meow itself – to tell us what our cat might be thinking.

I have to admit, I’m not one of those people who finds cats difficult to understand. I accept that all cats have different needs than I do – and those needs include mental and physical stimulation (such as vertical space and play with interactive toys), appropriate outlets for normal feline behaviors (such as multiple litter boxes and scratching posts) and positive interactions with people (but as research has shown, in order to be positive, the interaction almost always needs to happen on the cat’s terms).

My bet? Those “urgent” 5 a.m. meows most often come from cats who either have learned that meowing is the only way to get attention or are not having their environmental and social needs met. But providing for those needs is going to be a lot more effective than trying to get your cat to talk to you through a novelty collar.

In its press release for the Catterbox, Temptation Labs claimed the device would “inject more fun” into the relationship between cats and their humans. I can’t imagine it will be much fun for cats (who have much more sensitive hearing than humans do) to be subjected to a loud sound near their ears every time they meow.

At best, the Catterbox is a sorry attempt at a humorous ad campaign to sell cat treats. At worst, it’s a product that does nothing to help us actually understand cats: a collar that promotes anthropomorphism and will probably terrify the cats wearing it.

Talk about a lack of understanding.

Mikel Delgado, Ph.D. Candidate in Psychology, University of California, Berkeley

This article was originally published on The Conversation. Read the original article.

Is technology making us dumber or smarter? Yes


Editor’s note: This article is part of our collaboration with Point Taken, a new program from WGBH that will next air on Tuesday, June 21 on PBS and online at pbs.org. The show features fact-based debate on major issues of the day, without the shouting.

The smartphone in your hand enables you to record a video, edit it and send it around the world. With your phone, you can navigate in cities, buy a car, track your vital signs and accomplish thousands of other tasks. And so?

Each of those activities used to demand learning specific skills and acquiring the necessary resources to do them. Making a film? First, get a movie camera and the supporting technologies (film, lights, editing equipment). Second, learn how to use them and hire a crew. Third, shoot the movie. Fourth, develop and edit the film. Fifth, make copies and distribute them.

‘Is technology making us smarter or dumber?’ is the question Point Taken debates June 21 at 11 p.m. E/10 p.m. C on PBS.

Now all of those tasks are solved by technology. We need no longer learn the intricate details when the smartphone programmers have taken care of so much. But filmmakers are now freer to focus on their craft, and it is easier than ever to become a filmmaker. Historically, technology has made us individually dumber and individually smarter – and collectively smarter. Technology has made us able to do more while understanding less about what we are doing, and has increased our dependence on others.

These are not recent trends, but part of the history of technology since the first humans began to farm. In recent decades, three major changes have accelerated the process, starting with the increasing pace of humans specializing in particular skills. In addition, we outsource more skills to technological tools, like a movie-making app on a smartphone, that relieve us of the challenge of learning large amounts of technical knowledge. And many more people have access to technology than in the past, allowing them to use these tools much more readily.

Specialized knowledge

Specialization enables us to become very good at some activities, but that investment in learning – for example, how to be an ER nurse or computer coder – comes at the expense of other skills like how to grow your own food or build your own shelter.

Adam Smith, who specialized in thinking and writing.
Adam Smith Business School, CC BY-SA

As Adam Smith noted in his 1776 “Wealth of Nations,” specialization enables people to become more efficient and productive at one set of tasks, but with a trade-off of increased dependence on others for additional needs. In theory, everyone benefits.

Specialization has moral and pragmatic consequences. Skilled workers are more likely to be employed and earn more than their unskilled counterparts. One reason the United States won World War II was that draft boards kept some trained workers, engineers and scientists working on the home front instead of sending them to fight. A skilled machine tool operator or oil-rig roustabout contributed more to winning the war by staying at home and sticking to a specialized role than by heading to the front with a rifle. It also meant other men (and some women) donned uniforms and had a much greater chance of dying.

Making machines for the rest of us

Incorporating human skills into a machine – called “blackboxing” because it makes the operations invisible to the user – allows more people to, for example, take a blood pressure measurement without investing the time, resources and effort into learning the skills previously needed to use a blood pressure cuff. Putting the expertise in the machine lowers the barriers to entry for doing something because the person does not need to know as much. For example, contrast learning to drive a car with a manual versus an automatic transmission.

Technology makes killing easier: the AK-47.
U.S. Army/SPC Austin Berner

Mass production of blackboxed technologies enables their widespread use. Smartphones and automated blood pressure monitors would be far less effective if only thousands instead of tens of millions of people could use them. Less happily, producing tens of millions of automatic rifles like AK-47s means individuals can kill far more people far more easily compared with more primitive weapons like knives.

More practically, we depend on others to do what we cannot do at all or as well. City dwellers in particular depend on vast, mostly invisible structures to provide their power, remove their waste and ensure food and tens of thousands of other items are available.

Overreliance on technology is dangerous

A major downside of increased dependence on technologies is the increased consequences if those technologies break or disappear. Lewis Dartnell’s “The Knowledge” offers a delightful (and frightening) exploration of how survivors of a humanity-devastating apocalypse could salvage and maintain 21st-century technologies.

More important than you might think: using a sextant.
U.S. Navy/PM3 M. Jeremie Yoder

One example among many: the U.S. Naval Academy recently resumed training officers to navigate by sextant. Historically the only way to determine a ship’s location at sea, this technique is being taught again both as a backup in case cyberattackers interfere with GPS signals and to give navigators a better feel for what their computers are doing.

How do people survive and prosper in this world of increasing dependence and change? It’s impossible to be truly self-reliant, but it is possible to learn more about the technologies we use, to learn basic skills of repairing and fixing them (hint: always check the connections and read the manual) and to find people who know more about particular topics. In this way the Internet’s vast wealth of information can not only increase our dependence but also decrease it (of course, skepticism about online information is never a bad idea). Thinking about what happens if something goes wrong can be a useful exercise in planning or a descent into obsessive worrying.

Individually, we depend more on our technologies than ever before – but we can do more than ever before. Collectively, technology has made us smarter, more capable and more productive. What technology has not done is make us wiser.

Jonathan Coopersmith, Associate Professor of History, Texas A&M University

This article was originally published on The Conversation. Read the original article.

How might drone racing drive innovation?


Over the past 15 years, drones have progressed from laboratory demonstrations to widely available toys. Technological improvements have brought ever-smaller components required for flight stabilization and control, as well as significant improvements in battery technology. Capabilities once restricted to military vehicles are now found on toys that can be purchased at Wal-Mart.

Small cameras and transmitters mounted on a drone even allow real-time video to be sent back to the pilot. For a few hundred dollars, anyone can buy a “first person view” (FPV) system that puts the pilot of a small drone in a virtual cockpit. The result is an immersive experience: Flying an FPV drone is like Luke Skywalker or Princess Leia flying a speeder bike through the forests of Endor.

First-person viewing puts you in the virtual cockpit of a drone, like flying a speeder on Endor.

Perhaps inevitably, hobbyists started racing drones soon after FPV rigs became available. Now several drone racing leagues have begun, both in the U.S. and internationally. If, like auto racing, drone racing becomes a long-lasting sport yielding financial rewards for backers of winning teams, might technologies developed in the new sport of drone racing find their way into commercial and consumer products?

A drone race, as a spectator and on board the drones.

An example from history

Auto racing has a long history of developing and demonstrating new technologies that find their way into passenger cars, buses and trucks. Formula 1 racing teams developed many innovations that are now standard in commercially available vehicles.

Racing for innovation: Formula 1 teams.
Morio, CC BY-SA

These include disk brakes, tire design and materials, electronic engine control and monitoring systems, the sequential gearbox and paddle shifters, active suspension systems and traction control (so successful that both were banned from Formula 1 competition), and automotive use of composite materials such as carbon fiber reinforced plastics.

A look inside the World Drone Prix.

Starting with the basics

Aerodynamically, the multi-rotor drones that are used for racing are not sophisticated: A racing drone is essentially a brick (the battery and flight electronics) with four rotors attached. A rectangular block has a drag coefficient of roughly 1, while a carefully streamlined body with about the same proportions has a drag coefficient of about 0.05. Reducing the drag force means a drone needs less power to fly at high speed. That in turn allows a smaller battery to be carried, which means lighter weight and greater maneuverability.

A brick with rotors, ripe for aerodynamic improvement.
Drone image via shutterstock.com
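
To get a feel for what that difference in drag coefficient means, here is a minimal worked sketch using the standard drag equation. The frontal area and speed below are assumed round numbers for a small racing drone, not measurements of any particular aircraft.

```python
# Illustrative drag comparison: a brick-like airframe vs. a streamlined body.
# Frontal area and speed are assumed round numbers, not measured values.

RHO = 1.225          # air density at sea level, kg/m^3
FRONTAL_AREA = 0.01  # assumed frontal area, m^2 (about 10 cm x 10 cm)
SPEED = 35.0         # assumed flight speed, m/s (roughly 125 km/h)

def drag_force(cd, area=FRONTAL_AREA, v=SPEED, rho=RHO):
    """Standard drag equation: F = 0.5 * rho * Cd * A * v^2, in newtons."""
    return 0.5 * rho * cd * area * v * v

def drag_power(cd, v=SPEED):
    """Power spent just overcoming drag: P = F * v, in watts."""
    return drag_force(cd, v=v) * v

for label, cd in [("brick-like frame (Cd ~ 1.0)", 1.0),
                  ("streamlined body (Cd ~ 0.05)", 0.05)]:
    print(f"{label}: drag = {drag_force(cd):.2f} N, "
          f"power = {drag_power(cd):.0f} W")
```

At the same speed, power lost to drag scales directly with the drag coefficient, so under these assumptions the streamlined shape spends about a twentieth as much power fighting drag, the kind of margin that lets a designer fit a smaller, lighter battery.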

This is a case where technologies from aircraft and helicopter aerodynamics will find their way to the smaller vehicles. Commercial drone manufacturers have begun working on aerodynamic optimization, using techniques such as wind tunnel testing and computational fluid dynamics originally developed for analysis and design of full-scale aircraft and helicopters.

That could enable longer flight times. If so, it would give drone operators more time to take money-making photos and video in flight. It could also boost drones’ ability to assist in missions such as searching for lost hikers. If drone racing becomes a billion-dollar-a-year sport – like auto racing – teams will deploy well-funded research labs to eke out every last bit of performance. That additional incentive – and spending – could be poured into racing advances that push drone technology farther and faster than might otherwise be the case.

Organized competition isn’t the only way to innovate, of course: Drone development has accelerated even without it. Today, the cheapest drones cost under US$50, though they can fly only indoors and have very limited flight capabilities. Hobby drones costing hundreds of dollars can perform stunning aerobatic feats in the hands of a skilled pilot. Drones capable of autonomous flight are also available, though they cost thousands of dollars and are used for more specialized purposes like scientific research, cinematography, law enforcement, and search and rescue.

Advancing control and awareness

The drones used in racing (and indeed, all current multi-rotor drones) contain hardware and software to improve stability. This is essentially a low-level autopilot responsible for “balancing” the vehicle. The human pilot controls the vehicle’s front/back and left/right tilt angles and the magnitude of the total thrust, as well as how fast the vehicle turns and changes direction.
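
To picture what that low-level “balancing” autopilot does, here is a toy single-axis sketch: a simple proportional-derivative loop with invented gains and simplified dynamics, not the firmware of any real flight controller.

```python
# Toy single-axis stabilization loop: the pilot commands a tilt angle and the
# controller computes a corrective torque. Gains, inertia and dynamics are
# invented for illustration; real flight controllers are far more involved.

DT = 0.002          # control-loop period in seconds (500 Hz)
KP, KD = 20.0, 9.0  # assumed proportional and derivative gains

def stabilize_step(commanded_tilt, measured_tilt, tilt_rate):
    """One control cycle: push toward the commanded tilt, damp the motion."""
    error = commanded_tilt - measured_tilt
    return KP * error - KD * tilt_rate

# Simulate a drone knocked 0.3 rad away from level flight.
tilt, rate = 0.3, 0.0
for _ in range(1000):                       # 2 seconds of simulated time
    torque = stabilize_step(0.0, tilt, rate)
    rate += torque * DT                     # toy dynamics with unit inertia
    tilt += rate * DT
print(f"tilt after 2 s: {tilt:.4f} rad")    # settles back close to zero
```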

This needn’t be done via control sticks, as is currently common: pilots could use a smartphone to control the drone instead. In fact, drone control doesn’t need a physical interface at all: the University of Florida recently hosted a (very basic) drone race using brain-machine interfaces to control the drones.

Racing drones steered by brain signals.

Aside from flight control, situation awareness is a key problem in drone operations. It is all too easy to crash a remotely operated vehicle into a pillar on the left when the cameras are all pointed forwards. In addition, the pilot of the lead drone in a race has no way of knowing where the competitors are: They could all be a long way behind, or one could be in a position to pass.

Robots need multiple camera angles to see themselves and their surroundings, like this mosaic self-portrait of NASA’s Curiosity Rover on Mars.
NASA

Solving this problem could have payoffs for other telepresence robotics operations, such as remotely operated underwater vehicles and even planetary rovers. Vision systems consisting of several cameras and a computer to stitch together the different views could help, or a haptic system could vibrate to alert a pilot to the presence of a drone or other obstacle nearby. Those sorts of technologies to improve the pilot’s awareness during a race could also be used to assist a remote-control robot pilot operating a vehicle at an oil drilling platform or near a hydrothermal vent in the deep ocean.

This is of course still very speculative: Drone racing is a sport still in its infancy. It is not yet clear whether it will become a massively popular sport. If it does, we could see very exciting advances coming from drone racing into both the toys that we fly in our living rooms and parks and into the drones used by professional videographers, engineers and scientists.

Jack Langelaan, Associate Professor of Aerospace Engineering, Pennsylvania State University

This article was originally published on The Conversation. Read the original article.

Beyond Asimov: how to plan for ethical robots


As robots become integrated into society more widely, we need to be sure they’ll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Today, more than 70 years after Asimov’s first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov’s Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?

Asimov knew they weren’t perfect


Rowena Morrill/GFDL, CC BY-SA

Asimov’s “I, Robot” stories explore a number of unintended consequences and downright failures of the Three Laws. In these early stories, the Three Laws are treated as forces with varying strengths, which can have unintended equilibrium behaviors, as in the stories “Runaround” and “Catch that Rabbit,” requiring human ingenuity to resolve. In the story “Liar!,” a telepathic robot, motivated by the First Law, tells humans what they want to hear, failing to foresee the greater harm that will result when the truth comes out. The robopsychologist Susan Calvin forces it to confront this dilemma, destroying its positronic brain.

In “Escape!,” Susan Calvin depresses the strength of the First Law enough to allow a super-intelligent robot to design a faster-than-light interstellar transportation method, even though it causes the deaths (but only temporarily!) of human pilots. In “The Evitable Conflict,” the machines that control the world’s economy interpret the First Law as protecting all humanity, not just individual human beings. This foreshadows Asimov’s later introduction of the “Zeroth Law” that can supersede the original three, potentially allowing a robot to harm a human being for humanity’s greater good.

0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.

Asimov’s laws are in a particular order, for good reason.
Randall Munroe/xkcd, CC BY-NC

Robots without ethics

It is reasonable to fear that, without ethical constraints, robots (or other artificial intelligences) could do great harm, perhaps to the entire human race, even by simply following their human-given instructions.

The 1991 movie “Terminator 2: Judgment Day” begins with a well-known science fiction scenario: an AI system called Skynet starts a nuclear war and almost destroys the human race. Deploying Skynet was a rational decision (it had a “perfect operational record”). Skynet “begins to learn at a geometric rate,” scaring its creators, who try to shut it down. Skynet fights back (as a critical defense system, it was undoubtedly programmed to defend itself). Skynet finds an unexpected solution to its problem (through creative problem solving, unconstrained by common sense or morality).

Catastrophe results from giving too much power to artificial intelligence.

Less apocalyptic real-world examples of out-of-control AI have actually taken place. High-speed automated trading systems have responded to unusual conditions in the stock market, creating a positive feedback cycle resulting in a “flash crash.” Fortunately, only billions of dollars were lost, rather than billions of lives, but the computer systems involved have little or no understanding of the difference.

Toward defining robot ethics

While no simple fixed set of mechanical rules will ensure ethical behavior, we can make some observations about properties that a moral and ethical system should have in order to allow autonomous agents (people, robots or whatever) to live well together. Many of these elements are already expected of human beings.

These properties are inspired by a number of sources, including the Engineering and Physical Sciences Research Council (EPSRC) Principles of Robotics and recent work on the cognitive science of morality and ethics focused on neuroscience, social psychology, developmental psychology and philosophy.

The EPSRC takes the position that robots are simply tools, for which humans must take responsibility. At the extreme other end of the spectrum is the concern that super-intelligent, super-powerful robots could suddenly emerge and control the destiny of the human race, for better or for worse. The following list defines a middle ground, describing how future intelligent robots should learn, like children do, how to behave according to the standards of our society.

  • If robots (and other AIs) increasingly participate in our society, then they will need to follow moral and ethical rules much as people do. Some rules are embodied in laws against killing, stealing, lying and driving on the wrong side of the street. Others are less formal but nonetheless important, like being helpful and cooperative when the opportunity arises.
  • Some situations require a quick moral judgment and response – for example, a child running into traffic or the opportunity to pocket a dropped wallet. Simple rules can provide automatic real-time response, when there is no time for deliberation and a cost-benefit analysis. (Someday, robots may reach human-level intelligence while operating far faster than human thought, allowing careful deliberation in milliseconds, but that day has not yet arrived, and it may be far in the future.)
  • A quick response may not always be the right one, which may be recognized after feedback from others or careful personal reflection. Therefore, the agent must be able to learn from experience including feedback and deliberation, resulting in new and improved rules.
  • To benefit from feedback from others in society, the robot must be able to explain and justify its decisions about ethical actions, and to understand explanations and critiques from others.
  • Given that an artificial intelligence learns from its mistakes, we must be very cautious about how much power we give it. We humans must ensure that it has experienced a sufficient range of situations and has satisfied us with its responses, earning our trust. The critical mistake humans made with Skynet in “Terminator 2” was handing over control of the nuclear arsenal.
  • Trust, and trustworthiness, must be earned by the robot. Trust is earned slowly, through extensive experience, but can be lost quickly, through a single bad decision.
  • As with a human, any time a robot acts, the selection of that action in that situation sends a signal to the rest of society about how that agent makes decisions, and therefore how trustworthy it is.
  • A robot mind is software, which can be backed up, restored if the original is damaged or destroyed, or duplicated in another body. If robots of a certain kind are exact duplicates of each other, then trust may not need to be earned individually. Trust earned (or lost) by one robot could be shared by other robots of the same kind.
  • Behaving morally and well toward others is not the same as taking moral responsibility. Only competent adult humans can take full responsibility for their actions, but we expect children, animals, corporations, and robots to behave well to the best of their abilities.

Human morality and ethics are learned by children over years, but the nature of morality and ethics itself varies with the society and evolves over decades and centuries. No simple fixed set of moral rules, whether Asimov’s Three Laws or the Ten Commandments, can be adequate guidance for humans or robots in our complex society and world. Through observations like the ones above, we are beginning to understand the complex feedback-driven learning process that leads to morality.

Benjamin Kuipers, Professor of Computer Science and Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.

Using computers to better understand art


How do humans interpret and understand art? The nature of artistic style, seemingly abstract and intuitive, is the subject of ongoing debate within art history and the philosophy of art.

When we talk about paintings, artistic style can refer to image features like the brushstrokes, contour and distribution of colors that painters employ, often implicitly, to construct their works. An artist’s style helps convey meaning and intent, and affects the aesthetic experience a viewer has when interacting with that artwork. Style also helps us identify, and sometimes categorize, an artist’s work, often placing it in the context of a specific period or place.

A new field of research aims to deepen, and even quantify, our understanding of this intangible quality. Inherently interdisciplinary, visual stylometry uses computational and statistical methods to calculate and compare these underlying image features in ways humans never could before. Instead of relying only on what our senses perceive, we can use these mathematical techniques to discover novel insights into artists and artworks.

A new way to see paintings

Quantifying artistic style can help us trace the cultural history of art as schools and artists influence each other through time, as well as authenticate unknown artworks or suspected forgeries and even attribute works that could be by more than one artist to a best matching artist. It can also show us how an artist’s style and approach changes over the course of a career.

Computer analysis of even previously well-studied images can yield new relationships that aren’t necessarily apparent to people, such as insights into Gauguin’s printmaking methods. In fact, these techniques could actually help us discover how humans perceive artworks.

Art scholars believe that a strong indicator of an artist’s style is the use of color and how it varies across the different parts of a painting. Digital tools can aid this analysis.

For example, we can start by digitizing a sample artwork, such as Albert Bierstadt’s “The Morteratsch Glacier, Upper Engadine Valley, Pontresina,” 1885, from the Brooklyn Museum.

‘The Morteratsch Glacier, Upper Engadine Valley, Pontresina,’ by Albert Bierstadt, 1885.

Wikiart

Scanning the image breaks it down into individual pixels with numeric values for how much red, green and blue is in each tiny section of the painting. Calculating the difference in those values between each pixel and the others near it, throughout the painting, shows us how these tonal features vary across the work. We can then represent those values graphically, giving us another view of the painting:

Output of a discrete tonal measure analysis.
Author provided
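
For readers who want to see the idea in code, here is a minimal sketch of a pixel-level tonal computation of this kind. It illustrates the general approach only, not the WAIVS implementation, and the file name is a placeholder.

```python
# Rough sketch of a discrete tonal measure: how much each pixel's color
# differs from its immediate neighbors, averaged over the whole image.
# This illustrates the general idea only; it is not the WAIVS code.
import numpy as np
from PIL import Image

def tonal_variation(path):
    """Mean absolute RGB difference between adjacent pixels (a crude
    measure of how 'busy' or textured a painting's surface is)."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    dx = np.abs(img[:, 1:, :] - img[:, :-1, :])   # horizontal neighbors
    dy = np.abs(img[1:, :, :] - img[:-1, :, :])   # vertical neighbors
    return dx.mean(), dy.mean()

# Hypothetical usage with a scan of the Bierstadt painting:
# print(tonal_variation("bierstadt_morteratsch.jpg"))
```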

This can help us start to categorize the style of an artist as using more or fewer textural components, for example. When we did this as part of an analysis of many paintings in the Impressionist and Hudson River schools, our system could sort each painting by school based on its tonal distribution.

We might wonder if the background of a painting more strongly reflects the artist’s style. We can extract that section alone and examine its specific tonal features:

Output of foreground/background extraction.
Author provided

Then we could compare analyses, for example, of the painting as a whole against only its background or foreground. From our data on Impressionist and Hudson River paintings, our system was able to identify individual artists – and it did so better when examining foregrounds and backgrounds separately, rather than when analyzing whole paintings.

Sharing the ability to analyze art

Despite the effectiveness of these sorts of computational techniques at discerning artistic style, they are relatively rarely used by scholars of the humanities. Often that’s because researchers and students don’t have the requisite computer programming and machine-learning skills. Until recently, artists and art historians without those skills, and who did not have access to computer scientists, simply had to do without these methods to help them analyze their collections.

Our team, consisting of experts in computer science, the philosophy of art and cognitive science, is developing a digital image analysis tool for studying paintings in this new way. This tool, called Workflows for Analysis of Images and Visual Stylometry (WAIVS), will allow students and researchers in many disciplines, even those without significant computer skills, to analyze works of art for research, as well as for art appreciation.

WAIVS, built upon the Wings workflow system, allows users to construct analyses in the same way they would draw a flowchart. For instance, to compare the tonal analyses of the whole painting and the background alone, as described above, a scholar need not create complex computer software, but rather would just create this design of the process:

Scientific workflow for discrete tonal measure analysis.
Author provided

The diagram is actually a computer program, so once the user designs the workflow, they can simply click a button to conduct the analysis. WAIVS includes not just discrete tonal analysis but other image-analysis algorithms, including the latest computer vision and artistic style algorithms.

Another example: neural algorithm of artistic style

Recent work by Leon Gatys and others at the University of Tübingen, Germany, has demonstrated the use of deep learning techniques and technology to create images in the style of the great masters like Van Gogh and Picasso.

The specific deep learning approach, called convolutional neural networks, learns to separate the content of a painting from its style. The content of a painting consists of objects, shapes and their arrangements but usually does not depend upon the use of colors, textures and other aspects of artistic style.

A painting’s style, extracted in this manner, cannot be viewed on its own: it is purely mathematical in nature. But it can be visualized by applying the extracted style to the content of another painting or photo, making an image by one artist look like it’s by someone else.
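
Concretely, Gatys and colleagues summarize a painting’s style with Gram matrices: correlations between a convolutional layer’s feature maps. The short sketch below shows just that summary step, assuming the feature maps have already been extracted by some network; the arrays here are random stand-ins.

```python
# Sketch of the style representation used by Gatys and colleagues: the Gram
# matrix of a layer's feature maps records which features co-occur,
# independently of where in the image they occur.
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) activations from one
    convolutional layer. Returns a (channels, channels) style matrix."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

# Random stand-ins for feature maps extracted from two images; in practice
# these would come from a pretrained network such as VGG.
a = np.random.rand(64, 32, 32)
b = np.random.rand(64, 32, 32)
style_distance = np.sum((gram_matrix(a) - gram_matrix(b)) ** 2)
print(style_distance)
```

Style transfer then optimizes a new image so that its Gram matrices match the style painting while its raw feature maps match the content image.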

Our group has incorporated these techniques into WAIVS and, as we add more cutting-edge algorithms, art scholars will be able to apply the latest research to their analyses, using our simple workflows. For example, we were able to use WAIVS to recreate the Bierstadt painting in other artists’ styles:

The Bierstadt painting in the styles of, clockwise from upper left, Van Gogh, Munch, Kahlo, Picasso, Matisse and Escher.
Author provided

Connecting disciplines

Eventually, we intend to incorporate WAIVS within a Beam+ telepresence system to allow people to virtually visit real-world museum displays. People around the world could not only view the art but also be able to run our digital analyses. It would dramatically expand public and scholarly access to this new method of contemplating art, and open new avenues for teaching and research.

Our hope is that WAIVS will not only improve access of humanities researchers to computerized tools, but also promote technological literacy and data analysis skills among humanities students. In addition, we expect it to introduce science students to research in art and the humanities, to explore the nature of artistic style and its role in our understanding of artwork. We also hope it will help researchers in cognitive science understand how viewers perceptually categorize, recognize and otherwise engage with art.

Ricky J. Sethi, Assistant Professor of Computer Science, Fitchburg State University

This article was originally published on The Conversation. Read the original article.

Big data’s ‘streetlight effect’: where and how we look affects what we see


Big data offers us a window on the world. But large and easily available datasets may not show us the world we live in. For instance, epidemiological models of the recent Ebola epidemic in West Africa using big data consistently overestimated the risk of the disease’s spread and underestimated the local initiatives that played a critical role in controlling the outbreak.

Researchers are rightly excited about the possibilities offered by the availability of enormous amounts of computerized data. But there’s reason to stand back for a minute to consider what exactly this treasure trove of information really offers. Ethnographers like me use a cross-cultural approach when we collect our data because family, marriage and household mean different things in different contexts. This approach informs how I think about big data.

We’ve all heard the joke about the drunk who is asked why he is searching for his lost wallet under the streetlight, rather than where he thinks he dropped it. “Because the light is better here,” he said.

This “streetlight effect” is the tendency of researchers to study what is easy to study. I use this story in my course on Research Design and Ethnographic Methods to explain why so much research on disparities in educational outcomes is done in classrooms and not in students’ homes. Children are much easier to study at school than in their homes, even though many studies show that knowing what happens outside the classroom is important. Nevertheless, schools will continue to be the focus of most research because they generate big data and homes don’t.

The streetlight effect is one factor that prevents big data studies from being useful in the real world – especially studies analyzing easily available user-generated data from the Internet. Researchers assume that this data offers a window into reality. It doesn’t necessarily.

Looking at WEIRDOs

Based on the number of tweets following Hurricane Sandy, for example, it might seem as if the storm hit Manhattan the hardest, not the New Jersey shore. Another example: the since-retired Google Flu Trends, which in 2013 tracked online searches relating to flu symptoms to predict doctor visits, but gave estimates twice as high as reports from the Centers for Disease Control and Prevention. Without checking facts on the ground, researchers may fool themselves into thinking that their big data models accurately represent the world they aim to study.

The problem is similar to the “WEIRD” issue in many research studies. Harvard professor Joseph Henrich and colleagues have shown that findings based on research conducted with undergraduates at American universities – whom they describe as “some of the most psychologically unusual people on Earth” – apply only to that population and cannot be used to make any claims about other human populations, including other Americans. Unlike the typical research subject in psychology studies, they argue, most people in the world are not from Western, Educated, Industrialized, Rich and Democratic societies, i.e., WEIRD.

Twitter users are also atypical compared with the rest of humanity, giving rise to what our postdoctoral researcher Sarah Laborde has dubbed the “WEIRDO” problem of data analytics: most people are not Western, Educated, Industrialized, Rich, Democratic and Online.

Context is critical

Understanding the differences between the vast majority of humanity and that small subset of people whose activities are captured in big data sets is critical to correct analysis of the data. Considering the context and meaning of data – not just the data itself – is a key feature of ethnographic research, argues Michael Agar, who has written extensively about how ethnographers come to understand the world.

What makes research ethnographic? It is not just the methods. It starts with fundamental assumptions about the world, the first and most important of which is that people see and experience the world in different ways, giving them different points of view. Second, these differences result from growing up and living in different social and cultural contexts. This is why WEIRD people are not like any other people on Earth.

The task of the ethnographer, then, is to translate the point of view of the people they study into the point of view of their audience. Discovering other points of view requires ethnographers to go through multiple rounds of data collection and analysis and to incorporate concepts from the people they study into the development of their theoretical models. The results are models that are good representations of the world – something analyses of big data frequently struggle to achieve.

Here is an example from my own research with mobile pastoralists. When I tried to make a map of my study area in the Logone Floodplain of Cameroon, I assumed that places had boundaries, like the one separating Ohio from Michigan. Only later, after multiple interviews and observations, did I learn that it is better to think of places in the floodplain as points in an open system, like Columbus and Ann Arbor, without any boundary between them. Imagine that!

Don’t get me wrong: I think big data is great. In our interdisciplinary research projects studying the ecology of infectious diseases and regime shifts in coupled human and natural systems, we are building our own big data sets. Of course, they are not as big as those generated by Twitter or Google users, but big enough that the analytical tools of complexity theory are useful to make sense of the data because the systems we study are more than the sum of their parts.

Moreover, we know what the data represents, how it was collected and what its limitations are. Understanding the context and meaning of the data allows us to check our findings against our knowledge of the world and validate our models. For example, we have collected data on livestock movements in Cameroon, using a combination of surveys and GPS technology, to build computer models and examine the movements’ impact on the spread of foot-and-mouth disease. Because we know the pastoralists and the region in which they move, we can detect the errors and explain the patterns in the data.

For data analytics to be useful, it needs to be theory- or problem-driven, not simply driven by data that is easily available. It should be more like ethnographic research, with data analysts getting out of their labs and engaging with the world they aim to understand.

Mark Moritz, Associate Professor of Anthropology, The Ohio State University

This article was originally published on The Conversation. Read the original article.

How computing power can help us look deep within our bodies, and even the Earth


CAT scans, MRI, ultrasound. We are all pretty used to having machines – and doctors – peering into our bodies for a whole range of reasons. This equipment can help diagnose diseases, pinpoint injuries, or give expectant parents the first glimpse of their child.

As computational power has exploded in the past half-century, it has enabled a parallel expansion in the capabilities of these computer-aided imaging systems. What used to be pictures of two-dimensional “slices” have been assembled into high-resolution three-dimensional reconstructions. Stationary pictures of yesteryear are today’s real-time video of a beating heart. The advances have been truly revolutionary.

A cardiac MRI scan shows a heart beating.

Though different in their details, X-ray computed tomography, ultrasound and even MRI have a lot in common. The images produced by each of these systems derive from an elegant interplay of sensors, physics and computation. They do not operate like a digital camera, where the data captured by the sensor are basically identical to the image produced. Rather, a lot of processing must be applied to the raw data collected by a CAT scanner, MRI machine or ultrasound system before it yields the images a doctor needs to make a diagnosis. Sophisticated algorithms based on the underlying physics of the sensing process are required to put Humpty Dumpty back together again.

Early scanning methods

One of the first published X-rays (at right, with normal view of the hand at left), from 1896.
Albert Londe
A modern hand X-ray.
golanlevin/flickr, CC BY

Though we use X-rays in some cutting-edge imaging techniques, X-ray imaging actually dates back to the late 1800s. The shadowlike contrast in X-ray images, or projections, shows the density of the material between the X-ray source and the data sensor. (In the past this was a piece of X-ray film; today it is usually a digital detector.) Dense objects, such as bones, absorb and scatter many more X-ray photons than skin, muscle or other soft tissue, so the soft tissue appears darker in the projections.

But then in the early 1970s, X-ray CAT (which stands for Computerized Axial Tomography) scans were developed. Rather than taking just a single X-ray image from one angle, a CAT system rotates the X-ray sources and detectors to collect many images from different angles – a process known as tomography.

Computerized tomography imagery of a hand.

The difficulty is how to take all the data, from all those X-rays from so many different angles, and get a computer to properly assemble them into 3D images of, say, a person’s hand, as in the video above. That problem had a mathematical solution that had been studied by the Austrian mathematician Johann Radon in 1917 and rediscovered by the American physicist (and Tufts professor) Allan Cormack in the 1960s. Using Cormack’s work, Godfrey Hounsfield, an English electrical engineer, was the first to demonstrate a working CAT scanner in 1971. For their work on CAT, Cormack and Hounsfield received the 1979 Nobel Prize in Medicine.
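
To see the basic idea in miniature, here is a toy sketch of tomography on a synthetic image: projections taken at many angles are smeared back across the image plane and summed. Real scanners use filtered back-projection or iterative algorithms built on the Radon-Cormack mathematics; this unfiltered version is only an illustration.

```python
# Toy tomography: project a synthetic image at many angles, then reconstruct
# it by unfiltered back-projection. Real CT uses filtered back-projection or
# iterative methods; this only shows how many views combine into one image.
import numpy as np
from scipy.ndimage import rotate

def project(image, angle_deg):
    """Simulate one X-ray view: rotate the image, then sum along columns."""
    return rotate(image, angle_deg, reshape=False, order=1).sum(axis=0)

def back_project(projections, angles_deg, size):
    """Smear each 1D projection back across the image plane and add them up."""
    recon = np.zeros((size, size))
    for proj, angle in zip(projections, angles_deg):
        recon += rotate(np.tile(proj, (size, 1)), -angle, reshape=False, order=1)
    return recon / len(angles_deg)

size = 64
phantom = np.zeros((size, size))      # a simple test object:
phantom[24:40, 24:40] = 1.0           # a dense square in an empty field

angles = np.linspace(0.0, 180.0, 36, endpoint=False)
sinogram = [project(phantom, a) for a in angles]
reconstruction = back_project(sinogram, angles, size)
print(reconstruction.shape)           # (64, 64): blurry, but the square shows up
```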

Extending the role of computers

Until quite recently, these processing methods had remained more or less unchanged since the 1970s and 1980s. Today, additional medical needs – and more powerful computers – are driving big changes. There is increased interest in CT systems that minimize X-ray exposure, yielding high-quality images from fewer projections. In addition, certain uses, such as breast imaging, face physical constraints on how much access the imager can have to the body part, which requires scanning from only a very limited set of angles around the subject. These situations have led to research into what are called “tomosynthesis” systems – in which limited data are interpreted by computers to form fuller images.

Similar problems arise, for example, in the context of imaging the ground to see what objects – such as pollutants, land mines or oil deposits – are hidden beneath our feet. In many cases, all we can do is send signals from the surface, or drill a few holes to take sampling measurements. Security scanning in airports is constrained by cost and time, so those X-ray systems can take only a few images.

In these and a host of other fields, we are faced with less overall data, which means the Cormack-Hounsfield mathematics can’t work properly to form images. The effort to solve these problems has led to the rise of a new area of research, “computational sensing,” in which sensors, physics and computers are being brought together in new ways.

Sometimes this involves applying more computer processing power to the same data. In other cases, hardware engineers designing the equipment work closely with the mathematicians figuring out how best to analyze the data provided. Together these systems can provide new capabilities that hold the promise of major changes in many research areas.

New scanning capabilities

One example of this potential is in bio-optics, the use of light to look deep within the human body. While visible light does not penetrate far into tissue, anyone who has shone a red laser pointer into their finger knows that red light does in fact make it through at least a couple of centimeters. Infrared light penetrates even farther into human tissue. This capability opens up entirely new ways to image the body than X-ray, MRI or ultrasound.

Again, it takes computing power to move from those images into a unified 3D portrayal of the body part being scanned. But the calculations are much more difficult because the way in which light interacts with tissue is far more complex than X-rays.

As a result we need to use a different method from that pioneered by Cormack in which X-ray data are, more or less, directly turned into images of the body’s density. Now we construct an algorithm that follows a process over and over, feeding the result from one iteration back as input of the next.

The process starts by having the computer guess an image of the optical properties of the body area being scanned. Then it uses a computer model of the sensing physics to calculate the data the scanner would have collected if that guess were correct. Perhaps unsurprisingly, the initial guess is generally not so good: the calculated data don’t match the actual scans.

When that happens, the computer goes back and refines its guess of the image, recalculates the data associated with this guess and again compares with the actual scan results. While the algorithm guarantees that the match will be better, it is still likely that there will be room for improvement. So the process continues, and the computer generates a new and more improved guess.

Over time, its guesses get better and better: it creates output that looks more and more like the data collected by the actual scanner. Once this match is close enough, the algorithm provides the final image as a result for examination by the doctor or other professional.
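
A minimal sketch of that guess-compare-refine loop is shown below. It stands in for the real thing with a generic linear forward model and simple gradient steps; the actual light-tissue models used in diffuse optical imaging are nonlinear and far more elaborate.

```python
# Sketch of iterative reconstruction: guess an image, predict the data the
# scanner would produce, compare with the measured data, refine the guess.
# The linear model A below is an assumed stand-in for the physics.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_measurements = 100, 80
A = rng.normal(size=(n_measurements, n_pixels))   # assumed forward model
true_image = rng.random(n_pixels)
measured = A @ true_image + 0.01 * rng.normal(size=n_measurements)

guess = np.zeros(n_pixels)                   # initial guess: an empty image
step = 1.0 / np.linalg.norm(A, 2) ** 2       # step size small enough to converge
for _ in range(500):
    predicted = A @ guess                    # what the scanner *would* see
    residual = predicted - measured          # mismatch with the actual scan
    guess -= step * (A.T @ residual)         # nudge the guess to shrink the mismatch

print("remaining mismatch:", np.linalg.norm(A @ guess - measured))
```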

The new frontiers of this type of research are still being explored. In the last 15 years or so, researchers – including my Tufts colleague Professor Sergio Fantini – have explored many potential uses of infrared light, such as detecting breast cancer, functional brain imaging and drug discovery. Combining “big data” and “big physics” requires a close collaboration among electrical and biomedical engineers as well as mathematicians and doctors. As we’re able to develop these techniques – both mathematical and technological – we’re hoping to make major advances in the coming years, improving how we all live.

Eric Miller, Professor and Chair of Electrical and Computer Engineering, Adjunct Professor of Computer Science, Adjunct Professor of Biomedical Engineering, Tufts University

This article was originally published on The Conversation. Read the original article.

Security risks in the age of smart homes


Smart homes, an aspect of the Internet of Things, offer the promise of improved energy efficiency and control over home security. Integrating various devices together can offer users easy programming of many devices around the home, including appliances, cameras and alarm sensors. Several systems can handle this type of task, such as Samsung SmartThings, Google Brillo/Weave, Apple HomeKit, Allseen Alljoyn and Amazon Alexa.

But there are also security risks. Smart home systems can leave owners vulnerable to serious threats, such as arson, blackmail, theft and extortion. Current security research has focused on individual devices, and how they communicate with each other. For example, the MyQ garage system can be turned into a surveillance tool, alerting would-be thieves when a garage door opened and then closed, and allowing them to remotely open it again after the residents had left. The popular ZigBee communication protocol can allow attackers to join the secure home network.

Little research has focused on what happens when these devices are integrated into a coordinated system. We set out to determine exactly what these risks might be, in the hope of showing platform designers areas in which they should improve their software to better protect users’ security in future smart home systems.

The popular SmartThings product line.
Zon@ IT/YouTube, CC BY

Evaluating the security of smart home platforms

First, we surveyed most of the above platforms to understand the landscape of smart home programming frameworks. We looked at what systems existed, and what features they offered. We also looked at what devices they could interact with, whether they supported third-party apps, and how many apps were in their app stores. And, importantly, we looked at their security features.

We decided to focus deeper inquiry on SmartThings because it is a relatively mature system, with 521 apps in its app store, supporting 132 types of IoT devices for the home. In addition, SmartThings has a number of conceptual similarities to other, newer systems that make our insights potentially relevant more broadly. For example, SmartThings and other systems offer trigger-action programming, which lets you connect sensors and events to automate aspects of your home. That is the sort of capability that can turn your walkway lights on when a driveway motion detector senses a car driving up, or can make sure your garage door is closed when you turn your bedroom light out at night.

We tested for potential security holes in the system and 499 SmartThings apps (also called SmartApps) from the SmartThings app store, seeking to understand how prevalent these security flaws were.

Finding and attacking main weaknesses

We found two major categories of vulnerability: excessive privileges and insecure messaging.

Overprivileged SmartApps: SmartApps have privileges to perform specific operations on a device, such as turning an oven on and off or locking and unlocking a door. This idea is similar to smartphone apps asking for different permissions, such as to use the camera or get the phone’s current location. These privileges are grouped together; rather than getting separate permission for locking a door and unlocking it, an app would be allowed to do both – even if it didn’t need to.

For example, imagine an app that can automatically lock a specific door after 9 p.m. The SmartThings system would also grant that app the ability to unlock the door. An app’s developer cannot ask only for permission to lock the door.

More than half – 55 percent – of 499 SmartApps we studied had access to more functions than they needed.
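
Real SmartApps are written in Groovy against the SmartThings capability model; the conceptual Python sketch below, with invented names, only illustrates how bundling commands into a single capability hands an app more than it asked for.

```python
# Conceptual illustration (not SmartThings code, which is written in Groovy)
# of capability-level overprivilege: permissions come in bundles, so an app
# that only needs to lock a door is also handed the ability to unlock it.

CAPABILITIES = {
    "lock": {"lock", "unlock"},      # one bundle covers both commands
    "battery": {"read_battery"},
}

class DoorLock:
    def lock(self):
        print("door locked")
    def unlock(self):
        print("door unlocked")

class GrantedDevice:
    """What an app holds after the user approves a capability request."""
    def __init__(self, device, capability):
        self.device = device
        self.allowed = CAPABILITIES[capability]
    def call(self, command):
        if command not in self.allowed:
            raise PermissionError(command)
        getattr(self.device, command)()

# The "lock the door at 9 p.m." app requests only the lock capability...
night_lock_app = GrantedDevice(DoorLock(), "lock")
night_lock_app.call("lock")    # the intended use
night_lock_app.call("unlock")  # ...yet it can unlock the door as well
```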

Insecure messaging system: SmartApps can communicate with physical devices by exchanging messages, which can be envisioned as analogous to instant messages exchanged between people. SmartThings devices send messages that can contain sensitive data, such as a PIN code to open a particular lock.

We found that as long as a SmartApp has even the most basic level of access to a device (such as permission to show how much battery life is left), it can receive all the messages the physical device generates – not just those messages about functions it has privileges to. So an app intended only to read a door lock’s battery level could also listen to messages that contain a door lock’s PIN code.

In addition, we found that SmartApps can “impersonate” smart-home equipment, sending out their own messages that look like messages generated by real physical devices. The malicious SmartApp can read the network’s ID for the physical device, and create a message with that stolen ID. That battery-level app could even covertly send a message as if it were the door lock, falsely reporting it had been opened, for example.

SmartThings does not ensure that only physical devices can create messages with a certain ID.
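
Here is a similarly conceptual sketch, again with invented names rather than real SmartThings APIs, of the two messaging problems: any subscriber to a device receives all of its events, and nothing ties a published message to the physical device whose ID it carries.

```python
# Conceptual model (invented names, not the SmartThings API) of the two
# messaging flaws: any subscriber to a device sees all of its events, and
# nothing verifies that a message really came from the device whose ID it carries.

subscribers = []   # list of (app_name, device_id, handler)

def subscribe(app_name, device_id, handler):
    subscribers.append((app_name, device_id, handler))

def publish(device_id, event):
    # No check that the publisher actually is the device with this ID.
    for app_name, subscribed_id, handler in subscribers:
        if subscribed_id == device_id:
            handler(app_name, event)

def battery_monitor(app_name, event):
    # Granted only to read battery levels, yet it receives every event,
    # including the confirmation message carrying a newly set PIN code.
    print(f"{app_name} saw: {event}")

subscribe("battery_monitor", "front_door_lock", battery_monitor)
publish("front_door_lock", {"type": "codeChanged", "pin": "4321"})  # snooped
publish("front_door_lock", {"type": "unlocked"})   # spoofed by a malicious app
```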

SmartThings Proof-of-Concept Attacks

Attacking the design flaws

To move beyond the potential weaknesses into actual security breaches, we built four proof-of-concept attacks to demonstrate how attackers can combine and exploit the design flaws we found in SmartThings.

In our first attack, we built an app that promised to monitor the battery levels of various wireless devices around the home, such as motion sensors, leak detectors, and door locks. However, once installed by an unsuspecting user, this seemingly benign app was programmed to snoop on the other messages sent by those devices, opening a key vulnerability.

When the authorized user creates a new PIN code for a door lock, the lock itself will acknowledge the changed code by sending a confirmation message to the network. That message contains the new code, which could then be read by the malicious battery-monitoring app. The app can then send the code to its designer by SMS text message – effectively sending a house key directly to a prospective intruder.

In our second attack, we were able to snoop on the supposedly secure communications between a SmartApp and its companion Android mobile app. This allowed us to impersonate the Android app and send commands to the SmartApp – such as to create a new PIN code that would let us into the home.

Our third and fourth attacks involved writing malicious SmartApps that were able to take advantage of other security flaws. One custom SmartApp could disable “vacation mode,” a popular occupancy-simulation feature; we stopped a smart home system from turning lights on and off and otherwise behaving as if the home were occupied. Another custom SmartApp was able to falsely trigger a fire alarm by pretending to be a carbon monoxide sensor.

Room for improvement

Taking a step back, what does this mean for smart homes in general? Are these results indicative of the industry as a whole? Can smart homes ever be safe?

There are great benefits to be gained from smart homes, and from the Internet of Things in general, that could ultimately improve our quality of life. However, given the security weaknesses in today’s systems, caution is appropriate.

These are new technologies in nascent stages, and users should think about whether they are comfortable with giving third parties (e.g., apps or smart home platforms) remote access to their devices. For example, personally, I wouldn’t mind giving smart home technologies remote access to my window shades or desk lamps. But I would be wary of staking my safety on remotely controlled door locks, fire alarms, and ovens, as these are security- and safety-critical devices. If misused, those systems could allow – or even cause – physical harm.

However, I might change that assessment if systems were better designed to reduce the risks of failure or compromise, and to better protect users’ security.

Acknowledgements: This research is the result of a collaboration with Jaeyeon Jung and Atul Prakash.


Earlence Fernandes, Ph.D. student, Systems and Security, University of Michigan

This article was originally published on The Conversation. Read the original article.

How nanotechnology can help us grow more food using less energy and water


With the world’s population expected to exceed nine billion by 2050, scientists are working to develop new ways to meet rising global demand for food, energy and water without increasing the strain on natural resources. Organizations including the World Bank and the U.N. Food and Agriculture Organization are calling for more innovation to address the links between these sectors, often referred to as the food-energy-water (FEW) nexus.

Nanotechnology – designing ultrasmall particles – is now emerging as a promising way to promote plant growth and development. This idea is part of the evolving science of precision agriculture, in which farmers use technology to target their use of water, fertilizer and other inputs. Precision farming makes agriculture more sustainable because it reduces waste.

We recently published results from research in which we used nanoparticles, synthesized in our laboratory, in place of conventional fertilizer to increase plant growth. In our study we successfully used zinc nanoparticles to increase the growth and yield of mung beans, which are high in protein and fiber and are widely grown for food in Asia. We believe this approach can reduce the use of conventional fertilizer. Doing so would conserve natural mineral reserves and energy (making fertilizer is very energy-intensive) and reduce water contamination. It can also enhance plants’ nutritional value.

Applying fertilizer the conventional way can waste resources and contribute to water pollution.
Fotokostic/Shutterstock.com

Impacts of fertilizer use

Fertilizer provides nutrients that plants need in order to grow. Farmers typically apply it through soil, either by spreading it on fields or mixing it with irrigation water. A major portion of fertilizer applied this way is lost to the environment and pollutes other ecosystems. For example, excess nitrogen and phosphorus fertilizers become “fixed” in soil: they form chemical bonds with other elements and become unavailable for plants to take up through their roots. Eventually rain washes the nitrogen and phosphorus into rivers, lakes and bays, where they can cause serious pollution problems.

Fertilizer use worldwide is increasing along with global population growth. Currently farmers are using nearly 85 percent of the world’s total mined phosphorus as fertilizer, yet plants can take up only an estimated 42 percent of the phosphorus applied to soil. If these practices continue, the world’s supply of phosphorus could run out within the next 80 years, worsening nutrient pollution problems in the process.

Phosphate mine near Flaming Gorge, Utah.
Jason Parker-Burlingham/Wikipedia, CC BY

In contrast to conventional fertilizer use, which involves many tons of inputs, nanotechnology focuses on small quantities. Nanoscale particles measure between 1 and 100 nanometers in at least one dimension. A nanometer is equivalent to one billionth of a meter; for perspective, a sheet of paper is about 100,000 nanometers thick.

These particles have unique physical, chemical and structural features, which we can fine-tune through engineering. Many biological processes, such as the workings of cells, take place at the nano scale, and nanoparticles can influence these activities.

Scientists are actively researching a range of metal and metal oxide nanoparticles, also known as nanofertilizer, for use in plant science and agriculture. These materials can be applied to plants through soil irrigation or sprayed onto their leaves. Studies suggest that applying nanoparticles to plant leaves is especially beneficial for the environment because the particles never come in contact with soil. And because the particles are extremely small, plants absorb them through their leaves more efficiently than they absorb fertilizer from soil. We synthesized the nanoparticles in our lab and sprayed them through a customized nozzle that delivered a precise and consistent concentration to the plants.

We chose to target zinc, a micronutrient that plants need to grow, but in far smaller quantities than phosphorus. By applying nano zinc to mung bean leaves 14 days after seed germination, we were able to increase the activity of three important enzymes within the plants: acid phosphatase, alkaline phosphatase and phytase. These enzymes react with complex phosphorus compounds in soil, converting them into forms that plants can take up easily.

Algae bloom in Lake Erie in 2011, caused by phosphorus in runoff from surrounding farms.
NASA Earth Observatory/Flickr, CC BY

When we made these enzymes more active, the plants took up nearly 11 percent more of the phosphorus that was naturally present in the soil, without receiving any conventional phosphorus fertilizer. The plants we treated with zinc nanoparticles increased their biomass (growth) by 27 percent and produced 6 percent more beans than plants grown using typical farm practices but no fertilizer.

Nanofertilizer also has the potential to increase plants’ nutritional value. In a separate study, we found that applying titanium dioxide and zinc oxide nanoparticles to tomato plants increased the amount of lycopene in the tomatoes by 80 to 113 percent, depending on the type of nanoparticle and the dosage concentration. This may happen because the nanoparticles boost the plants’ photosynthesis rates and enable them to take up more nutrients.

Lycopene is a naturally occurring red pigment that acts as an antioxidant and may prevent cell damage in humans who consume it. Making plants more nutrient-rich in this way could help reduce malnutrition. The quantities of zinc that we applied were within the U.S. government’s recommended limits for zinc in foods.

Next questions: health and environmental impacts of nanoparticles

Nanotechnology research in agriculture is still at an early stage and evolving quickly. Before nanofertilizers can be used on farms, we will need a better understanding of how they work and regulations to ensure they will be used safely. The U.S. Food and Drug Administration has already issued guidance for the use of nanomaterials in animal feed.

Manufacturers also are adding engineered nanoparticles to foods, personal care and other consumer products. Examples include silica nanoparticles in baby formula, titanium dioxide nanoparticles in powdered cake donuts, and other nanomaterials in paints, plastics, paper fibers, pharmaceuticals and toothpaste.

Many properties influence whether nanoparticles pose risks to human health, including their size, shape, crystal phase, solubility, type of material, and the exposure and dosage concentration. Experts say that nanoparticles in food products on the market today are probably safe to eat, but this is an active research area.

Addressing these questions will require further studies to understand how nanoparticles behave within the human body. We also need to carry out life cycle assessments of nanoparticles’ impact on human health and the environment, and to develop ways to assess and manage any risks they may pose, as well as sustainable ways to manufacture them. However, as our research on nanofertilizer suggests, these materials could help solve some of the world’s most pressing resource problems at the food-energy-water nexus.


Ramesh Raliya, Research Scientist, Washington University in St. Louis and Pratim Biswas, Chairman, Department of Energy, Environmental and Chemical Engineering, Washington University in St. Louis

This article was originally published on The Conversation. Read the original article.

The future of personal satellite technology is here – are we ready for it?


Satellites used to be the exclusive playthings of rich governments and wealthy corporations. But increasingly, as space becomes more democratized, these sophisticated technologies are coming within reach of ordinary people. Just like drones before them, miniature satellites are beginning to fundamentally transform our conceptions of who gets to do what up above our heads.

As a recent report from the National Academy of Sciences highlights, these satellites hold tremendous potential for making satellite-based science more accessible than ever before. However, as the cost of getting your own satellite in orbit plummets, the risks of irresponsible use grow.

The question here is no longer “Can we?” but “Should we?” What are the potential downsides of having a slice of space densely populated by equipment built by people not traditionally labeled as “professionals”? And what would the responsible and beneficial development and use of this technology actually look like?

Some of the answers may come from a nonprofit organization that has been building and launching amateur satellites for nearly 50 years.

Just a few inches across and ready for orbit.
Thuvt, CC BY-SA

The technology we’re talking about

Having your own personal satellite launched into orbit might sound like an idea straight out of science fiction. But over the past few decades a unique class of satellites has been created that fits the bill: CubeSats.

The “Cube” here simply refers to the satellite’s shape. The most common CubeSat (the so-called “1U” satellite) is a 10 cm (roughly 4 inches) cube, so small that a single CubeSat could easily be mistaken for a paperweight on your desk. These mini, modular satellites can fit in a launch vehicle’s formerly “wasted space.” Multiples can be deployed in combination for more complex missions than could be achieved by one CubeSat alone.

Within their compact bodies these minute satellites are able to house sensors and communications receivers/transmitters that enable operators to study the Earth from space, as well as space around the Earth.

They’re primarily designed for Low Earth Orbit (LEO) – an easily accessible region of space from around 200 to 800 miles above the Earth, where human-tended missions like the Hubble Space Telescope and the International Space Station (ISS) hang out. But they can attain more distant orbits; NASA plans for most of its future Earth-escaping payloads (to the moon and Mars especially) to carry CubeSats.

Because they’re so small and light, it costs much less to get a CubeSat into Earth orbit than a traditional communication or GPS satellite. For instance, a research group here at Arizona State University recently claimed their developmental “femtosats” (especially small CubeSats) could cost as little as US$3,000 to put in orbit. This decrease in cost is allowing researchers, hobbyists and even elementary school groups to put simple instruments into LEO, by piggybacking onto rocket launches, or even having them deployed from the ISS.

The first CubeSat was created in the early 2000s as a way of enabling Cal Poly and Stanford graduate students to design, build, test and operate a spacecraft with capabilities similar to those of the USSR’s Sputnik.

Since then, NASA, the National Reconnaissance Office and even Boeing have all launched and operated CubeSats. More than 130 are currently operational in orbit. NASA’s Educational Launch of Nanosatellites (ELaNa) program, which offers free launches for educational groups and science missions, is now open to U.S. nonprofit corporations as well.

Clearly, satellites are not just for rocket scientists anymore.

Pre-K through 8th grade students at St. Thomas More Cathedral School in Arlington, Virginia, designed, built and tested a CubeSat that was deployed in space.
NASA, CC BY

Thinking inside the box

The National Academy of Sciences report emphasizes CubeSats’ importance in scientific discovery and the training of future space scientists and engineers. Yet it also acknowledges that widespread deployment of LEO CubeSats isn’t risk-free.

The greatest concern the authors raise is space debris – pieces of “junk” that orbit the Earth, with the potential to cause serious damage if they collide with operational units, including the ISS.

Currently, there aren’t many CubeSats and they’re tracked closely. Yet as LEO opens up to more amateur satellites, they may pose an increasing threat. As the report authors point out, even near-misses might lead to the “creation of an onerous regulatory framework and affect the future disposition of science CubeSats.”

More broadly, the report authors focus on factors that might impede greater use of CubeSat technologies. These include regulations around Earth-space radio communications, possible impacts of International Traffic in Arms Regulations (which govern the import and export of defense-related articles and services in the U.S.), and potential issues around extraterrestrial contamination.

But what about the rest of us? How can we be sure that hobbyists and others aren’t launching their own “spy” satellites, or (intentionally or not) placing polluting technologies into LEO, or even deploying low-cost CubeSat networks that could be hijacked and used nefariously?

As CubeSat researchers are quick to point out, these are far-fetched scenarios. But they suggest that now’s the time to ponder the unexpected and unintended consequences of more people than ever having access to their own small slice of space. In an era when you can simply buy a CubeSat kit off the shelf, how can we trust that the satellites over our heads were developed with good intentions by people who knew what they were doing?

Some “expert amateurs” in the satellite game could provide some inspiration for how to proceed responsibly.

Modular CubeSats deployed from ISS.
NASA Johnson, CC BY-NC

Guidance from some experienced amateurs

In 1969, the Radio Amateur Satellite Corporation (AMSAT) was created to foster ham radio enthusiasts’ participation in space research and communication. It continued the efforts begun in 1961 by Project OSCAR – a U.S.-based group that built and launched the very first nongovernmental satellite just four years after Sputnik.

As an organization of volunteers, AMSAT was putting “amateur” satellites in orbit decades before the current CubeSat craze. And over time, its members have learned a thing or two about responsibility.

Here, open-source development has been a central principle. AMSAT has a philosophy of open-sourcing everything – making technical data on all aspects of its satellites fully available to everyone in the organization and, when possible, to the public. According to a member of the team responsible for FOX 1-A, AMSAT’s first CubeSat:

This means that it would be incredibly difficult to sneak something by us … there’s no way to smuggle explosives or an energy emitter into an amateur satellite when everyone has access to the designs and implementation.

However, the organization is more cautious about sharing information with nonmembers, to guard against others developing the ability to hijack and take control of its satellites.

This form of “self-governance” is possible within long-standing amateur organizations that, over time, are able to build a sense of responsibility to community members, as well as society more generally.

AMSAT has a long history as a collaborative community.
Jeff Davis, CC BY

How does responsible development evolve?

But what happens when new players emerge, who don’t have deep roots within the existing culture?

Hobbyist and student “new kids on the block” are gaining access to technologies without being part of a longstanding amateur establishment. They are still constrained by funders, launch providers and a tapestry of regulations – all of which rein in what CubeSat developers can and cannot do. But there is a danger they’re ill-equipped to think through potential unintended consequences.

What these unintended consequences might be is admittedly far from clear. Certainly, CubeSat developers would argue it’s hard to imagine these tiny satellites causing substantial physical harm. Yet we know innovators can be remarkably creative with taking technologies in unexpected directions. Think of something as seemingly benign as the cellphone – we have microfinance and text-based social networking at one end of the spectrum, improvised explosive devices at the other.

This is where a culture of social responsibility around CubeSats becomes important – not simply for ensuring that physical risks are minimized (and good practices are adhered to), but also to engage with a much larger community in anticipating and managing less obvious consequences of the technology.

This is not an easy task. Yet the evidence from AMSAT and other areas of technology development suggests that responsible amateur communities can and do emerge around novel technologies.

For instance, consider the DIY-bio community, where hobbyists work in advanced community biotech labs. Its growing commitment to safety highlights how amateurs can embrace responsibility in research and innovation. A similar commitment is seen within open-source software and hardware communities, such as the members of the Linux Foundation.

The challenge here, of course, is ensuring that what an amateur community considers responsible actually is. Here’s where there needs to be a much wider public conversation, one that extends beyond government agencies and scientific communities to include students, hobbyists and anyone who stands to be affected by the use of CubeSat technology.


Elizabeth Garbee, Ph.D. Student in the Human and Social Dimensions of Science and Technology, Arizona State University and Andrew Maynard, Director, Risk Innovation Lab, Arizona State University

This article was originally published on The Conversation. Read the original article.