Category Archives: Design

The Computer of the Future is…. Vague.


Quantum Computer prototypes make mistakes. It’s in their nature. Can redundancy correct them?

Quantum memory promises speed combined with energy efficiency. If made viable, it will be used in phones, laptops and other devices, giving us all faster, more trustworthy tech that requires less power to operate. Before we see it applied, though, the hardware needs redundant memory cells to check and double-check its own errors.

All indications show quantum tech is poised to usher in the next round of truly revolutionary devices, but first scientists must solve the problem of the memory cells saving the wrong answer. Quantum physicists must redesign the circuitry that exploits quantum behavior. The current memory cell is called a qubit. The qubit takes advantage of quantum mechanics to transfer data at an almost instantaneous rate, but the data is sometimes corrupted with errors. The qubit is vulnerable to errors because it is physically sensitive to small changes in its environment. It has been difficult to solve this problem because it is a hardware issue, not a software design issue. The lab of UC Santa Barbara physics professor John Martinis is dedicated to finding a workaround that can move forward without tackling the underlying errors. They are working on a self-correcting qubit.

The latest design developed at Martinis' lab is quantum circuitry that repeatedly self-checks for errors and suppresses statistical mistakes by saving data to multiple qubits, giving the overall system the kind of reliability we've come to expect from non-quantum digital computers. Since an error-free qubit looked like a distant hurdle as recently as last week, this advance suggests we are closer to a far-reaching breakthrough than anyone expected.

Julian Kelly, a grad student and co-lead author of the paper published in Nature, puts it this way:

“One of the biggest challenges in quantum computing is that qubits are inherently faulty so if you store some information in them, they’ll forget it.”

Bit flipping is the problem du jour in smaller, faster computers.

Last week I wrote about a hardware design problem called bit flipping, in which a classic, non-quantum computer has this same problem of unreliable data. In an effort to make a smaller DRAM chip, designers created an environment where the field around one bit storage location could be strong enough to actually change the value of the bit storage location next to it. You can read about that design flaw, and the hackers who proved it could be exploited to gain system admin privileges on otherwise secure servers, here.

Bit flipping also applies to this issue in quantum computing. Quantum computers don't just save information in binary ("yes/no" or "true/false") positions. Qubits can be in any or even all positions at once, because they store value across multiple states. This is called "superposition," and it is the very reason quantum computers have the computational prowess they do, but ironically the same characteristic makes qubits prone to bit flipping. Merely being near other atoms and energy transfer is enough to create an unstable environment and, thus, unreliable data storage.

“It’s hard to process information if it disappears.” ~ Julian Kelly.

Along with Rami Barends, staff scientist Austin Fowler and others in the Martinis Group, Julian Kelly is developing a data storage scheme in which several qubits work in conjunction to redundantly preserve information. Information is stored across several qubits in a chip that is hard-wired to check for the odd-man-out error. So while each individual qubit is unreliable, the chip itself can be trusted to store data for longer and with fewer (hopefully no) errors.

It isn't a new idea, but this is the first time it has been applied. The device they designed is small in terms of data storage, but it works as designed: it corrects its own errors. The vision we all have of a working quantum computer able to process a sick amount of data in an impressively short time? That will require something in the neighborhood of a hundred million qubits, and each of those qubits will need to redundantly self-check to prevent errors.

Austin Fowler spoke to Phys.org about the firmware embedded in this new quantum error detection system, calling it the surface code. It relies on measuring changes between a duplicate and the original bit, as opposed to simply comparing copies of the same information. This measurement of change rather than comparison of duplicates is called parity recognition, and it is unique to quantum data storage. The original information preserved in the qubits goes unobserved, which is a key aspect of quantum data.
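For readers who want to see the flavor of parity checking without any quantum hardware, here is a purely classical toy in Python: a three-cell repetition code that measures parity between neighboring cells instead of reading the stored value, then repairs whichever cell is the odd man out. This is not the Martinis group's firmware or the surface code itself, just a sketch of why redundancy plus parity beats a single unprotected cell; the flip probability and cycle count are made up for illustration.

```python
import random

FLIP_PROB = 0.01   # invented per-cycle flip chance for each cell
CYCLES = 200

def noisy_step(cells):
    """Environmental noise: each cell may randomly flip this cycle."""
    return [c ^ 1 if random.random() < FLIP_PROB else c for c in cells]

def parity_correct(cells):
    """Measure parity between neighboring cells (never the values themselves)
    and flip whichever single cell the syndrome points at."""
    syndrome = (cells[0] ^ cells[1], cells[1] ^ cells[2])
    if syndrome == (1, 0):
        cells[0] ^= 1
    elif syndrome == (1, 1):
        cells[1] ^= 1
    elif syndrome == (0, 1):
        cells[2] ^= 1
    return cells

def survives(protected):
    """Store logical bit 1 across three cells and see if it survives."""
    cells = [1, 1, 1]
    for _ in range(CYCLES):
        cells = noisy_step(cells)
        if protected:
            cells = parity_correct(cells)
    return max(set(cells), key=cells.count) == 1   # majority vote at readout

trials = 1000
print("without correction:", sum(survives(False) for _ in range(trials)) / trials)
print("with correction:   ", sum(survives(True) for _ in range(trials)) / trials)
```

With these made-up numbers, the protected copy comes through intact in the overwhelming majority of runs, while the unprotected one degrades to roughly a coin flip. The real device does this with quantum states and far subtler error types, but the logic of "check the parity, never the data" is the same.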

“You can’t measure a quantum state, and expect it to still be quantum,” explained Barends.

As in any discussion of quantum physics, the act of observation has the power to change the value of the bit. To truly duplicate the data the way classical computing does in error detection, the bit would have to be examined, which in and of itself could cause a bit flip, corrupting the original bit. The device developed at Martinis' UC Santa Barbara lab sidesteps this by measuring only the parity between qubits, never the stored values themselves.

This project is a groundbreaking way of uniting physical and theoretical quantum computing, because it pairs a physical qubit chip with a logic circuit that applies quantum theory as an algorithm. The result, a viable way of storing data, shows that several otherwise untested quantum theories are real and not just logically sound. Ideas in quantum theory that have been pondered for decades are now proven to work in the real world!

What happens next?

Phase flips:

Martinis' lab will continue its tests in an effort to refine and develop this approach. While bit-flip errors seem to have been solved with this new design, there is another type of error, not found in classical computing, that has yet to be addressed: the phase flip. Phase flips deserve a whole other article, and until quantum physicists solve them there is no rush for the layman to understand them.

Stress tests:

The team is also currently running the error correction cycle for longer and longer periods while monitoring the device's integrity and behavior to see what happens. Suffice it to say, there are a few more types of errors lurking than this breakthrough might suggest.

Corporate sponsorship:

As if there were any doubt about funding… Google has approached the Martinis lab and offered its support in an effort to speed up the day when quantum computers stomp into the mainstream.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Examining The Apple iPhone Planned Obsolescence Conspiracy


Apple has the money and the know-how… are they making your old iPhone suck through planned obsolescence just to force you into the checkout line for a new one?

Planned obsolescence isn't just a conspiracy theory. You can read the 1932 pamphlet widely considered the origin of the concept, here. The argument in favor of it is its effect on the economy; more products being produced and sold means an active, thriving market. Of course, there is an obvious ethical problem with selling people a product that won't continue to work as it should for as long as it should. Several companies openly admit they do it. For Apple, it works like this: whenever a new iPhone comes out, the previous model gets buggy, slow and unreliable. Apple dumps money into a new, near-perfect ad campaign, and the entire first world and beyond irrationally feels silly for not already owning one, even before it's available. Each release brings a more expensive iPhone with capabilities the last one can't touch. This is already a great marketing plan, and I'm not criticizing Apple's ability to pull it off as described. The problem is planned obsolescence: some iPhone owners notice the older model craps out on them JUST as the newest iPhone hits the retail shops. Apple has the money and the know-how… are they making your old iPhone suck just to force you into the checkout line for a new one?

Full disclosure, I'm biased: I owned an iPhone long enough to live through a new product release, and mine did, indeed, crap out as described above. Slow, buggy, and unreliable it was. With that anecdote under my belt I might be satisfied to call this e-rumor totally true, but in the interest of science I collected further evidence. I combed the message boards to see who had good points and who was just another internet nutjob with a stupid theory. To examine the evidence, I'm gonna start with this fact:

Fact 1: Apple's product announcements and new product releases come at regular intervals. So if old iPhones stop working correctly at that same interval, there would be a coinciding pattern. The tricky part is finding the data, but the pattern of release dates is a good place to start because it is so clear. Other companies could be pulling this type of fuckery, but it would be harder to track. Not only does Apple time their releases, they do it at a faster pace than most. New iPhones tend to come out once a year, but studies show people keep their phones for about 2-3 years if they are not prompted or coerced into purchasing a newer model.

Fact 2: Yes, it's possible. There are plenty of ways the company could slow or disable last year's iPhone. It could happen via an automatic download that can't be opted out of, such as an "update" from the company. Apple could ship iPhones with pre-programmed software that can't be accessed through any of the usual menus on the phone. There could even be a hardware component that decays or changes based on the average amount of use, or a combination of these methods. The thing is, so many people jailbreak iPhones that it seems like someone would catch malicious software. There are some protocols that force updates, though. Hmmm.

Fact 3: They've been accused of doing this with every new release since the iPhone 4 came out. This really doesn't look like an accident, guys. This 2013 article in the New York Times Magazine by Catherine Rampell describes her personal anecdote, which, incidentally, is exactly the way my iPhone failed me. When Catherine contacted Apple tech support, they informed her that the iOS 7 platform didn't work as well on older phones, which led her to wonder why the phones automatically installed the operating system upgrade in the first place.

Earlier on the timeline, Apple released iOS 4 offering features that were new and hot in 2010: a tap-to-focus camera, multitasking and faster image loading. The previous-generation iPhone 3G was still one of the most popular phones in the country at the time, but after the update it suddenly didn't work right, crashing and becoming too slow to be useful.

The iOS 4 release made that older iPhone so sluggish it was basically garbage, and Apple appeared to realize the potential loss of loyalty and toned things down. The pattern of buggy, slow products remained, though. When iOS 7 came out in 2013, the complaints were all over the internet, and people started to feel very sure Apple was doing it on purpose.

Fact 4: Google Trends shows telltale spikes in complaints that line up with the release dates. The New York Times (2014) called this one, publishing the Google Trends data showing searches for "iphone slow" spiking around each launch. Google Trends even forecasts further spikes, because the pattern is just that obvious:

Does Apple Ruin Your iPhone on Purpose? The Conspiracy, Explained
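If you'd rather check the pattern yourself than squint at a chart, the little Python sketch below measures how many days separate each complaint spike from the nearest iPhone launch. The launch dates are the approximate, publicly known US release dates; the spike dates are placeholders standing in for an actual Google Trends export, so treat the output as an illustration of the method, not as evidence.

```python
from datetime import date

# Approximate US iPhone launch dates (public record).
releases = {
    "iPhone 4":  date(2010, 6, 24),
    "iPhone 4S": date(2011, 10, 14),
    "iPhone 5":  date(2012, 9, 21),
    "iPhone 5s": date(2013, 9, 20),
    "iPhone 6":  date(2014, 9, 19),
}

# Hypothetical peak weeks for "iphone slow" searches. In practice you would
# export these from Google Trends; these placeholders only illustrate the check.
complaint_spikes = [
    date(2010, 6, 27),
    date(2011, 10, 16),
    date(2012, 9, 23),
    date(2013, 9, 22),
    date(2014, 9, 21),
]

for spike in complaint_spikes:
    # Find the launch closest in time to each complaint spike.
    name, launch = min(releases.items(), key=lambda kv: abs((spike - kv[1]).days))
    gap = abs((spike - launch).days)
    print(f"spike on {spike}: {gap} days from the {name} launch ({launch})")
```

If the real Trends export produced gaps of a few days year after year, that would be the "coinciding pattern" Fact 1 was asking for; large or random gaps would undercut it.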

Apple has a very loyal customer base, though. Rene Ritchie wrote for iMore that this planned obsolescence argument is "sensational," a campaign of "misinformation" by people who don't actually understand how great an iPhone really is (barf). Even though the motive is crystal clear, the argument that Apple is innocent isn't complete nonsense, either: Apple ruining iPhones could damage customer loyalty. People espousing this argument claim an intentional slowdown is less likely than plain old incompatibility between new software features and old hardware. That latter point is a good one, considering how almost all software makers have a hard time fitting new software to old operating systems. Cooler software usually needs faster hardware, and for some ridiculous reason no one has ever come out with an appropriately customizable smartphone; Apple would likely be the last on that list.

Christopher Mims pointed out on Quartz that "There is no smoking gun here, no incriminating memo" showing an intentional slowdown on Apple's part.

There is really no reason to believe Apple would be against this kind of thing, even if planned obsolescence were a happy accident for the mega-corporation. Basically, if this is happening by accident it’s even better for Apple because they don’t have to take responsibility and it likely helps push the new line. Apple is far from deserving the trustworthy reputation they’ve cultivated under Steve Jobs, as the glitzy marketing plan behind the pointless new Apple Watch demonstrates.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

“Rowhammering” Attack Gives Hackers Admin Access


A piece of code can actually manipulate the physical memory chip by repeatedly accessing nearby capacitors in a burgeoning new hack called rowhammering. Rowhammer hacking is so new that no one has seen it used in the wild yet. Google's Project Zero security initiative figured out how to exploit a physical property of some types of DDR memory chips. The hack can give a user increased system rights regardless of their untrusted status. Any Intel-compatible PC with these chips and running Linux is vulnerable, in theory. Project Zero pulled it off in the lab, but it isn't exactly something to panic about unless you are doing both of those things: using vulnerable DDR3 DRAM and running Linux.

A lot of readers might be susceptible to this security hack, but most won't want to read the technical details. If you are interested, you can check out the Project Zero blog piece about it. The security flaw is in a specific kind of chip: DRAM, or dynamic random-access memory. The chip is supposed to simply store information in the form of bits saved on a series of capacitors. The hack works by switching the value of bits stored in DDR3 memory modules known as DIMMs; DRAM is the style of chip, and each DIMM module houses several DRAM chips. Hackers researching on behalf of Project Zero designed a program to repeatedly access sections of data stored on the vulnerable DRAM until the statistical odds of one or more nearby bits changing value when they shouldn't became a reality.

This kind of hack was only theoretical until 2014, when scientists proved this kind of "bit flipping" is completely possible. Repeatedly accessing an area of a specific DIMM can become so reliable as to let the hacker predict how the contents stored in that section of memory will change. Last Monday (March 9th, 2015), Project Zero demonstrated exactly how a piece of software can translate this flaw into an effective security attack.

“The thing that is really impressive to me in what we see here is in some sense an analog- and manufacturing-related bug that is potentially exploitable in software,” David Kanter, senior editor of the Microprocessor Report, told Ars. “This is reaching down into the underlying physics of the hardware, which from my standpoint is cool to see. In essence, the exploit is jumping several layers of the stack.”

Why it’s called Rowhammering.

The memory in a DDR-style chip is configured in an array of rows and columns. Each row is grouped with others into large blocks which handle the accessible memory for a specific application, including the memory resources used to run the operating system. There is a security feature called a "sandbox," designed to protect data integrity and ensure the overall system stays secure; a sandbox can only be accessed through its corresponding application or the operating system. Bit-flipping a DDR chip works when a hacker writes an application that can access two chosen rows of memory. The app then accesses those same two rows hundreds of thousands of times, aka hammering. When the targeted bits in a neighboring row flip from ones to zeros (or zeros to ones), matching the pattern the application is looking for, the target bits are left alone with their new value.
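To make "hammering" concrete without handing anyone an exploit, here is a toy Python model of the mechanism, emphatically not Project Zero's code: rows of cells where every access to an aggressor row bleeds a little charge out of its physical neighbors, and a cell that loses enough charge flips its bit. The leak rate, threshold and row layout are invented for illustration.

```python
import random

ROWS, COLS = 8, 8
LEAK_PER_ACCESS = 1.2e-6   # invented: fraction of charge drained from a neighbor per access
FLIP_THRESHOLD = 0.5       # invented: a cell loses its stored value once half its charge is gone

# Every cell starts fully charged (its stored bit is intact).
charge = [[1.0] * COLS for _ in range(ROWS)]
flipped = [[False] * COLS for _ in range(ROWS)]

victim_row = 3                                # row the attacker cannot write directly
aggressors = (victim_row - 1, victim_row + 1) # the two physically adjacent rows

def hammer(row):
    """Model one read of an aggressor row: cells in the physically adjacent
    rows leak a little charge; past the threshold, their bit flips."""
    for neighbor in (row - 1, row + 1):
        if 0 <= neighbor < ROWS:
            for col in range(COLS):
                charge[neighbor][col] -= LEAK_PER_ACCESS * random.random()
                if charge[neighbor][col] < FLIP_THRESHOLD:
                    flipped[neighbor][col] = True

# Alternate between the rows above and below the victim ("double-sided"
# hammering) hundreds of thousands of times. A real attack has to do this
# between DRAM refresh cycles; refresh is left out of this toy model.
for _ in range(540_000):                      # takes a few seconds in pure Python
    for row in aggressors:
        hammer(row)

print("victim-row bits flipped at columns:",
      [col for col in range(COLS) if flipped[victim_row][col]])
print("flips in the rows one step further out:",
      sum(flipped[r][c] for r in (victim_row - 2, victim_row + 2) for c in range(COLS)))
```

Run it and only the doubly hammered victim row crosses the threshold; the rows one step further out absorb half the disturbance and survive. That is the shape of the real attack: the aggressor rows the attacker can touch are never the rows whose bits end up changing.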

The implications of this style of hack are hard for the layman to see, but they are profound in the security world. Most data networks allow only a limited list of administrators to have special privileges. Using a rowhammer attack, it would be possible for an existing account to suddenly gain administrative privileges on the system. In the vast majority of systems, that kind of access opens the door to several other accounts, and administrative access would also let hackers alter existing security features. The bigger the data center, and the more users with accounts accessing the database, the more useful this vulnerability is.

The Physics of a Vulnerability

We're all used to newer tech coming with unforeseen security problems. Ironically, this vulnerability is present in newer DDR3 memory chips, and it is a direct result of the ever smaller dimensions of the silicon. The DRAM cells sit so close together in this kind of chip that repeatedly accessing a nearby cell, toggling it back and forth, can eventually cause the one next to it (the target bit that is not directly accessible) to flip.

Note: The rowhammer attack described here doesn't work against newer DDR4 silicon or against DIMMs that contain ECC (error-correcting code) capabilities.

The Players and the Code:

Mark Seaborn and Thomas Dullien are the guys who finally wrote a piece of code able to take advantage of this flaw. They created two rowhammer attacks that run as ordinary processes. Those processes have no security privileges whatsoever, but can end up gaining administrative access to an x86-64 Linux system. The first exploit was a Native Client module, incorporating itself into the platform as part of Google Chrome. Google developers caught this attack and disallowed access to the CLFLUSH instruction in Native Client, and the exploit stopped working. Seaborn and Dullien were psyched they got that far, and wrote the second attempt shortly thereafter.

The second exploit looks like a totally normal Linux process. It allowed Seaborn and Dullien access to all physical memory, which proved the vulnerability is a real threat to any machine with this type of DRAM.

The Ars Technica article about this has a great quote from Irene Abezgauz, a product VP at Dyadic Security:

The Project Zero guys took on the challenge of leveraging the concept of rowhammer into an actual exploit. What’s impressive is the combination of lots of deep technical knowledge with quite a bit of hacker creativity. What they did was create attack techniques in which flipping just a single bit in a specific location allows them to execute any code they want with root privileges or escape a sandbox. This is impressive by itself, but they added to this quite a few creative solutions to make it more likely to succeed in a real world scenario and not just in the lab. They figured out ways for better targeting of the specific locations in memory they needed to flip, improved the chances of the attack to succeed by creating (“spraying”) multiple locations where a flipped bit would make the right impact, and came up with several ideas to leverage this into actual privileged code execution. This combination makes for one of the coolest exploits I’ve seen in a while.

Project Zero didn't name which models of DDR3 are susceptible to rowhammering. They also claim the attack could work on a variety of operating platforms, even though they only tried it on a Linux computer running x86-64 hardware; that claim isn't technically proven, but it seems very believable given the success and expertise behind the opinion.

So, is Rowhammering a real threat or just some BS?

There isn't an obvious, practical application for this yet. Despite how powerful the worst-case scenario would be, this threat doesn't come with a guarantee of sweeping the internet like some other, less recent vulnerability exploits. The overwhelming majority of hacks are attempted from remote computers, but Seaborn and Dullien apparently needed physical access to get their otherwise unprivileged code onto the targeted system. Also, because the physical layout of the chip dictates which rows are vulnerable, users who want to harden against this exploit may be able to reconfigure where administrative privileges are stored and manipulated on the chip. Thirdly, rowhammering as Project Zero describes it requires over 540,000 memory accesses in less than 64 milliseconds, a memory-speed demand that means some systems can't even run the necessary code. Hijacking a system using rowhammering with these limitations is presently not a real threat.

People used to say the same thing about memory corruption exploits, though. Buffer overflows and use-after-free bugs, for example, both let attackers squeeze malicious shell code into the protected memory of a computer. Rowhammering is different because it is so simple: it only grants increased privileges to the hacker or piece of code, which becomes a real threat if it is developed as thoroughly as memory corruption exploits have been. The distinction might be hard to grasp now, but now that the groundwork has been done, it's the usual race between security analysts who would love to protect against it and the criminal world trying to dream up a way to make it more viable. Rob Graham, CEO of Errata Security, wrote further on the subject, here.

In short, this is noteworthy because a physical design flaw in a chip is being exploited, as opposed to a software oversight or a flaw in the code itself. A piece of code is actually affecting the physical inside of the computer during the attack.

Or, as Kanter, of the Microprocessor Report, said:

“This is not like software, where in theory we can go patch the software and get a patch distributed via Windows update within the next two to three weeks. If you want to actually fix this problem, we need to go out and replace, on a DIMM by DIMM basis, billions of dollars’ worth of DRAM. From a practical standpoint that’s not ever going to happen.”

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

An Interview With 3D Printed Food Artist Chloe Rutzerveld


Chloé shines in this interview about the future of food design and her upcoming year, including SXSW and developing 3D-printed prototypes into a culinary reality.

Eindhoven University of Technology graduate Chloé Rutzerveld designed a food I don't quite know how to categorize. I first saw pictures of her most recent work, Edible Growth, last week and immediately wrote to her. Her Edible Growth concept involves a bunch of hot topics in current scientific thought, but the pictures don't put the technology first – they just look great. In fact, the pictures are currently the point of the project. There are tons of details that still need to be worked out, and Rutzerveld is spending the upcoming year getting the funding, awareness and support to develop this project into a realistic restaurant menu item. 3D printing technology is a frontier she is willing to jump way into. Read more about Edible Growth on Rutzerveld's website.

Chloé answered a ton of questions below.


The current concept art looks great. What was the initial idea behind these great-looking confections?

The shape of the edible developed and changed throughout the design process, influenced by developments in the technological and biotechnological parts of the project. For example, at first I made drawings of Edible Growth in which the entire ball was filled with holes, which doesn't make sense because cresses and mushrooms don't grow down, only up 😉


Chloé's initial, all-plastic design showed plants and mushrooms growing in all directions, but the final design with real food had to be modified to accommodate gravity.

Also, when the product is printed, you see straight lines, showing the technology part. When the product matures, these straight technological lines become invisible beneath the organic growth of the product, showing the collaboration between technology and nature. Technology in this project is merely used as a means to enhance natural processes like photosynthesis and fermentation.

What inspired you?

My skepticism towards printing food, and the urge to find some way to use this new technology to create healthy, natural food with a good taste and structure, in which the printer would add something to the product as well as the environment.


A 3D printer arranges dough for the first step of an Edible Growth prototype.

Once you had the idea, how long did it take you to produce the prototypes and pastries we can see in the photos?

At first I made a lot of drawings and prototypes from clay. After that I started using nylon 3D-printed structures. When I gained more knowledge about 3D printing and the material composition inside the structure, the design of the product changed along with that. The mushrooms and cress inside the prototypes, as well as the savory pie dough, are just a visualization; the final product might be totally different. It's meant as inspiration, showing that we should think beyond printing sugar, chocolate and dough if we want to use this technology to create future 'food'.

The prototyping process took about 2 months, I think. And because multiple museums asked if they could exhibit it, I made non-food versions of the food products that would last longer.


What are you doing for a living? 

Haha great question, because as you probably understand, media attention is great but does not help me pay my bills unfortunately 😉 But it does make it easier to get assignments for the development of workshops, dinners etc.

Basically at this point, I give lectures, presentations, and organize events and dinners. One upcoming event I’m organizing is about my new project called “Digestive Food”. I will not say too much about it, but I’ll update my website soon;)

To have a more stable income, I started working for the Next Nature Network in February, to organize the Next Nature Fellow program! Next Nature explores the technosphere and our co-evolving relationship with technology.

How did you find the project so far?

Well, I personally think it looks beautiful and I'm quite proud that so many people are inspired and fascinated by it! It would be great if such a product came to market.

I wonder what the pastry and edible soil are made of. Can you talk about the ingredients? 

I don't call it edible soil, but a breeding ground. Because everything must be edible (like a fully edible ecosystem), we experimented with a lot of different materials. But in the end, we found that agar-agar is a very suitable breeding ground, on which certain species of fungi and cress (like the velvet-paw and watercress, for example) can also grow very easily within a few days without turning moldy!


Agar-agar breeding ground turned out to be the right mix of versatility and food-safe materials to make Edible Growth go from plastic prototype to edible reality.

How do you feel about copyright and patented ideas?

I am not very interested in that part… of course it's good to get credit for the idea and the photos, but I will not buy a patent. I don't have the knowledge or employees to develop this concept into a real product. So I actually hope someone steals the idea and starts developing it further :)! I'm often asked by big tech companies or chefs if I want an investment to develop it… but to be honest… I've many other ideas and things I would like to do.


Do you have secret ingredients?

Haha not in the product, but in my work it would be passion, creativity and a pinch of excessive work ethos 😉

What types of foods have you experimented with?

For Edible Growth? A dozen kinds of cress and other seeds, dried fruits and vegetables for the breeding ground, agar-agar, gelatins, some spores…

But for my other projects also with mice, muskrat, organ meat, molecular enzymes etc.


Who have you been working with? 

Waag Society (Open Wetlab, Amsterdam), Next Nature (Amsterdam), TNO (Eindhoven & Zeist), Eurest at the High Tech Campus (Eindhoven)

What is your studio environment like? 

I actually still live in a huge student home which I share with 9 other people. But because I graduated almost one year ago, I will need to move out. So I work a lot at home, in my 16m2 room, or in the big-ass kitchen downstairs. If I have appointments somewhere, I afterwards work in a café or restaurant with wi-fi, or at flex work places, or my parents' house… I'm very flexible and can work almost everywhere 🙂 Practical work I'll do mostly at home, obviously.

But I am looking for a nice studio in Eindhoven; that makes it easier to receive guests or people from companies.

What steps need to happen before we start seeing 3D printed food become commercially available?

Development of software, hardware and material composition.

I noticed on your website you have other projects in the works. What are you doing currently? What are your upcoming plans and goals for 2015? 

Next week I'll go to SXSW. In the summer I'm going to the Matthew Kenney Culinary academy to learn more practical and theoretical things about food (and secretly just because I absolutely love learning about plating and menu planning). I'm developing the event I told you about for the Museum Boerhaave in Leiden and the E&R platform. And when I return from Maine, I want to set up a temporary pop-up restaurant at the Ketelhuisplein during Dutch Design Week 2015, around a social or cultural food issue.

Thanks again, Chloé~! This was fun!!!

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Ancient Cities Developed Just Like Modern Ones


Just before famed Spanish conquistador Hernán Cortés pillaged the Aztec capital city of Tenochtitlan in 1521, toppling one empire to make room for another, he was astounded by the immense size of its buildings and the wealth of its rulers. The city, he recalled in a letter to Holy Roman Emperor Charles V, was as large as the Spanish city of Seville. Like any modern city, Tenochtitlan was not without its boulevards, thriving marketplaces for trade, lavish courthouses, temples, and an intricate network of canals. Clearly, the Aztecs had no European cities to model their capital on, the way Washington, D.C. was modeled after Paris, but something about it still struck Cortés as familiar from his own experience back home in the Old World.

All cities have their own personality: landmarks, dialects and food staples that stand out. But now some anthropologists believe there is something that transcends those cultural barriers, universal mathematical laws that gradually find their way into the structure of all urban regions, and that these rules may be found in ancient cities as well.

To accomplish their study, the team of researchers analyzed archaeological data from excavations at Tenochtitlan and a number of other sites around it throughout Mexico, finding predictable ways in which all the buildings were put together.

“We build cities in ways that create what I like to call social reactors,” said complex systems researcher Luis Bettencourt, of the Santa Fe Institute in New Mexico.

Bettencourt and his collaborators in Santa Fe have long been working on a theoretical framework for understanding where modern cities come from: how they originate and how they gradually expand. The primary driver of urban development is the increased opportunity for people to interact and network with one another. As those relationships multiply, cities become more efficient, and the influx of people correlates positively with the city's economy. Bettencourt's team noticed that when a city's population doubles, there is very often an estimated 15 percent per-capita increase in the city's productivity: a 15 percent jump in wages, in GDP, and also in patents. However, the increase is accompanied by about the same rise in violent crime, so not all growth in population is good.
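That "15 percent per doubling" figure is shorthand for a power law. If total output scales with population as Y = Y0 * N^beta, then a 15 percent per-capita bonus with each doubling pins down beta, and the tiny Python calculation below just makes that arithmetic explicit. The 1.15 factor is the figure quoted above, not a value fitted to any data.

```python
import math

# If doubling a city's population multiplies per-person output by 1.15
# (the "15 percent" figure), total output Y = Y0 * N**beta must satisfy
# 2**(beta - 1) == 1.15.
per_capita_bonus_per_doubling = 1.15
beta = 1 + math.log2(per_capita_bonus_per_doubling)
print(f"implied superlinear exponent: beta = {beta:.2f}")   # about 1.20
```

Anything with beta above 1 is "superlinear": the whole grows faster than the sum of its parts, which is exactly the social-reactor effect Bettencourt describes.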

Bettencourt's work challenges the common perception of a city as a pile of brick and steel buildings, and instead treats cities as structures built to sustain day-to-day social interaction: a means of pooling people of different skills to solve problems that no single individual could solve alone. It was this realization that led the team to suspect the same patterns might be observed in ancient cities as well.

“What I realized was that none of the parameters they were discussing in these models had anything to do with modern capitalism, democracy or industrialization,” said Scott Ortman, an anthropologist at the University of Colorado, Boulder, of Bettencourt’s work. “Their parameters are basic properties of human social networks on the ground. And so I thought, ‘Well, gosh, if that’s true, then these models should apply very broadly.’”

To conduct their analysis, the team examined over 2,000 years of history in and around what is now Mexico City, from its very beginnings around 500 B.C. right into the colonial period, which began in the 16th century. They looked at over 1,550 square miles of land containing thousands of settlements over that span, from little towns of only a few hundred people to immense cities like Teotihuacan and Tenochtitlan, which exceeded 200,000 people.

They published a study in PLOS ONE last year that showed a pattern of growth similar to cities of today. As the settlements' populations doubled, their settled areas grew more slowly than their headcounts, scaling with an exponent of roughly 0.83 rather than one-to-one. Their conclusions, Bettencourt said, showed the relationship between social networks and living quarters, in which the former takes priority over the latter. If a city's footprint doubled every time its population did, the cost of living would become too high to maintain.
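That sublinear area scaling is the mirror image of the productivity bonus above: space grows more slowly than population. Here is a hedged sketch of how such an exponent gets recovered, by fitting a straight line to log(area) versus log(population). The settlement figures below are invented placeholders that happen to follow a 0.83 power law, not the study's actual excavation data.

```python
import math

# (population, settled area in hectares) -- invented placeholder values that
# follow area ~ population**0.83, standing in for real archaeological estimates.
settlements = [
    (300, 9.4),
    (1_000, 25.6),
    (5_000, 97.0),
    (20_000, 307.0),
    (200_000, 2_067.0),
]

xs = [math.log(pop) for pop, _ in settlements]
ys = [math.log(area) for _, area in settlements]

# Ordinary least-squares slope of log(area) on log(population);
# the slope of this line is the scaling exponent.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
print(f"fitted scaling exponent: {slope:.2f}")   # about 0.83 for these inputs
```

A slope below 1 means denser living as settlements grow, which is the researchers' point: people crowd in to be near each other rather than spreading out proportionally.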

To compare these cities with modern urban economics, the researchers looked at the construction of public monuments and temples, many of which were commissioned after city populations increased. They even noticed a distribution of income similar to today's, with wealthier citizens occupying larger houses.

“What’s interesting is that this expresses exactly the same as GDP,” Bettencourt said.

The study was published today in the journal Science Advances. Although their research is limited to ancient Mexico, Ortman is intrigued by the possibility that studies of ancient cities throughout the rest of the world might yield similar results – each being the product of ages of social interaction.

“It implies that some of the most robust patterns in modern urban systems derive from processes that have been part of human societies all along,” said Ortman. “I just think that’s an amazing concept.”

James Sullivan
James Sullivan is the assistant editor of Brain World Magazine and a contributor to Truth Is Cool and OMNI Reboot. He can usually be found on TVTropes or RationalWiki when not exploiting life and science stories for another blog article.