The Last of Us, one of HBO’s biggest hit shows, just aired its Season 1 finale. The show, based on a popular video game franchise, features a parasitic fungus that turns its host into a zombie-like creature. The fungus, known as Cordyceps, has become a popular topic of discussion among gamers and science enthusiasts alike. While the depiction of Cordyceps in The Last of Us may seem far-fetched, the fungus is real and has some fascinating properties.
Cordyceps is a type of parasitic fungus that infects insects and arthropods, such as ants and caterpillars. The fungus invades the host’s body and eventually takes over, manipulating the host’s behavior to its advantage. In some cases, the fungus will cause the host to climb to a high location, where it will spread its spores and infect other nearby hosts.
This behavior is what inspired the depiction of Cordyceps in The Last of Us, where the fungus infects humans and turns them into aggressive creatures.
While Cordyceps has some fascinating properties, it is not capable of infecting humans in the way it infects insects. There are some documented cases of Cordyceps infections in humans, but these are extremely rare and typically occur only in people with weakened immune systems.
Despite the fact that Cordyceps cannot infect humans in the same way that it infects insects, the fungus has become popular in traditional medicine and is believed to have numerous health benefits. Cordyceps is commonly used in traditional Chinese medicine as a treatment for a variety of conditions, including fatigue, asthma, and kidney disease.
In addition to its medicinal properties, Cordyceps has also been the subject of scientific research. Scientists have discovered that the fungus contains a variety of bioactive compounds that have the potential to be used in the development of new drugs. Cordyceps has been shown to have anti-inflammatory, antioxidant, and anti-cancer properties, among others.
The depiction of Cordyceps in The Last of Us has sparked interest in the fungus and has brought attention to its unique properties. While the idea of a parasitic fungus infecting humans and turning them into zombies may seem like science fiction, the reality of Cordyceps is equally fascinating. As scientists continue to study the fungus, it is possible that Cordyceps may one day be used to develop new treatments for a variety of medical conditions.
Literature professor Simon John James and physicist Richard Bower were both involved in curating the exhibition Time Machines – the past, the future, and how stories take us there. Their conversations quickly revealed the many, wildly various, meanings of “time travel”. Here, they discuss how literary and scientific notions of time travel might, one day, coincide.
Simon John James: Richard, what does the term “time travel” mean for a physical scientist?
Richard Bower: Time travel is the basis of modern physics, and, for anyone that looks up at the night sky, an everyday experience. When we view the stars and planets, we see them, not as they are now, but as they were in the past. For the planets this time delay is only a few minutes, but for most of the stars in the night sky, thousands of years. For galaxies, faint smudges of light made up of very distant collections of stars, the delay can be millions or billions of years. By observing the faintest galaxies with the world’s latest telescopes, we can look back through time and watch the whole history of the universe unfold.
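Bower’s point about look-back time reduces to simple arithmetic: the delay is just distance divided by the speed of light. A minimal Python sketch, using rough, assumed distances for a planet, a nearby star, and a galaxy:

```python
# Look-back time: how long ago the light we see now left its source.
# Distances are approximate illustration values.

C = 299_792_458            # speed of light, m/s
LY = 9.4607e15             # metres in one light year

distances_m = {
    "Mars (near opposition)": 7.8e10,   # ~0.52 AU, approximate
    "Proxima Centauri": 4.24 * LY,
    "Andromeda galaxy": 2.5e6 * LY,
}

for name, d in distances_m.items():
    seconds = d / C
    years = seconds / (365.25 * 24 * 3600)
    if years < 1:
        print(f"{name}: light left {seconds / 60:.1f} minutes ago")
    else:
        print(f"{name}: light left {years:,.0f} years ago")
```

This reproduces the scale of the delays in the conversation: minutes for planets, years for nearby stars, millions of years for galaxies.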
But this is not the most satisfying kind of time travel. It allows us only to gaze into the past as remote observers. One of the key challenges for modern physics is to determine whether it is possible to influence the past.
One of the key concepts of Einstein’s Theory of Relativity is that objects exist in a long line in 4D spacetime, a unification of time and space. Although all observers agree on the length of the world line that connects two events, they may have different views about whether the events occur simultaneously, or at the same location but at different times, or a mixture of both. For example, while I sit at my desk to eat lunch, then work a little and get up to go home several hours later, a (very) fast-moving observer will see me whizz by eating lunch and immediately getting up to go home. In Einstein’s theory, time and space are mixed together: we cannot think of them separately. It therefore makes best sense to think of myself as always moving along that 4D world-line, travelling into the future at the speed of light.
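The disagreement between the desk-bound worker and the fast-moving observer can be illustrated with the standard time-dilation formula of special relativity, τ = t·√(1 − v²/c²). A minimal sketch, with assumed illustration speeds:

```python
import math

def proper_time(coordinate_time_s: float, speed_fraction_of_c: float) -> float:
    """Elapsed time on a clock moving at the given fraction of c,
    over coordinate_time_s seconds measured in the stationary frame."""
    return coordinate_time_s * math.sqrt(1.0 - speed_fraction_of_c ** 2)

workday = 8 * 3600  # eight hours at the desk, in seconds
for beta in (0.5, 0.9, 0.999):
    tau = proper_time(workday, beta)
    print(f"at {beta:.3f}c: {tau / 3600:.2f} hours pass on the moving clock")
```

The closer the speed gets to c, the more the eight-hour workday shrinks from the traveller’s point of view, which is the sense in which the fast-moving observer sees lunch and home time collapse together.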
But is it possible to cheat the safeguards of Einstein’s theory and to travel backwards through time? At face value the answer is no, but then again, the science of earlier generations would have said it was impossible for mankind to fly. Perhaps all scientists need is inspiration and a cunning idea.
SJJ: Well, you can find a lot of inspiration and cunning ideas in fantastic fiction, of course. Perhaps the most famous time travel text is The Time Machine (1895) by HG Wells, which was the first to imagine humans travelling in time through the use of technology. Others of his imaginings have been realised – he imagined and wrote about powered flight before science made it possible in real life, for example. Wells’s innovative idea led to modern time travel stories such as Back to the Future and Doctor Who.
But many different kinds of stories travel in time: Aristotle observed that a good story has a beginning, a middle and an end – but they do not have to be in that order. Even a text as ancient as Homer’s Iliad does not begin with the judgement of Paris, but with Achilles sulking in his tent in the ninth year of the Trojan War, and the story unfolds from there. Whodunnits usually don’t tend to begin with the murder, but with the discovery of the body, and the plot is reconstructed by the detective as the story moves both forwards and backwards. This is the temporal freedom of narrative time.
RB: What is freeing as a literary device is, for practical time travel, the central obstacle. Although Einstein’s theories allow us to stretch and shrink time, the causal ordering of events remains constant. In your example, the murder victim might experience their life flashing before their eyes in their dying seconds, but the experience of that life will always precede the moment of death.
But in The Terminator, to take one example, the future human civilisation finds a way to loop the protagonist’s world line so that he travels back in time to intercept the cyborg and avert Sarah Connor’s death. In the inner regions of a spinning black hole, space and time are mixed so that this is tantalisingly close to possible, but I’ve never knowingly met anyone who has made their way back from the future this way. Perhaps the looped world line cuts off the old future and pops out a new future, creating parallel worlds that exist at the same time.
From the conventional point of view, there’s rather a lot wrong with the idea of looping back in time. But modern interpretations of quantum mechanics suggest that the world may actually consist of many parallel futures, constantly splitting off from one another. All of these futures exist simultaneously, but we are only conscious of one of them. From this viewpoint, there isn’t so much to fear from time travel. The looped world line simply creates another layer of possible futures.
SJJ: I’m fascinated by time travel’s flexibility as metaphor for talking about many different kinds of academic research. History, archaeology would be obvious examples, but in a recent project I’ve been really inspired by work in the psychology of autobiographical memory. Narrative is not just a property of literary and other kinds of texts: it has been argued that the human sense of self is constructed from our narrativising of our own experiences within the passing of time: that memory and planning for the future are a kind of “mental time travel” which allows us to constitute identity.
Here my literary example is Charles Dickens’s A Christmas Carol. Scrooge travels back to memories of his past selves, and by so doing is encouraged to change his ways for the better in the future. We could think of the despised, neglected miser of the vision of Christmas Yet to Come, and the beloved happy Scrooge of the novel’s ending as those inhabiting two different “parallel worlds”, perhaps?
RB: It’s certainly fascinating how literary ideas challenge scientific understanding – perhaps both of those parallel futures might be proved equally real yet.
How closely will we live with the technology we use in the future? How will it change us? And how close is “close”? Ghost in the Shell imagines a futuristic, hi-tech but grimy and ghetto-ridden Japanese metropolis populated by people, robots, and technologically-enhanced human cyborgs.
Beyond the superhuman strength, resilience, and X-ray vision provided by bodily enhancements, one of the most transformative aspects of this world is the idea of brain augmentation: that as cyborgs we might have two brains rather than one. Our biological brain – the “ghost” in the “shell” – would interface via neural implants to powerful embedded computers that would give us lightning-fast reactions and heightened powers of reasoning, learning and memory.
First written as a Manga comic series in 1989 during the early days of the internet, Ghost in the Shell’s creator, Japanese artist Masamune Shirow, foresaw that this brain-computer interface would overcome the fundamental limitation of the human condition: that our minds are trapped inside our heads. In Shirow’s transhuman future our minds would be free to roam, relaying thoughts and imaginings to other networked brains, entering via the cloud into distant devices and sensors, even “deep diving” the mind of another in order to understand and share their experiences.
Shirow’s stories also pin-pointed some of the dangers of this giant technological leap. In a world where knowledge is power, these brain-computer interfaces would create new tools for government surveillance and control, and new kinds of crime such as “mind-jacking” – the remote control of another’s thoughts and actions. Nevertheless there was also a spiritual side to Shirow’s narrative: that the cyborg condition might be the next step in our evolution, and that the widening of perspective and the merging of individuality from a networking of minds could be a path to enlightenment.
Lost in translation
Borrowing heavily from Ghost in the Shell’s re-telling by director Mamoru Oshii in his classic 1995 animated film version, the newly arrived Hollywood cinematic interpretation stars Scarlett Johansson as Major, a cyborg working for Section 9, a government-run security organisation charged with fighting corruption and terrorism. Directed by Rupert Sanders, the new film is visually stunning and the storyline lovingly recreates some of the best scenes from the original anime.
Sadly though, Sanders’ movie pulls its punches around the core question of how this technology could change the human condition. Indeed, if casting Western actors in most key roles wasn’t enough, the new film also engages in a form of cultural appropriation by superimposing the myth of the American all-action hero – who you are is defined by what you do – on a character who is almost the complete antithesis of that notion.
Major fights the battles of her masters with increasing reluctance, questioning the actions asked of her, drawn to escape and contemplation. This is no action hero, but someone trying to piece together fragments of meaning from within her cyborg existence with which to assemble a worthwhile life.
A scene midway through the film shows, even more bluntly, the central role of memory in creating the self. We see the complete breakdown of a man who, having been mind-jacked, faces the realisation that his identity is built on false memories of a life never lived, and a family who never existed. The 1995 anime insists that we are individuals only because of our memories. While the new film retains much of the same story line, it refuses to follow the inference. Rather than being defined by our memories, Major’s voice tells us that “we cling to memories as if they define us, but what we do defines us”. Perhaps this is meant to be reassuring, but to me it is both confusing and unfaithful to the spirit of the original tale.
The new film also backs away from another key idea of Shirow’s work: that the human mind – even the human species – is, in essence, information. Where the 1995 anime talked of the possibility of leaving the physical body – the shell – elevating consciousness to a higher plane and “becoming part of all things”, the remake offers only veiled hints that such a merging of minds, or a melding of the human mind with the internet, could be either positive or transformational.
In the real world, the notion of networked minds is already upon us. Touchscreens, keypads, cameras, mobile, the cloud: we are more and more directly and instantly linked to a widening circle of people, while opening up our personal lives to surveillance and potential manipulation by governments, advertisers, or worse.
The possibility of voluntarily networking our minds is also here. Devices like the Emotiv are simple wearable electroencephalograph-based (EEG) devices that can detect some of the signature electrical signals emitted by our brains, and are sufficiently intelligent to interpret those signals and turn them into useful output. For example, an Emotiv connected to a computer can control a videogame by the power of the wearer’s thoughts alone.
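The detect-interpret-output pipeline described above can be caricatured in a few lines of Python. This is a toy sketch, not Emotiv’s actual SDK: the simulated signal, the power threshold, and the command names are all invented for illustration, standing in for real electrode data and a trained classifier.

```python
import math
import statistics

def signal_power(samples):
    """Mean squared amplitude of a window of EEG samples (microvolts)."""
    return statistics.fmean(s * s for s in samples)

def to_command(samples, threshold=100.0):
    """Map a window of samples to a game command by comparing signal
    power against a fixed threshold - a stand-in for a real classifier."""
    return "MOVE_FORWARD" if signal_power(samples) > threshold else "IDLE"

# Simulated windows: a quiet baseline and a burst of concentration.
quiet = [2.0 * math.sin(0.3 * t) for t in range(64)]
burst = [25.0 * math.sin(0.3 * t) for t in range(64)]

print(to_command(quiet))   # low power  -> "IDLE"
print(to_command(burst))   # high power -> "MOVE_FORWARD"
```

Real systems replace the fixed threshold with machine-learned models fitted per wearer, but the shape of the pipeline – sample, featurise, classify, emit a control signal – is the same.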
In terms of artificial intelligence, the work in my lab at Sheffield Robotics explores the possibility of building robot analogues of human memory for events and experiences. The fusion of such systems with the human brain is not possible with today’s technology – but it is imaginable in the decades to come. Were an electronic implant developed that could vastly improve your memory and intelligence, would you be tempted? Such technologies may be on the horizon, and science fiction imaginings such as Ghost in the Shell suggest that their power to fundamentally change the human condition should not be underestimated.
When it comes to geek culture, Stan Lee may be the top superhero of them all. As the creator of some of the most famous and recognizable characters in comic book history, seeing Lee speak at comic book and related conventions was not just in demand, but treated almost as a right. That’s why his recent announcement that this year’s NYC Comic Con will be his last appearance is already shaking up the comic book community.
At an impressive 93 years of age, Stan Lee is typically high-spirited and full of smiles, but that doesn’t mean he is without his problems, particularly his health. The old ticker got a pacemaker put in in 2012, for example, and while Lee hasn’t given an official reason for retiring from Comic Con appearances, it’s most likely that it’s simply too much excitement for an old geezer like him.
Furthermore, in a Radio Times interview earlier this year, Lee revealed that his sight and hearing are deteriorating, though he did say that he is otherwise in good health. Of course, there are other factors besides his health that could take a toll on his appearances, including all the movie and television work that Marvel is doing, which most likely takes up a fair bit of his time in contract negotiations surrounding rights.
The UK’s Channel 4 has commissioned Bryan Cranston, of Breaking Bad fame, to produce a 10-episode TV series based on the short stories of legendary science fiction author Philip K. Dick. Each episode will be a stand-alone story, written by Battlestar Galactica and Outlander producer Ronald D. Moore.
All this is said to rival Netflix’s revival of Black Mirror, also a sci-fi anthology series, which left Channel 4 after negotiations for a third season failed. Despite its critical acclaim and global following, Channel 4 believed Black Mirror belonged on its publicly owned network rather than in the private sector, but Mirror’s producers felt otherwise.
Today, Channel 4 hopes to fill the gap left by Black Mirror with a sure-to-be-anticipated new sci-fi series headed by two of the best people in Hollywood for the job. Will it be all it’s cracked up to be? Most likely. The critical acclaim Cranston and Moore have earned far exceeds that of Black Mirror’s creators, and that says a lot.