There are numerous reasons to get hyped about the upcoming Livelock from Tuque Games and Perfect World Entertainment. Frankly, it’s really, really good. One of the factors helping the title stand out is the story, penned by Robopocalypse author Daniel H. Wilson. (It’s an excellent read for fans of action/sci-fi novels, or of books centering on humanity banding together to overcome overwhelming odds.) Recently, Hardcore Gamer had the opportunity to pick the Carnegie Mellon-educated (Ph.D.) author’s brain. Truthfully, we were a bit intimidated by the intelligence of the answers…
[Hardcore Gamer] I want to thank you profusely for taking the time to answer these questions. Personally, I am very excited to have a chance to interact with you, even in a format such as this. This is coming from a fan of Robopocalypse and Robogenesis. I will try to keep my fanboying to a minimum…
[Daniel H. Wilson] Thanks, it’s my pleasure!
With your obvious interest in machines and futurism, something must have sparked your imagination to take it to the levels that you have. What inspired your passion for the subject? (To give an example, my father took his childhood love of the original Star Trek series all the way to a Ph.D. in astrophysics from the University of Chicago. Was there a similar influence?)
I read about plenty of robots during my childhood, between Asimov, Dick, Clarke, Zelazny, Vonnegut, and Bradbury—but I always thought of them as science fiction. I loved those fictional worlds so much that I tried writing short stories in high school. In college, I discovered that machine learning, artificial intelligence, and the field of robotics were real. Worlds were colliding, and I knew I had to study robotics. Being accepted into the Carnegie Mellon Robotics Institute felt like learning that Hogwarts was an actual school, and they had let me in!
After going back and forth on this for a couple decades, I finally decided I love science and science fiction equally. As a kid, I dreamed of being a scientist in a lab coat, but I also loved my sci-fi. I don’t know that you can separate the two. My goal as a writer is simply to tell a great story that draws on what I’m interested in. I am way interested in robots.
Other than Livelock, what are you reading/watching/playing?
After spending a lot of time cowering in futuristic hallways in the Dead Space series and Alien: Isolation, I am now enjoying standing up like a man and kicking demonic butt in the futuristic hallways of Doom. That’s when I’m not in my HTC Vive VR rig—defusing bombs, shooting cardboard zombies, and hitting home runs. With books, I’m researching Chinese, European, and Russian history for my latest novel, but I’m also steadily reading all of Max Barry, one of my favorite authors. I just finished Lexicon, which I highly recommend!
How did you come to write the story for Livelock, and where did the concept come from?
The guys approached me and laid out the gameplay scenario – robots blowing each other up in the ruins of humanity. That’s fundamentally awesome, but it needed a good story to carry the action. (Without a story, it’s just pixels on fire.) The puzzle was: how do we get rid of humanity, keep the cities, and find a good reason for a war between the robots who have inherited the earth? My mind went to gamma ray burst events, which I had read about in a great book by Phil Plait called Death from the Skies, and of course, the only way to survive organic annihilation is to run the human mind on an inorganic substrate, i.e., Ray Kurzweil’s dream of neural uploading. The rest of the story flowed from those two concepts.
There seems to be a prevailing theme in your works that technology will bite humanity in the rear, what with Archos becoming self-aware and immediately putting together the seeds of man’s destruction in Robopocalypse and our complete eradication in Livelock. These are common themes in science fiction, of course, but is there a grain of actual concern in your fiction?
Don’t get me wrong, I love technology and robots! In fact, in Livelock the robots never kill a single human. It’s the opposite. To survive, human beings upload their minds into a computer network and use robot bodies to move around in the world. When the apocalypse arrives, however, the network is corrupted. Amnesiac survivors in the bodies of robots begin an endless cycle of warfare. Only a few survivors can save the rest of humanity by conquering them. How badass is that? I love this story because it showcases the redemptive power of technology, as well as the destructive power.
Speaking of Archos—and this question comes from a person who is ignorant in the field of AI outside the realm of gaming—the idea of a technological singularity, an artificial intelligence capable of self-improvement and perpetuation, is something that many in the field believe is not only plausible, but probable. Others have dismissed the theory. First, in your opinion, is this a possibility? Second, what do you believe this means for humanity as a whole?
The Singularity is the idea that a machine will get smart enough to make another, even smarter, version of itself, and then repeat that until you have a godlike super-intelligence. It’s extremely unlikely this will occur for general intelligence, and if it does happen, I don’t think it will happen by accident. So long before we have to deal with a god in a box, we will have to deal with a very smart person in a box, and then the world’s smartest person in a box, and then maybe a demi-god in a box. The point is, we will have time to figure out what our world looks like when we have super-intelligent oracles who can help us solve our problems.
Much like faster-than-light travel, something that simply isn’t possible under our current understanding of physics, the transferring of a human mind into a machine or computer is brought up often in science fiction, including Livelock. This seems equally implausible. Were it not, what do you think this says about the idea of consciousness, the human soul?
It might sound crazy, but neural uploading is extremely plausible—it’s an active area of research. Scientists are replicating the function of different areas of the brain with computers, for animals and people. It will be a while, but people are invested in understanding the brain, and finding ways to live forever inside machines. The question is: if you copy your mind into a machine, are you still a human being? What if you make changes after you’re in the machine, like making copies of yourself, or augmenting your intelligence, or changing your sensory input? It’s a philosophical question at this point, but my opinion is that the substrate matters: a human mind running on a machine is no longer a person, but something else entirely…
Finally, is there a question that you wish people would ask you but nobody ever does?
Ask me whatever you like on Twitter, I’m game! @danielwilsonpdx
One last note: Daniel H. Wilson was also a guest on a fascinating episode of Monster Talk. It’s worth a listen. [Featured image: Aaron Lynett/National Post]