Tim Stevens, author of ubiwar, has been kind enough to answer some questions for me concerning cyber conflict in our contemporary virtual space. Mr. Stevens is a PhD candidate at King’s College London researching institutional responses to cyber threats, particularly in the field of cyber strategy. His related research interests include the political use of cyberspace, social technologies, violence in virtual worlds, and the nature of the technological accident. I thought this would be an interesting opportunity to bring together and discuss our respective areas of study.
Q: Do you have any thoughts on how the fusion of social media, location-based technologies, and real-time information may shape the context of cyber conflict in years to come?
A: I think your recent post addressed this question very well. My general position is that all things are possible, but most things are improbable. When I started Ubiwar, it was to look at how people exploited technological niches in pursuit of political ends, principally through the application of violence. I see that David Kilcullen has just characterised counterinsurgency as “a battle for adaption…against an enemy who is evolving.” This is a position with which I have a great deal of sympathy. As a battle for adaption, it follows for Kilcullen that COIN cannot really be strategic, and scholars of ‘change’ would generally agree with this. I’m also skeptical of the strategic impact of information technologies – what ‘Twitter Revolution’?
What I’m getting to is that tactical and operational use of information technologies is a massively adaptive field and people are experimental animals. Humans are hackers, and hacking is one way of achieving success in any environment. ICTs offer myriad opportunities for exploitation by a range of actors for a wide range of strategic ends. The fusion you ask about is what others would call convergence. Technologies converge spatially and temporally, always have done. The difference now is that the temporal element has been reduced to effectively zero, as you point out, which similarly collapses space, resulting in what you could call a non-locative cyberspace. If you think of ‘cyber’ as command-and-control, then we all have the ability to effect change remotely by contesting the connectivity of non-co-located actors in cyberspace.
It is significant that locative technologies are coming to the fore again. It’s almost like tying cyberspace back ‘down to earth’, although Seymour Goodman wrote years ago that cyberspace ‘always touches ground somewhere’. Hardware hasn’t gone away, nor has the wetware of the human mind. What I suspect you’re referring to is augmented reality and ubiquitous computing. Short answer: it’s all ripe for hacking. My personal take is that guys like you look into the technical possibilities, and that’s all well and good. I’m more interested in what it actually means. What happens to the body in this space, these spaces? The internet has already had a huge impact on what used to be the relatively solid notion of subjectivity. What happens to identity in cyberspace(s)? The context of cyber conflict is ultimately us – how we internalise cyberspace, or project externally into it, is unknown. I have an idea that cyberspace is not really new anyway – it was born when we became conscious, communicative animals. In that sense, cyber conflict has always been with us, and its psychological vectors remain pretty much the same, if twisted and mutated somewhat. The physiological changes are much more murky and hard to decipher. Some good work has been undertaken on ‘presence’, for example, but it’s early days. This is approximately where my research into violence in virtual worlds is situated.
Q: Are there any fundamental aspects that exist across all cases of cyber conflict? If these fundamental commonalities do exist, what are they and how could they be used to remedy future cyber conflicts?
A: Well, see above. The issue of remediation is interesting though. I think that deterrence in its various forms, for example, is a psychological matrix of cost-benefit analysis, even for actors we don’t normally think of as ‘rational’. Pre-event deterrence-by-denial dissuades an initial attack. Post-event deterrence-by-denial dissuades future attacks by demonstrating the ability to recover. Deterrence-by-punishment dissuades by plausibly threatening to kick your ass if you try anything funny.
But cyber conflicts are not just psychological, any more than other forms of conflict. The physical systems on which cyberspace is ‘parasitic’, in Albert Borgmann’s phrase, are also contested, for example, and are largely what worry SCADA wonks. Martin Libicki’s recent RAND report on cyber deterrence mentions the physical, syntactic and semantic layers of cyberspace, and this is a useful way of thinking about the differing layers of contestability. He swiped this idea from linguistics without reference but I’ll forgive him for that. This is another reason why I’m not so sure cyberspace is new, which speaks to your ‘fundamental aspects’ question. How we engage in cyber conflicts throws up a host of weirdness and counter-intuitive possibilities but not all of it is ‘new’.
Q: You recently posted Neal Stephenson’s response to a fascinating question concerning the protection of hacking tools (in the United States) under the Second Amendment. How would you respond to that question?
A: Being a cheese-eating surrender-monkey I’m going to be called out whatever I say in response to this. I’m not a priori opposed to the Second Amendment but I do think it’s been hijacked somewhat over the years. US gun-control laws are in dire need of review: what’s the point in having guns to keep the government in check if all you do is shoot fellow Americans with them? In keeping with almost everyone else – including US citizens – I have to claim ignorance as to what it really means. As to whether it extends to ‘hacking tools’ – code – I’m with Stephenson here. My default position when it comes to constitutional issues is generally ‘do nothing’ unless there’s a very good case for doing otherwise; I don’t think that case exists yet. Of course, we don’t even have a written constitution in the UK, so what do I know?
There’s another issue here, one that the military are currently actively exploring, and which the UN are likely to tackle at some point: cyber arms control. My initial response is: how the hell are you going to police that? Code is not amenable to the same forms of physical monitoring and intelligence regimes as kinetic weapons. Code lacks the traditional dimensionality required for control. I should think the implications of that are obvious.
Q: When discussing cyber conflict, you appear to become frustrated when the argument centers around hyperbole. This is absolutely understandable. What advice would you give to those of us (including myself) on the front lines that sometimes unconsciously fall into this trap?
A: Well, it’s pretty simple, actually. Be conscious of who you are. If you lose sight of the bigger picture, then you have little hope of formulating realistic solutions to realistic problems. Planning involves considering worst-case scenarios and formulating strategies for dealing with them. We’ve had six decades of doing exactly that with nuclear weapons, for example. The problem with doing that is if everything is predicated on the worst-case scenario – including public discourse – the solutions we come up with are just as likely to be the worst ones. It boils down to understanding the effects that one’s own actions have, and taking responsibility for them. Perspective’s a handy tool and it doesn’t just mean looking outwards; it means examining yourself too. Being critical. If cyberspace is as important as everyone says it is, it would be wise for all those involved to think about exactly what it is they’re pushing for, and ask if their actions in any way further the interests of the global commons. If they don’t, then your standpoint may need to be tweaked a bit. And, for the record, I don’t think that national interest necessarily trumps all.
Q: In a society prone to finger pointing, what is the appropriate response to the nature of technological accidents?
A: I have two answers, one practical, the other philosophical. The practical answer would be to find out what went wrong. Sounds simple, right? Not always so. You’re right about finger-pointing; everyone’s so gee-ed up with their own importance, and too weak to resist the bleating of single-interest groups, that people tend to get fired before anyone even knows what the problem is. By all means hold people to account but count to five before you sack someone just because you need a scapegoat. Some of the problems of technology are ‘wicked’ ones, and require significant unpacking before action is taken. One of the problems with ‘cyber’ is that so many people are shouting that cyber defence is moving way too slowly to keep up with the environment that the fingers are pointing even before anything’s happened. There are too many people who in one breath are repeating the mantra, ‘we.must.all.work.together’, whilst tearing strips off anyone who isn’t half-as-damn-smart as they are. That’s a really good basis for co-operation. Sometimes, of course, shit just happens. Learn from it. Move on.
The philosophical argument is slightly different, and there is no right response. There are two people to bear in mind here. One is the ‘anarcho-Christian’ theorist Paul Virilio, the other Ted Kaczynski, the Unabomber. In different ways, they would both maintain that technology contains within itself the seeds of its own destruction. The technological accident is therefore something inevitable. I always paraphrase this as, ‘you invent the car, you get the car crash; invent TV, and you get Fox’. Trite, I know, but you get the point. So, the response to a technological accident in these terms is complex. You can throw your hands up and blame it on the evils of technology in a told-you-so kind of way and go and break up all the spinning jennies, or you can stop and wonder if technology really is teleological in these terms. Does technology really have an imperative, a drive, a force beyond the control of humankind, or can its trajectory be shaped by humans? Your response really depends on whether you’re a hardcore technodeterminist in the first instance, or a social constructivist in the second. Me, I’m somewhere between the two right now, but vacillate daily. Today’s metric is 60:40 in favour of determinism. Ask me again tomorrow.