
Since closing the blog last year I have had periodic requests to restart it.  As it stands, I would be open to anyone wishing to write on this blog around the subject of security and virtual worlds.  There aren’t too many other sources out there, so keeping the blog going is a good idea.  Therefore, if anyone wishes to get involved, please email me at roderick[dot]jones[at]gmail[dot]com.

I recently checked into Second Life for the first time in a long while.  Interestingly, the subject of what ‘went wrong’ with Second Life has cropped up quite a bit recently as a kind of essay question.  My own view is that Second Life had a moment where it almost did offer the metaverse of imagination.  Second Life circa 2006 did seem unbound by terrestrial law and economics, and reports of fortunes being made in a new world were drawing in an enormous amount of interest.  However, from April to July 2007 the owners of Second Life, Linden Lab, began to bind the world to United States law by introducing age verification and then banning gambling.  This happened in an environment where the US was aggressively pursuing UK executives from BetOnSports, a company engaged in online gaming, so it is entirely understandable.  While these seemed sensible policy decisions at the time, they placed Second Life back in the terrestrial realm.  From there the promise and excitement around the world declined.

Offshore Second Life

I find myself wondering how difficult it would have been to offshore Second Life to maintain its ‘freedom’, and how hard that would be to do now: to unbind it from nation-state law and regulation again.  Would it be possible to place the company behind a trust arrangement in an offshore center, and the servers behind similar extra-territorial legal protections?  It is an intriguing idea as online environments become increasingly walled in: creating one community that can be transnational is a fascinating idea and might reboot the idea of a Metaverse.  Second Life is currently valued at just north of $200M, so it wouldn’t be a cheap experiment, but it could work as a side project within Linden Lab.


After almost three years of reporting and commenting on security issues relating to virtual worlds, this blog, and to some degree the metasecurity project as a whole, has run its course.  When I started examining virtual worlds and considering the security implications of their expansion back in 2005, the paradigm shift was undeniable.  My aim through researching, writing and speaking about virtual worlds was to educate on the new vulnerabilities and opportunities presented by the massive migration into virtual spaces.  Through active research within Second Life and the creation of online scenarios (such as the SLLA), the presentation of material relating to virtual worlds and, of course, blogging, I believe that the virtual environment is well on its way to being thoughtfully considered by the national security community within the USA and the EU.

Virtual worlds look set to expand and become more relevant as a generation of users becomes familiar with 3D immersive worlds.  My own intention is to retain a watching brief over virtual worlds, but not to devote the time to the genre that I once did.

My writings and research will now be held over at www.roderickbjones.com.

Finally, thanks to Doug Crescenzi and David Grundy for contributing to the blog over the past three years.

From Techcrunch:

First Planet Company, a subsidiary of the MindArk Group that develops and markets the Massively Multiplayer Online Role Playing Game Planet Calypso, which is based on the Entropia Platform and part of the Entropia Universe (still with me?), claims that the world record for most expensive virtual object ever to be sold has been smashed into thousands of virtual pieces.

First Planet Company about a month ago announced the public auction of the Crystal Palace Space Station, according to the release ‘an extremely popular hunting destination that is in orbit around Planet Calypso’. The lucky winner of the auction, Erik Novak (aka “Buzz Erik Lightyear”), ended up winning the bid with a $330,000.00 USD offer.

It’s 2010, so it’s no longer surprising for most people to see this kind of money being spent on virtual objects, and I doubt this world record will stick around for long.

Novak puts it this way:

“This is a stunning investment opportunity, and I have complete faith I will recover what I spent relatively quickly. To say Planet Calypso has changed my life would be an understatement. I have even found the love of my life in the game, and now we live together in real life. I feel very confident about purchasing the Crystal Palace Space Station as I have already invested years of time, dedication, work, hope and love. All of those things have already paid off more than I could have ever imagined. With the new game engine, new features and almost ten years of experience Planet Calypso is one of the few safe investments in this economy.”

For the record (wink, wink, nudge, nudge), the previous world record as registered by the authoritative Guinness World Records book in 2008 was also a sale of virtual property in Planet Calypso.  The amount paid for that property was much smaller though, with a reported $100,000 USD being spent on a Virtual Space Resort and Nightclub now dubbed Club Neverdie.

Do you spend money on virtual goods?  What if it were a significant investment that you were confident would generate a great return down the line?

The BBC reported today that organized theft of accounts and their related in-world currency had been occurring in RuneScape.

UK Police have arrested one individual and are thought to be working with the FBI to trace a US-based suspect.

This is a fascinating case of life following fiction, as it looks as though the plot-line of Charles Stross’s novel Halting State is being played out for real.

Virtual currency is a criminal tsunami that looks set to overwhelm traditional policing methods unless policy, law and procedure are introduced to address the reality of virtual reality.

Tim Stevens, author of ubiwar, has been kind enough to answer some questions for me concerning cyber conflict in our contemporary virtual space.  Mr. Stevens is a PhD candidate at King’s College London researching institutional responses to cyber threats, particularly in the field of cyber strategy.  His related research interests include the political use of cyberspace, social technologies, violence in virtual worlds, and the nature of the technological accident.  I thought this would be an interesting opportunity to coalesce and discuss our respective areas of study.

Q: Do you have any thoughts on how the fusion of social media, location-based technologies, and real-time information may shape the context of cyber conflict in years to come?

A: I think your recent post addressed this question very well. My general position is that all things are possible, but most things are improbable. When I started Ubiwar, it was to look at how people exploited technological niches in pursuit of political ends, principally through the application of violence. I see that David Kilcullen has just characterised counterinsurgency as “a battle for adaption…against an enemy who is evolving.” This is a position with which I have a great deal of sympathy. As a battle for adaption, it follows for Kilcullen that COIN cannot really be strategic, and scholars of ‘change’ would generally agree with this. I’m also skeptical of the strategic impact of information technologies – what ‘Twitter Revolution’?

What I’m getting at is that tactical and operational use of information technologies is a massively adaptive field and people are experimental animals. Humans are hackers, and hacking is one way of achieving success in any environment. ICTs offer myriad opportunities for exploitation by a range of actors for a wide range of strategic ends. The fusion you ask about is what others would call convergence. Technologies converge spatially and temporally, always have done. The difference now is that the temporal element has been reduced to effectively zero, as you point out, which similarly collapses space, resulting in what you could call a non-locative cyberspace. If you think of ‘cyber’ as command-and-control, then we all have the ability to effect change remotely by contesting the connectivity of non-co-located actors in cyberspace.

It is significant that locative technologies are coming to the fore again. It’s almost like tying cyberspace back ‘down to earth’, although Seymour Goodman wrote years ago that cyberspace ‘always touches ground somewhere’. Hardware hasn’t gone away, nor has the wetware of the human mind. What I suspect you’re referring to is augmented reality and ubiquitous computing. Short answer: it’s all ripe for hacking. My personal take is that guys like you look into the technical possibilities, and that’s all well and good. I’m more interested in what it actually means. What happens to the body in this space, these spaces? The internet has already had a huge impact on what used to be the relatively solid notion of subjectivity. What happens to identity in cyberspace(s)? The context of cyber conflict is ultimately us – how we internalise cyberspace, or project externally into it, is unknown. I have an idea that cyberspace is not really new anyway – it was born when we became conscious, communicative animals. In that sense, cyber conflict has always been with us, and its psychological vectors remain pretty much the same, if twisted and mutated somewhat. The physiological changes are much more murky and hard to decipher. Some good work has been undertaken on ‘presence’, for example, but it’s early days. This is approximately where my research into violence in virtual worlds is situated.

Q: Are there any fundamental aspects that exist ubiquitously in all cases of cyber conflict?  If such fundamental commonalities do exist, what are they, and how could they be used to remedy future cyber conflicts?

A: Well, see above. The issue of remediation is interesting though. I think that deterrence in its various forms, for example, is a psychological matrix of cost-benefit analysis, even for actors we don’t normally think of as ‘rational’. Pre-event deterrence-by-denial dissuades an initial attack. Post-event deterrence-by-denial dissuades future attacks by demonstrating the ability to recover. Deterrence-by-punishment dissuades by plausibly threatening to kick your ass if you try anything funny.

But cyber conflicts are not just psychological, any more than other forms of conflict. The physical systems on which cyberspace is ‘parasitic’, in Albert Borgmann’s phrase, are also contested, for example, and are largely what worry SCADA wonks. Martin Libicki’s recent RAND report on cyber deterrence mentions the physical, syntactic and semantic layers of cyberspace, and this is a useful way of thinking about the differing layers of contestability. He swiped this idea from linguistics without reference but I’ll forgive him for that. This is another reason why I’m not so sure cyberspace is new, which speaks to your ‘fundamental aspects’ question. How we engage in cyber conflicts throws up a host of weirdness and counter-intuitive possibilities but not all of it is ‘new’.

Q: You recently posted Neal Stephenson’s response to a fascinating question concerning the protection of hacking tools (in the United States) under the Second Amendment.  How would you respond to that question?

A: Being a cheese-eating surrender-monkey I’m going to be called out whatever I say in response to this. I’m not a priori opposed to the Second Amendment but I do think it’s been hijacked somewhat over the years. US gun-control laws are in dire need of review: what’s the point in having guns to keep the government in check if all you do is shoot fellow Americans with them? In keeping with almost everyone else – including US citizens – I have to claim ignorance as to what it really means. As to whether it extends to ‘hacking tools’, i.e. code, I’m with Stephenson here. My default position when it comes to constitutional issues is generally ‘do nothing’ unless there’s a very good case for doing otherwise; I don’t think that case exists yet. Of course, we don’t even have a written constitution in the UK, so what do I know?

There’s another issue here, one that the military are currently actively exploring, and which the UN are likely to tackle at some point: cyber arms control. My initial response is: how the hell are you going to police that? Code is not amenable to the same forms of physical monitoring and intelligence regimes as kinetic weapons. Code lacks the traditional dimensionality required for control. I should think the implications of that are obvious.

Q: When discussing cyber conflict, you appear to become frustrated when the argument centers on hyperbole.  This is absolutely understandable.  What advice would you give to those of us (including myself) on the front lines who sometimes unconsciously fall into this trap?

A: Well, it’s pretty simple, actually. Be conscious of who you are. If you lose sight of the bigger picture, then you have little hope of formulating realistic solutions to realistic problems. Planning involves considering worst-case scenarios and formulating strategies for dealing with them. We’ve had six decades of doing exactly that with nuclear weapons, for example. The problem with doing that is if everything is predicated on the worst-case scenario – including public discourse – the solutions we come up with are just as likely to be the worst ones. It boils down to understanding the effects that one’s own actions have, and taking responsibility for them. Perspective’s a handy tool and it doesn’t just mean looking outwards; it means examining yourself too. Being critical. If cyberspace is as important as everyone says it is, it would be wise for all those involved to think about exactly what it is they’re pushing for, and ask if their actions in any way further the interests of the global commons. If it doesn’t, then your standpoint may need to be tweaked a bit. And, for the record, I don’t think that national interest necessarily trumps all.

Q: In a society prone to finger pointing, what is the appropriate response to the nature of technological accidents?

A: I have two answers, one practical, the other philosophical. The practical answer would be to find out what went wrong. Sounds simple, right? Not always so. You’re right about finger-pointing; everyone’s so gee-ed up with their own importance, and too weak to resist the bleating of single-interest groups, that people tend to get fired before anyone even knows what the problem is. By all means hold people to account but count to five before you sack someone just because you need a scapegoat. Some of the problems of technology are ‘wicked’ ones, and require significant unpacking before action is taken. One of the problems with ‘cyber’ is that so many people are shouting that cyber defence is moving way too slowly to keep up with the environment that the fingers are pointing even before anything’s happened. There are too many people who in one breath are repeating the mantra, ‘we.must.all.work.together’, whilst tearing strips off anyone who isn’t half-as-damn-smart as they are. That’s a really good basis for co-operation. Sometimes, of course, shit just happens. Learn from it. Move on.

The philosophical argument is slightly different, and there is no right response. There are two people to bear in mind here. One is the ‘anarcho-Christian’ theorist Paul Virilio, the other Ted Kaczynski, the Unabomber. In different ways, they would both maintain that technology contains within itself the seeds of its own destruction. The technological accident is therefore something inevitable. I always paraphrase this as, ‘you invent the car, you get the car crash; invent TV, and you get Fox’. Trite, I know, but you get the point. So, the response to a technological accident in these terms is complex. You can throw your hands up and blame it on the evils of technology in a told-you-so kind of way and go and break up all the spinning Jennies, or you can stop and wonder if technology really is teleological in these terms. Does technology really have an imperative, a drive, a force beyond the control of humankind, or can its trajectory be shaped by humans? Your response really depends on whether you’re a hardcore technodeterminist in the first instance, or a social constructivist in the second. Me, I’m somewhere between the two right now, but vacillate daily. Today’s metric is 60:40 in favour of determinism. Ask me again tomorrow.

This post had originally been titled “The Top Augmented Security Threats”… but on what grounds do I have to make such claims?  These technologies and ideas are new.  As such, aggressively speculating on potential future dangers (with no idea how real they are) is dangerous.  In writing this blog, I hope to spark new thoughts and build upon the ideas of others.  What I do not want is to over-sensationalize the threats I discuss.  Many of them are simply conceptual and interesting to think about, and to no extent do I wish to peddle fear to others for my own personal gain.  ::cough:: 60 Minutes ::cough::  As this blog matures, I hope to promote worthy dialogue and keep fear-mongering at bay.  That said…

With augmented reality systems on the rise it has become important to focus on the corresponding security threats users may face.  Fundamentally, the AR paradigm allows users to interface with a more intelligent planet.  Our mobile devices now provide a gateway to context specific knowledge and information.  This knowledge rich virtual layer permits individuals to more intelligently maneuver and manipulate our contemporary surroundings.

Context hacking and location manipulation: As we become more dependent on these mobile devices to provide information relevant to our surrounding environment, a trust relationship is born.  We as users come to trust that the information we receive is valid and credible.  Applications such as Layar show users what is in proximity to them by displaying real-time digital information on top of reality through the mobile phone’s camera.  Much of the real-time digital information that we find in such applications is user-submitted data.  What is to prevent malicious users from targeting specific locations and submitting false information?  Attackers could target specific locations, manipulate the environment’s digital context, and more effectively facilitate attacks such as spear phishing and social engineering.  Attackers can easily leverage the power of social context to stack the deck in their favor.  Take it one step further: what if attackers target a specific business or organization?  By hacking context and manipulating location, attackers can damage an organization’s reputation.  Attackers could even go so far as to depreciate the value of a home simply by means of context hacking and location manipulation.  As can be seen in the new Twitter API for location-based trends, these attacks really are not that far away.
Location-based DDoS’ing: AR systems and location go hand in hand.  It is the location-based information, in many cases, that makes an AR system worth using.  The ubiquitous networking of objects and the Internet of Things implies that networks and their hosts will become somewhat presence-aware.  Users will come to rely upon systems and networks with presence that are location-specific.  Attackers may choose to DDoS location-specific targets particular to a mission.  However, this idea is not intrinsically new; AR systems simply have the potential to amplify such threats.
Physical threat: Continuing with the importance of location, physical threats become more relevant.  Users with mobile devices acting as sensors promote the dissemination of location-relevant information.  As such, an individual targeting another individual in physical space (instead of virtual space) could conceivably do so more effectively.
Spam: Spam, sigh, the problem we were to have solved back in 2006.  Spam will be just as relevant to AR systems as it is today with email.  This virtual layer will likely become littered with spam.  Advertisements will be everywhere.  Users themselves may become the advertisements… similar to something like this.  Will users simply learn to tune them out as they do with advertisements on the Web?  Probably.  However, the market and dirty money to disseminate spam will still be there.
Mobile metadata mining: I posted about this a few days ago.  Is it a threat?  I suppose.  Is it something that should keep me up at night?  Absolutely not.  The metadata associated with output from mobile devices will eventually allow us to do some pretty incredible things… that is, of course, if it becomes standardized.  Until then, mobile metadata mining will simply be the mass acquisition of dissimilar data.  The differences in format and semantics will only permit a group or individual mining the data to do so much.  If some kind of standard to recognize the who, what, where, and when does come to exist, look out.  Intelligence gathering will grow to new levels.
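To make the context-hacking threat concrete, here is a minimal sketch of one obvious defense: requiring corroboration from multiple independent users before an AR service trusts a submitted claim about a location. Everything here (the class name, the threshold, the coordinates, the claims) is invented for illustration; it is not how Layar or any real service works.

```python
from collections import defaultdict

class POIRegistry:
    """Toy registry for user-submitted points of interest.

    A claim is only 'trusted' once enough distinct users have
    independently reported the same thing about a location.
    """

    def __init__(self, min_corroborations=3):
        self.min_corroborations = min_corroborations
        # (location, claim) -> set of submitting user ids
        self._reports = defaultdict(set)

    def submit(self, user_id, location, claim):
        self._reports[(location, claim)].add(user_id)

    def trusted_claims(self, location):
        """Return claims at `location` with enough independent reports."""
        return [
            claim
            for (loc, claim), users in self._reports.items()
            if loc == location and len(users) >= self.min_corroborations
        ]

registry = POIRegistry(min_corroborations=3)
# A lone attacker plants a phishing lure at a busy intersection...
registry.submit("attacker-1", "40.7128,-74.0060", "Official bank branch: call 555-0100")
# ...while three independent users report the same genuine tip.
registry.submit("alice", "40.7128,-74.0060", "Coffee shop, good wifi")
registry.submit("bob", "40.7128,-74.0060", "Coffee shop, good wifi")
registry.submit("carol", "40.7128,-74.0060", "Coffee shop, good wifi")

print(registry.trusted_claims("40.7128,-74.0060"))
# Only the corroborated claim survives; the attacker's single report does not.
```

Of course, an attacker who controls many accounts defeats naive corroboration outright, which is precisely why context hacking remains an interesting threat rather than a solved one.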

Augmented reality is here.  Right now, today.  We are about to see some creative developers make some incredibly powerful applications, applications that will change our lives on a daily basis.  So what is augmented reality?

In case the concept of augmented reality is still new to you, basically it’s the placement of a digital layer of information on top of a real-life view of the world around you, as seen through e.g. a mobile phone’s camera lens. Using augmented reality, you could be using your smartphone to glance around the main square of a city you’re visiting and get up-to-date information about nearby restaurants, ATMs, real estate offers, and more on-screen, bolted on top of what you’d be seeing if you weren’t looking through the lens.

When I first started this blog about 4-5 months ago, I understood the power of virtual environments, but I focused too heavily on three-dimensional spaces.  I believe three-dimensional virtual spaces that are Metaverse-like are still important, but I am beginning to take a step back from them.  Based on where we are today with mobile computing, social networks, location-based media, and real-time information, it is hard not to get excited about the oncoming explosion of AR systems.

Instead of providing a third dimension of internet context, augmented reality has an intelligent virtual layer that interfaces with the real world.  Currently, the information residing on this virtual layer is primarily solitary and cached.  Soon, users will be interacting with, and collaborating over, this virtual layer in real time.  The output users embed into the virtual layer from their mobile devices, whether it be text, pictures, audio content, etc., will have core metadata components bound to it.  These core metadata components will answer the questions associated with mobile output: the who, what, where, and when.  This metadata permeating throughout the AR system makes the system more intelligent.  However, it will leave behind a digital trail unique to target individuals.

Scraping these AR systems and mining this user-output metadata will become a powerful intelligence-gathering tool.  Relationships between individuals, their locations, their interests, etc. will all be easily ascertained.  This information will no doubt provide value to malicious attackers, but it will also promote intelligent risk-management applications.  Organizations and nation-states will use aggregated metadata from mobile devices to model scenarios and perform dynamic threat-vector analysis.
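As a toy illustration of this kind of mining, the sketch below infers relationships from who/what/where/when metadata by simple co-presence: two people who repeatedly appear at the same place in the same hour are probably connected. The record format, names and places are hypothetical, assumed here purely for the example.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical normalized metadata records: (who, what, where, when-hour)
records = [
    ("alice", "photo",   "cafe-5th-ave", 9),
    ("bob",   "checkin", "cafe-5th-ave", 9),
    ("alice", "note",    "office-hq",    11),
    ("bob",   "photo",   "office-hq",    11),
    ("carol", "checkin", "airport",      14),
]

def co_presence(records):
    """Infer links between users seen at the same place in the same hour."""
    by_slot = defaultdict(set)
    for who, _what, where, when in records:
        by_slot[(where, when)].add(who)
    # Count how often each pair of users shares a (place, hour) slot.
    links = defaultdict(int)
    for users in by_slot.values():
        for a, b in combinations(sorted(users), 2):
            links[(a, b)] += 1
    return dict(links)

print(co_presence(records))
# {('alice', 'bob'): 2} — two co-occurrences suggest a real-world relationship
```

This is the whole point about standardization: the analysis is trivial once the who/what/where/when fields line up, and nearly impossible while every source uses its own format.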

AR systems will be powerful and provide great value, but individuals must be careful with how they interact with the virtual grid and what they’re willing to embed within it.

How does one damage facebook to cause them serious monetary losses?  I was recently posed this question and did not have an immediate response.  It is an intimidating question considering facebook’s ubiquity throughout the world.  facebook is a massive giant with seemingly endless resources and support.  It defines the success of social media in the virtual space.

When I was first thinking about this question I was focusing on it too heavily from a low-level technical perspective.  I was devising overly complex ideas that were unreasonable and could by no means challenge the colossal beast facebook is today.  Eventually I had to take a step back and think about it at a high level.  Upon thinking about it further, I believe I have come up with something that is really quite simple and would not be difficult for an organization with adequate resources to pull off.

Let me begin by first acknowledging that the following ideas are by no means novel.  Yet these independent, unrelated concepts form an innovative idea once combined.

facebook is often viewed as a ‘social networking‘ service provider.  I prefer to look at facebook as an identity service where users can autonomously stand up an identity that facilitates social networking.  Users rely upon this identity service to interface with people they know (or don’t know) from the real world in a virtual environment.  Fundamentally, facebook users must trust facebook’s identity service, otherwise the system fails.  When users can no longer trust this service they will go elsewhere, and facebook will lose money.

So, how does one attack this identity service?

In recent months we have seen individuals stand up both twitter and facebook profiles that fraudulently pose as celebrities.  This causes a number of problems for service providers because users can no longer adequately trust the identity service they rely on.  Questions arise: how do I know whether I’m really communicating with, following, or friend’ing the real person, or someone claiming to be that person, in a virtual environment?  How can I trust that someone is who they say they are?  This comes back to one of the hardest problems to solve in computer security: identity management.

As a facebook adversary (or an adversary of another organization leveraging facebook as an attack medium, which I will get into in a minute), it is important to create identity ambiguity on a grand scale.  If only a few randomly selected individuals have multiple accounts, one actually legitimate and the others fraudulent, facebook’s reputation as an identity service provider will likely not be tarnished.  It is imperative for these fraudulent accounts to become widespread.  The facebook population is absolutely mammoth, so I do not expect all users or members of their social circles to be affected, but rather enough to raise some red flags, jeopardize user trust in facebook’s service, and cause some users to stop using it.

So, how does an adversary initiate the rampant creation of fraudulent facebook accounts?

Many of those who study virtual worlds and MMORPGs are familiar with the concept of gold farming.

Gold farming is a general term for an MMORPG activity in which a player attempts to acquire (“farm”) items of value which are sold to create stocks of in-game currency (“gold”), usually by exploiting repetitive elements of the game’s mechanics. This is usually accomplished by carrying out in-game actions (such as killing an important creature) repeatedly to maximize gains, sometimes by using a program such as a bot or automatic clicker. More broadly, the term “gold farmer” could refer to a player of any type of game who repeats mundane actions over and over in order to collect in-game currency and items. An organization which organizes farmers is known by some as a sweatshop, though the less value-laden term is “workshop” or “gold farm”.

A motivated adversary or organization (perhaps a facebook competitor) with adequate resources could stand up a fraud farm composed of cheap laborers.  These fraud farms and their fraud farmers could repetitively stand up fraudulent facebook accounts.  These fraudulent accounts would mimic legitimate ones (their pictures, their information, etc.), though they would require a different email address.  It would not be difficult for a fraud farmer to stand up name-appropriate email addresses to impersonate real ones.  Also, in many cases, fraud farmers would need to befriend their targets to obtain the information necessary for standing up acceptable fraudulent accounts.  We already know how many individuals have no problem accepting friend requests from people they don’t know.  They would probably be even more inclined to accept friend requests from individuals with the same name.  “Wow, this person has the same name as me, how cool!”  This really is how many people think.  Once this relationship exists, the fraud farmer has the tools necessary to stand up a counterfeit account.
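From the defender’s side, the first-pass countermeasure to this scheme is easy to sketch: flag account pairs whose display names are nearly identical but whose email addresses differ. The example below is hypothetical throughout (the accounts, the 0.9 threshold, and the use of simple string similarity are all assumptions for illustration, not a description of anything facebook actually does):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_impersonations(accounts, name_threshold=0.9):
    """Flag account pairs with near-identical names but different emails.

    `accounts` is a list of (account_id, display_name, email) tuples.
    """
    flagged = []
    for i in range(len(accounts)):
        for j in range(i + 1, len(accounts)):
            id_a, name_a, email_a = accounts[i]
            id_b, name_b, email_b = accounts[j]
            if email_a != email_b and similarity(name_a, name_b) >= name_threshold:
                flagged.append((id_a, id_b))
    return flagged

accounts = [
    (1, "Jane Q. Public", "jane.public@example.com"),
    (2, "Jane Q Public",  "jane.q.public@mailfarm.example"),  # fraud-farm clone
    (3, "John Doe",       "jdoe@example.com"),
]
print(flag_impersonations(accounts))  # [(1, 2)]
```

The catch, and the reason the fraud-farm attack is plausible at all, is that same-name pairs are common among hundreds of millions of users, so a detector like this drowns in false positives unless it also weighs photos, friend graphs and sign-up behavior.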

For previously existing relationships between individuals on a social network, determining real accounts of friends versus fake accounts would be trivial.  However, it becomes interesting in cases in which new relationships between individuals are being established.  It becomes particularly interesting when new relationships are established between individuals from the same organization.

Let’s say I have a fairly large fraud-farming operation in some third-world country and I’ve decided to target Goldman Sachs.  It would be easy to establish facebook friendships with legitimate Goldman Sachs employees by friend’ing them with fraudulent accounts that impersonate other real Goldman Sachs employees.  In this case, facebook is being leveraged as an attack medium for an outsider to interface with real, internal employees.  Think about all the things a fraud-farming unit could potentially do with these trust relationships.  The possibilities are endless.

Eventually some users and some organizations would lose faith in the identity service facebook is providing.  In extreme cases, organizations may even go so far as to ban employees from even having accounts!  Think about all the press something like that would get.  If anything, it would certainly raise questions regarding the risks of facebook and their services.

I exaggerate a bit with this post’s title.  Chances are this would not kill facebook… but it would certainly cost them money.  Not only that, this concept also turns facebook into a powerful weapon to target other organizations.  It could cause those organizations devastating financial and information losses.

It is becoming apparent that social media and virtual relationships have serious security implications for individuals and their organizations.  These trends begin to pose the security questions of tomorrow.

Alternate reality gaming (ARG) is a relatively new trend beginning to gain serious traction.  At its core, an ARG is simply a communication-rich, collaborative environment that coalesces the real world and virtual space.  An alternate reality game bridges the metaphysical disconnect between the two environments.

I recently began using foursquare.  The folks over there tout foursquare as 50% friend-finder, 30% social city guide, 20% nightlife game.  foursquare provides a virtual world that interfaces with the real world.  Registered users use it to connect with friends, update their location (“checking in”), describe what they are doing, and receive points for doing so.  The point system and earning badges (the gaming aspect) provide users incentive to do and try new things.  Additionally, it encourages them to share information: “You should check out this bar and try their microbrew!”  Users themselves provide knowledge-rich information specific to a target location.  The community makes the system more intelligent and capable of meeting profound knowledge-management needs.

From a security perspective, the question I find myself asking is: what utility can be found in this information?  Instead of focusing on it at a micro-level (privacy, dangers of sharing location, social engineering, phishing, etc.), it is far more interesting to look at it from a macro-level.

A paradigm shift in social media is coming in which the real world and virtual space interact as a single entity.  This entity is composed of three fundamental components: people, location, and knowledge.  These three components lay the foundation for a dynamic, living, breathing system that evolves over time, shaped by how the components are built and structured around each other.  It is somewhat analogous to an iterative mathematical process in which operators and operands are used to create complex equations and theorems; those results then become the foundation for future equations and theorems, and so on.

Most interesting are the relationships that form between people, location, and knowledge.  These relationships build on each other to create profoundly rich links and ties that essentially act as the system’s DNA.

With all of this information, security analysts could create models to uncover interesting relationships between individuals, locations, and the knowledge associated with them.  One could then simulate potential outcomes by incorporating variables into the model, enabling security professionals to predict future relationships between individuals and their locations and thus reveal common threat indicators and patterns.  Such models, used for exploring existing relationships and simulating future ones (based on variable inputs), would, with hope, provide cogent foresight for law enforcement.
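A minimal sketch of the kind of model described above: derive person-to-person links from shared locations in check-in data.  The records, names, and tags here are entirely made up, and real analysis would of course weigh time, frequency, and the knowledge component as well.

```python
from collections import defaultdict

# Hypothetical check-in records: (person, location, knowledge tag).
checkins = [
    ("alice", "cafe_42", "microbrew"),
    ("bob",   "cafe_42", "wifi"),
    ("alice", "library", "lecture"),
    ("carol", "cafe_42", "microbrew"),
]

# Index people by the locations they have checked in to.
people_at = defaultdict(set)
for person, location, _tag in checkins:
    people_at[location].add(person)

# Derive person-person links from shared locations: a crude
# stand-in for the people/location/knowledge relationships
# described above.
links = set()
for location, people in people_at.items():
    for a in people:
        for b in people:
            if a < b:
                links.add((a, b, location))

print(sorted(links))
```

Even this crude co-location rule surfaces ties (alice–bob, alice–carol, bob–carol) that never appear explicitly in the raw data, which is the macro-level utility the post is pointing at.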

When I come in to work every morning I follow a standard routine.  First, I grab some coffee and fruit down at the cafe.  I then check my email, voicemail, calendar, etc. and plan my day accordingly.  Next, I catch up on the news – technical, political, weather, security, sports, etc.  Trite, clichéd, boring, eh?  Well, the way I go about accessing the Internet is somewhat unique…

A colleague of mine recently turned me on to the concept of ‘ephemeral desktops’.  The idea behind ephemeral desktops is simple.  The reality is that an attacker can catch any one of us snoozing at any given time, whether through clickjacking, drive-by downloads, phishing, or something else.  Inevitably, every organization will at some point have an employee fall prey to persistent malware and put the company’s network at risk.  Ephemeral desktops are a great tool for mitigating persistent malware threats.  How do they do this?  What exactly does this mean?

Getting back to my daily routine: before I check the news, I boot a custom Ubuntu 9.04 live CD (that my colleague put together).  This Ubuntu live CD is read-only, with a few applications useful for my job, including SSH and VPN clients.  The idea behind the ephemeral desktop, in my case the customized Ubuntu live CD, is that nothing can be written to disk.  No persistent malware can be installed, because I am browsing the Internet from a read-only CD.  Perhaps, while using my ephemeral desktop, I accidentally download some form of persistent malware.  It really doesn’t matter: the next time I boot from my Ubuntu live CD I will start, once again, from a clean state.  I can lose a battle but still win the war.
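The clean-state principle behind the live CD can be illustrated in miniature.  This sketch is not the live-CD setup itself, just an analogy: all session state lives in a throwaway directory that vanishes when the session ends, so nothing dropped during the session survives into the next one.

```python
import os
import tempfile

def run_session(workdir):
    # Simulate the browser dropping a file during the session,
    # e.g. a drive-by download (the filename is invented).
    dropped = os.path.join(workdir, "malware.bin")
    with open(dropped, "wb") as f:
        f.write(b"\x90\x90")
    return dropped

with tempfile.TemporaryDirectory() as session:
    path = run_session(session)
    assert os.path.exists(path)  # present during the session

# The directory and everything in it are gone: the next
# "boot" starts from a clean state.
assert not os.path.exists(path)
```

The live CD applies the same idea at the level of the whole operating system: the writable state exists only in RAM, and a reboot discards it.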

The idea of lightweight, ephemeral desktops is auspicious considering the direction we are headed with the cloud.  As for virtual environments, users require a client to interface with a particular environment, and virtual environments currently rely too heavily on these clients for functionality (scripting and condensed physics engines).  It may be interesting to pursue research into ephemeral clients, built on the same principles as the ephemeral desktop, that always start from a clean state.  Whatever malicious content may or may not have been downloaded during a previous virtual session, the user could trust that no persistent malware has been written to their disk.