It's not as if the people now developing ubiquitous systems are blind to the more problematic implications of their work (not all of them, anyway, and not by a long stretch). But perhaps unsurprisingly, when they think of means to address these implications, they tend to consider technical solutions first.
Consider the ethic that your image belongs to you: that in private space, anyway, you have the right to determine who is allowed to record that image and what is done with it. At the seventh annual Ubicomp conference, held in Tokyo in September 2005, a team from the Georgia Institute of Technology demonstrated an ingenious system that would uphold this ethic by defeating unwanted digital photography, whether overt or surreptitious.
By relying on the distinctive optical signature of the charge-coupled devices (CCDs) digital cameras are built around, the Georgia Tech system acquires any camera aimed its way in fractions of a second, and dazzles it with a precisely calibrated flare of light. Such images as the camera manages to capture are blown out, utterly illegible. As demonstrated in Tokyo, it was both effective and inspiring.
Georgia Tech's demo seemed at first blush to be oriented less toward the individual's right to privacy than toward the needs of institutions attempting to secure themselves against digital observation: whether it be Honda wanting to make sure that snaps of next year's Civic don't prematurely leak to the enthusiast press, or the Transportation Security Administration trying to thwart the casing of their arrangements at LAX. But it was nevertheless fairly evident that, should the system prove effective under real-world conditions, there was nothing in principle that would keep some equivalent from being deployed on a personal level.
This functions as a timely reminder that there are other ways to protect ourselves and our prerogatives from the less salutary impacts of ubiquitous technology than the guidelines contemplated here. There will always be technical means: various tools, hacks and fixes intended to secure our rights for us, from Dunne & Raby's protective art objects to the (notional) RFIDwasher, a keyfob-sized device that enables its users "to locate RFID tags and destroy them forever!" Some will argue that such material strategies are more efficient, more practical, or more likely to succeed than any assertion of professional ethics.
However clever the Georgia Tech system was as a proof of concept (and it made for an impressive demo), there were factors it was not able to account for. For example, it could not prevent photographers using digital SLR cameras (or, indeed, conventional film-based cameras of any kind) from acquiring images. This was immediately pointed out by optics-savvy members of the audience and openly acknowledged by the designers.
If you were among those in the audience that day in Tokyo, you might have noticed that the discussion took a 90-degree turn at that point. It became one of measures and countermeasures, gambits and responses, ways to game the system and ways to bolster its effectiveness. Thirty seconds after the last echo of applause had faded from the room, we were already into the opening moments of a classic arms race.
This may well be how evolution works, but it has the unfortunate effect of accommodating instead of challenging the idea that, for example, someone has the right to take your image, on your property, without your knowledge or consent. It's a reframing of the discussion on ground that is potentially inimical to our concerns.
Admittedly, this was a presentation of a prototype system at an academic technology conference, not an Oxford Union debate on the ethics of image and representation in late capitalism. But isn't that just the point? Once we've made the decision to rely on an ecology of tools for our protection (tools made on our behalf, by those with the necessary technical expertise), we've let the chance to assert our own prerogatives slip away. An ethics will inevitably be inscribed in the design of such tools, but it needn't be ours or anything we'd even remotely consider endorsing. And once the initiative slips from our grasp, it's not likely to be returned to us for a very long time.
We know, too, that such coevolutionary spirals tend to stretch on without end. There's rarely, if ever, a permanent technical solution in cases like this: There are always bigger guns and thicker grades of armor, more insidious viruses and more effective security patches.
From my point of view, then, technical solutions to ethical challenges are themselves problematic. I'm not suggesting that we do without them entirely. I'm saying, rather, that technical measures and ethical guidelines ought to be seen as complementary strategies, most effective when brought to bear on the problem of everyware together. And that where we do adopt technical means to address the social, political, and psychological challenges of ubiquitous technology, that adoption must be understood by all to be without prejudice to the exercise of our ethical prerogatives.