Thesis 69

It is ethically incumbent on the designers of ubiquitous systems and environments to afford the human user some protection.

We owe to the poet Delmore Schwartz the observation that "in dreams begin responsibilities." These words were never truer than they are in the context of everyware.

Those of us who have participated in this conversation for the last several years have dreamed a world of limitless interconnection, where any given fact or circumstance can be associated with an immensely large number of others. And despite what we can see of the drawbacks and even dangers implied, we have chosen to build the dream.

If the only people affected by this decision were those making it, that would be one thing. Then it wouldn't really matter what kind of everyware we chose to build for ourselves, any more than I'm affected right now by Steve Mann's cyborg life, or by the existence of someone who's wired every light and speaker in their home to a wood-grained controller they leave on the nightstand. However strange or tacky or pointless such gestures might seem, they harm no one. They're ultimately a matter of individual taste on the part of the person making them and therefore off-limits to regulation in a free society.

But that's not at all what is at stake here, is it? By involving other people by the hundreds of millions in our schemes of ubiquity, those of us designing everyware take onto our own shoulders the heaviest possible burden of responsibility for their well-being and safety. We owe it to them to anticipate, wherever possible, the specific circumstances in which our inventions might threaten the free exercise of their interests, and, again wherever possible, to design into the things we build provisions that would protect those interests.

This is not paternalism; in fact, it's just the opposite. Where paternalism is the limitation of choice, all I am arguing is that people be told, at every step of the way, just what it is they are being offered in everyware, so that they can make meaningful decisions about the place they wish it to have in their lives.

The remainder of this book will articulate some general principles we should observe in the development of ubiquitous computing to secure the interests of those people most affected by it.

Section 7. How Might We Safeguard Our Prerogatives in an Everyware World?

By now, our picture is essentially complete. We have a reasonably comprehensive understanding of the nature of ubiquitous computing and the forces involved in determining that nature.

How can we, as designers, users, and consumers, ensure that everyware contains provisions preserving our quality of life and safeguarding our fundamental prerogatives?

Thesis 70

It will not be sufficient simply to say, "First, do no harm."

We've agreed that, in order to protect the interests of everyone involved, it would be wise for us to establish some general principles guiding the ethical design and deployment of ubiquitous technology.

The most essential principle is, of course, "First, do no harm." If everyone contemplating the development of everyware could be relied upon to take this simple idea to heart, thoughtfully and with compassion, there would be very little need to enunciate any of the principles that follow.

There are difficulties with such a laissez-faire approach, though. For one thing, it leaves entirely too much unspoken as to what constitutes harm, as to who is at risk, as to what the likely consequences of failure would be. It assumes that everyone developing everyware will do so in complete good faith and will always esteem the abstract-seeming needs of users more highly than market share, the profit motive, or the prerogatives of total information awareness. And, even where developers can be relied upon to act in good faith, it's simply not specific enough to constitute practically useful guidance.

The next best thing, then, is to develop a strategy for ethical development that does take these factors into account: something that spells out the issues in sufficient detail to be of use to developers, that strikes a balance between their needs and those of users, and that incentivizes compliance rather than punishing noncompliance.

How might we go about designing such a strategy? Let's consider the fundamental nature of the challenge before us one last time, and with that fresh in mind, articulate a framework that should help us develop wiser, more useful, and more humane instantiations of everyware.