22.2. The Theoretical Problem: How Is Knowledge Distributed?

Contemporary literature on "communities of practice" takes off from a very similar bet.[3] This literature offers a set of relatively obvious but useful design principles that appear to contribute to success. None of these principles is specified well enough to be operational, but they are clearly worth keeping in mind as a checklist against which any system design can be compared. Roughly, they are:

[3] See, for example, Etienne Wenger, Communities of Practice: Learning, Meaning, and Identity (Cambridge University Press, 1999).

  • Design for evolution (allow the community to change).

  • Open a dialog between inside and outside perspectives (tightly insulated communities tend to corrode).

  • Allow for different and bursty levels of participation (different people will participate at different levels, and any single person will participate at different levels over time).

  • Preserve both public and private community spaces (not all community interactions are public; backchannels should be available).

  • Focus on the value that is created for the people in the community.

  • Mix the familiar and the new.

  • Facilitate the creation of a rhythm (pure burstiness and unpredictability tend to corrode commitment).

These design principles actually presuppose quite a lot about the nature of the knowledge that the community of practice is trying to generate, organize, and share. I want to parse out some of the assumptions about that knowledge and some of the different ways it may be embedded in communities to illustrate this point.

Consider again the common saying "none of us is as smart as all of us." The operative assumption is that each one of us has bits and pieces of "good" (useful) knowledge and "bad" (wrong, irrelevant, or mistaken) knowledge about a problem. If Frederick Brooks was even partially right about the social dynamics of complex reasoning (and I think he was right), the demonstrated success of the open source process cannot simply depend on getting more people or even the "right" people to contribute to a project. It depends, crucially, on how those people communicate information to each other. Put differently, depending upon how the community selects, recombines, and iteratively moves information forward over time, the collectivity will become either very smart or very stupid.

I am just saying explicitly here what Eric Raymond implied about the open source process. It is not simply that "with more eyeballs all bugs become shallow." It depends directly on how those eyeballs are organized. And since I am treating organization as an outcome of the kind of information-processing algorithm the community needs, getting to operational design principles means understanding at least these two things better: how knowledge is distributed in the community, and what error-correction mechanisms you can apply to that knowledge. In simpler language, who knows what, and how do you fix the mistakes?
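To make that point concrete, here is a minimal toy sketch in Python. It is mine, not the book's, and it assumes nothing about any real community: each member answers a yes/no question correctly with some fixed probability, and the only thing that varies is the rule by which the group combines those answers. Majority voting is one crude error-correction mechanism; adopting whatever one arbitrary member says is another way of "using more people" that corrects nothing.

```python
# Toy illustration (not from the chapter): the aggregation rule, not just the
# number of participants, determines whether the collectivity gets smarter.
# Each member independently answers a yes/no question correctly with probability p.
import random

def member_answer(truth, p=0.6):
    """One member's noisy answer: correct with probability p."""
    return truth if random.random() < p else not truth

def majority_vote(truth, n, p=0.6):
    """Aggregate by majority vote across n independent members."""
    votes = sum(member_answer(truth, p) for _ in range(n))
    return votes > n / 2

def copy_one(truth, n, p=0.6):
    """'Aggregate' by adopting whatever a single arbitrary member says (n is ignored)."""
    return member_answer(truth, p)

def accuracy(rule, n, trials=10_000):
    """Estimate how often the group's combined answer is correct."""
    truth = True
    return sum(rule(truth, n) for _ in range(trials)) / trials

for n in (1, 11, 101):
    print(f"n={n:>4}  majority={accuracy(majority_vote, n):.3f}  "
          f"copy-one={accuracy(copy_one, n):.3f}")

# With p=0.6, majority accuracy climbs toward 1.0 as n grows, while copying one
# member stays near 0.6: more eyeballs only help when the community's
# information-processing rule actually combines what they see.
```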

We know from both intuition and experience that much of what a group needs to "know" to do something is in fact coded in the experiences, tacit knowledge, implicit theories, and data that are accessible to individuals. The problem for the group is that these individuals often don't know how to share that knowledge, aren't incentivized to, or simply haven't thought of sharing it with others in a mutually beneficial way. We know also that there is noise in the signal. At best, the pieces of distributed knowledge that (if they could be brought together effectively) would make up a solution to a problem are floating around in a sea of irrelevant or incorrect "knowledge."

In a changing and uncertain environment, with strategic players who sometimes have economic incentives to mislead others, and a relatively low tolerance for cascading failures that hurt human lives, the law of large numbers won't solve this problem for us. That is a complicated way of saying that we can't afford to wait for evolutionary selection. Most of evolution is wasted resources. It is extremely inefficient and slow, destroys enormous amounts of information (and protoplasm), and can't backtrack effectively. No one wants this for human systems and it's not clear that we should tolerate it. We need an engineered system.

We also know that this is a very tall hurdle to get over. Large firms commit huge resources to knowledge management, and with very few exceptions (Xerox's Eureka project is notable here) these investments underperform. These systems fail in a number of distinct ways. The most common (and probably the most frustrating) is simply that nobody uses the system, or not enough people use it to generate sufficient interest. More troubling is the failure mode in which the "wrong" people use the system: people with good intentions who happen to have bad information, or people who might be trying to game the system or intentionally insert bad information to advantage themselves over others, in a manner that is either cynical or strategic, depending on how you look at it.

There are other potential failure modes, but the point is to recognize that there is no inherent ratchet-up mechanism for knowledge management. The system could deteriorate over time in several ways. People could share mistakes with each other and scale them up. People could reuse past experiences that look successful in the short term, or to particular individuals, but are actually failures from the long-term perspective of the community. You could attract the wrong "experts" into your network, or, perhaps more likely, use experts for the wrong purpose. And you could populate a database with garbage, producing multiplying wastes of effort and cascading failures of behavior. All of us have worked in organizations or communities that have suffered from knowledge management failures of at least one of these types.

But put the community in the background for a moment, and consider the problem from a microperspective by imagining that you are a person searching for a solution to a problem within that community. Now, how knowledge is distributed directly affects the search problem that you face. There are at least three possibilities here.

Case 1 is where you have a question, some other individual has the answer, and the problem for you is whether you can find that person and whether that person is interested in sharing with you what she knows. Case 2 is where no single person has the answer to your question; instead, pieces of the answer are known by or embedded in many people's experiences. The relevant bits of information float in a sea of irrelevant information; your problem is to separate out the bits of signal from the noise and recombine them into an answer. Case 3 is a search and discovery problem. Some of the knowledge that you need is floating around in disaggregated pieces (as in Case 2) but not all of it; you need to find and combine the pieces of what is known and then synthesize answers or add to that new knowledge from outside the community itself.

Here's where your dilemma gets deeper. You don't know at the outset whether you are facing Case 1, 2, or 3. And it matters for what kind of search algorithm you want the system to provide for you. For example, should you use a snowball method (go to the first node in the network and ask that node where to go next)? Or some kind of rational analysis rule? Or a random walk? Or maybe you should just talk to the people you trust.
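As an illustration only, here is a small Python sketch of two of those strategies on an invented referral graph. The names and links are hypothetical, not drawn from any real community; the point is just how differently the two searches behave on the same network.

```python
# Hypothetical "who should I ask next?" network: each person points you toward
# the people they would refer you to. Names and links are invented for illustration.
import random

REFERRALS = {
    "you":    ["ana", "bo"],
    "ana":    ["carla", "bo"],
    "bo":     ["ana", "dev"],
    "carla":  ["expert"],
    "dev":    ["carla", "expert"],
    "expert": [],
}

def snowball_search(start, target, max_hops=10):
    """Snowball method: ask the current node who to ask next and follow
    its first referral each time (a deliberate referral chain)."""
    node, hops = start, 0
    while node != target and hops < max_hops:
        referrals = REFERRALS.get(node, [])
        if not referrals:
            return None  # dead end before reaching the target
        node, hops = referrals[0], hops + 1
    return hops if node == target else None

def random_walk(start, target, max_hops=10):
    """Random walk: pick any referral at random at each step."""
    node, hops = start, 0
    while node != target and hops < max_hops:
        referrals = REFERRALS.get(node, [])
        if not referrals:
            return None
        node, hops = random.choice(referrals), hops + 1
    return hops if node == target else None

print("snowball hops:   ", snowball_search("you", "expert"))
print("random-walk hops:", random_walk("you", "expert"))

# Which strategy is cheaper depends entirely on how knowledge happens to be
# distributed in the graph -- which is exactly what you don't know in advance.
```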

And now consider the dilemma from the perspective of the person trying to design the system to help you. She doesn't know if you are an expert or a novice; or how entrepreneurial or creative you are; or what your tolerance will be for signal-to-noise ratios; or whether you can more easily tolerate false positives or false negatives.
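That last trade-off can be made concrete with a toy example. The relevance scores and labels below are invented; the point is only that a single retrieval threshold in any such system moves errors between the two categories rather than eliminating them, and the designer cannot know in advance which kind of error you can better afford.

```python
# Illustrative sketch (invented scores): a single relevance threshold trades
# false positives (noise shown to the user) against false negatives (useful
# answers hidden from the user).
candidates = [
    (0.95, True), (0.80, True), (0.70, False), (0.60, True),
    (0.45, False), (0.40, True), (0.20, False), (0.10, False),
]  # (relevance score assigned by the system, whether the item is actually useful)

def errors(threshold):
    """Count both error types for a given cutoff."""
    false_pos = sum(1 for score, useful in candidates
                    if score >= threshold and not useful)
    false_neg = sum(1 for score, useful in candidates
                    if score < threshold and useful)
    return false_pos, false_neg

for t in (0.30, 0.50, 0.75):
    fp, fn = errors(t)
    print(f"threshold={t:.2f}  false positives={fp}  false negatives={fn}")

# A low threshold buries the novice in noise (more false positives); a high one
# hides useful answers from the expert (more false negatives).
```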

The history of the open source community, as it navigates some of these dilemmas some of the time, suggests a big lesson: it's impossible to "get it right," and it's not sensible to try. What is more sensible is to parse the uncertainties more precisely so that we can design systems to be robust. More ambitious still is to design systems that can, to some degree, diagnose and adapt to those uncertainties as they interact with the community over time. A second big lesson of open source is the high value of being explicit and transparent about the choices embedded in design principles. The next section incorporates both of these lessons into a set of seven design principles for a referee function, inspired by patterns of collaboration within open source communities, that just might make sense for a community of knowledge and practice in politics.


