We're not there yet. This section outlines some of the other capabilities we need to have in place to make this a reality.
Much of what we are talking about is highly valued intellectual property. At a minimum, it needs to be secured; otherwise it will be closely guarded and its full value may never be realized. Agents, too, need to be secured, so that they do not conduct unauthorized activity on your behalf.
Just as we do implicitly in the real world, we need ways of keeping track of which sources are reliable (i.e., which ontologies have reliable information, as far as we are concerned). The flip side is a more robust version of a "bozo filter." If the mushrooms from Al tasted terrible, despite rave reviews from a dozen mycophagists, you put those dozen reviewers on a "bozo list." The scope of the bozo list is at least mushrooms, perhaps all cuisine, and perhaps any opinions. The next time you look at reviews for food, their opinions and votes will be excluded.
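The bozo list described above can be sketched as a simple scoped exclusion filter. This is purely illustrative; the class, its structure, and the reviewer names are hypothetical, not drawn from any real system:

```python
# A minimal sketch of a scoped "bozo filter": reviewers whose past
# recommendations proved unreliable are excluded from future tallies,
# within a chosen scope (mushrooms, all cuisine, or all opinions).

class BozoFilter:
    def __init__(self):
        # reviewer -> set of topics in which they are distrusted
        # ("*" means distrusted on every topic)
        self.bozos = {}

    def add(self, reviewer, scope="*"):
        self.bozos.setdefault(reviewer, set()).add(scope)

    def is_bozo(self, reviewer, topic):
        scopes = self.bozos.get(reviewer, set())
        return "*" in scopes or topic in scopes

    def filter_reviews(self, reviews, topic):
        # reviews: list of (reviewer, rating) pairs
        return [(r, rating) for r, rating in reviews
                if not self.is_bozo(r, topic)]

# The mushroom episode: a dozen raving reviewers land on the list,
# scoped (for now) to mushrooms only.
bf = BozoFilter()
for reviewer in ["al_fan_%d" % i for i in range(12)]:
    bf.add(reviewer, scope="mushrooms")

reviews = [("al_fan_0", 5), ("trusted_myco", 2)]
print(bf.filter_reviews(reviews, "mushrooms"))  # only trusted_myco remains
```

Widening a reviewer's scope to `"*"` extends the distrust from mushrooms to any opinion, matching the escalation described above.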
As James Hendler has pointed out, another source of semantic tagging is the tools that support the authoring process.[121] When you select the image of a giraffe from a clip art source and include it in your document, it will come already identified by genus and species.
We require at least semiautomated support for semantic tagging of content. Some of these tools, such as those from Applied Semantics[*] and Verity, already exist at the high end, and it may simply be a matter of packaging them, or tools like them, for mass use.
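At its simplest, semiautomated tagging means the tool proposes candidate tags and a human confirms them. The sketch below matches document text against labels from a committed ontology; the label table and tag strings are hypothetical, and the commercial tools named above are of course far more sophisticated:

```python
# Naive sketch of semiautomated tag suggestion: match document text
# against labels in a committed ontology and propose candidate tags
# for a human to accept or reject. Labels and tags are illustrative.

ONTOLOGY_LABELS = {
    "boletus edulis": "fungi/edible",
    "studebaker": "vehicles/automobiles",
    "giraffe": "animals/mammals",
}

def suggest_tags(text):
    """Return sorted candidate tags whose labels appear in the text."""
    text = text.lower()
    return sorted({tag for label, tag in ONTOLOGY_LABELS.items()
                   if label in text})

print(suggest_tags("My Studebaker, and a photo of a giraffe"))
# → ['animals/mammals', 'vehicles/automobiles']
```

The human-in-the-loop step is what makes this "semiautomated": the suggestions seed the tagging task rather than complete it.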
A variation on semiautomated tag assignment would be to have a special kind of agent that uses the ontologies you are committed to and evaluates the fidelity of your assignments based on other available attributes. For example, you may have tagged your Studebaker as a new car. The agent could tell, either from the model year or from the fact that Studebaker stopped making cars 40 years ago, that you had mistagged this content. It should be able to work much like the grammar checker in a word processor, which compares a specific sentence against a body of rules about words and grammar.
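The Studebaker check could work as a rule pass over each tagged assertion, with the background facts supplied by the ontology. A minimal sketch, in which the fact table, field names, and reference year are assumptions made for illustration:

```python
# Sketch of a tag-fidelity check: a rule drawn from ontology background
# knowledge flags implausible tag assignments, much as a grammar checker
# flags a sentence that violates a rule.

STOPPED_PRODUCTION = {"Studebaker": 1966}  # background fact: last model year
CURRENT_YEAR = 2004  # assumed reference year for the check

def check_tags(item):
    """Return warnings for tag assignments that conflict with known facts."""
    warnings = []
    make = item.get("make")
    if item.get("condition") == "new" and make in STOPPED_PRODUCTION:
        last_year = STOPPED_PRODUCTION[make]
        if CURRENT_YEAR - last_year > 1:
            warnings.append(
                "%s stopped making cars in %d; tagging this as 'new' "
                "is implausible" % (make, last_year))
    return warnings

print(check_tags({"make": "Studebaker", "condition": "new"}))
```

Each rule stays small and declarative, so new checks can be generated as the committed ontologies grow.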
Ontologies will change and evolve over time. We need to be able to know when the change is accretive (forward compatible for sites committed to it) and when it is evulsive (when something is torn out or otherwise changed so as not to be backward compatible). Version management of ontologies, in at least some cases, also needs to be able to answer questions such as "In the year 2002, what was our knowledge about the lethality of Boletus edulis?"
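One way to make the accretive/evulsive distinction operational is to diff two ontology versions: additions alone are forward compatible, while removals or redefinitions break the commitments of sites using the old version. The sketch below uses a deliberately simplified term-to-definition map; the representation and example entries are hypothetical:

```python
# Classify a change between two ontology versions as "accretive"
# (only additions; forward compatible for committed sites) or
# "evulsive" (terms removed or redefined; not backward compatible).

def classify_change(old, new):
    """old, new: dicts mapping term names to their definitions."""
    removed = set(old) - set(new)
    redefined = {t for t in set(old) & set(new) if old[t] != new[t]}
    if removed or redefined:
        return "evulsive"
    return "accretive"

v2002 = {"BoletusEdulis": "edible bolete",
         "AmanitaPhalloides": "deadly amanita"}
v2003 = dict(v2002, BoletusSatanas="poisonous bolete")  # addition only

print(classify_change(v2002, v2003))  # "accretive"
print(classify_change(v2003, v2002))  # "evulsive": a term was removed
```

Answering the temporal question in the text ("what was our knowledge in 2002?") additionally requires retaining each dated version, so queries can be evaluated against the snapshot in force at that time.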
We like to believe that we will have natural language interpreters that can figure out all the fine points of what we mean from our utterances. However, even if that is possible (and I'm not convinced it is), what we say may be incomplete relative to what we need to say, and we will need a software conversationalist that can mediate between us and an ontology and help us refine what we are saying.
[121]James Hendler, "Agents on the Semantic Web." Available at http://www.cs.umd.edu/users/hendler/AgentWeb.html.
[*]Applied Semantics has been acquired by Google.