Verification


Simply detecting the presence of an undesirable behavior is often not sufficient to justify corrective action, especially if the corrective action has a large potential negative impact or cost. A trust value needs to be assigned to the detection report, and supporting or refuting evidence must be gathered.

For simple cases, where the cost of corrective action is low, the reports can be collected by an automated system and the corrective action taken automatically. An example is profanity filtering; there is no need to report to a human customer representative each time someone uses an impermissible word or phrase.
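The profanity-filtering case might be sketched as follows; the word list and function name are hypothetical, and a real filter would be far more sophisticated, but the point is that the report and the corrective action stay entirely inside an automated system:

```python
# Minimal sketch of a fully automated low-cost corrective action:
# impermissible words are masked without any human review.
BANNED_WORDS = {"darn", "heck"}  # placeholder word list


def filter_chat(message: str) -> str:
    """Replace impermissible words with asterisks of the same length."""
    cleaned = []
    for word in message.split():
        if word.lower().strip(".,!?") in BANNED_WORDS:
            cleaned.append("*" * len(word))
        else:
            cleaned.append(word)
    return " ".join(cleaned)
```

Because the cost of a wrong decision here is trivially low (a masked word), the system can act on every detection without escalating anything to a person.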

For more complex cases, especially where the players in question have a considerable stake, a trusted human may be required to intervene. It is likely that there will be several "tiers" of trust, where there will be a large outer layer of "slightly trusted" individuals and an inner core of "highly trusted" individuals.
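The tiering idea can be sketched as a simple routing rule; the tier names and thresholds below are purely illustrative assumptions, not values from the text:

```python
# Hypothetical sketch: route a report to a reviewer tier based on the
# stakes involved for the players in question.
def route_report(stake: int) -> str:
    """Pick a reviewer tier: larger stakes go to more trusted reviewers."""
    if stake < 100:
        return "automated"         # no human involvement needed
    elif stake < 10_000:
        return "slightly_trusted"  # large outer tier of reviewers
    else:
        return "highly_trusted"    # small inner core
```

The design choice is that the large outer tier absorbs most of the volume, keeping the scarce highly trusted reviewers free for the cases where their judgment actually matters.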

For extremely sensitive cases, there may be a "commit/no-commit" protocol that requires multiple trusted representatives to be involved, so that no single individual (either inside or outside the company) can gain great advantage by manipulating the system.
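A commit/no-commit protocol of this kind might look like the following sketch, where an action is committed only on a quorum of distinct approvers; the function and parameter names are assumptions for illustration:

```python
# Sketch of a commit/no-commit quorum: the corrective action is taken
# only if enough distinct trusted representatives approve it.
def commit_decision(votes, quorum=2):
    """Return True only when at least `quorum` representatives vote yes.

    `votes` maps a representative's name to a boolean approval, so each
    person counts at most once and no single individual (inside or
    outside the company) can push the action through alone.
    """
    approvals = sum(1 for approved in votes.values() if approved)
    return approvals >= quorum
```

This is the same principle as two-person control in other security contexts: manipulating the outcome now requires collusion rather than one compromised account.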

When combining witness testimony with data gathered by automated agents, care must be taken to ensure that the proper context of the event is maintained. A sentence taken in isolation can easily be misinterpreted. The customer service (CS) user interface should be designed so that arbiters can easily access and comprehend the relevant details of an incident without wasting too much time searching through log files.
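One way a CS tool can preserve context, sketched here with hypothetical names, is to hand the arbiter the log lines surrounding the flagged line rather than the flagged line alone:

```python
# Hypothetical sketch: pull the log lines around an incident so an
# arbiter sees the exchange in context, not a sentence in isolation.
def incident_context(log_lines, incident_index, window=2):
    """Return the lines within `window` entries of the flagged line."""
    start = max(0, incident_index - window)
    return log_lines[start:incident_index + window + 1]
```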



Developing Online Games: An Insider's Guide
ISBN: 1592730000
Year: 2003
Pages: 230
