How Can This Biometric Be Spoofed?


As discussed in the opening of this chapter, even humans can be fooled into thinking they recognize a voice when they do not. If humans can be fooled, it is reasonable to expect that a voice biometric system can be fooled as well. While it is generally accepted that voice biometrics do not provide the same level of FAR as other biometrics, they offer other very attractive attributes, such as high user acceptance and low hardware cost. "Low cost" is relative to the quality of identification required, and thus to the ability to successfully enroll and verify; this depends on the ambient noise at the enrollment and verification locations and on the quality demanded by the underlying algorithm. Ideally, the same type of device is used for both enrollment and verification, so that acoustical differences between devices are minimized.

Voice biometrics are like any other biometric: susceptible to some level of spoofing. Attacks on a voice biometric system fall into the following categories:

  • Attacking the physical voice

  • Using artifacts

  • Attacking the communications (see Chapter 5)

  • Compromising the template (see Chapter 5)

  • Attacking the fallback system (see Chapter 5)

Attacking the Physical Voice

During the discussion of which algorithm to pick, it was noted that a decision had to be made between user convenience and security. If, after evaluating the tradeoffs, the company chose convenience, the system is much more susceptible to attacks such as replay of a recorded voice or voice impersonation. If, on the other hand, security won out over convenience, the system is stronger and less likely to be compromised.

In general, attacks on voice biometrics involve either the playback of a recorded static phrase or an attempt by the spoofer to impersonate the user.

Using Artifacts

The artifacts used against voice biometrics are not of the same type as those used against other biometrics. Unlike a fingerprint, a voice leaves no residual physical trace behind after templating, so an artifact must be created by recording the user's voice. These recordings are then used as the basis for an attack; in effect, the artifacts are played back as described above in "Attacking the Physical Voice."

Mitigating This Attack

The best mitigation for this type of attack is to use an algorithm with a sufficiently large lexicon. The lexicon should include less common words along with the standard digits, so that the user is unlikely to speak one of the lexicon's words in normal conversation, where it could be recorded.
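The idea of a large lexicon mixing digits with uncommon words can be sketched as follows. This is a minimal illustration, not a real verification engine; the word list and `build_prompt` helper are hypothetical.

```python
import secrets

# Hypothetical verification lexicon: the standard digits plus less common
# words that rarely occur in everyday conversation, making it harder for
# an attacker to harvest recordings of every word a prompt might use.
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]
UNCOMMON_WORDS = ["quixotic", "zephyr", "obelisk", "tamarind",
                  "vellum", "isthmus", "grotto", "farrago"]

LEXICON = DIGITS + UNCOMMON_WORDS

def build_prompt(n_words: int = 4) -> list[str]:
    """Draw a random verification prompt from the lexicon."""
    # secrets.choice gives cryptographically strong randomness, so an
    # attacker cannot predict which words the next prompt will contain.
    return [secrets.choice(LEXICON) for _ in range(n_words)]

prompt = build_prompt()
print("Please say:", " ".join(prompt))
```

The larger and more unusual the lexicon, the more recordings an attacker must collect before a replay attack becomes feasible.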

Another countermeasure is to present challenge phrases that the user must say within a limited amount of time. The spoofer would then need recordings of the entire lexicon for the particular user, plus the ability to assemble and play back the challenge words within the required time.
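The timed challenge-response idea can be sketched as below. This is an assumption-laden illustration: the lexicon, `issue_challenge`, and the five-second deadline are all hypothetical, and the actual audio capture and speaker verification step is elided.

```python
import secrets
import time

# Hypothetical challenge lexicon; in a real deployment this would be the
# word set the speaker-verification engine enrolled the user against.
LEXICON = ["zero", "one", "seven", "quixotic", "zephyr",
           "obelisk", "tamarind", "vellum"]

def issue_challenge(n_words: int = 3) -> tuple[list[str], float]:
    """Pick a random challenge phrase and note when it was issued."""
    phrase = [secrets.choice(LEXICON) for _ in range(n_words)]
    return phrase, time.monotonic()

def response_in_time(issued_at: float, deadline_s: float = 5.0) -> bool:
    """A live speaker can answer quickly; splicing together a phrase
    from harvested recordings should take longer than the deadline."""
    return time.monotonic() - issued_at <= deadline_s

phrase, issued_at = issue_challenge()
print("Repeat within 5 seconds:", " ".join(phrase))
# ... capture audio and run speaker verification here ...
accepted = response_in_time(issued_at)
```

The deadline is what defeats the artifact: even an attacker holding recordings of every lexicon word must locate, order, and play them back faster than a live user can simply speak.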



Biometrics for Network Security (Prentice Hall Series in Computer Networking and Distributed)
ISBN: 0131015494
Year: 2003
Pages: 123
Authors: Paul Reid
