Securing and Trusting a Biometric Transaction


The security and trust of a biometric transaction must begin at the presentation of the live biometric trait and continue through the final algorithm decision. This transaction path is made up of the following components:

  • User

  • Biometric reader

  • Matching location

Each item that is part of the biometric transaction needs to be secured. Let's examine the transaction path in more detail.

User

The user is the starting point of any biometric transaction. If the biometric device being used is active, the user initiates a transaction by presenting his/her biometric trait. If the biometric device being used is passive, the transaction could be started before the user is aware of its presence. In both cases, the user needs to have situational awareness about himself/herself and the environment. If the user is in an environment that is using passive biometrics, then the user has little control over when the transaction begins. Therefore, the user needs to know that a passive biometric is being used and to be prepared for a transaction taking place once the user is in range of the passive device. A passive transaction, for example, could be the authorization of opening a door, passing a security checkpoint, registering for time and attendance, network authentication, or public security surveillance. If the environment uses active biometric devices, then the user can decide when the transaction takes place. An active transaction, for example, could be door access, logical computer access, or access to time and attendance applications. In an active biometric environment, the user needs to submit to the measurement. The use of active biometrics is pro-privacy and allows the user to better safeguard his/her biometric data.

As mentioned, a user can secure the beginning of a biometric transaction by having situational awareness. Situational awareness allows the user to safeguard the use of his/her biometric data. The trust of the user, however, begins at the time of enrollment. When the user enrolls in a biometric system, a moment of trust happens. The user willingly presents himself/herself for enrollment. Before enrollment happens, the physical identity of a user is often verified by some other method. This verification is usually made by a government-issued identification, by an employer-issued identification, or by another trusted authority vouching for the user's identity. What happens is that this moment of trust becomes predicated on a previous moment of trust. That is, we must believe that the government-issued identification or the employer-issued identification is valid. The credibility of the trusted authority vouching for the user needs to be trusted as well. This results in another tradeoff. If the biometric measurement is based on a previous moment of trust, then it is in the best interest of everyone concerned to choose the most reliable moment of trust. It is quite feasible that, based on what the biometric transaction is to authorize or secure, multiple previous moments of trust may need to be verified. Thus, as with biometric technology, a risk level needs to be defined. If the strength of the previous moment of trust is great enough, this then becomes an acceptable risk.

Biometric Reader

The biometric reader is the interaction point of the user with the biometric system. The reader will take the physical trait and digitize it into raw biometric data. The raw biometric data is then transformed into a template for comparison. The location where the transformation and comparison take place depends on the biometric system. The mechanics of the biometric system can affect the trust and security of the biometric transaction.

Biometric readers fall into one of two categories:

  • Trusted biometric devices

  • Non-trusted biometric devices

Trusted biometric devices

Trusted biometric devices make comparison decisions that are assumed to be uncompromised. That is, the capture, templating, and comparison occur on the same physical device. It does not communicate with the host computer for any biometric function. It will provide a yes/no response to a request for authentication or, when used in conjunction with a storage token like a smart card or on-board cryptographic module, it will supply access to requested information. Unlike a workstation, a trusted device is a single-task device; that is, it has pre-programmed functionality. As such, it does not require the general input and output connections that a general-purpose device requires. Since additional input/output connections are not required, it is very unlikely that the pre-programmed nature of the device can be changed. This gives it the ability to be trusted. By contrast, a workstation is not a trusted device. It is general-purpose and has many means of input/output. Malicious programs can be introduced to alter how it behaves and computes; that is, malicious software can be introduced that causes biometric comparisons to always match or never match. With a trusted device, this cannot occur.

The trustworthiness of a device is based on its pre-programmed state. A trusted device needs to be protected from malicious attacks in order to remain trusted. To accomplish this, the device needs to be hardened both physically and electronically.

Physical hardening [1]

[1] Kingpin, "Attacks on and Countermeasures for USB Hardware Token Devices" (196 Broadway, Cambridge, MA), 2002, pp. 1–23.

To protect the integrity of a trusted device, access to the electronics needs to be protected. If a device is breached, it can no longer be trusted. To prevent this from happening, the following precautions need to be considered:

Make attempts at physical intrusion obvious – The electronics that control a trusted device are found inside the device. To reach these components, the external casing needs to be breached. Any attack on the case should leave evidence of the attempt. The material used in the casing should show scratches or pry marks from any attempted entry. The casing should be closed with methods that make it difficult to re-open. Also, the casing should use closing mechanisms that, once re-opened, are broken.

Making attempts at entry obvious will provide the user or security manager with an indication of tampering. When a device seems to be compromised, it can be removed from use and evaluated.

Make internal tampering externally visible – If the external casing can be breached and the internal electronics reached, it is possible that the tampering will not be noticed. To add extra security to the trustworthiness of a device, manufacturers can include some external manifestation of any internal intrusion. This external manifestation could take the form of changing the color of a tamper window, zeroing the EEPROMs (electrically erasable programmable read-only memory), or tripping a "dead-man's switch." Such an additional security mechanism should increase the complexity of a physical security breach.

Encapsulate internal components in epoxy – If the external casing is breached, the ease of access to the internal components should be minimized. The majority of the devices produced are not field-serviceable and therefore do not need component access for replacement or troubleshooting. The encapsulation of components in epoxy is one more layer of defense against internal tampering. The intruder now needs to take the time to carefully remove the epoxy. This can be a near-impossible feat. The epoxy used for encapsulation can withstand several thousand pounds of pressure per square inch and is normally cured using heat. Thus, prying and applying heat will probably destroy the components or render the device inoperable.

Use glue with a high melting point – The more difficult an intrusion can be made, the lower the likelihood of success. As the difficulty increases, the resources required to be successful also increase. Using a glue with a high melting point increases the time, money, and resources required to succeed. The goal is to have a melting point that is high enough to either cause damage to the surrounding housing and components, or render the device inoperable.

Obscure part numbers – Any breach of security begins with reconnaissance and information-gathering. The more information that can be gained about a potential target before an attack is launched, the better the chances of its being successful. Therefore, the amount of information provided about a device should be limited to marketing requirements and general device identification. To that end, all reference numbers or markers should be removed. This could be done through etching or just sanding the component. If the components need to be identified, a secure design reference could be kept by the design department.

Restrict access to the EEPROMs – While it seems obvious to restrict access to the EEPROMs, they are quite often left exposed. The reasons for the exposure could be lack of experience, lack of funding for a new board layout, or a limited production run, where an EEPROM may be flashed with the latest firmware before release. The EEPROMs are a vital piece of the electronics. It is here that cryptographic or secret data may be stored, and if the firmware is flashable, it is kept in an EEPROM. Due to the nature of the information contained in this component, it needs the same level of protection as any other piece of circuitry.

Access to the EEPROMs can be restricted in the same way as access to other electronics. The use of epoxy encapsulation, obfuscation of part numbers, and the use of more secure chip mounting can help protect the EEPROMs.

Remove or inactivate future expansion points – As with the EEPROMs, it seems obvious to inactivate or remove unneeded access points and future expansion points. Again, for the same reasons that the EEPROMs are not protected, expansion points or unneeded access points are usually forgotten. Leaving these expansion points exposed could lead to the compromise of otherwise well-protected electronics. For example, in Kingpin's analysis of USB cryptographic tokens, a newer version of a token protected access to the EEPROM with epoxy encapsulation, yet future expansion traces allowed access to the EEPROMs. [2] This example makes it clear that if time and effort are put into securing certain components, leaving other means of access defeats the best intentions of the designer.

[2] Kingpin, "Attacks on and Countermeasures for USB Hardware Token Devices" (196 Broadway, Cambridge, MA), 2002, pp. 17–18.

Reduce Electro-Magnetic Frequency (EMF) emissions – As stated earlier, reconnaissance is a large part of planning an attack. If an attacker can gain information through simple, non-intrusive means, the job of compromising a system becomes much easier. Monitoring EMF emissions does not require very sophisticated equipment. The information gained from EMF emissions may seem trivial, and negating them may seem expensive relative to the risk. Current attack methodologies, however, use an experimental approach. For example, timing attacks work by measuring the timing characteristics of responses or signal latencies relative to a given PIN. The longer the latency, the closer the guessed PIN is to the actual PIN. These types of experimental attacks could use EMF emissions to gather this type of information. Thus, reducing EMF emissions makes an attacker's job more difficult.
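The timing attack described above can be illustrated with a short, hypothetical Python sketch: a naive character-by-character PIN check leaks how many leading digits are correct through its running time, while a constant-time comparison does not. The function names are illustrative only.

```python
import hmac

def naive_pin_check(guess, actual):
    # Early-exit loop: runtime grows with the number of matching
    # leading characters, leaking progress to a timing attacker.
    if len(guess) != len(actual):
        return False
    for g, a in zip(guess, actual):
        if g != a:
            return False
    return True

def constant_time_pin_check(guess, actual):
    # hmac.compare_digest compares in time independent of where the
    # first mismatch occurs, removing the timing side channel.
    return hmac.compare_digest(guess.encode(), actual.encode())
```

Both functions return the same answers; only their timing behavior differs, which is exactly the information an EMF or latency measurement could recover.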

Use good board layout and system design – To help secure a trusted device, the board and the components placed on it can be designed to enhance security. The difficulty of following board traces increases if they are spread out over layers and if the data, power, and ground lines are alternated. [3] In addition, with the current trend of manufacturers providing system-on-chip solutions, the number of external interfaces has greatly decreased. Spending some time thinking about vulnerabilities after the board design is complete could avert a potential disaster in the future.

[3] Ibid.

Electronic hardening

A trusted device does the biometric templating and matching on the device itself. Since the decision-making is done on the device, physical and electronic hardening are very important. For electronic hardening of a trusted device, the following must be considered:

Protection of the flashable firmware – With the advent of the USB communication protocol for external peripherals, the use of flashable firmware has greatly increased. Flashable firmware allows a developer to make changes and upgrades to a device without replacing the internal integrated circuits. While this is a great feature for the customer and good for the developer, it can be an entry point for attacks. If the updating of the firmware is not strongly protected, malicious firmware code could be used to compromise the device. To protect the firmware, the following precautions should be taken. [4]

[4] Brian Oblivion and Kingpin, "Secure Hardware Design," Black Hat Briefings, July 26–27, 2000.

  • Firmware should be encrypted and decrypted by the trusted device before updating.

  • The updated firmware needs to be digitally signed. Before the update, the device verifies the signed firmware. The trusted device needs to get new public keys for verification and encryption from a trusted site.

  • The firmware should be compiler-optimized before being released, and all debug and symbol information should be removed.
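The verify-before-update step can be sketched as follows. To keep the example self-contained, an HMAC with a shared key stands in for the public-key signature the list describes; a production device would verify a vendor signature against public keys retrieved from a trusted site, as noted above. All names here are illustrative.

```python
import hashlib
import hmac

# Hypothetical verification key baked into the device; a real design
# would verify a digital signature against trusted vendor keys instead.
DEVICE_KEY = b"example-device-verification-key"

def sign_firmware(image, key=DEVICE_KEY):
    # Vendor side: tag the firmware image before release.
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_install(image, tag, key=DEVICE_KEY):
    # Device side: refuse any image whose tag does not verify.
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

The key point is the ordering: the device verifies the entire image before a single byte of it is written to the EEPROM.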

Integrity of communications to the host – Even a trusted device needs to communicate with its host. This communication could be a request for authentication by the host and a response to the request from the trusted device. It could be the sending of new firmware to the device, or the storage of information to a smart card or an on-board cryptographic module. This link will normally take the form of a USB connection. As such, some basic data security can be implemented to increase the overall security of the trusted device. The use of encryption for the data being transmitted and authentication while creating the link will increase the security of the device.

With the increased focus on security and breaking existing security methods, it makes sense to use the strongest possible solution available. To adequately protect the link between the host and a trusted device, the data should be encrypted before being sent. The data should also be signed and timestamped to prevent an attacker from recording a transaction and playing it back at a later time. To accomplish this, a shared secret is needed. A shared secret could be transmitted in the clear, but this could be intercepted by tapping the communication channel. The shared secret could be obfuscated in the host software and the firmware of the trusted device. The drawback to this is that the shared secret is in the software on the host and, given sufficient time, effort, and money, this shared secret could be compromised. And, once a shared secret is compromised, it is compromised for all systems.

A better approach is to use a Diffie-Hellman key exchange. A Diffie-Hellman key exchange allows both the device and the host to securely generate the same secret. In this configuration, both parties know a large prime number and another number referred to as the generator. Using the large prime and the generator, both parties compute a public number. Once the public number is generated, all the numbers can be exchanged. Once the exchange is done, both parties can generate the same secret.
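The exchange can be sketched with Python's built-in modular exponentiation. The tiny prime and generator are for readability only; a real deployment uses a vetted group with a prime of 2048 bits or more.

```python
import secrets

P = 23   # the publicly known prime (toy-sized for illustration)
G = 5    # the publicly known generator

def dh_keypair():
    private = secrets.randbelow(P - 2) + 1   # secret exponent, kept local
    public = pow(G, private, P)              # safe to send in the clear
    return private, public

host_priv, host_pub = dh_keypair()
dev_priv, dev_pub = dh_keypair()

# Each party raises the other's public number to its own private
# exponent; both arrive at the same shared secret.
host_secret = pow(dev_pub, host_priv, P)
dev_secret = pow(host_pub, dev_priv, P)
assert host_secret == dev_secret
```

Only the public numbers ever cross the link; the private exponents, and therefore the derived secret, never do.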

One drawback to this is the possibility of an attacker sitting in the middle of the transaction. This attacker would be able to impersonate both ends of the transaction. That is, the attacker could compute his/her own public number. When the host sends its public number to the device, the attacker could intercept it and send his/her public number to the device. When the device sends its public number to the host, the attacker once again could substitute his/her own number. Thus, when a transaction is to take place between the host and the device, the attacker can decrypt and re-encrypt for both parties using their own public numbers.

This happens because the device and the host have not authenticated each other. If the device and the host had authenticated each other, then they would know if they were talking to the right party. This securing of the Diffie-Hellman key exchange is referred to as a station-to-station protocol. To accomplish this, both the device and the host get a certificate from a trusted authority so that when either party receives data, it can verify the signature on the data as being legitimate. In this new configuration, an attacker can still intercept the exchange of the public numbers, but the attacker is unable to trick the receiving party into accepting the re-encrypted data. The receiving party would verify the signature on the data to ensure it has not been tampered with.
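The effect of authenticating the exchanged public numbers can be sketched as follows. An HMAC with a pre-shared key stands in for the certificate-backed signatures of a real station-to-station protocol, and fixed exponents keep the sketch reproducible; all names and values are illustrative.

```python
import hashlib
import hmac

P, G = 23, 5   # same toy Diffie-Hellman group as before

# Stand-in for a certificate-backed signing key; in a real
# station-to-station protocol the host signs with a private key whose
# certificate was issued by a trusted authority.
HOST_SIGNING_KEY = b"example-host-signing-key"

def sign(key, public_number):
    return hmac.new(key, str(public_number).encode(),
                    hashlib.sha256).digest()

def verify(key, public_number, signature):
    return hmac.compare_digest(sign(key, public_number), signature)

host_pub = pow(G, 6, P)       # host's public number, signed below
host_sig = sign(HOST_SIGNING_KEY, host_pub)
attacker_pub = pow(G, 7, P)   # a man-in-the-middle's substitute

# The device accepts the genuine number but rejects the substitute,
# because the attacker cannot forge the signature over it.
assert verify(HOST_SIGNING_KEY, host_pub, host_sig)
assert not verify(HOST_SIGNING_KEY, attacker_pub, host_sig)
```

The device's public number would be signed and verified symmetrically in the other direction.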

When both parties use public keys and digital signatures for authenticating each other, additional cryptography can be applied. Either end of the transaction can use the other party's public key to encrypt the data being sent so that the attacker cannot even decrypt what is being sent. When the other party receives the data, that party decrypts it with the private key and verifies the signature and timestamp of the data.

With the addition of this electronic hardening, the trusted device is more reliable in its trust, and is proactively protecting itself and the data it provides.

Protection from external injections of spurious data – While a trusted device should be limited in the amount of I/O it provides, some will be necessary. These points of I/O can be used to attack a device by injecting spurious information into the system in an attempt to compromise the system itself or deny its use. Regardless of how the I/O of a device is used, some basic attacks should be taken into account. These include:

Buffer overflows – Any part of a trusted system that takes in information is generally buffering its received data. While the designer makes certain assumptions about the operation of a device, it may be possible to introduce data that would overflow the buffer. The buffer overflow could result in the innocuous consequence of a device hanging or being reset. In a severe situation, it could allow the execution of untrusted code. This can happen when data sent for input is formed in such a way that the first part of the data includes the untrusted code to be run and enough additional data to overflow the buffer. When the buffer overflows, the stored return address of the next instruction to be executed is overwritten with the memory location of the untrusted code. When this occurs, the device is compromised and cannot be trusted.
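A memory-safe language cannot reproduce the overflow itself, but the defensive pattern can be sketched: validate the declared length against the fixed buffer size before accepting any data. The packet layout and limit here are assumptions for illustration.

```python
MAX_PAYLOAD = 64   # fixed receive-buffer size (assumed for illustration)

def read_packet(data):
    # Assumed packet layout: one length byte followed by the payload.
    if len(data) < 1:
        raise ValueError("truncated packet")
    declared = data[0]
    payload = data[1:]
    # Reject any packet whose declared or actual size exceeds the
    # buffer; copying without this check is what enables an overflow.
    if declared > MAX_PAYLOAD or len(payload) != declared:
        raise ValueError("malformed length field")
    return payload
```

In firmware written in C, the equivalent check would guard the `memcpy` into the fixed-size buffer.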

Use of undocumented command sets and functionality – In many cases, when devices are being built and tested, developers will implement undocumented commands. These undocumented commands are normally used for troubleshooting or to create macro commands for frequently executed application programming interfaces (APIs). These undocumented command sets can allow an attacker to gain information from the device and possibly put the device into a testing mode. Many testing modes allow access to all memory and functions on the device. In this way, the attacker could change some stored data and then return the device to a normal operational mode.

An undocumented command set can also provide detailed debugging or line monitoring capabilities. For example, one early pioneering biometric device sometimes had an I/O device driver installed on the host machine, which allowed all the registers, stacks, and buffers to be communicated for debugging purposes.

These types of undocumented command sets and functionality need to be restricted with some sort of strong secret, or disabled altogether before the final firmware and integrated circuits are shipped to production.

Improperly structured data elements – With the need for the trusted device to provide feedback or send data back to the host, some sort of I/O method needs to be used. This I/O method could include serial, parallel, USB, or wireless connectivity. All of these connectivity methods have well-defined and structured data communication models. In these models, the format, size, and timing of the data are described. Additionally, the model also describes any expected variation from the standards, which are also known as tolerances. These tolerances are normally taken into account during the design of the trusted device. Sometimes the design of a trusted device can push the bounds of the tolerances. When this happens, the device can become susceptible to improperly structured data elements. In these cases, the improperly structured data elements can have the same behavioral characteristics as a buffer overflow attack. An improperly structured data element could cause a device to fail or expose a new behavior not normally seen. If a device fails, it could fail in a state where it is now susceptible to probing, or it could respond with memory data that would not otherwise be made available.
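Tolerance checking can be sketched against a hypothetical frame format: every header field is validated against its expected range, and any violation fails closed rather than being processed further. The magic value, version, and size limit are invented for the example.

```python
import struct

# Assumed frame layout: 2-byte magic, 1-byte version, 1-byte payload
# length, then the payload itself.
HEADER = struct.Struct(">HBB")
MAX_LEN = 32

def parse_frame(frame):
    try:
        if len(frame) < HEADER.size:
            raise ValueError("short frame")
        magic, version, length = HEADER.unpack_from(frame)
        # Check every field against its documented tolerance.
        if magic != 0xB10C or version != 1 or length > MAX_LEN:
            raise ValueError("out-of-tolerance header")
        if len(frame) != HEADER.size + length:
            raise ValueError("length mismatch")
        return frame[HEADER.size:]
    except ValueError:
        # Fail closed: reject the frame without leaking internal state.
        return None
```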

A trusted system needs to guard against these types of attacks. If it fails from any of these types of attacks, it should fail in a closed state. That is, it should shut down or otherwise become inoperable. It is preferable to have a device unavailable to the user than to have confidential or private information exposed.

Changed data without physical contact [5] – It has long been known that spurious cosmic rays can cause stored information in computer memory or in transit on a bus to become corrupted. The use of ECC (error correcting code) memory does help in alleviating some of this concern, but large bit-level errors may go undetected by the ECC methods currently implemented. ECC memory is also an additional cost that many cost-sensitive industries do not want to incur. In addition, even with ECC memory, low-level data corruption could still occur on the memory or system bus, or in-process.

[5] Sudhakar Govindavajhala and Andrew W. Appel, "Using Memory Errors to Attack a Virtual Machine" (Dept. of Computer Science, Princeton University, Princeton, NJ), 2003.

While it is possible to generate these types of cosmic rays here on Earth, the particle accelerators required would not be available to the average hacker. Other forms of radiation generation are available, but most are not powerful enough to have the desired effect. There are some radioactive materials used in the oil-drilling business that may generate the energy required, but they would be hard to acquire as well. One form of radiation that is always available to us, and is easy to control, is radiant heat. The use of radiant heat can cause a low-level memory error. These errors induced in memory have caused system crashes and losses of data. What if an attacker could induce these errors to his/her benefit?

A paper [6] published by Sudhakar Govindavajhala and Andrew W. Appel of Princeton University demonstrated in the Java machine framework that such bit errors could cause the execution of arbitrary code. While the paper explores these errors in the context of Java, it is not unreasonable to believe that the same methodology could be applied to other languages and memory/process security models. While many trusted devices are not written in Java, the supporting storage mechanisms for trusted devices may use Java as their operating system. This is of most concern to biometric vendors offering match on (smart) card (MOC). MOC works by storing the enrolled biometric in the smart card and implementing a biometric algorithm on the card for matching. Once the biometric is presented and templated, it is then processed on the card, and a match/no match is generated. Frequently, the smart card also stores cryptographic secrets or other confidential data. The match of the biometric is the release mechanism for that data. Thus, it may be possible to induce a bit-level error that causes the stored flag for match/no match to become flipped and allow access to the stored information. This new type of attack makes implementing active security measures more important.
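One common countermeasure against this class of fault attack is to avoid a single-bit match flag: encode the decision in a wide pattern whose two valid states differ in every bit, so a single induced bit flip yields a value that matches neither state and is treated as tampering. A hypothetical sketch:

```python
# Wide, complementary encodings of the decision: the two valid states
# differ in every bit, so one flipped bit produces an invalid value.
MATCH_OK = 0xA5A5A5A5
MATCH_FAIL = ~MATCH_OK & 0xFFFFFFFF   # bitwise complement, 0x5A5A5A5A

def release_secret(match_flag, secret):
    if match_flag == MATCH_OK:
        return secret
    if match_flag != MATCH_FAIL:
        # Neither valid state: assume an induced fault and fail closed.
        raise RuntimeError("flag corrupted: possible fault attack")
    return None
```

With a plain boolean, a single bit flip silently converts a no-match into a match; with this encoding, the same flip is detected as a fault.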

[6] Ibid .

As originally recommended by Kingpin in his research, the implementation of detectors for temperature and radiation is warranted. If an attack is detected, actions could be taken to avert the exposure of data. This could include the zeroing of any critical data, or the failure of a device in a closed state. The intrusion should also be recorded and made physically visible. While it may seem like science fiction, the reality is that fiction has become fact. We now have a method of using light/radiation and heat to attack computer systems.

Improper failure states – When systems are designed, much thought is given to how the system will function in its normal state, that is, when it is being used for the tasks for which it was intended. Some thought is put into error catching and maybe even error recovery. Little time and thought are currently given to the state in which the device will be left when it fails, however. If a device fails in such a way that it leaves itself open to change or probing, then it has lost its trustworthiness. At the same time, state machines are often used in the programming of devices. These state machines need to have default error states that prevent the leak of confidential information if an exception happens, causing the state machine to crash. The default failure and error states must ensure that no system data can be retrieved or released.
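The default-error-state idea can be sketched as a tiny state machine whose response to any unexpected event is to wipe its secrets and fail closed; the states, events, and data here are invented for illustration.

```python
class Device:
    """Toy state machine with an explicit fail-closed default."""

    def __init__(self):
        self.state = "IDLE"
        self.secret = b"template-data"   # stands in for sensitive data

    def handle(self, event):
        transitions = {
            ("IDLE", "capture"): "CAPTURED",
            ("CAPTURED", "match"): "IDLE",
        }
        next_state = transitions.get((self.state, event))
        if next_state is None:
            # Default error state: wipe secrets and become inoperable
            # rather than continuing in an undefined state.
            self.secret = b""
            self.state = "FAILED_CLOSED"
        else:
            self.state = next_state
```

Every (state, event) pair not explicitly listed falls through to the same closed failure state, so an unanticipated input can never leave the device probe-able with its secrets intact.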

Final thoughts on trusted devices

As seen from the previous discussion, a lot of effort and cost goes into making a trusted device. Companies that build trusted devices do so to sell them and turn a profit. This means that the costs of hardening will be passed on to the customer in the form of higher prices. What the customer needs to do is evaluate his/her risk model relative to the cost of the trusted device. If the risk analysis does not demonstrate a serious threat, then the use of a non-trusted device is warranted. If, on the other hand, the information or the activities that are to be biometrically protected have very serious consequences, then the additional cost of a trusted device can be justified.

Non-trusted biometric devices

A non-trusted biometric device is not able to make comparison decisions. Since the comparison decisions are taking place off the device, a certain amount of uncertainty is assumed to be present in the transaction. Since the device is not making the matching decision, that decision must occur somewhere else. This matching can take place on the connected host, on an authentication server, or on a smart card running an MOC algorithm. The choice of where the match is to take place is now also a case of risk mitigation. As we will see in the following discussion, the risk associated with each option does increase. The choice of where the match will occur should be based on the locations which the biometric system supports, and on the consequences of the transaction being compromised.



Biometrics for Network Security
ISBN: 0131015494
Year: 2003
Pages: 123
Author: Paul Reid
