13.6 IMPLEMENTING AUTHENTICATION CONTROL MECHANISMS (§ 164.312(d))

(d) Standard: Person or entity authentication. Implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.

This section looks at specific mechanisms for implementing the authentication control standard. Authentication is the step after identification: it confirms that the individual in question is indeed known to the system under the claimed identity. In its most primitive form, authentication has been around for as long as the human race has existed: a person's appearance and language pointed to his native tribe long before the formal concept of authentication was developed. A ring bearing a seal identified its carrier as a member of the nobility, as did his clothes and manner of speech.

Turning to more formal methodologies, authentication, at a high level, can be achieved by something you:

  • Know-a computer password, or a social security number and mother's maiden name

  • Are-a unique combination of your facial features, or the phone number you are calling from

  • Possess-a unique hardware or software token, such as a smart card or driver's license, presented upon request

All of these examples represent single-factor authentication, since in each case there is only one proof of identity. This contrasts with multi-factor authentication (MFA), where the claimant has to present more than one proof, usually drawn from different categories. In the earlier example, the ring is token-based authentication, confirmed (in case the ring was stolen) by noble appearance and manner of speech. Generally, the multi-factor approach is preferred for securing more sensitive data, or certain parts of it.

The above categories differ in multiple ways: ease of implementation, convenience to users, and authentication strength (attack resistance). These parameters are reviewed in greater detail in the following sections.

13.6.1 Something You Know

This is probably the most common approach to authentication today; technical examples include passwords and the PINs that accompany tokens. While very easy to implement, this approach is also very easy to abuse if certain rules, outlined below, are not followed. Unfortunately, there is a classic trade-off involved: these rules, while improving security, decrease usability, so users become more inclined to look for ways around the restrictions.

A common problem with this authentication model is that the 'key' (password or PIN) is either easily guessed or hard to remember.

The most common way to authenticate users is via a password. Like login id identification, it is a feature of many modern applications. This approach works well for a large group of technically unsophisticated users working inside a secured perimeter (such as the personnel of a medical practice), but it is not sufficiently strong for technically adept users or in hostile zones (such as the Internet at large).

By following a number of guidelines, it is easy to make passwords considerably more resistant to guessing and tampering:

  1. Require passwords to be a minimum length and to contain a combination of letters, numbers, and punctuation marks where possible. This makes them harder to guess via brute-force methods.

  2. Require users to change passwords on a regular interval.

  3. Configure applications to not allow users to reuse old passwords.

  4. Create a policy that discourages users from writing down passwords.

  5. Configure applications to wait a number of seconds after an invalid login attempt before allowing the next one. This makes it harder for automated brute-force methods to operate successfully.

  6. Configure applications to temporarily disable accounts when authentication has failed a number of times. This also makes it difficult for brute-force methods to operate successfully. It is important that accounts be disabled only temporarily; otherwise this creates a denial-of-service condition, allowing an attacker to disable an account on the network by deliberately performing a number of invalid logon attempts.
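As an illustration, the length, character-mix, and reuse rules above (guidelines 1 and 3) can be sketched as a simple policy check. This is a minimal sketch, not a complete implementation; the limits shown (8 characters, and so on) are hypothetical values that a real organizational policy would define:

```python
import re

# Hypothetical policy value; a real limit would come from organizational policy.
MIN_LENGTH = 8

def password_meets_policy(password, old_passwords=()):
    """Check guidelines 1 and 3: minimum length, character mix, no reuse."""
    if len(password) < MIN_LENGTH:
        return False                               # too short (guideline 1)
    if not re.search(r"[A-Za-z]", password):
        return False                               # no letters
    if not re.search(r"[0-9]", password):
        return False                               # no numbers
    if not re.search(r"[^A-Za-z0-9]", password):
        return False                               # no punctuation marks
    if password in old_passwords:
        return False                               # reuse of an old password (guideline 3)
    return True
```

Guidelines 5 and 6 (delays and temporary lockout) would be enforced in the login path itself, typically with a per-account failure counter that resets after a timeout.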

13.6.2 Something You Are

These mechanisms are based on some unique characteristic of the user, be it a fingerprint, phone number, iris, and so on. A variety of methods, with different complexities and associated costs, fall into this category, for instance biometric readers and callback verification.

Fingerprint readers and iris scanners are probably the most cost-effective and common biometric devices available today. However, they may be relatively easily misled: in some cases they generate false acceptances, i.e. mistake one user for another, and a determined perpetrator can deliberately fool them. The best way to use these devices is in attended mode (i.e. with people present), where direct manipulation of the device is not possible.

Callback verification authenticates users by location: after being contacted by the user, the system breaks the connection and immediately contacts the user at a preconfigured location. The assumption is that manipulating call routing (such as changing IP routing or phone switching) is a hard task that cannot easily be achieved by a casual attacker. A determined professional, however, is capable of carrying out such an attack. Things are further complicated by the complexities of initial user setup and maintenance, as well as by the wide opening this approach creates for denial-of-service (DoS) attacks.

13.6.3 Something You Possess

Different token-based mechanisms have one thing in common: they incorporate one or more units of information that can make identity claims, sign them, confirm them, or all of the above. The confirmation information, a secret, is often too long and complex to be memorized. If assertions about a person's identity are not provided on the token (i.e. it carries only confirmation of the claim), they have to be entered separately. Conversely, some devices may provide only assertions, which may be signed for better protection, and require entering the confirmation by out-of-band means. In this case, the obtained confirmation is usually transformed, in one way or another, into a signature, which is then compared with the one found on the device. If identity assertions are present as well, they are frequently signed by external authorities independent of the claimant.

The amount of trust placed in the signing authority, together with the protection mechanisms that guard the secret, determines the strength and reliability of such a token.

Solutions may differ in processing capabilities and in the underlying authentication mechanisms. MFA is often used in combination with token-based devices to provide better protection against a token's loss or theft.

13.6.3.1 Differentiation by Processing Capabilities

Different types of token devices have varying operational capabilities with regard to how the secret is treated. The simplest case is passive devices, also known as memory devices, which work like mini-storage for the secret and present it, upon request, to an outside device for processing. These devices are very common: an employee electronic badge or proximity card is an example of a memory card.

Such devices are often concerned only with protecting the integrity of the stored information, because it may be relatively easy to extract the secret. These solutions are the cheapest, but the secret's accessibility is a security disadvantage, so such devices should not be trusted to store sensitive information, or, at the very least, they should be employed as part of MFA.

13.6.3.2 Differentiation by Authentication Mechanism

Cryptographic solutions provide strong authentication mechanisms for token-based devices. If symmetric keys are used, key exchange and distribution may pose a problem for large deployments. PKI solutions avoid the problem by having both public and private key information stored on the device itself.

There also exist time-based or counter-based synchronization devices, where the token and the server share a counter or time context, and the pseudo-random data displayed by the token serves as the authentication input to the server. These devices are easy to deploy and use, but are susceptible to time or counter drift, which requires periodic re-synchronization with the server in order to keep producing valid input. Also, their authentication strength greatly depends on the quality of the randomization algorithm that generates the authentication input.
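A minimal sketch of how such a synchronized token can compute its displayed value, using the well-known HOTP/TOTP construction standardized in RFC 4226 and RFC 6238 (one common design for such devices, not the only one). The secret and time step below are illustrative:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226 truncation)."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, timestep: int = 30, skew: int = 0) -> str:
    """Time-based variant: the counter is the current 30-second window.
    `skew` lets a server check adjacent windows to tolerate clock drift."""
    counter = int(time.time()) // timestep + skew
    return hotp(secret, counter)
```

The `skew` parameter illustrates why drift matters: a server typically accepts codes from a small window of adjacent counters, and a token that drifts beyond that window must be re-synchronized.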

13.6.4 Temporary Tokens

Usually, these are mechanisms that rely on short-lived tokens (or tickets, depending on the literature), valid only within a single session or transaction. The next session of the same subject will produce a completely different token, and any previous ones are considered invalid. These tokens carry temporary shared secrets and therefore allow these methods to be classified as token-based.

Strictly speaking, any durable session requires some kind of token-based solution behind it, unless the user is going to be authenticated upon each request. Anybody who has used the Internet is undoubtedly familiar with the cookies issued by web sites when a user logs into them. These cookies represent tokens that allow the user to avoid being asked for his credentials on successive requests to the site.
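A rough sketch of how a server might mint and verify such a session token: the server binds a user id to an expiry time and signs the pair with a server-side key. The key name and token format here are hypothetical; real web frameworks provide hardened equivalents of this pattern:

```python
import base64
import hashlib
import hmac
import time
from typing import Optional

SERVER_KEY = b"server-side secret"  # hypothetical key; never sent to the client

def issue_token(user_id: str, ttl: int = 3600) -> str:
    """Issue a short-lived token binding a user id to an expiry timestamp."""
    expires = str(int(time.time()) + ttl)
    payload = f"{user_id}|{expires}".encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str) -> Optional[str]:
    """Return the user id if the token is authentic and unexpired, else None."""
    try:
        payload_b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64)
    except Exception:
        return None                                    # malformed token
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                                    # signature mismatch (tampered)
    user_id, expires = payload.decode().split("|")
    if time.time() > int(expires):
        return None                                    # expired
    return user_id
```

Because the secret never leaves the server, the client cannot forge a valid token, and the embedded expiry makes the token short-lived, as described above.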

13.6.4.1 Challenge-Response

The main goal of this family of protocols is to confirm that the subject does indeed possess the shared key that belongs to the claimed identity. This confirmation, a.k.a. proof-of-possession, may be required to verify that a request does not constitute a replay attack using an existing token, or to establish a new one.

There are various implementations of this method, but they all share the same basic idea: the server sends the client a request containing unique data (a nonce), which the client processes in a pre-determined way using the shared key and returns to the server as confirmation. For this approach to work, the server and client must share some common attribute besides the shared key. Commonly used attributes are timestamps or counters, but other approaches are also possible.
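The nonce exchange described above can be sketched with an HMAC over the challenge. This is a simplified illustration of the idea, not any particular protocol; the shared key shown is a placeholder:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"pre-shared key"  # placeholder; known to both server and client

def server_challenge() -> bytes:
    """Server side: generate a fresh, unpredictable nonce for each attempt."""
    return os.urandom(16)

def client_response(nonce: bytes) -> str:
    """Client side: prove possession of the key without ever sending it."""
    return hmac.new(SHARED_KEY, nonce, hashlib.sha256).hexdigest()

def server_verify(nonce: bytes, response: str) -> bool:
    """Server side: recompute the expected response and compare."""
    expected = hmac.new(SHARED_KEY, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Because each nonce is used only once, replaying a captured response against a new challenge fails, which is exactly the replay protection the text describes.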

If the goal is establishing a new token, the protocol is not limited to the confirmation exchange; there is a 'handshake' process that allows creating a new shared secret without explicitly sending credentials. The 'handshake' may involve one-way or two-way (mutual) authentication between client and server. The following example protocols are in common use today: Windows NTLM and SSL.

13.6.4.2 Single Sign-In (SSI)

This family of authentication protocols aims to simplify a user's access to multiple services. Traditionally, each user established individual accounts with each service he used and was forced to perform multiple logons during a single session. The SSI concept replaces this multitude of logons with a single logon to a centralized server, which then issues tokens accepted by all services in that domain.

As far as trust in the presented token goes, individual services have the option to check the token issued by the central server and decide whether they require re-authentication. Possible reasons for re-authentication include, among other things, insufficiently strong initial authentication or expired validity. Usually, the services share secret keys with the authentication server to allow encryption and integrity checking of the issued tokens.

The Kerberos protocol is one of the best-known examples of an SSI system. It originated at MIT in the late 1980s as an academic project but has evolved into a widely accepted standard. It replaced NTLM as the default authentication mechanism starting with Windows 2000.

Currently, the Liberty Alliance group of vendors is developing an SSI solution for Web Services based on the Security Assertion Markup Language (SAML).

13.6.5 Multi-factor Authentication

Authentication is considered multi-factor (MFA) when several different types of authentication are combined to strengthen the overall protection. It is worth combining multiple identification and authentication technologies, since doing so significantly improves security by applying the so-called 'defense in depth' strategy: it is much harder to defeat two combined technologies than one.

13.6.6 Summary of Authentication Methods

The table below details the pros and cons of each approach:

| Method | Resistance to user id sharing | Attack resistance | Ease of implementation | Ease of use |
| --- | --- | --- | --- | --- |
| What you know: password, PIN | Poor. It is easy for one user to give another user their user id. | Fair, if the above-mentioned guidelines are implemented. | Excellent. Most applications already include this feature. | Good. Most users are accustomed to it. |
| What you are: biometrics | Excellent. Users cannot easily share a physical part of their body. | Good. Special skills are required to fool the approximation algorithms and readers into false acceptance. | Fair. Requires installation of biometric devices on workstations and integration with applications. | Good. Easy to use, but some devices may be intrusive or inconvenient for users. |
| What you have: smart card | Good. Users must physically hand the device to someone else to let them gain access. | Excellent. Strong encryption is used to communicate this information. | Good. Often requires implementation of identity readers or periodic re-synchronization. | Excellent. Most of the time it is automated, and user interaction is minimal. |




HIPAA Security Implementation, Version 1.0
ISBN: 974372722
Year: 2003
Pages: 181