The Sin Explained

SSL is a connection-based protocol (although a connectionless version is on track to surface from the Internet Engineering Task Force, or IETF, pretty soon). The primary goal of SSL is to transfer messages between two parties over a network where the two parties know, as definitively as is reasonable, to whom they're talking (it's pretty difficult to ever be absolutely sure who you're talking to, of course), and to ensure that those messages are not readable or modifiable by an attacker with access to the network.

To get to the point where two parties can have arbitrary secure communications with SSL, the two parties need to authenticate each other first. Pretty much universally, the client needs to authenticate the server. The server may be willing to talk to anonymous users, perhaps to get them enrolled. If not, it will want to do its own authentication of the client. That might happen at the same time (mutual authentication), or it may involve subsequent authentication, such as by using a password, over an established link. However, the legitimacy of the server's authentication of the client will depend on the quality of the client's authentication of the server. If the client doesn't do a good job making sure it's talking to the right server, then it could be possible for an attacker to talk to the client and then relay that information to the server (this is called a man-in-the-middle, or MITM, attack), even if, for example, the client sends the server the right password.

SSL uses a client-server model. Often, the client and the server authenticate to each other using separate mechanisms. For authenticating servers, most of the world uses a Public Key Infrastructure (PKI). As part of the setup process, the server creates a certificate. The certificate contains a public key that can be used for establishing a session, along with a bunch of data (such as the name of the server, validity dates, and so on) cryptographically bound to each other and to the public key. But the client needs to know that the certificate really does belong to the server.
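To make the certificate's contents concrete, here is a minimal Python sketch that fetches a server's certificate and prints the data bound into it. It assumes the third-party cryptography package is available; the host name is a placeholder, and no validation is performed at this point.

    # Sketch: fetch a server's certificate and inspect the data bound into it.
    # No validation is done here; this is only for illustration.
    import ssl
    from cryptography import x509

    HOST, PORT = "example.com", 443   # placeholder endpoint

    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

    print("Subject:   ", cert.subject.rfc4514_string())
    print("Issuer:    ", cert.issuer.rfc4514_string())
    print("Not before:", cert.not_valid_before)
    print("Not after: ", cert.not_valid_after)
    print("Public key:", cert.public_key())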

PKIs provide a mechanism to make server validation happen: the certificate is signed by a trusted third party, known as a Certification Authority (CA). (Technically, there can also be a chain of trust from the certificate to a root, with intermediate signing certificates.) Checking to make sure that the certificate is the correct one can take a lot of work. First, the client needs some sort of basis for validating the CA's signature. Generally, the client needs to have pre-installed root certificates for common CAs, such as VeriSign, or have a root certificate for an enterprise CA, which makes internally deployed SSL practical. Then, if it does have the CA's signing key, it needs to validate the signature, which confirms that the contents of the certificate are the same as they were when the CA signed them.
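As a hedged sketch of letting the library do this work, the following Python fragment uses ssl.create_default_context(), which loads the platform's trusted roots and verifies the chain and validity period during the handshake; the host name and the enterprise root path are placeholders.

    # Sketch: have the library validate the chain of trust during the handshake.
    import socket
    import ssl

    HOST = "example.com"   # placeholder

    # create_default_context() loads the platform's trusted root certificates
    # and enables chain and validity-period checking.
    ctx = ssl.create_default_context()

    # For an internally deployed (enterprise) CA, trust your own root instead:
    # ctx = ssl.create_default_context(cafile="/path/to/enterprise-root.pem")

    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            # The handshake raises ssl.SSLCertVerificationError if the chain
            # does not terminate at a trusted root.
            print("Negotiated", tls.version(), "with a validated chain")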

The certificate is generally only valid for a period of time; there is a start date and an expiration date in the certificate, just like with a credit card. If the certificate isn't valid, then the client isn't supposed to accept it. Part of the theory here is that the longer a certificate and its corresponding private key exist, the greater the risk that the private key has been stolen. Plus, once the certificate expires, the CA no longer has the responsibility to track whether the private key associated with it was compromised.
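If your library leaves the validity-period check to you, the check itself is a simple date comparison. A minimal sketch, assuming cert is the parsed certificate from the earlier fragment:

    # Sketch: explicit validity-period check, for libraries that do not do it.
    # Assumes "cert" is a cryptography.x509.Certificate (see earlier sketch).
    from datetime import datetime, timezone

    now = datetime.now(timezone.utc)
    not_before = cert.not_valid_before.replace(tzinfo=timezone.utc)
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)

    if not (not_before <= now <= not_after):
        raise ValueError("certificate is expired or not yet valid")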

Many platforms come with a common list of root certificates that can be used for establishing trust. Libraries may or may not check for a chain of trust to a root certificate on the developer's behalf. They may or may not check for an expired certificate (or a certificate that is not yet valid). When you're using HTTPS, libraries generally will do these things, because HTTPS explicitly specifies them (and you would have to explicitly write code not to check for them). Otherwise, you have to add these checks to your code.
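As a concrete example of "may or may not," a bare SSL context in Python's ssl module does no peer verification until you turn it on; a minimal sketch:

    # Sketch: a bare SSLContext does not verify the peer until you ask it to.
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)   # generic, legacy-style context
    print(ctx.verify_mode)                   # CERT_NONE -- no checking at all

    # Turn the checks on explicitly:
    ctx.verify_mode = ssl.CERT_REQUIRED      # require and verify a certificate
    ctx.load_default_certs()                 # trust the platform's root CAs
    ctx.check_hostname = True                # and match the intended host name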

Even if the SSL library you're using does handle these things, there are still plenty of other important things that it may not handle. For example, while the preceding steps validate a chain of trust to a CA and ensure the certificate is within its validity period, they don't actually validate that you've got the party on the other end you really want to have. To illustrate, let's say that you wanted to connect to a service on example.com. You connect via SSL, get a certificate, and then check to see that it hasn't expired and that a known CA has signed it. You haven't checked to see if it's a certificate for example.com, or one for attacker.org. If attackers do insert their own certificates, how will you know?
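A chain that terminates at a trusted root proves only that some CA issued the certificate, not that it names the host you intended to reach. With Python's ssl module, that comparison is controlled by the check_hostname flag together with the server_hostname you pass at connect time; a minimal sketch:

    # Sketch: ask the library to match the certificate against the intended host.
    import ssl

    ctx = ssl.create_default_context()
    ctx.check_hostname = True   # already on for create_default_context();
                                # shown explicitly to make the point

    # The name compared against the certificate is the server_hostname argument
    # passed to ctx.wrap_socket(). With check_hostname off, a certificate that
    # chains to a trusted CA but was issued for attacker.org would be accepted.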

This is a real problem, because it is generally pretty easy for an attacker to get a certificate from a trusted source while staying anonymous, not only by stealing credentials, but also through legitimate means, since there are CAs tied into trusted hierarchies that have extremely lax authentication requirements. (The author has gotten certificates where the CA only checked his information against the registration information attached to his domain, which itself can usually contain bogus information.) Plus, in most cases, systems that don't know how to do proper validation aren't likely to be keeping around certificates after they're used, or to be logging the information necessary to catch a culprit, so there's also very little risk to average attackers in using their own certificates. And using someone else's stolen certificate often works just as well.

The best way to check whether the certificate is the right certificate is to validate every field of the certificate that you care about, particularly the domain name. There are two fields this could appear in: the distinguished name (DN) field and a subjectAltName field of type dnsName. Note that these fields contain other information besides simply a hostname.
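For libraries that hand you the certificate and leave the comparison to you, the fields can be pulled out directly. A minimal sketch with the cryptography package, assuming cert is the parsed certificate from the earlier fragments and the expected name is a placeholder (real matching also has to handle wildcard entries such as *.example.com):

    # Sketch: extract the names yourself when the library will not match them.
    # Assumes "cert" is a cryptography.x509 certificate (see earlier sketches).
    from cryptography import x509
    from cryptography.x509.oid import NameOID

    EXPECTED = "example.com"   # the host you intended to reach (placeholder)

    # subjectAltName entries of type dnsName.
    try:
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        dns_names = san.value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        dns_names = []

    # Common Name from the subject distinguished name (the legacy location).
    common_names = [a.value for a in
                    cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]

    if EXPECTED not in dns_names and EXPECTED not in common_names:
        raise ValueError("certificate was not issued for " + EXPECTED)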

Once you've done all that, are your SSL risks all gone? While you've gotten rid of the biggest, most common risks (particularly attackers inserting certificates that aren't signed by a known CA, or that don't have the correct data fields), you're still not out of the woods. What happens if the private key associated with the server's certificate is stolen? With such server credentials, an attacker can masquerade as the server, and none of the validation we've discussed will detect the problem, even if the server administrator has identified the compromise and installed new credentials. Even HTTPS is susceptible to this problem, despite its generally good approach to SSL security.

What you need is some kind of way to learn that a server certificate maps to credentials that are no longer valid. There are a couple of ways to do this. The first is to use certificate revocation lists (CRLs). The basic idea here is that the CA keeps a list of all the bad certificates out there (revoked certificates), and you can download that list, generally either via HTTP or the Lightweight Directory Access Protocol (LDAP). There are a few issues with CRLs:

  • There can be a big window of vulnerability between the time the private key associated with a certificate is stolen and the time clients download the CRLs. First, the theft has to be noticed and reported to the CA. Then, the CA has to stick the associated certificate in its CRL and publish that CRL. This process can give an attacker weeks to go around masquerading as a popular web site.

  • CRLs aren't easy to check, because they're not well supported in general. SSL libraries tend not to support them well (and sometimes not at all). In those libraries that do support them, it usually requires a lot of code to acquire and check CRLs. Plus, CAs tend not to be explicit about where to find them (the location is supposed to be listed in the certificate, but most often is not). Some CAs don't update their CRLs frequently, and some don't even publish them at all. (A minimal fetch-and-check sketch follows this list.)
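Here is the sketch referred to above: it pulls the CRL distribution point out of the certificate, downloads the list, and checks the certificate's serial number against it. It assumes the cryptography package, that cert is the parsed certificate from the earlier fragments, and that the certificate actually carries an HTTP distribution point; a real implementation must also verify the CRL's own signature and freshness.

    # Sketch: fetch a CRL named in the certificate and check the serial number.
    # A real implementation must also verify the CRL's signature and that it
    # has been updated recently.
    import urllib.request
    from cryptography import x509

    dist_points = cert.extensions.get_extension_for_class(
        x509.CRLDistributionPoints).value

    crl_urls = [name.value
                for dp in dist_points
                for name in (dp.full_name or [])
                if isinstance(name, x509.UniformResourceIdentifier)]

    with urllib.request.urlopen(crl_urls[0]) as resp:
        crl = x509.load_der_x509_crl(resp.read())

    if crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None:
        raise ValueError("certificate has been revoked")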

Another option is the Online Certificate Status Protocol (OCSP). The goal of OCSP is to reduce the window of vulnerability by providing an online service for checking certificate status. It shares a problem with CRLs in that it is very poorly supported, in general. (Despite being an IETF standard, many CAs and SSL APIs don't support it at all, and those APIs that do support it are likely to have it turned off by default.) And OCSP has some unique problems of its own, the most obvious being that you'll need full network connectivity to the OCSP responder. For this reason, if implementing OCSP, you should either fail safe when the responder isn't accessible, or, at least, take a defense-in-depth strategy and also download and check CRLs, failing if the CRLs haven't been updated in a reasonable amount of time.
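A hedged sketch of an OCSP query with the cryptography package follows. It assumes cert and its issuer's certificate issuer_cert are already parsed, and that the certificate's Authority Information Access extension names an HTTP responder; real code must also verify the responder's signature and, per the advice above, decide how to fail when the responder is unreachable.

    # Sketch: ask an OCSP responder whether the certificate is still good.
    # Assumes "cert" and "issuer_cert" are cryptography.x509 certificates.
    import urllib.request
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID

    aia = cert.extensions.get_extension_for_class(
        x509.AuthorityInformationAccess).value
    responder_url = next(d.access_location.value for d in aia
                         if d.access_method == AuthorityInformationAccessOID.OCSP)

    request = (ocsp.OCSPRequestBuilder()
               .add_certificate(cert, issuer_cert, hashes.SHA1())
               .build()
               .public_bytes(serialization.Encoding.DER))

    http_req = urllib.request.Request(
        responder_url, data=request,
        headers={"Content-Type": "application/ocsp-request"})
    with urllib.request.urlopen(http_req) as resp:
        answer = ocsp.load_der_ocsp_response(resp.read())

    if answer.certificate_status != ocsp.OCSPCertStatus.GOOD:
        raise ValueError("OCSP responder did not report the certificate as good")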

While we've covered the major SSL-specific problems, there are other things that deserve brief mention. First, previous versions of SSL have had security issues, some major and some minor. You're best off using the latest version of the TLS protocol in your applications, and not allowing older versions, especially SSLv2 and PCT. That can sometimes be tricky, because libraries will often allow you to negotiate any version of the protocol by default. You should also stay away from cipher suites that are at high risk of cryptographic breaks. In particular, you should avoid the RC4 cipher suites. RC4 is known for its speed, and people often use it because they assume it will speed things up, although with SSL it generally won't make any significant difference. And RC4 is cryptographically unsound, with strong evidence that it is possible to break the algorithm, given a reasonably sized data stream, even if all current best practices are followed. In short, the performance bottleneck for most applications will be the initial public key operations associated with authentication, not the ongoing cryptographic operations (well, unless you're using 3DES).
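As a minimal sketch of that advice with Python's ssl module, the fragment below refuses anything older than TLS 1.2 and excludes RC4 and 3DES; the exact cipher string is an assumption and depends on the underlying OpenSSL build.

    # Sketch: refuse old protocol versions and weak cipher suites.
    import ssl

    ctx = ssl.create_default_context()

    # Do not negotiate anything older than TLS 1.2 (SSLv2 and SSLv3 are already
    # refused by modern builds, but be explicit).
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    # Prefer AEAD suites with forward secrecy; exclude RC4 and 3DES outright.
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!RC4:!3DES:!aNULL:!eNULL")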

Related Sins

In this sin, we primarily talk about the client authenticating the server, even though the server generally has to authenticate the client as well. Usually, the client is responsible for authenticating the server, and then, when the client is convinced it's talking to the server over a secure connection, it will send authentication data over that connection (although SSL does provide several client-authentication mechanisms that could be used, if desired). There can be a bunch of risks with client authentication protocols, particularly password protocols, as you'll see in Sin 11.

Really, the core problem we discuss is a common instance of a much broader problem, where two parties settle upon a cryptographic key but don't do so securely. The more general problem is covered in Sin 17.

Additionally, some libraries can introduce new risks by choosing bad keys, due to improper use of cryptographic random numbers, as discussed in Sin 18.


