14.3. Defenses

As we showed earlier in the example of the eBay attack, we can separate an online interaction into four steps (Figure 14-5):

  • Message retrieval. An email message or web page arrives at the user's personal computer from the Internet.

  • Presentation. The message is displayed in the user interface, the user perceives it, and the user forms a mental model.

  • Action. Guided by the mental model, the user performs an action in the user interface, such as clicking a link or filling in a form.

  • System operation. The user's action is translated into system operations, such as connecting to a web server and submitting data.

In this section, we survey existing defenses against phishing attacks, classifying them according to which of these four steps they address.

Figure 14-5. Four steps of human-Internet interaction


14.3.1. Message Retrieval

In an ideal world, the best defense against phishing would simply block all phishing communications from being shown to the user, by filtering them at message retrieval time. The essential requirement for this solution is that the computer alone must be able to accurately differentiate phishing messages from legitimate ones. Defenses that filter at message retrieval depend on message properties that are easily understood by a computer.

14.3.1.1 Identity of the sender

One of these properties is the identity of the sender. Black listing is widely used to block potentially dangerous or unwelcome messages, such as spam. If the sender's IP address is found in a black list, the incoming message can be categorized as spam or even simply rejected without informing the user. A black list may be managed by an individual user, the approach taken by Internet Explorer's Content Advisor (Figure 14-6). Alternatively, it may be managed by an organization or by collaboration among many users. For phishing, the EarthLink Toolbar alerts the user about web pages that are found on a black list of known fraudulent sites.[8]

[8] EarthLink Toolbar: Featuring ScamBlocker; http://www.earthlink.net/earthlinktoolbar/download/.

Figure 14-6. Internet Explorer's Content Advisor


Black listing is unlikely to be an effective defense on today's Internet, because it is so easy to generate new identities such as new email addresses and new domain names. Even new IP addresses are cheap and easy to obtain. The black list must be updated constantly to warn users about dangerous messages from newly created sources. Because phishing sites exist for only a short time, the black list must be updated within hours or minutes in order to be effective at blocking the attack.
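The lookup itself is trivial, as the following sketch shows; the IP addresses and classification labels are invented for illustration. What the sketch cannot show is the hard part described above: keeping the list current within hours or minutes.

```python
# Minimal sketch of sender-based filtering at message retrieval time.
# The addresses below are illustrative assumptions, not real data.

BLACKLIST = {"203.0.113.7", "198.51.100.25"}   # hypothetical known-bad senders

def classify(sender_ip: str) -> str:
    """Classify an incoming message by its sender's IP address."""
    if sender_ip in BLACKLIST:
        return "reject"     # categorize as spam, or drop without telling the user
    return "deliver"        # pass the message on for presentation

print(classify("203.0.113.7"))   # -> reject
print(classify("192.0.2.1"))     # -> deliver
```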

The converse of black listing is white listing: allowing users to see messages only from a list of acceptable sources. For example, Secure Browser controls where users may browse on the Internet using a list of permitted URLs.[9] White listing avoids the new-identity problem because newly created sources are initially marked as unacceptable. But defining the white list is a serious problem. Because it is impossible to predict where a user might need to browse, a predefined, fixed white list invariably blocks users from accessing legitimate web sites. On the other hand, a dynamic white list that requires the user's involvement burdens users: for every site they want to visit, they must first decide whether to put it in the white list. It also creates a vulnerability: if a phishing site can convince users to submit sensitive data to it, it may also be able to convince them to put it into a white list.

[9] Tropical Software Secure Browser; http://www.tropsoft.com/secbrowser/.
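A dynamic white list can be sketched in a few lines; the host names and the user-prompt callback are illustrative assumptions. The last line shows the vulnerability just described: a user persuaded once puts the phishing site on the list permanently.

```python
# Sketch of a dynamic white list: every destination outside the list
# requires an explicit user decision. Host names are invented.
from urllib.parse import urlparse

whitelist = {"www.example-bank.com", "mail.example.org"}

def may_browse(url, ask_user):
    host = urlparse(url).hostname
    if host in whitelist:
        return True
    if ask_user(host):          # user must judge every new site...
        whitelist.add(host)     # ...and a persuasive phishing site
        return True             # can talk its way onto the list
    return False

# A user tricked into approving once whitelists the attacker for good:
print(may_browse("http://ebay-members-security.com/login", lambda h: True))  # -> True
```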

14.3.1.2 Textual content of the message

Another property amenable to message filtering is the textual content of the message. This kind of content analysis is widely used in antispam and antivirus solutions. Dangerous messages are detected by searching for well-known patterns, such as spam keywords and virus code signatures. To beat content analysis, an attacker can tweak the content to bypass the well-known filtering rules. For example, encryption and compression are added to existing viruses to slip past antivirus scans.[10] Random characters are inserted into spam emails to slip past spam filters. One sophisticated phishing attack used images to display text messages in order to defeat content analysis.[11]

[10] F-SECURE, "F-Secure Virus Descriptions: Bagle.N"; http://www.f-secure.com/v-descs/bagle_n.shtml.

[11] Anti-Phishing Working Group, "MBNAMBNA Informs You!" (Feb. 24, 2004); http://www.antiphishing.org/phishing_archive/MBNA_2-24-04.htm.

Spam filtering is one defense that applies at message retrieval time. Because nearly all phishing attacks are currently launched by spam, getting spam under control may reduce the risk of phishing attacks significantly. Unfortunately, the techniques used by many spam filters, which scan for keywords in the message content to distinguish spam from legitimate mail, are insufficient for classifying phishing attacks, because phishing messages are designed expressly to mimic legitimate mail from organizations with which the user already has a relationship. Even if spam filters manage to reduce the spam problem substantially, we can anticipate that phishing will move to other transmission vectors, such as anonymous comments on discussion web sites, or narrowly targeted email attacks rather than broadcast spam.
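A minimal keyword filter of the kind just described makes the problem concrete; the keyword list and sample message are invented for illustration. A phishing message that mimics legitimate account mail trips no keyword rule at all.

```python
# Naive keyword-based content filter of the kind the text calls
# insufficient for phishing. Keywords are illustrative assumptions.
SPAM_KEYWORDS = {"viagra", "lottery", "winner"}

def spam_score(body: str) -> int:
    """Count how many known spam keywords appear in the message body."""
    return sum(1 for word in body.lower().split() if word in SPAM_KEYWORDS)

phish = "Dear eBay member, please verify your account information."
print(spam_score(phish))   # -> 0: the phishing text looks like legitimate mail
```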

14.3.2. Presentation

When a message is presented to the user, in either an email client or a web browser, the user interface can provide visual cues to help the user decide whether the message is legitimate.

Current web browsers reflect information about the source and integrity of a web page through a set of visual cues. For example, the address bar at the top of the window displays the URL of the retrieved web page. A lock icon, typically found in the status bar, indicates whether the page was retrieved through an encrypted, authenticated connection. These cues are currently the most widely deployed and most user-accessible defenses against phishing, and security advisories about phishing warn users to pay close attention to them at all times.[12], [13], [14]

[12] eBay Inc., "Email and Websites Impersonating eBay"; http://pages.ebay.com/help/confidence/isgw-account-theft-spoof.html.

[13] Federal Bureau of Investigation, Department of Justice, "FBI Says Web 'Spoofing' Scams Are a Growing Problem" (2003); http://www.fbi.gov/pressrel/pressrel03/spoofing072103.htm.

[14] PayPal Inc., "Security Tips"; http://www.paypal.com/cgi-bin/webscr?cmd=p/gen/fraud-prevention-outside.

Unfortunately, these visual cues are vulnerable for several reasons. First, the cues are displayed in the peripheral area of the browser, separately from the page content. Because the content is central and almost always is the user's focus of attention, a peripheral cue must fight to draw the user's attention. Second, these cues can be attacked directly by phishing. As we mentioned earlier, URL hiding and domain name similarity are evidence that the address bar is susceptible to deception. JavaScript and Java applets have also been used to hide or fake other security cues, including the address bar, status bar, authentication dialog boxes, SSL lock icon, and SSL certificate information.[15], [16], [17]

[15] J. D. Tygar and Alma Whitten, "WWW Electronic Commerce and Java Trojan Horses," Proceedings of the Second USENIX Workshop on Electronic Commerce (1996).

[16] Edward W. Felten, Dirk Balfanz, Drew Dean, and Dan S. Wallach, "Web Spoofing: An Internet Con Game," 20th National Information Systems Security Conference (1996).

[17] Zishuang Ye, Yougu Yuan, and Sean Smith, Web Spoofing Revisited: SSL and Beyond, Technical Report TR2002-417, Dartmouth College (2002).

PEOPLE MAY IGNORE SECURITY CUES

A general problem with the presentation of security cues is that users may disregard them, or attribute their presence to causes other than malicious attack. We observed this effect recently while developing a new authentication mechanism for logging in to web sites through an untrusted, public Internet terminal. Instead of requesting a secret password through the untrusted terminal (where it may be recorded by a key logger), authentication is performed on the user's cell phone using SMS messages and WAP browsing. To defend this approach against spoofing, however, it was necessary to associate a unique session name with the login attempt.

The user's only task was to confirm that the session name displayed in the untrusted web browser was the same as the session name displayed on the cell phone. In a user study of 20 users, however, the error rate for this confirmation was 30%. In other words, out of 20 times that we simulated an attack in which the session name on the phone differed from the session name on the terminal, users erroneously confirmed the session 6 times, giving the attacker access to their accounts.

Some users erred simply because they had stopped paying attention to the session names. Others made telling comments:

  • "There must be a bug because the session name displayed in the computer does not match the one in the mobile phone."

  • "The network connection must be really slow because the session name has not been displayed yet."

We subsequently changed the user interface design so that instead of simply approving the session name (Yes or No), the user is obliged to choose the session name from a short list of choices. Not surprisingly, the error rate dropped to zero, because the new design forces users to attend to the security cue and prevents them from rationalizing away discrepancies.


eBay's Account Guard (Figure 14-7) puts a site identity indicator into a dedicated toolbar.[18] Account Guard separates the Internet into three categories, described next.

[18] eBay toolbar; http://pages.ebay.com/ebay_toolbar/.

  • Web sites truly belonging to eBay or PayPal, indicated by a green icon

  • Known spoofs of eBay or PayPal, indicated by a red icon

  • All other sites, indicated by a neutral gray icon

One problem with this approach is its lack of scalability. Of course, phishing attacks are not limited to eBay and PayPal. As of October 2004, the Anti-Phishing Working Group had collected attacks targeted at customers of 39 different organizations. It is impossible to cram all the possible toolbars, each representing a single organization, into a single browser. A better approach would be a single toolbar, created and managed by a single authority such as VeriSign or TRUSTe, to which organizations could subscribe if they have been, or fear becoming, victims of phishing attacks. VeriSign might do this right away by rolling out a toolbar that automatically certifies all members of its VeriSign Secured Seal program.[19]

[19] VeriSign Secured Seal Program; http://www.verisign.com/products-services/security-services/secured-seal/index.html.

Figure 14-7. eBay Account Guard toolbar


SpoofStick (Figure 14-8) is a browser extension that helps users parse the URL and detect URL spoofing by displaying only the most relevant domain information on a dedicated toolbar. For example, when the current URL is http://signin.ebay.com@10.19.32.4, SpoofStick displays "You're on 10.19.32.4". When the current URL is http://www.citibank.com.intl-en.us, SpoofStick displays "You're on intl-en.us". Because it uses a large, colorful font, this toolbar is presumably easier for users to notice. But SpoofStick cannot solve the similar-domain-name problem: is ebay-members-security.com a domain owned by eBay, or is mypaypal.com a legitimate domain for PayPal? If the user's answer to either of these questions is yes, then the user will be tricked even with SpoofStick installed. Moreover, it is unknown whether seeing an IP address instead of a domain name raises sufficient suspicion in users' minds, because some legitimate sites also use bare IP addresses (e.g., Google caches).

Figure 14-8. SpoofStick toolbar
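SpoofStick's parsing can be approximated in a few lines. The heuristics here, a bare-IP check and a "last two labels" rule in place of a real public-suffix list, are assumptions of this sketch, not SpoofStick's actual implementation.

```python
# Sketch of SpoofStick-style URL parsing: show the host the browser
# will actually contact, ignoring any misleading user-info prefix.
from urllib.parse import urlsplit

def display_domain(url: str) -> str:
    """Return the part of the URL a SpoofStick-like toolbar would show."""
    host = urlsplit(url).hostname or ""     # drops any "user@" prefix
    if host.replace(".", "").isdigit():     # bare IP address: show it whole
        return host
    labels = host.split(".")
    # Without a public-suffix list, approximate the "most relevant"
    # domain as the last two labels (an assumption of this sketch).
    return ".".join(labels[-2:])

print(display_domain("http://signin.ebay.com@10.19.32.4"))   # -> 10.19.32.4
print(display_domain("http://www.citibank.com.intl-en.us"))  # -> intl-en.us
```

Note that the sketch inherits SpoofStick's limits exactly as described above: it renders ebay-members-security.com faithfully, and faithfulness is no help when the name itself is the deception.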


In order to address the problem of faked cues, Ye and Smith have proposed synchronized random dynamic boundaries.[20] With this approach, all legitimate browser windows change their border colors together at random intervals. Because a spoofed window generated by a remote site has no access to the random value generated on the local machine, its border does not change synchronously with the legitimate window borders. This approach was considered for inclusion in the Mozilla web browser, but was dropped out of concern that users wouldn't understand it (see Chapter 28).

[20] Zishuang Ye and Sean Smith, "Trusted Paths for Browsers," ACM Transactions on Information and System Security 8:2 (May 2005), 153-186.

A related approach, proposed by Tygar and Whitten,[21] is personalized display, in which legitimate browser windows are stamped with a personal logo, such as a picture of the user's face. The same principle can be used to distinguish legitimate web pages from phishing attacks. For example, Amazon and Yahoo! greet registered users by name. Anti-phishing advisories suggest that an impersonal email greeting should be treated as a warning sign for a potential spoofed email.[22] PassMark goes even further, by displaying a user-configured image as part of the web site's login page, so that the user can authenticate the web site at the same time that the web site authenticates the user.[23]

[21] Tygar and Whitten.

[22] eBay, Inc., "Tutorial: Spoof (fake) Emails"; http://pages.ebay.com/education/spooftutorial/.

[23] PassMark Security; http://www.passmarksecurity.com/twoWay.jsp.

Personalization is much harder to spoof, but requires more configuration by the user. Configuration could be avoided if the web site automatically chose a random image for the user, but a system-chosen image may not be memorable. Another question about personalization is whether the lack of personalization in a phishing attack would raise sufficient warning flags in a user's mind. The absence of a positive cue like personalization may not trigger caution in the same way that the presence of a negative cue, like a red light in a toolbar, does.

14.3.3. Action

Phishing depends on a user not only being deceived but also acting in response to persuasion. As a result, security advisories try to discourage users from performing potentially dangerous actions. For example, most current phishing attacks use email messages as the initial bait, in order to trick the recipient into clicking through a link provided in the email, which points to a phishing server. Security tips suggest that the user should ignore links provided by email, and instead open a new browser and manually type the URL of the legitimate site.

This advice is unlikely to be followed. Given how infrequent phishing messages are relative to legitimate ones, it sacrifices the convenience of hyperlinks in every legitimate email in order to keep users from clicking the misleading links in a very few phishing emails.

14.3.4. System Operation

In the final step of a successful phishing attack, the user's action is translated into a system operation. This is the last chance we have to prevent the attack. Unfortunately, because phishing does not exploit system bugs, the system operations involved in a phishing attack are perfectly valid. For example, it is ordinary to post information to a remote server. Warnings based solely on system operations will inevitably generate a high rate of false positive errors, that is, warnings to users about innocent actions (Figure 14-9). These false positives eventually cause users to disable the warnings or simply to become habituated to "swatting" the warning away.

Figure 14-9. Warning based on system operations


A more interesting approach involves modifying the system operation according to its destination. Web password hashing applies this idea to defend against phishing attacks that steal web site passwords.[24] The browser automatically hashes the password typed by the user with the domain name to which it is being sent, generating a unique password for each site; a phishing site therefore receives only useless garbage. Web password hashing assumes that users will type their passwords only into a password HTML element. But this element can be spoofed, and a sophisticated attack may be able to trick users into disclosing their passwords through other channels.

[24] Dan Boneh, John Mitchell, and Blake Ross, "Web Password Hashing," Stanford University; http://crypto.stanford.edu/PwdHash/.
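The core idea can be sketched as follows. The use of HMAC-SHA-256 and the 12-character truncation are assumptions of this sketch, not PwdHash's actual construction.

```python
# Sketch of web password hashing: the browser derives a per-site
# password from the master password and the destination domain.
# HMAC-SHA-256 and the truncation length are assumed choices.
import hashlib
import hmac

def site_password(master: str, domain: str) -> str:
    """Derive a site-specific password from the master password."""
    mac = hmac.new(master.encode(), domain.encode(), hashlib.sha256)
    return mac.hexdigest()[:12]    # truncate to a typable length

real  = site_password("hunter2", "ebay.com")
decoy = site_password("hunter2", "ebay-members-security.com")
print(real != decoy)   # -> True: the phishing site gets useless garbage
```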

14.3.5. Case Study: SpoofGuard

The most comprehensive solution thus far for stopping phishing at the user interface is SpoofGuard, a browser plug-in for Internet Explorer.[25] SpoofGuard addresses three of the four steps where phishing might be prevented.

[25] Chou et al.

At message retrieval time, SpoofGuard calculates a total spoof score for an incoming web page. The calculation is based on common characteristics of known phishing attacks, including:

  • Potentially misleading patterns in URLs, such as use of @

  • Similarity of the domain name to popular or previously visited domains, as measured by edit distance

  • Embedded images that are similar to images from frequently spoofed domains, as measured by image hashing

  • Whether links in the page contain misleading URLs

  • Whether the page includes password fields but fails to use SSL, as most phishing sites eschew SSL
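The second characteristic in the list above, domain similarity, can be illustrated with a short sketch. The similarity measure (difflib's ratio rather than raw edit distance), the 0.8 threshold, and the domain list are all assumptions of this sketch, not SpoofGuard's actual parameters.

```python
# Sketch of a domain-similarity heuristic: flag a domain that is
# close to, but not equal to, a popular domain. Threshold and list
# are illustrative assumptions.
from difflib import SequenceMatcher

POPULAR = ["ebay.com", "paypal.com", "citibank.com"]

def suspicious_domain(domain: str) -> bool:
    """True if the domain nearly matches a popular domain without equaling it."""
    for known in POPULAR:
        ratio = SequenceMatcher(None, domain, known).ratio()
        if domain != known and ratio > 0.8:   # close but not identical
            return True
    return False

print(suspicious_domain("ebay.com"))    # -> False: an exact match is fine
print(suspicious_domain("ebay1.com"))   # -> True: one character away
```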

At presentation time, SpoofGuard translates this spoof score into a traffic light (red, yellow, or green) displayed in a dedicated toolbar. Further, when the score is above a threshold, SpoofGuard pops up a modal warning box that demands the user's consent before it proceeds with displaying the page.

For the action step, SpoofGuard does nothing to modify the user's online behavior. The user is free to click on any links or buttons in the page, regardless of their spoof score.

SpoofGuard becomes involved again in the system operation step, however, by evaluating posted data before it is submitted to a remote server. The evaluation tries to detect whether sensitive data is being sent, by maintaining a database of passwords (stored as hashes) and comparing each element sent against the database. If a user's eBay password is sent to a site outside ebay.com, then the spoof score for the interaction is increased. This evaluation is also linked with the detection of embedded images, so that if the page also contains an eBay logo, the spoof score is increased still more. If the evaluation of the system operation causes the spoof score to exceed a certain threshold, then the post is blocked and the user is warned.
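The outgoing check might be sketched like this. The hashing scheme, score values, and the example password are assumptions of this sketch; SpoofGuard's actual weighting is more elaborate.

```python
# Sketch of SpoofGuard's outgoing-data check: stored password hashes
# are compared against each posted field, and a match sent to the
# wrong domain raises the spoof score. Values are illustrative.
import hashlib

def h(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Hash of the user's (hypothetical) eBay password, remembered when
# it was first used, mapped to its home domain.
stored = {h("my-ebay-pass"): "ebay.com"}

def score_post(fields, destination):
    """Raise the spoof score when a known password leaves its home domain."""
    score = 0
    for value in fields:
        home = stored.get(h(value))
        if home and home != destination:   # password leaving its home domain
            score += 10
    return score

print(score_post(["my-ebay-pass"], "ebay-members-security.com"))  # -> 10
print(score_post(["my-ebay-pass"], "ebay.com"))                   # -> 0
```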

The evaluation of posted data depends on some assumptions that may not be valid for sophisticated attacks. For example, data must be posted from an HTML form; an attacker might defeat this by using Flash form submission. Further, a password must be submitted as a single piece of clear text to be detected, but JavaScript could easily hash it.

In general, however, SpoofGuard is an impressive step toward fighting phishing attacks at the client side.



Security and Usability: Designing Secure Systems That People Can Use
ISBN: 0596008279
Year: 2004
Pages: 295