Network Infrastructure Security


As an IS auditor performing detailed network assessments and access control reviews, you must first determine the points of entry to the system and then review the associated controls. Per ISACA, the following are controls over the communication network:

  • Network control functions should be performed by technically qualified operators.

  • Network control functions should be separated, and the duties should be rotated on a regular basis, when possible.

  • Network-control software should restrict operators from performing certain functions (such as amending or deleting operator activity logs).

  • Operations management should periodically review audit trails, to detect any unauthorized network operations activities.

  • Network operations standards and protocols should be documented, made available to the operators, and periodically reviewed to ensure compliance.

  • Network access by the system engineers should be closely monitored and reviewed to detect unauthorized access to the network.

  • Analysis should be performed to ensure workload balance, fast response time, and system efficiency.

  • The communications software should maintain a terminal identification file, to check the authenticity of a terminal when it tries to send or receive messages.

  • When appropriate, data encryption should be used to protect messages from disclosure during transmission.


IS auditors should first determine points of entry when performing a detailed network assessment and access control review.


As stated in Chapter 3, the firewall is a secured network gateway. The firewall protects the organization's resources from unauthorized users (internal or external). As an example, firewalls are used to prevent unauthorized users (usually external) from gaining access to an organization's computer systems through the Internet gateway. A firewall can also be used as an interface to connect authorized users to private trusted network resources. Chapter 3 discussed the implementation of a firewall that works closely with a router to filter all network packets, to determine whether to forward them toward their destination. The router can be configured with outbound traffic filtering that drops outbound packets containing source addresses from outside the user's organization. Firewalls and filtering routers can be configured to limit services not allowed by policy and can help prevent misuse of the organization's systems.

An example of misuse associated with outbound packets is a distributed denial-of-service (DDoS) attack. In this type of attack, unauthorized persons gain access to an organization's systems and install a denial-of-service (DoS) program that is used to launch an attack against other computers. Basically, a large number of DoS server programs on different hosts await commands from a central client (the unauthorized user). The central client (DDoS client) then sends a message to all the servers (DDoS server programs) instructing them to send as much traffic as they can to the target system. In this scenario, the DDoS program distributes the work of flooding the target among all available DoS servers, creating a distributed DoS. An application gateway firewall can be configured to prevent application traffic such as FTP from entering the organization's network.


Application gateways, or proxy firewalls, are an effective method for controlling file downloading via FTP. Outbound traffic filtering can help prevent an organization's systems from participating in a distributed denial-of-service (DDoS) attack.


A screened-subnet firewall can be used to create a demilitarized zone (DMZ). This type of firewall utilizes a bastion host that is sandwiched between two packet-filtering routers and is the most secure firewall system. This type of firewall system supports both network and application-level security, while defining a separate demilitarized zone network.


Firewalls can be used to prevent unauthorized access to the internal network from the Internet. A firewall located within a screened subnet is a more secure firewall system.


Employees of the organization, as well as partners and vendors, can connect through a dial-up system to get access to organizational resources. One of the methods implemented for authenticating users is a callback system. The callback system works to ensure users are who they say they are by calling back a predefined number to establish a connection. An authorized user first calls a remote server through a dial-up line. The server then disconnects and, based on the user ID and password supplied, dials back to the user's machine using a telephone number from its database. However, it should be noted that callback security can easily be defeated through simple call forwarding.


A callback system is a remote access control whereby the user initially connects to the network systems via dial-up access, only to have the initial connection terminated by the server. The server then subsequently dials the user back at a predetermined number stored in the server's configuration database.


Encryption Techniques

The use of encryption enables companies to digitally protect their most valuable asset: information. The organization's information system contains and processes intellectual property, including organizational strategy, customer lists, and financial data. In fact, the majority of information, as well as the transactions associated with it, is stored digitally. This environment requires companies to use encryption to protect the confidentiality and the integrity of information. Organizations should utilize encryption services to ensure reliable authentication of messages, the integrity of documents, and the confidentiality of information that is transmitted and received.

Cryptography is the art and science of hiding the meaning of communication from unintended recipients by encrypting plain text into cipher text. The process of encryption and decryption is performed by a cryptosystem that uses mathematical functions (algorithms) and a special password called a key.

Encryption is used to protect data while in transit over networks, protect data stored on systems, deter and detect accidental or intentional alterations of data, and verify the authenticity of a transaction or document. In other words, encryption provides confidentiality, authenticity, and nonrepudiation. Nonrepudiation provides proof of the origin of data: it protects the sender against a false denial by the recipient that the data was received, and it protects the recipient against a false denial by the sender that the data was sent.

The strength of a cryptosystem lies in the attributes of its key components. The first component is the algorithm, a mathematical function that performs encryption and decryption. The second component is the key that is used in conjunction with the algorithm; each key makes the encryption/decryption process unique. To decrypt a message that has been encrypted, the receiver must use the correct key; if an incorrect key is used, the message is unreadable. The key length, which is predetermined, is important in reducing the feasibility of a brute-force attack against an encrypted message. The longer the key is, the more difficult it is to decrypt a message, because of the amount of computation required to try all possible key combinations (the work factor). Cryptanalysis is the science of studying and breaking the secrecy of encryption algorithms and their necessary pieces. The work factor involved in brute-forcing encrypted messages depends significantly on the computing power of the machines performing the attack.


The strength of a cryptosystem is determined by a combination of key length, initial input vectors, and the complexity of the data-encryption algorithm that uses the key.


As an example, the Data Encryption Standard (DES) was selected as an official cipher (method of encrypting information) under the Federal Information Processing Standard (FIPS) for the United States in 1976. When introduced, DES used a 56-bit key length. It is now considered insecure for many applications because DES keys have been brute-forced in less than 24 hours. A 24-hour time frame to break a cryptographic key is considered a very low work factor. In 1998, the Electronic Frontier Foundation (EFF) spent approximately $250,000 and created a DES-cracker to show that DES was breakable. The machine brute-forced a DES key in a little more than two days, proving that the work factor involved was small and that DES was, therefore, insecure. Fortunately, there is a version of DES named Triple DES (3DES) that uses a 168-bit key (three 56-bit keys) and provides greater security than its predecessor. As a point of interest, you should note that the U.S. Federal Government has ended support for the DES cryptosystem in favor of the newer Advanced Encryption Standard (AES).
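To make the idea of work factor concrete, the short sketch below compares an exhaustive key search for several key lengths. It is purely illustrative: the assumed keys-per-second rate is an arbitrary figure, not a benchmark of any real cracking hardware.

  # Illustrative work-factor comparison: average exhaustive key-search time
  # for different key lengths. The keys/second rate is an assumption for
  # illustration only, not a measured benchmark.
  KEYS_PER_SECOND = 1e12                      # hypothetical attacker capability
  SECONDS_PER_YEAR = 60 * 60 * 24 * 365

  for bits in (56, 112, 128, 168, 256):
      keyspace = 2 ** bits                    # total possible keys
      avg_tries = keyspace / 2                # on average, half the keyspace is searched
      years = avg_tries / KEYS_PER_SECOND / SECONDS_PER_YEAR
      print(f"{bits:>3}-bit key: about {years:.2e} years at {KEYS_PER_SECOND:.0e} keys/sec")

Even under this generous assumption about attacker hardware, each additional bit of key length doubles the search effort, which is why a 56-bit key is a low work factor while a 128-bit or larger key is not.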

The cryptographic algorithms use either symmetric keys or asymmetric keys. Symmetric keys are also known as secret keys or shared secret keys because both parties in a transaction use the same key for encryption and decryption. Keeping the key secret is one of the weaknesses of a symmetric key system: if a key is compromised, all messages encrypted with that key can be decrypted. In addition, the secure delivery of keys poses a problem when adding new devices or users to a symmetric key system. Acceptable methods of delivery can include placing the key on a floppy disk and hand delivering it, or delivering the key through the use of a secure courier or via postal mail. Protecting the exchange of symmetric shared keys through the use of asymmetric or hybrid cryptosystems is another option that is described in more detail later in this chapter.

A variety of symmetric encryption algorithms exists, as shown in Table 4.1.

Table 4.1. Symmetric Encryption Algorithms

  Data Encryption Standard (DES)
    Notes: Low work factor; has been broken. Provides confidentiality but not nonrepudiation.

  Advanced Encryption Standard (AES)
    Notes: High work factor. Provides confidentiality but not nonrepudiation.

  International Data Encryption Algorithm (IDEA)
    Notes: High work factor. Provides confidentiality but not nonrepudiation.

  Rivest Cipher 5 (RC5)
    Notes: High work factor. Provides confidentiality but not nonrepudiation.


Symmetric keys are fast because the algorithms are not burdened with providing authentication services, and they are difficult to break if they use a large key size. However, symmetric keys are more difficult to distribute securely.
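As a minimal sketch of the symmetric process (assuming the third-party Python cryptography package is installed), the snippet below shows a single shared secret key performing both encryption and decryption; anyone who obtains that key can read every message protected with it.

  # Minimal symmetric-key sketch using the third-party "cryptography" package
  # (pip install cryptography). The same shared secret key encrypts and decrypts.
  from cryptography.fernet import Fernet

  shared_key = Fernet.generate_key()      # must be distributed securely to both parties
  cipher = Fernet(shared_key)

  ciphertext = cipher.encrypt(b"Quarterly financials attached")
  plaintext = cipher.decrypt(ciphertext)  # only holders of shared_key can do this

  assert plaintext == b"Quarterly financials attached"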

Figure 4.3 shows the symmetric key process. Both the sending and receiving parties use the same key.

Figure 4.3. Symmetric encryption process.


Symmetric encryption's security is based on how well users protect the shared secret key. If the key is compromised, all messages encrypted with that key can be decrypted by an unauthorized third party. The advantage of symmetric encryption is speed.

Asymmetric Encryption (Public-Key Cryptography)

In using symmetric key encryption, a single shared secret key is used between parties. In asymmetric encryption, otherwise known as public-key cryptography, each party has a respective key pair. These asymmetric keys are mathematically related and are known as public and private keys. When messages are encrypted by one key, the other key is required for decryption. Public keys can be shared and are known to everyone, hence the definition public. Private keys are known only to the owner of the key. These keys make up the key pair in public key encryption.

Before public-key cryptography can be used, the sender and the recipient need to exchange public keys. If a sender wants to encrypt a message to a recipient, the sender encrypts the message using his private key (known only to him), and the recipient decrypts the message using the sender's public key (known to everyone). Because the keys are mathematically linked, the recipient is assured that the message truly came from the original sender. This is known as authentication, or authenticity, because the sender is the only party who should have the private key that encrypts content in a way that can be decrypted by the sender's public key. Keep in mind that anyone who has the sender's public key can decrypt the message at this point, so this initial encryption does not provide confidentiality. If the sender wants the message to be confidential, he should then re-encrypt the message using the recipient's public key. This requires the recipient to use his own private key (known only to him) to initially decrypt the message and then to use the sender's public key to decrypt the remainder. In this scenario, the sender is assured that only the recipient can decrypt the message (protecting confidentiality), and the recipient is assured that the message came from the original sender (proof of authenticity). This type of data encryption provides message confidentiality and authentication.


With public key encryption, or asymmetric encryption, data is encrypted by the sender using the recipient's public key. The data is then decrypted using the recipient's private key.


The following is a review of the basic asymmetric encryption flow (a code sketch follows the list):

  1. A clear-text message is encrypted by the sender with the sender's private key, to ensure authenticity only.

  2. The message is re-encrypted with the recipient's public key, to ensure confidentiality.

  3. The message is initially decrypted by the recipient using the recipient's own private key, rendering a message that remains encrypted with the sender's private key.

  4. The message is then decrypted by the recipient using the sender's public key. If this is successful, the receiver can be sure that the message truly came from the original sender.
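The sketch below mirrors this four-step flow using the third-party Python cryptography package; it is illustrative only. Modern libraries express "encrypting with the private key" as signing, so the sender signs with his private key (authenticity) and encrypts with the recipient's public key (confidentiality), and the recipient reverses both steps. The message text and key sizes are arbitrary examples.

  # Illustrative sketch of the asymmetric flow using the third-party
  # "cryptography" package. Signing stands in for "encrypting with the
  # sender's private key"; OAEP encryption provides confidentiality.
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  message = b"Contract terms attached"
  pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
  oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  # Steps 1-2 (sender): sign for authenticity, then encrypt with the
  # recipient's public key for confidentiality.
  signature = sender_key.sign(message, pss, hashes.SHA256())
  ciphertext = recipient_key.public_key().encrypt(message, oaep)

  # Steps 3-4 (recipient): decrypt with own private key, then verify the
  # signature with the sender's public key (raises InvalidSignature on failure).
  plaintext = recipient_key.decrypt(ciphertext, oaep)
  sender_key.public_key().verify(signature, plaintext, pss, hashes.SHA256())
  print("Message authenticated and kept confidential:", plaintext)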

Figure 4.4 outlines asymmetric encryption to ensure both authenticity and confidentiality.

Figure 4.4. Asymmetric encryption process.


The advantages of an asymmetric key encryption system are the ease of secure key distribution and the capability to provide authenticity, confidentiality, and nonrepudiation. The disadvantages of asymmetric encryption systems are the increase in overhead processing and, therefore, cost.


An elliptic curve cryptosystem has a much higher computation speed than RSA encryption.


A variety of asymmetric encryption algorithms are used, as shown in Table 4.2.

Table 4.2. Asymmetric Encryption Algorithms

  Rivest, Shamir, Adleman (RSA)
    Use: Encryption, digital signature
    Notes: Security comes from the difficulty of factoring large numbers into their prime factors.

  Elliptic Curve Cryptosystem (ECC)
    Use: Encryption, digital signature
    Notes: Rich mathematical structures are used for efficiency. ECC can provide the same level of protection as RSA, but with a key size that is smaller than what RSA requires.

  Digital Signature Algorithm (DSA)
    Use: Digital signature
    Notes: Security comes from the difficulty of computing discrete logarithms in a finite field.



A long asymmetric encryption key increases encryption overhead and cost.


We have examined both symmetric and asymmetric cryptography, and each has advantages and disadvantages. Symmetric cryptography is fast, but if the shared secret key is compromised, the encrypted messages might be compromised, and there are challenges in distributing the shared secret keys securely. Asymmetric cryptography provides authenticity, confidentiality, and nonrepudiation, but it requires higher overhead because processing is slower. If we combine the two methods in a hybrid approach, we can use public key cryptography to exchange the shared secret key securely and then use fast symmetric encryption to protect the data itself.
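A hybrid approach is often illustrated as a "digital envelope": the bulk data is protected with a fast symmetric key, and only that small key is protected with the recipient's public key. The sketch below is an illustration under assumptions (the third-party Python cryptography package, and an arbitrary payload), not a prescribed implementation.

  # Hybrid ("digital envelope") sketch: fast symmetric encryption for the data,
  # asymmetric encryption only for the small symmetric key. Assumes the
  # third-party "cryptography" package; names and payload are illustrative.
  import os
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  # Sender side: encrypt the bulk data with a one-time AES key...
  session_key = AESGCM.generate_key(bit_length=256)
  nonce = os.urandom(12)
  ciphertext = AESGCM(session_key).encrypt(nonce, b"Large confidential payload ...", None)

  # ...then protect only the small session key with the recipient's public key.
  wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

  # Recipient side: unwrap the session key, then decrypt the data quickly.
  recovered_key = recipient_key.decrypt(wrapped_key, oaep)
  plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
  assert plaintext.startswith(b"Large confidential payload")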

Public and private key cryptography use algorithms and keys to encrypt messages. In private (shared) key cryptography, there are significant challenges in distributing keys securely. In public key cryptography, the challenge lies in ensuring that the owner of the public key is who he says he is, and in providing trusted notification if the key becomes invalid because of compromise.

Public Key Infrastructure (PKI)

A public key infrastructure (PKI) incorporates public key cryptography, security policies, and standards that enable key maintenance (including user identification, distribution, and revocation) through the use of certificates. The goal of PKI is to answer the question "How do I know this key is truly your public key?" PKI provides access control, authentication, confidentiality, nonrepudiation, and integrity for the exchange of messages through use of Certificate Authorities (CA) and digital certificates. PKI uses a combination of public-key cryptography and digital certificates to provide some of the strongest overall control over data confidentiality, reliability, and integrity for Internet transactions.

The CA maintains, issues, and revokes public key certificates, which ensure an individual's identity. If a user (Randy) receives a message from Waylon that contains Waylon's public key, he can request authentication of Waylon's key from the CA. When the CA has responded that this is Waylon's public key, Randy can communicate with Waylon, knowing that he is who he says he is. The other advantage of the CA is the maintenance of a certificate revocation list (CRL), which lists all certificates that have been revoked. Certificates can be revoked if the private key has been compromised or the certificate has expired. As an example, imagine that Waylon found that his private key had been compromised and had a list of 150 people to whom he had distributed his public key. He would need to contact all 150 and tell them to discard the existing public key they had for him. He would then need to distribute a new public key to all those he communicates with. In using PKI, Waylon could contact the CA, provide a new public key (establish a new certificate), and place the old public key on the CRL. This is a more efficient way to deal with key distribution because a central authority is providing key maintenance services.
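As an illustrative sketch (the file names are hypothetical, and the third-party Python cryptography package is assumed), a relying party could check whether a certificate's serial number appears on a CA's published CRL:

  # Hypothetical CRL check using the third-party "cryptography" package.
  # File names and the CRL source are illustrative assumptions.
  from cryptography import x509

  with open("issuing_ca.crl.pem", "rb") as f:      # CRL published by the CA (hypothetical path)
      crl = x509.load_pem_x509_crl(f.read())

  with open("waylon_cert.pem", "rb") as f:         # certificate being validated (hypothetical path)
      cert = x509.load_pem_x509_certificate(f.read())

  revoked_entry = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
  if revoked_entry is not None:
      print("Certificate is on the CRL; revoked on", revoked_entry.revocation_date)
  else:
      print("Certificate serial number not found on this CRL")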


A Certificate Authority manages the certificate life cycle and certificate revocation list (CRL).


The certificates used by the CAs incorporate identity information, certificate serial numbers, certificate version numbers, algorithm information, lifetime dates, and the signature of the issuing authority (CA). The most widely used certificate types are the Version 3 X.509 certificates. The X.509 certificates are commonly used in secure web transactions via Secure Sockets Layer (SSL).

An X.509 certificate includes the lifetime dates, the party the certificate is issued to, the authority the certificate is issued by, and the communication and encryption protocols that are used. Digital certificates are considered the most reliable sender-authentication control. In this case, PKI provides nonrepudiation services for e-commerce transactions: an e-commerce hosting organization uses asymmetric encryption along with digital certificates to verify the authenticity of the organization and its transaction communications for its customers.
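To see these certificate fields in practice, the sketch below (an illustration assuming Python's standard ssl module and the third-party cryptography package; the host name is an arbitrary example) retrieves a server's X.509 certificate and prints the subject, issuer, validity dates, serial number, and version.

  # Illustrative sketch: retrieve a server's X.509 certificate and print the
  # fields discussed above. The host name is an arbitrary example.
  import ssl
  from cryptography import x509

  pem = ssl.get_server_certificate(("www.example.com", 443))
  cert = x509.load_pem_x509_certificate(pem.encode())

  print("Issued to :", cert.subject.rfc4514_string())
  print("Issued by :", cert.issuer.rfc4514_string())
  print("Valid from:", cert.not_valid_before)
  print("Valid to  :", cert.not_valid_after)
  print("Serial no :", cert.serial_number)
  print("Version   :", cert.version)
  print("Signature :", cert.signature_algorithm_oid)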

A Certificate Authority (CA) can delegate the processes of establishing a link between the requesting entity and its public key to a Registration Authority (RA). An RA performs certification and registration duties to offload some of the work from the CAs. The RA can confirm individual identities, distribute keys, and perform maintenance functions, but it cannot issue certificates. The CA still manages the digital certificate life cycle, to ensure that adequate security and controls exist.

Digital Signature Techniques

Digital signatures provide integrity in addition to message source authentication because the digital signature of a signed message changes every time a single bit of the document changes. This ensures that a signed document cannot be altered without being detected. Depending on the mechanism chosen to implement a digital signature, the mechanism might be capable of ensuring data confidentiality or even timeliness, but this is not guaranteed.

A digital signature is a cryptographic method that ensures data integrity, authentication of the message, and nonrepudiation. The primary purpose of digital signatures is to provide authentication and integrity of data. In common electronic transactions, a digital signature is created by the sender to prove message integrity and authenticity by initially using a hashing algorithm to produce a hash value, or message digest, from the entire message contents. The sender provides a mechanism to authenticate the message contents by encrypting the message digest using the sender's own private key. If the recipient can decrypt the message digest using the sender's public key, which has been validated by a third-party Certificate Authority, the recipient can rest assured that the message digest was indeed created by the original sender. Upon receiving the data and decrypting the message digest, the recipient can independently create a message digest from the data using the same publicly available hashing algorithm for data comparison and integrity validation.

The following is the flow of a digital signature:

  1. The sender and recipient exchange public keys:

    • These public keys are validated via a third-party Certificate Authority (CA).

    • A Registration Authority (sometimes separate from the CA) manages the certificate application and procurement procedures.

  2. The sender uses a digital signature hashing algorithm to compute a hash value of the entire message (called a message digest).

  3. The sender "signs" the message digest by encrypting it with the sender's private key.

  4. The recipient validates authenticity of the message digest by decrypting it with the sender's validated public key.

  5. The recipient then validates message integrity by computing a message digest of the message and comparing that value to the recently decrypted message digest provided by the sender.


With digital signatures, a hash of the data is encrypted with the sender's private key, to ensure data integrity.


A key distinction between encryption and hashing algorithms is that hashing algorithms are irreversible. A message digest is the result of using a one-way hash that creates a fingerprint of the message. If the message is altered, a comparison of the message digest to the hash of the altered message will show that the message has been changed. The hashing algorithms are publicly known and differ from encryption algorithms in that they are one-way functions and are never used in reverse. The sender runs a hash against a message to produce the message digest, and the receiver runs the same hash to produce a second message digest. The message digests are compared; if they are different, the message has been altered. The sender can use a digital signature to provide message authentication, integrity, and nonrepudiation by first creating a message digest from the entire message by using an irreversible hashing algorithm, and then "signing" the message digest by encrypting the message digest with the sender's private key. Confidentiality is added by then re-encrypting the message with the recipient's public key.
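The sketch below, using Python's standard hashlib module, shows the digest-comparison idea: altering a single character of the message yields a completely different SHA-256 message digest, so the comparison fails.

  # One-way hash (message digest) sketch using Python's standard hashlib.
  # Changing a single character produces a completely different digest.
  import hashlib

  original = b"Transfer $100 to account 4711"
  altered  = b"Transfer $900 to account 4711"

  digest_original = hashlib.sha256(original).hexdigest()
  digest_altered  = hashlib.sha256(altered).hexdigest()

  print(digest_original)
  print(digest_altered)
  print("Message unaltered?", digest_original == digest_altered)   # False: integrity check fails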


Digital signatures require the sender to "sign" the data by encrypting the data with the sender's private key. This then is decrypted by the recipient using the sender's public key.


Within the Digital Signature Standard (DSS), RSA and the Digital Signature Algorithm (DSA) are the most popular algorithms.

Each component of cryptography provides separate functions. An encrypted message can provide confidentiality. If the message contains a digital signature, the signature provides assurance of the authenticity and integrity of the message. As an example, a message that contains a digital signature that encrypts a message digest with the sender's private key provides strong assurance of message authenticity and integrity.

Network and Internet Security

As an IS auditor, you need to understand network connectivity, security, and encryption mechanisms on the organization's network. The use of layered security (also known as defense-in-depth) reduces the risks associated with the theft of or damage to computer systems, data, or the organization's network. Proper security policies and procedures, combined with strong internal and external access-control mechanisms, reduce risk to the organization and ensure the confidentiality, integrity, and availability of services and data.

As stated earlier in Chapter 3, firewalls can be used to protect the organization's assets against both internal and external threats. Firewalls can be used as perimeter security between the organization and the Internet, to protect critical systems and data from external hackers or internally from untrusted users (internal hackers).

Per ISACA, organizations that have implemented firewalls face these problems:

  • A false sense of security, with management feeling that no further security checks or controls are needed on the internal network (that is, the majority of incidents are caused by insiders, who are not controlled by firewalls).

  • Firewalls can be circumvented through the use of modems, which might connect users directly to Internet service providers. Management should ensure that the use of modems when a firewall exists is strictly controlled or prohibited altogether.

  • Misconfigured firewalls might allow unknown and dangerous services to pass through freely.

  • What constitutes a firewall might be misunderstood (companies claiming to have a firewall might have merely a screening router).

  • Monitoring activities might not occur on a regular basis (for example, log settings might not be appropriately applied and reviewed).

  • Firewall policies might not be maintained regularly.

An initial step in creating a proper firewall policy is identifying network applications, such as mail, web, or FTP servers to be externally accessed. When reviewing a firewall, an IS auditor should be primarily concerned with proper firewall configuration, which supports enforcement of the security policy.

Working in concert with firewalls are the methods of access and encryption of data and user sessions on the network. A vast majority of organizations have users who are geographically dispersed, who work from home, or who travel as part of their job (road warriors). In addition, organizations allow vendors, suppliers, or support personnel access to their internal network. Virtual private networks (VPNs) provide a secure and economical method for this WAN connectivity: access is provided over a public network, with the traffic encapsulated or encrypted.

VPNs use a combination of tunneling encapsulation and encryption to ensure communication security. The protocols used to provide secure connectivity might vary by the vendor and implementation. A tunneling protocol creates a virtual path through public and private networks. Network protocols such as IPSec often encrypt and encapsulate data in the OSI network layer.


Data encryption is an effective control against confidentiality vulnerabilities associated with connectivity to remote sites.


The Point-to-Point Tunneling Protocol (PPTP) provides encapsulation between a client and a server. PPTP works at the data link layer of the OSI model and provides encryption and encapsulation over the private link. Because it is designed to work from a client to a server, it sets up a single connection and transmits only over IP networks. In negotiating a PPTP connection, the client initiates a connection to a network either by using dial-in services or by coming across the Internet. A weakness associated with PPTP is that the initial negotiation of IP address, username, and password is sent in clear text (not encrypted); after the connection is established, the remainder of the communication is encapsulated and encrypted. This weakness in the protocol might allow unauthorized parties to use a network sniffer to see the initial negotiation passed in the clear.


VPNs use tunneling and encryption to hide information from sniffers on the Internet.


IPSec works at the network layer and protects and authenticates packets through two modes: transport and tunnel. IPSec transport mode encrypts only the data portion of each packet, not the header. For more robust security, tunnel mode encrypts both the original header and the data portion of the packet. IPSec supports only IP networks and can handle multiple connections at the same time.

In addition to protocols associated with establishing private links, tunneling, and encrypting data, protocols are used to facilitate secure web and client/server communication. The Secure Sockets Layer (SSL) protocol provides confidentiality through symmetric encryption such as the Data Encryption Standard (DES) and is an application/session-layer protocol used for communication between web browsers and servers. When a session is established, SSL achieves secure authentication and integrity of data through the use of a public key infrastructure (PKI). The services provided by SSL ensure confidentiality, integrity, authenticity, and nonrepudiation. SSL is most commonly used in e-commerce transactions to provide security for all transactions within the HTTP session.
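As an illustrative sketch using Python's standard ssl and socket modules (the host name is an arbitrary example, and modern implementations negotiate TLS, the successor to SSL), the snippet below establishes an encrypted session with a web server and reports the negotiated protocol and cipher suite.

  # Illustrative sketch: establish a TLS-protected session (the successor to SSL)
  # using Python's standard ssl and socket modules. The host name is an example.
  import socket
  import ssl

  context = ssl.create_default_context()   # verifies the server certificate against trusted CAs
  with socket.create_connection(("www.example.com", 443)) as raw_sock:
      with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
          print("Negotiated protocol:", tls_sock.version())   # e.g. TLSv1.3
          print("Cipher suite       :", tls_sock.cipher())
          tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
          print(tls_sock.recv(200))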

The complexity associated with the implementation of encryption and secure transmission protocols requires the IT organization to pay careful attention to ensure that the protocols are being configured, implemented, and tested properly. In addition, careful attention should be paid to the secrecy and length of keys, as well as the randomness of key generation.

Security Software

Intrusion-detection systems (IDSs) are used to gather evidence of system or network attacks. An IDS can be either signature based or statistical anomaly based. Generally, statistical anomaly-based IDSs are more likely to generate false alarms. A network-based IDS works in concert with routers and firewalls by monitoring network usage to detect anomalies at different levels within the network.

IDSs can also be categorized by where they operate. The first type is a network-based IDS. Network IDSs are generally placed between the firewall and the internal network, and on every sensitive network segment, to monitor traffic looking for attack patterns or suspicious activity. If these patterns are recognized, the IDS alerts administrators or, in later generations of IDS, protects the network by denying access to the attacking addresses or dropping all packets associated with the attack. Host-based IDSs operate on a host and can monitor system resources (CPU, memory, and file system) to identify attack patterns.

The latest generation of IDSs can detect either misuse or anomalies by gathering and analyzing information and comparing it to large databases of attack signatures. In this case, the specific attack or misuse must have already occurred and been documented. This type of IDS is only as good as the database of attack signatures. If the IDS is using anomaly detection, the administrator should identify and document (within the IDS) a baseline; when the IDS detects patterns that fall outside the baseline (anomalies), it performs a certain action (alerts, stops traffic, shuts down applications or network devices). If the IDS is not baselined or configured correctly, the system could detect and alert on false positives. A false positive occurs when a system detects and alerts on an act that does not really exist. If a system detects a high number of false positives, the risk is that either the alerts will be ignored or that the particular rule associated with the alert will be turned off completely. If the IDS is a passive system, it will detect potential anomalies, log the information, and alert administrators. In a reactive system, the IDS takes direct action to protect assets on the network. These actions can include dropping packets from the attacking IP address, reprogramming the firewall to block the offending traffic or all traffic, or shutting down devices or applications that are being attacked.
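The toy sketch below contrasts the two detection approaches in a few lines of Python; the "signatures" and the request-rate baseline are invented for illustration and are not real IDS rules.

  # Toy sketch of the two detection approaches described above. The signatures
  # and the request-rate threshold are illustrative assumptions only.
  import re

  SIGNATURES = {
      "directory traversal": re.compile(r"\.\./"),
      "SQL injection":       re.compile(r"(?i)union\s+select"),
  }
  BASELINE_REQUESTS_PER_MINUTE = 120          # assumed baseline for anomaly detection

  def inspect(log_line, requests_last_minute):
      alerts = []
      # Signature-based detection: match traffic against known attack patterns.
      for name, pattern in SIGNATURES.items():
          if pattern.search(log_line):
              alerts.append("signature match: " + name)
      # Statistical anomaly detection: flag traffic far outside the baseline.
      if requests_last_minute > 3 * BASELINE_REQUESTS_PER_MINUTE:
          alerts.append("anomaly: request rate exceeds 3x baseline")
      return alerts

  print(inspect("GET /../../etc/passwd HTTP/1.1", requests_last_minute=90))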


A common issue with intrusion-detection systems is the detection of false positives (an attack is reported that is not actually an attack).


The firewall and IDS work together to achieve network security. The firewall can be viewed as a preventative measure because the firewall limits the access between networks to prevent intrusion but does not signal an attack. An IDS evaluates a suspected intrusion after it has taken place and sends an alert.

Single sign-on (SSO) systems are used to centralize authentication and authorization access within an information system. With specialization of applications, the average user accesses multiple applications while performing his duties. Some applications might allow users to authenticate once and access multiple applications (usually the same vendor), but most do not. SSO allows users to authenticate once, usually with a single login ID and password, and get authorization to work on multiple applications. SSO can apply to one network or can span multiple networks and applications. It is sometimes referred to as federated identity management. When implementing a single sign-on system, the organization must ensure that the authentication systems are redundant and secure.

Single sign-on authentication systems are prone to the vulnerability of having a single point of failure for authentication. In addition, if all the users internal and external to the organization and their authorization rights are located in one system, the impact of compromised authentication and subsequent unauthorized access is magnified. If the single sign-on system or directory is compromised, all users, passwords, and access rights might be compromised.

Voice Communications Security

Most people use the phone in day-to-day business and do not think about the security required within the telecommunications network. In fact, for many years, both telecommunications companies and organizations focused on the aspect of availability and did not consider integrity and confidentiality. When someone places a phone call from home or work, the call moves through any number of telephone switches before reaching its destination. These switches connect businesses within cities, cities to states, and countries to countries.

One of the systems in use in most businesses is the Private Branch Exchange (PBX). The PBX is similar to the telecommunications company switches, in that it routes calls within the company and passes calls to the external telecommunication provider. Organizations might have a variety of devices connected to the PBX, including telephones, modems (remote-access and vendor-maintenance lines), and computer systems. The lack of proper controls within the PBX and associated devices increases both unauthorized access vulnerabilities and outages (availability) in the organization's voice telecommunications network.

These vulnerabilities include the following:

  • Theft of service: An example is toll fraud, in which attackers gain access to the PBX to make "free" phone calls.

  • Disclosure of information: Organizational data is disclosed without authorization, through either malicious acts or error. Telephone conversations might be intentionally or unintentionally overheard by unauthorized individuals, or access might be gained to telephone routing or address data.

  • Information modification: Organizational data contained within the PBX or a system connected to the PBX might be altered through deletion or modification. An unauthorized person might alter billing information or modify system information to gain access to additional services.

  • Unauthorized access: Unauthorized users gain access to system resources or privileges.

  • Denial of service: Unauthorized persons intentionally or unintentionally prevent the system from functioning as intended.

  • Traffic analysis: Unauthorized persons observe information about calls and make informed guesses based on source and destination numbers or call frequency. As an example, an unauthorized person might see a high volume of calls between the CEO of an organization and a competitor's CEO or legal department, and infer that the organizations are looking to merge.

To reduce the risk associated with these vulnerabilities, administrators should remove all default passwords from the PBX system and ensure that access control within the system applies the rule of least privilege. All modems associated with maintenance should be disabled unless they are needed, and modems that employees use for remote access should employ additional hardware or software for access control. All phone numbers that are not in use should be disabled, and users who need to access voice mail should have a password policy that requires the use of strong passwords and periodic password changes. In addition, administrators should enable logging on the system and review both the PBX access and telephone call logs periodically.


