Chapter 4 Learning Check


1:

Which technology uses capacitors to store data?

A1:

Answer: DRAM

2:

Which technology, DRAM or SRAM, is faster?

A2:

Answer: SRAM

3:

Which technology is used in cache?

A3:

Answer: SRAM

4:

Which technology stores data in a grid?

A4:

Answer: Both

5:

What innovation did DDR RAM introduce?

  A. Transfers data on both the rising and falling edge of each clock cycle

  B. Distributes data across DIMMs in two banks

  C. Adds a parity bit to each byte when it writes it to memory

  D. Corrects single-bit errors

A5:

Answer: A
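
The double-edge transfer can be sketched with a quick bandwidth calculation. This is only an illustration; a 64-bit data bus and a hypothetical 100 MHz memory-bus clock are assumed:

```python
# Sketch: why transferring on both clock edges doubles peak throughput.
BUS_WIDTH_BITS = 64  # typical DIMM data-bus width

def peak_mb_per_s(clock_mhz, transfers_per_cycle):
    """Peak bandwidth in MB/s: MHz x transfers per cycle x bytes per transfer."""
    return clock_mhz * transfers_per_cycle * (BUS_WIDTH_BITS // 8)

sdr = peak_mb_per_s(100, 1)  # single data rate: one transfer per clock cycle
ddr = peak_mb_per_s(100, 2)  # DDR: one transfer on each edge, rising and falling
print(sdr, ddr)  # 800 1600
```

The 800 MB/s and 1600 MB/s figures match the PC100 SDRAM and PC1600 (DDR-200) module names, which are derived from peak bandwidth.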

6:

Which memory technology doubles the amount of data obtained in a single memory access from 64 bits to 128 bits?

  A. DDR RAM

  B. Online spare memory

  C. Hot-plug RAID memory

  D. Interleaved memory

A6:

Answer: A

7:

Match the fault-tolerant technology with its description:

Parity

This technology uses a checksum to analyze an error, determine which byte is corrupt, and correct it.

ECC

Four memory controllers each write one block of data to one of four DIMMs. A fifth memory controller stores parity information on a fifth DIMM.

Advanced ECC

A memory bank with a faulty DIMM automatically fails over to a spare bank of DIMMs.

Online spare memory

The memory controller writes the same data to identically configured banks of DIMMs on two memory boards.

Hot-plug mirrored memory

The memory controller adds a bit to each byte when it writes the byte to memory based on the number of 1s in the byte.

Hot-plug RAID memory

This technology corrects multibit errors that occur on a single DRAM chip.


A7:

Answer:

Parity: The memory controller adds a bit to each byte when it writes the byte to memory based on the number of 1s in the byte.

ECC: This technology uses a checksum to analyze an error, determine which byte is corrupt, and correct it.

Advanced ECC: This technology corrects multibit errors that occur on a single DRAM chip.

Online spare memory: A memory bank with a faulty DIMM automatically fails over to a spare bank of DIMMs.

Hot-plug mirrored memory: The memory controller writes the same data to identically configured banks of DIMMs on two memory boards.

Hot-plug RAID memory: Four memory controllers each write one block of data to one of four DIMMs. A fifth memory controller stores parity information on a fifth DIMM.
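
The parity and ECC behaviors matched above can be sketched in software. This is a minimal illustration of the idea, not how memory-controller hardware is built; the 12-bit Hamming layout used for the ECC part is a standard textbook single-error-correcting code, chosen here for brevity:

```python
def parity_bit(byte):
    """Even parity: the added bit makes the total count of 1s even."""
    return bin(byte).count("1") % 2

def hamming_encode(byte):
    """Encode 8 data bits into a 12-bit Hamming code (single error correction)."""
    code = [0] * 13                      # 1-indexed positions 1..12
    for bit, pos in zip(range(8), (3, 5, 6, 7, 9, 10, 11, 12)):
        code[pos] = (byte >> bit) & 1    # data bits fill non-power-of-two slots
    for p in (1, 2, 4, 8):               # parity bits sit at power-of-two slots
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    return code

def hamming_correct(code):
    """Return (corrected code, error position); position 0 means no error."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 13) if i & p) % 2:
            syndrome += p                # syndrome bits spell out the bad position
    fixed = code[:]
    if syndrome:
        fixed[syndrome] ^= 1             # flip the corrupted bit back
    return fixed, syndrome

word = hamming_encode(0b10110100)
corrupted = word[:]
corrupted[6] ^= 1                        # simulate a single flipped bit
fixed, where = hamming_correct(corrupted)
print(where, fixed == word)              # 6 True
```

Note the contrast: parity alone can only detect that an odd number of bits flipped, while the ECC syndrome both locates and repairs a single flipped bit.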

8:

What is the benefit of cache in a server?

  A. Fills data requests from the processor more quickly than memory

  B. Doubles the amount of data that can be stored on the hard drive

  C. Decodes instructions to make the processor work faster

  D. Increases the clock speed of the memory bus

A8:

Answer: A

9:

Which cache level does the processor check first?

A9:

Answer: Level 1

10:

Which bus connects the L2 cache to the processor?

  A. Frontside bus

  B. Backside bus

  C. System bus

  D. PCI bus

A10:

Answer: B

11:

What is the function of the tag RAM?

A11:

Answer: Tag RAM stores the memory address of the data held in each cache line. When the processor requests data, the cache controller compares the address in the request with the addresses stored in the tag RAM. If the cache controller finds a matching address (a cache hit), it returns the associated data to the processor.
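
The tag-RAM lookup described above can be sketched as a tiny direct-mapped cache. Real controllers do this comparison in hardware; the sizes here are illustrative:

```python
LINES = 4  # number of cache lines (deliberately tiny, for illustration)

class Cache:
    def __init__(self):
        self.tags = [None] * LINES   # the "tag RAM": one stored address tag per line
        self.data = [None] * LINES   # the cache lines themselves

    def read(self, address, memory):
        index = address % LINES      # which cache line this address maps to
        tag = address // LINES       # high-order address bits kept in tag RAM
        if self.tags[index] == tag:  # tag RAM matches the request: cache hit
            return self.data[index], "hit"
        self.tags[index] = tag       # cache miss: fetch from memory,
        self.data[index] = memory[address]  # record the new tag and data
        return self.data[index], "miss"

memory = {addr: addr * 10 for addr in range(16)}
cache = Cache()
print(cache.read(5, memory))  # (50, 'miss') -- first access fills the line
print(cache.read(5, memory))  # (50, 'hit')  -- tag RAM now holds address 5's tag
```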

12:

Match the cache implementation to its definition:

Look-aside

A cache controller listens in to system bus traffic for any memory requests made by bus masters.

Look-through

When a bus master is trying to write to memory, the cache controller captures the data being written and writes it to cache.

Fully associative

A bit attached to the cache line is flagged to indicate that the data has not yet been written to memory.

Direct mapped

Data from main memory can be stored in any cache line.

Set-associative

The system must write the data through all the memory levels before it can be used again.

Write-through

Both cache and memory receive memory requests. If there is a cache hit, the cache controller terminates the request to the other devices.

Write-back

A group of memory addresses is assigned to each cache line.

Bus snooping

If there is a cache hit, no request makes it to the system bus.

Bus snarfing

A group of memory addresses is assigned to a specific group of cache lines.


A12:

Answer:

Look-aside: Both cache and memory receive memory requests. If there is a cache hit, the cache controller terminates the request to the other devices.

Look-through: If there is a cache hit, no request makes it to the system bus.

Fully associative: Data from main memory can be stored in any cache line.

Direct mapped: A group of memory addresses is assigned to each cache line.

Set-associative: A group of memory addresses is assigned to a specific group of cache lines.

Write-through: The system must write the data through all the memory levels before it can be used again.

Write-back: A bit attached to the cache line is flagged to indicate that the data has not yet been written to memory.

Bus snooping: A cache controller listens in to system bus traffic for any memory requests made by bus masters.

Bus snarfing: When a bus master is trying to write to memory, the cache controller captures the data being written and writes it to cache.
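
The direct-mapped and set-associative definitions above come down to how a memory address is split into tag, index, and offset fields. A minimal sketch, assuming a byte-addressed cache with an illustrative geometry (32-byte lines, 64 lines):

```python
def split_address(address, line_size, num_lines, ways=1):
    """Decompose a byte address for a cache with the given geometry."""
    offset = address % line_size            # byte position within the cache line
    sets = num_lines // ways                # set-associative groups lines into sets
    index = (address // line_size) % sets   # which line (or set) the address maps to
    tag = address // (line_size * sets)     # remaining high bits, kept in tag RAM
    return tag, index, offset

# Direct mapped (ways=1): each address maps to exactly one cache line.
print(split_address(0x1234, line_size=32, num_lines=64, ways=1))  # (2, 17, 20)
# 4-way set-associative: each address maps to a set of 4 candidate lines,
# so fewer index bits are needed and more address bits go into the tag.
print(split_address(0x1234, line_size=32, num_lines=64, ways=4))  # (9, 1, 20)
```

A fully associative cache is the limiting case where ways equals the number of lines: there is a single set, no index bits, and the tag must be compared against every line.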



    HP ProLiant Servers AIS: Official Study Guide and Desk Reference
    ISBN: 0131467174
    Year: 2004
    Pages: 278
