ANSWERS

     
A1:

Number of logic gates in the pipelined processor = 200 + 30 = 230

The first instruction appears after 230 units of time (one gate = one unit of time).

The second instruction can start after 150 logic gates, which is when the "glue" logic kicks in. Therefore, the second and subsequent instructions each appear after 150 + 30 = 180 units of time.

We have 100 instructions in our instruction stream, so the pipelined time to execute = 230 + 99 * 180 = 18050 units of time.

Time to execute the non-pipelined instruction stream = 200 * 100 = 20000 units of time.

Speedup = 20000 / 18050 ≈ 1.11

One pipeline stage = 150 gates = 150 units of time.
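
As a quick check on the arithmetic above, the following short Python sketch (the gate counts of 200, 30, and 150 and the 100-instruction stream come from the question; the variable names are just for illustration) computes both execution times and the resulting speedup:

    # Gate counts from the question; one gate = one unit of time.
    BASE_GATES = 200      # gates in the non-pipelined processor
    GLUE_GATES = 30       # extra "glue" logic added for pipelining
    STAGE_GATES = 150     # gates in the longest pipeline stage
    INSTRUCTIONS = 100    # instructions in the stream

    first = BASE_GATES + GLUE_GATES            # 230: first instruction passes every gate
    per_following = STAGE_GATES + GLUE_GATES   # 180: each later instruction completes this much later

    pipelined = first + (INSTRUCTIONS - 1) * per_following   # 18050
    non_pipelined = BASE_GATES * INSTRUCTIONS                # 20000

    print(pipelined, non_pipelined, round(non_pipelined / pipelined, 2))
    # 18050 20000 1.11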

A2:

Having separate caches for instructions and data allows instruction fetches and data accesses to proceed in parallel and hence improves the effectiveness of pipelining. A unified cache slows the pipeline down because instruction and data accesses contend for the same cache: that is a demonstration of the von Neumann bottleneck.
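
The contention argument can be illustrated with a deliberately simplified toy model (this is not from the book; the 30% data-access ratio and the single-ported unified cache are assumptions made purely for the example). Serializing instruction and data accesses through one cache stretches the cycle count:

    def cycles(instructions, data_access_fraction, split_caches):
        """Toy model: every instruction needs one fetch; some also need one data access."""
        fetches = instructions
        data_accesses = int(instructions * data_access_fraction)
        if split_caches:
            # Separate I- and D-caches: fetches and data accesses proceed in parallel.
            return max(fetches, data_accesses)
        # Unified, single-ported cache: every access competes for the same port.
        return fetches + data_accesses

    print(cycles(1000, 0.3, split_caches=True))   # 1000 cycles
    print(cycles(1000, 0.3, split_caches=False))  # 1300 cycles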

A3:

You want a "long" cache line for applications with sequential access patterns because multiple adjacent data elements are stored in a single cache line and can therefore be accessed more quickly. Applications with more "random" data access patterns benefit from "shorter" cache lines because subsequent accesses will probably require a new cache line to be fetched anyway; with "shorter" cache lines, we can have more cache lines for the same size cache.
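
The trade-off can be seen with a small simulation. The cache model below is a fully associative LRU cache, chosen only to keep the sketch short; the sizes and address streams are invented for the example and are not taken from the book. It compares miss rates for a sequential walk and for scattered accesses using a short and a long cache line:

    from collections import OrderedDict
    import random

    def miss_rate(addresses, line_size, cache_bytes):
        """Fully associative LRU cache of cache_bytes; returns the miss rate for the stream."""
        num_lines = cache_bytes // line_size
        cache = OrderedDict()             # line tag -> None, ordered by recency of use
        misses = 0
        for addr in addresses:
            tag = addr // line_size
            if tag in cache:
                cache.move_to_end(tag)    # hit: mark line most recently used
            else:
                misses += 1
                cache[tag] = None
                if len(cache) > num_lines:
                    cache.popitem(last=False)   # evict the least recently used line
        return misses / len(addresses)

    CACHE_BYTES = 1024
    sequential = list(range(8192))                           # byte-by-byte walk through memory
    random.seed(0)
    hot_spots = random.sample(range(0, 1 << 20, 256), 96)    # 96 widely scattered addresses
    scattered = [random.choice(hot_spots) for _ in range(8192)]

    for line_size in (16, 128):
        print(line_size,
              round(miss_rate(sequential, line_size, CACHE_BYTES), 2),
              round(miss_rate(scattered, line_size, CACHE_BYTES), 2))

With the sequential walk, the longer line wins because one fetch brings in 128 useful bytes instead of 16; with the scattered accesses, the shorter line wins because the same 1 KB of cache then holds 64 lines instead of 8.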

A4:

Answer: A Direct Mapped cache.

A5:

Answer: 16.

Take the 2-bit sequence 10 as our encoded cache "set number." In a four-way set-associative cache, this set number must map to four separate cache lines, so we need 4 sets * 4 ways = 16 lines. Writing the line numbers in binary shows why having 16 lines works: the shaded boxes in the figure are the four lines of our four-way set.

[Figure: the 16 cache line numbers written in binary, with the four lines belonging to set number 10 shaded.]
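
To reproduce the idea behind the figure, here is a short Python sketch. It assumes the set number is taken from the low-order two bits of the 4-bit line number; the book's figure may slice the bits differently, but the counting works out the same. It lists which of the 16 lines belong to each 2-bit set:

    NUM_LINES = 16                  # the answer: 4 sets * 4 ways
    WAYS = 4                        # four-way set associative
    NUM_SETS = NUM_LINES // WAYS    # 4 sets, hence a 2-bit set number

    for set_number in range(NUM_SETS):
        members = [line for line in range(NUM_LINES) if line % NUM_SETS == set_number]
        print(f"set {set_number:02b}:", [format(line, '04b') for line in members])

Under this mapping, set number 10 maps to lines 0010, 0110, 1010, and 1110, so every 2-bit set number gets four places to live and 4 * 4 = 16 lines are needed in total.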


