
Chapter 8: Cache Memory

  • Cache memory is the fastest but most costly of all memory types. It usually lies between the CPU and main memory and is used to store the data/instructions that the system uses frequently. It is also used to store the page/segment map tables.

  • For cache memory, the physical address generated by the CPU has two components: the block number and the word within the block.

  • The three cache mapping techniques are: Associative, Direct Mapped, and Set Associative.
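
The address split described above can be sketched in a few lines. The 16-byte block size here is an illustrative assumption, not a value from the notes:

```python
# Hypothetical parameters: 16-byte blocks, so the low 4 bits of a
# physical address select the word/byte and the rest select the block.
BLOCK_SIZE = 16
WORD_BITS = 4            # log2(BLOCK_SIZE)

def split_address(addr):
    """Split a physical address into (block number, word in block)."""
    block = addr >> WORD_BITS          # upper bits: block number
    word = addr & (BLOCK_SIZE - 1)     # lower bits: word within the block
    return block, word

print(split_address(0x1234))  # prints (291, 4)
```

Every mapping technique below starts from this same decomposition; they differ only in how the block number is matched against the cache.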

Associative Cache Memory

  • A tag address is compared with the physical address generated for a particular instruction. If there is a match, the instruction exists in cache memory. If not, the system goes to main memory, fetches the data/instruction, puts it in cache memory, and saves the tag. This tag memory/table works much like the segment/page map table.

  • A parallel search compares the block number against every stored tag simultaneously to determine whether the block is already in the cache.
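
The hit/miss flow described in the two bullets above can be modeled as follows. This is a minimal sketch: the 4-line cache size, the class name, and the free-line replacement policy are illustrative assumptions, and the hardware's parallel tag comparison is modeled as a sequential scan:

```python
# Sketch of a fully associative cache: a block may live in any line,
# so every tag entry must be compared against the block number.
class AssociativeCache:
    def __init__(self, num_lines=4):
        self.tags = [None] * num_lines   # tag memory: one tag per line
        self.data = [None] * num_lines

    def lookup(self, block_number):
        # Hardware compares all tags in parallel; we scan them here.
        for line, tag in enumerate(self.tags):
            if tag == block_number:
                return self.data[line]               # hit
        # Miss: fetch from main memory (simulated), store the block
        # in a free line (naively line 0 if full), and save the tag.
        line = self.tags.index(None) if None in self.tags else 0
        self.tags[line] = block_number
        self.data[line] = f"block {block_number} from main memory"
        return self.data[line]
```

The per-line comparison in the loop is exactly the hardware cost the notes mention later: an associative cache needs one comparator per tag entry.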

Direct Mapped Cache

  • Every main-memory address now has two main components (tag, group) in addition to the byte component.

  • Main-memory blocks can be represented as a multi-dimensional array in which each element is a block of memory. Each block of memory maps to exactly one cache line in the cache, which holds 256 lines. The columns of this array represent tags and each row represents a group.

  • A memory block is stored directly in its cache line, and the tag number is kept in the tag memory. The valid bit indicates that the corresponding cache line holds valid data.

  • A direct mapped cache needs only a single 5-bit comparator, unlike an associative cache, which needs a comparator for each tag entry.

  • The cache miss rate is higher in a direct mapped cache, since a given block of memory can be placed in only one specific cache line. In an associative cache, any block can go into any available cache line, so the probability of a miss is lower.
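
A direct-mapped lookup can be sketched using the figures from the notes: 256 cache lines (an 8-bit group field) and a 5-bit tag. The 4-bit byte offset is an assumed value not given in the notes:

```python
# Direct-mapped cache sketch: the group field picks exactly one line,
# and a single tag comparison decides hit or miss.
NUM_LINES = 256
OFFSET_BITS = 4      # assumed 16-byte blocks
GROUP_BITS = 8       # log2(256) lines

def split(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    group = (addr >> OFFSET_BITS) & ((1 << GROUP_BITS) - 1)
    tag = addr >> (OFFSET_BITS + GROUP_BITS)      # 5-bit tag
    return tag, group, offset

tag_memory = [None] * NUM_LINES   # one stored tag per cache line
valid = [False] * NUM_LINES       # valid bit per line

def lookup_direct(addr):
    tag, group, _ = split(addr)
    if valid[group] and tag_memory[group] == tag:  # one 5-bit compare
        return "hit"
    tag_memory[group] = tag   # fetch the block, remember its tag
    valid[group] = True
    return "miss"
```

Note how two addresses with the same group but different tags evict each other, which is exactly why the miss rate is higher than in an associative cache.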

Set Associative Cache

  • Set associative cache combines the two previous techniques to get the best of both.

  • In this type of cache setup, instead of a single specific cache line for each memory block, there is a set of cache lines where the block of memory can go, which reduces the number of cache misses. With two lines per set, this is referred to as a two-way set associative cache. The number of lines per set can be changed; the higher it is, the fewer misses occur, but there is more to compare with.

  • Compared with a fully associative cache, set associative involves a trade-off: somewhat more misses in exchange for less hardware.

  • In this setup (two-way), both comparators must miss for the access to be an actual miss. Conversely, only one comparator needs to hit for the access to be considered a hit.
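
A two-way set associative lookup can be sketched as below. The 128-set size and the naive "evict way 0" replacement policy are illustrative assumptions:

```python
# Two-way set associative sketch: the set index picks one set, and the
# block may sit in either of that set's two lines (ways). A hit needs
# only one of the two tag comparisons to match; a miss needs both to fail.
NUM_SETS = 128
WAYS = 2

# tags[s] holds the tags of the two lines in set s (None = invalid).
tags = [[None] * WAYS for _ in range(NUM_SETS)]

def lookup_two_way(block_number):
    s = block_number % NUM_SETS        # set index
    tag = block_number // NUM_SETS     # remaining bits form the tag
    if tag in tags[s]:                 # one comparator match => hit
        return "hit"
    # Both comparators missed => actual miss: place the block in a free
    # way, or naively evict way 0 if the set is full.
    way = tags[s].index(None) if None in tags[s] else 0
    tags[s][way] = tag
    return "miss"
```

Two blocks that map to the same set can now coexist, one per way, which is exactly the conflict case that forced an eviction in the direct-mapped scheme.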