Thursday, June 18, 2009

Mapping Function

The mapping function decides where in the cache a copy of a particular entry of main memory can go. In order to make room for a new entry on a cache miss, the cache generally has to evict one of the existing entries; the heuristic used to choose the victim is called the replacement policy. The fundamental problem with any replacement policy is that it must predict which existing cache entry is least likely to be used in the future. Predicting the future is difficult, especially for hardware caches that use simple rules amenable to implementation in circuitry, so there are a variety of replacement policies to choose from and no perfect way to decide among them. One popular replacement policy, LRU, replaces the least recently used entry.
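
As a minimal sketch of the LRU idea, assuming each entry records a last-use timestamp (a counter bumped on every access; the names and array layout here are hypothetical):

    #include <stdint.h>

    /* Pick the LRU victim: the entry whose last-use time is smallest.
       Assumes last_used[i] was set from a global access counter each
       time entry i was touched. */
    static int lru_victim(const uint64_t last_used[], int n)
    {
        int victim = 0;
        for (int i = 1; i < n; i++)
            if (last_used[i] < last_used[victim])
                victim = i;
        return victim;
    }

Real hardware rarely keeps full timestamps; it usually approximates LRU with a few status bits per set (pseudo-LRU), since tracking exact recency order in circuitry is expensive.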


Direct mapping

Each entry in main memory can go in just one place in the cache.
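
As a minimal sketch, assuming a hypothetical direct-mapped cache of 256 lines of 64 bytes each, the address alone fixes the single line a block can occupy:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical geometry: 256 lines of 64 bytes each (16 KiB). */
    #define BLOCK_SIZE 64u
    #define NUM_LINES  256u

    /* Low address bits select the byte within a block, the next bits
       select the line, and the remaining high bits form the tag. */
    static void decode(uint32_t addr, uint32_t *line, uint32_t *tag)
    {
        *line = (addr / BLOCK_SIZE) % NUM_LINES;
        *tag  = addr / (BLOCK_SIZE * NUM_LINES);
    }

    int main(void)
    {
        uint32_t line, tag;
        decode(0x12345678u, &line, &tag);
        printf("line=%u tag=%u\n", (unsigned)line, (unsigned)tag);
        return 0;
    }

Two addresses that share the same line bits but differ in their tags evict each other on every alternating access; these are the conflict misses referred to below.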





Fully Associative mapping
The replacement policy is free to choose any entry in the cache to hold the copy. In practice this mapping is used only for very small caches, because it is very expensive to implement: it demands a lot of hardware complexity, since the tags of all entries must be searched in parallel on every lookup.
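
In software terms, a fully associative lookup has to examine every entry. The loop below models what the hardware does with one tag comparator per entry, all firing in parallel (the structure and sizes are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_ENTRIES 64  /* hypothetical number of cache entries */

    struct entry { bool valid; uint32_t tag; };

    /* Every entry is a candidate, so every tag must be compared;
       in hardware all comparisons happen at once, which is why the
       comparator cost grows with the number of entries. */
    static bool fa_lookup(const struct entry cache[NUM_ENTRIES], uint32_t tag)
    {
        for (int i = 0; i < NUM_ENTRIES; i++)
            if (cache[i].valid && cache[i].tag == tag)
                return true;        /* hit */
        return false;               /* miss: any entry may be evicted */
    }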


Set Associative mapping

Each entry in main memory can go in any one of N places in the cache; such a cache is called N-way set associative. The address selects a set, and the entry may occupy any of the N ways within that set, making this a compromise between direct mapping and fully associative mapping.

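A sketch of the lookup under the same hypothetical geometry as before, now arranged as 64 sets of 4 ways: the set index narrows the search, so only the entries of one set need tag comparators:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical geometry: 64 sets x 4 ways of 64-byte blocks (16 KiB). */
    #define BLOCK_SIZE 64u
    #define NUM_SETS   64u
    #define WAYS       4

    struct entry { bool valid; uint32_t tag; };

    static bool sa_lookup(const struct entry cache[NUM_SETS][WAYS], uint32_t addr)
    {
        uint32_t set = (addr / BLOCK_SIZE) % NUM_SETS;
        uint32_t tag = addr / (BLOCK_SIZE * NUM_SETS);
        for (int way = 0; way < WAYS; way++)    /* search one set only */
            if (cache[set][way].valid && cache[set][way].tag == tag)
                return true;    /* hit */
        return false;           /* miss: a victim is chosen within this set */
    }

Note that direct mapping is the WAYS = 1 case and fully associative mapping is the single-set case, so the two earlier schemes are the extremes of this one.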
Associativity is a trade-off. If there are ten places the replacement policy can put a new cache entry, then when the cache is checked for a hit, all ten places must be searched. Checking more places takes more power, area, and potentially time. On the other hand, caches with more associativity suffer fewer misses (see conflict misses, below), so the CPU spends less time servicing those misses. The rule of thumb is that doubling the associativity, from direct mapped to 2-way, or from 2-way to 4-way, has about the same effect on hit rate as doubling the cache size. Increasing associativity beyond 4-way has much less effect on the hit rate and is generally done for other reasons (see virtual aliasing, below).




Figure: Miss rate against cache size
