As we generate each of the new contexts, we examine their children to
determine how likely each child's event is to be the next event. Using the formula
*Count(child) / (Count(parent) − 1)*
we generate a maximum-likelihood
estimation [17] of the probability of that child's event
occurring. We compare this estimate to a probability threshold set as
a parameter of our algorithm. If the estimated likelihood is greater
than or equal to this threshold, then the data accessed for this event
is prefetched into the cache. We evaluate each context **1** through **m**
independently, resulting in **m** sets of predictions. The zero order
context is effectively a Least Frequently Used model and was therefore
thought to be of little benefit; consequently, the selector does not
examine the zero order context for predictions.
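The selection step above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the dictionary-based trie node layout and the function name are assumptions made for the example.

```python
# Illustrative sketch: each trie node holds an occurrence count and its
# children keyed by event (layout assumed for this example).

def select_predictions(context_node, threshold):
    """Return events whose maximum-likelihood estimate
    count(child) / (count(parent) - 1) meets the threshold."""
    predictions = []
    denom = context_node["count"] - 1
    if denom <= 0:
        return predictions  # context seen only once: no evidence yet
    for event, child in context_node["children"].items():
        if child["count"] / denom >= threshold:
            predictions.append(event)
    return predictions

# Example: a context seen 5 times; "open(a)" followed it on 3 of the
# 4 completed occurrences, "open(b)" on 1.
ctx = {"count": 5, "children": {
    "open(a)": {"count": 3, "children": {}},
    "open(b)": {"count": 1, "children": {}},
}}
print(select_predictions(ctx, 0.5))  # → ['open(a)']
```

With a threshold of 0.5, only "open(a)" (estimate 3/4 = 0.75) is selected for prefetching; "open(b)" (1/4 = 0.25) falls below the threshold.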

Prefetched data is placed at the front of our cache. Since cache replacement is still LRU, this data will most likely remain in the cache for the next several events. As a result, although our prefetching is based on a prediction for the next event, we still avoid a cache miss if the predicted event occurs before its data is evicted from the cache.
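A minimal sketch of this behavior, assuming a simple LRU cache of keys: prefetched items enter at the most-recently-used end, so a correct prediction that arrives within the next few events is still a hit. The class and method names are illustrative, not the paper's code.

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache of keys; oldest entries sit at the front of the
    OrderedDict, newest at the back."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def _insert(self, key):
        if key in self.items:
            self.items.move_to_end(key)          # refresh to MRU position
        else:
            if len(self.items) >= self.capacity:
                self.items.popitem(last=False)   # evict the LRU entry
            self.items[key] = True

    def access(self, key):
        hit = key in self.items
        self._insert(key)
        return hit

    # Prefetching places data at the MRU end, exactly like a real access.
    prefetch = _insert

cache = LRUCache(capacity=3)
cache.access("a"); cache.access("b")
cache.prefetch("c")            # predicted next event
print(cache.access("c"))       # → True: the prediction avoided a miss
```

Even if one or two unrelated events intervene before "c" is accessed, the prefetched entry survives until it is pushed out the LRU end, which is why a slightly late prediction can still pay off.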

Tom M. Kroeger Tue Nov 14 00:27:35 PST 1995