Memory Use Metrics

Last updated: December 4, 2002

When considering the memory use of programs, we concentrate on the amount and properties of dynamically-allocated memory: memory use for the stack is related to the call graph metrics, and memory for globals does not usually vary dynamically.

Allocation Density Metrics

The first metrics required are simple value metrics measuring how much dynamic memory the program allocates per 1000 bytecode instructions (kbc) executed; there are two variations, one counting bytes and one counting objects. Although these metrics give a simple summary of how memory-hungry the program is overall, they do not distinguish between a program that allocates smoothly over its entire execution and a program that allocates only in some phases of the execution. To expose this kind of behaviour, there are obvious continuous analogs, where the number of bytes / objects allocated per kbc is computed per execution time interval, rather than once for the entire execution (memory.byteAllocationDensity.continuous and memory.objectAllocationDensity.continuous).
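The continuous variant can be sketched as follows. This is an illustrative implementation, not taken from the original tool: it assumes the profiler delivers a stream of allocation events, each recorded as the bytecode-instruction count at which it occurred together with the number of bytes allocated.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: bucket allocated bytes into intervals of 1000 bytecode
// instructions (kbc), giving the continuous byte-allocation density.
// The class and method names are illustrative assumptions.
public class AllocationDensity {
    static final long KBC = 1000; // interval length: 1000 bytecode instructions

    // instrCounts[i] = instruction count at the i-th allocation event,
    // bytes[i] = bytes allocated by that event.
    // Result: element k = bytes allocated during the k-th kbc interval.
    public static List<Long> byteAllocationDensity(long[] instrCounts, long[] bytes) {
        List<Long> perInterval = new ArrayList<>();
        for (int i = 0; i < instrCounts.length; i++) {
            int interval = (int) (instrCounts[i] / KBC);
            while (perInterval.size() <= interval) perInterval.add(0L);
            perInterval.set(interval, perInterval.get(interval) + bytes[i]);
        }
        return perInterval;
    }
}
```

The non-continuous metric is just the sum of these per-interval values divided by the number of intervals; the object-allocation variant would count events instead of bytes.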


Object Size Distribution Metrics


Object Liveness Metrics

Researchers working on garbage collection are often interested in the liveness of dynamically-allocated objects. For example, generational collection is potentially a good idea if a large proportion of objects have short lifetimes. For liveness metrics, time is often reported in terms of intervals of allocated bytes. For example, for an interval size of 10000 bytes, interval 1 ends after 10000 bytes have been allocated, interval 2 ends after 20000 bytes have been allocated, and so on.

Object lifetimes can be estimated by forcing a garbage collection at the end of each interval, thus allowing one to find the amount of live memory after the collection, and to capture the death of objects that have become unreachable during that interval.

Based on the amount of live memory at the end of each interval, we can compute two metrics: the highwater mark and the average amount of live memory.

The highwater mark indicates how large the heap has to grow. The average tells us how big the heap is on average. If the average is much smaller than the highwater mark, then the program has some phases that are memory hungry and others that are less so.
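Both metrics are straightforward to compute from the per-interval live-memory samples; a minimal sketch, with illustrative names:

```java
// Sketch: given live-memory samples taken at the end of each interval,
// the highwater mark is the maximum and the average is the mean.
public class LiveMemoryStats {
    public static long highwaterMark(long[] liveBytes) {
        long max = 0;
        for (long b : liveBytes) max = Math.max(max, b);
        return max;
    }

    public static double average(long[] liveBytes) {
        if (liveBytes.length == 0) return 0.0;
        long sum = 0;
        for (long b : liveBytes) sum += b;
        return (double) sum / liveBytes.length;
    }
}
```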

To compute interesting metrics about object lifetimes we define the birth_time of an object as the interval number in which it was allocated, the death_time as the interval number in which it was freed, and the last_used_time as the interval number in which the object was last touched. Each object then has a total_lifetime (death_time - birth_time), which is composed of two subintervals: active_lifetime (last_used_time - birth_time) and dragged_lifetime (death_time - last_used_time).
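These definitions translate directly into code. A minimal sketch, with field and method names chosen for illustration:

```java
// Sketch: per-object lifetime components, measured in interval numbers.
// Names are illustrative, not from the original metrics suite.
public class ObjectLifetime {
    final long birthTime;    // interval in which the object was allocated
    final long lastUsedTime; // interval in which it was last touched
    final long deathTime;    // interval in which it was freed

    public ObjectLifetime(long birthTime, long lastUsedTime, long deathTime) {
        this.birthTime = birthTime;
        this.lastUsedTime = lastUsedTime;
        this.deathTime = deathTime;
    }

    public long totalLifetime()   { return deathTime - birthTime; }
    public long activeLifetime()  { return lastUsedTime - birthTime; }
    public long draggedLifetime() { return deathTime - lastUsedTime; }
}
```

By construction, total_lifetime = active_lifetime + dragged_lifetime.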

Another concept appearing in the garbage collection literature is the notion of dragged objects: objects that are still reachable (live), but are never touched again for the remainder of the execution. To define a metric that measures the amount of drag for a program, we define the useful real estate for an object o as active_lifetime(o) x sizeof(o), the useless real estate for o as dragged_lifetime(o) x sizeof(o), and the total real estate for o as total_lifetime(o) x sizeof(o). We can then look at total useless real estate as a fraction of total real estate. If this fraction is significant, then dragged objects may be a problem in this benchmark.
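The program-wide fraction can be sketched as below, assuming each object is summarized by a triple of its active lifetime, dragged lifetime, and size (the representation is an assumption for illustration):

```java
// Sketch: total useless real estate as a fraction of total real estate.
// Each row of `objects` is {activeLifetime, draggedLifetime, sizeInBytes}.
public class DragMetric {
    public static double uselessRealEstateFraction(long[][] objects) {
        long useless = 0, total = 0;
        for (long[] o : objects) {
            long active = o[0], dragged = o[1], size = o[2];
            useless += dragged * size;                // dragged_lifetime x sizeof
            total   += (active + dragged) * size;     // total_lifetime x sizeof
        }
        return total == 0 ? 0.0 : (double) useless / total;
    }
}
```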

The previous metric summarizes the uselessRealEstateFraction over all objects. If this fraction is high, then the benchmark has a drag problem. We can also categorize objects by their individual uselessRealEstateFraction values using bins.
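One plausible binning scheme (ten equal-width bins of 10% each; the choice of bin width is an assumption, not specified here) can be sketched as:

```java
// Sketch: count objects in 10% bins by their individual
// uselessRealEstateFraction, each fraction assumed to lie in [0, 1].
public class DragBins {
    public static int[] binByFraction(double[] fractions) {
        int[] bins = new int[10];
        for (double f : fractions) {
            int bin = Math.min((int) (f * 10), 9); // a fraction of exactly 1.0 falls in the last bin
            bins[bin]++;
        }
        return bins;
    }
}
```

A benchmark whose last bins are heavily populated contains many objects that spend most of their lifetime dragged.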
