Most computers have a set of registers that can be accessed faster than the computer’s main memory. A good compiler will perform a kind of optimization called “redundant load and store removal.” The compiler looks for places in the code where it can either remove an instruction to load data from memory because the value is already in a register, or remove an instruction to store data to memory because the value can stay in a register until it is changed again anyway.
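As a rough sketch of the idea (the names total and accumulate are purely illustrative), naive code for the loop below would load total from memory and store it back on every pass; after redundant load and store removal, the compiler can keep total in a register for the whole loop and write it back to memory once at the end:

int total;

void accumulate(const int *data, int count)
{
    int n;
    for (n = 0; n < count; n++)
        total = total + data[n];   /* the load and store of total can stay in a register */
}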
If a variable is a pointer to something other than normal memory, such as memory-mapped ports on a peripheral, redundant load and store optimizations might be detrimental. For instance, here’s a piece of code that might be used to time some operation:
time_t time_addition(volatile const struct timer *t, int a)
{
    int n;
    int x;
    time_t then;

    x = 0;
    then = t->value;
    for (n = 0; n < 1000; n++)
    {
        x = x + a;
    }
    return t->value - then;
}
In this code, t->value refers to a hardware counter that is incremented as time passes. The function adds the value of a to x 1000 times, and it returns the amount by which the timer was incremented while the 1000 additions were being performed.

Without the volatile modifier, a clever optimizer might assume that the value of t->value does not change during the execution of the function, because there is no statement that explicitly changes it. In that case, there is no need to read it from memory a second time and perform the subtraction, because the result will always be 0. The compiler might therefore "optimize" the function by making it always return 0.
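To make the effect concrete, here is a sketch of what such an "optimized" version is effectively allowed to become; this illustrates the transformation the text describes, not the output of any particular compiler:

/* With a non-volatile t->value, both reads are assumed equal, so the
   subtraction folds to zero; the loop on x can be discarded as well,
   because x is never used after the loop. */
time_t time_addition(const struct timer *t, int a)
{
    return 0;
}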
If a variable points to data in shared memory, you also don't want the compiler to perform redundant load and store optimizations. Shared memory is normally used to let two programs communicate: one program stores data in the shared region, and the other reads that same region. If the compiler optimizes away a load or store of shared memory, one program never sees what the other wrote, and the communication between them breaks.
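A minimal sketch of the shared-memory case, assuming the two programs have already mapped the same region of memory and that ready and shared_data point into it (the names are illustrative only):

volatile int *ready;        /* flag in the shared region */
volatile int *shared_data;  /* data in the shared region */

void producer(int value)
{
    *shared_data = value;   /* this store must actually reach memory */
    *ready = 1;             /* and so must the flag */
}

int consumer(void)
{
    while (*ready == 0)     /* volatile forces a fresh read on every pass */
        ;                   /* busy-wait for the producer */
    return *shared_data;
}

Note that volatile only addresses the compiler-optimization problem discussed here; on a multiprocessor, real shared-memory communication also needs atomic operations or other synchronization to order the stores and loads between the two programs.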