In our first article we looked at vacuums and clustering. In this article we review FREEZE.
What is a FREEZE?
A freeze is an essential system maintenance task. If you do not run it when it is due, PostgreSQL will eventually shut down and force you to run a freeze in single-user maintenance mode. For this reason FREEZE is bundled with the VACUUM family of commands, even though it actually deals with logical consistency rather than with freeing physical space.
The simple example given in the first article was drawing a line through an entry on a sheet of paper in order to delete it. For databases the model is more subtle because of the concept of a transaction. Suppose you have taken your sheets of notes with you to a conference. You may change your existing notes based on what you hear, and yet later decide that you do not agree at all with what was said and want to restore the notes to their previous form. This would be difficult if all you had done was draw lines: you would need to know when each line was drawn, so you could trace back which entries were deleted in order to restore the material.
PostgreSQL gives each of its lines a number. The full set of lines that delete previous entries, together with the new entries they introduce, all sharing the same number, is called a transaction. You can open a transaction, put into it as much as you please and, if you change your mind, simply revert to the previous state by doing what in database terminology is called a rollback.
Database programmers describe their possible actions in terms of INSERT, UPDATE and DELETE, which suggests that an existing entry can actually be modified in place. In fact, UPDATE is only a shorthand for a more complex sequence of events that never modifies an existing entry: the entry is deleted and a new one is created, so both versions are always available. It could not be otherwise, since it may be necessary to roll back to the previous state.
Deleted entries that are still there, as described above, are what a basic vacuum deals with. However, there is an additional problem concerning the number used to identify transactions. This number is cyclic: once it reaches its maximum value, the next step wraps it back around towards zero. Since PostgreSQL interprets a > b as "a happens after b", this cannot be allowed: it would turn all past transactions into future events and totally disrupt the transaction mechanism. PostgreSQL will shut down and refuse to continue rather than allow this to happen. But while shutting down might save your data, it will surely kill your service, and so there must be a way to solve the problem before it reaches this critical point. The answer is the FREEZE command (the PostgreSQL documentation on transaction ID wraparound explains how about two billion usable values always lie ahead of any transaction, even across a wraparound).
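The circular comparison described above can be sketched in a few lines. This is a simplified model of PostgreSQL's internal XID comparison (the real implementation uses signed 32-bit arithmetic on the difference of the two IDs), shown here purely to illustrate why each transaction sees roughly half the circle as "past" and half as "future":

```python
# Sketch of circular transaction-ID comparison: with a 32-bit counter,
# "a precedes b" is decided by the modular difference, so each XID sees
# about two billion values behind it and two billion ahead of it.

XID_SPACE = 2**32

def xid_precedes(a: int, b: int) -> bool:
    """True if transaction a is in the past relative to b."""
    diff = (a - b) % XID_SPACE
    # Interpreting the difference as a signed 32-bit value:
    # a negative difference means a happened before b.
    return diff >= 2**31

# 100 precedes 200, as expected...
print(xid_precedes(100, 200))            # True
# ...but after a wraparound, an old XID near the top of the range
# still correctly precedes a small, freshly assigned one:
print(xid_precedes(XID_SPACE - 5, 100))  # True
```

The trap is visible here too: an unfrozen XID that drifts more than two billion transactions into the past crosses the halfway point and suddenly looks like the future, which is exactly what FREEZE prevents.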
Briefly, a special transaction ID called FrozenXID is used. Anything marked with this ID is in the past by design. When an old transaction might end up looking like a future one because of a counter wraparound, it can be assigned this ID to ensure it will never be misinterpreted. This is what the FREEZE command does. Its behaviour is regulated by two parameters: vacuum_freeze_min_age and autovacuum_freeze_max_age. The latter can be set only at server start, but it can be reduced for individual tables at runtime if necessary.
When does FREEZE fire, exactly?
A FREEZE is executed every delta transactions, where delta is:

delta = autovacuum_freeze_max_age - vacuum_freeze_min_age
The actual time needed to consume a given delta is a function of your application's design.
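To make this concrete, here is a back-of-the-envelope calculation using the stock defaults from the PostgreSQL documentation (autovacuum_freeze_max_age = 200 million, vacuum_freeze_min_age = 50 million); the transaction rate is a made-up figure, since, as noted above, the wall-clock interval depends entirely on your workload:

```python
# Freeze interval with the stock defaults (check your postgresql.conf):
autovacuum_freeze_max_age = 200_000_000
vacuum_freeze_min_age = 50_000_000

delta = autovacuum_freeze_max_age - vacuum_freeze_min_age
print(f"forced freeze roughly every {delta:,} transactions")

# How long that is in wall-clock time is a function of the workload:
tps = 500  # hypothetical sustained transactions per second
days = delta / tps / 86_400
print(f"at {tps} tps: roughly every {days:.1f} days")
```

A mostly idle system may go months between forced freezes, while a busy OLTP system can burn through the same delta in days.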
Tuning - the AUTOVACUUM daemon
When left to its own devices, PostgreSQL uses a daemon to run the two kinds of VACUUM that do not take locks (see the commands in green in the table above). This daemon runs by default and requires track_activities and track_counts to be enabled. It uses a multi-process architecture to distribute the load (potentially increasing any I/O contention problems): a launcher starts a worker process every autovacuum_naptime seconds, up to a maximum of autovacuum_max_workers concurrent workers.
These worker processes will vacuum/analyze any table whose dead tuples exceed a threshold expressed by the following formula:

vacuum threshold = autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * number of tuples
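The trigger condition is easy to model. The defaults used below (autovacuum_vacuum_threshold = 50, autovacuum_vacuum_scale_factor = 0.2) are the stock values from the PostgreSQL documentation; the table sizes are invented for illustration:

```python
# A table is vacuumed once its dead tuples exceed
#   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples

def needs_vacuum(dead_tuples: int, reltuples: int,
                 threshold: int = 50, scale_factor: float = 0.2) -> bool:
    return dead_tuples > threshold + scale_factor * reltuples

# A 1-million-row table is left alone until just over 200,050 dead
# tuples (20% bloat) have piled up:
print(needs_vacuum(150_000, 1_000_000))   # False
print(needs_vacuum(250_000, 1_000_000))   # True

# An "aggressive" setting reacts to 1% bloat instead of 20%:
print(needs_vacuum(15_000, 1_000_000, scale_factor=0.01))  # True
```

Note how the scale factor dominates on large tables: with the default 0.2, a billion-row table must accumulate 200 million dead tuples before autovacuum touches it, which is why aggressive tuning usually means lowering the scale factor.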
In general, the practice is to tune these values so that vacuum runs very frequently but only for a short time each run; in other words, to have a very low tolerance for bloat. This is called aggressive autovacuum. Autovacuum can be disabled, but the main daemon starts anyway and will execute a VACUUM FREEZE whenever necessary, as a minimum, non-suppressible level of activity.
What resources does VACUUM use?
Apart from generating extra I/O, VACUUM uses RAM. The maximum amount it can consume is set by the maintenance_work_mem parameter. Whenever this limit is reached, VACUUM must interrupt its table scan and clean the indexes of the dead tuples collected so far before it can continue, and with large tables this can happen very many times. Increasing this parameter will thus make VACUUM perform faster by reducing the number of index-cleaning passes.
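A rough model shows why this matters. Each dead tuple pointer occupies 6 bytes in vacuum's memory in the classic implementation, and every time the memory fills, vacuum must scan all of the table's indexes once. The figures below (the old 16MB default versus 1GB) are illustrative:

```python
import math

# Rough model: every time maintenance_work_mem fills up with dead
# tuple pointers (6 bytes each), VACUUM performs one full set of
# index scans before it can continue with the table.

BYTES_PER_DEAD_TUPLE = 6

def index_scan_passes(dead_tuples: int, maintenance_work_mem_mb: int) -> int:
    capacity = maintenance_work_mem_mb * 1024 * 1024 // BYTES_PER_DEAD_TUPLE
    return math.ceil(dead_tuples / capacity)

# 100 million dead tuples with a 16MB setting: dozens of passes...
print(index_scan_passes(100_000_000, 16))    # 36
# ...versus a single pass with 1GB:
print(index_scan_passes(100_000_000, 1024))  # 1
```

Going from 36 index passes to 1 is the difference between a vacuum that drags on for hours and one that finishes in a single sweep.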
However, it must be remembered that up to autovacuum_max_workers parallel VACUUMs may be running, each potentially using that much memory, so it may make sense to have a smaller number of them in order to avoid out-of-memory errors.
A final resource that vacuum needs is data availability. When a vacuum operation needs to modify data, it cannot do so while a long-running transaction holds those data: it must sit and wait for the transaction either to roll back or commit.
There is a further possible problem, the size of the Free Space Map, which applies only to the 8.3.x and earlier releases (from 8.4 the map is managed automatically). In many cases its configured size is smaller than what is needed, and when its limit is reached PostgreSQL loses the ability to reuse available free space, causing the system to become extremely bloated in a very short time.
How to reduce the I/O impact
Vacuum is I/O intensive, and this may be a problem. The impact on normal server operations can be reduced by making vacuum pause periodically, once it has spent a certain amount of effort on its operations.
A current cost level is computed from the three parameters in blue and compared to (auto)vacuum_cost_limit. If that limit is exceeded, vacuum pauses for vacuum_cost_delay milliseconds, freeing resources for normal I/O operations. Please note that cost-based vacuum delays do not occur during operations that hold critical locks.
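The accounting loop can be sketched as follows. The per-page costs used in the comments (1 for a buffer hit, 10 for a miss, 20 for dirtying a page) and the limit of 200 are the stock defaults from the PostgreSQL documentation; the sequence of page costs fed in is invented:

```python
# Toy model of cost-based vacuum delay: each page touched adds a cost
# (defaults: 1 = buffer hit, 10 = miss, 20 = page dirtied); once the
# running balance reaches vacuum_cost_limit, the real vacuum sleeps
# for vacuum_cost_delay milliseconds and the balance resets.

def count_pauses(page_costs, cost_limit: int = 200) -> int:
    pauses, balance = 0, 0
    for cost in page_costs:
        balance += cost
        if balance >= cost_limit:
            pauses += 1   # here the real vacuum would sleep
            balance = 0
    return pauses

# 100 cache misses at cost 10 each -> total cost 1000 -> 5 pauses:
print(count_pauses([10] * 100))  # 5
# 199 cheap buffer hits never reach the limit -> no pause:
print(count_pauses([1] * 199))   # 0
```

Raising vacuum_cost_limit or lowering vacuum_cost_delay makes vacuum finish sooner at the price of more I/O pressure; the opposite settings spread the same work over a longer period.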
What about AUTOVACUUM ANALYZE?
The logic is almost the same, with just one small difference: a VACUUM only needs to know which tuples have become invalid because of UPDATE/DELETE commands, whereas ANALYZE needs to be aware of INSERT commands as well.
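The difference shows up directly in the trigger formula. The defaults below (autovacuum_analyze_threshold = 50, autovacuum_analyze_scale_factor = 0.1) are the stock values from the PostgreSQL documentation; the row counts are invented for illustration:

```python
# ANALYZE fires once the rows changed since the last analyze
# (inserted + updated + deleted) exceed
#   autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples

def needs_analyze(inserted: int, updated: int, deleted: int,
                  reltuples: int, threshold: int = 50,
                  scale_factor: float = 0.1) -> bool:
    changes = inserted + updated + deleted
    return changes > threshold + scale_factor * reltuples

# A pure INSERT load produces no dead tuples, so it never triggers a
# plain VACUUM, but it does eventually trigger an ANALYZE:
print(needs_analyze(200_000, 0, 0, 1_000_000))  # True
print(needs_analyze(50_000, 0, 0, 1_000_000))   # False
```

This is why an append-only table can have perfectly healthy storage yet badly stale statistics if its ANALYZE cycle is not monitored.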
There are known problems that can delay, if not totally suspend, an ANALYZE of a given table (matters improved in this area starting with the 9.0.x releases). It is good practice to check manually whether any table lags behind and to run a manual ANALYZE should there be a need for it.
What is overall good practice?
choose a standard PostgreSQL version so that it becomes easier to have general policies (the repercussions of this are wider than can be covered here)
since psql01 is the most rigid point in the system, have the rest of the company adapt to it
use the most aggressive version of autovacuum you can (see autovacuum_vacuum_scale_factor)
free RAM resources you can allocate to autovacuum by capping the maximum number of simultaneous connections (probably using a pooler)
have in place a monitor system to observe what autovacuum is doing so it can be further tuned by looking at freeze dynamics