Monitoring Database Activity
A database administrator frequently wonders, "What is the system
doing right now?"
This chapter discusses how to find that out.
Several tools are available for monitoring database activity and
analyzing performance. Most of this chapter is devoted to describing
PostgreSQL's statistics collector,
but one should not neglect regular Unix monitoring programs such as
ps, top, iostat, and vmstat.
Also, once a poorly-performing query has been identified, further
investigation might be needed using
PostgreSQL's EXPLAIN command.
A later chapter discusses EXPLAIN
and other methods for understanding the behavior of an individual
query.
Standard Unix Tools
On most platforms, PostgreSQL modifies its
command title as reported by ps, so that individual server
processes can readily be identified. A sample display is:
$ ps auxww | grep ^postgres
postgres 960 0.0 1.1 6104 1480 pts/1 SN 13:17 0:00 postgres -i
postgres 963 0.0 1.1 7084 1472 pts/1 SN 13:17 0:00 postgres: writer process
postgres 965 0.0 1.1 6152 1512 pts/1 SN 13:17 0:00 postgres: stats collector process
postgres 998 0.0 2.3 6532 2992 pts/1 SN 13:18 0:00 postgres: tgl runbug 127.0.0.1 idle
postgres 1003 0.0 2.4 6532 3128 pts/1 SN 13:19 0:00 postgres: tgl regression [local] SELECT waiting
postgres 1016 0.1 2.4 6532 3080 pts/1 SN 13:19 0:00 postgres: tgl regression [local] idle in transaction
(The appropriate invocation of ps varies across different
platforms, as do the details of what is shown. This example is from a
recent Linux system.) The first process listed here is the
master server process. The command arguments
shown for it are the same ones given when it was launched. The next two
processes are background worker processes automatically launched by the
master process. (The stats collector process will not be present
if you have set
the system not to start the statistics collector.) Each of the remaining
processes is a server process handling one client connection. Each such
process sets its command line display in the form

postgres: user database host activity

The user, database, and connection source host items remain the same for
the life of the client connection, but the activity indicator changes.
The activity can be idle (i.e., waiting for a client command),
idle in transaction (waiting for client inside a BEGIN block),
or a command type name such as SELECT. Also,
waiting is attached if the server process is presently waiting
on a lock held by another server process. In the above example we can infer
that process 1003 is waiting for process 1016 to complete its transaction and
thereby release some lock or other.
If you have turned off the update_process_title parameter, the
activity indicator is not updated; the process title is set only once
when a new process is launched. On some platforms this saves a useful
amount of per-command overhead; on others it's insignificant.
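For example, you can check the current setting from any session (this sketch
assumes the update_process_title parameter named above, which can also be set
in postgresql.conf):

-- Show whether process titles are being updated per command
SHOW update_process_title;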
Solaris requires special handling. You must
use /usr/ucb/ps, rather than
/bin/ps. You also must use two w
flags, not just one. In addition, your original invocation of the
postgres command must have a shorter
ps status display than that provided by each
server process. If you fail to do all three things, the ps
output for each server process will be the original postgres
command line.
The Statistics Collector

PostgreSQL's statistics collector
is a subsystem that supports collection and reporting of information about
server activity. Presently, the collector can count accesses to tables
and indexes in both disk-block and individual-row terms. It also tracks
total numbers of rows in each table, and the last vacuum and analyze times
for each table.
PostgreSQL also supports determining the exact
command currently being executed by other server processes. This is an
independent facility that does not depend on the collector process.
Statistics Collection Configuration
Since collection of statistics adds some overhead to query execution,
the system can be configured to collect or not collect information.
This is controlled by configuration parameters that are normally set in
postgresql.conf. (See the chapter on server configuration for
details about setting configuration parameters.)

The parameter track_counts controls whether
information is actually sent to the collector process and thus determines
whether any run-time overhead occurs for event counting.
The parameter track_activities enables monitoring
of the current command being executed by any server process.

Normally these parameters are set in postgresql.conf so
that they apply to all server processes, but it is possible to turn
them on or off in individual sessions using the SET command. (To prevent
ordinary users from hiding their activity from the administrator,
only superusers are allowed to change these parameters with
SET.)
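For example, a superuser could adjust these settings for a single session; this
is a minimal sketch using the parameter names described above:

-- Must be run by a superuser; ordinary users may not change these.
SET track_activities = on;   -- enable current-command monitoring
SET track_counts = off;      -- suppress event counting for this session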
Viewing Collected Statistics
Several predefined views, listed in the table below, are available to show
the results of statistics collection. Alternatively, one can
build custom views using the underlying statistics functions.
When using the statistics to monitor current activity, it is important
to realize that the information does not update instantaneously.
Each individual server process transmits new statistical counts to
the collector just before going idle; so a query or transaction still in
progress does not affect the displayed totals. Also, the collector itself
emits a new report at most once per PGSTAT_STAT_INTERVAL
milliseconds (500 unless altered while building the server). So the
displayed information lags behind actual activity. However, current-query
information collected by track_activities is
always up-to-date.
Another important point is that when a server process is asked to display
any of these statistics, it first fetches the most recent report emitted by
the collector process and then continues to use this snapshot for all
statistical views and functions until the end of its current transaction.
So the statistics will appear not to change as long as you continue the
current transaction. Similarly, information about the current queries of
all processes is collected when any such information is first requested
within a transaction, and the same information will be displayed throughout
the transaction.
This is a feature, not a bug, because it allows you to perform several
queries on the statistics and correlate the results without worrying that
the numbers are changing underneath you. But if you want to see new
results with each query, be sure to do the queries outside any transaction
block. Alternatively, you can invoke
pg_stat_clear_snapshot(), which will discard the
current transaction's statistics snapshot (if any). The next use of
statistical information will cause a new snapshot to be fetched.
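For example, the following transaction reads the same counter twice; without
the call to pg_stat_clear_snapshot() the second query would simply repeat the
cached snapshot (the database name mydb is only a placeholder):

BEGIN;
SELECT xact_commit FROM pg_stat_database WHERE datname = 'mydb';
-- ... time passes, other sessions commit transactions ...
SELECT pg_stat_clear_snapshot();   -- discard the cached snapshot
SELECT xact_commit FROM pg_stat_database WHERE datname = 'mydb';  -- fresh data
COMMIT;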
Standard Statistics Views

pg_stat_activity
    One row per server process, showing database OID, database name,
    process ID, user OID, user name, current query, query's waiting status,
    time at which the current transaction and current query began execution,
    time at which the process was started, and client's address and port
    number. The columns that report data on the current query are available
    unless the parameter track_activities has been turned off. Furthermore,
    these columns are only visible if the user examining the view is a
    superuser or the same as the user owning the process being reported on.

pg_stat_bgwriter
    One row only, showing cluster-wide statistics from the background writer:
    number of scheduled checkpoints, requested checkpoints, buffers written
    by checkpoints and cleaning scans, and the number of times the bgwriter
    stopped a cleaning scan because it had written too many buffers. Also
    includes statistics about the shared buffer pool, including buffers
    written by backends (that is, not by the background writer) and total
    buffers allocated.

pg_stat_database
    One row per database, showing database OID, database name, number of
    active server processes connected to that database, number of
    transactions committed and rolled back in that database, total disk
    blocks read, total buffer hits (i.e., block read requests avoided by
    finding the block already in buffer cache), and number of rows returned,
    inserted, updated and deleted.

pg_stat_all_tables
    For each table in the current database (including TOAST tables), the
    table OID, schema and table name, number of sequential scans initiated,
    number of live rows fetched by sequential scans, number of index scans
    initiated (over all indexes belonging to the table), number of live rows
    fetched by index scans, numbers of row insertions, updates, and
    deletions, number of row updates that were HOT (i.e., no separate index
    update), numbers of live and dead rows, the last time the table was
    vacuumed manually, the last time it was vacuumed by the autovacuum
    daemon, the last time it was analyzed manually, and the last time it was
    analyzed by the autovacuum daemon.

pg_stat_sys_tables
    Same as pg_stat_all_tables, except that only system tables are shown.

pg_stat_user_tables
    Same as pg_stat_all_tables, except that only user tables are shown.

pg_stat_all_indexes
    For each index in the current database, the table and index OID, schema,
    table and index name, number of index scans initiated on that index,
    number of index entries returned by index scans, and number of live
    table rows fetched by simple index scans using that index.

pg_stat_sys_indexes
    Same as pg_stat_all_indexes, except that only indexes on system tables
    are shown.

pg_stat_user_indexes
    Same as pg_stat_all_indexes, except that only indexes on user tables
    are shown.

pg_statio_all_tables
    For each table in the current database (including TOAST tables), the
    table OID, schema and table name, number of disk blocks read from that
    table, number of buffer hits, numbers of disk blocks read and buffer
    hits in all indexes of that table, numbers of disk blocks read and
    buffer hits from that table's auxiliary TOAST table (if any), and
    numbers of disk blocks read and buffer hits for the TOAST table's index.

pg_statio_sys_tables
    Same as pg_statio_all_tables, except that only system tables are shown.

pg_statio_user_tables
    Same as pg_statio_all_tables, except that only user tables are shown.

pg_statio_all_indexes
    For each index in the current database, the table and index OID, schema,
    table and index name, and numbers of disk blocks read and buffer hits in
    that index.

pg_statio_sys_indexes
    Same as pg_statio_all_indexes, except that only indexes on system tables
    are shown.

pg_statio_user_indexes
    Same as pg_statio_all_indexes, except that only indexes on user tables
    are shown.

pg_statio_all_sequences
    For each sequence object in the current database, the sequence OID,
    schema and sequence name, and numbers of disk blocks read and buffer
    hits in that sequence.

pg_statio_sys_sequences
    Same as pg_statio_all_sequences, except that only system sequences are
    shown. (Presently, no system sequences are defined, so this view is
    always empty.)

pg_statio_user_sequences
    Same as pg_statio_all_sequences, except that only user sequences are
    shown.
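For instance, querying pg_stat_activity directly is a quick way to see what
each session is doing (the column names shown match the description above for
this release; they differ in later PostgreSQL versions):

SELECT procpid, usename, datname, current_query, waiting
FROM pg_stat_activity;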
The per-index statistics are particularly useful to determine which
indexes are being used and how effective they are.
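As a sketch, the following query lists the user indexes that have been scanned
least often, which are candidates for a closer look:

SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan, schemaname, relname;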
Beginning in PostgreSQL 8.1, indexes can be
used either directly or via bitmap scans. In a bitmap scan
the output of several indexes can be combined via AND or OR rules;
so it is difficult to associate individual heap row fetches
with specific indexes when a bitmap scan is used. Therefore, a bitmap
scan increments the
pg_stat_all_indexes.idx_tup_read
count(s) for the index(es) it uses, and it increments the
pg_stat_all_tables.idx_tup_fetch
count for the table, but it does not affect
pg_stat_all_indexes.idx_tup_fetch.

Before PostgreSQL 8.1, the
idx_tup_read and idx_tup_fetch counts
were essentially always equal. Now they can be different even without
considering bitmap scans, because idx_tup_read counts
index entries retrieved from the index while idx_tup_fetch
counts live rows fetched from the table; the latter will be less if any
dead or not-yet-committed rows are fetched using the index.
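To see how far the two counters diverge for your own indexes, a simple
comparison query such as the following can be used:

SELECT indexrelname, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_tup_read DESC;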
The pg_statio_ views are primarily useful to
determine the effectiveness of the buffer cache. When the number
of actual disk reads is much smaller than the number of buffer
hits, then the cache is satisfying most read requests without
invoking a kernel call. However, these statistics do not give the
entire story: due to the way in which PostgreSQL
handles disk I/O, data that is not in the
PostgreSQL buffer cache might still reside in the
kernel's I/O cache, and might therefore still be fetched without
requiring a physical read. Users interested in obtaining more
detailed information on PostgreSQL I/O behavior are
advised to use the PostgreSQL statistics collector
in combination with operating system utilities that allow insight
into the kernel's handling of I/O.
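As a rough illustration, a per-table buffer-cache hit ratio can be computed
like this (it remains subject to the kernel-cache caveat just described):

SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric /
             nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read + heap_blks_hit DESC;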
Other ways of looking at the statistics can be set up by writing
queries that use the same underlying statistics access functions as
these standard views do. These functions are listed in the table below. The per-database access
functions take a database OID as argument to identify which
database to report on. The per-table and per-index functions take
a table or index OID. (Note that only tables and indexes in the
current database can be seen with these functions.) The
per-server-process access functions take a server process
number, which ranges from one to the number of currently active
server processes.
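For example, the per-database functions can be combined with pg_database to
report commit and rollback counts for every database; this sketch returns the
same information that pg_stat_database shows:

SELECT d.datname,
       pg_stat_get_db_xact_commit(d.oid)   AS commits,
       pg_stat_get_db_xact_rollback(d.oid) AS rollbacks
FROM pg_database d;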
Statistics Access Functions

pg_stat_get_db_numbackends(oid) returns integer
    Number of active server processes for database

pg_stat_get_db_xact_commit(oid) returns bigint
    Transactions committed in database

pg_stat_get_db_xact_rollback(oid) returns bigint
    Transactions rolled back in database

pg_stat_get_db_blocks_fetched(oid) returns bigint
    Number of disk block fetch requests for database

pg_stat_get_db_blocks_hit(oid) returns bigint
    Number of disk block fetch requests found in cache for database

pg_stat_get_db_tuples_returned(oid) returns bigint
    Number of tuples returned for database

pg_stat_get_db_tuples_fetched(oid) returns bigint
    Number of tuples fetched for database

pg_stat_get_db_tuples_inserted(oid) returns bigint
    Number of tuples inserted in database

pg_stat_get_db_tuples_updated(oid) returns bigint
    Number of tuples updated in database

pg_stat_get_db_tuples_deleted(oid) returns bigint
    Number of tuples deleted in database

pg_stat_get_numscans(oid) returns bigint
    Number of sequential scans done when argument is a table, or number of
    index scans done when argument is an index

pg_stat_get_tuples_returned(oid) returns bigint
    Number of rows read by sequential scans when argument is a table, or
    number of index entries returned when argument is an index

pg_stat_get_tuples_fetched(oid) returns bigint
    Number of table rows fetched by bitmap scans when argument is a table,
    or table rows fetched by simple index scans using the index when
    argument is an index

pg_stat_get_tuples_inserted(oid) returns bigint
    Number of rows inserted into table

pg_stat_get_tuples_updated(oid) returns bigint
    Number of rows updated in table (includes HOT updates)

pg_stat_get_tuples_deleted(oid) returns bigint
    Number of rows deleted from table

pg_stat_get_tuples_hot_updated(oid) returns bigint
    Number of rows HOT-updated in table

pg_stat_get_live_tuples(oid) returns bigint
    Number of live rows in table

pg_stat_get_dead_tuples(oid) returns bigint
    Number of dead rows in table

pg_stat_get_blocks_fetched(oid) returns bigint
    Number of disk block fetch requests for table or index

pg_stat_get_blocks_hit(oid) returns bigint
    Number of disk block requests found in cache for table or index

pg_stat_get_last_vacuum_time(oid) returns timestamptz
    Time of the last vacuum initiated by the user on this table

pg_stat_get_last_autovacuum_time(oid) returns timestamptz
    Time of the last vacuum initiated by the autovacuum daemon on this table

pg_stat_get_last_analyze_time(oid) returns timestamptz
    Time of the last analyze initiated by the user on this table

pg_stat_get_last_autoanalyze_time(oid) returns timestamptz
    Time of the last analyze initiated by the autovacuum daemon on this table

pg_stat_get_backend_idset() returns setof integer
    Set of currently active server process numbers (from 1 to the number of
    active server processes). See usage example in the text

pg_backend_pid() returns integer
    Process ID of the server process attached to the current session

pg_stat_get_backend_pid(integer) returns integer
    Process ID of the given server process

pg_stat_get_backend_dbid(integer) returns oid
    Database ID of the given server process

pg_stat_get_backend_userid(integer) returns oid
    User ID of the given server process

pg_stat_get_backend_activity(integer) returns text
    Active command of the given server process, but only if the current user
    is a superuser or the same user as that of the session being queried
    (and track_activities is on)

pg_stat_get_backend_waiting(integer) returns boolean
    True if the given server process is waiting for a lock, but only if the
    current user is a superuser or the same user as that of the session
    being queried (and track_activities is on)

pg_stat_get_backend_activity_start(integer) returns timestamp with time zone
    The time at which the given server process's currently executing query
    was started, but only if the current user is a superuser or the same
    user as that of the session being queried (and track_activities is on)

pg_stat_get_backend_xact_start(integer) returns timestamp with time zone
    The time at which the given server process's currently executing
    transaction was started, but only if the current user is a superuser or
    the same user as that of the session being queried (and
    track_activities is on)

pg_stat_get_backend_start(integer) returns timestamp with time zone
    The time at which the given server process was started, or null if the
    current user is not a superuser nor the same user as that of the session
    being queried

pg_stat_get_backend_client_addr(integer) returns inet
    The IP address of the client connected to the given server process. Null
    if the connection is over a Unix domain socket. Also null if the current
    user is not a superuser nor the same user as that of the session being
    queried

pg_stat_get_backend_client_port(integer) returns integer
    The IP port number of the client connected to the given server process.
    -1 if the connection is over a Unix domain socket. Null if the current
    user is not a superuser nor the same user as that of the session being
    queried

pg_stat_get_bgwriter_timed_checkpoints() returns bigint
    The number of times the bgwriter has started timed checkpoints (because
    the checkpoint_timeout time has expired)

pg_stat_get_bgwriter_requested_checkpoints() returns bigint
    The number of times the bgwriter has started checkpoints based on
    requests from backends, because checkpoint_segments has been exceeded or
    because the CHECKPOINT command has been issued

pg_stat_get_bgwriter_buf_written_checkpoints() returns bigint
    The number of buffers written by the bgwriter during checkpoints

pg_stat_get_bgwriter_buf_written_clean() returns bigint
    The number of buffers written by the bgwriter for routine cleaning of
    dirty pages

pg_stat_get_bgwriter_maxwritten_clean() returns bigint
    The number of times the bgwriter has stopped its cleaning scan because
    it has written more buffers than specified in the bgwriter_lru_maxpages
    parameter

pg_stat_get_buf_written_backend() returns bigint
    The number of buffers written by backends because they needed to
    allocate a new buffer

pg_stat_get_buf_alloc() returns bigint
    The total number of buffer allocations

pg_stat_clear_snapshot() returns void
    Discard the current statistics snapshot

pg_stat_reset() returns void
    Reset all statistics counters for the current database to zero (requires
    superuser privileges)
blocks_fetched minus
blocks_hit gives the number of kernel
read() calls issued for the table, index, or
database; but the actual number of physical reads is usually
lower due to kernel-level buffering.
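For instance, the approximate number of kernel reads issued for a single table
could be obtained as follows (mytable is a placeholder name):

SELECT pg_stat_get_blocks_fetched('mytable'::regclass)
     - pg_stat_get_blocks_hit('mytable'::regclass) AS kernel_read_calls;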
The function pg_stat_get_backend_idset provides
a convenient way to generate one row for each active server process. For
example, to show the PIDs and current queries of all server processes:
SELECT pg_stat_get_backend_pid(s.backendid) AS procpid,
pg_stat_get_backend_activity(s.backendid) AS current_query
FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;
Viewing Locks
Another useful tool for monitoring database activity is the
pg_locks system table. It allows the
database administrator to view information about the outstanding
locks in the lock manager. For example, this capability can be used
to:
- View all the locks currently outstanding, all the locks on
  relations in a particular database, all the locks on a
  particular relation, or all the locks held by a particular
  PostgreSQL session.

- Determine the relation in the current database with the most
  ungranted locks (which might be a source of contention among
  database clients).

- Determine the effect of lock contention on overall database
  performance, as well as the extent to which contention varies
  with overall database traffic.
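For example, the second of these uses can be approached with a query along the
following lines, counting ungranted relation-level lock requests per relation
in the current database:

SELECT c.relname, count(*) AS ungranted_requests
FROM pg_locks l
     JOIN pg_class c ON c.oid = l.relation
WHERE l.locktype = 'relation' AND NOT l.granted
GROUP BY c.relname
ORDER BY ungranted_requests DESC;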
Details of the pg_locks view appear in the documentation of that
system view.
For more information on locking and managing concurrency with
PostgreSQL, refer to the chapter on concurrency control.
Dynamic Tracing

PostgreSQL provides facilities to support
dynamic tracing of the database server. This allows an external
utility to be called at specific points in the code and thereby trace
execution. Currently, this facility is primarily intended for use by
database developers, as it requires substantial familiarity with the code.
A number of trace points, often called probes, are already inserted
into the source code. By default these probes are disabled, and
the user needs to explicitly tell the configure script to make the
probes available in PostgreSQL.
Currently, only the DTrace utility is supported, which is only available
on Solaris Express and Solaris 10+. It is expected that DTrace will
be available in the future on FreeBSD and Mac OS X.
Supporting other dynamic tracing utilities is theoretically possible by
changing the definitions for the PG_TRACE macros in
src/include/pg_trace.h.
Compiling for Dynamic Tracing
By default, trace points are disabled, so you will need to
explicitly tell the configure script to make the probes available
in PostgreSQL. To include DTrace support
specify the --enable-dtrace option when running configure.

Built-in Trace Points
A few standard trace points are provided in the source code
(of course, more can be added as needed for a particular problem).
These are shown in the table below.
transaction__start(int transactionId)
    The start of a new transaction.
transaction__commit(int transactionId)
    The successful completion of a transaction.
transaction__abort(int transactionId)
    The unsuccessful completion of a transaction.
lwlock__acquire(int lockid, int mode)
    An LWLock has been acquired.
lwlock__release(int lockid, int mode)
    An LWLock has been released.
lwlock__startwait(int lockid, int mode)
    An LWLock was not immediately available and a backend has begun to wait
    for the lock to become available.
lwlock__endwait(int lockid, int mode)
    A backend has been released from its wait for an LWLock.
lwlock__condacquire(int lockid, int mode)
    An LWLock was successfully acquired when the caller specified no waiting.
lwlock__condacquire__fail(int lockid, int mode)
    An LWLock was not successfully acquired when the caller specified no
    waiting.
lock__startwait(int locktag_field2, int lockmode)
    A request for a heavyweight lock (lmgr lock) has begun to wait because
    the lock is not available.
lock__endwait(int locktag_field2, int lockmode)
    A request for a heavyweight lock (lmgr lock) has finished waiting (i.e.,
    has acquired the lock).
Using Trace Points
The example below shows a DTrace script for analyzing transaction
counts on the system, as an alternative to snapshotting
pg_stat_database before and after a performance test:
#!/usr/sbin/dtrace -qs
postgresql$1:::transaction-start
{
@start["Start"] = count();
self->ts = timestamp;
}
postgresql$1:::transaction-abort
{
@abort["Abort"] = count();
}
postgresql$1:::transaction-commit
/self->ts/
{
@commit["Commit"] = count();
@time["Total time (ns)"] = sum(timestamp - self->ts);
self->ts=0;
}
Note how the double underscore in trace point names needs to
be replaced by a single hyphen when writing a D script.
When executed, the example D script gives output such as:
# ./txn_count.d `pgrep -n postgres`
^C
Start 71
Commit 70
Total time (ns) 2312105013
You should remember that trace programs need to be carefully written and
debugged prior to their use, otherwise the trace information collected might
be meaningless. In most cases where problems are found it is the
instrumentation that is at fault, not the underlying system. When
discussing information found using dynamic tracing, be sure to include
the script used, so that it too can be checked and discussed.
Defining Trace Points
New trace points can be defined within the code wherever the developer
desires, though this will require a recompilation.
A trace point can be inserted by using one of the
trace macros. These are chosen according to how many variables will
be made available for inspection at that trace point. Tracing the
occurrence of an event can be achieved with a single line, using
just the trace point name, e.g.:
PG_TRACE (my__new__trace__point);
More complex trace points can be provided with one or more variables
for inspection by the dynamic tracing utility by using the
PG_TRACEn macro that corresponds to the number
of parameters after the trace point name:
PG_TRACE3 (my__complex__event, varX, varY, varZ);
The definition of the transaction__start trace point is shown below:
static void
StartTransaction(void)
{
...
/*
* generate a new transaction id
*/
s->transactionId = GetNewTransactionId(false);
XactLockTableInsert(s->transactionId);
PG_TRACE1(transaction__start, s->transactionId);
...
}
Note how the transaction ID is made available to the dynamic tracing
utility.
The dynamic tracing utility might require you to further define these trace
points. For example, DTrace requires you to add new probes to the file
src/backend/utils/probes.d as shown here:
provider postgresql {
...
probe transaction__start(int);
...
};
You should take care that the data types specified for the probe arguments
match the data types of the variables used in the PG_TRACE
macro. This is not checked at compile time. You can check that your newly
added trace point is available by recompiling, then running the new binary,
and as root, executing a DTrace command such as:
dtrace -l -n transaction-start