Previously tables declared WITH OIDS, including a significant fraction of the
catalog tables, stored the oid column not as a normal column, but as part of
the tuple header.

This special column was not shown by default, which was somewhat odd, as it's
often (consider e.g. pg_class.oid) one of the more important parts of a row.
Neither pg_dump nor COPY included the contents of the oid column by default.

The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That was
already painful for the existing code, and the upcoming work aiming to make
table storage pluggable would have required expanding and duplicating that
"specialness" significantly.

WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0). Remove it.
Removing includes:

- CREATE TABLE and ALTER TABLE syntax for declaring the table to be WITH OIDS
  has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will issue a
  warning when dumping one (and ignore the oid column)
- restoring a pg_dump archive with pg_restore will warn when restoring a
  table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids
- pg_upgrade will error out when encountering tables declared WITH OIDS; they
  have to be altered to remove the oid column first
- functionality to access the oid of the last inserted row (like plpgsql's
  RESULT_OID, SPI's SPI_lastoid, ...) has been removed

The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false) for
CREATE TABLE) is still supported. While that requires a bit of support code,
it seems unnecessary to break applications / dumps that do not use oids and
are explicit about not using them.

The biggest user of WITH OID columns was postgres' catalog. This commit
changes all 'magic' oid columns to be columns that are normally declared and
stored. To reduce unnecessary query breakage, all the newly added columns are
still named 'oid', even if a table's column naming scheme would indicate
'reloid' or such.

This obviously requires adapting a lot of code, mostly replacing oid access
via HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.

The bootstrap process now assigns oids for all oid columns in genbki.pl that
do not have an explicit value (starting at the largest oid previously used);
only oids assigned later will be above FirstBootstrapObjectId. As the oid
column now is a normal column, the special bootstrap syntax for oids has been
removed.

Oids are not automatically assigned during insertion anymore; all backend
code explicitly assigns oids with GetNewOidWithIndex(). For the rare case
that insertions into the catalog via SQL are called for, the new pg_nextoid()
function can be used (which only works on catalog tables).

The fact that oid columns on system tables are now normal columns means that
they will be included in the set of columns expanded by * (i.e. SELECT * FROM
pg_class will now include the table's oid, where previously it did not). It
would not technically be hard to hide the oid column by default, but that
would mean the confusing behavior would either have to be carried forward
forever, or it would cause breakage down the line.

While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this now. It's
painful to maintain externally, too complicated to commit after the code
freeze, and a dependency of a number of other patches.

Catversion bump, for obvious reasons.

Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de

Extended statistics
===================
When estimating various quantities (e.g. condition selectivities), the default
approach relies on the assumption that the columns are independent. In
practice that's often not true, resulting in estimation errors: for two
perfectly correlated columns where each condition matches 1% of the rows, the
independence assumption yields a combined selectivity of 0.01 * 0.01 = 0.0001,
while the true selectivity is 0.01.

Extended statistics track different types of dependencies between the columns,
hopefully improving the estimates and producing better plans.
Types of statistics
-------------------
There are currently two kinds of extended statistics:
    (a) ndistinct coefficients
    (b) soft functional dependencies (README.dependencies)
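
As a minimal illustration (the table t, its columns, and the data are
hypothetical), both kinds can be defined with CREATE STATISTICS and built by
a subsequent ANALYZE:

    CREATE TABLE t (a int, b int);
    -- two perfectly correlated columns, 100 rows per distinct value
    INSERT INTO t SELECT i/100, i/100 FROM generate_series(1, 10000) s(i);

    -- ndistinct coefficients and functional dependencies on (a, b)
    CREATE STATISTICS s (ndistinct, dependencies) ON a, b FROM t;
    ANALYZE t;    -- builds the statistics object
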
Compatible clause types
-----------------------
Each type of statistics may be used to estimate some subset of clause types.

    (a) functional dependencies - equality clauses (AND), possibly IS NULL

Currently, only OpExprs of the form Var op Const or Const op Var are
supported; however, it's feasible to extend the code later to also estimate
the selectivities of clauses such as Var op Var.
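
For instance, with the statistics object sketched above, the first of the
following (hypothetical) queries contains clauses the functional dependencies
can estimate, while the second does not:

    -- supported: equality OpExprs of the form Var op Const, combined by AND
    SELECT * FROM t WHERE a = 1 AND b = 2;

    -- not supported (yet): Var op Var
    SELECT * FROM t WHERE a = b;
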
Complex clauses
---------------
We also support estimating more complex clauses - essentially AND/OR clauses
with (Var op Const) as leaves, as long as all the referenced attributes are
covered by a single statistics object.
For example, this condition

    (a=1) AND ((b=2) OR ((c=3) AND (d=4)))

may be estimated using statistics on (a,b,c,d). If we only have statistics on
(b,c,d), we may estimate the second part that way, and estimate (a=1) using
simple per-column stats. If we only have statistics on (a,b,c), we can't apply
them at all at this point, but it's worth pointing out that
clauselist_selectivity() works recursively, and when handling the second part
(the OR-clause) we'll be able to apply the statistics.
Note: The multi-statistics estimation patch also makes it possible to pass some
clauses as 'conditions' into the deeper parts of the expression tree.
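
To make this concrete (the table t4 and the statistics coverage are
hypothetical), the case with statistics on (b,c,d) looks like this:

    -- with a statistics object on (b, c, d), the OR-clause can be estimated
    -- using the extended statistics, while (a = 1) falls back to the
    -- per-column statistics
    EXPLAIN
    SELECT * FROM t4 WHERE (a = 1) AND ((b = 2) OR ((c = 3) AND (d = 4)));
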
Selectivity estimation
----------------------
Throughout the planner, clauselist_selectivity() remains in charge of most
selectivity estimate requests. clauselist_selectivity() can be instructed to
try to make use of any extended statistics on the given RelOptInfo, which it
will do if:

    (a) An actual valid RelOptInfo was given (join relations are passed in as
        NULL and therefore do not qualify).

    (b) The given relation has extended statistics defined, and those
        statistics have actually been built.
When the above conditions are met, clauselist_selectivity() first attempts to
pass the clause list off to the extended statistics selectivity estimation
functions. These functions may not find any clauses they can estimate; such
clauses are simply ignored and fall through to the regular per-column
estimation. When these functions do perform estimation work, they are
expected to mark the clauses they have estimated, so that any other function
performing estimations knows which clauses to skip.
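
The practical effect is easiest to observe in the row estimates printed by
EXPLAIN. Using the correlated table t from the earlier example (the exact
estimates are illustrative):

    -- without the statistics object, the planner multiplies the per-column
    -- selectivities (1/100 each) and estimates ~1 row; with the functional
    -- dependencies built, the estimate is close to the actual 100 rows
    EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 1;
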
Size of sample in ANALYZE
-------------------------
When performing ANALYZE, the number of rows to sample is determined as
    (300 * statistics_target)
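
For example, with the default statistics_target of 100 this means sampling

    300 * 100 = 30000

rows, regardless of the size of the table.
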
That works reasonably well for statistics on individual columns, but perhaps
it's not enough for extended statistics. Papers analyzing estimation errors
all use samples proportional to the table size (usually finding that 1-3% of
the table is enough to build accurate stats).
The requested accuracy (number of MCV items or histogram bins) should also
be considered when determining the sample size, and in extended statistics
those are not necessarily limited by statistics_target.
This, however, merits further discussion, because collecting the sample is
quite expensive, and increasing it further would make ANALYZE even more
painful. Judging by the experiments with the current implementation, the
fixed size seems to work reasonably well for now, so we leave this as future
work.