"millennium" date part implementation in postgresql, both in the code
and the documentation, so that it conforms to the official definition.
If you do not agree with the official definition, please send your
complaint to "pope@vatican.org". I'm not responsible for them;-)
With the previous version, centuries and millennia had the wrong number
and started in the wrong year. Moreover, century number 0, which does
not exist in reality, lasted 200 years, and millennium number 0 lasted
2000 years.
If you want postgresql to have its own definition of "century" and
"millennium" that does not conform to the one used by society, just give
them another name. I would suggest "pgCENTURY" and "pgMILLENNIUM";-)
IMO, if someone uses these options, it means that postgresql is being used
for historical data, so it makes sense to have a historical definition. Also,
if I just want to divide the year by 100 or 1000, I can do that quite easily.
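As a rough psql illustration of the corrected definition (output written
out here to match the official rules, not copied from the patch): the
20th century ends with the year 2000 and the 3rd millennium starts in 2001.
test=> select extract(century from date '2000-12-31');
 date_part
-----------
        20
(1 row)
test=> select extract(century from date '2001-01-01');
 date_part
-----------
        21
(1 row)
test=> select extract(millennium from date '2001-01-01');
 date_part
-----------
         3
(1 row)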
BACKWARD INCOMPATIBLE CHANGE
Fabien Coelho - coelho@cri.ensmp.fr
> >>with allowed values of "all, mod, ddl, none" with default "none".
OK, here is a patch that implements #1. Here is sample output:
test=> set client_min_messages = 'log';
SET
test=> set log_statement = 'mod';
SET
test=> select 1;
 ?column?
----------
        1
(1 row)
test=> update test set x=1;
LOG: statement: update test set x=1;
ERROR: relation "test" does not exist
test=> copy test from '/tmp/x';
LOG: statement: copy test from '/tmp/x';
ERROR: relation "test" does not exist
test=> copy test to '/tmp/x';
ERROR: relation "test" does not exist
test=> prepare xx as select 1;
PREPARE
test=> prepare xx as update x set y=1;
LOG: statement: prepare xx as update x set y=1;
ERROR: relation "x" does not exist
test=> explain analyze select 1;;
QUERY PLAN
------------------------------------------------------------------------------------
Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.006..0.007 rows=1 loops=1)
Total runtime: 0.046 ms
(2 rows)
test=> explain analyze update test set x=1;
LOG: statement: explain analyze update test set x=1;
ERROR: relation "test" does not exist
test=> explain update test set x=1;
ERROR: relation "test" does not exist
It checks PREPARE and EXPLAIN ANALYZE too. The log_statement values are
'none', 'mod', 'ddl', and 'all'. For 'all', the statement is logged before
the query is parsed; for 'ddl' and 'mod', it is logged right after parsing,
using the node tag (or the command tag for CREATE/ALTER/DROP), so any
non-parse errors will print after the log line.
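For example (a sketch, not taken from the patch output, assuming
client_min_messages is still 'log' as above and using a hypothetical
table name), 'ddl' logs CREATE/ALTER/DROP but not plain data changes:
test=> set log_statement = 'ddl';
SET
test=> create table t (x int);
LOG: statement: create table t (x int);
CREATE TABLE
test=> update t set x = 1;
UPDATE 0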
results with tuples as ordinary varlena Datums. This commit does not
in itself do much for us, except eliminate the horrid memory leak
associated with evaluation of whole-row variables. However, it lays the
groundwork for allowing composite types as table columns, and perhaps
some other useful features as well. Per my proposal of a few days ago.
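For reference, a "whole-row variable" is a reference like the bare "t"
in this sketch (hypothetical table name); evaluating such expressions is
what used to leak memory:
test=> create table t (a int, b text);
CREATE TABLE
test=> select t from t;
 t
---
(0 rows)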
is measured in kilobytes and checked against actual physical execution
stack depth, as per my proposal of 30-Dec. This gives us a fairly
bulletproof defense against crashing due to runaway recursive functions.
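A rough illustration (hypothetical function name; exact error wording may
differ): a runaway recursive function now fails cleanly once the configured
limit (the max_stack_depth setting, in kilobytes) is reached, rather than
crashing the backend.
test=> create function runaway() returns int as
test-> 'select runaway()' language sql;
CREATE FUNCTION
test=> select runaway();
ERROR: stack depth limit exceeded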
listen_addresses parameter, as per recent discussion. The default behavior
is now to listen on localhost, which eliminates the need for the -i
postmaster switch in many scenarios.
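A quick way to check the setting from psql (a sketch; changing it, e.g. to
listen_addresses = '*' in postgresql.conf, requires a postmaster restart):
test=> show listen_addresses;
 listen_addresses
------------------
 localhost
(1 row)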
Andrew Dunstan
of fighting it, avoid the hard-wired (and wrong) assumption about the
maximum prefix length, make %l actually work as documented, and don't
compute data we may not need.
TID (heap position). This doesn't do anything to the validity of the
finished index, but by pretending to qsort() that there are no really
equal keys in the sort, we can avoid performance problems with qsort
implementations that have trouble with large numbers of equal keys.
Patch from Manfred Koizar.
so that the 'val' is computed only once, per recent discussion. The
speedup is not much when 'val' is just a simple variable, but could be
significant for larger expressions. More importantly this avoids issues
with multiple evaluations of a volatile 'val', and it allows the CASE
expression to be reverse-listed in its original form by ruleutils.c.
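A small sketch of the simple-CASE form this affects: here the 'val' is
length('abc'), which is now computed once and compared against each WHEN
arm; the gain matters most when 'val' is expensive or volatile.
test=> select case length('abc') when 1 then 'one' when 3 then 'three' else 'many' end;
 case
-------
 three
(1 row)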
In particular, don't depend on strtod() to accept 'NaN' and 'Infinity'
inputs (while this is required by C99, not all platforms are compliant
with that yet). Also, don't require glibc's behavior from isinf():
it seems that on a lot of platforms isinf() does not itself distinguish
between negative and positive infinity.
Allow 'NaN' and 'Infinity' to be accepted as input for the float4 and float8
types. Update the regression tests and the documentation to reflect
this. Remove the UNSAFE_FLOATS #ifdef.
This is only half the story: we still unconditionally reject
floating point operations that result in +/- infinity. See
recent thread on -hackers for more information.
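So, as a hedged illustration: the literal inputs are accepted, while
arithmetic that overflows to infinity still raises an error (error text
not shown here).
test=> select 'NaN'::float8, 'Infinity'::float8, '-Infinity'::float8;
 float8 |  float8  |  float8
--------+----------+-----------
    NaN | Infinity | -Infinity
(1 row)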
any amount of leading or trailing whitespace (where "whitespace"
is defined by isspace()). This is for SQL conformance, as well
as consistency with other numeric types (e.g. oid, numeric).
Also refactor pg_atoi() to avoid looking at errno where not
necessary, and add a bunch of regression tests for the input
to these types.
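For example, integer input now tolerates surrounding whitespace (a quick
psql check):
test=> select '  42  '::int4;
 int4
------
   42
(1 row)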
bin directories to be packaged under the same root directory (e.g. <some
path>/pgsql/bin and <some path>/pgsql/lib) for the win32 port, which
does not appear to be an onerous restriction.
Claudio Natoli
#log_line_prefix = '' # e.g. '<%u%%%d> '
# %u=user name %d=database name
# %r=remote host and port
# %p=PID %t=timestamp %i=command tag
# %c=session id %l=session line number
# %s=session start timestamp
# %x=stop here in non-session processes
# %%='%'
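As a hedged illustration with made-up names: given the example prefix
'<%u%%%d> ' above, a statement logged for user "alice" in database "test"
would come out roughly as
<alice%test> LOG: statement: select 1;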
Andrew Dunstan
Add support for 'week' within the date_trunc function.
Within the patch I added a couple of test cases and associated target
output, and changed the documentation to add 'week' appropriately.
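A brief sketch of the new capability (weeks start on Monday, following the
ISO convention; 2004-02-29 was a Sunday):
test=> select date_trunc('week', timestamp '2004-02-29 12:34:56');
     date_trunc
---------------------
 2004-02-23 00:00:00
(1 row)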
Robert Creager
float8 types. This begins the deprecation of this feature: in 7.6,
this input will be rejected.
Also added a new error code for warnings about deprecated features,
and updated the regression tests.
comments, make some unrelated improvements to the functions
documentation, and perform some minor consistency cleanup
elsewhere. Original initcap() change from Dennis B., additional
changes by Neil C.
* Changes incorrect CYGWIN defines to __CYGWIN__
* Some checks for localtime() returning NULL (when unchecked, these cause
SEGVs under the Win32 regression tests)
* Rationalized CreateSharedMemoryAndSemaphores and
AttachSharedMemoryAndSemaphores (Bruce, I finally remembered to do it);
requires attention.
Claudio Natoli
exposed thereby. AFAICT these would not lead to any worse problems than
junk emitted on the backend's stdout, but we should have the option to
catch possible worse errors in future.
number of openable files and the number already opened. This eliminates
depending on sysconf(_SC_OPEN_MAX), and allows much saner behavior on
platforms where open-file slots are used up by semaphores.
logically belongs. Arrange to update the _NSGetArgv() copy of the argv
pointer on Darwin. (It seems likely that other NeXT-derived platforms
also have an _NSGetArgv() problem, but until we have some reports I'll
just make this #ifdef __darwin__.)
problem, per previous discussion. Make some additional changes to
centralize the knowledge of just how identifier downcasing is done,
in hopes of simplifying any future tweaking in this area.
Win2K, and possibly all Win32 variants, it is always 0). This causes a
number of problems in the dfmgr.c logic, which basically all revolve
around the fact that *any* two files will appear to have the same inode.
Claudio Natoli
corner cases that could stand improvement, but it does all the basic
stuff. A byproduct is that the selectivity routines are no longer
constrained to working on simple Vars; we might in future be able to
improve the behavior for subexpressions that don't match indexes.
vs. timestamptz. This allows use of indexes for expressions like
datecol >= date 'today' - interval '1 month'
which were formerly not indexable without casting the right-hand side
down from timestamp to date.
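A sketch with hypothetical names showing the kind of query that can now
use the index directly, without rewriting the right-hand side:
test=> create table orders (order_date date);
CREATE TABLE
test=> create index orders_order_date_idx on orders (order_date);
CREATE INDEX
test=> select * from orders
test-> where order_date >= date 'today' - interval '1 month';
 order_date
------------
(0 rows)
With enough rows and up-to-date statistics, EXPLAIN should show an index
scan on orders_order_date_idx for this predicate.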
subroutine in src/port/pgsleep.c. Remove platform dependencies from
miscadmin.h and put them in port.h where they belong. Extend recent
vacuum cost-based-delay patch to apply to VACUUM FULL, ANALYZE, and
non-btree index vacuuming.
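For reference, a hedged sketch of using the cost-based delay from a session
(hypothetical table name; vacuum_cost_delay is in milliseconds and defaults
to 0, which disables the delay):
test=> set vacuum_cost_delay = 10;
SET
test=> vacuum analyze mytable;
VACUUM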
By the way, where is the documentation for the cost-based-delay patch?