by Ken Ashcraft's report. I think there is no actual bug here since if
the int32 value does wrap a little bit, palloc will still reject it.
Still it's better that the code be obviously correct.
all the code that looks for other binaries. I moved FindExec into
port/exec.c (and renamed it to find_my_binary()). I also added
find_other_binary(), which looks for another binary in the same directory
as the calling program and checks its version string.
The only behavior change: initdb and pg_dump used to fall back to the
hard-coded bindir directory if they couldn't find the requested binary in
the same directory as the caller. The new code throws an error instead.
The old behavior seemed too error-prone for version mismatches.
rather than allowing them only in a few special cases as before. In
particular you can now pass a ROW() construct to a function that accepts
a rowtype parameter. Internal generation of RowExprs fixes a number of
corner cases that previously did not work very well, such as referencing the
whole-row result of a JOIN or subquery. This represents a further step in
the work I started a month or so back to make rowtype values into
first-class citizens.
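As a rough illustration (the table and function here are made up, not part of
this change), a ROW() construct can now be handed straight to a function that
takes a rowtype argument:
    CREATE TABLE complex (r float8, i float8);
    CREATE FUNCTION magnitude(complex) RETURNS float8
        AS 'SELECT sqrt($1.r * $1.r + $1.i * $1.i)' LANGUAGE sql;
    SELECT magnitude(ROW(3.0, 4.0));    -- 5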
costing us lots more to maintain than it was worth. On shared tables
it was of exactly zero benefit because we couldn't trust it to be
up to date. On temp tables it sometimes saved an lseek, but not often
enough to be worth getting excited about. And the real problem was that
we forced an lseek on every relcache flush in order to update the field.
So all in all it seems best to lose the complexity.
conversion of basic ASCII letters. Remove all uses of strcasecmp and
strncasecmp in favor of new functions pg_strcasecmp and pg_strncasecmp;
remove most but not all direct uses of toupper and tolower in favor of
pg_toupper and pg_tolower. These functions use the same notions of
case folding already developed for identifier case conversion. I left
the straight locale-based folding in place for situations where we are
just manipulating user data and not trying to match it to built-in
strings --- for example, the SQL upper() function is still locale
dependent. Perhaps this will prove not to be what's wanted, but at
the moment we can initdb and pass the regression tests in a Turkish locale.
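A sketch of the distinction, assuming a database initialized in a Turkish
locale such as tr_TR (the exact result of upper() depends on that locale):
    SELECT upper('i');   -- still locale-aware: may yield 'İ' (dotted capital I)
    SELECT 1 AS Id;      -- identifier folding uses ASCII rules only, so the
                         -- column comes back as "id" regardless of locale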
modify. Also fix a passel of problems with ALTER TABLE CLUSTER ON:
failure to check that the index is safe to cluster on (or even belongs
to the indicated rel, or even exists), and failure to broadcast a relcache
flush event when changing an index's state.
* ALTER ... ADD COLUMN with defaults and NOT NULL constraints works per SQL
spec. A default is implemented by rewriting the table with the new value
stored in each row.
* ALTER COLUMN TYPE. You can change a column's datatype to anything you
want, so long as you can specify how to convert the old value. Rewrites
the table. (Possible future improvement: optimize no-op conversions such
as varchar(N) to varchar(N+1).)
* Multiple ALTER actions in a single ALTER TABLE command. You can perform
any number of column additions, type changes, and constraint additions with
only one pass over the table contents.
Basic documentation is provided in the ALTER TABLE reference page, but some
more documentation work is needed; a combined example is sketched below.
Original patch from Rod Taylor, additional work from Tom Lane.
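A combined sketch (table and column names are invented for illustration):
    ALTER TABLE orders
        ADD COLUMN note text NOT NULL DEFAULT '',
        ALTER COLUMN qty TYPE numeric(10,2) USING qty::numeric(10,2),
        ADD CHECK (qty >= 0);
    -- all three actions are applied in a single pass over the table contents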
> Please find attached a small patch that adds accessor functions
> for "aclitem" so that it is not an opaque datatype.
>
> I needed these functions to browse aclitems from user land. I can load
> them when necessary, but it seems to me that these accessors for a
> backend type belong to the backend, so I submit them.
>
> Fabien Coelho
for "aclitem" so that it is not an opaque datatype.
I needed these functions to browse aclitems from user land. I can load
them when necessary, but it seems to me that these accessors for a
backend type belong to the backend, so I submit them.
Fabien Coelho
* removed a few redundant defines
* get_user_name safe under win32
* rationalized pipe read EOF for win32 (UPDATED PATCH USED)
* changed all backend instances of sleep() to pg_usleep
  - except for the SLEEP_ON_ASSERT in assert.c, as it would exceed a
    32-bit long [Note to patcher: If a SLEEP_ON_ASSERT of 2000 seconds is
    acceptable, please replace with pg_usleep(2000000000L)]
I added a comment to that part of the code:
/*
 * It would be nice to use pg_usleep() here, but it can only wait
 * about 2000 seconds (33 minutes), which seems too short.
 */
sleep(1000000);
Claudio Natoli
"millennium" date part implementation in postgresql, both in the code
and the documentation, so that it conforms to the official definition.
If you do not agree with the official definition, please send your
complaint to "pope@vatican.org". I'm not responsible for them;-)
With the previous version, the centuries and millennia were numbered wrongly
and started in the wrong year. Moreover, century number 0, which does not
exist in reality, lasted 200 years, and millennium number 0 lasted 2000 years.
If you want postgresql to have its own definition of "century" and
"millennium" that does not conform to the commonly accepted one, just give
them another name. I would suggest "pgCENTURY" and "pgMILLENNIUM";-)
IMO, if someone uses these fields, it means that postgresql is being used for
historical data, so it makes sense to follow the historical definition. Also,
if I just want to divide the year by 100 or 1000, I can do that quite easily.
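To illustrate the corrected numbering (the 21st century and the 3rd
millennium begin on 2001-01-01):
    SELECT extract(century FROM date '2000-12-31');     -- 20
    SELECT extract(century FROM date '2001-01-01');     -- 21
    SELECT extract(millennium FROM date '2000-12-31');  -- 2
    SELECT extract(millennium FROM date '2001-01-01');  -- 3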
BACKWARD INCOMPATIBLE CHANGE
Fabien Coelho - coelho@cri.ensmp.fr
> >>with allowed values of "all, mod, ddl, none" with default "none".
OK, here is a patch that implements #1. Here is sample output:
test=> set client_min_messages = 'log';
SET
test=> set log_statement = 'mod';
SET
test=> select 1;
 ?column?
----------
        1
(1 row)
test=> update test set x=1;
LOG: statement: update test set x=1;
ERROR: relation "test" does not exist
test=> update test set x=1;
LOG: statement: update test set x=1;
ERROR: relation "test" does not exist
test=> copy test from '/tmp/x';
LOG: statement: copy test from '/tmp/x';
ERROR: relation "test" does not exist
test=> copy test to '/tmp/x';
ERROR: relation "test" does not exist
test=> prepare xx as select 1;
PREPARE
test=> prepare xx as update x set y=1;
LOG: statement: prepare xx as update x set y=1;
ERROR: relation "x" does not exist
test=> explain analyze select 1;
                                     QUERY PLAN
------------------------------------------------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.006..0.007 rows=1 loops=1)
 Total runtime: 0.046 ms
(2 rows)
test=> explain analyze update test set x=1;
LOG: statement: explain analyze update test set x=1;
ERROR: relation "test" does not exist
test=> explain update test set x=1;
ERROR: relation "test" does not exist
It checks PREPARE and EXPLAIN ANALYZE too. The log_statement values are
'none', 'mod', 'ddl', and 'all'. For 'all', the statement is logged before
the query is parsed; for 'ddl' and 'mod', it is logged right after parsing,
using the node tag (or the command tag for CREATE/ALTER/DROP), so any
non-parse errors will print after the log line.
results with tuples as ordinary varlena Datums. This commit does not
in itself do much for us, except eliminate the horrid memory leak
associated with evaluation of whole-row variables. However, it lays the
groundwork for allowing composite types as table columns, and perhaps
some other useful features as well. Per my proposal of a few days ago.
is measured in kilobytes and checked against actual physical execution
stack depth, as per my proposal of 30-Dec. This gives us a fairly
bulletproof defense against crashing due to runaway recursive functions.
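A quick sketch of the new knob (values shown are only illustrative; the
setting must fit under the platform's actual stack limit and requires
superuser privileges to change):
    SHOW max_stack_depth;          -- reported in kilobytes
    SET max_stack_depth = 2048;    -- 2 MB, checked against the real stack depth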
listen_addresses parameter, as per recent discussion. The default behavior
is now to listen on localhost, which eliminates the need for the -i
postmaster switch in many scenarios.
Andrew Dunstan
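For example (settings shown are illustrative):
    SHOW listen_addresses;   -- 'localhost' is now the default
    -- to accept remote connections (what -i used to enable), set in
    -- postgresql.conf:  listen_addresses = '*'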
of fighting it, avoid a hard-wired (and wrong) assumption about the maximum
length of the prefix, make %l actually work as documented, and don't compute
data we may not need.
TID (heap position). This doesn't affect the validity of the finished
index, but by pretending to qsort() that there are no truly
equal keys in the sort, we can avoid performance problems with qsort
implementations that have trouble with large numbers of equal keys.
Patch from Manfred Koizar.
so that the 'val' is computed only once, per recent discussion. The
speedup is not much when 'val' is just a simple variable, but could be
significant for larger expressions. More importantly this avoids issues
with multiple evaluations of a volatile 'val', and it allows the CASE
expression to be reverse-listed in its original form by ruleutils.c.
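A sketch of the form this affects, where f(), x, and t are hypothetical and
f() stands for a volatile or expensive function:
    SELECT CASE f(x) WHEN 1 THEN 'one'
                     WHEN 2 THEN 'two'
                     ELSE 'other'
           END
    FROM t;
    -- f(x) is now evaluated once per row, not once per WHEN comparison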
In particular, don't depend on strtod() to accept 'NaN' and 'Infinity'
inputs (while this is required by C99, not all platforms are compliant
with that yet). Also, don't require glibc's behavior from isinf():
it seems that on a lot of platforms isinf() does not itself distinguish
between negative and positive infinity.
types. Update the regression tests and the documentation to reflect
this. Remove the UNSAFE_FLOATS #ifdef.
This is only half the story: we still unconditionally reject
floating point operations that result in +/- infinity. See
recent thread on -hackers for more information.
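In practice (illustrative queries):
    SELECT 'NaN'::float8, 'Infinity'::float8, '-Infinity'::float8;  -- all accepted
    SELECT 1e308::float8 * 10;   -- still rejected: the result overflows to infinity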
any amount of leading or trailing whitespace (where "whitespace"
is defined by isspace()). This is for SQL conformance, as well
as consistency with other numeric types (e.g. oid, numeric).
Also refactor pg_atoi() to avoid looking at errno where not
necessary, and add a bunch of regression tests for the input
to these types.
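For example, the new input behavior can be seen with simple casts:
    SELECT '   42  '::int4;                  -- surrounding whitespace is accepted
    SELECT ' 9223372036854775807 '::int8;    -- likewise for int8 (and int2)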