--- ie, they're only called for side-effects. Add a PG_RETURN_VOID()
macro and use it where appropriate. This probably doesn't change the
machine code by a single bit ... it's just for documentation.
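A minimal sketch of a newstyle (fmgr version-1) function that is called
only for its side effects; the function name is hypothetical, only the
PG_RETURN_VOID() usage is the point:

    #include "postgres.h"
    #include "fmgr.h"

    /* Called only for its side effects: it has no useful return value,
     * so it ends with PG_RETURN_VOID() instead of building a Datum.
     */
    Datum
    noop_example(PG_FUNCTION_ARGS)
    {
        elog(NOTICE, "called for side effects only");
        PG_RETURN_VOID();
    }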
to_char. I don't know about the rest of the world, but the "standard" in
Australia is the following:
1st, 2nd, 3rd, 4th - 9th
10th - 19th
21st, 22nd, 23rd, 24th - 29th (similarly for 30s - 90s)
110th - 119th (and for all "teens")
121st, 122nd, 123rd, 124th - 129th
I think you see the trend. The current code works fine except that it
produces:
111st, 112nd, 113rd, 114th - 119th
211st, 212nd, 213rd, 214th - 219th ... and so on.
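For reference, the intended rule can be sketched in a few lines
(a hypothetical helper, not the actual formatting.c code): values ending
in 11-13 always take "th", otherwise the last digit decides.

    #include <stdio.h>

    /* Hypothetical helper: English ordinal suffix for n.  Values ending
     * in 11, 12, 13 take "th"; otherwise the last digit picks the suffix.
     */
    static const char *
    ordinal_suffix(int n)
    {
        if (n % 100 >= 11 && n % 100 <= 13)
            return "th";
        switch (n % 10)
        {
            case 1:  return "st";
            case 2:  return "nd";
            case 3:  return "rd";
            default: return "th";
        }
    }

    int
    main(void)
    {
        int     tests[] = {1, 2, 3, 11, 13, 21, 111, 112, 113, 121};
        int     i;

        /* prints 1st 2nd 3rd 11th 13th 21st 111th 112th 113th 121st */
        for (i = 0; i < 10; i++)
            printf("%d%s ", tests[i], ordinal_suffix(tests[i]));
        printf("\n");
        return 0;
    }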
Without knowing anything about what's supported (and what isn't) in the usual
I18N libraries, should this type of behaviour be defined within the locales?
Daniel Baldoni
inputs have been converted to newstyle. This should go a long way towards
fixing our portability problems with platforms where char and short
parameters are passed differently from int-width parameters. Still
more to do for the Alpha port however.
to 10, and be consistent about whether it counts the trailing null (it
does not). Also increase MAXDATELEN to be sure no buffer overflows are
caused by the longer MAXTZLEN.
key call sites are changed, but most called functions are still oldstyle.
An exception is that the PL managers are updated (so, for example, NULL
handling now behaves as expected in plperl and plpgsql functions).
NOTE initdb is forced due to added column in pg_proc.
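For illustration, a newstyle (version-1) function can test its arguments
for NULL explicitly, which is what makes the improved NULL handling in the
PL managers possible; the function below is hypothetical, not part of the
patch:

    #include "postgres.h"
    #include "fmgr.h"

    /* Hypothetical newstyle function: return its int4 argument doubled,
     * or NULL when the input is NULL, using the version-1 macros.
     */
    Datum
    double_or_null(PG_FUNCTION_ARGS)
    {
        int32   arg;

        if (PG_ARGISNULL(0))
            PG_RETURN_NULL();

        arg = PG_GETARG_INT32(0);
        PG_RETURN_INT32(arg * 2);
    }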
other than the most common value in a column. We had had 0.5, make it
0.1 to make it more likely that an indexscan will be chosen. Really
need better statistics instead, but this should stem the bleeding
meanwhile ...
IRIX systems using the native compilers. A summary is:
- Various files use "//" as a comment delimiter in C files.
- Problems caused by assuming "char" is signed.
cash.in: building -signed, the rules regression test fails as described
in FAQ_QNX4. If CHAR_MAX is "255U" then ((signed char)CHAR_MAX) is -1
(demonstrated in the sketch after this list).
postmaster.c: random number regression test failed without this change.
- Some generic build issues and warning message cleanup.
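The char-signedness point is easy to demonstrate with a standalone
snippet (illustration only, not PostgreSQL code):

    #include <limits.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* Where plain char is unsigned, CHAR_MAX is 255; casting that
         * back to signed char wraps to -1 on the usual two's-complement
         * machines, so code assuming char is signed silently misbehaves.
         */
        printf("CHAR_MAX = %d\n", CHAR_MAX);
        printf("(signed char) CHAR_MAX = %d\n",
               (int) (signed char) CHAR_MAX);
        return 0;
    }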
David Kaelbling
(LIKE and regexp matches). These are not yet referenced in pg_operator,
so by default the system will continue to use eqsel/neqsel.
Also, tweak convert_to_scalar() logic so that common prefixes of strings
are stripped off, allowing better accuracy when all strings in a table
share a common prefix.
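A rough sketch of the prefix-stripping idea (hypothetical code, not the
actual selfuncs.c implementation): drop the shared prefix, then map the
remaining bytes onto a fraction so histogram bounds and the comparison
value can be interpolated numerically.

    /* Hypothetical sketch: length of the prefix shared by two strings. */
    static int
    common_prefix_len(const char *a, const char *b)
    {
        int     i = 0;

        while (a[i] != '\0' && a[i] == b[i])
            i++;
        return i;
    }

    /* Hypothetical sketch: map the part of "s" after "prefixlen" onto
     * [0,1) by treating its bytes as base-256 digits.  Stripping the
     * common prefix first keeps the interesting variation in the
     * high-order digits, which is what improves the estimate.
     */
    static double
    string_to_scalar(const char *s, int prefixlen)
    {
        double  result = 0.0;
        double  denom = 256.0;
        int     i;

        for (i = prefixlen; s[i] != '\0' && i < prefixlen + 8; i++)
        {
            result += (unsigned char) s[i] / denom;
            denom *= 256.0;
        }
        return result;
    }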
Add a random number generator and seed setter (random(), SET SEED)
Fix up the interval*float8 math to carry partial months
into the time field (sketched below).
Add float8*interval so we have symmetry in the available math.
Fix the parser and define.c to accept SQL92 types as field arguments.
Fix the parser to accept SQL92 types for CREATE TYPE, etc. This is
necessary to allow...
Bit/varbit support in contrib/bit cleaned up to compile and load
cleanly. Still needs some work before final release.
Implement the "SOME" keyword as a synonym for "ANY" per SQL92.
Implement ascii(text), ichar(int4), repeat(text,int4) to help
support the ODBC driver.
Enable the TRUNCATE() function mapping in the ODBC driver.
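As a rough sketch of the partial-month carry mentioned above (hypothetical
types and names, not the actual timespan code), the fractional part of the
scaled month count is converted to seconds, assuming 30-day months, and
added to the time field:

    /* Hypothetical simplified interval: whole months plus seconds. */
    typedef struct
    {
        int     month;
        double  time;           /* seconds */
    } SimpleInterval;

    /* Sketch of interval * float8 that carries fractional months into
     * the time field instead of discarding them (30-day months assumed).
     */
    static SimpleInterval
    interval_scale(SimpleInterval span, double factor)
    {
        SimpleInterval result;
        double  months = span.month * factor;

        result.month = (int) months;
        result.time = span.time * factor
            + (months - result.month) * 30.0 * 86400.0;
        return result;
    }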
to next integer. Previously, if selectivity was small, we could compute
very tiny scan cost on the basis of estimating that only 0.001 tuple
would be fetched, which is silly. This naturally led to some rather
silly plans...
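A sketch of the idea (hypothetical names, not the actual cost-estimator
code): clamp the estimated tuple count to a whole number of at least one
before it feeds into the cost formula.

    #include <math.h>

    /* Sketch: a fractional selectivity must never yield a fractional or
     * near-zero tuple estimate; fetching anything costs at least as much
     * as fetching one tuple.
     */
    static double
    estimate_tuples_fetched(double selectivity, double rel_tuples)
    {
        double  tuples = ceil(selectivity * rel_tuples);

        return (tuples < 1.0) ? 1.0 : tuples;
    }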
Clean up grotty coding in them, too. AFAICS from the CVS logs, these
have been broken since Postgres95, so I'm not going to insist on an
initdb to fix them now...
running gcc and HP's cc with warnings cranked way up. Signed vs unsigned
comparisons, routines declared static and then defined not-static,
that kind of thing. Tedious, but perhaps useful...
small changes in the formatting.c code (better memory usage, etc.) and a
better to_char() cache (faster when one query makes multiple to_char()
calls). (This is probably the end of to_char() development in the 7.0
cycle.)
Karel
Implement TIME WITH TIME ZONE type (timetz internal type).
Remap length() for character strings to CHAR_LENGTH() for SQL92
and to remove the ambiguity with geometric length() functions.
Keep length() for character strings for backward compatibility.
Shrink stored views by removing internal column name list from visible rte.
Implement min(), max() for time and timetz data types.
Implement conversion of TIME to INTERVAL.
Implement abs(), mod(), fac() for the int8 data type.
Rename some math functions to generic names:
round(), sqrt(), cbrt(), pow(), etc.
Rename NUMERIC power() function to pow().
Fix int2 factorial to calculate result in int4.
Enhance the Oracle compatibility function translate() to work with string
arguments (from Edwin Ramirez).
Modify pg_proc system table to remove OID holes.
(ie, allow rounding to occur at a digit position left of the decimal
point). Apparently this is how Oracle handles it, and there are
precedents in other programming languages as well.
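For example, rounding at two digits left of the decimal point turns
1234.5678 into 1200. A minimal sketch of the behaviour (hypothetical
helper, floating point used only for illustration):

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical illustration: a negative scale rounds at a digit
     * position left of the decimal point by scaling down, rounding,
     * and scaling back up.
     */
    static double
    round_at(double value, int scale)
    {
        double  factor = pow(10.0, scale);

        return rint(value * factor) / factor;
    }

    int
    main(void)
    {
        printf("%g\n", round_at(1234.5678, 2));     /* 1234.57 */
        printf("%g\n", round_at(1234.5678, -2));    /* 1200 */
        return 0;
    }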
Since we detect oversize tuples elsewhere, I see no reason not to allow
string constants that are 'too long' --- after all, they might never get
stored in a tuple at all.
The to_char() source code is large, so here are regression tests for the
numeric/timestamp/int8 parts. This is probably sufficient testing for the
formatting code in the formatting.c module. The other types
(float4/float8/int4) share this formatting code, so bugs specific to
those types are unlikely.
The patch also fixes timestamp_to_char() for infinity/invalid timestamps.
Karel
when you have networks with the same prefix, but different netmasks.
This is due to the fact that occasionally there is random (uninitialized?)
data in the extra bits past the point where the netmask cares about them.
ie (real data from a real live database):
10.0/10 == 00001010.00100000.00100000.00011000
10.0/11 == 00001010.00000000.00000000.00000000
^ Bad data, normally never seen
The v4bitncmp() function was only taking one bit length argument so
it would determine that the networks were different, even though
they really aren't (and the netmask test wouldn't be used). This
ONLY happens if the tuple with the longer bit length is used as the
ip_bits() for the v4bitncmp call AND there happens to be junk data
in place in the shorter tuple. Odd and random, but I saw it happen
a couple times so...
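A sketch of the intent of the fix (hypothetical code, not the actual
inet functions): compare only up to the shorter of the two prefix
lengths, so junk bits beyond either netmask can never make equal
networks look different.

    #include <stdint.h>

    /* Hypothetical IPv4 sketch: compare two addresses on their first
     * min(bits_a, bits_b) bits only, masking away anything past the
     * prefix so uninitialized trailing bits cannot affect the result.
     */
    static int
    v4_prefix_cmp(uint32_t a, uint32_t b, int bits_a, int bits_b)
    {
        int         bits = (bits_a < bits_b) ? bits_a : bits_b;
        uint32_t    mask = (bits == 0) ? 0 : ~(uint32_t) 0 << (32 - bits);

        a &= mask;
        b &= mask;
        return (a < b) ? -1 : (a > b) ? 1 : 0;
    }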
Ryan Mooney