From 32d369378e4292a5d859f6edea95f6a396013dfc Mon Sep 17 00:00:00 2001 From: unknown Date: Thu, 24 May 2001 20:06:19 +0300 Subject: [PATCH 01/20] manual.texi Add instructions for innodb_unix_file_flush_method and MyISAM->InnoDB conversion Docs/manual.texi: Add instructions for innodb_unix_file_flush_method and MyISAM->InnoDB conversion --- Docs/manual.texi | 155 ++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 154 insertions(+), 1 deletion(-) diff --git a/Docs/manual.texi b/Docs/manual.texi index 7702d2dad10..ed45a1b6317 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -24835,6 +24835,17 @@ in its own lock table and rolls back the transaction. If you use than InnoDB in the same transaction, then a deadlock may arise which InnoDB cannot notice. In cases like this the timeout is useful to resolve the situation. +@item @code{innodb_unix_file_flush_method} @tab +(Available from 3.23.39 up.) +The default value for this is @code{fdatasync}. +Another option is @code{O_DSYNC}. +Options @code{littlesync} and @code{nosync} have the +risk that in an operating system crash or a power outage you may easily +end up with a half-written database page, and you have to do a recovery +from a backup. See the section "InnoDB performance tuning", item 6, below +for tips on how to set this parameter. If you are happy with your database +performance it is wisest not to specify this parameter at all, in which +case it will get the default value. @end multitable @node InnoDB init, Using InnoDB tables, InnoDB start, InnoDB @@ -24955,6 +24966,46 @@ InnoDB has its own internal data dictionary, and you will get problems if the @strong{MySQL} @file{.frm} files are out of 'sync' with the InnoDB internal data dictionary. +@subsubsection Converting MyISAM tables to InnoDB + +InnoDB does not have a special optimization for separate index creation. +Therefore it does not pay to export and import the table and create indexes +afterwards. 
+The fastest way to alter a table to InnoDB is to do the inserts +directly to an InnoDB table, that is, use @code{ALTER TABLE ... TYPE=INNODB}, +or create an empty InnoDB table with identical definitions and insert +the rows with @code{INSERT INTO ... SELECT * FROM ...}. + +To get better control over the insertion process, it may be good to insert +big tables in pieces: + +@example +INSERT INTO newtable SELECT * FROM oldtable WHERE yourkey > something + AND yourkey <= somethingelse; +@end example + +After all data has been inserted you can rename the tables. + +During the conversion of big tables you should set the InnoDB +buffer pool size big +to reduce disk i/o. Not bigger than 80 % of the physical memory, though. +You should set InnoDB log files big, and also the log buffer large. + +Make sure you do not run out of tablespace: InnoDB tables take a lot +more space than MyISAM tables. If an @code{ALTER TABLE} runs out +of space, it will start a rollback, and that can take hours if it is +disk-bound. +In inserts InnoDB uses the insert buffer to merge secondary index records +to indexes in batches. That saves a lot of disk i/o. In rollback no such +mechanism is used, and the rollback can take 30 times longer than the +insertion. + +In the case of a runaway rollback, if you do not have valuable data in your +database, +it is better that you kill the database process and delete all InnoDB data +and log files and all InnoDB table @file{.frm} files, and start +your job again, rather than wait for millions of disk i/os to complete. + @node Adding and removing, Backing up, Using InnoDB tables, InnoDB @subsection Adding and removing InnoDB data and log files @@ -25355,6 +25406,103 @@ set by the SQL statement may be preserved. This is because InnoDB stores row locks in a format where it cannot afterwards know which was set by which SQL statement. 
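The range-based copy recommended in the conversion section above can be sketched as a loop. This is an illustrative sketch only: it uses Python's standard sqlite3 module rather than MySQL, and the function, table, and column names are hypothetical. With InnoDB, the commit after each key range is what keeps every individual transaction, and hence any possible rollback, small.

```python
import sqlite3

def copy_in_batches(conn, src, dst, key, batch=1000):
    """Copy rows from src to dst in key ranges, committing per range.

    src, dst and key are trusted identifiers here (illustration only);
    do not interpolate untrusted strings into SQL like this.
    """
    (lo,) = conn.execute(f"SELECT MIN({key}) FROM {src}").fetchone()
    (hi,) = conn.execute(f"SELECT MAX({key}) FROM {src}").fetchone()
    if lo is None:          # source table is empty
        return
    start = lo - 1
    while start < hi:
        end = start + batch
        # Mirrors: INSERT INTO newtable SELECT * FROM oldtable
        #          WHERE yourkey > something AND yourkey <= somethingelse;
        conn.execute(
            f"INSERT INTO {dst} SELECT * FROM {src} "
            f"WHERE {key} > ? AND {key} <= ?", (start, end))
        conn.commit()       # one small transaction per range
        start = end
```

After the loop finishes, the tables can be renamed as described above.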
+@subsection Performance tuning tips + +@strong{1.} +If the Unix @file{top} or the Windows @file{Task Manager} shows that +the CPU usage percentage with your workload is less than 70 %, +your workload is probably +disk-bound. Maybe you are making too many transaction commits, or the +buffer pool is too small. +Making the buffer pool bigger can help, but do not set +it bigger than 80 % of physical memory. + +@strong{2.} +Wrap several modifications into one transaction. InnoDB must +flush the log to disk at each transaction commit, if that transaction +made modifications to the database. Since the rotation speed of a disk +is typically +at most 167 revolutions/second, that constrains the number of commits +to the same 167/second if the disk does not fool the operating system. + +@strong{3.} +If you can afford the loss of some latest committed transactions, you can +set the @file{my.cnf} parameter @code{innodb_flush_log_at_trx_commit} +to zero. InnoDB tries to flush the log anyway once in a second, +though the flush is not guaranteed. + +@strong{4.} +Make your log files big, even as big as the buffer pool. When InnoDB +has written the log files full, it has to write the modified contents +of the buffer pool to disk in a checkpoint. Small log files will cause many +unnecessary disk writes. The drawback in big log files is that recovery +time will be longer. + +@strong{5.} +Also the log buffer should be quite big, say 8 MB. + +@strong{6.} (Relevant from 3.23.39 up.) +In some versions of Linux and other Unixes flushing files to disk with the Unix +@code{fdatasync} and other similar methods is surprisingly slow. +The default method InnoDB uses is the @code{fdatasync} function. +If you are not satisfied with the database write performance, you may +try setting @code{innodb_unix_file_flush_method} in @file{my.cnf} +to @code{O_DSYNC}, though O_DSYNC seems to be slower on most systems. 
+You can also try setting it to @code{littlesync}, which means that +InnoDB does not call the file flush for every write it does to a +file, but only +in log flush at transaction commits and data file flush at a checkpoint. +The drawback in @code{littlesync} is that if the operating system +crashes, you can easily end up with a half-written database page, +and you have to +do a recovery from a backup. With @code{nosync} you have even less safety: +InnoDB will only flush the database files to disk at database shutdown + +@strong{7.} In importing data to InnoDB, make sure that MySQL does not have +@code{autocommit=1} on. Then every insert requires a log flush to disk. +Put before your plain SQL import file line + +@example +set autocommit=0; +@end example + +and after it + +@example +commit; +@end example + +If you use the @file{mysqldump} option @code{--opt}, you will get dump +files which are fast to import also to an InnoDB table, even without wrapping +them to the above @code{set autocommit=0; ... commit;} wrappers. + +@strong{8.} +Beware of big rollbacks of mass inserts: InnoDB uses the insert buffer +to save disk i/o in inserts, but in a corresponding rollback no such +mechanism is used. A disk-bound rollback can take 30 times the time +of the corresponding insert. Killing the database process will not +help because the rollback will start again at the database startup. The +only way to get rid of a runaway rollback is to increase the buffer pool +so that the rollback becomes CPU-bound and runs fast, or delete the whole +InnoDB database. + +@strong{9.} +Beware also of other big disk-bound operations. +Use @code{DROP TABLE} +or @code{TRUNCATE} (from MySQL-4.0 up) to empty a table, not +@code{DELETE FROM yourtable}. 
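The autocommit advice in item 7 above can be illustrated with a small sketch. It uses Python's standard sqlite3 module rather than MySQL (the in-memory database and table name are assumptions made for the demo), but the point carries over: one explicit transaction wrapped around a mass insert costs one log flush at commit instead of one flush per inserted row.

```python
import sqlite3

# isolation_level=None puts the connection in true autocommit mode, so
# the transaction boundaries below are explicit, like the recommended
# 'set autocommit=0; ... commit;' wrapper around an import file.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (v INTEGER)")

conn.execute("BEGIN")                       # stands in for: set autocommit=0;
conn.executemany("INSERT INTO t (v) VALUES (?)",
                 [(i,) for i in range(1000)])
conn.execute("COMMIT")                      # stands in for: commit;

(n,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()
print(n)  # -> 1000
```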
+ +@strong{10.} +Use the multi-line @code{INSERT} to reduce +communication overhead between the client and the server if you need +to insert many rows: + +@example +INSERT INTO yourtable VALUES (1, 2), (5, 5); +@end example + +This tip is of course valid for inserts into any table type, not just InnoDB. + @node Implementation, Table and index, InnoDB transaction model, InnoDB @subsection Implementation of multiversioning @@ -25707,6 +25855,11 @@ they roll back the corresponding SQL statement. @subsection Some restrictions on InnoDB tables @itemize @bullet + +@item @code{SHOW TABLE STATUS} does not give accurate statistics +on InnoDB tables, except for the physical size reserved by the table. +The row count is only a rough estimate used in SQL optimization. + @item If you try to create an unique index on a prefix of a column you will get an error: @@ -25755,7 +25908,7 @@ files your operating system supports. Support for > 4 GB files will be added to InnoDB in a future version. @item The maximum tablespace size is 4 billion database pages. This is also -the maximum size for a table. +the maximum size for a table. The minimum tablespace size is 10 MB. 
@end itemize @node InnoDB contact information, , InnoDB restrictions, InnoDB From b6098186662faaef749577b027d49c928a80fa2e Mon Sep 17 00:00:00 2001 From: unknown Date: Thu, 24 May 2001 22:35:14 +0300 Subject: [PATCH 02/20] log0log.c InnoDB now prints timestamp at startup and shutdown srv0start.c InnoDB now prints timestamp at startup and shutdown ut0ut.h InnoDB now prints timestamp at startup and shutdown ut0ut.c InnoDB now prints timestamp at startup and shutdown innobase/ut/ut0ut.c: InnoDB now prints timestamp at startup and shutdown innobase/include/ut0ut.h: InnoDB now prints timestamp at startup and shutdown innobase/srv/srv0start.c: InnoDB now prints timestamp at startup and shutdown innobase/log/log0log.c: InnoDB now prints timestamp at startup and shutdown --- innobase/include/ut0ut.h | 7 +++++++ innobase/log/log0log.c | 8 +++++--- innobase/srv/srv0start.c | 6 ++++-- innobase/ut/ut0ut.c | 37 +++++++++++++++++++++++++++++++++++++ 4 files changed, 53 insertions(+), 5 deletions(-) diff --git a/innobase/include/ut0ut.h b/innobase/include/ut0ut.h index f2c4781c167..1e93a2b8a36 100644 --- a/innobase/include/ut0ut.h +++ b/innobase/include/ut0ut.h @@ -136,6 +136,13 @@ ut_difftime( /* out: time2 - time1 expressed in seconds */ ib_time_t time2, /* in: time */ ib_time_t time1); /* in: time */ +/************************************************************** +Prints a timestamp to a file. */ + +void +ut_print_timestamp( +/*===============*/ + FILE* file); /* in: file where to print */ /***************************************************************** Runs an idle loop on CPU. The argument gives the desired delay in microseconds on 100 MHz Pentium + Visual C++. 
*/ diff --git a/innobase/log/log0log.c b/innobase/log/log0log.c index 46fcf400d34..31cf595e59e 100644 --- a/innobase/log/log0log.c +++ b/innobase/log/log0log.c @@ -2634,8 +2634,9 @@ logs_empty_and_mark_files_at_shutdown(void) { dulint lsn; ulint arch_log_no; - - fprintf(stderr, "InnoDB: Starting shutdown...\n"); + + ut_print_timestamp(stderr); + fprintf(stderr, " InnoDB: Starting shutdown...\n"); /* Wait until the master thread and all other operations are idle: our algorithm only works if the server is idle at shutdown */ @@ -2725,7 +2726,8 @@ loop: fil_flush_file_spaces(FIL_TABLESPACE); - fprintf(stderr, "InnoDB: Shutdown completed\n"); + ut_print_timestamp(stderr); + fprintf(stderr, " InnoDB: Shutdown completed\n"); } /********************************************************** diff --git a/innobase/srv/srv0start.c b/innobase/srv/srv0start.c index 29ddf2a21c8..b584b663e43 100644 --- a/innobase/srv/srv0start.c +++ b/innobase/srv/srv0start.c @@ -813,7 +813,8 @@ innobase_start_or_create_for_mysql(void) /* Create the thread which watches the timeouts for lock waits */ os_thread_create(&srv_lock_timeout_monitor_thread, NULL, thread_ids + 2 + SRV_MAX_N_IO_THREADS); - fprintf(stderr, "InnoDB: Started\n"); + ut_print_timestamp(stderr); + fprintf(stderr, " InnoDB: Started\n"); srv_was_started = TRUE; srv_is_being_started = FALSE; @@ -835,8 +836,9 @@ innobase_shutdown_for_mysql(void) { if (!srv_was_started) { if (srv_is_being_started) { + ut_print_timestamp(stderr); fprintf(stderr, - "InnoDB: Warning: shutting down not properly started database\n"); + " InnoDB: Warning: shutting down a not properly started database\n"); } return(DB_SUCCESS); } diff --git a/innobase/ut/ut0ut.c b/innobase/ut/ut0ut.c index cfd714fc275..cae0c3819f3 100644 --- a/innobase/ut/ut0ut.c +++ b/innobase/ut/ut0ut.c @@ -49,6 +49,43 @@ ut_difftime( return(difftime(time2, time1)); } +/************************************************************** +Prints a timestamp to a file. 
*/ + +void +ut_print_timestamp( +/*===============*/ + FILE* file) /* in: file where to print */ +{ + struct tm* cal_tm_ptr; + struct tm cal_tm; + time_t tm; + + try_again: + time(&tm); + + cal_tm_ptr = localtime(&tm); + + memcpy(&cal_tm, cal_tm_ptr, sizeof(struct tm)); + + /* In theory localtime may return a wrong result because its return + struct is not protected with a mutex */ + + if (difftime(tm, mktime(&cal_tm)) > 0.5 + || difftime(tm, mktime(&cal_tm)) < -0.5) { + + goto try_again; + } + + fprintf(file,"%02d%02d%02d %2d:%02d:%02d", + cal_tm.tm_year % 100, + cal_tm.tm_mon+1, + cal_tm.tm_mday, + cal_tm.tm_hour, + cal_tm.tm_min, + cal_tm.tm_sec); +} + /***************************************************************** Runs an idle loop on CPU. The argument gives the desired delay in microseconds on 100 MHz Pentium + Visual C++. */ From cf5233e47c8219f8e2f17b85f02110efc396fd90 Mon Sep 17 00:00:00 2001 From: unknown Date: Thu, 24 May 2001 22:59:32 +0300 Subject: [PATCH 03/20] ut0ut.c Rewrote ut_print_timestamp with localtime_r and in Win GetLocalTime innobase/ut/ut0ut.c: Rewrote ut_print_timestamp with localtime_r and in Win GetLocalTime --- innobase/ut/ut0ut.c | 30 ++++++++++++++++-------------- 1 file changed, 16 insertions(+), 14 deletions(-) diff --git a/innobase/ut/ut0ut.c b/innobase/ut/ut0ut.c index cae0c3819f3..07ee3d2b6fe 100644 --- a/innobase/ut/ut0ut.c +++ b/innobase/ut/ut0ut.c @@ -57,25 +57,26 @@ ut_print_timestamp( /*===============*/ FILE* file) /* in: file where to print */ { - struct tm* cal_tm_ptr; +#ifdef __WIN__ + SYSTEMTIME cal_tm; + + GetLocalTime(&cal_tm); + + fprintf(file,"%02d%02d%02d %2d:%02d:%02d", + (int)cal_tm.wYear % 100, + (int)cal_tm.wMonth, + (int)cal_tm.wDay, + (int)cal_tm.wHour, + (int)cal_tm.wMinute, + (int)cal_tm.wSecond); +#else + struct tm cal_tm; time_t tm; - try_again: time(&tm); - cal_tm_ptr = localtime(&tm); - - memcpy(&cal_tm, cal_tm_ptr, sizeof(struct tm)); - - /* In theory localtime may return a wrong result because its 
return - struct is not protected with a mutex */ - - if (difftime(tm, mktime(&cal_tm)) > 0.5 - || difftime(tm, mktime(&cal_tm)) < -0.5) { - - goto try_again; - } + localtime_r(&tm, &cal_tm); fprintf(file,"%02d%02d%02d %2d:%02d:%02d", cal_tm.tm_year % 100, @@ -84,6 +85,7 @@ ut_print_timestamp( cal_tm.tm_hour, cal_tm.tm_min, cal_tm.tm_sec); +#endif } /***************************************************************** From f2c0436616634fc55613114a65e38eec1bfcc010 Mon Sep 17 00:00:00 2001 From: unknown Date: Thu, 24 May 2001 18:47:52 -0500 Subject: [PATCH 04/20] manual.texi Updated Contrib section, mirrors. Docs/manual.texi: Updated Contrib section, mirrors. BitKeeper/etc/logging_ok: Logging to logging@openlogging.org accepted --- BitKeeper/etc/logging_ok | 6 +----- Docs/manual.texi | 25 +++++++++++++++++-------- 2 files changed, 18 insertions(+), 13 deletions(-) diff --git a/BitKeeper/etc/logging_ok b/BitKeeper/etc/logging_ok index 5fb0381538f..8bf83e5a369 100644 --- a/BitKeeper/etc/logging_ok +++ b/BitKeeper/etc/logging_ok @@ -1,5 +1 @@ -heikki@donna.mysql.fi -jani@janikt.pp.saunalahti.fi -monty@donna.mysql.fi -monty@tik.mysql.fi -paul@central.snake.net +mwagner@evoq.mwagner.org diff --git a/Docs/manual.texi b/Docs/manual.texi index ed45a1b6317..b5fe52845c9 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -4514,6 +4514,12 @@ Please report bad or out-of-date mirrors to @email{webmaster@@mysql.com}. @uref{http://ftp.esat.net/mirrors/download.sourceforge.net/pub/mirrors/mysql/, WWW} @uref{ftp://ftp.esat.net/mirrors/download.sourceforge.net/pub/mirrors/mysql/, FTP} +@c @item +@c Added 20010524 +@c EMAIL: arvids@parks.lv (Arvids) +@c @image{Flags/latvia} Latvia [linux.lv] @ +@c @uref{ftp://ftp.linux.lv/pub/software/mysql/, FTP} + @item @c Added 20001125 @c EMAIL: mleicher@silverpoint.nl (Marcel Leicher) @@ -4794,10 +4800,10 @@ Please report bad or out-of-date mirrors to @email{webmaster@@mysql.com}. 
@uref{http://mysql.linuxwired.net/, WWW} @uref{ftp://ftp.linuxwired.net/pub/mirrors/mysql/, FTP} -@item +@c @item @c EMAIL: dan@surfsouth.com (Dan Muntz) -@image{Flags/usa} USA [Venoma.Org/Valdosta, GA] @ -@uref{http://mysql.venoma.org/, WWW} +@c @image{Flags/usa} USA [Venoma.Org/Valdosta, GA] @ +@c @uref{http://mysql.venoma.org/, WWW} @item @c EMAIL: hkind@adgrafix.com (Hans Kind) @@ -4997,8 +5003,8 @@ Please report bad or out-of-date mirrors to @email{webmaster@@mysql.com}. @c Added 980610 @c EMAIL: jason@dstc.edu.au (Jason Andrade) @image{Flags/australia} Australia [AARNet/Queensland] @ -@uref{http://mirror.aarnet.edu.au/mysql, WWW} -@uref{ftp://mirror.aarnet.edu.au/pub/mysql, FTP} +@uref{http://mysql.mirror.aarnet.edu.au/, WWW} +@uref{ftp://mysql.mirror.aarnet.edu.au/, FTP} @c @item @c Added 980805. Removed 000102 'no such directory' @@ -43007,6 +43013,9 @@ tickets for this event is implemented using @strong{MySQL} and tcl/tk. More than service with millions of users.} @item @uref{http://f1.tauzero.se, Forza Motorsport} + +@item @uref{http://www.dreamhost.com/, DreamHost Web Hosting} + @end itemize @cindex services @@ -43344,16 +43353,16 @@ interface, you should fetch the @code{Data-Dumper}, @code{DBI}, and Perl @code{Data-Dumper} module. Useful with @code{DBI}/@code{DBD} support for older Perl installations. -@item @uref{http://www.mysql.com/Downloads/Contrib/DBI-1.14.tar.gz, DBI-1.14.tar.gz} +@item @uref{http://www.mysql.com/Downloads/Contrib/DBI-1.15.tar.gz, DBI-1.15.tar.gz} Perl @code{DBI} module. -@item @uref{http://www.mysql.com/Downloads/Contrib/KAMXbase1.0.tar.gz,KAMXbase1.0.tar.gz} +@item @uref{http://www.mysql.com/Downloads/Contrib/KAMXbase1.2.tar.gz,KAMXbase1.2.tar.gz} Convert between @file{.dbf} files and @strong{MySQL} tables. Perl module written by Pratap Pereira @email{pereira@@ee.eng.ohio-state.edu}, extended by Kevin A. McGrail @email{kmcgrail@@digital1.peregrinehw.com}. This converter can handle MEMO fields. 
-@item @uref{http://www.mysql.com/Downloads/Contrib/Msql-Mysql-modules-1.2215.tar.gz, Msql-Mysql-modules-1.2215.tar.gz} +@item @uref{http://www.mysql.com/Downloads/Contrib/Msql-Mysql-modules-1.2216.tar.gz, Msql-Mysql-modules-1.2216.tar.gz} Perl @code{DBD} module to access mSQL and @strong{MySQL} databases. @item @uref{http://www.mysql.com/Downloads/Contrib/Data-ShowTable-3.3.tar.gz, Data-ShowTable-3.3.tar.gz} From 714640bfb9f725f05567c58fac34612e3e99df33 Mon Sep 17 00:00:00 2001 From: unknown Date: Fri, 25 May 2001 16:26:52 -0600 Subject: [PATCH 05/20] BUILD/SETUP.sh@1.9 removed -ffixed-ebp from reckless flags BUILD/compile-pentium@1.16 use fast, not reckless flags for binary distribuition sql/share/english/errmsg.txt@1.24 Point the user to the manual when he gets aborted connection message BUILD/SETUP.sh: removed -ffixed-ebp from reckless flags BUILD/compile-pentium: use fast, not reckless flags for binary distribuition sql/share/english/errmsg.txt: Point the user to the manual when he gets aborted connection message BitKeeper/etc/logging_ok: Logging to logging@openlogging.org accepted --- BUILD/SETUP.sh | 5 ++++- BUILD/compile-pentium | 2 +- BitKeeper/etc/logging_ok | 1 + sql/share/english/errmsg.txt | 4 ++-- 4 files changed, 8 insertions(+), 4 deletions(-) diff --git a/BUILD/SETUP.sh b/BUILD/SETUP.sh index 1f45c5c18cb..778625e9e75 100644 --- a/BUILD/SETUP.sh +++ b/BUILD/SETUP.sh @@ -43,8 +43,11 @@ alpha_cflags="-mcpu=ev6 -Wa,-mev6" # Not used yet pentium_cflags="-mpentiumpro" sparc_cflags="" +# be as fast as we can be without losing our ability to backtrace fast_cflags="-O3 -fno-omit-frame-pointer" -reckless_cflags="-O3 -fomit-frame-pointer -ffixed-ebp" +# this is one is for someone who thinks 1% speedup is worth not being +# able to backtrace +reckless_cflags="-O3 -fomit-frame-pointer " debug_cflags="-DEXTRA_DEBUG -DFORCE_INIT_OF_VARS -DSAFEMALLOC -DSAFE_MUTEX -O2" base_cxxflags="-felide-constructors -fno-exceptions -fno-rtti" diff --git a/BUILD/compile-pentium 
b/BUILD/compile-pentium index 9607ca03e7e..11559be93de 100755 --- a/BUILD/compile-pentium +++ b/BUILD/compile-pentium @@ -3,7 +3,7 @@ path=`dirname $0` . "$path/SETUP.sh" -extra_flags="$pentium_cflags $reckless_cflags" +extra_flags="$pentium_cflags $fast_cflags" extra_configs="$pentium_configs" strip=yes diff --git a/BitKeeper/etc/logging_ok b/BitKeeper/etc/logging_ok index 8bf83e5a369..2bd9389d2d3 100644 --- a/BitKeeper/etc/logging_ok +++ b/BitKeeper/etc/logging_ok @@ -1 +1,2 @@ mwagner@evoq.mwagner.org +sasha@mysql.sashanet.com diff --git a/sql/share/english/errmsg.txt b/sql/share/english/errmsg.txt index ff29fffe958..4deffcd16be 100644 --- a/sql/share/english/errmsg.txt +++ b/sql/share/english/errmsg.txt @@ -153,7 +153,7 @@ "You have an error in your SQL syntax", "Delayed insert thread couldn't get requested lock for table %-.64s", "Too many delayed threads in use", -"Aborted connection %ld to db: '%-.64s' user: '%-.32s' (%-.64s)", +"Aborted connection %ld to db: '%-.64s' user: '%-.32s' (%-.64s) - see http://www.mysql.com/doc/C/o/Communication_errors.html", "Got a packet bigger than 'max_allowed_packet'", "Got a read error from the connection pipe", "Got an error from fcntl()", @@ -185,7 +185,7 @@ "Got error %d during ROLLBACK", "Got error %d during FLUSH_LOGS", "Got error %d during CHECKPOINT", -"Aborted connection %ld to db: '%-.64s' user: '%-.32s' host: `%-.64s' (%-.64s)", +"Aborted connection %ld to db: '%-.64s' user: '%-.32s' host: `%-.64s' (%-.64s) see http://www.mysql.com/doc/C/o/Communication_errors.html", "The handler for the table does not support binary table dump", "Binlog closed, cannot RESET MASTER", "Failed rebuilding the index of dumped table '%-.64s'", From 860d2b392a542f42ad60e12382fced1068c04c20 Mon Sep 17 00:00:00 2001 From: unknown Date: Sat, 26 May 2001 21:58:59 +0300 Subject: [PATCH 06/20] sync0sync.ic InnoDB now uses XCHG to implement mutex if GCC and x86 configure.in InnoDB now uses XCHG to implement mutex if GCC and x86 
innobase/configure.in: InnoDB now uses XCHG to implement mutex if GCC and x86 innobase/include/sync0sync.ic: InnoDB now uses XCHG to implement mutex if GCC and x86 BitKeeper/etc/logging_ok: Logging to logging@openlogging.org accepted --- BitKeeper/etc/logging_ok | 1 + innobase/configure.in | 5 +++++ innobase/include/sync0sync.ic | 14 ++++++++++++++ 3 files changed, 20 insertions(+) diff --git a/BitKeeper/etc/logging_ok b/BitKeeper/etc/logging_ok index 2bd9389d2d3..f772adc5410 100644 --- a/BitKeeper/etc/logging_ok +++ b/BitKeeper/etc/logging_ok @@ -1,2 +1,3 @@ mwagner@evoq.mwagner.org sasha@mysql.sashanet.com +heikki@donna.mysql.fi diff --git a/innobase/configure.in b/innobase/configure.in index 83d302c6dc4..1133ab86221 100644 --- a/innobase/configure.in +++ b/innobase/configure.in @@ -95,6 +95,11 @@ case "$target_os" in CFLAGS="$CFLAGS -DUNIV_MUST_NOT_INLINE";; esac +case "$target" in + i[[4567]]86-*-*) + CFLAGS="$CFLAGS -DUNIV_INTEL_X86";; +esac + AC_OUTPUT(Makefile os/Makefile ut/Makefile btr/Makefile buf/Makefile com/Makefile data/Makefile dict/Makefile dyn/Makefile diff --git a/innobase/include/sync0sync.ic b/innobase/include/sync0sync.ic index a937ac5d579..e23e2b68e14 100644 --- a/innobase/include/sync0sync.ic +++ b/innobase/include/sync0sync.ic @@ -86,6 +86,20 @@ mutex_test_and_set( /* mutex_fence(); */ return(res); +#elif defined(__GNUC__) && defined(UNIV_INTEL_X86) + ulint* lw; + ulint res; + + lw = &(mutex->lock_word); + + /* In assembly we use the so-called AT & T syntax where + the order of operands is inverted compared to the ordinary Intel + syntax. The 'l' after the mnemonics denotes a 32-bit operation. */ + + asm volatile("movl $1, %%eax; xchgl (%%ecx), %%eax" : + "=eax" (res): + "ecx" (lw)); + return(res); #else ibool ret; From 7d621c1714b2549da804f03a0cb4db562459bc86 Mon Sep 17 00:00:00 2001 From: unknown Date: Sun, 27 May 2001 01:10:46 -0500 Subject: [PATCH 07/20] post-commit Updated commit address. 
latvia.gif BitKeeper file /opt/local/x1/work/bk/mysql/Docs/Flags/latvia.gif latvia.txt BitKeeper file /opt/local/x1/work/bk/mysql/Docs/Flags/latvia.txt BitKeeper/triggers/post-commit: Updated commit address. --- BitKeeper/triggers/post-commit | 2 +- Docs/Flags/latvia.gif | Bin 0 -> 117 bytes Docs/Flags/latvia.txt | 0 3 files changed, 1 insertion(+), 1 deletion(-) create mode 100644 Docs/Flags/latvia.gif create mode 100644 Docs/Flags/latvia.txt diff --git a/BitKeeper/triggers/post-commit b/BitKeeper/triggers/post-commit index fe561c79fae..b84dc543e0a 100755 --- a/BitKeeper/triggers/post-commit +++ b/BitKeeper/triggers/post-commit @@ -1,7 +1,7 @@ #!/bin/sh #shift -TO=dev@mysql.com +TO=dev-public@mysql.com FROM=$USER@mysql.com LIMIT=10000 diff --git a/Docs/Flags/latvia.gif b/Docs/Flags/latvia.gif new file mode 100644 index 0000000000000000000000000000000000000000..8a898328ebe8753c22dc04547563e74ba6478aaa GIT binary patch literal 117 zcmZ?wbhEHbRA3NeSoELa|NsBV$;nd$BzhP)_7)fK&CX^3104_vQp3RPl(Fm1Ki?Cc ztM>*yJiGkeg0HPh45mHkEPB9pWg ROOCD4d&70;t(6diH2^6qFLVF^ literal 0 HcmV?d00001 diff --git a/Docs/Flags/latvia.txt b/Docs/Flags/latvia.txt new file mode 100644 index 00000000000..e69de29bb2d From 8f2056875c28064a4d8517b16f45813ce67b8ce5 Mon Sep 17 00:00:00 2001 From: unknown Date: Sun, 27 May 2001 04:13:02 -0500 Subject: [PATCH 08/20] manual.texi Added mirror. Fixed erroneous @email{} usage. Docs/manual.texi: Added mirror. Fixed erroneous @email{} usage. --- Docs/manual.texi | 66 ++++++++++++++++++++---------------------------- 1 file changed, 28 insertions(+), 38 deletions(-) diff --git a/Docs/manual.texi b/Docs/manual.texi index b5fe52845c9..bd3de64b73b 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -3058,21 +3058,21 @@ from the local @strong{MySQL} list. 
The following @strong{MySQL} mailing lists exist: @table @code -@item @email{announce-subscribe@@lists.mysql.com, announce} +@item @email{announce-subscribe@@lists.mysql.com} announce This is for announcement of new versions of @strong{MySQL} and related programs. This is a low volume list all @strong{MySQL} users should subscribe to. -@item @email{mysql-subscribe@@lists.mysql.com, mysql} +@item @email{mysql-subscribe@@lists.mysql.com} mysql The main list for general @strong{MySQL} discussion. Please note that some topics are better discussed on the more-specialized lists. If you post to the wrong list, you may not get an answer! -@item @email{mysql-digest-subscribe@@lists.mysql.com, mysql-digest} +@item @email{mysql-digest-subscribe@@lists.mysql.com} mysql-digest The @code{mysql} list in digest form. That means you get all individual messages, sent as one large mail message once a day. -@item @email{bugs-subscribe@@lists.mysql.com, bugs} +@item @email{bugs-subscribe@@lists.mysql.com} bugs On this list you should only post a full, repeatable bug report using the @code{mysqlbug} script (if you are running on Windows, you should include a description of the operating system and the @strong{MySQL} version). @@ -3083,55 +3083,45 @@ bugs posted on this list will be corrected or documented in the next @strong{MySQL} release! If there are only small code changes involved, we will also post a patch that fixes the problem. -@item @email{bugs-digest-subscribe@@lists.mysql.com, bugs-digest} +@item @email{bugs-digest-subscribe@@lists.mysql.com} bugs-digest The @code{bugs} list in digest form. -@item @email{developer-subscribe@@lists.mysql.com, developer} -This list has been depreciated in favor of the -@email{internals-subscribe@@lists.mysql.com, internals} list (below). 
- -@item @email{developer-digest-subscribe@@lists.mysql.com, developer-digest} -This list has been deprecated in favor of the -@email{internals-digest-subscribe@@lists.mysql.com, internals-digest} -list (below). - -@item @email{internals-subscribe@@lists.mysql.com, internals} +@item @email{internals-subscribe@@lists.mysql.com} internals A list for people who work on the @strong{MySQL} code. On this list one can also discuss @strong{MySQL} development and post patches. -@item @email{internals-digest-subscribe@@lists.mysql.com, internals-digest} -A digest version of the @email{internals-subscribe@@lists.mysql.com, internals} -list. +@item @email{internals-digest-subscribe@@lists.mysql.com} internals-digest +A digest version of the @code{internals} list. -@item @email{java-subscribe@@lists.mysql.com, java} +@item @email{java-subscribe@@lists.mysql.com} java Discussion about @strong{MySQL} and Java. Mostly about the JDBC drivers. -@item @email{java-digest-subscribe@@lists.mysql.com, java-digest} +@item @email{java-digest-subscribe@@lists.mysql.com} java-digest A digest version of the @code{java} list. -@item @email{win32-subscribe@@lists.mysql.com, win32} +@item @email{win32-subscribe@@lists.mysql.com} win32 All things concerning @strong{MySQL} on Microsoft operating systems such as Win95, Win98, NT, and Win2000. -@item @email{win32-digest-subscribe@@lists.mysql.com, win32-digest} +@item @email{win32-digest-subscribe@@lists.mysql.com} win32-digest A digest version of the @code{win32} list. -@item @email{myodbc-subscribe@@lists.mysql.com, myodbc} +@item @email{myodbc-subscribe@@lists.mysql.com} myodbc All things about connecting to @strong{MySQL} with ODBC. -@item @email{myodbc-digest-subscribe@@lists.mysql.com, myodbc-digest} +@item @email{myodbc-digest-subscribe@@lists.mysql.com} myodbc-digest A digest version of the @code{myodbc} list. 
-@item @email{plusplus-subscribe@@lists.mysql.com, plusplus} +@item @email{plusplus-subscribe@@lists.mysql.com} plusplus All things concerning programming with the C++ API to @strong{MySQL}. -@item @email{plusplus-digest-subscribe@@lists.mysql.com, plusplus-digest} +@item @email{plusplus-digest-subscribe@@lists.mysql.com} plusplus-digest A digest version of the @code{plusplus} list. -@item @email{msql-mysql-modules-subscribe@@lists.mysql.com, msql-mysql-modules} -A list about the Perl support in @strong{MySQL}. +@item @email{msql-mysql-modules-subscribe@@lists.mysql.com} +A list about the Perl support in @strong{MySQL}. msql-mysql-modules -@item @email{msql-mysql-modules-digest-subscribe@@lists.mysql.com, msql-mysql-modules-digest} +@item @email{msql-mysql-modules-digest-subscribe@@lists.mysql.com} msql-mysql-modules-digest A digest version of the @code{msql-mysql-modules} list. @end table @@ -3147,16 +3137,16 @@ English. Note that these are not operated by @strong{MySQL AB}, so we can't guarantee the quality on these. @table @code -@item @email{mysql-france-subscribe@@yahoogroups.com, A French mailing list} -@item @email{list@@tinc.net, A Korean mailing list} +@item @email{mysql-france-subscribe@@yahoogroups.com} A French mailing list +@item @email{list@@tinc.net} A Korean mailing list Email @code{subscribe mysql your@@email.address} to this list. -@item @email{mysql-de-request@@lists.4t2.com, A German mailing list} +@item @email{mysql-de-request@@lists.4t2.com} A German mailing list Email @code{subscribe mysql-de your@@email.address} to this list. You can find information about this mailing list at @uref{http://www.4t2.com/mysql}. -@item @email{mysql-br-request@@listas.linkway.com.br, A Portugese mailing list} +@item @email{mysql-br-request@@listas.linkway.com.br} A Portugese mailing list Email @code{subscribe mysql-br your@@email.address} to this list. 
-@item @email{mysql-alta@@elistas.net, A Spanish mailing list} +@item @email{mysql-alta@@elistas.net} A Spanish mailing list Email @code{subscribe mysql your@@email.address} to this list. @end table @@ -4514,11 +4504,11 @@ Please report bad or out-of-date mirrors to @email{webmaster@@mysql.com}. @uref{http://ftp.esat.net/mirrors/download.sourceforge.net/pub/mirrors/mysql/, WWW} @uref{ftp://ftp.esat.net/mirrors/download.sourceforge.net/pub/mirrors/mysql/, FTP} -@c @item +@item @c Added 20010524 @c EMAIL: arvids@parks.lv (Arvids) -@c @image{Flags/latvia} Latvia [linux.lv] @ -@c @uref{ftp://ftp.linux.lv/pub/software/mysql/, FTP} +@image{Flags/latvia} Latvia [linux.lv] @ +@uref{ftp://ftp.linux.lv/pub/software/mysql/, FTP} @item @c Added 20001125 @@ -50040,7 +50030,7 @@ of @code{analyze} is run on all sub tables. @end itemize @node TODO future, TODO sometime, TODO MySQL 4.0, TODO -@appendixsec Things that must done in the real near future +@appendixsec Things that must be done in the real near future @itemize @bullet @item From 6ccdc9dd478ad6fa000e533c9ec881d7b0b0d01d Mon Sep 17 00:00:00 2001 From: unknown Date: Sun, 27 May 2001 04:20:35 -0500 Subject: [PATCH 09/20] manual.texi Missed one. Docs/manual.texi: Missed one. --- Docs/manual.texi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Docs/manual.texi b/Docs/manual.texi index bd3de64b73b..f60d7c51b97 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -3118,7 +3118,7 @@ All things concerning programming with the C++ API to @strong{MySQL}. @item @email{plusplus-digest-subscribe@@lists.mysql.com} plusplus-digest A digest version of the @code{plusplus} list. -@item @email{msql-mysql-modules-subscribe@@lists.mysql.com} +@item @email{msql-mysql-modules-subscribe@@lists.mysql.com} msql-mysql-modules A list about the Perl support in @strong{MySQL}. 
msql-mysql-modules @item @email{msql-mysql-modules-digest-subscribe@@lists.mysql.com} msql-mysql-modules-digest From b10a22390fba2a05a1dcb144beaeae4b157c1f7a Mon Sep 17 00:00:00 2001 From: unknown Date: Sun, 27 May 2001 08:16:35 -0300 Subject: [PATCH 10/20] Small fix for Symlink on Win32 BitKeeper/etc/logging_ok: Logging to logging@openlogging.org accepted --- BitKeeper/etc/logging_ok | 1 + 1 file changed, 1 insertion(+) diff --git a/BitKeeper/etc/logging_ok b/BitKeeper/etc/logging_ok index f772adc5410..3037efd4af7 100644 --- a/BitKeeper/etc/logging_ok +++ b/BitKeeper/etc/logging_ok @@ -1,3 +1,4 @@ mwagner@evoq.mwagner.org sasha@mysql.sashanet.com heikki@donna.mysql.fi +miguel@linux.local From bd130011e1fb1ef78c78e166559f8ee6fcb9e5da Mon Sep 17 00:00:00 2001 From: unknown Date: Mon, 28 May 2001 02:45:19 +0300 Subject: [PATCH 11/20] Fixed portability bug in my_config.sh Added print of --use-symbolic-links in mysqld Docs/manual.texi: Added new links configure.in: Fixes for gcc scripts/mysql_config.sh: Fixed portability bug. sql/mysqld.cc: Added print of --use-symbolic-links --- Docs/manual.texi | 10 ++++++++++ configure.in | 6 ++++-- scripts/mysql_config.sh | 2 +- sql/mysqld.cc | 6 +++++- 4 files changed, 20 insertions(+), 4 deletions(-) diff --git a/Docs/manual.texi b/Docs/manual.texi index ed45a1b6317..cb8b73448e3 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -43643,6 +43643,11 @@ Some features: @itemize @bullet @item Manage servers, databases, tables, columns, indexes, and users @item Import wizard to import structure and data from MS Access, MS Excel, Dbase, FoxPro, Paradox, and ODBC Databases. + +@item @uref{http://www.mysql.com/Downloads/Contrib/KMYENG113.zip,KMYENG113.zip} +An administrator GUI for @strong{MySQL}. Works only on windows, no source. +Available in English and Japanese. By Mitunobu Kaneko. 
+Home page: @uref{http://sql.jnts.ne.jp/} @end itemize @item @uref{http://www.mysql.com/Downloads/Contrib/xmysqladmin-1.0.tar.gz, xmysqladmin-1.0.tar.gz} @@ -43950,6 +43955,11 @@ By Steve Shreeve. Perl program to convert Oracle databases to @strong{MySQL}. By Johan Andersson. @item @uref{http://www.mysql.com/Downloads/Contrib/excel2mysql, excel2mysql} Perl program to import Excel spreadsheets into a @strong{MySQL} database. By Stephen Hurd @email{shurd@@sk.sympatico.ca} + +@item @uref{http://www.mysql.com/Downloads/Contrib/T2S_100.ZIP, T2S_100.ZIP}. +Windows program to convert text files to @strong{MySQL} databases. By +Asaf Azulay. + @end itemize @appendixsec Using MySQL with Other Products diff --git a/configure.in b/configure.in index 9998902bcec..4e73bb901fa 100644 --- a/configure.in +++ b/configure.in @@ -285,8 +285,10 @@ export CC CFLAGS LD LDFLAGS if test "$GXX" = "yes" then - # mysqld requires this when compiled with gcc - CXXFLAGS="$CXXFLAGS -fno-implicit-templates" + # mysqld requires -fno-implicit-templates. + # Disable exceptions as they seem to create problems with gcc and threads. + # mysqld doesn't use run-time type checking, so we disable it. + CXXFLAGS="$CXXFLAGS -fno-implicit-templates -fno-exceptions -fno-rtti" fi # Avoid bug in fcntl on some versions of linux diff --git a/scripts/mysql_config.sh b/scripts/mysql_config.sh index 09f81c70a1f..ed344f4b1e3 100644 --- a/scripts/mysql_config.sh +++ b/scripts/mysql_config.sh @@ -45,7 +45,7 @@ EOF exit 1 } -if !
test $# -gt 0; then usage; fi +if test $# -le 0; then usage; fi while test $# -gt 0; do case $1 in diff --git a/sql/mysqld.cc b/sql/mysqld.cc index dccb54ae7ec..b009387f5c0 100644 --- a/sql/mysqld.cc +++ b/sql/mysqld.cc @@ -3039,8 +3039,12 @@ static void usage(void) --console Don't remove the console window\n\ --install Install mysqld as a service (NT)\n\ --remove Remove mysqld from the service list (NT)\n\ - --standalone Dummy option to start as a standalone program (NT)\n\ + --standalone Dummy option to start as a standalone program (NT)\ "); +#ifdef USE_SYMDIR + puts("--use-symbolic-links Enable symbolic link support"); +#endif + puts(""); #endif #ifdef HAVE_BERKELEY_DB puts("\ From 1b20480fd0171ad674f8bf5069f762d4d4a2dea6 Mon Sep 17 00:00:00 2001 From: unknown Date: Mon, 28 May 2001 02:56:22 +0300 Subject: [PATCH 12/20] Corrected error messages to avoid problems with too long error messages. --- sql/share/english/errmsg.txt | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sql/share/english/errmsg.txt b/sql/share/english/errmsg.txt index 4deffcd16be..ff29fffe958 100644 --- a/sql/share/english/errmsg.txt +++ b/sql/share/english/errmsg.txt @@ -153,7 +153,7 @@ "You have an error in your SQL syntax", "Delayed insert thread couldn't get requested lock for table %-.64s", "Too many delayed threads in use", -"Aborted connection %ld to db: '%-.64s' user: '%-.32s' (%-.64s) - see http://www.mysql.com/doc/C/o/Communication_errors.html", +"Aborted connection %ld to db: '%-.64s' user: '%-.32s' (%-.64s)", "Got a packet bigger than 'max_allowed_packet'", "Got a read error from the connection pipe", "Got an error from fcntl()", @@ -185,7 +185,7 @@ "Got error %d during ROLLBACK", "Got error %d during FLUSH_LOGS", "Got error %d during CHECKPOINT", -"Aborted connection %ld to db: '%-.64s' user: '%-.32s' host: `%-.64s' (%-.64s) see http://www.mysql.com/doc/C/o/Communication_errors.html", +"Aborted connection %ld to db: '%-.64s' user: '%-.32s' host: `%-.64s' 
(%-.64s)", "The handler for the table does not support binary table dump", "Binlog closed, cannot RESET MASTER", "Failed rebuilding the index of dumped table '%-.64s'", From 9ad7aedb41feeddc5a872cf330197d6e3efc14ed Mon Sep 17 00:00:00 2001 From: unknown Date: Tue, 29 May 2001 13:46:17 +0300 Subject: [PATCH 13/20] Fixed problems with decimals withing IF() Force add of FN_LIBCHAR to symlinks on windows Docs/manual.texi: Cleanup & Changelog client/mysqladmin.c: Added quoting for 'drop database' client/mysqlcheck.c: Fixed wrong comment syntax libmysql/net.c: Cleanup mysql-test/mysql-test-run.sh: Better error message. mysql-test/r/func_test.result: test for if() mysql-test/t/func_test.test: test for if() mysys/mf_pack.c: Force add of FN_LIBCHAR to symlinks on windows. sql/item_cmpfunc.cc: Fixed problems with decimals withing IF() sql/mysqlbinlog.cc: Better error messages. sql/sql_repl.cc: Better error messages. --- Docs/manual.texi | 33 +++++++++++++-------------------- client/mysqladmin.c | 4 ++-- client/mysqlcheck.c | 2 +- libmysql/net.c | 1 - mysql-test/mysql-test-run.sh | 5 +++-- mysql-test/r/func_test.result | 2 ++ mysql-test/t/func_test.test | 10 ++++++++++ mysys/mf_pack.c | 7 ++++++- sql/item_cmpfunc.cc | 2 +- sql/mysqlbinlog.cc | 14 ++++++-------- sql/sql_repl.cc | 5 ++--- 11 files changed, 46 insertions(+), 39 deletions(-) diff --git a/Docs/manual.texi b/Docs/manual.texi index da569c67dd1..4720ee760d3 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -314,7 +314,6 @@ Windows Notes * Windows and SSH:: Connecting to a remote @strong{MySQL} from Windows with SSH * Windows symbolic links:: Splitting data across different disks under Win32 * Windows compiling:: Compiling MySQL clients on Windows. 
-* Windows and BDB tables.:: Windows and BDB Tables * Windows vs Unix:: @strong{MySQL}-Windows compared to Unix @strong{MySQL} Post-installation Setup and Testing @@ -553,7 +552,7 @@ InnoDB Tables Creating InnoDB table space -* Error creating InnoDB:: +* Error creating InnoDB:: Backing up and recovering an InnoDB database @@ -8874,7 +8873,6 @@ This is also described in the @file{README} file that comes with the * Windows and SSH:: Connecting to a remote @strong{MySQL} from Windows with SSH * Windows symbolic links:: Splitting data across different disks under Win32 * Windows compiling:: Compiling MySQL clients on Windows. -* Windows and BDB tables.:: Windows and BDB Tables * Windows vs Unix:: @strong{MySQL}-Windows compared to Unix @strong{MySQL} @end menu @@ -8923,9 +8921,8 @@ symbolic links, BDB and InnoDB tables. @item @code{mysqld-opt} @tab Optimized binary with no support for transactional tables. @item @code{mysqld-nt} @tab -Optimized for a Pentium Pro processor. Has support for -named pipes. You can run this version on Win98, but in -this case no named pipes are created and you must +Optimized binary for NT with support for named pipes. You can run this +version on Win98, but in this case no named pipes are created and you must have TCP/IP installed. @item @code{mysqld-max} @tab Optimized binary with support for symbolic links, BDB and InnoDB tables. @@ -9226,7 +9223,7 @@ text @code{D:\data\foo}. After that, all tables created in the database @cindex compiling, on Windows @cindex Windows, compiling on -@node Windows compiling, Windows and BDB tables., Windows symbolic links, Windows +@node Windows compiling, Windows vs Unix, Windows symbolic links, Windows @subsection Compiling MySQL Clients on Windows In your source files, you should include @file{windows.h} before you include @@ -9246,19 +9243,9 @@ with the static @file{mysqlclient.lib} library. 
Note that as the mysqlclient libraries are compiled as threaded libraries, you should also compile your code to be multi-threaded! -@cindex BDB tables -@cindex tables, BDB -@node Windows and BDB tables., Windows vs Unix, Windows compiling, Windows -@subsection Windows and BDB Tables - -We will shortly do a full test on the new BDB interface on Windows. -When this is done we will start to release binary distributions (for -Windows and Unix) of @strong{MySQL} that will include support for BDB -tables. - @cindex Windows, versus Unix @cindex operating systems, Windows versus Unix -@node Windows vs Unix, , Windows and BDB tables., Windows +@node Windows vs Unix, , Windows compiling, Windows @subsection MySQL-Windows Compared to Unix MySQL @strong{MySQL}-Windows has by now proven itself to be very stable. This version @@ -24898,7 +24885,7 @@ mysqld: ready for connections @end example @menu -* Error creating InnoDB:: +* Error creating InnoDB:: @end menu @node Error creating InnoDB, , InnoDB init, InnoDB init @@ -42562,7 +42549,7 @@ attachments, you should ftp all the relevant files to: @end itemize @node Reporting mysqltest bugs, , extending mysqltest, MySQL test suite -@subsection Extending the MySQL Test Suite +@subsection Reporting bugs in the MySQL Test Suite If your @strong{MySQL} version doesn't pass the test suite you should do the following: @@ -42593,6 +42580,10 @@ so that we can examine it. Please remember to also include a full description of your system, the version of the mysqld binary and how you compiled it. +@item +Try also to run @code{mysql-test-run} with the @code{--force} option to +see if there is any other test that fails. + @item If you have compiled @strong{MySQL} yourself, check our manual for how to compile @strong{MySQL} on your platform or, preferable, use one of @@ -44538,6 +44529,8 @@ not yet 100% confident in this code. 
@appendixsubsec Changes in release 3.23.39 @itemize @bullet @item +Fixed problem with @code{IF()} and number of decimals in the result. +@item Fixed that date-part extract functions works with dates where day and/or month is 0. @item diff --git a/client/mysqladmin.c b/client/mysqladmin.c index 1e6bf3c5219..3570cefc4ae 100644 --- a/client/mysqladmin.c +++ b/client/mysqladmin.c @@ -28,7 +28,7 @@ #include /* because of signal() */ #endif -#define ADMIN_VERSION "8.20" +#define ADMIN_VERSION "8.21" #define MAX_MYSQL_VAR 64 #define SHUTDOWN_DEF_TIMEOUT 3600 /* Wait for shutdown */ #define MAX_TRUNC_LENGTH 3 @@ -870,7 +870,7 @@ static int drop_db(MYSQL *mysql, const char *db) return -1; } } - sprintf(name_buff,"drop database %.*s",FN_REFLEN,db); + sprintf(name_buff,"drop database `%.*s`",FN_REFLEN,db); if (mysql_query(mysql,name_buff)) { my_printf_error(0,"DROP DATABASE %s failed;\nerror: '%s'",MYF(ME_BELL), diff --git a/client/mysqlcheck.c b/client/mysqlcheck.c index 3d4d4597ef5..a9379837847 100644 --- a/client/mysqlcheck.c +++ b/client/mysqlcheck.c @@ -338,7 +338,7 @@ static int get_options(int *argc, char ***argv) { int pnlen = strlen(my_progname); - if (pnlen < 6) // name too short + if (pnlen < 6) /* name too short */ what_to_do = DO_CHECK; else if (!strcmp("repair", my_progname + pnlen - 6)) what_to_do = DO_REPAIR; diff --git a/libmysql/net.c b/libmysql/net.c index f60a2a20ce0..11497cc7077 100644 --- a/libmysql/net.c +++ b/libmysql/net.c @@ -34,7 +34,6 @@ #include #include #include -#include #ifdef MYSQL_SERVER ulong max_allowed_packet=65536; diff --git a/mysql-test/mysql-test-run.sh b/mysql-test/mysql-test-run.sh index ece2e42f40b..0dfdbda701e 100644 --- a/mysql-test/mysql-test-run.sh +++ b/mysql-test/mysql-test-run.sh @@ -300,8 +300,9 @@ show_failed_diff () echo "-------------------------------------------------------" $DIFF -c $result_file $reject_file echo "-------------------------------------------------------" - echo "Please e-mail the above, along with the 
output of mysqlbug" - echo "and any other relevant info to bugs@lists.mysql.com" + echo "Please follow the instructions outlined at" + echo "http://www.mysql.com/doc/R/e/Reporting_mysqltest_bugs.html" + echo "to find the reason to this problem and how to report this." fi } diff --git a/mysql-test/r/func_test.result b/mysql-test/r/func_test.result index 3dc0fc19848..5d2211baf50 100644 --- a/mysql-test/r/func_test.result +++ b/mysql-test/r/func_test.result @@ -34,3 +34,5 @@ this is a 2 2.0 1 1 1 and 0 or 2 2 or 1 and 0 1 1 +sum(if(num is null,0.00,num)) +144.54 diff --git a/mysql-test/t/func_test.test b/mysql-test/t/func_test.test index 9562ae5f77b..0439a96f077 100644 --- a/mysql-test/t/func_test.test +++ b/mysql-test/t/func_test.test @@ -24,3 +24,13 @@ select -1.49 or -1.49,0.6 or 0.6; select 5 between 0 and 10 between 0 and 1,(5 between 0 and 10) between 0 and 1; select 1 and 2 between 2 and 10, 2 between 2 and 10 and 1; select 1 and 0 or 2, 2 or 1 and 0; + +# +# Problem with IF() +# + +drop table if exists t1; +create table t1 (num double(12,2)); +insert into t1 values (144.54); +select sum(if(num is null,0.00,num)) from t1; +drop table t1; diff --git a/mysys/mf_pack.c b/mysys/mf_pack.c index c18d37888b8..b442af7e9e5 100644 --- a/mysys/mf_pack.c +++ b/mysys/mf_pack.c @@ -236,11 +236,16 @@ void symdirget(char *dir) *pos++=temp; *pos=0; /* Restore old filename */ if (fp) { - if (fgets(buff, sizeof(buff), fp)) + if (fgets(buff, sizeof(buff)-1, fp)) { for (pos=strend(buff); pos > buff && (iscntrl(pos[-1]) || isspace(pos[-1])) ; pos --); + + /* Ensure that the symlink ends with the directory symbol */ + if (pos == buff || pos[-1] != FN_LIBCHAR) + *pos++=FN_LIBCHAR; + strmake(dir,buff, (uint) (pos-buff)); } my_fclose(fp,MYF(0)); diff --git a/sql/item_cmpfunc.cc b/sql/item_cmpfunc.cc index e7a6c52dfd9..373aede7b6b 100644 --- a/sql/item_cmpfunc.cc +++ b/sql/item_cmpfunc.cc @@ -487,7 +487,7 @@ Item_func_if::fix_length_and_dec() { maybe_null=args[1]->maybe_null || 
args[2]->maybe_null; max_length=max(args[1]->max_length,args[2]->max_length); - decimals=max(args[0]->decimals,args[1]->decimals); + decimals=max(args[1]->decimals,args[2]->decimals); enum Item_result arg1_type=args[1]->result_type(); enum Item_result arg2_type=args[2]->result_type(); if (arg1_type == STRING_RESULT || arg2_type == STRING_RESULT) diff --git a/sql/mysqlbinlog.cc b/sql/mysqlbinlog.cc index f0a9692cc2d..c234e2421bf 100644 --- a/sql/mysqlbinlog.cc +++ b/sql/mysqlbinlog.cc @@ -303,14 +303,12 @@ static void dump_remote_log_entries(const char* logname) uint len; NET* net = &mysql->net; if(!position) position = 4; // protect the innocent from spam - if(position < 4) - { - position = 4; - // warn the guity - fprintf(stderr, - "Warning: with the position so small you would hit the magic number\n\ -Unfortunately, no sweepstakes today, adjusted position to 4\n"); - } + if (position < 4) + { + position = 4; + // warn the guilty + sql_print_error("Warning: The position in the binary log can't be less than 4.\nStarting from position 4\n"); + } int4store(buf, position); int2store(buf + 4, binlog_flags); len = (uint) strlen(logname); diff --git a/sql/sql_repl.cc b/sql/sql_repl.cc index e5039d118be..3af757993b5 100644 --- a/sql/sql_repl.cc +++ b/sql/sql_repl.cc @@ -297,10 +297,9 @@ void mysql_binlog_send(THD* thd, char* log_ident, ulong pos, ushort flags) if ((file=open_binlog(&log, log_file_name, &errmsg)) < 0) goto err; - if(pos < 4) + if (pos < 4) { - errmsg = "Congratulations!
You have hit the magic number and can win \ -sweepstakes if you report the bug"; + errmsg = "Client requested master to start replication from impossible position.\n"; goto err; } From 34a80e21cfea772d4e0e6966eda074544d2719f3 Mon Sep 17 00:00:00 2001 From: unknown Date: Tue, 29 May 2001 15:07:41 +0300 Subject: [PATCH 14/20] ut0ut.c If localtime_r not available in Unix, use localtime configure.in If localtime_r not available in Unix, use localtime innobase/configure.in: If localtime_r not available in Unix, use localtime innobase/ut/ut0ut.c: If localtime_r not available in Unix, use localtime --- innobase/configure.in | 1 + innobase/ut/ut0ut.c | 18 ++++++++++++------ 2 files changed, 13 insertions(+), 6 deletions(-) diff --git a/innobase/configure.in b/innobase/configure.in index 1133ab86221..48bb9504219 100644 --- a/innobase/configure.in +++ b/innobase/configure.in @@ -38,6 +38,7 @@ AC_CHECK_HEADERS(aio.h sched.h) AC_CHECK_SIZEOF(int, 4) AC_CHECK_FUNCS(sched_yield) AC_CHECK_FUNCS(fdatasync) +AC_CHECK_FUNCS(localtime_r) #AC_C_INLINE Already checked in MySQL AC_C_BIGENDIAN diff --git a/innobase/ut/ut0ut.c b/innobase/ut/ut0ut.c index 07ee3d2b6fe..1436f6a10a3 100644 --- a/innobase/ut/ut0ut.c +++ b/innobase/ut/ut0ut.c @@ -72,19 +72,25 @@ ut_print_timestamp( #else struct tm cal_tm; + struct tm* cal_tm_ptr; time_t tm; time(&tm); +#ifdef HAVE_LOCALTIME_R localtime_r(&tm, &cal_tm); + cal_tm_ptr = &cal_tm; +#else + cal_tm_ptr = localtime(&tm); +#endif fprintf(file,"%02d%02d%02d %2d:%02d:%02d", - cal_tm.tm_year % 100, - cal_tm.tm_mon+1, - cal_tm.tm_mday, - cal_tm.tm_hour, - cal_tm.tm_min, - cal_tm.tm_sec); + cal_tm_ptr->tm_year % 100, + cal_tm_ptr->tm_mon+1, + cal_tm_ptr->tm_mday, + cal_tm_ptr->tm_hour, + cal_tm_ptr->tm_min, + cal_tm_ptr->tm_sec); #endif } From 9d52381348a5ff15e856d3efc2004bbe36bb39bd Mon Sep 17 00:00:00 2001 From: unknown Date: Tue, 29 May 2001 09:29:08 -0400 Subject: [PATCH 15/20] Pushing all the Gemini changes above the table handler.
BUILD/FINISH.sh: Add Gemini to configure Docs/manual.texi: Added Gemini content to the manual. acinclude.m4: Add Gemini to configure configure.in: Add Gemini to configure include/my_base.h: transaction isolation level READ UNCOMMITTED does not allow updates include/mysqld_error.h: Added new messages for Lock related failures sql/field.cc: Gemini BLOB support - sql/field.h: Gemini BLOB Support sql/ha_gemini.cc: Gemini Table handler sql/ha_gemini.h: Gemini Table handler sql/handler.cc: Added new messages for Lock related failures Provide the ability to turn off recovery for operations like REPAIR TABLE ans ALTER TABLE sql/handler.h: Add a bit to have full text indexes as an option and define the prototype to optionally turn on and off logging sql/lock.cc: Added new messages for Lock related failures sql/share/czech/errmsg.txt: Added new messages for Lock related failures sql/share/danish/errmsg.txt: Added new messages for Lock related failures sql/share/dutch/errmsg.txt: Added new messages for Lock related failures sql/share/english/errmsg.txt: Added new messages for Lock related failures sql/share/estonian/errmsg.txt: Added new messages for Lock related failures sql/share/french/errmsg.txt: Added new messages for Lock related failures sql/share/german/errmsg.txt: Added new messages for Lock related failures sql/share/greek/errmsg.txt: Added new messages for Lock related failures sql/share/hungarian/errmsg.txt: Added new messages for Lock related failures sql/share/italian/errmsg.txt: Added new messages for Lock related failures sql/share/japanese/errmsg.txt: Added new messages for Lock related failures sql/share/korean/errmsg.txt: Added new messages for Lock related failures sql/share/norwegian-ny/errmsg.txt: Added new messages for Lock related failures sql/share/norwegian/errmsg.txt: Added new messages for Lock related failures sql/share/polish/errmsg.txt: Added new messages for Lock related failures sql/share/portuguese/errmsg.txt: Added new messages for Lock 
related failures sql/share/romanian/errmsg.txt: Added new messages for Lock related failures sql/share/russian/errmsg.txt: Added new messages for Lock related failures sql/share/slovak/errmsg.txt: Added new messages for Lock related failures sql/share/spanish/errmsg.txt: Added new messages for Lock related failures sql/share/swedish/errmsg.txt: Added new messages for Lock related failures sql/sql_base.cc: Avoidlock table overflow issues when doing an alter table on Windows. This is Gemini specific. sql/sql_table.cc: Add a bit to have full text indexes as an option and define the prototype to optionally turn on and off logging BitKeeper/etc/logging_ok: Logging to logging@openlogging.org accepted --- BUILD/FINISH.sh | 4 + BitKeeper/etc/logging_ok | 2 +- Docs/manual.texi | 893 +++++++++++++++++-- acinclude.m4 | 4 +- configure.in | 11 + include/my_base.h | 1 + include/mysqld_error.h | 5 +- sql/field.cc | 53 ++ sql/field.h | 7 + sql/ha_gemini.cc | 1368 ++++++++++++++++++++++++----- sql/ha_gemini.h | 51 +- sql/handler.cc | 28 + sql/handler.h | 2 + sql/lock.cc | 26 +- sql/share/czech/errmsg.txt | 3 + sql/share/danish/errmsg.txt | 3 + sql/share/dutch/errmsg.txt | 3 + sql/share/english/errmsg.txt | 3 + sql/share/estonian/errmsg.txt | 3 + sql/share/french/errmsg.txt | 3 + sql/share/german/errmsg.txt | 3 + sql/share/greek/errmsg.txt | 3 + sql/share/hungarian/errmsg.txt | 3 + sql/share/italian/errmsg.txt | 3 + sql/share/japanese/errmsg.txt | 3 + sql/share/korean/errmsg.txt | 3 + sql/share/norwegian-ny/errmsg.txt | 3 + sql/share/norwegian/errmsg.txt | 3 + sql/share/polish/errmsg.txt | 3 + sql/share/portuguese/errmsg.txt | 3 + sql/share/romanian/errmsg.txt | 3 + sql/share/russian/errmsg.txt | 3 + sql/share/slovak/errmsg.txt | 3 + sql/share/spanish/errmsg.txt | 3 + sql/share/swedish/errmsg.txt | 3 + sql/sql_base.cc | 18 +- sql/sql_table.cc | 19 + 37 files changed, 2261 insertions(+), 294 deletions(-) diff --git a/BUILD/FINISH.sh b/BUILD/FINISH.sh index 4f13f5f8e4d..368ab339c2b 
100644 --- a/BUILD/FINISH.sh +++ b/BUILD/FINISH.sh @@ -15,6 +15,10 @@ $make -k clean || true aclocal && autoheader && aclocal && automake && autoconf (cd bdb/dist && sh s_all) (cd innobase && aclocal && autoheader && aclocal && automake && autoconf) +if [ -d gemini ] +then + (cd gemini && aclocal && autoheader && aclocal && automake && autoconf) +fi CFLAGS=\"$cflags\" CXX=gcc CXXFLAGS=\"$cxxflags\" $configure" diff --git a/BitKeeper/etc/logging_ok b/BitKeeper/etc/logging_ok index 8bf83e5a369..4075ed0def7 100644 --- a/BitKeeper/etc/logging_ok +++ b/BitKeeper/etc/logging_ok @@ -1 +1 @@ -mwagner@evoq.mwagner.org +mikef@nslinux.bedford.progress.com diff --git a/Docs/manual.texi b/Docs/manual.texi index b5fe52845c9..4944bce6406 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -529,10 +529,25 @@ BDB or Berkeley_DB Tables GEMINI Tables -* GEMINI overview:: -* GEMINI start:: -* GEMINI features:: -* GEMINI TODO:: +* GEMINI Overview:: +* Using GEMINI Tables:: + +GEMINI Overview + +* GEMINI Features:: +* GEMINI Concepts:: +* GEMINI Limitations:: + +Using GEMINI Tables + +* Startup Options:: +* Creating GEMINI Tables:: +* Backing Up GEMINI Tables:: +* Restoring GEMINI Tables:: +* Using Auto_Increment Columns With GEMINI Tables:: +* Performance Considerations:: +* Sample Configurations:: +* When To Use GEMINI Tables:: InnoDB Tables @@ -10119,7 +10134,7 @@ If you are using BDB (Berkeley DB) tables, you should familiarize yourself with the different BDB specific startup options. @xref{BDB start}. If you are using Gemini tables, refer to the Gemini-specific startup options. -@xref{GEMINI start}. +@xref{Using GEMINI Tables}. If you are using InnoDB tables, refer to the InnoDB-specific startup options. @xref{InnoDB start}. 
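Patch 14 above (innobase/ut/ut0ut.c) adds a configure check for @code{localtime_r} and falls back to the non-reentrant @code{localtime()} where it is missing. A minimal standalone sketch of that pattern, outside the InnoDB sources (the function name @code{format_timestamp} is invented for illustration; the format string matches the one in the patch):

```c
#include <stdio.h>
#include <time.h>

/* Format a timestamp as YYMMDD HH:MM:SS, using localtime_r() when the
 * build system detected it (HAVE_LOCALTIME_R), otherwise falling back
 * to plain localtime().  The fallback is not thread-safe, which is why
 * the reentrant variant is preferred when available. */
static void format_timestamp(char *buf, size_t len, time_t t)
{
    struct tm cal_tm;
    struct tm *cal_tm_ptr;
#ifdef HAVE_LOCALTIME_R
    localtime_r(&t, &cal_tm);
    cal_tm_ptr = &cal_tm;
#else
    (void)cal_tm;
    cal_tm_ptr = localtime(&t);   /* single-threaded fallback */
#endif
    snprintf(buf, len, "%02d%02d%02d %2d:%02d:%02d",
             cal_tm_ptr->tm_year % 100,
             cal_tm_ptr->tm_mon + 1,
             cal_tm_ptr->tm_mday,
             cal_tm_ptr->tm_hour,
             cal_tm_ptr->tm_min,
             cal_tm_ptr->tm_sec);
}
```

Whichever branch the preprocessor selects, the caller sees the same fixed-width 15-character output, which is what lets the patch touch only the variable declarations and the @code{fprintf} arguments.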
@@ -18868,7 +18883,7 @@ When you insert a value of @code{NULL} (recommended) or @code{0} into an If you delete the row containing the maximum value for an @code{AUTO_INCREMENT} column, the value will be reused with an -@code{ISAM}, @code{BDB} or @code{INNODB} table but not with a +@code{ISAM}, @code{GEMINI}, @code{BDB} or @code{INNODB} table but not with a @code{MyISAM} table. If you delete all rows in the table with @code{DELETE FROM table_name} (without a @code{WHERE}) in @code{AUTOCOMMIT} mode, the sequence starts over for both table types. @@ -24558,87 +24573,849 @@ not in @code{auto_commit} mode, until this problem is fixed (the fix is not trivial). @end itemize -@cindex tables, @code{GEMINI} +@cindex GEMINI tables @node GEMINI, InnoDB, BDB, Table types @section GEMINI Tables +@cindex GEMINI tables, overview @menu -* GEMINI overview:: -* GEMINI start:: -* GEMINI features:: -* GEMINI TODO:: +* GEMINI Overview:: +* Using GEMINI Tables:: @end menu -@node GEMINI overview, GEMINI start, GEMINI, GEMINI -@subsection Overview of GEMINI tables +@node GEMINI Overview, Using GEMINI Tables, GEMINI, GEMINI +@subsection GEMINI Overview -The @code{GEMINI} table type is developed and supported by NuSphere Corporation -(@uref{http://www.nusphere.com}). It features row-level locking, transaction -support (@code{COMMIT} and @code{ROLLBACK}), and automatic crash recovery. +@code{GEMINI} is a transaction-safe table handler for @strong{MySQL}. It +provides row-level locking, robust transaction support and reliable +crash recovery. It is targeted for databases that need to handle heavy +multi-user updates typical of transaction processing applications while +still providing excellent performance for read-intensive operations. The +@code{GEMINI} table type is developed and supported by NuSphere +Corporation (see @url{http://www.nusphere.com}). -@code{GEMINI} tables will be included in some future @strong{MySQL} 3.23.X -source distribution. 
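The @code{AUTO_INCREMENT} difference described in the hunk above (after deleting the row holding the maximum value, the value is reused by ISAM, GEMINI, BDB and InnoDB, but not by MyISAM) can be pictured as two counter policies. The following is a toy model, not MySQL code; all names are invented:

```c
/* Toy model of the two AUTO_INCREMENT policies:
 * - insert_recompute(): next id = current maximum + 1, so deleting the
 *   max row frees its id for reuse (ISAM/GEMINI/BDB/InnoDB behavior).
 * - insert_high_water(): next id = largest id ever issued + 1, so a
 *   deleted id is never handed out again (MyISAM behavior). */
#define MAX_ROWS 16

struct toy_table {
    long rows[MAX_ROWS];
    int  nrows;
    long high_water;                /* largest id ever issued */
};

static long toy_max(const struct toy_table *t)
{
    long m = 0;
    for (int i = 0; i < t->nrows; i++)
        if (t->rows[i] > m) m = t->rows[i];
    return m;
}

static long insert_recompute(struct toy_table *t)
{
    long id = toy_max(t) + 1;       /* reuses freed top values */
    t->rows[t->nrows++] = id;
    return id;
}

static long insert_high_water(struct toy_table *t)
{
    long id = t->high_water + 1;    /* never goes backwards */
    t->high_water = id;
    t->rows[t->nrows++] = id;
    return id;
}

static void delete_max(struct toy_table *t)
{
    long m = toy_max(t);
    for (int i = 0; i < t->nrows; i++)
        if (t->rows[i] == m) { t->rows[i] = t->rows[--t->nrows]; return; }
}
```

Running both policies over the same insert/delete sequence shows the divergence: after deleting id 2, the recompute policy issues 2 again while the high-water policy issues 3.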
+@code{GEMINI} provides full ACID transaction properties (Atomic, +Consistent, Isolated, and Durable) with a programming model that +includes support for statement atomicity and all four standard isolation +levels (Read Uncommitted, Read Committed, Repeatable Read, and +Serializable) defined in the SQL standard. -@node GEMINI start, GEMINI features, GEMINI overview, GEMINI -@subsection GEMINI startup options +The @code{GEMINI} tables support row-level and table-level locking to +increase concurrency in applications and allow reading of tables without +locking for maximum concurrency in a heavy update environment. The +transaction, locking, and recovery mechanisms are tightly integrated to +eliminate unnecessary administration overhead. -If you are running with @code{AUTOCOMMIT=0} then your changes in @code{GEMINI} -tables will not be updated until you execute @code{COMMIT}. Instead of commit -you can execute @code{ROLLBACK} to forget your changes. @xref{COMMIT}. +In general, if @code{GEMINI} tables are selected for an application, it +is recommended that all tables updated in the application be +@code{GEMINI} tables to provide well-defined system behavior. If +non-@code{GEMINI} tables are mixed into the application, then ACID +transaction properties cannot be maintained. While there are clearly +cases where mixing table types is appropriate, it should always be done +with careful consideration of the impact on transaction consistency and +recoverability needs of the application and underlying database. -If you are running with @code{AUTOCOMMIT=1} (the default), your changes -will be committed immediately. You can start an extended transaction with -the @code{BEGIN WORK} SQL command, after which your changes will not be -committed until you execute @code{COMMIT} (or decide to @code{ROLLBACK} -the changes).
+The @code{GEMINI} table type is derived from a successful commercial +database and uses the storage kernel technology tightly integrated with +@strong{MySQL} server. The basic @code{GEMINI} technology is in use by +millions of users worldwide in production environments today. This +maturity allows @code{GEMINI} tables to provide a solution for those +users who require transaction-based behavior as part of their +applications. -The following options to @code{mysqld} can be used to change the behavior of -GEMINI tables: +The @code{GEMINI} table handler supports a configurable data cache that +allows a significant portion of any database to be maintained in memory +while still allowing durable updates. -@multitable @columnfractions .30 .70 -@item @strong{Option} @tab @strong{Meaning} -@item @code{--gemini-full-recovery} @tab Default. -@item @code{--gemini-no-recovery} @tab Turn off recovery logging. Not recommended. -@item @code{--gemini-lazy-commit} @tab Relaxes the flush log at commit rule. -@item @code{--gemini-unbuffered-io} @tab All database writes bypass OS cache. -@item @code{--skip-gemini} @tab Don't use Gemini. -@item @code{--O gemini_db_buffers=#} @tab Number of database buffers in database cache. -@item @code{--O gemini_connection_limit=#} @tab Maximum number of connections to Gemini. -@item @code{--O gemini_spin_retries=#} @tab Spin lock retries (optimization). -@item @code{--O gemini_io_threads=#} @tab Number of background I/O threads. -@item @code{--O gemini_lock_table_size=#} @tab Set the maximum number of locks. Default 4096. +@cindex GEMINI tables, features +@menu +* GEMINI Features:: +* GEMINI Concepts:: +* GEMINI Limitations:: +@end menu + +@node GEMINI Features, GEMINI Concepts, GEMINI Overview, GEMINI Overview +@subsubsection GEMINI Features + +The following summarizes the major features provided by @code{GEMINI} +tables. 
+ +@itemize @bullet +@item +Supports all optimization statistics used by the @strong{MySQL} optimizer +including table cardinality, index range estimates and multi-component +selectivity to ensure optimal query performance. + +@item +Maintains exact cardinality information for each table so @code{SELECT +COUNT(*) FROM} table-name always returns an answer immediately. + +@item +Supports index-only queries; when index data is sufficient to resolve a +query no record data is read (for non-character types). + +@item +@code{GEMINI} uses block-based I/O for better performance. There is no +performance penalty for using @code{VARCHAR} fields. The maximum record size is +currently 32K. + +@item +The number of rows in a single @code{GEMINI} table can be 4 quintillion +(full use of 64 bits). + +@item +Individual tables can be as large as 16 petabytes. + +@item +Locking is done at a record or row level rather than at table level +unless table locks are explicitly requested. When a row is inserted into +a table, other rows can be updated, inserted or deleted without waiting +for the inserted row to be committed. + +@item +Provides durable transactions backed by a crash recovery mechanism that +returns the database to a known consistent state in the event of an +unexpected failure. + +@item +Support for all isolation levels and statement atomicity defined in the +SQL standard. + +@item +Reliable Master Replication; the master database can survive system +failure and recover all committed transactions.
+@end itemize + +@cindex GEMINI tables, concepts +@node GEMINI Concepts, GEMINI Limitations, GEMINI Features, GEMINI Overview +@subsubsection GEMINI Concepts + +This section highlights some of the important concepts behind +@code{GEMINI} and the @code{GEMINI} programming model, including: + +@itemize @bullet +@item +ACID Transactions +@item +Transaction COMMIT/ROLLBACK +@item +Statement Atomicity +@item +Recovery +@item +Isolation Levels +@item +Row-Level Locking +@end itemize + +These features are described below. + +@cindex GEMINI tables, ACID transactions +@noindent +@strong{ACID Transactions} + +ACID in the context of transactions is an acronym which stands for +@emph{Atomicity}, @emph{Consistency}, @emph{Isolation}, @emph{Durability}. + +@multitable @columnfractions .25 .75 +@item @sc{Attribute} @tab @sc{Description} +@item +@strong{Atomicity} +@tab A transaction allows for the grouping of one or more changes to +tables and rows in the database to form an atomic or indivisible +operation. That is, either all of the changes occur or none of them +do. If for any reason the transaction cannot be completed, everything +this transaction changed can be restored to the state it was in prior to +the start of the transaction via a rollback operation. + +@item +@strong{Consistency} +@tab +Transactions always operate on a consistent view of the data and when +they end always leave the data in a consistent state. Data may be said to +be consistent as long as it conforms to a set of invariants, such as no +two rows in the customer table have the same customer ID and all orders +have an associated customer row. While a transaction executes, these +invariants may be violated, but no other transaction will be allowed to +see these inconsistencies, and all such inconsistencies will have been +eliminated by the time the transaction ends. + +@item +@strong{Isolation} +@tab To a given transaction, it should appear as though it is running +all by itself on the database. 
The effects of concurrently running
+transactions are invisible to this transaction, and the effects of this
+transaction are invisible to others until the transaction is committed.
+
+@item
+@strong{Durability}
+@tab Once a transaction is committed, its effects are guaranteed to
+persist even in the event of subsequent system failures. Until the
+transaction commits, not only are any changes made by that transaction
+not durable, but they are guaranteed not to persist in the face of a
+system failure, as crash recovery will roll back their effects.
 @end multitable
 
-If you use @code{--skip-gemini}, @strong{MySQL} will not initialize the
-Gemini table handler, saving memory; you cannot use Gemini tables if you
-use @code{--skip-gemini}.
+@cindex GEMINI tables, COMMIT/ROLLBACK
+@noindent
+@strong{Transaction COMMIT/ROLLBACK}
 
-@node GEMINI features, GEMINI TODO, GEMINI start, GEMINI
-@subsection Features of @code{GEMINI} tables:
+As stated above, a transaction is a group of work being done to
+data. Unless otherwise directed, @strong{MySQL} considers each statement
+a transaction in itself. Multiple updates can be accomplished by placing
+them in a single statement; however, they are limited to a single table.
+
+Applications tend to require more robust use of transaction
+concepts. Take, for example, a system that processes an order: a row may
+be inserted in an order table, additional rows may be added to an
+order-line table, updates may be made to inventory tables, and so on. It
+is important that if the order completes, all the changes are made to
+all the tables involved; likewise, if the order fails, none of the
+changes to the tables must occur. To facilitate this requirement,
+@strong{MySQL} has syntax to start a transaction called @code{BEGIN
+WORK}. All statements that occur after the @code{BEGIN WORK} statement
+are grouped into a single transaction. The end of this transaction
+occurs when a @code{COMMIT} or @code{ROLLBACK} statement is encountered. 
After the
+@code{COMMIT} or @code{ROLLBACK}, the system returns to the behavior it
+had before the @code{BEGIN WORK} statement was encountered, where every
+statement is a transaction.
+
+To permanently turn off the behavior where every statement is a
+transaction, @strong{MySQL} added a variable called
+@code{AUTOCOMMIT}. The @code{AUTOCOMMIT} variable can have two values,
+@code{1} and @code{0}. The mode where every statement is a transaction
+is when @code{AUTOCOMMIT} is set to @code{1} (@code{AUTOCOMMIT=1}). When
+@code{AUTOCOMMIT} is set to @code{0} (@code{AUTOCOMMIT=0}), then every
+statement is part of the same transaction until the transaction ends
+with either @code{COMMIT} or @code{ROLLBACK}. Once a transaction
+completes, a new transaction is immediately started and the process
+repeats.
+
+Here is an example of the SQL statements that you may find in a typical
+order:
+
+@example
+BEGIN WORK;
+  INSERT INTO order VALUES ...;
+  INSERT INTO order-lines VALUES ...;
+  INSERT INTO order-lines VALUES ...;
+  INSERT INTO order-lines VALUES ...;
+  UPDATE inventory WHERE ...;
+COMMIT;
+@end example
+
+This example shows how to use the @code{BEGIN WORK} statement to start a
+transaction. If the variable @code{AUTOCOMMIT} is set to @code{0}, then
+a transaction would have been started already. In this case, the
+@code{BEGIN WORK} commits the current transaction and starts a new one.
+
+@cindex GEMINI tables, statement atomicity
+@noindent
+@strong{Statement Atomicity}
+
+As mentioned above, when running with @code{AUTOCOMMIT} set to @code{1},
+each statement executes as a single transaction. When a statement has an
+error, then all changes made by the statement must be
+undone. Transactions support this behavior. Non-transaction-safe table
+handlers would have a partial statement update where some of the changes
+from the statement would be contained in the database and other changes
+from the statement would not. 
Work would need to be done to manually
+recover from the error.
+
+@cindex GEMINI tables, recovery
+@noindent
+@strong{Recovery}
+
+Transactions are the basis for database recovery. Recovery is what
+supports the Durability attribute of the ACID transaction.
+
+@code{GEMINI} uses a separate file called the Recovery Log, located in
+the @code{$DATADIR} directory and named @code{gemini.rl}. This file
+maintains the integrity of all the @code{GEMINI} tables. @code{GEMINI}
+cannot recover any data from non-@code{GEMINI} tables. In addition, the
+@code{gemini.rl} file is used to roll back transactions in support of
+the @code{ROLLBACK} statement.
+
+In the event of a system failure, the next time the @strong{MySQL}
+server is started, @code{GEMINI} will automatically go through its
+crash recovery process. The result of crash recovery is that all the
+@code{GEMINI} tables will contain the latest changes made to them, and
+all transactions that were open at the time of the crash will have been
+rolled back.
+
+The @code{GEMINI} Recovery Log reuses space when it can. Space can be
+reused when information in the Recovery Log is no longer needed for
+crash recovery or rollback.
+
+@cindex GEMINI tables, isolation levels
+@noindent
+@strong{Isolation Levels}
+
+There are four isolation levels supported by @code{GEMINI}:
 
 @itemize @bullet
 @item
-If a query result can be resolved solely from the index key, Gemini will
-not read the actual row stored in the database.
+READ UNCOMMITTED
 @item
-Locking on Gemini tables is done at row level.
+READ COMMITTED
 @item
-@code{SELECT COUNT(*) FROM table_name} is fast; Gemini maintains a count
-of the number of rows in the table.
+REPEATABLE READ
+@item
+SERIALIZABLE
 @end itemize
 
-@node GEMINI TODO, , GEMINI features, GEMINI
-@subsection Current limitations of @code{GEMINI} tables:
+These isolation levels apply only to shared locks obtained by select
+statements, excluding @code{SELECT ... FOR UPDATE}. 
Statements that get exclusive
+locks always retain those locks until the transaction commits or rolls
+back.
+
+By default, @code{GEMINI} operates at the @code{READ COMMITTED}
+level. You can override the default using the following command:
+
+@example
+SET [GLOBAL | SESSION] TRANSACTION ISOLATION LEVEL [READ UNCOMMITTED |
+READ COMMITTED | REPEATABLE READ | SERIALIZABLE ]
+@end example
+
+If the @code{SESSION} qualifier is used, the specified isolation level
+persists for the entire session. If the @code{GLOBAL} qualifier is used,
+the specified isolation level is applied to all new connections from
+this point forward. Note that the specified isolation level will not
+change the behavior for existing connections, including the connection
+that executes the @code{SET GLOBAL TRANSACTION ISOLATION LEVEL}
+statement.
+
+@multitable @columnfractions .30 .70
+@item @sc{Isolation Level} @tab @sc{Description}
+
+@item
+@strong{READ UNCOMMITTED}
+@tab Does not obtain any locks when reading rows. This means that if a
+row is locked by another process in a transaction that has a more strict
+isolation level, the @code{READ UNCOMMITTED} query will not wait until
+the locks are released before reading the row. You will get an error if
+you attempt any updates while running at this isolation level.
+
+@item
+@strong{READ COMMITTED}
+@tab Locks the requested rows long enough to copy the row from the
+database block to the client row buffer. If a @code{READ COMMITTED}
+query finds that a row is locked exclusively by another process, it will
+wait until either the row has been released, or the lock timeout value
+has expired.
+
+@item
+@strong{REPEATABLE READ}
+@tab Locks all the rows needed to satisfy the query. These locks are
+held until the transaction ends (commits or rolls back). If a
+@code{REPEATABLE READ} query finds that a row is locked exclusively by
+another process, it will wait until either the row has been released, or
+the lock timeout value has expired. 
+
+@item
+@strong{SERIALIZABLE}
+@tab Locks the table that contains the rows needed to satisfy the
+query. This lock is held until the transaction ends (commits or rolls
+back). If a @code{SERIALIZABLE} query finds that a row is exclusively
+locked by another process, it will wait until either the row has been
+released, or the lock timeout value has expired.
+@end multitable
+
+The statements that get exclusive locks are @code{INSERT},
+@code{UPDATE}, @code{DELETE} and @code{SELECT ... FOR UPDATE}. Select
+statements without the @code{FOR UPDATE} qualifier get shared locks,
+which allow other non-@code{FOR UPDATE} select statements to read the
+same rows but block anyone trying to update them. Rows or tables with
+exclusive locks block all access to the row from other transactions
+until the transaction ends.
+
+In general terms, the higher the isolation level, the greater the
+likelihood of concurrent locks and therefore lock conflicts. In such
+cases, adjust the @code{-O gemini_lock_table_size} accordingly.
+
+@cindex GEMINI tables, row-level locking
+@noindent
+@strong{Row-Level Locking}
+
+@code{GEMINI} uses row locks, which allows high concurrency for requests
+on the same table.
+
+In order to avoid lock table overflow, SQL statements that require
+applying locks to a large number of rows should either be run at the
+@code{SERIALIZABLE} isolation level or should be covered by a lock table
+statement.
+
+Memory must be pre-allocated for the lock table. The mysqld server
+startup option @code{-O gemini_lock_table_size} can be used to adjust
+the number of concurrent locks.
+
+@cindex GEMINI tables, limitations
+@node GEMINI Limitations, , GEMINI Concepts, GEMINI Overview
+@subsubsection GEMINI Limitations
+
+The following limitations are in effect for the current version of
+@code{GEMINI}:
 
 @itemize @bullet
 @item
-BLOB columns are not supported in @code{GEMINI} tables. 
+@code{DROP DATABASE} does not work with @code{GEMINI} tables; instead, +drop all the tables in the database first, then drop the database. + @item -The maximum number of concurrent users accessing @code{GEMINI} tables is -limited by @code{gemini_connection_limit}. The default is 100 users. +Maximum number of @code{GEMINI} tables is 1012. + +@item +Maximum number of @code{GEMINI} files a server can manage is 1012. Each +table consumes one file; an additional file is consumed if the table has +any indexes defined on it. + +@item +Maximum size of BLOBs is 16MB. + +@item +@code{FULLTEXT} indexes are not supported with @code{GEMINI} tables. + +@item +There is no support for multi-component @code{AUTO_INCREMENT} fields +that provide alternating values at the component level. If you try to +create such a field, @code{GEMINI} will refuse. + +@item +@code{TEMPORARY TABLES} are not supported by @code{GEMINI}. The +statement @code{CREATE TEMPORARY TABLE ... TYPE=GEMINI} will generate +the response: @code{ERROR 1005: Can't create table '/tmp/#sqlxxxxx' +(errno: 0)}. + +@item +@code{FLUSH TABLES} has not been implemented with @code{GEMINI} tables. @end itemize -NuSphere is working on removing these limitations. +@cindex GEMINI tables, using +@node Using GEMINI Tables, , GEMINI Overview, GEMINI +@subsection Using GEMINI Tables + +This section explains the various startup options you can use with +@code{GEMINI} tables, how to backup @code{GEMINI} tables, some +performance considerations and sample configurations, and a brief +discussion of when to use @code{GEMINI} tables. 
+ +Specifically, the topics covered in this section are: + +@itemize @bullet +@item +Startup Options +@item +Creating @code{GEMINI} Tables +@item +Backing Up @code{GEMINI} Tables +@item +Using Auto_Increment Columns With @code{GEMINI} Tables +@item +Performance Considerations +@item +Sample Configurations +@item +When To Use @code{GEMINI} Tables +@end itemize + +@cindex GEMINI tables, startup options +@menu +* Startup Options:: +* Creating GEMINI Tables:: +* Backing Up GEMINI Tables:: +* Restoring GEMINI Tables:: +* Using Auto_Increment Columns With GEMINI Tables:: +* Performance Considerations:: +* Sample Configurations:: +* When To Use GEMINI Tables:: +@end menu + +@node Startup Options, Creating GEMINI Tables, Using GEMINI Tables, Using GEMINI Tables +@subsubsection Startup Options + +The table below lists options to mysqld that can be used to change the +behavior of @code{GEMINI} tables. + +@multitable @columnfractions .40 .60 +@item @sc{Option} @tab @sc{Description} + +@item +@code{--default-table-type=gemini} +@tab Sets the default table handler to be @code{GEMINI}. All create +table statements will create @code{GEMINI} tables unless otherwise +specified with @code{TYPE=@var{table-type}}. As noted above, there is +currently a limitation with @code{TEMPORARY} tables using @code{GEMINI}. + +@item +@code{--gemini-flush-log-at-commit} +@tab Forces the recovery log buffers to be flushed after every +commit. This can have a serious performance penalty, so use with +caution. + +@item +@code{--gemini-recovery=FULL | NONE | FORCE} +@tab Sets the recovery mode. Default is @code{FULL}. @code{NONE} is +useful for performing repeatable batch operations because the updates +are not recorded in the recovery log. @code{FORCE} skips crash recovery +upon startup; this corrupts the database, and should be used in +emergencies only. + +@item +@code{--gemini-unbuffered-io} +@tab All database writes bypass the OS cache. 
This can provide a
+performance boost on heavily updated systems where most of the dataset
+being worked on is cached in memory with the @code{gemini_buffer_cache}
+parameter.
+
+@item
+@code{-O gemini_buffer_cache=size}
+@tab Amount of memory to allocate for database buffers, including index
+and record information. It is recommended that this number be 10% of the
+total size of all @code{GEMINI} tables. Do not exceed the amount of
+memory on the system!
+
+@item
+@code{-O gemini_connection_limit=#}
+@tab Maximum number of connections to @code{GEMINI}; default is
+@code{100}. Each connection consumes about 1K of memory.
+
+@item
+@code{-O gemini_io_threads=#}
+@tab Number of background I/O threads; default is @code{2}. Increase the
+number when using @code{--gemini-unbuffered-io}.
+
+@item
+@code{-O gemini_lock_table_size=#}
+@tab Sets the maximum number of concurrent locks; default is 4096. Using
+@code{SET [ GLOBAL | SESSION ] TRANSACTION ISOLATION = ...} will
+determine how long a program will hold row locks.
+
+@item
+@code{-O gemini_lock_wait_timeout=seconds}
+@tab Number of seconds to wait for record locks when performing queries;
+default is 10 seconds. Using @code{SET [ GLOBAL | SESSION ] TRANSACTION
+ISOLATION = ...} will determine how long a program will hold row locks.
+
+@item
+@code{--skip-gemini}
+@tab Do not use @code{GEMINI}. If you use @code{--skip-gemini}, @strong{MySQL}
+will not initialize the @code{GEMINI} table handler, saving memory; you
+cannot use @code{GEMINI} tables if you use @code{--skip-gemini}.
+
+@item
+@code{--transaction-isolation=READ-UNCOMMITTED | READ-COMMITTED | REPEATABLE-READ | SERIALIZABLE}
+@tab Sets the GLOBAL transaction isolation level for all users that
+connect to the server; can be overridden with the @code{SET TRANSACTION
+ISOLATION LEVEL} statement. 
+@end multitable
+
+@cindex GEMINI tables, creating
+@node Creating GEMINI Tables, Backing Up GEMINI Tables, Startup Options, Using GEMINI Tables
+@subsubsection Creating GEMINI Tables
+
+@code{GEMINI} tables can be created by using either the @code{CREATE
+TABLE} syntax or the @code{ALTER TABLE} syntax.
+
+@itemize @bullet
+@item
+The syntax for creating a @code{GEMINI} table is:
+
+@example
+CREATE TABLE @var{table-name} (....) TYPE=GEMINI;
+@end example
+
+@item
+The syntax to convert a table to @code{GEMINI} is:
+
+@example
+ALTER TABLE @var{table-name} TYPE=GEMINI;
+@end example
+@end itemize
+
+@xref{Tutorial}, for more information on how to create and use
+@code{MySQL} tables.
+
+@cindex GEMINI tables, backing up
+@node Backing Up GEMINI Tables, Restoring GEMINI Tables, Creating GEMINI Tables, Using GEMINI Tables
+@subsubsection Backing Up GEMINI Tables
+
+@code{GEMINI} supports both @code{BACKUP TABLE} and @code{RESTORE TABLE}
+syntax. To learn more about how to use @code{BACKUP} and @code{RESTORE},
+see @ref{BACKUP TABLE} and @ref{RESTORE TABLE}.
+
+To back up @code{GEMINI} tables outside of the @code{MySQL} environment,
+you must first shut down the @code{MySQL} server. Once the server is
+shut down, you can copy the files associated with @code{GEMINI} to a
+different location. The files that make up the @code{GEMINI} table
+handler are:
+
+@itemize @bullet
+@item
+All files associated with a table below the @code{$DATADIR} directory.
+Such files include @code{@var{table}.gmd}, @code{@var{table}.gmi}, and
+@code{@var{table}.frm}
+@item
+@code{gemini.db} in the @code{$DATADIR} directory
+@item
+@code{gemini.rl} in the @code{$DATADIR} directory
+@item
+@code{gemini.lg} in the @code{$DATADIR} directory
+@end itemize
+
+All the @code{GEMINI} files must be copied together. You cannot copy
+just the @code{.gmi} and @code{.gmd} files to a different
+@code{$DATADIR} and have them become part of a new database. 
You can
+copy an entire @code{$DATADIR} directory to another location and start a
+@strong{MySQL} server using the new @code{$DATADIR}.
+
+@cindex GEMINI tables, restoring
+@node Restoring GEMINI Tables, Using Auto_Increment Columns With GEMINI Tables, Backing Up GEMINI Tables, Using GEMINI Tables
+@subsubsection Restoring GEMINI Tables
+
+To restore @code{GEMINI} tables outside of the @code{MySQL} environment,
+you must first shut down the @code{MySQL} server. Once the server is
+shut down, you can remove all @code{GEMINI} files in the target
+@code{$DATADIR} and then copy the files previously backed up into the
+@code{$DATADIR} directory.
+
+As mentioned above, the files that make up the @code{GEMINI} table
+handler are:
+
+@itemize @bullet
+@item
+All files associated with a table below the @code{$DATADIR} directory.
+Such files include @code{@var{table}.gmd}, @code{@var{table}.gmi}, and
+@code{@var{table}.frm}
+@item
+@code{gemini.db} in the @code{$DATADIR} directory
+@item
+@code{gemini.rl} in the @code{$DATADIR} directory
+@item
+@code{gemini.lg} in the @code{$DATADIR} directory
+@end itemize
+
+When restoring a table, all the @code{GEMINI} files must be copied
+together. You cannot restore just the @code{.gmi} and @code{.gmd}
+files.
+
+@cindex GEMINI tables, auto_increment
+@node Using Auto_Increment Columns With GEMINI Tables, Performance Considerations, Restoring GEMINI Tables, Using GEMINI Tables
+@subsubsection Using Auto_Increment Columns With GEMINI Tables
+
+As mentioned previously, @code{GEMINI} tables support row-level and
+table-level locking to increase concurrency in applications and to allow
+reading of tables without locking for maximum concurrency in heavy
+update environments. This feature has several implications when working
+with @code{auto_increment} tables. 
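+
+A purely hypothetical sketch of one such implication (the table and
+column names below are invented for illustration only): because
+@code{GEMINI} keeps its @code{auto_increment} counter separately from
+the table data, an explicitly inserted value does not become the basis
+for subsequent automatically generated values, as it would with
+@code{MyISAM}:
+
+@example
+CREATE TABLE t1 (id INT AUTO_INCREMENT, PRIMARY KEY (id)) TYPE=GEMINI;
+INSERT INTO t1 VALUES (10);   # explicit value; the separate counter
+                              # is not advanced past 10
+INSERT INTO t1 VALUES (NULL); # value comes from the separate counter,
+                              # not from the highest value in the index
+@end example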
+
+In @code{MySQL}, when a column is defined as an @code{auto_increment}
+column, and a row is inserted into the table with a @code{NULL} for the
+column, the @code{auto_increment} column is updated to be 1 higher than
+the highest value in the column.
+
+With @code{MyISAM} tables, the @code{auto_increment} function is
+implemented by looking in the index and finding the highest value and
+adding 1 to it. This is possible because the entire @code{MyISAM} table
+is locked during the update period and the increment value is therefore
+guaranteed not to change.
+
+With @code{GEMINI} tables, the @code{auto_increment} function is
+implemented by maintaining a counter in a separate location from the
+table data. Instead of looking at the highest value in the table index,
+@code{GEMINI} tables look at this separately maintained counter. This
+means that in a transactional model, unlike the bottleneck inherent in
+the @code{MyISAM} approach, @code{GEMINI} users do @b{not} have to wait
+until the transaction that added the last value either commits or
+rolls back before looking at the value.
+
+Two side-effects of the @code{GEMINI} implementation are:
+
+@itemize @bullet
+@item
+If an insert is done where the value for the @code{auto_increment}
+column is specified explicitly, and this specified value is the highest
+value, @code{MyISAM} uses it as its @code{auto_increment} value, and
+every subsequent insert is based on this. By contrast, @code{GEMINI}
+does not use this value, but instead uses the value maintained in the
+separate @code{GEMINI} counter location.
+
+@item
+To set the counter to a specific value, you can use @code{SET
+insert_id=#} and insert a new row in the table. However, as a general
+rule, values should not be inserted into an @code{auto_increment}
+column; the database manager should be maintaining this field, not the
+application. @code{SET insert_id} is a recovery mechanism that should be
+used in case of error only. 
+@end itemize
+
+Note that if you delete the row containing the maximum value for an
+@code{auto_increment} column, the value will be reused with a
+@code{GEMINI} table but not with a @code{MyISAM} table.
+
+See @ref{CREATE TABLE} for more information about creating
+@code{auto_increment} columns.
+
+@cindex GEMINI tables, performance considerations
+@node Performance Considerations, Sample Configurations, Using Auto_Increment Columns With GEMINI Tables, Using GEMINI Tables
+@subsubsection Performance Considerations
+
+In addition to designing the best possible application, the
+configuration of the data and the server startup parameters needs to be
+considered. How the hardware is being used can have a dramatic effect on
+how fast the system will respond to queries. Disk drives and memory must
+both be considered.
+
+@noindent
+@strong{Disk Drives}
+
+For best performance, you want to spread the data out over as many disks
+as possible. Using RAID 10 striping works very well. If there are a lot
+of updates, then the recovery log (@code{gemini.rl}) should be on a
+relatively quiet disk drive.
+
+To spread the data out without using RAID 10, you can do the following:
+
+@itemize @bullet
+@item
+Group all the tables into three categories: Heavy Use, Moderate Use,
+Light Use.
+
+@item
+Take the number of disk drives available and use a round-robin approach
+to the three categories, grouping the tables on a disk drive. The result
+will be an equal distribution of Heavy/Moderate/Light tables assigned to
+each disk drive.
+
+@item
+Once the tables have been converted to @code{GEMINI} by using the
+@code{ALTER TABLE TYPE=GEMINI} statements, move (@code{mv}) the
+@code{.gmd} and @code{.gmi} files to a different disk drive and link
+(@code{ln -s}) them back to the original directory where the @code{.frm}
+file resides.
+
+@item
+Finally, move the @code{gemini.rl} file to its quiet disk location and
+link the file back to the @code{$DATADIR} directory. 
+@end itemize + +@noindent +@strong{Memory} + +The more data that can be placed in memory the faster the access to the +data. Figure out how large the @code{GEMINI} data is by adding up the +@code{.gmd} and @code{.gmi} file sizes. If you can, put at least 10% of +the data into memory. You allocate memory for the rows and indexes by +using the @code{gemini_buffer_cache} startup parameter. For example: + +@example +mysqld -O gemini_buffer_cache=800M +@end example + +@noindent +would allocate 800 MB of memory for the @code{GEMINI} buffer cache. + +@cindex GEMINI tables, sample configurations +@node Sample Configurations, When To Use GEMINI Tables, Performance Considerations, Using GEMINI Tables +@subsubsection Sample Configurations + +Based on the performance considerations above, we can look at some +examples for how to get the best performance out of the system when +using @code{GEMINI} tables. + +@multitable @columnfractions .30 .70 +@item @sc{Hardware} @tab @sc{Configuration} +@item +One CPU, 128MB memory, one disk drive +@tab Allocate 80MB of memory for reading and updating @code{GEMINI} +tables by starting the mysqld server with the following option: + +@example +-O gemini_buffer_cache=80M +@end example + +@item +Two CPUs, 512MB memory, four disk drives +@tab Use RAID 10 to stripe the data across all available disks, or use +the method described in the performance considerations section, +above. Allocate 450MB of memory for reading/updating @code{GEMINI} +tables: + +@example +-O gemini_buffer_cache=450M +@end example +@end multitable + +@cindex GEMINI tables, when to use +@node When To Use GEMINI Tables, , Sample Configurations, Using GEMINI Tables +@subsubsection When To Use GEMINI Tables + +Because the @code{GEMINI} table handler provides crash recovery and +transaction support, there is extra overhead that is not found in other +non-transaction safe table handlers. 
Here are some general guidelines +for when to employ @code{GEMINI} and when to use other non-transaction +safe tables (@code{NTST}). + +@multitable @columnfractions .30 .25 .45 +@item +@sc{Access Trends} @tab @sc{Table Type} @tab @sc{Reason} +@item +Read-only +@tab @code{NTST} +@tab Less overhead and faster +@item +Critical data +@tab @code{GEMINI} +@tab Crash recovery protection +@item +High concurrency +@tab @code{GEMINI} +@tab Row-level locking +@item +Heavy update +@tab @code{GEMINI} +@tab Row-level locking +@end multitable + +The table below shows how a typical application schema could be defined. + +@multitable @columnfractions .15 .30 .25 .30 +@item +@sc{Table} @tab @sc{Contents} @tab @sc{Table Type} @tab @sc{Reason} +@item +account +@tab Customer account data +@tab @code{GEMINI} +@tab Critical data, heavy update +@item +order +@tab Orders for a customer +@tab @code{GEMINI} +@tab Critical data, heavy update +@item +orderline +@tab Orderline detail for an order +@tab @code{GEMINI} +@tab Critical data, heavy update +@item +invdesc +@tab Inventory description +@tab @code{NTST} +@tab Read-only, frequent access +@item +salesrep +@tab Sales rep information +@tab @code{NTST} +@tab Infrequent update +@item +inventory +@tab Inventory information +@tab @code{GEMINI} +@tab High concurrency, critical data +@item +config +@tab System configuration +@tab @code{NTST} +@tab Read-only +@end multitable @node InnoDB, , GEMINI, Table types @section InnoDB Tables diff --git a/acinclude.m4 b/acinclude.m4 index ab2ea5cddd1..59b6e909225 100644 --- a/acinclude.m4 +++ b/acinclude.m4 @@ -999,10 +999,10 @@ dnl echo "DBG_GEM1: gemini='$gemini'" gemini_includes= gemini_libs= case "$gemini" in - no | default | *) + no) AC_MSG_RESULT([Not using Gemini DB]) ;; - yes ) + yes | default | *) have_gemini_db="yes" gemini_includes="-I../gemini/incl -I../gemini" gemini_libs="\ diff --git a/configure.in b/configure.in index 9998902bcec..47df228d310 100644 --- a/configure.in +++ b/configure.in @@ 
-2018,6 +2018,17 @@ EOF echo "END OF INNODB CONFIGURATION" fi + if test "X$have_gemini_db" = "Xyes"; then + sql_server_dirs="gemini $sql_server_dirs" + echo "CONFIGURING FOR GEMINI DB" + (cd gemini && sh ./configure) \ + || AC_MSG_ERROR([could not configure Gemini DB]) + + echo "END OF GEMINI DB CONFIGURATION" + + AC_DEFINE(HAVE_GEMINI_DB) + fi + if test "$with_posix_threads" = "no" -o "$with_mit_threads" = "yes" then # MIT user level threads diff --git a/include/my_base.h b/include/my_base.h index aee9f7af3f1..bb2e4128195 100644 --- a/include/my_base.h +++ b/include/my_base.h @@ -213,6 +213,7 @@ enum ha_base_keytype { #define HA_ERR_CRASHED_ON_USAGE 145 /* Table must be repaired */ #define HA_ERR_LOCK_WAIT_TIMEOUT 146 #define HA_ERR_LOCK_TABLE_FULL 147 +#define HA_ERR_READ_ONLY_TRANSACTION 148 /* Updates not allowed */ /* Other constants */ diff --git a/include/mysqld_error.h b/include/mysqld_error.h index 4f46c40ff49..e412f95a8e4 100644 --- a/include/mysqld_error.h +++ b/include/mysqld_error.h @@ -205,4 +205,7 @@ #define ER_SLAVE_THREAD 1202 #define ER_TOO_MANY_USER_CONNECTIONS 1203 #define ER_SET_CONSTANTS_ONLY 1204 -#define ER_ERROR_MESSAGES 205 +#define ER_LOCK_WAIT_TIMEOUT 1205 +#define ER_LOCK_TABLE_FULL 1206 +#define ER_READ_ONLY_TRANSACTION 1207 +#define ER_ERROR_MESSAGES 208 diff --git a/sql/field.cc b/sql/field.cc index 1f1f00b161b..629ae899494 100644 --- a/sql/field.cc +++ b/sql/field.cc @@ -4087,6 +4087,59 @@ const char *Field_blob::unpack(char *to, const char *from) } +#ifdef HAVE_GEMINI_DB +/* Blobs in Gemini tables are stored separately from the rows which contain +** them (except for tiny blobs, which are stored in the row). For all other +** blob types (blob, mediumblob, longblob), the row contains the length of +** the blob data and a blob id. These methods (pack_id, get_id, and +** unpack_id) handle packing and unpacking blob fields in Gemini rows. 
+*/ +char *Field_blob::pack_id(char *to, const char *from, ulonglong id, uint max_length) +{ + char *save=ptr; + ptr=(char*) from; + ulong length=get_length(); // Length of from string + if (length > max_length) + { + ptr=to; + length=max_length; + store_length(length); // Store max length + ptr=(char*) from; + } + else + memcpy(to,from,packlength); // Copy length + if (length) + { + int8store(to+packlength, id); + } + ptr=save; // Restore org row pointer + return to+packlength+sizeof(id); +} + + +ulonglong Field_blob::get_id(const char *from) +{ + ulonglong id = 0; + ulong length=get_length(from); + if (length) + longlongget(id, from+packlength); + return id; +} + + +const char *Field_blob::unpack_id(char *to, const char *from, const char *bdata) +{ + memcpy(to,from,packlength); + ulong length=get_length(from); + from+=packlength; + if (length) + memcpy_fixed(to+packlength, &bdata, sizeof(bdata)); + else + bzero(to+packlength,sizeof(bdata)); + return from+sizeof(ulonglong); +} +#endif /* HAVE_GEMINI_DB */ + /* Keys for blobs are like keys on varchars */ int Field_blob::pack_cmp(const char *a, const char *b, uint key_length) diff --git a/sql/field.h b/sql/field.h index 2f03d849c9b..b5d7c613701 100644 --- a/sql/field.h +++ b/sql/field.h @@ -869,6 +869,13 @@ public: } char *pack(char *to, const char *from, uint max_length= ~(uint) 0); const char *unpack(char *to, const char *from); +#ifdef HAVE_GEMINI_DB + char *pack_id(char *to, const char *from, ulonglong id, + uint max_length= ~(uint) 0); + ulonglong get_id(const char *from); + const char *unpack_id(char *to, const char *from, const char *bdata); + enum_field_types blobtype() { return (packlength == 1 ? 
FIELD_TYPE_TINY_BLOB : FIELD_TYPE_BLOB);} +#endif char *pack_key(char *to, const char *from, uint max_length); char *pack_key_from_key_image(char* to, const char *from, uint max_length); int pack_cmp(const char *a, const char *b, uint key_length); diff --git a/sql/ha_gemini.cc b/sql/ha_gemini.cc index 73241c60be7..733f0aa3a7d 100644 --- a/sql/ha_gemini.cc +++ b/sql/ha_gemini.cc @@ -19,10 +19,13 @@ #pragma implementation // gcc: Class implementation #endif -#include "mysql_priv.h" -#ifdef HAVE_GEMINI_DB +#include +#include "mysql_priv.h" #include "my_pthread.h" + +#ifdef HAVE_GEMINI_DB +#include "ha_gemini.h" #include "dbconfig.h" #include "dsmpub.h" #include "recpub.h" @@ -34,7 +37,17 @@ #include #include #include "geminikey.h" -#include "ha_gemini.h" + +#define gemini_msg MSGD_CALLBACK + +pthread_mutex_t gem_mutex; + +static HASH gem_open_tables; +static GEM_SHARE *get_share(const char *table_name, TABLE *table); +static int free_share(GEM_SHARE *share, bool mutex_is_locked); +static byte* gem_get_key(GEM_SHARE *share,uint *length, + my_bool not_used __attribute__((unused))); +static void gemini_lock_table_overflow_error(dsmContext_t *pcontext); const char *ha_gemini_ext=".gmd"; const char *ha_gemini_idx_ext=".gmi"; @@ -48,6 +61,7 @@ long gemini_locktablesize; long gemini_lock_wait_timeout; long gemini_spin_retries; long gemini_connection_limit; +char *gemini_basedir; const char gemini_dbname[] = "gemini"; dsmContext_t *pfirstContext = NULL; @@ -61,7 +75,7 @@ TYPELIB gemini_recovery_typelib= {array_elements(gemini_recovery_names),"", const int start_of_name = 2; /* Name passed as .// and we're not interested in the ./ */ -static const int keyBufSize = MYMAXKEYSIZE * 2; +static const int keyBufSize = MAXKEYSZ + FULLKEYHDRSZ + MAX_REF_PARTS + 16; static int gemini_tx_begin(THD *thd); static void print_msg(THD *thd, const char *table_name, const char *op_name, @@ -87,40 +101,56 @@ bool gemini_init(void) goto badret; } + /* dsmContextCreate and 
dsmContextSetString(DSM_TAGDB_DBNAME) must + ** be the first DSM calls we make so that we can log any errors which + ** occur in subsequent DSM calls. DO NOT INSERT ANY DSM CALLS IN + ** BETWEEN THIS COMMENT AND THE COMMENT THAT SAYS "END OF CODE..." + */ /* Gotta connect to the database regardless of the operation */ rc = dsmContextCreate(&pfirstContext); if( rc != 0 ) { - printf("dsmContextCreate failed %ld\n",rc); + gemini_msg(pfirstContext, "dsmContextCreate failed %l",rc); goto badret; } + /* This call will also open the log file */ rc = dsmContextSetString(pfirstContext, DSM_TAGDB_DBNAME, strlen(gemini_dbname), (TEXT *)gemini_dbname); if( rc != 0 ) { - printf("Dbname tag failed %ld\n", rc); + gemini_msg(pfirstContext, "Dbname tag failed %l", rc); goto badret; } + /* END OF CODE NOT TO MESS WITH */ fn_format(pmsgsfile, GEM_MSGS_FILE, language, ".db", 2 | 4); rc = dsmContextSetString(pfirstContext, DSM_TAGDB_MSGS_FILE, strlen(pmsgsfile), (TEXT *)pmsgsfile); if( rc != 0 ) { - printf("MSGS_DIR tag failed %ld\n", rc); + gemini_msg(pfirstContext, "MSGS_DIR tag failed %l", rc); + goto badret; + } + + strxmov(pmsgsfile, gemini_basedir, GEM_SYM_FILE, NullS); + rc = dsmContextSetString(pfirstContext, DSM_TAGDB_SYMFILE, + strlen(pmsgsfile), (TEXT *)pmsgsfile); + if( rc != 0 ) + { + gemini_msg(pfirstContext, "SYMFILE tag failed %l", rc); goto badret; } rc = dsmContextSetLong(pfirstContext,DSM_TAGDB_ACCESS_TYPE,DSM_ACCESS_STARTUP); if ( rc != 0 ) { - printf("ACCESS TAG set failed %ld\n",rc); + gemini_msg(pfirstContext, "ACCESS TAG set failed %l",rc); goto badret; } rc = dsmContextSetLong(pfirstContext,DSM_TAGDB_ACCESS_ENV, DSM_SQL_ENGINE); if( rc != 0 ) { - printf("ACCESS_ENV set failed %ld",rc); + gemini_msg(pfirstContext, "ACCESS_ENV set failed %l",rc); goto badret; } @@ -129,7 +159,7 @@ bool gemini_init(void) (TEXT *)mysql_real_data_home); if( rc != 0 ) { - printf("Datadir tag failed %ld\n", rc); + gemini_msg(pfirstContext, "Datadir tag failed %l", rc); goto badret; } 
@@ -137,7 +167,7 @@ bool gemini_init(void)
 gemini_connection_limit);
 if(rc != 0)
 {
- printf("MAX_USERS tag set failed %ld",rc);
+ gemini_msg(pfirstContext, "MAX_USERS tag set failed %l",rc);
 goto badret;
 }
@@ -145,7 +175,7 @@
 gemini_lock_wait_timeout);
 if(rc != 0)
 {
- printf("MAX_LOCK_ENTRIES tag set failed %ld",rc);
+ gemini_msg(pfirstContext, "MAX_LOCK_ENTRIES tag set failed %l",rc);
 goto badret;
 }
@@ -153,7 +183,7 @@
 gemini_locktablesize);
 if(rc != 0)
 {
- printf("MAX_LOCK_ENTRIES tag set failed %ld",rc);
+ gemini_msg(pfirstContext, "MAX_LOCK_ENTRIES tag set failed %l",rc);
 goto badret;
 }
@@ -161,7 +191,7 @@
 gemini_spin_retries);
 if(rc != 0)
 {
- printf("SPIN_AMOUNT tag set failed %ld",rc);
+ gemini_msg(pfirstContext, "SPIN_AMOUNT tag set failed %l",rc);
 goto badret;
 }
@@ -172,22 +202,22 @@
 gemini_buffer_cache);
 if(rc != 0)
 {
- printf("DB_BUFFERS tag set failed %ld",rc);
+ gemini_msg(pfirstContext, "DB_BUFFERS tag set failed %l",rc);
 goto badret;
 }
 rc = dsmContextSetLong(pfirstContext, DSM_TAGDB_FLUSH_AT_COMMIT,
- ((gemini_options & GEMOPT_FLUSH_LOG) ? 1 : 0));
+ ((gemini_options & GEMOPT_FLUSH_LOG) ? 0 : 1));
 if(rc != 0)
 {
- printf("FLush_Log_At_Commit tag set failed %ld",rc);
+ gemini_msg(pfirstContext, "Flush_Log_At_Commit tag set failed %l",rc);
 goto badret;
 }
 rc = dsmContextSetLong(pfirstContext, DSM_TAGDB_DIRECT_IO,
 ((gemini_options & GEMOPT_UNBUFFERED_IO) ? 1 : 0));
 if(rc != 0)
 {
- printf("DIRECT_IO tag set failed %ld",rc);
+ gemini_msg(pfirstContext, "DIRECT_IO tag set failed %l",rc);
 goto badret;
 }
@@ -195,10 +225,20 @@
 ((gemini_recovery_options & GEMINI_RECOVERY_FULL) ? 1 : 0));
 if(rc != 0)
 {
- printf("CRASH_PROTECTION tag set failed %ld",rc);
+ gemini_msg(pfirstContext, "CRASH_PROTECTION tag set failed %l",rc);
 goto badret;
 }
+ if (gemini_recovery_options & GEMINI_RECOVERY_FORCE)
+ {
+ rc = dsmContextSetLong(pfirstContext, DSM_TAGDB_FORCE_ACCESS, 1);
+ if(rc != 0)
+ {
+ gemini_msg(pfirstContext, "FORCE_ACCESS tag set failed %l",rc);
+ goto badret;
+ }
+ }
+
 /* cluster size will come in bytes, need to convert it to 16 K units. */
 gemini_log_cluster_size = (gemini_log_cluster_size + 16383) / 16384;
@@ -207,7 +247,7 @@
 if(rc != 0)
 {
- printf("CRASH_PROTECTION tag set failed %ld",rc);
+ gemini_msg(pfirstContext, "CRASH_PROTECTION tag set failed %l",rc);
 goto badret;
 }
@@ -215,12 +255,20 @@
 DSM_DB_OPENDB | DSM_DB_OPENFILE);
 if( rc != 0 )
 {
- printf("dsmUserConnect failed rc = %ld\n",rc);
+ /* Message is output in dbenv() */
 goto badret;
 }
 /* Set access to shared for subsequent user connects */
 rc = dsmContextSetLong(pfirstContext,DSM_TAGDB_ACCESS_TYPE,DSM_ACCESS_SHARED);
+ rc = gemini_helper_threads(pfirstContext);
+
+
+ (void) hash_init(&gem_open_tables,32,0,0,
+ (hash_get_key) gem_get_key,0,0);
+ pthread_mutex_init(&gem_mutex,NULL);
+
+
 DBUG_RETURN(0);
badret:
@@ -231,30 +279,40 @@ badret:
 static int gemini_helper_threads(dsmContext_t *pContext)
 {
 int rc = 0;
+ int i;
+ pthread_attr_t thr_attr;
+ pthread_t hThread;
 DBUG_ENTER("gemini_helper_threads");
- rc = pthread_create (&hThread, 0, gemini_watchdog, (void *)pContext);
+
+ (void) pthread_attr_init(&thr_attr);
+#if !defined(HAVE_DEC_3_2_THREADS)
+ pthread_attr_setscope(&thr_attr,PTHREAD_SCOPE_SYSTEM);
+ (void) pthread_attr_setdetachstate(&thr_attr,PTHREAD_CREATE_DETACHED);
+ pthread_attr_setstacksize(&thr_attr,32768);
+#endif
+ rc = pthread_create (&hThread, &thr_attr, gemini_watchdog, (void *)pContext);
 if (rc)
 {
- printf("Can't create gemini watchdog thread");
+ gemini_msg(pContext, "Can't create Gemini watchdog thread");
 goto done;
 }
if(!gemini_io_threads) goto done; - rc = pthread_create(&hThread, 0, gemini_rl_writer, (void *)pContext); + rc = pthread_create(&hThread, &thr_attr, gemini_rl_writer, (void *)pContext); if(rc) { - printf("Can't create gemini recovery log writer thread"); + gemini_msg(pContext, "Can't create Gemini recovery log writer thread"); goto done; } - for( int i = gemini_io_threads - 1;i;i--) + for(i = gemini_io_threads - 1;i;i--) { - rc = pthread_create(&hThread, 0, gemini_apw, (void *)pContext); + rc = pthread_create(&hThread, &thr_attr, gemini_apw, (void *)pContext); if(rc) { - printf("Can't create gemini page writer thread"); + gemini_msg(pContext, "Can't create Gemini database page writer thread"); goto done; } } @@ -273,7 +331,7 @@ pthread_handler_decl(gemini_watchdog,arg ) rc = dsmContextCopy(pcontext,&pmyContext, DSMCONTEXTDB); if( rc != 0 ) { - printf("dsmContextCopy failed for watchdog %d\n",rc); + gemini_msg(pcontext, "dsmContextCopy failed for Gemini watchdog %d",rc); return 0; } @@ -281,7 +339,7 @@ pthread_handler_decl(gemini_watchdog,arg ) if( rc != 0 ) { - printf("dsmUserConnect failed for watchdog %d\n",rc); + gemini_msg(pcontext, "dsmUserConnect failed for Gemini watchdog %d",rc); return 0; } @@ -311,7 +369,7 @@ pthread_handler_decl(gemini_rl_writer,arg ) rc = dsmContextCopy(pcontext,&pmyContext, DSMCONTEXTDB); if( rc != 0 ) { - printf("dsmContextCopy failed for recovery log writer %d\n",rc); + gemini_msg(pcontext, "dsmContextCopy failed for Gemini recovery log writer %d",rc); return 0; } @@ -319,7 +377,7 @@ pthread_handler_decl(gemini_rl_writer,arg ) if( rc != 0 ) { - printf("dsmUserConnect failed for recovery log writer %d\n",rc); + gemini_msg(pcontext, "dsmUserConnect failed for Gemini recovery log writer %d",rc); return 0; } @@ -348,7 +406,7 @@ pthread_handler_decl(gemini_apw,arg ) rc = dsmContextCopy(pcontext,&pmyContext, DSMCONTEXTDB); if( rc != 0 ) { - printf("dsmContextCopy failed for gemini page writer %d\n",rc); + gemini_msg(pcontext, 
"dsmContextCopy failed for Gemini page writer %d",rc); my_thread_end(); return 0; } @@ -356,7 +414,7 @@ pthread_handler_decl(gemini_apw,arg ) if( rc != 0 ) { - printf("dsmUserConnect failed for gemini page writer %d\n",rc); + gemini_msg(pcontext, "dsmUserConnect failed for Gemini page writer %d",rc); my_thread_end(); return 0; } @@ -388,7 +446,7 @@ int gemini_set_option_long(int optid, long optval) } if (rc) { - printf("SPIN_AMOUNT tag set failed %ld",rc); + gemini_msg(pfirstContext, "SPIN_AMOUNT tag set failed %l",rc); } else { @@ -410,7 +468,7 @@ static int gemini_connect(THD *thd) DSMCONTEXTDB); if( rc != 0 ) { - printf("dsmContextCopy failed %ld\n",rc); + gemini_msg(pfirstContext, "dsmContextCopy failed %l",rc); return(rc); } @@ -418,7 +476,7 @@ static int gemini_connect(THD *thd) if( rc != 0 ) { - printf("dsmUserConnect failed %ld\n",rc); + gemini_msg(pfirstContext, "dsmUserConnect failed %l",rc); return(rc); } @@ -444,6 +502,9 @@ bool gemini_end(void) THD *thd; DBUG_ENTER("gemini_end"); + + hash_free(&gem_open_tables); + pthread_mutex_destroy(&gem_mutex); if(pfirstContext) { rc = dsmShutdownSet(pfirstContext, DSM_SHUTDOWN_NORMAL); @@ -534,6 +595,24 @@ int gemini_rollback_to_savepoint(THD *thd) DBUG_RETURN(rc); } +int gemini_recovery_logging(THD *thd, bool on) +{ + int error; + int noLogging; + + if(!thd->gemini.context) + return 0; + + if(on) + noLogging = 0; + else + noLogging = 1; + + error = dsmContextSetLong((dsmContext_t *)thd->gemini.context, + DSM_TAGCONTEXT_NO_LOGGING,noLogging); + return error; +} + /* gemDataType - translates from mysql data type constant to gemini key services data type contstant */ int gemDataType ( int mysqlType ) @@ -599,8 +678,13 @@ int ha_gemini::open(const char *name, int mode, uint test_if_locked) DBUG_ENTER("ha_gemini::open"); thd = current_thd; - thr_lock_init(&alock); - thr_lock_data_init(&alock,&lock,(void*)0); + /* Init shared structure */ + if (!(share=get_share(name,table))) + { + DBUG_RETURN(1); /* purecov: inspected 
*/ + } + thr_lock_data_init(&share->lock,&lock,(void*) 0); + ref_length = sizeof(dsmRecid_t); if(thd->gemini.context == NULL) @@ -610,7 +694,7 @@ int ha_gemini::open(const char *name, int mode, uint test_if_locked) if(rc) return rc; } - if (!(rec_buff=my_malloc(table->rec_buff_length, + if (!(rec_buff=(byte*)my_malloc(table->rec_buff_length, MYF(MY_WME)))) { DBUG_RETURN(1); @@ -635,6 +719,12 @@ int ha_gemini::open(const char *name, int mode, uint test_if_locked) rc = dsmObjectNameToNum((dsmContext_t *)thd->gemini.context, (dsmText_t *)name_buff, &tableId); + if (rc) + { + gemini_msg((dsmContext_t *)thd->gemini.context, + "Unable to find table number for %s", name_buff); + DBUG_RETURN(rc); + } } tableNumber = tableId; @@ -649,8 +739,33 @@ int ha_gemini::open(const char *name, int mode, uint test_if_locked) crashed while being in the midst of a repair operation */ rc = dsmTableStatus((dsmContext_t *)thd->gemini.context, tableNumber,&tableStatus); - if(tableStatus) + if(tableStatus == DSM_OBJECT_IN_REPAIR) tableStatus = HA_ERR_CRASHED; + + pthread_mutex_lock(&share->mutex); + share->use_count++; + pthread_mutex_unlock(&share->mutex); + + if (table->blob_fields) + { + /* Allocate room for the blob ids from an unpacked row. Note that + ** we may not actually need all of this space because tiny blobs + ** are stored in the packed row, not in a separate storage object + ** like larger blobs. But we allocate an entry for all blobs to + ** keep the code simpler. 
+ */
+ pBlobDescs = (gemBlobDesc_t *)my_malloc(
+ table->blob_fields * sizeof(gemBlobDesc_t),
+ MYF(MY_WME | MY_ZEROFILL));
+ }
+ else
+ {
+ pBlobDescs = 0;
+ }
+
+ get_index_stats(thd);
+ info(HA_STATUS_CONST);
+
 DBUG_RETURN (rc);
 }
@@ -680,6 +795,12 @@ int ha_gemini::index_open(char *tableName)
 rc = dsmObjectNameToNum((dsmContext_t *)thd->gemini.context,
 (dsmText_t *)tableName,
 &objectNumber);
+ if (rc)
+ {
+ gemini_msg((dsmContext_t *)thd->gemini.context,
+ "Unable to find index number for %s", tableName);
+ DBUG_RETURN(rc);
+ }
 pindexNumbers[i] = objectNumber;
 }
 }
@@ -692,12 +813,22 @@ int ha_gemini::close(void)
 {
 DBUG_ENTER("ha_gemini::close");
- thr_lock_delete(&alock);
- my_free(rec_buff,MYF(MY_ALLOW_ZERO_PTR));
+ my_free((char*)rec_buff,MYF(MY_ALLOW_ZERO_PTR));
 rec_buff = 0;
 my_free((char *)pindexNumbers,MYF(MY_ALLOW_ZERO_PTR));
 pindexNumbers = 0;
- DBUG_RETURN(0);
+
+ if (pBlobDescs)
+ {
+ for (uint i = 0; i < table->blob_fields; i++)
+ {
+ my_free((char*)pBlobDescs[i].pBlob, MYF(MY_ALLOW_ZERO_PTR));
+ }
+ my_free((char *)pBlobDescs, MYF(0));
+ pBlobDescs = 0;
+ }
+
+ DBUG_RETURN(free_share(share, 0));
 }
@@ -709,7 +840,7 @@ int ha_gemini::write_row(byte * record)
 DBUG_ENTER("write_row");
- if(tableStatus)
+ if(tableStatus == HA_ERR_CRASHED)
 DBUG_RETURN(tableStatus);
 thd = current_thd;
@@ -737,10 +868,11 @@
 /* A set insert-id statement so set the auto-increment value if this
 value is higher than it's current value */
 error = dsmTableAutoIncrement((dsmContext_t *)thd->gemini.context,
- tableNumber, (ULONG64 *)&nr);
+ tableNumber, (ULONG64 *)&nr,1);
 if(thd->next_insert_id > nr)
 {
- error = dsmTableAutoIncrementSet((dsmContext_t *)thd->gemini.context,tableNumber,
+ error = dsmTableAutoIncrementSet((dsmContext_t *)thd->gemini.context,
+ tableNumber,
 (ULONG64)thd->next_insert_id);
 }
 }
@@ -749,11 +881,13 @@
 }
 dsmRecord.table = tableNumber;
- 
dsmRecord.maxLength = table->reclength; + dsmRecord.maxLength = table->rec_buff_length; if ((error=pack_row((byte **)&dsmRecord.pbuffer, (int *)&dsmRecord.recLength, - record))) + record, FALSE))) + { DBUG_RETURN(error); + } error = dsmRecordCreate((dsmContext_t *)thd->gemini.context, &dsmRecord,0); @@ -769,6 +903,8 @@ int ha_gemini::write_row(byte * record) thd->gemini.needSavepoint = 1; } } + if(error == DSM_S_RQSTREJ) + error = HA_ERR_LOCK_WAIT_TIMEOUT; DBUG_RETURN(error); } @@ -777,10 +913,17 @@ longlong ha_gemini::get_auto_increment() { longlong nr; int error; + int update; THD *thd=current_thd; + if(thd->lex.sql_command == SQLCOM_SHOW_TABLES) + update = 0; + else + update = 1; + error = dsmTableAutoIncrement((dsmContext_t *)thd->gemini.context, - tableNumber, (ULONG64 *)&nr); + tableNumber, (ULONG64 *)&nr, + update); return nr; } @@ -828,8 +971,8 @@ int ha_gemini::handleIndexEntry(const byte * record, dsmRecid_t recid, expects that the three lead bytes of the header are not counted in this length -- But cxKeyPrepare also expects that these three bytes are present in the keystr */ - theKey.akey.keyLen = (COUNT)keyStringLen - 3; - theKey.akey.unknown_comp = thereIsAnull; + theKey.akey.keyLen = (COUNT)keyStringLen - FULLKEYHDRSZ; + theKey.akey.unknown_comp = (dsmBoolean_t)thereIsAnull; theKey.akey.word_index = 0; theKey.akey.descending_key =0; if(option == KEY_CREATE) @@ -880,6 +1023,7 @@ int ha_gemini::createKeyString(const byte * record, KEY *pkeyinfo, int componentLen; int fieldType; int isNull; + uint key_part_length; KEY_PART_INFO *key_part; @@ -892,21 +1036,35 @@ int ha_gemini::createKeyString(const byte * record, KEY *pkeyinfo, unsigned char *pos; key_part = pkeyinfo->key_part + i; + key_part_length = key_part->length; fieldType = gemDataType(key_part->field->type()); - if(fieldType == GEM_CHAR) + switch (fieldType) { + case GEM_CHAR: + { /* Save the current ptr to the field in case we're building a key to remove an old key value when an indexed character 
column gets updated. */ char *ptr = key_part->field->ptr; key_part->field->ptr = (char *)record + key_part->offset; - key_part->field->sort_string(rec_buff, key_part->length); + key_part->field->sort_string((char*)rec_buff, key_part->length); key_part->field->ptr = ptr; pos = (unsigned char *)rec_buff; - } - else - { + } + break; + + case GEM_TINYBLOB: + case GEM_BLOB: + case GEM_MEDIUMBLOB: + case GEM_LONGBLOB: + ((Field_blob*)key_part->field)->get_ptr((char**)&pos); + key_part_length = ((Field_blob*)key_part->field)->get_length( + (char*)record + key_part->offset); + break; + + default: pos = (unsigned char *)record + key_part->offset; + break; } isNull = record[key_part->null_offset] & key_part->null_bit; @@ -914,7 +1072,7 @@ int ha_gemini::createKeyString(const byte * record, KEY *pkeyinfo, *thereIsAnull = true; rc = gemFieldToIdxComponent(pos, - (unsigned long) key_part->length, + (unsigned long) key_part_length, fieldType, isNull , key_part->field->flags & UNSIGNED_FLAG, @@ -951,7 +1109,7 @@ int ha_gemini::update_row(const byte * old_record, byte * new_record) } for (uint keynr=0 ; keynr < table->keys ; keynr++) { - if(key_cmp(keynr,old_record, new_record)) + if(key_cmp(keynr,old_record, new_record,false)) { error = handleIndexEntry(old_record,lastRowid,KEY_DELETE,keynr); if(error) @@ -973,10 +1131,10 @@ int ha_gemini::update_row(const byte * old_record, byte * new_record) dsmRecord.table = tableNumber; dsmRecord.recid = lastRowid; - dsmRecord.maxLength = table->reclength; + dsmRecord.maxLength = table->rec_buff_length; if ((error=pack_row((byte **)&dsmRecord.pbuffer, (int *)&dsmRecord.recLength, - new_record))) + new_record, TRUE))) { DBUG_RETURN(error); } @@ -992,6 +1150,7 @@ int ha_gemini::delete_row(const byte * record) int error = 0; dsmRecord_t dsmRecord; THD *thd = current_thd; + dsmContext_t *pcontext = (dsmContext_t *)thd->gemini.context; DBUG_ENTER("delete_row"); statistic_increment(ha_delete_count,&LOCK_status); @@ -999,9 +1158,7 @@ int 
ha_gemini::delete_row(const byte * record) if(thd->gemini.needSavepoint) { thd->gemini.savepoint++; - error = dsmTransaction((dsmContext_t *)thd->gemini.context, - &thd->gemini.savepoint, - DSMTXN_SAVE, 0, 0); + error = dsmTransaction(pcontext, &thd->gemini.savepoint, DSMTXN_SAVE, 0, 0); if (error) DBUG_RETURN(error); thd->gemini.needSavepoint = 0; @@ -1013,8 +1170,27 @@ int ha_gemini::delete_row(const byte * record) error = handleIndexEntries(record, dsmRecord.recid,KEY_DELETE); if(!error) { - error = dsmRecordDelete((dsmContext_t *)thd->gemini.context, - &dsmRecord, 0, NULL); + error = dsmRecordDelete(pcontext, &dsmRecord, 0, NULL); + } + + /* Delete any blobs associated with this row */ + if (table->blob_fields) + { + dsmBlob_t gemBlob; + + gemBlob.areaType = DSMOBJECT_BLOB; + gemBlob.blobObjNo = tableNumber; + for (uint i = 0; i < table->blob_fields; i++) + { + if (pBlobDescs[i].blobId) + { + gemBlob.blobId = pBlobDescs[i].blobId; + my_free((char *)pBlobDescs[i].pBlob, MYF(MY_ALLOW_ZERO_PTR)); + dsmBlobStart(pcontext, &gemBlob); + dsmBlobDelete(pcontext, &gemBlob, NULL); + /* according to DSM doc, no need to call dsmBlobEnd() */ + } + } } DBUG_RETURN(error); @@ -1023,7 +1199,6 @@ int ha_gemini::delete_row(const byte * record) int ha_gemini::index_init(uint keynr) { int error = 0; - int keyStringLen; THD *thd; DBUG_ENTER("index_init"); thd = current_thd; @@ -1046,19 +1221,9 @@ int ha_gemini::index_init(uint keynr) } pbracketBase->index = 0; pbracketLimit->index = (dsmIndex_t)pindexNumbers[keynr]; - pbracketLimit->keycomps = 1; - keyStringLen = 0; - error = gemKeyHigh(pbracketLimit->keystr, &keyStringLen, - pbracketLimit->index); - - /* We have to subtract three here since cxKeyPrepare - expects that the three lead bytes of the header are - not counted in this length -- But cxKeyPrepare also - expects that these three bytes are present in the keystr */ - pbracketLimit->keyLen = (COUNT)keyStringLen - 3; - pbracketBase->descending_key = 
pbracketLimit->descending_key = 0; pbracketBase->ksubstr = pbracketLimit->ksubstr = 0; + pbracketLimit->keycomps = pbracketBase->keycomps = 1; pfoundKey = (dsmKey_t *)my_malloc(sizeof(dsmKey_t) + keyBufSize,MYF(MY_WME)); if(!pfoundKey) @@ -1130,6 +1295,7 @@ int ha_gemini::pack_key( uint keynr, dsmKey_t *pkey, { uint offset=0; unsigned char *pos; + uint key_part_length = key_part->length; int fieldType; if (key_part->null_bit) @@ -1141,7 +1307,7 @@ int ha_gemini::pack_key( uint keynr, dsmKey_t *pkey, key_ptr+= key_part->store_length; rc = gemFieldToIdxComponent( (unsigned char *)key_ptr + offset, - (unsigned long) key_part->length, + (unsigned long) key_part_length, 0, 1 , /* Tells it to build a null component */ key_part->field->flags & UNSIGNED_FLAG, @@ -1153,20 +1319,31 @@ int ha_gemini::pack_key( uint keynr, dsmKey_t *pkey, } } fieldType = gemDataType(key_part->field->type()); - if(fieldType == GEM_CHAR) + switch (fieldType) { - key_part->field->store(key_ptr + offset, key_part->length); - key_part->field->sort_string(rec_buff, key_part->length); + case GEM_CHAR: + key_part->field->store((char*)key_ptr + offset, key_part->length); + key_part->field->sort_string((char*)rec_buff, key_part->length); pos = (unsigned char *)rec_buff; - } - else - { + break; + + case GEM_TINYBLOB: + case GEM_BLOB: + case GEM_MEDIUMBLOB: + case GEM_LONGBLOB: + ((Field_blob*)key_part->field)->get_ptr((char**)&pos); + key_part_length = ((Field_blob*)key_part->field)->get_length( + (char*)key_ptr + offset); + break; + + default: pos = (unsigned char *)key_ptr + offset; + break; } rc = gemFieldToIdxComponent( pos, - (unsigned long) key_part->length, + (unsigned long) key_part_length, fieldType, 0 , key_part->field->flags & UNSIGNED_FLAG, @@ -1189,7 +1366,7 @@ void ha_gemini::unpack_key(char *record, dsmKey_t *key, uint index) int fieldIsNull, fieldType; int rc = 0; - char unsigned *pos= &key->keystr[7]; + char unsigned *pos= &key->keystr[FULLKEYHDRSZ+4/* 4 for the index number*/]; for ( ; 
key_part != end; key_part++) { @@ -1202,7 +1379,8 @@ void ha_gemini::unpack_key(char *record, dsmKey_t *key, uint index) } rc = gemIdxComponentToField(pos, fieldType, (unsigned char *)record + key_part->field->offset(), - key_part->field->field_length, + //key_part->field->field_length, + key_part->length, key_part->field->decimals(), &fieldIsNull); if(fieldIsNull) @@ -1266,12 +1444,12 @@ int ha_gemini::index_read(byte * buf, const byte * key, pbracketLimit->keyLen = componentLen; } - /* We have to subtract three here since cxKeyPrepare + /* We have to subtract the header size here since cxKeyPrepare expects that the three lead bytes of the header are not counted in this length -- But cxKeyPrepare also expects that these three bytes are present in the keystr */ - pbracketBase->keyLen -= 3; - pbracketLimit->keyLen -= 3; + pbracketBase->keyLen -= FULLKEYHDRSZ; + pbracketLimit->keyLen -= FULLKEYHDRSZ; thd = current_thd; @@ -1294,7 +1472,7 @@ int ha_gemini::index_next(byte * buf) dsmMask_t findMode; DBUG_ENTER("index_next"); - if(tableStatus) + if(tableStatus == HA_ERR_CRASHED) DBUG_RETURN(tableStatus); thd = current_thd; @@ -1304,9 +1482,12 @@ int ha_gemini::index_next(byte * buf) error = gemKeyLow(pbracketBase->keystr, &keyStringLen, pbracketLimit->index); - pbracketBase->keyLen = (COUNT)keyStringLen - 3; + pbracketBase->keyLen = (COUNT)keyStringLen - FULLKEYHDRSZ; pbracketBase->index = pbracketLimit->index; - pbracketBase->keycomps = 1; + error = gemKeyHigh(pbracketLimit->keystr, &keyStringLen, + pbracketLimit->index); + pbracketLimit->keyLen = (COUNT)keyStringLen - FULLKEYHDRSZ; + findMode = DSMFINDFIRST; } else @@ -1369,24 +1550,20 @@ int ha_gemini::index_last(byte * buf) error = gemKeyLow(pbracketBase->keystr, &keyStringLen, pbracketLimit->index); - if(error) - goto errorReturn; - pbracketBase->keyLen = (COUNT)keyStringLen - 3; + pbracketBase->keyLen = (COUNT)keyStringLen - FULLKEYHDRSZ; pbracketBase->index = pbracketLimit->index; - pbracketBase->keycomps = 1; + 
error = gemKeyHigh(pbracketLimit->keystr, &keyStringLen, + pbracketLimit->index); + pbracketLimit->keyLen = (COUNT)keyStringLen - FULLKEYHDRSZ; error = findRow(thd,DSMFINDLAST,buf); -errorReturn: if (error == DSM_S_ENDLOOP) error = HA_ERR_END_OF_FILE; table->status = error ? STATUS_NOT_FOUND : 0; DBUG_RETURN(error); - - table->status = error ? STATUS_NOT_FOUND : 0; - DBUG_RETURN(error); } int ha_gemini::rnd_init(bool scan) @@ -1414,7 +1591,7 @@ int ha_gemini::rnd_next(byte *buf) DBUG_ENTER("rnd_next"); - if(tableStatus) + if(tableStatus == HA_ERR_CRASHED) DBUG_RETURN(tableStatus); thd = current_thd; @@ -1429,7 +1606,7 @@ int ha_gemini::rnd_next(byte *buf) dsmRecord.recid = lastRowid; dsmRecord.pbuffer = (dsmBuffer_t *)rec_buff; dsmRecord.recLength = table->reclength; - dsmRecord.maxLength = table->reclength; + dsmRecord.maxLength = table->rec_buff_length; error = dsmTableScan((dsmContext_t *)thd->gemini.context, &dsmRecord, DSMFINDNEXT, lockMode, 0); @@ -1437,17 +1614,23 @@ int ha_gemini::rnd_next(byte *buf) if(!error) { lastRowid = dsmRecord.recid; - unpack_row((char *)buf,(char *)dsmRecord.pbuffer); + error = unpack_row((char *)buf,(char *)dsmRecord.pbuffer); } if(!error) ; - else if (error == DSM_S_ENDLOOP) - error = HA_ERR_END_OF_FILE; - else if (error == DSM_S_RQSTREJ) - error = HA_ERR_LOCK_WAIT_TIMEOUT; - else if (error == DSM_S_LKTBFULL) - error = HA_ERR_LOCK_TABLE_FULL; - + else + { + lastRowid = 0; + if (error == DSM_S_ENDLOOP) + error = HA_ERR_END_OF_FILE; + else if (error == DSM_S_RQSTREJ) + error = HA_ERR_LOCK_WAIT_TIMEOUT; + else if (error == DSM_S_LKTBFULL) + { + error = HA_ERR_LOCK_TABLE_FULL; + gemini_lock_table_overflow_error((dsmContext_t *)thd->gemini.context); + } + } table->status = error ? 
STATUS_NOT_FOUND : 0; DBUG_RETURN(error); } @@ -1500,14 +1683,14 @@ int ha_gemini::fetch_row(void *gemini_context,const byte *buf) dsmRecord.recid = lastRowid; dsmRecord.pbuffer = (dsmBuffer_t *)rec_buff; dsmRecord.recLength = table->reclength; - dsmRecord.maxLength = table->reclength; + dsmRecord.maxLength = table->rec_buff_length; rc = dsmRecordGet((dsmContext_t *)gemini_context, &dsmRecord, 0); if(!rc) { - unpack_row((char *)buf,(char *)dsmRecord.pbuffer); + rc = unpack_row((char *)buf,(char *)dsmRecord.pbuffer); } DBUG_RETURN(rc); @@ -1544,7 +1727,7 @@ int ha_gemini::findRow(THD *thd, dsmMask_t findMode, byte *buf) if(key_read) { - unpack_key(buf, pkey, active_index); + unpack_key((char*)buf, pkey, active_index); } if(!key_read) /* unpack_key may have turned off key_read */ { @@ -1554,10 +1737,17 @@ int ha_gemini::findRow(THD *thd, dsmMask_t findMode, byte *buf) errorReturn: if(!rc) ; - else if(rc == DSM_S_RQSTREJ) - rc = HA_ERR_LOCK_WAIT_TIMEOUT; - else if (rc == DSM_S_LKTBFULL) - rc = HA_ERR_LOCK_TABLE_FULL; + else + { + lastRowid = 0; + if(rc == DSM_S_RQSTREJ) + rc = HA_ERR_LOCK_WAIT_TIMEOUT; + else if (rc == DSM_S_LKTBFULL) + { + rc = HA_ERR_LOCK_TABLE_FULL; + gemini_lock_table_overflow_error((dsmContext_t *)thd->gemini.context); + } + } DBUG_RETURN(rc); } @@ -1578,25 +1768,47 @@ void ha_gemini::info(uint flag) dsmStatus_t error; ULONG64 rows; + if(thd->gemini.context == NULL) + { + /* Need to get this thread a connection into the database */ + error = gemini_connect(thd); + if(error) + DBUG_VOID_RETURN; + } + error = dsmRowCount((dsmContext_t *)thd->gemini.context,tableNumber,&rows); records = (ha_rows)rows; deleted = 0; } - else if ((flag & HA_STATUS_CONST)) + if ((flag & HA_STATUS_CONST)) { - ; + ha_rows *rec_per_key = share->rec_per_key; + for (uint i = 0; i < table->keys; i++) + for(uint k=0; + k < table->key_info[i].key_parts; k++,rec_per_key++) + table->key_info[i].rec_per_key[k] = *rec_per_key; } - else if ((flag & HA_STATUS_ERRKEY)) + if ((flag & 
HA_STATUS_ERRKEY)) { errkey=last_dup_key; } - else if ((flag & HA_STATUS_TIME)) + if ((flag & HA_STATUS_TIME)) { ; } - else if ((flag & HA_STATUS_AUTO)) + if ((flag & HA_STATUS_AUTO)) { - ; + THD *thd = current_thd; + dsmStatus_t error; + + error = dsmTableAutoIncrement((dsmContext_t *)thd->gemini.context, + tableNumber, + (ULONG64 *)&auto_increment_value, + 0); + /* Should return the next auto-increment value that + will be given -- so we need to increment the one dsm + currently reports. */ + auto_increment_value++; } DBUG_VOID_RETURN; @@ -1658,7 +1870,22 @@ int ha_gemini::external_lock(THD *thd, int lock_type) thd->gemini.lock_count = 1; thd->gemini.tx_isolation = thd->tx_isolation; } - + // lockMode has already been set in store_lock + // If the statement about to be executed calls for + // exclusive locks and we're running at read uncommitted + // isolation level then raise an error. + if(thd->gemini.tx_isolation == ISO_READ_UNCOMMITTED) + { + if(lockMode == DSM_LK_EXCL) + { + DBUG_RETURN(HA_ERR_READ_ONLY_TRANSACTION); + } + else + { + lockMode = DSM_LK_NOLOCK; + } + } + if(thd->gemini.context == NULL) { /* Need to get this thread a connection into the database */ @@ -1678,6 +1905,8 @@ int ha_gemini::external_lock(THD *thd, int lock_type) rc = dsmObjectLock((dsmContext_t *)thd->gemini.context, (dsmObject_t)tableNumber,DSMOBJECT_TABLE,0, lockMode, 1, 0); + if(rc == DSM_S_RQSTREJ) + rc = HA_ERR_LOCK_WAIT_TIMEOUT; } } else /* lock_type == F_UNLK */ @@ -1703,18 +1932,24 @@ THR_LOCK_DATA **ha_gemini::store_lock(THD *thd, THR_LOCK_DATA **to, !thd->in_lock_tables) lock_type = TL_WRITE_ALLOW_WRITE; lock.type=lock_type; - - if(thd->gemini.tx_isolation == ISO_READ_UNCOMMITTED) - lockMode = DSM_LK_NOLOCK; - else if(table->reginfo.lock_type > TL_WRITE_ALLOW_READ) - lockMode = DSM_LK_EXCL; - else - lockMode = DSM_LK_SHARE; } + if(table->reginfo.lock_type > TL_WRITE_ALLOW_READ) + lockMode = DSM_LK_EXCL; + else + lockMode = DSM_LK_SHARE; + *to++= &lock; return to; } +void 
ha_gemini::update_create_info(HA_CREATE_INFO *create_info) +{ + table->file->info(HA_STATUS_AUTO | HA_STATUS_CONST); + if (!(create_info->used_fields & HA_CREATE_USED_AUTO)) + { + create_info->auto_increment_value=auto_increment_value; + } +} int ha_gemini::create(const char *name, register TABLE *form, HA_CREATE_INFO *create_info) @@ -1777,7 +2012,7 @@ int ha_gemini::create(const char *name, register TABLE *form, (dsmText_t *)"gemini_data_area"); if( rc != 0 ) { - printf("dsmAreaNew failed %ld\n",rc); + gemini_msg(pcontext, "dsmAreaNew failed %l",rc); return(rc); } @@ -1787,7 +2022,7 @@ int ha_gemini::create(const char *name, register TABLE *form, (dsmText_t *)&name_buff[start_of_name]); if( rc != 0 ) { - printf("dsmExtentCreate failed %ld\n",rc); + gemini_msg(pcontext, "dsmExtentCreate failed %l",rc); return(rc); } @@ -1805,6 +2040,20 @@ int ha_gemini::create(const char *name, register TABLE *form, (dsmText_t *)&name_buff[start_of_name], &dummy,&dummy); + if (rc == 0 && table->blob_fields) + { + /* create a storage object record for blob fields */ + rc = dsmObjectCreate(pcontext, areaNumber, &tableNumber, + DSMOBJECT_BLOB,0,0,0, + (dsmText_t *)&name_buff[start_of_name], + &dummy,&dummy); + if( rc != 0 ) + { + gemini_msg(pcontext, "dsmObjectCreate for blob object failed %l",rc); + return(rc); + } + } + if(rc == 0 && form->keys) { fn_format(name_buff, name, "", ha_gemini_idx_ext, 2 | 4); @@ -1814,7 +2063,7 @@ int ha_gemini::create(const char *name, register TABLE *form, (dsmText_t *)"gemini_index_area"); if( rc != 0 ) { - printf("dsmAreaNew failed %ld\n",rc); + gemini_msg(pcontext, "dsmAreaNew failed %l",rc); return(rc); } /* Create an extent */ @@ -1823,7 +2072,7 @@ int ha_gemini::create(const char *name, register TABLE *form, (dsmText_t *)&name_buff[start_of_name]); if( rc != 0 ) { - printf("dsmExtentCreate failed %ld\n",rc); + gemini_msg(pcontext, "dsmExtentCreate failed %l",rc); return(rc); } @@ -1859,10 +2108,11 @@ int ha_gemini::create(const char *name, 
register TABLE *form, } } - rc = dsmTableAutoIncrementSet(pcontext,tableNumber, - create_info->auto_increment_value); - - + /* The auto_increment value is the next one to be given + out so give dsm one less than this value */ + if(create_info->auto_increment_value) + rc = dsmTableAutoIncrementSet(pcontext,tableNumber, + create_info->auto_increment_value-1); /* Get a table lock on this table in case this table is being created as part of an alter table statement. We don't want @@ -1950,26 +2200,25 @@ int ha_gemini::delete_table(const char *pname) (dsmObject_t *)&tableNum); if (rc) { - printf("Cound not find table number for %s with string %s, %ld\n", - pname,name_buff,rc); + gemini_msg(pcontext, "Unable to find table number for %s", name_buff); rc = gemini_rollback(thd); if (rc) { - printf("Error in rollback %ld\n",rc); + gemini_msg(pcontext, "Error in rollback %l",rc); } DBUG_RETURN(rc); } - rc = dsmObjectInfo(pcontext, tableNum, DSMOBJECT_MIXTABLE, &tableArea, - &objectAttr, &associate, &associateType, &block, &root); + rc = dsmObjectInfo(pcontext, tableNum, DSMOBJECT_MIXTABLE, tableNum, + &tableArea, &objectAttr, &associateType, &block, &root); if (rc) { - printf("Failed to get area number for table %d, %s, return %ld\n", + gemini_msg(pcontext, "Failed to get area number for table %d, %s, return %l", tableNum, pname, rc); rc = gemini_rollback(thd); if (rc) { - printf("Error in rollback %ld\n",rc); + gemini_msg(pcontext, "Error in rollback %l",rc); } } @@ -1979,14 +2228,14 @@ int ha_gemini::delete_table(const char *pname) rc = dsmObjectDeleteAssociate(pcontext, tableNum, &indexArea); if (rc) { - printf("Error deleting storage objects for table number %d, return %ld\n", + gemini_msg(pcontext, "Error deleting storage objects for table number %d, return %l", (int)tableNum, rc); /* roll back txn and return */ rc = gemini_rollback(thd); if (rc) { - printf("Error in rollback %ld\n",rc); + gemini_msg(pcontext, "Error in rollback %l",rc); } DBUG_RETURN(rc); } @@ -1994,33 
+2243,33 @@ int ha_gemini::delete_table(const char *pname) if (indexArea != DSMAREA_INVALID) { /* Delete the extents for both Index and Table */ - rc = dsmExtentDelete(pcontext, indexArea, 0); + rc = dsmExtentDelete(pcontext, indexArea); rc = dsmAreaDelete(pcontext, indexArea); if (rc) { - printf("Error deleting Index Area %ld, return %ld\n", indexArea, rc); + gemini_msg(pcontext, "Error deleting Index Area %l, return %l", indexArea, rc); /* roll back txn and return */ rc = gemini_rollback(thd); if (rc) { - printf("Error in rollback %ld\n",rc); + gemini_msg(pcontext, "Error in rollback %l",rc); } DBUG_RETURN(rc); } } - rc = dsmExtentDelete(pcontext, tableArea, 0); + rc = dsmExtentDelete(pcontext, tableArea); rc = dsmAreaDelete(pcontext, tableArea); if (rc) { - printf("Error deleting table Area %ld, name %s, return %ld\n", + gemini_msg(pcontext, "Error deleting table Area %l, name %s, return %l", tableArea, pname, rc); /* roll back txn and return */ rc = gemini_rollback(thd); if (rc) { - printf("Error in rollback %ld\n",rc); + gemini_msg(pcontext, "Error in rollback %l",rc); } DBUG_RETURN(rc); } @@ -2030,7 +2279,7 @@ int ha_gemini::delete_table(const char *pname) rc = gemini_commit(thd); if (rc) { - printf("Failed to commit transaction %ld\n",rc); + gemini_msg(pcontext, "Failed to commit transaction %l",rc); } @@ -2047,7 +2296,6 @@ int ha_gemini::rename_table(const char *pfrom, const char *pto) THD *thd; dsmContext_t *pcontext; dsmStatus_t rc; - char tabname_buff[FN_REFLEN]; char dbname_buff[FN_REFLEN]; char name_buff[FN_REFLEN]; char newname_buff[FN_REFLEN]; @@ -2056,6 +2304,7 @@ int ha_gemini::rename_table(const char *pfrom, const char *pto) unsigned i, nameLen; dsmObject_t tableNum; dsmArea_t indexArea = 0; + dsmArea_t tableArea = 0; DBUG_ENTER("ha_gemini::rename_table"); @@ -2068,7 +2317,7 @@ int ha_gemini::rename_table(const char *pfrom, const char *pto) { if (gemini_is_vst(name_buff)) { - return 0; + return DSM_S_CANT_RENAME_VST; } } } @@ -2113,21 +2362,51 @@ 
int ha_gemini::rename_table(const char *pfrom, const char *pto) rc = dsmObjectNameToNum(pcontext, (dsmText_t *)name_buff, &tableNum); if (rc) + { + gemini_msg(pcontext, "Unable to find table number for %s", name_buff); goto errorReturn; + } rc = dsmObjectRename(pcontext, tableNum, (dsmText_t *)newname_buff, (dsmText_t *)&newidxextname_buff[start_of_name], (dsmText_t *)&newextname_buff[start_of_name], - &indexArea); + &indexArea, &tableArea); if (rc) + { + gemini_msg(pcontext, "Failed to rename %s to %s",name_buff,newname_buff); goto errorReturn; + } + + /* Rename the physical table and index files (if necessary). + ** Close the file, rename it, and reopen it (have to do it this + ** way so rename works on Windows). + */ + if (!(rc = dsmAreaClose(pcontext, tableArea))) + { + if (!(rc = rename_file_ext(pfrom, pto, ha_gemini_ext))) + { + rc = dsmAreaOpen(pcontext, tableArea, 0); + if (rc) + { + gemini_msg(pcontext, "Failed to reopen area %d",tableArea); + } + } + } - /* rename the physical table and index files (if necessary) */ - rc = rename_file_ext(pfrom, pto, ha_gemini_ext); if (!rc && indexArea) { - rc = rename_file_ext(pfrom, pto, ha_gemini_idx_ext); + if (!(rc = dsmAreaClose(pcontext, indexArea))) + { + if (!(rc = rename_file_ext(pfrom, pto, ha_gemini_idx_ext))) + { + rc = dsmAreaOpen(pcontext, indexArea, 0); + if (rc) + { + gemini_msg(pcontext, "Failed to reopen area %d",indexArea); + } + } + } } errorReturn: @@ -2143,17 +2422,38 @@ errorReturn: double ha_gemini::scan_time() { - return records / (gemini_blocksize / table->reclength); + return (double)records / + (double)((gemini_blocksize / (double)table->reclength)); +} + +int ha_gemini::analyze(THD* thd, HA_CHECK_OPT* check_opt) +{ + int error; + uint saveIsolation; + dsmMask_t saveLockMode; + + check_opt->quick = true; + check_opt->optimize = true; // Tells check not to get table lock + saveLockMode = lockMode; + saveIsolation = thd->gemini.tx_isolation; + thd->gemini.tx_isolation = ISO_READ_UNCOMMITTED; + 
lockMode = DSM_LK_NOLOCK; + error = check(thd,check_opt); + lockMode = saveLockMode; + thd->gemini.tx_isolation = saveIsolation; + return (error); } int ha_gemini::check(THD* thd, HA_CHECK_OPT* check_opt) { - int error; + int error = 0; int checkStatus = HA_ADMIN_OK; ha_rows indexCount; - byte *buf = 0, *indexBuf = 0; + byte *buf = 0, *indexBuf = 0, *prevBuf = 0; int errorCount = 0; + info(HA_STATUS_VARIABLE); // Makes sure row count is up to date + /* Get a shared table lock */ if(thd->gemini.needSavepoint) { @@ -2167,23 +2467,33 @@ int ha_gemini::check(THD* thd, HA_CHECK_OPT* check_opt) return(error); thd->gemini.needSavepoint = 0; } - buf = my_malloc(table->rec_buff_length,MYF(MY_WME)); - indexBuf = my_malloc(table->rec_buff_length,MYF(MY_WME)); + buf = (byte*)my_malloc(table->rec_buff_length,MYF(MY_WME)); + indexBuf = (byte*)my_malloc(table->rec_buff_length,MYF(MY_WME)); + prevBuf = (byte*)my_malloc(table->rec_buff_length,MYF(MY_WME |MY_ZEROFILL )); + /* Lock the table */ - error = dsmObjectLock((dsmContext_t *)thd->gemini.context, - (dsmObject_t)tableNumber, - DSMOBJECT_TABLE,0, - DSM_LK_SHARE, 1, 0); + if (!check_opt->optimize) + error = dsmObjectLock((dsmContext_t *)thd->gemini.context, + (dsmObject_t)tableNumber, + DSMOBJECT_TABLE,0, + DSM_LK_SHARE, 1, 0); if(error) + { + gemini_msg((dsmContext_t *)thd->gemini.context, + "Failed to lock table %d, error %d",tableNumber, error); return error; + } - info(HA_STATUS_VARIABLE); - + ha_rows *rec_per_key = share->rec_per_key; /* If quick option just scan along index converting and counting entries */ for (uint i = 0; i < table->keys; i++) { - key_read = 1; + key_read = 1; // Causes data to be extracted from the keys indexCount = 0; + // Clear the cardinality stats for this index + memset(table->key_info[i].rec_per_key,0, + sizeof(table->key_info[0].rec_per_key[0]) * + table->key_info[i].key_parts); error = index_init(i); error = index_first(indexBuf); while(!error) @@ -2195,8 +2505,12 @@ int ha_gemini::check(THD* 
thd, HA_CHECK_OPT* check_opt) error = fetch_row(thd->gemini.context,buf); if(!error) { - if(key_cmp(i,buf,indexBuf)) + if(key_cmp(i,buf,indexBuf,false)) { + + gemini_msg((dsmContext_t *)thd->gemini.context, + "Check Error! Key does not match row for rowid %d for index %s", + lastRowid,table->key_info[i].name); print_msg(thd,table->real_name,"check","error", "Key does not match row for rowid %d for index %s", lastRowid,table->key_info[i].name); @@ -2209,6 +2523,9 @@ int ha_gemini::check(THD* thd, HA_CHECK_OPT* check_opt) { errorCount++; checkStatus = HA_ADMIN_CORRUPT; + gemini_msg((dsmContext_t *)thd->gemini.context, + "Check Error! Key does not have a valid row pointer %d for index %s", + lastRowid,table->key_info[i].name); print_msg(thd,table->real_name,"check","error", "Key does not have a valid row pointer %d for index %s", lastRowid,table->key_info[i].name); @@ -2218,10 +2535,27 @@ int ha_gemini::check(THD* thd, HA_CHECK_OPT* check_opt) } } } + + key_cmp(i,indexBuf,prevBuf,true); + bcopy((void *)indexBuf,(void *)prevBuf,table->rec_buff_length); + if(!error) error = index_next(indexBuf); } - + + for(uint j=1; j < table->key_info[i].key_parts; j++) + { + table->key_info[i].rec_per_key[j] += table->key_info[i].rec_per_key[j-1]; + } + for(uint k=0; k < table->key_info[i].key_parts; k++) + { + if (table->key_info[i].rec_per_key[k]) + table->key_info[i].rec_per_key[k] = + records / table->key_info[i].rec_per_key[k]; + *rec_per_key = table->key_info[i].rec_per_key[k]; + rec_per_key++; + } + if(error == HA_ERR_END_OF_FILE) { /* Check count of rows */ @@ -2231,6 +2565,10 @@ int ha_gemini::check(THD* thd, HA_CHECK_OPT* check_opt) /* Number of index entries does not agree with the number of rows in the index. */ checkStatus = HA_ADMIN_CORRUPT; + gemini_msg((dsmContext_t *)thd->gemini.context, + "Check Error! 
Total rows %d does not match total index entries %d for %s", + records, indexCount, + table->key_info[i].name); print_msg(thd,table->real_name,"check","error", "Total rows %d does not match total index entries %d for %s", records, indexCount, @@ -2248,23 +2586,61 @@ int ha_gemini::check(THD* thd, HA_CHECK_OPT* check_opt) { /* Now scan the table and for each row generate the keys and find them in the index */ - error = fullCheck(thd, buf);\ + error = fullCheck(thd, buf); if(error) checkStatus = error; } + // Store the key distribution information + error = saveKeyStats(thd); error_return: - my_free(buf,MYF(MY_ALLOW_ZERO_PTR)); + my_free((char*)buf,MYF(MY_ALLOW_ZERO_PTR)); + my_free((char*)indexBuf,MYF(MY_ALLOW_ZERO_PTR)); + my_free((char*)prevBuf,MYF(MY_ALLOW_ZERO_PTR)); + index_end(); key_read = 0; - error = dsmObjectUnlock((dsmContext_t *)thd->gemini.context, - (dsmObject_t)tableNumber, - DSMOBJECT_TABLE,0, - DSM_LK_SHARE,0); + if(!check_opt->optimize) + { + error = dsmObjectUnlock((dsmContext_t *)thd->gemini.context, + (dsmObject_t)tableNumber, + DSMOBJECT_TABLE,0, + DSM_LK_SHARE,0); + if (error) + { + gemini_msg((dsmContext_t *)thd->gemini.context, + "Unable to unlock table %d", tableNumber); + } + } return checkStatus; } +int ha_gemini::saveKeyStats(THD *thd) +{ + dsmStatus_t rc = 0; + + /* Insert a row in the indexStats table for each column of + each index of the table */ + + for(uint i = 0; i < table->keys; i++) + { + for (uint j = 0; j < table->key_info[i].key_parts && !rc ;j++) + { + rc = dsmIndexStatsPut((dsmContext_t *)thd->gemini.context, + tableNumber, pindexNumbers[i], + j, (LONG64)table->key_info[i].rec_per_key[j]); + if (rc) + { + gemini_msg((dsmContext_t *)thd->gemini.context, + "Failed to update index stats for table %d, index %d", + tableNumber, pindexNumbers[i]); + } + } + } + return rc; +} + int ha_gemini::fullCheck(THD *thd,byte *buf) { int error; @@ -2319,7 +2695,12 @@ int ha_gemini::repair(THD* thd, HA_CHECK_OPT* check_opt) 
&thd->gemini.savepoint, DSMTXN_SAVE, 0, 0); if (error) + { + gemini_msg((dsmContext_t *)thd->gemini.context, + "Error setting savepoint number %d, error %d", + thd->gemini.savepoint++, error); return(error); + } thd->gemini.needSavepoint = 0; } @@ -2330,7 +2711,11 @@ int ha_gemini::repair(THD* thd, HA_CHECK_OPT* check_opt) DSMOBJECT_TABLE,0, DSM_LK_EXCL, 1, 0); if(error) + { + gemini_msg((dsmContext_t *)thd->gemini.context, + "Failed to lock table %d, error %d",tableNumber, error); return error; + } error = dsmContextSetLong((dsmContext_t *)thd->gemini.context, DSM_TAGCONTEXT_NO_LOGGING,1); @@ -2338,13 +2723,18 @@ int ha_gemini::repair(THD* thd, HA_CHECK_OPT* check_opt) error = dsmTableReset((dsmContext_t *)thd->gemini.context, (dsmTable_t)tableNumber, table->keys, pindexNumbers); + if (error) + { + gemini_msg((dsmContext_t *)thd->gemini.context, + "dsmTableReset failed for table %d, error %d",tableNumber, error); + } - buf = my_malloc(table->rec_buff_length,MYF(MY_WME)); + buf = (byte*)my_malloc(table->rec_buff_length,MYF(MY_WME)); dsmRecord.table = tableNumber; dsmRecord.recid = 0; dsmRecord.pbuffer = (dsmBuffer_t *)rec_buff; dsmRecord.recLength = table->reclength; - dsmRecord.maxLength = table->reclength; + dsmRecord.maxLength = table->rec_buff_length; while(!error) { error = dsmTableScan((dsmContext_t *)thd->gemini.context, @@ -2352,13 +2742,15 @@ int ha_gemini::repair(THD* thd, HA_CHECK_OPT* check_opt) 1); if(!error) { - unpack_row((char *)buf,(char *)dsmRecord.pbuffer); - error = handleIndexEntries(buf,dsmRecord.recid,KEY_CREATE); - if(error == HA_ERR_FOUND_DUPP_KEY) + if (!(error = unpack_row((char *)buf,(char *)dsmRecord.pbuffer))) { - /* We don't want to stop on duplicate keys -- we're repairing - here so let's get as much repaired as possible. 
*/ - error = 0; + error = handleIndexEntries(buf,dsmRecord.recid,KEY_CREATE); + if(error == HA_ERR_FOUND_DUPP_KEY) + { + /* We don't want to stop on duplicate keys -- we're repairing + here so let's get as much repaired as possible. */ + error = 0; + } } } } @@ -2366,7 +2758,13 @@ int ha_gemini::repair(THD* thd, HA_CHECK_OPT* check_opt) (dsmObject_t)tableNumber, DSMOBJECT_TABLE,0, DSM_LK_EXCL,0); - my_free(buf,MYF(MY_ALLOW_ZERO_PTR)); + if (error) + { + gemini_msg((dsmContext_t *)thd->gemini.context, + "Unable to unlock table %d", tableNumber); + } + + my_free((char*)buf,MYF(MY_ALLOW_ZERO_PTR)); error = dsmContextSetLong((dsmContext_t *)thd->gemini.context, DSM_TAGCONTEXT_NO_LOGGING,0); @@ -2374,6 +2772,313 @@ int ha_gemini::repair(THD* thd, HA_CHECK_OPT* check_opt) return error; } + +int ha_gemini::restore(THD* thd, HA_CHECK_OPT *check_opt) +{ + dsmContext_t *pcontext = (dsmContext_t *)thd->gemini.context; + char* backup_dir = thd->lex.backup_dir; + char src_path[FN_REFLEN], dst_path[FN_REFLEN]; + char* table_name = table->real_name; + int error = 0; + int errornum; + const char* errmsg = ""; + dsmArea_t tableArea = 0; + dsmObjectAttr_t objectAttr; + dsmObject_t associate; + dsmObjectType_t associateType; + dsmDbkey_t block, root; + dsmStatus_t rc; + + rc = dsmObjectInfo(pcontext, tableNumber, DSMOBJECT_MIXTABLE, tableNumber, + &tableArea, &objectAttr, &associateType, &block, &root); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmObjectInfo (.gmd) (Error %d)"; + errornum = rc; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + rc = dsmAreaFlush(pcontext, tableArea, FLUSH_BUFFERS | FLUSH_SYNC); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmAreaFlush (.gmd) (Error %d)"; + errornum = rc; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + rc = dsmAreaClose(pcontext, tableArea); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmAreaClose (.gmd) (Error %d)"; + errornum = rc; + 
gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + /* Restore the data file */ + if (!fn_format(src_path, table_name, backup_dir, ha_gemini_ext, 4 + 64)) + { + return HA_ADMIN_INVALID; + } + + if (my_copy(src_path, fn_format(dst_path, table->path, "", + ha_gemini_ext, 4), MYF(MY_WME))) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in my_copy (.gmd) (Error %d)"; + errornum = errno; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + rc = dsmAreaFlush(pcontext, tableArea, FREE_BUFFERS); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmAreaFlush (.gmd) (Error %d)"; + errornum = rc; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + rc = dsmAreaOpen(pcontext, tableArea, 1); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmAreaOpen (.gmd) (Error %d)"; + errornum = rc; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + +#ifdef GEMINI_BACKUP_IDX + dsmArea_t indexArea = 0; + + rc = dsmObjectInfo(pcontext, tableNumber, DSMOBJECT_MIXINDEX, &indexArea, + &objectAttr, &associate, &associateType, &block, &root); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmObjectInfo (.gmi) (Error %d)"; + errornum = rc; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + rc = dsmAreaClose(pcontext, indexArea); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmAreaClose (.gmi) (Error %d)"; + errornum = rc; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + /* Restore the index file */ + if (!fn_format(src_path, table_name, backup_dir, ha_gemini_idx_ext, 4 + 64)) + { + return HA_ADMIN_INVALID; + } + + if (my_copy(src_path, fn_format(dst_path, table->path, "", + ha_gemini_idx_ext, 4), MYF(MY_WME))) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in my_copy (.gmi) (Error %d)"; + errornum = errno; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + rc = dsmAreaOpen(pcontext, indexArea, 1); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in 
dsmAreaOpen (.gmi) (Error %d)"; + errornum = rc; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + return HA_ADMIN_OK; +#else /* #ifdef GEMINI_BACKUP_IDX */ + HA_CHECK_OPT tmp_check_opt; + tmp_check_opt.init(); + /* The following aren't currently implemented in ha_gemini::repair + ** tmp_check_opt.quick = 1; + ** tmp_check_opt.flags |= T_VERY_SILENT; + */ + return (repair(thd, &tmp_check_opt)); +#endif /* #ifdef GEMINI_BACKUP_IDX */ + + err: + { +#if 0 + /* mi_check_print_error is in ha_myisam.cc, so none of the informative + ** error messages above is currently being printed + */ + MI_CHECK param; + myisamchk_init(&param); + param.thd = thd; + param.op_name = (char*)"restore"; + param.table_name = table->table_name; + param.testflag = 0; + mi_check_print_error(&param,errmsg, errornum); +#endif + return error; + } +} + + +int ha_gemini::backup(THD* thd, HA_CHECK_OPT *check_opt) +{ + dsmContext_t *pcontext = (dsmContext_t *)thd->gemini.context; + char* backup_dir = thd->lex.backup_dir; + char src_path[FN_REFLEN], dst_path[FN_REFLEN]; + char* table_name = table->real_name; + int error = 0; + int errornum; + const char* errmsg = ""; + dsmArea_t tableArea = 0; + dsmObjectAttr_t objectAttr; + dsmObject_t associate; + dsmObjectType_t associateType; + dsmDbkey_t block, root; + dsmStatus_t rc; + + rc = dsmObjectInfo(pcontext, tableNumber, DSMOBJECT_MIXTABLE, tableNumber, + &tableArea, &objectAttr, &associateType, &block, &root); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmObjectInfo (.gmd) (Error %d)"; + errornum = rc; + goto err; + } + + /* Flush the buffers before backing up the table */ + rc = dsmAreaFlush((dsmContext_t *)thd->gemini.context, tableArea, + FLUSH_BUFFERS | FLUSH_SYNC); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmAreaFlush (.gmd) (Error %d)"; + errornum = rc; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + /* Backup the .FRM file */ + if (!fn_format(dst_path, table_name, backup_dir, reg_ext, 4 + 64)) 
+ { + errmsg = "Failed in fn_format() for .frm file: errno = %d"; + error = HA_ADMIN_INVALID; + errornum = errno; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + if (my_copy(fn_format(src_path, table->path,"", reg_ext, 4), + dst_path, + MYF(MY_WME | MY_HOLD_ORIGINAL_MODES ))) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed copying .frm file: errno = %d"; + errornum = errno; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + /* Backup the data file */ + if (!fn_format(dst_path, table_name, backup_dir, ha_gemini_ext, 4 + 64)) + { + errmsg = "Failed in fn_format() for .GMD file: errno = %d"; + error = HA_ADMIN_INVALID; + errornum = errno; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + if (my_copy(fn_format(src_path, table->path,"", ha_gemini_ext, 4), + dst_path, + MYF(MY_WME | MY_HOLD_ORIGINAL_MODES )) ) + { + errmsg = "Failed copying .GMD file: errno = %d"; + error= HA_ADMIN_FAILED; + errornum = errno; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + +#ifdef GEMINI_BACKUP_IDX + dsmArea_t indexArea = 0; + + rc = dsmObjectInfo(pcontext, tableNumber, DSMOBJECT_MIXINDEX, &indexArea, + &objectAttr, &associate, &associateType, &block, &root); + if (rc) + { + error = HA_ADMIN_FAILED; + errmsg = "Failed in dsmObjectInfo (.gmi) (Error %d)"; + errornum = rc; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + /* Backup the index file */ + if (!fn_format(dst_path, table_name, backup_dir, ha_gemini_idx_ext, 4 + 64)) + { + errmsg = "Failed in fn_format() for .GMI file: errno = %d"; + error = HA_ADMIN_INVALID; + errornum = errno; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } + + if (my_copy(fn_format(src_path, table->path,"", ha_gemini_idx_ext, 4), + dst_path, + MYF(MY_WME | MY_HOLD_ORIGINAL_MODES )) ) + { + errmsg = "Failed copying .GMI file: errno = %d"; + error= HA_ADMIN_FAILED; + errornum = errno; + gemini_msg(pcontext, errmsg ,errornum); + goto err; + } +#endif /* #ifdef GEMINI_BACKUP_IDX */ + + return 
HA_ADMIN_OK; + + err: + { +#if 0 + /* mi_check_print_error is in ha_myisam.cc, so none of the informative + ** error messages above is currently being printed + */ + MI_CHECK param; + myisamchk_init(&param); + param.thd = thd; + param.op_name = (char*)"backup"; + param.table_name = table->table_name; + param.testflag = 0; + mi_check_print_error(&param,errmsg, errornum); +#endif + return error; + } +} + + +int ha_gemini::optimize(THD* thd, HA_CHECK_OPT *check_opt) +{ + return HA_ADMIN_ALREADY_DONE; +} + + ha_rows ha_gemini::records_in_range(int keynr, const byte *start_key,uint start_key_len, enum ha_rkey_function start_search_flag, @@ -2412,7 +3117,7 @@ ha_rows ha_gemini::records_in_range(int keynr, pbracketBase->keyLen = componentLen; } - pbracketBase->keyLen -= 3; + pbracketBase->keyLen -= FULLKEYHDRSZ; if(end_key) { @@ -2431,9 +3136,10 @@ ha_rows ha_gemini::records_in_range(int keynr, pbracketLimit->keyLen = componentLen; } - pbracketLimit->keyLen -= 3; + pbracketLimit->keyLen -= FULLKEYHDRSZ; error = dsmIndexRowsInRange((dsmContext_t *)current_thd->gemini.context, pbracketBase,pbracketLimit, + tableNumber, &pctInrange); if(pctInrange >= 1) rows = (ha_rows)pctInrange; @@ -2457,32 +3163,82 @@ ha_rows ha_gemini::records_in_range(int keynr, may only happen in rows with blobs, as the default row length is pre-allocated. 
*/ -int ha_gemini::pack_row(byte **pprow, int *ppackedLength, const byte *record) +int ha_gemini::pack_row(byte **pprow, int *ppackedLength, const byte *record, + bool update) { + THD *thd = current_thd; + dsmContext_t *pcontext = (dsmContext_t *)thd->gemini.context; + gemBlobDesc_t *pBlobDesc = pBlobDescs; + if (fixed_length_row) { *pprow = (byte *)record; *ppackedLength=(int)table->reclength; return 0; } - if (table->blob_fields) - { - return HA_ERR_WRONG_COMMAND; - } /* Copy null bits */ memcpy(rec_buff, record, table->null_bytes); byte *ptr=rec_buff + table->null_bytes; for (Field **field=table->field ; *field ; field++) - ptr=(byte*) (*field)->pack((char*) ptr,record + (*field)->offset()); + { +#ifdef GEMINI_TINYBLOB_IN_ROW + /* Tiny blobs (255 bytes or less) are stored in the row; larger + ** blobs are stored in a separate storage object (see ha_gemini::create). + */ + if ((*field)->type() == FIELD_TYPE_BLOB && + ((Field_blob*)*field)->blobtype() != FIELD_TYPE_TINY_BLOB) +#else + if ((*field)->type() == FIELD_TYPE_BLOB) +#endif + { + dsmBlob_t gemBlob; + char *blobptr; + + gemBlob.areaType = DSMOBJECT_BLOB; + gemBlob.blobObjNo = tableNumber; + gemBlob.blobId = 0; + gemBlob.totLength = gemBlob.segLength = + ((Field_blob*)*field)->get_length((char*)record + (*field)->offset()); + ((Field_blob*)*field)->get_ptr((char**) &blobptr); + gemBlob.pBuffer = (dsmBuffer_t *)blobptr; + gemBlob.blobContext.blobOffset = 0; + if (gemBlob.totLength) + { + dsmBlobStart(pcontext, &gemBlob); + if (update && pBlobDesc->blobId) + { + gemBlob.blobId = pBlobDesc->blobId; + dsmBlobUpdate(pcontext, &gemBlob, NULL); + } + else + { + dsmBlobPut(pcontext, &gemBlob, NULL); + } + dsmBlobEnd(pcontext, &gemBlob); + } + ptr = (byte*)((Field_blob*)*field)->pack_id((char*) ptr, + (char*)record + (*field)->offset(), (longlong)gemBlob.blobId); + + pBlobDesc++; + } + else + { + ptr=(byte*) (*field)->pack((char*) ptr, (char*)record + (*field)->offset()); + } + } *pprow=rec_buff; *ppackedLength= 
(ptr - rec_buff); return 0; } -void ha_gemini::unpack_row(char *record, char *prow) +int ha_gemini::unpack_row(char *record, char *prow) { + THD *thd = current_thd; + dsmContext_t *pcontext = (dsmContext_t *)thd->gemini.context; + gemBlobDesc_t *pBlobDesc = pBlobDescs; + if (fixed_length_row) { /* If the table is a VST, the row is in Gemini internal format. @@ -2568,38 +3324,129 @@ void ha_gemini::unpack_row(char *record, char *prow) const char *ptr= (const char*) prow; memcpy(record, ptr, table->null_bytes); ptr+=table->null_bytes; + for (Field **field=table->field ; *field ; field++) - ptr= (*field)->unpack(record + (*field)->offset(), ptr); + { +#ifdef GEMINI_TINYBLOB_IN_ROW + /* Tiny blobs (255 bytes or less) are stored in the row; larger + ** blobs are stored in a separate storage object (see ha_gemini::create). + */ + if ((*field)->type() == FIELD_TYPE_BLOB && + ((Field_blob*)*field)->blobtype() != FIELD_TYPE_TINY_BLOB) +#else + if ((*field)->type() == FIELD_TYPE_BLOB) +#endif + { + dsmBlob_t gemBlob; + + gemBlob.areaType = DSMOBJECT_BLOB; + gemBlob.blobObjNo = tableNumber; + gemBlob.blobId = (dsmBlobId_t)(((Field_blob*)*field)->get_id(ptr)); + if (gemBlob.blobId) + { + gemBlob.totLength = + gemBlob.segLength = ((Field_blob*)*field)->get_length(ptr); + /* Allocate memory to store the blob. This memory is freed + ** the next time unpack_row is called for this table. 
+ */ + gemBlob.pBuffer = (dsmBuffer_t *)my_malloc(gemBlob.totLength, + MYF(0)); + if (!gemBlob.pBuffer) + { + return HA_ERR_OUT_OF_MEM; + } + gemBlob.blobContext.blobOffset = 0; + dsmBlobStart(pcontext, &gemBlob); + dsmBlobGet(pcontext, &gemBlob, NULL); + dsmBlobEnd(pcontext, &gemBlob); + } + else + { + gemBlob.pBuffer = 0; + } + ptr = ((Field_blob*)*field)->unpack_id(record + (*field)->offset(), + ptr, (char *)gemBlob.pBuffer); + pBlobDesc->blobId = gemBlob.blobId; + my_free((char*)pBlobDesc->pBlob, MYF(MY_ALLOW_ZERO_PTR)); + pBlobDesc->pBlob = gemBlob.pBuffer; + pBlobDesc++; + } + else + { + ptr= (*field)->unpack(record + (*field)->offset(), ptr); + } + } } + + return 0; } int ha_gemini::key_cmp(uint keynr, const byte * old_row, - const byte * new_row) + const byte * new_row, bool updateStats) { KEY_PART_INFO *key_part=table->key_info[keynr].key_part; KEY_PART_INFO *end=key_part+table->key_info[keynr].key_parts; - for ( ; key_part != end ; key_part++) + for ( uint i = 0 ; key_part != end ; key_part++, i++) { if (key_part->null_bit) { if ((old_row[key_part->null_offset] & key_part->null_bit) != (new_row[key_part->null_offset] & key_part->null_bit)) + { + if(updateStats) + table->key_info[keynr].rec_per_key[i]++; return 1; + } + else if((old_row[key_part->null_offset] & key_part->null_bit) && + (new_row[key_part->null_offset] & key_part->null_bit)) + /* Both are null */ + continue; } if (key_part->key_part_flag & (HA_BLOB_PART | HA_VAR_LENGTH)) { - - if (key_part->field->cmp_binary(old_row + key_part->offset, - new_row + key_part->offset, + if (key_part->field->cmp_binary((char*)(old_row + key_part->offset), + (char*)(new_row + key_part->offset), (ulong) key_part->length)) + { + if(updateStats) + table->key_info[keynr].rec_per_key[i]++; return 1; + } } else { if (memcmp(old_row+key_part->offset, new_row+key_part->offset, key_part->length)) + { + /* Check for special case of -0 which causes table check + to find an invalid key when comparing the index + value of 
0 to the -0 stored in the row */ + if(key_part->field->type() == FIELD_TYPE_DECIMAL) + { + double fieldValue; + char *ptr = key_part->field->ptr; + + key_part->field->ptr = (char *)old_row + key_part->offset; + fieldValue = key_part->field->val_real(); + if(fieldValue == 0) + { + key_part->field->ptr = (char *)new_row + key_part->offset; + fieldValue = key_part->field->val_real(); + if(fieldValue == 0) + { + key_part->field->ptr = ptr; + continue; + } + } + key_part->field->ptr = ptr; + } + if(updateStats) + { + table->key_info[keynr].rec_per_key[i]++; + } return 1; + } } } return 0; @@ -2612,13 +3459,13 @@ int gemini_parse_table_name(const char *fullname, char *dbname, char *tabname) /* separate out the name of the table and the database */ - namestart = strchr(fullname + start_of_name, '/'); + namestart = (char *)strchr(fullname + start_of_name, '/'); if (!namestart) { /* if on Windows, slashes go the other way */ - namestart = strchr(fullname + start_of_name, '\\'); + namestart = (char *)strchr(fullname + start_of_name, '\\'); } - nameend = strchr(fullname + start_of_name, '.'); + nameend = (char *)strchr(fullname + start_of_name, '.'); /* sometimes fullname has an extension, sometimes it doesn't */ if (!nameend) { @@ -2680,4 +3527,105 @@ static void print_msg(THD *thd, const char *table_name, const char *op_name, thd->killed=1; } +/* Load shared area with rows per key statistics */ +void +ha_gemini::get_index_stats(THD *thd) +{ + dsmStatus_t rc = 0; + ha_rows *rec_per_key = share->rec_per_key; + + for(uint i = 0; i < table->keys && !rc; i++) + { + for (uint j = 0; j < table->key_info[i].key_parts && !rc;j++) + { + LONG64 rows_per_key; + rc = dsmIndexStatsGet((dsmContext_t *)thd->gemini.context, + tableNumber, pindexNumbers[i],(int)j, + &rows_per_key); + if (rc) + { + gemini_msg((dsmContext_t *)thd->gemini.context, + "Index Statistics failed for table %d index %d, error %d", + tableNumber, pindexNumbers[i], rc); + } + *rec_per_key = (ha_rows)rows_per_key; + 
rec_per_key++; + } + } + return; +} + +/**************************************************************************** + Handling the shared GEM_SHARE structure that is needed to provide + a global in memory storage location of the rec_per_key stats used + by the optimizer. +****************************************************************************/ + +static byte* gem_get_key(GEM_SHARE *share,uint *length, + my_bool not_used __attribute__((unused))) +{ + *length=share->table_name_length; + return (byte*) share->table_name; +} + +static GEM_SHARE *get_share(const char *table_name, TABLE *table) +{ + GEM_SHARE *share; + + pthread_mutex_lock(&gem_mutex); + uint length=(uint) strlen(table_name); + if (!(share=(GEM_SHARE*) hash_search(&gem_open_tables, (byte*) table_name, + length))) + { + ha_rows *rec_per_key; + char *tmp_name; + + if ((share=(GEM_SHARE *) + my_multi_malloc(MYF(MY_WME | MY_ZEROFILL), + &share, sizeof(*share), + &rec_per_key, table->key_parts * sizeof(ha_rows), + &tmp_name, length+1, + NullS))) + { + share->rec_per_key = rec_per_key; + share->table_name = tmp_name; + share->table_name_length=length; + strcpy(share->table_name,table_name); + if (hash_insert(&gem_open_tables, (byte*) share)) + { + pthread_mutex_unlock(&gem_mutex); + my_free((gptr) share,0); + return 0; + } + thr_lock_init(&share->lock); + pthread_mutex_init(&share->mutex,NULL); + } + } + pthread_mutex_unlock(&gem_mutex); + return share; +} + +static int free_share(GEM_SHARE *share, bool mutex_is_locked) +{ + pthread_mutex_lock(&gem_mutex); + if (mutex_is_locked) + pthread_mutex_unlock(&share->mutex); + if (!--share->use_count) + { + hash_delete(&gem_open_tables, (byte*) share); + thr_lock_delete(&share->lock); + pthread_mutex_destroy(&share->mutex); + my_free((gptr) share, MYF(0)); + } + pthread_mutex_unlock(&gem_mutex); + return 0; +} + +static void gemini_lock_table_overflow_error(dsmContext_t *pcontext) +{ + gemini_msg(pcontext, "The total number of locks exceeds the lock table 
size"); + gemini_msg(pcontext, "Either increase gemini_lock_table_size or use a"); + gemini_msg(pcontext, "different transaction isolation level"); +} + #endif /* HAVE_GEMINI_DB */ diff --git a/sql/ha_gemini.h b/sql/ha_gemini.h index 7e6e8f26588..495dc2fd1c9 100644 --- a/sql/ha_gemini.h +++ b/sql/ha_gemini.h @@ -19,17 +19,26 @@ #pragma interface /* gcc class implementation */ #endif +#include "gem_global.h" #include "dstd.h" #include "dsmpub.h" /* class for the the gemini handler */ enum enum_key_string_options{KEY_CREATE,KEY_DELETE,KEY_CHECK}; +typedef struct st_gemini_share { + ha_rows *rec_per_key; + THR_LOCK lock; + pthread_mutex_t mutex; + char *table_name; + uint table_name_length,use_count; +} GEM_SHARE; -#define READ_UNCOMMITED 0 -#define READ_COMMITED 1 -#define REPEATABLE_READ 2 -#define SERIALIZEABLE 3 +typedef struct gemBlobDesc +{ + dsmBlobId_t blobId; + dsmBuffer_t *pBlob; +} gemBlobDesc_t; class ha_gemini: public handler { @@ -38,7 +47,7 @@ class ha_gemini: public handler uint int_option_flag; int tableNumber; dsmIndex_t *pindexNumbers; // dsm object numbers for the indexes on this table - unsigned long lastRowid; + dsmRecid_t lastRowid; uint last_dup_key; bool fixed_length_row, key_read, using_ignore; byte *rec_buff; @@ -46,10 +55,12 @@ class ha_gemini: public handler dsmKey_t *pbracketLimit; dsmKey_t *pfoundKey; dsmMask_t tableStatus; // Crashed/repair status + gemBlobDesc_t *pBlobDescs; int index_open(char *tableName); - int pack_row(byte **prow, int *ppackedLength, const byte *record); - void unpack_row(char *record, char *prow); + int pack_row(byte **prow, int *ppackedLength, const byte *record, + bool update); + int unpack_row(char *record, char *prow); int findRow(THD *thd, dsmMask_t findMode, byte *buf); int fetch_row(void *gemini_context, const byte *buf); int handleIndexEntries(const byte * record, dsmRecid_t recid, @@ -70,24 +81,28 @@ class ha_gemini: public handler void unpack_key(char *record, dsmKey_t *key, uint index); int key_cmp(uint 
keynr, const byte * old_row, - const byte * new_row); + const byte * new_row, bool updateStats); + int saveKeyStats(THD *thd); + void get_index_stats(THD *thd); short cursorId; /* cursorId of active index cursor if any */ dsmMask_t lockMode; /* Shared or exclusive */ /* FIXFIX Don't know why we need this because I don't know what store_lock method does but we core dump without this */ - THR_LOCK alock; THR_LOCK_DATA lock; + GEM_SHARE *share; + public: ha_gemini(TABLE *table): handler(table), file(0), int_option_flag(HA_READ_NEXT | HA_READ_PREV | HA_REC_NOT_IN_SEQ | HA_KEYPOS_TO_RNDPOS | HA_READ_ORDER | HA_LASTKEY_ORDER | HA_LONGLONG_KEYS | HA_NULL_KEY | HA_HAVE_KEY_READ_ONLY | - HA_NO_BLOBS | HA_NO_TEMP_TABLES | - /* HA_BLOB_KEY | */ /*HA_NOT_EXACT_COUNT | */ + HA_BLOB_KEY | + HA_NO_TEMP_TABLES | HA_NO_FULLTEXT_KEY | + /*HA_NOT_EXACT_COUNT | */ /*HA_KEY_READ_WRONG_STR |*/ HA_DROP_BEFORE_CREATE), pbracketBase(0),pbracketLimit(0),pfoundKey(0), cursorId(0) @@ -100,7 +115,7 @@ class ha_gemini: public handler uint max_record_length() const { return MAXRECSZ; } uint max_keys() const { return MAX_KEY-1; } uint max_key_parts() const { return MAX_REF_PARTS; } - uint max_key_length() const { return MAXKEYSZ; } + uint max_key_length() const { return MAXKEYSZ / 2; } bool fast_key_read() { return 1;} bool has_transactions() { return 1;} @@ -129,8 +144,12 @@ class ha_gemini: public handler void info(uint); int extra(enum ha_extra_function operation); int reset(void); + int analyze(THD* thd, HA_CHECK_OPT* check_opt); int check(THD* thd, HA_CHECK_OPT* check_opt); int repair(THD* thd, HA_CHECK_OPT* check_opt); + int restore(THD* thd, HA_CHECK_OPT* check_opt); + int backup(THD* thd, HA_CHECK_OPT* check_opt); + int optimize(THD* thd, HA_CHECK_OPT* check_opt); int external_lock(THD *thd, int lock_type); virtual longlong get_auto_increment(); void position(byte *record); @@ -139,7 +158,7 @@ class ha_gemini: public handler enum ha_rkey_function start_search_flag, const byte 
*end_key,uint end_key_len, enum ha_rkey_function end_search_flag); - + void update_create_info(HA_CREATE_INFO *create_info); int create(const char *name, register TABLE *form, HA_CREATE_INFO *create_info); int delete_table(const char *name); @@ -167,6 +186,7 @@ extern long gemini_locktablesize; extern long gemini_lock_wait_timeout; extern long gemini_spin_retries; extern long gemini_connection_limit; +extern char *gemini_basedir; extern TYPELIB gemini_recovery_typelib; extern ulong gemini_recovery_options; @@ -175,12 +195,13 @@ bool gemini_end(void); bool gemini_flush_logs(void); int gemini_commit(THD *thd); int gemini_rollback(THD *thd); +int gemini_recovery_logging(THD *thd, bool on); void gemini_disconnect(THD *thd); int gemini_rollback_to_savepoint(THD *thd); int gemini_parse_table_name(const char *fullname, char *dbname, char *tabname); int gemini_is_vst(const char *pname); int gemini_set_option_long(int optid, long optval); -const int gemini_blocksize = 8192; -const int gemini_recbits = 7; +const int gemini_blocksize = BLKSIZE; +const int gemini_recbits = DEFAULT_RECBITS; diff --git a/sql/handler.cc b/sql/handler.cc index 212fcea11ae..7720e9ca671 100644 --- a/sql/handler.cc +++ b/sql/handler.cc @@ -694,6 +694,15 @@ void handler::print_error(int error, myf errflag) case HA_ERR_RECORD_FILE_FULL: textno=ER_RECORD_FILE_FULL; break; + case HA_ERR_LOCK_WAIT_TIMEOUT: + textno=ER_LOCK_WAIT_TIMEOUT; + break; + case HA_ERR_LOCK_TABLE_FULL: + textno=ER_LOCK_TABLE_FULL; + break; + case HA_ERR_READ_ONLY_TRANSACTION: + textno=ER_READ_ONLY_TRANSACTION; + break; default: { my_error(ER_GET_ERRNO,errflag,error); @@ -757,6 +766,25 @@ int ha_commit_rename(THD *thd) return error; } +/* Tell the handler to turn on or off logging to the handler's + recovery log +*/ +int ha_recovery_logging(THD *thd, bool on) +{ + int error=0; + + DBUG_ENTER("ha_recovery_logging"); +#ifdef USING_TRANSACTIONS + if (opt_using_transactions) + { +#ifdef HAVE_GEMINI_DB + error = 
gemini_recovery_logging(thd, on); + } +#endif +#endif + DBUG_RETURN(error); +} + int handler::index_next_same(byte *buf, const byte *key, uint keylen) { int error; diff --git a/sql/handler.h b/sql/handler.h index 076bf783f80..7a28dc07a81 100644 --- a/sql/handler.h +++ b/sql/handler.h @@ -74,6 +74,7 @@ #define HA_NOT_DELETE_WITH_CACHE (HA_NOT_READ_AFTER_KEY*2) #define HA_NO_TEMP_TABLES (HA_NOT_DELETE_WITH_CACHE*2) #define HA_NO_PREFIX_CHAR_KEYS (HA_NO_TEMP_TABLES*2) +#define HA_NO_FULLTEXT_KEY (HA_NO_PREFIX_CHAR_KEYS*2) /* Parameters for open() (in register form->filestat) */ /* HA_GET_INFO does a implicit HA_ABORT_IF_LOCKED */ @@ -353,3 +354,4 @@ int ha_autocommit_or_rollback(THD *thd, int error); void ha_set_spin_retries(uint retries); bool ha_flush_logs(void); int ha_commit_rename(THD *thd); +int ha_recovery_logging(THD *thd, bool on); diff --git a/sql/lock.cc b/sql/lock.cc index 23f81c9c164..1d9aca66e74 100644 --- a/sql/lock.cc +++ b/sql/lock.cc @@ -35,6 +35,7 @@ static MYSQL_LOCK *get_lock_data(THD *thd, TABLE **table,uint count, bool unlock, TABLE **write_locked); static int lock_external(TABLE **table,uint count); static int unlock_external(THD *thd, TABLE **table,uint count); +static void print_lock_error(int error); MYSQL_LOCK *mysql_lock_tables(THD *thd,TABLE **tables,uint count) @@ -154,7 +155,7 @@ static int lock_external(TABLE **tables,uint count) (*tables)->file->external_lock(thd, F_UNLCK); (*tables)->current_lock=F_UNLCK; } - my_error(ER_CANT_LOCK,MYF(ME_BELL+ME_OLDWIN+ME_WAITTANG),error); + print_lock_error(error); DBUG_RETURN(error); } else @@ -325,7 +326,7 @@ static int unlock_external(THD *thd, TABLE **table,uint count) } } if (error_code) - my_error(ER_CANT_LOCK,MYF(ME_BELL+ME_OLDWIN+ME_WAITTANG),error_code); + print_lock_error(error_code); DBUG_RETURN(error_code); } @@ -480,3 +481,24 @@ bool wait_for_locked_table_names(THD *thd, TABLE_LIST *table_list) } DBUG_RETURN(result); } + +static void print_lock_error(int error) +{ + int textno; + 
DBUG_ENTER("print_lock_error"); + + switch (error) { + case HA_ERR_LOCK_WAIT_TIMEOUT: + textno=ER_LOCK_WAIT_TIMEOUT; + break; + case HA_ERR_READ_ONLY_TRANSACTION: + textno=ER_READ_ONLY_TRANSACTION; + break; + default: + textno=ER_CANT_LOCK; + break; + } + my_error(textno,MYF(ME_BELL+ME_OLDWIN+ME_WAITTANG),error); + DBUG_VOID_RETURN; +} + diff --git a/sql/share/czech/errmsg.txt b/sql/share/czech/errmsg.txt index 666d70c957a..35a428273c7 100644 --- a/sql/share/czech/errmsg.txt +++ b/sql/share/czech/errmsg.txt @@ -215,3 +215,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/danish/errmsg.txt b/sql/share/danish/errmsg.txt index 9f1f6accc1f..b2fe6c4e800 100644 --- a/sql/share/danish/errmsg.txt +++ b/sql/share/danish/errmsg.txt @@ -209,3 +209,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/dutch/errmsg.txt b/sql/share/dutch/errmsg.txt index 8b44af7eb7b..616f832bee8 100644 --- a/sql/share/dutch/errmsg.txt +++ b/sql/share/dutch/errmsg.txt @@ -206,3 +206,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git 
a/sql/share/english/errmsg.txt b/sql/share/english/errmsg.txt index ff29fffe958..018d558d7de 100644 --- a/sql/share/english/errmsg.txt +++ b/sql/share/english/errmsg.txt @@ -206,3 +206,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/estonian/errmsg.txt b/sql/share/estonian/errmsg.txt index f1559f4a44d..e1e03e4a596 100644 --- a/sql/share/estonian/errmsg.txt +++ b/sql/share/estonian/errmsg.txt @@ -210,3 +210,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/french/errmsg.txt b/sql/share/french/errmsg.txt index 5cbcfe81b87..aadfecbc8a1 100644 --- a/sql/share/french/errmsg.txt +++ b/sql/share/french/errmsg.txt @@ -206,3 +206,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/german/errmsg.txt b/sql/share/german/errmsg.txt index 307ed7a00f4..7a86a4368e7 100644 --- a/sql/share/german/errmsg.txt +++ b/sql/share/german/errmsg.txt @@ -209,3 +209,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant 
expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/greek/errmsg.txt b/sql/share/greek/errmsg.txt index 119de63b2a7..5022bb65792 100644 --- a/sql/share/greek/errmsg.txt +++ b/sql/share/greek/errmsg.txt @@ -206,3 +206,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/hungarian/errmsg.txt b/sql/share/hungarian/errmsg.txt index 7e9b9e6a3bf..cfdd4b7fe75 100644 --- a/sql/share/hungarian/errmsg.txt +++ b/sql/share/hungarian/errmsg.txt @@ -208,3 +208,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/italian/errmsg.txt b/sql/share/italian/errmsg.txt index d6c857d44a4..d1b17bc8f2e 100644 --- a/sql/share/italian/errmsg.txt +++ b/sql/share/italian/errmsg.txt @@ -206,3 +206,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/japanese/errmsg.txt b/sql/share/japanese/errmsg.txt index a62f22c253d..9dfe9bb3efb 100644 --- a/sql/share/japanese/errmsg.txt +++ 
b/sql/share/japanese/errmsg.txt @@ -208,3 +208,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/korean/errmsg.txt b/sql/share/korean/errmsg.txt index c476ad8fa3c..4f0f90f88ce 100644 --- a/sql/share/korean/errmsg.txt +++ b/sql/share/korean/errmsg.txt @@ -206,3 +206,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/norwegian-ny/errmsg.txt b/sql/share/norwegian-ny/errmsg.txt index 2a57c93cc84..99238d61e3e 100644 --- a/sql/share/norwegian-ny/errmsg.txt +++ b/sql/share/norwegian-ny/errmsg.txt @@ -208,3 +208,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/norwegian/errmsg.txt b/sql/share/norwegian/errmsg.txt index cf23991eefa..473d297b649 100644 --- a/sql/share/norwegian/errmsg.txt +++ b/sql/share/norwegian/errmsg.txt @@ -208,3 +208,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table 
size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/polish/errmsg.txt b/sql/share/polish/errmsg.txt index 03e9d59dacd..253d4afd2b7 100644 --- a/sql/share/polish/errmsg.txt +++ b/sql/share/polish/errmsg.txt @@ -210,3 +210,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/portuguese/errmsg.txt b/sql/share/portuguese/errmsg.txt index 37f2bf9e7ac..ba010a20710 100644 --- a/sql/share/portuguese/errmsg.txt +++ b/sql/share/portuguese/errmsg.txt @@ -206,3 +206,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/romanian/errmsg.txt b/sql/share/romanian/errmsg.txt index 6bc2695bed5..384df0c864e 100644 --- a/sql/share/romanian/errmsg.txt +++ b/sql/share/romanian/errmsg.txt @@ -210,3 +210,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/russian/errmsg.txt b/sql/share/russian/errmsg.txt index 75d21dda888..7dd24c743bb 100644 --- a/sql/share/russian/errmsg.txt +++ b/sql/share/russian/errmsg.txt @@ -209,3 +209,6 @@ "îÅ ÍÏÇÕ ÓÏÚÄÁÔØ ÐÒÏÃÅÓÓ SLAVE, ÐÒÏ×ÅÒØÔÅ ÓÉÓÔÅÍÎÙÅ 
ÒÅÓÕÒÓÙ", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/slovak/errmsg.txt b/sql/share/slovak/errmsg.txt index 673499f5105..2a6063b6aee 100644 --- a/sql/share/slovak/errmsg.txt +++ b/sql/share/slovak/errmsg.txt @@ -214,3 +214,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/spanish/errmsg.txt b/sql/share/spanish/errmsg.txt index d470556fd58..dbf7caf585d 100644 --- a/sql/share/spanish/errmsg.txt +++ b/sql/share/spanish/errmsg.txt @@ -207,3 +207,6 @@ "Could not create slave thread, check system resources", "User %-.64s has already more than 'max_user_connections' active connections", "You may only use constant expressions with SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/share/swedish/errmsg.txt b/sql/share/swedish/errmsg.txt index 672ce97c575..fc26a08e9ee 100644 --- a/sql/share/swedish/errmsg.txt +++ b/sql/share/swedish/errmsg.txt @@ -206,3 +206,6 @@ "Kunde inte starta en tråd för replikering", "Användare '%-.64s' har redan 'max_user_connections' aktiva inloggningar", "Du kan endast använda konstant-uttryck med SET", +"Lock wait timeout exceeded", +"The total number of locks exceeds the lock table size", +"Update locks cannot be acquired during a READ UNCOMMITTED transaction", diff --git a/sql/sql_base.cc b/sql/sql_base.cc index 
e7d63e1e5e4..d9470ee0b59 100644 --- a/sql/sql_base.cc +++ b/sql/sql_base.cc @@ -1388,11 +1388,6 @@ TABLE *open_ltable(THD *thd, TABLE_LIST *table_list, thr_lock_type lock_type) bool refresh; DBUG_ENTER("open_ltable"); -#ifdef __WIN__ - /* Win32 can't drop a file that is open */ - if (lock_type == TL_WRITE_ALLOW_READ) - lock_type= TL_WRITE; -#endif thd->proc_info="Opening table"; while (!(table=open_table(thd,table_list->db ? table_list->db : thd->db, table_list->real_name,table_list->name, @@ -1400,6 +1395,19 @@ TABLE *open_ltable(THD *thd, TABLE_LIST *table_list, thr_lock_type lock_type) if (table) { int error; + +#ifdef __WIN__ + /* Win32 can't drop a file that is open */ + if (lock_type == TL_WRITE_ALLOW_READ +#ifdef HAVE_GEMINI_DB + && table->db_type != DB_TYPE_GEMINI +#endif /* HAVE_GEMINI_DB */ + ) + { + lock_type= TL_WRITE; + } +#endif /* __WIN__ */ + table_list->table=table; table->grant= table_list->grant; if (thd->locked_tables) diff --git a/sql/sql_table.cc b/sql/sql_table.cc index ad39b91a5ca..bd7c82d3e26 100644 --- a/sql/sql_table.cc +++ b/sql/sql_table.cc @@ -423,6 +423,13 @@ int mysql_create_table(THD *thd,const char *db, const char *table_name, column->field_name); DBUG_RETURN(-1); } + if (key->type == Key::FULLTEXT && + (file->option_flag() & HA_NO_FULLTEXT_KEY)) + { + my_printf_error(ER_WRONG_KEY_COLUMN, ER(ER_WRONG_KEY_COLUMN), MYF(0), + column->field_name); + DBUG_RETURN(-1); + } if (f_is_blob(sql_field->pack_flag)) { if (!(file->option_flag() & HA_BLOB_KEY)) @@ -1678,6 +1685,16 @@ copy_data_between_tables(TABLE *from,TABLE *to, goto err; }; + /* Turn off recovery logging since rollback of an + alter table is to delete the new table so there + is no need to log the changes to it. 
*/ + error = ha_recovery_logging(thd,false); + if(error) + { + error = 1; + goto err; + } + init_read_record(&info, thd, from, (SQL_SELECT *) 0, 1,1); if (handle_duplicates == DUP_IGNORE || handle_duplicates == DUP_REPLACE) @@ -1723,6 +1740,7 @@ copy_data_between_tables(TABLE *from,TABLE *to, if (to->file->activate_all_index(thd)) error=1; + tmp_error = ha_recovery_logging(thd,true); /* Ensure that the new table is saved properly to disk so that we can do a rename @@ -1734,6 +1752,7 @@ copy_data_between_tables(TABLE *from,TABLE *to, if (to->file->external_lock(thd,F_UNLCK)) error=1; err: + tmp_error = ha_recovery_logging(thd,true); free_io_cache(from); *copied= found_count; *deleted=delete_count; From 3a837a04033be138ce99f03c843ddfba2d68ec97 Mon Sep 17 00:00:00 2001 From: unknown Date: Tue, 29 May 2001 20:03:58 +0300 Subject: [PATCH 16/20] Fixed a few typos. Docs/manual.texi: my.cfg -> my.cnf scripts/safe_mysqld.sh: my.cfg -> my.cnf support-files/mysql-multi.server.sh: my.cfg -> my.cnf support-files/mysql.server.sh: my.cfg -> my.cnf BitKeeper/etc/logging_ok: Logging to logging@openlogging.org accepted --- BitKeeper/etc/logging_ok | 8 +------- Docs/manual.texi | 2 +- scripts/safe_mysqld.sh | 2 +- support-files/mysql-multi.server.sh | 2 +- support-files/mysql.server.sh | 2 +- 5 files changed, 5 insertions(+), 11 deletions(-) diff --git a/BitKeeper/etc/logging_ok b/BitKeeper/etc/logging_ok index 7902f10c3d3..fb534622f9b 100644 --- a/BitKeeper/etc/logging_ok +++ b/BitKeeper/etc/logging_ok @@ -1,7 +1 @@ -heikki@donna.mysql.fi -miguel@linux.local -mikef@nslinux.bedford.progress.com -monty@donna.mysql.fi -monty@tik.mysql.fi -mwagner@evoq.mwagner.org -sasha@mysql.sashanet.com +jani@janikt.pp.saunalahti.fi diff --git a/Docs/manual.texi b/Docs/manual.texi index ead53187c27..04bd5cfd5af 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -46707,7 +46707,7 @@ Added table locks to Berkeley DB. 
Fixed a bug with @code{LEFT JOIN} and @code{ORDER BY} where the first table had only one matching row. @item -Added 4 sample @code{my.cfg} example files in the @file{support-files} +Added 4 sample @code{my.cnf} example files in the @file{support-files} directory. @item Fixed @code{duplicated key} problem when doing big @code{GROUP BY}'s. diff --git a/scripts/safe_mysqld.sh b/scripts/safe_mysqld.sh index 6c006e96768..6eda1740ad6 100644 --- a/scripts/safe_mysqld.sh +++ b/scripts/safe_mysqld.sh @@ -114,7 +114,7 @@ fi pid_file= err_log= -# Get first arguments from the my.cfg file, groups [mysqld] and [safe_mysqld] +# Get first arguments from the my.cnf file, groups [mysqld] and [safe_mysqld] # and then merge with the command line arguments if test -x ./bin/my_print_defaults then diff --git a/support-files/mysql-multi.server.sh b/support-files/mysql-multi.server.sh index af13009d038..6c940630427 100644 --- a/support-files/mysql-multi.server.sh +++ b/support-files/mysql-multi.server.sh @@ -65,7 +65,7 @@ parse_arguments() { done } -# Get arguments from the my.cfg file, groups [mysqld], [mysql_server], +# Get arguments from the my.cnf file, groups [mysqld], [mysql_server], # and mysql_multi_server if test -x ./bin/my_print_defaults then diff --git a/support-files/mysql.server.sh b/support-files/mysql.server.sh index 9307a2e3eb2..62381ccf0d3 100644 --- a/support-files/mysql.server.sh +++ b/support-files/mysql.server.sh @@ -53,7 +53,7 @@ parse_arguments() { done } -# Get arguments from the my.cfg file, groups [mysqld] and [mysql_server] +# Get arguments from the my.cnf file, groups [mysqld] and [mysql_server] if test -x ./bin/my_print_defaults then print_defaults="./bin/my_print_defaults" From ea1251b1b8615fcce3dd793614c01b04c4d6477f Mon Sep 17 00:00:00 2001 From: unknown Date: Wed, 30 May 2001 16:24:02 +0300 Subject: [PATCH 17/20] Added documentation about mysqlcheck. 
Docs/manual.texi: Added documentation about mysqlcheck --- Docs/manual.texi | 132 +++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 129 insertions(+), 3 deletions(-) diff --git a/Docs/manual.texi b/Docs/manual.texi index 04bd5cfd5af..bb657ca85c5 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -729,6 +729,7 @@ MySQL Utilites Maintaining a MySQL Installation * Table maintenance:: Table maintenance and crash recovery +* Using mysqlcheck:: Using mysqlcheck for maintenance and recovery * Maintenance regimen:: Setting up a table maintenance regimen * Table-info:: Getting information about a table * Crash recovery:: Using @code{myisamchk} for crash recovery @@ -34598,7 +34599,8 @@ to start using the new table. @cindex maintaining, tables @cindex tables, maintaining @cindex databases, maintaining -@cindex @code{mysiamchk} +@cindex @code{myisamchk} +@cindex @code{mysqlcheck} @cindex crash, recovery @cindex recovery, from crash @node Maintenance, Adding functions, Tools, Top @@ -34606,6 +34608,7 @@ to start using the new table. @menu * Table maintenance:: Table maintenance and crash recovery +* Using mysqlcheck:: Using mysqlcheck for maintenance and recovery * Maintenance regimen:: Setting up a table maintenance regimen * Table-info:: Getting information about a table * Crash recovery:: Using @code{myisamchk} for crash recovery @@ -34616,7 +34619,7 @@ This chapter covers what you should know about maintaining a @strong{MySQL} distribution. You will learn how to care for your tables on a regular basis, and what to do when disaster strikes. 
-@node Table maintenance, Maintenance regimen, Maintenance, Maintenance +@node Table maintenance, Using mysqlcheck, Maintenance, Maintenance @section Using @code{myisamchk} for Table Maintenance and Crash Recovery Starting with @strong{MySQL} Version 3.23.13, you can check MyISAM @@ -34998,9 +35001,132 @@ This space is allocated on the temporary disk (specified by @code{TMPDIR} or If you have a problem with disk space during repair, you can try to use @code{--safe-recover} instead of @code{--recover}. +@node Using mysqlcheck, Maintenance regimen, Table maintenance, Maintenance +@section Using @code{mysqlcheck} for Table Maintenance and Crash Recovery + +Since @strong{MySQL} version 3.23.38 you can use a new +checking and repairing tool for @code{MyISAM} tables. The difference from +@code{myisamchk} is that @code{mysqlcheck} should be used when the +@code{mysqld} server is running, whereas @code{myisamchk} should be used +when it is not. The benefit is that you no longer have to take the +server down for checking or repairing your tables. + +@code{mysqlcheck} uses the @strong{MySQL} server commands @code{CHECK}, +@code{REPAIR}, @code{ANALYZE} and @code{OPTIMIZE} in a convenient way +for the user. + +There are three alternative ways to invoke @code{mysqlcheck}: + +@example +shell> mysqlcheck [OPTIONS] database [tables] +shell> mysqlcheck [OPTIONS] --databases DB1 [DB2 DB3...] +shell> mysqlcheck [OPTIONS] --all-databases +@end example + +So it can be used in a similar way to @code{mysqldump} when it +comes to which databases and tables you want to choose. + +@code{mysqlcheck} has a special feature compared to the other +clients: the default behavior, checking tables (@code{-c}), can be changed by +renaming the binary.
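The renaming trick described above works because the tool picks its default action from the name it was invoked under (its @code{argv[0]}). A minimal stand-in shell sketch of that dispatch, using a stub script rather than the real @code{mysqlcheck} binary (all paths and names here are illustrative, not part of the patch):

```shell
# Build a stub that chooses a default action from its own invocation name,
# the way mysqlcheck chooses between -c / -r / -a / -o.
mkdir -p /tmp/gr_demo && cd /tmp/gr_demo
cat > mysqlcheck_stub <<'EOF'
#!/bin/sh
# Dispatch on the basename we were invoked as.
case "$(basename "$0")" in
  mysqlrepair)   echo "default: -r (repair)" ;;
  mysqlanalyze)  echo "default: -a (analyze)" ;;
  mysqloptimize) echo "default: -o (optimize)" ;;
  *)             echo "default: -c (check)" ;;
esac
EOF
chmod +x mysqlcheck_stub
# A symbolic link with a different name changes the default action.
ln -sf mysqlcheck_stub mysqlrepair
./mysqlcheck_stub   # prints: default: -c (check)
./mysqlrepair       # prints: default: -r (repair)
```

The same binary thus behaves as a checker or a repairer depending only on the name used to start it.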
So if you want a tool that repairs tables +by default, you can simply copy @code{mysqlcheck} under a new name, +@code{mysqlrepair}, or alternatively make a symbolic +link to @code{mysqlcheck} and name the symbolic link +@code{mysqlrepair}. If you now invoke @code{mysqlrepair}, it will repair +tables by default. + +The names that you can use to change the default behavior of +@code{mysqlcheck} are these: + +@example +mysqlrepair: The default option will be -r +mysqlanalyze: The default option will be -a +mysqloptimize: The default option will be -o +@end example + +The options available for @code{mysqlcheck} are listed below; please +check what your version supports with @code{mysqlcheck --help}. + +@table @code +@item -A, --all-databases +Check all the databases. This is the same as --databases with all +databases selected. +@item -1, --all-in-1 +Instead of issuing one query for each table, issue one query per +database, with the table names given in a comma-separated list. +@item -a, --analyze +Analyze the given tables. +@item --auto-repair +If a checked table is corrupted, automatically fix it. Repairing is +done after all tables have been checked, if corrupted ones were found. +@item -#, --debug=... +Output a debug log. Often this is 'd:t:o,filename'. +@item --character-sets-dir=... +Directory where the character sets are stored. +@item -c, --check +Check tables for errors. +@item -C, --check-only-changed +Check only tables that have changed since the last check or that haven't been +closed properly. +@item --compress +Use compression in the server/client protocol. +@item -?, --help +Display this help message and exit. +@item -B, --databases +Check several databases. Note the difference in usage: in this case +no tables are given. All name arguments are regarded as database names. +@item --default-character-set=...
+Set the default character set. +@item -F, --fast +Check only tables that haven't been closed properly. +@item -f, --force +Continue even if we get an SQL error. +@item -e, --extended +If you use this option with CHECK TABLE, it will ensure that the +table is 100 percent consistent, but will take a long time. + +If you use this option with REPAIR TABLE, it will run an extended +repair on the table, which may not only take a long time to execute, but +may also produce a lot of garbage rows! +@item -h, --host=... +Connect to the given host. +@item -m, --medium-check +Faster than an extended check, but finds only 99.99 percent of all +errors. Should be good enough for most cases. +@item -o, --optimize +Optimize the given tables. +@item -p, --password[=...] +Password to use when connecting to the server. If the password is not +given, it is solicited on the tty. +@item -P, --port=... +Port number to use for the connection. +@item -q, --quick +If you use this option with CHECK TABLE, it prevents the check +from scanning the rows to check for wrong links. This is the fastest +check. + +If you use this option with REPAIR TABLE, it will try to repair +only the index tree. This is the fastest repair method for a table. +@item -r, --repair +Can fix almost anything except unique keys that aren't unique. +@item -s, --silent +Print only error messages. +@item -S, --socket=... +Socket file to use for the connection. +@item --tables +Overrides the --databases (-B) option. +@item -u, --user=# +User name for login, if not the current user. +@item -v, --verbose +Print information about the various stages. +@item -V, --version +Output version information and exit.
+@end table + @cindex maintaining, tables @cindex tables, maintenance regimen -@node Maintenance regimen, Table-info, Table maintenance, Maintenance +@node Maintenance regimen, Table-info, Using mysqlcheck, Maintenance @section Setting Up a Table Maintenance Regimen Starting with @strong{MySQL} Version 3.23.13, you can check MyISAM From 1158d8a263c750fd96c43a9c6bcc37bba8e9fb5c Mon Sep 17 00:00:00 2001 From: unknown Date: Wed, 30 May 2001 14:13:43 -0500 Subject: [PATCH 18/20] manual.texi URL adjustments. Docs/manual.texi: URL adjustments. BitKeeper/etc/logging_ok: Logging to logging@openlogging.org accepted --- BitKeeper/etc/logging_ok | 1 + Docs/manual.texi | 6 ++++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/BitKeeper/etc/logging_ok b/BitKeeper/etc/logging_ok index fb534622f9b..0d845c1e172 100644 --- a/BitKeeper/etc/logging_ok +++ b/BitKeeper/etc/logging_ok @@ -1 +1,2 @@ jani@janikt.pp.saunalahti.fi +mwagner@evoq.mwagner.org diff --git a/Docs/manual.texi b/Docs/manual.texi index bb657ca85c5..1aa20b95937 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -2848,7 +2848,7 @@ PTS: Project Tracking System. @item @uref{http://tomato.nvgc.vt.edu/~hroberts/mot} Job and software tracking system. -@item @uref{http://www.cynergi.net/non-secure/exportsql/} +@item @uref{http://www.cynergi.net/exportsql/} ExportSQL: A script to export data from Access95+. @item @uref{http://SAL.KachinaTech.COM/H/1/MYSQL.html} @@ -43883,6 +43883,8 @@ An online magazine featuring music, literature, arts, and design content. 
@item @uref{http://kids.msfc.nasa.gov, NASA KIDS} @item @uref{http://science.nasa.gov, Sience@@NASA} +@item @uref{http://www.handy.de/, handy.de} + @item @uref{http://lindev.jmc.tju.edu/qwor, Qt Widget and Object Repository} @item @uref{http://www.samba-choro.com.br, Brazilian samba site (in Portuguese)} @@ -44811,7 +44813,7 @@ detection of @code{TIMESTAMP} fields), provides warnings and suggestions while converting, quotes @strong{all} special characters in text and binary data, and so on. It will also convert to @code{mSQL} v1 and v2, and is free of charge for anyone. See -@uref{http://www.cynergi.net/prod/exportsql/} for the latest version. By +@uref{http://www.cynergi.net/exportsql/} for the latest version. By Pedro Freire, @email{support@@cynergi.net}. NOTE: Doesn't work with Access2! @item @uref{http://www.mysql.com/Downloads/Contrib/access_to_mysql.txt, access_to_mysql.txt} From b6cae0f64707aca9ec2bbc437f87a84e53fbfec7 Mon Sep 17 00:00:00 2001 From: unknown Date: Thu, 31 May 2001 12:18:53 +0300 Subject: [PATCH 19/20] Added functions for symbolic link handling to make it possible to backport things from 4.0. This is safe as the functions are not used! Fixed bug in new mutex handling in InnoDB Make allow_break() and dont_break() defines. Docs/manual.texi: Remove -fomit-frame-pointer from default binaries configure.in: Use -lcma library on HPUX include/my_sys.h: Added functions for symbolic link handling to make it possible to backport things from 4.0. (This is safe as the functions are not used!) include/mysys_err.h: Error messages for symlink functions. innobase/include/sync0sync.ic: Fixed bug in new mutex handling mysys/Makefile.am: Symlink handling mysys/errors.c: Symlink handling mysys/mf_brkhant.c: Make allow_break() and dont_break() defines. sql/sql_select.h: Fix for Intel compiler. 
BitKeeper/etc/logging_ok: Logging to logging@openlogging.org accepted --- BitKeeper/etc/logging_ok | 7 +- Docs/manual.texi | 14 +-- configure.in | 7 +- include/my_sys.h | 14 +++ include/mysys_err.h | 5 +- innobase/include/sync0sync.ic | 3 +- mysys/Makefile.am | 1 + mysys/errors.c | 8 +- mysys/mf_brkhant.c | 6 +- mysys/my_symlink.c | 171 ++++++++++++++++++++++++++++++++++ sql/sql_select.h | 7 +- 11 files changed, 220 insertions(+), 23 deletions(-) create mode 100644 mysys/my_symlink.c diff --git a/BitKeeper/etc/logging_ok b/BitKeeper/etc/logging_ok index f517c9a46cc..e8deba03c8a 100644 --- a/BitKeeper/etc/logging_ok +++ b/BitKeeper/etc/logging_ok @@ -1,6 +1 @@ -mwagner@evoq.mwagner.org -sasha@mysql.sashanet.com -heikki@donna.mysql.fi -miguel@linux.local -monty@tik.mysql.fi -monty@donna.mysql.fi +monty@hundin.mysql.fi diff --git a/Docs/manual.texi b/Docs/manual.texi index 5437b29fb53..3e564e66942 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -8472,7 +8472,7 @@ The following @code{configure} command should work: @example shell> CFLAGS="-D_XOPEN_XPG4" CXX=gcc CXXFLAGS="-D_XOPEN_XPG4" \ ./configure \ - --with-debug --prefix=/usr/local/mysql \ + --prefix=/usr/local/mysql \ --with-named-thread-libs="-lgthreads -lsocket -lgen -lgthreads" \ --with-named-curses-libs="-lcurses" @end example @@ -9509,19 +9509,19 @@ and are configured with the following compilers and options: @code{CC=gcc CXX=gcc CXXFLAGS="-O3 -felide-constructors" ./configure --prefix=/usr/local/mysql --disable-shared --with-extra-charsets=complex --enable-assembler} @item SunOS 5.5.1 sun4u with @code{egcs} 1.0.3a -@code{CC=gcc CFLAGS="-O3 -fomit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fomit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-low-memory --with-extra-charsets=complex} +@code{CC=gcc CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-low-memory 
--with-extra-charsets=complex} @item SunOS 5.6 sun4u with @code{egcs} 2.90.27 -@code{CC=gcc CFLAGS="-O3 -fomit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fomit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-low-memory --with-extra-charsets=complex} +@code{CC=gcc CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-low-memory --with-extra-charsets=complex} @item SunOS 5.6 i86pc with @code{gcc} 2.8.1 @code{CC=gcc CXX=gcc CXXFLAGS=-O3 ./configure --prefix=/usr/local/mysql --with-low-memory --with-extra-charsets=complex} @item Linux 2.0.33 i386 with @code{pgcc} 2.90.29 (@code{egcs} 1.0.3a) -@code{CFLAGS="-O3 -mpentium -mstack-align-double -fomit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -mpentium -mstack-align-double -fomit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --enable-assembler --with-mysqld-ldflags=-all-static --with-extra-charsets=complex} +@code{CFLAGS="-O3 -mpentium -mstack-align-double" CXX=gcc CXXFLAGS="-O3 -mpentium -mstack-align-double -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --enable-assembler --with-mysqld-ldflags=-all-static --with-extra-charsets=complex} @item Linux 2.2.x with x686 with @code{gcc} 2.95.2 -@code{CFLAGS="-O3 -mpentiumpro -fomit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -mpentiumpro -fomit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --enable-assembler --with-mysqld-ldflags=-all-static --disable-shared --with-extra-charset=complex} +@code{CFLAGS="-O3 -mpentiumpro" CXX=gcc CXXFLAGS="-O3 -mpentiumpro -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --enable-assembler --with-mysqld-ldflags=-all-static --disable-shared --with-extra-charset=complex} @item SCO 3.2v5.0.4 i386 with @code{gcc} 2.7-95q4 @code{CC=gcc CXX=gcc CXXFLAGS=-O3 
./configure --prefix=/usr/local/mysql --with-extra-charsets=complex} @@ -10724,8 +10724,8 @@ the old @code{ISAM} type. You don't have to convert your old tables to use these with Version 3.23. By default, all new tables will be created with type @code{MyISAM} (unless you start @code{mysqld} with the @code{--default-table-type=isam} option). You can change an @code{ISAM} -table to a @code{MyISAM} table with @code{ALTER TABLE} or the Perl script -@code{mysql_convert_table_format}. +table to a @code{MyISAM} table with @code{ALTER TABLE table_name TYPE=MyISAM} +or the Perl script @code{mysql_convert_table_format}. Version 3.22 and 3.21 clients will work without any problems with a Version 3.23 server. diff --git a/configure.in b/configure.in index 4e73bb901fa..088b0417720 100644 --- a/configure.in +++ b/configure.in @@ -755,6 +755,11 @@ case $SYSTEM_TYPE in echo "Enabling snprintf workaround for hpux 10.20" CFLAGS="$CFLAGS -DHAVE_BROKEN_SNPRINTF -DSIGNALS_DONT_BREAK_READ" CXXFLAGS="$CXXFLAGS -DHAVE_BROKEN_SNPRINTF -D_INCLUDE_LONGLONG -DSIGNALS_DONT_BREAK_READ" + if test "$with_named_thread" = "no" + then + echo "Using --with-named-thread=-lcma" + with_named_thread="-lcma" + fi ;; *hpux11.*) echo "Enabling pread/pwrite workaround for hpux 11" @@ -1051,7 +1056,7 @@ fi AC_MSG_CHECKING("named thread libs:") if test "$with_named_thread" != "no" then - LIBS="$LIBS $with_named_thread" + LIBS="$with_named_thread $LIBS $with_named_thread" with_posix_threads="yes" with_mit_threads="no" AC_MSG_RESULT("$with_named_thread") diff --git a/include/my_sys.h b/include/my_sys.h index 44faddad405..e2eb7ac30d5 100644 --- a/include/my_sys.h +++ b/include/my_sys.h @@ -62,6 +62,8 @@ extern int NEAR my_errno; /* Last error in mysys */ #define MY_DONT_CHECK_FILESIZE 128 /* Option to init_io_cache() */ #define MY_LINK_WARNING 32 /* my_redel() gives warning if links */ #define MY_COPYTIME 64 /* my_redel() copys time */ +#define MY_DELETE_OLD 256 /* my_create_with_symlink() */ +#define
MY_RESOLVE_LINK 128 /* my_realpath(); Only resolve links */ #define MY_HOLD_ORIGINAL_MODES 128 /* my_copy() holds to file modes */ #define MY_REDEL_MAKE_BACKUP 256 #define MY_SEEK_NOT_DONE 32 /* my_lock may have to do a seek */ @@ -378,6 +380,12 @@ extern File my_create(const char *FileName,int CreateFlags, int AccsesFlags, myf MyFlags); extern int my_close(File Filedes,myf MyFlags); extern int my_mkdir(const char *dir, int Flags, myf MyFlags); +extern int my_readlink(char *to, const char *filename, myf MyFlags); +extern int my_realpath(char *to, const char *filename, myf MyFlags); +extern File my_create_with_symlink(const char *linkname, const char *filename, + int createflags, int access_flags, + myf MyFlags); +extern int my_symlink(const char *content, const char *linkname, myf MyFlags); extern uint my_read(File Filedes,byte *Buffer,uint Count,myf MyFlags); extern uint my_pread(File Filedes,byte *Buffer,uint Count,my_off_t offset, myf MyFlags); @@ -428,8 +436,14 @@ extern int my_redel(const char *from, const char *to, int MyFlags); extern int my_copystat(const char *from, const char *to, int MyFlags); extern my_string my_filename(File fd); +#ifndef THREAD extern void dont_break(void); extern void allow_break(void); +#else +#define dont_break() +#define allow_break() +#endif + extern void my_remember_signal(int signal_number,sig_handler (*func)(int)); extern void caseup(my_string str,uint length); extern void casedn(my_string str,uint length); diff --git a/include/mysys_err.h b/include/mysys_err.h index b379f5bcbc9..2d23ead36b6 100644 --- a/include/mysys_err.h +++ b/include/mysys_err.h @@ -22,7 +22,7 @@ extern "C" { #endif #define GLOB 0 /* Error maps */ -#define GLOBERRS 24 /* Max number of error messages in map's */ +#define GLOBERRS 27 /* Max number of error messages in map's */ #define EE(X) globerrs[ X ] /* Defines to add error to right map */ extern const char * NEAR globerrs[]; /* my_error_messages is here */ @@ -51,6 +51,9 @@ extern const char * NEAR 
globerrs[]; /* my_error_messages is here */ #define EE_CANT_MKDIR 21 #define EE_UNKNOWN_CHARSET 22 #define EE_OUT_OF_FILERESOURCES 23 +#define EE_CANT_READLINK 24 +#define EE_CANT_SYMLINK 25 +#define EE_REALPATH 26 #ifdef __cplusplus } diff --git a/innobase/include/sync0sync.ic b/innobase/include/sync0sync.ic index e23e2b68e14..5a872c6b093 100644 --- a/innobase/include/sync0sync.ic +++ b/innobase/include/sync0sync.ic @@ -134,9 +134,10 @@ mutex_reset_lock_word( __asm XCHG EDX, DWORD PTR [ECX] #else mutex->lock_word = 0; - +#if !(defined(__GNUC__) && defined(UNIV_INTEL_X86)) os_fast_mutex_unlock(&(mutex->os_fast_mutex)); #endif +#endif } /********************************************************************** diff --git a/mysys/Makefile.am b/mysys/Makefile.am index 5a7293bc680..827367ac755 100644 --- a/mysys/Makefile.am +++ b/mysys/Makefile.am @@ -33,6 +33,7 @@ libmysys_a_SOURCES = my_init.c my_getwd.c mf_getdate.c\ my_alloc.c safemalloc.c my_fopen.c my_fstream.c \ my_error.c errors.c my_div.c my_messnc.c \ mf_format.c mf_same.c mf_dirname.c mf_fn_ext.c \ + my_symlink.c \ mf_pack.c mf_pack2.c mf_unixpath.c mf_stripp.c \ mf_casecnv.c mf_soundex.c mf_wcomp.c mf_wfile.c \ mf_qsort.c mf_qsort2.c mf_sort.c \ diff --git a/mysys/errors.c b/mysys/errors.c index 6e9f1fabab0..77e52c2f0b3 100644 --- a/mysys/errors.c +++ b/mysys/errors.c @@ -46,6 +46,9 @@ const char * NEAR globerrs[GLOBERRS]= "Can't create directory '%s' (Errcode: %d)", "Character set '%s' is not a compiled character set and is not specified in the '%s' file", "Out of resources when opening file '%s' (Errcode: %d)", + "Can't read value for symlink '%s' (Error %d)", + "Can't create symlink '%s' pointing at '%s' (Error %d)", + "Error on realpath() on '%s' (Error %d)", }; void init_glob_errs(void) @@ -81,6 +84,9 @@ void init_glob_errs() EE(EE_DISK_FULL) = "Disk is full writing '%s'. 
Waiting for someone to free space..."; EE(EE_CANT_MKDIR) ="Can't create directory '%s' (Errcode: %d)"; EE(EE_UNKNOWN_CHARSET)= "Character set is not a compiled character set and is not specified in the %s file"; - EE(EE_OUT_OF_FILERESOURCES)="Out of resources when opening file '%s' (Errcode: %d)", + EE(EE_OUT_OF_FILERESOURCES)="Out of resources when opening file '%s' (Errcode: %d)"; + EE(EE_CANT_READLINK)="Can't read value for symlink '%s' (Error %d)"; + EE(EE_CANT_SYMLINK)="Can't create symlink '%s' pointing at '%s' (Error %d)"; + EE(EE_REALPATH)="Error on realpath() on '%s' (Error %d)"; } #endif diff --git a/mysys/mf_brkhant.c b/mysys/mf_brkhant.c index 4e4bc2410f9..debf5d9a712 100644 --- a/mysys/mf_brkhant.c +++ b/mysys/mf_brkhant.c @@ -24,17 +24,15 @@ /* Set variable that we can't break */ +#if !defined(THREAD) void dont_break(void) { -#if !defined(THREAD) my_dont_interrupt=1; -#endif return; } /* dont_break */ void allow_break(void) { -#if !defined(THREAD) { reg1 int index; @@ -54,8 +52,8 @@ void allow_break(void) _my_signals=0; } } -#endif } /* dont_break */ +#endif /* Set old status */ diff --git a/mysys/my_symlink.c b/mysys/my_symlink.c new file mode 100644 index 00000000000..e195adcd4c5 --- /dev/null +++ b/mysys/my_symlink.c @@ -0,0 +1,171 @@ +/* Copyright (C) 2000 MySQL AB & MySQL Finland AB & TCX DataKonsult AB + + This library is free software; you can redistribute it and/or + modify it under the terms of the GNU Library General Public + License as published by the Free Software Foundation; either + version 2 of the License, or (at your option) any later version. + + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Library General Public License for more details. 
+ + You should have received a copy of the GNU Library General Public + License along with this library; if not, write to the Free + Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, + MA 02111-1307, USA */ + +#include "mysys_priv.h" +#include "mysys_err.h" +#include <m_string.h> +#ifdef HAVE_REALPATH +#include <sys/param.h> +#include <sys/stat.h> +#endif + +/* + Reads the content of a symbolic link + If the file is not a symbolic link, return the original file name in to. +*/ + +int my_readlink(char *to, const char *filename, myf MyFlags) +{ +#ifndef HAVE_READLINK + strmov(to,filename); + return 0; +#else + int result=0; + int length; + DBUG_ENTER("my_readlink"); + + if ((length=readlink(filename, to, FN_REFLEN-1)) < 0) + { + /* Don't give an error if this wasn't a symlink */ + if ((my_errno=errno) == EINVAL) + { + strmov(to,filename); + } + else + { + if (MyFlags & MY_WME) + my_error(EE_CANT_READLINK, MYF(0), filename, errno); + result= -1; + } + } + else + to[length]=0; + DBUG_RETURN(result); +#endif /* HAVE_READLINK */ +} + + +/* Create a symbolic link */ + +int my_symlink(const char *content, const char *linkname, myf MyFlags) +{ +#ifndef HAVE_READLINK + return 0; +#else + int result; + DBUG_ENTER("my_symlink"); + + result= 0; + if (symlink(content, linkname)) + { + result= -1; + my_errno=errno; + if (MyFlags & MY_WME) + my_error(EE_CANT_SYMLINK, MYF(0), linkname, content, errno); + } + DBUG_RETURN(result); +#endif /* HAVE_READLINK */ +} + + +/* + Create a file and a symbolic link that points to this file + If linkname is a null pointer or equal to filename, we don't + create a link.
+ */ + + +File my_create_with_symlink(const char *linkname, const char *filename, + int createflags, int access_flags, myf MyFlags) +{ + File file; + int tmp_errno; + DBUG_ENTER("my_create_with_symlink"); + if ((file=my_create(filename, createflags, access_flags, MyFlags)) >= 0) + { + /* Test if we should create a link */ + if (linkname && strcmp(linkname,filename)) + { + /* Delete old link/file */ + if (MyFlags & MY_DELETE_OLD) + my_delete(linkname, MYF(0)); + /* Create link */ + if (my_symlink(filename, linkname, MyFlags)) + { + /* Fail, remove everything we have done */ + tmp_errno=my_errno; + my_close(file,MYF(0)); + my_delete(filename, MYF(0)); + file= -1; + my_errno=tmp_errno; + } + } + } + DBUG_RETURN(file); +} + + +/* + Resolve all symbolic links in path + 'to' may be equal to 'filename' + + Because purify gives a lot of UMR errors when using realpath(), + this code is disabled when using purify. + + If MY_RESOLVE_LINK is given, only do realpath if the file is a link. +*/ + +#if defined(SCO) +#define BUFF_LEN 4097 +#elif defined(MAXPATHLEN) +#define BUFF_LEN MAXPATHLEN +#else +#define BUFF_LEN FN_LEN +#endif + +int my_realpath(char *to, const char *filename, myf MyFlags) +{ +#if defined(HAVE_REALPATH) && !defined(HAVE_purify) && !defined(HAVE_BROKEN_REALPATH) + int result=0; + char buff[BUFF_LEN]; + struct stat stat_buff; + DBUG_ENTER("my_realpath"); + + if (!(MyFlags & MY_RESOLVE_LINK) || + (!lstat(filename,&stat_buff) && S_ISLNK(stat_buff.st_mode))) + { + char *ptr; + if ((ptr=realpath(filename,buff))) + strmake(to,ptr,FN_REFLEN-1); + else + { + /* Realpath didn't work; Use original name */ + my_errno=errno; + if (MyFlags & MY_WME) + my_error(EE_REALPATH, MYF(0), filename, my_errno); + if (to != filename) + strmov(to,filename); + result= -1; + } + } + return result; +#else + if (to != filename) + strmov(to,filename); + return 0; +#endif +} diff --git a/sql/sql_select.h b/sql/sql_select.h index bb97a10128f..1bf7d7863eb 100644 --- a/sql/sql_select.h +++ 
b/sql/sql_select.h @@ -138,8 +138,11 @@ class TMP_TABLE_PARAM { } inline void cleanup(void) { - delete [] copy_field; - copy_field=0; + if (copy_field) /* Fix for Intel compiler */ + { + delete [] copy_field; + copy_field=0; + } } }; From 8c8244918fc37bb25656a5f8b451a362e4b16577 Mon Sep 17 00:00:00 2001 From: unknown Date: Thu, 31 May 2001 13:17:30 +0300 Subject: [PATCH 20/20] Fixed typo from last checkin Docs/manual.texi: Changelog --- Docs/manual.texi | 13 +++++++++---- sql/handler.cc | 2 +- 2 files changed, 10 insertions(+), 5 deletions(-) diff --git a/Docs/manual.texi b/Docs/manual.texi index 73eb55999ff..dadc88d6368 100644 --- a/Docs/manual.texi +++ b/Docs/manual.texi @@ -18861,10 +18861,10 @@ When you insert a value of @code{NULL} (recommended) or @code{0} into an If you delete the row containing the maximum value for an @code{AUTO_INCREMENT} column, the value will be reused with an -@code{ISAM}, @code{GEMINI}, @code{BDB} or @code{INNODB} table but not with a -@code{MyISAM} table. If you delete all rows in the table with -@code{DELETE FROM table_name} (without a @code{WHERE}) in -@code{AUTOCOMMIT} mode, the sequence starts over for both table types. +@code{ISAM}, @code{GEMINI} or @code{BDB} table but not with a +@code{MyISAM} or @code{InnoDB} table. If you delete all rows in the table +with @code{DELETE FROM table_name} (without a @code{WHERE}) in +@code{AUTOCOMMIT} mode, the sequence starts over for all table types. @strong{NOTE:} There can be only one @code{AUTO_INCREMENT} column per table, and it must be indexed. @strong{MySQL} Version 3.23 will also only @@ -45510,6 +45510,8 @@ Added @code{ORDER BY} syntax to @code{UPDATE} and @code{DELETE}. @item Optimized queries of type: @code{SELECT DISTINCT * from table_name ORDER by key_part1 LIMIT #} +@item +Added support for sym-linking of MyISAM tables. @end itemize @node News-3.23.x, News-3.22.x, News-4.0.x, News @@ -45603,6 +45605,9 @@ not yet 100% confident in this code. 
@appendixsubsec Changes in release 3.23.39 @itemize @bullet @item +We are now using the @code{-lcma} thread library on HPUX 10.20 to +get @strong{MySQL} more stable on HPUX. +@item Fixed problem with @code{IF()} and number of decimals in the result. @item Fixed that date-part extract functions works with dates where day diff --git a/sql/handler.cc b/sql/handler.cc index 7720e9ca671..bac24a6dba7 100644 --- a/sql/handler.cc +++ b/sql/handler.cc @@ -779,8 +779,8 @@ int ha_recovery_logging(THD *thd, bool on) { #ifdef HAVE_GEMINI_DB error = gemini_recovery_logging(thd, on); - } #endif + } #endif DBUG_RETURN(error); }