The problem was that the introduction of max-thread-mem-used can cause
an allocation error very early, even before mysql_parse() is called.
As mysql_parse() calls thd->reset_for_next_command(), which called
clear_error(), the error number was lost.
Fixed by adding an option to have unique messages for each KILL
signal and changing max-thread-mem-used to use this new feature.
This removes a lot of problems with the original approach, where
one could get errors signaled silently almost any time.
Also fixed by moving clear_error() from reset_for_next_command() to
do_command(), before any memory allocation for the thread.
Related changes:
- reset_for_next_command() now has an optional parameter that says
whether clear_error() should be called. By default it is, but not
anymore from dispatch_command(), which was the original problem
(sketched after this list).
- Added an optional parameter to clear_error() to force calling of
reset_diagnostics_area(). Before, clear_error() only called
reset_diagnostics_area() if there was no error, so we normally
called reset_diagnostics_area() twice.
- This change removed several duplicated calls to clear_error()
when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against kill during
quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup)
- Set fatal_error if max_thread_mem_used is signaled.
(Same logic we use for other places where we are out of resources)
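
The shape of the two new optional parameters can be shown with a minimal
sketch, using toy stand-ins (THD and Diagnostics_area here are stubs; the
parameter names force and do_clear_error are illustrative, not necessarily
the server's):

  #include <iostream>

  struct Diagnostics_area
  {
    void reset() { std::cout << "diagnostics area reset\n"; }
  };

  struct THD
  {
    Diagnostics_area da;
    bool is_error= false;

    // force= true resets the diagnostics area even when no error is
    // set, so callers no longer need a second, explicit reset.
    void clear_error(bool force= false)
    {
      if (is_error || force)
        da.reset();
      is_error= false;
    }

    // do_clear_error lets dispatch_command() skip the clearing that
    // do_command() already did before any memory allocation.
    void reset_for_next_command(bool do_clear_error= true)
    {
      if (do_clear_error)
        clear_error(true);
      // ... other per-statement resets ...
    }
  };

  int main()
  {
    THD thd;
    thd.is_error= true;                // e.g. an early allocation error
    thd.reset_for_next_command(false); // error kept for later reporting
    thd.clear_error(true);             // one reset, even with no error
  }
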
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in
parallel with other transactions, so they need to be marked as "ddl" in the
binlog.
This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary
tables can also be created and dropped inside a BEGIN...END transaction, and
such transactions were not marked as ddl. Nor was the DROP TEMPORARY TABLE
statement emitted implicitly when a client connection is closed.
So this patch adds such ddl mark for the missing cases.
The difference from Kristian's original patch is mainly a fix in
mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags
over the temporary commit.
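
A hypothetical sketch of the mysql_trans_commit_alter_copy_data() part of
the fix, with stub types (the real flags live in the THD's transaction
context and the commit path is far more involved):

  typedef unsigned int flag_t;

  struct Transaction
  {
    flag_t unsafe_rollback_flags= 0;
    void commit() { unsafe_rollback_flags= 0; } // commit wipes the flags
  };

  void commit_alter_copy_data(Transaction *trans)
  {
    // Remember the flags across the intermediate commit so that the
    // surrounding statement is still marked as "ddl" in the binlog.
    flag_t saved= trans->unsafe_rollback_flags;
    trans->commit();
    trans->unsafe_rollback_flags|= saved;
  }

  int main()
  {
    Transaction trans;
    trans.unsafe_rollback_flags= 1;  // e.g. a temp-table/"ddl" flag
    commit_alter_copy_data(&trans);  // the flag survives the commit
  }
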
If the table has a varchar column and a forced fixed row format
(as in varchar.inc), Field_varstring::store() will only store the
actual number of bytes, not padded, in record[0].
That is, on inserts a part of record[0] can be uninitialized.
Fix: initialize record[0] when a TABLE is created. It doesn't matter
what kind of garbage ends up in this unused/invisible part of the
record, as long as it's not some random memory contents
(which can contain sensitive data).
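
A sketch of the fix idea, with plain stand-ins for TABLE and the record
buffer (TABLE_sketch and init_record() are invented names):

  #include <cstring>
  #include <cstdlib>

  struct TABLE_sketch
  {
    unsigned char *record0;
    size_t reclength;
  };

  bool init_record(TABLE_sketch *t, size_t reclength)
  {
    t->reclength= reclength;
    t->record0= (unsigned char*) malloc(reclength);
    if (!t->record0)
      return true;
    // Fill once at TABLE creation: the unused/invisible tail of a
    // fixed-format row then holds a deterministic pattern instead of
    // whatever happened to be on the heap. Any constant value works.
    memset(t->record0, 0, reclength);
    return false;
  }

  int main()
  {
    TABLE_sketch t;
    return init_record(&t, 256) ? 1 : 0;
  }
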
has_no_default_value() should only fail the insert in strict mode.
Additionally, don't check for "all fields are given values" twice,
as that produces duplicate warnings.
check_that_all_fields_are_given_values() relied on write_set,
but was run too early, before triggers updated write_set.
Also, when triggers are present, fields might get values conditionally,
so we need to check that all fields are given values for every row.
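
A sketch of the per-row check, with illustrative types (the real function
inspects the TABLE's fields and write_set):

  #include <vector>
  #include <iostream>

  struct Field
  {
    const char *name;
    bool has_explicit_value; // set by the statement or by a trigger
    bool has_default;
  };

  // Must run per row, after BEFORE INSERT triggers, because a trigger
  // may assign a value only for some rows; one pre-loop check based on
  // write_set would be both too early and too coarse.
  bool check_that_all_fields_are_given_values(
    const std::vector<Field> &fields, bool strict_mode)
  {
    bool error= false;
    for (size_t i= 0; i < fields.size(); i++)
    {
      const Field &f= fields[i];
      if (!f.has_explicit_value && !f.has_default)
      {
        std::cerr << (strict_mode ? "error" : "warning") << ": field '"
                  << f.name << "' doesn't have a default value\n";
        error|= strict_mode;  // only fail the insert in strict mode
      }
    }
    return error;
  }

  int main()
  {
    std::vector<Field> fields= { { "a", false, false },
                                 { "b", true,  false } };
    return check_that_all_fields_are_given_values(fields, true) ? 1 : 0;
  }
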
Shared variables of Delayed_insert may be updated without mutex protection
when the delayed insert thread gets an error.
Re-acquire the mutex earlier, so that the shared variables are protected.
Problem was that notify_shared_lock() didn't abort an insert delayed
thread if it was in thr_upgrade_write_delay_lock().
ALTER TABLE first takes a weak MDL lock, then a thr_lock, and then
tries to upgrade the MDL lock.
The delayed insert thread first takes an MDL lock, followed by a
thr_upgrade_write_delay_lock().
This caused the delayed insert thread to wait for ALTER TABLE in
thr_lock, while ALTER TABLE was waiting for the MDL lock held by the
delayed insert thread.
Fixed by telling MDL to run thr_lock_abort() for the delayed insert
thread's table. We also set thd->mysys_var->abort to 1 for the delayed
insert thread when it's killed by ALTER TABLE, to ensure it never gets
stuck in thr_lock.
There was a race condition between the delayed insert thread and the
connection thread actually performing INSERT/REPLACE DELAYED. It was
triggered by concurrent INSERT/REPLACE DELAYED and statements that
flush the same table either explicitly or implicitly (like FLUSH TABLE,
ALTER TABLE, ...).
This race condition was caused by a gap in the delayed thread shutdown
logic, which allowed a concurrent connection running INSERT/REPLACE
DELAYED to change essential data, consequently leaving the table in a
semi-consistent state.
Specifically, the query thread could decrease the "tables_in_use"
reference counter in this gap, causing the delayed insert thread to
shut down without releasing the auto-increment and table locks.
Fixed by extending the condition so that the delayed insert thread
won't shut down while there are locked tables.
Also removed the volatile qualifier from tables_in_use and
stacked_inserts, since they're supposed to be protected by mutexes.
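
A sketch of the extended shutdown condition, using standard C++ threading
primitives in place of the server's mysys ones; the member names follow
the description above, everything else is invented:

  #include <mutex>
  #include <condition_variable>

  struct Delayed_insert_sketch
  {
    std::mutex mtx;               // protects the members below; with the
    std::condition_variable cond; // mutex held, 'volatile' adds nothing
    int tables_in_use= 0;
    int stacked_inserts= 0;
    bool table_locked= false;

    // The delayed insert thread may only shut down when no connection
    // uses the table, nothing is queued, AND the table lock has been
    // released; the last condition closes the gap described above.
    bool may_shutdown() const
    {
      return tables_in_use == 0 && stacked_inserts == 0 && !table_locked;
    }

    void wait_until_shutdown_is_safe()
    {
      std::unique_lock<std::mutex> lk(mtx);
      cond.wait(lk, [this] { return may_shutdown(); });
      // release auto-increment and table locks here, then exit
    }
  };
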
1. the same message text for INSERT and INSERT IGNORE
2. no new warnings in UPDATE IGNORE yet (big change for 5.5)
Also replaced a commonly used expression with a named constant.
This also fixes a connection hang when trying to do INSERT DELAYED on a
crashed table.
Added crash_mysqld.inc to allow easy crash+restart of mysqld
CONSTRAINT.
Analysis
=======
INSERT and UPDATE operations using the IGNORE keyword that cause
FOREIGN KEY constraint violations report an error despite the
IGNORE keyword: the foreign key violation errors were not ignored
and were reported as errors instead of warnings even though IGNORE
was set.
Fix
===
Added code to ignore the foreign key violation errors and
report them as warnings when the IGNORE keyword is used.
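
A sketch of the downgrade logic, with stand-in error codes; the real
patch hooks into the server's row-operation error handling rather than a
helper like this:

  #include <iostream>

  enum fk_err { FK_OK, FK_NO_REFERENCED_ROW, FK_ROW_IS_REFERENCED };

  // Returns true if the statement must abort. Under IGNORE, a foreign
  // key violation is reported as a warning and the row is skipped.
  bool handle_row_error(fk_err err, bool ignore)
  {
    if (err == FK_OK)
      return false;
    if (ignore)
    {
      std::cerr << "Warning: foreign key constraint fails\n";
      return false;  // continue with the next row
    }
    std::cerr << "Error: foreign key constraint fails\n";
    return true;
  }

  int main()
  {
    return handle_row_error(FK_NO_REFERENCED_ROW, true) ? 1 : 0;
  }
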
Problem was that the created table was not marked as used (its query_id
was not set), so opening tables for a stored function picked it up (as
an open placeholder for it) and used it, changing TABLE internals.
NOT NULL constraint must be checked *after* the BEFORE triggers.
That is, for INSERT and UPDATE statements even NOT NULL fields
must be able to store a NULL temporarily, at least while
BEFORE INSERT/UPDATE triggers are running (see the sketch after
the cleanup list below).
* move common code to a new set_bad_null_error() function
* move repeated comparison out of the loop
* remove unused code
* unused method Table_triggers_list::set_table
* redundant condition (if (table) after table was dereferenced)
* add an assert
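
A sketch of the resulting shape, with stub types; set_bad_null_error()
is the shared helper named above, check_not_null_after_triggers() is an
invented name:

  #include <iostream>

  struct Field
  {
    const char *field_name;
    bool maybe_null; // false means the column is declared NOT NULL
    bool is_null;    // current value in the row buffer
  };

  // One place for the error/warning text instead of several copies.
  static void set_bad_null_error(const Field &f, bool strict_mode)
  {
    std::cerr << (strict_mode ? "error" : "warning") << ": column '"
              << f.field_name << "' cannot be null\n";
  }

  // Called only after BEFORE INSERT/UPDATE triggers have run, so a
  // trigger may temporarily store NULL into a NOT NULL field and fix
  // it up before this check happens.
  bool check_not_null_after_triggers(const Field &f, bool strict_mode)
  {
    if (!f.maybe_null && f.is_null)
    {
      set_bad_null_error(f, strict_mode);
      return strict_mode;
    }
    return false;
  }

  int main()
  {
    Field f= { "c1", false, true };
    return check_not_null_after_triggers(f, true) ? 1 : 0;
  }
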
changes in query execution plans.
Fixed by introducing table->rpl_write_set, which holds the columns that
should be stored in the binary log (sketched after the list below).
Other things:
- Removed some unneeded references to read_set and write_set, to make
code that really changes read_set and write_set easier to read
(in opt_range.cc)
- Added error handling of failed unpack_current_row()
- Added missing call to mark_columns_needed_for_insert() for DELAYED INSERT
- Removed the unused functions in_read_set() and in_write_set()
- In rpl_record.cc, removed the unused variable 'error'
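
A sketch of the split, using std::bitset in place of the server's
MY_BITMAP; mark_column_for_write() is an invented helper:

  #include <bitset>
  #include <cstddef>

  const size_t MAX_FIELDS= 64;

  struct TABLE_sketch
  {
    std::bitset<MAX_FIELDS> write_set;     // what the engine must write
    std::bitset<MAX_FIELDS> rpl_write_set; // what goes into the binlog
  };

  void mark_column_for_write(TABLE_sketch *t, size_t field_no)
  {
    t->write_set.set(field_no);
    // Tracked separately: the optimizer may later shrink write_set for
    // its own purposes without changing which columns are binlogged.
    t->rpl_write_set.set(field_no);
  }

  int main()
  {
    TABLE_sketch t;
    mark_column_for_write(&t, 0);
  }
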
CONVERT_CHARSET_PARTITION_CONSTANT:
SQL/SQL_PARTITION.CC:202
Issue:
-----
This problem happens under the following conditions:
1) A table partitioned with a character column as the key.
2) The expressions specified in the partition definition
require a charset conversion. This can happen when the
server's default collation is different from the
expression's collation.
3) INSERT DELAYED is used to insert data into the table.
SOLUTION:
---------
While creating the delayed_insert object, initialize it
with the relevant select_lex.
- Part 3: Adding mem_root to push_back() and push_front(), sketched below
Other things:
- Added THD as an argument to some partition functions.
- Added memory overflow checking for XML tags in read_xml()
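
A sketch of the API change's shape, with a toy list and allocator
standing in for the server's List<T> and MEM_ROOT:

  #include <new>
  #include <cstdlib>

  struct MEM_ROOT_sketch
  {
    // The real MEM_ROOT is an arena; plain malloc is enough here.
    void *alloc(size_t size) { return malloc(size); }
  };

  template <typename T>
  struct List_sketch
  {
    struct Node { T *data; Node *next; };
    Node *first= nullptr;

    // The allocator is now an explicit argument instead of being
    // fetched from thread-local state inside the call.
    bool push_front(T *item, MEM_ROOT_sketch *mem_root)
    {
      void *place= mem_root->alloc(sizeof(Node));
      if (!place)
        return true;  // true signals failure, as in the server
      first= new (place) Node{item, first};
      return false;
    }
  };

  int main()
  {
    MEM_ROOT_sketch root;
    List_sketch<int> list;
    int value= 42;
    return list.push_front(&value, &root) ? 1 : 0;
  }
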
Added mandatory thd parameter to Item (and all derivative classes) constructor.
Added thd parameter to all routines that may create items.
Also removed "current_thd" from Item::Item. This reduced the number of
pthread_getspecific() calls from 290 to 177 per OLTP RO transaction.
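
A sketch of the pattern; current_thd() here stands in for the real
pthread_getspecific()-based lookup, and the Item_before/Item_after names
are invented:

  struct THD { /* session state */ };

  THD *current_thd() // one TLS lookup per call in the real server
  {
    static thread_local THD thd;
    return &thd;
  }

  // Before: every Item construction paid for a TLS lookup.
  struct Item_before
  {
    THD *thd;
    Item_before() : thd(current_thd()) {}
  };

  // After: the caller, which usually already has thd, passes it down.
  struct Item_after
  {
    THD *thd;
    explicit Item_after(THD *thd_arg) : thd(thd_arg) {}
  };

  int main()
  {
    THD *thd= current_thd(); // looked up once
    Item_after item(thd);    // then reused for every Item
    (void) item;
  }
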
send_result_set_metadata
Analysis
--------
Cursor inside trigger accessing NEW/OLD row leads to server exit.
The reason for the bug was that the implementation of the function
create_tmp_table() was not considering Item::TRIGGER_FIELD_ITEM
as a possible alternative for the type of class being instantiated.
This resulted in a mismatch between the number of columns
in the result list and the temp table definition. This mismatch led
to the failure of the assertion
DBUG_ASSERT(send_result_set_metadata.elements == item_list.elements)
in the method Materialized_cursor::send_result_set_metadata
in debug mode.
Fix:
---
Added code to consider Item::TRIGGER_FIELD_ITEM as valid
type while creating fields.
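
A sketch of the fix's shape, with a reduced Item type enum; the real
switch in the create_tmp_table()/create_tmp_field() path handles many
more cases:

  enum Item_type { FIELD_ITEM, REF_ITEM, TRIGGER_FIELD_ITEM, OTHER_ITEM };

  struct Field_sketch {};

  Field_sketch *create_field_from_item() { return new Field_sketch(); }

  Field_sketch *create_tmp_field_sketch(Item_type type)
  {
    switch (type)
    {
    case FIELD_ITEM:
    case REF_ITEM:
    case TRIGGER_FIELD_ITEM: // previously missing: NEW/OLD references
                             // produced no column, so the column counts
                             // diverged and the assertion fired
      return create_field_from_item();
    default:
      return nullptr;        // other item kinds are handled elsewhere
    }
  }

  int main()
  {
    return create_tmp_field_sketch(TRIGGER_FIELD_ITEM) ? 0 : 1;
  }
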
- Changed ER(ER_...) to ER_THD(thd, ER_...) where thd was known, or
where there were many calls to current_thd in the same function.
- Changed ER(ER_..) to ER_THD_OR_DEFAULT(current_thd, ER...) in some
places where current_thd is not necessarily defined.
- Removed calls to current_thd when we already have access to thd.
Part of this is optimization (not calling current_thd when not needed),
but part is bug fixing for error conditions when current_thd is not
defined (for example at startup and shutdown of mysqld).
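
A sketch of the difference between the lookups, with stub message
plumbing (the strings and the use_french member are invented):

  struct THD { bool use_french= false; }; // stand-in for session locale

  static const char *ER_THD(THD *thd, int code)
  {
    (void) code;
    return thd->use_french ? "message en francais"
                           : "message in English";
  }

  static const char *ER_THD_OR_DEFAULT(THD *thd, int code)
  {
    // Safe in code paths where current_thd may be undefined, e.g.
    // during startup and shutdown of mysqld.
    return thd ? ER_THD(thd, code) : "message in the default language";
  }

  int main()
  {
    THD thd;
    ER_THD(&thd, 0);               // thd known: session language
    ER_THD_OR_DEFAULT(nullptr, 0); // thd possibly missing: default text
  }
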
Notable renames done as otherwise a lot of functions would have to be changed:
- In JOIN structure renamed:
examined_rows -> join_examined_rows
record_count -> join_record_count
- In Field, renamed new_field() to make_new_field()
Other things:
- Added DBUG_ASSERT(thd == tmp_thd) in Item_singlerow_subselect() just to be safe.
- Removed old 'tab' prefix in JOIN_TAB::save_explain_data() and use members directly
- Added 'thd' as argument to a few functions to avoid calling current_thd.
Added THD argument to select_result and all derivative classes.
This reduces the number of pthread_getspecific() calls from 796 to 776
per OLTP RO transaction.