Trying INSERT DELAYED on a partitioned table that has not been used
recently crashes the server. When a table is used for SELECT or
UPDATE, it is kept open for some time; "recently" here means within
that period.
Information about the partitioning of a table is stored as a string
in the .frm file. Parsing this string requires a correctly set up
lexical analyzer (lex). The partitioning code uses a new temporary
lex instance, but it still refers to the previously active lex. The
delayed insert thread, however, does not initialize its lex at all.
Added initialization of thd->lex (lex_start()) before opening the table
in the delayed insert thread, and at all other places where it would be
necessary if every table were partitioned and its .frm file had to be
parsed.
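For illustration, a minimal SQL sketch of the crashing scenario; the
table definition here is an assumption, not taken from the original
report:

CREATE TABLE t1 (a INT)
PARTITION BY HASH (a) PARTITIONS 2;
-- With t1 not opened recently, the delayed insert thread has to
-- re-open it and parse the partitioning string from the .frm file,
-- which crashed the server before the fix.
INSERT DELAYED INTO t1 VALUES (1);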
Invoking a stored function that contains a DROP TEMPORARY TABLE
statement from a CREATE TEMPORARY TABLE statement for a table of the
same name may cause a server crash. The problem is that when dropping
a table no check is done to ensure that the table is not being used by
some outer query (or outer statement), potentially leaving the outer
query with a reference to a stale (freed) table.
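A minimal sketch of the scenario with placeholder names (the DELIMITER
lines are for the mysql command-line client):

DELIMITER //
CREATE FUNCTION f1() RETURNS INT
BEGIN
  -- Drops the very temporary table the outer statement is creating.
  DROP TEMPORARY TABLE t1;
  RETURN 1;
END//
DELIMITER ;
-- The outer statement creates temporary table t1 and then invokes
-- f1() while t1 is still in use; before the fix this could leave the
-- outer statement holding a freed table object.
CREATE TEMPORARY TABLE t1 SELECT f1() AS a;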
The solution is, when dropping a temporary table, to always check
whether the table is being used by some outer statement, since a
temporary table can be dropped inside stored procedures.
The check is performed by looking at the TABLE::query_id value for
temporary tables. To simplify this check and to solve a bug related
to the handling of temporary tables in prelocked mode, this patch
changes the way in which this member is used to track whether the
table is in use. Now we ensure that TABLE::query_id is zero for unused
temporary tables, which means that all temporary tables used by a
statement are marked as free for reuse after its execution has
completed.
Additionally, the error is converted to a warning instead of being
raised through direct manipulation of the thread error stack.
Fix a bug in handler::print_error where a garbled message was
printed for HA_ERR_NO_SUCH_TABLE.
This is a prerequisite patch for the fix for Bug#12713 "Error in a
stored function called from a SELECT doesn't cause ROLLBACK of
statement".
SHOW FIELDS FROM a view with no valid definer was possible (since the
fix for Bug#26817), but returned NULL as the field type. This led to
mysqldump-ing such views successfully, but loading the resulting dump
with the client failed. The patch makes SHOW FIELDS report the data
type of the field in the underlying table.
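A sketch of the scenario with placeholder names, assuming a connection
with sufficient privileges to create and drop users:

CREATE TABLE t1 (a INT);
CREATE USER 'u1'@'localhost';
CREATE DEFINER = 'u1'@'localhost' VIEW v1 AS SELECT a FROM t1;
DROP USER 'u1'@'localhost';
-- v1 now has no valid definer; before the fix the Type column came
-- back as NULL, so a dump of v1 could not be reloaded.
SHOW FIELDS FROM v1;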
The root cause of this defect is that a call to my_error() was passing
a LEX_STRING parameter where a char* was expected.
This patch fixes the failing calls to my_error(), as well as similar
calls found during the investigation.
This is a compiling bug (see the instrumentation in the bug report); no test case is provided.
The server crashed when a thread was killed while locking the
general_log table at the beginning of a statement.
The general_log table is handled like a performance schema table.
The state of open tables is saved and cleared so that this table
seems to be the only open one. Then this table is opened and locked.
After writing, the table is closed and the open table state is
restored. Before restoring, however, it is asserted that there is
no current table open.
After locking the table, mysql_lock_tables() checks if the thread
was killed in between. If so, it unlocks the table and returns an
error. open_ltable() just returns with the error and leaves closing
of the table to close_thread_tables(), which is called at
statement end.
open_performance_schema_table() did not take this into account.
It assumed that a failed open_ltable() would not leave an open
table behind.
Fixed by closing thread tables after open_ltable() and before
restore_backup_open_tables_state() if the thread was killed.
No test case: it requires correctly timed parallel execution.
Since this bug was detected by the test suite, adding another test
seems unnecessary.
This deadlock occurs when a client issues a HANDLER ... OPEN statement
for a table that has a pending name-lock held by another client, and
that other client in turn needs a name-lock on a table which is
already open and associated with a HANDLER instance owned by the first
client.
The deadlock happens because the open_table() function backs off and
waits until the name-lock goes away, causing a circular wait if some
other name-lock is also pending for one of the open HANDLER tables.
Such a situation can, for example, easily be reproduced by issuing a
RENAME TABLE command in such a way that the existing table is already
open as a HANDLER table by another client, and that client then tries
to open a HANDLER to the new table name.
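A sketch of that scenario with placeholder table names, as it could
play out before the fix:

-- connection 1
CREATE TABLE t1 (a INT);
HANDLER t1 OPEN;

-- connection 2: blocks, waiting for connection 1 to close t1, while
-- already registering a pending name-lock on the new name t2
RENAME TABLE t1 TO t2;

-- connection 1: before the fix this waited for the pending name-lock
-- on t2 while connection 2 waited for the HANDLER on t1 to be closed,
-- producing a circular wait.
HANDLER t2 OPEN;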
The solution is to allow handler tables with older versions (marked for
flush) to be closed before waiting for the name-lock completion. This is
safe because no other name-lock can be issued between the flush and the
check for pending name-locks.
The test case for this bug is going to be committed into 5.1 because it
requires a test feature only available in 5.1 (wait_condition).
When expanding a * in a USING/NATURAL join, the access check for both
tables in the join was done using the grant information of the first
one.
Fixed by getting the grant information for the current table while
iterating through the columns of the join.
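An illustrative privilege setup (database, table, column and user
names are assumptions) in which the two joined tables carry different
column-level grants:

CREATE DATABASE db1;
CREATE TABLE db1.t1 (a INT, b INT);
CREATE TABLE db1.t2 (a INT, c INT);
CREATE USER 'u1'@'localhost';
GRANT SELECT (a, b) ON db1.t1 TO 'u1'@'localhost';
GRANT SELECT (a, c) ON db1.t2 TO 'u1'@'localhost';
-- Run as u1: before the fix the access check for t2's columns was
-- performed against t1's grant information when expanding the *.
SELECT * FROM db1.t1 NATURAL JOIN db1.t2;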
Fixed open_performance_schema_table() and close_performance_schema_table()
implementation and callers, to always execute balanced calls to:
thd->reset_n_backup_open_tables_state(backup);
thd->restore_backup_open_tables_state(backup);
This is a follow-up to the patch for Bug#26162 "Trigger DML ignores low_priority_updates setting": stored procedures also ignored the session setting of low_priority_updates.
For every table open operation with the default write lock type (TL_WRITE_DEFAULT), the lock type is now downgraded according to the session setting of low_priority_updates.
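A sketch of the affected case with placeholder names:

CREATE TABLE t1 (a INT);
CREATE PROCEDURE p1() UPDATE t1 SET a = a + 1;
SET SESSION low_priority_updates = 1;
-- Before the fix the UPDATE executed inside the procedure ignored the
-- session setting and took a normal write lock instead of a
-- low-priority one.
CALL p1();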
The bug caused memory corruption for some queries with a top-level OR
in the WHERE condition if they contained equality predicates and
other sargable predicates in disjunctive parts of the condition.
The corruption happened because the upper bound of the memory
allocated for KEY_FIELD and SARGABLE_PARAM internal structures
containing info about potential lookup keys was calculated incorrectly
in some cases. In particular it was calculated incorrectly when the
WHERE condition was an OR formula with disjuncts being AND formulas
including equalities and other sargable predicates.
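An illustrative query shape (the schema is an assumption): a top-level
OR whose disjuncts are AND formulas mixing equalities with other
sargable predicates:

CREATE TABLE t1 (a INT, b INT, c INT, KEY (a), KEY (b));
-- Each disjunct combines an equality with another sargable predicate
-- on an indexed column, the case for which the upper bound of the
-- KEY_FIELD/SARGABLE_PARAM allocation was miscalculated.
SELECT * FROM t1
WHERE (a = 1 AND b > 10)
   OR (a = c AND b BETWEEN 1 AND 5);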
Moved duplicated code into the inline function store_timestamp().
Save thd->time_zone_used when logging to a table, as the CSV engine used internally causes it to be changed.
Added MYSQL_LOCK_IGNORE_FLUSH to log tables to avoid a deadlock in case of FLUSH TABLES.
Mark log tables with TIMESTAMP_NO_AUTO_SET to avoid automatic timestamping.
Set TABLE->no_replicate on open.
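For context, a sketch of exercising table-based logging together with
FLUSH TABLES (the variable settings are only for this example):

SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
-- With MYSQL_LOCK_IGNORE_FLUSH set on the log tables, FLUSH TABLES no
-- longer risks deadlocking against the permanently open log table.
FLUSH TABLES;
SELECT event_time, command_type, argument
FROM mysql.general_log
ORDER BY event_time DESC LIMIT 5;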