Iterative patch improvement. The previously committed patch
caused wrong results on Windows. The previous patch also
broke secure_file_priv for symlinks, since not all file
paths that must be compared against this variable were
normalized in the same way.
The server variable opt_secure_file_priv wasn't
normalized properly, which caused the operations
LOAD DATA INFILE .. INTO TABLE ..
and
SELECT load_file(..)
to interpret the --secure-file-priv option
differently.
The patch moves code to the server initialization
routines so that the path is always normalized
once and only once.
It was also intended that setting the option
to an empty string should be equivalent to
lifting all previously set restrictions. This
is also fixed by this patch.
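
For illustration, a minimal sketch of the two affected operations that must
now agree on the normalized path (the directory and file names here are
hypothetical):

  -- server started with --secure-file-priv=/var/lib/mysql-files
  LOAD DATA INFILE '/var/lib/mysql-files/data.txt' INTO TABLE t1;
  SELECT LOAD_FILE('/var/lib/mysql-files/data.txt');
  -- both statements must accept or reject the path consistently;
  -- --secure-file-priv='' lifts the restriction for both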
There were two problems here:
1. misleading error message
2. abusing KILL QUERY in the test case
1. The server reported "'DELETE FROM t1' failed: 1689: Wait on a lock was
aborted due to a pending exclusive lock", while the proper error message
should be "'DELETE FROM t1' failed: 1317: Query execution was interrupted".
The problem is that the server has two different flags for
signalling that a query is being killed: THD::killed and
mysys_var::abort. The test case triggers a race: sometimes
mysys_var::abort is set earlier than THD::killed. That leads
to the following situation:
- thr_lock() checks mysys_var::abort and returns error status,
since mysys_var::abort is set;
- the caller (mysql_lock_tables()) gets an error from thr_lock(),
but THD::killed is not set, so it decides that thr_lock() couldn't
get a lock due to a pending exclusive lock.
This is a known issue with the server and it's not going to be fixed soon.
5.5 differs from 5.1 here as follows: when thr_lock() returns an error:
- 5.1 continues trying thr_lock() until success;
- 5.5 propagates the error
2. The test case uses KILL QUERY in a highly concurrent environment.
The fix is to wait for the dying statement to rest in peace before
executing another DELETE FROM t1.
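
A hedged sketch, in mysqltest style, of that synchronization (connection
names and the exact wait condition are illustrative, not the committed test):

  connection con1;
  let $id= `SELECT CONNECTION_ID()`;
  send DELETE FROM t1;

  connection default;
  eval KILL QUERY $id;
  # wait for the dying statement to disappear from the processlist
  # before issuing the next DELETE
  let $wait_condition=
    SELECT COUNT(*) = 0 FROM INFORMATION_SCHEMA.PROCESSLIST
    WHERE info = 'DELETE FROM t1';
  --source include/wait_condition.inc
  DELETE FROM t1;

  connection con1;
  --error ER_QUERY_INTERRUPTED
  reap;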
WHERE predicates containing references to empty tables in a
subquery were handled incorrectly by the optimizer when
executing EXPLAIN. As a result, the optimizer could try to
evaluate such predicates rather than just stop with
"Impossible WHERE noticed after reading const tables" as
it would do in a non-subquery case. This led to valgrind
errors and crashes.
Fixed the code checking the above condition so that subqueries
are not excluded and hence are handled in the same way as top
level SELECTs.
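
A minimal sketch of the query shape involved (tables and data are
hypothetical, not the bug's test case):

  CREATE TABLE t1 (a INT);
  CREATE TABLE t2 (b INT);   -- deliberately left empty
  INSERT INTO t1 VALUES (1);
  -- EXPLAIN should stop with "Impossible WHERE noticed after reading
  -- const tables" for the subquery instead of evaluating the predicate
  EXPLAIN SELECT * FROM t1 WHERE a IN (SELECT b FROM t2 WHERE b > 0);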
Conflicts:
Text conflict in configure.in
Text conflict in dbug/dbug.c
Text conflict in mysql-test/r/ps.result
Text conflict in mysql-test/t/ps.test
Text conflict in sql/CMakeLists.txt
Text conflict in sql/ha_ndbcluster.cc
Text conflict in sql/mysqld.cc
Text conflict in sql/sql_plugin.cc
Text conflict in sql/sql_table.cc
during an UPDATE
Extended the fix for bug 29310 to multi-table update:
When a table is being updated it has two sets of fields - fields required for
checks of conditions and fields to be updated. A storage engine is allowed
not to retrieve the columns marked for update. Because of this, records can't
be compared to see whether the data has been changed or not, which makes the
server always update records regardless of whether the data changed.
Now, when an auto-updatable timestamp field is present and the server sees that
the table handler isn't going to retrieve write-only fields, all such
fields are marked to be read, forcing the handler to retrieve them.
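
For illustration (hypothetical schema), a multi-table UPDATE on a table with
an auto-updatable TIMESTAMP, where the updated columns must now also be read
so that unchanged rows can be detected:

  CREATE TABLE t1 (id INT PRIMARY KEY, val INT,
                   ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                      ON UPDATE CURRENT_TIMESTAMP);
  CREATE TABLE t2 (id INT PRIMARY KEY, val INT);
  -- ts should change only for rows whose val actually changes,
  -- which requires comparing the old and new values of val
  UPDATE t1, t2 SET t1.val = t2.val WHERE t1.id = t2.id;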
Clarified error messages related to unsafe statements:
- avoid the internal technical term "row injection"
- use 'binary log' instead of 'binlog'
- avoid the word 'unsafeness'
Fix for bug #46947 "Embedded SELECT without FOR UPDATE is
causing a lock", with after-review fixes.
SELECT statements with subqueries referencing InnoDB tables
were acquiring shared locks on rows in these tables when they
were executed in REPEATABLE-READ mode and with statement or
mixed mode binary logging turned on.
This was a regression which was introduced when fixing
bug 39843.
The problem was that for tables belonging to subqueries
the parser set TL_READ_DEFAULT as the lock type. When
statement/mixed binary logging was in effect, this
type of lock was converted to a TL_READ_NO_INSERT lock at
open_tables() time, which caused the InnoDB engine to acquire
shared locks on reads from these tables. Although in some
cases such behavior was correct (e.g. for subqueries in
DELETE), in the case of SELECT it caused unnecessary locking.
This patch tries to solve this problem by rethinking our
approach to how we handle locking for SELECT and subqueries.
Now we always set the TL_READ_DEFAULT lock type for all cases
when we read data. At open_tables() time this lock is then
interpreted as TL_READ_NO_INSERT or TL_READ, depending
on whether the statement as a whole, or the call to the function
which uses the particular table, should be written to the
binary log or not (if yes, the statement should be properly
serialized with concurrent statements and the stronger lock
should be acquired).
Test coverage is added for both InnoDB and MyISAM.
This patch introduces an "incompatible" change in locking
scheme for subqueries used in SELECT ... FOR UPDATE and
SELECT .. IN SHARE MODE.
In 4.1 the server would use a snapshot InnoDB read for
subqueries in SELECT FOR UPDATE and SELECT .. IN SHARE MODE
statements, regardless of whether the binary log is on or off.
If the user required a different type of read (i.e. a locking read),
he/she could request it explicitly by providing a FOR UPDATE/IN SHARE MODE
clause for each individual subquery.
One of the patches for 5.0 broke this behaviour (which was not documented
or tested), and started to use locking reads for all subqueries in SELECT ...
FOR UPDATE/IN SHARE MODE. This patch restores the 4.1 behaviour.
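
For illustration (hypothetical InnoDB tables, REPEATABLE READ, statement or
mixed binlog format), the two cases whose locking behaviour is distinguished:

  -- subquery read: no shared row locks are taken on t2 any more
  SELECT * FROM t1 WHERE a IN (SELECT a FROM t2);

  -- locking read applies to the outer query only; as in 4.1 the
  -- subquery uses a consistent (non-locking) read unless the user
  -- adds FOR UPDATE / LOCK IN SHARE MODE to it explicitly
  SELECT * FROM t1 WHERE a IN (SELECT a FROM t2) FOR UPDATE;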
Stored routine DDL statements use statement-based replication
regardless of the current binlog format. The problem here was
that if a DDL statement failed during metadata lock acquisition
or opening of mysql.proc, the binlog format would not be reset
before returning, so subsequent DDL or DML statements would be
binlogged with the wrong binlog format, which causes the slave
to stop.
The problem is resolved by grabbing the exclusive MDL lock first,
before clearing the current binlog format, so that the binlog
format is not affected when the lock acquisition fails directly with
an error. The same approach is taken when opening the mysql.proc table for update.
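
A hedged sketch of the invariant the fix preserves (procedure and table
names are made up):

  SET SESSION binlog_format = ROW;
  -- even if this DDL fails while acquiring the metadata lock or while
  -- opening mysql.proc, the session must stay in ROW format ...
  CREATE PROCEDURE p1() SELECT 1;
  -- ... so that later statements are binlogged with the right format
  INSERT INTO t1 VALUES (1);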
parallel mode
The failure has nothing to do with parallel execution, but rather with
the order in which the tests are executed. In this case, the test
binlog_tmp_table (let's call it test2) was not ensuring that the
binary logs would be reset when it started. Later the test issues
a mysqlbinlog .../master-bin.000002 | mysql ... If the test that
was executed before this one (let's call it test1) had issued a
flush logs, then the file in use in test1 (master-bin.000002)
would not actually match the one that was expected. Eventually,
this would cause the statements logged in test1 to be replayed,
instead of the ones logged in the beginning of test2.
We fix this by:
1. adding RESET MASTER to the beginning of binlog_tmp_table
2. setting the file to use in binlog_tmp_table dynamically
Only #1 was needed, but the two together make the test case more robust.
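
A hedged mysqltest-style sketch of the two changes (the variable names are
illustrative, not necessarily those used in the committed test):

  # 1. start from a known binary log state
  RESET MASTER;

  # 2. pick up the current binlog file name instead of hard-coding
  #    master-bin.000002
  let $MYSQLD_DATADIR= `SELECT @@datadir`;
  let $binlog_file= query_get_value(SHOW MASTER STATUS, File, 1);
  --exec $MYSQL_BINLOG $MYSQLD_DATADIR/$binlog_file | $MYSQL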
Extract part of innodb.innodb into innodb.innodb_misc1
This is needed in order to be able to debug this test more easily
under valgrind; it is too huge.
The problem was in an incorrect debug assertion. The expression
used in the failing assertion states that when finding
references matching ORDER BY expressions, there can be only one
reference to a single table. But that does not make any sense:
all test cases for this bug are valid examples with multiple
identical WHERE expressions referencing the same table which
are also present in the ORDER BY list.
Fixed by removing the failing assertion. We also have to take
care of the 'found' counter so that we count multiple
references only once. We rely on this fact later in
eq_ref_table().
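
An illustrative query of the shape that used to hit the assertion
(hypothetical tables, not the bug's test case): identical WHERE equalities
referencing the same table, whose columns also appear in ORDER BY:

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, b INT);
  SELECT t1.a FROM t1, t2
  WHERE t1.a = t2.a AND t1.b = t2.b AND t1.a = t2.a
  ORDER BY t2.a, t2.b;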
Statements with CONNECTION_ID were forced to be kept in the transactional
cache and, as a consequence, non-transactional changes that were supposed to
be flushed ahead of the transaction were kept in the transactional cache.
This happened because, after BUG#51894, any statement whose thd's
thread_specific_used flag was set was kept in the transactional cache. The idea
was to keep changes on temporary tables in the transactional cache. However,
thread_specific_used was set not only for statements that accessed
temporary tables but also when CONNECTION_ID was used.
To fix the problem, we created a new variable to keep track of updates
to temporary tables.
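
For illustration (hypothetical tables), the kind of statement that was being
mis-classified; the non-transactional change should be flushed ahead of the
transaction even though CONNECTION_ID() is used:

  CREATE TABLE t_myisam (a INT) ENGINE = MyISAM;
  CREATE TABLE t_innodb (a INT) ENGINE = InnoDB;
  BEGIN;
  INSERT INTO t_myisam VALUES (CONNECTION_ID());
  -- the non-transactional change above should be flushed to the
  -- binary log ahead of the transaction, not kept in the trx-cache
  INSERT INTO t_innodb VALUES (1);
  COMMIT;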
Problem: ALTER TABLE ADD INDEX may lead to table copying if there are
numeric field(s) with a non-default display width modifier specified.
Fix: compare the numeric fields' storage lengths when we decide whether
they can be considered 'equal' for table alteration purposes.
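
A small illustrative example (hypothetical table): the display width differs
from the default, but the storage length is the same, so the ALTER should no
longer copy the table:

  CREATE TABLE t1 (a INT(10) NOT NULL);
  -- INT(10) and the default INT(11) have the same storage length,
  -- so adding the index should not force a full table copy
  ALTER TABLE t1 ADD INDEX idx_a (a);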
Previously installed dynamic plugins are explicitly not loaded
on startup with --skip-grant-tables enabled. However, INSTALL
PLUGIN/UNINSTALL PLUGIN commands are allowed, and result in
inconsistent error messages (reporting a duplicate plugin or
that the plugin does not exist).
This patch adds a check for --skip-grant-tables mode, and
returns error ER_OPTION_PREVENTS_STATEMENT to the user when
the above commands are attempted.
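
Illustrative behaviour after the fix, with the server started with
--skip-grant-tables (the plugin name and library are hypothetical):

  INSTALL PLUGIN example SONAME 'ha_example.so';
  -- now rejected with ER_OPTION_PREVENTS_STATEMENT instead of a
  -- misleading "duplicate plugin" error
  UNINSTALL PLUGIN example;
  -- likewise rejected with ER_OPTION_PREVENTS_STATEMENT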
partition if multiple columns used
Problem was that range scanning through the sorted array of
the column list values did not use a correct index calculation.
Fixed by also taking the number of columns into account in the calculation.
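
A hedged sketch of the affected shape (schema and ranges are made up): range
pruning over RANGE COLUMNS partitioning with more than one column:

  CREATE TABLE t1 (a INT, b INT)
  PARTITION BY RANGE COLUMNS (a, b)
  (PARTITION p0 VALUES LESS THAN (10, 10),
   PARTITION p1 VALUES LESS THAN (20, 20),
   PARTITION p2 VALUES LESS THAN (MAXVALUE, MAXVALUE));
  -- pruning walks the sorted array of column-list values; the index
  -- calculation must account for the number of columns (here 2)
  SELECT * FROM t1 WHERE a >= 5 AND a <= 15;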
of sync
In RBR, sometimes table->s->last_null_bit_pos can be zero. This
has an impact on the slave when it compares records fetched from the
storage engine against records in the binary log event. If
last_null_bit_pos is zero, the slave, while comparing in the
log_event.cc:record_compare function, would set all bits in the last
null_byte to 1 (assuming all 8 were unused). It would thus lose the
ability to distinguish records that were similar in contents except
for the fact that some field was null in one record, but not in the
other. Ultimately this would cause wrong matches, and in the specific
case depicted in the bug report the same record would be updated
twice, resulting in a lost update.
Additionally, in the record_compare function the slave was setting the
X bit unconditionally. There are cases in which the X bit does not exist
in the record header. This could also lead to wrong matches between
records.
We fix both by conditionally resetting the bits: (i) unused null_bits
are set if last_null_bit_pos > 0; (ii) X bit is set if
HA_OPTION_PACK_RECORD is in use.
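
An illustrative (hypothetical) layout for the first case: a keyless table
whose nullable columns fill the null byte exactly, so last_null_bit_pos is
zero and rows differing only in a NULL must still be told apart by
record_compare:

  -- 8 nullable columns and no primary key; the exact count that makes
  -- last_null_bit_pos zero depends on the record format
  CREATE TABLE t1 (c1 INT, c2 INT, c3 INT, c4 INT,
                   c5 INT, c6 INT, c7 INT, c8 INT);
  INSERT INTO t1 VALUES (1,1,1,1,1,1,1,NULL), (1,1,1,1,1,1,1,1);
  -- under row-based replication the slave must not apply this UPDATE
  -- to the row where c8 IS NULL
  UPDATE t1 SET c1 = 2 WHERE c8 = 1;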
transaction
BUG#52616 Temp table prevents switch binlog format from STATEMENT to ROW
Before WL#2687 and BUG#46364, every non-transactional change that happened
after a transactional change was written to the trx-cache and flushed upon
committing the transaction. WL#2687 and BUG#46364 changed this behavior, and
non-transactional changes are now written to the binary log upon committing
the statement.
A binary log event is identified as transactional or non-transactional through
a flag in the Log_event which is set taking into account the underlying storage
engine it stems from. In the current bug, this flag was not being
set properly when DROP TEMPORARY TABLE was executed.
However, while fixing this bug we figured out that changes to temporary tables
should always be written to the trx-cache if there is an ongoing transaction.
Otherwise, binlog events would be produced in the reverse order.
Regarding concurrency, keeping changes to temporary tables in the trx-cache is
also safe as temporary tables are only visible to the owner connection.
In this patch, we classify the following statements as unsafe:
1 - INSERT INTO t_myisam SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_myisam
3 - CREATE TEMPORARY TABLE t_myisam_temp SELECT * FROM t_myisam
On the other hand, the following statements are classified as safe:
1 - INSERT INTO t_innodb SELECT * FROM t_myisam_temp
2 - INSERT INTO t_myisam_temp SELECT * FROM t_innodb
The patch also guarantees that transactions that have a DROP TEMPORARY are
always written to the binary log regardless of the mode and the outcome:
commit or rollback. In particular, the DROP TEMPORARY is extended with the
IF EXISTS clause when the current statement logging format is set to row.
Finally, the patch allows switching from STATEMENT to MIXED/ROW when there
are temporary tables, but the reverse is not possible.
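
For illustration (hypothetical temporary table), the switching rule just
mentioned:

  SET SESSION binlog_format = STATEMENT;
  CREATE TEMPORARY TABLE tmp1 (a INT) ENGINE = MyISAM;
  SET SESSION binlog_format = ROW;        -- allowed: STATEMENT -> MIXED/ROW
  SET SESSION binlog_format = STATEMENT;  -- rejected while the session
                                          -- still owns temporary tables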
newly introduced metadata locks.
Previously the behavior was deterministic: if several LOCKs were
waiting, the first one of them (in chronological order) was released
by UNLOCK.
Now (with MDLs) the behavior is undefined, and since we do not know in
what order to --reap the connections, we simply disconnect them without
reaping.
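
A hedged mysqltest-style sketch of the pattern the affected tests now use
(connection and table names are illustrative):

  connection default;
  LOCK TABLES t1 WRITE;
  connection con1;
  send LOCK TABLES t1 WRITE;
  connection con2;
  send LOCK TABLES t1 WRITE;
  connection default;
  UNLOCK TABLES;
  # with MDL it is undefined which of con1/con2 gets the lock first,
  # so disconnect both instead of reaping them in a fixed order
  disconnect con1;
  disconnect con2;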