From b0e64f33da6d6e958110c1e40db98ea5e8a11b85 Mon Sep 17 00:00:00 2001
From: drh
As of version 2.7.0, it doesn't. See the document on - datatypes in SQLite version 3 - for details.
-} - faq { Why doesn't SQLite allow me to use '0' and '0.0' as the primary key on two different rows of the same table? @@ -114,20 +106,6 @@ faq { (See the previous question.) Hence the values are not unique. } -faq { - My linux box is not able to read an SQLite database that was created - on my SparcStation. -} { -You need to upgrade your SQLite library to version 2.6.3 or later.
- -The x86 processor on your linux box is little-endian (meaning that - the least significant byte of integers comes first) but the Sparc is - big-endian (the most significant bytes comes first). SQLite databases - created on a little-endian architecture cannot be on a big-endian - machine by version 2.6.2 or earlier of SQLite. Beginning with - version 2.6.3, SQLite should be able to read and write database files - regardless of byte order of the machine on which the file was created.
-} faq { Can multiple applications or multiple instances of the same @@ -282,37 +260,8 @@ ORDER BY name faq { Are there any known size limits to SQLite databases? } { -A database is limited in size to 2 tibibytes (241 bytes). - That is a theoretical limitation. In practice, you should try to keep - your SQLite databases below 100 gigabytes to avoid performance problems. - If you need to store 100 gigabytes or more in a database, consider using - an enterprise database engine which is designed for that purpose.
- -The theoretical limit on the number of rows in a table is - 264-1, though obviously you will run into the file size - limitation prior to reaching the row limit. A single row can hold - up to 230 bytes of data in the current implementation. The - underlying file format supports row sizes up to about 262 bytes. -
- -There are probably limits on the number of tables or indices or - the number of columns in a table or index, but nobody is sure what - those limits are. In practice, SQLite must read and parse the original - SQL of all table and index declarations everytime a new database file - is opened, so for the best performance of - sqlite3_open() it is best - to keep down the number of declared tables. Likewise, though there - is no limit on the number of columns in a table, more than a few hundred - seems extreme. Only the first 31 columns of a table are candidates for - certain optimizations. You can put as many columns in an index as you like - but indexes with more than 30 columns will not be used to optimize queries. -
- -The names of tables, indices, view, triggers, and columns can be - as long as desired. However, the names of SQL functions (as created - by the - sqlite3_create_function() - API) may not exceed 255 characters in length.
+See limits.html for a full discussion of + the limits of SQLite.
} faq { diff --git a/www/index.tcl b/www/index.tcl index 724881b7b4..b1aa5453b4 100644 --- a/www/index.tcl +++ b/www/index.tcl @@ -27,10 +27,8 @@ Features include:+"Limits" in the context of this article means sizes or +quantities that can not be exceeded. We are concerned +with things like the maximum number of bytes in a +BLOB or the maximum number of columns in a table. +
+ ++SQLite was originally designed with a policy of avoiding +arbitrary limits. +Of course, every program that runs on a machine with finite +memory and disk space has limits of some kind. But in SQLite, +those limits +were not well defined. The policy was that if it would fit +in memory and you could count it with a 32-bit integer, then +it should work. +
+
+
+Unfortunately, the no-limits policy has been shown to create
+problems.  Because the upper bounds were not well
+defined, they were not tested, and bugs (including possible
+security exploits) were often found when pushing SQLite to
+extremes.  For this reason, newer versions of SQLite have
+well-defined limits and those limits are tested as part of
+the test suite.
+
+
+
+This article defines what the limits of SQLite are and how they
+can be customized for specific applications.  The default settings
+for limits are normally quite large and adequate for almost every
+application.  Some applications may want to increase a limit here
+or there, but we expect such needs to be rare.  More commonly,
+an application might want to recompile SQLite with much lower
+limits to avoid excess resource utilization in the event of
+a bug in higher-level SQL statement generators or to help thwart
+attackers who inject malicious SQL statements.
+
+} +proc limititem {title text} { + puts "$title
\n$text+The maximum number of bytes in a string or BLOB in SQLite is defined +by the preprocessor macro SQLITE_MAX_LENGTH. The default value +of this macro is 1 billion (1 thousand million or 1,000,000,000). +You can raise or lower this value at compile-time using a command-line +option like this: +
+ +-DSQLITE_MAX_LENGTH=123456789+ +
+The current implementation will only support a string or BLOB +length up to 231-1 or 2147483647. And +some built-in functions such as hex() might fail well before that +point. In security-sensitive applications it is best not to +try to increase the maximum string and blob length. In fact, +you might do well to lower the maximum string and blob length +to something more in the range of a few million if that is +possible. +
+ ++During part of SQLite's INSERT and SELECT processing, the complete +content of each row in the database is encoded as a single BLOB. +So the SQLTIE_MAX_LENGTH parameter also determines the maximum +number of bytes in a row. +
+} + +limititem {Maximum Number Of Columns} { ++The SQLITE_MAX_COLUMN compile-time parameter is used to set an upper +bound on: +
+ ++The default setting for SQLITE_MAX_COLUMN is 2000. You can change it +at compile time to values as large as 32676. You might be able to +redefine this value to be as large as billions, though nobody has ever +tried doing that so we do not know if it will work. On the other hand, there +are people who will argument that a well-normalized database design +will never need a value larger than about 100. +
+ ++In most applications, the number of columns is small - a few dozen. +There are places in the SQLite code generator that use algorithms +that are O(N²) where N is the number of columns. +So if you redefine SQLITE_MAX_COLUMN to be a +really huge number and you generate SQL that uses a large number of +columns, you may find that +sqlite3_prepare_v2() +runs slowly. +} + +limititem {Maximum Length Of An SQL Statement} { +
+The maximum number of bytes in the text of an SQL statement is +limited to SQLITE_MAX_SQL_LENGTH which defaults to 1000000. You +can redefine this limit to be as large as the smaller of SQLITE_MAX_LENGTH +and 1073741824. +
+ ++If an SQL statement is limited to be a million bytes in length, then +obviously you will not be able to insert multi-million byte strings +by embedding them as literals inside of INSERT statements. But +you should not do that anyway. Use host parameters +for your data. Prepare short SQL statements like this: +
+ ++INSERT INTO tab1 VALUES(?,?,?); ++ +
+Then use the +sqlite3_bind_XXXX() functions +to bind your large string values to the SQL statement. The use of binding +obviates the need to escape quote characters in the string, reducing the +risk of SQL injection attacks. It is also runs faster since the large +string does not need to be parsed or copied as much. +
+} + +limititem {Maximum Number Of Tables In A Join} { ++SQLite does not support joins containing more than 64 tables. +This limit arises from the fact that the SQLite code generator +uses bitmaps with one bit per join-table in the query optimizer. +
+} + +limititem {Maximum Depth Of An Expression Tree} { ++SQLite parses expressions into a tree for processing. During +code generation, SQLite walks this tree recursively. The depth +of expression trees is therefore limited in order to avoid +using too much stack space. +
+ ++The SQLITE_MAX_EXPR_DEPTH parameter determines the maximum expression +tree depth. If the value is 0, then no limit is enforced. The +current implementation has a default value of 1000. +
+} + +limititem {Maximum Number Of Arguments On A Function} { ++The SQLITE_MAX_FUNCTION_ARG parameter determines the maximum number +of parameters that can be passed to an SQL function. The default value +of this limit is 100. We know of no +technical reason why SQLite would not work with functions that have +millions of parameters. However, we suspect that anybody who tries +to invoke a function with millions of parameters is really +trying to find security exploits in systems that use SQLite, +not do useful work, +and so for that reason we have set this parameter relatively low. +} + +limititem {Maximum Number Of Terms In A Compound SELECT Statement} { +
+A compound SELECT statement is two or more SELECT statements connected +by operators UNION, UNION ALL, EXCEPT, or INTERSECT. We call each +individual SELECT statement within a compound SELECT a "term". +
+ ++The code generator in SQLite processes compound SELECT statements using +a recursive algorithm. In order to limit the size of the stack, we +therefore limit the number of terms in a compound SELECT. The maximum +number of terms is SQLITE_MAX_COMPOUND_SELECT which defaults to 500. +We think this is a generous allotment since in practice we almost +never see the number of terms in a compound select exceed single digits. +
+} + +limititem {Maximum Length Of A LIKE Or GLOB Pattern} { ++The pattern matching algorithm used in the default LIKE and GLOB +implementation of SQLite can exhibit O(N²) performance (where +N is the number of characters in the pattern) for certain pathological +cases. To avoid denial-of-service attacks from miscreants who are able +to specify their own LIKE or GLOB patterns, the length of the LIKE +or GLOB pattern is limited to SQLITE_MAX_LIKE_PATTERN_LENGTH bytes. +The default value of this limit is 50000. A modern workstation can +evaluate even a pathological LIKE or GLOB pattern of 50000 bytes +relatively quickly. The denial of service problem only comes into +play when the pattern length gets into millions of bytes. Nevertheless, +since most useful LIKE or GLOB patterns are at most a few dozen bytes +in length, paranoid application developers may want to reduce this +parameter to something in the range of a few hundred if they know that +external users are able to generate arbitrary patterns. +
+} + +limititem {Maximum Number Of Host Parameters In A Single SQL Statement} { ++A host parameter is a place-holder in an SQL statement that is filled +in using one of the +sqlite3_bind_XXXX() interfaces. +Many SQL programmers are familiar with using a question mark ("?") as a +host parameter. SQLite also supports named host parameters prefaced +by ":", "$", or "@" and numbered host parameters of the form "?123". +
+ ++Each host parameter in an SQLite statement is assigned a number. The +numbers normally begin with 1 and increase by one with each new +parameter. However, when the "?123" form is used, the host parameter +number is the number that follows the question mark. +
+ ++The maximum value of a host parameter number is SQLITE_MAX_VARIABLE_NUMBER. +This setting defaults to 999. +
+} + +limititem {Maximum Number Of Attached Databases} { ++The ATTACH statement is an SQLite extension +that allows two or more databases to be associated to the same database +connection and to operate as if they were a single database. The number +of simulataneously attached databases is limited to SQLITE_MAX_ATTACHED +which is set to 10 by default. +The code generator in SQLite uses bitmaps +to keep track of attached databases. That means that the number of +attached databases cannot be increased above 30 on a 32-bit machine +or 62 on a 64-bit machine. +} + +limititem {Maximum Database Page Size} { +
+An SQLite database file is organized as pages. The size of each +page is a power of 2 between 512 and SQLITE_MAX_PAGE_SIZE. +The default value for SQLITE_MAX_PAGE_SIZE is 32768. The current +implementation will not support a larger value. +
+ ++It used to be the case that SQLite would allocate some stack +structures whose size was proportional to the maximum page size. +For this reason, SQLite would sometimes be compiled with a smaller +maximum page size on embedded devices with limited stack memory. But +more recent versions of SQLite put these large structures on the +heap, not on the stack, so reducing the maximum page size is no +longer necessary on embedded devices. +
+} + +limititem {Maximum Number Of Pages In A Database File} { ++SQLite is able to limit the size of a database file to prevent +the database file from growing too large and consuming too much +disk or flash space. +The SQLITE_MAX_PAGE_COUNT parameter, which is normally set to +1073741823, is the maximum number of pages allowed in a single +database file. An attempt to insert new data that would cause +the database file to grow larger than this will return +SQLITE_FULL. +
+ ++The +max_page_count PRAGMA can be used to raise or lower this +limit at run-time. +
+ ++Note that the transaction processing in SQLite requires two bits +of heap memory for every page in the database file. For databases +of a few megabytes in size, this amounts to only a few hundred +bytes of heap memory. But for gigabyte-sized database the amount +of heap memory required is getting into the kilobyte range and +for terabyte-sized databases, megabytes of heap memory must be +allocated and zeroed at each transaction. SQLite will +support very large databases in theory, but the current implementation +is optimized for the common SQLite use cases of embedded devices +and persistent stores for desktop applications. In other words, +SQLite is designed for use with databases sized in kilobytes or +megabytes not gigabytes. If you are building an application to +work with databases that are hundreds of gigabytes or more +in size, then you should perhaps consider using a different database +engine that is explicitly designed for such large data sets. +
+} + +puts {