Overview > SQLite Limitations
SQLite differs from most other modern SQL databases in that its primary design goal is simplicity. SQLite pursues this goal even when it leads to inefficient implementations of some features. The following is a list of SQLite's shortcomings:
As mentioned previously, SQLite does not support some SQL-92 features that are available in many enterprise database systems. You can obtain the latest information from the SQLite homepage.
SQLite supports only flat transactions; it has no nesting or savepoint capabilities. (Nesting means the ability to have subtransactions within a transaction; savepoints allow a transaction to roll back to previously established states.) Nor can it ensure a high degree of transaction concurrency: it permits many concurrent read-transactions, but only one exclusive write-transaction, on a single database file. This limitation means that if any transaction is reading from any part of a database file, all other transactions are prevented from writing to any part of the file. Similarly, if any one transaction is writing to any part of a database file, all other transactions are prevented from reading or writing any part of the file.
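The database-level write lock is easy to observe. The sketch below uses Python's standard sqlite3 module as a stand-in for the C API (the variable names are ours): two connections open the same database file, and while the first holds a write transaction, the second's attempt to start its own write transaction fails immediately instead of waiting.

```python
import os
import sqlite3
import tempfile

# Two connections to one database file; manual transaction control.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn1 = sqlite3.connect(path, isolation_level=None)
conn2 = sqlite3.connect(path, isolation_level=None, timeout=0)

conn1.execute("CREATE TABLE t (x INTEGER)")

# conn1 starts a write transaction and holds it open.
conn1.execute("BEGIN IMMEDIATE")
conn1.execute("INSERT INTO t VALUES (1)")

# While conn1 holds the write lock, conn2 cannot begin its own write.
try:
    conn2.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

conn1.execute("COMMIT")  # releasing the lock lets other writers proceed
conn2.execute("BEGIN IMMEDIATE")
conn2.execute("INSERT INTO t VALUES (2)")
conn2.execute("COMMIT")
```

Note that the lock is held on the entire database file, not on the table or row being written; that is exactly the coarse granularity described above.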
Because of its limited transactional concurrency, SQLite is suitable only for short transactions. In many situations this is not a problem: each application does its database work quickly and moves on, so no database is held up by a transaction for more than a few milliseconds. But some applications, especially write-intensive ones, require more transactional concurrency (table- or row-level locking instead of database-level), and you should use a different DBMS for such applications. SQLite is not intended to be an enterprise DBMS. It is optimal in situations where simplicity of database implementation, maintenance, and administration is more important than the countless complex features that enterprise DBMSs provide.
SQLite uses native file locking primitives to control transaction concurrency. This may cause problems if database files reside on network filesystems: many NFS implementations (on both Unix and Windows) are known to contain bugs in their file locking logic. If file locking does not work as expected, it might be possible for two or more applications to modify the same part of the same database at the same time, resulting in database corruption. Because this problem arises from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
Moreover, because of the latency associated with most network filesystems, performance may suffer. In environments where database files must be accessed across the network, a DBMS that implements a client-server model might be more effective than SQLite.
Because of its developers' engineering design choices, SQLite may not be a good choice for very large databases. In theory, a database file can be as large as two terabytes (2^41 bytes). The logging subsystem, however, has memory overhead proportional to the database size: for every write transaction, SQLite maintains one bit of in-memory information for every database page, regardless of whether the transaction reads or writes that page. (The default page size is 1024 bytes.) Thus, memory overhead may become a severe bottleneck for databases that have more than a few million pages.
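The per-transaction bitmap overhead is easy to estimate with back-of-the-envelope arithmetic. The helper below is a sketch of that calculation (the function name is ours, not part of SQLite): one bit per page, so the database size in pages divided by eight gives the overhead in bytes.

```python
PAGE_SIZE = 1024  # the default page size cited in the text

def journal_bitmap_bytes(db_bytes, page_size=PAGE_SIZE):
    """Estimate the in-memory bitmap overhead of one write transaction."""
    pages = db_bytes // page_size
    return pages // 8  # one bit of bookkeeping per database page

# A 10 GiB database with 1 KiB pages has 10,485,760 pages,
# so each write transaction carries about 1.25 MiB of bitmap overhead.
overhead = journal_bitmap_bytes(10 * 2**30)
```

At a few million pages the bitmap is still under a megabyte per transaction, but it grows linearly with database size, which is the bottleneck the text warns about.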
Number and type of objects
A table or index can hold at most 2^64 - 1 entries. (Of course, you cannot actually reach this many entries because of the 2^41-byte database size limit.) A single entry can hold up to 2^30 bytes of data in SQLite's current implementation. (The underlying file format supports row sizes up to about 2^62 bytes.) Upon opening a database file, SQLite reads and preprocesses all entries from the master catalog table and creates many in-memory catalog objects, so for best performance it is better to keep down the number of tables, indexes, views, and triggers. Likewise, although there is no limit on the number of columns in a table, more than a few hundred seems extreme. Only the first 31 columns of a table are candidates for certain optimizations. You can put as many columns in an index as you like, but indexes with more than 30 columns will not be used to optimize queries.
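The master catalog mentioned above is itself an ordinary table, sqlite_master, and can be inspected with a query. A minimal sketch, again using Python's standard sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a, b)")
conn.execute("CREATE INDEX i ON t (a)")

# Every table, index, view, and trigger has a row in the master catalog;
# SQLite reads and preprocesses all of these rows when a database is opened.
rows = conn.execute(
    "SELECT type, name FROM sqlite_master ORDER BY name").fetchall()
# -> [('index', 'i'), ('table', 't')]

# The page size of the database can be inspected with a pragma.
(page_size,) = conn.execute("PRAGMA page_size").fetchone()
```

The more rows sqlite_master contains, the more work SQLite does at open time, which is why the text advises keeping the number of schema objects down.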
Host variable reference
In some embedded DBMSs, SQL statements can reference host variables (i.e., variables from application space) directly. This is not possible in SQLite. Instead, SQLite permits binding host variables to SQL statements through the sqlite3_bind_* API functions, and only for input parameters, not for output values. This approach is generally better than the direct-access approach, because direct access requires a special preprocessor that converts SQL statements into special API calls.
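The binding model carries over to SQLite's language wrappers. In the sketch below (Python's standard sqlite3 module; the table and variable names are ours), the "?" placeholders play the role of sqlite3_bind_* in the C API: host variables are bound as inputs, and output values come back through result rows rather than through bound output parameters.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, salary REAL)")

# Host variables are bound to the "?" placeholders at execution time --
# the same role sqlite3_bind_* plays in the C API. Inputs only.
name, salary = "Alice", 52000.0
conn.execute("INSERT INTO emp VALUES (?, ?)", (name, salary))

# Output values are read from the result row; there is no output binding.
row = conn.execute(
    "SELECT salary FROM emp WHERE name = ?", (name,)).fetchone()
```

Binding also sidesteps SQL injection, since bound values are never parsed as SQL text.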
Many DBMSs have the capability to create and store what are called stored procedures. A stored procedure is a group of SQL statements that forms a logical unit of work and performs a particular task, and SQL queries can invoke such procedures. SQLite does not have this capability.
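The usual workaround is to keep such logic in the application and, where SQL needs to call it, register an application-defined SQL function. This is not a stored procedure (the logic lives in the application process, not in the database file), but it covers many of the same uses. A minimal sketch with Python's standard sqlite3 module (the function names are ours):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# SQLite has no stored procedures, but an application can register its own
# SQL functions; the logic runs in the application, not in the database.
def dollars_to_cents(d):
    return round(d * 100)

conn.create_function("to_cents", 1, dollars_to_cents)

(cents,) = conn.execute("SELECT to_cents(12.34)").fetchone()
```

The C API offers the same facility through sqlite3_create_function.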