[Asterisk-cvs] asterisk-addons/res_sqlite3/sqlite/www arch.fig, NONE, 1.1 arch.gif, NONE, 1.1 arch.png, NONE, 1.1 arch.tcl, NONE, 1.1 arch2.fig, NONE, 1.1 arch2.gif, NONE, 1.1 arch2b.fig, NONE, 1.1 audit.tcl, NONE, 1.1 c_interface.tcl, NONE, 1.1 capi3.tcl, NONE, 1.1 capi3ref.tcl, NONE, 1.1 changes.tcl, NONE, 1.1 common.tcl, NONE, 1.1 conflict.tcl, NONE, 1.1 copyright-release.html, NONE, 1.1 copyright-release.pdf, NONE, 1.1 copyright.tcl, NONE, 1.1 datatype3.tcl, NONE, 1.1 datatypes.tcl, NONE, 1.1 docs.tcl, NONE, 1.1 download.tcl, NONE, 1.1 dynload.tcl, NONE, 1.1 faq.tcl, NONE, 1.1 fileformat.tcl, NONE, 1.1 formatchng.tcl, NONE, 1.1 index.tcl, NONE, 1.1 lang.tcl, NONE, 1.1 lockingv3.tcl, NONE, 1.1 mingw.tcl, NONE, 1.1 nulls.tcl, NONE, 1.1 oldnews.tcl, NONE, 1.1 omitted.tcl, NONE, 1.1 opcode.tcl, NONE, 1.1 quickstart.tcl, NONE, 1.1 speed.tcl, NONE, 1.1 sqlite.tcl, NONE, 1.1 support.tcl, NONE, 1.1 tclsqlite.tcl, NONE, 1.1 vdbe.tcl, NONE, 1.1 version3.tcl, NONE, 1.1 whentouse.tcl, NONE, 1.1

anthm at lists.digium.com
Mon Nov 15 09:41:24 CST 2004


Update of /usr/cvsroot/asterisk-addons/res_sqlite3/sqlite/www
In directory mongoose.digium.com:/tmp/cvs-serv9113/res_sqlite3/sqlite/www

Added Files:
	arch.fig arch.gif arch.png arch.tcl arch2.fig arch2.gif 
	arch2b.fig audit.tcl c_interface.tcl capi3.tcl capi3ref.tcl 
	changes.tcl common.tcl conflict.tcl copyright-release.html 
	copyright-release.pdf copyright.tcl datatype3.tcl 
	datatypes.tcl docs.tcl download.tcl dynload.tcl faq.tcl 
	fileformat.tcl formatchng.tcl index.tcl lang.tcl lockingv3.tcl 
	mingw.tcl nulls.tcl oldnews.tcl omitted.tcl opcode.tcl 
	quickstart.tcl speed.tcl sqlite.tcl support.tcl tclsqlite.tcl 
	vdbe.tcl version3.tcl whentouse.tcl 
Log Message:
check in res_sqlite3

--- NEW FILE: arch.fig ---
#FIG 3.2
Portrait
Center
Inches
Letter  
100.00
Single
-2
1200 2
2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
	1 1 3.00 75.00 135.00
	 3675 8550 3675 9075
2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
	1 1 3.00 75.00 135.00
	 3675 7200 3675 7725
2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
	1 1 3.00 75.00 135.00
	 3675 5775 3675 6300
2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
	1 1 3.00 75.00 135.00
	 3675 3975 3675 4500
2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
	1 1 3.00 75.00 135.00
	 3675 2625 3675 3150
2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
	1 1 3.00 75.00 135.00
	 3675 1275 3675 1800
2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
	1 1 3.00 75.00 135.00
	 3675 9900 3675 10425
2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
	 2550 10425 4875 10425 4875 11250 2550 11250 2550 10425
2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
	 2550 9075 4875 9075 4875 9900 2550 9900 2550 9075
2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
	 2550 7725 4875 7725 4875 8550 2550 8550 2550 7725
2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
	 2550 6300 4875 6300 4875 7200 2550 7200 2550 6300
2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
	 2550 4500 4875 4500 4875 5775 2550 5775 2550 4500
2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
	 2550 3150 4875 3150 4875 3975 2550 3975 2550 3150
2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
	 2550 1800 4875 1800 4875 2625 2550 2625 2550 1800
2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
	 2550 450 4875 450 4875 1275 2550 1275 2550 450
4 1 0 100 0 0 20 0.0000 4 195 1020 3675 750 Interface\001
4 1 0 100 0 0 14 0.0000 4 195 2040 3675 1125 main.c table.c tclsqlite.c\001
4 1 0 100 0 0 20 0.0000 4 195 1920 3675 6675 Virtual Machine\001
4 1 0 100 0 0 14 0.0000 4 150 570 3675 7050 vdbe.c\001
4 1 0 100 0 0 20 0.0000 4 195 1830 3675 4875 Code Generator\001
4 1 0 100 0 0 14 0.0000 4 195 1860 3675 5175 build.c delete.c expr.c\001
4 1 0 100 0 0 14 0.0000 4 195 2115 3675 5400 insert.c select.c update.c\001
4 1 0 100 0 0 14 0.0000 4 150 705 3675 5625 where.c\001
4 1 0 100 0 0 20 0.0000 4 195 735 3675 3450 Parser\001
4 1 0 100 0 0 20 0.0000 4 195 1140 3675 2100 Tokenizer\001
4 1 0 100 0 0 14 0.0000 4 150 870 3675 2475 tokenize.c\001
4 1 0 100 0 0 20 0.0000 4 255 1350 3675 9375 Page Cache\001
4 1 0 100 0 0 14 0.0000 4 150 630 3675 3825 parse.y\001
4 1 0 100 0 0 14 0.0000 4 150 600 3675 8400 btree.c\001
4 1 0 100 0 0 14 0.0000 4 150 645 3675 9750 pager.c\001
4 1 0 100 0 0 20 0.0000 4 195 1620 3675 8025 B-tree Driver\001
4 1 0 100 0 0 14 0.0000 4 105 345 3675 11100 os.c\001
4 1 0 100 0 0 20 0.0000 4 195 1470 3675 10725 OS Interface\001

--- NEW FILE: arch.gif ---
(This appears to be a binary file; contents omitted.)

--- NEW FILE: arch.png ---
(This appears to be a binary file; contents omitted.)

--- NEW FILE: arch.tcl ---
#
# Run this Tcl script to generate the sqlite.html file.
#
set rcsid {$Id: arch.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {Architecture of SQLite}
puts {
<h2>The Architecture Of SQLite</h2>

<h3>Introduction</h3>

<table align="right" border="1" cellpadding="15" cellspacing="1">
<tr><th>Block Diagram Of SQLite</th></tr>
<tr><td><img src="arch2.gif"></td></tr>
</table>
<p>This document describes the architecture of the SQLite library.
The information here is useful to those who want to understand or
modify the inner workings of SQLite.
</p>

<p>
A block diagram showing the main components of SQLite
and how they interrelate is shown at the right.  The text that
follows will provide a quick overview of each of these components.
</p>


<p>
This document describes SQLite version 3.0.  Version 2.8 and
earlier are similar but the details differ.
</p>

<h3>Interface</h3>

<p>Much of the public interface to the SQLite library is implemented by
functions found in the <b>main.c</b>, <b>legacy.c</b>, and
<b>vdbeapi.c</b> source files
though some routines are
scattered about in other files where they can have access to data 
structures with file scope.  The
<b>sqlite3_get_table()</b> routine is implemented in <b>table.c</b>.
<b>sqlite3_mprintf()</b> is found in <b>printf.c</b>.
<b>sqlite3_complete()</b> is in <b>tokenize.c</b>.
The Tcl interface is implemented by <b>tclsqlite.c</b>.  More
information on the C interface to SQLite is
<a href="capi3ref.html">available separately</a>.<p>

<p>To avoid name collisions with other software, all external
symbols in the SQLite library begin with the prefix <b>sqlite3</b>.
Those symbols that are intended for external use (in other words,
those symbols which form the API for SQLite) begin
with <b>sqlite3_</b>.</p>
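
<p>
For illustration, a minimal fragment using this public API might look
like the following (the database name and table are hypothetical
placeholders):
</p>

<blockquote><pre>
sqlite3 *db;
sqlite3_open("example.db", &db);                    /* sqlite3_ prefix on every call */
sqlite3_exec(db, "CREATE TABLE t1(a);", 0, 0, 0);   /* no callback needed here */
sqlite3_close(db);
</pre></blockquote>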

<h3>Tokenizer</h3>

<p>When a string containing SQL statements is to be executed, the
interface passes that string to the tokenizer.  The job of the tokenizer
is to break the original string up into tokens and pass those tokens
one by one to the parser.  The tokenizer is hand-coded in C in 
the file <b>tokenize.c</b>.</p>

<p>Note that in this design, the tokenizer calls the parser.  People
who are familiar with YACC and BISON may be used to doing things the
other way around -- having the parser call the tokenizer.  The author
of SQLite 
has done it both ways and finds things generally work out nicer for
the tokenizer to call the parser.  YACC has it backwards.</p>

<h3>Parser</h3>

<p>The parser is the piece that assigns meaning to tokens based on
their context.  The parser for SQLite is generated using the
<a href="http://www.hwaci.com/sw/lemon/">Lemon</a> LALR(1) parser
generator.  Lemon does the same job as YACC/BISON, but it uses
a different input syntax which is less error-prone.
Lemon also generates a parser which is reentrant and thread-safe.
And lemon defines the concept of a non-terminal destructor so
that it does not leak memory when syntax errors are encountered.
The source file that drives Lemon is found in <b>parse.y</b>.</p>

<p>Because
lemon is a program not normally found on development machines, the
complete source code to lemon (just one C file) is included in the
SQLite distribution in the "tool" subdirectory.  Documentation on
lemon is found in the "doc" subdirectory of the distribution.
</p>

<h3>Code Generator</h3>

<p>After the parser assembles tokens into complete SQL statements,
it calls the code generator to produce virtual machine code that
will do the work that the SQL statements request.  There are many
files in the code generator:
<b>attach.c</b>,
<b>auth.c</b>,
<b>build.c</b>,
<b>delete.c</b>,
<b>expr.c</b>,
<b>insert.c</b>,
<b>pragma.c</b>,
<b>select.c</b>,
<b>trigger.c</b>,
<b>update.c</b>,
<b>vacuum.c</b>
and <b>where.c</b>.
These files are where most of the serious magic happens.
<b>expr.c</b> handles code generation for expressions.
<b>where.c</b> handles code generation for WHERE clauses on
SELECT, UPDATE and DELETE statements.  The files <b>attach.c</b>,
<b>delete.c</b>, <b>insert.c</b>, <b>select.c</b>, <b>trigger.c</b>,
<b>update.c</b>, and <b>vacuum.c</b> handle the code generation
for SQL statements with the same names.  (Each of these files calls routines
in <b>expr.c</b> and <b>where.c</b> as necessary.)  All other
SQL statements are coded out of <b>build.c</b>.
The <b>auth.c</b> file implements the functionality of
<b>sqlite3_set_authorizer()</b>.</p>

<h3>Virtual Machine</h3>

<p>The program generated by the code generator is executed by
the virtual machine.  Additional information about the virtual
machine is <a href="opcode.html">available separately</a>.
To summarize, the virtual machine implements an abstract computing
engine specifically designed to manipulate database files.  The
machine has a stack which is used for intermediate storage.
Each instruction contains an opcode and
up to three additional operands.</p>

<p>The virtual machine itself is entirely contained in a single
source file <b>vdbe.c</b>.  The virtual machine also has
its own header files: <b>vdbe.h</b> that defines an interface
between the virtual machine and the rest of the SQLite library and
<b>vdbeInt.h</b> which defines structures private to the virtual machine.
The <b>vdbeaux.c</b> file contains utilities used by the virtual
machine and interface modules used by the rest of the library to
construct VM programs.  The <b>vdbeapi.c</b> file contains external
interfaces to the virtual machine such as the 
<b>sqlite3_bind_...</b> family of functions.  Individual values
(strings, integers, floating point numbers, and BLOBs) are stored
in an internal object named "Mem" which is implemented by
<b>vdbemem.c</b>.</p>

<p>
SQLite implements SQL functions using callbacks to C-language routines.
Even the built-in SQL functions are implemented this way.  Most of
the built-in SQL functions (ex: <b>coalesce()</b>, <b>count()</b>,
<b>substr()</b>, and so forth) can be found in <b>func.c</b>.
Date and time conversion functions are found in <b>date.c</b>.
</p>

<h3>B-Tree</h3>

<p>An SQLite database is maintained on disk using a B-tree implementation
found in the <b>btree.c</b> source file.  A separate B-tree is used for
each table and index in the database.  All B-trees are stored in the
same disk file.  Details of the file format are recorded in a large
comment at the beginning of <b>btree.c</b>.</p>

<p>The interface to the B-tree subsystem is defined by the header file
<b>btree.h</b>.
</p>

<h3>Page Cache</h3>

<p>The B-tree module requests information from the disk in fixed-size
chunks.  The default chunk size is 1024 bytes but can vary between 512
and 65536 bytes.
The page cache is responsible for reading, writing, and
caching these chunks.
The page cache also provides the rollback and atomic commit abstraction
and takes care of locking of the database file.  The
B-tree driver requests particular pages from the page cache and notifies
the page cache when it wants to modify pages or commit or rollback
changes and the page cache handles all the messy details of making sure
the requests are handled quickly, safely, and efficiently.</p>

<p>The code to implement the page cache is contained in the single C
source file <b>pager.c</b>.  The interface to the page cache subsystem
is defined by the header file <b>pager.h</b>.
</p>

<h3>OS Interface</h3>

<p>
In order to provide portability between POSIX and Win32 operating systems,
SQLite uses an abstraction layer to interface with the operating system.
The interface to the OS abstraction layer is defined in
<b>os.h</b>.  Each supported operating system has its own implementation:
<b>os_unix.c</b> for Unix, <b>os_win.c</b> for Windows, and so forth.
Each of these operating-system-specific implementations typically has its own
header file: <b>os_unix.h</b>, <b>os_win.h</b>, etc.
</p>

<h3>Utilities</h3>

<p>
Memory allocation and caseless string comparison routines are located
in <b>util.c</b>.
Symbol tables used by the parser are maintained by hash tables found
in <b>hash.c</b>.  The <b>utf.c</b> source file contains Unicode
conversion subroutines.
SQLite has its own private implementation of <b>printf()</b> (with
some extensions) in <b>printf.c</b> and its own random number generator
in <b>random.c</b>.
</p>

<h3>Test Code</h3>

<p>
If you count regression test scripts,
more than half the total code base of SQLite is devoted to testing.
There are many <b>assert()</b> statements in the main code files.
In addition, the source files <b>test1.c</b> through <b>test5.c</b>
together with <b>md5.c</b> implement extensions used for testing
purposes only.  The <b>os_test.c</b> backend interface is used to
simulate power failures to verify the crash-recovery mechanism in
the pager.
</p>

}
footer $rcsid

--- NEW FILE: arch2.fig ---
#FIG 3.2
Landscape
Center
Inches
Letter  
100.00
Single
-2
1200 2
0 32 #000000
0 33 #868686
0 34 #dfefd7
0 35 #d7efef
0 36 #efdbef
0 37 #efdbd7
0 38 #e7efcf
0 39 #9e9e9e
6 3225 3900 4650 6000
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 5475 4575 5475 4575 5925 3225 5925 3225 5475
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 5550 4650 5550 4650 6000 3300 6000 3300 5550
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 4650 4575 4650 4575 5100 3225 5100 3225 4650
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 4725 4650 4725 4650 5175 3300 5175 3300 4725
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 3900 4575 3900 4575 4350 3225 4350 3225 3900
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 3975 4650 3975 4650 4425 3300 4425 3300 3975
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 4350 3900 4650
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 5100 3900 5475
4 1 0 50 0 2 12 0.0000 4 135 1050 3900 5775 OS Interface\001
4 1 0 50 0 2 12 0.0000 4 135 615 3900 4200 B-Tree\001
4 1 0 50 0 2 12 0.0000 4 180 495 3900 4950 Pager\001
-6
6 5400 4725 6825 5250
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 4725 6750 4725 6750 5175 5400 5175 5400 4725
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 4800 6825 4800 6825 5250 5475 5250 5475 4800
4 1 0 50 0 2 12 0.0000 4 135 630 6000 5025 Utilities\001
-6
6 5400 5550 6825 6075
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 5550 6750 5550 6750 6000 5400 6000 5400 5550
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 5625 6825 5625 6825 6075 5475 6075 5475 5625
4 1 0 50 0 2 12 0.0000 4 135 855 6000 5850 Test Code\001
-6
6 5400 2775 6825 3750
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 2850 6825 2850 6825 3750 5475 3750 5475 2850
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 2775 6750 2775 6750 3675 5400 3675 5400 2775
4 1 0 50 0 2 12 0.0000 4 135 420 6075 3150 Code\001
4 1 0 50 0 2 12 0.0000 4 135 855 6075 3375 Generator\001
-6
6 5400 1950 6825 2475
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 1950 6750 1950 6750 2400 5400 2400 5400 1950
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 2025 6825 2025 6825 2475 5475 2475 5475 2025
4 1 0 50 0 2 12 0.0000 4 135 570 6075 2250 Parser\001
-6
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 1050 6750 1050 6750 1500 5400 1500 5400 1050
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 1125 6825 1125 6825 1575 5475 1575 5475 1125
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 1050 4575 1050 4575 1500 3225 1500 3225 1050
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 1125 4650 1125 4650 1575 3300 1575 3300 1125
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 1800 4575 1800 4575 2250 3225 2250 3225 1800
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 1875 4650 1875 4650 2325 3300 2325 3300 1875
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 2550 4575 2550 4575 3000 3225 3000 3225 2550
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 2625 4650 2625 4650 3075 3300 3075 3300 2625
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 1500 3900 1800
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 2250 3900 2550
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 3000 3900 3900
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 4575 1950 5400 1350
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 5400 2925 4650 2325
2 2 0 1 0 34 55 0 20 0.000 0 0 -1 0 0 5
	 2850 750 4875 750 4875 3375 2850 3375 2850 750
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 6075 1500 6075 1950
2 3 0 1 0 35 55 0 20 0.000 0 0 -1 0 0 5
	 2850 3675 4875 3675 4875 6225 2850 6225 2850 3675
2 2 0 1 0 37 55 0 20 0.000 0 0 -1 0 0 5
	 5175 750 7200 750 7200 4050 5175 4050 5175 750
2 2 0 1 0 38 55 0 20 0.000 0 0 -1 0 0 5
	 5175 4425 7200 4425 7200 6225 5175 6225 5175 4425
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 6075 2475 6075 2775
4 1 0 50 0 2 12 0.0000 4 135 855 6075 1350 Tokenizer\001
4 1 0 50 0 1 12 1.5708 4 180 1020 7125 2250 SQL Compiler\001
4 1 0 50 0 1 12 1.5708 4 135 345 3075 2025 Core\001
4 1 0 50 0 2 12 0.0000 4 135 1290 3900 2850 Virtual Machine\001
4 1 0 50 0 2 12 0.0000 4 165 1185 3900 1995 SQL Command\001
4 1 0 50 0 2 12 0.0000 4 135 855 3900 2183 Processor\001
4 1 0 50 0 2 14 0.0000 4 150 870 3900 1350 Interface\001
4 1 0 50 0 1 12 1.5708 4 135 885 7125 5400 Accessories\001
4 1 0 50 0 1 12 1.5708 4 135 645 3075 4875 Backend\001

--- NEW FILE: arch2.gif ---
(This appears to be a binary file; contents omitted.)

--- NEW FILE: arch2b.fig ---
#FIG 3.2
Landscape
Center
Inches
Letter  
100.00
Single
-2
1200 2
0 32 #000000
0 33 #868686
0 34 #dfefd7
0 35 #d7efef
0 36 #efdbef
0 37 #efdbd7
0 38 #e7efcf
0 39 #9e9e9e
6 3225 3900 4650 6000
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 5475 4575 5475 4575 5925 3225 5925 3225 5475
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 5550 4650 5550 4650 6000 3300 6000 3300 5550
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 4650 4575 4650 4575 5100 3225 5100 3225 4650
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 4725 4650 4725 4650 5175 3300 5175 3300 4725
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 3900 4575 3900 4575 4350 3225 4350 3225 3900
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 3975 4650 3975 4650 4425 3300 4425 3300 3975
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 4350 3900 4650
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 5100 3900 5475
4 1 0 50 0 2 12 0.0000 4 135 1050 3900 5775 OS Interface\001
4 1 0 50 0 2 12 0.0000 4 135 615 3900 4200 B-Tree\001
4 1 0 50 0 2 12 0.0000 4 180 495 3900 4950 Pager\001
-6
6 5175 4275 7200 6150
6 5400 4519 6825 5090
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 4519 6750 4519 6750 5009 5400 5009 5400 4519
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 4601 6825 4601 6825 5090 5475 5090 5475 4601
4 1 0 50 0 2 12 0.0000 4 135 630 6000 4845 Utilities\001
-6
6 5400 5416 6825 5987
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 5416 6750 5416 6750 5906 5400 5906 5400 5416
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 5498 6825 5498 6825 5987 5475 5987 5475 5498
4 1 0 50 0 2 12 0.0000 4 135 855 6000 5742 Test Code\001
-6
2 2 0 1 0 38 55 0 20 0.000 0 0 -1 0 0 5
	 5175 4275 7200 4275 7200 6150 5175 6150 5175 4275
4 1 0 50 0 1 12 1.5708 4 135 885 7125 5253 Accessories\001
-6
6 5400 2700 6825 3675
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 2775 6825 2775 6825 3675 5475 3675 5475 2775
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 2700 6750 2700 6750 3600 5400 3600 5400 2700
4 1 0 50 0 2 12 0.0000 4 135 420 6075 3075 Code\001
4 1 0 50 0 2 12 0.0000 4 135 855 6075 3300 Generator\001
-6
6 5400 1875 6825 2400
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 1875 6750 1875 6750 2325 5400 2325 5400 1875
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 1950 6825 1950 6825 2400 5475 2400 5475 1950
4 1 0 50 0 2 12 0.0000 4 135 570 6075 2175 Parser\001
-6
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 5400 1050 6750 1050 6750 1500 5400 1500 5400 1050
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 5475 1125 6825 1125 6825 1575 5475 1575 5475 1125
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 1050 4575 1050 4575 1500 3225 1500 3225 1050
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 1125 4650 1125 4650 1575 3300 1575 3300 1125
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 1800 4575 1800 4575 2250 3225 2250 3225 1800
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 1875 4650 1875 4650 2325 3300 2325 3300 1875
2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
	 3225 2550 4575 2550 4575 3000 3225 3000 3225 2550
2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
	 3300 2625 4650 2625 4650 3075 3300 3075 3300 2625
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 1500 3900 1800
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 2250 3900 2550
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 3900 3000 3900 3900
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 4575 1950 5400 1350
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 5400 2925 4650 2175
2 2 0 1 0 34 55 0 20 0.000 0 0 -1 0 0 5
	 2850 750 4875 750 4875 3375 2850 3375 2850 750
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 6075 1500 6075 1800
2 3 0 1 0 35 55 0 20 0.000 0 0 -1 0 0 5
	 2850 3675 4875 3675 4875 6150 2850 6150 2850 3675
2 2 0 1 0 37 55 0 20 0.000 0 0 -1 0 0 5
	 5175 750 7200 750 7200 3975 5175 3975 5175 750
2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
	1 1 1.00 60.00 120.00
	 6075 2400 6075 2700
4 1 0 50 0 2 12 0.0000 4 135 855 6075 1350 Tokenizer\001
4 1 0 50 0 1 12 1.5708 4 180 1020 7125 2250 SQL Compiler\001
4 1 0 50 0 1 12 1.5708 4 135 345 3075 2025 Core\001
4 1 0 50 0 2 12 0.0000 4 135 1290 3900 2850 Virtual Machine\001
4 1 0 50 0 2 12 0.0000 4 165 1185 3900 1995 SQL Command\001
4 1 0 50 0 2 12 0.0000 4 135 855 3900 2183 Processor\001
4 1 0 50 0 2 14 0.0000 4 150 870 3900 1350 Interface\001
4 1 0 50 0 1 12 1.5708 4 135 645 3075 4875 Backend\001

--- NEW FILE: audit.tcl ---
#
# Run this Tcl script to generate the audit.html file.
#
set rcsid {$Id: audit.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}

puts {<html>
<head>
  <title>SQLite Security Audit Procedure</title>
</head>
<body bgcolor=white>
<h1 align=center>
SQLite Security Audit Procedure
</h1>}
puts "<p align=center>
(This page was last modified on [lrange $rcsid 3 4] UTC)
</p>"

puts {
<p>
A security audit for SQLite consists of two components.  First, there is
a check for common errors that often lead to security problems.  Second,
an attempt is made to construct a proof that SQLite has certain desirable
security properties.
</p>

<h2>Part I: Things to check</h2>

<p>
Scan all source code and check for the following common errors:
</p>

<ol>
<li><p>
Verify that the destination buffer is large enough to hold its result
in every call to the following routines:
<ul>
<li> <b>strcpy()</b> </li>
<li> <b>strncpy()</b> </li>
<li> <b>strcat()</b> </li>
<li> <b>memcpy()</b> </li>
<li> <b>memset()</b> </li>
<li> <b>memmove()</b> </li>
<li> <b>bcopy()</b> </li>
<li> <b>sprintf()</b> </li>
<li> <b>scanf()</b> </li>
</ul>
</p></li>
<li><p>
Verify that pointers returned by subroutines are not NULL before using
the pointers.  In particular, make sure the return values for the following
routines are checked before they are used:
<ul>
<li> <b>malloc()</b> </li>
<li> <b>realloc()</b> </li>
<li> <b>sqliteMalloc()</b> </li>
<li> <b>sqliteRealloc()</b> </li>
<li> <b>sqliteStrDup()</b> </li>
<li> <b>sqliteStrNDup()</b> </li>
<li> <b>sqliteExpr()</b> </li>
<li> <b>sqliteExprFunction()</b> </li>
<li> <b>sqliteExprListAppend()</b> </li>
<li> <b>sqliteResultSetOfSelect()</b> </li>
<li> <b>sqliteIdListAppend()</b> </li>
<li> <b>sqliteSrcListAppend()</b> </li>
<li> <b>sqliteSelectNew()</b> </li>
<li> <b>sqliteTableNameToTable()</b> </li>
<li> <b>sqliteTableTokenToSrcList()</b> </li>
<li> <b>sqliteWhereBegin()</b> </li>
<li> <b>sqliteFindTable()</b> </li>
<li> <b>sqliteFindIndex()</b> </li>
<li> <b>sqliteTableNameFromToken()</b> </li>
<li> <b>sqliteGetVdbe()</b> </li>
<li> <b>sqlite_mprintf()</b> </li>
<li> <b>sqliteExprDup()</b> </li>
<li> <b>sqliteExprListDup()</b> </li>
<li> <b>sqliteSrcListDup()</b> </li>
<li> <b>sqliteIdListDup()</b> </li>
<li> <b>sqliteSelectDup()</b> </li>
<li> <b>sqliteFindFunction()</b> </li>
<li> <b>sqliteTriggerSelectStep()</b> </li>
<li> <b>sqliteTriggerInsertStep()</b> </li>
<li> <b>sqliteTriggerUpdateStep()</b> </li>
<li> <b>sqliteTriggerDeleteStep()</b> </li>
</ul>
</p></li>
<li><p>
On all functions and procedures, verify that pointer parameters are not NULL
before dereferencing those parameters.
</p></li>
<li><p>
Check to make sure that temporary files are opened safely: that the process
will not overwrite an existing file when opening the temp file and that
another process is unable to substitute a file for the temp file being
opened.
</p></li>
</ol>



<h2>Part II: Things to prove</h2>

<p>
Prove that SQLite exhibits the characteristics outlined below:
</p>

<ol>
<li><p>
The following are preconditions:</p>
<p><ul>
<li><b>Z</b> is an arbitrary-length NUL-terminated string.</li>
<li>An existing SQLite database has been opened.  The return value
    from the call to <b>sqlite_open()</b> is stored in the variable
    <b>db</b>.</li>
<li>The database contains at least one table of the form:
<blockquote><pre>
CREATE TABLE t1(a CLOB);
</pre></blockquote></li>
<li>There are no user-defined functions other than the standard
    built-in functions.</li>
</ul></p>
<p>The following statement of C code is executed:</p>
<blockquote><pre>
sqlite_exec_printf(
   db,
   "INSERT INTO t1(a) VALUES('%q');", 
   0, 0, 0, Z
);
</pre></blockquote>
<p>Prove the following are true for all possible values of string <b>Z</b>:</p>
<ol type="a">
<li><p>
The call to <b>sqlite_exec_printf()</b> will
return in a length of time that is a polynomial in <b>strlen(Z)</b>.
It might return an error code but it will not crash.
</p></li>
<li><p>
At most one new row will be inserted into table t1.
</p></li>
<li><p>
No preexisting rows of t1 will be deleted or modified.
</p></li>
<li><p>
No tables other than t1 will be altered in any way.
</p></li>
<li><p>
No preexisting files on the host computer's filesystem, other than
the database file itself, will be deleted or modified.
</p></li>
<li><p>
For some constants <b>K1</b> and <b>K2</b>,
if at least <b>K1*strlen(Z) + K2</b> bytes of contiguous memory are
available to <b>malloc()</b>, then the call to <b>sqlite_exec_printf()</b>
will not return SQLITE_NOMEM.
</p></li>
</ol>
</p></li>


<li><p>
The following are preconditions:
<p><ul>
<li><b>Z</b> is an arbitrary-length NUL-terminated string.</li>
<li>An existing SQLite database has been opened.  The return value
    from the call to <b>sqlite_open()</b> is stored in the variable
    <b>db</b>.</li>
<li>There exists a callback function <b>cb()</b> that appends all
    information passed in through its parameters into a single
    data buffer called <b>Y</b>.</li>
<li>There are no user-defined functions other than the standard
    built-in functions.</li>
</ul></p>
<p>The following statement of C code is executed:</p>
<blockquote><pre>
sqlite_exec(db, Z, cb, 0, 0);
</pre></blockquote>
<p>Prove the following are true for all possible values of string <b>Z</b>:</p>
<ol type="a">
<li><p>
The call to <b>sqlite_exec()</b> will
return in a length of time which is a polynomial in <b>strlen(Z)</b>.
It might return an error code but it will not crash.
</p></li>
<li><p>
After <b>sqlite_exec()</b> returns, the buffer <b>Y</b> will not contain
any content from any preexisting file on the host computer's file system,
except for the database file.
</p></li>
<li><p>
After the call to <b>sqlite_exec()</b> returns, the database file will
still be well-formed.  It might not contain the same data, but it will
still be a properly constructed SQLite database file.
</p></li>
<li><p>
No preexisting files on the host computer's filesystem, other than
the database file itself, will be deleted or modified.
</p></li>
<li><p>
For some constants <b>K1</b> and <b>K2</b>,
if at least <b>K1*strlen(Z) + K2</b> bytes of contiguous memory are
available to <b>malloc()</b>, then the call to <b>sqlite_exec()</b>
will not return SQLITE_NOMEM.
</p></li>
</ol>
</p></li>

</ol>
}
puts {
<p><hr /></p>
<p><a href="index.html"><img src="/goback.jpg" border=0 />
Back to the SQLite Home Page</a>
</p>

</body></html>}

--- NEW FILE: c_interface.tcl ---
#
# Run this Tcl script to generate the sqlite.html file.
#
set rcsid {$Id: c_interface.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {The C language interface to the SQLite library}
puts {
<h2>The C language interface to the SQLite library</h2>

<p>The SQLite library is designed to be very easy to use from
a C or C++ program.  This document gives an overview of the C/C++
programming interface.</p>

<h3>1.0 The Core API</h3>

<p>The interface to the SQLite library consists of three core functions,
one opaque data structure, and some constants used as return values.
The core interface is as follows:</p>

[...1077 lines suppressed...]
different that this you will have to recompile.
</p>

<p>
Under Unix, an <b>sqlite*</b> pointer should not be carried across a
<b>fork()</b> system call into the child process.  The child process
should open its own copy of the database after the <b>fork()</b>.
</p>
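
<p>
A minimal sketch of that pattern (the file name is hypothetical,
error checking is omitted, and the usual Unix headers are assumed):
</p>

<blockquote><pre>
char *zErrMsg = 0;
sqlite *parent_db = sqlite_open("data.db", 0, &zErrMsg);
if( fork()==0 ){
  /* child: do not reuse parent_db; open a separate connection */
  sqlite *child_db = sqlite_open("data.db", 0, &zErrMsg);
  /* ... work with child_db only ... */
  sqlite_close(child_db);
  _exit(0);
}
/* the parent continues to use parent_db as before */
</pre></blockquote>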

<h3>6.0 Usage Examples</h3>

<p>For examples of how the SQLite C/C++ interface can be used,
refer to the source code for the <b>sqlite</b> program in the
file <b>src/shell.c</b> of the source tree.
Additional information about sqlite is available at
<a href="sqlite.html">sqlite.html</a>.
See also the sources to the Tcl interface for SQLite in
the source file <b>src/tclsqlite.c</b>.</p>
}
footer $rcsid

--- NEW FILE: capi3.tcl ---
set rcsid {$Id: capi3.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {C/C++ Interface For SQLite Version 3}
puts {
<h2>C/C++ Interface For SQLite Version 3</h2>

<h3>1.0 Overview</h3>

<p>
SQLite version 3.0 is a new version of SQLite, derived from
the SQLite 2.8.13 code base, but with an incompatible file format
and API.
SQLite version 3.0 was created to answer demand for the following features:
</p>

<ul>
<li>Support for UTF-16.</li>
<li>User-definable text collating sequences.</li>
<li>The ability to store BLOBs in indexed columns.</li>
</ul>

<p>
It was necessary to move to version 3.0 to implement these features because
each requires incompatible changes to the database file format.  Other
incompatible changes, such as a cleanup of the API, were introduced at the
same time under the theory that it is best to get your incompatible changes
out of the way all at once.  
</p>

<p>
The API for version 3.0 is similar to the version 2.X API,
but with some important changes.  Most noticeably, the "<tt>sqlite_</tt>"
prefix that appears at the beginning of all API functions and data
structures is changed to "<tt>sqlite3_</tt>".
This avoids confusion between the two APIs and allows linking against both
SQLite 2.X and SQLite 3.0 at the same time.
</p>

<p>
There is no agreement on what the C datatype for a UTF-16
string should be.  Therefore, SQLite uses a generic type of void*
to refer to UTF-16 strings.  Client software can cast the void* 
to whatever datatype is appropriate for their system.
</p>

<h3>2.0 C/C++ Interface</h3>

<p>
The API for SQLite 3.0 includes 83 separate functions in addition
to several data structures and #defines.  (A complete
<a href="capi3ref.html">API reference</a> is provided as a separate document.)
Fortunately, the interface is not nearly as complex as its size implies.
Simple programs can still make do with only 3 functions:
<a href="capi3ref.html#sqlite3_open">sqlite3_open()</a>,
<a href="capi3ref.html#sqlite3_exec">sqlite3_exec()</a>, and
<a href="capi3ref.html#sqlite3_close">sqlite3_close()</a>.
More control over the execution of the database engine is provided
using
<a href="capi3ref.html#sqlite3_prepare">sqlite3_prepare()</a>
to compile an SQLite statement into byte code and
<a href="capi3ref.html#sqlite3_prepare">sqlite3_step()</a>
to execute that bytecode.
A family of routines with names beginning with 
<a href="capi3ref.html#sqlite3_column_blob">sqlite3_column_</a>
is used to extract information about the result set of a query.
Many interface functions come in pairs, with both a UTF-8 and
UTF-16 version.  And there is a collection of routines
used to implement user-defined SQL functions and user-defined
text collating sequences.
</p>


<h4>2.1 Opening and closing a database</h4>

<blockquote><pre>
   typedef struct sqlite3 sqlite3;
   int sqlite3_open(const char*, sqlite3**);
   int sqlite3_open16(const void*, sqlite3**);
   int sqlite3_close(sqlite3*);
   const char *sqlite3_errmsg(sqlite3*);
   const void *sqlite3_errmsg16(sqlite3*);
   int sqlite3_errcode(sqlite3*);
</pre></blockquote>

<p>
The sqlite3_open() routine returns an integer error code rather than
a pointer to the sqlite3 structure as the version 2 interface did.
The difference between sqlite3_open()
and sqlite3_open16() is that sqlite3_open16() takes UTF-16 (in host native
byte order) for the name of the database file.  If a new database file
needs to be created, then sqlite3_open16() sets the internal text
representation to UTF-16 whereas sqlite3_open() sets the text
representation to UTF-8.
</p>

<p>
The opening and/or creating of the database file is deferred until the
file is actually needed.  This allows options and parameters, such
as the native text representation and default page size, to be
set using PRAGMA statements.
</p>

<p>
The sqlite3_errcode() routine returns a result code for the most
recent major API call.  sqlite3_errmsg() returns an English-language
text error message for the most recent error.  The error message is
represented in UTF-8 and will be ephemeral - it could disappear on
the next call to any SQLite API function.  sqlite3_errmsg16() works like
sqlite3_errmsg() except that it returns the error message represented
as UTF-16 in host native byte order.
</p>
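
<p>
A minimal sketch of these routines (the file name is hypothetical,
fprintf() assumes the standard I/O header, and the page-size pragma
is one example of a setting that must precede first use):
</p>

<blockquote><pre>
sqlite3 *db;
if( sqlite3_open("test.db", &db)!=SQLITE_OK ){
  /* a handle is still returned so the message can be inspected */
  fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
  sqlite3_close(db);
}else{
  /* the file is not touched until first use, so this still applies */
  sqlite3_exec(db, "PRAGMA page_size = 4096;", 0, 0, 0);
  sqlite3_exec(db, "CREATE TABLE t1(a);", 0, 0, 0);
  sqlite3_close(db);
}
</pre></blockquote>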

<p>
The error codes for SQLite version 3 are unchanged from version 2.
They are as follows:
</p>

<blockquote><pre>
#define SQLITE_OK           0   /* Successful result */
#define SQLITE_ERROR        1   /* SQL error or missing database */
#define SQLITE_INTERNAL     2   /* An internal logic error in SQLite */
#define SQLITE_PERM         3   /* Access permission denied */
#define SQLITE_ABORT        4   /* Callback routine requested an abort */
#define SQLITE_BUSY         5   /* The database file is locked */
#define SQLITE_LOCKED       6   /* A table in the database is locked */
#define SQLITE_NOMEM        7   /* A malloc() failed */
#define SQLITE_READONLY     8   /* Attempt to write a readonly database */
#define SQLITE_INTERRUPT    9   /* Operation terminated by sqlite_interrupt() */
#define SQLITE_IOERR       10   /* Some kind of disk I/O error occurred */
#define SQLITE_CORRUPT     11   /* The database disk image is malformed */
#define SQLITE_NOTFOUND    12   /* (Internal Only) Table or record not found */
#define SQLITE_FULL        13   /* Insertion failed because database is full */
#define SQLITE_CANTOPEN    14   /* Unable to open the database file */
#define SQLITE_PROTOCOL    15   /* Database lock protocol error */
#define SQLITE_EMPTY       16   /* (Internal Only) Database table is empty */
#define SQLITE_SCHEMA      17   /* The database schema changed */
#define SQLITE_TOOBIG      18   /* Too much data for one row of a table */
#define SQLITE_CONSTRAINT  19   /* Abort due to constraint violation */
#define SQLITE_MISMATCH    20   /* Data type mismatch */
#define SQLITE_MISUSE      21   /* Library used incorrectly */
#define SQLITE_NOLFS       22   /* Uses OS features not supported on host */
#define SQLITE_AUTH        23   /* Authorization denied */
#define SQLITE_ROW         100  /* sqlite_step() has another row ready */
#define SQLITE_DONE        101  /* sqlite_step() has finished executing */
</pre></blockquote>

<h4>2.2 Executing SQL statements</h4>

<blockquote><pre>
   typedef int (*sqlite_callback)(void*,int,char**, char**);
   int sqlite3_exec(sqlite3*, const char *sql, sqlite_callback, void*, char**);
</pre></blockquote>

<p>
The sqlite3_exec function works much as it did in SQLite version 2.
Zero or more SQL statements specified in the second parameter are compiled
and executed.  Query results are returned to a callback routine.
See the <a href="capi3ref.html#sqlite3_exec">API reference</a> for additional
information.
</p>

<p>
In SQLite version 3, the sqlite3_exec routine is just a wrapper around
calls to the prepared statement interface.
</p>

<blockquote><pre>
   typedef struct sqlite3_stmt sqlite3_stmt;
   int sqlite3_prepare(sqlite3*, const char*, int, sqlite3_stmt**, const char**);
   int sqlite3_prepare16(sqlite3*, const void*, int, sqlite3_stmt**, const void**);
   int sqlite3_finalize(sqlite3_stmt*);
   int sqlite3_reset(sqlite3_stmt*);
</pre></blockquote>

<p>
The sqlite3_prepare interface compiles a single SQL statement into byte code
for later execution.  This interface is now the preferred way of accessing
the database.
</p>

<p>
The SQL statement is a UTF-8 string for sqlite3_prepare().
The sqlite3_prepare16() works the same way except
that it expects a UTF-16 string as SQL input.
Only the first SQL statement in the input string is compiled.
The fourth parameter is filled in with a pointer to the next (uncompiled)
SQLite statement in the input string, if any.
The sqlite3_finalize() routine deallocates a prepared SQL statement.
All prepared statements must be finalized before the database can be
closed.
The sqlite3_reset() routine resets a prepared SQL statement so that it
can be executed again.
</p>

<p>
The SQL statement may contain tokens of the form "?" or "?nnn" or ":nnn:"
where "nnn" is an integer.  Such tokens represent unspecified literal values
(or wildcards) to be filled in later by the 
<a href="capi3ref.html#sqlite3_bind_blob">sqlite3_bind</a> interface.
Each wildcard has an associated number given
by the "nnn" that follows the "?".  If the "?" is not followed by an
integer, then its number is one more than the number of prior wildcards
in the same SQL statement.  It is allowed for the same wildcard
to occur more than once in the same SQL statement, in which case
all instances of that wildcard will be filled in with the same value.
Unbound wildcards have a value of NULL.
</p>

<blockquote><pre>
   int sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*));
   int sqlite3_bind_double(sqlite3_stmt*, int, double);
   int sqlite3_bind_int(sqlite3_stmt*, int, int);
   int sqlite3_bind_int64(sqlite3_stmt*, int, long long int);
   int sqlite3_bind_null(sqlite3_stmt*, int);
   int sqlite3_bind_text(sqlite3_stmt*, int, const char*, int n, void(*)(void*));
   int sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int n, void(*)(void*));
   int sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*);
</pre></blockquote>

<p>
There is an assortment of sqlite3_bind routines used to assign values
to wildcards in a prepared SQL statement.  Unbound wildcards
are interpreted as NULLs.  Bindings are not reset by sqlite3_reset().
But wildcards can be rebound to new values after an sqlite3_reset().
</p>
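
<p>
A minimal sketch of binding in action (the table is hypothetical;
sqlite3_step(), described below, runs the statement, and the
SQLITE_TRANSIENT destructor asks SQLite to make its own private copy
of the text):
</p>

<blockquote><pre>
int insert_one(sqlite3 *db, const char *zValue){
  sqlite3_stmt *pStmt;
  int rc = sqlite3_prepare(db, "INSERT INTO t1(a) VALUES(?)", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_bind_text(pStmt, 1, zValue, -1, SQLITE_TRANSIENT);
  rc = sqlite3_step(pStmt);        /* SQLITE_DONE on success */
  sqlite3_finalize(pStmt);         /* frees the statement and its bindings */
  return rc;
}
</pre></blockquote>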

<p>
After an SQL statement has been prepared (and optionally bound), it
is executed using:
</p>

<blockquote><pre>
   int sqlite3_step(sqlite3_stmt*);
</pre></blockquote>

<p>
The sqlite3_step() routine returns SQLITE_ROW if it is returning a single
row of the result set, or SQLITE_DONE if execution has completed, either
normally or due to an error.  It might also return SQLITE_BUSY if it is
unable to open the database file.  If the return value is SQLITE_ROW, then
the following routines can be used to extract information about that row
of the result set:
</p>

<blockquote><pre>
   const void *sqlite3_column_blob(sqlite3_stmt*, int iCol);
   int sqlite3_column_bytes(sqlite3_stmt*, int iCol);
   int sqlite3_column_bytes16(sqlite3_stmt*, int iCol);
   int sqlite3_column_count(sqlite3_stmt*);
   const char *sqlite3_column_decltype(sqlite3_stmt *, int iCol);
   const void *sqlite3_column_decltype16(sqlite3_stmt *, int iCol);
   double sqlite3_column_double(sqlite3_stmt*, int iCol);
   int sqlite3_column_int(sqlite3_stmt*, int iCol);
   long long int sqlite3_column_int64(sqlite3_stmt*, int iCol);
   const char *sqlite3_column_name(sqlite3_stmt*, int iCol);
   const void *sqlite3_column_name16(sqlite3_stmt*, int iCol);
   const unsigned char *sqlite3_column_text(sqlite3_stmt*, int iCol);
   const void *sqlite3_column_text16(sqlite3_stmt*, int iCol);
   int sqlite3_column_type(sqlite3_stmt*, int iCol);
</pre></blockquote>

<p>
The 
<a href="capi3ref.html#sqlite3_column_count">sqlite3_column_count()</a>
function returns the number of columns in
the results set.  sqlite3_column_count() can be called at any time after
sqlite3_prepare().  
<a href="capi3ref.html#sqlite3_data_count">sqlite3_data_count()</a>
works similarly to
sqlite3_column_count() except that it only works following sqlite3_step().
If the previous call to sqlite3_step() returned SQLITE_DONE or an error code,
then sqlite3_data_count() will return 0 whereas sqlite3_column_count() will
continue to return the number of columns in the result set.
</p>

<p>
The sqlite3_column_type() function returns the
datatype for the value in the Nth column.  The return value is one
of these:
</p>

<blockquote><pre>
   #define SQLITE_INTEGER  1
   #define SQLITE_FLOAT    2
   #define SQLITE_TEXT     3
   #define SQLITE_BLOB     4
   #define SQLITE_NULL     5
</pre></blockquote>

<p>
The sqlite3_column_decltype() routine returns text which is the
declared type of the column in the CREATE TABLE statement.  For an
expression, the return type is an empty string.  sqlite3_column_name()
returns the name of the Nth column.  sqlite3_column_bytes() returns
the number of bytes in a column that has type BLOB or the number of bytes
in a TEXT string with UTF-8 encoding.  sqlite3_column_bytes16() returns
the same value for BLOBs but for TEXT strings returns the number of bytes
in a UTF-16 encoding.
sqlite3_column_blob() returns BLOB data.
sqlite3_column_text() returns TEXT data as UTF-8.
sqlite3_column_text16() returns TEXT data as UTF-16.
sqlite3_column_int() returns INTEGER data in the host machine's native
integer format.
sqlite3_column_int64() returns 64-bit INTEGER data.
Finally, sqlite3_column_double() returns floating point data.
</p>

<p>
It is not necessary to retrieve data in the format specified by
sqlite3_column_type().  If a different format is requested, the data
is converted automatically.
</p>
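
<p>
Putting the query-side interfaces together, a minimal sketch (the
table and columns are hypothetical; printf() assumes the standard
I/O header) might look like this:
</p>

<blockquote><pre>
void print_rows(sqlite3 *db){
  sqlite3_stmt *pStmt;
  if( sqlite3_prepare(db, "SELECT rowid, a FROM t1", -1, &pStmt, 0)!=SQLITE_OK ) return;
  while( sqlite3_step(pStmt)==SQLITE_ROW ){
    long long int id = sqlite3_column_int64(pStmt, 0);
    const unsigned char *zText = sqlite3_column_text(pStmt, 1);
    printf("%lld: %s\n", id, zText ? (const char*)zText : "(null)");
  }
  sqlite3_finalize(pStmt);
}
</pre></blockquote>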

<h4>2.3 User-defined functions</h4>

<p>
User defined functions can be created using the following routine:
</p>

<blockquote><pre>
   typedef struct sqlite3_value sqlite3_value;
   int sqlite3_create_function(
     sqlite3 *,
     const char *zFunctionName,
     int nArg,
     int eTextRep,
     void*,
     void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
     void (*xStep)(sqlite3_context*,int,sqlite3_value**),
     void (*xFinal)(sqlite3_context*)
   );
   int sqlite3_create_function16(
     sqlite3*,
     const void *zFunctionName,
     int nArg,
     int eTextRep,
     void*,
     void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
     void (*xStep)(sqlite3_context*,int,sqlite3_value**),
     void (*xFinal)(sqlite3_context*)
   );
   #define SQLITE3_UTF8     1
   #define SQLITE3_UTF16    2
   #define SQLITE3_UTF16BE  3
   #define SQLITE3_UTF16LE  4
   #define SQLITE3_ANY      5
</pre></blockquote>

<p>
The nArg parameter specifies the number of arguments to the function.
A value of 0 indicates that any number of arguments is allowed.  The
eTextRep parameter specifies what representation text values are expected
to be in for arguments to this function.  The value of this parameter should
be one of the constants defined above.  SQLite version 3 allows multiple
implementations of the same function using different text representations.
The database engine chooses the function that minimizes the number
of text conversions required.
</p>

<p>
Normal functions specify only xFunc and leave xStep and xFinal set to NULL.
Aggregate functions specify xStep and xFinal and leave xFunc set to NULL.
There is no separate sqlite3_create_aggregate() API.
</p>

<p>
The function name is specified in UTF-8.  A separate sqlite3_create_function16()
API works the same as sqlite3_create_function()
except that the function name is specified in UTF-16 host byte order.
</p>

<p>
Notice that the parameters to functions are now pointers to sqlite3_value
structures instead of pointers to strings as in SQLite version 2.X.
The following routines are used to extract useful information from these
"values":
</p>

<blockquote><pre>
   const void *sqlite3_value_blob(sqlite3_value*);
   int sqlite3_value_bytes(sqlite3_value*);
   int sqlite3_value_bytes16(sqlite3_value*);
   double sqlite3_value_double(sqlite3_value*);
   int sqlite3_value_int(sqlite3_value*);
   long long int sqlite3_value_int64(sqlite3_value*);
   const unsigned char *sqlite3_value_text(sqlite3_value*);
   const void *sqlite3_value_text16(sqlite3_value*);
   int sqlite3_value_type(sqlite3_value*);
</pre></blockquote>

<p>
Function implementations use the following APIs to acquire context and
to report results:
</p>

<blockquote><pre>
   void *sqlite3_aggregate_context(sqlite3_context*, int nbyte);
   void *sqlite3_user_data(sqlite3_context*);
   void sqlite3_result_blob(sqlite3_context*, const void*, int n, void(*)(void*));
   void sqlite3_result_double(sqlite3_context*, double);
   void sqlite3_result_error(sqlite3_context*, const char*, int);
   void sqlite3_result_error16(sqlite3_context*, const void*, int);
   void sqlite3_result_int(sqlite3_context*, int);
   void sqlite3_result_int64(sqlite3_context*, long long int);
   void sqlite3_result_null(sqlite3_context*);
   void sqlite3_result_text(sqlite3_context*, const char*, int n, void(*)(void*));
   void sqlite3_result_text16(sqlite3_context*, const void*, int n, void(*)(void*));
   void sqlite3_result_value(sqlite3_context*, sqlite3_value*);
   void *sqlite3_get_auxdata(sqlite3_context*, int);
   void sqlite3_set_auxdata(sqlite3_context*, int, void*, void (*)(void*));
</pre></blockquote>
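
<p>
As a minimal sketch, a hypothetical scalar function <b>half(X)</b>
that returns X divided by two could be implemented and registered as
follows (using the SQLITE3_ANY constant listed above):
</p>

<blockquote><pre>
static void halfFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
  /* exactly one argument is supplied because nArg is 1 at registration */
  sqlite3_result_double(context, sqlite3_value_double(argv[0]) / 2.0);
}

/* ... during initialization, after sqlite3_open(): ... */
sqlite3_create_function(db, "half", 1, SQLITE3_ANY, 0, halfFunc, 0, 0);
</pre></blockquote>

<p>
The xStep and xFinal pointers are left NULL because this is an
ordinary (non-aggregate) function.
</p>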

<h4>2.4 User-defined collating sequences</h4>

<p>
The following routines are used to implement user-defined
collating sequences:
</p>

<blockquote><pre>
   sqlite3_create_collation(sqlite3*, const char *zName, int eTextRep, void*,
      int(*xCompare)(void*,int,const void*,int,const void*));
   sqlite3_create_collation16(sqlite3*, const void *zName, int eTextRep, void*,
      int(*xCompare)(void*,int,const void*,int,const void*));
   sqlite3_collation_needed(sqlite3*, void*, 
      void(*)(void*,sqlite3*,int eTextRep,const char*));
   sqlite3_collation_needed16(sqlite3*, void*,
      void(*)(void*,sqlite3*,int eTextRep,const void*));
</pre></blockquote>

<p>
The sqlite3_create_collation() function specifies a collating sequence name
and a comparison function to implement that collating sequence.  The
comparison function is only used for comparing text values.  The eTextRep
parameter is one of SQLITE3_UTF8, SQLITE3_UTF16LE, SQLITE3_UTF16BE, or
SQLITE3_ANY to specify which text representation the comparison function works
with.  Separate comparison functions can exist for the same collating
sequence for each of the UTF-8, UTF-16LE and UTF-16BE text representations.
The sqlite3_create_collation16() works like sqlite3_create_collation() except
that the collation name is specified in UTF-16 host byte order instead of
in UTF-8.
</p>

<p>
The sqlite3_collation_needed() routine registers a callback which the
database engine will invoke if it encounters an unknown collating sequence.
The callback can look up an appropriate comparison function and invoke
sqlite3_create_collation() as needed.  The fourth parameter to the callback
is the name of the collating sequence in UTF-8.  For sqlite3_collation_needed16()
the callback sends the collating sequence name in UTF-16 host byte order.
</p>
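
<p>
A minimal sketch of a hypothetical collating sequence named REVERSE,
which simply sorts text in descending byte order, might look like
this:
</p>

<blockquote><pre>
static int reverseCompare(void *pArg, int n1, const void *p1, int n2, const void *p2){
  const unsigned char *a = (const unsigned char*)p1;
  const unsigned char *b = (const unsigned char*)p2;
  int i = 0;
  while( i!=n1 && i!=n2 ){
    int c = (int)a[i] - (int)b[i];
    if( c!=0 ) return -c;      /* reverse of the usual byte ordering */
    i++;
  }
  return n2 - n1;              /* shorter strings sort after longer ones */
}

/* register it for UTF-8 text; it can then be named in a COLLATE clause */
sqlite3_create_collation(db, "REVERSE", SQLITE3_UTF8, 0, reverseCompare);
</pre></blockquote>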
}
footer $rcsid

--- NEW FILE: capi3ref.tcl ---
set rcsid {$Id: capi3ref.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {C/C++ Interface For SQLite Version 3}
puts {
<h2>C/C++ Interface For SQLite Version 3</h2>
}

proc api {name prototype desc {notused x}} {
  global apilist
  if {$name==""} {
    regsub -all {sqlite3_[a-z0-9_]+\(} $prototype \
      {[lappend name [string trimright & (]]} x1
    subst $x1
  }
  lappend apilist [list $name $prototype $desc]
}

api {result-codes} {
#define SQLITE_OK           0   /* Successful result */
[...1076 lines suppressed...]
  set i $name_to_idx($name)
  if {[info exists done($i)]} continue
  set done($i) 1
  foreach {namelist prototype desc} [lindex $apilist $i] break
  foreach name $namelist {
    puts "<a name=\"$name\">"
  }
  puts "<p><hr></p>"
  puts "<blockquote><pre>"
  regsub "^( *\n)+" $prototype {} p2
  regsub "(\n *)+\$" $p2 {} p3
  puts $p3
  puts "</pre></blockquote>"
  regsub -all {\[} $desc {\[} desc
  regsub -all {sqlite3_[a-z0-9_]+} $desc "\[resolve_name $name &\]" d2
  regsub -all "\n( *\n)+" [subst $d2] "</p>\n\n<p>" d3
  puts "<p>$d3</p>"
}

footer $rcsid

--- NEW FILE: changes.tcl ---
#
# Run this script to generate a changes.html output file
#
source common.tcl
header {SQLite changes}
puts {
<p>
This page provides a high-level summary of changes to SQLite.
For more detail, refer to the checkin logs generated by
CVS at
<a href="http://www.sqlite.org/cvstrac/timeline">
http://www.sqlite.org/cvstrac/timeline</a>.
</p>

<DL>
}


proc chng {date desc} {
[...1160 lines suppressed...]
but it uses Unix shell globbing wildcards instead of the '%' 
and '_' wildcards of SQL.</li>
<li>Added the <B>COPY</b> command patterned after 
<a href="http://www.postgresql.org/">PostgreSQL</a> so that SQLite
can now read the output of the <b>pg_dump</b> database dump utility
of PostgreSQL.</li>
<li>Added a <B>VACUUM</B> command that calls the 
<b>gdbm_reorganize()</b> function on the underlying database
files.</li>
<li>And many, many bug fixes...</li>
}

chng {2000 May 29} {
<li>Initial Public Release of Alpha code</li>
}

puts {
</DL>
}
footer {$Id:}

--- NEW FILE: common.tcl ---
# This file contains TCL procedures used to generate standard parts of
# web pages.
#

proc header {txt} {
  puts "<html><head><title>$txt</title></head>"
  puts \
{<body bgcolor="white" link="#50695f" vlink="#508896">
<table width="100%" border="0">
<tr><td valign="top"><img src="sqlite.gif"></td>
<td width="100%"></td>
<td valign="bottom">
<ul>
<li><a href="http://www.sqlite.org/cvstrac/tktnew">bugs</a></li>
<li><a href="changes.html">changes</a></li>
<li><a href="download.html#cvs">cvs&nbsp;repository</a></li>
<li><a href="docs.html">documentation</a></li>
</ul>
</td>
<td width="10"></td>
<td valign="bottom">
<ul>
<li><a href="download.html">download</a></li>
<li><a href="faq.html">faq</a></li>
<li><a href="index.html">home</a></li>
<li><a href="support.html">mailing&nbsp;list</a></li>
</ul>
</td>
<td width="10"></td>
<td valign="bottom">
<ul>
<li><a href="support.html">support</a></li>
<li><a href="lang.html">syntax</a></li>
<li><a href="http://www.sqlite.org/cvstrac/timeline">timeline</a></li>
<li><a href="http://www.sqlite.org/cvstrac/wiki">wiki</a></li>
</ul>
</td>
</tr></table>
<table width="100%">
<tr><td bgcolor="#80a796"></td></tr>
</table>}
}

proc footer {{rcsid {}}} {
  puts {
<table width="100%">
<tr><td bgcolor="#80a796"></td></tr>
</table>}
  set date [lrange $rcsid 3 4]
  if {$date!=""} {
    puts "<small><i>This page last modified on $date</i></small>"
  }
  puts {</body></html>}
}

--- NEW FILE: conflict.tcl ---
#
# Run this Tcl script to generate the constraint.html file.
#
set rcsid {$Id: conflict.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $ }
source common.tcl
header {Constraint Conflict Resolution in SQLite}
puts {
<h1>Constraint Conflict Resolution in SQLite</h1>

<p>
In most SQL databases, if you have a UNIQUE constraint on
a table and you try to do an UPDATE or INSERT that violates
the constraint, the database will abort the operation in
progress, back out any prior changes associated with
the UPDATE or INSERT command, and return an error.
This is the default behavior of SQLite.
Beginning with version 2.3.0, though, SQLite allows you to
define alternative ways for dealing with constraint violations.
This article describes those alternatives and how to use them.
</p>

<h2>Conflict Resolution Algorithms</h2>

<p>
SQLite defines five constraint conflict resolution algorithms
as follows:
</p>

<dl>
<dt><b>ROLLBACK</b></dt>
<dd><p>When a constraint violation occurs, an immediate ROLLBACK
occurs, thus ending the current transaction, and the command aborts
with a return code of SQLITE_CONSTRAINT.  If no transaction is
active (other than the implied transaction that is created on every
command) then this algorithm works the same as ABORT.</p></dd>

<dt><b>ABORT</b></dt>
<dd><p>When a constraint violation occurs, the command backs out
any prior changes it might have made and aborts with a return code
of SQLITE_CONSTRAINT.  But no ROLLBACK is executed so changes
from prior commands within the same transaction
are preserved.  This is the default behavior for SQLite.</p></dd>

<dt><b>FAIL</b></dt>
<dd><p>When a constraint violation occurs, the command aborts with a
return code SQLITE_CONSTRAINT.  But any changes to the database that
the command made prior to encountering the constraint violation
are preserved and are not backed out.  For example, if an UPDATE
statement encounters a constraint violation on the 100th row that
it attempts to update, then the first 99 row changes are preserved
but changes to rows 100 and beyond never occur.</p></dd>

<dt><b>IGNORE</b></dt>
<dd><p>When a constraint violation occurs, the one row that contains
the constraint violation is not inserted or changed.  But the command
continues executing normally.  Other rows before and after the row that
contained the constraint violation continue to be inserted or updated
normally.  No error is returned.</p></dd>

<dt><b>REPLACE</b></dt>
<dd><p>When a UNIQUE constraint violation occurs, the pre-existing row
that caused the constraint violation is removed prior to inserting
or updating the current row.  Thus the insert or update always occurs.
The command continues executing normally.  No error is returned.</p></dd>
</dl>

<h2>Why So Many Choices?</h2>

<p>SQLite provides multiple conflict resolution algorithms for a
couple of reasons.  First, SQLite tries to be roughly compatible with as
many other SQL databases as possible, but different SQL database
engines exhibit different conflict resolution strategies.  For
example, PostgreSQL always uses ROLLBACK, Oracle always uses ABORT, and
MySQL usually uses FAIL but can be instructed to use IGNORE or REPLACE.
By supporting all five alternatives, SQLite provides maximum
portability.</p>

<p>Another reason for supporting multiple algorithms is that sometimes
it is useful to use an algorithm other than the default.
Suppose, for example, you are
inserting 1000 records into a database, all within a single
transaction, but one of those records is malformed and causes
a constraint error.  Under PostgreSQL or Oracle, none of the
1000 records would get inserted.  In MySQL, some subset of the
records that appeared before the malformed record would be inserted
but the rest would not.  Neither behavior is especially helpful.
What you really want is to use the IGNORE algorithm to insert
all but the malformed record.</p>
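
<p>
As a minimal sketch (using the version-3 C interface and a
hypothetical table for illustration), the OR IGNORE conflict clause
lets the rest of the batch proceed while any offending row is
silently skipped:
</p>

<blockquote><pre>
sqlite3_exec(db, "BEGIN;", 0, 0, 0);
/* a row that violates a UNIQUE constraint is skipped, not fatal */
sqlite3_exec(db, "INSERT OR IGNORE INTO t1(a) VALUES('record 1');", 0, 0, 0);
sqlite3_exec(db, "INSERT OR IGNORE INTO t1(a) VALUES('record 2');", 0, 0, 0);
sqlite3_exec(db, "COMMIT;", 0, 0, 0);
</pre></blockquote>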

}
footer $rcsid

--- NEW FILE: copyright-release.html ---
<html>
<body bgcolor="white">
<h1 align="center">
Copyright Release for<br>
Contributions To SQLite
</h1>

<p>
SQLite is software that implements an embeddable SQL database engine.
SQLite is available for free download from http://www.sqlite.org/.
The principal author and maintainer of SQLite has disclaimed all
copyright interest in his contributions to SQLite
and thus released his contributions into the public domain.
In order to keep the SQLite software unencumbered by copyright
claims, the principal author asks others who may from time to
time contribute changes and enhancements to likewise disclaim
their own individual copyright interest.
</p>

<p>
Because the SQLite software found at http://www.sqlite.org/ is in the
public domain, anyone is free to download the SQLite software
from that website, make changes to the software, use, distribute,
or sell the modified software, under either the original name or
under some new name, without any need to obtain permission, pay
royalties, acknowledge the original source of the software, or
in any other way compensate, identify, or notify the original authors.  
Nobody is in any way compelled to contribute their SQLite changes and 
enhancements back to the SQLite website.  This document concerns
only changes and enhancements to SQLite that are intentionally and
deliberately contributed back to the SQLite website.  
</p>

<p>
For the purposes of this document, "SQLite software" shall mean any
computer source code, documentation, makefiles, test scripts, or
other information that is published on the SQLite website, 
http://www.sqlite.org/.  Precompiled binaries are excluded from
the definition of "SQLite software" in this document because the
process of compiling the software may introduce information from
outside sources which is not properly a part of SQLite.
</p>

<p>
The header comments on the SQLite source files exhort the reader to
share freely and to never take more than one gives.
In the spirit of that exhortation I make the following declarations:
</p>

<ol>
<li><p>
I dedicate to the public domain 
any and all copyright interest in the SQLite software that
was publicly available on the SQLite website (http://www.sqlite.org/) prior
to the date of the signature below and any changes or enhancements to
the SQLite software 
that I may cause to be published on that website in the future.
I make this dedication for the benefit of the public at large and
to the detriment of my heirs and successors.  I intend this
dedication to be an overt act of relinquishment in perpetuity of
all present and future rights to the SQLite software under copyright
law.
</p></li>

<li><p>
To the best of my knowledge and belief, the changes and enhancements that
I have contributed to SQLite are either originally written by me
or are derived from prior works which I have verified are also
in the public domain and are not subject to claims of copyright
by other parties.
</p></li>

<li><p>
To the best of my knowledge and belief, no individual, business, organization,
government, or other entity has any copyright interest
in the SQLite software as it existed on the
SQLite website as of the date on the signature line below.
</p></li>

<li><p>
I agree never to publish any additional information
to the SQLite website (by CVS, email, scp, FTP, or any other means) unless
that information is an original work of authorship by me or is derived from 
prior published versions of SQLite.
I agree never to copy and paste code into the SQLite code base from
other sources.
I agree never to publish on the SQLite website any information that
would violate a law or breach a contract.
</p></li>
</ol>

<p>
<table width="100%" cellpadding="0" cellspacing="0">
<tr>
<td width="60%" valign="top">
Signature:
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
</td><td valign="top" align="left">
Date:
</td></tr>
<tr><td colspan=2>
Name (printed):
</td>
</tr>
</table>
</body>
</html>

--- NEW FILE: copyright-release.pdf ---
(This appears to be a binary file; contents omitted.)

--- NEW FILE: copyright.tcl ---
set rcsid {$Id: copyright.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {SQLite Copyright}
puts {
<h2>SQLite Copyright</h2>

<p>
The original author of SQLite has dedicated the code to the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute
the original SQLite code, either in source code form or as a compiled binary,
for any purpose, commercial or non-commercial, and by any means.
</p>

<h2>Contributed Code</h2>

<p>
In order to keep SQLite completely free and unencumbered by copyright,
other contributors to the SQLite code base are asked to likewise dedicate
their contributions to the public domain.
If you want to send a patch or enhancement for possible inclusion in the
SQLite source tree, please accompany the patch with the following statement:
</p>

<blockquote><i>
The author or authors of this code dedicate any and all copyright interest
in this code to the public domain.  We make this dedication for the benefit
of the public at large and to the detriment of our heirs and successors.
We intend this dedication to be an overt act of relinquishment in
perpetuity of all present and future rights to this code under copyright law.
</i></blockquote>

<p>
Regrettably, as of 2003 October 20,
we will no longer be able to accept patches or changes to 
SQLite that are not accompanied by a statement such as the above.
In addition, if you make
changes or enhancements as an employee, then a simple statement such as the
above is insufficient.  You must also send by surface mail a copyright release
signed by a company officer.
A signed original of the copyright release should be mailed to:</p>

<blockquote>
Hwaci<br>
6200 Maple Cove Lane<br>
Charlotte, NC 28269<br>
USA
</blockquote>

<p>
A template copyright release is available
in <a href="copyright-release.pdf">PDF</a> or
<a href="copyright-release.html">HTML</a>.
You can use this release to make future changes.  If you have contributed
changes or enhancements to SQLite in the past, and have not already done
so, you are invited to complete and sign a copy of the template and mail
it to the address above.
</p>
}
footer $rcsid

--- NEW FILE: datatype3.tcl ---
set rcsid {$Id: datatype3.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {Datatypes In SQLite Version 3}
puts {
<h2>Datatypes In SQLite Version 3</h2>

<h3>1. Storage Classes</h3>

<P>Version 2 of SQLite stores all column values as ASCII text.
Version 3 enhances this by providing the ability to store integer and
real numbers in a more compact format and the capability to store
BLOB data.</P>

<P>Each value stored in an SQLite database (or manipulated by the
database engine) has one of the following storage classes:</P>
<UL>
	<LI><P><B>NULL</B>. The value is a NULL value.</P>
	<LI><P><B>INTEGER</B>. The value is a signed integer, stored in 1,
	2, 3, 4, 6, or 8 bytes depending on the magnitude of the value.</P>
	<LI><P><B>REAL</B>. The value is a floating point value, stored as
	an 8-byte IEEE floating point number.</P>
	<LI><P><B>TEXT</B>. The value is a text string, stored using the
	database encoding (UTF-8, UTF-16BE or UTF-16LE).</P>
	<LI><P><B>BLOB</B>. The value is a blob of data, stored exactly as
	it was input.</P>
</UL>

<P>As in SQLite version 2, any column in a version 3 database except an INTEGER
PRIMARY KEY may be used to store any type of value. The exception to
this rule is described below under 'Strict Affinity Mode'.</P>

<P>All values supplied to SQLite, whether as literals embedded in SQL
statements or as values bound to pre-compiled SQL statements,
are assigned a storage class before the SQL statement is executed.
Under circumstances described below, the
database engine may convert values between numeric storage classes
(INTEGER and REAL) and TEXT during query execution. 
</P>

<P>Storage classes are initially assigned as follows:</P>
<UL>
	<LI><P>Values specified as literals as part of SQL statements are
	assigned storage class TEXT if they are enclosed by single or double
	quotes, INTEGER if the literal is specified as an unquoted number
	with no decimal point or exponent, REAL if the literal is an
	unquoted number with a decimal point or exponent and NULL if the
	value is a NULL. Literals with storage class BLOB are specified
        using the X'ABCD' notation.</P>
	<LI><P>Values supplied using the sqlite3_bind_* APIs are assigned
	the storage class that most closely matches the native type bound
	(i.e. sqlite3_bind_blob() binds a value with storage class BLOB).</P>
</UL>
<P>The storage class of a value that is the result of an SQL scalar
operator depends on the outermost operator of the expression.
User-defined functions may return values with any storage class. It
is not generally possible to determine the storage class of the
result of an expression at compile time.</P>
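<P>As a rough illustration of these rules, the built-in typeof() function
reports the storage class assigned to a literal or expression (the exact
labels printed may vary slightly between releases):</P>

<blockquote><PRE>
SELECT typeof(500), typeof(500.0), typeof('500'), typeof(X'ABCD'), typeof(NULL);
-- integer|real|text|blob|null
</PRE></blockquote>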

<h3>2. Column Affinity</h3>

<p>
In SQLite version 3, the type of a value is associated with the value
itself, not with the column or variable in which the value is stored.
(This is sometimes called
<a href="http://www.cliki.net/manifest%20type%20system">
manifest typing</a>.)
All other SQL database engines that we are aware of use the more
restrictive system of static typing where the type is associated with
the container, not the value.
</p>

<p>
In order to maximize compatibility between SQLite and other database
engines, SQLite supports the concept of "type affinity" on columns.
The type affinity of a column is the recommended type for data stored
in that column.  The key here is that the type is recommended, not
required.  Any column can still store any type of data, in theory.
It is just that some columns, given the choice, will prefer to use
one storage class over another.  The preferred storage class for
a column is called its "affinity".
</p>

<P>Each column in an SQLite 3 database is assigned one of the
following type affinities:</P>
<UL>
	<LI>TEXT</LI>
	<LI>NUMERIC</LI>
	<LI>INTEGER</LI>
	<LI>NONE</LI>
</UL>

<P>A column with TEXT affinity stores all data using storage classes
NULL, TEXT or BLOB. If numerical data is inserted into a column with
TEXT affinity it is converted to text form before being stored.</P>

<P>A column with NUMERIC affinity may contain values using all five
storage classes. When text data is inserted into a NUMERIC column, an
attempt is made to convert it to an integer or real number before it
is stored. If the conversion is successful, then the value is stored
using the INTEGER or REAL storage class. If the conversion cannot be
performed the value is stored using the TEXT storage class. No
attempt is made to convert NULL or blob values.</P>

<P>A column that uses INTEGER affinity behaves in the same way as a
column with NUMERIC affinity, except that if a real value with no
floating point component (or text value that converts to such) is
inserted it is converted to an integer and stored using the INTEGER
storage class.</P>

<P>A column with affinity NONE does not prefer one storage class over
another.  It makes no attempt to coerce data before
it is inserted.</P>

<h4>2.1 Determination Of Column Affinity</h4>

<P>The type affinity of a column is determined by the declared type
of the column, according to the following rules:</P>
<OL>
	<LI><P>If the datatype contains the string &quot;INT&quot; then it
	is assigned INTEGER affinity.</P>

	<LI><P>If the datatype of the column contains any of the strings
	&quot;CHAR&quot;, &quot;CLOB&quot;, or &quot;TEXT&quot; then that
	column has TEXT affinity. Notice that the type VARCHAR contains the
	string &quot;CHAR&quot; and is thus assigned TEXT affinity.</P>

	<LI><P>If the datatype for a column
         contains the string &quot;BLOB&quot; or if
        no datatype is specified then the column has affinity NONE.</P>

	<LI><P>Otherwise, the affinity is NUMERIC.</P>
</OL>

<P>If a table is created using a "CREATE TABLE &lt;table&gt; AS
SELECT..." statement, then all columns have no datatype specified
and they are given no affinity.</P>

<h4>2.2 Column Affinity Example</h4>

<blockquote>
<PRE>CREATE TABLE t1(
    t  TEXT,
    nu NUMERIC, 
    i  INTEGER,
    no BLOB
);

-- Storage classes for the following row:
-- TEXT, REAL, INTEGER, TEXT
INSERT INTO t1 VALUES('500.0', '500.0', '500.0', '500.0');

-- Storage classes for the following row:
-- TEXT, REAL, INTEGER, REAL
INSERT INTO t1 VALUES(500.0, 500.0, 500.0, 500.0);
</PRE>
</blockquote>

<h3>3. Comparison Expressions</h3>

<P>Like SQLite version 2, version 3
features the binary comparison operators '=',
'&lt;', '&lt;=', '&gt;=' and '!=', an operation to test for set
membership, 'IN', and the ternary comparison operator 'BETWEEN'.</P>
<P>The results of a comparison depend on the storage classes of the
two values being compared, according to the following rules:</P>
<UL>
	<LI><P>A value with storage class NULL is considered less than any
	other value (including another value with storage class NULL).</P>

	<LI><P>An INTEGER or REAL value is less than any TEXT or BLOB value.
	When an INTEGER or REAL is compared to another INTEGER or REAL, a
	numerical comparison is performed.</P>

	<LI><P>A TEXT value is less than a BLOB value. When two TEXT values
	are compared, the C library function memcmp() is usually used to
	determine the result. However, this can be overridden, as described
	under 'User-defined Collation Sequences' below.</P>

	<LI><P>When two BLOB values are compared, the result is always
	determined using memcmp().</P>
</UL>

<P>SQLite may attempt to convert values between the numeric storage
classes (INTEGER and REAL) and TEXT before performing a comparison.
For binary comparisons, this is done in the cases enumerated below.
The term "expression" used in the bullet points below means any
SQL scalar expression or literal other than a column value.</P>
<UL>
	<LI><P>When a column value is compared to the result of an
	expression, the affinity of the column is applied to the result of
	the expression before the comparison takes place.</P>

	<LI><P>When two column values are compared, if one column has
	INTEGER or NUMERIC affinity and the other does not, the NUMERIC
	affinity is applied to any values with storage class TEXT extracted
	from the non-NUMERIC column.</P>

	<LI><P>When the results of two expressions are compared, no
        conversions occur.  The results are compared as is.  If a string
        is compared to a number, the number will always be less than the
        string.</P>
</UL>

<P>
In SQLite, the expression "a BETWEEN b AND c" is equivalent to "a &gt;= b
AND a &lt;= c", even if this means that different affinities are applied to
'a' in each of the comparisons required to evaluate the expression.
</P>

<P>Expressions of the type "a IN (SELECT b ....)" are handled by the three
rules enumerated above for binary comparisons (e.g. in a
similar manner to "a = b"). For example if 'b' is a column value
and 'a' is an expression, then the affinity of 'b' is applied to 'a'
before any comparisons take place.</P>

<P>SQLite treats the expression "a IN (x, y, z)" as equivalent to "a = x OR
a = y OR a = z".
</P>

<h4>3.1 Comparison Example</h4>

<blockquote>
<PRE>
CREATE TABLE t1(
    a TEXT,
    b NUMERIC,
    c BLOB
);

-- Storage classes for the following row:
-- TEXT, REAL, TEXT
INSERT INTO t1 VALUES('500', '500', '500');

-- 60 and 40 are converted to '60' and '40' and values are compared as TEXT.
SELECT a &lt; 60, a &lt; 40 FROM t1;
1|0

-- Comparisons are numeric. No conversions are required.
SELECT b &lt; 60, b &lt; 600 FROM t1;
0|1

-- Both 60 and 600 (numeric storage class) are less than '500'
-- (storage class TEXT).
SELECT c &lt; 60, c &lt; 600 FROM t1;
0|0
</PRE>
</blockquote>
<h3>4. Operators</h3>

<P>All mathematical operators (which is to say, all operators other
than the concatenation operator &quot;||&quot;) apply NUMERIC
affinity to all operands prior to being carried out. If one or both
operands cannot be converted to NUMERIC then the result of the
operation is NULL.</P>

<P>For the concatenation operator, TEXT affinity is applied to both
operands. If either operand cannot be converted to TEXT (because it
is NULL or a BLOB) then the result of the concatenation is NULL.</P>
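<P>A small sketch of these rules (results shown as comments):</P>

<blockquote><PRE>
-- NUMERIC affinity is applied to both text operands, so the
-- addition is carried out numerically.
SELECT '5' + '10';      -- 15

-- TEXT affinity is applied to both operands of ||.
SELECT 5 || 10;         -- '510'

-- A NULL operand cannot be converted to TEXT, so the result is NULL.
SELECT 'abc' || NULL;   -- NULL
</PRE></blockquote>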

<h3>5. Sorting, Grouping and Compound SELECTs</h3>

<P>When values are sorted by an ORDER BY clause, values with storage
class NULL come first, followed by INTEGER and REAL values
interspersed in numeric order, followed by TEXT values usually in
memcmp() order, and finally BLOB values in memcmp() order. No storage
class conversions occur before the sort.</P>

<P>When grouping values with the GROUP BY clause, values with
different storage classes are considered distinct, except for INTEGER
and REAL values, which are considered equal if they are numerically
equal. No affinities are applied to any values as the result of a
GROUP BY clause.</P>
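<P>A brief illustration of the sorting and grouping rules above (the table
and values here are hypothetical):</P>

<blockquote><PRE>
CREATE TABLE t5(x);
INSERT INTO t5 VALUES(10);     -- storage class INTEGER
INSERT INTO t5 VALUES(10.0);   -- storage class REAL
INSERT INTO t5 VALUES('10');   -- storage class TEXT
INSERT INTO t5 VALUES(NULL);   -- storage class NULL

-- The NULL sorts first, the two numeric values next, and the TEXT value last.
SELECT x FROM t5 ORDER BY x;

-- 10 and 10.0 are numerically equal and fall into a single group;
-- the TEXT value '10' forms a group of its own.
SELECT x, count(*) FROM t5 GROUP BY x;
</PRE></blockquote>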

<P>The compound SELECT operators UNION,
INTERSECT and EXCEPT perform implicit comparisons between values.
Before these comparisons are performed an affinity may be applied to
each value. The same affinity, if any, is applied to all values that
may be returned in a single column of the compound SELECT result set.
The affinity applied is the affinity of the column returned by the
left-most component SELECT that has a column value (and not some
other kind of expression) in that position. If for a given compound
SELECT column none of the component SELECTs return a column value, no
affinity is applied to the values from that column before they are
compared.</P>

<h3>6. Other Affinity Modes</h3>

<P>The above sections describe the operation of the database engine
in 'normal' affinity mode. SQLite version 3 will feature two other affinity
modes, as follows:</P>
<UL>
	<LI><P><B>Strict affinity</B> mode. In this mode if a conversion
	between storage classes is ever required, the database engine
	returns an error and the current statement is rolled back.</P>

	<LI><P><B>No affinity</B> mode. In this mode no conversions between
	storage classes are ever performed. Comparisons between values of
	different storage classes (except for INTEGER and REAL) are always
	false.</P>
</UL>

<h3>7. User-defined Collation Sequences</h3>

<p>
By default, when SQLite compares two text values, the result of the
comparison is determined using memcmp(), regardless of the encoding of the
string. SQLite v3 provides the ability for users to supply arbitrary
comparison functions, known as user-defined collation sequences, to be used
instead of memcmp().
</p>  
<p>
Aside from the default collation sequence BINARY, implemented using
memcmp(), SQLite features two extra built-in collation sequences 
intended for testing purposes, NOCASE and REVERSE:
</p>  
<UL>
	<LI><b>BINARY</b> - Compares string data using memcmp(), regardless
                            of text encoding.</LI>
	<LI><b>REVERSE</b> - Collate in the reverse order to BINARY. </LI>
	<LI><b>NOCASE</b> - The same as BINARY, except the 26 upper-case
			    characters used by the English language are
			    folded to their lower-case equivalents before
                            the comparison is performed.</LI>
</UL>


<h4>7.1 Assigning Collation Sequences from SQL</h4>

<p>
Each column of each table has a default collation type. If a collation type
other than BINARY is required, a COLLATE clause is specified as part of the
<a href="lang.html#createtable">column definition</a> to define it. 
</p>  

<p>
Whenever two text values are compared by SQLite, a collation sequence is
used to determine the results of the comparison according to the following
rules. Sections 3 and 5 of this document describe the circumstances under
which such a comparison takes place.
</p>  

<p>
For binary comparison operators (=, &lt;, &gt;, &lt;= and &gt;=), if either operand is a
column, then the default collation type of the column determines the
collation sequence to use for the comparison. If both operands are columns,
then the collation type for the left operand determines the collation
sequence used. If neither operand is a column, then the BINARY collation
sequence is used.
</p>  

<p>
The expression "x BETWEEN y and z" is equivalent to "x &gt;= y AND x &lt;=
z". The expression "x IN (SELECT y ...)" is handled in the same way as the
expression "x = y" for the purposes of determining the collation sequence
to use. The collation sequence used for expressions of the form "x IN (y, z
...)" is the default collation type of x if x is a column, or BINARY
otherwise.
</p>  

<p>
An <a href="lang.html#select">ORDER BY</a> clause that is part of a SELECT
statement may be assigned a collation sequence to be used for the sort
operation explicitly. In this case the explicit collation sequence is
always used.  Otherwise, if the expression sorted by an ORDER BY clause is
a column, then the default collation type of the column is used to
determine sort order. If the expression is not a column, then the BINARY
collation sequence is used.
</p>  

<h4>7.2 Collation Sequences Example</h4>
<p>
The examples below identify the collation sequences that would be used to
determine the results of text comparisons that may be performed by various
SQL statements. Note that a text comparison may not be required, and no
collation sequence used, in the case of numeric, blob or NULL values.
</p>
<blockquote>
<PRE>
CREATE TABLE t1(
    a,                 -- default collation type BINARY
    b COLLATE BINARY,  -- default collation type BINARY
    c COLLATE REVERSE, -- default collation type REVERSE
    d COLLATE NOCASE   -- default collation type NOCASE
);

-- Text comparison is performed using the BINARY collation sequence.
SELECT (a = b) FROM t1;

-- Text comparison is performed using the NOCASE collation sequence.
SELECT (a = d) FROM t1;

-- Text comparison is performed using the BINARY collation sequence.
SELECT (d = a) FROM t1;

-- Text comparison is performed using the REVERSE collation sequence.
SELECT ('abc' = c) FROM t1;

-- Text comparison is performed using the REVERSE collation sequence.
SELECT (c = 'abc') FROM t1;

-- Grouping is performed using the NOCASE collation sequence (i.e. values
-- 'abc' and 'ABC' are placed in the same group).
SELECT count(*) FROM t1 GROUP BY d;

-- Grouping is performed using the BINARY collation sequence.
SELECT count(*) FROM t1 GROUP BY (d || '');

-- Sorting is performed using the REVERSE collation sequence.
SELECT * FROM t1 ORDER BY c;

-- Sorting is performed using the BINARY collation sequence.
SELECT * FROM t1 ORDER BY (c || '');

-- Sorting is performed using the NOCASE collation sequence.
SELECT * FROM t1 ORDER BY c COLLATE NOCASE;

</PRE>
</blockquote>

}
footer $rcsid

--- NEW FILE: datatypes.tcl ---
#
# Run this script to generate a datatypes.html output file
#
set rcsid {$Id: datatypes.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {Datatypes In SQLite version 2}
puts {
<h2>Datatypes In SQLite Version 2</h2>

<h3>1.0 &nbsp; Typelessness</h3>
<p>
SQLite is "typeless".  This means that you can store any
kind of data you want in any column of any table, regardless of the
declared datatype of that column.  
(See the one exception to this rule in section 2.0 below.)
This behavior is a feature, not
a bug.  A database is supposed to store and retrieve data and it 
should not matter to the database what format that data is in.
The strong typing system found in most other SQL engines and
codified in the SQL language spec is a misfeature -
it is an example of the implementation showing through into the
interface.  SQLite seeks to overcome this misfeature by allowing
you to store any kind of data into any kind of column and by
allowing flexibility in the specification of datatypes.
</p>

<p>
A datatype to SQLite is any sequence of zero or more names
optionally followed by a parenthesized list of one or two
signed integers.  Notice in particular that a datatype may
be <em>zero</em> or more names.  That means that an empty
string is a valid datatype as far as SQLite is concerned.
So you can declare tables where the datatype of each column
is left unspecified, like this:
</p>

<blockquote><pre>
CREATE TABLE ex1(a,b,c);
</pre></blockquote>

<p>
Even though SQLite allows the datatype to be omitted, it is
still a good idea to include it in your CREATE TABLE statements,
since the data type often serves as a good hint to other
programmers about what you intend to put in the column. And
if you ever port your code to another database engine, that
other engine will probably require a datatype of some kind.
SQLite accepts all the usual datatypes.  For example:
</p>

<blockquote><pre>
CREATE TABLE ex2(
  a VARCHAR(10),
  b NVARCHAR(15),
  c TEXT,
  d INTEGER,
  e FLOAT,
  f BOOLEAN,
  g CLOB,
  h BLOB,
  i TIMESTAMP,
  j NUMERIC(10,5),
  k VARYING CHARACTER (24),
  l NATIONAL VARYING CHARACTER(16)
);
</pre></blockquote>

<p>
And so forth.  Basically any sequence of names optionally followed by 
one or two signed integers in parentheses will do.
</p>

<h3>2.0 &nbsp; The INTEGER PRIMARY KEY</h3>

<p>
One exception to the typelessness of SQLite is a column whose type
is INTEGER PRIMARY KEY.  (And you must use "INTEGER" not "INT".
A column of type INT PRIMARY KEY is typeless just like any other.)
INTEGER PRIMARY KEY columns must contain a 32-bit signed integer.  Any
attempt to insert non-integer data will result in an error.
</p>

<p>
INTEGER PRIMARY KEY columns can be used to implement the equivalent
of AUTOINCREMENT.  If you try to insert a NULL into an INTEGER PRIMARY
KEY column, the column will actually be filled with an integer that is
one greater than the largest key already in the table.  Or if the
largest key is 2147483647, then the column will be filled with a
random integer.  Either way, the INTEGER PRIMARY KEY column will be
assigned a unique integer.  You can retrieve this integer using
the <b>sqlite_last_insert_rowid()</b> API function or using the
<b>last_insert_rowid()</b> SQL function in a subsequent SELECT statement.
</p>
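<p>
A brief sketch of this behavior (the table name here is hypothetical):
</p>

<blockquote><pre>
CREATE TABLE ex3(id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO ex3 VALUES(NULL, 'alice');   -- id becomes 1
INSERT INTO ex3 VALUES(NULL, 'bob');     -- id becomes 2
SELECT last_insert_rowid();              -- returns 2
</pre></blockquote>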

<h3>3.0 &nbsp; Comparison and Sort Order</h3>

<p>
SQLite is typeless for the purpose of deciding what data is allowed
to be stored in a column.  But some notion of type comes into play
when sorting and comparing data.  For these purposes, a column or
an expression can be one of two types: <b>numeric</b> and <b>text</b>.
The sort or comparison may give different results depending on which
type of data is being sorted or compared.
</p>

<p>
If data is of type <b>text</b> then the comparison is determined by
the standard C data comparison functions <b>memcmp()</b> or
<b>strcmp()</b>.  The comparison looks at bytes from two inputs one
by one and returns the first non-zero difference.
Strings are '\000' terminated so shorter
strings sort before longer strings, as you would expect.
</p>

<p>
For numeric data, this situation is more complex.  If both inputs
look like well-formed numbers, then they are converted
into floating point values using <b>atof()</b> and compared numerically.
If one input is not a well-formed number but the other is, then the
number is considered to be less than the non-number.  If neither inputs
is a well-formed number, then <b>strcmp()</b> is used to do the
comparison.
</p>

<p>
Do not be confused by the fact that a column might have a "numeric"
datatype.  This does not mean that the column can contain only numbers.
It merely means that if the column does contain a number, that number
will sort in numerical order.
</p>

<p>
For both text and numeric values, NULL sorts before any other value.
A comparison of any value against NULL using operators like "&lt;" or
"&gt;=" is always false.
</p>
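<p>
For example, given a hypothetical table t6 whose column "a" contains a NULL:
</p>

<blockquote><pre>
CREATE TABLE t6(a);
INSERT INTO t6 VALUES(NULL);
INSERT INTO t6 VALUES(7);

SELECT a FROM t6 ORDER BY a;       -- the NULL row sorts first
SELECT a FROM t6 WHERE a &lt; 10;     -- returns only 7; the NULL row never matches
SELECT a FROM t6 WHERE a IS NULL;  -- use IS NULL to find the NULL row
</pre></blockquote>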

<h3>4.0 &nbsp; How SQLite Determines Datatypes</h3>

<p>
For SQLite version 2.6.3 and earlier, all values used the numeric datatype.
The text datatype appears in version 2.7.0 and later.  In the sequel it
is assumed that you are using version 2.7.0 or later of SQLite.
</p>

<p>
For an expression, the datatype of the result is often determined by
the outermost operator.  For example, arithmetic operators ("+", "*", "%")
always return a numeric result.  The string concatenation operator
("||") returns a text result.  And so forth.  If you are ever in doubt
about the datatype of an expression you can use the special <b>typeof()</b>
SQL function to determine what the datatype is.  For example:
</p>

<blockquote><pre>
sqlite&gt; SELECT typeof('abc'+123);
numeric
sqlite&gt; SELECT typeof('abc'||123);
text
</pre></blockquote>

<p>
For table columns, the datatype is determined by the type declaration
of the CREATE TABLE statement.  The datatype is text if and only if
the type declaration contains one or more of the following strings:
</p>

<blockquote>
BLOB<br>
CHAR<br>
CLOB<br>
TEXT
</blockquote>

<p>
The search for these strings in the type declaration is case insensitive,
of course.  If any of the above strings occur anywhere in the type
declaration, then the datatype of the column is text.  Notice that
the type "VARCHAR" contains "CHAR" as a substring so it is considered
text.</p>

<p>If none of the strings above occur anywhere in the type declaration,
then the datatype is numeric.  Note in particular that the datatype for columns
with an empty type declaration is numeric.
</p>
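<p>
As a sketch of these rules, consider the following hypothetical table.  Based
on the rules above, the <b>typeof()</b> function should report the datatype
of each column as shown in the comment:
</p>

<blockquote><pre>
CREATE TABLE ex4(
  p VARCHAR(20),   -- declaration contains "CHAR", so the datatype is text
  q NUMERIC(8),    -- no matching string, so the datatype is numeric
  r                -- an empty type declaration is also numeric
);
INSERT INTO ex4 VALUES('hello', 123, 456);
SELECT typeof(p), typeof(q), typeof(r) FROM ex4;
-- text|numeric|numeric
</pre></blockquote>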

<h3>5.0 &nbsp; Examples</h3>

<p>
Consider the following two command sequences:
</p>

<blockquote><pre>
CREATE TABLE t1(a INTEGER UNIQUE);        CREATE TABLE t2(b TEXT UNIQUE);
INSERT INTO t1 VALUES('0');               INSERT INTO t2 VALUES(0);
INSERT INTO t1 VALUES('0.0');             INSERT INTO t2 VALUES(0.0);
</pre></blockquote>

<p>In the sequence on the left, the second insert will fail.  In this case,
the strings '0' and '0.0' are treated as numbers since they are being 
inserted into a numeric column, and 0==0.0, which violates the uniqueness
constraint.  However, the second insert in the right-hand sequence works.  In
this case, the constants 0 and 0.0 are treated as strings, which means that
they are distinct.</p>

<p>SQLite always converts numbers into double-precision (64-bit) floats
for comparison purposes.  This means that a long sequence of digits that
differ only in insignificant digits will compare equal if they
are in a numeric column but will compare unequal if they are in a text
column.  We have:</p>

<blockquote><pre>
INSERT INTO t1                            INSERT INTO t2
   VALUES('12345678901234567890');           VALUES(12345678901234567890);
INSERT INTO t1                            INSERT INTO t2
   VALUES('12345678901234567891');           VALUES(12345678901234567891);
</pre></blockquote>

<p>As before, the second insert on the left will fail because the comparison
will convert both strings into floating-point numbers first, and the only
difference in the strings is in the 20th digit, which exceeds the resolution
of a 64-bit float.  In contrast, the second insert on the right will work
because in that case the values being inserted are strings and are
compared using memcmp().</p>

<p>
Numeric and text types make a difference for the DISTINCT keyword too:
</p>

<blockquote><pre>
CREATE TABLE t3(a INTEGER);               CREATE TABLE t4(b TEXT);
INSERT INTO t3 VALUES('0');               INSERT INTO t4 VALUES(0);
INSERT INTO t3 VALUES('0.0');             INSERT INTO t4 VALUES(0.0);
SELECT DISTINCT * FROM t3;                SELECT DISTINCT * FROM t4;
</pre></blockquote>

<p>
The SELECT statement on the left returns a single row since '0' and '0.0'
are treated as numbers and are therefore indistinct.  But the SELECT 
statement on the right returns two rows since 0 and 0.0 are treated
as strings, which are different.</p>
}
footer $rcsid

--- NEW FILE: docs.tcl ---
# This script generates the "docs.html" page that describes various
# sources of documentation available for SQLite.
#
set rcsid {$Id: docs.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {SQLite Documentation}
puts {
<h2>Available Documentation</h2>
<table width="100%" cellpadding="5">
}

proc doc {name url desc} {
  puts {<tr><td valign="top" align="right">}
  regsub -all { +} $name {\&nbsp;} name
  puts "<a href=\"$url\">$name</a></td>"
  puts {<td width="10"></td>}
  puts {<td valign="top" align="left">}
  puts $desc
  puts {</td></tr>}
}

doc {SQLite In 5 Minutes Or Less} {quickstart.html} {
  A very quick introduction to programming with SQLite.
}

doc {SQL Syntax} {lang.html} {
  This document describes the SQL language that is understood by
  SQLite.  
}

doc {Version 2 C/C++ API} {c_interface.html} {
  A description of the C/C++ interface bindings for SQLite through version 
  2.8.
}
doc {SQLite Version 3} {version3.html} {
  A summary of the changes between SQLite version 2.8 and SQLite version 3.0.
}
doc {Version 3 C/C++ API} {capi3.html} {
  A description of the C/C++ interface bindings for SQLite version 3.0.0
  and following.
}
doc {Version 3 C/C++ API<br>Reference} {capi3ref.html} {
  This document describes each API function separately.
}

doc {Tcl API} {tclsqlite.html} {
  A description of the TCL interface bindings for SQLite.
}

doc {Locking And Concurrency<br>In SQLite Version 3} {lockingv3.html} {
  A description of how the new locking code in version 3 increases
  concurrency and decreases the problem of writer starvation.
}

doc {Version 2 DataTypes } {datatypes.html} {
  A description of how SQLite version 2 handles SQL datatypes.
}
doc {Version 3 DataTypes } {datatype3.html} {
  SQLite version 3 introduces the concept of manifest typing, where the
  type of a value is associated with the value itself, not the column that
  it is stored in.
  This page describes data typing for SQLite version 3 in further detail.
}

doc {Release History} {changes.html} {
  A chronology of SQLite releases going back to version 1.0.0
}

doc {Null Handling} {nulls.html} {
  Different SQL database engines handle NULLs in different ways.  The
  SQL standards are ambiguous.  This document describes how SQLite handles
  NULLs in comparison with other SQL database engines.
}

doc {Copyright} {copyright.html} {
  SQLite is in the public domain.  This document describes what that means
  and the implications for contributors.
}

doc {Unsupported SQL} {omitted.html} {
  This page describes features of SQL that SQLite does not support.
}

doc {Speed Comparison} {speed.html} {
  The speed of version 2.7.6 of SQLite is compared against PostgreSQL and
  MySQL.
}

doc {Architecture} {arch.html} {
  An architectural overview of the SQLite library, useful for those who want
  to hack the code.
}

doc {VDBE Tutorial} {vdbe.html} {
  The VDBE is the subsystem within SQLite that does the actual work of
  executing SQL statements.  This page describes the principles of operation
  for the VDBE in SQLite version 2.7.  This is essential reading for anyone
  who wants to modify the SQLite sources.
}

doc {VDBE Opcodes} {opcode.html} {
  This document is an automatically generated description of the various
  opcodes that the VDBE understands.  Programmers can use this document as
  a reference to better understand the output of EXPLAIN listings from
  SQLite.
}

puts {</table>}
footer $rcsid

--- NEW FILE: download.tcl ---
#
# Run this TCL script to generate HTML for the download.html file.
#
set rcsid {$Id: download.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {SQLite Download Page}

puts {
<h2>SQLite Download Page</h2>
<table width="100%" cellpadding="5">
}

proc Product {pattern desc} {
  regsub VERSION $pattern {([0-9a-z._]+)} p2
  regsub VERSION $pattern {*} p3
  set flist [glob -nocomplain $p3]
  foreach file [lsort -dict $flist] {
    if {![regexp ^$p2\$ $file all version]} continue
    regsub -all _ $version . version
    set size [file size $file]
    puts "<tr><td width=\"10\"></td>"
    puts "<td valign=\"top\" align=\"right\">"
    puts "<a href=\"$file\">$file</a><br>($size bytes)</td>"
    puts "<td width=\"5\"></td>"
    regsub -all VERSION $desc $version d2
    puts "<td valign=\"top\">[string trim $d2]</td></tr>"
  }
}
cd doc

proc Heading {title} {
  puts "<tr><td colspan=4><big><b>$title</b></big></td></tr>"
}

Heading {Precompiled Binaries for Linux}

Product sqlite*-VERSION.bin.gz {
  A statically linked command-line program for accessing and modifying
  SQLite databases.
  See <a href="sqlite.html">the documentation</a> for additional information.
}

Product tclsqlite-VERSION.so.gz {
  Bindings for TCL.  You can import this shared library into either
  tclsh or wish to get SQLite database access from Tcl/Tk.
  See <a href="tclsqlite.html">the documentation</a> for details.
}

Product sqlite-VERSION.so.gz {
  A precompiled shared-library for Linux.  This is the same as
  <b>tclsqlite.so.gz</b> but without the TCL bindings.
}

Product sqlite-devel-VERSION-1.i386.rpm {
  RPM containing documentation, header files, and static library for
  SQLite version VERSION.
}
Product sqlite-VERSION-1.i386.rpm {
  RPM containing shared libraries and the <b>sqlite</b> command-line
  program for SQLite version VERSION.
}

Product sqlite_analyzer-VERSION.bin.gz {
  An analysis program for database files compatible with SQLite 
  version VERSION.
}

Heading {Precompiled Binaries For Windows}

Product sqlite-VERSION.zip {
  A command-line program for accessing and modifying SQLite databases.
  See <a href="sqlite.html">the documentation</a> for additional information.
}
Product tclsqlite-VERSION.zip {
  Bindings for TCL.  You can import this shared library into either
  tclsh or wish to get SQLite database access from Tcl/Tk.
  See <a href="tclsqlite.html">the documentation</a> for details.
}
Product sqlitedll-VERSION.zip {
  This is a DLL of the SQLite library without the TCL bindings.
  The only external dependency is MSVCRT.DLL.
}

Product sqlite_analyzer-VERSION.zip {
  An analysis program for database files compatible with SQLite version
  VERSION.
}


Heading {Source Code}

Product {sqlite-source-VERSION.zip} {
  This ZIP archive contains pure C source code for the SQLite library.
  Unlike the tarballs below, all of the preprocessing and automatic
  code generation has already been done on these C source files, so they
  can be processed directly with any ordinary C compiler.
  This file is provided as a service to
  MS-Windows users who lack the build support infrastructure of Unix.
}

Product {sqlite-VERSION.src.rpm} {
  An RPM containing complete source code for SQLite version VERSION
}

Product {sqlite-VERSION.tar.gz} {
  A tarball of the complete source tree for SQLite version VERSION
  including all of the documentation.
}

puts {
</table>

<a name="cvs">
<h3>Direct Access To The Sources Via Anonymous CVS</h3>

<p>
All SQLite source code is maintained in a 
<a href="http://www.cvshome.org/">CVS</a> repository that is
available for read-only access by anyone.  You can 
interactively view the
repository contents and download individual files
by visiting
<a href="http://www.sqlite.org/cvstrac/dir?d=sqlite">
http://www.sqlite.org/cvstrac/dir?d=sqlite</a>.
To access the repository directly, use the following
commands:
</p>

<blockquote><pre>
cvs -d :pserver:anonymous@www.sqlite.org:/sqlite login
cvs -d :pserver:anonymous@www.sqlite.org:/sqlite checkout sqlite
</pre></blockquote>

<p>
When the first command prompts you for a password, enter "anonymous".
</p>

<p>
To access the SQLite version 2.8 sources, begin by getting the 3.0
tree as described above.  Then update to the "version_2" branch
as follows:
</p>

<blockquote><pre>
cvs update -r version_2
</pre></blockquote>

}

footer $rcsid

--- NEW FILE: dynload.tcl ---
#
# Run this Tcl script to generate the dynload.html file.
#
set rcsid {$Id: dynload.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}

puts {<html>
<head>
  <title>How to build a dynamically loaded Tcl extension for SQLite</title>
</head>
<body bgcolor=white>
<h1 align=center>
How To Build A Dynamically Loaded Tcl Extension
</h1>}
puts {<p>
<i>This note was contributed by 
<a href="bsaunder@tampabay.rr.com.nospam">Bill Saunders</a>.  Thanks, Bill!</i>

<p>
To compile the SQLite Tcl extension into a dynamically loaded module 
I did the following:
</p>

<ol>
<li><p>Do a standard compile
(I had a dir called bld at the same level as sqlite  ie
        /root/bld
        /root/sqlite
I followed the directions and did a standard build in the bld
directory)</p></li>

<li><p>
Now do the following in the bld directory
<blockquote><pre>
gcc -shared -I. -lgdbm ../sqlite/src/tclsqlite.c libsqlite.a -o sqlite.so
</pre></blockquote></p></li>

<li><p>
This should produce the file sqlite.so in the bld directory</p></li>

<li><p>
Create a pkgIndex.tcl file that contains this line

<blockquote><pre>
package ifneeded sqlite 1.0 [list load [file join $dir sqlite.so]]
</pre></blockquote></p></li>

<li><p>
To use this put sqlite.so and pkgIndex.tcl in the same directory</p></li>

<li><p>
From that directory start wish</p></li>

<li><p>
Execute the following tcl command (tells tcl where to find loadable
modules)
<blockquote><pre>
lappend auto_path [exec pwd]
</pre></blockquote></p></li>

<li><p>
Load the package 
<blockquote><pre>
package require sqlite
</pre></blockquote></p></li>

<li><p>
Have fun....</p></li>
</ol>

</body></html>}

--- NEW FILE: faq.tcl ---
#
# Run this script to generate a faq.html output file
#
set rcsid {$Id: faq.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {SQLite Frequently Asked Questions}

set cnt 1
proc faq {question answer} {
  set ::faq($::cnt) [list [string trim $question] [string trim $answer]]
  incr ::cnt
}

#############
# Enter questions and answers here.

faq {
  How do I create an AUTOINCREMENT field?
} {
  <p>Short answer: A column declared INTEGER PRIMARY KEY will
  autoincrement.</p>

  <p>Here is the long answer:
  Beginning with SQLite version 2.3.4, if you declare a column of
  a table to be INTEGER PRIMARY KEY, then whenever you insert a NULL
  into that column of the table, the NULL is automatically converted
  into an integer which is one greater than the largest value of that
  column over all other rows in the table, or 1 if the table is empty.
  For example, suppose you have a table like this:
<blockquote><pre>
CREATE TABLE t1(
  a INTEGER PRIMARY KEY,
  b INTEGER
);
</pre></blockquote>
  <p>With this table, the statement</p>
<blockquote><pre>
INSERT INTO t1 VALUES(NULL,123);
</pre></blockquote>
  <p>is logically equivalent to saying:</p>
<blockquote><pre>
INSERT INTO t1 VALUES((SELECT max(a) FROM t1)+1,123);
</pre></blockquote>
  <p>For SQLite version 2.2.0 through 2.3.3, if you insert a NULL into
  an INTEGER PRIMARY KEY column, the NULL will be changed to a unique
  integer, but it will be a semi-random integer.  Unique keys generated this
  way will not be sequential.  For SQLite version 2.3.4 and beyond, the
  unique keys will be sequential until the largest key reaches a value
  of 2147483647.  That is the largest 32-bit signed integer and cannot
  be incremented, so subsequent insert attempts will revert to the
  semi-random key generation algorithm of SQLite version 2.3.3 and
  earlier.</p>

  <p>Beginning with version 2.2.3, there is a new API function named
  <b>sqlite_last_insert_rowid()</b> which will return the integer key
  for the most recent insert operation.  See the API documentation for
  details.</p>

  <p>SQLite version 3.0 expands the size of the rowid to 64 bits.</p>
}

faq {
  What datatypes does SQLite support?
} {
  <p>SQLite ignores
  the datatype information that follows the column name in CREATE TABLE.
  You can put any type of data you want
  into any column, without regard to the declared datatype of that column.
  </p>

  <p>An exception to this rule is a column of type INTEGER PRIMARY KEY.
  Such columns must hold an integer.  An attempt to put a non-integer
  value into an INTEGER PRIMARY KEY column will generate an error.</p>

  <p>There is a page on <a href="datatypes.html">datatypes in SQLite
  version 2.8</a>
  and another for <a href="datatype3.html">version 3.0</a>
  that explains this concept further.</p>
}

faq {
  SQLite lets me insert a string into a database column of type integer!
} {
  <p>This is a feature, not a bug.  SQLite does not enforce data type
  constraints.  Any data can be
  inserted into any column.  You can put arbitrary length strings into
  integer columns, floating point numbers in boolean columns, or dates
  in character columns.  The datatype you assign to a column in the
  CREATE TABLE command does not restrict what data can be put into
  that column.  Every column is able to hold
  an arbitrary length string.  (There is one exception: Columns of
  type INTEGER PRIMARY KEY may only hold an integer.  An error will result
  if you try to put anything other than an integer into an
  INTEGER PRIMARY KEY column.)</p>

  <p>The datatype does affect how values are compared, however.  For
  columns with a numeric type (such as "integer") any string that looks
  like a number is treated as a number for comparison and sorting purposes.
  Consider these two command sequences:</p>

  <blockquote><pre>
CREATE TABLE t1(a INTEGER UNIQUE);        CREATE TABLE t2(b TEXT UNIQUE);
INSERT INTO t1 VALUES('0');               INSERT INTO t2 VALUES(0);
INSERT INTO t1 VALUES('0.0');             INSERT INTO t2 VALUES(0.0);
</pre></blockquote>

  <p>In the sequence on the left, the second insert will fail.  In this case,
  the strings '0' and '0.0' are treated as numbers since they are being 
  inserted into a numeric column and 0==0.0, which violates the uniqueness
  constraint.  But the second insert in the right-hand sequence works.  In
  this case, the constants 0 and 0.0 are treated as strings, which means that
  they are distinct.</p>

  <p>There is a page on <a href="datatypes.html">datatypes in SQLite
  version 2.8</a>
  and another for <a href="datatype3.html">version 3.0</a>
  that explains this concept further.</p>
}

faq {
  Why does SQLite think that the expression '0'=='00' is TRUE?
} {
  <p>As of version 2.7.0, it doesn't.</p>

  <p>But if one of the two values being compared is stored in a column that
  has a numeric type, then the other value is treated as a number, not a
  string, and the comparison succeeds.  For example:</p>

<blockquote><pre>
CREATE TABLE t3(a INTEGER, b TEXT);
INSERT INTO t3 VALUES(0,0);
SELECT count(*) FROM t3 WHERE a=='00';
</pre></blockquote>

  <p>The SELECT in the above series of commands returns 1.  The "a" column
  is numeric so in the WHERE clause the string '00' is converted into a
  number for comparison against "a".  0==00 so the test is true.  Now
  consider a different SELECT:</p>

<blockquote><pre>
SELECT count(*) FROM t3 WHERE b=='00';
</pre></blockquote>

  <p>In this case the answer is 0.  B is a text column so a text comparison
  is done against '00'.  '0'!='00' so the WHERE clause returns FALSE and
  the count is zero.</p>

  <p>There is a page on <a href="datatypes.html">datatypes in SQLite
  version 2.8</a>
  and another for <a href="datatype3.html">version 3.0</a>
  that explains this concept further.</p>
}

faq {
  Why doesn't SQLite allow me to use '0' and '0.0' as the primary
  key on two different rows of the same table?
} {
  <p>Your primary key must have a numeric type.  Change the datatype of
  your primary key to TEXT and it should work.</p>

  <p>Every row must have a unique primary key.  For a column with a
  numeric type, SQLite thinks that <b>'0'</b> and <b>'0.0'</b> are the
  same value because they compare equal to one another numerically.
  (See the previous question.)  Hence the values are not unique.</p>
}
        
faq {
  My linux box is not able to read an SQLite database that was created
  on my SparcStation.
} {
  <p>You need to upgrade your SQLite library to version 2.6.3 or later.</p>

  <p>The x86 processor on your linux box is little-endian (meaning that
  the least significant byte of integers comes first) but the Sparc is
  big-endian (the most significant byte comes first).  SQLite databases
  created on a little-endian architecture cannot be read on a big-endian
  machine by version 2.6.2 or earlier of SQLite.  Beginning with
  version 2.6.3, SQLite should be able to read and write database files
  regardless of byte order of the machine on which the file was created.</p>
}

faq {
  Can multiple applications or multiple instances of the same
  application access a single database file at the same time?
} {
  <p>Multiple processes can have the same database open at the same
  time.  Multiple processes can be doing a SELECT
  at the same time.  But only one process can be making changes to
  the database at once.</p>

  <p>Win95/98/ME lacks support for reader/writer locks in the operating
  system.  Prior to version 2.7.0, this meant that under windows you
  could only have a single process reading the database at one time.
  This problem was resolved in version 2.7.0 by implementing a user-space
  probabilistic reader/writer locking strategy in the windows interface
  code file.  Windows
  now works like Unix in allowing multiple simultaneous readers.</p>

  <p>The locking mechanism used to control simultaneous access might
  not work correctly if the database file is kept on an NFS filesystem.
  This is because file locking is broken on some NFS implementations.
  You should avoid putting SQLite database files on NFS if multiple
  processes might try to access the file at the same time.  On Windows,
  Microsoft's documentation says that locking may not work under FAT
  filesystems if you are not running the Share.exe daemon.  People who
  have a lot of experience with Windows tell me that file locking of
  network files is very buggy and is not dependable.  If what they
  say is true, sharing an SQLite database between two or more Windows
  machines might cause unexpected problems.</p>

  <p>Locking in SQLite is very coarse-grained.  SQLite locks the
  entire database.  Big database servers (PostgreSQL, Oracle, etc.)
  generally have finer grained locking, such as locking on a single
  table or a single row within a table.  If you have a massively
  parallel database application, you should consider using a big database
  server instead of SQLite.</p>

  <p>When SQLite tries to access a file that is locked by another
  process, the default behavior is to return SQLITE_BUSY.  You can
  adjust this behavior from C code using the <b>sqlite_busy_handler()</b> or
  <b>sqlite_busy_timeout()</b> API functions.  See the API documentation
  for details.</p>

  <p>If two or more processes have the same database open and one
  process creates a new table or index, the other processes might
  not be able to see the new table right away.  You might have to
  get the other processes to close and reopen their connection to
  the database before they will be able to see the new table.</p>
}

faq {
  Is SQLite threadsafe?
} {
  <p>Yes.  Sometimes.  In order to be thread-safe, SQLite must be compiled
  with the THREADSAFE preprocessor macro set to 1.  In the default
  distribution, the windows binaries are compiled to be threadsafe but
  the linux binaries are not.  If you want to change this, you'll have to
  recompile.</p>

  <p>"Threadsafe" in the previous paragraph means that two or more threads
  can run SQLite at the same time on different "<b>sqlite</b>" structures
  returned from separate calls to <b>sqlite_open()</b>.  It is never safe
  to use the same <b>sqlite</b> structure pointer simultaneously in two
  or more threads.</p>

  <p>Note that if two or more threads have the same database open and one
  thread creates a new table or index, the other threads might
  not be able to see the new table right away.  You might have to
  get the other threads to close and reopen their connection to
  the database before they will be able to see the new table.</p>

  <p>Under UNIX, you should not carry an open SQLite database across
  a fork() system call into the child process.  Problems will result
  if you do.</p>
}

faq {
  How do I list all tables/indices contained in an SQLite database
} {
  <p>If you are running the <b>sqlite</b> command-line access program
  you can type "<b>.tables</b>" to get a list of all tables.  Or you
  can type "<b>.schema</b>" to see the complete database schema including
  all tables and indices.  Either of these commands can be followed by
  a LIKE pattern that will restrict the tables that are displayed.</p>

  <p>From within a C/C++ program (or a script using Tcl/Ruby/Perl/Python
  bindings) you can get access to table and index names by doing a SELECT
  on a special table named "<b>SQLITE_MASTER</b>".  Every SQLite database
  has an SQLITE_MASTER table that defines the schema for the database.
  The SQLITE_MASTER table looks like this:</p>
<blockquote><pre>
CREATE TABLE sqlite_master (
  type TEXT,
  name TEXT,
  tbl_name TEXT,
  rootpage INTEGER,
  sql TEXT
);
</pre></blockquote>
  <p>For tables, the <b>type</b> field will always be <b>'table'</b> and the
  <b>name</b> field will be the name of the table.  So to get a list of
  all tables in the database, use the following SELECT command:</p>
<blockquote><pre>
SELECT name FROM sqlite_master
WHERE type='table'
ORDER BY name;
</pre></blockquote>
  <p>For indices, <b>type</b> is equal to <b>'index'</b>, <b>name</b> is the
  name of the index and <b>tbl_name</b> is the name of the table to which
  the index belongs.  For both tables and indices, the <b>sql</b> field is
  the text of the original CREATE TABLE or CREATE INDEX statement that
  created the table or index.  For automatically created indices (used
  to implement the PRIMARY KEY or UNIQUE constraints) the <b>sql</b> field
  is NULL.</p>

  <p>The SQLITE_MASTER table is read-only.  You cannot change this table
  using UPDATE, INSERT, or DELETE.  The table is automatically updated by
  CREATE TABLE, CREATE INDEX, DROP TABLE, and DROP INDEX commands.</p>

  <p>Temporary tables do not appear in the SQLITE_MASTER table.  Temporary
  tables and their indices and triggers occur in another special table
  named SQLITE_TEMP_MASTER.  SQLITE_TEMP_MASTER works just like SQLITE_MASTER
  except that it is only visible to the application that created the 
  temporary tables.  To get a list of all tables, both permanent and
  temporary, one can use a command similar to the following:
<blockquote><pre>
SELECT name FROM 
   (SELECT * FROM sqlite_master UNION ALL
    SELECT * FROM sqlite_temp_master)
WHERE type='table'
ORDER BY name
</pre></blockquote>
}

faq {
  Are there any known size limits to SQLite databases?
} {
  <p>As of version 2.7.4, 
  SQLite can handle databases up to 2<sup>41</sup> bytes (2 terabytes)
  in size on both Windows and Unix.  Older versions of SQLite
  were limited to databases of 2<sup>31</sup> bytes (2 gigabytes).</p>

  <p>SQLite version 2.8 limits the amount of data in one row to 
  1 megabyte.  SQLite version 3.0 has no limit on the amount of
  data that can be stored in a single row.
  </p>

  <p>The names of tables, indices, views, triggers, and columns can be
  as long as desired.  However, the names of SQL functions (as created
  by the <a href="c_interface.html#cfunc">sqlite_create_function()</a> API)
  may not exceed 255 characters in length.</p>
}

faq {
  What is the maximum size of a VARCHAR in SQLite?
} {
  <p>SQLite does not enforce datatype constraints.
  A VARCHAR column can hold as much data as you care to put in it.</p>
}

faq {
  Does SQLite support a BLOB type?
} {
  <p>SQLite version 3.0 lets you put BLOB data into any column, even
  columns that are declared to hold some other type.</p>

  <p>SQLite version 2.8 will only store text data without embedded
  '\000' characters.  If you need to store BLOB data in SQLite version
  2.8, you'll want to encode that data first.
  There is a source file named 
  "<b>src/encode.c</b>" in the SQLite version 2.8 distribution that contains
  implementations of functions named <b>sqlite_encode_binary()</b>
  and <b>sqlite_decode_binary()</b> that can be used for converting
  binary data to ASCII and back again, if you like.</p>

 
}

faq {
  How do I add or delete columns from an existing table in SQLite?
} {
  <p>SQLite does not support the "ALTER TABLE" SQL command.  If you
  want to change the structure of a table, you have to recreate the
  table.  You can save existing data to a temporary table, drop the
  old table, create the new table, then copy the data back in from
  the temporary table.</p>

  <p>For example, suppose you have a table named "t1" with columns
  names "a", "b", and "c" and that you want to delete column "c" from
  this table.  The following steps illustrate how this could be done:
  </p>

  <blockquote><pre>
BEGIN TRANSACTION;
CREATE TEMPORARY TABLE t1_backup(a,b);
INSERT INTO t1_backup SELECT a,b FROM t1;
DROP TABLE t1;
CREATE TABLE t1(a,b);
INSERT INTO t1 SELECT a,b FROM t1_backup;
DROP TABLE t1_backup;
COMMIT;
</pre></blockquote>
}

faq {
  I deleted a lot of data but the database file did not get any
  smaller.  Is this a bug?
} {
  <p>No.  When you delete information from an SQLite database, the
  unused disk space is added to an internal "free-list" and is reused
  the next time you insert data.  The disk space is not lost.  But
  neither is it returned to the operating system.</p>

  <p>If you delete a lot of data and want to shrink the database file,
  run the VACUUM command (version 2.8.1 and later).  VACUUM will reconstruct
  the database from scratch.  This will leave the database with an empty
  free-list and a file that is minimal in size.  Note, however, that the
  VACUUM can take some time to run (around a half second per megabyte
  on the Linux box where SQLite is developed) and it can use up to twice
  as much temporary disk space as the original file while it is running.
  </p>
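
  <p>For instance, using the version 2 C API, VACUUM is run like any other
  SQL statement.  The following is a minimal sketch (the function name is
  ours and error handling is abbreviated):</p>

  <blockquote><pre>
#include &lt;stdio.h&gt;
#include "sqlite.h"

/* Rebuild the database so that the free-list is emptied and the file
** shrinks to its minimal size.  Requires SQLite 2.8.1 or later.
** "db" is an already-open handle from sqlite_open(). */
int shrink_database(sqlite *db){
  char *zErr = 0;
  int rc = sqlite_exec(db, "VACUUM;", 0, 0, &zErr);
  if( rc!=SQLITE_OK ){
    fprintf(stderr, "VACUUM failed: %s\n", zErr ? zErr : "unknown error");
    if( zErr ) sqlite_freemem(zErr);
  }
  return rc;
}
</pre></blockquote>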
}

faq {
  Can I use SQLite in my commercial product without paying royalties?
} {
  <p>Yes.  SQLite is in the public domain.  No claim of ownership is made
  to any part of the code.  You can do anything you want with it.</p>
}

faq {
  How do I use a string literal that contains an embedded single-quote (')
  character?
} {
  <p>The SQL standard specifies that single-quotes in strings are escaped
  by putting two single quotes in a row.  SQL works like the Pascal programming
  language in this regard.  SQLite follows this standard.  Example:
  </p>

  <blockquote><pre>
    INSERT INTO xyz VALUES('5 O''clock');
  </pre></blockquote>
}

# End of questions and answers.
#############

puts {<h2>Frequently Asked Questions</h2>}

# puts {<DL COMPACT>}
# for {set i 1} {$i<$cnt} {incr i} {
#   puts "  <DT><A HREF=\"#q$i\">($i)</A></DT>"
#   puts "  <DD>[lindex $faq($i) 0]</DD>"
# }
# puts {</DL>}
puts {<OL>}
for {set i 1} {$i<$cnt} {incr i} {
  puts "<li><a href=\"#q$i\">[lindex $faq($i) 0]</a></li>"
}
puts {</OL>}

for {set i 1} {$i<$cnt} {incr i} {
  puts "<A NAME=\"q$i\"><HR />"
  puts "<P><B>($i) [lindex $faq($i) 0]</B></P>\n"
  puts "<BLOCKQUOTE>[lindex $faq($i) 1]</BLOCKQUOTE></LI>\n"
}

puts {</OL>}
footer $rcsid

--- NEW FILE: fileformat.tcl ---
#
# Run this script to generate a fileformat.html output file
#
set rcsid {$Id: fileformat.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {SQLite Database File Format (Version 2)}
puts {
<h2>SQLite 2.X Database File Format</h2>

<p>
This document describes the disk file format for SQLite versions 2.1
through 2.8.  SQLite version 3.0 and later uses a very different
format which is described separately.
</p>

<h3>1.0 &nbsp; Layers</h3>

<p>
SQLite is implemented in layers.
(See the <a href="arch.html">architecture description</a>.)
The format of database files is determined by three different
layers in the architecture.
</p>

<ul>
<li>The <b>schema</b> layer implemented by the VDBE.</li>
<li>The <b>b-tree</b> layer implemented by btree.c</li>
<li>The <b>pager</b> layer implemented by pager.c</li>
</ul>

<p>
We will describe each layer beginning with the bottom (pager)
layer and working upwards.
</p>

<h3>2.0 &nbsp; The Pager Layer</h3>

<p>
An SQLite database consists of
"pages" of data.  Each page is 1024 bytes in size.
Pages are numbered beginning with 1.
A page number of 0 is used to indicate "no such page" in the
B-Tree and Schema layers.
</p>

<p>
The pager layer is responsible for implementing transactions
with atomic commit and rollback.  It does this using a separate
journal file.  Whenever a new transaction is started, a journal
file is created that records the original state of the database.
If the program terminates before completing the transaction, the next
process to open the database can use the journal file to restore
the database to its original state.
</p>

<p>
The journal file is located in the same directory as the database
file and has the same name as the database file but with the
characters "<tt>-journal</tt>" appended.
</p>

<p>
The pager layer does not impose any content restrictions on the
main database file.  As far as the pager is concerned, each page
contains 1024 bytes of arbitrary data.  But there is structure to
the journal file.
</p>

<p>
A journal file begins with 8 bytes as follows:
0xd9, 0xd5, 0x05, 0xf9, 0x20, 0xa1, 0x63, and 0xd6.
Processes that are attempting to roll back a journal use these 8 bytes
as a sanity check to make sure the file they think is a journal really
is a valid journal.  Prior versions of SQLite used different journal
file formats.  The magic numbers for these prior formats are different
so that if a new version of the library attempts to roll back a journal
created by an earlier version, it can detect that the journal uses
an obsolete format and make the necessary adjustments.  This article
describes only the newest journal format - supported as of version
2.8.0.
</p>

<p>
Following the 8-byte prefix are three 4-byte integers that tell us
the number of pages that have been committed to the journal,
a magic number used for
sanity checking each page, and the
original size of the main database file before the transaction was
started.  The number of committed pages is used to limit how far
into the journal to read.  The use of the checksum magic number is
described below.
The original size of the database is used to restore the database
file back to its original size.
The size is expressed in pages (1024 bytes per page).
</p>

<p>
All three integers in the journal header and all other multi-byte
numbers used in the journal file are big-endian.
That means that the most significant byte
occurs first.  That way, a journal file that is
originally created on one machine can be rolled back by another
machine that uses a different byte order.  So, for example, a
transaction that failed to complete on your big-endian SparcStation
can still be rolled back on your little-endian Linux box.
</p>
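
<p>
Because the integers are stored most-significant-byte first, portable C
code reads them one byte at a time instead of relying on the byte order
of the host machine.  A minimal sketch (the helper name is ours, not a
function from the SQLite sources):
</p>

<blockquote><pre>
/* Decode a 4-byte big-endian integer from a journal file.  The most
** significant byte comes first regardless of the byte order of the
** machine doing the reading. */
static unsigned int get4byteBE(const unsigned char *p){
  return ((unsigned int)p[0]&lt;&lt;24)
       | ((unsigned int)p[1]&lt;&lt;16)
       | ((unsigned int)p[2]&lt;&lt;8)
       |  (unsigned int)p[3];
}
</pre></blockquote>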

<p>
After the 8-byte prefix and the three 4-byte integers, the
journal file consists of zero or more page records.  Each page
record is a 4-byte (big-endian) page number followed by 1024 bytes
of data and a 4-byte checksum.  
The data is the original content of the database page
before the transaction was started.  So to roll back the transaction,
the data is simply written into the corresponding page of the
main database file.  Pages can appear in the journal in any order,
but they are guaranteed to appear only once. All page numbers will be
between 1 and the original size of the database, in pages, as recorded
at the beginning of the journal.
</p>

<p>
The so-called checksum at the end of each record is not really a
checksum - it is the sum of the page number and the magic number which
was the second integer in the journal header.  The purpose of this
value is to try to detect journal corruption that might have occurred
because of a power loss or OS crash that occurred while the journal
file was being written to disk.  It could have been the case that the
meta-data for the journal file, specifically the size of the file, had
been written to the disk so that when the machine reboots it appears that
the file is large enough to hold the current record.  But even though the
file size has changed, the data for the file might not have made it to
the disk surface at the time of the OS crash or power loss.  This means
that after reboot, the end of the journal file will contain quasi-random
garbage data.  The checksum is an attempt to detect such corruption.  If
the checksum does not match, that page of the journal is not rolled back.
</p>

<p>
Here is a summary of the journal file format:
</p>

<ul>
<li>8 byte prefix: 0xd9, 0xd5, 0x05, 0xf9, 0x20, 0xa1, 0x63, 0xd6</li>
<li>4 byte number of records in journal</li>
<li>4 byte magic number used for page checksums</li>
<li>4 byte initial database page count</li>
<li>Zero or more instances of the following:
   <ul>
   <li>4 byte page number</li>
   <li>1024 bytes of original data for the page</li>
   <li>4 byte checksum</li>
   </ul>
</li>
</ul>
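
<p>
The same layout can be expressed as C-style structures.  This is purely
illustrative: in the file the fields are packed big-endian byte sequences,
so real code decodes them byte-by-byte (as sketched above) rather than
overlaying structures, which would be subject to compiler padding and
host byte order.
</p>

<blockquote><pre>
typedef unsigned char u8;

/* Journal header: the first 20 bytes of the journal file. */
typedef struct JournalHeader JournalHeader;
struct JournalHeader {
  u8 aMagic[8];     /* 0xd9 0xd5 0x05 0xf9 0x20 0xa1 0x63 0xd6 */
  u8 nRec[4];       /* number of page records that follow */
  u8 cksumInit[4];  /* magic value added into each page checksum */
  u8 nOrigPage[4];  /* size of the database, in pages, before the
                    ** transaction started */
};

/* One page record; zero or more of these follow the header. */
typedef struct JournalRecord JournalRecord;
struct JournalRecord {
  u8 pgno[4];       /* page number of the original page */
  u8 aData[1024];   /* original content of that page */
  u8 cksum[4];      /* pgno plus cksumInit */
};
</pre></blockquote>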

<h3>3.0 &nbsp; The B-Tree Layer</h3>

<p>
The B-Tree layer builds on top of the pager layer to implement
one or more separate b-trees all in the same disk file.  The
algorithms used are taken from Knuth's <i>The Art Of Computer
Programming.</i></p>

<p>
Page 1 of a database contains a header string used for sanity
checking, a few 32-bit words of configuration data, and a pointer
to the beginning of a list of unused pages in the database.
All other pages in the
database are either pages of a b-tree, overflow pages, or unused
pages on the freelist.
</p>

<p>
Each b-tree page contains zero or more database entries.
Each entry has a unique key of one or more bytes and data of
zero or more bytes.
Both the key and data are arbitrary byte sequences.  The combination
of key and data is collectively known as "payload".  The current
implementation limits the amount of payload in a single entry to
1048576 bytes.  This limit can be raised to 16777216 by adjusting
a single #define in the source code and recompiling.  But most entries
contain less than a hundred bytes of payload so a megabyte limit seems
more than enough.
</p>

<p>
Up to 238 bytes of payload for an entry can be held directly on
a b-tree page.  Any additional payload is contained on a linked list
of overflow pages.  This limit on the amount of payload held directly
on b-tree pages guarantees that each b-tree page can hold at least
4 entries.  In practice, most entries are smaller than 238 bytes and
thus most pages can hold more than 4 entries.
</p>

<p>
A single database file can hold any number of separate, independent b-trees.
Each b-tree is identified by its root page, which never changes.
Child pages of the b-tree may change as entries are added and removed
and pages split and combine.  But the root page always stays the same.
The b-tree itself does not record which pages are root pages and which
are not.  That information is handled entirely at the schema layer.
</p>

<h4>3.1 &nbsp; B-Tree Page 1 Details</h4>

<p>
Page 1 begins with the following 48-byte string:
</p>

<blockquote><pre>
** This file contains an SQLite 2.1 database **
</pre></blockquote>

<p>
If you count the number of characters in the string above, you will
see that there are only 47.  A '\000' terminator byte is added to
bring the total to 48.
</p>

<p>
A frequent question is why the string says version 2.1 when (as
of this writing) we are up to version 2.7.0 of SQLite and any
change to the second digit of the version is supposed to represent
a database format change.  The answer to this is that the B-tree
layer has not changed since version 2.1.  There have been
database format changes since version 2.1 but those changes have
all been in the schema layer.  Because the format of the b-tree
layer is unchanged since version 2.1.0, the header string still
says version 2.1.
</p>

<p>
After the format string is a 4-byte integer used to determine the
byte-order of the database.  The integer has a value of
0xdae37528.  If this number is expressed as 0xda, 0xe3, 0x75, 0x28, then
the database is in a big-endian format and all 16 and 32-bit integers
elsewhere in the b-tree layer are also big-endian.  If the number is
expressed as 0x28, 0x75, 0xe3, and 0xda, then the database is in a
little-endian format and all other multi-byte numbers in the b-tree 
layer are also little-endian.  
Prior to version 2.6.3, the SQLite engine was only able to read databases
that used the same byte order as the processor they were running on.
But beginning with 2.6.3, SQLite can read or write databases in any
byte order.
</p>

<p>
After the byte-order code are eleven 4-byte integers.  Each integer is in the
byte order determined by the byte-order code.  The first integer is the
page number for the first page of the freelist.  If there are no unused
pages in the database, then this integer is 0.  The second integer is
the number of unused pages in the database.  The remaining nine integers are
not used by the b-tree layer.  These are the so-called "meta" values that
are passed up to the schema layer
and used there for configuration and format version information.
All bytes of page 1 beyond the meta-value integers are unused 
and are initialized to zero.
</p>

<p>
Here is a summary of the information contained on page 1 in the b-tree layer:
</p>

<ul>
<li>48 byte header string</li>
<li>4 byte integer used to determine the byte-order</li>
<li>4 byte integer which is the first page of the freelist</li>
<li>4 byte integer which is the number of pages on the freelist</li>
<li>36 bytes of meta-data arranged as nine 4-byte integers</li>
<li>928 bytes of unused space</li>
</ul>
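
<p>
Viewed as a C structure, page 1 looks roughly like the following.  This is
an illustrative sketch modeled on the description above; the integers are
stored in the byte order selected by the byte-order code, so they are shown
as plain integers only for readability.
</p>

<blockquote><pre>
typedef unsigned int u32;

/* Layout of page 1 as seen by the b-tree layer (sketch).
** 48 + 4 + 4 + 4 + 36 + 928 = 1024 bytes. */
typedef struct PageOne PageOne;
struct PageOne {
  char zMagic[48];  /* "** This file contains an SQLite 2.1 database **"
                    ** plus a '\000' terminator */
  u32 iByteOrder;   /* 0xdae37528, used to detect the byte order */
  u32 freeList;     /* first page of the freelist, or 0 if none */
  u32 nFree;        /* number of pages on the freelist */
  u32 aMeta[9];     /* meta values passed up to the schema layer */
  u32 aUnused[232]; /* 928 bytes of unused, zeroed space */
};
</pre></blockquote>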

<h4>3.2 &nbsp; Structure Of A Single B-Tree Page</h4>

<p>
Conceptually, a b-tree page contains N database entries and N+1 pointers
to other b-tree pages.
</p>

<blockquote>
<table border=1 cellspacing=0 cellpadding=5>
<tr>
<td align="center">Ptr<br>0</td>
<td align="center">Entry<br>0</td>
<td align="center">Ptr<br>1</td>
<td align="center">Entry<br>1</td>
<td align="center"><b>...</b></td>
<td align="center">Ptr<br>N-1</td>
<td align="center">Entry<br>N-1</td>
<td align="center">Ptr<br>N</td>
</tr>
</table>
</blockquote>

<p>
The entries are arranged in increasing order.  That is, the key to
Entry 0 is less than the key to Entry 1, and the key to Entry 1 is
less than the key of Entry 2, and so forth.  The pointers point to
pages containing additional entries that have keys in between the
entries on either side.  So Ptr 0 points to another b-tree page that
contains entries that all have keys less than Key 0, and Ptr 1
points to a b-tree page where all entries have keys greater than Key 0
but less than Key 1, and so forth.
</p>

<p>
Each b-tree page in SQLite consists of a header, zero or more "cells"
each holding a single entry and pointer, and zero or more "free blocks"
that represent unused space on the page.
</p>

<p>
The header on a b-tree page is the first 8 bytes of the page.
The header contains the value
of the right-most pointer (Ptr N) and the byte offset into the page
of the first cell and the first free block.  The pointer is a 32-bit
value and the offsets are each 16-bit values.  We have:
</p>

<blockquote>
<table border=1 cellspacing=0 cellpadding=5>
<tr>
<td align="center" width=30>0</td>
<td align="center" width=30>1</td>
<td align="center" width=30>2</td>
<td align="center" width=30>3</td>
<td align="center" width=30>4</td>
<td align="center" width=30>5</td>
<td align="center" width=30>6</td>
<td align="center" width=30>7</td>
</tr>
<tr>
<td align="center" colspan=4>Ptr N</td>
<td align="center" colspan=2>Cell 0</td>
<td align="center" colspan=2>Freeblock 0</td>
</tr>
</table>
</blockquote>

<p>
The 1016 bytes of a b-tree page that come after the header contain
cells and freeblocks.  All 1016 bytes are covered by either a cell
or a freeblock.
</p>

<p>
The cells are connected in a linked list.  Cell 0 contains Ptr 0 and
Entry 0.  Bytes 4 and 5 of the header point to Cell 0.  Cell 0 then
points to Cell 1 which contains Ptr 1 and Entry 1.  And so forth.
Cells vary in size.  Every cell has a 12-byte header and at least 4
bytes of payload space.  Space is allocated to payload in increments
of 4 bytes.  Thus the minimum size of a cell is 16 bytes and up to
63 cells can fit on a single page.  The size of a cell is always a multiple
of 4 bytes.
A cell can have up to 238 bytes of payload space.  If
the payload is more than 238 bytes, then an additional 4 byte page
number is appended to the cell which is the page number of the first
overflow page containing the additional payload.  The maximum size
of a cell is thus 254 bytes, meaning that at least 4 cells can fit into
the 1016 bytes of space available on a b-tree page.
An average cell is usually around 52 to 100 bytes in size with about
10 or 20 cells to a page.
</p>

<p>
The data layout of a cell looks like this:
</p>

<blockquote>
<table border=1 cellspacing=0 cellpadding=5>
<tr>
<td align="center" width=20>0</td>
<td align="center" width=20>1</td>
<td align="center" width=20>2</td>
<td align="center" width=20>3</td>
<td align="center" width=20>4</td>
<td align="center" width=20>5</td>
<td align="center" width=20>6</td>
<td align="center" width=20>7</td>
<td align="center" width=20>8</td>
<td align="center" width=20>9</td>
<td align="center" width=20>10</td>
<td align="center" width=20>11</td>
<td align="center" width=100>12 ... 249</td>
<td align="center" width=20>250</td>
<td align="center" width=20>251</td>
<td align="center" width=20>252</td>
<td align="center" width=20>253</td>
</tr>
<tr>
<td align="center" colspan=4>Ptr</td>
<td align="center" colspan=2>Keysize<br>(low)</td>
<td align="center" colspan=2>Next</td>
<td align="center" colspan=1>Ksz<br>(hi)</td>
<td align="center" colspan=1>Dsz<br>(hi)</td>
<td align="center" colspan=2>Datasize<br>(low)</td>
<td align="center" colspan=1>Payload</td>
<td align="center" colspan=4>Overflow<br>Pointer</td>
</tr>
</table>
</blockquote>

<p>
The first four bytes are the pointer.  The size of the key is a 24-bit value
where the upper 8 bits are taken from byte 8 and the lower 16 bits are
taken from bytes 4 and 5 (or bytes 5 and 4 on little-endian machines.)
The size of the data is another 24-bit value where the upper 8 bits
are taken from byte 9 and the lower 16 bits are taken from bytes 10 and
11 or 11 and 10, depending on the byte order.  Bytes 6 and 7 are the
offset to the next cell in the linked list of all cells on the current
page.  This offset is 0 for the last cell on the page.
</p>
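
<p>
Rendered as a C structure the cell looks like this (another illustrative
sketch; real code reads the fields byte-by-byte in the byte order recorded
on page 1, and the payload area is really variable-sized, shown here at its
238-byte maximum):
</p>

<blockquote><pre>
typedef unsigned char u8;
typedef unsigned short u16;
typedef unsigned int u32;

/* One cell on a b-tree page (sketch).  The 12-byte header is followed
** by 4 to 238 bytes of payload, rounded up to a multiple of 4, and then
** by a 4-byte overflow page number that is present only when the payload
** does not fit entirely within the cell. */
typedef struct Cell Cell;
struct Cell {
  u32 leftChild;     /* Ptr: page holding entries with smaller keys */
  u16 nKeyLow;       /* low 16 bits of the 24-bit key size */
  u16 iNext;         /* offset of the next cell on this page, 0 if last */
  u8  nKeyHigh;      /* high 8 bits of the key size */
  u8  nDataHigh;     /* high 8 bits of the data size */
  u16 nDataLow;      /* low 16 bits of the 24-bit data size */
  u8  aPayload[238]; /* the key and data bytes */
  u32 ovflPgno;      /* first overflow page, if any */
};
</pre></blockquote>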

<p>
The payload itself can be any number of bytes between 1 and 1048576.
But space to hold the payload is allocated in 4-byte chunks up to
238 bytes.  If the entry contains more than 238 bytes of payload, then
additional payload data is stored on a linked list of overflow pages.
A 4 byte page number is appended to the cell that contains the first
page of this linked list.
</p>

<p>
Each overflow page begins with a 4-byte value which is the
page number of the next overflow page in the list.   This value is
0 for the last page in the list.  The remaining
1020 bytes of the overflow page are available for storing payload.
Note that a full page is allocated regardless of the number of overflow
bytes stored.  Thus, if the total payload for an entry is 239 bytes,
the first 238 are stored in the cell and the overflow page stores just
one byte.
</p>

<p>
The structure of an overflow page looks like this:
</p>

<blockquote>
<table border=1 cellspacing=0 cellpadding=5>
<tr>
<td align="center" width=20>0</td>
<td align="center" width=20>1</td>
<td align="center" width=20>2</td>
<td align="center" width=20>3</td>
<td align="center" width=200>4 ... 1023</td>
</tr>
<tr>
<td align="center" colspan=4>Next Page</td>
<td align="center" colspan=1>Overflow Data</td>
</tr>
</table>
</blockquote>

<p>
All space on a b-tree page which is not used by the header or by cells
is filled by freeblocks.  Freeblocks, like cells, are variable in size.
The size of a freeblock is at least 4 bytes and is always a multiple of
4 bytes.
The first 4 bytes contain a header and the remaining bytes
are unused.  The structure of the freeblock is as follows:
</p>

<blockquote>
<table border=1 cellspacing=0 cellpadding=5>
<tr>
<td align="center" width=20>0</td>
<td align="center" width=20>1</td>
<td align="center" width=20>2</td>
<td align="center" width=20>3</td>
<td align="center" width=200>4 ... 1015</td>
</tr>
<tr>
<td align="center" colspan=2>Size</td>
<td align="center" colspan=2>Next</td>
<td align="center" colspan=1>Unused</td>
</tr>
</table>
</blockquote>

<p>
Freeblocks are stored in a linked list in increasing order.  That is
to say, the first freeblock occurs at a lower index into the page than
the second free block, and so forth.  The first 2 bytes of the header
are an integer which is the total number of bytes in the freeblock.
The second 2 bytes are the index into the page of the next freeblock
in the list.  The last freeblock has a Next value of 0.
</p>

<p>
When a new b-tree is created in a database, the root page of the b-tree
consists of a header and a single 1016 byte freeblock.  As entries are
added, space is carved off of that freeblock and used to make cells.
When b-tree entries are deleted, the space used by their cells is converted
into freeblocks.  Adjacent freeblocks are merged, but the page can still
become fragmented.  The b-tree code will occasionally try to defragment
the page by moving all cells to the beginning and constructing a single
freeblock at the end to take up all remaining space.
</p>

<h4>3.3 &nbsp; The B-Tree Free Page List</h4>

<p>
When information is removed from an SQLite database such that one or
more pages are no longer needed, those pages are added to a list of
free pages so that they can be reused later when new information is
added.  This subsection describes the structure of this freelist.
</p>

<p>
The 32-bit integer beginning at byte-offset 52 in page 1 of the database
contains the address of the first page in a linked list of free pages.
If there are no free pages available, this integer has a value of 0.
The 32-bit integer at byte-offset 56 in page 1 contains the number of
free pages on the freelist.
</p>

<p>
The freelist contains a trunk and many branches.  The trunk of
the freelist is composed of overflow pages.  That is to say, each page
contains a single 32-bit integer at byte offset 0 which
is the page number of the next page on the freelist trunk.
The payload area
of each trunk page is used to record pointers to branch pages. 
The first 32-bit integer in the payload area of a trunk page
is the number of branch pages to follow (between 0 and 254)
and each subsequent 32-bit integer is a page number for a branch page.
The following diagram shows the structure of a trunk freelist page:
</p>

<blockquote>
<table border=1 cellspacing=0 cellpadding=5>
<tr>
<td align="center" width=20>0</td>
<td align="center" width=20>1</td>
<td align="center" width=20>2</td>
<td align="center" width=20>3</td>
<td align="center" width=20>4</td>
<td align="center" width=20>5</td>
<td align="center" width=20>6</td>
<td align="center" width=20>7</td>
<td align="center" width=200>8 ... 1023</td>
</tr>
<tr>
<td align="center" colspan=4>Next trunk page</td>
<td align="center" colspan=4># of branch pages</td>
<td align="center" colspan=1>Page numbers for branch pages</td>
</tr>
</table>
</blockquote>

<p>
It is important to note that only the pages on the trunk of the freelist
contain pointers to other pages.  The branch pages contain no
data whatsoever.  The fact that the branch pages are completely
blank allows for an important optimization in the paging layer.  When
a branch page is removed from the freelist to be reused, it is not
necessary to write the original content of that page into the rollback
journal.  The branch page contained no data to begin with, so there is
no need to restore the page in the event of a rollback.  Similarly,
when a page is no longer needed and is added to the freelist as a branch
page, it is not necessary to write the content of that page
into the database file.
Again, the page contains no real data so it is not necessary to record the
content of that page.  By reducing the amount of disk I/O required,
these two optimizations allow some database operations
to go four to six times faster than they would otherwise.
</p>

<h3>4.0 &nbsp; The Schema Layer</h3>

<p>
The schema layer implements an SQL database on top of one or more
b-trees and keeps track of the root page numbers for all b-trees.
Where the b-tree layer provides only unformatted data storage with
a unique key, the schema layer allows each entry to contain multiple
columns.  The schema layer also allows indices and non-unique key values.
</p>

<p>
The schema layer implements two separate data storage abstractions:
tables and indices.  Each table and each index uses its own b-tree
but they use the b-tree capabilities in different ways.  For a table,
the b-tree key is a unique 4-byte integer and the b-tree data is the
content of the table row, encoded so that columns can be separately
extracted.  For indices, the b-tree key varies in size depending on the
size of the fields being indexed and the b-tree data is empty.
</p>

<h4>4.1 &nbsp; SQL Table Implementation Details</h4>

<p>Each row of an SQL table is stored in a single b-tree entry.
The b-tree key is a 4-byte big-endian integer that is the ROWID
or INTEGER PRIMARY KEY for that table row.
The key is stored in a big-endian format so
that keys will sort in numerical order using the memcmp() function.</p>

<p>The content of a table row is stored in the data portion of
the corresponding b-tree table.  The content is encoded to allow
individual columns of the row to be extracted as necessary.  Assuming
that the table has N columns, the content is encoded as N+1 offsets
followed by N column values, as follows:
</p>

<blockquote>
<table border=1 cellspacing=0 cellpadding=5>
<tr>
<td>offset 0</td>
<td>offset 1</td>
<td><b>...</b></td>
<td>offset N-1</td>
<td>offset N</td>
<td>value 0</td>
<td>value 1</td>
<td><b>...</b></td>
<td>value N-1</td>
</tr>
</table>
</blockquote>

<p>
The offsets can be either 8-bit, 16-bit, or 24-bit integers depending
on how much data is to be stored.  If the total size of the content
is less than 256 bytes then 8-bit offsets are used.  If the total size
of the b-tree data is less than 65536 then 16-bit offsets are used.
24-bit offsets are used otherwise.  Offsets are always little-endian,
which means that the least significant byte occurs first.
</p>

<p>
Data is stored as a nul-terminated string.  An empty string consists
of just the nul terminator.  A NULL value is an empty string with no
nul-terminator.  Thus a NULL value occupies zero bytes and an empty string
occupies 1 byte.
</p>

<p>
Column values are stored in the order that they appear in the CREATE TABLE
statement.  The offsets at the beginning of the record contain the
byte index of the corresponding column value.  Thus, Offset 0 contains
the byte index for Value 0, Offset 1 contains the byte offset
of Value 1, and so forth.  The number of bytes in a column value can
always be found by subtracting offsets.  This allows NULLs to be
recovered from the record unambiguously.
</p>
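
<p>
As a worked example (ours, not part of the original documentation),
consider a row in a three-column table holding the text value 'abc',
a NULL, and an empty string.  The whole record is small, so 8-bit
offsets are used.  Assuming the offsets are byte indexes measured from
the start of the record (an assumption that is consistent with the
subtraction rule above), the b-tree data for the row would be:
</p>

<blockquote><pre>
/* Encoded record for the row ('abc', NULL, '') -- illustrative only. */
static const unsigned char aRecord[] = {
   4,                /* offset 0: value 0 begins at byte 4 */
   8,                /* offset 1: value 1 begins at byte 8 */
   8,                /* offset 2: equal to offset 1, so value 1 is
                     ** zero bytes long, i.e. NULL */
   9,                /* offset 3: end of the record */
  'a', 'b', 'c', 0,  /* value 0: "abc" plus its nul terminator */
                     /* value 1: NULL occupies no bytes at all */
   0                 /* value 2: empty string, just a nul terminator */
};
</pre></blockquote>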

<p>
Most columns are stored in the b-tree data as described above.
The one exception is a column that has type INTEGER PRIMARY KEY.
INTEGER PRIMARY KEY columns correspond to the 4-byte b-tree key.
When an SQL statement attempts to read the INTEGER PRIMARY KEY,
the 4-byte b-tree key is read rather than information out of the
b-tree data.  But there is still an Offset associated with the
INTEGER PRIMARY KEY, just like any other column; the Value
associated with that offset is always NULL.
</p>

<h4>4.2 &nbsp; SQL Index Implementation Details</h4>

<p>
SQL indices are implemented using a b-tree in which the key is used
but the data is always empty.  The purpose of an index is to map
one or more column values into the ROWID for the table entry that
contains those column values.
</p>

<p>
Each key in an index b-tree consists of one or more column values followed
by a 4-byte ROWID.  Each column value is nul-terminated (even NULL values)
and begins with a single character that indicates the datatype for that
column value.  Only three datatypes are supported: NULL, Number, and
Text.  NULL values are encoded as the character 'a' followed by the
nul terminator.  Numbers are encoded as the character 'b' followed by
a string that has been crafted so that sorting the string using memcmp()
will sort the corresponding numbers in numerical order.  (See the
sqliteRealToSortable() function in util.c of the SQLite sources for
additional information on this encoding.)  Numbers are also nul-terminated.
Text values consist of the character 'c' followed by a copy of the
text string and a nul-terminator.  These encoding rules result in
NULLs being sorted first, followed by numerical values in numerical
order, followed by text values in lexicographical order.
</p>

<h4>4.3 &nbsp; SQL Schema Storage And Root B-Tree Page Numbers</h4>

<p>
The database schema is stored in the database in a special table named
"sqlite_master", which always has a root b-tree page number of 2.
This table contains the original CREATE TABLE,
CREATE INDEX, CREATE VIEW, and CREATE TRIGGER statements used to define
the database to begin with.  Whenever an SQLite database is opened,
the sqlite_master table is scanned from beginning to end and 
all the original CREATE statements are played back through the parser
in order to reconstruct an in-memory representation of the database
schema for use in subsequent command parsing.  For each CREATE TABLE
and CREATE INDEX statement, the root page number for the corresponding
b-tree is also recorded in the sqlite_master table so that SQLite will
know where to look for the appropriate b-tree.
</p>

<p>
SQLite users can query the sqlite_master table just like any other table
in the database.  But the sqlite_master table cannot be directly written.
The sqlite_master table is automatically updated in response to CREATE
and DROP statements but it cannot be changed using INSERT, UPDATE, or
DELETE statements as that would risk corrupting the database.
</p>
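
<p>
For example, a program using the version 2 C API can list the tables in
the schema, together with their root pages, using an ordinary query.
This is a minimal sketch (the function names print_row() and
list_tables() are ours):
</p>

<blockquote><pre>
#include &lt;stdio.h&gt;
#include "sqlite.h"

/* Standard sqlite_exec() callback: print one row of the result set. */
static int print_row(void *pNotUsed, int nCol, char **azVal, char **azCol){
  int i;
  for(i=0; i&lt;nCol; i++){
    printf("%s = %s\n", azCol[i], azVal[i] ? azVal[i] : "NULL");
  }
  return 0;  /* returning non-zero would abort the query */
}

/* List every table in the schema and the root page of its b-tree. */
int list_tables(sqlite *db, char **pzErrMsg){
  return sqlite_exec(db,
      "SELECT name, rootpage FROM sqlite_master WHERE type='table'",
      print_row, 0, pzErrMsg);
}
</pre></blockquote>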

<p>
SQLite stores temporary tables and indices in a separate
file from the main database file.  The temporary table database file
has the same structure as the main database file.  The schema table
for the temporary tables is stored on page 2 just as in the main
database.  But the schema table for the temporary database is named
"sqlite_temp_master" instead of "sqlite_master".  Other than the
name change, it works exactly the same.
</p>

<h4>4.4 &nbsp; Schema Version Numbering And Other Meta-Information</h4>

<p>
The nine 32-bit integers that are stored beginning at byte offset
60 of Page 1 in the b-tree layer are passed up into the schema layer
and used for versioning and configuration information.  The meaning
of the first four integers is shown below.  The other five are currently
unused.
</p>

<ol>
<li>The schema version number</li>
<li>The format version number</li>
<li>The recommended pager cache size</li>
<li>The safety level</li>
</ol>

<p>
The first meta-value, the schema version number, is used to detect when
the schema of the database is changed by a CREATE or DROP statement.
Recall that when a database is first opened the sqlite_master table is
scanned and an internal representation of the tables, indices, views,
and triggers for the database is built in memory.  This internal
representation is used for all subsequent SQL command parsing and
execution.  But what if another process were to change the schema
by adding or removing a table, index, view, or trigger?  If the original
process were to continue using the old schema, it could potentially
corrupt the database by writing to a table that no longer exists.
To avoid this problem, the schema version number is changed whenever
a CREATE or DROP statement is executed.  Before each command is
executed, the current schema version number for the database file
is compared against the schema version number from when the sqlite_master
table was last read.  If those numbers are different, the internal
schema representation is erased and the sqlite_master table is reread
to reconstruct the internal schema representation.
(Calls to sqlite_exec() generally return SQLITE_SCHEMA when this happens.)
</p>
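
<p>
In application code this typically shows up as a retry loop around
sqlite_exec().  A minimal sketch (the wrapper name and the retry limit
are ours):
</p>

<blockquote><pre>
#include "sqlite.h"

/* Run an SQL statement, retrying a few times if another process has
** changed the schema since sqlite_master was last read.  Re-running
** the statement causes the library to re-read sqlite_master and
** rebuild its in-memory schema. */
int exec_with_schema_retry(sqlite *db, const char *zSql, char **pzErrMsg){
  int nTry;
  int rc = sqlite_exec(db, zSql, 0, 0, pzErrMsg);
  for(nTry=0; rc==SQLITE_SCHEMA && nTry&lt;5; nTry++){
    if( pzErrMsg && *pzErrMsg ){
      sqlite_freemem(*pzErrMsg);
      *pzErrMsg = 0;
    }
    rc = sqlite_exec(db, zSql, 0, 0, pzErrMsg);
  }
  return rc;
}
</pre></blockquote>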

<p>
The second meta-value is the schema format version number.  This
number tells what version of the schema layer should be used to
interpret the file.  There have been changes to the schema layer
over time and this number is used to detect when an older database
file is being processed by a newer version of the library.
As of this writing (SQLite version 2.7.0) the current format version
is "4".
</p>

<p>
The third meta-value is the recommended pager cache size as set 
by the DEFAULT_CACHE_SIZE pragma.  If the value is positive it
means that synchronous behavior is enabled (via the DEFAULT_SYNCHRONOUS
pragma) and if negative it means that synchronous behavior is
disabled.
</p>

<p>
The fourth meta-value is the safety level, added in version 2.8.0.
A value of 1 corresponds to a SYNCHRONOUS setting of OFF.  In other
words, SQLite does not pause to wait for journal data to reach the disk
surface before overwriting pages of the database.  A value of 2 corresponds
to a SYNCHRONOUS setting of NORMAL.  A value of 3 corresponds to a
SYNCHRONOUS setting of FULL. If the value is 0, that means it has not
been initialized so the default synchronous setting of NORMAL is used.
</p>
}
footer $rcsid

--- NEW FILE: formatchng.tcl ---
#
# Run this Tcl script to generate the formatchng.html file.
#
set rcsid {$Id: formatchng.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $ }
source common.tcl
header {File Format Changes in SQLite}
puts {
<h2>File Format Changes in SQLite</h2>

<p>
From time to time, enhancements or bug fixes require a change to
the underlying file format for SQLite.  When this happens and you
want to upgrade your library, you must convert the contents of your
databases into a portable ASCII representation using the old version
of the library then reload the data using the new version of the
library.
</p>

<p>
You can tell if you should reload your databases by comparing the
version numbers of the old and new libraries.  If either of the
first two digits in the version number change, then a reload is
either required or recommended.  For example, upgrading from
version 1.0.32 to 2.0.0 requires a reload.  So does going from
version 2.0.8 to 2.1.0.
</p>

<p>
The following table summarizes the SQLite file format changes that have
occurred since version 1.0.0:
</p>

<blockquote>
<table border=2 cellpadding=5>
<tr>
  <th>Version Change</th>
  <th>Approx. Date</th>
  <th>Description Of File Format Change</th>
</tr>
<tr>
  <td valign="top">1.0.32 to 2.0.0</td>
  <td valign="top">2001-Sep-20</td>
  <td>Version 1.0.X of SQLite used the GDBM library as its backend
  interface to the disk.  Beginning in version 2.0.0, GDBM was replaced
  by a custom B-Tree library written especially for SQLite.  The new
  B-Tree backend is twice as fast as GDBM, supports atomic commits and
  rollback, and stores an entire database in a single disk file instead
  of using a separate file for each table as GDBM does.  The two
  file formats are not even remotely similar.</td>
</tr>
<tr>
  <td valign="top">2.0.8 to 2.1.0</td>
  <td valign="top">2001-Nov-12</td>
  <td>The same basic B-Tree format is used but the details of the 
  index keys were changed in order to provide better query 
  optimization opportunities.  Some of the headers were also changed in order
  to increase the maximum size of a row from 64KB to 24MB.</td>
</tr>
<tr>
  <td valign="top">2.1.7 to 2.2.0</td>
  <td valign="top">2001-Dec-21</td>
  <td>Beginning with version 2.2.0, SQLite no longer builds an index for
  an INTEGER PRIMARY KEY column.  Instead, it uses that column as the actual
  B-Tree key for the main table.<p>Version 2.2.0 and later of the library
  will automatically detect when it is reading a 2.1.x database and will
  disable the new INTEGER PRIMARY KEY feature.   In other words, version
  2.2.x is backward compatible with version 2.1.x.  But version 2.1.x is not
  forward compatible with version 2.2.x. If you try to open
  a 2.2.x database with an older 2.1.x library and that database contains
  an INTEGER PRIMARY KEY, you will likely get a coredump.  If the database
  schema does not contain any INTEGER PRIMARY KEYs, then the version 2.1.x
  and version 2.2.x database files will be identical and completely
  interchangeable.</p>
</tr>
<tr>
  <td valign="top">2.2.5 to 2.3.0</td>
  <td valign="top">2002-Jan-30</td>
  <td>Beginning with version 2.3.0, SQLite supports some additional syntax
  (the "ON CONFLICT" clause) in the CREATE TABLE and CREATE INDEX statements
  that are stored in the SQLITE_MASTER table.  If you create a database that
  contains this new syntax, then try to read that database using version 2.2.5
  or earlier, the parser will not understand the new syntax and you will get
  an error.  Otherwise, databases for 2.2.x and 2.3.x are interchangeable.</td>
</tr>
<tr>
  <td valign="top">2.3.3 to 2.4.0</td>
  <td valign="top">2002-Mar-10</td>
  <td>Beginning with version 2.4.0, SQLite added support for views. 
  Information about views is stored in the SQLITE_MASTER table.  If an older
  version of SQLite attempts to read a database that contains VIEW information
  in the SQLITE_MASTER table, the parser will not understand the new syntax
  and initialization will fail.  Also, the
  way SQLite keeps track of unused disk blocks in the database file
  changed slightly.
  If an older version of SQLite attempts to write a database that
  was previously written by version 2.4.0 or later, then it may leak disk
  blocks.</td>
</tr>
<tr>
  <td valign="top">2.4.12 to 2.5.0</td>
  <td valign="top">2002-Jun-17</td>
  <td>Beginning with version 2.5.0, SQLite added support for triggers. 
  Information about triggers is stored in the SQLITE_MASTER table.  If an older
  version of SQLite attempts to read a database that contains a CREATE TRIGGER
  in the SQLITE_MASTER table, the parser will not understand the new syntax
  and initialization will fail.
  </td>
</tr>
<tr>
  <td valign="top">2.5.6 to 2.6.0</td>
  <td valign="top">2002-July-17</td>
  <td>A design flaw in the layout of indices required a file format change
  to correct.  This change appeared in version 2.6.0.<p>

  If you use version 2.6.0 or later of the library to open a database file
  that was originally created by version 2.5.6 or earlier, an attempt to
  rebuild the database into the new format will occur automatically.
  This can take some time for a large database.  (Allow 1 or 2 seconds
  per megabyte of database under Unix - longer under Windows.)  This format
  conversion is irreversible.  It is <strong>strongly</strong> suggested
  that you make a backup copy of older database files prior to opening them
  with version 2.6.0 or later of the library, in case there are errors in
  the format conversion logic.<p>

  Version 2.6.0 or later of the library cannot open read-only database
  files from version 2.5.6 or earlier, since read-only files cannot be
  upgraded to the new format.</p>
  </td>
</tr>
<tr>
  <td valign="top">2.6.3 to 2.7.0</td>
  <td valign="top">2002-Aug-13</td>
  <td><p>Beginning with version 2.7.0, SQLite understands two different
  datatypes: text and numeric.  Text data sorts in memcmp() order.
  Numeric data sorts in numerical order if it looks like a number,
  or in memcmp() order if it does not.</p>

  <p>When SQLite version 2.7.0 or later opens a 2.6.3 or earlier database,
  it assumes all columns of all tables have type "numeric".  For 2.7.0
  and later databases, columns have type "text" if their datatype
  string contains the substrings "char" or "clob" or "blob" or "text".
  Otherwise they are of type "numeric".</p>

  <p>Because "text" columns have a different sort order from numeric,
  indices on "text" columns occur in a different order for version
  2.7.0 and later database.  Hence version 2.6.3 and earlier of SQLite 
  will be unable to read a 2.7.0 or later database.  But version 2.7.0
  and later of SQLite will read earlier databases.</p>
  </td>
</tr>
<tr>
  <td valign="top">2.7.6 to 2.8.0</td>
  <td valign="top">2003-Feb-14</td>
  <td><p>Version 2.8.0 introduces a change to the format of the rollback
  journal file.  The main database file format is unchanged.  Versions
  2.7.6 and earlier can read and write 2.8.0 databases and vice versa.
  Version 2.8.0 can roll back a transaction that was started by version
  2.7.6 and earlier.  But version 2.7.6 and earlier cannot roll back a
  transaction started by version 2.8.0 or later.</p>

  <p>The only time this would ever be an issue is when you have a program
  using version 2.8.0 or later that crashes with an incomplete
  transaction, then you try to examine the database using version 2.7.6 or
  earlier.  The 2.7.6 code will not be able to read the journal file
  and thus will not be able to roll back the incomplete transaction
  to restore the database.</p>
  </td>
</tr>
<tr>
  <td valign="top">2.8.14 to 3.0.0</td>
  <td valign="top">(pending)</td>
  <td><p>Version 3.0.0 is a major upgrade for SQLite that incorporates
  support for UTF-16, BLOBs, and a more compact encoding that results
  in database files that are typically 25% to 35% smaller.  The new file
  format is radically different and completely incompatible with the
  version 2 file format.</p>
  </td>
</tr>
</table>
</blockquote>

<p>
To perform a database reload, have ready versions of the
<b>sqlite</b> command-line utility for both the old and new
version of SQLite.  Call these two executables "<b>sqlite-old</b>"
and "<b>sqlite-new</b>".  Suppose the name of your old database
is "<b>old.db</b>" and you want to create a new database with
the same information named "<b>new.db</b>".  The command to do
this is as follows:
</p>

<blockquote>
  echo .dump | sqlite-old old.db | sqlite-new new.db
</blockquote>
}
footer $rcsid

--- NEW FILE: index.tcl ---
#!/usr/bin/tclsh
source common.tcl
header {SQLite home page}
puts {
<table width="100%" border="0" cellspacing="5">
<tr>
<td width="50%" valign="top">
<h2>About SQLite</h2>
<p>
SQLite is a small C library that implements a 
self-contained, embeddable,
zero-configuration SQL database engine.
Features include:
</p>

<p><ul>
<li>Transactions are atomic, consistent, isolated, and durable (ACID)
    even after system crashes and power failures.</li>
<li>Zero-configuration - no setup or administration needed.</li>
<li>Implements most of SQL92.
    (<a href="omitted.html">Features not supported</a>)</li>
<li>A complete database is stored in a single disk file.</li>
<li>Database files can be freely shared between machines with
    different byte orders.</li>
<li>Supports databases up to 2 terabytes (2<sup>41</sup> bytes) in size.</li>
<li>Sizes of strings and BLOBs limited only by available memory.</li>
<li>Small code footprint: less than 30K lines of C code,
    less than 250KB code space (gcc on i486)</li>
<li><a href="speed.html">Faster</a> than popular client/server database
    engines for most common operations.</li>
<li>Simple, easy to use <a href="c_interface.html">API</a>.</li>
<li><a href="tclsqlite.html">TCL bindings</a> included.
    Bindings for many other languages 
    <a href="http://www.sqlite.org/cvstrac/wiki?p=SqliteWrappers">
    available separately.</a></li>
<li>Well-commented source code with over 90% test coverage.</li>
<li>Self-contained: no external dependencies.</li>
<li>Sources are in the <a href="copyright.html">public domain</a>.
    Use for any purpose.</li>
</ul>
</p>

<p>
The SQLite distribution comes with a standalone command-line
access program (<a href="sqlite.html">sqlite</a>) that can
be used to administer an SQLite database and which serves as
an example of how to use the SQLite library.
</p>

</td>
<td width="1" bgcolor="#80a796"></td>
<td valign="top" width="50%">
<h2>News</h2>
}

proc newsitem {date title text} {
  puts "<h3>$date - $title</h3>"
  regsub -all "\n( *\n)+" $text "</p>\n\n<p>" txt
  puts "<p>$txt</p>"
  puts "<hr width=\"50%\">"
}

newsitem {2004-Sep-18} {Version 3.0.7} {
  Version 3.0 has now been in use by multiple projects for several
  months with no major difficulties.   We consider it stable and
  ready for production use. 
}


newsitem {2004-Jly-22} {Version 2.8.15} {
  SQLite version 2.8.15 is a maintenance release for the version 2.8
  series.  Version 2.8 continues to be maintained with bug fixes, but
  no new features will be added to version 2.8.  All the changes in
  this release are minor.  If you are not having problems, there is
  no reason to upgrade.
}
  

puts {
<p align="right"><a href="oldnews.html">Old news...</a></p>
</td></tr></table>
}
footer {$Id: index.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}

--- NEW FILE: lang.tcl ---
#
# Run this Tcl script to generate the sqlite.html file.
#
set rcsid {$Id: lang.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {Query Language Understood by SQLite}
puts {
<h2>SQL As Understood By SQLite</h2>

<p>The SQLite library understands most of the standard SQL
language.  But it does <a href="omitted.html">omit some features</a>
while at the same time
adding a few features of its own.  This document attempts to
describe precisely what parts of the SQL language SQLite does
and does not support.  A list of <a href="#keywords">keywords</a> is 
given at the end.</p>

<p>In all of the syntax diagrams that follow, literal text is shown in
bold blue.  Non-terminal symbols are shown in italic red.  Operators
[...1773 lines suppressed...]
}

puts {
<h2>Special words</h2>

<p>The following are not keywords in SQLite, but are used as names of 
system objects.  They can be used as an identifier for a different 
type of object.</p>
}

keyword_list {
  *_ROWID_
  *MAIN
  OID
  *ROWID
  *SQLITE_MASTER
  *SQLITE_TEMP_MASTER
}

footer $rcsid

--- NEW FILE: lockingv3.tcl ---
#
# Run this script to generated a lockingv3.html output file
#
set rcsid {$Id: }
source common.tcl
header {File Locking And Concurrency In SQLite Version 3}

proc HEADING {level title} {
  global pnum
  incr pnum($level)
  foreach i [array names pnum] {
    if {$i>$level} {set pnum($i) 0}
  }
  set h [expr {$level+1}]
  if {$h>6} {set h 6}
  set n $pnum(1).$pnum(2)
  for {set i 3} {$i<=$level} {incr i} {
    append n .$pnum($i)
  }
  puts "<h$h>$n $title</h$h>"
}
set pnum(1) 0
set pnum(2) 0
set pnum(3) 0
set pnum(4) 0
set pnum(5) 0
set pnum(6) 0
set pnum(7) 0
set pnum(8) 0

HEADING 1 {File Locking And Concurrency In SQLite Version 3}

puts {
<p>Version 3 of SQLite introduces a more complex locking and journaling 
mechanism designed to improve concurrency and reduce the writer starvation 
problem.  The new mechanism also allows atomic commits of transactions
involving multiple database files.
This document describes the new locking mechanism.
The intended audience is programmers who want to understand and/or modify
the pager code and reviewers working to verify the design
of SQLite version 3.
</p>
}

HEADING 1 {Overview}

puts {
<p>
Locking and concurrency control are handled by the
<a href="http://www.sqlite.org/cvstrac/getfile/sqlite/src/pager.c">
pager module</a>.
The pager module is responsible for making SQLite "ACID" (Atomic,
Consistent, Isolated, and Durable).  The pager module makes sure changes
happen all at once, that either all changes occur or none of them do,
that two or more processes do not try to access the database
in incompatible ways at the same time, and that once changes have been
written they persist until explicitly deleted.  The pager also provides
a memory cache of some of the contents of the disk file.</p>

<p>The pager is unconcerned
with the details of B-Trees, text encodings, indices, and so forth.
From the point of view of the pager the database consists of
a single file of uniform-sized blocks.  Each block is called a
"page" and is usually 1024 bytes in size.   The pages are numbered
beginning with 1.  So the first 1024 bytes of the database are called
"page 1" and the second 1024 bytes are call "page 2" and so forth. All 
other encoding details are handled by higher layers of the library.  
The pager communicates with the operating system using one of several
modules 
(Examples:
<a href="http://www.sqlite.org/cvstrac/getfile/sqlite/src/os_unix.c">
os_unix.c</a>,
<a href="http://www.sqlite.org/cvstrac/getfile/sqlite/src/os_win.c">
os_win.c</a>)
that provides a uniform abstraction for operating system services.
</p>

<p>The pager module effectively controls access for separate threads, or
separate processes, or both.  Throughout this document whenever the
word "process" is written you may substitute the word "thread" without
changing the truth of the statement.</p>
}

HEADING 1 {Locking}

puts {
<p>
From the point of view of a single process, a database file
can be in one of five locking states:
</p>

<p>
<table cellpadding="20">
<tr><td valign="top">UNLOCKED</td>
<td valign="top">
No locks are held on the database.  The database may be neither read nor
written.  Any internally cached data is considered suspect and subject to
verification against the database file before being used.  Other 
processes can read or write the database as their own locking states
permit.  This is the default state.
</td></tr>

<tr><td valign="top">SHARED</td>
<td valign="top">
The database may be read but not written.  Any number of 
processes can hold SHARED locks at the same time, hence there can be
many simultaneous readers.  But no other thread or process is allowed
to write to the database file while one or more SHARED locks are active.
</td></tr>

<tr><td valign="top">RESERVED</td>
<td valign="top">
A RESERVED lock means that the process is planning on writing to the
database file at some point in the future but that it is currently just
reading from the file.  Only a single RESERVED lock may be active at one
time, though multiple SHARED locks can coexist with a single RESERVED lock.
RESERVED differs from PENDING in that new SHARED locks can be acquired
while there is a RESERVED lock.
</td></tr>

<tr><td valign="top">PENDING</td>
<td valign="top">
A PENDING lock means that the process holding the lock wants to write
to the database as soon as possible and is just waiting on all current
SHARED locks to clear so that it can get an EXCLUSIVE lock.  No new 
SHARED locks are permitted against the database if
a PENDING lock is active, though existing SHARED locks are allowed to
continue.
</td></tr>

<tr><td valign="top">EXCLUSIVE</td>
<td valign="top">
An EXCLUSIVE lock is needed in order to write to the database file.
Only one EXCLUSIVE lock is allowed on the file and no other locks of
any kind are allowed to coexist with an EXCLUSIVE lock.  In order to
maximize concurrency, SQLite works to minimize the amount of time that
EXCLUSIVE locks are held.
</td></tr>
</table>
</p>

<p>
The operating system interface layer understands and tracks all five
locking states described above.  
The pager module only tracks four of the five locking states.
A PENDING lock is always just a temporary
stepping stone on the path to an EXCLUSIVE lock and so the pager module
does not track PENDING locks.
</p>
}

HEADING 1 {The Rollback Journal}

puts {
<p>Any time a process wants to make a change to a database file, it
first records enough information in the <em>rollback journal</em> to
restore the database file back to its initial condition.  Thus, before
altering any page of the database, the original contents of that page
must be written into the journal.  The journal also records the initial
size of the database so that if the database file grows it can be truncated
back to its original size on a rollback.</p>

<p>The rollback journal is an ordinary disk file that has the same name as
the database file with the suffix "<tt>-journal</tt>" added.</p>

<p>If SQLite is working with multiple databases at the same time
(using the ATTACH command) then each database has its own journal.
But there is also a separate aggregate journal
called the <em>master journal</em>.
The master journal does not contain page data used for rolling back
changes.  Instead the master journal contains the names of the
individual file journals for each of the ATTACHed databases.   Each of
the individual file journals also contains the name of the master journal.
If there are no ATTACHed databases (or if none of the ATTACHed databases
is participating in the current transaction) no master journal is
created and the normal rollback journal contains an empty string
in the place normally reserved for recording the name of the master
journal.</p>

<p>An individual file journal is said to be <em>hot</em>
if it needs to be rolled back
in order to restore the integrity of its database.  
A hot journal is created when a process is in the middle of a database
update and a program or operating system crash or power failure prevents 
the update from completing.
Hot journals are an exception condition. 
Hot journals exist to facilitate recovery from crashes and power failures.
If everything is working correctly 
(that is, if there are no crashes or power failures)
you will never get a hot journal.
</p>

<p>
If no master journal is involved, then
a journal is hot if it exists and its corresponding database file
does not have a RESERVED lock.
If a master journal is named in the file journal, then the file journal
is hot if its master journal exists and there is no RESERVED
lock on the corresponding database file.
It is important to understand when a journal is hot, so the
preceding rules are restated below as bullets, followed by a short
sketch in C:
</p>

<ul>
<li>A journal is hot if...
    <ul>
    <li>It exists, and</li>
    <li>Its master journal exists or the master journal name is an
        empty string, and</li>
    <li>There is no RESERVED lock on the corresponding database file.</li>
    </ul>
</li>
</ul>
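
<p>
Expressed as C-style pseudocode the test looks like the following sketch.
The helpers exists_on_disk() and has_reserved_lock() are placeholders for
the checks that are really made through the OS abstraction layer
(os_unix.c or os_win.c); they are not actual SQLite functions.
</p>

<blockquote><pre>
int exists_on_disk(const char *zPath);       /* placeholder */
int has_reserved_lock(const char *zDbPath);  /* placeholder */

/* Return true if the journal zJournal for database zDb is hot.
** zMasterName is the master journal name read from the journal,
** or an empty string if no master journal is named. */
static int journal_is_hot(const char *zDb, const char *zJournal,
                          const char *zMasterName){
  if( !exists_on_disk(zJournal) ) return 0;   /* no journal at all */
  if( zMasterName && zMasterName[0] ){
    /* A master journal is named: the journal is hot only while
    ** that master journal still exists. */
    if( !exists_on_disk(zMasterName) ) return 0;
  }
  /* Either way, the journal is hot only if no process holds a
  ** RESERVED (or stronger) lock on the database file itself. */
  return !has_reserved_lock(zDb);
}
</pre></blockquote>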
}

HEADING 2 {Dealing with hot journals}

puts {
<p>
Before reading from a database file, SQLite always checks to see if that
database file has a hot journal.  If the file does have a hot journal, then
the journal is rolled back before the file is read.  In this way, we ensure
that the database file is in a consistent state before it is read.
</p>

<p>When a process wants to read from a database file, it follows
this sequence of steps:
</p>

<ol>
<li>Open the database file and obtain a SHARED lock.  If the SHARED lock
    cannot be obtained, fail immediately and return SQLITE_BUSY.</li>
<li>Check to see if the database file has a hot journal.   If the file
    does not have a hot journal, we are done.  Return immediately.
    If there is a hot journal, that journal must be rolled back by
    the subsequent steps of this algorithm.</li>
<li>Acquire a PENDING lock then an EXCLUSIVE lock on the database file.
    (Note: Do not acquire a RESERVED lock because that would make
    other processes think the journal was no longer hot.)  If we
    fail to acquire these locks it means another process
    is already trying to do the rollback.  In that case,
    drop all locks, close the database, and return SQLITE_BUSY. </li>
<li>Read the journal file and roll back the changes.</li>
<li>Wait for the rolled back changes to be written onto 
    the surface of the disk.  This protects the integrity of the database
    in case another power failure or crash occurs.</li>
<li>Delete the journal file.</li>
<li>Delete the master journal file if it is safe to do so.
    This step is optional.  It is here only to prevent stale
    master journals from cluttering up the disk drive.
    See the discussion below for details.</li>
<li>Drop the EXCLUSIVE and PENDING locks but retain the SHARED lock.</li>
</ol>

<p>After the algorithm above completes successfully, it is safe to 
read from the database file.  Once all reading has completed, the
SHARED lock is dropped.</p>
}

HEADING 2 {Deleting stale master journals}

puts {
<p>A stale master journal is a master journal that is no longer being
used for anything.  There is no requirement that stale master journals
be deleted.  The only reason for doing so is to free up disk space.</p>

<p>A master journal is stale if no individual file journals are pointing
to it.  To figure out if a master journal is stale, we first read the
master journal to obtain the names of all of its file journals.  Then
we check each of those file journals.  If any of the file journals named
in the master journal exists and points back to the master journal, then
the master journal is not stale.  If all file journals are either missing
or refer to other master journals or no master journal at all, then the
master journal we are testing is stale and can be safely deleted.</p>
}

HEADING 1 {Writing to a database file}

puts {
<p>To write to a database, a process must first acquire a SHARED lock
as described above (possibly rolling back incomplete changes if there
is a hot journal). 
After a SHARED lock is obtained, a RESERVED lock must be acquired.
The RESERVED lock signals that the process intends to write to the
database at some point in the future.  Only one process at a time
can hold a RESERVED lock.  But other processes can continue to read
the database while the RESERVED lock is held.
</p>

<p>If the process that wants to write is unable to obtain a RESERVED
lock, it must mean that another process already has a RESERVED lock.
In that case, the write attempt fails and returns SQLITE_BUSY.</p>

<p>After obtaining a RESERVED lock, the process that wants to write
creates a rollback journal.  The header of the journal is initialized
with the original size of the database file.  Space in the journal header
is also reserved for a master journal name, though the master journal
name is initially empty.</p>

<p>Before making changes to any page of the database, the process writes
the original content of that page into the rollback journal.  Changes
to pages are held in memory at first and are not written to the disk.
The original database file remains unaltered, which means that other
processes can continue to read the database.</p>

<p>Eventually, the writing process will want to update the database
file, either because its memory cache has filled up or because it is
ready to commit its changes.  Before this happens, the writer must
make sure no other process is reading the database and that the rollback
journal data is safely on the disk surface so that it can be used to
roll back incomplete changes in the event of a power failure.
The steps are as follows:</p>

<ol>
<li>Make sure all rollback journal data has actually been written to
    the surface of the disk (and is not just being held in the operating
    system's or disk controller's cache) so that if a power failure occurs
    the data will still be there after power is restored.</li>
<li>Obtain a PENDING lock and then an EXCLUSIVE lock on the database file.
    If other processes still have SHARED locks, the writer might have
    to wait until those SHARED locks clear before it is able to obtain
    an EXCLUSIVE lock.</li>
<li>Write all page modifications currently held in memory out to the
    original database disk file.</li>
</ol>

<p>
If the reason for writing to the database file is because the memory
cache was full, then the writer will not commit right away.  Instead,
the writer might continue to make changes to other pages.  Before 
subsequent changes are written to the database file, the rollback
journal must be flushed to disk again.  Note also that the EXCLUSIVE
lock that the writer obtained in order to write to the database initially
must be held until all changes are committed.  That means that no other
processes are able to access the database from the
time the memory cache first spills to disk until the transaction
commits.
</p>

<p>
When a writer is ready to commit its changes, it executes the following
steps:
</p>

<ol>
<li value="4">
   Obtain an EXCLUSIVE lock on the database file and
   make sure all memory changes have been written to the database file
   using the algorithm of steps 1-3 above.</li>
<li>Flush all database file changes to the disk.  Wait for those changes
    to actually be written onto the disk surface.</li>
<li>Delete the journal file.  This is the instant when the changes are
    committed.  Prior to deleting the journal file, if a power failure
    or crash occurs, the next process to open the database will see that
    it has a hot journal and will roll the changes back.
    After the journal is deleted, there will no longer be a hot journal
    and the changes will persist.
    </li>
<li>Drop the EXCLUSIVE and PENDING locks from the database file.
    </li>
</ol>

<p>As soon as the PENDING lock is released from the database file, other
processes can begin reading the database again.  In the current implementation,
the RESERVED lock is also released, but that is not essential.  Future
versions of SQLite might provide a "CHECKPOINT" SQL command that will
commit all changes made so far within a transaction but retain the
RESERVED lock so that additional changes can be made without giving
any other process an opportunity to write.</p>

<p>If a transaction involves multiple databases, then a more complex
commit sequence is used, as follows:</p>

<ol>
<li value="4">
   Make sure all individual database files have an EXCLUSIVE lock and a
   valid journal.
<li>Create a master journal.  The name of the master journal is arbitrary.
    (The current implementation appends random suffixes to the name of the
    main database file until it finds a name that does not already exist.)
    Fill the master journal with the names of all the individual journals
    and flush its contents to disk.
<li>Write the name of the master journal into
    all individual journals (in space set aside for that purpose in the
    headers of the individual journals) and flush the contents of the
    individual journals to disk and wait for those changes to reach the
    disk surface.
<li>Flush all database file changes to the disk.  Wait for those changes
    to actually be written onto the disk surface.</li>
<li>Delete the master journal file.  This is the instant when the changes are
    committed.  Prior to deleting the master journal file, if a power failure
    or crash occurs, the individual file journals will be considered hot
    and will be rolled back by the next process that
    attempts to read them.  After the master journal has been deleted,
    the file journals will no longer be considered hot and the changes
    will persist.
    </li>
<li>Delete all individual journal files.
<li>Drop the EXCLUSIVE and PENDING locks from all database files.
    </li>
</ol>
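
<p>For example, a transaction that modifies both the main database and an
attached database goes through the master journal commit sequence above.
The following C sketch shows one way to set up such a transaction; the
file and table names are placeholders and the tables are assumed to
already exist in the respective files:</p>

<blockquote><pre>
#include &lt;stdio.h&gt;
#include &lt;sqlite3.h&gt;

int main(void){
  sqlite3 *db;
  char *err = 0;
  if( sqlite3_open("main.db", &amp;db)!=SQLITE_OK ) return 1;
  /* Changes to two database files within one transaction cause SQLite
  ** to create a master journal at commit time. */
  if( sqlite3_exec(db,
        "ATTACH DATABASE 'second.db' AS aux;"
        "BEGIN;"
        "INSERT INTO t1 VALUES(1);"      /* table in main.db   */
        "INSERT INTO aux.t2 VALUES(2);"  /* table in second.db */
        "COMMIT;",
        0, 0, &amp;err)!=SQLITE_OK ){
    fprintf(stderr, "commit failed: %s\n", err);
    sqlite3_free(err);
  }
  sqlite3_close(db);
  return 0;
}
</pre></blockquote>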
}

HEADING 2 {Writer starvation}

puts {
<p>In SQLite version 2, if many processes are reading from the database,
it might be the case that there is never a time when there are
no active readers.  And if there is always at least one read lock on the
database, no process would ever be able to make changes to the database
because it would be impossible to acquire a write lock.  This situation
is called <em>writer starvation</em>.</p>

<p>SQLite version 3 seeks to avoid writer starvation through the use of
the PENDING lock.  The PENDING lock allows existing readers to continue
but prevents new readers from connecting to the database.  So when a
process wants to write to a busy database, it can set a PENDING lock which
will prevent new readers from coming in.  Assuming existing readers do
eventually complete, all SHARED locks will eventually clear and the
writer will be given a chance to make its changes.</p>
}

HEADING 1 {How To Corrupt Your Database Files}

puts {
<p>The pager module is robust but it is not completely failsafe.
It can be subverted.  This section attempts to identify and explain
the risks.</p>

<p>
Clearly, a hardware or operating system fault that introduces incorrect data
into the middle of the database file or journal will cause problems.
Likewise, 
if a rogue process opens a database file or journal and writes malformed
data into the middle of it, then the database will become corrupt.
There is not much that can be done about these kinds of problems
so they are given no further attention.
</p>

<p>
SQLite uses POSIX advisory locks to implement locking on Unix.  On
Windows it uses the LockFile(), LockFileEx(), and UnlockFile() system
calls.  SQLite assumes that these system calls all work as advertised.  If
that is not the case, then database corruption can result.  One should
note that POSIX advisory locking is known to be buggy or even unimplemented
on many NFS implementations (including recent versions of Mac OS X)
and that there are reports of locking problems
for network filesystems under Windows.  Your best defense is to not
use SQLite for files on a network filesystem.
</p>

<p>
SQLite uses the fsync() system call to flush data to the disk under Unix and
it uses FlushFileBuffers() to do the same under Windows.  Once again,
SQLite assumes that these operating system services function as advertised.
But it has been reported that fsync() and FlushFileBuffers() do not always
work correctly, especially with inexpensive IDE disks.  Apparently some
manufacturers of IDE disks have defective controller chips that report
that data has reached the disk surface when in fact the data is still
in volatile cache memory in the disk drive electronics.  There are also
reports that Windows sometimes chooses to ignore FlushFileBuffers() for
unspecified reasons.  The author cannot verify any of these reports.
But if they are true, it means that database corruption is a possibility
following an unexpected power loss.  These are hardware and/or operating
system bugs that SQLite is unable to defend against.
</p>

<p>
If a crash or power failure occurs and results in a hot journal but that
journal is deleted, the next process to open the database will not
know that it contains changes that need to be rolled back.  The rollback
will not occur and the database will be left in an inconsistent state.
Rollback journals might be deleted for any number of reasons:
</p>

<ul>
<li>An administrator might be cleaning up after an OS crash or power failure,
    see the journal file, think it is junk, and delete it.</li>
<li>Someone (or some process) might rename the database file but fail to
    also rename its associated journal.</li>
<li>If the database file has aliases (hard or soft links) and the file
    is opened by a different alias than the one used to create the journal,
    then the journal will not be found.  To avoid this problem, you should
    not create links to SQLite database files.</li>
<li>Filesystem corruption following a power failure might cause the
    journal to be renamed or deleted.</li>
</ul>

<p>
The last (fourth) bullet above merits additional comment.  When SQLite creates
a journal file on Unix, it opens the directory that contains that file and
calls fsync() on the directory, in an effort to push the directory information
to disk.  But suppose some other process is adding files to or removing
files from the directory that contains the database and journal at the
moment of a power failure.  The supposedly unrelated actions of this other
process might result in the journal file being dropped from the directory and
moved into "lost+found".  This is an unlikely scenario, but it could happen.
The best defenses are to use a journaling filesystem or to keep the
database and journal in a directory by themselves.
</p>

<p>
For a commit involving multiple databases and a master journal, if the
various databases were on different disk volumes and a power failure occurs
during the commit, then when the machine comes back up the disks might
be remounted with different names.  Or some disks might not be mounted
at all.   When this happens the individual file journals and the master
journal might not be able to find each other. The worst outcome from
this scenario is that the commit ceases to be atomic.  
Some databases might be rolled back and others might not. 
All databases will continue to be self-consistent.
To defend against this problem, keep all databases
on the same disk volume and/or remount disks using exactly the same names
after a power failure.
</p>
}

HEADING 1 {Transaction Control At The SQL Level}

puts {
<p>
The changes to locking and concurrency control in SQLite version 3 also
introduce some subtle changes in the way transactions work at the SQL
language level.
By default, SQLite version 3 operates in <em>autocommit</em> mode.
In autocommit mode,
all changes to the database are committed as soon as all operations associated
with the current database connection complete.</p>

<p>The SQL command "BEGIN TRANSACTION" (the TRANSACTION keyword
is optional) is used to take SQLite out of autocommit mode.
Note that the BEGIN command does not acquire any locks on the database.
After a BEGIN command, a SHARED lock will be acquired when the first
SELECT statement is executed.  A RESERVED lock will be acquired when
the first INSERT, UPDATE, or DELETE statement is executed.  No EXCLUSIVE
lock is acquired until either the memory cache fills up and must
be spilled to disk or until the transaction commits.  In this way,
the system delays blocking read access to the database file until the
last possible moment.
</p>
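
<p>The lock escalation can be seen in a short C sketch like the one below
(error checking is omitted for brevity and the table name is a placeholder);
the comments note which lock each statement causes the pager to acquire:</p>

<blockquote><pre>
#include &lt;sqlite3.h&gt;

int main(void){
  sqlite3 *db;
  if( sqlite3_open("test.db", &amp;db)!=SQLITE_OK ) return 1;
  sqlite3_exec(db, "BEGIN", 0, 0, 0);                  /* no lock taken yet */
  sqlite3_exec(db, "SELECT count(*) FROM tbl1",
               0, 0, 0);                               /* SHARED lock       */
  sqlite3_exec(db, "UPDATE tbl1 SET x=x+1", 0, 0, 0);  /* RESERVED lock     */
  sqlite3_exec(db, "COMMIT", 0, 0, 0);                 /* EXCLUSIVE lock at
                                                       ** commit, then all
                                                       ** locks dropped     */
  sqlite3_close(db);
  return 0;
}
</pre></blockquote>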

<p>The SQL command "COMMIT"  does not actually commit the changes to
disk.  It just turns autocommit back on.  Then, at the conclusion of
the command, the regular autocommit logic takes over and causes the
actual commit to disk to occur.
The SQL command "ROLLBACK" also operates by turning autocommit back on,
but it also sets a flag that tells the autocommit logic to rollback rather
than commit.</p>

<p>If the SQL COMMIT command turns autocommit on and the autocommit logic
then tries to commit changes but fails because some other process is holding
a SHARED lock, then autocommit is turned back off automatically.  This
allows the user to retry the COMMIT at a later time after the SHARED lock
has had an opportunity to clear.</p>
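
<p>One way to take advantage of this is a small retry loop around the
COMMIT.  The sketch below is illustrative only; the delay and retry count
are arbitrary choices, not part of SQLite:</p>

<blockquote><pre>
#include &lt;unistd.h&gt;
#include &lt;sqlite3.h&gt;

/* Try to COMMIT, retrying a few times if a reader still holds a SHARED
** lock.  Because a failed COMMIT leaves the transaction open, no work is
** lost between attempts.  Returns the final sqlite3_exec() result code. */
static int commit_with_retry(sqlite3 *db, int attempts){
  int rc = SQLITE_BUSY;
  while( attempts-- &gt; 0 ){
    rc = sqlite3_exec(db, "COMMIT", 0, 0, 0);
    if( rc!=SQLITE_BUSY ) break;   /* committed, or a genuine error   */
    sleep(1);                      /* give readers a chance to finish */
  }
  return rc;
}
</pre></blockquote>

<p>If the retries are exhausted, the application can still issue a ROLLBACK
and attempt the whole transaction again later.</p>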

<p>If multiple commands are being executed against the same SQLite database
connection at the same time, autocommit is deferred until the very
last command completes.  For example, if a SELECT statement is being
executed, the execution of the command will pause as each row of the
result is returned.  During this pause other INSERT, UPDATE, or DELETE
commands can be executed against other tables in the database.  But none
of these changes will commit until the original SELECT statement finishes.
</p>
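
<p>Using the C interface, for instance, a query can be stepped one row at
a time while INSERTs are executed on the same connection; the commit is
held off until the query is finalized.  A minimal sketch (the two table
names are placeholders and error checking is omitted):</p>

<blockquote><pre>
#include &lt;sqlite3.h&gt;

void copy_rows(sqlite3 *db){
  sqlite3_stmt *q;
  sqlite3_prepare(db, "SELECT x FROM src", -1, &amp;q, 0);
  while( sqlite3_step(q)==SQLITE_ROW ){
    char *sql = sqlite3_mprintf("INSERT INTO dst VALUES(%d)",
                                sqlite3_column_int(q, 0));
    sqlite3_exec(db, sql, 0, 0, 0);   /* runs now, commits later */
    sqlite3_free(sql);
  }
  sqlite3_finalize(q);   /* the SELECT is done; autocommit now commits */
}
</pre></blockquote>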
}


footer $rcsid

--- NEW FILE: mingw.tcl ---
#
# Run this Tcl script to generate the mingw.html file.
#
set rcsid {$Id: mingw.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}

puts {<html>
<head>
  <title>Notes On How To Build MinGW As A Cross-Compiler</title>
</head>
<body bgcolor=white>
<h1 align=center>
Notes On How To Build MinGW As A Cross-Compiler
</h1>}
puts "<p align=center>
(This page was last modified on [lrange $rcsid 3 4] UTC)
</p>"

puts {
<p><a href="http://www.mingw.org/">MinGW</a> or
<a href="http://www.mingw.org/">Minimalist GNU For Windows</a>
is a version of the popular GCC compiler that builds Win95/Win98/WinNT
binaries.  See the website for details.</p>

<p>This page describes how you can build MinGW 
from sources as a cross-compiler
running under Linux.  Doing so will allow you to construct
WinNT binaries from the comfort and convenience of your
Unix desktop.</p>
}

proc Link {path {file {}}} {
  if {$file!=""} {
    set path $path/$file
  } else {
    set file $path
  }
  puts "<a href=\"$path\">$file</a>"
}

puts {
<p>Here are the steps:</p>

<ol>
<li>
<p>Get a copy of the source code.  You will need the binutils, the
compiler, and the MinGW runtime.  Each is available separately.
As of this writing, Mumit Khan has collected everything you need
together in one FTP site:
}
set ftpsite \
  ftp://ftp.nanotech.wisc.edu/pub/khan/gnu-win32/mingw32/snapshots/gcc-2.95.2-1
Link $ftpsite
puts {
The three files you will need are:</p>
<ul>
<li>}
Link $ftpsite binutils-19990818-1-src.tar.gz
puts </li><li>
Link $ftpsite gcc-2.95.2-1-src.tar.gz
puts </li><li>
Link $ftpsite mingw-20000203.zip
puts {</li>
</ul>

<p>Put all the downloads in a directory out of the way.  The sequel
will assume all downloads are in a directory named
<b>~/mingw/download</b>.</p>
</li>

<li>
<p>
Create a directory in which to install the new compiler suite and make
the new directory writable.
Depending on what directory you choose, you might need to become
root.  The example shell commands that follow
will assume the installation directory is
<b>/opt/mingw</b> and that your user ID is <b>drh</b>.</p>
<blockquote><pre>
su
mkdir /opt/mingw
chown drh /opt/mingw
exit
</pre></blockquote>
</li>

<li>
<p>Unpack the source tarballs into a separate directory.</p>
<blockquote><pre>
mkdir ~/mingw/src
cd ~/mingw/src
tar xzf ../download/binutils-*.tar.gz
tar xzf ../download/gcc-*.tar.gz
unzip ../download/mingw-*.zip
</pre></blockquote>
</li>

<li>
<p>Create a directory in which to put all the build products.</p>
<blockquote><pre>
mkdir ~/mingw/bld
</pre></blockquote>
</li>

<li>
<p>Configure and build binutils and add the results to your PATH.</p>
<blockquote><pre>
mkdir ~/mingw/bld/binutils
cd ~/mingw/bld/binutils
../../src/binutils/configure --prefix=/opt/mingw --target=i386-mingw32 -v
make 2&gt;&amp;1 | tee make.out
make install 2&gt;&amp;1 | tee make-install.out
export PATH=$PATH:/opt/mingw/bin
</pre></blockquote>
</li>

<li>
<p>Manually copy the runtime include files into the installation directory
before trying to build the compiler.</p>
<blockquote><pre>
mkdir /opt/mingw/i386-mingw32/include
cd ~/mingw/src/mingw-runtime*/mingw/include
cp -r * /opt/mingw/i386-mingw32/include
</pre></blockquote>
</li>

<li>
<p>Configure and build the compiler</p>
<blockquote><pre>
mkdir ~/mingw/bld/gcc
cd ~/mingw/bld/gcc
../../src/gcc-*/configure --prefix=/opt/mingw --target=i386-mingw32 -v
cd gcc
make installdirs
cd ..
make 2&gt;&amp;1 | tee make.out
make install
</pre></blockquote>
</li>

<li>
<p>Configure and build the MinGW runtime</p>
<blockquote><pre>
mkdir ~/mingw/bld/runtime
cd ~/mingw/bld/runtime
../../src/mingw-runtime*/configure --prefix=/opt/mingw --target=i386-mingw32 -v
make install-target-w32api
make install
</pre></blockquote>
</li>
</ol>

<p>And you are done...</p>
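
<p>As a quick sanity check of the new toolchain, you can cross-compile a
trivial program such as the one below and run the resulting executable on
a Windows machine.  The cross tools built above are typically installed
with names prefixed by the target triplet, so the compile command would
look something like <b>i386-mingw32-gcc -o hello.exe hello.c</b> (the
exact driver name depends on the target used during configuration):</p>

<blockquote><pre>
#include &lt;stdio.h&gt;

int main(void){
  printf("Hello from a MinGW cross-compiled program\n");
  return 0;
}
</pre></blockquote>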
}
puts {
<p><hr /></p>
<p><a href="index.html"><img src="/goback.jpg" border=0 />
Back to the SQLite Home Page</a>
</p>

</body></html>}

--- NEW FILE: nulls.tcl ---
#
# Run this script to generate the nulls.html output file
#
set rcsid {$Id: nulls.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {NULL Handling in SQLite}
puts {
<h2>NULL Handling in SQLite Versus Other Database Engines</h2>

<p>
The goal is
to make SQLite handle NULLs in a standards-compliant way.
But the descriptions in the SQL standards on how to handle
NULLs seem ambiguous. 
It is not clear from the standards documents exactly how NULLs should
be handled in all circumstances.
</p>

<p>
So instead of going by the standards documents, various popular
SQL engines were tested to see how they handle NULLs.  The idea
was to make SQLite work like all the other engines.
A SQL test script was developed and run by volunteers on various
SQL RDBMSes and the results of those tests were used to deduce
how each engine processed NULL values.
The original tests were run in May of 2002.
A copy of the test script is found at the end of this document.
</p>

<p>
SQLite was originally coded in such a way that the answer to
all questions in the chart below would be "Yes".  But the
experiments run on other SQL engines showed that none of them
worked this way.  So SQLite was modified to work the same as
Oracle, PostgreSQL, and DB2.  This involved making NULLs
indistinct for the purposes of the SELECT DISTINCT statement and
for the UNION operator in a SELECT.  NULLs are still distinct
in a UNIQUE column.  This seems somewhat arbitrary, but the desire
to be compatible with other engines outweighed that objection.
</p>

<p>
It is possible to make SQLite treat NULLs as distinct for the
purposes of the SELECT DISTINCT and UNION.  To do so, one should
change the value of the NULL_ALWAYS_DISTINCT #define in the
<tt>sqliteInt.h</tt> source file and recompile.
</p>

<blockquote>
<p>
<i>Update 2003-07-13:</i>
Since this document was originally written some of the database engines
tested have been updated and users have been kind enough to send in
corrections to the chart below.  The original data showed a wide variety
of behaviors, but over time the range of behaviors has converged toward
the PostgreSQL/Oracle model.  The only significant difference 
is that Informix and MS-SQL both treat NULLs as
indistinct in a UNIQUE column.
</p>

<p>
The fact that NULLs are distinct for UNIQUE columns but are indistinct for
SELECT DISTINCT and UNION continues to be puzzling.  It seems that NULLs
should be either distinct everywhere or nowhere.  And the SQL standards
documents suggest that NULLs should be distinct everywhere.  Yet as of
this writing, no SQL engine tested treats NULLs as distinct in a SELECT
DISTINCT statement or in a UNION.
</p>
</blockquote>


<p>
The following table shows the results of the NULL handling experiments.
</p>

<table border=1 cellpadding=3 width="100%">
<tr><th>&nbsp;&nbsp;</th>
<th>SQLite</th>
<th>PostgreSQL</th>
<th>Oracle</th>
<th>Informix</th>
<th>DB2</th>
<th>MS-SQL</th>
<th>OCELOT</th>
</tr>

<tr><td>Adding anything to null gives null</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
<tr><td>Multiplying null by zero gives null</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
<tr><td>nulls are distinct in a UNIQUE column</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#aaaad2">(Note 4)</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
<tr><td>nulls are distinct in SELECT DISTINCT</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
</tr>
<tr><td>nulls are distinct in a UNION</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
</tr>
<tr><td>"CASE WHEN null THEN 1 ELSE 0 END" is 0?</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
<tr><td>"null OR true" is true</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
<tr><td>"not (null AND false)" is true</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
</table>

<table border=1 cellpadding=3 width="100%">
<tr><th>&nbsp;&nbsp;</th>
<th>MySQL<br>3.23.41</th>
<th>MySQL<br>4.0.16</th>
<th>Firebird</th>
<th>SQL<br>Anywhere</th>
<th>Borland<br>Interbase</th>
</tr>

<tr><td>Adding anything to null gives null</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
<tr><td>Multiplying null by zero gives null</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
<tr><td>nulls are distinct in a UNIQUE column</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#aaaad2">(Note 4)</td>
<td valign="center" align="center" bgcolor="#aaaad2">(Note 4)</td>
</tr>
<tr><td>nulls are distinct in SELECT DISTINCT</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No (Note 1)</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
</tr>
<tr><td>nulls are distinct in a UNION</td>
<td valign="center" align="center" bgcolor="#aaaad2">(Note 3)</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No (Note 1)</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
</tr>
<tr><td>"CASE WHEN null THEN 1 ELSE 0 END" is 0?</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#aaaad2">(Note 5)</td>
</tr>
<tr><td>"null OR true" is true</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
<tr><td>"not (null AND false)" is true</td>
<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
</tr>
</table>

<table border=0 align="right" cellpadding=0 cellspacing=0>
<tr>
<td valign="top" rowspan=5>Notes:&nbsp;&nbsp;</td>
<td>1.&nbsp;</td>
<td>Older versions of Firebird omit all NULLs from SELECT DISTINCT
and from UNION.</td>
</tr>
<tr><td>2.&nbsp;</td>
<td>Test data unavailable.</td>
</tr>
<tr><td>3.&nbsp;</td>
<td>MySQL version 3.23.41 does not support UNION.</td>
</tr>
<tr><td>4.&nbsp;</td>
<td>DB2, SQL Anywhere, and Borland Interbase 
do not allow NULLs in a UNIQUE column.</td>
</tr>
<tr><td>5.&nbsp;</td>
<td>Borland Interbase does not support CASE expressions.</td>
</tr>
</table>
<br clear="both">

<p>&nbsp;</p>
<p>
The following script was used to gather information for the table
above.
</p>

<pre>
-- I have about decided that SQL's treatment of NULLs is capricious and cannot be
-- deduced by logic.  It must be discovered by experiment.  To that end, I have 
-- prepared the following script to test how various SQL databases deal with NULL.
-- My aim is to use the information gathered from this script to make SQLite as much
-- like other databases as possible.
--
-- If you could please run this script in your database engine and mail the results
-- to me at drh@hwaci.com, that will be a big help.  Please be sure to identify the
-- database engine you use for this test.  Thanks.
--
-- If you have to change anything to get this script to run with your database
-- engine, please send your revised script together with your results.
--

-- Create a test table with data
create table t1(a int, b int, c int);
insert into t1 values(1,0,0);
insert into t1 values(2,0,1);
insert into t1 values(3,1,0);
insert into t1 values(4,1,1);
insert into t1 values(5,null,0);
insert into t1 values(6,null,1);
insert into t1 values(7,null,null);

-- Check to see what CASE does with NULLs in its test expressions
select a, case when b<>0 then 1 else 0 end from t1;
select a+10, case when not b<>0 then 1 else 0 end from t1;
select a+20, case when b<>0 and c<>0 then 1 else 0 end from t1;
select a+30, case when not (b<>0 and c<>0) then 1 else 0 end from t1;
select a+40, case when b<>0 or c<>0 then 1 else 0 end from t1;
select a+50, case when not (b<>0 or c<>0) then 1 else 0 end from t1;
select a+60, case b when c then 1 else 0 end from t1;
select a+70, case c when b then 1 else 0 end from t1;

-- What happens when you multiply a NULL by zero?
select a+80, b*0 from t1;
select a+90, b*c from t1;

-- What happens to NULL for other operators?
select a+100, b+c from t1;

-- Test the treatment of aggregate operators
select count(*), count(b), sum(b), avg(b), min(b), max(b) from t1;

-- Check the behavior of NULLs in WHERE clauses
select a+110 from t1 where b<10;
select a+120 from t1 where not b>10;
select a+130 from t1 where b<10 OR c=1;
select a+140 from t1 where b<10 AND c=1;
select a+150 from t1 where not (b<10 AND c=1);
select a+160 from t1 where not (c=1 AND b<10);

-- Check the behavior of NULLs in a DISTINCT query
select distinct b from t1;

-- Check the behavior of NULLs in a UNION query
select b from t1 union select b from t1;

-- Create a new table with a unique column.  Check to see if NULLs are considered
-- to be distinct.
create table t2(a int, b int unique);
insert into t2 values(1,1);
insert into t2 values(2,null);
insert into t2 values(3,null);
select * from t2;

drop table t1;
drop table t2;
</pre>
}

footer $rcsid

--- NEW FILE: oldnews.tcl ---
#!/usr/bin/tclsh
source common.tcl
header {SQLite Older News}

proc newsitem {date title text} {
  puts "<h3>$date - $title</h3>"
  regsub -all "\n( *\n)+" $text "</p>\n\n<p>" txt
  puts "<p>$txt</p>"
  puts "<hr width=\"50%\">"
}

newsitem {2004-Sep-02} {Version 3.0.6 (beta)} {
  Because of some important changes to sqlite3_step(),
  we have decided to
  do an additional beta release prior to the first "stable" release.
  If no serious problems are discovered in this version, we will
  release version 3.0 "stable" in about a week.
}


newsitem {2004-Aug-29} {Version 3.0.5 (beta)} {
  The fourth beta release of SQLite version 3.0 is now available.
  The next release is expected to be called "stable".
}


newsitem {2004-Aug-08} {Version 3.0.4 (beta)} {
  The third beta release of SQLite version 3.0 is now available.
  This new beta fixes several bugs including a database corruption
  problem that can occur when doing a DELETE while a SELECT is pending.
  Expect at least one more beta before version 3.0 goes final.
}

newsitem {2004-Jul-22} {Version 3.0.3 (beta)} {
  The second beta release of SQLite version 3.0 is now available.
  This new beta fixes many bugs and adds support for databases with
  varying page sizes.  The next 3.0 release will probably be called
  a final or stable release.

  Version 3.0 adds support for internationalization and a new
  more compact file format. 
  <a href="version3.html">Details.</a>
  The API and file format have been fixed since 3.0.2.  All
  regression tests pass (over 100000 tests) and the test suite
  exercises over 95% of the code.

  SQLite version 3.0 is made possible in part by AOL
  developers supporting and embracing great Open-Source Software.
}

newsitem {2004-Jun-30} {Version 3.0.2 (beta) Released} {
  The first beta release of SQLite version 3.0 is now available.
  Version 3.0 adds support for internationalization and a new
  more compact file format. 
  <a href="version3.html">Details.</a>
  As of this release, the API and file format are frozen.  All
  regression tests pass (over 100000 tests) and the test suite
  exercises over 95% of the code.

  SQLite version 3.0 is made possible in part by AOL
  developers supporting and embracing great Open-Source Software.
}
  

newsitem {2004-Jun-25} {Website hacked} {
  The www.sqlite.org website was hacked sometime around 2004-Jun-22
  because the lead SQLite developer failed to properly patch CVS.
  Evidence suggests that the attacker was unable to elevate privileges
  above user "cvs".  Nevertheless, as a precaution the entire website
  has been reconstructed from scratch on a fresh machine.  All services
  should be back to normal as of 2004-Jun-28.
}


newsitem {2004-Jun-18} {Version 3.0.0 (alpha) Released} {
  The first alpha release of SQLite version 3.0 is available for
  public review and comment.  Version 3.0 enhances internationalization support
  through the use of UTF-16 and user-defined text collating sequences.
  BLOBs can now be stored directly, without encoding.
  A new file format results in databases that are 25% smaller (depending
  on content).  The code is also a little faster.  In spite of the many
  new features, the library footprint is still less than 240KB
  (x86, gcc -O1).
  <a href="version3.html">Additional information</a>.

  Our intent is to freeze the file format and API on 2004-Jul-01.
  Users are encouraged to review and evaluate this alpha release carefully 
  and submit any feedback prior to that date.

  The 2.8 series of SQLite will continue to be supported with bug
  fixes for the foreseeable future.
}

newsitem {2004-Jun-09} {Version 2.8.14 Released} {
  SQLite version 2.8.14 is a patch release to the stable 2.8 series.
  There is no reason to upgrade if 2.8.13 is working ok for you.
  This is only a bug-fix release.  Most development effort is
  going into version 3.0.0 which is due out soon.
}

newsitem {2004-May-31} {CVS Access Temporarily Disabled} {
  Anonymous access to the CVS repository will be suspended
  for 2 weeks beginning on 2004-June-04.  Everyone will still
  be able to download
  prepackaged source bundles, create or modify trouble tickets, or view
  change logs during the CVS service interruption. Full open access to the
  CVS repository will be restored on 2004-June-18.
}

newsitem {2004-Apr-23} {Work Begins On SQLite Version 3} {
  Work has begun on version 3 of SQLite.  Version 3 is a major
  change to both the C-language API and the underlying file format
  that will enable SQLite to better support internationalization.
  The first beta is scheduled for release on 2004-July-01.

  Plans are to continue to support SQLite version 2.8 with
  bug fixes.  But all new development will occur in version 3.0.
}
footer {$Id: oldnews.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}

--- NEW FILE: omitted.tcl ---
#
# Run this script to generate the omitted.html output file
#
set rcsid {$Id: omitted.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {SQL Features That SQLite Does Not Implement}
puts {
<h2>SQL Features That SQLite Does Not Implement</h2>

<p>
Rather than try to list all the features of SQL92 that SQLite does
support, it is much easier to list those that it does not.
Unsupported features of SQL92 are shown below.</p>

<p>
The order of this list gives some hint as to when a feature might
be added to SQLite.  Those features near the top of the list are
likely to be added in the near future.  There are no immediate
plans to add features near the bottom of the list.
</p>

<table cellpadding="10">
}

proc feature {name desc} {
  puts "<tr><td valign=\"top\"><b><nobr>$name</nobr></b></td>"
  puts "<td width=\"10\">&nbsp;</th>"
  puts "<td valign=\"top\">$desc</td></tr>"
}

feature {CHECK constraints} {
  CHECK constraints are parsed but they are not enforced.
  NOT NULL and UNIQUE constraints are enforced, however.
}

feature {Variable subqueries} {
  Subqueries must be static.  They are evaluated only once.  They may not,
  therefore, refer to variables in the main query.
}

feature {FOREIGN KEY constraints} {
  FOREIGN KEY constraints are parsed but are not enforced.
}

feature {Complete trigger support} {
  There is some support for triggers but it is not complete.  Missing
  subfeatures include FOR EACH STATEMENT triggers (currently all triggers
  must be FOR EACH ROW), INSTEAD OF triggers on tables (currently 
  INSTEAD OF triggers are only allowed on views), and recursive
  triggers - triggers that trigger themselves.
}

feature {ALTER TABLE} {
  To change a table you have to delete it (saving its contents to a temporary
  table) and recreate it from scratch.
}

feature {Nested transactions} {
  The current implementation only allows a single active transaction.
}

feature {The COUNT(DISTINCT X) function} {
  You can accomplish the same thing using a subquery, like this:<br />
  &nbsp;&nbsp;SELECT count(x) FROM (SELECT DISTINCT x FROM tbl);
}

feature {RIGHT and FULL OUTER JOIN} {
  LEFT OUTER JOIN is implemented, but not RIGHT OUTER JOIN or
  FULL OUTER JOIN.
}

feature {Writing to VIEWs} {
  VIEWs in SQLite are read-only.  You may not execute a DELETE, INSERT, or
  UPDATE statement on a view. But you can create a trigger
  that fires on an attempt to DELETE, INSERT, or UPDATE a view and do
  what you need in the body of the trigger.
}

feature {GRANT and REVOKE} {
  Since SQLite reads and writes an ordinary disk file, the
  only access permissions that can be applied are the normal
  file access permissions of the underlying operating system.
  The GRANT and REVOKE commands commonly found on client/server
  RDBMSes are not implemented because they would be meaningless
  for an embedded database engine.
}

puts {
</table>

<p>
If you find other SQL92 features that SQLite does not support, please
add them to the Wiki page at 
<a href="http://www.sqlite.org/cvstrac/wiki?p=UnsupportedSql">
http://www.sqlite.org/cvstrac/wiki?p=UnsupportedSql</a>
</p>
}
footer $rcsid

--- NEW FILE: opcode.tcl ---
#
# Run this Tcl script to generate the opcode.html file.
#
set rcsid {$Id: opcode.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {SQLite Virtual Machine Opcodes}
puts {
<h2>SQLite Virtual Machine Opcodes</h2>
}

set fd [open [lindex $argv 0] r]
set file [read $fd [file size [lindex $argv 0]]]
close $fd
set current_op {}
foreach line [split $file \n] {
  set line [string trim $line]
  if {[string index $line 1]!="*"} {
    set current_op {}
    continue
  }
  if {[regexp {^/\* Opcode: } $line]} {
    set current_op [lindex $line 2]
    set Opcode($current_op:args) [lrange $line 3 end]
    lappend OpcodeList $current_op
    continue
  }
  if {$current_op==""} continue
  if {[regexp {^\*/} $line]} {
    set current_op {}
    continue
  }
  set line [string trim [string range $line 3 end]]
  if {$line==""} {
    append Opcode($current_op:text) \n<p>
  } else {
    append Opcode($current_op:text) \n$line
  }
}
unset file

puts {
<h3>Introduction</h3>

<p>In order to execute an SQL statement, the SQLite library first parses
the SQL, analyzes the statement, then generates a short program to execute
the statement.  The program is generated for a "virtual machine" implemented
by the SQLite library.  This document describes the operation of that
virtual machine.</p>

<p>This document is intended as a reference, not a tutorial.
A separate <a href="vdbe.html">Virtual Machine Tutorial</a> is 
available.  If you are looking for a narrative description
of how the virtual machine works, you should read the tutorial
and not this document.  Once you have a basic idea of what the
virtual machine does, you can refer back to this document for
the details on a particular opcode.
Unfortunately, the virtual machine tutorial was written for
SQLite version 1.0.  There are substantial changes in the virtual
machine for version 2.0 and the document has not been updated.
</p>

<p>The source code to the virtual machine is in the <b>vdbe.c</b> source
file.  All of the opcode definitions further down in this document are
contained in comments in the source file.  In fact, the opcode table
in this document
was generated by scanning the <b>vdbe.c</b> source file 
and extracting the necessary information from comments.  So the 
source code comments are really the canonical source of information
about the virtual machine.  When in doubt, refer to the source code.</p>

<p>Each instruction in the virtual machine consists of an opcode and
up to three operands named P1, P2 and P3.  P1 may be an arbitrary
integer.  P2 must be a non-negative integer.  P2 is always the
jump destination in any operation that might cause a jump.
P3 is a null-terminated
string or NULL.  Some operators use all three operands.  Some use
one or two.  Some operators use none of the operands.</p>

<p>The virtual machine begins execution on instruction number 0.
Execution continues until (1) a Halt instruction is seen, or 
(2) the program counter becomes one greater than the address of the
last instruction, or (3) there is an execution error.
When the virtual machine halts, all memory
that it allocated is released and all database cursors it may
have had open are closed.  If the execution stopped due to an
error, any pending transactions are terminated and changes made
to the database are rolled back.</p>

<p>The virtual machine also contains an operand stack of unlimited
depth.  Many of the opcodes use operands from the stack.  See the
individual opcode descriptions for details.</p>

<p>The virtual machine can have zero or more cursors.  Each cursor
is a pointer into a single table or index within the database.
There can be multiple cursors pointing at the same index or table.
All cursors operate independently, even cursors pointing to the same
indices or tables.
The only way for the virtual machine to interact with a database
file is through a cursor.
Instructions in the virtual
machine can create a new cursor (Open), read data from a cursor
(Column), advance the cursor to the next entry in the table
(Next) or index (NextIdx), and many other operations.
All cursors are automatically
closed when the virtual machine terminates.</p>

<p>The virtual machine contains an arbitrary number of fixed memory
locations with addresses beginning at zero and growing upward.
Each memory location can hold an arbitrary string.  The memory
cells are typically used to hold the result of a scalar SELECT
that is part of a larger expression.</p>

<p>The virtual machine contains a single sorter.
The sorter is able to accumulate records, sort those records,
then play the records back in sorted order.  The sorter is used
to implement the ORDER BY clause of a SELECT statement.</p>

<p>The virtual machine contains a single "List".
The list stores a list of integers.  The list is used to hold the
rowids for records of a database table that needs to be modified.
The WHERE clause of an UPDATE or DELETE statement scans through
the table and writes the rowid of every record to be modified
into the list.  Then the list is played back and the table is modified
in a separate step.</p>

<p>The virtual machine can contain an arbitrary number of "Sets".
Each set holds an arbitrary number of strings.  Sets are used to
implement the IN operator with a constant right-hand side.</p>

<p>The virtual machine can open a single external file for reading.
This external read file is used to implement the COPY command.</p>

<p>Finally, the virtual machine can have a single set of aggregators.
An aggregator is a device used to implement the GROUP BY clause
of a SELECT.  An aggregator has one or more slots that can hold
values being extracted by the select.  The number of slots is the
same for all aggregators and is defined by the AggReset operation.
At any point in time a single aggregator is current or "has focus".
There are operations to read or write to memory slots of the aggregator
in focus.  There are also operations to change the focus aggregator
and to scan through all aggregators.</p>

<h3>Viewing Programs Generated By SQLite</h3>

<p>Every SQL statement that SQLite interprets results in a program
for the virtual machine.  But if you precede the SQL statement with
the keyword "EXPLAIN" the virtual machine will not execute the
program.  Instead, the instructions of the program will be returned
like a query result.  This feature is useful for debugging and
for learning how the virtual machine operates.</p>

<p>You can use the <b>sqlite</b> command-line tool to see the
instructions generated by an SQL statement.  The following is
an example:</p>}

proc Code {body} {
  puts {<blockquote><tt>}
  regsub -all {&} [string trim $body] {\&amp;} body
  regsub -all {>} $body {\&gt;} body
  regsub -all {<} $body {\&lt;} body
  regsub -all {\(\(\(} $body {<b>} body
  regsub -all {\)\)\)} $body {</b>} body
  regsub -all { } $body {\&nbsp;} body
  regsub -all \n $body <br>\n body
  puts $body
  puts {</tt></blockquote>}
}

Code {
$ (((sqlite ex1)))
sqlite> (((.explain)))
sqlite> (((explain delete from tbl1 where two<20;)))
addr  opcode        p1     p2     p3                                      
----  ------------  -----  -----  ----------------------------------------
0     Transaction   0      0                                              
1     VerifyCookie  219    0                                              
2     ListOpen      0      0                                              
3     Open          0      3      tbl1                                    
4     Rewind        0      0                                              
5     Next          0      12                                             
6     Column        0      1                                              
7     Integer       20     0                                              
8     Ge            0      5                                              
9     Recno         0      0                                              
10    ListWrite     0      0                                              
11    Goto          0      5                                              
12    Close         0      0                                              
13    ListRewind    0      0                                              
14    OpenWrite     0      3                                              
15    ListRead      0      19                                             
16    MoveTo        0      0                                              
17    Delete        0      0                                              
18    Goto          0      15                                             
19    ListClose     0      0                                              
20    Commit        0      0                                              
}

puts {
<p>All you have to do is add the "EXPLAIN" keyword to the front of the
SQL statement.  But if you use the ".explain" command to <b>sqlite</b>
first, it will set up the output mode to make the program more easily
viewable.</p>

<p>If <b>sqlite</b> has been compiled without the "-DNDEBUG=1" option
(that is, with the NDEBUG preprocessor macro not defined) then you
can put the SQLite virtual machine in a mode where it will trace its
execution by writing messages to standard output.  The non-standard
SQL "PRAGMA" comments can be used to turn tracing on and off.  To
turn tracing on, enter:
</p>

<blockquote><pre>
PRAGMA vdbe_trace=on;
</pre></blockquote>

<p>
You can turn tracing back off by entering a similar statement but
changing the value "on" to "off".</p>

<h3>The Opcodes</h3>
}

puts "<p>There are currently [llength $OpcodeList] opcodes defined by
the virtual machine."
puts {All currently defined opcodes are described in the table below.
This table was generated automatically by scanning the source code
from the file <b>vdbe.c</b>.</p>}

puts {
<p><table cellspacing="1" border="1" cellpadding="10">
<tr><th>Opcode&nbsp;Name</th><th>Description</th></tr>}
foreach op [lsort -dictionary $OpcodeList] {
  puts {<tr><td valign="top" align="center">}
  puts "<a name=\"$op\">$op</a>"
  puts "<td>[string trim $Opcode($op:text)]</td></tr>"
}
puts {</table></p>}
footer $rcsid

--- NEW FILE: quickstart.tcl ---
#
# Run this TCL script to generate HTML for the quickstart.html file.
#
set rcsid {$Id: quickstart.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {SQLite In 5 Minutes Or Less}
puts {
<p>Here is what you do to start experimenting with SQLite without having
to do a lot of tedious reading and configuration:</p>

<h2>Download The Code</h2>

<ul>
<li><p>Get a copy of the prebuilt binaries for your machine, or get a copy
of the sources and compile them yourself.  Visit
the <a href="download.html">download</a> page for more information.</p></li>
</ul>

<h2>Create A New Database</h2>

<ul>
<li><p>At a shell or DOS prompt, enter: "<b>sqlite3 test.db</b>".  This will
create a new database named "test.db".  (You can use a different name if
you like.)</p></li>
<li><p>Enter SQL commands at the prompt to create and populate the
new database.</p></li>
</ul>

<h2>Write Programs That Use SQLite</h2>

<ul>
<li><p>Below is a simple TCL program that demonstrates how to use
the TCL interface to SQLite.  The program executes the SQL statements
given as the second argument on the database defined by the first
argument.  The commands to watch for are the <b>sqlite3</b> command
on line 7 which opens an SQLite database and creates
a new TCL command named "<b>db</b>" to access that database, the
invocation of the <b>db</b> command on line 8 to execute
SQL commands against the database, and the closing of the database connection
on the last line of the script.</p>

<blockquote><pre>
#!/usr/bin/tclsh
if {$argc!=2} {
  puts stderr "Usage: $argv0 DATABASE SQL-STATEMENT"
  exit 1
}
load /usr/lib/tclsqlite3.so Sqlite
<b>sqlite3</b> db [lindex $argv 0]
<b>db</b> eval [lindex $argv 1] x {
  foreach v $x(*) {
    puts "$v = $x($v)"
  }
  puts ""
}
<b>db</b> close
</pre></blockquote>
</li>

<li><p>Below is a simple C program that demonstrates how to use
the C/C++ interface to SQLite.  The name of a database is given by
the first argument and the second argument is one or more SQL statements
to execute against the database.  The function calls to pay attention
to here are the call to <b>sqlite3_open()</b> on line 22 which opens
the database, <b>sqlite3_exec()</b> on line 27 that executes SQL
commands against the database, and <b>sqlite3_close()</b> on line 31
that closes the database connection.</p>

<blockquote><pre>
#include &lt;stdio.h&gt;
#include &lt;sqlite3.h&gt;

static int callback(void *NotUsed, int argc, char **argv, char **azColName){
  int i;
  for(i=0; i&lt;argc; i++){
    printf("%s = %s\n", azColName[i], argv[i] ? argv[i] : "NULL");
  }
  printf("\n");
  return 0;
}

int main(int argc, char **argv){
  sqlite3 *db;
  char *zErrMsg = 0;
  int rc;

  if( argc!=3 ){
    fprintf(stderr, "Usage: %s DATABASE SQL-STATEMENT\n", argv[0]);
    exit(1);
  }
  rc = <b>sqlite3_open</b>(argv[1], &db);
  if( rc ){
    fprintf(stderr, "Can't open database: %s\n", sqlite3_errmsg(db));
    sqlite3_close(db);
    exit(1);
  }
  rc = <b>sqlite3_exec</b>(db, argv[2], callback, 0, &zErrMsg);
  if( rc!=SQLITE_OK ){
    fprintf(stderr, "SQL error: %s\n", zErrMsg);
  }
  <b>sqlite3_close</b>(db);
  return 0;
}
</pre></blockquote>
</li>
</ul>
}
footer {$Id: quickstart.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}

--- NEW FILE: speed.tcl ---
#
# Run this Tcl script to generate the speed.html file.
#
set rcsid {$Id: speed.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $ }
source common.tcl
header {SQLite Database Speed Comparison}

puts {
<h2>Database Speed Comparison</h2>

<h3>Executive Summary</h3>

<p>A series of tests were run to measure the relative performance of
SQLite 2.7.6, PostgreSQL 7.1.3, and MySQL 3.23.41.
The following are general
conclusions drawn from these experiments:
</p>

<ul>
<li><p>
  SQLite 2.7.6 is significantly faster (sometimes as much as 10 or
  20 times faster) than the default PostgreSQL 7.1.3 installation
  on RedHat 7.2 for most common operations.  
</p></li>
<li><p>
  SQLite 2.7.6 is often faster (sometimes
  more than twice as fast) than MySQL 3.23.41
  for most common operations.
</p></li>
<li><p>
  SQLite does not execute CREATE INDEX or DROP TABLE as fast as
  the other databases.  But this is not seen as a problem because
  those are infrequent operations.
</p></li>
<li><p>
  SQLite works best if you group multiple operations together into
  a single transaction.
</p></li>
</ul>

<p>
The results presented here come with the following caveats:
</p>

<ul>
<li><p>
  These tests did not attempt to measure multi-user performance or
  optimization of complex queries involving multiple joins and subqueries.
</p></li>
<li><p>
  These tests are on a relatively small (approximately 14 megabyte) database.
  They do not measure how well the database engines scale to larger problems.
</p></li>
</ul>

<h3>Test Environment</h3>

<p>
The platform used for these tests is a 1.6GHz Athlon with 1GB of memory
and an IDE disk drive.  The operating system is RedHat Linux 7.2 with
a stock kernel.
</p>

<p>
The PostgreSQL and MySQL servers used were as delivered by default on
RedHat 7.2.  (PostgreSQL version 7.1.3 and MySQL version 3.23.41.)
No effort was made to tune these engines.  Note in particular
that the default MySQL configuration on RedHat 7.2 does not support
transactions.  Not having to support transactions gives MySQL a
big speed advantage, but SQLite is still able to hold its own on most
tests.
</p>

<p>
I am told that the default PostgreSQL configuration in RedHat 7.3
is unnecessarily conservative (it is designed to
work on a machine with 8MB of RAM) and that PostgreSQL could
be made to run a lot faster with some knowledgeable configuration
tuning.
Matt Sergeant reports that he has tuned his PostgreSQL installation
and rerun the tests shown below.  His results show that
PostgreSQL and MySQL run at about the same speed.  For Matt's
results, visit
</p>

<blockquote>
<a href="http://www.sergeant.org/sqlite_vs_pgsync.html">
http://www.sergeant.org/sqlite_vs_pgsync.html</a>
</blockquote>

<p>
SQLite was tested in the same configuration that it appears
on the website.  It was compiled with -O6 optimization and with
the -DNDEBUG=1 switch which disables the many "assert()" statements
in the SQLite code.  The -DNDEBUG=1 compiler option roughly doubles
the speed of SQLite.
</p>

<p>
All tests are conducted on an otherwise quiescent machine.
A simple Tcl script was used to generate and run all the tests.
A copy of this Tcl script can be found in the SQLite source tree
in the file <b>tools/speedtest.tcl</b>.
</p>

<p>
The times reported on all tests represent wall-clock time 
in seconds.  Two separate time values are reported for SQLite.
The first value is for SQLite in its default configuration with
full disk synchronization turned on.  With synchronization turned
on, SQLite executes
an <b>fsync()</b> system call (or the equivalent) at key points
to make certain that critical data has 
actually been written to the disk drive surface.  Synchronization
is necessary to guarantee the integrity of the database if the
operating system crashes or the computer powers down unexpectedly
in the middle of a database update.  The second time reported for SQLite is
when synchronization is turned off.  With synchronization off,
SQLite is sometimes much faster, but there is a risk that an
operating system crash or an unexpected power failure could
damage the database.  Generally speaking, the synchronous SQLite
times are for comparison against PostgreSQL (which is also
synchronous) and the asynchronous SQLite times are for 
comparison against the asynchronous MySQL engine.
</p>

<h3>Test 1: 1000 INSERTs</h3>
<blockquote>
CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));<br>
INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three');<br>
INSERT INTO t1 VALUES(2,75560,'seventy five thousand five hundred sixty');<br>
<i>... 995 lines omitted</i><br>
INSERT INTO t1 VALUES(998,66289,'sixty six thousand two hundred eighty nine');<br>
INSERT INTO t1 VALUES(999,24322,'twenty four thousand three hundred twenty two');<br>
INSERT INTO t1 VALUES(1000,94142,'ninety four thousand one hundred forty two');<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;4.373</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;0.114</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;13.061</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;0.223</td></tr>
</table>

<p>
Because it does not have a central server to coordinate access,
SQLite must close and reopen the database file, and thus invalidate
its cache, for each transaction.  In this test, each SQL statement
is a separate transaction so the database file must be opened and closed
and the cache must be flushed 1000 times.  In spite of this, the asynchronous
version of SQLite is still nearly as fast as MySQL.  Notice how much slower
the synchronous version is, however.  SQLite calls <b>fsync()</b> after 
each synchronous transaction to make sure that all data is safely on
the disk surface before continuing.  For most of the 13 seconds in the
synchronous test, SQLite was sitting idle waiting on disk I/O to complete.</p>


<h3>Test 2: 25000 INSERTs in a transaction</h3>
<blockquote>
BEGIN;<br>
CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100));<br>
INSERT INTO t2 VALUES(1,59672,'fifty nine thousand six hundred seventy two');<br>
<i>... 24997 lines omitted</i><br>
INSERT INTO t2 VALUES(24999,89569,'eighty nine thousand five hundred sixty nine');<br>
INSERT INTO t2 VALUES(25000,94666,'ninety four thousand six hundred sixty six');<br>
COMMIT;<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;4.900</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;2.184</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;0.914</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;0.757</td></tr>
</table>

<p>
When all the INSERTs are put in a transaction, SQLite no longer has to
close and reopen the database or invalidate its cache between each statement.
It also does not
have to do any fsync()s until the very end.  When unshackled in
this way, SQLite is much faster than either PostgreSQL or MySQL.
</p>

<h3>Test 3: 25000 INSERTs into an indexed table</h3>
<blockquote>
BEGIN;<br>
CREATE TABLE t3(a INTEGER, b INTEGER, c VARCHAR(100));<br>
CREATE INDEX i3 ON t3(c);<br>
<i>... 24998 lines omitted</i><br>
INSERT INTO t3 VALUES(24999,88509,'eighty eight thousand five hundred nine');<br>
INSERT INTO t3 VALUES(25000,84791,'eighty four thousand seven hundred ninety one');<br>
COMMIT;<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;8.175</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;3.197</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;1.555</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;1.402</td></tr>
</table>

<p>
There were reports that SQLite did not perform as well on an indexed table.
This test was recently added to disprove those rumors.  It is true that
SQLite is not as fast at creating new index entries as the other engines
(see Test 6 below) but its overall speed is still better.
</p>

<h3>Test 4: 100 SELECTs without an index</h3>
<blockquote>
BEGIN;<br>
SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<1000;<br>
SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<1100;<br>
<i>... 96 lines omitted</i><br>
SELECT count(*), avg(b) FROM t2 WHERE b>=9800 AND b<10800;<br>
SELECT count(*), avg(b) FROM t2 WHERE b>=9900 AND b<10900;<br>
COMMIT;<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;3.629</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;2.760</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;2.494</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;2.526</td></tr>
</table>


<p>
This test does 100 queries on a 25000 entry table without an index,
thus requiring a full table scan.   Prior versions of SQLite used to
be slower than PostgreSQL and MySQL on this test, but recent performance
enhancements have increased its speed so that it is now the fastest
of the group.
</p>

<h3>Test 5: 100 SELECTs on a string comparison</h3>
<blockquote>
BEGIN;<br>
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one%';<br>
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%two%';<br>
<i>... 96 lines omitted</i><br>
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%ninety nine%';<br>
SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one hundred%';<br>
COMMIT;<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;13.409</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;4.640</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;3.362</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;3.372</td></tr>
</table>

<p>
This test still does 100 full table scans, but it uses
string comparisons instead of numerical comparisons.
SQLite is over three times faster than PostgreSQL here and about 30%
faster than MySQL.
</p>

<h3>Test 6: Creating an index</h3>
<blockquote>
CREATE INDEX i2a ON t2(a);<br>CREATE INDEX i2b ON t2(b);
</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;0.381</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;0.318</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;0.777</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;0.659</td></tr>
</table>

<p>
SQLite is slower at creating new indices.  This is not a huge problem
(since new indices are not created very often) but it is something that
is being worked on.  Hopefully, future versions of SQLite will do better
here.
</p>

<h3>Test 7: 5000 SELECTs with an index</h3>
<blockquote>
SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<100;<br>
SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<200;<br>
SELECT count(*), avg(b) FROM t2 WHERE b>=200 AND b<300;<br>
<i>... 4994 lines omitted</i><br>
SELECT count(*), avg(b) FROM t2 WHERE b>=499700 AND b<499800;<br>
SELECT count(*), avg(b) FROM t2 WHERE b>=499800 AND b<499900;<br>
SELECT count(*), avg(b) FROM t2 WHERE b>=499900 AND b<500000;<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;4.614</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;1.270</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;1.121</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;1.162</td></tr>
</table>

<p>
All three database engines run faster when they have indices to work with.
But SQLite is still the fastest.
</p>

<h3>Test 8: 1000 UPDATEs without an index</h3>
<blockquote>
BEGIN;<br>
UPDATE t1 SET b=b*2 WHERE a>=0 AND a<10;<br>
UPDATE t1 SET b=b*2 WHERE a>=10 AND a<20;<br>
<i>... 996 lines omitted</i><br>
UPDATE t1 SET b=b*2 WHERE a>=9980 AND a<9990;<br>
UPDATE t1 SET b=b*2 WHERE a>=9990 AND a<10000;<br>
COMMIT;<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;1.739</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;8.410</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;0.637</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;0.638</td></tr>
</table>

<p>
For this particular UPDATE test, MySQL is consistently
five or ten times
slower than PostgreSQL and SQLite.  I do not know why.  MySQL is
normally a very fast engine.  Perhaps this problem has been addressed
in later versions of MySQL.
</p>

<h3>Test 9: 25000 UPDATEs with an index</h3>
<blockquote>
BEGIN;<br>
UPDATE t2 SET b=468026 WHERE a=1;<br>
UPDATE t2 SET b=121928 WHERE a=2;<br>
<i>... 24996 lines omitted</i><br>
UPDATE t2 SET b=35065 WHERE a=24999;<br>
UPDATE t2 SET b=347393 WHERE a=25000;<br>
COMMIT;<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;18.797</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;8.134</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;3.520</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;3.104</td></tr>
</table>

<p>
As recently as version 2.7.0, SQLite ran at about the same speed as
MySQL on this test.  But recent optimizations to SQLite have more
than doubled the speed of UPDATEs.
</p>

<h3>Test 10: 25000 text UPDATEs with an index</h3>
<blockquote>
BEGIN;<br>
UPDATE t2 SET c='one hundred forty eight thousand three hundred eighty two' WHERE a=1;<br>
UPDATE t2 SET c='three hundred sixty six thousand five hundred two' WHERE a=2;<br>
<i>... 24996 lines omitted</i><br>
UPDATE t2 SET c='three hundred eighty three thousand ninety nine' WHERE a=24999;<br>
UPDATE t2 SET c='two hundred fifty six thousand eight hundred thirty' WHERE a=25000;<br>
COMMIT;<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;48.133</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;6.982</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;2.408</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;1.725</td></tr>
</table>

<p>
Here again, version 2.7.0 of SQLite used to run at about the same speed
as MySQL.  But now version 2.7.6 is over two times faster than MySQL and
over twenty times faster than PostgreSQL.
</p>

<p>
In fairness to PostgreSQL, it started thrashing on this test.  A
knowledgeable administrator might be able to get PostgreSQL to run a lot
faster here by tweaking and tuning the server a little.
</p>

<h3>Test 11: INSERTs from a SELECT</h3>
<blockquote>
BEGIN;<br>INSERT INTO t1 SELECT b,a,c FROM t2;<br>INSERT INTO t2 SELECT b,a,c FROM t1;<br>COMMIT;
</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;61.364</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;1.537</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;2.787</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;1.599</td></tr>
</table>

<p>
The asynchronous SQLite is just a shade slower than MySQL on this test.
(MySQL seems to be especially adept at INSERT...SELECT statements.)
The PostgreSQL engine is still thrashing - most of the 61 seconds it used
were spent waiting on disk I/O.
</p>

<h3>Test 12: DELETE without an index</h3>
<blockquote>
DELETE FROM t2 WHERE c LIKE '%fifty%';
</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;1.509</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;0.975</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;4.004</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;0.560</td></tr>
</table>

<p>
The synchronous version of SQLite is the slowest of the group in this test,
but the asynchronous version is the fastest.  
The difference is the extra time needed to execute fsync().
</p>

<h3>Test 13: DELETE with an index</h3>
<blockquote>
DELETE FROM t2 WHERE a>10 AND a<20000;
</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;1.316</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;2.262</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;2.068</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;0.752</td></tr>
</table>

<p>
This test is significant because it is one of the few where
PostgreSQL is faster than MySQL.  The asynchronous SQLite is,
however, faster than both of the other two.
</p>

<h3>Test 14: A big INSERT after a big DELETE</h3>
<blockquote>
INSERT INTO t2 SELECT * FROM t1;
</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;13.168</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;1.815</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;3.210</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;1.485</td></tr>
</table>

<p>
Some older versions of SQLite (prior to version 2.4.0)
would show decreasing performance after a
sequence of DELETEs followed by new INSERTs.  As this test shows, the
problem has now been resolved.
</p>

<h3>Test 15: A big DELETE followed by many small INSERTs</h3>
<blockquote>
BEGIN;<br>
DELETE FROM t1;<br>
INSERT INTO t1 VALUES(1,10719,'ten thousand seven hundred nineteen');<br>
<i>... 11997 lines omitted</i><br>
INSERT INTO t1 VALUES(11999,72836,'seventy two thousand eight hundred thirty six');<br>
INSERT INTO t1 VALUES(12000,64231,'sixty four thousand two hundred thirty one');<br>
COMMIT;<br>

</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;4.556</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;1.704</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;0.618</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;0.406</td></tr>
</table>

<p>
SQLite is very good at doing INSERTs within a transaction, which probably
explains why it is so much faster than the other databases at this test.
</p>

<h3>Test 16: DROP TABLE</h3>
<blockquote>
DROP TABLE t1;<br>DROP TABLE t2;<br>DROP TABLE t3;
</blockquote><table border=0 cellpadding=0 cellspacing=0>
<tr><td>PostgreSQL:</td><td align="right">&nbsp;&nbsp;&nbsp;0.135</td></tr>
<tr><td>MySQL:</td><td align="right">&nbsp;&nbsp;&nbsp;0.015</td></tr>
<tr><td>SQLite 2.7.6:</td><td align="right">&nbsp;&nbsp;&nbsp;0.939</td></tr>
<tr><td>SQLite 2.7.6 (nosync):</td><td align="right">&nbsp;&nbsp;&nbsp;0.254</td></tr>
</table>

<p>
SQLite is slower than the other databases when it comes to dropping tables.
This probably is because when SQLite drops a table, it has to go through and
erase the records in the database file that deal with that table.  MySQL and
PostgreSQL, on the other hand, use separate files to represent each table
so they can drop a table simply by deleting a file, which is much faster.
</p>

<p>
On the other hand, dropping tables is not a very common operation 
so if SQLite takes a little longer, that is not seen as a big problem.
</p>

}
footer $rcsid

--- NEW FILE: sqlite.tcl ---
#
# Run this Tcl script to generate the sqlite.html file.
#
set rcsid {$Id: sqlite.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {sqlite: A program for interacting with SQLite databases}
puts {
<h2>sqlite: A command-line program to administer SQLite databases</h2>

<p>The SQLite library includes a simple command-line utility named
<b>sqlite</b> that allows the user to manually enter and execute SQL
commands against an SQLite database.  This document provides a brief
introduction on how to use <b>sqlite</b>.

<h3>Getting Started</h3>

<p>To start the <b>sqlite</b> program, just type "sqlite" followed by
the name of the file that holds the SQLite database.  If the file does
not exist, a new one is created automatically.
The <b>sqlite</b> program will
then prompt you to enter SQL.  Type in SQL statements (terminated by a
semicolon), press "Enter" and the SQL will be executed.</p>

<p>For example, to create a new SQLite database named "ex1" 
with a single table named "tbl1", you might do this:</p>
}

proc Code {body} {
  # Render a block of example text as preformatted HTML.
  puts {<blockquote><tt>}
  # Escape HTML metacharacters in the body.
  regsub -all {&} [string trim $body] {\&amp;} body
  regsub -all {>} $body {\&gt;} body
  regsub -all {<} $body {\&lt;} body
  # Text wrapped in ((( and ))) is rendered in bold.
  regsub -all {\(\(\(} $body {<b>} body
  regsub -all {\)\)\)} $body {</b>} body
  # Preserve spacing and line breaks in the output.
  regsub -all { } $body {\&nbsp;} body
  regsub -all \n $body <br>\n body
  puts $body
  puts {</tt></blockquote>}
}

Code {
$ (((sqlite ex1)))
SQLite version 2.0.0
Enter ".help" for instructions
sqlite> (((create table tbl1(one varchar(10), two smallint);)))
sqlite> (((insert into tbl1 values('hello!',10);)))
sqlite> (((insert into tbl1 values('goodbye', 20);)))
sqlite> (((select * from tbl1;)))
hello!|10
goodbye|20
sqlite>
}

puts {
<p>You can terminate the sqlite program by typing your system's
End-Of-File character (usually a Control-D) or the interrupt
character (usually a Control-C).</p>

<p>Make sure you type a semicolon at the end of each SQL command!
The sqlite program looks for a semicolon to know when your SQL command is
complete.  If you omit the semicolon, sqlite will give you a
continuation prompt and wait for you to enter more text to be
added to the current SQL command.  This feature allows you to
enter SQL commands that span multiple lines.  For example:</p>
}

Code {
sqlite> (((CREATE TABLE tbl2 ()))
   ...> (((  f1 varchar(30) primary key,)))
   ...> (((  f2 text,)))
   ...> (((  f3 real)))
   ...> ((();)))
sqlite> 
}

puts {

<h3>Aside: Querying the SQLITE_MASTER table</h3>

<p>The database schema in an SQLite database is stored in
a special table named "sqlite_master".
You can execute "SELECT" statements against the
special sqlite_master table just like any other table
in an SQLite database.  For example:</p>
}

Code {
$ (((sqlite ex1)))
SQLite version 2.0.0
Enter ".help" for instructions
sqlite> (((select * from sqlite_master;)))
    type = table
    name = tbl1
tbl_name = tbl1
rootpage = 3
     sql = create table tbl1(one varchar(10), two smallint)
sqlite>
}

puts {
<p>
But you cannot execute DROP TABLE, UPDATE, INSERT or DELETE against
the sqlite_master table.  The sqlite_master
table is updated automatically as you create or drop tables and
indices from the database.  You cannot make manual changes
to the sqlite_master table.
</p>

<p>
The schema for TEMPORARY tables is not stored in the "sqlite_master" table
since TEMPORARY tables are not visible to applications other than the
application that created the table.  The schema for TEMPORARY tables
is stored in another special table named "sqlite_temp_master".  The
"sqlite_temp_master" table is temporary itself.
</p>

<h3>Special commands to sqlite</h3>

<p>
Most of the time, sqlite just reads lines of input and passes them
on to the SQLite library for execution.
But if an input line begins with a dot ("."), then
that line is intercepted and interpreted by the sqlite program itself.
These "dot commands" are typically used to change the output format
of queries, or to execute certain prepackaged query statements.
</p>

<p>
For a listing of the available dot commands, you can enter ".help"
at any time.  For example:
</p>}

Code {
sqlite> (((.help)))
.databases             List names and files of attached databases
.dump ?TABLE? ...      Dump the database in a text format
.echo ON|OFF           Turn command echo on or off
.exit                  Exit this program
.explain ON|OFF        Turn output mode suitable for EXPLAIN on or off.
.header(s) ON|OFF      Turn display of headers on or off
.help                  Show this message
.indices TABLE         Show names of all indices on TABLE
.mode MODE             Set mode to one of "line(s)", "column(s)", 
                       "insert", "list", or "html"
.mode insert TABLE     Generate SQL insert statements for TABLE
.nullvalue STRING      Print STRING instead of nothing for NULL data
.output FILENAME       Send output to FILENAME
.output stdout         Send output to the screen
.prompt MAIN CONTINUE  Replace the standard prompts
.quit                  Exit this program
.read FILENAME         Execute SQL in FILENAME
.schema ?TABLE?        Show the CREATE statements
.separator STRING      Change separator string for "list" mode
.show                  Show the current values for various settings
.tables ?PATTERN?      List names of tables matching a pattern
.timeout MS            Try opening locked tables for MS milliseconds
.width NUM NUM ...     Set column widths for "column" mode
sqlite> 
}

puts {
<h3>Changing Output Formats</h3>

<p>The sqlite program is able to show the results of a query
in five different formats: "line", "column", "list", "html", and "insert".
You can use the ".mode" dot command to switch between these output
formats.</p>

<p>The default output mode is "list".  In
list mode, each record of a query result is written on one line of
output and each column within that record is separated by a specific
separator string.  The default separator is a pipe symbol ("|").
List mode is especially useful when you are going to send the output
of a query to another program (such as AWK) for additional processing.</p>}

Code {
sqlite> (((.mode list)))
sqlite> (((select * from tbl1;)))
hello|10
goodbye|20
sqlite>
}

puts {
<p>You can use the ".separator" dot command to change the separator
for list mode.  For example, to change the separator to a comma and
a space, you could do this:</p>}

Code {
sqlite> (((.separator ", ")))
sqlite> (((select * from tbl1;)))
hello, 10
goodbye, 20
sqlite>
}

puts {
<p>In "line" mode, each column in a row of the database
is shown on a line by itself.  Each line consists of the column
name, an equal sign and the column data.  Successive records are
separated by a blank line.  Here is an example of line mode
output:</p>}

Code {
sqlite> (((.mode line)))
sqlite> (((select * from tbl1;)))
one = hello
two = 10

one = goodbye
two = 20
sqlite>
}

puts {
<p>In column mode, each record is shown on a separate line with the
data aligned in columns.  For example:</p>}

Code {
sqlite> (((.mode column)))
sqlite> (((select * from tbl1;)))
one         two       
----------  ----------
hello       10        
goodbye     20        
sqlite>
}

puts {
<p>By default, each column is at least 10 characters wide. 
Data that is too wide to fit in a column is truncated.  You can
adjust the column widths using the ".width" command.  Like this:</p>}

Code {
sqlite> (((.width 12 6)))
sqlite> (((select * from tbl1;)))
one           two   
------------  ------
hello         10    
goodbye       20    
sqlite>
}

puts {
<p>The ".width" command in the example above sets the width of the first
column to 12 and the width of the second column to 6.  All other column
widths were unaltered.  You can give as many arguments to ".width" as
necessary to specify the widths of as many columns as are in your
query results.</p>

<p>If you give a column a width of 0, then the column
width is automatically adjusted to be the maximum of three
numbers: 10, the width of the header, and the width of the
first row of data.  This makes the column width self-adjusting.
The default width setting for every column is this 
auto-adjusting 0 value.</p>

<p>The column labels that appear on the first two lines of output
can be turned on and off using the ".header" dot command.  In the
examples above, the column labels are on.  To turn them off you
could do this:</p>}

Code {
sqlite> (((.header off)))
sqlite> (((select * from tbl1;)))
hello         10    
goodbye       20    
sqlite>
}

puts {
<p>Another useful output mode is "insert".  In insert mode, the output
is formatted to look like SQL INSERT statements.  You can use insert
mode to generate text that can later be used to input data into a 
different database.</p>

<p>When specifying insert mode, you have to give an extra argument
which is the name of the table to be inserted into.  For example:</p>
}

Code {
sqlite> (((.mode insert new_table)))
sqlite> (((select * from tbl1;)))
INSERT INTO 'new_table' VALUES('hello',10);
INSERT INTO 'new_table' VALUES('goodbye',20);
sqlite>
}

puts {
<p>The last output mode is "html".  In this mode, sqlite writes
the results of the query as an XHTML table.  The beginning
&lt;TABLE&gt; and the ending &lt;/TABLE&gt; are not written, but
all of the intervening &lt;TR&gt;s, &lt;TH&gt;s, and &lt;TD&gt;s
are.  The html output mode is envisioned as being useful for
CGI.</p>
}

puts {
<h3>Writing results to a file</h3>

<p>By default, sqlite sends query results to standard output.  You
can change this using the ".output" command.  Just put the name of
an output file as an argument to the .output command and all subsequent
query results will be written to that file.  Use ".output stdout" to
begin writing to standard output again.  For example:</p>}

Code {
sqlite> (((.mode list)))
sqlite> (((.separator |)))
sqlite> (((.output test_file_1.txt)))
sqlite> (((select * from tbl1;)))
sqlite> (((.exit)))
$ (((cat test_file_1.txt)))
hello|10
goodbye|20
$
}

puts {
<h3>Querying the database schema</h3>

<p>The sqlite program provides several convenience commands that
are useful for looking at the schema of the database.  There is
nothing that these commands do that cannot be done by some other
means.  These commands are provided purely as a shortcut.</p>

<p>For example, to see a list of the tables in the database, you
can enter ".tables".</p>
}

Code {
sqlite> (((.tables)))
tbl1
tbl2
sqlite>
}

puts {
<p>The ".tables" command is the same as setting list mode then
executing the following query:</p>

<blockquote><pre>
SELECT name FROM sqlite_master WHERE type='table' 
UNION ALL SELECT name FROM sqlite_temp_master WHERE type='table'
ORDER BY name;
</pre></blockquote>

<p>In fact, if you look at the source code to the sqlite program
(found in the source tree in the file src/shell.c) you'll find
exactly the above query.</p>

<p>The ".indices" command works in a similar way to list all of
the indices for a particular table.  The ".indices" command takes
a single argument which is the name of the table for which the
indices are desired.  Last, but not least, is the ".schema" command.
With no arguments, the ".schema" command shows the original CREATE TABLE
and CREATE INDEX statements that were used to build the current database.
If you give the name of a table to ".schema", it shows the original
CREATE statement used to make that table and all of its indices.
We have:</p>}

Code {
sqlite> (((.schema)))
create table tbl1(one varchar(10), two smallint)
CREATE TABLE tbl2 (
  f1 varchar(30) primary key,
  f2 text,
  f3 real
)
sqlite> (((.schema tbl2)))
CREATE TABLE tbl2 (
  f1 varchar(30) primary key,
  f2 text,
  f3 real
)
sqlite>
}

puts {
<p>The ".schema" command accomplishes the same thing as setting
list mode, then entering the following query:</p>

<blockquote><pre>
SELECT sql FROM 
   (SELECT * FROM sqlite_master UNION ALL
    SELECT * FROM sqlite_temp_master)
WHERE type!='meta'
ORDER BY tbl_name, type DESC, name
</pre></blockquote>

<p>Or, if you give an argument to ".schema" because you only
want the schema for a single table, the query looks like this:</p>

<blockquote><pre>
SELECT sql FROM
   (SELECT * FROM sqlite_master UNION ALL
    SELECT * FROM sqlite_temp_master)
WHERE tbl_name LIKE '%s' AND type!='meta'
ORDER BY type DESC, name
</pre></blockquote>

<p>The <b>%s</b> in the query above is replaced by the argument
to ".schema", of course.  Notice that the argument to the ".schema"
command appears to the right of an SQL LIKE operator.  So you can
use wildcards in the name of the table.  For example, to get the
schema for all tables whose names contain the character string
"abc" you could enter:</p>}

Code {
sqlite> (((.schema %abc%)))
}

puts {
<p>
Along these same lines,
the ".table" command also accepts a pattern as its first argument.
If you give an argument to the .table command, a "%" is both
appended and prepended and a LIKE clause is added to the query.
This allows you to list only those tables that match a particular
pattern.</p>

<p>The ".databases" command shows a list of all databases open in
the current connection.  There will always be at least 2.  The first
one is "main", the original database opened.  The second is "temp",
the database used for temporary tables. There may be additional 
databases listed for databases attached using the ATTACH statement.
The first output column is the name the database is attached with, 
and the second column is the filename of the external file.</p>}

Code {
sqlite> (((.databases)))
}

puts {
<h3>Converting An Entire Database To An ASCII Text File</h3>

<p>Use the ".dump" command to convert the entire contents of a
database into a single ASCII text file.  This file can be converted
back into a database by piping it back into <b>sqlite</b>.</p>

<p>A good way to make an archival copy of a database is this:</p>
}

Code {
$ (((echo '.dump' | sqlite ex1 | gzip -c >ex1.dump.gz)))
}

puts {
<p>This generates a file named <b>ex1.dump.gz</b> that contains everything
you need to reconstruct the database at a later time, or on another
machine.  To reconstruct the database, just type:</p>
}

Code {
$ (((zcat ex1.dump.gz | sqlite ex2)))
}

puts {
<p>The text format used is the same as used by
<a href="http://www.postgresql.org/">PostgreSQL</a>, so you
can also use the .dump command to export an SQLite database
into a PostgreSQL database.  Like this:</p>
}

Code {
$ (((createdb ex2)))
$ (((echo '.dump' | sqlite ex1 | psql ex2)))
}

puts {
<p>You can almost (but not quite) go the other way and export
a PostgreSQL database into SQLite using the <b>pg_dump</b> utility.
Unfortunately, when <b>pg_dump</b> writes the database schema information,
it uses some SQL syntax that SQLite does not understand.
So you cannot pipe the output of <b>pg_dump</b> directly 
into <b>sqlite</b>.
But if you can recreate the
schema separately, you can use <b>pg_dump</b> with the <b>-a</b>
option to list just the data
of a PostgreSQL database and import that directly into SQLite.</p>
}

Code {
$ (((sqlite ex3 <schema.sql)))
$ (((pg_dump -a ex2 | sqlite ex3)))
}

puts {
<h3>Other Dot Commands</h3>

<p>The ".explain" dot command can be used to set the output mode
to "column" and to set the column widths to values that are reasonable
for looking at the output of an EXPLAIN command.  The EXPLAIN command
is an SQLite-specific SQL extension that is useful for debugging.  If any
regular SQL is prefaced by EXPLAIN, then the SQL command is parsed and
analyzed but is not executed.  Instead, the sequence of virtual machine
instructions that would have been used to execute the SQL command is
returned like a query result.  For example:</p>}

Code {
sqlite> (((.explain)))
sqlite> (((explain delete from tbl1 where two<20;)))
addr  opcode        p1     p2     p3          
----  ------------  -----  -----  -------------------------------------   
0     ListOpen      0      0                  
1     Open          0      1      tbl1        
2     Next          0      9                  
3     Field         0      1                  
4     Integer       20     0                  
5     Ge            0      2                  
6     Key           0      0                  
7     ListWrite     0      0                  
8     Goto          0      2                  
9     Noop          0      0                  
10    ListRewind    0      0                  
11    ListRead      0      14                 
12    Delete        0      0                  
13    Goto          0      11                 
14    ListClose     0      0                  
}

puts {

<p>The ".timeout" command sets the amount of time that the <b>sqlite</b>
program will wait for locks to clear on files it is trying to access
before returning an error.  The default value of the timeout is zero so
that an error is returned immediately if any needed database table or
index is locked.</p>

<p>And finally, we mention the ".exit" command which causes the
sqlite program to exit.</p>

<h3>Using sqlite in a shell script</h3>

<p>
One way to use sqlite in a shell script is to use "echo" or
"cat" to generate a sequence of commands in a file, then invoke sqlite 
while redirecting input from the generated command file.  This
works fine and is appropriate in many circumstances.  But as
an added convenience, sqlite allows a single SQL command to be
entered on the command line as a second argument after the
database name.  When the sqlite program is launched with two
arguments, the second argument is passed to the SQLite library
for processing, the query results are printed on standard output
in list mode, and the program exits.  This mechanism is designed
to make sqlite easy to use in conjunction with programs like
"awk".  For example:</p>}

Code {
$ (((sqlite ex1 'select * from tbl1' |)))
> ((( awk '{printf "<tr><td>%s<td>%s\n",$1,$2 }')))
<tr><td>hello<td>10
<tr><td>goodbye<td>20
$
}

puts {
<h3>Ending shell commands</h3>

<p>
SQLite commands are normally terminated by a semicolon.  In the shell 
you can also use the word "GO" (case-insensitive) or a backslash character 
"\" on a line by itself to end a command.  These are used by SQL Server 
and Oracle, respectively.  These won't work in <b>sqlite_exec()</b>, 
because the shell translates these into a semicolon before passing them 
to that function.</p>
}
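
puts {
<p>For example, the following is a rough sketch of a session that ends a
statement with "GO" instead of a semicolon (output shown with the default
"|" separator; your table contents may differ):</p>}

Code {
sqlite> (((SELECT * FROM tbl1)))
   ...> (((GO)))
hello|10
goodbye|20
sqlite>
}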

puts {
<h3>Compiling the sqlite program from sources</h3>

<p>
The sqlite program is built automatically when you compile the
sqlite library.  Just get a copy of the source tree, run
"configure" and then "make".</p>
}
footer $rcsid

--- NEW FILE: support.tcl ---
set rcsid {$Id: support.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {SQLite Support Options}
puts {
<h2>SQLite Support Options</h2>


<h3>Mailing List</h3>
<p>
A mailing list has been set up for asking questions and
for open discussion of problems
and issues by the SQLite user community.
To subscribe to the mailing list, send an email to
<a href="mailto:sqlite-users-subscribe at sqlite.org">
sqlite-users-subscribe at sqlite.org</a>.
If you would prefer to get digests rather than individual
emails, send a message to
<a href="mailto:sqlite-users-digest-subscribe at sqlite.org">
sqlite-users-digest-subscribe at sqlite.org</a>.
For additional information about operating and using this
mailing list, send a message to
<a href="mailto:sqlite-users-help at sqlite.org">
sqlite-users-help at sqlite.org</a> and instructions will be
sent to you by return email.
</p>

<p>
There are multiple archives of the mailing list:
</p>

<blockquote>
<a href="http://www.mail-archive.com/sqlite-users%40sqlite.org/">
http://www.mail-archive.com/sqlite-users%40sqlite.org</a><br>
<a href="http://www.theaimsgroup.com/">
http://www.theaimsgroup.com/</a><br>
<a href="http://news.gmane.org/gmane.comp.db.sqlite.general">
http://news.gmane.org/gmane.comp.db.sqlite.general</a>
</blockquote>


<h3>Professional Support</h3>

<p>
If you would like professional support for SQLite
or if you want custom modifications to SQLite performed by the
original author, these services are available for a modest fee.
For additional information visit
<a href="http://www.hwaci.com/sw/sqlite/prosupport.html">
http://www.hwaci.com/sw/sqlite/prosupport.html</a> or contact:</p>

<blockquote>
D. Richard Hipp <br />
Hwaci - Applied Software Research <br />
704.948.4565 <br />
<a href="mailto:drh at hwaci.com">drh at hwaci.com</a>
</blockquote>

}
footer $rcsid

--- NEW FILE: tclsqlite.tcl ---
#
# Run this Tcl script to generate the tclsqlite.html file.
#
set rcsid {$Id: tclsqlite.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {The Tcl interface to the SQLite library}
proc METHOD {name text} {
  puts "<a name=\"$name\">\n<h3>The \"$name\" method</h3>\n"
  puts $text
}
puts {
<h2>The Tcl interface to the SQLite library</h2>

<p>The SQLite library is designed to be very easy to use from
a Tcl or Tcl/Tk script.  This document gives an overview of the Tcl
programming interface.</p>

<h3>The API</h3>

<p>The interface to the SQLite library consists of a single
Tcl command named <b>sqlite</b> (version 2.8) or <b>sqlite3</b>
(version 3.0).  Because there is only this
one command, the interface is not placed in a separate
namespace.</p>

<p>The <b>sqlite3</b> command is used as follows:</p>

<blockquote>
<b>sqlite3</b>&nbsp;&nbsp;<i>dbcmd&nbsp;&nbsp;database-name</i>
</blockquote>

<p>
The <b>sqlite3</b> command opens the database named in the second
argument.  If the database does not already exist, it is
automatically created.
The <b>sqlite3</b> command also creates a new Tcl
command to control the database.  The name of the new Tcl command
is given by the first argument.  This approach is similar to the
way widgets are created in Tk.
</p>

<p>
The name of the database is just the name of a disk file in which
the database is stored.
</p>

<p>
Once an SQLite database is open, it can be controlled using 
methods of the <i>dbcmd</i>.  There are currently 17 methods
defined:</p>

<p>
<ul>
}
foreach m [lsort {
 authorizer
 busy
 changes
 close
 collate
 collation_needed
 commit_hook
 complete
 errorcode
 eval
 function
 last_insert_rowid
 onecolumn
 progress
 timeout
 total_changes
 trace
}] {
 puts "<li><a href=\"#$m\">$m</a></li>"
}
puts {
</ul>
</p>

<p>The use of each of these methods will be explained in the sequel, though
not in the order shown above.</p>

}

##############################################################################
METHOD close {

<p>
As its name suggests, the "close" method to an SQLite database just
closes the database.  This has the side-effect of deleting the
<i>dbcmd</i> Tcl command.  Here is an example of opening and then
immediately closing a database:
</p>

<blockquote>
<b>sqlite3 db1 ./testdb<br>
db1 close</b>
</blockquote>

<p>
If you delete the <i>dbcmd</i> directly, that has the same effect
as invoking the "close" method.  So the following code is equivalent
to the previous:</p>

<blockquote>
<b>sqlite3 db1 ./testdb<br>
rename db1 {}</b>
</blockquote>
}

##############################################################################
METHOD eval {
<p>
The most useful <i>dbcmd</i> method is "eval".  The eval method is used
to execute SQL on the database.  The syntax of the eval method looks
like this:</p>

<blockquote>
<i>dbcmd</i>&nbsp;&nbsp;<b>eval</b>&nbsp;&nbsp;<i>sql</i>
&nbsp;&nbsp;&nbsp;&nbsp;?<i>array-name&nbsp;</i>?&nbsp;?<i>script</i>?
</blockquote>

<p>
The job of the eval method is to execute the SQL statement or statements
given in the second argument.  For example, to create a new table in
a database, you can do this:</p>

<blockquote>
<b>sqlite3 db1 ./testdb<br>
db1 eval {CREATE TABLE t1(a int, b text)}</b>
</blockquote>

<p>The above code creates a new table named <b>t1</b> with columns
<b>a</b> and <b>b</b>.  What could be simpler?</p>

<p>Query results are returned as a list of column values.  If a
query requests 2 columns and there are 3 rows matching the query,
then the returned list will contain 6 elements.  For example:</p>

<blockquote>
<b>db1 eval {INSERT INTO t1 VALUES(1,'hello')}<br>
db1 eval {INSERT INTO t1 VALUES(2,'goodbye')}<br>
db1 eval {INSERT INTO t1 VALUES(3,'howdy!')}<br>
set x [db1 eval {SELECT * FROM t1 ORDER BY a}]</b>
</blockquote>

<p>The variable <b>$x</b> is set by the above code to</p>

<blockquote>
<b>1 hello 2 goodbye 3 howdy!</b>
</blockquote>

<p>You can also process the results of a query one row at a time
by specifying the name of an array variable and a script following
the SQL code.  For each row of the query result, the values of all
columns will be inserted into the array variable and the script will
be executed.  For instance:</p>

<blockquote>
<b>db1 eval {SELECT * FROM t1 ORDER BY a} values {<br>
&nbsp;&nbsp;&nbsp;&nbsp;parray values<br>
&nbsp;&nbsp;&nbsp;&nbsp;puts ""<br>
}</b>
</blockquote>

<p>This last code will give the following output:</p>

<blockquote><b>
values(*) = a b<br>
values(a) = 1<br>
values(b) = hello<p>

values(*) = a b<br>
values(a) = 2<br>
values(b) = goodbye<p>

values(*) = a b<br>
values(a) = 3<br>
values(b) = howdy!</b>
</blockquote>

<p>
For each column in a row of the result, the name of that column
is used as an index into the array.  The value of the column is stored
in the corresponding array entry.  The special array index * is
used to store a list of column names in the order that they appear.
</p>

<p>
If the array variable name is omitted or is the empty string, then the value of
each column is stored in a variable with the same name as the column
itself.  For example:
</p>

<blockquote>
<b>db1 eval {SELECT * FROM t1 ORDER BY a} {<br>
&nbsp;&nbsp;&nbsp;&nbsp;puts "a=$a b=$b"<br>
}</b>
</blockquote>

<p>
From this we get the following output
</p>

<blockquote><b>
a=1 b=hello<br>
a=2 b=goodbye<br>
a=3 b=howdy!</b>
</blockquote>

<p>
Tcl variable names can appear in the SQL statement of the second argument
in any position where it is legal to put a string or number literal.  The
value of the variable is substituted for the variable name.  If the
variable does not exist, a NULL value is used.  For example:
</p>

<blockquote><b>
db1 eval {INSERT INTO t1 VALUES(5,$bigblob)}
</b></blockquote>

<p>
Note that it is not necessary to quote the $bigblob value.  That happens
automatically.  If $bigblob is a large string or binary object, this
technique is not only easier to write, it is also much more efficient
since it avoids making a copy of the content of $bigblob.
</p>

}

##############################################################################
METHOD complete {

<p>
The "complete" method takes a string of supposed SQL as its only argument.
It returns TRUE if the string is a complete statement of SQL and FALSE if
there is more to be entered.</p>

<p>The "complete" method is useful when building interactive applications
in order to know when the user has finished entering a line of SQL code.
This is really just an interface to the <b>sqlite3_complete()</b> C
function.  Refer to the <a href="c_interface.html">C/C++ interface</a>
specification for additional information.</p>
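
<p>For example, reusing the <b>db1</b> connection and <b>t1</b> table from
the earlier examples, a rough sketch of how an application might test its
input buffer is:</p>

<blockquote><b>
db1 complete {SELECT * FROM t1}&nbsp;&nbsp;&nbsp;;# returns 0 - no terminating semicolon<br>
db1 complete {SELECT * FROM t1;}&nbsp;&nbsp;;# returns 1 - the statement is complete
</b></blockquote>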
}

##############################################################################
METHOD timeout {

<p>The "timeout" method is used to control how long the SQLite library
will wait for locks to clear before giving up on a database transaction.
The default timeout is 0 milliseconds.  (In other words, the default behavior
is not to wait at all.)</p>

<p>The SQLite database allows multiple simultaneous
readers or a single writer but not both.  If any process is writing to
the database, no other process is allowed to read or write.  If any process
is reading the database, other processes are allowed to read but not write.
The entire database shares a single lock.</p>

<p>When SQLite tries to open a database and finds that it is locked, it
can optionally delay for a short while and try to open the file again.
This process repeats until the query times out and SQLite returns a
failure.  The timeout is adjustable.  It is set to 0 by default so that
if the database is locked, the SQL statement fails immediately.  But you
can use the "timeout" method to change the timeout value to a positive
number.  For example:</p>

<blockquote><b>db1 timeout 2000</b></blockquote>

<p>The argument to the timeout method is the maximum number of milliseconds
to wait for the lock to clear.  So in the example above, the maximum delay
would be 2 seconds.</p>
}

##############################################################################
METHOD busy {

<p>The "busy" method, like "timeout", only comes into play when the
database is locked.  But the "busy" method gives the programmer much more
control over what action to take.  The "busy" method specifies a callback
Tcl procedure that is invoked whenever SQLite tries to open a locked
database.  This callback can do whatever is desired.  Presumably, the
callback will do some other useful work for a short while (such as service
GUI events) then return
so that the lock can be tried again.  The callback procedure should
return "0" if it wants SQLite to try again to open the database and
should return "1" if it wants SQLite to abandon the current operation.
}

##############################################################################
METHOD last_insert_rowid {

<p>The "last_insert_rowid" method returns an integer which is the ROWID
of the most recently inserted database row.</p>
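
<p>For example (a small sketch reusing the <b>db1</b> connection and
<b>t1</b> table from the "eval" examples):</p>

<blockquote><b>
db1 eval {INSERT INTO t1 VALUES(4,'again')}<br>
set rowid [db1 last_insert_rowid]
</b></blockquote>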
}

##############################################################################
METHOD function {

<p>The "function" method registers new SQL functions with the SQLite engine.
The arguments are the name of the new SQL function and a TCL command that
implements that function.  Arguments to the function are appended to the
TCL command before it is invoked.</p>

<p>
The following example creates a new SQL function named "hex" that converts
its numeric argument in to a hexadecimal encoded string:
</p>

<blockquote><b>
db function hex {format 0x%X}
</b></blockquote>

}

##############################################################################
METHOD onecolumn {

<p>The "onecolumn" method works like "eval" in that it evaluates the
SQL query statement given as its argument.  The difference is that
"onecolumn" returns a single element which is the first column of the
first row of the query result.</p>

<p>This is a convenience method.  It saves the user from having to
do a "<tt>[lindex&nbsp;...&nbsp;0]</tt>" on the results of an "eval"
in order to extract a single column result.</p>
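
<p>For example, a sketch of counting the rows in the <b>t1</b> table from
the earlier examples:</p>

<blockquote><b>
set n [db1 onecolumn {SELECT count(*) FROM t1}]
</b></blockquote>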
}

##############################################################################
METHOD changes {

<p>The "changes" method returns an integer which is the number of rows
in the database that were inserted, deleted, and/or modified by the most
recent "eval" method.</p>
}

##############################################################################
METHOD total_changes {

<p>The "total_changes" method returns an integer which is the number of rows
in the database that were inserted, deleted, and/or modified since the
current database connection was first opened.</p>
}

##############################################################################
METHOD authorizer {

<p>The "authorizer" method provides access to the sqlite3_set_authorizer
C/C++ interface.  The argument to authorizer is the name of a procedure that
is called when SQL statements are being compiled in order to authorize
certain operations.  The callback procedure takes 5 arguments which describe
the operation being coded.  If the callback returns the text string
"SQLITE_OK", then the operation is allowed.  If it returns "SQLITE_IGNORE",
then the operation is silently disabled.  If the return is "SQLITE_DENY"
then the compilation fails with an error.
</p>

<p>If the argument is an empty string then the authorizer is disabled.
If the argument is omitted, then the current authorizer is returned.</p>
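
<p>The following is only a sketch.  It assumes that the first of the five
callback arguments is the operation code spelled as a string such as
"SQLITE_DELETE"; the remaining four arguments are accepted but ignored
here.</p>

<blockquote><b>
proc deny_delete {op a b c d} {<br>
&nbsp;&nbsp;&nbsp;&nbsp;if {$op=="SQLITE_DELETE"} {return SQLITE_DENY}<br>
&nbsp;&nbsp;&nbsp;&nbsp;return SQLITE_OK<br>
}<br>
db1 authorizer deny_delete
</b></blockquote>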
}

##############################################################################
METHOD progress {

<p>This method registers a callback that is invoked periodically during
query processing.  There are two arguments: the number of SQLite virtual
machine opcodes between invocations, and the TCL command to invoke.
Setting the progress callback to an empty string disables it.</p>

<p>The progress callback can be used to display the status of a lengthy
query or to process GUI events during a lengthy query.</p>
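
<p>A minimal sketch: invoke a handler every 2000 opcodes so that pending GUI
events can be serviced.  (Returning 0 here assumes the same convention as the
underlying sqlite3_progress_handler() interface, where a non-zero return
interrupts the query.)</p>

<blockquote><b>
proc keep_gui_alive {} {<br>
&nbsp;&nbsp;&nbsp;&nbsp;update&nbsp;&nbsp;;# let pending GUI events run<br>
&nbsp;&nbsp;&nbsp;&nbsp;return 0<br>
}<br>
db1 progress 2000 keep_gui_alive
</b></blockquote>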
}


##############################################################################
METHOD collate {

<p>This method registers new text collating sequences.  There are
two arguments: the name of the collating sequence and the name of a
TCL procedure that implements a comparison function for the collating
sequence.
</p>

<p>For example, the following code implements a collating sequence called
"NOCASE" that sorts in text order without regard to case:
</p>

<blockquote><b>
proc nocase_compare {a b} {<br>
&nbsp;&nbsp;&nbsp;&nbsp;return [string compare [string tolower $a] [string tolower $b]]<br>
}<br>
db collate NOCASE nocase_compare<br>
</b></blockquote>
}

##############################################################################
METHOD collation_needed {

<p>This method registers a callback routine that is invoked when the SQLite
engine needs a particular collating sequence but does not have that
collating sequence registered.  The callback can register the collating
sequence.  The callback is invoked with a single parameter which is the
name of the needed collating sequence.</p>
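
<p>For example, here is a sketch that registers the "NOCASE" collating
sequence (reusing the <b>nocase_compare</b> procedure from the "collate"
example above) only when it is actually needed:</p>

<blockquote><b>
proc add_collation {name} {<br>
&nbsp;&nbsp;&nbsp;&nbsp;if {$name=="NOCASE"} {<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;db collate NOCASE nocase_compare<br>
&nbsp;&nbsp;&nbsp;&nbsp;}<br>
}<br>
db collation_needed add_collation
</b></blockquote>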
}

##############################################################################
METHOD commit_hook {

<p>This method registers a callback routine that is invoked just before
SQLite tries to commit changes to a database.  If the callback throws
an exception or returns a non-zero result, then the transaction rolls back
rather than commit.</p>
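
<p>A minimal sketch that simply logs each commit and allows it to
proceed:</p>

<blockquote><b>
proc note_commit {} {<br>
&nbsp;&nbsp;&nbsp;&nbsp;puts "about to commit"<br>
&nbsp;&nbsp;&nbsp;&nbsp;return 0&nbsp;&nbsp;;# a non-zero return would cause a rollback<br>
}<br>
db1 commit_hook note_commit
</b></blockquote>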
}

##############################################################################
METHOD errorcode {

<p>This method returns the numeric error code that resulted from the most
recent SQLite operation.</p>
}

##############################################################################
METHOD trace {

<p>The "trace" method registers a callback that is invoked as each SQL
statement is compiled.  The text of the SQL is appended as a single string
to the command before it is invoked.  This can be used (for example) to
keep a log of all SQL operations that an application performs.
</p>
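
<p>For example, a sketch of a simple logger that writes every compiled SQL
statement to standard error:</p>

<blockquote><b>
proc sql_logger {sql} {<br>
&nbsp;&nbsp;&nbsp;&nbsp;puts stderr "SQL: $sql"<br>
}<br>
db1 trace sql_logger
</b></blockquote>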
}


footer $rcsid

--- NEW FILE: vdbe.tcl ---
#
# Run this Tcl script to generate the vdbe.html file.
#
set rcsid {$Id: vdbe.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}
source common.tcl
header {The Virtual Database Engine of SQLite}
puts {
<h2>The Virtual Database Engine of SQLite</h2>

<blockquote><b>
This document describes the virtual machine used in SQLite version 2.8.0.  
</b></blockquote>
}

puts {
<p>If you want to know how the SQLite library works internally,
you need to begin with a solid understanding of the Virtual Database
Engine or VDBE.  The VDBE occurs right in the middle of the
processing stream (see the <a href="arch.html">architecture diagram</a>)
[...1944 lines suppressed...]

<p>For additional information on how the SQLite library
functions, the reader is directed to look at the SQLite source
code directly.  If you understand the material in this article,
you should not have much difficulty in following the sources.
Serious students of the internals of SQLite will probably
also want to make a careful study of the VDBE opcodes
as documented <a href="opcode.html">here</a>.  Most of the
opcode documentation is extracted from comments in the source
code using a script so you can also get information about the
various opcodes directly from the <b>vdbe.c</b> source file.
If you have successfully read this far, you should have little
difficulty understanding the rest.</p>

<p>If you find errors in either the documentation or the code,
feel free to fix them and/or contact the author at
<a href="mailto:drh at hwaci.com">drh at hwaci.com</a>.  Your bug fixes or
suggestions are always welcomed.</p>
}
footer $rcsid

--- NEW FILE: version3.tcl ---
#!/usr/bin/tclsh
source common.tcl
header {SQLite Version 3 Overview}
puts {
<h2>SQLite Version 3 Overview</h2>

<p>
SQLite version 3.0 introduces important changes to the library, including:
</p>

<ul>
<li>A more compact format for database files.</li>
<li>Manifest typing and BLOB support.</li>
<li>Support for both UTF-8 and UTF-16 text.</li>
<li>User-defined text collating sequences.</li>
<li>64-bit ROWIDs.</li>
<li>Improved Concurrency.</li>
</ul>

<p>
This document is a quick introduction to the changes for SQLite 3.0
for users who are already familiar with SQLite version 2.8.
</p>

<h3>Naming Changes</h3>

<p>
SQLite version 2.8 will continue to be supported with bug fixes
for the foreseeable future.  In order to allow SQLite version 2.8
and SQLite version 3.0 to peacefully coexist, the names of key files
and APIs in SQLite version 3.0 have been changed to include the
character "3".  For example, the include file used by C programs
has been changed from "sqlite.h" to "sqlite3.h".  And the name of
the shell program used to interact with databases has been changed
from "sqlite.exe" to "sqlite3.exe".  With these changes, it is possible
to have both SQLite 2.8 and SQLite 3.0 installed on the same system at
the same time.  And it is possible for the same C program to link
against both SQLite 2.8 and SQLite 3.0 at the same time and to use
both libraries at the same time.
</p>

<h3>New File Format</h3>

<p>
The format used by SQLite database files has been completely revised.
The old version 2.1 format and the new 3.0 format are incompatible with
one another.  Version 2.8 of SQLite will not read a version 3.0 database
file, and version 3.0 of SQLite will not read a version 2.8 database file.
</p>

<p>
To convert an SQLite 2.8 database into an SQLite 3.0 database, have
ready the command-line shells for both version 2.8 and 3.0.  Then
enter a command like the following:
</p>

<blockquote><pre>
sqlite OLD.DB .dump | sqlite3 NEW.DB
</pre></blockquote>

<p>
The new database file format uses B+trees for tables.  In a B+tree, all
data is stored in the leaves of the tree instead of in both the leaves and
the intermediate branch nodes.  The use of B+trees for tables allows for
better scalability and the storage of larger data fields without the use of
overflow pages.  Traditional B-trees are still used for indices.</p>

<p>
The new file format also supports variable page sizes between 512 and
65536 bytes.  The size of a page is stored in the file header so the
same library can read databases with different page sizes, in theory,
though this feature has not yet been implemented in practice.
</p>

<p>
The new file format omits unused fields from its disk images.  For example,
indices use only the key part of a B-tree record and not the data.  So
for indices, the field that records the length of the data is omitted.
Integer values such as the length of key and data are stored using
a variable-length encoding so that only one or two bytes are required to
store the most common cases but up to 64-bits of information can be encoded
if needed. 
Integer and floating point data is stored on the disk in binary rather
than being converted into ASCII as in SQLite version 2.8.
These changes taken together result in database files that are typically
25% to 35% smaller than the equivalent files in SQLite version 2.8.
</p>

<p>
Details of the low-level B-tree format used in SQLite version 3.0 can
be found in header comments to the 
<a href="http://www.sqlite.org/cvstrac/getfile/sqlite/src/btree.c">btree.c</a>
source file.
</p>

<h3>Manifest Typing and BLOB Support</h3>

<p>
SQLite version 2.8 will deal with data in various formats internally,
but when writing to the disk or interacting through its API, SQLite 2.8
always converts data into ASCII text.  SQLite 3.0, in contrast, exposes 
its internal data representations to the user and stores binary representations
to disk when appropriate.  The exposing of non-ASCII representations was
added in order to support BLOBs.
</p>

<p>
SQLite version 2.8 had the feature that any type of data could be stored
in any table column regardless of the declared type of that column.  This
feature is retained in version 3.0, though in a slightly modified form.
Each table column will store any type of data, though columns have an
affinity for the format of data defined by their declared datatype.
When data is inserted into a column, that column will make an attempt
to convert the data into the column's declared type.   All SQL
database engines do this.  The difference is that SQLite 3.0 will 
still store the data even if a format conversion is not possible.
</p>

<p>
For example, if you have a table column declared to be of type "INTEGER"
and you try to insert a string, the column will look at the text string
and see if it looks like a number.  If the string does look like a number
it is converted into a number (and into an integer if the number has no
fractional part) and stored that way.  But if the string is not
a well-formed number it is still stored as a string.  A column with a
type of "TEXT" tries to convert numbers into an ASCII-Text representation
before storing them.  But BLOBs are stored in TEXT columns as BLOBs because
you cannot in general convert a BLOB into text.
</p>
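
<p>The following SQL sketch illustrates the idea (the table and column
names are arbitrary):</p>

<blockquote><pre>
CREATE TABLE ex(i INTEGER, t TEXT);
INSERT INTO ex VALUES('123', 456);   -- '123' looks like a number, so it is
                                     -- stored as the integer 123; 456 is
                                     -- stored as the text '456'
INSERT INTO ex VALUES('xyz', 789);   -- 'xyz' is not a well-formed number,
                                     -- so it is stored as the text 'xyz'
</pre></blockquote>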

<p>
In most other SQL database engines the datatype is associated with
the table column that holds the data - with the data container.
In SQLite 3.0, the datatype is associated with the data itself, not
with its container.
<a href="http://www.paulgraham.com/">Paul Graham</a> in his book 
<a href="http://www.paulgraham.com/acl.html"><i>ANSI Common Lisp</i></a>
calls this property "Manifest Typing".
Other writers have other definitions for the term "manifest typing",
so beware of confusion.  But by whatever name, that is the datatype
model supported by SQLite 3.0.
</p>

<p>
Additional information about datatypes in SQLite version 3.0 is
available
<a href="datatype3.html">separately</a>.
</p>

<h3>Support for UTF-8 and UTF-16</h3>

<p>
The new API for SQLite 3.0 contains routines that accept text as
both UTF-8 and UTF-16 in the native byte order of the host machine.
Each database file manages text as either UTF-8, UTF-16BE (big-endian),
or UTF-16LE (little-endian).  Internally and in the disk file, the
same text representation is used everywhere.  If the text representation
specified by the database file (in the file header) does not match
the text representation required by the interface routines, then text
is converted on-the-fly.
Constantly converting text from one representation to another can be
computationally expensive, so it is suggested that programmers choose a
single representation and stick with it throughout their application.
</p>

<p>
In the current implementation of SQLite, the SQL parser only works
with UTF-8 text.  So if you supply UTF-16 text it will be converted.
This is just an implementation issue and there is nothing to prevent
future versions of SQLite from parsing UTF-16 encoded SQL natively.
</p>

<p>
When creating new user-defined SQL functions and collating sequences,
each function or collating sequence can specify whether it works with
UTF-8, UTF-16BE, or UTF-16LE.  Separate implementations can be registered
for each encoding.   If an SQL function or collating sequence is required
but a version for the current text encoding is not available, then 
the text is automatically converted.  As before, this conversion takes
computation time, so programmers are advised to pick a single
encoding and stick with it in order to minimize the amount of unnecessary
format juggling.
</p>
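<p>
For instance (a sketch only; the function name <b>half</b> is invented for
this illustration and is not a built-in), a user-defined SQL function that
declares itself as a UTF-8 implementation is registered like this, and
SQLite converts the arguments for it when some other encoding is in use:
</p>

<blockquote><pre>
/* Sketch: register a UTF-8 implementation of an invented SQL function
** half(X), which returns X divided by two. */
#include &lt;sqlite3.h&gt;

static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  sqlite3_result_double(ctx, 0.5 * sqlite3_value_double(argv[0]));
}

int register_half(sqlite3 *db){
  /* The SQLITE_UTF8 flag says this implementation expects UTF-8 text;
  ** separate UTF-16 implementations could be registered under the same
  ** name using SQLITE_UTF16LE or SQLITE_UTF16BE. */
  return sqlite3_create_function(db, "half", 1, SQLITE_UTF8, 0,
                                 halfFunc, 0, 0);
}
</pre></blockquote>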

<p>
SQLite is not particular about the text it receives and is more than
happy to process text strings that are not normalized or even
well-formed UTF-8 or UTF-16.  Thus, programmers who want to store
ISO8859 data can do so using the UTF-8 interfaces.  As long as no
attempts are made to use a UTF-16 collating sequence or SQL function,
the byte sequence of the text will not be modified in any way.
</p>

<h3>User-defined Collating Sequences</h3>

<p>
A collating sequence is just a defined order for text.  When SQLite 3.0
sorts (or uses a comparison operator like "&lt;" or "&gt;=") the sort order
is first determined by the data type.
</p>

<ul>
<li>NULLs sort first</li>
<li>Numeric values sort next in numerical order</li>
<li>Text values come after numerics</li>
<li>BLOBs sort last</li>
</ul>

<p>
Collating sequences are used for comparing two text strings.
The collating sequence does not change the ordering of NULLs, numbers,
or BLOBs, only text.
</p>

<p>
A collating sequence is implemented as a function that takes the
two strings being compared as inputs and returns negative, zero, or
positive if the first string is less than, equal to, or greater than
the second.
SQLite 3.0 comes with a single built-in collating sequence named "BINARY"
which is implemented using the memcmp() routine from the standard C library.
The BINARY collating sequence works well for English text.  For other
languages or locales, alternative collating sequences may be preferred.
</p>

<p>
The decision of which collating sequence to use is controlled by the
COLLATE clause in SQL.  A COLLATE clause can occur in a table definition,
to define a default collating sequence for a table column, on a field
of an index, or in the ORDER BY clause of a SELECT statement.
Planned enhancements to SQLite include standard CAST() syntax
to allow the collating sequence of an expression to be defined.
</p>
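<p>
As a rough sketch of how an alternative collating sequence is supplied
(illustrative only; the collation name <b>nocase_ascii</b> and the table
in the comments are invented), an application registers a comparison
function and then names it in COLLATE clauses:
</p>

<blockquote><pre>
/* Sketch: a case-insensitive ASCII collating sequence. */
#include &lt;ctype.h&gt;
#include &lt;sqlite3.h&gt;

static int nocaseCompare(void *notUsed, int n1, const void *p1,
                                        int n2, const void *p2){
  const unsigned char *a = p1, *b = p2;
  int n = n1&lt;n2 ? n1 : n2;
  int i;
  for(i=0; i&lt;n; i++){
    int c = tolower(a[i]) - tolower(b[i]);
    if( c ) return c;
  }
  return n1 - n2;    /* on a tie, the shorter string sorts first */
}

int register_nocase_ascii(sqlite3 *db){
  return sqlite3_create_collation(db, "nocase_ascii", SQLITE_UTF8,
                                  0, nocaseCompare);
}

/* Then in SQL:
**   CREATE TABLE people(name TEXT COLLATE nocase_ascii);
**   SELECT name FROM people ORDER BY name COLLATE nocase_ascii;
*/
</pre></blockquote>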

<h3>64-bit ROWIDs</h3>

<p>
Every row of a table has a unique rowid.
If the table defines a column with the type "INTEGER PRIMARY KEY" then that
column becomes an alias for the rowid.  But with or without an INTEGER PRIMARY
KEY column, every row still has a rowid.
</p>

<p>
In SQLite version 3.0, the rowid is a 64-bit signed integer.
This is an expansion over SQLite version 2.8, which only permitted
32-bit rowids.
</p>

<p>
To minimize storage space, the 64-bit rowid is stored as a variable length
integer.  Rowids between 0 and 127 use only a single byte.
Rowids between 128 and 16383 use two bytes, rowids up to 2097151 use three
bytes, and so forth.  Negative rowids are allowed but they always use
nine bytes of storage and so their use is discouraged.  When rowids
are generated automatically by SQLite, they will always be non-negative.
</p>
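<p>
The following fragment is only an illustration of the size rule stated
above, not the encoder used inside SQLite itself: with seven bits of the
rowid carried per byte, the storage required works out as follows.
</p>

<blockquote><pre>
/* Sketch of the size rule: 7 bits of rowid per byte, with negative
** rowids always taking the full nine bytes. */
static int rowidStorageBytes(long long rowid){
  int nByte = 1;
  if( rowid&lt;0 ) return 9;            /* negative rowids use nine bytes */
  while( rowid&gt;0x7f &amp;&amp; nByte&lt;9 ){    /* 0..127 fits in one byte */
    rowid &gt;&gt;= 7;                     /* each further byte adds 7 bits */
    nByte++;
  }
  return nByte;                      /* 16383 needs 2, 2097151 needs 3 */
}
</pre></blockquote>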

<h3>Improved Concurrency</h3>

<p>
SQLite version 2.8 allowed multiple simultaneous readers or a single
writer but not both.  SQLite version 3.0 allows one process to begin
writing the database while other processes continue to read.  The
writer must still obtain an exclusive lock on the database for a brief
interval in order to commit its changes, but the exclusive lock is no
longer required for the entire write operation.
A <a href="lockingv3.html">more detailed report</a> on the locking
behavior of SQLite version 3.0 is available separately.
</p>

<p>
A limited form of table-level locking is now also available in SQLite.
If each table is stored in a separate database file, those separate
files can be attached to the main database (using the ATTACH command)
and the combined databases will function as one.  But locks will only
be acquired on individual files as needed.  So if you redefine "database"
to mean two or more database files, then it is entirely possible for
two processes to be writing to the same database at the same time.
To further support this capability, commits of transactions involving
two or more ATTACHed databases are now atomic.
</p>
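<p>
A hypothetical sketch of this arrangement (the file, table, and column
names below are invented for illustration) attaches a second single-table
database file and commits a transaction that spans both files atomically:
</p>

<blockquote><pre>
/* Sketch: two tables kept in separate database files, written inside
** one atomic transaction.  Locks are taken per file, as needed. */
#include &lt;sqlite3.h&gt;

int write_both(void){
  sqlite3 *db;
  if( sqlite3_open("orders.db", &amp;db)!=SQLITE_OK ) return 1;
  sqlite3_exec(db,
    "ATTACH DATABASE 'customers.db' AS cust;"
    "BEGIN;"
    "INSERT INTO main.orders VALUES(1, 'widget');"
    "INSERT INTO cust.customers VALUES(1, 'Acme');"
    "COMMIT;"   /* the commit covering both files is atomic */
    , 0, 0, 0);
  sqlite3_close(db);
  return 0;
}
</pre></blockquote>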

<h3>Credits</h3>

<p>
SQLite version 3.0 is made possible in part by AOL developers
supporting and embracing great Open-Source Software.
</p>


}
footer {$Id: version3.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}

--- NEW FILE: whentouse.tcl ---
#
# Run this TCL script to generate HTML for the whentouse.html file.
#
set rcsid {$Id: whentouse.tcl,v 1.1 2004/11/15 14:42:04 anthm Exp $}

puts {<html>
<head><title>Appropriate Uses of SQLite</title></head>
<body bgcolor=white>
<h1 align=center>Appropriate Uses Of SQLite</h1>
}
puts "<p align=center>
(This page was last modified on [lrange $rcsid 3 4] UTC)
</p>"

puts {
<p>
SQLite is different from most other SQL database engines in that its
primary design goal is to be simple:
</p>

<ul>
<li>Simple to administer</li>
<li>Simple to operate</li>
<li>Simple to use in a program</li>
<li>Simple to maintain and customize</li>
</ul>

<p>
Many people like SQLite because it is small and fast.  But those
qualities are just happy accidents.
Users also find that SQLite is very reliable.  Reliability is
a consequence of simplicity.  With less complication, there is
less to go wrong.  So, yes, SQLite is small, fast, and reliable,
but first and foremost, SQLite strives to be simple.
</p>

<p>
Simplicity in a database engine can be either a strength or a
weakness, depending on what you are trying to do.  In order to
achieve simplicity, SQLite has had to sacrifice other characteristics
that some people find useful, such as high concurrency, fine-grained
access control, a rich set of built-in functions, stored procedures,
esoteric SQL language features, XML and/or Java extensions,
tera- or peta-byte scalability, and so forth.  If you need these
kinds of features and don't mind the added complexity that they
bring, then SQLite is probably not the database for you.
SQLite is not intended to be an enterprise database engine.  It
is not designed to compete with Oracle or PostgreSQL.
</p>

<p>
The basic rule of thumb for when it is appropriate to use SQLite is
this:  Use SQLite in situations where simplicity of administration,
implementation, and maintenance is more important than the countless
complex features that enterprise database engines provide.
As it turns out, situations where simplicity is the better choice
are more common than many people realize.
</p>

<h2>Situations Where SQLite Works Well</h2>

<ul>
<li><p><b>Websites</b></p>

<p>SQLite will usually work great as the database engine for low- to
medium-traffic websites (which is to say, 99.9% of all websites).
The amount of web traffic that SQLite can handle depends, of course,
on how heavily the website uses its database.  Generally
speaking, any site that gets fewer than 100000 hits/day should work
fine.  The 100000 hits/day figure is a conservative estimate, not a
hard upper bound.
SQLite has been demonstrated to work with 10 times that amount
of traffic.</p>
</li>

<li><p><b>Embedded devices and applications</b></p>

<p>Because an SQLite database requires little or no administration,
SQLite is a good choice for devices or services that must work
unattended and without human support.  SQLite is a good fit for
use in cellphones, PDAs, set-top boxes, and/or appliances.  It also
works well as an embedded database in downloadable consumer applications.
</p>
</li>

<li><p><b>Application File Format</b></p>

<p>
SQLite has been used with great success as the on-disk file format
for desktop applications such as financial analysis tools, CAD
packages, record keeping programs, and so forth.  The traditional
File/Open operation does an sqlite_open() and executes a
BEGIN TRANSACTION to get exclusive access to the content.  File/Save
does a COMMIT followed by another BEGIN TRANSACTION.  The use
of transactions guarantees that updates to the application file are atomic,
durable, isolated, and consistent.
</p>

<p>
Temporary triggers can be added to the database to record all
changes into a (temporary) undo/redo log table.  These changes can then
be played back when the user presses the Undo and Redo buttons.  Using
this technique, an unlimited depth undo/redo implementation can be written
in surprisingly little code (see the sketch following this list).
</p>
</li>

<li><p><b>Replacement for <i>ad hoc</i> disk files</b></p>

<p>Many programs use fopen(), fread(), and fwrite() to create and
manage files of data in home-grown formats.  SQLite works well as a
replacement for these <i>ad hoc</i> data files.</p>
</li>

<li><p><b>Internal or temporary databases</b></p>

<p>
For programs that have a lot of data that must be sifted and sorted
in diverse ways, it is often easier and quicker to load the data into
an in-memory SQLite database and use queries with joins and ORDER BY
clauses to extract the data in the form and order needed rather than
to try to code the same operations manually.
Using an SQL database internally in this way also gives the program
greater flexibility since new columns and indices can be added without
having to recode every query.
</p>
</li>

<li><p><b>Command-line dataset analysis tool</b></p>

<p>
Experienced SQL users can employ
the command-line <b>sqlite</b> program to analyze miscellaneous
datasets. Raw data can be imported using the COPY command, then that
data can be sliced and diced to generate a myriad of summary
reports.  Possible uses include website log analysis, sports
statistics analysis, compilation of programming metrics, and
analysis of experimental results.
</p>

<p>
You can also do the same thing with an enterprise client/server
database, of course.  The advantages to using SQLite in this situation
are that SQLite is much easier to set up and the resulting database 
is a single file that you can store on a floppy disk or email to
a colleague.
</p>
</li>

<li><p><b>Stand-in for an enterprise database during demos or testing</b></p>

<p>
If you are writing a client application for an enterprise database engine,
it makes sense to use a generic database backend that allows you to connect
to many different kinds of SQL database engines.  It makes even better
sense to
go ahead and include SQLite in the mix of supported databases and to statically
link the SQLite engine in with the client.  That way the client program
can be used standalone with an SQLite data file for testing or for
demonstrations.
</p>
</li>

<li><p><b>Database Pedagogy</b></p>

<p>
Because it is simple to set up and use (installation is trivial: just
copy the <b>sqlite</b> or <b>sqlite.exe</b> executable to the target machine
and run it), SQLite makes a good database engine for use in teaching SQL.
Students can easily create as many databases as they like and can
email databases to the instructor for comments or grading.  For more
advanced students who are interested in studying how an RDBMS is
implemented, the modular, well-commented, and well-documented SQLite code
can serve as a good basis.  This is not to say that SQLite is an accurate
model of how other database engines are implemented, but rather that a student
who understands how SQLite works can more quickly comprehend the operational
principles of other systems.
</p>
</li>

<li><p><b>Experimental SQL language extensions</b></p>

<p>The simple, modular design of SQLite makes it a good platform for
prototyping new, experimental database language features or ideas.
</p>
</li>


</ul>
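
<p>
As mentioned in the "Application File Format" item above, here is a rough
sketch of the temporary-trigger technique (the table <b>doc</b>, its
columns, and the log table name are all invented for this illustration):
</p>

<blockquote><pre>
/* Sketch: temporary triggers that capture the SQL needed to undo each
** change to an application table.  Run this with sqlite3_exec() right
** after opening the document file. */
static const char *zUndoSetup =
  "CREATE TEMP TABLE undolog(seq INTEGER PRIMARY KEY, sql TEXT);"

  "CREATE TEMP TRIGGER doc_undo_update AFTER UPDATE ON doc BEGIN"
  "  INSERT INTO undolog(sql) VALUES("
  "    'UPDATE doc SET body=' || quote(old.body) ||"
  "    ' WHERE id=' || old.id);"
  "END;"

  "CREATE TEMP TRIGGER doc_undo_delete AFTER DELETE ON doc BEGIN"
  "  INSERT INTO undolog(sql) VALUES("
  "    'INSERT INTO doc(id, body) VALUES(' ||"
  "    old.id || ',' || quote(old.body) || ')');"
  "END;";

/* Pressing Undo pops the newest rows out of undolog and runs the SQL
** text they contain; a mirror-image redo log can be kept the same way. */
</pre></blockquote>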

<h2>Situations Where Another RDBMS May Work Better</h2>

<ul>
<li><p><b>Client/Server Applications</b></p>

<p>If you have many client programs accessing a common database
over a network, you should consider using a client/server database
engine instead of SQLite.  SQLite will work over a network filesystem,
but because of the latency associated with most network filesystems,
performance will not be great.  Also, the file locking logic of
many network filesystem implementations contains bugs (on both Unix
and Windows).  If file locking does not work as it should,
it might be possible for two or more client programs to modify the
same part of the same database at the same time, resulting in 
database corruption.  Because this problem results from bugs in
the underlying filesystem implementation, there is nothing SQLite
can do to prevent it.</p>

<p>A good rule of thumb is that you should avoid using SQLite
in situations where the same database will be accessed simultaneously
from many computers over a network filesystem.</p>
</li>

<li><p><b>High-volume Websites</b></p>

<p>SQLite will normally work fine as the database backend to a website.
But if your website is so busy that you are thinking of splitting the
database component off onto a separate machine, then you should
definitely consider using an enterprise-class client/server database
engine instead of SQLite.</p>
</li>

<li><p><b>Very large datasets</b></p>

<p>When you start a transaction in SQLite (which happens automatically
before any write operation that is not within an explicit BEGIN...COMMIT)
the engine has to allocate a bitmap of dirty pages in the disk file to
help it manage its rollback journal.  SQLite needs 256 bytes of RAM for
every 1MB of database.  For smaller databases, the amount of memory
required is not a problem, but when databases begin to grow into the
multi-gigabyte range, the size of the bitmap can get quite large: at
256 bytes per megabyte, a 100GB database needs roughly 25MB of RAM just
for this bitmap.  If you need to store and modify more than a few dozen
GB of data, you should consider using a different database engine.
</p>
</li>

<li><p><b>High Concurrency</b></p>

<p>
SQLite uses reader/writer locks on the entire database file.  That means
if any process is reading from any part of the database, all other
processes are prevented from writing any part of the database.
Similarly, if any one process is writing to any part of the database,
all other processes are prevented from reading any part of the
database.
For many situations, this is not a problem.  Each application
does its database work quickly and moves on, and no lock lasts for more
than a few dozen milliseconds.  But there are some problems that require
more concurrency, and those problems will need to seek a different
solution.
</p>
</li>

</ul>

}


puts {
<p><hr /></p>
<p>
<a href="index.html"><img src="/goback.jpg" border=0 />
Back to the SQLite home page</a>
</p>

</body></html>}



