/*
** 2011-07-09
**
** The author disclaims copyright to this source code. In place of
** a legal notice, here is a blessing:
**
** May you do good and not evil.
** May you find forgiveness for yourself and forgive others.
** May you share freely, never taking more than you give.
**
*************************************************************************
** This file contains code for the VdbeSorter object, used in concert with
** a VdbeCursor to sort large numbers of keys for CREATE INDEX statements
** or by SELECT statements with ORDER BY clauses that cannot be satisfied
** using indexes and without LIMIT clauses.
**
** The VdbeSorter object implements a multi-threaded external merge sort
** algorithm that is efficient even if the number of elements being sorted
** exceeds the available memory.
**
** Here is the (internal, non-API) interface between this module and the
** rest of the SQLite system:
**
** sqlite3VdbeSorterInit() Create a new VdbeSorter object.
**
** sqlite3VdbeSorterWrite() Add a single new row to the VdbeSorter
** object. The row is a binary blob in the
** OP_MakeRecord format that contains both
** the ORDER BY key columns and result columns
** in the case of a SELECT w/ ORDER BY, or
** the complete record for an index entry
** in the case of a CREATE INDEX.
**
** sqlite3VdbeSorterRewind() Sort all content previously added.
** Position the read cursor on the
** first sorted element.
**
** sqlite3VdbeSorterNext() Advance the read cursor to the next sorted
** element.
**
** sqlite3VdbeSorterRowkey() Return the complete binary blob for the
** row currently under the read cursor.
**
** sqlite3VdbeSorterCompare() Compare the binary blob for the row
** currently under the read cursor against
** another binary blob X and report if
** X is strictly less than the read cursor.
** Used to enforce uniqueness in a
** CREATE UNIQUE INDEX statement.
**
** sqlite3VdbeSorterClose() Close the VdbeSorter object and reclaim
** all resources.
**
** sqlite3VdbeSorterReset() Refurbish the VdbeSorter for reuse. This
** is like Close() followed by Init() only
** much faster.
**
** The interfaces above must be called in a particular order. Write() can
** only occur in between Init()/Reset() and Rewind(). Next(), Rowkey(), and
** Compare() can only occur in between Rewind() and Close()/Reset(). i.e.
**
** Init()
** for each record: Write()
** Rewind()
** Rowkey()/Compare()
** Next()
** Close()
**
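** For example (a schematic sketch only; the real call sites are the
** OP_SorterOpen, OP_SorterInsert, OP_SorterSort and OP_SorterNext opcodes
** in vdbe.c, and the argument lists shown here are abbreviated):
**
**   sqlite3VdbeSorterInit(db, nField, pCsr);
**   for each record:   sqlite3VdbeSorterWrite(pCsr, pVal);
**   sqlite3VdbeSorterRewind(pCsr, &bEof);
**   while( !bEof ):
**     sqlite3VdbeSorterRowkey(pCsr, pOut);
**     sqlite3VdbeSorterNext(db, pCsr);
**   sqlite3VdbeSorterClose(db, pCsr);
**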
** Algorithm:
**
** Records passed to the sorter via calls to Write() are initially held
** unsorted in main memory. Assuming the amount of memory used never exceeds
** a threshold, when Rewind() is called the set of records is sorted using
** an in-memory merge sort. In this case, no temporary files are required
** and subsequent calls to Rowkey(), Next() and Compare() read records
** directly from main memory.
**
** If the amount of space used to store records in main memory exceeds the
** threshold, then the set of records currently in memory are sorted and
** written to a temporary file in "Packed Memory Array" (PMA) format.
** A PMA created at this point is known as a "level-0 PMA". Higher levels
** of PMAs may be created by merging existing PMAs together - for example
** merging two or more level-0 PMAs together creates a level-1 PMA.
**
** The threshold for the amount of main memory to use before flushing
** records to a PMA is roughly the same as the limit configured for the
** page-cache of the main database. Specifically, the threshold is set to
** the value returned by "PRAGMA main.page_size" multiplied by
** that returned by "PRAGMA main.cache_size", in bytes.
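** For example, with a 4096 byte page size and a cache_size of 2000 pages,
** the threshold is 8,192,000 bytes (a little under 8MiB).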
**
** If the sorter is running in single-threaded mode, then all PMAs generated
** are appended to a single temporary file. Or, if the sorter is running in
** multi-threaded mode then up to (N+1) temporary files may be opened, where
** N is the configured number of worker threads. In this case, instead of
** sorting the records and writing the PMA to a temporary file itself, the
** calling thread usually launches a worker thread to do so. Except, if
** there are already N worker threads running, the main thread does the work
** itself.
**
** The sorter is running in multi-threaded mode if (a) the library was built
** with pre-processor symbol SQLITE_MAX_WORKER_THREADS set to a value greater
** than zero, and (b) worker threads have been enabled at runtime by calling
** "PRAGMA threads=N" with some value of N greater than 0.
**
** When Rewind() is called, any data remaining in memory is flushed to a
** final PMA. So at this point the data is stored in some number of sorted
** PMAs within temporary files on disk.
**
** If there are fewer than SORTER_MAX_MERGE_COUNT PMAs in total and the
** sorter is running in single-threaded mode, then these PMAs are merged
** incrementally as keys are retrieved from the sorter by the VDBE. The
** MergeEngine object, described in further detail below, performs this
** merge.
**
** Or, if running in multi-threaded mode, then a background thread is
** launched to merge the existing PMAs. Once the background thread has
** merged T bytes of data into a single sorted PMA, the main thread
** begins reading keys from that PMA while the background thread proceeds
** with merging the next T bytes of data. And so on.
**
** Parameter T is set to half the value of the memory threshold used
** by Write() above to determine when to create a new PMA.
**
** If there are more than SORTER_MAX_MERGE_COUNT PMAs in total when
** Rewind() is called, then a hierarchy of incremental-merges is used.
** First, T bytes of data from the first SORTER_MAX_MERGE_COUNT PMAs on
** disk are merged together. Then T bytes of data from the second set, and
** so on, such that no operation ever merges more than SORTER_MAX_MERGE_COUNT
** PMAs at a time. This is done to improve locality.
**
** If running in multi-threaded mode and there are more than
** SORTER_MAX_MERGE_COUNT PMAs on disk when Rewind() is called, then more
** than one background thread may be created. Specifically, there may be
** one background thread for each temporary file on disk, and one background
** thread to merge the output of each of the others to a single PMA for
** the main thread to read from.
*/
#include "sqliteInt.h"
#include "vdbeInt.h"
/*
** If SQLITE_DEBUG_SORTER_THREADS is defined, this module outputs various
** messages to stderr that may be helpful in understanding the performance
** characteristics of the sorter in multi-threaded mode.
*/
#if 0
# define SQLITE_DEBUG_SORTER_THREADS 1
#endif
/*
** Hard-coded maximum amount of data to accumulate in memory before flushing
** to a level 0 PMA. The purpose of this limit is to prevent various integer
** overflows. 512MiB.
*/
#define SQLITE_MAX_PMASZ (1<<29)
/*
** Private objects used by the sorter
*/
typedef struct MergeEngine MergeEngine; /* Merge PMAs together */
typedef struct PmaReader PmaReader; /* Incrementally read one PMA */
typedef struct PmaWriter PmaWriter; /* Incrementally write one PMA */
typedef struct SorterRecord SorterRecord; /* A record being sorted */
typedef struct SortSubtask SortSubtask; /* A sub-task in the sort process */
typedef struct SorterFile SorterFile; /* Temporary file object wrapper */
typedef struct SorterList SorterList; /* In-memory list of records */
typedef struct IncrMerger IncrMerger; /* Read & merge multiple PMAs */
/*
** A container for a temp file handle and the current amount of data
** stored in the file.
*/
struct SorterFile {
sqlite3_file *pFd; /* File handle */
i64 iEof; /* Bytes of data stored in pFd */
};
/*
** An in-memory list of objects to be sorted.
**
** If aMemory==0 then each object is allocated separately and the objects
** are connected using SorterRecord.u.pNext. If aMemory!=0 then all objects
** are stored in the aMemory[] bulk memory, one right after the other, and
** are connected using SorterRecord.u.iNext.
*/
struct SorterList {
SorterRecord *pList; /* Linked list of records */
u8 *aMemory; /* If non-NULL, bulk memory to hold pList */
int szPMA; /* Size of pList as PMA in bytes */
};
/*
** The MergeEngine object is used to combine two or more smaller PMAs into
** one big PMA using a merge operation. Separate PMAs all need to be
** combined into one big PMA in order to be able to step through the sorted
** records in order.
**
** The aReadr[] array contains a PmaReader object for each of the PMAs being
** merged. An aReadr[] object either points to a valid key or else is at EOF.
** ("EOF" means "End Of File". When aReadr[] is at EOF there is no more data.)
** For the purposes of the paragraphs below, we assume that the array is
** actually N elements in size, where N is the smallest power of 2 greater
** than or equal to the number of PMAs being merged. The extra aReadr[] elements
** are treated as if they are empty (always at EOF).
**
** The aTree[] array is also N elements in size. The value of N is stored in
** the MergeEngine.nTree variable.
**
** The final (N/2) elements of aTree[] contain the results of comparing
** pairs of PMA keys together. Element i contains the result of
** comparing aReadr[2*i-N] and aReadr[2*i-N+1]: the aTree element is set to
** the index of whichever key is smaller.
**
** For the purposes of this comparison, EOF is considered greater than any
** other key value. If the keys are equal (only possible with two EOF
** values), it doesn't matter which index is stored.
**
** The (N/4) elements of aTree[] that precede the final (N/2) described
** above contain the index of the smallest key within each block of 4
** PmaReaders, and so on up the tree, so that aTree[1] contains the index
** of the PmaReader that currently points to the smallest key value.
** aTree[0] is unused.
**
** Example:
**
** aReadr[0] -> Banana
** aReadr[1] -> Feijoa
** aReadr[2] -> Elderberry
** aReadr[3] -> Currant
** aReadr[4] -> Grapefruit
** aReadr[5] -> Apple
** aReadr[6] -> Durian
** aReadr[7] -> EOF
**
** aTree[] = { X, 5, 0, 5, 0, 3, 5, 6 }
**
** The current element is "Apple" (the value of the key indicated by
** PmaReader 5). When the Next() operation is invoked, PmaReader 5 will
** be advanced to the next key in its segment. Say the next key is
** "Eggplant":
**
** aReadr[5] -> Eggplant
**
** The contents of aTree[] are updated first by comparing the new PmaReader
** 5 key to the current key of PmaReader 4 (still "Grapefruit"). The PmaReader
** 5 value is still smaller, so aTree[6] is set to 5. And so on up the tree.
** The value of PmaReader 6 - "Durian" - is now smaller than that of PmaReader
** 5, so aTree[3] is set to 6. Key 0 is smaller than key 6 (Banana<Durian),
** so the value written into element 1 of the array is 0. As follows:
**
** aTree[] = { X, 0, 0, 6, 0, 3, 5, 6 }
**
** In other words, each time we advance to the next sorter element, log2(N)
** key comparison operations are required, where N is the number of segments
** being merged (rounded up to the next power of 2).
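**
** A minimal sketch of that update walk (a simplification only, not the
** actual merge-step implementation; smallerOf() is a hypothetical helper
** and EOF handling is omitted):
**
**   iAdv = pMerger->aTree[1];                  /* PmaReader just advanced */
**   for(i=(pMerger->nTree+iAdv)/2; i>0; i=i/2){
**     pMerger->aTree[i] = smallerOf(pMerger, i);  /* redo comparison at i */
**   }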
*/
struct MergeEngine {
int nTree; /* Used size of aTree/aReadr (power of 2) */
SortSubtask *pTask; /* Used by this thread only */
int *aTree; /* Current state of incremental merge */
PmaReader *aReadr; /* Array of PmaReaders to merge data from */
};
/*
** This object represents a single thread of control in a sort operation.
** Exactly VdbeSorter.nTask instances of this object are allocated
** as part of each VdbeSorter object. Instances are never allocated any
** other way. VdbeSorter.nTask is set to the number of worker threads allowed
** (see SQLITE_LIMIT_WORKER_THREADS) plus one (the main thread). Thus for
** single-threaded operation, there is exactly one instance of this object
** and for multi-threaded operation there are two or more instances.
**
** Essentially, this structure contains all those fields of the VdbeSorter
** structure for which each thread requires a separate instance. For example,
** each thread requires its own UnpackedRecord object in which to unpack
** records as part of comparison operations.
**
** Before a background thread is launched, variable bDone is set to 0. Then,
** right before it exits, the thread itself sets bDone to 1. This is used for
** two purposes:
**
** 1. When flushing the contents of memory to a level-0 PMA on disk, to
** attempt to select a SortSubtask for which there is not already an
** active background thread (since doing so causes the main thread
** to block until it finishes).
**
** 2. If SQLITE_DEBUG_SORTER_THREADS is defined, to determine if a call
** to sqlite3ThreadJoin() is likely to block. Cases that are likely to
** block provoke debugging output.
**
** In both cases, the effects of the main thread seeing (bDone==0) even
** after the thread has finished are not dire. So we don't worry about
** memory barriers and such here.
*/
typedef int (*SorterCompare)(SortSubtask*,int*,const void*,int,const void*,int);
struct SortSubtask {
SQLiteThread *pThread; /* Background thread, if any */
int bDone; /* Set if thread is finished but not joined */
VdbeSorter *pSorter; /* Sorter that owns this sub-task */
UnpackedRecord *pUnpacked; /* Space to unpack a record */
SorterList list; /* List for thread to write to a PMA */
int nPMA; /* Number of PMAs currently in file */
SorterCompare xCompare; /* Compare function to use */
SorterFile file; /* Temp file for level-0 PMAs */
SorterFile file2; /* Space for other PMAs */
};
/*
** Main sorter structure. A single instance of this is allocated for each
** sorter cursor created by the VDBE.
**
** mxKeysize:
** As records are added to the sorter by calls to sqlite3VdbeSorterWrite(),
** this variable is updated to hold the size on disk of the largest
** record added to the sorter so far.
*/
struct VdbeSorter {
int mnPmaSize; /* Minimum PMA size, in bytes */
int mxPmaSize; /* Maximum PMA size, in bytes. 0==no limit */
int mxKeysize; /* Largest serialized key seen so far */
int pgsz; /* Main database page size */
PmaReader *pReader; /* Read data from here after Rewind() */
MergeEngine *pMerger; /* Or here, if bUseThreads==0 */
sqlite3 *db; /* Database connection */
KeyInfo *pKeyInfo; /* How to compare records */
UnpackedRecord *pUnpacked; /* Used by VdbeSorterCompare() */
SorterList list; /* List of in-memory records */
int iMemory; /* Offset of free space in list.aMemory */
int nMemory; /* Size of list.aMemory allocation in bytes */
u8 bUsePMA; /* True if one or more PMAs created */
u8 bUseThreads; /* True to use background threads */
u8 iPrev; /* Previous thread used to flush PMA */
u8 nTask; /* Size of aTask[] array */
u8 typeMask;
SortSubtask aTask[1]; /* One or more subtasks */
};
#define SORTER_TYPE_INTEGER 0x01
#define SORTER_TYPE_TEXT 0x02
/*
** An instance of the following object is used to read records out of a
** PMA, in sorted order. The next key to be read is cached in nKey/aKey.
** aKey might point into aMap or into aBuffer. If neither of those locations
** contains a contiguous representation of the key, then aAlloc is allocated
** and the key is copied into aAlloc and aKey is made to point to aAlloc.
**
** pFd==0 at EOF.
*/
struct PmaReader {
i64 iReadOff; /* Current read offset */
i64 iEof; /* 1 byte past EOF for this PmaReader */
int nAlloc; /* Bytes of space at aAlloc */
int nKey; /* Number of bytes in key */
sqlite3_file *pFd; /* File handle we are reading from */
u8 *aAlloc; /* Space for aKey if aBuffer and aMap won't work */
u8 *aKey; /* Pointer to current key */
u8 *aBuffer; /* Current read buffer */
int nBuffer; /* Size of read buffer in bytes */
u8 *aMap; /* Pointer to mapping of entire file */
IncrMerger *pIncr; /* Incremental merger */
};
/*
** Normally, a PmaReader object iterates through an existing PMA stored
** within a temp file. However, if the PmaReader.pIncr variable points to
** an object of the following type, it may be used to iterate/merge through
** multiple PMAs simultaneously.
**
** There are two types of IncrMerger object - single (bUseThread==0) and
** multi-threaded (bUseThread==1).
**
** A multi-threaded IncrMerger object uses two temporary files - aFile[0]
** and aFile[1]. Neither file is allowed to grow to more than mxSz bytes in
** size. When the IncrMerger is initialized, it reads enough data from
** pMerger to populate aFile[0]. It then sets variables within the
** corresponding PmaReader object to read from that file and kicks off
** a background thread to populate aFile[1] with the next mxSz bytes of
** sorted record data from pMerger.
**
** When the PmaReader reaches the end of aFile[0], it blocks until the
** background thread has finished populating aFile[1]. It then exchanges
** the contents of the aFile[0] and aFile[1] variables within this structure,
** sets the PmaReader fields to read from the new aFile[0] and kicks off
** another background thread to populate the new aFile[1]. And so on, until
** the contents of pMerger are exhausted.
**
** A single-threaded IncrMerger does not open any temporary files of its
** own. Instead, it has exclusive access to mxSz bytes of space beginning
** at offset iStartOff of file pTask->file2. And instead of using a
** background thread to prepare data for the PmaReader, with a single-
** threaded IncrMerger the allocated part of pTask->file2 is "refilled" with
** keys from pMerger by the calling thread whenever the PmaReader runs out
** of data.
*/
struct IncrMerger {
SortSubtask *pTask; /* Task that owns this merger */
MergeEngine *pMerger; /* Merge engine thread reads data from */
i64 iStartOff; /* Offset to start writing file at */
int mxSz; /* Maximum bytes of data to store */
int bEof; /* Set to true when merge is finished */
int bUseThread; /* True to use a bg thread for this object */
SorterFile aFile[2]; /* aFile[0] for reading, [1] for writing */
};
/*
** An instance of this object is used for writing a PMA.
**
** The PMA is written one record at a time. Each record is of an arbitrary
** size. But I/O is more efficient if it occurs in page-sized blocks where
** each block is aligned on a page boundary. This object caches writes to
** the PMA so that aligned, page-size blocks are written.
*/
struct PmaWriter {
int eFWErr; /* Non-zero if in an error state */
u8 *aBuffer; /* Pointer to write buffer */
int nBuffer; /* Size of write buffer in bytes */
int iBufStart; /* First byte of buffer to write */
int iBufEnd; /* Last byte of buffer to write */
i64 iWriteOff; /* Offset of start of buffer in file */
sqlite3_file *pFd; /* File handle to write to */
};
/*
** This object is the header on a single record while that record is being
** held in memory and prior to being written out as part of a PMA.
**
** How the linked list is connected depends on how memory is being managed
** by this module. If using a separate allocation for each in-memory record
** (VdbeSorter.list.aMemory==0), then the list is always connected using the
** SorterRecord.u.pNext pointers.
**
** Or, if using the single large allocation method (VdbeSorter.list.aMemory!=0),
** then while records are being accumulated the list is linked using the
** SorterRecord.u.iNext offset. This is because the aMemory[] array may
** be sqlite3Realloc()ed while records are being accumulated. Once the VM
** has finished passing records to the sorter, or when the in-memory buffer
** is full, the list is sorted. As part of the sorting process, it is
** converted to use the SorterRecord.u.pNext pointers. See function
** vdbeSorterSort() for details.
*/
struct SorterRecord {
int nVal; /* Size of the record in bytes */
union {
SorterRecord *pNext; /* Pointer to next record in list */
int iNext; /* Offset within aMemory of next record */
} u;
/* The data for the record immediately follows this header */
};
/* Return a pointer to the buffer containing the record data for SorterRecord
** object p. Should be used as if:
**
** void *SRVAL(SorterRecord *p) { return (void*)&p[1]; }
*/
#define SRVAL(p) ((void*)((SorterRecord*)(p) + 1))
/* Maximum number of PMAs that a single MergeEngine can merge */
#define SORTER_MAX_MERGE_COUNT 16
static int vdbeIncrSwap(IncrMerger*);
static void vdbeIncrFree(IncrMerger *);
/*
** Free all memory belonging to the PmaReader object passed as the
** argument. All structure fields are set to zero before returning.
*/
static void vdbePmaReaderClear(PmaReader *pReadr){
sqlite3_free(pReadr->aAlloc);
sqlite3_free(pReadr->aBuffer);
if( pReadr->aMap ) sqlite3OsUnfetch(pReadr->pFd, 0, pReadr->aMap);
vdbeIncrFree(pReadr->pIncr);
memset(pReadr, 0, sizeof(PmaReader));
}
/*
** Read the next nByte bytes of data from the PMA p.
** If successful, set *ppOut to point to a buffer containing the data
** and return SQLITE_OK. Otherwise, if an error occurs, return an SQLite
** error code.
**
** The buffer returned in *ppOut is only valid until the
** next call to this function.
*/
static int vdbePmaReadBlob(
PmaReader *p, /* PmaReader from which to take the blob */
int nByte, /* Bytes of data to read */
u8 **ppOut /* OUT: Pointer to buffer containing data */
){
int iBuf; /* Offset within buffer to read from */
int nAvail; /* Bytes of data available in buffer */
if( p->aMap ){
*ppOut = &p->aMap[p->iReadOff];
p->iReadOff += nByte;
return SQLITE_OK;
}
assert( p->aBuffer );
/* If there is no more data to be read from the buffer, read the next
** p->nBuffer bytes of data from the file into it. Or, if there are fewer
** than p->nBuffer bytes remaining in the PMA, read all remaining data. */
iBuf = p->iReadOff % p->nBuffer;
if( iBuf==0 ){
int nRead; /* Bytes to read from disk */
int rc; /* sqlite3OsRead() return code */
/* Determine how many bytes of data to read. */
if( (p->iEof - p->iReadOff) > (i64)p->nBuffer ){
nRead = p->nBuffer;
}else{
nRead = (int)(p->iEof - p->iReadOff);
}
assert( nRead>0 );
/* Read data from the file. Return early if an error occurs. */
rc = sqlite3OsRead(p->pFd, p->aBuffer, nRead, p->iReadOff);
assert( rc!=SQLITE_IOERR_SHORT_READ );
if( rc!=SQLITE_OK ) return rc;
}
nAvail = p->nBuffer - iBuf;
if( nByte<=nAvail ){
/* The requested data is available in the in-memory buffer. In this
** case there is no need to make a copy of the data, just return a
** pointer into the buffer to the caller. */
*ppOut = &p->aBuffer[iBuf];
p->iReadOff += nByte;
}else{
/* The requested data is not all available in the in-memory buffer.
** In this case, allocate space at p->aAlloc[] to copy the requested
** range into. Then return a copy of pointer p->aAlloc to the caller. */
int nRem; /* Bytes remaining to copy */
/* Extend the p->aAlloc[] allocation if required. */
if( p->nAlloc<nByte ){
u8 *aNew;
int nNew = MAX(128, p->nAlloc*2);
while( nByte>nNew ) nNew = nNew*2;
aNew = sqlite3Realloc(p->aAlloc, nNew);
if( !aNew ) return SQLITE_NOMEM_BKPT;
p->nAlloc = nNew;
p->aAlloc = aNew;
}
/* Copy as much data as is available in the buffer into the start of
** p->aAlloc[]. */
memcpy(p->aAlloc, &p->aBuffer[iBuf], nAvail);
p->iReadOff += nAvail;
nRem = nByte - nAvail;
/* The following loop copies up to p->nBuffer bytes per iteration into
** the p->aAlloc[] buffer. */
while( nRem>0 ){
int rc; /* vdbePmaReadBlob() return code */
int nCopy; /* Number of bytes to copy */
u8 *aNext; /* Pointer to buffer to copy data from */
nCopy = nRem;
if( nRem>p->nBuffer ) nCopy = p->nBuffer;
rc = vdbePmaReadBlob(p, nCopy, &aNext);
if( rc!=SQLITE_OK ) return rc;
assert( aNext!=p->aAlloc );
memcpy(&p->aAlloc[nByte - nRem], aNext, nCopy);
nRem -= nCopy;
}
*ppOut = p->aAlloc;
}
return SQLITE_OK;
}
/*
** Read a varint from the stream of data accessed by p. Set *pnOut to
** the value read.
*/
static int vdbePmaReadVarint(PmaReader *p, u64 *pnOut){
int iBuf;
if( p->aMap ){
p->iReadOff += sqlite3GetVarint(&p->aMap[p->iReadOff], pnOut);
}else{
iBuf = p->iReadOff % p->nBuffer;
if( iBuf && (p->nBuffer-iBuf)>=9 ){
p->iReadOff += sqlite3GetVarint(&p->aBuffer[iBuf], pnOut);
}else{
u8 aVarint[16], *a;
int i = 0, rc;
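/* A well-formed SQLite varint is at most 9 bytes long; the (i & 0xf) mask
** below merely keeps the writes inside the 16-byte aVarint[] buffer if the
** input is corrupt and the high bit never clears. */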
do{
rc = vdbePmaReadBlob(p, 1, &a);
if( rc ) return rc;
aVarint[(i++)&0xf] = a[0];
}while( (a[0]&0x80)!=0 );
sqlite3GetVarint(aVarint, pnOut);
}
}
return SQLITE_OK;
}
/*
** Attempt to memory map file pFile. If successful, set *pp to point to the
** new mapping and return SQLITE_OK. If the mapping is not attempted
** (because the file is too large or the VFS layer is configured not to use
** mmap), return SQLITE_OK and set *pp to NULL.
**
** Or, if an error occurs, return an SQLite error code. The final value of
** *pp is undefined in this case.
*/
static int vdbeSorterMapFile(SortSubtask *pTask, SorterFile *pFile, u8 **pp){
int rc = SQLITE_OK;
if( pFile->iEof<=(i64)(pTask->pSorter->db->nMaxSorterMmap) ){
sqlite3_file *pFd = pFile->pFd;
if( pFd->pMethods->iVersion>=3 ){
rc = sqlite3OsFetch(pFd, 0, (int)pFile->iEof, (void**)pp);
testcase( rc!=SQLITE_OK );
}
}
return rc;
}
/*
** Attach PmaReader pReadr to file pFile (if it is not already attached to
** that file) and seek it to offset iOff within the file. Return SQLITE_OK
** if successful, or an SQLite error code if an error occurs.
*/
static int vdbePmaReaderSeek(
SortSubtask *pTask, /* Task context */
PmaReader *pReadr, /* Reader whose cursor is to be moved */
SorterFile *pFile, /* Sorter file to read from */
i64 iOff /* Offset in pFile */
){
int rc = SQLITE_OK;
assert( pReadr->pIncr==0 || pReadr->pIncr->bEof==0 );
if( sqlite3FaultSim(201) ) return SQLITE_IOERR_READ;
if( pReadr->aMap ){
sqlite3OsUnfetch(pReadr->pFd, 0, pReadr->aMap);
pReadr->aMap = 0;
}
pReadr->iReadOff = iOff;
pReadr->iEof = pFile->iEof;
pReadr->pFd = pFile->pFd;
rc = vdbeSorterMapFile(pTask, pFile, &pReadr->aMap);
if( rc==SQLITE_OK && pReadr->aMap==0 ){
int pgsz = pTask->pSorter->pgsz;
int iBuf = pReadr->iReadOff % pgsz;
if( pReadr->aBuffer==0 ){
pReadr->aBuffer = (u8*)sqlite3Malloc(pgsz);
if( pReadr->aBuffer==0 ) rc = SQLITE_NOMEM_BKPT;
pReadr->nBuffer = pgsz;
}
if( rc==SQLITE_OK && iBuf ){
int nRead = pgsz - iBuf;
if( (pReadr->iReadOff + nRead) > pReadr->iEof ){
nRead = (int)(pReadr->iEof - pReadr->iReadOff);
}
rc = sqlite3OsRead(
pReadr->pFd, &pReadr->aBuffer[iBuf], nRead, pReadr->iReadOff
);
testcase( rc!=SQLITE_OK );
}
}
return rc;
}
/*
** Advance PmaReader pReadr to the next key in its PMA. Return SQLITE_OK if
** no error occurs, or an SQLite error code if one does.
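**
** (Within a PMA, each key is stored as a varint giving the key size in
** bytes, followed immediately by the key data itself; that is the format
** decoded below.)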
*/
static int vdbePmaReaderNext(PmaReader *pReadr){
int rc = SQLITE_OK; /* Return Code */
u64 nRec = 0; /* Size of record in bytes */
if( pReadr->iReadOff>=pReadr->iEof ){
IncrMerger *pIncr = pReadr->pIncr;
int bEof = 1;
if( pIncr ){
rc = vdbeIncrSwap(pIncr);
if( rc==SQLITE_OK && pIncr->bEof==0 ){
rc = vdbePmaReaderSeek(
pIncr->pTask, pReadr, &pIncr->aFile[0], pIncr->iStartOff
);
bEof = 0;
}
}
if( bEof ){
/* This is an EOF condition */
vdbePmaReaderClear(pReadr);
testcase( rc!=SQLITE_OK );
return rc;
}
}
if( rc==SQLITE_OK ){
rc = vdbePmaReadVarint(pReadr, &nRec);
}
if( rc==SQLITE_OK ){
pReadr->nKey = (int)nRec;
rc = vdbePmaReadBlob(pReadr, (int)nRec, &pReadr->aKey);
testcase( rc!=SQLITE_OK );
}
return rc;
}
/*
** Initialize PmaReader pReadr to scan through the PMA stored in file pFile
** starting at offset iStart and ending at offset iEof-1. This function
** leaves the PmaReader pointing to the first key in the PMA (or EOF if the
** PMA is empty).
**
** The pnByte parameter must not be NULL. The value it points to is
** incremented by the size of the PMA read (the number of bytes of record
** data it contains, not counting the initial length varint).
*/
static int vdbePmaReaderInit(
SortSubtask *pTask, /* Task context */
SorterFile *pFile, /* Sorter file to read from */
i64 iStart, /* Start offset in pFile */
PmaReader *pReadr, /* PmaReader to populate */
i64 *pnByte /* IN/OUT: Increment this value by PMA size */
){
int rc;
assert( pFile->iEof>iStart );
assert( pReadr->aAlloc==0 && pReadr->nAlloc==0 );
assert( pReadr->aBuffer==0 );
assert( pReadr->aMap==0 );
rc = vdbePmaReaderSeek(pTask, pReadr, pFile, iStart);
if( rc==SQLITE_OK ){
u64 nByte = 0; /* Size of PMA in bytes */
rc = vdbePmaReadVarint(pReadr, &nByte);
pReadr->iEof = pReadr->iReadOff + nByte;
*pnByte += nByte;
}
if( rc==SQLITE_OK ){
rc = vdbePmaReaderNext(pReadr);
}
return rc;
}
/*
** A version of vdbeSorterCompare() that assumes that it has already been
** determined that the first field of key1 is equal to the first field of
** key2.
*/
static int vdbeSorterCompareTail(
SortSubtask *pTask, /* Subtask context (for pKeyInfo) */
int *pbKey2Cached, /* True if pTask->pUnpacked is pKey2 */
const void *pKey1, int nKey1, /* Left side of comparison */
const void *pKey2, int nKey2 /* Right side of comparison */
){
UnpackedRecord *r2 = pTask->pUnpacked;
if( *pbKey2Cached==0 ){
sqlite3VdbeRecordUnpack(pTask->pSorter->pKeyInfo, nKey2, pKey2, r2);
*pbKey2Cached = 1;
}
return sqlite3VdbeRecordCompareWithSkip(nKey1, pKey1, r2, 1);
}
/*
** Compare key1 (buffer pKey1, size nKey1 bytes) with key2 (buffer pKey2,
** size nKey2 bytes). Use (pTask->pKeyInfo) for the collation sequences
** used by the comparison. Return the result of the comparison.
**
** If IN/OUT parameter *pbKey2Cached is true when this function is called,
** it is assumed that (pTask->pUnpacked) contains the unpacked version
** of key2. If it is false, (pTask->pUnpacked) is populated with the unpacked
** version of key2 and *pbKey2Cached set to true before returning.
**
** If an OOM error is encountered, (pTask->pUnpacked->error_rc) is set
** to SQLITE_NOMEM.
*/
static int vdbeSorterCompare(
SortSubtask *pTask, /* Subtask context (for pKeyInfo) */
int *pbKey2Cached, /* True if pTask->pUnpacked is pKey2 */
const void *pKey1, int nKey1, /* Left side of comparison */
const void *pKey2, int nKey2 /* Right side of comparison */
){
UnpackedRecord *r2 = pTask->pUnpacked;
if( !*pbKey2Cached ){
sqlite3VdbeRecordUnpack(pTask->pSorter->pKeyInfo, nKey2, pKey2, r2);
*pbKey2Cached = 1;
}
return sqlite3VdbeRecordCompare(nKey1, pKey1, r2);
}
/*
** A specially optimized version of vdbeSorterCompare() that assumes that
** the first field of each key is a TEXT value and that the collation
** sequence to compare them with is BINARY.
*/
static int vdbeSorterCompareText(
SortSubtask *pTask, /* Subtask context (for pKeyInfo) */
int *pbKey2Cached, /* True if pTask->pUnpacked is pKey2 */
const void *pKey1, int nKey1, /* Left side of comparison */
const void *pKey2, int nKey2 /* Right side of comparison */
){
const u8 * const p1 = (const u8 * const)pKey1;
const u8 * const p2 = (const u8 * const)pKey2;
const u8 * const v1 = &p1[ p1[0] ]; /* Pointer to value 1 */
const u8 * const v2 = &p2[ p2[0] ]; /* Pointer to value 2 */
int n1;
int n2;
int res;
getVarint32(&p1[1], n1);
getVarint32(&p2[1], n2);
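/* n1 and n2 are the serial types of the two TEXT values. A text serial
** type is (2*nByte + 13), so ((serial_type - 13)/2) recovers the length in
** bytes; compare only as many bytes as the shorter value holds. */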
res = memcmp(v1, v2, (MIN(n1, n2) - 13)/2);
if( res==0 ){
res = n1 - n2;
}
if( res==0 ){
if( pTask->pSorter->pKeyInfo->nKeyField>1 ){
res = vdbeSorterCompareTail(
pTask, pbKey2Cached, pKey1, nKey1, pKey2, nKey2
);
}
}else{
if( pTask->pSorter->pKeyInfo->aSortOrder[0] ){
res = res * -1;
}
}
return res;
}
/*
** A specially optimized version of vdbeSorterCompare() that assumes that
** the first field of each key is an INTEGER value.
*/
static int vdbeSorterCompareInt(
SortSubtask *pTask, /* Subtask context (for pKeyInfo) */
int *pbKey2Cached, /* True if pTask->pUnpacked is pKey2 */
const void *pKey1, int nKey1, /* Left side of comparison */
const void *pKey2, int nKey2 /* Right side of comparison */
){
const u8 * const p1 = (const u8 * const)pKey1;
const u8 * const p2 = (const u8 * const)pKey2;
const int s1 = p1[1]; /* Left hand serial type */
const int s2 = p2[1]; /* Right hand serial type */
const u8 * const v1 = &p1[ p1[0] ]; /* Pointer to value 1 */
const u8 * const v2 = &p2[ p2[0] ]; /* Pointer to value 2 */
int res; /* Return value */
assert( (s1>0 && s1<7) || s1==8 || s1==9 );
assert( (s2>0 && s2<7) || s2==8 || s2==9 );
if( s1==s2 ){
/* The two values are stored in the same number of bytes. Compare them
** byte by byte as big-endian two's complement values. */
static const u8 aLen[] = {0, 1, 2, 3, 4, 6, 8, 0, 0, 0 };
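/* aLen[s] is the number of data bytes used by integer serial type s.
** Types 8 and 9 encode the constants 0 and 1 and carry no data bytes. */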
const u8 n = aLen[s1];
int i;
res = 0;
for(i=0; i<n; i++){
if( (res = v1[i] - v2[i])!=0 ){
if( ((v1[0] ^ v2[0]) & 0x80)!=0 ){
res = v1[0] & 0x80 ? -1 : +1;
}
break;
}
}
}else if( s1>7 && s2>7 ){
res = s1 - s2;
}else{
if( s2>7 ){
res = +1;
}else if( s1>7 ){
res = -1;
}else{
res = s1 - s2;
}
assert( res!=0 );
if( res>0 ){
if( *v1 & 0x80 ) res = -1;
}else{
if( *v2 & 0x80 ) res = +1;
}
}
if( res==0 ){
if( pTask->pSorter->pKeyInfo->nKeyField>1 ){
res = vdbeSorterCompareTail(
pTask, pbKey2Cached, pKey1, nKey1, pKey2, nKey2
);
}
}else if( pTask->pSorter->pKeyInfo->aSortOrder[0] ){
res = res * -1;
}
return res;
}
/*
** Initialize the temporary index cursor just opened as a sorter cursor.
**
** Usually, the sorter module uses the value of (pCsr->pKeyInfo->nKeyField)
** to determine the number of fields that should be compared from the
** records being sorted. However, if the value passed as argument nField
** is non-zero and the sorter is able to guarantee a stable sort, nField
** is used instead. This is used when sorting records for a CREATE INDEX
** statement. In this case, keys are always delivered to the sorter in
** order of the primary key, which happens to make up the final part
** of the records being sorted. So if the sort is stable, there is never
** any reason to compare PK fields and they can be ignored for a small
** performance boost.
**
** The sorter can guarantee a stable sort when running in single-threaded
** mode, but not in multi-threaded mode.
**
** SQLITE_OK is returned if successful, or an SQLite error code otherwise.
*/
int sqlite3VdbeSorterInit(
sqlite3 *db, /* Database connection (for malloc()) */
int nField, /* Number of key fields in each record */
VdbeCursor *pCsr /* Cursor that holds the new sorter */
){
int pgsz; /* Page size of main database */
int i; /* Used to iterate through aTask[] */
VdbeSorter *pSorter; /* The new sorter */
KeyInfo *pKeyInfo; /* Copy of pCsr->pKeyInfo with db==0 */
int szKeyInfo; /* Size of pCsr->pKeyInfo in bytes */
int sz; /* Size of pSorter in bytes */
int rc = SQLITE_OK;
#if SQLITE_MAX_WORKER_THREADS==0
# define nWorker 0
#else
int nWorker;
#endif
/* Initialize the upper limit on the number of worker threads */
#if SQLITE_MAX_WORKER_THREADS>0
if( sqlite3TempInMemory(db) || sqlite3GlobalConfig.bCoreMutex==0 ){
nWorker = 0;
}else{
nWorker = db->aLimit[SQLITE_LIMIT_WORKER_THREADS];
}
#endif
/* Do not allow the total number of threads (main thread + all workers)
** to exceed the maximum merge count */
#if SQLITE_MAX_WORKER_THREADS>=SORTER_MAX_MERGE_COUNT
if( nWorker>=SORTER_MAX_MERGE_COUNT ){
nWorker = SORTER_MAX_MERGE_COUNT-1;
}
#endif
assert( pCsr->pKeyInfo && pCsr->pBtx==0 );
assert( pCsr->eCurType==CURTYPE_SORTER );
szKeyInfo = sizeof(KeyInfo) + (pCsr->pKeyInfo->nKeyField-1)*sizeof(CollSeq*);
sz = sizeof(VdbeSorter) + nWorker * sizeof(SortSubtask);
pSorter = (VdbeSorter*)sqlite3DbMallocZero(db, sz + szKeyInfo);
pCsr->uc.pSorter = pSorter;
if( pSorter==0 ){
rc = SQLITE_NOMEM_BKPT;
}else{
pSorter->pKeyInfo = pKeyInfo = (KeyInfo*)((u8*)pSorter + sz);
memcpy(pKeyInfo, pCsr->pKeyInfo, szKeyInfo);
pKeyInfo->db = 0;
if( nField && nWorker==0 ){
pKeyInfo->nKeyField = nField;
}
pSorter->pgsz = pgsz = sqlite3BtreeGetPageSize(db->aDb[0].pBt);
pSorter->nTask = nWorker + 1;
pSorter->iPrev = (u8)(nWorker - 1);
pSorter->bUseThreads = (pSorter->nTask>1);
pSorter->db = db;
for(i=0; i<pSorter->nTask; i++){
SortSubtask *pTask = &pSorter->aTask[i];
pTask->pSorter = pSorter;
}
if( !sqlite3TempInMemory(db) ){
i64 mxCache; /* Cache size in bytes */
u32 szPma = sqlite3GlobalConfig.szPma;
pSorter->mnPmaSize = szPma * pgsz;
mxCache = db->aDb[0].pSchema->cache_size;
if( mxCache<0 ){
/* A negative cache-size value C indicates that the cache is abs(C)
** KiB in size. */
mxCache = mxCache * -1024;
}else{
mxCache = mxCache * pgsz;
}
mxCache = MIN(mxCache, SQLITE_MAX_PMASZ);
pSorter->mxPmaSize = MAX(pSorter->mnPmaSize, (int)mxCache);
/* Avoid large memory allocations if the application has requested
** SQLITE_CONFIG_SMALL_MALLOC. */
if( sqlite3GlobalConfig.bSmallMalloc==0 ){
assert( pSorter->iMemory==0 );
pSorter->nMemory = pgsz;
pSorter->list.aMemory = (u8*)sqlite3Malloc(pgsz);
if( !pSorter->list.aMemory ) rc = SQLITE_NOMEM_BKPT;
}
}
if( pKeyInfo->nAllField<13
&& (pKeyInfo->aColl[0]==0 || pKeyInfo->aColl[0]==db->pDfltColl)
){
pSorter->typeMask = SORTER_TYPE_INTEGER | SORTER_TYPE_TEXT;
}
}
return rc;
}
#undef nWorker /* Defined at the top of this function */
/*
** Free the list of sorted records starting at pRecord.
*/
static void vdbeSorterRecordFree(sqlite3 *db, SorterRecord *pRecord){
SorterRecord *p;
SorterRecord *pNext;
for(p=pRecord; p; p=pNext){
pNext = p->u.pNext;
sqlite3DbFree(db, p);
}
}
/*
** Free all resources owned by the object indicated by argument pTask. All
** fields of *pTask are zeroed before returning.
*/
static void vdbeSortSubtaskCleanup(sqlite3 *db, SortSubtask *pTask){
sqlite3DbFree(db, pTask->pUnpacked);
#if SQLITE_MAX_WORKER_THREADS>0
/* pTask->list.aMemory can only be non-zero if it was handed memory
** from the main thread. That only occurs when SQLITE_MAX_WORKER_THREADS>0 */
if( pTask->list.aMemory ){
sqlite3_free(pTask->list.aMemory);
}else
#endif
{
assert( pTask->list.aMemory==0 );
vdbeSorterRecordFree(0, pTask->list.pList);
}
if( pTask->file.pFd ){
sqlite3OsCloseFree(pTask->file.pFd);
}
if( pTask->file2.pFd ){
sqlite3OsCloseFree(pTask->file2.pFd);
}
memset(pTask, 0, sizeof(SortSubtask));
}
#ifdef SQLITE_DEBUG_SORTER_THREADS
static void vdbeSorterWorkDebug(SortSubtask *pTask, const char *zEvent){
i64 t;
int iTask = (pTask - pTask->pSorter->aTask);
sqlite3OsCurrentTimeInt64(pTask->pSorter->db->pVfs, &t);
fprintf(stderr, "%lld:%d %s\n", t, iTask, zEvent);
}
static void vdbeSorterRewindDebug(const char *zEvent){
i64 t;
sqlite3OsCurrentTimeInt64(sqlite3_vfs_find(0), &t);
fprintf(stderr, "%lld:X %s\n", t, zEvent);
}
static void vdbeSorterPopulateDebug(
SortSubtask *pTask,
const char *zEvent
){
i64 t;
int iTask = (pTask - pTask->pSorter->aTask);
sqlite3OsCurrentTimeInt64(pTask->pSorter->db->pVfs, &t);
fprintf(stderr, "%lld:bg%d %s\n", t, iTask, zEvent);
}
static void vdbeSorterBlockDebug(
SortSubtask *pTask,
int bBlocked,
const char *zEvent
){
if( bBlocked ){
i64 t;
sqlite3OsCurrentTimeInt64(pTask->pSorter->db->pVfs, &t);
fprintf(stderr, "%lld:main %s\n", t, zEvent);
}
}
#else
# define vdbeSorterWorkDebug(x,y)
# define vdbeSorterRewindDebug(y)
# define vdbeSorterPopulateDebug(x,y)
# define vdbeSorterBlockDebug(x,y,z)
#endif
#if SQLITE_MAX_WORKER_THREADS>0
/*
** Join thread pTask->thread.
*/
static int vdbeSorterJoinThread(SortSubtask *pTask){
int rc = SQLITE_OK;
if( pTask->pThread ){
#ifdef SQLITE_DEBUG_SORTER_THREADS
int bDone = pTask->bDone;
#endif
void *pRet = SQLITE_INT_TO_PTR(SQLITE_ERROR);
vdbeSorterBlockDebug(pTask, !bDone, "enter");
(void)sqlite3ThreadJoin(pTask->pThread, &pRet);
vdbeSorterBlockDebug(pTask, !bDone, "exit");
rc = SQLITE_PTR_TO_INT(pRet);
assert( pTask->bDone==1 );
pTask->bDone = 0;
pTask->pThread = 0;
}
return rc;
}
/*
** Launch a background thread to run xTask(pIn).
*/
static int vdbeSorterCreateThread(
SortSubtask *pTask, /* Thread will use this task object */
void *(*xTask)(void*), /* Routine to run in a separate thread */
void *pIn /* Argument passed into xTask() */
){
assert( pTask->pThread==0 && pTask->bDone==0 );
return sqlite3ThreadCreate(&pTask->pThread, xTask, pIn);
}
/*
** Join all outstanding threads launched by SorterWrite() to create
** level-0 PMAs.
*/
static int vdbeSorterJoinAll(VdbeSorter *pSorter, int rcin){
int rc = rcin;
int i;
/* This function is always called by the main user thread.
**
** If this function is being called after SorterRewind() has been called,
** it is possible that thread pSorter->aTask[pSorter->nTask-1].pThread
** is currently attempting to join one of the other threads. To avoid a race
** condition where this thread also attempts to join the same object, join
** thread pSorter->aTask[pSorter->nTask-1].pThread first. */
for(i=pSorter->nTask-1; i>=0; i--){
SortSubtask *pTask = &pSorter->aTask[i];
int rc2 = vdbeSorterJoinThread(pTask);
if( rc==SQLITE_OK ) rc = rc2;
}
return rc;
}
#else
# define vdbeSorterJoinAll(x,rcin) (rcin)
# define vdbeSorterJoinThread(pTask) SQLITE_OK
#endif
/*
** Allocate a new MergeEngine object capable of handling up to
** nReader PmaReader inputs.
**
** nReader is automatically rounded up to the next power of two.
** nReader may not exceed SORTER_MAX_MERGE_COUNT even after rounding up.
*/
static MergeEngine *vdbeMergeEngineNew(int nReader){
int N = 2; /* Smallest power of two >= nReader */
int nByte; /* Total bytes of space to allocate */
MergeEngine *pNew; /* Pointer to allocated object to return */
assert( nReader<=SORTER_MAX_MERGE_COUNT );
while( N<nReader ) N += N;
nByte = sizeof(MergeEngine) + N * (sizeof(int) + sizeof(PmaReader));
pNew = sqlite3FaultSim(100) ? 0 : (MergeEngine*)sqlite3MallocZero(nByte);
if( pNew ){
pNew->nTree = N;
pNew->pTask = 0;
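/* The aReadr[] and aTree[] arrays are carved out of the same allocation
** as the MergeEngine itself: [MergeEngine][N x PmaReader][N x int]. */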
pNew->aReadr = (PmaReader*)&pNew[1];
pNew->aTree = (int*)&pNew->aReadr[N];
}
return pNew;
}
/*
** Free the MergeEngine object passed as the only argument.
*/
static void vdbeMergeEngineFree(MergeEngine *pMerger){
int i;
if( pMerger ){
for(i=0; i<pMerger->nTree; i++){
vdbePmaReaderClear(&pMerger->aReadr[i]);
}
}
sqlite3_free(pMerger);
}
/*
** Free all resources associated with the IncrMerger object indicated by
** the first argument.
*/
static void vdbeIncrFree(IncrMerger *pIncr){
if( pIncr ){
#if SQLITE_MAX_WORKER_THREADS>0
if( pIncr->bUseThread ){
vdbeSorterJoinThread(pIncr->pTask);
if( pIncr->aFile[0].pFd ) sqlite3OsCloseFree(pIncr->aFile[0].pFd);
if( pIncr->aFile[1].pFd ) sqlite3OsCloseFree(pIncr->aFile[1].pFd);
}
#endif
vdbeMergeEngineFree(pIncr->pMerger);
sqlite3_free(pIncr);
}
}
/*
** Reset a sorting cursor back to its original empty state.
*/
void sqlite3VdbeSorterReset(sqlite3 *db, VdbeSorter *pSorter){
int i;
(void)vdbeSorterJoinAll(pSorter, SQLITE_OK);
assert( pSorter->bUseThreads || pSorter->pReader==0 );
#if SQLITE_MAX_WORKER_THREADS>0
if( pSorter->pReader ){
vdbePmaReaderClear(pSorter->pReader);
sqlite3DbFree(db, pSorter->pReader);
pSorter->pReader = 0;
}
#endif
vdbeMergeEngineFree(pSorter->pMerger);
pSorter->pMerger = 0;
for(i=0; i<pSorter->nTask; i++){
SortSubtask *pTask = &pSorter->aTask[i];
vdbeSortSubtaskCleanup(db, pTask);
pTask->pSorter = pSorter;
}
if( pSorter->list.aMemory==0 ){
vdbeSorterRecordFree(0, pSorter->list.pList);
}
pSorter->list.pList = 0;
pSorter->list.szPMA = 0;
pSorter->bUsePMA = 0;
pSorter->iMemory = 0;
pSorter->mxKeysize = 0;
sqlite3DbFree(db, pSorter->pUnpacked);
pSorter->pUnpacked = 0;
}
/*
** Free any cursor components allocated by sqlite3VdbeSorterXXX routines.
*/
void sqlite3VdbeSorterClose(sqlite3 *db, VdbeCursor *pCsr){
VdbeSorter *pSorter;
assert( pCsr->eCurType==CURTYPE_SORTER );
pSorter = pCsr->uc.pSorter;
if( pSorter ){
sqlite3VdbeSorterReset(db, pSorter);
sqlite3_free(pSorter->list.aMemory);
sqlite3DbFree(db, pSorter);
pCsr->uc.pSorter = 0;
}
}
#if SQLITE_MAX_MMAP_SIZE>0
/*
** The first argument is a file-handle open on a temporary file. The file
** is guaranteed to be nByte bytes or smaller in size. This function
** attempts to extend the file to nByte bytes in size and to ensure that
** the VFS has memory mapped it.
**
** Whether or not the file does end up memory mapped of course depends on
** the specific VFS implementation.
*/
static void vdbeSorterExtendFile(sqlite3 *db, sqlite3_file *pFd, i64 nByte){
if( nByte<=(i64)(db->nMaxSorterMmap) && pFd->pMethods->iVersion>=3 ){
void *p = 0;
int chunksize = 4*1024;
sqlite3OsFileControlHint(pFd, SQLITE_FCNTL_CHUNK_SIZE, &chunksize);
sqlite3OsFileControlHint(pFd, SQLITE_FCNTL_SIZE_HINT, &nByte);
sqlite3OsFetch(pFd, 0, (int)nByte, &p);
sqlite3OsUnfetch(pFd, 0, p);
}
}
#else
# define vdbeSorterExtendFile(x,y,z)
#endif
/*
** Allocate space for a file-handle and open a temporary file. If successful,
** set *ppFd to point to the malloc'd file-handle and return SQLITE_OK.
** Otherwise, set *ppFd to 0 and return an SQLite error code.
*/
static int vdbeSorterOpenTempFile(
sqlite3 *db, /* Database handle doing sort */
i64 nExtend, /* Attempt to extend file to this size */
sqlite3_file **ppFd
){
int rc;
if( sqlite3FaultSim(202) ) return SQLITE_IOERR_ACCESS;
rc = sqlite3OsOpenMalloc(db->pVfs, 0, ppFd,
SQLITE_OPEN_TEMP_JOURNAL |
SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
SQLITE_OPEN_EXCLUSIVE | SQLITE_OPEN_DELETEONCLOSE, &rc
);
if( rc==SQLITE_OK ){
i64 max = SQLITE_MAX_MMAP_SIZE;
sqlite3OsFileControlHint(*ppFd, SQLITE_FCNTL_MMAP_SIZE, (void*)&max);
if( nExtend>0 ){
vdbeSorterExtendFile(db, *ppFd, nExtend);
}
}
return rc;
}
/*
** If it has not already been allocated, allocate the UnpackedRecord
** structure at pTask->pUnpacked. Return SQLITE_OK if successful (or
** if no allocation was required), or SQLITE_NOMEM otherwise.
*/
static int vdbeSortAllocUnpacked(SortSubtask *pTask){
if( pTask->pUnpacked==0 ){
pTask->pUnpacked = sqlite3VdbeAllocUnpackedRecord(pTask->pSorter->pKeyInfo);
if( pTask->pUnpacked==0 ) return SQLITE_NOMEM_BKPT;
pTask->pUnpacked->nField = pTask->pSorter->pKeyInfo->nKeyField;
pTask->pUnpacked->errCode = 0;
}
return SQLITE_OK;
}
/*
** Merge the two sorted lists p1 and p2 into a single list.
*/
static SorterRecord *vdbeSorterMerge(
SortSubtask *pTask, /* Calling thread context */
SorterRecord *p1, /* First list to merge */
SorterRecord *p2 /* Second list to merge */
){
SorterRecord *pFinal = 0;
SorterRecord **pp = &pFinal;
int bCached = 0;
assert( p1!=0 && p2!=0 );
for(;;){
int res;
res = pTask->xCompare(
pTask, &bCached, SRVAL(p1), p1->nVal, SRVAL(p2), p2->nVal
);
if( res<=0 ){
*pp = p1;
pp = &p1->u.pNext;
p1 = p1->u.pNext;
if( p1==0 ){
*pp = p2;
break;
}
}else{
*pp = p2;
pp = &p2->u.pNext;
p2 = p2->u.pNext;
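/* p2 has advanced, so any unpacked copy of the previous key2 cached in
** pTask->pUnpacked is now stale. Force it to be unpacked again. */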
bCached = 0;
if( p2==0 ){
*pp = p1;
break;
}
}
}
return pFinal;
}
/*
** Return the SorterCompare function to compare values collected by the
** sorter object passed as the only argument.
*/
static SorterCompare vdbeSorterGetCompare(VdbeSorter *p){
if( p->typeMask==SORTER_TYPE_INTEGER ){
return vdbeSorterCompareInt;
}else if( p->typeMask==SORTER_TYPE_TEXT ){
return vdbeSorterCompareText;
}
return vdbeSorterCompare;
}
/*
** Sort the linked list of records headed at pTask->pList. Return
** SQLITE_OK if successful, or an SQLite error code (i.e. SQLITE_NOMEM) if
** an error occurs.
*/
static int vdbeSorterSort(SortSubtask *pTask, SorterList *pList){
int i;
SorterRecord **aSlot;
SorterRecord *p;
int rc;
rc = vdbeSortAllocUnpacked(pTask);
if( rc!=SQLITE_OK ) return rc;
p = pList->pList;
pTask->xCompare = vdbeSorterGetCompare(pTask->pSorter);
aSlot = (SorterRecord **)sqlite3MallocZero(64 * sizeof(SorterRecord *));
if( !aSlot ){
return SQLITE_NOMEM_BKPT;
}
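/* aSlot[i], when populated, holds an already-sorted list of exactly 2^i
** records. The loop below folds incoming records into these slots using
** the usual bottom-up linked-list merge sort pattern; once all records
** have been consumed, the slots are merged together to form the result. */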
while( p ){
SorterRecord *pNext;
if( pList->aMemory ){
if( (u8*)p==pList->aMemory ){
pNext = 0;
}else{
assert( p->u.iNext<sqlite3MallocSize(pList->aMemory) );
pNext = (SorterRecord*)&pList->aMemory[p->u.iNext];
}
}else{
pNext = p->u.pNext;
}
p->u.pNext = 0;