| OLD | NEW |
| (Empty) |
| 1 /* | |
| 2 ** 2006 Oct 10 | |
| 3 ** | |
| 4 ** The author disclaims copyright to this source code. In place of | |
| 5 ** a legal notice, here is a blessing: | |
| 6 ** | |
| 7 ** May you do good and not evil. | |
| 8 ** May you find forgiveness for yourself and forgive others. | |
| 9 ** May you share freely, never taking more than you give. | |
| 10 ** | |
| 11 ****************************************************************************** | |
| 12 ** | |
| 13 ** This is an SQLite module implementing full-text search. | |
| 14 */ | |
| 15 | |
| 16 /* | |
| 17 ** The code in this file is only compiled if: | |
| 18 ** | |
| 19 ** * The FTS3 module is being built as an extension | |
| 20 ** (in which case SQLITE_CORE is not defined), or | |
| 21 ** | |
| 22 ** * The FTS3 module is being built into the core of | |
| 23 ** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). | |
| 24 */ | |
| 25 | |
| 26 /* TODO(shess) Consider exporting this comment to an HTML file or the | |
| 27 ** wiki. | |
| 28 */ | |
| 29 /* The full-text index is stored in a series of b+tree (-like) | |
| 30 ** structures called segments which map terms to doclists. The | |
| 31 ** structures are like b+trees in layout, but are constructed from the | |
| 32 ** bottom up in optimal fashion and are not updatable. Since trees | |
| 33 ** are built from the bottom up, things will be described from the | |
| 34 ** bottom up. | |
| 35 ** | |
| 36 ** | |
| 37 **** Varints **** | |
| 38 ** The basic unit of encoding is a variable-length integer called a | |
| 39 ** varint. We encode variable-length integers in little-endian order | |
| 40 ** using seven bits * per byte as follows: | |
| 41 ** | |
| 42 ** KEY: | |
| 43 ** A = 0xxxxxxx 7 bits of data and one flag bit | |
| 44 ** B = 1xxxxxxx 7 bits of data and one flag bit | |
| 45 ** | |
| 46 ** 7 bits - A | |
| 47 ** 14 bits - BA | |
| 48 ** 21 bits - BBA | |
| 49 ** and so on. | |
| 50 ** | |
| 51 ** This is identical to how sqlite encodes varints (see util.c). | |
| 52 ** | |
| 53 ** | |
| 54 **** Document lists **** | |
| 55 ** A doclist (document list) holds a docid-sorted list of hits for a | |
| 56 ** given term. Doclists hold docids, and can optionally associate | |
| 57 ** token positions and offsets with docids. | |
| 58 ** | |
| 59 ** A DL_POSITIONS_OFFSETS doclist is stored like this: | |
| 60 ** | |
| 61 ** array { | |
| 62 ** varint docid; | |
| 63 ** array { (position list for column 0) | |
| 64 ** varint position; (delta from previous position plus POS_BASE) | |
| 65 ** varint startOffset; (delta from previous startOffset) | |
| 66 ** varint endOffset; (delta from startOffset) | |
| 67 ** } | |
| 68 ** array { | |
| 69 ** varint POS_COLUMN; (marks start of position list for new column) | |
| 70 ** varint column; (index of new column) | |
| 71 ** array { | |
| 72 ** varint position; (delta from previous position plus POS_BASE) | |
| 73 ** varint startOffset;(delta from previous startOffset) | |
| 74 ** varint endOffset; (delta from startOffset) | |
| 75 ** } | |
| 76 ** } | |
| 77 ** varint POS_END; (marks end of positions for this document. | |
| 78 ** } | |
| 79 ** | |
| 80 ** Here, array { X } means zero or more occurrences of X, adjacent in | |
| 81 ** memory. A "position" is an index of a token in the token stream | |
| 82 ** generated by the tokenizer, while an "offset" is a byte offset, | |
| 83 ** both based at 0. Note that POS_END and POS_COLUMN occur in the | |
| 84 ** same logical place as the position element, and act as sentinals | |
| 85 ** ending a position list array. | |
| 86 ** | |
| 87 ** A DL_POSITIONS doclist omits the startOffset and endOffset | |
| 88 ** information. A DL_DOCIDS doclist omits both the position and | |
| 89 ** offset information, becoming an array of varint-encoded docids. | |
| 90 ** | |
| 91 ** On-disk data is stored as type DL_DEFAULT, so we don't serialize | |
| 92 ** the type. Due to how deletion is implemented in the segmentation | |
| 93 ** system, on-disk doclists MUST store at least positions. | |
| 94 ** | |
| 95 ** | |
| 96 **** Segment leaf nodes **** | |
| 97 ** Segment leaf nodes store terms and doclists, ordered by term. Leaf | |
| 98 ** nodes are written using LeafWriter, and read using LeafReader (to | |
| 99 ** iterate through a single leaf node's data) and LeavesReader (to | |
| 100 ** iterate through a segment's entire leaf layer). Leaf nodes have | |
| 101 ** the format: | |
| 102 ** | |
| 103 ** varint iHeight; (height from leaf level, always 0) | |
| 104 ** varint nTerm; (length of first term) | |
| 105 ** char pTerm[nTerm]; (content of first term) | |
| 106 ** varint nDoclist; (length of term's associated doclist) | |
| 107 ** char pDoclist[nDoclist]; (content of doclist) | |
| 108 ** array { | |
| 109 ** (further terms are delta-encoded) | |
| 110 ** varint nPrefix; (length of prefix shared with previous term) | |
| 111 ** varint nSuffix; (length of unshared suffix) | |
| 112 ** char pTermSuffix[nSuffix];(unshared suffix of next term) | |
| 113 ** varint nDoclist; (length of term's associated doclist) | |
| 114 ** char pDoclist[nDoclist]; (content of doclist) | |
| 115 ** } | |
| 116 ** | |
| 117 ** Here, array { X } means zero or more occurrences of X, adjacent in | |
| 118 ** memory. | |
| 119 ** | |
| 120 ** Leaf nodes are broken into blocks which are stored contiguously in | |
| 121 ** the %_segments table in sorted order. This means that when the end | |
| 122 ** of a node is reached, the next term is in the node with the next | |
| 123 ** greater node id. | |
| 124 ** | |
| 125 ** New data is spilled to a new leaf node when the current node | |
| 126 ** exceeds LEAF_MAX bytes (default 2048). New data which itself is | |
| 127 ** larger than STANDALONE_MIN (default 1024) is placed in a standalone | |
| 128 ** node (a leaf node with a single term and doclist). The goal of | |
| 129 ** these settings is to pack together groups of small doclists while | |
| 130 ** making it efficient to directly access large doclists. The | |
| 131 ** assumption is that large doclists represent terms which are more | |
| 132 ** likely to be query targets. | |
| 133 ** | |
| 134 ** TODO(shess) It may be useful for blocking decisions to be more | |
| 135 ** dynamic. For instance, it may make more sense to have a 2.5k leaf | |
| 136 ** node rather than splitting into 2k and .5k nodes. My intuition is | |
| 137 ** that this might extend through 2x or 4x the pagesize. | |
| 138 ** | |
| 139 ** | |
| 140 **** Segment interior nodes **** | |
| 141 ** Segment interior nodes store blockids for subtree nodes and terms | |
| 142 ** to describe what data is stored by the each subtree. Interior | |
| 143 ** nodes are written using InteriorWriter, and read using | |
| 144 ** InteriorReader. InteriorWriters are created as needed when | |
| 145 ** SegmentWriter creates new leaf nodes, or when an interior node | |
| 146 ** itself grows too big and must be split. The format of interior | |
| 147 ** nodes: | |
| 148 ** | |
| 149 ** varint iHeight; (height from leaf level, always >0) | |
| 150 ** varint iBlockid; (block id of node's leftmost subtree) | |
| 151 ** optional { | |
| 152 ** varint nTerm; (length of first term) | |
| 153 ** char pTerm[nTerm]; (content of first term) | |
| 154 ** array { | |
| 155 ** (further terms are delta-encoded) | |
| 156 ** varint nPrefix; (length of shared prefix with previous term) | |
| 157 ** varint nSuffix; (length of unshared suffix) | |
| 158 ** char pTermSuffix[nSuffix]; (unshared suffix of next term) | |
| 159 ** } | |
| 160 ** } | |
| 161 ** | |
| 162 ** Here, optional { X } means an optional element, while array { X } | |
| 163 ** means zero or more occurrences of X, adjacent in memory. | |
| 164 ** | |
| 165 ** An interior node encodes n terms separating n+1 subtrees. The | |
| 166 ** subtree blocks are contiguous, so only the first subtree's blockid | |
| 167 ** is encoded. The subtree at iBlockid will contain all terms less | |
| 168 ** than the first term encoded (or all terms if no term is encoded). | |
| 169 ** Otherwise, for terms greater than or equal to pTerm[i] but less | |
| 170 ** than pTerm[i+1], the subtree for that term will be rooted at | |
| 171 ** iBlockid+i. Interior nodes only store enough term data to | |
| 172 ** distinguish adjacent children (if the rightmost term of the left | |
| 173 ** child is "something", and the leftmost term of the right child is | |
| 174 ** "wicked", only "w" is stored). | |
| 175 ** | |
| 176 ** New data is spilled to a new interior node at the same height when | |
| 177 ** the current node exceeds INTERIOR_MAX bytes (default 2048). | |
| 178 ** INTERIOR_MIN_TERMS (default 7) keeps large terms from monopolizing | |
| 179 ** interior nodes and making the tree too skinny. The interior nodes | |
| 180 ** at a given height are naturally tracked by interior nodes at | |
| 181 ** height+1, and so on. | |
| 182 ** | |
| 183 ** | |
| 184 **** Segment directory **** | |
| 185 ** The segment directory in table %_segdir stores meta-information for | |
| 186 ** merging and deleting segments, and also the root node of the | |
| 187 ** segment's tree. | |
| 188 ** | |
| 189 ** The root node is the top node of the segment's tree after encoding | |
| 190 ** the entire segment, restricted to ROOT_MAX bytes (default 1024). | |
| 191 ** This could be either a leaf node or an interior node. If the top | |
| 192 ** node requires more than ROOT_MAX bytes, it is flushed to %_segments | |
| 193 ** and a new root interior node is generated (which should always fit | |
| 194 ** within ROOT_MAX because it only needs space for 2 varints, the | |
| 195 ** height and the blockid of the previous root). | |
| 196 ** | |
| 197 ** The meta-information in the segment directory is: | |
| 198 ** level - segment level (see below) | |
| 199 ** idx - index within level | |
| 200 ** - (level,idx uniquely identify a segment) | |
| 201 ** start_block - first leaf node | |
| 202 ** leaves_end_block - last leaf node | |
| 203 ** end_block - last block (including interior nodes) | |
| 204 ** root - contents of root node | |
| 205 ** | |
| 206 ** If the root node is a leaf node, then start_block, | |
| 207 ** leaves_end_block, and end_block are all 0. | |
| 208 ** | |
| 209 ** | |
| 210 **** Segment merging **** | |
| 211 ** To amortize update costs, segments are grouped into levels and | |
| 212 ** merged in batches. Each increase in level represents exponentially | |
| 213 ** more documents. | |
| 214 ** | |
| 215 ** New documents (actually, document updates) are tokenized and | |
| 216 ** written individually (using LeafWriter) to a level 0 segment, with | |
| 217 ** incrementing idx. When idx reaches MERGE_COUNT (default 16), all | |
| 218 ** level 0 segments are merged into a single level 1 segment. Level 1 | |
| 219 ** is populated like level 0, and eventually MERGE_COUNT level 1 | |
| 220 ** segments are merged to a single level 2 segment (representing | |
| 221 ** MERGE_COUNT^2 updates), and so on. | |
| 222 ** | |
| 223 ** A segment merge traverses all segments at a given level in | |
| 224 ** parallel, performing a straightforward sorted merge. Since segment | |
| 225 ** leaf nodes are written in to the %_segments table in order, this | |
| 226 ** merge traverses the underlying sqlite disk structures efficiently. | |
| 227 ** After the merge, all segment blocks from the merged level are | |
| 228 ** deleted. | |
| 229 ** | |
| 230 ** MERGE_COUNT controls how often we merge segments. 16 seems to be | |
| 231 ** somewhat of a sweet spot for insertion performance. 32 and 64 show | |
| 232 ** very similar performance numbers to 16 on insertion, though they're | |
| 233 ** a tiny bit slower (perhaps due to more overhead in merge-time | |
| 234 ** sorting). 8 is about 20% slower than 16, 4 about 50% slower than | |
| 235 ** 16, 2 about 66% slower than 16. | |
| 236 ** | |
| 237 ** At query time, high MERGE_COUNT increases the number of segments | |
| 238 ** which need to be scanned and merged. For instance, with 100k docs | |
| 239 ** inserted: | |
| 240 ** | |
| 241 ** MERGE_COUNT segments | |
| 242 ** 16 25 | |
| 243 ** 8 12 | |
| 244 ** 4 10 | |
| 245 ** 2 6 | |
| 246 ** | |
| 247 ** This appears to have only a moderate impact on queries for very | |
| 248 ** frequent terms (which are somewhat dominated by segment merge | |
| 249 ** costs), and infrequent and non-existent terms still seem to be fast | |
| 250 ** even with many segments. | |
| 251 ** | |
| 252 ** TODO(shess) That said, it would be nice to have a better query-side | |
| 253 ** argument for MERGE_COUNT of 16. Also, it is possible/likely that | |
| 254 ** optimizations to things like doclist merging will swing the sweet | |
| 255 ** spot around. | |
| 256 ** | |
| 257 ** | |
| 258 ** | |
| 259 **** Handling of deletions and updates **** | |
| 260 ** Since we're using a segmented structure, with no docid-oriented | |
| 261 ** index into the term index, we clearly cannot simply update the term | |
| 262 ** index when a document is deleted or updated. For deletions, we | |
| 263 ** write an empty doclist (varint(docid) varint(POS_END)), for updates | |
| 264 ** we simply write the new doclist. Segment merges overwrite older | |
| 265 ** data for a particular docid with newer data, so deletes or updates | |
| 266 ** will eventually overtake the earlier data and knock it out. The | |
| 267 ** query logic likewise merges doclists so that newer data knocks out | |
| 268 ** older data. | |
| 269 ** | |
| 270 ** TODO(shess) Provide a VACUUM type operation to clear out all | |
| 271 ** deletions and duplications. This would basically be a forced merge | |
| 272 ** into a single segment. | |
| 273 */ | |
| 274 #define CHROMIUM_FTS3_CHANGES 1 | |
| 275 | |
| 276 #if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) | |
| 277 | |
| 278 #if defined(SQLITE_ENABLE_FTS3) && !defined(SQLITE_CORE) | |
| 279 # define SQLITE_CORE 1 | |
| 280 #endif | |
| 281 | |
| 282 #include <assert.h> | |
| 283 #include <stdlib.h> | |
| 284 #include <stdio.h> | |
| 285 #include <string.h> | |
| 286 #include <ctype.h> | |
| 287 | |
| 288 #include "fts3.h" | |
| 289 #include "fts3_expr.h" | |
| 290 #include "fts3_hash.h" | |
| 291 #include "fts3_tokenizer.h" | |
| 292 #ifndef SQLITE_CORE | |
| 293 # include "sqlite3ext.h" | |
| 294 SQLITE_EXTENSION_INIT1 | |
| 295 #endif | |
| 296 | |
| 297 | |
| 298 /* TODO(shess) MAN, this thing needs some refactoring. At minimum, it | |
| 299 ** would be nice to order the file better, perhaps something along the | |
| 300 ** lines of: | |
| 301 ** | |
| 302 ** - utility functions | |
| 303 ** - table setup functions | |
| 304 ** - table update functions | |
| 305 ** - table query functions | |
| 306 ** | |
| 307 ** Put the query functions last because they're likely to reference | |
| 308 ** typedefs or functions from the table update section. | |
| 309 */ | |
| 310 | |
| 311 #if 0 | |
| 312 # define FTSTRACE(A) printf A; fflush(stdout) | |
| 313 #else | |
| 314 # define FTSTRACE(A) | |
| 315 #endif | |
| 316 | |
| 317 #if 0 | |
| 318 /* Useful to set breakpoints. See main.c sqlite3Corrupt(). */ | |
| 319 static int fts3Corrupt(void){ | |
| 320 return SQLITE_CORRUPT; | |
| 321 } | |
| 322 # define SQLITE_CORRUPT_BKPT fts3Corrupt() | |
| 323 #else | |
| 324 # define SQLITE_CORRUPT_BKPT SQLITE_CORRUPT | |
| 325 #endif | |
| 326 | |
| 327 /* It is not safe to call isspace(), tolower(), or isalnum() on | |
| 328 ** hi-bit-set characters. This is the same solution used in the | |
| 329 ** tokenizer. | |
| 330 */ | |
| 331 /* TODO(shess) The snippet-generation code should be using the | |
| 332 ** tokenizer-generated tokens rather than doing its own local | |
| 333 ** tokenization. | |
| 334 */ | |
| 335 /* TODO(shess) Is __isascii() a portable version of (c&0x80)==0? */ | |
| 336 static int safe_isspace(char c){ | |
| 337 return (c&0x80)==0 ? isspace(c) : 0; | |
| 338 } | |
| 339 static int safe_tolower(char c){ | |
| 340 return (c>='A' && c<='Z') ? (c-'A'+'a') : c; | |
| 341 } | |
| 342 static int safe_isalnum(char c){ | |
| 343 return (c&0x80)==0 ? isalnum(c) : 0; | |
| 344 } | |
| 345 | |
| 346 typedef enum DocListType { | |
| 347 DL_DOCIDS, /* docids only */ | |
| 348 DL_POSITIONS, /* docids + positions */ | |
| 349 DL_POSITIONS_OFFSETS /* docids + positions + offsets */ | |
| 350 } DocListType; | |
| 351 | |
| 352 /* | |
| 353 ** By default, only positions and not offsets are stored in the doclists. | |
| 354 ** To change this so that offsets are stored too, compile with | |
| 355 ** | |
| 356 ** -DDL_DEFAULT=DL_POSITIONS_OFFSETS | |
| 357 ** | |
| 358 ** If DL_DEFAULT is set to DL_DOCIDS, your table can only be inserted | |
| 359 ** into (no deletes or updates). | |
| 360 */ | |
| 361 #ifndef DL_DEFAULT | |
| 362 # define DL_DEFAULT DL_POSITIONS | |
| 363 #endif | |
| 364 | |
| 365 enum { | |
| 366 POS_END = 0, /* end of this position list */ | |
| 367 POS_COLUMN, /* followed by new column number */ | |
| 368 POS_BASE | |
| 369 }; | |
| 370 | |
| 371 /* MERGE_COUNT controls how often we merge segments (see comment at | |
| 372 ** top of file). | |
| 373 */ | |
| 374 #define MERGE_COUNT 16 | |
| 375 | |
| 376 /* utility functions */ | |
| 377 | |
| 378 /* CLEAR() and SCRAMBLE() abstract memset() on a pointer to a single | |
| 379 ** record to prevent errors of the form: | |
| 380 ** | |
| 381 ** my_function(SomeType *b){ | |
| 382 ** memset(b, '\0', sizeof(b)); // sizeof(b)!=sizeof(*b) | |
| 383 ** } | |
| 384 */ | |
| 385 /* TODO(shess) Obvious candidates for a header file. */ | |
| 386 #define CLEAR(b) memset(b, '\0', sizeof(*(b))) | |
| 387 | |
| 388 #ifndef NDEBUG | |
| 389 # define SCRAMBLE(b) memset(b, 0x55, sizeof(*(b))) | |
| 390 #else | |
| 391 # define SCRAMBLE(b) | |
| 392 #endif | |
| 393 | |
| 394 /* We may need up to VARINT_MAX bytes to store an encoded 64-bit integer. */ | |
| 395 #define VARINT_MAX 10 | |
| 396 | |
| 397 /* Write a 64-bit variable-length integer to memory starting at p[0]. | |
| 398 * The length of data written will be between 1 and VARINT_MAX bytes. | |
| 399 * The number of bytes written is returned. */ | |
| 400 static int fts3PutVarint(char *p, sqlite_int64 v){ | |
| 401 unsigned char *q = (unsigned char *) p; | |
| 402 sqlite_uint64 vu = v; | |
| 403 do{ | |
| 404 *q++ = (unsigned char) ((vu & 0x7f) | 0x80); | |
| 405 vu >>= 7; | |
| 406 }while( vu!=0 ); | |
| 407 q[-1] &= 0x7f; /* turn off high bit in final byte */ | |
| 408 assert( q - (unsigned char *)p <= VARINT_MAX ); | |
| 409 return (int) (q - (unsigned char *)p); | |
| 410 } | |
| 411 | |
| 412 /* Read a 64-bit variable-length integer from memory starting at p[0]. | |
| 413 * Return the number of bytes read, or 0 on error. | |
| 414 * The value is stored in *v. */ | |
| 415 static int fts3GetVarintSafe(const char *p, sqlite_int64 *v, int max){ | |
| 416 const unsigned char *q = (const unsigned char *) p; | |
| 417 sqlite_uint64 x = 0, y = 1; | |
| 418 if( max>VARINT_MAX ) max = VARINT_MAX; | |
| 419 while( max && (*q & 0x80) == 0x80 ){ | |
| 420 max--; | |
| 421 x += y * (*q++ & 0x7f); | |
| 422 y <<= 7; | |
| 423 } | |
| 424 if( !max ){ | |
| 425 assert( 0 ); | |
| 426 return 0; /* tried to read too much; bad data */ | |
| 427 } | |
| 428 x += y * (*q++); | |
| 429 *v = (sqlite_int64) x; | |
| 430 return (int) (q - (unsigned char *)p); | |
| 431 } | |
| 432 | |
| 433 static int fts3GetVarint(const char *p, sqlite_int64 *v){ | |
| 434 return fts3GetVarintSafe(p, v, VARINT_MAX); | |
| 435 } | |
| 436 | |
| 437 static int fts3GetVarint32Safe(const char *p, int *pi, int max){ | |
| 438 sqlite_int64 i; | |
| 439 int ret = fts3GetVarintSafe(p, &i, max); | |
| 440 if( !ret ) return ret; | |
| 441 *pi = (int) i; | |
| 442 assert( *pi==i ); | |
| 443 return ret; | |
| 444 } | |
| 445 | |
| 446 static int fts3GetVarint32(const char* p, int *pi){ | |
| 447 return fts3GetVarint32Safe(p, pi, VARINT_MAX); | |
| 448 } | |
| 449 | |
| 450 /*******************************************************************/ | |
| 451 /* DataBuffer is used to collect data into a buffer in piecemeal | |
| 452 ** fashion. It implements the usual distinction between amount of | |
| 453 ** data currently stored (nData) and buffer capacity (nCapacity). | |
| 454 ** | |
| 455 ** dataBufferInit - create a buffer with given initial capacity. | |
| 456 ** dataBufferReset - forget buffer's data, retaining capacity. | |
| 457 ** dataBufferDestroy - free buffer's data. | |
| 458 ** dataBufferSwap - swap contents of two buffers. | |
| 459 ** dataBufferExpand - expand capacity without adding data. | |
| 460 ** dataBufferAppend - append data. | |
| 461 ** dataBufferAppend2 - append two pieces of data at once. | |
| 462 ** dataBufferReplace - replace buffer's data. | |
| 463 */ | |
| 464 typedef struct DataBuffer { | |
| 465 char *pData; /* Pointer to malloc'ed buffer. */ | |
| 466 int nCapacity; /* Size of pData buffer. */ | |
| 467 int nData; /* End of data loaded into pData. */ | |
| 468 } DataBuffer; | |
| 469 | |
| 470 static void dataBufferInit(DataBuffer *pBuffer, int nCapacity){ | |
| 471 assert( nCapacity>=0 ); | |
| 472 pBuffer->nData = 0; | |
| 473 pBuffer->nCapacity = nCapacity; | |
| 474 pBuffer->pData = nCapacity==0 ? NULL : sqlite3_malloc(nCapacity); | |
| 475 } | |
| 476 static void dataBufferReset(DataBuffer *pBuffer){ | |
| 477 pBuffer->nData = 0; | |
| 478 } | |
| 479 static void dataBufferDestroy(DataBuffer *pBuffer){ | |
| 480 if( pBuffer->pData!=NULL ) sqlite3_free(pBuffer->pData); | |
| 481 SCRAMBLE(pBuffer); | |
| 482 } | |
| 483 static void dataBufferSwap(DataBuffer *pBuffer1, DataBuffer *pBuffer2){ | |
| 484 DataBuffer tmp = *pBuffer1; | |
| 485 *pBuffer1 = *pBuffer2; | |
| 486 *pBuffer2 = tmp; | |
| 487 } | |
| 488 static void dataBufferExpand(DataBuffer *pBuffer, int nAddCapacity){ | |
| 489 assert( nAddCapacity>0 ); | |
| 490 /* TODO(shess) Consider expanding more aggressively. Note that the | |
| 491 ** underlying malloc implementation may take care of such things for | |
| 492 ** us already. | |
| 493 */ | |
| 494 if( pBuffer->nData+nAddCapacity>pBuffer->nCapacity ){ | |
| 495 pBuffer->nCapacity = pBuffer->nData+nAddCapacity; | |
| 496 pBuffer->pData = sqlite3_realloc(pBuffer->pData, pBuffer->nCapacity); | |
| 497 } | |
| 498 } | |
| 499 static void dataBufferAppend(DataBuffer *pBuffer, | |
| 500 const char *pSource, int nSource){ | |
| 501 assert( nSource>0 && pSource!=NULL ); | |
| 502 dataBufferExpand(pBuffer, nSource); | |
| 503 memcpy(pBuffer->pData+pBuffer->nData, pSource, nSource); | |
| 504 pBuffer->nData += nSource; | |
| 505 } | |
| 506 static void dataBufferAppend2(DataBuffer *pBuffer, | |
| 507 const char *pSource1, int nSource1, | |
| 508 const char *pSource2, int nSource2){ | |
| 509 assert( nSource1>0 && pSource1!=NULL ); | |
| 510 assert( nSource2>0 && pSource2!=NULL ); | |
| 511 dataBufferExpand(pBuffer, nSource1+nSource2); | |
| 512 memcpy(pBuffer->pData+pBuffer->nData, pSource1, nSource1); | |
| 513 memcpy(pBuffer->pData+pBuffer->nData+nSource1, pSource2, nSource2); | |
| 514 pBuffer->nData += nSource1+nSource2; | |
| 515 } | |
| 516 static void dataBufferReplace(DataBuffer *pBuffer, | |
| 517 const char *pSource, int nSource){ | |
| 518 dataBufferReset(pBuffer); | |
| 519 dataBufferAppend(pBuffer, pSource, nSource); | |
| 520 } | |
| 521 | |
| 522 /* StringBuffer is a null-terminated version of DataBuffer. */ | |
| 523 typedef struct StringBuffer { | |
| 524 DataBuffer b; /* Includes null terminator. */ | |
| 525 } StringBuffer; | |
| 526 | |
| 527 static void initStringBuffer(StringBuffer *sb){ | |
| 528 dataBufferInit(&sb->b, 100); | |
| 529 dataBufferReplace(&sb->b, "", 1); | |
| 530 } | |
| 531 static int stringBufferLength(StringBuffer *sb){ | |
| 532 return sb->b.nData-1; | |
| 533 } | |
| 534 static char *stringBufferData(StringBuffer *sb){ | |
| 535 return sb->b.pData; | |
| 536 } | |
| 537 static void stringBufferDestroy(StringBuffer *sb){ | |
| 538 dataBufferDestroy(&sb->b); | |
| 539 } | |
| 540 | |
| 541 static void nappend(StringBuffer *sb, const char *zFrom, int nFrom){ | |
| 542 assert( sb->b.nData>0 ); | |
| 543 if( nFrom>0 ){ | |
| 544 sb->b.nData--; | |
| 545 dataBufferAppend2(&sb->b, zFrom, nFrom, "", 1); | |
| 546 } | |
| 547 } | |
| 548 static void append(StringBuffer *sb, const char *zFrom){ | |
| 549 nappend(sb, zFrom, strlen(zFrom)); | |
| 550 } | |
| 551 | |
| 552 /* Append a list of strings separated by commas. */ | |
| 553 static void appendList(StringBuffer *sb, int nString, char **azString){ | |
| 554 int i; | |
| 555 for(i=0; i<nString; ++i){ | |
| 556 if( i>0 ) append(sb, ", "); | |
| 557 append(sb, azString[i]); | |
| 558 } | |
| 559 } | |
| 560 | |
| 561 static int endsInWhiteSpace(StringBuffer *p){ | |
| 562 return stringBufferLength(p)>0 && | |
| 563 safe_isspace(stringBufferData(p)[stringBufferLength(p)-1]); | |
| 564 } | |
| 565 | |
| 566 /* If the StringBuffer ends in something other than white space, add a | |
| 567 ** single space character to the end. | |
| 568 */ | |
| 569 static void appendWhiteSpace(StringBuffer *p){ | |
| 570 if( stringBufferLength(p)==0 ) return; | |
| 571 if( !endsInWhiteSpace(p) ) append(p, " "); | |
| 572 } | |
| 573 | |
| 574 /* Remove white space from the end of the StringBuffer */ | |
| 575 static void trimWhiteSpace(StringBuffer *p){ | |
| 576 while( endsInWhiteSpace(p) ){ | |
| 577 p->b.pData[--p->b.nData-1] = '\0'; | |
| 578 } | |
| 579 } | |
| 580 | |
| 581 /*******************************************************************/ | |
| 582 /* DLReader is used to read document elements from a doclist. The | |
| 583 ** current docid is cached, so dlrDocid() is fast. DLReader does not | |
| 584 ** own the doclist buffer. | |
| 585 ** | |
| 586 ** dlrAtEnd - true if there's no more data to read. | |
| 587 ** dlrDocid - docid of current document. | |
| 588 ** dlrDocData - doclist data for current document (including docid). | |
| 589 ** dlrDocDataBytes - length of same. | |
| 590 ** dlrAllDataBytes - length of all remaining data. | |
| 591 ** dlrPosData - position data for current document. | |
| 592 ** dlrPosDataLen - length of pos data for current document (incl POS_END). | |
| 593 ** dlrStep - step to current document. | |
| 594 ** dlrInit - initial for doclist of given type against given data. | |
| 595 ** dlrDestroy - clean up. | |
| 596 ** | |
| 597 ** Expected usage is something like: | |
| 598 ** | |
| 599 ** DLReader reader; | |
| 600 ** dlrInit(&reader, pData, nData); | |
| 601 ** while( !dlrAtEnd(&reader) ){ | |
| 602 ** // calls to dlrDocid() and kin. | |
| 603 ** dlrStep(&reader); | |
| 604 ** } | |
| 605 ** dlrDestroy(&reader); | |
| 606 */ | |
| 607 typedef struct DLReader { | |
| 608 DocListType iType; | |
| 609 const char *pData; | |
| 610 int nData; | |
| 611 | |
| 612 sqlite_int64 iDocid; | |
| 613 int nElement; | |
| 614 } DLReader; | |
| 615 | |
| 616 static int dlrAtEnd(DLReader *pReader){ | |
| 617 assert( pReader->nData>=0 ); | |
| 618 return pReader->nData<=0; | |
| 619 } | |
| 620 static sqlite_int64 dlrDocid(DLReader *pReader){ | |
| 621 assert( !dlrAtEnd(pReader) ); | |
| 622 return pReader->iDocid; | |
| 623 } | |
| 624 static const char *dlrDocData(DLReader *pReader){ | |
| 625 assert( !dlrAtEnd(pReader) ); | |
| 626 return pReader->pData; | |
| 627 } | |
| 628 static int dlrDocDataBytes(DLReader *pReader){ | |
| 629 assert( !dlrAtEnd(pReader) ); | |
| 630 return pReader->nElement; | |
| 631 } | |
| 632 static int dlrAllDataBytes(DLReader *pReader){ | |
| 633 assert( !dlrAtEnd(pReader) ); | |
| 634 return pReader->nData; | |
| 635 } | |
| 636 /* TODO(shess) Consider adding a field to track iDocid varint length | |
| 637 ** to make these two functions faster. This might matter (a tiny bit) | |
| 638 ** for queries. | |
| 639 */ | |
| 640 static const char *dlrPosData(DLReader *pReader){ | |
| 641 sqlite_int64 iDummy; | |
| 642 int n = fts3GetVarintSafe(pReader->pData, &iDummy, pReader->nElement); | |
| 643 if( !n ) return NULL; | |
| 644 assert( !dlrAtEnd(pReader) ); | |
| 645 return pReader->pData+n; | |
| 646 } | |
| 647 static int dlrPosDataLen(DLReader *pReader){ | |
| 648 sqlite_int64 iDummy; | |
| 649 int n = fts3GetVarint(pReader->pData, &iDummy); | |
| 650 assert( !dlrAtEnd(pReader) ); | |
| 651 return pReader->nElement-n; | |
| 652 } | |
| 653 static int dlrStep(DLReader *pReader){ | |
| 654 assert( !dlrAtEnd(pReader) ); | |
| 655 | |
| 656 /* Skip past current doclist element. */ | |
| 657 assert( pReader->nElement<=pReader->nData ); | |
| 658 pReader->pData += pReader->nElement; | |
| 659 pReader->nData -= pReader->nElement; | |
| 660 | |
| 661 /* If there is more data, read the next doclist element. */ | |
| 662 if( pReader->nData>0 ){ | |
| 663 sqlite_int64 iDocidDelta; | |
| 664 int nTotal = 0; | |
| 665 int iDummy, n = fts3GetVarintSafe(pReader->pData, &iDocidDelta, pReader->nDa
ta); | |
| 666 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 667 nTotal += n; | |
| 668 pReader->iDocid += iDocidDelta; | |
| 669 if( pReader->iType>=DL_POSITIONS ){ | |
| 670 while( 1 ){ | |
| 671 n = fts3GetVarint32Safe(pReader->pData+nTotal, &iDummy, pReader->nData-n
Total); | |
| 672 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 673 nTotal += n; | |
| 674 if( iDummy==POS_END ) break; | |
| 675 if( iDummy==POS_COLUMN ){ | |
| 676 n = fts3GetVarint32Safe(pReader->pData+nTotal, &iDummy, pReader->nData
-nTotal); | |
| 677 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 678 nTotal += n; | |
| 679 }else if( pReader->iType==DL_POSITIONS_OFFSETS ){ | |
| 680 n = fts3GetVarint32Safe(pReader->pData+nTotal, &iDummy, pReader->nData
-nTotal); | |
| 681 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 682 nTotal += n; | |
| 683 n = fts3GetVarint32Safe(pReader->pData+nTotal, &iDummy, pReader->nData
-nTotal); | |
| 684 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 685 nTotal += n; | |
| 686 } | |
| 687 } | |
| 688 } | |
| 689 pReader->nElement = nTotal; | |
| 690 assert( pReader->nElement<=pReader->nData ); | |
| 691 } | |
| 692 return SQLITE_OK; | |
| 693 } | |
| 694 static void dlrDestroy(DLReader *pReader){ | |
| 695 SCRAMBLE(pReader); | |
| 696 } | |
| 697 static int dlrInit(DLReader *pReader, DocListType iType, | |
| 698 const char *pData, int nData){ | |
| 699 int rc; | |
| 700 assert( pData!=NULL && nData!=0 ); | |
| 701 pReader->iType = iType; | |
| 702 pReader->pData = pData; | |
| 703 pReader->nData = nData; | |
| 704 pReader->nElement = 0; | |
| 705 pReader->iDocid = 0; | |
| 706 | |
| 707 /* Load the first element's data. There must be a first element. */ | |
| 708 rc = dlrStep(pReader); | |
| 709 if( rc!=SQLITE_OK ) dlrDestroy(pReader); | |
| 710 return rc; | |
| 711 } | |
| 712 | |
| 713 #ifndef NDEBUG | |
| 714 /* Verify that the doclist can be validly decoded. Also returns the | |
| 715 ** last docid found because it is convenient in other assertions for | |
| 716 ** DLWriter. | |
| 717 */ | |
| 718 static void docListValidate(DocListType iType, const char *pData, int nData, | |
| 719 sqlite_int64 *pLastDocid){ | |
| 720 sqlite_int64 iPrevDocid = 0; | |
| 721 assert( nData>0 ); | |
| 722 assert( pData!=0 ); | |
| 723 assert( pData+nData>pData ); | |
| 724 while( nData!=0 ){ | |
| 725 sqlite_int64 iDocidDelta; | |
| 726 int n = fts3GetVarint(pData, &iDocidDelta); | |
| 727 iPrevDocid += iDocidDelta; | |
| 728 if( iType>DL_DOCIDS ){ | |
| 729 int iDummy; | |
| 730 while( 1 ){ | |
| 731 n += fts3GetVarint32(pData+n, &iDummy); | |
| 732 if( iDummy==POS_END ) break; | |
| 733 if( iDummy==POS_COLUMN ){ | |
| 734 n += fts3GetVarint32(pData+n, &iDummy); | |
| 735 }else if( iType>DL_POSITIONS ){ | |
| 736 n += fts3GetVarint32(pData+n, &iDummy); | |
| 737 n += fts3GetVarint32(pData+n, &iDummy); | |
| 738 } | |
| 739 assert( n<=nData ); | |
| 740 } | |
| 741 } | |
| 742 assert( n<=nData ); | |
| 743 pData += n; | |
| 744 nData -= n; | |
| 745 } | |
| 746 if( pLastDocid ) *pLastDocid = iPrevDocid; | |
| 747 } | |
| 748 #define ASSERT_VALID_DOCLIST(i, p, n, o) docListValidate(i, p, n, o) | |
| 749 #else | |
| 750 #define ASSERT_VALID_DOCLIST(i, p, n, o) assert( 1 ) | |
| 751 #endif | |
| 752 | |
| 753 /*******************************************************************/ | |
| 754 /* DLWriter is used to write doclist data to a DataBuffer. DLWriter | |
| 755 ** always appends to the buffer and does not own it. | |
| 756 ** | |
| 757 ** dlwInit - initialize to write a given type doclistto a buffer. | |
| 758 ** dlwDestroy - clear the writer's memory. Does not free buffer. | |
| 759 ** dlwAppend - append raw doclist data to buffer. | |
| 760 ** dlwCopy - copy next doclist from reader to writer. | |
| 761 ** dlwAdd - construct doclist element and append to buffer. | |
| 762 ** Only apply dlwAdd() to DL_DOCIDS doclists (else use PLWriter). | |
| 763 */ | |
| 764 typedef struct DLWriter { | |
| 765 DocListType iType; | |
| 766 DataBuffer *b; | |
| 767 sqlite_int64 iPrevDocid; | |
| 768 #ifndef NDEBUG | |
| 769 int has_iPrevDocid; | |
| 770 #endif | |
| 771 } DLWriter; | |
| 772 | |
| 773 static void dlwInit(DLWriter *pWriter, DocListType iType, DataBuffer *b){ | |
| 774 pWriter->b = b; | |
| 775 pWriter->iType = iType; | |
| 776 pWriter->iPrevDocid = 0; | |
| 777 #ifndef NDEBUG | |
| 778 pWriter->has_iPrevDocid = 0; | |
| 779 #endif | |
| 780 } | |
| 781 static void dlwDestroy(DLWriter *pWriter){ | |
| 782 SCRAMBLE(pWriter); | |
| 783 } | |
| 784 /* iFirstDocid is the first docid in the doclist in pData. It is | |
| 785 ** needed because pData may point within a larger doclist, in which | |
| 786 ** case the first item would be delta-encoded. | |
| 787 ** | |
| 788 ** iLastDocid is the final docid in the doclist in pData. It is | |
| 789 ** needed to create the new iPrevDocid for future delta-encoding. The | |
| 790 ** code could decode the passed doclist to recreate iLastDocid, but | |
| 791 ** the only current user (docListMerge) already has decoded this | |
| 792 ** information. | |
| 793 */ | |
| 794 /* TODO(shess) This has become just a helper for docListMerge. | |
| 795 ** Consider a refactor to make this cleaner. | |
| 796 */ | |
| 797 static int dlwAppend(DLWriter *pWriter, | |
| 798 const char *pData, int nData, | |
| 799 sqlite_int64 iFirstDocid, sqlite_int64 iLastDocid){ | |
| 800 sqlite_int64 iDocid = 0; | |
| 801 char c[VARINT_MAX]; | |
| 802 int nFirstOld, nFirstNew; /* Old and new varint len of first docid. */ | |
| 803 #ifndef NDEBUG | |
| 804 sqlite_int64 iLastDocidDelta; | |
| 805 #endif | |
| 806 | |
| 807 /* Recode the initial docid as delta from iPrevDocid. */ | |
| 808 nFirstOld = fts3GetVarintSafe(pData, &iDocid, nData); | |
| 809 if( !nFirstOld ) return SQLITE_CORRUPT_BKPT; | |
| 810 assert( nFirstOld<nData || (nFirstOld==nData && pWriter->iType==DL_DOCIDS) ); | |
| 811 nFirstNew = fts3PutVarint(c, iFirstDocid-pWriter->iPrevDocid); | |
| 812 | |
| 813 /* Verify that the incoming doclist is valid AND that it ends with | |
| 814 ** the expected docid. This is essential because we'll trust this | |
| 815 ** docid in future delta-encoding. | |
| 816 */ | |
| 817 ASSERT_VALID_DOCLIST(pWriter->iType, pData, nData, &iLastDocidDelta); | |
| 818 assert( iLastDocid==iFirstDocid-iDocid+iLastDocidDelta ); | |
| 819 | |
| 820 /* Append recoded initial docid and everything else. Rest of docids | |
| 821 ** should have been delta-encoded from previous initial docid. | |
| 822 */ | |
| 823 if( nFirstOld<nData ){ | |
| 824 dataBufferAppend2(pWriter->b, c, nFirstNew, | |
| 825 pData+nFirstOld, nData-nFirstOld); | |
| 826 }else{ | |
| 827 dataBufferAppend(pWriter->b, c, nFirstNew); | |
| 828 } | |
| 829 pWriter->iPrevDocid = iLastDocid; | |
| 830 return SQLITE_OK; | |
| 831 } | |
| 832 static int dlwCopy(DLWriter *pWriter, DLReader *pReader){ | |
| 833 return dlwAppend(pWriter, dlrDocData(pReader), dlrDocDataBytes(pReader), | |
| 834 dlrDocid(pReader), dlrDocid(pReader)); | |
| 835 } | |
| 836 static void dlwAdd(DLWriter *pWriter, sqlite_int64 iDocid){ | |
| 837 char c[VARINT_MAX]; | |
| 838 int n = fts3PutVarint(c, iDocid-pWriter->iPrevDocid); | |
| 839 | |
| 840 /* Docids must ascend. */ | |
| 841 assert( !pWriter->has_iPrevDocid || iDocid>pWriter->iPrevDocid ); | |
| 842 assert( pWriter->iType==DL_DOCIDS ); | |
| 843 | |
| 844 dataBufferAppend(pWriter->b, c, n); | |
| 845 pWriter->iPrevDocid = iDocid; | |
| 846 #ifndef NDEBUG | |
| 847 pWriter->has_iPrevDocid = 1; | |
| 848 #endif | |
| 849 } | |
| 850 | |
| 851 /*******************************************************************/ | |
| 852 /* PLReader is used to read data from a document's position list. As | |
| 853 ** the caller steps through the list, data is cached so that varints | |
| 854 ** only need to be decoded once. | |
| 855 ** | |
| 856 ** plrInit, plrDestroy - create/destroy a reader. | |
| 857 ** plrColumn, plrPosition, plrStartOffset, plrEndOffset - accessors | |
| 858 ** plrAtEnd - at end of stream, only call plrDestroy once true. | |
| 859 ** plrStep - step to the next element. | |
| 860 */ | |
| 861 typedef struct PLReader { | |
| 862 /* These refer to the next position's data. nData will reach 0 when | |
| 863 ** reading the last position, so plrStep() signals EOF by setting | |
| 864 ** pData to NULL. | |
| 865 */ | |
| 866 const char *pData; | |
| 867 int nData; | |
| 868 | |
| 869 DocListType iType; | |
| 870 int iColumn; /* the last column read */ | |
| 871 int iPosition; /* the last position read */ | |
| 872 int iStartOffset; /* the last start offset read */ | |
| 873 int iEndOffset; /* the last end offset read */ | |
| 874 } PLReader; | |
| 875 | |
| 876 static int plrAtEnd(PLReader *pReader){ | |
| 877 return pReader->pData==NULL; | |
| 878 } | |
| 879 static int plrColumn(PLReader *pReader){ | |
| 880 assert( !plrAtEnd(pReader) ); | |
| 881 return pReader->iColumn; | |
| 882 } | |
| 883 static int plrPosition(PLReader *pReader){ | |
| 884 assert( !plrAtEnd(pReader) ); | |
| 885 return pReader->iPosition; | |
| 886 } | |
| 887 static int plrStartOffset(PLReader *pReader){ | |
| 888 assert( !plrAtEnd(pReader) ); | |
| 889 return pReader->iStartOffset; | |
| 890 } | |
| 891 static int plrEndOffset(PLReader *pReader){ | |
| 892 assert( !plrAtEnd(pReader) ); | |
| 893 return pReader->iEndOffset; | |
| 894 } | |
| 895 static int plrStep(PLReader *pReader){ | |
| 896 int i, n, nTotal = 0; | |
| 897 | |
| 898 assert( !plrAtEnd(pReader) ); | |
| 899 | |
| 900 if( pReader->nData<=0 ){ | |
| 901 pReader->pData = NULL; | |
| 902 return SQLITE_OK; | |
| 903 } | |
| 904 | |
| 905 n = fts3GetVarint32Safe(pReader->pData, &i, pReader->nData); | |
| 906 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 907 nTotal += n; | |
| 908 if( i==POS_COLUMN ){ | |
| 909 n = fts3GetVarint32Safe(pReader->pData+nTotal, &pReader->iColumn, pReader->n
Data-nTotal); | |
| 910 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 911 nTotal += n; | |
| 912 pReader->iPosition = 0; | |
| 913 pReader->iStartOffset = 0; | |
| 914 n = fts3GetVarint32Safe(pReader->pData+nTotal, &i, pReader->nData-nTotal); | |
| 915 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 916 nTotal += n; | |
| 917 } | |
| 918 /* Should never see adjacent column changes. */ | |
| 919 assert( i!=POS_COLUMN ); | |
| 920 | |
| 921 if( i==POS_END ){ | |
| 922 assert( nTotal<=pReader->nData ); | |
| 923 pReader->nData = 0; | |
| 924 pReader->pData = NULL; | |
| 925 return SQLITE_OK; | |
| 926 } | |
| 927 | |
| 928 pReader->iPosition += i-POS_BASE; | |
| 929 if( pReader->iType==DL_POSITIONS_OFFSETS ){ | |
| 930 n = fts3GetVarint32Safe(pReader->pData+nTotal, &i, pReader->nData-nTotal); | |
| 931 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 932 nTotal += n; | |
| 933 pReader->iStartOffset += i; | |
| 934 n = fts3GetVarint32Safe(pReader->pData+nTotal, &i, pReader->nData-nTotal); | |
| 935 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 936 nTotal += n; | |
| 937 pReader->iEndOffset = pReader->iStartOffset+i; | |
| 938 } | |
| 939 assert( nTotal<=pReader->nData ); | |
| 940 pReader->pData += nTotal; | |
| 941 pReader->nData -= nTotal; | |
| 942 return SQLITE_OK; | |
| 943 } | |
| 944 | |
| 945 static void plrDestroy(PLReader *pReader){ | |
| 946 SCRAMBLE(pReader); | |
| 947 } | |
| 948 static int plrInit(PLReader *pReader, DLReader *pDLReader){ | |
| 949 int rc; | |
| 950 pReader->pData = dlrPosData(pDLReader); | |
| 951 pReader->nData = dlrPosDataLen(pDLReader); | |
| 952 pReader->iType = pDLReader->iType; | |
| 953 pReader->iColumn = 0; | |
| 954 pReader->iPosition = 0; | |
| 955 pReader->iStartOffset = 0; | |
| 956 pReader->iEndOffset = 0; | |
| 957 rc = plrStep(pReader); | |
| 958 if( rc!=SQLITE_OK ) plrDestroy(pReader); | |
| 959 return rc; | |
| 960 } | |
| 961 | |
| 962 /*******************************************************************/ | |
| 963 /* PLWriter is used in constructing a document's position list. As a | |
| 964 ** convenience, if iType is DL_DOCIDS, PLWriter becomes a no-op. | |
| 965 ** PLWriter writes to the associated DLWriter's buffer. | |
| 966 ** | |
| 967 ** plwInit - init for writing a document's poslist. | |
| 968 ** plwDestroy - clear a writer. | |
| 969 ** plwAdd - append position and offset information. | |
| 970 ** plwCopy - copy next position's data from reader to writer. | |
| 971 ** plwTerminate - add any necessary doclist terminator. | |
| 972 ** | |
| 973 ** Calling plwAdd() after plwTerminate() may result in a corrupt | |
| 974 ** doclist. | |
| 975 */ | |
| 976 /* TODO(shess) Until we've written the second item, we can cache the | |
| 977 ** first item's information. Then we'd have three states: | |
| 978 ** | |
| 979 ** - initialized with docid, no positions. | |
| 980 ** - docid and one position. | |
| 981 ** - docid and multiple positions. | |
| 982 ** | |
| 983 ** Only the last state needs to actually write to dlw->b, which would | |
| 984 ** be an improvement in the DLCollector case. | |
| 985 */ | |
| 986 typedef struct PLWriter { | |
| 987 DLWriter *dlw; | |
| 988 | |
| 989 int iColumn; /* the last column written */ | |
| 990 int iPos; /* the last position written */ | |
| 991 int iOffset; /* the last start offset written */ | |
| 992 } PLWriter; | |
| 993 | |
| 994 /* TODO(shess) In the case where the parent is reading these values | |
| 995 ** from a PLReader, we could optimize to a copy if that PLReader has | |
| 996 ** the same type as pWriter. | |
| 997 */ | |
| 998 static void plwAdd(PLWriter *pWriter, int iColumn, int iPos, | |
| 999 int iStartOffset, int iEndOffset){ | |
| 1000 /* Worst-case space for POS_COLUMN, iColumn, iPosDelta, | |
| 1001 ** iStartOffsetDelta, and iEndOffsetDelta. | |
| 1002 */ | |
| 1003 char c[5*VARINT_MAX]; | |
| 1004 int n = 0; | |
| 1005 | |
| 1006 /* Ban plwAdd() after plwTerminate(). */ | |
| 1007 assert( pWriter->iPos!=-1 ); | |
| 1008 | |
| 1009 if( pWriter->dlw->iType==DL_DOCIDS ) return; | |
| 1010 | |
| 1011 if( iColumn!=pWriter->iColumn ){ | |
| 1012 n += fts3PutVarint(c+n, POS_COLUMN); | |
| 1013 n += fts3PutVarint(c+n, iColumn); | |
| 1014 pWriter->iColumn = iColumn; | |
| 1015 pWriter->iPos = 0; | |
| 1016 pWriter->iOffset = 0; | |
| 1017 } | |
| 1018 assert( iPos>=pWriter->iPos ); | |
| 1019 n += fts3PutVarint(c+n, POS_BASE+(iPos-pWriter->iPos)); | |
| 1020 pWriter->iPos = iPos; | |
| 1021 if( pWriter->dlw->iType==DL_POSITIONS_OFFSETS ){ | |
| 1022 assert( iStartOffset>=pWriter->iOffset ); | |
| 1023 n += fts3PutVarint(c+n, iStartOffset-pWriter->iOffset); | |
| 1024 pWriter->iOffset = iStartOffset; | |
| 1025 assert( iEndOffset>=iStartOffset ); | |
| 1026 n += fts3PutVarint(c+n, iEndOffset-iStartOffset); | |
| 1027 } | |
| 1028 dataBufferAppend(pWriter->dlw->b, c, n); | |
| 1029 } | |
| 1030 static void plwCopy(PLWriter *pWriter, PLReader *pReader){ | |
| 1031 plwAdd(pWriter, plrColumn(pReader), plrPosition(pReader), | |
| 1032 plrStartOffset(pReader), plrEndOffset(pReader)); | |
| 1033 } | |
| 1034 static void plwInit(PLWriter *pWriter, DLWriter *dlw, sqlite_int64 iDocid){ | |
| 1035 char c[VARINT_MAX]; | |
| 1036 int n; | |
| 1037 | |
| 1038 pWriter->dlw = dlw; | |
| 1039 | |
| 1040 /* Docids must ascend. */ | |
| 1041 assert( !pWriter->dlw->has_iPrevDocid || iDocid>pWriter->dlw->iPrevDocid ); | |
| 1042 n = fts3PutVarint(c, iDocid-pWriter->dlw->iPrevDocid); | |
| 1043 dataBufferAppend(pWriter->dlw->b, c, n); | |
| 1044 pWriter->dlw->iPrevDocid = iDocid; | |
| 1045 #ifndef NDEBUG | |
| 1046 pWriter->dlw->has_iPrevDocid = 1; | |
| 1047 #endif | |
| 1048 | |
| 1049 pWriter->iColumn = 0; | |
| 1050 pWriter->iPos = 0; | |
| 1051 pWriter->iOffset = 0; | |
| 1052 } | |
| 1053 /* TODO(shess) Should plwDestroy() also terminate the doclist? But | |
| 1054 ** then plwDestroy() would no longer be just a destructor, it would | |
| 1055 ** also be doing work, which isn't consistent with the overall idiom. | |
| 1056 ** Another option would be for plwAdd() to always append any necessary | |
| 1057 ** terminator, so that the output is always correct. But that would | |
| 1058 ** add incremental work to the common case with the only benefit being | |
| 1059 ** API elegance. Punt for now. | |
| 1060 */ | |
| 1061 static void plwTerminate(PLWriter *pWriter){ | |
| 1062 if( pWriter->dlw->iType>DL_DOCIDS ){ | |
| 1063 char c[VARINT_MAX]; | |
| 1064 int n = fts3PutVarint(c, POS_END); | |
| 1065 dataBufferAppend(pWriter->dlw->b, c, n); | |
| 1066 } | |
| 1067 #ifndef NDEBUG | |
| 1068 /* Mark as terminated for assert in plwAdd(). */ | |
| 1069 pWriter->iPos = -1; | |
| 1070 #endif | |
| 1071 } | |
| 1072 static void plwDestroy(PLWriter *pWriter){ | |
| 1073 SCRAMBLE(pWriter); | |
| 1074 } | |
| 1075 | |
| 1076 /*******************************************************************/ | |
| 1077 /* DLCollector wraps PLWriter and DLWriter to provide a | |
| 1078 ** dynamically-allocated doclist area to use during tokenization. | |
| 1079 ** | |
| 1080 ** dlcNew - malloc up and initialize a collector. | |
| 1081 ** dlcDelete - destroy a collector and all contained items. | |
| 1082 ** dlcAddPos - append position and offset information. | |
| 1083 ** dlcAddDoclist - add the collected doclist to the given buffer. | |
| 1084 ** dlcNext - terminate the current document and open another. | |
| 1085 */ | |
| 1086 typedef struct DLCollector { | |
| 1087 DataBuffer b; | |
| 1088 DLWriter dlw; | |
| 1089 PLWriter plw; | |
| 1090 } DLCollector; | |
| 1091 | |
| 1092 /* TODO(shess) This could also be done by calling plwTerminate() and | |
| 1093 ** dataBufferAppend(). I tried that, expecting nominal performance | |
| 1094 ** differences, but it seemed to pretty reliably be worth 1% to code | |
| 1095 ** it this way. I suspect it is the incremental malloc overhead (some | |
| 1096 ** percentage of the plwTerminate() calls will cause a realloc), so | |
| 1097 ** this might be worth revisiting if the DataBuffer implementation | |
| 1098 ** changes. | |
| 1099 */ | |
| 1100 static void dlcAddDoclist(DLCollector *pCollector, DataBuffer *b){ | |
| 1101 if( pCollector->dlw.iType>DL_DOCIDS ){ | |
| 1102 char c[VARINT_MAX]; | |
| 1103 int n = fts3PutVarint(c, POS_END); | |
| 1104 dataBufferAppend2(b, pCollector->b.pData, pCollector->b.nData, c, n); | |
| 1105 }else{ | |
| 1106 dataBufferAppend(b, pCollector->b.pData, pCollector->b.nData); | |
| 1107 } | |
| 1108 } | |
| 1109 static void dlcNext(DLCollector *pCollector, sqlite_int64 iDocid){ | |
| 1110 plwTerminate(&pCollector->plw); | |
| 1111 plwDestroy(&pCollector->plw); | |
| 1112 plwInit(&pCollector->plw, &pCollector->dlw, iDocid); | |
| 1113 } | |
| 1114 static void dlcAddPos(DLCollector *pCollector, int iColumn, int iPos, | |
| 1115 int iStartOffset, int iEndOffset){ | |
| 1116 plwAdd(&pCollector->plw, iColumn, iPos, iStartOffset, iEndOffset); | |
| 1117 } | |
| 1118 | |
| 1119 static DLCollector *dlcNew(sqlite_int64 iDocid, DocListType iType){ | |
| 1120 DLCollector *pCollector = sqlite3_malloc(sizeof(DLCollector)); | |
| 1121 dataBufferInit(&pCollector->b, 0); | |
| 1122 dlwInit(&pCollector->dlw, iType, &pCollector->b); | |
| 1123 plwInit(&pCollector->plw, &pCollector->dlw, iDocid); | |
| 1124 return pCollector; | |
| 1125 } | |
| 1126 static void dlcDelete(DLCollector *pCollector){ | |
| 1127 plwDestroy(&pCollector->plw); | |
| 1128 dlwDestroy(&pCollector->dlw); | |
| 1129 dataBufferDestroy(&pCollector->b); | |
| 1130 SCRAMBLE(pCollector); | |
| 1131 sqlite3_free(pCollector); | |
| 1132 } | |
| 1133 | |
| 1134 | |
| 1135 /* Copy the doclist data of iType in pData/nData into *out, trimming | |
| 1136 ** unnecessary data as we go. Only columns matching iColumn are | |
| 1137 ** copied, all columns copied if iColumn is -1. Elements with no | |
| 1138 ** matching columns are dropped. The output is an iOutType doclist. | |
| 1139 */ | |
| 1140 /* NOTE(shess) This code is only valid after all doclists are merged. | |
| 1141 ** If this is run before merges, then doclist items which represent | |
| 1142 ** deletion will be trimmed, and will thus not effect a deletion | |
| 1143 ** during the merge. | |
| 1144 */ | |
| 1145 static int docListTrim(DocListType iType, const char *pData, int nData, | |
| 1146 int iColumn, DocListType iOutType, DataBuffer *out){ | |
| 1147 DLReader dlReader; | |
| 1148 DLWriter dlWriter; | |
| 1149 int rc; | |
| 1150 | |
| 1151 assert( iOutType<=iType ); | |
| 1152 | |
| 1153 rc = dlrInit(&dlReader, iType, pData, nData); | |
| 1154 if( rc!=SQLITE_OK ) return rc; | |
| 1155 dlwInit(&dlWriter, iOutType, out); | |
| 1156 | |
| 1157 while( !dlrAtEnd(&dlReader) ){ | |
| 1158 PLReader plReader; | |
| 1159 PLWriter plWriter; | |
| 1160 int match = 0; | |
| 1161 | |
| 1162 rc = plrInit(&plReader, &dlReader); | |
| 1163 if( rc!=SQLITE_OK ) break; | |
| 1164 | |
| 1165 while( !plrAtEnd(&plReader) ){ | |
| 1166 if( iColumn==-1 || plrColumn(&plReader)==iColumn ){ | |
| 1167 if( !match ){ | |
| 1168 plwInit(&plWriter, &dlWriter, dlrDocid(&dlReader)); | |
| 1169 match = 1; | |
| 1170 } | |
| 1171 plwAdd(&plWriter, plrColumn(&plReader), plrPosition(&plReader), | |
| 1172 plrStartOffset(&plReader), plrEndOffset(&plReader)); | |
| 1173 } | |
| 1174 rc = plrStep(&plReader); | |
| 1175 if( rc!=SQLITE_OK ){ | |
| 1176 plrDestroy(&plReader); | |
| 1177 goto err; | |
| 1178 } | |
| 1179 } | |
| 1180 if( match ){ | |
| 1181 plwTerminate(&plWriter); | |
| 1182 plwDestroy(&plWriter); | |
| 1183 } | |
| 1184 | |
| 1185 plrDestroy(&plReader); | |
| 1186 rc = dlrStep(&dlReader); | |
| 1187 if( rc!=SQLITE_OK ) break; | |
| 1188 } | |
| 1189 err: | |
| 1190 dlwDestroy(&dlWriter); | |
| 1191 dlrDestroy(&dlReader); | |
| 1192 return rc; | |
| 1193 } | |
| 1194 | |
| 1195 /* Used by docListMerge() to keep doclists in the ascending order by | |
| 1196 ** docid, then ascending order by age (so the newest comes first). | |
| 1197 */ | |
| 1198 typedef struct OrderedDLReader { | |
| 1199 DLReader *pReader; | |
| 1200 | |
| 1201 /* TODO(shess) If we assume that docListMerge pReaders is ordered by | |
| 1202 ** age (which we do), then we could use pReader comparisons to break | |
| 1203 ** ties. | |
| 1204 */ | |
| 1205 int idx; | |
| 1206 } OrderedDLReader; | |
| 1207 | |
| 1208 /* Order eof to end, then by docid asc, idx desc. */ | |
| 1209 static int orderedDLReaderCmp(OrderedDLReader *r1, OrderedDLReader *r2){ | |
| 1210 if( dlrAtEnd(r1->pReader) ){ | |
| 1211 if( dlrAtEnd(r2->pReader) ) return 0; /* Both atEnd(). */ | |
| 1212 return 1; /* Only r1 atEnd(). */ | |
| 1213 } | |
| 1214 if( dlrAtEnd(r2->pReader) ) return -1; /* Only r2 atEnd(). */ | |
| 1215 | |
| 1216 if( dlrDocid(r1->pReader)<dlrDocid(r2->pReader) ) return -1; | |
| 1217 if( dlrDocid(r1->pReader)>dlrDocid(r2->pReader) ) return 1; | |
| 1218 | |
| 1219 /* Descending on idx. */ | |
| 1220 return r2->idx-r1->idx; | |
| 1221 } | |
| 1222 | |
| 1223 /* Bubble p[0] to appropriate place in p[1..n-1]. Assumes that | |
| 1224 ** p[1..n-1] is already sorted. | |
| 1225 */ | |
| 1226 /* TODO(shess) Is this frequent enough to warrant a binary search? | |
| 1227 ** Before implementing that, instrument the code to check. In most | |
| 1228 ** current usage, I expect that p[0] will be less than p[1] a very | |
| 1229 ** high proportion of the time. | |
| 1230 */ | |
| 1231 static void orderedDLReaderReorder(OrderedDLReader *p, int n){ | |
| 1232 while( n>1 && orderedDLReaderCmp(p, p+1)>0 ){ | |
| 1233 OrderedDLReader tmp = p[0]; | |
| 1234 p[0] = p[1]; | |
| 1235 p[1] = tmp; | |
| 1236 n--; | |
| 1237 p++; | |
| 1238 } | |
| 1239 } | |
| 1240 | |
| 1241 /* Given an array of doclist readers, merge their doclist elements | |
| 1242 ** into out in sorted order (by docid), dropping elements from older | |
| 1243 ** readers when there is a duplicate docid. pReaders is assumed to be | |
| 1244 ** ordered by age, oldest first. | |
| 1245 */ | |
| 1246 /* TODO(shess) nReaders must be <= MERGE_COUNT. This should probably | |
| 1247 ** be fixed. | |
| 1248 */ | |
| 1249 static int docListMerge(DataBuffer *out, | |
| 1250 DLReader *pReaders, int nReaders){ | |
| 1251 OrderedDLReader readers[MERGE_COUNT]; | |
| 1252 DLWriter writer; | |
| 1253 int i, n; | |
| 1254 const char *pStart = 0; | |
| 1255 int nStart = 0; | |
| 1256 sqlite_int64 iFirstDocid = 0, iLastDocid = 0; | |
| 1257 int rc = SQLITE_OK; | |
| 1258 | |
| 1259 assert( nReaders>0 ); | |
| 1260 if( nReaders==1 ){ | |
| 1261 dataBufferAppend(out, dlrDocData(pReaders), dlrAllDataBytes(pReaders)); | |
| 1262 return SQLITE_OK; | |
| 1263 } | |
| 1264 | |
| 1265 assert( nReaders<=MERGE_COUNT ); | |
| 1266 n = 0; | |
| 1267 for(i=0; i<nReaders; i++){ | |
| 1268 assert( pReaders[i].iType==pReaders[0].iType ); | |
| 1269 readers[i].pReader = pReaders+i; | |
| 1270 readers[i].idx = i; | |
| 1271 n += dlrAllDataBytes(&pReaders[i]); | |
| 1272 } | |
| 1273 /* Conservatively size output to sum of inputs. Output should end | |
| 1274 ** up strictly smaller than input. | |
| 1275 */ | |
| 1276 dataBufferExpand(out, n); | |
| 1277 | |
| 1278 /* Get the readers into sorted order. */ | |
| 1279 while( i-->0 ){ | |
| 1280 orderedDLReaderReorder(readers+i, nReaders-i); | |
| 1281 } | |
| 1282 | |
| 1283 dlwInit(&writer, pReaders[0].iType, out); | |
| 1284 while( !dlrAtEnd(readers[0].pReader) ){ | |
| 1285 sqlite_int64 iDocid = dlrDocid(readers[0].pReader); | |
| 1286 | |
| 1287 /* If this is a continuation of the current buffer to copy, extend | |
| 1288 ** that buffer. memcpy() seems to be more efficient if it has a | |
| 1289 ** lots of data to copy. | |
| 1290 */ | |
| 1291 if( dlrDocData(readers[0].pReader)==pStart+nStart ){ | |
| 1292 nStart += dlrDocDataBytes(readers[0].pReader); | |
| 1293 }else{ | |
| 1294 if( pStart!=0 ){ | |
| 1295 rc = dlwAppend(&writer, pStart, nStart, iFirstDocid, iLastDocid); | |
| 1296 if( rc!=SQLITE_OK ) goto err; | |
| 1297 } | |
| 1298 pStart = dlrDocData(readers[0].pReader); | |
| 1299 nStart = dlrDocDataBytes(readers[0].pReader); | |
| 1300 iFirstDocid = iDocid; | |
| 1301 } | |
| 1302 iLastDocid = iDocid; | |
| 1303 rc = dlrStep(readers[0].pReader); | |
| 1304 if( rc!= SQLITE_OK ) goto err; | |
| 1305 | |
| 1306 /* Drop all of the older elements with the same docid. */ | |
| 1307 for(i=1; i<nReaders && | |
| 1308 !dlrAtEnd(readers[i].pReader) && | |
| 1309 dlrDocid(readers[i].pReader)==iDocid; i++){ | |
| 1310 rc = dlrStep(readers[i].pReader); | |
| 1311 if( rc!=SQLITE_OK ) goto err; | |
| 1312 } | |
| 1313 | |
| 1314 /* Get the readers back into order. */ | |
| 1315 while( i-->0 ){ | |
| 1316 orderedDLReaderReorder(readers+i, nReaders-i); | |
| 1317 } | |
| 1318 } | |
| 1319 | |
| 1320 /* Copy over any remaining elements. */ | |
| 1321 if( nStart>0 ) rc = dlwAppend(&writer, pStart, nStart, iFirstDocid, iLastDocid
); | |
| 1322 err: | |
| 1323 dlwDestroy(&writer); | |
| 1324 return rc; | |
| 1325 } | |
| 1326 | |
| 1327 /* Helper function for posListUnion(). Compares the current position | |
| 1328 ** between left and right, returning as standard C idiom of <0 if | |
| 1329 ** left<right, >0 if left>right, and 0 if left==right. "End" always | |
| 1330 ** compares greater. | |
| 1331 */ | |
| 1332 static int posListCmp(PLReader *pLeft, PLReader *pRight){ | |
| 1333 assert( pLeft->iType==pRight->iType ); | |
| 1334 if( pLeft->iType==DL_DOCIDS ) return 0; | |
| 1335 | |
| 1336 if( plrAtEnd(pLeft) ) return plrAtEnd(pRight) ? 0 : 1; | |
| 1337 if( plrAtEnd(pRight) ) return -1; | |
| 1338 | |
| 1339 if( plrColumn(pLeft)<plrColumn(pRight) ) return -1; | |
| 1340 if( plrColumn(pLeft)>plrColumn(pRight) ) return 1; | |
| 1341 | |
| 1342 if( plrPosition(pLeft)<plrPosition(pRight) ) return -1; | |
| 1343 if( plrPosition(pLeft)>plrPosition(pRight) ) return 1; | |
| 1344 if( pLeft->iType==DL_POSITIONS ) return 0; | |
| 1345 | |
| 1346 if( plrStartOffset(pLeft)<plrStartOffset(pRight) ) return -1; | |
| 1347 if( plrStartOffset(pLeft)>plrStartOffset(pRight) ) return 1; | |
| 1348 | |
| 1349 if( plrEndOffset(pLeft)<plrEndOffset(pRight) ) return -1; | |
| 1350 if( plrEndOffset(pLeft)>plrEndOffset(pRight) ) return 1; | |
| 1351 | |
| 1352 return 0; | |
| 1353 } | |
| 1354 | |
| 1355 /* Write the union of position lists in pLeft and pRight to pOut. | |
| 1356 ** "Union" in this case meaning "All unique position tuples". Should | |
| 1357 ** work with any doclist type, though both inputs and the output | |
| 1358 ** should be the same type. | |
| 1359 */ | |
| 1360 static int posListUnion(DLReader *pLeft, DLReader *pRight, DLWriter *pOut){ | |
| 1361 PLReader left, right; | |
| 1362 PLWriter writer; | |
| 1363 int rc; | |
| 1364 | |
| 1365 assert( dlrDocid(pLeft)==dlrDocid(pRight) ); | |
| 1366 assert( pLeft->iType==pRight->iType ); | |
| 1367 assert( pLeft->iType==pOut->iType ); | |
| 1368 | |
| 1369 rc = plrInit(&left, pLeft); | |
| 1370 if( rc!=SQLITE_OK ) return rc; | |
| 1371 rc = plrInit(&right, pRight); | |
| 1372 if( rc!=SQLITE_OK ){ | |
| 1373 plrDestroy(&left); | |
| 1374 return rc; | |
| 1375 } | |
| 1376 plwInit(&writer, pOut, dlrDocid(pLeft)); | |
| 1377 | |
| 1378 while( !plrAtEnd(&left) || !plrAtEnd(&right) ){ | |
| 1379 int c = posListCmp(&left, &right); | |
| 1380 if( c<0 ){ | |
| 1381 plwCopy(&writer, &left); | |
| 1382 rc = plrStep(&left); | |
| 1383 if( rc!=SQLITE_OK ) break; | |
| 1384 }else if( c>0 ){ | |
| 1385 plwCopy(&writer, &right); | |
| 1386 rc = plrStep(&right); | |
| 1387 if( rc!=SQLITE_OK ) break; | |
| 1388 }else{ | |
| 1389 plwCopy(&writer, &left); | |
| 1390 rc = plrStep(&left); | |
| 1391 if( rc!=SQLITE_OK ) break; | |
| 1392 rc = plrStep(&right); | |
| 1393 if( rc!=SQLITE_OK ) break; | |
| 1394 } | |
| 1395 } | |
| 1396 | |
| 1397 plwTerminate(&writer); | |
| 1398 plwDestroy(&writer); | |
| 1399 plrDestroy(&left); | |
| 1400 plrDestroy(&right); | |
| 1401 return rc; | |
| 1402 } | |
| 1403 | |
| 1404 /* Write the union of doclists in pLeft and pRight to pOut. For | |
| 1405 ** docids in common between the inputs, the union of the position | |
| 1406 ** lists is written. Inputs and outputs are always type DL_DEFAULT. | |
| 1407 */ | |
| 1408 static int docListUnion( | |
| 1409 const char *pLeft, int nLeft, | |
| 1410 const char *pRight, int nRight, | |
| 1411 DataBuffer *pOut /* Write the combined doclist here */ | |
| 1412 ){ | |
| 1413 DLReader left, right; | |
| 1414 DLWriter writer; | |
| 1415 int rc; | |
| 1416 | |
| 1417 if( nLeft==0 ){ | |
| 1418 if( nRight!=0) dataBufferAppend(pOut, pRight, nRight); | |
| 1419 return SQLITE_OK; | |
| 1420 } | |
| 1421 if( nRight==0 ){ | |
| 1422 dataBufferAppend(pOut, pLeft, nLeft); | |
| 1423 return SQLITE_OK; | |
| 1424 } | |
| 1425 | |
| 1426 rc = dlrInit(&left, DL_DEFAULT, pLeft, nLeft); | |
| 1427 if( rc!=SQLITE_OK ) return rc; | |
| 1428 rc = dlrInit(&right, DL_DEFAULT, pRight, nRight); | |
| 1429 if( rc!=SQLITE_OK){ | |
| 1430 dlrDestroy(&left); | |
| 1431 return rc; | |
| 1432 } | |
| 1433 dlwInit(&writer, DL_DEFAULT, pOut); | |
| 1434 | |
| 1435 while( !dlrAtEnd(&left) || !dlrAtEnd(&right) ){ | |
| 1436 if( dlrAtEnd(&right) ){ | |
| 1437 rc = dlwCopy(&writer, &left); | |
| 1438 if( rc!=SQLITE_OK) break; | |
| 1439 rc = dlrStep(&left); | |
| 1440 if( rc!=SQLITE_OK) break; | |
| 1441 }else if( dlrAtEnd(&left) ){ | |
| 1442 rc = dlwCopy(&writer, &right); | |
| 1443 if( rc!=SQLITE_OK ) break; | |
| 1444 rc = dlrStep(&right); | |
| 1445 if( rc!=SQLITE_OK ) break; | |
| 1446 }else if( dlrDocid(&left)<dlrDocid(&right) ){ | |
| 1447 rc = dlwCopy(&writer, &left); | |
| 1448 if( rc!=SQLITE_OK ) break; | |
| 1449 rc = dlrStep(&left); | |
| 1450 if( rc!=SQLITE_OK ) break; | |
| 1451 }else if( dlrDocid(&left)>dlrDocid(&right) ){ | |
| 1452 rc = dlwCopy(&writer, &right); | |
| 1453 if( rc!=SQLITE_OK ) break; | |
| 1454 rc = dlrStep(&right); | |
| 1455 if( rc!=SQLITE_OK ) break; | |
| 1456 }else{ | |
| 1457 rc = posListUnion(&left, &right, &writer); | |
| 1458 if( rc!=SQLITE_OK ) break; | |
| 1459 rc = dlrStep(&left); | |
| 1460 if( rc!=SQLITE_OK ) break; | |
| 1461 rc = dlrStep(&right); | |
| 1462 if( rc!=SQLITE_OK ) break; | |
| 1463 } | |
| 1464 } | |
| 1465 | |
| 1466 dlrDestroy(&left); | |
| 1467 dlrDestroy(&right); | |
| 1468 dlwDestroy(&writer); | |
| 1469 return rc; | |
| 1470 } | |
| 1471 | |
| 1472 /* | |
| 1473 ** This function is used as part of the implementation of phrase and | |
| 1474 ** NEAR matching. | |
| 1475 ** | |
| 1476 ** pLeft and pRight are DLReaders positioned to the same docid in | |
| 1477 ** lists of type DL_POSITION. This function writes an entry to the | |
| 1478 ** DLWriter pOut for each position in pRight that is less than | |
| 1479 ** (nNear+1) greater (but not equal to or smaller) than a position | |
| 1480 ** in pLeft. For example, if nNear is 0, and the positions contained | |
| 1481 ** by pLeft and pRight are: | |
| 1482 ** | |
| 1483 ** pLeft: 5 10 15 20 | |
| 1484 ** pRight: 6 9 17 21 | |
| 1485 ** | |
| 1486 ** then the docid is added to pOut. If pOut is of type DL_POSITIONS, | |
| 1487 ** then a positionids "6" and "21" are also added to pOut. | |
| 1488 ** | |
| 1489 ** If boolean argument isSaveLeft is true, then positionids are copied | |
| 1490 ** from pLeft instead of pRight. In the example above, the positions "5" | |
| 1491 ** and "20" would be added instead of "6" and "21". | |
| 1492 */ | |
| 1493 static int posListPhraseMerge( | |
| 1494 DLReader *pLeft, | |
| 1495 DLReader *pRight, | |
| 1496 int nNear, | |
| 1497 int isSaveLeft, | |
| 1498 DLWriter *pOut | |
| 1499 ){ | |
| 1500 PLReader left, right; | |
| 1501 PLWriter writer; | |
| 1502 int match = 0; | |
| 1503 int rc; | |
| 1504 | |
| 1505 assert( dlrDocid(pLeft)==dlrDocid(pRight) ); | |
| 1506 assert( pOut->iType!=DL_POSITIONS_OFFSETS ); | |
| 1507 | |
| 1508 rc = plrInit(&left, pLeft); | |
| 1509 if( rc!=SQLITE_OK ) return rc; | |
| 1510 rc = plrInit(&right, pRight); | |
| 1511 if( rc!=SQLITE_OK ){ | |
| 1512 plrDestroy(&left); | |
| 1513 return rc; | |
| 1514 } | |
| 1515 | |
| 1516 while( !plrAtEnd(&left) && !plrAtEnd(&right) ){ | |
| 1517 if( plrColumn(&left)<plrColumn(&right) ){ | |
| 1518 rc = plrStep(&left); | |
| 1519 if( rc!=SQLITE_OK ) break; | |
| 1520 }else if( plrColumn(&left)>plrColumn(&right) ){ | |
| 1521 rc = plrStep(&right); | |
| 1522 if( rc!=SQLITE_OK ) break; | |
| 1523 }else if( plrPosition(&left)>=plrPosition(&right) ){ | |
| 1524 rc = plrStep(&right); | |
| 1525 if( rc!=SQLITE_OK ) break; | |
| 1526 }else{ | |
| 1527 if( (plrPosition(&right)-plrPosition(&left))<=(nNear+1) ){ | |
| 1528 if( !match ){ | |
| 1529 plwInit(&writer, pOut, dlrDocid(pLeft)); | |
| 1530 match = 1; | |
| 1531 } | |
| 1532 if( !isSaveLeft ){ | |
| 1533 plwAdd(&writer, plrColumn(&right), plrPosition(&right), 0, 0); | |
| 1534 }else{ | |
| 1535 plwAdd(&writer, plrColumn(&left), plrPosition(&left), 0, 0); | |
| 1536 } | |
| 1537 rc = plrStep(&right); | |
| 1538 if( rc!=SQLITE_OK ) break; | |
| 1539 }else{ | |
| 1540 rc = plrStep(&left); | |
| 1541 if( rc!=SQLITE_OK ) break; | |
| 1542 } | |
| 1543 } | |
| 1544 } | |
| 1545 | |
| 1546 if( match ){ | |
| 1547 plwTerminate(&writer); | |
| 1548 plwDestroy(&writer); | |
| 1549 } | |
| 1550 | |
| 1551 plrDestroy(&left); | |
| 1552 plrDestroy(&right); | |
| 1553 return rc; | |
| 1554 } | |
| 1555 | |
| 1556 /* | |
| 1557 ** Compare the values pointed to by the PLReaders passed as arguments. | |
| 1558 ** Return -1 if the value pointed to by pLeft is considered less than | |
| 1559 ** the value pointed to by pRight, +1 if it is considered greater | |
| 1560 ** than it, or 0 if it is equal. i.e. | |
| 1561 ** | |
| 1562 ** (*pLeft - *pRight) | |
| 1563 ** | |
| 1564 ** A PLReader that is in the EOF condition is considered greater than | |
| 1565 ** any other. If neither argument is in EOF state, the return value of | |
| 1566 ** plrColumn() is used. If the plrColumn() values are equal, the | |
| 1567 ** comparison is on the basis of plrPosition(). | |
| 1568 */ | |
| 1569 static int plrCompare(PLReader *pLeft, PLReader *pRight){ | |
| 1570 assert(!plrAtEnd(pLeft) || !plrAtEnd(pRight)); | |
| 1571 | |
| 1572 if( plrAtEnd(pRight) || plrAtEnd(pLeft) ){ | |
| 1573 return (plrAtEnd(pRight) ? -1 : 1); | |
| 1574 } | |
| 1575 if( plrColumn(pLeft)!=plrColumn(pRight) ){ | |
| 1576 return ((plrColumn(pLeft)<plrColumn(pRight)) ? -1 : 1); | |
| 1577 } | |
| 1578 if( plrPosition(pLeft)!=plrPosition(pRight) ){ | |
| 1579 return ((plrPosition(pLeft)<plrPosition(pRight)) ? -1 : 1); | |
| 1580 } | |
| 1581 return 0; | |
| 1582 } | |
| 1583 | |
| 1584 /* We have two doclists with positions: pLeft and pRight. Depending | |
| 1585 ** on the value of the nNear parameter, perform either a phrase | |
| 1586 ** intersection (if nNear==0) or a NEAR intersection (if nNear>0) | |
| 1587 ** and write the results into pOut. | |
| 1588 ** | |
| 1589 ** A phrase intersection means that two documents only match | |
| 1590 ** if pLeft.iPos+1==pRight.iPos. | |
| 1591 ** | |
| 1592 ** A NEAR intersection means that two documents only match if | |
| 1593 ** (abs(pLeft.iPos-pRight.iPos)<nNear). | |
| 1594 ** | |
| 1595 ** If a NEAR intersection is requested, then the nPhrase argument should | |
| 1596 ** be passed the number of tokens in the two operands to the NEAR operator | |
| 1597 ** combined. For example: | |
| 1598 ** | |
| 1599 ** Query syntax nPhrase | |
| 1600 ** ------------------------------------ | |
| 1601 ** "A B C" NEAR "D E" 5 | |
| 1602 ** A NEAR B 2 | |
| 1603 ** | |
| 1604 ** iType controls the type of data written to pOut. If iType is | |
| 1605 ** DL_POSITIONS, the positions are those from pRight. | |
| 1606 */ | |
| 1607 static int docListPhraseMerge( | |
| 1608 const char *pLeft, int nLeft, | |
| 1609 const char *pRight, int nRight, | |
| 1610 int nNear, /* 0 for a phrase merge, non-zero for a NEAR merge */ | |
| 1611 int nPhrase, /* Number of tokens in left+right operands to NEAR */ | |
| 1612 DocListType iType, /* Type of doclist to write to pOut */ | |
| 1613 DataBuffer *pOut /* Write the combined doclist here */ | |
| 1614 ){ | |
| 1615 DLReader left, right; | |
| 1616 DLWriter writer; | |
| 1617 int rc; | |
| 1618 | |
| 1619 /* These two buffers are used in the 'while', but are declared here | |
| 1620 ** to simplify error-handling. | |
| 1621 */ | |
| 1622 DataBuffer one = {0, 0, 0}; | |
| 1623 DataBuffer two = {0, 0, 0}; | |
| 1624 | |
| 1625 if( nLeft==0 || nRight==0 ) return SQLITE_OK; | |
| 1626 | |
| 1627 assert( iType!=DL_POSITIONS_OFFSETS ); | |
| 1628 | |
| 1629 rc = dlrInit(&left, DL_POSITIONS, pLeft, nLeft); | |
| 1630 if( rc!=SQLITE_OK ) return rc; | |
| 1631 rc = dlrInit(&right, DL_POSITIONS, pRight, nRight); | |
| 1632 if( rc!=SQLITE_OK ){ | |
| 1633 dlrDestroy(&left); | |
| 1634 return rc; | |
| 1635 } | |
| 1636 dlwInit(&writer, iType, pOut); | |
| 1637 | |
| 1638 while( !dlrAtEnd(&left) && !dlrAtEnd(&right) ){ | |
| 1639 if( dlrDocid(&left)<dlrDocid(&right) ){ | |
| 1640 rc = dlrStep(&left); | |
| 1641 if( rc!=SQLITE_OK ) goto err; | |
| 1642 }else if( dlrDocid(&right)<dlrDocid(&left) ){ | |
| 1643 rc = dlrStep(&right); | |
| 1644 if( rc!=SQLITE_OK ) goto err; | |
| 1645 }else{ | |
| 1646 if( nNear==0 ){ | |
| 1647 rc = posListPhraseMerge(&left, &right, 0, 0, &writer); | |
| 1648 if( rc!=SQLITE_OK ) goto err; | |
| 1649 }else{ | |
| 1650 /* This case occurs when two terms (simple terms or phrases) are | |
| 1651 * connected by a NEAR operator, span (nNear+1). i.e. | |
| 1652 * | |
| 1653 * '"terrible company" NEAR widget' | |
| 1654 */ | |
| 1655 DLWriter dlwriter2; | |
| 1656 DLReader dr1 = {0, 0, 0, 0, 0}; | |
| 1657 DLReader dr2 = {0, 0, 0, 0, 0}; | |
| 1658 | |
| 1659 dlwInit(&dlwriter2, iType, &one); | |
| 1660 rc = posListPhraseMerge(&right, &left, nNear-3+nPhrase, 1, &dlwriter2); | |
| 1661 if( rc!=SQLITE_OK ) goto err; | |
| 1662 dlwInit(&dlwriter2, iType, &two); | |
| 1663 rc = posListPhraseMerge(&left, &right, nNear-1, 0, &dlwriter2); | |
| 1664 if( rc!=SQLITE_OK ) goto err; | |
| 1665 | |
| 1666 if( one.nData){ | |
| 1667 rc = dlrInit(&dr1, iType, one.pData, one.nData); | |
| 1668 if( rc!=SQLITE_OK ) goto err; | |
| 1669 } | |
| 1670 if( two.nData){ | |
| 1671 rc = dlrInit(&dr2, iType, two.pData, two.nData); | |
| 1672 if( rc!=SQLITE_OK ) goto err; | |
| 1673 } | |
| 1674 | |
| 1675 if( !dlrAtEnd(&dr1) || !dlrAtEnd(&dr2) ){ | |
| 1676 PLReader pr1 = {0}; | |
| 1677 PLReader pr2 = {0}; | |
| 1678 | |
| 1679 PLWriter plwriter; | |
| 1680 plwInit(&plwriter, &writer, dlrDocid(dlrAtEnd(&dr1)?&dr2:&dr1)); | |
| 1681 | |
| 1682 if( one.nData ){ | |
| 1683 rc = plrInit(&pr1, &dr1); | |
| 1684 if( rc!=SQLITE_OK ) goto err; | |
| 1685 } | |
| 1686 if( two.nData ){ | |
| 1687 rc = plrInit(&pr2, &dr2); | |
| 1688 if( rc!=SQLITE_OK ) goto err; | |
| 1689 } | |
| 1690 while( !plrAtEnd(&pr1) || !plrAtEnd(&pr2) ){ | |
| 1691 int iCompare = plrCompare(&pr1, &pr2); | |
| 1692 switch( iCompare ){ | |
| 1693 case -1: | |
| 1694 plwCopy(&plwriter, &pr1); | |
| 1695 rc = plrStep(&pr1); | |
| 1696 if( rc!=SQLITE_OK ) goto err; | |
| 1697 break; | |
| 1698 case 1: | |
| 1699 plwCopy(&plwriter, &pr2); | |
| 1700 rc = plrStep(&pr2); | |
| 1701 if( rc!=SQLITE_OK ) goto err; | |
| 1702 break; | |
| 1703 case 0: | |
| 1704 plwCopy(&plwriter, &pr1); | |
| 1705 rc = plrStep(&pr1); | |
| 1706 if( rc!=SQLITE_OK ) goto err; | |
| 1707 rc = plrStep(&pr2); | |
| 1708 if( rc!=SQLITE_OK ) goto err; | |
| 1709 break; | |
| 1710 } | |
| 1711 } | |
| 1712 plwTerminate(&plwriter); | |
| 1713 } | |
| 1714 dataBufferReset(&one); | |
| 1715 dataBufferReset(&two); | |
| 1716 } | |
| 1717 rc = dlrStep(&left); | |
| 1718 if( rc!=SQLITE_OK ) goto err; | |
| 1719 rc = dlrStep(&right); | |
| 1720 if( rc!=SQLITE_OK ) goto err; | |
| 1721 } | |
| 1722 } | |
| 1723 | |
| 1724 err: | |
| 1725 dataBufferDestroy(&one); | |
| 1726 dataBufferDestroy(&two); | |
| 1727 dlrDestroy(&left); | |
| 1728 dlrDestroy(&right); | |
| 1729 dlwDestroy(&writer); | |
| 1730 return rc; | |
| 1731 } | |
| 1732 | |
| 1733 /* We have two DL_DOCIDS doclists: pLeft and pRight. | |
| 1734 ** Write the intersection of these two doclists into pOut as a | |
| 1735 ** DL_DOCIDS doclist. | |
| 1736 */ | |
| 1737 static int docListAndMerge( | |
| 1738 const char *pLeft, int nLeft, | |
| 1739 const char *pRight, int nRight, | |
| 1740 DataBuffer *pOut /* Write the combined doclist here */ | |
| 1741 ){ | |
| 1742 DLReader left, right; | |
| 1743 DLWriter writer; | |
| 1744 int rc; | |
| 1745 | |
| 1746 if( nLeft==0 || nRight==0 ) return SQLITE_OK; | |
| 1747 | |
| 1748 rc = dlrInit(&left, DL_DOCIDS, pLeft, nLeft); | |
| 1749 if( rc!=SQLITE_OK ) return rc; | |
| 1750 rc = dlrInit(&right, DL_DOCIDS, pRight, nRight); | |
| 1751 if( rc!=SQLITE_OK ){ | |
| 1752 dlrDestroy(&left); | |
| 1753 return rc; | |
| 1754 } | |
| 1755 dlwInit(&writer, DL_DOCIDS, pOut); | |
| 1756 | |
| 1757 while( !dlrAtEnd(&left) && !dlrAtEnd(&right) ){ | |
| 1758 if( dlrDocid(&left)<dlrDocid(&right) ){ | |
| 1759 rc = dlrStep(&left); | |
| 1760 if( rc!=SQLITE_OK ) break; | |
| 1761 }else if( dlrDocid(&right)<dlrDocid(&left) ){ | |
| 1762 rc = dlrStep(&right); | |
| 1763 if( rc!=SQLITE_OK ) break; | |
| 1764 }else{ | |
| 1765 dlwAdd(&writer, dlrDocid(&left)); | |
| 1766 rc = dlrStep(&left); | |
| 1767 if( rc!=SQLITE_OK ) break; | |
| 1768 rc = dlrStep(&right); | |
| 1769 if( rc!=SQLITE_OK ) break; | |
| 1770 } | |
| 1771 } | |
| 1772 | |
| 1773 dlrDestroy(&left); | |
| 1774 dlrDestroy(&right); | |
| 1775 dlwDestroy(&writer); | |
| 1776 return rc; | |
| 1777 } | |
| 1778 | |
| 1779 /* We have two DL_DOCIDS doclists: pLeft and pRight. | |
| 1780 ** Write the union of these two doclists into pOut as a | |
| 1781 ** DL_DOCIDS doclist. | |
| 1782 */ | |
| 1783 static int docListOrMerge( | |
| 1784 const char *pLeft, int nLeft, | |
| 1785 const char *pRight, int nRight, | |
| 1786 DataBuffer *pOut /* Write the combined doclist here */ | |
| 1787 ){ | |
| 1788 DLReader left, right; | |
| 1789 DLWriter writer; | |
| 1790 int rc; | |
| 1791 | |
| 1792 if( nLeft==0 ){ | |
| 1793 if( nRight!=0 ) dataBufferAppend(pOut, pRight, nRight); | |
| 1794 return SQLITE_OK; | |
| 1795 } | |
| 1796 if( nRight==0 ){ | |
| 1797 dataBufferAppend(pOut, pLeft, nLeft); | |
| 1798 return SQLITE_OK; | |
| 1799 } | |
| 1800 | |
| 1801 rc = dlrInit(&left, DL_DOCIDS, pLeft, nLeft); | |
| 1802 if( rc!=SQLITE_OK ) return rc; | |
| 1803 rc = dlrInit(&right, DL_DOCIDS, pRight, nRight); | |
| 1804 if( rc!=SQLITE_OK ){ | |
| 1805 dlrDestroy(&left); | |
| 1806 return rc; | |
| 1807 } | |
| 1808 dlwInit(&writer, DL_DOCIDS, pOut); | |
| 1809 | |
| 1810 while( !dlrAtEnd(&left) || !dlrAtEnd(&right) ){ | |
| 1811 if( dlrAtEnd(&right) ){ | |
| 1812 dlwAdd(&writer, dlrDocid(&left)); | |
| 1813 rc = dlrStep(&left); | |
| 1814 if( rc!=SQLITE_OK ) break; | |
| 1815 }else if( dlrAtEnd(&left) ){ | |
| 1816 dlwAdd(&writer, dlrDocid(&right)); | |
| 1817 rc = dlrStep(&right); | |
| 1818 if( rc!=SQLITE_OK ) break; | |
| 1819 }else if( dlrDocid(&left)<dlrDocid(&right) ){ | |
| 1820 dlwAdd(&writer, dlrDocid(&left)); | |
| 1821 rc = dlrStep(&left); | |
| 1822 if( rc!=SQLITE_OK ) break; | |
| 1823 }else if( dlrDocid(&right)<dlrDocid(&left) ){ | |
| 1824 dlwAdd(&writer, dlrDocid(&right)); | |
| 1825 rc = dlrStep(&right); | |
| 1826 if( rc!=SQLITE_OK ) break; | |
| 1827 }else{ | |
| 1828 dlwAdd(&writer, dlrDocid(&left)); | |
| 1829 rc = dlrStep(&left); | |
| 1830 if( rc!=SQLITE_OK ) break; | |
| 1831 rc = dlrStep(&right); | |
| 1832 if( rc!=SQLITE_OK ) break; | |
| 1833 } | |
| 1834 } | |
| 1835 | |
| 1836 dlrDestroy(&left); | |
| 1837 dlrDestroy(&right); | |
| 1838 dlwDestroy(&writer); | |
| 1839 return rc; | |
| 1840 } | |
| 1841 | |
| 1842 /* We have two DL_DOCIDS doclists: pLeft and pRight. | |
| 1843 ** Write into pOut as DL_DOCIDS doclist containing all documents that | |
| 1844 ** occur in pLeft but not in pRight. | |
| 1845 */ | |
| 1846 static int docListExceptMerge( | |
| 1847 const char *pLeft, int nLeft, | |
| 1848 const char *pRight, int nRight, | |
| 1849 DataBuffer *pOut /* Write the combined doclist here */ | |
| 1850 ){ | |
| 1851 DLReader left, right; | |
| 1852 DLWriter writer; | |
| 1853 int rc; | |
| 1854 | |
| 1855 if( nLeft==0 ) return SQLITE_OK; | |
| 1856 if( nRight==0 ){ | |
| 1857 dataBufferAppend(pOut, pLeft, nLeft); | |
| 1858 return SQLITE_OK; | |
| 1859 } | |
| 1860 | |
| 1861 rc = dlrInit(&left, DL_DOCIDS, pLeft, nLeft); | |
| 1862 if( rc!=SQLITE_OK ) return rc; | |
| 1863 rc = dlrInit(&right, DL_DOCIDS, pRight, nRight); | |
| 1864 if( rc!=SQLITE_OK ){ | |
| 1865 dlrDestroy(&left); | |
| 1866 return rc; | |
| 1867 } | |
| 1868 dlwInit(&writer, DL_DOCIDS, pOut); | |
| 1869 | |
| 1870 while( !dlrAtEnd(&left) ){ | |
| 1871 while( !dlrAtEnd(&right) && dlrDocid(&right)<dlrDocid(&left) ){ | |
| 1872 rc = dlrStep(&right); | |
| 1873 if( rc!=SQLITE_OK ) goto err; | |
| 1874 } | |
| 1875 if( dlrAtEnd(&right) || dlrDocid(&left)<dlrDocid(&right) ){ | |
| 1876 dlwAdd(&writer, dlrDocid(&left)); | |
| 1877 } | |
| 1878 rc = dlrStep(&left); | |
| 1879 if( rc!=SQLITE_OK ) break; | |
| 1880 } | |
| 1881 | |
| 1882 err: | |
| 1883 dlrDestroy(&left); | |
| 1884 dlrDestroy(&right); | |
| 1885 dlwDestroy(&writer); | |
| 1886 return rc; | |
| 1887 } | |
| 1888 | |
| 1889 static char *string_dup_n(const char *s, int n){ | |
| 1890 char *str = sqlite3_malloc(n + 1); | |
| 1891 memcpy(str, s, n); | |
| 1892 str[n] = '\0'; | |
| 1893 return str; | |
| 1894 } | |
| 1895 | |
| 1896 /* Duplicate a string; the caller must free() the returned string. | |
| 1897 * (We don't use strdup() since it is not part of the standard C library and | |
| 1898 * may not be available everywhere.) */ | |
| 1899 static char *string_dup(const char *s){ | |
| 1900 return string_dup_n(s, strlen(s)); | |
| 1901 } | |
| 1902 | |
| 1903 /* Format a string, replacing each occurrence of the % character with | |
| 1904 * zDb.zName. This may be more convenient than sqlite_mprintf() | |
| 1905 * when one string is used repeatedly in a format string. | |
| 1906 * The caller must free() the returned string. */ | |
| 1907 static char *string_format(const char *zFormat, | |
| 1908 const char *zDb, const char *zName){ | |
| 1909 const char *p; | |
| 1910 size_t len = 0; | |
| 1911 size_t nDb = strlen(zDb); | |
| 1912 size_t nName = strlen(zName); | |
| 1913 size_t nFullTableName = nDb+1+nName; | |
| 1914 char *result; | |
| 1915 char *r; | |
| 1916 | |
| 1917 /* first compute length needed */ | |
| 1918 for(p = zFormat ; *p ; ++p){ | |
| 1919 len += (*p=='%' ? nFullTableName : 1); | |
| 1920 } | |
| 1921 len += 1; /* for null terminator */ | |
| 1922 | |
| 1923 r = result = sqlite3_malloc(len); | |
| 1924 for(p = zFormat; *p; ++p){ | |
| 1925 if( *p=='%' ){ | |
| 1926 memcpy(r, zDb, nDb); | |
| 1927 r += nDb; | |
| 1928 *r++ = '.'; | |
| 1929 memcpy(r, zName, nName); | |
| 1930 r += nName; | |
| 1931 } else { | |
| 1932 *r++ = *p; | |
| 1933 } | |
| 1934 } | |
| 1935 *r++ = '\0'; | |
| 1936 assert( r == result + len ); | |
| 1937 return result; | |
| 1938 } | |
| 1939 | |
| 1940 static int sql_exec(sqlite3 *db, const char *zDb, const char *zName, | |
| 1941 const char *zFormat){ | |
| 1942 char *zCommand = string_format(zFormat, zDb, zName); | |
| 1943 int rc; | |
| 1944 FTSTRACE(("FTS3 sql: %s\n", zCommand)); | |
| 1945 rc = sqlite3_exec(db, zCommand, NULL, 0, NULL); | |
| 1946 sqlite3_free(zCommand); | |
| 1947 return rc; | |
| 1948 } | |
| 1949 | |
| 1950 static int sql_prepare(sqlite3 *db, const char *zDb, const char *zName, | |
| 1951 sqlite3_stmt **ppStmt, const char *zFormat){ | |
| 1952 char *zCommand = string_format(zFormat, zDb, zName); | |
| 1953 int rc; | |
| 1954 FTSTRACE(("FTS3 prepare: %s\n", zCommand)); | |
| 1955 rc = sqlite3_prepare_v2(db, zCommand, -1, ppStmt, NULL); | |
| 1956 sqlite3_free(zCommand); | |
| 1957 return rc; | |
| 1958 } | |
| 1959 | |
| 1960 /* end utility functions */ | |
| 1961 | |
| 1962 /* Forward reference */ | |
| 1963 typedef struct fulltext_vtab fulltext_vtab; | |
| 1964 | |
| 1965 /* | |
| 1966 ** An instance of the following structure keeps track of generated | |
| 1967 ** matching-word offset information and snippets. | |
| 1968 */ | |
| 1969 typedef struct Snippet { | |
| 1970 int nMatch; /* Total number of matches */ | |
| 1971 int nAlloc; /* Space allocated for aMatch[] */ | |
| 1972 struct snippetMatch { /* One entry for each matching term */ | |
| 1973 char snStatus; /* Status flag for use while constructing snippets */ | |
| 1974 short int iCol; /* The column that contains the match */ | |
| 1975 short int iTerm; /* The index in Query.pTerms[] of the matching term */ | |
| 1976 int iToken; /* The index of the matching document token */ | |
| 1977 short int nByte; /* Number of bytes in the term */ | |
| 1978 int iStart; /* The offset to the first character of the term */ | |
| 1979 } *aMatch; /* Points to space obtained from malloc */ | |
| 1980 char *zOffset; /* Text rendering of aMatch[] */ | |
| 1981 int nOffset; /* strlen(zOffset) */ | |
| 1982 char *zSnippet; /* Snippet text */ | |
| 1983 int nSnippet; /* strlen(zSnippet) */ | |
| 1984 } Snippet; | |
| 1985 | |
| 1986 | |
| 1987 typedef enum QueryType { | |
| 1988 QUERY_GENERIC, /* table scan */ | |
| 1989 QUERY_DOCID, /* lookup by docid */ | |
| 1990 QUERY_FULLTEXT /* QUERY_FULLTEXT + [i] is a full-text search for column i*/ | |
| 1991 } QueryType; | |
| 1992 | |
| 1993 typedef enum fulltext_statement { | |
| 1994 CONTENT_INSERT_STMT, | |
| 1995 CONTENT_SELECT_STMT, | |
| 1996 CONTENT_UPDATE_STMT, | |
| 1997 CONTENT_DELETE_STMT, | |
| 1998 CONTENT_EXISTS_STMT, | |
| 1999 | |
| 2000 BLOCK_INSERT_STMT, | |
| 2001 BLOCK_SELECT_STMT, | |
| 2002 BLOCK_DELETE_STMT, | |
| 2003 BLOCK_DELETE_ALL_STMT, | |
| 2004 | |
| 2005 SEGDIR_MAX_INDEX_STMT, | |
| 2006 SEGDIR_SET_STMT, | |
| 2007 SEGDIR_SELECT_LEVEL_STMT, | |
| 2008 SEGDIR_SPAN_STMT, | |
| 2009 SEGDIR_DELETE_STMT, | |
| 2010 SEGDIR_SELECT_SEGMENT_STMT, | |
| 2011 SEGDIR_SELECT_ALL_STMT, | |
| 2012 SEGDIR_DELETE_ALL_STMT, | |
| 2013 SEGDIR_COUNT_STMT, | |
| 2014 | |
| 2015 MAX_STMT /* Always at end! */ | |
| 2016 } fulltext_statement; | |
| 2017 | |
| 2018 /* These must exactly match the enum above. */ | |
| 2019 /* TODO(shess): Is there some risk that a statement will be used in two | |
| 2020 ** cursors at once, e.g. if a query joins a virtual table to itself? | |
| 2021 ** If so perhaps we should move some of these to the cursor object. | |
| 2022 */ | |
| 2023 static const char *const fulltext_zStatement[MAX_STMT] = { | |
| 2024 /* CONTENT_INSERT */ NULL, /* generated in contentInsertStatement() */ | |
| 2025 /* CONTENT_SELECT */ NULL, /* generated in contentSelectStatement() */ | |
| 2026 /* CONTENT_UPDATE */ NULL, /* generated in contentUpdateStatement() */ | |
| 2027 /* CONTENT_DELETE */ "delete from %_content where docid = ?", | |
| 2028 /* CONTENT_EXISTS */ "select docid from %_content limit 1", | |
| 2029 | |
| 2030 /* BLOCK_INSERT */ | |
| 2031 "insert into %_segments (blockid, block) values (null, ?)", | |
| 2032 /* BLOCK_SELECT */ "select block from %_segments where blockid = ?", | |
| 2033 /* BLOCK_DELETE */ "delete from %_segments where blockid between ? and ?", | |
| 2034 /* BLOCK_DELETE_ALL */ "delete from %_segments", | |
| 2035 | |
| 2036 /* SEGDIR_MAX_INDEX */ "select max(idx) from %_segdir where level = ?", | |
| 2037 /* SEGDIR_SET */ "insert into %_segdir values (?, ?, ?, ?, ?, ?)", | |
| 2038 /* SEGDIR_SELECT_LEVEL */ | |
| 2039 "select start_block, leaves_end_block, root, idx from %_segdir " | |
| 2040 " where level = ? order by idx", | |
| 2041 /* SEGDIR_SPAN */ | |
| 2042 "select min(start_block), max(end_block) from %_segdir " | |
| 2043 " where level = ? and start_block <> 0", | |
| 2044 /* SEGDIR_DELETE */ "delete from %_segdir where level = ?", | |
| 2045 | |
| 2046 /* NOTE(shess): The first three results of the following two | |
| 2047 ** statements must match. | |
| 2048 */ | |
| 2049 /* SEGDIR_SELECT_SEGMENT */ | |
| 2050 "select start_block, leaves_end_block, root from %_segdir " | |
| 2051 " where level = ? and idx = ?", | |
| 2052 /* SEGDIR_SELECT_ALL */ | |
| 2053 "select start_block, leaves_end_block, root from %_segdir " | |
| 2054 " order by level desc, idx asc", | |
| 2055 /* SEGDIR_DELETE_ALL */ "delete from %_segdir", | |
| 2056 /* SEGDIR_COUNT */ "select count(*), ifnull(max(level),0) from %_segdir", | |
| 2057 }; | |
| 2058 | |
| 2059 /* | |
| 2060 ** A connection to a fulltext index is an instance of the following | |
| 2061 ** structure. The xCreate and xConnect methods create an instance | |
| 2062 ** of this structure and xDestroy and xDisconnect free that instance. | |
| 2063 ** All other methods receive a pointer to the structure as one of their | |
| 2064 ** arguments. | |
| 2065 */ | |
| 2066 struct fulltext_vtab { | |
| 2067 sqlite3_vtab base; /* Base class used by SQLite core */ | |
| 2068 sqlite3 *db; /* The database connection */ | |
| 2069 const char *zDb; /* logical database name */ | |
| 2070 const char *zName; /* virtual table name */ | |
| 2071 int nColumn; /* number of columns in virtual table */ | |
| 2072 char **azColumn; /* column names. malloced */ | |
| 2073 char **azContentColumn; /* column names in content table; malloced */ | |
| 2074 sqlite3_tokenizer *pTokenizer; /* tokenizer for inserts and queries */ | |
| 2075 | |
| 2076 /* Precompiled statements which we keep as long as the table is | |
| 2077 ** open. | |
| 2078 */ | |
| 2079 sqlite3_stmt *pFulltextStatements[MAX_STMT]; | |
| 2080 | |
| 2081 /* Precompiled statements used for segment merges. We run a | |
| 2082 ** separate select across the leaf level of each tree being merged. | |
| 2083 */ | |
| 2084 sqlite3_stmt *pLeafSelectStmts[MERGE_COUNT]; | |
| 2085 /* The statement used to prepare pLeafSelectStmts. */ | |
| 2086 #define LEAF_SELECT \ | |
| 2087 "select block from %_segments where blockid between ? and ? order by blockid" | |
| 2088 | |
| 2089 /* These buffer pending index updates during transactions. | |
| 2090 ** nPendingData estimates the memory size of the pending data. It | |
| 2091 ** doesn't include the hash-bucket overhead, nor any malloc | |
| 2092 ** overhead. When nPendingData exceeds kPendingThreshold, the | |
| 2093 ** buffer is flushed even before the transaction closes. | |
| 2094 ** pendingTerms stores the data, and is only valid when nPendingData | |
| 2095 ** is >=0 (nPendingData<0 means pendingTerms has not been | |
| 2096 ** initialized). iPrevDocid is the last docid written, used to make | |
| 2097 ** certain we're inserting in sorted order. | |
| 2098 */ | |
| 2099 int nPendingData; | |
| 2100 #define kPendingThreshold (1*1024*1024) | |
| 2101 sqlite_int64 iPrevDocid; | |
| 2102 fts3Hash pendingTerms; | |
| 2103 }; | |
| 2104 | |
| 2105 /* | |
| 2106 ** When the core wants to do a query, it create a cursor using a | |
| 2107 ** call to xOpen. This structure is an instance of a cursor. It | |
| 2108 ** is destroyed by xClose. | |
| 2109 */ | |
| 2110 typedef struct fulltext_cursor { | |
| 2111 sqlite3_vtab_cursor base; /* Base class used by SQLite core */ | |
| 2112 QueryType iCursorType; /* Copy of sqlite3_index_info.idxNum */ | |
| 2113 sqlite3_stmt *pStmt; /* Prepared statement in use by the cursor */ | |
| 2114 int eof; /* True if at End Of Results */ | |
| 2115 Fts3Expr *pExpr; /* Parsed MATCH query string */ | |
| 2116 Snippet snippet; /* Cached snippet for the current row */ | |
| 2117 int iColumn; /* Column being searched */ | |
| 2118 DataBuffer result; /* Doclist results from fulltextQuery */ | |
| 2119 DLReader reader; /* Result reader if result not empty */ | |
| 2120 } fulltext_cursor; | |
| 2121 | |
| 2122 static fulltext_vtab *cursor_vtab(fulltext_cursor *c){ | |
| 2123 return (fulltext_vtab *) c->base.pVtab; | |
| 2124 } | |
| 2125 | |
| 2126 static const sqlite3_module fts3Module; /* forward declaration */ | |
| 2127 | |
| 2128 /* Return a dynamically generated statement of the form | |
| 2129 * insert into %_content (docid, ...) values (?, ...) | |
| 2130 */ | |
| 2131 static const char *contentInsertStatement(fulltext_vtab *v){ | |
| 2132 StringBuffer sb; | |
| 2133 int i; | |
| 2134 | |
| 2135 initStringBuffer(&sb); | |
| 2136 append(&sb, "insert into %_content (docid, "); | |
| 2137 appendList(&sb, v->nColumn, v->azContentColumn); | |
| 2138 append(&sb, ") values (?"); | |
| 2139 for(i=0; i<v->nColumn; ++i) | |
| 2140 append(&sb, ", ?"); | |
| 2141 append(&sb, ")"); | |
| 2142 return stringBufferData(&sb); | |
| 2143 } | |
| 2144 | |
| 2145 /* Return a dynamically generated statement of the form | |
| 2146 * select <content columns> from %_content where docid = ? | |
| 2147 */ | |
| 2148 static const char *contentSelectStatement(fulltext_vtab *v){ | |
| 2149 StringBuffer sb; | |
| 2150 initStringBuffer(&sb); | |
| 2151 append(&sb, "SELECT "); | |
| 2152 appendList(&sb, v->nColumn, v->azContentColumn); | |
| 2153 append(&sb, " FROM %_content WHERE docid = ?"); | |
| 2154 return stringBufferData(&sb); | |
| 2155 } | |
| 2156 | |
| 2157 /* Return a dynamically generated statement of the form | |
| 2158 * update %_content set [col_0] = ?, [col_1] = ?, ... | |
| 2159 * where docid = ? | |
| 2160 */ | |
| 2161 static const char *contentUpdateStatement(fulltext_vtab *v){ | |
| 2162 StringBuffer sb; | |
| 2163 int i; | |
| 2164 | |
| 2165 initStringBuffer(&sb); | |
| 2166 append(&sb, "update %_content set "); | |
| 2167 for(i=0; i<v->nColumn; ++i) { | |
| 2168 if( i>0 ){ | |
| 2169 append(&sb, ", "); | |
| 2170 } | |
| 2171 append(&sb, v->azContentColumn[i]); | |
| 2172 append(&sb, " = ?"); | |
| 2173 } | |
| 2174 append(&sb, " where docid = ?"); | |
| 2175 return stringBufferData(&sb); | |
| 2176 } | |
| 2177 | |
| 2178 /* Puts a freshly-prepared statement determined by iStmt in *ppStmt. | |
| 2179 ** If the indicated statement has never been prepared, it is prepared | |
| 2180 ** and cached, otherwise the cached version is reset. | |
| 2181 */ | |
| 2182 static int sql_get_statement(fulltext_vtab *v, fulltext_statement iStmt, | |
| 2183 sqlite3_stmt **ppStmt){ | |
| 2184 assert( iStmt<MAX_STMT ); | |
| 2185 if( v->pFulltextStatements[iStmt]==NULL ){ | |
| 2186 const char *zStmt; | |
| 2187 int rc; | |
| 2188 switch( iStmt ){ | |
| 2189 case CONTENT_INSERT_STMT: | |
| 2190 zStmt = contentInsertStatement(v); break; | |
| 2191 case CONTENT_SELECT_STMT: | |
| 2192 zStmt = contentSelectStatement(v); break; | |
| 2193 case CONTENT_UPDATE_STMT: | |
| 2194 zStmt = contentUpdateStatement(v); break; | |
| 2195 default: | |
| 2196 zStmt = fulltext_zStatement[iStmt]; | |
| 2197 } | |
| 2198 rc = sql_prepare(v->db, v->zDb, v->zName, &v->pFulltextStatements[iStmt], | |
| 2199 zStmt); | |
| 2200 if( zStmt != fulltext_zStatement[iStmt]) sqlite3_free((void *) zStmt); | |
| 2201 if( rc!=SQLITE_OK ) return rc; | |
| 2202 } else { | |
| 2203 int rc = sqlite3_reset(v->pFulltextStatements[iStmt]); | |
| 2204 if( rc!=SQLITE_OK ) return rc; | |
| 2205 } | |
| 2206 | |
| 2207 *ppStmt = v->pFulltextStatements[iStmt]; | |
| 2208 return SQLITE_OK; | |
| 2209 } | |
| 2210 | |
| 2211 /* Like sqlite3_step(), but convert SQLITE_DONE to SQLITE_OK and | |
| 2212 ** SQLITE_ROW to SQLITE_ERROR. Useful for statements like UPDATE, | |
| 2213 ** where we expect no results. | |
| 2214 */ | |
| 2215 static int sql_single_step(sqlite3_stmt *s){ | |
| 2216 int rc = sqlite3_step(s); | |
| 2217 return (rc==SQLITE_DONE) ? SQLITE_OK : rc; | |
| 2218 } | |
| 2219 | |
| 2220 /* Like sql_get_statement(), but for special replicated LEAF_SELECT | |
| 2221 ** statements. idx -1 is a special case for an uncached version of | |
| 2222 ** the statement (used in the optimize implementation). | |
| 2223 */ | |
| 2224 /* TODO(shess) Write version for generic statements and then share | |
| 2225 ** that between the cached-statement functions. | |
| 2226 */ | |
| 2227 static int sql_get_leaf_statement(fulltext_vtab *v, int idx, | |
| 2228 sqlite3_stmt **ppStmt){ | |
| 2229 assert( idx>=-1 && idx<MERGE_COUNT ); | |
| 2230 if( idx==-1 ){ | |
| 2231 return sql_prepare(v->db, v->zDb, v->zName, ppStmt, LEAF_SELECT); | |
| 2232 }else if( v->pLeafSelectStmts[idx]==NULL ){ | |
| 2233 int rc = sql_prepare(v->db, v->zDb, v->zName, &v->pLeafSelectStmts[idx], | |
| 2234 LEAF_SELECT); | |
| 2235 if( rc!=SQLITE_OK ) return rc; | |
| 2236 }else{ | |
| 2237 int rc = sqlite3_reset(v->pLeafSelectStmts[idx]); | |
| 2238 if( rc!=SQLITE_OK ) return rc; | |
| 2239 } | |
| 2240 | |
| 2241 *ppStmt = v->pLeafSelectStmts[idx]; | |
| 2242 return SQLITE_OK; | |
| 2243 } | |
| 2244 | |
| 2245 /* insert into %_content (docid, ...) values ([docid], [pValues]) | |
| 2246 ** If the docid contains SQL NULL, then a unique docid will be | |
| 2247 ** generated. | |
| 2248 */ | |
| 2249 static int content_insert(fulltext_vtab *v, sqlite3_value *docid, | |
| 2250 sqlite3_value **pValues){ | |
| 2251 sqlite3_stmt *s; | |
| 2252 int i; | |
| 2253 int rc = sql_get_statement(v, CONTENT_INSERT_STMT, &s); | |
| 2254 if( rc!=SQLITE_OK ) return rc; | |
| 2255 | |
| 2256 rc = sqlite3_bind_value(s, 1, docid); | |
| 2257 if( rc!=SQLITE_OK ) return rc; | |
| 2258 | |
| 2259 for(i=0; i<v->nColumn; ++i){ | |
| 2260 rc = sqlite3_bind_value(s, 2+i, pValues[i]); | |
| 2261 if( rc!=SQLITE_OK ) return rc; | |
| 2262 } | |
| 2263 | |
| 2264 return sql_single_step(s); | |
| 2265 } | |
| 2266 | |
| 2267 /* update %_content set col0 = pValues[0], col1 = pValues[1], ... | |
| 2268 * where docid = [iDocid] */ | |
| 2269 static int content_update(fulltext_vtab *v, sqlite3_value **pValues, | |
| 2270 sqlite_int64 iDocid){ | |
| 2271 sqlite3_stmt *s; | |
| 2272 int i; | |
| 2273 int rc = sql_get_statement(v, CONTENT_UPDATE_STMT, &s); | |
| 2274 if( rc!=SQLITE_OK ) return rc; | |
| 2275 | |
| 2276 for(i=0; i<v->nColumn; ++i){ | |
| 2277 rc = sqlite3_bind_value(s, 1+i, pValues[i]); | |
| 2278 if( rc!=SQLITE_OK ) return rc; | |
| 2279 } | |
| 2280 | |
| 2281 rc = sqlite3_bind_int64(s, 1+v->nColumn, iDocid); | |
| 2282 if( rc!=SQLITE_OK ) return rc; | |
| 2283 | |
| 2284 return sql_single_step(s); | |
| 2285 } | |
| 2286 | |
| 2287 static void freeStringArray(int nString, const char **pString){ | |
| 2288 int i; | |
| 2289 | |
| 2290 for (i=0 ; i < nString ; ++i) { | |
| 2291 if( pString[i]!=NULL ) sqlite3_free((void *) pString[i]); | |
| 2292 } | |
| 2293 sqlite3_free((void *) pString); | |
| 2294 } | |
| 2295 | |
| 2296 /* select * from %_content where docid = [iDocid] | |
| 2297 * The caller must delete the returned array and all strings in it. | |
| 2298 * null fields will be NULL in the returned array. | |
| 2299 * | |
| 2300 * TODO: Perhaps we should return pointer/length strings here for consistency | |
| 2301 * with other code which uses pointer/length. */ | |
| 2302 static int content_select(fulltext_vtab *v, sqlite_int64 iDocid, | |
| 2303 const char ***pValues){ | |
| 2304 sqlite3_stmt *s; | |
| 2305 const char **values; | |
| 2306 int i; | |
| 2307 int rc; | |
| 2308 | |
| 2309 *pValues = NULL; | |
| 2310 | |
| 2311 rc = sql_get_statement(v, CONTENT_SELECT_STMT, &s); | |
| 2312 if( rc!=SQLITE_OK ) return rc; | |
| 2313 | |
| 2314 rc = sqlite3_bind_int64(s, 1, iDocid); | |
| 2315 if( rc!=SQLITE_OK ) return rc; | |
| 2316 | |
| 2317 rc = sqlite3_step(s); | |
| 2318 if( rc!=SQLITE_ROW ) return rc; | |
| 2319 | |
| 2320 values = (const char **) sqlite3_malloc(v->nColumn * sizeof(const char *)); | |
| 2321 for(i=0; i<v->nColumn; ++i){ | |
| 2322 if( sqlite3_column_type(s, i)==SQLITE_NULL ){ | |
| 2323 values[i] = NULL; | |
| 2324 }else{ | |
| 2325 values[i] = string_dup((char*)sqlite3_column_text(s, i)); | |
| 2326 } | |
| 2327 } | |
| 2328 | |
| 2329 /* We expect only one row. We must execute another sqlite3_step() | |
| 2330 * to complete the iteration; otherwise the table will remain locked. */ | |
| 2331 rc = sqlite3_step(s); | |
| 2332 if( rc==SQLITE_DONE ){ | |
| 2333 *pValues = values; | |
| 2334 return SQLITE_OK; | |
| 2335 } | |
| 2336 | |
| 2337 freeStringArray(v->nColumn, values); | |
| 2338 return rc; | |
| 2339 } | |
| 2340 | |
| 2341 /* delete from %_content where docid = [iDocid ] */ | |
| 2342 static int content_delete(fulltext_vtab *v, sqlite_int64 iDocid){ | |
| 2343 sqlite3_stmt *s; | |
| 2344 int rc = sql_get_statement(v, CONTENT_DELETE_STMT, &s); | |
| 2345 if( rc!=SQLITE_OK ) return rc; | |
| 2346 | |
| 2347 rc = sqlite3_bind_int64(s, 1, iDocid); | |
| 2348 if( rc!=SQLITE_OK ) return rc; | |
| 2349 | |
| 2350 return sql_single_step(s); | |
| 2351 } | |
| 2352 | |
| 2353 /* Returns SQLITE_ROW if any rows exist in %_content, SQLITE_DONE if | |
| 2354 ** no rows exist, and any error in case of failure. | |
| 2355 */ | |
| 2356 static int content_exists(fulltext_vtab *v){ | |
| 2357 sqlite3_stmt *s; | |
| 2358 int rc = sql_get_statement(v, CONTENT_EXISTS_STMT, &s); | |
| 2359 if( rc!=SQLITE_OK ) return rc; | |
| 2360 | |
| 2361 rc = sqlite3_step(s); | |
| 2362 if( rc!=SQLITE_ROW ) return rc; | |
| 2363 | |
| 2364 /* We expect only one row. We must execute another sqlite3_step() | |
| 2365 * to complete the iteration; otherwise the table will remain locked. */ | |
| 2366 rc = sqlite3_step(s); | |
| 2367 if( rc==SQLITE_DONE ) return SQLITE_ROW; | |
| 2368 if( rc==SQLITE_ROW ) return SQLITE_ERROR; | |
| 2369 return rc; | |
| 2370 } | |
| 2371 | |
| 2372 /* insert into %_segments values ([pData]) | |
| 2373 ** returns assigned blockid in *piBlockid | |
| 2374 */ | |
| 2375 static int block_insert(fulltext_vtab *v, const char *pData, int nData, | |
| 2376 sqlite_int64 *piBlockid){ | |
| 2377 sqlite3_stmt *s; | |
| 2378 int rc = sql_get_statement(v, BLOCK_INSERT_STMT, &s); | |
| 2379 if( rc!=SQLITE_OK ) return rc; | |
| 2380 | |
| 2381 rc = sqlite3_bind_blob(s, 1, pData, nData, SQLITE_STATIC); | |
| 2382 if( rc!=SQLITE_OK ) return rc; | |
| 2383 | |
| 2384 rc = sqlite3_step(s); | |
| 2385 if( rc==SQLITE_ROW ) return SQLITE_ERROR; | |
| 2386 if( rc!=SQLITE_DONE ) return rc; | |
| 2387 | |
| 2388 /* blockid column is an alias for rowid. */ | |
| 2389 *piBlockid = sqlite3_last_insert_rowid(v->db); | |
| 2390 return SQLITE_OK; | |
| 2391 } | |
| 2392 | |
| 2393 /* delete from %_segments | |
| 2394 ** where blockid between [iStartBlockid] and [iEndBlockid] | |
| 2395 ** | |
| 2396 ** Deletes the range of blocks, inclusive, used to delete the blocks | |
| 2397 ** which form a segment. | |
| 2398 */ | |
| 2399 static int block_delete(fulltext_vtab *v, | |
| 2400 sqlite_int64 iStartBlockid, sqlite_int64 iEndBlockid){ | |
| 2401 sqlite3_stmt *s; | |
| 2402 int rc = sql_get_statement(v, BLOCK_DELETE_STMT, &s); | |
| 2403 if( rc!=SQLITE_OK ) return rc; | |
| 2404 | |
| 2405 rc = sqlite3_bind_int64(s, 1, iStartBlockid); | |
| 2406 if( rc!=SQLITE_OK ) return rc; | |
| 2407 | |
| 2408 rc = sqlite3_bind_int64(s, 2, iEndBlockid); | |
| 2409 if( rc!=SQLITE_OK ) return rc; | |
| 2410 | |
| 2411 return sql_single_step(s); | |
| 2412 } | |
| 2413 | |
| 2414 /* Returns SQLITE_ROW with *pidx set to the maximum segment idx found | |
| 2415 ** at iLevel. Returns SQLITE_DONE if there are no segments at | |
| 2416 ** iLevel. Otherwise returns an error. | |
| 2417 */ | |
| 2418 static int segdir_max_index(fulltext_vtab *v, int iLevel, int *pidx){ | |
| 2419 sqlite3_stmt *s; | |
| 2420 int rc = sql_get_statement(v, SEGDIR_MAX_INDEX_STMT, &s); | |
| 2421 if( rc!=SQLITE_OK ) return rc; | |
| 2422 | |
| 2423 rc = sqlite3_bind_int(s, 1, iLevel); | |
| 2424 if( rc!=SQLITE_OK ) return rc; | |
| 2425 | |
| 2426 rc = sqlite3_step(s); | |
| 2427 /* Should always get at least one row due to how max() works. */ | |
| 2428 if( rc==SQLITE_DONE ) return SQLITE_DONE; | |
| 2429 if( rc!=SQLITE_ROW ) return rc; | |
| 2430 | |
| 2431 /* NULL means that there were no inputs to max(). */ | |
| 2432 if( SQLITE_NULL==sqlite3_column_type(s, 0) ){ | |
| 2433 rc = sqlite3_step(s); | |
| 2434 if( rc==SQLITE_ROW ) return SQLITE_ERROR; | |
| 2435 return rc; | |
| 2436 } | |
| 2437 | |
| 2438 *pidx = sqlite3_column_int(s, 0); | |
| 2439 | |
| 2440 /* We expect only one row. We must execute another sqlite3_step() | |
| 2441 * to complete the iteration; otherwise the table will remain locked. */ | |
| 2442 rc = sqlite3_step(s); | |
| 2443 if( rc==SQLITE_ROW ) return SQLITE_ERROR; | |
| 2444 if( rc!=SQLITE_DONE ) return rc; | |
| 2445 return SQLITE_ROW; | |
| 2446 } | |
| 2447 | |
| 2448 /* insert into %_segdir values ( | |
| 2449 ** [iLevel], [idx], | |
| 2450 ** [iStartBlockid], [iLeavesEndBlockid], [iEndBlockid], | |
| 2451 ** [pRootData] | |
| 2452 ** ) | |
| 2453 */ | |
| 2454 static int segdir_set(fulltext_vtab *v, int iLevel, int idx, | |
| 2455 sqlite_int64 iStartBlockid, | |
| 2456 sqlite_int64 iLeavesEndBlockid, | |
| 2457 sqlite_int64 iEndBlockid, | |
| 2458 const char *pRootData, int nRootData){ | |
| 2459 sqlite3_stmt *s; | |
| 2460 int rc = sql_get_statement(v, SEGDIR_SET_STMT, &s); | |
| 2461 if( rc!=SQLITE_OK ) return rc; | |
| 2462 | |
| 2463 rc = sqlite3_bind_int(s, 1, iLevel); | |
| 2464 if( rc!=SQLITE_OK ) return rc; | |
| 2465 | |
| 2466 rc = sqlite3_bind_int(s, 2, idx); | |
| 2467 if( rc!=SQLITE_OK ) return rc; | |
| 2468 | |
| 2469 rc = sqlite3_bind_int64(s, 3, iStartBlockid); | |
| 2470 if( rc!=SQLITE_OK ) return rc; | |
| 2471 | |
| 2472 rc = sqlite3_bind_int64(s, 4, iLeavesEndBlockid); | |
| 2473 if( rc!=SQLITE_OK ) return rc; | |
| 2474 | |
| 2475 rc = sqlite3_bind_int64(s, 5, iEndBlockid); | |
| 2476 if( rc!=SQLITE_OK ) return rc; | |
| 2477 | |
| 2478 rc = sqlite3_bind_blob(s, 6, pRootData, nRootData, SQLITE_STATIC); | |
| 2479 if( rc!=SQLITE_OK ) return rc; | |
| 2480 | |
| 2481 return sql_single_step(s); | |
| 2482 } | |
| 2483 | |
| 2484 /* Queries %_segdir for the block span of the segments in level | |
| 2485 ** iLevel. Returns SQLITE_DONE if there are no blocks for iLevel, | |
| 2486 ** SQLITE_ROW if there are blocks, else an error. | |
| 2487 */ | |
| 2488 static int segdir_span(fulltext_vtab *v, int iLevel, | |
| 2489 sqlite_int64 *piStartBlockid, | |
| 2490 sqlite_int64 *piEndBlockid){ | |
| 2491 sqlite3_stmt *s; | |
| 2492 int rc = sql_get_statement(v, SEGDIR_SPAN_STMT, &s); | |
| 2493 if( rc!=SQLITE_OK ) return rc; | |
| 2494 | |
| 2495 rc = sqlite3_bind_int(s, 1, iLevel); | |
| 2496 if( rc!=SQLITE_OK ) return rc; | |
| 2497 | |
| 2498 rc = sqlite3_step(s); | |
| 2499 if( rc==SQLITE_DONE ) return SQLITE_DONE; /* Should never happen */ | |
| 2500 if( rc!=SQLITE_ROW ) return rc; | |
| 2501 | |
| 2502 /* This happens if all segments at this level are entirely inline. */ | |
| 2503 if( SQLITE_NULL==sqlite3_column_type(s, 0) ){ | |
| 2504 /* We expect only one row. We must execute another sqlite3_step() | |
| 2505 * to complete the iteration; otherwise the table will remain locked. */ | |
| 2506 int rc2 = sqlite3_step(s); | |
| 2507 if( rc2==SQLITE_ROW ) return SQLITE_ERROR; | |
| 2508 return rc2; | |
| 2509 } | |
| 2510 | |
| 2511 *piStartBlockid = sqlite3_column_int64(s, 0); | |
| 2512 *piEndBlockid = sqlite3_column_int64(s, 1); | |
| 2513 | |
| 2514 /* We expect only one row. We must execute another sqlite3_step() | |
| 2515 * to complete the iteration; otherwise the table will remain locked. */ | |
| 2516 rc = sqlite3_step(s); | |
| 2517 if( rc==SQLITE_ROW ) return SQLITE_ERROR; | |
| 2518 if( rc!=SQLITE_DONE ) return rc; | |
| 2519 return SQLITE_ROW; | |
| 2520 } | |
| 2521 | |
| 2522 /* Delete the segment blocks and segment directory records for all | |
| 2523 ** segments at iLevel. | |
| 2524 */ | |
| 2525 static int segdir_delete(fulltext_vtab *v, int iLevel){ | |
| 2526 sqlite3_stmt *s; | |
| 2527 sqlite_int64 iStartBlockid, iEndBlockid; | |
| 2528 int rc = segdir_span(v, iLevel, &iStartBlockid, &iEndBlockid); | |
| 2529 if( rc!=SQLITE_ROW && rc!=SQLITE_DONE ) return rc; | |
| 2530 | |
| 2531 if( rc==SQLITE_ROW ){ | |
| 2532 rc = block_delete(v, iStartBlockid, iEndBlockid); | |
| 2533 if( rc!=SQLITE_OK ) return rc; | |
| 2534 } | |
| 2535 | |
| 2536 /* Delete the segment directory itself. */ | |
| 2537 rc = sql_get_statement(v, SEGDIR_DELETE_STMT, &s); | |
| 2538 if( rc!=SQLITE_OK ) return rc; | |
| 2539 | |
| 2540 rc = sqlite3_bind_int64(s, 1, iLevel); | |
| 2541 if( rc!=SQLITE_OK ) return rc; | |
| 2542 | |
| 2543 return sql_single_step(s); | |
| 2544 } | |
| 2545 | |
| 2546 /* Delete entire fts index, SQLITE_OK on success, relevant error on | |
| 2547 ** failure. | |
| 2548 */ | |
| 2549 static int segdir_delete_all(fulltext_vtab *v){ | |
| 2550 sqlite3_stmt *s; | |
| 2551 int rc = sql_get_statement(v, SEGDIR_DELETE_ALL_STMT, &s); | |
| 2552 if( rc!=SQLITE_OK ) return rc; | |
| 2553 | |
| 2554 rc = sql_single_step(s); | |
| 2555 if( rc!=SQLITE_OK ) return rc; | |
| 2556 | |
| 2557 rc = sql_get_statement(v, BLOCK_DELETE_ALL_STMT, &s); | |
| 2558 if( rc!=SQLITE_OK ) return rc; | |
| 2559 | |
| 2560 return sql_single_step(s); | |
| 2561 } | |
| 2562 | |
| 2563 /* Returns SQLITE_OK with *pnSegments set to the number of entries in | |
| 2564 ** %_segdir and *piMaxLevel set to the highest level which has a | |
| 2565 ** segment. Otherwise returns the SQLite error which caused failure. | |
| 2566 */ | |
| 2567 static int segdir_count(fulltext_vtab *v, int *pnSegments, int *piMaxLevel){ | |
| 2568 sqlite3_stmt *s; | |
| 2569 int rc = sql_get_statement(v, SEGDIR_COUNT_STMT, &s); | |
| 2570 if( rc!=SQLITE_OK ) return rc; | |
| 2571 | |
| 2572 rc = sqlite3_step(s); | |
| 2573 /* TODO(shess): This case should not be possible? Should stronger | |
| 2574 ** measures be taken if it happens? | |
| 2575 */ | |
| 2576 if( rc==SQLITE_DONE ){ | |
| 2577 *pnSegments = 0; | |
| 2578 *piMaxLevel = 0; | |
| 2579 return SQLITE_OK; | |
| 2580 } | |
| 2581 if( rc!=SQLITE_ROW ) return rc; | |
| 2582 | |
| 2583 *pnSegments = sqlite3_column_int(s, 0); | |
| 2584 *piMaxLevel = sqlite3_column_int(s, 1); | |
| 2585 | |
| 2586 /* We expect only one row. We must execute another sqlite3_step() | |
| 2587 * to complete the iteration; otherwise the table will remain locked. */ | |
| 2588 rc = sqlite3_step(s); | |
| 2589 if( rc==SQLITE_DONE ) return SQLITE_OK; | |
| 2590 if( rc==SQLITE_ROW ) return SQLITE_ERROR; | |
| 2591 return rc; | |
| 2592 } | |
| 2593 | |
| 2594 /* TODO(shess) clearPendingTerms() is far down the file because | |
| 2595 ** writeZeroSegment() is far down the file because LeafWriter is far | |
| 2596 ** down the file. Consider refactoring the code to move the non-vtab | |
| 2597 ** code above the vtab code so that we don't need this forward | |
| 2598 ** reference. | |
| 2599 */ | |
| 2600 static int clearPendingTerms(fulltext_vtab *v); | |
| 2601 | |
| 2602 /* | |
| 2603 ** Free the memory used to contain a fulltext_vtab structure. | |
| 2604 */ | |
| 2605 static void fulltext_vtab_destroy(fulltext_vtab *v){ | |
| 2606 int iStmt, i; | |
| 2607 | |
| 2608 FTSTRACE(("FTS3 Destroy %p\n", v)); | |
| 2609 for( iStmt=0; iStmt<MAX_STMT; iStmt++ ){ | |
| 2610 if( v->pFulltextStatements[iStmt]!=NULL ){ | |
| 2611 sqlite3_finalize(v->pFulltextStatements[iStmt]); | |
| 2612 v->pFulltextStatements[iStmt] = NULL; | |
| 2613 } | |
| 2614 } | |
| 2615 | |
| 2616 for( i=0; i<MERGE_COUNT; i++ ){ | |
| 2617 if( v->pLeafSelectStmts[i]!=NULL ){ | |
| 2618 sqlite3_finalize(v->pLeafSelectStmts[i]); | |
| 2619 v->pLeafSelectStmts[i] = NULL; | |
| 2620 } | |
| 2621 } | |
| 2622 | |
| 2623 if( v->pTokenizer!=NULL ){ | |
| 2624 v->pTokenizer->pModule->xDestroy(v->pTokenizer); | |
| 2625 v->pTokenizer = NULL; | |
| 2626 } | |
| 2627 | |
| 2628 clearPendingTerms(v); | |
| 2629 | |
| 2630 sqlite3_free(v->azColumn); | |
| 2631 for(i = 0; i < v->nColumn; ++i) { | |
| 2632 sqlite3_free(v->azContentColumn[i]); | |
| 2633 } | |
| 2634 sqlite3_free(v->azContentColumn); | |
| 2635 sqlite3_free(v); | |
| 2636 } | |
| 2637 | |
| 2638 /* | |
| 2639 ** Token types for parsing the arguments to xConnect or xCreate. | |
| 2640 */ | |
| 2641 #define TOKEN_EOF 0 /* End of file */ | |
| 2642 #define TOKEN_SPACE 1 /* Any kind of whitespace */ | |
| 2643 #define TOKEN_ID 2 /* An identifier */ | |
| 2644 #define TOKEN_STRING 3 /* A string literal */ | |
| 2645 #define TOKEN_PUNCT 4 /* A single punctuation character */ | |
| 2646 | |
| 2647 /* | |
| 2648 ** If X is a character that can be used in an identifier then | |
| 2649 ** ftsIdChar(X) will be true. Otherwise it is false. | |
| 2650 ** | |
| 2651 ** For ASCII, any character with the high-order bit set is | |
| 2652 ** allowed in an identifier. For 7-bit characters, | |
| 2653 ** isFtsIdChar[X] must be 1. | |
| 2654 ** | |
| 2655 ** Ticket #1066. the SQL standard does not allow '$' in the | |
| 2656 ** middle of identfiers. But many SQL implementations do. | |
| 2657 ** SQLite will allow '$' in identifiers for compatibility. | |
| 2658 ** But the feature is undocumented. | |
| 2659 */ | |
| 2660 static const char isFtsIdChar[] = { | |
| 2661 /* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */ | |
| 2662 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 2x */ | |
| 2663 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 3x */ | |
| 2664 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 4x */ | |
| 2665 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, /* 5x */ | |
| 2666 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 6x */ | |
| 2667 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 7x */ | |
| 2668 }; | |
| 2669 #define ftsIdChar(C) (((c=C)&0x80)!=0 || (c>0x1f && isFtsIdChar[c-0x20])) | |
| 2670 | |
| 2671 | |
| 2672 /* | |
| 2673 ** Return the length of the token that begins at z[0]. | |
| 2674 ** Store the token type in *tokenType before returning. | |
| 2675 */ | |
| 2676 static int ftsGetToken(const char *z, int *tokenType){ | |
| 2677 int i, c; | |
| 2678 switch( *z ){ | |
| 2679 case 0: { | |
| 2680 *tokenType = TOKEN_EOF; | |
| 2681 return 0; | |
| 2682 } | |
| 2683 case ' ': case '\t': case '\n': case '\f': case '\r': { | |
| 2684 for(i=1; safe_isspace(z[i]); i++){} | |
| 2685 *tokenType = TOKEN_SPACE; | |
| 2686 return i; | |
| 2687 } | |
| 2688 case '`': | |
| 2689 case '\'': | |
| 2690 case '"': { | |
| 2691 int delim = z[0]; | |
| 2692 for(i=1; (c=z[i])!=0; i++){ | |
| 2693 if( c==delim ){ | |
| 2694 if( z[i+1]==delim ){ | |
| 2695 i++; | |
| 2696 }else{ | |
| 2697 break; | |
| 2698 } | |
| 2699 } | |
| 2700 } | |
| 2701 *tokenType = TOKEN_STRING; | |
| 2702 return i + (c!=0); | |
| 2703 } | |
| 2704 case '[': { | |
| 2705 for(i=1, c=z[0]; c!=']' && (c=z[i])!=0; i++){} | |
| 2706 *tokenType = TOKEN_ID; | |
| 2707 return i; | |
| 2708 } | |
| 2709 default: { | |
| 2710 if( !ftsIdChar(*z) ){ | |
| 2711 break; | |
| 2712 } | |
| 2713 for(i=1; ftsIdChar(z[i]); i++){} | |
| 2714 *tokenType = TOKEN_ID; | |
| 2715 return i; | |
| 2716 } | |
| 2717 } | |
| 2718 *tokenType = TOKEN_PUNCT; | |
| 2719 return 1; | |
| 2720 } | |
| 2721 | |
| 2722 /* | |
| 2723 ** A token extracted from a string is an instance of the following | |
| 2724 ** structure. | |
| 2725 */ | |
| 2726 typedef struct FtsToken { | |
| 2727 const char *z; /* Pointer to token text. Not '\000' terminated */ | |
| 2728 short int n; /* Length of the token text in bytes. */ | |
| 2729 } FtsToken; | |
| 2730 | |
| 2731 /* | |
| 2732 ** Given a input string (which is really one of the argv[] parameters | |
| 2733 ** passed into xConnect or xCreate) split the string up into tokens. | |
| 2734 ** Return an array of pointers to '\000' terminated strings, one string | |
| 2735 ** for each non-whitespace token. | |
| 2736 ** | |
| 2737 ** The returned array is terminated by a single NULL pointer. | |
| 2738 ** | |
| 2739 ** Space to hold the returned array is obtained from a single | |
| 2740 ** malloc and should be freed by passing the return value to free(). | |
| 2741 ** The individual strings within the token list are all a part of | |
| 2742 ** the single memory allocation and will all be freed at once. | |
| 2743 */ | |
| 2744 static char **tokenizeString(const char *z, int *pnToken){ | |
| 2745 int nToken = 0; | |
| 2746 FtsToken *aToken = sqlite3_malloc( strlen(z) * sizeof(aToken[0]) ); | |
| 2747 int n = 1; | |
| 2748 int e, i; | |
| 2749 int totalSize = 0; | |
| 2750 char **azToken; | |
| 2751 char *zCopy; | |
| 2752 while( n>0 ){ | |
| 2753 n = ftsGetToken(z, &e); | |
| 2754 if( e!=TOKEN_SPACE ){ | |
| 2755 aToken[nToken].z = z; | |
| 2756 aToken[nToken].n = n; | |
| 2757 nToken++; | |
| 2758 totalSize += n+1; | |
| 2759 } | |
| 2760 z += n; | |
| 2761 } | |
| 2762 azToken = (char**)sqlite3_malloc( nToken*sizeof(char*) + totalSize ); | |
| 2763 zCopy = (char*)&azToken[nToken]; | |
| 2764 nToken--; | |
| 2765 for(i=0; i<nToken; i++){ | |
| 2766 azToken[i] = zCopy; | |
| 2767 n = aToken[i].n; | |
| 2768 memcpy(zCopy, aToken[i].z, n); | |
| 2769 zCopy[n] = 0; | |
| 2770 zCopy += n+1; | |
| 2771 } | |
| 2772 azToken[nToken] = 0; | |
| 2773 sqlite3_free(aToken); | |
| 2774 *pnToken = nToken; | |
| 2775 return azToken; | |
| 2776 } | |
| 2777 | |
| 2778 /* | |
| 2779 ** Convert an SQL-style quoted string into a normal string by removing | |
| 2780 ** the quote characters. The conversion is done in-place. If the | |
| 2781 ** input does not begin with a quote character, then this routine | |
| 2782 ** is a no-op. | |
| 2783 ** | |
| 2784 ** Examples: | |
| 2785 ** | |
| 2786 ** "abc" becomes abc | |
| 2787 ** 'xyz' becomes xyz | |
| 2788 ** [pqr] becomes pqr | |
| 2789 ** `mno` becomes mno | |
| 2790 */ | |
| 2791 static void dequoteString(char *z){ | |
| 2792 int quote; | |
| 2793 int i, j; | |
| 2794 if( z==0 ) return; | |
| 2795 quote = z[0]; | |
| 2796 switch( quote ){ | |
| 2797 case '\'': break; | |
| 2798 case '"': break; | |
| 2799 case '`': break; /* For MySQL compatibility */ | |
| 2800 case '[': quote = ']'; break; /* For MS SqlServer compatibility */ | |
| 2801 default: return; | |
| 2802 } | |
| 2803 for(i=1, j=0; z[i]; i++){ | |
| 2804 if( z[i]==quote ){ | |
| 2805 if( z[i+1]==quote ){ | |
| 2806 z[j++] = quote; | |
| 2807 i++; | |
| 2808 }else{ | |
| 2809 z[j++] = 0; | |
| 2810 break; | |
| 2811 } | |
| 2812 }else{ | |
| 2813 z[j++] = z[i]; | |
| 2814 } | |
| 2815 } | |
| 2816 } | |
| 2817 | |
| 2818 /* | |
| 2819 ** The input azIn is a NULL-terminated list of tokens. Remove the first | |
| 2820 ** token and all punctuation tokens. Remove the quotes from | |
| 2821 ** around string literal tokens. | |
| 2822 ** | |
| 2823 ** Example: | |
| 2824 ** | |
| 2825 ** input: tokenize chinese ( 'simplifed' , 'mixed' ) | |
| 2826 ** output: chinese simplifed mixed | |
| 2827 ** | |
| 2828 ** Another example: | |
| 2829 ** | |
| 2830 ** input: delimiters ( '[' , ']' , '...' ) | |
| 2831 ** output: [ ] ... | |
| 2832 */ | |
| 2833 static void tokenListToIdList(char **azIn){ | |
| 2834 int i, j; | |
| 2835 if( azIn ){ | |
| 2836 for(i=0, j=-1; azIn[i]; i++){ | |
| 2837 if( safe_isalnum(azIn[i][0]) || azIn[i][1] ){ | |
| 2838 dequoteString(azIn[i]); | |
| 2839 if( j>=0 ){ | |
| 2840 azIn[j] = azIn[i]; | |
| 2841 } | |
| 2842 j++; | |
| 2843 } | |
| 2844 } | |
| 2845 azIn[j] = 0; | |
| 2846 } | |
| 2847 } | |
| 2848 | |
| 2849 | |
| 2850 /* | |
| 2851 ** Find the first alphanumeric token in the string zIn. Null-terminate | |
| 2852 ** this token. Remove any quotation marks. And return a pointer to | |
| 2853 ** the result. | |
| 2854 */ | |
| 2855 static char *firstToken(char *zIn, char **pzTail){ | |
| 2856 int n, ttype; | |
| 2857 while(1){ | |
| 2858 n = ftsGetToken(zIn, &ttype); | |
| 2859 if( ttype==TOKEN_SPACE ){ | |
| 2860 zIn += n; | |
| 2861 }else if( ttype==TOKEN_EOF ){ | |
| 2862 *pzTail = zIn; | |
| 2863 return 0; | |
| 2864 }else{ | |
| 2865 zIn[n] = 0; | |
| 2866 *pzTail = &zIn[1]; | |
| 2867 dequoteString(zIn); | |
| 2868 return zIn; | |
| 2869 } | |
| 2870 } | |
| 2871 /*NOTREACHED*/ | |
| 2872 } | |
| 2873 | |
| 2874 /* Return true if... | |
| 2875 ** | |
| 2876 ** * s begins with the string t, ignoring case | |
| 2877 ** * s is longer than t | |
| 2878 ** * The first character of s beyond t is not a alphanumeric | |
| 2879 ** | |
| 2880 ** Ignore leading space in *s. | |
| 2881 ** | |
| 2882 ** To put it another way, return true if the first token of | |
| 2883 ** s[] is t[]. | |
| 2884 */ | |
| 2885 static int startsWith(const char *s, const char *t){ | |
| 2886 while( safe_isspace(*s) ){ s++; } | |
| 2887 while( *t ){ | |
| 2888 if( safe_tolower(*s++)!=safe_tolower(*t++) ) return 0; | |
| 2889 } | |
| 2890 return *s!='_' && !safe_isalnum(*s); | |
| 2891 } | |
| 2892 | |
| 2893 /* | |
| 2894 ** An instance of this structure defines the "spec" of a | |
| 2895 ** full text index. This structure is populated by parseSpec | |
| 2896 ** and use by fulltextConnect and fulltextCreate. | |
| 2897 */ | |
| 2898 typedef struct TableSpec { | |
| 2899 const char *zDb; /* Logical database name */ | |
| 2900 const char *zName; /* Name of the full-text index */ | |
| 2901 int nColumn; /* Number of columns to be indexed */ | |
| 2902 char **azColumn; /* Original names of columns to be indexed */ | |
| 2903 char **azContentColumn; /* Column names for %_content */ | |
| 2904 char **azTokenizer; /* Name of tokenizer and its arguments */ | |
| 2905 } TableSpec; | |
| 2906 | |
| 2907 /* | |
| 2908 ** Reclaim all of the memory used by a TableSpec | |
| 2909 */ | |
| 2910 static void clearTableSpec(TableSpec *p) { | |
| 2911 sqlite3_free(p->azColumn); | |
| 2912 sqlite3_free(p->azContentColumn); | |
| 2913 sqlite3_free(p->azTokenizer); | |
| 2914 } | |
| 2915 | |
| 2916 /* Parse a CREATE VIRTUAL TABLE statement, which looks like this: | |
| 2917 * | |
| 2918 * CREATE VIRTUAL TABLE email | |
| 2919 * USING fts3(subject, body, tokenize mytokenizer(myarg)) | |
| 2920 * | |
| 2921 * We return parsed information in a TableSpec structure. | |
| 2922 * | |
| 2923 */ | |
| 2924 static int parseSpec(TableSpec *pSpec, int argc, const char *const*argv, | |
| 2925 char**pzErr){ | |
| 2926 int i, n; | |
| 2927 char *z, *zDummy; | |
| 2928 char **azArg; | |
| 2929 const char *zTokenizer = 0; /* argv[] entry describing the tokenizer */ | |
| 2930 | |
| 2931 assert( argc>=3 ); | |
| 2932 /* Current interface: | |
| 2933 ** argv[0] - module name | |
| 2934 ** argv[1] - database name | |
| 2935 ** argv[2] - table name | |
| 2936 ** argv[3..] - columns, optionally followed by tokenizer specification | |
| 2937 ** and snippet delimiters specification. | |
| 2938 */ | |
| 2939 | |
| 2940 /* Make a copy of the complete argv[][] array in a single allocation. | |
| 2941 ** The argv[][] array is read-only and transient. We can write to the | |
| 2942 ** copy in order to modify things and the copy is persistent. | |
| 2943 */ | |
| 2944 CLEAR(pSpec); | |
| 2945 for(i=n=0; i<argc; i++){ | |
| 2946 n += strlen(argv[i]) + 1; | |
| 2947 } | |
| 2948 azArg = sqlite3_malloc( sizeof(char*)*argc + n ); | |
| 2949 if( azArg==0 ){ | |
| 2950 return SQLITE_NOMEM; | |
| 2951 } | |
| 2952 z = (char*)&azArg[argc]; | |
| 2953 for(i=0; i<argc; i++){ | |
| 2954 azArg[i] = z; | |
| 2955 strcpy(z, argv[i]); | |
| 2956 z += strlen(z)+1; | |
| 2957 } | |
| 2958 | |
| 2959 /* Identify the column names and the tokenizer and delimiter arguments | |
| 2960 ** in the argv[][] array. | |
| 2961 */ | |
| 2962 pSpec->zDb = azArg[1]; | |
| 2963 pSpec->zName = azArg[2]; | |
| 2964 pSpec->nColumn = 0; | |
| 2965 pSpec->azColumn = azArg; | |
| 2966 zTokenizer = "tokenize simple"; | |
| 2967 for(i=3; i<argc; ++i){ | |
| 2968 if( startsWith(azArg[i],"tokenize") ){ | |
| 2969 zTokenizer = azArg[i]; | |
| 2970 }else{ | |
| 2971 z = azArg[pSpec->nColumn] = firstToken(azArg[i], &zDummy); | |
| 2972 pSpec->nColumn++; | |
| 2973 } | |
| 2974 } | |
| 2975 if( pSpec->nColumn==0 ){ | |
| 2976 azArg[0] = "content"; | |
| 2977 pSpec->nColumn = 1; | |
| 2978 } | |
| 2979 | |
| 2980 /* | |
| 2981 ** Construct the list of content column names. | |
| 2982 ** | |
| 2983 ** Each content column name will be of the form cNNAAAA | |
| 2984 ** where NN is the column number and AAAA is the sanitized | |
| 2985 ** column name. "sanitized" means that special characters are | |
| 2986 ** converted to "_". The cNN prefix guarantees that all column | |
| 2987 ** names are unique. | |
| 2988 ** | |
| 2989 ** The AAAA suffix is not strictly necessary. It is included | |
| 2990 ** for the convenience of people who might examine the generated | |
| 2991 ** %_content table and wonder what the columns are used for. | |
| 2992 */ | |
| 2993 pSpec->azContentColumn = sqlite3_malloc( pSpec->nColumn * sizeof(char *) ); | |
| 2994 if( pSpec->azContentColumn==0 ){ | |
| 2995 clearTableSpec(pSpec); | |
| 2996 return SQLITE_NOMEM; | |
| 2997 } | |
| 2998 for(i=0; i<pSpec->nColumn; i++){ | |
| 2999 char *p; | |
| 3000 pSpec->azContentColumn[i] = sqlite3_mprintf("c%d%s", i, azArg[i]); | |
| 3001 for (p = pSpec->azContentColumn[i]; *p ; ++p) { | |
| 3002 if( !safe_isalnum(*p) ) *p = '_'; | |
| 3003 } | |
| 3004 } | |
| 3005 | |
| 3006 /* | |
| 3007 ** Parse the tokenizer specification string. | |
| 3008 */ | |
| 3009 pSpec->azTokenizer = tokenizeString(zTokenizer, &n); | |
| 3010 tokenListToIdList(pSpec->azTokenizer); | |
| 3011 | |
| 3012 return SQLITE_OK; | |
| 3013 } | |
| 3014 | |
| 3015 /* | |
| 3016 ** Generate a CREATE TABLE statement that describes the schema of | |
| 3017 ** the virtual table. Return a pointer to this schema string. | |
| 3018 ** | |
| 3019 ** Space is obtained from sqlite3_mprintf() and should be freed | |
| 3020 ** using sqlite3_free(). | |
| 3021 */ | |
| 3022 static char *fulltextSchema( | |
| 3023 int nColumn, /* Number of columns */ | |
| 3024 const char *const* azColumn, /* List of columns */ | |
| 3025 const char *zTableName /* Name of the table */ | |
| 3026 ){ | |
| 3027 int i; | |
| 3028 char *zSchema, *zNext; | |
| 3029 const char *zSep = "("; | |
| 3030 zSchema = sqlite3_mprintf("CREATE TABLE x"); | |
| 3031 for(i=0; i<nColumn; i++){ | |
| 3032 zNext = sqlite3_mprintf("%s%s%Q", zSchema, zSep, azColumn[i]); | |
| 3033 sqlite3_free(zSchema); | |
| 3034 zSchema = zNext; | |
| 3035 zSep = ","; | |
| 3036 } | |
| 3037 zNext = sqlite3_mprintf("%s,%Q HIDDEN", zSchema, zTableName); | |
| 3038 sqlite3_free(zSchema); | |
| 3039 zSchema = zNext; | |
| 3040 zNext = sqlite3_mprintf("%s,docid HIDDEN)", zSchema); | |
| 3041 sqlite3_free(zSchema); | |
| 3042 return zNext; | |
| 3043 } | |
| 3044 | |
| 3045 /* | |
| 3046 ** Build a new sqlite3_vtab structure that will describe the | |
| 3047 ** fulltext index defined by spec. | |
| 3048 */ | |
| 3049 static int constructVtab( | |
| 3050 sqlite3 *db, /* The SQLite database connection */ | |
| 3051 fts3Hash *pHash, /* Hash table containing tokenizers */ | |
| 3052 TableSpec *spec, /* Parsed spec information from parseSpec() */ | |
| 3053 sqlite3_vtab **ppVTab, /* Write the resulting vtab structure here */ | |
| 3054 char **pzErr /* Write any error message here */ | |
| 3055 ){ | |
| 3056 int rc; | |
| 3057 int n; | |
| 3058 fulltext_vtab *v = 0; | |
| 3059 const sqlite3_tokenizer_module *m = NULL; | |
| 3060 char *schema; | |
| 3061 | |
| 3062 char const *zTok; /* Name of tokenizer to use for this fts table */ | |
| 3063 int nTok; /* Length of zTok, including nul terminator */ | |
| 3064 | |
| 3065 v = (fulltext_vtab *) sqlite3_malloc(sizeof(fulltext_vtab)); | |
| 3066 if( v==0 ) return SQLITE_NOMEM; | |
| 3067 CLEAR(v); | |
| 3068 /* sqlite will initialize v->base */ | |
| 3069 v->db = db; | |
| 3070 v->zDb = spec->zDb; /* Freed when azColumn is freed */ | |
| 3071 v->zName = spec->zName; /* Freed when azColumn is freed */ | |
| 3072 v->nColumn = spec->nColumn; | |
| 3073 v->azContentColumn = spec->azContentColumn; | |
| 3074 spec->azContentColumn = 0; | |
| 3075 v->azColumn = spec->azColumn; | |
| 3076 spec->azColumn = 0; | |
| 3077 | |
| 3078 if( spec->azTokenizer==0 ){ | |
| 3079 return SQLITE_NOMEM; | |
| 3080 } | |
| 3081 | |
| 3082 zTok = spec->azTokenizer[0]; | |
| 3083 if( !zTok ){ | |
| 3084 zTok = "simple"; | |
| 3085 } | |
| 3086 nTok = strlen(zTok)+1; | |
| 3087 | |
| 3088 m = (sqlite3_tokenizer_module *)sqlite3Fts3HashFind(pHash, zTok, nTok); | |
| 3089 if( !m ){ | |
| 3090 *pzErr = sqlite3_mprintf("unknown tokenizer: %s", spec->azTokenizer[0]); | |
| 3091 rc = SQLITE_ERROR; | |
| 3092 goto err; | |
| 3093 } | |
| 3094 | |
| 3095 for(n=0; spec->azTokenizer[n]; n++){} | |
| 3096 if( n ){ | |
| 3097 rc = m->xCreate(n-1, (const char*const*)&spec->azTokenizer[1], | |
| 3098 &v->pTokenizer); | |
| 3099 }else{ | |
| 3100 rc = m->xCreate(0, 0, &v->pTokenizer); | |
| 3101 } | |
| 3102 if( rc!=SQLITE_OK ) goto err; | |
| 3103 v->pTokenizer->pModule = m; | |
| 3104 | |
| 3105 /* TODO: verify the existence of backing tables foo_content, foo_term */ | |
| 3106 | |
| 3107 schema = fulltextSchema(v->nColumn, (const char*const*)v->azColumn, | |
| 3108 spec->zName); | |
| 3109 rc = sqlite3_declare_vtab(db, schema); | |
| 3110 sqlite3_free(schema); | |
| 3111 if( rc!=SQLITE_OK ) goto err; | |
| 3112 | |
| 3113 memset(v->pFulltextStatements, 0, sizeof(v->pFulltextStatements)); | |
| 3114 | |
| 3115 /* Indicate that the buffer is not live. */ | |
| 3116 v->nPendingData = -1; | |
| 3117 | |
| 3118 *ppVTab = &v->base; | |
| 3119 FTSTRACE(("FTS3 Connect %p\n", v)); | |
| 3120 | |
| 3121 return rc; | |
| 3122 | |
| 3123 err: | |
| 3124 fulltext_vtab_destroy(v); | |
| 3125 return rc; | |
| 3126 } | |
| 3127 | |
| 3128 static int fulltextConnect( | |
| 3129 sqlite3 *db, | |
| 3130 void *pAux, | |
| 3131 int argc, const char *const*argv, | |
| 3132 sqlite3_vtab **ppVTab, | |
| 3133 char **pzErr | |
| 3134 ){ | |
| 3135 TableSpec spec; | |
| 3136 int rc = parseSpec(&spec, argc, argv, pzErr); | |
| 3137 if( rc!=SQLITE_OK ) return rc; | |
| 3138 | |
| 3139 rc = constructVtab(db, (fts3Hash *)pAux, &spec, ppVTab, pzErr); | |
| 3140 clearTableSpec(&spec); | |
| 3141 return rc; | |
| 3142 } | |
| 3143 | |
| 3144 /* The %_content table holds the text of each document, with | |
| 3145 ** the docid column exposed as the SQLite rowid for the table. | |
| 3146 */ | |
| 3147 /* TODO(shess) This comment needs elaboration to match the updated | |
| 3148 ** code. Work it into the top-of-file comment at that time. | |
| 3149 */ | |
| 3150 static int fulltextCreate(sqlite3 *db, void *pAux, | |
| 3151 int argc, const char * const *argv, | |
| 3152 sqlite3_vtab **ppVTab, char **pzErr){ | |
| 3153 int rc; | |
| 3154 TableSpec spec; | |
| 3155 StringBuffer schema; | |
| 3156 FTSTRACE(("FTS3 Create\n")); | |
| 3157 | |
| 3158 rc = parseSpec(&spec, argc, argv, pzErr); | |
| 3159 if( rc!=SQLITE_OK ) return rc; | |
| 3160 | |
| 3161 initStringBuffer(&schema); | |
| 3162 append(&schema, "CREATE TABLE %_content("); | |
| 3163 append(&schema, " docid INTEGER PRIMARY KEY,"); | |
| 3164 appendList(&schema, spec.nColumn, spec.azContentColumn); | |
| 3165 append(&schema, ")"); | |
| 3166 rc = sql_exec(db, spec.zDb, spec.zName, stringBufferData(&schema)); | |
| 3167 stringBufferDestroy(&schema); | |
| 3168 if( rc!=SQLITE_OK ) goto out; | |
| 3169 | |
| 3170 rc = sql_exec(db, spec.zDb, spec.zName, | |
| 3171 "create table %_segments(" | |
| 3172 " blockid INTEGER PRIMARY KEY," | |
| 3173 " block blob" | |
| 3174 ");" | |
| 3175 ); | |
| 3176 if( rc!=SQLITE_OK ) goto out; | |
| 3177 | |
| 3178 rc = sql_exec(db, spec.zDb, spec.zName, | |
| 3179 "create table %_segdir(" | |
| 3180 " level integer," | |
| 3181 " idx integer," | |
| 3182 " start_block integer," | |
| 3183 " leaves_end_block integer," | |
| 3184 " end_block integer," | |
| 3185 " root blob," | |
| 3186 " primary key(level, idx)" | |
| 3187 ");"); | |
| 3188 if( rc!=SQLITE_OK ) goto out; | |
| 3189 | |
| 3190 rc = constructVtab(db, (fts3Hash *)pAux, &spec, ppVTab, pzErr); | |
| 3191 | |
| 3192 out: | |
| 3193 clearTableSpec(&spec); | |
| 3194 return rc; | |
| 3195 } | |
| 3196 | |
| 3197 /* Decide how to handle an SQL query. */ | |
| 3198 static int fulltextBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){ | |
| 3199 fulltext_vtab *v = (fulltext_vtab *)pVTab; | |
| 3200 int i; | |
| 3201 FTSTRACE(("FTS3 BestIndex\n")); | |
| 3202 | |
| 3203 for(i=0; i<pInfo->nConstraint; ++i){ | |
| 3204 const struct sqlite3_index_constraint *pConstraint; | |
| 3205 pConstraint = &pInfo->aConstraint[i]; | |
| 3206 if( pConstraint->usable ) { | |
| 3207 if( (pConstraint->iColumn==-1 || pConstraint->iColumn==v->nColumn+1) && | |
| 3208 pConstraint->op==SQLITE_INDEX_CONSTRAINT_EQ ){ | |
| 3209 pInfo->idxNum = QUERY_DOCID; /* lookup by docid */ | |
| 3210 FTSTRACE(("FTS3 QUERY_DOCID\n")); | |
| 3211 } else if( pConstraint->iColumn>=0 && pConstraint->iColumn<=v->nColumn && | |
| 3212 pConstraint->op==SQLITE_INDEX_CONSTRAINT_MATCH ){ | |
| 3213 /* full-text search */ | |
| 3214 pInfo->idxNum = QUERY_FULLTEXT + pConstraint->iColumn; | |
| 3215 FTSTRACE(("FTS3 QUERY_FULLTEXT %d\n", pConstraint->iColumn)); | |
| 3216 } else continue; | |
| 3217 | |
| 3218 pInfo->aConstraintUsage[i].argvIndex = 1; | |
| 3219 pInfo->aConstraintUsage[i].omit = 1; | |
| 3220 | |
| 3221 /* An arbitrary value for now. | |
| 3222 * TODO: Perhaps docid matches should be considered cheaper than | |
| 3223 * full-text searches. */ | |
| 3224 pInfo->estimatedCost = 1.0; | |
| 3225 | |
| 3226 return SQLITE_OK; | |
| 3227 } | |
| 3228 } | |
| 3229 pInfo->idxNum = QUERY_GENERIC; | |
| 3230 return SQLITE_OK; | |
| 3231 } | |
| 3232 | |
| 3233 static int fulltextDisconnect(sqlite3_vtab *pVTab){ | |
| 3234 FTSTRACE(("FTS3 Disconnect %p\n", pVTab)); | |
| 3235 fulltext_vtab_destroy((fulltext_vtab *)pVTab); | |
| 3236 return SQLITE_OK; | |
| 3237 } | |
| 3238 | |
| 3239 static int fulltextDestroy(sqlite3_vtab *pVTab){ | |
| 3240 fulltext_vtab *v = (fulltext_vtab *)pVTab; | |
| 3241 int rc; | |
| 3242 | |
| 3243 FTSTRACE(("FTS3 Destroy %p\n", pVTab)); | |
| 3244 rc = sql_exec(v->db, v->zDb, v->zName, | |
| 3245 "drop table if exists %_content;" | |
| 3246 "drop table if exists %_segments;" | |
| 3247 "drop table if exists %_segdir;" | |
| 3248 ); | |
| 3249 if( rc!=SQLITE_OK ) return rc; | |
| 3250 | |
| 3251 fulltext_vtab_destroy((fulltext_vtab *)pVTab); | |
| 3252 return SQLITE_OK; | |
| 3253 } | |
| 3254 | |
| 3255 static int fulltextOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor){ | |
| 3256 fulltext_cursor *c; | |
| 3257 | |
| 3258 c = (fulltext_cursor *) sqlite3_malloc(sizeof(fulltext_cursor)); | |
| 3259 if( c ){ | |
| 3260 memset(c, 0, sizeof(fulltext_cursor)); | |
| 3261 /* sqlite will initialize c->base */ | |
| 3262 *ppCursor = &c->base; | |
| 3263 FTSTRACE(("FTS3 Open %p: %p\n", pVTab, c)); | |
| 3264 return SQLITE_OK; | |
| 3265 }else{ | |
| 3266 return SQLITE_NOMEM; | |
| 3267 } | |
| 3268 } | |
| 3269 | |
| 3270 /* Free all of the dynamically allocated memory held by the | |
| 3271 ** Snippet | |
| 3272 */ | |
| 3273 static void snippetClear(Snippet *p){ | |
| 3274 sqlite3_free(p->aMatch); | |
| 3275 sqlite3_free(p->zOffset); | |
| 3276 sqlite3_free(p->zSnippet); | |
| 3277 CLEAR(p); | |
| 3278 } | |
| 3279 | |
| 3280 /* | |
| 3281 ** Append a single entry to the p->aMatch[] log. | |
| 3282 */ | |
| 3283 static void snippetAppendMatch( | |
| 3284 Snippet *p, /* Append the entry to this snippet */ | |
| 3285 int iCol, int iTerm, /* The column and query term */ | |
| 3286 int iToken, /* Matching token in document */ | |
| 3287 int iStart, int nByte /* Offset and size of the match */ | |
| 3288 ){ | |
| 3289 int i; | |
| 3290 struct snippetMatch *pMatch; | |
| 3291 if( p->nMatch+1>=p->nAlloc ){ | |
| 3292 p->nAlloc = p->nAlloc*2 + 10; | |
| 3293 p->aMatch = sqlite3_realloc(p->aMatch, p->nAlloc*sizeof(p->aMatch[0]) ); | |
| 3294 if( p->aMatch==0 ){ | |
| 3295 p->nMatch = 0; | |
| 3296 p->nAlloc = 0; | |
| 3297 return; | |
| 3298 } | |
| 3299 } | |
| 3300 i = p->nMatch++; | |
| 3301 pMatch = &p->aMatch[i]; | |
| 3302 pMatch->iCol = iCol; | |
| 3303 pMatch->iTerm = iTerm; | |
| 3304 pMatch->iToken = iToken; | |
| 3305 pMatch->iStart = iStart; | |
| 3306 pMatch->nByte = nByte; | |
| 3307 } | |
| 3308 | |
| 3309 /* | |
| 3310 ** Sizing information for the circular buffer used in snippetOffsetsOfColumn() | |
| 3311 */ | |
| 3312 #define FTS3_ROTOR_SZ (32) | |
| 3313 #define FTS3_ROTOR_MASK (FTS3_ROTOR_SZ-1) | |
| 3314 | |
| 3315 /* | |
| 3316 ** Function to iterate through the tokens of a compiled expression. | |
| 3317 ** | |
| 3318 ** Except, skip all tokens on the right-hand side of a NOT operator. | |
| 3319 ** This function is used to find tokens as part of snippet and offset | |
| 3320 ** generation and we do nt want snippets and offsets to report matches | |
| 3321 ** for tokens on the RHS of a NOT. | |
| 3322 */ | |
| 3323 static int fts3NextExprToken(Fts3Expr **ppExpr, int *piToken){ | |
| 3324 Fts3Expr *p = *ppExpr; | |
| 3325 int iToken = *piToken; | |
| 3326 if( iToken<0 ){ | |
| 3327 /* In this case the expression p is the root of an expression tree. | |
| 3328 ** Move to the first token in the expression tree. | |
| 3329 */ | |
| 3330 while( p->pLeft ){ | |
| 3331 p = p->pLeft; | |
| 3332 } | |
| 3333 iToken = 0; | |
| 3334 }else{ | |
| 3335 assert(p && p->eType==FTSQUERY_PHRASE ); | |
| 3336 if( iToken<(p->pPhrase->nToken-1) ){ | |
| 3337 iToken++; | |
| 3338 }else{ | |
| 3339 iToken = 0; | |
| 3340 while( p->pParent && p->pParent->pLeft!=p ){ | |
| 3341 assert( p->pParent->pRight==p ); | |
| 3342 p = p->pParent; | |
| 3343 } | |
| 3344 p = p->pParent; | |
| 3345 if( p ){ | |
| 3346 assert( p->pRight!=0 ); | |
| 3347 p = p->pRight; | |
| 3348 while( p->pLeft ){ | |
| 3349 p = p->pLeft; | |
| 3350 } | |
| 3351 } | |
| 3352 } | |
| 3353 } | |
| 3354 | |
| 3355 *ppExpr = p; | |
| 3356 *piToken = iToken; | |
| 3357 return p?1:0; | |
| 3358 } | |
| 3359 | |
| 3360 /* | |
| 3361 ** Return TRUE if the expression node pExpr is located beneath the | |
| 3362 ** RHS of a NOT operator. | |
| 3363 */ | |
| 3364 static int fts3ExprBeneathNot(Fts3Expr *p){ | |
| 3365 Fts3Expr *pParent; | |
| 3366 while( p ){ | |
| 3367 pParent = p->pParent; | |
| 3368 if( pParent && pParent->eType==FTSQUERY_NOT && pParent->pRight==p ){ | |
| 3369 return 1; | |
| 3370 } | |
| 3371 p = pParent; | |
| 3372 } | |
| 3373 return 0; | |
| 3374 } | |
| 3375 | |
| 3376 /* | |
| 3377 ** Add entries to pSnippet->aMatch[] for every match that occurs against | |
| 3378 ** document zDoc[0..nDoc-1] which is stored in column iColumn. | |
| 3379 */ | |
| 3380 static void snippetOffsetsOfColumn( | |
| 3381 fulltext_cursor *pCur, /* The fulltest search cursor */ | |
| 3382 Snippet *pSnippet, /* The Snippet object to be filled in */ | |
| 3383 int iColumn, /* Index of fulltext table column */ | |
| 3384 const char *zDoc, /* Text of the fulltext table column */ | |
| 3385 int nDoc /* Length of zDoc in bytes */ | |
| 3386 ){ | |
| 3387 const sqlite3_tokenizer_module *pTModule; /* The tokenizer module */ | |
| 3388 sqlite3_tokenizer *pTokenizer; /* The specific tokenizer */ | |
| 3389 sqlite3_tokenizer_cursor *pTCursor; /* Tokenizer cursor */ | |
| 3390 fulltext_vtab *pVtab; /* The full text index */ | |
| 3391 int nColumn; /* Number of columns in the index */ | |
| 3392 int i, j; /* Loop counters */ | |
| 3393 int rc; /* Return code */ | |
| 3394 unsigned int match, prevMatch; /* Phrase search bitmasks */ | |
| 3395 const char *zToken; /* Next token from the tokenizer */ | |
| 3396 int nToken; /* Size of zToken */ | |
| 3397 int iBegin, iEnd, iPos; /* Offsets of beginning and end */ | |
| 3398 | |
| 3399 /* The following variables keep a circular buffer of the last | |
| 3400 ** few tokens */ | |
| 3401 unsigned int iRotor = 0; /* Index of current token */ | |
| 3402 int iRotorBegin[FTS3_ROTOR_SZ]; /* Beginning offset of token */ | |
| 3403 int iRotorLen[FTS3_ROTOR_SZ]; /* Length of token */ | |
| 3404 | |
| 3405 pVtab = cursor_vtab(pCur); | |
| 3406 nColumn = pVtab->nColumn; | |
| 3407 pTokenizer = pVtab->pTokenizer; | |
| 3408 pTModule = pTokenizer->pModule; | |
| 3409 rc = pTModule->xOpen(pTokenizer, zDoc, nDoc, &pTCursor); | |
| 3410 if( rc ) return; | |
| 3411 pTCursor->pTokenizer = pTokenizer; | |
| 3412 | |
| 3413 prevMatch = 0; | |
| 3414 while( !pTModule->xNext(pTCursor, &zToken, &nToken, &iBegin, &iEnd, &iPos) ){ | |
| 3415 Fts3Expr *pIter = pCur->pExpr; | |
| 3416 int iIter = -1; | |
| 3417 iRotorBegin[iRotor&FTS3_ROTOR_MASK] = iBegin; | |
| 3418 iRotorLen[iRotor&FTS3_ROTOR_MASK] = iEnd-iBegin; | |
| 3419 match = 0; | |
| 3420 for(i=0; i<(FTS3_ROTOR_SZ-1) && fts3NextExprToken(&pIter, &iIter); i++){ | |
| 3421 int nPhrase; /* Number of tokens in current phrase */ | |
| 3422 struct PhraseToken *pToken; /* Current token */ | |
| 3423 int iCol; /* Column index */ | |
| 3424 | |
| 3425 if( fts3ExprBeneathNot(pIter) ) continue; | |
| 3426 nPhrase = pIter->pPhrase->nToken; | |
| 3427 pToken = &pIter->pPhrase->aToken[iIter]; | |
| 3428 iCol = pIter->pPhrase->iColumn; | |
| 3429 if( iCol>=0 && iCol<nColumn && iCol!=iColumn ) continue; | |
| 3430 if( pToken->n>nToken ) continue; | |
| 3431 if( !pToken->isPrefix && pToken->n<nToken ) continue; | |
| 3432 assert( pToken->n<=nToken ); | |
| 3433 if( memcmp(pToken->z, zToken, pToken->n) ) continue; | |
| 3434 if( iIter>0 && (prevMatch & (1<<i))==0 ) continue; | |
| 3435 match |= 1<<i; | |
| 3436 if( i==(FTS3_ROTOR_SZ-2) || nPhrase==iIter+1 ){ | |
| 3437 for(j=nPhrase-1; j>=0; j--){ | |
| 3438 int k = (iRotor-j) & FTS3_ROTOR_MASK; | |
| 3439 snippetAppendMatch(pSnippet, iColumn, i-j, iPos-j, | |
| 3440 iRotorBegin[k], iRotorLen[k]); | |
| 3441 } | |
| 3442 } | |
| 3443 } | |
| 3444 prevMatch = match<<1; | |
| 3445 iRotor++; | |
| 3446 } | |
| 3447 pTModule->xClose(pTCursor); | |
| 3448 } | |
| 3449 | |
| 3450 /* | |
| 3451 ** Remove entries from the pSnippet structure to account for the NEAR | |
| 3452 ** operator. When this is called, pSnippet contains the list of token | |
| 3453 ** offsets produced by treating all NEAR operators as AND operators. | |
| 3454 ** This function removes any entries that should not be present after | |
| 3455 ** accounting for the NEAR restriction. For example, if the queried | |
| 3456 ** document is: | |
| 3457 ** | |
| 3458 ** "A B C D E A" | |
| 3459 ** | |
| 3460 ** and the query is: | |
| 3461 ** | |
| 3462 ** A NEAR/0 E | |
| 3463 ** | |
| 3464 ** then when this function is called the Snippet contains token offsets | |
| 3465 ** 0, 4 and 5. This function removes the "0" entry (because the first A | |
| 3466 ** is not near enough to an E). | |
| 3467 ** | |
| 3468 ** When this function is called, the value pointed to by parameter piLeft is | |
| 3469 ** the integer id of the left-most token in the expression tree headed by | |
| 3470 ** pExpr. This function increments *piLeft by the total number of tokens | |
| 3471 ** in the expression tree headed by pExpr. | |
| 3472 ** | |
| 3473 ** Return 1 if any trimming occurs. Return 0 if no trimming is required. | |
| 3474 */ | |
| 3475 static int trimSnippetOffsets( | |
| 3476 Fts3Expr *pExpr, /* The search expression */ | |
| 3477 Snippet *pSnippet, /* The set of snippet offsets to be trimmed */ | |
| 3478 int *piLeft /* Index of left-most token in pExpr */ | |
| 3479 ){ | |
| 3480 if( pExpr ){ | |
| 3481 if( trimSnippetOffsets(pExpr->pLeft, pSnippet, piLeft) ){ | |
| 3482 return 1; | |
| 3483 } | |
| 3484 | |
| 3485 switch( pExpr->eType ){ | |
| 3486 case FTSQUERY_PHRASE: | |
| 3487 *piLeft += pExpr->pPhrase->nToken; | |
| 3488 break; | |
| 3489 case FTSQUERY_NEAR: { | |
| 3490 /* The right-hand-side of a NEAR operator is always a phrase. The | |
| 3491 ** left-hand-side is either a phrase or an expression tree that is | |
| 3492 ** itself headed by a NEAR operator. The following initializations | |
| 3493 ** set local variable iLeft to the token number of the left-most | |
| 3494 ** token in the right-hand phrase, and iRight to the right most | |
| 3495 ** token in the same phrase. For example, if we had: | |
| 3496 ** | |
| 3497 ** <col> MATCH '"abc def" NEAR/2 "ghi jkl"' | |
| 3498 ** | |
| 3499 ** then iLeft will be set to 2 (token number of ghi) and nToken will | |
| 3500 ** be set to 4. | |
| 3501 */ | |
| 3502 Fts3Expr *pLeft = pExpr->pLeft; | |
| 3503 Fts3Expr *pRight = pExpr->pRight; | |
| 3504 int iLeft = *piLeft; | |
| 3505 int nNear = pExpr->nNear; | |
| 3506 int nToken = pRight->pPhrase->nToken; | |
| 3507 int jj, ii; | |
| 3508 if( pLeft->eType==FTSQUERY_NEAR ){ | |
| 3509 pLeft = pLeft->pRight; | |
| 3510 } | |
| 3511 assert( pRight->eType==FTSQUERY_PHRASE ); | |
| 3512 assert( pLeft->eType==FTSQUERY_PHRASE ); | |
| 3513 nToken += pLeft->pPhrase->nToken; | |
| 3514 | |
| 3515 for(ii=0; ii<pSnippet->nMatch; ii++){ | |
| 3516 struct snippetMatch *p = &pSnippet->aMatch[ii]; | |
| 3517 if( p->iTerm==iLeft ){ | |
| 3518 int isOk = 0; | |
| 3519 /* Snippet ii is an occurence of query term iLeft in the document. | |
| 3520 ** It occurs at position (p->iToken) of the document. We now | |
| 3521 ** search for an instance of token (iLeft-1) somewhere in the | |
| 3522 ** range (p->iToken - nNear)...(p->iToken + nNear + nToken) within | |
| 3523 ** the set of snippetMatch structures. If one is found, proceed. | |
| 3524 ** If one cannot be found, then remove snippets ii..(ii+N-1) | |
| 3525 ** from the matching snippets, where N is the number of tokens | |
| 3526 ** in phrase pRight->pPhrase. | |
| 3527 */ | |
| 3528 for(jj=0; isOk==0 && jj<pSnippet->nMatch; jj++){ | |
| 3529 struct snippetMatch *p2 = &pSnippet->aMatch[jj]; | |
| 3530 if( p2->iTerm==(iLeft-1) ){ | |
| 3531 if( p2->iToken>=(p->iToken-nNear-1) | |
| 3532 && p2->iToken<(p->iToken+nNear+nToken) | |
| 3533 ){ | |
| 3534 isOk = 1; | |
| 3535 } | |
| 3536 } | |
| 3537 } | |
| 3538 if( !isOk ){ | |
| 3539 int kk; | |
| 3540 for(kk=0; kk<pRight->pPhrase->nToken; kk++){ | |
| 3541 pSnippet->aMatch[kk+ii].iTerm = -2; | |
| 3542 } | |
| 3543 return 1; | |
| 3544 } | |
| 3545 } | |
| 3546 if( p->iTerm==(iLeft-1) ){ | |
| 3547 int isOk = 0; | |
| 3548 for(jj=0; isOk==0 && jj<pSnippet->nMatch; jj++){ | |
| 3549 struct snippetMatch *p2 = &pSnippet->aMatch[jj]; | |
| 3550 if( p2->iTerm==iLeft ){ | |
| 3551 if( p2->iToken<=(p->iToken+nNear+1) | |
| 3552 && p2->iToken>(p->iToken-nNear-nToken) | |
| 3553 ){ | |
| 3554 isOk = 1; | |
| 3555 } | |
| 3556 } | |
| 3557 } | |
| 3558 if( !isOk ){ | |
| 3559 int kk; | |
| 3560 for(kk=0; kk<pLeft->pPhrase->nToken; kk++){ | |
| 3561 pSnippet->aMatch[ii-kk].iTerm = -2; | |
| 3562 } | |
| 3563 return 1; | |
| 3564 } | |
| 3565 } | |
| 3566 } | |
| 3567 break; | |
| 3568 } | |
| 3569 } | |
| 3570 | |
| 3571 if( trimSnippetOffsets(pExpr->pRight, pSnippet, piLeft) ){ | |
| 3572 return 1; | |
| 3573 } | |
| 3574 } | |
| 3575 return 0; | |
| 3576 } | |
| 3577 | |
| 3578 /* | |
| 3579 ** Compute all offsets for the current row of the query. | |
| 3580 ** If the offsets have already been computed, this routine is a no-op. | |
| 3581 */ | |
| 3582 static void snippetAllOffsets(fulltext_cursor *p){ | |
| 3583 int nColumn; | |
| 3584 int iColumn, i; | |
| 3585 int iFirst, iLast; | |
| 3586 int iTerm = 0; | |
| 3587 fulltext_vtab *pFts = cursor_vtab(p); | |
| 3588 | |
| 3589 if( p->snippet.nMatch || p->pExpr==0 ){ | |
| 3590 return; | |
| 3591 } | |
| 3592 nColumn = pFts->nColumn; | |
| 3593 iColumn = (p->iCursorType - QUERY_FULLTEXT); | |
| 3594 if( iColumn<0 || iColumn>=nColumn ){ | |
| 3595 /* Look for matches over all columns of the full-text index */ | |
| 3596 iFirst = 0; | |
| 3597 iLast = nColumn-1; | |
| 3598 }else{ | |
| 3599 /* Look for matches in the iColumn-th column of the index only */ | |
| 3600 iFirst = iColumn; | |
| 3601 iLast = iColumn; | |
| 3602 } | |
| 3603 for(i=iFirst; i<=iLast; i++){ | |
| 3604 const char *zDoc; | |
| 3605 int nDoc; | |
| 3606 zDoc = (const char*)sqlite3_column_text(p->pStmt, i+1); | |
| 3607 nDoc = sqlite3_column_bytes(p->pStmt, i+1); | |
| 3608 snippetOffsetsOfColumn(p, &p->snippet, i, zDoc, nDoc); | |
| 3609 } | |
| 3610 | |
| 3611 while( trimSnippetOffsets(p->pExpr, &p->snippet, &iTerm) ){ | |
| 3612 iTerm = 0; | |
| 3613 } | |
| 3614 } | |
| 3615 | |
| 3616 /* | |
| 3617 ** Convert the information in the aMatch[] array of the snippet | |
| 3618 ** into the string zOffset[0..nOffset-1]. This string is used as | |
| 3619 ** the return of the SQL offsets() function. | |
| 3620 */ | |
| 3621 static void snippetOffsetText(Snippet *p){ | |
| 3622 int i; | |
| 3623 int cnt = 0; | |
| 3624 StringBuffer sb; | |
| 3625 char zBuf[200]; | |
| 3626 if( p->zOffset ) return; | |
| 3627 initStringBuffer(&sb); | |
| 3628 for(i=0; i<p->nMatch; i++){ | |
| 3629 struct snippetMatch *pMatch = &p->aMatch[i]; | |
| 3630 if( pMatch->iTerm>=0 ){ | |
| 3631 /* If snippetMatch.iTerm is less than 0, then the match was | |
| 3632 ** discarded as part of processing the NEAR operator (see the | |
| 3633 ** trimSnippetOffsetsForNear() function for details). Ignore | |
| 3634 ** it in this case | |
| 3635 */ | |
| 3636 zBuf[0] = ' '; | |
| 3637 sqlite3_snprintf(sizeof(zBuf)-1, &zBuf[cnt>0], "%d %d %d %d", | |
| 3638 pMatch->iCol, pMatch->iTerm, pMatch->iStart, pMatch->nByte); | |
| 3639 append(&sb, zBuf); | |
| 3640 cnt++; | |
| 3641 } | |
| 3642 } | |
| 3643 p->zOffset = stringBufferData(&sb); | |
| 3644 p->nOffset = stringBufferLength(&sb); | |
| 3645 } | |
| 3646 | |
| 3647 /* | |
| 3648 ** zDoc[0..nDoc-1] is phrase of text. aMatch[0..nMatch-1] are a set | |
| 3649 ** of matching words some of which might be in zDoc. zDoc is column | |
| 3650 ** number iCol. | |
| 3651 ** | |
| 3652 ** iBreak is suggested spot in zDoc where we could begin or end an | |
| 3653 ** excerpt. Return a value similar to iBreak but possibly adjusted | |
| 3654 ** to be a little left or right so that the break point is better. | |
| 3655 */ | |
| 3656 static int wordBoundary( | |
| 3657 int iBreak, /* The suggested break point */ | |
| 3658 const char *zDoc, /* Document text */ | |
| 3659 int nDoc, /* Number of bytes in zDoc[] */ | |
| 3660 struct snippetMatch *aMatch, /* Matching words */ | |
| 3661 int nMatch, /* Number of entries in aMatch[] */ | |
| 3662 int iCol /* The column number for zDoc[] */ | |
| 3663 ){ | |
| 3664 int i; | |
| 3665 if( iBreak<=10 ){ | |
| 3666 return 0; | |
| 3667 } | |
| 3668 if( iBreak>=nDoc-10 ){ | |
| 3669 return nDoc; | |
| 3670 } | |
| 3671 for(i=0; i<nMatch && aMatch[i].iCol<iCol; i++){} | |
| 3672 while( i<nMatch && aMatch[i].iStart+aMatch[i].nByte<iBreak ){ i++; } | |
| 3673 if( i<nMatch ){ | |
| 3674 if( aMatch[i].iStart<iBreak+10 ){ | |
| 3675 return aMatch[i].iStart; | |
| 3676 } | |
| 3677 if( i>0 && aMatch[i-1].iStart+aMatch[i-1].nByte>=iBreak ){ | |
| 3678 return aMatch[i-1].iStart; | |
| 3679 } | |
| 3680 } | |
| 3681 for(i=1; i<=10; i++){ | |
| 3682 if( safe_isspace(zDoc[iBreak-i]) ){ | |
| 3683 return iBreak - i + 1; | |
| 3684 } | |
| 3685 if( safe_isspace(zDoc[iBreak+i]) ){ | |
| 3686 return iBreak + i + 1; | |
| 3687 } | |
| 3688 } | |
| 3689 return iBreak; | |
| 3690 } | |
| 3691 | |
| 3692 | |
| 3693 | |
| 3694 /* | |
| 3695 ** Allowed values for Snippet.aMatch[].snStatus | |
| 3696 */ | |
| 3697 #define SNIPPET_IGNORE 0 /* It is ok to omit this match from the snippet */ | |
| 3698 #define SNIPPET_DESIRED 1 /* We want to include this match in the snippet */ | |
| 3699 | |
| 3700 /* | |
| 3701 ** Generate the text of a snippet. | |
| 3702 */ | |
| 3703 static void snippetText( | |
| 3704 fulltext_cursor *pCursor, /* The cursor we need the snippet for */ | |
| 3705 const char *zStartMark, /* Markup to appear before each match */ | |
| 3706 const char *zEndMark, /* Markup to appear after each match */ | |
| 3707 const char *zEllipsis /* Ellipsis mark */ | |
| 3708 ){ | |
| 3709 int i, j; | |
| 3710 struct snippetMatch *aMatch; | |
| 3711 int nMatch; | |
| 3712 int nDesired; | |
| 3713 StringBuffer sb; | |
| 3714 int tailCol; | |
| 3715 int tailOffset; | |
| 3716 int iCol; | |
| 3717 int nDoc; | |
| 3718 const char *zDoc; | |
| 3719 int iStart, iEnd; | |
| 3720 int tailEllipsis = 0; | |
| 3721 int iMatch; | |
| 3722 | |
| 3723 | |
| 3724 sqlite3_free(pCursor->snippet.zSnippet); | |
| 3725 pCursor->snippet.zSnippet = 0; | |
| 3726 aMatch = pCursor->snippet.aMatch; | |
| 3727 nMatch = pCursor->snippet.nMatch; | |
| 3728 initStringBuffer(&sb); | |
| 3729 | |
| 3730 for(i=0; i<nMatch; i++){ | |
| 3731 aMatch[i].snStatus = SNIPPET_IGNORE; | |
| 3732 } | |
| 3733 nDesired = 0; | |
| 3734 for(i=0; i<FTS3_ROTOR_SZ; i++){ | |
| 3735 for(j=0; j<nMatch; j++){ | |
| 3736 if( aMatch[j].iTerm==i ){ | |
| 3737 aMatch[j].snStatus = SNIPPET_DESIRED; | |
| 3738 nDesired++; | |
| 3739 break; | |
| 3740 } | |
| 3741 } | |
| 3742 } | |
| 3743 | |
| 3744 iMatch = 0; | |
| 3745 tailCol = -1; | |
| 3746 tailOffset = 0; | |
| 3747 for(i=0; i<nMatch && nDesired>0; i++){ | |
| 3748 if( aMatch[i].snStatus!=SNIPPET_DESIRED ) continue; | |
| 3749 nDesired--; | |
| 3750 iCol = aMatch[i].iCol; | |
| 3751 zDoc = (const char*)sqlite3_column_text(pCursor->pStmt, iCol+1); | |
| 3752 nDoc = sqlite3_column_bytes(pCursor->pStmt, iCol+1); | |
| 3753 iStart = aMatch[i].iStart - 40; | |
| 3754 iStart = wordBoundary(iStart, zDoc, nDoc, aMatch, nMatch, iCol); | |
| 3755 if( iStart<=10 ){ | |
| 3756 iStart = 0; | |
| 3757 } | |
| 3758 if( iCol==tailCol && iStart<=tailOffset+20 ){ | |
| 3759 iStart = tailOffset; | |
| 3760 } | |
| 3761 if( (iCol!=tailCol && tailCol>=0) || iStart!=tailOffset ){ | |
| 3762 trimWhiteSpace(&sb); | |
| 3763 appendWhiteSpace(&sb); | |
| 3764 append(&sb, zEllipsis); | |
| 3765 appendWhiteSpace(&sb); | |
| 3766 } | |
| 3767 iEnd = aMatch[i].iStart + aMatch[i].nByte + 40; | |
| 3768 iEnd = wordBoundary(iEnd, zDoc, nDoc, aMatch, nMatch, iCol); | |
| 3769 if( iEnd>=nDoc-10 ){ | |
| 3770 iEnd = nDoc; | |
| 3771 tailEllipsis = 0; | |
| 3772 }else{ | |
| 3773 tailEllipsis = 1; | |
| 3774 } | |
| 3775 while( iMatch<nMatch && aMatch[iMatch].iCol<iCol ){ iMatch++; } | |
| 3776 while( iStart<iEnd ){ | |
| 3777 while( iMatch<nMatch && aMatch[iMatch].iStart<iStart | |
| 3778 && aMatch[iMatch].iCol<=iCol ){ | |
| 3779 iMatch++; | |
| 3780 } | |
| 3781 if( iMatch<nMatch && aMatch[iMatch].iStart<iEnd | |
| 3782 && aMatch[iMatch].iCol==iCol ){ | |
| 3783 nappend(&sb, &zDoc[iStart], aMatch[iMatch].iStart - iStart); | |
| 3784 iStart = aMatch[iMatch].iStart; | |
| 3785 append(&sb, zStartMark); | |
| 3786 nappend(&sb, &zDoc[iStart], aMatch[iMatch].nByte); | |
| 3787 append(&sb, zEndMark); | |
| 3788 iStart += aMatch[iMatch].nByte; | |
| 3789 for(j=iMatch+1; j<nMatch; j++){ | |
| 3790 if( aMatch[j].iTerm==aMatch[iMatch].iTerm | |
| 3791 && aMatch[j].snStatus==SNIPPET_DESIRED ){ | |
| 3792 nDesired--; | |
| 3793 aMatch[j].snStatus = SNIPPET_IGNORE; | |
| 3794 } | |
| 3795 } | |
| 3796 }else{ | |
| 3797 nappend(&sb, &zDoc[iStart], iEnd - iStart); | |
| 3798 iStart = iEnd; | |
| 3799 } | |
| 3800 } | |
| 3801 tailCol = iCol; | |
| 3802 tailOffset = iEnd; | |
| 3803 } | |
| 3804 trimWhiteSpace(&sb); | |
| 3805 if( tailEllipsis ){ | |
| 3806 appendWhiteSpace(&sb); | |
| 3807 append(&sb, zEllipsis); | |
| 3808 } | |
| 3809 pCursor->snippet.zSnippet = stringBufferData(&sb); | |
| 3810 pCursor->snippet.nSnippet = stringBufferLength(&sb); | |
| 3811 } | |
| 3812 | |
| 3813 | |
| 3814 /* | |
| 3815 ** Close the cursor. For additional information see the documentation | |
| 3816 ** on the xClose method of the virtual table interface. | |
| 3817 */ | |
| 3818 static int fulltextClose(sqlite3_vtab_cursor *pCursor){ | |
| 3819 fulltext_cursor *c = (fulltext_cursor *) pCursor; | |
| 3820 FTSTRACE(("FTS3 Close %p\n", c)); | |
| 3821 sqlite3_finalize(c->pStmt); | |
| 3822 sqlite3Fts3ExprFree(c->pExpr); | |
| 3823 snippetClear(&c->snippet); | |
| 3824 if( c->result.nData!=0 ){ | |
| 3825 dlrDestroy(&c->reader); | |
| 3826 } | |
| 3827 dataBufferDestroy(&c->result); | |
| 3828 sqlite3_free(c); | |
| 3829 return SQLITE_OK; | |
| 3830 } | |
| 3831 | |
| 3832 static int fulltextNext(sqlite3_vtab_cursor *pCursor){ | |
| 3833 fulltext_cursor *c = (fulltext_cursor *) pCursor; | |
| 3834 int rc; | |
| 3835 | |
| 3836 FTSTRACE(("FTS3 Next %p\n", pCursor)); | |
| 3837 snippetClear(&c->snippet); | |
| 3838 if( c->iCursorType < QUERY_FULLTEXT ){ | |
| 3839 /* TODO(shess) Handle SQLITE_SCHEMA AND SQLITE_BUSY. */ | |
| 3840 rc = sqlite3_step(c->pStmt); | |
| 3841 switch( rc ){ | |
| 3842 case SQLITE_ROW: | |
| 3843 c->eof = 0; | |
| 3844 return SQLITE_OK; | |
| 3845 case SQLITE_DONE: | |
| 3846 c->eof = 1; | |
| 3847 return SQLITE_OK; | |
| 3848 default: | |
| 3849 c->eof = 1; | |
| 3850 return rc; | |
| 3851 } | |
| 3852 } else { /* full-text query */ | |
| 3853 rc = sqlite3_reset(c->pStmt); | |
| 3854 if( rc!=SQLITE_OK ) return rc; | |
| 3855 | |
| 3856 if( c->result.nData==0 || dlrAtEnd(&c->reader) ){ | |
| 3857 c->eof = 1; | |
| 3858 return SQLITE_OK; | |
| 3859 } | |
| 3860 rc = sqlite3_bind_int64(c->pStmt, 1, dlrDocid(&c->reader)); | |
| 3861 if( rc!=SQLITE_OK ) return rc; | |
| 3862 rc = dlrStep(&c->reader); | |
| 3863 if( rc!=SQLITE_OK ) return rc; | |
| 3864 /* TODO(shess) Handle SQLITE_SCHEMA AND SQLITE_BUSY. */ | |
| 3865 rc = sqlite3_step(c->pStmt); | |
| 3866 if( rc==SQLITE_ROW ){ /* the case we expect */ | |
| 3867 c->eof = 0; | |
| 3868 return SQLITE_OK; | |
| 3869 } | |
| 3870 /* Corrupt if the index refers to missing document. */ | |
| 3871 if( rc==SQLITE_DONE ) return SQLITE_CORRUPT_BKPT; | |
| 3872 | |
| 3873 return rc; | |
| 3874 } | |
| 3875 } | |
| 3876 | |
| 3877 | |
| 3878 /* TODO(shess) If we pushed LeafReader to the top of the file, or to | |
| 3879 ** another file, term_select() could be pushed above | |
| 3880 ** docListOfTerm(). | |
| 3881 */ | |
| 3882 static int termSelect(fulltext_vtab *v, int iColumn, | |
| 3883 const char *pTerm, int nTerm, int isPrefix, | |
| 3884 DocListType iType, DataBuffer *out); | |
| 3885 | |
| 3886 /* | |
| 3887 ** Return a DocList corresponding to the phrase *pPhrase. | |
| 3888 ** | |
| 3889 ** The resulting DL_DOCIDS doclist is stored in pResult, which is | |
| 3890 ** overwritten. | |
| 3891 */ | |
| 3892 static int docListOfPhrase( | |
| 3893 fulltext_vtab *pTab, /* The full text index */ | |
| 3894 Fts3Phrase *pPhrase, /* Phrase to return a doclist corresponding to */ | |
| 3895 DocListType eListType, /* Either DL_DOCIDS or DL_POSITIONS */ | |
| 3896 DataBuffer *pResult /* Write the result here */ | |
| 3897 ){ | |
| 3898 int ii; | |
| 3899 int rc = SQLITE_OK; | |
| 3900 int iCol = pPhrase->iColumn; | |
| 3901 DocListType eType = eListType; | |
| 3902 assert( eType==DL_POSITIONS || eType==DL_DOCIDS ); | |
| 3903 if( pPhrase->nToken>1 ){ | |
| 3904 eType = DL_POSITIONS; | |
| 3905 } | |
| 3906 | |
| 3907 /* This code should never be called with buffered updates. */ | |
| 3908 assert( pTab->nPendingData<0 ); | |
| 3909 | |
| 3910 for(ii=0; rc==SQLITE_OK && ii<pPhrase->nToken; ii++){ | |
| 3911 DataBuffer tmp; | |
| 3912 struct PhraseToken *p = &pPhrase->aToken[ii]; | |
| 3913 rc = termSelect(pTab, iCol, p->z, p->n, p->isPrefix, eType, &tmp); | |
| 3914 if( rc==SQLITE_OK ){ | |
| 3915 if( ii==0 ){ | |
| 3916 *pResult = tmp; | |
| 3917 }else{ | |
| 3918 DataBuffer res = *pResult; | |
| 3919 dataBufferInit(pResult, 0); | |
| 3920 if( ii==(pPhrase->nToken-1) ){ | |
| 3921 eType = eListType; | |
| 3922 } | |
| 3923 rc = docListPhraseMerge( | |
| 3924 res.pData, res.nData, tmp.pData, tmp.nData, 0, 0, eType, pResult | |
| 3925 ); | |
| 3926 dataBufferDestroy(&res); | |
| 3927 dataBufferDestroy(&tmp); | |
| 3928 if( rc!= SQLITE_OK ) return rc; | |
| 3929 } | |
| 3930 } | |
| 3931 } | |
| 3932 | |
| 3933 return rc; | |
| 3934 } | |
| 3935 | |
| 3936 /* | |
| 3937 ** Evaluate the full-text expression pExpr against fts3 table pTab. Write | |
| 3938 ** the results into pRes. | |
| 3939 */ | |
| 3940 static int evalFts3Expr( | |
| 3941 fulltext_vtab *pTab, /* Fts3 Virtual table object */ | |
| 3942 Fts3Expr *pExpr, /* Parsed fts3 expression */ | |
| 3943 DataBuffer *pRes /* OUT: Write results of the expression here */ | |
| 3944 ){ | |
| 3945 int rc = SQLITE_OK; | |
| 3946 | |
| 3947 /* Initialize the output buffer. If this is an empty query (pExpr==0), | |
| 3948 ** this is all that needs to be done. Empty queries produce empty | |
| 3949 ** result sets. | |
| 3950 */ | |
| 3951 dataBufferInit(pRes, 0); | |
| 3952 | |
| 3953 if( pExpr ){ | |
| 3954 if( pExpr->eType==FTSQUERY_PHRASE ){ | |
| 3955 DocListType eType = DL_DOCIDS; | |
| 3956 if( pExpr->pParent && pExpr->pParent->eType==FTSQUERY_NEAR ){ | |
| 3957 eType = DL_POSITIONS; | |
| 3958 } | |
| 3959 rc = docListOfPhrase(pTab, pExpr->pPhrase, eType, pRes); | |
| 3960 }else{ | |
| 3961 DataBuffer lhs; | |
| 3962 DataBuffer rhs; | |
| 3963 | |
| 3964 dataBufferInit(&rhs, 0); | |
| 3965 if( SQLITE_OK==(rc = evalFts3Expr(pTab, pExpr->pLeft, &lhs)) | |
| 3966 && SQLITE_OK==(rc = evalFts3Expr(pTab, pExpr->pRight, &rhs)) | |
| 3967 ){ | |
| 3968 switch( pExpr->eType ){ | |
| 3969 case FTSQUERY_NEAR: { | |
| 3970 int nToken; | |
| 3971 Fts3Expr *pLeft; | |
| 3972 DocListType eType = DL_DOCIDS; | |
| 3973 if( pExpr->pParent && pExpr->pParent->eType==FTSQUERY_NEAR ){ | |
| 3974 eType = DL_POSITIONS; | |
| 3975 } | |
| 3976 pLeft = pExpr->pLeft; | |
| 3977 while( pLeft->eType==FTSQUERY_NEAR ){ | |
| 3978 pLeft=pLeft->pRight; | |
| 3979 } | |
| 3980 assert( pExpr->pRight->eType==FTSQUERY_PHRASE ); | |
| 3981 assert( pLeft->eType==FTSQUERY_PHRASE ); | |
| 3982 nToken = pLeft->pPhrase->nToken + pExpr->pRight->pPhrase->nToken; | |
| 3983 rc = docListPhraseMerge(lhs.pData, lhs.nData, rhs.pData, rhs.nData, | |
| 3984 pExpr->nNear+1, nToken, eType, pRes | |
| 3985 ); | |
| 3986 break; | |
| 3987 } | |
| 3988 case FTSQUERY_NOT: { | |
| 3989 rc = docListExceptMerge(lhs.pData, lhs.nData, rhs.pData, rhs.nData,p
Res); | |
| 3990 break; | |
| 3991 } | |
| 3992 case FTSQUERY_AND: { | |
| 3993 rc = docListAndMerge(lhs.pData, lhs.nData, rhs.pData, rhs.nData, pRe
s); | |
| 3994 break; | |
| 3995 } | |
| 3996 case FTSQUERY_OR: { | |
| 3997 rc = docListOrMerge(lhs.pData, lhs.nData, rhs.pData, rhs.nData, pRes
); | |
| 3998 break; | |
| 3999 } | |
| 4000 } | |
| 4001 } | |
| 4002 dataBufferDestroy(&lhs); | |
| 4003 dataBufferDestroy(&rhs); | |
| 4004 } | |
| 4005 } | |
| 4006 | |
| 4007 return rc; | |
| 4008 } | |
| 4009 | |
| 4010 /* TODO(shess) Refactor the code to remove this forward decl. */ | |
| 4011 static int flushPendingTerms(fulltext_vtab *v); | |
| 4012 | |
| 4013 /* Perform a full-text query using the search expression in | |
| 4014 ** zInput[0..nInput-1]. Return a list of matching documents | |
| 4015 ** in pResult. | |
| 4016 ** | |
| 4017 ** Queries must match column iColumn. Or if iColumn>=nColumn | |
| 4018 ** they are allowed to match against any column. | |
| 4019 */ | |
| 4020 static int fulltextQuery( | |
| 4021 fulltext_vtab *v, /* The full text index */ | |
| 4022 int iColumn, /* Match against this column by default */ | |
| 4023 const char *zInput, /* The query string */ | |
| 4024 int nInput, /* Number of bytes in zInput[] */ | |
| 4025 DataBuffer *pResult, /* Write the result doclist here */ | |
| 4026 Fts3Expr **ppExpr /* Put parsed query string here */ | |
| 4027 ){ | |
| 4028 int rc; | |
| 4029 | |
| 4030 /* TODO(shess) Instead of flushing pendingTerms, we could query for | |
| 4031 ** the relevant term and merge the doclist into what we receive from | |
| 4032 ** the database. Wait and see if this is a common issue, first. | |
| 4033 ** | |
| 4034 ** A good reason not to flush is to not generate update-related | |
| 4035 ** error codes from here. | |
| 4036 */ | |
| 4037 | |
| 4038 /* Flush any buffered updates before executing the query. */ | |
| 4039 rc = flushPendingTerms(v); | |
| 4040 if( rc!=SQLITE_OK ){ | |
| 4041 return rc; | |
| 4042 } | |
| 4043 | |
| 4044 /* Parse the query passed to the MATCH operator. */ | |
| 4045 rc = sqlite3Fts3ExprParse(v->pTokenizer, | |
| 4046 v->azColumn, v->nColumn, iColumn, zInput, nInput, ppExpr | |
| 4047 ); | |
| 4048 if( rc!=SQLITE_OK ){ | |
| 4049 assert( 0==(*ppExpr) ); | |
| 4050 return rc; | |
| 4051 } | |
| 4052 | |
| 4053 return evalFts3Expr(v, *ppExpr, pResult); | |
| 4054 } | |
| 4055 | |
| 4056 /* | |
| 4057 ** This is the xFilter interface for the virtual table. See | |
| 4058 ** the virtual table xFilter method documentation for additional | |
| 4059 ** information. | |
| 4060 ** | |
| 4061 ** If idxNum==QUERY_GENERIC then do a full table scan against | |
| 4062 ** the %_content table. | |
| 4063 ** | |
| 4064 ** If idxNum==QUERY_DOCID then do a docid lookup for a single entry | |
| 4065 ** in the %_content table. | |
| 4066 ** | |
| 4067 ** If idxNum>=QUERY_FULLTEXT then use the full text index. The | |
| 4068 ** column on the left-hand side of the MATCH operator is column | |
| 4069 ** number idxNum-QUERY_FULLTEXT, 0 indexed. argv[0] is the right-hand | |
| 4070 ** side of the MATCH operator. | |
| 4071 */ | |
| 4072 /* TODO(shess) Upgrade the cursor initialization and destruction to | |
| 4073 ** account for fulltextFilter() being called multiple times on the | |
| 4074 ** same cursor. The current solution is very fragile. Apply fix to | |
| 4075 ** fts3 as appropriate. | |
| 4076 */ | |
| 4077 static int fulltextFilter( | |
| 4078 sqlite3_vtab_cursor *pCursor, /* The cursor used for this query */ | |
| 4079 int idxNum, const char *idxStr, /* Which indexing scheme to use */ | |
| 4080 int argc, sqlite3_value **argv /* Arguments for the indexing scheme */ | |
| 4081 ){ | |
| 4082 fulltext_cursor *c = (fulltext_cursor *) pCursor; | |
| 4083 fulltext_vtab *v = cursor_vtab(c); | |
| 4084 int rc; | |
| 4085 | |
| 4086 FTSTRACE(("FTS3 Filter %p\n",pCursor)); | |
| 4087 | |
| 4088 /* If the cursor has a statement that was not prepared according to | |
| 4089 ** idxNum, clear it. I believe all calls to fulltextFilter with a | |
| 4090 ** given cursor will have the same idxNum , but in this case it's | |
| 4091 ** easy to be safe. | |
| 4092 */ | |
| 4093 if( c->pStmt && c->iCursorType!=idxNum ){ | |
| 4094 sqlite3_finalize(c->pStmt); | |
| 4095 c->pStmt = NULL; | |
| 4096 } | |
| 4097 | |
| 4098 /* Get a fresh statement appropriate to idxNum. */ | |
| 4099 /* TODO(shess): Add a prepared-statement cache in the vt structure. | |
| 4100 ** The cache must handle multiple open cursors. Easier to cache the | |
| 4101 ** statement variants at the vt to reduce malloc/realloc/free here. | |
| 4102 ** Or we could have a StringBuffer variant which allowed stack | |
| 4103 ** construction for small values. | |
| 4104 */ | |
| 4105 if( !c->pStmt ){ | |
| 4106 StringBuffer sb; | |
| 4107 initStringBuffer(&sb); | |
| 4108 append(&sb, "SELECT docid, "); | |
| 4109 appendList(&sb, v->nColumn, v->azContentColumn); | |
| 4110 append(&sb, " FROM %_content"); | |
| 4111 if( idxNum!=QUERY_GENERIC ) append(&sb, " WHERE docid = ?"); | |
| 4112 rc = sql_prepare(v->db, v->zDb, v->zName, &c->pStmt, | |
| 4113 stringBufferData(&sb)); | |
| 4114 stringBufferDestroy(&sb); | |
| 4115 if( rc!=SQLITE_OK ) return rc; | |
| 4116 c->iCursorType = idxNum; | |
| 4117 }else{ | |
| 4118 sqlite3_reset(c->pStmt); | |
| 4119 assert( c->iCursorType==idxNum ); | |
| 4120 } | |
| 4121 | |
| 4122 switch( idxNum ){ | |
| 4123 case QUERY_GENERIC: | |
| 4124 break; | |
| 4125 | |
| 4126 case QUERY_DOCID: | |
| 4127 rc = sqlite3_bind_int64(c->pStmt, 1, sqlite3_value_int64(argv[0])); | |
| 4128 if( rc!=SQLITE_OK ) return rc; | |
| 4129 break; | |
| 4130 | |
| 4131 default: /* full-text search */ | |
| 4132 { | |
| 4133 int iCol = idxNum-QUERY_FULLTEXT; | |
| 4134 const char *zQuery = (const char *)sqlite3_value_text(argv[0]); | |
| 4135 assert( idxNum<=QUERY_FULLTEXT+v->nColumn); | |
| 4136 assert( argc==1 ); | |
| 4137 if( c->result.nData!=0 ){ | |
| 4138 /* This case happens if the same cursor is used repeatedly. */ | |
| 4139 dlrDestroy(&c->reader); | |
| 4140 dataBufferReset(&c->result); | |
| 4141 }else{ | |
| 4142 dataBufferInit(&c->result, 0); | |
| 4143 } | |
| 4144 rc = fulltextQuery(v, iCol, zQuery, -1, &c->result, &c->pExpr); | |
| 4145 if( rc!=SQLITE_OK ) return rc; | |
| 4146 if( c->result.nData!=0 ){ | |
| 4147 dlrInit(&c->reader, DL_DOCIDS, c->result.pData, c->result.nData); | |
| 4148 } | |
| 4149 break; | |
| 4150 } | |
| 4151 } | |
| 4152 | |
| 4153 return fulltextNext(pCursor); | |
| 4154 } | |
| 4155 | |
| 4156 /* This is the xEof method of the virtual table. The SQLite core | |
| 4157 ** calls this routine to find out if it has reached the end of | |
| 4158 ** a query's results set. | |
| 4159 */ | |
| 4160 static int fulltextEof(sqlite3_vtab_cursor *pCursor){ | |
| 4161 fulltext_cursor *c = (fulltext_cursor *) pCursor; | |
| 4162 return c->eof; | |
| 4163 } | |
| 4164 | |
| 4165 /* This is the xColumn method of the virtual table. The SQLite | |
| 4166 ** core calls this method during a query when it needs the value | |
| 4167 ** of a column from the virtual table. This method needs to use | |
| 4168 ** one of the sqlite3_result_*() routines to store the requested | |
| 4169 ** value back in the pContext. | |
| 4170 */ | |
| 4171 static int fulltextColumn(sqlite3_vtab_cursor *pCursor, | |
| 4172 sqlite3_context *pContext, int idxCol){ | |
| 4173 fulltext_cursor *c = (fulltext_cursor *) pCursor; | |
| 4174 fulltext_vtab *v = cursor_vtab(c); | |
| 4175 | |
| 4176 if( idxCol<v->nColumn ){ | |
| 4177 sqlite3_value *pVal = sqlite3_column_value(c->pStmt, idxCol+1); | |
| 4178 sqlite3_result_value(pContext, pVal); | |
| 4179 }else if( idxCol==v->nColumn ){ | |
| 4180 /* The extra column whose name is the same as the table. | |
| 4181 ** Return a blob which is a pointer to the cursor | |
| 4182 */ | |
| 4183 sqlite3_result_blob(pContext, &c, sizeof(c), SQLITE_TRANSIENT); | |
| 4184 }else if( idxCol==v->nColumn+1 ){ | |
| 4185 /* The docid column, which is an alias for rowid. */ | |
| 4186 sqlite3_value *pVal = sqlite3_column_value(c->pStmt, 0); | |
| 4187 sqlite3_result_value(pContext, pVal); | |
| 4188 } | |
| 4189 return SQLITE_OK; | |
| 4190 } | |
| 4191 | |
| 4192 /* This is the xRowid method. The SQLite core calls this routine to | |
| 4193 ** retrieve the rowid for the current row of the result set. fts3 | |
| 4194 ** exposes %_content.docid as the rowid for the virtual table. The | |
| 4195 ** rowid should be written to *pRowid. | |
| 4196 */ | |
| 4197 static int fulltextRowid(sqlite3_vtab_cursor *pCursor, sqlite_int64 *pRowid){ | |
| 4198 fulltext_cursor *c = (fulltext_cursor *) pCursor; | |
| 4199 | |
| 4200 *pRowid = sqlite3_column_int64(c->pStmt, 0); | |
| 4201 return SQLITE_OK; | |
| 4202 } | |
| 4203 | |
| 4204 /* Add all terms in [zText] to pendingTerms table. If [iColumn] > 0, | |
| 4205 ** we also store positions and offsets in the hash table using that | |
| 4206 ** column number. | |
| 4207 */ | |
| 4208 static int buildTerms(fulltext_vtab *v, sqlite_int64 iDocid, | |
| 4209 const char *zText, int iColumn){ | |
| 4210 sqlite3_tokenizer *pTokenizer = v->pTokenizer; | |
| 4211 sqlite3_tokenizer_cursor *pCursor; | |
| 4212 const char *pToken; | |
| 4213 int nTokenBytes; | |
| 4214 int iStartOffset, iEndOffset, iPosition; | |
| 4215 int rc; | |
| 4216 | |
| 4217 rc = pTokenizer->pModule->xOpen(pTokenizer, zText, -1, &pCursor); | |
| 4218 if( rc!=SQLITE_OK ) return rc; | |
| 4219 | |
| 4220 pCursor->pTokenizer = pTokenizer; | |
| 4221 while( SQLITE_OK==(rc=pTokenizer->pModule->xNext(pCursor, | |
| 4222 &pToken, &nTokenBytes, | |
| 4223 &iStartOffset, &iEndOffset, | |
| 4224 &iPosition)) ){ | |
| 4225 DLCollector *p; | |
| 4226 int nData; /* Size of doclist before our update. */ | |
| 4227 | |
| 4228 /* Positions can't be negative; we use -1 as a terminator | |
| 4229 * internally. Token can't be NULL or empty. */ | |
| 4230 if( iPosition<0 || pToken == NULL || nTokenBytes == 0 ){ | |
| 4231 rc = SQLITE_ERROR; | |
| 4232 break; | |
| 4233 } | |
| 4234 | |
| 4235 p = fts3HashFind(&v->pendingTerms, pToken, nTokenBytes); | |
| 4236 if( p==NULL ){ | |
| 4237 nData = 0; | |
| 4238 p = dlcNew(iDocid, DL_DEFAULT); | |
| 4239 fts3HashInsert(&v->pendingTerms, pToken, nTokenBytes, p); | |
| 4240 | |
| 4241 /* Overhead for our hash table entry, the key, and the value. */ | |
| 4242 v->nPendingData += sizeof(struct fts3HashElem)+sizeof(*p)+nTokenBytes; | |
| 4243 }else{ | |
| 4244 nData = p->b.nData; | |
| 4245 if( p->dlw.iPrevDocid!=iDocid ) dlcNext(p, iDocid); | |
| 4246 } | |
| 4247 if( iColumn>=0 ){ | |
| 4248 dlcAddPos(p, iColumn, iPosition, iStartOffset, iEndOffset); | |
| 4249 } | |
| 4250 | |
| 4251 /* Accumulate data added by dlcNew or dlcNext, and dlcAddPos. */ | |
| 4252 v->nPendingData += p->b.nData-nData; | |
| 4253 } | |
| 4254 | |
| 4255 /* TODO(shess) Check return? Should this be able to cause errors at | |
| 4256 ** this point? Actually, same question about sqlite3_finalize(), | |
| 4257 ** though one could argue that failure there means that the data is | |
| 4258 ** not durable. *ponder* | |
| 4259 */ | |
| 4260 pTokenizer->pModule->xClose(pCursor); | |
| 4261 if( SQLITE_DONE == rc ) return SQLITE_OK; | |
| 4262 return rc; | |
| 4263 } | |
| 4264 | |
| 4265 /* Add doclists for all terms in [pValues] to pendingTerms table. */ | |
| 4266 static int insertTerms(fulltext_vtab *v, sqlite_int64 iDocid, | |
| 4267 sqlite3_value **pValues){ | |
| 4268 int i; | |
| 4269 for(i = 0; i < v->nColumn ; ++i){ | |
| 4270 char *zText = (char*)sqlite3_value_text(pValues[i]); | |
| 4271 int rc = buildTerms(v, iDocid, zText, i); | |
| 4272 if( rc!=SQLITE_OK ) return rc; | |
| 4273 } | |
| 4274 return SQLITE_OK; | |
| 4275 } | |
| 4276 | |
| 4277 /* Add empty doclists for all terms in the given row's content to | |
| 4278 ** pendingTerms. | |
| 4279 */ | |
| 4280 static int deleteTerms(fulltext_vtab *v, sqlite_int64 iDocid){ | |
| 4281 const char **pValues; | |
| 4282 int i, rc; | |
| 4283 | |
| 4284 /* TODO(shess) Should we allow such tables at all? */ | |
| 4285 if( DL_DEFAULT==DL_DOCIDS ) return SQLITE_ERROR; | |
| 4286 | |
| 4287 rc = content_select(v, iDocid, &pValues); | |
| 4288 if( rc!=SQLITE_OK ) return rc; | |
| 4289 | |
| 4290 for(i = 0 ; i < v->nColumn; ++i) { | |
| 4291 rc = buildTerms(v, iDocid, pValues[i], -1); | |
| 4292 if( rc!=SQLITE_OK ) break; | |
| 4293 } | |
| 4294 | |
| 4295 freeStringArray(v->nColumn, pValues); | |
| 4296 return SQLITE_OK; | |
| 4297 } | |
| 4298 | |
| 4299 /* TODO(shess) Refactor the code to remove this forward decl. */ | |
| 4300 static int initPendingTerms(fulltext_vtab *v, sqlite_int64 iDocid); | |
| 4301 | |
| 4302 /* Insert a row into the %_content table; set *piDocid to be the ID of the | |
| 4303 ** new row. Add doclists for terms to pendingTerms. | |
| 4304 */ | |
| 4305 static int index_insert(fulltext_vtab *v, sqlite3_value *pRequestDocid, | |
| 4306 sqlite3_value **pValues, sqlite_int64 *piDocid){ | |
| 4307 int rc; | |
| 4308 | |
| 4309 rc = content_insert(v, pRequestDocid, pValues); /* execute an SQL INSERT */ | |
| 4310 if( rc!=SQLITE_OK ) return rc; | |
| 4311 | |
| 4312 /* docid column is an alias for rowid. */ | |
| 4313 *piDocid = sqlite3_last_insert_rowid(v->db); | |
| 4314 rc = initPendingTerms(v, *piDocid); | |
| 4315 if( rc!=SQLITE_OK ) return rc; | |
| 4316 | |
| 4317 return insertTerms(v, *piDocid, pValues); | |
| 4318 } | |
| 4319 | |
| 4320 /* Delete a row from the %_content table; add empty doclists for terms | |
| 4321 ** to pendingTerms. | |
| 4322 */ | |
| 4323 static int index_delete(fulltext_vtab *v, sqlite_int64 iRow){ | |
| 4324 int rc = initPendingTerms(v, iRow); | |
| 4325 if( rc!=SQLITE_OK ) return rc; | |
| 4326 | |
| 4327 rc = deleteTerms(v, iRow); | |
| 4328 if( rc!=SQLITE_OK ) return rc; | |
| 4329 | |
| 4330 return content_delete(v, iRow); /* execute an SQL DELETE */ | |
| 4331 } | |
| 4332 | |
| 4333 /* Update a row in the %_content table; add delete doclists to | |
| 4334 ** pendingTerms for old terms not in the new data, add insert doclists | |
| 4335 ** to pendingTerms for terms in the new data. | |
| 4336 */ | |
| 4337 static int index_update(fulltext_vtab *v, sqlite_int64 iRow, | |
| 4338 sqlite3_value **pValues){ | |
| 4339 int rc = initPendingTerms(v, iRow); | |
| 4340 if( rc!=SQLITE_OK ) return rc; | |
| 4341 | |
| 4342 /* Generate an empty doclist for each term that previously appeared in this | |
| 4343 * row. */ | |
| 4344 rc = deleteTerms(v, iRow); | |
| 4345 if( rc!=SQLITE_OK ) return rc; | |
| 4346 | |
| 4347 rc = content_update(v, pValues, iRow); /* execute an SQL UPDATE */ | |
| 4348 if( rc!=SQLITE_OK ) return rc; | |
| 4349 | |
| 4350 /* Now add positions for terms which appear in the updated row. */ | |
| 4351 return insertTerms(v, iRow, pValues); | |
| 4352 } | |
| 4353 | |
| 4354 /*******************************************************************/ | |
| 4355 /* InteriorWriter is used to collect terms and block references into | |
| 4356 ** interior nodes in %_segments. See commentary at top of file for | |
| 4357 ** format. | |
| 4358 */ | |
| 4359 | |
| 4360 /* How large interior nodes can grow. */ | |
| 4361 #define INTERIOR_MAX 2048 | |
| 4362 | |
| 4363 /* Minimum number of terms per interior node (except the root). This | |
| 4364 ** prevents large terms from making the tree too skinny - must be >0 | |
| 4365 ** so that the tree always makes progress. Note that the min tree | |
| 4366 ** fanout will be INTERIOR_MIN_TERMS+1. | |
| 4367 */ | |
| 4368 #define INTERIOR_MIN_TERMS 7 | |
| 4369 #if INTERIOR_MIN_TERMS<1 | |
| 4370 # error INTERIOR_MIN_TERMS must be greater than 0. | |
| 4371 #endif | |
| 4372 | |
| 4373 /* ROOT_MAX controls how much data is stored inline in the segment | |
| 4374 ** directory. | |
| 4375 */ | |
| 4376 /* TODO(shess) Push ROOT_MAX down to whoever is writing things. It's | |
| 4377 ** only here so that interiorWriterRootInfo() and leafWriterRootInfo() | |
| 4378 ** can both see it, but if the caller passed it in, we wouldn't even | |
| 4379 ** need a define. | |
| 4380 */ | |
| 4381 #define ROOT_MAX 1024 | |
| 4382 #if ROOT_MAX<VARINT_MAX*2 | |
| 4383 # error ROOT_MAX must have enough space for a header. | |
| 4384 #endif | |
| 4385 | |
| 4386 /* InteriorBlock stores a linked-list of interior blocks while a lower | |
| 4387 ** layer is being constructed. | |
| 4388 */ | |
| 4389 typedef struct InteriorBlock { | |
| 4390 DataBuffer term; /* Leftmost term in block's subtree. */ | |
| 4391 DataBuffer data; /* Accumulated data for the block. */ | |
| 4392 struct InteriorBlock *next; | |
| 4393 } InteriorBlock; | |
| 4394 | |
| 4395 static InteriorBlock *interiorBlockNew(int iHeight, sqlite_int64 iChildBlock, | |
| 4396 const char *pTerm, int nTerm){ | |
| 4397 InteriorBlock *block = sqlite3_malloc(sizeof(InteriorBlock)); | |
| 4398 char c[VARINT_MAX+VARINT_MAX]; | |
| 4399 int n; | |
| 4400 | |
| 4401 if( block ){ | |
| 4402 memset(block, 0, sizeof(*block)); | |
| 4403 dataBufferInit(&block->term, 0); | |
| 4404 dataBufferReplace(&block->term, pTerm, nTerm); | |
| 4405 | |
| 4406 n = fts3PutVarint(c, iHeight); | |
| 4407 n += fts3PutVarint(c+n, iChildBlock); | |
| 4408 dataBufferInit(&block->data, INTERIOR_MAX); | |
| 4409 dataBufferReplace(&block->data, c, n); | |
| 4410 } | |
| 4411 return block; | |
| 4412 } | |
| 4413 | |
| 4414 #ifndef NDEBUG | |
| 4415 /* Verify that the data is readable as an interior node. */ | |
| 4416 static void interiorBlockValidate(InteriorBlock *pBlock){ | |
| 4417 const char *pData = pBlock->data.pData; | |
| 4418 int nData = pBlock->data.nData; | |
| 4419 int n, iDummy; | |
| 4420 sqlite_int64 iBlockid; | |
| 4421 | |
| 4422 assert( nData>0 ); | |
| 4423 assert( pData!=0 ); | |
| 4424 assert( pData+nData>pData ); | |
| 4425 | |
| 4426 /* Must lead with height of node as a varint(n), n>0 */ | |
| 4427 n = fts3GetVarint32(pData, &iDummy); | |
| 4428 assert( n>0 ); | |
| 4429 assert( iDummy>0 ); | |
| 4430 assert( n<nData ); | |
| 4431 pData += n; | |
| 4432 nData -= n; | |
| 4433 | |
| 4434 /* Must contain iBlockid. */ | |
| 4435 n = fts3GetVarint(pData, &iBlockid); | |
| 4436 assert( n>0 ); | |
| 4437 assert( n<=nData ); | |
| 4438 pData += n; | |
| 4439 nData -= n; | |
| 4440 | |
| 4441 /* Zero or more terms of positive length */ | |
| 4442 if( nData!=0 ){ | |
| 4443 /* First term is not delta-encoded. */ | |
| 4444 n = fts3GetVarint32(pData, &iDummy); | |
| 4445 assert( n>0 ); | |
| 4446 assert( iDummy>0 ); | |
| 4447 assert( n+iDummy>0); | |
| 4448 assert( n+iDummy<=nData ); | |
| 4449 pData += n+iDummy; | |
| 4450 nData -= n+iDummy; | |
| 4451 | |
| 4452 /* Following terms delta-encoded. */ | |
| 4453 while( nData!=0 ){ | |
| 4454 /* Length of shared prefix. */ | |
| 4455 n = fts3GetVarint32(pData, &iDummy); | |
| 4456 assert( n>0 ); | |
| 4457 assert( iDummy>=0 ); | |
| 4458 assert( n<nData ); | |
| 4459 pData += n; | |
| 4460 nData -= n; | |
| 4461 | |
| 4462 /* Length and data of distinct suffix. */ | |
| 4463 n = fts3GetVarint32(pData, &iDummy); | |
| 4464 assert( n>0 ); | |
| 4465 assert( iDummy>0 ); | |
| 4466 assert( n+iDummy>0); | |
| 4467 assert( n+iDummy<=nData ); | |
| 4468 pData += n+iDummy; | |
| 4469 nData -= n+iDummy; | |
| 4470 } | |
| 4471 } | |
| 4472 } | |
| 4473 #define ASSERT_VALID_INTERIOR_BLOCK(x) interiorBlockValidate(x) | |
| 4474 #else | |
| 4475 #define ASSERT_VALID_INTERIOR_BLOCK(x) assert( 1 ) | |
| 4476 #endif | |
| 4477 | |
| 4478 typedef struct InteriorWriter { | |
| 4479 int iHeight; /* from 0 at leaves. */ | |
| 4480 InteriorBlock *first, *last; | |
| 4481 struct InteriorWriter *parentWriter; | |
| 4482 | |
| 4483 DataBuffer term; /* Last term written to block "last". */ | |
| 4484 sqlite_int64 iOpeningChildBlock; /* First child block in block "last". */ | |
| 4485 #ifndef NDEBUG | |
| 4486 sqlite_int64 iLastChildBlock; /* for consistency checks. */ | |
| 4487 #endif | |
| 4488 } InteriorWriter; | |
| 4489 | |
| 4490 /* Initialize an interior node where pTerm[nTerm] marks the leftmost | |
| 4491 ** term in the tree. iChildBlock is the leftmost child block at the | |
| 4492 ** next level down the tree. | |
| 4493 */ | |
| 4494 static void interiorWriterInit(int iHeight, const char *pTerm, int nTerm, | |
| 4495 sqlite_int64 iChildBlock, | |
| 4496 InteriorWriter *pWriter){ | |
| 4497 InteriorBlock *block; | |
| 4498 assert( iHeight>0 ); | |
| 4499 CLEAR(pWriter); | |
| 4500 | |
| 4501 pWriter->iHeight = iHeight; | |
| 4502 pWriter->iOpeningChildBlock = iChildBlock; | |
| 4503 #ifndef NDEBUG | |
| 4504 pWriter->iLastChildBlock = iChildBlock; | |
| 4505 #endif | |
| 4506 block = interiorBlockNew(iHeight, iChildBlock, pTerm, nTerm); | |
| 4507 pWriter->last = pWriter->first = block; | |
| 4508 ASSERT_VALID_INTERIOR_BLOCK(pWriter->last); | |
| 4509 dataBufferInit(&pWriter->term, 0); | |
| 4510 } | |
| 4511 | |
| 4512 /* Append the child node rooted at iChildBlock to the interior node, | |
| 4513 ** with pTerm[nTerm] as the leftmost term in iChildBlock's subtree. | |
| 4514 */ | |
| 4515 static void interiorWriterAppend(InteriorWriter *pWriter, | |
| 4516 const char *pTerm, int nTerm, | |
| 4517 sqlite_int64 iChildBlock){ | |
| 4518 char c[VARINT_MAX+VARINT_MAX]; | |
| 4519 int n, nPrefix = 0; | |
| 4520 | |
| 4521 ASSERT_VALID_INTERIOR_BLOCK(pWriter->last); | |
| 4522 | |
| 4523 /* The first term written into an interior node is actually | |
| 4524 ** associated with the second child added (the first child was added | |
| 4525 ** in interiorWriterInit, or in the if clause at the bottom of this | |
| 4526 ** function). That term gets encoded straight up, with nPrefix left | |
| 4527 ** at 0. | |
| 4528 */ | |
| 4529 if( pWriter->term.nData==0 ){ | |
| 4530 n = fts3PutVarint(c, nTerm); | |
| 4531 }else{ | |
| 4532 while( nPrefix<pWriter->term.nData && | |
| 4533 pTerm[nPrefix]==pWriter->term.pData[nPrefix] ){ | |
| 4534 nPrefix++; | |
| 4535 } | |
| 4536 | |
| 4537 n = fts3PutVarint(c, nPrefix); | |
| 4538 n += fts3PutVarint(c+n, nTerm-nPrefix); | |
| 4539 } | |
| 4540 | |
| 4541 #ifndef NDEBUG | |
| 4542 pWriter->iLastChildBlock++; | |
| 4543 #endif | |
| 4544 assert( pWriter->iLastChildBlock==iChildBlock ); | |
| 4545 | |
| 4546 /* Overflow to a new block if the new term makes the current block | |
| 4547 ** too big, and the current block already has enough terms. | |
| 4548 */ | |
| 4549 if( pWriter->last->data.nData+n+nTerm-nPrefix>INTERIOR_MAX && | |
| 4550 iChildBlock-pWriter->iOpeningChildBlock>INTERIOR_MIN_TERMS ){ | |
| 4551 pWriter->last->next = interiorBlockNew(pWriter->iHeight, iChildBlock, | |
| 4552 pTerm, nTerm); | |
| 4553 pWriter->last = pWriter->last->next; | |
| 4554 pWriter->iOpeningChildBlock = iChildBlock; | |
| 4555 dataBufferReset(&pWriter->term); | |
| 4556 }else{ | |
| 4557 dataBufferAppend2(&pWriter->last->data, c, n, | |
| 4558 pTerm+nPrefix, nTerm-nPrefix); | |
| 4559 dataBufferReplace(&pWriter->term, pTerm, nTerm); | |
| 4560 } | |
| 4561 ASSERT_VALID_INTERIOR_BLOCK(pWriter->last); | |
| 4562 } | |
| 4563 | |
| 4564 /* Free the space used by pWriter, including the linked-list of | |
| 4565 ** InteriorBlocks, and parentWriter, if present. | |
| 4566 */ | |
| 4567 static int interiorWriterDestroy(InteriorWriter *pWriter){ | |
| 4568 InteriorBlock *block = pWriter->first; | |
| 4569 | |
| 4570 while( block!=NULL ){ | |
| 4571 InteriorBlock *b = block; | |
| 4572 block = block->next; | |
| 4573 dataBufferDestroy(&b->term); | |
| 4574 dataBufferDestroy(&b->data); | |
| 4575 sqlite3_free(b); | |
| 4576 } | |
| 4577 if( pWriter->parentWriter!=NULL ){ | |
| 4578 interiorWriterDestroy(pWriter->parentWriter); | |
| 4579 sqlite3_free(pWriter->parentWriter); | |
| 4580 } | |
| 4581 dataBufferDestroy(&pWriter->term); | |
| 4582 SCRAMBLE(pWriter); | |
| 4583 return SQLITE_OK; | |
| 4584 } | |
| 4585 | |
| 4586 /* If pWriter can fit entirely in ROOT_MAX, return it as the root info | |
| 4587 ** directly, leaving *piEndBlockid unchanged. Otherwise, flush | |
| 4588 ** pWriter to %_segments, building a new layer of interior nodes, and | |
| 4589 ** recursively ask for their root into. | |
| 4590 */ | |
| 4591 static int interiorWriterRootInfo(fulltext_vtab *v, InteriorWriter *pWriter, | |
| 4592 char **ppRootInfo, int *pnRootInfo, | |
| 4593 sqlite_int64 *piEndBlockid){ | |
| 4594 InteriorBlock *block = pWriter->first; | |
| 4595 sqlite_int64 iBlockid = 0; | |
| 4596 int rc; | |
| 4597 | |
| 4598 /* If we can fit the segment inline */ | |
| 4599 if( block==pWriter->last && block->data.nData<ROOT_MAX ){ | |
| 4600 *ppRootInfo = block->data.pData; | |
| 4601 *pnRootInfo = block->data.nData; | |
| 4602 return SQLITE_OK; | |
| 4603 } | |
| 4604 | |
| 4605 /* Flush the first block to %_segments, and create a new level of | |
| 4606 ** interior node. | |
| 4607 */ | |
| 4608 ASSERT_VALID_INTERIOR_BLOCK(block); | |
| 4609 rc = block_insert(v, block->data.pData, block->data.nData, &iBlockid); | |
| 4610 if( rc!=SQLITE_OK ) return rc; | |
| 4611 *piEndBlockid = iBlockid; | |
| 4612 | |
| 4613 pWriter->parentWriter = sqlite3_malloc(sizeof(*pWriter->parentWriter)); | |
| 4614 interiorWriterInit(pWriter->iHeight+1, | |
| 4615 block->term.pData, block->term.nData, | |
| 4616 iBlockid, pWriter->parentWriter); | |
| 4617 | |
| 4618 /* Flush additional blocks and append to the higher interior | |
| 4619 ** node. | |
| 4620 */ | |
| 4621 for(block=block->next; block!=NULL; block=block->next){ | |
| 4622 ASSERT_VALID_INTERIOR_BLOCK(block); | |
| 4623 rc = block_insert(v, block->data.pData, block->data.nData, &iBlockid); | |
| 4624 if( rc!=SQLITE_OK ) return rc; | |
| 4625 *piEndBlockid = iBlockid; | |
| 4626 | |
| 4627 interiorWriterAppend(pWriter->parentWriter, | |
| 4628 block->term.pData, block->term.nData, iBlockid); | |
| 4629 } | |
| 4630 | |
| 4631 /* Parent node gets the chance to be the root. */ | |
| 4632 return interiorWriterRootInfo(v, pWriter->parentWriter, | |
| 4633 ppRootInfo, pnRootInfo, piEndBlockid); | |
| 4634 } | |
| 4635 | |
| 4636 /****************************************************************/ | |
| 4637 /* InteriorReader is used to read off the data from an interior node | |
| 4638 ** (see comment at top of file for the format). | |
| 4639 */ | |
| 4640 typedef struct InteriorReader { | |
| 4641 const char *pData; | |
| 4642 int nData; | |
| 4643 | |
| 4644 DataBuffer term; /* previous term, for decoding term delta. */ | |
| 4645 | |
| 4646 sqlite_int64 iBlockid; | |
| 4647 } InteriorReader; | |
| 4648 | |
| 4649 static void interiorReaderDestroy(InteriorReader *pReader){ | |
| 4650 dataBufferDestroy(&pReader->term); | |
| 4651 SCRAMBLE(pReader); | |
| 4652 } | |
| 4653 | |
| 4654 static int interiorReaderInit(const char *pData, int nData, | |
| 4655 InteriorReader *pReader){ | |
| 4656 int n, nTerm; | |
| 4657 | |
| 4658 /* These conditions are checked and met by the callers. */ | |
| 4659 assert( nData>0 ); | |
| 4660 assert( pData[0]!='\0' ); | |
| 4661 | |
| 4662 CLEAR(pReader); | |
| 4663 | |
| 4664 /* Decode the base blockid, and set the cursor to the first term. */ | |
| 4665 n = fts3GetVarintSafe(pData+1, &pReader->iBlockid, nData-1); | |
| 4666 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 4667 pReader->pData = pData+1+n; | |
| 4668 pReader->nData = nData-(1+n); | |
| 4669 | |
| 4670 /* A single-child interior node (such as when a leaf node was too | |
| 4671 ** large for the segment directory) won't have any terms. | |
| 4672 ** Otherwise, decode the first term. | |
| 4673 */ | |
| 4674 if( pReader->nData==0 ){ | |
| 4675 dataBufferInit(&pReader->term, 0); | |
| 4676 }else{ | |
| 4677 n = fts3GetVarint32Safe(pReader->pData, &nTerm, pReader->nData); | |
| 4678 if( !n || nTerm<0 || nTerm>pReader->nData-n) return SQLITE_CORRUPT_BKPT; | |
| 4679 dataBufferInit(&pReader->term, nTerm); | |
| 4680 dataBufferReplace(&pReader->term, pReader->pData+n, nTerm); | |
| 4681 pReader->pData += n+nTerm; | |
| 4682 pReader->nData -= n+nTerm; | |
| 4683 } | |
| 4684 return SQLITE_OK; | |
| 4685 } | |
| 4686 | |
| 4687 static int interiorReaderAtEnd(InteriorReader *pReader){ | |
| 4688 return pReader->term.nData<=0; | |
| 4689 } | |
| 4690 | |
| 4691 static sqlite_int64 interiorReaderCurrentBlockid(InteriorReader *pReader){ | |
| 4692 return pReader->iBlockid; | |
| 4693 } | |
| 4694 | |
| 4695 static int interiorReaderTermBytes(InteriorReader *pReader){ | |
| 4696 assert( !interiorReaderAtEnd(pReader) ); | |
| 4697 return pReader->term.nData; | |
| 4698 } | |
| 4699 static const char *interiorReaderTerm(InteriorReader *pReader){ | |
| 4700 assert( !interiorReaderAtEnd(pReader) ); | |
| 4701 return pReader->term.pData; | |
| 4702 } | |
| 4703 | |
| 4704 /* Step forward to the next term in the node. */ | |
| 4705 static int interiorReaderStep(InteriorReader *pReader){ | |
| 4706 assert( !interiorReaderAtEnd(pReader) ); | |
| 4707 | |
| 4708 /* If the last term has been read, signal eof, else construct the | |
| 4709 ** next term. | |
| 4710 */ | |
| 4711 if( pReader->nData==0 ){ | |
| 4712 dataBufferReset(&pReader->term); | |
| 4713 }else{ | |
| 4714 int n, nPrefix, nSuffix; | |
| 4715 | |
| 4716 n = fts3GetVarint32Safe(pReader->pData, &nPrefix, pReader->nData); | |
| 4717 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 4718 pReader->nData -= n; | |
| 4719 pReader->pData += n; | |
| 4720 n = fts3GetVarint32Safe(pReader->pData, &nSuffix, pReader->nData); | |
| 4721 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 4722 pReader->nData -= n; | |
| 4723 pReader->pData += n; | |
| 4724 if( nSuffix<0 || nSuffix>pReader->nData ) return SQLITE_CORRUPT_BKPT; | |
| 4725 if( nPrefix<0 || nPrefix>pReader->term.nData ) return SQLITE_CORRUPT_BKPT; | |
| 4726 | |
| 4727 /* Truncate the current term and append suffix data. */ | |
| 4728 pReader->term.nData = nPrefix; | |
| 4729 dataBufferAppend(&pReader->term, pReader->pData, nSuffix); | |
| 4730 | |
| 4731 pReader->pData += nSuffix; | |
| 4732 pReader->nData -= nSuffix; | |
| 4733 } | |
| 4734 pReader->iBlockid++; | |
| 4735 return SQLITE_OK; | |
| 4736 } | |
| 4737 | |
| 4738 /* Compare the current term to pTerm[nTerm], returning strcmp-style | |
| 4739 ** results. If isPrefix, equality means equal through nTerm bytes. | |
| 4740 */ | |
| 4741 static int interiorReaderTermCmp(InteriorReader *pReader, | |
| 4742 const char *pTerm, int nTerm, int isPrefix){ | |
| 4743 const char *pReaderTerm = interiorReaderTerm(pReader); | |
| 4744 int nReaderTerm = interiorReaderTermBytes(pReader); | |
| 4745 int c, n = nReaderTerm<nTerm ? nReaderTerm : nTerm; | |
| 4746 | |
| 4747 if( n==0 ){ | |
| 4748 if( nReaderTerm>0 ) return -1; | |
| 4749 if( nTerm>0 ) return 1; | |
| 4750 return 0; | |
| 4751 } | |
| 4752 | |
| 4753 c = memcmp(pReaderTerm, pTerm, n); | |
| 4754 if( c!=0 ) return c; | |
| 4755 if( isPrefix && n==nTerm ) return 0; | |
| 4756 return nReaderTerm - nTerm; | |
| 4757 } | |
| 4758 | |
| 4759 /****************************************************************/ | |
| 4760 /* LeafWriter is used to collect terms and associated doclist data | |
| 4761 ** into leaf blocks in %_segments (see top of file for format info). | |
| 4762 ** Expected usage is: | |
| 4763 ** | |
| 4764 ** LeafWriter writer; | |
| 4765 ** leafWriterInit(0, 0, &writer); | |
| 4766 ** while( sorted_terms_left_to_process ){ | |
| 4767 ** // data is doclist data for that term. | |
| 4768 ** rc = leafWriterStep(v, &writer, pTerm, nTerm, pData, nData); | |
| 4769 ** if( rc!=SQLITE_OK ) goto err; | |
| 4770 ** } | |
| 4771 ** rc = leafWriterFinalize(v, &writer); | |
| 4772 **err: | |
| 4773 ** leafWriterDestroy(&writer); | |
| 4774 ** return rc; | |
| 4775 ** | |
| 4776 ** leafWriterStep() may write a collected leaf out to %_segments. | |
| 4777 ** leafWriterFinalize() finishes writing any buffered data and stores | |
| 4778 ** a root node in %_segdir. leafWriterDestroy() frees all buffers and | |
| 4779 ** InteriorWriters allocated as part of writing this segment. | |
| 4780 ** | |
| 4781 ** TODO(shess) Document leafWriterStepMerge(). | |
| 4782 */ | |
| 4783 | |
| 4784 /* Put terms with data this big in their own block. */ | |
| 4785 #define STANDALONE_MIN 1024 | |
| 4786 | |
| 4787 /* Keep leaf blocks below this size. */ | |
| 4788 #define LEAF_MAX 2048 | |
| 4789 | |
| 4790 typedef struct LeafWriter { | |
| 4791 int iLevel; | |
| 4792 int idx; | |
| 4793 sqlite_int64 iStartBlockid; /* needed to create the root info */ | |
| 4794 sqlite_int64 iEndBlockid; /* when we're done writing. */ | |
| 4795 | |
| 4796 DataBuffer term; /* previous encoded term */ | |
| 4797 DataBuffer data; /* encoding buffer */ | |
| 4798 | |
| 4799 /* bytes of first term in the current node which distinguishes that | |
| 4800 ** term from the last term of the previous node. | |
| 4801 */ | |
| 4802 int nTermDistinct; | |
| 4803 | |
| 4804 InteriorWriter parentWriter; /* if we overflow */ | |
| 4805 int has_parent; | |
| 4806 } LeafWriter; | |
| 4807 | |
| 4808 static void leafWriterInit(int iLevel, int idx, LeafWriter *pWriter){ | |
| 4809 CLEAR(pWriter); | |
| 4810 pWriter->iLevel = iLevel; | |
| 4811 pWriter->idx = idx; | |
| 4812 | |
| 4813 dataBufferInit(&pWriter->term, 32); | |
| 4814 | |
| 4815 /* Start out with a reasonably sized block, though it can grow. */ | |
| 4816 dataBufferInit(&pWriter->data, LEAF_MAX); | |
| 4817 } | |
| 4818 | |
| 4819 #ifndef NDEBUG | |
| 4820 /* Verify that the data is readable as a leaf node. */ | |
| 4821 static void leafNodeValidate(const char *pData, int nData){ | |
| 4822 int n, iDummy; | |
| 4823 | |
| 4824 if( nData==0 ) return; | |
| 4825 assert( nData>0 ); | |
| 4826 assert( pData!=0 ); | |
| 4827 assert( pData+nData>pData ); | |
| 4828 | |
| 4829 /* Must lead with a varint(0) */ | |
| 4830 n = fts3GetVarint32(pData, &iDummy); | |
| 4831 assert( iDummy==0 ); | |
| 4832 assert( n>0 ); | |
| 4833 assert( n<nData ); | |
| 4834 pData += n; | |
| 4835 nData -= n; | |
| 4836 | |
| 4837 /* Leading term length and data must fit in buffer. */ | |
| 4838 n = fts3GetVarint32(pData, &iDummy); | |
| 4839 assert( n>0 ); | |
| 4840 assert( iDummy>0 ); | |
| 4841 assert( n+iDummy>0 ); | |
| 4842 assert( n+iDummy<nData ); | |
| 4843 pData += n+iDummy; | |
| 4844 nData -= n+iDummy; | |
| 4845 | |
| 4846 /* Leading term's doclist length and data must fit. */ | |
| 4847 n = fts3GetVarint32(pData, &iDummy); | |
| 4848 assert( n>0 ); | |
| 4849 assert( iDummy>0 ); | |
| 4850 assert( n+iDummy>0 ); | |
| 4851 assert( n+iDummy<=nData ); | |
| 4852 ASSERT_VALID_DOCLIST(DL_DEFAULT, pData+n, iDummy, NULL); | |
| 4853 pData += n+iDummy; | |
| 4854 nData -= n+iDummy; | |
| 4855 | |
| 4856 /* Verify that trailing terms and doclists also are readable. */ | |
| 4857 while( nData!=0 ){ | |
| 4858 n = fts3GetVarint32(pData, &iDummy); | |
| 4859 assert( n>0 ); | |
| 4860 assert( iDummy>=0 ); | |
| 4861 assert( n<nData ); | |
| 4862 pData += n; | |
| 4863 nData -= n; | |
| 4864 n = fts3GetVarint32(pData, &iDummy); | |
| 4865 assert( n>0 ); | |
| 4866 assert( iDummy>0 ); | |
| 4867 assert( n+iDummy>0 ); | |
| 4868 assert( n+iDummy<nData ); | |
| 4869 pData += n+iDummy; | |
| 4870 nData -= n+iDummy; | |
| 4871 | |
| 4872 n = fts3GetVarint32(pData, &iDummy); | |
| 4873 assert( n>0 ); | |
| 4874 assert( iDummy>0 ); | |
| 4875 assert( n+iDummy>0 ); | |
| 4876 assert( n+iDummy<=nData ); | |
| 4877 ASSERT_VALID_DOCLIST(DL_DEFAULT, pData+n, iDummy, NULL); | |
| 4878 pData += n+iDummy; | |
| 4879 nData -= n+iDummy; | |
| 4880 } | |
| 4881 } | |
| 4882 #define ASSERT_VALID_LEAF_NODE(p, n) leafNodeValidate(p, n) | |
| 4883 #else | |
| 4884 #define ASSERT_VALID_LEAF_NODE(p, n) assert( 1 ) | |
| 4885 #endif | |
| 4886 | |
| 4887 /* Flush the current leaf node to %_segments, and adding the resulting | |
| 4888 ** blockid and the starting term to the interior node which will | |
| 4889 ** contain it. | |
| 4890 */ | |
| 4891 static int leafWriterInternalFlush(fulltext_vtab *v, LeafWriter *pWriter, | |
| 4892 int iData, int nData){ | |
| 4893 sqlite_int64 iBlockid = 0; | |
| 4894 const char *pStartingTerm; | |
| 4895 int nStartingTerm, rc, n; | |
| 4896 | |
| 4897 /* Must have the leading varint(0) flag, plus at least some | |
| 4898 ** valid-looking data. | |
| 4899 */ | |
| 4900 assert( nData>2 ); | |
| 4901 assert( iData>=0 ); | |
| 4902 assert( iData+nData<=pWriter->data.nData ); | |
| 4903 ASSERT_VALID_LEAF_NODE(pWriter->data.pData+iData, nData); | |
| 4904 | |
| 4905 rc = block_insert(v, pWriter->data.pData+iData, nData, &iBlockid); | |
| 4906 if( rc!=SQLITE_OK ) return rc; | |
| 4907 assert( iBlockid!=0 ); | |
| 4908 | |
| 4909 /* Reconstruct the first term in the leaf for purposes of building | |
| 4910 ** the interior node. | |
| 4911 */ | |
| 4912 n = fts3GetVarint32(pWriter->data.pData+iData+1, &nStartingTerm); | |
| 4913 pStartingTerm = pWriter->data.pData+iData+1+n; | |
| 4914 assert( pWriter->data.nData>iData+1+n+nStartingTerm ); | |
| 4915 assert( pWriter->nTermDistinct>0 ); | |
| 4916 assert( pWriter->nTermDistinct<=nStartingTerm ); | |
| 4917 nStartingTerm = pWriter->nTermDistinct; | |
| 4918 | |
| 4919 if( pWriter->has_parent ){ | |
| 4920 interiorWriterAppend(&pWriter->parentWriter, | |
| 4921 pStartingTerm, nStartingTerm, iBlockid); | |
| 4922 }else{ | |
| 4923 interiorWriterInit(1, pStartingTerm, nStartingTerm, iBlockid, | |
| 4924 &pWriter->parentWriter); | |
| 4925 pWriter->has_parent = 1; | |
| 4926 } | |
| 4927 | |
| 4928 /* Track the span of this segment's leaf nodes. */ | |
| 4929 if( pWriter->iEndBlockid==0 ){ | |
| 4930 pWriter->iEndBlockid = pWriter->iStartBlockid = iBlockid; | |
| 4931 }else{ | |
| 4932 pWriter->iEndBlockid++; | |
| 4933 assert( iBlockid==pWriter->iEndBlockid ); | |
| 4934 } | |
| 4935 | |
| 4936 return SQLITE_OK; | |
| 4937 } | |
| 4938 static int leafWriterFlush(fulltext_vtab *v, LeafWriter *pWriter){ | |
| 4939 int rc = leafWriterInternalFlush(v, pWriter, 0, pWriter->data.nData); | |
| 4940 if( rc!=SQLITE_OK ) return rc; | |
| 4941 | |
| 4942 /* Re-initialize the output buffer. */ | |
| 4943 dataBufferReset(&pWriter->data); | |
| 4944 | |
| 4945 return SQLITE_OK; | |
| 4946 } | |
| 4947 | |
| 4948 /* Fetch the root info for the segment. If the entire leaf fits | |
| 4949 ** within ROOT_MAX, then it will be returned directly, otherwise it | |
| 4950 ** will be flushed and the root info will be returned from the | |
| 4951 ** interior node. *piEndBlockid is set to the blockid of the last | |
| 4952 ** interior or leaf node written to disk (0 if none are written at | |
| 4953 ** all). | |
| 4954 */ | |
| 4955 static int leafWriterRootInfo(fulltext_vtab *v, LeafWriter *pWriter, | |
| 4956 char **ppRootInfo, int *pnRootInfo, | |
| 4957 sqlite_int64 *piEndBlockid){ | |
| 4958 /* we can fit the segment entirely inline */ | |
| 4959 if( !pWriter->has_parent && pWriter->data.nData<ROOT_MAX ){ | |
| 4960 *ppRootInfo = pWriter->data.pData; | |
| 4961 *pnRootInfo = pWriter->data.nData; | |
| 4962 *piEndBlockid = 0; | |
| 4963 return SQLITE_OK; | |
| 4964 } | |
| 4965 | |
| 4966 /* Flush remaining leaf data. */ | |
| 4967 if( pWriter->data.nData>0 ){ | |
| 4968 int rc = leafWriterFlush(v, pWriter); | |
| 4969 if( rc!=SQLITE_OK ) return rc; | |
| 4970 } | |
| 4971 | |
| 4972 /* We must have flushed a leaf at some point. */ | |
| 4973 assert( pWriter->has_parent ); | |
| 4974 | |
| 4975 /* Tenatively set the end leaf blockid as the end blockid. If the | |
| 4976 ** interior node can be returned inline, this will be the final | |
| 4977 ** blockid, otherwise it will be overwritten by | |
| 4978 ** interiorWriterRootInfo(). | |
| 4979 */ | |
| 4980 *piEndBlockid = pWriter->iEndBlockid; | |
| 4981 | |
| 4982 return interiorWriterRootInfo(v, &pWriter->parentWriter, | |
| 4983 ppRootInfo, pnRootInfo, piEndBlockid); | |
| 4984 } | |
| 4985 | |
| 4986 /* Collect the rootInfo data and store it into the segment directory. | |
| 4987 ** This has the effect of flushing the segment's leaf data to | |
| 4988 ** %_segments, and also flushing any interior nodes to %_segments. | |
| 4989 */ | |
| 4990 static int leafWriterFinalize(fulltext_vtab *v, LeafWriter *pWriter){ | |
| 4991 sqlite_int64 iEndBlockid; | |
| 4992 char *pRootInfo; | |
| 4993 int rc, nRootInfo; | |
| 4994 | |
| 4995 rc = leafWriterRootInfo(v, pWriter, &pRootInfo, &nRootInfo, &iEndBlockid); | |
| 4996 if( rc!=SQLITE_OK ) return rc; | |
| 4997 | |
| 4998 /* Don't bother storing an entirely empty segment. */ | |
| 4999 if( iEndBlockid==0 && nRootInfo==0 ) return SQLITE_OK; | |
| 5000 | |
| 5001 return segdir_set(v, pWriter->iLevel, pWriter->idx, | |
| 5002 pWriter->iStartBlockid, pWriter->iEndBlockid, | |
| 5003 iEndBlockid, pRootInfo, nRootInfo); | |
| 5004 } | |
| 5005 | |
| 5006 static void leafWriterDestroy(LeafWriter *pWriter){ | |
| 5007 if( pWriter->has_parent ) interiorWriterDestroy(&pWriter->parentWriter); | |
| 5008 dataBufferDestroy(&pWriter->term); | |
| 5009 dataBufferDestroy(&pWriter->data); | |
| 5010 } | |
| 5011 | |
| 5012 /* Encode a term into the leafWriter, delta-encoding as appropriate. | |
| 5013 ** Returns the length of the new term which distinguishes it from the | |
| 5014 ** previous term, which can be used to set nTermDistinct when a node | |
| 5015 ** boundary is crossed. | |
| 5016 */ | |
| 5017 static int leafWriterEncodeTerm(LeafWriter *pWriter, | |
| 5018 const char *pTerm, int nTerm){ | |
| 5019 char c[VARINT_MAX+VARINT_MAX]; | |
| 5020 int n, nPrefix = 0; | |
| 5021 | |
| 5022 assert( nTerm>0 ); | |
| 5023 while( nPrefix<pWriter->term.nData && | |
| 5024 pTerm[nPrefix]==pWriter->term.pData[nPrefix] ){ | |
| 5025 nPrefix++; | |
| 5026 /* Failing this implies that the terms weren't in order. */ | |
| 5027 assert( nPrefix<nTerm ); | |
| 5028 } | |
| 5029 | |
| 5030 if( pWriter->data.nData==0 ){ | |
| 5031 /* Encode the node header and leading term as: | |
| 5032 ** varint(0) | |
| 5033 ** varint(nTerm) | |
| 5034 ** char pTerm[nTerm] | |
| 5035 */ | |
| 5036 n = fts3PutVarint(c, '\0'); | |
| 5037 n += fts3PutVarint(c+n, nTerm); | |
| 5038 dataBufferAppend2(&pWriter->data, c, n, pTerm, nTerm); | |
| 5039 }else{ | |
| 5040 /* Delta-encode the term as: | |
| 5041 ** varint(nPrefix) | |
| 5042 ** varint(nSuffix) | |
| 5043 ** char pTermSuffix[nSuffix] | |
| 5044 */ | |
| 5045 n = fts3PutVarint(c, nPrefix); | |
| 5046 n += fts3PutVarint(c+n, nTerm-nPrefix); | |
| 5047 dataBufferAppend2(&pWriter->data, c, n, pTerm+nPrefix, nTerm-nPrefix); | |
| 5048 } | |
| 5049 dataBufferReplace(&pWriter->term, pTerm, nTerm); | |
| 5050 | |
| 5051 return nPrefix+1; | |
| 5052 } | |
| 5053 | |
| 5054 /* Used to avoid a memmove when a large amount of doclist data is in | |
| 5055 ** the buffer. This constructs a node and term header before | |
| 5056 ** iDoclistData and flushes the resulting complete node using | |
| 5057 ** leafWriterInternalFlush(). | |
| 5058 */ | |
| 5059 static int leafWriterInlineFlush(fulltext_vtab *v, LeafWriter *pWriter, | |
| 5060 const char *pTerm, int nTerm, | |
| 5061 int iDoclistData){ | |
| 5062 char c[VARINT_MAX+VARINT_MAX]; | |
| 5063 int iData, n = fts3PutVarint(c, 0); | |
| 5064 n += fts3PutVarint(c+n, nTerm); | |
| 5065 | |
| 5066 /* There should always be room for the header. Even if pTerm shared | |
| 5067 ** a substantial prefix with the previous term, the entire prefix | |
| 5068 ** could be constructed from earlier data in the doclist, so there | |
| 5069 ** should be room. | |
| 5070 */ | |
| 5071 assert( iDoclistData>=n+nTerm ); | |
| 5072 | |
| 5073 iData = iDoclistData-(n+nTerm); | |
| 5074 memcpy(pWriter->data.pData+iData, c, n); | |
| 5075 memcpy(pWriter->data.pData+iData+n, pTerm, nTerm); | |
| 5076 | |
| 5077 return leafWriterInternalFlush(v, pWriter, iData, pWriter->data.nData-iData); | |
| 5078 } | |
| 5079 | |
| 5080 /* Push pTerm[nTerm] along with the doclist data to the leaf layer of | |
| 5081 ** %_segments. | |
| 5082 */ | |
| 5083 static int leafWriterStepMerge(fulltext_vtab *v, LeafWriter *pWriter, | |
| 5084 const char *pTerm, int nTerm, | |
| 5085 DLReader *pReaders, int nReaders){ | |
| 5086 char c[VARINT_MAX+VARINT_MAX]; | |
| 5087 int iTermData = pWriter->data.nData, iDoclistData; | |
| 5088 int i, nData, n, nActualData, nActual, rc, nTermDistinct; | |
| 5089 | |
| 5090 ASSERT_VALID_LEAF_NODE(pWriter->data.pData, pWriter->data.nData); | |
| 5091 nTermDistinct = leafWriterEncodeTerm(pWriter, pTerm, nTerm); | |
| 5092 | |
| 5093 /* Remember nTermDistinct if opening a new node. */ | |
| 5094 if( iTermData==0 ) pWriter->nTermDistinct = nTermDistinct; | |
| 5095 | |
| 5096 iDoclistData = pWriter->data.nData; | |
| 5097 | |
| 5098 /* Estimate the length of the merged doclist so we can leave space | |
| 5099 ** to encode it. | |
| 5100 */ | |
| 5101 for(i=0, nData=0; i<nReaders; i++){ | |
| 5102 nData += dlrAllDataBytes(&pReaders[i]); | |
| 5103 } | |
| 5104 n = fts3PutVarint(c, nData); | |
| 5105 dataBufferAppend(&pWriter->data, c, n); | |
| 5106 | |
| 5107 rc = docListMerge(&pWriter->data, pReaders, nReaders); | |
| 5108 if( rc!=SQLITE_OK ) return rc; | |
| 5109 ASSERT_VALID_DOCLIST(DL_DEFAULT, | |
| 5110 pWriter->data.pData+iDoclistData+n, | |
| 5111 pWriter->data.nData-iDoclistData-n, NULL); | |
| 5112 | |
| 5113 /* The actual amount of doclist data at this point could be smaller | |
| 5114 ** than the length we encoded. Additionally, the space required to | |
| 5115 ** encode this length could be smaller. For small doclists, this is | |
| 5116 ** not a big deal, we can just use memmove() to adjust things. | |
| 5117 */ | |
| 5118 nActualData = pWriter->data.nData-(iDoclistData+n); | |
| 5119 nActual = fts3PutVarint(c, nActualData); | |
| 5120 assert( nActualData<=nData ); | |
| 5121 assert( nActual<=n ); | |
| 5122 | |
| 5123 /* If the new doclist is big enough for force a standalone leaf | |
| 5124 ** node, we can immediately flush it inline without doing the | |
| 5125 ** memmove(). | |
| 5126 */ | |
| 5127 /* TODO(shess) This test matches leafWriterStep(), which does this | |
| 5128 ** test before it knows the cost to varint-encode the term and | |
| 5129 ** doclist lengths. At some point, change to | |
| 5130 ** pWriter->data.nData-iTermData>STANDALONE_MIN. | |
| 5131 */ | |
| 5132 if( nTerm+nActualData>STANDALONE_MIN ){ | |
| 5133 /* Push leaf node from before this term. */ | |
| 5134 if( iTermData>0 ){ | |
| 5135 rc = leafWriterInternalFlush(v, pWriter, 0, iTermData); | |
| 5136 if( rc!=SQLITE_OK ) return rc; | |
| 5137 | |
| 5138 pWriter->nTermDistinct = nTermDistinct; | |
| 5139 } | |
| 5140 | |
| 5141 /* Fix the encoded doclist length. */ | |
| 5142 iDoclistData += n - nActual; | |
| 5143 memcpy(pWriter->data.pData+iDoclistData, c, nActual); | |
| 5144 | |
| 5145 /* Push the standalone leaf node. */ | |
| 5146 rc = leafWriterInlineFlush(v, pWriter, pTerm, nTerm, iDoclistData); | |
| 5147 if( rc!=SQLITE_OK ) return rc; | |
| 5148 | |
| 5149 /* Leave the node empty. */ | |
| 5150 dataBufferReset(&pWriter->data); | |
| 5151 | |
| 5152 return rc; | |
| 5153 } | |
| 5154 | |
| 5155 /* At this point, we know that the doclist was small, so do the | |
| 5156 ** memmove if indicated. | |
| 5157 */ | |
| 5158 if( nActual<n ){ | |
| 5159 memmove(pWriter->data.pData+iDoclistData+nActual, | |
| 5160 pWriter->data.pData+iDoclistData+n, | |
| 5161 pWriter->data.nData-(iDoclistData+n)); | |
| 5162 pWriter->data.nData -= n-nActual; | |
| 5163 } | |
| 5164 | |
| 5165 /* Replace written length with actual length. */ | |
| 5166 memcpy(pWriter->data.pData+iDoclistData, c, nActual); | |
| 5167 | |
| 5168 /* If the node is too large, break things up. */ | |
| 5169 /* TODO(shess) This test matches leafWriterStep(), which does this | |
| 5170 ** test before it knows the cost to varint-encode the term and | |
| 5171 ** doclist lengths. At some point, change to | |
| 5172 ** pWriter->data.nData>LEAF_MAX. | |
| 5173 */ | |
| 5174 if( iTermData+nTerm+nActualData>LEAF_MAX ){ | |
| 5175 /* Flush out the leading data as a node */ | |
| 5176 rc = leafWriterInternalFlush(v, pWriter, 0, iTermData); | |
| 5177 if( rc!=SQLITE_OK ) return rc; | |
| 5178 | |
| 5179 pWriter->nTermDistinct = nTermDistinct; | |
| 5180 | |
| 5181 /* Rebuild header using the current term */ | |
| 5182 n = fts3PutVarint(pWriter->data.pData, 0); | |
| 5183 n += fts3PutVarint(pWriter->data.pData+n, nTerm); | |
| 5184 memcpy(pWriter->data.pData+n, pTerm, nTerm); | |
| 5185 n += nTerm; | |
| 5186 | |
| 5187 /* There should always be room, because the previous encoding | |
| 5188 ** included all data necessary to construct the term. | |
| 5189 */ | |
| 5190 assert( n<iDoclistData ); | |
| 5191 /* So long as STANDALONE_MIN is half or less of LEAF_MAX, the | |
| 5192 ** following memcpy() is safe (as opposed to needing a memmove). | |
| 5193 */ | |
| 5194 assert( 2*STANDALONE_MIN<=LEAF_MAX ); | |
| 5195 assert( n+pWriter->data.nData-iDoclistData<iDoclistData ); | |
| 5196 memcpy(pWriter->data.pData+n, | |
| 5197 pWriter->data.pData+iDoclistData, | |
| 5198 pWriter->data.nData-iDoclistData); | |
| 5199 pWriter->data.nData -= iDoclistData-n; | |
| 5200 } | |
| 5201 ASSERT_VALID_LEAF_NODE(pWriter->data.pData, pWriter->data.nData); | |
| 5202 | |
| 5203 return SQLITE_OK; | |
| 5204 } | |
| 5205 | |
| 5206 /* Push pTerm[nTerm] along with the doclist data to the leaf layer of | |
| 5207 ** %_segments. | |
| 5208 */ | |
| 5209 /* TODO(shess) Revise writeZeroSegment() so that doclists are | |
| 5210 ** constructed directly in pWriter->data. | |
| 5211 */ | |
| 5212 static int leafWriterStep(fulltext_vtab *v, LeafWriter *pWriter, | |
| 5213 const char *pTerm, int nTerm, | |
| 5214 const char *pData, int nData){ | |
| 5215 int rc; | |
| 5216 DLReader reader; | |
| 5217 | |
| 5218 rc = dlrInit(&reader, DL_DEFAULT, pData, nData); | |
| 5219 if( rc!=SQLITE_OK ) return rc; | |
| 5220 rc = leafWriterStepMerge(v, pWriter, pTerm, nTerm, &reader, 1); | |
| 5221 dlrDestroy(&reader); | |
| 5222 | |
| 5223 return rc; | |
| 5224 } | |
| 5225 | |
| 5226 | |
| 5227 /****************************************************************/ | |
| 5228 /* LeafReader is used to iterate over an individual leaf node. */ | |
| 5229 typedef struct LeafReader { | |
| 5230 DataBuffer term; /* copy of current term. */ | |
| 5231 | |
| 5232 const char *pData; /* data for current term. */ | |
| 5233 int nData; | |
| 5234 } LeafReader; | |
| 5235 | |
| 5236 static void leafReaderDestroy(LeafReader *pReader){ | |
| 5237 dataBufferDestroy(&pReader->term); | |
| 5238 SCRAMBLE(pReader); | |
| 5239 } | |
| 5240 | |
| 5241 static int leafReaderAtEnd(LeafReader *pReader){ | |
| 5242 return pReader->nData<=0; | |
| 5243 } | |
| 5244 | |
| 5245 /* Access the current term. */ | |
| 5246 static int leafReaderTermBytes(LeafReader *pReader){ | |
| 5247 return pReader->term.nData; | |
| 5248 } | |
| 5249 static const char *leafReaderTerm(LeafReader *pReader){ | |
| 5250 assert( pReader->term.nData>0 ); | |
| 5251 return pReader->term.pData; | |
| 5252 } | |
| 5253 | |
| 5254 /* Access the doclist data for the current term. */ | |
| 5255 static int leafReaderDataBytes(LeafReader *pReader){ | |
| 5256 int nData; | |
| 5257 assert( pReader->term.nData>0 ); | |
| 5258 fts3GetVarint32(pReader->pData, &nData); | |
| 5259 return nData; | |
| 5260 } | |
| 5261 static const char *leafReaderData(LeafReader *pReader){ | |
| 5262 int n, nData; | |
| 5263 assert( pReader->term.nData>0 ); | |
| 5264 n = fts3GetVarint32Safe(pReader->pData, &nData, pReader->nData); | |
| 5265 if( !n || nData>pReader->nData-n ) return NULL; | |
| 5266 return pReader->pData+n; | |
| 5267 } | |
| 5268 | |
| 5269 static int leafReaderInit(const char *pData, int nData, | |
| 5270 LeafReader *pReader){ | |
| 5271 int nTerm, n; | |
| 5272 | |
| 5273 /* All callers check this precondition. */ | |
| 5274 assert( nData>0 ); | |
| 5275 assert( pData[0]=='\0' ); | |
| 5276 | |
| 5277 CLEAR(pReader); | |
| 5278 | |
| 5279 /* Read the first term, skipping the header byte. */ | |
| 5280 n = fts3GetVarint32Safe(pData+1, &nTerm, nData-1); | |
| 5281 if( !n || nTerm<0 || nTerm>nData-1-n ) return SQLITE_CORRUPT_BKPT; | |
| 5282 dataBufferInit(&pReader->term, nTerm); | |
| 5283 dataBufferReplace(&pReader->term, pData+1+n, nTerm); | |
| 5284 | |
| 5285 /* Position after the first term. */ | |
| 5286 pReader->pData = pData+1+n+nTerm; | |
| 5287 pReader->nData = nData-1-n-nTerm; | |
| 5288 return SQLITE_OK; | |
| 5289 } | |
| 5290 | |
| 5291 /* Step the reader forward to the next term. */ | |
| 5292 static int leafReaderStep(LeafReader *pReader){ | |
| 5293 int n, nData, nPrefix, nSuffix; | |
| 5294 assert( !leafReaderAtEnd(pReader) ); | |
| 5295 | |
| 5296 /* Skip previous entry's data block. */ | |
| 5297 n = fts3GetVarint32Safe(pReader->pData, &nData, pReader->nData); | |
| 5298 if( !n || nData<0 || nData>pReader->nData-n ) return SQLITE_CORRUPT_BKPT; | |
| 5299 pReader->pData += n+nData; | |
| 5300 pReader->nData -= n+nData; | |
| 5301 | |
| 5302 if( !leafReaderAtEnd(pReader) ){ | |
| 5303 /* Construct the new term using a prefix from the old term plus a | |
| 5304 ** suffix from the leaf data. | |
| 5305 */ | |
| 5306 n = fts3GetVarint32Safe(pReader->pData, &nPrefix, pReader->nData); | |
| 5307 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 5308 pReader->nData -= n; | |
| 5309 pReader->pData += n; | |
| 5310 n = fts3GetVarint32Safe(pReader->pData, &nSuffix, pReader->nData); | |
| 5311 if( !n ) return SQLITE_CORRUPT_BKPT; | |
| 5312 pReader->nData -= n; | |
| 5313 pReader->pData += n; | |
| 5314 if( nSuffix<0 || nSuffix>pReader->nData ) return SQLITE_CORRUPT_BKPT; | |
| 5315 if( nPrefix<0 || nPrefix>pReader->term.nData ) return SQLITE_CORRUPT_BKPT; | |
| 5316 pReader->term.nData = nPrefix; | |
| 5317 dataBufferAppend(&pReader->term, pReader->pData, nSuffix); | |
| 5318 | |
| 5319 pReader->pData += nSuffix; | |
| 5320 pReader->nData -= nSuffix; | |
| 5321 } | |
| 5322 return SQLITE_OK; | |
| 5323 } | |
| 5324 | |
| 5325 /* strcmp-style comparison of pReader's current term against pTerm. | |
| 5326 ** If isPrefix, equality means equal through nTerm bytes. | |
| 5327 */ | |
| 5328 static int leafReaderTermCmp(LeafReader *pReader, | |
| 5329 const char *pTerm, int nTerm, int isPrefix){ | |
| 5330 int c, n = pReader->term.nData<nTerm ? pReader->term.nData : nTerm; | |
| 5331 if( n==0 ){ | |
| 5332 if( pReader->term.nData>0 ) return -1; | |
| 5333 if(nTerm>0 ) return 1; | |
| 5334 return 0; | |
| 5335 } | |
| 5336 | |
| 5337 c = memcmp(pReader->term.pData, pTerm, n); | |
| 5338 if( c!=0 ) return c; | |
| 5339 if( isPrefix && n==nTerm ) return 0; | |
| 5340 return pReader->term.nData - nTerm; | |
| 5341 } | |
| 5342 | |
| 5343 | |
| 5344 /****************************************************************/ | |
| 5345 /* LeavesReader wraps LeafReader to allow iterating over the entire | |
| 5346 ** leaf layer of the tree. | |
| 5347 */ | |
| 5348 typedef struct LeavesReader { | |
| 5349 int idx; /* Index within the segment. */ | |
| 5350 | |
| 5351 sqlite3_stmt *pStmt; /* Statement we're streaming leaves from. */ | |
| 5352 int eof; /* we've seen SQLITE_DONE from pStmt. */ | |
| 5353 | |
| 5354 LeafReader leafReader; /* reader for the current leaf. */ | |
| 5355 DataBuffer rootData; /* root data for inline. */ | |
| 5356 } LeavesReader; | |
| 5357 | |
| 5358 /* Access the current term. */ | |
| 5359 static int leavesReaderTermBytes(LeavesReader *pReader){ | |
| 5360 assert( !pReader->eof ); | |
| 5361 return leafReaderTermBytes(&pReader->leafReader); | |
| 5362 } | |
| 5363 static const char *leavesReaderTerm(LeavesReader *pReader){ | |
| 5364 assert( !pReader->eof ); | |
| 5365 return leafReaderTerm(&pReader->leafReader); | |
| 5366 } | |
| 5367 | |
| 5368 /* Access the doclist data for the current term. */ | |
| 5369 static int leavesReaderDataBytes(LeavesReader *pReader){ | |
| 5370 assert( !pReader->eof ); | |
| 5371 return leafReaderDataBytes(&pReader->leafReader); | |
| 5372 } | |
| 5373 static const char *leavesReaderData(LeavesReader *pReader){ | |
| 5374 assert( !pReader->eof ); | |
| 5375 return leafReaderData(&pReader->leafReader); | |
| 5376 } | |
| 5377 | |
| 5378 static int leavesReaderAtEnd(LeavesReader *pReader){ | |
| 5379 return pReader->eof; | |
| 5380 } | |
| 5381 | |
| 5382 /* loadSegmentLeaves() may not read all the way to SQLITE_DONE, thus | |
| 5383 ** leaving the statement handle open, which locks the table. | |
| 5384 */ | |
| 5385 /* TODO(shess) This "solution" is not satisfactory. Really, there | |
| 5386 ** should be check-in function for all statement handles which | |
| 5387 ** arranges to call sqlite3_reset(). This most likely will require | |
| 5388 ** modification to control flow all over the place, though, so for now | |
| 5389 ** just punt. | |
| 5390 ** | |
| 5391 ** Note the the current system assumes that segment merges will run to | |
| 5392 ** completion, which is why this particular probably hasn't arisen in | |
| 5393 ** this case. Probably a brittle assumption. | |
| 5394 */ | |
| 5395 static int leavesReaderReset(LeavesReader *pReader){ | |
| 5396 return sqlite3_reset(pReader->pStmt); | |
| 5397 } | |
| 5398 | |
| 5399 static void leavesReaderDestroy(LeavesReader *pReader){ | |
| 5400 /* If idx is -1, that means we're using a non-cached statement | |
| 5401 ** handle in the optimize() case, so we need to release it. | |
| 5402 */ | |
| 5403 if( pReader->pStmt!=NULL && pReader->idx==-1 ){ | |
| 5404 sqlite3_finalize(pReader->pStmt); | |
| 5405 } | |
| 5406 leafReaderDestroy(&pReader->leafReader); | |
| 5407 dataBufferDestroy(&pReader->rootData); | |
| 5408 SCRAMBLE(pReader); | |
| 5409 } | |
| 5410 | |
| 5411 /* Initialize pReader with the given root data (if iStartBlockid==0 | |
| 5412 ** the leaf data was entirely contained in the root), or from the | |
| 5413 ** stream of blocks between iStartBlockid and iEndBlockid, inclusive. | |
| 5414 */ | |
| 5415 static int leavesReaderInit(fulltext_vtab *v, | |
| 5416 int idx, | |
| 5417 sqlite_int64 iStartBlockid, | |
| 5418 sqlite_int64 iEndBlockid, | |
| 5419 const char *pRootData, int nRootData, | |
| 5420 LeavesReader *pReader){ | |
| 5421 CLEAR(pReader); | |
| 5422 pReader->idx = idx; | |
| 5423 | |
| 5424 dataBufferInit(&pReader->rootData, 0); | |
| 5425 if( iStartBlockid==0 ){ | |
| 5426 int rc; | |
| 5427 /* Corrupt if this can't be a leaf node. */ | |
| 5428 if( pRootData==NULL || nRootData<1 || pRootData[0]!='\0' ){ | |
| 5429 return SQLITE_CORRUPT_BKPT; | |
| 5430 } | |
| 5431 /* Entire leaf level fit in root data. */ | |
| 5432 dataBufferReplace(&pReader->rootData, pRootData, nRootData); | |
| 5433 rc = leafReaderInit(pReader->rootData.pData, pReader->rootData.nData, | |
| 5434 &pReader->leafReader); | |
| 5435 if( rc!=SQLITE_OK ){ | |
| 5436 dataBufferDestroy(&pReader->rootData); | |
| 5437 return rc; | |
| 5438 } | |
| 5439 }else{ | |
| 5440 sqlite3_stmt *s; | |
| 5441 int rc = sql_get_leaf_statement(v, idx, &s); | |
| 5442 if( rc!=SQLITE_OK ) return rc; | |
| 5443 | |
| 5444 rc = sqlite3_bind_int64(s, 1, iStartBlockid); | |
| 5445 if( rc!=SQLITE_OK ) goto err; | |
| 5446 | |
| 5447 rc = sqlite3_bind_int64(s, 2, iEndBlockid); | |
| 5448 if( rc!=SQLITE_OK ) goto err; | |
| 5449 | |
| 5450 rc = sqlite3_step(s); | |
| 5451 | |
| 5452 /* Corrupt if interior node referenced missing leaf node. */ | |
| 5453 if( rc==SQLITE_DONE ){ | |
| 5454 rc = SQLITE_CORRUPT_BKPT; | |
| 5455 goto err; | |
| 5456 } | |
| 5457 | |
| 5458 if( rc!=SQLITE_ROW ) goto err; | |
| 5459 rc = SQLITE_OK; | |
| 5460 | |
| 5461 /* Corrupt if leaf data isn't a blob. */ | |
| 5462 if( sqlite3_column_type(s, 0)!=SQLITE_BLOB ){ | |
| 5463 rc = SQLITE_CORRUPT_BKPT; | |
| 5464 }else{ | |
| 5465 const char *pLeafData = sqlite3_column_blob(s, 0); | |
| 5466 int nLeafData = sqlite3_column_bytes(s, 0); | |
| 5467 | |
| 5468 /* Corrupt if this can't be a leaf node. */ | |
| 5469 if( pLeafData==NULL || nLeafData<1 || pLeafData[0]!='\0' ){ | |
| 5470 rc = SQLITE_CORRUPT_BKPT; | |
| 5471 }else{ | |
| 5472 rc = leafReaderInit(pLeafData, nLeafData, &pReader->leafReader); | |
| 5473 } | |
| 5474 } | |
| 5475 | |
| 5476 err: | |
| 5477 if( rc!=SQLITE_OK ){ | |
| 5478 if( idx==-1 ){ | |
| 5479 sqlite3_finalize(s); | |
| 5480 }else{ | |
| 5481 sqlite3_reset(s); | |
| 5482 } | |
| 5483 return rc; | |
| 5484 } | |
| 5485 | |
| 5486 pReader->pStmt = s; | |
| 5487 } | |
| 5488 return SQLITE_OK; | |
| 5489 } | |
| 5490 | |
| 5491 /* Step the current leaf forward to the next term. If we reach the | |
| 5492 ** end of the current leaf, step forward to the next leaf block. | |
| 5493 */ | |
| 5494 static int leavesReaderStep(fulltext_vtab *v, LeavesReader *pReader){ | |
| 5495 int rc; | |
| 5496 assert( !leavesReaderAtEnd(pReader) ); | |
| 5497 rc = leafReaderStep(&pReader->leafReader); | |
| 5498 if( rc!=SQLITE_OK ) return rc; | |
| 5499 | |
| 5500 if( leafReaderAtEnd(&pReader->leafReader) ){ | |
| 5501 if( pReader->rootData.pData ){ | |
| 5502 pReader->eof = 1; | |
| 5503 return SQLITE_OK; | |
| 5504 } | |
| 5505 rc = sqlite3_step(pReader->pStmt); | |
| 5506 if( rc!=SQLITE_ROW ){ | |
| 5507 pReader->eof = 1; | |
| 5508 return rc==SQLITE_DONE ? SQLITE_OK : rc; | |
| 5509 } | |
| 5510 | |
| 5511 /* Corrupt if leaf data isn't a blob. */ | |
| 5512 if( sqlite3_column_type(pReader->pStmt, 0)!=SQLITE_BLOB ){ | |
| 5513 return SQLITE_CORRUPT_BKPT; | |
| 5514 }else{ | |
| 5515 LeafReader tmp; | |
| 5516 const char *pLeafData = sqlite3_column_blob(pReader->pStmt, 0); | |
| 5517 int nLeafData = sqlite3_column_bytes(pReader->pStmt, 0); | |
| 5518 | |
| 5519 /* Corrupt if this can't be a leaf node. */ | |
| 5520 if( pLeafData==NULL || nLeafData<1 || pLeafData[0]!='\0' ){ | |
| 5521 return SQLITE_CORRUPT_BKPT; | |
| 5522 } | |
| 5523 | |
| 5524 rc = leafReaderInit(pLeafData, nLeafData, &tmp); | |
| 5525 if( rc!=SQLITE_OK ) return rc; | |
| 5526 leafReaderDestroy(&pReader->leafReader); | |
| 5527 pReader->leafReader = tmp; | |
| 5528 } | |
| 5529 } | |
| 5530 return SQLITE_OK; | |
| 5531 } | |
| 5532 | |
| 5533 /* Order LeavesReaders by their term, ignoring idx. Readers at eof | |
| 5534 ** always sort to the end. | |
| 5535 */ | |
| 5536 static int leavesReaderTermCmp(LeavesReader *lr1, LeavesReader *lr2){ | |
| 5537 if( leavesReaderAtEnd(lr1) ){ | |
| 5538 if( leavesReaderAtEnd(lr2) ) return 0; | |
| 5539 return 1; | |
| 5540 } | |
| 5541 if( leavesReaderAtEnd(lr2) ) return -1; | |
| 5542 | |
| 5543 return leafReaderTermCmp(&lr1->leafReader, | |
| 5544 leavesReaderTerm(lr2), leavesReaderTermBytes(lr2), | |
| 5545 0); | |
| 5546 } | |
| 5547 | |
| 5548 /* Similar to leavesReaderTermCmp(), with additional ordering by idx | |
| 5549 ** so that older segments sort before newer segments. | |
| 5550 */ | |
| 5551 static int leavesReaderCmp(LeavesReader *lr1, LeavesReader *lr2){ | |
| 5552 int c = leavesReaderTermCmp(lr1, lr2); | |
| 5553 if( c!=0 ) return c; | |
| 5554 return lr1->idx-lr2->idx; | |
| 5555 } | |
| 5556 | |
| 5557 /* Assume that pLr[1]..pLr[nLr] are sorted. Bubble pLr[0] into its | |
| 5558 ** sorted position. | |
| 5559 */ | |
| 5560 static void leavesReaderReorder(LeavesReader *pLr, int nLr){ | |
| 5561 while( nLr>1 && leavesReaderCmp(pLr, pLr+1)>0 ){ | |
| 5562 LeavesReader tmp = pLr[0]; | |
| 5563 pLr[0] = pLr[1]; | |
| 5564 pLr[1] = tmp; | |
| 5565 nLr--; | |
| 5566 pLr++; | |
| 5567 } | |
| 5568 } | |
| 5569 | |
| 5570 /* Initializes pReaders with the segments from level iLevel, returning | |
| 5571 ** the number of segments in *piReaders. Leaves pReaders in sorted | |
| 5572 ** order. | |
| 5573 */ | |
| 5574 static int leavesReadersInit(fulltext_vtab *v, int iLevel, | |
| 5575 LeavesReader *pReaders, int *piReaders){ | |
| 5576 sqlite3_stmt *s; | |
| 5577 int i, rc = sql_get_statement(v, SEGDIR_SELECT_LEVEL_STMT, &s); | |
| 5578 if( rc!=SQLITE_OK ) return rc; | |
| 5579 | |
| 5580 rc = sqlite3_bind_int(s, 1, iLevel); | |
| 5581 if( rc!=SQLITE_OK ) return rc; | |
| 5582 | |
| 5583 i = 0; | |
| 5584 while( (rc = sqlite3_step(s))==SQLITE_ROW ){ | |
| 5585 sqlite_int64 iStart = sqlite3_column_int64(s, 0); | |
| 5586 sqlite_int64 iEnd = sqlite3_column_int64(s, 1); | |
| 5587 const char *pRootData = sqlite3_column_blob(s, 2); | |
| 5588 int nRootData = sqlite3_column_bytes(s, 2); | |
| 5589 sqlite_int64 iIndex = sqlite3_column_int64(s, 3); | |
| 5590 | |
| 5591 /* Corrupt if we get back different types than we stored. */ | |
| 5592 /* Also corrupt if the index is not sequential starting at 0. */ | |
| 5593 if( sqlite3_column_type(s, 0)!=SQLITE_INTEGER || | |
| 5594 sqlite3_column_type(s, 1)!=SQLITE_INTEGER || | |
| 5595 sqlite3_column_type(s, 2)!=SQLITE_BLOB || | |
| 5596 i!=iIndex || | |
| 5597 i>=MERGE_COUNT ){ | |
| 5598 rc = SQLITE_CORRUPT_BKPT; | |
| 5599 break; | |
| 5600 } | |
| 5601 | |
| 5602 rc = leavesReaderInit(v, i, iStart, iEnd, pRootData, nRootData, | |
| 5603 &pReaders[i]); | |
| 5604 if( rc!=SQLITE_OK ) break; | |
| 5605 | |
| 5606 i++; | |
| 5607 } | |
| 5608 if( rc!=SQLITE_DONE ){ | |
| 5609 while( i-->0 ){ | |
| 5610 leavesReaderDestroy(&pReaders[i]); | |
| 5611 } | |
| 5612 sqlite3_reset(s); /* So we don't leave a lock. */ | |
| 5613 return rc; | |
| 5614 } | |
| 5615 | |
| 5616 *piReaders = i; | |
| 5617 | |
| 5618 /* Leave our results sorted by term, then age. */ | |
| 5619 while( i-- ){ | |
| 5620 leavesReaderReorder(pReaders+i, *piReaders-i); | |
| 5621 } | |
| 5622 return SQLITE_OK; | |
| 5623 } | |
| 5624 | |
| 5625 /* Merge doclists from pReaders[nReaders] into a single doclist, which | |
| 5626 ** is written to pWriter. Assumes pReaders is ordered oldest to | |
| 5627 ** newest. | |
| 5628 */ | |
| 5629 /* TODO(shess) Consider putting this inline in segmentMerge(). */ | |
| 5630 static int leavesReadersMerge(fulltext_vtab *v, | |
| 5631 LeavesReader *pReaders, int nReaders, | |
| 5632 LeafWriter *pWriter){ | |
| 5633 DLReader dlReaders[MERGE_COUNT]; | |
| 5634 const char *pTerm = leavesReaderTerm(pReaders); | |
| 5635 int i, nTerm = leavesReaderTermBytes(pReaders); | |
| 5636 int rc; | |
| 5637 | |
| 5638 assert( nReaders<=MERGE_COUNT ); | |
| 5639 | |
| 5640 for(i=0; i<nReaders; i++){ | |
| 5641 const char *pData = leavesReaderData(pReaders+i); | |
| 5642 if( pData==NULL ){ | |
| 5643 rc = SQLITE_CORRUPT_BKPT; | |
| 5644 break; | |
| 5645 } | |
| 5646 rc = dlrInit(&dlReaders[i], DL_DEFAULT, | |
| 5647 pData, | |
| 5648 leavesReaderDataBytes(pReaders+i)); | |
| 5649 if( rc!=SQLITE_OK ) break; | |
| 5650 } | |
| 5651 if( rc!=SQLITE_OK ){ | |
| 5652 while( i-->0 ){ | |
| 5653 dlrDestroy(&dlReaders[i]); | |
| 5654 } | |
| 5655 return rc; | |
| 5656 } | |
| 5657 | |
| 5658 return leafWriterStepMerge(v, pWriter, pTerm, nTerm, dlReaders, nReaders); | |
| 5659 } | |
| 5660 | |
| 5661 /* Forward ref due to mutual recursion with segdirNextIndex(). */ | |
| 5662 static int segmentMerge(fulltext_vtab *v, int iLevel); | |
| 5663 | |
| 5664 /* Put the next available index at iLevel into *pidx. If iLevel | |
| 5665 ** already has MERGE_COUNT segments, they are merged to a higher | |
| 5666 ** level to make room. | |
| 5667 */ | |
| 5668 static int segdirNextIndex(fulltext_vtab *v, int iLevel, int *pidx){ | |
| 5669 int rc = segdir_max_index(v, iLevel, pidx); | |
| 5670 if( rc==SQLITE_DONE ){ /* No segments at iLevel. */ | |
| 5671 *pidx = 0; | |
| 5672 }else if( rc==SQLITE_ROW ){ | |
| 5673 if( *pidx==(MERGE_COUNT-1) ){ | |
| 5674 rc = segmentMerge(v, iLevel); | |
| 5675 if( rc!=SQLITE_OK ) return rc; | |
| 5676 *pidx = 0; | |
| 5677 }else{ | |
| 5678 (*pidx)++; | |
| 5679 } | |
| 5680 }else{ | |
| 5681 return rc; | |
| 5682 } | |
| 5683 return SQLITE_OK; | |
| 5684 } | |
| 5685 | |
| 5686 /* Merge MERGE_COUNT segments at iLevel into a new segment at | |
| 5687 ** iLevel+1. If iLevel+1 is already full of segments, those will be | |
| 5688 ** merged to make room. | |
| 5689 */ | |
| 5690 static int segmentMerge(fulltext_vtab *v, int iLevel){ | |
| 5691 LeafWriter writer; | |
| 5692 LeavesReader lrs[MERGE_COUNT]; | |
| 5693 int i, rc, idx = 0; | |
| 5694 | |
| 5695 /* Determine the next available segment index at the next level, | |
| 5696 ** merging as necessary. | |
| 5697 */ | |
| 5698 rc = segdirNextIndex(v, iLevel+1, &idx); | |
| 5699 if( rc!=SQLITE_OK ) return rc; | |
| 5700 | |
| 5701 /* TODO(shess) This assumes that we'll always see exactly | |
| 5702 ** MERGE_COUNT segments to merge at a given level. That will be | |
| 5703 ** broken if we allow the developer to request preemptive or | |
| 5704 ** deferred merging. | |
| 5705 */ | |
| 5706 memset(&lrs, '\0', sizeof(lrs)); | |
| 5707 rc = leavesReadersInit(v, iLevel, lrs, &i); | |
| 5708 if( rc!=SQLITE_OK ) return rc; | |
| 5709 | |
| 5710 leafWriterInit(iLevel+1, idx, &writer); | |
| 5711 | |
| 5712 if( i!=MERGE_COUNT ){ | |
| 5713 rc = SQLITE_CORRUPT_BKPT; | |
| 5714 goto err; | |
| 5715 } | |
| 5716 | |
| 5717 /* Since leavesReaderReorder() pushes readers at eof to the end, | |
| 5718 ** when the first reader is empty, all will be empty. | |
| 5719 */ | |
| 5720 while( !leavesReaderAtEnd(lrs) ){ | |
| 5721 /* Figure out how many readers share their next term. */ | |
| 5722 for(i=1; i<MERGE_COUNT && !leavesReaderAtEnd(lrs+i); i++){ | |
| 5723 if( 0!=leavesReaderTermCmp(lrs, lrs+i) ) break; | |
| 5724 } | |
| 5725 | |
| 5726 rc = leavesReadersMerge(v, lrs, i, &writer); | |
| 5727 if( rc!=SQLITE_OK ) goto err; | |
| 5728 | |
| 5729 /* Step forward those that were merged. */ | |
| 5730 while( i-->0 ){ | |
| 5731 rc = leavesReaderStep(v, lrs+i); | |
| 5732 if( rc!=SQLITE_OK ) goto err; | |
| 5733 | |
| 5734 /* Reorder by term, then by age. */ | |
| 5735 leavesReaderReorder(lrs+i, MERGE_COUNT-i); | |
| 5736 } | |
| 5737 } | |
| 5738 | |
| 5739 for(i=0; i<MERGE_COUNT; i++){ | |
| 5740 leavesReaderDestroy(&lrs[i]); | |
| 5741 } | |
| 5742 | |
| 5743 rc = leafWriterFinalize(v, &writer); | |
| 5744 leafWriterDestroy(&writer); | |
| 5745 if( rc!=SQLITE_OK ) return rc; | |
| 5746 | |
| 5747 /* Delete the merged segment data. */ | |
| 5748 return segdir_delete(v, iLevel); | |
| 5749 | |
| 5750 err: | |
| 5751 for(i=0; i<MERGE_COUNT; i++){ | |
| 5752 leavesReaderDestroy(&lrs[i]); | |
| 5753 } | |
| 5754 leafWriterDestroy(&writer); | |
| 5755 return rc; | |
| 5756 } | |
| 5757 | |
| 5758 /* Accumulate the union of *acc and *pData into *acc. */ | |
| 5759 static int docListAccumulateUnion(DataBuffer *acc, | |
| 5760 const char *pData, int nData) { | |
| 5761 DataBuffer tmp = *acc; | |
| 5762 int rc; | |
| 5763 dataBufferInit(acc, tmp.nData+nData); | |
| 5764 rc = docListUnion(tmp.pData, tmp.nData, pData, nData, acc); | |
| 5765 dataBufferDestroy(&tmp); | |
| 5766 return rc; | |
| 5767 } | |
| 5768 | |
| 5769 /* TODO(shess) It might be interesting to explore different merge | |
| 5770 ** strategies, here. For instance, since this is a sorted merge, we | |
| 5771 ** could easily merge many doclists in parallel. With some | |
| 5772 ** comprehension of the storage format, we could merge all of the | |
| 5773 ** doclists within a leaf node directly from the leaf node's storage. | |
| 5774 ** It may be worthwhile to merge smaller doclists before larger | |
| 5775 ** doclists, since they can be traversed more quickly - but the | |
| 5776 ** results may have less overlap, making them more expensive in a | |
| 5777 ** different way. | |
| 5778 */ | |
| 5779 | |
| 5780 /* Scan pReader for pTerm/nTerm, and merge the term's doclist over | |
| 5781 ** *out (any doclists with duplicate docids overwrite those in *out). | |
| 5782 ** Internal function for loadSegmentLeaf(). | |
| 5783 */ | |
| 5784 static int loadSegmentLeavesInt(fulltext_vtab *v, LeavesReader *pReader, | |
| 5785 const char *pTerm, int nTerm, int isPrefix, | |
| 5786 DataBuffer *out){ | |
| 5787 /* doclist data is accumulated into pBuffers similar to how one does | |
| 5788 ** increment in binary arithmetic. If index 0 is empty, the data is | |
| 5789 ** stored there. If there is data there, it is merged and the | |
| 5790 ** results carried into position 1, with further merge-and-carry | |
| 5791 ** until an empty position is found. | |
| 5792 */ | |
| 5793 DataBuffer *pBuffers = NULL; | |
| 5794 int nBuffers = 0, nMaxBuffers = 0, rc; | |
| 5795 | |
| 5796 assert( nTerm>0 ); | |
| 5797 | |
| 5798 for(rc=SQLITE_OK; rc==SQLITE_OK && !leavesReaderAtEnd(pReader); | |
| 5799 rc=leavesReaderStep(v, pReader)){ | |
| 5800 /* TODO(shess) Really want leavesReaderTermCmp(), but that name is | |
| 5801 ** already taken to compare the terms of two LeavesReaders. Think | |
| 5802 ** on a better name. [Meanwhile, break encapsulation rather than | |
| 5803 ** use a confusing name.] | |
| 5804 */ | |
| 5805 int c = leafReaderTermCmp(&pReader->leafReader, pTerm, nTerm, isPrefix); | |
| 5806 if( c>0 ) break; /* Past any possible matches. */ | |
| 5807 if( c==0 ){ | |
| 5808 int iBuffer, nData; | |
| 5809 const char *pData = leavesReaderData(pReader); | |
| 5810 if( pData==NULL ){ | |
| 5811 rc = SQLITE_CORRUPT_BKPT; | |
| 5812 break; | |
| 5813 } | |
| 5814 nData = leavesReaderDataBytes(pReader); | |
| 5815 | |
| 5816 /* Find the first empty buffer. */ | |
| 5817 for(iBuffer=0; iBuffer<nBuffers; ++iBuffer){ | |
| 5818 if( 0==pBuffers[iBuffer].nData ) break; | |
| 5819 } | |
| 5820 | |
| 5821 /* Out of buffers, add an empty one. */ | |
| 5822 if( iBuffer==nBuffers ){ | |
| 5823 if( nBuffers==nMaxBuffers ){ | |
| 5824 DataBuffer *p; | |
| 5825 nMaxBuffers += 20; | |
| 5826 | |
| 5827 /* Manual realloc so we can handle NULL appropriately. */ | |
| 5828 p = sqlite3_malloc(nMaxBuffers*sizeof(*pBuffers)); | |
| 5829 if( p==NULL ){ | |
| 5830 rc = SQLITE_NOMEM; | |
| 5831 break; | |
| 5832 } | |
| 5833 | |
| 5834 if( nBuffers>0 ){ | |
| 5835 assert(pBuffers!=NULL); | |
| 5836 memcpy(p, pBuffers, nBuffers*sizeof(*pBuffers)); | |
| 5837 sqlite3_free(pBuffers); | |
| 5838 } | |
| 5839 pBuffers = p; | |
| 5840 } | |
| 5841 dataBufferInit(&(pBuffers[nBuffers]), 0); | |
| 5842 nBuffers++; | |
| 5843 } | |
| 5844 | |
| 5845 /* At this point, must have an empty at iBuffer. */ | |
| 5846 assert(iBuffer<nBuffers && pBuffers[iBuffer].nData==0); | |
| 5847 | |
| 5848 /* If empty was first buffer, no need for merge logic. */ | |
| 5849 if( iBuffer==0 ){ | |
| 5850 dataBufferReplace(&(pBuffers[0]), pData, nData); | |
| 5851 }else{ | |
| 5852 /* pAcc is the empty buffer the merged data will end up in. */ | |
| 5853 DataBuffer *pAcc = &(pBuffers[iBuffer]); | |
| 5854 DataBuffer *p = &(pBuffers[0]); | |
| 5855 | |
| 5856 /* Handle position 0 specially to avoid need to prime pAcc | |
| 5857 ** with pData/nData. | |
| 5858 */ | |
| 5859 dataBufferSwap(p, pAcc); | |
| 5860 rc = docListAccumulateUnion(pAcc, pData, nData); | |
| 5861 if( rc!=SQLITE_OK ) goto err; | |
| 5862 | |
| 5863 /* Accumulate remaining doclists into pAcc. */ | |
| 5864 for(++p; p<pAcc; ++p){ | |
| 5865 rc = docListAccumulateUnion(pAcc, p->pData, p->nData); | |
| 5866 if( rc!=SQLITE_OK ) goto err; | |
| 5867 | |
| 5868 /* dataBufferReset() could allow a large doclist to blow up | |
| 5869 ** our memory requirements. | |
| 5870 */ | |
| 5871 if( p->nCapacity<1024 ){ | |
| 5872 dataBufferReset(p); | |
| 5873 }else{ | |
| 5874 dataBufferDestroy(p); | |
| 5875 dataBufferInit(p, 0); | |
| 5876 } | |
| 5877 } | |
| 5878 } | |
| 5879 } | |
| 5880 } | |
| 5881 | |
| 5882 /* Union all the doclists together into *out. */ | |
| 5883 /* TODO(shess) What if *out is big? Sigh. */ | |
| 5884 if( rc==SQLITE_OK && nBuffers>0 ){ | |
| 5885 int iBuffer; | |
| 5886 for(iBuffer=0; iBuffer<nBuffers; ++iBuffer){ | |
| 5887 if( pBuffers[iBuffer].nData>0 ){ | |
| 5888 if( out->nData==0 ){ | |
| 5889 dataBufferSwap(out, &(pBuffers[iBuffer])); | |
| 5890 }else{ | |
| 5891 rc = docListAccumulateUnion(out, pBuffers[iBuffer].pData, | |
| 5892 pBuffers[iBuffer].nData); | |
| 5893 if( rc!=SQLITE_OK ) break; | |
| 5894 } | |
| 5895 } | |
| 5896 } | |
| 5897 } | |
| 5898 | |
| 5899 err: | |
| 5900 while( nBuffers-- ){ | |
| 5901 dataBufferDestroy(&(pBuffers[nBuffers])); | |
| 5902 } | |
| 5903 if( pBuffers!=NULL ) sqlite3_free(pBuffers); | |
| 5904 | |
| 5905 return rc; | |
| 5906 } | |
| 5907 | |
| 5908 /* Call loadSegmentLeavesInt() with pData/nData as input. */ | |
| 5909 static int loadSegmentLeaf(fulltext_vtab *v, const char *pData, int nData, | |
| 5910 const char *pTerm, int nTerm, int isPrefix, | |
| 5911 DataBuffer *out){ | |
| 5912 LeavesReader reader; | |
| 5913 int rc; | |
| 5914 | |
| 5915 assert( nData>1 ); | |
| 5916 assert( *pData=='\0' ); | |
| 5917 rc = leavesReaderInit(v, 0, 0, 0, pData, nData, &reader); | |
| 5918 if( rc!=SQLITE_OK ) return rc; | |
| 5919 | |
| 5920 rc = loadSegmentLeavesInt(v, &reader, pTerm, nTerm, isPrefix, out); | |
| 5921 leavesReaderReset(&reader); | |
| 5922 leavesReaderDestroy(&reader); | |
| 5923 return rc; | |
| 5924 } | |
| 5925 | |
| 5926 /* Call loadSegmentLeavesInt() with the leaf nodes from iStartLeaf to | |
| 5927 ** iEndLeaf (inclusive) as input, and merge the resulting doclist into | |
| 5928 ** out. | |
| 5929 */ | |
| 5930 static int loadSegmentLeaves(fulltext_vtab *v, | |
| 5931 sqlite_int64 iStartLeaf, sqlite_int64 iEndLeaf, | |
| 5932 const char *pTerm, int nTerm, int isPrefix, | |
| 5933 DataBuffer *out){ | |
| 5934 int rc; | |
| 5935 LeavesReader reader; | |
| 5936 | |
| 5937 assert( iStartLeaf<=iEndLeaf ); | |
| 5938 rc = leavesReaderInit(v, 0, iStartLeaf, iEndLeaf, NULL, 0, &reader); | |
| 5939 if( rc!=SQLITE_OK ) return rc; | |
| 5940 | |
| 5941 rc = loadSegmentLeavesInt(v, &reader, pTerm, nTerm, isPrefix, out); | |
| 5942 leavesReaderReset(&reader); | |
| 5943 leavesReaderDestroy(&reader); | |
| 5944 return rc; | |
| 5945 } | |
| 5946 | |
| 5947 /* Taking pData/nData as an interior node, find the sequence of child | |
| 5948 ** nodes which could include pTerm/nTerm/isPrefix. Note that the | |
| 5949 ** interior node terms logically come between the blocks, so there is | |
| 5950 ** one more blockid than there are terms (that block contains terms >= | |
| 5951 ** the last interior-node term). | |
| 5952 */ | |
| 5953 /* TODO(shess) The calling code may already know that the end child is | |
| 5954 ** not worth calculating, because the end may be in a later sibling | |
| 5955 ** node. Consider whether breaking symmetry is worthwhile. I suspect | |
| 5956 ** it is not worthwhile. | |
| 5957 */ | |
| 5958 static int getChildrenContaining(const char *pData, int nData, | |
| 5959 const char *pTerm, int nTerm, int isPrefix, | |
| 5960 sqlite_int64 *piStartChild, | |
| 5961 sqlite_int64 *piEndChild){ | |
| 5962 InteriorReader reader; | |
| 5963 int rc; | |
| 5964 | |
| 5965 assert( nData>1 ); | |
| 5966 assert( *pData!='\0' ); | |
| 5967 rc = interiorReaderInit(pData, nData, &reader); | |
| 5968 if( rc!=SQLITE_OK ) return rc; | |
| 5969 | |
| 5970 /* Scan for the first child which could contain pTerm/nTerm. */ | |
| 5971 while( !interiorReaderAtEnd(&reader) ){ | |
| 5972 if( interiorReaderTermCmp(&reader, pTerm, nTerm, 0)>0 ) break; | |
| 5973 rc = interiorReaderStep(&reader); | |
| 5974 if( rc!=SQLITE_OK ){ | |
| 5975 interiorReaderDestroy(&reader); | |
| 5976 return rc; | |
| 5977 } | |
| 5978 } | |
| 5979 *piStartChild = interiorReaderCurrentBlockid(&reader); | |
| 5980 | |
| 5981 /* Keep scanning to find a term greater than our term, using prefix | |
| 5982 ** comparison if indicated. If isPrefix is false, this will be the | |
| 5983 ** same blockid as the starting block. | |
| 5984 */ | |
| 5985 while( !interiorReaderAtEnd(&reader) ){ | |
| 5986 if( interiorReaderTermCmp(&reader, pTerm, nTerm, isPrefix)>0 ) break; | |
| 5987 rc = interiorReaderStep(&reader); | |
| 5988 if( rc!=SQLITE_OK ){ | |
| 5989 interiorReaderDestroy(&reader); | |
| 5990 return rc; | |
| 5991 } | |
| 5992 } | |
| 5993 *piEndChild = interiorReaderCurrentBlockid(&reader); | |
| 5994 | |
| 5995 interiorReaderDestroy(&reader); | |
| 5996 | |
| 5997 /* Children must ascend, and if !prefix, both must be the same. */ | |
| 5998 assert( *piEndChild>=*piStartChild ); | |
| 5999 assert( isPrefix || *piStartChild==*piEndChild ); | |
| 6000 return rc; | |
| 6001 } | |
| 6002 | |
| 6003 /* Read block at iBlockid and pass it with other params to | |
| 6004 ** getChildrenContaining(). | |
| 6005 */ | |
| 6006 static int loadAndGetChildrenContaining( | |
| 6007 fulltext_vtab *v, | |
| 6008 sqlite_int64 iBlockid, | |
| 6009 const char *pTerm, int nTerm, int isPrefix, | |
| 6010 sqlite_int64 *piStartChild, sqlite_int64 *piEndChild | |
| 6011 ){ | |
| 6012 sqlite3_stmt *s = NULL; | |
| 6013 int rc; | |
| 6014 | |
| 6015 assert( iBlockid!=0 ); | |
| 6016 assert( pTerm!=NULL ); | |
| 6017 assert( nTerm!=0 ); /* TODO(shess) Why not allow this? */ | |
| 6018 assert( piStartChild!=NULL ); | |
| 6019 assert( piEndChild!=NULL ); | |
| 6020 | |
| 6021 rc = sql_get_statement(v, BLOCK_SELECT_STMT, &s); | |
| 6022 if( rc!=SQLITE_OK ) return rc; | |
| 6023 | |
| 6024 rc = sqlite3_bind_int64(s, 1, iBlockid); | |
| 6025 if( rc!=SQLITE_OK ) return rc; | |
| 6026 | |
| 6027 rc = sqlite3_step(s); | |
| 6028 /* Corrupt if interior node references missing child node. */ | |
| 6029 if( rc==SQLITE_DONE ) return SQLITE_CORRUPT_BKPT; | |
| 6030 if( rc!=SQLITE_ROW ) return rc; | |
| 6031 | |
| 6032 /* Corrupt if child node isn't a blob. */ | |
| 6033 if( sqlite3_column_type(s, 0)!=SQLITE_BLOB ){ | |
| 6034 sqlite3_reset(s); /* So we don't leave a lock. */ | |
| 6035 return SQLITE_CORRUPT_BKPT; | |
| 6036 }else{ | |
| 6037 const char *pData = sqlite3_column_blob(s, 0); | |
| 6038 int nData = sqlite3_column_bytes(s, 0); | |
| 6039 | |
| 6040 /* Corrupt if child is not a valid interior node. */ | |
| 6041 if( pData==NULL || nData<1 || pData[0]=='\0' ){ | |
| 6042 sqlite3_reset(s); /* So we don't leave a lock. */ | |
| 6043 return SQLITE_CORRUPT_BKPT; | |
| 6044 } | |
| 6045 | |
| 6046 rc = getChildrenContaining(pData, nData, pTerm, nTerm, | |
| 6047 isPrefix, piStartChild, piEndChild); | |
| 6048 if( rc!=SQLITE_OK ){ | |
| 6049 sqlite3_reset(s); | |
| 6050 return rc; | |
| 6051 } | |
| 6052 } | |
| 6053 | |
| 6054 /* We expect only one row. We must execute another sqlite3_step() | |
| 6055 * to complete the iteration; otherwise the table will remain | |
| 6056 * locked. */ | |
| 6057 rc = sqlite3_step(s); | |
| 6058 if( rc==SQLITE_ROW ) return SQLITE_ERROR; | |
| 6059 if( rc!=SQLITE_DONE ) return rc; | |
| 6060 | |
| 6061 return SQLITE_OK; | |
| 6062 } | |
| 6063 | |
| 6064 /* Traverse the tree represented by pData[nData] looking for | |
| 6065 ** pTerm[nTerm], placing its doclist into *out. This is internal to | |
| 6066 ** loadSegment() to make error-handling cleaner. | |
| 6067 */ | |
| 6068 static int loadSegmentInt(fulltext_vtab *v, const char *pData, int nData, | |
| 6069 sqlite_int64 iLeavesEnd, | |
| 6070 const char *pTerm, int nTerm, int isPrefix, | |
| 6071 DataBuffer *out){ | |
| 6072 /* Special case where root is a leaf. */ | |
| 6073 if( *pData=='\0' ){ | |
| 6074 return loadSegmentLeaf(v, pData, nData, pTerm, nTerm, isPrefix, out); | |
| 6075 }else{ | |
| 6076 int rc; | |
| 6077 sqlite_int64 iStartChild, iEndChild; | |
| 6078 | |
| 6079 /* Process pData as an interior node, then loop down the tree | |
| 6080 ** until we find the set of leaf nodes to scan for the term. | |
| 6081 */ | |
| 6082 rc = getChildrenContaining(pData, nData, pTerm, nTerm, isPrefix, | |
| 6083 &iStartChild, &iEndChild); | |
| 6084 if( rc!=SQLITE_OK ) return rc; | |
| 6085 while( iStartChild>iLeavesEnd ){ | |
| 6086 sqlite_int64 iNextStart, iNextEnd; | |
| 6087 rc = loadAndGetChildrenContaining(v, iStartChild, pTerm, nTerm, isPrefix, | |
| 6088 &iNextStart, &iNextEnd); | |
| 6089 if( rc!=SQLITE_OK ) return rc; | |
| 6090 | |
| 6091 /* If we've branched, follow the end branch, too. */ | |
| 6092 if( iStartChild!=iEndChild ){ | |
| 6093 sqlite_int64 iDummy; | |
| 6094 rc = loadAndGetChildrenContaining(v, iEndChild, pTerm, nTerm, isPrefix, | |
| 6095 &iDummy, &iNextEnd); | |
| 6096 if( rc!=SQLITE_OK ) return rc; | |
| 6097 } | |
| 6098 | |
| 6099 assert( iNextStart<=iNextEnd ); | |
| 6100 iStartChild = iNextStart; | |
| 6101 iEndChild = iNextEnd; | |
| 6102 } | |
| 6103 assert( iStartChild<=iLeavesEnd ); | |
| 6104 assert( iEndChild<=iLeavesEnd ); | |
| 6105 | |
| 6106 /* Scan through the leaf segments for doclists. */ | |
| 6107 return loadSegmentLeaves(v, iStartChild, iEndChild, | |
| 6108 pTerm, nTerm, isPrefix, out); | |
| 6109 } | |
| 6110 } | |
| 6111 | |
| 6112 /* Call loadSegmentInt() to collect the doclist for pTerm/nTerm, then | |
| 6113 ** merge its doclist over *out (any duplicate doclists read from the | |
| 6114 ** segment rooted at pData will overwrite those in *out). | |
| 6115 */ | |
| 6116 /* TODO(shess) Consider changing this to determine the depth of the | |
| 6117 ** leaves using either the first characters of interior nodes (when | |
| 6118 ** ==1, we're one level above the leaves), or the first character of | |
| 6119 ** the root (which will describe the height of the tree directly). | |
| 6120 ** Either feels somewhat tricky to me. | |
| 6121 */ | |
| 6122 /* TODO(shess) The current merge is likely to be slow for large | |
| 6123 ** doclists (though it should process from newest/smallest to | |
| 6124 ** oldest/largest, so it may not be that bad). It might be useful to | |
| 6125 ** modify things to allow for N-way merging. This could either be | |
| 6126 ** within a segment, with pairwise merges across segments, or across | |
| 6127 ** all segments at once. | |
| 6128 */ | |
| 6129 static int loadSegment(fulltext_vtab *v, const char *pData, int nData, | |
| 6130 sqlite_int64 iLeavesEnd, | |
| 6131 const char *pTerm, int nTerm, int isPrefix, | |
| 6132 DataBuffer *out){ | |
| 6133 DataBuffer result; | |
| 6134 int rc; | |
| 6135 | |
| 6136 /* Corrupt if segment root can't be valid. */ | |
| 6137 if( pData==NULL || nData<1 ) return SQLITE_CORRUPT_BKPT; | |
| 6138 | |
| 6139 /* This code should never be called with buffered updates. */ | |
| 6140 assert( v->nPendingData<0 ); | |
| 6141 | |
| 6142 dataBufferInit(&result, 0); | |
| 6143 rc = loadSegmentInt(v, pData, nData, iLeavesEnd, | |
| 6144 pTerm, nTerm, isPrefix, &result); | |
| 6145 if( rc==SQLITE_OK && result.nData>0 ){ | |
| 6146 if( out->nData==0 ){ | |
| 6147 DataBuffer tmp = *out; | |
| 6148 *out = result; | |
| 6149 result = tmp; | |
| 6150 }else{ | |
| 6151 DataBuffer merged; | |
| 6152 DLReader readers[2]; | |
| 6153 | |
| 6154 rc = dlrInit(&readers[0], DL_DEFAULT, out->pData, out->nData); | |
| 6155 if( rc==SQLITE_OK ){ | |
| 6156 rc = dlrInit(&readers[1], DL_DEFAULT, result.pData, result.nData); | |
| 6157 if( rc==SQLITE_OK ){ | |
| 6158 dataBufferInit(&merged, out->nData+result.nData); | |
| 6159 rc = docListMerge(&merged, readers, 2); | |
| 6160 dataBufferDestroy(out); | |
| 6161 *out = merged; | |
| 6162 dlrDestroy(&readers[1]); | |
| 6163 } | |
| 6164 dlrDestroy(&readers[0]); | |
| 6165 } | |
| 6166 } | |
| 6167 } | |
| 6168 | |
| 6169 dataBufferDestroy(&result); | |
| 6170 return rc; | |
| 6171 } | |
| 6172 | |
| 6173 /* Scan the database and merge together the posting lists for the term | |
| 6174 ** into *out. | |
| 6175 */ | |
| 6176 static int termSelect( | |
| 6177 fulltext_vtab *v, | |
| 6178 int iColumn, | |
| 6179 const char *pTerm, int nTerm, /* Term to query for */ | |
| 6180 int isPrefix, /* True for a prefix search */ | |
| 6181 DocListType iType, | |
| 6182 DataBuffer *out /* Write results here */ | |
| 6183 ){ | |
| 6184 DataBuffer doclist; | |
| 6185 sqlite3_stmt *s; | |
| 6186 int rc = sql_get_statement(v, SEGDIR_SELECT_ALL_STMT, &s); | |
| 6187 if( rc!=SQLITE_OK ) return rc; | |
| 6188 | |
| 6189 /* This code should never be called with buffered updates. */ | |
| 6190 assert( v->nPendingData<0 ); | |
| 6191 | |
| 6192 dataBufferInit(&doclist, 0); | |
| 6193 dataBufferInit(out, 0); | |
| 6194 | |
| 6195 /* Traverse the segments from oldest to newest so that newer doclist | |
| 6196 ** elements for given docids overwrite older elements. | |
| 6197 */ | |
| 6198 while( (rc = sqlite3_step(s))==SQLITE_ROW ){ | |
| 6199 const char *pData = sqlite3_column_blob(s, 2); | |
| 6200 const int nData = sqlite3_column_bytes(s, 2); | |
| 6201 const sqlite_int64 iLeavesEnd = sqlite3_column_int64(s, 1); | |
| 6202 | |
| 6203 /* Corrupt if we get back different types than we stored. */ | |
| 6204 if( sqlite3_column_type(s, 1)!=SQLITE_INTEGER || | |
| 6205 sqlite3_column_type(s, 2)!=SQLITE_BLOB ){ | |
| 6206 rc = SQLITE_CORRUPT_BKPT; | |
| 6207 goto err; | |
| 6208 } | |
| 6209 | |
| 6210 rc = loadSegment(v, pData, nData, iLeavesEnd, pTerm, nTerm, isPrefix, | |
| 6211 &doclist); | |
| 6212 if( rc!=SQLITE_OK ) goto err; | |
| 6213 } | |
| 6214 if( rc==SQLITE_DONE ){ | |
| 6215 rc = SQLITE_OK; | |
| 6216 if( doclist.nData!=0 ){ | |
| 6217 /* TODO(shess) The old term_select_all() code applied the column | |
| 6218 ** restrict as we merged segments, leading to smaller buffers. | |
| 6219 ** This is probably worthwhile to bring back, once the new storage | |
| 6220 ** system is checked in. | |
| 6221 */ | |
| 6222 if( iColumn==v->nColumn) iColumn = -1; | |
| 6223 rc = docListTrim(DL_DEFAULT, doclist.pData, doclist.nData, | |
| 6224 iColumn, iType, out); | |
| 6225 } | |
| 6226 } | |
| 6227 | |
| 6228 err: | |
| 6229 sqlite3_reset(s); /* So we don't leave a lock. */ | |
| 6230 dataBufferDestroy(&doclist); | |
| 6231 return rc; | |
| 6232 } | |
| 6233 | |
| 6234 /****************************************************************/ | |
| 6235 /* Used to hold hashtable data for sorting. */ | |
| 6236 typedef struct TermData { | |
| 6237 const char *pTerm; | |
| 6238 int nTerm; | |
| 6239 DLCollector *pCollector; | |
| 6240 } TermData; | |
| 6241 | |
| 6242 /* Orders TermData elements in strcmp fashion ( <0 for less-than, 0 | |
| 6243 ** for equal, >0 for greater-than). | |
| 6244 */ | |
| 6245 static int termDataCmp(const void *av, const void *bv){ | |
| 6246 const TermData *a = (const TermData *)av; | |
| 6247 const TermData *b = (const TermData *)bv; | |
| 6248 int n = a->nTerm<b->nTerm ? a->nTerm : b->nTerm; | |
| 6249 int c = memcmp(a->pTerm, b->pTerm, n); | |
| 6250 if( c!=0 ) return c; | |
| 6251 return a->nTerm-b->nTerm; | |
| 6252 } | |
| 6253 | |
| 6254 /* Order pTerms data by term, then write a new level 0 segment using | |
| 6255 ** LeafWriter. | |
| 6256 */ | |
| 6257 static int writeZeroSegment(fulltext_vtab *v, fts3Hash *pTerms){ | |
| 6258 fts3HashElem *e; | |
| 6259 int idx, rc, i, n; | |
| 6260 TermData *pData; | |
| 6261 LeafWriter writer; | |
| 6262 DataBuffer dl; | |
| 6263 | |
| 6264 /* Determine the next index at level 0, merging as necessary. */ | |
| 6265 rc = segdirNextIndex(v, 0, &idx); | |
| 6266 if( rc!=SQLITE_OK ) return rc; | |
| 6267 | |
| 6268 n = fts3HashCount(pTerms); | |
| 6269 pData = sqlite3_malloc(n*sizeof(TermData)); | |
| 6270 | |
| 6271 for(i = 0, e = fts3HashFirst(pTerms); e; i++, e = fts3HashNext(e)){ | |
| 6272 assert( i<n ); | |
| 6273 pData[i].pTerm = fts3HashKey(e); | |
| 6274 pData[i].nTerm = fts3HashKeysize(e); | |
| 6275 pData[i].pCollector = fts3HashData(e); | |
| 6276 } | |
| 6277 assert( i==n ); | |
| 6278 | |
| 6279 /* TODO(shess) Should we allow user-defined collation sequences, | |
| 6280 ** here? I think we only need that once we support prefix searches. | |
| 6281 */ | |
| 6282 if( n>1 ) qsort(pData, n, sizeof(*pData), termDataCmp); | |
| 6283 | |
| 6284 /* TODO(shess) Refactor so that we can write directly to the segment | |
| 6285 ** DataBuffer, as happens for segment merges. | |
| 6286 */ | |
| 6287 leafWriterInit(0, idx, &writer); | |
| 6288 dataBufferInit(&dl, 0); | |
| 6289 for(i=0; i<n; i++){ | |
| 6290 dataBufferReset(&dl); | |
| 6291 dlcAddDoclist(pData[i].pCollector, &dl); | |
| 6292 rc = leafWriterStep(v, &writer, | |
| 6293 pData[i].pTerm, pData[i].nTerm, dl.pData, dl.nData); | |
| 6294 if( rc!=SQLITE_OK ) goto err; | |
| 6295 } | |
| 6296 rc = leafWriterFinalize(v, &writer); | |
| 6297 | |
| 6298 err: | |
| 6299 dataBufferDestroy(&dl); | |
| 6300 sqlite3_free(pData); | |
| 6301 leafWriterDestroy(&writer); | |
| 6302 return rc; | |
| 6303 } | |
| 6304 | |
| 6305 /* If pendingTerms has data, free it. */ | |
| 6306 static int clearPendingTerms(fulltext_vtab *v){ | |
| 6307 if( v->nPendingData>=0 ){ | |
| 6308 fts3HashElem *e; | |
| 6309 for(e=fts3HashFirst(&v->pendingTerms); e; e=fts3HashNext(e)){ | |
| 6310 dlcDelete(fts3HashData(e)); | |
| 6311 } | |
| 6312 fts3HashClear(&v->pendingTerms); | |
| 6313 v->nPendingData = -1; | |
| 6314 } | |
| 6315 return SQLITE_OK; | |
| 6316 } | |
| 6317 | |
| 6318 /* If pendingTerms has data, flush it to a level-zero segment, and | |
| 6319 ** free it. | |
| 6320 */ | |
| 6321 static int flushPendingTerms(fulltext_vtab *v){ | |
| 6322 if( v->nPendingData>=0 ){ | |
| 6323 int rc = writeZeroSegment(v, &v->pendingTerms); | |
| 6324 if( rc==SQLITE_OK ) clearPendingTerms(v); | |
| 6325 return rc; | |
| 6326 } | |
| 6327 return SQLITE_OK; | |
| 6328 } | |
| 6329 | |
| 6330 /* If pendingTerms is "too big", or docid is out of order, flush it. | |
| 6331 ** Regardless, be certain that pendingTerms is initialized for use. | |
| 6332 */ | |
| 6333 static int initPendingTerms(fulltext_vtab *v, sqlite_int64 iDocid){ | |
| 6334 /* TODO(shess) Explore whether partially flushing the buffer on | |
| 6335 ** forced-flush would provide better performance. I suspect that if | |
| 6336 ** we ordered the doclists by size and flushed the largest until the | |
| 6337 ** buffer was half empty, that would let the less frequent terms | |
| 6338 ** generate longer doclists. | |
| 6339 */ | |
| 6340 if( iDocid<=v->iPrevDocid || v->nPendingData>kPendingThreshold ){ | |
| 6341 int rc = flushPendingTerms(v); | |
| 6342 if( rc!=SQLITE_OK ) return rc; | |
| 6343 } | |
| 6344 if( v->nPendingData<0 ){ | |
| 6345 fts3HashInit(&v->pendingTerms, FTS3_HASH_STRING, 1); | |
| 6346 v->nPendingData = 0; | |
| 6347 } | |
| 6348 v->iPrevDocid = iDocid; | |
| 6349 return SQLITE_OK; | |
| 6350 } | |
| 6351 | |
| 6352 /* This function implements the xUpdate callback; it is the top-level entry | |
| 6353 * point for inserting, deleting or updating a row in a full-text table. */ | |
| 6354 static int fulltextUpdate(sqlite3_vtab *pVtab, int nArg, sqlite3_value **ppArg, | |
| 6355 sqlite_int64 *pRowid){ | |
| 6356 fulltext_vtab *v = (fulltext_vtab *) pVtab; | |
| 6357 int rc; | |
| 6358 | |
| 6359 FTSTRACE(("FTS3 Update %p\n", pVtab)); | |
| 6360 | |
| 6361 if( nArg<2 ){ | |
| 6362 rc = index_delete(v, sqlite3_value_int64(ppArg[0])); | |
| 6363 if( rc==SQLITE_OK ){ | |
| 6364 /* If we just deleted the last row in the table, clear out the | |
| 6365 ** index data. | |
| 6366 */ | |
| 6367 rc = content_exists(v); | |
| 6368 if( rc==SQLITE_ROW ){ | |
| 6369 rc = SQLITE_OK; | |
| 6370 }else if( rc==SQLITE_DONE ){ | |
| 6371 /* Clear the pending terms so we don't flush a useless level-0 | |
| 6372 ** segment when the transaction closes. | |
| 6373 */ | |
| 6374 rc = clearPendingTerms(v); | |
| 6375 if( rc==SQLITE_OK ){ | |
| 6376 rc = segdir_delete_all(v); | |
| 6377 } | |
| 6378 } | |
| 6379 } | |
| 6380 } else if( sqlite3_value_type(ppArg[0]) != SQLITE_NULL ){ | |
| 6381 /* An update: | |
| 6382 * ppArg[0] = old rowid | |
| 6383 * ppArg[1] = new rowid | |
| 6384 * ppArg[2..2+v->nColumn-1] = values | |
| 6385 * ppArg[2+v->nColumn] = value for magic column (we ignore this) | |
| 6386 * ppArg[2+v->nColumn+1] = value for docid | |
| 6387 */ | |
| 6388 sqlite_int64 rowid = sqlite3_value_int64(ppArg[0]); | |
| 6389 if( sqlite3_value_type(ppArg[1]) != SQLITE_INTEGER || | |
| 6390 sqlite3_value_int64(ppArg[1]) != rowid ){ | |
| 6391 rc = SQLITE_ERROR; /* we don't allow changing the rowid */ | |
| 6392 }else if( sqlite3_value_type(ppArg[2+v->nColumn+1]) != SQLITE_INTEGER || | |
| 6393 sqlite3_value_int64(ppArg[2+v->nColumn+1]) != rowid ){ | |
| 6394 rc = SQLITE_ERROR; /* we don't allow changing the docid */ | |
| 6395 }else{ | |
| 6396 assert( nArg==2+v->nColumn+2); | |
| 6397 rc = index_update(v, rowid, &ppArg[2]); | |
| 6398 } | |
| 6399 } else { | |
| 6400 /* An insert: | |
| 6401 * ppArg[1] = requested rowid | |
| 6402 * ppArg[2..2+v->nColumn-1] = values | |
| 6403 * ppArg[2+v->nColumn] = value for magic column (we ignore this) | |
| 6404 * ppArg[2+v->nColumn+1] = value for docid | |
| 6405 */ | |
| 6406 sqlite3_value *pRequestDocid = ppArg[2+v->nColumn+1]; | |
| 6407 assert( nArg==2+v->nColumn+2); | |
| 6408 if( SQLITE_NULL != sqlite3_value_type(pRequestDocid) && | |
| 6409 SQLITE_NULL != sqlite3_value_type(ppArg[1]) ){ | |
| 6410 /* TODO(shess) Consider allowing this to work if the values are | |
| 6411 ** identical. I'm inclined to discourage that usage, though, | |
| 6412 ** given that both rowid and docid are special columns. Better | |
| 6413 ** would be to define one or the other as the default winner, | |
| 6414 ** but should it be fts3-centric (docid) or SQLite-centric | |
| 6415 ** (rowid)? | |
| 6416 */ | |
| 6417 rc = SQLITE_ERROR; | |
| 6418 }else{ | |
| 6419 if( SQLITE_NULL == sqlite3_value_type(pRequestDocid) ){ | |
| 6420 pRequestDocid = ppArg[1]; | |
| 6421 } | |
| 6422 rc = index_insert(v, pRequestDocid, &ppArg[2], pRowid); | |
| 6423 } | |
| 6424 } | |
| 6425 | |
| 6426 return rc; | |
| 6427 } | |
| 6428 | |
| 6429 static int fulltextSync(sqlite3_vtab *pVtab){ | |
| 6430 FTSTRACE(("FTS3 xSync()\n")); | |
| 6431 return flushPendingTerms((fulltext_vtab *)pVtab); | |
| 6432 } | |
| 6433 | |
| 6434 static int fulltextBegin(sqlite3_vtab *pVtab){ | |
| 6435 fulltext_vtab *v = (fulltext_vtab *) pVtab; | |
| 6436 FTSTRACE(("FTS3 xBegin()\n")); | |
| 6437 | |
| 6438 /* Any buffered updates should have been cleared by the previous | |
| 6439 ** transaction. | |
| 6440 */ | |
| 6441 assert( v->nPendingData<0 ); | |
| 6442 return clearPendingTerms(v); | |
| 6443 } | |
| 6444 | |
| 6445 static int fulltextCommit(sqlite3_vtab *pVtab){ | |
| 6446 fulltext_vtab *v = (fulltext_vtab *) pVtab; | |
| 6447 FTSTRACE(("FTS3 xCommit()\n")); | |
| 6448 | |
| 6449 /* Buffered updates should have been cleared by fulltextSync(). */ | |
| 6450 assert( v->nPendingData<0 ); | |
| 6451 return clearPendingTerms(v); | |
| 6452 } | |
| 6453 | |
| 6454 static int fulltextRollback(sqlite3_vtab *pVtab){ | |
| 6455 FTSTRACE(("FTS3 xRollback()\n")); | |
| 6456 return clearPendingTerms((fulltext_vtab *)pVtab); | |
| 6457 } | |
| 6458 | |
| 6459 /* | |
| 6460 ** Implementation of the snippet() function for FTS3 | |
| 6461 */ | |
| 6462 static void snippetFunc( | |
| 6463 sqlite3_context *pContext, | |
| 6464 int argc, | |
| 6465 sqlite3_value **argv | |
| 6466 ){ | |
| 6467 fulltext_cursor *pCursor; | |
| 6468 if( argc<1 ) return; | |
| 6469 if( sqlite3_value_type(argv[0])!=SQLITE_BLOB || | |
| 6470 sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){ | |
| 6471 sqlite3_result_error(pContext, "illegal first argument to html_snippet",-1); | |
| 6472 }else{ | |
| 6473 const char *zStart = "<b>"; | |
| 6474 const char *zEnd = "</b>"; | |
| 6475 const char *zEllipsis = "<b>...</b>"; | |
| 6476 memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor)); | |
| 6477 if( argc>=2 ){ | |
| 6478 zStart = (const char*)sqlite3_value_text(argv[1]); | |
| 6479 if( argc>=3 ){ | |
| 6480 zEnd = (const char*)sqlite3_value_text(argv[2]); | |
| 6481 if( argc>=4 ){ | |
| 6482 zEllipsis = (const char*)sqlite3_value_text(argv[3]); | |
| 6483 } | |
| 6484 } | |
| 6485 } | |
| 6486 snippetAllOffsets(pCursor); | |
| 6487 snippetText(pCursor, zStart, zEnd, zEllipsis); | |
| 6488 sqlite3_result_text(pContext, pCursor->snippet.zSnippet, | |
| 6489 pCursor->snippet.nSnippet, SQLITE_STATIC); | |
| 6490 } | |
| 6491 } | |
| 6492 | |
| 6493 /* | |
| 6494 ** Implementation of the offsets() function for FTS3 | |
| 6495 */ | |
| 6496 static void snippetOffsetsFunc( | |
| 6497 sqlite3_context *pContext, | |
| 6498 int argc, | |
| 6499 sqlite3_value **argv | |
| 6500 ){ | |
| 6501 fulltext_cursor *pCursor; | |
| 6502 if( argc<1 ) return; | |
| 6503 if( sqlite3_value_type(argv[0])!=SQLITE_BLOB || | |
| 6504 sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){ | |
| 6505 sqlite3_result_error(pContext, "illegal first argument to offsets",-1); | |
| 6506 }else{ | |
| 6507 memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor)); | |
| 6508 snippetAllOffsets(pCursor); | |
| 6509 snippetOffsetText(&pCursor->snippet); | |
| 6510 sqlite3_result_text(pContext, | |
| 6511 pCursor->snippet.zOffset, pCursor->snippet.nOffset, | |
| 6512 SQLITE_STATIC); | |
| 6513 } | |
| 6514 } | |
| 6515 | |
| 6516 /* OptLeavesReader is nearly identical to LeavesReader, except that | |
| 6517 ** where LeavesReader is geared towards the merging of complete | |
| 6518 ** segment levels (with exactly MERGE_COUNT segments), OptLeavesReader | |
| 6519 ** is geared towards implementation of the optimize() function, and | |
| 6520 ** can merge all segments simultaneously. This version may be | |
| 6521 ** somewhat less efficient than LeavesReader because it merges into an | |
| 6522 ** accumulator rather than doing an N-way merge, but since segment | |
| 6523 ** size grows exponentially (so segment count logrithmically) this is | |
| 6524 ** probably not an immediate problem. | |
| 6525 */ | |
| 6526 /* TODO(shess): Prove that assertion, or extend the merge code to | |
| 6527 ** merge tree fashion (like the prefix-searching code does). | |
| 6528 */ | |
| 6529 /* TODO(shess): OptLeavesReader and LeavesReader could probably be | |
| 6530 ** merged with little or no loss of performance for LeavesReader. The | |
| 6531 ** merged code would need to handle >MERGE_COUNT segments, and would | |
| 6532 ** also need to be able to optionally optimize away deletes. | |
| 6533 */ | |
| 6534 typedef struct OptLeavesReader { | |
| 6535 /* Segment number, to order readers by age. */ | |
| 6536 int segment; | |
| 6537 LeavesReader reader; | |
| 6538 } OptLeavesReader; | |
| 6539 | |
| 6540 static int optLeavesReaderAtEnd(OptLeavesReader *pReader){ | |
| 6541 return leavesReaderAtEnd(&pReader->reader); | |
| 6542 } | |
| 6543 static int optLeavesReaderTermBytes(OptLeavesReader *pReader){ | |
| 6544 return leavesReaderTermBytes(&pReader->reader); | |
| 6545 } | |
| 6546 static const char *optLeavesReaderData(OptLeavesReader *pReader){ | |
| 6547 return leavesReaderData(&pReader->reader); | |
| 6548 } | |
| 6549 static int optLeavesReaderDataBytes(OptLeavesReader *pReader){ | |
| 6550 return leavesReaderDataBytes(&pReader->reader); | |
| 6551 } | |
| 6552 static const char *optLeavesReaderTerm(OptLeavesReader *pReader){ | |
| 6553 return leavesReaderTerm(&pReader->reader); | |
| 6554 } | |
| 6555 static int optLeavesReaderStep(fulltext_vtab *v, OptLeavesReader *pReader){ | |
| 6556 return leavesReaderStep(v, &pReader->reader); | |
| 6557 } | |
| 6558 static int optLeavesReaderTermCmp(OptLeavesReader *lr1, OptLeavesReader *lr2){ | |
| 6559 return leavesReaderTermCmp(&lr1->reader, &lr2->reader); | |
| 6560 } | |
| 6561 /* Order by term ascending, segment ascending (oldest to newest), with | |
| 6562 ** exhausted readers to the end. | |
| 6563 */ | |
| 6564 static int optLeavesReaderCmp(OptLeavesReader *lr1, OptLeavesReader *lr2){ | |
| 6565 int c = optLeavesReaderTermCmp(lr1, lr2); | |
| 6566 if( c!=0 ) return c; | |
| 6567 return lr1->segment-lr2->segment; | |
| 6568 } | |
| 6569 /* Bubble pLr[0] to appropriate place in pLr[1..nLr-1]. Assumes that | |
| 6570 ** pLr[1..nLr-1] is already sorted. | |
| 6571 */ | |
| 6572 static void optLeavesReaderReorder(OptLeavesReader *pLr, int nLr){ | |
| 6573 while( nLr>1 && optLeavesReaderCmp(pLr, pLr+1)>0 ){ | |
| 6574 OptLeavesReader tmp = pLr[0]; | |
| 6575 pLr[0] = pLr[1]; | |
| 6576 pLr[1] = tmp; | |
| 6577 nLr--; | |
| 6578 pLr++; | |
| 6579 } | |
| 6580 } | |
| 6581 | |
| 6582 /* optimize() helper function. Put the readers in order and iterate | |
| 6583 ** through them, merging doclists for matching terms into pWriter. | |
| 6584 ** Returns SQLITE_OK on success, or the SQLite error code which | |
| 6585 ** prevented success. | |
| 6586 */ | |
| 6587 static int optimizeInternal(fulltext_vtab *v, | |
| 6588 OptLeavesReader *readers, int nReaders, | |
| 6589 LeafWriter *pWriter){ | |
| 6590 int i, rc = SQLITE_OK; | |
| 6591 DataBuffer doclist, merged, tmp; | |
| 6592 const char *pData; | |
| 6593 | |
| 6594 /* Order the readers. */ | |
| 6595 i = nReaders; | |
| 6596 while( i-- > 0 ){ | |
| 6597 optLeavesReaderReorder(&readers[i], nReaders-i); | |
| 6598 } | |
| 6599 | |
| 6600 dataBufferInit(&doclist, LEAF_MAX); | |
| 6601 dataBufferInit(&merged, LEAF_MAX); | |
| 6602 | |
| 6603 /* Exhausted readers bubble to the end, so when the first reader is | |
| 6604 ** at eof, all are at eof. | |
| 6605 */ | |
| 6606 while( !optLeavesReaderAtEnd(&readers[0]) ){ | |
| 6607 | |
| 6608 /* Figure out how many readers share the next term. */ | |
| 6609 for(i=1; i<nReaders && !optLeavesReaderAtEnd(&readers[i]); i++){ | |
| 6610 if( 0!=optLeavesReaderTermCmp(&readers[0], &readers[i]) ) break; | |
| 6611 } | |
| 6612 | |
| 6613 pData = optLeavesReaderData(&readers[0]); | |
| 6614 if( pData==NULL ){ | |
| 6615 rc = SQLITE_CORRUPT_BKPT; | |
| 6616 break; | |
| 6617 } | |
| 6618 | |
| 6619 /* Special-case for no merge. */ | |
| 6620 if( i==1 ){ | |
| 6621 /* Trim deletions from the doclist. */ | |
| 6622 dataBufferReset(&merged); | |
| 6623 rc = docListTrim(DL_DEFAULT, pData, | |
| 6624 optLeavesReaderDataBytes(&readers[0]), | |
| 6625 -1, DL_DEFAULT, &merged); | |
| 6626 if( rc!=SQLITE_OK ) break; | |
| 6627 }else{ | |
| 6628 DLReader dlReaders[MERGE_COUNT]; | |
| 6629 int iReader, nReaders; | |
| 6630 | |
| 6631 /* Prime the pipeline with the first reader's doclist. After | |
| 6632 ** one pass index 0 will reference the accumulated doclist. | |
| 6633 */ | |
| 6634 rc = dlrInit(&dlReaders[0], DL_DEFAULT, | |
| 6635 pData, | |
| 6636 optLeavesReaderDataBytes(&readers[0])); | |
| 6637 if( rc!=SQLITE_OK ) break; | |
| 6638 iReader = 1; | |
| 6639 | |
| 6640 assert( iReader<i ); /* Must execute the loop at least once. */ | |
| 6641 while( iReader<i ){ | |
| 6642 /* Merge 16 inputs per pass. */ | |
| 6643 for( nReaders=1; iReader<i && nReaders<MERGE_COUNT; | |
| 6644 iReader++, nReaders++ ){ | |
| 6645 pData = optLeavesReaderData(&readers[iReader]); | |
| 6646 if( pData==NULL ){ | |
| 6647 rc = SQLITE_CORRUPT_BKPT; | |
| 6648 break; | |
| 6649 } | |
| 6650 rc = dlrInit(&dlReaders[nReaders], DL_DEFAULT, pData, | |
| 6651 optLeavesReaderDataBytes(&readers[iReader])); | |
| 6652 if( rc!=SQLITE_OK ) break; | |
| 6653 } | |
| 6654 | |
| 6655 /* Merge doclists and swap result into accumulator. */ | |
| 6656 if( rc==SQLITE_OK ){ | |
| 6657 dataBufferReset(&merged); | |
| 6658 rc = docListMerge(&merged, dlReaders, nReaders); | |
| 6659 tmp = merged; | |
| 6660 merged = doclist; | |
| 6661 doclist = tmp; | |
| 6662 } | |
| 6663 | |
| 6664 while( nReaders-- > 0 ){ | |
| 6665 dlrDestroy(&dlReaders[nReaders]); | |
| 6666 } | |
| 6667 | |
| 6668 if( rc!=SQLITE_OK ) goto err; | |
| 6669 | |
| 6670 /* Accumulated doclist to reader 0 for next pass. */ | |
| 6671 rc = dlrInit(&dlReaders[0], DL_DEFAULT, doclist.pData, doclist.nData); | |
| 6672 if( rc!=SQLITE_OK ) goto err; | |
| 6673 } | |
| 6674 | |
| 6675 /* Destroy reader that was left in the pipeline. */ | |
| 6676 dlrDestroy(&dlReaders[0]); | |
| 6677 | |
| 6678 /* Trim deletions from the doclist. */ | |
| 6679 dataBufferReset(&merged); | |
| 6680 rc = docListTrim(DL_DEFAULT, doclist.pData, doclist.nData, | |
| 6681 -1, DL_DEFAULT, &merged); | |
| 6682 if( rc!=SQLITE_OK ) goto err; | |
| 6683 } | |
| 6684 | |
| 6685 /* Only pass doclists with hits (skip if all hits deleted). */ | |
| 6686 if( merged.nData>0 ){ | |
| 6687 rc = leafWriterStep(v, pWriter, | |
| 6688 optLeavesReaderTerm(&readers[0]), | |
| 6689 optLeavesReaderTermBytes(&readers[0]), | |
| 6690 merged.pData, merged.nData); | |
| 6691 if( rc!=SQLITE_OK ) goto err; | |
| 6692 } | |
| 6693 | |
| 6694 /* Step merged readers to next term and reorder. */ | |
| 6695 while( i-- > 0 ){ | |
| 6696 rc = optLeavesReaderStep(v, &readers[i]); | |
| 6697 if( rc!=SQLITE_OK ) goto err; | |
| 6698 | |
| 6699 optLeavesReaderReorder(&readers[i], nReaders-i); | |
| 6700 } | |
| 6701 } | |
| 6702 | |
| 6703 err: | |
| 6704 dataBufferDestroy(&doclist); | |
| 6705 dataBufferDestroy(&merged); | |
| 6706 return rc; | |
| 6707 } | |
| 6708 | |
| 6709 /* Implement optimize() function for FTS3. optimize(t) merges all | |
| 6710 ** segments in the fts index into a single segment. 't' is the magic | |
| 6711 ** table-named column. | |
| 6712 */ | |
| 6713 static void optimizeFunc(sqlite3_context *pContext, | |
| 6714 int argc, sqlite3_value **argv){ | |
| 6715 fulltext_cursor *pCursor; | |
| 6716 if( argc>1 ){ | |
| 6717 sqlite3_result_error(pContext, "excess arguments to optimize()",-1); | |
| 6718 }else if( sqlite3_value_type(argv[0])!=SQLITE_BLOB || | |
| 6719 sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){ | |
| 6720 sqlite3_result_error(pContext, "illegal first argument to optimize",-1); | |
| 6721 }else{ | |
| 6722 fulltext_vtab *v; | |
| 6723 int i, rc, iMaxLevel; | |
| 6724 OptLeavesReader *readers; | |
| 6725 int nReaders; | |
| 6726 LeafWriter writer; | |
| 6727 sqlite3_stmt *s; | |
| 6728 | |
| 6729 memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor)); | |
| 6730 v = cursor_vtab(pCursor); | |
| 6731 | |
| 6732 /* Flush any buffered updates before optimizing. */ | |
| 6733 rc = flushPendingTerms(v); | |
| 6734 if( rc!=SQLITE_OK ) goto err; | |
| 6735 | |
| 6736 rc = segdir_count(v, &nReaders, &iMaxLevel); | |
| 6737 if( rc!=SQLITE_OK ) goto err; | |
| 6738 if( nReaders==0 || nReaders==1 ){ | |
| 6739 sqlite3_result_text(pContext, "Index already optimal", -1, | |
| 6740 SQLITE_STATIC); | |
| 6741 return; | |
| 6742 } | |
| 6743 | |
| 6744 rc = sql_get_statement(v, SEGDIR_SELECT_ALL_STMT, &s); | |
| 6745 if( rc!=SQLITE_OK ) goto err; | |
| 6746 | |
| 6747 readers = sqlite3_malloc(nReaders*sizeof(readers[0])); | |
| 6748 if( readers==NULL ) goto err; | |
| 6749 | |
| 6750 /* Note that there will already be a segment at this position | |
| 6751 ** until we call segdir_delete() on iMaxLevel. | |
| 6752 */ | |
| 6753 leafWriterInit(iMaxLevel, 0, &writer); | |
| 6754 | |
| 6755 i = 0; | |
| 6756 while( (rc = sqlite3_step(s))==SQLITE_ROW ){ | |
| 6757 sqlite_int64 iStart = sqlite3_column_int64(s, 0); | |
| 6758 sqlite_int64 iEnd = sqlite3_column_int64(s, 1); | |
| 6759 const char *pRootData = sqlite3_column_blob(s, 2); | |
| 6760 int nRootData = sqlite3_column_bytes(s, 2); | |
| 6761 | |
| 6762 /* Corrupt if we get back different types than we stored. */ | |
| 6763 if( sqlite3_column_type(s, 0)!=SQLITE_INTEGER || | |
| 6764 sqlite3_column_type(s, 1)!=SQLITE_INTEGER || | |
| 6765 sqlite3_column_type(s, 2)!=SQLITE_BLOB ){ | |
| 6766 rc = SQLITE_CORRUPT_BKPT; | |
| 6767 break; | |
| 6768 } | |
| 6769 | |
| 6770 assert( i<nReaders ); | |
| 6771 rc = leavesReaderInit(v, -1, iStart, iEnd, pRootData, nRootData, | |
| 6772 &readers[i].reader); | |
| 6773 if( rc!=SQLITE_OK ) break; | |
| 6774 | |
| 6775 readers[i].segment = i; | |
| 6776 i++; | |
| 6777 } | |
| 6778 | |
| 6779 /* If we managed to successfully read them all, optimize them. */ | |
| 6780 if( rc==SQLITE_DONE ){ | |
| 6781 assert( i==nReaders ); | |
| 6782 rc = optimizeInternal(v, readers, nReaders, &writer); | |
| 6783 }else{ | |
| 6784 sqlite3_reset(s); /* So we don't leave a lock. */ | |
| 6785 } | |
| 6786 | |
| 6787 while( i-- > 0 ){ | |
| 6788 leavesReaderDestroy(&readers[i].reader); | |
| 6789 } | |
| 6790 sqlite3_free(readers); | |
| 6791 | |
| 6792 /* If we've successfully gotten to here, delete the old segments | |
| 6793 ** and flush the interior structure of the new segment. | |
| 6794 */ | |
| 6795 if( rc==SQLITE_OK ){ | |
| 6796 for( i=0; i<=iMaxLevel; i++ ){ | |
| 6797 rc = segdir_delete(v, i); | |
| 6798 if( rc!=SQLITE_OK ) break; | |
| 6799 } | |
| 6800 | |
| 6801 if( rc==SQLITE_OK ) rc = leafWriterFinalize(v, &writer); | |
| 6802 } | |
| 6803 | |
| 6804 leafWriterDestroy(&writer); | |
| 6805 | |
| 6806 if( rc!=SQLITE_OK ) goto err; | |
| 6807 | |
| 6808 sqlite3_result_text(pContext, "Index optimized", -1, SQLITE_STATIC); | |
| 6809 return; | |
| 6810 | |
| 6811 /* TODO(shess): Error-handling needs to be improved along the | |
| 6812 ** lines of the dump_ functions. | |
| 6813 */ | |
| 6814 err: | |
| 6815 { | |
| 6816 char buf[512]; | |
| 6817 sqlite3_snprintf(sizeof(buf), buf, "Error in optimize: %s", | |
| 6818 sqlite3_errmsg(sqlite3_context_db_handle(pContext))); | |
| 6819 sqlite3_result_error(pContext, buf, -1); | |
| 6820 } | |
| 6821 } | |
| 6822 } | |
| 6823 | |
| 6824 #ifdef SQLITE_TEST | |
| 6825 /* Generate an error of the form "<prefix>: <msg>". If msg is NULL, | |
| 6826 ** pull the error from the context's db handle. | |
| 6827 */ | |
| 6828 static void generateError(sqlite3_context *pContext, | |
| 6829 const char *prefix, const char *msg){ | |
| 6830 char buf[512]; | |
| 6831 if( msg==NULL ) msg = sqlite3_errmsg(sqlite3_context_db_handle(pContext)); | |
| 6832 sqlite3_snprintf(sizeof(buf), buf, "%s: %s", prefix, msg); | |
| 6833 sqlite3_result_error(pContext, buf, -1); | |
| 6834 } | |
| 6835 | |
| 6836 /* Helper function to collect the set of terms in the segment into | |
| 6837 ** pTerms. The segment is defined by the leaf nodes between | |
| 6838 ** iStartBlockid and iEndBlockid, inclusive, or by the contents of | |
| 6839 ** pRootData if iStartBlockid is 0 (in which case the entire segment | |
| 6840 ** fit in a leaf). | |
| 6841 */ | |
| 6842 static int collectSegmentTerms(fulltext_vtab *v, sqlite3_stmt *s, | |
| 6843 fts3Hash *pTerms){ | |
| 6844 const sqlite_int64 iStartBlockid = sqlite3_column_int64(s, 0); | |
| 6845 const sqlite_int64 iEndBlockid = sqlite3_column_int64(s, 1); | |
| 6846 const char *pRootData = sqlite3_column_blob(s, 2); | |
| 6847 const int nRootData = sqlite3_column_bytes(s, 2); | |
| 6848 int rc; | |
| 6849 LeavesReader reader; | |
| 6850 | |
| 6851 /* Corrupt if we get back different types than we stored. */ | |
| 6852 if( sqlite3_column_type(s, 0)!=SQLITE_INTEGER || | |
| 6853 sqlite3_column_type(s, 1)!=SQLITE_INTEGER || | |
| 6854 sqlite3_column_type(s, 2)!=SQLITE_BLOB ){ | |
| 6855 return SQLITE_CORRUPT_BKPT; | |
| 6856 } | |
| 6857 | |
| 6858 rc = leavesReaderInit(v, 0, iStartBlockid, iEndBlockid, | |
| 6859 pRootData, nRootData, &reader); | |
| 6860 if( rc!=SQLITE_OK ) return rc; | |
| 6861 | |
| 6862 while( rc==SQLITE_OK && !leavesReaderAtEnd(&reader) ){ | |
| 6863 const char *pTerm = leavesReaderTerm(&reader); | |
| 6864 const int nTerm = leavesReaderTermBytes(&reader); | |
| 6865 void *oldValue = sqlite3Fts3HashFind(pTerms, pTerm, nTerm); | |
| 6866 void *newValue = (void *)((char *)oldValue+1); | |
| 6867 | |
| 6868 /* From the comment before sqlite3Fts3HashInsert in fts3_hash.c, | |
| 6869 ** the data value passed is returned in case of malloc failure. | |
| 6870 */ | |
| 6871 if( newValue==sqlite3Fts3HashInsert(pTerms, pTerm, nTerm, newValue) ){ | |
| 6872 rc = SQLITE_NOMEM; | |
| 6873 }else{ | |
| 6874 rc = leavesReaderStep(v, &reader); | |
| 6875 } | |
| 6876 } | |
| 6877 | |
| 6878 leavesReaderDestroy(&reader); | |
| 6879 return rc; | |
| 6880 } | |
| 6881 | |
| 6882 /* Helper function to build the result string for dump_terms(). */ | |
| 6883 static int generateTermsResult(sqlite3_context *pContext, fts3Hash *pTerms){ | |
| 6884 int iTerm, nTerms, nResultBytes, iByte; | |
| 6885 char *result; | |
| 6886 TermData *pData; | |
| 6887 fts3HashElem *e; | |
| 6888 | |
| 6889 /* Iterate pTerms to generate an array of terms in pData for | |
| 6890 ** sorting. | |
| 6891 */ | |
| 6892 nTerms = fts3HashCount(pTerms); | |
| 6893 assert( nTerms>0 ); | |
| 6894 pData = sqlite3_malloc(nTerms*sizeof(TermData)); | |
| 6895 if( pData==NULL ) return SQLITE_NOMEM; | |
| 6896 | |
| 6897 nResultBytes = 0; | |
| 6898 for(iTerm = 0, e = fts3HashFirst(pTerms); e; iTerm++, e = fts3HashNext(e)){ | |
| 6899 nResultBytes += fts3HashKeysize(e)+1; /* Term plus trailing space */ | |
| 6900 assert( iTerm<nTerms ); | |
| 6901 pData[iTerm].pTerm = fts3HashKey(e); | |
| 6902 pData[iTerm].nTerm = fts3HashKeysize(e); | |
| 6903 pData[iTerm].pCollector = fts3HashData(e); /* unused */ | |
| 6904 } | |
| 6905 assert( iTerm==nTerms ); | |
| 6906 | |
| 6907 assert( nResultBytes>0 ); /* nTerms>0, nResultsBytes must be, too. */ | |
| 6908 result = sqlite3_malloc(nResultBytes); | |
| 6909 if( result==NULL ){ | |
| 6910 sqlite3_free(pData); | |
| 6911 return SQLITE_NOMEM; | |
| 6912 } | |
| 6913 | |
| 6914 if( nTerms>1 ) qsort(pData, nTerms, sizeof(*pData), termDataCmp); | |
| 6915 | |
| 6916 /* Read the terms in order to build the result. */ | |
| 6917 iByte = 0; | |
| 6918 for(iTerm=0; iTerm<nTerms; ++iTerm){ | |
| 6919 memcpy(result+iByte, pData[iTerm].pTerm, pData[iTerm].nTerm); | |
| 6920 iByte += pData[iTerm].nTerm; | |
| 6921 result[iByte++] = ' '; | |
| 6922 } | |
| 6923 assert( iByte==nResultBytes ); | |
| 6924 assert( result[nResultBytes-1]==' ' ); | |
| 6925 result[nResultBytes-1] = '\0'; | |
| 6926 | |
| 6927 /* Passes away ownership of result. */ | |
| 6928 sqlite3_result_text(pContext, result, nResultBytes-1, sqlite3_free); | |
| 6929 sqlite3_free(pData); | |
| 6930 return SQLITE_OK; | |
| 6931 } | |
| 6932 | |
| 6933 /* Implements dump_terms() for use in inspecting the fts3 index from | |
| 6934 ** tests. TEXT result containing the ordered list of terms joined by | |
| 6935 ** spaces. dump_terms(t, level, idx) dumps the terms for the segment | |
| 6936 ** specified by level, idx (in %_segdir), while dump_terms(t) dumps | |
| 6937 ** all terms in the index. In both cases t is the fts table's magic | |
| 6938 ** table-named column. | |
| 6939 */ | |
| 6940 static void dumpTermsFunc( | |
| 6941 sqlite3_context *pContext, | |
| 6942 int argc, sqlite3_value **argv | |
| 6943 ){ | |
| 6944 fulltext_cursor *pCursor; | |
| 6945 if( argc!=3 && argc!=1 ){ | |
| 6946 generateError(pContext, "dump_terms", "incorrect arguments"); | |
| 6947 }else if( sqlite3_value_type(argv[0])!=SQLITE_BLOB || | |
| 6948 sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){ | |
| 6949 generateError(pContext, "dump_terms", "illegal first argument"); | |
| 6950 }else{ | |
| 6951 fulltext_vtab *v; | |
| 6952 fts3Hash terms; | |
| 6953 sqlite3_stmt *s = NULL; | |
| 6954 int rc; | |
| 6955 | |
| 6956 memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor)); | |
| 6957 v = cursor_vtab(pCursor); | |
| 6958 | |
| 6959 /* If passed only the cursor column, get all segments. Otherwise | |
| 6960 ** get the segment described by the following two arguments. | |
| 6961 */ | |
| 6962 if( argc==1 ){ | |
| 6963 rc = sql_get_statement(v, SEGDIR_SELECT_ALL_STMT, &s); | |
| 6964 }else{ | |
| 6965 rc = sql_get_statement(v, SEGDIR_SELECT_SEGMENT_STMT, &s); | |
| 6966 if( rc==SQLITE_OK ){ | |
| 6967 rc = sqlite3_bind_int(s, 1, sqlite3_value_int(argv[1])); | |
| 6968 if( rc==SQLITE_OK ){ | |
| 6969 rc = sqlite3_bind_int(s, 2, sqlite3_value_int(argv[2])); | |
| 6970 } | |
| 6971 } | |
| 6972 } | |
| 6973 | |
| 6974 if( rc!=SQLITE_OK ){ | |
| 6975 generateError(pContext, "dump_terms", NULL); | |
| 6976 return; | |
| 6977 } | |
| 6978 | |
| 6979 /* Collect the terms for each segment. */ | |
| 6980 sqlite3Fts3HashInit(&terms, FTS3_HASH_STRING, 1); | |
| 6981 while( (rc = sqlite3_step(s))==SQLITE_ROW ){ | |
| 6982 rc = collectSegmentTerms(v, s, &terms); | |
| 6983 if( rc!=SQLITE_OK ) break; | |
| 6984 } | |
| 6985 | |
| 6986 if( rc!=SQLITE_DONE ){ | |
| 6987 sqlite3_reset(s); | |
| 6988 generateError(pContext, "dump_terms", NULL); | |
| 6989 }else{ | |
| 6990 const int nTerms = fts3HashCount(&terms); | |
| 6991 if( nTerms>0 ){ | |
| 6992 rc = generateTermsResult(pContext, &terms); | |
| 6993 if( rc==SQLITE_NOMEM ){ | |
| 6994 generateError(pContext, "dump_terms", "out of memory"); | |
| 6995 }else{ | |
| 6996 assert( rc==SQLITE_OK ); | |
| 6997 } | |
| 6998 }else if( argc==3 ){ | |
| 6999 /* The specific segment asked for could not be found. */ | |
| 7000 generateError(pContext, "dump_terms", "segment not found"); | |
| 7001 }else{ | |
| 7002 /* No segments found. */ | |
| 7003 /* TODO(shess): It should be impossible to reach this. This | |
| 7004 ** case can only happen for an empty table, in which case | |
| 7005 ** SQLite has no rows to call this function on. | |
| 7006 */ | |
| 7007 sqlite3_result_null(pContext); | |
| 7008 } | |
| 7009 } | |
| 7010 sqlite3Fts3HashClear(&terms); | |
| 7011 } | |
| 7012 } | |
| 7013 | |
| 7014 /* Expand the DL_DEFAULT doclist in pData into a text result in | |
| 7015 ** pContext. | |
| 7016 */ | |
| 7017 static void createDoclistResult(sqlite3_context *pContext, | |
| 7018 const char *pData, int nData){ | |
| 7019 DataBuffer dump; | |
| 7020 DLReader dlReader; | |
| 7021 int rc; | |
| 7022 | |
| 7023 assert( pData!=NULL && nData>0 ); | |
| 7024 | |
| 7025 rc = dlrInit(&dlReader, DL_DEFAULT, pData, nData); | |
| 7026 if( rc!=SQLITE_OK ) return rc; | |
| 7027 dataBufferInit(&dump, 0); | |
| 7028 for( ; rc==SQLITE_OK && !dlrAtEnd(&dlReader); rc = dlrStep(&dlReader) ){ | |
| 7029 char buf[256]; | |
| 7030 PLReader plReader; | |
| 7031 | |
| 7032 rc = plrInit(&plReader, &dlReader); | |
| 7033 if( rc!=SQLITE_OK ) break; | |
| 7034 if( DL_DEFAULT==DL_DOCIDS || plrAtEnd(&plReader) ){ | |
| 7035 sqlite3_snprintf(sizeof(buf), buf, "[%lld] ", dlrDocid(&dlReader)); | |
| 7036 dataBufferAppend(&dump, buf, strlen(buf)); | |
| 7037 }else{ | |
| 7038 int iColumn = plrColumn(&plReader); | |
| 7039 | |
| 7040 sqlite3_snprintf(sizeof(buf), buf, "[%lld %d[", | |
| 7041 dlrDocid(&dlReader), iColumn); | |
| 7042 dataBufferAppend(&dump, buf, strlen(buf)); | |
| 7043 | |
| 7044 for( ; !plrAtEnd(&plReader); rc = plrStep(&plReader) ){ | |
| 7045 if( rc!=SQLITE_OK ) break; | |
| 7046 if( plrColumn(&plReader)!=iColumn ){ | |
| 7047 iColumn = plrColumn(&plReader); | |
| 7048 sqlite3_snprintf(sizeof(buf), buf, "] %d[", iColumn); | |
| 7049 assert( dump.nData>0 ); | |
| 7050 dump.nData--; /* Overwrite trailing space. */ | |
| 7051 assert( dump.pData[dump.nData]==' '); | |
| 7052 dataBufferAppend(&dump, buf, strlen(buf)); | |
| 7053 } | |
| 7054 if( DL_DEFAULT==DL_POSITIONS_OFFSETS ){ | |
| 7055 sqlite3_snprintf(sizeof(buf), buf, "%d,%d,%d ", | |
| 7056 plrPosition(&plReader), | |
| 7057 plrStartOffset(&plReader), plrEndOffset(&plReader)); | |
| 7058 }else if( DL_DEFAULT==DL_POSITIONS ){ | |
| 7059 sqlite3_snprintf(sizeof(buf), buf, "%d ", plrPosition(&plReader)); | |
| 7060 }else{ | |
| 7061 assert( NULL=="Unhandled DL_DEFAULT value"); | |
| 7062 } | |
| 7063 dataBufferAppend(&dump, buf, strlen(buf)); | |
| 7064 } | |
| 7065 plrDestroy(&plReader); | |
| 7066 if( rc!= SQLITE_OK ) break; | |
| 7067 | |
| 7068 assert( dump.nData>0 ); | |
| 7069 dump.nData--; /* Overwrite trailing space. */ | |
| 7070 assert( dump.pData[dump.nData]==' '); | |
| 7071 dataBufferAppend(&dump, "]] ", 3); | |
| 7072 } | |
| 7073 } | |
| 7074 dlrDestroy(&dlReader); | |
| 7075 if( rc!=SQLITE_OK ){ | |
| 7076 dataBufferDestroy(&dump); | |
| 7077 return rc; | |
| 7078 } | |
| 7079 | |
| 7080 assert( dump.nData>0 ); | |
| 7081 dump.nData--; /* Overwrite trailing space. */ | |
| 7082 assert( dump.pData[dump.nData]==' '); | |
| 7083 dump.pData[dump.nData] = '\0'; | |
| 7084 assert( dump.nData>0 ); | |
| 7085 | |
| 7086 /* Passes ownership of dump's buffer to pContext. */ | |
| 7087 sqlite3_result_text(pContext, dump.pData, dump.nData, sqlite3_free); | |
| 7088 dump.pData = NULL; | |
| 7089 dump.nData = dump.nCapacity = 0; | |
| 7090 return SQLITE_OK; | |
| 7091 } | |
| 7092 | |
| 7093 /* Implements dump_doclist() for use in inspecting the fts3 index from | |
| 7094 ** tests. TEXT result containing a string representation of the | |
| 7095 ** doclist for the indicated term. dump_doclist(t, term, level, idx) | |
| 7096 ** dumps the doclist for term from the segment specified by level, idx | |
| 7097 ** (in %_segdir), while dump_doclist(t, term) dumps the logical | |
| 7098 ** doclist for the term across all segments. The per-segment doclist | |
| 7099 ** can contain deletions, while the full-index doclist will not | |
| 7100 ** (deletions are omitted). | |
| 7101 ** | |
| 7102 ** Result formats differ with the setting of DL_DEFAULTS. Examples: | |
| 7103 ** | |
| 7104 ** DL_DOCIDS: [1] [3] [7] | |
| 7105 ** DL_POSITIONS: [1 0[0 4] 1[17]] [3 1[5]] | |
| 7106 ** DL_POSITIONS_OFFSETS: [1 0[0,0,3 4,23,26] 1[17,102,105]] [3 1[5,20,23]] | |
| 7107 ** | |
| 7108 ** In each case the number after the outer '[' is the docid. In the | |
| 7109 ** latter two cases, the number before the inner '[' is the column | |
| 7110 ** associated with the values within. For DL_POSITIONS the numbers | |
| 7111 ** within are the positions, for DL_POSITIONS_OFFSETS they are the | |
| 7112 ** position, the start offset, and the end offset. | |
| 7113 */ | |
| 7114 static void dumpDoclistFunc( | |
| 7115 sqlite3_context *pContext, | |
| 7116 int argc, sqlite3_value **argv | |
| 7117 ){ | |
| 7118 fulltext_cursor *pCursor; | |
| 7119 if( argc!=2 && argc!=4 ){ | |
| 7120 generateError(pContext, "dump_doclist", "incorrect arguments"); | |
| 7121 }else if( sqlite3_value_type(argv[0])!=SQLITE_BLOB || | |
| 7122 sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){ | |
| 7123 generateError(pContext, "dump_doclist", "illegal first argument"); | |
| 7124 }else if( sqlite3_value_text(argv[1])==NULL || | |
| 7125 sqlite3_value_text(argv[1])[0]=='\0' ){ | |
| 7126 generateError(pContext, "dump_doclist", "empty second argument"); | |
| 7127 }else{ | |
| 7128 const char *pTerm = (const char *)sqlite3_value_text(argv[1]); | |
| 7129 const int nTerm = strlen(pTerm); | |
| 7130 fulltext_vtab *v; | |
| 7131 int rc; | |
| 7132 DataBuffer doclist; | |
| 7133 | |
| 7134 memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor)); | |
| 7135 v = cursor_vtab(pCursor); | |
| 7136 | |
| 7137 dataBufferInit(&doclist, 0); | |
| 7138 | |
| 7139 /* termSelect() yields the same logical doclist that queries are | |
| 7140 ** run against. | |
| 7141 */ | |
| 7142 if( argc==2 ){ | |
| 7143 rc = termSelect(v, v->nColumn, pTerm, nTerm, 0, DL_DEFAULT, &doclist); | |
| 7144 }else{ | |
| 7145 sqlite3_stmt *s = NULL; | |
| 7146 | |
| 7147 /* Get our specific segment's information. */ | |
| 7148 rc = sql_get_statement(v, SEGDIR_SELECT_SEGMENT_STMT, &s); | |
| 7149 if( rc==SQLITE_OK ){ | |
| 7150 rc = sqlite3_bind_int(s, 1, sqlite3_value_int(argv[2])); | |
| 7151 if( rc==SQLITE_OK ){ | |
| 7152 rc = sqlite3_bind_int(s, 2, sqlite3_value_int(argv[3])); | |
| 7153 } | |
| 7154 } | |
| 7155 | |
| 7156 if( rc==SQLITE_OK ){ | |
| 7157 rc = sqlite3_step(s); | |
| 7158 | |
| 7159 if( rc==SQLITE_DONE ){ | |
| 7160 dataBufferDestroy(&doclist); | |
| 7161 generateError(pContext, "dump_doclist", "segment not found"); | |
| 7162 return; | |
| 7163 } | |
| 7164 | |
| 7165 /* Found a segment, load it into doclist. */ | |
| 7166 if( rc==SQLITE_ROW ){ | |
| 7167 const sqlite_int64 iLeavesEnd = sqlite3_column_int64(s, 1); | |
| 7168 const char *pData = sqlite3_column_blob(s, 2); | |
| 7169 const int nData = sqlite3_column_bytes(s, 2); | |
| 7170 | |
| 7171 /* loadSegment() is used by termSelect() to load each | |
| 7172 ** segment's data. | |
| 7173 */ | |
| 7174 rc = loadSegment(v, pData, nData, iLeavesEnd, pTerm, nTerm, 0, | |
| 7175 &doclist); | |
| 7176 if( rc==SQLITE_OK ){ | |
| 7177 rc = sqlite3_step(s); | |
| 7178 | |
| 7179 /* Should not have more than one matching segment. */ | |
| 7180 if( rc!=SQLITE_DONE ){ | |
| 7181 sqlite3_reset(s); | |
| 7182 dataBufferDestroy(&doclist); | |
| 7183 generateError(pContext, "dump_doclist", "invalid segdir"); | |
| 7184 return; | |
| 7185 } | |
| 7186 rc = SQLITE_OK; | |
| 7187 } | |
| 7188 } | |
| 7189 } | |
| 7190 | |
| 7191 sqlite3_reset(s); | |
| 7192 } | |
| 7193 | |
| 7194 if( rc==SQLITE_OK ){ | |
| 7195 if( doclist.nData>0 ){ | |
| 7196 createDoclistResult(pContext, doclist.pData, doclist.nData); | |
| 7197 }else{ | |
| 7198 /* TODO(shess): This can happen if the term is not present, or | |
| 7199 ** if all instances of the term have been deleted and this is | |
| 7200 ** an all-index dump. It may be interesting to distinguish | |
| 7201 ** these cases. | |
| 7202 */ | |
| 7203 sqlite3_result_text(pContext, "", 0, SQLITE_STATIC); | |
| 7204 } | |
| 7205 }else if( rc==SQLITE_NOMEM ){ | |
| 7206 /* Handle out-of-memory cases specially because if they are | |
| 7207 ** generated in fts3 code they may not be reflected in the db | |
| 7208 ** handle. | |
| 7209 */ | |
| 7210 /* TODO(shess): Handle this more comprehensively. | |
| 7211 ** sqlite3ErrStr() has what I need, but is internal. | |
| 7212 */ | |
| 7213 generateError(pContext, "dump_doclist", "out of memory"); | |
| 7214 }else{ | |
| 7215 generateError(pContext, "dump_doclist", NULL); | |
| 7216 } | |
| 7217 | |
| 7218 dataBufferDestroy(&doclist); | |
| 7219 } | |
| 7220 } | |
| 7221 #endif | |
| 7222 | |
| 7223 /* | |
| 7224 ** This routine implements the xFindFunction method for the FTS3 | |
| 7225 ** virtual table. | |
| 7226 */ | |
| 7227 static int fulltextFindFunction( | |
| 7228 sqlite3_vtab *pVtab, | |
| 7229 int nArg, | |
| 7230 const char *zName, | |
| 7231 void (**pxFunc)(sqlite3_context*,int,sqlite3_value**), | |
| 7232 void **ppArg | |
| 7233 ){ | |
| 7234 if( strcmp(zName,"snippet")==0 ){ | |
| 7235 *pxFunc = snippetFunc; | |
| 7236 return 1; | |
| 7237 }else if( strcmp(zName,"offsets")==0 ){ | |
| 7238 *pxFunc = snippetOffsetsFunc; | |
| 7239 return 1; | |
| 7240 }else if( strcmp(zName,"optimize")==0 ){ | |
| 7241 *pxFunc = optimizeFunc; | |
| 7242 return 1; | |
| 7243 #ifdef SQLITE_TEST | |
| 7244 /* NOTE(shess): These functions are present only for testing | |
| 7245 ** purposes. No particular effort is made to optimize their | |
| 7246 ** execution or how they build their results. | |
| 7247 */ | |
| 7248 }else if( strcmp(zName,"dump_terms")==0 ){ | |
| 7249 /* fprintf(stderr, "Found dump_terms\n"); */ | |
| 7250 *pxFunc = dumpTermsFunc; | |
| 7251 return 1; | |
| 7252 }else if( strcmp(zName,"dump_doclist")==0 ){ | |
| 7253 /* fprintf(stderr, "Found dump_doclist\n"); */ | |
| 7254 *pxFunc = dumpDoclistFunc; | |
| 7255 return 1; | |
| 7256 #endif | |
| 7257 } | |
| 7258 return 0; | |
| 7259 } | |
| 7260 | |
| 7261 /* | |
| 7262 ** Rename an fts3 table. | |
| 7263 */ | |
| 7264 static int fulltextRename( | |
| 7265 sqlite3_vtab *pVtab, | |
| 7266 const char *zName | |
| 7267 ){ | |
| 7268 fulltext_vtab *p = (fulltext_vtab *)pVtab; | |
| 7269 int rc = SQLITE_NOMEM; | |
| 7270 char *zSql = sqlite3_mprintf( | |
| 7271 "ALTER TABLE %Q.'%q_content' RENAME TO '%q_content';" | |
| 7272 "ALTER TABLE %Q.'%q_segments' RENAME TO '%q_segments';" | |
| 7273 "ALTER TABLE %Q.'%q_segdir' RENAME TO '%q_segdir';" | |
| 7274 , p->zDb, p->zName, zName | |
| 7275 , p->zDb, p->zName, zName | |
| 7276 , p->zDb, p->zName, zName | |
| 7277 ); | |
| 7278 if( zSql ){ | |
| 7279 rc = sqlite3_exec(p->db, zSql, 0, 0, 0); | |
| 7280 sqlite3_free(zSql); | |
| 7281 } | |
| 7282 return rc; | |
| 7283 } | |
| 7284 | |
| 7285 static const sqlite3_module fts3Module = { | |
| 7286 /* iVersion */ 0, | |
| 7287 /* xCreate */ fulltextCreate, | |
| 7288 /* xConnect */ fulltextConnect, | |
| 7289 /* xBestIndex */ fulltextBestIndex, | |
| 7290 /* xDisconnect */ fulltextDisconnect, | |
| 7291 /* xDestroy */ fulltextDestroy, | |
| 7292 /* xOpen */ fulltextOpen, | |
| 7293 /* xClose */ fulltextClose, | |
| 7294 /* xFilter */ fulltextFilter, | |
| 7295 /* xNext */ fulltextNext, | |
| 7296 /* xEof */ fulltextEof, | |
| 7297 /* xColumn */ fulltextColumn, | |
| 7298 /* xRowid */ fulltextRowid, | |
| 7299 /* xUpdate */ fulltextUpdate, | |
| 7300 /* xBegin */ fulltextBegin, | |
| 7301 /* xSync */ fulltextSync, | |
| 7302 /* xCommit */ fulltextCommit, | |
| 7303 /* xRollback */ fulltextRollback, | |
| 7304 /* xFindFunction */ fulltextFindFunction, | |
| 7305 /* xRename */ fulltextRename, | |
| 7306 }; | |
| 7307 | |
| 7308 static void hashDestroy(void *p){ | |
| 7309 fts3Hash *pHash = (fts3Hash *)p; | |
| 7310 sqlite3Fts3HashClear(pHash); | |
| 7311 sqlite3_free(pHash); | |
| 7312 } | |
| 7313 | |
| 7314 /* | |
| 7315 ** The fts3 built-in tokenizers - "simple" and "porter" - are implemented | |
| 7316 ** in files fts3_tokenizer1.c and fts3_porter.c respectively. The following | |
| 7317 ** two forward declarations are for functions declared in these files | |
| 7318 ** used to retrieve the respective implementations. | |
| 7319 ** | |
| 7320 ** Calling sqlite3Fts3SimpleTokenizerModule() sets the value pointed | |
| 7321 ** to by the argument to point a the "simple" tokenizer implementation. | |
| 7322 ** Function ...PorterTokenizerModule() sets *pModule to point to the | |
| 7323 ** porter tokenizer/stemmer implementation. | |
| 7324 */ | |
| 7325 void sqlite3Fts3SimpleTokenizerModule(sqlite3_tokenizer_module const**ppModule); | |
| 7326 void sqlite3Fts3PorterTokenizerModule(sqlite3_tokenizer_module const**ppModule); | |
| 7327 void sqlite3Fts3IcuTokenizerModule(sqlite3_tokenizer_module const**ppModule); | |
| 7328 | |
| 7329 int sqlite3Fts3InitHashTable(sqlite3 *, fts3Hash *, const char *); | |
| 7330 | |
| 7331 /* | |
| 7332 ** Initialise the fts3 extension. If this extension is built as part | |
| 7333 ** of the sqlite library, then this function is called directly by | |
| 7334 ** SQLite. If fts3 is built as a dynamically loadable extension, this | |
| 7335 ** function is called by the sqlite3_extension_init() entry point. | |
| 7336 */ | |
| 7337 int sqlite3Fts3Init(sqlite3 *db){ | |
| 7338 int rc = SQLITE_OK; | |
| 7339 fts3Hash *pHash = 0; | |
| 7340 const sqlite3_tokenizer_module *pSimple = 0; | |
| 7341 const sqlite3_tokenizer_module *pPorter = 0; | |
| 7342 const sqlite3_tokenizer_module *pIcu = 0; | |
| 7343 | |
| 7344 sqlite3Fts3SimpleTokenizerModule(&pSimple); | |
| 7345 sqlite3Fts3PorterTokenizerModule(&pPorter); | |
| 7346 #ifdef SQLITE_ENABLE_ICU | |
| 7347 sqlite3Fts3IcuTokenizerModule(&pIcu); | |
| 7348 #endif | |
| 7349 | |
| 7350 /* Allocate and initialise the hash-table used to store tokenizers. */ | |
| 7351 pHash = sqlite3_malloc(sizeof(fts3Hash)); | |
| 7352 if( !pHash ){ | |
| 7353 rc = SQLITE_NOMEM; | |
| 7354 }else{ | |
| 7355 sqlite3Fts3HashInit(pHash, FTS3_HASH_STRING, 1); | |
| 7356 } | |
| 7357 | |
| 7358 /* Load the built-in tokenizers into the hash table */ | |
| 7359 if( rc==SQLITE_OK ){ | |
| 7360 if( sqlite3Fts3HashInsert(pHash, "simple", 7, (void *)pSimple) | |
| 7361 || sqlite3Fts3HashInsert(pHash, "porter", 7, (void *)pPorter) | |
| 7362 || (pIcu && sqlite3Fts3HashInsert(pHash, "icu", 4, (void *)pIcu)) | |
| 7363 ){ | |
| 7364 rc = SQLITE_NOMEM; | |
| 7365 } | |
| 7366 } | |
| 7367 | |
| 7368 #ifdef SQLITE_TEST | |
| 7369 sqlite3Fts3ExprInitTestInterface(db); | |
| 7370 #endif | |
| 7371 | |
| 7372 /* Create the virtual table wrapper around the hash-table and overload | |
| 7373 ** the two scalar functions. If this is successful, register the | |
| 7374 ** module with sqlite. | |
| 7375 */ | |
| 7376 if( SQLITE_OK==rc | |
| 7377 #if CHROMIUM_FTS3_CHANGES && !SQLITE_TEST | |
| 7378 /* fts3_tokenizer() disabled for security reasons. */ | |
| 7379 #else | |
| 7380 && SQLITE_OK==(rc = sqlite3Fts3InitHashTable(db, pHash, "fts3_tokenizer")) | |
| 7381 #endif | |
| 7382 && SQLITE_OK==(rc = sqlite3_overload_function(db, "snippet", -1)) | |
| 7383 && SQLITE_OK==(rc = sqlite3_overload_function(db, "offsets", -1)) | |
| 7384 && SQLITE_OK==(rc = sqlite3_overload_function(db, "optimize", -1)) | |
| 7385 #ifdef SQLITE_TEST | |
| 7386 && SQLITE_OK==(rc = sqlite3_overload_function(db, "dump_terms", -1)) | |
| 7387 && SQLITE_OK==(rc = sqlite3_overload_function(db, "dump_doclist", -1)) | |
| 7388 #endif | |
| 7389 ){ | |
| 7390 return sqlite3_create_module_v2( | |
| 7391 db, "fts3", &fts3Module, (void *)pHash, hashDestroy | |
| 7392 ); | |
| 7393 } | |
| 7394 | |
| 7395 /* An error has occurred. Delete the hash table and return the error code. */ | |
| 7396 assert( rc!=SQLITE_OK ); | |
| 7397 if( pHash ){ | |
| 7398 sqlite3Fts3HashClear(pHash); | |
| 7399 sqlite3_free(pHash); | |
| 7400 } | |
| 7401 return rc; | |
| 7402 } | |
| 7403 | |
| 7404 #if !SQLITE_CORE | |
| 7405 int sqlite3_extension_init( | |
| 7406 sqlite3 *db, | |
| 7407 char **pzErrMsg, | |
| 7408 const sqlite3_api_routines *pApi | |
| 7409 ){ | |
| 7410 SQLITE_EXTENSION_INIT2(pApi) | |
| 7411 return sqlite3Fts3Init(db); | |
| 7412 } | |
| 7413 #endif | |
| 7414 | |
| 7415 #endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ | |
| OLD | NEW |