Chromium Code Reviews

| OLD | NEW |
|---|---|
| 1 #!/usr/bin/env python | |
| 2 | |
| 3 # Copyright 2014 The Chromium Authors. All rights reserved. | |
| 4 # Use of this source code is governed by a BSD-style license that can be | |
| 5 # found in the LICENSE file. | |
| 6 | |
| 7 """ | |
| 8 A Deterministic acyclic finite state automaton (DAFSA) is a compact | |
| 9 representation of an unordered word list (dictionary). | |
| 10 | |
| 11 http://en.wikipedia.org/wiki/Deterministic_acyclic_finite_state_automaton | |
| 12 | |
| 13 This python program converts a list of strings to a byte array in C++. | |
| 14 It fetches strings and return values from a gperf file and generates a | |
| 15 C++ file with a byte array representing the graph, which can be used as a | |
| 16 memory-efficient replacement for the perfect hash table. | |
| 17 | |
| 18 The input strings are assumed to consist of printable 7-bit ASCII characters | |
| 19 and the return values are assumed to be one digit integers. | |
| 20 | |
| 21 In this program a DAFSA is a diamond shaped graph starting at a common | |
| 22 source node and ending at a common sink node. All internal nodes contain | |
| 23 a label and each word is represented by the labels in one path from | |
| 24 the source node to the sink node. | |
| 25 | |
| 26 The following python representation is used for nodes: | |
| 27 | |
| 28 Source node: [ children ] | |
| 29 Internal node: (label, [ children ]) | |
| 30 Sink node: None | |
| 31 | |
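For instance, the two words from Example 1 below ("aa" with return value 1 and "a" with return value 2) can be written out by hand in this representation; the names node1..node5 and source are only labels for this sketch:

  sink = None
  node3 = ('\x01', [sink])   # return value 1, stored as a low control character
  node2 = ('a', [node3])
  node1 = ('a', [node2])     # node1 -> node2 -> node3 spells "aa" + value 1
  node5 = ('\x02', [sink])   # return value 2
  node4 = ('a', [node5])     # node4 -> node5 spells "a" + value 2
  source = [node1, node4]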
| 32 The graph is first compressed by prefixes like a trie. In the next step | |
| 33 suffixes are compressed so that the graph gets diamond shaped. Finally | |
| 34 one to one linked nodes are replaced by nodes with the labels joined. | |
| 35 | |
| 36 The order of the operations is crucial since lookups will be performed | |
| 37 starting from the source with no backtracking. Thus a node must have at | |
| 38 most one child with a label starting with the same character. The output | |
| 39 is also arranged so that all jumps are to increasing addresses, thus forward | |
| 40 in memory. | |
| 41 | |
| 42 The generated output has suffix free decoding so that the leading bits of | |
| 43 a link (a reference to a child node) indicate whether it has a size of one, | |
| 44 two or three bytes and whether it is the last outgoing link from the current node. | |
| 45 A node label is terminated by a byte with the leading bit set. | |
| 46 | |
| 47 The generated byte array can be described by the following BNF: | |
| 48 | |
| 49 <byte> ::= < 8-bit value in range [0x00-0xFF] > | |
| 50 | |
| 51 <char> ::= < printable 7-bit ASCII character, byte in range [0x20-0x7F] > | |
| 52 <end_char> ::= < char + 0x80, byte in range [0xA0-0xFF] > | |
| 53 <return_value> ::= < value + 0x80, byte in range [0x80-0x8F] > | |
| 54 | |
| 55 <offset1> ::= < byte in range [0x00-0x3F] > | |
| 56 <offset2> ::= < byte in range [0x40-0x5F] > | |
| 57 <offset3> ::= < byte in range [0x60-0x7F] > | |
| 58 | |
| 59 <end_offset1> ::= < byte in range [0x80-0xBF] > | |
| 60 <end_offset2> ::= < byte in range [0xC0-0xDF] > | |
| 61 <end_offset3> ::= < byte in range [0xE0-0xFF] > | |
| 62 | |
| 63 <prefix> ::= <char> | |
| 64 | |
| 65 <label> ::= <end_char> | |
| 66 | <char> <label> | |
| 67 | |
| 68 <end_label> ::= <return_value> | |
| 69 | <char> <end_label> | |
| 70 | |
| 71 <offset> ::= <offset1> | |
| 72 | <offset2> <byte> | |
| 73 | <offset3> <byte> <byte> | |
| 74 | |
| 75 <end_offset> ::= <end_offset1> | |
| 76 | <end_offset2> <byte> | |
| 77 | <end_offset3> <byte> <byte> | |
| 78 | |
| 79 <offsets> ::= <end_offset> | |
| 80 | <offset> <offsets> | |
| 81 | |
| 82 <source> ::= <offsets> | |
| 83 | |
| 84 <node> ::= <label> <offsets> | |
| 85 | <prefix> <node> | |
| 86 | <end_label> | |
| 87 | |
| 88 <dafsa> ::= <source> | |
| 89 | <dafsa> <node> | |
| 90 | |
| 91 Decoding: | |
| 92 | |
| 93 <char> -> printable 7-bit ASCII character | |
| 94 <end_char> & 0x7F -> printable 7-bit ASCII character | |
| 95 <return_value> & 0x0F -> integer | |
| 96 (<offset1> & 0x3F) -> integer | |
| 97 ((<offset2> & 0x1F) << 8) + <byte> -> integer | |
| 98 ((<offset3> & 0x1F) << 16) + (<byte> << 8) + <byte> -> integer | |
| 99 | |
| 100 end_offset1, end_offset2 and end_offset3 are decoded the same way as offset1, | |
| 101 offset2 and offset3 respectively. | |
| 102 | |
| 103 The first offset in a list of offsets is the distance in bytes between the | |
| 104 offset itself and the first child node. Subsequent offsets are the distance | |
| 105 between the previous and the next child node. Thus each offset links a node | |
| 106 to a child node. The distance is always counted between start addresses, i.e. | |
| 107 the first byte of the decoded offset or the first byte of the child node. | |
| 108 | |
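To make the byte format and the offset rules above concrete, here is a minimal pure-Python reader for an encoded array. It is an illustration only, not part of this file: the names _links, _match and lookup are invented for the sketch, and the array is assumed to be a list of integers as in the examples below.

  def _links(data, pos):
    """Decodes the offset list at pos, yielding absolute child positions."""
    prev_child = None
    while True:
      byte = data[pos]
      if byte & 0x40 == 0:     # <offset1> / <end_offset1>: one byte
        distance, size = byte & 0x3F, 1
      elif byte & 0x20 == 0:   # <offset2> / <end_offset2>: two bytes
        distance, size = ((byte & 0x1F) << 8) | data[pos + 1], 2
      else:                    # <offset3> / <end_offset3>: three bytes
        distance, size = ((byte & 0x1F) << 16) | (data[pos + 1] << 8) | data[pos + 2], 3
      # The first offset is relative to its own address, later ones to the
      # previous child's address.
      child = (pos if prev_child is None else prev_child) + distance
      yield child
      if byte & 0x80:          # leading bit set -> last link of this node
        return
      prev_child = child
      pos += size

  def _match(data, pos, key):
    """Matches key against the node at pos; returns the stored value or None."""
    while True:
      byte = data[pos]
      if byte < 0x80:          # <char>: one more label character
        if not key or key[0] != chr(byte):
          return None
        key, pos = key[1:], pos + 1
      elif byte < 0x90:        # <return_value>: end of a stored word
        return byte & 0x0F if not key else None
      else:                    # <end_char>: last label character, offsets follow
        if not key or key[0] != chr(byte & 0x7F):
          return None
        key = key[1:]
        for child in _links(data, pos + 1):
          value = _match(data, child, key)
          if value is not None:
            return value
        return None

  def lookup(data, key):
    """Returns the value stored for key, or None if key is not in the set."""
    for child in _links(data, 0):   # the array starts with the source offset list
      value = _match(data, child, key)
      if value is not None:
        return value
    return None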
| 109 Example 1: | |
| 110 | |
| 111 %% | |
| 112 aa, 1 | |
| 113 a, 2 | |
| 114 %% | |
| 115 | |
| 116 The input is first parsed to a list of words: | |
| 117 ["aa1", "a2"] | |
| 118 | |
| 119 A fully expanded graph is created from the words: | |
| 120 source = [node1, node4] | |
| 121 node1 = ("a", [node2]) | |
| 122 node2 = ("a", [node3]) | |
| 123 node3 = ("\x01", [sink]) | |
| 124 node4 = ("a", [node5]) | |
| 125 node5 = ("\x02", [sink]) | |
| 126 sink = None | |
| 127 | |
| 128 Compression results in the following graph: | |
| 129 source = [node1] | |
| 130 node1 = ("a", [node2, node3]) | |
| 131 node2 = ("\x02", [sink]) | |
| 132 node3 = ("a\x01", [sink]) | |
| 133 sink = None | |
| 134 | |
| 135 A C++ representation of the compressed graph is generated: | |
| 136 | |
| 137 const unsigned char dafsa[7] = { | |
| 138 0x81, 0xE1, 0x02, 0x81, 0x82, 0x61, 0x81, | |
| 139 }; | |
| 140 | |
| 141 The bytes in the generated array have the following meaning: | |
| 142 | |
| 143 0: 0x81 <end_offset1> child at position 0 + (0x81 & 0x3F) -> jump to 1 | |
| 144 | |
| 145 1: 0xE1 <end_char> label character (0xE1 & 0x7F) -> match "a" | |
| 146 2: 0x02 <offset1> child at position 2 + (0x02 & 0x3F) -> jump to 4 | |
| 147 | |
| 148 3: 0x81 <end_offset1> child at position 4 + (0x81 & 0x3F) -> jump to 5 | |
| 149 4: 0x82 <return_value> 0x82 & 0x0F -> return 2 | |
| 150 | |
| 151 5: 0x61 <char> label character 0x61 -> match "a" | |
| 152 6: 0x81 <return_value> 0x81 & 0x0F -> return 1 | |
| 153 | |
| 154 Example 2: | |
| 155 | |
| 156 %% | |
| 157 aa, 1 | |
| 158 bbb, 2 | |
| 159 baa, 1 | |
| 160 %% | |
| 161 | |
| 162 The input is first parsed to a list of words: | |
| 163 ["aa1", "bbb2", "baa1"] | |
| 164 | |
| 165 Compression results in the following graph: | |
| 166 source = [node1, node2] | |
| 167 node1 = ("b", [node2, node3]) | |
| 168 node2 = ("aa\x01", [sink]) | |
| 169 node3 = ("bb\x02", [sink]) | |
| 170 sink = None | |
| 171 | |
| 172 A C++ representation of the compressed graph is generated: | |
| 173 | |
| 174 const unsigned char dafsa[11] = { | |
| 175 0x02, 0x83, 0xE2, 0x02, 0x83, 0x61, 0x61, 0x81, 0x62, 0x62, 0x82, | |
| 176 }; | |
| 177 | |
| 178 The bytes in the generated array have the following meaning: | |
| 179 | |
| 180 0: 0x02 <offset1> child at position 0 + (0x02 & 0x3F) -> jump to 2 | |
| 181 1: 0x83 <end_offset1> child at position 2 + (0x83 & 0x3F) -> jump to 5 | |
| 182 | |
| 183 2: 0xE2 <end_char> label character (0xE2 & 0x7F) -> match "b" | |
| 184 3: 0x02 <offset1> child at position 3 + (0x02 & 0x3F) -> jump to 5 | |
| 185 4: 0x83 <end_offset1> child at position 5 + (0x83 & 0x3F) -> jump to 8 | |
| 186 | |
| 187 5: 0x61 <char> label character 0x61 -> match "a" | |
| 188 6: 0x61 <char> label character 0x61 -> match "a" | |
| 189 7: 0x81 <return_value> 0x81 & 0x0F -> return 1 | |
| 190 | |
| 191 8: 0x62 <char> label character 0x62 -> match "b" | |
| 192 9: 0x62 <char> label character 0x62 -> match "b" | |
| 193 10: 0x82 <return_value> 0x82 & 0x0F -> return 2 | |
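The same illustrative lookup sketch reproduces this walkthrough as well:

  dafsa = [0x02, 0x83, 0xE2, 0x02, 0x83, 0x61, 0x61, 0x81, 0x62, 0x62, 0x82]
  assert lookup(dafsa, 'aa') == 1
  assert lookup(dafsa, 'baa') == 1
  assert lookup(dafsa, 'bbb') == 2
  assert lookup(dafsa, 'ba') is None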
| 194 """ | |
| 195 | |
| 196 import sys | |
| 197 | |
| 198 class InputError(Exception): | |
| 199 """Exception raised for errors in the input file.""" | |
| 200 def __init__(self, msg): | |
| 201 self.msg = msg | |
| 202 | |
| 203 def to_dafsa(words): | |
| 204 """Generates a DAFSA from a word list and returns the source node. | |
| 205 | |
| 206 Each word is split into characters so that each character is represented by | |
| 207 a unique node. It is assumed the word list is not empty. | |
| 208 """ | |
| 209 if not words: | |

M-A Ruel 2014/04/29 21:12:58:
optional: This requires words to be a list, not a

Olle Liljenzin 2014/04/30 11:43:43:
See comment below.
| 210 raise InputError('The domain list must not be empty') | |
| 211 def ToNodes(word): | |
| 212 """Split words into characters""" | |
| 213 if not 0x1F < ord(word[0]) < 0x80: | |
| 214 raise InputError('Domain names must be printable 7-bit ASCII') | |
| 215 if len(word) == 1: | |
| 216 return chr(ord(word[0]) & 0x0F), [None] | |
| 217 return word[0], [ToNodes(word[1:])] | |
| 218 return [ToNodes(word) for word in words] | |
| 219 | |
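For the two words of Example 1 in the module docstring, the expanded structure returned by to_dafsa() should look as follows (hand-written sketch of the expected value):

  to_dafsa(['aa1', 'a2'])
  # -> [('a', [('a', [('\x01', [None])])]),
  #     ('a', [('\x02', [None])])]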
| 220 | |
| 221 def to_words(node): | |
| 222 """Generates a word list from all paths starting from an internal node.""" | |
| 223 if not node: | |
| 224 return [''] | |
| 225 return [(node[0] + word) for child in node[1] for word in to_words(child)] | |
| 226 | |
| 227 | |
| 228 def reverse(dafsa): | |
| 229 """Generates a new DAFSA that is reversed, so that the old sink node becomes | |
| 230 the new source node. | |
| 231 """ | |
| 232 sink = [] | |
| 233 nodemap = {} | |
| 234 | |
| 235 def dfs(node, parent): | |
| 236 """Creates reverse nodes. | |
| 237 | |
| 238 A new reverse node will be created for each old node. The new node will | |
| 239 get a reversed label and the parents of the old node as children. | |
| 240 """ | |
| 241 if not node: | |
| 242 sink.append(parent) | |
| 243 elif id(node) not in nodemap: | |
| 244 nodemap[id(node)] = (node[0][::-1], [parent]) | |
| 245 for child in node[1]: | |
| 246 dfs(child, nodemap[id(node)]) | |
| 247 else: | |
| 248 nodemap[id(node)][1].append(parent) | |
| 249 | |
| 250 for node in dafsa: | |
| 251 dfs(node, None) | |
| 252 return sink | |
| 253 | |
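As a quick illustration (not part of this file), reversing a single-word graph yields the word spelled backwards starting from the old sink, so reversing the produced words again restores the input:

  d = to_dafsa(['ab1'])
  r = reverse(d)
  assert [w[::-1] for node in r for w in to_words(node)] == ['ab\x01']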
| 254 | |
| 255 def join_labels(dafsa): | |
| 256 """Generate a new DAFSA where internal nodes are merged if there is a one to | |
| 257 one connection. | |
| 258 """ | |
| 259 parentcount = { id(None): 2 } | |
| 260 nodemap = { id(None): None } | |
| 261 | |
| 262 def count_parents(node): | |
| 263 """Count incoming references""" | |
| 264 if id(node) in parentcount: | |
| 265 parentcount[id(node)] += 1 | |
| 266 else: | |
| 267 parentcount[id(node)] = 1 | |
| 268 for child in node[1]: | |
| 269 count_parents(child) | |
| 270 | |
| 271 def join(node): | |
| 272 """Create new nodes""" | |
| 273 if id(node) not in nodemap: | |
| 274 children = [join(child) for child in node[1]] | |
| 275 if len(children) == 1 and parentcount[id(node[1][0])] == 1: | |
| 276 child = children[0] | |
| 277 nodemap[id(node)] = (node[0] + child[0], child[1]) | |
| 278 else: | |
| 279 nodemap[id(node)] = (node[0], children) | |
| 280 return nodemap[id(node)] | |
| 281 | |
| 282 for node in dafsa: | |
| 283 count_parents(node) | |
| 284 return [join(node) for node in dafsa] | |
| 285 | |
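For illustration, a chain in which every link is one to one collapses into a single node with the labels concatenated (hand-picked example values):

  chain = [('b', [('a', [('\x01', [None])])])]
  assert join_labels(chain) == [('ba\x01', [None])]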
| 286 | |
| 287 def join_suffixes(dafsa): | |
| 288 """Generates a new DAFSA where nodes that represent the same word lists | |
| 289 towards the sink are merged. | |
| 290 """ | |
| 291 nodemap = { frozenset(('',)): None } | |
| 292 | |
| 293 def join(node): | |
| 294 """Returns a macthing node. A new node is created if no matching node | |
| 295 exists. The graph is accessed in dfs order. | |
| 296 """ | |
| 297 suffixes = frozenset(to_words(node)) | |
| 298 if suffixes not in nodemap: | |
| 299 nodemap[suffixes] = (node[0], [join(child) for child in node[1]]) | |
| 300 return nodemap[suffixes] | |
| 301 | |
| 302 return [join(node) for node in dafsa] | |
| 303 | |
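For illustration, with the made-up words "xa" and "ya" (both with return value 1) the two branches spell the same words down to the sink, so after joining they share one node object:

  d = to_dafsa(['xa1', 'ya1'])
  joined = join_suffixes(d)
  assert joined[0][1][0] is joined[1][1][0]   # the 'a' subtrees are now shared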
| 304 | |
| 305 def top_sort(dafsa): | |
| 306 """Generates list of nodes in topological sort order.""" | |
| 307 incoming = {} | |
| 308 | |
| 309 def count_incoming(node): | |
| 310 """Counts incoming references.""" | |
| 311 if node: | |
| 312 if id(node) not in incoming: | |
| 313 incoming[id(node)] = 1 | |
| 314 for child in node[1]: | |
| 315 count_incoming(child) | |
| 316 else: | |
| 317 incoming[id(node)] += 1 | |
| 318 | |
| 319 for node in dafsa: | |
| 320 count_incoming(node) | |
| 321 | |
| 322 for node in dafsa: | |
| 323 incoming[id(node)] -= 1 | |
| 324 | |
| 325 waiting = [node for node in dafsa if incoming[id(node)] == 0] | |
| 326 nodes = [] | |
| 327 | |
| 328 while waiting: | |
| 329 node = waiting.pop() | |
| 330 assert incoming[id(node)] == 0 | |
| 331 nodes.append(node) | |
| 332 for child in node[1]: | |
| 333 if child: | |
| 334 incoming[id(child)] -= 1 | |
| 335 if incoming[id(child)] == 0: | |
| 336 waiting.append(child) | |
| 337 return nodes | |
| 338 | |
| 339 | |
| 340 def encode_links(children, offsets, current): | |
| 341 """Encodes a list of children as one, two or three byte offsets.""" | |
| 342 if not children[0]: | |
| 343 # This is an <end_label> node and no links follow such nodes | |
| 344 assert len(children) == 1 | |
| 345 return [] | |
| 346 guess = 3 * len(children) | |
| 347 assert children | |
| 348 while True: | |
| 349 offset = current + guess | |
| 350 buf = [] | |
| 351 for child in sorted(children, key = lambda x: -offsets[id(x)]): | |
| 352 last = len(buf) | |
| 353 distance = offset - offsets[id(child)] | |
| 354 assert distance > 0 and distance < (1 << 21) | |
| 355 | |
| 356 if distance < (1 << 6): | |
| 357 # A 6-bit offset: "s0xxxxxx" | |
| 358 buf.append(distance) | |
| 359 elif distance < (1 << 13): | |
| 360 # A 13-bit offset: "s10xxxxxxxxxxxxx" | |
| 361 buf.append(0x40 | (distance >> 8)) | |
| 362 buf.append(distance & 0xFF) | |
| 363 else: | |
| 364 # A 21-bit offset: "s11xxxxxxxxxxxxxxxxxxxxx" | |
| 365 buf.append(0x60 | (distance >> 16)) | |
| 366 buf.append((distance >> 8) & 0xFF) | |
| 367 buf.append(distance & 0xFF) | |
| 368 # The distance in the first link is relative to the following record. | |
| 369 # Distances in other links are relative to the previous link. | |
| 370 offset -= distance | |
| 371 if len(buf) == guess: | |
| 372 break | |
| 373 guess = len(buf) | |
| 374 # Set most significant bit to mark end of links in this node. | |
| 375 buf[last] |= (1 << 7) | |
| 376 buf.reverse() | |
| 377 return buf | |
| 378 | |
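The loop above maps each distance to one of the three size classes described in the module docstring. A standalone restatement of just that rule, with a hypothetical helper name (the real code additionally builds the list in reverse and adjusts the base address per link):

  def encode_distance(distance, last):
    assert 0 < distance < (1 << 21)
    if distance < (1 << 6):       # "s0xxxxxx"
      buf = [distance]
    elif distance < (1 << 13):    # "s10xxxxx xxxxxxxx"
      buf = [0x40 | (distance >> 8), distance & 0xFF]
    else:                         # "s11xxxxx xxxxxxxx xxxxxxxx"
      buf = [0x60 | (distance >> 16), (distance >> 8) & 0xFF, distance & 0xFF]
    if last:
      buf[0] |= 0x80              # the sign bit marks the final link of a node
    return buf

  assert encode_distance(3, True) == [0x83]           # cf. byte 1 in Example 2
  assert encode_distance(300, False) == [0x41, 0x2C]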
| 379 | |
| 380 def encode_label(label): | |
| 381 """ | |
| 382 Encodes a node label as a list of bytes with a trailing high byte >= 0x80. | |
| 383 """ | |
| 384 assert label | |
| 385 buf = [ord(c) for c in label] | |
| 386 buf.reverse() | |
| 387 # Set most significant bit to mark end of label in this node. | |
| 388 buf[0] |= (1 << 7) | |
| 389 return buf | |
| 390 | |
| 391 | |
| 392 def encode_prefix(label): | |
| 393 """ | |
| 394 Encodes a node label as a list of bytes without a trailing high byte. | |
| 395 | |
| 396 This method encodes a node if there is exactly one child and the | |
| 397 child follows immediately after so that no jump is needed. This label | |
| 398 will then be a prefix to the label in the child node. | |
| 399 """ | |
| 400 assert label | |
| 401 return [ord(c) for c in reversed(label)] | |
| 402 | |
| 403 | |
| 404 def encode(dafsa): | |
| 405 """Encodes a DAFSA to a list of bytes""" | |
| 406 output = [] | |
| 407 offsets = {} | |
| 408 | |
| 409 for node in reversed(top_sort(dafsa)): | |
| 410 if (len(node[1]) == 1 and node[1][0] and | |
| 411 (offsets[id(node[1][0])] == len(output))): | |
| 412 output.extend(encode_prefix(node[0])) | |
| 413 else: | |
| 414 output.extend(encode_links(node[1], offsets, len(output))) | |
| 415 output.extend(encode_label(node[0])) | |
| 416 offsets[id(node)] = len(output) | |
| 417 | |
| 418 output.extend(encode_links(dafsa, offsets, len(output))) | |
| 419 output.reverse() | |
| 420 return output | |
| 421 | |
| 422 | |
| 423 def to_cxx(data): | |
| 424 """Generates C++ code from a list of encoded bytes.""" | |
| 425 text = '/* This file is generated. DO NOT EDIT!\n\n' | |
| 426 text += 'The byte array encodes effective tld names. See make_dafsa.py for' | |
| 427 text += ' documentation.\n' | |
| 428 text += '*/\n\n' | |
| 429 text += 'const unsigned char kDafsa[%s] = {\n' % len(data) | |
| 430 for i in range(0, len(data), 12): | |
| 431 text += ' ' | |
| 432 text += ', '.join('0x%02x' % byte for byte in data[i:i + 12]) | |
| 433 text += ',\n' | |
| 434 text += '};\n' | |
| 435 return text | |
| 436 | |
| 437 | |
| 438 def words_to_cxx(words): | |
| 439 """Generate C++ code from a word list""" | |
| 440 dafsa = to_dafsa(words) | |
| 441 for fun in (reverse, join_suffixes, reverse, join_suffixes, join_labels): | |
| 442 dafsa = fun(dafsa) | |
| 443 return to_cxx(encode(dafsa)) | |
| 444 | |
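A minimal usage sketch, assuming the module is importable; per Example 1 in the module docstring the generated array for this input is expected to be the seven bytes worked through there:

  print(words_to_cxx(['aa1', 'a2']))
  # Expected to contain: 0x81, 0xe1, 0x02, 0x81, 0x82, 0x61, 0x81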
| 445 | |
| 446 def parse_gperf(infile): | |
| 447 """Parse gperf file and extract strings and return code""" | |
| 448 lines = [line.strip() for line in infile] | |
| 449 # Extract strings after the first '%%' and before the second '%%'. | |
| 450 begin = lines.index('%%') + 1 | |
M-A Ruel 2014/04/29 21:12:58:
optional: Technically, you could create a small st

Olle Liljenzin 2014/04/30 11:43:43:
I tried this but it is just slower. Low level impl
| 451 end = lines.index('%%', begin) | |
| 452 lines = lines[begin:end] | |
| 453 for line in lines: | |
M-A Ruel 2014/04/29 21:12:58:
for line in lines[begin:end]:
| 454 if line[-3:-1] != ', ': | |
| 455 raise InputError('Expected "domainname, <digit>", found "%s"' % line) | |
| 456 # Technically the DAFSA format could support return values in range [0-31], | |
| 457 # but the values below are the only ones with a defined meaning. | |
| 458 if line[-1] not in '0124': | |
| 459 raise InputError('Expected value to be one of {0,1,2,4}, found "%s"' % | |
| 460 line[-1]) | |
M-A Ruel 2014/04/29 21:12:58:
optional: add right after line 460:
yield line
| 461 return [line[:-3] + line[-1] for line in lines] | |
| 462 | |
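parse_gperf() only needs an iterable of lines, so a plain list works for a quick illustration of the accepted format:

  assert parse_gperf(['%%', 'aa, 1', 'a, 2', '%%']) == ['aa1', 'a2']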
| 463 | |
| 464 def main(): | |
| 465 if len(sys.argv) != 3: | |
| 466 print('usage: %s infile outfile' % sys.argv[0]) | |
| 467 sys.exit(-1) | |
| 468 with open(sys.argv[1], 'r') as infile, open(sys.argv[2], 'w') as outfile: | |
| 469 outfile.write(words_to_cxx(parse_gperf(infile))) | |
| 470 return 0 | |
| 471 | |
| 472 | |
| 473 if __name__ == '__main__': | |
| 474 sys.exit(main()) | |