Chromium Code Reviews

Side by Side Diff: tools/lexer_generator/regex_parser.py

Issue 145723010: Experimental parser: better rule tree visualization (Closed) Base URL: https://v8.googlecode.com/svn/branches/experimental/parser
Patch Set: Created 6 years, 10 months ago
 # Copyright 2013 the V8 project authors. All rights reserved.
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions are
 # met:
 #
 #     * Redistributions of source code must retain the above copyright
 #       notice, this list of conditions and the following disclaimer.
 #     * Redistributions in binary form must reproduce the above
 #       copyright notice, this list of conditions and the following
 #       disclaimer in the documentation and/or other materials provided
(...skipping 153 matching lines...)
 
   def p_fragments(self, p):
     '''fragments : fragment
                  | fragment fragments'''
     if len(p) == 2:
       p[0] = p[1]
     else:
       p[0] = self.__cat(p[1], p[2])
 
   def p_fragment(self, p):
-    '''fragment : literal maybe_modifier
+    '''fragment : literal_array maybe_modifier
                 | class maybe_modifier
                 | group maybe_modifier
                 | any maybe_modifier
     '''
     if p[2] != None:
       if isinstance(p[2], tuple) and p[2][0] == 'REPEAT':
         p[0] = Term(p[2][0], p[2][1], p[2][2], p[1])
       else:
         p[0] = Term(p[2], p[1])
     else:
(...skipping 10 matching lines...)
     p[0] = self.token_map[p[1]]
 
   def p_repetition(self, p):
     '''repetition : REPEAT_BEGIN NUMBER REPEAT_END
                   | REPEAT_BEGIN NUMBER COMMA NUMBER REPEAT_END'''
     if len(p) == 4:
       p[0] = ("REPEAT", p[2], p[2])
     else:
       p[0] = ("REPEAT", p[2], p[4])
 
-  def p_literal(self, p):
-    '''literal : LITERAL'''
-    p[0] = Term('LITERAL', p[1])
+  def p_literal_array(self, p):
+    '''literal_array : literals'''
+    p[0] = Term('LITERAL', ''.join(reversed(p[1])))
+
+  def p_literals(self, p):
+    '''literals : LITERAL maybe_literals'''
+    if not p[2]:
+      p[0] = [p[1]]
+    else:
+      p[2].append(p[1])
+      p[0] = p[2]
+
+  def p_maybe_literals(self, p):
+    '''maybe_literals : literals
+                      | empty'''
+    p[0] = p[1]
 
   def p_any(self, p):
     '''any : ANY'''
     p[0] = Term(self.token_map[p[1]])
 
   def p_class(self, p):
     '''class : CLASS_BEGIN class_content CLASS_END
              | CLASS_BEGIN NOT class_content CLASS_END'''
     if len(p) == 4:
       p[0] = Term("CLASS", p[2])
(...skipping 48 matching lines...)
     parser = RegexParser.__static_instance
     if not parser:
       parser = RegexParser()
       parser.build()
       RegexParser.__static_instance = parser
     try:
       return parser.parser.parse(data, lexer=parser.lexer.lexer)
     except Exception:
       RegexParser.__static_instance = None
       raise
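The final hunk shows a cached-singleton pattern with failure invalidation: one parser instance is memoized on the class, and if a parse raises, the cached instance is discarded so the next call rebuilds a fresh parser rather than reusing one left in an unknown state. A self-contained sketch of the same pattern (class and method names here are illustrative stand-ins, not the real `RegexParser` API):

```python
class CachedParser:
    """Illustration of the memoize-and-invalidate-on-error pattern
    from the end of the diff."""
    _instance = None

    def __init__(self):
        self.builds = 0

    def build(self):
        # Stands in for the (expensive) parser-table construction.
        self.builds += 1

    def parse(self, data):
        if data == "bad":
            raise ValueError("parse error")
        return ("parsed", data)

    @classmethod
    def parse_cached(cls, data):
        parser = cls._instance
        if not parser:
            parser = cls()
            parser.build()
            cls._instance = parser
        try:
            return parser.parse(data)
        except Exception:
            # Drop the cached instance: a parser that raised may be in a
            # bad state, so force a rebuild on the next call.
            cls._instance = None
            raise
```

The invalidation is the interesting design choice: caching alone would be trivial, but clearing the cache inside `except` before re-raising guarantees that one failed parse cannot poison every later call.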
