LL parser
In computer science, an LL parser (Left-to-right, Leftmost derivation) is a top-down parser for a subset of context-free languages. It parses the input from left to right, performing a leftmost derivation of the sentence.

An LL parser is called an LL(k) parser if it uses k tokens of lookahead when parsing a sentence. A grammar is called an LL(k) grammar if an LL(k) parser can be constructed from it. A formal language is called an LL(k) language if it has an LL(k) grammar. The set of LL(k) languages is properly contained in the set of LL(k+1) languages, for each k ≥ 0.[1] A corollary of this is that not all context-free languages can be recognized by an LL(k) parser. An LL parser is called an LL(*), or LL-regular,[2] parser if it is not restricted to a finite number k of tokens of lookahead, but can make parsing decisions by recognizing whether the following tokens belong to a regular language (for example, by means of a deterministic finite automaton).

LL grammars, particularly LL(1) grammars, are of great practical interest, as parsers for these grammars are easy to construct, and many computer languages are designed to be LL(1) for this reason.[3] LL parsers are table-based parsers, similar to LR parsers. LL grammars can also be parsed by recursive descent parsers. According to Waite and Goos (1984),[4] LL(k) grammars were introduced by Stearns and Lewis (1969).[5]

Overview

For a given context-free grammar, the parser attempts to find the leftmost derivation. Given the example grammar

1. S → F
2. S → ( S + F )
3. F → a

the leftmost derivation for the input ( a + a ) is:

S → ( S + F ) → ( F + F ) → ( a + F ) → ( a + a )

Generally, there are multiple possibilities when selecting a rule to expand the given (leftmost) non-terminal. In step 2 of the derivation above, the parser had to choose whether to expand the inner S with rule 1 or rule 2. To be effective, the parser must be able to make this choice deterministically when possible, without backtracking. For some grammars, it can do this by peeking at the unread input (without consuming it).
In our example, if the parser knows that the next unread symbol is '(', the only correct rule that can be used is 2. Generally, an LL(k) parser can look ahead at k symbols. However, given a grammar, the problem of determining whether there exists an LL(k) parser for some k that recognizes it is undecidable. For each k ≥ 1, there is a language that cannot be recognized by an LL(k) parser, but can be by an LL(k+1) parser.

We can use the above analysis to give the following formal definition. Let G be a context-free grammar and k ≥ 1. We say that G is LL(k) if and only if, for any two leftmost derivations

S ⇒ … ⇒ wAα ⇒ wβα ⇒ … ⇒ wu
S ⇒ … ⇒ wAα ⇒ wγα ⇒ … ⇒ wv

the following condition holds: the prefix of the string u of length k equals the prefix of the string v of length k implies β = γ.

In this definition, S is the start symbol and A any non-terminal. The already derived input w, and the yet unread u and v, are strings of terminals. The Greek letters α, β and γ represent any string of both terminals and non-terminals (possibly empty). The prefix length corresponds to the lookahead buffer size, and the definition says that this buffer is enough to distinguish between any two derivations of different words.

Parser

The parser is a deterministic pushdown automaton with the ability to peek at the next k input symbols without reading them. This peek capability can be emulated by storing the lookahead buffer contents in the finite state space, since both the buffer and the input alphabet are finite in size. As a result, this does not make the automaton more powerful, but it is a convenient abstraction. The stack alphabet is Γ = N ∪ Σ, where:

- N is the set of non-terminals;
- Σ is the set of terminal (input) symbols, together with a special end-of-input (EOI) symbol $.
The parser stack initially contains the starting symbol above the EOI: [ S $ ]. During operation, the parser repeatedly replaces the symbol X on top of the stack:

- with some α, if X ∈ N and there is a rule X → α (chosen by consulting the parse table);
- with ε (i.e. X is popped), if X ∈ Σ; in this case an input symbol x is read, and if x ≠ X, the input is rejected.
If the last symbol to be removed from the stack is the EOI, the parsing is successful; the automaton accepts via an empty stack. The states and the transition function are not explicitly given; they are specified (generated) using a more convenient parse table instead. The table provides the following mapping:

- row: top-of-stack symbol X;
- column: lookahead buffer contents;
- cell: the number of a rule X → w, or an empty cell if there is no valid rule.
If the parser cannot perform a valid transition, the input is rejected (empty cells). To make the table more compact, only the non-terminal rows are commonly displayed, since the action for terminals is always the same: match against the input.

Concrete example

Set up

To explain an LL(1) parser's workings, we will consider the following small LL(1) grammar:

1. S → F
2. S → ( S + F )
3. F → a
and parse the following input:

( a + a )

We construct a parsing table for this grammar by listing the terminals as columns and the non-terminals as rows. A cell at the crossing of row A and column a holds the number of the rule to apply when A is on top of the stack and a is the next input symbol. For example, for non-terminal 'S' and terminal '(' the cell holds rule number 2. The table is as follows:

      (    )    a    +    $
 S    2         1
 F              3
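In code, such a table is naturally a mapping from (non-terminal, lookahead terminal) pairs to rule numbers; a minimal sketch in Python (the function name `predict` is illustrative, not from any original listing):

```python
# LL(1) parse table for the example grammar:
#   1. S -> F    2. S -> ( S + F )    3. F -> a
# Keys are (stack-top non-terminal, lookahead terminal); values are rule numbers.
# Missing keys correspond to the empty cells, i.e. a syntax error.
TABLE = {
    ('S', '('): 2,
    ('S', 'a'): 1,
    ('F', 'a'): 3,
}

def predict(nonterminal, lookahead):
    """Return the rule to apply, or raise on an empty cell (rejection)."""
    try:
        return TABLE[(nonterminal, lookahead)]
    except KeyError:
        raise SyntaxError(f"no rule for {nonterminal!r} with lookahead {lookahead!r}")

print(predict('S', '('))  # first step of parsing "( a + a )": rule 2
```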
(Note that there is also a column for the special terminal, represented here as $, that is used to indicate the end of the input stream.)

Parsing procedure

In each step, the parser reads the next available symbol from the input stream and the top-most symbol from the stack. If the input symbol and the stack-top symbol match, the parser discards them both, leaving only the unmatched symbols in the input stream and on the stack.

Thus, in its first step, the parser reads the input symbol '(' and the stack-top symbol 'S'. The parsing table instruction comes from the column headed by '(' and the row headed by 'S'; this cell contains '2', which instructs the parser to apply rule (2). The parser rewrites 'S' to '( S + F )' on the stack and writes the rule number 2 to the output stream. The stack then becomes:

[ (, S, +, F, ), $ ]

Since the '(' on the input stream now matches the '(' on top of the stack, in the second step the parser removes the '(' from both its input stream and its stack. The stack becomes:

[ S, +, F, ), $ ]

Now the parser has an 'a' on its input stream and an 'S' as its stack top. The parsing table instructs it to apply rule (1) from the grammar and write the rule number 1 to the output stream. The stack becomes:

[ F, +, F, ), $ ]

The parser now has an 'a' on its input stream and an 'F' as its stack top. The parsing table instructs it to apply rule (3) from the grammar and write the rule number 3 to the output stream. The stack becomes:

[ a, +, F, ), $ ]

In the next two steps the parser removes the matching 'a' and '+' from both the input stream and the stack, leaving:

[ F, ), $ ]

In the next three steps the parser will replace 'F' on the stack by 'a', write the rule number 3 to the output stream, and then remove the 'a' and ')' from both the stack and the input stream. The parser thus ends with '$' on both its stack and its input stream. In this case the parser will report that it has accepted the input string and write the following list of rule numbers to the output stream:

[ 2, 1, 3, 3 ]

This is indeed a list of rules for a leftmost derivation of the input string, which is:

S → ( S + F ) → ( F + F ) → ( a + F ) → ( a + a )

Remarks

As can be seen from the example, the parser performs three types of steps depending on whether the top of the stack is a non-terminal, a terminal, or the special symbol $:

- If the top is a non-terminal, the parser looks up in the parsing table, on the basis of this non-terminal and the symbol on the input stream, which rule of the grammar to use to replace the non-terminal on the stack, and writes the number of that rule to the output stream.
- If the top is a terminal, the parser compares it to the symbol on the input stream; if they match, both are removed, and otherwise the parser reports an error and stops.
- If the top is $ and the input stream also contains $, the parser reports that it has successfully parsed the input; otherwise it reports an error. In both cases the parser stops.
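The steps traced above can be sketched as a short table-driven parser in Python; this is a minimal sketch for the example grammar (identifiers are illustrative), not a definitive implementation:

```python
# Grammar: 1. S -> F   2. S -> ( S + F )   3. F -> a
RULES = {1: ('S', ['F']), 2: ('S', ['(', 'S', '+', 'F', ')']), 3: ('F', ['a'])}
TABLE = {('S', '('): 2, ('S', 'a'): 1, ('F', 'a'): 3}
NONTERMINALS = {'S', 'F'}

def parse(tokens):
    """Return the list of applied rule numbers (a leftmost derivation)."""
    tokens = list(tokens) + ['$']       # append the end-of-input marker
    stack = ['$', 'S']                  # start symbol above EOI
    output, pos = [], 0
    while stack:
        top = stack.pop()
        lookahead = tokens[pos]
        if top in NONTERMINALS:
            rule = TABLE.get((top, lookahead))
            if rule is None:            # empty cell: reject
                raise SyntaxError(f"unexpected {lookahead!r}")
            output.append(rule)
            # push the right-hand side reversed, so its first symbol is on top
            stack.extend(reversed(RULES[rule][1]))
        elif top == lookahead:          # terminal (or '$'): must match input
            pos += 1
        else:
            raise SyntaxError(f"expected {top!r}, got {lookahead!r}")
    return output

print(parse(['(', 'a', '+', 'a', ')']))  # [2, 1, 3, 3]
```

Running it on the example input reproduces the output stream [2, 1, 3, 3] from the walkthrough.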
These steps are repeated until the parser stops, and then it will have either completely parsed the input and written a leftmost derivation to the output stream, or it will have reported an error.

Constructing an LL(1) parsing table

In order to fill the parsing table, we have to establish which grammar rule the parser should choose if it sees a non-terminal A on the top of its stack and a symbol a on its input stream. It is easy to see that such a rule should be of the form A → w and that the language corresponding to w should have at least one string starting with a. For this purpose we define the First-set of w, written here as Fi(w), as the set of terminals that can be found at the start of some string in w, plus ε if the empty string also belongs to w. Given a grammar with the rules A1 → w1, ..., An → wn, we can compute the Fi(wi) and Fi(Ai) for every rule as follows:

1. initialize every Fi(wi) and Fi(Ai) with the empty set;
2. add Fi(wi) to Fi(Ai) for every rule Ai → wi, where Fi is defined as follows:
   - Fi(a w') = { a } for every terminal a;
   - Fi(A w') = Fi(A) for every non-terminal A with ε not in Fi(A);
   - Fi(A w') = (Fi(A) \ { ε }) ∪ Fi(w') for every non-terminal A with ε in Fi(A);
   - Fi(ε) = { ε };
3. add Fi(wi) to Fi(Ai) for every rule Ai → wi;
4. repeat steps 2 and 3 until all Fi sets stay the same.
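This fixed-point computation can be sketched in Python, here applied to the grammar S → E | E 'a', E → 'b' | ε used later in the conflicts section (function names are illustrative):

```python
EPS = 'ε'
# Grammar as (lhs, rhs-tuple) pairs:  S -> E | E 'a',  E -> 'b' | ε
GRAMMAR = [('S', ('E',)), ('S', ('E', 'a')), ('E', ('b',)), ('E', ())]
NONTERMINALS = {'S', 'E'}

def first_of_string(w, fi):
    """Fi of a symbol string, given the current per-non-terminal sets fi."""
    result = set()
    for sym in w:
        if sym not in NONTERMINALS:    # a terminal starts every derived string
            result.add(sym)
            return result
        result |= fi[sym] - {EPS}
        if EPS not in fi[sym]:         # sym cannot vanish: stop here
            return result
    result.add(EPS)                    # the whole string can derive ε
    return result

def first_sets():
    fi = {a: set() for a in NONTERMINALS}
    changed = True
    while changed:                     # iterate until the sets stay the same
        changed = False
        for lhs, rhs in GRAMMAR:
            new = first_of_string(rhs, fi)
            if not new <= fi[lhs]:
                fi[lhs] |= new
                changed = True
    return fi

print(sorted(first_sets()['E']))  # ['b', 'ε']
```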
Unfortunately, the First-sets are not sufficient to compute the parsing table. This is because a right-hand side w of a rule might ultimately be rewritten to the empty string. So the parser should also use the rule A → w if ε is in Fi(w) and it sees on the input stream a symbol that could follow A. Therefore, we also need the Follow-set of A, written as Fo(A) here, which is defined as the set of terminals a such that there is a string of symbols αAaβ that can be derived from the start symbol. We use $ as a special terminal indicating the end of the input stream, and S as the start symbol. Computing the Follow-sets for the non-terminals in a grammar can be done as follows:

1. initialize Fo(S) with { $ } and every other Fo(Ai) with the empty set;
2. if there is a rule of the form Aj → w Ai w', then:
   - if the terminal a is in Fi(w'), then add a to Fo(Ai);
   - if ε is in Fi(w'), then add Fo(Aj) to Fo(Ai);
   - if w' has length 0, then add Fo(Aj) to Fo(Ai);
3. repeat step 2 until all Fo sets stay the same.
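The Follow-set computation can likewise be sketched in Python, here for the grammar S → A 'a' 'b', A → 'a' | ε used later as the FIRST/FOLLOW conflict example (a self-contained sketch; names are illustrative):

```python
EPS, EOI = 'ε', '$'
# Grammar:  S -> A 'a' 'b',  A -> 'a' | ε
GRAMMAR = [('S', ('A', 'a', 'b')), ('A', ('a',)), ('A', ())]
NONTERMINALS = {'S', 'A'}
START = 'S'

def fi_string(w, fi):
    """Fi of a symbol string under the current per-non-terminal sets."""
    out = set()
    for sym in w:
        if sym not in NONTERMINALS:
            out.add(sym)
            return out
        out |= fi[sym] - {EPS}
        if EPS not in fi[sym]:
            return out
    out.add(EPS)                       # covers the empty tail w' as well
    return out

def follow_sets():
    # First-sets, iterated to a fixed point.
    fi = {n: set() for n in NONTERMINALS}
    changed = True
    while changed:
        changed = False
        for lhs, rhs in GRAMMAR:
            add = fi_string(rhs, fi)
            if not add <= fi[lhs]:
                fi[lhs] |= add
                changed = True
    # Follow-sets: the start symbol is followed by end-of-input.
    fo = {n: set() for n in NONTERMINALS}
    fo[START].add(EOI)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in GRAMMAR:
            for i, sym in enumerate(rhs):
                if sym not in NONTERMINALS:
                    continue
                rest = fi_string(rhs[i + 1:], fi)
                add = rest - {EPS}
                if EPS in rest:        # what follows lhs also follows sym
                    add |= fo[lhs]
                if not add <= fo[sym]:
                    fo[sym] |= add
                    changed = True
    return fo

print(sorted(follow_sets()['A']))  # ['a']
```

Since Fi(A) = { 'a', ε } and Fo(A) = { 'a' } overlap, this grammar exhibits the FIRST/FOLLOW conflict described below.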
Now we can define exactly which rules will appear where in the parsing table. If T[A, a] denotes the entry in the table for non-terminal A and terminal a, then T[A, a] contains the rule A → w if and only if a is in Fi(w), or ε is in Fi(w) and a is in Fo(A). If the table contains at most one rule in every one of its cells, then the parser always knows which rule it has to use and can therefore parse strings without backtracking. It is in precisely this case that the grammar is called an LL(1) grammar.

Constructing an LL(k) parsing table

Until the mid-1990s, it was widely believed that LL(k) parsing (for k > 1) was impractical, since the parser table would have exponential size in k in the worst case. This perception changed gradually after the release of the Purdue Compiler Construction Tool Set around 1992, when it was demonstrated that many programming languages can be parsed efficiently by an LL(k) parser without triggering the worst-case behavior of the parser. Moreover, in certain cases LL parsing is feasible even with unlimited lookahead. By contrast, traditional parser generators like yacc use LALR(1) parser tables to construct a restricted LR parser with a fixed one-token lookahead.

Conflicts

As described in the introduction, LL(1) parsers recognize languages that have LL(1) grammars, which are a special case of context-free grammars (CFGs); LL(1) parsers cannot recognize all context-free languages. The LL(1) languages are a proper subset of the LR(1) languages, which in turn are a proper subset of all context-free languages. In order for a CFG to be an LL(1) grammar, certain conflicts must not arise, which we describe in this section.

Terminology[6]

Let A be a non-terminal. FIRST(A) is (defined to be) the set of terminals that can appear in the first position of any string derived from A.
FOLLOW(A) is the union over:

1. FIRST(B), where B is any non-terminal that immediately follows A in the right-hand side of a production rule;
2. FOLLOW(B), where B is any non-terminal appearing in a rule of the form B → wA.

LL(1) Conflicts

There are two main types of LL(1) conflicts:

FIRST/FIRST Conflict

The FIRST sets of two different grammar rules for the same non-terminal intersect. An example of an LL(1) FIRST/FIRST conflict:

S -> E | E 'a'
E -> 'b' | ε

FIRST(E) = { 'b', ε } and FIRST(E 'a') = { 'b', 'a' }, so when the table is drawn, there is a conflict under terminal 'b' of production rule S.

Special Case: Left Recursion

Left recursion will cause a FIRST/FIRST conflict with all alternatives.

FIRST/FOLLOW Conflict

The FIRST and FOLLOW sets of a grammar rule overlap. With an empty string (ε) in the FIRST set, it is unknown which alternative to select. An example of an LL(1) FIRST/FOLLOW conflict:

S -> A 'a' 'b'
A -> 'a' | ε

The FIRST set of A is { 'a', ε }, and its FOLLOW set is { 'a' }.

Solutions to LL(1) Conflicts

Left Factoring

A common left factor is "factored out":

A -> X | X Y Z

becomes

A -> X B
B -> Y Z | ε

This can be applied when two alternatives start with the same symbol, as in a FIRST/FIRST conflict. Another example (more complex), using the FIRST/FIRST conflict example above:

S -> E | E 'a'
E -> 'b' | ε

becomes (merging E into a single non-terminal)

S -> 'b' | ε | 'b' 'a' | 'a'

and then, through left-factoring, becomes

S -> 'b' E | E
E -> 'a' | ε

Substitution

Substituting a rule into another rule to remove indirect or FIRST/FOLLOW conflicts. Note that this may cause a FIRST/FIRST conflict.

Left recursion removal[7]

For a general method, see removing left recursion. A simple example of left recursion removal: the following production rule has left recursion on E:

E -> E '+' T
E -> T

This rule is nothing but a list of Ts separated by '+'. In regular expression form: T ('+' T)*. So the rule could be rewritten as

E -> T Z
Z -> '+' T Z
Z -> ε

Now there is no left recursion and no conflicts on either of the rules.
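Because the rewritten grammar is no longer left-recursive, a recursive-descent parser for it terminates. A minimal sketch in Python, where T is taken to be the single token 't' purely for illustration (an assumption; in a real grammar T would be its own non-terminal):

```python
# Rewritten grammar:  E -> T Z,  Z -> '+' T Z | ε
# T is assumed to match the single token 't' for this illustration.

def parse_e(tokens, pos=0):
    """Parse E; return the position just past the parsed expression."""
    pos = parse_t(tokens, pos)
    return parse_z(tokens, pos)

def parse_z(tokens, pos):
    # Z -> '+' T Z  when the lookahead is '+', otherwise  Z -> ε.
    if pos < len(tokens) and tokens[pos] == '+':
        pos = parse_t(tokens, pos + 1)
        return parse_z(tokens, pos)
    return pos                          # ε alternative: consume nothing

def parse_t(tokens, pos):
    if pos < len(tokens) and tokens[pos] == 't':
        return pos + 1
    raise SyntaxError(f"expected 't' at position {pos}")

print(parse_e(['t', '+', 't', '+', 't']))  # 5: the whole input is consumed
```

With the original left-recursive rule E -> E '+' T, the corresponding parse_e would call itself before consuming any input and never terminate; the lookahead test in parse_z is what makes the choice deterministic.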
However, not all context-free grammars have an equivalent LL(k) grammar. For example:

S -> A | B
A -> 'a' A 'b' | ε
B -> 'a' B 'b' 'b' | ε

It can be shown that there does not exist any LL(k) grammar accepting the language generated by this grammar.
Notes

1. Rosenkrantz, D. J.; Stearns, R. E. (1970). "Properties of Deterministic Top Down Grammars". Information and Control. 17 (3): 226–256. doi:10.1016/s0019-9958(70)90446-8.
2. Dick Grune; Ceriel J.H. Jacobs (29 October 2007). Parsing Techniques: A Practical Guide. Springer. pp. 585–. ISBN 978-0-387-68954-8. https://books.google.com/books?id=05xA_d5dSwAC&pg=PA585
3. Pat Terry (2005). Compiling with C# and Java. Pearson Education. pp. 159–164. ISBN 9780321263605. https://books.google.pl/books?id=4O9ffYfX_H0C
4. William M. Waite and Gerhard Goos (1984). Compiler Construction. Texts and Monographs in Computer Science. Heidelberg: Springer. ISBN 978-3-540-90821-0. Here: Sect. 5.3.2, pp. 121–127; in particular, p. 123.
5. Richard E. Stearns and P.M. Lewis (1969). "Property Grammars and Table Machines". Information and Control. 14 (6): 524–549. doi:10.1016/S0019-9958(69)90312-X. https://www.sciencedirect.com/science/article/pii/S001999586990312X/pdf?md5=9ecc337554d8e6f138846d604f7b2bd0&pid=1-s2.0-S001999586990312X-main.pdf&_valck=1
6. http://www.cs.uaf.edu/~cs331/notes/LL.pdf
7. Grune, Bal, Jacobs and Langendoen. Modern Compiler Design.