

4.3 THE PREDICTIVE TOP-DOWN PARSER

A backtracking parser is a non-deterministic recognizer of the language generated by the grammar. The backtracking problems in a top-down parser can be solved; that is, a top-down parser can function as a deterministic recognizer if it is capable of predicting or detecting which alternative is the right choice for the expansion of a nonterminal (one that has more than one alternative) during the parsing of the input string w. By carefully writing a grammar, eliminating left recursion, and left-factoring the result, we obtain a grammar that can be parsed by a top-down parser. Such a parser is able to predict the right alternative for the expansion of a nonterminal during the parsing process; and hence, it need not backtrack.

If A → α1 | α2 | … | αn are the A-productions in the grammar, then a top-down parser can decide whether a nonterminal A is to be expanded or not; and if it is to be expanded, the parser decides which A-production should be used. It looks at the next input symbol and finds out which of the αi derives to a string that starts with the terminal symbol coming next in the input. If none of the αi derives to such a string, the parser reports failure; otherwise, it carries out the derivation of A using a production A → αi, where αi derives to a string whose first terminal symbol is the symbol coming next in the input. Therefore, we conclude that if the set of first terminal symbols of the strings derivable from αi is computed for each αi, and this set is made available to the parser, then the parser can predict the right choice for the expansion of nonterminal A. This information can easily be computed using the productions of the grammar. We define a function FIRST(α), where α is in (V ∪ T)*, as follows:

FIRST(α) = the set of terminals with which the strings derivable from α start

If α = XYZ, then FIRST(α) is computed as follows:

FIRST(α) = FIRST(XYZ) = {X} if X is a terminal.

Otherwise,

FIRST(α) = FIRST(XYZ) = FIRST(X) if X does not derive to an empty string; that is, if FIRST(X) does not contain ε.

If FIRST(X) contains ε, then

FIRST(α) = FIRST(XYZ) = (FIRST(X) − {ε}) ∪ FIRST(YZ)

FIRST(YZ) is computed in an identical manner:

FIRST(YZ) = {Y} if Y is a terminal.

Otherwise,

FIRST(YZ) = FIRST(Y) if Y does not derive to an empty string (i.e., if FIRST(Y) does not contain ε). If FIRST(Y) contains ε, then

FIRST(YZ) = (FIRST(Y) − {ε}) ∪ FIRST(Z)
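
The rule above translates directly into a small routine. The following is a minimal Python sketch (the function name first_of_sequence, the dictionary representation of FIRST sets, and the marker "eps" standing for ε are assumptions made here for illustration, not notation from the text):

    EPS = "eps"  # marker chosen here to stand for the empty string (epsilon)

    def first_of_sequence(symbols, first):
        """FIRST of a sequence X Y Z ..., given a dict 'first' that maps every
        grammar symbol (terminal or nonterminal) to its FIRST set; terminals
        are assumed to map to the singleton set containing themselves."""
        result = set()
        for X in symbols:
            result |= first[X] - {EPS}   # everything in FIRST(X) except epsilon
            if EPS not in first[X]:      # X cannot vanish, so later symbols are irrelevant
                return result
        result.add(EPS)                  # every symbol in the sequence was nullable
        return result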

For example, consider the grammar:

S → ACB | CbB | Ba
A → da | BC
B → g | ε
C → h | ε

FIRST(S) = FIRST(ACB) ∪ FIRST(CbB) ∪ FIRST(Ba)     (I)

FIRST(A) = FIRST(da) ∪ FIRST(BC)     (II)

FIRST(B) = FIRST(g) ∪ FIRST(ε) = {g, ε}

FIRST(C) = FIRST(h) ∪ FIRST(ε) = {h, ε}

Therefore:

FIRST(BC) = (FIRST(B) − {ε}) ∪ FIRST(C) = {g, h, ε}

Substituting in (II) we get:

FIRST(A) = {d} ∪ {g, h, ε} = {d, g, h, ε}

FIRST(ACB) = (FIRST(A) − {ε}) ∪ FIRST(CB)     (III)

FIRST(CB) = (FIRST(C) − {ε}) ∪ FIRST(B) = {g, h, ε}

Therefore, substituting in (III) we get:

FIRST(ACB) = {d, g, h} ∪ {g, h, ε} = {d, g, h, ε}

Similarly,

FIRST(CbB) = (FIRST(C) − {ε}) ∪ FIRST(bB) = {h} ∪ {b} = {b, h}

Similarly,

FIRST(Ba) = (FIRST(B) − {ε}) ∪ FIRST(a) = {g} ∪ {a} = {a, g}

Therefore, substituting in (I), we get:

FIRST(S) = {d, g, h, ε} ∪ {b, h} ∪ {a, g} = {a, b, d, g, h, ε}
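
As a mechanical cross-check on these sets, the FIRST rules can be iterated to a fixed point over the whole grammar. The sketch below is again only illustrative; it assumes the first_of_sequence helper from the previous sketch and a Python encoding of the grammar chosen here:

    EPS = "eps"

    def first_sets(grammar, terminals):
        """Fixed-point computation of FIRST for every grammar symbol.
        'grammar' maps each nonterminal to a list of right-hand sides."""
        first = {t: {t} for t in terminals}          # FIRST of a terminal is itself
        first.update({nt: set() for nt in grammar})
        changed = True
        while changed:                               # repeat until no FIRST set grows
            changed = False
            for nt, alternatives in grammar.items():
                for alt in alternatives:
                    body = [X for X in alt if X != EPS]
                    acc = first_of_sequence(body, first)
                    if not acc <= first[nt]:
                        first[nt] |= acc
                        changed = True
        return first

    # The example grammar: S -> ACB | CbB | Ba, A -> da | BC, B -> g | eps, C -> h | eps
    grammar = {
        "S": [["A", "C", "B"], ["C", "b", "B"], ["B", "a"]],
        "A": [["d", "a"], ["B", "C"]],
        "B": [["g"], [EPS]],
        "C": [["h"], [EPS]],
    }
    first = first_sets(grammar, {"a", "b", "d", "g", "h"})
    print(first["A"])    # {'d', 'g', 'h', 'eps'}
    print(first["S"])    # {'a', 'b', 'd', 'g', 'h', 'eps'}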

EXAMPLE 4.2

Consider the following grammar:

S → aAb
A → cd | ef

Here:

FIRST(aAb) = {a}

FIRST(cd) = {c}, and

FIRST(ef) = {e}


Hence, while deriving S, the parser looks at the next input symbol; and if it happens to be the terminal a, then the parser derives S using S → aAb. Otherwise, the parser reports an error. Similarly, when expanding A, the parser looks at the next input symbol; if it happens to be the terminal c, then the parser derives A using A → cd. If the next terminal input symbol happens to be e, then the parser derives A using A → ef. Otherwise, an error is reported.

Therefore, we conclude that if the FIRST of the right-hand side of the production S → aAb is computed, we can decide when the parser should do the derivation using the production S → aAb. Similarly, if the FIRST of the right-hand sides of the productions A → cd and A → ef are computed, then we can decide when derivation is to be done using A → cd and A → ef, respectively. These decisions can be encoded in the form of a table, as shown in Table 4.1, and made available to the parser for the correct selection of productions for derivation during parsing.

Table 4.1: Production Selections for Parsing Derivations

|   | a       | b | c      | d | e      | f | $ |
| S | S → aAb |   |        |   |        |   |   |
| A |         |   | A → cd |   | A → ef |   |   |

The number of rows of the table is equal to the number of nonterminals, whereas the number of columns is equal to the number of terminals, including the end marker. When the parser decides which production to derive by, it uses the nonterminal to be expanded as the row index of the table and the next input symbol as the column index. Here, the production S → aAb is added to the table at [S, a] because FIRST(aAb) contains the terminal a. Hence, S must be derived using S → aAb if and only if the terminal symbol coming next in the input is a. Similarly, the production A → cd is added at [A, c], because FIRST(cd) contains c. Hence, A must be derived using A → cd if and only if the terminal symbol coming next in the input is c. Finally, A must be derived using A → ef if and only if the terminal symbol coming next in the input is e. Hence, the production A → ef is added at [A, e]. Therefore, we conclude that the table can be constructed as follows:

    for every production A → α do
        for every a in FIRST(α) do
            TABLE[A, a] = A → α

Using the above method, every production of the grammar gets added into the table at the proper place when the grammar is ε-free. But when the grammar is not ε-free, the ε-productions will not get added to the table. If there is an ε-production A → ε in the grammar, then deciding when A is to be derived to ε is not possible using the FIRST of the production's right-hand side. Some additional information is required to decide where the production A → ε is to be added to the table.

Tip  

The derivation by A → ε is the right choice when the parser is on the verge of expanding the nonterminal A and the next input symbol happens to be a terminal that can occur immediately following A in some string occurring on the right side of a production. This leads to the expansion of A to ε, and the next leaf of the parse tree, which is labeled by the symbol immediately following A, is considered and may therefore match the next input symbol.

Therefore, we conclude that the production A → ε is to be added to the table at [A, b] for every b that immediately follows A in any string on the right-hand side of the grammar's productions. To compute the set of all such terminals, we make use of the function FOLLOW(A), where A is a nonterminal, as defined below:

FOLLOW( A ) = Set of terminals that immediately follow A in any string occurring on the right side of productions of the grammar

For example, if A → αBβ is a production, then FOLLOW(B) can be computed using A → αBβ, as shown below:

FOLLOW(B) = FIRST(β) if FIRST(β) does not contain ε. If FIRST(β) contains ε (or if β is empty), then everything in FOLLOW(A) is also in FOLLOW(B); and the end marker $ is placed in the FOLLOW of the start symbol.
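
Assuming the conventions of the earlier sketches, the standard FOLLOW computation can be sketched as follows (follow_sets is a name chosen here; it simply applies the rules just stated until nothing changes):

    EPS, END = "eps", "$"

    def follow_sets(grammar, first, start):
        """Standard FOLLOW computation. 'grammar' maps each nonterminal to a list
        of right-hand sides (each a list of symbols); 'first' maps every symbol
        to its FIRST set; 'start' is the start symbol."""
        follow = {nt: set() for nt in grammar}
        follow[start].add(END)                    # the end marker follows the start symbol
        changed = True
        while changed:
            changed = False
            for A, alternatives in grammar.items():
                for alt in alternatives:
                    for i, B in enumerate(alt):
                        if B not in grammar:      # only nonterminals have FOLLOW sets
                            continue
                        trailer, nullable = set(), True
                        for X in alt[i + 1:]:     # beta = what follows B in this production
                            if X == EPS:
                                continue
                            trailer |= first[X] - {EPS}
                            if EPS not in first[X]:
                                nullable = False
                                break
                        if nullable:              # beta is empty or derives epsilon,
                            trailer |= follow[A]  # so whatever follows A also follows B
                        if not trailer <= follow[B]:
                            follow[B] |= trailer
                            changed = True
        return follow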

Therefore, we conclude that when the grammar is not ε-free, the table can be constructed as follows:

  1. Compute FIRST and FOLLOW for every nonterminal of the grammar.

  2. For every production A → α, do:

     {
         for every non-ε member a in FIRST(α) do
             TABLE[A, a] = A → α
         if FIRST(α) contains ε then
             for every b in FOLLOW(A) do
                 TABLE[A, b] = A → α
     }

Therefore, we conclude that if the table is constructed using the above algorithm, a top-down parser can be constructed that is nonbacktracking, or "predictive". A sketch of this table-construction step is shown below.
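
The following Python sketch mirrors the two rules of the algorithm. It is illustrative only: build_table and its cell representation (a list of right-hand sides per cell, so that a multiply defined cell remains visible) are assumptions of this sketch, and first_of_sequence and the FIRST/FOLLOW sets come from the earlier sketches:

    EPS = "eps"

    def build_table(grammar, first, follow):
        """TABLE[(A, a)] is kept as a list of right-hand sides; more than one
        entry in a cell means the cell is multiply defined (grammar not LL(1))."""
        table = {}
        for A, alternatives in grammar.items():
            for alt in alternatives:
                # FIRST of the whole right-hand side (epsilon markers stripped)
                fs = first_of_sequence([X for X in alt if X != EPS], first)
                for a in fs - {EPS}:              # rule 1: on every terminal in FIRST(alpha)
                    table.setdefault((A, a), []).append(alt)
                if EPS in fs:                     # rule 2: alpha derives epsilon,
                    for b in follow[A]:           # so use FOLLOW(A)
                        table.setdefault((A, b), []).append(alt)
        return table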

4.3.1 Implementation of a Table-Driven Predictive Parser

A table-driven parser can be implemented using an input buffer, a stack, and a parsing table. The input buffer is used to hold the string to be parsed. The string is followed by a "$" symbol that is used as a right-end marker to indicate the end of the input string. The stack is used to hold the sequence of grammar symbols, with a "$" indicating the bottom of the stack. Initially, the stack holds the start symbol of the grammar above the $. The parsing table is the table obtained by using the algorithm presented in the previous section. It is a two-dimensional array TABLE[A, a], where A is a nonterminal and a is a terminal or the $ symbol. The parser is controlled by a program that behaves as follows:

  1. The program considers X , the symbol on the top of the stack, and the next input symbol a .

  2. If X = a = $, then parser announces the successful completion of the parsing and halts.

  3. If X = a ≠ $, then the parser pops X off the stack and advances the input pointer to the next input symbol.

  4. If X is a nonterminal, then the program consults the parsing table entry TABLE[X, a]. If TABLE[X, a] = X → UVW, then the parser replaces X on the top of the stack by UVW in such a manner that U comes out on top. If TABLE[X, a] = error, then the parser calls the error-recovery routine. (A sketch of this control loop is given below.)
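
A minimal sketch of this control program, assuming an LL(1) table in which every cell holds at most one right-hand side and reusing the "eps" marker of the earlier sketches (the function name parse and the token handling are choices made here):

    EPS, END = "eps", "$"

    def parse(w, table, start, nonterminals):
        """Table-driven predictive parsing loop following steps 1-4 above.
        'table' maps (nonterminal, lookahead) to a right-hand side (list of symbols)."""
        stack = [END, start]                 # $ marks the bottom; start symbol on top
        tokens = list(w) + [END]
        i = 0
        while True:
            X, a = stack[-1], tokens[i]
            if X == a == END:
                return True                  # step 2: successful completion
            if X == a:                       # step 3: top of stack matches the input
                stack.pop()
                i += 1
            elif X in nonterminals:          # step 4: consult the parsing table
                rhs = table.get((X, a))
                if rhs is None:
                    raise SyntaxError("no entry at TABLE[%s, %s]" % (X, a))
                stack.pop()
                for sym in reversed(rhs):    # push U V W so that U ends up on top
                    if sym != EPS:           # deriving epsilon just pops X
                        stack.append(sym)
            else:
                raise SyntaxError("expected %r, found %r" % (X, a))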

For example, consider the following grammar:

S → aABb
A → c | ε
B → d | ε

FIRST(S) = FIRST(aABb) = {a}

FIRST(A) = FIRST(c) ∪ FIRST(ε) = {c, ε}

FIRST(B) = FIRST(d) ∪ FIRST(ε) = {d, ε}

Since the right-end marker $ is used to mark the bottom of the stack, $ will initially be immediately below S (the start symbol) on the stack; hence, $ will be in FOLLOW(S). Therefore:

FOLLOW(S) = {$}

Using S → aABb, we get:

FOLLOW(A) = (FIRST(B) − {ε}) ∪ FIRST(b) = {d, b} (because B derives ε)

FOLLOW(B) = FIRST(b) = {b}

Therefore, the parsing table is as shown in Table 4.2.

Table 4.2: Production Selections for Parsing Derivations

|   | a        | b     | c     | d     | $ |
| S | S → aABb |       |       |       |   |
| A |          | A → ε | A → c | A → ε |   |
| B |          | B → ε |       | B → d |   |

Consider an input string acdb . The various steps in the parsing of this string, in terms of the contents of the stack and unspent input, are shown in Table 4.3.

 
Table 4.3: Steps Involved in Parsing the String acdb

| Stack Contents | Unspent Input | Moves |
| $S    | acdb$ | Derivation using S → aABb |
| $bBAa | acdb$ | Popping a off the stack and advancing one position in the input |
| $bBA  | cdb$  | Derivation using A → c |
| $bBc  | cdb$  | Popping c off the stack and advancing one position in the input |
| $bB   | db$   | Derivation using B → d |
| $bd   | db$   | Popping d off the stack and advancing one position in the input |
| $b    | b$    | Popping b off the stack and advancing one position in the input |
| $     | $     | Announce the successful completion of the parsing |

Similarly, for the input string ab , the various steps in the parsing of the string, in terms of the contents of the stack and unspent input, are shown in Table 4.4.

 
Table 4.4: Steps Involved in Parsing the String ab

| Stack Contents | Unspent Input | Moves |
| $S    | ab$ | Derivation using S → aABb |
| $bBAa | ab$ | Popping a off the stack and advancing one position in the input |
| $bBA  | b$  | Derivation using A → ε |
| $bB   | b$  | Derivation using B → ε |
| $b    | b$  | Popping b off the stack and advancing one position in the input |
| $     | $   | Announce the successful completion of the parsing |

For the string aab, which is not in the language, the various steps in the parsing, in terms of the contents of the stack and unspent input, are shown in Table 4.5.

Table 4.5: Steps Involved in Parsing the String aab

| Stack Contents | Unspent Input | Moves |
| $S    | aab$ | Derivation using S → aABb |
| $bBAa | aab$ | Popping a off the stack and advancing one position in the input |
| $bBA  | ab$  | Calling an error-handling routine, because TABLE[A, a] is empty |
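
Putting the pieces together, the traces in Tables 4.3 through 4.5 can be reproduced with the parse sketch from the previous section, assuming it is in scope; the dictionary below is simply Table 4.2 written out in the encoding used by these sketches:

    EPS = "eps"
    table_4_2 = {
        ("S", "a"): ["a", "A", "B", "b"],
        ("A", "b"): [EPS], ("A", "c"): ["c"], ("A", "d"): [EPS],
        ("B", "b"): [EPS], ("B", "d"): ["d"],
    }
    nonterminals = {"S", "A", "B"}

    print(parse("acdb", table_4_2, "S", nonterminals))   # True, as in Table 4.3
    print(parse("ab", table_4_2, "S", nonterminals))     # True, as in Table 4.4
    try:
        parse("aab", table_4_2, "S", nonterminals)       # the error case of Table 4.5
    except SyntaxError as err:
        print("error:", err)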

The heart of the table-driven parser is the parsing table: the parser looks at the parsing table to decide which alternative is the right choice for the expansion of a nonterminal during the parsing of the input string. Hence, constructing a table-driven predictive parser can be considered equivalent to constructing the parsing table.

A parsing table for any grammar can be obtained by applying the above algorithm; but for some grammars, some of the entries in the parsing table may end up being multiply defined, whereas for other grammars, all of the entries in the parsing table are singly defined. If the parsing table contains multiple entries, then the parser is still non-deterministic. The parser will be a deterministic recognizer if and only if there are no multiply defined entries in the parsing table. All such grammars (i.e., those grammars whose parsing tables, constructed by the algorithm above, contain no multiply defined entries) constitute a subset of the CFGs called "LL(1)" grammars. Therefore, a given grammar is LL(1) if its parsing table, constructed by the algorithm above, contains no multiply defined entries; if the table contains multiple entries, then the grammar is not LL(1).

In the acronym LL (1), the first L stands for the left-to-right scan of the input, the second L stands for the left-most derivation, and the (1) indicates that the next input symbol is used to decide the next parsing process (i.e., length of the lookahead is "1").

In the LL(1) parsing system, parsing is done by scanning the input from left to right, and an attempt is made to derive the input string in a left-most order. The next input symbol is used to decide what is to be done next in the parsing process. The predictive parser discussed above is therefore an LL(1) parser, because it also scans the input from left to right, attempts to obtain the left-most derivation of it, and makes use of the next input symbol to decide what is to be done next. And if the parsing table used by the predictive parser does not contain multiple entries, then the parser acts as a recognizer of exactly the members of L(G); hence, the grammar is LL(1).

Therefore, an LL(1) grammar is one for which an LL(1) parser can be constructed that acts as a deterministic recognizer of L(G). If a grammar is LL(1), then a deterministic, top-down, table-driven recognizer can be constructed to recognize L(G). A parsing table constructed for a given grammar G will have multiply defined entries if the grammar contains more than one production for the same nonterminal, say A → α | β, and both α and β derive strings that start with the same terminal symbol. Therefore, one of the basic requirements for a grammar to be LL(1) is that, whenever the grammar contains more than one production for the same nonterminal:

for every pair of productions A → α | β,

FIRST(α) ∩ FIRST(β) = ∅ (i.e., FIRST(α) and FIRST(β) should be disjoint sets for every pair of productions A → α | β)

For a grammar to be LL(1), the satisfaction of the condition above is necessary as well as sufficient if the grammar is ε-free. When the grammar is not ε-free, the satisfaction of the above condition is necessary but not sufficient, because either FIRST(α) or FIRST(β) might contain ε (but not both); the above condition would still be satisfied, but if FIRST(β) contains ε, then the production A → β will be added to the table on all terminals in FOLLOW(A). Hence, it is also required that FIRST(α) and FOLLOW(A) contain no common symbols. Therefore, an additional condition must be satisfied in order for a grammar to be LL(1) when the grammar is not ε-free: for every pair of productions A → α | β,

if FIRST(β) contains ε and FIRST(α) does not contain ε, then

FIRST(α) ∩ FOLLOW(A) = ∅

Therefore, for a grammar to be LL(1), the following conditions must be satisfied.

For every pair of productions A → α | β:

  1. FIRST(α) ∩ FIRST(β) = ∅; and

  2. if FIRST(β) contains ε and FIRST(α) does not contain ε, then FIRST(α) ∩ FOLLOW(A) = ∅.
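
These two conditions are mechanical enough to check in code. The sketch below is illustrative only; is_ll1 is a name chosen here, and it reuses the first_of_sequence helper and the FOLLOW sets from the earlier sketches:

    from itertools import combinations

    EPS = "eps"

    def is_ll1(grammar, first, follow):
        """Checks conditions (1) and (2) for every pair of alternatives A -> alpha | beta."""
        for A, alternatives in grammar.items():
            for alpha, beta in combinations(alternatives, 2):
                fa = first_of_sequence([X for X in alpha if X != EPS], first)
                fb = first_of_sequence([X for X in beta if X != EPS], first)
                if fa & fb:
                    return False          # condition (1): FIRST sets are not disjoint
                if EPS in fb and (fa & follow[A]):
                    return False          # condition (2)
                if EPS in fa and (fb & follow[A]):
                    return False          # condition (2), with the roles reversed
        return True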

4.3.2 Examples

EXAMPLE 4.3

Test whether the following grammar is LL(1) or not, and construct a predictive parsing table for it:

S → AaAb | BbBa
A → ε
B → ε

Since the grammar contains the pair of productions S → AaAb | BbBa, for the grammar to be LL(1) it is required that FIRST(AaAb) ∩ FIRST(BbBa) = ∅. Since FIRST(AaAb) = {a} and FIRST(BbBa) = {b}, the two sets are disjoint.

Hence, the grammar is LL(1).

To construct the parsing table, the FIRST and FOLLOW sets are computed, as shown below:

FIRST(AaAb) = {a}, FIRST(BbBa) = {b}, FIRST(A) = FIRST(B) = {ε}

  1. Using S → AaAb, we get: FOLLOW(A) = FIRST(aAb) ∪ FIRST(b) = {a, b}

  2. Using S → BbBa, we get: FOLLOW(B) = FIRST(bBa) ∪ FIRST(a) = {b, a}

Table 4.6: Production Selections for Example 4.3 Parsing Derivations

|   | a        | b        | $ |
| S | S → AaAb | S → BbBa |   |
| A | A → ε    | A → ε    |   |
| B | B → ε    | B → ε    |   |
EXAMPLE 4.4

Consider the following grammar, and test whether the grammar is LL (1) or not.


For the pair of productions S1 → AB | ε, the LL(1) condition is satisfied, because FOLLOW(S) = {$} (i.e., it contains only the end marker). Similarly, the condition is satisfied for the pair of productions A1 → AC | C.

Hence, the grammar is LL(1).

Now, show that no left-recursive grammar can be LL(1).

One of the basic requirements for a grammar to be LL(1) is that, for every pair of productions A → α | β in the grammar's set of productions, FIRST(α) and FIRST(β) should be disjoint.

If a grammar is left-recursive, then the set of productions will contain at least one pair of the form A → Aα | β; and hence, FIRST(Aα) and FIRST(β) will not be disjoint sets, because everything in FIRST(β) will also be in FIRST(Aα). This violates the condition for an LL(1) grammar. Hence, a grammar containing a pair of productions A → Aα | β (i.e., a left-recursive grammar) cannot be LL(1).
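
As a quick concrete check of this argument, the first_sets sketch given earlier can be run on a small left-recursive pair such as A → Aa | b (a hypothetical grammar chosen here):

    EPS = "eps"

    # A -> Aa | b  (left recursive); reusing the first_sets sketch from earlier
    lr_grammar = {"A": [["A", "a"], ["b"]]}
    lr_first = first_sets(lr_grammar, {"a", "b"})
    print(lr_first["A"])                   # {'b'}
    # Hence FIRST(Aa) = {'b'} = FIRST(b): the two FIRST sets are not disjoint,
    # so the pair A -> Aa | b violates the LL(1) condition, as argued above.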

Now, let X be a nullable nonterminal that derives at least two terminal strings. Show that in an LL(1) grammar, no production rule can have two consecutive occurrences of X on the right side of the production.

Since X is nullable, X derives ε; and X also derives at least two terminal strings, w1 and w2, where w1 and w2 are strings of terminals. Therefore, for a grammar using X to be LL(1), it is required that:

FIRST(w1) ∩ FIRST(w2) = ∅

FIRST(w1) ∩ FOLLOW(X) = ∅ and FIRST(w2) ∩ FOLLOW(X) = ∅

If this grammar contains a production rule A → αXXβ (a production whose right side has two consecutive occurrences of X), then everything in FIRST(X) will also be in FOLLOW(X); and since FIRST(X) contains FIRST(w1) as well as FIRST(w2), the second condition will not be satisfied. Hence, a grammar containing a production of the form A → αXXβ can never be LL(1), thereby proving that in an LL(1) grammar, no production rule can have two consecutive occurrences of X on the right side of the production.

EXAMPLE 4.5

Construct a predictive parsing table for the following grammar where S is a start symbol and # is the end marker.


Here, # is taken as one of the grammar symbols. And therefore, the initial configuration of the parser will be ( S , w#), where the first member of the pair is the contents of the stack and the second member is the contents of input buffer.

The FIRST and FOLLOW sets are computed as follows:

  1. Using S′ → S#, we get: FOLLOW(S) = FIRST(#) = {#}

  2. Using S → qABC, we get: FOLLOW(A) = (FIRST(BC) − {ε}) ∪ FOLLOW(S) = {a, b, #}, FOLLOW(B) = (FIRST(C) − {ε}) ∪ FOLLOW(S) = {b, #}, and FOLLOW(C) = FOLLOW(S) = {#}

  3. Using A → bbD, we get: FOLLOW(D) = FOLLOW(A) = {a, b, #}

Therefore, the parsing table is derived as shown in Table 4.7.

Table 4.7: Production Selections for Example 4.5 Parsing Derivations

|    | q        | a     | b       | c     | #     |
| S′ | S′ → S#  |       |         |       |       |
| S  | S → qABC |       |         |       |       |
| A  |          | A → a | A → bbD |       |       |
| B  |          | B → a | B → ε   |       | B → ε |
| C  |          |       | C → b   |       | C → ε |
| D  |          | D → ε | D → ε   | D → c | D → ε |

EXAMPLE 4.6

Construct a predictive parsing table for the following grammar:


Since the grammar is ε-free, the FOLLOW sets are not required in order to enter the productions into the parsing table. Therefore, the parsing table is as shown in Table 4.8.

Table 4.8: Production Selections for Example 4.6 Parsing Derivations

|   | a      | b       | f     | g     | d     |
| S | S → A  |         |       |       |       |
| A | A → aS |         |       |       | A → d |
| B |        | B → bBC | B → f |       |       |
| C |        |         |       | C → g |       |
EXAMPLE 4.7

Construct a predictive parsing table for the following grammar, where S is the start symbol:

S → iEtSS1 | a
S1 → eS | ε
E → b

  1. Using S → iEtSS1, we get: FOLLOW(E) = FIRST(t) = {t}, FOLLOW(S) ⊇ FIRST(S1) − {ε} = {e}, and FOLLOW(S1) ⊇ FOLLOW(S)

  2. Using S1 → eS, we get: FOLLOW(S) ⊇ FOLLOW(S1)

Therefore, FOLLOW(S) = FOLLOW(S1) = {e, $}, and FOLLOW(E) = {t}.

Therefore, the parsing table is as shown in Table 4.9.

Table 4.9: Production Selections for Example 4.7 Parsing Derivations

|    | i          | a     | b     | e               | t | $      |
| S  | S → iEtSS1 | S → a |       |                 |   |        |
| S1 |            |       |       | S1 → eS, S1 → ε |   | S1 → ε |
| E  |            |       | E → b |                 |   |        |

Note that the entry at [S1, e] is multiply defined; hence, this grammar is not LL(1).
EXAMPLE 4.8

Construct an LL(1) parsing table for the following grammar:

S → aBDh
B → cC
C → bC | ε
D → EF
E → g | ε
F → f | ε


Computation of FIRST and FOLLOW:

FIRST(S) = {a}, FIRST(B) = {c}, FIRST(C) = {b, ε}, FIRST(E) = {g, ε}, FIRST(F) = {f, ε}, and FIRST(D) = FIRST(EF) = {g, f, ε}

Since S is the start symbol, FOLLOW(S) = {$}.

  1. Using the production S → aBDh, we get: FOLLOW(B) = FIRST(Dh) = {g, f, h} and FOLLOW(D) = FIRST(h) = {h}

  2. Using the production B → cC, we get: FOLLOW(C) = FOLLOW(B) = {g, f, h}

  3. Using the production C → bC, we get: FOLLOW(C) ⊇ FOLLOW(C), which adds nothing new

  4. Using the production D → EF, we get: FOLLOW(E) = (FIRST(F) − {ε}) ∪ FOLLOW(D) = {f, h} and FOLLOW(F) = FOLLOW(D) = {h}

Therefore, the parsing table is as shown in Table 4.10.

Table 4.10: Production Selections for Example 4.8 Parsing Derivations

|   | a        | b      | c      | g      | f      | h      | $ |
| S | S → aBDh |        |        |        |        |        |   |
| B |          |        | B → cC |        |        |        |   |
| C |          | C → bC |        | C → ε  | C → ε  | C → ε  |   |
| D |          |        |        | D → EF | D → EF | D → EF |   |
| E |          |        |        | E → g  | E → ε  | E → ε  |   |
| F |          |        |        |        | F → f  | F → ε  |   |


