Compiler Design | Subject Wise
Question 1 |
Dynamic type checking slows down the execution | |
Dynamic type checking offers more flexibility to the programmers | |
In contrast to Static type checking, dynamic type checking may cause failure in runtime due to type errors | |
Unlike static type checking, dynamic type checking is done during compilation |
Statically typed languages
A language is statically typed if the type of a variable is known at compile time. For some languages this means that you, as the programmer, must specify what type each variable is (e.g., Java, C, C++); other languages offer some form of type inference, the capability of the type system to deduce the type of a variable (e.g., OCaml, Haskell, Scala, Kotlin).
The main advantage here is that all kinds of checking can be done by the compiler, and therefore a lot of trivial bugs are caught at a very early stage.
Examples: C, C++, Java, Rust, Go, Scala
Dynamically typed languages
A language is dynamically typed if the type is associated with run-time values, and not named variables/fields/etc. This means that you as a programmer can write a little quicker because you do not have to specify types every time (unless using a statically-typed language with type inference).
Examples: Perl, Ruby, Python, PHP, JavaScript
Most scripting languages have this feature, as there is no compiler to do static type-checking anyway, but you may find yourself searching for a bug that is due to the interpreter misinterpreting the type of a variable. Luckily, scripts tend to be small, so bugs have fewer places to hide.
Most dynamically typed languages do allow you to provide type information, but do not require it. One language that is currently being developed, Rascal, takes a hybrid approach allowing dynamic typing within functions but enforcing static typing for the function signature.
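A minimal sketch of the trade-off described above, using Python (a dynamically typed language): the type error below is detected only when the offending line actually runs, never at compile time. The function name is illustrative.

```python
def add_one(x):
    # No declared type for x: any value is accepted at call time.
    return x + 1  # fails at run time if x does not support + with an int

print(add_one(41))           # works fine with an int
try:
    add_one("forty-one")     # the type error surfaces only now, at run time
except TypeError as e:
    print("runtime type error:", e)
```

A static type checker would have rejected the second call before the program ever ran.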
Question 2 |
Which is written in a language that is different from the source language | |
Compiles the whole source code to generate object code afresh | |
Compiles only those portion of source code that has been modified. | |
That runs on one machine but produces object code for another machine |
- An incremental compiler is a compiler that compiles only those portions of the source code that have been modified.
- Without recompiling the whole program, it can also compile additional statements added to a program.
- Incremental compilation is a kind of incremental computation applied to the field of compilation. Whereas ordinary compilers make a so-called clean build, that is, (re)build all program modules, an incremental compiler recompiles only those portions of a program that have been modified.
Question 3 |
Consist of a definition of a variable and all its uses, reachable from that definition | |
Are created using a form of static code analysis | |
Are prerequisite for many compiler optimization including constant propagation and common sub-expression elimination | |
All of the above |
A counterpart of a UD Chain is a Definition-Use Chain (DU Chain), which consists of a definition, D, of a variable and all the uses, U, reachable from that definition without any other intervening definitions.
Both UD and DU chains are created by using a form of static code analysis known as data flow analysis. Knowing the use-def and def-use chains for a program or subprogram is a prerequisite for many compiler optimizations, including constant propagation and common subexpression elimination.
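As an illustration of the idea (not from the source), a DU-chain builder for straight-line three-address code can be sketched as follows. The `(dest, [sources])` instruction encoding and all names are my own assumptions.

```python
# Each instruction is (destination variable, list of source variables).
code = [
    ("a", []),          # 0: a = ...
    ("b", ["a"]),       # 1: b = a + ...
    ("a", ["b"]),       # 2: a = b * ...   (a new definition of a)
    ("c", ["a", "b"]),  # 3: c = a + b
]

def du_chains(code):
    last_def = {}   # variable -> index of its most recent definition
    chains = {}     # definition index -> indices of uses it reaches
    for i, (dest, srcs) in enumerate(code):
        for v in srcs:
            if v in last_def:
                # this use is reached by the latest definition of v,
                # with no intervening redefinition
                chains.setdefault(last_def[v], []).append(i)
        last_def[dest] = i
        chains.setdefault(i, [])
    return chains

print(du_chains(code))   # {0: [1], 1: [2, 3], 2: [3], 3: []}
```

Note that the definition of `a` at index 0 reaches only the use at index 1; the redefinition at index 2 starts a new chain, exactly as the definition above requires.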
Refer : https://en.wikipedia.org/wiki/Use-define_chain
Question 4 |
It is applied to a small part of the code and applied repeatedly | |
It can be used to optimize intermediate code | |
It can be applied to a portion of the code that is not contiguous | |
It is applied in the symbol table to optimize the memory requirements. |
Peephole optimization is done by examining a sliding window of target instructions (called the peephole) and replacing instruction sequences within the peephole by a shorter or faster sequence.
It can be applied directly after intermediate code generation to improve the intermediate representation.
The code in the peephole need not be contiguous, although some implementations do require this.
So, options (A), (B) and (C) are true. Only option (D) is false.
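A tiny peephole pass might look like the sketch below (my illustration; the instruction names `PUSH`/`POP`/`MUL`/`SHL` and the two patterns are assumptions, not from the source).

```python
def peephole(instrs):
    out, i = [], 0
    while i < len(instrs):
        pair = instrs[i:i + 2]
        # pattern 1: a push immediately followed by a pop of the same
        # register is a net no-op, so drop both instructions
        if len(pair) == 2 and pair[0] == ("PUSH", "R1") and pair[1] == ("POP", "R1"):
            i += 2
            continue
        # pattern 2: multiply by 2 -> cheaper left shift by 1
        if instrs[i] == ("MUL", "R0", 2):
            out.append(("SHL", "R0", 1))
        else:
            out.append(instrs[i])
        i += 1
    return out

code = [("MUL", "R0", 2), ("PUSH", "R1"), ("POP", "R1"), ("ADD", "R0", 1)]
print(peephole(code))   # [('SHL', 'R0', 1), ('ADD', 'R0', 1)]
```

A real pass would run repeatedly until no window matches any pattern, which is why option (A) of Question 4 mentions repeated application.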
Question 5 |
Faster | |
Slower | |
At the same speed | |
May be faster or slower |
- A compiler is a computer program that transforms code written in a high-level programming language into machine code. It translates human-readable code into a language the computer processor understands (binary 1 and 0 bits).
- An interpreter is a computer program which converts each high-level program statement into machine code; this includes source code, pre-compiled code, and scripts. Both compilers and interpreters do the same job, converting a higher-level programming language to machine code. However, a compiler converts the code into machine code (creating an executable) before the program runs, while an interpreter converts code into machine code as the program runs.
- A compiler transforms code written in a high-level programming language into machine code all at once, before the program runs, whereas an interpreter converts each high-level program statement, one by one, into machine code during the program run.
- Compiled code runs faster while interpreted code runs slower.
- Compiler displays all errors after compilation, on the other hand, the Interpreter displays errors of each line one by one.
- Compiler is based on translation linking-loading model, whereas Interpreter is based on Interpretation Method.
- Compiler takes an entire program whereas the Interpreter takes a single line of code.
Question 6 |
A parse tree | |
Intermediate code | |
Machine code | |
A stream of tokens |
- The lexical analyzer is the first phase of a compiler.
- The lexical analyzer reads input characters from the source program and produces output as a sequence of tokens.
Question 7 |
Declaration | |
Assignment statements | |
Input and output statements | |
Structural statements |
So the meaning of "a declaration is not an executable statement" is that no machine code corresponds to a declaration. This also means that:
- the CPU doesn't spend any time "running" the declaration (but the compiler does spend time examining it);
- a visual debugger will "skip" the declaration when running your program step by step.
Question 8 |
Linear list | |
Search tree | |
Hash table | |
Self organization list |
- Linear (sorted or unsorted) list
- Binary Search Tree
- Hash table
The symbol table provides the following operations:
- insert()
- lookup()
Implementation of Symbol table
The following data structures are commonly used for implementing a symbol table:
List (array)
- In this method, an array is used to store names and associated information.
- A pointer "available" is maintained at the end of all stored records, and new names are added in the order in which they arrive.
- To search for a name we scan from the beginning of the list up to the available pointer; if the name is not found we report an error: "use of undeclared name".
- While inserting a new name we must ensure that it is not already present; otherwise an error occurs: "multiple defined name".
- Insertion is fast, O(1), but lookup is slow for large tables, O(n) on average. The advantage is that it takes a minimum amount of space.
Linked list
- This implementation uses a linked list; a link field is added to each record.
- Searching of names is done in the order pointed to by the link fields.
- A pointer "First" is maintained to point to the first record of the symbol table.
- Insertion is fast, O(1), but lookup is slow for large tables, O(n) on average.
Hash table
- In the hashing scheme two tables are maintained, a hash table and a symbol table; this is the most commonly used method to implement symbol tables.
- A hash table is an array with index range 0 to tablesize − 1. Its entries are pointers to names in the symbol table.
- To search for a name we apply a hash function that yields an integer between 0 and tablesize − 1.
- Insertion and lookup can be made very fast, O(1).
- The advantage is that quick search is possible; the disadvantage is that hashing is more complicated to implement.
Binary search tree
- Another approach is to use a binary search tree: we add two link fields, left and right child.
- All names are created as children of the root node, always maintaining the binary search tree property.
- Insertion and lookup are O(log2 n) on average.
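As a minimal sketch of the hash-table variant (Python dicts are hash tables), with the insert()/lookup() operations and the two error cases described above. The class name and attribute format are my own.

```python
class SymbolTable:
    def __init__(self):
        self.table = {}   # name -> attributes; O(1) average insert/lookup

    def insert(self, name, attrs):
        # duplicate insertion is the "multiple defined name" error
        if name in self.table:
            raise KeyError(f"multiple declaration of {name}")
        self.table[name] = attrs

    def lookup(self, name):
        # a failed lookup is the "use of undeclared name" error
        if name not in self.table:
            raise KeyError(f"use of undeclared name {name}")
        return self.table[name]

st = SymbolTable()
st.insert("i", {"type": "int"})
print(st.lookup("i"))   # {'type': 'int'}
```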
Question 9 |
Top-down parsers | |
Bottom-up parsers | |
Predictive parsers | |
None of the above |
Top-Down Parsers:
In this Parsing technique we expand the start symbol to the whole program.
Recursive Descent and LL parsers are the Top-Down parsers.
Bottom-Up Parsers:
In this Parsing technique we reduce the whole program to start symbol.
Operator Precedence Parser, LR(0) Parser, SLR Parser, LALR Parser and CLR Parser are the Bottom-Up parsers.
Recursive descent
Recursive descent is a top-down parsing technique that constructs the parse tree from the top and the input is read from left to right. It uses procedures for every terminal and non-terminal entity. This parsing technique recursively parses the input to make a parse tree, which may or may not require back-tracking. But the grammar associated with it (if not left factored) cannot avoid back-tracking. A form of recursive-descent parsing that does not require any back-tracking is known as predictive parsing.
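The "one procedure per non-terminal" idea can be sketched as a tiny predictive recursive-descent parser (my illustration) for the grammar E -> T { '+' T }, T -> digit. Because one lookahead token decides which production to use, no backtracking is needed.

```python
def parse(s):
    pos = 0

    def peek():
        return s[pos] if pos < len(s) else None

    def eat(ch):
        nonlocal pos
        if peek() != ch:
            raise SyntaxError(f"expected {ch!r} at position {pos}")
        pos += 1

    def T():                      # procedure for non-terminal T
        c = peek()
        if c is None or not c.isdigit():
            raise SyntaxError(f"expected digit at position {pos}")
        eat(c)
        return int(c)

    def E():                      # procedure for non-terminal E
        value = T()
        while peek() == "+":      # one token of lookahead picks the branch
            eat("+")
            value += T()
        return value

    result = E()
    if pos != len(s):
        raise SyntaxError("trailing input")
    return result

print(parse("1+2+3"))   # 6
```

Evaluating while parsing is a convenience of the sketch; a real parser would typically build a parse tree instead.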
Question 10 |
Rightmost Derivation | |
Rightmost derivation in reverse | |
Leftmost derivation | |
Leftmost derivation in reverse |
- In this Parsing technique we expand the start symbol to the whole program.
- In top down parsing, we just start with the start symbol and compare the right side of the different productions against the first piece of input to see which of the productions should be used.
- A top down parser is called LL parser because it parses the input from Left to right, and constructs a Leftmost derivation of the sentence.
- Recursive Descent and LL parsers are the Top-Down parsers.
- A bottom-up parser is an LR parser: it scans the input from Left to right and performs a Rightmost derivation in reverse order (hence the name LR).
Question 11 |
Loop optimization | |
Local optimization | |
Constant folding | |
Data flow analysis |
So, peephole optimization is a form of local optimization.
Question 12 |
Local optimization | |
Loop optimization | |
Constant folding | |
Strength reduction |
Constant folding is the process of recognizing and evaluating constant expressions at compile time rather than computing them at runtime. Terms in constant expressions are typically simple literals, such as the integer literal 2, but they may also be variables whose values are known at compile time. Consider the statement:
i = 320 * 200 * 32;
Most compilers would not actually generate two multiply instructions and a store for this statement. Instead, they identify constructs such as these and substitute the computed values at compile time (in this case, 2,048,000).
Constant folding can make use of arithmetic identities. If x is numeric, the value of 0 * x is zero even if the compiler does not know the value of x (note that this is not valid for IEEE floats since x could be Infinity or NotANumber. Still, some languages that favor performance such as GLSL allow this for constants, which can occasionally cause bugs).
Constant folding may apply to more than just numbers. Concatenation of string literals and constant strings can be constant folded. Code such as "abc" + "def" may be replaced with "abcdef".
Simple Example:
In the code fragment below, the expression (3 + 5) can be evaluated at compile time and replaced with the constant 8.
int f (void)
{
return 3 + 5;
}
Below is the code fragment after constant folding.
int f (void)
{
return 8;
}
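The same folding can be sketched over Python's own AST (my illustration; `ast.NodeTransformer` and `ast.unparse` are real standard-library APIs, the `Folder` class name is mine). It folds binary operations whose operands are numeric literals, exactly like the 320 * 200 * 32 example above.

```python
import ast

class Folder(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)   # fold the children first, bottom-up
        if (isinstance(node.left, ast.Constant) and
                isinstance(node.right, ast.Constant) and
                isinstance(node.op, (ast.Add, ast.Sub, ast.Mult))):
            ops = {ast.Add: lambda a, b: a + b,
                   ast.Sub: lambda a, b: a - b,
                   ast.Mult: lambda a, b: a * b}
            value = ops[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value=value), node)
        return node

tree = ast.parse("i = 320 * 200 * 32")
folded = Folder().visit(tree)
print(ast.unparse(folded))   # i = 2048000
```

Folding children first means nested constant expressions collapse in a single pass: (320 * 200) becomes 64000, and then 64000 * 32 becomes 2048000.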
Question 13 |
I. All function calls are resolved at compile time in C language
II. All function calls are resolved at compile time in C++ language
Only II is correct | |
Both I and II are correct | |
Only I is correct | |
Both I and II are incorrect |
II. All function calls are resolved at compile time in C++ language -False
In both C and C++, not every call target is known at compile time: a call through a function pointer (or a virtual function call in C++) is a branch whose address is known only at run time.
Question 14 |
Checks to see if the instructions are legal in the current assembly mode | |
It allocates space for the literals. | |
It builds the symbol table for the symbols and their values. | |
All of these |
Pass 1: define symbols (assign addresses)
- Assign addresses to all statements in the program
- Save the values assigned to all labels for use in Pass 2
- Process some assembler directives
- Table creation, adding literals to the literal table
- Allocate space for literals
- Compute the total length of the program
Pass 2: actual translation (object code)
- Assemble instructions
- Generate data values defined by BYTE, WORD, etc.
- Process the assembler directives not done in Pass 1
- Write the object program and the assembly listing
- The result, called the object program, will later be loaded into memory for execution
Question 15 |
Replacing run time computation by compile time computation | |
Removing loop invariant computation | |
Removing common subexpressions | |
Replacing a costly operation by a relatively cheaper one |
Other Examples of strength reduction include:
- Replacing a multiplication within a loop with an addition
- Replacing an exponentiation within a loop with a multiplication
Exponentiation is replaced by multiplication, and multiplication is in turn replaced by addition.
The code below uses a multiplication operator:
a = 10;
for (j = 0; j < X; j++) {
    Z[j] = a * j;
}
This code can be replaced by the following, with the multiplication replaced by addition:
a = 10;
k = 0;
for (j = 0; j < X; j++) {
    Z[j] = k;
    k = k + a;
}
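A quick sanity check (my sketch, transliterated to Python) that the strength-reduced loop computes exactly the same values as the multiply-in-loop original from the text:

```python
X, a = 5, 10

Z1 = [0] * X
for j in range(X):       # original: one multiplication per iteration
    Z1[j] = a * j

Z2 = [0] * X
k = 0
for j in range(X):       # reduced: the multiplication becomes an addition
    Z2[j] = k
    k = k + a

print(Z1 == Z2)          # True
```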
Question 16 |
It is applied to a small part of the code | |
It can be used to optimize intermediate code | |
To get the best out of this, it has to be applied repeatedly | |
It can be applied to the portion of the code that is not contiguous |
This is often applied repeatedly. So, all given options are true in this question.
Refer this link : http://www.cse.iitm.ac.in/~krishna/cs3300/pm-lecture3.pdf
Question 17 |
S → AB
A → a
B → b
B → C
A | |
B | |
C | |
S |
Question 18 |
Bottom up parsing | |
Top down parsing | |
Recursive parsing | |
Predictive parsing |

Question 19 |
Leftmost derivation in reverse | |
Right-most derivation in reverse | |
Left-most derivation | |
Right-most derivation |
Question 20 |
(1) i=1
(2) t1=5*I
(3) t2=4*t1
(4) t3=t2
(5) a[t3]=0
(6) i=i+1;
(7) if i<15 goto(2)
33 | |
44 | |
43 | |
34 |
- Start
- 1
- 2 to 7
- End
An edge from Start to 1.
An edge from 1 to 2-7.
A self-loop on 2-7
An edge from 2-7 to end.
Hence, Option(B) 44 is the correct answer.
Question 21 |
Variable Table | |
Terminal Table | |
Keyword Table | |
Identifier Table |
option(B) is the correct answer
Question 22 |
7 | |
8 | |
9 | |
13 |
- printf
- (
- "A%B="
- ,
- &
- i
- )
- ;
Question 23 |
Replace P^2 by P*P | |
Replace P*16 by P<< 4 | |
Replace pow(P,3) by P*P*P | |
Replace (P <<5) -P by P*3 |
Here, the most suitable option is (B): Replace P * 16 by P << 4.
Question 24 |
Loop rolling | |
Loop folding | |
Loop merge | |
Loop jamming |
Example:
Initial code:
for (int i = 0; i < 5; i++) a = i + 5;
for (int i = 0; i < 5; i++) b = i + 10;
Using loop jamming, the above is converted into the optimized code below:
for (int i = 0; i < 5; i++) { a = i + 5; b = i + 10; }
Question 25 |
Loop rolling | |
Loop folding | |
Loop merge | |
Loop jamming |
Example:
Initial code:
for (int i = 0; i < 5; i++) a = i + 5;
for (int i = 0; i < 5; i++) b = i + 10;
Using loop jamming, the above is converted into the optimized code below:
for (int i = 0; i < 5; i++) { a = i + 5; b = i + 10; }
Question 26 |
External data segments | |
External subroutines | |
Data located in other procedure | |
All of these |
Refer : http://faculty.cs.niu.edu/~byrnes/csci360/notes/360ext.htm
Question 27 |
SLR parsing table | |
Canonical LR parsing table | |
LALR parsing table | |
None of these |
- YACC stands for Yet Another Compiler Compiler.
- YACC is an LALR(1) ( LookAhead, Left-to-right, Rightmost derivation producer with 1 lookahead token) parser generator
- YACC builds up an LALR(1) parsing table.
Question 28 |
The name of source program in micro computers | |
The set of instructions indicating the primitive operations in a system | |
Primitive form of macros used in assembly language programming | |
Program of very small size |
A microprogram is a set of microinstructions that defines the individual operations a computer carries out in response to a machine-language instruction.
Question 29 |
N/2 | |
N-1 | |
2n-1 | |
2n |
A → BC
B → aa
C → bb
now let the string w = aabb then
A → BC(reduction 3)
→ aaC (reduction 2)
→ aabb (reduction 1)
Here n = 4 (the length of w) and the number of reductions is 3, i.e. n − 1.
Note:
There is a mistake in the question; it is not supposed to have A → a, even though that is not a unit production. The given key is n − 1.
Question 30 |
Parsing of the program | |
The code generation | |
The lexical analysis of the program | |
Dataflow analysis |
- Lexical Analysis is the first phase of the compiler also known as a scanner. It converts the High level input program into a sequence of Tokens.
- A lexical token is a sequence of characters that can be treated as a unit in the grammar of the programming languages.
- Type token (id, number, real, . . . )
- Punctuation tokens (IF, void, return, . . . )
- Alphabetic tokens (keywords)
- Keywords; examples: for, while, if, etc.
- Identifiers; examples: variable names, function names, etc.
- Operators; examples: '+', '++', '-', etc.
- Separators; examples: ',', ';', etc.
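A minimal regex-based scanner sketch (my own naming and token classes, chosen to match the categories above: keywords, identifiers, numbers, operators, separators):

```python
import re

KEYWORDS = {"for", "while", "if", "return", "int", "void"}

TOKEN_RE = re.compile(r"""
    (?P<num>\d+)                 # integer literals
  | (?P<name>[A-Za-z_]\w*)       # identifiers (or keywords)
  | (?P<op>\+\+|[+\-*/=<>])      # operators, longest match first
  | (?P<sep>[,;(){}])            # separators
  | (?P<ws>\s+)                  # whitespace, discarded
""", re.VERBOSE)

def tokenize(src):
    tokens = []
    for m in TOKEN_RE.finditer(src):
        kind = m.lastgroup
        if kind == "ws":
            continue
        if kind == "name":
            kind = "keyword" if m.group() in KEYWORDS else "id"
        tokens.append((kind, m.group()))
    return tokens

print(tokenize("if (x < 10) x = x + 1;"))
```

Running it on `if (x < 10) x = x + 1;` yields 12 tokens, with `if` classified as a keyword and `x` as an identifier.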
Question 31 |
Assembler | |
Linking loader | |
Cross compiler | |
Load and go |
Question 32 |
There exist parsing algorithms for some programming languages whose complexities are less than O(n3) | |
A programming language which allows recursion can be implemented with static storage allocation | |
L-attributed definition can be evaluated in the framework of bottom-up parsing | |
Code improving transformation can be performed at both source language and intermediate code level. |
- Statement I - True. LL(1) and LR parsers are linear-time parsers: they take O(n) time to parse a string, i.e., only one scan of the input is required. Parsing an arbitrary CFG, by contrast, takes O(n3).
Refer : https://www.csee.umbc.edu/courses/331/fall11/notes/04/04cparsing.ppt.pdf
- Statement II - False. Recursion cannot be implemented with static storage allocation; it needs dynamic memory allocation.
- Statement III - True. Every S-attributed SDT is L-attributed (the converse need not hold). S-attributed SDTs follow the bottom-up approach; hence L-attributed definitions that are also S-attributed can be evaluated during bottom-up parsing.
Refer : http://www.cs.sunysb.edu/~cse304/Fall09/Lectures/attributes-handout.pdf
- Statement IV - True. Code-improving transformations can be performed at both the source-language and intermediate-code level.
Question 33 |
On the number of strings/literals | |
That the data segment must be defined after the code segment | |
On unconditional jump | |
That the data segment be defined before the code segment |
Question 34 |
An Operator Grammar | |
Right Recursive | |
Left Recursive | |
Ambiguous |
- Free from ambiguity
- Free from left recursion
- Free from left factoring
Option (C) is also satisfied, but option (D) is a better answer here, because we have a standard procedure for removing left recursion, whereas ambiguity is not easy to remove.
Question 35 |
Assembler directives | |
Instructions in any program that have no corresponding machine code instruction | |
Instruction in any program whose presence or absence will not change the output for any input | |
None of these |
Refer : https://en.wikibooks.org/wiki/360_Assembly/Pseudo_Instructions
Question 36 |
Consider the following Grammar G :
S ➝ A | B
A ➝ a | c
B ➝ b | c
Where {S,A,B} is the set of non-terminals, {a,b,c} is the set of terminals.
Which of the following statement(s) is/are correct ?
S1 : LR(1) can parse all strings that are generated using grammar G.
S2 : LL(1) can parse all strings that are generated using grammar G.
Choose the correct answer from the code given below :
Code :
Both S1 and S2 | |
Only S2 | |
Neither S1 nor S2 | |
Only S1 |
First Leftmost Derivation
S → A
A → c
Second Leftmost Derivation
S → B
B → c
Since the string c has two distinct leftmost derivations, G is ambiguous, and an ambiguous grammar can be neither LL(1) nor LR(1).
Question 37 |
Local optimization | |
Constant folding | |
Loop Optimization | |
Data flow analysis |
Constant folding is the process of recognizing and evaluating constant expressions at compile time rather than computing them at runtime. Terms in constant expressions are typically simple literals, such as the integer literal 2, but they may also be variables whose values are known at compile time. Consider the statement:
i = 320 * 200 * 32;
Most compilers would not actually generate two multiply instructions and a store for this statement. Instead, they identify constructs such as these and substitute the computed values at compile time (in this case, 2,048,000).
Constant folding can make use of arithmetic identities. If x is numeric, the value of 0 * x is zero even if the compiler does not know the value of x (note that this is not valid for IEEE floats since x could be Infinity or NotANumber. Still, some languages that favor performance such as GLSL allow this for constants, which can occasionally cause bugs).
Constant folding may apply to more than just numbers. Concatenation of string literals and constant strings can be constant folded. Code such as "abc" + "def" may be replaced with "abcdef".
Simple Example:
In the code fragment below, the expression (3 + 5) can be evaluated at compile time and replaced with the constant 8.
int f (void)
{
return 3 + 5;
}
Below is the code fragment after constant folding.
int f (void)
{
return 8;
}
Loop Optimization
Loop optimization is the process of increasing execution speed and reducing the overheads associated with loops.
Decreasing the number of instructions in an inner loop improves the running time of a program even if the amount of code outside that loop is increased.
Loop Optimization Techniques -
- Frequency Reduction (Code Motion)
- Loop Unrolling:
- Loop Jamming
Question 38 |
Syntax | |
Struct | |
Semantic | |
None of the above |
Question 39 |
DAG | |
Control graph | |
Flow graph | |
Hamiltonian graph |
Basic block is a set of statements that always executes in a sequence one after the other.
The characteristics of basic blocks are :
- They do not contain any kind of jump statements in them.
- There is no possibility of branching or getting halt in the middle.
- All the statements execute in the same order they appear.
- They do not lose the flow control of the program.
- A flow graph is a directed graph with flow control information added to the basic blocks.
- The basic blocks serve as nodes of the flow graph.
- There is a directed edge from block B1 to block B2 if B2 appears immediately after B1 in the code
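The standard way to split code into basic blocks is to find "leaders": the first statement, any jump target, and any statement immediately following a jump. A sketch (my own instruction encoding) applied to the Question 20 listing, with statements 0-indexed:

```python
def leaders(code):
    lead = {0}                          # rule 1: the first statement
    for i, instr in enumerate(code):
        if instr[0] in ("goto", "if"):  # instr = (op, ..., target index)
            lead.add(instr[-1])         # rule 2: the jump target
            if i + 1 < len(code):
                lead.add(i + 1)         # rule 3: statement after a jump
    return sorted(lead)

# the Question 20 listing, 0-indexed: "(7) if i<15 goto (2)" targets index 1
code = [
    ("assign", "i", "1"),        # 0
    ("assign", "t1", "5*i"),     # 1  <- jump target
    ("assign", "t2", "4*t1"),    # 2
    ("assign", "t3", "t2"),      # 3
    ("assign", "a[t3]", "0"),    # 4
    ("assign", "i", "i+1"),      # 5
    ("if", "i<15", 1),           # 6
]
print(leaders(code))             # [0, 1]
```

The two leaders give exactly the two basic blocks used in the Question 20 solution: {1} and {2..7}.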
Question 40 |
Leftmost derivation | |
Rightmost derivation | |
Leftmost derivation in reverse | |
Rightmost derivation in reverse |
Question 41 |
It is based on the syntax | |
It is easy to modify | |
Its description is independent of any implementation | |
All of these |
- An SDD is a CFG with attributes and rules.
– Attributes are associated with grammar symbols.
– Rules are associated with productions.
- An SDD specifies the semantics of productions.
– It does not enforce a specific way of achieving the semantics.
- An SDT is done by attaching rules or program fragments to productions.
- The order induced by the syntax analysis produces a translation of the input program
- To associate actions with productions
- To associate attributes with non-terminals
- To create implicit or explicit syntax tree
- To perform semantic analysis
Question 42 |
A set of regular expressions | |
Strings of character | |
Syntax tree | |
Set of tokens |
- The lexical analyzer is the first phase of a compiler.
- The lexical analyzer reads input characters from the source program and produces output as a sequence of tokens.
Question 43 |
E→ E*F | F+E | F
F→ F-F | id
Which of the following is true?
* has higher precedence than + | |
– has higher precedence than * | |
+ and – have same precedence | |
+ has higher precedence than * |
- * and + have the lowest precedence because they are present at the topmost level.
- * and + have equal precedence because they are at the same level.
- '-' has higher precedence than '+' and '*' because it is present at the bottommost level.
          E
        / | \
       E  *  F
       |    / | \
       F   F  -  F
       |   |     |
    id(6) id(7) id(8)
As we can see, '-' is evaluated first, then '*'.
Thus, '-' has higher precedence than '*'.
So the correct answer is option (B).
Question 44 |
printf(“i=%d, &i=%x”, i&i);
13 | |
6 | |
10 | |
9 |
If the arguments were written as i, &i (with a comma), the statement would contain 10 tokens:
- printf
- (
- "i=%d, &i=%x"
- ,
- i
- ,
- &
- i
- )
- ;
For the statement as given, printf("i=%d, &i=%x", i&i); has 9 tokens:
- printf
- (
- "i=%d, &i=%x"
- ,
- i
- &
- i
- )
- ;
Question 45 |
1) A → BC
2) A → CcBb
3) A → BaC
4) A → ε
1 only | |
1 and 2 only | |
1 and 3 only | |
1 and 4 only |
An operator grammar may not contain two adjacent non-terminals (as in A → BC) or ε-productions (as in A → ε). So A → BC and A → ε are not allowed.
Thus, Correct option is (D).
Question 46 |
Recursive descent parser | |
Shift left associative parser | |
SLR(k) parser | |
LR(k) parser |

Question 47 |
Yet accept compiler constructs | |
Yet accept compiler compiler | |
Yet another compiler construct | |
Yet another compiler compiler |
- YACC stands for Yet Another Compiler Compiler.
- YACC is an LALR(1) ( LookAhead, Left-to-right, Rightmost derivation producer with 1 lookahead token) parser generator
- YACC builds up an LALR(1) parsing table.
Question 48 |
LALR parser is more powerful and costly as compare to other parsers | |
All CFG’s are LP and not all grammars are uniquely defined | |
Every SLR grammar is unambiguous but not every unambiguous grammar is SLR | |
LR(K) is the most general backtracking shift reduce parsing method |
LR(0) < SLR(1) < LALR(1) < CLR(1)
(B). Grammars generated by LP (linear precedence) are CFGs, but that does not imply that all CFGs are LP.
(C) . Every SLR grammar is unambiguous but not every unambiguous grammar is SLR
(D). LR(k) uses k tokens of lookahead, which lets it always see ahead in the input and avoid backtracking; LR(k) is the most general non-backtracking shift-reduce parsing method.
Question 49 |
Top-down, right to left | |
Top-down, left to right | |
Bottom up, right to left | |
Bottom up, left to right |
- Recursive descent is a top-down parsing technique that constructs the parse tree from the top and the input is read from left to right.
- It uses procedures for every terminal and non-terminal entity.
- This parsing technique recursively parses the input to make a parse tree, which may or may not require back-tracking. But the grammar associated with it (if not left factored) cannot avoid back-tracking.
- A form of recursive-descent parsing that does not require any back-tracking is known as predictive parsing.
- This parsing technique is regarded recursive as it uses context-free grammar which is recursive in nature.

Question 50 |
LR parser | |
LL parser | |
SLR parser | |
LALR parser |

Question 51 |
Loop optimization | |
Redundancy Elimination | |
Folding | |
All of the options |
- Enable other transformations: procedure inlining, cloning, loop unrolling
- Eliminate redundancy: redundant expression elimination
- Eliminate useless and unreachable code: dead code elimination
- Specialization and strength reduction: constant propagation, peephole optimization
- Move operations to less-frequently executed places: loop-invariant code motion
Other machine-independent optimizations are:
- Loop optimization
- Redundancy elimination
- Folding
- Strength reduction
- Dead code elimination
Machine-dependent optimizations:
- Take advantage of special hardware features: instruction selection, prefetching
- Manage or hide latency, introduce parallelism: instruction scheduling, prefetching
- Manage bounded machine resources: register allocation
Other machine-dependent optimizations are:
- Register allocation
- Use of addressing modes
- Peephole optimization
Question 52 |
S1: LR(0) grammar and SLR(1) grammar are equivalent
S2: LR(1) grammar are subset of LALR(1) grammars
S1 only | |
S1 and S2 both | |
S2 only | |
None of the options |

The containment rules are the following:
- Every LR(0) grammar is also SLR(1), but not all SLR(1) grammars are LR(0).
- Every SLR(1) grammar is also LALR(1), but not all LALR(1) grammars are SLR(1).
- Every LALR(1) grammar is also LR(1), but not all LR(1) grammars are LALR(1).
- Every LL(1) grammar is also LR(1), but not all LR(1) grammars are LL(1).
- Every LL(0) grammar is also LR(0), SLR(1), LALR(1), LR(1), and LL(1)
Question 53 |
Reduces the space of the code | |
Optimization the code to reduce execution time | |
Both (A) and (B) | |
Neither (A) nor (B) |
Code Optimization
Code Optimization phase removes the unnecessary code line and arranges the sequence of statements to speed up the execution of the program without wasting resources. The main goal of Code Optimization phase is to improve on the intermediate code to generate a code that runs faster and occupies less space.
The main functions of Code Optimization phase are:
- Code Optimization helps you to establish a trade-off between execution and compilation speed
- It Improves the running time of the target program
- Generates streamlined code still in intermediate representation
- Removing unreachable code and getting rid of unused variables
- Removing statements which are not altered from the loop
a = intofloat(10)
b = c * a
d = e + b
f = d
Then the code optimizer will take the above code and transform it into the code below:
b = c * 10.0
f = e + b
So, both option (A) and option (B) are correct answers.
Question 54 |
Lexical analysis | |
Code optimization | |
Syntax analysis | |
Semantic analysis |
- Syntax analysis or parsing is the second phase of a compiler.
- Syntax analysis is also called parsing.
- Syntax analysis takes the token produced by lexical analysis as input and generates a parse tree (or syntax tree).
- In this phase, token arrangements are checked against the source code grammar, i.e. the parser checks if the expression made by the tokens is syntactically correct.
Question 55 |
Linker | |
Loader | |
Compiler | |
Editor |
Resolving references to global symbols is trickier. When the compiler encounters a symbol (either a variable or a function name) that is not defined in the current module, it assumes that it is defined in some other module, generates a linker symbol table entry, and leaves it for the linker to handle. If the linker is unable to find a definition for the referenced symbol in any of its input modules, it prints an error message and terminates.
Question 56 |
Assembler and editor | |
Compiler and word processor | |
Only Assembler and compiler | |
Assembler,Compiler and Interpreter |
A program written in any high-level programming language is called the source code or source program. The source code converted into machine code is called the object code or object program. A translator translates the source program into the object program that the computer can understand and execute.
In programming Language processors are three types:
- Assembler
- Interpreter
- Compiler
An interpreter converts a source program written in a high-level language into machine language.
An interpreter converts each statement of the program line by line into machine code.
Compiler: the purpose of a compiler is the same as that of an interpreter, but unlike interpreters, which translate the source program line by line, compilers are translators which translate the entire program into machine code at once.
Question 57 |
LL grammar | |
Ambiguous grammar | |
LR grammar | |
None of the above |
Example :
X -> Y | Z, with an attribute i on X, Y and Z.
X.i = f( Y.i , Z.i ) means the parent takes its attribute from its children (a synthesized attribute).
Bottom Up Parsers
LR( 0 ) , SLR( 1 ) , LR( 1 ) , LALR( 1 ) and Operator Precedence
Top Down Parsers
Non Recursive descent ( LL( 1 ) ) and Recursive descent
LR parsers, which are generated from LR grammars, also follow the bottom-up parsing fashion. Therefore option (C) is correct.
Question 58 |
If FIRST(u) ∩ FIRST(v) is empty then the CFG has to be LL(1) | |
If the CFG is LL(1) then FIRST(u) ∩ FIRST(v) has to be empty | |
Both (A) and (B) | |
None of the above |
For every pair of productions A → α | β we require:
1. First(α) ∩ First(β) = ∅.
2. If α *⇒ ε, then First(β) ∩ Follow(A) = ∅.
Condition 2 is not required if the grammar is epsilon-free.
Refer : https://www.csd.uwo.ca/~mmorenom/CS447/Lectures/Syntax.html/node14.html
Question 59 |
Type checks | |
Spelling checks | |
Uniqueness checks | |
Flow of control checks |
- Type Checking
Ensures that data types are used in a way consistent with their definitions.
- Label Checking
A program should contain valid label references.
- Flow of Control Check
Keeps a check that control structures are used in a proper manner (example: no break statement outside a loop).
- Scope resolution
- Array-bound checking
- Type mismatch
- Undeclared variable
- Reserved identifier misuse.
- Multiple declaration of variable in a scope.
- Accessing an out of scope variable.
- Actual and formal parameter mismatch.
Question 60 |
Parsing of the program | |
Code generation | |
Lexical analysis of the program | |
Data flow diagrams |
- Lexical Analysis is the first phase of the compiler also known as a scanner. It converts the High level input program into a sequence of Tokens.
- A lexical token is a sequence of characters that can be treated as a unit in the grammar of the programming languages.
- Type token (id, number, real, . . . )
- Punctuation tokens (IF, void, return, . . . )
- Alphabetic tokens (keywords)
- Keywords; Examples: for, while, if, etc.
- Identifiers; Examples: variable names, function names, etc.
- Operators; Examples: '+', '++', '-', etc.
- Separators; Examples: ',', ';', etc.
Question 61 |
List - I |
List - II |
A. A part of a compiler that is responsible for recognizing syntax |
1. Optimizer |
B. A part of a compiler that takes as input a stream of characters and produces as output a stream of words along with their associated syntactic categories. |
2. Semantic Analysis |
C. A part of a compiler that understands the meanings of variable names and other symbols and checks that they are used in ways consistent with their definitions. |
3. Parser |
D. An IR-to-IR transformer that tries to improve the IR program in some way (Intermediate Representation). |
4. Scanner |
(a)-(iii), (b)-(iv), (c)-(ii), (d)-(i) | |
(a)-(iv), (b)-(iii), (c)-(ii), (d)-(i) | |
(a)-(ii), (b)-(iv), (c)-(i), (d)-(iii) | |
(a)-(ii), (b)-(iv), (c)-(iii), (d)-(i) |
- Parser is a part of compiler and responsible for syntax recognition.
- The scanner (lexical analyzer) is the part of the compiler that takes a stream of characters as input and produces a stream of words along with their associated syntactic categories.
- In semantic analysis, the meanings of variable names and other symbols are checked for consistency with their definitions.
- An optimizer is an IR-to-IR transformer that tries to improve the IR program in some way (IR = Intermediate Representation).
Question 62 |
Tolerance | |
Scalability | |
Capability | |
Loading |
Question 63 |
I. Lexical Analysis is specified by context-free grammars and implemented by pushdown automata.
II. Syntax Analysis is specified by regular expressions and implemented by finite-state machine.
Which of the above statement(s) is/are correct ?
Only I | |
Only II | |
Both I and II | |
Neither I nor II |
Statement I is false: lexical analysis is specified by regular expressions/grammars and implemented by finite automata.
Statement II is false: syntax analysis is specified by context-free grammars and implemented by pushdown automata.
Question 64 |
Replace P + P by 2 * P or Replace 3 + 4 by 7. | |
Replace P * 32 by P << 5 | |
Replace P * 0 by 0 | |
Replace (P << 4) – P by P * 15 |
For Example : Multiplication is more expensive operation than addition.
Shift operation is less expensive operation than Multiplication/Division.
In option (B), a multiplication operation is replaced with a shift operation. This reduces operator strength because a shift is less expensive than a multiplication.
Here, most suitable option is 2) Replace P × 32 by P ≪ 5
Question 65 |
I. Resolve external references among separately compiled program units.
II. Translate assembly language to machine code.
III. Relocate code and data relative to the beginning of the program.
IV. Enforce access-control restrictions on system libraries
I and II | |
I and III | |
II and III | |
I and IV |

Principles tasks of the linker are:
- Resolve external references among separately compiled program units.
- Relocate code and data relative to the beginning of the program.
- Enforcing access-control restrictions on system libraries is the loader's task, not the linker's.
Question 66 |
Loop unrolling | |
Strength reduction | |
Loop concatenation | |
Loop jamming |
Example:
Initial code:

for(int i=0; i<5; i++)
    a = i + 5;
for(int i=0; i<5; i++)
    b = i + 10;

Using loop jamming, the above code is converted into the optimized code below:

for(int i=0; i<5; i++)
{
    a = i + 5;
    b = i + 10;
}
Question 67 |
I. Reduction in overall program execution time.
II. Reduction in overall space consumption in memory.
III. Reduction in overall space consumption on disk.
IV. Reduction in the cost of software updates.
I and IV | |
I only | |
II and III | |
IV only |
I. Reduction in overall program execution time - True
II. Reduction in overall space consumption in memory - False
III. Reduction in overall space consumption on disk- False
IV. Reduction in the cost of software updates- False
Question 68 |
The grammar S → a Sb |bSa|SS|∈, where S is the only non-terminal symbol and ∈ is the null string, is ambiguous. | |
SLR is powerful than LALR. | |
An LL(1) parser is a top-down parser. | |
YACC tool is an LALR(1) parser generator. |
The grammar S → aSb | bSa | SS | ∈, where S is the only non-terminal symbol and ∈ is the null string, is ambiguous because we can derive two different parse trees for the string "aabb".
Consider the string "aabb"
Parse Tree 1 :


Option(B) : False
LALR is more powerful than SLR.
Option(C) : True
An LL(1) parser is a top-down parser.
Option(D) : True
YACC tool is an LALR(1) parser generator.
Question 69 |
That avoids tests at every iteration of the loop. | |
That improves performance by decreasing the number of instructions in a basic block. | |
That exchanges inner loops with outer loops | |
That reorders operations to allow multiple computations to happen in parallel |
- Loop unrolling is a loop transformation technique that helps to optimize the execution time of a program.
- Loop unrolling removes or reduces the number of loop iterations.
- Loop unrolling increases the program's speed by eliminating loop-control and loop-test instructions.
Program 1 without loop unrolling:
Program 1, without loop unrolling:

#include <stdio.h>
int main(void)
{
    for (int i = 0; i < 5; i++)
        printf("Hello\n"); // print Hello 5 times
    return 0;
}

Output :
- Hello
- Hello
- Hello
- Hello
- Hello
Program 2, with the loop unrolled:

#include <stdio.h>
int main(void)
{
    // unrolled the for loop in program 1
    printf("Hello\n");
    printf("Hello\n");
    printf("Hello\n");
    printf("Hello\n");
    printf("Hello\n");
    return 0;
}

Output :
- Hello
- Hello
- Hello
- Hello
- Hello
Question 70 |
Macro processor | |
Micro preprocessor | |
Macro preprocessor | |
Dynamic Linker |
- The translator which performs macro calls expansion is called Macro pre-processor .
- A macro processor is a program that copies a stream of text from one place to another, making a systematic set of replacements as it does so.
- Macro processors have been used for language expansion (defining new language constructs that can be expressed in terms of existing language components), for systematic text replacements that require decision making, and for text reformatting (e.g. conditional extraction of material from an HTML file).
- A Dynamic linker is the part of an operating system that loads and links the shared libraries.
- The C preprocessor is a macro preprocessor used by the compiler to transform your code before compilation. It is called a macro preprocessor because it allows us to define and expand macros.
Question 71 |
LALR parser is Bottom up parser | |
A parsing algorithm which performs a left to right scanning and a right most deviation is RL (1) | |
LR parser is Bottom up parser. | |
In LL(1), the 1 indicates that there is a one-symbol look-ahead. |
option(A) : True
LALR parser is a Bottom Up parser.
option(B) : False
We don't have any RL(1) parsing. We have LR(1) parsing algorithm. Incase of LR(1) parsing L stands for left-to-right scanning of the input string and R signifies that we will get a Rightmost Derivation.
option(C) : True
LR parser is Bottom Up parser.
option(D) : True
In LL(1), the 1 indicates that there is a one-symbol look-ahead.
Question 72 |
Syntax Analysis | |
Lexical Analysis | |
Code Generation | |
Code Optimization |

Question 73 |
Build the symbol table | |
Construct the intermediate code | |
Separate mnemonic opcode and operand fields | |
ALL |

Question 74 |
Application programmer | |
System programmer | |
Operating system | |
All of the above |
- Code optimization is responsibility of System Programmers
- Optimization can be performed by automatic optimizers, or programmers.
- An optimizer is either a specialized software tool or a built-in unit of a compiler (the so-called optimizing compiler).
- Modern processors can also optimize the execution order of code instructions.
Question 75 |
Build the symbol table | |
Construct the intermediate code | |
Separate mnemonic opcode and operand fields | |
ALL |

Question 76 |
Label and value | |
Only value | |
Mnemonic | |
Memory Location |
- In two pass assembler the symbol table is used to store memory locations
- The symbol table contains information to locate and relocate symbolic definitions and references. The assembler creates the symbol table section for the object file. It makes an entry in the symbol table for each symbol that is defined or referenced in the input file and is needed during linking.
- The symbol table is then used by the link editor during relocation. The symbol table's section header contains the symbol table index for the first non-local symbol.
- Instead of using an absolute memory address it uses a symbolic address.
Question 77 |
Leftmost derivation | |
Rightmost derivation | |
Rightmost derivation in reverse | |
Leftmost derivation in reverse |
- In this Parsing technique we expand the start symbol to the whole program.
- In top down parsing, we just start with the start symbol and compare the right side of the different productions against the first piece of input to see which of the productions should be used.
- A top down parser is called LL parser because it parses the input from Left to right, and constructs a Leftmost derivation of the sentence.
- Recursive Descent and LL parsers are the Top-Down parsers.
- A bottom-up parser is an LR parser: it scans the input Left to right and produces a Rightmost derivation in reverse order (hence "LR").
Question 78 |
Loader | |
Linker | |
Editor | |
Assembler |
- A general-purpose macro processor or general purpose preprocessor is a macro processor that is not tied to or integrated with a particular language or piece of software.
- A macro processor is a program that copies a stream of text from one place to another, making a systematic set of replacements as it does so.
- Macro processors are often embedded in other programs, such as assemblers and compilers. Sometimes they are standalone programs that can be used to process any kind of text.
- Macro processors have been used for language expansion (defining new language constructs that can be expressed in terms of existing language components), for systematic text replacements that require decision making, and for text reformatting (e.g. conditional extraction of material from an HTML file).
Question 79 |
Build the symbol table | |
Construct the Intermediate code | |
Separate mnemonic opcode and operand field. | |
Above All |

Question 80 |
Lexical analysis is breaking the input into tokens | |
Syntax analysis is for parsing the phrase | |
Syntax analysis is for analyzing the semantic | |
None of these |
- The lexical analyzer is the first phase of a compiler.
- The lexical analyzer reads input characters from the source program and produces a sequence of tokens as output.
- Syntax analysis, or parsing, is the second phase of a compiler.
- Syntax Analysis is also called as parsing.
- Syntax analysis takes the token produced by lexical analysis as input and generates a parse tree (or syntax tree).
- In this phase, token arrangements are checked against the source code grammar, i.e. the parser checks if the expression made by the tokens is syntactically correct.
- Semantics means the meaning of the language. The syntax analysis phase only cares about the correct syntax of the program.
- Checking whether a program is logically valid is done by the semantic analysis phase.
Question 81 |
Compile time | |
Run time | |
Linking time | |
Pre-processing time. |
- Static binding happens at compile-time while dynamic binding happens at runtime.
- Binding of private, static and final methods always happen at compile time since these methods cannot be overridden.
- In Static binding All information needed to call a function is known at compile time.
- In dynamic binding All information need to call a function come to know at run time.
- When the method overriding is actually happening and the reference of parent type is assigned to the object of child class type then such binding is resolved during runtime.
- The binding of overloaded methods is static and the binding of overridden methods is dynamic.
Question 82 |
Checking type compatibility | |
Suppressing duplication of error message | |
Storage allocation | |
All of these |
A symbol table is used for all of these tasks:
(A) Checking type compatibility
(B) Suppressing duplication of error messages
(C) Storage allocation
Question 83 |
Optimizing | |
One pass compiler | |
Cross compiler | |
Multipass compiler |
- A compiler for a high-level language that runs on one machine and produces code for a different machine is called cross compiler.
- A one-pass compiler is a compiler that passes through the source code of each compilation unit only once. traverses the program only once.
- A multi-pass compiler (e.g., a two-pass compiler) is a type of compiler that processes the source code or abstract syntax tree of a program several times.
- One-pass compilers are faster than multi-pass compilers.
Question 84 |
0 | |
1 | |
2 | |
None of these |
Question 85 |
Loop optimization | |
Local optimization | |
Constant folding | |
Data flow analysis |
So, peephole optimization is a form of local optimization.
**Local optimization is the correct answer**
Question 86 |
Identifier table | |
Page map table | |
Literal table | |
Terminal table |
Option (B) is the correct answer: the page map table belongs to the operating system's memory management and is not used by the assembler.
Question 87 |
Check the validity of a source string | |
Determine the syntactic structure of a source string | |
Both (A) and (B) | |
None of these |
- Check the validity of a source string
- Determine the syntactic structure of a source string
- Syntax analysis takes the token produced by lexical analysis as input and generates a parse tree (or syntax tree).
- In this phase, token arrangements are checked against the source code grammar, i.e. the parser checks if the expression made by the tokens is syntactically correct.
Question 88 |
LALR | |
LR | |
SLR | |
LLR |
- YACC stands for Yet Another Compiler Compiler.
- YACC is an LALR(1) (Look-Ahead, Left-to-right scan, Rightmost derivation in reverse, with 1 lookahead token) parser generator.
- YACC builds up an LALR(1) parsing table.
Question 89 |
Syntax analysis | |
Lexical analysis | |
Interpretation analysis | |
Uniform symbol generation |
- The action of parsing the source program into the proper syntactic classes is known as lexical analysis
- lexical analyzer is the first phase of a compiler
- The output of a lexical analyzer is stream of tokens
- lexical analyzer read input characters from the source program and produce output as sequence of tokens
Question 90 |
Compile time | |
Run time | |
Linking time | |
Pre - processing time |
- Static binding happens at compile-time while dynamic binding happens at runtime.
- Binding of private, static and final methods always happen at compile time since these methods cannot be overridden.
- In Static binding All information needed to call a function is known at compile time.
- In dynamic binding All information need to call a function come to know at run time.
- When the method overriding is actually happening and the reference of parent type is assigned to the object of child class type then such binding is resolved during runtime.
- The binding of overloaded methods is static and the binding of overridden methods is dynamic.
Question 91 |
Shift step that advances in the input stream by K(K > 1) symbols and Reduce step that applies a completed grammar rule to some recent parse trees, joining them together as one tree with a new root symbol. | |
Shift step that advances in the input stream by one symbol and Reduce step that applies a completed grammar rule to some recent parse trees, joining them together as one tree with a new root symbol. | |
Shift step that advances in the input stream by K(K = 2) symbols and Reduce step that applies a completed grammar rule to form a single tree. | |
Shift step that does not advance in the input stream and Reduce step that applies a completed grammar rule to form a single tree. |
- Shift step that advances in the input stream by one symbol and
- Reduce step that applies a completed grammar rule to some recent parse trees, joining them together as one tree with a new root symbol.
Question 92 |
Canonical LR parser is LR (1) parser with single look ahead terminal | |
All LR(K) parsers with K > 1 can be transformed into LR(1) parsers. | |
Both (A) and (B) | |
None of the above |
- True : Canonical LR parser is LR (1) parser with single look ahead terminal
- True : All LR(K) parsers with K > 1 can be transformed into LR(1) parsers.
Question 93 |
Generated in first pass | |
Generated in second pass | |
Not generated at all | |
Generated and used only in second pass |
Following tasks performed in first pass
- It allocates space for the literals.
- It computes the total length of the program.
- It builds the symbol table for the symbols and their values.
Question 94 |
Allows to examine and modify the contents of registers | |
Does not allow execution of a segment of program | |
Allows to set breakpoints, execute a segment of program and display contents of register | |
All of the above |
- A debugger is a computer program used by programmers to test and debug a target program
- A debugger allows one to set breakpoints, execute a segment of the program, and display the contents of registers. One well-known debugger is GDB, used under Unix.
Refer : https://www.cs.cmu.edu/~gilpin/tutorial/
Question 95 |
I. First (α) ∩ First (β) ≠ { a } where a is some terminal symbol of the grammar.
II. First (α) ∩ First (β) ≠ λ
III. First (α) ∩ Follow (A)=ϕ if λ ∈ First (β)
I and II | |
I and III | |
II and III | |
I, II and III |

Question 96 |
Removing left recursion alone | |
Removing the grammar alone | |
Removing left recursion and factoring the grammar | |
None of the above |
Following are the steps to convert an arbitrary CFG to LL(1) grammar:
- Remove ambiguity (by taking care of precedence and associativity).
- Remove left recursion (some of this falls out when you remove ambiguity).
- Make the grammar deterministic by left factoring, where needed.
These steps convert the CFG into a form suitable for LL(1), which is a top-down parser.
Question 97 |
Shift reduce conflict only | |
Reduce reduce conflict only | |
Both shift reduce conflict and reduce reduce conflict | |
Shift handle and reduce handle conflicts |

Question 98 |
An Operator Grammar | |
Right Recursive | |
Left Recursive | |
Ambiguous |
- Free from ambiguity
- Free from left recursion
- Free from left factoring
Option (C) also satisfies the condition, but option (D) is the better answer here, because there is a standard procedure for removing left recursion, whereas ambiguity is not easy to remove.
Question 99 |
S ➝ A | B
A➝ a | c
B➝ b | c
Where {S,A,B} is the set of non-terminals, {a,b,c} is the set of terminals.
Which of the following statement(s) is/are correct ?
S1 : LR(1) can parse all strings that are generated using grammar G.
S2 : LL(1) can parse all strings that are generated using grammar G.
Both S 1 and S 2 | |
Only S 2 | |
Neither S1 nor S2 | |
Only S 1 |
First Leftmost Derivation
S → A
A → c
Second Leftmost Derivation
S → B
B → c
An Ambiguous grammar can neither be LL(1) nor LR(1)
Question 100 |
G1 : S → SbS | a
G2 : S → aB | ab, A → GAB | a, B → ABb | b
Which of the following option is correct ?
Only G1 is ambiguous | |
Only G2 is ambiguous | |
Both G1 and G2 are ambiguous | |
Both G1 and G2 are not ambiguous |
Both grammars G1 and G2 are ambiguous: for G1 we can generate more than one parse tree for the same string "ababa", and for G2 we can generate more than one parse tree for the string "ab". So both G1 and G2 are ambiguous grammars.

Question 101 |
Leftmost derivation in reverse | |
Right-most derivation in reverse | |
Left-most derivation | |
Right-most derivation |
Question 102 |
Reducing the range of values of input variables. | |
Code optimization using cheaper machine instructions. | |
Reducing efficiency of program. | |
None of the above |
- In compiler construction, strength reduction is a compiler optimization where expensive operations are replaced with equivalent but less expensive operations.
- Classic example of strength reduction converts "strong" multiplications inside a loop into "weaker" additions – something that frequently occurs in array addressing.
- Instead of multiplying by 2 we can shift left by 1 (and multiplying by 4 is a left shift by 2).
- Instead of dividing by 2 we can shift right by 1.
Question 103 |
E → E * F / F + E / F
F → F – F / id
Which of the following is true ?
* has higher precedence than + | |
– has higher precedence than * | |
+ and – have same precedence | |
+ has higher precedence than * |
- * and + have the lowest precedence because they appear at the top-most level of the grammar.
- * and + have equal precedence because they appear at the same level.
- '-' has higher precedence than '+' and '*' because it appears at the bottom-most level.
          E
        / | \
       E  *  F
       |    /|\
       F   F - F
       |   |   |
     id(6) id(7) id(8)
As we can see, '-' is evaluated first and '*' is evaluated afterwards.
Thus, '-' has higher precedence than '*'.
So correct answer is option(B)
Question 104 |
Remove left recursion alone | |
Factoring grammar alone | |
Both of the above | |
None of the above |
Following are the steps to convert an arbitrary CFG to LL(1) grammar:
- Remove ambiguity (by taking care of precedence and associativity).
- Remove left recursion (some of this falls out when you remove ambiguity).
- Make the grammar deterministic by left factoring, where needed.
These steps convert the CFG into a form suitable for LL(1), which is a top-down parser.
Question 105 |
LL(I) | |
Canonical LR | |
SLR | |
LALR |
LR > LALR > SLR

Question 106 |
S1: SLR uses follow information to guide reductions. In case of LR and LALR parsers, the lookaheads are associated with the items and they make use of the left context available to the parser.
S2: LR grammar is a larger subclass of context free grammar as compared to that SLR and LALR grammars.
Which of the following is true ?
S1 is not correct and S2 is not correct. | |
S1 is not correct and S2 is correct. | |
S1 is correct and S2 is not correct. | |
S1 is correct and S2 is correct. |
Question 107 |
Leftmost derivation | |
Leftmost derivation traced out in reverse | |
Rightmost derivation traced out in reverse | |
Rightmost derivation |
- Top-down parser - Leftmost derivation
- Bottom-Up parser - Reverse of rightmost derivation
Question 108 |
Paragraph by paragraph | |
Instruction by instruction | |
Line by line | |
None of the above |
Question 109 |
Program with wheels | |
Independent from its authors | |
Independent of platform | |
None of the above |
Question 110 |
S1 → AB | aaB
A → a | Aa
B → b
and the production rules of a grammar G2 as
S2 → aS2bS2 | bS2aS2 | λ
Which of the following is correct statement ?
G1 is ambiguous and G2 is not ambiguous. | |
G1 is ambiguous and G2 is ambiguous. | |
G1 is not ambiguous and G2 is ambiguous. | |
G1 is not ambiguous and G2 is not ambiguous. |
- A grammar is said to be ambiguous if there exists more than one leftmost derivation or more than one rightmost derivation or more than one parse tree for the given input string
- Both grammars G1 and G2 are ambiguous: for G1 we can generate more than one parse tree for the same string "aab", and for G2 we can generate more than one parse tree for strings such as "abab" and "baba". So both G1 and G2 are ambiguous grammars.

Question 111 |
S1 ⇒ Sc ⇒ SAc ⇒ SaSbc
Thus, SaSbc is a right sentential form, and its handle is
SaS | |
Bc | |
Sbc | |
aSb |

Question 112 |
S → β1 | β2, A → α1A | α2A | λ | |
S → β1| β2 | β1A | β2A, A → α1A | α2A | |
S → β1 | β2, A → α1A | α2A | |
S → β1 | β2 | β1A | β2A, A → α1A | α2A | λ |
The given grammar can generate: { β1, β2, β1α1, β1α2, β2α1, β2α2, β2α1α2, ... }
Option (A) can generate only { β1, β2 }.
In option (B), a string containing α1 or α2 can never terminate, since A has no λ-production.
Option (C) is also not correct because it can generate only { β1, β2 }.
Option (D) can generate all the strings generated by the given grammar.
Method 2 :
S→Sα1 ∣ Sα2 ∣ β1 ∣ β2,
Applying the left-recursion removal rule, we get
S → β1 ∣ β2 ∣ β1A ∣ β2A, A → α1A ∣ α2A ∣ λ
Therefore, Option(D) S→β1 ∣ β2 ∣ β1A ∣ β2A, A → α1A ∣ α2A ∣ λ
Question 113 |
S1 : First(α) = { t | α⇒ *tβ for some string β } ⇒ *tβ
S2 : Follow(X) = { a | S⇒ *αXaβ for some strings α and β }
Both statements S1 and S2 are incorrect. | |
S1 is incorrect and S2 is correct. | |
S1 is correct and S2 is incorrect. | |
Both statements S1 and S2 are correct. |
Question 114 |
Sub program | |
A complete program | |
A hardware portion | |
Relative coding |
- A macro is a sequence of instructions, assigned by a name and could be used anywhere in the program.
- In NASM, macros are defined with %macro and %endmacro directives.
- The macro begins with the %macro directive and ends with the %endmacro directive.
Macro definition syntax :
%macro macro_name number_of_params
    <macro body>
%endmacro
Where number_of_params specifies the number of parameters and macro_name specifies the name of the macro.
The macro is invoked by using the macro name along with the necessary parameters. When you need to use some sequence of instructions many times in a program, you can put those instructions in a macro and use it instead of writing the instructions all the time.
Question 115 |
Semantic analysis | |
Code generation | |
Syntax analysis | |
Code optimization |
- The grammar of a program is checked in the syntax analysis phase of the compiler.
- Type checking is done during the semantic analysis phase, while syntax analysis is used to generate parse trees.
- Example: int a = "world"; is syntactically valid but semantically wrong. Such logical errors are checked in semantic analysis.
Question 116 |
Hardware | |
Compiler | |
Registers | |
None of the above |
Question 117 |
(A + B) * C | |
A + * BC | |
A + B * C | |
A * C + B |
- All leaf nodes have to be terminals.
- All interior nodes have to be non-terminals.
- An in-order traversal of the parse tree gives the original input string.
- A parse tree always corresponds to an in-order traversal: visit the left subtree, the root, then the right subtree.
(A+B)∗C
Question 118 |
- – subtraction (highest precedence)
- * multiplication
- $ exponentiation (lowest precedence)
3 – 2 * 4 $ 2 * 3
– 61 | |
64 | |
512 | |
4096 |
- 3 – 2 * 4 $ 2 * 3
- (3 – 2) * 4 $ 2 * 3      ('-' has the highest precedence)
- ((3 – 2) * 4) $ (2 * 3)  ('*' is applied next, '$' last)
- (1 * 4) $ (2 * 3)
- 4 $ 6
- 4^6 = 4096
Question 119 |
A parser | |
Code optimizer | |
Code generator | |
Scanner |
- Lexical analysis or Scanner is the first phase of a compiler,
- It takes the modified source code from language preprocessors that are written in the form of sentences.
- A scanner or lexical analyzer reads the program from left to right, organize them into tokens and pass those tokens to the next level.
Question 120 |
Cross compilation | |
One pass compilation | |
Two pass compilation | |
None of the above |
- A compiler for a high-level language that runs on one machine and produces code for a different machine is called cross compiler.
- A one-pass compiler is a compiler that passes through the source code of each compilation unit only once. traverses the program only once.
- A multi-pass compiler (e.g., a two-pass compiler) is a type of compiler that processes the source code or abstract syntax tree of a program several times.
- One-pass compilers are faster than multi-pass compilers.
Question 121 |
Re-allocation | |
Allocation | |
Linking | |
Loading |
- An absolute loader is a kind of loader that accepts object files assembled with absolute addresses and places them at the specified locations in memory.
- This type of loader is called absolute loader because no relocating information is needed, rather it is obtained from the programmer or assembler.
Question 122 |
A → a A b, A → b A b, A → a , A →b | |
A → a A a, A → a A b, A → c | |
A → A + A, A → a | |
Both (A) and (B) |
Question 123 |
S → x x W [ print “1”]
S → y [print “2”]
W → Sz [print “3”]
what is the translation of “x x x x y z z” ?
1 1 2 3 1 | |
1 1 2 3 3 | |
2 3 1 3 1 | |
2 3 3 2 1 |
W → Sz(print “3”)
S → xxW(print “1”)
W → Sz(print “3”)
S → y(print “2”)
Answer is option(C) 23131
Method : 2

Answer is option(C) 23131
Question 124 |
LL grammar | |
Ambiguous grammar | |
LR grammar | |
None of the above |
Example :
X -> Y | Z i attribute is there in X,Y and Z
X.i = f( Y.i , Z.i ) means parent is taking attribute from there children
Bottom Up Parsers
LR( 0 ) , SLR( 1 ) , LR( 1 ) , LALR( 1 ) and Operator Precedence
Top Down Parsers
Non Recursive descent ( LL( 1 ) ) and Recursive descent
LR Parsers which are generated from LR grammar are also coming under bottom up parsing fashion Therefore option(C) is correct.
Question 125 |
(i) EQU
(ii) ORIGIN
(iii) START
(iv) END
(ii), (iii) and (iv) | |
(i), (iii) and (iv) | |
(iii) and (iv) | |
(i), (ii), (iii) and (iv) |
Basic Assembly directives
EQU →Equate
ORG →Origin
START → Start (Define the start of the first control section in a program)
END → End (End of assembly language program and "starting address" for execution)
Question 126 |
Dependent on the operating system | |
Dependent on the compiler | |
Dependent on the hardware | |
Independent of the hardware |
- Assembly languages are hardware dependent; there is a different one for each CPU series.
- Assembly language (or assembler language) is a low-level computer language. It is processor-dependent, since the assembler translates the mnemonics directly into the commands a particular CPU understands, on a one-to-one basis.
- These Assembler mnemonics are the instruction set for that processor
Question 127 |
Tokens are identified. | |
Set of instructions are identified. | |
The syntactic groups are identified. | |
Machine instructions are identified |
Question 128 |
Removal of all labels. | |
Removal of values that never get used. | |
Removal of function which are not involved. | |
Removal of a module after its use. |
- Dead code elimination is a compiler optimization to remove code which does not affect the program results.
- Removing such code has several benefits: it shrinks program size, an important consideration in some contexts, and it allows the running program to avoid executing irrelevant operations, which reduces its running time.
- It can also enable further optimizations by simplifying program structure. Dead code includes code that can never be executed (unreachable code), and code that only affects dead variables (written to, but never read again), that is, irrelevant to the program.
Question 129 |
It shows attribute values at each node. | |
There are no inherited attributes. | |
It has synthesized nodes as terminal nodes. | |
Every non-terminal node is an inherited attribute. |
Question 130 |
User defined address symbols are correlated with their binary equivalent | |
The syntax of the statement is checked and mistakes, if any, are listed | |
Object program is generated | |
Semantic of the source program is elucidated. |
Question 131 |
One micro operation | |
One macro operation | |
One instruction to be completed in a single pulse | |
One machine code instruction |
Question 132 |
Start address of the available main memory | |
Total size of the program | |
Actual address of the data location | |
Absolute values of the operands used |
- The absolute loader is a kind of loader that accepts object files in which the load addresses have already been fixed, and places them at those specified locations in memory.
- This type of loader is called absolute because no relocation information is needed; the addresses are supplied by the programmer or the assembler.
- The starting address of every module is known to the programmer and stored in the object file, so the loader's task is very simple: place the executable form of the machine instructions at the locations mentioned in the object file.
- In this scheme, the programmer or assembler must have knowledge of memory management. The programmer should take care of two things:
- Specifying the starting address of each module. If a module is modified, its length may change, which shifts the starting addresses of the modules that immediately follow; it is then the programmer's duty to make the necessary changes to the starting addresses of those modules.
- When branching from one segment to another, the programmer must know the absolute starting address of the target module so that this address can be specified in the corresponding JMP instruction.
Question 133 |
Next tokens are predicted. | |
Length of the parse tree is predicted beforehand | |
Lowest node in the parse tree is predicted. | |
Next lower level of the parse tree is predicted. |
- Predictive parsers
- A predictive parser is a recursive-descent parser that can predict which production to use to replace the input string.
- It predicts the production rule to apply using lookahead tokens.
- Backtracking parsers, by contrast, try different productions and back up when a parse fails.
Question 134 |
Code optimization obtained by the use of cheaper machine instructions | |
Reduction in accuracy of the output | |
Reduction in the range of values of input variables | |
Reduction in efficiency of the program |
Other Examples of strength reduction include:
- Replacing a multiplication within a loop with an addition
- Replacing an exponentiation within a loop with a multiplication
Exponentiation is replaced by multiplication, and multiplication is in turn replaced by addition.
The code below uses a multiplication operator inside the loop:
a = 10;
for (j = 0; j < X; j++) {
    Z[j] = a * j;
}
It can be replaced by the following code, in which the multiplication is replaced with an addition:
a = 10;
k = 0;
for (j = 0; j < X; j++) {
    Z[j] = k;
    k = k + a;
}
Question 135 |
Top - down parsing | |
Recursive - descent parsing | |
Predicative | |
Syntax tree |
- Predictive parser is a recursive descent parser, which has the capability to predict which production is to be used to replace the input string.
- The predictive parser does not suffer from backtracking.
- To accomplish its tasks, the predictive parser uses a look-ahead pointer, which points to the next input symbols.
- To make the parser back-tracking free, the predictive parser puts some constraints on the grammar and accepts only a class of grammar known as LL(k) grammar.
Question 136 |
Rightmost derivation. | |
Rightmost derivation, in reverse. | |
Leftmost derivation. | |
Leftmost derivation in reverse. |
Question 137 |
Checking type compatibility | |
Suppressing duplication of error message | |
Storage allocation | |
All of the above |
The symbol table is used for all of the above: (A) checking type compatibility, (B) suppressing duplication of error messages, and (C) storage allocation.
Question 138 |
Use dynamic scope rules | |
Support dynamic data structures | |
Support recursion | |
Support recursion and dynamic data structures |
Question 139 |
0 | |
1 | |
2 | |
None of these |
- LR parsers, also known as LR(k) parsers, where L stands for left-to-right scanning of the input stream, R stands for constructing a rightmost derivation in reverse, and k denotes the number of lookahead symbols used to make decisions.
- In LR(k), k is the number of input symbols of lookahead used in parsing decisions. The main cases are k = 0 and k = 1, and we mostly consider k <= 1. When k is omitted, it is taken to be 1.
- LR parsers can recognize virtually all programming-language constructs that can be described by a context-free grammar.