Compiler Design | Subject Wise

Question 1
Which of the following comparisons between static and dynamic type checking is incorrect?
A
Dynamic type checking slows down the execution
B
Dynamic type checking offers more flexibility to the programmers
C
In contrast to Static type checking, dynamic type checking may cause failure in runtime due to type errors
D
Unlike static type checking, dynamic type checking is done during compilation
Question 1 Explanation: 

Statically typed languages

A language is statically typed if the type of a variable is known at compile time. For some languages this means that you as the programmer must specify what type each variable is (e.g. Java, C, C++); other languages offer some form of type inference, the capability of the type system to deduce the type of a variable (e.g. OCaml, Haskell, Scala, Kotlin).

The main advantage here is that all kinds of checking can be done by the compiler, and therefore a lot of trivial bugs are caught at a very early stage.

Examples: C, C++, Java, Rust, Go, Scala

Dynamically typed languages

A language is dynamically typed if the type is associated with run-time values, and not named variables/fields/etc. This means that you as a programmer can write a little quicker because you do not have to specify types every time (unless using a statically-typed language with type inference).

Examples: Perl, Ruby, Python, PHP, JavaScript

Most scripting languages have this feature as there is no compiler to do static type-checking anyway, but you may find yourself searching for a bug that is due to the interpreter misinterpreting the type of a variable. Luckily, scripts tend to be small so bugs have not so many places to hide.

Most dynamically typed languages do allow you to provide type information, but do not require it. One language that is currently being developed, Rascal, takes a hybrid approach allowing dynamic typing within functions but enforcing static typing for the function signature.
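To make the contrast concrete, here is a minimal, hypothetical C sketch of the kind of trivial bug a statically typed compiler catches at compile time (the variable name is made up):

#include <stdio.h>

int main(void)
{
    int count = 10;

    /* In a statically typed language the compiler knows count is an int,
       so a line like the following is rejected (or at least warned about)
       at compile time, long before the program runs:
           count = "ten";
       In a dynamically typed language the equivalent assignment would be
       accepted, and any type error would surface only at run time. */

    printf("%d\n", count);   /* prints: 10 */
    return 0;
}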
Question 2
Incremental Compiler is a compiler
A
Which is written in a language that is different from the source language
B
Compiles the whole source code to generate object code afresh
C
Compiles only those portion of source code that has been modified.
D
That runs on one machine but produces object code for another machine
Question 2 Explanation: 
  • An incremental compiler is a compiler that compiles only those portions of the source code that have been modified.
  • It can also compile statements newly added to a program without recompiling the whole program.
  • An incremental compiler is a kind of incremental computation applied to the field of compilation. Whereas ordinary compilers make so-called clean builds, that is, (re)build all program modules, an incremental compiler recompiles only those portions of a program that have been modified.
Refer : https://en.wikipedia.org/wiki/Incremental_compiler
Question 3
DU-chains(Definition-Use) in compiler design
A
Consist of a definition of a variable and all its uses, reachable from that definition
B
Are created using a form of static code analysis
C
Are prerequisite for many compiler optimization including constant propagation and common sub-expression elimination
D
All of the above
Question 3 Explanation: 
Use-Definition Chain (UD Chain) is a data structure that consists of a use, U, of a variable, and all the definitions, D, of that variable that can reach that use without any other intervening definitions. A UD Chain generally means the assignment of some value to a variable.

A counterpart of a UD Chain is a Definition-Use Chain (DU Chain), which consists of a definition, D, of a variable and all the uses, U, reachable from that definition without any other intervening definitions.

Both UD and DU chains are created by using a form of static code analysis known as data flow analysis. Knowing the use-def and def-use chains for a program or subprogram is a prerequisite for many compiler optimizations, including constant propagation and common subexpression elimination.

Refer : https://en.wikipedia.org/wiki/Use-define_chain
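As a small illustration (the variable names are made up), the comments below mark one definition of x and all the uses reachable from it, which together form its DU-chain:

#include <stdio.h>

int main(void)
{
    int x = 5;          /* D1: definition of x                  */
    int a = x + 1;      /* U1: use of x reached by D1           */
    int b = x * 2;      /* U2: use of x reached by D1           */
    x = a + b;          /* D2: new definition of x (kills D1)   */
    printf("%d\n", x);  /* U3: use of x reached only by D2      */
    return 0;
}
/* DU-chain of D1 = {U1, U2};  DU-chain of D2 = {U3}. */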
Question 4
Which of the following comment about peephole optimization is true?
A
It is applied to a small part of the code and applied repeatedly
B
It can be used to optimize intermediate code
C
It can be applied to a portion of the code that is not contiguous
D
It is applied in the symbol table to optimize the memory requirements.
Question 4 Explanation: 
According to the Aho-Ullman book:
Peephole optimization is done by examining a sliding window of target instructions (called the peephole) and replacing instruction sequences within the peephole by a shorter or faster sequence.
It can be applied directly after intermediate code generation to improve the intermediate representation.
The code in the peephole need not be contiguous, although some implementations do require this.

So options (A), (B) and (C) are true; only option (D) is false.
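As a rough sketch of the idea (the "instructions" below are illustrative strings, not real assembly), a peephole pass slides a small window over the generated code and rewrites recognizable patterns, for example deleting a load that immediately follows a store of the same register to the same location:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Generated code before the peephole pass (illustrative strings). */
    const char *code[] = {
        "STORE R0, a",
        "LOAD R0, a",    /* redundant: R0 already holds the value of a */
        "ADD R0, R1",
        "STORE R0, b",
    };
    int n = sizeof code / sizeof code[0];

    /* Slide a two-instruction peephole over the code and drop the
       redundant load that immediately follows the matching store. */
    for (int i = 0; i < n; i++) {
        if (i + 1 < n &&
            strcmp(code[i],     "STORE R0, a") == 0 &&
            strcmp(code[i + 1], "LOAD R0, a")  == 0) {
            printf("%s\n", code[i]);  /* keep the store */
            i++;                      /* skip the redundant load */
            continue;
        }
        printf("%s\n", code[i]);
    }
    return 0;
}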
Question 5
Relative to the program translated by a compiler, the same program when interpreted runs
A
Faster
B
Slower
C
At the same speed
D
May be faster or slower
Question 5 Explanation: 
  • A compiler is a computer program that transforms code written in a high-level programming language into machine code, i.e. it translates human-readable code into a language the processor understands (binary 1s and 0s).
  • An interpreter is a computer program which converts each high-level program statement into machine code as the program runs. This includes source code, pre-compiled code, and scripts. Both compilers and interpreters do the same job of converting a higher-level programming language to machine code, but a compiler converts the code into machine code (creates an executable) before the program runs, whereas an interpreter converts code into machine code while the program is running.
  • A compiler transforms the whole program at once, before it runs, whereas an interpreter converts each high-level statement, one by one, during the run.
  • Compiled code runs faster, while interpreted code runs slower.
  • A compiler displays all errors after compilation; an interpreter displays the errors of each line one by one.
  • A compiler is based on the translation linking-loading model, whereas an interpreter is based on the interpretation method.
  • A compiler takes an entire program, whereas an interpreter takes a single line of code at a time.
Question 6
The output of a lexical analyzer is
A
A parse tree
B
Intermediate code
C
Machine code
D
A stream of tokens
Question 6 Explanation: 
The output of a lexical analyzer is a stream of tokens.
-The lexical analyzer is the first phase of a compiler.
-The lexical analyzer reads input characters from the source program and produces a sequence of tokens as output (a minimal sketch follows).
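The sketch below is a minimal, hypothetical scanner (the hard-coded input string and token handling are illustrative): it reads the characters of a source fragment left to right and emits the token stream.

#include <stdio.h>
#include <ctype.h>

int main(void)
{
    const char *src = "i = i + 1;";
    for (const char *p = src; *p != '\0'; ) {
        if (isspace((unsigned char)*p)) { p++; continue; }
        if (isalpha((unsigned char)*p)) {           /* identifier / keyword */
            printf("token: ");
            while (isalnum((unsigned char)*p)) putchar(*p++);
            putchar('\n');
        } else {                                    /* single-character token */
            printf("token: %c\n", *p++);
        }
    }
    return 0;
}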
Question 7
Which of the following class of statement usually produces no executable code when compiled?
A
Declaration
B
Assignment statements
C
Input and output statements
D
Structural statements
Question 7 Explanation: 
Compilers translate the C language into machine code, which can be executed by the CPU. It's often easy to see which bytes of machine code correspond to which statements in your C code. Debuggers use this correspondence when showing how your C code runs.

So the meaning of "a declaration is not an executable statement" is that no machine code corresponds to a declaration. This also means that:

  • The CPU does not spend any time "running" the declaration (though the compiler does spend time examining it).
  • A visual debugger will "skip" the declaration when running your program step by step.
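A tiny sketch of the distinction: the declaration line below typically produces no machine code of its own, whereas the assignment and the return do.

int main(void)
{
    int x;        /* declaration only: normally no executable code is emitted */
    x = 42;       /* assignment: the compiler emits code to store 42 into x   */
    return x;     /* emits code to load x and return it                       */
}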
Question 8
The access time of the symbol table will be logarithmic if it is implemented by
A
Linear list
B
Search tree
C
Hash table
D
Self organization list
Question 8 Explanation: 
If a compiler is to handle a small amount of data, then the symbol table can be implemented as an unordered list, which is easy to code but suitable only for small tables. A symbol table can be implemented in one of the following ways:
  1. Linear (sorted or unsorted) list
  2. Binary Search Tree
  3. Hash table
Among all, symbol tables are mostly implemented as hash tables, where the source code symbol itself is treated as a key for the hash function and the return value is the information about the symbol.

The symbol table provides the following operations:
Insert ()
lookup()

Implementation of Symbol table

Following are commonly used data structure for implementing symbol table

List
  • In this method, an array is used to store names and the associated information.
  • A pointer "available" is maintained at the end of all stored records, and new names are added in the order in which they arrive.
  • To search for a name, we scan from the beginning of the list up to the "available" pointer; if the name is not found, we report the error "use of undeclared name".
  • While inserting a new name, we must ensure that it is not already present; otherwise the error "multiply defined name" is reported.
  • Insertion is fast, O(1), but lookup is slow for large tables, O(n) on average. The advantage is that it takes a minimum amount of space.
Linked List :
  • This implementation is using linked list. A link field is added to each record.
  • Searching of names is done in order pointed by link of link field.
  • A pointer “First” is maintained to point to first record of symbol table.
  • Insertion is fast O(1), but lookup is slow for large tables – O(n) on average
Hash Table :
  • In the hashing scheme two tables are maintained – a hash table and a symbol table – and this is the most commonly used method to implement symbol tables.
  • The hash table is an array with index range 0 to tablesize – 1. Its entries are pointers to names in the symbol table.
  • To search for a name we use a hash function that maps it to an integer between 0 and tablesize – 1.
  • Insertion and lookup can be made very fast – O(1) on average.
  • The advantage is that quick search is possible; the disadvantage is that hashing is more complicated to implement.
Binary Search Tree :
  • Another approach to implementing the symbol table is to use a binary search tree, i.e. we add two link fields, left and right child, to each record.
  • All names are inserted as descendants of the root node, always maintaining the binary search tree property.
  • Insertion and lookup are O(log₂ n) on average (see the sketch below).
Refer : http://www.cse.aucegypt.edu/~rafea/csce447/slides/table.pdf
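Since the answer points at a search tree, here is a minimal, hypothetical sketch of a symbol table kept as an (unbalanced) binary search tree in C; the name and value fields are illustrative, and insert and lookup take O(log n) on average.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct sym {
    char name[32];               /* symbol name (kept short for the sketch) */
    int  value;                  /* illustrative attribute                  */
    struct sym *left, *right;
};

static struct sym *insert(struct sym *root, const char *name, int value)
{
    if (root == NULL) {
        struct sym *n = malloc(sizeof *n);
        if (n == NULL) { perror("malloc"); exit(1); }
        strcpy(n->name, name);
        n->value = value;
        n->left = n->right = NULL;
        return n;
    }
    int cmp = strcmp(name, root->name);
    if (cmp < 0)      root->left  = insert(root->left,  name, value);
    else if (cmp > 0) root->right = insert(root->right, name, value);
    return root;                  /* duplicate name: keep the first entry */
}

static struct sym *lookup(struct sym *root, const char *name)
{
    while (root != NULL) {
        int cmp = strcmp(name, root->name);
        if (cmp == 0) return root;
        root = (cmp < 0) ? root->left : root->right;
    }
    return NULL;
}

int main(void)
{
    struct sym *table = NULL;
    table = insert(table, "count", 1);
    table = insert(table, "limit", 2);
    table = insert(table, "i",     3);

    struct sym *s = lookup(table, "limit");
    if (s != NULL)
        printf("%s -> %d\n", s->name, s->value);   /* prints: limit -> 2 */
    return 0;
}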
Question 9
Recursive descent parsing is an example of
A
Top-down parsers
B
Bottom-up parsers
C
Predictive parsers
D
None of the above
Question 9 Explanation: 
There are generally two types of Parsers:

Top-Down Parsers:
In this Parsing technique we expand the start symbol to the whole program.
Recursive Descent and LL parsers are the Top-Down parsers.

Bottom-Up Parsers:
In this Parsing technique we reduce the whole program to start symbol.
Operator Precedence Parser, LR(0) Parser, SLR Parser, LALR Parser and CLR Parser are the Bottom-Up parsers.

Recursive descent
Recursive descent is a top-down parsing technique that constructs the parse tree from the top and the input is read from left to right. It uses procedures for every terminal and non-terminal entity. This parsing technique recursively parses the input to make a parse tree, which may or may not require back-tracking. But the grammar associated with it (if not left factored) cannot avoid back-tracking. A form of recursive-descent parsing that does not require any back-tracking is known as predictive parsing.
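Below is a minimal, hypothetical recursive-descent sketch in C for the toy grammar E → T { '+' T }, T → digit (not a grammar taken from any question here); there is one procedure per non-terminal and the input is read from left to right.

#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>

static const char *in;                       /* current position in the input */

static void error(void) { printf("syntax error\n"); exit(1); }

static void T(void)
{
    if (isdigit((unsigned char)*in)) in++;   /* T -> digit */
    else error();
}

static void E(void)
{
    T();
    while (*in == '+') {                     /* E -> T { '+' T } */
        in++;
        T();
    }
}

int main(void)
{
    in = "1+2+3";
    E();
    if (*in == '\0') printf("accepted\n");
    else             error();
    return 0;
}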
Question 10
A top-down parser generates
A
Rightmost Derivation
B
Rightmost derivation in reverse
C
Leftmost derivation
D
Leftmost derivation in reverse
Question 10 Explanation: 
Top-Down Parsers:
  • In this Parsing technique we expand the start symbol to the whole program.
  • In top down parsing, we just start with the start symbol and compare the right side of the different productions against the first piece of input to see which of the productions should be used.
  • A top down parser is called LL parser because it parses the input from Left to right, and constructs a Leftmost derivation of the sentence.
  • Recursive Descent and LL parsers are the Top-Down parsers.
  • A bottom up parser is LR parser with Left to Right parsing performing Right Most Derivation in reverse order (LR)
Question 11
Peephole optimization is a form of
A
Loop optimization
B
Local optimization
C
Constant folding
D
Data flow analysis
Question 11 Explanation: 
Peephole optimization is a form of optimization that is performed on a small segment of generated code.
So, peephole optimization is a form of local optimization.
Question 12
Substitution of values for names (whose values are constants) is done in
A
Local optimization
B
Loop optimization
C
Constant folding
D
Strength reduction
Question 12 Explanation: 
Constant folding

Constant folding is the process of recognizing and evaluating constant expressions at compile time rather than computing them at runtime. Terms in constant expressions are typically simple literals, such as the integer literal 2, but they may also be variables whose values are known at compile time. Consider the statement:
i = 320 * 200 * 32;

Most compilers would not actually generate two multiply instructions and a store for this statement. Instead, they identify constructs such as these and substitute the computed values at compile time (in this case, 2,048,000).

Constant folding can make use of arithmetic identities. If x is numeric, the value of 0 * x is zero even if the compiler does not know the value of x (note that this is not valid for IEEE floats since x could be Infinity or NotANumber. Still, some languages that favor performance such as GLSL allow this for constants, which can occasionally cause bugs).

Constant folding may apply to more than just numbers. Concatenation of string literals and constant strings can be constant folded. Code such as "abc" + "def" may be replaced with "abcdef".

Simple Example:

In the code fragment below, the expression (3 + 5) can be evaluated at compile time and replaced with the constant 8.

int f (void)
{
return 3 + 5;
}
Below is the code fragment after constant folding.
int f (void)
{
return 8;
}
Question 13
Which one of the following is correct about the statements are given below?
I.  All function calls are resolved at compile time in C language
II. All function calls are resolved at compile time in C++ language
A
Only II is correct
B
Both I and II are correct
C
Only I is correct
D
Both I and II are incorrect
Question 13 Explanation: 
I. All function calls are resolved at compile time in C language -False
II. All function calls are resolved at compile time in C++ language -False

In both C and C++, a call whose target address is not known at compile time (for example, a call through a function pointer in C, or a virtual function call in C++) can be resolved only at run time.
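A small C sketch of why statement I is false (the function names are made up): the target of the call through fp depends on run-time input, so the compiler cannot resolve it; in C++ the same situation arises with virtual function calls.

#include <stdio.h>

static int twice(int x)  { return 2 * x; }
static int thrice(int x) { return 3 * x; }

int main(void)
{
    int pick = 0;
    if (scanf("%d", &pick) != 1) pick = 0;

    /* Which function fp points to is decided only at run time. */
    int (*fp)(int) = (pick == 0) ? twice : thrice;
    printf("%d\n", fp(10));    /* call target known only at run time */
    return 0;
}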
Question 14
A simple two-pass assembler does which of the following in the first pass:
A
Checks to see if the instructions are legal in the current assembly mode
B
It allocates space for the literals.
C
It builds the symbol table for the symbols and their values.
D
All of these
Question 14 Explanation: 
1st pass: scan label definitions and assign addresses
2nd pass: actual translation (object code)

Pass 1 : define symbols (assign addresses)
  • Assign addresses to all statements in the program
  • Save the values assigned to all labels for use in Pass 2
  • Process some assembler directives
  • Table creation , Adding literals to literal table
  • Allocate space for literals
  • Compute total length of a program
Pass 2 : assemble instructions and generate object program
  • Assemble instructions
  • Generate data values defined by BYTE, WORD, etc.
  • Process the assembler directives not done in Pass 1
  • Write the object program and the assembly listing
Finally, assembler must write the generated object code to some output device
- Called object program
- Will be later loaded into memory for execution
Question 15
In compiler terminology reduction in strength means
A
Replacing run time computation by compile time computation
B
Removing loop invariant computation
C
Removing common subexpressions
D
Replacing a costly operation by a relatively cheaper one
Question 15 Explanation: 
In compiler construction, strength reduction is a compiler optimization where expensive operations are replaced with equivalent but less expensive operations. The classic example of strength reduction converts "strong" multiplications inside a loop into "weaker" additions – something that frequently occurs in array addressing.

Other Examples of strength reduction include:
  • Replacing a multiplication within a loop with an addition
  • Replacing an exponentiation within a loop with a multiplication
Let us take an example:
Exponentiation is replaced by multiplication, and multiplication is in turn replaced by addition.
The below code is having multiplication operator:
a = 10;
for (j=0; j < X; j++)
{
 Z[j] = a * j;
}
This code can be replaced by the following code by replacing the multiplication with addition.
a = 10;
k = 0;
for (j=0; j < X; j++)
{
 Z[j] = k;
 k = k + a;
}
Question 16
Which of the following statements about peephole optimization is False?
A
It is applied to a small part of the code
B
It can be used to optimize intermediate code
C
To get the best out of this, it has to be applied repeatedly
D
It can be applied to the portion of the code that is not contiguous
Question 16 Explanation: 
In peephole optimization, a small portion of code (which need not be contiguous) is replaced by another sequence that is expected to perform better.
This is often applied repeatedly. So, all the given options are true in this question.

Refer this link : http://www.cse.iitm.ac.in/~krishna/cs3300/pm-lecture3.pdf
Question 17
Which variable does not derive a terminal string in the grammar?
S → AB
A → a
B → b
B → C
A
A
B
B
C
C
D
S
Question 17 Explanation: 
C is a useless variable, as there is no production rule that replaces C with a terminal. Thus, C does not derive any terminal string.
Question 18
Shift reduce parsing belongs to a class of
A
Bottom up parsing
B
Top down parsing
C
Recursive parsing
D
Predictive parsing
Question 18 Explanation: 
Shift-reduce parsing is a bottom-up parsing technique: the parser shifts input symbols onto a stack and, when a handle appears on top of the stack, reduces it to a non-terminal, building the parse tree from the leaves up to the root.
Question 19
A bottom-up parser generates :
A
Leftmost derivation in reverse
B
Right-most derivation in reverse
C
Left-most derivation
D
Right-most derivation
Question 19 Explanation: 
Bottom-up parser generates the right-most derivation in reverse order. It constructs the parse tree from the input string to the root and tries to construct the right-most derivation of the specified string backward.
Question 20
Identify the correct nodes and edges in the given intermediate code:
(1) i=1
(2) t1=5*I
(3) t2=4*t1
(4) t3=t2
(5) a[t3]=0
(6) i=i+1;
(7) if i<15 goto(2)
A
33
B
44
C
43
D
34
Question 20 Explanation: 
The basic blocks (nodes) are:
  1. Start
  2. Statement 1
  3. Statements 2 to 7
  4. End
So there are 4 nodes. The edges are also 4:
  • An edge from Start to 1.
  • An edge from 1 to 2-7.
  • A self-loop on 2-7 (the backward branch of the conditional goto).
  • An edge from 2-7 to End.
Hence, option (B) 44 is the correct answer.
Question 21
________ is holding an entry for each terminal symbol and is acting as permanent database.
A
Variable Table
B
Terminal Table
C
Keyword Table
D
Identifier Table
Question 21 Explanation: 
The terminal table is a permanent database that has an entry for each terminal symbol, such as arithmetic operators, keywords, and punctuation characters like ';' and ','. Its fields include the name of the symbol.
Option (B) is the correct answer.
Question 22
Identify the total number of tokens in the given statements printf(“A%B=”, &i);
A
7
B
8
C
9
D
13
Question 22 Explanation: 
Total number of token = 8
  1. printf
  2. (
  3. "A%B="
  4. ,
  5. &
  6. i
  7. )
  8. ;
Question 23
Which of the following code replacements is an example of operator strength reduction?
A
Replace P^2 by P*P
B
Replace P*16 by P<< 4
C
Replace pow(P,3) by P*P*P
D
Replace (P <<5) -P by P*3
Question 23 Explanation: 
In option (B), the multiplication operation is replaced with a shift operation. This reduces the operator strength because a shift is less expensive than a multiplication.
Here, the most suitable option is (B): replace P * 16 by P << 4.
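A minimal check of the replacement in option (B): for an unsigned value P, P * 16 and P << 4 produce the same result, and the shift is the cheaper instruction on most targets.

#include <stdio.h>

int main(void)
{
    unsigned p = 7;
    /* Both expressions compute the same value; an optimizing compiler
       may itself perform this strength reduction. */
    printf("%u %u\n", p * 16, p << 4);   /* prints: 112 112 */
    return 0;
}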
Question 24
____ merges the bodies of two loops.
A
Loop rolling
B
Loop folding
C
Loop merge
D
Loop jamming
Question 24 Explanation: 
In loop jamming, the bodies of the two loops are merged together to form a single loop, provided that they do not make any references to each other. It reduces the loop overhead (test and increment) incurred by running several separate loops.

Example:
Initial Code:

for(int i=0; i<5; i++)
  a = i + 5;
for(int i=0; i<5; i++)
  b = i + 10;

Using Loop jamming above code converted into below Optimized code :
for(int i=0; i<5; i++)
{
  a = i + 5;
  b = i + 10;
}
Question 25
____ merges the bodies of two loops.
A
Loop rolling
B
Loop folding
C
Loop merge
D
Loop jamming
Question 25 Explanation: 
In loop jamming, the bodies of the two loops are merged together to form a single loop, provided that they do not make any references to each other. It reduces the loop overhead (test and increment) incurred by running several separate loops.

Example:
Initial Code:

for(int i=0; i<5; i++)
  a = i + 5;
for(int i=0; i<5; i++)
  b = i + 10;

Using Loop jamming above code converted into below Optimized code :
for(int i=0; i<5; i++)
{
  a = i + 5;
  b = i + 10;
}
Question 26
Which of the following can be accessed by transfer vector approach of linking?
A
External data segments
B
External subroutines
C
Data located in other procedure
D
All of these
Question 26 Explanation: 
External subroutines are routines that are created and maintained separately from the program that will be calling them.

Refer : http://faculty.cs.niu.edu/~byrnes/csci360/notes/360ext.htm
Question 27
YACC builds up
A
SLR parsing table
B
Canonical LR parsing table
C
LALR parsing table
D
None of these
Question 27 Explanation: 
  • YACC stands for Yet Another Compiler Compiler.
  • YACC is an LALR(1) ( LookAhead, Left-to-right, Rightmost derivation producer with 1 lookahead token) parser generator
  • YACC builds up the LALR(1) parsing table.
Question 28
Micro program is
A
The name of source program in micro computers
B
The set of instructions indicating the primitive operations in a system
C
Primitive form of macros used in assembly language programming
D
Program of very small size
Question 28 Explanation: 
Micro program is the set of instructions indicating the primitive operations in a system
Micro program is a set of microinstructions that defines the individual operations that a computer carries out in response to a machine-language instruction
Question 29
What is the maximum number of reduce moves that can be taken by a bottom up parser for a grammar with no epsilon and unit production (i.e., of type A →  ∈ and A →  a) to parse a string with n tokens?
A
N/2
B
N-1
C
2n-1
D
2n
Question 29 Explanation: 
Let the grammar be:
A → BC
B → aa
C → bb
Now let the string be w = aabb. The bottom-up parser's reductions, shown here as a derivation in reverse, are:
A  → BC   (reduction 3)
   → aaC  (reduction 2)
   → aabb (reduction 1)
The string has n = 4 tokens and the parser makes 3 reduce moves, i.e. n - 1.

Note :
There is a mistake in the question: productions of the form A → a should not have been excluded, since A → a is not a unit production. The given key is n - 1.
Question 30
In a compiler, keywords of a language are recognized during
A
Parsing of the program
B
The code generation
C
The lexical analysis of the program
D
Dataflow analysis
Question 30 Explanation: 
  • Lexical Analysis is the first phase of the compiler also known as a scanner. It converts the High level input program into a sequence of Tokens.
  • A lexical token is a sequence of characters that can be treated as a unit in the grammar of the programming languages.
Example of tokens:
  • Type token (id, number, real, . . . )
  • Punctuation tokens (IF, void, return, . . . )
  • Alphabetic tokens (keywords)
Keywords; Examples-for, while, if etc.
Identifier; Examples-Variable name, function name, etc.
Operators; Examples '+', '++', '-' etc.
Separators; Examples ',' ';' etc.
Question 31
A system program that combines the separately compiled modules of a program into a form suitable for execution
A
Assembler
B
Linking loader
C
Cross compiler
D
Load and go
Question 31 Explanation: 
Linking loader : a loader which combines the functions of a relocating loader with the ability to combine a number of independently compiled program segments into an executable program.
Question 32
Which of the following statements is false?
A
There exist parsing algorithms for some programming languages whose complexities are less than O(n3)
B
A programming language which allows recursion can be implemented with static storage allocation
C
L-attributed definition can be evaluated in the framework of bottom-up parsing
D
Code improving transformation can be performed at both source language and intermediate code level.
Question 32 Explanation: 
  1. Statement (A) - True. Bottom-up and top-down parsers used in practice are linear-time parsers, i.e. they take O(n) time to parse the string; only one scan of the input is required.
    -LL(1) and LR parsers take O(n)
    -Parsing a general CFG is O(n³)
    -LL and LR parsers are linear-time parsers.
    Refer : https://www.csee.umbc.edu/courses/331/fall11/notes/04/04cparsing.ppt.pdf
  2. Statement (B) - False. A language which allows recursion cannot be implemented with static storage allocation; it needs dynamic (stack-based) memory allocation.
  3. Statement (C) - True. Every S-attributed SDT is also L-attributed (the converse need not be true).
    S-attributed SDTs follow the bottom-up approach; hence L-attributed definitions that are also S-attributed can be evaluated in the framework of bottom-up parsing.
    Refer : http://www.cs.sunysb.edu/~cse304/Fall09/Lectures/attributes-handout.pdf
  4. Statement (D) - True. Code-improving transformations can be performed at both the source language and the intermediate code level.
Question 33
In a single pass assembler, most of the forward references can be avoided by putting the restriction
A
On the number of strings/lifereacs
B
That the data segment must be defined after the code segment
C
On unconditional rump
D
That the data segment be defined before the code segment
Question 33 Explanation: 
A single pass assembler scans the program only once and creates the equivalent binary program. If the data segment is defined before the code segment, the symbols used in the code are already entered in the symbol table when the instructions are assembled, so most forward references are avoided.
Question 34
The grammar S ⟶ (S) | SS | ∈  is not suitable for predictive parsing because the grammar is
A
An Operator Grammar
B
Right Recursive
C
Left Recursive
D
Ambiguous
Question 34 Explanation: 
For predictive-parsing, grammar should be:
  • Free from ambiguity
  • Free from left recursion
  • Free from left factoring
Since the given grammar has infinitely many parse trees for the string 'ε', the grammar is ambiguous, and S → SS is also left-recursive, so it cannot have a predictive parser.

Option (C) is also satisfied, but option (D) is the better answer here, because there is a standard procedure for removing left recursion, whereas ambiguity is not easy to remove.
Question 35
Pseudo-instructions are
A
Assembler directives
B
Instructions in any program that have no corresponding machine code instruction
C
Instruction in any program whose presence or absence will not change the output for any input
D
None of these
Question 35 Explanation: 
Pseudo-instructions are special commands to the assembler about the positioning of the program, the address at which the program is presumed to be assembled, the name of the module, data declarations, the title and printing options for the program, defining and calling macros, macro looping and testing, and the end of the source code. Unless a machine instruction is issued, they do not generate executable code.

Refer : https://en.wikibooks.org/wiki/360_Assembly/Pseudo_Instructions
Question 36

Consider the following Grammar G :

S➝ A | B
A➝ a | c
B➝ b | c

Where {S,A,B} is the set of non-terminals, {a,b,c} is the set of terminals.

Which of the following statement(s) is/are correct ?

        S1 : LR(1) can parse all strings that are generated using grammar G.
        S2 : LL(1)  can parse all strings that are generated using grammar G.

Choose the correct answer from the code given below :

Code :
A
Both S1 and S2
B
Only S2
C
Neither S1 nor S2
D
Only S1
Question 36 Explanation: 
The given grammar is ambiguous as there are two possible leftmost derivations for string “c”.

First Leftmost Derivation
S → A
A → c

Second Leftmost Derivation
S → B
B → c

An Ambiguous grammar can neither be LL(1) nor LR(1)
Question 37
The identification of common sub-expression and replacement of run time computations by compile-time computations is:
A
Local optimization
B
Constant folding
C
Loop Optimization
D
Data flow analysis
Question 37 Explanation: 
Constant folding
Constant folding is the process of recognizing and evaluating constant expressions at compile time rather than computing them at runtime. Terms in constant expressions are typically simple literals, such as the integer literal 2, but they may also be variables whose values are known at compile time. Consider the statement:
i = 320 * 200 * 32;

Most compilers would not actually generate two multiply instructions and a store for this statement. Instead, they identify constructs such as these and substitute the computed values at compile time (in this case, 2,048,000).

Constant folding can make use of arithmetic identities. If x is numeric, the value of 0 * x is zero even if the compiler does not know the value of x (note that this is not valid for IEEE floats since x could be Infinity or NotANumber. Still, some languages that favor performance such as GLSL allow this for constants, which can occasionally cause bugs).

Constant folding may apply to more than just numbers. Concatenation of string literals and constant strings can be constant folded. Code such as "abc" + "def" may be replaced with "abcdef".

Simple Example:

In the code fragment below, the expression (3 + 5) can be evaluated at compile time and replaced with the constant 8.

int f (void)
{
return 3 + 5;
}
Below is the code fragment after constant folding.
int f (void)
{
return 8;
}

Loop Optimization
Loop Optimization is the process of increasing execution speed and reducing the overheads associated with loops.
Decreasing the number of instructions in an inner loop improves the running time of a program even if the amount of code outside that loop is increased.

Loop Optimization Techniques -
  1. Frequency Reduction (Code Motion)
  2. Loop Unrolling:
  3. Loop Jamming
 
Question 38
The structure or format of data is called:
A
Syntax
B
Struct
C
Semantic
D
None of the above
Question 38 Explanation: 
The structure and format of data are defined using syntax. Semantics defines how a particular pattern to be interpreted, and what action is to be taken based on that interpretation. In programming languages, syntax of the instructions plays a vital role in designing of the program.
Question 39
The graph that shows basic blocks and their successor relationship is called:
A
DAG
B
Control graph
C
Flow graph
D
Hamiltonian graph
Question 39 Explanation: 
Basic Blocks

Basic block is a set of statements that always executes in a sequence one after the other.

The characteristics of basic blocks are :
  • They do not contain any kind of jump statements in them.
  • There is no possibility of branching or getting halt in the middle.
  • All the statements execute in the same order they appear.
  • They do not lose the flow of control of the program.
Flow Graphs
  • A flow graph is a directed graph with flow control information added to the basic blocks.
  • The basic blocks serve as nodes of the flow graph.
  • There is a directed edge from block B1 to block B2 if control can pass from B1 to B2, i.e. if B1 ends with a jump to B2 or B2 appears immediately after B1 in the code (see the sketch below).
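As a small illustration (the block labels B1–B4 are just annotations), the comments below mark how the statements of a tiny loop group into basic blocks, which become the nodes of the flow graph, and which edges connect them:

#include <stdio.h>

int main(void)
{
    int a[10];

    int i = 0;                /* B1: entry block; edge B1 -> B2              */
    while (i < 10) {          /* B2: loop test; edges B2 -> B3 and B2 -> B4  */
        a[i] = 0;             /* B3: loop body; edge B3 -> B2 (back edge)    */
        i = i + 1;
    }
    printf("%d\n", a[9]);     /* B4: exit block                              */
    return 0;
}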
Question 40
A top down parser generates:
A
Leftmost derivation
B
Rightmost derivation
C
Leftmost derivation in reverse
D
Rightmost derivation in reverse
Question 40 Explanation: 
Top-down parser generates the left-most derivation. It constructs the parse tree from left to right and constructs the left-most derivation of the specified sentence.
Question 41
Syntax directed translation scheme is desirable because:
A
It is based on the syntax
B
It is easy to modify
C
Its description is independent of any implementation
D
All of these
Question 41 Explanation: 
Syntax Directed Definition
  • An SDD is a CFG with attributes and rules.
    Attributes are associated with grammar symbols.
    Rules are associated with productions.
  • An SDD specifies the semantics of productions.
    It does not enforce a specific way of achieving the semantics
Syntax Directed Translation
  • An SDT is done by attaching rules or program fragments to productions.
  • The order induced by the syntax analysis produces a translation of the input program
Role of SDT
  •  To associate actions with productions
  •  To associate attributes with non-terminals
  •  To create implicit or explicit syntax tree
  •  To perform semantic analysis
Question 42
The output of lexical analyzer is:
A
A set of regular expressions
B
Strings of character
C
Syntax tree
D
Set of tokens
Question 42 Explanation: 
The output of a lexical analyzer is a stream of tokens.
-The lexical analyzer is the first phase of a compiler.
-The lexical analyzer reads input characters from the source program and produces a sequence of tokens as output.
Question 43
Given the following expression grammar:
E→ E*F | F+E | F
F→ F-F | id

Which of the following is true?
A
* has higher precedence than +
B
– has higher precedence than *
C
+ and – have same precedence
D
+ has higher precedence than *
Question 43 Explanation: 
The operator that appears at a lower level in the grammar has higher precedence.
  • * and + have the least priority because they are produced at the topmost level.
  • * and + have equal precedence because they are produced at the same level.
  • '-' has higher precedence than '+' and '*' because it is produced at the bottommost level.
Let us take the example 6*7-8 and draw its parse tree according to the grammar:
      E
   /  |  \
  E   *   F
  |     / | \
  F    F  -  F
  |    |     |
id(6) id(7) id(8)

As we can see, first '-' is evaluated and then '*' is evaluated.
Thus, '-' has higher precedence than '*'.

So the correct answer is option (B).
Question 44
The number of tokens in the following C statement is
printf(“i=%d, &i=%x”, i&i);
A
13
B
6
C
10
D
9
Question 44 Explanation: 
Number of Tokens are 10 for printf("i=%d, &i=%x", i, &i);
  1. printf
  2. (
  3. "i=%d, &i=%x"
  4. ,
  5. i
  6. ,
  7. &
  8. i
  9. )
  10. ;
WhiteSpaces will be removed by the lexical analyzer from the source code.

Number of Tokens are 9 for printf("i=%d, &i=%x", i&i);
  1. printf
  2. (
  3. "i=%d, &i=%x"
  4. ,
  5. i
  6. &
  7. i
  8. )
  9. ;
Question 45
Which grammar rules violate the requirement of the operator grammar ? A, B, C are variables and a, b, c are terminals
1) A → BC
2) A → CcBb
3) A → BaC
4) A → ε
A
1 only
B
1 and 2 only
C
1 and 3 only
D
1 and 4 only
Question 45 Explanation: 
Operator precedence parser can be constructed from a grammar called Operator grammar. These grammars have the property that no production on right side is ɛ or has two adjacent non-terminals.
So, A → BC and A → ε are not allowed.
Thus, Correct option is (D).
Question 46
Which one of the following is a top-down parser?
A
Recursive descent parser
B
Shift left associative parser
C
SLR(k) parser
D
LR(k) parser
Question 46 Explanation: 
LL parsers and recursive descent parsers are top-down parsers.
Question 47
YACC stands for
A
Yet accept compiler constructs
B
Yet accept compiler compiler
C
Yet another compiler construct
D
Yet another compiler compiler
Question 47 Explanation: 
  • YACC stands for Yet Another Compiler Compiler.
  • YACC is an LALR(1) ( LookAhead, Left-to-right, Rightmost derivation producer with 1 lookahead token) parser generator
  • YACC builds up the LALR(1) parsing table.
Question 48
Which statement is true?
A
LALR parser is more powerful and costly as compare to other parsers
B
All CFG’s are LP and not all grammars are uniquely defined
C
Every SLR grammar is unambiguous but not every unambiguous grammar is SLR
D
LR(K) is the most general backtracking shift reduce parsing method
Question 48 Explanation: 
(A) Canonical LR is the most powerful (and most costly) LR parser, not LALR. The order is
LR(0) < SLR(1) < LALR(1) < CLR(1)

(B) The grammars generated by LP (Linear Precedence) are CFGs, but that does not imply that all CFGs are LP.

(C) Every SLR grammar is unambiguous, but not every unambiguous grammar is SLR.

(D) LR(k) uses k symbols of lookahead so that it can always see ahead in the input and avoid backtracking; LR(k) is the most general non-backtracking shift-reduce parsing method.
Question 49
With respect to compiler design, "recursive descent" is a ____ parsing technique that reads the inputs from ____.
A
Top-down, right to left
B
Top-down, left to right
C
Bottom up, right to left
D
Bottom up, left to right
Question 49 Explanation: 
Recursive Descent Parsing
  • Recursive descent is a top-down parsing technique that constructs the parse tree from the top and the input is read from left to right.
  • It uses procedures for every terminal and non-terminal entity.
  • This parsing technique recursively parses the input to make a parse tree, which may or may not require back-tracking. But the grammar associated with it (if not left factored) cannot avoid back-tracking.
  • A form of recursive-descent parsing that does not require any back-tracking is known as predictive parsing.
  • This parsing technique is regarded recursive as it uses context-free grammar which is recursive in nature.
Question 50
Which of the following is NOT a bottom up, shift reduce parser?
A
LR parser
B
LL parser
C
SLR parser
D
LALR parser
Question 50 Explanation: 
The LL parser is a top-down parser.
Question 51
Which of the following is machine independent optimization?
A
Loop optimization
B
Redundancy Elimination
C
Folding
D
All of the options
Question 51 Explanation: 
Machine independent optimizations
  • Enable other transformations
    -Procedure inlining, cloning, loop unrolling
  • Eliminate redundancy
    -Redundant expression elimination
  • Eliminate useless and unreachable code
    -Dead code elimination
  • Specialization and strength reduction
    -Constant propagation, peephole optimization
  • Move operations to less-frequently executed places
    -Loop invariant code motion
  • Other machine independent optimizations are
    -Loop optimization
    -Redundancy elimination
    -Folding
    -Strength reduction
    -Dead code elimination
Machine dependent (scheduling) transformations
  •  Take advantage of special hardware features
    - Instruction selection, prefetching
  •  Manage or hide latency, introduce parallelism
    - Instruction scheduling, prefetching
  •  Manage bounded machine resources
    - Register allocation
  • Other Machine Dependent optimization are
    -Register allocation
    -Use of addressing modes
    -Peephole optimization
Question 52
Which of the following statements is/ are false?
S1: LR(0) grammar and SLR(1) grammar are equivalent
S2: LR(1) grammar are subset of LALR(1) grammars
A
S1 only
B
S1 and S2 both
C
S2 only
D
None of the options
Question 52 Explanation: 
The containment rules are the following:
  • Every LR(0) grammar is also SLR(1), but not all SLR(1) grammars are LR(0).
  • Every SLR(1) grammar is also LALR(1), but not all LALR(1) grammars are SLR(1).
  • Every LALR(1) grammar is also LR(1), but not all LR(1) grammars are LALR(1).
  • Every LL(1) grammar is also LR(1), but not all LR(1) grammars are LL(1).
  • Every LL(0) grammar is also LR(0), SLR(1), LALR(1), LR(1), and LL(1)
Question 53
The optimization phase in a compiler generally:
A
Reduces the space of the code
B
Optimizes the code to reduce execution time
C
Both (A) and (B)
D
Neither (A) nor (B)
Question 53 Explanation: 

Code Optimization
Code Optimization phase removes the unnecessary code line and arranges the sequence of statements to speed up the execution of the program without wasting resources. The main goal of Code Optimization phase is to improve on the intermediate code to generate a code that runs faster and occupies less space.

The main functions of Code Optimization phase are:
  • Code Optimization helps you to establish a trade-off between execution and compilation speed
  • It Improves the running time of the target program
  • Generates streamlined code still in intermediate representation
  • Removing unreachable code and getting rid of unused variables
  • Removing statements which are not altered from the loop
Let us consider the following example code
a = inttofloat(10)
b = c * a
d = e + b
f = d
Then Code Optimizer will take above code and transformed it into below code
b = c * 10.0
f = e+b
So, Both option (A) and option(B)  are correct answers.
Question 54
Which of the following phases of the compilation process is also known as parsing?
A
Lexical analysis
B
Code optimization
C
Syntax analysis
D
Semantic analysis
Question 54 Explanation: 
  • Syntax analysis or parsing is the second phase of a compiler.
  • Syntax Analysis is also called as parsing.
  • Syntax analysis takes the token produced by lexical analysis as input and generates a parse tree (or syntax tree).
  • In this phase, token arrangements are checked against the source code grammar, i.e. the parser checks if the expression made by the tokens is syntactically correct.
Question 55
Resolution of externally defined symbols is performed by____
A
Linker
B
Loader
C
Compiler
D
Editor
Question 55 Explanation: 
The linker resolves symbol references by associating each reference with exactly one symbol definition from the symbol tables of its input relocatable object files. Symbol resolution is easy for references to local variables that are defined in the same module as the reference.

Resolving references to global symbols is trickier. When the compiler encounters a symbol (either a variable or a function name) that is not defined in the current module, it assumes that it is defined in some other module, generates a linker symbol table entry, and leaves it for the linker to handle. If the linker is unable to find a definition for the referenced symbol in any of its input modules, it prints an error message and terminates.
Question 56
Which of the following are language processor?
A
Assembler and editor
B
Compiler and word processor
C
Only Assembler and compiler
D
Assembler,Compiler and Interpreter
Question 56 Explanation: 
A language translator is a program which is used to translate an input program written in one programming language into another programming language. Language processor is also called a language translator.

A program written in any high-level programming language is called the source code or source program. The machine-code form produced from the source code is called the object code or object program. A translator translates the source program into the object program that the computer can understand and execute.

In programming Language processors are three types:
  1. Assembler
  2. Interpreter
  3. Compiler
Assembler is a translator which is used to translate the assembly language code into machine language code.

An interpreter converts the source program written in a high-level language into machine language.
An interpreter converts each statement of the program into machine code line by line.

Compiler : The purpose of compiler is same as interpreter but unlike interpreters which translate the source program line by line, compiler are the translators, which translate the entire program into machine codes.
Question 57
Synthesized attribute can easily be simulated by an
A
LL grammar
B
Ambiguous grammar
C
LR grammar
D
None of the above
Question 57 Explanation: 
Synthesized attributes follow a bottom-up evaluation order.

Example :
X -> Y Z      (an attribute i is defined for X, Y and Z)
X.i = f( Y.i , Z.i ), i.e. the parent takes its attribute value from its children.

Bottom Up Parsers
LR( 0 ) , SLR( 1 ) , LR( 1 ) , LALR( 1 ) and Operator Precedence

Top Down Parsers
Non Recursive descent ( LL( 1 ) ) and Recursive descent

LR parsers, which are generated from LR grammars, also follow the bottom-up parsing fashion. Therefore option (C) is correct.
Question 58
Consider an ε-free CFG. If for every pair of productions A → u and A → v
A
If FIRST(u) ∩ FIRST(v) is empty then the CFG has to be LL(1)
B
If the CFG is LL(1) then FIRST(u) ∩ FIRST(v) has to be empty
C
Both (A) and (B)
D
None of the above
Question 58 Explanation: 
Theorem : A context-free grammar G = (V_T, V_N, S, P) is LL(1) if and only if for every nonterminal A and every pair of distinct strings of symbols ⍺, β such that A → ⍺ and A → β, we have
1. First(⍺) ∩ First(β) = ∅.
2. If ⍺ ⇒* ε then First(β) ∩ Follow(A) = ∅.

Condition 2 is not required if the grammar is ε-free.

Refer : https://www.csd.uwo.ca/~mmorenom/CS447/Lectures/Syntax.html/node14.html
Question 59
Which of the following checks are not included in semantic analysis done by the compiler:
A
Type checks
B
Spelling checks
C
Uniqueness checks
D
Flow of control checks
Question 59 Explanation: 
Functions of Semantic Analysis:
  • Type Checking 
    Ensures that data types are used in a way consistent with their definition.
  • Label Checking
    Verifies that every label referenced in the program is actually defined.
  • Flow Control Check 
    Keeps a check that control structures are used in a proper manner.(example: no break statement outside a loop)
  • Scope resolution
  • Array-bound checking
Semantics errors that the semantic analyzer is expected to recognize:
  • Type mismatch
  • Undeclared variable
  • Reserved identifier misuse.
  • Multiple declaration of variable in a scope.
  • Accessing an out of scope variable.
  • Actual and formal parameter mismatch.
Question 60
In a computer, keywords of a language are recognized during
A
Parsing of the program
B
Code generation
C
Lexical analysis of the program
D
Data flow diagrams
Question 60 Explanation: 
  • Lexical Analysis is the first phase of the compiler also known as a scanner. It converts the High level input program into a sequence of Tokens.
  • A lexical token is a sequence of characters that can be treated as a unit in the grammar of the programming languages.
Example of tokens:
  • Type token (id, number, real, . . . )
  • Punctuation tokens (IF, void, return, . . . )
  • Alphabetic tokens (keywords)
Keywords; Examples-for, while, if etc.
Identifier; Examples-Variable name, function name, etc.
Operators; Examples '+', '++', '-' etc.
Separators; Examples ',' ';' etc.
Question 61
Match the description of several parts of a classic optimizing compiler in List - I, with the names of those parts in List - II:

List - I
(a) A part of a compiler that is responsible for recognizing syntax.
(b) A part of a compiler that takes as input a stream of characters and produces as output a stream of words along with their associated syntactic categories.
(c) A part of a compiler that understands the meanings of variable names and other symbols and checks that they are used in ways consistent with their definitions.
(d) An IR-to-IR transformer that tries to improve the IR program in some way (Intermediate Representation).

List - II
(i) Optimizer
(ii) Semantic Analysis
(iii) Parser
(iv) Scanner

A
(a)-(iii), (b)-(iv), (c)-(ii), (d)-(i)
B
(a)-(iv), (b)-(iii), (c)-(ii), (d)-(i)
C
(a)-(ii), (b)-(iv), (c)-(i), (d)-(iii)
D
(a)-(ii), (b)-(iv), (c)-(iii), (d)-(i)
Question 61 Explanation: 
  • The parser is the part of a compiler responsible for recognizing syntax.
  • The scanner is the part of a compiler used for lexical analysis; it takes a stream of characters as input and produces as output a stream of words along with their associated syntactic categories.
  • Semantic analysis understands the meanings of variable names and other symbols and checks that they are used in ways consistent with their definitions.
  • An optimizer is an IR-to-IR transformer that tries to improve the IR program in some way (IR = intermediate representation).
Question 62
In Distributed system, the capacity of a system to adapt the increased service load is called __________ .
A
Tolerance
B
Scalability
C
Capability
D
Loading
Question 62 Explanation: 
Scalability is the capacity of a distributed system to adapt to an increased service load.
Question 63
Consider the following statements related to compiler construction :
I. Lexical Analysis is specified by context-free grammars and implemented by pushdown automata.
II. Syntax Analysis is specified by regular expressions and implemented by finite-state machine.
Which of the above statement(s) is/are correct ?
A
Only I
B
Only II
C
Both I and II
D
Neither I nor II
Question 63 Explanation: 
Statement I is false: lexical analysis is specified by regular grammars (regular expressions) and implemented by finite automata, not by context-free grammars and pushdown automata.
Statement II is false: syntax analysis is specified by context-free grammars and implemented by pushdown automata, not by regular expressions and finite-state machines.
Question 64
In compiler optimization, operator strength reduction uses mathematical identities to replace slow math operations with faster operations. Which of the following code replacements is an illustration of operator strength reduction ?
A
Replace P + P by 2 * P or Replace 3 + 4 by 7.
B
Replace P * 32 by P << 5
C
Replace P * 0 by 0
D
Replace (P << 4) – P by P * 15
Question 64 Explanation: 
Strength Reduction : Replace expensive operations with less expensive but equivalent operations.
For Example : Multiplication is more expensive operation than addition.
Shift operation is less expensive operation than Multiplication/Division.

In option (B), the multiplication operation is replaced with a shift operation. This reduces the operator strength because a shift is less expensive than a multiplication.
Here, the most suitable option is (B): replace P * 32 by P << 5.
Question 65
Which of the following are the principles tasks of the linker?
I. Resolve external references among separately compiled program units.
II. Translate assembly language to machine code.
III. Relocate code and data relative to the beginning of the program.
IV. Enforce access-control restrictions on system libraries
A
I and II
B
I and III
C
II and III
D
I and IV
Question 65 Explanation: 
A linker is a computer program that takes one or more object files generated by a compiler and combines them into one executable program, library file or another object file.

Principles tasks of the linker are:
  • Resolve external references among separately compiled program units.
  • Relocate code and data relative to the beginning of the program.
-The assembler translates assembly language to machine code.
-The loader enforces access-control restrictions on system libraries.
Question 66
In _______, the bodies of the two loops are merged together to form a single loop provided that they do not make any references to each other.
A
Loop unrolling
B
Strength reduction
C
Loop concatenation
D
Loop jamming
Question 66 Explanation: 
In loop jamming, the bodies of the two loops are merged together to form a single loop, provided that they do not make any references to each other. It reduces the loop overhead (test and increment) incurred by running several separate loops.

Example:
Initial Code:

for(int i=0; i<5; i++)
  a = i + 5;
for(int i=0; i<5; i++)
  b = i + 10;

Using Loop jamming above code converted into below Optimized code :
for(int i=0; i<5; i++)
{
  a = i + 5;
  b = i + 10;
}
Question 67
​Which of the following is not typically a benefit of dynamic linking?
I. Reduction in overall program execution time.
II. Reduction in overall space consumption in memory.
III. Reduction in overall space consumption on disk.
IV. Reduction in the cost of software updates.
A
I and IV
B
I only
C
II and III
D
IV only
Question 67 Explanation: 
Dynamic linking is used to reduce space consumption on disk and in memory, and it also reduces the cost of software updates.
I. Reduction in overall program execution time – not a benefit (dynamic linking adds a small run-time linking overhead).
II. Reduction in overall space consumption in memory – a benefit.
III. Reduction in overall space consumption on disk – a benefit.
IV. Reduction in the cost of software updates – a benefit.
Question 68
Which of the following is FALSE?
A
The grammar S → a Sb |bSa|SS|∈, where S is the only non-terminal symbol and ∈ is the null string, is ambiguous.
B
SLR is powerful than LALR.
C
An LL(1) parser is a top-down parser.
D
YACC tool is an LALR(1) parser generator.
Question 68 Explanation: 
Option(A) : True
The grammar S → aSb | bSa | SS | ∈, where S is the only non-terminal symbol and ∈ is the null string, is ambiguous because two different parse trees can be derived for the string "aabb".
Consider the string "aabb": it has two distinct parse trees (figures omitted here).

Option(B) : False
LALR is more powerful than SLR, so the claim that SLR is more powerful than LALR is false.

Option(C) : True
An LL(1) parser is a top-down parser.

Option(D) : True
YACC tool is an LALR(1) parser generator.
Question 69
Loop unrolling is a code optimization technique:
A
That avoids tests at every iteration of the loop.
B
That improves performance by decreasing the number of instructions in a basic block.
C
That exchanges inner loops with outer loops
D
That reorders operations to allow multiple computations to happen in parallel
Question 69 Explanation: 
Loop unrolling
  • Loop unrolling is a loop transformation technique that helps to optimize the execution time of a program.
  • Loop unrolling basically removes or reduces loop iterations.
  • Loop unrolling increases the program’s speed by eliminating loop control instruction and loop test instructions.
Let us consider an example :
Program 1 without loop unrolling:
#include <stdio.h>
int main(void)
{
 for (int i=0; i<5; i++)
  printf("Hello\n"); //print hello 5 times
 return 0;
}
Output :
Hello
Hello
Hello
Hello
Hello
The following program uses loop unrolling.
#include <stdio.h>
int main(void)
{
 // unrolled the for loop in program 1
 printf("Hello\n");
 printf("Hello\n");
 printf("Hello\n");
 printf("Hello\n");
 printf("Hello\n");
 return 0;
}
Output :
Hello
Hello
Hello
Hello
Hello
Question 70
The translator which performs macro calls expansion is called:
A
Macro processor
B
Micro preprocessor
C
Macro preprocessor
D
Dynamic Linker
Question 70 Explanation: 
  • The translator which performs macro call expansion is called a macro pre-processor.
  • A macro processor is a program that copies a stream of text from one place to another, making a systematic set of replacements as it does so (a small sketch follows below).
  • Macro processors have been used for language expansion (defining new language constructs that can be expressed in terms of existing language components), for systematic text replacements that require decision making, and for text reformatting (e.g. conditional extraction of material from an HTML file).
  • A dynamic linker is the part of an operating system that loads and links the shared libraries.
  • The C preprocessor is used by the compiler to transform your code before compilation; it is called a micro pre-processor because it allows us to add macros.
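A minimal C sketch of macro expansion (the macro name is made up): the preprocessor textually replaces every use of SQUARE before the compiler proper ever sees the code.

#include <stdio.h>

#define SQUARE(x) ((x) * (x))   /* macro definition */

int main(void)
{
    /* After macro expansion the compiler sees:
           printf("%d\n", ((5) * (5)));            */
    printf("%d\n", SQUARE(5));   /* prints: 25 */
    return 0;
}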
Question 71
Which one from the following is false?
A
LALR parser is Bottom up parser
B
A parsing algorithm which performs a left to right scanning and a rightmost derivation is RL (1)
C
LR parser is Bottom up parser.
D
In LL(1), the 1 indicates that there is a one - symbol look - ahead.
Question 71 Explanation: 
option(A) : True
LALR parser is a Bottom Up parser.

option(B) : False
We don't have any RL(1) parsing; we have the LR(1) parsing algorithm. In the case of LR(1) parsing, L stands for left-to-right scanning of the input string and R signifies that we will get a rightmost derivation.

option(C) : True
LR parser is Bottom Up parser.

option(D) : True
In LL(1), the 1 indicates that there is a one – symbol look – ahead.
Question 72
Which phase of compiler generates stream of atoms?
A
Syntax Analysis
B
Lexical Analysis
C
Code Generation
D
Code Optimization
Question 72 Explanation: 
The syntax analysis phase takes the stream of tokens produced by the lexical analyzer and puts out a stream of atoms (elementary operations that serve as intermediate code), usually built with the help of a syntax tree.
Question 73
Which activity is not included in the first pass of two pass assemblers ?
A
Build the symbol table
B
Construct the intermediate code
C
Separate mnemonic opcode and operand fields
D
ALL
Question 73 Explanation: 
Options A, B and C are all activities performed in the first pass of a two-pass assembler.
Question 74
Code optimization is responsibility of :
A
Application programmer
B
System programmer
C
Operating system
D
All of the above
Question 74 Explanation: 
  • Code optimization is responsibility of System Programmers
  • Optimization can be performed by automatic optimizers, or programmers.
  • An optimizer is either a specialized software tool or a built-in unit of a compiler (the so-called optimizing compiler).
  • Modern processors can also optimize the execution order of code instructions.
Question 75
Which activity is included in the first pass of two pass assemblers ?
A
Build the symbol table
B
Construct the intermediate code
C
Separate mnemonic opcode and operand fields
D
ALL
Question 75 Explanation: 
option A,B,C are included.
Two Pass Assembler
Question 76
In two pass assembler the symbol table is used to store :
A
Label and value
B
Only value
C
Mnemonic
D
Memory Location
Question 76 Explanation: 
  • In two pass assembler the symbol table is used to store memory locations
  • The symbol table contains information to locate and relocate symbolic definitions and references. The assembler creates the symbol table section for the object file. It makes an entry in the symbol table for each symbol that is defined or referenced in the input file and is needed during linking.
  • The symbol table is then used by the link editor during relocation. The symbol table's section header contains the symbol table index for the first non-local symbol.
  • Instead of using an absolute memory address it uses a symbolic address.
Question 77
A Top-down Parser generates :
A
Leftmost derivation
B
Rightmost derivation
C
Rightmost derivation in reverse
D
Leftmost derivation in reverse
Question 77 Explanation: 
Top-Down Parsers:
  • In this Parsing technique we expand the start symbol to the whole program.
  • In top down parsing, we just start with the start symbol and compare the right side of the different productions against the first piece of input to see which of the productions should be used.
  • A top down parser is called LL parser because it parses the input from Left to right, and constructs a Leftmost derivation of the sentence.
  • Recursive Descent and LL parsers are the Top-Down parsers.
  • A bottom-up parser is an LR parser: it scans the input from Left to right and produces a Rightmost derivation in reverse order (hence LR).
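
As a small illustration of a top-down parser (not part of the original question), here is a minimal recursive-descent sketch for the toy grammar S → ( S ) S | ε; the grammar, the input string and the function names are assumptions chosen for the example. Each call expands a production of S using one symbol of lookahead, so the parser traces out a leftmost derivation while scanning the input from left to right:
#include <stdio.h>
#include <stdlib.h>

/* toy grammar (chosen for illustration):  S -> ( S ) S | epsilon */
static const char *input;

static void S(void)
{
 if (*input == '(') {  /* predict S -> ( S ) S on lookahead '(' */
  input++;             /* match '(' */
  S();
  if (*input != ')') { printf("reject\n"); exit(0); }
  input++;             /* match ')' */
  S();
 }
 /* otherwise predict S -> epsilon and simply return */
}

int main(void)
{
 input = "(()())";
 S();
 printf(*input == '\0' ? "accept\n" : "reject\n");
 return 0;
}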
Question 78
A general macroprocessor is an in built function of :
A
Loader
B
Linker
C
Editor
D
Assembler
Question 78 Explanation: 
  • A general-purpose macro processor or general purpose preprocessor is a macro processor that is not tied to or integrated with a particular language or piece of software.
  • A macro processor is a program that copies a stream of text from one place to another, making a systematic set of replacements as it does so.
  • Macro processors are often embedded in other programs, such as assemblers and compilers. Sometimes they are standalone programs that can be used to process any kind of text.
  • Macro processors have been used for language expansion (defining new language constructs that can be expressed in terms of existing language components), for systematic text replacements that require decision making, and for text reformatting (e.g. conditional extraction of material from an HTML file).
Question 79
Which activities is not included in the first pass of two pass assembler ?
A
Build the symbol table
B
Construct the Intermediate code
C
Separate mnemonic opcode and operand field.
D
Above All
Question 79 Explanation: 
option A,B,C are included.
Two Pass Assembler
Question 80
Which of the statements related to Compilers is wrong ?
A
Lexical analysis is breaking the input into tokens
B
Syntax analysis is for parsing the phrase
C
Syntax analysis is for analyzing the semantic
D
None of these
Question 80 Explanation: 
option(A) : Correct
  • lexical analyzer is the first phase of a compiler
  • lexical analyzer read input characters from the source program and produce output as sequence of tokens
option(B) : Correct
  • Syntax analysis or parsing is the second phase of a compiler.
  • Syntax Analysis is also called as parsing.
  • Syntax analysis takes the token produced by lexical analysis as input and generates a parse tree (or syntax tree).
  • In this phase, token arrangements are checked against the source code grammar, i.e. the parser checks if the expression made by the tokens is syntactically correct.
option(C) : Wrong
  • Semantic means meaning of language. syntax analysis phase only cares about the correct syntax of the program.
  • Programs are logically valid or not, this part is done by semantic analysis phase.
Question 81
The dynamic binding occurs during the :
A
Compile time
B
Run time
C
Linking time
D
Pre-processing time.
Question 81 Explanation: 
Difference between static and dynamic binding 
  • Static binding happens at compile-time while dynamic binding happens at runtime.
  • Binding of private, static and final methods always happen at compile time since these methods cannot be overridden.
  • In static binding, all the information needed to call a function is known at compile time.
  • In dynamic binding, the information needed to call a function becomes known only at run time.
  • When the method overriding is actually happening and the reference of parent type is assigned to the object of child class type then such binding is resolved during runtime.
  • The binding of overloaded methods is static and the binding of overridden methods is dynamic.
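
The bullets above describe Java-style method overriding. As a rough sketch of the same idea in C (illustrative only; the function names and the use of argc to choose the target are assumptions), the call below is bound at run time because the function actually invoked depends on a value that exists only while the program runs:
#include <stdio.h>

static void greet_english(void) { printf("Hello\n"); }
static void greet_french(void)  { printf("Bonjour\n"); }

int main(int argc, char **argv)
{
 (void)argv; /* unused */

 /* which function runs is not known at compile time:
    it depends on argc, a value that exists only at run time */
 void (*greet)(void) = (argc > 1) ? greet_french : greet_english;
 greet(); /* bound dynamically, at run time */
 return 0;
}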
Question 82
Symbol Table can be used for :
A
Checking type compatibility
B
Suppressing duplication of error message
C
Storage allocation
D
All of these
Question 82 Explanation: 
Symbol table can be used for
(A) Checking type compatibility
(B) Suppressing duplication of error message
(C) Storage allocation
Question 83
A compiler for a high level language that runs on one machine and produces code for a different machine is called :
A
Optimizing
B
One pass compiler
C
Cross compiler
D
Multipass compiler
Question 83 Explanation: 
  • A compiler for a high-level language that runs on one machine and produces code for a different machine is called cross compiler.
  • A one-pass compiler is a compiler that passes through the source code of each compilation unit only once, i.e. it traverses the program only once.
  • A multi-pass compiler or two pass compiler is a type of compiler that processes the source code or abstract syntax tree of a program several times.
  • A one-pass compiler is faster than a multi-pass compiler.
Question 84
The ‘K’ in LR(K) cannot be :
A
0
B
1
C
2
D
None of these
Question 84 Explanation: 
In LR(k), k indicates the number of lookahead symbols and can be any finite value, including 0.
Question 85
Peephole optimization is a form of :
A
Loop optimization
B
Local optimization
C
Constant folding
D
Data flow analysis
Question 85 Explanation: 
Peephole optimization is a form of optimization that is done on a segment of generated code.
So, peephole optimization is a form of local optimization.
**Local optimization is the correct answer**
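
A toy sketch of what a peephole pass does: it slides a small window over the generated code and rewrites obviously redundant patterns. The three-field instruction format and the single rewrite rule (deleting a register-to-itself move) are assumptions made for illustration; a real peephole optimizer works on the compiler's actual instruction representation and applies many such patterns:
#include <stdio.h>
#include <string.h>

/* a toy instruction: an opcode and two register operands */
typedef struct { char op[8]; char dst[8]; char src[8]; } Instr;

/* peephole pass: drop "MOV X, X" (a move of a register to itself),
   a typical redundant instruction left behind by naive code generation */
int peephole(Instr *code, int n)
{
 int out = 0;
 for (int i = 0; i < n; i++) {
  if (strcmp(code[i].op, "MOV") == 0 && strcmp(code[i].dst, code[i].src) == 0)
   continue;             /* delete the redundant instruction */
  code[out++] = code[i]; /* keep everything else */
 }
 return out;             /* new instruction count */
}

int main(void)
{
 Instr prog[] = {
  {"MOV", "R1", "R2"},
  {"MOV", "R1", "R1"},   /* redundant: removed by the pass */
  {"ADD", "R1", "R3"},
 };
 int n = peephole(prog, 3);
 for (int i = 0; i < n; i++)
  printf("%s %s, %s\n", prog[i].op, prog[i].dst, prog[i].src);
 return 0;
}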
Question 86
A permanent database of a general model of compiler is ____________ .
A
Identifier table
B
Page map table
C
Literal table
D
Terminal table
Question 86 Explanation: 
The terminal table is a permanent database that has an entry for each terminal symbol, such as arithmetic operators, keywords, and punctuation characters like ';' and ','. Fields: name of the symbol.
option(D) is the correct answer
Question 87
Tasks done in parsing are :
A
Check the validity of a source string
B
Determine the syntactic structure of a source string
C
Both (A) and (B)
D
None of these
Question 87 Explanation: 
Tasks done in parsing are :
  • Check the validity of a source string
  • Determine the syntactic structure of a source string
  • Syntax analysis takes the token produced by lexical analysis as input and generates a parse tree (or syntax tree).
  • In this phase, token arrangements are checked against the source code grammar, i.e. the parser checks if the expression made by the tokens is syntactically correct.
Question 88
YACC builds up __________ parsing table.
A
LALR
B
LR
C
SLR
D
LLR
Question 88 Explanation: 
  • YACC stands for Yet Another Compiler Compiler.
  • YACC is an LALR(1) ( LookAhead, Left-to-right, Rightmost derivation producer with 1 lookahead token) parser generator
  • YACC builds up an LALR(1) parsing table.
Question 89
The action of passing the source program into the proper syntactic class is known as :
A
Syntax analysis
B
Lexical analysis
C
Interpretation analysis
D
Uniform symbol generation
Question 89 Explanation: 
  • The action of parsing the source program into the proper syntactic classes is known as lexical analysis
  • lexical analyzer is the first phase of a compiler
  • The output of a lexical analyzer is stream of tokens
  • lexical analyzer read input characters from the source program and produce output as sequence of tokens
Question 90
The dynamic binding occurs during the :
A
Compile time
B
Run time
C
Linking time
D
Pre - processing time
Question 90 Explanation: 
Difference between static and dynamic binding 
  • Static binding happens at compile-time while dynamic binding happens at runtime.
  • Binding of private, static and final methods always happen at compile time since these methods cannot be overridden.
  • In static binding, all the information needed to call a function is known at compile time.
  • In dynamic binding, the information needed to call a function becomes known only at run time.
  • When the method overriding is actually happening and the reference of parent type is assigned to the object of child class type then such binding is resolved during runtime.
  • The binding of overloaded methods is static and the binding of overridden methods is dynamic.
Question 91
Shift-Reduce parsers perform the following :
A
Shift step that advances in the input stream by K(K > 1) symbols and Reduce step that applies a completed grammar rule to some recent parse trees, joining them together as one tree with a new root symbol.
B
Shift step that advances in the input stream by one symbol and Reduce step that applies a completed grammar rule to some recent parse trees, joining them together as one tree with a new root symbol.
C
Shift step that advances in the input stream by K(K = 2) symbols and Reduce step that applies a completed grammar rule to form a single tree.
D
Shift step that does not advance in the input stream and Reduce step that applies a completed grammar rule to form a single tree.
Question 91 Explanation: 
Shift-Reduce parsers perform the following
  • Shift step that advances in the input stream by one symbol and
  • Reduce step that applies a completed grammar rule to some recent parse trees, joining them together as one tree with a new root symbol.
Question 92
Which of the following is true ?
A
Canonical LR parser is LR (1) parser with single look ahead terminal
B
All LR(K) parsers with K > 1 can be transformed into LR(1) parsers.
C
Both (A) and (B)
D
None of the above
Question 92 Explanation: 
  • True : Canonical LR parser is LR (1) parser with single look ahead terminal
  • True : All LR(K) parsers with K > 1 can be transformed into LR(1) parsers.
Question 93
In a two-pass assembler, symbol table is
A
Generated in first pass
B
Generated in second pass
C
Not generated at all
D
Generated and used only in second pass
Question 93 Explanation: 
In a two-pass assembler, the symbol table is generated in the first pass; the assembler needs two passes to scan the whole source file, whereas a one-pass assembler scans the whole file in one go.

Following tasks performed in first pass
  • It allocates space for the literals.
  • It computes the total length of the program.
  • It builds the symbol table for the symbols and their values.
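
A highly simplified sketch of these first-pass tasks (the toy source lines, the one-word-per-statement assumption and the "label ends with a colon" convention are all assumptions made for illustration): the pass walks the source once, advances a location counter, records every label in the symbol table, and thereby also learns the program length:
#include <stdio.h>
#include <string.h>

#define WORD 1  /* assume every statement occupies one word */

struct symbol { char label[16]; int address; };

int main(void)
{
 /* toy source: a leading name ending in ':' defines a label */
 const char *source[] = { "START: LOAD A", "ADD B", "LOOP: SUB C", "JMP LOOP" };
 struct symbol table[10];
 int nsyms = 0, lc = 0;  /* lc = location counter */

 for (int i = 0; i < 4; i++) {
  const char *colon = strchr(source[i], ':');
  if (colon) {                       /* pass 1: record the label */
   int len = (int)(colon - source[i]);
   memcpy(table[nsyms].label, source[i], len);
   table[nsyms].label[len] = '\0';
   table[nsyms].address = lc;
   nsyms++;
  }
  lc += WORD;                        /* every statement takes space */
 }

 printf("program length: %d words\n", lc);
 for (int i = 0; i < nsyms; i++)
  printf("%-8s %d\n", table[i].label, table[i].address);
 return 0;
}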
Question 94
Debugger is a program that
A
Allows to examine and modify the contents of registers
B
Does not allow execution of a segment of program
C
Allows to set breakpoints, execute a segment of program and display contents of register
D
All of the above
Question 94 Explanation: 
  • A debugger is a computer program used by programmers to test and debug a target program
  • A debugger allows setting breakpoints, executing a segment of the program and displaying the contents of registers. GDB, used under Unix, is one well-known debugger.
Refer 1 : https://www.cs.utah.edu/~germain/PPS/Topics/debugging_programs.html
Refer 2 : https://www.cs.cmu.edu/~gilpin/tutorial/
Question 95
A grammar G is LL(1) if and only if the following conditions hold for two distinct productions A → α | β
I. First (α) ∩ First (β) ≠ { a } where a is some terminal symbol of the grammar.
II. First (α) ∩ First (β) ≠ λ
III. First (α) ∩ Follow (A)=ϕ if λ ∈ First (β)
A
I and II
B
I and III
C
II and III
D
I, II and III
Question 95 Explanation: 
LL(1) grammar
Question 96
Which of the following suffices to convert an arbitrary CFG to an LL(1) grammar ?
A
Removing left recursion alone
B
Removing the grammar alone
C
Removing left recursion and factoring the grammar
D
None of the above
Question 96 Explanation: 
Removing left recursion and factoring the grammar do not suffice to convert an arbitrary CFG to LL(1) grammar.

Following are the steps to convert an arbitrary CFG to LL(1) grammar:
  1. Remove ambiguity (by taking care of precedence and associativity)
  2. Remove left recursion (this may have to be done after removing ambiguity)
  3. Make the grammar deterministic by left factoring (if required), since LL(1) is a top-down parser.
If the grammar is ambiguous, left recursive or not left factored, then it is not LL(1); all three problems have to be removed, and even that does not guarantee that the result is LL(1).
Question 97
A shift reduce parser suffers from
A
Shift reduce conflict only
B
Reduce reduce conflict only
C
Both shift reduce conflict and reduce reduce conflict
D
Shift handle and reduce handle conflicts
Question 97 Explanation: 
SR Parsers conflicts
Question 98
The grammar S ⟶ (S) | SS | ∈ is ​ not​ suitable for predictive parsing because the grammar is
A
An Operator Grammar
B
Right Recursive
C
Left Recursive
D
Ambiguous
Question 98 Explanation: 
For predictive-parsing, grammar should be:
  • Free from ambiguity
  • Free from left recursion
  • Free from left factoring
Since the given grammar has infinitely many parse trees for the string ‘ε’, the grammar is ambiguous; also S → SS is left recursive. So it cannot have a predictive parser.

option(C) is also satisfied, but option(D) is the better answer here, because we have a standard procedure for removing left recursion, whereas ambiguity is not easy to remove.
Question 99
Consider the following Grammar G :
S ➝ A | B
A➝ a | c
B➝ b | c
Where {S,A,B} is the set of non-terminals, {a,b,c} is the set of terminals.
Which of the following statement(s) is/are correct ?
S1 : LR(1) can parse all strings that are generated using grammar G.
S2 : LL(1) can parse all strings that are generated using grammar G.
A
Both S1 and S2
B
Only S2
C
Neither S1 nor S2
D
Only S1
Question 99 Explanation: 
The given grammar is ambiguous as there are two possible leftmost derivations for string “c”.

First Leftmost Derivation
S → A
A → c

Second Leftmost Derivation
S → B
B → c

An Ambiguous grammar can neither be LL(1) nor LR(1)
Question 100
Consider the following two Grammars :
G1 : S → SbS | a
G2 : S → aB | ab, A → GAB | a, B → ABb | b
Which of the following option is correct ?
A
Only G1 is ambiguous
B
Only G2 is ambiguous
C
Both G1 and G2 are ambiguous
D
Both G1 and G2 are not ambiguous
Question 100 Explanation: 
A grammar is said to be ambiguous if there exists more than one leftmost derivation or more than one rightmost derivation or more than one parse tree for the given input string

Here both grammars G1 and G2 are ambiguous: for G1 we can generate more than one parse tree for the same string “ababa”, and for G2 we can generate more than one parse tree for the string “ab”. So both G1 and G2 are ambiguous grammars.
ambiguous grammar
Question 101
A bottom-up parser generates :
A
Leftmost derivation in reverse
B
Right-most derivation in reverse
C
Left-most derivation
D
Right-most derivation
Question 101 Explanation: 
Bottom-up parser generates the right-most derivation in reverse order. It constructs the parse tree from the input string to the root and tries to construct the right-most derivation of the specified string backward.
Question 102
In compiler design ‘reducing the strength’ refers to
A
Reducing the range of values of input variables.
B
Code optimization using cheaper machine instructions.
C
Reducing efficiency of program.
D
None of the above
Question 102 Explanation: 
**Correct answer is option(B): Code optimization using cheaper machine instructions.**
  • In compiler construction, strength reduction is a compiler optimization where expensive operations are replaced with equivalent but less expensive operations.
  • Classic example of strength reduction converts "strong" multiplications inside a loop into "weaker" additions – something that frequently occurs in array addressing.
Strength reduction examples
  • Instead of multiplication by 2 we can use a left shift by 1
  • Instead of division by 2 we can use a right shift by 1
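
A minimal source-level sketch of these two replacements (the variable names are illustrative; a compiler normally performs the rewrite on its intermediate code rather than on the source text):
#include <stdio.h>

int main(void)
{
 int n = 12;             /* non-negative, so the shifts are equivalent */

 int a = n * 2;          /* multiplication by 2 ...            */
 int b = n << 1;         /* ... replaced by a left shift by 1  */

 int c = n / 2;          /* division by 2 ...                  */
 int d = n >> 1;         /* ... replaced by a right shift by 1 */

 printf("%d %d %d %d\n", a, b, c, d); /* prints 24 24 6 6 */
 return 0;
}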
Question 103
Given the following expressions of a grammar
E → E * F / F + E / F
F → F – F / id
Which of the following is true ?
A
* has higher precedence than +
B
– has higher precedence than *
C
+ and – have same precedence
D
+ has higher precedence than *
Question 103 Explanation: 
An operator produced at a lower level of the grammar has higher precedence.
  • * and + have the least priority because they are produced at the topmost level.
  • * and + have equal precedence because they are at the same level.
  • '-' has higher precedence than '+' and '*' because it is produced at the bottom-most level.
Let us take an example 6*7-8 when we draw parse tree according to grammar
      E
   /  |  \
  E   *   F
  |     / | \
  F    F  -  F
  |    |     |
id(6) id(7) id(8)

As we can see, ‘-’ is evaluated first and then ‘*’ is evaluated.
Thus, ‘-’ has higher precedence than ‘*’.

So correct answer is option(B)
Question 104
Which of the following is true while converting CFG to LL(I) grammar ?
A
Remove left recursion alone
B
Factoring grammar alone
C
Both of the above
D
None of the above
Question 104 Explanation: 
Removing left recursion and factoring the grammar do not suffice to convert an arbitrary CFG to LL(1) grammar.

Following are the steps to convert an arbitrary CFG to LL(1) grammar:
  1. Remove ambiguity (by taking care of precedence and associativity)
  2. Remove left recursion (this may have to be done after removing ambiguity)
  3. Make the grammar deterministic by left factoring (if required), since LL(1) is a top-down parser.
If the grammar is ambiguous, left recursive or not left factored, then it is not LL(1); all three problems have to be removed, and even that does not guarantee that the result is LL(1).
Question 105
Which of the following is the most powerful parsing method ?
A
LL(I)
B
Canonical LR
C
SLR
D
LALR
Question 105 Explanation: 
Canonical LR is the most powerful of the listed parsing methods.
Canonical LR > LALR > SLR
So the most powerful parser is CLR (canonical LR).
Question 106
Given the following statements :
S1: SLR uses follow information to guide reductions. In case of LR and LALR parsers, the lookaheads are associated with the items and they make use of the left context available to the parser.
S2: LR grammar is a larger subclass of context free grammar as compared to that SLR and LALR grammars.
Which of the following is true ?
A
S1 is not correct and S2 is not correct.
B
S1 is not correct and S2 is correct.
C
S1 is correct and S2 is not correct.
D
S1 is correct and S2 is correct.
Question 106 Explanation: 
S1 is correct and S2 is correct.
  • SLR parser uses the follow information to construct the SLR parse table.
  • LR and LALR uses the look-ahead symbol to generate the parse table
  • LR is more powerful than SLR and LALR; therefore LR can recognize a larger class of CFGs than SLR and LALR.
grammar space
Question 107
Which of the following derivations does a top-down parser use while parsing an input string? The input is scanned from left to right.
A
Leftmost derivation
B
Leftmost derivation traced out in reverse
C
Rightmost derivation traced out in reverse
D
Rightmost derivation
Question 107 Explanation: 
In Top-down parser takes input from Left to right constructing leftmost derivation of the sentence.
  • Top-down parser   - Leftmost derivation
  • Bottom-Up parser - Reverse of rightmost derivation
Question 108
The scheme of which interpreter translates the source program is known as
A
Paragraph by paragraph
B
Instruction by instruction
C
Line by line
D
None of the above
Question 108 Explanation: 
An interpreter converts a source program written in a high-level language into machine language, translating and executing each statement of the program line by line.
Question 109
Portable program means
A
Program with wheels
B
Independent from its authors
C
Independent of platform
D
None of the above
Question 109 Explanation: 
Platform independence means that the same program works on any platform(operating system like windows, linux, unix) without needing any modification.
Question 110
Given the production rules of a grammar G1 as
S1 → AB | aaB
A → a | Aa
B → b
and the production rules of a grammar G2 as
S2 → aS2bS2 | bS2aS2 | λ
Which of the following is correct statement ?
A
G1 is ambiguous and G2 is not ambiguous.
B
G1 is ambiguous and G2 is ambiguous.
C
G1 is not ambiguous and G2 is ambiguous.
D
G1 is not ambiguous and G2 is not ambiguous.
Question 110 Explanation: 
  • A grammar is said to be ambiguous if there exists more than one leftmost derivation or more than one rightmost derivation or more than one parse tree for the given input string
  • Here both grammars G1 and G2 are ambiguous: for G1 we can generate more than one parse tree for the same string “aab”, and for G2 we can generate more than one parse tree for strings such as “abab” and “baba”. So both G1 and G2 are ambiguous grammars.
ambiguous grammar
Question 111
Given a grammar : S1 → Sc, S → SA | A, A → aSb | ab, there is a rightmost derivation
S1 ⇒ Sc ⇒ SAc ⇒ SaSbc
Thus, SaSbc is a right sentential form, and its handle is
A
SaS
B
Bc
C
Sbc
D
aSb
Question 111 Explanation: 
**Correct answer : option(D) = aSb**
handle in compiler design
Question 112
The equivalent production rules corresponding to the production rules S → Sα1 | Sα2 | β1 | β2 is
A
S → β1 | β2, A → α1A | α2A | λ
B
S → β1| β2 | β1A | β2A, A → α1A | α2A
C
S → β1 | β2, A → α1A | α2A
D
S → β1 | β2 | β1A | β2A, A → α1A | α2A | λ
Question 112 Explanation: 
S→Sα1 ∣ Sα2 ∣ β1 ∣ β2
The given grammar can generate: {β1, β2, β1α1, β1α2, β2α1, β2α2, β2α1α2, ...}, i.e. β1 or β2 followed by any string over {α1, α2}.
Option (A) can generate only {β1, β2}.
In option (B), A has no terminating (λ) production, so no string containing α1 or α2 can ever be completed.
Option (C) is also not correct because it can generate only {β1, β2}.
option(D) can generate all the string generate by the given grammar.

Method 2 : 
S→Sα1 ∣ Sα2 ∣ β1 ∣ β2,
Apply left-recursion elimination, then we get
S → β1 ∣ β2 ∣ β1A ∣ β2A, A → α1A ∣ α2A ∣ λ
Therefore, Option(D) S→β1 ∣ β2 ∣ β1A ∣ β2A, A → α1A ∣ α2A ∣ λ
Question 113
Which is the correct statement(s) for Non Recursive predictive parser ?
S1 : First(α) = { t | α ⇒* tβ for some string β }
S2 : Follow(X) = { a | S⇒ *αXaβ for some strings α and β }
A
Both statements S1 and S2 are incorrect.
B
S1 is incorrect and S2 is correct.
C
S1 is correct and S2 is incorrect.
D
Both statements S1 and S2 are correct.
Question 113 Explanation: 
S1 gives the standard definition of First(α) and S2 gives the standard definition of Follow(X); both sets are used to construct the parsing table of a non-recursive predictive parser, so both statements are correct.
Question 114
‘Macro’ in an assembly level program is _______.
A
Sub program
B
A complete program
C
A hardware portion
D
Relative coding
Question 114 Explanation: 
Writing a macro is another way of ensuring modular programming in assembly language.
  • A macro is a sequence of instructions, assigned by a name and could be used anywhere in the program.
  • In NASM, macros are defined with %macro and %endmacro directives.
  • The macro begins with the %macro directive and ends with the %endmacro directive.

Macro definition syntax :
%macro macro_name number_of_params
<macro body>
%endmacro

Here number_of_params specifies the number of parameters and macro_name specifies the name of the macro.

The macro is invoked by using the macro name along with the necessary parameters. When you need to use some sequence of instructions many times in a program, you can put those instructions in a macro and use it instead of writing the instructions all the time.
Question 115
Grammar of the programming is checked at ________ phase of compiler
A
Semantic analysis
B
Code generation
C
Syntax analysis
D
Code optimization
Question 115 Explanation: 
  • Grammar of the programming is checked at syntax phase of compiler
  • Type checking is done during the semantic analysis phase, while syntax analysis is used to generate parse trees.
    Example : int a = "world"; passes syntax analysis but is caught as a type error during semantic analysis.
  • Logical errors are checked in semantic analysis
Question 116
Macro-processors are ______.
A
Hardware
B
Compiler
C
Registers
D
None of the above
Question 116 Explanation: 
A macro processor is a program that copies a stream of text from one place to another, making a systematic set of replacements as it does so. Macro processors are often embedded in other programs, such as assemblers and compilers. Sometimes they are standalone programs that can be used to process any kind of text.
Question 117
Which of the following expression is represented by the parse tree ?
(parse tree figure)
A
(A + B) * C
B
A + * BC
C
A + B * C
D
A * C + B
Question 117 Explanation: 
Parse tree follows below points always:
  • All leaf nodes have to be terminals.
  • All interior nodes have to be non-terminals.
  • In-order traversal gives original input string.
  • Parse tree is always following in-order traversal. It visits left, root and right

The in-order traversal of the given parse tree yields (A+B)∗C, so option (A) is correct.

Question 118
Consider the following left associative operators in decreasing order of precedence :
  • – subtraction (highest precedence)
  • * multiplication
  • $ exponentiation (lowest precedence)
What is the result of the following expression ?

3 – 2 * 4 $ | * 2**3
A
– 61
B
64
C
512
D
4096
Question 118 Explanation: 
The question as printed is garbled: it shows two operator symbols between operands (the ‘|’ and the doubled ‘*’ are stray characters). Taking the intended expression to be 3 – 2 * 4 $ 2 * 3 and applying the given precedences (– highest, then *, then $ lowest, all left associative):
  • 3 – 2 * 4 $ 2 * 3
  • (3 – 2) * 4 $ 2 * 3 = 1 * 4 $ 2 * 3
  • (1 * 4) $ (2 * 3) = 4 $ 6
  • 4 $ 6 = 4^6 = 4096
Question 119
Which of the following is used for grouping of characters into tokens(in a computer)?
A
A parser
B
Code optimizer
C
Code generator
D
Scanner
Question 119 Explanation: 
  • Lexical analysis or Scanner is the first phase of a compiler,
  • It takes the modified source code from language preprocessors that are written in the form of sentences.
  • A scanner or lexical analyzer reads the program from left to right, organize them into tokens and pass those tokens to the next level.
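
A minimal sketch of such a scanner (the input string and the two token classes, NUMBER and IDENTIFIER, are assumptions made for illustration; production lexers are normally generated from regular expressions by tools such as lex/flex):
#include <stdio.h>
#include <ctype.h>

/* groups the characters of the input into NUMBER and IDENTIFIER
   tokens, skipping blanks */
int main(void)
{
 const char *p = "count 42 x7 9";

 while (*p) {
  if (isspace((unsigned char)*p)) { p++; continue; }
  if (isdigit((unsigned char)*p)) {
   printf("NUMBER: ");
   while (isdigit((unsigned char)*p)) putchar(*p++);
   putchar('\n');
  } else if (isalpha((unsigned char)*p)) {
   printf("IDENTIFIER: ");
   while (isalnum((unsigned char)*p)) putchar(*p++);
   putchar('\n');
  } else {
   printf("UNKNOWN: %c\n", *p++);
  }
 }
 return 0;
}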
Question 120
A compiler that runs on one machine and produces code for a different machine is called:
A
Cross compilation
B
One pass compilation
C
Two pass compilation
D
None of the above
Question 120 Explanation: 
  • A compiler for a high-level language that runs on one machine and produces code for a different machine is called cross compiler.
  • A one-pass compiler is a compiler that passes through the source code of each compilation unit only once, i.e. it traverses the program only once.
  • A multi-pass compiler or two pass compiler is a type of compiler that processes the source code or abstract syntax tree of a program several times.
  • A one-pass compiler is faster than a multi-pass compiler.
Question 121
In an absolute loading scheme which loader function is accomplished by assembler ?
A
Re-allocation
B
Allocation
C
Linking
D
Loading
Question 121 Explanation: 
  • Absolute loader is a kind of loader in which relocated object files are created, loader accepts these files and places them at a specified location in the memory.
  • This type of loader is called absolute loader because no relocating information is needed, rather it is obtained from the programmer or assembler.
Question 122
Which of the following grammar is LR (1) ?
A
A → a A b, A → b A b, A → a , A →b
B
A → a A a, A → a A b, A → c
C
A → A + A, A → a
D
Both (A) and (B)
Question 122 Explanation: 
Both (A) and (B)
Question 123
A shift-reduce parser carries out the actions specified within braces immediately after reducing with the corresponding rule of the grammar.
S → x x W [ print “1”]
S → y [print “2”]
W → Sz [print “3”]
what is the translation of “x x x x y z z” ?
A
1 1 2 3 1
B
1 1 2 3 3
C
2 3 1 3 1
D
2 3 3 2 1
Question 123 Explanation: 
Parsing x x x x y z z bottom-up, the reductions (and the printed actions) occur in this order:
S → y (print “2”)
W → Sz (print “3”)
S → xxW (print “1”)
W → Sz (print “3”)
S → xxW (print “1”)
So the translation is 2 3 1 3 1.
Answer is option(C) 23131
Question 124
Synthesized attribute can be easily simulated by a
A
LL grammar
B
Ambiguous grammar
C
LR grammar
D
None of the above
Question 124 Explanation: 
Synthesized attributes follows bottom up parsing evaluation.

Example :
X -> Y Z, with a synthesized attribute i defined on X, Y and Z
X.i = f( Y.i , Z.i ) means the parent takes its attribute value from its children

Bottom Up Parsers
LR( 0 ) , SLR( 1 ) , LR( 1 ) , LALR( 1 ) and Operator Precedence

Top Down Parsers
Non Recursive descent ( LL( 1 ) ) and Recursive descent

LR parsers, which are generated from LR grammars, are bottom-up parsers; therefore option(C) is correct.
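
A minimal sketch of a synthesized attribute at work (the node structure and the expression (2 + 3) * 4 are assumptions made for illustration): the val attribute of each node is computed from the val attributes of its children, i.e. the values flow bottom-up, which is why synthesized attributes fit naturally with bottom-up (LR) parsing:
#include <stdio.h>

/* expression-tree node: op is '+', '*' or 0 for a leaf holding a number */
struct node {
 char op;
 int value;                 /* used only by leaves */
 struct node *left, *right;
};

/* synthesized attribute: node.val = f(left.val, right.val),
   so the value of the parent is computed from its children */
static int val(const struct node *n)
{
 if (n->op == 0)
  return n->value;
 int l = val(n->left), r = val(n->right);
 return n->op == '+' ? l + r : l * r;
}

int main(void)
{
 /* tree for (2 + 3) * 4 */
 struct node two   = {0, 2, NULL, NULL};
 struct node three = {0, 3, NULL, NULL};
 struct node four  = {0, 4, NULL, NULL};
 struct node plus  = {'+', 0, &two, &three};
 struct node times = {'*', 0, &plus, &four};

 printf("%d\n", val(&times)); /* prints 20 */
 return 0;
}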
Question 125
Which of the following are Assembler Directives ?
(i) EQU
(ii) ORIGIN
(iii) START
(iv) END
A
(ii), (iii) and (iv)
B
(i), (iii) and (iv)
C
(iii) and (iv)
D
(i), (ii), (iii) and (iv)
Question 125 Explanation: 
An assembler directive is a message to the assembler that tells the assembler something it needs to know in order to carry out the assembly process; for example, an assembler directive tells the assembler where a program is to be located in memory.
Basic Assembly directives
EQU →Equate
ORG →Origin
START → Start (Define the start of the first control section in a program)
END → End (End of assembly language program and "starting address" for execution)
Question 126
Assembler program is :
A
Dependent on the operating system
B
Dependent on the compiler
C
Dependent on the hardware
D
Independent of the hardware
Question 126 Explanation: 
  • Assembly languages are hardware dependent; there is a different one for each CPU series.
  • Assembly language (or Assembler) is a compiled, low-level computer language. It is processor-dependent, since it basically translates the Assembler's mnemonics directly into the commands a particular CPU understands, on a one-to-one basis.
  • These Assembler mnemonics are the instruction set for that processor
Question 127
At the end of parsing,
A
Tokens are identified.
B
Set of instructions are identified.
C
The syntactic groups are identified.
D
Machine instructions are identified
Question 127 Explanation: 
At the end of parsing, the syntactic groups of the program have been identified; tokens are identified earlier, at the end of lexical analysis.
Question 128
Dead-code elimination in machine code optimization refers to :
A
Removal of all labels.
B
Removal of values that never get used.
C
Removal of function which are not involved.
D
Removal of a module after its use.
Question 128 Explanation: 
  • Dead code elimination is a compiler optimization to remove code which does not affect the program results.
  • Removing such code has several benefits: it shrinks program size, an important consideration in some contexts, and it allows the running program to avoid executing irrelevant operations, which reduces its running time.
  • It can also enable further optimizations by simplifying program structure. Dead code includes code that can never be executed (unreachable code), and code that only affects dead variables (written to, but never read again), that is, irrelevant to the program.
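
A small source-level sketch of the two kinds of dead code mentioned above (the variable names are illustrative); an optimizing compiler would delete both the dead store and the unreachable branch without changing the program's output:
#include <stdio.h>

int main(void)
{
 int never_read = 42 * 7; /* dead store: written but never read again
                             (most compilers also warn about it)      */
 int x = 5;

 if (0) {                 /* unreachable code: the condition is always false */
  printf("this never executes\n");
 }

 printf("%d\n", x);       /* only this statement affects the output */
 return 0;
}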
Question 129
A parse tree is an annotated parse tree if :
A
It shows attribute values at each node.
B
There are no inherited attributes.
C
It has synthesized nodes as terminal nodes.
D
Every non-terminal nodes is an inherited attribute.
Question 129 Explanation: 
A parse tree is an annotated parse tree if it shows the attribute values at each node for the given input string. The process of computing the attribute values at the nodes is called annotating or decorating the parse tree.
Question 130
In a two pass compiler, during the first pass :
A
User defined address symbols are correlated with their binary equivalent
B
The syntax of the statement is checked and mistakes, if any, are listed
C
Object program is generated
D
Semantic of the source program is elucidated.
Question 130 Explanation: 
In a two pass compiler, during the first pass user defined address symbols are correlated with their binary equivalent
Question 131
A single instruction in an assembly language program contains :
A
One micro operation
B
One macro operation
C
One instruction to be completed in a single pulse
D
One machine code instruction
Question 131 Explanation: 
A single instruction in an assembly language program contains one macro operation
Question 132
Absolute loader demands that the programmer needs to know the :
A
Start address of the available main memory
B
Total size of the program
C
Actual address of the data location
D
Absolute values of the operands used
Question 132 Explanation: 
  • The absolute loader is a kind of loader in which relocated object files are created, loader accepts these files and places them at a specified location in the memory.
  • This type of loader is called absolute loader because no relocating information is needed, rather it is obtained from the programmer or assembler.
  • The starting address of every module is known to the programmer, this corresponding starting address is stored in the object file then the task of loader becomes very simple that is to simply place the executable form of the machine instructions at the locations mentioned in the object file.
  • In this scheme, the programmer or assembler should have knowledge of memory management. The programmer should take care of two things:
    • Specification of starting address of each module to be used. If some modification is done in some module then the length of that module may vary. This causes a change in the starting address of immediate next modules, it's then the programmer's duty to make necessary changes in the starting address of respective modules.
    • While branching from one segment to another the absolute starting address of respective module is to be known by the programmer so that such address can be specified at respective JMP instruction.
Question 133
Top-down parsers are predictive parsers, because :
A
Next tokens are predicted.
B
Length of the parse tree is predicted beforehand
C
Lowest node in the parse tree is predicted.
D
Next lower level of the parse tree is predicted.
Question 133 Explanation: 
Top-down parsers come in two forms
  • Predictive Parsers
    • A predictive parser is a recursive descent parser that has the capability to predict which production is to be used to replace the input string.
    • It predicts the production rule to be applied using lookahead tokens.
  • Backtracking Parsers: try different productions, backing up when a parse fails
Top-down parsers are predictive parsers, because next tokens are predicted
Question 134
In the context of compiler design, “reduction in strength” refers to :
A
Code optimization obtained by the use of cheaper machine instructions
B
Reduction in accuracy of the output
C
Reduction in the range of values of input variables
D
Reduction in efficiency of the program
Question 134 Explanation: 
In compiler construction, strength reduction is a compiler optimization where expensive operations are replaced with equivalent but less expensive operations. The classic example of strength reduction converts "strong" multiplications inside a loop into "weaker" additions – something that frequently occurs in array addressing.

Other Examples of strength reduction include:
  • Replacing a multiplication within a loop with an addition
  • Replacing an exponentiation within a loop with a multiplication
Let take an Example:
Exponentiation is replaced by multiplication and multiplication is in return replaced by addition.
The below code is having multiplication operator:
a = 10;
for (j=0; j < X; j++)
{
 Z[j] = a * j;
}
This code can be replaced by the following code by replacing the multiplication with addition.
a = 10;
k = 0;
for (j=0; j < X; j++)
{
 Z[j] = k;
 k = k + a;
}
Question 135
The parsing technique that avoids backtracking is :
A
Top - down parsing
B
Recursive - descent parsing
C
Predicative
D
Syntax tree
Question 135 Explanation: 
  • Predictive parser is a recursive descent parser, which has the capability to predict which production is to be used to replace the input string.
  • The predictive parser does not suffer from backtracking.
  • To accomplish its tasks, the predictive parser uses a look-ahead pointer, which points to the next input symbols.
  • To make the parser back-tracking free, the predictive parser puts some constraints on the grammar and accepts only a class of grammar known as LL(k) grammar.
Question 136
A Top down Parser generates :
A
Rightmost derivation.
B
Rightmost derivation, in reverse.
C
Leftmost derivation.
D
Leftmost derivation in reverse.
Question 136 Explanation: 
Top-down parser generates the left-most derivation. It constructs the parse tree from left to right and constructs the left-most derivation of the specified sentence.
Question 137
Symbol table can be used for :
A
Checking type compatibility
B
Suppressing duplication of error message
C
Storage allocation
D
All of these above
Question 137 Explanation: 
Symbol table can be used for
(A) Checking type compatibility
(B) Suppressing duplication of error message
(C) Storage allocation
Question 138
Heap allocation is required for languages that
A
Use dynamic scope rules
B
Support dynamic data structures
C
Support recursion
D
Support recursion and dynamic data structures
Question 138 Explanation: 
Heap allocation is required for languages that support dynamic data structures.
Question 139
The ‘K’ in LR(K) cannot be :
A
0
B
1
C
2
D
None of these
Question 139 Explanation: 
  • LR Parsers also known as LR(k) parsers, where L stands for left to right scanning of the input stream and R stands for the construction of right most derivation in reverse and k denotes the number of lookahead symbols to make decisions.
  • In LR(k), k is for the number of input symbols of lookahead that are used in parsing decisions. Main cases in this are when k = 0 or k =1. We mostly consider the cases where k<= 1. Also, when k is omitted, value of k should be taken as 1.
  • LR parsers can recognize virtually all programming-language constructs that can be described by a context-free grammar.