PLC Report

Group 11

Programming Language C++

Members:

  1. Owen Prasetya Gunawan / 2001558403 / Azurerlos.blog.binusian.org
  2. Stefandi Glivert / 2001559072 / Stefandi.blog.binusian.org
  3. Linando Saputra / 2001559103 / terralio.blog.binusian.org
  4. Nico Richardo / 2001559135 / nicorich.blog.binusian.org
  5. Kevin Fausta / 2001559141 / kevinf.blog.binusian.org

 

Session 1: Introduction

 

A. Reasons for studying Concepts of Programming Languages

  • Increased ability to express ideas
  • Improved background for choosing appropriate languages
  • Increased ability to learn new languages
  • Better understanding of the significance of implementation
  • Better use of languages that are already known
  • Overall advancement of computing

 

B. Programming Domains

  • Scientific Computing

– mathematical modeling and engineering design applications

– the first application domain for computers and programming languages

– characterized by: floating point computations, array, matrix manipulations, need for efficiency (number crunching)

– languages: Fortran, ALGOL 60, parallel machine architectures and programming languages

 

  • Business Applications

– information processing and reporting applications

– characterized by: report generation facilities, decimal number manipulations ($), database interoperability, accessibility to nonprogrammers

– languages: COBOL, also affected design of Visual Basic and 4GLs (Delphi, PL/SQL)

 

  • Artificial Intelligence

– manipulating symbols instead of numbers

– characterized by: list (not array) processing, dynamic typing, self-modifying code, flexibility

– example applications: natural language processing, theorem proving, automated integration & differentiation

– languages: LISP (and other functional languages), Prolog

 

  • Systems Programming

– developing operating systems, compilers, database systems, …

– characterized by: emphasis on efficiency, ability to interface with hardware (low-level), few safety checks/restrictions

– languages: C (for implementing Unix), other specialized languages

 

  • Web Software

– writing applications for the World Wide Web, from simple documents to dynamic client and server programs

– languages: an eclectic collection, e.g. markup (HTML), scripting (JavaScript, PHP), and general-purpose languages (Java)

C. Language Evaluation Criteria

  • Readability: the ease with which programs can be read and understood.
  • Writability: the ease with which a language can be used to create programs.
  • Reliability: conformance to specifications (i.e., performs to its specifications).
  • Cost: the ultimate total cost.

D. Influences on Language Design

  • Computer architecture: Von Neumann.
  • We use imperative languages, at least in part, because we use von Neumann machines.
    • Data and programs stored in same memory.
    • Memory is separate from CPU.
    • Instructions and data are piped from memory to CPU.
    • Basis for imperative languages
    •  Variables model memory cells.
    •  Assignment statements model piping.
    •  Iteration is efficient.

 

  • Fetch-execute-cycle (on a von Neumann architecture computer)

initialize the program counter

repeat forever

    fetch the instruction pointed by the counter
    increment the counter
    decode the instruction
    execute the instruction

end repeat

  • “von Neumann Bottleneck”: the speed of the connection between memory and the CPU limits how fast instructions and data can be moved, and therefore limits the speed of program execution.

E. Language Categories

  • Imperative (e.g., C and Pascal)
  • A language which operates by a sequence of commands that change the value of data elements.
  • Central features are variables, assignments statements, and iteration.
  • Includes visual languages (Visual Basic .NET) and scripting languages (JavaScript, Perl, and Ruby).
  • Functional (e.g., LISP and Scheme)
  • A functional language is one that operates by use of higher-order functions, building operators that manipulate functions directly without even appearing to manipulate data.
  • Main means of making computations is by applying functions to given parameters.
  • Logic (e.g., Prolog)
  • Rule-based where rules are specified in no special order.
  • Language implementation decides the execution order that produces the desired result.
  • Object-oriented (e.g., SmallTalk, C++ and C#)
  • A language in which data and the functions which access it are treated as a unit.
  • Encapsulate data objects with processing.
  • Inheritance and dynamic type binding.
  • Grew out of imperative languages.

F. Implementation Methods

  • Compilation
  • Programs are translated into machine language; includes JIT systems.
  • Use: Large commercial applications.
  • Translate high-level program (source language) into machine code (machine language).
  • Slow translation, fast execution.
  • Compilation process has several phases:
  • lexical analysis: converts characters in the source program into lexical units (tokens).
  • syntax analysis: transforms lexical units into parse trees which represent the syntactic structure of the program.
  • semantic analysis: checks static semantics and generates intermediate code.
  • code generation: machine code is generated.

The Compilation Process

  • Pure Interpretation
  • Programs are interpreted by another program known as an interpreter.
  • Use: Small programs or when efficiency is not an issue.
  • No translation.
  • Easier implementation of programs (run-time errors can easily and immediately be displayed).
  • Slower execution (10 to 100 times slower than compiled programs).
  • Often requires more space.
  • Now rare for traditional high-level languages.
  • Significant comeback with some Web scripting languages (e.g., JavaScript, PHP).

Pure Interpretation Process

 

  • Hybrid Implementation Systems
  • A compromise between compilers and pure interpreters.
  • Use: Small and medium systems when efficiency is not the first concern.
  • A high-level language program is translated to an intermediate language that allows easy interpretation
  • Faster than pure interpretation
  • Examples:
  • Perl programs are partially compiled to detect errors before interpretation
  • Initial implementations of Java were hybrid; the intermediate form, byte code, provides portability to any machine that has a byte code interpreter and a run-time system (together, these are called Java Virtual Machine)

Hybrid Implementation Process

Layered View of Computer

 

Source(s) :

users.dickinson.edu/~wahlst/356/ch1.pdf

https://prezi.com/nvowp–hkiu4/programming-domains/

http://zakarum.tistory.com/entry/Language-Design

http://zakarum.tistory.com/entry/Language-Categories

https://www8.cs.umu.se/kurser/5DV086/HT09/utdelat/PLC1.pdf

Slide Binus Maya

Session 2: Describing Syntax and Semantics

 

A. Definition of Syntax

The word syntax comes from Ancient Greek: σύνταξις “coordination”, which consists of σύν syn, “together,” and τάξις táxis, “an ordering”.

In computer science, the syntax of a computer language is the set of grammatical rules that defines the combinations of words and symbols that are considered to be a correctly structured document or fragment in that language.

The syntax of a language defines its surface form. Text-based computer languages are based on sequences of characters, while visual programming languages are based on the spatial layout and connections between symbols (which may be textual or graphical). Documents that are syntactically invalid are said to have a syntax error.

B. Syntax in C++

The syntax of textual programming languages is usually defined using a combination of regular expressions (for lexical structure) and Backus-Naur Form (for grammatical structure) to inductively specify syntactic categories (nonterminals) and terminal symbols.

The following are the examples of basic C++ keywords syntax:

  1. #include, which is used to insert the library of functions we are going to use in our program. For example, #include <iostream>, which includes the standard input and output codes.
  2. int main(), which is called first by the program after the initialization. A statement return 0; should be written at the end of the code inside int main(). void main() is accepted by some compilers (without a return 0; at the end), but it is not standard C++.
  3. cout, which is used to print output on the screen.
  4. cin, which is used to receive inputted data by user and store it as a variable.
  5. int, float, char, etc., which are used to declare a variable with a data type.
  6. if, which is used in selection and will execute the codes if the condition is true.
  7. while, for, which are used in looping/repetition.

Important things to note:

  1. In C++, the semicolon is a statement terminator. That is, each individual statement must be ended with a semicolon. It indicates the end of one logical entity.
  2. A block is a set of logically connected statements that are surrounded by opening and closing braces.
  3. C++ is a case-sensitive programming language; as such, "cout" and "Cout" are different.
  4. In C++, a comment is made with a double slash (//), which marks the rest of the line as a comment, or with /* */, which marks everything in between (possibly several lines) as a comment.

Example of a program with correct syntax in C++:

#include <iostream>                // Inserting the library for input and output

using namespace std;

int main()                         // Called first by the program after the initialization
{                                  // Opening brace
    int a;                         // Declaring variable "a" with the integer data type
    cout << "Masukkan nilai a : "; // Printing "Masukkan nilai a : " on the screen
    cin >> a;                      // Asking input from the user and storing it in variable a
    return 0;                      // Because we use "int main()" we write a "return 0;"
}                                  // Closing brace

 

C. Definition of Semantic

Semantics (from Ancient Greek: σημαντικός sēmantikos, “significant”) is primarily the linguistic, and also philosophical study of meaning—in language, programming languages, formal logics, and semiotics. It focuses on the relationship between signifiers—like words, phrases, signs, and symbols—and what they stand for, their denotation.

In computer science, the term semantics refers to the meaning of languages, as opposed to their form (syntax). According to Euzenat, semantics “provides the rules for interpreting the syntax which do not provide the meaning directly but constrains the possible interpretations of what is declared.” In other words, semantics is about interpretation of an expression. Additionally, the term is applied to certain types of data structures specifically designed and used for representing information content.

D. Approaches of Semantics

There are many approaches to formal semantics; these belong to three major classes:

  • Denotational semantics, whereby each phrase in the language is interpreted as a denotation, i.e. a conceptual meaning that can be thought of abstractly. Such denotations are often mathematical objects inhabiting a mathematical space, but it is not a requirement that they should be so. As a practical necessity, denotations are described using some form of mathematical notation, which can in turn be formalized as a denotational metalanguage. For example, denotational semantics of functional languages often translate the language into domain theory.
  • Operational semantics, whereby the execution of the language is described directly (rather than by translation). Operational semantics loosely corresponds to interpretation, although again the “implementation language” of the interpreter is generally a mathematical formalism. Operational semantics may define an abstract machine (such as the SECD machine), and give meaning to phrases by describing the transitions they induce on states of the machine.
  • Axiomatic semantics, whereby one gives meaning to phrases by describing the logical axioms that apply to them. Axiomatic semantics makes no distinction between a phrase’s meaning and the logical formulas that describe it; its meaning is exactly what can be proven about it in some logic. The canonical example of axiomatic semantics is Hoare logic.

Source(s) :

https://en.wikipedia.org/wiki/Syntax_(programming_languages)#Example:_Lisp

https://www.tutorialspoint.com/cplusplus/cpp_basic_syntax.htm

https://en.wikipedia.org/wiki/Semantics_(computer_science)

www.letu.edu/people/stevearmstrong/…/chapter%202.ppt

Slide Binus Maya

 

Session 3: Names, Bindings, and Scopes

A. Variables

A variable is an abstraction of a memory cell. The lifetime of a variable is the time during which it is bound to a particular memory cell.

A variable can be characterized by a collection of properties, or attributes, the most important of which is type, a fundamental concept in programming languages.

Example: in the declaration int angka; the identifier "angka" is the variable name and "int" is its type.

Variables can be characterized as a sextuple of attributes:

– Name            – Address        – Value            – Type              – Lifetime        – Scope

 

Categories of Variables by Lifetimes

  1. Static — bound to memory cells before execution begins and remains bound to the same memory cell throughout execution, e.g., C and C++ static variables in functions.

Advantages     : Efficiency (direct addressing), history-sensitive subprogram support

Disadvantage  : Lack of flexibility (no recursion)

  2. Stack-dynamic - Storage bindings are created for variables when their declaration statements are elaborated. If scalar, all attributes except address are statically bound, e.g. local variables in C subprograms (not declared static) and Java methods.

Advantage        : Allows recursion, conserves storage

Disadvantages  : Overhead of allocation and deallocation, subprograms cannot be history sensitive, inefficient references (indirect addressing)

  3. Explicit heap-dynamic - Allocated and deallocated by explicit directives, specified by the programmer, which take effect during execution. Referenced only through pointers or references, e.g. dynamic objects in C++ (via new and delete), all objects in Java.

Advantage      : provides for dynamic storage management

Disadvantage  : inefficient and unreliable

  4. Implicit heap-dynamic - Allocation and deallocation caused by assignment statements, e.g. all variables in APL; all strings and arrays in Perl, JavaScript, and PHP.

Advantage: flexibility (generic code)

Disadvantages: Inefficient, because all attributes are dynamic; loss of error detection
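A minimal C++ sketch illustrating three of these lifetime categories; the names are only for illustration:

#include <iostream>

int globalCount = 0;               // static: bound to storage for the whole execution

void demo()
{
    static int calls = 0;          // static local: keeps its value between calls
    int local = 0;                 // stack-dynamic: allocated when demo() is entered
    int *heap = new int(42);       // explicit heap-dynamic: lives until delete
    calls++;
    local++;
    std::cout << calls << " " << local << " " << *heap << std::endl;
    delete heap;                   // explicit deallocation
}

int main()
{
    demo();
    demo();                        // static "calls" keeps counting (2), stack-dynamic "local" starts fresh each call
    return 0;
}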

B. Names

A name is a string of characters used to identify some entity in a program. Within a given scope a name must be unique, so that it refers to exactly one entity. Variable names are the most common names in programs. Names are often referred to as identifiers.

  1. Length
  • If too short, they cannot be connotative
  • In C++ no limit, but implementers often impose one
  2. Special Characters
  • In C++, a name cannot start with a symbol or a number; the first character must be a letter or an underscore ("_"), and no space (" ") is permitted, so an underscore is commonly used in place of a space.
  3. Case Sensitivity
  • Names in the C-based languages are case sensitive.
  • Disadvantage: readability suffers, because names that look alike can be different, e.g. "A" is different from "a".

C. Special Words

Special words in programming languages are used to make programs more readable by naming actions to be performed.  They also are used to separate the syntactic entities of programs.

A keyword is a word of a programming language that is special only in certain contexts; elsewhere it can still be used as an identifier or variable name, e.g. final and override in C++.

A reserved word is a special word of a programming language that cannot be used as a user-defined name, e.g. for, int, if.
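A small C++ sketch of the difference: final and override are keywords only in certain contexts, while a reserved word such as int can never be reused as a name.

#include <iostream>

struct Base {
    virtual void run() {}
};

struct Derived final : Base {      // "final" is special here: no further derivation allowed
    void run() override {}         // "override" is special here
};

int main()
{
    int final = 1;                 // legal: "final" is only a contextual keyword
    int override = 2;              // legal as well
    // int int = 3;                // illegal: "int" is a reserved word
    std::cout << final + override << std::endl;
    return 0;
}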

D. Bindings

A binding is an association between an entity and an attribute, such as between a variable and its type or value, or between an operation and a symbol. Binding time is the time at which a binding takes place.

Binding Time :

  • Language design time: bind operator symbols to operations
  • Language implementation time: bind floating point type to a representation
  • Compile time: bind a variable to a type in C or Java
  • Load time: bind a C or C++ static variable to a memory cell
  • Runtime: bind a nonstatic local variable to a memory cell

Storage Bindings

  • Allocation – getting a cell from some pool of available cells
  • Deallocation – putting a cell back into the pool

E. Scopes

The scope of a variable is the range of statements over which it is visible. The local variables of a program unit are those that are declared in that unit; they are known only in that unit and cannot be used in a different unit. The nonlocal variables of a program unit are those that are visible in the unit but not declared there. Global variables are a special category of nonlocal variables: they are visible in every unit and are usually declared before any function or block. The scope rules of a language determine how references to names are associated with variables.

Example :

#include <stdio.h>

int JumlahSiswa;               /* global variable: declared outside any unit; not declared inside NamaSiswa() or main(), but visible and usable in both */

void NamaSiswa()               /* one scope (unit) */
{
    char Nama[30];             /* local variable: only known and usable in this unit */
    getchar();
}

int main()                     /* another scope (unit) */
{
    int NilaiSiswa;            /* local variable: only known and usable in this unit */
    getchar();
    return 0;
}

  1. Static Scope
  • Based on program text
  • To connect a name reference to a variable, you (or the compiler) must find the declaration
  • Search process: search declarations, first locally, then in increasingly larger enclosing scopes, until one is found for the given name
  • Enclosing static scopes (to a specific scope) are called its static ancestors; the nearest static ancestor is called a static parent
  • Some languages allow nested subprogram definitions, which create nested static scopes (e.g., Ada, JavaScript, Common LISP, Scheme, Fortran 2003+, F#, and Python)
  • Works well in many situations
  • Problems: in most cases, too much access is possible; as a program evolves, the initial structure is destroyed and local variables often become global; subprograms also gravitate toward becoming global, rather than nested.

 

  2. Dynamic Scope
  • Based on calling sequences of program units, not their textual layout (temporal versus spatial)
  • References to variables are connected to declarations by searching back through the chain of subprogram calls that forced execution to this point

Example :

function big()
{
    function sub1()
    {
        var x = 7;
        sub2();
    }

    function sub2()
    {
        var y = x;
    }

    var x = 3;
    sub1();
}

Static scoping: the reference to x in sub2 is to big's x (value 3), because the declaration is found by searching the enclosing program text.

Dynamic scoping: the reference to x in sub2 is to sub1's x (value 7), because sub1 is the most recent caller that declares an x.

  3. Global Scopes

C, C++, PHP, and Python support a program structure that consists of a sequence of function definitions in a file. These languages allow variable declarations to appear outside function definitions.

C and C++ have both declarations (just attributes) and definitions (attributes and storage). A declaration outside a function definition specifies that it is defined in another file.
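A two-file C++ sketch of that distinction (the file names and the counter variable are only illustrative):

// file1.cpp
int counter = 0;               // definition: attributes and storage

// file2.cpp
extern int counter;            // declaration only: counter is defined in another file

int increase()
{
    return ++counter;          // uses the variable defined in file1.cpp
}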

Source(s) :

http://groups.engin.umd.umich.edu/CIS/course.des/cis400/maxim/lectures/chp4.htm

Slide Binus Maya

Session 4: Data Types

 

A. Introduction

  • A data type defines a collection of data objects and a set of predefined operations on those objects.
  • A descriptor is the collection of the attributes of a variable.
  • An object represents an instance of a user-defined (abstract data) type.

 

B. Primitive Data Type

  • Almost all programming languages provide a set of primitive data types.
  • Primitive data types: Those not defined in terms of other data types.
  • Some primitive data types are merely reflections of the hardware.
  • Others require only a little non-hardware support for their implementation.
  • Primitive data types in C++ include:
  • Boolean (bool).
  • Character (char) and Wide Character (wchar_t).
  • Integer (int).
  • Floating Point (float) and Double Floating Point (double).
  • Valueless (void).

 

C. Character String Types

  • Values are sequences of characters.
  • Typical operations:
  • Assignment and copying.
  • Comparison (=, >, etc.).
  • Substring reference.
  • Pattern matching.

 

D. User-Defined Ordinal Types

  • An ordinal type is one in which the range of possible values can be easily associated with the set of positive integers
  • Enumeration Types:
  • All possible values, which are named constants, are provided in the definition.
  • C++ Example:
  • enum days {mon, tue, wed, thu, fri, sat, sun};
  • Evaluation of Enumerated Type
  • Aid to readability, e.g., no need to code a color as a number.
  • Aid to reliability, e.g., compiler can check:
  • Operations (don’t allow colors to be added).
  • No enumeration variable can be assigned a value outside its defined range.
  • Ada, C#, and Java 5.0 provide better support for enumeration than C++ because enumeration type variables in these languages are not coerced into integer types.
  • Implementation of User-Defined Ordinal Types
  • Enumeration types are implemented as integers.
  • Subrange types are implemented like the parent types with code inserted (by the compiler) to restrict assignments to subrange variables.

E. Array Types

  • An array is a homogeneous aggregate of data elements in which an individual element is identified by its position in the aggregate, relative to the first element.
  • Subscript Binding and Array Categories
  • C and C++ arrays that include static modifier are static.
  • C and C++ arrays without static modifier are fixed stack-dynamic.
  • C and C++ provide fixed heap-dynamic arrays.
  • Array Initialization:
  • C, C++, Java, C# example:

int list [] = {4, 5, 7, 83};

  • Character strings in C and C++:

char name [] = "freddie";

  • Arrays of strings in C and C++:

char *names [] = {"Bob", "Jake", "Joe"};

  • Heterogeneous Arrays
  • A heterogeneous array is one in which the elements need not be of the same type
  • Array Initialization:
  • int list [] = {1, 3, 5, 7};
  • char *names [] = {"Mike", "Fred", "Mary Lou"};
  • Slices
  • A slice is some substructure of an array; nothing more than a referencing mechanism.
  • Slices are only useful in languages that have array operations.

Example:

#include <iostream>     // std::cout
#include <cstddef>      // std::size_t
#include <valarray>     // std::valarray, std::slice

int main ()
{
    std::valarray<int> foo (12);
    for (int i=0; i<12; ++i) foo[i]=i;

    std::valarray<int> bar = foo[std::slice(2,3,4)];

    std::cout << "slice(2,3,4):";
    for (std::size_t n=0; n<bar.size(); n++)
        std::cout << ' ' << bar[n];
    std::cout << '\n';

    return 0;
}

  • Implementation Of Arrays
  • Access function maps subscript expressions to an address in the array.
  • Access function for single-dimensioned arrays:

address(list[k]) = address(list[lower_bound]) + ((k - lower_bound) * element_size)
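As a small worked example (assuming list[lower_bound] is stored at address 1000, lower_bound is 0, and each element occupies 4 bytes): address(list[3]) = 1000 + (3 - 0) * 4 = 1012.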

 

 

  • Accessing Multi-Dimensioned Arrays
  • Two common ways:
  • Row major order (by rows) – used in most languages
  • Column major order (by columns) – used in Fortran
  • A compile-time descriptor for a multidimensional array

 

 

 

  • Compile-Time Descriptors

 

                        Single-Dimensioned array                                        Multidimensional array

F. Associative Arrays

  • An associative array is an unordered collection of data elements that are indexed by an equal number of values called keys.
  • Example:

#include <iostream>
#include <map>
#include <string>
#include <algorithm>
#include <cstdlib>
#include <iomanip>
#include <vector>

using namespace std;

int main()
{
    map<string, map<string, vector<string> > > data;

    data["plants"]["trees"].push_back("Elm");
    data["plants"]["trees"].push_back("Pine");
    data["plants"]["trees"].push_back("Ash");
    data["plants"]["trees"].push_back("Birch");
    data["plants"]["trees"].push_back("Ceder");

    vector<string>::iterator it;

    for (it = data["plants"]["trees"].begin();
         it != data["plants"]["trees"].end();
         it++)
    {
        cout << "Kind of tree: " << *it << endl;
    }

    return 0;
}

 

G. Record Types

  • A record is a possibly heterogeneous aggregate of data elements in which the individual elements are identified by names
  • Example:

Student (idnumber, name, status, credits, gpa)

enum StatusType {FULLTIME, PARTTIME, NONDEGREE, GRAD};

struct StudentType {
    int idNumber;
    char name[30];
    StatusType status;
    float credits;
    float gpa;
};

StudentType student1;
StudentType student2;

  • Implementation of Record Types
    – An offset address, relative to the beginning of the record, is associated with each field.

H. Tuple Types

  • A tuple is a data type that is similar to a record, except that the elements are not named.
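A small C++11 sketch using std::tuple from the standard library; the values are only illustrative:

#include <iostream>
#include <tuple>
#include <string>

int main()
{
    // A tuple groups values of different types, but the elements have no names
    std::tuple<int, std::string, float> student(12345, "Alice", 3.5f);

    // Elements are accessed by position, not by name
    std::cout << std::get<0>(student) << " "
              << std::get<1>(student) << " "
              << std::get<2>(student) << std::endl;
    return 0;
}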

I. List Types

  • Lists in LISP and Scheme are delimited by parentheses and use no commas: (A B C D) and (A (B C) D)
  • Data and code have the same form:
    – As data, (A B C) is literally what it is.
    – As code, (A B C) is the function A applied to the parameters B and C.
  • The interpreter needs to know whether a list is data or code, so if it is data we quote it with an apostrophe: '(A B C) is data.
  • List Operations in Scheme:
    – CAR returns the first element of its list parameter: (CAR '(A B C)) returns A
    – CDR returns the remainder of its list parameter after the first element has been removed: (CDR '(A B C)) returns (B C)
    – CONS puts its first parameter into its second parameter, a list, to make a new list: (CONS 'A '(B C)) returns (A B C)
    – LIST returns a new list of its parameters: (LIST 'A 'B '(C D)) returns (A B (C D))

J. Union Types

  • A union is a type whose variables are allowed to store different type values at different times during execution.
  • Implementation of Unions (Ada example):

type Node (Tag : Boolean) is
  record
    case Tag is
      when True => Count : Integer;
      when False => Sum : Float;
    end case;
  end record;
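A minimal C++ counterpart, a free union that C++ does not type check:

#include <iostream>

union Value {
    int count;     // used when the value is an integer
    float sum;     // used when the value is floating point
};

int main()
{
    Value v;
    v.count = 10;                       // store an int
    std::cout << v.count << std::endl;
    v.sum = 2.5f;                       // now reuse the same storage for a float
    std::cout << v.sum << std::endl;
    return 0;
}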

K. Pointer and Reference Types

  • A pointer type variable has a range of values that consists of memory addresses and a special value, nil.
  • Pointers provide the power of indirect addressing.
  • Pointers provide a way to manage dynamic memory: a pointer can be used to access a location in the area where storage is dynamically created (usually called a heap).
  • Pointer Operations:
    – Two fundamental operations: assignment and dereferencing.
    – Assignment is used to set a pointer variable's value to some useful address.
    – Dereferencing yields the value stored at the location represented by the pointer's value. Dereferencing can be explicit or implicit; C++ uses an explicit operation via *, so j = *ptr sets j to the value located at ptr.
  • Problems with Pointers:
    – Dangling pointers (dangerous): a pointer points to a heap-dynamic variable that has been deallocated.
    – Lost heap-dynamic variable: an allocated heap-dynamic variable that is no longer accessible to the user program (often called garbage).
  • Pointers in C and C++:
    – Extremely flexible but must be used with care.
    – Pointers can point at any variable regardless of when or where it was allocated.
    – Used for dynamic storage management and addressing.
    – Pointer arithmetic is possible.
    – Explicit dereferencing and address-of operators.
    – Domain type need not be fixed (void *): a void * can point to any type and can be type checked, but cannot be dereferenced.
  • Pointer Arithmetic in C and C++:

float stuff[100];
float *p;
p = stuff;

*(p+5) is equivalent to stuff[5] and p[5]
*(p+i) is equivalent to stuff[i] and p[i]

  • Reference Types:
    – C++ includes a special kind of pointer type called a reference type that is used primarily for formal parameters.
    – Java extends C++'s reference variables and allows them to replace pointers entirely.
    – C# includes both the references of Java and the pointers of C++.
  • Representations of Pointers:
    – Large computers use single values.
    – Intel microprocessors use segment and offset.
  • Dangling Pointer Problem:
    – Tombstone: an extra heap cell that is a pointer to the heap-dynamic variable. The actual pointer variable points only at tombstones; when the heap-dynamic variable is deallocated, the tombstone remains but is set to nil. Costly in time and space.
    – Locks-and-keys: pointer values are represented as (key, address) pairs. Heap-dynamic variables are represented as the variable plus a cell for an integer lock value; when a heap-dynamic variable is allocated, a lock value is created and placed in the lock cell and in the key cell of the pointer.
  • Heap Management:
    – A very complex run-time process.
    – Single-size cells vs. variable-size cells.
    – Two approaches to reclaim garbage: reference counters (eager approach), where reclamation is gradual, and mark-sweep (lazy approach), where reclamation occurs when the list of available space becomes empty.
  • Reference Counters: maintain a counter in every cell that stores the number of pointers currently pointing at the cell.
    – Disadvantages: space required, execution time required, complications for cells connected circularly.
    – Advantage: it is intrinsically incremental, so significant delays in the application execution are avoided.
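A short C++ sketch of the two fundamental pointer operations and of how a dangling pointer arises:

#include <iostream>

int main()
{
    int x = 7;
    int *ptr = &x;                 // assignment: ptr now holds the address of x
    int j = *ptr;                  // dereferencing: j is set to the value located at ptr (7)
    std::cout << j << std::endl;

    int *heap = new int(42);       // pointer to a heap-dynamic variable
    delete heap;                   // heap is now a dangling pointer
    heap = nullptr;                // common defensive fix: reset the pointer
    return 0;
}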

 

L. Type Checking

  • Generalize the concept of operands and operators to include subprograms and assignments.
  • Type checking is the activity of ensuring that the operands of an operator are of compatible types.
  • A compatible type is one that is either legal for the operator, or is allowed under language rules to be implicitly converted, by compiler-generated code, to a legal type.
    – This automatic conversion is called a coercion.
  • A type error is the application of an operator to an operand of an inappropriate type.
  • If all type bindings are static, nearly all type checking can be static.
  • If type bindings are dynamic, type checking must be dynamic.
  • A programming language is strongly typed if type errors are always detected.
  • Advantage of strong typing: allows the detection of the misuses of variables that result in type errors.

M. Strong Typing

  • Language examples:
    – C and C++ are not strongly typed: parameter type checking can be avoided; unions are not type checked.
    – Ada is, almost (UNCHECKED_CONVERSION is a loophole); Java and C# are similar to Ada.
  • Coercion rules strongly affect strong typing; they can weaken it considerably (C++ versus Ada).
  • Although Java has just half the assignment coercions of C++, its strong typing is still far less effective than that of Ada.

Source(s) :

http://jcsites.juniata.edu/faculty/rhodes/cs2/ch05a.htm

http://www.dreamincode.net/forums/topic/67804-c-multidimensional-associative-arrays/

http://www.cplusplus.com/reference/valarray/slice/

Slide Binus Maya

 

Session 5: Expressions and Assignment Statements

 

A. Introduction

  • Expressions are the fundamental means of specifying computations in a programming language.
  • To understand expression evaluation, need to be familiar with the orders of operator and operand evaluation.
  • Essence of imperative languages is dominant role of assignment statements.

 

B. Arithmetic Expressions

  • Their evaluation was one of the motivations for the development of the first programming languages.
  • Most of the characteristics of arithmetic expressions in programming languages were inherited from conventions that had evolved in math.
  • Arithmetic expressions consist of operators, operands, parentheses, and function calls.
  • The operators can be unary, or binary. C-based languages include a ternary operator.
  • The purpose of an arithmetic expression is to specify an arithmetic computation.
  • An implementation of such a computation must cause two actions:
    • Fetching the operands from memory
    • Executing the arithmetic operations on those operands.
  • Design issues for arithmetic expressions:
  1. What are the operator precedence rules?
  2. What are the operator associativity rules?
  3. What is the order of operand evaluation?
  4. Are there restrictions on operand evaluation side effects?
  5. Does the language allow user-defined operator overloading?
  6. What mode mixing is allowed in expressions?

C. Operator Evaluation Order

  1. Precedence
    1. The operator precedence rules for expression evaluation define the order in which “adjacent” operators of different precedence levels are evaluated (“adjacent” means they are separated by at most one operand).
    2. Typical precedence levels:
       • parentheses
       • unary operators
       • ** (if the language supports it)
       • *, /
       • +, -
    3. Many languages also include unary versions of addition and subtraction.
    4. Unary addition is called the identity operator because it usually has no associated operation and thus has no effect on its operand.
    5. In Java, unary plus actually does have an effect when its operand is short or byte. An implicit conversion of short and byte operands to int type takes place.
    6. Ex:

 

A + (- B) * C         // is legal

A + - B * C           // is illegal in some languages (two adjacent operators)

 

  2. Associativity
  • The operator associativity rules for expression evaluation define the order in which adjacent operators with the same precedence level are evaluated. “An operator can be either left or right associative.”
  • Typical associativity rules:
    • Left to right, except **, which is right to left
    • Sometimes unary operators associate right to left (e.g., FORTRAN)
  • Ex: (Java)

 

a – b + c                // left to right

 

  • Ex: (Fortran)

 

A ** B ** C                      // right to left

 

(A ** B) ** C                   // In Ada it must be parenthesized

 

  • Precedence and associativity rules can be overridden with parentheses.

 

  3. Parentheses
  • Programmers can alter the precedence and associativity rules by placing parentheses in expressions.
  • A parenthesized part of an expression has precedence over its adjacent un-parenthesized parts.
  • Ex:

 

(A + B) * C

 

  4. Conditional Expressions
  • Sometimes if-then-else statements are used to perform a conditional expression assignment.

 

if (count == 0)

average = 0;

else

average = sum / count;

 

  • In the C-based languages, this can be specified more conveniently in an assignment statement using a conditional expression:

 

average = (count == 0) ? 0 : sum / count;

Operand evaluation order

  • The process:
  1. Variables: just fetch the value from memory.
  2. Constants: sometimes a fetch from memory; sometimes the constant is in the machine language instruction.
  3. Parenthesized expressions: evaluate all operands and operators first.

 

  1. Side Effects
  • A side effect of a function, called a functional side effect, occurs when the function changes either one of its parameters or a global variable.
  • Ex:

a + fun(a)

 

  • If fun does not have the side effect of changing a, then the order of evaluation of the two operands, a and fun(a), has no effect on the value of the expression.
  • However, if fun changes a, there is an effect.
  • Ex:

Consider the following situation: fun returns the value of its argument

divided by 2 and changes its parameter to have the value 20, and:

a = 10;

b = a + fun(a);

 

  • If the value of a is returned first (in the expression evaluation process), its value is 10 and the value of the expression is 15.
  • But if the second is evaluated first, then the value of the first operand is 20 and the value of the expression is 25.
  • The following shows a C program which illustrate the same problem.

 

int a = 5;

int fun1() {

a = 17;

return 3;

}

void fun2() {

a = a + fun1();

}

void main() {

fun2();

}

 

  • The value computed for a in fun2 depends on the order of evaluation of the operands in the expression a + fun1(). The value of a will be either 8 or 20.
  • Two possible solutions:
  1. Write the language definition to disallow functional side effects
    • No two-way parameters in functions.
    • No non-local references in functions.
    • Advantage: it works!
    • Disadvantage: Programmers want the flexibility of two-way parameters (what about C?) and non-local references.

 

  2. Write the language definition to demand that operand evaluation order be fixed
    • Disadvantage: limits some compiler optimizations

 

Java guarantees that operands are evaluated in left-to-right order, eliminating this problem.

D. Overloaded Operators

  • The use of an operator for more than one purpose is operator overloading.
  • Some are common (e.g., + for int and float).
  • Java uses + for addition and for string catenation.
  • Some are potential trouble (e.g., & in C and C++)

 

x = &y       // as a binary operator, & is bitwise logical AND;
             // as a unary operator, & is the address of y

  • Causes the address of y to be placed in x.
  • Some loss of readability to use the same symbol for two completely unrelated operations.
  • The simple keying error of leaving out the first operand for a bitwise AND operation can go undetected by the compiler “difficult to diagnose”.
  • Can be avoided by introduction of new symbols (e.g., Pascal’s div for integer division and / for floating point division)
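A brief C++ sketch of user-defined operator overloading; the Complex type and its fields are only illustrative:

#include <iostream>

struct Complex {
    double re, im;
};

// Overloading + so the same symbol also works for our user-defined type
Complex operator+(const Complex &a, const Complex &b)
{
    Complex r;
    r.re = a.re + b.re;
    r.im = a.im + b.im;
    return r;
}

int main()
{
    Complex a = {1.0, 2.0};
    Complex b = {3.0, 4.0};
    Complex c = a + b;              // + now has a user-defined meaning for Complex
    std::cout << c.re << " + " << c.im << "i" << std::endl;
    return 0;
}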

E. Type Conversions

  • A narrowing conversion is one that converts an object to a type that cannot include all of the values of the original type e.g., float to int.
  • A widening conversion is one in which an object is converted to a type that can include at least approximations to all of the values of the original type e.g., int to float.
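A small C++ sketch of both kinds of conversion:

#include <iostream>

int main()
{
    int i = 7;
    float f = i;                    // widening conversion: int to float
    float g = 9.99f;
    int j = (int) g;                // narrowing conversion (explicit cast): value becomes 9
    std::cout << f << " " << j << std::endl;
    return 0;
}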

F. Coercion in Expressions

  • A mixed-mode expression is one that has operands of different types.
  • A coercion is an implicit type conversion.
  • The disadvantage of coercions:
    • They decrease the type error detection ability of the compiler.
  • In most languages, all numeric types are coerced in expressions, using widening conversions.
  • Languages are not in agreement on the issue of coercions in arithmetic expressions.
  • Those against a broad range of coercions are concerned with the reliability problems that can result from such coercions, because they eliminate the benefits of type checking.
  • Those who would rather include a wide range of coercions are more concerned with the loss in flexibility that results from restrictions.
  • The issue is whether programmers should be concerned with this category of errors or whether the compiler should detect them.

 

  • Ex:

 

void mymethod() {

int a, b, c;

float d;

a = b * d;

}

 

  • Assume that the second operand was supposed to be c instead of d.
  • Because mixed-mode expressions are legal in Java, the compiler wouldn’t detect this as an error.  Simply, b will be coerced to float.
  • Explicit Type Conversions
  • Often called casts in C-based languages.

G. Errors in Expressions

  • Caused by:
    • Inherent limitations of arithmetic, e.g. division by zero
    • Limitations of computer arithmetic, e.g. overflow or underflow
  • Such errors are often ignored by the run-time system.

 

H. Relational and Boolean Expressions

  • Relational Expressions: has two operands and one relational operator.
  • The value of a relational expression is Boolean, unless Boolean is not a type included in the language.
    • Use relational operators and operands of various types.
    • Operator symbols used vary somewhat among languages (!=, /=, .NE., <>, #)
  • The syntax of the relational operators available in some common languages is as follows:

 

Operation               Ada       C-Based Languages     Fortran 95
Equal                   =         ==                    .EQ. or ==
Not Equal               /=        !=                    .NE. or <>
Greater than            >         >                     .GT. or >
Less than               <         <                     .LT. or <
Greater than or equal   >=        >=                    .GE. or >=
Less than or equal      <=        <=                    .LE. or <=

I. Boolean Expressions

  • Operands are Boolean and the result is Boolean.

 

FORTRAN 77    FORTRAN 90    C     Ada
.AND.         and           &&    and
.OR.          or            ||    or
.NOT.         not           !     not

 

  • Versions of C prior to C99 have no Boolean type; it uses int type with 0 for false and nonzero for true.
  • One odd characteristic of C’s expressions:

a < b < c  is a legal expression, but the result is not what you might expect.

  • The leftmost operator is evaluated first because the relational operators of C are left associative, producing either 0 or 1.
  • Then this result is compared with the variable c. There is never a comparison between b and c.

J. Short Circuit Evaluation

 

  • A short-circuit evaluation of an expression is one in which the result is determined without evaluating all of the operands and/or operators.
  • Ex:

 

(13 * a) * (b/13 - 1)      // is independent of the value of (b/13 - 1) if a = 0, because 0 * x = 0

 

  • So when a = 0, there is no need to evaluate (b/13 – 1) or perform the second multiplication.
  • However, this shortcut is not easily detected during execution, so it is never taken.
  • The value of the Boolean expression:

 

(a >= 0) && (b < 10)       // is independent of the second expression if a < 0, because (F && x) is False for all values of x

  • So when a < 0, there is no need to evaluate b, the constant 10, the second relational expression, or the && operation.
  • Unlike the case of arithmetic expressions, this shortcut can be easily discovered during execution.
  • Short-circuit evaluation exposes the potential problem of side effects in expressions

(a > b) || (b++ / 3) //           b is changed only when a <= b.

 

  • If the programmer assumed b would change every time this expression is evaluated during execution, the program will fail.
  • C, C++, and Java: use short-circuit evaluation for the usual Boolean operators (&& and ||), but also provide bitwise Boolean operators that are not short circuit (& and |)

 

K. Assignment Statements

Simple Assignments

  • The C-based languages use == as the equality relational operator to avoid confusion with their assignment operator.
  • The operator symbol for assignment:
  1. = FORTRAN, BASIC, PL/I, C, C++, Java
  2. := ALGOL, Pascal, Ada

Conditional Targets

  • Ex:

 

flag ? count1 : count2 = 0;      ⇔      if (flag)
                                            count1 = 0;
                                        else
                                            count2 = 0;

 

Compound Assignment Operators

  • A compound assignment operator is a shorthand method of specifying a commonly needed form of assignment.
  • The form of assignment that can be abbreviated with this technique has the destination var also appearing as the first operand in the expression on the right side, as in

a = a + b

  • The syntax of assignment operators that is the catenation of the desired binary operator to the = operator.

sum += value;        ⇔        sum = sum + value;

 

Unary Assignment Operators

  • C-based languages include two special unary operators that are actually abbreviated assignments.
  • They combine increment and decrement operations with assignments.
  • The operators ++ and -- can be used either in expressions or to form stand-alone single-operator assignment statements. They can appear as prefix operators:

 

sum = ++ count;    ⇔        count = count + 1; sum = count;

If the same operator is used as a postfix operator:

 

sum = count ++;    ⇔        sum = count; count = count + 1;

 

Assignment as an Expression

  • This design treats the assignment operator much like any other binary operator, except that it has the side effect of changing its left operand.
  • Ex:

 

while ((ch = getchar())!=EOF)

{…}                                                    // why ( ) around assignment?

 

  • The assignment statement must be parenthesized because the precedence of the assignment operator is lower than that of the relational operators.
  • Disadvantage:
    • Another kind of expression side effect which leads to expressions that are difficult to read and understand.
  • There is a loss of error detection in the C design of the assignment operation that frequently leads to program errors.

 

if (x = y) …

instead of

if (x == y) …

 

Mixed-Mode Assignment

  • In FORTRAN, C, and C++, any numeric value can be assigned to any numeric scalar variable; whatever conversion is necessary is done.
  • In Pascal, integers can be assigned to reals, but reals cannot be assigned to integers (the programmer must specify whether the conversion from real to integer is truncated or rounded.)
  • In Java, only widening assignment coercions are done.
  • In Ada, there is no assignment coercion.
  • In all languages that allow mixed-mode assignment, the coercion takes place only after the right side expression has been evaluated.

Source(s) :

http://www2.southeastern.edu/Academics/Faculty/galkadi/401/notes/chapter7.doc

 

Session 6: Control Structure Statement

 

A. Control Structure

A control structure is a container for a series of function calls, instructions, and statements that dictate the flow of control. There are three basic types of control structures: sequence, selection, and iteration. They can be combined in any way to solve a specified problem.

B. Sequence control structure

The sequence control structure is the default or overriding structure. The sequence structure indicates instructions are to be executed one statement at a time in the order they occur from top to bottom unless a different control structure dictates otherwise.

The sequence control structure is used every time we write a program.

 

C. Selection Control Structure

Before discussing the selection control structure, we must first know about the conditional statement.

A conditional statement has the ability to test a variable or expression against a value and act one way if the condition is met and another way if it is not. If the condition is true, the following block of code is executed; otherwise nothing is done.

The IF statement is one of the most used conditional statements. The IF statement lets you execute a sequence of statements conditionally. That is, whether the sequence is executed or not depends on the value of a condition.

The condition is any variable or expression that returns a Boolean value (TRUE or FALSE). This condition can be called the control expression: an expression that has a Boolean value and is placed in parentheses in the conditional statement.

The selection control structure allows one set of statements to be executed if a condition is true, and another set of statements to be executed if the condition is false and such an alternative set is provided.

There are two categories of selection control structure: two-way selectors and multiple-way selectors.

 

  1. Two-Way Selector

A two-way selector gives the conditional statement an alternative sequence of statements to be executed if the first condition evaluates to false.

In the C++ language, it has the same syntax as in the C language: IF ELSE. There is also switch-case, but it is not effective when you only have two conditions.

Switch-case:

switch (a)             // the value of the variable a is used to decide the case
{
    case 1: break;
    case 2: break;
    default: ;
}

The most important difference between switch-case and if-else is that the if-else control expression returns a Boolean value, while switch-case uses the value of the variable inside the parentheses to decide which case is executed.

switch statement can have an optional default case, which must appear at the end of the switch. The default case can be used for performing a task when none of the cases is true. No break is needed in the default case.
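A runnable C++ illustration of the same idea; the variable day and its value are only an example:

#include <iostream>

int main()
{
    int day = 3;                       // hypothetical input
    switch (day)                       // the value of day selects the case
    {
        case 1:
            std::cout << "Monday" << std::endl;
            break;
        case 2:
            std::cout << "Tuesday" << std::endl;
            break;
        default:                       // runs when no case matches; no break needed
            std::cout << "Another day" << std::endl;
    }
    return 0;
}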

 

Example in a pseudo-code:

 

IF condition THEN

sequence_of_statements1

ELSE

sequence_of_statements2

END IF;

 

In this case, IF the condition is true, sequence_of_statements1 is executed and the ELSE clause is completely skipped. Conversely, the sequence of statements in the ELSE clause is executed only if the condition is false or null. Thus, the ELSE clause ensures that one of the two sequences of statements is executed.

In some source code, the ELSE clause can itself contain a conditional statement; this is described under multiple-way selectors.

  2. Multiple-way Selectors

Sometimes you want to select an action from several mutually exclusive alternatives, so this kind of selection control structure is used.

The syntax in the C++ language is IF ELSE IF and switch-case.

Note that cases in a switch-case can be added as needed, which makes multiple-way selection possible.

 

IF condition1 THEN
   sequence_of_statements1
ELSE IF condition2 THEN
   sequence_of_statements2
ELSE
   sequence_of_statements3
END IF;

In this block of code, IF the first condition is false or null, the ELSE IF clause tests another condition. An IF statement can have any number of ELSE IF clauses; the final ELSE clause is optional. Conditions are evaluated one by one from top to bottom. IF any condition is true, its associated sequence of statements is executed and control passes to the next statement. IF all conditions are false or null, the sequence in the ELSE clause is executed.

 

NESTED IF

It is an IF inside an IF! It means that you can use one if or else-if statement inside another if or else-if statement.

 

Example in pseudo-code :

 

IF( boolean_expression 1)

{

// Executes when the boolean expression 1 is true

 

IF(boolean_expression 2)

{

// Executes when the boolean expression 2 is true

}

}

 

A nested IF can be used in both two-way selectors and multiple-way selectors. This kind of IF gives us multiple conditions inside one condition.

D. Iterative Statements

An iterative statement is one that causes a statement or collection of statements to be executed 0 or 1 or more times.  Every programming language has included some method of repeating the execution of segments of code. Iteration is the very essence of the power of the computer.

 

Types of controlled loop :

  • Counter Controlled Loop
  • Logically Controlled Loop
  • Iteration Based on Data Structures

 

  1. Counter Controlled Loop

A counting iterative control statement has a variable, called the loop variable, in which the count value is maintained.  This includes some means of specifying the initial and terminal values of the loop variable and the difference between sequential loop variable values, often called the step-size.

 

In C++, the most common one is the for statement.

 

Example in C++ (the general form of the syntax):

for ([Expr_1] ; [Expr_2] ; [Expr_4])

{

Expr_3

}

Expr_1 : initialization, the first thing to be done when executing the statement, and it is done only once. You are not required to put a statement here, as long as a semicolon appears.

 

Expr_2 : the condition is evaluated. If it is true, the body of the loop is executed. If it is false, the body of the loop does not execute and flow of control jumps to the next statement just after the for loop.

 

Expr_3 : the body of the loop, that is, the statements that are executed on each iteration while the condition is true.

 

Expr_4 : After the body of the for loop executes, the flow of control jumps back up to the increment / decrement statement. This statement allows you to update any loop control variables. This statement can be left blank, as long as a semicolon appears after the condition.

 

After Expr_4 is executed, the condition is evaluated again. If it is true, the loop executes and the process repeats itself (body of the loop, then the increment step, and then the condition again). After the condition becomes false, the for loop terminates.

 

A counter-controlled loop keeps executing as long as Expr_2 returns true; Expr_4 updates the loop (counter) variable until the condition eventually returns false.
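A concrete C++ counter-controlled loop, labeled with the expression roles described above:

#include <iostream>

int main()
{
    // Expr_1: i = 0 (initialization), Expr_2: i < 5 (condition), Expr_4: i++ (update)
    for (int i = 0; i < 5; i++)
    {
        std::cout << i << std::endl;   // Expr_3: the body of the loop
    }
    return 0;
}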

 

  2. Logically Controlled Loop

In many cases, collections of statements must be repeatedly executed, but the repetition control is based on a Boolean expression rather than a counter. For these situations, a logically controlled loop is convenient.

 

Syntax in C++ language : while , do while .

 

The difference between while and do-while is when the condition is checked.

There are two kinds of condition checking: pre-test and post-test.

With while, the condition is checked first; if it returns true, the following block of code is executed. Because the check happens first, while is a pre-test loop.

With do-while, the following block of code is executed at least once, and only then is the condition checked. Because the block executes first and the check happens afterwards, do-while is a post-test loop.
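A short C++ sketch contrasting the pre-test and post-test loops:

#include <iostream>

int main()
{
    int n = 0;

    while (n < 3)               // pre-test: condition checked before each iteration
    {
        std::cout << "while: " << n << std::endl;
        n++;
    }

    int m = 10;
    do                          // post-test: body runs at least once
    {
        std::cout << "do-while: " << m << std::endl;
        m++;
    } while (m < 3);            // condition is already false, but the body ran once

    return 0;
}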

 

  3. Iteration Based on Data Structures

Rather than have a counter or Boolean expression control iterations, these loops are controlled by the number of elements in a data structure.

C provides two commands to control how we loop:

 

break — exit from a loop or a switch.

continue — skip one iteration of a loop.
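A small C++ illustration of both commands:

#include <iostream>

int main()
{
    for (int i = 0; i < 10; i++)
    {
        if (i == 5)
            break;                     // exit the loop entirely when i reaches 5
        if (i % 2 == 0)
            continue;                  // skip the rest of this iteration for even i
        std::cout << i << std::endl;   // prints 1 and 3
    }
    return 0;
}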

Source(s) :

http://www.valid-computing.com/control-structures.html

https://www.reference.com/technology/three-types-control-structures-13a5ca2c10e15fbf

http://virtual.parkland.edu/kcouch/CIS122/Week3/Sequence%20Control%20Structure.htm

http://www.thevbprogrammer.com/Ch05/05-03-SelectionStructure.htm

https://docs.oracle.com/cd/A87860_01/doc/appdev.817/a77069/03_struc.htm

https://opentechschool.github.io/python-beginners/en/conditionals.html

https://www.tutorialspoint.com/cplusplus/cpp_nested_if.htm

https://www.tutorialspoint.com/cplusplus/cpp_switch_statement.htm

http://groups.engin.umd.umich.edu/CIS/course.des/cis400/maxim/lectures/chp7.htm

https://www.tutorialspoint.com/cplusplus/cpp_for_loop.htm

http://www.tutorialspoint.com/ansi_c/c_control_statements.htm

Session 7: Subprograms

A. Introduction to Subprograms

 

A subprogram is a sequence of program instructions that perform a specific task, packaged as a unit. This unit can then be used in programs wherever that particular task should be performed. Subprograms may be defined within programs, or separately in libraries that can be used by multiple programs. In different programming languages, a subroutine may be called a function, a procedure, a routine, a method, or a subprogram.

–     Fundamental Abstraction Facilities

Two fundamental abstraction facilities can be included in programming language :

  • Process abstraction
    • Emphasized from early days
    • Discussed in this chapter
  • Data abstraction
    • Emphasized in the 1980s

B. Subprograms

Subprograms are the most important concepts in programming language design. Subprograms usually describe computations. There are two ways that a subprogram can gain access to the data that it is to process: through direct access to nonlocal variables or through parameter passing (explained later in the parameters part). Subprograms have the following characteristics:

  • Each subprogram has a single entry point
  • There is only one subprogram execution at any given time
  • Control always returns to the caller when the subprogram execution terminates.

In subprograms, there are some basic definition or term :

  • A subprogram definition describes the interface and the actions of the subprogram abstraction.
  • A subprogram call is the explicit request so the subprogram be executed.
  • A subprogram is active if after having been called, it has begun execution but has not completed that execution.
  • Two fundamental kinds of subprograms are procedures and functions.
  • A function is a subprogram that returns a value. That is, a function can be evaluated as a component of an expression, because a function call produces a value. Some languages allow a function result to be "ignored", in which case a function call can be a statement in a program. (This means, however, that the function would have to have a side effect to be of value.)
  • A procedure is a subprogram that does not return a value. Any useful work that is done by a procedure is through changes to external data, including files and arguments. A procedure may also be called a subroutine, but this term is not used much any more.
  • As indicated in the text, the components of a subprogram are its name, its signature, and its action. The signature of a subprogram consists of its name and its parameter and return value specifications. (Usually only a single return value is allowed, but often it can be a composite object such as a list or a record.) The action is represented by the “body” of the subprogram definition.
  • A subprogram header, which is the first line of the definition, serves several purposes. It specifies that the following syntactic unit is a subprogram definition, and provides the name for the subprogram. It may also optionally specify list of parameters.
  • The parameter profile of a subprogram is the number, order and types of its formal parameters.
  • The protocol of a subprogram is its parameter profile plus — if it is a function — its return types.
  • Subprogram declarations are common in C programs, where they are called prototypes (a small C++ sketch of these terms follows this list).
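A minimal C++ sketch of a declaration (prototype), a function, and a procedure (the names square and printSquare are illustrative only):

#include <iostream>

// Subprogram declaration (prototype): gives only the protocol.
int square(int x);

// A "function": returns a value, so a call can appear in an expression.
int square(int x) {
    return x * x;
}

// A "procedure": returns no value; its useful work is the output it produces.
void printSquare(int x) {
    std::cout << "square = " << square(x) << std::endl;
}

int main() {
    int y = square(4) + 1;   // function call used as part of an expression
    printSquare(y);          // procedure call used as a statement
    return 0;
}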

C. Local Referencing Environment

  • Stack-dynamic local variables

–   Advantages

  • Support for recursion
  • Storage for locals is shared among some subprograms

–   Disadvantages

  • Allocation/de-allocation, initialization time
  • Indirect addressing
  • Subprograms cannot be history sensitive

  • Static local variables

–   Advantages and disadvantages are the opposite of those for stack-dynamic local variables (a short comparison sketch follows this list).
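A short C++ sketch contrasting the two kinds of local variables (the function names are illustrative only):

#include <iostream>

// Stack-dynamic local: re-created on every call, so it supports recursion
// but is not history sensitive.
int nextDynamic() {
    int counter = 0;      // allocated on each call
    return ++counter;     // always returns 1
}

// Static local: allocated once, so the subprogram is history sensitive.
int nextStatic() {
    static int counter = 0;   // retains its value between calls
    return ++counter;         // returns 1, 2, 3, ...
}

int main() {
    std::cout << nextDynamic() << " " << nextDynamic() << std::endl;  // 1 1
    std::cout << nextStatic()  << " " << nextStatic()  << std::endl;  // 1 2
    return 0;
}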

D. Parameters

Data passed through parameters are accessed through names that are local to the subprogram. Parameter passing is more flexible than direct access to nonlocal variables. The parameters in the subprogram header are called formal parameters. Subprogram call statements must include the name of the subprogram and a list of parameters to be bound to the formal parameters of the subprogram; these parameters are called actual parameters. The binding of actual parameters to formal parameters is usually done by simple position: the first actual parameter is bound to the first formal parameter, and so forth. Such parameters are called positional parameters. When lists are long, it is easy for the program writer to make mistakes in the order of parameters in the list; one solution to this problem is to provide keyword parameters, in which the name of the formal parameter to which an actual parameter is to be bound is specified with the actual parameter. It is sometimes convenient to pass subprogram names as parameters.

E. Parameter Passing

Parameter-passing methods are the ways in which parameters are transmitted to and/or from called subprograms. There are 2 conceptual models of how data transfers take place in parameter transmission: either an actual value is physically moved (to the caller, to the callee, or both ways), or an access path is transmitted. Formal parameters are characterized by one of three semantics models :

 

  • In mode
  • Out mode
  • Inout mode
  • Pass by Value (In mode)

In-mode formal parameters can receive data from the corresponding actual parameter. The value of the actual parameter is used to initialize the corresponding formal parameter. Normally this model is implemented by copying. It can also be implemented by transmitting an access path, but this is not recommended (enforcing write protection is not easy).

Disadvantages :

  • By physical move : additional storage is required (stored twice) and the actual move can be costly (for large parameters)
  • By access path method : must write-protect in the called subprogram and accesses cost more (indirect addressing)
  • Pass by Result (Out mode)

Out-mode formal parameters can transmit data to the actual parameter. When a parameter is passed by result, no value is transmitted to the subprogram; the corresponding formal parameter acts as a local variable, and its value is transmitted to the caller’s actual parameter when control is returned to the caller, by physical move. This requires an extra storage location and a copy operation. Some potential problems of this model :

  • sub (p1, p1) ; whichever formal parameter is copied back will represent the current value of p1
  • sub (list[sub], sub) ; compute address of list[sub] at the beginning or the end of subprogram (?)
  • Pass by Value Result (Inout mode)

Combination of pass by value and pass by result. The formal parameters of this model have local storage. Sometimes called pass-by-copy.

Disadvantages: Those of pass-by-result and pass-by-value

  • Pass by Reference (Inout mode)

Pass an access path, also called pass-by-sharing. Pass-by-reference is the simplest to implement; only an address is placed in the stack.

Advantage : Passing process is efficient (no copying and no duplicated storage)

Disadvantages :

  • Slower accesses (compared to pass-by-value) to formal parameters
  • Potentials for unwanted side effects (collisions)
  • Unwanted aliases (access broadened)

fun(total, total);  fun(list[i], list[j]);  fun(list[i], i);
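A small C++ sketch of pass-by-value, pass-by-reference, and the aliasing problem shown above, assuming simple int parameters (the names byValue, byReference and fun are illustrative only):

#include <iostream>

// Pass by value (in mode): the formal parameter receives a copy of the actual.
void byValue(int x) {
    x = 99;               // changes only the local copy
}

// Pass by reference (inout mode): the formal parameter is an alias for the actual.
void byReference(int &x) {
    x = 99;               // visible to the caller
}

// Aliasing hazard: with fun(total, total) both formals name the same variable.
void fun(int &a, int &b) {
    a = 1;
    b = 2;                // also overwrites a, because a and b alias total
}

int main() {
    int n = 5, total = 0;
    byValue(n);      std::cout << n << std::endl;   // still 5
    byReference(n);  std::cout << n << std::endl;   // now 99
    fun(total, total);
    std::cout << total << std::endl;                // 2, not 1
    return 0;
}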

  • Pass by Name (Inout mode)

Works by textual substitution. Formals are bound to an access method at the time of the call, but actual binding to a value or address takes place at the time of a reference or assignment. This allows flexibility in late binding. Implementation requires that the referencing environment of the caller is passed with the parameter, so the actual parameter address can be calculated.

F. Design Considerations for Parameter Passing

  • Two important considerations
    • Efficiency
    • One-way or two-way data transfer

But the above considerations are in conflict with :

  • Good programming practice suggests limited access to variables, which means one-way transfer whenever possible
  • Pass-by-reference is more efficient for passing structures of significant size

G. Referencing Environment

  • Shallow binding: The environment of the call statement that enacts the passed subprogram. Most natural for dynamic-scoped languages
  • Deep binding: The environment of the definition of the passed subprogram. Most natural for static-scoped languages
  • Ad hoc binding: The environment of the call statement that passed the subprogram

 

This is usually needed when there are several possible subprograms to be called and the correct one on a particular run of the program is not known until execution time (e.g., event handling and GUIs). In C and C++, such calls are made through function pointers.
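A minimal sketch of such a call through a function pointer (the handler names are hypothetical):

#include <iostream>

void onClick()  { std::cout << "click handled"  << std::endl; }
void onResize() { std::cout << "resize handled" << std::endl; }

int main() {
    // The subprogram to call is not known until run time; it is selected
    // through a pointer to function, as described above.
    void (*handler)() = nullptr;

    int event = 1;                    // pretend this value came from the GUI
    handler = (event == 0) ? onClick : onResize;

    handler();                        // indirect call through the pointer
    return 0;
}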

H. Overloaded Subprograms

An overloaded subprogram is a subprogram that has the same name as another subprogram in the same referencing environment. Every version of an overloaded subprogram has a unique protocol. C++, Java, C#, and Ada include predefined overloaded subprograms and allow users to write multiple versions of subprograms with the same name.

I. Generic Subprograms

A generic or polymorphic subprogram takes parameters of different types on different activations. Overloaded subprograms provide ad hoc polymorphism, while subtype polymorphism means that a variable of type T can access any object of type T or any type derived from T (OOP languages). A subprogram that takes a generic parameter that is used in a type expression describing the type of the subprogram’s parameters provides parametric polymorphism (a cheap compile-time substitute for dynamic binding).
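A small sketch of parametric polymorphism using a C++ function template (the name myMax is illustrative only):

#include <iostream>
#include <string>

// A generic (parametrically polymorphic) subprogram: the type parameter T
// is checked and the code instantiated at compile time.
template <typename T>
T myMax(T a, T b) {
    return (a > b) ? a : b;
}

int main() {
    std::cout << myMax(3, 7) << std::endl;        // int instantiation
    std::cout << myMax(2.5, 1.5) << std::endl;    // double instantiation
    std::cout << myMax(std::string("abc"), std::string("abd")) << std::endl;
    return 0;
}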

 

J. User Defined Overloaded Operators (C++)

Example :

 

 

#include <iostream>

using namespace std;

 

class printData {

public:

void print(int i) {

cout << "Printing int: " << i << endl;

}

 

void print(double  f) {

cout << "Printing float: " << f << endl;

}

 

void print(const char* c) {

cout << "Printing character: " << c << endl;

}

};

 

int main(void) {

printData pd;

 

// Call print to print integer

pd.print(5);

 

// Call print to print float

pd.print(500.263);

 

// Call print to print character

pd.print("Hello C++");

 

return 0;

}

When the above code is compiled and executed, it produces the following result:

Printing int: 5
Printing float: 500.263
Printing character: Hello C++
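The example above overloads an ordinary member function. For a user-defined overloaded operator in the sense of this section’s title, a minimal sketch could look like the following (the Complex class is hypothetical):

#include <iostream>

// Hypothetical Complex type used only to illustrate a user-defined operator.
class Complex {
public:
    Complex(double r = 0, double i = 0) : re(r), im(i) {}

    // User-defined overloaded operator: c1 + c2 now calls this function.
    Complex operator+(const Complex &other) const {
        return Complex(re + other.re, im + other.im);
    }

    void print() const { std::cout << re << " + " << im << "i" << std::endl; }

private:
    double re, im;
};

int main() {
    Complex a(1, 2), b(3, 4);
    Complex c = a + b;   // resolves to a.operator+(b)
    c.print();           // 4 + 6i
    return 0;
}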

K. Closures

A closure is a subprogram and the referencing environment where it was defined. The referencing environment is needed if the subprogram can be called from any arbitrary place in the program. Closures are only needed if a subprogram can access variables in nesting scopes and it can be called from anywhere. To support closures, an implementation may need to provide unlimited extent to some variables (because a subprogram may access a nonlocal variable that is normally no longer alive).
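A minimal sketch of a closure in C++ using a lambda that captures a nonlocal variable (the names makeAdder and step are illustrative only):

#include <functional>
#include <iostream>

// Returns a closure: the lambda captures 'step' from the enclosing scope,
// so that value must be kept alive (here by copying it into the closure).
std::function<int(int)> makeAdder(int step) {
    return [step](int x) { return x + step; };
}

int main() {
    auto addFive = makeAdder(5);              // 'step' has been captured
    std::cout << addFive(10) << std::endl;    // 15
    std::cout << addFive(1)  << std::endl;    // 6
    return 0;
}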

L. Coroutines

A coroutine is a subprogram that has multiple entries and controls them itself; this is also called symmetric control, because the caller and the called coroutines are on a more equal basis. A coroutine call is named a resume. The first resume enters the coroutine at its beginning, but subsequent calls enter at the point just after the last executed statement. Coroutines repeatedly resume each other and provide quasi-concurrent execution of program units (the coroutines); their execution is interleaved, but not overlapped.

 

Possible Execution Controls and Possible Execution Controls (With Loops): diagrams omitted.

Source(s) :

http://groups.engin.umd.umich.edu/CIS/course.des/cis400/maxim/lectures/chp8.htm

https://en.wikipedia.org/wiki/Subroutine

https://people.cs.clemson.edu/~turner/courses/cs428/spring00/webct/content/pz/ch5/ch5_2.html

Slide Binus Maya

 

Session 8: Abstract Data Types

A. The Concept of Abstraction

An abstraction is a view or representation of an entity that includes only the most significant attributes. The concept of abstraction is fundamental in programming (and computer science). Nearly all programming languages support process abstraction with subprograms, and nearly all languages designed since 1980 also support data abstraction.

B. Introduction

Data abstraction refers to providing only essential information to the outside world and hiding the background details, i.e., representing the needed information in a program without presenting the details. An ADT (Abstract Data Type) may be implemented by specific data types or data structures, in many ways and in many programming languages, or described in a formal specification language. An ADT is a user-defined data type that satisfies the following two conditions:

  • The representation of objects of the type is hidden from the program units that use these objects, so the only operations possible are those provided in the type’s definition
  • The declarations of the type and the protocols of the operations on objects of the type are contained in a single syntactic unit. Other program units are allowed to create variables of the defined type.

Advantages of Data Abstraction :

  • Advantages of the first condition
    • Reliability — by hiding the data representations, user code cannot directly access objects of the type or depend on the representation, allowing the representation to be changed without affecting user code
    • Reduces the range of code and variables of which the programmer must be aware
    • Name conflicts are less likely to happen
  • Advantages of the second condition
    • Provides a method of program organization
    • Aids modifiability (everything associated with a data structure is together)
    • Allows separate compilation

Language Example : C++

  • Based on C struct type and Simula 67 classes
  • The class is the encapsulation device (a class is a type)
  • All instances of a class share a single copy of the member functions
  • Each instance of a class has its own copy of the class data members
  • Instances can be static, stack dynamic, or heap dynamic
  • Information hiding :
    • Private clause for hidden entities
    • Public clause for interface entities
    • Protected clause for inheritance
  • Constructors:
    • Functions to initialize the data members of instances (they do not create the objects)
    • May also allocate storage if part of the object is heap-dynamic
    • Can include parameters to provide parameterization of the objects
    • Implicitly called when an instance is created
    • Can be explicitly called
    • Name is the same as the class name
  • Destructors
    • Functions to cleanup after an instance is destroyed; usually just to reclaim heap storage
    • Implicitly called when the object’s lifetime ends
    • Can be explicitly called
    • Name is the class name, preceded by a tilde (~)
  • Friend functions or classes – to provide access to private members to some unrelated units or functions (necessary in C++)

Example in C++ :

class Stack {

private:

int *stackPtr, maxLen, topPtr;

public:

Stack() { // a constructor

stackPtr = new int [100];

maxLen = 99;

topPtr = -1;

};

~Stack () {delete [] stackPtr;};

void push (int number) {

if (topPtr == maxLen)

cerr << "Error in push - stack is full\n";

else stackPtr[++topPtr] = number;

};

void pop () {…};

int top () {…};

int empty () {…};

};

C. Parameterized ADTs (C++)

Classes can be somewhat generic by writing parameterized constructor functions

Stack (int size)

{

stackPtr = new int [size];

maxLen = size - 1;

topPtr = -1;

};

 

– A declaration of a stack object : Stack stk(150);

 

The stack element type can be parameterized by making the class a templated class
template <class Type>
class Stack

{
private:
Type *stackPtr;
int maxLen;
int topPtr;
public:
Stack() {  // Constructor for 100 elements
stackPtr = new Type[100];
maxLen = 99;
topPtr = -1;
}

Stack(int size) {  // Constructor for a given number

stackPtr = new Type[size];

maxLen = size - 1;

topPtr = -1;

}

};

– Instantiation : Stack<int> myIntStack;

D. Encapsulation Constructs

Large programs have two special needs:

  • Some means of organization, other than simply division into subprograms
  • Some means of partial compilation (compilation units that are smaller than the whole program)

Obvious solution: a grouping of subprograms that are logically related into a unit that can be separately compiled (a compilation unit). Such collections are called encapsulations.

 

E. Encapsulation in C++

Encapsulation in C++ can define header and code files, similar to those of C, or, classes can be used for encapsulation. The class is used as the interface (prototypes), and the member definitions are defined in a separate file. Friends provide a way to grant access to private members of a class.

  • Naming Encapsulations
    • Large programs define many global names; need a way to divide into logical groupings
    • A naming encapsulation is used to create a new scope for names
    • C++ Namespaces
    • Can place each library in its own namespace and qualify names used outside with the namespace (see the sketch below)
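A minimal sketch of a naming encapsulation with a C++ namespace (the namespace name mylib is hypothetical):

#include <iostream>

// A hypothetical library places its names in its own namespace.
namespace mylib {
    int version() { return 3; }
}

int main() {
    // Outside the namespace the name must be qualified
    // (or imported with a using-declaration).
    std::cout << mylib::version() << std::endl;
    return 0;
}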

F. Typical operations

Some operations that are often specified for ADTs (possibly under other names) are :

  • compare(s, t), that tests whether two instances’ states are equivalent in some sense;
  • hash(s), that computes some standard hash function from the instance’s state;
  • print(s) or show(s), that produces a human-readable representation of the instance’s state.

In imperative-style ADT definitions, one often finds also

  • create(), that yields a new instance of the ADT;
  • initialize(s), that prepares a newly created instance s for further operations, or resets it to some “initial state”;
  • copy(s, t), that puts instance s in a state equivalent to that of t;
  • clone(t), that performs s ← create(), copy(s, t), and returns s;
  • free(s) or destroy(s), that reclaims the memory and other resources used by s.

 

Data Abstraction Example :

Any C++ program where you implement a class with public and private members is an example of data abstraction.

 

#include <iostream>

using namespace std;

 

class Adder {

public:

// constructor

Adder(int i = 0) {

total = i;

}

 

// interface to outside world

void addNum(int number) {

total += number;

}

 

// interface to outside world

int getTotal() {

return total;

};

 

private:

// hidden data from outside world

int total;

};

 

int main( ) {

Adder a;

 

a.addNum(10);

a.addNum(20);

a.addNum(30);

 

cout << "Total " << a.getTotal() << endl;

return 0;

}

When the above code is compiled and executed, it produces the following result:

Total 60

Source(s) :

https://www.tutorialspoint.com/cplusplus/cpp_data_abstraction.htm

https://en.wikipedia.org/wiki/Abstract_data_type

Slide Binus Maya

 

Session 9: Object Oriented Programming

 

A. Introduction to Object Oriented Programming

OOP (Object Oriented Programming) is a programming paradigm based on the concept of an “object”, which can contain data, in the form of fields (often called attributes), and code, in the form of procedures (often called methods).

An object is a component of a program that knows how to perform certain actions and how to interact with other elements of the program. The object is the basic unit of object-oriented programming. Suppose we have an object “person”. Logically, a “person” has a name as a property. We can also make the “person” object do something, such as “walk” or “drive”; these are called the methods of the “person”.

Objects can also interact with one another. For example, suppose we want to create a program in which a “person” drives a “car” from A to B. We can simply add a new object, the “car”, and some methods that describe how the “person” drives a car and what the “car” does when it is being driven. When this is complete, we connect both objects so that they can work together. This is possible because code in OOP is organized around objects.

A class is a blueprint of an object. Think of the class as a concept, and the object as the embodiment of that concept. We must create a class before making an object. Say that we want to represent a person in a program: we want to describe how the person looks and what the person can do. A class called “person” provides the plan for how a “person” looks and what a “person” can do. To actually use a person in your program, you need to create an object: you use the class to create an object of type “person”.

Classes are very useful in programming. From a single class we can create more than one object. Each object still needs its own declaration, but the structure of every object created from the class remains the same.

Once we create objects, we want the objects to do something. This is where methods come in. A method in OOP is a procedure associated with a class. A method defines the behavior of the objects created from the class; it can also be described as an action that the object is capable of performing. The relationship between a method and its class is called binding. For example, consider an object of type “person”, created from the person class: methods associated with this class could include things such as walking and driving. Methods are sometimes confused with functions, but they are different.

A function is a combination of instructions that together achieve a result. A function usually requires some input (called arguments) and returns a result. For example, consider driving a car: to determine the mileage, we need a calculation using the distance driven and the amount of fuel used. We can write a function to perform this calculation. The arguments passed into the function are the distance and the fuel consumption, and the result is the mileage. Whenever we want to determine the mileage, we can simply call the function to perform the calculation.

What is the difference between a function and a method? A function is independent and not associated with a class. We can use a function anywhere in our code, and we do not need an object to use it.

Now, what if we associate the function with an object of type “car”? For example, we want to show the mileage on the car’s dashboard. In this case, the mileage calculation becomes a method, because it is a procedure associated with the car class. Every time we create a new object of type “car” from the car class, this method becomes part of the object, and the object can now calculate its own mileage. This is the same calculation as the stand-alone function, but it is now tied to the car. The four core concepts of object-oriented programming are abstraction, encapsulation, inheritance and polymorphism; a small sketch of the class/object/method terminology follows.
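A minimal sketch of the terminology above, assuming a hypothetical Person class and a stand-alone mileage function:

#include <iostream>
#include <string>

// A stand-alone function: not associated with any class.
double mileage(double distance, double fuel) {
    return distance / fuel;
}

// A class is the blueprint; objects are created from it.
class Person {
public:
    Person(std::string n) : name(n) {}

    // A method: behavior bound to the class.
    void walk() { std::cout << name << " is walking" << std::endl; }

private:
    std::string name;
};

int main() {
    Person p("Alice");                           // an object of type Person
    p.walk();                                    // calling a method on the object
    std::cout << mileage(300, 25) << std::endl;  // calling a free function
    return 0;
}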

B. ADT(Abstract Data Type)

Data abstraction refers to providing only essential information to the outside world and hiding the background details, that is, representing the needed information in a program without presenting the details. For example, a database system hides certain details of how data is stored, created and maintained. In the same way, a C++ class provides different methods to the outside world without giving details about its internal methods and data.

C. Encapsulation

Encapsulation is an OOP concept that binds together the data and the functions that manipulate the data, and keeps both safe from outside interference and misuse. Data encapsulation led to the important OOP concept of data hiding.

Data encapsulation is a mechanism of bundling the data, and the functions that use them and data abstraction is a mechanism of exposing only the interfaces and hiding the implementation details from the user.

C++ supports the properties of encapsulation and data hiding through the creation of user-defined types, called classes. We have already seen that a class can contain private, protected and public members. By default, all members defined in a class are private.

 

Example of encapsulation :

#include <iostream>

using namespace std;

class Adder {

public:

// constructor

Adder(int i = 0) {

total = i;

}

// interface to outside world

void addNum(int number) {

total += number;

}

// interface to outside world

int getTotal() {

return total;

};

private:

// hidden data from outside world

int total;

};

int main( ) {

Adder a;

a.addNum(10);

a.addNum(20);

a.addNum(30);

cout << "Total " << a.getTotal() << endl;

return 0;

}

D. Inheritance

One of the most important concepts in object-oriented programming is that of inheritance. Inheritance allows us to define a class in terms of another class, which makes it easier to create and maintain an application. This also provides an opportunity to reuse the code functionality and fast implementation time.

When creating a class, instead of writing completely new data members and member functions, the programmer can designate that the new class should inherit the members of an existing class. This existing class is called the base class, and the new class is referred to as the derived class.

The idea of inheritance implements the is-a relationship. For example, mammal IS-A animal, dog IS-A mammal, hence dog IS-A animal as well, and so on.

A class can be derived from more than one class, which means it can inherit data and functions from multiple base classes. To define a derived class, we use a class derivation list to specify the base class(es). A class derivation list names one or more base classes and has the form:

class derived-class: access-specifier base-class

 

Where access-specifier is one of public, protected, or private, and base-class is the name of a previously defined class. If the access-specifier is not used, then it is private by default.

Example of inheritance (base and derived class) :

#include <iostream>

using namespace std;

// Base class

class Shape  {

public:

void setWidth(int w) {

width = w;

}

void setHeight(int h) {

height = h;

}

protected:

int width;

int height;

};

// Derived class

class Rectangle: public Shape {

public:

int getArea() {

return (width * height);

}

};

int main(void) {

Rectangle Rect;

Rect.setWidth(5);

Rect.setHeight(7);

// Print the area of the object.

cout << "Total area: " << Rect.getArea() << endl;

return 0;

}

A derived class can access all the non-private members of its base class. Thus base-class members that should not be accessible to the member functions of derived classes should be declared private in the base class.

We can summarize the different access types according to who can access them in the following way:

Access           | public | protected | private
Same class       | yes    | yes       | yes
Derived classes  | yes    | yes       | no
Outside classes  | yes    | no        | no

A derived class inherits all base class methods with the following exceptions:

  • Constructors, destructors and copy constructors of the base class.
  • Overloaded operators of the base class.
  • The friend functions of the base class.

When deriving a class from a base class, the base class may be inherited through public, protected or private inheritance. The type of inheritance is specified by the access-specifier as explained above.

We hardly use protected or private inheritance, but public inheritance is commonly used. While using the different types of inheritance, the following rules apply:

  • Public Inheritance: When deriving a class from a public base class, public members of the base class become public members of the derived class and protected members of the base class become protected members of the derived class. A base class’s private members are never accessible directly from a derived class, but can be accessed through calls to the public and protected members of the base class.
  • Protected Inheritance: When deriving from a protected base class, public and protected members of the base class become protected members of the derived class.
  • Private Inheritance: When deriving from a private base class, public and protected members of the base class become private members of the derived class.

E. Polymorphism

The word polymorphism means having many forms. Typically, polymorphism occurs when there is a hierarchy of classes and they are related by inheritance.

C++ polymorphism means that a call to a member function will cause a different function to be executed depending on the type of object that invokes the function.

Consider the following example where a base class has been derived by other two classes:

#include <iostream>

using namespace std;

 

class Shape {

protected:

int width, height;

 

public:

Shape( int a = 0, int b = 0) {

width = a;

height = b;

}

 

int area() {

cout << "Parent class area :" << endl;

return 0;

}

};

 

class Rectangle: public Shape {

public:

Rectangle( int a = 0, int b = 0):Shape(a, b) { }

int area () {

cout << "Rectangle class area :" << endl;

return (width * height);

}

};

 

class Triangle: public Shape{

public:

Triangle( int a = 0, int b = 0):Shape(a, b) { }

int area () {

cout << "Triangle class area :" << endl;

return (width * height / 2);

}

};

 

// Main function for the program

int main( ) {

Shape *shape;

Rectangle rec(10,7);

Triangle  tri(10,5);

 

// store the address of Rectangle

shape = &rec;

 

// call rectangle area.

shape->area();

 

// store the address of Triangle

shape = &tri;

 

// call triangle area.

shape->area();

 

return 0;

}

When the above code is compiled and executed, it produces the following result:

Parent class area

Parent class area

The reason for the incorrect output is that the call of the function area() is being set once by the compiler as the version defined in the base class. This is called static resolution of the function call, or static linkage – the function call is fixed before the program is executed. It is also sometimes called early binding because the area() function is set during the compilation of the program.

But now, let’s make a slight modification in our program and precede the declaration of area() in the Shape class with the keyword virtual so that it looks like this:

class Shape {

protected:

int width, height;

public:

Shape( int a = 0, int b = 0) {

width = a;

height = b;

}

 

virtual int area() {

cout << “Parent class area :” <<endl;

return 0;

}

};

After this slight modification, when the previous example code is compiled and executed, it produces the following result:

Rectangle class area

Triangle class area

This time, the compiler looks at the contents of the pointer instead of its type. Hence, since the addresses of the tri and rec objects are stored in shape, the respective area() function is called.

As you can see, each of the child classes has a separate implementation for the function area(). This is how polymorphism is generally used. You have different classes with a function of the same name, and even the same parameters, but with different implementations.

Source(s) :

https://en.wikipedia.org/wiki/Object-oriented_programming

http://study.com/academy/lesson/oop-object-oriented-programming-objects-classes-interfaces.html

https://www.tutorialspoint.com/cplusplus/cpp_object_oriented.htm

https://www3.ntu.edu.sg/home/ehchua/programming/cpp/cp3_OOP.html

https://www.tutorialspoint.com/cplusplus/cpp_data_encapsulation.htm

https://www.tutorialspoint.com/cplusplus/cpp_inheritance.htm

 

Session 10: Concurrency

A. Introduction

Concurrency is the decomposability property of a program, algorithm, or problem into order-independent or partially-ordered components or units.

Concurrency can occur at four levels:

  • Machine instruction level
  • High-level language statement level
  • Unit level
  • Program level

There are three types of concurrency: quasi-concurrency, physical concurrency and logical concurrency.

Quasi-concurrency: execution of symmetric units in which a number of coroutines can cooperate to intertwine their execution sequences. It supports a single thread of control.

Physical concurrency: several program units from the same program literally execute simultaneously, on multiple processors. It supports multiple threads of control.

Logical concurrency: the program appears (to the programmer) to have multiple processors providing actual concurrency, when in fact the actual execution of the program units takes place in interleaved fashion on a single processor.

B. Introduction to Subprogram-Level Concurrency

 

Concurrent subprograms occur when more than one program unit executes at the same time (physically on separate processors, or logically in some time-sliced fashion on a single-processor computer system).

In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior, so the parts of the program where the shared resource is accessed are protected. This protected section is the critical section.

So, a critical section is a set of instructions that must be controlled so as to allow exclusive access to one process.

A solution to the Critical Section Problem must meet three conditions (a C++ sketch follows the list):

  1. mutual exclusion: if process X is executing in its critical section, no other process is executing in its critical section.
  2. progress: if no process is executing in its critical section and there exists some processes that wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in the decision of which will enter its critical section next, and this decision cannot be postponed indefinitely.
  3. bounded waiting: there must exist a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
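A minimal C++ sketch of protecting a critical section with a mutual-exclusion lock (the shared counter is illustrative only):

#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;          // shared resource
std::mutex counterMutex;  // guards the critical section

void work() {
    for (int i = 0; i < 100000; ++i) {
        // Critical section: only one thread at a time may execute this,
        // which gives mutual exclusion on the shared counter.
        std::lock_guard<std::mutex> lock(counterMutex);
        ++counter;
    }
}

int main() {
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << counter << std::endl;   // always 200000 with the lock in place
    return 0;
}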

 

C. Difference between Concurrency and Parallel computing

Concurrency is essentially when two tasks can start, run, and complete in overlapping time periods; they are not necessarily ever running at the same instant.

Parallelism requires that at least two processes/tasks are actively being performed at a particular moment in time.

 

D. Subprogram-level Concurrency

  1. Task

A task is a unit of a program that can be in concurrent execution with other units of the same program. Tasks usually work together.

A task provides one thread of control in a program and can communicate with other tasks through shared nonlocal variables, through message passing, or through parameters.

 

General categories of tasks :

  • Heavyweight task executes in its own address space.

Advantage : Fast Process

Disadvantage : Huge Memory Consumption

  • Lightweight task all run in the same address space.

Advantage : Less Memory Consumption

Disadvantage : Concurrency issues, because the shared address space requires synchronization.

  • A task is disjoint if it does not communicate with or affect the execution of any other task in the program in any way.

  2. Task Synchronization

Synchronization is a mechanism that controls the order in which tasks execute.

 

Two kinds of Synchronization:

  1. Cooperation synchronization (a task must wait for another to continue).

Example of Cooperation synchronization :

The Producer–Consumer Problem: the Consumer has to wait for the Producer to produce its product before the Consumer can do its task (buying the product).
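A minimal C++ sketch of this cooperation synchronization, using a condition variable for a simplified producer-consumer pair (the names and the fixed number of products are illustrative only):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> products;
std::mutex m;
std::condition_variable cv;

void producer() {
    for (int i = 1; i <= 3; ++i) {
        { std::lock_guard<std::mutex> lock(m);
          products.push(i); }         // produce a product
        cv.notify_one();              // signal the waiting consumer
    }
}

void consumer() {
    for (int i = 0; i < 3; ++i) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !products.empty(); });  // wait for a product
        std::cout << "consumed " << products.front() << std::endl;
        products.pop();
    }
}

int main() {
    std::thread c(consumer), p(producer);
    p.join();
    c.join();
    return 0;
}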

 

  2. Competition synchronization (both tasks require the use of some resource that cannot be simultaneously used).

 

To provide competition synchronization, mutually exclusive access to the shared resource must be guaranteed.

A program called a scheduler manages the sharing of processors among tasks.

 

Example of Competition synchronization :

A shared counter.

 

  3. Task Execution States

Task or Thread Life Cycle (execution states of a task from start to end):

  1. New− A new thread begins its life cycle in the new state. It remains in this state until the program starts the thread. It is also referred to as a born thread.
  2. Runnable− After a newly born thread is started, the thread becomes runnable. A thread in this state is considered to be executing its task.
  3. Waiting− Sometimes, a thread transitions to the waiting state while the thread waits for another thread to perform a task. A thread transitions back to the runnable state only when another thread signals the waiting thread to continue executing.
  4. Timed Waiting− A runnable thread can enter the timed waiting state for a specified interval of time. A thread in this state transitions back to the runnable state when that time interval expires or when the event it is waiting for occurs.
  5. Terminated (Dead)− A runnable thread enters the terminated state when it completes its task or otherwise terminates. It is no longer active in any sense.

 

  4. Thread Liveness Problems

Liveness is a concurrent application’s ability to execute in a timely manner.  A kind of “availability of executions”.

 

Liveness problems that can happen are :

  • Deadlock

Deadlock describes a situation where two or more threads are blocked forever, waiting for each other.

For example :

 

Alphonse and Gaston are friends, and great believers in courtesy. A strict rule of courtesy is that when you bow to a friend, you must remain bowed until your friend has a chance to return the bow. Unfortunately, this rule does not account for the possibility that two friends might bow to each other at the same time.

 

  • Starvation happens if a thread “greedily” occupies a shared resource, holding the lock too long without ever yielding inside its iterations.

For example : Spinning.

 

  • Livelock means two threads are blocking each other’s progress, but only logically (unlike deadlock): each thread is still running, but logically they are not progressing. Think of two people in a hallway who cannot pass each other because when one moves left (to give way), the other moves right for the same purpose, and they repeat this forever.

 

E. Methods of Providing Synchronization

There are 3 kinds of methods :

  • Semaphores
  • Monitors
  • Message Passing

 

  1. Semaphores

A very simple mechanism used to provide synchronization of tasks. A semaphore provides limited access to a data structure by placing guards around the code that accesses the structure.

Using semaphores to provide cooperation and competition synchronization creates an unsafe programming environment.

 

Semaphores are of two types :

  • Binary Semaphore
  • Counting semaphore

 

Binary semaphore can take the value 0 & 1 only. Counting semaphore can take nonnegative integer values.
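A minimal sketch of a counting semaphore, assuming a C++20 compiler with the <semaphore> header (the task names and the limit of two concurrent holders are illustrative only):

#include <iostream>
#include <semaphore>
#include <thread>

// A counting semaphore initialised to 2: at most two tasks may hold it at once.
std::counting_semaphore<2> slots(2);

void task(int id) {
    slots.acquire();                 // wait operation (P)
    // Output from the three tasks may interleave; the point is the guarded access.
    std::cout << "task " << id << " in guarded region" << std::endl;
    slots.release();                 // release operation (V)
}

int main() {
    std::thread a(task, 1), b(task, 2), c(task, 3);
    a.join(); b.join(); c.join();
    return 0;
}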

 

  2. Monitors

A monitor is a set of multiple routines which are protected by a mutual exclusion lock. None of the routines in the monitor can be executed by a thread until that thread acquires the lock. This means that only ONE thread can execute within the monitor at a time. Any other threads must wait for the thread that’s currently executing to give up control of the lock.

However, a thread can actually suspend itself inside a monitor and then wait for an event to occur. If this happens, then another thread is given the opportunity to enter the monitor. The thread that was suspended will eventually be notified that the event it was waiting for has now occurred, which means it can wake up and reacquire the lock.

Monitors are a better way to provide competition synchronization than semaphores. However, cooperation synchronization is subject to the same problems as with semaphores.

 

Differences between Monitors and Semaphores

Both Monitors and Semaphores are used for the same purpose – thread synchronization. But, monitors are simpler to use than semaphores because they handle all of the details of lock acquisition and release. An application using semaphores has to release any locks a thread has acquired when the application terminates – this must be done by the application itself. If the application does not do this, then any other thread that needs the shared resource will not be able to proceed.

 

Another difference when using semaphores is that every routine accessing a shared resource has to explicitly acquire a lock before using the resource. This can be easily forgotten when coding the routines dealing with multithreading. Monitors, unlike semaphores, automatically acquire the necessary locks.

 

  3. Message Passing

Message passing can be synchronous or asynchronous. Cooperation and competition synchronization of tasks can be conveniently handled with message passing.

Message passing models (Erlang, for example) do not have any shared state; all synchronization and communication is done by exchanging messages.

 

Source(s) :

https://en.wikipedia.org/wiki/Concurrency_(computer_science)

Concept of Programming Languages – Chapter 13 – Concurrency

https://en.wikipedia.org/wiki/Critical_section

http://stackoverflow.com/questions/1897993/difference-between-concurrent-programming-and-parallel-programming

https://cs.nyu.edu/courses/spring98/G22.2110/class12.html

https://www.quora.com/What-is-the-difference-between-concurrency-and-parallelism

http://www2.cs.uregina.ca/~hamilton/courses/330/notes/synchro/node2.html

https://www.tutorialspoint.com/java/java_multithreading.htm

https://docs.oracle.com/javase/tutorial/essential/concurrency/deadlock.html

Thread liveness problems: deadlock, starvation and livelock

https://www.tutorialspoint.com/operating_system/os_semaphores_qa1.htm

http://stackoverflow.com/questions/1853284/whats-the-difference-between-the-message-passing-and-shared-memory-concurrency

 

Slide Binus Maya

 

Session 11: Exception Handling and Event Handling

 

A. Introduction to Exception Handling

With exception handling, programs are allowed to trap some exceptions, thereby providing the possibility of fixing the problem and continuing.

 

If there is no exception handling, then the program will stop working when it encounters an error and show some error message.

 

What is an exception ?

An exception (or exceptional event) is a problem that arises during the execution of a program , or an unplanned event that occurs while a program is executing and disrupts the flow of its instructions.

 

An exception in the C++ language is a response to an exceptional circumstance that arises while a program is running.

 

Exceptions can happen for many different reasons. For example :

  • A user has entered invalid data.
  • A file that needs to be opened cannot be found.
  • A network connection has been lost in the middle of communications, or the JVM has run out of memory.

 

Exceptions are of two types:

 

  1. Synchronous Exceptions :

Errors that occur due to incorrect input data, incorrect handling of array indices (“index out of range”), or memory overflow are known as Synchronous Exceptions.

 

  2. Asynchronous Exceptions :

The errors that are caused by events that are beyond control of the program are Asynchronous Exceptions. E.g. Keyboard interrupts.

So what exactly is exception handling?

Exception handling is a construct designed to handle the occurrence of exceptions, that is, special conditions that change the normal flow of program execution.

Exception handling allows the programmer to manage runtime errors in an orderly fashion. Using exception handling, the program can automatically invoke an error handling routine when an error occurs.

 

It has been said that there are two types of exceptions, but unfortunately the C++ exception handling mechanism takes care of only synchronous exceptions.

 

Exception handling consists of transferring control from the place where the exception happened to special functions (commands) called handlers. So, an exception handler is code that stipulates what a program will do when an anomalous event disrupts the normal flow of that program’s instructions.

 

What if the programming language cannot handle the exception?

Because our group has chosen to discuss the C++ language, we will describe the C++ approaches.

Based on accu.org, there are several ways to handle this.

  • Return Codes. A return code is a small number passed from a child process (or callee) to a parent process (or caller) when it has finished executing a specific procedure or delegated task.

The main problems with the return codes approach are:

  • They are unusable for constructors, destructors and overloaded operators.
  • They are difficult to propagate to the place where the event can be handled consistently for the whole application. In our example the problem occurs inside an application function, and it must be handled by the application, but the framework, which doesn’t know what kind of problems might occur, must itself propagate the error.
  • Programmers ignore them.
  • Deferred Error Handling

It typically works this way :

  • An object internal error flag is set, if a problem occurs.
  • This flag can be checked by clients using a special member function (e.g. overloaded !-operator).
  • All member functions of the class check the flag and act respectively (probably by doing nothing).

 

The advantage of deferred error handling is the invisibility to the non-interested parts of the application. In our framework example the errors are completely transparent.

 

A disadvantage of deferred error handling is the long time, which might pass between the detection and the handling of an error.

 

  • Error Stack

The error stack mechanism is essentially the same mechanism as standard C++ exception handling. You typically cannot do anything useful as long as there are unhandled exceptions. Your only actions when you have hanging exceptions are in most cases clean-up actions and manual stack unwinding (returning from the function). But, you have no hidden stack unwinding without your control, and you can have more than one exception at the same time on the stack, so you don’t have artificial (i.e. coming from the C++ environment and not from the problem domain) problems with exceptional events in destructors.

 

  • Safe Termination

This is the best approach for dealing with programming errors such as unfulfilled pre-conditions, e.g. “index out of range” or “pop from an empty stack”.

B. Exception Handling (C++)

  1. Built-in Exception

C++ provides a range of built-in exceptions. The base class for all exception classes is exception.

 

Exceptions derived directly from exception class:

 bad_alloc          Thrown when memory allocation fails
 bad_cast           Thrown when dynamic_cast is used incorrectly
 bad_exception      Thrown by the unexpected handler
 bad_function_call  Thrown when an empty (not implemented) function is called
 bad_typeid         Thrown by the typeid operator
 bad_weak_ptr       Can be thrown by the shared_ptr class constructor
 ios_base::failure  Base class for all the stream exceptions
 logic_error        Base class for some logic error exceptions
 runtime_error      Base class for some runtime error exceptions

 

 

Advantages of Built-in Exception Handling :

  • Error detection code is tedious to write and it clutters the program.
  • Exception handling encourages programmers to consider many different possible errors.
  • Exception propagation allows a high level of reuse of exception handling code
  2. C++ Exception Handler

C++ exception handling is built upon three keywords: try, catch, and throw; a short example follows the keyword descriptions below.

 

  • The catch Function

Catch: A program catches an exception with an exception handler at the place in a program where you want to handle the problem. The catch keyword indicates the catching of an exception. It is an overloaded name, so the formal parameter of each must be unique

  • Throw

Throw: A program throws an exception when a problem shows up. This is done using a throw keyword.

 

Exceptions are all raised explicitly by the statement throw [expression]; (the brackets are metasymbols).

A throw  without an operand can only appear in a handler; when it appears, it simply re-raises the exception, which is then handled elsewhere

  • Try

Try: A try block identifies a block of code for which particular exceptions will be activated. It’s followed by one or more catch blocks.
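A minimal sketch combining the three keywords (the divide function and the chosen exception type are illustrative only):

#include <iostream>
#include <stdexcept>

double divide(double a, double b) {
    if (b == 0)
        throw std::runtime_error("division by zero");  // raise the exception
    return a / b;
}

int main() {
    try {
        std::cout << divide(10, 2) << std::endl;   // fine
        std::cout << divide(10, 0) << std::endl;   // throws, skips the rest of the try block
    }
    catch (const std::runtime_error &e) {          // handler for this exception
        std::cout << "caught: " << e.what() << std::endl;
    }
    return 0;
}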

 

  3. Unhandled Exceptions

An unhandled exception is propagated to the caller of the function in which it is raised.

An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand.

This propagation continues to the main function. If no handler is found, the default handler is called.

 

  4. Continuation

Continuations are useful things. They provide a nice way to manually handle control flow.

In a language with exceptions, every expression can do one of two things :

  • Continue the rest of the program normally
  • Throw an exception and run an alternative program, the exception handler

And so , we don’t just have a single continuation.

C. Event Handling

  1. Event

An event is a signal that informs an application that something important has occurred.

User actions such as Key press, clicks, mouse movements, etc., or some occurrence such as system generated notifications can be called as events.

  2. Event Handler

Event handling is the receipt of an event at some event handler from an event producer and subsequent processes. The processes involved in event handling include:

  • Identifying where an event should be forwarded.
  • Making the forward.
  • Receiving the forwarded event.
  • Taking some kind of appropriate action in response, such as writing to a log, invoking an error or recovery routine, or sending a message.
  • The event handler may ultimately forward the event to an event consumer.

 

Event handlers are procedures that are called when a corresponding event occurs. You can use any valid subroutine with a matching signature as an event handler. You cannot use a function as an event handler, however, because it cannot return a value to the event source.

 

  3. Event Handling in C++
  • Declaring Events

In an event source class, use the __event keyword on a method declaration to declare the method as an event. Make sure to declare the method, but do not define it; to do so will generate a compiler error, because the compiler defines the method implicitly when it is made into an event. Native events can be methods with zero or more parameters. The return type can be void or any integral type.

 

  • Defining Event Handlers

In an event receiver class, you define event handlers, which are methods with signatures (return types, calling conventions, and arguments) that match the event that they will handle.

 

  • Hooking Event Handlers to Events

Also in an event receiver class, you use the intrinsic function __hook to associate events with event handlers and __unhook to dissociate events from event handlers. You can hook several events to an event handler, or several event handlers to an event.

 

  • Firing Events

To fire an event, simply call the method declared as an event in the event source class. If handlers have been hooked to the event, the handlers will be called (a portable sketch of this idea follows).
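The __event, __hook and __unhook keywords above are specific to Microsoft Visual C++ native events. As a portable sketch of the same idea (not the mechanism described above), an event source can also be hand-rolled with std::function callbacks; the ClickEvent class below is hypothetical:

#include <functional>
#include <iostream>
#include <vector>

// A very small, hand-rolled event: a list of handlers called when the event fires.
class ClickEvent {
public:
    void hook(std::function<void(int)> handler) {   // register a handler
        handlers.push_back(handler);
    }
    void fire(int x) {                              // raise the event
        for (auto &h : handlers) h(x);
    }
private:
    std::vector<std::function<void(int)>> handlers;
};

void logClick(int x) { std::cout << "clicked at " << x << std::endl; }

int main() {
    ClickEvent onClick;
    onClick.hook(logClick);                                          // hook a named handler
    onClick.hook([](int x) { std::cout << "second handler: " << x << std::endl; });
    onClick.fire(42);                                                // both handlers run
    return 0;
}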

Source(s) :

http://whatis.techtarget.com/definition/exception

https://www.tutorialspoint.com/cplusplus/cpp_exceptions_handling.htm

https://www.tutorialspoint.com/java/java_exceptions.htm

http://www.careerride.com/C++-exception-handling-concept.aspx

https://en.wikibooks.org/wiki/C%2B%2B_Programming/Exception_Handling

http://www.theserverside.com/definition/exception-handler

https://accu.org/index.php/journals/546

https://en.wikipedia.org/wiki/Exit_status

https://www.tutorialcup.com/cplusplus/exception-handling.htm

http://geekswithblogs.net/simonc/archive/2013/06/03/why-unhandled-exceptions-are-useful.aspx

http://jozefg.bitbucket.org/posts/2014-04-14-either-and-conts.html

https://msdn.microsoft.com/en-us/library/2z7x8ys3(v=vs.90).aspx

https://www.tutorialspoint.com/csharp/csharp_events.htm

http://www.webopedia.com/TERM/E/event_handler.html

https://msdn.microsoft.com/en-us/library/ee2k0a7d.aspx

 

Session 12: Functional Programming Languages

A. Introduction to Functional Programming Languages

In computer science, functional programming is a programming paradigm—a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions and avoids changing state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements. In functional code, the output value of a function depends only on the arguments that are input to the function, so calling a function f twice with the same value for an argument x will produce the same result f(x) each time. Eliminating side effects, i.e. changes in state that do not depend on the function inputs, can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.

Functional programming has its roots in lambda calculus, a formal system developed in the 1930s to investigate computability, the Entscheidungs problem, function definition, function application, and recursion. Many functional programming languages can be viewed as elaborations on the lambda calculus. Another well-known declarative programming paradigm, logic programming, is based on relations.

In contrast, imperative programming changes state with commands in the source language, the most simple example being assignment. Imperative programming does have functions—not in the mathematical sense—but in the sense of subroutines. They can have side effects that may change the value of program state. Functions without return values therefore make sense. Because of this, they lack referential transparency, i.e. the same language expression can result in different values at different times depending on the state of the executing program.

Functional languages use a different paradigm than imperative and object oriented languages. They use side effect free functions as a basic building block in the language. This enables lots of things and makes a lot of things more difficult (or in most cases different from what people are used to)

One of the biggest advantages with functional programming is that the order of execution of side effect free functions is not important. For example in erlang this is used to enable concurrency in a very transparent way. And because functions in functional languages behave very similar to mathematical functions it’s easy to translate those into functional languages. In some cases this can make code more readable.

Traditionally one of the big disadvantages of functional programming was also the lack of side effects. It’s very difficult to write useful software without IO, but IO is hard to implement without side-effects in functions. So most people never got more out of functional programming than calculating a single output from a single input. In modern mixed paradigm languages like F# or scala this is easier.

Lots of modern languages have elements from functional programming languages. C# 3.0 has a lot of functional programming features, and you can do functional programming in Python too. I think the popularity of functional programming is mostly due to two reasons: concurrency is becoming a real problem in ordinary programming because we are getting more and more multiprocessor computers, and the languages are getting more accessible.

In functional programming, functions are treated as objects. In C++, ordinary functions are not objects, so to remedy that, function objects or lambdas are used: a function object is just a class with an operator() method.

As it’s tedious to create a class for each function, C++ provides a shortcut syntax to create function objects.

auto println = [](const char  *message){ std::cout << message << std::endl;};

The above code creates a function object, println, which takes a single parameter and returns nothing. The [] brackets are used to specify the environment, a.k.a. the closure, for the function. We will see more about closures later.

As you can see, we use auto here so that we don’t have to care about how these function objects are encoded inside the C++ types.
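Building on this, a small sketch of functional style in C++: a side-effect-free lambda passed to the higher-order algorithm std::transform (the variable names are illustrative only):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> xs = {1, 2, 3, 4};
    std::vector<int> ys(xs.size());

    // A side-effect-free function object passed to a higher-order algorithm.
    auto square = [](int x) { return x * x; };
    std::transform(xs.begin(), xs.end(), ys.begin(), square);

    for (int y : ys) std::cout << y << " ";   // 1 4 9 16
    std::cout << std::endl;
    return 0;
}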

B. LISP (Programming Language)

Lisp is the second-oldest high-level programming language after Fortran and has changed a great deal since its early days, and a number of dialects have existed over its history. Today, the most widely known general-purpose Lisp dialects are Common Lisp and Scheme.

 

Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). Lisp was originally created as a practical mathematical notation for computer programs, influenced by the notation of Alonzo Church‘s lambda calculus. It quickly became the favored programming language for artificial intelligence (AI) research. As one of the earliest programming languages, Lisp pioneered many ideas in computer science, including tree data structures, automatic storage management, dynamic typing, conditionals, higher-order functions, recursion, and the self-hosting compiler.

The name LISP derives from “LISt Processor”. Linked lists are one of Lisp’s major data structures, and Lisp source code is made of lists. Thus, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or new domain-specific languages embedded in Lisp.

The interchangeability of code and data gives Lisp its instantly recognizable syntax. All program code is written as s-expressions, or parenthesized lists. A function call or syntactic form is written as a list with the function or operator’s name first, and the arguments following; for instance, a function f that takes three arguments would be called as (f arg1 arg2 arg3).

C. Comparison of Functional and Imperative

1) Functional programming treats everything as a function. The key point is that a function does not have state associated with it, i.e. data is immutable.
So one of its best uses is concurrent programming, since concurrent operations can be performed much more easily without having to worry about state changes.
Learning functional programming paradigms also lets you understand the common semantics and syntax that many programming languages use.

2) Imperative programming is procedural programming, which is more focused on how things are done. It has steps associated with it to perform a task, states, and mutable objects.

Object-oriented programming is a very common imperative style of programming. Here everything revolves around objects, which model real-life entities. It has the excellent concepts of encapsulation and abstraction, which are the building blocks of the object-oriented programming style.

Nowadays, people are focusing on hybrid languages which use the best of both approaches. One such example is Scala, which is a hybrid language.

Imperative programming involves writing your program as a series of instructions (statements) that can actively modify memory (variables, arrays). Imperative programming focuses on how, in the sense that you express the logic of your program based on how the computer would execute it.

Functional programming involves writing your program in terms of functions and other mathematical structures. A function is just a rule that maps input elements to output—unlike an imperative procedure or subroutine, it can’t rely on outside state or have any side-effects. Functional programming concentrates on the what, trying to let you specify the logic of your program as close to actual logic as possible rather than basing its semantics on how a computer would execute it.

Fundamentally, functional programming is a different basis from imperative programming. It provides a completely new foundation for writing programs and requires a different way of thinking. However, there are some advantages: functional languages tend to be more expressive, so you end up writing less code; they tend to be safer, so that more bugs are prevented by the language ahead of time; and they have some features impossible in normal languages because side effects are controlled.

Source(s) :

http://blog.madhukaraphatak.com/functional-programming-in-c++/

http://stackoverflow.com/questions/36504/why-functional-languages

https://en.wikipedia.org/wiki/Functional_programming

https://en.wikipedia.org/wiki/Lisp_(programming_language)

https://www.tutorialspoint.com/lisp/lisp_program_structure.htm

 

Session 13: Logic Programming Language

A. Definition of Logic Programming

Logic programming is a type of programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, Answer set programming (ASP) and Datalog.

Logic programming languages are sometimes called declarative programming languages. They express programs in a form of symbolic logic and use a logical inferencing process to produce results.

B. Background for Logic

  • Proposition

A proposition is a logical statement that may or may not be true, for example:

  • Today is Tuesday
  • The Earth is round

Propositions can be stated in two forms:

  • Fact: proposition is assumed to be true
  • Query: truth of proposition is to be determined

A compound proposition:

  • contains two or more atomic propositions
  • connects its atomic propositions with logical operators

 

  • Symbolic logic

Symbolic logic uses propositions to express ideas and relationships between ideas, and to generate new ideas based on the given propositions.

For symbolic logic, we use first-order predicate calculus:

  • statements include predicates such as round(x), which holds if we can find an x that makes it true, for example round(Earth), or round(x) when x = Earth
  • you might think of a predicate as a Boolean function, except that rather than just returning T or F for a given x, the system tries to find an x that makes the predicate true (see the small sketch after this list)
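
As a rough analogy only (this is not how a logic system is implemented; the predicate round_ and the list of objects are invented for illustration), the difference can be sketched in C++: a Boolean function answers true or false for a given x, while a logic system searches for an x that makes the predicate true.

#include <iostream>
#include <string>
#include <vector>

// A predicate viewed as a Boolean function: round_(x) is true or false for a given x.
bool round_(const std::string& x) {
    return x == "Earth" || x == "Sun";
}

int main() {
    // A logic system instead searches for an x that makes the predicate true.
    std::vector<std::string> objects = {"Table", "Earth", "Book"};
    for (const std::string& x : objects) {
        if (round_(x)) {
            std::cout << "round(" << x << ") is true\n";   // finds x = Earth
            break;
        }
    }
}
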
  • Object Representation

Objects in propositions are represented by simple terms: either constants or variables

  • Constant is a symbol that represents an object
  • Variable is a symbol that can represent different objects at different times

 

  • Compound Terms

A compound term is one element of a mathematical relation, written like a mathematical function.

  • A mathematical function is a mapping
  • It can be written as a table

Atomic propositions consist of compound terms.

A compound term is composed of two parts:

  • Functor: function symbol that names the relationship
  • Ordered list of parameters (tuple)

Examples:

  • student(jon)
  • like(seth, OSX)
  • like(nick, windows)
  • like(jim, linux)

C. Logical Operators & Quantifiers

Logical Operator

  • Equivalence means that both expressions have identical truth tables
  • Implication is like an if-then statement
  • if a is true then b is true
  • note that this does not necessarily mean that if a is false then b must also be false

Quantifier

  • The universal quantifier (∀) says that the statement is true no matter what x is
  • The existential quantifier (∃) says that there is some x that fulfills the statement
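
For example, written in standard first-order notation (these formulas are generic textbook-style illustrations, not taken from the course slides):

\forall X \; (\mathrm{woman}(X) \Rightarrow \mathrm{human}(X))
\exists X \; (\mathrm{mother}(\mathrm{mary}, X) \wedge \mathrm{male}(X))

The first reads "for every X, if X is a woman then X is a human"; the second reads "there exists an X such that mary is the mother of X and X is male".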

D. Definition of Prolog

Prolog is a programming language that allows the programmer to specify declarative statements only.

Declarative statements (things you are declaring) fall into two categories:

  • predicates/propositions that are true
  • clauses (truth preserving rules in clausal form)

Once these are specified, the programmer then introduces questions to be answered. Prolog uses resolution (backward chaining) and unification to perform the problem solving automatically.

E. Elements of Prolog

  • Terms: constant, variable, structure
    • Constants are atoms or integers (atoms are like those symbols found in Lisp)
    • Atom is a symbolic value of Prolog.
    • Variables are any string of letters, digits, and underscores beginning with an uppercase letter
    • Instantiation is the binding of a variable to a value. An instantiation will last as long as it takes to complete a goal
    • Structures are predicates and are represented as functor(parameter list) where functor is the name of the predicate
  • All statements in Prolog consist of clauses
    • headed clauses are rules
    • headless clauses are statements that are always true
      • in reality, a headless clause is a rule whose condition is always true
    • all clauses must end with a period

F. Rules

All rules are stated in Horn clause form

  • the consequence comes first
    • the symbol :- is used to separate the consequence from the antecedent
  • And is denoted by , (comma) and Or is denoted by ; (semicolon), or by separating the rule into two separate rules
  • variables in rules are indicated with upper-case letters
  • rules end with a “.” (dot).
  • examples:
    • parent(X, Y) :- mother(X, Y).
    • parent(X, Y) :- father(X, Y).
    • grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    • sibling(X, Y) :- mother(M, X), mother(M, Y), father(F, X), father(F, Y).
  • we can use _ as a wildcard meaning this is true if we can find any clause that fits
    • father(X) :- father(X, _), male(X).

X is a father if X is male and is someone’s father

G. Goal Statements

In theorem proving, the goal statement is a proposition that we want the system to prove or disprove.

A goal statement has the same format as a headless Horn clause, for example:

  • man(fred)

Conjunctive propositions and propositions with variables are also legal goals; when a goal contains a variable, Prolog reports an instantiation of the variable that satisfies it, for example:

  • father(X, mike)

H. Inferencing Process of Prolog

  • Queries are called goals
  • If a goal is a compound proposition, each of the facts is a subgoal
  • To prove a goal is true, the system must find a chain of inference rules and/or facts that connects the goal to the known facts. For goal Q:

P2 :- P1

P3 :- P2

…

Q :- Pn

  • Process of proving a subgoal is called matching, satisfying, or resolution

I. Backtracking and Trace

Backtracking is the process of going back to a previously satisfied subgoal and looking for an alternative solution when a later subgoal (in a goal with multiple subgoals) fails.

Backtracking begins the new search where the previous search for that subgoal left off.

Backtracking can take a lot of time and space because it may need to find all possible proofs for every subgoal.
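
The sketch below is a toy C++ illustration of this matching-with-backtracking idea, not Prolog's actual implementation: it handles only variable-free (propositional) goals, and all facts, rules, and names (prove, facts, rules, parent, father, mother, male, tom, bob) are invented for this example; real Prolog additionally unifies variables. Each rule body is a list of subgoals; when a body fails, the loop tries the next clause with the same head, which is the backtracking step. The Call, Exit, and Fail lines it prints correspond to the trace events described in the next subsection.

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

using Rule = std::vector<std::string>;            // a rule body: list of subgoals

// Known facts (headless clauses) and rules (headed clauses), all ground.
std::set<std::string> facts = {"male(tom)", "father(tom, bob)"};
std::multimap<std::string, Rule> rules = {
    {"parent(tom, bob)", {"mother(tom, bob)"}},   // first clause: will fail
    {"parent(tom, bob)", {"father(tom, bob)"}},   // alternative clause: succeeds
};

// Try to prove a goal: succeed if it is a fact, otherwise try each rule whose
// head matches and prove every subgoal in its body.
bool prove(const std::string& goal, int depth = 0) {
    std::string indent(depth * 2, ' ');
    std::cout << indent << "Call: " << goal << "\n";
    if (facts.count(goal)) {
        std::cout << indent << "Exit: " << goal << "\n";
        return true;
    }
    auto range = rules.equal_range(goal);
    for (auto it = range.first; it != range.second; ++it) {
        bool bodyProved = true;
        for (const std::string& subgoal : it->second) {
            if (!prove(subgoal, depth + 1)) { bodyProved = false; break; }
        }
        if (bodyProved) {
            std::cout << indent << "Exit: " << goal << "\n";
            return true;
        }
        // This clause failed; fall through and try the next clause with the
        // same head (the backtracking step).
    }
    std::cout << indent << "Fail: " << goal << "\n";
    return false;
}

int main() {
    prove("parent(tom, bob)");   // proved via the father clause after mother fails
}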

Tracing is a built-in structure that displays instantiations at each step.

Tracing uses a four-event model of execution:

  • Call is the beginning of an attempt to satisfy a goal
  • Exit is when a goal has been satisfied
  • Redo is when backtracking occurs
  • Fail is when a goal fails

J. Deficiencies of Prolog

  • Resolution order control
    • In a pure logic programming environment, the order of attempted matches is nondeterministic and all matches would be attempted concurrently
  • The closed-world assumption
    • The only knowledge is what is in the database
  • The negation problem
    • Anything not stated in the database is assumed to be false
  • Intrinsic limitations
    • It is easy to state a sorting process in logic, but difficult to actually carry out; the system does not know how to sort

Source(s) :

https://en.wikipedia.org/wiki/Logic_programming#Logic_and_control

www.nku.edu/~foxr/CSC407/NOTES/ch16.ppt

www.sce.carleton.ca/courses/sysc-3101/w12/…/pl9ch16.ppt

Slide Binus Maya
