Introduction to Computer Science

7.3 Alternative Programming Models


Learning Objectives

By the end of this section, you will be able to:

  • Discuss characteristics of functional programming
  • Explain characteristics of declarative programming
  • Distinguish the characteristics of object-oriented programming
  • Explain HLL constructs used to support concurrency and parallelism
  • Summarize when to use scripting languages

As we have learned, HLLs can be classified into various paradigms, two of which are imperative and declarative. There are other paradigms that fall within these major categories. Some worth investigating are functional programming, declarative programming, object-oriented programming, and parallel programming utilizing concurrency.

Functional Programming

In the 1930s, Alonzo Church developed the lambda calculus model of computing, which got its name from the Greek letter lambda (λ). In this model, each parameter is introduced by the lambda symbol, and computation is performed by substituting arguments into expressions, much as high-level programs pass arguments to functions. Figure 7.20 highlights various HLLs.

Programming hierarchy: the Imperative paradigm divides into Procedural (FORTRAN, COBOL) and Object-oriented (Java, C++); the Declarative paradigm divides into Logic (Prolog, Loglan) and Functional (Haskell, Erlang).
Figure 7.20 Functional programming languages can be differentiated by their paradigms. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Functional Programming Concepts

The functional programming model’s key concept is that programs are constructed by composing functions and applying them. A pure function is one that returns the same result every time it is given the same arguments and does not read shared mutable state. In this context, shared data means the data is available to multiple program locations or scopes, state represents data that are remembered over time, and mutable means changeable. If the data used by a function is shared and mutable, the function may return different results for the same arguments. A pure function also may not have any side effects, meaning it cannot modify any state that is not local to it. Because a pure function’s value depends only on its arguments, any call to it can be replaced by its result without changing the program’s meaning, a property known as referential transparency; for this reason, functional programs avoid assignment statements.
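
To make the distinction concrete, the following minimal Java sketch (the class and method names are hypothetical) contrasts a pure function with an impure one that reads and modifies shared mutable state:

public class PureExample {
   private static int counter = 0;   // shared mutable state

   // Pure: the same arguments always produce the same result, with no side effects.
   static int add(int a, int b) {
      return a + b;
   }

   // Impure: the result depends on (and changes) shared mutable state.
   static int addAndCount(int a, int b) {
      counter++;                     // side effect
      return a + b + counter;        // reads mutable state
   }

   public static void main(String[] args) {
      System.out.println(add(2, 3));         // always 5
      System.out.println(addAndCount(2, 3)); // 6 on the first call
      System.out.println(addAndCount(2, 3)); // 7 on the second call
   }
}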

Some other necessary features, which may be missing in some imperative languages, include first-class functions and higher-order functions. A first-class function is one that can be assigned as a value to a variable. A higher-order function is one that takes one or more functions as arguments, returns a function as its result, or both; this makes it possible, for example, to apply a function to its arguments one at a time. These features make code both more flexible and simpler, and they encourage the creation of smaller functions that each take care of one piece of a larger job, which improves maintainability as well.
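
The following short Java sketch (using the standard java.util.function.Function type; the method names are hypothetical) shows functions treated as first-class values and a higher-order method that takes a function as an argument:

import java.util.function.Function;

public class HigherOrder {
   // A higher-order method: takes a function as an argument and applies it twice.
   static int applyTwice(Function<Integer, Integer> f, int x) {
      return f.apply(f.apply(x));
   }

   public static void main(String[] args) {
      // Functions are first-class values: assign a lambda to a variable.
      Function<Integer, Integer> increment = n -> n + 1;
      Function<Integer, Integer> doubleIt  = n -> n * 2;

      System.out.println(applyTwice(increment, 5)); // 7
      System.out.println(applyTwice(doubleIt, 5));  // 20

      // Composition: build a new function from smaller ones.
      Function<Integer, Integer> incThenDouble = increment.andThen(doubleIt);
      System.out.println(incThenDouble.apply(5));   // 12
   }
}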

Functional Programming Assessment

In the same way that you perform tasks over and over each day, functional programming treats the functions used to execute these tasks as first-class citizens. Benefits include programs that are shorter and easier for the programmer to understand. Some downsides of functional programming are having to move data through the functions, the difficulty (though not impossibility) of efficient implementation, and the need to create new arrays or other structures whenever the data in an element changes, because existing data is not mutated.
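
As a rough illustration of that last point, the following hypothetical Java snippet “changes” an element by building a new list, leaving the original untouched:

import java.util.ArrayList;
import java.util.List;

public class ImmutableUpdate {
   public static void main(String[] args) {
      List<Integer> original = List.of(10, 20, 30);  // an unmodifiable list

      // "Changing" an element means building a new list; the original is untouched.
      List<Integer> updated = new ArrayList<>(original);
      updated.set(1, 99);

      System.out.println(original); // [10, 20, 30]
      System.out.println(updated);  // [10, 99, 30]
   }
}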

Object-Oriented Programming

A number of imperative languages are considered object oriented, and most modern HLLs support this paradigm to some degree. Its basic characteristic is to design software around the concept of a class, which is the blueprint from which objects are constructed. Constructing an object is known as instantiation or building an instance of a class. Objects are made up of attributes (characteristics) and behaviors (methods or functions).

The three main principles of OOP are encapsulation, inheritance, and polymorphism, which together provide a high degree of abstraction. With encapsulation, the attributes and behaviors of classes and objects are self-contained and stay with them throughout their lifetime; it also means that the internal implementation is hidden from code outside the class or object. With inheritance, objects take on specified attributes and behaviors from their ancestors; for example, a Dog object inherits from an Animal parent object. Polymorphism means that inherited behaviors may perform in different ways depending on their context: a Dog is an Animal object that makes noise, and a Snake is also an Animal object that makes noise, but a different noise.

The degree to which an HLL conforms to these principles measures how object oriented it is. Java and C# are fully object oriented. C/C++ is a hybrid of procedural characteristics and OOP characteristics. Languages such as Python, JavaScript, and PHP only partially adhere to these characteristics; therefore, they are sometimes referred to as object based.

Encapsulation

OOP languages contain features to implement encapsulation, making classes and objects self-contained and enabling data hiding. These features include scope rules, such as defining variables and methods within classes, and access modifiers.

Object-oriented languages often have constructors, and some have destructors, which support encapsulation. A constructor is a specialized method that is called to instantiate the object. Very often, constructors have parameters that can be used to initialize the values of the attributes with the arguments passed to them, so that the variables are not directly accessed or assigned to. A destructor, as in C++, is used to destroy the instantiated object and recover its memory. Java and other modern HLLs instead have a background-running garbage collection process that automatically destroys objects and recovers memory when there are no longer any references to them in the code.

They also employ methods to set and return the values of attributes so that the attributes are not directly accessed; these getter and setter methods further enable data hiding.

Data hiding is one of the main objectives of OOP and falls under encapsulation. Its purpose is to hide the implementation details of a class or object that are irrelevant to its users. Users need only know how to use the class or object; the details are hidden, and only what is necessary is exposed in its public interface.

Most OOP languages have the keywords public and private. They are used to mark the visibility and access to attributes and methods by areas of code that are outside the class code itself. We call them access modifiers. A member is an attribute or method that is encapsulated within a class or object. A public member is part of the public interface, and a private member is hidden. The following Java code is an abridged class definition that demonstrates the concept:

public class Rectangle {
   // attributes
   private double length;
   private double width;
   // constructors
   public Rectangle(double length, double width) {
      this.length = length;   // initialize the private attributes from the arguments
      this.width = width;
   }
   // methods
   public double getLength() {
      return length;
   }
   // . . . other getters and setters not shown . . .
   public double getArea() {
      return length * width;  // derived value the caller does not have to compute
   }
}

The class itself is public, granting outside code access to it. The attributes are all private and are not part of the public interface; here the designer has decided not to allow direct access to these attributes from outside. Instead, they are accessed through public methods, some of which are not shown. These methods, called getters and setters, can present the data to the outside world in any way we choose. A method such as getArea() can be described as a utility method that provides callers with useful derived data so that they do not have to compute it themselves.

Inheritance

Figure 7.21 illustrates the implementation of single inheritance, the methodology used by most modern OOP HLLs. Classes and objects can inherit from only one parent.

Diagram of single inheritance: a Vehicle superclass with subclasses such as Car.
Figure 7.21 This UML diagram shows single inheritance in Java. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

In an inheritance relationship, we refer to the parent class as the superclass and each child class as a subclass, which contains the attributes and methods of its parent class. In this illustration, all the Vehicle attributes are inherited by Car and the other subclasses with exactly the same access modifiers. The same is true of all the methods.
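
A minimal Java sketch of the single inheritance shown in Figure 7.21 might look like the following (the attribute and method names are assumptions, not taken from the figure):

// The superclass: attributes and methods inherited by every subclass.
class Vehicle {
   protected String make;
   protected int wheels;

   public Vehicle(String make, int wheels) {
      this.make = make;
      this.wheels = wheels;
   }

   public String describe() {
      return make + " with " + wheels + " wheels";
   }
}

// Car inherits the attributes and methods of Vehicle (its superclass).
class Car extends Vehicle {
   public Car(String make) {
      super(make, 4);             // call the parent constructor
   }
}

public class InheritanceDemo {
   public static void main(String[] args) {
      Car car = new Car("Toyota");
      System.out.println(car.describe()); // inherited method: "Toyota with 4 wheels"
   }
}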

Unlike Java and C#, C++ implements multiple inheritance, in which a class inherits the attributes and methods of more than one parent. This concept is messy and can cause major implementation problems, for example when both parents have a method of the same name and it cannot be determined which one is inherited or overridden. To give a class behavior from multiple sources without these problems, Java introduced the interface; a similar effect can be achieved in C++ with abstract classes.

An interface resembles a class definition, but it is a collection of methods only. These methods are declared as abstract methods; an abstract method is declared without a code block for implementation. A class can implement one or more interfaces. The following is a Java class definition that makes use of an interface:

public class Rectangle implements GeometryMath {
}

The interface acts as a contract with the programmer. If the programmer declares that a class implements an interface, the class is required to provide implementation code for all of the methods that make up the interface. An override defines a method with the same name and signature as an inherited or interface method and supplies its own implementation; the override forms a concrete instance of what was an abstract method in the interface. Interfaces appear in Java, C#, Go, and other languages, and they are now the dominant approach, largely superseding true multiple inheritance.
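
As a sketch of how this contract works, the following hypothetical version of the GeometryMath interface (with assumed methods getArea() and getPerimeter()) shows Rectangle overriding each abstract method with a concrete implementation:

// A hypothetical interface: abstract methods only, no implementation code.
interface GeometryMath {
   double getArea();
   double getPerimeter();
}

// The class "signs the contract": it must implement every method in the interface.
public class Rectangle implements GeometryMath {
   private double length;
   private double width;

   public Rectangle(double length, double width) {
      this.length = length;
      this.width = width;
   }

   @Override   // concrete implementation of an abstract method
   public double getArea() {
      return length * width;
   }

   @Override
   public double getPerimeter() {
      return 2 * (length + width);
   }
}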

Polymorphism

The feature of OOP in which inherited methods may perform in different ways in different subclasses, depending on their context, is called polymorphism. It is implemented by overriding inherited methods in a subclass, or the methods of an implemented interface. In the UML diagram, the launch behavior of a car is very different from that of a sailboat or a rocket ship. The resulting behavior depends on the context because it is not set at compile time; rather, the runtime decides what to do when it encounters an object, based on the object's class. The behavior morphs and produces different results for the same method call depending on the object in hand. This is known as dynamic method binding or late binding, meaning the method is resolved dynamically at runtime rather than by the compiler.
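
A minimal Java sketch of this idea, reusing the Animal example from earlier in the section (the method name makeNoise() is an assumption), shows the same call producing different results at runtime:

class Animal {
   public String makeNoise() {
      return "some generic noise";
   }
}

class Dog extends Animal {
   @Override
   public String makeNoise() {      // same method, different behavior
      return "Woof!";
   }
}

class Snake extends Animal {
   @Override
   public String makeNoise() {
      return "Hiss...";
   }
}

public class PolymorphismDemo {
   public static void main(String[] args) {
      Animal[] animals = { new Dog(), new Snake() };
      for (Animal a : animals) {
         // The call is resolved at runtime (late binding) based on the object's class.
         System.out.println(a.makeNoise());
      }
   }
}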

OOP Assessment

Remember that some languages, like Java and C#, are completely object-oriented; some, like C++, are hybrids of procedural and object-oriented; and some, like JavaScript, PHP, and Python, are on the path toward complete object orientation. Any assessment of the object-oriented paradigm must be taken in the context of where a particular language falls on the scale of object orientation.

OOP advantages:

  • Strong abstraction with the ability to reuse code efficiently
  • Improved software development productivity and maintainability
  • Faster software development, and thus lower development cost, because OOP is well suited to parallel development
  • Adapts well to parallelism, where programs can have more than one part of the code running simultaneously
  • More consistent software: dynamic method binding and polymorphism mandate that subclasses implement the same behaviors, with differences where appropriate

OOP disadvantages:

  • Steep learning curve to master OOP
  • Program creation can be complex
  • Slower execution because the compiler typically generates more instructions

Concurrency and Parallel Programming

A process, or thread, is an active execution context, meaning that it is an executable code block. The ability of an application to multitask, that is, to work on more than one task in the same period, is called concurrency; the executions of the tasks overlap in time. Concurrency gives the illusion of simultaneous execution: a single-core CPU can run only one task at a time, but the processor rapidly switches between concurrent processes, creating the illusion. We refer to the executable unit as a thread of control because the processor controls which thread is executing at any given time.

In parallelism, programs can have more than one thread of control running at the same time. This is possible only with multiple processors or multicore processors, which allow truly simultaneous execution. We program for concurrency and parallelism in the same way because the execution logic is the same; only the physical hardware differs. Coding for these environments is called concurrent programming or parallel programming, in which a thread can be thought of as an abstraction of a physical processor.

Race Conditions

A race condition occurs when the parts of a program are not synchronized and the outcome depends on the order in which they happen to run. Race conditions are all about timing: anomalies can occur when we do not know which process will finish first or which part of the program is currently executing. We can exert some control over when a process pauses or ends in the run-until-blocked scenario, where some mechanism pauses a thread. A race condition is not always negative; it can sometimes be acceptable to let threads compete unchecked for processor attention.

In many multitasking situations we want to avoid race conditions and exert a degree of control over execution; in other words, we want to synchronize threads in either the interleaved or parallel situations.
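
The following minimal Java sketch (the class and variable names are hypothetical) shows the kind of negative race condition we want to avoid: two threads increment a shared counter without synchronization, so some updates are lost and the final value is unpredictable:

public class RaceDemo {
   private static int counter = 0;   // shared mutable state

   public static void main(String[] args) throws InterruptedException {
      Runnable work = () -> {
         for (int i = 0; i < 100_000; i++) {
            counter++;               // read-modify-write: not atomic
         }
      };

      Thread t1 = new Thread(work);
      Thread t2 = new Thread(work);
      t1.start();
      t2.start();
      t1.join();
      t2.join();

      // Expected 200000, but the printed value often falls short.
      System.out.println("Counter: " + counter);
   }
}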

Synchronization

The building of cooperation between threads of execution, or synchronization, is often handled by first ascertaining which segments of code form a critical section. A critical section is a code block that influences the results of concurrency or parallelism. A simple example might be an employee program in which one concurrent function calculates employee pay and another one prints the paycheck. The print function cannot proceed to put the amount on the check until the calculation function returns the amount. Other situations that need the use of critical sections are as follows:

  • When multiple threads need access to shared memory or resources and the timing of the access can be critical to the eventual result
  • Times when processes need to communicate with each other to proceed, often implemented as message passing between processes
  • Cases where all concurrent processes must finish before execution of the program proceeds
  • Cases where one or more of the processes need some result from another process

As shown in Figure 7.22, when processes run in a required order, synchronization is occurring.

Illustration of four threads leading to a Critical Section.
Figure 7.22 When multiple threads seek access to a critical section, they need synchronization. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Since the goal is to avoid negative race conditions, we do not want to over-synchronize, because over-synchronization lessens the degree of parallelism that we need for performance.

Implementing Synchronization

Synchronization must be carefully implemented to avoid situations that synchronization itself can cause:

  • starvation: when a process must wait to enter a critical section, but other processes monopolize the section, so the waiting process never gets processor time.
  • busy waiting: when a process continually checks whether it can enter a critical section, taking processor time away from all processes.
  • deadlock: when two or more threads each hold a resource or critical section that another needs and wait for each other to release it, so none of them can proceed (a sketch follows this list).
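
As a rough illustration of deadlock, the following hypothetical Java sketch has two threads acquire two locks in opposite orders; each ends up holding one lock while waiting forever for the other, so the program hangs:

public class DeadlockDemo {
   private static final Object lockA = new Object();
   private static final Object lockB = new Object();

   public static void main(String[] args) {
      Thread t1 = new Thread(() -> {
         synchronized (lockA) {
            pause(100);                 // give the other thread time to grab lockB
            synchronized (lockB) {      // waits forever: t2 holds lockB
               System.out.println("t1 acquired both locks");
            }
         }
      });

      Thread t2 = new Thread(() -> {
         synchronized (lockB) {
            pause(100);
            synchronized (lockA) {      // waits forever: t1 holds lockA
               System.out.println("t2 acquired both locks");
            }
         }
      });

      t1.start();
      t2.start();                       // neither message is ever printed
   }

   private static void pause(int ms) {
      try { Thread.sleep(ms); } catch (InterruptedException e) { }
   }
}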

Java is known for its strong support for synchronization. When a method or code block is marked as synchronized, it is guaranteed to have only one thread executing in it at a time. Any other thread trying to enter the method or code block is blocked until the running thread exits the method or block. The following Java code shows the syntax of using the keyword on a method:

public synchronized void calcAverage() {
   // all of the method code
}

A synchronized method locks on the object that owns the method, so in OOP each instance of a class may have its own synchronization. For example, we might have a Car class that owns a method called start(). Each Car instance we build has its own lock; therefore, if we are starting a car, no other thread can start that particular car, but if we have more than one car, another one may be started concurrently.
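
A minimal sketch of that Car example (the attribute and method body are assumptions) might look like this; the lock belongs to each individual Car object:

public class Car {
   private boolean running = false;

   public synchronized void start() {   // locks on this particular Car instance
      if (!running) {
         running = true;
         System.out.println("Engine started");
      }
   }
}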

The following Java code shows the syntax of using the keyword on a code block within a method:

synchronized (this) {
   // critical section
}

This code executes exactly as if it were a synchronized method. The keyword this refers to the current object, so this code also synchronizes on the object that owns the code block. Because only the critical section is locked rather than the entire method, this usage provides finer granularity and more efficient execution times.

Another powerful tool for synchronization in Java is to declare a whole class as a thread, meaning each instance of the class runs as its own thread of execution. There are two ways to do this. The first is having the class inherit from the Thread class in the Java API:

public class SomeThread extends Thread {
}

The other way is to have the class implement an interface from the API:

public class SomeThread implements Runnable {
}

Extending Thread gives the class access to the useful methods of the Thread class, which are inherited and can be overridden to support the particular situation. Implementing the Runnable interface forms a contract with the programmer to provide a run() method containing the code for the task; the object is then passed to a Thread for execution.

A Thread object is controlled by a priority-based scheduler. Some of the most important methods at the programmer’s disposal are as follows:

  • start(): causes the thread to begin execution
  • yield(): tells the thread scheduler that the current thread is willing to yield its control
  • sleep(int time): causes the currently running thread to pause for the specified number of milliseconds
  • setPriority(int priority): changes the priority of the thread
  • join(int time): waits at most the indicated number of milliseconds for the thread to die (finish executing)
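
The following hypothetical Java sketch pulls several of these methods together: a class implements Runnable, and the main thread starts the worker, adjusts its priority, and waits for it with join():

public class WorkerDemo implements Runnable {
   @Override
   public void run() {                       // the contract: the code this thread executes
      for (int i = 1; i <= 3; i++) {
         System.out.println(Thread.currentThread().getName() + " step " + i);
         try {
            Thread.sleep(100);               // pause this thread for 100 ms
         } catch (InterruptedException e) {
            return;
         }
      }
   }

   public static void main(String[] args) throws InterruptedException {
      Thread worker = new Thread(new WorkerDemo());
      worker.setPriority(Thread.MAX_PRIORITY); // request a higher scheduling priority
      worker.start();                          // begin execution of run()
      worker.join(1000);                       // wait at most one second for it to finish
      System.out.println("main is done");
   }
}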

Concepts In Practice

Which HLL Works Best?

Modern HLLs combine a variety of programming models to make it easier to tackle problems in various domains and industries. Two of the models we have studied are object-oriented programming and concurrency.

The need for an object-oriented model may be dictated by the need for a high degree of abstraction capability. Other reasons include improved productivity in software development, cost, scalability, and maintainability.

OOP also adapts well to parallelism, which is another paradigm to consider. A need for speed can require the selection of an HLL that supports concurrency and/or parallelism well. It may be the case that the application we build requires both.

We have choices, the most popular of which are C++, Java, and C#. Narrowing down our choices is helpful and puts us in a great position to examine each to make a well-informed decision.

Sometimes OOP is desirable, but concurrency may not be. This can occur when programming an operating system such as Windows with C++: the OS has an object-oriented UI, but it must also manipulate machine elements directly. In such cases concurrency may not be desirable, so some modules should be programmed without it by coding them using just the C aspects of the language.

Programming with Scripting Languages

Scripting languages have their roots in shell scripting, which originally referred to stringing together a group of commands to perform tasks on the command-line interfaces of pre-GUI operating systems such as Unix, MS-DOS, and CP/M. Modern OSs still provide this facility in programs such as Windows PowerShell. You could string together a group of commands to perform tasks, either directly at the terminal or, more effectively, in batch files that the shells could execute. Batch files are created in a text editor to contain the script and usually have a file extension that the shell can recognize, such as backup.bat, which might contain commands to back up an MS-DOS computer. These languages are interpreted at runtime rather than compiled.

Early scripting languages include the following:

  • MS-DOS command interpreter
  • Unix Bourne shell: the standard command line interpreter on Unix systems
  • Microsoft PowerShell: used for automation and configuration management on Windows systems

Most of our HLL scripting languages used today evolved from these, particularly for programming the Web. Figure 7.23 also illustrates some of these languages.

  • Perl is an older scripting language primarily used for web scripting through the Common Gateway Interface (CGI), an industry specification by which a web server runs external programs to produce content for web browsers. The communicating entities still need rules by which to “talk” to each other, known as protocols, such as HTTP. Perl has mostly been replaced by PHP, ASP.NET, and other web server scripting languages.
  • Python is a general purpose HLL for both core and web programming.
  • JavaScript is a general purpose web scripting language that is almost universal. It originally focused on front-end code running in web browsers and now takes a large share of the back-end server market with the newer ES6 version.
  • PHP is a very high-level scripting language for web servers that is platform independent.
Illustration of scripting languages divided into client side (Ajax, JavaScript, jQuery, VBScript) and server side (ASP, JavaScript, JSP, Perl, Python, Ruby).
Figure 7.23 Scripting languages can be further categorized by whether they are on the client side or the server side. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Common Gateway Interface scripts are the original mechanism for server-side web scripting. A Common Gateway Interface (CGI) script is an executable program residing in a special directory known to the web server management program. When a client requests a web address, known as a Uniform Resource Identifier (URI), the server executes the program, which outputs HTML that the browser can read and render for the user. For example, when you make an online purchase, the web server launches multiple scripts behind the scenes to gather product information, images, and pricing; most websites display loading screens while one or more scripts run. Though widely used, CGI scripts have several disadvantages, the main one being slow page loads because each script must be launched as a separate program.

A server-side script runs on a server in the web designer's domain and is generally faster than a CGI script because it runs within the web server and can be compiled in real time rather than launched as a separate program. With PHP, Python, Ruby, and back-end JavaScript (ES6), the client sees only the resulting standard HTML.

A client-side script runs on a client computer, usually under the control of a web browser, and needs an interpreter on the client's machine. Client-side scripts are often written as an embedded script, in which one language is embedded inside another; a frequent example is JavaScript embedded within HTML. JavaScript is almost the universal standard for interactive features on a web page. Microsoft produces a language called TypeScript, a superset of JavaScript that compiles into JavaScript. It has the advantages of being strongly typed and having stronger OOP features, and it can be used on both the front end and the back end, although it is not as popular as JavaScript ES6 on the server side.

Both client-side and server-side languages like JavaScript and PHP can be embedded into HTML code as embedded elements. An embedded element is a script designed to run inside some other document or program, such as HTML. On the client side, we embed JavaScript into HTML <script> elements, and a browser calls the JavaScript interpreter when it encounters one. On the server side, we can embed PHP the same way, with some rules: the script must reside on the server and the file must have a .php extension.
