Learning Objectives
By the end of this section, you will be able to:
- Describe the phases of a software development process and their purposes
- Compare popular traditional prescriptive and Agile software process models
- Suggest an effective software process
Imagine a recipe for building software. There are different ways to cook the same dish, but most recipes follow a basic structure with steps like gathering ingredients, preparing them, cooking, and serving. Software engineering processes are similar. They provide a structured approach to creating software applications. Various software engineering process models are typically used to support the software development life cycle (SDLC). After years of research and refinements, software engineering researchers and practitioners have converged on defining a generic software engineering process model, or process framework, that can be used as a template. That process framework includes a set of process elements (e.g., framework activities, software engineering actions, task sets, work products, quality assurance, and change control mechanisms) that may differ for each process model and for each project.
Traditional Process Models
One common category of process models is known as the traditional process model. This process framework, as you learned earlier in the chapter, encompasses four framework (i.e., generic) activities that are also known as phases: inception, elaboration, construction, and deployment:
- Inception covers planning activities where you define the project goals and identify the overall scope.
- Elaboration involves analyzing requirements and designing a detailed architecture model for the software.
- Construction is where the coding happens! The software is built based on the design created earlier.
- Deployment is the activity that focuses on releasing the software in a usable form and making it accessible to end users.
These (generic) framework activities or phases are applicable regardless of the specific software engineering process model chosen for a project and may be elaborated differently depending on the organization, the problem area, and the project being developed. There are also umbrella activities that are important but tangential to the framework activities. Returning to the recipe analogy, if the generic framework activities represent cooking, then umbrella activities are the things you would do alongside your cooking, like making sure you have the right pots and pans or keeping your kitchen clean. As you’ll learn later in the chapter, umbrella activities in software development include:
- training and communication (e.g., work product preparation and production)
- risk management and planning (e.g., software project tracking and control)
- configuration management
- quality management (e.g., technical reviews, estimations, metrics/measurements, testing)
- architecture management (e.g., reusability management)
- security management
Various software engineering actions are typically performed as part of the generic framework and umbrella activities. For example, the inception phase may call for requirements engineering actions such as requirements definition and requirements management; the elaboration phase may involve high-level and detail design actions. Each type of software engineering action corresponds to a process that may be represented as a workflow or a task set, and each task results in work products that are subject to specific quality assurance and change control mechanisms. Basically, a task set (or workflow) encompasses all the tasks that are required to accomplish a specific software engineering action within a framework activity. Task sets vary depending on the characteristics of a project, and activities within a given process model usually overlap instead of being performed independently.
Software process models that adhere to the generic framework mentioned previously are sometimes referred to as SDLC methodologies. In general, software engineering process models are structured in this fashion to facilitate efficient development of quality software, reduce the risk of failure, increase predictability, and capture best practices in software development. The process framework provides a template that allows software engineers to tailor their process model based on the specific project(s) on which they are working. The (generic) framework activities mentioned previously are applicable to all projects and all application domains, and they are a template for every process model. Actual process model actions and methods may, however, use various approaches. Furthermore, software engineering tools may be used to (semi-)automate the various methods that perform activities.
The activities involved in developing software might vary depending on the organization and the type of software being developed. There is no single right way to create a software solution, but experience typically tells us what works well and what works poorly in a given context. Therefore, process frameworks are elaborated differently depending on the four Ps—problem, project, people, and product—they tackle.
In the past, this led to the use of various traditional prescriptive process models such as the waterfall, prototyping, spiral, and Rational Unified Process models. A prescriptive process model advocates an orderly approach to software engineering that involves following a prescribed set of activities in a continuous manner. These traditional process models provide a structured approach to software development and may help with the following objectives:
- Improve efficiency: By following a clear plan, teams can work more efficiently and avoid rework.
- Reduce risk: Identifying and addressing potential issues early on can help to prevent project failures.
- Increase predictability: A structured process can help to estimate timelines and costs more accurately.
- Capture best practices: Traditional models often incorporate tried-and-true methods for software development.
These days, however, traditional prescriptive process models are perceived by some as “old-school” (i.e., ponderous, bureaucratic document-producing machines). Note that prescriptive simply means that the process model identifies a set of process elements (e.g., framework activities, software engineering actions, tasks, work products, quality assurance, and change control mechanisms) for each project. Traditional models are generally criticized for being too rigid and inflexible. They may not be suitable for all projects, especially those with rapidly changing requirements.
In general, the various process models may have features in common with each other, and there may be some overlap among the activities conducted within each given process. In the next section, we’ll explore an alternative approach called Agile software development.
Agile Process Models
The Agile Manifesto sets forth the Agile philosophy and emphasizes that software engineering processes should not be constrained to be continuous. It advocates that it is acceptable to skip or accelerate framework activities to deliver a project solution faster and, therefore, that the software process may be viewed as a discrete set of meaningful activities that reduce the cost of change. In this context, agility refers to the ability to create and respond to change in order to profit in a turbulent business environment. Proponents of Agile process models question whether prescriptive process models that strive for a structured and ordered approach to software engineering are appropriate for a world that thrives on change. In general, Agile processes have very short product cycles and constantly solicit customer feedback to focus development on customers’ current needs. Agile software processes promise strong productivity improvements, increased software quality, higher customer satisfaction, and reduced developer turnover. Agile development techniques empower teams to overcome time-to-market pressures and volatile requirements. However, replacing traditional process models with something less structured may make it difficult to achieve coordination and coherence in software work.
In general, the mere existence of a software process, whether strongly prescriptive or Agile, is no guarantee that software will be delivered on time, meet the customer’s needs, or exhibit long-term quality characteristics. Everyone wants a process that can respond to change; the only debates are over how to design one and how much discipline should be incorporated into such process models.
Agility requires that customers and developers act as collaborators within development teams. The goal should be to build software products that can be quickly adapted to meet the requirements of a rapidly changing marketplace. This is typically achieved via incremental development of operational prototypes that are improved over time. As a result, Agile software engineering is a way of working and it leverages iterative development, incremental delivery, and ongoing reassessment of products. It is based on a clear idea of the product’s concept and its market. It also focuses on high-value features first and on producing tangible, working results after each iteration. Agility principles are summarized as follows:
- Ensure customer satisfaction by delivering software to customers as quickly as possible.
- Accept the fact that requirements may change and work accordingly.
- Deliver software incrementally to stakeholders as often as possible (e.g., every week rather than every month, as is the case with traditional process models) and use their feedback to improve subsequent increments.
- Minimize the creation of documentation to what is absolutely necessary and relevant.
- Build an Agile team that includes motivated participants and facilitate frequent meetings among team members to improve communication and information sharing.
- Create team processes that encourage technical excellence, good design, and simplicity while avoiding unnecessary work.
- Focus on the primary goal of delivering software that meets customer needs.
- Ensure that teamwork is not overwhelming so that team members can be effective over a long period of time.
- Consider the fact that Agile teams need to become self-organizing in order to meet the primary goal of developing solutions that are well designed and implemented to meet customers’ needs.
- Instill a team culture that requires all team members to work together with one focus in mind, which is ensuring customer satisfaction.
The Agile philosophy is seductive, but it must be tempered by the demands of real systems in the real world. In general, Agile process models are not suitable for large, high-risk, or mission-critical projects. There is a spectrum of agility to consider that addresses these demands, as illustrated in Figure 9.4.
When an Agile Software Development Ecosystem (ASDE), which encompasses the whole category of Agile SDLC frameworks and methods, is compared with traditional (prescriptive) SDLC methodologies, the ASDE emphasizes the difficulty of predicting future needs. Thus, Agile approaches avoid creating long-term plans and fixed processes so developers can instead collaborate with customers and adjust to their current needs.
Many of the ideas related to the Agile approach are worth considering regardless of the process model a team adopts. Agile processes manage unpredictable changes that take place during software development projects. The focus of Agile processes is on the delivery of software increments in relatively short time frames and on using feedback on those increments to drive development. There are trade-offs when selecting an Agile Software Development Ecosystem (ASDE). While ASDEs correctly identify the product as the most important outcome of a project, it can be difficult to scale up rapid product cycles to develop enterprise-wide software applications. In general, trade-offs are important for making things work, and the potential problems caused by dysfunctional teams can be significant. The human aspects of process model adaptation should therefore be considered: the human factors and group dynamics of Agile teams, including collaboration and self-organization, are important improvements over traditional approaches and are applied repeatedly when Agile development is performed.
In conclusion, a software process, regardless of its process centricity, must adhere to a set of software process model criteria that are essential to ensure successful engineering of software solutions. To that end, it is necessary to assess processes and their related activities using actual numeric measures or by applying metrics as part of the analytics methods used to monitor the performance of software process models.
Software Process Framework Activities
A good portion of a software engineer’s role is spent within the various framework activities of an SDLC. As such, it is critical for a software engineer to understand the key elements of each of the various framework activities that are used to create software solutions. As you may recall, these framework activities or phases were introduced earlier as inception, elaboration, construction, and deployment. These activities provide a structured approach to creating software solutions. By understanding these framework activities and the tasks involved in each phase, you will be well equipped to contribute effectively to the software engineering process.
Inception Framework Activity
A core precondition to the creation of a solution is to know what must be developed. It is easy to say you want to add automation features to an automobile, but what does that really mean? What are the specific expectations and how do they relate to the solution that needs to be created? In order to create a solution, you have to understand what the solution is expected to do.
The inception phase of a project focuses on the gathering and refinement (i.e., definition) as well as the management of functional and nonfunctional requirements, which is also known as requirements engineering. In essence, the inception phase covers the planning activity that lays a project foundation. Here, you will define the project goals, identify the overall scope of the software (what features it will have), and conduct feasibility studies to assess if the project is realistic and achievable. As an example, imagine you are building a recipe app. In this phase, you would decide what features the app should have (like searching for recipes or creating grocery lists) and estimate the time and resources needed to develop it.
Defining requirements must involve stakeholders because they know better than anyone what the software system should do. Requirements definition involves obtaining the requirements from stakeholders and analyzing/decomposing strategic requirements until you can identify tactical, actionable requirements. These will form a foundation for the creation of the analysis model. This definition process is done with either a use case, which describes how the software system is expected to be employed by users to accomplish a goal or requirement, or a user story, which is an informal, general explanation of a software feature written from the perspective of the end user (for example, “As a driver, I want to turn on cruise control so that the car maintains a set speed”). Requirements management relates to handling changes in requirements and identifying the effect of such changes on the existing set of engineered requirements.
Although the software engineering actions and task sets related to eliciting requirements may appear straightforward at first sight, requirements elicitation is in fact one of the trickiest parts of the SDLC. This is due to a gap that always exists between the way stakeholders and business analysts understand the requirements and the way they are perceived by software engineers. This is especially true when you develop a software solution for a particular expert group that uses its own terminology; often those who perform a process take some of the actions they do for granted.
The inception phase results in a specification of the system to be developed. This specification is generally incomplete and/or anomalous and is typically refined as part of subsequent process phases (or iterations of such). As a result, there is a blurred distinction between requirements specification, design, and construction.
The actual software engineering actions and task sets that are used as part of the inception phase involve gaining an understanding of the solution context and collaboratively gathering, decomposing, and tracking requirements to help elaborate a preliminary analysis model. Once the preliminary analysis model is created, requirements can then be negotiated and validated with the stakeholders via an in-person Joint Application Design (JAD) session, an approach that involves assembling stakeholders and developers, or through the use of collaborative requirements modeling tools that enable scenario-based modeling. After this, detailed requirements modeling takes place.
Agile requirements definition attempts to accelerate the gathering and analysis/decomposition of requirements. The guidelines it uses to speed this process include:
- Use simple models such as fast sketches and user stories to allow all stakeholders to participate.
- Adopt user, client, or expert group terminology and avoid technical jargon whenever possible.
- Get the big picture of the project done before getting bogged down in details.
- Refine requirements throughout the project and allow additions and revisions to occur at any time.
- Implement the most important user stories first, and only once their requirements are fully specified.
- Make the current set of requirements available to all stakeholders so everyone can participate in selecting the features to add during the next development cycle.
Various tools may be leveraged to support the software engineering actions and task sets that pertain to the inception phase (e.g., ReqView).
Requirements Modeling
The software engineering action that is part of the inception phase and focuses on the analysis/decomposition of software requirements is called requirements modeling. The goal of this action is to answer the question “What will the system do?” The focus is purely conceptual, and implementation details are not considered. The main purpose of this analysis is to understand the requirements at a level that makes it possible to design and implement a software system that meets the customer’s needs.
As part of requirements modeling, as in business use case modeling, you typically create a domain model that captures the major concepts of the problem domain and associations between them. For example, in the domain of driving assists for an automotive solution, there could be conceptual/analysis classes such as AdaptiveCruiseControl, Car, BlindSpotDetection, and Blinker and class attributes assigned to classes as follows:
- AdaptiveCruiseControl has attributes state (on/off) and desiredSpeed (the requested speed).
- Car has attribute speed (the current speed).
- BlindSpotDetection has attribute state (on/off).
- Blinker has attribute state (on/off).
Figure 9.5 illustrates a class diagram of a partial domain model. This diagram is specified using Unified Modeling Language (UML). UML is often used for modeling in projects as its visual representations provide clear, compact means of communicating among the developers. In UML class diagrams such as this, the numbers at associations are multiplicities, and the arrows specify the direction in which you are expected to read the association. For example, BlindSpotDetection monitors Car.
Associations are links that deserve to be stored in the system. For example, an association between AdaptiveCruiseControl and Car is required because when Adaptive Cruise Control is active, the conceptual class AdaptiveCruiseControl needs access to the car. The multiplicities in Figure 9.5 say that there is exactly one Car associated with each AdaptiveCruiseControl and exactly one AdaptiveCruiseControl associated with each Car. Associations can have names, which facilitate reading and understanding the analysis model.
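To make the domain model more concrete, here is a minimal sketch of how these conceptual classes, their attributes, and their associations could eventually map to code, a preview of the implementation perspective described below. The class and attribute names come from the domain model; everything else (field visibility, the decision to model associations as plain references) is an illustrative assumption, not part of the analysis model itself.

```java
// Conceptual/analysis classes from the driving-assist domain model,
// sketched as plain Java classes. Attribute names mirror the domain model;
// the associations appear as object references.
class Car {
    double speed;                        // the current speed
    AdaptiveCruiseControl cruiseControl; // exactly one AdaptiveCruiseControl per Car
}

class AdaptiveCruiseControl {
    boolean state;        // on/off
    double desiredSpeed;  // the requested speed
    Car car;              // exactly one Car per AdaptiveCruiseControl
}

class BlindSpotDetection {
    boolean state;  // on/off
    Car car;        // BlindSpotDetection monitors Car
}

class Blinker {
    boolean state;  // on/off
}
```

At this stage the sketch is only a thinking aid; during requirements modeling the classes remain conceptual, and decisions such as methods, visibility, and persistence are deferred to design and construction.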
The UML notation may be applied to provide different perspectives as follows:
- Conceptual perspective: The diagrams describe real-world concepts or things.
- Specification (software) perspective: The diagrams describe software components.
- Implementation (software) perspective: The diagrams describe software components in a particular technology, such as Java or .NET.
We typically use all perspectives throughout the software development life cycle: conceptual perspective to capture requirements, specification perspective to describe the design, and implementation perspective to clarify implementation details.
When doing requirements analysis, you can create specific outputs/work products (also referred to as artifacts), such as use cases, scenarios, and the domain model. Use cases can be captured in plain text or via a UML Use Case diagram. A Use Case diagram consists of an actor, which represents users of the system, and the use cases that this user is expected to use. In Figure 9.6, the actor is a driver who might be seeking to turn on cruise control. Each use case is then described in detail using one or more scenarios. A scenario is a specific instance of operational flow within a use case that is focused on understanding a specific action. Scenarios are written either in plain text or as a sequence of steps that describe a specific scenario instance within a use case (i.e., a main scenario describing the expected, successful flow of steps versus alternative scenarios that capture unexpected behavior).
As part of requirements modeling, UML diagrams are drawn whenever they bring value to help provide a conceptual perspective of what the solution is meant to accomplish. They are generally not required to be complete or perfect. The use of UML in this manner is typically referred to as “UML as sketch,” and it involves informal and incomplete diagrams. Instead of drawing UML diagrams, it is possible to specify them via scripts to automate the creation of diagrams, which can save valuable time. As mentioned, the main goal of requirements analysis/decomposition in the inception phase is to understand the problem while the main goal of the elaboration phase, which comes next and involves software design, is to clarify what we are to implement.
Elaboration Framework Activity
The elaboration phase further analyzes the requirements to produce design models of the system to be developed. In this phase, you take a deeper dive into the specifics of the software. Requirements are refined in detail, a detailed design is created that outlines the architecture of the software, and the potential risks associated with the project are identified and assessed. Design models are defined at a high level initially to represent the various facets of the architecture of the solution that is being developed at a given level of scope. The scope could be that of a whole enterprise, a portfolio of solutions contemplated by a business unit, or a specific solution being developed by a business unit as part of a given project. Architectural facets are typically based on architectural domains specified in mainstream architecture frameworks. For example, The Open Group Architecture Framework (TOGAF) splits high-level architecture representations into four domains: business architecture, application architecture, data/information/knowledge/wisdom architecture, and infrastructure architecture. Various high-level modeling languages and associated tools may be used to facilitate the creation of high-level architecture models (e.g., TOGAF’s ArchiMate certified tools). The management of enterprise and solution architectures is described in more detail in Chapter 10 Enterprise and Solution Architectures Management of this book.
A detailed-level design model may then be derived from the high-level architecture model, and it is typically represented using a combination of low-level modeling languages (e.g., BPMN, UML, SysML). At this level of design, a conceptual solution that fulfills the requirements is created and seeks to answer the question “How will the system fulfill the requirements?” The conceptual solution leverages the inputs collected in the inception phase to design a software product. This information is generally organized into two types of design: logical and physical.
Logical design ignores any physical aspects. For example, a cruise control system needs to keep track of the maximum speed selected and whether the cruise control system is on or off. This information is gathered as part of the logical design and needs to be captured via a diagram that also identifies the corresponding relationships. This type of information is often captured as a set of entities (or actors) that enable the grouping of descriptive information and attributes.
A physical design is a graphical representation of the method for effectively implementing what was determined in the logical design of a software solution. It includes defining where information comes from and where it goes within the planned system, as well as how information is obtained, processed, and/or presented. For example, the physical design for turning the cruise control system on or off within an automobile can include using controls made available to the driver on the steering wheel; pressing the brake pedal could also turn the system off.
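To illustrate the distinction, here is a minimal sketch, with hypothetical names throughout, in which the logical design captures what must be tracked (whether cruise control is on and the selected maximum speed), while the physical design wires the driver-facing controls, the steering-wheel button and the brake pedal, to that information.

```java
// Logical design: what the system must keep track of, independent of any hardware.
class CruiseControlState {
    private boolean on;       // whether cruise control is engaged
    private double maxSpeed;  // maximum speed selected by the driver

    void engage(double selectedMaxSpeed) {
        this.maxSpeed = selectedMaxSpeed;
        this.on = true;
    }

    void disengage() {
        this.on = false;
    }

    boolean isOn() { return on; }
    double getMaxSpeed() { return maxSpeed; }
}

// Physical design: how the information is obtained in the planned system.
// The steering-wheel control and the brake pedal are the concrete sources of input.
class SteeringWheelControls {
    private final CruiseControlState cruise;

    SteeringWheelControls(CruiseControlState cruise) { this.cruise = cruise; }

    // Called when the driver presses the cruise control button on the wheel.
    void onCruiseButtonPressed(double currentSpeed) {
        if (cruise.isOn()) {
            cruise.disengage();
        } else {
            cruise.engage(currentSpeed);
        }
    }
}

class BrakePedal {
    private final CruiseControlState cruise;

    BrakePedal(CruiseControlState cruise) { this.cruise = cruise; }

    // Pressing the brake pedal always turns the cruise control off.
    void onPressed() {
        cruise.disengage();
    }
}
```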
Work on the logical and physical designs is generally performed first as a high-level design software engineering action followed by a detail-level design software engineering action. The focus of the high-level design (HLD) software engineering action is on providing a general description of the overall system design, which can include information on the overall aspects of a system, including its architecture, data, systems, services, and platforms, as well as the relationships among various modules and components. Its focus is to convert the requirements into a high-level representation of the solution that can then be further refined as part of detail-level design.
The focus of the detail-level design (DLD) software engineering action is to detail or expand upon the HLD. As part of the DLD, every element of a system is provided with detailed specifications, and the logic for each component within each module of a system solution is determined. DLD is then used to implement the actual solution as part of the construction phase of the software process.
Some of the differences between HLD and DLD are:
- HLD gives high-level descriptions of functionality, whereas DLD gives details of the functional logic within each component of a system.
- HLD is created first, with DLD created as an extension of the HLD.
- HLD is based on the requirements of the software solution, whereas DLD is based on extending the HLD. DLD should, however, still align with the requirements.
- HLD provides elements such as data and information design, whereas DLD provides the information needed to create the actual programming specification and test plan for using the data.
- A solution architect is generally involved with the HLD. Programmers and designers are generally involved with DLD.
Software Architecture Work Product
The software architecture work product acts as a blueprint of the solution being worked on. Imagine you’re building a house. Before construction begins, you would create a blueprint that outlines the overall structure, major components (foundation, walls, roof), and how they fit together. This blueprint is similar to a software architecture. In software development, a software architecture provides a high-level overview of a software system. It describes the system’s major components, their interrelationships, and how they work together. This high-level representation helps developers understand the overall design before delving into details.
A solution is typically represented at various levels of abstraction. Software design involves using software architectures to represent solutions at a high level of abstraction. A software architecture constitutes a relatively small, intellectually graspable view of how a solution is structured and how its components work together. The goal of software architecture modeling is to allow the software engineer to view and evaluate the system as a whole before moving to component design. This step enables the software engineer to:
- ensure that the design model encompasses the various solution requirements
- make it possible to survey various design alternatives early on to facilitate the adoption of the best possible model
- limit the risk of building software that does not meet the requirements
When you look at a blueprint, you can see the major elements of a building and their relationships to each other. When you look at a software system at a high level of abstraction, such as the extremely high-level view of a web browser shown in Figure 9.7, you can see its major components and the main connections between them. For example, at a high level, a web browser connects to a web server via HTTP requests, and the web server interacts with a relational database via SQL queries to create an HTML web page that is rendered dynamically and sent back to the web browser for display.
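As a hedged illustration of that high-level picture, the sketch below shows a tiny web server that accepts an HTTP request, runs an SQL query against a relational database over JDBC, and returns a dynamically built HTML page. The URL path, table name, and connection string are placeholders assumed for the example (and an appropriate JDBC driver would need to be on the classpath); the point is only to show where each architectural component sits, not to prescribe a real implementation.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TinyWebServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // The browser sends an HTTP request to this path.
        server.createContext("/recipes", TinyWebServer::handleRequest);
        server.start();
    }

    static void handleRequest(HttpExchange exchange) throws IOException {
        StringBuilder html = new StringBuilder("<html><body><ul>");

        // The web server queries the relational database via SQL (placeholder URL and table).
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:recipes");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM recipe")) {
            while (rs.next()) {
                html.append("<li>").append(rs.getString("name")).append("</li>");
            }
        } catch (Exception e) {
            html.append("<li>database unavailable</li>");
        }

        html.append("</ul></body></html>");
        byte[] page = html.toString().getBytes(StandardCharsets.UTF_8);

        // The dynamically rendered HTML page is sent back to the browser for display.
        exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
        exchange.sendResponseHeaders(200, page.length);
        exchange.getResponseBody().write(page);
        exchange.close();
    }
}
```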
Software architecture is an important part of the creation of a software solution, and it should always be designed by an experienced software engineer because changes in the software architecture typically have drastic effects on the solution implementation, such as producing a solution that does not meet the nonfunctional requirements.
When designing software architecture, you can leverage various types of software patterns that facilitate the reuse of preexisting solutions. Two examples of such patterns are architectural styles and architectural (or design) patterns. An architectural style is a transformation that is imposed on the design of an entire system. The intent is to establish a structure for all components of the system. Architectural or design patterns also impose a transformation on the design of an architecture, but they differ from a style because they operate at a lower level of abstraction. Patterns can be used in conjunction with an architectural style that shapes the overall structure of a system.
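For instance, a commonly used design pattern is Observer, which solves the well-defined problem of notifying dependent components when another component’s state changes. The sketch below is only illustrative and uses hypothetical cruise-control-flavored names to show the pattern’s structure.

```java
import java.util.ArrayList;
import java.util.List;

// Observer design pattern: interested components register with a subject
// and are notified whenever its state changes.
interface SpeedObserver {
    void speedChanged(double newSpeed);
}

// The subject being observed.
class SpeedSensor {
    private final List<SpeedObserver> observers = new ArrayList<>();
    private double speed;

    void addObserver(SpeedObserver observer) {
        observers.add(observer);
    }

    void updateSpeed(double newSpeed) {
        this.speed = newSpeed;
        for (SpeedObserver observer : observers) {
            observer.speedChanged(newSpeed);  // push the change to every observer
        }
    }
}

// One concrete observer; others could be added without changing SpeedSensor.
class CruiseControlDisplay implements SpeedObserver {
    @Override
    public void speedChanged(double newSpeed) {
        System.out.println("Display now shows " + newSpeed + " km/h");
    }
}
```

A dashboard, a logger, or the adaptive cruise controller itself could all register as observers without the sensor knowing their concrete types, which is what makes a pattern like this reusable across many contexts while the surrounding architectural style still shapes the overall system structure.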
A software architecture is one of the work products that results from the HLD software engineering action. Software architectures are important work products because they provide high-level representations of solutions that facilitate communication among all stakeholders. They also highlight early design decisions that have a profound impact on all software engineering work that follows.
When it comes to Agile software processes, the goal of creating software architecture work products as part of the HLD software engineering action is to avoid rework. In that case, user stories are leveraged to create and evolve an architectural model (sometimes referred to as a “walking skeleton”) before constructing the software. The use of Agile software processes sometimes makes it difficult to manage architectural design, especially when the team is developing large systems from scratch rather than adding small services piecemeal to an existing system. Agile approaches typically focus on small increments for which it may be difficult to produce an all-encompassing architecture that will be able to accommodate subsequent increments that have not been completely defined yet. This may lead to having to refactor the architecture, which could be very costly, as it may require changing a lot of the code that has already been developed. Therefore, it is recommended that teams thoroughly consider architectural design when taking on large projects that do not build on an already defined and solid architecture. This brings up the question as to whether big design up front (BDUF) is the preferred method in this case. It is also the reason Agile teams should include software engineers with a strong background in architecture (ideally, enterprise architecture) who can foresee the type of designs that are required to avoid costly refactoring efforts in the future.
The Scrum and Kanban Agile process models, as you’ll learn later in the chapter, allow software architects to add user stories to the evolving storyboard and to work with the product owner to prioritize their architectural stories in work units called sprints. Well-run Agile projects include the delivery of software architecture documentation during each sprint. After the sprint is complete, the architect reviews the working prototype for quality before the team presents it to the stakeholders in a formal sprint review.
As mentioned earlier, the focus of the HLD software engineering action is on providing a general description of the overall system design. It can include information on the overall aspects of a system, including its architecture, data, systems, services, and platforms, as well as the relationships among various modules and components. This focus is achieved by converting the requirements into a high-level solution that can then be further refined as part of the detail-level design (DLD, also called low-level design) software engineering action.
Link to Learning
The IEEE Computer Society has proposed IEEE-Std-42010:2022, Software, Systems and Enterprise—Architecture Description as a standard that describes the use of architecture viewpoints, architecture frameworks, and architecture description languages (ADLs) as a means of codifying the conventions and common practices for architectural description.
Software Design
The abstraction and refinement of requirements into a specification that can be used to help create a software solution is referred to as software design. Software design is a software engineering task set that is part of the DLD software engineering action. The focus of the DLD software engineering action is to detail or extend the HLD work products. As part of the DLD software engineering action, every element of a system is provided with detailed specifications, and the logic of each component within each module of a solution is determined.
Software design generally requires problem-solving skills as well as the ability to conceptualize a solution and refine it into a working specification that can be used to create the software. In contrast to the HLD software engineering action, the DLD software design task set is concerned with all the implementation details. In Agile approaches, we typically postpone many design decisions until implementation time and only design up front the parts that are tricky or that need to be solved in an unusual way.
The main concepts that drive software design, several of which are illustrated in the short code sketch after this list, are:
- Abstraction: high-level representation of components such as data (or data objects) and procedures (the sequence of instructions that usually have specific and limited function)
- Architecture: overall structure or organization of software components, ways components interact, and structure of data used by components; component-based software engineering (CBSE) can be considered as a task set as part of the DLD software engineering action to assemble solutions based on the reuse of preexisting components
- Design patterns: a design structure that solves a well-defined design problem within a specific context; pattern-based design can be considered as a task set as part of the DLD software engineering action to assemble solutions using implementation patterns and frameworks
- Separation of concerns: a technique based on the idea that any complex problem can be more easily handled if it is subdivided into pieces
- Modularity: compartmentalization of data and function
- Information hiding: controlled interfaces that define and enforce access to component procedural detail and any local data structure used by the component
- Functional independence: single-minded (high cohesion) components with aversion to excessive interaction with other components (low coupling)
- Stepwise refinement: incremental elaboration of detail for all abstractions
- Refactoring: a reorganization technique that simplifies the design without changing functionality
- Design classes: provide design detail that will enable analysis classes to be implemented
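Several of these concepts can be seen together in one small module. The sketch below uses a hypothetical SpeedHistory class: it keeps its data structure hidden behind a controlled interface (information hiding), exposes only an abstract view of what it does (abstraction), and has a single, cohesive responsibility with no dependence on other components (functional independence).

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A single-purpose (high cohesion) module for recording recent speed readings.
// Callers see only the public interface; the internal data structure is hidden.
class SpeedHistory {
    private final Deque<Double> readings = new ArrayDeque<>();  // hidden local data structure
    private final int capacity;

    SpeedHistory(int capacity) {
        this.capacity = capacity;
    }

    // Controlled interface: the only way to add a reading.
    public void record(double speed) {
        if (readings.size() == capacity) {
            readings.removeFirst();  // discard the oldest reading
        }
        readings.addLast(speed);
    }

    // Abstraction: callers ask for the average without knowing how readings are stored.
    public double average() {
        if (readings.isEmpty()) {
            return 0.0;
        }
        double sum = 0.0;
        for (double r : readings) {
            sum += r;
        }
        return sum / readings.size();
    }
}
```

Because the deque is private, the class could later be reorganized to use an array or a circular buffer without changing its callers, which is exactly the kind of simplification that refactoring aims for.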
The work products generated as part of the DLD software engineering action are used to construct the actual solution. There are many other task sets that could be considered as part of the DLD, including prototype, user experience and user interface design, and specific design task sets for web, mobile, social, and gaming solutions design.
Think It Through
Asking the Right Questions
In 1998, the Mars Climate Orbiter was launched toward Mars to study its climate. Unfortunately, the orbiter was never able to complete its task. The probe ended up being destroyed by the Martian atmosphere due to an error in the mathematical calculations. However, the issue was not with the calculations themselves but with the units of measurement used for them. The software on the probe’s side used the metric system, but the team on Earth that sent data to the probe provided values in imperial units. The result was a destroyed probe.
Was this a design error? What could have been done to avoid a mistake such as using the wrong measurement system?
Construction Framework Activity
As part of the construction phase or framework activity, the design documents that result from the DLD software engineering action of the elaboration framework activity are used to write corresponding source code in a programming language as well as create any supporting assets such as deployable container images, databases, and controls. This is where the coding magic happens! Developers work to build the software based on the design created earlier. This phase involves writing code, testing individual components, and fixing defects/bugs.
Source code may also be automatically generated from design work products using round-trip engineering tools, although this type of functionality is still limited today. The development methodology model-driven engineering, which is compatible with Agile methods, stresses the importance of formal and executable specifications of object models and the ability to verify the correctness and completeness of the solution by executing the models. This is typically made possible when using round-trip design engineering tools and frameworks that allow for the specification of models using standard modeling notations and the creation of evolvable software from these models (e.g., jBPM).
The result of the construction phase is a complete running solution that is based on the design work products and meets the expectations set forth as part of the inception phase. It should be noted that coding is a mechanistic outgrowth of procedural design, and errors can be introduced as the DLD design work products are translated into a programming language. This is particularly true if the programming language does not directly support data and control structures represented in the design. Code walk-throughs are designed to avoid this.
The field of DevOps, a wide-ranging collection of development and operations practices, has introduced further processes and infrastructure to automate many of the software engineering actions that are part of the construction phase. When applied together, Agile methodologies and DevOps automation have increased the speed, robustness, and scalability with which software can be constructed.
Software engineers use many tools to implement solutions. These include code editors such as Atom, Integrated Development Environments (IDEs) such as VSCode or Visual Studio, version control systems such as Git, debugging tools, testing tools, and more.
Link to Learning
Which programming language to learn often depends upon the type of programming that needs to be done or what a business currently uses. While there is no perfect way to determine which programming language is used the most, the Tiobe Index gives a general idea of which languages are most popular. This index is updated monthly and provides an indication of the changing popularity of programming languages. The rankings are based on factors including the frequency of searches on related topics, courses taught, and more. Of course, each programming language has strengths and weaknesses, so when choosing which language(s) to learn, it is sometimes best to focus on the context of the solution you need to create and not just popularity.
Unit, Integration, and System Testing
Unit, integration, and system testing deal with ensuring and verifying that the software system works as expected. It typically involves activities to uncover errors that were made inadvertently during the elaboration and construction phases. Code reviews and unit, integration, and system testing are typically done as part of the construction phase by software developers responsible for writing source code. The process whereby the source code written by one developer is manually inspected by another developer is called code review. This is especially useful when the software team consists of developers with different levels of experience.
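To make the idea of unit testing concrete, here is a minimal sketch using JUnit 5 (an assumed testing library; the tiny CruiseControl class is hypothetical and defined inline so the example stands alone). Each test exercises one small behavior in isolation; integration and system testing would then combine components and exercise the complete system.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// The unit under test: a deliberately tiny class so the example is self-contained.
class CruiseControl {
    private boolean on;

    void engage() { on = true; }

    void brakePressed() { on = false; }  // pressing the brake must always disengage

    boolean isOn() { return on; }
}

// A unit test checks one small piece of behavior in isolation.
class CruiseControlTest {

    @Test
    void engagingTurnsTheSystemOn() {
        CruiseControl cruise = new CruiseControl();
        cruise.engage();
        assertTrue(cruise.isOn());
    }

    @Test
    void pressingTheBrakeDisengages() {
        CruiseControl cruise = new CruiseControl();
        cruise.engage();
        cruise.brakePressed();
        assertFalse(cruise.isOn());
    }
}
```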
It is worth noting that while SDLCs often include testing task sets within specific phases, testing is an activity that should happen repeatedly throughout the software process independently from the actual phase (or iteration of such) that is currently being executed.
Deployment Framework Activity
Deployment is the phase that makes software available to users. Finally, the software is released to the users! This phase involves delivering the software to its intended audience (e.g., launching a mobile app on an app store) and providing ongoing support to address any bugs or issues that arise after deployment. In the past, deployment often meant installing the software on a customer’s computer. Today, software is more likely to be installed on special computers or powerful online systems called cloud servers. When deploying software, some configuration might be needed, especially for complex applications. DevOps uses automation to make deployment faster and more reliable, and modern software is often updated frequently using special tools and techniques. As noted, deployment, especially if it is a nontrivial task that is not expected to be done by the customer, typically involves configuring the software solution. In a typical configuration process, the software solution is containerized, or packaged in such a way that it can run on different computer systems easily, and then deployed to a cluster managed by a system such as Kubernetes that automatically handles scalability and availability.
The field of DevOps has introduced additional processes and infrastructure to automate many of the software engineering actions that are part of the deployment phase. Together, Agile methodologies and DevOps’ automation have increased the speed, robustness, and scalability with which software can be deployed today. It is worth noting that modern applications’ deployment techniques have evolved quite a bit as a result of DevOps’ automation. Software updates are deployed frequently today using continuous integration and deployment (CI/CD) techniques and tools.
Maintenance
Most deployed software will eventually need updating to add new requirements or fix issues that might arise. The process of updating software after it is deployed is referred to as maintenance. As mentioned earlier, maintenance is a software engineering action that is part of the software process deployment phase. The cost of maintenance can exceed that of development, especially if software remains in use for a long period of time.
You may think of the process of creating software as being analogous to that of building a new car. Maintaining the software is like maintaining the car while you use it. To make the analogy more precise, while software does not wear out like car hardware might, its maintenance involves updates to fix bugs and add new features as requirements change. Building a new car may take several weeks, but car maintenance will probably last much longer because the car may be used for years. As for expenses, building a new car costs a significant amount, but the costs related to its maintenance will easily exceed the price of the car when it was new. Of course, when the costs of maintaining a car become too high, there is always the option of buying a new car, just as there is the option to build a new software product if the cost of maintaining the legacy software gets too high.
In Agile process models, maintenance is not limited to adding new features. It often also involves the following:
- using Scrum sprints to plan the work and address customer needs without overwhelming developers
- giving priority to urgent customer requests and allowing corresponding interruptions of planned maintenance sprints to address these urgent requests
- making it possible for team members to prioritize the handling of customer requests and coordinate their processing as part of the maintenance process
- combining the use of meetings and written documentation to minimize the duration and frequency of meetings and keep them focused
- relying on informal use cases when communicating with stakeholders to supplement existing documentation and keep communication simple
- requiring that developers verify each other’s work; in particular, experienced developers should review the work produced by junior developers, such as defect fixes or code added to support new features, to help them develop their knowledge
Crosscutting/Umbrella Activities
In addition to the core process framework activities, namely, inception, elaboration, construction, and deployment, there are many activities that can take place at any point during the creation of a solution and throughout the entire software process—in other words, they crosscut the process. To understand the importance of such activities, consider this analogy. In addition to completing the main stages of building a house (e.g., constructing the foundation, walls, roof), there are other important activities that happen throughout the project, such as keeping the site organized and tracking the budget. These are like the umbrella you would use on a rainy day—they support the entire process but aren’t part of the main building steps themselves. In software development, a crosscutting activity, or umbrella activity, is an activity that spans the entire software development process without being one of the main building steps. Umbrella activities include communication and training, risk management and planning, software configuration and content management, quality management, architecture management, and software security engineering.
Communication and Training
Establishing communication involves scheduling regular meetings between developers and customers and also meeting with developers to train them on new technologies. It is necessary to communicate with stakeholders and customers at various points in the software development process. For example, collecting project requirements during the inception phase involves communication and coordination between project managers and stakeholders. Release notes serve as another form of communication, and they are written for software users to make them aware of the features that are included in a new release. More generally, the status of a project needs to be communicated with management at regular intervals to make sure that satisfactory milestones are reached according to plan.
Regular communication also happens within software development teams. For example, design reviews encourage communication to help finalize designs. Code reviews focus on communicating how the system is changing, and how to solve problems and improve code. Daily stand-ups in Agile software processes are about concise verbal communication.
Software engineering team members must undergo training on a regular basis to acquire or maintain certain skills. To support this, software development teams typically put in place organizational change transformation methodologies and frameworks (e.g., Prosci 3-Phase process) to manage their ability to successfully conduct projects given the changing nature of software engineering.
Risk Management and Planning
Risk management and planning focuses on identifying potential risks like project delays and having a plan for how to address them. Software development is a complex activity that involves many people working over a long period of time, and it turns out that not every project succeeds. Some projects are delayed, some of them overrun the budget, and some are never finished. The high percentage of unsuccessful projects creates a need for risk management and mitigation. Minimizing risk is typically the main task of a manager, but software engineers can also take on management tasks. For example, to help managers, they might provide updates that allow managers to refine their risk assessments. Ideally, these updates would address the different components of risk, including:
- Performance risk: considers the possibility that the product will not fit its intended use
- Cost risk: determines if the budget constraints can be maintained
- Support risk: assesses how easy the product will be to maintain and update once it is completed
- Schedule risk: considers whether the project will meet expected deadlines
Risk projection attempts to associate with each risk the likelihood of its occurrence and the consequences of the resulting problems should the risk occur. One of the software process models you will learn more about later in this chapter is called the Unified Process. In this process, risks are mitigated by selecting risky requirements for early iterations. For example, a use case that requires a new technology is typically considered risky, as is a use case that assumes integration with legacy code. We select such use cases for an early iteration because if their implementation happens to fail, it is better to fail at the beginning of the project rather than at the end.
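A common way to quantify risk projection, using hypothetical figures for illustration, is to multiply the estimated likelihood of a risk by its estimated cost. For example, if integrating with a legacy subsystem is judged to have a 30% chance of forcing rework estimated at $40,000, the resulting risk exposure is 0.30 × $40,000 = $12,000, a figure that can be used to rank risks and decide which ones to mitigate first.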
As the project continues, managers focus on minimizing risk across the four Ps: the people involved, the product being developed, the process being followed, and the project work being completed. While the Agile and traditional software processes use different approaches, the goals for each are the same: controlling risk by providing people with a well-defined product and clear processes to follow. These goals allow software engineers to estimate the work required and track the product through development. Managers compare the product completed against those estimates and use those results to make any needed adjustments.
Technology in Everyday Life
Delivering Viable Systems
The development of software often focuses on three areas: desirability, capability, and viability. In other words, the focus is on what is wanted, what is possible, and what will sell or help a business to function. To be successful, a software product must deliver something people want and find valuable. It must also be possible to implement the product.
In the 1980s, the first virtual reality software was released. VR was something that many people wanted and saw value in having; however, the technology was not capable of delivering a viable system. It is only today that software and hardware capabilities can support VR at a level that makes it possible to deliver products that people are willing to pay for.
What are some other technology areas that are viable today that were not viable ten years ago? What are some software technologies that are not viable today that could be viable within the next ten years?
Software Configuration and Content Management
Software configuration management (SCM) is a crosscutting activity that helps report, identify, and control change to items that are under managed development. These items are referred to as Software Configuration Items (SCIs). SCM also analyzes the implementation of change and provides mechanisms to publish and deploy change. In a project without a real customer, such as a class project, one way to reinforce the importance of configuration management is to change the project requirements sometime after implementation begins.
The focus of SCM tends to be on four main areas:
- Configuration identification: the identification of all components within a project, including any files, documents, source code files, directory structures, and more
- Configuration change controls: the controlling of who accesses elements of a project and the tracking of changes being made
- Configuration status accounting: the tracking of who made a change as well as when they made the change and why it was made
- Configuration auditing: the tracking of the status of a project and, more important, the tracking and confirmation that what is being created matches what is required
Many Agile teams make use of continuous integration to ensure that they always have viable prototypes ready to test and extend. The advantages of continuous integration are as follows:
- involves frequent feedback to notify developers promptly when integration testing fails so they can fix issues as quickly as possible, especially if the number of fixes required is small
- improves quality by being able to address product changes quickly; as a result, users can trust that the product meets their needs
- reduces risk by avoiding long delays between the time software is developed and its integration into the product; this ensures that design failures can be detected and addressed early on
- involves up-to-date reporting to ensure that software is correctly configured to conform, for example, to the latest code analysis metrics
- ensures that streamlined integration serves as a key supporting technology in organizations that use Agile software process models
- captures defects as early as possible in the software engineering process, which limits the cost of software development
Various tools may be used to support SCM, including audit management tools (e.g., ZenHub), configuration management/automation tools (e.g., Ansible, Vagrant), continuous integration tools (e.g., Jenkins, Travis CI), dependency tracking and change management tools (e.g., Basecamp, Jira), source control management tools (e.g., GitHub), and so on.
Content management includes collection, management, and publishing subsystems. The collection subsystem facilitates the creation and acquisition of new content; it also makes it possible for humans to relate to the content and combines it into units that can be displayed more effectively on the client side. The management subsystem provides a repository for content storage capabilities, including the content database (i.e., the information structure used to organize all the content objects), the database capabilities (e.g., functions to search for content objects, store and retrieve objects, and manage the content file structure), and configuration management functions (e.g., support for content object identification, version control, change management, change auditing, and reporting). The publishing subsystem extracts content from the repository, converts it to a publishable form, and formats it so that it can be displayed in a web browser (e.g., Chrome, Safari). The publishing subsystem uses a series of templates for each content type, including static elements (e.g., text, graphics, media, and scripts that require no further processing and are transmitted directly to the client side), publication services (i.e., function calls to specific retrieval and formatting services that personalize content, perform data conversion, and build appropriate navigation links), and external services that provide access to external corporate information infrastructure, such as enterprise data or “back-room” applications.
Software Quality Management
Whereas testing validates that things work as expected and that there are no errors or issues, Software Quality Management (SQM) focuses on the development and management of the quality of the solution being developed. Tasks within SQM involve quality planning and quality control. They include, but are not limited to, activities such as:
- confirming requirements are correct, complete, and consistent
- verifying that all elements of design conform to the requirements and are of high quality
- confirming that source code follows coding standards and is written in a manner that will be maintainable going forward
- ensuring that testing checks all elements of a solution
- implementing a change management plan
Engineering quality software requires a deep understanding of the solution requirements and the ability to design work products that conform to these requirements. These activities must rely on software engineering best practices and must be supported by adequate project management.
Assessment reviews (e.g., system engineering assessments, software project planning assessments, analysis models assessments, design models assessments, source code assessments, software testing assessments, and maintenance assessments) are an important quality assurance mechanism. Software quality assurance (SQA) is part of a broad spectrum of software quality management activities that focus on techniques for achieving and/or ensuring high-quality software.
Architecture Management
Returning to the analogy of how software architecture is like the blueprint of a house, software architecture management can be likened to improving the blueprint as you use it to build the house. To put it another way, creating the blueprint (architecture) is just the first step. In software development, architecture management involves keeping track of the blueprint and making sure it stays up-to-date as the software is being built. This helps avoid problems later on. While HLD is a software engineering action that takes place in the software process elaboration phase, the architecture management umbrella activity encompasses a set of architecture management and architectural refinement techniques that can help improve the architectural design while it is under development.
Architecture management efforts may be performed at any point within the software life cycle, which explains why architecture management is a good umbrella activity; it maintains the knowledge required to qualify the “goodness” of solutions from a design standpoint, and it is handled separately from the quality management umbrella activity.
There are special tools that can help with architecture management. Similar to using software to draw up the blueprint of a house, these tools can help store and organize the architecture information and make it easier for everyone working on the project to understand it. Examples of such tools include artifact/package management tools (e.g., Docker Hub, JFrog Artifactory) and pattern catalogs.
Software Security Engineering
Engineering software security focuses on protecting software assets against threats. Threats typically exploit software vulnerabilities to compromise the confidentiality and integrity of data. Threats may also compromise the availability of software systems by disrupting access to system services and related data.
Software architectures must be designed to address security requirements and eliminate vulnerabilities that can lead to exploits. Various design techniques can be used by software engineers to address the possibility and the effects of attacks in order to minimize related losses and costs. As an example, the Security Quality Requirements Engineering (SQUARE) process model, developed at Carnegie Mellon University's Software Engineering Institute, provides a means of eliciting, categorizing, and prioritizing security requirements for software-intensive systems.
Keeping up with cybersecurity threats is proving difficult for businesses these days due to a lack of trained staff and increased demand for security compliance. Traditional approaches to security are no longer sufficient to ensure that organizations can keep operating while developing competitive solutions. For that reason, many businesses are combining traditional or Agile software process models with modern approaches, such as DevSecOps, to manage software security engineering. Using DevSecOps requires the adoption of new processes and tools as well as the training of staff members. The DevSecOps approach automates the support of security throughout the SDLC, which reduces the time and cost of development and facilitates the integration of the security and development teams.
Some examples of DevOps security tools are Aqua Security and HashiCorp Vault, and examples of DevSecOps tools are SonarQube and XebiaLabs.
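One way to picture what DevSecOps automation adds is an extra security gate in the same kind of pipeline sketched earlier for continuous integration. The sketch below is a simplified illustration; the scan_dependencies and security_gate functions and the advisory data are hypothetical stand-ins, not the interface of any specific security tool.

```python
def scan_dependencies(manifest: dict[str, str]) -> list[str]:
    """Hypothetical dependency scan: flag packages on a known-vulnerable list.

    A real DevSecOps pipeline would invoke a dedicated scanner here rather
    than this illustrative lookup table.
    """
    known_vulnerable = {"examplelib": "1.0.0"}  # placeholder advisory data
    return [
        f"{name}=={version}"
        for name, version in manifest.items()
        if known_vulnerable.get(name) == version
    ]

def security_gate(manifest: dict[str, str]) -> bool:
    """Fail the pipeline when vulnerable dependencies are detected."""
    findings = scan_dependencies(manifest)
    if findings:
        print("Security gate failed:", ", ".join(findings))
        return False
    print("Security gate passed.")
    return True

if __name__ == "__main__":
    security_gate({"examplelib": "1.0.0", "requests": "2.32.0"})
```

Running such a gate on every build is what lets security checks happen continuously throughout the SDLC instead of only at release time.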
Popular Software Process Models
The framework activities (or phases) that have been presented as part of the process framework are general phases that are applied within software process models/SDLCs. The types of software engineering actions applied within each phase depend on the software development model that is used for the project at hand.
There is a multitude of SDLC models. These models have evolved over time and offer various approaches to creating software solutions. Some more traditional SDLCs are prescriptive in terms of the software engineering actions that must be conducted, while others are agile. Agility, as you know by now, has to do with the ability to skip some software engineering actions or make some of the deliverables optional in order to meet deadlines and still deliver a quality product within budget constraints. As you may recall from Figure 9.4, there is a spectrum of agility between software process models. By nature, SDLCs are incremental as it is always possible to consider a subset of requirements for a given release. In fact, there are typically as many increments as there are subsets of requirements. To accommodate changes in requirements and possibly new requirements within an increment, SDLCs can involve iterations that make it possible to add to and replan increments on an ongoing basis. This adding and replanning may introduce backlogs because, usually, the original timeline cannot be changed.
In short, SDLCs can be made agile, incremental, and iterative. Historically, traditional models were incremental but not iterative or agile. The Unified Process (UP) model was the first traditional model to introduce iteration, but it remained quite prescriptive in terms of expected deliverables and, therefore, was not agile. Agile software process models are always incremental and iterative. That said, it does not make sense to use an Agile process model if the requirements are known and not expected to change during the increment. In that case, using out-of-the-box solutions may, with the help of some of the collaborative practices set forth in Agile process guidelines and ASDEs/SDLCs, produce better results. Traditional software process models follow a step-by-step plan, akin to building a house according to a blueprint. They are good for projects with clear requirements that don’t change much. While some organizations define the software development model their software engineers are expected to use, it is almost always better for teams to pick or tailor a software process model so that it aligns with the project at hand.
Some of the popular software process models include:
- Waterfall model
- V-model
- Incremental model
- Prototyping model
- Spiral model
- Unified Process (UP) model
- Agile Process models
Waterfall
Predominantly used in the early days of software engineering, the waterfall model is a continuous prescriptive software process model in which phases “flow” into one another the way water flows from the top of a waterfall down to the bottom. In the waterfall model, the requirements are first gathered and analyzed, then a complete software system is designed, the system is implemented, and final testing is done before the system is deployed. Unfortunately, the traditional waterfall model did not make a distinction between phases and software engineering actions; therefore, the steps it uses correspond to specific software engineering actions or task sets with custom names that should be conducted in a prescribed sequence. Our generic process framework may still be used to represent the waterfall process, but the framework activities would have to be ignored, and specific software engineering actions or task sets that correspond to the waterfall phase names would have to be used instead. For example, some of the testing task sets from the SQA software engineering action of the quality management umbrella activity could be pulled together to make up the waterfall testing phase.
In the waterfall model, software engineering actions or task sets are performed in a strict order as shown in Figure 9.8, and each one of these has a required output in the form of artifacts, such as a document, diagram, or code. This software process is not Agile, so it is not possible to skip a step or drop a deliverable. For example, the output of requirements analysis is a document that describes all requirements for the system. Because this process is not iterative, it is not possible to go back to a previous step to modify this output. For example, if the final design document contains a mistake, it could not be revised during the implementation step—the next step would be testing. Note, however, that there is nothing that keeps the waterfall model from being used incrementally. The concept of incremental development was simply not understood when waterfall was first used.
The major advantages of the waterfall model include:
- It is easy to understand and use.
- Steps and corresponding software engineering actions or task sets are conducted sequentially.
- The artifacts are well documented.
Although the waterfall model has some advantages, they are often outweighed by its disadvantages:
- It cannot easily accommodate changes in requirements. If there is a change in requirements, it is necessary to go back to the first step of the process model and update all the artifacts that were previously completed.
- No software product is provided until late in the life cycle. Because software is not available until the end of the implementation/construction step, it is not possible to ask the customer for feedback during the process.
- It produces many artifacts, which are not always necessary; therefore, a lot of time may be spent creating unnecessary artifacts.
Despite these disadvantages, the waterfall method is useful in situations where requirements do not change and interaction with the users of a system is limited or nonexistent during a project.
The V-Model
The V-model for software development is also known as the verification and validation model. The V-model is similar to the waterfall model in that it is a continuous prescriptive model that includes basic initial system creation steps starting with gathering requirements, designing the system, and coding it. Each step is prescriptive and conducted sequentially. Where the model differs is that each step of the V-model is associated with a verification or validation testing step/phase, as shown in Figure 9.9. This testing is planned in coordination with each of the design and implementation steps/phases.
Like the waterfall model, the V-model is considered easy to understand and use because it follows a specific flow when it comes to steps/phases, and each step/phase is only completed once. Also, it is best suited for smaller projects where the requirements are easy to understand and unlikely to change after the project starts. The V-model’s advantage over the waterfall method is that verification and validation testing is more integrated into the overall process.
The V-model, however, does have several disadvantages, including:
- It is not good for larger, longer projects or projects that may involve changing requirements.
- A usable software product will not be available until near the end of the software development life cycle.
- Once testing is started, it becomes more difficult to make changes to the design.
Incremental Model
In the incremental model, the software process is divided into modules. Each module focuses on a smaller set of requirements based on an overall business plan. These smaller sets of requirements are then used to design, implement, and test that part of the software solution, as shown in Figure 9.10. Once all the modules are completed, the software solution is deployed to the users. Note that increments in this case are different from iterations. It is assumed that each increment focuses on a small subset of requirements, and it is not possible to accommodate possible changes to requirements. Projects are typically split into a number of increments meant to cover customers’ needs over a period of time that is acceptable to them, while each increment is made manageable by the development team.
The incremental model is best suited when the requirements are clearly stated at the start of the project and the product needs to be released quickly. Because the increments are small, testing can be done and user feedback can be gained with each increment. This means there are opportunities to identify and fix errors or issues with the product sooner than in the waterfall or V-model methods. Once the overall requirements step is complete, each increment can focus on its specific delivery. This helps to reduce costs, especially if there is a change in the requirements. Additionally, a big advantage of the incremental model is that it is easy to know how much has been completed and what remains to be done.
Prototyping Model
The prototyping model requires the quick creation of a mock-up or demo of the expected final product in order to show end users what the system could look like and how it might function. Because the users can see what the product may look like and the basics of how it may function, they are in a better position to provide feedback that can be incorporated into the demo and then built into the final product. The prototyping model is also known as a rapid application development (RAD) model because its focus is on getting a working demo created rapidly.
This model still uses gathering requirements, designing, implementing, and testing steps; however, they are all done quickly to build the prototype. The focus is on improving the prototype to get to what is required to build the final software solution.
As shown in Figure 9.11, the prototyping model can be used to build a mock-up that the user can approve, and then the mock-up can be used as an input to the standard process of designing, implementing, testing, and deploying the actual product.
The prototyping model has some drawbacks. Because it involves getting the user involved early, the process of testing and incorporating feedback can become time consuming. Additionally, while the prototype might appear to work, generally it will lack the full internal functionality, which must still be built even after the user sees what appears to be a working system. This may require added effort to be made to manage expectations about the final product.
Spiral Model
The spiral model is a combination of the waterfall model with an iterative approach and a focus on reducing risk within a project. As with the waterfall model, the spiral model starts with requirements gathering, except that it begins with a small set of requirements and then cycles through planning, design, implementation, and testing for those requirements. After the initial set of requirements is addressed, the process iterates back to the beginning, where additional requirements are applied to the project, and then it continues cycling, as illustrated in Figure 9.12, until the software solution is ready for deployment. This model differs from the waterfall model in that it includes a risk analysis as part of the planning step. The risks associated with the project are also assessed during the review and testing of the system.
The spiral model is often used on larger projects or when frequent releases are expected. It is also used when risks are considered high for a project and need to be monitored closely. Such risks can include cost, unclear requirements, complex requirements, or having requirements that could change. The advantage is that because of the spiraling, iterative nature of the model, changes can be added later in the project. Additionally, the model allows for better estimation of costs for individual iterations because a limited number of requirements are being addressed during each spiral. Because each iteration allows for the system to build upon itself, there is also the benefit of being able to adapt to user feedback and changing requirements.
The spiral model does have its disadvantages. Because there is an added focus on risk, it requires expertise in risk management. Additionally, the iterations not only add new features, but can build upon existing features, which can lead to added costs. If applied to smaller projects, the cost can outweigh the benefits of some of the other approaches. Because there are multiple iterations composed of several steps/phases, it is also important to follow the processes more strictly than in the case of other models, and keep good documentation to know what has been done and what is expected.
Unified Process Model
In the Unified Process (UP) model, the development of a software system is divided into four primary phases (inception, elaboration, construction, and transition), each of which involves multiple iterations that include the standard software development processes, as shown in Figure 9.13. Like our generic process framework, the UP model differentiates phases from software engineering actions. The UP model still relies on a continuous prescriptive process, but it supports incremental iterative development. The UP model uses the word discipline or process to refer to what our process framework calls a software engineering action. The names of the UP phases are almost the same as the names of the phases in our process framework, with the transition phase being equivalent to deployment.
Each phase or process of the Unified Process model has its own goal. In the inception phase, the goal is to get the users to buy into the solution being presented, document the requirements, and create an initial plan for the project. The elaboration phase focuses on the solution design and firming up the project plan. The construction phase focuses on implementing and testing the software. Finally, the transition phase focuses on the deployment of the solution. As in other traditional process models and in contrast to the Agile models, each phase of the UP model results in a set of deliverables. In the inception phase, this might include documents like a scope statement, initial risk assessment, or preliminary project plan; in the elaboration phase, a software development plan, revised business case, or executable architecture baseline; in the construction phase, individual iteration plans, release description document, or user documentation; and in the transition phase, final product release and lessons learned analysis, among others.
Note that the amount of time spent in each discipline changes over time so that as the project evolves, less time is spent on requirements and design and more time is spent on testing and deploying. The expectation is that the model adjusts the extent of software engineering actions on an ongoing basis depending on what is needed to develop the software solution at a given time.
The expectation when using the UP model is that software will be delivered early and regularly throughout the process. Additionally, the model allows users to see what is coming and provide feedback, which, in turn, allows the software to be adapted to any changing needs. This requires open communication throughout the process and keeping the users as active participants in the project. Another practice with this model is a focus on reusing existing code as well as on using modeling and other tools. Modeling notations such as UML are almost always used as part of UP. Additionally, UP lowers risk through the iterative nature of the development. Because the iterations are timeboxed, there is better control of the overall process. This helps make risk easier to manage.
The disadvantage is that the phases are combined with the disciplines and timeboxed into a set of iterations, which can result in a model that is complex to follow. To be effective, the disciplines need to be managed, and communication needs to be clear. It generally requires good management of the process.
It is worth noting that UP was the first model to distinguish phases from disciplines, which made it possible to introduce iterations and integrate them with increments. While incremental development had existed before, the separation of phases from disciplines made it newly possible to plan iterations as part of a project increment and thus to create solutions more effectively, as software process actions were no longer associated with a specific framework activity/phase. During each iteration, partial functionality could be created, and the results could be integrated with the rest of the system.
Agile Process Models
Several popular software process models in use today align with the Agile guidelines. While the phases used within Agile process models are similar to those of other models, the difference is that some disciplines in various phases may overlap. For example, in an Agile process model, all the requirements do not need to be defined in the inception phase before the design or coding disciplines start in the elaboration and construction phases. New requirements or changes in requirements can be considered as part of subsequent phases. Additionally, an incremental, iterative approach is applied, so it is not necessary for all the functionality in one increment to be dealt with at once. It is worth noting that the incremental model presented earlier may be agile for a similar reason. The benefits of Agile process models are that they are flexible (i.e., they allow for changes along the way) and emphasize collaboration. They also allow for continuous improvement, which is advantageous if requirements are likely to change or needs are likely to evolve. The main drawback is that Agile models can be difficult to scale.
Agile Principles
Agile principles were formulated in the Agile Manifesto, which was a response to the large number of projects that were delayed, ran over budget, and did not meet customers’ expectations. Agility refers to the ability to create and respond to change in order to profit in a turbulent business environment. Some Agile principles are as follows:
- Satisfy the customer through early and continuous delivery of software.
- Welcome changing requirements, even late in development.
- Deliver working software frequently, such as every couple of weeks.
- The most effective way to communicate within a development team is face-to-face (or via collaboration tools).
- Working software is the primary measure of progress.
- Maintain a sustainable working pace.
- Continuous attention to technical excellence and good design enhances agility.
- Simplicity—the art of maximizing the amount of work not done—is essential.
- The best architectures, requirements, and designs emerge from self-organizing teams.
The approaches that complement Agile software development are Scrum, DevOps, and Site Reliability Engineering.
Link to Learning
To learn more about Agile, you can visit the Agile Alliance to learn the core principles of Agile development from the experts. The Agile Alliance is a global nonprofit organization that is focused on applying and expanding Agile values, principles, and practices. You can view its tutorial What is Agile? online, which provides more details on Agile and its use.
Scrum
Scrum is a type of Agile software development model. The fundamental unit of Scrum is a Scrum team, which is typically ten or fewer people. It consists of one Scrum master, one product owner, and several developers.
The Scrum master is responsible for running Scrum and for helping everyone understand its theory and practice. The product owner is responsible for product backlog management, which includes product goals and product backlog items. The product goal describes future desired features of the product, and the product backlog item defines what is required to be added to the product.
The product is developed in iterations called sprints; a sprint is a fixed-length event, typically of one to four weeks. Each sprint starts with sprint planning, in which the team selects the product goals and product backlog items that will be implemented in that sprint. The selected product goals and product backlog items are moved to the sprint backlog, which is the plan for the current sprint. During the sprint, developers hold a daily scrum, a 15-minute event for the developers that takes place every day at the same time and in the same place. During the daily scrum, the developers inspect the progress and, if needed, adapt the objectives of the sprint. At the end of the sprint, there is a sprint review, during which the Scrum team and stakeholders review a demonstration of what was achieved as part of the sprint increment, and the Scrum team gets feedback. After that, in the sprint retrospective, the Scrum team discusses what worked well and what worked poorly process-wise during the last sprint and proposes changes to increase effectiveness. Figure 9.14 depicts the Scrum process.
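To connect the Scrum vocabulary to something concrete, the sketch below models a product backlog and a sprint backlog as simple data structures and shows what sprint planning and a daily check on progress might look like. The class names, fields, and point estimates are illustrative assumptions for this example, not part of the Scrum framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    """A product backlog item: something required to be added to the product."""
    title: str
    estimate_points: int
    done: bool = False

@dataclass
class Sprint:
    """A fixed-length iteration with its own sprint backlog."""
    goal: str
    length_weeks: int
    backlog: list[BacklogItem] = field(default_factory=list)

    def plan(self, product_backlog: list[BacklogItem], capacity_points: int) -> None:
        """Sprint planning: pull items from the product backlog up to capacity."""
        for item in list(product_backlog):
            if item.estimate_points <= capacity_points:
                self.backlog.append(item)
                product_backlog.remove(item)
                capacity_points -= item.estimate_points

    def progress(self) -> str:
        """The kind of status a daily scrum inspects: completed vs. planned items."""
        done = sum(1 for item in self.backlog if item.done)
        return f"{done}/{len(self.backlog)} items done toward goal: {self.goal}"

product_backlog = [BacklogItem("User login", 5), BacklogItem("Password reset", 3),
                   BacklogItem("Admin dashboard", 8)]
sprint = Sprint(goal="Basic account management", length_weeks=2)
sprint.plan(product_backlog, capacity_points=8)
sprint.backlog[0].done = True
print(sprint.progress())
```

In practice, teams track this state in tools such as Jira rather than in code, but the underlying bookkeeping of goals, backlog items, and sprint capacity is the same.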
The benefits of the Scrum framework include better quality of the product, decreased time to market, higher customer satisfaction, and increased collaboration within the development team. The drawbacks are that the approach requires training; it is not suitable for large teams; and it requires daily meetings.
Link to Learning
Atlassian, a worldwide company that creates team- and project-related products, has a no-nonsense guide to Agile development that provides additional details on what Agile is as well as on related topics, such as Scrum, Kanban, Agile project management, the Agile Manifesto, and much more. Its site has a tutorial for Scrum as well as related articles where you can learn more about how to use Scrum in a project.
DevOps
A DevOps model combines practices of software development and operations. It uses a short development life cycle and continuous delivery to achieve high-quality software products. In a DevOps model, development and operations teams are merged into a single team, and software engineers participate in all parts of the software life cycle. As shown in Figure 9.15, this life cycle includes planning (design), development, testing, deployment, and operations in a continuous cycle.
Because the DevOps model involves a focus on both development and operations, there is less chance of errors or vulnerabilities existing in a released software product. This uniting of development and operations can also lead to faster releases or shipments of products. Because of the collaboration, there tends to be improved effectiveness, faster delivery, and an optimizing of processes.
Where the DevOps process model can struggle is with systems that are complex as well as with legacy systems. The DevOps model requires strong teamwork and collaboration or it will likely fail. Additionally, it requires that the team members have the right expertise to satisfy the expectations of the project, including the ability to do continuous integration and development.
Link to Learning
Some of the best practices associated with the fascinating field of DevOps include continuous integration, continuous delivery, microservices, infrastructure as code, monitoring and logging, communication and collaboration, among others. Read this perspective on what DevOps is and consider researching other opinions to find other perspectives to help fully understand the concept.
Site Reliability Engineering
Site Reliability Engineering (SRE) is an approach that focuses on achieving appropriate levels of reliability when developing solutions. SRE was created to address the complexity of the challenges that arise when software solutions get larger. It is important to make sure that software meets business needs while operating reliably. The ability to scale up must be balanced against the complexity of a solution while maintaining reliability within the system. In many ways, this goal is similar to that of DevOps.
Three key parts of SRE are reliability, appropriateness, and sustainability. A system needs to be reliable to serve the needs of the client. The level of reliability needs to be appropriate. Specifically, some systems don’t need 100% reliability 100% of the time. For example, a feature such as cruise control only needs to be reliable when it is in use. Additionally, most cruise control features do not need to maintain an exact speed; they can be off by a few miles or kilometers per hour when the car is going up or down hills and still be acceptable. Regarding sustainability, a system has to be sustainable and maintainable by people.
As noted earlier, SRE is similar to DevOps in that both bring software engineering and software operations closer together. However, DevOps tends to focus on the product solution, or the “what,” whereas SRE focuses more on the “how” (i.e., how the solution will get done in a reliable and sustainable manner). Both focus on providing opportunities for collaboration across an organization to deliver solutions that will be successful for the client. Table 9.1 indicates areas where SRE and DevOps differ.
| SRE | DevOps |
|---|---|
| Primary focus on the reliability of a solution | Primary focus on the effective development and delivery of a solution |
| Focuses on regulating IT with specific measurements, such as service level indicators (SLIs) and service level objectives (SLOs) | Focuses on continuous integration (CI) and continuous delivery (CD) |
| Prioritizes user experience by ensuring services run reliably and meet SLOs | Works with broad ideas and does not specify how the operation of services is run |
| Intended to be a role more than a framework, although it can be performed by those outside of the specific role | Although it can be a role, it is intended to be a philosophy adopted across a team |
| Works to move quickly by reducing the cost of failure | Works to implement gradual change to reduce the chance of failure |
| Has specific expectations of what is acceptable regarding failure or issues with new releases | Accepts failure as a learning opportunity and prioritizes rapid recovery and continual improvement |
| Uses automation and monitoring tools to standardize processes and reduce manual effort | Works to reduce organizational silos by working closer together but does not necessarily use the same tools and techniques |
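The table refers to service level indicators (SLIs) and service level objectives (SLOs). The sketch below shows, under made-up numbers chosen purely for illustration, how an SRE team might compute an availability SLI and compare it against an SLO, including the remaining error budget that guides how much risk the team can still take.

```python
def availability_sli(successful_requests: int, total_requests: int) -> float:
    """SLI: the measured fraction of requests that succeeded."""
    return successful_requests / total_requests

def check_slo(sli: float, slo: float) -> None:
    """Compare the measured SLI against the SLO and report the error budget."""
    error_budget = 1.0 - slo            # allowed fraction of failures
    observed_failures = 1.0 - sli       # measured fraction of failures
    remaining = error_budget - observed_failures
    status = "met" if sli >= slo else "missed"
    print(f"SLI={sli:.4%} against SLO={slo:.2%}: {status}, "
          f"remaining error budget={remaining:.4%}")

# Illustrative numbers: 999,620 successes out of 1,000,000 requests
# measured against a 99.9% availability SLO.
check_slo(availability_sli(999_620, 1_000_000), slo=0.999)
```

When the remaining error budget approaches zero, an SRE team typically slows the pace of change; when plenty of budget remains, it can release more aggressively.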
Suggested Process Model
Which process model is the best? There is no best process model, and no process model will work for every project or group of people. Each model has advantages and disadvantages to be considered.
The recommended approach is to use the model that can be tailored best to fit the current project and the skills of the team members. Many organizations have already made this determination and will have their own internal guidance on which model should be used. Regardless, it is often important to consider software process improvement, which is the process of transforming the existing approach to software development into something that is more focused, more repeatable, more reliable (in terms of the quality of the product produced and the timeliness of delivery), and more cost-effective. As shown in Figure 9.16, software process improvement typically involves four steps.
Industry Spotlight
Applying the Right Process
The proper selection and application of a software engineering process are important in every industry today, as these processes help ensure the success of the software solutions being developed. Software engineering offers many industries opportunities for improvement. For example, the New York Stock Exchange (NYSE) has engineered its software to offer stock trading capabilities to anyone anywhere so that they can trade at almost any time. As you can imagine, the NYSE software is complex, and any software issues encountered during trading hours can generate financial losses or cause reputational damage. These are some of the considerations that must go into the decision about which software engineering process to use to build and evolve such a system.
Of all the processes you’ve learned about so far, which one(s) do you think might be best for a project that involves developing a software solution to serve the global markets industry? Think about the possible ramifications of software development delays when addressing a software issue on a trading floor. Imagine a software defect causes stock prices to display incorrectly during peak trading hours. How could this impact the market and investors?