Introduction to Computer Science

13.2 Big Cloud IaaS Mainstream Capabilities


Learning Objectives

By the end of this section, you will be able to:

  • Use IaaS storage services
  • Use IaaS compute services
  • Understand IaaS support services for web and mobile applications
  • Describe IaaS container management services
  • Understand IaaS support services for database management

From a business model perspective, start-ups and companies with fluctuating workloads depend on finding a way to effectively deploy and manage their applications. This includes considerations such as inexpensive, reliable performance that can adapt to unpredictable seasonal demand. This calls for a service model that allows businesses to access cloud resources and adopt agile infrastructure without having to worry about IT management.

One solution to such demand is infrastructure as a service (IaaS). IaaS offers on-demand access to cloud computing services, provides a pay-as-you-go pricing model, and allows the user to take advantage of cloud servers by virtually managing data and servers for their application. IaaS allows the engineering team to optimize the platform from the infrastructure up without getting locked into any one cloud provider’s settings.

In this section, we will delve deeper into the different layers of operations that IaaS provides. As the infrastructure is provided without any special setup, IaaS, as a framework, requires additional skills and time from the engineering team to set up and maintain their platform. Throughout these sections, we will go through different common infrastructures that mainstream IaaS providers usually accommodate so that we can determine the best approach to maintain our cloud applications.

Storage Service

A storage service is one of the base infrastructure components that a cloud provider offers; it allows the user and the application to read, write, and access storage. These services are elastic storage services, and the word elastic is used throughout the chapter to indicate an unlimited pool of resources that the user can access from the cloud provider. Some common components of an application that may require storage access are analytical data, logging information, application data, images, and videos. The storage required for these components may grow over time as the application operates, so users must understand the type of storage service they choose for their application. It is also common for a user to choose more than one service for their application. There are several types of storage services to choose from, depending on the cloud provider. For example, Microsoft Azure allows users to create a storage account that makes it possible for them to use blob storage as explained in the following paragraph, mount Azure-based remote file systems to their local machine, create column-oriented database tables, or create a queue to receive streaming data from a sensor. However, there are three common types of services that all cloud providers offer:

  • The first, file storage, manages data as files. This is the most common storage service among first-time users, as it is the easiest concept to understand. Most users who use computers daily are familiar with files and the way they are stored in their local environment. However, because files tend to grow in size and their formats can become complicated, file storage becomes more difficult to manage as the application operates and grows in the cloud environment. The structure of the file system can also be carried into each file stored in this service. Some common file metadata that are managed and tracked by the cloud provider are file name, file size, timestamp, and permissions.
  • The second, object storage (or blob storage), manages data as blobs, with each blob representing any data format, such as a file in a local filesystem. A blob can contain any type of data, such as a small value, a document, an image, a video, or a collection of these. Cloud providers allow access to the stored information and its associated metadata via an API/SDK or, in the case of pictures for example, via direct web links. Common metadata includes name, size, timestamp, and custom tags. In some cases, this approach helps manage the storage of items and also makes them available to anyone who is given access to them in the cloud, which is not possible when storing data items in a local file system. This service is commonly used by consumer applications, which access and retrieve data items as objects using their names and tags.
  • The third, block storage, manages data as blocks, or physical ranges of storage on a physical device (such as a hard disk drive [HDD] or Non-Volatile Memory Express [NVMe] drive). Each block can range from a few kilobytes to several megabytes in size. Because this service lets the user access data directly by physical address, it does not need to manage the entire set of data objects. The most common usage for this service is a system application, such as an operating system or database, that needs much quicker access to data and keeps track of and manages how the data are changed.
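As a rough illustration of the object storage model described above, the following is a minimal in-memory sketch. The class, method names, and metadata fields are invented for illustration; they are not a real provider SDK, but they mirror the name/size/timestamp/tag metadata the service tracks.

```python
import time

class ObjectStore:
    """Toy in-memory sketch of an object (blob) storage service.

    Each blob is stored by name along with the kinds of metadata the
    section mentions: name, size, timestamp, and custom tags.
    """

    def __init__(self):
        self._blobs = {}

    def put(self, name, data, tags=None):
        # Store the raw bytes together with provider-managed metadata.
        self._blobs[name] = {
            "data": data,
            "size": len(data),
            "timestamp": time.time(),
            "tags": tags or {},
        }

    def get(self, name):
        return self._blobs[name]["data"]

    def find_by_tag(self, key, value):
        # Object stores let clients retrieve items by name or tag, not by path.
        return [n for n, b in self._blobs.items() if b["tags"].get(key) == value]

store = ObjectStore()
store.put("cat.jpg", b"\xff\xd8...", tags={"kind": "image"})
store.put("report.pdf", b"%PDF...", tags={"kind": "document"})
print(store.find_by_tag("kind", "image"))  # ['cat.jpg']
```

Note that retrieval is by flat name and tag rather than by directory path, which is the essential difference from file storage.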

To access the storage services from the application, the developer has more than one way to reach the data. Figure 13.2 shows a simple view of the different access patterns through which the user and application can reach the storage service.

Graphic showing how Admin, Developer, User access Cloud API gateway through Console/terminal, Application, Web UI controller console, and then to Authentication/authorization, Storage access point and service interface, and Data device.
Figure 13.2 As the diagram shows, users and applications can access storage service through different patterns, including CLI or command line interface. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Most cloud providers offer different tools to support the different environments a user may work in when accessing their infrastructure and services. The storage service is usually provided as a location in the cloud for the user to access. If the user is a developer, they can access the storage directly from their code using a software development kit (SDK), which provides the operations needed to read, write, and store data. This kit is usually provided by the cloud provider and works with different programming languages. Some applications running in the cloud environment also use this SDK to write operational data, such as logs, or to consume configurations stored in the cloud environment. If a user is a cloud administrator, they can also access the storage service from a computer terminal using a command line interface (CLI). This tool, also provided by the cloud provider, allows users to access the storage service and manage their data. Finally, the user can access the storage service through the web console in their browser and work with their data interactively. Generally, cloud resources are available either via a cloud portal interface, an SDK that is accessed programmatically, or a CLI that enables users to invoke SDK constructs on the command line. Figure 13.3 shows the web interface for an Azure storage account container (aka, blob) accessible via the Azure object storage service’s web console. On the Amazon AWS cloud, blobs can be created as buckets within the Simple Storage Service (S3).

Screenshot of web interface for an Azure storage account container (VBlobs) accessible via the Azure object storage service’s web console.
Figure 13.3 This image from Azure object storage service’s web console provides an additional example of the web interface used to store data containers (i.e., blobs) in Azure within a storage account. (Used with permission from Microsoft)

Another important component of the cloud storage service is the storage access point. The access point is one of the critical components in cloud architecture, minimizing the latency with which the user can consume the data. Based on the user’s location, the cloud provider can offer the available access point closest to the user to eliminate long I/O latency. However, this is also one of the key problems of leveraging a storage service when the application is scaled out. Depending on the cloud provider, there is a fixed number of access points in an area for each account to consume. Usually, this is not an issue for a small- or medium-scale application. However, for real-time and high-performance applications, this restriction may become a bottleneck that prevents the application from operating correctly. The issue can be resolved by moving to a hybrid cloud solution, where the user spends a bit more up front on backup data devices and hardware located in their local area.
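The idea of routing a user to the closest access point can be sketched as a simple latency lookup. The region names and latency figures below are hypothetical; a real provider measures latency and applies quotas, but the selection rule is the same:

```python
def nearest_access_point(latencies):
    """Return the access point with the lowest measured round-trip latency.

    `latencies` maps an access-point name to its latency (in ms) from the
    user's location; a real provider measures these values continuously.
    """
    return min(latencies, key=latencies.get)

# Hypothetical latencies from a user located in western Europe.
latencies = {"us-east": 95.0, "eu-west": 12.0, "ap-south": 160.0}
print(nearest_access_point(latencies))  # eu-west
```

When the number of access points per account is capped, a scaled-out application may find the nearest one saturated, which is exactly the bottleneck described above.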

Compute Services

The compute service, another base infrastructure component that a cloud provider must offer, gives the user on-demand access to a private computing environment. This is also one of the first things a user tries when starting with cloud services, and it provides an experience similar to a remote computing service the user may know from their local environment. The user can access this environment to develop or run any tasks that they cannot run locally. The performance of this service can vary based on the user request, or it can be driven by what the application and its tasks require at runtime. There are various services the user can select, depending on the hardware specification, when the tasks need to run, and how they scale with application requests. The following are three common compute services a user will see from a cloud provider:

  • A virtual compute service (VCS), which enables the user to request an environment to do some tasks and then shut it down to release the resource back to the cloud provider. Depending on when it was requested and how long the user keeps it running, the price for this service varies.
  • A spot/non-urgent compute service, which enables the user to get a task done while keeping costs low by allowing the cloud provider to run the task, without urgency, at a time that is convenient and cost-effective.
  • A virtual functional and serverless compute service, in which an application runs in the compute environment, is executed as a function, and is then shut down when the task is completed. The user is charged based on the number of tasks or requests that the service completes. Because this service requires a different backend service from the cloud provider to operate, the cost per time unit is higher than for other services. This service is common in cloud microservice architectures, where the user hosts different components and functions of their application as separate components. This allows each component to scale independently and avoids the bottleneck of maintaining one large application instance.
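The per-invocation billing model of the serverless service above can be sketched as a toy runtime. The class, the price constant, and the handler are all hypothetical, not any provider's real API or rate:

```python
class ServerlessRuntime:
    """Toy sketch of a function-as-a-service runtime: each request runs
    the handler in a fresh invocation and is billed per request."""

    def __init__(self, price_per_invocation=0.0000002):  # hypothetical rate
        self.price = price_per_invocation
        self.invocations = 0

    def invoke(self, handler, event):
        self.invocations += 1     # billed per request, not per hour
        return handler(event)     # no state survives between invocations

    def bill(self):
        return self.invocations * self.price

runtime = ServerlessRuntime()
thumbnail = runtime.invoke(lambda e: f"thumb:{e['image']}", {"image": "cat.jpg"})
print(thumbnail, runtime.invocations)  # thumb:cat.jpg 1
```

Because each function is billed and scaled on its own, one busy component (say, thumbnail generation) can scale out without touching the rest of the application.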

Similar to the storage service, the user has more than one way to access the compute service. However, because the user needs to interact with the compute service to complete a task, they usually access it through two main channels: the CLI and the web UI controller console. Figure 13.4 shows how users in different roles can access the compute service.

Graphic showing how Admin/developer and User access Cloud API gateway through Console/terminal or Web UI controller console, and then to Authentication/authorization, Compute access point and Virtual compute service interface, and virtual infrastructure.
Figure 13.4 The access pattern for a compute service may differ depending on a user’s role. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

One thing to note is that we ignore the access pattern in which the user either uses a local integrated development environment (IDE) to access the computing environment or accesses an IDE hosted in the virtual environment. In both cases, the user has to configure the computing environment differently to bypass the provided tools and interfaces.

Another challenge with the compute service, similar to the storage service, is the limited number of access points in an area or cloud account. However, this is not as big a challenge as it is for storage, because most computing tasks can be scheduled, and it is easier to operate with minimal compute capacity than it is to provide storage access with minimal I/O latency. High I/O latency can destroy the user experience, and it can occur unpredictably.

Web and Mobile App Services

Among all applications and workloads in the cloud, two common workloads have emerged in recent years: web and mobile workloads, where the user wants to provide real-time access to users across the globe. This is one of the most critical reasons to move into the cloud environment, because the cost of scaling an infrastructure globally is extremely high.

A content delivery network (CDN) is an important cloud capability that accelerates access to web and mobile workloads globally. A CDN is a network of servers and associated networking infrastructure that is spread across the globe and allows access to web and mobile workloads from anywhere. It is configured to prioritize and cache common data and content (e.g., videos) in different geographical areas so it can increase access and processing speeds of mobile and web applications in the global network, which results in improved user experience and reduced energy costs. On-demand streaming services (e.g., Netflix, Hulu, Tubi) use CDNs to direct users to the closest server (i.e., a network edge server located at the edge of the network closer to their location) from which they can stream their movies. The web clients/apps provided by these vendors allow dynamic adaptive streaming over the Web (via the HTTP web protocol) to ensure high-quality streaming of media content over the Internet delivered from CDN servers. Figure 13.5 shows a high-level architecture of how a CDN is set up on Azure Cloud Service.
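The cache-at-the-edge behavior of a CDN can be sketched in a few lines. The classes and content names here are invented for illustration, not part of any CDN API; the point is simply that only the first nearby viewer pays the cost of fetching from the origin:

```python
class EdgeServer:
    """Sketch of a CDN edge server: serve content from the local cache
    when possible, otherwise fetch from the origin and cache it for the
    next viewer in the same geographic area."""

    def __init__(self, origin):
        self.origin = origin       # maps content name -> bytes
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def serve(self, name):
        if name in self.cache:
            self.hits += 1                       # served from the edge
        else:
            self.misses += 1
            self.cache[name] = self.origin[name]  # one fetch from the origin
        return self.cache[name]

origin = {"movie.mp4": b"...frames..."}
edge = EdgeServer(origin)
edge.serve("movie.mp4")   # first viewer in the region: cache miss
edge.serve("movie.mp4")   # second viewer nearby: served from the edge
print(edge.hits, edge.misses)  # 1 1
```

Real CDNs add eviction policies, time-to-live rules, and adaptive streaming on top, but the hit/miss mechanic is the core of the latency win.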

Illustration of users accessing Edge server from Origin server.
Figure 13.5 Microsoft Learn’s content delivery network on Azure Cloud Service is set up using a high-level architecture. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Beyond the infrastructure, the cloud provider also offers other common patterns and services that most web and mobile applications need in order to provide an optimal experience to the end consumer. These patterns and services are pre-implemented and optimized by the cloud provider and integrate easily with the compute and storage services where the application runs. The following are two common web and mobile cloud components:

  • The component that stores and manages sensitive information securely while allowing the application to be scaled and deployed in different environments on the cloud is secret and configuration management. Different environments will require different configurations and secret details, such as database access or language settings. The best practice for handling complex environmental configurations and secrets is to use a global cloud service to inject those details at runtime, so the application code is free of custom implementation, and application deployment and version control become much simpler. Most cloud providers allow the user to manage their custom configuration and secrets and update them during application runtime.
  • The component that allows developers to centralize all logging data and provides a comprehensive view of all events happening in an application at any moment is logging and monitoring management. As the application grows and scales in different environments, the developer’s ability to know and understand how their application runs becomes more and more important. The administrator’s ability to identify and mitigate issues at runtime is an essential requirement for any cloud-based application. For this reason, most cloud providers offer logging and monitoring services together with the compute service so that they can centralize all logging data and provide a comprehensive view of all events happening in the application at any moment. The user can leverage this service to obtain some monitoring capabilities out of the box when they deploy their application to the cloud environment. However, based on different applications and requirements, the user may expand these capabilities by adding custom logging logic or notification configurations to match their needs. Figure 13.6 shows a simple monitoring dashboard on Azure Cloud Service.
Screenshot of simple monitoring dashboard on Azure Cloud Service.
Figure 13.6 This picture provides an example of a simple monitoring dashboard on Azure Cloud Service. (Used with permission from Microsoft)
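The runtime-injection pattern from the secret and configuration management bullet above can be sketched as follows. Environment-variable overrides stand in for the (hypothetical) cloud configuration service; the setting names are invented for illustration:

```python
import os

def load_settings(defaults, env=os.environ):
    """Sketch of runtime configuration injection: the code ships with
    neutral defaults, and each environment overrides them through
    variables injected by the deployment platform at runtime.

    Secrets are never hardcoded in the application; a value that the
    environment does not inject simply stays at its default."""
    settings = dict(defaults)
    for key in defaults:
        if key in env:
            settings[key] = env[key]
    return settings

# The same code runs unchanged in every environment; only the injected
# values differ between, say, development and production.
defaults = {"DB_HOST": "localhost", "LOG_LEVEL": "info"}
prod_env = {"DB_HOST": "prod-db.internal"}   # injected in production
print(load_settings(defaults, prod_env))
```

This is why the section calls deployment and version control "much simpler": the application artifact is identical everywhere, and only the injected values change per environment.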

Container Management Services

In recent years, container management services, which encapsulate an application with the necessary operating system libraries for it to operate, have become one of the key innovations transforming the use of cloud infrastructure. In a nutshell, a container is a way to package an application together with any operating system libraries it requires to operate. By using a container, the developer does not need to worry about a mismatch between the environment where the application was developed and the environment where it runs. A container can run on top of any Linux kernel that supports the container runtime. Figure 13.7 shows how a container runs in a regular compute environment. It is also important to point out that container technology is different from the virtualization technology used in virtual machines.

Illustration of container with Application code linked by downward arrow to System library, linked by downward arrow to Operating system’s kernel and Hardware (these 2 are outside Container).
Figure 13.7 This diagram shows the high-level view of how a container runs in a regular compute environment. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

In the cloud environment, the container management service allows the user to deploy, operate, and scale any containerized application. It also includes several components that help the user better manage their containerized application, such as the following:

  • The container registry (CR), the registry in which the user can version control each container image they deploy into the cloud environment. The user can usually manage containers using image metadata such as namespace, image name, or image tags.
  • The base container image, which is the foundational layer of a container, provided by the cloud provider as a starting point for building an application. The image is kept up-to-date with security patches and packages, allowing the user to update their application with the latest security fixes.
  • The Kubernetes environment, or Kubernetes (K8S) service, which is the most popular container orchestration system. This is a managed service that is usually provided by the cloud provider to allow the user to scale their containerized application in a Kubernetes environment. The K8S environment is one of the key systems to run a hybrid cloud environment where the user can run applications from both their local and cloud environment.

Figure 13.8 shows a simple workflow for deploying a containerized application into the Azure cloud environment. Architecting a hybrid cloud solution requires strong technical expertise in application development and containerized applications, and it also requires deep knowledge of cloud solutions. The details of how to do it correctly are not covered in this section. However, a hybrid solution should be the target for any user on their journey of migrating applications into the cloud.

Arrow connected boxes: Create Azure container registry (ACR); Create Azure Kubernetes Services (AKS) cluster and attach to ACR; Build docker images and publish to ACR registry; Update Kubernetes (K8) manifest; Deploy application.
Figure 13.8 On Azure, K8S can be part of a simple containerized application deployment workflow. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Concepts In Practice

Containers and Virtual Machines

For deployment, organizations can use containers or virtual machines. How do you decide which technology is more appropriate? Containers are more scalable and are practical if you’re working with multiple environments and you want to package and run your applications in a manner that is predictable and repeatable from one environment to the next. Virtual machines (VMs) provide more environmental control and are practical if you need to use the same physical machine to install more than one operating system, as well as create more than one environment. Another consideration is the speed of software development. It is faster and easier to build and test new features on containers compared to VMs. However, VMs provide better security. While there’s a lot to consider when deciding on containers versus VMs, both technologies offer important benefits, providing organizations with critical resources for deployment.

Database Management Services

Today, a database or any data management system is at the heart of any application. However, managing a database is usually a difficult task, as the data can grow in both size and complexity. For this reason, most cloud providers offer several managed database services for applications that run inside and outside of the cloud environment. There are two popular database management services:

  • A relational database service (RDS), in which the relationship between data is strictly managed. The data can be managed under a predefined data schema. This is the most common type of database that most developers are familiar with. Some popular examples of this type of database are Oracle DB, MySQL, or PostgreSQL.
  • A NoSQL database, in which the relationship between data is not strictly managed. The data can be managed under key/value pair. This type of database has become popular in recent years with monitoring applications where the data needs to be written and captured quickly. Some popular examples of this type of database are MongoDB Atlas and Cassandra DB.
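The contrast between the two bullet points above can be shown in a few lines of Python. Here the standard library's sqlite3 stands in for a managed relational service and a plain dict for a key/value NoSQL store; neither is a cloud service, and the table and keys are invented for illustration, but the difference in schema enforcement is the same:

```python
import sqlite3

# Relational: a predefined schema is enforced on every row.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
db.execute("INSERT INTO users (id, name) VALUES (?, ?)", (1, "Ada"))
rows = db.execute("SELECT name FROM users WHERE id = 1").fetchall()

# A row that violates the schema (NULL in a NOT NULL column) is rejected.
try:
    db.execute("INSERT INTO users (id, name) VALUES (2, NULL)")
except sqlite3.IntegrityError:
    pass  # the relational service refuses the malformed row

# NoSQL (key/value style): each value is free-form; no schema to violate.
kv_store = {}
kv_store["user:1"] = {"name": "Ada", "tags": ["admin"]}   # nested value
kv_store["sensor:42"] = {"reading": 21.5}                 # different shape

print(rows[0][0], kv_store["sensor:42"]["reading"])  # Ada 21.5
```

The free-form values are exactly why key/value stores suit fast-writing monitoring workloads, while the enforced schema is why relational services remain the default for structured business data.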

The access pattern for these services is similar to how any application accesses any managed database. Different secure authentication and authorization methods can be configured by the user through the CLI or web console, and the application can access the database from inside or outside of the cloud environment.

Citation/Attribution

© Oct 29, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License. The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.