Introduction to Computer Science

13.3 Big Cloud PaaS Mainstream Capabilities


Learning Objectives

By the end of this section, you will be able to:

  • Learn how to use Internet of Things cloud PaaS services
  • Learn how to use shallow and deep machine learning cloud PaaS services
  • Learn how to use blockchain cloud PaaS services
  • Understand PaaS services support for extended reality applications
  • Understand PaaS services support for 3-D/4-D printing services
  • Relate to PaaS services for cloud application development

For businesses that want everything IaaS offers plus a complete platform to develop, test, and launch their applications, platform as a service (PaaS) has become one of the top options. On top of the servers, networking, and security that IaaS provides, PaaS includes middleware such as operating systems and development tools that enable rapid application development and market launch. Because the cloud provider manages more layers, PaaS requires less time and skill from the engineering team to manage infrastructure, freeing the company to focus on developing applications and providing services. However, because of those abstracted services, PaaS may bring higher costs and dependency on a particular platform or vendor; the higher cost may surface later, when the organization explores different technical decisions and cloud vendors.

In this section, we will explore various PaaS services that enable organizations to effectively launch and maintain large-scale applications, including AI integrations and XR platforms that support scripting and modeling operations. Through real-life examples, we can analyze how companies cut costs while accelerating application development and time to market.

Internet of Things Services

A 5G network enables mobile computing at the edge of modern telecommunication networks with support for a variety of IoT devices, including laptops, smartphones, and smartwatches. 5G has higher radio frequencies, which transfer considerably more data over the air at faster speeds while reducing congestion and lowering latency. Thanks to 5G, more IoT devices can be used simultaneously within the same geographic area. As a result, today’s dynamic information networks consist of interconnected sensors, actuators, mobile phones, robotics, and smart devices.

IoT network traffic falls broadly into two categories: telemetry and telecommand. One category, telemetry, aggregates data generated by sensors and devices and sends them to a server. The other category, telecommand, sends commands across a network to control devices or sensors. Figure 13.9 illustrates the typical flow of IoT data generated by mobile edge devices and data processing and storage via cloud PaaS services.
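
To make the distinction concrete, the two traffic categories can be sketched as simple JSON messages. This is only an illustration: the device ID, sensor name, and command name below are hypothetical, and real IoT platforms define their own message schemas.

```python
import json

def make_telemetry(device_id, sensor, value):
    """Device-to-cloud: a sensor reading reported upstream to a server."""
    return json.dumps({"type": "telemetry", "device": device_id,
                       "sensor": sensor, "value": value})

def make_telecommand(device_id, command, params):
    """Cloud-to-device: a control instruction sent downstream."""
    return json.dumps({"type": "telecommand", "device": device_id,
                       "command": command, "params": params})

# Hypothetical device and command names, for illustration only
up = make_telemetry("thermostat-7", "temperature_c", 21.5)
down = make_telecommand("thermostat-7", "set_target", {"temperature_c": 20.0})
print(json.loads(up)["type"])    # telemetry
print(json.loads(down)["type"])  # telecommand
```

Telemetry flows many-to-one (devices to server), while telecommand flows one-to-many (server to devices), which is why both directions must be supported by the underlying protocol.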

Illustration showing back and forth between User, Applications, Operating system, Device drivers, and devices.
Figure 13.9 Generally, this is how IoT data flow via mobile edge devices, data processing, and storage via cloud PaaS services. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

To serve the purpose of IoT, several application-layer protocols have been developed, such as Message Queuing Telemetry Transport (MQTT), Advanced Message Queuing Protocol (AMQP), Constrained Application Protocol (CoAP), Extensible Messaging and Presence Protocol (XMPP), and Simple Text Oriented Messaging Protocol (STOMP). These were necessary because general-purpose application-layer protocols, such as HTTP, are not well suited to IoT telemetry and telecommand applications. HTTP is a reliable protocol for web applications, but it is designed for one-to-one communication rather than the one-to-many communication between many sensors and one server. HTTP supports only synchronous request-response exchanges and cannot push data in both directions simultaneously, while IoT sensors cannot work efficiently in a synchronous manner. HTTP is also not designed for event-based communication, and it scales by adding server capacity, so maintaining many HTTP connections places a heavy load on constrained sensor devices. In addition, HTTP's verbose message format leads to high power consumption, making it unsuitable for advanced wireless sensor networks.

To understand how IoT application-layer protocols work, consider MQTT, which is an example of a lightweight application-layer messaging protocol. It is based on the publish/subscribe (pub/sub) model typically used for message queuing in telemetry applications. Multiple clients, or sensors, can connect to a central server, known as a broker, and subscribe to topics that interest them. Through the broker, which is a common interface (router) for sensor devices to connect to and exchange data, these clients also have the option of publishing messages regarding their topics of interest. To make the communication reliable, MQTT uses a TCP connection on the transport layer for connections between sensors and the broker.
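
The pub/sub routing that a broker performs can be sketched in a few lines of Python. This is a toy, in-memory stand-in only: a real MQTT broker runs over TCP, supports topic wildcards and quality-of-service levels, and persists sessions, none of which are modeled here.

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory stand-in for an MQTT broker: delivers each
    published message to every client subscribed to that topic."""
    def __init__(self):
        self.subscriptions = defaultdict(list)  # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscriptions[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscriptions[topic]:
            callback(topic, payload)

broker = Broker()
received = []
# A client subscribes to a topic of interest (hypothetical topic name)
broker.subscribe("sensors/room1/temperature", lambda t, p: received.append((t, p)))
# A sensor publishes a reading; the broker routes it to all subscribers
broker.publish("sensors/room1/temperature", "21.5")
print(received)  # [('sensors/room1/temperature', '21.5')]
```

The key property shown is decoupling: the publishing sensor never addresses subscribers directly; the broker is the only shared point of contact.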

Big cloud vendors use IoT protocols such as MQTT to provide IoT PaaS services and related frameworks that facilitate the interactions of IoT devices with the cloud. The use of these services makes it possible to build and deploy innovative IoT solutions without developing and managing IoT frameworks on local computers. In particular, Microsoft Azure IoT manages cloud services that can interconnect, monitor, and control billions of IoT assets. An IoT solution is typically made up of one or more IoT devices that communicate with one or more back-end services hosted in the cloud. IoT devices can be constructed as circuit boards with sensors attached that use Wi-Fi to connect to the Internet (e.g., presence sensors in a room). Devices may be prototyped using a Raspberry Pi or the Microsoft MXChip IoT DevKit, which has built-in sensors for temperature, pressure, and humidity, as well as a gyroscope, accelerometer, and magnetometer. Microsoft also provides an open-source IoT device SDK to help build apps on devices. In addition, Microsoft’s IoT Edge and IoT Hub frameworks can be used to facilitate the operation of IoT applications at the edge and the collection and transfer of data to the cloud via common communication protocols such as MQTT and AMQP.

The technologies, PaaS services, and solutions provided by Azure IoT are summarized in Table 13.1. AWS, GCP, and IBM Cloud also provide equivalent IoT PaaS services and related capabilities.

IoT Central application templates: Retail; Health; Energy; Government

IoT solutions: Azure IoT Central managed application platform; reference architecture and accelerators (PaaS); Dynamics connected field service (SaaS)

Azure services for IoT: Azure IoT Hub; Azure IoT Hub Device Provisioning Service; Azure Digital Twins; Azure Time Series Insights; Azure Maps; Azure Stream Analytics; Azure Cosmos DB; Azure AI; Azure Cognitive Services; Azure ML; Azure Logic Apps; Azure Active Directory; Azure Monitor; Azure DevOps; Power BI; Azure Data Share; Azure Spatial Anchors; Azure SQL; Azure Functions; Azure Storage

IoT and edge device support: Azure Sphere; Azure IoT Device SDK; Azure IoT Edge; Azure Data Box Edge; Windows IoT; Azure Certified for IoT—Device Catalog
Table 13.1 IoT Technologies, Services, and Solutions Available through Microsoft Azure

Shallow and Deep Machine Learning Services

Machine learning services provided by big cloud vendors include both shallow machine learning, which uses models with few neuron layers, and deep machine learning, which uses models with many neuron layers. PaaS services enable application developers to leverage machine learning capabilities on the cloud: developers can build and deploy innovative machine learning solutions without using local computers to set up and manage machine learning frameworks, such as Apache Hadoop or Spark, along with related libraries and tools. This includes building and deploying solutions that require big data and streaming data analytics.

In addition to Microsoft, AWS, GCP, and IBM Cloud also provide shallow and deep PaaS services and related capabilities.

Big Data Analytics Services

The process of analyzing big data to find correlations, consumer preferences, market trends, and related information is referred to as big data analytics, and it helps organizations make better-informed decisions. Big data analytics processes usually involve training models on data sets with a manageable number of descriptive features; as such, big data analytics requires shallow rather than deep machine learning services. Tools for big data analytics include big data analytics frameworks, machine learning libraries, and analytics machine learning tools.

Big Data Analytics Frameworks

Various cloud vendors, including Amazon, Cloudera, Dell, Oracle, IBM, and Microsoft, offer implementations of the Apache Hadoop or Spark stacks to support big data analytics projects. Other cloud data analytics frameworks include Amazon Elastic MapReduce (EMR), Amazon Athena, Azure HDInsight, Azure Data Lake, and Google Cloud Datalab. EMR is a useful framework for hosting Spark. HDInsight is similar to EMR in power and supports Spark, Hive, HBase, Storm, Kafka, and Hadoop MapReduce. HDInsight guarantees 99.9% availability, integrates programming tools such as Visual Studio, and supports programming languages such as Python, R, Java, and Scala, as well as the .NET languages. As illustrated in the Azure Data Lake conceptual view (Figure 13.10), HDInsight includes all the usual Hadoop and YARN components, such as the Hadoop Distributed File System (HDFS), as well as tools that integrate with other Microsoft business analytics tools such as Excel and SQL Server.

Illustration of Azure Data Lake conceptual view: Data lake analytics (U-SQL and HDInsight (Spark, Hadoop, Hive, PIG, YARN)) connected to HDFS, webHDFS, Azure Data Lake store, Structured and unstructured big data.
Figure 13.10 This graphic shows the big data analytics tools available in HDInsight with Azure Data Lake. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Big Data Machine Learning Libraries

Machine learning libraries provide the algorithms and functions needed to develop machine learning models for big data analytics. For example, Spark includes the MLlib scalable machine learning library and the GraphX API for graphs and graph-parallel computations. MLlib offers classification, regression, clustering, and recommender system algorithms. MLlib was originally built directly on top of the Spark RDD abstraction; its newer DataFrame-based API provides dataframes, transformers, estimators, and a high-level API for creating ML pipelines. GraphX is a Spark component that implements programming abstractions based on the RDD abstraction and comes with a set of fundamental operators and algorithms to work with graphs and simplify graph analytics tasks. MLlib and GraphX are available to application developers via Azure ML and ML programming offerings from other cloud vendors.
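
The transformer/estimator pipeline pattern that MLlib's high-level API is built around can be illustrated in plain Python. The `Center` and `Scaler` stages below are invented for illustration and are not MLlib classes; the point is the shape of the abstraction: an estimator's `fit` learns parameters from data, producing a transformer whose `transform` applies them.

```python
class Center:
    """'Estimator' stage: fit() learns the mean so transform() can subtract it."""
    def fit(self, data):
        self.mean_ = sum(data) / len(data)
        return self
    def transform(self, data):
        return [x - self.mean_ for x in data]

class Scaler:
    """'Estimator' stage: fit() learns the max absolute value for scaling."""
    def fit(self, data):
        self.scale_ = max(abs(x) for x in data) or 1.0
        return self
    def transform(self, data):
        return [x / self.scale_ for x in data]

class Pipeline:
    """Chains stages: each stage is fit on, then applied to, the data."""
    def __init__(self, stages):
        self.stages = stages
    def fit_transform(self, data):
        for stage in self.stages:
            data = stage.fit(data).transform(data)
        return data

result = Pipeline([Center(), Scaler()]).fit_transform([1.0, 2.0, 3.0])
print(result)  # [-1.0, 0.0, 1.0]
```

In MLlib the same pattern operates on distributed DataFrames rather than local lists, which is what lets a pipeline scale across a cluster without changing its structure.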

Big Data Analytics Machine Learning Tools

Azure Machine Learning (ML) is a cloud portal for designing and training machine learning cloud services. As illustrated in Figure 13.11, it is based on a drag-and-drop component composition model: you build a solution to a machine learning problem by dragging solution parts from a palette of tools and connecting them into a workflow graph, then train the solution with your data. When you are satisfied with the results, you ask Azure to convert your graph into a running web service using the model you trained; the tool supports customized machine learning as an on-demand service. Azure also provides Databricks, an Apache Spark–based analytics platform optimized for the Microsoft Azure cloud environment, as well as the ML.NET open-source and cross-platform machine learning framework. Amazon's AWS machine learning service, Amazon SageMaker, can similarly be used to create a predictive model based on the training data you provide, and it requires less understanding of ML concepts than Azure ML. Its dashboard presents a list of previous experiments, models, and data sources and enables developers to define data sources and ML models, create evaluations, and run batch predictions.

Screenshot of Azure ML, a drag-and-drop component composition model, showing options for tools to create a workflow.
Figure 13.11 Azure ML is based on a drag-and-drop component composition model that enables you to build a solution to a machine learning problem. (Used with permission from Microsoft)

This is another example of serverless computing: it does not require you to deploy and manage your own VMs; the infrastructure is provisioned as you need it, and if your web service needs to scale up because of demand, Azure scales the underlying resources automatically.

Amazon also provides a portal-based tool, Amazon Machine Learning, that allows you to build and train a predictive model and deploy it as a service. In addition, both Azure and Amazon provide pre-trained models for image and text analysis in the Azure Cognitive Services and the Amazon ML platform, respectively.

Think It Through

Using Avatars in Virtual-Reality Environments

Avatars are a popular way to represent ourselves graphically online.

Can avatars in the metaverse’s virtual-reality environment make use of big cloud PaaS capabilities to operate semi-autonomously? If they’re semi-autonomous, they’ll be able to act independently, with limited control from users. What ethical consequences could this pose?

Streaming Big Data Analytics Services

Vendors provide various services to facilitate streaming big data analytics in the cloud, such as Spark Streaming and Structured Streaming, Amazon Kinesis Data Firehose and Kinesis Data Streams, Azure Stream Analytics, and Google Dataflow (based on Apache Beam). Spark Streaming provides a high-level abstraction called the DStream, which represents a continuous stream of data as a sequence of RDD fragments and supports windowed computations. It leverages Spark Core and its fast scheduling engine to perform streaming analytics.
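
A windowed computation over a stream, of the kind DStreams support, can be sketched in plain Python as a sliding-window average. This is a single-machine toy; a streaming engine performs the same logic over distributed micro-batches with fault tolerance.

```python
from collections import deque

def windowed_averages(stream, window_size):
    """Yield the average of the last `window_size` items after each
    arrival, mimicking a sliding-window computation over a stream."""
    window = deque(maxlen=window_size)  # old items fall off automatically
    for item in stream:
        window.append(item)
        yield sum(window) / len(window)

readings = [10, 20, 30, 40]  # e.g., successive sensor readings
print(list(windowed_averages(readings, window_size=2)))
# [10.0, 15.0, 25.0, 35.0]
```

Because results are emitted as each item arrives, downstream consumers see continuously updated aggregates rather than a single batch answer.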

Amazon Firehose, which is designed for extreme scale, can load data directly into Amazon S3 or other Amazon services. Kinesis Data Analytics for SQL applications provides SQL-based tools for real-time analysis of streaming data from Kinesis Data Streams or Firehose. Kinesis Data Streams provides ordered, replayable, real-time streaming data. Various open-source frameworks, such as Kafka, Storm, RisingWave, Apache Spark, Apache Flume, Apache Beam, and Apache Flink, are also available to process streaming data on local machines. Table 13.2 compares Firehose to Kinesis Data Streams. The two primary components of Azure Stream Analytics are the Azure Event Hubs service and the Stream Analytics engine.

Feature             | Firehose                                          | Kinesis Data Streams
Purpose             | Service for transferring data into third-party tools | Streaming service
Provisioning        | Fully managed; no administration                  | Managed, but requires shard configuration
Scaling             | Automatic scaling based on demand                 | Manual scaling
Data storage        | No data storage included                          | Configurable storage from 1 to 365 days
Replay capability   | Not supported                                     | Supported
Message propagation | Near real time, depending on buffer size or time  | Real time
Table 13.2 Firehose versus Kinesis Data Streams

Apache Beam is the open-source release of the Google Cloud Dataflow system. Beam treats the batch and streaming cases uniformly and supports pipelines to encapsulate computations, as well as PCollections, which represent data as they move through a pipeline. Beam enables computational transformations that operate on PCollections and produce PCollections, and it relies on sources and sinks, from which data are read and to which data are written, respectively.
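
The source → transform → sink structure of such a pipeline can be sketched in plain Python. This mimics the shape of a word-count pipeline without using the Beam SDK: ordinary iterables stand in for PCollections, and the function names are invented for illustration.

```python
def source(lines):
    """Source: read input data into a 'PCollection' (here, an iterator)."""
    return iter(lines)

def split_into_words(pcollection):
    """Transform: consume one PCollection and produce another."""
    for line in pcollection:
        for word in line.split():
            yield word

def count_words(pcollection):
    """Transform: aggregate the stream of words into counts."""
    counts = {}
    for word in pcollection:
        counts[word] = counts.get(word, 0) + 1
    return counts

def sink(counts, destination):
    """Sink: write the results somewhere (here, a plain dict)."""
    destination.update(counts)

out = {}
sink(count_words(split_into_words(source(["to be or not to be"]))), out)
print(out)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In Beam itself the same pipeline runs unchanged over a bounded file or an unbounded stream, which is the sense in which batch and streaming are treated uniformly.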

Deep Learning and Generative AI Services

All the big clouds today provide integrated machine learning services that use various techniques to create models that leverage prior experience and improve the ability of machines to perform tasks. These services may be used as part of big data analytics and streaming big data analytics techniques, as explained in the previous subsections. Deep learning (DL) is a technique that leverages artificial neural networks (e.g., RNNs, CNNs, GANs) to create models that perform predictive tasks requiring special training and the ability to relate to a vast combination of labels/patterns (e.g., image recognition, speech recognition, language translation). Various refinements of deep learning include reinforcement learning and transfer learning, among others. Artificial intelligence (AI) is the broader discipline of enabling computers to mimic human intelligence, and modern AI leverages ML extensively. Generative AI (GenAI) is a subset of AI that uses techniques such as deep learning and transformers to generate new content (e.g., images, text, or audio) that matches a request. Transformers are special types of ML model architectures suited to problems that involve sequences, such as text or time-series data.

Transformers have been used recently to solve natural language processing problems (e.g., translation, text generation, question answering, text summarization), and various transformer implementations have been quite successful, including bidirectional encoder representations from transformers (BERT) and the generative pre-trained transformers (GPTs). ML models that support GenAI are referred to as large language models (LLMs) and/or foundation models (FMs). LLMs are typically tuned toward specific conversational applications and require more parameters and data-intensive training, whereas foundation models are more general-purpose and less data-intensive. Examples include OpenAI's ChatGPT, Google Gemini, Meta's M2M-100 and Llama, IBM's Granite models, Anthropic's Claude models, Mistral AI's models, and many others. LLMs (and FMs) can be augmented using retrieval-augmented generation (RAG) frameworks that supplement the model's internal representation of information to improve the quality of the generated responses.
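
The retrieval step of a RAG framework can be sketched with a toy bag-of-words similarity search. Real systems use learned vector embeddings and a vector database rather than word counts, and the documents and query below are invented for illustration.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents):
    """Return the document most similar to the query (bag-of-words)."""
    q = Counter(query.lower().split())
    return max(documents, key=lambda d: cosine(q, Counter(d.lower().split())))

docs = ["Granite is an IBM foundation model family",
        "MQTT is a lightweight IoT messaging protocol"]
context = retrieve("which vendor makes the Granite models", docs)
# The retrieved passage is prepended to the user's question in the prompt
prompt = f"Answer using this context: {context}\nQuestion: ..."
print(context)  # Granite is an IBM foundation model family
```

The retrieved text is injected into the prompt so the model grounds its answer in supplied facts instead of relying only on its trained parameters.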

Cloud vendors provide DL services in the form of programming frameworks that help implement deep learning applications using differentiable programming. GenAI services provided by the big clouds and other vendors are prompt interfaces that are specially engineered to allow users to get the most out of an LLM by including a sufficient amount of information in the prompts they create.

The following provides a nonexhaustive list of DL services provided by some of the big clouds:

  • Amazon AWS deep learning services:
    • Amazon Deep Learning AMIs (DLAMIs). Amazon DLAMIs are customized machine images that may be used for deep learning in the cloud. They can be deployed on various types of Amazon VMs (i.e., EC2 instances), including CPU-only instances or the latest high-powered multi-GPU instances. DLAMIs come preconfigured with NVIDIA CUDA and NVIDIA cuDNN and the latest releases of the most popular deep learning frameworks. Amazon released the Amazon Deep Learning AMI with Conda, which uses Conda virtual environments to isolate each framework, allowing you to switch between them at will without their dependencies conflicting. The frameworks supported by the Amazon Deep Learning AMI with Conda include PyTorch, TensorFlow 2, and Apache MXNet (now retired but still accessible and usable). Starting with the v18 release, the Amazon Deep Learning AMI with Conda no longer includes the CNTK, Caffe, Caffe2, Theano, Chainer, or Keras Conda environments. Configuring the Amazon DLAMIs to use Jupyter is easy: go to the Amazon Marketplace on the EC2 portal and search for “deep learning.” You will find the DLAMIs; select the server type you would like to use. If you simply want to experiment, a no-GPU option works well; when the VM comes up, log in with ssh and configure Jupyter for remote access.
    • Amazon Lex. Amazon Lex is an AI chatbot service that allows users to incorporate voice input and conversational interfaces into applications. It is an extension of the technology behind Amazon’s Echo product, a networked device with a speaker and microphone through which you can ask questions of the Alexa service (e.g., questions about the weather, event scheduling, news, and music). It is possible to associate Echo voice commands with the launching of an AWS Lambda function that executes a cloud application.
    • Amazon Polly. Amazon Polly is the opposite of Lex; it turns text into speech for dozens of languages with a variety of voices and uses the Speech Synthesis Markup Language (SSML) to control pronunciation and intonation.
    • Amazon Rekognition. Amazon Rekognition is at the cutting edge of deep learning applications. It takes an image as input and returns a textual description of the items that it sees in that image. This includes objects, landmarks, dominant colors, activities, and faces. It also performs detailed facial analysis and comparisons and can identify inappropriate content that appears in images.
  • Microsoft Azure deep learning services:
    • Azure Data Science VMs (DSVMs). Azure DSVMs are Azure Virtual Machine images, preinstalled, configured, and tested with several popular tools that are commonly used for data analytics, machine learning, and AI development and training.
    • Azure Machine Learning (ML). Azure ML is a cloud service that is designed to help accelerate and manage machine learning project life cycles. It can be used to train and deploy ML models and manage machine learning operations (via MLOps). You can create a model in Microsoft ML or use a model built from an open-source platform, such as PyTorch, TensorFlow, or scikit-learn. MLOps tools help you monitor, retrain, and redeploy models. As noted earlier, Azure also provides the ML.NET machine learning framework.
    • Azure AI services. Azure AI services are APIs/SDKs that can be used to build applications that support natural methods of communication (i.e., see, hear, speak, understand, and interpret user needs). These services include support for vision (e.g., object detection, face recognition, optical character recognition), speech (e.g., speech-to-text, text-to-speech, speaker recognition), languages (e.g., translation, sentiment analysis, key phrase extraction, language understanding), and decision (e.g., anomaly detection, content moderation, reinforcement learning).
  • Google deep learning services:
    • Deep learning VMs. Deep learning VM images are virtual machine images optimized for data science and machine learning tasks. All images include preinstalled ML frameworks and tools and can be used on VM instances with GPUs to accelerate data processing tasks. ML frameworks supported include TensorFlow and PyTorch.
    • Google machine learning APIs. Google provides various APIs to services that can be used to build applications that support natural methods of communication, including Cloud Vision to understand the content of an image, Cloud Speech-to-Text to transcribe audio to text, Cloud Translation to translate an arbitrary string to any supported language, and Cloud Natural Language to extract information from text.

The following provides more information related to GenAI services provided by the big clouds:

  • AWS GenAI services. Amazon AWS provides various GenAI tools, including the Amazon Q AI-powered assistant and the Amazon Bedrock suite of LLMs, FMs, and generative AI tools. Amazon SageMaker may be used to build, train, and deploy FMs at scale.
  • Microsoft Azure GenAI services. The Azure OpenAI service and the Azure AI studio can be used to create custom copilot and generative AI applications. Microsoft has partnered with OpenAI, the company that is developing ChatGPT. It also provides the Phi family of small language models (SLMs) that are low-cost and low-latency alternatives to LLMs in some cases.
  • Google GCP GenAI services. Vertex AI, Generative AI Studio, and Vertex AI Model Garden are various solutions that Google provides to support the creation of generative AI applications. Google also provides the Gemini family of generative AI models that are capable of processing information from multiple modalities, including images, videos, and text.
  • IBM Cloud GenAI services. IBM Watsonx.ai AI studio brings together generative AI capabilities that are powered by FMs and ML. It provides tools to tune and guide models based on enterprise data as well as build and refine prompts. IBM also develops custom Granite AI foundation models that are cost-efficient and enterprise-grade.
  • Other GenAI services. In addition to the GenAI services and tools mentioned here, many other vendors focus on the creation of LLMs, SLMs, and FMs. Here are a few of them:
    • OpenAI’s ChatGPT
    • Meta Llama
    • Anthropic Claude
    • Mistral AI

ML Toolkits Performance

ML toolkits can be used for various tasks, such as scaling a computation to solve bigger problems. One approach is the SPMD model of communicating sequential processes using the message passing interface (MPI) standard. Another is the dataflow graph execution model used in Spark, Flink, and the deep learning toolkits. You can write ML algorithms using either MPI or Spark, but you should be aware that MPI implementations of standard ML algorithms typically perform better than the versions in Spark and Flink, often by one or more orders of magnitude in execution time, while the MPI versions are harder to program than the Spark versions.

Blockchain Services

Blockchains use a distributed ledger to store data and transactions in a shared, append-only database, enabling applications in which multiple parties can securely and transparently run transactions and share data without a trusted central authority. With blockchain 2.0, developers have a mechanism that allows programmable transactions, which are modified by a condition or set of conditions. Blockchain 2.0 is not limited to supporting transactions: it can also handle microtransactions, decentralized exchange, and the creation and transfer of digital assets. Blockchain 2.0 also supports smart contracts, which are scripts executed in a blockchain 2.0 environment. The code of a smart contract is accessible to the public, and anyone can verify the correctness of its execution. The actual verification is carried out by miners in the blockchain environment, which ensures honest execution of the “contract.” Smart contracts rely on cryptography to secure them against tampering and unauthorized revisions.
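
The tamper evidence that a ledger's hash chain provides can be sketched in a few lines of Python: each block stores the hash of its predecessor, so altering any earlier block invalidates every later link. Mining, signatures, and consensus among peers are all omitted from this sketch.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that records the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    """Recompute each link; a tampered block breaks every later prev_hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))  # True
chain[0]["transactions"][0]["amount"] = 500  # tamper with history
print(verify(chain))  # False
```

In a real blockchain, every peer holds a copy of the chain and runs this kind of verification independently, which is what removes the need for a trusted central authority.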

A blockchain network is a peer-to-peer network that allows people and organizations who may not know one another to trust and independently verify the records of financial and other transactions. This improves the efficiency and immutability of transactions for business processes such as international payments, supply chain management, land registration, crowdfunding, governance, financial transactions, and more.

Ethereum.org implements a blockchain 2.0 decentralized computing platform for Web3 and provides a language to write transaction scripts. Web3 applications are referred to as DApps and may be deployed on blockchain 2.0 decentralized computing platforms. Ethereum.org provides its own development environment (i.e., Remix) and programming languages, such as Solidity, to develop and deploy contracts. Public test networks such as Goerli may be used to develop and test contracts before deploying them on the Ethereum platform. Web3 APIs are available for various programming languages, such as JavaScript, Python, Haskell, Java, Scala, and PureScript, to facilitate the creation of applications that interact directly with the blockchain 2.0 platform.

Using PaaS blockchain services on AWS, Oracle, GCP, and IBM big clouds, it is possible to create a blockchain decentralized computing platform to facilitate the deployment of Web3 DApps. Big clouds provide clusters of VMs that may be leveraged as P2P nodes within a blockchain 2.0 decentralized platform implementation, such as Hyperledger. While Microsoft Azure offered PaaS services to create blockchain platforms, it has retired these services and partnered with ConsenSys and other companies to provide that support. Azure does provide products and services, Web3 developer tools, and security capabilities to help create Web3 applications and deploy them on partner platforms.

AWS Blockchain Services

AWS provides blockchain templates that help you create and deploy blockchain networks on AWS using different blockchain frameworks. These templates, shown in Figure 13.12, are used to configure and launch AWS CloudFormation stacks that create blockchain networks. The AWS resources and services used depend on the AWS blockchain template selected and the options that specify the fundamental components of the blockchain network.

AWS blockchain template (AWS Cloud Formation). ECS container platform contains: Elastic load balancing, Amazon S3, Multinode blockchain network, Blockchain explorer and monitoring, Amazon ECS, Amazon EC2, and ECR Registry (Blockchain framework containers).
Figure 13.12 This graphic shows AWS blockchain templates, which are used to configure and launch AWS CloudFormation stacks to create blockchain networks. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Amazon Managed Blockchain is a fully managed service for creating and managing blockchain networks that supports the Hyperledger Fabric open-source framework. You can use Managed Blockchain to create a scalable blockchain network quickly and efficiently using the AWS Management Console, the AWS CLI, or the Managed Blockchain SDK. Managed Blockchain scales to meet the demands of thousands of applications running millions of transactions. Once the blockchain network is functional, Managed Blockchain simplifies network management tasks by managing certificates, making it easy to create proposals for a vote among network members, and tracking operational metrics such as computing resources, memory, and storage resources.

Figure 13.13 shows the basic components of a Hyperledger Fabric blockchain running on AWS. A network includes one or more members with unique identities. For example, a member might be an organization in a consortium of banks. Each member runs one or more blockchain peer nodes to run chaincode, endorse transactions, and store a local copy of the ledger. Amazon Managed Blockchain creates and manages these components for each member in a network and also creates components shared by all members in a network, such as the Hyperledger Fabric ordering service and the general networking configuration.

Hyperledger fabric ordering service blockchain, connected to Member (Fabric, Peer node), then to AWS private link, then through VPC to AWS client.
Figure 13.13 This graphic shows the basic components of a Hyperledger Fabric blockchain running on AWS via a Managed Blockchain network. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

When creating a Managed Blockchain network, the creator chooses the blockchain framework and the edition of Amazon Managed Blockchain to use, and this determines the capacity and capabilities of the network as a whole. The creator also must create the first Managed Blockchain network member. Additional members are added through a proposal and voting process. There is no charge for the network itself, but each member pays an hourly rate (billed per second) for their network membership. Charges vary depending on the edition of the network. Each member also pays for peer nodes, peer node storage, and the amount of data that the member writes to the network. The blockchain network remains active as long as there are members. The network is deleted only when the last member deletes itself from the network. No member or AWS account, even the creator’s AWS account, can delete the network until they are the last member and delete themselves.

IBM Blockchain Services

IBM Blockchain Platform (IBP) is an IBM Cloud offering built on Fabric, the blockchain infrastructure provided by the open-source Hyperledger project. IBP provides an integrated developer experience with smart contracts that can be coded in Node.js, Go, or Java. You can use the IBM Blockchain VS Code extension to write client applications based on the IBP console’s integration of the Fabric SDK. IBP makes it possible to deploy only the components necessary to connect to multiple channels and networks while you maintain control of identities in your environment. Flexible and scalable, IBP can run in any environment that IBM Cloud Private (ICP) supports, including LinuxONE. IBP simplifies the development and management of a blockchain network, letting you accomplish the following tasks with just a few clicks in its easy-to-use interface:

  • deploying Fabric automatically
  • creating custom governance policies
  • carrying out initial development
  • deploying the application into production, including creating channels and deploying chaincode
  • inviting new members into the network and managing identity credentials over time

LinuxONE is engineered for high-performance, large-scale data and cloud services. A single LinuxONE platform consolidates hundreds of x86 cores, and its dedicated I/O processors allow you to move massive amounts of data while maintaining data integrity. The option to add dedicated cryptographic processors that supplement the standard CPUs enables encryption of both data at rest and data in transit. Partitions within IBM’s Secure Service Container (SSC) technology help to protect data and applications from internal and external threats.

The IBM Blockchain solution leverages Kubernetes (K8s), an open-source system for automating the deployment, scaling, and management of containerized applications. The Kubernetes framework runs distributed systems resiliently, taking care of scaling requirements, failover, deployment patterns, and so forth. Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to a user-defined health check, and withholds containers from clients until they are ready to serve. The key aspects of Kubernetes include the following:

  • service discovery and load balancing
  • storage orchestration
  • automated rollouts and rollbacks
  • automatic bin packing
  • self-healing
  • secret and configuration management
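
The self-healing behavior in the list above follows from Kubernetes's reconciliation model: a controller continuously compares the desired state with the observed state and issues corrective actions. The following is a minimal sketch of that idea in Python (the `Deployment` class and action strings are illustrative, not the Kubernetes API):

```python
# Sketch of the reconciliation idea behind Kubernetes self-healing:
# a controller compares desired state with observed state and corrects drift.
from dataclasses import dataclass, field

@dataclass
class Deployment:
    desired_replicas: int
    running: list = field(default_factory=list)  # names of live containers

def reconcile(dep: Deployment) -> list:
    """Return the actions a controller would take to converge on desired state."""
    actions = []
    # Replace failed containers and scale up to the desired count.
    while len(dep.running) < dep.desired_replicas:
        name = f"pod-{len(dep.running)}"
        dep.running.append(name)
        actions.append(f"start {name}")
    # Scale down if more replicas are running than desired.
    while len(dep.running) > dep.desired_replicas:
        actions.append(f"stop {dep.running.pop()}")
    return actions

dep = Deployment(desired_replicas=3, running=["pod-0"])
print(reconcile(dep))  # ['start pod-1', 'start pod-2']
```

A real controller runs this loop continuously, so a container that crashes simply shows up as drift on the next pass and gets replaced, which is exactly the "restarts containers that fail" behavior described above.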

Other components of the IBM Blockchain solution include IBM Cloud Private (ICP), GlusterFS, IBM Secure Service Container, and the IBM Blockchain Platform. ICP is a private cloud platform for enterprises to develop and run workloads locally. It consists of the PaaS and developer services needed to create, run, and manage cloud applications. GlusterFS is a scalable network file system suitable for data-intensive tasks such as cloud storage and media streaming; it aggregates various storage servers into one large parallel network file system. IBM SSC provides the base infrastructure on LinuxONE for container-based applications, in either hybrid or private cloud environments, and delivers tamper-resistant installation and runtime operations.

Oracle Blockchain Services

Oracle also provides a blockchain platform. As illustrated in Figure 13.14, Oracle’s blockchain components include a network of validating nodes (i.e., peers), a distributed ledger (i.e., linked blocks, world state, and history database), an ordering service for creating blocks, and membership services for managing organizations in a permissioned blockchain.

Figure 13.14 As this diagram shows, Oracle’s blockchain components include a network of validating nodes, a distributed ledger, an ordering service for creating blocks, and membership services for managing organizations in a permissioned blockchain. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

The smart contracts (chaincode) layer consists of chaincode programs that contain the business logic for updating the ledger, querying data, and/or publishing events. Chaincodes can read the ledger data to verify conditions as part of any proposed updates or deletes and trigger custom events. Updates and deletes are proposed or simulated and are not final until transactions are committed following consensus and validation protocols. New or existing applications can register/enroll organizations as members, submit transactions (invoke smart contracts) to update or query data, and consume events emitted by the chaincodes or by the blockchain platform.
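
The propose-then-commit flow described above can be sketched in a few lines of Python. This is a simplified model, not Hyperledger Fabric code: an update is simulated against the world state (recording what was read), and it becomes final only when the transaction is committed after validation that the read values are unchanged.

```python
# Sketch of the propose/commit flow: a chaincode function simulates an update
# against the world state, and the change becomes final only on commit.
class WorldState:
    def __init__(self):
        self.state = {}
        self.pending = []  # endorsed-but-uncommitted transactions

    def propose(self, key, value):
        """Simulate an update; record the value read to verify conditions later."""
        tx = {"key": key, "before": self.state.get(key), "after": value}
        self.pending.append(tx)
        return tx

    def commit(self):
        """Apply pending transactions after (simplified) validation."""
        for tx in self.pending:
            # Validation: the value read at simulation time must be unchanged.
            if self.state.get(tx["key"]) == tx["before"]:
                self.state[tx["key"]] = tx["after"]
        self.pending = []

ledger = WorldState()
ledger.propose("invoice-42", "PAID")
assert "invoice-42" not in ledger.state   # not final until committed
ledger.commit()
print(ledger.state)  # {'invoice-42': 'PAID'}
```

Note that if two proposals read the same key before either commits, the second one fails validation at commit time; this mirrors how conflicting simulated updates are rejected rather than silently overwriting each other.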

The Oracle Blockchain Platform is based on the Hyperledger Fabric project from the Linux Foundation, and it extends the open-source version of Hyperledger Fabric. Delivered as a preassembled PaaS, Oracle’s Blockchain Platform includes all the dependencies required to support a blockchain network: compute, storage, containers, identity services, event services, and management services. The Oracle Blockchain Platform also includes the blockchain network console to support integrated operations.

GCP Blockchain Services

Google offers Blockchain Node Engine, a fully managed node hosting service for Web3 development that minimizes the need for node operations. Web3 companies that require dedicated nodes can relay transactions, deploy smart contracts, and read or write blockchain data. Blockchain Node Engine supports Ethereum, enabling developers to provision fully managed Ethereum nodes with secure blockchain access.
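
A dedicated Ethereum node of this kind is typically accessed through the standard Ethereum JSON-RPC interface. The sketch below builds a JSON-RPC request body for the standard `eth_blockNumber` method; the node endpoint URL in the comment is a hypothetical placeholder you would obtain from your node configuration.

```python
# Sketch: reading the latest block height from a managed Ethereum node via
# JSON-RPC. eth_blockNumber is part of the standard Ethereum JSON-RPC API;
# the endpoint URL mentioned below is a hypothetical placeholder.
import json

def rpc_request(method: str, params: list, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for an Ethereum node."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

body = rpc_request("eth_blockNumber", [])
print(body)
# The body would be POSTed to the node endpoint with Content-Type
# application/json, e.g. using urllib.request or the requests library:
#   urllib.request.urlopen(urllib.request.Request(
#       NODE_URL, data=body.encode(),
#       headers={"Content-Type": "application/json"}))
```

The same request shape works for other standard methods, such as `eth_getBalance` or `eth_sendRawTransaction`, which is what lets Web3 applications relay transactions and read chain data through a managed node.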

With blockchain, Google’s focus is on cryptocurrency and blockchain analytics tools that provide deep blockchain transaction history datasets and richer queries that enable multichain meta-analysis and integration with conventional financial record processing systems. Google is particularly interested in providing transaction history for cryptocurrencies that have similar implementations, such as Bitcoin, Ethereum, Bitcoin Cash, Dash, Dogecoin, Ethereum Classic, Litecoin, and Zcash. Google also offers machine learning tools that may be used to search for patterns in transaction flows and provide basic information on how a crypto address is used.
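
These transaction history datasets are published as public BigQuery tables (for example, `bigquery-public-data.crypto_bitcoin`). The sketch below builds a SQL query over the public Bitcoin transactions table for outputs sent to a given address; the address used is just an example, and running the query requires a GCP project and the BigQuery client library.

```python
# Sketch: querying Google's public blockchain datasets in BigQuery for
# transaction history. The address below is only an example value.
def address_history_sql(address: str, limit: int = 10) -> str:
    """SQL over the public Bitcoin dataset for outputs sent to an address."""
    return f"""
    SELECT block_timestamp, `hash`, outputs.value AS satoshis
    FROM `bigquery-public-data.crypto_bitcoin.transactions`,
         UNNEST(outputs) AS outputs
    WHERE '{address}' IN UNNEST(outputs.addresses)
    ORDER BY block_timestamp DESC
    LIMIT {limit}
    """

sql = address_history_sql("1BoatSLRHtKNngkdXEeobR76b53LETtpyT")
print(sql)
# With the google-cloud-bigquery package and a GCP project configured,
# the query could be executed as:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(sql).result()
```

Because the datasets for Bitcoin-like chains share similar schemas, the same query pattern can be pointed at the other public datasets (e.g., Litecoin or Dogecoin), which is what makes the multichain meta-analysis described above practical.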

Extended Reality Services

When discussing extended reality services, there are generally two types: virtual reality (VR), which enables a computer-generated, interactive, 3-D environment in which a user is immersed, and augmented reality (AR), which supplements the real world with virtual (computer-generated) objects that appear to coexist in the same space as the real world. The key distinction between VR and AR is that VR is meant to immerse the user in a virtual environment, while AR introduces virtual elements to the real world. A VR system typically uses a headset in combination with a variety of sensors to track the user’s movement and relay the appropriate images and feedback, creating the sensation of interacting with the virtual world. An AR system typically relies on clear lenses or a pass-through camera that allows users to see the world around them in real time while virtual elements are projected on the lenses or rendered on the camera output.
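
The tracking described above boils down to a coordinate transform: the system maintains the camera's pose (position and orientation) and uses it to map a virtual object's world coordinates into the camera's frame so the object is rendered in the right place as the user moves. The following is a simplified sketch using only a yaw (vertical-axis) rotation; real systems use full 3-D rotations.

```python
# Sketch: the core transform behind AR/VR tracking. A tracked camera pose
# (a yaw rotation plus a translation) maps a virtual object's world
# coordinates into the camera's frame for rendering.
import math

def world_to_camera(point, yaw_deg, camera_pos):
    """Transform a world-space point into camera space (yaw-only rotation)."""
    # Translate so the camera sits at the origin.
    x, y, z = (p - c for p, c in zip(point, camera_pos))
    yaw = math.radians(yaw_deg)
    # Rotate about the vertical (y) axis to undo the camera's yaw.
    cx = x * math.cos(yaw) - z * math.sin(yaw)
    cz = x * math.sin(yaw) + z * math.cos(yaw)
    return (cx, y, cz)

# A virtual object 2 m in front of an unrotated camera at the origin
# stays 2 m ahead in camera space.
print(world_to_camera((0.0, 0.0, 2.0), 0.0, (0.0, 0.0, 0.0)))  # (0.0, 0.0, 2.0)
```

Running this transform every frame against fresh sensor readings is what keeps virtual content "anchored" to real-world positions rather than drifting as the headset moves.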

There is also extended reality (XR), a reality service that involves both real and virtual environments. Microsoft introduced the term mixed reality (MR), which is a form of XR. XR encompasses both virtual and real environments and often integrates cloud services like IoT, ML, and blockchain. Three components are needed to make an XR system functional: a head-mounted display, a tracking system to recognize and follow physical objects, and mobile computing power. Today, XR is enabled via headsets and haptic gloves, which are IoT devices. Instead of accessing applications with traditional computers, XR makes it possible to access applications through head-mounted hardware or gloves that interact directly with human senses such as vision, hearing, and touch. In addition, ML may be needed within a mixed reality environment to generate predictions and perform recognition, much as ML is used in the real world. In some sense, XR is the base technology for Web x.0 (i.e., the real metaverse, not Meta’s Metaverse). Many XR commercial headsets are available, including Microsoft HoloLens, Google Cardboard, and Meta Quest.

Creating virtual reality scenes for VR or virtual objects or avatars that can be viewed via XR headsets requires the use of a 3-D engine tool such as Unity or Unreal Engine. Unity is a widely used cross-platform 3-D engine and integrated development environment (IDE). Its uses include developing 3-D content and games for different platforms, such as PCs, consoles, mobile devices, AR/VR target devices, and the Web.

Unity is a complex system with a steep learning curve. Successful deployment of applications also requires development frameworks and plug-ins, such as Microsoft Mixed Reality Toolkit (MRTK) or OpenXR. OpenXR can be accessed by 3-D engines (e.g., Unity, Unreal), the WebXR device API, as well as XR applications running on base stations to facilitate deployment to or integration with various devices including 3-D head mounted displays (e.g., Microsoft HoloLens, Apple Vision Pro, Meta Quest), trackers (e.g., body, hand, object, eye), haptic devices, and cloud/5G infrastructure.

XR applications provide controlled and repeatable scenarios for rehearsing muscle memory and situational awareness. VR applications make it possible to explore places that are otherwise inaccessible or prohibitively expensive to reach, and VR and AR applications provide innovative ways to visualize and manipulate data.

Azure MR Services and Related PaaS Services

Microsoft and the Azure Cloud provide various services for the Kinect and HoloLens products. Azure Kinect is a spatial computing developer kit that combines advanced AI sensors with sophisticated computer vision and speech models, and its SDKs, including the Azure Kinect Sensor SDK, can be connected to Azure cognitive services for building computer vision and speech models.

Microsoft also offers HoloLens 2, a set of smart glasses, and the HoloLens Emulator, which allows users to test holographic applications on a PC without a physical HoloLens 2 or first-generation HoloLens and includes the HoloLens development toolset. Using the HoloLens Emulator requires learning keyboard and mouse commands that simulate walking in a given direction, looking in different directions, and making controlling gestures and hand movements.

The emulator uses a Hyper-V virtual machine, which means the human and environmental inputs normally read by HoloLens sensors are instead simulated from a keyboard, mouse, or Xbox controller. Users do not need to modify projects to run on the emulator because apps cannot tell that they are not running on a real HoloLens. Users can join the HoloLens developer program and learn to develop and deploy their own 3-D models.

Alternative development environments for HoloLens include Unreal Engine and BuildWagon. BuildWagon provides an online code editor that allows users to write code in JavaScript and view the results on the same screen or directly on the HoloLens. A HoloLens device is not required, and code is hosted on the cloud to allow multiple developers to collaborate on the same project from different locations. BuildWagon’s HoloBuild library provides ready-made components to expedite creation processes and access HoloLens’ special features.

The Microsoft Mixed Reality Toolkit (MRTK) is a Microsoft-driven project that provides a set of components and features used to accelerate cross-platform MR app development in Unity. It provides the cross-platform input system and building blocks for spatial interactions and UI, enabling rapid prototyping via in-editor simulation that enables users to see changes immediately. It operates as an extensible framework that provides developers with the ability to swap out core components while supporting a wide range of platforms.

Microsoft provides a detailed set of guidelines to assist with the development of mixed reality applications, covering application ideation, design, development, and distribution.

Concepts In Practice

Innovation and Big Cloud PaaS

Big cloud PaaS services are powerful enablers of innovation. Before they became available, it was difficult for companies to put in place the services and related infrastructure needed to develop innovative solutions. All of the PaaS services covered in this chapter require frameworks and resources that the big clouds now provide; therefore, companies can focus on applying these services to develop innovative solutions. As an example, you can simply go to portal.azure.com and type “Data Science Virtual Machine” in the search bar. You will then be offered a choice of Linux or Windows VMs that come fully packed with all the frameworks and related APIs needed to implement the service. Microsoft Azure also provides the IoT Edge and IoT Hub frameworks, which can be used to collect data from sensors located at the edge (e.g., weather sensors that measure temperature and humidity) and propagate the corresponding data to the Azure cloud so it can be analyzed to generate weather predictions. Big cloud PaaS services can therefore be used today to enable and accelerate innovation.
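
The edge-to-cloud flow in the weather-sensor scenario starts with the device serializing a reading as a message for IoT Hub. The sketch below shapes such a message in Python; the device ID and readings are hypothetical, and the commented lines show how the `azure-iot-device` package would send it.

```python
# Sketch: shaping an edge-sensor reading for Azure IoT Hub, as in the
# weather-sensor scenario above. The device ID and readings are hypothetical.
import json
import time

def telemetry_message(device_id: str, temperature_c: float, humidity_pct: float) -> str:
    """Serialize a sensor reading as the JSON body of an IoT Hub message."""
    return json.dumps({
        "deviceId": device_id,
        "temperature": temperature_c,
        "humidity": humidity_pct,
        "timestamp": time.time(),
    })

body = telemetry_message("weather-edge-01", 21.5, 63.0)
print(body)
# With the azure-iot-device package and a device connection string,
# the message would be sent as:
#   from azure.iot.device import IoTHubDeviceClient, Message
#   client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
#   client.send_message(Message(body))
```

Once messages like this arrive in IoT Hub, they can be routed to analytics services in the Azure cloud, which is where the prediction step of the scenario happens.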

Also available on the Azure Cloud are a number of PaaS services, including the following:

  • Azure storage services may be used to store 3-D models.
  • Azure Remote Rendering (ARR) is a service that lets you render highly complex 3-D models in real time and stream them to a device. ARR is generally available and can be added to your Unity or native C++ projects targeting HoloLens 2 or Windows desktop PCs.
  • Azure Object Anchors (AOA) is a mixed reality service that helps you create rich, immersive experiences by automatically aligning 3-D content with physical objects. It makes it possible to gain a contextual understanding of objects without the need for markers or manual alignment. It also saves significant touch labor, reduces alignment errors, and improves user experiences by building mixed reality applications with Object Anchors.
  • Azure Spatial Anchors (ASA) is a cross-platform service that allows you to build spatially aware mixed reality applications. With ASAs, you can map, persist, and share holographic content across multiple devices at a real-world scale. In particular, ASAs are used to create free-world anchors that persist across multiple application sessions.
  • Azure Speech service is a speech resource that may be used to recognize speech, synthesize speech, get real-time translations, transcribe conversations, or integrate speech into your bot experience.
  • Azure AI Vision is a cloud-based computer vision API that provides developers with access to advanced algorithms for processing images and returning information. When a user uploads an image or specifies an image URL, Microsoft Computer Vision algorithms can analyze visual content in different ways based on inputs and user choices.
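
As the last bullet notes, Azure AI Vision is driven by a REST API: the caller POSTs an image URL to the analyze endpoint with the desired visual features as a query parameter and a subscription key in a header. The sketch below assembles the pieces of such a request; the endpoint, key, and image URL are hypothetical placeholders.

```python
# Sketch: the shape of an Azure AI Vision image-analysis REST call.
# The endpoint, key, and image URL below are hypothetical placeholders.
def vision_analyze_request(endpoint: str, key: str, image_url: str) -> dict:
    """Assemble the pieces of an image-analysis REST request."""
    return {
        "url": f"{endpoint}/vision/v3.2/analyze",
        # The caller chooses which analyses to run on the image.
        "params": {"visualFeatures": "Description,Tags"},
        "headers": {
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        "body": {"url": image_url},
    }

req = vision_analyze_request(
    "https://example.cognitiveservices.azure.com",  # hypothetical resource
    "YOUR_KEY",
    "https://example.com/room.jpg",
)
print(req["url"])
# The assembled request could then be POSTed with urllib or the
# requests library; the response is JSON describing the image.
```

Varying the `visualFeatures` parameter is how the "different ways based on inputs and user choices" mentioned above are selected, e.g., requesting tags, descriptions, or object detection for the same image.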

GCP XR and Related PaaS Services

Google provides various XR and related PaaS services, including both Google AR and VR. The AR services include Google Lens, which can recognize things in images. AR in Google Search lets you bring 3-D objects and animals into the world you see. Live View in Google Maps changes how the world looks to add directions and other information, and AR Stickers let you drop objects into photos taken with a Google Pixel camera. Google makes it possible for developers to develop AR applications using its ARCore Geospatial API. Google also provides VR capabilities, including Cardboard, a cardboard VR headset that uses a phone as a virtual world generator, and DeepDream, a computer vision program that uses a convolutional neural network to find and enhance patterns in images to create dream-like appearances. Google also makes it possible for developers to develop their own VR applications.

Other XR and Related PaaS Services

Google isn’t the only provider of XR and related PaaS services. For example, Meta Quest offers all-in-one VR headsets that developers can use to create a range of VR experiences, including mixed reality, designed for both work and play.

In another example, Amazon develops XR tools and uses XR technology to support its retail businesses with the Amazon Sumerian platform. Amazon Sumerian makes it possible to create and run VR, AR, and 3-D applications quickly and easily without requiring any specialized programming or 3-D graphics expertise. It runs on popular hardware such as Meta Quest and Google Cardboard, as well as on Android and iOS mobile devices. Amazon Sumerian makes it possible to create virtual classrooms that let you train new employees around the world or enable people to tour a building remotely.

The NVIDIA Omniverse platform is an easily extensible platform for 3-D design collaboration and scalable multi-GPU, real-time, true-to-reality simulation. Omniverse revolutionizes the way individuals create, develop, and work together as teams, bringing more creative possibilities and efficiency to 3-D creators, developers, and enterprises.

YouTube also offers a VR experience through videos recorded with 360 or 3-D cameras. Through YouTube VR, users can experience things such as skydiving, snowmobiling, and a hot air balloon ride. To ensure it is VR, look for the compass icon in the upper left of a video.

3-D/4-D Printing Services

The process of 3-D printing, formally known as additive manufacturing, creates static three-dimensional objects through additive processes in which successive layers of material are laid down under computer control. It is used in applications such as medical prosthetics, aerospace components, and defense equipment. A 3-D modeling program, such as AutoCAD, is used to design the objects. Various cloud vendors are making 3-D printing technology available on the cloud today, such as Craftcloud, which lets users order a custom 3-D printed part without owning a 3-D printer. Users can upload their 3-D models to the Craftcloud platform, select their specifications, and receive the custom part in the mail.
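
The layer-by-layer idea above is easy to make concrete: before printing, a slicer converts the model's height into a stack of discrete layers deposited one after another. The sketch below computes that stack for a given part height and layer thickness (the numbers are illustrative):

```python
# Sketch: the additive idea behind 3-D printing. A part is built as a stack
# of thin layers, so a slicer converts model height into discrete layers
# laid down one after another under computer control.
import math

def layer_count(part_height_mm: float, layer_height_mm: float) -> int:
    """Number of layers needed to build a part of the given height."""
    return math.ceil(part_height_mm / layer_height_mm)

def layer_heights(part_height_mm: float, layer_height_mm: float):
    """Z position at which each successive layer is deposited."""
    n = layer_count(part_height_mm, layer_height_mm)
    return [round(min(i * layer_height_mm, part_height_mm), 3) for i in range(1, n + 1)]

# A 1 mm tall part printed at a 0.2 mm layer height needs 5 layers.
print(layer_count(1.0, 0.2))    # 5
print(layer_heights(1.0, 0.2))  # [0.2, 0.4, 0.6, 0.8, 1.0]
```

This trade-off between layer height and layer count is also why finer layers yield smoother parts but take proportionally longer to print.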

The process of 4-D printing, also supported as a platform as a service (PaaS) on some clouds, extends 3-D printing by programming the fundamental printing materials themselves so that objects can change their form or function after fabrication. Cloud-based 4-D printing services are scalable and streamline the development and deployment of smart materials and dynamic structures, offering the advanced computational resources and specialized software tools needed to design, simulate, and control 4-D printing processes, which enhances efficiency and innovation in creating adaptable and self-transforming products. In effect, 4-D printing adds the elements of time and interactivity to 3-D printing: printed objects gain dynamics and performance capabilities, and they can be assembled, disassembled, and then reassembled to form macroscale objects of a desired shape and multifunctionality. The technology rests on three key capabilities: the machine, the material, and the geometric program. As an example, the Stratasys material research group used this technology to develop a new polymer that expands 150% when submerged in water.
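
A simple way to reason about the "geometric program" of a 4-D printed part is as a rule mapping a stimulus to a shape change. The sketch below models a printed segment that expands by 150% when submerged in water, loosely inspired by the Stratasys polymer mentioned above; the class, trigger name, and the interpretation of "150% expansion" as an increase over the original length are all illustrative assumptions.

```python
# Sketch: modeling the stimulus response of a 4-D printed part. The material
# class and trigger are illustrative; "150% expansion" is interpreted here
# as a 150% increase over the original length.
class SmartSegment:
    """A printed segment whose length changes when a stimulus is applied."""

    def __init__(self, length_mm: float, expansion_pct: float = 150.0):
        self.length_mm = length_mm
        self.expansion_pct = expansion_pct
        self.submerged = False

    def apply_stimulus(self, stimulus: str) -> float:
        """Apply a stimulus and return the resulting length."""
        if stimulus == "water" and not self.submerged:
            self.submerged = True
            self.length_mm *= 1 + self.expansion_pct / 100
        return self.length_mm

segment = SmartSegment(length_mm=10.0)
print(segment.apply_stimulus("water"))  # 25.0: 10 mm grows by 150%
```

Designing a part is then a matter of composing segments with different programmed responses so the whole object folds or expands into its target shape when triggered.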

Industry Spotlight

3-D and 4-D Printing in Health Care

In health care, 3-D and 4-D printing have revolutionized imaging technology, improving processes such as mammography, radiation therapy, bronchoscopy, and ultrasounds. With benefits such as three-dimensional imaging, better drug delivery processes, tissue engineering, and more sophisticated medical devices, 3-D and 4-D technology improves the quality of images and enables health professionals to provide more accurate diagnoses and better targeted treatments, improving patient care and often leading to better outcomes.

Provide a specific example of how you think 3-D and 4-D printing are likely to improve health care in the next five years.

Applications Development Services

Various application development support capabilities are provided as PaaS services on the big clouds. These services help organizations improve operations and include integration management, identity and security management, application life cycle management, monitoring, and management and governance.

Integration Management

Integration management is a PaaS service that supports project management with tools for communication, project coordination, efficiency, and even conflict resolution. AWS, GCP, and IBM Cloud also provide integration management PaaS services and related capabilities.

Identity and Security Management

With identity and security management supported by PaaS, organizations can help ensure that only authorized users have access to their systems and applications. AWS, GCP, and IBM Cloud also provide identity and security management PaaS services and related capabilities.

Application Life Cycle Management

Application life cycle management guides a software application from planning until the software is decommissioned and retired. Various application life cycle management capabilities, including DevOps and migration, are provided as PaaS services on the big clouds.

DevOps combines people, processes, and products to enable continuous delivery of value to end users. DevOps enables you to build, test, and deploy any application, either to the cloud or on premises. AWS, GCP, and IBM Cloud also provide DevOps PaaS services and related capabilities.

Migration services minimize the time and resources required to migrate an on-premises environment to the cloud. AWS, GCP, and IBM Cloud also provide migration services and related capabilities.

Monitoring

Typical monitoring services include application log analytics to drive resource autoscaling. Monitoring ensures that organizations realize when they have application issues that need immediate attention and areas where applications can perform better. Monitoring also provides data about applications that are underutilized and overloaded. AWS, GCP, and IBM Cloud also provide monitoring PaaS services and related capabilities.
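a
The autoscaling loop described above can be reduced to a simple decision rule: scale out when a monitored metric crosses a high threshold, scale in when there is headroom, and stay within replica bounds. The sketch below uses average latency as the metric; the thresholds and bounds are hypothetical values a team would tune.

```python
# Sketch: threshold-based autoscaling driven by monitoring data, as
# described above. The latency thresholds and replica bounds are
# hypothetical values that a team would tune for its workload.
def scale_decision(current_replicas, avg_latency_ms, min_replicas=1, max_replicas=10):
    """Scale out when latency is high, scale in when there is headroom."""
    if avg_latency_ms > 500 and current_replicas < max_replicas:
        return current_replicas + 1
    if avg_latency_ms < 100 and current_replicas > min_replicas:
        return current_replicas - 1
    return current_replicas

# A latency spike observed in the application logs triggers a scale-out;
# sustained low latency triggers a scale-in.
print(scale_decision(current_replicas=3, avg_latency_ms=750))  # 4
print(scale_decision(current_replicas=3, avg_latency_ms=60))   # 2
```

The same rule, fed by log analytics, also surfaces the over- and underutilization signals mentioned above: an application pinned at `max_replicas` is overloaded, while one sitting at `min_replicas` is a candidate for downsizing.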

Management and Governance

Generally, management and governance capabilities include recovery, cost management and billing, and other services. AWS, GCP, and IBM Cloud also provide management and governance PaaS services and related capabilities.

Citation/Attribution

This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Attribution information
  • If you are redistributing all or part of this book in a print format, then you must include on every physical page the following attribution:
    Access for free at https://openstax.org/books/introduction-computer-science/pages/1-introduction
  • If you are redistributing all or part of this book in a digital format, then you must include on every digital page view the following attribution:
    Access for free at https://openstax.org/books/introduction-computer-science/pages/1-introduction
Citation information

© Oct 29, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License. The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.