Learning Objectives
By the end of this section, you will be able to:
- Understand how to deploy a cloud-native application on a PaaS platform
- Understand how to deploy a cloud-native application using VMWare Tanzu
- Understand how to deploy FaaS functions on a serverless platform
This module focuses on building sample applications that illustrate the steps taken to deploy applications using various cloud deployment technologies. The first section focuses on how to build a sample cloud-native application on a PaaS platform. The sample application illustrates the use of microservices, Docker containers, and Kubernetes orchestration. The second section focuses on how to set up a suite of products used to manage Kubernetes clusters and monitor applications deployed in them. Finally, the third section focuses on how to deploy FaaS functions that are part of a distributed application on a serverless platform. The example provided illustrates the use of various metrics and performance dashboards used to monitor a distributed application. When working through these examples, keep in mind that they are based on tutorials made available by the cloud service providers. As the technologies used in these tutorials evolve, the tutorials may change. As a result, there may be differences in the configuration options in the cloud service provider consoles, or some of the steps may have changed. Regardless, the underlying goals of these examples should remain achievable. These examples also require subscriptions to AWS and Microsoft Azure. All cloud service providers offer free-trial credits. These examples were completed without exceeding the free-trial credit and using as many free-tier services as possible.
PaaS Deployment of a Sample Cloud-Native Application
The example in this section illustrates PaaS deployment of a cloud-native application on Microsoft Azure. Azure is Microsoft's cloud computing platform, offering a wide range of services that allow customers to build, deploy, and manage applications and services in the cloud. The sample cloud-native application13 includes two microservices. Both communicate with a single datastore. Each microservice is containerized and deployed in a Kubernetes environment, illustrating Kubernetes orchestration. The PaaS deployment of this sample cloud-native application in Azure is illustrated in Figure 12.36.
One microservice implements a web service in JavaScript using Node.js. Express.js, a back-end Node.js web application framework used to implement RESTful APIs, provides the REST API for this web service. This microservice pushes data updates to a datastore via the REST API. The other microservice implements a web service using Next.js, an open-source React framework used to create full-stack web applications. React is a library used to create components that render to various environments, including web and mobile applications. This microservice reads data from the same datastore.
Docker images are created for each microservice and pushed to an image registry. Azure Container Registry (ACR) is used for this purpose. Each microservice is self-contained and encapsulated in a Docker container whose image is pulled from ACR and deployed onto worker nodes in a Kubernetes cluster. Scaling the microservices is managed by Kubernetes. The Azure Kubernetes Service (AKS) is used for this purpose. Both microservices communicate with a single datastore. The datastore used is a PostgreSQL database hosted in Azure.
Prerequisites:
- Open a web browser and log into the Azure Portal. The Azure Portal is a web-based console that allows customers to manage their cloud services and Azure subscriptions.
- An Azure resource group. An Azure resource group is a container that holds related resources used in a cloud solution. In this example, the resource group rg-nativeapps-eastus is used.
Set Up a Postgres Database in Azure
The first step is to create a datastore that both microservices will communicate with. Azure Database for PostgreSQL is the resource used for this purpose. The following steps create a relational database management system (RDBMS) server. Once the server is created, a PostgreSQL database is created along with tables to store the data. Finally, data is inserted into the database.
Create the Resource
- In the Azure Portal, search for Azure Database for PostgreSQL. Select Azure Database for PostgreSQL listed under the Marketplace section.
- Select Azure Database for PostgreSQL Flexible server for the Resource type and click Create.
- On the Basics tab, configure the resource attributes. Table 12.1 shows the list of settings that should be used. Any settings not included in the table should be set to the default values provided in the wizard.
Table 12.1 Basics Tab Settings
- Subscription: Select the default subscription.
- Resource group: For this example, rg-nativeapps-eastus is used.
- Server name: Enter a unique name for the resource. For this example, na-dbserver-flex is used.
- Data source: Select None.
- Location: For this example, select the region that is used for the resource group.
- Version: Select 11.
- Compute + storage: Click on the Configure server link. On the Configure blade, select Basic, set the vCore value to 1 and Storage to 2 GiB, and then click Save.
- Admin username: Enter a username. For this example, Student is used.
- Password: Enter a password. For this example, Pa55w0rd1234 is used.
- To create the resources as configured in Table 12.1, click Review + create and then click Create. The provisioning of the database server may take several minutes. A status message appears when the deployment is complete. Click on Go to resource.
Configure Connection Security
Security policies need to be added to allow resources, including the microservices, to connect to the datastore securely. This step configures the connection security settings so that the microservices can connect to the datastore. This is done on the Networking page for the datastore resource.
- From the menu on the left under Settings, click Networking.
- Enable the database server to allow connectivity from the cloud-native application deployed and running in Azure. To do this, click Yes for Allow access to Azure services. Immediately below this configuration, click + Add current client IP address (Figure 12.37).
- For this example, disable the SSL settings. If this step was missed before provisioning the database, do the following: click Server parameters, search for require_secure_transport in the search field, and click Off (Figure 12.38).
- Click Save.
- From the menu on the left, click Overview. Make a note of the Server name and Admin username values. These values are used to connect to the database from the cloud-native application deployed and running in Azure.
Create the Database, Tables, and Initial Data
The datastore is now configured with a valid hostname and user account. The next step is to create a database and tables for the data to be stored. Once the database and tables are created, initial data is inserted.
- Open the Azure Cloud Shell. To do this, click on the Cloud Shell icon to the right of the search bar in the Azure Portal. In the bottom frame of the browser page, the Cloud Shell will load. Click Bash, if prompted when the Cloud Shell loads. Click Create storage if prompted to complete loading the Cloud Shell.
- In Cloud Shell, connect to the database with the following psql command. Insert the Server name and Admin username values obtained earlier for <server_name> and <user_name>, respectively, as shown below. A postgres command prompt appears.
psql --host=<server_name> --port=5432 --username=<user_name> --dbname=postgres
- Create the database, create a table, and insert data that will be used in this example.
- Run the SQL statement below to create a new PostgreSQL database. The database name used in this example is cnainventory.
CREATE DATABASE cnainventory;
- Run the command below to switch to the newly created database. This step is necessary so that the tables are created in the correct database.
\c cnainventory
- Run the SQL statement below to create a new table. The table created for this example is inventory. It contains four fields: id, which is the primary key, name, quantity, and date.
CREATE TABLE inventory (
  id serial PRIMARY KEY,
  name VARCHAR(50),
  quantity INTEGER,
  date DATE NOT NULL DEFAULT NOW()::date
);
- Confirm the inventory table was created using the following command.
\dt
- Insert data into the inventory table with the SQL statements below.
INSERT INTO inventory (id, name, quantity) VALUES (1, 'yogurt', 200);
INSERT INTO inventory (id, name, quantity) VALUES (2, 'milk', 100);
- Confirm the data was successfully inserted with the following command:
SELECT * FROM inventory;
- The output lists the data records.
- Type \q to disconnect from the database.
Create and Deploy a Cloud-Native Application
Now that the datastore for the cloud-native application has been successfully created and configured, the next step is to create each of the two microservices of the cloud-native application. As previously mentioned, the cloud-native application consists of two microservices. One of the microservices is implemented using Node/Express.js. This microservice serves as a back-end service. The second microservice is implemented using Next.js and serves as a front-end web service. Although these microservices do not directly communicate with each other, both communicate with the datastore.
Create the Back-End Service
The first microservice created is the back-end service. This service exposes a set of functions that can receive requests via a REST API that inserts inventory data into the datastore.
- Open the Azure Cloud Shell. To do this, click on the Cloud Shell icon to the right of the search bar in the Azure Portal. In the bottom frame of the browser page, the Cloud Shell will load. Click Bash, if prompted when the Cloud Shell loads. Click Create storage if prompted to complete loading the Cloud Shell.
- Create a directory for the application and navigate into it with the following command.
mkdir -p cna-node-express && cd cna-node-express
- Use the following command to initialize a Node project. A package.json file, among other files, is generated for the Node project. The package.json file is later updated to include dependencies for the Node/Express.js back-end service.
npm init -y
- Express.js is used to build the REST API for the back-end service. Install Express.js with the following command. Confirm the package.json file is updated listing express as a dependency.
npm install express
- Create a new file named index.js with the command code index.js and add the code shown below. To save the file, type CTRL+S. Close the file by typing CTRL+Q. The code creates an Express application server that listens on port 8080. It accepts client requests sent in JSON format.
const express = require('express')
const port = process.env.PORT || 8080
const app = express()

app.use(express.json())

app.listen(port, () => console.log(`Sample app is listening on port ${port}!`))
Connect the Cloud-Native Application to the Database
Now that the back-end service has been successfully created, the next step is to add code to the Express.js application that allows it to connect to the datastore. The object-relational mapping (ORM) technique maps data objects in the Express.js code to tables in the PostgreSQL relational database. Sequelize is used for this purpose.
- In Azure Cloud Shell, run the command below to install the Sequelize package.
npm i sequelize pg pg-hstore
- Edit the index.js file to add code that allows the Express.js application to connect to the cnainventory database. Insert the code below. Substitute the Server name value for <server_name> (appears twice). This code provides the connection hostname and user account to the datastore so that the back-end service can connect to it.
const Sequelize = require('sequelize')
const sequelize = new Sequelize('postgres://Student%40<server_name>:Pa55w0rd1234@<server_name>.postgres.database.azure.com:5432/cnainventory')

sequelize
  .authenticate()
  .then(() => {
    console.log('Connection has been established successfully.');
  })
  .catch(err => {
    console.error('Unable to connect to the database:', err);
  });
- To use Sequelize in the Express.js application, add the following code to the index.js file. This is the code that does the mapping between data objects in the Express.js code and data records in the database table. The variable Inventory is declared to define the mapping between the Express.js code and the inventory table. Notice how this definition contains the exact same fields that were declared when the inventory table was created in the cnainventory PostgreSQL database.
const Inventory = sequelize.define('inventory', {
  id: {
    type: Sequelize.INTEGER,
    allowNull: false,
    primaryKey: true
  },
  name: {
    type: Sequelize.STRING,
    allowNull: false
  },
  quantity: {
    type: Sequelize.INTEGER
  },
  date: {
    type: Sequelize.DATEONLY,
    defaultValue: Sequelize.NOW
  }
}, {
  freezeTableName: true,
  timestamps: false
});
Create the Express.js REST API Endpoints
Now that the Express.js application is configured to access the PostgreSQL database, the next step is to create the REST API to accept client requests. These REST routes call functions that perform read and write operations on the PostgreSQL database. Two Express.js routes are added in the code. The first route performs a read from the database in response to receiving a GET HTTP request. The second route performs a write to the database in response to receiving a POST HTTP request.
- Edit the index.js file and add the code shown. The code adds a route that accepts HTTP GET requests to fetch an inventory record. The ID for the record is included in the request, and the ID, name, quantity, and date fields for the inventory record are returned.
app.get('/inventory/:id', async (req, res) => {
  const id = req.params.id
  try {
    const inventory = await Inventory.findAll({
      attributes: ['id', 'name', 'quantity', 'date'],
      where: { id: id }
    })
    res.json({ inventory })
  } catch (error) {
    console.error(error)
  }
})
- Add the second route by adding the following code to the index.js file. The code adds a route that accepts HTTP POST requests to create a new inventory record. Values for the record are included in the HTTP request body, with the exception of the date, which is calculated from the current date.
app.post('/inventory', async (req, res) => {
  try {
    const newItem = new Inventory(req.body)
    await newItem.save()
    res.json({ inventory: newItem })
  } catch (error) {
    console.error(error)
  }
})
Create the Front-End Component
The second microservice created is the front-end web service. This web service provides a web-based user interface to fetch inventory data.
- In the Azure Cloud Shell, use this command to create a Next.js application.
npx create-next-app
- Answer the prompts. It is important to select No for the App Router prompt. Note that the project name is cna-next. This is the root directory for the Next.js application.
- Navigate into the cna-next directory.
- Recall that in the back-end Express.js application, Sequelize is used as the ORM to map data objects in the Express.js code to data records in the inventory table in the database. For the Next.js front end, Prisma is used instead. Prisma is a Node ORM used to map data objects to tables in a relational database.
- Install the prisma and prisma-client packages with the following commands.
npm install prisma --save-dev
npm install @prisma/client
- Configure the Next.js application to use Prisma by running the command. This creates the prisma/ subdirectory and generates the schema.prisma configuration file inside it. This command also generates a dotenv (.env) file in the root directory of the project.
npx prisma init
- In the prisma/ directory, edit the generated schema.prisma file and add the content shown in Figure 12.39. This adds the data model for the inventory table.
- Notice how the schema.prisma file is configured to read the data source database URL from a dotenv (.env) file. In the cna-next/ directory, edit the generated dotenv (.env) file and change the database connection string as shown in the code snippet below. Replace USER_NAME with the Admin username, PASSWORD with the password, and SERVER_NAME with the Server name for the cnainventory PostgreSQL database.
DATABASE_URL="postgresql://USER_NAME%40SERVER_NAME:PASSWORD@SERVER_NAME.postgres.database.azure.com:5432/cnainventory"
- Make a copy of the .env file and name it .env.local as this is the file that will be copied into the Docker container and used by Prisma.
- To use Prisma in the Next.js application, the Prisma Client must be configured. The Prisma Client serves as a query builder tailored to the application data. Query builders are part of the ORM that generate the SQL queries used to perform the database operations for the application. To do this, run the command:
npx prisma generate
- Add the Prisma Client code to the Next.js application. To do this, create the lib/ subdirectory and navigate into it.
- Inside the lib/ directory, create the file prisma.tsx and add the following code.
import { PrismaClient } from '@prisma/client'

const globalForPrisma = global as unknown as { prisma: PrismaClient | undefined }

export const prisma =
  globalForPrisma.prisma ??
  new PrismaClient({
    log: ['query'],
  })

if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = prisma
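With the Prisma Client in place, database operations are expressed as method calls that Prisma translates into SQL against the inventory table. The following is a rough TypeScript sketch, not part of the tutorial files, of how such queries might look; it assumes the inventory model from schema.prisma (with id as an integer) and the prisma export from lib/prisma.tsx.

import { prisma } from './prisma'

// Illustrative queries only; the tutorial's front end calls
// prisma.inventory.findMany() from getServerSideProps (shown later).
async function exampleQueries() {
  // Roughly: SELECT * FROM inventory WHERE id = 1;
  const one = await prisma.inventory.findMany({ where: { id: 1 } })

  // Roughly: SELECT * FROM inventory ORDER BY quantity DESC;
  const byQuantity = await prisma.inventory.findMany({ orderBy: { quantity: 'desc' } })

  console.log(one, byQuantity)
}

exampleQueries().catch(console.error)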
- Now that the Next.js application is configured to map to the inventory table, the next step is to implement the web service code. The web service code consists of React components that render in a browser. The InventoryProps type describes the shape of an inventory record fetched from the database. The React component Inventory implements how the data is displayed as a web page in the browser. The React component Layout adds a page header and global styling to the web page.
- In the cna-next/ directory:
- Create and navigate to a directory named components/ and add the two code files that follow into it (confirm the components/ directory is at the same level as the pages/ directory that was generated):
- Create the file Inventory.tsx and add the code that follows.
import React from "react"; export type InventoryProps = { id: string; name: string; quantity: string; date: string; }; constInventory: React.FC<{ inventoryrec: InventoryProps }>=({ inventoryrec})=> {return( <div className="flex bg-white shadow-lg rounded-lg mx-2 md:mx-auto mb-5 max-w-2xl" > <divclassName="flex items-start px-4 py-3"> <div className=""> <div className="inline items-center justify-between"> <p className="text-gray-700 text-sm"> <strong>ID: {inventoryrec.id}</strong> Name: {inventoryrec.name} (quantity: {inventoryrec.quantity}) </p> <small className="text-red-700 text-sm"> Date: {inventoryrec.date.toString().substring(0,10)} </small> </div> </div> </div> </div> </div> ); }; export default Inventory;
- Create the file Layout.tsx and add the following code.
import React, { ReactNode } from "react";
import Head from "next/head";

type Props = {
  children: ReactNode;
};

const Layout: React.FC<Props> = (props) => (
  <div>
    <div className="w-full text-center bg-red-800 flex flex-wrap items-center">
      <div className="text-3xl w-1/2 text-white mx-2 md:mx-auto py-5">
        Inventory Data
      </div>
    </div>
    <div className="layout">{props.children}</div>
    <style jsx global>{`
      html {
        box-sizing: border-box;
      }
      *,
      *:before,
      *:after {
        box-sizing: inherit;
      }
      body {
        margin: 0;
        padding: 0;
        font-size: 16px;
        font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto,
          Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji",
          "Segoe UI Symbol";
        background: rgba(0, 0, 0, 0.05);
      }
      input,
      textarea {
        font-size: 16px;
      }
      button {
        cursor: pointer;
      }
    `}</style>
    <style jsx>{`
      .layout {
        padding: 0 2rem;
      }
    `}</style>
  </div>
);

export default Layout;
- Edit the index.tsx file and replace the default code with the following code.
declare global {
  namespace NodeJS {
    interface Global {
      prisma: any;
    }
  }
}

import { prisma } from '../lib/prisma';
import Inventory, { InventoryProps } from "../components/Inventory";
import Layout from "../components/Layout"

export const getServerSideProps = async () => {
  const inventoryrecs = await prisma.inventory.findMany({})
  return {
    props: { inventoryrecs: JSON.parse(JSON.stringify(inventoryrecs)) }
  }
}

type Props = {
  inventoryrecs: InventoryProps[]
}

// index.tsx
const InventoryFeed: React.FC<Props> = (props) => {
  return (
    <Layout>
      <div className="page">
        <br/>
        <main>
          {props.inventoryrecs.map((inventoryrec) => (
            <div key={inventoryrec.id} className="post">
              <Inventory inventoryrec={inventoryrec} />
            </div>
          ))}
        </main>
      </div>
      <style jsx>{`
        .post:hover {
          box-shadow: 1px 1px 3px #aaa;
        }
        .post + .post {
          margin-top: 2rem;
        }
      `}</style>
    </Layout>
  )
}

export default InventoryFeed
Build and Store Microservices Images in an Azure Container Registry
Now that the two microservices for the cloud-native application have been successfully created, the next step is to create an Azure Container Registry (ACR) to store Docker images for these microservices. Each microservice of the cloud-native application is containerized. Their images are pulled from the ACR and deployed in a Kubernetes environment hosted in the cloud.
Create the Azure Container Registry
- In the Azure Portal, on the home page, click on Create a resource. Click Container Registry.
- On the Basics tab, configure the resource attributes. Table 12.2 shows the list of settings to configure on the Basics tab. Any settings not included should be set to the default values provided in the wizard.
Table 12.2 Basics Tab Settings
- Subscription: Select the default subscription.
- Resource group: For this example, rg-nativeapps-eastus is used.
- Registry name: Enter a unique name for the resource. For this example, ncaregistryflex is used.
- Location: For this example, select the region that is used for the resource group.
- SKU: Select Standard.
- Click Review + create. Make a note of the Registry name and Resource group as these will be needed in a later step. Click Create. When the provisioning is completed, a status message appears.
- Click on Go to resource. Make a note of the registry name that was provided in the create wizard. In this example, the registry name is ncaregistryflex.
- Generate access keys for the container registry, which will be needed later. For this example, the container registry used is ncaregistryflex. In the Azure Portal, navigate to the container registry resource. From the menu on the left, under Settings, click Access keys. Enable Admin user.
Build the Docker Images
Now that the container registry has been successfully created, the next step is to build Docker images for each microservice. These images are then pushed to the container registry.
- Setting specific environment variables makes it easier to run the commands that follow. In the Azure Cloud Shell, run the following commands to set the required environment variables. Note the resource group name in this example is rg-nativeapps-eastus. The registry name in this example is ncaregistryflex.
RESOURCEGROUP={resource-group-name}
REGISTRYNAME={registry_name}
Containerize the Back-End Service
To containerize the back-end service, a Dockerfile must be created with a list of instructions to build the Docker image.
- Navigate to the cna-node-express/ directory. Create a Dockerfile and add the instructions below. The Dockerfile starts with a base image for Node indicated by the FROM instruction. A working directory is created and the package.json file is copied into it. The dependencies listed in the package.json file are used to install the dependent packages. Next, the source code for the Express.js application is copied. Port 8080, which the Express.js application listens on, is exposed. Finally, the command to start the Express.js application server is added as the last instruction.
FROM node:14-alpine

# Create app directory
WORKDIR /src

# Copy package.json and package-lock.json
COPY package*.json /src/

# Install npm dependencies
ENV NODE_ENV=production
RUN npm ci --only=production

# Bundle app source
COPY . /src

EXPOSE 8080

CMD [ "node", "index.js" ]
- Build the Docker image, which is pushed to the ACR registry in a later step. This assumes a Docker engine is installed on the local computer. To build the image, run the command below. Notice the command ends with a space followed by a period ("."), which references the current directory and must be part of the command. Wait until the Docker image build is complete.
docker build -t expressimage .
- Test run the back-end application by running the Docker container. Run the following command.
docker run -d --name expressimage -p 8080:8080 expressimage:latest
- Open a browser and enter the URL: http://127.0.0.1:8080/inventory/1.
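The browser URL above only exercises the GET route, returning the record inserted earlier through psql. To also exercise the POST route of the running container, a small script such as the following sketch can be used. This script is not part of the tutorial; it assumes Node 18+ (for the built-in fetch) and uses a hypothetical record with id 3.

// test-inventory.ts: hypothetical helper for exercising the back-end REST API.
const baseUrl = 'http://127.0.0.1:8080' // switch to the ingress URL once deployed to AKS

async function testInventoryApi() {
  // POST a new inventory record; the route inserts it into the cnainventory database.
  const post = await fetch(`${baseUrl}/inventory`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ id: 3, name: 'cheese', quantity: 50 }),
  })
  console.log('POST status:', post.status)

  // Read the record back through the /inventory/:id route.
  const get = await fetch(`${baseUrl}/inventory/3`)
  console.log('GET body:', await get.json())
}

testInventoryApi().catch(console.error)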
- Tag the image so that it can be pushed to the ACR registry with the command below. Note: run “docker images” to confirm the correct image name is used for the docker tagging. Substitute <registry_name> with the correct name of the ACR registry.
docker tag expressimage:latest <registry_name>.azurecr.io/expressimage:v1
Containerize the Front-End Service
To containerize the front-end service, a Dockerfile must also be created with a list of instructions to build the Docker image.
- Navigate to the cna-next/ directory. Create the Dockerfile and add the instructions below. The Dockerfile starts with a base image for Node indicated by the FROM instruction. A working directory is created and the package.json file is copied into it. The dependencies listed in the package.json file are used to install the dependent packages. Next, the source code for the Next.js application is copied. The Prisma client is generated. Port 3000 is exposed because the Next.js application listens on port 3000. Finally, the command to start the Next.js application server is added as the last instruction.
FROM node:lts-buster-slim AS base
RUN apt-get update && apt-get install libssl-dev ca-certificates -y
WORKDIR /app
COPY package.json package-lock.json ./

FROM base as build
RUN export NODE_ENV=production
RUN yarn
COPY . .
RUN npx prisma generate
RUN yarn build

FROM base as prod-build
RUN yarn install --production
COPY prisma prisma
RUN npx prisma generate
RUN cp -R node_modules prod_node_modules

FROM base as prod
COPY --from=prod-build /app/prod_node_modules /app/node_modules
COPY --from=build /app/.next /app/.next
COPY --from=build /app/public /app/public
COPY --from=build /app/prisma /app/prisma
EXPOSE 3000
CMD ["yarn","start"]
- Create the docker-compose.yml file and add the content below. This step is required so that the .env.local file is properly copied into the Docker container.
services:
  web:
    ports:
      - "3000:3000"
    build:
      dockerfile: Dockerfile
      context: ./
    volumes:
      - .env.local:/app/.env.local
- Build the Docker image for the front-end service; it is pushed to the ACR registry in a later step. To build the image and run the container, run the command below. Wait until the image builds and the container starts.
docker compose up -d
To stop the container, run the command docker compose down.
- Test run the front-end application. Because docker compose was used for the front end, the Docker container is already running. Open a browser and enter the URL http://127.0.0.1:3000.
- Tag the image so that it can be pushed to the ACR registry with the command below. Note: run “docker images” to confirm the correct image name is used for the docker tagging. Substitute <registry_name> with the correct name of the ACR registry.
docker tag cna-next_web:latest <registry_name>.azurecr.io/cna-next_web:v1
- Push Images to ACR Registry. First, confirm the images for both the front-end and back-end applications are tagged properly. Run the command: “docker images.” The following four images should be listed (the originally built images, and then the images tagged for the ACR registry).
- Log in to the ACR registry with the command below.
az acr login --name $REGISTRYNAME
Note: running the above command assumes being logged in to Azure from the CLI. This can be done with the following Azure CLI command: az login --scope https://management.core.windows.net//.default
- Push both images to the ACR registry. Run the two commands below. Substitute <registry_name> with the correct name of the ACR registry.
docker push <registry_name>.azurecr.io/expressimage:v1 docker push <registry_name>.azurecr.io/cna-next_web:v1
- In the Azure Console, navigate to the container registry and click on the registry name, ncaregistryflex. Under Services, click Repositories. Confirm that both image repositories are created. For this example, the repositories are expressimage and cna-next_web.
Create an Azure Kubernetes Service Instance
Now that the Docker images for the two microservices have been created and pushed to their respective repositories, the next step is to create a Kubernetes cluster.
- In the Azure Portal search bar, search for Kubernetes services. Click on Kubernetes services and select Create a Kubernetes cluster.
- Alternatively, on the Azure Portal Home page, click on Create a resource. From the menu on the left, click Containers, and then click Azure Kubernetes Service (AKS).
- On the Basics tab, configure the resource attributes. Table 12.3 shows the list of settings to configure on the Basics tab. Any settings not included should be set to the default values provided in the wizard.
Table 12.3 Basics Tab Settings
- Subscription: Select the default subscription.
- Resource group: For this example, rg-nativeapps-eastus is used.
- Kubernetes cluster name: Enter a unique name for the resource. For this example, nca-aks is used.
- Scale method: Select Manual.
- Location: For this example, select the region that is used for the resource group.
- Node count: Set to 2.
- On the Integrations tab, select the container registry that was created previously.
- Click Review + create. Then, click Create. Wait until the Kubernetes cluster is provisioned.
Deploy Microservices to the Kubernetes Cluster
Now that the Kubernetes cluster is provisioned successfully, the next step is to deploy the containerized microservices into it. The images for these microservices are pulled from the registry and deployed into pods in the Kubernetes cluster.
Set Up the Environment
- Setting specific environment variables makes it easier to run the commands that follow. Environment variables need to be set up for the resource group, Kubernetes cluster, and the container registry. Run the following commands to set these variables. Note that for this example, the resource group used is rg-nativeapps-eastus, the Kubernetes cluster name used is nca-aks, and the registry name is ncaregistryflex.
RESOURCEGROUP={resource_group}
CLUSTERNAME={cluster_name}
REGISTRYNAME={registry_name}
- The Kubernetes cluster must be able to pull the Docker images from the ACR registry; this integration was configured on the Integrations tab when the cluster was created. Download the credentials used to connect to the Kubernetes cluster with this command.
az aks get-credentials --resource-group $RESOURCEGROUP --name $CLUSTERNAME
- Kubectl is the Kubernetes command-line tool that is used to manage Kubernetes clusters. This is the tool that interacts with the Kubernetes cluster on Azure. Confirm connectivity to the cluster by running the following command, which lists the nodes of the Kubernetes cluster. The output lists the nodes for the cluster.
kubectl get nodes
- The next step is to obtain the hostname to the ACR registry. This is added to the deployment manifest files for the microservices so that the Docker images can be pulled from the registry and deployed into the Kubernetes cluster. Run the command to query the ACR server.
az acr list --resource-group $RESOURCEGROUP --query "[].{acrLoginServer:loginServer}" --output table
Create and Apply the Deployment Manifests
Deployment manifest files are used to deploy the Docker images for the microservices into the Kubernetes cluster. They provide declarative updates for the Kubernetes Pods and ReplicaSets for the microservices. Initially, only one instance of each microservice is made available. Each microservice can scale up or down as needed.
- Create a deployment manifest file for the Docker image, expressimage, for the back-end service. Create the express-deployment.yaml file and enter the following content. The deployment manifest deploys the back-end service with the label cna-express. In this deployment manifest file, the Docker image is pulled from the registry and gets deployed in a pod in Kubernetes.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cna-express
spec:
  selector: # Define the wrapping strategy
    matchLabels: # Match all pods with the defined labels
      app: cna-express # Labels follow the `name: value` template
  template: # This is the template of the pod inside the deployment
    metadata:
      labels:
        app: cna-express
    spec:
      containers:
        - image: ncaregistry.azurecr.io/expressimage
          name: expressimage
          ports:
            - containerPort: 80
- Apply the deployment manifest to the Kubernetes cluster with the following command. A message indicates that the deployment object was successfully created.
kubectl apply -f ./express-deployment.yaml
- Create a deployment manifest file for the Docker image, webimage, for the front-end service. Create the web-deployment.yaml file and enter the following content. The deployment manifest deploys the front-end service with the label cna-web. In this deployment manifest file, the Docker image is pulled from the registry and gets deployed in a pod in Kubernetes.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cna-web
spec:
  selector: # Define the wrapping strategy
    matchLabels: # Match all pods with the defined labels
      app: cna-web # Labels follow the `name: value` template
  template: # This is the template of the pod inside the deployment
    metadata:
      labels:
        app: cna-web
    spec:
      containers:
        - image: ncaregistry.azurecr.io/webimage
          name: webimage
          ports:
            - containerPort: 80
- Apply the deployment manifest to the Kubernetes cluster with this command. A message indicates that the deployment object was successfully created.
kubectl apply -f ./web-deployment.yaml
- Confirm the deployments for both the back-end and front-end services were successful with the following commands. Both microservices should display a status of “Running” in the Kubernetes cluster.
kubectl get deploy cna-express
kubectl get pods
- In the Azure Console, navigate to the Kubernetes resources page. Click on Workloads. The microservices are deployed with a status of Ready.
Create and Apply the Service Manifests
Both microservices are now deployed in a Kubernetes cluster. Kubernetes Services are created for each microservice so that they can receive client requests. Kubernetes provides the Pods where the microservices are deployed with IP addresses and a single fully qualified domain name (FQDN) for a set of Pods. In addition, Services expose TCP ports to the containers where the microservices are running.
- Create a service manifest file for the Docker image, expressimage, for the back-end service. Create the express-service.yaml file and enter the following content. The service manifest creates the Kubernetes Service for the back-end service with the label cna-express.
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: cna-express
spec:
  type: ClusterIP
  selector:
    app: cna-express
  ports:
    - port: 8080 # SERVICE exposed port
      name: http # SERVICE port name
      protocol: TCP # The protocol the SERVICE will listen to
      targetPort: 8080
- Apply the service manifest to the Kubernetes cluster with this command. A message indicates that the service object was successfully created.
kubectl apply -f ./express-service.yaml
- Create a service manifest file for the Docker image, webimage, for the front-end service. Create the web-service.yaml file and enter the following content. The service manifest creates the Kubernetes Service for the front-end web service with the label cna-web.
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: cna-web
spec:
  type: ClusterIP
  selector:
    app: cna-web
  ports:
    - port: 3000 # SERVICE exposed port
      name: http # SERVICE port name
      protocol: TCP # The protocol the SERVICE will listen to
      targetPort: 3000
- Apply the service manifest to the Kubernetes cluster with this command. A message indicates that the service object was successfully created.
kubectl apply -f ./web-service.yaml
- Confirm the service deployment was successful. The Services for each microservice should be listed. IP addresses (CLUSTER-IP) and ports should also be specified for each microservice.
kubectl get service cna-express
kubectl get service
Create and Apply the Ingress Controllers
Now that the services are created with assigned IP addresses and exposed ports, Ingress controllers are used to define how the deployed microservices are exposed to outside requests.
- First, enable the Kubernetes cluster so that it can use HTTP Application Routing with the following command.
az aks enable-addons --resource-group $RESOURCEGROUP \ --name $CLUSTERNAME --addons http_application_routing
- Next, configure and deploy the Ingress controller. As mentioned earlier, an FQDN is provided for a set of Pods of the Kubernetes cluster. Run this command to obtain this FQDN. The output is the FQDN that is used to expose the microservices to outside requests.
az aks show --resource-group $RESOURCEGROUP --name $CLUSTERNAME -o tsv \ --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName
- Create the Ingress descriptor file, express-ingress.yaml, for the back-end service labeled as cna-express and add the following content. Note that the host includes the FQDN that was obtained in the previous step. It is prepended with cna-express making it unique for the back-end service.
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cna-express
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: cna-express.8776bb8bc0324e10946c.eastus.aksapp.io
      http:
        paths:
          - path: / # Which path is this rule referring to
            pathType: Prefix
            backend: # How the ingress will handle the requests
              service:
                name: cna-express # Which service the request will be forwarded to
                port:
                  name: http # Which port in that service
- Apply the Ingress manifest to the Kubernetes cluster with this command. A message indicates that the Ingress object was successfully created.
kubectl apply -f ./express-ingress.yaml
- Create the Ingress descriptor file, web-ingress.yaml, for the front-end web service labeled as cna-web and add the following content. Note that the host includes the FQDN that was obtained in the previous step. It is prepended with cna-web making it unique for the front-end web service.
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cna-web
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: cna-web.8776bb8bc0324e10946c.eastus.aksapp.io
      http:
        paths:
          - path: / # Which path is this rule referring to
            pathType: Prefix
            backend: # How the ingress will handle the requests
              service:
                name: cna-web # Which service the request will be forwarded to
                port:
                  name: http # Which port in that service
- Apply the ingress manifest to the Kubernetes cluster with this command. A message indicates that the Ingress object was successfully created.
kubectl apply -f ./web-ingress.yaml
- Confirm the Ingresses were deployed successfully with the commands below. The Ingresses for both microservices will display host names (HOSTS) that are unique for each microservice.
kubectl get ingress cna-express
kubectl get ingress
- The command below queries Azure for the FQDN that was created earlier. It serves as the ZoneName. The command also returns the ResourceGroup value that is used to access the microservices. Run the command below and make a note of the ZoneName and ResourceGroup values.
az network dns zone list --output table
- Substitute the values for ResourceGroup and ZoneName obtained in the previous step for <resource-group> and <zone-name>, respectively, in the command that follows. Execute the edited command in the Cloud Shell, which results in the table shown. Two records are added for cna-express and two for cna-web which show in the Name column.
az network dns record-set list -g <resource-group> -z <zone-name> --output table
- In a browser, access the back-end service using the URL that is generated (e.g., http://cna-express.8776bb8bc0324e10946c.eastus.aksapp.io/inventory/1), which includes the route that was implemented as part of the REST API for the microservice. The back-end service receives requests in JSON format to create and store inventory records into the datastore. It also retrieves inventory records from the datastore and renders them in JSON format.
- In a browser, access the front-end web service using the URL that is generated (e.g., http://cna-web.8776bb8bc0324e10946c.eastus.aksapp.io). The front-end web service renders inventory records from the datastore to a web page.
PaaS Deployment of a Sample Cloud-Native Application Using VMWare Tanzu
This section demonstrates how to use VMWare Tanzu deployment technology to launch a dashboard on AWS that monitors application metrics exposed by the Kuard (Kubernetes Up and Running Demo) application. Amazon Web Services (AWS) is Amazon's cloud computing platform that offers a wide range of services that allow customers to build, deploy, and manage applications and services in the cloud, and it is used in this example.
Set Up an Environment for VMWare Tanzu
The first step is to create an AWS EC2 instance. This is a VM hosted in the cloud. Once the EC2 instance is provisioned, a set of tools need to be installed, including Docker, Kubernetes CLI tool, and other relevant package managers.
Create the EC2 Instance
- AWS Portal is a web-based console that allows customers to manage their cloud services and AWS subscriptions. In the AWS Portal, search for EC2 and click Launch Instance. Amazon Elastic Compute Cloud (EC2) is a cloud based, on-demand, compute platform that can be auto-scaled to meet demand.
- Under Application and OS Images (Amazon Machine Image), select Ubuntu, and leave the remaining configurations to the default settings for this section.
- Under the Instance type section, select t2.micro for the Instance type.
- Under the Key pair (login) section, click on Create new key pair.
- Enter a value for Key pair name and keep the default settings. Click on Create key pair. The name used for this example is cnatanzu-private-key. Download the cnatanzu-private-key.pem file. This file is used to log in to the EC2 instance. The name provided, cnatanzu-private-key, is populated for the Key pair name in the Key pair (login) section.
- Under the Networking settings section, select Create security group. Enable Allow SSH traffic from, Allow HTTPS traffic from the internet, Allow HTTP traffic from the internet. Set Anywhere to 0.0.0.0/0.
- Under the Summary section, click Launch instance. Wait until the provisioning is complete. A Success message appears with the Instance ID. Click on the Instance ID link.
- Open a terminal and navigate to the directory where the cnatanzu-private-key.pem file was downloaded. Under Instance summary, copy the value from Public IPv4 DNS. Use the command to make an SSH connection to the EC2 instance by inserting the copied Public IPv4 DNS.
ssh -i cnatanzu-private-key.pem ubuntu@<EC2 public IPv4 DNS>
Install Homebrew (brew)
Homebrew is an open-source package management system used to install and manage packages on MacOS and Linux operating systems. Homebrew, also referred to as “brew,” is used to install Octant later.
- Homebrew uses a compiler environment for packages that may need to be built from source. The first step is to install a compiler environment. The build-essential package provides all required packages for the compiler environment. Run the command below to install the build-essential package.
sudo apt install build-essential
- Run the following command to download the brew installation script that is used to install brew.
curl -fsSL -o install.sh https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
- Run the following command to launch the installation script.
/bin/bash install.sh
- Once the installation script is complete, run the following two commands to add brew to the PATH environment variable. This makes the command brew recognizable in the EC2 instance.
(echo; echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"') >> /home/ubuntu/.profile
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
- Confirm brew was added to the PATH environment variable. Type the command brew to confirm the brew help menu is printed to the console. This indicates that brew was installed successfully.
Install and Set Up Docker
Tanzu requires the docker engine to be up and running. Docker Engine is an open-source containerization platform used to build Docker images and manage Docker containers. To install the Docker engine, Advanced Package Tool (apt) is used. Apt is a packaging tool used to install new packages and update existing packages.
- Run the command to update the apt utility itself before installing Docker.
sudo apt update
- Run the command below to install the following packages as prerequisites for installing Docker:
- apt-transport-https, which allows the package manager to transfer files over the HTTPS protocol
- ca-certificates, which makes available common Certificate Authority certificates to aid in verifying the security of connections
- curl, which is used for transferring data to or from a server
- software-properties-common, which is a package to help manage the software installations
sudo apt install apt-transport-https ca-certificates curl software-properties-common
- Download a public key file from Docker with the command below. The public key file is then added to a list of trusted keys managed by apt.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- Run the add-apt-repository command below to add the external Docker repository to the apt sources list.
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
- After making the changes to the apt package information in the previous steps, update the apt index with this command.
sudo apt update
- Run the command below to install Docker on the EC2 instance:
sudo apt install docker-ce
- Run the next three commands to configure Docker and install Docker Compose.
- Add the current ubuntu user to the docker group.
sudo usermod -aG docker ${USER}
- Download Docker Compose. Docker Compose is a tool used for defining, via descriptor files, and managing applications that consist of multiple Docker containers.
sudo curl -L \ "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \ -o /usr/local/bin/docker-compose
- Run this command to change the permissions of the downloaded docker-compose file to an executable so that it can be run.
sudo chmod +x /usr/local/bin/docker-compose
- Confirm that both docker and docker-compose were installed. Run these commands and confirm that the versions for docker and docker-compose are displayed to the console.
docker -v
docker-compose -v
- Run the command below to start the Docker engine in the EC2 instance.
sudo systemctl start docker
- Run the command below to test if Docker was installed correctly.
sudo docker run hello-world
Install and Set Up Kubectl
Tanzu requires kubectl. The Kubernetes command-line tool, kubectl, is used to interact with the Tanzu Kubernetes cluster.
- Download the latest release of kubectl with the command below.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
- Install kubectl with the command below.
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
- Run the following commands to change the permissions of the kubectl file to executable so that it can be run, and to copy it to the ~/.local/bin directory.
chmod +x kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
- Append ~/.local/bin to the $PATH environment variable. Edit the ~/.bashrc file and add the following line to the end of it.
export PATH=$PATH:~/.local/bin
- Save and close the file. Apply the changes by running the following command:
source ~/.bashrc
- Confirm, by running the command kubectl, that it can be executed. The kubectl help menu should display in the console. This indicates that it was installed successfully.
Install Tanzu
- Install jq as a support package for Tanzu with the following commands:
sudo apt-get update
sudo apt-get install jq
- Install xdg-utils as a support package for Tanzu with the following command:
sudo apt-get install --reinstall xdg-utils
- Download the Tanzu Community Edition release (v0.10.0) with the following command. Ensure the installed kubectl version is compatible with this Tanzu version.
curl -H "Accept: application/vnd.github.v3.raw" -L \
https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh \
| bash -s v0.10.0 linux
- Unpack the gzip file and install Tanzu using the provided shell script with the command below.
tar xzvf tce-linux-amd64-v0.10.0.tar.gz
- Navigate into the tce-linux-amd64-v0.10.0/ directory and run the installation shell script with the command below.
./install.sh
- Confirm Tanzu was installed successfully. Run the command tanzu. The Tanzu help menu should be displayed in the console.
Install and Test the Kuard Demo Application
Now that the Docker engine, Docker Compose, Kubectl, and Tanzu are all installed, the next step is to install the Kuard demo application.
Create Required AWS Secret and Access Keys
AWS access keys (an access key ID and a secret access key) are used to manage AWS resources via the AWS CLI. The AWS CLI is a command-line interface used to manage Amazon cloud services. The access keys are used in the Tanzu configuration to provision and configure the Tanzu Management Cluster. An RSA key pair is also used in the Tanzu configuration to communicate with the Tanzu Management Cluster.
- In the AWS shell, install awscli. First, download the AWS CLI zip file with the curl command below.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
- Install the unzip CLI utility that is used to unpack the downloaded awscliv2.zip file. Run the command below:
sudo apt install unzip
- Unzip the awscliv2.zip file with the command below.
unzip awscliv2.zip
- Install the unpackaged AWS CLI with the command below.
sudo ./aws/install
- Run the command below to verify that the AWS CLI was installed successfully. The version should be displayed.
aws --version
- In the AWS Portal under Access management, click on Security credentials. Click on Create access key to generate the key. Make a note of the key pairs as they will be added to the Tanzu Management Cluster settings.
- Finally, generate an RSA key pair. The name used in this example is tanzu-key-pair. Make a note of the name of the RSA key pair as it will be added to the Tanzu Management Cluster settings.
aws ec2 create-key-pair \ --key-name tanzu-key-pair \ --key-type rsa \ --key-format pem \ --query "KeyMaterial" \ --output text > tanzu-key-pair.pem
Create the Tanzu Management Cluster Configuration
Now that the environment to run the Tanzu Management Cluster is set up, the next step is to create a Tanzu Management Cluster. A management cluster configuration file is needed. One way to create a management cluster configuration file is to use the Tanzu installer, which is a web page that is used to generate the configuration file.14
- Use the Tanzu installer to create the management cluster configuration file. Launch the Tanzu installer web page with the command below. A browser page automatically launches.
tanzu management-cluster create
- Select Deploy on Amazon Web Services. This generates a configuration file that deploys the cluster on the Amazon EC2 instance that was created earlier.
- Under the IaaS Provider section, select the REGION that is also used as a location for the EC2 instance.
- Under the Management Cluster Settings section, make sure that Bastion Host is unchecked (disabled). For EC2 KEY PAIR, enter the name of the RSA key pair that was created earlier. For this example, the name used is tanzu-key-pair. Select t3a.large for the AZ1 WORKER NODE INSTANCE TYPE.
- Under the Identity Management section, disable Enable Identity Management Settings.
- Under the OS Image section, select ubuntu-20-04-amd64 as the OS IMAGE.
- Click Review Configuration. Table 12.4 shows the complete list of settings expected for configuring the Tanzu Management Cluster.
Table 12.4 Tanzu Management Cluster Settings
IaaS Provider Settings
- IaaS Provider: Validate the AWS provider credentials for Tanzu Community Edition
- AWS CREDENTIAL PROFILE: Default
- REGION: us-east-2
VPC for AWS Settings
- VPC for AWS: Specify VPC settings for AWS
- VPC CIDR: 10.0.0.0/16
Management Cluster Settings
- Management Cluster Settings: Development cluster selected: 1 node control plane
- DEV INSTANCE TYPE: t3a.large
- MANAGEMENT CLUSTER NAME: tkg-cnamgmt-cluster-aws
- EC2 KEY PAIR: tanzu-key-pair
- ENABLE MACHINE HEALTH CHECKS: Yes
- ENABLE BASTION HOST: No
- ENABLE AUDIT LOGGING: No
- AUTOMATE CREATION OF AWS CLOUDFORMATION STACK: Yes
- AVAILABILITY ZONE 1: us-east-2c
- WORKER NODE INSTANCE TYPE 1: t3a.large
- PROD INSTANCE TYPE: (not set)
Metadata Settings
- Metadata: Specify metadata for the management cluster
- LOCATION (OPTIONAL): (blank)
- DESCRIPTION (OPTIONAL): (blank)
- LABELS: (blank)
Kubernetes Network Settings
- Kubernetes Network: Cluster Pod CIDR: 100.96.0.0/11
- CNI PROVIDER: Antrea
- CLUSTER SERVICE CIDR: 100.64.0.0/13
- CLUSTER POD CIDR: 100.96.0.0/11
- ENABLE PROXY SETTINGS: No
Identity Management Settings
- Identity Management: Specify identity management
- ENABLE IDENTITY MANAGEMENT SETTINGS: No
OS Image Settings
- OS IMAGE: ubuntu-20.04-amd64 (ami-06159f2d2711f3434)
- Copy the generated CLI command to the clipboard (or click Deploy Management Cluster). Clicking the Deploy Management Cluster button connects to the EC2 instance and provisions the Tanzu Management Cluster. Alternatively, the generated configuration file is copied into the /home/ubuntu/.config/tanzu/tkg/clusterconfigs/ directory. In this example, the generated filename for the configuration file is kldlaarqyl.yaml. Run the copied command in the AWS shell to provision the Tanzu Management Cluster.
- After the copied command is executed, a config file, with “config_” prepended as the filename, is generated, and copied into the kube-tkg/tmp/ directory.
- Run the command below to check the status of the Tanzu Management Cluster. Replace <config_file> with the generated filename.
kubectl get \
po,deploy,cluster,kubeadmcontrolplane,machine,machinedeployment \
-A --kubeconfig /home/ubuntu/.kube-tkg/tmp/<config_file>
Run the Kuard Demo Application
Kuard (Kubernetes Up and Running Demo) is a demo application that provides information about Kubernetes environments that are running. The Kuard demo application is deployed in the Tanzu Management Cluster as a containerized application.
- Run the command below to set the default context to the Tanzu cluster that was just created.
kubectl config use-context tkg-cnamgmt-cluster-aws-admin@tkg-cnamgmt-cluster-aws
- Run the command below to pull the Kuard image and start a single instance of a Kuard Pod.
kubectl run --restart=Never --image=gcr.io/kuar-demo/kuard-amd64:blue kuard
- Configure Kuard to listen on port 8080 and forward to port 8080 in the Kuard pod. First, run the command below to list the Pod and make note of the Pod name. The Pod name should be kuard and should be up and running.
kubectl get pods
- Run the command below to use port forwarding and expose the Kuard default port, 8080.
kubectl port-forward kuard 8080:8080
- Launch the Kuard website in the browser using the URL http://localhost:8080.
Install and Run Octant
Octant is an open-source web interface for Kubernetes that is used to inspect Kubernetes clusters and applications deployed in them.
- Run the command below to install Octant using Homebrew.
brew install octant
- Launch Octant by typing the command octant in the AWS shell. A browser window automatically launches.
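Octant reads the same kubeconfig that kubectl uses, so it inspects whichever context is currently active and typically serves its web interface at http://127.0.0.1:7777. As a sketch, assuming Octant's --context flag, the Tanzu management cluster context can also be selected explicitly:

octant --context tkg-cnamgmt-cluster-aws-admin@tkg-cnamgmt-cluster-aws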
FaaS Deployment of a Sample Cloud-Native Application
The following example15 illustrates FaaS deployment of a cloud-native application that consists of two event-driven workloads. The first serves as a simulator that sends data to an event hub; this is the Azure FaaS producer function. The second connects to the event hub and triggers storing the events in a datastore; this is the Azure FaaS consumer function. Both FaaS functions are deployed as Azure Function Apps.
An Azure Function App is an event-driven serverless compute offering that runs code without the need to provision or manage infrastructure. The datastore used is an Azure Cosmos DB database. Dashboards are then used to monitor the performance of the Azure FaaS functions. The architecture of the FaaS deployment of the cloud-native application is shown in Figure 12.40.
FaaS Environment Setup
Setting specific environment variables makes it easier to run these commands. Run the commands below to set the environment variables. For this example, rg-atafunctions-westus2 was used for the resource group, ncafaaseventhub was used for the event hub namespace, ncafaaseventhub was used for the event hub name, ncafaasauth was used for the event hub authorization rule, ncafaasdbusr was used for the Cosmos database account username, ncafaasstor was used for the storage account name, ncafaasapp was used for the FaaS function name (for the first Azure function), and westus2 was used for the location.
RESOURCE_GROUP=<value>
EVENT_HUB_NAMESPACE=<value>
EVENT_HUB_NAME=<value>
EVENT_HUB_AUTHORIZATION_RULE=<value>
COSMOS_DB_ACCOUNT=<value>
STORAGE_ACCOUNT=<value>
FUNCTION_APP=<value>
LOCATION=<value>
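With the example names mentioned above, the assignments would look like the following (substitute your own values; several of these resources, such as the storage account, Cosmos DB account, and event hub namespace, require globally unique names):

RESOURCE_GROUP=rg-atafunctions-westus2
EVENT_HUB_NAMESPACE=ncafaaseventhub
EVENT_HUB_NAME=ncafaaseventhub
EVENT_HUB_AUTHORIZATION_RULE=ncafaasauth
COSMOS_DB_ACCOUNT=ncafaasdbusr
STORAGE_ACCOUNT=ncafaasstor
FUNCTION_APP=ncafaasapp
LOCATION=westus2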
Create a Datastore for the Event-Driven FaaS Cloud-Native Application
The producer function simulates and sends data to an Azure event hub. The consumer function listens for events of a specific namespace in the Azure event hub and processes and stores them in an Azure Cosmos database. The first step is to create an Azure Cosmos DB datastore. Once the datastore is created, the next step is to create and configure an Azure event hub.
- Run the commands below to create the Azure Cosmos DB datastore.
az cosmosdb create \
  --resource-group $RESOURCE_GROUP \
  --name $COSMOS_DB_ACCOUNT

az cosmosdb sql database create \
  --resource-group $RESOURCE_GROUP \
  --account-name $COSMOS_DB_ACCOUNT \
  --name TelemetryDb

az cosmosdb sql container create \
  --resource-group $RESOURCE_GROUP \
  --account-name $COSMOS_DB_ACCOUNT \
  --database-name TelemetryDb \
  --name TelemetryInfo \
  --partition-key-path '/temperatureStatus'
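As an optional check, the new Cosmos DB account can be verified by querying its document endpoint:

az cosmosdb show \
  --resource-group $RESOURCE_GROUP \
  --name $COSMOS_DB_ACCOUNT \
  --query documentEndpoint \
  --output tsv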
Create and Configure an Event Hub
- Run the commands below to create the event hub namespace, Azure event hub, and event hub authentication rule resources.
az eventhubs namespace create \
  --resource-group $RESOURCE_GROUP \
  --name $EVENT_HUB_NAMESPACE

az eventhubs eventhub create \
  --resource-group $RESOURCE_GROUP \
  --name $EVENT_HUB_NAME \
  --namespace-name $EVENT_HUB_NAMESPACE \
  --message-retention 1

az eventhubs eventhub authorization-rule create \
  --resource-group $RESOURCE_GROUP \
  --name $EVENT_HUB_AUTHORIZATION_RULE \
  --eventhub-name $EVENT_HUB_NAME \
  --namespace-name $EVENT_HUB_NAMESPACE \
  --rights Listen Send
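An optional sanity check is to confirm that the event hub entity exists and is active:

az eventhubs eventhub show \
  --resource-group $RESOURCE_GROUP \
  --namespace-name $EVENT_HUB_NAMESPACE \
  --name $EVENT_HUB_NAME \
  --query status \
  --output tsv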
- Run the commands below to create the storage account and Function App resources for the FaaS producer function.
az storage account create \
  --resource-group $RESOURCE_GROUP \
  --name $STORAGE_ACCOUNT"p" \
  --sku Standard_LRS

az functionapp create \
  --resource-group $RESOURCE_GROUP \
  --name $FUNCTION_APP"-p" \
  --storage-account $STORAGE_ACCOUNT"p" \
  --consumption-plan-location $LOCATION \
  --runtime java \
  --functions-version 4
Create, Build, and Deploy the FaaS Producer Function
A few resources need to be created for the FaaS producer function. First, a storage account is created. The FaaS producer function also needs to connect to the Azure event hub, so connection strings need to be generated for this purpose. Finally, the FaaS producer function is built as a Maven project and then deployed as an Azure Function App.
Set Up Storage for the Producer Function
- Run the commands below to obtain the connection strings for the storage account and the event hub.
AZURE_WEB_JOBS_STORAGE=$(az storage account show-connection-string \
  --resource-group $RESOURCE_GROUP \
  --name $STORAGE_ACCOUNT"p" \
  --query connectionString \
  --output tsv)

EVENT_HUB_CONNECTION_STRING=$(az eventhubs eventhub authorization-rule keys list \
  --resource-group $RESOURCE_GROUP \
  --name $EVENT_HUB_AUTHORIZATION_RULE \
  --eventhub-name $EVENT_HUB_NAME \
  --namespace-name $EVENT_HUB_NAMESPACE \
  --query primaryConnectionString \
  --output tsv)
- Run the commands below to display the connection strings that were created in the previous step. Make a note of these connection strings as they will be used later.
echo $AZURE_WEB_JOBS_STORAGE
echo $EVENT_HUB_CONNECTION_STRING
- The connection strings generated for the Azure Web Jobs Storage and the event hub in the previous step need to be added as application settings to the Azure Function App. Run the command below to apply these settings. The command output confirms that the settings were applied successfully.
az functionapp config appsettings set \
  --resource-group $RESOURCE_GROUP \
  --name $FUNCTION_APP"-p" \
  --settings \
    AzureWebJobsStorage=$AZURE_WEB_JOBS_STORAGE \
    EventHubConnectionString=$EVENT_HUB_CONNECTION_STRING
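As an optional check, the Function App's application settings can be listed to confirm the new values are present:

az functionapp config appsettings list \
  --resource-group $RESOURCE_GROUP \
  --name $FUNCTION_APP"-p" \
  --output table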
Build and Deploy the FaaS Producer Function
Now that the Azure resources, such as the event hub, Azure Function App, and Storage account, have been created and configured, the next step is to create an Azure FaaS function project for the FaaS producer function. Maven is used to build the project.
- Run the command below to create and build the function project. The telemetry-functions-producer/ directory is generated along with the files for the project. A Build Success message should appear in the console.
mvn archetype:generate --batch-mode \
  -DarchetypeGroupId=com.microsoft.azure \
  -DarchetypeArtifactId=azure-functions-archetype \
  -DappName=$FUNCTION_APP"-p" \
  -DresourceGroup=$RESOURCE_GROUP \
  -DappRegion=$LOCATION \
  -DappServicePlanName=$LOCATION"plan" \
  -DgroupId=com.learn \
  -DartifactId=telemetry-functions-producer
- Navigate into the telemetry-functions-producer/ directory and run the command below to pull the application settings from the Azure Function App into the local.settings.json file in the project's root directory.
func azure functionapp fetch-app-settings $FUNCTION_APP"-p"
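The fetched settings are written to local.settings.json; as an optional check, they can be reviewed locally with the Azure Functions Core Tools:

func settings list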
- Navigate to the telemetry-functions-producer/src/main/java/com/learn/ directory. Edit the Function.java file and replace all the code in it with the code that follows. The code declares a Function that establishes a connection to the Azure event hub.
package com.learn;

import com.microsoft.azure.functions.annotation.EventHubOutput;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.TimerTrigger;
import com.microsoft.azure.functions.ExecutionContext;

public class Function {

    @FunctionName("generateSensorData")
    @EventHubOutput(
        name = "event",
        eventHubName = "", // blank because the event hub name is part of the connection string
        connection = "EventHubConnectionString")
    public TelemetryItem generateSensorData(
        @TimerTrigger(
            name = "timerInfo",
            schedule = "*/10 * * * * *") // run every 10 seconds
            String timerInfo,
        final ExecutionContext context) {

        context.getLogger().info("Java Timer trigger function executed at: "
            + java.time.LocalDateTime.now());
        double temperature = Math.random() * 100;
        double pressure = Math.random() * 50;
        return new TelemetryItem(temperature, pressure);
    }
}
- Create a file named TelemetryItem.java and add the code below. The code declares simulated data items that are pushed to the Azure event hub.
package com.learn;

public class TelemetryItem {

    private String id;
    private double temperature;
    private double pressure;
    private boolean isNormalPressure;
    private status temperatureStatus;

    static enum status {
        COOL,
        WARM,
        HOT
    }

    public TelemetryItem(double temperature, double pressure) {
        this.temperature = temperature;
        this.pressure = pressure;
    }

    public String getId() {
        return id;
    }

    public double getTemperature() {
        return temperature;
    }

    public double getPressure() {
        return pressure;
    }

    @Override
    public String toString() {
        return "TelemetryItem={id=" + id
            + ",temperature=" + temperature
            + ",pressure=" + pressure + "}";
    }

    public boolean isNormalPressure() {
        return isNormalPressure;
    }

    public void setNormalPressure(boolean isNormal) {
        this.isNormalPressure = isNormal;
    }

    public status getTemperatureStatus() {
        return temperatureStatus;
    }

    public void setTemperatureStatus(status temperatureStatus) {
        this.temperatureStatus = temperatureStatus;
    }
}
- In the telemetry-functions-producer/ directory, run the command below to build the function. A Build Success message appears in the console.
mvn clean package
- Run the command below to test the function locally and confirm it runs properly. The console output indicates the function is running.
mvn azure-functions:run
- Run the command below to deploy the FaaS producer function as an Azure Function App. Once the deployment is complete, the HTTP Trigger URLs are provided.
mvn azure-functions:deploy
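As an optional check after deployment, the Function App's state can be queried; it should report Running:

az functionapp show \
  --resource-group $RESOURCE_GROUP \
  --name $FUNCTION_APP"-p" \
  --query state \
  --output tsv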
Create, Build, and Deploy the FaaS Consumer Function
Now that the FaaS producer function is built and deployed, the next step is to create an Azure FaaS function project for the FaaS consumer function. Like the FaaS producer function, the FaaS consumer function requires a few resources: a storage account, a connection to the Azure event hub, and a connection to the Azure Cosmos DB datastore. The FaaS consumer function is then built as a Maven project and deployed as an Azure Function App.
- Run the commands below to create a storage account and the FaaS consumer function.
az storage account create \
  --resource-group $RESOURCE_GROUP \
  --name $STORAGE_ACCOUNT"c" \
  --sku Standard_LRS

az functionapp create \
  --resource-group $RESOURCE_GROUP \
  --name $FUNCTION_APP"-c" \
  --storage-account $STORAGE_ACCOUNT"c" \
  --consumption-plan-location $LOCATION \
  --runtime java \
  --functions-version 4
- Use the commands below to obtain the connection strings for the storage account and the datastore. These values, as well as the datastore information, are required for the consumer function.
AZURE_WEB_JOBS_STORAGE=$(az storage account show-connection-string \
  --resource-group $RESOURCE_GROUP \
  --name $STORAGE_ACCOUNT"c" \
  --query connectionString \
  --output tsv)

COSMOS_DB_CONNECTION_STRING=$(az cosmosdb keys list \
  --resource-group $RESOURCE_GROUP \
  --name $COSMOS_DB_ACCOUNT \
  --type connection-strings \
  --query 'connectionStrings[0].connectionString' \
  --output tsv)
- Run the command below to obtain the event hub connection string.
EVENT_HUB_CONNECTION_STRING=$(az eventhubs eventhub authorization-rule keys list \
  --resource-group $RESOURCE_GROUP \
  --name $EVENT_HUB_AUTHORIZATION_RULE \
  --eventhub-name $EVENT_HUB_NAME \
  --namespace-name $EVENT_HUB_NAMESPACE \
  --query primaryConnectionString \
  --output tsv)
- Run the commands below to display the connection strings that were created in the previous steps. Make a note of these connection strings as they will be used later.
echo $AZURE_WEB_JOBS_STORAGE
echo $EVENT_HUB_CONNECTION_STRING
echo $COSMOS_DB_CONNECTION_STRING
- The connection strings generated for the Azure Web Jobs Storage, the event hub, and the Azure Cosmos DB datastore in the previous steps need to be added as application settings to the Azure Function App. Run the command below to apply these settings. The command output confirms that the settings were applied successfully.
az functionapp config appsettings set \
  --resource-group $RESOURCE_GROUP \
  --name $FUNCTION_APP"-c" \
  --settings \
    AzureWebJobsStorage=$AZURE_WEB_JOBS_STORAGE \
    EventHubConnectionString=$EVENT_HUB_CONNECTION_STRING \
    CosmosDBConnectionString=$COSMOS_DB_CONNECTION_STRING
Build and Deploy the FaaS Consumer Function
Now that the Azure resources, such as the event hub, Azure Function App, Storage account, and Azure Cosmos DB, have been created and configured, the next step is to create an Azure FaaS function project for the FaaS consumer function. Maven is used to build the project.
- Run the command below to create the function project for the FaaS consumer function. The telemetry-functions-consumer/ directory is generated along with the files for the project.
mvn archetype:generate --batch-mode \
  -DarchetypeGroupId=com.microsoft.azure \
  -DarchetypeArtifactId=azure-functions-archetype \
  -DappName=$FUNCTION_APP"-c" \
  -DresourceGroup=$RESOURCE_GROUP \
  -DappRegion=$LOCATION \
  -DappServicePlanName=$LOCATION"plan" \
  -DgroupId=com.learn \
  -DartifactId=telemetry-functions-consumer
- Navigate into the telemetry-functions-consumer/ directory and run the command below to update the local settings for local execution. These settings are added to the local.settings.json file.
func azure functionapp fetch-app-settings $FUNCTION_APP"-c"
- Navigate to the src/main/java/com/learn/ directory. Replace all the code in Function.java with the code shown below. The code declares a Function that establishes connections to the Azure event hub and Azure Cosmos DB.
package com.learn;

import com.learn.TelemetryItem.status;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.OutputBinding;
import com.microsoft.azure.functions.annotation.Cardinality;
import com.microsoft.azure.functions.annotation.CosmosDBOutput;
import com.microsoft.azure.functions.annotation.EventHubTrigger;

public class Function {

    @FunctionName("processSensorData")
    public void processSensorData(
        @EventHubTrigger(
            name = "msg",
            eventHubName = "", // blank because the event hub name is part of the connection string
            cardinality = Cardinality.ONE,
            connection = "EventHubConnectionString")
            TelemetryItem item,
        @CosmosDBOutput(
            name = "databaseOutput",
            databaseName = "TelemetryDb",
            collectionName = "TelemetryInfo",
            connectionStringSetting = "CosmosDBConnectionString")
            OutputBinding<TelemetryItem> document,
        final ExecutionContext context) {

        context.getLogger().info("Event hub message received: " + item.toString());

        if (item.getPressure() > 30) {
            item.setNormalPressure(false);
        } else {
            item.setNormalPressure(true);
        }

        if (item.getTemperature() < 40) {
            item.setTemperatureStatus(status.COOL);
        } else if (item.getTemperature() > 90) {
            item.setTemperatureStatus(status.HOT);
        } else {
            item.setTemperatureStatus(status.WARM);
        }

        document.setValue(item);
    }
}
- Create the file TelemetryItem.java and add the code shown below. The code declares the data items that are received from the Azure event hub and stored in the Azure Cosmos DB datastore.
package com.learn;

public class TelemetryItem {

    private String id;
    private double temperature;
    private double pressure;
    private boolean isNormalPressure;
    private status temperatureStatus;

    static enum status {
        COOL,
        WARM,
        HOT
    }

    public TelemetryItem(double temperature, double pressure) {
        this.temperature = temperature;
        this.pressure = pressure;
    }

    public String getId() {
        return id;
    }

    public double getTemperature() {
        return temperature;
    }

    public double getPressure() {
        return pressure;
    }

    @Override
    public String toString() {
        return "TelemetryItem={id=" + id
            + ",temperature=" + temperature
            + ",pressure=" + pressure + "}";
    }

    public boolean isNormalPressure() {
        return isNormalPressure;
    }

    public void setNormalPressure(boolean isNormal) {
        this.isNormalPressure = isNormal;
    }

    public status getTemperatureStatus() {
        return temperatureStatus;
    }

    public void setTemperatureStatus(status temperatureStatus) {
        this.temperatureStatus = temperatureStatus;
    }
}
- Navigate to the telemetry-functions-consumer/ directory and build the function with the command below. Once the build is complete, a Build Success message is displayed to the console to indicate the build was successful.
mvn clean package
- Run the command below to test if the function runs properly.
mvn azure-functions:run
- As mentioned earlier, the FaaS consumer function listens for events from the Azure event hub and processes and stores them in the Azure Cosmos DB datastore. The stored events can be checked by visiting the Azure Cosmos DB page in the Azure Portal. In the Azure Portal, navigate to the Azure Cosmos DB page, click Data Explorer in the left menu, expand TelemetryInfo, and click Items to view the data. Data continues to be sent to the Azure Cosmos DB datastore and can be viewed in real time.
- Run the command below to deploy the FaaS consumer function as an Azure Function App. Once the deployment completes, a Build Success message appears in the console.
mvn azure-functions:deploy
Test the Deployed FaaS Producer and Consumer Functions
Now that both the FaaS producer and consumer functions are successfully deployed as Azure Function Apps, data continues to be simulated, pushed to the event hub, and then stored in the datastore. The next step is to test them and evaluate their performance. The FaaS producer function continues to send telemetry data to the Azure event hub while the FaaS consumer function continues to process events from the Azure event hub and store them in the Azure Cosmos DB datastore. The activities and performance of these two functions can be viewed in Application Insights in the Azure Portal.
Evaluate the FaaS Producer Function
The first step is to inspect the application map for both FaaS functions. Application maps represent the logical structure of a distributed application. Individual components of the application are identified by their "roleName" property and are deployed independently. These components are represented on the map as circles, called nodes. HTTP calls between nodes are represented as arrows connecting the source node to the target node. Application maps help identify performance bottlenecks and failure hotspots across all components, and alerts can call attention to problem components.
The application map (Figure 12.41) displays the logical structure of the FaaS producer function and the Azure event hub. It shows the FaaS producer function as a node connected to the Azure event hub, also shown as a node. The application map for the FaaS producer function shows that there is a dependency between the FaaS producer function and the Azure event hub. It also shows that the FaaS producer function depends on neither the FaaS consumer function nor the Azure Cosmos DB datastore.
The application map shown in Figure 12.42 displays the logical structure of the FaaS consumer function and the Azure Cosmos DB datastore. It shows the FaaS consumer function as a node connected to the Azure Cosmos DB datastore, also shown as a node. The application map for the FaaS consumer function shows that there is a dependency between the FaaS consumer function and the Azure Cosmos DB datastore. It also shows that the FaaS consumer function depends on neither the FaaS producer function nor the Azure event hub.
Footnotes
- 13Sample based off tutorials: https://learn.microsoft.com/en-us/training/modules/cloud-native-build-basic-service/ and https://learn.microsoft.com/en-us/training/modules/cloud-native-apps-orchestrate-containers/
- 14See VMware Tanzu Kubernetes Grid documentation at https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.5/vmware-tanzu-kubernetes-grid-15/GUID-mgmt-clusters-config-aws.html.
- 15Sample based off tutorial https://learn.microsoft.com/en-us/training/modules/deploy-real-time-event-driven-app/