Introduction to Computer Science

14.1 Cyber Resources Management Frameworks

Learning Objectives

By the end of this section, you will be able to:

  • Describe cyber resources quality requirements
  • Analyze various frameworks developed over time to manage cyber resources
  • Identify specific cyber resources qualities such as cybersecurity
  • Explain the growing challenges that are faced by innovative web solutions
  • Analyze cloud cyber resource implementation strategies
  • Describe the metaverse ecosystem and its cyber resource challenges

The “cyber” qualifier typically applies to anything related to computers or information technology. Cyber resources include cyber platforms, solutions, processes, policies, and the procedures or guidelines required to operate them. Cyber resources store, process, and manage data and other assets electronically and make them available via networks. In a world heavily inundated with computers, cyber resources must be evaluated to ensure they conform to best practices and meet certain quality requirements. The qualities expected from cyber resources include security, safety, performance, usability, reliability, and autonomy. These qualities apply to the various architectural components of cyber resources, including business, application, data (and the pyramid of knowledge layers above data, including information and knowledge), and infrastructure components. Cyber resources’ qualities are typically paired with one another and enhanced to optimize user experience. For example, technology solutions that leverage front-end interaction must be usable and ensure safety in various forms (e.g., data protection and user safety).

While cybersecurity is meant to ensure the quality of an organization’s cyber resources, it also calls for a specific cyber resource that encompasses the policies, procedures, technology, and other tools, including people, that organizations rely on to protect their computer systems and technological environments from digital threats. Regarding cybersecurity, organizations typically implement an information security policy (ISP) that outlines security practices for employees and systems. Since organizations’ assets are not always information-centric, the ISP applies to all organizational architectural components, including the business, applications, data (and associated pyramid of knowledge layers), and infrastructure. For example, the ISP details how the various infrastructure components are evaluated and outlines how the organization’s information is secured. An ISP typically includes safety procedures and best performance practices, but it does not walk workers through their jobs by telling employees step by step how to maintain safety and perform within their roles.

To help manage the development of cyber resource architectures that abide by specific requirements, organizations typically choose a specific framework, or a Technical Reference Model (TRM), that details the technologies and standards that should be used to develop systems and deliver services. Analyzing frameworks, which can be complete or incomplete, helps determine which TRM applies to an organization’s needs. However, most TRMs and related frameworks only briefly outline procedures or system structures and capabilities. It then becomes the responsibility of the workers to determine how best to adapt the framework to their company’s needs. In adopting these TRMs and frameworks, computer scientists need to take a proactive approach to enhance the overall quality of cyber resources, including cybersecurity.

In 2012, a reference model and associated framework referred to as The Open Group Architecture Framework (TOGAF) TRM was created to help users develop architectures based on their specific requirements. It champions a layered and componentized architecture that can be leveraged to develop specialized domain and industry architectures for infrastructure and business applications. It also provides a set of qualities that these specialized architectures and the applications that use them must fulfill.

This framework was created and designed to be customized for each organization, resulting in a more secure design. The TRM analyzes business information systems, including application software, platforms, and communication infrastructures. The TRM assumes that all software and applications are running on the Internet. The original intention of the model was to create a taxonomy of vocabulary and standards that could be used to evaluate business and infrastructure applications. The different qualities within the TRM were meant to break an organization’s infrastructure into manageable segments. These segments could then be evaluated for cyber resource quality and implementation needs. This model aimed to establish easy interoperability, allowing two or more computers or processes to work together for infrastructure applications, while providing the versatility to analyze business applications. This TRM is still broadly applicable to modern systems, but newer TRMs have been created to handle cloud and other infrastructure areas.

While optimizing the quality of cloud-based solutions, smart ecosystems, and supersociety solutions is proving to be quite difficult, these solutions are also opening the door to current and future challenges in computing including sustainability, ethics, and professionalism. This has led to the creation of the Responsible.Computing() movement as an IBM Academy of Technology (AoT) initiative in early 2020. Subsequently, in May 2022, the Object Management Group (OMG) created the Responsible.Computing() consortium with IBM and Dell as founding members. This led to the creation of the Responsible Computing Framework (Figure 14.2 and Figure 14.3).

Illustration of Responsible Computing Framework includes organization, theme, and sponsor perspectives. It also includes circularity, privacy, climate, ethics, diversity, openness, inclusion, sustainability.
Figure 14.2 The various perspectives of the responsible computing approach include organization, theme, and sponsor. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)
Illustration of the responsible computing domains: responsible data center, responsible infrastructure, and responsible code are directed more toward technology aspects, while responsible data usage, responsible systems, and responsible impact are directed more toward social aspects.
Figure 14.3 The Responsible Computing Framework helps organizations regulate the use of cyber resources to protect our current and future generations and the planet. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Responsible computing is a comprehensive approach tackling present and future computing challenges. It emphasizes both systematic design and soft skills crucial in the industry. Developers are urged to anticipate potential harm from their work, emphasizing secure coding and adversarial systems development.

These frameworks and styles target various enterprise business platforms. Traditional architectural styles like the Object Management Architecture (OMA) and service-oriented architecture (SOA) are generic models that can be applied to the TOGAF. The OMA focuses on creating and assembling components, including interfaces, that can be used for information systems, while SOA relates to the assembly of services that can be applied to the TOGAF TRM. Understanding the difference between OMA and SOA is crucial, especially for cyber resource quality. Newer platforms require updated models and the adoption of older models with modifications to fit system needs. This realization becomes more critical as cyber ecosystems continuously change.

These architectural styles offer a vision but do not by themselves address the core qualities of cyber resources. The OMA reference model (OMA-RM) (Figure 14.4) was developed in the 1990s to detail communications with object request brokers and different services. It visualized interfaces and services before there was a need to control and protect web interfaces.

Illustration of OMA reference model: object services; object request broker; application objects, domain objects, common facilities.
Figure 14.4 This figure shows the OMA reference model developed in the 1990s to explain communications and interactions with object request brokers and different services. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Different architectural styles cater to various project management styles, such as the waterfall model and the Agile development style (refer to Chapter 9 Software Engineering). Each framework has its strengths and weaknesses, influencing how cyber resource qualities are implemented and evaluated, which significantly affects the information security policy. IT specialists must consider all cyber qualities, not just security. Best practices for quality and performance determine data storage methods, which can be automated to enhance cyber resource quality.

Cyber Resources Qualities

Cyber resources address various cyber platforms and their functional and non-functional requirements, including software and hardware solutions for information systems. Well-designed platforms balance the requirements while analyzing the effect on the environment in which the systems will function. In systems engineering, quality attributes are non-functional requirements, also called architecture characteristics, used to evaluate the performance of a system. As development practices evolve, different terms and nomenclature are created. For example, “ilities” refer to the “abilities” of architectural properties. They can include system architecture trade-offs, which may not be visible to the user, but that ensure the architecture will perform as needed.

Developers and enterprise architects are responsible for incorporating “ilities” into architecture. When developing information systems and quality management plans, the developer must analyze and create functional and non-functional requirements that are only measurable once a solution platform is built (refer to Chapter 9 Software Engineering and Chapter 10 Enterprise and Solution Architectures Management). This guides the creation of large-scale horizontal platforms such as the web/mobile and cloud platforms, ensuring acceptable quality and security.

Defining Cyber Resources Qualities

Cyber resources qualities are developed and measured within software models. The ISO/IEC 25010 standard, part of the suite described in ISO/IEC 25002:2024,1 defines different software quality models and “ilities.” This standard aims to provide guidelines for creating high-quality software and systems. A suite of ISO standards focuses on the quality of software and systems development, helping to identify quality characteristics that guide the development of functional and non-functional requirements.

Additionally, frameworks like the OMA-RM, the OMA Guide (OMG), and the TOGAF TRM guide these requirements by creating a taxonomy of service qualities. This expands the implementation of the different “ilities” a system can sustain. These quality specifications fall into the categories shown in Table 14.1.

interoperability: The ability of two or more computer devices or processes to work together
composability: The ability to incorporate services within applications
scalability: The ability to enhance or retract system requirements for the number of users involved in the system
evolvability: The ability to adapt the system to new standards and practices
extensibility: The ability to modify the system to include new requirements or remove old requirements that are no longer needed
tailorability: The ability to customize the system for the needs of the users or industry
security: The ability to ensure safe use of the system
reliability: The ability of the system to perform as needed (i.e., without failure) and to specification
adaptability: The ability to change or modify the current system to meet the needs of a different industry requirement
survivability: The ability to survive an attack or disruption of service within a system
affordability: The ability to create a system that is cost-efficient, not only monetarily but also with resource usage
maintainability: The ability of the system to be kept up to date and in working order
understandability: The ability of the system to be used (also referred to as the learning curve)
performance: The ability to work within measurable standards and requirements
quality of service: The ability to provide a solution to a problem or task in a way that most of the necessary user requirements are met
real-time: The ability to operate the system within normal operational time
nomadicity: The ability to work in a self-contained environment or the ability to move the system from location to location, when system location is a requirement
Table 14.1 Categories of “Ilities”
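
As a rough illustration of how these “ilities” can be turned into testable targets rather than vague aspirations, the hedged Python sketch below records a few non-functional requirements and flags those not yet met. The quality names, metrics, and thresholds are all invented for illustration.

    # A minimal sketch (hypothetical names and thresholds) of recording "ilities"
    # as measurable, testable targets.
    from dataclasses import dataclass

    @dataclass
    class QualityTarget:
        ility: str            # e.g., "scalability"
        metric: str           # how the quality is measured
        target: float         # required value
        measured: float       # latest observed value
        higher_is_better: bool = True

        def is_met(self) -> bool:
            if self.higher_is_better:
                return self.measured >= self.target
            return self.measured <= self.target

    targets = [
        QualityTarget("scalability", "concurrent users supported", 10_000, 12_500),
        QualityTarget("reliability", "uptime percent over 30 days", 99.9, 99.7),
        QualityTarget("performance", "p95 response time (ms)", 250, 180, higher_is_better=False),
    ]

    for t in targets:
        status = "MET" if t.is_met() else "NOT MET"
        print(f"{t.ility:12s} {t.metric:30s} target={t.target} measured={t.measured} -> {status}")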

Expressing system requirements as “ilities” places the responsibility for system security on the developer or architect. This may seem daunting, but any system can be broken down into functional and non-functional requirements. A TRM supports and sustains the “ilities” while ensuring that the information security policy can be enforced. To do this, the TRM focuses on four service qualities: availability, assurance, usability, and adaptability (Table 14.2).

Availability: Locatability, manageability, performance, reliability, recoverability, serviceability
Assurance: Credibility, integrity, security
Usability: Ease-of-operation, international operations
Adaptability: Extensibility, interoperability, portability, scalability
Table 14.2 Service Qualities of the TRM

These quality areas and the categories of “ilities” provide an avenue and direction for cyber resources.

Measuring Cyber Resources Qualities

TOGAF recommends combining quantitative and qualitative methodologies to measure a system and provides several quality assessment techniques. The systems architect should create assessment plans to ensure all functional and non-functional requirements are met and tested, aligning with the framework. This methodology checks whether the requirements are met and evaluates how well they meet their objectives. Other models, like the CIS Critical Controls,2 can be used to test compliance with policies and requirements.

An important approach to testing resource qualities is to log the tests performed and continually analyze system changes. This aids in assessing maintainability, survivability, and performance. The TOGAF Architecture Development Method (ADM)3 centers the requirements within the development process (Figure 14.5).
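
One lightweight way to keep the log of quality tests described above is an append-only record of each assessment; the sketch below is a minimal, hypothetical example using only the Python standard library, and the file name, component names, and result values are invented.

    # Minimal sketch: append each quality assessment to a JSON-lines log so that
    # maintainability, survivability, and performance trends can be reviewed over time.
    import json
    from datetime import datetime, timezone

    LOG_FILE = "quality_assessments.jsonl"  # hypothetical file name

    def log_assessment(component: str, ility: str, result: str, notes: str = "") -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "component": component,
            "ility": ility,
            "result": result,   # e.g., "pass", "fail", "degraded"
            "notes": notes,
        }
        with open(LOG_FILE, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    log_assessment("payment-api", "performance", "pass", "p95 latency 180 ms after patch 2.4.1")
    log_assessment("payment-api", "survivability", "degraded", "failover took 94 s, target is 60 s")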

Illustration of TOGAF ADM preliminary framework and principles. Arranged in a circle around requirements: architecture vision, business architecture, information systems architecture, technology architecture, opportunities and solutions, migration planning, implementation governance, architecture change management.
Figure 14.5 The TOGAF ADM is a standard to assist enterprises in identifying the steps of different activities and the inputs required to develop an architecture. (credit: modification of “TOGAF ADM” by Stephen Marley/NASA/SCI, Public Domain)

To assess the quality of a platform/solution architecture, TOGAF recommends using a combination of quantitative and qualitative methods. Quantitative methods measure performance against quality attribute scenarios, while qualitative methods review the architecture against best practices, principles, standards, and guidelines. TOGAF provides several tools and techniques for quality assessment, including the following:

  • architecture compliance review to check alignment with enterprise architecture standards and policies
  • architecture maturity assessment to measure maturity level based on a predefined model or framework
  • architecture capability assessment to measure an organization’s capability to deliver and manage an architecture based on a predefined model or framework
  • architecture trade-off analysis to compare and balance trade-offs between different quality attributes and design alternatives
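
Architecture trade-off analysis is often supported by a simple weighted scoring of design alternatives against the quality attributes that matter most. The sketch below shows the basic arithmetic; the weights, alternatives, and scores are invented for illustration and are not prescribed by TOGAF.

    # Minimal sketch of a weighted trade-off comparison between design alternatives.
    # Weights and scores (1-5) are illustrative only.
    weights = {"security": 0.4, "performance": 0.3, "usability": 0.2, "cost": 0.1}

    alternatives = {
        "managed cloud service": {"security": 4, "performance": 4, "usability": 5, "cost": 3},
        "self-hosted cluster":   {"security": 3, "performance": 5, "usability": 3, "cost": 4},
    }

    for name, scores in alternatives.items():
        total = sum(weights[q] * scores[q] for q in weights)
        print(f"{name:22s} weighted score = {total:.2f}")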

To improve the quality of a platform/solution architecture, TOGAF recommends a continuous improvement cycle: plan, do, check, and act. In the plan step, define and agree on quality objectives, criteria, and metrics. In the do step, execute and document the quality activities, tasks, and responsibilities. The quality outcomes, results, and feedback are collected and analyzed in the check step. The quality issues, gaps, and opportunities are addressed and resolved in the act step.

To ensure the acceptance and adoption of a platform/solution architecture, TOGAF emphasizes quality communication. TOGAF provides various tools and techniques, such as architecture views, viewpoints, models, and documentation. Architecture views represent a subset of architecture, which focuses on a specific quality attribute or concern. Architecture viewpoints specify the conventions and rules for creating and using an architecture view, while architecture models are formal or informal descriptions of an aspect of the architecture. Lastly, architecture documentation collects artifacts that capture and convey information about architecture and decisions.

Cyber resource measurement is a layered process. Remember that according to IBM’s responsible development methodologies, quality assurance should be done during all development steps, not just at the end. Following these steps results in an incremental system with proper checks and controls for security purposes.

Web and Mobile Solutions Era

Web, mobile, and IoT solutions are used at a global scale today. We have built development models, created a TRM, and educated computer scientists to prioritize cyber resource qualities. We have set the stage for web and mobile platforms by emphasizing these qualities in development. The TRM’s focus on usability, ease of operation, assurance, security, and adaptability has built a framework for responsible content development. Utilizing web-centric server-side coding, we have created more secure platforms that efficiently communicate through APIs.

Scalability, extensibility, and flexibility of web platforms pose significant challenges for cyber quality. In the past, quality focused on ensuring that cyber solutions functioned as intended, but now it is also focused on security. Securing a web platform involves more than just the web front end; it also requires securing the server architecture, the physical locations of the servers, the cloud solutions, and the interfaces. Many web platforms use third-party software plug-ins that rely on the proprietary software’s security and quality. Poorly written code is easily exploitable, especially because the tools to exploit it are freely accessible on the World Wide Web (WWW), the Internet information system that connects public websites and users.
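
Hardening the server side often starts with small, concrete steps such as sending standard security headers with every response. The sketch below uses the third-party Flask framework purely as an example, and the header values shown are common defaults rather than a complete policy.

    # Illustrative sketch: adding common security headers to every response in a
    # small Flask application. Header values are typical defaults, not a full policy.
    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_security_headers(response):
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["X-Frame-Options"] = "DENY"
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        response.headers["Content-Security-Policy"] = "default-src 'self'"
        return response

    @app.route("/")
    def index():
        return "Hello, secure-ish world"

    if __name__ == "__main__":
        app.run()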

Web platforms have continuously changed in direction and architecture design. The architectural principles of the Internet were stated in Request for Comments (RFC) 1958, set forth by the Internet Engineering Task Force (IETF) in June 1996.4 Designing the Internet so that it could change was itself a guiding principle. Anticipating that IP addresses would need to adapt or be replaced at some point, because the number of devices would exceed the allotted address space, forced the Internet to be scalable and extensible. Quality of service and assurance/security concerns drive the commercial Internet. Initially, the Internet’s architecture was unregulated, and there was no mandate for IP address allocation or domain name services (DNS) from the Internet Architecture Board (IAB). The public Internet is now ubiquitous in most economic areas of the world. The increase in mobile platforms and the continuous need to be connected push the boundaries of cyber quality beyond the original architectural goals of the Internet. Performance and usability demands drive technology and the Internet to adapt and create new mediums that are easily integrated into scalable infrastructure.

Open Web Platform Quality Challenges

The Open Web Platform (OWP) initiative was designed to offer royalty-free (open) technologies for web platforms.5 Its goal was to provide a foundation of services and capabilities for developing web/mobile applications, as illustrated in Figure 14.6. Using the OWP, anyone can implement a web or mobile solution without requiring approvals or license fees, while maximizing usability and accessibility.

Illustration of OWP application foundations: security and privacy; web design & development; device interaction; application lifecycle; media and real-time communications; performance and tuning; usability and accessibility; services.
Figure 14.6 The foundational characteristics of OWP solutions allow anyone to implement a web or mobile solution without licensing. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

The cyber qualities of these technologies are continuously evaluated. People want free web applications that are easy to use, have good performance, are secure (always an afterthought), are compatible with their current technology, and, most of all, remain free and open to the public. Security plays a significant role in adoption, and ease of use is often a struggle. The most straightforward security measure, restricting access to the system, is not feasible if companies want to interact with the world. Several companies, like Meta, use a limited functional interface called a walled garden approach, which limits openness by restricting users to the content and services within the platform. This approach goes against the spirit of OWP. Open web platforms like WordPress and Drupal provide user-friendly web design through drag-and-drop web page creation frameworks. However, updates to these technologies can break other areas or disrupt compatibility with other open web software packages. To mitigate this, people and companies are adopting the walled garden approach to web applications, limiting controls and plug-ins so that developers can incorporate their own HTML security measures.

The walled garden approach goes against the principles of the OWP. The concept of “openness” calls for technology to be free and interoperable. Giving individuals a subset of technologies and forcing them to use only those technologies diminishes the power and extensibility of the Internet.

Modern Web/Mobile Solutions Quality Challenges

The World Wide Web (WWW) demands 24/7 access as users expect constant availability. Companies like Amazon have set the bar for quality of service and for how fast a product purchased on the Web can be delivered to a residence. Smartphones and the IoT have provided new access models to services and goods. What challenges does this pose?

  • Account management: Users have numerous online accounts, each requiring a unique, strong password (a small password-generation sketch follows this list). Research by Colorlib in May 2023 found that the average person has 100 accounts needing passwords, up from 70–80 in 2022.6 This growth highlights the challenge of managing diverse and increasing technology.
  • Workforce shortage: There is a shortage of qualified professionals. Higher education and companies like Microsoft, CompTIA, and CISCO provide certifications to prepare workers, but students without industry experience struggle to find suitable jobs, and entry-level positions are limited.
  • Threat landscape: The rapid growth of web platforms and technology increases security challenges. The use of cloud technology and IoT devices introduces new vulnerabilities. Companies like Microsoft invest heavily in training programs and certifications, particularly for securing Azure web platforms.
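
Because managing on the order of 100 unique accounts by memory is unrealistic, password managers generate and store strong random credentials. The sketch below shows the kind of generation step involved, using Python’s standard secrets module; the length and character set are arbitrary illustrative choices.

    # Minimal sketch: generating a strong random password with the standard library.
    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())   # a different strong password on every call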

Modern Walled Garden Platforms Quality Challenges

The main challenge for walled garden platforms is interoperability. For instance, WordPress offers more than 59,000 plug-ins. While these plug-ins allow for customization, only cosmetic changes are typically possible, which allows security to be maintained in a walled garden approach. Plug-ins are tested for security issues before being added to a library. However, using plug-ins in ways they were not intended can create vulnerabilities, and using plug-ins in untested ways goes against IBM’s responsible coding practices. Questions must then be asked: Who is responsible for the misuse? Who is responsible for updating the plug-ins for newfound security issues? And lastly, who is responsible for removing outdated plug-ins whose functionality no longer works with the current technology?

Using frameworks to assess the proper development of websites and content should be a requirement for web programmers. Insecure content puts users at risk. Security frameworks, plans, and walled garden approaches can enhance web platform security. However, this requires skilled developers and available resources.

Cloud-Centric Solutions Era

Cloud platforms rely on applications built for web and mobile platforms, similar to the layering in the Open Systems Interconnection (OSI) networking model. In this analogy, cloud-centric solutions are layer 3; web programming, applications, and the WWW are layer 2; and the Internet is layer 1. Each layer has its own security challenges. Cloud-centric solutions involve many different systems, software, and communication methods. Applying the TRM to this model highlights availability and assurance as major concerns, with adaptability (interoperability, scalability, portability, and extensibility) also being crucial. Usability is the biggest challenge: if users cannot use the product, then implementation will fail. Hardware issues present additional challenges beyond user interactions in the cloud-centric model. The OMG Cloud Working Group (CWG) addresses myths associated with cloud-centric solutions, focusing on scope and applicability, costs, security, availability, and management. The CWG advises maintaining a calm approach to solving cloud issues and warns against “overselling” cloud solutions because of the difficulties in meeting implementation needs.

Hardware Virtualization Quality Challenges

Many cloud-centric implementations use hardware or software virtualization. Hardware virtualization faces compatibility issues and constant updates, often resulting in performance loss when a virtualized approach is implemented for cost savings. Vendors like Amazon, Google, and Microsoft offer hardware virtualization options that typically improve efficiency, flexibility, scalability, and security. However, a big concern with virtualized cloud solutions is that making resources available adds extra processes. Users needing to access their data and information must navigate complex interfaces that impose a walled approach to web design. This limits functionality, and because many organizations decide to host their cloud solutions themselves, it leads to new challenges with regard to cost, assurance, data privacy, and security.

When using virtualization solutions with the TRM, availability and manageability are key concerns for users. Poor performance and outdated hardware can create liability for an organization. Redundant systems are necessary for maintenance and upgrades because there are times when a system is vulnerable and not available. This is where virtualized solutions can help. It is easy to spin up a third, temporary instance of a cloud-centric virtualized system and then perform the needed hardware upgrades, first on the backup system and then on the production system. While the backup system is down for hardware upgrades, the temporary virtualized backup covers any necessary security issues. Once the backup server is upgraded, an entire image is copied from the main server onto it. The backup server then becomes the primary server, with a virtualized backup solution in place, and the original main server can be upgraded to become the new backup server.
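
The upgrade sequence described above can be thought of as a scripted runbook. The sketch below is purely illustrative Python: every helper function (spin_up_instance, sync_image, promote, and so on) is a hypothetical stub standing in for whatever provisioning tooling an organization actually uses.

    # Hypothetical sketch of the rolling hardware-upgrade sequence described above.
    # The helper functions are stubs standing in for real provisioning tooling.
    def spin_up_instance(name):      print(f"spin up {name}"); return name
    def tear_down(instance):         print(f"tear down {instance}")
    def upgrade_hardware(server):    print(f"upgrade hardware on {server}")
    def sync_image(source, target):  print(f"copy image {source} -> {target}")
    def promote(server):             print(f"{server} is now primary")
    def demote(server):              print(f"{server} is now backup")

    def rolling_upgrade(primary, backup):
        temp = spin_up_instance("temporary-virtual-backup")   # third, temporary instance
        try:
            upgrade_hardware(backup)                  # backup goes down; temp covers it
            sync_image(source=primary, target=backup)
            promote(backup)                           # backup becomes the new primary
            upgrade_hardware(primary)                 # old primary upgraded in turn
            demote(primary)                           # old primary becomes the new backup
        finally:
            tear_down(temp)                           # remove the temporary instance

    rolling_upgrade("server-a", "server-b")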

During this hardware update, the implementation of the TRM becomes important: information must be kept available throughout the whole process. If faulty hardware is left in the systems, errors or crashes may occur. When known vulnerabilities are identified in hardware configurations, the hardware upgrade process must be carried out.

Virtualized solutions require constant monitoring. Virtual local area networks must always be monitored for load balancing; a virtual local area network (VLAN) is a tool to connect devices and nodes from various LANs. Storage calculations and estimations must be performed constantly to ensure upgrades keep ahead of any issues. Manageability and serviceability are constant challenges, involving both hardware and human resources. Training workers in new software is costly and time-consuming, resulting in minimal opportunities to learn on the clock and possible security lapses. Security procedures are often ignored for the sake of time. As a result, many companies seek cloud-centric systems to avoid these issues.

Container Management Platforms Quality Challenges

A container is a lightweight package that bundles together applications to form a solution to specific problems. When applying the TRM to containers, usability and ease of operation are major challenges for web/mobile and cloud platforms. Containers simplify development, deployment, and management across complex environments, offering a quick way to load features without concerns about the underlying operating system. With a container, the developer is limited to the operating system for which the package was developed. While this seems like a limitation, it provides security and adaptability. Containers are built for specific application purposes and are often referred to as enterprise solution support containers. Container management platforms such as Kubernetes include support for load balancing, rollout of patches, configuration management, and storage orchestration. Alternatives include Docker Swarm, Apache Mesos, and cloud solutions such as Azure containers.

Effective container management benefits from the TRM in several ways. Usability, ease of operation, and the quality of web/mobile platforms are supported by several capabilities that help the user and developer manage and build the system.

  • Ease of setup: The management system provides a drag-and-drop system for scheduling, storage maintenance, and system monitoring.
  • Enhanced administration: It simplifies IT management, allowing administrators to focus on other tasks.
  • Automation: It automates any number of processes and performs automatic load balancing.
  • Continuous health checks: Reporting and logging are essential for container management and allow an admin to do their job efficiently.
  • Change management: It keeps a detailed change log of the packages. This allows for troubleshooting issues arising from upgrades of older packages.

Using the TRM to keep container management aligned with the “ilities” is a major concern. Every aspect of the TRM can be applied to container management platforms. It is recommended that when the TRM is used to evaluate the management systems, the developer consider incorporating an outside resource like the CIS Critical Security Controls to help determine the level of compliance needed for a given control. Overall, container management can greatly assist an organization looking to speed up its development time and improve interoperability between systems.
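
To make the management capabilities listed above concrete, the sketch below models a simplified container deployment as a plain Python dictionary with a replica count, a health check, and a change log. The field names are invented for illustration and are not a real Kubernetes or Docker API.

    # Illustrative, simplified model of a container deployment specification.
    # Field names are invented; real platforms (Kubernetes, Docker Swarm, etc.)
    # use their own schemas.
    deployment = {
        "name": "orders-service",
        "image": "registry.example.com/orders:1.4.2",   # hypothetical image reference
        "replicas": 3,                                   # scaled by the load balancer
        "health_check": {"path": "/healthz", "interval_seconds": 10},
        "change_log": [],                                # change management record
    }

    def apply_update(spec, new_image):
        spec["change_log"].append({"from": spec["image"], "to": new_image})
        spec["image"] = new_image
        print(f"rolling out {new_image} across {spec['replicas']} replicas")

    apply_update(deployment, "registry.example.com/orders:1.4.3")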

Cloud Big Data Analytics PaaS Quality Challenges

In our development pyramid, where cloud-centric platforms are at the top (layer 3), cloud and big data analytics form an overarching umbrella. The process of analyzing that data to find correlations and trends that can be used for decision-making is called big data analytics. Incorporating the Platform as a Service (PaaS) aspect for big data analytics provides a cloud-based development environment that can solve many development problems and provides methods to analyze and enhance system performance.

A big challenge with data analytics from a TRM perspective is time. It takes time to properly analyze the data collected in order to make informed business decisions. Big data only helps information management when data is converted into information in a timely manner. While cloud providers do their best to offer 24/7 customer service, issues arise when third parties and containers are introduced. The data may not be immediately available from the third-party vendors, or they may not want to share their data. Having clear agreements with third parties about what data will be shared and what data will not be available is important.

A second challenge with big data solutions is collecting the wrong data. For instance, firewall log files might record time stamps of external IP address access attempts, but if the firewall doesn’t account for IPv4 to IPv6 conversions, some IP address data could be lost. If the firewall only handles IPv4 and the attacker uses IPv6, the log files may be incorrectly formatted, resulting in incorrect or incomplete data analysis.
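
A defensive way to avoid losing the IPv6 entries described above is to parse every logged address with a library that understands both formats. The sketch below uses Python’s standard ipaddress module; the log lines are invented examples.

    # Minimal sketch: normalizing firewall log entries that may contain either
    # IPv4 or IPv6 addresses, instead of assuming IPv4 and silently dropping rows.
    import ipaddress

    log_lines = [                                  # invented example entries
        "2024-05-01T10:15:02Z DENY 203.0.113.45",
        "2024-05-01T10:15:07Z DENY 2001:db8::1f",
        "2024-05-01T10:15:09Z DENY not-an-address",
    ]

    for line in log_lines:
        timestamp, action, raw_addr = line.split()
        try:
            addr = ipaddress.ip_address(raw_addr)  # accepts IPv4 and IPv6
            print(f"{timestamp} {action} v{addr.version} {addr}")
        except ValueError:
            print(f"{timestamp} {action} UNPARSEABLE {raw_addr!r}")  # keep, don't discard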

A third challenge for data analysis and PaaS is bandwidth. Proper data analysis requires large amounts of computing power and hardware, slowing network communications during data transmission. Storage is another issue, as analyzing multiple days of data requires additional storage locations, often leading to cloud storage and virtualized solutions. Companies like Google generate revenue by doing big data analysis and selling their findings to companies for marketing strategies. Managing over 1.8 billion Google accounts with a “free” 10GB cloud storage drive necessitates virtualization and an extensive security plan.

A service-level agreement (SLA) must be established when customers enter into agreements with PaaS platforms. The SLA outlines the services that the platform provides, and companies can use SLAs to estimate growth and need for their organization. However, SLAs are not covered in the TRM, and since they are not actual “ilities,” they may fail to include actionable items. SLAs tend to describe “what if” actions, for example, specifying what happens if there is a data breach, which makes the SLA reactive rather than proactive. The TRM aims to prevent issues before they arise. SLAs are criticized for lacking assurance and privacy protections, as data breaches can put user information at risk. This highlights the importance of trust, which, while not directly mentioned in the TRM, is addressed in many different “ilities” in implied ways.
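
Availability targets in an SLA translate directly into allowed downtime. The short calculation below shows the arithmetic for a few common “nines”; the percentages are generic examples, not taken from any particular provider’s agreement.

    # How much downtime per year a given availability percentage actually allows.
    HOURS_PER_YEAR = 365 * 24

    for availability in (99.0, 99.9, 99.99):
        downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
        print(f"{availability}% availability -> about {downtime_hours:.1f} hours of downtime per year")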

Trust in any service-providing system has a narrow margin for error. A trust relationship is established when consumers entrust their information to a company. To bolster this trust, companies should have information security policies detailing data security practices. However, exact methods should not be disclosed, to avoid giving hackers helpful information. The trust relationship aims to make the consumer feel secure and therefore more comfortable using the system. This trust can be a valuable metric for measuring quality based on customer satisfaction.

Cloud IoT PaaS Quality Challenges

Earlier, we compared technology to a one-mile race, with IoT representing the last ten steps. Every technological advancement has led to the IoT, as consumers want more power and faster access to information. They are dissatisfied when companies fail to provide information quickly and efficiently. Cloud IoT, as a PaaS, provides more challenges than this small section of the book can cover. To start, IoT’s largest challenge is scalability. The rapid development of wearable devices, smart devices, smart cars, appliances, and artificial intelligence suggests no end to this growth. Consequently, IoT scalability is increasingly challenging to manage.

If scalability is out of control, then security is typically out of control as well. Most TRM “ilities” encounter challenges with the IoT. Having a quality security plan for every possible third-party IoT device is not feasible, so analysis often falls back on the performance of the underlying Bluetooth and Wi-Fi technologies. Developing better and stronger ways to secure communications between devices would be an excellent first step. There will always be an issue with hackers capturing data transmitted wirelessly.
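
Securing communications between devices usually means encrypting payloads before they are transmitted over Bluetooth or Wi-Fi. The sketch below uses the third-party cryptography package’s Fernet recipe as one illustrative option; the device payload is invented, and key distribution, rotation, and device identity are deliberately left out.

    # Illustrative sketch: symmetric encryption of a sensor payload before transmission.
    # Real deployments also need key distribution, rotation, and device identity.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice, provisioned securely per device
    cipher = Fernet(key)

    payload = b'{"device": "thermostat-42", "temp_c": 21.5}'   # invented example
    token = cipher.encrypt(payload)    # what actually travels over the air
    print(cipher.decrypt(token))       # a receiver with the same key recovers the data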

The next challenge for IoT as a PaaS is storage. Where is all the captured data stored? Data from Apple devices is stored in iCloud, but other companies face security issues with stored data. Consider the data that is left behind on a device that is no longer in use. Best practices for disposing of computer hard drives and other storage devices don’t always apply to IoT devices and smartphones. Regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) were created to address concerns arising from IoT storage situations.

Interoperability is a major challenge with IoT devices, given the vast types of devices and communication methods. Securing biometric data is a constant concern, and hacking into home monitoring systems with IoT devices poses significant safety risks. Since most IoT devices rely on Wi-Fi, improving interoperability efforts and security in wireless communications is necessary. Cloud-centric services are another concern, as all IoT devices are part of cloud technology. Developing interfaces for IoT devices is also challenging due to the need to accommodate cultural and language differences.

Cloud Robotics PaaS Quality Challenges

Cloud robotics PaaS merges two challenging areas: cloud computing and robots. Robotics has faced development challenges, especially in natural language processing, to help robots perform actions and communicate with users. This is often seen as a futuristic world where robots handle daily operations, reducing the need for human labor. However, the high cost of robotics remains a barrier. AI continues to evolve, driving industries to use it to enhance production and services. We as humans must repurpose our skillsets to “train” the robots to do our work.

Cloud robotics is the next step in the evolution of standard robotics. Traditionally, robots perform pre-loaded algorithms, making decisions based on predetermined outcomes. However, with cloud robotics platforms, robots use big data analysis to make more informed decisions. Leveraging the power of cloud data and rapid processing speeds, they can approach “thinking” to analyze a problem or a task. This is not to be confused with automation. Traditional automation involves robots replacing human jobs or tasks. Automation is a programmed response with limited inputs and decisions. Cloud robotics PaaS, in contrast, allows for more extensive inputs and a broader range of decisions. This enhances the robots’ decision-making capabilities through advanced algorithms.

The TRM has limited applicability to robotics, with the key considerations being security and connectivity. Robotic cameras and other devices need to transmit their data for processing, making connectivity to cloud platforms essential. Reliable high-speed data transmission is critical to cloud robotics, raising concerns about network outages. Robots might cease functioning without incoming data, and latency or corrupted data packets can impair operation.

Cloud robotics shares many challenges with simple networking, but the difference is that the robot may play a more critical role than a typical computing platform. If robots are not getting the needed inputs, they may be unable to carry out their functional tasks. For example, suppose a robotic drone loses communication while searching for lost hikers over a mountain range. In that case, it might crash without the ability to transmit and receive data: the drone cannot receive any flying commands and could plummet to the ground. Safety protocols are essential, such as hovering in place until communication is reestablished or the battery depletes. While these connectivity issues are not new, the context has shifted. Previously, this scenario would have involved a helicopter or airplane doing the search, but it is more costly to have a plane fly to search for lost hikers than a drone. Connectivity remains a persistent challenge for any system relying on remote communications.
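
The hover-until-reconnected behavior described above is essentially a failsafe loop. The sketch below is a hypothetical, heavily simplified control loop in which every function (has_link, hover_in_place, controlled_land) is an invented stub standing in for real flight-controller code.

    # Hypothetical, simplified failsafe loop for a drone that loses its data link.
    # Every helper is an invented stub standing in for real flight-controller code.
    import time

    battery = 40  # pretend starting charge, percent

    def has_link(attempt):   return attempt >= 3          # pretend the link returns on the 3rd check
    def hover_in_place():    print("holding position")
    def controlled_land():   print("initiating controlled landing")

    def connection_failsafe(min_battery=20, check_interval=0.1):
        global battery
        attempt = 0
        while not has_link(attempt):
            if battery <= min_battery:
                controlled_land()           # land safely before power runs out
                return
            hover_in_place()                # wait for the link to come back
            battery -= 5                    # pretend power drain while hovering
            attempt += 1
            time.sleep(check_interval)
        print("link restored, resuming mission")

    connection_failsafe()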

Industry 4.0 Metaverse Smart Ecosystems Era

The fourth industrial revolution, also referred to as Industry 4.0, is characterized by increasing automation and the employment of smart machines and smart factories; insight obtained from data helps produce goods more efficiently and productively across the value chain. The term “metaverse,” which was coined in Neal Stephenson’s 1992 novel “Snow Crash,” has been used by Meta, the company that developed Facebook, to refer to a platform that applies augmented and virtual reality in a social media environment. However, the metaverse concept extends beyond social media and refers to the next generation of the Web that will provide an immersive augmented/virtual reality (AR/VR) interface and will leverage new technologies including AI/ML, IoT, blockchain, 3-D models, and others. The metaverse will go beyond Web 3.0 that already includes blockchain 2.0 capabilities on a decentralized Internet.

The metaverse is a hybrid cloud environment that aims to uphold the OWP principles. Software developed for the metaverse must ensure full interoperability and interactivity with all other platform software. These requirements place significant constraints on the technology's ease of use and performance.

Incorporating virtual and augmented reality into the metaverse typically involves using an AR/VR headset to access a library of software programs, each offering a highly realistic user experience. To be part of the metaverse, software must adhere to these rules7:

  1. There is only one metaverse, and all people should have access to it.
  2. The metaverse exists beyond everyone’s control and must be accessible at all times.
  3. The metaverse doesn’t care about hardware.

We must understand how metaverse solutions relate to the hybrid cloud environment and leverage the foundations set forth by OWP. New challenges for each one of the qualities arise as we go up in layers. For example, cloud security requires users to trust their cloud provider. The ecosystem resulting from the software developed to interact with the metaverse provides challenges for all cyber-quality services.

From a security perspective, users must know about privacy and data protection when using AR. Systems collect large amounts of data, and users need to understand what is being collected and where it is stored. For instance, self-driving vehicles use AR technology to navigate and store driving patterns and routes. If hacked, this data could reveal when a user is not home. Additionally, many self-driving cars store garage door information to automatically open doors when the vehicle is nearby, posing further security risks.

The ecosystem should provide a plug-and-play mentality for all metaverse-associated technology. Without this, ease of use, performance, and reliability may suffer. Increasingly, people are integrating technology into their lifestyles, especially since the COVID-19 pandemic, with more online shopping and remote working. Virtual meeting rooms and advanced technology are replacing traditional office interactions. A work from home environment that is fully immersed in technology could be a way to provide life-like meetings where 3-D images and holograms of people sit at conference desks instead of flat-screen monitors. The technology performance, automation, and ease of use are at an all-time high. For this environment to become commonplace, trust in technology must increase. Companies must prioritize incorporating all cyber qualities to ensure high performance, automation, and ease of use.

A large issue is creating content that is both interoperable and secure. The demand for VR worlds and platforms drives rapid production, often leading to unchecked vulnerabilities due to insufficient time and resources. This is particularly problematic in VR, where large multiuser dimensions are common. Data exchanged between entities has to happen seamlessly, but the whole system is at risk if one system is compromised. Since there is only one metaverse, a security vulnerability in any software on the platform puts the entire system at risk.

A new risk that has emerged is addiction. Users may prefer the AR version of their environments, leading to a sensory deprivation effect in the real world that drives them toward AR and VR. This is particularly problematic for adolescents. Since the onset of the pandemic, increased AR/VR use has made younger users more susceptible to addictive behavior with these systems.8

Smart Ecosystems 3-D Modeling Platform Quality Challenges

Ecosystems that incorporate 3-D modeling face many different challenges. Applying the TOGAF reference model to the problem, the qualities of concern are as follows:

  • Graphics and images: Cloud computing has provided a storage medium for graphics and images, but image quality, which depends on resolution, is a challenge. For AR/VR platforms, the images need to be of high resolution, which leads to more storage and lower transmission rates (a rough transfer-time calculation follows this list).
  • Data management: The pure amount of data to create 3-D modeling is immense. The data collected for AR/VR to be successful requires more storage and memory within machines. The fastest Internet connections are needed for data transmissions, and archival procedures must be implemented for the data system’s lifetime.
  • Data interchange: As with data management, data interchange struggles with the transmission speeds of large 3-D models and with the physical hardware found within the different AR/VR platforms. Since no particular platform is required, the data interchange format must be universal.
  • User interface: This is paired with graphics and images, but there is an aspect that is a different challenge. The ease of use is a challenge with 3-D modeling. The user interface must be seamless for the AR/VR to work properly. The user must be able to interact with objects appropriately rendered, and the user interface must be life-like.
  • International operations: This area is a serious challenge with 3-D modeling, and it comes in the form of symbolism and color coding. In different countries and different cultures, symbols have different meanings. With 3-D modeling, where every country in the world is interacting with the metaverse, all software must be interoperable. Each country involved in the metaverse will have different data storage and security standards.
  • Location and directory: Working alongside data management, location data and directory information are always challenging. If two software development companies collaborate and create 3-D models that hold user data or geolocation records, then storing that information is challenging for the system.
  • Transaction processing: 3-D modeling requires a large amount of object interaction. When objects are developed for 3-D modeling, they are typically put into a setting or a landscape. Most objects interact with other objects, which is done through transaction processing.
  • Security: The risks and challenges associated with security are some of the most difficult to identify and detect. Many companies are creating 3-D models for other entities to use. These models are not always verified, and penetration testing is not always performed on each 3-D model. If multiple 3-D models are incorporated into different companies’ systems, then the security concerns escalate. Security seems to be an afterthought based on performance and system requirements.
  • Software engineering: This area of the TOGAF deals with skills in different 3-D modeling software. Many HTML or scripting languages cannot do proper 3-D rendering; therefore, the software engineer needs special skills to develop the models correctly and with proper aesthetics for the platforms and worlds they will be used within.
  • System and network management: The challenges are related to communications and interoperability. Since there is no requirement on what hardware or technology can be used to interface with the metaverse, network management is crucial. The communication needed to transfer 3-D models between AR/VR nodes is important, and the system resources will be challenged to perform these transfers properly. The metaverse is not housed on one computer and the bandwidth needed to transfer the large amounts of data needed for 3-D modeling will always be challenging.
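
To give a feel for the storage and transfer pressures noted in the graphics and data management items above, the rough arithmetic below estimates transfer time for a single high-resolution 3-D asset. The asset size and link speeds are invented round numbers, not measurements.

    # Back-of-the-envelope arithmetic for moving one high-resolution 3-D asset.
    # Asset size and link speeds are invented, round illustrative numbers.
    asset_size_gb = 2.5                          # textures + geometry for one scene
    link_speeds_mbps = {"home broadband": 100, "5G (good signal)": 300, "fiber": 1000}

    asset_size_megabits = asset_size_gb * 1000 * 8
    for link, mbps in link_speeds_mbps.items():
        seconds = asset_size_megabits / mbps
        print(f"{link:18s}: about {seconds:.0f} s to transfer a {asset_size_gb} GB asset")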

The TOGAF and the TRM provide a good framework for analyzing most system challenges. Analyzing 3-D modeling is similar to analyzing any resources needed for the metaverse. For instance, besides 3-D modeling, there is a large language barrier in the metaverse, and each of the qualities of the TOGAF can also be used to analyze such barriers.

Smart Ecosystems AR/VR Platform Quality Challenges

Any ecosystem that houses AR/VR technologies will always struggle with ease-of-use issues. The TRM stresses that accessibility to any platform must be made easy. New technology introduced through the IoT presents challenges for AR/VR platforms. The headsets needed for most AR/VR technology cannot be worn while walking around; they are usually meant to be worn safely away from obstacles the individual might encounter or trip over. The virtual platform may be easy to manipulate in the device, but the hardware is bulky and often clumsy. Wearing AR/VR gloves to interact with computer systems is a nice feature when the individual can use them fully.

Using AR/VR technologies to assist individuals who have disabilities is a major reason to continue developing the technology and pushing its boundaries.

Smart Ecosystems IoT Platform Quality Challenges

IoT platform challenges are equal to those of cloud technologies. The main challenges are bandwidth, storage, and infrastructure. For all communications, there is a need for bandwidth greater than what 5G technology provides. The use of fiber optic communication greatly increases the rate of data transfer, but many times traffic must be backhauled. Backhauling directs traffic along a longer, out-of-the-way route to perform load balancing on the network, help prioritize important data, and reserve the most direct routes for that important data. Using backhauling for data that is not as important as primary communication is a reasonable workaround for poor network bandwidth.

The infrastructure challenges are constantly being evaluated. The need to move all devices from IPv4 to the IPv6 protocol in the TCP/IP network layer presents its own challenges. The move to different IP settings is small compared to Internet service provider (ISP) issues. Many ISPs share transmission mediums, but many others install their own networks. These independent networks do not communicate directly; therefore, all information must go to large data centers and be distributed from there. This forces central offices and organizations to be formed to handle data warehousing, which introduces the challenge of central locations to IoT: there need to be enough data centers close enough to each other for data to be transmitted quickly and without any transmission issues. Network and mobile device security always needs to be addressed, and the attack vectors created by the IoT increase with every new device created.

The biggest issue with data storage is the physical capacity required for it. Data warehousing and data mining have become their own distinct areas of the industry. The sheer amount of data collected becomes unimaginable. The average person in North America has over thirteen devices with telecommunication needs.9 It is reported that each household in the United States has over eight IoT devices.10 The number of IoT devices in households is constantly increasing, which correlates directly with the number and size of the data centers needed to provide proper services to these devices.

A serious challenge that needs to be addressed immediately is the standards and requirements for IoT platforms. The ISO/IEC 30141 standard addresses the IoT and its reference architecture; it helps promote seamless connections with the intent of making them safer. The framework that this standard provides is a good starting point, but the challenge is implementation, because companies are not required to use it. The ISO/IEC 27400 family addresses IoT security and data privacy guidelines, but nothing forces a company to adopt or address these standards.

Smart Ecosystem Blockchain Platform Quality Challenges

Blockchain platforms suffer from availability/scalability challenges directly associated with the TRM model. Any blockchain platform can only process so many transactions per second. If the data the blockchain receives exceed its computational capabilities, the platform slows down and gives the impression of being offline. Blockchains tend to be more secure than traditional computer systems because most are distributed, and many are based in the cloud. A distributed and cloud-based platform will always be directly affected by availability and scalability issues. Most blockchains rest on strong cryptographic principles, strengthening the TRM’s assurance and integrity qualities.

The TRM’s ease-of-use quality plays an important role in the challenges with blockchain. Many owners of blockchains become landlords to data mining infrastructure, facing challenges with cryptocurrencies and other dilemmas about how information will be disseminated. The ease-of-use challenge is that while cryptocurrencies are relatively easy to use, their values fluctuate greatly, and people do not trust transactions based on what they perceive as fictional money. The different blockchain software packages do not always communicate well with each other, making transaction processing difficult at times. A further challenge introduced by cryptocurrencies is auditing and transaction analysis. Individuals in charge of the blockchain can control what data are collected and what they will do with it. While this does not directly play into ease of use, users must adhere to the owners’ rules, and the owners can make it as easy or difficult as they wish to use their software.

Blockchain platforms also face challenges in terms of adaptability and interoperability. Although the World Wide Web seems to have been around for an eternity, blockchain concepts are nearly as old. Cryptographically secured chains were discussed in the early 1990s by Stuart Haber and W. Scott Stornetta; the idea is that cryptographically hashed records could be linked into “blocks” to be transmitted. Choosing the size of the blocks and communicating them has always been a challenge. The arguments over which form of cryptography, or the process of using codes to protect data and other information, is best and over the best ways of transmitting the information remain challenging for all blockchains. These different modes of transmission and the different forms of cryptography used challenge adaptability and interoperability.
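
The Haber and Stornetta idea of cryptographically chaining records can be illustrated in a few lines. The sketch below uses Python’s hashlib to link each block to the hash of the previous one; the block contents and structure are simplified for illustration and do not represent any particular blockchain platform.

    # Minimal sketch of a hash-linked chain in the spirit of Haber and Stornetta:
    # each block stores the hash of the previous block, so tampering with any
    # earlier block changes every hash that follows it.
    import hashlib
    import json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]

    for i, data in enumerate(["record A", "record B"], start=1):
        prev = chain[-1]
        chain.append({"index": i, "data": data, "prev_hash": block_hash(prev)})

    for block in chain:
        print(block["index"], block["prev_hash"][:12], block["data"])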

Smart Ecosystems AI/ML Platform Quality Challenges

Artificial intelligence (AI) and machine learning (ML) present a variety of platform and quality challenges. These two types of technology suffer from the same difficulties as the cloud big data analytics platforms discussed earlier, viewed from the same TRM quality standpoints. The challenges of availability, assurance, and usability are still present, but the issue here is how to ensure quality from an AI/ML perspective. Generative AI can interact with a customer base, but that customer base usually wants to talk to a “real” person. When generative AI is used in chatbots and other front-end interfacing devices, users can typically get their answers quickly, but the answers often lack depth or leave out important facts or features. Generative AI can only produce responses and information based on its underlying algorithms. The Turing model of AI is based on algorithms, while the Lovelace model of AI is based on the spontaneous creation of material. We are not yet in a world where AI/ML devices can “think” and create genuinely new content. Generative AI such as ChatGPT uses a predictive text model to form its responses. The quality of responses from such systems is left in question because, if the system cannot determine the correct answer, it often makes up the material.

The use of cloud devices based on AI/ML presents challenges of communication and speed. Ensuring that data is available, accessible, and of high enough quality for analysis is always a risk, and the need for secure communications and smart backups is also critical. This is a definite concern for compliance. The CIS Critical Controls can be used to evaluate procedures and test compliance levels against company policies; the problem is creating policies that handle all possible combinations of AI/ML and their applications within a company.

In higher education, colleges create ML models based on information collected in student datasets. They use these models to perform targeted marketing in an effort to increase the number of college applications. The challenge is that many of the datasets used to train ML models are culturally biased and do not consider all possible learners; they usually reflect a targeted demographic. The resulting enrollment plan can therefore be flawed because a population of students is not included or represented.
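
One practical way to begin catching this kind of bias is to check how well each group is represented in the training data before any model is built. The sketch below assumes a hypothetical applicant dataset with an age_group column; the data, column names, and 5% threshold are illustrative only.

```python
import pandas as pd

def representation_report(df, group_column, threshold=0.05):
    """Flag groups that make up less than `threshold` of the training data."""
    shares = df[group_column].value_counts(normalize=True)
    underrepresented = shares[shares < threshold]
    return shares, underrepresented

# Hypothetical applicant data; in practice this would be loaded from
# the institution's enrollment or marketing datasets.
applicants = pd.DataFrame({
    "age_group": ["18-22"] * 90 + ["23-29"] * 7 + ["30+"] * 3,
    "applied":   [1, 0] * 50,
})
shares, flagged = representation_report(applicants, "age_group")
print(shares)
print("Underrepresented groups:", list(flagged.index))  # e.g., ['30+']
```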

Industry Spotlight

Challenges of Health Care Information Systems

According to LinkedIn, health care information systems struggle to provide accurate and complete data. Many fields within patient records are left blank, or the administrative assistant cannot read the patient's handwriting. If you have done any research, you have likely encountered the phrase "data is dirty." When you ask participants to fill in surveys or forms, you have no idea what kind of data you will receive. Patients' incomplete fields and spelling mistakes all lead to dirty data. With the advancements of AI and ML, data records can be analyzed for completeness, and fields can be evaluated for proper responses.
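
A simplified, rule-based version of such a completeness check can be sketched with pandas, scanning patient records for blank fields and obviously invalid values; the record layout and validation rules shown here are hypothetical.

```python
import pandas as pd

# Hypothetical patient records with missing and suspect values.
records = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "birth_year": [1985, None, 1812],       # None = blank field, 1812 = implausible
    "blood_type": ["O+", "AB-", "purple"],  # "purple" = invalid entry
})

VALID_BLOOD_TYPES = {"A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"}

def audit_records(df):
    issues = []
    for _, row in df.iterrows():
        if pd.isna(row["birth_year"]):
            issues.append((row["patient_id"], "birth_year is blank"))
        elif not (1900 <= row["birth_year"] <= 2024):
            issues.append((row["patient_id"], "birth_year is implausible"))
        if row["blood_type"] not in VALID_BLOOD_TYPES:
            issues.append((row["patient_id"], "blood_type is not a valid value"))
    return issues

for patient_id, problem in audit_records(records):
    print(patient_id, problem)
```

ML-based approaches go further, learning which values are plausible from historical records rather than from hand-written rules, but the goal is the same: flag dirty data before it drives decisions.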

The use of AI in medical fields is growing rapidly. Tools like ChatGPT have had high success rates in properly diagnosing conditions already verified by doctors.11 Studies have shown that generative AI tools can help doctors with diagnostic evaluations of patients. These tools are not 100% accurate, and patients should be wary of relying only on computer-based tools for diagnosis; you should consult a physician if you feel something is wrong. If tools like ChatGPT can decrease medical professionals' research time, they can find a starting point for treatment more quickly and treat more patients. Medical professionals should still proceed with caution because the studies show roughly a 10% chance of incorrect diagnosis.

Smart Ecosystems 3-D/4-D Platform Quality Challenges

When discussing 3-D/4-D platform systems, there are two areas in which challenges can be analyzed. The first is 3-D/4-D printing, and the second is 3-D/4-D immersive gaming. 4-D printing is an exciting new concept that allows the materials used in 3-D printing to transition, or transmute, into other shapes or substances; in effect, 4-D printing is 3-D printing with special, condition-triggered features. The challenges here are obtaining the materials needed to print properly in a 3-D environment and managing the transition period for the print, since special environmental conditions typically trigger the transmutation. A further challenge with 3-D printing is properly designing the model. Sometimes these models need to be printed with supports and other features, which wastes filament, and if that filament is meant for 4-D printing, the extra expense must be accounted for. The success rate of 3-D printing is not 100%; many conditions can cause a print to fail. One of the biggest issues is the filament absorbing moisture from the air. If the 4-D transition is triggered by moisture or water, it might occur during printing, which is not the intended outcome. Many 3-D models, if not designed properly, are frail and have many weak points, which is undesirable if the 4-D transition depends on the strength of the material.

3-D/4-D immersive gaming has all the same concerns as the AR/VR platforms discussed previously in the chapter. The 4-D aspect of AR is a condition that triggers an additional sensory response. It goes beyond the visual aspects of the headset and may incorporate sound or touch. This is common on immersive 4-D rides found at amusement parks. Providing an experience beyond 3-D interaction has challenges based on environment and materials. Most 4-D immersive experiences use air or mists of water to generate their effects, but some go as far as to incorporate scents.

Metaverse Smart Ecosystems Platform Quality Challenges

The metaverse is a major addition to the World Wide Web (WWW). The immersive AR/VR culture the platform intends to create is nothing short of a science fiction movie come to life. The WWW and the Internet have had decades to develop and to influence modern life profoundly. The challenges the metaverse presents fall in the adaptability and interoperability and the usability and ease-of-use areas of the TRM. The metaverse prides itself on being platform independent; it is meant to be used on all IoT devices and all computer platforms. It is reasonable to assume that interoperability will significantly influence how smoothly the metaverse functions and how stable the environment is. The ability to travel seamlessly between virtual worlds and transition from one system to the next will only be as good as the resources introduced into the system. The amount of memory each computer has, along with the supporting cloud technologies, will be a challenge. The platform will want seamless integration of 3-D modeling, and the adaptability challenges of 3-D modeling were discussed previously in the chapter.

A second major concern is that the metaverse is not a traditional medium. It is different from current movies or films produced for large screens. Metaverse displays can be as small as a watch face or as large as a city skyscraper, and all images in the metaverse are digital; there will be no print media from it. Individuals with disabilities will have to adapt to the new culture, and the level of detail within the models may not meet expectations. The platform will be expensive, and in a world where we already struggle to ensure everyone has access to the Internet and the WWW, making sure everyone has access to the metaverse is an even further stretch. Individuals will also hesitate to provide all the information needed to create a metaverse account, so a great deal of inauthentic information will be supplied. Aliases will be used, and sometimes incorrect data will be provided just to gain access.

The rate at which software is being created for the metaverse is a definite area of concern in the TRM. Software engineering skills in web technologies, networking, quality assurance, security, and overall software usability need to be strengthened. Developing these skills is not the biggest challenge, however. With the advent of the Internet, the modern personal computer, and all of the IoT devices, where does innovation come into play with the metaverse? It will be years before the current world and the experiences in it are fully brought into the metaverse, and we need people who will look beyond the metaverse to see where it is going. If people become immersed in the metaverse, looking past it for newer technology will not be easy.

Industry 5.0 Supersociety Solutions Era

Industry 5.0 is a new and emerging phase of industrialization that sees humans working alongside technology and AI-powered robots. It also refers to the transformation of industries from production-based to value-based, focusing on social and environmental benefits (i.e., Society 5.0) as well as profit-making. Industry 5.0 aims to enhance workplace processes, increase efficiency, and improve resilience and sustainability. It relies on various supersociety solutions that typically operate in a hybrid cloud environment; these solutions therefore rely on the application foundations set forth by the OWP for web/mobile applications as well as the services made available by multi-cloud platforms and smart ecosystem solutions. A supersociety is a technologically rich environment whose members choose to immerse themselves in technology. A household in a supersociety is often completely reliant on IoT devices and cloud-based software. As we prepare for supersocieties, think about the science fiction movies in which a computer runs and maintains an entire household. While we are not there yet, members of supersocieties work toward this concept. The infusion of cloud-based software applications and the idea of being fully connected to the Internet play a significant role in being a member of a supersociety.

Supersociety Autonomous Systems Platform Quality Challenges

The challenges we face with an autonomous system, which is a system that can operate with limited human control, are ever-changing. The theory is that if every car in the world were autonomous, there would be no more accidents. This is a myth because, as we know, computers are prone to failure. Individuals who wish to hack autonomous vehicles may do so specifically to cause harm. Many people also cannot tolerate the sense of not being in control of a vehicle while it is driving, even when they are behind the wheel. Humans, by nature, prefer to be in control of their environments.

Autonomous vehicles rely on sensory data to function. These data support perception, decision-making, and execution through real-time analysis. The vehicles also rely on cloud mapping services for navigation and on other cloud data for weather and environmental conditions. If any of these channels is interrupted, the vehicle must function based on the last known data. That information may be insufficient, or erroneous data may be used for decision-making, which distorts the vehicle's perception and may leave the system unable to make safe decisions.
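
A highly simplified sketch of the "last known data" problem is shown below: the controller checks how stale each data feed is and degrades to a more cautious mode rather than trusting outdated information. The feed names, thresholds, and driving modes are hypothetical and are not drawn from any real vehicle platform.

```python
import time

# Hypothetical maximum acceptable age (in seconds) for each data feed.
MAX_AGE = {"lidar": 0.2, "cloud_map": 5.0, "weather": 60.0}

def plan_behavior(last_update, now=None):
    """Return a driving mode based on how stale each feed is."""
    now = now if now is not None else time.time()
    stale = [feed for feed, ts in last_update.items() if now - ts > MAX_AGE[feed]]
    if "lidar" in stale:
        return "emergency_stop"   # cannot perceive surroundings safely
    if stale:
        return "reduced_speed"    # navigation or context data is outdated
    return "normal_driving"

now = time.time()
feeds = {"lidar": now - 0.05, "cloud_map": now - 30.0, "weather": now - 10.0}
print(plan_behavior(feeds, now))  # "reduced_speed": the map data is stale
```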

Autonomous vehicles are also expensive to manufacture. Autonomous systems extend beyond vehicles to drones and smart homes, and any device that can work independently of humans is at risk of acting on false data. With smart homes, there is also the challenge of power outages. While these platforms are in constant development, and many companies are contributing to their enhancement and evolution, as a society we need to analyze how we will integrate them into our lives. Theoretically, as autonomous machinery increases, it should fit seamlessly within the metaverse. The TRM's interoperability and adaptability qualities will be challenged, and data mining and decision-making through more advanced AI will continue to challenge the functionality of immersive technology.

The great hope for autonomous vehicles, and it is largely a realistic one, is that autonomous machinery will be faster than human-interactive computing and will give humans the opportunity to repurpose themselves so they do not have to perform the tasks these autonomous entities can handle. The issue is that we need more computer scientists studying AI/ML and a greater devotion to nanotechnology. All companies must enhance their quality assurance protocols; the world cannot be the testbed for software. Standards such as ISO 22737:2021 (Intelligent Transport Systems) and ISO 39003:2023 (Road Traffic Safety) must continue to evolve to handle the newest technology. Standards cannot be an afterthought for autonomous platforms.

Supersociety Advanced Robotics Platform Quality Challenges

As previously discussed in the chapter, there are two bases of AI in the world: the Turing and Lovelace forms of AI. Robotic platforms cannot exist without implementing one of the two. Since robots have not been shown to create thoughts spontaneously, the Lovelace form of AI does not currently seem implementable. Therefore, we can assume that AI based on algorithmic analysis will be the current solution for robotics, with machines working toward the Lovelace form of AI. If we consider humanoid robotics, much can be done with predictive AI and advancements in the neurosciences.

The Human Brain Project seeks to understand disorders of human consciousness. Although this project is currently on hold because of its ambitious goals, we can still learn from the foundations it established. Henry Markram, the founder of the Human Brain Project, spent many years trying to reverse-engineer the brain to understand brain disorders such as autism. In a 2009 TED talk, Markram said that he would be able to figure out the brain and create simulations, in the form of holograms, that would be able to think and talk with you. The project did not meet those goals, but it did advance brain modeling research in previously analyzed areas, which will help with the creation of robotic brains that can think. The challenge is that the neurological community cannot agree on one concise course of development, which has led to delays and to dissension that has driven scientists to leave the project.

The TRM's quality assurance and software engineering qualities are challenged every step of the way with robotics. An ethical dilemma also comes with all robotics: Do we need a robot that can function the same way as a human? If such robots are developed, will humans become obsolete? Science fiction movies have warned about robots taking over the world. If quality assurance and software engineering skills increase to the point where a robot that can truly think is created, the world will face a new set of challenges.

Supersociety Nanotechnology Platform Quality Challenges

There are many challenges to nanotechnology (NT) platforms and their introduction into society. Nanotechnologies are extremely small devices, on the scale of one billionth of a meter, introduced into a technology to modify how it functions, ideally for the better. The processes required to develop such technologies are costly and involve many different chemicals, and what to do with these chemicals once manufacturing is done is a constant challenge for NT companies.

NT also contradicts the TRM's notion of extensibility. NT is created to perform a specific task and often cannot adapt to new conditions or environments; it has a fixed set of functional requirements that it is meant to carry out. Once NT is introduced into a system, it is difficult to remove.

NT development takes place under the supervision of universities and private companies. Since there is competition to advance these technologies, there is not much information sharing between the entities involved. This leads to duplicated effort, and many employees are required to sign non-disclosure agreements when working for these companies. The industry also does not see much turnover of employees moving from one company to the next because most of the technology is proprietary.

There are other ethical challenges with NT that the world does not yet have answers for. NT can be used to create biological and other powerful weapons. Although these weapons can be developed, the questions are whether they should be and why they would need to be. The fear of atomic warfare has shaped today's military forces, and the thought of NT expanding cybersecurity attack surfaces is frightening and challenging for society. The use of NT to enhance human abilities is also in question. Should NT be used to improve the human brain or muscular system? Can we use NT to enhance beasts of burden so they work harder and longer during the day? Such enhancements would create a divide between wealthy people who could afford NT implants and those who could not, and the technology divide between classes would grow even more. The most significant challenge for NT is the simplest of questions: Is it needed?

Supersociety Super-Compute Platform Quality Challenges

Supersocieties cannot exist without advanced technology. This means that data warehousing and communications must be at the extreme top end of the performance scale. Supercomputers, often called clusters or parallel computers, are built to share resources and interconnect many processing CPUs. The CPUs are grouped into nodes, and the communication links between the nodes must be of high quality and the fastest possible speed. The power and speed of a supercomputer are based largely on the number of nodes it possesses. Researchers have also tried to enhance supercomputers through NT and by mimicking the human nervous system: neuromorphic systems try to emulate the nervous system and the way the brain delivers information. IBM has done extensive research in this area, creating the TrueNorth chip, and Intel has developed the Loihi 2 chip. Some expect such approaches to eventually complement or even supplant systems like IBM's Eagle quantum processor, one of today's most advanced machines.
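
To illustrate how work is divided across many processors, the sketch below splits a large sum across worker processes on a single machine using Python's multiprocessing module; a real cluster would distribute the chunks across nodes with a framework such as MPI, but the divide-and-combine pattern is the same.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # Each worker sums its own slice of the range independently.
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n, workers = 1_000_000, 4
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(range(n)))  # True: same result, computed in parallel
```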

The TRM challenges introduced by supercomputers are availability and quality. Most families do not have supercomputers in their homes and do not have access to them; unless you are a research scientist working for one of the organizations that operate them, you will probably never have the opportunity to interface with this type of computer. The second challenge, quality, is difficult to measure because quality with supercomputers comes in different forms. The first form is based on speed and efficiency: if the computer is doing what it needs to do and the outcome is accurate, users are usually satisfied. A related form, correlated with speed, is information accuracy. Many supercomputers produce data at an alarming rate, and it is difficult to check those data in real time before they are used in other calculations. The algorithms that are implemented need to be unit-tested, and quality assurance is a must. Quality also means different things to different people, and a computer that can perform tasks as fast as a supercomputer can also be used for malicious purposes. Since the cost of these computers is at the extremely high end of the industry, they are mostly found in government and defense agencies. Using a supercomputer to perform hacking techniques could ensure success because of the raw computing power, or it could be used to launch the ultimate denial-of-service attacks.
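
Unit-testing those algorithms might look like the minimal sketch below, which uses Python's built-in unittest module; the moving-average function is a hypothetical stand-in for a real analysis routine.

```python
import unittest

def moving_average(values, window):
    """Hypothetical analysis routine: average of each sliding window."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

class TestMovingAverage(unittest.TestCase):
    def test_known_result(self):
        # The output can be checked against values computed by hand.
        self.assertEqual(moving_average([1, 2, 3, 4], 2), [1.5, 2.5, 3.5])

    def test_rejects_bad_window(self):
        with self.assertRaises(ValueError):
            moving_average([1, 2, 3], 0)

if __name__ == "__main__":
    unittest.main()
```

Tests like these verify correctness at small scale before an algorithm is trusted to run at supercomputer speeds, where erroneous output can propagate into further calculations before anyone notices.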

Supercomputers are ideal for analyzing big data. Using these computers to perform calculations on datasets that are not manageable by ordinary computers could advance technology and industries in ways not previously imagined. The U.S. government, through NASA, has been analyzing data captured from telescopes to evaluate different areas of the universe. This is frightening to some people because they feel the government could use the computers to analyze private data and spy on people. It will take a visionary in the computer industry to truly utilize the processing power of these computers. We need people to think about capabilities beyond how computers are currently being used and to consider how these devices could be used in the future. This is a challenge because most of the industry is extending or adapting current technology to perform new tasks rather than thinking outside the box about what could be possible with the speeds of modern supercomputers.

Supersociety Autonomous Super-Systems Platform Quality Challenges

The challenges of autonomous super-systems platforms combine all of the previous difficulties and add significant complications. Since we have not yet seen a truly autonomous platform that combines robotics and supercomputers, we are left to think about what-ifs. What can we expect from a system that can think for itself, regulate itself, fix any issues that arise, and do it all faster than humanly comprehensible? Our best hope as a society is the concept that Dell introduced: the Responsible Computing Framework. The ethical implications of autonomous super-systems are endless. However, if these systems are built with the proper checks and balances, and the technology used to develop them is constantly tested for flaws, our main concern becomes the human factor in implementation. At some point, humans will create autonomous super-systems. If individuals adhere to the Responsible Computing Framework and always consider an information security policy when implementing the systems, we can hope that the systems will do their jobs to the highest ethical standards and that their introduction will be socially acceptable.

Concepts In Practice

Industry Certificates Combined with Degree Programs

Professional skills are always needed within the industry, and industry certifications are in demand. Cloud technology companies seem to be thriving, but the pool of engineers to support them does not appear to be growing at the rate of demand, and several companies have openings in roles related to cloud technology. Let's look at the certificates Microsoft offers for its Azure cloud services.

Microsoft offers a variety of certifications for particular areas of work with its Azure platform. The first recommended certificate is Azure Fundamentals, which focuses on the learner's skills in cloud concepts, including the benefits of cloud computing and IaaS, PaaS, and SaaS. After this certification, there are a variety of options: the learner can pursue Microsoft Azure Data Fundamentals or Azure Monitoring, Troubleshooting, and Optimizing, and there are also certificates for Storage Solutions, Subscriptions and Resources, Third-Party Connectivity, and Virtual Machines.

Microsoft is not the only company offering skill-based certificates for its technologies. Companies like Google and IBM offer options, and external entities offer programming, software engineering, and security certifications. These certificates are exceptional add-ons to a college education. They help you focus your skillset for the job at hand. The education you receive from a college or university should provide foundational knowledge to build upon.

Footnotes

  • 1International Organization for Standardization and the International Electrotechnical Commission. ISO/IEC 25002:2024. https://www.iso.org/standard/78175.html
  • 2Center for Internet Security, “CIS critical security controls,” v8.1, 2024. https://www.cisecurity.org/controls
  • 3Refer to the TOGAF ADM in the TOGAF Standard at https://openstax.org/r/TOGAF
  • 4See Network Working Group, “Architectural principles of the Internet,” June 1996. https://datatracker.ietf.org/doc/html/rfc1958
  • 5See W3C, “Open web platform,” August 27, 2020. https://www.w3.org/wiki/Open_Web_Platform.
  • 6Colorlib, “Password statistics (how many passwords does an average person have?),” June 9, 2024. https://colorlib.com/wp/password-statistics/
  • 7S. Subrahmanyam, “What is metaverse and how is it changing AR/VR world?” November 28, 2022. https://readwrite.com/what-is-metaverse-and-how-is-it-changing-ar-vr-world/
  • 8A. H. Najmi, W. S. Alhalafawy, & M. Z. T. Zaki, “Developing a sustainable environment based on augmented reality to educate adolescents about the dangers of electronic gaming addiction,” Sustainability, vol. 15, no. 4, p. 3185. January 19, 2023. doi: 10.3390/su15043185. [Online].
  • 9Statista, “Average number of devices and connections per person worldwide in 2018 and 2023,” January 19, 2023. https://www.statista.com/statistics/1190270/number-of-devices-and-connections-per-person-worldwide/
  • 10C. Weinschenk, “Report: People underestimate number of IoT devices in their homes.” April 3, 2023. https://www.telecompetitor.com/report-people-underestimate-number-of-iot-devices-in-their-homes/
  • 11P. T. Paharia, “ChatGPT: A diagnostic sidekick for doctors? Caution advised for non-professionals.” News Medical Life Sciences, April 27, 2023. https://www.news-medical.net/news/20230427/ChatGPT-A-diagnostic-sidekick-for-doctors-Caution-advised-for-non-professionals.aspx