Introduction to Computer Science

13.4 Towards Intelligent Autonomous Networked Super Systems


Learning Objectives

By the end of this section, you will be able to:

  • Analyze specific applications of AI through XR technology
  • Understand the impact of the development of supersociety capabilities, including nanotechnology, robotics, and supercomputers
  • Discuss the advantages and challenges faced by the development of IANS and supersystems

Recent advances toward superintelligent AI are bringing the vision of networked autonomous systems closer to reality. These systems, known as intelligent autonomous networked supersystems (IANS), are the next major development in chained computing, in which intelligent chains of autonomous machines work together as a system to make decisions and take action. IANS are highly interconnected AIs that form complex networks and collaborate through the rapid, sustained exchange of data.

In this section, we will analyze current applications of IANS and supersystems in large-scale systems and businesses. By working through real-life examples, we can evaluate their potential and critically examine the current challenges and limitations in the development and deployment of extensive IANS and similar supersystems that use intelligence to improve industries such as health care.

Web Platforms and Smart Ecosystems Applications

Today’s web incorporates hybrid multiclouds that continuously evolve. As shown in Figure 13.15, these hybrid multiclouds power a myriad of technology components that make it possible to create innovative solutions as the Web continues to evolve.

Illustration of web evolution: Prior to 1980s-Discussion and newsgroup service; 1980s-Email, chat, news, bulletin; Early 1990s-WWW goes mainstream, text-based websites; Late 1990s-e-commerce, website design, animation, interactivity; Current-Web 2.0, Social media, Mobile internet.
Figure 13.15 As this diagram shows, the Web has evolved and experienced many breakthroughs and disruptions. (credit: modification of “History of online service” by Viviensay/Wikimedia Commons, CC0)

In particular, recent advances in mobility and networking, such as 5G, have made it possible to minimize the latency of traditional web and mobile applications. This has led to a proliferation of social networks that enable efficient access to various types of content and global instant communication and collaboration. In addition, virtualization technology has made it possible to create powerful cloud platforms that facilitate access to infrastructure and platform services, as described earlier in this chapter.

This progress is leading to global acceptance of the next-generation hybrid Web 3.0, which makes it possible to combine traditional Web 2.0 applications with blockchain 2.0 capabilities. Blockchain is characterized by real-time transactions, scalability, and unlimited decentralized storage. Further improvements are on the horizon to provide a more scalable, fast, unlimited, and completely secure blockchain infrastructure via Blockchain 3.0/4.0.

To build on this, Web 4.0 and 5.0 are already on the way as the metaverse is being positioned as the successor to today’s Internet. The metaverse is a concept that originated in the 1992 novel Snow Crash, in which people use the metaverse as an escape from a dystopian world (an idea later explored in the novel and film Ready Player One). The metaverse embodies a unified immersive digital world that is tightly connected to the physical world. In the metaverse, people can interact without physical or geographic constraints and enjoy a compelling sense of social presence.

The metaverse is characterized by two key features. It is persistent, with its collective network of 3-D-rendered virtual elements and spaces available throughout the world 24/7. It is also shared, giving a vast number of users simultaneous access and the ability to use the metaverse to interact. The metaverse functions with six key layers that include

  • infrastructure (e.g., chips and processors, cloud infrastructure),
  • access/interface (e.g., haptics, headsets, smartglasses),
  • virtualization tools (e.g., 3-D design engines, avatar development),
  • virtual worlds (centralized and decentralized),
  • economic infrastructure (e.g., payments, crypto wallets, and marketplaces for non-fungible tokens (NFTs), which carry a digital signature and cannot be exchanged or equated to another item), and
  • experiences (e.g., gaming, virtual real estate/concerts).

Industry Applications

As of 2024, the online worlds promoted by metaverse proponents are not fully formed and functional just yet, but platforms and games like Second Life, Roblox, and Decentraland are indicators of the future once the metaverse is fully operational. The metaverse is expected to rely on technologies such as virtual reality headsets, advanced haptic feedback, and 3-D modeling tools to power immersive digital environments. To understand the potential of the metaverse and mixed reality technology, let’s consider their applications in the areas of education and health care.

Enhanced Learning Experiences

Mixed reality can provide visuals and relatable examples to help students perceive theoretical information in complex topics such as biology, anatomy, physics, and math. Medical students can learn anatomy and practice examining the body with XR apps that represent the human body inside and out. Chemistry students can conduct experiments using different chemical combinations and see results with no harm to students and school property. Nursing students can use simulations to learn and prepare for unique situations that they’ll encounter in clinical settings. Students can also take field trips to museums, exhibitions, and theaters all over the world without leaving the classroom.

In-Person Patient Care Applications

In operating rooms, clinics, hospital wards, and medical training settings, mixed reality speeds up diagnoses, increases access to health-care facilities, cuts down on infection transmissions, and improves medical care outcomes. XR enables holographic overlaying of images and data onto real-life situations such as surgical operations, remote consultation, and treatment, opening new avenues in health care.

In cardiology, initial XR health-care applications created interactive visualizations that enabled pediatric cardiologists to virtually demonstrate complex congenital heart problems to their students and patients. XR techniques reduce the time required to diagnose cardiology issues, and surgeons who use Microsoft HoloLens headsets during procedures can interact with the hologram using hand movements and benefit from a wider field of view, which improves surgical outcomes. In one application, XR technology enables surgeons to see patients’ 3-D computed tomography (CT) and magnetic resonance imaging (MRI) scans directly, making it easier to identify the exact area of the patient’s body that needs surgery. This could be especially beneficial for emergency surgeries that must be performed as quickly as possible to save a patient’s life. Preoperative simulations are also made easier with XR, which creates customized 3-D models for each patient and visualizes the internal anatomy in a fully immersive environment. In complex surgical operations such as reconstructive surgeries, holographic overlays can substantially help surgeons examine the bones and assess blood flow in the arteries.

XR may also be helpful for pain relief. In Denmark, Aalborg University researchers studied the potential of XR to provide pain relief for phantom limbs. It may be possible to trick the brain of a person with an amputation into thinking it still controls the missing limb, which may help reduce the agony associated with phantom limbs.

Telemedicine

Using XR headsets and 3-D XR, medical practitioners are able to review patient histories by voice, consult with medical specialists, and update patient records. XR-powered headsets allow doctors to analyze patient data and deliver findings in real time without poring over written reports, resulting in faster and more precise diagnostics.

XR may also provide paramedics with remote support to address emergencies. With XR, paramedics can remotely get support from senior medical professionals. This can help them make more accurate and faster medical decisions, efficiently provide emergency medical aid, and improve patient outcomes.

Lastly, XR can help provide remote care to patients with mobility issues. It can also visually project simulations for different situations, offer ease of access to facilitators remotely, increase patient engagement by providing a safe and controlled immersive environment, and leverage telehealth appointments.

Therapeutic and Mental Health Applications

The Autism Glass Project at Stanford University’s medical school used XR to help children with autism manage their emotions and identify related facial expressions. There are many other possible applications of the same technology, including the use of VR psychotherapy to address mental health conditions and disorders and treat the cause rather than the effect of such disorders by combining VR technology with big data analytics, cloud computing, machine learning, IoT, and blockchain. Affective Interaction through Wearable Computing and Cloud Technology (AIWAC) technology provides a full-stack solution aiming at effective remote emotional health-care assistance. The solution components allow collaborative data collection via wearable devices, enhanced sentiment analysis and forecasting models, and controllable affective interactions.

Other Applications of XR Technology

As the metaverse evolves, XR-driven technology may be applied in various industries for uses such as the following:

  • equipment assembly, maintenance, and repair
  • engineering and architectural design (e.g., experiencing a virtual building before it is built)
  • market research (e.g., experiencing a virtual product that does not yet exist)
  • entertainment (e.g., cinema, music, and sports)
  • product advertising and promotion
  • computer games

Mixed reality technology also has the capacity to promote social good. For example, applications that combine XR with other innovative technologies to support people with disabilities have already been developed, including technology that alerts people who are blind when rapidly moving objects are headed in their direction.

Evolving Considerations for Standards and Guidelines

The use of immersive applications in the metaverse has made it necessary to update the standards and guidelines used to develop applications. Traditional usability guidelines were designed for web and mobile applications that users accessed with computers. Immersive applications require designers to understand how human beings sense things, reason, and plan actions, as well as what motivates people to use available solutions. Designing for these applications requires paying attention to usability, accessibility, and inclusion.

The process of designing products to be effective, efficient, and satisfying is called usability; per the World Wide Web Consortium (W3C) standards, it also calls for accessibility and inclusion. It subsumes solution qualities (e.g., effectiveness, efficiency) that are appealing to humans and motivate the use of such solutions. The concept of accessibility addresses discriminatory aspects related to equivalent user experience for people with disabilities. It pertains to solution qualities that make these solutions usable by people with disabilities. For example, web pages that display pictures should have “alt-text” HTML tags associated with the pictures that can be used to read what the pictures contain so that people who are blind can navigate to pages and understand what is displayed within pictures they cannot see. In general, people with disabilities should be able to perceive, understand, navigate, interact, and leverage websites and related tools in much the same way as all people do. The concept of inclusion ensures that diverse communities can make use of solutions regardless of their location, culture, and other differentiating traits, habits, or interests.
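To make the alt-text guideline concrete, the following minimal sketch (in Python, using only the standard library) scans a fragment of HTML and flags images that lack alternative text. The sample markup and class name are illustrative, not part of any W3C tool.

from html.parser import HTMLParser

# Minimal accessibility check (illustrative): flag <img> tags that have
# no alt attribute, so a screen reader would have nothing to announce.
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing += 1

checker = AltTextChecker()
checker.feed('<img src="cat.png" alt="A sleeping cat"><img src="logo.png">')
print(f"images missing alt text: {checker.missing}")  # prints 1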

An important aspect of designing applications is human-computer interaction (HCI), the science that studies interactions between people and computers and evaluates whether computers can successfully interact with humans. As illustrated in Figure 13.16, “HCI is concerned with understanding the influence technology has on how people think, value, feel, and relate and using this understanding to inform technology design.”1

Venn diagram with circles for engineering, computer science, psychology, sociology, ethnography, and design overlapping to create HCI.
Figure 13.16 Human-computer interaction (HCI) studies the interaction of humans with technology to understand how technology influences human behavior. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

The focus of HCI is to ensure that solutions are usable by humans. Usable solutions should be easy to use as well as effective, safe, efficient, and fun for the user. HCI also focuses on the creation of methods that may be used to measure and otherwise evaluate usability, as well as on the definition of usability guidelines and standards. Applying HCI as part of the design of computing systems requires considering human physical and mental capabilities (e.g., attention, memory) as well as the needs of humans (e.g., functional, emotional, social) as constraints at the same level as machine physical constraints such as processor speed and networking capabilities.

A usable system must account for the various roles of humans. This includes the following:

  • Humans as sensory processors. Usability results when the system fits within human sensory limits for vision, hearing, touch, smell, and taste.
  • Humans as interpreters/predictors. Usability results when the system fits with human knowledge. This includes the ability to process information via perception and cognitive processes such as selective attention, learning, problem-solving, and language processing.
  • Humans as actors in environments. Usability results when the system fits within task and social contexts, such as gender and ethnic backgrounds.

Human-Computer Interaction (HCI) Guidelines for Immersive Solutions

The first and most important solution development step in HCI is to define the context. This includes the type of uses and applications—such as industrial, commercial, and exploratory—as well as the market and the customer. The context is not the specific local environment, but rather the larger type of world that the system needs to exist in. This includes the users’ physical attributes, physical workspaces, perceptual abilities, cognitive abilities, personality and social traits, cultural and international diversity, and special abilities or disabilities. It also includes task analysis to understand what users need and want to do with technology. Other steps involve function allocation, system layout/basic design, mockups and prototypes, usability testing, iterative testing and redesign, and update and maintenance.

Similar to mobile solutions, immersive products must go through a series of prototypes to ensure stabilization: a feasibility prototype (a single logic path), an alpha prototype (minimum viable product), a beta prototype (largely complete), and a release candidate (all required functionality) ready for product owner review. Quality measurement checklists and design best practices differ for web, mobile, and immersive solutions.

Think It Through

VR Accessibility

Ricardo’s friends are having fun using virtual reality and avatars to explore castles in Europe. But Ricardo has a disability that prevents him from joining in the fun using the website his friends have selected.

Why is this important? How could HCI guidelines help the developers of this website make virtual reality accessible to Ricardo?

Supersociety Digital Solutions

While smart ecosystem solutions focus on providing insights to their users so they can adapt to change and optimize their activities to guarantee success, supersociety applications go one step beyond by replacing humans in certain mechanical activities, allowing them to focus on activities that machines cannot perform on their behalf. The set of supersociety capabilities being developed keeps evolving, supported by innovative technology components powered by the hybrid multiclouds that are an inherent part of our evolving web infrastructure.

Supersociety Capabilities

Noteworthy supersociety capabilities being developed today include technology at the molecular level, robotics and advanced robotics, supercomputers, and intelligent autonomous networked systems and supersystems.

Nanotechnology

The field of nanotechnology focuses on matter at the molecular level to create structures and devices about 1 to 100 nm in size with fundamentally new organization, properties, and performance. Nanotechnology can reduce the size of storage devices available via hybrid multiclouds, making it possible to drastically increase the volume of information available on the Web. Some researchers have also suggested a quite futuristic Internet of Thoughts in which neural nanorobots could be used to connect the neocortex of the human brain (i.e., the smartest conscious part of the brain) to a “synthetic neocortex” in the cloud. If doable, this could enable the creation of a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought.

Challenges associated with nanoscale science and technology include making nanomaterials (e.g., self-assembly, top-down vs. bottom-up), characterizing nanostructures (e.g., imaging and measuring small things), understanding properties (“nanoland” lies between macroworld and single atoms and molecules), and nanosystems integration and performance (i.e., how we assemble nanostructures into systems). To better understand these challenges, consider Figure 13.17.

Visual representing nanotechnology between the natural and synthetic worlds.
Figure 13.17 Nanotechnology faces many challenges as matter is used at the molecular level to create structures and devices that are ~1 to 100 nm in size with fundamentally new organization, properties, and performance. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license; credit ruler: modification of "The Scale of Things - Nanometers and More" by NIST/nist.gov, Public Domain; credit top left image: modification of "CSIRO ScienceImage 11085 A scanning electron micrograph of a female dust mite" by Matt Colloff/Wikimedia Commons, CC BY 3.0; credit top middle image: modification of "This digitally-colorized scanning electron micrograph (SEM) revealed some of the ultrastructural morphology displayed by red blood cells" by CDC/Public Health Image Library, Public Domain; credit top right image: modification of "Dna-163466" by PublicDomainPictures/Wikimedia Commons, CC0; credit bottom left image: modification of "Head of a pin" by NIST/nist.gov, Public Domain; credit bottom middle image: modification of "Model of a MEMS Safety Switch," Courtesy Sandia National Laboratories, SUMMiT™ Technologies, www.sandia.gov/mstc; credit bottom right image: modification of "Carbon Nanotube Reference Materials" by NIST/nist.gov, Public Domain)

Robotics and Advanced Robotics

Robotics and advanced robotics are joint disciplines that draw on computer science and mechanical and electrical engineering. The field of robotics focuses on the design, development, functioning, and application of robots, as well as the computer systems needed to control the robots, provide sensory feedback, and process information. Swarm robotics emphasizes a large number of robots and promotes scalability. A cyborg is a biological human whose body parts have been replaced with machinery, while a machine with added biological parts is considered an artificial human. Google’s Cloud Robotics Core is an open-source platform that facilitates the management of robot fleets as well as the creation and operation of robotics-packaged solutions that automate business tasks. Other big cloud platforms also provide support for robotics and advanced robotics.

Software Robots

Software robots, or bots (e.g., web crawlers, chatbots), are computer programs that operate autonomously to complete a virtual task. They are not physical robots; instead, they exist only within a computer.
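As a rough illustration, here is a minimal sketch of one kind of software robot, a web crawler, written in Python with only the standard library. The starting URL is a placeholder, and a real crawler would also respect robots.txt, throttle its requests, and handle network errors.

from html.parser import HTMLParser
from urllib.request import urlopen

# Collect absolute links (<a href="http...">) from a page's HTML.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if href.startswith("http"):
                self.links.append(href)

# Visit pages breadth-first until a fixed page budget is exhausted.
def crawl(start_url, max_pages=3):
    to_visit, seen = [start_url], set()
    while to_visit and len(seen) < max_pages:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.add(url)
        collector = LinkCollector()
        collector.feed(urlopen(url).read().decode("utf-8", errors="ignore"))
        to_visit.extend(collector.links)
    return seen

print(crawl("https://example.com"))  # placeholder URL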

Another field of robotics, cognitive robotics, creates robots that can think, perceive, learn, remember, reason, and interact. It focuses on creating robots that mimic human perception, reasoning, and planning abilities. One subspecialty, biomimetic robotics, focuses on the design of robots that leverage principles common in nature, such as what can be learned from the evolution and development of intelligence in animals and humans. Recent progress and directions in AI, machine learning, and cognitive science drive the focus of the next generation of robotic systems. Figure 13.18 shows how these areas overlap.

Venn Diagram with circles for Robotics, Artificial intelligence, and Cognitive and biological sciences converging in Cognitive robotics.
Figure 13.18 Cognitive robotics draws from the fields of cognitive and biological sciences and artificial intelligence, as well as robotics. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

The intent of cognitive robotics is to replace humans in dangerous environments or manufacturing processes or resemble humans in cognition, enabling robots to do jobs that are hazardous to people. Cognitive robotics aims to improve robots’ perception capabilities as they navigate and manipulate objects in a given environment and interact with people. It also makes it possible for robots to perform tasks by predicting the actions of people around them as well as their own. Cognitive robots can also perceive how people see the world, predict what they need, and anticipate their actions. This explains how these robots can execute daily tasks while interacting safely with people. They are capable of direct interactions, such as assisting customers, as well as indirect interactions, such as sweeping the floor while customers are shopping in a store.

Intelligent mobile robots that can move independently were introduced during the Second World War. Following the implementation of artificial intelligence (AI) in robotics, they became more autonomous and intelligent. Figure 13.19 provides a list of components and architecture of modern intelligent robots.

Illustration of components and architecture of modern intelligent robots.
Figure 13.19 Modern, intelligent robots rely on various components and architecture to function as intended. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Robot Operating System

The Robot Operating System (ROS) is a meta-operating system specially designed for robots. It is open-source and supports a variety of services to control robotics hardware, provide hardware abstractions, and perform common tasks. It can also help manage software packages and pass messages from one process to another. ROS also includes various libraries and tools to facilitate the selection, development, and operation of software modules across various computers.
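For a feel of how ROS structures robot software as message-passing processes, here is a minimal publisher node sketch, assuming a ROS 1 installation with the Python client library rospy; the "chatter" topic name follows the standard ROS tutorial convention.

# Minimal ROS 1 publisher node (assumes a ROS 1 install with rospy).
# It publishes a string message on the "chatter" topic ten times per
# second; any node subscribed to "chatter" receives the messages.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node("talker")                      # register with the ROS master
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rate = rospy.Rate(10)                          # 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data=f"hello at {rospy.get_time()}"))
        rate.sleep()

if __name__ == "__main__":
    talker()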

Robot Manipulators and Mobile Robots Characteristics

Robots include robot manipulators and mobile robots. A robot manipulator is a physical tool that operates at a fixed location to grasp and move items. A mobile robot is one that can navigate from one position to another. Robot manipulators face the challenge of being able to pick and place objects with a sufficient degree of precision, while mobile robots must be able to estimate relative and absolute robot positions and navigate on a map.
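To illustrate the relative position estimation problem that mobile robots face, the following sketch implements simple dead-reckoning odometry for a differential-drive robot; the wheel speeds and dimensions are made-up illustrative values.

import math

# Dead-reckoning odometry for a differential-drive mobile robot:
# integrate left/right wheel speeds to estimate pose (x, y, heading).
# Errors accumulate over time, which is why real robots also need
# absolute corrections (e.g., landmarks or GPS) to navigate on a map.
def update_pose(x, y, theta, v_left, v_right, wheel_base, dt):
    v = (v_left + v_right) / 2               # forward speed (m/s)
    omega = (v_right - v_left) / wheel_base  # turn rate (rad/s)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive straight for 2 s, then arc left for 2 s (illustrative values).
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = update_pose(*pose, 0.5, 0.5, wheel_base=0.3, dt=0.1)
for _ in range(20):
    pose = update_pose(*pose, 0.4, 0.6, wheel_base=0.3, dt=0.1)
print(f"x={pose[0]:.2f} m, y={pose[1]:.2f} m, heading={math.degrees(pose[2]):.1f} deg")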

Mobile robots are used in applications such as medical treatment, mail delivery, infrastructure inspections, and passenger travel. For example, Nao is a humanoid robot that is specially designed to interact with humans. It is loaded with sensors that enable it to mimic emotions. It can recognize people’s faces as well as objects and can speak, walk, and dance. Nao was created by Aldebaran Robotics, which was acquired by SoftBank in 2015. Sixth-generation Nao robots are used in research as well as in the health-care and education industries.

Atlas is one of the most agile robots in existence. It uses whole-body skills to move quickly and balance dynamically. While Atlas can lift and carry objects such as boxes and crates, the robot can also run, jump, and do backflips.

Zipline is an autonomous fixed-wing drone used to carry blood and medicine from a distribution center to wherever it is needed. It can launch within minutes and travel in any weather.

As these examples show, mobile robots have many applications. Figure 13.20 provides an overview of the fields and industries that find mobile robots useful.

Visual showing how mobile robots have a variety of applications in various fields and industries.
Figure 13.20 Mobile robots have a variety of applications in various fields and industries, including engineering specialties, science, mathematics, and law. (credit: Copyright © 2020 Vermesan, Bahr, Ottella, Serrano, Karlsen, Wahlstrøm, Sand, Ashwathnarayan and Gamba. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.)

Computer Vision and Cognitive Robotics

Computer vision advances have facilitated the positioning and navigation of mobile robots. Computer vision is achieved using optics and sensors, which involve image acquisition, image representation, and image processing.
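As a small illustration of image representation and processing, this sketch (assuming NumPy is installed) treats a grayscale image as a 2-D array of pixel intensities and applies a brightness threshold, one of the most basic steps in segmenting an object from its background.

import numpy as np

# An image is just an array of pixel intensities. Here: a 5x5 grayscale
# image (0 = black, 255 = white) with a bright square in the middle.
image = np.zeros((5, 5), dtype=np.uint8)
image[1:4, 1:4] = 200

# Thresholding: mark every pixel brighter than 128 as "object" (1),
# the rest as "background" (0). Robot vision pipelines build on steps
# like this to find object boundaries, estimate positions, and navigate.
mask = image > 128
print(mask.astype(int))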

Artificial Cognitive Systems

Robots need artificial cognitive systems that simulate human thought processes to supplement human cognition. Artificial cognitive systems are developed using algorithms in artificial intelligence and technologies such as machine learning, deep learning, speech recognition, and object recognition.

Supercomputers

Various supercomputers are being developed based on neuromorphic computing and quantum technology. These supercomputers will eventually be available on the big clouds to help optimize the performance and throughput of supersociety applications. IBM already provides quantum compute access plans for its cloud, and Intel is building neuromorphic computing hardware that can be leveraged on cloud supercomputers.

Cognitive Sciences and Neuroinformatics

The computing approach that combines the use of AI and cognitive science is called cognitive computing. The study of how to build a computer that can mimic basic human brain functions is called neuroinformatics. Both require the ability to handle ambiguity as well as uncertainty. Robots equipped with these capabilities can mimic humans’ ability to memorize information, learn, reason, react, and show emotions.

Table 13.3 outlines related areas in cognitive sciences and technology support.

  • Artificial intelligence: study of cognitive phenomena to implement human intelligence in computers. Technology support: pattern recognition, robotics, computer vision, speech processing.
  • Learning and memory: study of human learning and memory mechanisms to build them on future computers. Technology support: machine learning, database systems, memory enhancement.
  • Languages and linguistics: study of how linguistics and language are learned and acquired, and how to understand novel sentences. Technology support: language and speech processing, machine translation.
  • Perception and action: study of the ability to take in information via the senses such as vision and hearing; haptic, olfactory, and gustatory stimuli fall into this domain. Technology support: image recognition and understanding, behavioral science, brain imaging, psychology, and anthropology.
  • Neuroinformatics: the intersection of neuroscience and information science. Technology support: neurocomputers, artificial neural nets, deep learning, aging, disease control.
  • Knowledge engineering: the study of big data analysis, knowledge discovery, and the transformation and creativity process. Technology support: data mining, data analytics, knowledge discovery, and system construction.
Table 13.3 Cognitive Science and Technology Support

Desired features of cognitive computing systems include the following:

  • adaptive learning, which is learning as information changes and as goals and requirements evolve, resolving ambiguity and tolerating unpredictability
  • interaction with users, allowing users—which may include other processors, devices, and cloud services, as well as people—to define their needs as a cognitive system trainer
  • ability to be iterative and stateful, which means the system may remember previous interactions and may redefine a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete
  • contextual in information discovery, which means the system may understand and extract meaning, syntax, time, location, appropriate domain, regulations, user profiles, process, tasks, and goals, and respond to sensory inputs with visual and gestural effects

The real-world applications of cognitive systems include speech understanding, sentiment analysis, facial recognition, election insights, autonomous driving, and deep learning applications. Deep learning, in particular, requires specific hardware, including graphic processors, digital signal processors, field programmable logic devices, systems on a chip, custom microchips, and application-specific integrated circuits. Some providers of deep learning hardware and related software include the Google Neural Machine Translation System (GNMT), Google Cloud’s Tensor Processing Unit (TPU), TensorFlow, Cambricon’s Neural Processing Unit (NPU), Intel’s Movidius Neural Compute Stick (NCS), the Intel Movidius Neural Compute SDK, and Intel’s Movidius Vision Processing Unit (VPU).

Neuromorphic Computing

Cognitive science is interdisciplinary in nature. It covers the areas of psychology, artificial intelligence, neuroscience, and linguistics. It spans many levels of analysis, from low-level machine learning and decision mechanisms to high-level neural circuitry to build brain-modeled computers. It applies software libraries on clouds or supercomputers for machine learning and neuroinformatics studies. It uses representation and algorithms to relate the inputs and outputs of artificial neural computers. It also designs hardware neural chips to implement brain-like computers referred to as neuromorphic computers, as illustrated in Figure 13.21.

Illustration comparing the human brain’s neural network to a computer motherboard.
Figure 13.21 As a field, cognitive science has made many advances, including the design of hardware neural chips, such as those pictured here, which are used to implement brain-like computers known as neuromorphic computers. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license; credit left image: modification of “Artificial Neural Network with Chip” by mikemacmarketing/Wikimedia Commons, CC BY 2.0; credit right image: modification of “Amiga 3000T motherboard without annotations” by Podstawko/Wikimedia Commons, CC0)

In 1990, Carver Mead, the Gordon and Betty Moore Professor Emeritus of Engineering and Applied Science at the California Institute of Technology, introduced the term neuromorphic computing, which relies on a hardware architecture that models how the human brain uses neurons. This offers the potential for faster, more complex computations while remaining power efficient. The field emerged to compete with traditional computer architectures, and as machine learning grew popular and advanced, neuromorphic computing became a natural platform for machine learning algorithms.

Composed of a network of neurons and synapses, neuromorphic hardware is modeled after the human brain. A neuron is a function that operates on an input. A synapse, which processes neuron output and passes a state to another neuron, can be trained to convert neuron output to states. A memristor is a component that remembers the charge of an electric current; memristors suit neuromorphic computing well because they provide neuroplasticity. Neurons pulse electric signals as input, and output is based on the path taken through the network. The result is a highly connected, parallel architecture that performs memory and processing side by side while consuming little power. Various neuromorphic computing models have been developed on this type of hardware, including the Neuroscience-Inspired Dynamic Architecture (NIDA), Dynamic Adaptive Neural Network Arrays (DANNAs), and Memristive Dynamic Adaptive Neural Network Arrays (mrDANNAs). Intel Labs developed Loihi 2, its second-generation neuromorphic research chip, along with Lava, an open-source software framework, with the goal of driving innovation and adoption of neuromorphic computing solutions.
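To make the neuron-and-synapse picture concrete, here is a minimal software sketch of a leaky integrate-and-fire neuron, the kind of spiking unit that neuromorphic hardware implements directly. This is a toy model for illustration, with made-up parameters, not how chips such as Loihi 2 are actually programmed.

import numpy as np

# Leaky integrate-and-fire neuron: membrane potential accumulates input,
# leaks over time, and emits a spike (then resets) when it crosses a
# threshold. Neuromorphic chips implement many such units in parallel.
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = leak * potential + current  # integrate with leak
        if potential >= threshold:              # fire on threshold crossing
            spikes.append(t)
            potential = 0.0                     # reset after the spike
    return spikes

# Weak input for 20 steps, then strong input for 20 steps.
inputs = np.concatenate([np.full(20, 0.05), np.full(20, 0.3)])
print(simulate_lif(inputs))  # spikes cluster where the input is strong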

Quantum Computing

Quantum computing has applications in experimental physics. It is grounded in quantum mechanics, a theory more fundamental than Newtonian mechanics and electromagnetism that can explain phenomena those theories cannot tackle.

A quantum computer is a machine that performs calculations based on the laws and principles of quantum mechanics, in which the smallest particles of light and matter can be in different places at the same time. Information is stored in a physical medium and manipulated by physical processes. Designs of “classical” computers are implicitly based on the classical framework for physics and can only deal with bits, not qubits. This classical framework has been replaced by the more powerful framework of quantum mechanics. In a quantum computer, one qubit (a quantum bit) could be both 0 and 1 at the same time. So, as Figure 13.22 shows, with three qubits of data, a quantum computer could store all eight combinations of 0 and 1 simultaneously. That means a three-qubit quantum computer could potentially process information more efficiently than a classical three-bit digital computer, depending on the algorithm.

Illustration of Classical bit 0 and 1, A Qubit, and 3 Qubits.
Figure 13.22 In a quantum computer, one qubit, or quantum bit, could be both 0 and 1 at the same time, enabling a quantum computer to store all eight combinations of 0 and 1 simultaneously. (attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)

Typical personal computers today calculate 64 bits of data at a time. A quantum computer with 64 qubits would be 2^64 times faster, or about 18 billion billion times faster. A bit of data is represented by a single atom that is in one of two states, denoted by |0> and |1>. A single bit of this form is known as a qubit. A physical implementation of a qubit could use the two energy levels of an atom. An excited state represents |1>, and a ground state represents |0>. A single qubit can be forced into a superposition of the two states denoted by the addition of the state vectors:

|ψ> = α|0> + β|1>

where α and β are complex numbers and |α|^2 + |β|^2 = 1.

A qubit in superposition is in both of the states |0> and |1> at the same time.

In general, an n qubit register can represent the numbers 0 through 2^n – 1 simultaneously. Entanglement is the ability of quantum systems to exhibit correlations between states within a superposition. Imagine two qubits, each in the state |0> + |1> (a superposition of the 0 and 1). We can entangle the two qubits such that the measurement of one qubit is always correlated to the measurement of the other qubit.

However, if we attempt to retrieve the values represented within a superposition, the superposition randomly collapses to represent just one of the original values. In other words, the wave function changes: rather than persisting, the superposition collapses into a single state with a defined value, representing only one of the original values.
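A toy simulation can make the storage and collapse behavior concrete. This sketch (assuming NumPy) represents a 3-qubit register as 2^3 = 8 complex amplitudes, places it in a uniform superposition, and simulates a measurement that collapses it to a single basis state. It is a classical simulation for intuition only; the amplitudes are explicit here, whereas a real quantum computer never exposes them.

import numpy as np

rng = np.random.default_rng()

# A 3-qubit register is described by 2**3 = 8 complex amplitudes,
# one per basis state |000>, |001>, ..., |111>.
n = 3
state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)  # uniform superposition

# Measurement: the superposition collapses to ONE basis state, chosen
# with probability |amplitude|**2 (here 1/8 each); the other seven
# values are lost.
probs = np.abs(state) ** 2
outcome = int(rng.choice(2**n, p=probs))
print(f"measured |{outcome:0{n}b}>")

state = np.zeros(2**n, dtype=complex)  # post-measurement state
state[outcome] = 1.0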

In the classical computing model, a probabilistic Turing machine (PTM) is an abstract model of the modern (classical) computer. The strong Church-Turing thesis states that a PTM can efficiently simulate any realistic model of computing, meaning that if a problem is difficult for a PTM, it must also be difficult for any other reasonable computing model. For example, factoring is believed to be hard to perform on a Turing machine (or any equivalent model). We do not know whether there is some novel architecture on which factoring is easy. Because we lack certainty, we assume that certain computational problems, such as factoring, possess inherent complexity regardless of the effort put into finding an efficient algorithm.

In the early 1980s, Richard Feynman noted that it appeared unlikely for a PTM to efficiently simulate quantum mechanical systems. Because quantum computers operate as quantum mechanical systems, the model of quantum computing appears to challenge the strong Church-Turing thesis.

Possible applications of quantum computing include efficient simulations of quantum systems, phase estimation, improved time-frequency and other measurement standards such as GPS, factoring and discrete logarithms, hidden subgroup problems, and amplitude amplification. Possible implementations of quantum systems include optical photon computers, nuclear magnetic resonance (NMR), ion traps, and solid-state quantum devices. The optical photon computer operates through the interaction between an atom and a photon inside a resonator, while another approach employs optical devices such as a beam splitter and mirror. NMR represents qubits using the spin of atomic nuclei, with chemical bonds between these spins manipulated by a magnetic field to simulate gates. The spins are initialized by magnetization, and measurement is achieved by detecting induced voltages. Currently, it is believed that NMR will not scale beyond about twenty qubits. However, in 2006, researchers reached a 12-coherence state, showing that scalability up to 12 qubits is feasible using liquid-state nuclear magnetic resonance quantum information processors. Ion traps form qubits using two electron orbits of an ion (charged atom) confined in a vacuum by an electromagnetic field. Additionally, there are two widely recognized solid-state implementations of qubits:

  1. A qubit formed through a superconducting circuit using a Josephson junction, which establishes a weak link between two superconductors. A Josephson junction consists of two superconductors separated by a very thin insulating barrier.
  2. A qubit formed using a semiconductor quantum dot, a nanostructure ranging from ten to several hundred nanometers in size, designed to confine a single electron.

Many papers have explored various aspects of quantum computing, including detailed language specifications. For more information about any of the following examples, perform some further research.

  • Quantum computation language (QCL) by Bernhard Ömer: has a C-like syntax and is very complete
  • Quantum Guarded-Command Language (qGCL) by Paolo Zuliani and others: a high-level imperative language for quantum computing
  • Quantum C by Stephen Blaha: currently just a specification

Global Issues in Technology

Quantum Computing in Global Operations

Quantum computing is impacting industries throughout the world and improving global operations in fields such as banking, health care, manufacturing, and transportation. For example, quantum computing has been used to develop new drugs, design aircraft that are safer and more efficient, provide more robust encryption and online security, and predict the weather with greater accuracy. Quantum computing has the ability to solve problems faster and more effectively compared to classical computing.

In 2023, the value of quantum computing’s global market size was $885.4 million (USD), and it is expected to increase to $12.6 billion by 2032, representing a compound annual growth rate of 34.8%. While North America currently holds the largest share of the quantum computing market, other parts of the world, particularly Europe and Asia, are expected to grow over the next few years.

Intelligent Autonomous Networked Systems and Supersystems

Intelligent autonomous networked systems and supersystems leverage the various supersociety technologies discussed so far. However, one of the missing links appears to be the lack of understanding of what drives human reasoning and planning, which has led to the inability to mimic it to create systems based on artificial general intelligence (AGI). Also, the current approach to deep learning requires using a very large amount of data to train machine learning algorithms and create usable models, which is extremely time-consuming and requires tremendous resources such as data and processing power. Researchers are striving to come up with solutions to these problems.

IANS have the potential to leverage the innovative capabilities of AI, ML, edge computing, and virtualization to offer better human experiences in interconnectivity. Compared to automated networks, which have explicitly defined inputs and outputs in predictable environments, autonomous networks improve operations when functioning in unpredictable environments whose conditions, inputs, and outputs cannot be tested in advance. With IANS, the systems can learn and adapt as conditions change, adjusting to meet whatever needs arise. AGIs are particularly useful in this environment, as they have the ability to work independently, adapting and making adjustments as needed when conditions change.

Knowledge management and reuse of processes and information are being looked into as well. A possible approach to facilitate reasoning in the AGI grand scheme consists of using an open world decentralized hybrid multicloud repository that can be accessed by swarm cognitive robots. These robots could reuse/share their knowledge and adapt their individual behavior in real time according to the context in which they operate.

Footnotes

  • 1. P. C. Wright and J. C. McCarthy, “Empathy and experience in HCI,” in Proceedings of the 2008 Conference on Human Factors in Computing Systems (CHI 2008), Florence, Italy, April 5–10, 2008. http://dx.doi.org/10.1145/1357054.1357156