Saturday, 1 February 2020

Artificial consciousness

Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to "Define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995).

Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC, though there are challenges to that perspective. Proponents of AC believe it is possible to construct systems (e.g., computer systems) that can emulate this NCC interoperation.

Artificial consciousness concepts are also pondered in the philosophy of artificial intelligence through questions about mind, consciousness, and mental states.


 *** 

"Philosophical views"

As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy divides consciousness into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, and are instead characterized qualitatively in terms of “raw feels”, “what it is like” or qualia (Block 1997).


"Plausibility Debate


Type-identity theorists and other skeptics hold the view that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution (Block 1978; Bickle 2003).

In his article "Artificial Consciousness: Utopia or Real Possibility?", Giorgio Buttazzo says that a common objection to artificial consciousness is that "Working in a fully automated mode, they [the computers] cannot exhibit creativity, emotions, or free will. A computer, like a washing machine, is a slave operated by its components."

For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness (Putnam 1967).


"Computational Foundation Argument"


One of the most explicit arguments for the plausibility of AC comes from David Chalmers. His proposal, set out in Chalmers 2011, is roughly that the right kinds of computations are sufficient for the possession of a conscious mind. In outline, he defends the claim as follows: computers perform computations; computations can capture the abstract causal organization of other systems; and mental properties are nothing over and above abstract causal organization, so computers running the right kinds of computations will instantiate the same mental properties.

The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant". Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are "characterized by their causal role". He adverts to the work of Armstrong 1968 and Lewis 1972 in claiming that "[s]ystems with the same causal topology…will share their psychological properties".

Phenomenological properties are not prima facie definable in terms of their causal roles. Establishing that phenomenological properties are amenable to individuation by causal role therefore requires argument. Chalmers provides his Dancing Qualia Argument for this purpose.[7]

Chalmers begins by assuming that agents with identical causal organizations could have different experiences. He then asks us to conceive of changing one agent into the other by the replacement of parts (neural parts replaced by silicon, say) while preserving its causal organization. Ex hypothesi, the experience of the agent under transformation would change (as the parts were replaced), but there would be no change in causal topology and therefore no means whereby the agent could "notice" the shift in experience.

Critics of AC object that Chalmers begs the question in assuming that all mental properties and external connections are sufficiently captured by abstract causal organization.


"Ethics"


If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g., what rights it would have under law). For example, a conscious computer that was owned and used as a tool or as the central computer of a building or larger machine is a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this context. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though the topic has often been a theme in fiction.

The rules for the 2003 Loebner Prize competition explicitly addressed the question of robot rights:

61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[8]



(Research and implementation proposals)


"Aspects Of Consciousness"


There are various aspects of consciousness generally deemed necessary for a machine to be artificially conscious. Bernard Baars (Baars 1988) and others suggested a variety of functions in which consciousness plays a role: Definition and Context Setting, Adaptation and Learning, Editing, Flagging and Debugging, Recruiting and Control, Prioritizing and Access-Control, Decision-making or Executive Function, Analogy-forming Function, Metacognitive and Self-monitoring Function, and Autoprogramming and Self-maintenance Function. Igor Aleksander (Aleksander 1995) suggested 12 principles for artificial consciousness: The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer; the list is not exhaustive.


***Awareness***

Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that a process, not only a state or an object, activates neurons. Awareness includes creating and testing alternative models of each process based on information received through the senses or imagined, and it is also useful for making predictions. Such modeling requires a lot of flexibility: creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.

There are at least three types of awareness:[9] agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.[10]


***Memory***

Conscious events interact with memory systems in learning, rehearsal, and retrieval.[11] The IDA model[12] elucidates the role of consciousness in the updating of perceptual memory,[13] transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system.[14] In IDA, these two memories are implemented computationally using a modified version of Kanerva’s sparse distributed memory architecture.[15]
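To make the idea of a distributed representation concrete, here is a minimal sketch of a Kanerva-style sparse distributed memory. It illustrates only the general technique: IDA's actual modified SDM differs in details not shown here, and the parameters (1,000 hard locations, 256-bit words, an activation radius of 112) are illustrative choices, not IDA's.

```python
import numpy as np

class SparseDistributedMemory:
    """Toy Kanerva-style SDM: patterns are stored across many randomly
    addressed 'hard locations' rather than in any single slot."""

    def __init__(self, n_locations=1000, dim=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius  # Hamming-distance activation radius

    def _active(self, address):
        # A hard location fires when its address lies within the radius.
        return np.sum(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        # Add the pattern (as +1/-1) into every active location's counters,
        # spreading one memory trace across many physical sites.
        self.counters[self._active(address)] += 2 * data - 1

    def read(self, address):
        # Sum counters over active locations and threshold back to bits.
        return (self.counters[self._active(address)].sum(axis=0) > 0).astype(int)

sdm = SparseDistributedMemory()
pattern = np.random.default_rng(1).integers(0, 2, size=256)
sdm.write(pattern, pattern)  # auto-associative store
print(np.array_equal(sdm.read(pattern), pattern))  # exact-cue recall
```

Writing a pattern at its own address makes the memory auto-associative: a sufficiently similar cue activates overlapping hard locations, and summing their counters reconstructs the original pattern, degrading gracefully as the cue gets noisier.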


***Learning***


Learning is also considered necessary for AC. For Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events (Baars 1988). Axel Cleeremans and Luis Jiménez define learning as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".


***Anticipation***


The ability to predict (or anticipate) foreseeable events is considered important for AC by Igor Aleksander.[16] The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.

Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events.[16] An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, so that it can demonstrate that it possesses artificial consciousness in the present and future, not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, executing them only when appropriate in order to simulate and control the real world.
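As a concrete illustration of the prediction-of-consequences idea, here is a minimal sketch of anticipation as model-based action selection. Everything in it (the `forward_model` and `cost` callables, the rollout horizon) is a hypothetical placeholder, not a mechanism proposed by Aleksander or Dennett.

```python
def anticipate_and_act(state, candidate_actions, forward_model, cost, horizon=3):
    """Choose the action whose simulated consequences look best."""
    best_action, best_cost = None, float("inf")
    for action in candidate_actions:
        simulated, total = state, 0.0
        for _ in range(horizon):
            simulated = forward_model(simulated, action)  # predicted next state
            total += cost(simulated)                      # penalty for bad outcomes
        if total < best_cost:
            best_action, best_cost = action, total
    return best_action

# Toy usage: states are numbers, actions nudge the state toward a target of 0.
print(anticipate_and_act(
    state=5.0,
    candidate_actions=[-1.0, 0.0, +1.0],
    forward_model=lambda s, a: s + a,
    cost=abs,
))  # -1.0: the action whose predicted trajectory stays closest to the target
```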


***Subjective Experience***


Subjective experiences, or qualia, are widely considered to be the hard problem of consciousness; indeed, the hard problem is held to pose a challenge to physicalism, let alone computationalism. On the other hand, other fields of science face limits on what can be observed, such as the uncertainty principle in physics, and those limits have not made research in those fields impossible.



(Role Of Cognitive Architectures)


Main article: Cognitive architecture >> https://en.wikipedia.org/wiki/Cognitive_architecture


The term "cognitive architecture" may refer to a theory about the structure of the human mind, or any portion or function thereof, including consciousness. In another context, a cognitive architecture implements the theory on computers. An example is QuBIC: Quantum and Bio-inspired Cognitive Architecture for Machine Consciousness. One of the main goals of a cognitive architecture is to summarize the various results of cognitive psychology in a comprehensive computer model. However, the results need to be in a formalized form so they can be the basis of a computer program. Also, the role of cognitive architecture is for the A.I. to clearly structure, build, and implement it's thought process.



(Symbolic Or Hybrid Proposals) 


***Franklin's Intelligent Distribution Agent***


Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory (Baars 1988, 1997). His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural-language e-mail dialog while obeying a large set of Navy policies.

The IDA computational model was developed during 1996–2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled (see Franklin 1995 and Franklin 2003 for details).

While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task." IDA has been extended to LIDA (Learning Intelligent Distribution Agent).
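The workspace cycle at the heart of a GWT-style system can be summarized in a few lines. The following is a pedagogical toy, in Python rather than IDA's quarter-million lines of Java, and the codelet behavior and activation scheme are invented for illustration: many small codelets run independently, the most active proposal wins the workspace, and its content is broadcast to all listeners.

```python
import queue
import random
import threading

proposals = queue.Queue()  # codelets post (activation, content) here
listeners = []             # everything that receives the broadcast

def make_codelet(name):
    # Each codelet is a small, independent process that notices something
    # and proposes it for the workspace with an activation level.
    def run():
        proposals.put((random.random(), f"{name}: salient content"))
    return run

def workspace_cycle():
    # Competition: the most active proposal takes the global workspace.
    gathered = []
    while not proposals.empty():
        gathered.append(proposals.get())
    if gathered:
        _, content = max(gathered)
        for listener in listeners:  # the "conscious broadcast"
            listener(content)

listeners.append(lambda content: print("broadcast received:", content))
threads = [threading.Thread(target=make_codelet(f"codelet-{i}")) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
workspace_cycle()
```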


***Ron Sun's cognitive architecture CLARION***


CLARION posits a two-level representation that explains the distinction between conscious and unconscious mental processes.
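A minimal sketch of what a two-level representation can look like in code, assuming one simple way of combining the levels (explicit rules take precedence, the implicit network answers otherwise); the class and rule names are invented and are not CLARION's actual API or dynamics.

```python
class TwoLevelAgent:
    def __init__(self, implicit_net, explicit_rules):
        self.implicit_net = implicit_net      # bottom level: trained, "unconscious"
        self.explicit_rules = explicit_rules  # top level: verbalizable condition/action rules

    def act(self, state):
        # Top level first: explicit rules fire when their conditions match.
        for condition, action in self.explicit_rules:
            if condition(state):
                return action
        # Otherwise fall back on implicit, subsymbolic associations.
        return self.implicit_net(state)

agent = TwoLevelAgent(
    implicit_net=lambda s: "habitual response",
    explicit_rules=[(lambda s: s.get("alarm"), "deliberate response")],
)
print(agent.act({"alarm": True}))   # top level fires: deliberate response
print(agent.act({"alarm": False}))  # bottom level answers: habitual response
```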

CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill-learning tasks have been simulated using CLARION, spanning the spectrum from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task (Sun 2002). Among them, SRT, AGL, and PC are typical implicit learning tasks, very much relevant to the issue of consciousness, as they operationalize the notion of consciousness in the context of psychological experiments.


***Ben Goertzel's OpenCog***


Ben Goertzel is pursuing an embodied AGI through the open-source OpenCog project. Current code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, work being done at Hong Kong Polytechnic University.


(Connectionist proposals)


***Haikonen's cognitive architecture***


Pentti Haikonen (2003) considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many, e.g. Freeman (1999) and Cotterill (2003). A low-complexity implementation of the architecture proposed by Haikonen (2003) was reportedly not capable of AC, but did exhibit emotions as expected. See Doan (2009) for a comprehensive introduction to Haikonen's cognitive architecture. An updated account of Haikonen's architecture, along with a summary of his philosophical views, is given in Haikonen (2012), Haikonen (2019).


***Shanahan's cognitive architecture***


Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination") (Shanahan 2006). For discussions of Shanahan's architecture, see (Gamez 2008) and (Reggia 2013) and Chapter 20 of (Haikonen 2012).


***Takeno's self-awareness research***


Self-awareness in robots is being investigated by Junichi Takeno[17] at Meiji University in Japan. Takeno asserts that he has developed a robot capable of discriminating between its own image in a mirror and any other robot having an identical appearance,[18][19] and this claim has already been reviewed (Takeno, Inaba & Suzuki 2005). Takeno asserts that he first contrived the computational module, called a MoNAD, which has a self-aware function, and then constructed the artificial consciousness system by formulating the relationships between emotions, feelings, and reason, connecting the modules in a hierarchy (Igarashi, Takeno 2007). Takeno completed a mirror-image cognition experiment using a robot equipped with the MoNAD system. Takeno proposed the Self-Body Theory, stating that "humans feel that their own mirror image is closer to themselves than an actual part of themselves." He holds that the most important point in developing artificial consciousness, or in clarifying human consciousness, is the development of a function of self-awareness, and he claims to have demonstrated physical and mathematical evidence for this in his thesis.[20] He also demonstrated that robots can study episodes in memory where the emotions were stimulated and use this experience to take predictive actions so that unpleasant emotions do not recur (Torigoe, Takeno 2009).


***Aleksander's impossible mind***


Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and claims in his book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language.[21] Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain is a neural state machine—is open to doubt.[22]


***Thaler's Creativity Machine Paradigm***


Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI),[23][24][25] or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies.[26] He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity.[27][28][29] Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to stream of consciousness.


***Michael Graziano's attention schema***


Main article: Michael Graziano § The brain basis of consciousness


In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis", proposing a theory of consciousness as an attention schema.[34] Graziano went on to publish an expanded discussion of this theory in his book "Consciousness and the Social Brain".[2] This Attention Schema Theory of Consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial location of a person's body.[2] This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and which should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself: the brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
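The two ingredients of the theory (attention itself, and a simplified model of that attention) can be caricatured in a few lines. This is only an illustrative toy under invented names, not Graziano's formalism:

```python
def attend(inputs):
    # Attention proper: competitive selection boosting the strongest signal.
    return max(inputs, key=inputs.get)

def attention_schema(focus):
    # The schema: a coarse, descriptive model of the attentional state,
    # usable for self-report ("I am aware of X") and for prediction.
    return {"i_am_attending_to": focus, "description": "subjective awareness"}

inputs = {"red light": 0.9, "hum of a fan": 0.2, "itch": 0.4}
print(attention_schema(attend(inputs)))
# {'i_am_attending_to': 'red light', 'description': 'subjective awareness'}
```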


( "Self-Modeling" )


In order to be "self-aware," robots may use internal models to simulate their own actions.
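A minimal sketch of that idea, with an invented reach value and trivial kinematics purely for illustration: the robot rehearses an action against an internal model of its own body before executing it.

```python
ARM_REACH_M = 1.0  # part of the robot's model of itself (metres; illustrative)

def simulate_reach(target_distance_m):
    # Internal simulation: would the action succeed, without moving?
    return target_distance_m <= ARM_REACH_M

def act(target_distance_m):
    if simulate_reach(target_distance_m):
        return "reach for the object"
    return "move closer first"

print(act(0.8))  # reach for the object
print(act(2.5))  # move closer first
```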






Will Robots Usher in the Lights-Out Data Center?



The idea of using robots to automate data center operations has been the object of years of science experiments and proofs of concept. In November, a data center startup called TMGcore demonstrated a robotic system that can replace servers housed within an immersion cooling tank.

This ability to swap out servers creates new possibilities in the industry’s long-running effort to create a “lights out” unmanned data center. Which raises a question: Are we on the verge of a new frontier in data center automation?

Today we’ll look at the history of robotics in the data center, look at some implementations, and discuss how they may help shape the data centers of the future.


First, why robots? There are three important reasons for the fascination with the use of robots in the data center industry:

The first is the data center industry’s long tradition of continuous improvement in automation and efficiency, which has intensified as teams of system admins and SREs have had to manage a larger and larger volume of servers.

The second is that automation has been advanced as a way to improve uptime, by detecting potential problems and addressing them before they manifest as failures and downtime.

The third is that robotics has been discussed as a potential tool for addressing the looming workforce shortage in the data center industry. Some observers believe we are likely to see shortages of skilled data center staff as experienced employees reach retirement age.
It’s hard to talk about AI and robots without noting the public anxiety about the technology becoming self-aware, or ushering in our New Robotic Overlords. It’s not an accident that Americans wonder if artificial intelligence (AI) will turn the tables on its creators and subjugate humanity. It’s a reflex drilled home by decades of blockbuster Hollywood movies like the “Terminator” series or “The Matrix” series or “Alien” or “Westworld” or “Avengers: Age of Ultron” or “Tron” or “Blade Runner” or “War Games” or “I, Robot” (the Will Smith version) or … well, you get the picture.


So it’s important to note that when we talk about robots in the data center, we mean industrial robots that bear little resemblance to the humanoid Terminators, and are closer to the large mechanical arms you see on factory floors and assembly lines.

Folks have been talking about an automated “lights out” data center for a long time. In 2006 HP announced plans to move to a “Lights Out” model. Five years later, AOL said it had created a small, completely automated, unstaffed data center.

More recently, EdgeConneX has built a series of regional data centers that can operate unmanned, using remote monitoring, sensors and an advanced “edge operating system” to track operations and deploy “warm hands” staff only when needed.

In 2013, I was talking with Bill Kleyman, who has written for both Data Center Frontier and Data Center Knowledge, which I was editing at that time. Bill loves all kinds of new technology, and was fascinated with the potential for robots to operate in data center environments. So he did a series of stories about the rapidly advancing capabilities of industrial robots created by companies like DevLinks or Japan’s Fanuc. This was one of the first meaningful discussions of how factory robots might be adapted for data center use, including equipment replacement, as well as the major obstacles involved. So what does the road to data center robots look like?


(The History of Robots in the Data Center)

Robotics has been used for many years in cold storage operations for older data. My first encounter with a robot in a data center came in 2015. I was touring the massive Facebook data center campus in North Carolina, and got an early look at a new storage system for rarely accessed data, known as cold storage. I walked into a large, mostly dark data hall to see a row of racks housing the new cold storage system, which used high-capacity Blu-Ray disks to store old photos and status updates. They were housed in large rack-based storage units that hold thousands of Blu-Ray disks. Each rack included a robotic retrieval system, housed in the bottom of the rack. When data was requested, the robotic arm would spring into action, travel along tracks on either side of the rack, fetch a Blu-Ray disk, pull the data off the disc and write it to a live server.

Even before the Facebook Blu-Ray system, tape archives at Google and at high-performance computing data centers used robotic arms to locate and retrieve backup storage tapes.


( "In 2013, IBM got attention when it adapted some models of iRobot Roomba vacuum to monitor temperatures in its data centers" )


In 2013, IBM got attention when it adapted some models of the iRobot Roomba vacuum to monitor temperatures in its data centers. IBM built a Roomba that moseys around a data center with sensors and a webcam attached, measuring temperature and humidity and creating maps of their distribution. IBM used the resulting maps to see where hot spots were developing. The robots could also scan RFID tags and manage inventory. At the time, IBM was supposedly using these in 9 data centers, but I haven’t seen any subsequent accounts of this being deployed at scale, or that it was anything more than a science experiment.
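The mapping side of such a roving sensor is simple to picture in code. The following is a hedged sketch of the general approach under invented names and units, not IBM's actual system: readings are binned into a coarse floor grid and averaged, so cells well above the room mean stand out as hot spots.

```python
from collections import defaultdict

def heat_map(samples, cell=1.0):
    """samples: iterable of (x_metres, y_metres, temp_celsius) readings
    logged as the robot roams the data hall."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, temp in samples:
        key = (int(x // cell), int(y // cell))  # which grid cell this falls in
        sums[key][0] += temp
        sums[key][1] += 1
    # Average per grid cell; cells well above the room mean are hot spots.
    return {key: total / count for key, (total, count) in sums.items()}

readings = [(0.2, 0.3, 21.5), (0.4, 0.1, 22.0), (5.1, 4.9, 31.0)]
print(heat_map(readings))  # {(0, 0): 21.75, (5, 4): 31.0} -- a hot spot at (5, 4)
```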

In 2018, Google’s Joe Kava disclosed that the company was using industrial robots to smash used hard drives that were being decommissioned. When a disk can’t be verified as being fully wiped, they shred the drive.

One of the most interesting applications of robotics has been in interconnection. In Frankfurt, the interconnection specialist DE-CIX created a robotic system it named Patchy McPatchbot, which has automated the provisioning and upgrading of networks. Patchy is mounted on an Optical Distribution Frame instead of a standard rack, and its robotic arm can plug and unplug connectors, as well as manage cables.




There’s also a company called Wave2Wave, which makes a robotic optical fiber switch that likewise uses robotic arms to manage cross connections. The Rome switch is 19 inches wide and can fit into a standard rack. Wave2Wave has worked with NTT on automating fiber deployments in Japan. The robot knows the position of ports within the chassis, and can manage cables and slack with precision.





So how do we make the transition to robot server management? Interestingly, the front lines of this effort are in the field of immersion cooling, where the servers are dunked into a tank of liquid coolant. I had never thought that much about how immersion could lead to the creation of unmanned data centers powered by AI and robots.


But then I spoke with Scott Noteboom, who many readers may know from his work at Yahoo, where he designed an extremely efficient Compute Coop – a data center that borrowed design concepts from the thermal management of a chicken coop. Scott has been a thought leader in applying AI to managing data center infrastructure, and just joined immersion specialist Submer as its CTO. He has a vision for a future filled with autonomous data centers, which could look very different from the data halls you see today.


Noteboom believes robots and software can create a data center environment completely optimized for machines, with no humans involved. He helped develop AI agents that can monitor the sound of generators, listening for any anomalies that require attention. He points to the highly automated operations of warehouses as an example of what data centers might be able to accomplish. Like warehouses, data centers feature an organized structure, a lot of automation, and a floor plan built to be as efficient as possible.

Now that robots can smash hard drives, and swap out storage media and network connections, the next frontier is servers.

(What Does the Future Hold for Robots in the Data Center?)


Shortly after I spoke to Noteboom, I had a briefing with TMGcore, which has turned that vision into a working automation platform it calls OTTO, demonstrated on the expo floor at this week’s SC19 conference in Denver. John-David Enright, the CEO of TMGcore, shared the details of the company’s technology, which uses two-phase immersion cooling, in which servers are immersed in coolant fluid that boils off as the chips generate heat, removing the heat as it changes from liquid to vapor. Enright and his team designed micro modular data centers to house the immersion tanks, and wanted to create a robotics system to swap the servers.

So TMGcore worked with Olympus Controls, which creates robotic systems for factory automation, and adapted its technology for physical server management. The robotic arm was customized so it can latch onto a server, lift it out of the immersion tank, and place it into an enclosure next to the tank that houses backup servers and has open slots where the robotic arm can place the faulty server. The system then grabs one of the backup servers, lifts it into the tank, and pops it into the plug-n-play backplane that provides the power and fiber connections.
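That swap sequence is essentially a small state machine. Here is a hedged sketch of it; the step names paraphrase the description above, and the control logic is invented for illustration rather than taken from TMGcore's or Olympus Controls' software.

```python
from enum import Enum, auto

class SwapStep(Enum):
    LATCH_FAULTY = auto()
    LIFT_FROM_TANK = auto()
    STORE_IN_ENCLOSURE = auto()  # open slot next to the tank
    GRAB_SPARE = auto()
    LOWER_INTO_TANK = auto()
    SEAT_IN_BACKPLANE = auto()   # plug-and-play power + fiber connections
    DONE = auto()

def run_swap(execute):
    # Execute each step in order; abort on the first failure so a human
    # (or a supervisor process) can intervene.
    for step in SwapStep:
        if step is SwapStep.DONE:
            return True
        if not execute(step):
            print(f"aborted at {step.name}")
            return False

print(run_swap(lambda step: True))  # simulated successful swap: True
```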





TMGcore has just demonstrated its platform for the first time, so it remains to be seen how the market will embrace the platform. But it poses a question: What is the future of robotics in the data center?

First, it’s important to be clear that there are always going to be humans involved in data centers. Most data centers, anyway. But in the near future, we will begin to see a disconnect between the enormous volume of servers and storage that must be managed and the volume of skilled workers available to manage them. Robotics will be part of the toolkit the data center industry must use as it seeks to accelerate the automation of cloud infrastructure.

In the unmanned data centers operated by EdgeConneX, the automation is driven by sophisticated software, management tools and sensors. Preparing an entire facility for robot server swapping is a larger challenge, and I don’t think we’re close to that. One thing data centers have going for them is that they are standardized. Robots are good at repeatable tasks with an unchanging canvas or environment, and they can work with great precision in these types of environments.


That’s why robotic management is likely to be seen first at hyperscale operators who can control their entire environment, and who can begin to design their servers, racks and data halls with robotic management in mind. Amazon’s extensive use of robotics in its warehouses and distribution centers provides a glimpse of what type of optimization is possible when you design around the capabilities of robots. Unmanned or lightly-staffed environments also allow efficiency gains by allowing the facility to operate at higher temperatures and humidity, which has always been a focus for hyperscalers like Google and Microsoft.

In hyperscale data centers, you could design a rail-based system, much like the one created by TMGcore, with racks positioned adjacent to the rails. This would likely work best with racks positioned horizontally, rather than vertically, much as they are in the immersion tanks housing the servers at TMGcore.

An interesting wrinkle is that a robot-managed facility creates interesting possibilities for expanding vertically, stacking rows of data containers above one another. Human management and access would always be required, so platforms or gangways would need to be included.


( "A robot-managed facility creates interesting possibilities for expanding vertically, stacking rows of data containers above one another" )

As we’ve noted on DCF, many data center developers are already building three and four-story buildings – or in the case of Facebook, a massive 11-story facility in Singapore – so this tracks with industry trends. This concept would pose some interesting challenges around cabling and design.

Colocation and multi-tenant data centers are a different story. These facilities have to be flexible enough to accommodate whatever diverse types of equipment their customers deploy.

But one area where the lights-out data center will become a priority is edge computing, which could soon see hundreds and perhaps even thousands of small data centers in distributed locations. As a practical matter, the majority of edge data modules and enclosures will need to operate without a human on site.

I believe that we are in the early days of robotics in the data center. While there have been interesting early projects, the greatest potential is yet to come.









Types Of Intelligence And AI

****

Assisted Intelligence - Augmented Intelligence - Autonomous Intelligence, etc

There are various types of artificial intelligence that federal agencies can take advantage of to achieve their missions.

****

While artificial intelligence has been around for some time, only recently has it reached the level many predicted for the technology at the outset. Today, the growing toolkit of AI is capable of more and more human-like functions and is delivering on its potential to advance virtually every aspect of everyday life, including how government agencies function.

Computer vision, natural conversation, machines capable of learning over time and other advanced functions of AI offer the potential to enhance virtually all government operations, including defense, space exploration and recognizing and managing disease outbreaks.

Despite early hesitation about AI, more than 80 percent of early adopter organizations surveyed are using or are planning to use AI, with more than 90 percent considering these cognitive technologies to be of extreme strategic importance for their internal business processes, according to Deloitte.

It is important to note that the goal of AI-augmented government is not to replace humans; the goal is to take advantage of the best capabilities of both humans and technology. How can governments best do this to get the fullest advantage of AI? To answer that question, it’s helpful to first discuss the three models (assisted, augmented and autonomous) and four types (reactive machines, limited memory, theory of mind and self-awareness) of AI.

****


(What Is Assisted Intelligence?)


Considered the most basic level of AI, assisted intelligence is primarily used as a means of automating simple processes and tasks by harnessing the combined power of Big Data, cloud and data science to aid in decision-making. Another benefit is that by performing more mundane tasks, assisted intelligence frees people up to perform more in-depth tasks. Requiring constant human input and intervention, assisted intelligence only works with clearly defined inputs and outputs. The main goal of assisted intelligence is improving things people and organizations are already doing — so, while the AI can alert a human about a situation, it leaves the final decision in the hands of end users. The exception would be those cases in which a predetermined action has been clearly defined.
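The decision pattern described here (alert a human, and act autonomously only where a predetermined response has been explicitly configured) can be sketched in a few lines. The event names and severity threshold below are invented for illustration, not drawn from any agency system.

```python
PREDETERMINED_ACTIONS = {"disk_full": "rotate logs"}  # clearly defined cases

def assisted_decision(event, severity, notify_human):
    if event in PREDETERMINED_ACTIONS:
        return PREDETERMINED_ACTIONS[event]      # automation is allowed here
    if severity > 0.7:
        notify_human(f"review needed: {event}")  # alert, but do not decide
    return None                                  # final call stays with the end user

print(assisted_decision("disk_full", 0.9, notify_human=print))      # rotate logs
print(assisted_decision("unusual_login", 0.9, notify_human=print))  # alert, then None
```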



(What Is Augmented Intelligence?)


The next level of AI is augmented intelligence, which focuses on the technology’s assistive role. This cognitive technology is designed to enhance, rather than replace, human intelligence. This “second-tier” AI is often what people consider when discussing the overall concept in general, with machine learning capabilities layered over existing systems to augment human capabilities.

Augmented intelligence allows organizations and people to do things they couldn’t otherwise do by supporting human decisions, not by simulating independent intelligence. Among the models included under this umbrella are machine learning, natural language processing, image recognition and neural networks. 

The main difference between assisted and augmented intelligence is that augmented intelligence can combine existing data and information to suggest new solutions rather than simply identifying patterns and applying predetermined solutions. Thanks to deep learning capabilities and continuous training, augmented intelligence machines are able to make better and faster decisions than humans, which can be especially helpful in time-sensitive applications.



(What Is Autonomous Intelligence?)


The most advanced form of AI is autonomous intelligence, in which processes are automated to generate the intelligence that allows machines, bots and systems to act on their own, independent of human intervention. Once considered mainly the stuff of science fiction, autonomous intelligence has become a reality. The thought is that, like human beings, AI needs autonomy to reach its full potential. While autonomous intelligence applications are growing, organizations are not yet — and may never be — ready to hand total control over to machines. With this in mind, AI should only be given autonomy within strict lines of accountability — a belief that is in no small part due to those aforementioned sci-fi portrayals.

Additionally, autonomous intelligence is not a good fit for all applications, particularly those where it is difficult to quantify the best outcome. In these situations, AI can serve as an automated adviser, with humans retaining the responsibility of accepting and implementing decisions made by the technology based on any more qualitative, intangible factors that must be considered.


****

(Reactive Machines, Limited Memory, Theory of Mind, Self-Awareness)


Of the four types of AI, Reactive Machines are the most basic. Rather than storing and learning from memories or using past experiences to determine future actions, reactive machines merely perceive occurrences in the world and react to them. An example would be IBM’s Deep Blue, which was able to defeat chess champion Garry Kasparov by observing and reacting to the position of the various pieces on the board.

Limited memory machines are a step above reactive machines in that they do retain data, but only for a certain period of time and without adding that data to their library of experiences for future use. Many self-driving cars use this type of AI, storing data like the relative speed and distance of other cars, speed limit, and other data that allows them to navigate roads.
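The distinction between the first two types can be made concrete in code. This is a toy sketch under invented names: a reactive machine maps the current percept straight to an action, while a limited-memory machine also consults a short, expiring buffer of recent observations.

```python
from collections import deque

def reactive_policy(percept):
    # Reactive machine: no stored state, the output depends only on
    # the present input.
    return "brake" if percept["obstacle"] else "cruise"

class LimitedMemoryAgent:
    def __init__(self, horizon=5):
        self.recent = deque(maxlen=horizon)  # old data simply falls off the end

    def act(self, percept):
        self.recent.append(percept["lead_car_speed"])
        avg = sum(self.recent) / len(self.recent)
        # The decision uses short-term history (is the lead car slowing?)
        # but nothing is kept beyond the buffer's horizon.
        return "slow down" if percept["lead_car_speed"] < avg else "cruise"

agent = LimitedMemoryAgent()
for speed in (30, 30, 28, 25):
    print(agent.act({"lead_car_speed": speed}))  # cruise, cruise, slow down, slow down
```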

At the moment, theory of mind is only theoretical in AI as researchers attempt to build technologies that are capable of imitating human thoughts, emotions, memories and mental models by forming representations about the world and about other entities that exist within it. For example, the hope is to build computers that can perceive human intelligence and how people’s emotions are impacted by events and their environment in an effort to better relate to humans.

Like theory of mind, self-aware machines are not yet a reality. There are those who believe this to be the ultimate end goal of AI, with machines operating as humans do with an eye on self-preservation, predicting their own wants and needs, and relating to others as equals. However, there is debate about whether a machine can become truly self-aware, like Skynet in the Terminator movies.


(How to Use Assisted, Augmented and Autonomous Intelligence)


Regardless of the specific type, AI has numerous applications for federal agencies. Perhaps the most beneficial of these falls under the heading of assisted intelligence, with the technology taking over simple tasks currently performed by humans. According to Bill Eggers, executive director of Deloitte’s Center for Government Insights, federal employees spend about 4.3 billion hours per year on a variety of tasks, including recording and documenting information, handling objects and much more. Deloitte estimates that currently available AI and robotic process automation could free up about 1.3 billion of those hours by automating “the more menial sorts of tasks that most people really don’t like doing anyway,” Eggers says.

“We now have the ability to free up all of that time for more high-value, human sorts of activities, and that’s gotten a lot of attention throughout the federal government,” he says. “That enables quantum leaps in productivity for different employees and departments.”

Data management is another area where federal agencies could see major advantages. The federal government is digitizing more than 235 million pages of records, with the hope of reaching 500 million by fiscal year 2024, Eggers says.

“You can just imagine the value of intelligent machines processing this vast trove of data. With connected sensors and the Internet of Things producing even more data, this is a real sea change in how governments operate,” he says. “There is not going to be a technology that’s going to have as big an impact as AI on the public sector over the next 10 years.”

Recognizing its potential to significantly improve processes and productivity, federal agencies are working with companies like Google and Microsoft to harness the full power of AI. One example is Google’s work with researchers from NASA’s Frontier Development Lab to help identify life beyond Earth using Google Cloud AutoML to identify patterns in massive data sets.

“Google CloudML’s resources helped researchers root out false positives, rapidly classify light curves and identify key variables they hadn’t noticed yet, allowing data jobs to run in seconds, and at 96 percent accuracy,” says Mike Daniels, vice president of global public sector for Google Cloud. “This eight-week session of AI-fueled rapid experimentation and iteration guided researchers in the search for exoplanets, where intelligent life may still be waiting to be found.”

****

According to Susie Adams, CTO of Microsoft Federal, Microsoft’s research labs have been investing in AI since the first lab was founded in 1991. A recent technology to come out of this research is Healthcare Bot, which the Department of Health and Human Services uses to help quickly connect doctors with suitable patients for medical testing and clinical trials.

“With the help of computing power from cloud platforms such as Microsoft Azure, government agencies can now weave AI into the core of their day-to-day citizen interaction more efficiently, without the need to build expensive supercomputers used to do this type of work in the past,” Adams says.

Improving citizen engagement is another area where AI can assist agencies. According to Daniels, 85 percent of citizens expect the same level of service or better from the government as they receive from private companies. By leveraging AI and machine learning, agencies can take data-driven approaches toward better citizen engagement.

“We’ve seen our AI used to reveal hidden patterns faster in everyday scenarios like traffic jams and larger issues like urban blight. By helping agencies make sense of diverse, complex data sets, these agencies can empower government workers, who can then provide better service to their citizens,” Daniels says.

AI also allows government to transition from reacting to problems to focusing more on anticipating problems and being able to prevent them ahead of time, a model known as anticipatory government.

The Centers for Disease Control and Prevention has already implemented this type of function, using data analytics to track variables and public health issues to combat diseases, such as measles outbreaks. This is one reason Eggers says anticipatory government will be one of the most important benefits of AI over time.

“This is going to lead to a lot of lives saved, less crime, less disease and a variety of other things that will contribute to a better overall quality of life,” he says.

At the moment, the primary challenges to wider AI adoption among federal agencies are technology and strategy. Eggers says that while the government spends about $90 billion annually on technology, a lot of that is allocated to the operation and maintenance of legacy systems, some of which are 20, 30 or even 40 years old.

“What needs to happen is both new and fresh investments into AI, but also moving a lot of those investments toward these dramatic, productivity-enhancing technologies,” he says. “But what is really needed at this point is a coherent strategy. Otherwise, you won’t get the full benefits.”



****

This Article Was Created And Posted By

Vexed Inc - An Artificial Intelligence Company Of The Future - Exploring New Technologies And Sciences For A Better World, And Bringing AI To The Public Sector And Beyond By Building A Successful Business Network And Structure.

We also work in many other fields of interest and expertise.