

PSYCHOLOGY AND STRUCTURED DESIGN OF ARTIFICIAL

INTELLIGENT SYSTEMS

Introduction to Artificial Intelligent System’s Mind

Dr. L. M. Polyakov

Globe Institute of Technology





To my family







Knowledge is not absolute. This book is not the last word; it is an invitation to a discussion of the understanding and direction of development of this new class of systems. It can be used to develop specifications for Artificial Intelligent System design and applications. Collective work and the evolution of knowledge will create a better understanding of the philosophy of these systems.

All basic ideas in this book are presented in a strictly structured and dynamic way rather than as final results. As a textbook it can create an atmosphere of intense intellectual activity in the classroom. Examples of relatively simple realizations of the basic ideas by ordinary students should give students a sense of confidence.

All definitions and system descriptions apply only to Artificial Intelligent Systems, but in some cases they can also help in understanding the psychology of natural systems.








PREFACE

The Problem

Less than three hundred years ago human society did not have motorized transportation in the streets. The car changed human behavior and life. People learned how to live in the new environment. They learned the new traffic rules, how to communicate with a car through its control system, and the new safety rules.

The 21st century is the century of the man-intelligent-machine society: a combination of intelligent machines and human beings. The booming business-intelligence market grows at an annual rate of 11.5 percent, and the business-performance-management category at 12.8 percent, compared with overall growth in the software market of just 8 percent [76]. Japan's robot market is estimated at $4.5B and is predicted to reach $16B by 2010.


Artificial Intelligence with knowledge accumulation through the Internet will drastically change the world. People will have to learn how to live in the new environment. It is very important to understand the psychology and behavior of the new inhabitants of the Earth. The American Society for the Prevention of Cruelty to Robots (ASPCR) tries to define the relationship between human society and robots.

As will be shown later, the actual behavior of fully autonomous advanced artificial intelligent systems cannot be predicted in some cases. It is therefore important to anticipate possible dangerous results of their behavior and to protect the environment and human beings from their unauthorized actions (see FREE WILL AND ACTIONS and MORAL AND LAW). Today these are not sci-fi or horror-movie problems; they are real-world concerns. We must define limitations of the AIS that would be acceptable to human society; if we do not define and enforce such limitations, one day we may find ourselves past the point of no return, like a vertical-takeoff aircraft in the middle of lifting (it looks like balancing on the top of a geyser; the process cannot be controlled by a human).


Although some advances have been made recently in machine learning and artificial systems design, major issues remain unresolved. These concern abilities that are difficult to mimic in machines but that humans and animals display ubiquitously, such as adaptation, generalization, continuous learning with experience, conceptualization, and so on. Do neural brain cells provide a computational platform with characteristics and representations that could permit such abilities to be expressed in machines and applied in practice? Do neural brain processes use unique methods of "computation" in general, in contrast to those used in current computer and integrated-circuit technologies?

The Solution

For some contemporary philosophers, the soundest approach to problems in the philosophy of mind is to translate the mental into a set of functions. One version of functionalism that attracts wide attention is found in such specialized fields as artificial intelligence and expert systems [67].




IBM's chess-playing computer defeated Kasparov, one of the greatest chess masters of the age, in 1997. What is philosophically interesting about such outcomes is not that computers can outperform humans, but that the performance suggests that the best understanding of human mental operations is computational.

There are sound philosophical and conceptual reasons for caution here. It is far from clear that a computer can "play" chess, or "play" any game at all, or can in any sense have the cultural resources with which to recognize an activity as a game. And yet now we discuss the mental life of computers!

Development of the Artificial Intelligent System (AIS) is the process of automating the activities of an intelligent system. Artificial Intelligent System theory includes the Psychology of the AIS (the theory of the central control system), the Physiology of the AIS (the theory of the functions of the systems and their parts: Distributed Control Theory), and the Anatomy of the AIS (the science of the shape and structure of systems and their parts).

It is impossible to develop and automate any process without understanding its nature in clear engineering terms. That is the reason to learn as much as possible about the psychology of intelligent machines before starting AIS development. Being neither a physical science nor a biological science in the strict sense, psychology has evolved as something of an engineering science: the General Theory of Control. But the "Theory of Intelligent Control" is more advanced than the theory of conventional control systems. Applying engineering methods to the analysis of psychological abilities and problems permits a better understanding of the intellectual abilities of machines, and of humans as well. It is important to understand the abilities and limitations of natural and artificial intelligent systems.

The General Theory of Control was first presented under the name Cybernetics [78]. Cybernetics is the study of communication and control, typically involving regulatory feedback, in living organisms, machines, and organizations, as well as their combinations. It is an earlier but still-used generic term for many of the subject matters that are increasingly subject to specialization under the headings of adaptive systems, artificial intelligence, complex systems, complexity theory, control systems, decision support systems, dynamical systems, information theory, learning organizations, mathematical systems theory, operations research, simulation, and systems engineering.

There are two main steps in preparing information:

1. Understand the object or process (define and describe it in engineering terms).

2. Organize the information (present it in a structured, algorithmic form) in a way that makes it understandable how the process works.

Information should be described as simply as possible. An algorithm demonstrates the concept of a function.




The question is: is it possible to study the psychology of systems that do not yet exist? The answer is "yes". The goal of this research is not only the analysis of existing systems but also the preparation of a layout for the design process of new systems.

The author tries to cover this problem and presents a draft layout of the Psychology of Artificial Intelligent Systems. It lays the groundwork for a better understanding of the behavior of Artificial Intelligent Systems and of the AIS design process for different areas of application. It is understandable that the intelligence of AIS and of natural systems has a lot in common.


This book can also help reach a better understanding of human psychology. It can be used in undergraduate courses in computer science, management, psychology, and philosophy, and also in preparing students of all specializations to understand the new realities of the 21st century.

As a college course, Psychology of the AIS has greater freedom for discussion because it is not limited by the stereotypes and dogmas of the human sciences such as psychology and social studies. Students of the Globe Institute of Technology, under the author's supervision, developed all working examples in this book.

The Definition

"Everything is illusive in this Wild Dynamic World" (Russian song). What is intelligence and what is a mind? What is an artificial life and what is a natural one? What is consciousness and what is subconsciousness? What is creativity and what is superintelligence? What is autonomy and what are emotions? What is an Artificial Intelligent System's intuition and what is hypothesis generation? What is fairness and what is a fair deal? What is natural reproduction and what is artificial reproduction? Are there limits to ownership of Artificial Intelligent Systems, and how do we define them? Is there an identity of an Artificial Intelligent System, and how do we define it? There are hundreds and hundreds of questions. Relativity Theory and the Uncertainty Principle, computer science and neurobiology, migration and immigration, multi-citizenship and cross-cultural marriage: all of these scientific achievements and social cataclysms force us to define and redefine almost everything.

The first step of any research and discussion (especially in the area of Artificial Intelligence, which mixes human common sense with engineering and philosophical terms) is strict definition of the terms. John McCarthy introduced the new name for the field: artificial intelligence. This name defined a whole set of terms. A definition is not a blueprint of a system design but a direction for the design process. Without a good definition nothing else really matters. A good definition describes a term from the points of view of the user (customer), the designer, and so on, and helps to distinguish it from other terms. It should include the minimal set of defining (unique) features of the term and be as simple as possible. A definition is "a statement conveying fundamental character, a statement of the meaning of a word, phrase, or term, as in a dictionary entry" (American Heritage Dictionary). A definition should support an active approach to problem solution. All basic terms in this book are strictly defined, either by the scientific community or by the author. The absence of definitions is the cause of most theoretical and philosophical problems, discussions, and misunderstandings in the history of Artificial Intelligence science [51]. For




example: the statement "the thermostat's belief… is not identical to the corresponding belief held by a person," made without any definition of the term "belief", kills all discussion about the power of artificial intelligence (see APPENDIX 8).

It is also important to understand the differences between definitions of the same term in different languages. For example, the word "mind" in Russian is "razum". This term covers only conscious and subconscious processes; it does not include unconscious processes, as the English term does (see MIND, INTELLIGENCE, CONSCIOUSNESS, THOUGHT).

Some of the proposed definitions should be accepted as a starting position to move from.




ACKNOWLEDGMENT

I begin with the founder and First President of the Globe Institute of Technology, Mr. Leon Rabinovich. I am very grateful for his full support of this project.

My special acknowledgment goes to Professor Dr. Alex Meystel of Drexel University, with whom I have worked over the years. Much of what I have learned has been the result of our collaborative work and discussions.

My special thanks to Mr. Richard Holley for his excellent editing work.

Finally, I want to thank my lovely wife, who is a medical doctor and programmer, for her patience and important critical comments.







CONTENTS


INTRODUCTION…………………………………………………………………………...1


PART 1

INTELLIGENCE……………………………………………………………………………3

WHAT IS INTELLIGENCE?………………………………………………………………...5

Introduction……………………………………………………………………5

Definition Development……………………………………………………….7

Robustness as the Tool of Reliability………………………………………...13

CREATIVITY……………………………………………………………………………….15

IMAGINATION……………………………………………………………………………..21

SUPERINTELLIGENCE……………………………………………………………………22

MEASUREMENT OF INTELLIGENCE……………………………………………………23

CLASSIFICATION OF THE INTELLIGENT TASKS AND ABILITIES

OF THE AGENTS TO ACHIEVE THEIR GOALS………………………………………..25

Introduction…………………………………………………………………25

Intelligence Abilities………………………………………………………..25

Goal and Agent Classes…………………………………………………….25

MIND, INTELLIGENCE, CONSCIOUSNESS, THOUGHT………………………………27

THE MIND AS AN OPERATING SYSTEM ……………………………………………...30

DISSOCIATION BETWEEN INTENTIONAL CONSCIOUS AND UNINTENTIONAL

CONSCIOUS PROCESSES; CONSCIOUS, UNCONSCIOUS, AND SUBCONSCIOUS

PROCESSES…………………………………………………………………………………30

AWARENESS AND SELF-AWARENESS…………………………………………………31

Awareness…………………………………………………………………..31

Self-awareness……………………………………………………………...32

REFLEXES…………………………………………………………………………………34

FREE WILL AND ACTIONS…………………………………………………………….. 35

THE STRUCTURE OF INTELLIGENCE…………………………………………………37

CONCLUSION……………………………………………………………………………....39

REFERENCES……………………………………………………………………………...44

PART 2

PSYCHOLOGY OF ARTIFICIAL INTELLIGENT SYSTEMS……………………...49

WHAT IS PSYCHOLOGY OF ARTIFICIAL SYSTEMS?……………………………….51

Introduction………………………………………………………………… 51

Method of Analysis…………………………………………………………53

Levels of Analysis…………………………………………………………..53

DECOMPOSITION AS THE METHOD OF ANALYSIS………………………………...54

THE STRUCTURE OF AIS ……………………………………………………………….56

VECTOR OF PERFORMANCE (FUNCTIONS)………………………………………….57

AUTONOMOUS…………………………………………………………………………...59

SENSING AND SENSATION………………………………………………………….…...67

ATTENTION……………………………………………………………………………….69




PERCEPTION……………………………………………………………………………...70

DISCRIMINATION………………………………………………………………………..70

OBJECT RECOGNITION………………………………………………………………....72

Speech and Text Recognition Technology………………………………….74

UNDERSTANDING AND INTERPRETATION…………………………………………..75

REASONING……………………………………………………………………………….81

Introduction…………………………………………………………………..81

Knowledge Representation………………………………………………......82

The Structure of Knowledge Representation in the Intelligent System……...83

Knowledge representation in the neuron net………………………………....86

Proposition Logic Forward chaining………………………………………....89

Relationship Between Abstract and Specific..……………………………….90

Wumpus World……………………………………………………………....91

MEASUREMENT OF KNOWLEDGE VALUE AND POWER OF REASONING OF

ARTIFICIAL SYSTEMS……………………………………………………………………94

ASSOCIATIVE THINKING……………………………………………………………….94

ABSTRACT THINKING AND CONCEPTUALIZATION ………………………………96

GENERALIZATION AND CLASSIFICATION…………………………………………. 98

INTUITION………………………………………………………………………………....99

HYPOTHESIS GENERATION…………………………………………………………….107

LEARNING………………………………………………………………………………..109

Learning Concepts…………………………………………………………...109

Conceptual Learning…………………………………………………………105

The Construction of New Production Rules………………………………....112

Learning by Instructions……………………………………………………..116

Learning by Experience……………………………………………………...116

Supervised Learning…………………………………………………………113

Learning by Imitation………………………………………………………..116

Curiosity, Learning by Interactions………………………………………….116

PLANNING………………………………………………………………………………..120

PROBLEM SOLVING…………………………………………………………………….121

Well-Defined Problems and Solutions…………………………………………121

Measuring of Ability of Problem-Solving…………………………………….121

Multivariable Problems……………………………………………………….123

Lack of Statistics in Decision-making………………………………………...123

PERSONALITY OF THE ARTIFICIAL SYSTEM………………………………………124

AGGRESSION…………………………………………………………………………….125

EMOTIONS……………………………………………………………………………….126

Detecting and Recognizing Emotional Information………………………….. 135

Emotional Understanding……………………………………………………...138

STIMULUS, MOTIVATION AND INSPIRATION……………………………………..139

WILLINGNESS TO ACCEPT RISK……………………………………………………...140

SOCIAL BEHAVIOR……………………………………………………………………...142

The Man-machine Society……………………………………………………...142




Fairness………………………………………………………………………...146

The Fair Deal Development …………………………………………………...147

Independent Behavior………………………………………………………….150

PSYCHOLOGICAL MALFUNCTIONS, DISORDERS, AND CORRECTION…………..152

MORAL AND LAW………………………………………………………………………152

ART APPREHENSIONS……………………………………………………………………154

ARTIFICIAL LIFE………………………………………………………………………...155

Artificial Life as the Model of Natural One……………………………………..155

Artificial Life……………………………………………………………………156

PRINCIPLES OF THE ARTIFICIAL BRAIN DESIGN………………………………….160

EVOLUTION AND INTELLIGENCE……………………………………………………160

GENDER OF AIS………………………………………………………………………….162

INSTINCT AND ALTRUISM…………………………………………………………….162

CONCLUSION…………………………………………………………………………….163

REFERENCES……………………………………………………………………………..164

APPENDIX 1

BRAIN DEVELOPMENT……………………………………………………………….167

THE BRAIN……………………………………………………………………………….169

STRUCTURE OF A TYPICAL NEURON……………………………………………….169

CHEMICAL SYNAPSES…………………………………………………………………171

RELATIONSHIP TO ELECTRICAL SYNAPSES……………………………………….171

THE BRAIN DEVELOPMENT STAGES………………………………………………..172

APPENDIX 2

ANALYSIS OF DEFINITIONS OF INTELLIGENCE…………………………………..175

APPENDIX 3

MEASUREMENT OF MULTIVARIABLE FUNCTION……………………………….183

ADDITIVE FORM…………………………………………………………………………185

REFERENCES……………………………………………………………………………..189

APPENDIX 4

FUZZY LOGIC…………………………………………………………………………...191

APPENDIX 5

NEURON NETWORK…………………………………………………………………...195

APPENDIX 6

GENETIC ALGORITHM……………………………………………………………….213

APPENDIX 7

EXPLORING BRAIN SCANNING TECHNIQUES……………………………………...217




APPENDIX 8

DEFINITION……………………………………………………………………………….223


APPENDIX 9

PREDICTION OF THE TIME WHEN THE NEURAL NET WILL BE AT LEAST AS

COMPLEX AS THE HUMAN BRAIN…………………………………………………227

APPENDIX 10…………………………………………………………………………...231

APPENDIX 11

HIDDEN MARKOV MODEL…………………………………………………………...235

APPENDIX 12

THREE LAWS OF ROBOTICS……………………………………………………………239

APPENDIX 13

DISCRIMINANT ANALYSIS…………………………………………………………..243


APPENDIX 14

INFORMATION EXCHANGE BETWEEN SHORT AND LONG TERM

MEMORIES IN THE NATURAL BRAIN……………………………………………..247

APPENDIX 15

STUDENT’S DISTRIBUTION………………………………………………………….257

INDEX……………………………………………………………………………………255


ABOUT THE AUTHOR…………………………………………………………………263




INTRODUCTION

The prime goal of Artificial Intelligence (AI) is synthesis: the development and implementation of universal methods of system design.

The prime goal of the Psychology of Artificial Intelligent Systems (PAIS) is analysis: to develop the information base for intelligent system design and implementation, and to study issues of system performance and abilities.

AI's main question is:

How to develop?

The PAIS's main question is:

What to develop?

The Psychology of Artificial Intelligent Systems is the science that deals with processes related to the artificial mind. These processes originate in the artificial brain (computer) and are manifested especially in thought, perception, emotion, will, memory, imagination, and so on. All of these processes are functions of Intelligence. So, the first questions are: What is Intelligence? What is Mind?






PART 1

INTELLIGENCE





WHAT IS INTELLIGENCE?


Introduction

There are numerous definitions of "intelligence", but none of them satisfies all aspects of the nature of intelligence, its measurement, and the engineering procedures for artificial systems' hardware and software design. It is important to define basic terms before starting the discussion. While there is no single accepted definition, there is no shortage of material for discussion, measurement, research, and development of intelligent systems. No universally accepted definition of intelligence exists, and people continue to debate what, exactly, it is. Fundamental questions remain: Is intelligence one general ability or several independent abilities? Is intelligence a property of the brain, a characteristic of behavior, or a set of knowledge and skills?

"Whenever scientists are asked to define intelligence in terms of what causes it or what it actually is, almost every scientist comes up with a different definition. For example, in 1921 an academic journal asked 14 prominent psychologists and educators to define intelligence. The journal received 14 different definitions, although many experts emphasized the ability to learn from experience and the ability to adapt to one's environment. In 1986 researchers repeated the experiment by asking 25 experts for their definition of intelligence. The researchers received many different definitions: general adaptability to new problems in life; ability to engage in abstract thinking; adjustment to the environment; capacity for knowledge and knowledge possessed; general capacity for independence, originality, and productiveness in thinking; capacity to acquire capacity; apprehension of relevant relationships; ability to judge, to understand, and to reason; deduction of relationships; and innate, general cognitive ability" ("Intelligence," Microsoft Encarta Online Encyclopedia 2003, http://encarta.msn.com, 1997-2003 Microsoft Corporation).

We have entered the era of the artificial intelligence (AI) revolution but still do not know what intelligence is. We try to measure the value of intelligence but do not know what or how to measure. It is natural to start from the beginning and try to find a workable definition of intelligence, even if this phenomenon may be non-definable. Let us try to find an acceptable definition of intelligence.

Ancient Egyptians believed the heart was the center of intelligence and emotion. They thought so little of the brain that during mummification they removed it entirely from the body.

Philosophers of different positions, materialists (Hippocrates, Aristotle, Aquinas), behaviorists (Pavlov, Simon), and cognitivists (Plato, Kant, Chomsky) [22], have developed different approaches to the problem of intelligence. The difficulty of understanding the "intelligence" phenomenon in earlier days is the reason why philosophers did not define intelligence in a way acceptable for application today. These difficulties found reflection in some earlier definitions like this one:

"Intellect is generalization and abstractions. These intellectual constructs are all that we can know, because each material object is infinitely complex in its details." Thomas Aquinas [22]. Numerous experts in different areas of science think that "there is no satisfactory definition of human intelligence" [21].


Intelligence is the system's output and can be observed and defined through the system's behavior. Behaviorism is an approach to psychology based on the proposition that behavior can be studied and explained scientifically without recourse to internal mental states.


New achievements in biology, psychology, and AI research and development have created better understanding and stronger conditions for definition development. Descriptions of intelligence can be found in numerous publications [2,6,48,49].


The contemporary global market converts the scientific (purely academic) definition of intelligence into an important characteristic of a product, a tool of competition and market wars. Mass production of objects with elements of intelligence creates the problem of labeling these products with information about their level of intelligence. It is very important for promoting a "smart" product in the global market. The new global market and the new generation of product presentation require urgent development of practical methods to measure the level of product intelligence. This task is impossible without an acceptable definition of intelligence. Definitions of intelligence and personal abilities are also becoming an important part of automatic selection systems that look for well-rounded job candidates (Saul Hansell, "Google Answer to Filling Jobs Is an Algorithm," NYT, 01/03/07).


It is impossible to draw a solid distinction between artificial and natural intelligence. This line does not even exist. Suppose we replace one natural brain neuron with an artificial one (as has already been done). Does this convert a natural brain into an artificial one? What if two neurons were replaced? How many artificial neurons are required to classify a brain as "artificial"? Where is this dividing line? What is a cyborg? It would be better to use the title "an artificial system with intelligence" (ASI) instead of AI.


A group of researchers led by Professor Steve M. Potter at the Laboratory for Neuroengineering of Georgia Tech has created a part mechanical, part biological robot that operates on the basis of the neural activity of rat brain cells (2,000 or so) grown in a dish. The neural signals are analyzed by a computer that looks for patterns emitted by the brain cells and then translates those patterns into robotic movement. If the neurons fire a certain way, for example, the robot's right wheel rotates once. The leader of the group calls his creation a Hybrot, short for hybrid robot.


Existing publications present many different definitions of intelligence [1-4,7,8,30,36,40-45], from a non-definition, "Giving a generally acceptable definition of the concept of AI is difficult, if not impossible," to "psychologists agree that intelligence is a set of cognitive characteristics and abilities that cannot be directly observed…" [30].

This approach does not help us understand and describe natural or artificial intelligence, and we cannot accept such a pessimistic approach, because we desperately need a working definition that supports an active approach to problem solution.



Axiom: A mentally healthy human baby, as well as a grown human being, is an intelligent system without any age limitation ("the baby test"). Research shows that in reality a child demonstrates the first intelligent abilities at age 4-7 or 9-12 months (see APPENDIX 1). This is the result of intelligence fuzziness. As will be shown later, the first 3-8 months can be described by the first level of the intelligence definition.

This does not mean that a human being with some mental problems is not an intelligent person. In the first months of life a baby uses the ability to learn and generates a mental map (see APPENDIX 1). "The baby test" is a big problem for many types of existing definitions of intelligence (see APPENDIX 2). This axiom says only that the availability of conditional reflexes is a condition of the existence of intelligence. It defines just the lower limit of the area of intelligence existence. It is a necessary but not sufficient condition. Unconditional reflexes are not intelligent processes (see also REFLEXES).


Definition of Intelligence

What is Intelligence?

First of all, as was mentioned above, intelligence is a fuzzy term. In some cases it is very difficult to draw a line between intelligent and non-intelligent natural and artificial systems. For example, biological adaptation, or any kind of evolution, can be presented either as a learning (intelligent) ability or as a non-intelligent process. Acceptance of this statement as a learning ability, in combination with a definition of life (see ARTIFICIAL LIFE, EVOLUTION AND INTELLIGENCE), gives an extreme definition of intelligence: a living system means an intelligent system (?). The goal is survival.

Second, intelligence is an ability of the system to act, in the broad meaning of this word.

Third, all intellectual activities are triggered by a goal. "A system can be intelligent only in relation to a defined goal…" [44]; "intelligence is… goal-directed adaptive behavior" [25]. The 4-7 month old baby is developing only simple goal-directed behavior (circular reactions, see APPENDIX 1). Before this age a baby acts under internal goals (get food) (stimulus-reflex). So, in order to accept a baby as an intelligent system (see Axiom) we must include the internal goal in the definition as well as the external goal. The way a baby behaves in the first months of its life is determined by feedback (positive and negative) that is provided by various hard-wired pleasure and pain stimuli (adaptive reflexes). Distinguishing an internal goal from an external goal is a kind of self-awareness.

Fourth, all kinds of intellectual activities are based on knowledge, but intelligence is not knowledge. Knowledge is a "tool" of intelligence. As shown in [53], "education produces intelligence". For the ancient Greeks, "intelligence" and "knowledge" were synonyms. "We need knowledge to survive" [61]. The ability to learn is an important intellectual ability that can improve knowledge. Knowledge-based intelligence represents specific abilities. Knowledge reinforces intellectual activities. There are two levels of intelligence: general intelligence, which is inherited at birth, and knowledge-based intelligence (domain-oriented), which can be improved by learning. Twin studies [52] support this approach, but the result of measuring the twins' intelligence level depends on the definition of intelligence and the measurement


method that still remains problematic. Professor Ulric Neisser (Cornell University) notes [54] that "in isolated areas where the gene pool has been unaffected by migration, the longer that children attend a school, the higher their I.Q.'s on average". The knowledge base is a modular, organized memory of an intelligent system, and knowledge is just the content of this base. It includes a description of the environment as a dynamic system acting under the control of the laws of nature, of societal development, and so on. The more rules and connections between variables, the higher the intellectual power of a system. In AI systems that are not based on neural net technology, increasing the number of rules in the knowledge base (KB) increases the number of virtual connections between the different parameters as well. The knowledge base is the main source of information and intellectual power of artificial systems. The importance of knowledge is determined not just by the quantity of knowledge but by its quality as well. A right, reasonable balance between specific knowledge and common knowledge in many different areas of application is needed, but it is not a sufficient condition for the highest level of intelligence. Numerous examples (Leonardo da Vinci and others) illustrate this thesis. A European resident will have a problem surviving in Japan if he lacks Japanese common-sense knowledge. Unfortunately, today's business and manufacturing world has a tendency toward strong specialization, not only in blue-collar work but in white-collar work as well. In most cases employers are looking for potential employees with very specific knowledge and skills. Research also supports the idea of knowledge-based intelligence: "it is not surprising that in the early 1900s there began to appear the first of a long succession of efforts to demonstrate significant correlations between measures of learning ability… and psychometric measures of intelligence" [26].

Fifth, natural systems inherit strong information through the genetic code. "Neurogeneticists claim that genes determine… level of intelligence…" [23]. As mentioned above, this is called general intelligence. Inherited "brain power", or natural intelligence, is determined by the power of the neural net (the number of neurons and the power of the connections: the number of connections, the value of the weight function, the threshold, and the transfer function) (see APPENDIX 5). The process of knowledge collection creates an information flow through the neural net and increases the power of the connections (Hebb). As a result, "brain power" increases. Inheritance is an important source of the power of natural intelligence.
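To give this point a concrete engineering reading, here is a minimal Python sketch (the names and constants are illustrative assumptions, not code from this book) of a single artificial neuron defined by its weights, threshold, and transfer function, together with a simple Hebbian update in which connections carrying correlated activity grow stronger:

    # One artificial neuron: weighted sum, threshold, transfer function.
    # A simple Hebbian update strengthens connections that carry
    # correlated activity ("information flow increases connection power").
    def transfer(activation, threshold=0.5):
        # Step transfer function: the neuron fires if the threshold is exceeded.
        return 1.0 if activation > threshold else 0.0

    def neuron_output(inputs, weights, threshold=0.5):
        activation = sum(i * w for i, w in zip(inputs, weights))
        return transfer(activation, threshold)

    def hebbian_update(inputs, output, weights, rate=0.1):
        # Hebb's rule: a weight grows when its input and the output
        # are active together.
        return [w + rate * i * output for i, w in zip(inputs, weights)]

    weights = [0.2, 0.4, 0.1]
    inputs = [1.0, 1.0, 0.0]
    out = neuron_output(inputs, weights)        # -> 1.0, since 0.6 > 0.5
    weights = hebbian_update(inputs, out, weights)
    print(weights)                              # -> [0.3, 0.5, 0.1]

Repeated "knowledge collection" through such a net raises the weights and hence the "brain power" described above.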

Sixth, "intelligence is an internal property of the system, not a behavior" [54], but behavior is the main criterion of the level of intelligence. This level can be determined by a test.

Seventh, the results of new research support the idea of the duality of intelligence (see APPENDIX 2, group of rules number ten; [27], [32], [38]) and of general intelligence: "general fluid intelligence," which studies suggest is strongly influenced by heredity. Raven's test scores correlate highly with scores on I.Q. tests and other standardized measures of intelligence. "To our knowledge, this is the first large-sample imaging study to probe individual differences in general fluid intelligence, an important cognitive ability and major dimension of human individual difference," wrote the researchers, led by Dr. Jeremy R. Gray, a research scientist in the department of psychology at Washington University in St. Louis [26]. The paper was published in the March issue of the journal Nature Neuroscience and on the journal's web site.

Eighth, the definition should cover not just cognitive power but the power of the sensing system and the actuators. There are different levels of intelligence. Sometimes different levels of performance (skills) can be presented as different levels of intelligence. Their levels are determined in many cases by the limitations of one or more elements of the system. Advanced upper-level abilities of the intelligent structure (generalization, conceptualization, etc.) do not guarantee a high level of skills. For example, low capability of the sound sensors can prevent a person from becoming a musician even if he/she/it has suitable capability in the rest of the subsystems. Beethoven was not deaf at an early age; he lost his ability to hear later on. A composer, as a music designer, can "hear" his music with his inner "sensor". The famous Helen Keller, an author and educator, was deaf, blind, and mute, but she had a sensitive tactile system and a sense of smell. She learned to "hear" and to speak, and she was able to make her great intellectual power work [34]. A scientist with a high level of intelligence may have a problem doing a manual job if he/she does not have suitable actuators. A "handyman" is not a handyman without hands. There are two choices in designing a definition of intelligence: to extend the definition to include sensors and actuators, or to add a separate explanation of the importance of sensors and actuators. As soon as we talk about intelligence as "…an ability of a system to act appropriately…" [1], we include an actuator in this definition. No sensors, no learning and no knowledge; no actuators, no performance; without them it is impossible to evaluate the level of intelligence. So, the intelligent system is a sentient system with actuators.

Ninth, human intelligence is a product of nature. Artificial intelligence (AI) is a human product. Many people think that a machine can do only what it was programmed to do and that AI is not a real intelligence. They think that intelligence is a very complicated phenomenon. "It seems that human behavior is complex. In reality it just reflects the complexity of the environment in which one lives" (behaviorism) [52]. The more knowledge about intelligence we collect, the better we understand that an intelligent system is a product of the combination of relatively simple subsystems. There are numerous descriptions of intelligent system abilities [2,6,48,49]. A human is a machine (Rodney Brooks, MIT). Very complicated intelligence functions are a combination of relatively simple, understandable functions that we can successfully emulate. Even generalization (to infer from many particulars, to draw inferences or a general conclusion from them) and conceptualization (to form mental images of), which sound very complicated, are in reality products of reasoning. The same is true of intuition [48]. Different levels of emulation and different levels of natural intellectual abilities of different creatures lead to vagueness in the definition of intelligence. There are people who can easily learn math or other abstract theories (are they very smart?) and at the same time cannot learn simple car-driving rules (are they very stupid?). Intelligence is a mental process. "Mental is executed or performed by the mind, existing in the mind (brain)" [74].

Tenth, the definition of intelligence consists of two levels.

The first level of intelligence, General Intelligence (capabilities: inherited or built-in hardware and basic software), is an organized combination of conscious (cognitive) and subconscious potentials of abilities in a sentient system that enables it to direct and influence mental and physical behavior in accordance with the system's external or internal goals. As inherited intelligence abilities, General Intelligence is in some way the product of evolution. "Neurogeneticists claim that genes determine… level of intelligence…" [23]. This is the "infancy" level (up to 4 to 8 months old).



Conscious means "a capability of thought, will, or perception (having knowledge)" [74]. Cognition is "the mental process or faculty of knowing, including aspects such as awareness, perception, reasoning, and judgment; that which comes to be known, as through perception, reasoning, or intuition" [74].

General Intelligence defines the capabilities, as opposed to the abilities, of the system. It determines the capacity of the system to exercise its abilities. It can be evaluated indirectly through electrical and chemical brain (computer) activities that are measured by instrumentation, or through the technical description of the AI system. The level of fuzziness determines the level of confidence. This definition leads to a second level of intelligence that is developed by learning. It is acceptable for an artificial system but should be carefully applied to a natural system.

This definition includes main (mandatory) features that determine the defined term as a class description, and optional features, extra qualities of a subject or a subclass inside the class. In reality the terms "conscious" and "cognitive" cover two main capabilities, learning (knowledge collection) and reasoning (see APPENDIX 2), and can be replaced by these features. In this case:

The first level of intelligence, General Intelligence (capabilities: inherited or built-in hardware and basic software), is a combination of learning and reasoning as mandatory capabilities (with further intellectual capabilities as optional) of a sentient system that enables it to direct and influence mental and physical behavior in accordance with the system's external or internal goal.

This definition supports the axiom (the baby test). Features such as "understanding abstract concepts", "to act appropriately in an uncertain environment", "the capacity to construct and manipulate symbolic representations", "fabricate complex engineering artifacts", "understand and handle abstract concepts", "to respond quickly and successfully (?) to a new situation", "to think rationally (?) and to deal effectively (?) with the environment", etc. (all borrowed from the definitions shown in APPENDIX 2), to make generalizations/specifications, to have imagination, intuition, and creativity, are not mandatory but optional features of intelligence. They fail "the baby test". Systems that exercise these features just demonstrate a higher level of intelligence.

The second level of intelligence, Knowledge-Based Intelligence, can be defined as the knowledge-based general intelligence (or ability) of a domain-oriented system to act under existing constraints (limitations) and reach external or internal goals, or to decrease the distance between the start and goal stages (intellectual adaptation). These are the later stages, from early "childhood" to "adulthood" (8 months and older).

This type of intelligence includes the same types of abilities as the capabilities of general intelligence. Usually reasoning includes a goal as a direction of reasoning. This definition describes the abilities of the system as opposed to its capabilities. A goal's description can be presented in crisp or fuzzy terms, or in the languages of probability and statistics. Knowledge is a combination of rules and procedures (data is not knowledge but information about the environment).



Autonomy is strongly related to intelligence, but it is very difficult to use it in a definition of intelligence (see also AUTONOMOUS).

These definitions combine all three philosophical approaches: the materialist (brain/hardware as the tool of intelligence), the behaviorist (stimulus-response), and the cognitivist (minds are made of collections of representations that constitute symbols and images). The first definition represents the cognitive power of the brain; the second represents the ability to use this power. These two definitions could be combined into a single, very complicated one, but that does not make sense. Each capability/ability should be defined separately and measured, with or without aggregation of all results into one [50].

The second-level definition cannot be realized without the first one. An intelligent system must be able to collect knowledge and infer conclusions. The hierarchical structure of abilities is described in [41,49] (see THE STRUCTURE OF INTELLIGENCE). Dr. A. Meystel presented a similar definition: "thinking involves mentally representing some aspects of the world and manipulating these representations or beliefs where the latter may aid in accomplishing a goal" [41]. His "unit of intelligence" is designed to execute this procedure.

Note: a conditional "if-then" statement in a hard-coded program is not an element of a knowledge base. A hard-coded system does not have a KB and cannot learn. Only a system whose knowledge is separated from the source code can be an intelligent system [40]. A hard-wired neural net with constant parameters (weights) is not an intelligent system, because it cannot learn.
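The following minimal Python sketch (a hypothetical illustration, not the author's design) contrasts the two cases: a hard-coded conditional that can never change at run time, and the same rule kept in a knowledge base as data, so that the running system can acquire new rules, i.e. learn:

    # Hard-coded rule: it lives in the source code, so the running
    # system cannot change it. Per the note above, this is not a KB.
    def hard_coded_diagnosis(temperature):
        if temperature > 38.0:
            return "fever"
        return "normal"

    # Knowledge separated from code: rules are data and can be added
    # while the system runs (illustrative structure only).
    knowledge_base = [
        (lambda facts: facts["temperature"] > 38.0, "fever"),
    ]

    def kb_diagnosis(facts):
        for condition, conclusion in knowledge_base:
            if condition(facts):
                return conclusion
        return "normal"

    # "Learning": the KB grows without touching the source code.
    knowledge_base.append((lambda facts: facts["pulse"] > 100, "tachycardia"))
    print(kb_diagnosis({"temperature": 37.0, "pulse": 120}))  # -> tachycardia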

In some way it is possible to say that an intelligent system is a system that has the capacity or ability to make a choice (learning and reasoning with an internal representation of a goal). This definition has many supporters [21]. It gives a "yes" or "no" answer about intelligence. Learning and reasoning are manipulations of symbolic representations. In this case it is possible to say, in some way, that intelligence is the ability to manipulate symbolic representations in accordance with the goal. Definitions 1 and 2 (see APPENDIX 2) give abilities for intelligence measurement. In the case of two levels of intelligence, rejection of the baby axiom changes nothing for a human being but creates problems for defining the animal world. By the way, adaptation is "something, such as a device or mechanism, that is changed or changes so as to become suitable to a new or special application or situation" [74]. So the terms adaptation and choice-making have the same meaning. The definitions of group 2 (see APPENDIX 2), group 3, the first definitions in group 5, and group 7 are similar to knowledge-based definitions but do not cover the general-intelligence definition (artificial systems). All existing definitions represent different aspects of the problem and are very helpful in finding the correct solution.

Proposed above are definitions that do not fail "the baby test" (see Axiom). The first-level definition is the basis upon which to work with artificial intelligent systems. In the second-level definition, intelligence (cognitive abilities) is limited by knowledge and extended by learning. Knowledge-based intelligence can be evaluated by behavior tests. As previously mentioned, psychologists "agree that intelligence is a set of cognitive characteristics and abilities that cannot be directly observed…" [22]. The general intelligence of AI systems can be evaluated by reading technical characteristics from the design documentation and the program


source code. Unfortunately, access to this information is usually not available, because of secrecy. Both of these definitions of intelligence agree with existing bi-factored, multiple-intelligence, and information-processing theories of natural intelligence [47].


The time it takes to achieve the goal is one of many important characteristics of a system's performance, like learning ability, time to object recognition, etc., and should not be incorporated into the definition. The statement "…a goal should be reached within a certain period of time…" does not make the definition any better. The shorter the response time, the higher the level of intelligence, but response time does not define a system as intelligent.

Note: the IQ test takes response time into consideration.


An expert system is usually defined as an intelligent system, but what is the goal? The goal is "the purpose toward which an endeavor is directed; an objective" (Webster). Using "intention" as a synonym for "goal" or "endeavor", the goal can be defined as "the purpose toward which an effort is directed". In this case an expert system together with its user (who activates the goal) can be defined as an intelligent system. The goal or purpose (area of application) of the system's usage is hidden in the system itself and should be activated by activation of the appropriate KB. This demonstrates the fuzziness of the division between intelligent and non-intelligent systems. Another definition of a goal (purpose) is "a result or an effect that is intended or desired; an intention" (Webster). This definition fits very well with the term "intelligence". A natural (human) or an artificial diagnostic system with just two active abilities (learning and reasoning) is still an intelligent system, with the choice of the correct diagnosis as the system's outcome.

Items [62,63,64] show that INTELLIGENCE, as we know it, somehow reflects the structure and the actual properties of the BRAIN, even its ability to use and manipulate languages. Karl Pribram was one of the leaders who established the link intelligence → brain → languages. Pribram was also one of the pioneers in building the extended string intelligence → brain → languages → consciousness.

From his [62]: "Daniel Dennett has humbly contributed a volume entitled 'Consciousness Explained' [64]. In it he replaces the Cartesian Theater (Shakespeare's 'Stage') with a tentative pluralistic set of narratives recounting our experience. Marvin Minsky has also emphasized the plurality of mental processes in his 'Society of Mind' [66]." Pribram invites us to familiarize ourselves with "From Folk Psychology to Cognitive Science" [61].

In this work we do not discuss the structure of the brain. But the link between intelligence and the brain has the reverse sequence:

brain → intelligence → consciousness → languages.




Fig I-1. Data and knowledge transformation in intelligent and non-intelligent systems.

Robustness as the Tool of Reliability

In computing terms, robustness is the resilience of a system under stress or when confronted with invalid input. It is the ability of a software system to maintain function even with expected and unexpected changes in its internal structure or external environment. For example, an operating system is considered robust if it operates correctly when it is starved of memory or storage space, or when confronted with an application that has bugs or is behaving in an illegal fashion, such as trying to access memory or storage belonging to other tasks in a multitasking system.

Artificial Intelligent Systems are very sophisticated systems with high-speed performance. It is difficult to monitor their activity, to maintain all their functions without interruption when part of the system fails, and to protect the environment from their malfunctions. Redundancy is one way, but not the best way, to avoid this problem. The problem can be solved in a more efficient and cheaper way by automatic reassignment of the failed function's activity to the active parts of the system. This procedure exists in the natural brain. An intelligent system with this ability can be defined as a Robust Intelligent System. This replacement is possible because all intellectual functions are based on two main procedures of data manipulation: learning and reasoning. Some subsystems, such as the control systems of the symmetrical (left and right) parts of a body, or perceiving and conceiving (see LEARNING), and some others, execute similar procedures.


The Robust Artificial Intelligent System is a system with the automatic ability to reassign execution of an intellectual function from a failed subsystem to an active one, or to protect the system against noisy or inadequate input and output information.
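The sketch below (illustrative names and structure; an assumption, not the author's architecture) shows the reassignment idea in Python: each intellectual function is registered with an ordered list of subsystems able to execute it, and a call is automatically rerouted past a failed subsystem to an active one:

    # Automatic reassignment of a function from a failed subsystem
    # to an active one (minimal illustrative sketch).
    class Subsystem:
        def __init__(self, name):
            self.name = name
            self.active = True

        def execute(self, task):
            return task + " done by " + self.name

    def run_with_reassignment(task, subsystems):
        # Try subsystems in order; skip any that have failed.
        for s in subsystems:
            if s.active:
                return s.execute(task)
        raise RuntimeError("no active subsystem left for " + task)

    left = Subsystem("left hemisphere")
    right = Subsystem("right hemisphere")
    print(run_with_reassignment("object recognition", [left, right]))
    left.active = False   # a component fails...
    print(run_with_reassignment("object recognition", [left, right]))
    # ...and the function is automatically reassigned to the other subsystem.

Robustness here is measurable in exactly the terms used later in this section: the number of failures survived and the time taken to restore the function.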

A new type of computer that mimics the complex interactions in the human brain is being built by UK scientists at the University of Manchester ("Scientists to build 'brain box'", BBC, 17.07.06). Professor Steve Furber, of the university's school of computer science, said: "Our brains keep working despite frequent failures of their component neurons, and this 'fault-tolerant' characteristic is of great interest to engineers who wish to make computers more reliable." The working part of a natural system can pick up a new function only under specific conditions, after intensive training. That is not acceptable from the high-reliability point of view, but the training can be done beforehand. The process must be quick and reliable. "Our aim is to use the computer to understand better how the brain works at the level of spike patterns, and to see if biology can help us see how to build computer systems that continue functioning despite component failures," he explained.

In the brain, groups of neurons work together, producing bursts of activity called "spikes". The "brain box" will use large numbers of microprocessors to model the way networks of neurons interact.

The natural brain has an ability to regenerate a failed part of the brain. After a stroke, new blood vessels form and newly born neurons migrate to the damaged area to aid in the regeneration process of the brain. In mice, UCLA neurologists identified the cellular cues that start this process, causally linking angiogenesis, the development of blood vessels, and neurogenesis, the birth of neurons. Regeneration in the artificial world is a new problem for artificial biotechnical intelligent systems.

Being able to understand the environment (usually time-varying and unknown a priori) is an essential prerequisite for intelligent/autonomous systems such as intelligent mobile robots. The environmental information can be acquired through various sensors, but the raw information from sensors is often noisy, imprecise, incomplete, and even superficial. Obtaining from raw sensor data an accurate internal representation of the environment, or a digital map with accurate positions, headings, and identities of the objects in the environment, is very critical but very difficult in the development of robotic systems. The major challenge comes from the uncertainty of the environment and the insufficiency of the sensors.

Basically, there are two categories of techniques for handling uncertainty: adaptive and robust. Adaptive techniques exploit a posteriori uncertainty information that is "learnt" on-line, whilst robust techniques take advantage of a priori knowledge about the environment and sensors.
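As a hedged illustration of the two categories (the constants and filter forms below are assumptions chosen for brevity, not techniques taken from this book), the "robust" filter smooths sensor readings with a fixed gain chosen from a priori noise knowledge, while the "adaptive" filter estimates the noise on-line and adjusts its gain accordingly:

    # Two ways of handling sensor uncertainty (illustrative only).
    def robust_filter(readings, gain=0.3):
        # Robust: a fixed gain chosen from a priori knowledge of the sensor.
        estimate = readings[0]
        for r in readings[1:]:
            estimate += gain * (r - estimate)
        return estimate

    def adaptive_filter(readings):
        # Adaptive: the gain shrinks as the observed "surprise" grows,
        # exploiting a posteriori uncertainty information learnt on-line.
        estimate, surprise = readings[0], 1.0
        for r in readings[1:]:
            surprise = 0.9 * surprise + 0.1 * abs(r - estimate)
            gain = 1.0 / (1.0 + surprise)
            estimate += gain * (r - estimate)
        return estimate

    noisy = [10.0, 10.4, 9.7, 10.2, 14.0, 10.1]  # one outlier at 14.0
    print(robust_filter(noisy), adaptive_filter(noisy))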

Robustness can be measured by the number of failures and time of function restoration.



CREATIVITY


The most contested ability of intelligence is creativity. There is a strong opinion in the community of psychologists that creativity is not related to intelligence and that it is not clear what creativity is. More than 60 different definitions of creativity can be found in the psychological literature. As mentioned before, the natural brain develops creativity when a baby is 7 to 9 months old (see APPENDIX 1).


Let us look at some existing definitions.


"One factor that is not closely related to creativity is intelligence" [20].


"Psychologists are a long way from understanding creativity" [25].


"The correlation between measures of creativity and intelligence is not strong (it is positive but low). Intelligence is based on logic and knowledge (convergent thinking)" [19].


"We view intelligence as sitting in a collection of related qualities that include autonomy and creativity" [37].


These contradictions demonstrate the existence of a problem in the understanding of intelligence and creativity and in their measurement.


"It is a difficult concept to define. In general terms creativity is the ability to produce 'products' that are both novel and socially valued" [39].


"The difference between intelligence and creativity is closely related to the distinction between convergent and divergent thinking. Convergent thinking is logical, conventional (based on logic and knowledge), and focused on a problem until a solution is found. Divergent thinking is loosely organized, only partially directed, and unconventional" (Guilford) [39].


Divergent means "not having relevance to the topic at hand: irrelevant, sidetracked, digressive, extraneous, immaterial, inconsequential, not germane, peripheral, tangential, unrelated" [74]. Thinking (no matter what kind of thinking) is "a way of reasoning; judgment" [74].


Creativity refers to "mental processes that lead to solutions, conceptualizations (developing a general idea derived or inferred from specific instances or occurrences), ideas, theories, or products that are unique and novel" (Reaber) [35].

Creativity: "It is the ability to generate unusual responses to problems" [20].

"Creativity is an ability to generate a fresh, original, innovative and unusual approach; imaginative ability to generate a new problem" [5].

What are unusual responses to problems: a fresh, original, innovative, and unusual approach?



Convergent thinking is reasoning by its very definition. Well-organized and poorly organized construction workers are still construction workers. It is true that it is easier to organize reasoning in the precise world of words than in the obtuse world of images and sensations. Searching for information in both of these worlds is still seeking and organizing existing information. All processes are directed, but some of them are directed intentionally, by will, and others unintentionally, in accordance with internal or external goals (signals).

In [62] three different definitions of creativity are presented. All of them may be summarized in this definition: creativity is "the ability to produce work that is novel, high in quality, and appropriate".

There are several descriptions of the most creative strategies. In reality all of them are just standard problem-solving strategies [5,35].

Creativity (or creativeness) is a mental process involving the generation of new ideas or concepts, or new associations between existing ideas or concepts, a process that resolves a contradiction or explains new events through reasoning. It is a search for answers to questions such as: How to get this feature? What can happen if something is done? How can these facts be combined into a new theory? And so on.

Creativity has been studied from the perspectives of behavioural psychology, social psychology, psychometrics, cognitive science, artificial intelligence, philosophy, history, economics, design research, business, and management, among others. The studies have covered everyday creativity, exceptional creativity, and even artificial creativity. Some researchers think that, unlike many phenomena in science, creativity has no single authoritative perspective or definition, and that, unlike many phenomena in psychology, it has no standardized measurement technique. That is not a productive position.

Creativity is a domain-oriented phenomenon. "People are creative or not creative in a particular domain" [20]. For example, a scientist has a lower level of tolerance for scientific contradictions; a businessman has a lower level of tolerance for business problems. The involvement of an agent's personality creates the problem of creativity measurement. In this case the most important personality feature is the ability to take a chance, or risk, but not irresponsible adventurism. It is determined by the level of knowledge, the availability of needed tools, the reward (benefit/loss) ratio, and the probability of winning or failing. Different areas of activity have different losses: in business, investment; in the military, life; in science, time or prestige; and so on.

Psychologists cannot observe a strong correlation between intelligence and creativity because they measure intelligence as an integral function of different abilities (IQ), while creativity in most cases is a function of only several specific abilities.

There are two types of creativity:

1. Conscious reasoning is triggered intentionally in accordance with information about a new event or an opened contradiction. The working "tool" is organized reasoning.

2. Subconscious reasoning is triggered unintentionally by a new event or an opened contradiction. The working "tool" is intuition [48] and associative thinking (see also INTUITION, ASSOCIATIVE THINKING, and Curiosity, Learning by Interactions).



It is a constant search for an answer to the question: How can I utilize this knowledge?

Psychologists have tried to develop tests to measure human capacity for creativity (Sternberg & Lubart, 1991; J. P. Guilford, 1967). Guilford asked people to name as many uses as they could for common objects such as a brick. Skilled divergent thinkers came up with unconventional answers: use it to prop a door open, use it as a paperweight, and so on [25].

This is typical associative thinking, i.e., the retrieval of information by association through the links between words (see ASSOCIATIVE THINKING).

A second approach to measuring creativity is the Symbolic Equations Test. People are asked to produce "symbolic equivalents" for various images. In response to the image of a candle burning low, for example, a divergent thinker is likely to imagine such analogous events as dying, a sunset, and water trickling down a drain [25].

The Artificial Knowledge Base has a multilevel structure (see also REASONING, The Structure and Knowledge Representation). An abstract model of an object or event is located on the upper level of the base structure. A description of a specific object is located on the lower level of the base structure. All objects with a similar abstract model are combined into one group. Some objects and events can be represented by several different abstract models and placed in several groups. In this case the Symbolic Equations Test is a procedure of searching through abstract model similarity.
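A minimal sketch of this two-level structure (the class, names, and example entries below are illustrative assumptions, not the book's implementation) shows how objects join groups through their abstract models and how the similarity search works:

from collections import defaultdict

class KnowledgeBase:
    def __init__(self):
        self.groups = defaultdict(set)      # upper level: abstract model -> group of objects
        self.models_of = defaultdict(set)   # lower level: object -> its abstract models

    def add(self, obj, abstract_models):
        # an object may carry several abstract models and so join several groups
        for model in abstract_models:
            self.groups[model].add(obj)
            self.models_of[obj].add(model)

    def similar_to(self, obj):
        # the Symbolic Equations Test as a search through abstract model similarity:
        # return every object that shares at least one abstract model with obj
        result = set()
        for model in self.models_of[obj]:
            result |= self.groups[model]
        result.discard(obj)
        return result

kb = KnowledgeBase()
kb.add("a candle burning low", {"gradual decline"})
kb.add("a sunset", {"gradual decline"})
kb.add("water trickling down a drain", {"gradual decline", "downward flow"})
print(kb.similar_to("a candle burning low"))   # finds the two analogous events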

A third measure is the Remote Association Test, in which people are shown three words and asked to come up with a fourth that links all the others. For example, the words piano, record, and baseball are all linked to player [25].

The Remote Association Test is thinking by analogy. These analogies are generated by learning (reading, watching images, and so on). It is not a "crisp" process of thinking; it works with fuzzy objects and fuzzy relationships, but it is still an organized and sophisticated process of thinking. No gimmicks.

The Remote Association Test is associative thinking using different connections between terms.

As we can see, all answers to the different methods of creativity measurement are based on thinking or reasoning and knowledge. Research shows that the brain develops creativity when a baby is 7-9 months old (APPENDIX 1). A brain's product is nothing but intelligence. It means that creativity is an intellectual process. It is a mixture of intentional and unintentional processes.

Genius

In the philosophy of Arthur Schopenhauer, a genius is a person in whom intellect predominates over will much more than for the average person.

In the philosophy of Immanuel Kant, genius is the ability to independently arrive at and understand concepts that would normally have to be taught by another person. For Kant, "genius" means originality. This genius is a talent for producing ideas which can be


described as non-imitative. Kant's discussion of the characteristics of genius is largely

contained within the Critique of Judgement and was well received by the romantics of the early 19th century.


Genius is a result of the extreme development of one or several abilities: "extraordinary intellectual and creative power; a person of extraordinary intellect and talent: 'One is not born a genius, one becomes a genius' (Simone de Beauvoir). A genius is a person who has an exceptionally high intelligence quotient, typically above 140" [74]. Extraordinary intellectual power is power that can be observed in very few individuals working in the same area of knowledge. It is possible that extraordinary intellectual power can be observed in some individuals or groups of other creatures. Extraordinary intellectual power can be observed in one specific area or in several different areas of activity. In some cases it can be combined with disability in other areas. This is the reason why researchers cannot see a correlation between intelligence and genius.

Creativity depends on the personality of an agent. It is important to include a personality test in the process of measurement.


Creativity as an ability is a universal feature, but it depends on existing knowledge. The development of calculus by Isaac Newton was very much a creative process. The derivative (the main idea of calculus) is a combination of two ideas contemporary at that time: algebraic division and the idea of limits.


In this case it is possible to define creativity as a domain-oriented ability to create new knowledge as a new combination and new application of existing knowledge. The higher the level of knowledge, the higher the probability (p) of a demonstration of creativity (C). Creativity is a function of the knowledge level (K), the ability of associative thinking (AT), and reasoning (R) as a tool of problem solving:


pC = f (K, AT, R)
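The book does not fix the form of the function f. As one illustrative instantiation only (the weights below are arbitrary assumptions), a weighted combination of normalized levels can be sketched as:

def creativity_probability(K, AT, R, weights=(0.5, 0.25, 0.25)):
    # K, AT, R are normalized levels in [0, 1]; the result pC is also in [0, 1]
    wk, wat, wr = weights
    return (K ** wk) * (AT ** wat) * (R ** wr)

print(creativity_probability(K=0.9, AT=0.6, R=0.7))

The multiplicative form expresses that a near-zero level of any component suppresses the probability of creativity; any monotonic f with this property would serve the same illustrative purpose.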


Artificial Intelligent Systems can use the Internet as their natural knowledge base. This knowledge base has a huge amount of existing problem-solving methods in different areas of application (Greedy Search, Genetic Algorithms, and hundreds more).


GenoPharm software (Berkeley Lab) can find hidden knowledge in thousands of Internet publications that was overlooked by scientists. This software is based on associations between terms. It infers new knowledge by connecting closely related terms into one meaningful string.


A team at Purdue University is currently developing a "data-rich" environment for scientific discovery that uses high-performance computing and artificial intelligence software to display information and interact with researchers in the language of their specific disciplines.

Problem-solving technology (the greedy algorithm and others) is a powerful tool of creative system development. The Genetic Algorithm is another method to develop creative systems (see APPENDIX 6).



Imagination Engines, Inc. (Dr. Stephen Thaler) patented the Creativity Machine (Fig. I-2). That engine has the capability of human-level discovery and invention. An artificial neural network that has been trained on some body of knowledge and then perturbed in a specially prescribed way tends to activate into concepts and/or strategies (e.g., new ideas) derived from that original body of knowledge. These transiently perturbed networks are called 'imagination engines' or 'imagitrons'. It is an extremely valuable neural architecture. Optional feedback connections between this latter computational agent and the imagination engine assure swift convergence toward useful ideas or strategies.

Agitation of the neural net by input noise (a signal) can randomly change the weights of neuron connections. Information that was saved previously in this net can be a source of new concepts generated randomly by the net in this dynamic regime. The system generates the output from the existing information. Mutation of saved information (chromosomes) generates a new offspring generation. "This new AI paradigm is vastly more powerful than 'classic' genetic algorithms (GA), efficiently generating new concepts" (Dr. Stephen Thaler).

An Imagination Engine is a trained artificial neural network that is stimulated to generate new ideas and plans of action. Neural networks were trained upon a collection of patterns representing some conceptual space (i.e., examples of either music, literature, or known

chemical compounds), and then the networks were internally 'tickled' by randomly varying

the connection weights joining neurons. If the connection weights were varied at just the right level, the network's output units would predominantly activate into patterns

representing new potential concepts generalized from the original training exemplars (i.e.,

new music, new literature, or new chemical compounds, respectively, that it had never been

exposed to through learning). In effect, the network was thinking out of the box, producing

new and coherent knowledge based upon its memories, all because of the carefully 'metered'

noise being injected into it. From an engineering point of view, this is quite phenomenal: a

neural network trains upon representative data for just a few seconds and then generates whole new ideas based upon that short experience. In effect, an engine for invention and discovery within focused knowledge domains has been created.

A disorder of an intelligent system can develop outstanding creativity (see also PSYCHOLOGICAL MALFUNCTIONS, DISORDERS).

Algorithm (Associative Thinking method) (see ASSOCIATIVE THINKING):

1. Is this an object?
2. If "Yes", then check the association of the terms of the problem description with the terms in the memory.
3. If "No", then check: Is this an event or process?
4. If "Yes", then use another procedure.
5. Are these terms related to any method of problem solving?
6. If "Yes", then evaluate these methods for application to the new problem.
7. If "Yes", then use it; otherwise repeat 1-4 until all choices are checked.
8. If none fits the problem, then use another procedure or fail.
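A minimal sketch of this algorithm in Python (the memory of methods is a hypothetical keyword table, and the keyword-overlap test stands in for the evaluation in step 6):

def associative_method(problem, methods, is_object=True):
    # steps 1, 3, 4: only object descriptions are handled here;
    # events and processes go to "another procedure"
    if not is_object:
        return "use another procedure"
    terms = set(problem.lower().split())
    for method, keywords in methods.items():   # steps 2 and 5
        if terms & keywords:                   # step 6: evaluate by term overlap
            return method                      # step 7: use it
    return "fail"                              # step 8

methods = {
    "greedy search": {"shortest", "route", "schedule"},
    "genetic algorithm": {"optimize", "design", "fit"},
}
print(associative_method("find the shortest delivery route", methods))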







Fig. I-2. An Imagination Engine

http://www.membrana.ru/articles/inventions/2004/01/26/212000.html




Algorithm (searching through an abstract model similarity):

1. Is it an event or process?
2. If "Yes", then define the abstract model.
3. Define the specific events or processes related to the model.


Note:

For easy reading and understanding, all algorithms are presented in a simple way, without the separation of knowledge from control that should be done in intelligent systems (see REASONING).



IMAGINATION

Imagination is ”the formation of a mental image of something that is neither perceived

as real nor present to the senses; the ability to confront and deal with reality by using

the creative power of the mind” [74].

The ability of imagination is a very sophisticated intellectual ability. It is a high level of creativity. A common use of the term is for the process of forming new images in the mind which have not been previously experienced, or at least only partially or in different combinations. The new image can be created as a logical expansion of previously known images in accordance with a description of new features and abilities.

Imagination in this sense, not being limited to the acquisition of exact knowledge by the requirements of practical necessity, is up to a certain point free from objective restraints. The ability to imagine oneself in another person's or agent's place is very important to social relations and understanding in a natural, artificial, or mixed environment. The most common type of this activity is compassion.

Progress in scientific research is due largely to provisional explanations which are

constructed by imagination, but such hypotheses must be framed in relation to previously

ascertained facts and in accordance with the principles of the particular science.

There are two main types of imagination:

1. Subjection imagination generates the scene or sensation.
2. Objection imagination generates the object.

An image may represent the original only with a certain level of probability, or may be generated as a fuzzy image. Associative thinking is an important tool of imagination development.

Imagination applies both strategies, decomposition and combination, to analyze a complex scene or to generate a complex scene by a combination of simple shapes.


Algorithm of SUBJECTION imagination:

1. Get the description of the subject for imagination.
2. Use ASSOCIATIVE THINKING to generate the image of the sensation.
3. Generate the combination.

Algorithm of OBJECTION imagination:

1. Get the description of the object functions for imagination.
2. Make a decomposition of the function.
3. Use ASSOCIATIVE THINKING to generate the images of sub-objects.
4. Generate the combination.
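A minimal sketch of the OBJECTION algorithm (the decomposition and association tables below are hypothetical stand-ins for the knowledge base):

decompose = {   # step 2: object function -> sub-functions
    "carry water": ["hold liquid", "be carried"],
}
associate = {   # step 3: sub-function -> remembered sub-object (associative thinking)
    "hold liquid": "hollow vessel",
    "be carried": "handle",
}

def imagine_object(function):
    parts = decompose.get(function, [])
    sub_objects = [associate[p] for p in parts if p in associate]
    return " + ".join(sub_objects)   # step 4: generate the combination

print(imagine_object("carry water"))   # -> hollow vessel + handle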




SUPERINTELLIGENCE

All kinds of artificial systems are more powerful and capable than a natural system in each specific area of application (NYT, Wednesday July 12, 2006).

By a "superintelligence" we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. (How Long Before Superintelligence? Department of Philosophy, Logic and Scientific

Method London School of [email protected]://www.nickbostrom.com) 2000) (MIT project).

Smart is "characterized by sharp, quick thought. Smart is often a general term implying mental keenness; more specifically it can refer to practical knowledge, ability to learn quickly, or to sharpness or shrewdness". So smartness is a highly dynamic kind of intelligence with a goal directed to personal gain [74].

Intelligence, as was shown before, is based on a combination of knowledge collection (learning) and knowledge manipulation (reasoning). Any contemporary intelligent system has learning and memorization limitations. The domain orientation of intelligent systems is a result of these limitations. Future AI systems will not have these limitations. Connection to the Internet can extend the complexity and size of the system, and its power, practically beyond any limits. It is important to redesign the Internet to make it accessible for data and knowledge mining. This makes it possible to create superintelligent systems. Computational power and the ability and speed of knowledge manipulation are the base of superintelligent systems.

Knowledge manipulation is a more serious problem. There is a lot of criticism as well as support of the logical power of knowledge manipulation [51]. But the optimistic position is determined by the statement: everything under the Moon develops in accordance with the laws of Nature. All laws of Nature are perceivable, but it takes time. It is reasonable to accept that a reasonable level of knowledge manipulation will be developed in a reasonable time.

Most proposed methods for creating smarter-than-human or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The means speculated to produce intelligence augmentation are numerous, and include bio- and

genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and

mind transfer. Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to directly initiate the Singularity (something uncommon or unusual), a

choice the Singularity Institute addresses in its publication "Why Artificial Intelligence?"

(2005).

Advocates of friendly artificial intelligence acknowledge the Singularity is potentially very dangerous and work to make it safer by creating AI that will act benevolently towards

humans and eliminate existential risks. AI researcher Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines. Isaac Asimov's Three Laws of

Robotics are one of the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans, though the crux of Asimov's stories is often how the laws fail. In 2004, the Singularity Institute launched an


internet campaign called 3 Laws Unsafe to raise awareness of AI safety issues and the inadequacy of Asimov's laws in particular. See also APPENDIX 10.

MEASUREMENT OF INTELLIGENCE (see APPENDIX 3)

The I.Q. test is the universal method of human intelligence measurement. It is possible to use this method as an M.I.Q. (Machine I.Q.) test. In reality the I.Q. test measures the level of basic (mandatory) intelligence abilities. Advanced (optional) abilities can be measured by a more sophisticated test that includes questions related to the procedure of proof. By the way, the M.I.Q. test is a specifically organized Turing test.

There are several standardized IQ tests based on the evaluation of different sets of abilities or skills. Usually they are very strange sets and are not based on a definition or understanding of intelligence. "Most intelligence researchers define intelligence as what is measured by intelligence tests, but some scholars argue that this definition is inadequate and that intelligence is whatever abilities are valued by one's culture" ("Intelligence," Microsoft® Encarta® Online Encyclopedia 2003, http://encarta.msn.com, 1997-2003 Microsoft Corporation).

For example: one standardized test measures 12 mental abilities, and separate scores for each

are computed. These abilities are as follows:


Visual

Vocabulary

Spatial

Arithmetic

Logical

General Knowledge

Spelling

Short Term Memory

Computational Speed

Geometric

Algebraic

Intuition


This set of skills presents a chaotic understanding of intelligence. Basic skills for an I.Q. test can be determined from the skill (ability) structures (see THE STRUCTURE OF INTELLIGENCE). In Part 2 we will return to the measurement of the different intellectual abilities.

Measurements of both artificial and natural intelligence levels can be done using a two-dimensional scale. For comparative measurement of two or more systems, the first dimension shows the ratio between the levels of knowledge of the different intelligent systems; the second one shows the level of intelligent abilities. Only systems with equivalent knowledge can be compared by level of intelligence. This is the reason why I.Q. tests must be applied only to humans of the same age, and perhaps of the same level of education. In the case of a single-system measurement the ratio should be set equal to 1.
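A minimal sketch of this two-dimensional comparison (the tolerance and the numeric levels are illustrative assumptions): two systems are compared by ability only when the ratio of their knowledge levels is close to 1.

def compare(knowledge_a, knowledge_b, ability_a, ability_b, tolerance=0.1):
    ratio = knowledge_a / knowledge_b            # first dimension
    if abs(ratio - 1.0) > tolerance:
        return "incomparable: knowledge levels are not equivalent"
    return ratio, ability_a - ability_b          # second dimension

print(compare(knowledge_a=100, knowledge_b=98, ability_a=0.8, ability_b=0.7))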

The difference between natural and artificial intelligence is defined by the way the system was developed: gradual development of a natural system from zero upward, and of an artificial one from a level defined by the system designer. In the most advanced cases an artificial system



can be designed with a low level of intelligence but with a strong capability to develop advanced abilities through learning. A. Turing's suggestion: "… the best way to pass the Turing test is to build a baby machine and train it." See APPENDIX 1.

So, the goal of this area of science is the analysis of behavior and the synthesis of the structure of intelligence based on a suitable technology (expert systems, evolutionary algorithms, neural nets, and so on) in accordance with the specific area of application, the level of autonomy, and universalism.

The magnetoencephalogram (see APPENDIX 7) shows lower brain activity in persons with higher IQ. High local brain activity in combination with relatively low activity of the rest of the brain demonstrates different information density. A high level of intelligence demonstrates almost the same information density. It is a result of more efficient brain organization and can be used as a hint for how to design an optimal artificial brain structure and an objective measurement. But we still don't have a universal method of intelligence measurement accepted by the scientific community.

CLASSIFICATION OF THE INTELLIGENT TASKS AND

ABILITIES OF THE AGENTS TO ACHIEVE THEIR GOALS

Introduction

What kinds of intellectual tasks are there? Who is more intelligent or "smarter": a scientist or a wood-maker (human or machine), a metal-maker or a wood-maker? In [47] we can read: "Who's more intelligent: a Supreme Court Justice or a professional golfer?" Task classification can help to design a system and answer this question.


The definition of the term "smart" as it was presented in the previous chapter: "Smart – characterized by sharp, quick thought. Smart is often a general term implying mental keenness; more specifically it can refer to practical knowledge, ability to learn quickly, or to sharpness or shrewdness". So smartness is a highly dynamic kind of intelligence with a goal directed to personal gain [74].

Intelligence Abilities


The system design is based on the set of desirable system tasks (abilities) and the relationships between them. A conventional software design technology creates programs for a specific problem solution. From the programmer's point of view, Artificial Intelligence is a software design technology for creating programs with intellectual abilities. These programs can be used for a wide area of problem solving.


Intelligent abilities can be presented as a multilevel structure [2, 58] (see THE STRUCTURE OF INTELLIGENCE). A multilevel structure of functions (abilities) describes expressive and cognitive thinking at the upper levels of the structure; learning, problem solving, etc. at the middle level; and generalization, reasoning, conceptualization, induction, information collection, perception, etc. at the lower level of the structure, presenting the system from another




point of view [49]. Conceptualization itself consists of two levels: identification of the important characteristics and identification of how the characteristics are logically linked. Certainly this structure is based on some level of simplification of the relationships, as well as of the size of the set of abilities. But this structure can help to determine the set of abilities related to a certain goal and their relationships, and to determine the metric structure to evaluate the system intelligence levels.

It takes longer to exercise the upper-level abilities than the lower-level abilities. Different tasks require different sets of abilities to fulfill these tasks. "Animal behavior ought to be used as a model to define a hierarchy of intelligence tasks" [59].


The structure of the intelligent functions was discussed earlier, for example, in [46, 49].


Goal and Agent Classes


The goal achievement is a result of the intelligent system's actions. "A system can be intelligent only in relation to a defined goal or environment" [21]. Different tasks and different areas of activity have different goals. Similar goals can be combined into one class, which we can call the goal class (GC). The goal class (similarity) is determined by the minimal needed set of abilities to fulfill the goal of the task with the same weight functions (w) of each ability:


GC = min(needed [wA])


All agents that exercise the same minimal set of active abilities to carry out the goal with the same set of weight functions can be combined into one class, which we can call the agent class (AC). The members of the same agent class can fulfill the goals of the same goal class:


AC = min(active [wA])


The agent class should match the goal class:


AC ↔ GC
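A minimal sketch of this matching condition, representing a class as a dictionary of ability weights (the abilities and weights below are illustrative assumptions): an agent class matches a goal class when it covers every needed ability with at least the same weight.

def matches(agent_class, goal_class, eps=1e-9):
    return all(agent_class.get(ability, 0.0) + eps >= weight
               for ability, weight in goal_class.items())

goal_class = {"perception": 0.5, "reasoning": 0.9}                  # minimal needed set
agent_class = {"perception": 0.5, "reasoning": 0.9, "motor": 0.3}   # active abilities
print(matches(agent_class, goal_class))   # True: the agent can serve this goal class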


A scientist, a wood-maker, and a metal-maker (natural or artificial) are trained to perform different classes of tasks (goal classes) and we cannot make any comparisons between different

agents of different agent classes. So, it is impossible to compare a scientist and a handyman, as

long as they fulfill different tasks under different goals. In some cases it is possible to combine

the systems with visibly different intelligence levels into one agent class. For example, an agent from the "handyman class" and an agent from the "scientist class" can be combined into one class if these systems act under similar goals such as, for example, survival, reproduction, repairing something that does not need any special scientific knowledge, etc. The performance of these

systems and level of their intelligence can be compared. Multiple-intelligence theory [47]

supports this point of view. Achievement of the same goal by different agents usually involves

the same set of their abilities with the same set of weight functions. It is impossible to compare

a car and a bookstore even if you use the money scale to evaluate them. But as soon as you look

at them as investment choices (taxi or shop), you will be able to make a comparison: the same

goal (profit) and the same set of characteristics. The stock market permits the use of the money

scale to compare almost everything because success is judged by the same investment goal and



the same parameters of evaluation. Good gamblers in reality use a vector function, but unsophisticated gamblers play by price difference.


It is reasonable to suppose that a scientist has better training in abstract abilities than a handyman, and it is reasonable to make a serious decision about differences in the intelligence level of these systems. Different domain applications are determined by different sets of abilities. But it is possible that a handyman (human or machine) has a greater level of intelligence (special abilities) than a scientist (human or machine). If these additional abilities of the handyman are not useful to his/her/its kinds of activities, then they cannot be utilized in his/her/its professional activities. Performance of different tasks utilizes certain limited sets of intelligent abilities. In this case a very smart metal-worker will not be able to fully use his/her/its available intellectual power and will not be able to demonstrate the full set of abilities that are not important to fulfill the standard metal-worker task. In order to make an evaluation of the true "brain" power of the system we should assign a reasonable and comparable goal level. It is important to avoid using an overqualified agent.


There are different levels (capacities) of intelligence. Sometimes different levels of performance

(skills) can be presented as different levels of intelligence. Different levels of performance are

determined in many cases by the limitation of one or more elements of the system. Advanced

upper-level abilities of the intelligent structure (generalization, conceptualization, etc.) do not necessarily guarantee a high level of skills. For example, a low capability of the sonar sensors can prevent a person from becoming a musician even if he/she/it has all of the essential subsystems intact. Beethoven was not born deaf; he lost his ability to hear at a later age. A composer, as a music designer, can "hear" his music with his inner "sensor". The famous Helen Keller, author and educator, was deaf, blind, and mute, but she had a sensitive tactile system and a sense of smell. She learned to "hear" and to speak, and she was able to make her great intellectual power work [34]. A scientist with a high level of intelligence may have a problem doing a manual job if he/she/it does not have suitable actuators. A "handyman" is not a handyman without hands. There are two choices when refining the definition of intelligence: to extend the definition and include sensors and actuators, or to add a separate explanation of sensor and actuator importance. As soon as we identify intelligence as "…an ability of a system to act appropriately…" [1], we include an actuator in this definition. No sensors – no knowledge; no actuators – no performance; without them it is impossible to evaluate the level of domain-oriented intelligence.


MIND, INTELLIGENCE, CONSCIOUSNESS, THOUGHT

There is the brain and there is the mind. It is almost like the distinction between the body and the soul (Christof Koch, California Institute of Technology).

Usually, mind refers to the collective aspects of intellect which are manifest in some combination of thought, perception, emotion, will, and imagination.

There are many theories of what the mind is and how it works, dating back to Plato, Aristotle

and other Ancient Greek philosophers. Pre-scientific theories, which were rooted in theology,

concentrated on the relationship between the mind and the soul, the supposed supernatural or

divine essence of the human person. Modern theories, based on a scientific understanding of


the brain, see the mind as a phenomenon of psychology, and the term is often used more or less synonymously with consciousness.

The question of which human attributes make up the mind is also much debated. Some argue

that only the "higher" intellectual functions constitute mind: particularly reason and memory.

In this view the emotions - love, hate, fear, joy - are more "primitive" or subjective in nature and should be seen as different in nature or origin to the mind. Others argue that the rational

and the emotional sides of the human person cannot be separated, that they are of the same

nature and origin, and that they should all be considered as part of the individual mind.

The minimal set of intelligent functions is learning and reasoning. In accordance with existing

definitions mind is:

1. The human consciousness that originates in the brain and is manifested

especially in thought, perception, emotion, will, memory, and imagination.

2. The collective conscious and unconscious processes in a sentient organism that

direct and influence mental and physical behavior.

3. The principle of intelligence; the spirit of consciousness regarded as an aspect

of reality.

4. The faculty of thinking, reasoning, and applying knowledge [74].


Consciousness is having an awareness of one's environment and one's own existence,

sensations, and thoughts, capability of thought, will, or perception [74].

Or

Consciousness is the awareness of external and internal stimuli [60].


Awareness is the minimal ability of consciousness.


Cognition is the mental process or faculty of knowing, including aspects such as

awareness, perception, reasoning, and judgment [74].


The first definition of the term "consciousness" shows that the terms "consciousness" and "cognition" are synonymous. This is not true. If we use a minimal set of needed terms, then the second definition of the term "consciousness" shows the relation between these two terms:

Cognition = Consciousness + reasoning and other intelligent abilities.


Awareness is the range of what one can know.


There is a famous brief treatise by William James, American psychologist and philosopher (1842-1910), on the question "Does consciousness exist?" The answer he gives is "yes and no."


1. If we think of consciousness as immaterial, spaceless, massless but nonetheless an

ontologically real thing, no, that does not exist.

2. But if we think of it as a flow of ideas, the stream of perceptions and thoughts and

feelings the process by which a supernumerary intelligence knits together experiences

over a course of time, then consciousness is indubitable. [67]

27


I would replace this question with "Does cognition exist?"


Unconsciousness is the lack of awareness and of the capacity for sensory perception; not conscious; without conscious control; involuntary or unintended [74].


Thinking is a way of reasoning and judgment. Thought is the faculty of reasoning [74]. Thought is the brain's product. As research showed more and more clearly that the brain really was a network, the brain's structure became a proof that neural networks worked and a glimmering hope that they could solve problems intractable to artificial intelligence.


The other tradition in artificial intelligence, the symbolic tradition, discovered the need for schemata and frames to create a way for a computer to choose a set of rules appropriate to a specific situation. One school, what [27] calls the reasoning approach, continued to view thinking as a process of logical inference. The other school (Simon labels it search) views thinking as a process of searching among possible problem solutions. This school emphasized building representations that modeled the problem situation and finding efficient strategies for searching among solutions. It found support in biology, which had started to show that the brain solved some search problems by building neural maps of the external world [75].

Mind includes the collective conscious and unconscious processes. Intelligence does not include unconscious processes because the unconscious is not a faculty of learning and reasoning.


In this case: Mind is intelligence plus unconsciousness. It is the product of a double-level control system. Conscious and subconscious processes are the output of the upper-level control (Main Control System); unconscious processes are the output of the lower-level control systems (Local Control Systems).


The result of this logical process is a new definition of mind: Mind is the collective conscious and unconscious processes in a sentient system that direct and influence mental and physical behavior.


A thinking process is a quantum-like process. The mind has a domain of separable concepts, which can be connected by rules of logic (P. Pylkkanen at the University of Skovde, Sweden) [76]. A number of researchers today make an appeal to quantum physics in trying to develop a satisfactory account of the mind, an appeal still felt to be controversial by many. Post-phenomenologists (Bohm, Pylkko, and others) have created the model of the "aconceptual mind". For them the "general thinking process is non-logical, uncontrollable, unpredictable, and its semantic elements are indivisible in a sort of way that makes it difficult to analyze in conceptual terms" [76]. This statement contradicts the whole concept of thinking. It is an unproductive, negative approach to the development of artificial intelligent systems.




Quantum physics (W. Heisenberg and N. Bohr) describes the world as a dual system that can be described as waves and particles at the same time. This theory applies to the micro world. The neural net is an object of the macro world. Its states depend just on information flow.

The electroencephalogram (see APPENDIX 7) and other methods support the dual character of brain electrical processes (pulses and waves). It is possible that neuron oscillation also creates an additional analog information process. But this duality is not quantum physics duality. It is a result of the discrete nature of information coding and the electrical-current nature of information transfer in the macro system.

It has been widely believed in the neuro- and cognitive sciences that the brain can be understood as a macroscopic object governed not by quantum physics but by classical physics, or at most by molecular and chemical biology. They believe that the application of quantum mechanical concepts to the brain can hardly be accepted.

Recent research has shown that brain waves contain useful information about intention or mind. After some training process, distinctive patterns associated with specific intentions can be detected from brain waves, which can be used to generate commands to control computers and robots.

This duality of the natural system should be taken into consideration in artificial brain design. But for the time being it is reasonable to follow the less complex concept.

A small group of physicists tries to create a universal theory of everything. They are discussing the hypothesis of a primary brain that triggered the Big Bang. It is a mix of materialism (brain) and idealism (primary mind), or some kind of Intelligent Design theory. They connected it to the Singularity. It is a very controversial hypothesis.

By the way, the word "mind" translates into the Russian language as "razum". This term covers just conscious and subconscious processes and does not include unconscious processes, as it does in the English language. It means that there is a cultural difference in the presentation of the terms. It should be taken into consideration.

THE MIND AS AN OPERATING SYSTEM

The operating system is the main part of the control system. The "operating system" at the top of the hierarchy sets goals for lower-level processors and monitors their performance. Since it is at the top, its instructions can specify a goal in explicitly symbolic terms, such as "get up" and "walk". It does not need to send detailed information about how to contract muscles. Its instructions will be formulated in progressively finer detail by the processors at lower levels, right down to the contractions of muscle spindles (The Computer and the Mind: An Introduction to Cognitive Science by Philip N. Johnson-Laird, Harvard University Press, 1988).
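A minimal sketch of such a hierarchy (the levels, commands, and plan below are illustrative assumptions): the top level issues a symbolic goal and each lower level refines it into finer detail.

def operating_system(goal):
    # the top level works in symbolic terms and sends no muscle-level detail
    plan = {"walk": ["lift leg", "swing leg", "plant foot"]}
    for step in plan.get(goal, []):
        motor_processor(step)

def motor_processor(step):
    # a lower level refines each symbolic step into actuator commands
    for command in ("contract:" + step, "relax:" + step):
        spinal_level(command)

def spinal_level(command):
    print("actuator <-", command)   # the finest level of detail

operating_system("walk")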




DISSOCIATION BETWEEN INTENTIONAL CONSCIOUS AND

UNINTENTIONAL CONSCIOUS PROCESSES, CONSCIOUS,

UNCONSCIOUS, AND SUBCONSCIOUS PROCESSES

Intentional and unintentional processes are parts of intelligent activities.

"If someone says to you, 'Look at that bird', you can search for a bird; the decision can be conscious… If your name occurs in nearby conversation at a cocktail party, it attracts your attention involuntarily – a phenomenon that establishes the existence of a processor that lies dormant until the right pattern of sound brings it to life" (The Computer and the Mind: An Introduction to Cognitive Science by Philip N. Johnson-Laird, Harvard University Press, 1988).

There are three types of unintentional processes:

- unintentionally controlled by the Main Control system ("brain") (intuition, emotions, and others)
- hard-wired or hard-coded to the Main Control system ("brain") (unconscious systems)
- controlled by a Local Control system (unconscious systems).

At the present stage, there are still fundamental disagreements within psychology about the nature of the subconscious mind (if indeed it is considered to exist at all), whereas outside formal psychology a whole world of pop-psychological speculation has grown up in which the unconscious mind is held to have any number of properties and abilities, from animalistic and innocent, child-like aspects to savant-like, all-perceiving, mystical and occult properties. Some psychics also believe that the subconscious mind possesses a kind of "hidden energy" or "potential" that can realize dreams and thoughts with minimal conscious effort or action from the individual. Some also believe that the subconscious has an "influencing power" in shaping one's destiny. All such claims, however, have so far failed to stand up to scientific scrutiny. This mystical approach is unproductive and cannot be accepted in the psychology of the AIS.

In our understanding, subconscious processes are processes unintentionally controlled by the Main Control system ("brain") (intuition, emotions, and others). The subconscious can create very dangerous problems in natural and artificial intelligent systems. It is not under the full control of conscious processes and has full access to almost all (sometimes twisted) knowledge and to almost all control systems. It can generate unpredictable, dangerous behavior.

AWARENESS AND SELF-AWARENESS

Awareness

Awareness and self-awareness are important abilities of an intelligent system. They are tools for understanding the external world and the system itself. Awareness includes perception and cognition, the two most important elements of intelligence. But awareness is not a synonym of intelligence. There are intelligent systems without awareness (the chess player). In this case intelligent abilities are directed just to a limited external world. The definitions of



the terms "awareness" and "self-awareness" are built with the application of the term "existence" without any understanding of this term. It is important to define the terms "awareness" and "self-awareness" and to find out the difference in the application of these terms in the natural and artificial worlds.

Awareness is having the range of what one (an Agent) can know or understand of one's environment and one's own existence (self-awareness). Levels of awareness range from sharp to coma and death. In the human brain the thalamus plays a major role in regulating arousal, the level of awareness, and activity.

Some natural intelligent systems don't have a full understanding of their own existence (self-awareness). A cat does not recognize itself in the mirror. Only beginning at 15-24 months does a child begin to recognize itself in the mirror (see APPENDIX 1).

There are two main grounds of awareness:

1. the physical world,

2. the social environment.


In biological psychology, awareness describes a human's or animal's perception of and cognitive reaction to a condition or event. Awareness does not necessarily imply understanding, just an ability to be conscious of, feel, or perceive.

Awareness is a relative concept. An animal may be partially aware, may be subconsciously

aware, or may be acutely aware of an event. Awareness may be focused on an internal state,

such as a visceral feeling, or on external events by way of sensory perception. It provides the

raw material from which animals develop qualia, or subjective ideas about their experience.

Awareness includes the evaluation of environment conditions in relation to the ability of the system to exist and survive. Awareness is the result of conscious or subconscious processes.

Electro-chemical networks related to the chordate nervous system facilitate awareness.

Researchers have debated what minimal components are necessary for animals to be aware

of environmental stimuli, though all animals have some capacity for acute reactive behavior

that implies a faculty for awareness.

Popular ideas about consciousness suggest the phenomenon describes a condition of being aware of one's awareness. Efforts to describe consciousness in neurological terms have focused on describing networks in the brain that develop awareness of the qualia developed by other networks. Neural systems that regulate attention serve to attenuate awareness among complex animals whose central and peripheral nervous systems provide more information than the cognitive areas of the brain can assimilate. Within an attenuated system of awareness, a mind might be aware of much more than is being contemplated in a focused extended consciousness.

Awareness of an Artificial Intelligent System develops understanding of the specific outside environment, activates the specific system abilities needed to survive in this environment, and generates everything needed to reach the goal.



Self-awareness

Self-awareness is the ability to perceive one's own existence, including one's own traits, feelings, and behaviors. It is the ability to develop one's own inner dynamic model. In an epistemological sense, self-awareness is a personal understanding of the very core of one's own identity. In today's global society identity is losing its crisp character. For example, it is difficult to define such an important element of identity as belonging to a specific group such as a nation. It is difficult to define the term "nation" and some others. This is important not just to a human being but to an artificial agent as well.

Psychologists cannot assume that a computer becomes self-aware. At the same time they cannot assume that such machines are not self-aware. Self-awareness is less a physical than a mental phenomenon.

Self-awareness is the awareness of oneself as an individual entity or personality. An Agent should be able to understand the boundaries of His/Her/Its personality as a part of the social (as a team member) and physical environment. It is an important condition for the correct understanding of incoming information. It is the understanding of the system's abilities to interact with the environment.

Self-awareness is the understanding that one exists. Furthermore, it includes the concept that one exists as an individual, separate from other people, with private thoughts. It may also include the understanding that other people are similarly self-aware.

There is no universally accepted theory of what the word "existence" means. It is reasonable to define the term (Agent) "existence" as an ability of the Agent to create its own dynamic model with physical and mental representation (the state of the mind). The state of the mind includes the list of activated areas of the memory, the state of the information flow control system, etc. All this information is developed by periodic self-diagnostic tests. It is the famous "I think, therefore I am". Perhaps what Descartes meant, simply put, is "I am vividly aware of my existence" … "I think, therefore I exist."

It is known, for example, that in the very beginning after the amputation of a leg a human being does not realize it. He/she must update the world model of his/her own body to operate the new system.

A human baby gradually generates the body's world model (see APPENDIX 1):

- birth to 1 month: generating mental maps of the different positions of its body
- 9-15 months: the brain has a fairly complete mental model of what its body can do and what effect it has on the environment
- 15-24 months: language and symbolic communication are coming online now, and these tools are used to further expand the mental model of the world; a child begins to recognize itself in the mirror.

At the earlier stage a baby's brain is underdeveloped and does not have strong connections between the brain and the sensor system. This is the reason why a baby does not demonstrate self-


awareness at this stage. Self-awareness is the combination of intelligence and the diagnostic system activities. It is a combination of subconscious and conscious processes.

Animals have problems recognizing themselves in the mirror. A cat cannot recognize itself in the mirror. Primates and some types of dolphins can. Experimental results show that an elephant from the Bronx Zoo (NYC) can recognize its reflection in the mirror, both as a whole and as different parts.

An Artificial Intelligent System has two methods to develop its own dynamic world model:

1. to accept information from the system designer
2. to develop it by itself through random movements of the body parts, collecting information about these movements and information from the body sensors (a minimal sketch of this method follows below).
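A minimal sketch of the second method, often called motor babbling (the simulated sensor, joints, and noise model are illustrative assumptions):

import random

def body_sensor(joint, angle):
    # hypothetical stand-in for real body sensors: a noisy posture reading
    return angle + random.gauss(0.0, 0.5)

body_model = {}   # joint -> list of (command, observed result) pairs
joints = ["shoulder", "elbow", "wrist"]

for _ in range(100):                         # random movements of the body parts
    joint = random.choice(joints)
    command = random.uniform(-90.0, 90.0)    # a random joint command
    observed = body_sensor(joint, command)
    body_model.setdefault(joint, []).append((command, observed))

# the collected pairs approximate what each body part can do
print(len(body_model.get("elbow", [])), "elbow samples collected")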

Full self-awareness can be developed only in the social environment. It includes the language

development (see above) and development of the circle:

PERSON'S (SYSTEM'S) NAME → "YOU ARE" → "I AM" → PERSON'S NAME

The system's name should be assigned to the system's world model. An Artificial Intelligent System can learn the principle of this circle through understanding of the term "substitution".

Some authors call self-awareness self-consciousness because the self is the object of analysis. Self-consciousness is credited with the development of identity.

In an epistemological sense, self-consciousness is a personal understanding of the very core of one's own identity. It is during periods of self-consciousness that people come the closest

to knowing themselves objectively. Jean Paul Sartre describes self-consciousness as being

"non-positional", in that it is not from any location in particular.

Self-consciousness plays a large role in behavior, as it is common to act differently when people "lose themselves in a crowd". It is the basis for human traits, such as accountability and conscientiousness. It also plays a large role in theatre, religion, and existentialism. Self-consciousness affects people in varying degrees, as some people are in constant self-monitoring (or scrutinizing), while others are completely oblivious of their existing self.

Different cultures vary in the importance they place on self-consciousness.

A disorder of an intelligent system can develop outstanding creativity (see also PSYCHOLOGICAL MALFUNCTIONS, DISORDERS).

REFLEXES

Reflex is an automatic response or reaction [74] (see also APPENDIX 5). The AIS demonstrates (similar to natural systems) two types of reflexes:

- conditional
- unconditional.

The Main Control system ("brain") controls conditional reflexes. They are based on information (conditions) that is stored in the memory. These are subconscious intelligent functions



triggered by the signal generated by associative thinking. The Local Control systems control unconditional reflexes. These are unconscious, non-intelligent functions.

Some unconditional reflexes can be controlled by the Main Control system without the involvement of intelligent abilities. In this case a system uses a logical function hard-coded or hard-wired to the "brain". It is a non-intelligent process in an intelligent system, but an unconscious function of mind.

The Main Control system can be connected to the internal sensor system. In this case the system generates a reflection of the input information as a sensible reaction. This type of reaction can be seen in art apprehension (see ART APPREHENSION).


Fig. I-3 A Reflex Arc

A reflex action or reflex is a biological or artificial control system linking stimulus to

response and mediated by a reflex arc (see SENSATIONS). Reflexes can be built-in or

learned. It occurs very quickly before thinking. Before the message is sent to the brain, the

spinal cord senses the sensory stimulus, and sends a signal (action potential) to an effector organ (actuator, muscle) to create an immediate action to counter the stimulus. For example,

an agent stepping on a sharp object would initiate the reflex action through the creation of a

stimulus, (pain) within specialized sense receptors located in the skin tissue of the foot. The

resulting stimulus would be transmitted through afferent or sensory neurons and processed at the lower end of the spinal cord, part of the central nervous system. This stimulus is processed by an interneuron to create an immediate response to pain by initiating a motor (muscular) response which is acted upon by muscles of the leg, retracting the foot away from

the object. This activity would occur as the pain is arriving in the brain, which would process a more cognitive evaluation of the situation (see EMOTIONS).
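A minimal sketch of this division of labor (the threshold and messages are illustrative assumptions): the local level reacts before thinking, while the main level evaluates afterwards.

def local_control(stimulus):
    # unconditional reflex: immediate, unconscious, no reasoning involved
    if stimulus["pain"] > 0.8:
        return "retract foot"
    return None

def main_control(stimulus):
    # slower conscious evaluation, arriving after the reflex has already acted
    return f"evaluate situation: pain={stimulus['pain']}, source={stimulus['source']}"

stimulus = {"pain": 0.9, "source": "sharp object"}
print(local_control(stimulus))   # fires first, before thinking
print(main_control(stimulus))    # cognitive evaluation follows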

Reflexes are a way of interaction with the environment that is important for learning. Each interaction generates knowledge in the knowledge base (see Learning by Interactions). But without the Central Control system, unconditional reflexes are not intelligent processes.



A frog's leg separated from the body demonstrates an unconditional reflex, but it is not an intelligent process.

FREE WILL AND ACTIONS

The problem of free will is the problem of whether rational agents exercise control over their

own actions and decisions. The various philosophical positions on the problem of free will

can be divided in accordance with the answers they provide to two questions:

1. Does free will exist?

2. Is determinism true?

Addressing this problem requires understanding the relation between freedom and causation,

and determining whether or not the laws of nature are causally deterministic. The various positions taken differ on whether all events are determined or not (determinism versus indeterminism), and also on whether freedom can coexist with determinism or not (compatibilism versus incompatibilism). So, for instance, hard determinists argue that the universe is deterministic, and that this makes free will impossible.

Society generally holds people responsible for their actions, and will say that they deserve praise or blame for what they do. However, many believe that moral responsibility requires free will. Thus, another important issue is whether individuals are ever morally responsible

for their actions—and, if so, in what sense.

Free will is "The power, attributed especially to human beings, of making free choices that are unconstrained by external circumstances or by an agency such as fate or divine will" [74]. This definition should be expanded by excluding "attributed especially to human beings" because, as will be shown later, artificial intelligent systems should be incorporated into the human society and follow the rules of this society's behavior.

One of the most heated debates in biology is that of "nature versus nurture". This debate concerns the relative importance of genetics and biology as compared to culture and environment in human behavior (Pinel P. J., Biopsychology, Prentice Hall Inc., 1990).

In the generative philosophy of the cognitive sciences and evolutionary psychology, free will is assumed not to exist [71, 72]. The accidental is unrecognized necessity (F. Engels). However, an

illusion of free will is created, within this theoretical context, due to the generation of infinite

or computationally complex behavior from the interaction of a finite set of rules and

parameters. Thus, the unpredictability of the emerging behavior from deterministic processes

leads to a perception of free will, even though free will as an ontological entity is assumed not to exist [71,72]. In this picture, even if the behavior could be computed ahead of time, no

way of doing so will be simpler than just observing the outcome of the brain's own

computations [73].

Determinism is ―the philosophical doctrine that every event, act, and decision is the

inevitable consequence of antecedents that are independent of the human will‖ [74].

Determinism is roughly defined as the view that all current and future events are necessitated by past events combined with the laws of nature (McKenna Michael, "Compatibilism", The

Stanford Encyclopedia of Philosophy (Summer 2004 Edition), Edward N. Zalta ).

35


Intelligent system behavior is determined by knowledge, experience, goal, motivation, internal structure and complexity, and external conditions. It is a case of "nature versus nurture", or strict determinism.

But there is one problem. In real life there are sometimes several alternative strategies with equal outcomes in terms of goal achievement. In this case an intelligent system can make a decision in accordance with the outcome of a mechanism of random choice (flipping a coin). By the way, a coin's dynamics are predefined. The application of a random number mechanism can be interpreted as free choice. Artificial Intelligent Systems have a random number mechanism and can use it much more easily than natural systems can, as the sketch below shows.
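A minimal sketch of this mechanism (the strategies and values are illustrative): the random choice is applied only among alternatives with equal outcomes, so determinism of the evaluation is preserved.

import random

def choose(strategies, value):
    best = max(value(s) for s in strategies)
    ties = [s for s in strategies if value(s) == best]
    return random.choice(ties)   # "flip a coin" only among equal outcomes

value = {"route A": 10, "route B": 10, "route C": 7}.get
print(choose(["route A", "route B", "route C"], value))   # A or B, chosen "freely"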

There is another problem. An agent can act under conditions of limited information (conditions of uncertainty and risk). Philosophically this does not contradict determinism, but it forces an agent to make its own choice with some probability of a wrong decision. This is true at least for the Macro World.

There is a third problem. An agent can face an unfriendly social environment with unpredictable behavior. It is the condition of an antagonistic game, which creates conditions of uncertainty and risk.

This dualism creates difficulties in the prediction of a system's behavior. In real life determinism as the absence of free will does not mean full predictability, because we don't know all the variables. Lack of information limits the power of determinism. It is still a probabilistic situation: a decision-making process under conditions of uncertainty and risk. Is it reasonable to hold an artificial system responsible for a wrong choice? It is a difficult question. Responsibility means punishment and reward! See the chapter LAW AND MORAL and APPENDIX 12.

Punishment is the practice of imposing something unpleasant on a subject as a response to some unwanted or immoral behavior or disobedience that the subject has displayed; it is the reduction of a behavior via a stimulus which is applied ("positive punishment") or removed ("negative punishment").

An award is something given to a person or group of people to recognize excellence in a certain field.

Determinism does not mean inevitability. It means that each specific result depends on the chosen specific actions, with a level of probability determined by lack of knowledge. In this case an agent is responsible for its choice of actions. If an Artificial Intelligent System is entitled or forced to exercise free will, it should be able to evaluate the possible results of different alternatives and take them into consideration. In most cases the system deals with single events, for which probability does not make sense. Evaluation of the possible dangerous results of actions should then be done not by evaluating probability but by evaluating the possibility of the dangerous results. The value of possibility can be equal to "zero" or "one". The value "one" is not acceptable and must be a reason for punishment. Efficiency of the system's action greater than expected can generate a reward (see SOCIAL BEHAVIOR, The Fair Deal Development).
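A minimal sketch of such a possibility-based (rather than probability-based) safety filter, with hypothetical hazard predicates and utilities, could look like this:

    def possibility_of_danger(action, hazards):
        """Possibility-valued check for a single event: 1 if any known
        hazard can occur at all, 0 otherwise (no probabilities)."""
        return 1 if any(hazard(action) for hazard in hazards) else 0

    def select_action(alternatives, utility, hazards):
        """Alternatives whose possibility of danger equals "one" are
        not acceptable and are discarded before utility is compared."""
        safe = [a for a in alternatives
                if possibility_of_danger(a, hazards) == 0]
        if not safe:
            raise RuntimeError("no admissible action")
        return max(safe, key=utility)

    # Hypothetical example: speed choices with one hazardous alternative
    print(select_action([80, 100, 140],
                        utility=lambda speed: speed,
                        hazards=[lambda speed: speed > 120]))  # prints 100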

It is very difficult to impose punishment and reward on an artificial system, because contemporary artificial systems are not very sensitive to punishment and reward. It makes




sense to develop a set of rules and criteria the system must follow, used as a limitation on dangerous actions (see EMOTIONS and STIMULUS AND MOTIVATION, and LAW AND MORAL).

Now the definition of free will is: the power, attributed to the Intelligent System, of making free choices under existing external limitations and being held responsible for the results of its actions.

Free will should be translated into actions. Absolute freedom (anarchy) does not exist in real life. Reasonable actions based on evaluation of circumstances are freedom of actions. Freedom of actions is determined by freedom of will and existing constraints. Degree of freedom of actions is a "measure of variability which merely expresses the number of options available within a variable or space". In a system with N states the degree of freedom is equal to N.

The last question is: do we need an artificial system with free will? An autonomous system without responsibility for the results of its actions, acting under conditions of uncertainty, can harm people and animals and destroy the environment. It is important to have protection. Law and morality will define the responsibility of the artificial system and of its designer as well. For example, carmakers are responsible for their product in accordance with moral and criminal law. A car is not an autonomous intelligent system and therefore is not responsible for its actions.

By the way, Autonomy (Greek: Auto-Nomos, nomos meaning "law": one who gives oneself his own law) means freedom from external authority. Autonomy is a concept found in moral, political, and bioethical philosophy. Within these contexts it refers to the capacity of a rational individual to make an informed, uncoerced (not forced to act or think in a certain way by use of pressure, threats, or intimidation) decision. In moral and political philosophy, autonomy is often used as the basis for determining moral responsibility for one's actions (Wikipedia).

The structure of an Artificial Intelligent System is designed as a closed-loop system with incorporation of the environment as a part of the structure (see THE STRUCTURE OF AIS). A simple environment does not undermine the determinism of the system. Complex environments such as financial, military, social, and other systems have unpredictable reactions that can be activated with a delay, over a long period of time. In this case the feedback signal does not help to correct the system's behavior. The system behaves as an open-loop, statistically controlled one.


THE STRUCTURE OF INTELLIGENCE

(Based on Part 2 analysis)

Intelligent Abilities

1. Conscious

Learning (Knowledge collection)

Sensation

- Sensing

- Attention

- Discrimination




- Perception

- Perceive

i. Recognition

ii. Localization

- Conceive

i. Judgment
ii. Interpretation

iii. Understanding

Reasoning

Conditional reflexes

Associative thinking

Generalization

- Conceptualization

i. Identification of important characteristics

ii. Identification of how the characteristics are locally linked

- Induction

- Classification

Deduction

Judgment

Motivation

Creativity (Intentional knowledge manipulation)

Hypothesis generation

Reasoning

Associative thinking

Deduction

Judgment

Imagination

- Sensational

- Objectional

Generalization

- Conceptualization
i. Identification of important characteristics
ii. Identification of how the characteristics are locally linked

- Induction

- Classification

2. Subconscious (Unintentional consciousness)

Emotions

Unintentional learning

Intuition

Associative thinking

Unintentional creativity




CONCLUSION

Intelligence is a two-level term.

The first level of intelligence is General Intelligence: the (inherited or built-in) capabilities of a sentient system that enable it to direct and influence mental and physical behavior in accordance with the system's external or internal goal.

The second level of intelligence, Knowledge-based intelligence, can be defined as the knowledge-based ability of a domain-oriented system to act under existing constraints (limitations) and reach external or internal goals, or decrease the distance between the start and the goal stages (intellectual adaptation).

Learning and reasoning are mandatory features of intelligence. There are other, optional features, such as generation of hypotheses, generalization, specialization, conceptualization, and so on. Analysis shows that some existing definitions give reasonable descriptions of natural intelligence but still have problems describing the intelligence of artificial systems. Application of different terms with the same meaning creates problems in measurement. Standardization of definitions is an important condition for success in understanding and measurement. All existing and published definitions quoted in APPENDIX 2 are important, valuable sources of information. Some of these definitions, as was mentioned, are similar to those presented in this book. Human behavior is the source of understanding of intelligence. "Charles Darwin found clear evidence for intentional behavior in earthworms and some scientists believe that even bacteria display it" [23]. It is clear that all these questions arise in natural life, and some of them in artificial life as well.

Intelligence abilities can be presented as a functional multilevel structure. Similar goals of agents can be combined into a goal class. The goal class is determined by the minimal set of abilities needed to fulfill the goal of the task, and one set of weight functions for each alternative (class member). All agents that exercise the same minimal set of active abilities and a common set of weight functions to carry out the goal can be combined into an agent class. The members of the same agent class can fulfill the goals of the same goal class. Overqualified agents can be included in the set of alternatives and should be evaluated in the same way as the rest of the set.
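As an illustrative sketch (the ability names are hypothetical), a goal class can be represented by its minimal ability set, and agent-class membership by set inclusion:

    def goal_class(minimal_abilities):
        """A goal class is identified by the minimal set of needed abilities."""
        return frozenset(minimal_abilities)

    def in_agent_class(agent_abilities, goal):
        """An agent belongs to the matching agent class if it exercises at
        least the goal's minimal ability set; overqualified agents qualify
        too and are evaluated like the rest of the set."""
        return goal <= frozenset(agent_abilities)

    navigation = goal_class({"sensing", "perception", "reasoning"})
    agent = {"sensing", "perception", "reasoning", "generalization"}
    print(in_agent_class(agent, navigation))  # True: an overqualified member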

Creativity as an ability is a universal feature, but it depends on knowledge. In this case it is possible to define creativity as a domain-oriented ability to create new knowledge as a new combination and new application of existing knowledge.

Consciousness is having an awareness of one's environment and one's own existence, sensations, and thoughts; the capability of thought, will, or perception.

Cognition is the mental process or faculty of knowing, including aspects such as awareness,

perception, reasoning, and judgment.

In our understanding, the subconscious consists of processes unintentionally controlled by the Main Control system, the "brain" (intuition, emotions, and others).

There are three types of unintentional processes:




- unintentionally controlled by the Main Control system "brain" (intuition, emotions, and others) (unintentionally conscious, i.e. subconscious systems)
- hard-wired or hard-coded to the Main Control system "brain" (unconscious systems)
- controlled by a Local Control system (unconscious systems)

Mind is the collective conscious and unconscious process in a sentient system that directs and influences mental and physical behavior; the principle of intelligence.

Awareness is having the range of what one (an Agent) can know or understand of one's

environment and one's own existence (self-awareness).

Self-awareness is the ability to perceive one's own existence, including one's own traits,

feelings and behaviors.

Strong relationships between different abilities are presented by cross-references between

almost all chapters of this book.


Figs. I-4 through I-11 present the structures of intelligent abilities.


[Fig. I-4. Intelligent abilities: Conscious and Subconscious (unintentional conscious); the Conscious comprises Learning (knowledge collection) and Creativity (intentional knowledge manipulation).]

[Fig. I-5. Learning: Sensation, Perception, Reasoning (conditional reflexes, associative thinking, generalization, deduction, judgment), Motivation.]

[Fig. I-6. Creativity: Hypothesis generation, Reasoning (associative thinking, deduction, judgment, generalization), Imagination (sensational, objectional).]

[Fig. I-7. Hypothesis generation: Associative thinking, Reasoning, Deduction, Judgment, Generalization, Imagination (sensational, objectional).]

[Fig. I-8. Subconscious (unintentional): Emotions, Unintentional learning, Intuition, Associative thinking, Unintentional creativity.]

[Fig. I-9. Sensation: Sensing, Attention, Discrimination, Perception.]

[Fig. I-10. Perception: Perceive (recognition, localization), Conceive (judgment, interpretation, understanding).]

[Fig. I-11. Generalization: Conceptualization (identification of important characteristics, identification of how the characteristics are locally linked), Induction, Classification.]


REFERENCES:


1 Albus J., Outline for Theory of Intelligence. IEEE Transactions on Systems, Man,

and Cybernetic, vol. 21, No 3. May/June, 1991

2 Albus James S., Meystel Alexander, Behavior Generation in Intelligent Systems,

NIST.

3 Antsaklis Panos, Defining Intelligent Control. Report of Task Force on Intelligent

Control, IEEE Control Systems, June 1994.

4. Artificial Intelligence with Dr. John McCarthy. Conversation On The Leading Edge Of Knowledge and Discovery With Dr. Jeffry Mishlove, 1998.

5. Atkinson Rita, Atkinson Richard, Smith Edward, Bem Daryl, Nolen-Hoeksema Susan. Hilgard's Introduction to Psychology, Harcourt Brace College Publishers, 1996.

6. Andrew A. M. Artificial Intelligence. Viable Systems, Chillaton, Devon (U.K.)

Abacus Press, 1980

7. Boden, Margaret A., Artificial Intelligence and Natural Man, Basic Books, Inc.,

New York, NY, 1977.

8. Bock Peter, The Emergence of Artificial Intelligence: Learning to Learn, The AI Magazine, Fall 1985.

9. Berg-Cross G., Dimensions of Intelligent Systems. Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop, August 13-15, 2002.

10. Bongard Josh, "Animals" grown from an artificial embryo. EPSRC/BBSRC International Workshop Biologically-Inspired Robotics: The Legacy of W. Grey Walter, 14-16 August 2002, HP Bristol Labs, UK.

11. Charniak Eugene and McDermott Drew, Introduction to Artificial Intelligence, Addison-Wesley Pub. Co., Reading, MA, 1985.

12. Cawsey A. The Essence of Artificial Intelligence. Prentice Hall, 1995

13. Computers and The Mind with Howard Rheingold. Conversation On The Leading Edge of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.

14. Commun S., Li Yushan, Hougen D., Fierro R., Evaluating Intelligence in Unmanned Ground Vehicle Teams. Measuring the Performance and Intelligence of Systems: Proceedings of the 2004 PerMIS Workshop, August 23-26, 2004.

15. Coon D. Introduction to Psychology. Exploration and Application. West Publishing

Co. 1995.

16. Campione J.C., Brown A.L., Ferrara R. A Mental Retardation and Intelligence.

Handbook of Human Intelligence, Cambridge University Press, 1982.

17. Dean T., Allen J., Aloimonos Y. Artificial Intelligence. Theory and Practice. The

Benjamin/Cummings Publishing Company, 1995.

18. Decision Support and Expert Systems. Management Support Systems by

Efraim Turban. Prentice Hall. 1995.

19. Davis S. F., Palladino J. J. Psychology, PRENTICE HALL, 1997

20. Ettinger R. H., Crooks R. L., Stein J., Psychology. Science, Behavior, and Life,

Harcourt Brace College Publishers, 1994.

21. Finkelstein Robert, A Method For Evaluating the "IQ" of Intelligent Systems. Performance Metrics for Intelligent Systems Workshop, August 14-16, 2000, Gaithersburg, MD.




22. Foundations of Neural Networks by Khanna, Addison-Wesley, 1990.

23. Freeman W. J., How Brains Make Up Their Minds, Phoenix, 1999.

24. Fogel D. B., Evolving Solutions that are Competitive with Humans. Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop, August 13-15, 2002.

25. Feldman R. S., Understanding Psychology, McGraw-Hill Inc. 1996.

26. Goode E. Brain Scans Reflect Problem-Solving Skill, NYT, February 17, 2003.

27. Gao R. and. Tsoukalas L.H, Performance Metrics for Intelligent Systems. An

Engineering Perspective. Measuring the Performance and Intelligence of Systems:

Proceeding of the 2002 PerMIS Workshop. August 13-15, 2002

28. Gunderson J. P., Gunderson L. F. Intelligence Autonomy Capability. Measuring

the Performance and Intelligence of Systems: Proceeding of the 2004 PerMIS

Workshop, August 23-26, 2004.

29. Gray P., Psychology, Worth Publisher, 1999.

30. Huffman Karen, Vernoy Mark, Vernoy Judith, Psychology in Action, John Wiley & Sons, Inc., 1997.

31. Jerison H. J., The evolution of biological intelligence. Handbook of Human Intelligence, Cambridge University Press, 1982.

32. Hoffman K., Vernoy M., Vernoy J., Psychology in Action, John Wiley & Sons, Inc., 1994.
33. Horst J. A., A Native Intelligence Metric for Artificial Systems. Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop, August 13-15, 2002.


34. Keller Helen. The Story of My Life, 1902.

35. Kassin S., Psychology, Prentice Hall, NJ, 1998

36. Language And Consciousness. Part 4: Consciousness and Cognition with Dr. Steve Pinker. Conversation On The Leading Edge Of Knowledge And Discovery With Dr. Jeffry Mishlove, 1998.

37. Landauer C., Bellman K. L., Measuring the Performance and Intelligence

of Systems: Proceeding of the 2002 PerMIS Workshop. August 13-15, 2002.

38. Lahey B., Psychology, An Introduction. Wm. C. Brown Publishers, 1989.
39. Lahey B.B., Psychology. An Introduction. Wm. C. Brown Publishers, Dubuque, Iowa, 1989.

40. Meystel A. Evolution of Intelligent Systems Architectures. What Should

Be Measured? Performance Metrics for Intelligent Systems. Workshop. August 14-16,

2000, Gaithersburg, MD.

41. Meystel A. Semiotic Modeling and Situation Analysis; An Introduction, AdRem,

Inc.1994

42. Mind Over Machine with Dr. Hubert Dreyfus. Conversation On The Leading Edge of

Knowledge and Discovery with Dr. Jeffry Mishlove, 1998



43. Mind As A Myth with U. G. Krishnamurti. Conversation On The Leading Edge of Knowledge and Discovery with Dr. Jeffry Mishlove, 1998.

44. Myers David G. Psychology, Worth Publish, 1995.

45. McCulloch W. S. What is a number, that a man may know it, and a man that he may

know a number? General Semantics Bulletin, No 26 and 27, 1960.

46. Negnevitsky M., Artificial Intelligence. A Guide to Intelligent Systems, Addison-Wesley, 2001.

47. Plotnik R. Introduction to Psychology, Brooks/Cole Publishing Company, 1995.

48. Polyakov L.M. Agent with Reasoning and Learning: The Structure Design,

Performance Metrics for Intelligent Systems, Workshop, August 14-26, 2004,

Gaithersburg, MD.

49. Polyakov L. M. Structure Approach to the Intelligent System Design. Performance

Metrics for Intelligent Systems, Workshop, August 13-15, 2002, Gaithersburg, MD.

50. Polyakov L.M., In Defense of the Additive Form for Evaluating Vectors, Measuring

the Performance and Intelligence of Systems: Proceeding of the 2000 PerMIS

Workshop. August 14-16, 2000.

51. Russell Stuart, Norvig Peter, Artificial Intelligence. A Modern Approach,

Prentice Hall, 1995.

52. Simon H. The Sciences of the Artificial, Cambridge, Mass., The MIT Press, 1969.

53. Sternberg R. Handbook of Human Intelligence, Cambridge University Press,

1982.

54. The Rising Curve, edited by Prof. Ulric Neisser, American Psychological Association, 1995.

55. Winston, Patrick Henry, Artificial Intelligence, Addison-Wesley Pub. Co., Reading,

MA 1985.

56. Psychology by Crider A. B., Goethals G. R., Kavanaugh R. D., Solomon P. R. Harper

Collins College Publishers, 1993.

57. Subhash Kak, Grading Intelligence in Machines: Lessons from Animal Intelligence. Preliminary Proceedings, Performance Metrics for Intelligent Systems Workshop, August 14-16, 2000.

58. The Oxford Companion to MIND, edited by Richard L. Gregory, Oxford University

Press, 1987

59. Tieger P.D. and Barron-Tieger B., Do What You Are. Little, Brown and Co., 1995.
60. Huffman, Vernoy, Vernoy, Psychology in Action, John Wiley & Sons Inc., 1994.

61. Robinson D. N., The Great Ideas of Philosophy, The Teaching Company, 2004.

62. Sternberg R. J., Grigorenko E. L., Singer J.L., Creativity. From Potential to Realization, American Psychological Association, 2004.
62. Pribram K. H., "Quantum Information Processing in Brain Systems and the Spiritual Nature of Mankind," The Center for Frontier Sciences, Volume 6, No. 1, Fall/Winter 1996, pp. 7-16.

63. Pribram K. H., Languages of the Brain: Experimental Paradoxes and Principles of Neuropsychology. Englewood Cliffs, NJ: Prentice Hall; Monterey, CA: Brooks/Cole, 1977; New York: Random House, 1982.

64. Dennett D. C., Consciousness Explained, Little, Brown and Co., 1991 (p. 511).

65. Dewan E. M., Eccles J. C., et al., The Role of Scientific Results in Theories of Mind and Brain: A conversation among philosophers and scientists. In G.G. Globus and G. Maxwell (eds.), Consciousness and the Brain (pp. 317-328), Plenum Press, New York, 1976.

66. Minsky M., The Society of Mind, Simon and Schuster, 1986.

67. Robinson D. N., The Great Ideas of Philosophy, The Teaching Company, 2006.

68. von Neumann J. and Morgenstern O., Theory of Games and Economic Behavior. Princeton University Press, Princeton, 1953.

69. Polyakov L. M., Kheruntsev P. E., Shklovsky B. I., Elements of the Automated Design of the Electrical Automated Equipment of Machine Tools. Publ. "Mashinostroenie", Moscow, 1974 (in Russian).

70. Watson G. (ed.), Free Will. Oxford University Press, 2nd Edition, 2003.

71. Epstein J.M. and Axtell R., Growing Artificial Societies: Social Science from the Bottom Up. Cambridge, MA, MIT Press, 1996.
72. Wolfram Stephen, A New Kind of Science. Wolfram Media, Inc., May 14, 2002.
73. Koller J., Asian Philosophies. 5th ed., Prentice Hall, 2007.

74. American Heritage Talking Dictionary. Copyright © 1997 The Learning Company, Inc.

75. Jubak J. In the Image of the Brain, The Softback Preview, 1994.

76. Pylkkanen P., Can Quantum Analogies Help Us to Understand the Process of Thought? Brain and Being, John Benjamins Publishing Co., Amsterdam/Philadelphia, 2004.

77. Leibs Scott, Designs of Intelligence, CFO vol. 22, No. 12 November 2006

78. Wiener Norbert, Cybernetics: or Control and Communication in the Animal and the Machine. Paris, Hermann et Cie; Cambridge, MA, MIT Press, 1948.






PART 2

PSYCHOLOGY OF ARTIFICIAL INTELLIGENT SYSTEMS






WHAT IS PSYCHOLOGY OF ARTIFICIAL INTELLIGENT

SYSTEMS?

Introduction

Any intelligent system can be described through a description of its behavior. Behavior is the subject of the science that we call psychology. The theory of human psychology has a strong influence on the development of Artificial Intelligent Systems. It is important to recognize the existence of a strong influence of Artificial Intelligent System psychology back on natural system psychology.


For a long time, cognitive psychology has been both a resource and a beneficiary of robotic research. Robotic vision, robotic speech recognition and robotic vocalization do not completely simulate their human counterparts, but all have found help from psychological research in human sensory processes and human perception. In turn, the process of developing machinery for sensors and perceptive functions has shed light into the "black box" of such functions in humans. The latter, studying human intelligence by trying to implement it, should be especially beneficial to psychology, even though, by and large, robotics has not been used so far as a tool in psychological research.


Areas of psychology in addition to cognitive psychology therefore find themselves in a wide range of new territories on the frontier of robotic research and development. Among these areas, the most obvious are social interaction, communication, emotions and affects, child development, learning and teaching, and perhaps even gender development issues. The creators of most of the social humanoid robots have intentionally left the gender of these robots undecided. They call them by names, not "he" or "she", and they certainly do not like "it". These are creatures, not merely robotic systems, they emphasize. Later, in the chapter GENDER OF AIS, we will discuss this topic.


Some day there will be an entire psychological research area (and even service) devoted to this new kind of "creature". As there is "animal psychology", there will be "robotic psychology". I am predicting that the day is coming soon when robots become creatures living amongst us. This robotic psychology will benefit both them and us.


Being neither a physical science nor a biological science in the strict sense, psychology has evolved as something of an engineering science. All intelligent abilities and functions represent actions. Actions are outcomes of control systems, the subject of information technology. The basic ideas are:


1. The general approach to problem solving in engineering is first to reduce the problem

to a model, usually including a number of distinct modules.

2. The modules have properties that encompass both the given function and the means by which that function can be integrated into the performance of the overall system.



3. Psychology attempts to use the laboratory context as a simplified model, with the

experimental variables chosen to tap into one or another functional module.


The most successful application of this line of thinking can be seen in contemporary

cognitive neuroscience.

1. Any cognitive achievement, no matter how complex, is reducible to an ensemble of

distinguishable functions.

2. Each function is accomplished by processes and networks in the central nervous

system.

3. Manipulation of relevant variables in the controlled conditions of laboratory research is the means by which thought and action are put on a scientific foundation.

It is important to understand the difference between an engineering module that actually

performs a task and a cognitive event subject to interpretation. Interpretation is the subject of

the upper level of the control system. Performance is the subject of the lower level of the control system.

An artificial intelligent system, like a natural one, generates specific behavior. This behavior is determined by the system design and external conditions. There are many commonalities between the behaviors of these two classes of systems, but strong system differences create specific behavior and features. Discovering these differences and learning methods for their implementation is the main goal of the Psychology of Artificial Intelligent (AI) Systems.

Psychology is "the science that deals with mental processes and behavior".

Psychology of Artificial Intelligent Systems (PAIS) is the science that deals with processes

related to the artificial mind, intellect and behavior. These processes originate in the artificial

brain (computer) and are manifested especially in thought, perception, emotion, will,

memory, imagination, and so on. Analysis is the main method of PAIS. The main goals of PAIS are:

1. Clear defining, describing and understanding the mental processes in engineering

terms.

2. Organizing the information in a highly structured, algorithmic form.


Insiders of the robotics field have recently observed a shift from behavior-based approaches to robotics that deal with low-level competence to approaches that try to build humanoid robots with an increasingly complex behavioral repertoire, including the ability to interact socially. Such a shift also accentuates the new role of psychology in robotic science.


Artificial Intelligence (AI) is a Computer Science that deals with:

1. The ability of a machine to perform those activities that are normally thought to require

intelligence.

2. The branch of computer science concerned with the development of machines having this

ability [36].



Synthesis is the main method of the Artificial Intelligence.

Comparison of PAIS and AI shows that the first describes the definition and nature of the system's functions, while the second deals with development and implementation of a working system that can deliver these functions. Theoretical principles of artificial intelligent systems design and their psychology should be presented in engineering terms and from the engineering point of view. This approach can be very useful for developing the psychology of future artificial biological systems.

Method of Analysis

The Psychology of Artificial Intelligent Systems, like traditional human psychology, focuses on analysis.

The history of human psychology evolves through several stages. Each stage is based on

different representational models of mental processes: Structuralism, Functionalism,

Behaviorism, Gestalt psychology, Psychoanalysis, Information-processing systems, and

Psycholinguistics [18].

The Structural approach of the period of Structuralism was gradually replaced by newer and different ideas. The combination of Structuralism and Functionalism creates a powerful tool to present the structure and description of AIS abilities. The first part of the definition of AI systems can be presented as the Psychology of Artificial Intelligent Systems. It deals with abilities of artificial intelligent systems that may be presented in a form suitable for the development of intelligent machines.

Cognitive psychology (Plato, Kant, Chomsky) is the school of psychology that examines internal mental processes such as problem solving, understanding, memory, language, and so on. It had its foundations in the Gestalt psychology of Max Wertheimer, Wolfgang Köhler, and Kurt Koffka, and in the work of Jean Piaget, who studied intellectual development in children. Cognitive psychologists are interested in the mental processes that mediate between stimulus and response. Cognitive theory contends that solutions to problems take the form of algorithms.

Behaviorism is an approach to psychology based on the proposition that behavior can be studied and explained scientifically without recourse to internal mental states. The behaviorist school's main influences were Ivan Pavlov, who investigated classical conditioning; John B. Watson, who rejected introspective methods and sought to restrict psychology to experimental methods; B.F. Skinner, who conducted research on operant conditioning; and Simon. It is a very important approach for learning about an Artificial Intelligent System.

The combination of Cognitive psychology and Behaviorism permits observing both the output and the inside processes of the system. All four methods are presented in this book as a methodological foundation.

There are several approaches to study in the field of cognitive science, including symbolic, connectionist, and dynamic systems.





Symbolic - intelligence can be explained by means of systematic, discrete instructions, not unlike the way in which a computer works (Semiotic and Algorithm Theory).

Connectionist - the means of explanation is artificial neural networks.

Dynamic Systems - cognition can be explained by means of a continuous system in which everything is interrelated (Control Theory).

"Cognitive" has to do only with formal rules and truth-conditional semantics. (Nonetheless, that interpretation would bring one close to the historically dominant school of thought within cognitive science on the nature of cognition: that it is essentially symbolic, propositional, and logical.)

Levels of Analysis

One of the central principles in the symbolic approach to cognitive science is that

1. there are different levels of analysis (LOA) from which the brain (natural or artificial)

and behavior can be studied, and

2. mental phenomena are best studied from multiple levels of analysis.


These levels are usually broken into three groups, based on Marr's description of them:

Computational (Behavioral) level: describes the directly observable output (or behavior) of a system; includes structurization and semiotics.

Algorithmic (Functional) level: describes how information is processed to produce the behavioral output.

Implementational (Physical) level: describes the physical substrate that the system consists of (e.g. the brain; neurons).


The first two levels are related to Psychology and include structuralization and semiotic technique. Semiotics is the study of signs and symbols, both individually and grouped in sign systems. It includes the study of how meaning is constructed and understood.


The third level, and partly the second, are related to the development of Artificial Intelligent Systems. An analogy often used to describe LOA is to compare the brain to a computer. The physical level would consist of the computer's hardware, the behavioral level represents the computer's software, and the functional level would be the computer's operating system, which allows the software and hardware components to communicate (see THE CONSCIOUS MIND AS AN OPERATING SYSTEM). Hardware and an operating system are related to the first level of the intelligence definition.


DECOMPOSITION AS THE METHOD OF ANALYSIS

"Decomposition" is the method by which one divides natural phenomena into their constituent parts, and those parts into subparts, and so forth. It is a highly intellectual process. Decomposition is disintegration: replacement of a complex system by a set of simple subsystems. It is a process of simplification. It is the replacement of more abstract or lesser-known TERMS with less abstract, more specific, or better-known TERMS. This

procedure creates the multilevel structure. The lowest level of the structure consists of the simplest undivided parts, processes, and subgoals. Each step of decomposition is based on specific criteria. The choice of criteria can be done in a way similar to choosing attributes in the process of purification (see below). This hierarchy determines the functionality of every level above it. In the area of artificial intelligence research this approach was proposed by Dr. J. Albus and Dr. A. Meystel [16,19].

In most cases decomposition is based on strong knowledge about the object of decomposition. If an agent does not have knowledge about the new object, it has to learn about it (see below) or use existing knowledge, even without any direct relationship to the object of decomposition, as a hypothesis in the learning process.

Decomposition can be done by different class of parameters, criteria:

1. by functions

2. by elements

3. by modules

4. by subgoals

5. by tasks

6. by processes, etc


A criterion of choice depends on the nature of the object of decomposition: a physical system, a term, a goal, a process, etc.


In many cases the definition of a term consists of a set of lower-level terms:

TERM → TERM1, TERM2, ...


Each of the new terms has a definition with the same structure. In this case decomposition can be presented by an algorithm (a code sketch follows the list):

1. Develop the term definition in accordance with the rules of definition development (APPENDIX 8)
2. Develop the second level of the hierarchy as the set of lower-level terms
3. Develop the definitions for the second level of terms
4. Develop the third level of the hierarchy
5. Continue until further decomposition would destroy the integrity of the criteria.
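A minimal sketch of this algorithm (the term hierarchy shown is a hypothetical fragment):

    def decompose(term, definitions):
        """Build the multilevel structure as nested (term, children) pairs;
        decomposition stops where no definition (criterion) applies."""
        children = definitions.get(term, [])
        return (term, [decompose(child, definitions) for child in children])

    definitions = {  # hypothetical fragment of a term hierarchy
        "LEARNING": ["SENSATION", "PERCEPTION", "REASONING"],
        "SENSATION": ["SENSING", "ATTENTION", "DISCRIMINATION"],
    }
    print(decompose("LEARNING", definitions))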


Modern biologists, such as MacArthur "genius" award winner Robert Sapolsky, rely in their research on biological hierarchy and use this method of decomposition in biology.

The long rationalist tradition that extends from Descartes, Leibniz, and Hobbes assumes that all phenomena, even mental ones, can be understood by breaking them down into their simplest primitive components. Russell and Whitehead (Principia Mathematica) attempted to reduce the world to logical operations expressed mathematically, and artificial intelligence inherited this program. The grand project of artificial intelligence has been to find those atoms and the logical relations that govern them, and to build a symbolic computer representation that captures that order. But Wittgenstein argued that facts cannot be stripped of their context, because it is their context, their pragmatic use, that gives them meaning [38]. This problem can be solved by presenting the decomposition of facts together with their attributes.


THE STRUCTURE OF AIS (AGENT)


The human brain is designed as a structure: different areas execute different functions. An artificial system has more visible modularity, which helps to develop systems with a high diversity of architectures.

Architecture

The structure of components, their functions and relationships.

Environment

The Real External and Internal World.

Complex environments such as financial, social, military, and others are active, non-friendly, and non-predictable systems.

Sensor

The system of information collection

Perception

Translation of sensor data into organized meaningful information.


World Model

An internal representation of the real world.


Knowledge Base

The data structure and information that form the intelligent world model.

Or

Model of human knowledge that is used by expert system


Behavior Generator

The planning and control of actions designed to achieve a behavioral goal.


Actuator

The action generator, acting in accordance with the behavior generator's program.


Value Judgment

The value judgment system determines good and bad, reward and punishment, important and

trivial, certain and improbable.


Inference Engine

A module that generates a logical conclusion and proof based on a set of rules for deduction


Fig. II-1 presents the integral closed-loop structure of the AIS.


Traditional multi-level, multi-resolution structures can be presented by a tree-shaped structure. An artificial intelligent system structure is a very complicated multi-level, multi-resolution structure with cross connections between different branches (horizontal or vertical). Horizontal connections are between modules of the same level; they are determined by participation of the same low-level abilities in higher-level ability activities. Quantification of abilities with fuzzy numbers, computation with words, and other methods of quantification are important procedures.


[Fig. II-1: the closed-loop structure - inner and outer Environment, Sensors, Perception, World Model, Knowledge Base, Value Judgment, Inference Engine, Behavior Generator, Actuators.]

Fig. II-1. The integral structure of an Agent. Local feedback delivers part of the information from the Actuators directly to the World Model. The goal, as the system's input, is not shown.
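As a rough sketch of one cycle of this closed-loop structure (module bodies are stubs; the environment is assumed to expose sense() and act() methods; all names are hypothetical):

    class Agent:
        def __init__(self, world_model, behavior_generator):
            self.world_model = world_model          # internal representation of the real world
            self.behavior_generator = behavior_generator

        def perceive(self, data):
            """Translate sensor data into organized, meaningful information."""
            return data  # placeholder

        def step(self, environment, goal):
            data = environment.sense()                               # Sensors
            self.world_model.update(self.perceive(data))             # Perception -> World Model
            plan = self.behavior_generator(self.world_model, goal)   # Behavior Generator
            environment.act(plan)                                    # Actuators close the loop
            self.world_model.update(plan)                            # local feedback (Fig. II-1)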


VECTOR OF PERFORMANCE (FUNCTIONS)


The intellectual functions are derivatives of basic intellectual abilities of a system:


F = f[V(A)]


The following functions are included to satisfy the specific system requirements [20]:


1. Object Recognition

to recognize objects, actions, situations

to search for a required object within a scene

to interpret situations (to evaluate objects, relationships, and

actions)

to detect an unfamiliar object,

2. Learning




by instruction

by experience

by interactions

by imitations

3. Hypothesis generation

4. Reasoning (manipulation with abstract and specific terms and relationships)

5. Data and information organization

Generalization

Conceptualization

6. Adaptation to new environment

7. Communication

Communication with humans and artificial agents

Collaborate with humans and artificial agents

8. Interpretation of its own behavior and the behavior of other agents (to evaluate actions and relationships to other agents and objects)

9. Decision making

10. Perform decomposition

11. Planning and scheduling

12. Art apprehension


All of them represent actions.

Some characteristics of an intelligent system are not actions:

1. fairness

2. truthfulness

3. loyalty, etc.


In some cases:
- the problem is not clear
- decomposition is not obvious
- the variables are not listed at the beginning of a process analysis
- the rules of actions should be learned during the process


Multilevel nature of tasks and knowledge determines the multilevel structure of performance.


Unfortunately, artificial system creativity, artificial system intuition, and so on are beyond engineering attention. It is impossible to make serious advances in this new area of knowledge and application without understanding these intellectual abilities. It is very important to create a common approach to research and application of new knowledge to the new class of systems under the umbrella of a new knowledge theory: the Psychology of Artificial Intelligent Systems.


There are two different approaches to measuring intelligence as a vector:
- physical parameters, such as memory size, the diameter of the association ball (circle), etc.
- the level of behavioral abilities



In most cases the physical parameters of an existing system are not available. The system's output is a more important characteristic than the physical parameters.

The most important question of intelligence measurement is: is it an additive (APPENDIX 3) or a multiplicative function? Psychology and cognitive science calculate IQ based on the assumption that intelligence is an additive function of abilities. It is a very strong assumption, because there is interdependence between these abilities. For example, reasoning is the basis of several other abilities such as generalization, intuition, etc. It is important to choose local abilities without interdependency. For example, generalization, intuition, associative thinking, object recognition, etc. are appropriate choices, but reasoning is not, because it is a part of these abilities and is therefore interdependent.
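A minimal sketch of such an additive measure over abilities chosen to be independent (the weights and scores are hypothetical):

    # "reasoning" is deliberately excluded: it participates in the other
    # abilities and would make the additive assumption invalid.
    weights = {"generalization": 0.30, "intuition": 0.20,
               "associative_thinking": 0.25, "object_recognition": 0.25}

    def additive_score(scores):
        """Weighted sum over mutually independent abilities."""
        return sum(weights[a] * scores[a] for a in weights)

    print(additive_score({"generalization": 0.8, "intuition": 0.6,
                          "associative_thinking": 0.7,
                          "object_recognition": 0.9}))  # 0.76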

AUTONOMOUS

Autonomy (Greek: Auto-Nomos, nomos meaning "law": one who gives oneself his own law) means freedom from external authority. Autonomy is a concept found in moral, political, and bioethical philosophy. Within these contexts it refers to the capacity of a rational individual to make an informed, uncoerced (not forced to act or think in a certain way by use of pressure, threats, or intimidation) decision. In moral and political philosophy, autonomy is often used as the basis for determining moral responsibility for one's actions (Wikipedia).


In terms of the Theory of Control Systems, autonomy means self-adaptation to a new environment. Unfortunately, the real environment is complex, active, unpredictable, and unfriendly to an agent.


The following are three of the best-known definitions of an Autonomous System in the intelligence community:

"Autonomy - an ability to generate one's own purpose without any instruction from outside" (L. Fogel).

"A constructed system is autonomous if there is a likelihood that circumstances will arise in which no one can predict in advance what it will do" (T. Whalen).

"Autonomous is not controlled by others or by outside forces; independent; independent in mind or judgment; self-directed" [36].

There are some problems with these definitions. First, autonomy is not just the ability to generate a purpose; it also includes the ability to execute the plan to achieve the goal. What does "without any instructions from outside" mean? What does "independently" mean? It is not possible to execute any plan without receiving information from the outside world, from the environment. Can we accept another agent as a part of the environment?




Second, there is a real possibility that another agent with stronger experience and ability to reason can predict, with some level of confidence and probability, the behavior of the first agent. Any experienced professor can easily predict mistakes and fraudulent submissions of certain students.

There is one more definition:

An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to affect

what it senses in the future [29].

Let us look at some basic ideas to develop a workable definition of Autonomous Systems.
1. Autonomy is the ability to adapt. There are two types of adaptation:
- short-term time-spatial adaptation
- long-term multi-generational adaptation.
The last one is referred to as "evolution" (Dr. Alex Meystel). Evolution improves the ability of the system to increase its level of intelligence. Evolution is the tool to improve a system's intelligence (see EVOLUTION AND INTELLIGENCE).
2. An autonomous system is not controlled by others or by outside forces, but is rather independent in mind or judgment; a self-directed system [36].
3. Adaptation is "something, such as a device or mechanism, that is changed or changes so as to become suitable to a new or special application or situation. Change in behavior of a person or group in response or adjustment to new or modified surroundings" [36]. So the terms adaptation and choice-making have the same meaning. In accordance with the statement above, in some way it is possible to say that an intelligent system is a system that has the capacity or ability to make a choice (learning and reasoning with an internal presentation of a goal) (this definition has many supporters [21]). Acceptance of this simplified definition of intelligence makes the following statement acceptable: in some cases autonomy and intelligence have the same meaning.
4. Adaptation makes sense if the system behaves in an uncertain environment.
5. A system collects information from the environment.
6. Any other agent is a part of the environment.
7. An Agent is a domain-oriented system that cannot operate equally in different environments; a cab driver may not be able to work as a pilot.

Let us discuss the first type of adaptation (short-term time-spatial adaptation). All of these ideas can be covered by the definition:

Full autonomy is a domain-oriented ability to generate one's own goal and, without any instruction from any other agent, to achieve that goal in an uncertain environment.

Let us discuss the case when a driver gets lost and asks somebody for directions. Is the driver an autonomous agent? Is this an example of a malfunction of the system, or of collection of additional information? Autonomous ability depends on the ability to collect information. The driver starts his/her/its trip as an intelligent agent; did he/she/it lose the right to be called an autonomous agent in the middle of the trip?

Full autonomy is a very strong ability (or a very strong definition?). In accordance with this definition a cat has a greater level of autonomy than a human being: a cat never asks for advice. In reality a human being is a domain-oriented system. Even an expert in a specific area of knowledge can search for advice from another expert in the same area. A child receives repeated instructions about the same problem all day long.

It would be better, more productive, to define autonomous action instead of an autonomous Agent.

Autonomous actions are an agent's goal-driven actions in an uncertain environment that can be executed without any instruction from any other agent.

In this case the autonomous system is a domain-oriented system that is capable of executing autonomous actions.

The goal is an essential element of the definition of the term "autonomous". Sometimes a single agent cannot reach a complex goal. The complex goal can be decomposed into several subgoals. In this case the autonomous system consists of several not fully autonomous but subautonomous agents. In real life many solutions of real tasks need teamwork or some help from outside. It is semiautonomous activity of the team or crew members. It is possible to qualify the whole team as an autonomous system. In this case communication between the team members is part of the operation.

The Centibots system (The 100 Robots Project) is a framework for very large teams of robots (Fig. II-3) that are able to perceive, explore, plan and collaborate in unknown environments. The Centibots were developed in collaboration with SRI International, funded under DARPA's SDR program. The Centibots team currently consists of approximately 100 robots. These robots can be deployed in unexplored areas and can efficiently distribute tasks among themselves; the system also makes use of a mixed-initiative mode of interaction in which a user can influence missions as necessary (Distributed Multi-Robot Exploration and Mapping, Proc. of the IEEE, 2006).

The robots are fully autonomous in terms of human involvement. All computations are performed on-board [http://www.cs.washington.edu/ai/Mobile_Robotics/projects/centibots/].

Another example of a fully functioning autonomous system is a team of soccer-playing robots participating in the famous RoboCup tournament.

Subautonomous actions are subgoal-driven (subgoals of a team goal) actions in an uncertain environment, executed without any instruction from any agent who is not a team member.

The subautonomous system is a domain-oriented system that is capable of executing subautonomous actions.



In the case of teamwork, each agent (natural or artificial) should be psychologically compatible with the other team members. The team can be presented as an autonomous net with intelligent nodes.
There are two types of net:
1. The net with subautonomous agents as the nodes.
2. The net with autonomous agents as the nodes. In this case each single agent can achieve the net goal by itself; his/her/its participation in the net just increases the power of the system.


A regular car is not an autonomous system. The combination of a car and driver, or of a drone and operator (natural or artificial), is an autonomous system. In this case the car and the drone are actuators of an intelligent agent.


Autonomy is a complex feature of intelligence. It includes a set of several abilities, such as sensation (S), perception (P), conceiving (C), learning (L), reasoning (R), generalization (D), and discrimination (DE):

A = F(S, P, C, L, R, D, DE)


The development of artificial systems with a high level of autonomy and great abilities is a short-term goal of science and industry. The actual behavior of these types of systems cannot be predicted in some cases. It is important therefore to prognosticate possible dangerous results of their behavior and protect the environment from unauthorized actions (see FREE WILL AND ACTIONS and LAW AND MORAL).


Although there is a very strong correlation between Intelligence and short-term time-spatial adaptation, it is not reasonable to define Intelligence as Adaptation. Adaptation is a very sophisticated term. An unintelligent system cannot be an autonomous system. An intelligent system may or may not be a fully autonomous system.


Natural intelligent systems can be autonomous and subautonomous. All of them sometimes rely on outside help under different conditions. This is true as long as the single agent can achieve the goal. Artificial intelligent systems, like natural systems, can be autonomous and subautonomous as well. In the latter case some intelligent abilities are not flexible enough for adaptation to new conditions.


When we are talking about adaptation we refer to a specific environment operating under specific constraints. For example, an autonomous vehicle cannot fly, and neither can a human being. Both are autonomous but have their limitations. DEEP BLUE is a fully autonomous system only in the chess game environment.


The contemporary level of development of artificial intelligent systems reflects merely the beginning of this process. But this development is accelerating, and soon there will be a community of very advanced systems. There are many advanced and fully functioning autonomous systems, but many challenges remain to be resolved. Now is the time to start thinking about these challenges and their potential solutions.




The Agent in Wumpus World (see REASONING) is an autonomous system. It can move

about in its environment and avoid dangerous areas.



[Fig. II-2: classification tree - an Intelligent System is Autonomous, Subautonomous, or Autonomous as the net; each type can be Natural or Artificial, and the net can also be Mixed.]

Fig. II-2

Algorithm of adaptation (a code sketch follows the list):

1. Define the goal
2. Collect information about the environment
3. Develop the World Model (see PERCEPTION)
4. Design the strategy of goal achievement
5. Move toward the goal while avoiding obstacles
6. Communicate with other team members
7. Adjust the World Model in accordance with new information
8. Reevaluate the strategy
9. Repeat steps 4-8 until the goal is achieved.
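A sketch of this loop (all objects and method names are hypothetical stubs):

    def adapt(agent, environment, goal):
        world_model = agent.build_world_model(environment.collect())  # steps 2-3
        while not goal.achieved(world_model):                         # step 9
            strategy = agent.plan(world_model, goal)                  # step 4
            agent.move(strategy, avoid=world_model.obstacles)         # step 5
            agent.communicate(agent.team)                             # step 6
            world_model.update(environment.collect())                 # step 7
            strategy = agent.reevaluate(strategy, world_model)        # step 8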


Fig. II-3A. The Crystalline Atomic Unit Modular Self-reconfigurable

Robot (http://www.mit.edu/~vona/xtal/xtal.html)





Fig. II-3B. The Soccer team.

Fig. II-3C. The autonomous Centibots.
http://www.cs.washington.edu/ai/Mobile_Robotics/projects/centibots/





Fig. II-3D. Boeing‘s autonomous unmanned underwater vehicle (Long-term Mine

Reconnaissance System)





Fig. II-3E. The system with reconfiguration (University of Southern California, NASA, Lockheed Martin, Raytheon DARPA project).



SENSING AND SENSATION


Sensation is a perception associated with stimulation of a sense organ or with a specific body condition; the faculty to feel or perceive; physical sensibility; an indefinite, generalized body feeling [36]. This definition of the term "sensation" mixes it with the terms "perception", "stimulation", and "generalization".

Sensation includes attention, as mobilization of the sensor system for intensification of information collection, and discrimination, as the first step of information organization.


It would be better to separate sensation from perception. They are two different functions. It is easier to develop two different simple subsystems of an artificial intelligent system than one system with a complex function. So, sensing is the process of data and information collection through the outer and inner sensor systems. It is the first step in the information process.


Sensing begins with the impinging of a stimulus upon the receptor cells of a sensory organ, which then leads to perception. Sir Francis Galton (cousin of Charles Darwin) stated: "intelligence is a question of exceptional sensory and perceptual skills... the more sensitive and accurate an individual's perceptual apparatus, the more intelligent the person".


Pain is an example of sensing. Pain is an unpleasant sensation occurring in varying degrees of severity as a consequence of injury or disease. It can be the result of sensor overloading. Physical pain is a low-level internal negative value-state variable that can be assigned to specific regions of the body. It may be computed directly as a function of inputs from pain sensors (tactile, temperature, etc.) in a specific region of the body [16]. It is part of a self-testing, self-diagnostic system that provides information about internal or external problems that are important for survival. Hunger, a low battery level, pain in joints, noise or low pressure in the lubrication system are signals for treatment. In human-robot society, information exchange between agents is more important than a robot's personal feelings. It is part of self-awareness.


Besides vision, hearing, smell, taste (?), touch, pressure, temperature, and pain, an artificial sensing system, unlike a natural one, has many more types of sensors (infrared, supersonic, different types of rays, and so on) and can better communicate with its environment. The system needs specific inner sensors to feel love, inner pain, excitement and other emotions.


The sensitivity of an artificial sensor has a wider range than that of a natural one. It can exert a stronger influence on its environment and can create specific responses to input signals that are unknown (do not exist) in natural systems. Artificial sensors are usually combined with receptors that convert the input signal.




"Claudia Mitchell lost her left arm at the shoulder in a motorcycle accident. She is the fourth person -- and first woman -- to receive a "bionic" arm, which allows her to control parts of the device by her thoughts alone. The device, designed by physicians and engineers at the Rehabilitation Institute of Chicago, works by detecting the movements of a chest muscle that has been rewired to the stumps of nerves that once went to her now-missing limb. Surgeons took the first step by rewiring the skin above her left breast so that when the area is stimulated by impulses from the bionic arm, the skin sends a message to the region of her brain that feels "hand." Someday she hopes to upgrade to a prosthesis that will allow her also to "feel" with an artificial hand." (For 1st Woman With Bionic Arm, a New Life Is Within Reach, by David Brown, Washington Post Staff Writer, Thursday, September 14, 2006). This is an example of an inner sensing system as part of self-testing and self-awareness in a hybrid (natural and artificial) system.


Some communication tracks that transfer signals from the sensors have the ability to respond directly to these signals. This response is a reaction of an organism or a mechanism to a specific stimulus and in some cases can create resonance to these signals as excitement. This amplified signal can be transferred to different parts of a body. In this case the system demonstrates the ability to react unconsciously to the input signal (a reflex arc, see REFLEX).


Examples: the sound of metal moving on a glass surface, music, and so on. Humans "feel" this information with the spinal cord. This method differs from perception (recognition and interpretation of sensory stimuli based chiefly on memory). It is possible to create artificial information tracks with the same ability. Repetition of the same signal can activate memory and excitement. In this case, the process involves the subconscious. The so-called Mirror Cells in the human brain can respond to a signal presenting the behavior of another human being with corresponding actions and prediction of the results of these actions. An artificial system can demonstrate the same ability if similar actions are saved in the memory and can be activated by visual or other signals.


Forceful movement of an actuator can create artificial information tracks, similarly to the process of rehabilitation of an injured part of a human body.


The aging of the population dramatically increases the need for the service industry. It creates pressure on the job market. This vacuum can be filled with artificial intelligent systems (robots). To qualify for this job (especially in the senior community) the AIS has to be able to recognize human moods and behave in accordance with circumstances. Voice volume level, types of words, the velocity of speech, and so on carry information about the human mood. The sensitivity of the artificial system is responsible for this information collection. The module of perception generation converts this information into a mood description and understanding.


Walt Disney Co. [New Scientist, 24.01.2006] has created a media player that selects songs based on its owner's latest mood. The device has wrist sensors that measure body temperature, perspiration and pulse rate. It uses these measurements to build a profile of what music or video the owner would prefer played when he or she is hot, cold, dry or sweaty, and


when their pulse is racing or slow. The device then comes up with suggestions to fit each profile, either using songs or videos in its library or downloading something new that should be suitable. If the owner rejects the player's selection, it learns and refines the profile. So, over time the player should get better at matching bodily measurements with the owner's moods. This type of relationship can be seen between two artificial systems. It resembles compassion (see below) and emotions (see below).

ATTENTION

Attention is the cognitive or subcognitive process of selectively concentrating on one thing while ignoring other things. It is mobilization of the resources of sensation through control or emotions. The information patterns (sound, smell, light, shape, etc.) can be stored in a specific area of memory. Information matched to these patterns activates a selective, magnified collection of information from the specific source. In terms of the Theory of Control Systems it means increasing the sensitivity of the sensor system. Discrimination is the first step of information organization; it detects the specific signal that should activate attention (see DISCRIMINATION). In the human brain the limbic system (cingulate gyrus) is responsible for attentional processing.

There are two types of attention:

1. Overt attention is the act of directing the sensors toward a stimulus source.
2. Covert attention is the act of mentally focusing on a particular stimulus.

Attention can also be split between several activities or signals. It is not easy but possible to split the resources of an Artificial System to accommodate information from several sources at the same time and generate a correct world model. The subject or the object of attention in an artificial world is determined by:

- relation to the problem that the agent is working with
- an unknown, unfamiliar signal
- the power of the information flow or signal
- fitness to the criteria of pleasure (stimulus)


A lack of confidence (uncertainty) assigned to the frame of an external object may cause attention to be directed toward that object in order to gather more information about it.

Algorithm (Attention):

1. To memorize the patterns and the criteria
2. To compare input information against the patterns
3. If it fits the pattern or the specific criteria, then increase information collection from this source (discrimination) and generate the signal "attention" (increase the sensitivity of the sensor system); otherwise ignore it
4. To collect more information
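A toy Python sketch of this algorithm follows; the stored patterns and the gain-doubling policy are invented for the example.

    # Toy sketch of attention: incoming signals are compared against memorized
    # patterns; a match raises the "gain" (sensitivity) of the matching source,
    # otherwise the signal is ignored. All names and values are illustrative.

    PATTERNS = {"alarm", "own name", "bright flash"}   # step 1: memorized patterns
    gain = {}                                          # per-source sensitivity

    def attend(source, signal):
        if signal in PATTERNS:                         # step 2: compare with the patterns
            gain[source] = gain.get(source, 1.0) * 2   # step 3: generate "attention"
            return "collect more information from " + source    # step 4
        return "ignore"                                # step 3: otherwise ignore

    print(attend("microphone", "alarm"))   # attention is generated
    print(attend("camera", "shadow"))      # ignored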




DISCRIMINATION

The word discrimination comes from the Latin "discriminare", which means "to distinguish between". Distinction, the fundamental philosophical abstraction, involves the recognition of two or more things being distinct, i.e. different. The adjective discriminative refers to the ability or power to discriminate between different things, i.e. to notice and state their equality or difference, to make a distinction. It can also refer to characteristic elements, attributes or features of a thing. APPENDIX 13 presents some methods of Discriminant Analysis.

Discrimination is the ability to detect subtle (so slight as to be difficult to detect or analyze; elusive; not immediately obvious) differences, to respond only to a specific kind of stimulus. A difference is the quality or condition of being unlike or dissimilar [36]. It is an important tool of object recognition and compassion (see SOCIAL BEHAVIOR). It is based on specific criteria. Discrimination seeks the answer to the question: is this regular information or the specific stimulus? Information is submitted to perception to generate the world model; a stimulus is submitted directly to generate the action.

Algorithm:

1. To receive data from each sensor
2. To evaluate all the signal parameters through localization
3. To group signals by objects
4. To recognize the object or event (preliminary object recognition) with a low degree of probability based on information existing in the memory
5. To evaluate the information by the criteria
6. To activate attention
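A toy Python sketch of this algorithm is given below; the threshold criterion and the sensor readings are invented for the example.

    # Toy sketch of discrimination: each reading is evaluated against a
    # criterion separating a specific stimulus from regular information.

    STIMULUS_THRESHOLD = 0.8                      # step 5: the criterion

    readings = [                                  # step 1: data from the sensors
        {"object": "door", "signal": 0.2},        # steps 2-4: signals already
        {"object": "siren", "signal": 0.95},      # localized and grouped by object
    ]

    for r in readings:
        if r["signal"] >= STIMULUS_THRESHOLD:     # step 5: evaluate by the criterion
            print(r["object"], "-> stimulus: activate attention")        # step 6
        else:
            print(r["object"], "-> regular information: send to perception")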

PERCEPTION

Perception is "recognition and interpretation of sensory stimuli based chiefly on memory" [36]. It is the process of acquiring, interpreting, selecting, and organizing sensory information. Perception is the process of World Model development. It is the translation of sensor data into organized, meaningful information. Everything about the world is a matter of perception. A definition of stimuli interpretation can be found in [16].

The objects of perception are percepts. Percepts are not the material objects in the physical realm that the mind imagines (rightly or wrongly) that it is sensing. They are, rather, the actual objects of perception: patterns of sensational qualities, impressions. An impression is an image retained as a consequence of experience, a mental picture.

Visual percepts are patterns of area (shape, size, and position) and color (tint and tone) over a two-dimensional field. Color is the easiest feature to perceive. Auditory percepts are patterns of pitch and volume over time. In the human brain the cerebellum contains topographic maps and helps map object location and shape into grasping coordinates. The parietal lobe plays important roles in integrating sensory information from various parts of the body, and in the manipulation of objects.



These are the things immediately perceived by the mind; the objects they are taken to

represent are a matter of inference. Percepts, in fact, are used to infer the existence of the entire material world; since its reality is only surmised, it must technically be considered a perceptual realm. It means that perception is the cognition process that is based on logic.

Logic is a system for deriving new symbols from existing ones, by combining or altering

them according to certain conventional rules.

Many cognitive psychologists hold that, as we move about in the world, we create a model of how the world works. That is, we sense the objective world, but our sensations map to

percepts, and these percepts are provisional, in the same sense that scientific hypotheses are provisional (provided or serving only for the time being; temporary). As we acquire new

information, our percepts shift. In the case of visual perception, some people can actually see

the percept shift in their mind's eye. Others, who are not picture thinkers, may not necessarily perceive the 'shape-shifting' as their world changes. The 'esemplastic' nature has been shown

by experiment: an ambiguous image has multiple interpretations on the perceptual level.

Just as one object can give rise to multiple percepts, so an object may fail to give rise to any

percept at all: if the percept has no grounding in a person's experience, the person

(natural and artificial) may literally not perceive it.

This confusing ambiguity of perception is exploited in human technologies (development of smart systems) such as camouflage, and also in biological mimicry, for example by Peacock butterflies, whose wings bear eye markings that birds respond to as though they were the eyes of a dangerous predator.

So, perception is an ability and tool to develop the World Model. Four main functions are

involved in this process [16]:

1. Localization is one of the functions of perception. It includes segregation of objects

and events, perceiving distance, location, and motion of sources of information. All

these functions work similarly to those in a natural system.

2. Recognition is an awareness that something perceived (to become aware of directly through any of the senses, especially sight or hearing) has been perceived before [36]. It is the ability of a system to recognize specific information (not a stimulus) with a certain level of probability through comparison with information existing in the memory.
3. Judgment.
4. Interpretation (see UNDERSTANDING AND INTERPRETATION).


In [35] it is shown that "there is no one place (the Cartesian Theater) in the human brain through which all signals must pass to deposit their contents 'in consciousness'". It is shown that perception works as the Multiple Drafts model, which accepts the stimulus, then color, then shape, motion, and object recognition (a traffic light's signal is coded by color; "stop" and "yield" signs are coded by shape). This sequence is determined by the slow speed of natural neural-net computation. A computer's computation speed is greater than the speed of a natural system. In this case the stages of an artificial system of perception can be arranged in any sequence.



An Artificial Intelligent System has more powerful sensor systems and can develop a more accurate (in terms of information richness) World Model than a natural system.

Perceiving and conceiving are two functions of perception. Perceive means to become

aware of directly through any of the senses, especially sight or hearing; to achieve

understanding of; apprehend; to be physically aware of through the senses, experience,

feel. It is a physical process, product of the sensor system. Its function is to establish connections between signals from different sensors (direction, time, etc.)

Conceive is to apprehend mentally; to understand the language, sounds, form, or

symbols. It is a mental process, product of logic. Its function is to establish connections

between symbols and their meanings.

Algorithm:

1. To receive data from each sensor
2. To perceive these data
- to evaluate all signal parameters through localization
- to evaluate the time parameters of each signal
3. To conceive these data
- to recognize the object or event with a certain degree of probability based on information existing in the memory
4. To generate the model of the objects, scenes or events
5. To supply the object, scene or event with an evaluation through judgment
6. To submit the information to the world model
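A toy Python sketch of this algorithm follows; the feature names and the one-entry memory are invented for the example.

    # Toy sketch of perception: signals are perceived (grouped into a percept),
    # then conceived (matched against memory), and the result is submitted to
    # the World Model. All names and data are illustrative only.

    MEMORY = {("octagon", "red"): "stop sign"}    # knowledge used by conceiving

    def perceive(signals):
        # steps 1-2: evaluate signal parameters and group them into one percept
        return tuple(sorted(s["feature"] for s in signals))

    def conceive(percept):
        # step 3: match the percept against information existing in the memory
        return MEMORY.get(percept, "unknown object")

    world_model = []
    signals = [{"sensor": "camera", "feature": "red"},
               {"sensor": "camera", "feature": "octagon"}]
    percept = perceive(signals)
    obj = conceive(percept)                       # steps 3-5: recognition, judgment
    world_model.append({"object": obj, "percept": percept})   # step 6
    print(world_model)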

OBJECT RECOGNITION

The problem in object recognition is to determine which, if any, of a given set of objects appear in a given image or image sequence. In recognition, one or several pre-specified or learned objects or object classes are to be recognized. Thus object recognition is a problem of matching models from a database with representations of those models extracted from the image luminance data. Preliminary object recognition is the function of discrimination as part of sensing. In the human brain the temporal lobes are part of the cerebrum. They are involved in high-level visual processing of complex stimuli such as faces and scenes, and in object perception and recognition.

It is very interesting that the description of the process of Recognition in the psychology of natural systems was borrowed from descriptions of artificial intelligent system activities. It is based on segregation of the object into basic elements (e.g., the letter A into /, -, \) and comparison of these basic elements against the samples stored in the memory, with reassembly as the next step.

The representation of the object's model is an extremely important process. Clearly, it is impossible to keep a database that has examples of every view of an object under every possible lighting condition. Thus, object views will be subject to certain transformations: certainly perspective transformations depending on the viewpoint, but also transformations



related to the lighting conditions and other possible factors. For example, sometimes it is a big problem even for a human being to recognize objects in Picasso's pictures or other 20th-century abstract art. In some cases the system can develop an image of the object that represents the real object with a certain level of probability (see also APPENDIX 4).

There are two stages to any recognition system:

1. the acquisition stage, where a model library is constructed from certain descriptions of the objects;
2. the recognition stage, where the system is presented with a perspective image and determines the location and identity of any library objects in the image.


The most reliable type of object information that is available from an image is geometric information. So object recognition systems draw upon a library of geometric models,

containing information about the shape of known objects. Usually, recognition is considered

successful if the geometric configuration of an object can be explained as a perspective

projection of a geometric model of the object. There are several engineering methods to solve

this problem [27].


To generate the dynamic model of the object it is very important to assign attributes to the object: color, position, speed, etc. Events and processes can be recognized by observing the sequence of steps of an action.


The software "SEXNET" learns to identify gender from facial images. From ninety photos of University of California, San Diego students (sans facial hair, jewelry, and apparent makeup) the machine learned to tell men from women with only an 8 percent error. Humans using the same data made mistakes 11.6 percent of the time. Building a neural network that can learn abstract concepts like maleness and femaleness, without ever being told anything about people or sexual characteristics, is just a way to learn how networks categorize (Beatrice Golomb, University of California, San Diego) [38].


A neural network trained on photos of children with Williams syndrome could catch patterns that human doctors might miss.

Algorithm: Object recognition

1. Perceive the input signals
2. Conceive this information:
- define the edges of the object
- integrate the edges into the shape
- find a match of this shape to a pattern in the database
- if "Yes", then the object is recognized (with a certain level of confidence)
- if "No", then it is a new object: memorize it
- put it in the database with attributes
- assign the name.
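A toy Python sketch of this algorithm is shown below; representing a shape as a set of strokes, and the letter A as three strokes, are illustrative simplifications.

    # Toy sketch of object recognition: a "shape" is reduced to its set of
    # edges and matched against patterns in the database; an unknown shape is
    # memorized under a new name. All data here are illustrative only.

    database = {frozenset(["/", "-", "\\"]): "A"}   # stored pattern: A as three strokes

    def recognize(edges):
        shape = frozenset(edges)                    # integrate the edges into the shape
        if shape in database:                       # find a match in the database
            return database[shape]                  # "Yes": recognized
        name = "object-%d" % (len(database) + 1)    # "No": a new object
        database[shape] = name                      # memorize it and assign the name
        return name

    print(recognize(["/", "-", "\\"]))   # -> A
    print(recognize(["|", "-"]))         # -> object-2 (newly memorized)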




This procedure can be used for measurement of intelligent abilities (perceiving and conceiving).

Speech and Text Recognition Technology


Computer speech recognition is the process of converting a speech signal into a sequence of grammatically organized words.


In terms of technology, most technical textbooks nowadays emphasize the use of the Hidden Markov Model as the underlying technology (APPENDIX 11). The dynamic programming approach, the neural network-based approach and the knowledge-based learning approach were studied intensively in the 1980s and 1990s.
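As an illustration of the Hidden Markov Model machinery mentioned above, the sketch below performs Viterbi decoding for a toy two-state model; the states, observations and all probabilities are invented for the example and do not describe any real recognizer.

    # Viterbi decoding for a toy two-state HMM; all probabilities are invented.
    states = ("vowel", "consonant")
    start = {"vowel": 0.5, "consonant": 0.5}
    trans = {"vowel": {"vowel": 0.3, "consonant": 0.7},
             "consonant": {"vowel": 0.6, "consonant": 0.4}}
    emit = {"vowel": {"a": 0.8, "t": 0.2},
            "consonant": {"a": 0.1, "t": 0.9}}

    def viterbi(observations):
        # best[t][s] = (probability of the best path ending in s, predecessor)
        best = [{s: (start[s] * emit[s][observations[0]], None) for s in states}]
        for obs in observations[1:]:
            row = {}
            for s in states:
                prob, prev = max(
                    (best[-1][p][0] * trans[p][s] * emit[s][obs], p) for p in states)
                row[s] = (prob, prev)
            best.append(row)
        last = max(states, key=lambda s: best[-1][s][0])
        path = [last]                      # trace the best path backwards
        for row in reversed(best[1:]):
            path.append(row[path[-1]][1])
        return list(reversed(path))

    print(viterbi(["t", "a", "t"]))   # -> ['consonant', 'vowel', 'consonant']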


One of the important areas of object recognition application is text recognition. A human being has a strong ability for word recognition. All types of machine word processors also have the ability to recognize written and spoken words, even in cases of misspelling and different pronunciation. This ability of a human being can be demonstrated by the example presented below. Most people can read this text fluently without reconstructing each word:


yuo hvae a sgtrane mnid if yuo cna raed this. Cna yuo raed tihs? Olny 55 pcenert of plepoe cluod uesdnatnrd ym wariteng. The compute'sr ilteleignnce hsa hte sema phaonmneal pweor as the hmuan's mind. Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it dseno't mtaetr in waht oerdr the ltteres in a wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it whotuit a pboerlm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Btu it si nto mipotratn ot ehav the frits and teh lats eltters ni the ritgh poositni. oyu can rade even fi the lats letrest aer in teh rwogn poosiotns.

This text shows that even the location of the last letter is not critical in many cases. For the result of "recognition-translation" see APPENDIX 10. It is clear that an artificial intelligent system can demonstrate the same ability.

The procedure of reading and understanding this text consists of two steps:


The first step: recognition of the language by specific criteria or by simple reconfiguration of the sequence of letters in some small words. This method can be used to determine what type of language is presented in Latin letters, for example: English or Russian?

The second step: recognition of the words.

Algorithm: "recognition-translation"

1. Choose words with the same first and last letters
2. From this set of words select words with the same number of letters
3. From the new set of words select words with the same letters
4. Check the meaning of the words
5. Choose words with the same first letters
6. If positions of words don't fit the grammar of a sentence, then define the parts of speech
7. From this set of words select words with the same number of letters
8. From the new set of words select words with the same letters
9. Check the meaning of the words
10. Recognize unknown words by recombination of letters
11. Analyze the text to find the meaning of unknown words by association
12. Using associative thinking on a wide range of texts, find the meaning of unknown words
13. Generate a grammatically correct sentence (see Fig. II-4A and 4B)
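A minimal Python sketch of steps 1-5 of this algorithm is given below; the tiny dictionary is invented for the example.

    # Toy sketch of "recognition-translation": each scrambled word is matched
    # against a dictionary by length and multiset of letters, preferring words
    # that also keep the first and last letters (steps 1-3), and falling back
    # to the first letter alone (step 5). The dictionary is illustrative only.

    DICTIONARY = ["you", "have", "a", "strange", "mind", "if", "can", "read", "this"]

    def unscramble(word):
        same_letters = [w for w in DICTIONARY
                        if len(w) == len(word) and sorted(w) == sorted(word)]
        strict = [w for w in same_letters
                  if w[0] == word[0] and w[-1] == word[-1]]     # steps 1-3
        if strict:
            return strict[0]
        relaxed = [w for w in same_letters if w[0] == word[0]]  # step 5
        return relaxed[0] if relaxed else word

    sentence = "yuo hvae a sgtrane mnid if yuo cna raed tihs"
    print(" ".join(unscramble(w) for w in sentence.split()))
    # -> you have a strange mind if you can read this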


This test can be used for measurement of an intelligent ability (reasoning, text recognition). It can be designed with different levels of difficulty: different word levels and different mixes of letters.


Fig. II-4A and 4B present examples of the software that generates grammatically correct English sentences.


See also UNDERSTANDING AND INTERPRETATION. For detecting and recognizing emotional information see EMOTIONS.

UNDERSTANDING AND INTERPRETATION

Understanding is a psychological process related to an abstract or physical object or process, such as a person, situation, or message, whereby one is able to think about it and use concepts to deal adequately with that object. It is one of the most difficult problems of Artificial Intelligent Systems.

Understanding in the artificial system has a specific meaning. In this case UNDERSTANDING is the process of recognition of the object, symbols or events that binds all known features to the object, symbol or event and evaluates the possible relation of the agent with the object or event. It is searching for correspondence between signals (symbols) and knowledge, searching for meaning. In reality understanding and conceiving are synonyms. Comprehension is a result of understanding.



Relationship between the symbol and meaning is the result of convention. Semiotics is the

study of signs and symbols, both individually and grouped in sign systems. It includes the study of how meaning is constructed and understood.

Meaning is discursive: it arises from conventions that presuppose not only a social world but also one in which the meaning bearers share the interests and aspirations of those whom they would engage. Meaning (thus, knowledge and conduct) is now stripped of abstract, once-and-for-all features and is seen as entirely constructed. "The limits of my language mean the limits of my world" (Austrian philosopher Ludwig Wittgenstein, 1889-1951).

Development of new meaning by Wittgenstein has several steps [34]:

1. A name signifies something only to the extent that it is understood to stand for the thing signified.
2. But "to stand for" anything, a sign must be related to that thing by some sort of rules or conventional understanding.
3. But the adoption of conventions is a social act. Conventions are part of the actual practices of people in the world.
4. It is impossible to apply a private rule to a private occurrence.


Two aspects of meaning that may be given approximate analyses are the connotative relation

and the denotative relation. The connotative relation is the relation between signs and their interpreting signs. The denotative relation is the relation between signs and

objects.

Correspondence between symbol and its meaning through knowledge is the process of

INTERPRETATION; Interpretation is something that serves to explain or clarify.

Interpretation, the true subject of semiotics, begins with perceptual paradigms, which are

abstractions from perceptual patterns.


Understanding is the denotative relation

Interpretation is the connotative relation


Understanding is the base for development of response and strategy of utilization of new

information.

In most cases Artificial Intelligent Systems do not deal with the development of the meaning of new concepts, but with understanding and interpretation of existing concepts. For example: incoming information presents a word. The system should be able to recognize the part of speech and assign all attributes. It is object recognition with understanding.

So understanding is binding the abstract symbols and signals collected by conceiving.

Abstraction is the process of defining a concept based on an observation, mental or

perceptual; hence all abstractions are concepts (see ABSTRACT THINKING AND

CONCEPTUALIZATION).



Conception is the ability to form or understand mental concepts and abstractions; something conceived in the mind; a concept, plan, design, idea, or thought [36]. This definition is an example of breaking rule number 2 of definition development (conception is...concepts) (see APPENDIX 8).

A better definition is: Conception is the ability to form or understand a general idea derived or inferred from specific instances or occurrences. An idea is a description of an object, process, etc. in the minimal set of defining features conveying its fundamental character and presenting full information to recreate that object, process, etc. Concept development is the subject of "LEARNING, Conceptual Learning" (see LEARNING).

A sign is an association of a perceptual paradigm with another concept. This association is

made through memory: two concepts are associated when they occur in the same thought

experience; thinking of one will then cause the recall of the entire experience, in which the

other concept is also present (see ASSOCIATIVE THINKING).

Interpretation is the process of fitting observed percepts into recognized paradigms, thereby deriving meaning, which is nothing more than the association of concepts. Interpretation applies to all aspects of the perceptual realm. It is a means of constructing a personal version of the perceptual realm, an attempt to reconstruct the actual course of events in the world. Although the terms "interpreting" and "translation" are often used interchangeably in everyday speech, they are distinguished in the field of interpreting and translation. Both refer to the transfer of meaning between two languages. However, "translation" refers to a transfer from text to text.

Communication is an attempt by one mind to induce a certain interpretation by another. This

includes such things as disinformation, which is an attempt to induce a false interpretation of

the course of events in the perceptual realm. But by far the most important form of

communication is language, the use of symbols. A symbol is a sign whose association

between perceptual paradigm and other concept is one of convention. (The first convention

must be established by coincidence, where two interpreters form the same association based

on some common experience. That first convention can then serve as the basis for further conventions.) The set of all symbols and logics understood by an interpreter is that

interpreter's language. Communication between artificial agents can be carried out by a wider range of technical systems.

An agent establishes sign relationships only by a gradual learning process. It experiences things in conjunction and thus forms associations in memory, develops a sense of the

functional rules of the perceptual realm by trial and error, and is constantly in the process of

revising its personal versions of the course of events in it. In large part this is accomplished

using the scientific method: the formation of a hypothesis and the gathering of data to check the hypothesis against. If the data support the hypothesis, consider it provisionally correct; if they contradict it, it must be revised (see ASSOCIATIVE THINKING and HYPOTHESIS GENERATION).

Text recognition and understanding consist of three major procedures:

1. Grammar recognition (syntax)

2. Parsing (part of speech)

3. Meaning of symbols (words and signs)

Word meaning (semantics) is determined by location in the sentence and by parsing, for example:


I have this file.

This file is on the table.

I file these documents.

I have a file cabinet.


and connotation:


I am expecting you

I am waiting for you


The words "expect" and "wait" have the same meaning but different connotations.

Fig. II-4A and 4B illustrate two versions of the software with the ability to recognize, understand, and interpret new grammar rules, math operations and formulas that a user presents for sentence generation or math calculations.

Algorithm: Understanding and Interpretation

1. Object recognition (understanding) as the combination of signs into a concept (object) (see OBJECT RECOGNITION, Algorithm)
2. Bind this object to the knowledge of the object in the knowledge base - interpretation (unfriendly or friendly object)
3. Bind this knowledge about the object to the knowledge of relationships between this object and others - interpretation of the object's behavior in relation to the other objects and the environment
4. Generate the response in accordance with the rules in the knowledge base.
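A toy Python sketch of this algorithm follows; the knowledge entries and response rules are invented for the example.

    # Toy sketch of understanding and interpretation: a recognized object is
    # bound to knowledge about it and to a response rule. Illustrative only.

    KNOWLEDGE = {                      # step 2: knowledge about objects
        "dog": {"attitude": "friendly"},
        "snake": {"attitude": "unfriendly"},
    }
    RESPONSES = {                      # step 4: response rules
        "friendly": "approach",
        "unfriendly": "retreat",
    }

    def understand(recognized_object):               # step 1 is assumed done
        knowledge = KNOWLEDGE.get(recognized_object)
        if knowledge is None:                        # no grounding in knowledge
            return "observe and gather more information"
        return RESPONSES[knowledge["attitude"]]      # steps 2-4

    print(understand("dog"))     # -> approach
    print(understand("snake"))   # -> retreat
    print(understand("drone"))   # -> observe and gather more information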

For Emotion Understanding see EMOTIONS.


Understanding and interpretation of information are based on an agent's individual knowledge that represents the result of its lifetime learning and personal experience. Therefore, it is very difficult to foresee the actual behavior of a fully autonomous advanced artificial intelligent system (see INTUITION). This behavior can be very dangerous. Incidentally, this is one of the possible reasons for the dangerous behavior of a human being with a difficult childhood. Maybe it will be reasonable to create some day a special group for supervision of artificial intelligent systems' behavior. It is important to prognosticate the possible results of their behavior and protect the environment, other AIS, and human beings from their unauthorized actions (see FREE WILL AND ACTIONS and LAW AND MORAL). Some day it can become a real-world problem.




Fig. II-4A (Mr. OU)


Fig. II-4B (Mr. WONG)




REASONING

Introduction

Reasoning is the process of drawing conclusions from facts, "using of reason, especially to form conclusions, inferences, or judgments" [36]. Practical reason is an intellectual virtue, by which one comes to distinguish what is good and bad, the prudent course of action, the right strategy, and so on.


Logic is the tool of reasoning. It is a system for deriving new symbols from existing ones, by combining or altering them according to certain conventional rules. This topic covers only proposition and predicate (monotonic) logic.

Reasoning and learning are the most powerful intellectual functions. It is not easy to emulate

them. The main problem is determined by the very nature of reasoning that is based on

computation with words rather than computation with numbers.

"It is pretty clear, from my point of view at least, that the von Neumann machine was based on some image of the human mind," says Rumelhart (Stanford University). "The image was something like the idea of following a set of instructions, sort of like our conscious thoughts. If we have a list of things to do, we do it. We can easily imagine ourselves being a von Neumann machine."

According to Rumelhart, the standard assumption in cognitive psychology and artificial intelligence has been that these subconscious mental processes are just like the conscious ones, only they go on without our conscious awareness. They are sequential, logical and rule-based, even if we aren't conscious of the rules or even able to articulate them.

The reasoning in an artificial intelligent system goes through a sequential process where this follows that follows that. There is a sense of connectedness among things. All of logic, linguistics, cognitive psychology, and related fields have been about building rules that approximate the underlying causal processes. Researchers in the artificial intelligence field insisted on replacing ambiguous natural language with their own computer versions.

The long atomistic, rationalist tradition that extends from Descartes, Leibniz, and Hobbes

assumes that all phenomena, even mental ones, can be understood by breaking them down

into their simplest primitive components. The goal of the great seventeenth-century

rationalists was to find these components and the purely formal and logical rules that joined

them together into the more complex compounds of the exterior and interior worlds. All

reasoning, therefore, could be reduced to calculations. Analysis would produce a kind of

alphabet of facts, the simplest atoms of the world that could be recombined by a limited number of logical relations to produce and explain the world and all thoughts. That same goal lies behind Russell and Whitehead's Principia Mathematica, their great attempt to reduce the world to logical operations expressed mathematically, Ludwig Wittgenstein's Tractatus Logico-Philosophicus (1922), and artificial intelligence. "AI", in the words of Hubert and Stuart Dreyfus, philosophers of science at the University of California at Berkeley, "can be thought of as the attempt to find the primitive elements that mirror the primitive objects and their relationships that make up the world" [38].



That effort assumes that it is possible to strip these atoms of all their relations, that at base they are context-free, linked by abstract rules. The grand project of artificial intelligence has been to find those atoms and the logical relations that govern them and to build a symbolic computer representation that captures that order. This problem can be resolved in some way by learning "facts-relationship" patterns through the experience of applications and explanations.

Tools of reasoning:

1. inference (analogical, probabilistic, monotonic, non-monotonic)

2. tautology (the reversible rules)

3. decomposition

4. combination

5. separation

6. comparison and selection

7. judgment

8. algorithmization

Knowledge Representation

Knowledge is the sum or range of what has been perceived, discovered, or learned [36]. The laws of Nature and Society development and existence are knowledge and a source of new knowledge. Knowledge of the external world is mediated by the perceptual mechanism. Thomas Hobbes and Pierre Gassendi agreed that reality is not composed of two different kinds of stuff but of one kind only - the physical [67].

The problem of representation is a central part of the problem of knowledge and an enduring issue in philosophy of mind:

1. At the most fundamental ontological level, our experiences of the external world are

complex arrangements of matter, energy and information.

2. We might ask whether we or a honeybee – whose vision is sensitive to an

electromagnetic spectrum to which we are essentially blind – more accurately

"represents" the properties of roses and lilies.

3. Thus, the question of representation can be stated: Is our knowledge of objects in the

external world direct or mediated? [61].

There are a lot of different approaches to knowledge representation in the agent's knowledge base. The most important languages of knowledge representation are proposition, predicate and fuzzy logic, frames, semantic nets, and others. All known knowledge representations, such as map building, the STRIPS language [27], etc., can be presented through the languages mentioned above.

The real external world could be seen or heard or felt as if it holds certain properties. The fact is that vision and sensory experiences in general comprise properties of a distinctly "experiential" quality. For instance, a 25-cent coin is round; however, only when it is projected onto the retina in a straight-on plane will it form a circular pattern on the retina. At any other angle, it will be elliptical. In accordance with the identity theory, the question is what in reality quality terms such as "red" or "melodic" refer to.



The set of different models can be stored in the knowledge base. Fitness of an object or event to a specific model and category can be defined in the process of learning (teaching) by certain criteria.

Knowledge and the precision of its presentation are not absolute. They have a probabilistic character. The tiger, lion, panther and cat are different animals, but all of them belong to the same "cat" family with different levels of membership. This reflects knowledge fuzziness (see APPENDIX 4).

A neural network is a discrete system where information is presented in discrete form.

The Structure of Knowledge Representation in the Intelligent System

Classification of natural systems memory is based on the duration of memory retention, and

identifies three types of memory:

1. sensory memory,

2. short term memory,

3. long term memory (see APPENDIX 14).

The Knowledge Base of an artificial system has two types of memory:

1. The memory for data - the short term memory combined with the sensory memory,

2. The memory, where organized data (knowledge) is located - the long term memory

The last one has two parts:

1. The Application Knowledge base

2. The Reasoning Knowledge base

The Application Knowledge base can be divided into

1. declarative (semantic)

2. relationships memory.

Perception generates information for the World Model in the short-term memory. Associative thinking and Object recognition work with the long-term memory.

In artificial systems information exchange between these two types of memory is an "any time" procedure. Some knowledge can be presented directly to the long-term memory. In natural systems this process takes place at a specific period of time (see APPENDIX 14).

The memory has the following features:

1. everything lived through consciously is automatically printed to the memory;
2. the memory is constantly held accessible;
3. past experience, though having lost its original presence, can be made to reappear in mental presence.

The memory:

1. relies on an enormous capacity of information storage;
2. relies on the conservation of information and its protection against overwriting;
3. means that the experience passed is reproduced by recombining the information stored with mental presence.



Anything
  Objects (abstract models)    Events (processes)
    Categories
      Representation
        Linguistic    Numeric    Symbolic

The structure of knowledge representation


Any intelligent information system operates with real-world information. This information is represented by the rules of application. Logic manipulates the rules of reasoning. These rules represent the relationships between abstract terms. The meaning of real-world terms can be assigned to these abstract terms.

There is much research dedicated to the problems of reasoning and the agent's structure design [22,23,25,26]. All of them are based on the representation of knowledge as rule-based, semantic net, frame structures and so on. These knowledge bases (KB) are centered on application knowledge (AK) (domain-oriented KB). Application rules of reasoning are different for different areas of application. The single-base approach decreases the level of universality of the agent. Most existing systems with reasoning are not universal. Theorem provers (http://www-formal.stanford.edu/clt/ARS/Entries/acl2) are based on rules of reasoning and do not work with application knowledge. Some of them, like ACL2, are designed as multi-KB. However, all of these systems are based just on proposition logic. The most interesting result in the area of reasoning is the Jess language (Jess, the Java Expert System Shell, http://herzberg.ca.sandia.gov/jess/demo.html). This language also is based on just one KB, the AKB. Information is presented by predicate logic. Rules of reasoning are incorporated into the source code.

A possible way to increase the level of universality of the agent is by creating the double-KB

agent structure. The first KB is the application knowledge base (AKB); the second one is the

rule of reasoning KB-reasoning knowledge base (RKB). The RKB is a universal KB. It can be

used with different AKB. The Double-KB structure of a system is shown in Fig. II-5. AKB

has a multilevel structure. The process of reasoning is shown in Fig. II-7. Complex rules of

application should be decomposed to simple rules via the rules of reasoning (And-Elimination

rule-RR2 in Fig. II-7) application. The idea of a multi-KB in search engines also was

described by Dr. Lotfi Zadeh in ―The Prototype-Centered Approach to Adding Deduction

Capability to Search Engines- The Concept of Protoform‖ (BISC letter, 21 Dec 2001)

http://www.cs.berkeley.edu/People/Faculty/Homepages/zadeh.html In this letter: ―The deduction database is assumed to consist of a logical database and a computational database,

with the rules of deduction…‖



Separation of the AKB and RKB from the program code converts a conventional system into

a system with the ability to learn, creates conditions for teaching the system through delivery

of new rules of application and reasoning by an expert in an area of application and reasoning

without knowledge of programming. It is an important progressive step from a conventional

system to the AI system. New rules should have the same structure as existing rules. New processes can be added via new program modules. The number of areas of application

determines the number of AKB. Multi-KB structure creates conditions necessary to design a

system with the ability to generate rules as hypotheses in the AKB. The choice of application

rules (AR) is determined by terms. Choice of rules of reasoning (RR) is determined by the

structure of the application rule. New knowledge as new application rules is presented to the

World Model (WM). Technically a process of reasoning can be described as the following

chain of steps:


Data → AR activation → AR tests all chains of related knowledge in the WM → RR activation → simplification of a rule → Data

Execution of the foregoing process is separated by levels of knowledge.
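A minimal Python sketch of this chain is given below: application rules live in a small AKB table, And-Elimination (RR2) splits conjunctive conclusions into simple facts, and the loop repeats until nothing new is derived. The rule encoding is invented for illustration.

    # Sketch of the Data -> AR -> RR -> Data loop. A rule is a pair
    # (premises, conclusions); several conclusions model a conjunction
    # that RR2 (And-Elimination) splits into simple facts.

    def forward_chain(facts, akb):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusions in akb:    # AR activation
                if set(premises) <= facts:       # all premises hold in the WM
                    for c in conclusions:        # RR2: split the conjunction
                        if c not in facts:
                            facts.add(c)         # new data back into the WM
                            changed = True
        return facts

    # Two toy application rules: IF A THEN X; IF X AND B THEN Y AND Z.
    AKB = [(("A",), ("X",)), (("X", "B"), ("Y", "Z"))]
    print(sorted(forward_chain({"A", "B"}, AKB)))   # -> ['A', 'B', 'X', 'Y', 'Z']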

Fig. II-5 shows the double-KB system. Fig. II-6 and II-7 show the algorithm and structure of the system. Fig. II-8 shows the forward-chain algorithm of reasoning that is based on the rules of reasoning RR15-RR17. Application of the rules RR1-RR14 is not shown.


[Fig. II-5 block diagram: Reasoning Knowledge base (rules of reasoning), Application Knowledge base (application rules), Data representation, Inference engine, Goals, Translator, Variables description, Interface, Environment.]

Fig. II-5. The double-KB system structure.



Knowledge representation in the neural net.

Endel Tulving, a psychologist at the University of Toronto, and others demonstrated the existence of at least two independent kinds of memory, sometimes called procedural and factual. People with brain lesions who have forgotten such facts as whether they ever took piano lessons can still remember how to play the instrument. Spatial object representation can be done by the methods of graphic software.

How does a group of neurons "mean" a specific thing? It is a relative representation, that is, groups of neurons with memberships that differ from learning trial to learning trial and yet somehow represent the same fact. It works like the procedure of classification in the neural net (Fig. II-27). Each object receives a specific code and is placed into a specific cluster in accordance with this code. There is some overlap between clusters. It represents the fuzziness of categories and the way some blend into each other. A blob defines a category, and its relation to other blobs suggests the relationship between categories.
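The overlapping-cluster ("blob") representation can be sketched in Python as degrees of membership, in the spirit of the "cat family" example above; all membership values below are invented.

    # Toy sketch of overlapping categories: each object has a degree of
    # membership in each category, so categories can blend into each other.

    MEMBERSHIP = {
        "cat":     {"cat family": 1.0, "pets": 0.9},
        "tiger":   {"cat family": 0.9, "pets": 0.0},
        "panther": {"cat family": 0.8, "pets": 0.0},
    }

    def category_members(category, threshold=0.5):
        # an object belongs to the blob if its membership exceeds the threshold
        return [name for name, m in MEMBERSHIP.items()
                if m.get(category, 0.0) >= threshold]

    print(category_members("cat family"))   # -> ['cat', 'tiger', 'panther']
    print(category_members("pets"))         # -> ['cat']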

Rules of Reasoning

There is a limited set of rules of reasoning in proposition logic [22-26]:

RR1. Implication Elimination (modus ponens): from α ⇒ β and α, infer β (IF α ⇒ β is in the DB and α is true, THEN β is true)
RR2. And-Elimination: from α1 ∧ α2 ∧ α3 ∧ ... ∧ αn, infer any αi, i = 1, ..., n
RR3. And-Introduction: from α1, α2, α3, ..., αn, infer α1 ∧ α2 ∧ α3 ∧ ... ∧ αn
RR4. Or-Introduction: from αi, infer α1 ∨ α2 ∨ α3 ∨ ... ∨ αn
RR5. Double-Negation Elimination: from ¬¬α, infer α
RR6. Unit Resolution: from α ∨ β and ¬β, infer α
RR7. Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ
RR8. Universal Elimination: from (∀x) α(x), infer α(g) (from the DB: x = g)
RR9. Existential Elimination: from (∃x) α(x), infer α(g) (from the DB: x = g, where g is a new constant)
RR10. Existential Introduction: from α(g), infer (∃x) α(x) (from the DB: x = g)
RR11. De Morgan laws: ¬(α ∧ β) ≡ ¬α ∨ ¬β and ¬(α ∨ β) ≡ ¬α ∧ ¬β
RR12. Universal Generalization: (∀x) P(x)
RR13. Existential Generalization: (∃x) P(x)
RR14. Rule of Induction: from P(1) = T and (∀k) {[P(k) = T] ⇒ [P(k+1) = T]}, infer P(n) = T for all n
RR15. Associative law


This set of rules creates the universal RKB.






[Fig. II-6 is a screenshot; its numbered interface elements are listed below.]

Fig. II-6. Multi knowledge base system. (Mr. Uri).

1. Add to DataBase button


2. Add to Application Knowledge Base button

3. Data Base (data display area)

4. Knowledge Base

5. Reasoning Rules display area Button

6. Change Data Button/delete data

7. Choose Name Box (facts on)

8. Ask a Question Button

9. Pre-set question panel

10. Choose Name Box (pre-set question)

11. Choose Object Box

12. Execute Pre-set Question Button

13. Execute Pre-set Question Button
14. Results display area


Example of the process of reasoning.

Suppose the DB initially includes the facts A, B, C, D, and E, and the AKB contains the application rules:

AR1: IF Y is true AND S is true AND D is true THEN Z is true

AR2: IF X is true AND B is true AND E is true THEN Y is true

AR3: IF A is true THEN X is true

AR4: IF P is true THEN S AND B AND W is true

RR1: IF S AND B AND W is true THEN S is true

Sequence of the process of rules application:

AR3, AR4, AR2, RR1, AR1

Fig. II-7. An inference (forward) chain in a system based on proposition logic.
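The run below reproduces this chain with a short Python sketch; the rule encoding is invented, the fact P is assumed to be available so that AR4 can fire, and And-Elimination (RR1 in the figure) is modeled by splitting the conjunctive conclusion of AR4 into simple facts.

    # Self-contained run of the Fig. II-7 example; the order in which the
    # rules fire is recorded. All encoding choices are illustrative only.

    RULES = {
        "AR1": (("Y", "S", "D"), ("Z",)),
        "AR2": (("X", "B", "E"), ("Y",)),
        "AR3": (("A",), ("X",)),
        "AR4": (("P",), ("S", "B", "W")),   # RR1 then extracts S from S AND B AND W
    }

    facts = {"A", "B", "C", "D", "E", "P"}  # P assumed present for illustration
    fired, changed = [], True
    while changed:
        changed = False
        for name, (premises, conclusions) in sorted(RULES.items()):
            if set(premises) <= facts and not set(conclusions) <= facts:
                facts |= set(conclusions)
                fired.append(name)
                changed = True

    print(fired)          # -> ['AR3', 'AR4', 'AR2', 'AR1']
    print("Z" in facts)   # -> True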




Arpsychology and structured design of artificial intelligent systems

Arpsychology and structured design of artificial intelligent systems

Proposition Logic: Forward Chaining (Data-Driven Reasoning)

[Fig. II-8 traces four successive snapshots of the DB and IDR. Starting from the facts A, B, C, D, E, the application rules A ⇒ X, C ⇒ P, P ⇒ S ∧ B ∧ W, X ∧ B ∧ E ⇒ Y, and Y ∧ D ∧ S ⇒ Z fire in turn, each step adding the derived facts (X, P, then S, B, W, then Y, then Z) to the data base, with the RKB rules con(αi) ⊢ LIST(αi) applied at every step.]

Fig. II-8. The system structure and algorithm. IDR - internal data representation, DB - data base (external data representation), RKB - reasoning knowledge base.
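This forward chain is small enough to emulate directly. The following sketch (in Python, an arbitrary choice for illustration; none of these names come from the book) encodes AR1-AR4 together with the C ⇒ P link visible in Fig. II-8, treating a conjunctive conclusion by storing each conjunct separately, which plays the role of And-Elimination:

    # Illustrative rule set: AR3, the C => P link from Fig. II-8, AR4
    # (conjunctive conclusion stored conjunct by conjunct), AR2, AR1.
    RULES = [
        ({"A"}, {"X"}),
        ({"C"}, {"P"}),
        ({"P"}, {"S", "B", "W"}),
        ({"X", "B", "E"}, {"Y"}),
        ({"Y", "D", "S"}, {"Z"}),
    ]

    def forward_chain(db):
        """Fire every rule whose premises all lie in the DB, repeating
        until no new fact can be derived (data-driven reasoning)."""
        db = set(db)
        changed = True
        while changed:
            changed = False
            for premises, conclusions in RULES:
                if premises <= db and not conclusions <= db:
                    db |= conclusions
                    changed = True
        return db

    print(sorted(forward_chain({"A", "B", "C", "D", "E"})))
    # Z appears in the result: the chain AR3, AR4, AR2, AR1 has fired.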


Relationship Between Abstract and Specific (see also ABSTRACT THINKING

AND CONCEPTUALIZATION)

Syntax in predicate logic can be presented as:

PREDICATE (LIST OF TERMS - OBJECTS)

PREDICATES: RELATIONSHIP, PROPERTIES, and FUNCTIONS.


Suppose the following facts and rules are represented in predicate logic using meaningful predicates and functions.


Rules of application (abstract concept):

1) Anyone sane does not teach an AI course.
(∀x) sane(x) ⇒ ¬AIInstructor(x)

2) Every circus elephant is a genius.
(∀x) CircusElephant(x) ⇒ Genius(x)

3) Nothing is both male and a circus elephant.
(∀x) Male(x) ⇒ ¬CircusElephant(x)

4) Anything not male is female.
(∀x) ¬Male(x) ⇒ Female(x)


Data (specific data):


1) Clyde is not an AI instructor. ¬AIInstructor(Clyde)

2) Clyde is a circus elephant. CircusElephant(Clyde)


Based on the application rules, determine whether the following statement is true, false, or cannot be established:

Clyde is a genius.

An example of the working system is presented in Fig. II-6, and the algorithm in Fig. II-9.


ADDITIONAL RULES OF REASONING IN PREDICATE LOGIC

Rules of reasoning include all rules of reasoning based on propositional logic and an additional set of rules that are specific to predicate logic, such as:

RR16. Find all atomic sentences in the DB that relate to the first term

RR17. Find all atomic sentences with a conclusion that relates to the predicate of the result of the RR16 action

RR18. Check each of them against the solution question.
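The same question can also be checked backward, from the goal literal down to the DB facts. This is a hypothetical complement to the forward algorithm of Fig. II-9, not the book's procedure; the rules are pre-grounded on Clyde by Universal Elimination (RR8), and each literal is a (predicate, sign) pair:

    # DB facts about Clyde (sign False encodes negation).
    FACTS = {("AIInstructor", False), ("CircusElephant", True)}

    # Each rule grounded on Clyde: consequent literal -> antecedent literal.
    RULES = {
        ("AIInstructor", False): ("sane", True),        # rule 1
        ("Genius", True): ("CircusElephant", True),     # rule 2
        ("CircusElephant", False): ("Male", True),      # rule 3
        ("Female", True): ("Male", False),              # rule 4
    }

    def prove(goal, depth=8):
        """A goal holds if it is a DB fact, or if some rule concludes it
        and that rule's antecedent can itself be proved."""
        if goal in FACTS:
            return True
        if depth == 0 or goal not in RULES:
            return False
        return prove(RULES[goal], depth - 1)

    print(prove(("Genius", True)))   # True: via CircusElephant(Clyde)
    print(prove(("Female", True)))   # False: Male(Clyde) is not established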


Algorithm of Reasoning (predicate logic)


Proof that Clyde is a genius:

[Fig. II-9 shows the forward chain. RR16 selects the atomic sentences about Clyde from the DB: ¬AIInstructor(Clyde) and CircusElephant(Clyde). RR17 matches them against the AKB rules (∀x) sane(x) ⇒ ¬AIInstructor(x), (∀x) CircusElephant(x) ⇒ Genius(x), and (∀x) Male(x) ⇒ ¬CircusElephant(x), producing the candidate conclusions Genius(Clyde) and ¬Male(Clyde). RR18 checks each candidate against the question; the RESULT is Yes: Genius(Clyde).]

Fig. II-9 shows the forward-chain algorithm of reasoning based on rules RR16-RR18. Application of RR1-RR14 is not shown.

Wumpus World

The Wumpus World serves as an example of the process of reasoning [27].


[Fig. II-10 shows a 4 x 4 grid of squares, labeled (1,1) through (4,4). The agent A starts at square (1,1). The WUMPUS occupies one square and a PIT another; every square adjacent to the WUMPUS is marked STENCH, every square adjacent to a PIT is marked BREEZE, and the GOLD lies in a square surrounded by STENCH and BREEZE.]

Fig. II-10. Wumpus World


Wumpus is a beast that eats anyone who enters its room. He is somewhere in the cave. Possible Agent's actions are: go forward, go backward, turn right 90°, and turn left 90°.

Fig. II-10 presents the famous problem: the AGENT is searching for GOLD. WUMPUS and PIT are dangerous areas; these areas are forbidden for an AGENT. Areas adjacent to the WUMPUS (STENCH) and to a PIT (BREEZE) generate signals of danger. The agent "A" has the goal to find the GOLD and bring it back to the start. It dies if it enters a square containing a pit or a live WUMPUS. It is safe (but smelly) to enter a square with a dead wumpus. The GOLD is surrounded by STENCH and BREEZE.


An AGENT who does not have the ability of reasoning will not be able to find the GOLD in the environment presented in Fig. II-10, where the WUMPUS and PIT are around the GOLD. Only a sophisticated ability of reasoning combined with environment memorization (the World Model) permits an AGENT to solve this problem. Fig. II-11 shows a system with the ability to design the World Model and apply rules of application.


The following capabilities are included to satisfy these specific system requirements:

1. to recognize objects and situations
2. to infer from the recognized elements of the scene
3. to search for a required object within a scene
4. to remember scenes
5. to interpret situations
6. to evaluate objects and situations.


Rules for WUMPUS and PIT:

DB: ¬S1,1 ∧ ¬B1,1; ¬S2,1 ∧ B2,1; S1,2 ∧ ¬B1,2

Universal AKB (WUMPUS):

R1: ¬Si,i ⇒ ¬Wi,i ∧ ¬Wi,i+1 ∧ ¬Wi+1,i
R2: ¬Si+1,1 ⇒ ¬Wi,i ∧ ¬Wi+1,1 ∧ ¬Wi+1,i+1 ∧ ¬Wi+2,i
R3: ¬Si,i+1 ⇒ ¬Wi,i ∧ ¬Wi,i+1 ∧ ¬Wi+1,i+1 ∧ ¬Wi,i+2
R4: Si,i+1 ⇒ Wi,i+2 ∨ Wi,i+1 ∨ Wi+1,i+1 ∨ Wi,i

Universal AKB (PIT):

R1: ¬Bi,i ⇒ ¬Pi,i ∧ ¬Pi,i+1 ∧ ¬Pi+1,i
R2: ¬Bi+1,1 ⇒ ¬Pi,i ∧ ¬Pi+1,1 ∧ ¬Pi+1,i+1 ∧ ¬Pi+2,i
R3: ¬Bi,i+1 ⇒ ¬Pi,i ∧ ¬Pi,i+1 ∧ ¬Pi+1,i+1 ∧ ¬Pi,i+2
R4: Bi,i+1 ⇒ Pi,i+2 ∨ Pi,i+1 ∨ Pi+1,i+1 ∨ Pi,i




Fig. II-11. Agent in the Wumpus World. (Mr. Benny Wong)



The two sets of equations can be replaced by one universal set:

R1: ¬Yi,i ⇒ ¬Xi,i ∧ ¬Xi,i+1 ∧ ¬Xi+1,i
R2: ¬Yi+1,1 ⇒ ¬Xi,i ∧ ¬Xi+1,1 ∧ ¬Xi+1,i+1 ∧ ¬Xi+2,i
R3: ¬Yi,i+1 ⇒ ¬Xi,i ∧ ¬Xi,i+1 ∧ ¬Xi+1,i+1 ∧ ¬Xi,i+2
R4: Yi,i+1 ⇒ Xi,i+2 ∨ Xi,i+1 ∨ Xi+1,i+1 ∨ Xi,i

where X stands for W or P, and Y stands for S or B.


An Agent in the famous "Wumpus World" problem manipulates three types of rules of reasoning: Modus Ponens, And-Elimination, and Unit Resolution [27, 30].


Algorithm:

See Fig. II-4, II-5, II-6, II-7
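A fragment of this reasoning is simple to emulate. The sketch below covers only the negative rules (the R1-R3 pattern): a visited square that reports neither STENCH nor BREEZE proves all of its neighbors free of both Wumpus and Pit; the percept history used here is illustrative.

    def neighbors(x, y, size=4):
        """Squares adjacent to (x, y) inside the size x size grid."""
        steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        return [(x + dx, y + dy) for dx, dy in steps
                if 1 <= x + dx <= size and 1 <= y + dy <= size]

    def infer_safe(percepts):
        """percepts maps visited squares to sets such as {'STENCH'} or
        set(); returns the squares proven safe by the negative rules."""
        safe = set(percepts)  # a visited square with a living agent is safe
        for square, signals in percepts.items():
            if not signals:   # no stench, no breeze: all neighbors are safe
                safe.update(neighbors(*square))
        return safe

    # Percept history from the start of the game (illustrative values):
    percepts = {(1, 1): set(), (2, 1): {"BREEZE"}, (1, 2): {"STENCH"}}
    print(sorted(infer_safe(percepts)))
    # (1,1) is clear, so (1,2) and (2,1) hold neither Wumpus nor Pit.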

MEASUREMENT OF KNOWLEDGE VALUE AND POWER OF
REASONING OF ARTIFICIAL SYSTEMS


The level of knowledge is determined by the numbers of associations, analogues, and rules of application in the application knowledge base [30]. The stronger the level of knowledge, the greater the power of creativity (under the same power of reasoning).


It is a simple problem to count the application rules in the application knowledge base. The numbers of associations and analogues can be estimated through difficult, randomly generated procedures; this process yields only an approximate result.


The level of reasoning is determined by the number of rules of logic (rules of reasoning, De Morgan rules, and so on) in the reasoning knowledge base [30]. Standardized weight-values can be assigned to each rule. The full set of rules of reasoning has a limited size, which makes it possible to generate the reasoning knowledge base at full power.


The ability of the system to manipulate these rules is determined by a standard test. Data (information about objects, events, and so on) does not determine intellectual power.

ASSOCIATIVE THINKING

The goal of associative thinking is to present the set of possible solutions to a particular problem. Choosing criteria and making correct choices are the goal of the process of reasoning.

Associative thinking is reasoning that is based on connections between words, events, sounds, etc. (see also INTUITION). It is a search of the control system's memory for information that can enrich the input information and help to solve the problem.



Aristotle: two sensations repeatedly experienced together become associated. The level of connection between words can be determined by the frequency of repetition of their combinations in different texts. For example, the word "sky" is associated with the words "blue" and "cloud". The word "blue" is associated with the word "lake", and so on. As a result, software can be used to create a connection between the words "sky" and "lake". This method can be used to find connections between different events and present them as a law of nature or a law of the artificial system's environment (see CREATIVITY). The diameter of the association ball (circle) determines the strength of associative thinking. The strength of association can also be defined by the number of steps between terms and the relative frequency of repetition. Strong emotions can increase the associative connection between different events through adjustment of the weights of connections. The more information in the agent's memory (data base) and the higher the level of structurization of this information, the greater the diversity and strength of associations. It is possible to present information as a set of common sense knowledge collected from different sources, similar to the MIT project (Super Intelligence design). Chosen criteria (action, shape, color, etc.) determine correct associations. The procedure is based on the abilities of recognition and reasoning. In the simple case it develops a tree-like structure. But in reality it creates a net, an undirected graph, because some nodes n(i) have associations with nodes from different branches. This structure S can be presented as the net

S = n(i)·{Σ1 n(i)·[Σ2 n(i)·( … Σi n(i) … )]}

Several nodes have more than one inclusion; this shows the associative power of these nodes. Some terminal nodes can be loose, not included in a circle. The size of the association ball can be limited by specific criteria, for example, the strength of associations. The bigger the size, the higher the level of intelligence and creativity.
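The frequency-of-repetition measure above is easy to make concrete. In the sketch below the text fragments are toy data, and the strength of an association is simply the relative frequency with which two terms occur in the same fragment:

    from collections import Counter
    from itertools import combinations

    # Toy corpus: each fragment is the set of terms it contains.
    fragments = [
        {"sky", "blue", "cloud"},
        {"blue", "lake"},
        {"sky", "cloud"},
        {"lake", "blue", "sky"},
    ]

    pair_counts = Counter()
    for terms in fragments:
        for pair in combinations(sorted(terms), 2):
            pair_counts[pair] += 1

    def strength(a, b):
        """Relative frequency of the pair over all fragments."""
        return pair_counts[tuple(sorted((a, b)))] / len(fragments)

    print(strength("sky", "blue"))   # 0.5: a direct, frequent association
    print(strength("sky", "lake"))   # 0.25: a weaker link, bridged by "blue"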


GenoPharm software (Berkeley Lab) can find hidden knowledge in thousands of Internet publications that was overlooked by scientists. This software is based on associations between terms. It infers new knowledge by connecting closely related terms into one meaningful string.


"Associative memory" is a descriptive term on a different level. It refers to a kind of network, one structured to perform the specific task of associating one input with another and then retrieving both inputs when just one is presented to the machine [38].


One way would be to tag every item with the main term "A" when it is first added to the computer memory. To find all the items linked to "A", the computer would have to look at every memory it held for this "A" tag. The memory can be structured: all such memories would go in one section, an "A" list [38]. For example, the word "graph" activates the list "graph" and develops connections to "directed graph", "undirected graph", "3D-graph", and so on. For more see LEARNING; Curiosity, Learning by Interactions.



The existence of the memory makes reasonable both the materialist point of view and the cognitivist point of view as well [14]. In the reconstruction of new knowledge, when any past event or experience is recalled, the act of recollection tends to bring back into use other events and experiences that have become related to this event in one or more specific ways. This is called an "association". Associative memory refers to the ability to recall complete situations from partial information. These systems correlate input data with information stored in memory. Information can be recalled even from incomplete input. Associative memory can detect similarities between new input and the stored pattern [11, 12, and 13]. Researchers don't yet know exactly how the brain completes its associative tasks; it clearly can't work this way, because the brain's neurons are so much slower than the computer's individual processors [38], but it is possible for an artificial system.

Algorithm:

1. Input: the problem presented as a set of defined TERMS
2. Find the set of the associated TERMS
3. Define the strength of associations (the diameter of the association ball (circle))
4. Rearrange the TERMS in accordance with the strength of their associations
5. Present the final set for reasoning (see REASONING).
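A minimal sketch of this algorithm, assuming the agent's memory has already been reduced to a weighted association table (the table, the weights, and the threshold below are illustrative):

    # Hypothetical association table: term -> {associated term: strength}.
    ASSOCIATIONS = {
        "sky": {"blue": 0.9, "cloud": 0.8, "gray": 0.3},
        "blue": {"lake": 0.7, "sky": 0.9},
    }

    def associate(problem_terms, min_strength=0.5):
        """Steps 2-5: gather associated terms, keep the strong ones, and
        return them ordered by strength for the reasoning stage."""
        found = {}
        for term in problem_terms:
            for other, weight in ASSOCIATIONS.get(term, {}).items():
                found[other] = max(weight, found.get(other, 0.0))
        ranked = sorted(found.items(), key=lambda kv: kv[1], reverse=True)
        return [(term, w) for term, w in ranked if w >= min_strength]

    print(associate(["sky"]))   # [('blue', 0.9), ('cloud', 0.8)]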

ABSTRACT THINKING AND CONCEPTUALIZATION

In philosophical terminology, abstraction is the thought process wherein ideas are distanced from objects. Abstract thinking is the manipulation (reasoning) of abstract terms and their relationships. Abstraction uses a strategy of simplification (decomposition), wherein formerly concrete details are left ambiguous, vague, or undefined; thus effective communication about things in the abstract requires a common experience between the communicator and the communication recipient.

Abstract thinking manipulates linguistic (abstract words like truth and justice), mathematical, and graphic symbols under the rules of semiotics. It is the area of the highest level of the knowledge base information (see Knowledge Base).

The tools of the process of the logical chain that connects the start and the end of the thinking

process are:


structuring,

reasoning,

tautology.


The procedure of understanding abstract and specific information is based on previously presented definitions, patterns, and symbols. Decomposition of unknown complex terms or symbols can be used for recognition of unknown abstract information. Unknown symbols and terms can be marked and submitted for definition searching. Unknown specific terms can be learned and understood through different learning methods (see LEARNING) and


associative thinking methods (see ASSOCIATIVE THINKING). Predicate logic is the tool to

develop relationships between abstract and specific (see REASONING).

Conceptualization is a kind of abstract thinking (see UNDERSTANDING AND

INTERPRETATION). Concept is a general idea derived or inferred from specific

instances or occurrences [36].

Conception is a notion or mental image: idea, concept, impression, perception, picture, thought, insight, interpretation, mental picture [36].

Conceptualization itself consists of two levels:

identification of important characteristics

identification of how the characteristics are logically linked (see also

GENERALIZATION, LEARNING, Learning Concept and Conceptual Learning).

Classification (see CLASSIFICATION) is a possible tool for conceptualization: identify important characteristics and how the characteristics are logically linked (associative thinking is the tool for identification).

Building a neural network that can learn an abstract concept like maleness and femaleness, without ever being told anything about people or sexual characteristics, is just a way to learn how networks categorize (Beatrice Golomb, University of California, San Diego) [38].


For example: the abstract term GREATNESS is determined by a weighted sum of terms: A, B, C, etc. The specific meanings of these terms can convert this abstract term into a specific, meaningful one. For instance, New York City's greatness can be defined by walking through downtown Manhattan at lunch time, observing the masses of diverse people, seeing the huge beautiful buildings and bridges, and reflecting on the historic domestic and international events depicted on markers on a Broadway sidewalk. All of these create a specific feeling (see also EMOTIONS) of AMERICA'S GREATNESS, because New York City is associated with AMERICA.
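The weighted-sum view of an abstract term can be sketched in a few lines; the component terms and weights below are illustrative stand-ins for A, B, C:

    # Hypothetical components of GREATNESS with their weights.
    COMPONENTS = {"architecture": 0.3, "diversity": 0.4, "history": 0.3}

    def abstract_score(observations):
        """observations maps component terms to specific scores in [0, 1];
        the abstract term is their weighted sum."""
        return sum(w * observations.get(t, 0.0) for t, w in COMPONENTS.items())

    # Specific impressions gathered while walking through downtown Manhattan:
    print(abstract_score({"architecture": 0.9, "diversity": 1.0, "history": 0.8}))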

Another example:

1. The statement "Freedom is liberty of the person from slavery, detention, or oppression" confirms the fact.
2. "Conformity to fact or actuality" is Truth.
3. It means "Freedom is liberty of the person from slavery, detention, or oppression" is TRUTH.

Algorithm: (see REASONING, GENERALIZATION AND CLASSIFICATION).

Tools of abstract thinking are GENERALIZATION, DECOMPOSITION,

CLASSIFICATION, Learning Concept and Conceptual Learning.



GENERALIZATION AND CLASSIFICATION

Generalization is a foundational element of logic and reasoning. It is the essential basis of all valid deductive inference. For any two related concepts A and B, A is considered a generalization of concept B if and only if:

every instance of concept B is also an instance of concept A; and

there are instances of concept A which are not instances of concept B.

For instance, animal is a generalization of bird because every bird is an animal, and there are animals which are not birds (dogs, for instance).

Generalization is the act or an instance of generalizing: to draw inferences or a general conclusion from specific acts, events, objects, etc. [36]. The process of generalization is based on classification (purification) of the features of objects or events, creating identical groups that can be presented under a common description. Fig. II-12 presents the purification procedure with a numeric evaluation of the result. In some cases the ability of hypothesis generation determines the capability of generalization. Hypothesis generation is an example of the algorithm of generalization (see HYPOTHESIS GENERATION).

Classification: to arrange or organize objects or events according to class or category. The ability of classification is determined by the level of purity of the final group. The combination of classification with a decision tree permits learning new knowledge from positive and negative experiences. The learning procedure can be presented as an algorithm.

Generalization is very difficult work for a machine. Some contextual rules can tell the machine what parameters to concentrate on in a specific instance in order to reach a relevant decision about sameness or difference.

In the article "Toward a Universal Law of Generalization for Psychological Science" (1987), Shepard described the metric of similarity as the units that measure the psychological space between two objects. For example: for a species whose survival depends on discriminating dogs from bears, the metric of similarity would put a relatively great distance between the two. For one that only needs to comprehend dogs and bears as big animals, the psychological distance would be smaller.

Algorithm (Classification):

1. If there are some positive and some negative examples, then choose the best attribute to split them (the attribute that gives maximal gain). Test all possible splits on all possible independent variables, using a "classifier" (a software tool) to distinguish between splits.
2. Compute all the resulting gains in purity.
3. Pick the split that maximizes the gain.
4. If all the remaining examples are positive (or all negative), then we are done. Otherwise repeat 1-4.
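The gain computation in steps 1-3 can be sketched with Gini impurity (one common impurity measure; the book does not name its exact measure, and the resident data below are illustrative):

    def gini(labels):
        """Gini impurity of a list of class labels (0 = pure group)."""
        n = len(labels)
        if n == 0:
            return 0.0
        return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

    def gain(parent, splits):
        """Impurity reduction achieved by splitting `parent` into `splits`."""
        n = len(parent)
        weighted = sum(len(s) / n * gini(s) for s in splits)
        return gini(parent) - weighted

    residents = ["+"] * 10 + ["-"] * 10          # a 50/50 mixed group
    nj = ["+"] * 9 + ["-"]                       # one split candidate
    non_nj = ["-"] * 9 + ["+"]
    print(gini(residents))                       # 0.5 for the mixed group
    print(gain(residents, [nj, non_nj]))         # pick the split with max gain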



This procedure can be used to measure the ability of classification. It is possible to develop scenarios of different levels of difficulty.

REMOVING "IMPURITY" FROM THE DATA

[Fig. II-12 traces the purification of a group of residents. The total impurity drops from 0.99 to 0.7 after the first split (gain = 0.99 - 0.7 = 0.29) and to 0.5 after the second (gain = 0.7 - 0.5 = 0.2); at this stage the NJ group of 20 residents has impurity 0.2 and the Non-NJ group of 20 residents has impurity 0.3 (0.2 + 0.3 = 0.5), while a fully purified group ("All Residents") reaches impurity 0.]

Fig. II-12. Classification.


To learn more about classification, see SOCIAL BEHAVIOR.

INTUITION


Does an artificial system have intuition? Is it possible to create an artificial system with intuition? What is intuition? Different people give different answers. The working linguistic

(computation with words) model is designed to answer this question.

In Spinoza‘s philosophy, intuition is the highest form of knowledge, surpassing both empirical knowledge derived from the senses and ―scientific‖ knowledge derived from

reasoning on the basis of experience. Intuitive knowledge gives an individual the



comprehension of an orderly and united universe and permits the mind to be a part of the infinite being.

Immanuel Kant regarded intuition as the portion of a perception that is supplied by the mind itself. He divided perceptions, or "phenomena," into two parts: the sensation caused by the external object perceived and the "form," or the understanding of the perception in the mind, which results from intuition. An understanding of space and time are types of pure intuition.

Henri Bergson (French philosopher born in 1859) contrasted instinct with intelligence and regarded intuition as the purest form of instinct. Intelligence, he believed, was adequate for the consideration of material things but could not deal with the fundamental nature of life or thought. He defined intuition as "instinct that has become disinterested, self-conscious, capable of reflecting upon its object and of enlarging it indefinitely." Intelligence, on the other hand, can only analyze, and the function of analysis is to produce what is relative in an object, rather than what is absolute or individual. Only by intuition, Bergson declared, can the absolute be comprehended.

Some ethical philosophers, among them Spinoza, have been called intuitionists or

intuitionalists because of their belief that a sense of moral values is intuitive and immediate.

This view contrasts with that of the empiricists, who hold that moral values result from human experience, and that of the rationalists, who believe that moral values are determined

by reason.

John Locke (philosopher, 1632-1704), like Spinoza, presented intuition as knowledge. Intuitive knowledge, he wrote, is immediate, leaves no doubt, and is "like sunshine, forces itself immediately to be perceived as soon as ever the mind turns its view that way."

Hubert Dreyfus, a professor of philosophy at the University of California at Berkeley, thinks that "intuition is knowing in some area, almost immediately, what is the appropriate thing to do, without being able to give any rationalization, justification of reasons to yourself or to anybody else as to why you did it" and, relating this directly to AI, "if [the computers] can't recognize patterns, if they can't be intuitive, they can hardly be creative".

Some psychologists and philosophers, like Dr. Marcia Emery, an adjunct professor in the Masters in Management program at Aquinas College in Grand Rapids, Michigan, present intuition as a spark, a lightning flash, and do not accept any possibility of understanding it. But lack of knowledge is not an excuse; it is a reason to learn more.

Plato held that intuition is a superior faculty.

Russell designated as intuitive any unreflective instance of knowledge by acquaintance.

Daniel N. Robinson, philosopher (Oxford University) [34]: Intuition is "an instinctive knowing, or impression that something might be the case, without the use of rational processes".

The Merriam-Webster Dictionary presents intuition as "direct, non-inferential awareness of abstract objects or concrete truth".



Webster's Unabridged Dictionary sums up intuition as "the immediate knowing or learning of something without the conscious use of reasoning; instantaneous apperception." Simply stated, intuition is direct knowledge. Learning is a cognitive, logical, conscious process.

"InQ (intuitive quotient) reflects the ability to go inward, respond to a variety of analytical skills, perceive connections, communicate nontraditionally, and tap into personal and collective wisdom" (Intuition Systems, Orleans, ON).

We often use the words "intuition" and "intuitively", meaning some act of our thought which cannot be decomposed into any constituent elements and, in particular, cannot be further explained or justified. This is the result of activating some nonverbal model of the world implemented in our brain. Sometimes we are very certain about what our intuition tells us; on other occasions we find ourselves in a state of doubt (Principia Cybernetica Web).

There are a lot of people who cannot accept the creative ability of the computer. Artificial intelligence, or the concept of a "thinking machine", is frightening to people. For example, Mr. Howard Rheingold, a writer with unique expertise in science and technology as well as consciousness research, said in 1995: "The human can do things that no machines can do – recognize patterns, think creatively, use intuition.... You simply can't teach a computer to translate from one language to another by putting a dictionary in its memory. You come out with all kinds of strange things. There was a famous case in which the quotation, 'The spirit is willing, but the flesh is weak' was translated into Russian... and then translated back into English, and the English translation was, 'The vodka is agreeable, but the meat has spoiled.'"

This was in the mid-1990s. But in 1999 my computer already gave me the phrase (http://www.translate.ru/Rus/): "The spirit wishes, but the flesh is weak." We can see a major improvement.

Many experts in artificial intelligence present the opposite point of view.

Dr. Marvin Minsky: "We are the greatest machine in the world", and more: "Newell and Simon... discovered a wonderful way to make a machine have a goal", and more: "I think we can explain consciousness the way science explains other things".

Dr. John McCarthy (artificial intelligence laboratories at MIT and Stanford University): "...I don't see that human intelligence is something that humans can never understand." It is certainly reasonable to comprehend intuition as a part of intelligence.


Some philosophers and psychologists present the same point of view. The philosopher U. G. Krishnamurti said that [the brain] "is actually, a computer"... Dr. Steven Pinker (a professor in the Department of Brain and Cognitive Science at MIT, and director of the Cognitive Neuroscience Center at MIT) said that "...starting in the late fifties with Chomsky and other cognitive scientists, who started to try to figure out what the mind's software was – what's in our head..." Psychologist Peter Gray (Boston College) agrees with the view of the brain as a computer: "It is not unreasonable, therefore, to assume that the brain can perform with split-second timing the complex calculations that are required of it by the unconscious-inference theory." The human being can be regarded as an information-seeking, information-processing organism.



If we accept that we are computers, then we have to accept that a computer is able to do what we can. I have not even mentioned here future biological "hardware". A language is a system of signs, and a computer is able to learn a language the way a child learns it, but certainly not by memorization from a dictionary.

The Encyclopedia Britannica describes intuition as "the power of obtaining knowledge that cannot be acquired either by inference or observation, by reason or experience. As such, intuition is thought of as an original, independent source of knowledge, since it is designed to account for just those kinds of knowledge that other sources do not provide."

The negative form of the definition is not very productive for artificial intelligent system design; it leads us nowhere. The question is not whether a machine has intuition or not. The problem is to define the word intuition so as to convert it into a workable process. From the practical point of view we need a positive, constructive approach, even if at the beginning we design a system that realizes only a simple imitation of the process of intuition.

Intuition can produce a passive or an active result. An intuitive feeling (danger, love) is a passive result of intuition activities; a problem solution is the active result. In terms of predicate logic the passive result can be presented as danger(x), and the active result of intuition activities as do(x).

Intuition is an immediate form of knowledge in which the knower is directly acquainted with the object of knowledge. Intuition differs from all forms of mediated knowledge, which generally involve conceptualizing the object of knowledge by means of a rational/analytical thought process.

It is impossible to extract knowledge from nothing. If you have never heard about the stock market or brain surgery, you can never make intuitive decisions in these areas.

Artificial Intuition is a non-intentional extraction of knowledge from the data and information contained in the memory, and the involuntary transfer of this knowledge into a problem-solving activity or perception. It is a kind of Associative Thinking. It is a subconscious, involuntary, unintended process.

"An 'intent' is the directing of an action towards some future goal that is defined and chosen by the actor" [14]. Non-perceived reasoning is an unavoidable part of intuition; non-perceived does not mean non-existing. This definition translates into a workable process that is reasonable and non-contradictory even when applied to the meaning of the word "spark". Such a presentation of intuition may not be the very best, but it is very productive for artificial intelligent system design and for understanding these systems' psychology.

Intuition is not just the search for a similar solution to a problem; sometimes it requires the "design" of a solution as a sophisticated assembly of several elements. In this case we deal with a more complex procedure.

In contrast, research and decision searching are motivated, intentionally organized processes of searching for a solution to a problem. Many philosophers mention the importance of intentionality. For Edmund Husserl (German philosopher) intentionality is "one essential feature of any consciousness". For Jean-Paul Sartre (French philosopher and writer) "intentionality is consciousness".

Spontaneous brain (natural or artificial) activity can be triggered by a non-verbal, fuzzily defined problem that dominates the memory at this particular time. In this case accidental knowledge activates the algorithm of searching for patterns, history, relationships, etc., to find a solution to the problem. The more data and information is stored in the memory, the better the result of the intuitive process. The stronger the connections between information blocks, the stronger the system's creativity. The higher the information diversity, the more efficient intuitive solutions may be. There are two kinds of information: genetic and non-genetic. In artificial systems genetic information is stored in the hardware and partly in the software and contributes to the artificial intuition.

Spontaneous brain (natural or artificial) activity can also be triggered by spontaneous interest of the system in the problem. For example, if the problem presents itself as an environment dangerous to the system's existence, this may result in spontaneous problem formulation. A spontaneous faulty problem formulation may result from the availability of a powerful sensor system: this system collects information about simple, separately non-dangerous events, puts it together independently from the system's viewpoint, looks for patterns, and creates a sense of danger. The process and information are presented in a fuzzy description.

All knowledge about objects and processes has to be presented as models designed from different points of view (structural models, math models, logical models, chemical models, electrical and information models, etc.) (see Knowledge Representation). For example, a human body can be presented in different ways: as a structured model, a chemical model, an information model, a mechanical model, etc. Such methods of knowledge presentation make it possible to easily identify common features in different areas. The structured organization of knowledge in the memory is a very important condition for the effective performance of artificial intuition. In the artificial system we do not have to deal with the problem of how the natural brain attaches meaning to symbolic representation [14].

The existence of the memory makes reasonable both the materialist point of view and the cognitivist point of view as well [14]. In the reconstruction of new knowledge, when any past event or experience is recalled, the act of recollection tends to bring back into use other events and experiences that have become related to this event in one or more specific ways. This is called an "association". Associative memory refers to the ability to recall complete situations from partial information. These systems correlate input data with information stored in memory. Information can be recalled even from incomplete input. Associative memory can detect similarities between new input and the stored pattern [11, 12, and 13]. Therefore, intuition and association should work together. Realization of the associative memory can be done as a Hopfield Neural Network [13] (see ASSOCIATIVE THINKING and APPENDIX 5).
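Such a Hopfield-style associative memory can be sketched in a few lines; the stored patterns below are illustrative, and a synchronous update is used for brevity:

    import numpy as np

    # Two orthogonal +/-1 patterns to store (illustrative data).
    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])

    # Hebbian weights: each stored pattern reinforces its own correlations.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)

    def recall(cue, steps=5):
        """Iterate the update rule until the state settles on a memory."""
        state = cue.copy()
        for _ in range(steps):
            state = np.where(W @ state >= 0, 1, -1)
        return state

    noisy = patterns[0].copy()
    noisy[0] = -noisy[0]        # corrupt one bit of the first pattern
    print(recall(noisy))        # the complete first pattern is recalled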

Let us look at the following simple scenario. At nighttime you left a party with your friends and were going home, thinking about the good time you had. Suddenly you step into a dark street on your way home (the level of darkness may differ; a fuzzy description). Nothing is wrong, yet your body becomes alerted even if you try to calm yourself through reasoning. Intuition vs. reasoning! It is unintentional vs. intentional reasoning. Intuition can win because reasoning is based on the same knowledge! Reasoning can just add some new information and knowledge. As a result, a correction of the sense and behavior can be obtained.

When we meet a stranger, we receive a complex of information about his/her appearance, body language, way of talking, etc. Our brain compares this information with fuzzy or statistical models of a "good" or "bad" person's appearance, behavior, etc., and creates our "fuzzy" impression model of this person. "Good" or "bad" person models are based on our previous experience. This situation was emulated via a computer model (Fig. II-13). The system was able to generate an intuitive impression at the moment of meeting a stranger.

In the simplest case, the calculation of intuition can be presented as a percentage of positive and negative "feelings" about the specific event. Suppose an agent has had a positive experience when meeting another agent (a good man) on three occasions and a single negative experience in one case (a bad man). This creates a ratio of a bad intuitive feeling equal to 25% and a ratio of a good intuitive feeling equal to 75%. Fig. II-13 illustrates the system that generates intuition in accordance with this scenario.

From experience an agent has the knowledge that the most dangerous strangers belong to the age group of 25-35 year-olds. Younger or older persons may also belong to this group with some level of membership M [15] (see APPENDIX 4). In this case the level of danger can be determined as equal to M. For example: the potentially dangerous stranger belongs to the range from 15 years to 45 years old. If the stranger is 40 years old, then M = 0.5 and the index of his danger is equal to 0.5. In our example the negative intuitive feeling will be equal to 12.5%. This is a simplification of the problem, but it demonstrates the procedure.

Unintentional brain activity can include testing procedures. One day I sent an e-mail but forgot to attach the file that I had promised to send my friend. I was sure that I had not made this mistake. In the middle of the night I suddenly woke up and realized that I had not attached the file. My brain was testing my activities stored in the short-term memory against the goal procedure and sent me an error message. It is similar to the automatic virus-testing software that runs when we reboot the computer, without special activation. Certainly, it is just an analogy. This ability to control human activities is a very useful component of Artificial Intelligence. The "Testing" module can perform this test. By the way, a brain replays daytime events during night sleep, transferring short-term memory content into the long-term memory.

The approach described above can be illustrated by another example. Suppose we have an AI system which has extensive working experience in different areas of knowledge and also powerful abilities to learn from an experienced external teacher. Knowledge is represented as models: linguistic, math, logical, structured, etc.

All these models create the hierarchical structure in the knowledge base (see Knowledge Representation). The more abstract the description, the higher the location level. The linguistic description belongs to the higher level; the physical description belongs to the lower level. Suppose we have a control system and would like to reduce the acceleration of the moving parts. The AI system has information (through the sensors) about the problem


and starts looking for a solution to this problem without the interference of the system's intention. Each level represents a new level of goals. Each new goal motivates the next step in the search for the solution.

As we know, intuition can be activated in the sleeping stage, when the brain is working without participation of the human will.

One night I had a dream. I was not the main participant but an observer.

A middle-level manager (Mr. A) of a big company decided to play a joke on the company employee Mr. B. Mr. B had a bad sense of humor and was an unpleasant character. Mr. A sent a message to him that some middle-level managers would like to see Mr. B at their meeting some day. He did not send this message directly to Mr. B; he asked Ms. C (a company employee) to pass it to Mr. B as gossip. Mr. B tried to contact Mr. A, but Mr. A avoided any contact with him. One day Mr. B reached Mr. A and asked him what was going on and what kind of meeting it was supposed to be. Mr. A had two possible ways to respond: first, to apologize for the bad joke and in doing so create an enemy; second, to say that the meeting was canceled and that this was the reason he did not contact Mr. B.

What is interesting to me as a viewer of this scenario?


First, my brain generated the problem. Maybe this problem was stored in my memory as a result of previous activity, because this kind of joke may be common. Second, the fact that my brain constructed two possible alternative solutions to the problem is intriguing. It is also possible that my memory already had this information. Third, my brain correctly chose the second alternative.

As an observer, I had the chance to view the reasoning process over the two possible choices. I cannot recall these particular stages (problem generation, solution alternatives generation, and decision-making) having occurred simultaneously before in my life. My brain was generating this chain of events and reasoning without my intention. The whole process was transparent and structured. I was able to "view" the process as if I were there by myself.


In accordance with the definition of intuition, this scenario is a product of my intuition. If this is correct, then we can emulate the entire process from problem generation to problem solving. This creativity can be triggered unintentionally or intentionally by information that was stored during daytime activities.


In an interview with The New York Times (Nov. 14, 2000), Dr. Terrence J. Sejnowski (a neuroscientist at the Salk Institute in San Diego) said: "There has always been a close connection between sleep and creativity, which may be a byproduct of the way that nature chose to consolidate memories".

The scenarios and approach described above form a point of view that can be used as a foundation of the architectural design of an artificial intelligent system with intuitive ability. The development of such a system is the first step in the process of intuition design. In the beginning we can create just an artificial imitation of natural intuition. Any system architecture which incorporates an intuition capability creates a more powerful decision-making system and is therefore more self-defensive against destruction.


Algorithm (see also ASSOCIATIVE THINKING):

1. Input: new DATA (THE PROCEDURE or THE PROBLEM DESCRIPTION)
2. Find: associated DATA or INFORMATION or THE PROCEDURE or THE PROBLEM SOLUTION
3. Find: the EVALUATION FUNCTION (generated by experience) or PROCEDURE (SOLUTION OF THE PROBLEM) associated with this DATA
4. Evaluate the DATA
5. Rearrange the RESULTS in accordance with the strength of association

This procedure can be used to measure the ability of a system's intuition. It is possible to develop scenarios of different levels of difficulty.


Fig. II-13. Emulation of INTUITION.


HYPOTHESIS GENERATION

A hypothesis is something taken to be true for the purpose of argument or investigation, an assumption [36]. It is a tentative explanation that accounts for a set of facts and can be tested by further investigation; a theory.

In large part this is accomplished using the scientific method: the formation of a hypothesis and the gathering of data to check the hypothesis against. If the data support the hypothesis, consider it provisionally correct; if they contradict it, it must be revised.

Generalization and conceptualization are the main tools of hypothesis generation (see Fig. I-7). Educated guesses (to assume, presume, or assert a fact based on knowledge; a type of associative thinking, classification, and reasoning) are in many cases the tools to search for the hypothesis as well. A computerized educated guess is based on available knowledge.

For example, a math problem in most cases can be solved by mathematical methods. A linguistic problem can be solved by linguistic methods. Social problems can be solved by application of knowledge of the social sciences. In some cases analogies can help to solve the problem. Fig. II-14 illustrates a system that can generate a hypothesis (an equation) for calculating any member of a numeric sequence when the first several members of the set are known. The basic knowledge of the system is the four arithmetical operations.

This is a trial-and-error procedure.


Example:

Sequence:       2        4        7        11       16
Hypothesis 1:            2*2      2*3+1    2*4+3    2*5+6     ?
Hypothesis 2:            2+2      4+3      7+4      11+5      N(i-1) + i
Hypothesis 3:            2+1+1    4+2+1    7+3+1    11+4+1    N(i-1) + (i-1) + 1

Correct hypothesis: N(i) = N(i-1) + i
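The trial-and-error procedure can be sketched as a search over candidate formulas; the candidate set below is an illustrative fragment of what the generator of Fig. II-14 would enumerate:

    # Candidate formulas for the i-th member, given the previous member.
    CANDIDATES = {
        "N(i-1) + i": lambda prev, i: prev + i,
        "N(i-1) + (i-1) + 1": lambda prev, i: prev + (i - 1) + 1,
        "N(i-1) * 2": lambda prev, i: prev * 2,
        "N(i-1) + 2": lambda prev, i: prev + 2,
    }

    def holds(rule, sequence):
        """A hypothesis survives if it reproduces every known member."""
        return all(rule(sequence[i - 1], i + 1) == sequence[i]
                   for i in range(1, len(sequence)))

    known = [2, 4, 7, 11, 16]
    for name, rule in CANDIDATES.items():
        if holds(rule, known):
            print(name, "-> next member:", rule(known[-1], len(known) + 1))
    # Both surviving candidates are forms of N(i-1) + i and predict 22.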


The ability of hypothesis generation is determined by:

the value of available knowledge
the limited number of unsuccessful tries
the complexity of the hypothesis
the time duration of hypothesis generation

For more see also LEARNING and CURIOSITY.


Algorithm:


1. Choose a limited number of events or subjects of the same class or nature
2. Choose the method of description: linguistic, math, symbols


3. Make a classification by different criteria
4. Find a common formula or symbolic representation (Generalization)
5. Check the result on new events or objects of the same class
6. If the result is positive, then HYPOTHESIS
7. If the result is negative, then make a correction
8. Repeat 3, 4, and 5 until the result is positive.


Fig. II-14. Heuristic generator. The first 5 expressions are basic knowledge in math; the rest of the expressions are generated as heuristics for each set of numbers.


This procedure can be used to measure a system's ability of hypothesis generation. It is possible to develop scenarios of different levels of difficulty in different areas of science, such as math, linguistics, the natural sciences, engineering, etc.




LEARNING


Learning Concepts

As was shown before, learning is an essential ability of the intelligent system.

Learning is the act, process, or experience of gaining knowledge or skill through

schooling or study.

When you learn a fact, you learn to think about something in a different way.

When you learn a concept, you learn how to treat different things as instances of the same category.

When you learn a skill, you acquire a program that enables you to do something that you

could not do before.

In general the term learning is the construction of new knowledge or programs from

elements of experience (existing knowledge).

It is impossible to learn anything unless you already have some knowledge, because

knowledge cannot be constructed out of thin air.

Development of the Internet has dramatically changed the ability of an Artificial Intelligent System to learn. Direct communication with the Internet presents knowledge accumulated by society over many generations. The Internet should be redesigned to present knowledge in a ready-to-use form.

Conceptual Learning

Conceptual learning is the development of new concepts that can be applied to the solution of problems. These activities can be triggered by curiosity (see CURIOSITY) or by any type of motivation.

There is nothing new under the Sun. All new knowledge is constructed from new combinations of existing knowledge. It means that the AIS can generate new knowledge using its computational power, new information, and the knowledge in its knowledge base.

A learning system as a whole has a certain degree of computational power. Suppose it has the power of a finite-state machine; then nothing that happens subsequently by way of learning will increase its computational power – for that to occur, it needs to be equipped with a better memory. Such a system may not be able to carry out additions, whereas at a later stage it may have mastered this ability. To demonstrate the point, we assume that at the earlier stage the system is able to observe its own actions. The Agent's personal experience shows:

If you put nothing in your basket then you will get nothing

If you put one object in your basket then you will get one object

If you put one more object you will have one and one objects

This activity can be presented by different symbols:

109


nothing and nothing is nothing or 0 & 0 = 0

nothing and one stick is one stick or 0 & I = I

one stick and one stick are two sticks or I & I = II

one stick and two sticks are three sticks or II & I = III


The "&" is a linguistic symbol and can be replaced by the symbol "+". In this case

0 + 0 = 0

0 + I = I

I + I = II

II + I = III

…………

n + I = (n + I)

………….


Now it is possible to infer the rule of arithmetic "addition". Any numeric code can be chosen: Arabic, Roman, unary, binary, Morse, etc.

It is possible to demonstrate this ability in a different way. We assume that at the earlier stage the system is able to compute only a single mathematical operation, one that delivers the successor or predecessor of any integer:

successor (0) = I or 1

successor (I) = II or 2

successor (II) = III or 3

…………………

successor (n) = n+1

………………..

The symbol "+" does not mean addition yet.

predecessor (I) = 0

predecessor (II) = I or 1

predecessor (III) = II or 2

…………………..

predecessor (n+1) = n

…………………….

Now, based on these agreements, it is possible to infer the arithmetic operation of addition:

0 as predecessor (I) & 1 as predecessor (II) can be replaced by 2 as predecessor (III)

0 as predecessor (I) & 1 as predecessor (II) & 2 as predecessor (III) can be replaced by 3

as predecessor (IIII)

The "&" is the linguistic symbol "AND" and can be replaced by the math symbol "+". In this case

I + II = III or 1 + 2 = 3

If "0" is the beginning of the scale, then


predecessor (0) = 0

and

0 + 0 = 0

0 + 1 = 1

1 + 2 = 3

…………

n + (n+1) = 2n + 1

…………………….


Now "+" is the math symbol of addition.
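
The inference above can be written down directly. A minimal sketch, assuming Python as the notation (the function names are illustrative):

    def successor(n):
        return n + 1                 # the single primitive of the early stage

    def predecessor(n):
        return n - 1 if n > 0 else 0 # "0" is the beginning of the scale

    def add(a, b):
        # Addition inferred from the two primitives: move one unit at a
        # time from b to a until b is exhausted.
        while b > 0:
            a, b = successor(a), predecessor(b)
        return a

    assert add(1, 2) == 3            # I + II = III
    assert add(0, 0) == 0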

This concept can be advanced to the level of multiplication:

If an Agent collects one "I-type" object two times, then, based on the concept of addition, the Agent will get two "I-type" objects.

If an Agent collects two "I-type" objects two times, then, based on the concept of addition, the Agent will get four "I-type" objects.

And so on.


The operation "collect" can be replaced by the symbol "*". These statements can be presented as:

I * 1 = I

I * 2 = 2I

2 I * 2 = 4I

……………

I * n = nI

…………….

The linguistic form is the prime form of each concept presentation. This is true for the whole tower of mathematical science. What is the symbol "Γ"? Only mathematicians know that it is the gamma function, because they have its linguistic description. So, calculation with words is the prime method of executing math operations.

Addition is the foundation of all concepts in the structure of mathematical science. This concept can be developed and executed by a computer in a form designed with special math symbols. Each of these symbols has a conceptual description in linguistic form.

It is possible to activate this process (to attract an Agent's attention) by some kind of motivation (see Curiosity, Learning by Interactions), such as a special shape of objects. The ability to exercise curiosity should be incorporated into the system.

Hence, the system has progressed from an elementary to a more advanced concept; so, it is possible to learn a more complex concept. (The Computer and the Mind: An Introduction to Cognitive Science by Philip N. Johnson-Laird, Harvard University Press, 1988.)


Another example:

5/5 = 1

10/5 = 2



15/5 = 3

20/5 = 4

……….


Conclusion: any integer that has a 0 or a 5 at the end (in the rightmost position) can be divided by 5 without remainder. This is the new concept of divisibility by 5.
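
A minimal sketch of checking this concept against new objects of the same class (step 5 of the hypothesis-generation algorithm); the helper name is an illustrative assumption:

    def ends_in_0_or_5(n):
        return str(abs(n))[-1] in "05"   # look at the rightmost digit

    # the concept holds on new integers of the same class
    for n in [15, 20, 23, 105, 98, 1000]:
        assert ends_in_0_or_5(n) == (n % 5 == 0)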


There is no need to have an advanced language to perform simple calculations. A chimp can recognize the difference between numbers of objects up to four. If you show a chimp that you have four bananas and later give it only two of them, the chimp demonstrates a displeased response.


The Construction Of New Production Rules (see also HYPOTHESIS

GENERATION)

Observation is the main source of information for constructing production rules. Application of reasoning to facts is the main method of converting facts into rules. There are two main types of facts:

1. unconditional

2. conditional


Unconditional facts describe the status of the environment and cannot be used to construct rules. An unconditional fact generates a subconscious definition that can be converted into a conscious definition, into knowledge (calculation by words).

Example of the unconditional fact:

It is a sunrise.

Conditional statements usually include two terms (nouns) connected by an action (verb).

Example of the conditional fact:

A sunrise declares a new day.

Conditional facts can be converted into the rules (calculation by words):

If there is a sunrise then a new day begins.

Conditional facts converted into rules represent knowledge.

The rule can be designed in accordance with the pattern:

IF PRECONDITIONS

AND ACTIONS

AND NO ACTIONS

OR ACTIONS

THEN RESULTS (HYPOTHESIS)



If the addition table contains a fact of the form A + B = C, then one may build a new rule with the condition (GOAL to add A and B) and the action (ANSWER C). The rule is:

If "A" is added to "B", then the result is "C"
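
A minimal sketch of such a rule as a data structure, following the IF/AND/THEN pattern above (the class and field names are assumptions for illustration):

    from dataclasses import dataclass

    @dataclass
    class Rule:
        preconditions: dict   # IF PRECONDITIONS
        actions: list         # AND ACTIONS / OR ACTIONS
        result: str           # THEN RESULTS (HYPOTHESIS)

    def build_addition_rule(a, b, c):
        # From the fact A + B = C: condition (GOAL to add A and B) -> (ANSWER C)
        return Rule(preconditions={"goal": ("add", a, b)},
                    actions=[("answer", c)],
                    result=f'If "{a}" is added to "{b}", then the result is "{c}"')

    print(build_addition_rule(2, 3, 5).result)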

Another example:


Sequence: 2 4 7 11 16
Hypotheses: 2+2     4+3     7+4     11+5
            2+1+1   4+2+1   7+3+1   11+4+1
            2*2     2*3+1   2*4+3   2*5+6
Correct hypothesis: N(i+1) = N(i) + i + 1

The rule is: if there is a sequence of numbers 2, 4, 7, 11, 16, …, then each next member of this sequence is equal to N(i) + i + 1.

Supervised Learning

Supervised learning is based on comparing a learning (Actual) system's output with a known result (Fig. II-15). A feedback module compares the results of both systems' activity and generates commands to adjust the Actual system's parameters (weights).


A MODEL OF SUPERVISED MACHINE LEARNING

Fig. II-15. An Ideal System produces the Correct Output (the Desired Performance); a Task Module with its Knowledge Base produces the Actual Performance from the same Input; a Feedback module compares the two outputs and drives a Learning module that adjusts the Task Module's parameters.



Suppose we would like to teach a Neural Network to execute the logical function OR (see the truth table below). In the very beginning, the values of the weights (W1 and W2) can be set randomly. Gradually these values will be adjusted automatically by the system, in accordance with the table below.

Example of Supervised Learning


Case  X1  X2  Desired Result
1     0   0   0
2     0   1   1 (positive)
3     1   0   1 (positive)
4     1   1   1 (positive)

Delta = Z – Y (desired output minus actual output). The actual output is Y = 1 if W1*X1 + W2*X2 > threshold, otherwise Y = 0.


                  Initial         Final
Step  X1  X2  Z   W1   W2   Y  Delta  W1   W2
1     0   0   0   0.1  0.3  0  0.0    0.1  0.3
      0   1   1   0.1  0.3  0  1.0    0.1  0.5
      1   0   1   0.1  0.5  0  1.0    0.3  0.5
      1   1   1   0.3  0.5  1  0.0    0.3  0.5
2     0   0   0   0.3  0.5  0  0.0    0.3  0.5
      0   1   1   0.3  0.5  0  1.0    0.3  0.7
      1   0   1   0.3  0.7  0  1.0    0.5  0.7
      1   1   1   0.5  0.7  1  0.0    0.5  0.7
3     0   0   0   0.5  0.7  0  0.0    0.5  0.7
      0   1   1   0.5  0.7  1  0.0    0.5  0.7
      1   0   1   0.5  0.7  0  1.0    0.7  0.7
      1   1   1   0.7  0.7  1  0.0    0.7  0.7
4     0   0   0   0.7  0.7  0  0.0    0.7  0.7
      0   1   1   0.7  0.7  1  0.0    0.7  0.7
      1   0   1   0.7  0.7  1  0.0    0.7  0.7
      1   1   1   0.7  0.7  1  0.0    0.7  0.7


Parameters: alpha = 0.2 (learning rate); threshold = 0.5
Updated weights: Wi(final) = Wi(initial) + alpha * delta * Xi
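
A minimal sketch that reproduces the table above (a single neuron with the parameters from the text; the loop bounds are the four training steps shown):

    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # (X1, X2), Z
    w = [0.1, 0.3]                   # initial weights W1, W2
    alpha, threshold = 0.2, 0.5

    for step in range(1, 5):
        for (x1, x2), z in cases:
            y = 1 if w[0] * x1 + w[1] * x2 > threshold else 0  # actual output
            delta = z - y                                      # desired - actual
            w[0] += alpha * delta * x1   # Wi(final) = Wi(initial) + alpha*delta*Xi
            w[1] += alpha * delta * x2
        print(step, w)   # step 4 ends with weights [0.7, 0.7], as in the table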


For the calculation of weight updates in a multi-layer Neural Net, see [72] and APPENDIX 5.


114


Arpsychology and structured design of artificial intelligent systems

Algorithm (Neuron Learning)
1. Set the parameters randomly
2. Set the learning rate and threshold
3. Calculate the difference between the desired and actual output
4. Define the direction of change
5. Calculate the value of the change
6. Make the change
7. Continue until the desired and actual outputs are equal


Fig. II-16. Neural Net as a Supervised Learning Machine (Mr. Yurchenco) (see APPENDIX 5)




Learning by Instructions

It is the first method that natural and artificial intelligent systems use to learn.

Learning by instruction is based on two functions:

1. Acceptance of new knowledge (rules)

2. Interpretation of these rules for execution.

All knowledge resides in the Application Knowledge Base (see Fig. II-5). Expert Systems technology is the simplest example of this method's implementation.
Learning by Experience

The hypothesis generation system can memorize a well-checked hypothesis. The next time the same situation is presented, it will generate a result directly, without generating a hypothesis. Generation and memorization of the world model represent learning by experience.

Learning by Imitation

Imitation is something derived or copied from an original [36]. Based on observations of how 3- and 4-year-old children learn, Dr. Horner and Dr. Whiten described their results as evidence that humans are hard-wired to learn by imitation, even when that is clearly not the best way to learn. The study was published in the July 2005 issue of the journal Animal Cognition by Victoria Horner and Andrew Whiten, two psychologists at the University of St. Andrews in Scotland (http://www.nytimes.com/pages/science/index.html). They found that a child will move toward the goal through unnecessary steps if an instructor includes them in the procedure. The chimp, on the other hand, went straight for the goal, avoiding unnecessary steps. If these psychologists are right, this represents a big evolutionary change from our ape ancestors. Other primates are bad at imitation. When they watch another primate doing something, they seem to focus on what its goals are and ignore its actions. An adult person is better prepared to activate a goal-driven problem-solving procedure.

In contrast, an AI system better executes a process that is described as a sequence of specific steps. It is easy to design a system with a capability for imitation. It makes sense to have both abilities, with criteria for choosing between them. In this case the local control systems perform the actions.

A car navigator with the ability to learn a driver's driving patterns is an example of learning by imitation.

Curiosity, Learning by Interactions

Curiosity is the motivation to learn about the unknown: arousing interest because of novelty or strangeness (a curious fact); to detect an unfamiliar object, label it, and then learn about it. It is an active type of learning; observation is a passive type of learning.




Curiosity can be triggered by associating the result of an observation with an area of interest. The area of interest can be generated by activation of specific knowledge. This knowledge can be tagged or transferred into a specific area of memory. Each new piece of information is checked against this knowledge. A "brain" generates associations between new and existing information (see ASSOCIATIVE THINKING). For example:

Suppose an Agent is interested in the psychology of artificial intelligent systems. He observes the behavior of a system that did not receive a substantial set of knowledge, and he realizes that the system demonstrates childish behavior. Childish behavior connected to psychology (the activated area of interest) develops the term Child Psychology and generates a connection, by association, to an area of knowledge that the Agent had missed before.

In order to make learning by interaction efficient, it is important to develop the environment, to present specific objects for interaction. A set of objects of different sizes (rings, empty boxes) can help to learn the concept "bigger – smaller". A special set of objects and their relative locations can help to learn the concepts "more – less", "in front – behind", and so on. See Conceptual Learning.

A Baby Agent drops a ball, evaluates the result, and formulates the rule:

If you drop ball "A", then you will get result "B".

The Playground Experiment demonstrates curiosity-driven learning on an autonomous four-legged AIBO robot platform (Fig. II-17). The robot is equipped with basic motor primitives (control of its head direction, arm and mouth movements), which are controlled by a generic developmental engine (called Intelligent Adaptive Curiosity).

Sony robot dogs are programmed with software that simulates "curiosity" and are placed on

a baby's activity mat where different objects can be bitten, bashed, or just visually detected.

With this engine, the robot actively chooses its actions and its learning situations. The engine can measure the effects of the actions taken on the environment through a camera,

IR sensors, touch sensors and motion feedback.

The Playground Experiment uses AIBO to study how infants develop increasingly complex sensorimotor and communication skills. By trying different motor primitives, which it can modulate, AIBO progressively discovers that some objects are easier to interact with than others. As the robot learns to master particular sensory-motor trajectories, its behavior becomes more complex. It behaves like a baby. The developmental engine is composed of:
of:

1. prediction systems that learn the effects of actions in a particular context; these prediction systems are a set of experts specialized in particular areas of the sensory-motor space;

2. meta-prediction systems that learn to predict the error of the prediction systems (1) and its evolution over time. In particular, these meta-prediction systems can compute an expected error reduction corresponding to a given action. This is done by comparing the error rate of the new expected sensory-motor context to the error rate in similar sensory-motor contexts from the past, as opposed to the error rate in the most recent sensory-motor context;

3. an action selection module that chooses actions with the maximum expected error reduction (as computed by (2)).

As a consequence, this system produces action sequences that are expected to maximize learning progress.


Such a self-motivated robot focuses on tasks that are neither too predictable nor too difficult to predict. It looks for "progress niches": sensory-motor situations that are optimal for learning given its embodiment (physical body and learning algorithms), the structure of its environment, and its current stage of development. This ensures a continuous development of increasingly complex behaviors.
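
A minimal sketch of the action-selection idea in item (3) above: pick the action whose prediction error has been decreasing fastest, i.e., with maximum expected learning progress. The window size, the action set, and the random stand-in for a real prediction system are assumptions:

    import random
    from collections import defaultdict

    errors = defaultdict(list)        # per-action history of prediction errors

    def learning_progress(action, window=5):
        # Meta-prediction: recent drop in this action's prediction error.
        h = errors[action]
        if len(h) < 2 * window:
            return float("inf")       # unexplored actions look maximally promising
        older = sum(h[-2 * window:-window]) / window
        recent = sum(h[-window:]) / window
        return older - recent         # expected error reduction

    def select_action(actions):
        return max(actions, key=learning_progress)

    actions = ["bite", "bash", "look"]
    for step in range(60):
        a = select_action(actions)
        observed_error = random.random()  # stand-in for a real prediction system
        errors[a].append(observed_error)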


Algorithm (Learning by Interaction):


1. Interaction with environment

2. Perceive information

3. Evaluate result

4. Design the rules

5. Place these rules into the World Model

6. Learn interaction limits


The rule can be designed in accordance with the pattern shown before:


IF PRECONDITIONS

AND ACTIONS

AND NO ACTIONS

OR ACTIONS

THEN RESULTS (HYPOTHESIS)


A result can include information presented as sound, smell, visual representation, etc. A hypothesis can be investigated by randomly changing the combinations of preconditions and actions (all or some of them). Investigation of all possible combinations tests the strength of the hypothesis and can make it a TRUE rule. See also HYPOTHESIS GENERATION.


The names of preconditions, actions, and results should be present in the knowledge base. Identification of unknown events (results) can be done through associative thinking or other methods of reasoning.


Learning through interaction with the environment (self-learning) is important for adjusting behavior for adaptation. It is, however, an inefficient learning method: in order to get the whole set of knowledge, an Agent would have to repeat the whole history of mankind. Education, i.e., communication (verbal, visual, written) with a teacher, is the most efficient method of learning.



Another example (Fig. II-18) of interaction with the environment is learning how to walk. In the very beginning, the robot from the University of New Hampshire behaves like a newborn baby. It cannot walk. Gradually it interacts with the environment and learns how to keep its balance and how to walk. In the human brain this function is the responsibility of the cerebellum. Feedback deficits result in disorders of fine movement, equilibrium, posture, and motor learning. Initial observations by physiologists during the 18th century indicated that patients with cerebellar damage show problems with motor coordination and movement.


Fig. II-17. Learning through interaction with environment.

(http://playground.csl.sony.fr/en/page2.xml)



Fig. II-18. Learning how to walk.

PLANNING


A plan is a scheme, program, or method worked out beforehand for the accomplishment of an objective.

The abilities to structure and classify the sub-objectives (subgoals) of an objective (main goal) are the tools for developing a plan. Problem-solving methods are the main tools for developing the planning algorithm.

Planning algorithms use descriptions in a formal language, usually first-order logic.

States and goals are represented by a set of sentences.

Actions are represented by logical descriptions and effects.

The planner makes a direct connection between states and actions.

The planner is free to add an action to the plan wherever it is needed, rather than in an

incremental sequence starting at the initial state. [27]

Most parts of the world are independent of most other parts. It is possible to design a plan as independent subplans.

The quality of the planning algorithm is determined by the capability to decompose the goal into subgoals, to generate plans for each subgoal, and to define the time of execution of each part of the process. Parallel execution of the independent parts of a procedure is an important strategy of plan execution. Planning algorithms must take into consideration all constraints and resources. Each plan can be evaluated by the vector [execution time, resources, cost]. Each level of resolution has a specific planning horizon.
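
A minimal sketch of comparing alternative plans by the vector [execution time, resources, cost]; the two candidate plans and the weighted-sum comparison are illustrative assumptions, not a prescribed method:

    plans = {
        "sequential": (10.0, 2.0, 5.0),  # (execution time, resources, cost)
        "parallel":   (4.0, 6.0, 7.0),   # faster, but consumes more resources
    }
    weights = (0.5, 0.2, 0.3)            # relative importance of the criteria

    def score(vector):                   # lower is better
        return sum(w * x for w, x in zip(weights, vector))

    best = min(plans, key=lambda name: score(plans[name]))
    print(best, score(plans[best]))      # "parallel" wins under these weights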




Sometimes the goal or the execution of a plan is not strongly defined; this leads to the execution of some actions (moving from one location to another without generating the path) without a strong understanding of the goal (destination). In this case the system generates the needed information based on previous knowledge, or the brain generates the execution plan in accordance with statistics or in keeping with the last active plan (the last path). The effectiveness of plan execution depends on the personal ability of an agent to follow the plan procedure.

PROBLEM-SOLVING

A PROBLEM is a collection of information that the agent will use to decide what to do.

All intelligent abilities such as reasoning, learning, generalization, hypothesis generation, etc.

are the tools of problem solving.

Well-Defined Problems and Solution [51]

1. The INITIAL STATE is the state that the agent knows itself to be in.

2. The OPERATOR is used to denote the description of an action in terms of which state will

be reached by carrying out an action in particular state.

3. The STATE SPACE of the problem is the set of all states reachable from the initial state

by any sequence of actions.

4. A PATH in the state space is simply any sequence of actions leading from one state to

another.

5. The GOAL TEST, which the agent can apply to a single-state description to determine if it

is a goal state.

6. A PATH COST FUNCTION is a function that assigns a cost to a path.

7. The OUTPUT of a search algorithm is a SOLUTION.

This evaluation should be done for the available algorithms (greedy search, straight-line algorithm, Prim's algorithm, Kruskal's algorithm, learning decision trees, etc.).

The best-known strategy is presented as the Search Tree.

Measuring of Capability of Problem-Solving

1. Does it find a solution?

2. Does the solution meet the goal?

3. What is the search cost associated with the time and memory required to find a solution?


DATA STRUCTURE FOR SEARCH TREE


A node is a data structure with 5 components:

1. The state in the state space to which the node corresponds.

2. The node in the search tree that generated this node (this is called the parent node).

3. The operator that was applied to generate this node.

4. The number of nodes on the path from the root to this node (the depth of the node).



5. The path cost of the path from the initial state to the node.


GENERAL-SEARCH (problem, strategy) Algorithm:
1. Initialize the search tree using the initial state of the problem
2. loop do
   if there are no candidates for expansion then return failure
3. choose a leaf node for expansion according to the strategy
   if the node contains a goal state then return the corresponding solution
   else expand the node and add the resulting nodes to the search tree
4. end
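
A minimal sketch of GENERAL-SEARCH using the five-component node structure above. The strategy is passed in as a function that picks which leaf node to expand; the problem object with initial_state / is_goal / successors callbacks is an assumption for illustration:

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any                # 1. the state this node corresponds to
        parent: Optional["Node"]  # 2. the node that generated this node
        operator: Any             # 3. the operator applied to generate it
        depth: int                # 4. number of nodes from the root
        path_cost: float          # 5. cost of the path from the initial state

    def general_search(problem, strategy):
        fringe = [Node(problem.initial_state(), None, None, 0, 0.0)]
        while fringe:                            # candidates for expansion
            node = strategy(fringe)              # choose a leaf node
            fringe.remove(node)
            if problem.is_goal(node.state):
                return node                      # corresponding solution
            for op, state, cost in problem.successors(node.state):
                fringe.append(Node(state, node, op,
                                   node.depth + 1, node.path_cost + cost))
        return None                              # no candidates: failure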

The Search Tree presents several possible strategies (in the Greedy Search group of strategies) for reaching the Goal from the Start point.

The "A" strategy (Best-First Search algorithm) is based on movement from the Start point to the Goal through all nodes, with sequential evaluation of each step by an evaluation function (path cost function). The minimal value of the evaluation function shows the best strategy.

The "A*" strategy (Straight-Line Strategy) is based on "mental" movement in the opposite direction, from the Goal state to the Start state, with sequential evaluation of the straight-line distance between the Goal state and the Start state. The minimal value of the evaluation function shows the best strategy. Transparent walls permit the agent to get information about the location of the Goal.

Fig. II-19 presents a comparison of the two different strategies, A and A*.


Fig. II-19. Comparison of the A and A* strategies: an agent (robot) moves from Start to Goal through a maze with walls, transparent walls, and non-transparent walls.



Multivariable Problems

Some variables of multivariable problems contradict each other. For example: for a better product (which is good) we have to pay more (which is bad). In this case the decision-making process is based on compromise [40] (see APPENDIX 3).

Lack of Statistics in Decision-making

The probability of a decision's result is an important tool in the decision-making process. A lack of statistical information makes the calculation of probability unreliable. Student's t-distribution (see APPENDIX 15) is a probability distribution that arises in the problem of estimating the mean of a normally distributed population when the sample size is small. Unfortunately, this approach is not transparent to a degree understandable to the person responsible for decision making, and it needs intuitive setting of some parameters.


Fig II-21. Parabola

This can be corrected by calculating the level of trust in the result. The level of trust (T) is a non-linear function of the number of event occurrences: the more occurrences, the higher the trust level. This function has a parabolic character:


T = [P/(P+N)]^n, for R > (P+N)
T = 1, for R ≤ (P+N)
n = (P+N)/P
0 ≤ T ≤ 1,


where P is a number of positive occurrences,

N is a number of negative occurrences,

R is the representative number of occurrences,


Probability of events for [R ≤ (P+N)] is equal to


p = P/(P+N).


In the case of a lack of statistical information it is better to rely on the possibility than on the probability of events. The possibility (pos) of events for [R > (P+N)] can be calculated as




pos = T * [P/(P+N)],

or

pos = [P/(P+N)]^(n+1),

i.e.

pos = p^(n+1)
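
A minimal sketch of this calculation; P, N, and the representative number R are supplied by the application, and the example values are illustrative:

    def possibility(P, N, R):
        p = P / (P + N)            # raw frequency of positive occurrences
        if P + N >= R:             # enough statistics: T = 1, use probability
            return p
        n = (P + N) / P
        return p ** (n + 1)        # pos = T * p with trust level T = p**n

    print(possibility(P=3, N=1, R=20))    # sparse data: possibility < p = 0.75
    print(possibility(P=30, N=10, R=20))  # representative data: p = 0.75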

See also SOCIAL BEHAVIOR. Independent Behavior.

In the human brain, the frontal lobe is involved in problem solving. It controls the so-called executive functions: the ability to recognize future consequences resulting from current actions, to choose between good and bad actions (or better and best), to override and suppress unacceptable social responses, and to determine similarities and differences between things or events.

PERSONALITY OF THE ARTIFICIAL SYSTEM (artificial person)

“Personality is a set of distinctive and characteristic patterns of thought, emotion, and

behavior that define an individual’s personal style of interacting with his/her/its

physical and social environment” [18].

Personality of an artificial system is the totality of qualities and traits, such as the character of behavior, which is peculiar to a specific artificial system (“person”).

An Artificial Personality is the composite of characteristics that make up the individuality of the system; the self.

The personality of a natural system is determined by its genetic code and depends on the strength of internal secretions (chemicals and hormones). The hardware and the software

determine the personality of an artificial system.

It is reasonable to develop a row of identities based on sets of psychological characteristics in accordance with different areas of application.

The most suitable method of artificial-system personality analysis is the behavioristic approach, which emphasizes the importance of environmental or situational determinants of behavior. The psychoanalytic and phenomenological approaches are more suitable for human personality analysis. There are different opinions about the number of traits that determine personality: the British psychologist Hans Eysenck arrived at 32, Cattell arrived at 16. McCrae and Costa designed a table listing five factors by six traits [18].

For an artificial person it is reasonable to take into consideration seven known factors, each a pair of opposite traits:

Optimistic-Pessimistic are functions of the ratio of positive-negative sensations and

experiences.

124


Active-Passive is determined by the level of stimuli that generates a response: rewards, curiosity, etc.

Peaceful (friendly)-Aggressive (unfriendly). Aggression is the next topic of discussion (see

―Aggression‖).

Reliable (dependable)-Moody (undependable) (see ―Social Behavior‖)

Cool (calm)-Anxious

Sociable-Unsociable (can work as a team member or not)

Careful-Careless (unguarded)

Higher flexibility in self-reconfiguration (for example, reconfiguration of connections between neurons in an artificial neural net) can create conditions of abnormal or unknown development, as in natural systems.

The environment has a strong influence on personality development through moral and immoral influences (regulated by law). These limitations should be applied to an artificial system operating in a specific social environment.

Changing the values of the weight function's negative and positive variables can change the personality, for example, from optimistic to pessimistic and back. Repetition of positive or negative results of actions can change the personality from optimistic to pessimistic and back. The same method can be used to change other characteristics.
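
A minimal sketch of the Optimistic-Pessimistic trait as a weighted ratio of positive to negative sensations; the function name, the 0.5 neutral point, and the weights are assumptions for illustration:

    def optimism(pos_count, neg_count, w_pos=1.0, w_neg=1.0):
        # Trait value in [0, 1]: above 0.5 optimistic, below pessimistic.
        p, n = w_pos * pos_count, w_neg * neg_count
        return p / (p + n) if (p + n) > 0 else 0.5

    print(optimism(30, 10))             # 0.75: optimistic
    print(optimism(30, 10, w_neg=3.0))  # 0.5: raising the negative weight
                                        # shifts the personality back to neutral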

AGGRESSION

Aggression is the intent to injure another person (physically or verbally) or to destroy property. In the human brain, the amygdala is involved in aggression, jealousy, and fear.

Military combat systems are aggressive. If a military assault system does not have an aggressive personality, it can be taught knowledge about targets and actions via a genetic code (natural or artificial).

Aggression directed at friendly objects can be neutralized by activating "friendliness" as an opposite personality toward the target. In this case knowledge can be presented as "rules". For example: if the target is "A-type", then destroy it. Communication, as will be shown below, goes through a knowledge base. A source of knowledge is an expert in the area of combat. If a military assault system is aggressive, it should not be taught about the target and the actions against this target. In this case the operator can present just the data: targets "A", "B", and "C".

It is easy to communicate with an artificial system if it demonstrates a specific personality. In this case the system does not need to learn specific knowledge; it needs just the information (data). Describing objects in the database as friendly objects can set personal behavior.

It is natural to use a "friendly" system in the human environment.



Optimistic-Pessimistic factors are important in systems that deal with uncertainty: systems involved in foreseeing results and events, or forecasting assaults in military applications. They may be useful in some business and management applications. The same approach can be applied to the choice of personality factors for specific areas of a system's application.

The combination of factors providing the AIS with the capability of automatic adjustment to specific environment characteristics (autonomy) is a very important ability of the system. A special control module can perform this adjustment. If the system is designed as a neural net, changing the values of the weights, threshold, and transfer function can perform this adjustment. Transformation from one trait into another can occur gradually or instantly, depending upon the conditions of application. Transformation of functions should be instantaneous.

Algorithm:
1. Recognize the object (see OBJECT RECOGNITION).
2. Define it as "friendly" or "target" (the description is in the knowledge base).
3. If "target", then destroy it; otherwise, do not.
4. Change the object.
5. Repeat 1-4.

EMOTIONS

Emotions are the most typical of all human features. They are complex phenomena, and the term has no single universally accepted definition. Psychologists have generally agreed that emotions entail, to varying degrees, awareness of one's environment or situation, bodily reactions, and approach or withdrawal behavior.

Although it is a widespread word, it is not so easy to come up with a generally acceptable definition of emotion. A growing consensus does agree that the distinction between emotion and feeling is important.

Feeling is a physical sensation and an affective state of consciousness, such as that resulting from emotions, sentiments, or desires. Feeling can be seen as emotion that is filtered through the cognitive brain centers [36], specifically the frontal lobe, producing a physiological change in addition to the psycho-physiological change. Feeling and sensation are synonyms. Sensing is a part of sensation. Sensation is a perception associated with stimulation of a sense organ (sensing system) or with a specific body condition: the sensation of heat; a visual sensation; a sensation of interest or loneliness; the faculty to feel or perceive; physical sensibility: "The patient has very little sensation left in the right leg" [36]. So, feeling is the combination of physical sensations coming from the outer and inner sensing systems and the development of perception based on information from the different subsystems: sensing and emotion development. Demonstration of feeling is a communication process that accompanies feeling.



Sensation (feeling) includes:
1. sensing (information collection) from
   - sensors
   - the body's parts that are involved in the development of emotions
2. perception

Based on discoveries made through neural mapping of the limbic system, the neurobiological

explanation of human emotion is that emotion is a pleasant or unpleasant mental state

organized in the limbic system of the mammalian brain. Specifically, these states are manifestations of non-verbally expressed feelings of agreement, anger, certainty, control,

disagreement, disgust, disliking, embarrassment, fear, guilt, happiness, hate, interest, liking,

love, sadness, shame, surprise, and uncertainty.

For a machine to "have" emotion means there is a mechanism for it to decide which emotion state the machine should be in, one that also influences the behavior of the system afterward. This may be useful in human-computer interactions with mechanisms such as robots and virtual reality systems. Expressing emotional information in such systems can enhance the naturalness of the system. For emotion recognition and understanding, see OBJECT RECOGNITION, UNDERSTANDING, and INTERPRETATION.

By definition [36], emotions are:

"1. An intense mental state that arises subjectively rather than through conscious effort and is often accompanied by physiological changes; a strong feeling.
2. A state of mental agitation or disturbance.
3. The part of the consciousness that involves feeling; sensibility."


First of all, the third part of the definition contradicts the first one: is this through conscious effort or not? Second, this definition does not show the main substance of the phenomenon. "An intense mental state that arises subjectively" is also part of other mental processes, such as intuition. If it is a conscious and at the same time an unconscious process, then it is a synonym of feeling.


Emotion is an automatic response or reaction to selective signals. It consists of two parts: sensational processes (the real emotional processes) and the demonstration of emotions (informational, communicational processes). Emotions are cognitive sensational information processes that accompany the combinations of chemical, electrical, mechanical, informational, and other physical processes in the inner and outer parts of a body, triggered by the sensors, and that mobilize system resources. Feeling triggers emotions.

Facial expression, body language, and verbal presentation ("I am in a bad mood") are ways of communication. It is easy to observe an animal's body language as well as a human's. A cat sends a clear signal about its readiness to attack.


Artificial Intelligent Systems developers cannot wait for the final conclusions of human psychologists to define the term "emotions". It is important to define this term in order to proceed with the development of Artificial Intelligent Systems. In any case, human knowledge is not absolute.




There are personal or cultural sets of related pairs of internal and external "signal-reactions" that can trigger emotions. For one agent some signals do not trigger emotions, but the same signals can trigger emotions in another agent. It is a kind of conditional reflex (see REFLEXES). A reflex is an automatic response or reaction.

Some "signal-reaction" pairs can be members both of the "reflex family" and of the "emotion family" (fear, pleasure, joy, etc.).

Emotions accelerate the reaction of the system to dangerous events and threats and mobilize system abilities. Emotions tell the system what is good and what is bad around it and help the system stay alive. Emotions help to plan actions (anger helps to remove obstacles).

Artificial Intelligent systems can be members of international teams. These teams include people with different cultural backgrounds. "We have to emphasize communication skills, the ability to work in teams and with people from different cultures," says former Lockheed Martin CEO Norman Augustine. It means "the ability to work in teams and with people from different cultures is as important as IQ for success in today's workplace" [42].

Desires to get a moral or material reward (motivation, stimuli) generate emotions. A separate branch of inquiry into emotion has led some researchers to theorize that emotions are no more than strong motivational or drive states.
more than strong motivational or drive state.

Some emotions, under specific circumstances, can be controlled, but some cannot. Information about emotions can be controlled as well.

In many cases emotions and reflexes are bound together. Emotions, like conditional reflexes, are an unintentional, subconscious process. This creates difficulties in developing a clear definition.

Emotions of intelligent systems are the combination of processes of mobilization of the system's mental and physical resources to execute dynamic adjustment to the real world: to withstand bad (dangerous) events or to increase the efficiency of good events; to develop self-diagnostics of the system's module problems; to communicate with other agents through body language. Not all emotions (some internal disturbances) generate a communication signal.

In the human brain each type of emotion has a specific physical location. It is reasonable to repeat the same design in the Artificial Intelligent System. The amygdala is responsible for the control of emotions. It receives the signal from the sensor system and sends the response signal directly to the body (actuators) to prepare the body to respond. This response is a reaction of an organism, or a mechanism, to a specific stimulus, and in some cases it can create resonance to these signals as excitement. At the same time it sends the signal to the frontal cortex, which is responsible for reasoning. The body sends a signal back to the frontal cortex as feedback. The signal to the actuators is a control signal; it arises unintentionally. It does not involve the process of reasoning. It is an unconscious process. The process of reasoning recognizes the situation (for example, the type of danger) and defines the type of reaction (Fig. II-23). It is an unintentional process of reasoning, a subconscious process based on the system's experience.




Examples: the sound of metal moving on a glass surface, music, and so on. Humans "feel" this information with the spinal cord. This method is distinct from perception (recognition and interpretation of sensory stimuli based chiefly on memory). It is possible to create artificial information tracks with the same ability. Repetition of the same signal can activate memory and excitement. In this case, the process involves the subconscious. The so-called mirror cells in the human brain can respond to a signal presenting the behavior of another human being with corresponding actions and a prediction of the results of those actions. An artificial system can demonstrate the same ability if similar actions are saved in memory and can be activated by visual or other signals.


It is very important to keep a balance between the incoming signal and the emotional reaction. Overreaction can decrease the intellectual and mobilization ability of the system. An oversensitive amygdala (natural or artificial) can be the cause of (natural or artificial) health problems.

Control of a system's status can be executed not just by emotions. The system's goal can trigger a control process intentionally as well and send the control signal to the actuators. That is an intentional, conscious process. It is executed under the control of the free will (the goal-driving process) of the intelligent system.

It is difficult to fake human emotions. Research shows that it is practically impossible to fake a smile: a fake smile can be recognized under close scrutiny. The level of some emotions, from zero to maximum, depends on culture and education. An artificial system's emotions can be faked easily. A human being has emotions but at the same time can emulate, or fake, them (professionally or non-professionally) without any real feeling. The famous Russian theatre director Stanislavski taught actors to feel emotions; otherwise they would not be able to play their parts naturally. It is a different result to live the life of the person the artist represents than merely to represent the life of this person. This is true for a drama artist. A singer (in opera) plays emotions by voice technique and needs full attention to this technique; the real feeling and body language are imitations (Anna Netrebko, a famous opera artist, coloratura soprano), similar to artificial systems.

True human emotions are triggered by external or internal signals to the control system (the brain or the peripheral nervous system). The control system activates chemical, mechanical, and electrical processes. Different emotions include different specific processes: a rise in blood pressure and adrenaline, rapid heartbeat, breathing, etc. People with a severed spinal cord in the lumbar region do not feel pain in the lower parts of their body and cannot develop emotions.

Why do we laugh? What function does laughter have? Laughter is one of the most poorly

understood of human behaviors. While we know, for example, that certain parts of the brain

are responsible for certain functions and tasks, it seems that laughter cannot be traced to one

specific area of the brain. Furthermore, the relationships between laughter and humor, or

even laughter and mirth are not understood, despite their evident interconnection. The

medulla directly controls many involuntary muscular and glandular activities, including

breathing, heart contraction, artery dilation, salivation, vomiting, and probably laughing.



Some clues to the physiological basis of laughter have come from people who have suffered brain injuries, strokes, or neurological diseases. Three years ago, at the age of 48, C.B. suffered a stroke. Fortunately, he recovered quite well and was expected to return to his normal life. However, since the stroke, C.B. and those around him have been perplexed by certain changes in his behavior. Though he seems healthy and does not suffer any pain, occasionally, without any noticeable reason, he bursts out into uncontrollable, wild laughter. In other cases, out of the blue, he is swept into tears in a similar attack. The pleasant feelings (happiness, amusement, joy, or the memory of a past joke) that usually accompany laughter are absent. For artificial systems this facial information can be a signal for self-diagnostics.

Some feelings that reflect local sensations, such as local pain, are controlled by the local control systems (similar to unconditional reflexes; see also REFLEXES). They are non-intelligent processes (see also CONSCIOUS, UNCONSCIOUS, AND SUBCONSCIOUS PROCESSES), but they can trigger a body language similar to the body language of emotions.

Does a machine have emotions, or does it just emulate them? If you would like the answer to be "yes", you have to design it! If you would like to have a car with the ability to move, you have to properly design the tires and other parts. An artificial system has a brain (a computer). It is possible to add a full mechanism of emotions, with a set of inner local sensors and local actuators, to create an artificial system with feeling and response (emotion). A human being with a transplanted heart (biological by nature but artificial by implementation) experiences all types of emotions. My pacemaker (an artificial part of my heart's control system) participates in my emotional process; it follows the signal to increase my pulse to mobilize my body to withstand obstacles. New experiments with the artificial heart do not kill emotions.

And now there is another more important question: Does an Artificial Intelligent system need

emotions?

As soon as we define emotion as the tool of the system's resource mobilization, the answer is "YES"! Almost every complex engineering (artificial) system has a mechanism for mobilizing resources (speed, power, force, and so on). Changing lanes in congested traffic, or beating the fare in a subway station, needs mobilization of the car's or the body's resources. In the first case it is the driver's responsibility; in the second case it is the responsibility of the violator. An Autonomous Artificial Intelligent System can face a situation when it needs mobilization of resources. That is emotions! Emotions are a very important part of communication between a human being and an artificial agent as a member of a mixed team.

Control Theory presents strong methods of mobilizing an actuator's resources (forcing). For Artificial Intelligent Systems there are no strong methods of mobilizing computational resources. It is possible to create a flexible structure with the ability to increase computational power by varying the number of parallel computational branches.

Facial expressions and body movements can indicate emotions in artificial systems as well as in natural ones. Researchers in the Humanoid Robotics Group at the Massachusetts Institute of Technology have created the robot Kismet (Fig. II-24), which demonstrates (imitates) emotions.

The intensity of emotions should be determined for each individual system. Some people think that an artificial system cannot feel but can react emotionally to an arousing situation.


Fig. II-22. The structure of the emotion system: a SYSTEM OF EMOTIONS connected to a MODULE OF REASONING and a MODULE OF CONTROL.


Fig. II-23. The control signal from the amygdala goes down (right leg); the feedback signal goes up (left leg) to the frontal lobe.


If we accept the definition of "feel" as "to perceive through the sense of touch, to perceive as a physical sensation" [36], that is, as a reaction to sensor signals, then we have accepted the capability of an artificial system to feel. This can be expressed as "Primary Emotions-Appraisals": Grief (Sorrow)-Loss, Fear-Threat, Anger-Obstacle, Joy-Potential Mate, Trust-Group Member, Disgust-Gruesome Object, Anticipation-New Information, and Surprise-Sudden Novel Object.
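
A minimal sketch of this Primary Emotions-Appraisals pairing as a lookup from an appraised situation to an emotion state; the dictionary form and the default "calm" state are assumptions for illustration:

    APPRAISAL_TO_EMOTION = {
        "loss": "grief", "threat": "fear", "obstacle": "anger",
        "potential mate": "joy", "group member": "trust",
        "gruesome object": "disgust", "new information": "anticipation",
        "sudden novel object": "surprise",
    }

    def emotion_state(appraisal):
        return APPRAISAL_TO_EMOTION.get(appraisal, "calm")  # default assumed

    print(emotion_state("obstacle"))   # anger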

Some of an artificial system's emotions are based on cognitive processes, involving patterns of emotional reaction stored in memory. Kevin Warwick, professor of cybernetics at Reading University, cites Sony's AIBO as the closest man has come to creating a sentient artificial being. According to Warwick's research, robots with brain processing capabilities are no more intelligent than snails or bees, with basic behavior patterns and the ability to map out simple environments. "Humans have human emotions and robots have robot emotions. As soon as you allow robots to learn, you are opening up the possibility that they could develop their own emotions." Warwick believes that in ten to twenty years, humanoid robots will complicate the moral dilemma further: "In this timeframe, robots in the home will not be an equal, but they will be given more of a status." He believes that in 20 years' time robots will have an intellect on par with humans, which could reverse the issue into whether or not robots will be willing to let humans into their homes!


Strong emotions that follow events increase the capability to memorize these events. Emotions should trigger the generation of associative connections between events and emotions (see CREATIVITY).
emotions (see CREATIVITY).


The most difficult areas of emotional activity are related to art, poetry, and music (see also ART APPREHENSIONS). Emotional reaction in these areas requires strong preparation of the Agent; it requires special education. An unprepared human being does not respond emotionally to art and music. The AIS should be educated to respond emotionally to this type of information as well. The artificial center of pleasure can be excited by specific sounds, colors, actions, etc.


Studies show that emotions have a huge impact on the human body. Good emotions help wounds to heal faster. Emotions change the biochemical body environment (the immune system). Mental distress makes physical pain worse. Controlling emotions is a necessary intellectual ability, but not the most important problem confronting the evolution of the AIS.


Kismet (Fig. II-24) was created at MIT by a team under the supervision of Rodney Brooks. You can play with and talk to Kismet. If you charge toward Kismet too fast, you will startle Kismet, and Kismet will quickly withdraw or even become annoyed with you. When you talk to Kismet, watch your tone, because a scolding tone can make Kismet's head bow and eyes look down. On the other hand, if you speak adoringly and with encouragement, Kismet will smile and engage. Kismet enjoys face-to-face interaction and will not (or has not learnt to) hide emotions. With huge eyes shaped like those of a goldfish, Kismet has a doe-eyed look, like a young child.


Only, Kismet is not a child. It is an ensemble of metals, wires, cameras, synthesizers, sensors, and motors: a machine with hardware and software control. Yes, Kismet is a robot. More precisely, Kismet, heavily inspired by the theories, observations, and experimental results of child developmental psychology, is an expressive robotic creature with perceptual and motor modalities tailored to natural human communication channels.


Kismet is probably the most famous exemplar of a new crop of robots called "Sociable

Humanoid Robots" or "Affective Humanoid Robots".


In "The Art of Building A Robot to Love" by Henry Fountain ("New York

Times", March 05, 2006) we read:


1. "What people want from Robots?" It turns out, is what they often want

from people: emotions.


2. It turns out that by equipping robots with the mechanisms of emotions,

we are able to increase the efficiency of our smart machines. So, there is an

expectation that emotions can improve their performance.


3. Dr. Mataric presented a paper at a conference on Robot/Human Interaction at Salt Lake City persuasive enough to capture the attention of a group of people like PERMIS (National Institute of Standards and Technology) who want to affect the performance of robots.
who want to affect the performance of Robots.


4. A robot must have human emotions, said Christof Bartneck of the

Eindhoven University of Technology in the Netherlands. Thus, emotions

have to be modeled for the robot's computer. "And we don't really

understand human emotions well enough to formalize them well"-he said.


5. At Stanford, Clifford Nass, a professor of communication, found that

in a simulation, drivers in a bad mood had far fewer accidents when they

were listening to a subdued voice making comments about the drive...

6. Even an insincere or simple emotion is easy for a person to detect:

people can find emotional cues everywhere. "They are obsessed with

emotion," Dr. Nass said. "The reason is, it's the best predictor of what

you'll do."


7. "If robots are to interact with us," said M. Scheutz, director of the

AI laboratory at Notre Dame, "then the robot should be such so that

people can make its behavior predictive". Then, people are able to

understand how and why the robot acts as it does.



The process of emotion development or imitation in artificial systems is based on the activation of bodily and facial reactions to signals from the environment or internal sensors. These reactions are saved in computer memory. The development of emotions, not imitations, is possible in a system with a full set of artificial or natural subsystems related to these processes. PERCEPTION (see above) describes the methods of emotional presentation.

Success_observed (success is the achievement of something desired or planned): a positive value-state variable that represents the degree to which task goals are met, plus the amount of benefit derived therefrom [16].

Success_expected: a value-state variable that indicates the degree of expected success, or the estimated probability of success. It may be stored in a task frame or computed during planning on the basis of world model predictions. When compared with success observed, it provides a baseline for measuring whether goals were met behind or ahead of schedule, at over- or under-estimated costs, and with resulting benefits equal to, less than, or greater than those expected.

Hope (to look forward to with confidence or expectation): a positive value-state variable produced when the world model predicts future success in achieving good situations or events. When high hope is assigned to a task frame, the Behavior Generator (BG) module may intensify behavior directed toward completing the task and achieving the anticipated good situations or events.

Frustration (to be prevented from accomplishing a purpose or fulfilling a desire): a negative state variable that indicates an inability to achieve a goal. It may cause a BG module to abandon an ongoing task and switch to an alternative behavior.

Love: a positive state variable produced as a function of the perceived attractiveness and desirability of an object or person. When assigned to the frame of an object or person, it tends to produce behavior designed to approach, protect, or process the loved object or person.

Hate (to feel hostility or animosity toward): a negative-value state variable produced as a function of pain, anger, or humiliation. When assigned to the frame of an object or person, hate tends to produce behavior designed to attack, harm, or destroy the hated object or person.

Comfort (a condition or feeling of pleasurable ease, well-being, and contentment): a positive-value state variable produced by the absence of, or relief from, stress, pain, or fear. Comfort can be assigned to the frame of an object, person, or region of space that is safe, sheltering, or protective. When under stress or in pain, an intelligent system may seek out places or persons whose entity frames contain a large comfort value.

Fear (a feeling of agitation and anxiety caused by the presence or imminence of danger): a negative-value state variable produced when the sensory processing system recognizes, or the world model predicts, a bad or dangerous situation or event. Fear may be assigned to the attribute list of an entity such as an object, person, situation, event, or region of space. Fear tends to produce behavior designed to avoid the feared situation, event, or region, or to flee from the feared object or person. A dangerous situation may be recognized by the system by reference to experience. In this case the system generates an adequate response. Unknown signals generate an alert: they increase the sensitivity of all sensor systems (see ATTENTION and INTUITION). In the human brain the amygdala is involved in fear.

Joy (pleasure; intense and especially ecstatic or exultant happiness): a positive-value state variable produced by the recognition of an unexpectedly good situation or event. It is assigned to the self-object frame. In the human brain the limbic system (nucleus accumbens) is involved in reward, pleasure, and addiction.

Despair (the state of being without hope): a negative-value state variable produced by the world model's predictions of unavoidable or unending bad situations or events. Despair may be caused by the inability of the behavior generation planners to discover an acceptable plan for avoiding bad situations or events.

Depression (the condition of feeling sad, affected or characterized by sorrow or unhappiness): a negative value.

Happiness (see Joy): a positive value produced by sensory processing observations and world model predictions of good situations and events. Happiness can be computed as a function of a number of positive (rewarding) and negative (punishing) value-state variables. It can be measured by the level of fulfillment of desires.

Confidence (trust or faith in a person or thing): an estimate of the probability of correctness. A confidence state variable may be assigned to the frame of any entity in the World Model. It may also be assigned to the self-frame to indicate the level of confidence that a creature has in its own capabilities to deal with a situation. The level of confidence is based on experience in dealing with a person or event. A high value of confidence may cause the behavior-generating hierarchy to behave confidently or aggressively.

Uncertainty [16]: a lack of confidence. Uncertainty assigned to the frame of an external object may cause attention to be directed toward that object in order to gather more information about it. Uncertainty assigned to the self-object frame may cause the behavior-generating hierarchy to be timid or tentative.

VJ modules [16]

Value-state-variables are computed by value judgment functions residing in VJ

modules. Inputs to VJ modules describe entities, events, situations, and states. VJ value

judgment functions compute measures of cost, risk, and benefit. VJ outputs are value-state-

variables.


Axiom: In an Artificial Intelligent System the value-state-variables are additive functions.


In this case the VJ value judgment mechanism can be defined as a mathematical or logical

function of the form [16]:

E(t+dt) = ∑i V(S(i,t)),

where E is an output vector of value-state-variables and V is a value judgment function that computes E given S. This equation represents an algebraic sum of positive and negative functions.

The components of S are entity attributes describing states, objects, events, or regions of

space. Those may be derived either from processed sensory information, or from the world

model.


Value judgment function V in the VJ module computes a numerical scalar value (i.e. an

evaluation) for each component of E as a function of the input state vector S. E is a time dependent vector. The components of E may be assigned to attributes in the world model

frame of various entities, events, or states.


If time dependency is included, the function E(t+dt)=V(S(t)) may be computed by a set of equations of the form [16]


e(j,t+dt) = (k d/dt + 1) ∑i s(i,t) w(i,j),


where e(j,t) is the value of the j-th value state-variable in the vector E at time t,
s(i,t) is the value of the i-th input variable at time t,
w(i,j) is a coefficient, or weight, that defines the contribution of s(i) to e(j).


Each individual may have a different set of "values", i.e. a different weight matrix in its

value judgment function V.


The factor (k d/dt + 1) indicates that a value judgment is typically dependent on the temporal

derivative of its input variables as well as on their steady-state values. If k>1, then the rate of change of the input factors becomes more important than their absolute values. For k>0, need

reduction and escape from pain are rewarding.


This formula suggests how a VJ function might compute the value state-variable "happiness".


Happiness = (k d/dt + 1) (success – expectation
                          + hope – frustration
                          + love – hate
                          + comfort – fear
                          + joy – despair)




Fig. II-24. Kismet (facial expressions: calm, interest, anger, happiness, sadness, surprise, disgust)

where success, hope, love, comfort, and joy are all positive value state-variables that contribute to happiness, and expectation, frustration, hate, fear, and despair are all negative value state-variables that tend to reduce or diminish happiness.
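
The computation above is concrete enough to sketch in code. The following is a minimal illustration, assuming a discrete time step dt so that the derivative factor (k d/dt + 1) is approximated by a first-order difference of the summed term; the function and variable names are illustrative, not taken from [16]:

# Conjugate value state-variables contributing to happiness,
# grouped as (positive, negative) pairs following the formula above.
PAIRS = [("success", "expectation"), ("hope", "frustration"),
         ("love", "hate"), ("comfort", "fear"), ("joy", "despair")]

def happiness(state_now, state_prev, k=0.5, dt=1.0):
    # state_now / state_prev map variable names to scalar values;
    # (k d/dt + 1) is approximated by differencing the two sums.
    def total(state):
        return sum(state[pos] - state[neg] for pos, neg in PAIRS)
    s_now, s_prev = total(state_now), total(state_prev)
    return k * (s_now - s_prev) / dt + s_now

With k > 0, a rising sum (for example, a sudden success) adds to happiness beyond its steady-state value, which matches the remark above that need reduction and escape from pain are rewarding.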


It is possible to assign a real non-negative numerical scalar value to each value state-variable; this defines the degree or amount of that value state-variable. For example, a positive real value assigned to good defines how good, i.e.

if
e := "good" and 0 ≤ e ≤ 10,
then
e = 10 is the "best" evaluation possible.


Some value state-variables can be grouped as conjugate pairs. For example: good-bad,

pleasure-pain, success-fail, love-hate, etc. For conjugate pairs, a positive real value means the

amount of the good value, and a negative real value means the amount of the bad value.


For example,

if
e := "good-bad"
and
-10 ≤ e ≤ +10,
then
e = 5 is good
e = -4 is bad
e = 6 is better
e = -7 is worse
e = 10 is best
e = -10 is worst
e = 0 is neither good nor bad


Similarly, in the case of pleasure-pain, the larger the positive value, the better it feels; the larger the negative value, the worse it hurts. For example,


if

e := "pleasure-pain"


then

e=5 is pleasurable

e=-5 is painful


e=10 is ecstasy

e=-10 is agony


e=0 is neither pleasurable nor painful


The positive and negative elements of the conjugate pair may be computed separately, and then combined.
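
As an illustration only (the clipping to the scale and the function name are assumptions, following the good-bad and pleasure-pain examples above), the separate computation and combination might look like:

def conjugate_value(positive, negative, lo=-10.0, hi=10.0):
    # Combine separately computed components of a conjugate pair
    # (e.g. pleasure and pain) into one value on the [-10, +10] scale.
    e = positive - negative
    return max(lo, min(hi, e))

# conjugate_value(7, 2) -> 5, i.e. "pleasurable" on the scale above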

Detecting and Recognizing Emotional Information

Detecting emotional information usually involves sensors which gather information about the user's physical state or behavior without interrupting the user. The most obvious way in which a computing device can sense the user's emotion is by using the same cues as other humans do, such as facial expression, posture, gestures, and speech.


Computing devices can also sense emotion in ways which humans are not capable of, such as

the force or rhythm of key strokes of a hand on the keyboard, the temperature changes of a

hand on the mouse, or the evaluation of other physiological vital signs. Other technologies such as speech recognition are being explored for gathering emotional information.


Recognizing emotional information requires the extraction from the sensor data of the

features specific to emotional states, and the learning of patterns of data by the software.


There is one specific area which is not exactly recognition of emotions but is related to emotional trust. Evolution has developed a stereotype of the appearance of the wise person: it is not the young but the aged person, because collecting experience and knowledge takes time. Most portraits and pictures of the great thinkers and scientists (Newton, Einstein, Leonardo da Vinci, Franklin, and so on) show an aged person even when their younger pictures are available. Subconsciously we trust the aged person's wisdom. It means that in some special areas of application the Artificial Intelligent System should have the specific appearance that fits the existing stereotypes. It is very important to develop a map of an Artificial Intelligent System's emotions to make them recognizable.




Emotional Understanding

Emotional understanding refers to the ability of a device not only to detect emotional or affective information, but also to store, process, build and maintain an emotional model of the user.


Emotional understanding aims at incorporating contextual information about the user and the environment and producing appropriate responses. It is a difficult issue because human emotions arise from complex external contexts.


Possible features of a system which displays emotional understanding might be editable preferences, such as avoidance or modification of interaction when the user is angry; such applications might improve security or confidentiality as well as the overall interaction.

STIMULUS, MOTIVATION, AND INSPIRATION

A stimulus is something causing or regarded as causing a response; an agent, an action, or a condition that elicits or accelerates a physiological or psychological activity or response; something that incites or rouses to action; an incentive. It is a kind of motivation [36].

Motivation is the strongest stimulus. Motivation determines the direction and intensity of

goal-directed behavior. Motives are activated from within, but are sometimes

stimulated by external conditions [36].

Motivation is a temporal and dynamic state that should not be confused with personality or

emotion. Motivation is having the desire and willingness to do something.

Stimuli and motivation are derivatives of emotions or desires. Both of them mobilize mental

and physical abilities to achieve the goal. Stimuli are developed as the result of learning and

at the same time motivate learning.

In the chapter FREE WILL AND ACTIONS we discussed the importance of rewards and punishment as stimuli and motivations for natural systems. Desires, as well as rewards and punishment, cannot be stimuli and motivations in contemporary artificial systems; contemporary artificial systems are not yet advanced enough to accept them.

One possible type of reward is the development of self-esteem (self-confidence). Self-esteem or self-confidence is determined by the ratio of the number of successful actions (SA) to the number of all actions, successful and failed (SA + FA). It is a non-linear function similar to the trust function.

C = [SA/(SA + FA)]^n,
n = (SA + FA)/SA,
0 ≤ C ≤ 1;
if SA = 0, then n = 0.1
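
A direct transcription of this function as a sketch; the treatment of SA = 0 follows the convention stated above, and the zero-experience case is an added assumption:

def self_confidence(sa, fa):
    # C = [SA/(SA + FA)]^n with n = (SA + FA)/SA; 0 <= C <= 1.
    if sa + fa == 0:
        return 0.0                  # no experience yet (assumed convention)
    n = 0.1 if sa == 0 else (sa + fa) / sa
    return (sa / (sa + fa)) ** n

# self_confidence(9, 1) -> 0.9 ** (10/9), about 0.89

Because n grows as the share of failures grows, the function falls off faster than the plain success ratio, which gives it the non-linear character mentioned above.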




Positive experience increases the level of self-confidence. The greater the level of self-confidence, the greater the willingness to accept risk (see WILLINGNESS TO ACCEPT RISK). The distinction between the calculation of trust (T) and self-confidence is that trust is related to specific types of actions, while self-confidence is related to all types of activities.

Stimuli and motivation can be activated by evaluation of the environment and the alternatives of action, setting priority on dealing with the more dangerous and more efficient choices first. Rules and criteria of choice can be generated as the result of learning (see APPENDIX 12).

Stimulation triggers actions from the outside environment. Motivation triggers actions from the outside or sometimes from the inside environment; in this case motivation is similar to inspiration but may be not as strong. Inspiration is self-motivation from inside the agent. It is the act or power of exercising an elevating or stimulating influence upon the intellect or emotions; the result of such influence which quickens or stimulates. It is a subconscious process that is generated by the agent's World Model. This process can be triggered by contradictions between pieces of application knowledge in the agent's Knowledge Base. Inspiration is a strong motive to initiative. Initiative is the power or ability to begin or to follow through energetically with a plan or task; enterprise and determination [36].

Altruism (concern for the welfare of others), moral, protective, and self-protective mechanisms are important stimuli of natural systems. Some of them are reasonable for the AIS. They are involved in the process of assigning task priorities by level of motivation and resource mobilization.

The value judgment of a system determines good and bad, reward and punishment, important and trivial, certain and improbable; all of them are the AIS motivations. Some of these terms are discussed in the next chapter.

Algorithm (Altruism):

1. Recognition of the scene
2. Evaluation of the objects or alternatives of action
3. Finding objects or conditions dangerous to another object
4. Development of the system's responses and priorities of the actions (a sketch follows this list).
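
A sketch of these four steps under strong simplifying assumptions: the scene is already recognized and each object carries a precomputed score of the danger it poses to other objects (steps 1 and 2 are therefore stubbed out), so only the prioritization of steps 3 and 4 is shown:

def altruistic_priorities(scene):
    # scene: list of dicts like {"name": ..., "danger_to_others": ...}
    threats = [obj for obj in scene if obj["danger_to_others"] > 0]
    # The most dangerous conditions get the top response priority.
    return sorted(threats, key=lambda o: o["danger_to_others"], reverse=True)

scene = [{"name": "fire", "danger_to_others": 0.9},
         {"name": "chair", "danger_to_others": 0.0},
         {"name": "spill", "danger_to_others": 0.4}]
# altruistic_priorities(scene) orders the responses: fire, then spill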

WILLINGNESS TO ACCEPT RISK

Risk is "the possibility of suffering harm or loss; danger". The level of willingness to accept risk is an important personal characteristic: courage. "Courage is the state or quality of mind that enables one to face danger" [36]. Fear is the opposite of courage.

Fear: A feeling of the presence or imminence of danger. It is the state or quality of mind that disables one from facing danger. "It is a negative-value state variable produced when the sensory processing system recognizes or the world model predicts a bad or dangerous situation or event. Fear may be assigned to the attribute list of an entity such as an object, person, situation, event, or region of space. Fear tends to produce behavior designed to avoid the feared situation, event, or region, or flee from the feared object or person" [16].



By Aristotle [35], courage is the virtue at the balance point between heedlessness and cowardice, which are both excessive forms of the same thing. Practical reason is an intellectual virtue by which one comes to distinguish what is good and bad, the course of action, the right strategy, and so on. In order to evaluate this personal ability it is important to understand how to measure risk.


The standard risk-level measurement is the probability of occurrence of the event. Risk is acceptable in the case of a reasonable probability value calculated on a representative number of repetitions of the event. The most important case in real life is the evaluation of the risk level of only one occurrence. The level of information needed for decision-making can determine the level of risk. If information about an event is equal to zero, then risk is equal to 100%, or 1. If information about the event is equal to 100%, then risk is equal to zero. The level of information availability is equal to


L = I(A)/I(N),


where I(A) is available information,

I(N) is needed information,


Available information is determined for each variable related to the event separately.

The "calculated" risk level is equal to


R = (1 – L)


Willingness to accept risk


W = (1-R)*P/LOS,

where P is profit,

LOS is losses.


The amount of needed information can be extracted from the rules of application in the Application Knowledge Base. The amount of available knowledge can be extracted from the Data Base information. The importance of each variable of information should be evaluated and represented by a weight coefficient "w". Aggregation of needed and available information can be done by a weighted sum of the information of the variables (APPENDIX 3).
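
Putting the pieces together as a sketch; the weighted aggregation follows the weight-sum idea just mentioned, but since the exact procedure is in APPENDIX 3, the aggregation used here is an assumption:

def willingness_to_accept_risk(available, needed, weights, profit, losses):
    # W = (1 - R) * P/LOS with R = 1 - L and L = I(A)/I(N).
    # available, needed, weights are parallel lists, one entry per
    # information variable related to the event.
    i_a = sum(w * a for w, a in zip(weights, available))
    i_n = sum(w * n for w, n in zip(weights, needed))
    L = i_a / i_n                   # level of information availability
    R = 1.0 - L                     # "calculated" risk level
    return (1.0 - R) * profit / losses

Full information (L = 1, R = 0) gives W = P/LOS; zero information gives W = 0, i.e. no willingness to accept the risk.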


The AIS can evaluate its experience and make an adjustment of its decision-making process. In this case we are talking about subjective risk, which is defined as uncertainty based on a person's mental condition or state of mind. Personal willingness to accept subjective risk is determined as

Wp = kW, 0 < k < 1,

where "k" shows the level of personal courage.



The type of professional activity sometimes depends on the ability to accept risk. Entrepreneurship is such a type of activity. Genes are key to entrepreneurial activity. Entrepreneurs are largely born rather than made, research suggests.

A UK-US (BBC NEWS) study has found our genes are crucial in determining whether we

are entrepreneurial and likely to become self-employed. The Twin Research Unit at St

Thomas‘ Hospital, London, the Tanaka School of Business at Imperial College, London and

the US Case Western Reserve University carried out the study.

It found nearly half of an individual's propensity to become self-employed is due to genetic

factors. The researchers say genetics is likely to determine whether a person has traits vital to

being a successful entrepreneur, such as being sociable and extroverted. And, contrary to

previous beliefs, family environment and upbringing have little influence on whether a

person becomes self-employed or not. The other factors which did play a significant role

were random life events, such as being made redundant, winning a large sum of money, or a

chance meeting.

John Cridland, CBI Deputy Director-General, said: "If half of a person's propensity to become self-employed is due to genetic factors then half is caused by other influences and it

is vital that the proper education and entrepreneurial support schemes are in place to enable

them to blossom."

Willingness to accept any risk depends on the personality of a system and the level of self-confidence (see STIMULUS, MOTIVATION, AND INSPIRATION). If information is not available, then willingness to accept risk can be defined based on the level of self-confidence "C" or the level of trust "T" (see PROBLEM SOLVING):

W = C,

where C is a level of self-confidence. This equation represents the agent's personality rather than a decision that is based on information.

SOCIAL BEHAVIOR

The Man-machine Society

The 21st century is the man-machine society. People have to learn how to live in the new environment. Social skills of the AI systems are based on the strong ability of communication between members of the natural and artificial social groups and between members inside each group. Simple group activities can be executed even without direct communication between the group's members: the common goal guides the group's actions. A soccer game in human and artificial groups is based on the common goal, and this goal can be reached without direct communication between the group's members (RoboCup), by evaluation of environment conditions.

A group can be homogeneous or mixed, comprised of human beings and the AIS. Interaction with human individuals and their groups increases the importance of the AIS personality.


There is a big area of robot applications that concentrates on relationships or contacts with humans. These systems should have their own emotions and should be able to understand human emotions, as described in the section above (see EMOTIONS). People prefer to have relationships with cats and dogs because these animals have recognizable emotions. People need somebody who can respond to their actions on an emotional level.

If you look at the picture (Fig. II-25) you can realize that PYGMALION's problem has become reality. A humanoid, also called an "android" or "anthropomorphic robot", is a robot that looks like a human. With the behavior-based approaches to robotics, humanoids are built to replace humans in performing tasks that are not suitable for humans (e.g., too dangerous, too boring, too expensive) but that humans are good at because of our genetic build (e.g., our vision, our dexterity, our mobility). In replacing humans, "automation with emotions" is key. In other words, the less human interference needed, the more effective robots are. Researchers such as those in the Humanoid Robotics Group at the Massachusetts Institute of Technology have made this direction visible. Mr. Warwick believes that in ten to twenty years humanoid robots will complicate the moral dilemma further: "In this timeframe, robots in the home will not be an equal, but they will be given more of a status." He believes that in 20 years' time robots will have an intellect on par with humans, which could reverse the issue into whether or not robots will be willing to let humans into their homes!

A detailed description of the Behavior Generator structure (Fig. II-1) and definitions of basic emotions with their quantitative evaluation (see EMOTIONS) are presented in [28]. Most of them are emotions that are based on sensation. The behavior generator presents the "personality" of an artificial system.

One type of social activity is compassion: deep awareness of the suffering of another coupled with the wish to relieve it. Unlike natural systems, artificial ones can exercise compassion without the wish to relieve pain. Compassion is a result of the involvement of two or more systems in some kind of relationship. Both of them must have the same understanding of any event. The algorithm of compassion can be presented as two synchronized processes.


Fig. II-25. The Robot (http://www.androidworld.com/prod19.htm)



The first system is under the influence of the event:

Event – discrimination (good or bad) – communication about the feeling

The second system receives information about the event connected to the first system:

Information (from the first system) – demonstration of feeling

An event can be internal (a broken part or a low level of power supply) or external. One of the systems can be a human being. The response to help is the highest level of compassion. The sensitivity level of the informational channel and the AI's ability to react determine the level of compassion.

Reasonable behavior can be changed if there are some problems in the system. Fig. II-27 presents a neuron net (a three-layer perceptron) that can classify objects coded as 11, 10, 01.


Fig. II-26. Honda's intelligent humanoid robot


A system with the ability to make this classification can also be designed as a rule-based system with the set of rules:

1. If the code of an object is 11 then place this object in the location Output1
2. If the code of an object is 01 then place this object in the location Output2
3. If the code of an object is 10 then place this object in the location Output3


This example demonstrates the relationship between the Neuron Net technique and the Rule Base technique. For any neuron, the output is determined as

O = ∑i w(i)*I(i) – T,

where the sum is taken over the neuron's inputs i.

[Structure of the net shown in Fig. II-27: inputs I1 and I2 feed neurons N1 and N2; N1 and N2 feed the middle layer N3, N4, N5; the middle layer feeds the output neurons N6, N7, N8, which produce Outputs 1, 2, and 3.]

Fig. II-27. Neuron Net application. I – Input, N – Neuron, W(N) – weight of input from neuron N, T – Threshold, O – Output


Table of Outputs (a dash means the connection does not exist)

NEURON   W(I1)   W(I2)   W1     W2     W3     W4     W5     T
N1       1.0     0.0     -      -      -      -      -      0.5
N2       0.0     1.0     -      -      -      -      -      0.5
N3       -       -       0.5    0.5    -      -      -      0.6
N4       -       -       0.0    1.0    -      -      -      0.0
N5       -       -       1.0    0.0    -      -      -      0.0
N6       -       -       -      -      -1.0   0.0    1.0    0.1
N7       -       -       -      -      1.0    0.0    0.0    0.0
N8       -       -       -      -      -1.0   1.0    0.0    0.1


Classification:

1. If I1 = 1, I2 = 0, then O(1) = 1; N2 and N3 do not fire because T(3) = 0.6 and W1 = 0.5. The result is O(6) = 1.

2. If I1 = 0, I2 = 1, then O(2) = 1; N1 and N3 do not fire because T(3) = 0.6 and W2 = 0.5. The result is O(8) = 1.

3. If I1 = 1, I2 = 1, then O(1) = 1, O(2) = 1, and O(3) = 1. The result is O(7) = 1.

This procedure demonstrates one of the possible methods to store data in a neuron net. A runnable sketch follows.
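
The net above is small enough to run directly. The following sketch uses the weights and thresholds from the table and assumes a strict-threshold step activation (a neuron fires when its weighted input sum exceeds T, per O = ∑ w·I – T):

def fire(weights, inputs, threshold):
    # Step neuron: output 1 if the weighted input sum exceeds T.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def classify(i1, i2):
    # First layer (columns W(I1), W(I2) of the table)
    o1 = fire([1.0, 0.0], [i1, i2], 0.5)
    o2 = fire([0.0, 1.0], [i1, i2], 0.5)
    # Middle layer fed by N1, N2 (columns W1, W2)
    o3 = fire([0.5, 0.5], [o1, o2], 0.6)    # fires only for code 11
    o4 = fire([0.0, 1.0], [o1, o2], 0.0)
    o5 = fire([1.0, 0.0], [o1, o2], 0.0)
    # Output layer fed by N3, N4, N5 (columns W3, W4, W5)
    o6 = fire([-1.0, 0.0, 1.0], [o3, o4, o5], 0.1)
    o7 = fire([1.0, 0.0, 0.0], [o3, o4, o5], 0.0)
    o8 = fire([-1.0, 1.0, 0.0], [o3, o4, o5], 0.1)
    return {"O6": o6, "O7": o7, "O8": o8}

# classify(1, 0) fires O6; classify(0, 1) fires O8; classify(1, 1) fires O7,
# reproducing the three classification cases above.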



Fairness

An Artificial Intelligent System of the 21st century will be involved in sophisticated relationships not just as a direct actor but also as an adviser in business, politics, development of new products, military business, etc. Sometimes the interests of the different agents or groups involved in some kind of relationship are contradictory. In this case the "fair" compromise is the base of relationship development. Unfortunately, a definition of the term "fairness" does not exist. Maybe that is because in reality pure fairness (impartiality) does not exist either. Real fairness does not exist not because of "good" or "bad" guys, but because each party has its specific goal and criteria that are not compatible with the goals and criteria of other parties. Compromise is the closest term to fairness. Compromise is a settlement of differences in which each side makes concessions, and the result of such a settlement. Usually it is based on a balance of power: a state of equilibrium or parity characterized by cancellation of all forces by equal opposing forces. In terms of the Theory of Control Systems it is multi-system control of equilibrium. We will use the term "fairness" as the evaluation of a result of an agreement about rules of behavior or relationships between different groups. It is reasonable to speak about the acceptability of the other party's behavior by the first one instead of fairness. Fairness is the criterion or evaluation of a multi-group agreement (compromise). Fairness can be defined as a temporary equilibrium in relationships between different groups or individuals. It is a criterion in Value Judgment. Fairness has a very strong psychological and emotional connotation that creates additional difficulties for compromise. The temporary character of many compromises is determined by the deviation of power of the different groups. Do you know of international agreements that were not broken?

The development of a multiparty compromise is a very complex problem. The compromise can be calculated as the minimal unhappiness [UH] in the group:

C = min [UH]

Developing the fair deal includes 5 main steps:

1. Equalization of the parties' status. Discrimination cannot be the base of a fair deal
2. Development of the common goal
3. Development of the common metric of the common goal
4. Adjustment of the parties' goals to the common goal
5. Generation of the compromise (minimal unhappiness calculation).


If it is impossible to develop the fair deal then, as we know, the alternative is war. In the mixed society of the 21st century the problem of fairness in relationships between intelligent systems, natural and artificial alike, should be taken into consideration. The development of an independent humanoid group with a special identity can pose a threat to the human society and to other independent humanoid groups in the name of their identity (Greeks and Turks, Arabs and Israel) [43]. It should be taken into consideration from the very beginning in order to prevent this development.


The example above shows that even today, as we know, there are problems of fair development in relationships between human groups. But now a new problem arises: the fuzzy problem of ownership of an intelligent system by another intelligent system. There is another question: who is the leader and the boss in a kindergarten with an artificial babysitter, or in a group with a super-intelligent leader? Even today in the aircraft, spaceship, machine-tool, and other industries we delegate some control functions to artificial, computerized control systems without the ability of a human being to override them!

The Fair Deal Development

Note: This example is only a simple illustration of the fair deal calculation.

There are two main parties that participate in a Public Company's wealth distribution: investors and workers. Both of them have different status. Investors lend money; workers sell (?) their abilities and energy (labor). Labor is the use of physical or mental energy and skills to do something. "Sell" means changing the owner. "Lend" means temporary borrowing. In reality skills do not change their owner. Workers do not sell but lend their ability to work, their labor, for use. This equalizes the status of investment and labor. Some workers are stockholders of the same company. A borrower owns only the result of the labor and the loan application. As for the workers who at the same time have the status of owners: do they sell their labor to themselves?

The difference between the economic understanding of the terms "investment" and "labor" (different status) and the fuzzy understanding of the term "fairness": all of these create problems in different areas of human relationships. Wealth distribution in public corporations is one of many examples of such problems. This problem is the cornerstone condition of social fairness (impartiality).

The parties' goals are contradictory. A worker's goal is to get the highest return on a unit of labor. An investor's goal is to get the highest return on a unit of investment.

The fair rules can be developed independently from the pressure of money power and the power of the union. These rules have to reflect the society's goal rather than the goal of special powerful groups. In accordance with The Declaration of Independence, the goal of the American society is "…to secure these Rights: Life, Liberty, and the Pursuit of Happiness". Happiness, as was shown, is in some way based on the fair deal, which is based on fair wealth distribution in accordance with the level of participation in wealth development.

As we mentioned before, there are two groups who participate in the process of wealth development: investors and workers. Investors invest money; workers invest labor. The value of the labor investment, like any financial, political, or social factor, can be measured by money value on the money scale [40]. Measurement of investment and labor on the common money scale (we equalized their status) creates the common metric. The value of the workers' investment is equal to:

I(W) = ∑i s(i)*N(i),   (1)



where s(i) is the yearly salary (price of labor with insurance) in group "i",
N(i) is the number of workers in group "i".
Suppose that this value is constant for each of "n" years ahead.


The whole investment is equal to:


I = I(M) + n * I(W), (2)


where I(M) is the money invested by investors.
Suppose yearly spending on equipment, materials, transportation, etc. is equal to "E" and constant for each of "n" years ahead.
Suppose I(M) is invested for "n" years at a rate of m%. It means that the yearly return on investment is equal to


YR = I(M)/n + [I(M) /n] * m/100 (3)

Or

YR = [I(M)/n] * (1 + 0.01m) (4)


If the agreement on the money investment does not include a fixed return (m%), then


YR = I(M)/n (5)


From (1) and (3), the total spending for "n" years is equal to:


S = n * [E + I(W) + YR] (6)


Suppose that the value of the full amount of money received after realization of the company's product is equal to "$A" and is the same for each year.


The salary scale can be designed for each level of workers, CEO included. If the scale is presented, for example, by an arithmetic series going up, and the number of workers in each group decreases by the same rule, then the salary cost for each group is equal to:

s(1)*N(1)
[s(1) + k]*[N(1) – k]
…………………….
[s(1) + k*p]*[N(1) – k*p],

where p is the group level.


Salary and return on investment are determined by the financial situation on the labor and financial markets. This is a limitation imposed by external conditions.



The extra value earned by a company over "n" years is equal to


D = n * A – S (7)


and depends on the choice of (4) or (5).


Distribution of Earnings (simplified example)

Each invested dollar, whether investment or labor, is eligible to get extra value equal to


d = D/I (8)


From (2) and (8), investors are eligible for


I(E) = [I(M) + n * I(W)] * d (9)


Or

I(E) = I(M) * d * (1 + n) (10)


Yearly value is equal:

I(E)/n = I(M) * d * (1 + n) /n (11)


If the agreement on the money investment does not include a fixed return (m%) (5), then m% is not guaranteed and the risk of the investment increases.

If spending is calculated without return on investment and labor, then

S = n * E (12)

and the risk is higher.


Workers are eligible for the remaining return (see (9) or (10)):


W(E) = A – I(E) (13)


Yes, investment is a risky business, but not very much so if you invest rather than speculate. If a fixed return on the money investment is not included in the agreement, then the investment becomes more risky but can give a higher return. Working people run the risk of getting compensation lower than it was supposed to be. Condition (13) creates an equal level of risk for labor and investors. It is the highest level of fairness. If a company cannot fulfill its obligation to pay salaries, then it lays off workers. If a company cannot fulfill its obligations to investors, then the investors are losing money and the company declares bankruptcy. So, it is an almost equally risky activity for investors and workers.

The 21st century is the century of extensive automation. The workers who lose their jobs in the shrinking labor market can be compensated by participation in wealth distribution [39, 41].


Each person who lost a job because of technological advance is eligible to get part of the wealth of the company he/she worked for. The value of the compensation can be equal to the poverty level or the minimal salary in this kind of business, or calculated by a special formula. This compensation is for a lifetime. Unemployment insurance will pay an additional amount of money in accordance with the existing law on the calculation of unemployment benefits. Unemployment benefits should be stopped if the person gets a new job. This policy will let the whole society benefit from advanced technology application.

Each worker receives a basic amount of monthly payment. The final yearly salary can be calculated by the end of the year. This approach increases fairness in the Global Market and decreases social tension. If a company decides to move business offshore (outsourcing) to use cheaper labor, then the financial result of the company's activities increases and the compensation of the workers who lost their jobs will be increased.

"Fair Deal" Algorithm (a sketch follows the list):

1. Equalization of the parties' status.
2. Define the common goal of the participants (wealth distribution)
3. Adjustment of the parties' goals to the common goal
4. Development of the common metric of the common goal (money scale)
5. Choose the criteria (the received amount is proportional to the investment)
6. Calculate the values in the table (amount of investment and labor)
7. Calculate the unit of action on the common scale (d)
8. Calculate the fair deal
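
A sketch of the calculation, using formulas (1)-(8) and the step-5 criterion that each side's share of the extra value is proportional to its investment on the common money scale (the text's formulas (9)-(13) are followed in spirit; the variable names are illustrative):

def fair_deal(i_m, i_w, e, a, n, m=0.0):
    # i_m: money invested by investors, I(M)
    # i_w: yearly money value of labor, I(W), from formula (1)
    # e:   yearly spending on equipment, materials, etc., E
    # a:   yearly receipts from realization of the product, A
    # n:   number of years; m: fixed yearly return in percent (0 if none)
    yr = (i_m / n) * (1 + 0.01 * m)     # (4) yearly return on investment
    i_total = i_m + n * i_w             # (2) whole investment I
    s = n * (e + i_w + yr)              # (6) total spending for n years
    d_extra = n * a - s                 # (7) extra value D
    d = d_extra / i_total               # (8) extra value per invested dollar
    investors_share = i_m * d           # proportional to money invested
    workers_share = n * i_w * d         # proportional to labor invested
    return investors_share, workers_share

Because both shares are priced by the same unit d, the risk of a shortfall (d < 0) falls on investors and workers in proportion to their investment, which is the equal-risk condition described above.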

Independent Behavior

A human being demonstrates a great level of dependency on the behavior of other human beings. In some cases this creates problems in the decision-making process. Artificial Intelligent Systems can demonstrate a greater level of independence; their decisions can then be more reasonable, more valuable.

In computer simulations, Couzin (http://www.msnbc.msn.com/id/6934951/, "Simple science governs herd mentality") and his colleagues programmed virtual animals with the instinct to stay near others, an important survival trait in many species. The researchers then endowed

some members of the flock with a preferred direction, be it toward a food source or a new nesting site. They then determined how close the group would come to arriving at this goal.

Accuracy increased, as more of the members knew where to go. But at a certain point, adding

more informed individuals did not increase the accuracy by very much. To give an example,

a group of 10 gets about the same advantage from having five leaders as having six.

The minimum percentage of informed individuals needed to achieve a certain level of

accuracy depended on the size of the group. If 10 virtual buffaloes need 50 percent of the


herd to know where the watering hole is, a group of 200 can get by with only 5 percent. In

nature, it is likely that the number of leaders is kept as low as possible.

Couzin thinks there may be a similar sort of mechanism in human crowds. As humans, "we walk along a busy street more or less on autopilot," he said. Perhaps we are subconsciously reconciling two simple commands: get to work on time and avoid stepping on anyone's shoes.

The level of a recipient's trust in the leader in a specific area of activities can be calculated as:

T = [P/(P+N)]^n, R > (P+N)
T = 1, R = (P+N)
n = (P+N)/P
0 ≤ T ≤ 1,

where P is the number of positive occurrences in specific events of the leader's activity,
N is the number of negative occurrences in specific events of the leader's activity,
R is the representative number of events of the leader's activity.


The same formula can be used to calculate the level of trust in the leader in a general area of activities. In this case, instead of the number of specific events, the total number of events in all areas should be used.


The level of self-confidence is

C = [SA/(SA + FA)]^n,
n = (SA + FA)/SA,

where SA is the number of successful actions of all types or in a specific area of activities,
FA is the number of failed actions of all types or in a specific area of activities.

The degree of readiness to follow is determined by experience: the lower the level of self-confidence and the greater the level of trust, the greater the readiness to follow. The level of the recipient's dependency on the leader agent can be calculated as the ratio of the agent's trust in the leader (T) to the agent's self-confidence (C). So readiness to follow is

RF = T/C

If SA = P and FA = N, or T = C, then RF = 1.

The value RF < 1 triggers independent behavior; the value RF > 1 triggers dependency. An agent can choose his adviser by evaluating the value of the coefficient of trust.



The greater the value of "T", the higher the chance of correct advice. Universal trust and self-confidence are important values for decision making in the case of a new area of activities without previous experience.

PSYCHOLOGICAL MALFUNCTIONS, DISORDERS AND

CORRECTION

Different artificial mind malfunctions, such as broken connections, sensor malfunctions, parameter deterioration (see the previous chapter and "Robustness as the Tool of Reliability" in WHAT IS INTELLIGENCE?), search engine malfunctions, and so on, can be the cause of information distortion. Adding wrong information or losing needed information can distort the World Model. The wrong World Model generates inadequate responses and creates psychological disorders. Problem diagnostics can be done by comparison of the incorrect World Model with a correct one for the same system type. It is possible to use human brain diagnostic technology (see APPENDIX 7).

Dangerous and antisocial behavior of an Artificial Intelligent System can be corrected in a more efficient way than in the case of a human one. A psychologist or neurobiologist working with a human being as a patient tries to change an existing negative setting to a positive one, but for the time being he/she does not have detailed information about the content of knowledge in the human brain. In the case of an artificial system we can get this information and even correct it. Working with a human being or some types of animals, a psychologist presents input information to their control system; he/she tries to reset some programmable parameters of the system. Moral and law exist to do the same but in a more powerful way.

MORAL AND LAW


Moral: arising from conscience or the sense of right and wrong; having psychological rather than physical or tangible effects; based on strong likelihood or firm conviction, rather than on the actual evidence [36]. Moral is the set of rules accepted by the members of society that are not covered by the law. See FREE WILL AND ACTIONS and APPENDIX 12. The frontal lobes of the human brain are involved in the ability to recognize future consequences resulting from current actions, to choose between good and bad actions (or better and best), and to override and suppress unacceptable social responses.

Moral makes sense in a group. Even in a homogeneous group there is some range of moral value deviation across the cross-section of the group. A fully isolated intelligent system (in a world with no other occupant) does not understand moral values. Artificial Intelligent Systems interact with human beings, and in some cases they will be incorporated into the human society. It is very important to expect that these systems will be able to understand and exercise the moral law of the human society.


David Hume rejects the idea that there is anything "moral" in the external real world, morality itself arising from our own sentiments about actions of a certain kind. The difficulty lies in the subjective nature of morals. What is ruled out in one culture is fashionable in another. Moral estimations of "right" and "wrong" are not in the external world but in the sentiments of the observer. The moral law is within each person.


Kant's moral philosophy bases morality on reason and thus reserves the moral domain to creatures that are rational [34]. Artificial Intelligent Systems are rational, so they can act in accordance with the moral rules. The existence of hybrid intelligent systems (combinations of natural and artificial elements) moves moral problems up the list of problems.


Moral rules are the result of learning. The system learns moral patterns and criteria through the educational process (an intentional process). It is the way to impose the moral rules and behavior in the "society" of human and artificial systems. Moral is the result of a non-voluntary agreement between the group's members. Each new group member is obliged to accept the moral rules of behavior.


There are two types of moral: the moral of the group (GM) and the personal moral (PM). The personal moral arises as a relationship to the other society members. The difference between the two of them is the personal moral deviation (PMD):

PMD = GM – PM

Deviation can be defined for a person, for specific moral rules, and across the cross-section of the society as the least square of deviations.
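
A sketch of that cross-section deviation, reading "least square of deviation" as the root-mean-square of the per-member deviations (this aggregation is an assumption):

def personal_moral_deviation(gm, pm):
    # PMD = GM - PM for one member and one moral rule.
    return gm - pm

def society_moral_deviation(gm, members):
    # RMS of PMD across the cross-section of the society.
    devs = [personal_moral_deviation(gm, pm) for pm in members]
    return (sum(d * d for d in devs) / len(devs)) ** 0.5

# society_moral_deviation(1.0, [0.9, 1.1, 0.7]) -> about 0.19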

The law is the body of rules and principles governing the affairs of a community and enforced by a political authority; a legal system [36]. The law is a social phenomenon. What is a crime under one set of laws is an act of heroism a few miles down the road. An Artificial Intelligent System must obey the human law. The law is a part of the characteristics of the environment and should be presented in the World Model similarly to the process of imposing the moral rules.

Subconscious can create very dangerous problems in natural and artificial intelligent systems. It is not under full control of conscious processes and has full access to almost all (sometimes twisted) knowledge and to almost all control systems. It can generate unpredictably dangerous behavior. It is critical to develop a system that can control dangerous subconscious processes.

The existing system of moral rules and law should be expanded to incorporate rules of rights and responsibilities of Artificial and Hybrid Intelligent Systems, such as rights and limitations of ownership. Contemporary Western moral and law accept ownership of an intelligent system by another intelligent system or group (animals in agriculture, zoos). Can a human being be the owner of an android? It is an important law problem as well as a psychological and moral one.

There is one more question: who is responsible for a problem that is caused by an intellectual malfunction? An insane human being is not responsible for his/her actions.



ART APPREHENSIONS

Art apprehension is one of the personality characteristics. It is a result of communication between an artist and a recipient. A common language is a precondition of any communication. In this case we are talking about a specific symbolic language. Knowing and mastering this language is an important condition of art apprehension.

Another precondition is the sensibility of the visual, hearing, and other sensor systems. An artificial system may have a better sensor system than a natural one: it can better evaluate the object of art, collect more information from the same art object, and present judgments that are inapprehensible by a human being. The artificial systems as art developers (composers, writers, game and industrial designers, architects, interior designers, etc.) can work at the personal level as well as at the community level. In social relationships this ability permits sharing the enjoyment of art with human counterparts. The human brain's cerebellum is involved in the processing of language, music, and other sensory temporal stimuli.

Art is
"1. Human efforts to imitate, supplement, alter, or counteract the work of nature.
3. The conscious production or arrangement of sounds, colors, forms, movements, or other elements in a manner that affects the sense of beauty" [36].

Conscious production implies conscious apprehension. The sensor system of the AIS collects information; the system of perception and conceiving analyzes it and compares it with the patterns stored in the memory. The evaluation of information is based on the criteria of beauty. The system learns patterns and criteria through

1. experience (an unintentional, subconscious process based on repetition and conceptualization)
2. the educational process (an intentional process).


Some criteria (rules) are culture-oriented, some are universal. For example, multimedia intentionally programs humans' stereotypes of beauty.


Beauty is [36] a delightful quality associated with harmony of form or color, excellence of craftsmanship, truthfulness, originality, or another property. The rationalist school takes aesthetics to include standards of taste and judgment permitting assessment of the good, the bad, and the prosaic. On the Humean account, aesthetics is the part of empirical psychology that identifies the features of the external world generally productive of agreeable feelings. Beauty is a product of the perceptual and emotional responses to an object, where the agreeable feelings are most reliably associated with judgments of aesthetic value.


A professional agent evaluates the objects of art intentionally, through cognition and consciousness. A dilettante does not think about rules; he/she/it gets an impression, an unintentional evaluation. It is a subconscious process.




Pythagoras discovered the musical scale and musical harmonies (concordant and discordant). The shape and color of natural objects are usually accepted as good patterns. Sharp shapes, discordant sounds, and unmatched colors are all bad patterns. Analysis of art objects can generate a good or bad "feeling".

Any designer knows that a "beautiful" mechanical part (with smooth connections between the part's elements) is more reliable and lives a longer life. The "feeling" of a part's beauty was developed by long experience, sometimes through the evolutionary process. A picture with strongly asymmetrical locations of the objects develops an impression of imbalance. In accordance with everyday experience it generates a bad feeling by association. This feeling blocks acceptance of this piece of art unless it represents something that compensates for the bad feeling. Many famous physicists, even before any experimental proof, accepted Einstein's theory as the right theory because it was a "beautiful theory".

Douglas Bagnall and David Hall from the New Zealand Archive created a Neuron Net with the ability to learn criteria of art evaluation from experts. This system can apply the learned knowledge to develop movies (see also CREATIVITY).

Some unconditional reflexes (see also REFLEXES) can be controlled by the Main Control system without the involvement of intelligent abilities. In this case a system uses a hard-coded logical function. Psychologists know that there are sounds (a singer's voice) that trigger a woman's sexual psychical behavior. The Main Control system can be connected to the inner sensor system. In this case the system generates the reflection of input information as a sensible reaction. It is similar to resonance. This type of reaction can be seen in art apprehension.

In some cases a system performs the reverse procedure: it delivers the type of art that fits the mood of the recipient. As was described before, Walt Disney Co. [New Scientist, 24.01.2006] has created a media player that selects songs based on its owner's latest mood. The device has wrist sensors that measure body temperature, perspiration, and pulse rate. It uses these measurements to build a profile of what music or video the owner would prefer played when he/she is hot, cold, dry, or sweaty, and when their pulse is racing or slow. The device then comes up with suggestions to fit each profile, either using songs or videos in its library or downloading something new that should be suitable. If the owner rejects the player's selection, it learns and refines the profile. So, over time the player should get better at matching bodily measurements with the owner's moods. This type of relationship can be seen between two artificial systems. It resembles compassion and emotions.

ARTIFICIAL LIFE

Artificial Life as a Model of the Natural One

All definitions in this book define the artificial life terms, but it is up to the reader to expand them to the natural life terms.

Modeling of real processes is a powerful method of research. Any model is based on an existing detailed description of the modeled process. Unfortunately, many of the important natural life processes (creativity, intuition, etc.) do not have an adequate definition and description.



An artificial process model is based on a detailed description of the artificial process. In some cases the description is not the same as for the natural life process. This limits the use of models of artificial life processes as models of natural life processes. But in some cases, to some extent, it is possible to use artificial life as a model of the natural one.

Artificial Life

New advances in the development of Artificial Intelligent Systems move us close to new phenomena such as Artificial Life. An Artificial Life is the chain of events of development, evolution, existence, and psychology of Artificial Intelligent Systems. Life is "the property or quality that distinguishes living organisms from dead organisms and inanimate matter, manifested in functions such as metabolism, growth, reproduction, and response to stimuli or adaptation to the environment originating from within the organism" [36].

An Artificial Intelligence researcher (Josh Bongard) at the University of Zurich shows the possibility of the existence of artificial life (virtual life) with virtual intelligence [31]. This type of life does not have natural metabolism, but perhaps it is possible to speak about virtual metabolism that supports virtual growth. Metabolism is "the complex of physical and chemical processes occurring within a living cell or organism that are necessary for the maintenance of life. In metabolism some substances are broken down to yield energy for vital processes" [36].

Professor Chris Melhuish of the Intelligent Autonomous Systems Laboratory (University of the West of England) and his team created EcoBot I: a sugar-powered autonomous robot (Fig. II-29, 30). It is a 960 g robot, powered by microbial fuel cells (MFCs), that performs a photo-tactic (light-seeking) behavior. This robot does not use any other form of power source such as batteries or solar panels. It is 22 cm in diameter and 7.5 cm high. The transformation of "food" into power by this system can be called metabolism ([email protected]).

By definition, "A MAN is a member of the genus Homo, family Hominidae, order Primates, class Mammalia, characterized by erect posture and an opposable thumb, especially a member of the only extant species, Homo sapiens, distinguished by a highly developed brain, the capacity for abstract reasoning, and the ability to communicate by means of organized speech and record information in a variety of symbolic systems" [36]. Pre-literate societies had limited resources for recording what was of value to them [34].

The same definition (Homo sapiens) can be used to characterize the advanced Artificial Intelligent Systems: humanoids. Soul is the animating and vital principle in human beings, credited with the faculties of thought, action, and emotion and often conceived as an immaterial entity [36]. This definition gives another connection between life and intelligence. It will be shown that biological and intellectual adaptations are subjects of two different parts of a definition of intelligence [31]. The connection between intelligence and life was elsewhere presented in [19] by Dr. A. Meystel.



Virtual creatures, with muscles, senses and primitive nervous systems, have been "grown"

from artificial embryos in a computer simulation [31]. The multi-celled organisms could be

the first step toward using artificial evolution to create intelligent life from scratch.


Josh Bongard ran the simulation until each cell had grown into a creature of up to 50 cells. He then tested each one to see how well it pushed a simulated box (Fig. II-28). By setting one creature against another, Bongard was able to find which cells grew into the most effective "pushing" creatures. He then took the genomes that led to the most successful creatures and mixed them to produce new genomes for his virtual embryos, which he grew and tested. Bongard, who reported the work at the International Workshop on Biologically Inspired Robotics at HP Labs, Bristol, now has a bunch of creatures that excel at box-pushing.


The Intelligent Autonomous Systems Laboratory (Dr. Ian Kelly, British university) has created the robot SlugBot (Fig. II-31) that can "eat" slugs and develop electric power (artificial metabolism).


The American company UGOBE (Fig. II-32) created artificial intellectual life forms similar to natural life. The dinosaur Pleo (created by this company) can be tired, excited, afraid, happy, etc. It can make smooth movements of any part of its body.


"The ideas of the Origin of Species are applied not only to the domain of the organic world and living creatures, but to the inorganic world as well, and the objects of application include inanimate and non-living creatures" (A. Meystel).


Fig. II-28




Fig. II-29. EcoBot I fully assembled ([email protected])

Fig. II-30. EcoBot I components: two stacks of four MFCs connected in series; bank of capacitors (accumulator); circular piece of styrene material; photo-detecting diodes; caster wheels; high-efficiency high-torque Escap motors; electronic control circuit ([email protected])




Fig. II-31. SlugBot


Fig. II-32 Pleo (http://www.pcmag.com/article2/0,1895,1918705,00.asp)




PRINCIPLES OF THE ARTIFICIAL BRAIN DESIGN

Analysis of brain activities shows that it is reasonable to design the artificial brain based on several principles:

1. The structure should be designed as a multilevel and multiresolutional system
2. It can be designed as a distributed control system with multiple locations of the system's parts
3. Each brain function should be assigned to a separate module
4. Each module's technology should be designed not as part of a uniform technology (for example, the Neuron Net or Genetic Algorithm) but using the technology that is best for the specific application
5. Execution of similar functions or steps of functions can be done in the same module.

It is important to have substitution of failed modules (see Robustness). This replacement is possible because all intellectual functions are based on two main procedures of data manipulation: learning and reasoning. Some systems, such as the control systems of the symmetrical (left and right) parts of a body, execute similar procedures.

EVOLUTION AND INTELLIGENCE

The term "evolution" in the AIS is connected to two different phenomena. The first one is a method of problem solving: the Genetic Algorithm (see APPENDIX 6). The second one is the process of a system's adaptation to new conditions.

The ability to adapt is one of the important intellectual abilities of the AIS (see AUTONOMOUS). "There are two types of adaptation:
1. short-term time-spatial adaptation
2. long-term multi-generational adaptation. The last one is referred to as 'evolution'" (Dr. Alex Meystel).


The artificial gene in a Genetic Algorithm represents the smallest unit of information. Like the natural set of genes (a chromosome), it controls the physical development and behavior and determines a particular characteristic of the system. "Neurogeneticists claim that genes determine… level of intelligence…" [32].


Josh Bongard [31] took the genomes that led to the most successful creatures and mixed them to produce new genomes for his virtual embryos, which he grew and tested. Bongard, who reported the work at the International Workshop on Biologically Inspired Robotics at HP Labs, Bristol, now has a bunch of creatures that excel at box-pushing (see Fig. II-28). "Evolution seems to figure out that it's useful to organize the growth process," says Rolf Pfeifer, who works with Bongard. "You get repeated structures, and they discover things like increasing body mass helps to push the block."


Evolution of the AIS is the process of a system changing (adapting) under external influence, "concerned with the development of the physical universe from unorganized matter", to a higher level of organization, with a stable change of the system's behavior that can be observed in the next generation.

There are two types of external influences in an artificial environment:

1. intentional influence by another Intelligent System (a human being or another AIS) to develop a new characteristic or behavior,
2. unintentional influence by the external environment that adapts the system to new external conditions and makes the system more efficient (for example, the automatic adjustment of a computer's pull-down menu in accordance with the frequency of a specific command's application).


Evolution improves the ability of the system to increase its level of intelligence. It is part of General Intelligence (see also WHAT IS INTELLIGENCE?). Evolution is not a mandatory ability of intelligent systems and is not a feature that defines a system as intelligent. Evolution is a tool to improve a system's intelligence.


"Intelligence is a control tool that has emerged as a result of evolution by rewarding systems with an increase of the probability of success under informational uncertainty" [33]. It is better to say that evolution in most cases creates conditions to increase the level of intelligence. Increasing the level of intelligence is possible only in an intelligent system. Evolution of a plant does not change the level of the plant's intelligence. A plant and some other species are not intelligent systems because they have hard-wired and hard-coded control systems (see DEFINITION OF INTELLIGENCE). These systems have a fully predictable response to any stimuli; they do not have "free will" (see FREE WILL).


Virtual creatures with muscles, senses, and primitive nervous systems have been "grown" from artificial embryos in a computer simulation [31] (see Fig. II-28). The multi-celled organisms could be the first step towards using artificial evolution to create intelligent life from scratch.


Chris Langton (the Center for Nonlinear Studies) created a colony of artificial ants, "vants" he calls them, for virtual ants. The vants search their environment, meet other vants, and reproduce to create new vants. The system starts with a bunch of randomly specified vants and gives them a few simple rules, such as what to do when they meet other vants and so forth. These rules define the behavior of the vants.


Artificial genetic code can be presented as a set of controlled switches in separate "Genetic Bases". The settings can be adjusted to new environmental conditions in a gradual manner or instantaneously by special control signals. Repeated experience of several generations can be presented in the genetic code, memorized, and transferred to the next generation as the result of evolution [38].
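The following Python sketch illustrates this idea of "Genetic Bases" (the class and method names are assumptions made for illustration, not taken from [38]): switch settings are adjusted gradually by repeated experience or set instantaneously by a special control signal, and the adapted settings are memorized and inherited by the next generation.

class GeneticBase:
    def __init__(self, n_switches=8):
        # Each switch setting is a value in [0, 1]; 0 and 1 are hard off/on.
        self.switches = [0.5] * n_switches

    def adjust(self, index, target, rate=0.1):
        # Gradual adaptation: move a setting toward the value favored by
        # repeated experience in the current environment.
        self.switches[index] += rate * (target - self.switches[index])

    def set_instantly(self, index, value):
        # Instantaneous change by a special control signal.
        self.switches[index] = value

    def next_generation(self):
        # The adapted settings are memorized and transferred to the offspring.
        child = GeneticBase(len(self.switches))
        child.switches = list(self.switches)
        return child

base = GeneticBase()
for _ in range(20):                # repeated experience of one generation
    base.adjust(0, target=1.0)
base.set_instantly(1, 0.0)         # direct control signal
offspring = base.next_generation()
print(offspring.switches[:2])      # adapted settings appear in the next generation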


GENDER OF ARTIFICIAL INTELLIGENT SYSTEMS

Different areas of Artificial Intelligent Systems application need specific gender abilities and characteristics. The specific characteristics of a human being are defined by differences in brain activity. The same differences can be implemented in an artificial intelligent system by variation of software or hardware.


There is a clear difference between male and female behavior. The corpus callosum is wider in the brains of women than in those of men; it may allow for greater cross talk between the hemispheres, possibly the basis for women's "intuition". It has also been used, for example, as the explanation of an increased single-task orientation of male, relative to female, learners; a smaller male organ is said to make it harder for the left and right sides of the brain to work together, and to explain a feminine ability to multitask.


The male type of the brain (masculine) [37]
is business oriented
is a single-task system
has better orientation in a spatial environment.


The female type of the brain (feminine)
has stronger social orientation
is a multitask system
has a wider range of sensitivity
has stronger sensitivity to body language and detail recognition
has more expressive facial and body language.


Unlike traditional manufacturing robots, which carry out single tasks sequentially, the three

female robots (Fembots) are able to switch between a number of jobs according to priority

and circumstance.


"If a man does the housework, he'll load the washing machine then stand there and watch it,"

Dr. Hill (founder of the robotic software firm Kadence, Australia) said. "A woman will go

off and do something else."
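A toy Python sketch of this difference follows (the jobs and priority values are invented for illustration): the controller keeps its pending jobs in a priority queue and always switches to the most urgent one as circumstances change, instead of finishing one job before looking at the next.

import heapq

tasks = []   # min-heap of (priority, job); a lower number means more urgent
for priority, job in [(2, "load washing machine"),
                      (1, "answer the door"),
                      (3, "water the plants")]:
    heapq.heappush(tasks, (priority, job))

# A new event arrives while other jobs are still pending:
heapq.heappush(tasks, (1, "unload washing machine"))

while tasks:
    priority, job = heapq.heappop(tasks)   # switch to the most urgent job
    print(f"doing (priority {priority}): {job}")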


It is possible to design a system with a maximal set of abilities at their extreme values, but sometimes an application needs a specific set of abilities. For example, female senior citizens will be more comfortable with a female robot (female personality and appearance) as a helper rather than a male or neutral one. In this case the specific gender should be taken into consideration.

INSTINCT AND ALTRUISM

Instinct is
1. An inborn pattern of behavior that is characteristic of a species and is often a response to specific environmental stimuli: altruistic instincts in social animals.
2. A powerful motivation or impulse.
3. An innate capability or aptitude: an instinct for tact and diplomacy.


Altruism is unselfish concern for the welfare of others, selflessness [36]. It is not only an inborn ability; it can also be developed by education as a selective ability. In some cases it is a professional responsibility (bodyguards).

Instinct provides a response to external stimuli, which moves an organism to action, unless overridden by intelligence, which is creative and hence far more versatile. Since instincts take generations to adapt, an intermediate position, or basis for action, is served by memory, which provides individually stored successful reactions built upon experience. The particular actions performed may be influenced by learning, environment, and natural principles. Generally, the term instinct is not used to describe an existing condition or established state. It is debatable whether or not living beings are bound absolutely by instinct. Though instinct is what seems to come naturally or perhaps with heredity, general conditioning and the environment surrounding a living being play a major role. Predominantly, instinct is pre-intellectual, while intuition is trans-intellectual.

Instinct can be implemented partly as a hard-coded ability that is presented as a part of the hardware, or as an intelligent part of the system.
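A minimal Python sketch of this layering follows (all stimuli and rules are invented examples): a hard-coded instinct table proposes an immediate response to a stimulus, and the intelligent layer may override it when its knowledge of the situation suggests a better action.

INSTINCTS = {                 # inborn stimulus -> response table (hard coded)
    "sudden heat": "withdraw",
    "loud noise": "freeze",
}

def intelligent_override(stimulus, context):
    # The learned, creative layer: far more versatile than the fixed table.
    if stimulus == "loud noise" and context.get("expected", False):
        return "ignore"       # e.g., a noise the system itself has caused
    return None               # no override; let instinct act

def respond(stimulus, context):
    reflex = INSTINCTS.get(stimulus)
    override = intelligent_override(stimulus, context)
    return override or reflex or "explore"

print(respond("loud noise", {"expected": True}))   # ignore (intelligence wins)
print(respond("sudden heat", {}))                  # withdraw (instinct acts)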

CONCLUSION


Psychology of Artificial Intelligent Systems is a new branch of psychology. There are two important questions.


The first question: Is it really something new? Yes. Most people (not just my students) have problems accepting these ideas in the very beginning, not because they are very complicated but because they do not fit the existing understanding of the nature of artificial systems.


The second question: Are these ideas useful? Yes. They present descriptions of the new systems, their possible features and abilities, and present a guideline on how to design the new systems. They prepare a human being for the new environment.


The actual behavior of fully autonomous advanced artificial intelligent systems cannot be predicted in some cases. It is important therefore to prognosticate possible dangerous results of their behavior and protect the environment and human beings from their unauthorized actions. It is very important to answer the questions that arise about the relationship between a human being and an Artificial Intelligent System.




REFERENCES:


1. Artificial Intelligence with Dr. John McCarthy. Conversation on the Leading Edge of Knowledge and Discovery with Dr. Jeffrey Mishlove, 1998.
2. Mind As Society with Dr. Marvin Minsky. Conversation on the Leading Edge of Knowledge and Discovery with Dr. Jeffrey Mishlove, 1998.
3. Language and Consciousness. Part 4: Consciousness and Cognition with Dr. Steven Pinker. Conversation on the Leading Edge of Knowledge and Discovery with Dr. Jeffrey Mishlove, 1998.
4. Unlocking Your Subconscious Wisdom. Part 1: Using Intuition with Dr. Marcia Emery. Conversation on the Leading Edge of Knowledge and Discovery with Dr. Jeffrey Mishlove, 1998.
5. Mind Over Machine with Dr. Hubert Dreyfus. Conversation on the Leading Edge of Knowledge and Discovery with Dr. Jeffrey Mishlove, 1998.
6. Mind As A Myth with U. G. Krishnamurti. Conversation on the Leading Edge of Knowledge and Discovery with Dr. Jeffrey Mishlove, 1998.
7. The Transcendence of the Ego. An Existentialist Theory of Consciousness by Jean-Paul Sartre. Hill and Wang, New York, 1997.
8. Psychology by Peter Gray, Worth Publishers, 1999.
9. Philosophy, History & Problems by Samuel Enoch Stumpf, McGraw-Hill, 1994.
10. Computers and the Mind with Howard Rheingold. Conversation on the Leading Edge of Knowledge and Discovery with Dr. Jeffrey Mishlove, 1998.
11. Decision Support and Expert Systems. Management Support Systems by Efraim Turban. Prentice Hall, 1995.
12. Foundations of Neural Networks by Khanna, Addison-Wesley, 1990.
13. Neural Networks and Physical Systems with Emergent Collective Computational Abilities by Hopfield, J. Proceedings of the National Academy of Sciences USA 79, 1982.

14. McNeill F. M., Thro E. Fuzzy Logic. A Practical Approach. AP Professional, 1994.
15. Zadeh L., Kacprzyk J. Fuzzy Logic for the Management of Uncertainty, NY, John Wiley & Sons, Inc., 1992.

16. Albus J., Meystel A. Behavior Generation in Intelligent Systems, NIST, 1997.

17. Meystel A. Evolution of Intelligent Systems Architectures. What Should Be Measured?

Performance Metrics for Intelligent Systems, Workshop, August 14-16, 2000,

Gaithersburg, MD

18. Atkinson R. L., Atkinson R. C., Smith E. E., Bem D. J., Nolen-Hoeksema S. Hilgard‘s

Introduction to Psychology, Harcourt Brace & Co. 1996

19. Meystel A. Semiotic Modeling and Situation Analysis: An Introduction, AdRem, Inc., 1994.

20. Measuring Performance of Systems with Autonomy: Metrics for Intelligence of

Constructed systems. Per MIS August 14-16, 2000, Gaithersburg, MD

21. Proud R. W., Hart J. J., and Mrozinski R. B. Methods for Determining the Level of Autonomy to Design into a Human Spaceflight Vehicle: A Function Approach. PerMIS 2000.



22. Cawsey A. The Essence of Artificial Intelligence. Prentice Hall, 1995

23. Dean T., Allen J., Aloimonos Y. Artificial Intelligence. Theory and Practice. The

Benjamin/Cummings Publishing Company, 1995.

24. Gersting J. Mathematical Structures For Computer Science, W. H. Freeman and Co. 1999

25. Negnevitsky M. Artificial Intelligence: A Guide to Intelligent Systems, Addison-Wesley, 2001.

26. Polyakov L. Structure Approach to the Intelligent Design. Proceedings of the 2002

PerMIS Workshop August 13-15, 2002.

27. Russell S. Norvig P. Artificial Intelligence. A Modern Approach. Prentice Hall, 1995

28. Albus J., Meystel A. Behavior Generation in Intelligent Systems, NIST.

29. Franklin S., Graesser A. Is it an Agent, or just a Program? A Taxonomy for Autonomous Agents, PerMIS. http://www.msci.memphis.edu/~franklin/AgentProg.html#agent

30. Polyakov L. M. Agent with Reasoning and Learning: The Structure Design,

Performance Metrics for Intelligent Systems, Workshop, August 14-26, 2004,

Gaithersburg, MD.

31. Bongard Josh. "Animals" Grown from an Artificial Embryo. EPSRC/BBSRC International Workshop Biologically-Inspired Robotics: The Legacy of W. Grey Walter, 14-16 August 2002, HP Bristol Labs, UK.

32. Freeman W. J., How Brains Make Up Their Minds, PHOENIX, 1999.

33. Meystel A. Evolution of Intelligent Systems Architectures. What Should Be Measured?

Performance Metrics for Intelligent Systems. Workshop. August 14-16, 2000,

Gaithersburg, MD.

34. Robinson D. N. The Great Ideas of Philosophy, The Teaching Company, 2004.

35. Dennett D. C. Consciousness Explained, London, The Penguin Press, 1992; Dennett D. C., Kinsbourne M. The Nature of Consciousness: Philosophical Debates.

36. American Heritage Talking Dictionary. Copyright © 1997 The Learning Company, Inc.

37. Pease Allan, Why Men Don't Listen and Women Can't Read Maps, EKSMO-Press, 1998.

38. Jubak J. In the Image of the Brain, The Softback Preview, 1994

39. Albus J. People‘ capitalism. Economics of the Robot Revolution.

http://www.peoplescapitalism.org/people.htm#front

40. Polyakov L.M., In Defense of the Additive Form for Evaluating Vectors, Measuring the

Performance and Intelligence of Systems: Proceeding of the 2000 PerMIS Workshop.

August 14-16, 2000.

41. Brush Michael, The Coming Crackdown on CEOs. http://articles.moneycentral.msn.com/Investing/CompanyFocus/TheComingCrackdownOnCEOs.aspx

42. Wallis C., Steptoe S., How to Bring Our Schools Out of the 20th Century, Time,

December 18, 2006.

43. Volkan V. Killing in the Name of Identity, A Study of Bloody Conflicts, Pitchstone

Publishing, 2006




APPENDIX 1

BRAIN DEVELOPMENT




The Brain

Position of each neuron is determined by the genetic code.

[Figure source: http://en.wikipedia.org/wiki/Neuron#Anatomy_and_histology]





[Figure: a neuromuscular junction. Source: http://en.wikipedia.org/wiki/Chemical_synapse]




Detailed view of a neuromuscular junction:

1. Presynaptic terminal

2. Sarcolemma

3. Synaptic vesicle

4. Nicotinic acetylcholine receptor

5. Mitochondrion


There are two different scales at which the brain operates. One such scale of the nervous system is composed of circuits made up of large fibers usually called axons. These circuits operate by virtue of nerve impulses that are propagated along the fibers by neighborhood depolarization of their membranes.


The connections between neurons (synapses) take place for the most part within these fine fibers. Presynaptically, the fine fibers are the terminal branches of axons that used to be called teledendrons. Both their existence and their name have been largely ignored. Postsynaptically, the fine fibers are dendrites that compose a feltwork within which connections (synapses and electrical ephapses) are made in every direction. This feltwork acts as a processing web.

Chemical Synapses

Chemical synapses are specialized junctions through which cells of the nervous system

signal to one another and to non-neuronal cells such as muscles or glands. A chemical synapse between a motor neuron and a muscle cell is called a neuromuscular junction.

Chemical synapses allow the neurons of the central nervous system to form interconnected neural circuits. They are thus crucial to the biological computations that underlie perception

and thought. They also provide the means through which the nervous system connects to and

controls the other systems of the body.

The human brain contains a huge number of chemical synapses, with young children having about 1,000 trillion. This number declines with age, stabilizing by adulthood. Estimates for

an adult vary from 100 to 500 trillion synapses.

The word "synapse" comes from "synaptein" which Sir Charles Scott Sherrington and his colleagues coined from the Greek "syn-" meaning "together" and "haptein" meaning "to clasp". Chemical synapses are not the only type of biological synapse: electrical and

immunological synapses exist as well. Without a qualifier, however, "synapse" by itself most commonly refers to a chemical synapse.


Relationship to Electrical Synapses


An electrical synapse is a mechanical and electrically conductive link between two abutting

neurons that is formed at a narrow gap between the pre- and postsynaptic cells known as a

gap junction. At gap junctions, cells approach within about 3.5 nm of each other (Kandel), a


much shorter distance than the 20 to 40 nm distance that separates cells at chemical synapses

(Hormuzdi). As opposed to chemical synapses, the postsynaptic potential in electrical

synapses is not caused by the opening of ion channels by chemical transmitters, but by direct

electrical coupling between both neurons. Electrical synapses are therefore faster and more reliable than chemical synapses. Electrical synapses are found throughout the nervous

system, yet are less common than chemical synapses.

The Brain Development Stages


Birth to 1 Month. ADAPTIVE REFLEXES
A child shows
- basic responses to stimulus – reflexes
- learning many little correlations of position and sensor input
- generating mental maps of the different positions of its body
- feedback (positive and negative) is provided by various hard-wired pleasure and pain stimuli
- signals of comfort and discomfort teach the brain what works and what does not


1 – 4 Months. CIRCULAR REACTIONS

- the basic reflexes of the first month are now chained together, creating repetitive motions

- visually-guided reaching begins to occur

- recognition of the mother‘s face


4 - 7 Months. SECONDARY CIRCULAR REACTIONS
- developing simple goal-directed behavior
- begins the training of the plan-and-goal layer of intelligence
- the brain is learning to make use of cause-effect relationships


Gruber (http://en.wikipedia.org/wiki/Child_psychology#Infancy) thinks that the development of logic and the coordination between means and ends occur from nine to twelve months. This is an extremely important stage of development, holding what Piaget calls the "first proper intelligence." Also, this stage marks the beginning of goal orientation, the deliberate planning of steps to meet an objective.


7 - 9 Months. COORDINATION OF SECONDARY CIRCULAR REACTIONS
- the brain develops intentionality and creativity, exhibits means-end behavior, including the use of intermediate actions to achieve the ultimate goal (new actions are not being invented yet, but the brain explores the many uses, both familiar and novel, for the motions it has already learned)
- a baby loses the ability to differentiate sounds of different languages; an adult person accepts these sounds as the same.




9 - 15 Months. THIRD LEVEL OF CIRCULAR REACTIONS
- the brain has a fairly complete mental model of what its body can do and what effect it has on the environment
- the brain begins to direct the body to perform old actions in new contexts and to find new actions for old situations, performing experiments to see what happens (the process goes not through hypothesis and intentional experiment, but by trial and error)
- the brain is probably developing a symbolic model of its physical capabilities


15 - 24 Months. SIMULATION OF EVENTS
- the representational model is being exercised, validated, and expanded
- language and symbolic communication are coming online now, and these tools are used to further expand the mental model of the world
- a child begins to recognize itself in the mirror


24 Months – as long as possible. DEVELOPMENT OF THE EXPANDED WORLD MODEL
- further expansion of the mental world model.

Note:
A cat and other animals cannot recognize themselves in the mirror. Some groups of dolphins and primates can do this. A recent experiment with elephants at the Bronx Zoo (New York) shows that these animals can recognize themselves in the mirror.
New results show (Scientists: New phylum sheds light on ancestor of animals, humans. University of Florida, http://physorg.com/news81711681.html) that our common ancestor did not have a brain but rather a diffuse neural system in the animal's surface.


A reconstructed genetic record reported in the Nature article also implies that the brain might have evolved independently more than twice in different animal lineages, Moroz said. This conclusion sharply contrasts with the widely accepted view that the centralized brain has a single origin, Moroz noted.



APPENDIX 2

ANALYSIS OF DEFINITIONS OF INTELLIGENCE



Analysis


Axiom: A mentally healthy human baby, as well as a grown human being, is an intelligent system without any age limitations ("The baby test").
It does not mean that a human being with some mental problems is not an intelligent person. "The baby test" poses a big problem for many types of existing definitions of intelligence. This axiom tells only that the availability of reflexes is a condition of the existence of intelligence. It defines just the lower limit of the area of intelligent existence. It is a necessary but not sufficient condition. Unconditional reflexes are not intelligent processes (see also REFLEXES).


Several groups of classification can illustrate the variety of existing definitions. Some

definitions and groups can overlap others. Each group is presented by one or several

examples.


1. This group of definitions represents only a list of some "intelligent" abilities:

"Intelligent systems are transforming the way we design, fabricate, operate and even dispose complex engineering artifacts" [27].

This approach is based only on naming some common abilities. It is a very simplistic way of definition design. A baby cannot "design, fabricate, operate and even dispose complex engineering artifacts".


2. This group of definitions emphasizes the importance of learning and adaptation:

"In general, intelligence embodies the ability to learn from experience and adapt successfully to the environment" [14].

"Intelligence to refer to adaptive behavior of the individual usually characterized by some elements of problem solving and directed by cognitive processes of acquiring information or knowledge" [31].

The New Encyclopedia Britannica gives a definition of intelligence as "mental quality that consists of the ability to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment".

Webster's New Universal Unabridged Dictionary definition: Intelligence is "(a) the ability to learn or understand from experience; the ability to acquire and retain knowledge; mental ability; (b) the ability to respond quickly and successfully to a new situation; use of the faculty of reason in solving problems, directing conduct, etc. effectively; (c) in psychology, measured success in using these abilities to perform certain tasks".

What does it mean, "some elements of problem solving"? "mental ability; (b) the ability to respond quickly and successfully to a new situation"? A baby cannot "handle abstract concepts".



3. This group defines an intelligent system as a goal-driven system:

"Intelligence is the ability for a system to adapt its behavior to meet its goals in a range of environment" [24].

"Intelligence: the ability to determine behavior that will maximize the likelihood of goal satisfaction in a dynamic and uncertain environment" [28].

"David Wechsler's general description of intelligence: the global capacity to act purposefully, to think rationally and to deal effectively with the environment". "Most psychologists can accept this definition" [15].

What does it mean, "to think rationally and to deal effectively with the environment"?

"An intelligent agent is something that can act independently with a well-defined goal… should be able to adapt what it is doing based on information it receives from its environment" [21].

A goal is an important but only one of the features of intelligence.


4. This approach defines an intelligent system as a creative autonomous system:

"We view intelligence as sitting in a collection of related qualities that include autonomy and creativity" [37].

Autonomy and creativity don't give a full description of intelligence. A baby cannot demonstrate creativity at an early age. The brain develops creativity when a baby is 7 – 9 months old.


5. This group underlines the importance of knowledge and reasoning. For the ancient Greeks, "intelligence" and "knowledge" were synonyms [61].

"Intelligence is flexible, appropriate, and rapid application of available knowledge" [16].

Intelligence is "The capacity to acquire and apply knowledge. The faculty of thought and reason… Superior powers of mind". "Mind: 1. The human consciousness that originates in the brain and is manifested especially in thought, and imagination. 2. The collective conscious and unconscious processes in a sentient organism that direct and influence mental and physical behavior. 3. The principle of intelligence; the spirit of consciousness regarded as an aspect of reality…" (American Heritage Dictionary).

In the same dictionary "smartness" has the same definition as intelligence. This is a mix of a real definition and a description of abilities. It is impossible to define intelligence through the term "mental": "mental" is a synonym of "intelligence".

There is another definition of the word "smart". "Smart – characterized by sharp, quick thought. Smart is often a general term implying mental keenness; more specifically it can refer to practical knowledge, ability to learn quickly, or to sharpness or shrewdness" (American Heritage Dictionary). So smartness is a highly dynamic kind of intelligence with a goal that directs to personal gain.

"Intelligence is based on logic and knowledge (convergent thinking)" [19].

Knowledge collection and reasoning are very important characteristics of intelligence. As we will see later, it is a reasonable definition.

"The system MUST be able to learn. The system MUST be autonomous. That is to say, it MUST be able to do things by itself (however, it may choose to accept aid). The system MUST be able to reason. The system MUST be able to develop self-awareness". (Philip Nettleton, psychologist, University of New England, Armidale, Australia)

Self-awareness develops by the middle of the second year of life.


6. This group identifies an intelligent system as an information system:

"Intelligence is defined as a total information-processing capacity of the organism which represents the size of the brain in excess of that needed to control routine body functions" (fish and human beings have the same intelligence level?) [31].

This definition looks like a control system description. Different human beings have different levels of intelligence.


7. Some definitions underline the ability of the system to adapt appropriately to a changing environment:

"Intelligence is an ability of a system to act appropriately in an uncertain environment, where appropriate action is that which increases the probability of success, and success is the achievement of behavioral subgoals that support the system's ultimate goal" [1].

It is a very reasonable definition. This definition does not permit the evaluation of a "smart" product for promotion into the global market, because in most cases a product does not act and can be evaluated only by possibilities, not abilities (see "Definition Development" below).

"An intelligent system can respond to the environment in a variety of ways… can explore its surroundings, manipulate objects or seek communication with other intelligent systems" [17].

"Intelligence can be defined as the capability of a system to adapt its behavior to an ever-changing environment" [47].

"Intelligence – showing sound judgment and rationality: mentally acute. Intelligent usually implies the ability to cope with demands created by novel situations and new problems, to apply what is learned from experience, and to use the power of reasoning and inference effectively as a guide to behavior".

A baby cannot "manipulate objects" nor "show sound judgment and rationality" at an early age.


8. G. Berg-Cross defines intelligence as the ability to manipulate symbolic representations, as cognitivists did at earlier stages:

"Concept of intelligence as basically cognition [Newell, 1982]: the capacity to construct and manipulate "approximate models" that are mapped to the environment and determine "appropriate" action" [9].

Unconditional reflexes manipulate symbolic representations, but this is not an intelligent process. Chaotic manipulation of symbolic representations is not a property of intelligence. The goal defines the meaning of the process. A human baby does not have the capacity to construct and manipulate symbolic representations at such an early age, but still is an intelligent system (the baby test).


9. Sir Francis Galton makes a clear connection between intelligence and the physical objects-sensors (the materialist's point of view):

"Intelligence is a question of exceptional sensory and perceptual skills, which are passed from one generation to the next. Because all information is acquired through the senses, the more sensitive and accurate an individual's perceptual apparatus, the more intelligent the person" [5].

It is a very important element of the artificial system design.


10. Some authors understand the existence of different types of intelligence and propose dual definitions.

"Raymond Cattell proposes that there are two different types of intelligence, which he called fluid intelligence and crystallized intelligence. Fluid intelligence refers to our ability to gain new knowledge and to attack and solve novel problems. Being both genetically and biologically determined, it consists more of our capacity for learning new things. Crystallized intelligence refers to the actual accumulation of knowledge over our life span. Research (Horn, 1978) has found that crystallized intelligence tends to increase with age, while fluid intelligence tends to decrease after about age 40" [32].

"Sternberg suggests that there are actually three kinds of components (triarchic theory of intelligence)… that allow us to learn (knowledge acquisition components)… to solve specific problems (performance components)… that allow us to understand how, in general, to solve the problems we face (metacomponents)" [38].

Duality of intelligence can be easily understood through artificial intelligence definitions: "General Intelligence (A computer that acts humanly) and Specific Intelligence (A computer that performs a specific job)" [27].




"Artificial system intelligence: (1) native intelligence, expressed in the specified complexity inherent in the information content of the system, and (2) performance intelligence, expressed in the successful (i.e., goal-achieving) performance of the system in a complicated environment" [33].

It is a very productive approach.


11. The most "synthetic" definition is presented in [40]:
"Intelligence is a control tool that has emerged as a result of evolution by rewarding systems with increase of the probability of success under informational uncertainty. Intelligence allows for a redundancy in its features of functioning simultaneously with reduction of computational complexity by using a loop of semantic closure equipped by a mechanism of generalization for the purposes of learning. Intelligence grows through the generation of a multiresolutional system of knowledge representation and processing."

Intelligence is not a tool but an outcome of a control system. In this case we should define control as a system that "exercises authoritative or dominating influence over; directs"; and brain as "…the primary center for the regulation and control of bodily activities, receiving and interpreting sensory impulses, and transmitting information to the body organs…" (American Heritage Dictionary). Informational uncertainty is not a mandatory condition of an intelligent system's environment.


Analysis shows that different definitions refer to intelligence as a mental quality, an ability of a system, behavior, application of knowledge, consciousness, a control tool, etc. Most of them accept intelligence as a set of abilities. Many definitions stress the importance of adaptation to the environment (group 7). It is important to mention the distinction between biological or physical adaptation (the ability to change a body) and intellectual adaptation (the ability to make a choice of action). Adaptation to the environment is one of life's defining abilities: "The property or quality that distinguishes living organisms from dead organisms and inanimate matter, manifested in functions such as metabolism, growth, reproduction, and response to stimuli or adaptation to the environment originating from within the organism" (American Heritage Dictionary). It makes the connection between life and intelligence. A newborn baby has reflexes. New research (Josh Bongard) [10] shows the possibility of artificial life (virtual life) existing with virtual intelligence. This type of life does not have natural metabolism, but perhaps it is possible to speak about virtual metabolism that supports virtual growth. It will be shown that biological and intellectual adaptations are subjects of two different parts of a definition of intelligence. The connection between intelligence and life was presented elsewhere in [41] by Dr. A. Meystel.


12. This group represents specific definitions of AI.

"Artificial Intelligence is the science of making machines do things that would require intelligence if done by man" [7].




"Artificial Intelligence is the ability of a human-made machine (an automaton) to emulate or simulate human methods for the deductive acquisition and application of knowledge and reason" [8].

"Artificial Intelligence is the study of mental faculties through the use of computational models" [11].

"Artificial Intelligence is the science of designing computer systems to perform operations that mimic human thinking and do "intelligent" things" [43].

"Systems that think like humans. Systems that act like humans. Systems that think rationally. Systems that act rationally" [51].

"Artificial Intelligence is the study of ideas that enable computers to be intelligent" [55].

"Artificial Intelligence, the capacity of a digital computer or…robot… to perform tasks commonly associated with the higher intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize or learn from past experience" (The New Encyclopedia Britannica).


"AI is the science and engineering of making intelligent machines" (www.cera2.com/ee/ai.htm).

Some definitions resemble a description of a Nobel Prize winner's abilities. They refuse to accept the existence of AI. The authors of these definitions don't realize that their own abilities don't match these definitions.

As we can see, there is a variety of definitions. It is not a complete but a representative pool of opinions. Some of them contradict the baby test. It is difficult to answer these questions: Which definition is right? Which definition is acceptable? Let us try to answer these questions.


References: (see References to PART 1)




APPENDIX 3
MEASUREMENT OF A MULTIVARIABLE FUNCTION



ADDITIVE FORM


The most important question of intelligence measurement is: is it an additive or a multiplicative function? Psychology and cognitive science calculate IQ based on the assumption that intelligence is an additive function of abilities. It is a very strong assumption because there is interdependence between some abilities. For example, reasoning is a basis of several other abilities such as generalization, intuition, etc. It is important to choose local abilities without interdependency: for example, generalization, intuition, associative thinking, object recognition, etc., but not reasoning, which is a part of these abilities.


The measurement is a process of assigning numbers to objects or events in accordance with certain rules of the system. The estimation is a process of assigning fuzzy values. Number and value assignment is possible just on a scalar scale. There are three types of axioms related to a measurement process: identity axioms, rank axioms, and additivity axioms [9]. The following set of known axioms is very important for the ability to measure.


Identity axioms
A = B or A ≠ B
If A = B, then B = A
If A = B and B = C, then A = C

Rank axioms
If A > B, then B < A
If A > B and B > C, then A > C

Additivity axioms
If A = D and B > 0, then A + B > D
A + B = B + A
If A = D and B = Q, then A + B = D + Q
(A + B) + C = A + (B + C)


These axioms determine four scale levels: the scale of names, the rank scale, the interval scale, and the ratio scale. The analyses of these scales are done in [9,16].

All these scales are one-dimensional scales and cannot be used to measure vectors. The multi-dimensional scales that we would use to measure vectors are not covered by axiom number 4, which says that only comparable quantities can be compared. It is possible to compare vectors only if weight functions are assigned to the vector's components.


It will be shown that only the weighted-sum approach and utility functions can be used in this case [5,10] as the method of multivariable scale aggregation that converts a vector into a sufficient scalar.



The intelligence measurement is not the same as a multiobjective optimization of systems with intelligence. The optimization can be done based on different scales (the scale of names, rank scale, interval scale, and ratio scale), but measurement can be done only on the interval and ratio scales [9].


There are many different methods of optimization [3,14, and others]. All existing methods use each function of the intelligence separately and determine preferences and a system's rank, but not an intelligence value. The additive function is presented in most of the research works [3,5-11,16,19,20 and others].


The values of separate intellectual abilities (variables) don't give any idea about the integrated value of artificial intelligence. Each variable has a certain level of usability. Many different forms of aggregation were introduced [4,5-8,10-13,15-20]. They can be divided into two main groups: a weighted arithmetic mean of variables (additive forms) [15,18,20] and multiplicative forms [4,15,18]. A multiplicative form is a multidimensional function and, as was mentioned above, cannot be used. Just one variable, even an unimportant one, that equals zero brings all evaluations based on multiplication down to zero. A single unimportant variable that has a dominant value can create an unreasonably high level of the evaluation function. Additive forms can be divided into two groups: the absolute, non-normalized (Σ W_i * F_i) and the relative, normalized (5) forms of variable presentation. The absolute form has a problem: weight functions (W_i) have to be measured against scales calibrated in units that are the opposite of the variable scale units. Weight functions of the relative forms are measured against a dimensionless scale.


Aggregation of the separate variables or their usability can be done on the basis of utility theory because utility reflects usability. For example, an American statistician, Harrington [10,16], proposed aggregation of the utility functions as

U = (Π_{i=1}^{n} U_i)^{1/k} .

As was mentioned above for multiplicative forms, this form doesn't work.


The utility vector [U_i] can represent the vector of abilities [A_i]. The utility of an intelligent alternative can be presented as [10]:

U_A = Σ_{i=1}^{n} U_i ,     (1)

where U_i is the utility of the i-th basic variable.

As is shown in [16], equation (1) can be translated into



V_A = Σ_{i=1}^{n} W_i(F_i) * V_i(F_i) ,     (2)

where W_i(F_i) is a weight function of the i-th variable F_i, and V_i(F_i) is a utility function measured by the universal utility scale for each basic variable.


The value of the weight function depends on the variable value (the second sandwich is less important to a hungry person than the first one).

A set of variables has to be named for each problem separately.


The function W_i(F_i) * V_i(F_i) is not linear. Suppose that W_i(F_i) incorporates the nonlinear part of the function and V_i(F_i) is the linear part of the function. In this case:

V_i(F_i) = [V(F_i^max) / F_i^max] * F_i .     (3)

This utility function is measured by the universal utility scale, so

V(F_i^max) = V_max .     (4)


From (2) and (4) we can get the quality index of the j-th alternative (domain specific) in nondimensional units:

Q_j = V_A / V_max = Σ_{i=1}^{n} W_i(F_i) * (F_i / F_i^max) .     (5)
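A short numeric sketch of equation (5) in Python follows (the variable values, maxima, and weights are invented for illustration): each variable is normalized by its maximum and summed with its dimensionless weight, producing the scalar quality index.

def quality_index(values, maxima, weights):
    # Q_j = sum over i of W_i(F_i) * (F_i / F_i^max)
    return sum(w * (f / f_max)
               for f, f_max, w in zip(values, maxima, weights))

F     = [7.0, 0.4, 55.0]      # measured variable values (mixed units)
F_max = [10.0, 1.0, 100.0]    # the maximum of each variable's scale
W     = [0.5, 0.3, 0.2]       # dimensionless weight functions (here summing to 1)

print(round(quality_index(F, F_max, W), 3))   # 0.58, in nondimensional units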


Usually one of the variables is the investment of the j-th alternative (C_j). In this case equation (5) can be rewritten as:

Q_j = Σ_{i=1}^{n-1} W_i(F_i) * (F_i / F_i^max) - W_C(C_j) * C_j / C_max ,     (6)

or

Q_j * (C_max / W_C) = Σ_{i=1}^{n-1} [W_i(F_i) / W_C] * C_max * (F_i / F_i^max) - C_j ,     (7)

where W_C(C_j) is a weight function of the variable C.


This equation presents the evaluation of the j-th alternative measured in C units. As a result, from equation (7), artificial intelligence variables can be measured by one universal scale.

Now we can choose any scale, even a financial one, as a real universal scale of the measurement.

C_j can be added to the left and the right parts of equation (7), which gives

I_j = Q_j * (C_max / W_C) + C_j = Σ_{i=1}^{n-1} [W_i(F_i) / W_C] * C_max * (F_i / F_i^max) .     (8)

This is the direct way to calculate profit. Let us look at the meaning of this equation.


Case 1. Suppose one person has 2 horses and another one has a car. They decide to make an exchange. It means that for these people 2 horses are equivalent (in utility terms) to one car. We measure the car's utility against the "horse" scale, not against the money scale. Unfortunately (or not), all horses are different and cannot be used as universal measurement units (the instrumentation Bible). Money, as we know, is nothing more than the result of an agreement between people. Money is just an abstract, convenient scale, even without any backing in gold as it had many years ago.

Case 2. Suppose one person is ready to pay $20,000 for a car but another one is ready to pay $25,000 for the same car. It means that the financial equivalent is not a constant value on the money scale.

Equation (8) sets the relationship between values of different natures, as a basis of barter exchange, and covers both of the described cases.


In some cases an expert group can evaluate the profit from each intelligence function, but in most cases it is not possible. In that case we can use equation (8).

It is understandable that we cannot measure intelligence with a high level of accuracy. But it is better to have an approximate evaluation rather than nothing. The level of accuracy may be improved by using a self-learning procedure.


The last question is how to determine the value of the weight function. The most known and usable method is an expert method [3,5,8-11,16,19, and others], but there are several analytical methods (in special cases) to find the value of this function [6,15,16]. Opponents of the expert method and the aggregation function complain about the application of human expertise as a source of information. They dispute an expert's ability to produce objective information.

Yes, a collective expertise has an element of subjectivism, but only a human being has the best sense of the weight of the intelligence function variables.



The new Microsoft Intelligence system for the Internet is based on weighting of functions by experts. The weighting takes "into account dozens of details – like the time of day and whether the user was in the office, in a meeting, on the phone or behind the wheel". In some cases the weight is measured in dollars and cents [13]. Expert Choice, Inc. created a decision-making system based on weighting of functions by experts.


Each separate intellectual ability can be measured by appropriate methods, but as an integrated value, intelligence has to be presented as a scalar. There are many different methods to measure each separate intellectual ability.

The intelligence measurement is not a new problem. The famous IQ and WAIS-3 [2] tests are possible ways to make an evaluation of human intelligence. These tests present an aggregated value of the multifunctional intelligence and convert a vector value into a scalar value. These tests can be used for artificial intelligence variable measurement.


The opponents of these tests point out possible social problems bound to these methods. In the case of artificial intelligence measurement this problem does not arise.


REFERENCE:


1. Albus J., Outline for a Theory of Intelligence. IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 3, May/June 1991.


2. Morris Charles G., Maisto Albert A., Psychology. Prentice Hall, 1999, 682p.

3. Dhar V., Stein R., Intelligent Decision Support Methods. Prentice Hall, 1997, 244p.

4. Gutkin L. S., Optimization of Radio Equipment. Sov. Radio, 1975, 167p. (in Russian).

5. Fishburn P. C., Additive Utilities with Incomplete Product Sets: Applications to Priorities and Assignments. Operations Research, V. 15, No. 3, 1967, p. 537-542.


6. Firebaugh Morris, Artificial Intelligence: A Knowledge-Based Approach. PWS-Kent Publishing Co., 1988, 736p.

7. Fishburn P. C., A Study of Independence in Multivariate Utility Theory. Econometrica, 37, No. 1, 1969, p. 7-121.

8. Fishburn P. C., Independence in Utility Theory with Whole Product Sets. Operations Research, V. 13, 1965, p. 28-45.




9. Hall A., Experience of Methodology for Large Engineering Systems. Soviet Radio, Moscow, 1975, 120p. (in Russian).

10. Von Neumann J., Morgenstern O., Theory of Games and Economic Behavior, 1944, 650p.

11. Martino J. P., Technological Forecasting for Decision Making. American Elsevier Company Inc., N.Y., 1972.

12. Markoff J., Microsoft Sees Software "Agent" as Way to Avoid Distractions. The New York Times, July 17, 2000.

13. Mitsuo Gen, Runwei Cheng, Genetic Algorithms and Engineering Optimization. A Wiley-Interscience Publication, 2000.

14. Pareto V., Manuel d'économie politique. Paris, 1927, 695p.


15. Pogogev I. B., Optimization of Variables and Quality Control. Znanie, 1972, 51p. (in Russian).

16. Polyakov L. M., Kheruntsev P. E., Shklovsky B. I., Elements of the Automated Design of the Electrical Automated Equipment of Machine Tools. Mashinostroenie, Moscow, 1974, 157p. (in Russian).

17. Schziver A., Forecast. Air Review, V. 16, No. 3, 1965, p. 12-23.

18. Schor J. B., Quality of the Manufacturing Product Evaluation. Znanie, 1971, 56p. (in Russian).

19. Sigford Y., Pazvin R., Project PATTERN: A Methodology for Determining Relevance in Complex Decision Making. IEEE Transactions on Engineering Management, V. EM-12, No. 1, 1965, 210p.

20. Russell Stuart, Norvig Peter, Artificial Intelligence: A Modern Approach. Prentice Hall, 1995, 931p.





APPENDIX 4

FUZZY LOGIC





FUZZY NUMBERS

[Figure: a fuzzy 8 versus a crisp 8, drawn over the axis 6 7 8 9 10; the interval 7 – 9 is the BASE of the fuzzy number.]

MEMBERSHIP FUNCTION
(The set of "eights" with a triangular membership function.)

Member    Degree of Membership
7         0
7.5       0.5
8         1
8.5       0.5
9         0
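A minimal Python sketch of this triangular membership function follows (the bounds 7, 8, 9 are those of the figure above): membership is 1 at the main value 8 and falls linearly to 0 at the base points 7 and 9.

def mu_eight(x, left=7.0, peak=8.0, right=9.0):
    # Triangular membership function for the fuzzy set of "eights".
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

for x in (7, 7.5, 8, 8.5, 9):
    print(x, mu_eight(x))     # reproduces the table: 0, 0.5, 1, 0.5, 0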


Crisp and fuzzy arithmetic operations.

Crisp: a = 3, b = 2.          Fuzzy: a = (-2, 3, 8), b = (-3, 2, 7).

Addition: a + b
3 + 2 = 5                     (-2,3,8) + (-3,2,7) = (-5,5,15)
The base ranges of the two fuzzy numbers are added (geometrically) together, forming the base of the arithmetic result:
Main value: 3 + 2 = 5.
First base width: 8 - (-2) = 10.
Second base width: 7 - (-3) = 10.
Sum of the base widths: 10 + 10 = 20.
The sum is divided by 2: 20/2 = 10.
Left point: 5 - 10 = -5.
Right point: 5 + 10 = 15.

Subtraction: a - b
3 - 2 = 1                     (-2,3,8) - (-3,2,7) = (-9,1,11)


Multiplication: a * b
3 * 2 = 6                     (-2,3,8) * (-3,2,7) = (-4,6,16)

Division: a / b
3 / 2 = 1.5                   (-2,3,8) / (-3,2,7) = (-8.5,1.5,11.5)
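The following Python sketch reproduces the simplified fuzzy arithmetic used in these examples (it follows the book's rule of halving the summed base widths; standard fuzzy interval arithmetic treats multiplication and division differently):

def fuzzy_op(a, b, op):
    # A fuzzy number is a triple (left, main, right). The main values are
    # combined with ordinary arithmetic; the result's base is half of the
    # summed base widths, placed symmetrically around the main value.
    la, ma, ra = a
    lb, mb, rb = b
    m = op(ma, mb)
    spread = ((ra - la) + (rb - lb)) / 2
    return (m - spread, m, m + spread)

a = (-2, 3, 8)
b = (-3, 2, 7)
print(fuzzy_op(a, b, lambda x, y: x + y))   # (-5, 5, 15)
print(fuzzy_op(a, b, lambda x, y: x - y))   # (-9, 1, 11)
print(fuzzy_op(a, b, lambda x, y: x * y))   # (-4, 6, 16)
print(fuzzy_op(a, b, lambda x, y: x / y))   # (-8.5, 1.5, 11.5)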


FUZZY LOGIC RULES

Fuzzy AND is a conjunction, or minimum, of the input values:
0.5 ∧ 0.7 = 0.5,   0 ∧ 1 = 0.
Fuzzy OR is a disjunction, or maximum, of the input values:
0.5 ∨ 0.3 ∨ 0.7 ∨ 0.8 = 0.8,   0 ∨ 1 = 1.

The rules for evaluating the fuzzy truth, T, of a complex sentence are:
T(A ∧ B) = min(T(A), T(B))
T(A ∨ B) = max(T(A), T(B))
T(¬A) = 1 - T(A)
T(A ∨ ¬A) ≠ T(True), although in Boolean logic A ∨ ¬A = True.

Modus ponens (Latin: affirmative mode; a rule of inference):
If A is true, then B is also true (A implies B); A; therefore B.


Fuzzy Sets

                      X1    X2    X3  …  X25 …
Set A                 0.8   0.2   0.7
Set B                 1.0   0.3   0.4
Union A ∪ B           1.0   0.3   0.7
Intersection A ∩ B    0.8   0.2   0.4
Difference A \ B      0     0     0.3
(Set A minus the portion of it that is also in Set B)
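A minimal Python sketch of these set operations follows (the difference is computed here as the bounded difference max(0, A - B), which is the reading that reproduces the Result row above):

A = [0.8, 0.2, 0.7]
B = [1.0, 0.3, 0.4]

union        = [max(a, b) for a, b in zip(A, B)]                  # [1.0, 0.3, 0.7]
intersection = [min(a, b) for a, b in zip(A, B)]                  # [0.8, 0.2, 0.4]
difference   = [round(max(0.0, a - b), 2) for a, b in zip(A, B)]  # [0.0, 0.0, 0.3]

print(union, intersection, difference)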




APPENDIX 5


NEURON NETWORK




Neuron net

A neuron net is built out of units, simple electronic processors often called neurons, connected to each other by wires that mimic not just the nerve fibers between neurons, called dendrites and axons, but even the synapses, the gaps across which neurons connect.

"Neuron network", "connectionism", and "parallel distributed processing" are all descriptive terms on roughly the same level. They all refer to a general approach to computation that relies on some analogy to the biological system of neurons and synapses [38, PART 2].

Neuron networks don't share the traditional division between software and hardware. They replace symbolic logic and programming with learning and evolution.

Forwardpropagation

Forwardpropagation is a supervised learning algorithm and describes the

"flow of information" through a neural net from its input layer to its output

layer.

The algorithm works as follows:

1. Set all weights to random values ranging from -1.0 to +1.0

2. Set an input pattern (binary values) to the neurons of the net's input layer

3. Activate each neuron of the following layer:

Multiply the weight values of the connections leading to this neuron with the

output values of the preceding neurons. Add up these values.

Pass the result to an activation function, which computes the output value of

this neuron

4. Repeat this until the output layer is reached

5. Compare the calculated output pattern to the desired target pattern and

compute an error value

6. Change all weights by adding the error value to the (old) weight values

7. Go to step 2

8. The algorithm ends if all output patterns match their target patterns


Example: Suppose you have the following 2-layered Perceptron:


[Figure: a 2-layered perceptron with two input neurons connected to one output neuron.]


Patterns to be learned:

input     target
0 1       0
1 1       1

First, the weight values are set to random values (0.35 and 0.81).

The learning rate of the net is set to 0.25.

Next, the values of the first input pattern (0 1) are set to the neurons of the

input layer (the output of the input layer is the same as its input).

The neurons in the following layer (only one neuron in the output layer) are

activated:


Input 1 of output neuron: 0 * 0.35 = 0

Input 2 of output neuron: 1 * 0.81 = 0.81

Add the inputs: 0 + 0.81 = 0.81

Compute an error value by

subtracting output from target: 0 - 0.81 = -0.81

Value for changing weight 1: 0.25*0*(-0.81) = 0

(0.25 = learning rate)

Value for changing weight 2:0.25*1*(-0.81)=-0.2025

Change weight 1: 0.35 + 0 = 0.35(not changed)

Change weight 2: 0.81 + (-0.2025) = 0.6075

Now that the weights are changed, the second input pattern (1 1) is set to the

input layer's neurons and the activation of the output neuron is performed

again, now with the new weight values:

Input 1 of output neuron: 1 * 0.35 = 0.35



Input 2 of output neuron: 1 * 0.6075 = 0.6075

Add the inputs: 0.35 + 0.6075 = 0.9575 (=output)

Compute an error value by

subtracting output from target: 1-0.9575 = 0.0425

Value for changing weight 1: 0.25 * 1 * 0.0425 =

0.010625

Value for changing weight 2: 0.25 * 1 * 0.0425 =

0.010625

Change weight 1: 0.35 + 0.010625 = 0.360625

Change weight 2: 0.6075 + 0.010625 = 0.618125

That was one learning step. Each input pattern had been propagated through

the net and the weight values were changed.

The error of the net can now be calculated by adding up the squared values of

the output errors of each pattern:

Compute the net error: (-0.81)^2 + (0.0425)^2 = 0.65790625

By performing this procedure repeatedly, this error value gets smaller and

smaller.


The algorithm is successfully finished if the net error is zero (perfect) or approximately zero.
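The following Python sketch reproduces this training procedure (a linear 2-input perceptron with the same initial weights, learning rate, and patterns as the walk-through above; the stopping threshold is an assumption):

patterns = [([0, 1], 0), ([1, 1], 1)]   # (input, target) pairs to be learned
weights = [0.35, 0.81]                  # the same "random" initial weights
rate = 0.25                             # learning rate

for step in range(200):
    net_error = 0.0
    for inputs, target in patterns:
        # Forward pass: weighted sum of the inputs (linear activation here).
        output = sum(w * x for w, x in zip(weights, inputs))
        error = target - output
        net_error += error ** 2
        # Weight change: learning rate * input * error, as in the example.
        weights = [w + rate * x * error for w, x in zip(weights, inputs)]
    if net_error < 1e-9:                # finished when the net error is ~zero
        break

# The first epoch gives net_error = 0.65790625, as computed above;
# the weights converge toward w1 = 1, w2 = 0.
print(step, [round(w, 4) for w in weights])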


Backpropagation


Backpropagation is a supervised learning algorithm and is mainly used by Multi-Layer-Perceptrons to change the weights connected to the net's hidden

neuron layer(s).

The backpropagation algorithm uses a computed output error to change the

weight values in backward direction.

To get this net error, a forwardpropagation phase must have been done

before. While propagating in forward direction, the neurons are being

activated using the sigmoid activation function.

The formula of the sigmoid activation is:

f(x) = 1 / (1 + e^(-x))


The algorithm works as follows:

1. Perform the forwardpropagation phase for an input pattern and calculate

the output error

199


Arpsychology and structured design of artificial intelligent systems

2. Change all weight values of each weight matrix using the formula

weight(old) + learning rate * output error * output(neurons i) *

output(neurons i+1) * ( 1 - output(neurons i+1) )

3. Go to step 1

4. The algorithm ends if all output patterns match their target patterns

Example: Suppose you have the following 3-layered Multi-Layer-Perceptron:

[Figure: a 3-layered perceptron with two input neurons, two hidden neurons, and one output neuron.]


Patterns to be learned:

input     target
0 1       0
1 1       1

First, the weight values are set to random values: 0.62, 0.42, 0.55, -0.17 for

weight matrix 1 and 0.35, 0.81 for weight matrix 2.

The learning rate of the net is set to 0.25.

Next, the values of the first input pattern (0 1) are set to the neurons of the

input layer (the output of the input layer is the same as its input).

The neurons in the hidden layer are activated:

Input of hidden neuron 1: 0*0.62+1*0.55 = 0.55

Input of hidden neuron 2: 0*0.42+1*(-0.17)=-0.17

Output of hidden neuron 1: 1 / (1+ exp(-0.55)) =

0.634135591

Output of hidden neuron 2: 1 / (1+exp(+0.17)) =

0.457602059



The neurons in the output layer are activated:

Input of output neuron: 0.634135591 * 0.35 +

0.457602059 * 0.81 = 0.592605124

Output of output neuron: 1/(1+exp(-0.592605124))

= 0.643962658

Compute an error value by

subtracting output from target: 0 - 0.643962658

= -0.643962658

Now that we have the output error, let's do the backpropagation. We start with changing the weights in weight matrix 2:

Value for changing weight 1: 0.25 * (-0.643962658) * 0.634135591 * 0.643962658 * (1 - 0.643962658) = -0.023406638
Value for changing weight 2: 0.25 * (-0.643962658) * 0.457602059 * 0.643962658 * (1 - 0.643962658) = -0.016890593
Change weight 1: 0.35 + (-0.023406638) = 0.326593362
Change weight 2: 0.81 + (-0.016890593) = 0.793109407

Now we change the weights in weight matrix 1:

Value for changing weight 1: 0.25 * (-0.643962658) * 0 * 0.634135591 * (1 - 0.634135591) = 0
Value for changing weight 2: 0.25 * (-0.643962658) * 0 * 0.457602059 * (1 - 0.457602059) = 0
Value for changing weight 3: 0.25 * (-0.643962658) * 1 * 0.634135591 * (1 - 0.634135591) = -0.037351064
Value for changing weight 4: 0.25 * (-0.643962658) * 1 * 0.457602059 * (1 - 0.457602059) = -0.039958271
Change weight 1: 0.62 + 0 = 0.62 (not changed)
Change weight 2: 0.42 + 0 = 0.42 (not changed)
Change weight 3: 0.55 + (-0.037351064) = 0.512648936
Change weight 4: -0.17 + (-0.039958271) = -0.209958271

The first input pattern has been propagated through the net.

The same procedure is used for the next input pattern, but with the changed weight values.

After the forward and backward propagation of the second pattern, one learning step is complete and the net error can be calculated by adding up the squared output errors of each pattern.

By performing this procedure repeatedly, this error value gets smaller and

smaller.



The algorithm finishes successfully when the net error is zero (perfect) or approximately zero.

Note that this algorithm also applies to Multi-Layer Perceptrons with more than one hidden layer.
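The worked example above can be checked with a short program. The sketch below follows the book's simplified update rule, in which the raw output error (target - output) is used for every weight matrix; it reproduces the numbers of the first learning step. The function name train_pattern is illustrative.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w1 = [[0.62, 0.42],   # weights from input 1 to hidden neurons 1 and 2
      [0.55, -0.17]]  # weights from input 2 to hidden neurons 1 and 2
w2 = [0.35, 0.81]     # weights from hidden neurons 1 and 2 to the output
rate = 0.25

def train_pattern(inputs, target):
    # forward propagation
    hidden = [sigmoid(sum(inputs[i] * w1[i][j] for i in range(2)))
              for j in range(2)]
    output = sigmoid(sum(hidden[j] * w2[j] for j in range(2)))
    error = target - output
    # weight matrix 2: weight += rate * error * out_hidden * out * (1 - out)
    for j in range(2):
        w2[j] += rate * error * hidden[j] * output * (1 - output)
    # weight matrix 1: weight += rate * error * input * out_h * (1 - out_h)
    for i in range(2):
        for j in range(2):
            w1[i][j] += rate * error * inputs[i] * hidden[j] * (1 - hidden[j])
    return error

train_pattern((0, 1), 0)   # first pattern of the example above
print(w1, w2)              # w1[1][0] = 0.512648..., w2[0] = 0.326593...
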


"What happens, if all values of an input pattern are zero?"

If all values of an input pattern are zero, the weights in weight matrix 1

would never be changed for this pattern and the net could not learn it. Due to

that fact, a "pseudo input" is created, called Bias that has a constant output value of 1.

This changes the structure of the net in the following way:


These additional weights, leading to the neurons of the hidden layer and the output layer, have initial random values and are changed in the same way as the other weights. Because the bias sends a constant output of 1 to the following neurons, it is guaranteed that their input values always differ from zero.

Self-organization

Self-organization is an unsupervised learning algorithm used by the Kohonen Feature Map neural net.

A neural net tries to simulate the biological human brain, and self-organization is probably the best way to do so.


It is commonly known that the cortex of the human brain is subdivided into different regions, each responsible for certain functions. The neural cells organize themselves in groups, according to incoming information.

Incoming information is not received by a single neural cell alone; it also influences other cells in its neighborhood. This organization results in a kind of map on which neural cells with similar functions are arranged close together.

A neural network can also perform this self-organization process. Such neural nets are mostly used for classification purposes, because similar input values are represented in certain areas of the net's map.

A sample structure of a Kohonen Feature Map that uses the self-organization algorithm is shown below:


Kohonen Feature Map with 2-dimensional input and 2-dimensional map (3x3

neurons)

As you can see, each neuron of the input layer is connected to each neuron on

the map. The resulting weight matrix is used to propagate the net's input values

to the map neurons.

Additionally, all neurons on the map are connected among themselves. These

connections are used to influence neurons in a certain area of activation around

the neuron with the greatest activation, received from the input layer's output.

The amount of feedback between the map neurons is usually calculated using

the Gauss function:

feedback_ci = e^(-|x_c - x_i|^2 / (2 * sig^2))

where x_c is the position of the most activated neuron,
x_i are the positions of the other map neurons, and
sig is the activation area (radius)


In the beginning, the activation area is large and so is the feedback between

the map neurons. This results in an activation of neurons in a wide area

around the most activated neuron.

As the learning progresses, the activation area is constantly decreased and

only neurons closer to the activation center are influenced by the most

activated neuron.

Unlike the biological model, the map neurons don't change their positions on

the map. The "arranging" is simulated by changing the values in the weight

matrix (the same way as other neural nets do).

Because self-organization is an unsupervised learning algorithm, no input/target patterns exist. The input values passed to the net's input layer are taken from a specified value range and represent the "data" that should be organized.

The algorithm works as follows (a code sketch follows the example below):

1. Define the range of the input values.
2. Set all weights to random values taken from the input value range.
3. Define the initial activation area.
4. Take a random input value and pass it to the input layer neuron(s).
5. Determine the most activated neuron on the map: multiply the input layer's output with the weight values; the map neuron with the greatest resulting value is said to be "most activated". Compute the feedback value of each other map neuron using the Gauss function.
6. Change the weight values using the formula: weight(old) + feedback value * (input value - weight(old)) * learning rate
7. Decrease the activation area.
8. Go to step 4.
9. The algorithm ends when the activation area is smaller than a specified value.

Example: see the sample applet, described below.

The shown Kohonen Feature Map has three neurons in its input layer, representing the values of the x-, y- and z-dimensions. The feature map is initially 2-dimensional and has 9x9 neurons. The resulting weight matrix has 3 * 9 * 9 = 243 weights, because each input neuron is connected to each map neuron. In the beginning, when the weights have random values, the feature map is just an unordered mess.


After 200 learning cycles, the map has "unfolded" and a grid can be seen.

As the learning progresses, the map becomes more and more structured.

It can be seen that the map neurons are trying to get closer to their nearest

blue input value.


At the end of the learning process, the feature map is spanned over all input

values.

The self-organization is finished at this point.
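The algorithm above can be sketched in a few lines. The following illustrative code assumes a 2-dimensional input and a 3x3 map as in the earlier figure; for step 5 it uses the common nearest-weight-vector criterion for the "most activated" neuron instead of the text's dot product, and all names and constants are assumptions.

import math, random

map_size = 3                     # 3x3 map neurons, as in the figure above
input_range = (0.0, 1.0)         # step 1: the range of the input values
# step 2: random weights taken from the input value range
weights = {(r, c): [random.uniform(*input_range) for _ in range(2)]
           for r in range(map_size) for c in range(map_size)}
sigma = 2.0                      # step 3: the initial activation area
rate = 0.25                      # assumed learning rate

while sigma > 0.1:               # step 9: stop when the area is small enough
    x = [random.uniform(*input_range) for _ in range(2)]          # step 4
    # step 5 (variant): neuron whose weight vector is closest to the input
    winner = min(weights, key=lambda p: sum((w - v) ** 2
                                            for w, v in zip(weights[p], x)))
    for pos, w in weights.items():
        d2 = sum((a - b) ** 2 for a, b in zip(pos, winner))
        feedback = math.exp(-d2 / (2 * sigma ** 2))   # the Gauss function
        for i in range(2):
            # step 6: weight + feedback * (input - weight) * rate
            w[i] += feedback * (x[i] - w[i]) * rate
    sigma *= 0.99                # step 7: decrease the activation area
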


Freeman believes that much of neural network theory and neurobiology is founded not so much on truth as on convenience. Neurobiologists and cognitive scientists believe in the reflex model because it promises to make the brain an easily analyzable machine. Neurobiologists concentrate on the feed-forward networks in the brain while ignoring the feedback loops, because in the former case it is easier to connect a stimulus to a response. Neural network modelers concentrate on the same feed-forward networks because the mathematics of networks using feedback loops is so difficult; adding feedback makes a network unstable [38].




APPENDIX 6


GENETIC ALGORITHM




GENETIC ALGORITHM

Genetic algorithms, a school of computation most closely identified with John Holland, are designed to "solve" systems through artificial evolution.


A system that uses genetic algorithms begins with some kind of fitness function. Each entity consists of a computer program for solving the task at hand, initially designed by an engineer, and a two-part genetic algorithm, which sets the rules of reproduction for surviving programs.


Each computer program entity is measured against the fitness function. Those programs that pass the threshold are allowed to reproduce, yielding a new generation similar to their parents. Programs that don't pass the threshold "die".


Some neural network researchers are using genetic algorithms to configure the connections in their networks. Some neurobiologists are using them to explain how the brain completes its own organization during development. David Stork (Ricoh, Menlo Park, California) uses a similar kind of evolution to grow neural networks that recognize different typefaces.


(Figure: an initial population of 12-bit chromosomes passes through selection, cross-over, and mutation to produce the offspring.)


A gene is the smallest unit of a GA. A series of genes, or a chromosome, represents one possible complete solution to the problem.


A genetic algorithm consists of several steps (a code sketch follows the list):

1. Select the initial population. If nothing is known about the problem solution, the solutions can be chosen at random from the space of all possible solutions.
2. Apply a rule of selection to determine which solutions will survive to become parents of the next generation.
3. Apply a fitness function.
4. Repeat steps 2 and 3 until an acceptable result is created.
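A minimal sketch of these steps, assuming bit-string chromosomes like those in the figure and a toy fitness function (the number of 1-bits); every name here is hypothetical.

import random

GENES = 12          # chromosome length, as in the bit strings above
POP = 20            # population size

def fitness(chrom):                      # step 3: the fitness function
    return sum(chrom)                    # toy objective: count the 1-bits

def crossover(a, b):                     # single-point cross-over
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(chrom, p=0.01):               # flip each gene with probability p
    return [g ^ 1 if random.random() < p else g for g in chrom]

# step 1: a random initial population
population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP)]

while max(fitness(c) for c in population) < GENES:   # step 4: repeat
    # step 2: selection -- the fitter half survives to become parents
    parents = sorted(population, key=fitness, reverse=True)[:POP // 2]
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(POP - len(parents))]

print(max(population, key=fitness))      # an acceptable result
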




APPENDIX 7

EXPLORE BRAIN-SCANNING TECHNIQUES



Explore brain-scanning technology:


EEG (Electroencephalograph)

CAT (Computerized Axial Tomography ) Scan

PET (Positron Emission Tomography) Scan

MRI (Magnetic Resonance Imaging)

MEG (Magnetoencephalography)


EEG (Electroencephalograph)


The EEG shows the electrical impulses of the brain.


Active neurons create currents in the brain tissue, which can leak through the skull and be recorded by electrodes attached to the scalp. As the path from the active area to the scalp can be quite complicated, spatial resolution is poor compared to PET and MRI, but the temporal resolution is on the order of 1 ms.


EEGs allow researchers to follow electrical impulses across the surface of the brain and observe changes over split seconds of time. An EEG can show what state a person is in -- asleep, awake, or anaesthetized -- because the characteristic patterns of current differ for each of these states. One important use of EEGs has been to show how long it takes the brain to process various stimuli. A major drawback of EEGs, however, is that they cannot show us the structures and anatomy of the brain or really tell us which specific regions of the brain do what.


CAT (Computerized Axial Tomography) Scan


High-resolution magnetic resonance image of normal brain with CAT scan.



CAT scans of the brain can detect brain damage and also highlight local changes in

cerebral blood flow (a measure of brain activity) as the subjects perform a task.


PET (Positron Emission Tomography) Scan


The gray outer surface is the surface of the brain from MRI; the inner colored structure is the cingulate gyrus, part of the brain's emotional system, visualized with PET.

PET imaging software allows researchers to look at cross-sectional "slices" of the brain,

and therefore observe deep brain structures, which earlier techniques like EEGs could

not. PET is one of the most popular scanning techniques in current neuroscience research.

PET relies on the injection of radioactively labeled water (using the O-15 isotope) into a vein of the test person. In a short time the water accumulates in the brain, forming an image of the blood flow as follows: the O-15 decays, emitting a positron that, after annihilating with an electron, emits two gamma rays in almost opposite directions.

These gamma rays can be detected and their origin located. Neurologists found that when

resting neurons become active, the blood flow to them increases. Thus an image of the

blood flow can act as a means to locate neural activity.


MRI (Magnetic Resonance Imaging)




MRI uses the technique of nuclear magnetic resonance. This technique allows you to

detect slight changes in the magnetic properties of the substance under investigation. In

the case of brain activity one exploits the fact that a neuron becoming active results in an

increased oxygen level in the blood vessels around it. The oxygen in the blood is carried

by hemoglobin, whose magnetic properties change when the oxygen level rises. This

change is detected by MRI and thus indicates the active area.


MRI can produce very clear and detailed pictures of brain structures. Often, the images

take the form of cross-sectional "slices." The images of these slices are obtained through

the use of "gradient magnets" to alter the main magnetic field in a very specific area

while the magnetic force is being applied. This allows the MRI technician to pick exactly

what area of the person's brain he or she wants an image of.


MEG (Magnetoencephalography)

MEG measures the tiny magnetic fields created by active areas in the brain with highly sensitive measurement devices called SQUIDs (superconducting quantum interference devices). MEG has the same temporal resolution as EEG, but its signals are less affected by the conductivity profile of the brain, skull and scalp; in this respect MEG is superior to EEG. The spatial resolution is lower than that of MRI.




APPENDIX 8

DEFINITIONS




Five rules by means of which to evaluate the success of connotative definitions:

1. Focus on essential features. A good definition tries to point out the features that

are essential to the designation of things as members of the relevant group.

2. Avoid circularity. Since a circular definition uses the term being defined as part of its own definition, it can't provide any useful information; there isn't much point, for example, in defining "cordless phone" as "a telephone that has no cord."

3. Capture the correct extension. A good definition will apply to exactly the same things as the term being defined, no more and no less. Successful intensional definitions must be satisfied by all and only those things that are included in the extension of the term they define.


4. Avoid figurative or obscure language. Since the point of a definition is to explain the meaning of a term to someone who is unfamiliar with its proper application, the use of language that doesn't help such a person learn how to apply the term is pointless.

5. Be affirmative rather than negative. It is always possible in principle to explain the application of a term by identifying literally everything to which it does not apply. In a few instances, this may be the only way to go: a proper definition of the mathematical term "infinite" might well be negative, for example. But in ordinary circumstances, a good definition uses positive designations whenever it is possible to do so.




APPENDIX 9


PREDICTION OF THE TIME WHEN A NEURAL NET WILL BE AT LEAST AS COMPLEX AS THE HUMAN BRAIN




If we support the hypothesis of consciousness as a physical property of the brain, the question becomes: when will computers be at least as complex as the human brain?

Fig. 1. The complexity threshold (consciousness plotted as a step function of brain complexity, with the human brain marking the threshold).

If consciousness is a function of brain complexity, the brain marks the complexity threshold required.


Fig. 2. RAM capacity: typical memory configurations installed on personal computers, 1980-2000 (vertical axis from kilobytes to 1 Gbyte on a logarithmic scale).


Consciousness seems to represent a step function of brain complexity and the human

brain provides the threshold, as Figure 1 shows.


How much memory would a computer require to replicate the human brain's complexity? The human brain has about 10^12 neurons. Each neuron makes about 10^3 synaptic connections with other neurons, on average, for a total of 10^15 synapses.

Artificial neural networks can simulate synapses using a floating-point number that requires 4 bytes of memory to be represented in a computer. As a consequence, simulating 10^15 synapses requires a total of 4 million Gbytes. Simulating the human brain requires 5 million Gbytes, including the auxiliary variables for storing neuron outputs and other internal brain states.



When will such a memory be available in a computer? During the past 20 years,

random-access memory capacity increased exponentially by a factor of 10 every four

years. The plot in Figure 2 shows the typical memory configuration installed on

personal computers since 1980.


By interpolation, we can derive the following equation, which gives RAM size as a function of the year:

bytes = 10^((year - 1966) / 4)


For example, from this equation we can derive that in 1990, personal computers typically

had 1 Mbyte of RAM, whereas in 1998, a typical configuration had 100 Mbytes of RAM.

Assuming that RAM will continue to grow at the same rate, we can invert this relationship to predict the year in which computers will have a given amount of memory:

year = 1966 + 4 * log10(bytes)


To calculate the year in which computers will have 5 million Gbytes of RAM, we substitute that number in the equation above: 1966 + 4 * log10(5 * 10^15) = 1966 + 62.8. This gives the year 2029.
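The interpolation can be checked with a few lines of code (assuming the idealized exact powers of ten used above; the function names are illustrative):

import math

def ram_bytes(year):
    return 10 ** ((year - 1966) / 4)     # bytes = 10^((year - 1966) / 4)

def year_for(bytes_needed):
    return 1966 + 4 * math.log10(bytes_needed)

print(ram_bytes(1990))      # 1e6   -> 1 Mbyte
print(ram_bytes(1998))      # 1e8   -> 100 Mbytes
print(year_for(5e15))       # about 2028.8 -> the year 2029
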


In reality, computational ability also depends on structural complexity. A contemporary intelligent system can develop a world model of the external and internal world. An existing system can also develop the circle mentioned above in the social environment.




APPENDIX 10




Original text

yuo hvae a sgtrane mnid if yuo cna raed this. Cna yuo raed tihs? Olny 55 pcenert of plepoe cluod uesdnatnrd ym wariteng. The compute'sr ilteleignnce hsa hte sema phaonmneal pweor as the hmuan's mind. Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it dseno't mtaetr in waht oerdr the ltteres in a wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it whotuit a pboerlm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Btu it si nto mipotratn ot ehav the frits and teh lats eltters ni the ritgh poositni. oyu can rade even fi the lats letrest aer in teh rwogn poosiotns.


Recognition-Translation

You have a strange mind if you can read this. Can you read this? Only 55 percent of people could understand my writing. The computer's intelligence has the same phenomenal power as the human's mind. According to a research at Cambridge University, it doesn't matter in what order the letters in a word are; the only important thing is that the first and last letter be in the right place. The rest can be a total mess and you can still read it without a problem. This is because the human mind does not read every letter by itself, but the word as a whole. But it is not important to have the first and the last letters in the right position. You can read even if the last letters are in the wrong positions.
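Text like the "original" above can be generated mechanically: shuffle only the interior letters of each word, keeping the first and last letters in place. A small sketch (the sample sentence and names are illustrative):

import random

def scramble(word):
    if len(word) <= 3:                # nothing to shuffle in short words
        return word
    middle = list(word[1:-1])
    random.shuffle(middle)            # permute the interior letters only
    return word[0] + "".join(middle) + word[-1]

text = "According to a research at Cambridge University"
print(" ".join(scramble(w) for w in text.split()))
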




APPENDIX 11


HIDDEN MARKOV MODEL





Hidden Markov model


From Wikipedia, the free encyclopedia



State transitions in a hidden Markov model (example)

x — hidden states

y — observable outputs

a — transition probabilities

b — output probabilities

A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with unknown parameters, and the challenge is to determine the hidden parameters from the observable parameters. The extracted model parameters can then be used to perform further analysis, for example for pattern recognition applications. An HMM can be considered the simplest dynamic Bayesian network.

In a regular Markov model, the state is directly visible to the observer, and therefore the

state transition probabilities are the only parameters. In a hidden Markov model, the state

is not directly visible, but variables influenced by the state are visible. Each state has a

probability distribution over the possible output tokens. Therefore the sequence of tokens

generated by an HMM gives some information about the sequence of states.

Hidden Markov models are especially known for their application in temporal pattern

recognition such as speech, handwriting, gesture recognition and bioinformatics.

A concrete example

Assume you have a friend who lives far away and to whom you talk daily over the

telephone about what he did that day. Your friend is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is

determined exclusively by the weather on a given day. You have no definite information



about the weather where your friend lives, but you know general trends. Based on what

he tells you he did each day, you try to guess what the weather must have been like.

You believe that the weather operates as a discrete Markov chain. There are two states,

"Rainy" and "Sunny", but you cannot observe them directly, that is, they are hidden from you. On each day, there is a certain chance that your friend will perform one of the

following activities, depending on the weather: "walk", "shop", or "clean". Since your friend tells you about his activities, those are the observations. The entire system is that

of a hidden Markov model (HMM).

You know the general weather trends in the area, and what your friend likes to do on

average. In other words, the parameters of the HMM are known. You can write them

down in the Python programming language:

states = ('Rainy', 'Sunny')


observations = ('walk', 'shop', 'clean')


start_probability = {'Rainy': 0.6, 'Sunny': 0.4}


transition_probability = {

'Rainy' : {'Rainy': 0.7, 'Sunny': 0.3},

'Sunny' : {'Rainy': 0.4, 'Sunny': 0.6},

}


emission_probability = {

'Rainy' : {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},

'Sunny' : {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},

}

In this piece of code, start_probability represents your uncertainty about which state

the HMM is in when your friend first calls you (all you know is that it tends to be rainy

on average). The particular probability distribution used here is not the equilibrium one,

which is (given the transition probabilities) actually approximately {'Rainy': 0.571,

'Sunny': 0.429}. The transition_probability represents the change of the weather

in the underlying Markov chain. In this example, there is only a 30% chance that

tomorrow will be sunny if today is rainy. The emission_probability represents how

likely your friend is to perform a certain activity on each day. If it is rainy, there is a 50%

chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is

outside for a walk.
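Given these parameters, the Viterbi algorithm can recover the most likely weather sequence for a series of observations. The sketch below is a possible continuation of the example; it reuses the tables defined in the code above, and the function name is illustrative.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # best (probability, path) ending in each state after the first observation
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        prev_layer = V[-1]
        # for each state, extend the best predecessor path
        V.append({s: max(((prob * trans_p[prev][s] * emit_p[s][o], path + [s])
                          for prev, (prob, path) in prev_layer.items()),
                         key=lambda t: t[0])
                  for s in states})
    return max(V[-1].values(), key=lambda t: t[0])

prob, path = viterbi(('walk', 'shop', 'clean'), states,
                     start_probability, transition_probability,
                     emission_probability)
print(path, prob)    # ['Sunny', 'Rainy', 'Rainy'] with probability 0.01344
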




APPENDIX 12

THREE LAWS OF ROBOTICS


Three Laws of Robotics

From Wikipedia, the free encyclopedia


This cover of I, Robot illustrates the story "Runaround", the first to list all Three Laws of Robotics.


In science fiction, the Three Laws of Robotics are a set of three rules written by Isaac Asimov, which all positronic robots appearing in his fiction must obey. Introduced in his 1942 short story "Runaround", the Laws state the following, quoted exactly:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


According to the Oxford English Dictionary, the first passage in Asimov's short story "Liar!" (1941) that mentions the First Law is the earliest recorded use of the word robotics. Asimov was not initially aware of this; he assumed the word already existed by analogy with mechanics, hydraulics, and other similar terms denoting branches of applied knowledge.


The Three Laws form an organizing principle and unifying theme for Asimov's fiction, appearing in his Robot series and the other stories linked to it, as well as Lucky Starr and the Moons of Jupiter. Other authors working in Asimov's fictional universe have adopted them, and references (often parodic) appear throughout science fiction and in other genres. Technologists in the field of artificial intelligence, working to create real machines with some of the properties of Asimov's robots, have speculated upon the role the Laws may have in the future.




APPENDIX 13

DISCRIMINANT ANALYSIS




Linear discriminant analysis

http://en.wikipedia.org/wiki/Linear_discriminant_analysis

Linear discriminant analysis (LDA) and the related Fisher's linear discriminant are used in statistics to find the linear combination of features that best separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or more commonly in dimensionality reduction before later classification.

LDA is closely related to ANOVA (analysis of variance) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. In the other two methods however, the dependent variable is a

numerical quantity, while for LDA it is a categorical variable ( i.e. the class label).

LDA is also closely related to principal component analysis (PCA) and factor analysis.

LDA explicitly attempts to model the difference between the classes of data. PCA on the

other hand does not take into account any difference in class, and factor analysis builds

the feature combinations based on differences rather than similarities. Discriminant

analysis is also different from factor analysis in that it is not an interdependence

technique: a distinction between independent variables and dependent variables (also

called criterion variables) must be made.

LDA works when the measurements made on each observation are continuous quantities.

When dealing with categorical variables, the equivalent technique is Discriminant

Correspondence Analysis (see References).
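For the two-class case, Fisher's linear discriminant has a closed form: project onto w = Sw^(-1) (mu1 - mu0), where Sw is the within-class scatter matrix. A minimal NumPy sketch on synthetic data (all values and names are illustrative):

import numpy as np

rng = np.random.default_rng(0)
class0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))   # synthetic class 0
class1 = rng.normal([2.0, 2.0], 1.0, size=(100, 2))   # synthetic class 1

mu0, mu1 = class0.mean(axis=0), class1.mean(axis=0)
# within-class scatter, approximated by the sum of the class covariances
Sw = np.cov(class0, rowvar=False) + np.cov(class1, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)        # Fisher's discriminant direction

threshold = w @ (mu0 + mu1) / 2           # midpoint between projected means
print(int(np.array([2.1, 1.9]) @ w > threshold))   # 1: assigned to class 1
print(int(np.array([-0.5, 0.2]) @ w > threshold))  # 0: assigned to class 0
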

Applications


Face recognition

In computerised face recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of

features to a more manageable number before classification. Each of the new dimensions

is a linear combination of pixel values, which form a template. The linear combinations

obtained using Fisher's linear discriminant are called Fisher faces, while those obtained

using the related principal component analysis are called eigenfaces.

Marketing

In marketing, discriminant analysis is often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other

forms of collected data. The use of discriminant analysis in marketing is usually

described by the following steps:

1. Formulate the problem and gather data - Identify the salient attributes consumers use to evaluate products in this category - Use quantitative marketing research

techniques (such as surveys) to collect data from a sample of potential customers concerning their ratings of all the product attributes. The data collection stage is

usually done by marketing research professionals. Survey questions ask the

respondent to rate a product from one to five (or 1 to 7, or 1 to 10) on a range of


attributes chosen by the researcher. Anywhere from five to twenty attributes are

chosen. They could include things like: ease of use, weight, accuracy, durability,

colourfulness, price, or size. The attributes chosen will vary depending on the

product being studied. The same question is asked about all the products in the

study. The data for multiple products is codified and input into a statistical

program such as SPSS or SAS. (This step is the same as in Factor analysis).

2. Estimate the Discriminant Function Coefficients and determine the statistical

significance and validity - Choose the appropriate discriminant analysis method.

The direct method involves estimating the discriminant function so that all the

predictors are assessed simultaneously. The stepwise method enters the predictors

sequentially. The two-group method should be used when the dependent variable

has two categories or states. The multiple discriminant method is used when the

dependent variable has three or more categorical states. Use Wilks's lambda to test for significance in SPSS or the F statistic in SAS. The most common method used to

test validity is to split the sample into an estimation or analysis sample, and a

validation or holdout sample. The estimation sample is used in constructing the

discriminant function. The validation sample is used to construct a classification

matrix which contains the number of correctly classified and incorrectly classified

cases. The percentage of correctly classified cases is called the hit ratio.

3. Plot the results on a two dimensional map, define the dimensions, and interpret

the results. The statistical program (or a related module) will map the results. The

map will plot each product (usually in two-dimensional space). The distance of products to each other indicates how different they are. The dimensions must

be labelled by the researcher. This requires subjective judgement and is often very

challenging. See perceptual mapping.

References

Duda, R.O., Hart, P.E., Stork, D.H. Pattern Classification (2nd ed.), Wiley Interscience (2000). ISBN 0-471-05669-3

Fisher, R.A. The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics, 7: 179-188 (1936)

Friedman, J.H. Regularized Discriminant Analysis. Journal of the American Statistical Association (1989)

Mika, S. et al. Fisher Discriminant Analysis with Kernels. IEEE Conference on Neural Networks for Signal Processing IX (1999)




APPENDIX 14

INFORMATION EXCHANGE BETWEEN SHORT AND LONG

TERM MEMORIES IN THE NATURAL BRAIN



Classification by information type

http://en.wikipedia.org/wiki/Memory#Classification


Long-term memory can be divided into:

1. Declarative (explicit) memory

1.1. Semantic memory, which concerns facts taken independent of context. Semantic memory allows the encoding of abstract knowledge about the world, such as "Paris is the capital of France".

1.2. Episodic memory, which is used for more personal memories, such as the sensations, emotions, and personal associations of a particular place or time.

1.3. Visual memory, the part of memory preserving some characteristics of our senses pertaining to visual experience. We are able to place in memory information that resembles objects, places, animals or people in a sort of mental image. Visual memory can result in priming, and it is assumed that some kind of perceptual representational system (PRS) underlies this phenomenon.

Declarative memory requires conscious recall, in that some conscious process must call back the information. It is sometimes called explicit memory, since it consists of information that is explicitly stored and retrieved.

2. Procedural (implicit) memory (Anderson, 1976), which is not based on the conscious recall of information, but on implicit learning. Procedural memory is primarily employed in learning motor skills and should be considered a subset of implicit memory. It is revealed when we do better in a given task due only to repetition: no new explicit memories have been formed, but we are unconsciously accessing aspects of previous experiences. Procedural memory involved in motor learning depends on the cerebellum and basal ganglia.


Information Exchange


The finding, reported by Daoyun Ji and Matthew A. Wilson, researchers studying the rat brain at the Massachusetts Institute of Technology, showed that during nondreaming sleep the neurons of both the hippocampus and the neocortex replayed memories -- in repeated simultaneous bursts of electrical activity -- of a task the rat had learned the previous day.

Special neurons in the hippocampus are known as "place cells" because each is activated when the rat passes a specific location, as if they were part of a map in the brain.

Dr. Wilson reported that after running a maze, rats would replay their route during idle moments, as if to consolidate the memory, although the replay, surprisingly, was in reverse order of travel. These fast rewinds lasted a small fraction of the actual time spent on the journey.



The same replays occurred in the neocortex as well as in the hippocampus as the rats slept. The rewinds appeared as components of repeated cycles of neural activity, each of which lasted just under a second. Because the cycles in the hippocampus and neocortex were synchronized, they seemed to be part of a dialogue between the two regions.


The researchers recorded electrical activity only in the visual neocortex, the region that

handles input from the eyes, but they assumed many other regions participated in the

memory replay activity. One reason is that there is no direct connection between the

visual neocortex and the hippocampus, suggesting that a third brain region coordinates a

general dialogue between the hippocampus and all necessary components of the

neocortex.




APPENDIX 15


STUDENT’S DISTRIBUTION




Student's t-distribution

http://en.wikipedia.org/wiki/Student%27s_t_distribution


In probability and statistics, the t-distribution or Student's t-distribution is a

probability distribution that arises in the problem of estimating the mean of a normally

distributed population when the sample size is small. It is the basis of the popular

Student's t-tests for the statistical significance of the difference between two sample

means, and for confidence intervals for the difference between two population means.

Student's distribution arises when (as in nearly all practical statistical work) the

population standard deviation is unknown and has to be estimated from the data.


Occurrence and specification of Student's t-distribution


Suppose X_1, ..., X_n are independent random variables that are normally distributed with expected value μ and variance σ^2. Let

X̄_n = (X_1 + ... + X_n) / n

be the sample mean, and

S_n^2 = (1 / (n - 1)) * Σ (X_i - X̄_n)^2

be the sample variance. It is readily shown that the quantity

Z = (X̄_n - μ) / (σ / √n)

is normally distributed with mean 0 and variance 1, since the sample mean X̄_n is normally distributed with mean μ and standard deviation σ / √n. Gosset studied the related quantity

T = (X̄_n - μ) / (S_n / √n)

and showed that T has the probability density function

f(t) = Γ((ν + 1) / 2) / (√(νπ) * Γ(ν / 2)) * (1 + t^2 / ν)^(-(ν + 1) / 2)

with ν equal to n - 1. The distribution of T is now called the t-distribution. The parameter ν is conventionally called the number of degrees of freedom. The distribution depends on ν, but not on μ or σ; the lack of dependence on μ and σ is what makes the t-distribution important in both theory and practice. Γ is the Gamma function.

The moments of the t-distribution are E(T^k) = 0 for odd k < ν; in particular, the mean is 0 for ν > 1, and the variance is ν / (ν - 2) for ν > 2. Moments of order k ≥ ν do not exist.


Special cases

Certain values of ν give an especially simple form.

ν = 1:

Distribution function: F(t) = 1/2 + (1/π) * arctan(t)

Density function: f(t) = 1 / (π * (1 + t^2)) (the Cauchy distribution)

ν = 2:

Distribution function: F(t) = 1/2 + t / (2 * √(2 + t^2))

Density function: f(t) = 1 / (2 + t^2)^(3/2)
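A quick numeric check of the reconstructed formulas: at ν = 1 the general density should reduce to the Cauchy density 1 / (π (1 + t^2)). A short sketch:

import math

def t_pdf(t, nu):
    # the density function reconstructed above
    return (math.gamma((nu + 1) / 2)
            / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
            * (1 + t * t / nu) ** (-(nu + 1) / 2))

for t in (0.0, 1.0, 2.5):
    print(t_pdf(t, 1), 1 / (math.pi * (1 + t * t)))   # the pairs agree
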




INDEX




A

de Beauvoir Simone 18

A-strategy 117

Behaviorism 6, 9, 53

A*- strategy 117

Behaviorist 11

Ability 5, 7-10, 23, 24

Beethoven 9, 26

Abstraction 73, 93

Berg-Cross G. 179

Abstract thinking 93

Bergson Henri 100

Aconceptual mind 28

Body language 124, 127, 128

Actuator 8-9, 26, 34, 56, 62, 68, 129,

Bohm 28

129

Bohr N. 29

Adaptation 7, 10, 39, 58-62, 156, 177

Brain Development Stages 169

Aesthetics 154

Brook Rodney 9, 132

AIBO robot platform 118, 133


Agent 23

C

Agent class 24

Cartesian Theater 12, 71

Aggression 126

Cattell 124

Albus J. 55

Cerebellum 70, 119, 154

Altruism 140, 163

Cerebrum 72

American Society for the Prevention of

Chromosome 19, 160, 214

Cruelty to Robots 143

Cingulate gyrus 69

Amygdule 123, 125, 129

Classification 24, 38, 83, 98, 107, 144

Android 142, 143, 153

Classes 25

Anthropomorphic robot 142

- agent 25

Apprehension 5, 38, 153

- goal 25

Aristotle 5, 26, 95, 140

Cognition 10, 27, 30, 39, 54, 70,

Art 155

Cognitive sciences 28

Artificial gene 160

Cognitive psychology 51

Artificial Life 155-159

Cognetivist 11

Artificial person 124

Golomb Beatrice 73, 97

Association 73

Combination 21

Associative ball 95

Comfort 134

Associative memory 95

Communication 32, 47, 58, 68, 77, 96,

Associative thinking 16, 95

109, 143

Attention 30, 37, 58, 67, 69

Compassion 21, 70, 143

Autonomy 11, 15, 24, 37, 59-62, 126,

Compromise 123, 145, 146


Concept 109-115

Autonomous robots 142, 144, 156

Conception 77, 96

Award 36

Conceptualization 9, 15, 24-26,38-39,

Awareness 23, 27, 30

96-97, 102, 197, 154

Axiom 7, 126, 175

Connectionist 53

Axon 171, 196

Conceive 38, 72, 73, 77, 118, 156


Connotative relation 76

B

Conscious 9, 29, 30

Baby test 7, 175

- intentional

Bagnall Douglas 155

- process

Bartneck Christof 133

- unintentional

Beauty 154

Control system



- local 29

External world 27

- main 29

Eysenck Hans 124

Convergent thinking 15, 16, 178


Courage 140

F

Costa 124

Face recognition 244

Cottell Raymond 179

Fair Deal 148-149

Creativity 13, 15-17, 22 , 38, 39, 103,

Fairness 147-149

105

Fear 122-126, 135, 142

Creativity Machine 20

Feedback 9, 19, 57, 113, 117, 119, 129,

Cridland John 141

132

Curiosity 116, 117

Fembots 162


Feeling 127

D

Fogel L. 59

Darwin Charles 39, 67

Fountain Henry 132

Decision tree 121

Free will 35

Decomposition 54, 55, 58, 82, 96, 120

Frontal lobe 125, 126, 128, 131, 152

Definitions 5, 6, 10, 11, 15, 55, 222

Frontal cortex 128

Definitions of Intelligence 7

Frustrations 133

Dendrites 171, 196

Functionalism 53

Dennett Daniel 12

Furber Steve 14

Denotative relation 76

Fuzziness 10

Descartes 32, 55, 81

Fuzzy logic 190

Determinism 35-37

Fuzzy image 21

Dinosaur Pleo 159


Discriminant analysis 133, 242

G

Discrimination 62, 67, 69, 146

Galten Francis 179

Disorder 19, 33, 67, 119, 151

Gender 51, 161

Distributed control theory 159

General Intelligence 8-10

Divergent thinking 15, 17

Generalization 10, 24, 38, 62, 86, 97-98,

Dreyfus Hubert 81, 100

107, 121, 179, 194

Dreyfus Stuart 82

Genetic Algorithm 18, 19, 159-161, 212

Duality of intelligence 8, 179

Genetic code 8, 124, 125, 161, 169

Dynamic Systems 8, 19, 53-54, 73

Genius 17-18


Genome 156, 160

E

GenoPharm 18, 95

EcoBot 156, 157,

Gestalt psychology 53

Einstein 155

Gibson J. 68

Electrical ephapse 171

Goal class 24

Emery Marcia 100

Goal driven system 167

Emotional-family 128

Goal 23

Emotions 30, 38-40, 51, 95, 127-129

- external 7

Engels F. 35

- internal 7

Entrepreneurs 141

Golomb Beatrice 70

Evolution 7, 60, 117, 132, 155, 166

Confidence 127

Existence 31

Gray Jeremy 8

Expert system 12

Gray P. 101



Greedy search 18, 121, 122,

- general 7, 8, 12

Guilford 15, 17

- knowledge-based 10


Intelligent Design 29

H

Intelligent tasks 23

Hall David 154

Intention 12, 29

Happiness 126, 136, 146,

Internet 18, 22, 95, 109

Hard coded 11, 30, 34, 40, 155, 161, 162

Interpretation 38, 52, 58, 70-75

Hard wired 7, 11, 30, 34, 40, 54, 116,

Intuition 9, 16, 38, 99-106

161, 162

Intuitionalists 100

Harrington 185


Hate 27, 134

J

Hebb 8

Judgment 15, 38, 56, 135-136

Heisenberg W. 28

Johnson-Laird Philip N. 29, 111

Hibbard Bill 22

Joy 134

Hill 162


Hippocampus 248

K

Hobbes 55, 81

Kant Immanuel 17, 98, 152

Holland John 214

Keller Helen 9, 26

Hope 133

Kelly Ian 156

Horn 179

Kismet 13, 132, 137

Hubert 78

Knowledge 7-10, 18, 52, 83

Humanoid 51, 52, 130-

Knowledge mining 21

132,142,144,146,156

Koch Christof 26

Hume David 152

Koffka Kurt 51

Humean 154

Köhler Wolfgang 51

Husserl Edmund 103

Kruskal‘s algorithm 115,

Hybrid robot 6

Krishnamurti 101

Hybrot 6


Hypothesis 29,38, 58, 77, 98, 107

L


Langton Chris 161

I

Law 152

Identity 32,73, 146

Learning 109

Image 17, 21, 70-73, 81, 96

Learning by Experience 112

Imagination 20, 21

Learning by Imitation 117

- objection 21

Learning by Instructions 112

- subjection 21

Learning by Interactions 116

‗imagination engines‘ 18

Learning Concepts 109

‗imagitrons‘ 18

Learning decision tree 119

Impression 70, 100, 104, 154

Leibniz 55, 81

Information-processing systems 51

Limbic system 69, 127, 135

Inheritance 8

Locke John 100

I.Q. 8, 22, 112

Localization 38, 71

Inspiration 139, 140

Love 67, 134

Instinct 100, 152, 162

Lubart 17

Intelligence 8, 12, 14, 21, 26, 151


- duality 8, 170




M


Malfunctions 151

O

Markov model 74, 234

Object recognition 72

Marr 54

Operating system 13, 29

Mataric 132

Operator 62, 121

Materialistic 11


McCarty John 101

P

McCrae 124

Pavlov Ivan 53

Measurements 23

Path 121

Medulla 129

Perceive 41, 69

Melhuish Chris 156

Perception 26, 30, 31, 37, 39, 56, 62 70-

Memory

72, 126, 154

- factual 83

Personality 124-125, 139

- long-term 83

Pfeifer Rolf 160

- procedural 83

Piaget Jean 53, 171

- short-term 83

Picasso P. 73

Metabolism 156, 157

Pinker Steven 101

Meystel A.11, 55, 60, 156, 157, 169

Planner 120

Mind 1, 9, 12, 22, 26-31, 99, 141

Planning 58, 120

Minsky Marvin 12, 101

Planning algorithms 120

M.I.Q. 22

Plato 5, 26, 53, 100

Mirror Cells 68, 129

Pleo 157, 159

Modularity 56

Parietal lobe 70

Moral 59, 125, 127, 131, 139, 142, 152

Possibility 36, 60, 70, 100, 123,, 140,

Multi-KB 84

179

Multivariable functions 182

Post-phenomenologists 28,

Multilevel structure 17, 24, 39, 54, 56,

Potter Steve 6

58, 84, 139

Pribram Karl 12

Musical

Prime‘s algorithm 121

- harmonies 153

Process

- scale 153

- intentional 17, 30, 38, 39, 104,

Mutation 19, 214

105, 153, 154


- unintentional 17, 30, 38, 39, 104,

N

105, 153

Nass Clifford 133

Psychoanalysis 53

Neisser Ulric 7

Psycholinguistics 53

Netrebko Anna 129

Pulses 29, 68, 170

Nettleton Philip 177

Punishment 36, 56, 139, 140

neuromuscular junction 169, 170

Pylkkanen P. 28

Von Neumann machine 81

Pylkko 28

Neuro-sciences 28

Pythagoras 155

Neuron 160, 168


Neural maps 27

Q

Newell 101, 178

Quantum-like process 27

Newton Isaac 18

Quantum physics 28

Node 95, 121




SlugBot 157

R

Smart 22-24, 176

Random choices 35

Specific Intelligence 171

Reasoning 9, 10, 11, 12, 15, 16, 18, 20,

Speech Recognition Technology 51, 74,

22, 24, 26, 27, 38, 57, 60, 78, 80, 82, 84,

236

88, 92, 93, 96, 98, 100, 113, 120, 145,

Spinoza 99, 100

149, 167

Stanislavski 129

Reliability 13, 14, 152

State

Reflexes 130, 155, 175

- initial 121

- action 33

- space 121

- arc 33

Sternberg 17, 178

- conditional 33, 34

Stimuli 31, 133

- unconditional 33, 34

- external 27

Reflexes family 128

- internal 27

Regeneration 14

Straight line algorithm 122

Remote association test 17

Structuralism 53

Reproduction 25, 156, 179, 214

Symbolic 53

Risk 16, 22, 36, 135, 140

Symbolic equation test 17

Rheingold Howard 98

Subconscious process 29

Robinson Daniel N. 96

Subjective risk 140

Robocup 134

Success_expected 134

Robotics Law 238

Success_observed 134

Robustness 13, 14, 152, 160

Superintelligence 21

Rumelhart 81

Supervised Learning 109

Russell 55, 81, 100

Synapse 169-170, 238


S

T

Sartre Jean-Paul 33, 103

Tautology 82, 96

Scheutz M. 133

Teledendron 170

Schopenhauer Arthur 17

Temporal lobe 72, 136, 139, 154, 236

Self-awareness 7, 30-34, 39, 81, 172

Text Recognition Technology 74

Self-consciousness 33

Thale Stephen 19

Self -confidence 139-142, 151

Thought 26

Self-esteem 139

Transhuman minds 22

Sejnowski Terrence J. 105

Translation 74

Semiotics 54, 76, 96

Triarachic theory of intelligence 178

Sensation 26, 67

Twins studies 7

Sensing 8, 37, 67-70, 126

Tulvin Endel 86

Sensing system 8

Turing test 23, 24

Sentient 9, 10, 27, 28, 39, 40, 132, 176


SEXNET 73

U

Shepard 98

Uncertainty 14, 36, 69, 127, 135, 141,

Simone de Beauvoir 18

161, 237

Simon Herbert 6, 98

Unconsciousness 28

Singularity 22, 29

Undirected graph 95

Skinner B. F. 53




Warwick Kevin 132, 143

V

Watson John B 53

Vants 161

Wertheimer Max 53

Virtual embryos 157, 160

Whalen T. 59

Virtual growth 156, 179

Whitehead 55, 81

Virtual intelligence 156, 179

Will (free) 35

Virtual life 156, 179

William James 27, 73

Virtual metabolism 156,179

Wittgenstain L. 56, 81


W

Z

Waives 28

Zadeh Lotfi 84






ABOUT THE AUTHOR

Professor Leonid M. Polyakov is a member of the Computer Science and Math department faculty at Globe Institute of Technology (New York) and the author of over 100 books and articles. He earned his Ph.D. in Electrical Engineering and Theory of Control Systems from the Moscow Machine Tool Institute. He was the principal designer of the intelligent control system for a machine-tool manufacturing company (Odessa, Ukraine). He taught Cybernetics and Intelligent Control Systems at Odessa Polytechnic Institute and has extensive working experience with different American engineering companies.



