Consciousness in Cognitive Architectures

Below are quotes from, and brief comments on,

Carlos Hernández, Ricardo Sanz and Ignacio López
Consciousness in Cognitive Architectures
A Principled Analysis of RCS, Soar and ACT-R
http://cogprints.org/6228/1/ASLAB-R-2008-004.pdf

See also

http://groups.google.com/group/everything-list/t/b304b194f8a40ac8

p. 13 “A possible path to the solution of the increasing control software complexity is to extend the adaptation mechanism from the core controller to the whole implementation of it.

Adaptation of a technical system like a controller can be during construction or at runtime. In the first case the amount of rules for cases and situations results in huge codes, besides being impossible to anticipate all the cases at construction time. The designers cannot guarantee by design the correct operation of a complex controller. The alternative is to move the adaptation from the implementation phase into the runtime phase. To do it while addressing the pervasive requirement for increasing autonomy the single possibility is to move the responsibility for correct operation to the system itself.”

I guess this is exactly what happens in the software industry:

“move the adaptation from the implementation phase into the runtime phase”

Developers just ship something and then find out at runtime what happens and what needs to be corrected. I am not sure I like this approach, although it seems impossible to change.

One typical way out is not to use the newest version of the software and to let the software companies test it on early adopters. Unfortunately this does not work well, as the features one needs are often available only in the newest version. In that case, another typical strategy is to keep both versions (the newest and the previous) installed and to switch between them when necessary. The complexity indeed grows.
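
To make the quoted idea concrete, here is a minimal sketch of a controller whose adaptation happens at runtime rather than at construction time. It is my illustration, not the paper's: the plant and the self-tuning rule are invented.

    # Adaptation at runtime instead of at design time: the gain is not
    # fixed by the designer; the controller tunes it while running,
    # from the observed error. (Toy illustration only.)

    def run(plant, setpoint, steps=100, gain=0.1, adapt_rate=0.01):
        state = 0.0
        for _ in range(steps):
            error = setpoint - state
            # Design-time alternative: a huge table of rules for every case.
            # Runtime alternative: adjust the gain from the observed error.
            gain += adapt_rate * error * error
            state = plant(state, gain * error)
        return state

    # Hypothetical plant: some drag plus the control action.
    print(run(lambda s, u: 0.9 * s + u, setpoint=1.0))
    # most of the way to the setpoint; keeps improving as the gain adapts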

p. 16

  • “The maximal desideratum of production engineers is both simple and unrealizable: let the plant work alone.
  • The maximal desideratum of automation engineers is both simple and unrealizable: make the plant work alone.”

p. 17 “Intensive uncertainty refers to the deviation of the controlled variables from their desired values. Feedback control mechanisms enable correction of these deviations. An impressive example of this is a humanoid robot walking with an accurate gait to maintain balance.

Qualitative uncertainty refers to the occurrence of unexpected events that qualitatively change the situation. Take the example of the previous robot stepping on a ball.”
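
A toy feedback loop makes the distinction concrete (my sketch, not the authors' code). Proportional feedback absorbs intensive uncertainty, noisy deviations of the controlled variable, but a qualitative change, here the actuation suddenly reversing its effect like a foot landing on a ball, invalidates the control law itself.

    # Intensive vs. qualitative uncertainty in a toy feedback loop.
    import random

    target, position, k, sign = 0.0, 0.0, 0.5, 1.0
    for step in range(60):
        position += random.gauss(0.0, 0.05)         # intensive: small noise
        if step == 40:
            sign = -1.0                             # qualitative: the "ball"
        position += sign * k * (target - position)  # fixed feedback law

    print(position)  # diverges once the assumed dynamics no longer hold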

p. 18 “Architectures that model human cognition. One of the mainstreams in cognitive science is producing a complete theory of human mind integrating all the partial models, for example about memory, vision or learning, that have been produced. These architectures are based upon data and experiments from psychology or neurophysiology, and tested upon new breakthroughs. However, these architectures do not limit themselves to being theoretical models, and also have practical application, e.g. ACT-R is applied in software based learning systems: the Cognitive Tutors for Mathematics, that are used in thousands of schools across the United States. Examples of this type of cognitive architectures are ACT-R and Atlantis.”

p. 19 “The central connectionist principle is that mental phenomena can be described by interconnected networks of simple units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses. Another model might make each unit in the network a word, and each connection an indication of semantic similarity.”
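
That principle fits in a few lines of code. In this sketch of mine the units are words and the connection weights stand for semantic similarity; the numbers are made up.

    # A network of simple units; all behaviour lives in the connections.
    import math

    weights = {("warm", "hot"): 0.9, ("warm", "cold"): -0.7,
               ("hot", "cold"): -0.8}                 # invented similarities

    def activation(unit, inputs):
        # Sum the weighted input from connected units, squash to (0, 1).
        total = sum(w * inputs.get(b if a == unit else a, 0.0)
                    for (a, b), w in weights.items() if unit in (a, b))
        return 1.0 / (1.0 + math.exp(-total))

    print(activation("hot", {"warm": 1.0, "cold": 0.0}))  # pulled up by "warm"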

p. 19 “Computationalism or symbolism: the computational theory of mind is the view that the human mind is best conceived as an information processing system very similar to or identical with a digital computer. In other words, thought is a kind of computation performed by a self-reconfigurable hardware (the brain).”

p. 25 A definition of perception that in my view is incomplete. It is unclear what perceives the obtained information.

p. 28 “The whole previous self-functionality is biologically related to self-awareness, a more general property, synthesising all the previous ones, that allows the architecture not just to monitor its own state, but to understand the functional implications of the observed state and take appropriate actions over itself.”

p. 30 A definition of a system.

“The observer selects a system according to a set of main features which we shall call traits.”

Presumably this means that without an observer a system does not exist.

p. 44 “We call the embodiment of a conceptual quantity its physicalisation, that is to say, the relation between the conceptual quantity and the physical quantity that supports it [Landauer, 1992], in which it is embodied, i.e. in our example the relation between the robot speed and the memory bits used to represent it.”
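
The robot-speed example can be made literal in three lines (my illustration): the conceptual quantity is a number, and its embodiment is a particular pattern of memory bits.

    import struct

    speed = 1.25                     # conceptual quantity: robot speed, m/s
    bits = struct.pack(">f", speed)  # its embodiment: four bytes of memory
    print(bits.hex())                # 3fa00000, the supporting bit pattern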

p. 45 “This means that, in reality, the state of the environment, from the point of view of the system, will consist not only of the values of the coupling quantities, but also of the system's conceptual representations of them. We shall call this the subjective state of the environment.”

p. 52 “These principles, biologically inspired by the old metaphor –or not so much a metaphor as an actual functional definition– of the brain-mind pair as the controller-control laws of the body –the plant–, provide a base characterisation of cognitive or intelligent control.”

p. 56 “In biological systems, the substrate for learning is mostly neural tissue. Neural networks are universal approximators that can be tuned to model any concrete object or objects+relations set. This property of universal approximation combined with the potential for unsupervised learning makes the neural soup a perfect candidate for model bootstrapping and continuous tuning. The neural net is a universal approximator; the neural tissue organised as brain is a universal modeller.”
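
The universal-approximation claim is easy to demonstrate on a toy scale. The sketch below (mine, not the paper's construction) fits a one-hidden-layer network with random tanh units to sin(x), training only the output weights by least squares.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200)[:, None]
    y = np.sin(x).ravel()

    W = rng.normal(size=(1, 50))        # random input weights, kept fixed
    b = rng.normal(size=50)             # random biases, kept fixed
    H = np.tanh(x @ W + b)              # hidden-unit activations
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit the output layer

    print(np.abs(H @ w_out - y).max())  # small worst-case error over the grid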

p. 60 “Principle 5: Model-driven perception — Perception is the continuous update of the integrated models used by the agent in a model-based cognitive control architecture by means of real-time sensorial information.”

p. 61 “Principle 6: System awareness — A system is aware if it is continuously perceiving and generating meaning from the continuously updated models.”

p. 62 “Awareness implies the partitioning of predicted futures and postdicted pasts by a value function. This partitioning we call meaning of the update to the model.”

p. 65 “Principle 7: System attention — Attentional mechanisms allocate both physical and cognitive resources for system processes so as to maximise performance.”
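
Read together, Principles 5–7, plus the p. 62 notion of meaning as a value function over predictions, suggest a loop of roughly the following shape. Everything here, the class, the toy tank model, the value function, is my guess at a skeleton, not the authors' architecture.

    class ModelBasedAgent:
        def __init__(self, model, value):
            self.model, self.value = model, value

        def perceive(self, reading):        # Principle 5: perception is
            self.model.update(reading)      # continuous model update

        def meanings(self):                 # Principle 6 / p. 62: partition
            return {f: self.value(f)        # predicted futures by value
                    for f in self.model.futures()}

        def attend(self):                   # Principle 7: put resources where
            m = self.meanings()             # the stakes are largest
            return max(m, key=lambda f: abs(m[f]))

    class TankModel:                        # hypothetical toy model
        def __init__(self):
            self.level = 0.0
        def update(self, reading):
            self.level = reading
        def futures(self):
            return (self.level - 1.0, self.level, self.level + 1.0)

    # Value: stay near level 5. A reading of 9 makes the worst predicted
    # future (level 10) the most "meaningful" thing to attend to.
    agent = ModelBasedAgent(TankModel(), value=lambda lv: -abs(lv - 5.0))
    agent.perceive(9.0)
    print(agent.attend())  # 10.0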

p. 116 “From this perspective, the analysis proceeds in a similar way: if model-based behaviour gives adaptive value to a system interacting with an object, it will also give value when the object modelled is the system itself. This gives rise to metacognition in the form of metacontrol loops that will improve operation of the system overall.”

p. 117 “Principle 8: System self-awareness/consciousness — A system is conscious if it is continuously generating meanings from continuously updated self-models in a model-based cognitive control architecture.”

p. 122 ‘Now suppose that for adding consciousness to the operation of the system we add new processes that monitor, evaluate and reflect the operation of the “unconscious” normal processes (Fig. fig:cons-processes). We shall call these processes the “conscious” ones.’

If I understood it correctly, when the authors develop software they just mark some bits as the subjective state and some processes as conscious. Voilà! We have a conscious robot.
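
If that reading is right, the whole scheme fits in a dozen lines. My caricature, of course:

    def track_wall(distance):           # "unconscious" normal process
        return 0.25 * (1.0 - distance)  # steer toward 1 m from the wall

    def monitor(process, *args):        # added "conscious" process: it
        result = process(*args)         # monitors, evaluates and reflects
        ok = abs(result) < 1.0          # the operation of the normal one
        return result, ("fine" if ok else "intervene")

    print(monitor(track_wall, 3.0))     # (-0.5, 'fine')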

Let us see what happens.

