On the Status of Computationalism as a Law of Nature

Colin Hales
International Journal of Machine Consciousness (IJMC)
Volume: 3, Issue: 1 (2011), pp. 55–89
http://dx.doi.org/10.1142/S1793843011000613

Scientific behavior is used as a benchmark to examine the truth status of computationalism (COMP) as a law of nature. A COMP-based artificial scientist is examined from three simple perspectives to see whether they shed light on the truth or falsehood of COMP through its ability, or otherwise, to deliver authentic original science on the a priori unknown as humans do. The first perspective (A) looks at the handling of ignorance and supports a claim that COMP is “trivially true” or “pragmatically false” in the sense that you can simulate a scientist if you already know everything, which is a state that renders the simulation possible but pointless. The second scenario (B) is more conclusive and unusual in that it reveals that the COMP scientist can never propose/debate that COMP is a law of nature. This marked difference between the human and the artificial scientist in this single, very specific circumstance means that COMP cannot be true as a general claim. The third scenario (C) examines the artificial scientist’s ability to do science on itself/humans to uncover the “law of nature” which results in itself. This scenario reveals that a successful test for scientific behavior by a COMP-based artificial scientist supports a claim that COMP is true. Such a test is quite practical and can be applied to an artificial scientist based on any design principle, not merely COMP. Scenario (C) also reveals a practical example of the COMP scientist’s inability to handle informal systems (in the form of liars), which further undermines COMP. Overall, the result is that COMP is false, with certainty in one very specific, critical place. This lends support to the claims (i) that artificial general intelligence will not succeed based on COMP principles, and (ii) that computationally enacted abstract models of human cognition will never create a mind.

Discussion on the everything-list:

COMP refutation paper – finally out
http://groups.google.com/group/everything-list/t/b4b3ce1e80e457d1

A comment from Bruno:

If you are duplicated at the right substitution level, few would say that “you” have become an “artificial intelligence”. It would be a case of the good old natural intelligence, but with new clothes.

In fact, if we are machines, we cannot know which machine we are, and that is why you need some luck when saying “yes” to a doctor who will build a copy of you/your body, at some level of description of your body.

This is an old result. Already in 1922, Emil Post, who discovered the “Church thesis” ten years before Church and Turing (and others), realized that the “Gödelian argument” against Mechanism (which Post discovered and refuted 30 years before Lucas, and 60 years before Penrose), when corrected, shows only that a machine cannot build a machine with qualifications equivalent to its own (for example, with equivalent provability power in arithmetic) *in a provable way*. I have referred to this, in this list, as the “Benacerraf principle”, after Benacerraf, who rediscovered it later.

We just cannot do artificial intelligence in a provable manner. We need chance, or luck. Even if we get an intelligent machine, we will not know it for sure (perhaps just believe it correctly).

Results from a Google search for the names in Bruno’s answer:

http://en.wikipedia.org/wiki/Emil_Leon_Post

http://en.wikipedia.org/wiki/Paul_Benacerraf
