The myth of Artificial Intelligence

In his essay Crucial Phenomena, Daniel Dewey writes:

"if we can meaningfully affect humanity’s long-term future, then it is immensely important that we do so; it is plausibly much, much more important to influence humanity’s future from 2100 onward, than it is to influence the mere 86 years we have remaining between now and 2100. I will not try to argue the point conclusively here, since there are many subtleties and others have done so much better; I refer the curious reader especially to Beckstead, “On the overwhelming importance of shaping the far future”."
The problem is that the aspects of the long-term future he focuses on, namely the very complex behavior of future outcomes of artificial intelligence and bioengineering, are things I would classify as not predictable even in principle: they are aspects of technological progress that are a matter of new effective laws and high complexity, very far from the mere progressive extension of the values of known parameters that can enter a table of statistics.

So, while it would indeed be nice to do work that would positively steer the long-term future of mankind if only we had good reasons to expect this effect to happen, this advantage disappears in circumstances like those of the particular topics of that article: any work or prediction that might be done now about the long-term future of artificial intelligence or biotechnology has a 0% chance of turning out to be anything other than ridiculously baseless speculation, which will be made fun of when the effective shape of such technologies actually comes up. As explained in A Rope over an Abyss:
"Many changes in human affairs are on record, but nothing that comes close to being either species-transcendence or extinction. Furthermore, we know enough about human nature to see why some people are motivated to advance extravagant claims, or uncritically to embrace such claims, although the assertions lack adequate support. Nearly all human beings like to feel they are special. One might be special as an individual, or one might be special by virtue of belonging to a special country or a special group. Perhaps one lives in a distinctive era, in an era of remarkable promise, or in an era of remarkable danger. “It was the best of times, it was the worst of times....” Perhaps one lives at a moment in history when critical choices must be made. Because of the human psychological makeup, people are inclined to believe so, whether or not they have good reasons for the belief. Thus, we must be cautious and skeptical as we examine claims that our lives right now are set in an unusual period and that an even more unusual future might soon overtake us.
But I have much reason to believe that computer scientists and others will continue to devise instruments that assist, simulate, and improve thinking. For transformation of the human condition, that will be good enough"
Indeed, AI specialists like to feel they are special, or more precisely that AI is a special field; that is why they oriented themselves toward this field of research in the first place.

Only when a specific technology is sufficiently close at hand is it realistic to try to assess its possible consequences, its risks and the needed prevention methods. Only once it is known, and only then, will we know what is needed to prevent the risk. Then it will be a matter of collective discipline to apply the rules of risk prevention. But what do we need to ensure this future collective discipline in applying rules of risk prevention? Precisely political and economic means, which are what my project is all about. This only needs to be done a rather short time in advance, once the danger is imminent and its mechanisms are explicitly known, without any need for overly speculative long-term anticipation.
Otherwise, even if one could predict long in advance that some kind of future technology has risks, I do not expect it to be possible to slow down or stop any research anywhere in the world that might lead to it, if it is multi-purpose research with diverse applications other than the one with feared effects. Technological innovation cannot be stopped. What is possible, however, and what my plan is about, is to take care of implementing a specific crucial innovation that other people did not think of. Because it is a specific plan, and only because of this, its effects have some range of predictability (though I am sorry I cannot describe these effects, as I expect them to still be very complex). And this cannot be dissociated from the fact that the implementation method is explicit and straightforward, and thus could soon be completed if only it were undertaken (which has unfortunately not been the case yet). I do not think anything can be said about any technology that will only be invented in the far future, except, maybe, hum... the possibility of redesigning the human genome? The fact that interstellar travel will need nuclear energy?
But my main reason to be skeptical about the relevance of research on these topics he calls "crucial phenomena" is this one:

I don't believe in Artificial Intelligence

Computers can make lots of computations but they cannot think. They cannot understand what they are computing. Not just now, but never.

Why? Because they have no soul. It is a matter of principle, not a matter of number. I explained the metaphysics here, there and there.
Is that no proof? Well, okay, it is no proof. I cannot prove to those who only look at "logical proofs" and have forgotten common sense that I exist as anything other than a mathematical object. So neither can I prove that no "artificial intelligence" will ever be able to genuinely think. However, I know it. It is a matter of common sense.
Why would anyone think otherwise? They cannot prove that they are mere mathematical objects either (since this is what their thesis really means). Some people hold this belief because they follow crazy metaphysical assumptions (maybe because they are fed up with religions and so try to oppose religious views as much as they can? I understand this, as religions are indeed awful nonsense, but that still does not justify following an opposite nonsense instead), and are only looking at statistics of how the number of transistors in a computer will someday exceed the number of neurons in a brain, as if that made any sense. No problem: someday the computing power of computers may formally exceed that of the neurons in a brain, so what? It will change nothing about the basic, fundamental difference between them, and computers will never be able to think, because the ability to think is not a material process, so that the numerical comparison of computing powers between transistors and neurons is completely irrelevant.
Computers can check mathematical proofs but they cannot guess which theorems may be interesting for humans. They have no imagination; they can obey, but cannot guess which operation needs to be performed, which problems need to be solved (hum, so many scientists also have this defect of being smart only at finding hard answers but not at asking the right questions, but...), or which operation makes more sense in a given context, unless someone has someday grasped this sense and provided the right instruction to apply.

See also an interesting talk on the myth of AI.

One of the most valuable applications of AI I can think of

To provide lots of realistic responses to Nigerian scammers, to keep them busy for a little while. Even just a little while each, eventually up to providing harmless, fake, randomly generated banking details, multiplied by a large number of virtual respondents, could suffice...
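As a purely illustrative toy sketch of what such a system might look like (my own hypothetical example, not any existing tool), here is how one could mass-produce virtual respondents with plausible-looking but entirely fake banking details:

```python
import random

def fake_iban():
    """Generate a plausible-looking but entirely fake IBAN-style string.

    The country code, check digits and account digits are random, so the
    result is (with overwhelming probability) not a real account.
    """
    country = random.choice(["GB", "DE", "FR", "NG"])
    check = f"{random.randint(0, 99):02d}"
    digits = "".join(str(random.randint(0, 9)) for _ in range(18))
    return f"{country}{check}{digits}"

def scam_reply(name="Respondent"):
    """Compose one time-wasting reply containing fake details."""
    templates = [
        "Dear friend, I am ready to proceed. My account is {iban}.",
        "God bless you. Please use {iban} for the transfer fees.",
        "I have spoken to my bank, this is {name}. The details are {iban}.",
    ]
    return random.choice(templates).format(iban=fake_iban(), name=name)

# A large number of virtual respondents, each with its own fake details:
respondents = [scam_reply(f"Respondent {i}") for i in range(1000)]
```

Of course, keeping a real scammer engaged would need far more realistic dialogue (which is exactly the kind of mimicry current AI is good at); the sketch only shows the "many virtual respondents, harmless fake details" structure of the idea.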

My comment on Rick Searle's post "Is AI a Myth?", which replied to this talk

(I sent this as a comment but it did not appear on his page; I don't know why, so here it is anyway.)

Sorry, what do you mean by "The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human level intelligence in machines is theoretically impossible."?
Hum, on which planet are you living? Different sources may give different figures, for example:
"A Gallup poll on immortality [1982] found that only 16% of leading scientists believed in life after death as opposed to anywhere from 67% to 82% of the general population, according to several polls combined."

"32% of Atheists & Agnostics Believe in an Afterlife"

"a questionnaire to leading scientists .... Belief in immortality (1998): 7.9% believe, 76.7% disbelief, 23.3% doubt or agnostic"

"A survey examining religion in medicine found that most U.S. doctors believe in God and an afterlife"

(1997): "about 40 percent of scientists still believe in a personal God and an afterlife. In both surveys, roughly 45 percent disbelieved and 15 percent were doubters (agnostic)."

But whatever the real figures, it remains clear that there is a significant number of people with non-materialistic convictions, including among scientists.
I guess what you really mean, then, is only that you have not yet stumbled on an article by any such person explicitly declaring the theoretical impossibility of an AI that would ever think consciously in the same sense as humans do (and be able to replace all or most jobs requiring human intelligence), in the name of the immateriality of consciousness (which seem to me quite logically related issues).

But precisely, if they consider the AI thesis to be nonsense, why would they care to write about it? Do you think that people who explicitly write on the topic, or who are AI specialists, would necessarily be more competent about the materiality/computability of consciousness than those who don't write specifically on AI but went into other scientific fields instead, precisely because of their disbelief in AI? As pointed out by Jaron Lanier, we are still very far from any realistic AI simulation of consciousness, or even any clear idea of what kind of algorithm might produce it. In these conditions, why consider AI specialists any more competent than scientists from other fields to guess about the materiality of consciousness and the AI thesis? To me such an attitude, of only looking at explicitly developed declarations on the topic from people involved in AI, looks no better than polling priests or theologians about God's existence.
As for me, I work on the foundations of maths and physics, and I confidently consider the idea of human-level intelligence in AI to be theoretically impossible, due to the immateriality and non-algorithmic nature of consciousness. I just explained this view in my FQXi essay.

In this writing about futurism and utopia, with detailed replies to a number of FQXi essays, I mention this AI issue, but the biggest stakes of the future that I see and develop there are quite disconnected from anything that might depend on future huge increases in computing power (in which I see little point). Instead, I see it as a matter of picking some very crucial low-hanging fruits: possible not-so-complex algorithms, which I explicitly described, that could already work very well on the current internet infrastructure and would constitute a global political and economic revolution if only a few programmers accepted to work on them. Unfortunately, for years no professional IT people have paid attention to this, since the needed concepts fail to fit the conditions of popularity in the eyes of programmers, as they require non-trivial logical thinking outside narrow IT specialization areas.

Example: one of the crucial causes of the Ukraine/Russia conflict is the impossibility of holding a fair, anonymous and publicly verified referendum in a region dominated by corrupt armed groups. I wrote the solution to this last year: (except the step of listing the legitimate voters, which I know how to do but have not published yet). It is a rather simple and satisfying solution. What does the official research community do on this topic meanwhile? They lose themselves in overly complicated solutions with amazing new cryptographic algorithms, which will never be of any use anyway because they make things too complicated to explain and operate with ordinary citizens. Why? Because when a problem is crucial and reputed unsolved, researchers feel obliged to lose their thoughts in absurd technological complications, as the best way for them to file patents and/or demonstrate the high level of their intelligence. Sigh.
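To give a feel for how simple public verifiability can be in principle, here is a minimal sketch of one standard ingredient such a system might use (this is my illustration of the general idea, not the author's published scheme): each ballot is posted on a public board under a hashed random receipt token, so that every voter can privately check their own vote and anyone can recompute the tally.

```python
import hashlib
import secrets

def cast_vote(vote, board):
    """Cast a vote: generate a private receipt token for the voter and
    publish (hash(token), vote) on the public bulletin board."""
    token = secrets.token_hex(16)  # kept secret by the voter
    digest = hashlib.sha256(token.encode()).hexdigest()
    board.append((digest, vote))
    return token

def verify_receipt(token, vote, board):
    """A voter checks that their vote appears unaltered on the board,
    without revealing which entry is theirs to anyone else."""
    digest = hashlib.sha256(token.encode()).hexdigest()
    return (digest, vote) in board

def tally(board):
    """Anyone can recompute the result from the public board."""
    counts = {}
    for _, vote in board:
        counts[vote] = counts.get(vote, 0) + 1
    return counts
```

This toy omits exactly the hard parts the text mentions: establishing the legitimate voter list (the unpublished step) and resisting coercion or vote-selling; it only shows that the verification layer itself needs nothing a citizen could not be told to check.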
Another example: Bitcoin. The people promoting it are cryptography fanatics but have no clue about finance. They fail to see how their system is completely worthless and unreliable, since the crucial aspects to consider have nothing to do with the lowly technical aspects they so loudly advertise and are lost in. The real solution for a good online money would still have some algorithmic difficulty, but for reasons of mapping the complexity of the real economic problem, which has nothing to do with cryptography (beyond the simple use of good old SSL/GPG for connecting to servers), in the sense that it would not be significantly simplified by assuming all transactions to be operated on a powerful universal central server with a benevolent, almighty webmaster/programmer. I explained the concept here:

Here again, the main difficulty in implementing this revolution is just the psychological difficulty of getting programmers to care about understanding theoretical concepts outside their usual area of specialization. It is just a matter of combining present-day computing power with the moderately difficult but interdisciplinary scientific design of new software. However, it looks as if it would take a revolution of mentalities just to get the few interdisciplinary professionals that the implementation work requires. Would it?

About formalizing mathematics

Someone sent me this link: "Logic, Explainability and the Future of Understanding" by Stephen Wolfram, which is a praise of artificial intelligence. So yes, computers can find the answers to well-defined questions by exhaustive enumeration of possibilities, but only once the questions are very precisely, algorithmically well-defined. The problem is to find and express interesting questions. That is something only naturally intelligent minds can do, and what philosophers claim to be here for, though they are very bad at it. I don't see his particular example (finding the simplest axiom for logic) as interesting: an unimaginative mind formulating a ridiculously pointless question which a computer can resolve by exhaustive enumeration = both working for no actually useful result in the end.
So yes, this kind of approach to "logic" is a possible game for computers to play. A game like many others, not useful for anything I can think of. But this is only about Boolean logic (propositional calculus), which is but a small part of any full mathematical logic (such as first-order logic).
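As a tiny illustration of what such a "well-defined question" looks like to a machine, the following sketch (my own example, not taken from Wolfram's page) checks by brute-force truth tables that his single NAND axiom, ((a|b)|c)|(a|((a|c)|a)) = c, holds in two-valued Boolean logic. The enumeration is mechanical; judging whether the question was worth asking is not.

```python
from itertools import product

def nand(x, y):
    """The Sheffer stroke (NAND) on bits: 1 unless both inputs are 1."""
    return 1 - x * y

def wolfram_axiom_holds(a, b, c):
    """Check Wolfram's candidate axiom ((a|b)|c)|(a|((a|c)|a)) == c
    on a single assignment of the bits a, b, c."""
    left = nand(nand(nand(a, b), c), nand(a, nand(nand(a, c), a)))
    return left == c

# Exhaustive enumeration over all 2**3 assignments: the kind of
# question a computer can settle without any understanding of it.
assert all(wolfram_axiom_holds(a, b, c)
           for a, b, c in product((0, 1), repeat=3))
```

Of course, verifying that the identity holds in the two-element algebra is the easy direction; the machine search Wolfram describes (that this axiom suffices for all of Boolean algebra) is a much larger enumeration, but of exactly the same mechanical character.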

My concern is to provide foundations of maths which are at once the simplest AND the deepest AND the most intuitive, AND from which the rest of useful mathematics can be rigorously deduced in the simplest AND most intuitive ways (something this "simplest axiom for logic" is very far from).
Such a purpose may seem a priori ridiculous, because one would not expect such different criteria of optimization to agree with each other; one may rather expect lots of different "optimal" solutions depending on priorities and interpretations of the different criteria...
Yet with my mind alone, without any help from artificial intelligence, I am on the way to providing such a globally optimal solution, thus proving these criteria to be compatible after all. No artificial intelligence could have done this, because it is not a "well-defined problem" from a computer's viewpoint. Artificial intelligence cannot:
  1. interpret what "intuitive" means, even less "deep";
  2. guess what "simplest" can actually mean, even from a formal viewpoint, for lack of an a priori definition or interpretation of "formal simplicity", since the very concept of "formalism" remains a priori undefined until someone comes to effectively specify what a "formalism" might look like, and the first choice may not be the best;
  3. guess what the interesting "rest of useful mathematics" actually is.
So, that was just an example, but I think it is typical of the general situation: many supporters of AI have oversized expectations because they fail to grasp the need to look at the world in terms of searching for the right questions rather than searching for the right answers to supposedly well-defined questions. The failure of Bitcoin is another example. Yes, it is a system designed and praised by humans, but by humans who failed to ask themselves the right questions (well, I mean the users, while the designers did not need to care but could get rich at the expense of the rest of their unimaginative fellow humans). As long as humans fail to be creative (or lose sight of the need to be creative), they may indeed lose their jobs to machines. This still does not make machines actually creative with respect to real needs.

Back to the page On humanity's failures to steer itself properly, of which the above is part.