"if we can meaningfully affect humanity's long-term future, then it is immensely important that we do so; it is plausibly much, much more important to influence humanity's future from 2100 onward, than it is to influence the mere 86 years we have remaining between now and 2100. I will not try to argue the point conclusively here, since there are many subtleties and others have done so much better; I refer the curious reader especially to Beckstead, "On the overwhelming importance of shaping the far future"."

The problem is that the aspects of the long-term future that he focuses on, namely the very complex behavior of future outcomes of artificial intelligence and bioengineering, are things I would classify as not predictable even in principle, because they are aspects of technological progress that involve new effective laws and high complexity, very far from the mere progressive extension of the values of known parameters that can enter a table of statistics.
"Many changes in human affairs are on record, but nothing that comes close to being either species-transcendence or extinction. Furthermore, we know enough about human nature to see why some people are motivated to advance extravagant claims, or uncritically to embrace such claims, although the assertions lack adequate support. Nearly all human beings like to feel they are special. One might be special as an individual, or one might be special by virtue of belonging to a special country or a special group. Perhaps one lives in a distinctive era, in an era of remarkable promise, or in an era of remarkable danger. “It was the best of times, it was the worst of times....” Perhaps one lives at a moment in history when critical choices must be made. Because of the human psychological makeup, people are inclined to believe so, whether or not they have good reasons for the belief. Thus, we must be cautious and skeptical as we examine claims that our lives right now are set in an unusual period and that an even more unusual future might soon overtake us.More precisely, AI specialists like to feel they are special, or more precisely, that AI is a special field, which is why they oriented themselves into this field of research.
But I have much reason to believe that computer scientists and others will continue to devise instruments that assist, simulate, and improve thinking. For transformation of the human condition, that will be good enough"
Computers can make lots of computations but they cannot think. They cannot understand what they are computing. Not just now, but never.
Why? Because they have no soul. It is a matter of principle, not a matter of number. I explained the metaphysics here, there and there.

Sorry, what do you mean by "The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human level intelligence in machines is theoretically impossible"?
Hmm, on which planet are you living? Different sources may give different figures, for example:
      "A Gallup poll on immortality [1982] found that only 16% of
      leading scientists believed in life after death as opposed to
      anywhere from 67% to 82% of the general population, according to
      several polls combined."
http://www.juliaassante.com/reflections/can-science-prove-life-after-death/
      
      "32% of Atheists & Agnostics Believe in an Afterlife"
http://www.theskepticsguide.org/one-third-of-atheists-agnostics-believe-in-an-afterlife
      
      "a questionnaire to leading scientists .... Belief in immortality
      (1998): 7.9% believe, 76.7% disbelief, 23.3% doubt or agnostic"
      https://www.lhup.edu/~dsimanek/sci_relig.htm
      
      "A survey examining religion in medicine found that most U.S.
      doctors believe in God and an afterlife"
http://www.nbcnews.com/id/8318894/ns/health-health_care/t/survey-most-doctors-believe-god-afterlife/
      
      (1997): "about 40 percent of scientists still believe in a
      personal God and an afterlife. In both surveys, roughly 45 percent
      disbelieved and 15 percent were doubters (agnostic)."
      https://www.mat.univie.ac.at/~neum/sciandf/contrib/clari.txt
      
But whatever the real figures, it remains clear that a significant number of people hold non-materialistic convictions, including among scientists.
I guess what you really mean, then, is only that you have not yet stumbled on an article written by any such person explicitly declaring the theoretical impossibility of an AI that would ever think consciously in the same sense as humans do (and be able to replace all or most jobs of human intelligence), in the name of the immateriality of consciousness (which seem to me quite logically related issues).
      
But precisely if they consider the AI thesis to be nonsense, why would they care to write about it? Do you think that people who explicitly write on the topic, or who are AI specialists, are necessarily more competent about the materiality/computability of consciousness than those who don't write specifically on AI but went into other scientific fields instead, precisely because of their disbelief in AI? As pointed out by Jaron Lanier, we are still very far from any realistic AI simulation of consciousness, or even any clear idea of what kind of algorithm might do it. Under these conditions, why consider AI specialists any more competent than scientists from other fields to guess about the materiality of consciousness and the AI thesis? To me such an attitude, of only looking at explicitly developed declarations on the topic from AI-involved people, looks no better than polling priests or theologians about God's existence.
As for me, I work on the foundations of maths and physics, and I confidently consider the idea of human-level intelligence in AI to be theoretically impossible, due to the immateriality and non-algorithmic nature of consciousness. I just explained this view in my FQXi essay.
      
In this writing about futurism and utopia, with detailed replies to a number of FQXi essays, I mention this AI issue, but the biggest stakes of the future that I see and develop there are quite disconnected from anything that might depend on future huge increases in computer power (in which I see little point). Instead, I see it as a matter of picking some very crucial low-hanging fruits: possible not-so-complex algorithms, which I explicitly described, that could already work very well on the current internet infrastructure and that would constitute a global political and economic revolution if only a few programmers accepted to work on them. Unfortunately, for years no professional IT people have paid attention to this, since the needed concepts fail to fit the condition of popularity in the eyes of programmers, as they require non-trivial logical thinking outside narrow IT specialization areas.
      
Example: one of the crucial causes of the Ukraine/Russia conflict is the impossibility of holding a fair, anonymous and publicly verified referendum in a region dominated by corrupt armed groups. I wrote up a solution to this last year: http://spoirier.lautre.net/en/e-voting.htm (except for the step of listing the legitimate voters, which I know how to do but have not published yet). That is a rather simple and satisfying solution.
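
To give a rough idea of the kind of simplicity I have in mind, here is a toy sketch, in Python, of the bare principle of public verifiability with anonymous ballots. It is my own simplified illustration, not the actual scheme described at the link above, and it deliberately leaves out the hard steps (unlinkable token issuance, listing the legitimate voters): ballots are published under random tokens, so anyone can recount and each voter can audit their own line.

    # Toy sketch only: assumes tokens are somehow issued without being
    # linkable to voter identities, which is the genuinely hard part.
    import secrets
    from collections import Counter

    def issue_token() -> str:
        # A random hex string standing in for an anonymously issued token.
        return secrets.token_hex(16)

    def cast_vote(public_board: dict, token: str, choice: str) -> None:
        public_board[token] = choice           # the full board is published

    def public_tally(public_board: dict) -> Counter:
        return Counter(public_board.values())  # anyone can recompute the count

    def voter_check(public_board: dict, token: str, choice: str) -> bool:
        return public_board.get(token) == choice  # each voter audits their own entry

    board = {}
    my_token = issue_token()
    cast_vote(board, my_token, "yes")
    print(public_tally(board))                 # e.g. Counter({'yes': 1})
    print(voter_check(board, my_token, "yes")) # True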
What does the official research community do on this topic meanwhile? They lose themselves in overly complicated solutions with amazing new cryptographic algorithms, which will never be of any use anyway, because they make things too complicated to explain to and operate with ordinary citizens. Why? Because when a problem is crucial and is reputed unsolved, researchers feel obliged to lose their thoughts in absurd technological complications, as the best way for them to file patents and/or demonstrate the high level of their intelligence. Sigh.
Another example: Bitcoin. The people promoting it are cryptography fanatics but have no clue about finance. They fail to see how their system is completely worthless and unreliable, since the crucial aspects to consider have nothing to do with the lowly technical aspects they so loudly advertise and are lost in. The real solution for a good online money would still have some algorithmic difficulty, but for reasons of mapping the complexity of the real economic problem, which has nothing to do with cryptography (beyond the simple use of good old SSL/GPG for connections with servers), in the sense that it would not be significantly simplified by assuming all transactions to be operated on a powerful universal central server with a benevolent, almighty webmaster/programmer. I explained the concept here: http://spoirier.lautre.net/money.htm
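
To make the contrast concrete, here is a toy sketch of the purely technical core of a centralized online money; again this is my own illustration, not the design described at that link. The bookkeeping behind an ordinary SSL connection is trivial; all the real difficulty lies in the economic rules that such a function would have to enforce, which the sketch deliberately leaves out.

    # Toy central-ledger transfer: the "no negative balances" rule below is a
    # placeholder; a serious design needs far subtler economic rules
    # (credit limits, trust relations, ...), not more cryptography.
    ledger = {"alice": 100, "bob": 0}   # balances held by a trusted central server

    def transfer(sender: str, receiver: str, amount: int) -> bool:
        if amount <= 0 or ledger.get(sender, 0) < amount:
            return False
        ledger[sender] -= amount
        ledger[receiver] = ledger.get(receiver, 0) + amount
        return True

    transfer("alice", "bob", 30)
    print(ledger)   # {'alice': 70, 'bob': 30}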
      
Here again, the main difficulty in implementing this revolution is just the psychological difficulty of getting programmers to care about understanding theoretical concepts outside their usual area of specialization. It is just a matter of combining present-day computer power with moderately difficult but interdisciplinary scientific design of new software. However, it looks as if it would take a revolution of mentalities just to get the few needed interdisciplinary professionals that the implementation work requires. Would it?
    