
AI on Inside Story

May 27, 2015

My appearance on Inside Story didn’t work out (a travel mixup: someone confused Kenyon with Ohio State), but it still generated some blog discussion on one of our favorite themes:

  • When will computers be as smart as humans?
  • What about their lack of common sense, intuition, and other intrinsically human qualities?
  • What happens to humans once the singularity is achieved?

Thoughts, anyone?

  1. May 27, 2015 8:53 pm

    This link might help.
    And there’s a good counterargument by Kevin Kelly, which I like.
    I see AI as complementary to human intelligence for a long while to come. For example, AI wouldn’t be able to handle Asimov’s three laws anytime soon (if ever?), while humans have no problem with them.

  2. May 27, 2015 9:09 pm

    I just re-watched “The Imitation Game” yesterday and was again impressed with Turing’s attitude: the trouble with asking whether a computer can think like a human is that it’s a stupid question. Different people think differently and have differing preferences, so a machine will ‘think’ differently from humans too; but that doesn’t mean it can never do anything more than mere calculation.

    Additionally, to program a machine for things such as ‘common sense’ or ‘intuition’, we first have to figure out exactly what we mean by those terms, and gain some understanding of how the human brain goes about performing those functions. The subsequent programming to model such activities should be the ‘easy part’.

    Quantum computing holds the greatest promise for human-level, Turing-test kinds of thinking, as it allows not only “On” and “Off”, but “Undetermined”, which follows the human thought process much more closely than a Yes/No digital breadboard could ever do.

    I’ll be watching–hope you have fun!

  3. May 27, 2015 9:14 pm

    I agree it’s a naive question. I suspect that intelligent machines will eventually act more or less like people–for the same reason that bats act more or less like birds and pterosaurs.

  4. May 29, 2015 2:33 pm

    My problem is with the idea of comparing advanced AI to human consciousness. We are far from understanding human consciousness, and machine consciousness will have a radically different origin.

    People often forget, or don’t realize, that human consciousness has a very large somatic element. The “somatic” element of machine consciousness will be radically different from ours. While machines may be able to imitate senses like vision, hearing, and touch, and even taste and smell, they will bring a broader knowledge base to all of those, plus senses we can’t even imagine yet.

    I think that if we really do encounter machine self-awareness, it will not be all that different from encountering truly alien (as in outer space alien) species.

