
Incremental Singularity

December 10, 2011

Does the “identity” of a person depend on the atomic level, with all the implied quantum weirdness?  It cannot–otherwise, you would be a completely different person from who you were in the previous nanosecond, let alone yourself a year ago. Or yourself as a baby–completely different, yet the same? That is one reason the “holidays” can be depressing: We remember our family members as they used to be, years ago, when they were babies or children.  Today they’re the same person–yet the person they used to be is gone.

What would it take for an information system to have the same “awareness” as I do?

The current model of memory (as far as I understand it) holds that memory is encoded in the pattern of connections among the brain’s neurons (and possibly accessory cells such as glia). In any event, there is a defined number of neurons (and accessory cells) in a brain, so at any given moment there must be a defined number of neural connections and memory bits.

But to retain a memory requires continual reinforcement, perhaps making new connections and losing old ones. That’s why we recall best those memories that we keep remembering again–but we also tend to change those “memories” as we remember them. And false memories can be planted (Faux News does it all the time).

In fact, as we surround ourselves with ever-more immersive media, including 3D sound, vision, and touch, will our “false” experiences become more real than our “real” ones? As we replace hearing with hearing aids, and (eventually) vision with brain implants, and brain with cyber-processors, are we uploading ourselves incrementally?

16 Comments
  1. Alex Tolley permalink
    December 10, 2011 9:21 pm

    are we uploading ourselves incrementally?

    Arguably we have been doing this at a low level with books, photographs and, more recently, videos.

    However, I don’t think that we are “uploading” so much as extending our cognitive prostheses. Unless the consciousness or integrating parts of the mind can be embedded in these prostheses, I think that the baggage-laden term “uploading” is inappropriate in this context.

    I would certainly change my mind if I saw signs of agency in the prostheses, wherever they may be, as that would indicate that some part of mental states, however distorted, was independent of the person.

    • December 12, 2011 10:15 am

      What signs of “agency” would we recognize?
      Already machines are built to “interact” with us as pets and companions, and even to “tell stories.”

      Even if a machine is not “independent” of us, what happens as it assumes more and more of our creative functions?

      • Alex Tolley permalink
        December 12, 2011 10:27 am

        What signs of “agency” would we recognize?(…)
        Even if a machine is not “independent” of us, what happens as it assumes more and more of our creative functions?

        Throwing the question back, would you accept a book as some sort of mind upload? (cf. Doug Hofstadter’s discussion of this.) If you do, then yes, we are definitely uploading in your scenario. If not, then where do we draw the boundary, or at least get some sense of whether there is even an axis from mind-not-uploaded to mind-uploaded?

        If we think of mind as an emergent property of interacting, unconscious processes, then again it is possible to see mind as being uploaded incrementally, as long as those uploaded processes are allowed to interact. An analogy might be how far we still have “life” as we move chemistry from the cell to test tubes. If you moved each reaction to a different tube, clearly there is no life. If you then let the different reactions interact by allowing mixing, at some point living systems emerge again. So if mind processes are analogous to chemical reactions, how would that influence your thinking?

        • Dan permalink
          December 23, 2011 12:18 pm

          What are “we”? This is a question posed by Socrates. He asked: What is man? If something belonging to man is not man, we can discover by exclusion what man is. And he began: is man’s body man? Obviously not: the body belongs to man, so the body isn’t the man. His mind? His soul? So none of these are the true essence of man. Until we can put what man is into mathematical terms, we cannot have AI, because intelligence is a human quality – animals do not possess real, abstract intelligence.

          • Alex Tolley permalink
            December 23, 2011 12:33 pm

            Philosophy does not shed light on this matter, no matter what authority you wish to cite.

            we cannot have AI, because intelligence is a human quality – animals do not possess real, abstract intelligence

            What intelligence do animals possess? Worryingly, you also seem to accept some sort of human distinctiveness from all other animals, despite the evidence of evolution. Are you arguing that intelligence is some magical quality only humans possess? Is there some line between human and animal, such that our ancestors crossed some “intelligence line” and acquired it?

          • Dan permalink
            December 26, 2011 2:46 pm

            Sorry, did you answer Socrates’s question? It is a very essential one. As long as we do not clarify what man is, we cannot understand the most fabulous of his abilities, the defining one – intelligence. There is no mathematical description of intelligence; that is why there is no real AI.

          • December 26, 2011 7:31 pm

            See below.

  2. December 11, 2011 5:01 pm

    The dimension of Time is a factor in human thought–whereas electronics are (outside of the macroscopic) effectively timeless. Our selves operate in mid-river, whereas PCs erect stone structures by comparison–the passage of time doesn’t have any effect on magnetic media (again, yes, it does, but not in the same way).

    Heteromeles’s analogy of ‘building a car with the engine running’ applies here as well–because our selves are not static–and the restlessness and frustration that come from boredom or sensory deprivation are indicative of our minds’ dislike for stasis. The passage of time, the ‘clock’, is not a separate part of our awareness, as it is for PCs. Our awareness canoes along the river of time, changing with every moment (sometimes small changes, like a mood swing that saps one’s energy for a moment–and sometimes large, like an inspired act of invention, or a concussion).

    Consciousness is also linked to hunger, fatigue, stress (all cyclical conditions) by the slight impetus of vector change they bring to one’s train of thought as it passes through time.

    And, like many conversations, identity is sometimes left unfinished, left just hanging in space, unresolved–and after the distraction is gone, it is as likely to go on to new ground as it is to return to the previous focus. This amorphous quality makes Identity difficult to define, or describe–let alone duplicate.

    Back when PCs were new, the burden of accommodating brains to processors was laid on people as often as on spending the time and money to re-program the machines to fit our old thought paradigms. People had trouble digesting this–the formatting of dates, mailing addresses, filenames, data structures–all these things had elasticity while people were keeping track, but they had to be forced into pre-set formats before a computer could help keep track of them. It was a brief period which steamrollered over every business obstacle, giving rise to the phrase ‘user-friendly’ and creating a side-show of programming called entry-verification, or sometimes ‘idiot-proofing’. We don’t even think that way anymore, and only someone like me, who happened to attend that party, can even remember that it was ever different.

    So I believe we still have a big job ahead–at present, we cannot ‘know’ each other’s identities, we cannot really know or control our own, and we have no solid idea, as yet, of what exactly ‘Identity’ is. One thing we’re fairly sure of is that a binary processor cannot be made to mimic a brain–the inclusion of ‘maybe’ (or perhaps ‘time is passing’) alongside the binary ‘Yes’ and ‘No’ options is mandatory if we are ever to create a synthetic identity–our brains do so much more than process a string of true/false calculations. So, still lots of work to go: step 1, figure out what identity is; step 2, invent a trinary processor (or perhaps even more options–quaternary, quinary, who knows?); and step 3, invent a brain scanner that updates in real time. Then we can really settle down to work.

    • December 12, 2011 10:18 am

      Why is it that a “binary processor cannot be made to mimic a brain”?
      First, I believe that, by logic theory, any higher-order processor can be built out of binary parts. That includes the human CNS: it is ultimately composed of binary neural synapses. A neuron may have a hundred dendrites, but ultimately the neuron fires or not.
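
      To make that concrete, here is a minimal sketch (a McCulloch-Pitts-style threshold unit; the weights and wiring are invented purely for illustration): a unit can sum a hundred graded inputs, yet its output is a single bit, and such binary units compose into higher-order functions like XOR.

          def neuron(inputs, weights, threshold):
              # Many (possibly graded) inputs, one binary output: the unit
              # "fires" (returns 1) only if the weighted sum reaches threshold.
              total = sum(w * x for w, x in zip(weights, inputs))
              return 1 if total >= threshold else 0

          def xor(a, b):
              or_out = neuron([a, b], [1, 1], 1)            # fires if a OR b
              and_out = neuron([a, b], [1, 1], 2)           # fires if a AND b
              return neuron([or_out, and_out], [1, -1], 1)  # OR but not AND

          for a in (0, 1):
              for b in (0, 1):
                  print(a, b, "->", xor(a, b))  # 0, 1, 1, 0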

  3. December 12, 2011 12:29 pm

    [Alex] “would you accept a book as some sort of mind upload?”

    First let’s distinguish two questions: (1) Is a book “alive”? (2) Is a book self-aware?

    A book as an inert object is neither alive nor self-aware. It never changes, has no metabolism, no reproduction, and cannot “respond” to anything.

    What about a book that gets copied and recopied over the ages, accumulating “mutations,” like the manuscripts of Hamlet? There it gets interesting, because a recopied book may be said to “evolve” like the genome of an organism or a virus. The math looks similar. Computer viruses evolve in ways very similar to biological viruses (future post on that). Are viruses alive? The consensus of microbiologists is that some large viruses did evolve from living cells.
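
    As a toy illustration of that copying process (the error model below is invented, purely for illustration), each “generation” recopies the text with a small chance of a character substitution, the way scribal errors accumulate:

        import random

        ALPHABET = "abcdefghijklmnopqrstuvwxyz "

        def recopy(text, error_rate=0.02):
            # Recopy a manuscript, substituting a random character with a
            # small probability per position (a scribal error).
            return "".join(random.choice(ALPHABET) if random.random() < error_rate
                           else c for c in text)

        manuscript = "to be or not to be that is the question"
        for generation in range(1, 6):
            manuscript = recopy(manuscript)
            print(generation, manuscript)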

    I think a book is about as far away from a self-aware mind as a virus is from a nervous system. But both evolved from a common ancestor.

    • Alex Tolley permalink
      December 12, 2011 2:05 pm

      A book is certainly not aware. But neither are the various agents running in your brain. A book takes on some “awareness” in another brain once it is read.

      A book can respond through the appropriate wetware. A simple case is a printed lookup table that takes an input and returns an output. Is this so different from a memory based on neural connection patterns?

      As a thought experiment, imagine two books, each of which takes input from the other and whose output creates the input for the other. Now consider many, many books doing the same thing. Would that system of books be “alive” in any sense? Obviously it needs a substrate to run in, but so does DNA in a cell, or the virus example you suggested.
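
      A minimal sketch of that loop (the tables here are made up, just to make the coupling concrete): each “book” is a fixed lookup table, and the ongoing behaviour lives in the interaction, not in either book alone.

          # Each "book" is a fixed lookup table; each one's output
          # becomes the other's next input.
          book_a = {"hungry": "forage", "full": "nap"}
          book_b = {"forage": "full", "nap": "hungry"}

          state = "hungry"
          for step in range(6):
              state = book_a[state]  # book A responds to book B's last output
              state = book_b[state]  # book B responds to book A's output
              print(step, state)     # the coupled system settles into a cycle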

      Make those books webpages and the substrate Google Search, use search results to create new searches and now you have primitive informational agents interacting.

      If a mind is heavily based on memories, which in turn are stored patterns of inputs and outputs, how different is this from the examples I’ve given?

  4. December 12, 2011 8:22 pm

    The two-book system:
    Such a system could be said to evolve if it reproduces new books. The system could also undergo directed selection for content.

    Memory storage could be very similar to the two-book system. The more we learn about memory, the scarier it gets. Human memory is unreliable without external evidence. Yet the more we outsource our memory to machines, the more we hasten the day when the machines will tell us what to remember.

    • Alex Tolley permalink
      December 12, 2011 9:42 pm

      Yet the more we outsource our memory to machines, the more we hasten the day when the machines will tell us what to remember.

      But is this worse than what our brains tell us to remember? Emotional and painful memories are retained well. We have next to no effective episodic memory – no accurate replay, no real storage of details (unless we are trained). Writing is clearly superior to wetware as a medium for memory; digital media even more so. Yes, I agree that what we are “allowed” to remember will be different, but not more out of our control than current memories are. What would be a concern is that memories we want will become inaccessible for some reason (new forms of extortion, punishment, or hacking?).

      I don’t think the books need to evolve to maintain the analogy, but I would expect the output responses to change as learning influences the I/O weights and interactions.

  5. December 12, 2011 10:48 pm

    “Writing is clearly a superior medium”–that is an ancient argument, even from the ages when people actually did remember things a lot better than we do today. In the ancient world, people could recall a string of a hundred objects after hearing it once.

    Still, writing was argued to be “better.” That was the claim of the Bible and the Koran–“the word” rules. The advantage of “the word” is that you could “set it in stone” (the ten commandments) to remind people who forget. But writing has the disadvantage that you remember things that might better be forgotten (like rules for buying and selling people).

    We are seeing now the advantages and disadvantages of too much electronic memory. People can use the electron cloud to create things, to discover things, to conduct revolutions. But they also download lies–it’s much easier than it used to be to implant false memories in a large population.

  6. SFreader permalink
    December 13, 2011 2:34 pm

    “What would it take for an information system to have the same “awareness” as I do?”

    Connectedness, feedback, self-adjusting/querying – basically the ability to talk to itself and hypothesize/play out/evaluate more than one perspective. Otherwise Google would be sentient by now – so it’s not just sheer mass of data, but what you do with that data.

    “… the ages when people actually did remember things a lot better than we do today…”

    Memory relies on the ability to ‘attend’. So if you factor in the total increase in stimuli today versus even 30 years ago, I’d have to say that total ability to remember hasn’t really changed. What has changed, I think – although I’ve not seen any academic lab-based studies on it – is how well, and by what types of filters, we now select what we should attend to. Today, as 40 years ago – and probably ages past – emotionally relevant events still have more impact on memory than info-dump content. (Ads rely on this, and advertising still does work.) Uniqueness of an experience is also a very strong memory helper, more powerful than recency. (That is, you’re likelier to accurately recall details of your first house purchase than of the house you’re living in today, or of what you had for dinner last night.)

    Instead I would say that rote memorization ability for content (text) is on the decline, because that is a skill strengthened by practice/repetition. However, even here I’m not so sure, because memorization of new content/learning has become a lifetime occupation for most people. Just on a day-to-day basis there are now more things that I need to remember (passwords), and more new things that I have to learn how to do, than my parents and certainly my grandparents ever had to learn.

  7. December 26, 2011 7:42 pm

    Dan, the question of “What is man?” is so open-ended that we could take another twenty posts for different parts of it. The question of intelligence alone has shifted over the centuries, as our ideas shift about what intelligence consists of. Adam Gopnik in the New Yorker has a good discussion of this: how in medieval times memory and calculation were the mark of intelligence; now that computers do this, we exclude them from the definition:
    http://www.newyorker.com/arts/critics/books/2011/04/04/110404crbo_books_gopnik

    On the other hand, if being “human” means having a soul… perhaps God could put a soul in a human; a human with some metal-plastic replacement parts; or even an all-metal-plastic human?

