Wednesday, February 20, 2013

Could we experience the workings of our own brains?

One of the oft-quoted paradoxes of consciousness is that we are unable to observe or experience our own conscious minds at work; that we cannot be conscious of the workings of consciousness. I've always been puzzled about why this is a puzzle. After all, we don't think it odd that word processors have no insight into their inner workings (although that's a bad example because we might conceivably code a future self-aware WP and arrange for it to access its inner machinery).

Perhaps a better example is this. The act of picking up a cup of hot coffee and bringing it to your lips appears, on the face of it, to be perfectly observable. No mystery at all. We can see the joints and muscles at work, 'feel' the tactile sensing of the coffee cup, and its weight as we begin to lift it. We can even build mathematical models of the kinematics and dynamics, and (with somewhat more difficulty) make robot arms to pick up cups of coffee. But - I contend - we are kidding ourselves if we think we know what's going on in the complex sensory and neurological processes that appear so effortless to perform. The fact that we can observe and even feel ourselves lifting a coffee cup gives very little real insight. And the mathematical models - and robots - are not really models of the human neurological and physiological processes at all; they are models of idealised abstractions of limbs, joints and hands.
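To make concrete just how idealised those abstractions are, here is a minimal sketch of the standard textbook stand-in for a shoulder-elbow-hand chain: the forward kinematics of a two-link planar arm. The link lengths and angles are illustrative values, not anatomical ones - the point is that this is all the model contains.

```python
import math

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """Fingertip position of an idealised two-link planar arm.

    theta1: shoulder angle, theta2: elbow angle (radians);
    l1, l2: link lengths in metres (illustrative values only).
    """
    # Elbow position, measured from the shoulder joint
    ex = l1 * math.cos(theta1)
    ey = l1 * math.sin(theta1)
    # Fingertip position: add the forearm link
    fx = ex + l2 * math.cos(theta1 + theta2)
    fy = ey + l2 * math.sin(theta1 + theta2)
    return fx, fy

# With both joints straight, the arm reaches its full length l1 + l2
print(forward_kinematics(0.0, 0.0))  # (0.55, 0.0)
```

Two angles, two lengths, a little trigonometry: nothing in it even gestures at the sensory and neurological machinery that actually performs the movement.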

I would argue that we have no greater insight into the workings of this (apparently straightforward) physical act, than we do of thinking itself. But again this is not surprising. The additional cognitive machinery to be able to access or experience the inner workings of any process, whether mental or physical, would be huge and (biologically) expensive. And with no apparent survival value (except perhaps for philosophers of mind), it's not surprising that such mechanisms have not evolved. They would of course require not just extra grey matter, but sensing too. It's interesting that there are no pain receptors within our brains - that's why it's perfectly possible to have brain surgery while wide awake.

But this got me thinking. Imagine that at some future time we have nanoscale sensors capable of positioning themselves throughout our brains in order to provide a very large sensor network. If each sensor is monitoring the activity of key neurons, or axons, and is able to transmit its readings in real-time to an external device, then we would have the data to provide ourselves with a real-time activity image of our own brains. It could be presented visually, or perhaps sonically (or via multi-media). It might be fun for a while, but this personal brain imaging technology (let's call it iBrain) probably wouldn't provide us with much more insight or experience of our own thought processes.

But let's assume that by the time we have the nanotechnology for harmlessly inserting millions of brain nanosensors we will have also figured out the major architectural structures of the brain - crucially linking the neural scale to the macro scale. Actually, if we believe that the recently announced European and US human brain Grand Challenges will achieve what they are promising in terms of modelling and mapping human brain activity, then such an understanding should only be a few decades away. So now build those maps and structures into the personal iBrain, and we will be presented not with a vast and bewildering cloud of colours, as in the beautiful image above, but a simpler image with major highways and structures highlighted. Still complex of course, but then so are street maps of cities or countries. So the iBrain would allow you to zoom into certain regions and really see what's going on while you (say) listen to Bach (the very thing I'm doing right now).

Then we really would be able to observe our own brains at work and, just perhaps, experience the connection between brain and thought.


  1. Whatever the neurological mechanisms are for cup lifting, they are probably functionally equivalent to a kinematic chain. Otherwise you wouldn't be able to use a pencil or a shovel, or drive a vehicle with much competency. Other animals also seem to have the same capacity for extended kinematics, but it might be that in humans that skill has been particularly refined by a tool-making culture (the idea of culture-led gene-culture co-evolution).

    1. Yes I agree Bob. I'm guessing you will also agree with me that functional equivalence doesn't tell us much (if anything?) about the underlying neurological mechanisms and control architectures.

      Many thanks for your comment :)

    2. Yes. Systems could have the same inputs and outputs but have very different kinds of architecture in between. Likewise in nature there are examples of convergent evolution.

  2. Wouldn't it be scary to see how the brain actually does its stuff? All those things that look simple to the eye, like lifting a cup, drinking from it and then giving the instruction to taste, smell and digest it. What if such research taught us how to really exploit the brain's capacity? Could we train our brains to be more powerful than today's modern computers?

  3. Great point of view! Imagine that one day researchers unlock the exact inner workings of the brain! Would man be able to use his brain to its full extent? With all that unused and unknown capacity stored in that grey matter, wouldn't we be more powerful than today's best supercomputers? Multitasking is not a problem already; our brain handles digestion, feelings, tactile sensations, seeing, hearing, tasting etc... Imagine computing digits of pi in a split second. I think it is possible, we just don't know how yet!

    1. "I think it is possible, we just don't know how yet!"
      Unfortunately, it's not possible. First and foremost, most areas of the brain are "narrow specialists" - they perform only a single, genetically defined task (such as recognising a horizontal line or deciding whether it's time to sleep). The cerebral cortex is the main part used for "intentional" tasks, such as computing digits of pi. The second problem is that we can see its limits. Remember (or imagine) learning how to ride a bicycle. When you first start learning the moves, the cerebellum isn't involved and the cerebral cortex has to manage the muscle groups and their contractions and... it's hard. You don't have much "computing power" remaining - you won't notice a beautiful sunset or the expressions on the faces of the people around you (unless they seem threatening). That's why the brain has a special trick to "record" and "replay" moves - it just doesn't have enough power to 'think' about every movement.
      I suppose that some improvement is possible, but it won't be anything near "supercomputer power". Alas.

    2. Many thanks Danny for your comments.

      Re your first comment, not sure about scary but I agree it might be very weird. Come to think of it - it might be dangerous. I could imagine feedback loops in which you become somehow mesmerised. High even. Imagine you get pleasure from observing the brain's pleasure centres. Maybe the iBrain would need safety features.

      Re your second question - I agree with absurdated's very interesting reply (thanks!). I rather think it's a myth that there is a load of untapped potential in the brain. Although I think the brain's 'processing power' is very large indeed, I think it's very hard to make comparisons with supercomputers. The way brains appear to work is profoundly different to the way digital computers work.

    3. "I rather think it's a myth that there is a load of untapped potential in the brain."

      Nature appears to favour the principle of least energy.

      I won't mention redundancy as the subject of evolution throws a spanner into that argument. :-D

  4. Brilliant. Let's get nanobots into people's brains... and wire them up to centrally located systems, owned by the elite... behave yourselves!

    1. Yes good point. I would worry about that too.

    2. Isn't that the BBC's role?

      24hr news.

  5. "After all, we don't think it odd that word processors have no insight into their inner workings" - word processors are not conscious to begin with :-)

    1. Quite right - which is why I realised it was a bad example.

      But thinking about things that are conscious... Most people agree that humans are not the only animals that are conscious, and there is a commonly held intuition that there must be degrees of consciousness. At the top is a small group of animals that are self-aware (i.e. pass the mirror test). And, as far as we know, only one - humans - that think about thinking. So, in evolutionary terms, self-aware consciousness is very rare indeed, and reflective consciousness (as far as we know) has only evolved once. So if these higher forms of consciousness are so astonishingly rare it's not surprising that nothing has (again - as far as we know) evolved the ability to be aware of its own thought processes.

      My own view is that reflective self-consciousness may be an unexpected emergent property of consciousness.

      Thanks for your comment!

  6. It is not generally realised that this technology already exists.

    It is still an exotic field, despite significant research; however, the results have repeatedly been shown to reveal not only the ability to observe physical activity but also to observe the formation, composition, action, and dissipation of thoughts.

    As the process is Biokinetic, there is no real requirement for sophisticated technology, however the techniques employed are key, and are themselves highly sophisticated.

    Excellent results can be gained through repeating the process consistently over at least a year, although progress is noted at an earlier stage.

    The technology is open source, and is frequently referred to as meditation. One of the best vendor groups is known as Buddhism.

    1. Thank you Peter. These were also my thoughts on reading the article. Cheers.

    2. We are living in a culture entirely hypnotized by the illusion of time, in which the so-called present moment is felt as nothing but an infinitesimal hairline between an all-powerfully causative past and an absorbingly important future. We have no present. Our consciousness is almost completely preoccupied with memory and expectation. We do not realize that there never was, is, nor will be any other experience than present experience. We are therefore out of touch with reality. We confuse the world as talked about, described, and measured with the world which actually is. We are sick with a fascination for the useful tools of names and numbers, of symbols, signs, conceptions and ideas.
      ~ Alan Watts

    3. Love that a lot, open source and vendor Buddhism. Cool way to express this topic!

    4. Thank you Peter for your observation about meditation and Buddhism. I experimented a little with transcendental meditation in the 1970s, and know one or two people who are *very* practiced in Zen meditation. As far as I know they don't claim to be able to observe the workings of their own brains. Or am I wrong about this..?

      I accept entirely that meditation (and psychotropic drugs) can radically change perception, and alter your sense of self (and I've experienced some of that myself), but I'm not sure that experiencing those changes gives any real subjective or objective insight into the inner workings that I am interested in here.

      Thanks also to BPDecaf and Danny for your additional comments. I appreciate the Alan Watts quote. The self-perception of time is deeply interesting. I'm pretty convinced by arguments that the temporal continuity of consciousness (and therefore of self), is an illusion.

  7. The logistics of instantaneous capture would be a nightmare.

    re: Heisenberg's uncertainty principle.

    1. Indeed. Pretty exotic technology all round!

    2. If we ever achieve that then a Star Trek transporter would most likely be feasible too.

      At least remote exploration of other worlds might be possible by uploading a brain copy to an artificial lifeform and sending it off into the cosmos.

  8. If you can build and install brain nanosensors it wouldn't be a major step to then produce the opposite nano devices which can input a signal. With read and write capabilities, and the fact that brain neurons are already designed to connect together, could we not then link two brains together? I.e. neurons in one brain being connected to neurons in the other, with communication between them?

    With sufficient shared neurons perhaps the two brains would adapt to the extra connections and learn to use the capacity of the other. Possibly both people just die as the brain can't adapt, but on the other hand maybe it allows one brain to become extended at the expense of the other, or it produces a kind of shared consciousness, or even a single being living within two separate bodies, with four eyes, four ears, etc.

    If that works out you could then produce artificial brains which individuals can connect to in order to extend their capabilities.

    1. We already have that. It's called...
      -peer reviewed journals.
      -MPD (Multiple Personality Disorder).


  9. "Imagine that at some future time we have nanoscale sensors capable of positioning themselves throughout our brains...."

    At that point you are expanding consciousness to include direct perception of the inner workings of the brain, much as you can now experience the mechanical workings of your hand from the inside.

    The problem at that point is that you are no longer observing "consciousness," you are observing brain activity that excludes this new facet of consciousness that you've created.

    So you are going to need a way to analyze your new analytical systems as well, and then do that again at the next level, and so on. Understanding the brain becomes a moving target in which the parts involved in analyzing the brain are themselves beyond direct (phenomenological) analysis.

    You can never get to the point of understanding how you understand, that's the fundamental problem to begin with.

    Also, you kind of need to back up and explain what you think consciousness is, because there is a lot of contention about how to define that. You seem to take a Dennett-like view; but this is probably beyond the scope of a short article.

    1. My own belief is that space and time are a strategy the brain uses to reduce complexity.

      If we didn't have space and time, our senses would be clogged with information overload.
      We can't observe or do anything without space and time, but we have no guarantee that the universe is really like that, and no guarantee that brains have to work within that framework.
      There are plenty of hints in modern physics that space and time don't really match the everyday model we all use.
      What happens when we are in deep sleep? In that state, space and time cease to exist, and we can't observe or do anything. But we can be observed...

    2. "We can't observe or do anything without space and time, but we have no guarantee that the universe is really like that, and no guarantee that brains have to work within that framework."

      There's a contradiction there. If we can't function without space and time, then that's a pretty good guarantee that consciousness requires space and time.

      Also, you're assuming that there is some "real" universe out there separate from consciousness; that's not a settled question (at least not in the philosophical sense).

      In any case, it's a misunderstanding to think of space and time as illusory or unreal just because our understanding is mediated by consciousness which can't grasp the whole picture. Nothing can be understood without space and time, they are the very bedrock of reality, so if space and time aren't "real" then neither is anything else (including you and me).

    3. It's a metaphysical belief of course, which probably can't be tested or falsified. But I still think it would explain a lot.

      I don't think there's a contradiction, though I could have phrased it better:
      There is no guarantee that brains are built according to the strategy they use to observe the world. In fact, I think it's rather unlikely.

      Whether there is a 'real' universe out there or not is a bit of a red herring. I take it we are part of the universe, whether we understand it or not.

      Actually, I don't think using the words 'real' and 'unreal' is going to get us anywhere. What would be the difference between 'real' and 'unreal' space and time?

    4. Anonymous: "Nothing can be understood without space and time, they are the very bedrock of reality, so if space and time aren't "real" then neither is anything else (including you and me)." - You underline the point and I again quote Alan Watts' eloquent explanation of why these kinds of discussions about 'consciousness' go round and round in an imprisoning circle.

      "We are therefore out of touch with reality. We confuse the world as talked about, described, and measured with the world which actually is. We are sick with a fascination for the useful tools of names and numbers, of symbols, signs, conceptions and ideas." ~ Alan Watts

    5. Thank you Anon for your comment at the start of this thread.

      You write "So you are going to need a way to analyze your new analytical systems as well, and then do that again at the next level, and so on". I agree that what I'm suggesting may appear paradoxical. Especially since you appear to be using the same mechanisms (consciousness) to observe (and perhaps gain some insights into) the very thing you are observing. But I would argue that it may not be as paradoxical (and impossible) as it might appear.

      Firstly, there's an awful lot going on to observe, and vast parallelism, so you would I suspect end up only observing some facet of the 'thing' that gives rise to the subjective experience of consciousness.

      And secondly, we humans do appear to be capable of both engaging in an activity and imagining ourselves doing that thing, from a different perspective *at the same time*. (You can have a conversation and, with a bit of effort, imagine looking at yourself and your interlocutor from the other side of the room, at the same time.) By the same token we might be able to deal with the observing-the-processes-of-thinking while being conscious of the act of observing trick.

      You are right that I should explain what I think consciousness is. You'll get a good idea from reading my other blog posts on consciousness, self-awareness, free will, intelligence, etc. My views on consciousness are influenced especially by Daniel Dennett, Susan Blackmore, Owen Holland and Thomas Metzinger. Essentially I regard consciousness as an emergent property of the cognitive processes of some animals. Although consciousness is puzzling, so are lots of emergent properties of complex systems - until you figure out what's going on. My research in swarm intelligence (itself an emergent property) over the past dozen years has shown me that surprising and puzzling emergent properties can with effort be analysed and understood. I'm afraid I don't buy arguments that consciousness is, by definition, beyond analysis or understanding.

    6. @James Ingram
      "My own belief is that space and time are a strategy the brain uses to reduce complexity.

      If we didn't have space and time, our senses would be clogged with information overload."

      That sounds like space and time are a manifestation of the mind, which, without our senses, would be a satisfactory conclusion.

      What information? The tendency toward disorder (entropy) has no facility to change/evolve without space to expand into.

      No space = no *new* information. (stasis)

    7. Entropy is something we observe with our senses. It says something about our experience of space and time. We can't say anything about it otherwise.

      I don't think the universe is static - on the contrary, there must be something spacey and timey about it, otherwise we wouldn't see it that way at all.
      Calling it 'static' is temporal, sense-centred language. And I think we'd also be on shaky ground trying to say that there is a simple arrow of time (defined by everyday entropy).

      "What information?"
      The only way to describe it is, I think, with pure mathematics. But pure mathematics evolves too. Mathematicians can be very creative. :-)

    8. Your view of entropy appears to be at odds with mine.

      But I get the impression that your belief system is perhaps a religious one. So you probably see things from a different viewpoint.

      As for the creation of a sentient being, I don't believe it's possible either. But more importantly I don't want it to happen even if it were possible.

      Ultimately we are responsible for our own creations, and if they out-live us, then what?

    9. Of course I mean inanimate creations.

      Our kids hopefully do outlive us.

    10. I have a belief system, but I wouldn't agree to it being called religious. Metaphysical maybe.
      I think metaphysics is unavoidable. Nobody knows everything. Conjectures are necessary at the edges of our knowledge. If we make no hypotheses, nobody gets anywhere. We end up with a boring, unrealistically static world.
      As far as I'm concerned, Popper was right, and "Conjectures & Refutations" is the way to make progress in the sciences.

      Yes, I do believe in progress... and entropy in our tangible world. :-)

      Obviously, science would rather have falsifiable conjectures. I have no idea if my basic hypothesis is falsifiable (is scientific) or not. Either way, it keeps me happy while I get on with other things. :-)

      "If [our own creations] out-live us, then what?"
      We become immortal of course. :-)

    11. Here's an attempt to be more 'scientific' in the above sense:
      A stronger version of my original hypothesis would be to propose that there is another dimension beyond space and time. Let's call it "xyz". This is something which by definition cannot be perceived by my physical senses.

      This is like proposing that the Earth is not flat, but has a third dimension, without having the technical means to test the proposal.
      The proposal is initially "un-scientific" but becomes testable as our tools improve (ocean-going ships).

      I think that the universe is basically information, and that pure mathematics is a window on that, so I would expect mathematical descriptions of LHC observations which included "xyz" to be particularly elegant.
      The difficulty is, of course, to know when we are seeing "xyz" in the (man-made) mathematical descriptions. Even calling it a "dimension" is very brain-centric.

      That's why I was deliberately vague in my original formulation as to what the brain is simplifying.

    12. Well the whole problem of unifying gravity with everything else calls for some wild ideas, like up to twenty-four dimensions folded into themselves. Plus string theory, which is my favourite one right now.

      I haven't revisited the LHC website for a while, so I'm not exactly up to speed with current results.

    13. So my answer to
      "Could we experience the workings of our own brains"
      seems to be no -- unless "experience" means understanding exotic maths and machinery like the LHC.

      The argument is, I think, comprehensible even if you are not a theoretical physicist. (I'm not either.)

      Fundamental research can lead in unexpected directions. Maybe LHC research will someday have an effect on the development of Artificial Intelligence. Maybe. :-)

      Thanks for the thread, I enjoyed it a lot.
      Best wishes.

    14. BPDecaf: "You underline the point and I again quote Alan Watts eloquent explanation of why these kinds of discussions about 'consciousness' go round and round in an imprisoning circle"

      Watts has a point but there is a danger in taking what he is saying too literally.

      If you literally believe in a world-as-it-is which is beyond the ability of consciousness to understand through observation and measurement, and truly consider the world that is measured and observed as illusory, then you've just thrown all human knowledge into a metaphysical abyss.

      It's important to draw a distinction between symbols and what they represent, but there is a danger in taking this so far that it becomes the basis of reality. Reality is what we know, whether our knowledge is complete or not, everything else is metaphysics.

    15. I would suggest that as serviceable as our tools may be in constructing an "everyday" existence we do ourselves a limiting disservice to equate thinking with consciousness. I do not see that this immediately leads us to throwing out the baby with the bathwater as Anonymous maintains. Within their sphere tools are helpful but to extend their usefulness beyond their limited realm is the hallmark of human hubris. A butter knife serves us well when applied to a piece of toast. It may prove to be adequate when it comes to steak but its utility diminishes rapidly when it comes to constructing a building. We can continually devise ever more sophisticated tools to meet a given set of requirements but by their nature they limit us to the very requirements we ourselves have identified through cognition. As uncomfortable as the notion may be consciousness cannot be subsumed by the tools of thought.

  10. Forget the real-time behavior (which is complex enough). How on earth does the brain save and then index and recall a memory? This is way more complex and probably requires sensitivity at the molecular level.

    1. As a computer scientist I'm familiar with neural networks: whenever the brain learns something, it strengthens connections between neurons at synapses. In software we try to do this too. Get input, work on it, store it and use it afterwards. Such algorithms are used, for example, in voice, face and fingerprint recognition, but also for many other purposes. Backpropagation is one way of training a neural network. But okay, that is out of the scope of this article. I just wanted to state that once the brain has learned something - a smell, taste or experience - it can quickly recall this information. How it does that, I haven't got a clue; chemistry at work, I suppose?

    2. Even more intriguing...

      Do new experiences/memories get stored wholesale, or just the differences from what is already learned?

      Personally I believe it's the latter. As the saying goes, 'you can't run before you can walk'
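The learning-by-adjusting-connections idea in the first reply above can be illustrated with the simplest possible learner: a single artificial neuron trained by the perceptron rule, a much simpler relative of backpropagation. The task and all values below are toy ones, chosen only to show a 'synaptic' weight being nudged after each mistake.

```python
def step(x):
    """Threshold activation: the neuron either fires (1) or doesn't (0)."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single neuron: output = step(w . x + b).

    Each weight update nudges one 'synaptic' connection - a crude
    analogue of strengthening connections between neurons.
    """
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out            # 0 if the neuron was right
            w[0] += lr * err * x1         # adjust each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy task: learn logical AND from its truth table
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The gulf between this and whatever chemistry the brain uses for recall is, of course, exactly the commenter's point.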

  11. Such is the central theme explored by Douglas Hofstadter in "Gödel, Escher, Bach: an Eternal Golden Braid".

    Although it appears on the surface to be a book on philosophy and metamathematics, it is actually targeted at exploring this very question, along with tangential ideas about how we would go about building in consciousness, how we would recognise it, etc.

  12. The quantum wave-particle duality and its apparent collapsing effect on observable probability waves/effects might have something to say about any possible deterministic relationship between biochemistry and behavior.

    On the Zen tip, the eye (I) cannot see itself; a time-delayed, lower-resolution image, or a reflection, is not the self. The feedback loop that is self-recognition changes the self during emission. Hence, it is a fool's errand.

    Nonetheless, the consciousness resides not in the flesh but includes perhaps the mesh of surrounds. Ask most parents where their heart lives...


  13. The brain is actually orders of magnitude too complex to understand in a real-time, conscious, gestalt way: 10^10 neurons, 10^14 synapses, more potential states than there are elementary particles in the universe. Examine your own thinking - you can only handle a few variables at once in this mode. Try to model the whole brain consciously and you'd be, to use that gaming expression, cluster-fucked.

    On the other hand, the automatic processes in your brain are capable of almost miraculous levels of parallel processing. You can coordinate hundreds of muscles to walk across the room while looking out the window at a crowd and picking out a human wearing a blue coat from hundreds of people and other objects. These systems do stuff and present results and exceptions, you don't see the code executing. If you really had to consciously operate your muscles you'd be confined to a wheelchair, or more likely a bed or an iron lung.

    The brain is vastly more complex than the skeleton and musculature. Fully understanding 100 milliseconds of brain activity might take multiple lifetimes. That's why we don't do it and it's why evolution has not attempted it. Consciousness is an executive function, not a micro-manager. "You" get information about your brain states on a need-to-know basis, as digestible, high-level information. There's a cacophony of activity going on even when you are lying on a couch with your eyes closed, listening to slow, quiet music, or asleep. There's no way the limited resources of consciousness could process that vast, bewildering, maddening array of information.

    We understand our brain - to the extent we do - using top-down conceptual models and exception reporting. It can't be any other way. If anything ever understands the brain in the way you're suggesting it will be a computer that is much smarter than us. A computer that doesn't exist yet, and one we really don't know how to build or program.

    (This isn't to say that brain biofeedback is useless, just that it needs to be focused on stuff we might be able to handle.)
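The state-count claim quoted above is easy to sanity-check with back-of-envelope arithmetic: even treating each of the ~10^14 synapses as a bare on/off switch, the number of possible configurations dwarfs the commonly quoted ~10^80 elementary particles. A rough sketch, with all figures as order-of-magnitude estimates:

```python
import math

SYNAPSES = 10**14       # order-of-magnitude synapse count cited above
PARTICLES_LOG10 = 80    # ~10^80 elementary particles: a commonly quoted estimate

# Even if each synapse were only on/off (a huge simplification), the brain
# would have 2**SYNAPSES possible configurations. The numbers themselves are
# unwritable, so compare their base-10 exponents instead.
states_log10 = SYNAPSES * math.log10(2)   # log10 of 2**SYNAPSES

print(f"log10(brain configurations) ~ {states_log10:.3g}")  # ~3.01e13
print(f"log10(particle count)       ~ {PARTICLES_LOG10}")
print(states_log10 > PARTICLES_LOG10)                        # True
```

A number with tens of trillions of digits against one with eighty: the comparison isn't even close, which is the commenter's point.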

  14. ...

    Another way of looking at this is through considering what consciousness actually is. If you believe that consciousness arises in some parallel non-physical realm not subject to computing limitations then the above discussion is void, but you'd have to explain "where" that realm might be, how it arose, how it works, and also deal with Occam's Razor. I say you can't.

    More reasonably we can see consciousness arising something like this: A single nerve cell just passes information. No consciousness. A small cluster of cells can create a homeostatic controller. Still nothing conscious. A small brain can cope reasonably well with a complex environment, reacting to a range of threats and opportunities. But this doesn't require or produce consciousness. As the brain gets more complex it can begin to not merely react to environmental cues, but actually model the environment. Show a dog a biscuit then place it in one of two clenched hands; the dog remembers which hand for several minutes. It has some kind of simple model of a part of the world, so it can go to the correct hand. This still doesn't produce consciousness. Humans use models vastly more complex than this, which enable extremely complex human activities like, say, the legal system. We didn't evolve to cope with the modern legal system, it wasn't around, but we did evolve to cooperate and compete in groups. To get a decent social system going you need to be able to model your fellows. Not just where they hid a biscuit, but what they are thinking: your successful survival, mating opportunities, cooperative hunting, group defence and child care depend on it. Thus, the brain must be able to model your fellow group members and their interactions - somehow; we know very little of how these systems work. Of course, modelling and monitoring your fellows is useful, but since you're actually one of the tribe - in fact, the one you have the greatest evolutionary interest in - you're really the most important individual to model and monitor. And this is how I see consciousness. It's not just "Seeing Red" - a smart phone can do that - it's seeing red, knowing that you are seeing it, and modelling an "I" who is knowing that it is seeing red. That is being conscious. It's complex and it must require a lot of brain working in a coordinated way to produce it.

    So merely knowing what is happening at different points in the brain is beside the point. It's like measuring voltages on a computer circuit board to work out how to use a drawing program: you need the top-level organisation. Only when we can combine these points into a synthesised, thought-like something are we really getting somewhere tractable. And we don't know how that works. We can program a computer that looks at a scene and answers a question like "Is the blue block on the red block?" - but that's how primitive our understanding of how these models work is. Something like "Is Jenny happy?" or "Am I pleased that Jenny is coming to dinner?" requires a model so complex that we've hardly got a clue where to start.
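    The blocks-world question above can be sketched in a few lines; this is a toy illustration of my own (the scene representation and names are invented, not taken from any particular system):

```python
# Toy blocks-world model: the scene is just a list of (block, support) facts.
# All names here are invented for illustration.
scene = [("blue", "red"), ("red", "table"), ("green", "table")]

def on(above, below, facts):
    """Answer 'Is `above` directly on `below`?' by looking up a stored fact."""
    return (above, below) in facts

print(on("blue", "red", scene))   # True: the blue block is on the red block
```

    Questions like "Is Jenny happy?" have no such ready-made fact table to look things up in - which is exactly the point.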

    1. "Thus, the brain must be able to model your fellow group members and their interactions..."

      Yes, one of the most plausible explanations of consciousness is that it is a side effect of the ability to internally model and predict the behavior of others and the self. This naturally gives rise to the ability to self-reflect, which could be what gives rise to our experience of consciousness.

  15. Someone once wrote about the same example (maybe the same Douglas Hofstadter mentioned here) that human beings might use a totally opposite strategy to solve the coffee-cup problem than robot arms do. While the robot knows exactly the path it must follow, humans don't know it at all. Humans might use only imagination to accomplish it. This is what he proposed: while moving the arm, we imagine all the possible "futures" that would not take the cup to the mouth or that would spill the coffee. Avoiding all those possible paths, what remains is the correct path.
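    A minimal sketch of that elimination idea, under my own assumed reading of it (the "spill" test and its 30-degree threshold are invented purely for illustration):

```python
import random

# Planning by elimination: imagine many candidate futures, discard the
# ones that spill the coffee, and keep whatever survives as usable paths.
random.seed(0)

def spills(tilt_degrees):
    # An imagined future fails if the cup tilts too far; the threshold
    # is an arbitrary illustrative choice.
    return abs(tilt_degrees) > 30

candidate_futures = [random.uniform(-90, 90) for _ in range(1000)]
surviving_paths = [t for t in candidate_futures if not spills(t)]
print(len(surviving_paths))  # the futures that remain after elimination
```

    Note that even this toy version only samples a finite number of futures - which connects to the objection in the next reply about the cost of imagining them all.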

    1. Imagining all the possible futures would take an infinite amount of processing power to accomplish in the given time. I'm still wondering whether imagination is somehow outside (robot) time.
      When deeply concentrated we lose track of time. Machines use crystal clock cycles to measure time. We don't.

    2. I don't think that the machine analogy is a particularly good one when it comes to thought.

      Computers are nothing more than sophisticated puppets, no more thinking entities than puppets made of wood and string. You can't see the strings but they are there, tied to the programmers who have pre-determined the behaviors of the machine.

      To the degree that machines seem to think, they are actually displaying human thought, that of their programmers. I know this because I'm one of the puppeteers and I know how computers work down to the lowest level. They are nothing more than complex arrays of switches, no more capable of thought than an abacus (and just as dependent on human thought).

      Computers can augment and mimic human thought; they can't actually perform it. At least, not as we understand computing today: a true thinking machine would be unrecognizable to contemporary computer science, and closer to the realm of biology.

      The coffee cup problem is an engineering challenge rather than a philosophical exercise. You can't make a robot solve the problem in the human way, because they fundamentally don't work that way. So you use human thought to make the puppet do what you want, instead of trying to make the robot think about it.

    3. But the computing machinery we have now is still based on calculation. Perhaps when we know more about the inner communication of the brain, we'll build new artificial machines that move away from pure calculation. Hopefully someone will come up with a name other than 'computer'.

    4. Ever considered that you might be a "complex array of switches" too? A different kind of switch from a transistor gate in a computer, but switches nonetheless. The brain is a radically different design from a standard computer, but it's a computer all right.

      If not, what?

    5. The analogy to conventional switches is far too crude.

      Reading papers on brain research (McCulloch & Pitts), the variation of pulses seems to suggest there's some sort of voltage-to-frequency transmission. Transistor switches don't have inhibiting inputs, whereas neurons certainly do.
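      The original McCulloch & Pitts model actually captures that difference. Here is a minimal version of their neuron (the standard textbook formulation, not code from the paper): binary inputs, a firing threshold, and inhibitory inputs, any one of which vetoes the output outright - something a plain transistor switch doesn't have.

```python
# Minimal McCulloch-Pitts neuron (standard textbook model).
def mp_neuron(excitatory, inhibitory, threshold):
    if any(inhibitory):          # absolute inhibition: one active
        return 0                 # inhibitory input blocks firing
    return 1 if sum(excitatory) >= threshold else 0

print(mp_neuron([1, 1], [0], threshold=2))  # 1: fires
print(mp_neuron([1, 1], [1], threshold=2))  # 0: vetoed by inhibition
```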

    6. Apropos "If not what?":

      The word "computer" is a bit misleading. Too physical.

      I think it was Bertrand Russell who pointed out that you can't have space without objects or time without events. So that's what we should really be looking at. Computers are objects that deal with events.

      So I'd agree with you that brains are in a sense computers. To be more precise, computers are *information* processors, and information is what both objects and events are made of. The apparent physical structure is only half the story. Brains also have to be taught.

    7. Steve Furber (co-designer of the ARM chip) has been doing work on spiking neural networks at Manchester Uni.

    8. I have a great deal of respect for research like that, but suspect that the brain is not actually doing parallel processing.
      They'll probably end up with something useful (powerful robots) without actually simulating a brain.

      Further to what I said above:
      All living things, including brains, are born, and die. I think it's likely that these two events are intrinsic.

      In other words, I think it's unlikely that robot research will lead to the creation of immortal beings. :-)

      Actually, in my experience, current computers have a life span of about 3 years.
      Software is a bit different. It can live/evolve on different computers. But software also has a life-cycle. At some point, replacement software gets written from scratch.

    9. OK, the life span of computers has nothing to do with out-and-out failure rates. Just obsolescence in the eye of the beholder.

      Software - well, that requires a whole book to catalogue the failure rate. But it's mostly down to incomplete knowledge of how to build/write correct software. Sixty years on, the programming language Lisp is still in common use.

      'All living things, including brains, are born, and die.'

      Thankfully ideas live on, and thanks to modern medicine some lives are prolonged beyond their function. I'd rather die early than get Alzheimer's (if you get my drift).

    10. Interesting that languages are longer-lived than the individuals that use them. English has been recognizably the 'same language' for over 500 years, but it's changing nonetheless.

      Talking to machines is a very new development. Wikipedia says:
      [Lisp was] "Originally specified 1958..." and
      "Lisp has changed a great deal since its early days, and a number of dialects have existed over its history."
      Programming languages change as we learn more about how to write software. New languages are developed, and the old ones can stabilize. I doubt that languages stabilize while they continue to be used: if you use a language, you find ways to improve it.

      The programs themselves are on a different level, of course. Some may be simple enough to be useful beyond their authors' lifetime (simple text editors maybe), but the context in which such programs are run (operating system) changes. So that works against their continued use. One uses an equivalent program, better adapted to the environment.

    11. "but the context in which such programs are run (operating system) changes. So that works against their continued use."

      Thank goodness we have virtual machines. I wouldn't be able to run legacy 16-bit software on my 64-bit OS without it.

      Operating system changes aren't a big deal if the software is open source. Software doesn't die at the whim of a controlling party. This was a problem when I was a Mac OS X user.

    12. Virtual machines are software too. Enough people have to know about, and want to use, the legacy software, otherwise the virtual machine gets forgotten/lost.

      But maybe you are right, and there will always be virtual machines for simulating older environments. Computer museums would be interested in that, as hardware components break and become irreplaceable.

      Everyday use is something else. If I were 40 years younger, I wouldn't bother with installing and using a virtual 1970s UNIX system just so that I could use their "vi" text editor. I'd just learn and use the most convenient editor I can get my hands on. "vi" gets forgotten.

      I'm writing open-source software, but I don't think that automatically makes it 'immortal'. :-)
      It's like writing a book. Most books get forgotten. It's the concepts they describe that have a chance of becoming part of other/younger people's culture.

      I think we've drifted a bit off topic here, but the subject is so big that it's worth taking a look at some context.

    13. FYI: I think you would be surprised at just how common vi (vim) is nowadays. It is one of the more popular editors around.

      As far as installing a virtual machine of Unix is concerned, certainly I do it with Linux and BSD (Unix) all the time. Anyone familiar with that 1970s Unix you refer to would be completely at home in either of those (they would not have to learn any new commands!)

  16. As was correctly pointed out by many readers above, awareness is a constructed model held in short term memory. The meaning of "to become aware" is simply to add things into this model. A computer is a machine, and so is a brain. So to be aware of something, to experience something, is a process more like a hard disk access than a CPU cycle. It takes time to marshal experience into a coherent stream. The underlying processes are long gone by the time anything reaches awareness.
    There are really several questions here.
    a) Can we become aware of our awareness?
    b) Can we shorten the time scale of awareness?
    c) Can we drill down to become aware of underlying processes?
    The short answer to a) is that we already are. We know that we are awake, and we know the feelings that certain thoughts invoke. Thinking is not localised in the brain to some imaginary intellectual center; for example, when you use a verb in your sentences you are already involving your motor cortex. We know the strange and pleasurable feeling we get from our emotional centers when we learn something new.
    b) We cannot become aware of the process of becoming aware except in hindsight, because it is simply an infinite regress.
    The mind must have criteria to allow thoughts to be accepted, a decision procedure if you will, or it would become overwhelmed by a vast number of potential thoughts with no acceptance criteria. Things live in the brain as potentiated ideas, long before they crystallise into a stream of consciousness. This procedure must again be out of reach or you get another regress. Goedel's theorem also tells us that there will always be knowledge about a system (truths) that will escape criteria of acceptance by the system.
    So, c): can we scale down? If we build a thinking robot out of wheels and cogs, then we can be sure that it is not thinking at the scale of a gear tooth. It might have thoughts at the scale of a planetary orbit, though, if the machine is big enough.
    Biology has an advantage in that brains can be engineered down to molecular sizes; there is also the million-dollar question: does the brain take advantage of spatio-temporal delocalisation, i.e. quantum engineering?
    Because we interact with the world at a certain spatio-temporal scale, and we are very social creatures, we tend to imagine that our consciousness is about the size of our heads and about 5 seconds long. But in fact there is no evidence for that. The "you" that is "you" may be much smaller than those parts of your body. You may only be the size of a drifting pattern over a constellation of molecular proteins.
    Or perhaps not.
    You may be no larger than your head - but even this is called into question by quantum delocalisation - you may be the size of the universe. Your wave functions may extend to infinity to the beginning and the end of the universe. You may even be a lot weirder than you think, you may be as weird as the universe itself.
    And let's not stop there - remember Schrödinger's cat? Well, you may be that cat. Scientists often debate the fate of the universe - will it be a big crunch, a total expansion, an oscillation, etc.? The universe itself may exist in a mixed state: all of the above. Under such circumstances I doubt you will ever get to know yourself properly.

    1. Hi kaonyx, you wrote:
      > As was correctly pointed out by many readers above, awareness is a constructed model held in short term memory. The meaning of "to become aware" is simply to add things into this model.
      And the model uses space and time. (Sorry if I'm insisting on this too much, but I think it's an important part of the story.) Our idea of 'memory' also presupposes time (as a sequence of events).
      I can't imagine consciousness without space and time. Perhaps consciousness *is* the awareness of space and time. And, as I said when I first joined this discussion, space and time are, for me, the brain's strategy for reducing complexity. The universe can only look at part of itself. I can't look at the back of my head.

      > A computer is a machine, and so is a brain.
      I'm not so sure about that. Depends what you mean by machine.
      > So to be aware of something, to experience something, is a process more like a hard disk access than a CPU cycle. It takes time to marshal experience into a coherent stream.
      You're using the most advanced models for information processing we have (computers, hard disks, CPUs), but I don't think we can assume that that is the model on which brains are built. Older attempts to reproduce life using contemporary technologies (clockwork) now look very quaint. (See e.g. the Wikipedia article on Automata.)
      Even attempts to reproduce the movements of the inanimate heavens in clockwork failed because the relevant concepts and technologies were not there.
      When it comes to modelling brains, we have no reason to suppose that our technologies are much better than those of the ancient Greeks or 18th century Frenchmen.
      > Things live in the brain as potentiated ideas, long before they crystallise into a stream of consciousness.
      That's an interesting thought.
      > This procedure must again be out of reach or you get another regress.
      Yes, isn't that interesting! :-)
      > Goedel's theorem also tells us that there will always be knowledge about a system (truths) that will escape criteria of acceptance by the system.
      As I understand it, Goedel's theorem says that if the system is sufficiently complex, there is no universal recipe for deciding whether one of its propositions is true or false. A proposition can be true but unprovable. Please correct me if I got that wrong.
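      A standard textbook statement (my own summary, not from this thread) is:

```latex
If $T$ is a consistent, effectively axiomatizable theory that interprets
enough arithmetic, then there is a sentence $G_T$ such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T ,
\]
yet $G_T$ is true in the standard model of arithmetic.
```

      So yes, I think you've got it essentially right: such a proposition is true but unprovable within the system.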

      > Biology has an advantage in that brains can be engineered down to molecular sizes; there is also the million-dollar question: does the brain take advantage of spatio-temporal delocalisation, i.e. quantum engineering?
      I would be very surprised if nature somehow decided not to be itself! :-)
      > ...You may even be a lot weirder than you think, you may be as weird as the universe itself.
      Yes, but I think I'd put it more strongly: You *must* be as weird as the universe itself. :-)

      Are we off topic?

      We may have got here because we're talking about self-awareness.
      I'm self-aware, even though I can't see the back of my head. :-)


    2. This comment has been removed by the author.

    3. (a slightly reworded comment)

      I do wonder about the development of self-awareness in people who have been blind since birth. Most animals with so-called higher consciousness don't appear to be self-aware until they have made the connection that the image being reflected mimics their every move, with zero time lag.

      Or am I confusing self-awareness with self-recognition? What's the real difference?

      *Maybe* this could be the ultimate Turing test for an artificial entity - and not just a parlour trick to convince a human at the end of a distant terminal that the conversation isn't with a computer. That kind of test is wide open to misinterpretation and false-positive claims of intelligence, made perhaps in order to retain/gain more funding for a project.

    4. This suggests a little digression about tools.
      Mirrors were probably invented very early. Pools of water, polished metal. You can see more with two mirrors than you can with one...
      Galileo's telescope and the Large Hadron Collider also extend the range of our senses, but in the end we see the results at the scale of our everyday world. There is no escape. In the end, you have to believe your eyes.

    5. "In the end, you have to believe your eyes."

      If they are functional. Which was the point I tried to raise.


  17. There are analogies floating around that our brains are a lot like holograms, in the sense that the whole is greater than the sum of its parts. If you cut a holographic film, the entire image is still reproducible, but with a lower viewing angle and reduced in size.

    I suppose we should be asking: how much of the brain has to be removed before it behaves like an automaton?

    1. The hologram is a fluffy analogy for brain processes that was handy at the time. It's a bit mystical. For a start, a brain bleeds if you cut it. It's better to think of the brain as a computer with multiple connected subsystems running in parallel, with a lot of redundancy and fault tolerance. A holographic image is actually stored at every location on the film, but this is not true of the brain. It is true that a complex memory is not stored at a single point in the brain, but that is quite different from it being stored everywhere.

    2. So how do people who have suffered a brain lesion retain most of their memories?

      That's the analogy with holograms: memory is spread out, not localised. Which suggests that the wave nature of electrons plays some role with respect to parallel operation.

    3. Just for the record. I don't subscribe to the mystical or metaphysical theories.

      Just physics, chemistry and biology.

    4. What you lose with a brain lesion depends on what gets hit. The mechanics of memory isn't well understood, but it's clear that memories are not stored as little atomic facts but as associations, and invoking those associations requires the multiple areas that use the memory. If you remembered that you owned a red car and had a lesion in the colour-processing part of the cortex, you might no longer remember the actual colour red, but might retain a declarative memory of owning a "red" car. So have you forgotten the red car? It depends what you mean. There was an interesting case of a bilingual Chinese man who, after a stroke, forgot how to read Chinese but could still read English. Ideographic text processing uses a brain area additional to those used for alphabetic text processing, and he lost this area: so he forgot how to read Chinese. His knowledge of how to read Chinese was localised.

      This is different from punching a hole out of a holographic image of a toy red car: you might lose one viewing angle, but you could still reliably see the complete car. The image is completely and "evenly" spread across the whole hologram. Human memory is a bit more like having a cluster of information about something, with elements stored in different locations. You can lose a chunk of data in a brain lesion and retain the cluster, but careful analysis would show that something went missing.

    5. So, for instance: would memories of sounds be approximately local to the areas where sound is processed? Ditto for vision, taste, etc.

      I had known of the association problem, where stroke victims recognise objects and faces but can't name them or, as you say, say what colour they are. The Chinese case you mentioned is very interesting - that ideograms aren't in the same category as western symbols.

  18. Consciousness and the brain have, in my view, a peripheral relationship. That being said, this bit of news may provide an interesting addition to the discussion:

  19. Immortality: in the body the important part is the head; in the head, the brain; in the brain... THE MEMORY, which is, simply, what we are.