UKC

Consciousness...what is it and why do we have it?

 The Lemming 08 Nov 2013
Is it an extension of the thought and problem solving process?

I think therefore I blink?
 1poundSOCKS 08 Nov 2013
In reply to The Lemming: It's an illusion...like time and particles.
 SidharthaDongre 08 Nov 2013
In reply to The Lemming:

If anything, it is the thought and problem solving process. If you ask me, it doesn't exist, except as a useful term with which to describe the collective action of various brain/CNS functions.
 Jon Stewart 08 Nov 2013
In reply to SidharthaDongre:
> (In reply to The Lemming)
>
> If anything, it is the thought and problem solving process. If you ask me, it doesn't exist, except as a useful term with which to describe the collective action of various brain/CNS functions.

You're a robot/zombie then.

Do you not wake up in the morning and think "oh look, consciousness again"?

Conscious mental states are absolutely and blindingly obviously distinct from unconscious activity of the CNS, yet we cannot yet put our finger on what the physical differences in the activity of the neurones are. There seems to be a bit of a fashion these days for flagrantly denying this obvious fact.
 Jon Stewart 08 Nov 2013
In reply to The Lemming:
> Is it an extension of the thought and problem solving process?

Thoughts without consciousness? How does that work?

 csw 08 Nov 2013
In reply to The Lemming:
> Is it an extension of the thought and problem solving process?
>
> I think therefore I blink?


It can't be an extension of that process, since it's the environment the process occurs in - a bit like space not being big, just somewhere to be big in.

As to why we have it: asking that question implies a few assumptions about the universe that would seem to include an organising entity. I think it's just a property that lifeforms exhibit once they reach a certain level of complexity, but I confess I've never tried to firm up my definition any more than that.
 SidharthaDongre 08 Nov 2013
In reply to Jon Stewart:

Kind of yeah, just biologically derived.

How are they blindingly different? You can't access your unconscious processes, apart from those that are modifiable. How then can you tell me that they are blindingly different? Because you 'feel' like it should be like that?
 Coel Hellier 08 Nov 2013
In reply to The Lemming:

The brain is there as a decision-making device, to take in information from its surroundings, to process it and make decisions. For this reason it is continually interpreting and running simulations ("what if ...?") about the world and particularly other people. One of the factors in that is knowing information about itself and interrogating itself and considering the effect of its own acts in evaluating choices.

For example, if you are trying to interpret another human being and their motives and likely actions, then one of the best methods is to ask such questions of your own brain, which is much more accessible and a close copy of the other human's brain.

All of this information receiving, self-interrogation and awareness of oneself in one's local environment are what we call "consciousness".
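
[A minimal toy sketch of the "ask questions of your own brain" idea described above. This is illustrative only, not anything from the thread; every name in it is made up. An agent predicts another agent by reusing its own decision procedure as a model of theirs.]

```python
# Toy sketch of "ask your own brain": predict another agent by running your
# own decision machinery on their situation and goals. All names hypothetical.

def my_decision(situation, goals):
    """My own crude decision rule: pursue the goal the situation most favours."""
    return max(goals, key=lambda goal: situation.get(goal, 0.0))

def predict_other(situation, their_goals):
    # The other brain is assumed to be "a close copy" of mine, so my own
    # machinery doubles as a simulation of theirs ("what if I were them?").
    return my_decision(situation, their_goals)

situation = {"throw_ball": 0.9, "walk_away": 0.2}
print(predict_other(situation, ["throw_ball", "walk_away"]))  # -> throw_ball
```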
 Coel Hellier 08 Nov 2013
In reply to Jon Stewart:

> Conscious mental states are absolutely and blindingly obviously distinct from unconscious activity of
> the CNS, yet we cannot yet put our finger on what the physical differences in the activity of
> the neurones are. There seems to be a bit of a fashion these days for flagrantly denying this obvious fact.

The differences are in the different patterns of electrical activity. One can tell the difference between an unconscious brain and a conscious brain in a brain scanner.
 Jon Stewart 08 Nov 2013
In reply to Coel Hellier:
> (In reply to The Lemming)
>
> The brain is there as a decision-making device, to take in information from its surroundings, to process it and make decisions. For this reason it is continually interpreting and running simulations ("what if ...?") about the world and particularly other people. One of the factors in that is knowing information about itself and interrogating itself and considering the effect of its own acts in evaluating choices.
>
> For example, if you are trying to interpret another human being and their motives and likely actions, then one of the best methods is to ask such questions of your own brain, which is much more accessible and a close copy of the other human's brain.
>
> All of this information receiving, self-interrogation and awareness of oneself in one's local environment are what we call "consciousness".

That's a reasonably neat and consistent way of thinking about it, but it fails to address the crucial point: a robot could do as you suggest, but it would not have a cohesive internal world that blends perception, memory, emotion, and a sense of self into what feels like a continuous (but interrupted by sleep) unified experience.

In your view, what is the difference between a robot catching a ball as it takes in sensory data and responds with the correct motor commands, and the human who experiences catching the ball? Or can you not see the difference, because you are actually a robot?

 1poundSOCKS 08 Nov 2013
In reply to Jon Stewart: Has Coel failed the Turing test again?
 Jon Stewart 08 Nov 2013
In reply to Coel Hellier:
> (In reply to Jon Stewart)
>
> [...]
>
> The differences are in the different patterns of electrical activity. One can tell the difference between an unconscious brain and a conscious brain in a brain scanner.

So there is a difference that you can see on a brain scanner; how insightful. That's precisely what I said on the other thread: neuroscience tells us what gross physical brain activity is associated with conscious states, without offering any causal mechanisms. It's an interesting start to the inquiry into the brain, but it isn't an understanding of what is going on in there.
 Jon Stewart 08 Nov 2013
In reply to 1poundSOCKS:
> (In reply to Jon Stewart) Has Coel failed the Turing test again?

Haha, yes!
 Coel Hellier 08 Nov 2013
In reply to Jon Stewart:

> a robot could do as you suggest, but it would not have a cohesive internal world that blends
> perception, memory, emotion, and a sense of self into what feels like a continuous (but interrupted
> by sleep) unified experience.

How do you know that? I assert that a robot as multi-capable as ourselves would indeed experience that just as we do.

> In your view, what is the difference between a robot catching a ball as it takes in sensory data and
> responds with the correct motor commands, and the human who experiences catching the ball?

This is what I put on the other thread (ok, own up, who started a duplicate thread?):

How do you know that? The robot/zombie would have to know about the ball, and the location of its own arm, and would have to know about the decreasing distance between the ball and its arm, and it would need to know about when to clutch its fingers, possibly in response to a sensor registering the arrival of the ball, and it would have to know when the ball is secure and thus when it could move its arm away. In what sense is that *not* "experiencing" catching the ball?
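
[The list of things the robot "would have to know" reads, in effect, as a control loop. A minimal sketch of that loop follows; it is illustrative only, and every sensing and actuation function in it is a hypothetical stand-in, not any real robotics API.]

```python
# Minimal control-loop sketch of the ball-catching robot described above.
# Each callable is a hypothetical stand-in for real sensing and actuation.
import math

def catch_ball(sense_ball, sense_hand, move_hand, close_fingers, grip_secure):
    CATCH_RADIUS = 0.05  # metres; assumed threshold for "the ball has arrived"
    while True:
        ball = sense_ball()        # knows about the ball
        hand = sense_hand()        # knows the location of its own arm
        if math.dist(ball, hand) < CATCH_RADIUS:   # decreasing distance
            close_fingers()        # knows when to clutch its fingers
            if grip_secure():      # knows when the ball is secure
                return "caught"    # ...and can move its arm away
        else:
            move_hand(ball)        # keep closing on the ball
```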
 Jon Stewart 08 Nov 2013
In reply to SidharthaDongre:
> (In reply to Jon Stewart)
>
> Kind of yeah, just biologically derived.
>
> How are they blindingly different? You can't access your unconscious processes, apart from those that are modifiable. How then can you tell me that they are blindingly different?

I'm not saying that the physical mechanisms need to be blindingly different, but the outcomes, the job they do, are blindingly different.

> Because you 'feel' like it should be like that?

Exactly. Some neural processes generate feelings and experiences in my conscious world, while other neural processes do very useful things, like monitor the CO2 level in my blood, but contribute zilch to my experience. On one level the processes are identical - potentials zipping along axons - but at a higher level they are massively different. Just acknowledging that the two jobs are different would be a start to investigating the difference.

Seems like a massive cop-out to say "too hard, don't like problems I can't solve, I'll construct an argument that the problem doesn't exist instead, that's easier", which seems to be the approach in vogue.

 Jon Stewart 08 Nov 2013
In reply to Coel Hellier:
> (In reply to Jon Stewart)
>
> [...]
>
> How do you know that? I assert that a robot as multi-capable as ourselves would indeed experience that just as we do.

I don't know that the robot doesn't have an internal world, it's just a perverse way around the problem (that you create for yourself) of acknowledging the difference between conscious beings and robots/zombies.
 SidharthaDongre 08 Nov 2013
In reply to Jon Stewart:
> (In reply to SidharthaDongre)
> [...]
>
> I'm not saying that the physical mechanisms need to be blindingly different, but the outcomes, the job they do, are blindingly different.

Only if you reduce them down to be so. They're all integrative processes.


>
> On one level the processes are identical - potentials zipping along axons - but at a higher level they are massively different.

That is included in this particular view. Additionally, the 'axonal' processes you refer to aren't identical, nor are they the only way that the brain communicates information electrically.

>
> Seems like a massive cop-out to say "too hard, don't like problems I can't solve, I'll construct an argument that the problem doesn't exist instead, that's easier", which seems to be the approach in vogue.

No-one is copping out. The view is that the problem is integrated, not reduced.
 Coel Hellier 08 Nov 2013
In reply to Jon Stewart:

> I don't know that the robot doesn't have an internal world, it's just a perverse way around the problem ...

Or perhaps it is exactly the solution to the problem. Our consciousness and self-awareness are just a highly developed example of the sort of thing that you would *have* to build into a robot in order to perform such tasks.

You can't construct a robot to catch a ball without it having awareness of its arm position and without it registering the arrival of the ball. At that point you've agreed the principle and are only haggling about the price (or, in this case, the degree of development of this self-awareness).
 andrewmc 08 Nov 2013
In reply to The Lemming:

The difference is that the robot can catch the ball, but (currently) only the person can think 'why did someone just throw a ball at me? Who am I anyway?'.

Not all animals have self-awareness; see the (probably somewhat flawed) mirror test.
 Coel Hellier 08 Nov 2013
In reply to andrewmcleod:

> The difference is that the robot can catch the ball, but (currently) only the person can think
> 'why did someone just throw a ball at me? Who am I anyway?'.

Which is only a difference in degree. Suppose you were to build a zombie/robot as a playmate for your kids, then to do that task well it would indeed have to figure out the intent of the person throwing the ball at them and respond appropriately.

As I see it, people arguing for some fundamental difference between humans and zombies/robots that are equally functionally competent simply haven't thought through what you'd need to program into the robot to make it equally functionally competent.
Tim Chappell 08 Nov 2013
In reply to The Lemming:


What is it?

--You know this because you have it.

Why do we have it?

--In order to experience beauty.
 Jon Stewart 08 Nov 2013
In reply to Coel Hellier:
> (In reply to Jon Stewart)
>
> [...]
>
> Or perhaps it is exactly the solution to the problem. Our consciousness and self-awareness is just a highly developed example of the sort of thing that you would *have* to build into a robot in order to perform such tasks.

I agree that we have to be conscious to do the things we do; a brain generating an internal world of perception, memory, etc. is the best solution to the demands of being an animal like us. The reason we have consciousness is clear.

> You can't construct a robot to catch a ball without it having awareness of its arm position and without it registering the arrival of the ball. At that point you've agreed the principle and are only haggling about the price (or, in this case, the degree of development of this self-awareness).

A real robot that can, say, get around the room to clean the laminate flooring and perhaps feed the cat works by running Intel processors and the like, and I would hope we can both agree it doesn't have an internal world. At what point, as we make robots more and more effective and sensitive, will they begin to develop this internal, cohesive experience? When will you say "but you can't do that without consciousness!"? Feeding the cat, fine. Catching a ball, not fine?

 Jon Stewart 08 Nov 2013
In reply to Tim Chappell:
> (In reply to The Lemming)
>
>
> Why do we have it?
>
> --In order to experience beauty.

Well no. We have it because it comes in handy if you're an animal that, in order to do its job (of passing on genetic material), needs to get around the place interacting with the world. That we experience beauty is great, and it's a part of that interaction with the world and each other. The experience of beauty allows us to form social bonds - we need to have things to share about our internal experience to make us connect and empathise, which are useful things to do when you're not busy beating people up and nicking their stuff.
 Coel Hellier 08 Nov 2013
In reply to Jon Stewart:

> ... and I would hope we can both agree it doesn't have an internal world.

Why would I agree that?

> At what point, as we make robots more and more effective and sensitive, will they begin to develop
> this internal, cohesive experience?

Nearly everything about biology is a continuum. Your question is like asking at what point, over evolutionary time, did animals become intelligent, or become complex? All such things -- intelligence, complexity, self-awareness -- are evolved continua. Thus your floor-cleaning robot scores 3.6 units on the self-awareness scale and a human scores 12,000,000 units.

> When will you say "but you can't do that without consciousness!"?

Wrong question! The better question is "do you need 15 units of consciousness or would 12 suffice?".
Tim Chappell 08 Nov 2013
In reply to Jon Stewart:

Well, so you say. But there's an ambiguity about "Because" here.

Q. Why is the radio on?
A1. Because of the position of its switches and dials

Q. Why is the radio on?
A2. So that I can listen to I'm Sorry, I Haven't A Clue.

A1 gives, roughly, what Aristotle would call a material because. I was giving an answer of the A2 sort: what Aristotle would call a final because.

These sorts of becauses are not exclusive alternatives. Giving an A1 type answer doesn't stop you giving an A2 answer as well, nor vice versa.
 Jon Stewart 08 Nov 2013
In reply to Coel Hellier:
> (In reply to Jon Stewart)
>
> [...]
>
> Why would I agree that?
>
> [...]
>
> Nearly everything about biology is a continuum. Your question is like asking at what point, over evolutionary time, did animals become intelligent, or become complex? All such things -- intelligence, complexity, self-awareness -- are evolved continua. Thus your floor-cleaning robot scores 3.6 units on the self-awareness scale and a human scores 12,000,000 units.
>
> [...]
>
> Wrong question! The better question is "do you need 15 units of consciousness or would 12 suffice?".

I knew it, you're a robot.

As I say (on the other thread), I reckon that the sliding scale of consciousness goes up the biological scale, with clams quite near the bottom, rabbits somewhere in the middle, dolphins and monkeys quite near the top, and us beyond them.

But you see the same continuum from the pocket calculator to the iPhone to the Iridis4 supercomputer to the Coel.
 Jon Stewart 08 Nov 2013
In reply to Tim Chappell:
> (In reply to Jon Stewart)
>
> Well, so you say. But there's an ambiguity about "Because" here.
>
> Q. Why is the radio on?
> A1. Because of the position of its switches and dials
>
> Q. Why is the radio on?
> A2. So that I can listen to I'm Sorry, I Haven't A Clue.
>
> A1 gives, roughly, what Aristotle would call a material because. I was giving an answer of the A2 sort: what Aristotle would call a final because.
>
> These sorts of becauses are not exclusive alternatives. Giving an A1 type answer doesn't stop you giving an A2 answer as well, nor vice versa.

I agree with that, except that you did not give an A2 answer. In your example above, the A2 answer has greater explanatory power than the A1 answer. In the question in hand, I give an answer with explanatory power, and you give an answer that is just made up.
 Coel Hellier 08 Nov 2013
In reply to Jon Stewart:

> I knew it, you're a robot.

Yep, sure am! To quote Dan Dennett: "We're all zombies". We don't have anything more than would be necessary for us to do what we do.
 David Riley 08 Nov 2013
In reply to The Lemming:

Consciousness must begin at a simple level. It is the combination of the drive to survive and replicate, and our computational abilities.
Comparing a robot with a human is not helpful.
To compare a robot with a microbe or perhaps an insect would be the way I would investigate it. There is no reason to think that insects don't have consciousness.
Tim Chappell 08 Nov 2013
In reply to Coel Hellier:

I think Galen Strawson is rather good on what's wrong with Dennett's views.

http://www.theguardian.com/books/2011/jan/09/soul-dust-nicholas-humphrey-re...
 Jon Stewart 08 Nov 2013
In reply to Coel Hellier:
> (In reply to Jon Stewart)
>
> [...]
>
> Yep, sure am! To quote Dan Dennett: "We're all zombies".

Exactly. The author of "Consciousness Ignored".
 Jon Stewart 08 Nov 2013
In reply to David Riley:
> (In reply to The Lemming)
>
> Consciousness must begin at a simple level. It is the combination of the drive to survive and replicate, and our computational abilities.
> Comparing a robot with a human is not helpful.
> To compare a robot with a microbe or perhaps an insect would be the way I would investigate it. There is no reason to think that insects don't have consciousness.

I'm not sure 'computational abilities' is an effective description of what a biological nervous system is up to. I don't know much about measurements of computational ability, but as you suggest, a massive supercomputer might have similar power to an insect. It strikes me that because the insect has purpose intrinsic to itself, it does more than just compute. Computers have to be created in order to run operations (which may involve learning) given to them by their creators. Insects just get on with it, they know what to do. They must obviously be doing a lot of computation to get around the place, responding to stimuli, but don't they require something extra to organise and unify that computation into purposeful behaviour? It's this addition to the computation that I think is distinct to biological stuff and which electronic processors do not and will not possess. [This is just me thinking now, it's not something I can back up!]

 Coel Hellier 08 Nov 2013
In reply to Jon Stewart:

> but don't they require something extra to organise and unify that computation into purposeful behaviour?

Yes they indeed do, and they get that when humans program them, creating, for example, chess-playing computers or aircraft autopilots. For biological things the programming is done by evolution.
 Coel Hellier 08 Nov 2013
In reply to Tim Chappell:

> I think Galen Strawson is rather good on what's wrong with Dennett's views.

I think Galen Strawson simply misunderstands and misrepresents Dennett in that piece. Dennett isn't saying that consciousness doesn't exist.
Tim Chappell 08 Nov 2013
In reply to Coel Hellier:


Yet you've just said, with a nod to Dennett's authority, that "we're all zombies".

Aren't *you* saying that consciousness doesn't exist? And saying it on Dennett's say-so?

Or at the very least, aren't you and Dennett both saying that consciousness is an illusion--which, as Galen points out, is a flatly incoherent position?

I think Galen's line on all this is a much more promising one. The dualist and the physicalist share the presupposition that there's something about the nature of matter that makes it mysterious how matter could support consciousness. So dualists go all mysterian, like Colin McGinn, while physicalists do their best to airbrush consciousness away, like Dennett. But both attitudes are quite unmotivated. We simply don't know enough about the nature and organisation of matter to know that its supporting consciousness is mysterious at all.
 Andy Hardy 08 Nov 2013
In reply to The Lemming:
Oh good, Tim and Coel are squaring up again. Do you have shares in popcorn?

 Coel Hellier 08 Nov 2013
In reply to Tim Chappell:

> Yet you've just said, with a nod to Dennett's authority, that "we're all zombies". Aren't *you* saying
> that consciousness doesn't exist? And saying it on Dennett's say-so?

The definition of "zombie" in this context is of something that is functionally indistinguishable from a human but doesn't have that "extra" of consciousness. Thus, the idea is of a mechanical robot that could do everything a human can do and act exactly like a human, yet not have that "inner experience".

What Dennett is saying (and I agree) is that the above is a false concept. The *only* way that a robot/zombie could be functionally indistinguishable from a human is if it *did* have that inner experience and self awareness that a human does.

Again, this is similar to the suggestion that a robot could catch a ball but not "experience" catching the ball -- I reject that idea.

Thus by saying that humans are "zombies" Dennett is saying that we have nothing more than what a robot that is functionally indistinguishable from us would have to have. In *that* sense we are nothing more than zombies.

That is, however, *not* saying that consciousness does not exist or that it is an illusion; it is saying that it is an inevitable property of the functionally-indistinguishable zombie.

Saying that Dennett thinks that consciousness doesn't exist or is merely illusory is to totally misunderstand Dennett. His stance (and I agree) is that it is the concept of a "zombie" that is incoherent, and that those suggesting that there could be a ball-catching robot that doesn't "experience" catching the ball, or a functionally-indistinguishable zombie-human that doesn't "experience" being human, have simply not thought through properly what being such a thing would actually entail.
 Jon Stewart 08 Nov 2013
In reply to Coel Hellier:
> (In reply to Jon Stewart)
>
> [...]
>
> Yes they indeed do, and they get that when humans program them, creating, for example, chess-playing computers or aircraft autopilots. For biological things the programming is done by evolution.

I agree that evolution does the programming of biological things, but I don't think that that provides a sufficiently close analogy to make a convincing argument that all biological nervous systems do is computation.

It's hard to see the mechanism by which evolution has 'programmed' the brains of creatures. We know that the genes do somehow code for characteristics in our behaviour, and so our conscious experiences, but how do genes do this? You don't have proteins that map to behavioural traits, but you presumably do have patterns of neural activity that map to such traits as they are being expressed. How do the genes program the neurones to fire in certain ways? The analogy with computer hardware running software seems to me to break down.

I have no doubt that the information about how to behave is passed down through DNA; I just don't think the brain is a lump of hardware running evolution's 'program'. I think that's just a bad analogy, one which fits with the 'let's ignore the problem of consciousness' view you (and Dennett) are so keen to cling to.

Tim Chappell 08 Nov 2013
In reply to Coel Hellier:

Thank you, I know what Dennett says. I was trying to get clearer about what you think he says.

But actually I think Dennett doesn't say quite what you have him saying. To pay you a compliment, I think what you say is clearer and more consistent than what Dennett says. Because sometimes DCD does take the Robert-Kirk-type line you adumbrate above. And sometimes he does deny, or come very close to denying, that there is any such thing as consciousness.

My own line on the "Could there be zombies?" question is "How can we possibly tell?". With a side-order of "What is the point of asking this?".

Can there be something that is structurally and anatomically identical to a human being, and yet has no consciousness? With one part of my mind I'm tempted to reply "Well, how about an unconscious human being?"; with another, "Who knows?"; and with a third, "No, of course not, if it's a structural and anatomical doppelgaenger of me, then it will be a consciousness doppelgaenger of me too."

What I do think is certain is that I don't see any grounds for *dogmatism* about how to answer the zombie question. We can't be sure what would happen unless we actually built a zombie. But then the question arises: what counts as building a zombie?

How about cloning a human? Well, we know what happens there, because we've done it. Cloned humans just are humans, and they're conscious, of course.

Or making a humanoid robot? Well, we've done that too, but in primitive forms which are nothing like organic humans structurally speaking. Such robots aren't conscious, of course.

So actually, I'm not sure the zombie question is entirely clear.
Tim Chappell 08 Nov 2013
In reply to 999thAndy:
> (In reply to The Lemming)
> Oh good, Tim and Coel are squaring up again. Do you have shares in popcorn?
>
>


I'm not really squaring up; I'm just trying to work out what he actually thinks. I'm not a dualist, which will no doubt disappoint him.
 Jon Stewart 08 Nov 2013
In reply to Coel Hellier:
> (In reply to Tim Chappell)

> His stance (and I agree) is that it is the concept of a "zombie" that is incoherent, and that those suggesting that there could be a ball-catching robot that doesn't "experience" catching the ball, or a functionally-indistinguishable zombie-human that doesn't "experience" being human, have simply not thought through properly what being such a thing would actually entail.

I'm pretty sure you could do - and people probably have done - experiments with directing attention that allow you to do things such as catch a ball without being conscious of it. Our nervous systems do all kinds of amazingly clever computations which don't make it up to the 'top level' of consciousness because conscious awareness of them is not useful. Catching a ball is a purely computational task requiring the processing of inputs and responding with linearly defined outputs. The fact that we do this task consciously is not because the task is so complex it demands consciousness, but because catching balls is something we do which is linked in to other tasks which need consciousness, such as playing rounders in the park. It would be unhelpful to do it unconsciously, so we do it consciously. On the other hand, it is not useful to be aware of the level of calcium in the blood, so we do this task unconsciously.
 Dave Garnett 08 Nov 2013
In reply to Jon Stewart:
> (In reply to Coel Hellier)
> [...]
>
> neuroscience tells us what gross physical brain activity is associated with conscious states, without offering any causal mechanisms.

Why can't it be a question of recruitment of more processing power, functions, short- and long-term memory etc.? Actually, it seems to me that if you have enough massively parallel processing, short-term memory (isn't that called RAM?), sensory input and motor feedback, and a rich sensory environment, consciousness is pretty much inevitable. Add personal long-term memory and you have personality too.

You seem to think consciousness is either off or on, but the fact that we operate at varying levels of consciousness and arousal is clear.

I've just got off a flight from San Diego, so I speak from experience.

 David Riley 08 Nov 2013
In reply to Jon Stewart:

My point was that human consciousness is built on something with very little intelligence. I believe we experience the world as our own simulation, continuously updated by the sum of all our senses. Without sensory input we are still there. This is complex computation. The simulation running in an amoeba would be very simple, but still with a similar (stay alive) incentive system. The problem is to understand the incentive system. Work out the amoeba one and you're mostly there. How hard can it be?
 1poundSOCKS 08 Nov 2013
In reply to Dave Garnett: And the first thing you do is switch on your smartphone and check the UKC forums.
 Coel Hellier 08 Nov 2013
In reply to Jon Stewart:

> It's hard to see the mechanism by which evolution has 'programmed' the brains of creatures.
> How do the genes program the neurones to fire in certain ways?

The genes are a recipe for a human; this recipe follows a developmental pathway, and the resulting brain is programmed to learn. Thus the end result is a mixture of genetic programming and genetically programmed learning.

> I just don't think the brain is a lump of hardware running evolution's 'program'

Why not? Note that for neural networks the hardware/software distinction isn't there, since the hardware pattern *is* the software. That neural network does indeed run evolution's program, remembering that the program is a recipe, not just a series of "if ... then ..." statements.
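
[The point that "the hardware pattern *is* the software" can be made concrete with a toy example; this illustration is mine, not Coel's, and assumes numpy. In a neural network there is no separate program to point at, only the pattern of connection weights, and rewiring those weights changes the behaviour.]

```python
# Toy illustration: a neural network's "program" is nothing but its weights.
# Change the wiring and the behaviour changes; there is no separate software.
import numpy as np

def network(weights, stimulus):
    return np.tanh(weights @ stimulus)   # one layer of idealised 'neurones'

stimulus = np.array([1.0, 0.5])
wiring_a = np.array([[2.0, -1.0]])       # one connection pattern...
wiring_b = np.array([[-2.0, 1.0]])       # ...and a different 'rewiring'

print(network(wiring_a, stimulus))       # ~[ 0.905]: one behaviour
print(network(wiring_b, stimulus))       # ~[-0.905]: a different behaviour
```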

> fits with the 'let's ignore the problem of consciousness' view you (and Dennett) are so keen to cling to.

Addressing the problem properly is not ignoring it.
 Coel Hellier 08 Nov 2013
In reply to Jon Stewart:

> ... that allow you to do things such as catch a ball without being conscious of it.

There you treat consciousness as an on/off thing, whereas a better way of thinking of it is as a continuum. Some parts of the ball-catching device need to be aware of what they are doing, to some degree of awareness.

 Coel Hellier 08 Nov 2013
In reply to Tim Chappell:

> To pay you a compliment, I think what you say is clearer and more consistent than what Dennett says. -- Prof. Tim Chappell

I'm framing this!

> Can there be something that is structurally and anatomically identical to a human being, and yet
> has no consciousness? With one part of my mind I'm tempted to reply "Well, how about an unconscious
> human being?"; ...

An unconscious human being is clearly not functionally equivalent to a conscious one.

> and with a third, "No, of course not, if it's a structural and anatomical doppelgaenger of me,
> then it will be a consciousness doppelgaenger of me too."

I agree with your third part, though I'd want to add in "and functionally equivalent" [For example, a switched-off computer or car might be "structurally and anatomically" similar to a switched-on one but not functionally so.]
Tim Chappell 08 Nov 2013
In reply to Coel Hellier:


Hmm. The trouble with a continuum view, I think, is that it doesn't make good sense of the first-personality of experience. In consciousness it's not just that there is a warm glow of orange. It is that there's a warm glow of orange FOR ME: presented TO MY FIRST-PERSONAL VIEWPOINT.

That *does* look rather on-off.

Please note, my agenda here is not to say "This can't have evolved". Though it might be to say "I'm not sure I buy *your* story about how it evolved".
 Jon Stewart 08 Nov 2013
In reply to Dave Garnett:
> (In reply to Jon Stewart)
> [...]
>
> Why can't it be a question of recruitment of more processing power, functions, short- and long-term memory etc.? Actually, it seems to me that if you have enough massively parallel processing, short-term memory (isn't that called RAM?), sensory input and motor feedback, and a rich sensory environment, consciousness is pretty much inevitable. Add personal long-term memory and you have personality too.

I think it is much more difficult than you suggest to organise all that information processing into what feels like a single, unified experience complete with a consistent sense of self.

> You seem to think consciousness is either off or on, but the fact that we operate at varying levels of consciousness and arousal is clear.

No, my consciousness 'changes shape' depending on the conditions and demands of the situation. When I'm above my gear on a hard route, I'm all about the sensory processing and immediate problem solving. When I'm reading an article about some unfathomable psychophysics experiment on the train, I'm shutting off most of the sensory input and trying to visualise and link together abstract ideas. When I'm fully asleep I'm not conscious at all, and when I'm trying to ignore the alarm clock I'm only about 10% better than that.

 Coel Hellier 08 Nov 2013
In reply to Tim Chappell:

> That *does* look rather on-off.

I agree that a person's experience of consciousness can be largely on/off. However, a computer's functioning can also be on/off (with a switch) even though one can conceive of a continuum of less-and-less capable computers down to a single isolated logic gate.

In the same way the "continuum" of a person's consciousness is not about them now, it's about how that consciousness developed gradually from the time they were an embryo without a brain, and how the consciousness of their ancestors developed gradually over evolutionary time from the point of simple, non-conscious life.

 Jon Stewart 08 Nov 2013
In reply to Coel Hellier:
> (In reply to Jon Stewart)
>
> [...]
>
> There you treat consciousness as an on/off thing, whereas a better way of thinking of it is as a continuum. Some parts of the ball-catching device need to be aware of what they are doing, to some degree of awareness.

What bothers me about your view is that you fail to acknowledge any distinction between computational processing in the brain which is not conscious, and a very special and selective subset of brain activity which gives rise to conscious experience.

When I look at a visual scene full of objects, I have to divine from the pattern of light falling on my retinas what I'm looking at. This takes an absurdly stupendous amount of computational power, building up the picture from areas of light and dark, to picking out edges, building in thousands of learnt assumptions about the world, comparing the images from each eye and inferring the depth dimension, etc etc etc. All of this is unconscious. At the end of the process comes a conclusion like "red Fiat Punto travelling at 35mph towards me" and that is the only bit I'm aware of. The conclusion of the computational process is distinct from the computational process itself; they are not one and the same thing.
 Coel Hellier 08 Nov 2013
In reply to Jon Stewart:

> distinction between computational processing in the brain which is not conscious, and a very special and
> selective subset of brain activity which gives rise to conscious experience.

I don't see a problem with this distinction. Suppose you are designing a general-purpose robot. That programming is going to be modular, with different modules for different tasks. Some modules are going to be concerned with mundane matters like keeping the gubbins working (as parts of our non-conscious brain do). Other modules are going to be concerned with assimilating data from outside, and with decision making. Some of these modules are going to have properties of consciousness, while other modules don't need that and so don't. If you look at these modules from the point of view of functionality, and if you look at consciousness from the point of view of functionality, then there is no real problem here.
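
[A toy sketch of that modular picture, in my own framing rather than any real architecture, borrowing Jon's "red Fiat Punto" as the example input: every module runs, but only some publish into a globally available report, which plays the functional role of the 'conscious' subset.]

```python
# Toy sketch: all modules do their work, but only some surface their output
# into a shared report -- the functional stand-in for the 'conscious' subset.

class Module:
    def __init__(self, name, reports):
        self.name = name
        self.reports = reports            # does this module surface output?

    def step(self, scene, report):
        result = f"{self.name}: processed {scene!r}"
        if self.reports:
            report.append(result)         # globally available
        # non-reporting modules (the 'gubbins') still run, silently

modules = [
    Module("blood_CO2_monitor", reports=False),   # keeps the gubbins working
    Module("visual_summary", reports=True),       # assimilates outside data
    Module("decision_maker", reports=True),
]

report = []
for module in modules:
    module.step("red Fiat Punto, 35 mph", report)

print(report)   # only the reporting modules' outputs reach 'awareness'
```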
 Jon Stewart 08 Nov 2013
In reply to 1poundSOCKS:
> (In reply to Jon Stewart) Have you read this...
>
> http://www.amazon.co.uk/Thinking-Fast-Slow-Daniel-Kahneman/dp/0141033576

That looks brilliant, cheers Matthew!
OP The Lemming 08 Nov 2013
In reply to The Lemming:

Got to admit that I am enjoying this discussion, but can we recap to keep some of the conversation a little less cerebral for my brain to comprehend?

My limited medical understanding is that the brain has different areas with different tasks and that the spinal cord also has a very significant role to play in conjunction with the brain. Somehow all these things work independently and cooperatively to allow me to punch a few keys on a keyboard.

Some on here say that a computer has roughly the consciousness of a fruitfly, but what about a collection of supercomputers working together while also plugging into the mass collective knowledge of the internet?

Surely a form of consciousness is evolving/developing within the internet?

If we keep things simple, is my consciousness just the small voice talking in my head rather than the actual words coming out of my mouth when I think of a problem?

I don't need to think about breathing, so no need for consciousness. However, I'd quite like to shag that bit of totty across the street, so I give some thought about how I would woo said strumpet. Is my consciousness the active and deliberate thought process of trying to eject my sperm with a little more help than my own hand?
 Dave Garnett 09 Nov 2013
In reply to 1poundSOCKS:
> (In reply to Dave Garnett) And the first thing you do is switch on your smartphone and check the UKC forums.

Yes, I said I was only semiconscious! My alternative reading was going to need more than that.
 toad 09 Nov 2013
In reply to The Lemming:

I'm pink...




Therefore I'm spam
OP The Lemming 09 Nov 2013
In reply to toad:

That lame joke showed a thought process, but did it prove consciousness?

I can have a conversation with an internet not and I'd struggle to notice that I was conversing with a machine. Does this mean that the internet not has a level of consciousness?
OP The Lemming 09 Nov 2013
In reply to The Lemming:

internet bot. Sodding phone predicting my key strokes.
 tlm 09 Nov 2013
In reply to Tim Chappell:

> Why do we have it?
>
> --In order to experience beauty.

or in order to invent the very idea of beauty?

 Jon Stewart 09 Nov 2013
In reply to tlm:
> (In reply to Tim Chappell)
>
> [...]
>
> or in order to invent the very idea of beauty?

Wouldn't that be nice
