Interesting Article on Artificial Intelligence

 Cú Chullain 06 Mar 2015
It's a long read but it's pretty interesting stuff, and much of it was new to me. Not too sure I'll sleep tonight.

Part 1
http://waitbutwhy.com/2015/01/artificia ... ion-1.html

Part 2
http://waitbutwhy.com/2015/01/artificia ... ion-2.html
 SenzuBean 06 Mar 2015
In reply to Cú Chullain:

It is very interesting indeed. I wonder what it'll mean for our society. In the past, technological improvements have made many jobs obsolete, and humans have tended to adapt by doing technologically more difficult jobs. But in the future, the number of jobs made redundant will be unprecedented. It might not be long until the majority of people won't have a job to do at all.
If that's the case, and we keep our current system of laws and social structures, we're set to have a large underclass of people who won't get any work (thus won't have income), a small sector of workers, and a small number of people who will coast on compound interest of their assets. This is an inherently unstable structure - how would it work? Would jobs be split up, so that full-time workers are not needed anymore? Would jobs be made up (a bit like in Soviet Russia, where they drove trains of resources across the country to improve the figures)? Would we just say to most people "you don't need to work, go have fun [climbing]! Collect your free money at the end of the week" (and then allow them a _comfortable_ living)? Who knows. Exciting.
 jkarran 06 Mar 2015
In reply to SenzuBean:

> ...we're set to have a large underclass of people who won't get any work (thus won't have income), a small sector of workers, and a small number of people who will coast on compound interest of their assets. This is an inherently unstable structure - how would it work?

It won't; for that reason it probably won't happen, and if it does, I believe the traditional solution was the guillotine.

jk
Post edited at 18:37
 SenzuBean 06 Mar 2015
In reply to jkarran:

> It won't; for that reason it probably won't happen, and if it does, I believe the traditional solution was the guillotine.

> jk

I meant to imply that it won't work.
(Although, scarily - what if it could? We're not far from the point where we could make a machine that the majority of humans can't evade or kill. Replace the majority of the police force and army with robots. Drone technology has come a long way in only a few years... Have fun sleeping )
 Rob Exile Ward 06 Mar 2015
In reply to SenzuBean:

'we're not far from the point where we could make a machine that the majority of humans can't evade or kill.'

Only about a thousand years. I'll sleep OK.
 summo 06 Mar 2015
In reply to Rob Exile Ward:

The reality is it could be done within a few years if the will were there: spend the same on it as they're spending on the next lunar landings and the job's done. Look at what's needed: a computer that can learn. It already exists.
 pneame 06 Mar 2015
In reply to summo:

I'd agree. The emphasis would be on "majority", so I'm not sure where Rob gets his sunny and optimistic thousand years from. They wouldn't be cheap and you'd need quite a few for them to be useful... but look what mass production did for the machine gun.
 summo 06 Mar 2015
In reply to pneame:

They've just had a computer learning to play 80s computer games: within just a few hours of learning it had bettered ALL human efforts, taking the learning and thinking from one game and applying it to others, 40+ games I think in total (Breakout, Pac-Man etc.). It achieved total efficiency through trial and error. Have a supercomputer controlling a million drones: by the time you were down to the last 1000 of us, they'd have seen all the scenarios and would hunt us down.
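
For a concrete sense of what that trial-and-error learning looks like, here is a minimal sketch of tabular Q-learning on a made-up toy corridor problem. It is illustrative only: DeepMind's Atari player (DQN) swaps the lookup table below for a deep neural network reading raw pixels, but the same try, observe, update loop sits at the core.

```python
# Minimal tabular Q-learning on an invented toy problem: a 6-cell corridor,
# start at cell 0, reward for reaching cell 5. Illustrative sketch only.
import random
from collections import defaultdict

N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)                 # step left, step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
q = defaultdict(float)             # q[(state, action)] -> estimated return

def greedy(state):
    # Best known action, breaking ties at random so early play still explores
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(300):
    state = 0
    for _ in range(100):           # step cap so an episode can't run forever
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        nxt = max(0, min(N_STATES - 1, state + action))
        reward = 1.0 if nxt == GOAL else 0.0
        # Core update: nudge the estimate towards reward + discounted future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
        if state == GOAL:
            break

print([greedy(s) for s in range(N_STATES - 1)])   # learned policy: all +1, head right
```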
 Jon Stewart 06 Mar 2015
In reply to Cú Chullain:

I've seen this before and didn't read all of it then, nor now I'm afraid, but...

It all seems rather "strong AI" to me; it ignores the fact that there's nothing to suggest that the brain - which generates consciousness - is doing computation. We have no idea how the brain thinks. This reverse engineering and neural network stuff is wishful thinking; it doesn't show us we're on the cusp of AI. Most jobs require thinking, not computing, and aren't very well suited to a non-conscious robot with enormous computing power.

Have you seen how shit the 'cleverbot' Turing competition entries are? That's what computation achieves by 'learning' from lots of data going in and crunching through some instructions. AI is a vastly overblown term.

I went to university 10 years ago with people studying AI, and it doesn't look to me like things have moved on in any significant way since then. Why should I believe that the next 10 or even 50 years will be different, when the fundamental scientific problems of what it means to be 'intelligent' (assuming, as I do, that you can't have intelligence without thought, i.e. consciousness) show no signs of being solved?
 Bobling 06 Mar 2015
In reply to Cú Chullain:

I had a long pub discussion with my best friend, who is quite 'enlightened' about these things, and he sent me this paper. Normally I don't read this sort of stuff, but this was quite compelling. The financial angle just makes it more chilling!

https://ir.citi.com/FItMGwO7Z6DKRf5xjTZeeR04vf8wunqYOfIoQz%2B%2B8TD4DND7PwW...
 Sharp 06 Mar 2015
In reply to SenzuBean:

We live in a country where lots of people are jobless and lots of people work 60+ hour weeks. Work is built into our culture and our value system, so I don't think a decrease in the available jobs would change much beyond making more people redundant.
 pneame 06 Mar 2015
In reply to summo:

Now we're talking - rise of the machines etc...
I'd agree with Jon that AI per se hasn't moved on much, but it doesn't seem a stretch to use pattern recognition as an alternative to heat-seeking, radar-seeking etc. as a detection mode.
A fairly primitive form of pattern recognition is used in, for example, CV screening. Another is in those well-designed forms that will take anything that looks like a phone number or date and turn it into a phone number or date that they can actually use.
(It's a pet peeve of mine that when you put a phone number or a date in as a string of numbers, some forms say "no, it must be this format or nothing" while others just go ahead and reformat.)
 summo 07 Mar 2015
In reply to pneame:
A combination of infrared and pattern recognition, a human shape giving off the correct heat signature. More Predator than Terminator.
Recognition is already quite advanced: robotic cow milkers will locate the teat on a living, moving animal etc.
Post edited at 07:08
 Hugh Cottam 07 Mar 2015
In reply to Jon Stewart: yours is the enlightened, informed answer on the subject of AI. I did research on AI for 10 years and we're still not remotely close to solving any of the standard AI problems: machine learning, natural language understanding, speech recognition. We can't build a machine that can navigate effectively around a room or answer questions meaningfully even on a very restricted subject area.

The vast number crunchers are not likely to take over the earth. At best they may beat us at chess. People's concerns about AI are based more upon the power of science fiction and active imaginations than anything to do with real AI. Brain science is also at a very limited level of understanding. Basic ideas about this bit of the brain doing this and another bit doing that get rendered meaningless when you find there are people with 2% of normal brain tissue who operate as though there was nothing different about them.

 DancingOnRock 07 Mar 2015
In reply to SenzuBean:

If you don't have a job, you don't have any money, so you can't buy anything, so all the things made by the robots go unsold and the people who own the robots stop making things.

Money is a funny thing.

For centuries people have argued over the split between cost of labour and cost of materials.

 Hooo 07 Mar 2015
In reply to Cú Chullain:
I don't think we are anywhere near real AI yet. There is a fundamental difference between the current tech like self driving cars and an actual intelligence. A self driving car is like a trained chimp, it can be trained to do a particular task better than a human, but it will never learn something new by itself. It will never come up with an original idea, and it will never be developed into a real intelligence. Real AI will require a leap to a new concept, and we currently have no idea what this concept is.
We will have real AI eventually though. There is no fundamental reason why a process that takes place in the human brain can't be reproduced in other hardware. Once this happens, AI will leap forward at a terrifying rate as AIs rapidly develop better AIs.
I couldn't speculate when this will happen, but I'm convinced that it will, and when it does, the new world will be unrecognisable to us as we are now.
KevinD 07 Mar 2015
In reply to Hooo:
> (In reply to Cú Chullain)
> I don't think we are anywhere near real AI yet. There is a fundamental difference between the current tech like self driving cars and an actual intelligence. A self driving car is like a trained chimp, it can be trained to do a particular task better than a human, but it will never learn something new by itself. It will never come up with an original idea

What do you mean by learning something new?
Would learning to recognise a tank (or whether it is sunny) count?
Does solving maths problems count as original?
 Hooo 07 Mar 2015
In reply to dissonance:

It will never invent a car.
Solving maths problems using techniques that have been taught is not original; inventing a new technique - such as calculus - is original.
KevinD 07 Mar 2015
In reply to Hooo:

> It will never invent a car.

How do you mean? A new car design, or a completely new concept?

> Solving maths problems using techniques that have been taught is not original; inventing a new technique - such as calculus - is original.

Which the majority of people couldn't do either. Plus I am not so convinced a neural network couldn't, given enough time.
 Rob Exile Ward 07 Mar 2015
In reply to summo:

Teaching computers to play games is trivial. Even chess is a vastly simplified, structured, systematized version of what actually happens in real life.

Our intelligence doesn't come from being taught stuff. It comes from our brains being hardwired to make sensible interpretations of literally infinite sense data. For instance: when you look at a landscape, how do you 'know' that you are looking at a distant landscape and not a picture right in front of you? The answer is you don't; your brain just assumes that's the case, because over the billion years we have been evolving it usually has been. This isn't learned, it's hardwired. Babies are afraid of drops even though they have never fallen.

Show me a computer that can play catch with a toddler, carry 4 pint mugs across a crowded pub, or find the bathroom in the middle of the night with the light off without tripping over the kids' toys, and I'll start worrying about robots hunting me down.



 jkarran 07 Mar 2015
In reply to SenzuBean:

> Have fun sleeping

Killer machines, machines that can kill on an unimaginable scale have existed for most if not all of my lifetime. The most chilling I can think of has commanded our destruction at least once but was thwarted by its human minions. Google the Soviet Union's Dead Hand if you fancy a sleepless night.

jk
KevinD 07 Mar 2015
In reply to Rob Exile Ward:

> Show me a computer that can play catch with a toddler, carry 4 pint mugs across a crowded pub, or find the bathroom in the middle of the night with the light off without tripping over the kids' toys, and I'll start worrying about robots hunting me down.

There are two different issues there.
It is possible, even with today's limitations, to build a robot which would be fairly capable of hunting you down. You only need a couple of forms of specific intelligence for that: the ability to fly and to target weapons, both of which exist. Plus a guarded base and a production line. It would run into problems eventually, but it would work for a while.

Dunno how many people can pass all of those tests either.
One thing I did notice in that article, though: when talking about all the researchers' confidence about when it would happen, they didn't acknowledge that the previous generation of researchers had similar, if not more optimistic, projections.
As for the upload-the-brain-to-a-computer predictions: one researcher did a quick study on that. The results showed a very strong correlation between the date people thought it would be ready and their expected lifespan minus a few years.
 Hooo 07 Mar 2015
In reply to dissonance:

> How do you mean? A new car design, or a completely new concept?

A new concept, even a minor one.

> Which the majority of people couldn't do either. Plus I am not so convinced a neural network couldn't, given enough time.

On the contrary, everyone can invent. We tend to forget this because almost all of these inventions are crap: either they won't work, or they've been done before. But once in a billion, one of them will work. Our current "AI", on the other hand, will never invent - crap or otherwise.
 David Riley 07 Mar 2015
We have automation, but no AI at all.
 summo 07 Mar 2015
In reply to Cú Chullain:
What is AI, and how do you define it? Independent thinking about a completely new problem with a successful outcome, or a learned decision through trial and error?

Does man have intelligence, or is everything we do a learning process since birth? What is inherited? The boundaries are vague, and so is our knowledge, as mentioned above. What about the neurones in our hearts, which are even less understood?

One thing is for certain: it's progressing. The earth's oversupply of workers reduces the need for it... but what about developing an AI robot to carry out work on the Moon or Mars, or even places on earth with conditions too harsh for man? I think these will be the drivers that push it, not Apple's or Google's desire to capture market share.
Post edited at 17:48
 SenzuBean 07 Mar 2015
In reply to Rob Exile Ward:

If you read the article, it explains quite clearly how, by the time it's doing those things, it's probably too late to worry.
 Only a hill 08 Mar 2015
In reply to Cú Chullain:

I suspect a lot of the people commenting on this thread haven't actually read the article.
 Rob Exile Ward 08 Mar 2015
In reply to Only a hill:

Well, I have, and when I read this:

'One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor “neurons,” connected to each other with inputs and outputs, and it knows nothing—like an infant brain.'

which is plain wrong, I begin to question the rest of the assumptions. How The Mind Works and The Blank Slate are both accessible and interesting reviews of our current understanding. Read those first and you might be a bit more sceptical too.
 Jon Stewart 08 Mar 2015
In reply to Rob Exile Ward:

> How The Mind Works and The Blank Slate are both accessible and interesting reviews of our current understanding. Read those first and you might be a bit more sceptical too.

Great stuff. And here's Susan Greenfield on the difference between brains and computers:

youtube.com/watch?v=G8VwDOK3Lio&
 summo 08 Mar 2015
In reply to Only a hill:
I read the article and just thought it was high on waffle, speculation and errors, but very low on scientific detail and fact. You could have said the same thing, with some good editing, in a fraction of the word count.
 summo 08 Mar 2015
In reply to Rob Exile Ward:

There is so much that's not understood in terms of what's inherited through DNA... but could pre-programming a robot, which will then self-learn afterwards, with some basic rules or laws be likened to a newborn having some inherited programming too? It's roughly estimated that 50% of potential intellect is inherited (regardless of whether the individual ever fully applies it); the rest is environmental, gained since birth. So comparing a clean-slate computer with any human puts us at an unfair advantage, as we have a few million years of evolution behind our human brain software.

 Rob Exile Ward 08 Mar 2015
In reply to summo:

It's not a few million; we have been evolving 'intelligence' effectively since the first self-replicating organisms that responded to their environment appeared. We're talking billions.

Orders of magnitude matter!
 summo 08 Mar 2015
In reply to Rob Exile Ward:
Yes, I was being more general (modern man / Homo sapiens), not specific; 2 billion-ish if you want to consider complex multicellular life.
But it is our brain development in relatively recent times, a few million years, that has probably enabled us to develop in a way almost completely unique among the many thousands of other species that have ever existed on earth. It's these recent things that matter, although deciphering what is cause and what is effect is still unclear.
Post edited at 18:42
 Hooo 08 Mar 2015
In reply to Only a hill:

> I suspect a lot of the people commenting on this thread haven't actually read the article.

I don't think that really matters. As summo says, it's mostly waffle and contains some glaring inaccuracies.
This thread is worthwhile though; I'm going to take Rob's advice and read some more Pinker. Add Hofstadter to that list too.
cb294 09 Mar 2015
In reply to Hooo:

> it's mostly waffle and contains some glaring inaccuracies.

True,

CB
 Rob Exile Ward 09 Mar 2015
In reply to summo:

I think this is a common misapprehension. If you want computers to have 'AI', whatever that might mean, they need to have sensory perception of their environment. The sensory perception that we use (hearing, smell, above all sight), which is key to genuine AI, HAS been evolving since the first single-celled organisms started responding to their environment. Computers have a lot of catching up to do.
 wintertree 09 Mar 2015
In reply to Hooo:

> A self driving car is like a trained chimp,

No, no it isn't. A chimp has the unknown mechanism(s) that bestow consciousness/self-awareness.

That is what we don't understand, and what separates AI from real intelligence.

Anyone saying that it is 5/50/500 years off is hypothesising, no more. Fundamental breakthrough(s) in understanding are required here, and by their very nature they can't be predicted. Having said that, various research projects outside the traditional AI field are making inroads, and I wouldn't rule out a step change in a decade or two.

In the absence of a breakthrough, self-driving cars show how far we can get by just throwing compute power at dumb AI, and compute power is still developing rapidly.
Post edited at 10:54
 wintertree 09 Mar 2015
In reply to Rob Exile Ward:

> Teaching computers to play games is trivial. Even chess is a vastly simplified, structured, systematized version of what actually happens in real life.

That's the point though. It taught itself to play the games. Trivial the games may be, but it trained itself to extract information from the graphics, to understand control inputs, and to link them by determining the rules. It did this for each different game with one set of programming.

To me that's a big step.
KevinD 09 Mar 2015
In reply to Rob Exile Ward:

> I think this is a common misapprehension. If you want computers to have 'AI', whatever that might mean, they need to have sensory perception of their environment.

What environment would that be? What about one which lives purely on the internet?
 CurlyStevo 09 Mar 2015
In reply to Rob Exile Ward:

I don't think intelligence is limited to the sensory perception we have. Have you heard of DeepMind? It can now play most old-school Atari games better than a human player, and its only 'senses' are the pixels drawn to the screen and the game score. There is no pre-programmed knowledge about the games, and most people would say this is demonstrating some level of intelligence. However, it's clearly limited to 2D games just now; that isn't the end goal of the software, though.

The notion of perception linking to AI is an interesting one. What is perception? For me it's just our internal model of the world, based on information from the senses coupled with our prediction of what that means. As such it is always flawed and incomplete.

I myself believe that before we see AI anywhere close to human-level intelligence in a broad arena, the AI will need to be self-aware (or at least to understand why, context etc.). People learn how to solve problems and make dynamic leaps of understanding; most scientific principles and equations were created with jumps of understanding and then proved mathematically after the fact. Pretty much all machine learning is confined at the moment; even neural networks are generally limited to the problem domain they were designed for. It is still the programmer who understands the problem and creates a system the AI can operate within. Whether AI will ever be self-aware in the true sense is not known at the moment, IMO.
 wercat 09 Mar 2015
In reply to Cú Chullain:
One of the huge differences between us and computers is the way information in our low-clock-rate brains is organised. In effect it is held within its own search algorithm, which is continually active. But this is an inverted search algorithm.

In conventional computing the CPU sits on top, sifting through storage to find what it has been told to find. OK, interrupt service routines can make it appear to respond to the real world intelligently, but it always goes back to sifting mechanically through storage for the appropriate data or processing module.

Once our brains store something, they effectively keep it in an active algorithm which is constantly in touch with what is going on in our subconscious and conscious "Now Frame-store" (which will include sensory inputs and memories awoken by the contents of that store). What is different about our memories is that they are, in effect, held in a self-searching structure: they recognise when they are relevant and bubble up as required, active memory that knows when its contents are needed and feeds them upwards. I suspect that a bidding process then gives focus to the most relevant, so that little or no computation is required between something coming into the "Now frame" and retrieval of the most relevant information and action triggers.

We are a long way from that now, though I was speaking to someone who was working with Prof Furber on the SpiNNaker project and asked whether they were looking at storing information "actively", anything like this, and he hinted that they hoped the many processors in SpiNNaker might go this way.

(PS: I know we have had limited "associative" computing memory for decades, but it's too limited in scope to be considered brain-like. Perhaps massively wide optical data bus access to billions of memory elements addressed by content rather than physical address might start getting there, but there are a lot of practical problems in coming close to brain function!)
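
As a very loose software analogue of "addressing by content rather than physical address", here is a toy sketch; every name and vector in it is invented for illustration, and real associative memory (let alone the brain) is vastly more involved. No memory is looked up at an address: each one is scored against the current cue, and the relevant ones bubble up.

```python
# Toy content-addressable store: items are retrieved by how well they match
# the cue, not by where they are stored. All data here is made up.
import math

def similarity(a, b):
    # Cosine similarity between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

memories = [
    ((1.0, 0.0, 0.9), "hot stove: don't touch"),
    ((0.0, 1.0, 0.1), "friendly dog next door"),
    ((0.9, 0.1, 0.8), "campfire sparks"),
]

def recall(cue, threshold=0.8):
    # Every memory is scored against the cue; the sufficiently relevant
    # ones 'bubble up', most relevant first.
    scored = sorted(((similarity(cue, vec), text) for vec, text in memories), reverse=True)
    return [text for score, text in scored if score >= threshold]

print(recall((1.0, 0.05, 0.85)))   # a heat-like cue recalls both heat memories
```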
Post edited at 13:27
 Dave Garnett 09 Mar 2015
In reply to CurlyStevo:

> It is still the programmer who understands the problem and creates a system the AI can operate within. Whether AI will ever be self-aware in the true sense is not known at the moment, IMO.

I doubt it. How will AI ever become self-aware if it only ever talks to programmers?
 summo 09 Mar 2015
In reply to Dave Garnett:

> I doubt it. How will AI ever become self-aware if it only ever talks to programmers?

It's down to the task given and how the programmer originally defines success; the computer will then keep trying different options for a given task until it achieves success. It won't repeat the same incorrect method, not because it learns, but because it remembers.

Do humans have natural intelligence though? Aren't we programmed as well? We are born with an existing operating system and some bizarrely embedded skills, intellect or programming through DNA. We then spend 4-7 years in pre-school programming, then a further 12-20 in education being programmed some more... what's the difference? Imagine a supercomputer that took 20-25 years to programme: how vast would it be, how wide a range of tasks could it complete? Yet it would never forget, which is more than can be said for us.

The difference between us and computers that is hard to program or quantify is our natural curiosity, our desire to experiment and think laterally to solve a problem, and our ability to predict an outcome despite having never done the given task before. We kind of sense the outcome, or rather we draw on other, often unrelated, experiences to help make a decision.

Jimbo W 09 Mar 2015
In reply to Dave Garnett:
> (In reply to CurlyStevo)
>
> [...]
>
> I doubt it. How will AI ever become self-aware if it only ever talks to programmers?

Hehe. But how does a human zygote ever become self-aware, given that it is only an instruction manual with a bit of machinery to read and execute the functions contained within?
 wercat 09 Mar 2015
In reply to Jimbo W:

Intrinsic to our brain operation is emotion. Almost all conscious operations take place within that context. Every input to the here and now is, to a greater or lesser extent, measured against an emotional cursor ("You are here; this affects your happiness, fear, confidence, arousal, aggression etc.") which provokes change in the mental Now-frame, and the resultant pattern will determine what memories and intentions are allowed to call themselves up into that frame. I'm not aware of any current CPU that does much more than execute binary in a moodless state.
 summo 09 Mar 2015
In reply to wercat:
> intrinsic to our brain operation is emotion.

What are emotions: inherited through DNA, or learned and developed from birth? Lots of scientific research has been done on people with under-, over- or mal-developed emotions. Evidence so far indicates that much is learnt or programmed from birth. Do you need emotions to be intelligent? Many of the world's great thinkers, doers and scientists were accused of being emotional cripples, thinking only of themselves and the task at hand!
Post edited at 16:48
 Dave Garnett 09 Mar 2015
In reply to summo:



> The difference between us and computers that is hard to program or quantify...

Sorry, I was more concerned about the difference between us and programmers!
 summo 09 Mar 2015
In reply to Dave Garnett:

> Sorry, I was more concerned about the difference between us and programmers!

Fair one... they'll probably evolve into a different human sub-species in a few years.
Jimbo W 09 Mar 2015
In reply to summo:

> Fair one... they'll probably evolve into a different human sub-species in a few years.

Are they known to reproduce?
 summo 09 Mar 2015
In reply to Jimbo W:
Only in labs, artificially... once they've found someone with some matching DNA (done online of course).
Post edited at 16:56
cb294 09 Mar 2015
In reply to summo:

Anyone seen the cool paper in Nature Neuroscience today? Implanting (crude) false memories into the brains of sleeping mice! Decent summary available in the Guardian here:

http://www.theguardian.com/science/2015/mar/09/rodent-recall-false-but-happ...

Exciting times in biology, I guess we (or rather, they..) are finally getting an experimental handle on issues such as memory, and hopefully eventually consciousness.

CB
 wercat 09 Mar 2015
In reply to summo:
I don't think you need emotions for pure AI, but I have considered, at least since the early 80s, that emotion is at the centre of consciousness (in the sense of self-awareness).

Clearly individual characters and their emotional profiles will be formed, subject to genetic factors, during infancy, childhood and adolescence and subsequently modified by life experiences.

Surely you are not asserting that the basic ability to feel emotion is learned?

When you refer to emotional cripples, does that not really mean "social cripples"? A different thing altogether. Isn't it likely that the urge to immerse oneself in "great thinking" or any other field of activity is feeding and satisfying goal-seeking based on an emotional state, i.e. a "wish"?

Is there such a thing in humans or mammals as goal-seeking not based on emotion? Even if it is a wish for logic and order, it is still a wish, an unsatisfied state.
Post edited at 17:51
 summo 09 Mar 2015
In reply to wercat:
> Clearly individual characters and their emotional profiles will be formed, subject to genetic factors, during infancy, childhood and adolescence and subsequently modified by life experiences.
> Surely you are not asserting that the basic ability to feel emotion is learned?

I would suggest that they can be weakened, enhanced or modified through life, and are certainly inherited to some level... but when you have someone who is clearly "their father's son" etc. through almost identical behaviour and thinking, is that inherited or learnt through their parenting? Hard to prove either at times. But yes, as with the experiments and data on twins adopted at birth by different parents, it's proven that a percentage of intellect comes through DNA and the rest is environmental. Not sure if emotions are the same as potential intellect.

> satisfying goal-seeking based on an emotional state, i.e. a "wish"?

Is that an emotion though, or is it simply a human instinct to strive to survive, to improve, enhance or make things better? Finding a better-shaped stone to break the nut open with, or finding a Higgs boson... same thing? Not sure it's emotional though.

> Is there such a thing in humans or mammals as goal-seeking not based on emotion? Even if it is a wish for logic and order, it is still a wish, an unsatisfied state.

Goal-seeking for the social group's or your peers' admiration, glory... is emotion. Goal-seeking to kill your prey and eat, or to feed the family in your tribal culture cut off from the rest of the world, is human survival, instinct.

Emotion - not as easy to define as one would think.
Post edited at 18:07
 summo 09 Mar 2015
In reply to cb294:

Yes, curious stuff; it sounds the same as what was recently featured on Inside Science (Radio 4).
 wercat 09 Mar 2015
In reply to summo:

Indeed, and I didn't attempt to define emotion. In fact, in a given situation there are levels of motivating emotion for human beings. Desire to mate is expressed as an emotive sexual urge which is experienced, not just exhibited behaviour. Climbing can be a competition of emotions: apprehension and fear have to be suppressed in order to satisfy a more deep-seated wish to succeed on a route, or sometimes just to survive. It's possible to suppress immediate panic and carry on while being aware of fear and not wanting to fail or die.

I'm simply saying that consciousness comes from the fact that we don't simply "compute". We actually experience sensory and mental inputs to the here and now because a lot of them are defined by an emotional response. Because that response is defined by how we "feel" there is an intrinsic self involved in processing information that arises from the presence of emotion.
 summo 10 Mar 2015
In reply to wercat:

> I'm simply saying that consciousness comes from the fact that we don't simply "compute".

I think with AI the definition of success is a machine that can solve a problem it hasn't been directly programmed for, using the lessons of other problems previously solved.

Expecting a computer to grasp fear, apprehension, or the difference between empathy and sympathy could be a few more weeks away! Most of these come from imagination: you can imagine the consequences of a given action, whereas a machine (presently) can only refer to historical knowledge, either programmed or learned.
 wercat 11 Mar 2015
In reply to summo:

I'm not at all keen on the idea of making "conscious" AI, certainly not machines that can "feel". There are profound ethical considerations: in effect they might be capable of suffering and fearing extinction in the same way as slaves or mistreated animals. Can you imagine, if such devices were available, electronics hobbyists building circuits badly and making something that suffered? Hacking an emoPhone might be like torture!
 Jon Stewart 11 Mar 2015
In reply to wercat:

> I'm not at all keen on the idea of making "conscious" AI, certainly not machines that can "feel".

It's a terrible idea, but luckily one that's not remotely feasible, at least not until we've worked out how the brain generates consciousness. And given that scientists and philosophers haven't even agreed on

- whether consciousness actually exists
- whether consciousness is physical or something distinct
- whether consciousness is a fundamental property of matter, or a state that the brain is in, or something else entirely
- etc, etc

then I don't think we have to worry about making conscious machines just yet. We should probably put our energies into the easier problems like time travel and teleportation first...
 Hooo 11 Mar 2015
In reply to Jon Stewart:

> then I don't think we have to worry about making conscious machines just yet. We should probably put our energies into the easier problems like time travel and teleportation first...

I wouldn't be so sure. We know consciousness exists, even if we don't know exactly what it is. We know it resides completely within the brain. We have tools to view brain hardware and software (to some degree at least). There is no fundamental reason why we won't be able to replicate it.
Time travel and teleportation (in any usable form) would involve breaking the laws of physics.
 Jon Stewart 11 Mar 2015
In reply to Hooo:
> I wouldn't be so sure. We know consciousness exists, even if we don't know exactly what it is.

How do we know that?

> We know it resides completely within the brain.

I agree that it seems to be generated by the activity of the nervous system - but I don't know what you mean by 'resides within'.

> We have tools to view brain hardware and software (to some degree at least).

There is no such thing as brain hardware and software, at least according to this leading neuroscientist (link posted above, here for convenience: youtube.com/watch?v=G8VwDOK3Lio& ).

We can start to map the connections between neurons, but we have no idea why this might be useful, and doing this for a whole brain as opposed to the nervous system of a roundworm (see http://blog.eyewire.org/behind-the-science-an-introduction-to-connectomics/ ) is currently a loooonnnnngggg way beyond current technology.

> There is no fundamental reason why we won't be able to replicate it.

I agree that there is no *fundamental* reason why we won't be able to replicate a brain once we've mapped its connectome. I just think that that's the same as saying that there's no fundamental reason we can't get the entire human race to sing happy birthday simultaneously. The practical barriers are just too vast to conceivably overcome.

> Time travel and teleportation (in any usable form) would involve breaking the laws of physics.

Well, since we don't know how consciousness is generated, we don't know whether artificial consciousness breaks physical laws or not.
Post edited at 18:35
 wercat 11 Mar 2015
In reply to Hooo:

It will, I'm certain, be found that the ability to experience pain and pleasure etc. is at the heart of consciousness. Add the ability to perform inductive and deductive logic to the ability to "feel" a reaction to sensory inputs, and voila!

However, I don't think we are even at the start of understanding the concepts needed to make a circuit "feel" anything.
 Timmd 11 Mar 2015
In reply to jkarran:
> Killer machines, machines that can kill on an unimaginable scale have existed for most if not all of my lifetime. The most chilling I can think of has commanded our destruction at least once but was thwarted by its human minions. Google the Soviet Union's Dead Hand if you fancy a sleepless night.

> jk

For things like the Dead Hand, a relative once commented that there's no point in worrying about things you can't do anything about. I guess we'd never sleep if we thought about all the different ways our species might wipe itself out.
Post edited at 20:57
 Hooo 12 Mar 2015
In reply to Jon Stewart:

We know the thing we refer to as "consciousness" exists, don't we? Or is there some debate about that? We know that this consciousness is contained within the human body.
I can't view YouTube at the moment, but I'll try and watch that video at some point.
We know that human consciousness doesn't break physical laws, so I think it's reasonable to assume that another form of consciousness could exist. I don't think we are anywhere near creating this at the moment, and it may be far beyond our current technology. But that doesn't mean it won't happen, and within our lifetimes. There are people alive who remember when travelling to the moon was impossible, because the practical barriers were too vast to overcome. The self-driving car was impossible only a few decades ago; no one imagined that the sheer processing power required would ever exist, let alone be portable. Getting the entire world to sing simultaneously is possible with current technology; the only thing stopping it is lack of motivation.
If something is possible (i.e. permitted by the laws of physics), and we have sufficient motivation, then it will be achieved eventually. Unless the human race wipes itself out first.
 Rob Exile Ward 12 Mar 2015
In reply to Hooo:

At what point does a vanishingly small probability put an event 'beyond the laws of physics'?
 john arran 12 Mar 2015
In reply to Rob Exile Ward:

> At what point does a vanishingly small probability put an event 'beyond the laws of physics'?

when it vanishes?
 jkarran 12 Mar 2015
In reply to Timmd:

> For things like the Dead Hand, a relative once commented that there's no point in worrying about things you can't do anything about. I guess we'd never sleep if we thought about all the different ways our species might wipe itself out.

Sensible relative!

I think what I find most appalling about systems like Dead Hand is the objective: a guaranteed civilization-changing retaliatory strike, so much utterly pointless destruction and suffering. The capability exists not to deter the enemy from striking first (it was secret) but to reassure Soviet hardliners that the US and Europe would burn whether or not they were still alive to command it; it's an insane check on an insane leadership tempted to strike first. I doubt that problem (or the dreadful solution) was uniquely Soviet.

jk
 Dave Garnett 12 Mar 2015
In reply to Jon Stewart:

> We can start to map the connections between neurons, but we have no idea why this might be useful, and doing this for a whole brain as opposed to the nervous system of a roundworm (see http://blog.eyewire.org/behind-the-science-an-introduction-to-connectomics/ ) is currently a loooonnnnngggg way beyond current technology.

Even if we could map all the connections in three dimensions (and it's a ridiculously large number, to the point that it might actually be fundamentally impossible), that would only be the basic wiring, and only at one point in time. There's the whole concept of the thresholds needed for any given neurone to sum its inputs and decide what it's going to do about them, long-term potentiation whereby some pathways are more popular than others, global effects of arousal and depression, not to mention plasticity...

However, in the same way that the answer to practical powered flight was the fixed wing rather than the ornithopter and vehicles with wheels work much better than mechanical legs, I suspect that the answer to real AI won't look anything like a three-dimensional physical neural network.
 Hooo 12 Mar 2015
In reply to Rob Exile Ward:

> At what point does a vanishingly small probability put an event 'beyond the laws of physics'?

When the probability is zero, of course.
 Jon Stewart 12 Mar 2015
In reply to Dave Garnett:

> However, in the same way that the answer to practical powered flight was the fixed wing rather than the ornithopter and vehicles with wheels work much better than mechanical legs, I suspect that the answer to real AI won't look anything like a three-dimensional physical neural network.

At least with wheels and wings, we understood what it was we were trying to do, and how legs and flapping achieved it.
 Jon Stewart 12 Mar 2015
In reply to Hooo:

> We know the thing we refer to as "consciousness" exists, don't we? Or is there some debate about that?

Certainly is. There's a whole bunch of people (philosophers and even scientists) out there who refuse to see that consciousness is actually a thing. Weird, but true.

> We know that this consciousness is contained within the human body.

Again, there are lots of odd beliefs around about brains "tuning in" to consciousness "out there" and stuff. There's no scientific evidence to disprove this, even if it does smack of hippy-trippy-bollox. And you say human body: what about dolphins? Mice? Ants?

> We know that human consciousness doesn't break physical laws.

Maybe. I would say that the physical laws we've come up with don't explain consciousness.

> But that doesn't mean it won't happen, and within our lifetimes.

Where we are at the moment re. AI is like talking about going to the moon when you haven't even worked out yet that the Moon is a ball that goes round the Earth. Try getting to the moon when you believe that the earth is flat and that there are 12 different moons that have different shapes!

 Gordon Stainforth 12 Mar 2015
In reply to Jon Stewart:

BTW, Jon, what do you do for a living? It sounds as if you have some kind of grounding in philosophy.
 Jon Stewart 12 Mar 2015
In reply to Gordon Stainforth:

Optometrist*. The little I know about philosophy I learnt from youtube!

*If you study the visual system, you are of course confronted by the question of how the firing neurons create the 'picture in your head'.
 Gordon Stainforth 12 Mar 2015
In reply to Jon Stewart:

Yes, what an interesting question. And a perfect example of a scientific question that cannot even begin to be answered at a purely scientific level without resorting to philosophy.
 Hooo 13 Mar 2015
In reply to Jon Stewart:

Well, the nature of consciousness itself: that's a fascinating subject, and a more worthwhile topic than speculating about the future of AI.
It does seem that there isn't an entity you can call I that resides anywhere. People can lose significant brain function and still be recognisable as the same personality, and on the other hand multiple personalities can exist within the same person. I believe that the conscious I is a phenomenon that arises from the interaction of multiple independent processes. Vision is a particularly interesting one. Experiments show that the data rate from the optic nerve is very low - it's nothing like a video camera. What "we" "see" is a construction held in memory, which is only updated when some process decides it's necessary. This construction is not a picture as such, but a collection of recognised objects, and there are different parts of the brain for different types of object. Recognising terrain so that we can walk is taken care of by a brain mechanism that exists in all animals, while recognising faces appears to have a dedicated part of the brain in humans (and a few other animals).
There's a theory that an ant colony could be considered a conscious entity. Individual ants are like cells in a form of brain - sensing, communicating and then acting as one entity.
I'm not going to give any credit to the "tuning into an external consciousness" bolox. Not without any evidence of a mechanism. As usual with religious explanations (and that is what it is), it doesn't explain anything; it just shifts it to a level that we can't know about.
 Dave Garnett 13 Mar 2015
In reply to Jon Stewart:

> At least with wheels and wings, we understood what it was we were trying to do, and how legs and flapping achieved it.

I agree. I'm not sure I agree with everything in the article originally cited, but I wouldn't discount the possibility that something indistinguishable from consciousness will one day arise spontaneously from a sufficiently complex self-teaching system, and that we may have no more idea of exactly how the threshold of self-awareness is crossed then than we do now.
 Trevers 13 Mar 2015
In reply to Cú Chullain:

Nobody's mentioned quantum computation so far in this thread... I feel like it *could* be key
 Hooo 13 Mar 2015
In reply to Trevers:

> Nobody's mentioned quantum computation so far in this thread... I feel like it *could* be key

I was tempted, it does sound like the huge leap that's required to do the job. I can't say I understand it enough to comment though.
KevinD 13 Mar 2015
In reply to Hooo:

> I was tempted, it does sound like the huge leap that's required to do the job. I can't say I understand it enough to comment though.

I am not sure that throwing resources at it will provide the breakthrough. I think it is more likely to be someone coming up with a new way to look at it, or merging the existing options in a new way.
 Trevers 13 Mar 2015
In reply to Hooo:

To be honest, nor do I. It's more a feeling really, based on the way that quantum and classical computers are able to tackle certain problems.

A well-known example is the factoring problem, which is a basic foundation of cryptography and modern security. Give a conventional computer two large prime numbers (hundreds of digits, as used in real cryptography) and ask it to multiply them together: it will do it within a fraction of a second. Give a computer the resulting huge number and ask it to factorise it into the original primes, and it'll be chugging away till long after the Sun has fried the Earth. There is however a quantum algorithm (Shor's algorithm) which a quantum computer could use to solve the same problem on the spot.
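
The classical half of that asymmetry is easy to sketch, using far smaller numbers than real cryptography does: multiplying is instant, while the naive trial-division factoriser below does work that grows roughly with the square root of n, already noticeable at 13 digits and hopeless at several hundred. This is an illustrative toy only, not how real factoring attacks (or Shor's algorithm) work.

```python
# Multiplying two primes is instant; recovering them from the product by
# trial division takes ~sqrt(n) steps. Toy numbers, for illustration only.
import time

p, q = 1_000_003, 1_000_033        # two 7-digit primes (RSA uses hundreds of digits)
n = p * q                          # instant: a 13-digit product

def factor(n):
    # Naive trial division; assumes n is odd, as a product of odd primes is
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None                    # n is prime

t0 = time.perf_counter()
print(factor(n), f"in {time.perf_counter() - t0:.3f}s")
# Every two extra digits in n multiply the work by ~10: hopeless at RSA sizes.
```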

There's also a similar story with search problems. Give a computer a jigsaw puzzle which a child could solve, and a brute-force search might keep it busy until the end of the Universe... probably. A quantum computer could attack the search far more effectively (though for this kind of problem the known quantum speed-up is much more modest than for factoring). Both these examples highlight how the quantum computer is able to compute 'laterally', using qubits that are a quantum mixture of 1 and 0.

I've even come across the suggestion, made by professors in the field, that the brain is a quantum computer and that consciousness is an emergent property of a complex quantum system. So given all these things, it wouldn't surprise me if quantum computation is the key to realising a properly conscious AI, as opposed to just a highly intelligent computer.

There's another parallel here: there are a lot of very intelligent people currently working on quantum computation, and there is a lot of secrecy surrounding research into it. Clearly both technologies would confer incredible power on the people holding them, and could stand to change the world as we know it.
 Jon Stewart 15 Mar 2015
In reply to Hooo:
> I believe that the conscious I is a phenomenon that arises from the interaction of multiple independent processes.

Yes - the mystery, as I see it, is how it all seems to be "stitched together" in such a way that we experience a unified subjective narrative in which perception (from all the incoming sensory data) is accompanied by thoughts, emotions and a continuous sense of self. We know this "bringing it all together" doesn't happen at any specific place in the brain, it happens seemingly as a result of complex interactions between lots of bits. I think there is a genuine mystery here: in David Chalmers' words "how is the water of the brain turned into the wine of consciousness?".

> Vision is a particularly interesting one. Experiments show that the data rate from the optic nerve is very low - it's nothing like a video camera.

Your optic nerves have a million fibres each (and the first little bit of processing has been done by a few layers of neurons in the retina), so while each one relies on relatively slow and limited chemical processes to send data, there's a fair amount of it flowing down there.

> What "we" "see" is a construction held in memory, which is only updated when some process decides it's necessary. This construction is not a picture as such...

Yes, the way we see is incredibly efficient, with quite a lot of the "picture" seeming to be there, rather than actually being there...but this is just a matter of efficiency. While some people (e.g. Dan Dennett youtube.com/watch?v=vkaS5JWZ1hY& ) think that this has something really crucial to say about the nature of consciousness, I disagree. It just shows us that the system is very well designed (and so it should be, it took millions of years after all).

> There's a theory that an ant colony could be considered a conscious entity. Individual ants are like cells in a form of brain - sensing, communicating and then acting as one entity.

It's hard to imagine what kind of consciousness that might be - but that doesn't make it impossible I suppose. I personally think that consciousness is something that requires a nervous system to generate it as an emergent phenomenon of some kind - or perhaps something that does the same thing, whatever that is.

> I'm not going to give any credit to the "tuning into an external consciousness" bolox.

No me neither really. Just that really weird/daft/made-up ideas are, I think, as good a go at the problem as the lazy "it doesn't really exist" or "it's just really complex computation" get-outs that are popular among many leading thinkers.
Post edited at 16:06
 Jon Stewart 15 Mar 2015
In reply to Trevers:

> To be honest, nor do I. It's more a feeling really, based on the way that quantum and classical computers are able to tackle certain problems.

> I've even come across the suggestion, made by professors in the field, that the brain is a quantum computer and that consciousness is an emergent property of a complex quantum system. So given all these things, it wouldn't surprise me if quantum computation is the key to realising a properly conscious AI, as opposed to just a highly intelligent computer.

Roger Penrose put this forward, didn't he? It seemed compelling to me, but since I don't really understand what a quantum computer can/could do, it's very hard to have a view. There is an angle on Penrose's idea that put me off: he seemed to be saying, "since we don't really understand the process behind the collapse of the wave function in quantum physics, and we don't understand how consciousness is generated by the brain in neurophysiology, wouldn't it be nice if they were the same thing". Well yes, it would be nice, but that doesn't make it likely to be true!

 Hooo 16 Mar 2015
In reply to Jon Stewart:

> I personally think that consciousness is something that requires a nervous system to generate it as an emergent phenomenon of some kind - or perhaps something that does the same thing, whatever that is.

I agree; that is the crux of the matter. The question is: can that nervous system be synthetic? I don't see why not, in principle. We don't know how, and our current technology may not be capable, but those are problems that can be solved.
 Dave Garnett 16 Mar 2015
In reply to Jon Stewart:

> Roger Penrose put this forward, didn't he? It seemed compelling to me, but since I don't really understand what a quantum computer can/could do, it's very hard to have a view.


Penrose is a hell of a mathematician, but he has some very odd ideas about cell biology. It's been a while since I read that stuff about quantum vibrations in microtubules, but given how little we understand about the huge 'conventional' complexity of neuroscience, it seems an extravagant over-elaboration based on minimal evidence, if any.

