Artificial Intelligence

 The Lemming 03 Feb 2016
When will it become sentient, or has it already?
 Trevers 03 Feb 2016
In reply to The Lemming:

I'm going with when we've cracked quantum computation

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
 Brass Nipples 03 Feb 2016
In reply to The Lemming:


I thought you were it, and doing a reasonable job.
abseil 03 Feb 2016
In reply to The Lemming:

> When will it become sentient, or has it already?

Rumour has it that at least 6 UKC users are AIs. Who are they, though? I wonder...
 felt 03 Feb 2016
In reply to abseil:

AI Evans, for starters
abseil 03 Feb 2016
In reply to felt:

> AI Evans, for starters

Right. But oh no! There have been dozens of climbing AIs!!
 Rob Exile Ward 03 Feb 2016
In reply to The Lemming:

As Steven Pinker (Bill Gates' author of choice) has noted: it will be a little while before a computer can make sense of a couple of lines like this:

"I'm leaving."
"Who is she?"



 CurlyStevo 03 Feb 2016
In reply to Rob Exile Ward:

It will be a little while before computers can make sense of any words; it's all smoke and mirrors, as they don't understand concepts. Currently all learning is orchestrated within confined systems and isn't at all broad.
 Mick Ward 03 Feb 2016
In reply to CurlyStevo:

> ...it's all smoke and mirrors, as they don't understand concepts.

Phew, I was getting worried for a mo! Had been hoping to get away with a last few senile years undisturbed before the rise of the robots. But it's OK after all. ("Jolly good, carry on!" as my old mate, Phil the Greek was wont to utter.)

Mind you, I'm stuck on a planet with over seven billion others and, if they understand concepts, there's precious little agreement.

Oh, sod it. Don't want to think about that. "Jolly good, carry on!" will hopefully see me through.

Mick

P.S. Hope you're faring better 'oop (further) North than the ninth legion...



 wercat 03 Feb 2016
In reply to The Lemming:

when it can "feel".

I hope it never does, for ethical reasons.
KevinD 03 Feb 2016
In reply to Rob Exile Ward:

> As Steven Pinker (Bill Gates' author of choice) has noted: it will be a little while before a computer can make sense of a couple of lines like this:

Plenty of people would have difficulty with that though.
I think the main breakthrough will be when someone comes up with a good architecture for plugging various components together.

 john arran 03 Feb 2016
In reply to KevinD:

Maybe the main breakthrough will be when a computer itself comes up with a good architecture for plugging various components together!
 Dave Garnett 04 Feb 2016
In reply to john arran:

> Maybe the main breakthrough will be when a computer itself comes up with a good architecture for plugging various components together!

Actually, I think this is a key point. I struggle to see how real self-awareness can arise without some experience of self-determination. Without some sort of 'body' to control with both sensory and motor experience I don't see how AI could ever become genuinely conscious.
 CurlyStevo 04 Feb 2016
In reply to Dave Garnett:

What if the body were virtual, in a virtual world, perhaps populated both by people controlling virtual bodies and by AIs controlling virtual bodies?
 wercat 04 Feb 2016
In reply to Dave Garnett:
I think you are right. I hate the assertion that SI will somehow arise from complexity. A qualitative change in architecture would be required - associative memory, for instance, and some kind of mood-state ("feeling") centre that associative memory would be linked to - memories "fetch" themselves when information, mood, or other self-retrieved memories in the clearing centre associate with them.

I don't think a machine imprisoned in a "fetch-execute" algorithm will ever do more than simulate emotion or sentience according to someone's recipe.
Post edited at 08:50
 Dave Garnett 04 Feb 2016
In reply to CurlyStevo:

You don't think that just existing in the real world would be simpler? Wouldn't any convincing virtual environment need to be more complex than the consciousness it housed?
 Rob Parsons 04 Feb 2016
In reply to wercat:

> I think you are right. I hate the assertion that SI will somehow arise from complexity.

How do you think the brain itself works? Is it 'magic'? Or just - at the end of it - complex physics and chemistry?
 wbo 04 Feb 2016
In reply to Dave Garnett:

Does it need to be convincing though? We don't need the real world for an environment to test AI; we just need an environment where intelligent choices need to be made through self-determination. A simpler environment is good for training and testing, but can still be complex enough to require choices to be made.

paul_in_cumbria 04 Feb 2016
In reply to john arran:

> Maybe the main breakthrough will be when a computer itself comes up with a good architecture for plugging various components together!

Done already, to a certain extent. John Koza built up an impressive list of human-competitive computer-designed systems and architectures, mostly in the area of electronics design, and many leading to patents. However, don't raise your hopes too high! This is achieved through genetic programming, which allows the computer to develop architectures and networks concurrently with components, again concurrently optimised by (usually) genetic algorithms. Sometimes the results and methods are so unforeseen, and the performance so excellent, that it's easy to be seduced into thinking that there has been some element of 'intelligent design' involved - mostly because on the whole people are unaware of just how powerful the elements of crossover, mutation and population selection in a GA are, and that a human set the problem and objective function in the first place. (Disclaimer: problem-solving heuristics other than GAs are available.)
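In case it helps, here's what that crossover/mutation/selection loop looks like stripped to the bone - a toy objective (maximise the 1s in a bitstring) standing in for the circuit simulations Koza's systems actually scored against. Everything in it is invented for illustration:

```python
# A minimal GA: tournament selection, single-point crossover, bit-flip mutation.
# Toy objective (count the 1s); Koza-style work would score a simulated circuit here.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

def fitness(genome):
    return sum(genome)

def crossover(a, b):
    # Splice a prefix of one parent onto a suffix of the other.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def tournament(population, k=3):
    # Fitter individuals are more likely to parent offspring.
    return max(random.sample(population, k), key=fitness)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", GENOME_LEN)
```

The point of the sketch is how little of the "design" lives in the loop itself: all the intelligence is in the fitness function a human wrote.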
KevinD 04 Feb 2016
In reply to wercat:

> I don't think a machine imprisoned in a "fetch-execute" algorithm

We already have alternatives to that though. Neural networks aren't tied by algorithms in the same way.
 wercat 04 Feb 2016
In reply to Rob Parsons:

no, not magic, but NOT from sheer complexity. A qualitative difference from von Neumann architecture.

In essence I think that sentience arises from emotion - feeling how any item of information or retrieved memory or the results of following a train of processing will affect "you". How such a "you" element would be accomplished I do not know and as I said before there would be serious ethical objections/considerations.
 Rob Parsons 04 Feb 2016
In reply to wercat:

> no, not magic, but NOT from sheer complexity. A qualitative difference from von Neumann architecture.

Von Neumann architecture is just the mechanism; it is not the model itself.

> In essence I think that sentience arises from emotion - feeling how any item of information or retrieved memory or the results of following a train of processing will affect "you".

Ok then, same question - is 'emotion' magic? Or just the outcome of the (complex) physics and chemistry?
 Lord_ash2000 04 Feb 2016
In reply to Dave Garnett:
> Actually, I think this is a key point. I struggle to see how real self-awareness can arise without some experience of self-determination. Without some sort of 'body' to control with both sensory and motor experience I don't see how AI could ever become genuinely conscious.

Although there is a lot of work to do on the computing side of things yet, I think the body and sensory experience is the easiest bit to do; in fact, we can do that already.

I imagine you'd have your large central computer which contains the AI 'brain' (or it may even be distributed across many remote machines). This would be where the thinking and understanding happens. It would then be able to remotely control a number of physical robots (more Mars rover than Terminator), equipped with cameras, microphones, thermometers and other sensors, maybe even a pressure-sensitive 'hand' so it can physically interact with objects. These would be its eyes and ears etc.

The hard part is making learning software and teaching it about the world. I think you'd have to teach it much like you'd teach a baby at first, helping it understand that words are associated with objects, for example. You could show it a small red ball and say it's a ball, and then a large blue ball and say it's a ball. The software would have to deduce that the common factor for being a ball is the spherical shape, and that the size or color of the object isn't important; you'd then hope it could identify a medium-sized green spherical object as a ball without you telling it. It's very basic at first, but it's about breaking down how we know things to the most simplistic level and writing software to behave like that.
Post edited at 09:11
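A toy sketch of that learning step, with made-up symbolic features standing in for real sensor data - just the "deduce the common factor" part, nothing like a real vision system:

```python
# Toy version of the ball-learning example: given labelled objects, find which
# attribute actually predicts "ball", then classify an unseen object.
# Hypothetical data; a real system would have to learn this from pixels.
examples = [
    ({"shape": "sphere", "size": "small", "colour": "red"},   True),   # a ball
    ({"shape": "sphere", "size": "large", "colour": "blue"},  True),   # a ball
    ({"shape": "cube",   "size": "small", "colour": "red"},   False),
    ({"shape": "cube",   "size": "large", "colour": "green"}, False),
]

def predictive(attr):
    # An attribute is predictive if each of its values maps to exactly one label.
    mapping = {}
    for features, label in examples:
        value = features[attr]
        if mapping.setdefault(value, label) != label:
            return False
    return True

useful = [a for a in examples[0][0] if predictive(a)]
print("predictive attributes:", useful)  # -> ['shape']; size and colour drop out

# Classify a medium-sized green sphere using only the predictive attribute.
unseen = {"shape": "sphere", "size": "medium", "colour": "green"}
lookup = {f[useful[0]]: lbl for f, lbl in examples}
print("is it a ball?", lookup[unseen[useful[0]]])  # -> True
```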
 CurlyStevo 04 Feb 2016
In reply to Dave Garnett:
I was using it as an example. It in effect simplifies the problem, as you don't need to create something physical and then create bespoke systems for interpreting sensors (we do appear to have specialist areas of our brains for image processing, for example).
 wercat 04 Feb 2016
In reply to Rob Parsons:

Well, I thought it was implicit in my reply that producing a computational emotion element would be an accomplishment, i.e. the result of research, development and engineering. I think the AI people who put blind faith in complexity producing sentience "somehow" are really asserting that it is "by magic"!
 CurlyStevo 04 Feb 2016
In reply to Lord_ash2000:
I disagree with this, but since we don't really have generic 'AI' it's a matter of debate that can't be proved.

There aren't any 'AI' systems currently available that could learn generically about the world, so once you've developed the body you'd still have the hard part to do.

All the solutions to AI problems in existence, be they GAs or deep-learning neural nets, have to be configured to solve the problem by the AI coders. They aren't generic problem solvers like our brains. For true AI you need a generic problem solver; that's the hard bit, and I think if it's achievable it could have nothing to do with a 'body' (as we know it, anyway).

Certainly Google are making some pretty good advances, but they are still a long way off making a machine that can learn in the way we can (about pretty much any subject, and understand).
Post edited at 09:19
 CurlyStevo 04 Feb 2016
In reply to wercat:
It's akin to saying more powerful computers == intelligence; I agree it's nonsense. If making computers faster would make them intelligent, we could have had really slow but really extreme intelligence years ago.
Post edited at 09:17
 MonkeyPuzzle 04 Feb 2016
In reply to The Lemming:

Is Artificial Intelligence where someone (usually an estate agent) wrongly says "myself" instead of "me", because they think it makes them sound clever?
 wercat 04 Feb 2016
In reply to MonkeyPuzzle:

no, it is another thing entirely - Affected Intelligence, similarly initialled but unfortunately often confused by certain "professionals".
 wercat 04 Feb 2016
In reply to CurlyStevo:

I always remember a lecturer telling us back in 1979 one definition of an electronic computer - "Just remember that a computer is just a machine that lets you make mistakes at the speed of light".
paul_in_cumbria 04 Feb 2016
In reply to CurlyStevo:


> All the solutions to AI problems in existence, be they GAs or deep-learning neural nets, have to be configured to solve the problem by the AI coders. They aren't generic problem solvers like our brains. For true AI you need a generic problem solver; that's the hard bit, and I think if it's achievable it could have nothing to do with a 'body' (as we know it, anyway).

Generic is certainly the problem. The next move has been to 'pool' groups of heuristics like GAs, simulated annealing etc. under a supervisory metaheuristic which oversees the activity. It's certainly not deterministic, as most of the search process is statistical or random-based. You then use experience to reduce the degrees of freedom and the extent of the search space, to reduce the computational load - something like Fred Glover's 'tabu search'. I've often wondered whether this is part of our intelligence: as we grow up, we acquire 'feasible solutions' which bound our search spaces when we encounter unseen problems. As an example from the AI side, I've been involved in the past in delivering a runway management decision support system to a major international hub airport. If aircraft enter the taxiway for takeoff out of order, the rescheduling search space is somewhere around 2^21, which is a large number! Simple application of tabu renders the problem extremely tractable and capable of recalculation within a specified demand time of 3.4 seconds.
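A minimal sketch of the tabu idea on a toy sequencing problem - adjacent swaps, an invented cost function, and a short-term memory that bans recently used moves so the search can't cycle. Nothing like the real runway model:

```python
# Tabu search on a toy takeoff-sequencing problem: hill-climb over adjacent
# swaps, but keep a short-term memory of recent swaps and refuse to undo them.
import random
from collections import deque

random.seed(1)
N = 10
weights = [random.randint(1, 9) for _ in range(N)]  # stand-in for delay costs

def cost(order):
    # Weighted completion time: a "heavier" aircraft waiting longer costs more.
    return sum(weights[job] * pos for pos, job in enumerate(order))

current = list(range(N))
random.shuffle(current)
best = list(current)
tabu = deque(maxlen=7)  # the tabu list: recently swapped pairs

for _ in range(200):
    candidates = []
    for i in range(N - 1):
        neighbour = list(current)
        neighbour[i], neighbour[i + 1] = neighbour[i + 1], neighbour[i]
        move = tuple(sorted((current[i], current[i + 1])))
        c = cost(neighbour)
        # Aspiration: a tabu move is allowed anyway if it beats the best so far.
        if move in tabu and c >= cost(best):
            continue
        candidates.append((c, move, neighbour))
    if not candidates:
        break
    c, move, current = min(candidates)  # best admissible neighbour, even if worse
    tabu.append(move)
    if c < cost(best):
        best = list(current)

print(best, cost(best))
```

The key line is that the search always moves to the best admissible neighbour, even when it's worse than where it stands - the tabu memory is what stops it sliding straight back.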
 CurlyStevo 04 Feb 2016
In reply to paul_in_cumbria:
I don't think our brains work like a search, and if it were as simple as your suggestion then someone would just do it - Google are spending zillions in this area just now.

I personally don't think GAs are all that great - only a little better than random - and the intelligence is in the design of the data, arranged by the coder in a domain-specific format to specifically solve the problem.

Most advances in scientific algorithms and theories didn't occur using the scientific method or by searching all the available options (there are just too many, even after restricting the space); they were proved by science after a jump in understanding was made, i.e. someone had a spontaneous idea! I don't think we have a clue how to make computers have spontaneous ideas.
Post edited at 09:49
KevinD 04 Feb 2016
In reply to CurlyStevo:


> All the solutions to AI problems in existence, be they GAs or deep-learning neural nets, have to be configured to solve the problem by the AI coders.

So do our brains. Different parts have been "designed" for different jobs and then get further configuration and guidance as we grow up. It's just that the hardwiring of our brains has been built up over rather a long time, and we still then take many years of dedicated training to learn how to use the skills.
 CurlyStevo 04 Feb 2016
In reply to KevinD:
Our brains seem to have some specialist wiring, yes, but they have generic components too. I find it interesting how people with brain damage to crucial areas of their brain can use other parts to overcome most of the problems etc. Also, humans can learn and improve their behaviour across an immensely broad number of activities. Computer learning is just nothing like this. It is set up by the programmer to solve one specific problem. Take a neural network: how it is configured, and setting up the inputs and outputs in a way the NN can actually solve the problem (and indeed choosing a problem it can solve), in itself requires a lot of intelligence / intelligent design.

We are a long, long way off truly understanding the human brain, or indeed the human body, IMO. For most diseases they still don't know the cause!
Post edited at 10:08
 wercat 04 Feb 2016
In reply to CurlyStevo:
Searching in the brain I believe to be no more than presenting the information in the right area; the memory actively searches itself. It could be a bit like organising conventional memory so that the address bus is not used for looking up at all - you present the information requiring retrieval of associations through a "data bus" almost as wide as the content that can be stored. Each element of memory would itself compute whether it was required or not, and perhaps even the strength of the association, and report ITSELF in a bidding process (at the same time as all other elements were doing the same thing) back to a structure that acts as a clearing house. This would then prioritise the most significant items (which could in itself be subject to an associative process, in order to cause self-retrieval of either hardwired or learned inhibitory or amplificatory information to improve the prioritisation by overriding inappropriate responses).

Memory is in essence "aware" of what it contains and of what currently needs to be looked up.
Post edited at 09:55
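A very crude sketch of that bidding process - each stored trace scores its own similarity to the cue (conceptually in parallel), and a "clearing house" keeps the strongest bids; no address is ever looked up. The memories and features are invented for illustration:

```python
# Content-addressable recall: every stored trace "bids" with its own similarity
# to the cue, and the clearing house keeps the strongest bids above a threshold.
def similarity(cue, trace):
    # Feature-set overlap (Jaccard) stands in for associative strength.
    return len(cue & trace) / len(cue | trace)

memories = {
    "grandma's kitchen": {"baking", "warmth", "smell", "bread"},
    "beach holiday":     {"salt", "smell", "sun", "sand"},
    "exam hall":         {"stress", "silence", "paper"},
}

def recall(cue, threshold=0.2):
    bids = {name: similarity(cue, trace) for name, trace in memories.items()}
    # Clearing house: prioritise the strongest associations above threshold.
    return sorted(((s, n) for n, s in bids.items() if s >= threshold), reverse=True)

print(recall({"smell", "bread"}))  # a smell fetches its related memories
```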
cb294 04 Feb 2016
In reply to The Lemming and several others:

I think the discussion on this thread is insufficiently separating intelligence and sentience, both natural and artificial. One does not imply the other.


CB
 CurlyStevo 04 Feb 2016
In reply to wercat:
Our brains are a lot more complex than an information retrieval system, which is where I was going with the spontaneous problem solving. Most maths algorithms with proofs were not discovered via the proof; they were only proved by it. It's about knowing and having spontaneous ideas and then showing they are correct, not working bottom-up using search etc.

I'm not sure that information retrieval has much to do with search either; there are 100 billion neurons in the brain, and searching through them wouldn't really work. But yeah, it's an arbitrary discussion, as it's all hypothetical - we don't really understand the brain. However, I think some theories seem more likely than others. Certainly memories do seem to work on connections; that's why smells will sometimes trigger a whole bunch of memories, or once you remember one thing a whole bunch of other stuff is remembered. However, quite often a lot of the stuff you remember (or indeed think about) isn't even important or relevant!
Post edited at 10:09
KevinD 04 Feb 2016
In reply to CurlyStevo:

> Take a neural network: how it is configured, and setting up the inputs and outputs in a way the NN can actually solve the problem (and indeed choosing a problem it can solve), in itself requires a lot of intelligence / intelligent design.

True. However, the same is true of our brains; it's just that the hard work has been done already. Bit by bit, various parts of specialist behaviour are being replicated by computers.
The question is whether NNs could be brought together into something roughly resembling a brain, e.g. train up a general vision centre which feeds into further processing and so on - most likely by tying multiple NNs together alongside some more specialist hardcoded areas.
The follow-up question is whether we should go the imitation route or try alternatives. As you say, the brain is extremely complex and poorly understood.
 Trangia 04 Feb 2016
In reply to The Lemming:

If you were a bull with 40 heifers in the adjoining field I don't think you would be a fan of AI.........
 CurlyStevo 04 Feb 2016
In reply to KevinD:
Did you read this http://googleresearch.blogspot.co.uk/2016/01/alphago-mastering-ancient-game... ?

It's super interesting, and not entirely different to how we play games (and relevant to your point about piecing together NNs).

My friend works on the project. I also work in AI, but in games, so not academic AI, although it does cross over - and before now I've written GAs and NNs.
Post edited at 13:21
OP The Lemming 04 Feb 2016
In reply to CurlyStevo:

It was the AI winning at Go that got me interested, especially where AI teaches itself to play games. Teaches itself.
 CurlyStevo 04 Feb 2016
In reply to The Lemming:
Yeah, but the neural network has been set up to learn Go; it couldn't learn any other game without being reconfigured by a human. It's still super cool how they have one NN that evaluates the current situation, one NN that suggests moves, and then a Monte Carlo tree search (a modern twist on minimax) to try a number of them and pick the best option. Very well designed, and not far different to the way a human brain considers the problem.

Also, I don't think NNs can learn more than one domain at once. For example, even if a similar config could learn Go and chess, I think the actual NNs would be mutually exclusive, and if you tried to train one NN for both it would likely fail at one and not be as good at either. Keeping them separate would probably improve human performance too, to a point at least.
Post edited at 13:27
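For the curious, the bare select/expand/simulate/backup loop of Monte Carlo tree search looks something like this on a toy game (Nim: take 1-3 stones, taking the last stone wins). This is emphatically not AlphaGo's code - where AlphaGo uses a policy net to propose moves and a value net to score positions, plain random rollouts stand in for both here:

```python
# Bare-bones MCTS for Nim (take 1-3 stones; taking the last stone wins).
import math, random

class Node:
    def __init__(self, stones, parent=None):
        self.stones, self.parent = stones, parent
        self.children, self.wins, self.visits = {}, 0, 0

def ucb(child, parent_visits):
    # UCB1: balance exploitation (win rate) against exploration.
    return (child.wins / child.visits
            + math.sqrt(2 * math.log(parent_visits) / child.visits))

def rollout(stones):
    # Random playout: returns 1 if the player to move from `stones` wins.
    turn = 0
    while stones:
        stones -= random.randint(1, min(3, stones))
        turn ^= 1
    return 1 if turn == 1 else 0

def mcts(root_stones, iterations=2000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded.
        while node.stones and len(node.children) == min(3, node.stones):
            node = max(node.children.values(), key=lambda c: ucb(c, node.visits))
        # Expansion: try one untried move.
        if node.stones:
            move = random.choice([m for m in range(1, min(3, node.stones) + 1)
                                  if m not in node.children])
            node.children[move] = Node(node.stones - move, node)
            node = node.children[move]
        # Simulation, scored for the player who just moved into `node`...
        result = 1 - rollout(node.stones)
        # ...then backpropagation, flipping perspective at each level.
        while node:
            node.visits += 1
            node.wins += result
            result = 1 - result
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

print("best opening move from 10 stones:", mcts(10))
```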
 CurlyStevo 04 Feb 2016
In reply to CurlyStevo:
Actually, I wonder if this is true; perhaps with the right configuration and training an NN could learn more than one domain at once. Hmmmmm
Post edited at 14:09
KevinD 04 Feb 2016
In reply to CurlyStevo:

> Keeping them separate would probably improve human performance too, to a point at least.

The question though is, if you examined the human brain, is it using the same neurons for both (just the game playing, ignoring all the image processing etc.), or are we effectively using different networks*? Which then goes back to linking together multiple neural networks.
We do have dedicated areas, although as someone has mentioned we can sometimes rewire existing areas to do something new.

*Or, probably more likely, a mix of the two.
 CurlyStevo 04 Feb 2016
In reply to KevinD:
Reasonable point, but I think there would be some crossover somewhere for many games learnt in the human brain.

I wonder, if you took a traditional NN and separated the inputs and outputs into two sets, one for each problem, and trained it so that it can only respond to one set of inputs with one set of outputs, whether a single NN could learn to solve a different problem on each. I'm actually thinking this might be possible; if so, it would probably be a function of the number of nodes in the middle layer. OK, it's not generic learning, but it's still interesting.
Post edited at 14:42
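That thought experiment is cheap to try. A sketch under stated assumptions - one shared hidden layer, disjoint input/output slots per problem (XOR on one pair of inputs, AND on the other), the inactive problem's inputs zeroed and only the active head's error trained. Whether it converges cleanly is exactly the open question:

```python
# One shared hidden layer, two disjoint "heads": inputs 0-1 / output 0 are XOR,
# inputs 2-3 / output 1 are AND; input 4 is a constant bias unit.
# Plain numpy with hand-rolled backprop.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (5, 8))   # 5 inputs (2 per task + bias) -> 8 hidden
W2 = rng.normal(0, 1, (8, 2))   # 8 hidden -> 2 outputs (1 per task)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

tasks = {  # task -> (input slice, output index, truth table)
    0: (slice(0, 2), 0, {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}),  # XOR
    1: (slice(2, 4), 1, {(a, b): a & b for a in (0, 1) for b in (0, 1)}),  # AND
}

def make_input(in_slice, a, b):
    x = np.zeros(5)
    x[in_slice] = (a, b)   # the inactive task's inputs stay zero
    x[4] = 1.0             # bias unit
    return x

for step in range(40000):
    in_slice, out_idx, table = tasks[step % 2]     # alternate the two tasks
    (a, b), y = list(table.items())[rng.integers(4)]
    x = make_input(in_slice, a, b)
    h = sigmoid(x @ W1)
    o = sigmoid(h @ W2)
    err = np.zeros(2)
    err[out_idx] = o[out_idx] - y                  # mask the inactive head
    d_o = err * o * (1 - o)                        # backprop through sigmoids
    d_h = (d_o @ W2.T) * h * (1 - h)
    W2 -= 0.5 * np.outer(h, d_o)
    W1 -= 0.5 * np.outer(x, d_h)

for t, (in_slice, out_idx, table) in tasks.items():
    for (a, b), y in table.items():
        o = sigmoid(sigmoid(make_input(in_slice, a, b) @ W1) @ W2)
        print(f"task {t}, input {(a, b)}: target {y}, got {o[out_idx]:.2f}")
```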
paul_in_cumbria 04 Feb 2016
In reply to CurlyStevo:

> Yeah, but the neural network has been set up to learn Go; it couldn't learn any other game without being reconfigured by a human. It's still super cool how they have one NN that evaluates the current situation, one NN that suggests moves, and then a Monte Carlo tree search (a modern twist on minimax) to try a number of them and pick the best option. Very well designed, and not far different to the way a human brain considers the problem.

The setup is performing search by parallel exploitation and exploration, so it is pretty cool. On learning multiple domains: NNs can approximate any arbitrary nonlinear function, but struggle with discontinuity, and hence with discontinuous domains. I think you're right that the NNs would be mutually exclusive.

 Rob Parsons 04 Feb 2016
In reply to wercat:

> ... I think the AI people who put blind faith in complexity producing sentience "somehow" are really asserting that it is "by magic"!

Who are the AI people (serious ones, anyway) who 'put blind faith in complexity producing sentience'? That claim just isn't something I recognise.

As to 'producing a computational emotion element would be an accomplishment': What do you mean by emotion? What test would you propose to satisfy yourself that a machine was indeed capable of it?
KevinD 04 Feb 2016
In reply to CurlyStevo:

> Reasonable point, but I think there would be some crossover somewhere for many games learnt in the human brain.

Yup, although given how weird and wonderful the human brain is, I'm not sure it's guaranteed. However, it would make sense for some basic strategy ideas to be reusable.

> OK, it's not generic learning, but it's still interesting.

Not sure. I haven't touched NNs for a long time, and even then it was basic stuff.
 CurlyStevo 04 Feb 2016
In reply to KevinD:
Same here. The last one I wrote was about 10 years ago (for fun).
Post edited at 15:38
 Jon Stewart 04 Feb 2016
In reply to cb294:

> I think the discussion on this thread is insufficiently separating intelligence and sentience, both natural and artificial. One does not imply the other.

Absolutely.

I don't know what is meant by 'intelligence' in this context, but something like a programme that learns stuff might qualify for some reasonable definitions.

Whatever is being done in computing might be increasing the amount of 'intelligence' machines have, but it's not doing anything relating to artificial sentience.

'Sentience', as the OP asks, is consciousness. The problem of the nature of consciousness is, I think, the biggest question that science has to face, and almost zero progress has been made so far (this is evident from the fact that philosophers are still writing about it). There has been absolutely no progress on the creation of artificial consciousness - since we have no idea how the brain generates human consciousness, it would be nothing short of a miracle if we created something that did that job before we'd answered the question.

Here are a few clips on the problem - ones I agree with!

youtube.com/watch?v=ZuGZhTYnlY4&

youtube.com/watch?v=j_OPQgPIdKg&

youtube.com/watch?v=GzCvlFRISIM&

And one I don't quite agree with, but David Chalmers takes the kind of approach I think is needed.

youtube.com/watch?v=uhRhtFFhNzQ&
 Dave Garnett 04 Feb 2016
In reply to Jon Stewart:

> The problem of the nature of consciousness is, I think, the biggest question that science has to face, and almost zero progress has been made so far

It's certainly up there as one of the big three. On gravity we are at least out of the blocks, but on time and consciousness we haven't really even defined the scope of the problem.
KevinD 04 Feb 2016
In reply to CurlyStevo:

> Same here. The last one I wrote was about 10 years ago (for fun).

I might have to play with it again at some point. The tools available have come on leaps and bounds since I last looked properly.
