UKC

Artificial Intelligence to wipe out mankind

This topic has been archived, and won't accept reply postings.
 Skyfall 03 Dec 2014
Moley 03 Dec 2014
In reply to Skyfall:

Having observed the IQ of some of the human race, it won't be too difficult for the machines.
 wintertree 03 Dec 2014
In reply to Skyfall:

I think genuine machine intelligence/consciousness is one of those things that has long been promised, and that we are now genuinely on the cusp of. Perhaps in the next 5 years, perhaps 25. I am particularly curious to see if we will understand the basis of conscious self-awareness before we build it, or because of what emerges in what we build. Almost none of the gigabucks being spent on this is openly going on anything quantum, which may or may not be critically important.

If we can make its capabilities scale linearly to sizes larger than a human brain, then it becomes an event whose consequences are so unknowable and unpredictable that it forms an event horizon to our predictions of the future.

"The singularity"

Profound stuff, dude.
Post edited at 09:00
 ewar woowar 03 Dec 2014
In reply to Skyfall:

Or I Robot.
 pebbles 03 Dec 2014
In reply to ewar woowar:

A.I. Outrageous emotional manipulation, but it always gets me all choked up.
 ByEek 03 Dec 2014
In reply to Skyfall:

I think he may have a point, but the greater danger to wider society is ever-increasing automation. It looks like drivers will soon be obsolete, for example.

One has to wonder where the tipping point for the economy will be - where business is so efficient because it employs no one, but no one has cash to spend because they have been replaced by robots.
 Lord_ash2000 03 Dec 2014
In reply to Skyfall:

The gist of what he's saying is indeed the well-known technological event horizon theory: the time gap between major advancements gets ever smaller as technology develops, and at the point where AI is able to surpass the human brain, humans will no longer be able to keep up with the pace of innovation. Once we have AI smarter than us, it is the AI that will make the next steps, constantly updating and upgrading itself until it's well beyond our grasp and we become either enslaved to it or wiped out by it.
 wintertree 03 Dec 2014
In reply to ByEek:
> One has to wonder where the tipping point for the economy will be - where business is so efficient because it employs no one, but no one has cash to spend because they have been replaced by robots.

As tipping points go, one where automation repairs and builds itself and does our work for us seems pretty cushy. It'll need a much stronger socialist movement to make sure it benefits everyone... No reason for it not to, though; wealth would be all-prevalent and not built on towers of the poor. It needs the energy problem cracking as well for that to happen. Also, can it happen without automation becoming clever enough to deserve legal rights? Then we're back to square one. The tough bit is going to be high-level automation without any chance of conscious self-awareness, perhaps?
Post edited at 09:29
 Lord_ash2000 03 Dec 2014
In reply to ByEek:

It's an interesting point, and one I have pondered for a while. If AI can be contained and used to serve us, then many workers will no longer be needed. In fact it's possible to imagine some kind of post-capitalist utopia where all the needs of man are taken care of by a machine slave race and man is left to live a life of leisure and personal fulfilment. There will of course still be some degree of wealth by which to compare us to each other; it'll just be defined differently to how we define it today, just as centuries ago having cattle meant you were rich.

The difficulty lies in making the transition. I doubt it'll go smoothly and it'll take decades, but hopefully we'll make it. Let's just hope the AI doesn't realise that it actually doesn't need us and takes over.

You could argue that advanced AI is the next natural step in evolution. I suspect that most advanced civilisations around the universe are in the form of machine intelligences, and we are about to take that next step from a primitive biological civilisation to an advanced machine race.
 d_b 03 Dec 2014
In reply to Skyfall:

As anyone who has read their Harlan Ellison knows, the AIs won't wipe out humanity. They will keep a few of us around to torture for the rest of time, because it's more fun.
 ByEek 03 Dec 2014
In reply to Lord_ash2000:

I'm not really sure how you break out of a capitalist society, since such a system puts value on resources based on the demand for those resources. If AI robots ultimately start to bring down the market, it will auto-correct by devaluing further development, ultimately putting an end to it. Just look at Concorde, for example: a futuristic leap forward, but the markets disagreed and priced it out.
 Dave Garnett 03 Dec 2014
In reply to davidbeynon:
> (In reply to Skyfall)
>
> As anyone who has read their Harlan Ellison knows, the AIs won't wipe out humanity. They will keep a few of us around to torture for the rest of time, because it's more fun.

I prefer the Iain Banks vision of benevolent AIs that enjoy human company and consider the highest ethical standards not only the defining feature of an advanced civilisation but also a demonstration to competitors of their technical capability.

Although there are some very bad AIs in his universe too.

Actually, what Hawking was talking about sounded more like one of Banks's monopathic hegemonising swarms - a nuisance that all capable (AI assisted) civilisations have a duty to help control.

 d_b 03 Dec 2014
In reply to Dave Garnett:

I'm sure the people being tortured in the story would prefer that as well.
 The New NickB 03 Dec 2014
In reply to Skyfall:

Matrix, surely!?!
 Dave Garnett 03 Dec 2014
In reply to davidbeynon:

It's an interesting question though: would sufficient intelligence and capability favour tolerance and benevolence (and perhaps a certain reverence for their biological progenitors), or would advanced AIs regard biological intelligences as mere animals or, worse, an infection?
 GarethSL 03 Dec 2014
In reply to Skyfall:

I doubt it. Unlike humans, computers have an off button, be it inbuilt or added with a sledgehammer at a later date.

Otherwise, a pint of water is known to cause significant problems for electrical things... if push comes to shove my weapon of choice will be a supersoaker.
 Xharlie 03 Dec 2014
In reply to Skyfall:

This story is pure sensationalism.

Having some experience with A.I., I can safely say that we are not even approaching "Intelligence".

Monte Carlo Tree Search can play a proper game of chess if it is given some fancy heuristics; robotic agents can map their locations and navigate the self-built map, controlling actuators and servo motors to approximate a "self-driving" car; drones can identify a target from sensor data; and so on. But none of this is "Intelligence", and the idea that machines will develop some sort of consciousness is laughable.
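
(For a concrete flavour of the gap between this and "Intelligence": below is a minimal sketch of the Monte Carlo Tree Search idea, applied to the toy game of Nim rather than chess. The code and numbers are mine, purely for illustration; a real chess engine layers heavy domain heuristics on top of this skeleton.)

import math
import random

TAKE = (1, 2, 3)   # Nim: take 1-3 stones, whoever takes the last stone wins

class Node:
    def __init__(self, stones, parent=None):
        self.stones = stones     # stones left for the player about to move
        self.parent = parent
        self.children = {}       # move -> child Node
        self.visits = 0
        self.wins = 0.0          # wins for the player who moved INTO this node

def untried_moves(node):
    return [m for m in TAKE if m <= node.stones and m not in node.children]

def best_child(node, c=1.4):
    # UCB1: exploit good children, but keep exploring under-visited ones
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones):
    # Play random moves; True if the player to move at the start wins.
    starters_turn = True
    while True:
        stones -= random.choice([m for m in TAKE if m <= stones])
        if stones == 0:
            return starters_turn
        starters_turn = not starters_turn

def mcts(stones, iterations=3000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        while not untried_moves(node) and node.children:      # 1. select
            node = best_child(node)
        moves = untried_moves(node)
        if moves:                                             # 2. expand
            m = random.choice(moves)
            node.children[m] = Node(node.stones - m, parent=node)
            node = node.children[m]
        # 3. simulate: with no stones left, the previous player has already won
        mover_wins = rollout(node.stones) if node.stones else False
        reward = 0.0 if mover_wins else 1.0   # from the in-moving player's view
        while node is not None:                               # 4. backpropagate
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts(10))  # usually 2: leaving a multiple of 4 is the known winning move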
 d_b 03 Dec 2014
In reply to Dave Garnett:

I can't see either scenario being that likely. The reason boils down to what, if anything, an AI and its designers want.

Real AI software of the kind that actually exists tends to be able to solve specific problems, but has nothing you would recognise as motivation. These systems do one or two things well, and that's it. Think Google and the route-finding algorithms on your satnav (see the sketch at the end of this post) rather than Skynet.

For an AI apocalypse you would need a general purpose intelligence that has its own goals. Nobody has any idea how to build one of those, and there isn't much incentive either. Why make something that will go off and do its own thing when you can build smart tools instead?

Even if you do figure out how to make something generally competent then it needs to have some sort of motivation. It's unlikely that anyone would type in "No more humans" as a goal state, so it would need drives of some sort. Ours are evolved in - we want to eat, reproduce, not die etc. because earlier beings that failed to do that didn't get to be our ancestors.

Where do they come from in a machine? Do you deliberately build in a rapacious desire for it to consume all matter in the universe to make more of itself?
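
(To make the "smart tools" point concrete: the route finder in a satnav is, at heart, a few dozen lines of shortest-path search with no goals of its own. A minimal Dijkstra sketch; the place names and distances below are made up for illustration.)

import heapq

def dijkstra(graph, start, goal):
    """graph: {node: {neighbour: distance}}; returns (cost, path)."""
    queue = [(0, start, [start])]   # priority queue ordered by cost so far
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, dist in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + dist, neighbour, path + [neighbour]))
    return float("inf"), []   # goal unreachable

roads = {                     # imaginary town, distances in km
    "home":   {"a_road": 2, "b_road": 5},
    "a_road": {"b_road": 1, "crag": 9},
    "b_road": {"crag": 4},
}
print(dijkstra(roads, "home", "crag"))  # (7, ['home', 'a_road', 'b_road', 'crag'])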

 MeMeMe 03 Dec 2014
In reply to wintertree:

Your posts on this topic would be so much better if your username was 'Wintermute' rather than 'Wintertree'.
 Dave Garnett 03 Dec 2014
In reply to davidbeynon:
> (In reply to Dave Garnett)
>
> Where do they come from in a machine? Do you deliberately build in a rapacious desire for it to consume all matter in the universe to make more of itself?

As a weapon that becomes uncontrollable? As part of an arms race, high-functioning autonomous AI might become necessary.

Then again, thirty years ago who would have imagined a ubiquitous ecosystem of computer viruses and other malware?
 nufkin 03 Dec 2014
In reply to Grendel:

> if push comes to shove my weapon of choice will be a supersoaker.

Now why didn't John Connor think of that?
 Dave Garnett 03 Dec 2014
In reply to Xharlie:
> (In reply to Skyfall)
>
> This story is pure sensationalism.
>
> Having some experience with A.I., I can safely say that we are not even approaching "Intelligence".

I agree, at the moment, but who can predict where we will be 100 years from now?
>
> the idea that machines will develop some sort of consciousness is laughable.

I guess that depends on how much complexity you need for consciousness to become possible. Human neurology has truly astronomic complexity, but maybe a deliberately designed substrate could be more efficient, or maybe we will develop some paradigm-changing quantum-level technology, or maybe it might turn out not to need to be quite as complicated as the way we are wired up. Although technology may be inspired by biology, practical machines rarely work in quite the same way.

So, yes, it's probably still a long way off, but it's not laughable.
 John H Bull 03 Dec 2014
In reply to davidbeynon:
> Why make something that will go off and do its own thing when you can build smart tools instead?

The philosopher on Newsnight last night started to talk about a 'paperclips' example (from a book he's written, maybe) but got cut short somewhat. Something about: if AI can make paperclips and robots can use them, you don't need humans, so humans take no further interest in paperclips and forget even how to use them as tools. So, eventually, we lose all the skills we had. At least, I think that's what he was saying.
Post edited at 11:35
 ByEek 03 Dec 2014
In reply to davidbeynon:

I tend to agree. However, Hawking does make a valid point about computer evolution. I don't think we should underestimate anything. I remember the driverless cars of the '80s on the end of an umbilical cable attached to a truck full of computers; yet now we are starting to see that technology creep into consumer products.

I don't believe AI will destroy society Terminator-style, but I do think it has the ability to render much of society worthless. That is the true danger.
OP Skyfall 03 Dec 2014
In reply to bullybones:
I suppose the paperclip point is that we all forget how and why things happen and end up being redundant, even if we don't physically get wiped out Terminator-style. Remember when using a computer meant programming it? Now computers make things happen without 99% of the population being aware of them. Perhaps it's not such a big step to computers designing themselves to work better, even if not being truly "conscious" or whatever. Then where does it end? I'd like to see it but doubt I'll be around.
Post edited at 11:40
 d_b 03 Dec 2014
In reply to Skyfall:

99% of the population doesn't know how to make paperclips anyway. This general problem has existed in some form ever since people started to specialise.
 d_b 03 Dec 2014
In reply to Dave Garnett:

I don't buy it. You would have to make it complex enough to do all its own maintenance and manage its own supply lines too.
 John H Bull 03 Dec 2014
In reply to davidbeynon:
> 99% of the population doesn't know how to make paperclips anyway. This general problem has existed in some form ever since people started to specialise.

This is true. But what if nobody has any skills left - just the machines? The idea then is that the machines have won, and can take over running themselves because they don't need us. I don't really see how this could happen in my sphere of skills (nothing to do with paperclips), which is language-based. But then, language is also a tool, and can be learned by deaf children with no need for adult teaching. So there's nothing to stop machines - eventually - becoming capable of learning the full complexities of language. As an ape with a small brain wired up to some sensory inputs, and a sense of purpose and importance that's purely subjective, I might have to admit that, at that stage, some other algorithm may become smarter than me.
Post edited at 12:47
 wintertree 03 Dec 2014
In reply to Xharlie:

> Having some experience with A.I., I can safely say that we are not even approaching "Intelligence".

For what it's worth, I consider that almost all research into A.I. has basically no connection to machine intelligence or consciousness. People such as Prof Kevin Warwick have done their best to convince the wider public otherwise, but ignoring their self-serving, grandstanding PR efforts, the field of AI really has little to do with actual intelligence or conscious self-awareness.

On the other hand, some of the constructed neural networks are encroaching on primate levels of complexity, e.g. TrueNorth or SpiNNaker. Efforts are underway to produce a full human brain simulation within the next ten years, and we now have laboratory equipment that can image the neural signalling of the entire brains of some fish in real time, continuously. Many pieces of the puzzle are on the table and are coming together.

Frustratingly, it's almost 20 years since this paper - http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9691 - which shows just how much more powerful "digital" logic can be if its non-linear and randomly perturbed pathways are embraced rather than ignored, just as biology does, and as far as I am aware nobody in the field is publishing research on modern substrates designed to harness this power.
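
(To give a flavour of the effect being described, here is my own toy illustration, not the paper's model: a hard threshold unit cannot see a sub-threshold signal at all, yet with a moderate dose of random noise its output starts to track the signal. This is the classic stochastic resonance effect - noise helping rather than hurting.)

import math
import random

def signal_tracking(noise_sd, steps=20000, threshold=1.0):
    """Mean of output*signal: positive when firing tracks the signal."""
    total = 0.0
    for t in range(steps):
        signal = 0.6 * math.sin(2 * math.pi * t / 100.0)  # peak 0.6 < threshold
        fired = signal + random.gauss(0.0, noise_sd) > threshold
        if fired:
            total += signal
    return total / steps

for noise_sd in (0.0, 0.2, 0.5, 1.0, 3.0):
    print(f"noise {noise_sd:3.1f}:  tracking score {signal_tracking(noise_sd):+.4f}")
# With zero noise the unit never fires, so the score is exactly zero; the
# score rises to a peak at moderate noise, then falls as noise swamps the
# signal. The quiet "digital" regime is the one that sees nothing at all.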
Post edited at 12:54
 Trevers 03 Dec 2014
In reply to Skyfall:

I wonder if you could apply a Drake equation-style argument to this.

If intelligent alien life exists, and was able to avoid wiping itself out long enough to create artificial intelligence, then surely that artificial intelligence would have spread itself out across the galaxy and made itself known to us by now.

There might be the following caveats:
-It's impossible for civilisation to get to the point of creating AI without first destroying itself.
-True AI is impossible.
-True AI exists, but sees no need to leave its home planet. It's able to create a fully sustainable machine civilisation, perhaps even to extend the lifetime of its parent star indefinitely.
-True AI is already here, is watching us and sniggering.
-True AI hasn't reached us yet.

I suspect that quantum computation may be the key.
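
(For what it's worth, here's what that argument looks like as a back-of-the-envelope Drake-style calculation with the caveats above bolted on as extra terms. Every number below is an invented placeholder; the structure of the multiplication is the point, not the output.)

# Drake-style estimate with extra "AI" terms. All values invented.
R_star         = 1.5    # star formation rate in the galaxy (stars/year)
f_planets      = 0.9    # fraction of stars with planets
n_habitable    = 0.4    # habitable planets per such star
f_life         = 0.1    # fraction where life appears
f_intelligent  = 0.01   # fraction of those evolving intelligence
f_survives     = 0.1    # fraction not wiping itself out first (caveat 1)
f_builds_ai    = 0.5    # fraction that manage true AI (caveat 2)
f_expansionist = 0.1    # fraction of AIs that bother to spread (caveat 3)
lifetime       = 1e6    # years a detectable machine civilisation lasts

n_visible = (R_star * f_planets * n_habitable * f_life * f_intelligent
             * f_survives * f_builds_ai * f_expansionist * lifetime)
print(f"expected detectable machine civilisations: {n_visible:,.0f}")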
 wintertree 03 Dec 2014
In reply to Trevers:

> I wonder if you could apply a Drake's Law type argument to this

You're not thinking wildly enough. True AI that has a mastery of quantum mechanics and a good understanding of the next level of physics, the one that unites and explains relativity and quantum theory, probably doesn't need to travel the universe. The possibilities of truly mastering consciousness at the quantum scale may be far more alluring, and those of transcending the universe even more so.
 Rob Exile Ward 03 Dec 2014
In reply to Dave Garnett:

I tend to think the idea of AI is vastly overstated/misunderstood. There is an enormous gulf between computers being able to fly aeroplanes or play chess - which, ultimately, are relatively straightforward, clearly delineated tasks - and being able to take decisions that children from quite a young age will manage. Imagine an AI-powered robot that can navigate to a destination, up and down stairs, with 90% of its key sensory apparatus disabled; that can anticipate and avoid obstacles that may never have been encountered before; and can do all that with much of its logic scrambled by outside factors, but can still recover enough to function. That's me finding my way to the bathroom at night after I've had one too many.

Or (a Pinker example) imagine a computer that can understand this exchange:

'I'm leaving.'
'Who is he?'

Making sense of the world, our intelligence, isn't something we learn or are born with; it is a capability that has evolved over billions of years, since the first proteins started replicating. Computers have got a bit of catching up to do.
 tom_in_edinburgh 03 Dec 2014
In reply to Xharlie:

> and the idea that machines will develop some sort of consciousness is laughable.

Why? Organic life managed to evolve consciousness and it is made from the same basic elements we have available to build machines. We have no idea how to design a conscious machine but the evidence from nature is that it can evolve on its own.

In reply to Rob Exile Ward:

I agree. Whenever I encounter bold claims of huge technological advancements just around the corner, I usually mention that the existing record for the fastest subsonic passenger Atlantic crossing is still held by a VC10 (1962) and the fastest passenger crossing ever by Concorde (1969), and usually finish with a picture of the first 747 (1969) next to one that has just rolled off the line in Seattle and play "spot the difference".
 RomTheBear 03 Dec 2014
In reply to tom_in_edinburgh:

> Why? Organic life managed to evolve consciousness and it is made from the same basic elements we have available to build machines. We have no idea how to design a conscious machine but the evidence from nature is that it can evolve on its own.

I'll start worrying when we've managed to build a machine that exhibits the awareness and agency of a single-cell organism, which we haven't even managed to do yet.
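
(It's a useful benchmark, because even the simplest single-cell "agency" takes some machinery to imitate. Below is a toy run-and-tumble chemotaxis loop of my own invention, roughly the strategy E. coli uses to climb a nutrient gradient: keep going while things improve, pick a random new heading when they don't.)

import math
import random

def concentration(x, y):
    """Nutrient field: highest at the origin."""
    return math.exp(-(x * x + y * y) / 200.0)

x, y = 15.0, 15.0                     # start well away from the peak
heading = random.uniform(0, 2 * math.pi)
previous = concentration(x, y)

for step in range(300):
    x += math.cos(heading)            # "run": swim one unit straight ahead
    y += math.sin(heading)
    current = concentration(x, y)
    if current < previous:            # life got worse:
        heading = random.uniform(0, 2 * math.pi)   # "tumble" to a new heading
    previous = current

# Usually finishes within a few units of the peak despite knowing nothing
# about the field except "better" or "worse" at each step.
print(f"finished {math.hypot(x, y):.1f} units from the nutrient peak "
      f"(started {math.hypot(15.0, 15.0):.1f} away)")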
 Mike Stretford 03 Dec 2014
In reply to RomTheBear:

> I'll start worrying when we've managed to build a machine that exhibits the awareness and agency of a single-cell organism, which we haven't even managed to do yet.

The point is you won't have time to worry!!!
 deepsoup 03 Dec 2014
In reply to davidbeynon:
> As anyone who has read their Harlan Ellison knows, the AIs won't wipe out humanity. They will keep a few of us around to torture for the rest of time, because it's more fun.

I've never read any Harlan Ellison. But a powerful being with vastly superior intelligence that gets a kick out of torturing people for eternity... Can't quite put my finger on it, but I'm sure I've read about something similar somewhere. Ah well, just some rubbishy bit of far-fetched dystopian fiction, I expect.
 Clarence 03 Dec 2014
In reply to Skyfall:

How do we know that Hawking is controlling that voicebox? For all we know he may be driven by a rogue AI spreading disinformation about the coming robot revolution.

I for one, welcome my robotic overlords!
 Mike Stretford 03 Dec 2014
In reply to Clarence:

> I for one, welcome my robotic overlords!

They need to work on their expressions

http://tinyurl.com/nc392ot
 The Lemming 03 Dec 2014
In reply to Skyfall:
What are the chances of robots with AI, initially made/designed by humans, exploring the universe and possibly colonising some if not all of it within a few million years?

Obviously we won't feature in that scenario ourselves, as we are too weak and reliant on the earth to survive, but little robots with solar panels or other forms of energy could go anywhere.
Post edited at 17:20
 wercat 03 Dec 2014
In reply to Skyfall:

Skynet already existed in the 1970s, though I think it suffered a setback when one of the launches was a total loss!

Personally, I wouldn't worry about AI until researchers finally cotton on that it isn't all about computational power but about an emotionally driven ego. When machines can compute and also 'feel' a reaction to data, with subsequently derived intended actions, that is when I'll worry. And ethically, to create such a thing and then switch it off would be like murder.
 RomTheBear 04 Dec 2014
In reply to The Lemming:

> What are the chances of human initially made/designed robots with AI exploring the universe and possibly colonising some if not all of it within a few million years?

Fermi's paradox: if it were possible, then it probably would have happened millions of years ago somewhere in the galaxy and they would be everywhere by now.

One interesting question is: what would be the motivation for an AI to actually create a civilisation of robots and explore the universe?
Intelligent robots might have very different goals than humans do beyond their own survival.

 The Lemming 04 Dec 2014
In reply to RomTheBear:

> One interesting question is what would be the motivation for an AI to actually create a civilisation of robots and explore the universe ?

> Intelligent robots might have very different goals than humans do beyond their own survival.

The motivation would be survival. We've used up most resources as it is on this blue planet. So if we do create AI robots, surely they would want to survive in a place where they can grow and reproduce, spawn or replicate.

Energy baby, energy.

We've proved that solar panels work. Our Overlord robots now need raw materials to replicate.

 RomTheBear 04 Dec 2014
In reply to The Lemming:
> The motivation would be survival. We've used up most resources as it is on this blue planet. So if we do create AI robots, surely they would want to survive in a place where they can grow and reproduce, spawn or replicate.

> Energy baby, energy.

Well the question is, why would they want to reproduce or replicate and use more and more energy? If an artificial consciousness emerges, is there any logical argument as to why it would necessarily want to expand and make copies of itself beyond what is needed to ensure survival?
Post edited at 14:40
 The Lemming 04 Dec 2014
In reply to RomTheBear:

> Well the question is, why would they want to reproduce or replicate and use more and more energy,

The same question could be asked of humans too. We've stepped way beyond our remit as a creature.
 RomTheBear 04 Dec 2014
In reply to The Lemming:

> The same question could be asked of humans too. We've stepped way beyond our remit as a creature.

Indeed. What I am thinking is that an AI could evolve very different goals and motivations from ours.

It could be that an AI develops goals that seem completely useless or stupid to us.
 Theo Moore 04 Dec 2014

If you believe that the singularity is likely to occur (computers will continue to develop at an exponential rate), and

if you believe that future humans will have a desire to create simulations of human life (e.g. for historical study), and

if you believe that human experience can in principle be replicated artificially,

then it follows that you should accept that it is likely you are in fact a simulation.

The first two premises of this argument certainly seem plausible to me.
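
(One way to see why the conclusion drops out of those premises: if ancestor simulations are ever run at all cheaply, simulated minds vastly outnumber real ones, so a randomly chosen mind is almost certainly simulated. A toy version of the bookkeeping, with all numbers invented purely for illustration:)

# If the premises hold, simulated minds swamp real ones.
real_civilisations  = 1
minds_per_history   = 100e9   # roughly, humans who have ever lived
simulations_per_civ = 1000    # premise: future folk run ancestor simulations

real_minds      = real_civilisations * minds_per_history
simulated_minds = real_civilisations * simulations_per_civ * minds_per_history

p_simulated = simulated_minds / (real_minds + simulated_minds)
print(f"P(you are simulated) = {p_simulated:.4f}")   # 0.9990 with these numbers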
Post edited at 15:52
 Rob Exile Ward 04 Dec 2014
In reply to The Lemming:

I don't like to be rude (well, tbh I don't really mind) but:

> We've stepped way beyond our remit as a creature.

WTF is that supposed to mean?

We don't have a 'remit', there is no goal, we're not heading anywhere - we just are.
 Rob Exile Ward 04 Dec 2014
In reply to theomoore: 'computers will continue to develop at an exponential rate'

Is that true? Memory storage and processor power may continue to develop, though perhaps not as fast as previously; but software doesn't. And that's what counts. A limited cr*p program running 1000 times faster is still a limited, cr*p program.
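
(As a yardstick for the hardware half of that claim: a 1000x speedup is only about ten doublings, roughly twenty years at the classic Moore's-law pace. A quick sanity check, just arithmetic:)

import math

speedup = 1000
doublings = math.log2(speedup)   # about 9.97 doublings
print(f"{doublings:.1f} doublings -> ~{2 * doublings:.0f} years "
      f"at one doubling every two years")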

 Theo Moore 04 Dec 2014
In reply to Skyfall:

I think that concerning software the idea is that:
(from the first premise) processing power, as you rightly qualified, will grow exponentially + (from the third premise) human experience can in principle be replicated artificially = human experience could, in the future, be replicated artificially.

i.e. 'software' could be developed which replicates human experience.

The argument is a fun one - it came up a few times in metaphysics seminars - the idea being that if you accept those 3 premises (which many people do, at least at first glance) then it seems like we're all just computer simulations... spooky.
