/ Robots

This topic has been archived, and won't accept reply postings.
The Ice Doctor - on 14 Jan 2017

Three questions:

1. Would you befriend a robot?

2. Let it drive you somewhere?

3. How do you hold a robot legally responsible?
Post edited at 14:03
Angrypenguin - on 14 Jan 2017
In reply to The Ice Doctor:

1) Depends how you define a robot. Robots of the current generation probably not. A hypothetical future robot that is indistinguishable from a human, yes.

2) Some of the self driving cars are already better than the average person, I would absolutely trust them.

3) An excellent question, and the reason that makers of autonomous cars are treading very carefully, even though the cars already have fewer accidents per mile than people.

I would suggest there might be an element of an intergenerational split on this one. I am still in my 20s (just). I talked to my mum about this very thing the other day and she was fairly opposed to trusting a self-driving car.
stp - on 14 Jan 2017
In reply to The Ice Doctor:

1. Meaningless - one can't be friends with something that's not sentient.

2. Depends on the stat record of the robot in question plus the ability to override.

3. A piece of machinery can't be held responsible for anything. Its makers are the responsible party.
markAut - on 14 Jan 2017
In reply to The Ice Doctor:

1. No, it's a tool.

2. Not on a road with other users unless a human was able to override it and take control.

3. You can't. It's up to the owner, programmer, user or manufacturer.

I work with robots, so may be biased.
Dax H - on 14 Jan 2017
In reply to The Ice Doctor:

1 Perhaps, if AI gets to the point where friendship is possible.

2 Yes and no. Yes: going on the number of accidents that the current self-driving tests have not had, I would have no problem trusting it, and it will only get better as time moves forward. No: I get both bored and travel sick as a passenger, so I always drive.

3 Don't know. At the moment any problems are the manufacturer's responsibility, but that might change as AI improves and, who knows, we might end up with actual intelligence rather than artificial.
summo on 14 Jan 2017
In reply to Angrypenguin:
> 2) Some of the self driving cars are already better than the average person, I would absolutely trust them.

What if they had to make a decision between swerving to avoid a pedestrian or a wild animal, which might then cause you to hit something else, i.e. oncoming traffic? I don't think AI is quite up to those decisions yet.
damhan-allaidh on 14 Jan 2017
In reply to The Ice Doctor:

1. Why not? As long as it was nice and had good conversation.

2. Probably.

3. Asimov's 3 (+Zeroth) Laws of Robotics
MG - on 14 Jan 2017
In reply to summo:

> What if they had to make a decision between swerving to avoid a pedestrian or a wild animal, which might then cause you to hit something else, i.e. oncoming traffic? I don't think AI is quite up to those decisions yet.

How would a human be better?
Lion Bakes on 14 Jan 2017
In reply to The Ice Doctor:

No, I'll just let the robot clean the house for now.

Dax H - on 14 Jan 2017
In reply to summo:

> What if they had to make a decision between swerving to avoid a pedestrian or a wild animal, which might then cause you to hit something else, i.e. oncoming traffic? I don't think AI is quite up to those decisions yet.

Whereas humans, on the other hand, can remain calm and rational in that split-second situation and can always be trusted to make the correct decision.

The chances are that the AI car will already be applying the brakes before the human has even registered that someone has stepped out in front of them.
Once everything is self-driving, the oncoming cars will have realised the problem and will be taking appropriate action as well.
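As a rough illustration of the reaction-time point above — with invented numbers (a commonly quoted ~1.5 s human reaction time against an assumed 0.1 s sensor latency, and a nominal dry-road deceleration), not measurements from any real system — the latency difference dominates the total stopping distance at town speeds:

```python
# Hypothetical sketch: stopping distance for a human driver vs. an automated
# system, where the only difference is reaction latency before braking starts.

def stopping_distance(speed_ms, reaction_s, decel_ms2=7.0):
    """Distance covered during the reaction time plus the braking distance.

    decel_ms2 ~7 m/s^2 is a rough dry-road figure; all numbers here are
    illustrative assumptions, not measured values.
    """
    thinking = speed_ms * reaction_s            # constant speed while reacting
    braking = speed_ms ** 2 / (2 * decel_ms2)   # v^2 / (2a)
    return thinking + braking

speed = 13.4  # roughly 30 mph, in m/s
human = stopping_distance(speed, reaction_s=1.5)   # typical human latency
robot = stopping_distance(speed, reaction_s=0.1)   # assumed sensor latency
print(f"human: {human:.1f} m, automated: {robot:.1f} m")
```

The braking distance is identical for both; only the "thinking" distance differs, which is exactly the margin Dax H is pointing at.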
wercat on 14 Jan 2017
In reply to The Ice Doctor:

I'd rather that cats were altered to make them talking driver-companions. I'd then befriend but perhaps not quite trust the animabot.

I'd be responsible through coming up with such an odd driver
Deviant - on 14 Jan 2017
In reply to The Ice Doctor:

> Three questions:

> 1. Would you befriend a robot?


Hell no, I'm not that lonely, yet! Lonely people have pets and I know of one old dear who is forever talking to her dog whilst taking it out "walkies". Then again, human beings already f**k inflatable dolls so perhaps it's not such a big step after all!


> 2. Let it drive you somewhere?


Only if it was guaranteed to be better than a woman driver !



> 3. How do you hold a robot legally responsible?


That's a tough one! I think artificial intelligence has a long way to go before insurance companies would consider covering a robot for its liability.

summo on 14 Jan 2017
In reply to MG:

> How would a human be better?

It's an AI conundrum: in an impossible-to-avoid impact situation, do you programme the AI to protect the driver's life, the pedestrian's, or other road users'? I'm not suggesting a pure human decision is better. Could AI be programmed sufficiently to almost read the body language showing that the person has realised their error, or that moment of eye-contact recognition, and is about to jump back onto the kerb, etc.?

When we drove up the A1 last week on the newest bit by Scotch Corner, the screen on the sat nav showed us as being off road, as it was as yet unmapped. Digital mapping of the UK would have to advance a little too.

There is another example: it is currently possible to programme a robot very specifically to carry out a finite, complex process or mathematical equation. But think about the situation where you're in a friend's house for the first time and they say help yourself to a cup of tea. Every kettle, tap and cupboard is different, but it takes the human brain no longer than usual to complete the task... the permutations for a robot are huge. True AI that can 100% control a car safely is years away.
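The conundrum summo raises can be phrased as a crude expected-harm calculation. This is a purely hypothetical sketch — the parties, weights and probabilities below are invented placeholders — and its point is that the arithmetic is trivial; the controversy is entirely about who gets to choose the weights:

```python
# Hypothetical sketch of the ethical-weighting problem for an unavoidable
# impact. The harm weights are arbitrary placeholders: setting them IS the
# ethical decision that summo says no one has settled.

HARM_WEIGHTS = {"pedestrian": 100, "occupant": 100, "oncoming_driver": 100,
                "large_animal": 20, "small_animal": 1}

def expected_harm(outcome_probs):
    """Sum over parties of (harm weight) x (probability of harming them)."""
    return sum(HARM_WEIGHTS[party] * p for party, p in outcome_probs.items())

# Each manoeuvre maps affected parties to an (assumed) probability of harm.
options = {
    "brake_straight": {"pedestrian": 0.3, "occupant": 0.05},
    "swerve_right":   {"occupant": 0.2, "oncoming_driver": 0.25},
}

best = min(options, key=lambda m: expected_harm(options[m]))
print(best)
```

Change any weight — say, value occupants above pedestrians — and the chosen manoeuvre can flip, which is why "do you protect the driver or the pedestrian?" is a policy question rather than a programming one.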
summo on 14 Jan 2017
In reply to Dax H:
> Where as humans on the other hand can remain calm and rational in that split second situation and can always be trusted to make the correct decision.

No, but we are responsible for our own decisions, be it crashing into a tree or stepping out into the road without looking.

> The chances are that the AI car will already be applying the brakes before the human has even registered that someone has stepped out in front of them.

But then, given the proximity of busy pavements in the UK to many roads, the car would never set off, as people are often putting one foot in the road to dodge around someone etc. What about cyclists and wild animals? Something like a rabbit or a dog is safer to hit than to risk a swerve, but what about bigger stuff: red deer, elk, etc.?

> Once everything is self driving the on coming cars will have realised the problem and will be taking the appropriate action as well.

It will work if roads are only for cars; then you could have cars streaming along 5 m apart, bumper to bumper, at 50 mph, all communicating with each other.
Post edited at 16:43
Dax H - on 14 Jan 2017
In reply to summo:

> No but we are responsible for our own decision, be it crashing into a tree, or stepping out in road without looking.

Or being run over by someone not paying attention or not reacting in time.

> But then, given the proximity of busy pavements in the UK to many roads, the car would never set off, as people are often putting one foot in the road to dodge around someone etc.

How do you recognise whether someone is stepping round or starting to cross? That isn't far outside the bounds of current AI.

> What about cyclists and wild animals? Something like a rabbit or a dog is safer to hit than to risk a swerve, but what about bigger stuff: red deer, elk, etc.?

Accidents are caused by people swerving for things they shouldn't. If a cat runs out, most humans' instinct is to stop or swerve without thought; the robot car can take into account speed, conditions and proximity to other vehicles and people in a fraction of a second and act accordingly.

> It will work if roads are only for cars; then you could have cars streaming along 5 m apart, bumper to bumper, at 50 mph, all communicating with each other.

This will probably happen in time. There is already a call for banning large vehicles in cities during the day, cycling infrastructure is getting better, and there are more and more bus lanes.

Chris Harris - on 14 Jan 2017
In reply to stp:

> 1. Meaningless - one can't be friends with something that's not sentient.

You've not met my mate Ian's missus.

stp - on 14 Jan 2017
In reply to Deviant:

> There again, human beings already f**k inflatable dolls so perhaps it's not such a big step after-all

I think they're already doing it with robots (sexbots) and some forecasts reckon by 2050 robot sex will be more common than human partner sex. Robophilia could be the norm in 50 years (though there's already a campaign to stop development of sexbots in the UK).

Perhaps a better and more realistic question 1 would be: would you have sex with a robot?

tom_in_edinburgh - on 15 Jan 2017
In reply to The Ice Doctor:

> 1. Would you befriend a robot?

Kids befriend Teddy Bears. It's certainly possible for people to feel an emotional bond to a robot. What can't happen until we can make conscious machines is for a robot to befriend a person.

> 2. Let it drive you somewhere?

Sure. Wouldn't think twice if I trusted the technology. People let autopilots fly planes all the time.

> 3. How do you hold a robot legally responsible?

Any way you like. Unless the robot is conscious it is just lawyers and politicians playing silly games to pretend the law has power over something completely inanimate. See https://en.wikipedia.org/wiki/Indiana_Pi_Bill

If we ever did make an electronic consciousness then it could most likely do all the things computer software can do, including save its current state as a backup, send itself across computer networks at nearly the speed of light, communicate with most of the computers in the world across the internet, have many copies of itself running at the same time and take advantage of buildings full of equipment (rather than being limited to what can fit inside a human head and be powered by digesting food).

You can try holding something responsible that can be restarted from where it left off if it is killed, feels no pain, and is thousands of times more intelligent than you, but it might be wiser not to piss it off.





The Ice Doctor - on 15 Jan 2017
In reply to stp:

Why do you need a sex bot if you can go to Pattaya, Amsterdam, or your nearest street corner?
Dax H - on 15 Jan 2017
In reply to The Ice Doctor:

> Why do you need a sex bot if you can go to Pattaya, Amsterdam, or your nearest street corner?

You obviously have not seen the working girls round here.
We have a tolerated zone in Leeds very close to my house.
You see the very occasional one that looks nice but most I wouldn't touch with yours let alone mine.

Skinny, open sores, snaggle toothed nastiness.

Would I shag a robot? Sure why not.
The wife and myself have a collection of toys that we sometimes play with alone or together as I suspect most people do.
I even have all the parts to make a fu##ing machine (oscillating dildo with variable depth and speed) when I get time to build it.
A robot is just another step.

The one caveat I am going to put on this is it would have to be a dumb robot, any signs of intelligence or sentience would feel like sexual slavery to me.
charliesdad - on 15 Jan 2017
In reply to Dax H:

I'm sure you're factually correct about the charms of the local working girls, but perhaps a little compassion is in order: few (if any) will be enthusiastic volunteers for this work, so a drug habit and health problems probably go with the territory.
Duncan Bourne - on 15 Jan 2017
In reply to The Ice Doctor:

1. Probably yes if it had something that gave it a semblance of character (think R2D2). Humans are very good at projecting feelings onto inanimate objects (fluffy toys etc.) let alone animate ones.

2. Currently no, but who knows in the future. If I became incapable of driving myself then a robot chauffeur might be preferable to buses or taxis.

3. A very good question. I suspect it would be something along the lines of what we have now with airlines and automatic pilots. So if your robot failed to keep you or the public safe, it would come down to deciding whether it was the fault of the manufacturer, the programmer, or anyone who might have tampered with it.
If you had truly sentient robots capable of making independent decisions, then it comes down to whether it is perceived as an individual with rights or not. Trial or decommission.
stp - on 15 Jan 2017
In reply to The Ice Doctor:

I'm sure there are lots of pros.

A few spring to mind.

1. You are a robophiliac
2. Hygiene - no risk of STDs
3. No exploitation of women
4. Consistency and availability of service
5. Cheaper - at least eventually.

From a business point of view - a company that hires out sexbots - there will be all the usual advantages of machines over people, not to mention avoiding the legal issues with prostitution. It would probably be a very lucrative business. After all, sex sells.
tom_in_edinburgh - on 15 Jan 2017
In reply to Duncan Bourne:

> 3. A very good question. I suspect it would be something along the lines of what we have now with airlines and automatic pilots. So if your robot failed to keep you or the public safe then it would be down to deciding if it was the manufacturers fault or the programmer, or anyone who might have tampered with it.

The problem is with AI learning algorithms, where a neural net is trained and evolves with stimulus over time. The programmer doesn't understand 'how it works' because he didn't design the network that solves the problem; it evolved. It should be tested before release, but there is no way to prove it will not do something completely unexpected, particularly if it keeps learning after it is deployed. The classic example is the Microsoft chatbot that 'learned' to swear and use racist language after lots of people swore at it once it was released.

To some extent if you buy a product based on a learning algorithm then you have to accept the risk of it doing something strange and unexpected occasionally as the price of its 'intelligence'.
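The point that the programmer writes the learning rule rather than the final behaviour can be shown with a toy example (a deliberately minimal sketch, nothing to do with Microsoft's actual chatbot): a perceptron learns logical OR, and its final weights come from the training data, not from anything anyone wrote down.

```python
# Minimal illustration: the code below specifies only the *update rule*.
# The weights that end up driving the behaviour are produced by the data;
# feed the same rule different data and you get different behaviour.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
w = [0.0, 0.0]   # weights: start knowing nothing
b = 0.0          # bias

for _ in range(20):                       # a few passes over the examples
    for (x1, x2), target in data:
        pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - pred               # classic perceptron update
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

# Nobody hand-wrote these weights; they emerged from training.
outputs = [1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data]
print(outputs)
```

Scale this up to millions of weights trained on live user input and you get exactly the situation described: tested behaviour at release, but no proof about what further learning will produce.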

Duncan Bourne - on 15 Jan 2017
In reply to tom_in_edinburgh:

> To some extent if you buy a product based on a learning algorithm then you have to accept the risk of it doing something strange and unexpected occasionally as the price of its 'intelligence'.

I think this neatly underlines the problem. Where's Asimov when you need him?
Lion Bakes on 15 Jan 2017
In reply to The Ice Doctor:

Why would you need a robot to "drive" you somewhere? Surely the vehicle would just do it itself, and it's unlikely that people will own their own vehicles in the future. An autonomous vehicle will just turn up and, when the journey is done, go off and take someone else somewhere, 24 hours a day. Taxi drivers will be redundant.
Duncan Bourne - on 15 Jan 2017
In reply to Lion Bakes:

Carry your bags? I am thinking here of the elderly folk who use taxis to get to their local supermarket.
Lion Bakes on 15 Jan 2017
In reply to Duncan Bourne:

> Carry your bags? I am thinking here of the elderly folk who use taxis to get to their local supermarket.

Why would we need supermarkets in the future? Why would we travel to get food if we have poor mobility and strength that impacts our ability to carry bags? In this future AI world surely food distribution would be far more refined than the supermarket model?
Stefan Jacobsen - on 15 Jan 2017
In reply to The Ice Doctor:

> Three questions:

> 1. Would you befriend a robot?
Would robots befriend each other?

> 2. Let it drive you somewhere?
Yes, Copenhagen Metro is automatic, and occasionally I let it drive me.

> 3. How do you hold a robot legally responsible?
You don't. You hold the owner or the manufacturer responsible.

john arran - on 15 Jan 2017
In reply to Duncan Bourne:

> 1. Probably yes if it had something that gave it a semblance of character (think R2D2).

Let me get this straight: You're saying you want to have sex with R2D2?

;-)
Duncan Bourne - on 15 Jan 2017
In reply to Lion Bakes:

Maybe folk would want to travel to get food. Also, you have to travel to go on holiday.
Duncan Bourne - on 15 Jan 2017
In reply to john arran:

You mean you don't? Weirdo!
Lion Bakes on 15 Jan 2017
In reply to john arran:

> Let me get this straight: You're saying you want to have sex with R2D2?

> ;-)

Best quote of the night. Just imagining it is so funny.
pec on 16 Jan 2017
In reply to john arran:

> Let me get this straight: You're saying you want to have sex with R2D2?

Check this out

https://www.youtube.com/watch?v=FvnO3On4_fo


jkarran - on 16 Jan 2017
In reply to The Ice Doctor:

> 1. Would you befriend a robot?
Were it worthwhile, yeah, why not

> 2. Let it drive you somewhere?
Yes. My only reservation is hastening the almost inevitable restriction of direct human driving, something I still enjoy from time to time.

> 3. How do you hold a robot legally responsible?
We can't, it's meaningless.
jk
Toerag - on 17 Jan 2017
In reply to Stefan Jacobsen:

> Would robots befriend each other?

Someone got a couple of the new 'digital assistants' (Alexa, Siri etc.) to fall in love with each other.
Stefan Jacobsen - on 17 Jan 2017
In reply to Toerag:

Platonic love, I presume.
