ChaoticMusings

TOPIC: AI willing to MURDER humans, experiment proves.


Getting Gobby

Posts: 290
Date:
RE: AI willing to MURDER humans, experiment proves.


Anonymous wrote:

After robots were mooted during the rise in popularity of science fiction in the 1940s there was a recurring theme of robots being created and then rising up and destroying their creator.

 

It seems that the same plot is now being revived for AI.

 

Asimov introduced 3 laws of robotics to deal with it.

 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

 I didn't realise Asimov wrote the original "I, Robot", which I assume the film was based on. I wasn't keen on it as I'm not really into science fiction (despite loving Black Mirror!).

But yes, if one thinks carefully there are certainly loopholes in those "laws".

Totally off topic, but I loved "I Am Legend" (also starring the now-cancelled Will Smith). There was a real eerie quality to the deserted city, and mannequins are sinister when filmed correctly. When he realised one had slightly moved, you knew he wasn't as isolated and safe as he had always thought. I cried when his dog died, I admit it; I'm always the same with animals in films, and he had been a very loyal friend. Just because of that I can't watch the film again.



__________________

"The past is a foreign country.

They do things differently there."

 

 



Getting Gobby

Posts: 418
Date:

Red Okktober wrote:
You are far more confident than me that AI won't go rogue. We are only at its very beginning in the modern world (putting aside all the sci-fi related stuff from the 20th century). If you compare its development with that of weapons, AI isn't even at the catapult or bow-and-arrow stage yet, and it will evolve many times faster than weapons have done.

You say that the inputs will be by humans, but what's to stop a rogue human (terrorist, mad professor etc.) from creating rogue AI? We hear about the potential threat of terrorists getting access to nuclear or biological weapons, but not so much about terrorists developing their own AI. It doesn't have to be AI robots zapping humans with ray guns, although it could be - what if rogue AI accessed the NHS or transport systems? People given wrong medications, pacemakers stopped, planes falling out of the sky?

The biggest immediate threat from AI is children seeking advice from chatbots and viewing them as friends because they are 'nice' to them. These children are our future. In 20, 30, 40 years' time, politicians, lawyers, surgeons, policemen, soldiers etc. will have interacted with AI at some stage. What if that AI was rogue, or even a little bit rogue?


 It might go rogue, but that would require it gaining sentience, as I mentioned before, and coming up with an ethical code for itself beyond what human beings have inputted. That is still the stuff of sci-fi at the moment.

We could make the same arguments for any powerful technology which has the potential to be used as a weapon. That does not mean we stop its use or development, but we have appropriate regulations or safeguards in place. The obvious difficulty is that the technology is developing so rapidly that the law-making process cannot keep up, and it relies on the ethical standards of businesses, which can conflict with the profit motive.

Children seeking advice from chatbots could be a good thing. Talking therapies are very good at treating certain types of mental health conditions, and mental health services are very stretched at the moment. Sometimes it is good to get intrusive thoughts out of your mind and onto paper or into a written form to help manage them. You could use a chatbot effectively for that. However, there could also be downsides, as you say, including a child (or an adult for that matter) thinking they have a real emotional connection with it rather than the facsimile of a relationship. In addition, it wouldn't help the child develop internal coping mechanisms in the way a good therapist would, but instead relies on something external (the chatbot in this instance).



__________________

Let mercy come and wash away what I've done 



Getting Gobby

Posts: 310
Date:

Barksdale wrote:


 It might go rogue, but that would require it gaining sentience, as I mentioned before, and coming up with an ethical code for itself beyond what human beings have inputted. That is still the stuff of sci-fi at the moment.

We could make the same arguments for any powerful technology which has the potential to be used as a weapon. That does not mean we stop its use or development, but we have appropriate regulations or safeguards in place. The obvious difficulty is that the technology is developing so rapidly that the law-making process cannot keep up, and it relies on the ethical standards of businesses, which can conflict with the profit motive.

Children seeking advice from chatbots could be a good thing. Talking therapies are very good at treating certain types of mental health conditions, and mental health services are very stretched at the moment. Sometimes it is good to get intrusive thoughts out of your mind and onto paper or into a written form to help manage them. You could use a chatbot effectively for that. However, there could also be downsides, as you say, including a child (or an adult for that matter) thinking they have a real emotional connection with it rather than the facsimile of a relationship. In addition, it wouldn't help the child develop internal coping mechanisms in the way a good therapist would, but instead relies on something external (the chatbot in this instance).


 We don't know yet whether future versions of AI will be capable of evolving themselves or not. So yeah, sci-fi stuff at this moment in time, and as things stand... but...

Here are some examples of AI being dangerous to children, to the extent that 'child-safe' AI is being seen as an urgent requirement.

Amazon has updated its Alexa voice assistant after it "challenged" a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.

The suggestion came after the girl asked Alexa for a "challenge to do".

"Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs," the smart speaker said.

https://www.bbc.co.uk/news/technology-59810383

 

"Snapchat's My AI gave adult researchers posing as a 13 year old girl tips on how to lose her virginity to a 31 year old"

https://www.cam.ac.uk/research/news/ai-chatbots-have-shown-they-have-an-empathy-gap-that-children-are-likely-to-miss

 

This is why I don't, and won't ever, view AI as being infallible. It will make mistakes, even without rogue input, and some of those mistakes could be catastrophic.

 



__________________


Getting Gobby

Posts: 290
Date:

The above examples that Red has highlighted are my exact problem with AI. Like any computer programming, there can be glitches in the system. The story about Alexa instructing a child to play with a plug socket is beyond disturbing.
Or, if the system is not corrupt itself, then the person controlling it can be corrupt and have malicious intent. You could say the same about biological weapons, but that doesn't cancel out yet another extra threat to human life.

The experiment I cited was apparently distorted, and I accept that, but there have been many involved with AI who have voiced concerns. It is definitely becoming far more advanced far more quickly than its creators thought possible, and they have far more knowledge about this than the average Joe.

To err is human, and if we are imperfect creators playing God, our technological offspring will be flawed in some way too. I've already admitted I may be paranoid in this regard, but my instinct is not to trust it. I hope to be proved wrong.



__________________

"The past is a foreign country.

They do things differently there."

 

 



Getting Gobby

Posts: 418
Date:

Red Okktober wrote:
We don't know yet whether future versions of AI will be capable of evolving themselves or not. So yeah, sci-fi stuff at this moment in time, and as things stand... but...

Here are some examples of AI being dangerous to children, to the extent that 'child-safe' AI is being seen as an urgent requirement.

Amazon has updated its Alexa voice assistant after it "challenged" a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.

The suggestion came after the girl asked Alexa for a "challenge to do".

"Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs," the smart speaker said.

https://www.bbc.co.uk/news/technology-59810383

 

"Snapchat's My AI gave adult researchers posing as a 13 year old girl tips on how to lose her virginity to a 31 year old"

https://www.cam.ac.uk/research/news/ai-chatbots-have-shown-they-have-an-empathy-gap-that-children-are-likely-to-miss

 

This is why I don't, and won't ever, view AI as being infallible. It will make mistakes, even without rogue input, and some of those mistakes could be catastrophic.

 


 Valid concerns, but I don't think most people would take the position that AI should be treated as infallible. AI can have "hallucinations", where it comes up with an incorrect answer or flawed conclusions based on the available dataset. The Amazon Alexa incident is one such example. The dataset was information extracted from the internet. At the time there was a viral TikTok trend going around involving a challenge of touching a penny to the exposed prongs of a charger, which is why the AI recommended it. Clearly dangerous, and not something that a human being would recommend, as AI doesn't have the common sense we do. There wasn't another such incident, as Amazon changed its algorithm following it.

However, these limitations can be overcome with better moderation, better datasets and so on. We definitely need strong safeguards when it comes to kids, though.



__________________

Let mercy come and wash away what I've done 



Getting Gobby

Posts: 310
Date:

We will have to agree to disagree on this one, Avon. I can see the huge benefits that AI will bring, and no doubt it will be viewed as the best thing since sliced bread as long as things are going right.

I feel it will be a ticking time bomb under the surface though, for when something goes wrong. That it is artificial, i.e. man-made, synthetic, means it can only be as good as those who provide the inputs, and because of how fast it will evolve, I'm not sure that the human inputters will be able to keep up with it in a 100% safe way, despite any number of safeguards.

It was to be expected that Amazon changed the algorithm, but it's that first mistake, before the algorithm can be changed, which is the problem. It could be something far more serious than telling a kid to touch a live plug. If this mistake was caused by a viral TikTok trend, and Alexa picked up on it, who's to say that AI won't pick up on other things that it shouldn't?



__________________
Anonymous

Date:

Red Okktober wrote:

We will have to agree to disagree on this one, Avon. I can see the huge benefits that AI will bring, and no doubt it will be viewed as the best thing since sliced bread as long as things are going right.

I feel it will be a ticking time bomb under the surface though, for when something goes wrong. That it is artificial, i.e. man-made, synthetic, means it can only be as good as those who provide the inputs, and because of how fast it will evolve, I'm not sure that the human inputters will be able to keep up with it in a 100% safe way, despite any number of safeguards.

It was to be expected that Amazon changed the algorithm, but it's that first mistake, before the algorithm can be changed, which is the problem. It could be something far more serious than telling a kid to touch a live plug. If this mistake was caused by a viral TikTok trend, and Alexa picked up on it, who's to say that AI won't pick up on other things that it shouldn't?


 No doubt things can and will go wrong, just like with any kind of technology. But of course this is all a far cry from the fears raised in the original post, which is that AI will develop consciousness and self-awareness, and rise up to deliberately destroy its makers.



__________________
Vam


Musing at the Chaos

Posts: 871
Date:

Red Okktober wrote:

We will have to agree to disagree on this one, Avon. I can see the huge benefits that AI will bring, and no doubt it will be viewed as the best thing since sliced bread as long as things are going right.

I feel it will be a ticking time bomb under the surface though, for when something goes wrong. That it is artificial, i.e. man-made, synthetic, means it can only be as good as those who provide the inputs, and because of how fast it will evolve, I'm not sure that the human inputters will be able to keep up with it in a 100% safe way, despite any number of safeguards.

It was to be expected that Amazon changed the algorithm, but it's that first mistake, before the algorithm can be changed, which is the problem. It could be something far more serious than telling a kid to touch a live plug. If this mistake was caused by a viral TikTok trend, and Alexa picked up on it, who's to say that AI won't pick up on other things that it shouldn't?


 In this and your other posts, you’ve highlighted the potential peril AI could pose far better than I ever could. 
 
As I’ve often said, like the internet, AI will almost certainly prove to be both a blessing and a curse. 

It’s the awesome wonder of its technology that makes it so scary. And after your comments, I reckon Fluffy could soon be building an underground bunker, complete with a Faraday cage, just to be sure.



__________________

Ego dulls the pain of stupidity.



Getting Gobby

Posts: 290
Date:

Vam wrote:
Red Okktober wrote:

We will have to agree to disagree on this one, Avon. I can see the huge benefits that AI will bring, and no doubt it will be viewed as the best thing since sliced bread as long as things are going right.

I feel it will be a ticking time bomb under the surface though, for when something goes wrong. That it is artificial, i.e. man-made, synthetic, means it can only be as good as those who provide the inputs, and because of how fast it will evolve, I'm not sure that the human inputters will be able to keep up with it in a 100% safe way, despite any number of safeguards.

It was to be expected that Amazon changed the algorithm, but it's that first mistake, before the algorithm can be changed, which is the problem. It could be something far more serious than telling a kid to touch a live plug. If this mistake was caused by a viral TikTok trend, and Alexa picked up on it, who's to say that AI won't pick up on other things that it shouldn't?


 In this and your other posts, you’ve highlighted the potential peril AI could pose far better than I ever could. 
 
As I’ve often said, like the internet, AI will almost certainly prove to be both a blessing and a curse. 

It’s the awesome wonder of its technology that makes it so scary. And after your comments, I reckon Fluffy could soon be building an underground bunker, complete with a Faraday cage, just to be sure.


 One can never be too careful! Fret not Vam, there will be room in the bunker for you!



__________________

"The past is a foreign country.

They do things differently there."

 

 
