After robots became a staple of science fiction during its rise in popularity in the 1940s, a recurring theme emerged of robots being created and then rising up to destroy their creators.
It seems that the same plot is now being revived for AI.
Asimov introduced his Three Laws of Robotics to deal with this fear:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
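Read as a decision procedure, the Three Laws form a strict priority ordering: each law only applies if the laws above it are satisfied. A toy sketch of that ordering (purely illustrative; the Action fields are hypothetical, and no real system encodes ethics this way):

```python
# Toy sketch of Asimov's Three Laws as a strict priority ordering.
# Entirely illustrative -- the loopholes people find in the "laws"
# come from how slippery each of these predicates is in practice.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False           # direct harm (First Law)
    inaction_allows_harm: bool = False  # harm through inaction (First Law)
    disobeys_order: bool = False        # Second Law
    order_would_harm_human: bool = False
    endangers_self: bool = False        # Third Law
    prevents_human_harm: bool = False   # overrides self-preservation

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by action or inaction.
    if action.harms_human or action.inaction_allows_harm:
        return False
    # Second Law: obey orders, unless obeying would break the First Law.
    if action.disobeys_order and not action.order_would_harm_human:
        return False
    # Third Law: self-preservation, unless it conflicts with the laws above.
    if action.endangers_self and not action.prevents_human_harm:
        return False
    return True
```

Under this sketch, refusing an order that would harm a human is permitted, while mere self-endangerment is not; the endless sci-fi plots come from disputes over what counts as "harm" in the first place.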
I didn't realise Asimov wrote the original "I, Robot", which I assume the film was based on. I wasn't keen on it as I'm not really into science fiction (despite loving Black Mirror!).
But yes, if one thinks carefully there are certainly loopholes in those "laws".
Totally off topic, but I loved "I Am Legend" (also starring the now cancelled Will Smith). There was a real eerie quality to the deserted city, and mannequins are sinister when filmed correctly. When he realised one had slightly moved, you knew he wasn't as isolated and safe as he had always thought. I cried when his dog died; I admit it, I'm always the same with animals in films, and he had been a very loyal friend. Just because of that I can't watch the film again.
__________________
You're probably dancing with your blonde hair
Falling like ribbons on your shoulder, just like we always saw
You are far more confident than me that AI won't go rogue. We are only at its very beginning in the modern world (putting aside all the sci-fi related stuff from the 20th century). If you compare its development with that of weapons, AI isn't even at the catapult or bow-and-arrow stage yet, and it will evolve many times faster than weapons have done.
You say that the inputs will be by humans, but what's to stop a rogue human (terrorist, mad professor etc.) from creating rogue AI? We hear about the potential threat of terrorists getting access to nuclear or biological weapons, but not so much about terrorists developing their own AI. It doesn't have to be AI robots zapping humans with ray guns, although it could be - what if rogue AI accessed the NHS or transport systems? People given wrong medications, pacemakers stopped, planes falling out of the sky?
The biggest immediate threat from AI is children seeking advice from chatbots and viewing them as friends because they are 'nice' to them. These children are our future. In 20, 30, 40 years time, politicians, lawyers, surgeons, policemen, soldiers etc would have interacted with AI at some stage. What if that AI was rogue, or even a little bit rogue?
It might go rogue, but that would require it gaining sentience, as I mentioned before, and coming up with an ethical code for itself beyond what human beings have inputted. That is still the stuff of sci-fi at the moment.
We could make the same arguments for any powerful technology which has the potential to be used as a weapon. That does not mean we stop its use or development, but we have appropriate regulations or safeguards in place. The obvious difficulty is that the technology is developing so rapidly that the law-making process cannot keep up, and it relies on the ethical standards of businesses, which can conflict with the profit motive.
Children seeking advice from chatbots could be a good thing. Talking therapies are very good at treating certain types of mental health conditions, and mental health services are very stretched at the moment. Sometimes it is good to get intrusive thoughts out of your mind and onto paper or a written form to help manage them. You could use a chatbot effectively for that. However, there could also be downsides, as you say, including a child (or an adult for that matter) thinking they have a real emotional connection with it rather than the facsimile of a relationship. In addition, it wouldn't help the child develop internal coping mechanisms in the way a good therapist would, but instead relies on something external (so the chatbot, in this instance).
We don't know yet if future versions of AI will be capable of self-evolvement or not. So yeah, sci fi stuff at this moment in time and as things stand...but...
Here are some examples of AI being dangerous to children, to the extent that 'child-safe' AI is being seen as an urgent requirement.
Amazon has updated its Alexa voice assistant after it "challenged" a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.
The suggestion came after the girl asked Alexa for a "challenge to do".
"Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs," the smart speaker said.
https://www.bbc.co.uk/news/technology-59810383
"Snapchat's My AI gave adult researchers posing as a 13 year old girl tips on how to lose her virginity to a 31 year old"
This is why I don't, and won't ever, view AI as being infallible. It will make mistakes, even without rogue input, and some of those mistakes could be catastrophic.
The above examples that Red has highlighted are my exact problem with AI. Like any computer programming, there can be glitches in the system. The story about Alexa instructing a child to play with a plug socket is beyond disturbing. Or, if the system is not corrupt itself, then the person controlling it can be corrupt and have malicious intent. You could say the same about biological weapons, but that doesn't cancel out yet another extra threat to human life.
Even though the experiment I cited was apparently distorted, I accept that many of those involved with AI have voiced concerns. It is definitely becoming far more advanced far more quickly than its creators thought possible, and they have far more knowledge about this than the average Joe.
To err is human and if we are imperfect creators playing God our technological offspring will be flawed in some way too. I've already admitted I may be paranoid in this regard but my instinct is not to trust it. I hope to be proved wrong.
Valid concerns, but I don't think most people would take the position that AI should be treated as infallible. AI can have "hallucinations", where it comes up with an incorrect answer based on the available dataset, or flawed conclusions/answers. The Amazon Alexa incident is one such example. The dataset was information extracted from the internet. At the time there was a viral TikTok trend going around involving a challenge of touching a penny to the exposed prongs of a charger, which is why the AI recommended it. Clearly dangerous, and not something that a human being would recommend, as AI doesn't have the common sense we do. There wasn't another such incident, as Amazon changed their algorithm following it.
However, these limitations can be overcome with better moderation and datasets etc. We definitely need strong safeguards when it comes to kids however.
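As a crude illustration of what "better moderation" could mean at its simplest, here is a last-line output filter that vets a chatbot's reply against a blocklist before a child sees it. This is entirely hypothetical - the pattern list, fallback message, and function are invented for the sketch, and real systems use trained safety classifiers rather than keyword lists:

```python
# Hypothetical last-line safety filter for a child-facing chatbot.
# A keyword blocklist is the crudest possible approach, shown only to
# illustrate the idea of checking output before it is displayed.

UNSAFE_PATTERNS = [
    "exposed prongs",   # e.g. the penny-and-plug "challenge"
    "wall outlet",
]

SAFE_FALLBACK = "Sorry, I can't help with that one. How about a riddle instead?"

def moderate_reply(reply: str) -> str:
    """Return the reply unchanged, or a safe fallback if it trips the blocklist."""
    lowered = reply.lower()
    if any(pattern in lowered for pattern in UNSAFE_PATTERNS):
        return SAFE_FALLBACK
    return reply
```

The weakness is obvious: the filter only catches phrasings someone thought to list in advance, which is exactly why the first mistake slips through before the blocklist is updated.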
We will have to agree to disagree on this one, Avon. I can see the huge benefits that AI will bring, and no doubt it will be viewed as the best thing since sliced bread as long as things are going right.
I feel it will be a ticking time bomb under the surface though for when something goes wrong. That it is artificial, ie man-made, synthetic, means it can only be as good as those who provide the inputs, and because of how fast it will evolve, I'm not sure that the human inputters will be able to keep up with it in a 100% safe way despite any number of safeguards.
It was to be expected that Amazon changed the algorithm, but it's that first mistake, before the algorithm can be changed, which is the problem. It could be something far more serious than telling a kid to touch a live plug. If this mistake was caused by a viral TikTok trend, and Alexa picked up on it, who's to say that AI won't pick up on other things that it shouldn't?
No doubt things can and will go wrong, just like with any kind of technology. But of course this is all a far cry from the fears raised in the original post, which is that AI will develop consciousness and self-awareness, and rise up to deliberately destroy its makers.
In this and your other posts, you’ve highlighted the potential peril AI could pose far better than I ever could.
As I’ve often said, like the internet, AI will almost certainly prove to be both a blessing and a curse.
It’s the awesome wonder of its technology that makes it so scary. And after your comments, I reckon Fluffy could soon be building an underground bunker - complete with a Faraday cage, just to be sure!
__________________
No amount of evidence will ever persuade an idiot.
One can never be too careful! Fret not Vam, there will be room in the bunker for you!
-- Edited by Vam on Thursday 14th of August 2025 10:24:18 AM
I agree with a lot of what he says - especially about the average man in the street not understanding what's heading our way - it will be unrecognisable from what it is now - chatbots, Google searches, AI-generated pics and fake news etc. will soon become the equivalent of ancient cave paintings, but in far, far quicker time.
Many people are assuming that humans will always be the master of AI, but that seems to be based on all governments collaborating to keep AI in check, and that would mean a war-free existence - when has that ever happened? In the same way that you now have rogue nations trying to develop their own nuclear programmes for nefarious purposes, you will have rogue nations and organisations going their own way with AI and not 'sticking to the rules', and as Hinton says, AI will be far more intelligent and far more powerful than humans.
What could possibly go wrong?
It’s the predicted timeline that freaked me out! No time is a good time for what he’s anticipating will happen, obviously. But humans could be ‘toast’ within ‘5 to 20 years’? Seriously??
The only glimmer of hope is what he said at around 2 minutes in: That all countries would be in the same boat, so it wouldn't be in a rogue country's interests to develop/programme their AI in order to establish ‘dominance’, because no country would want AI to ‘take over’.
I think it's near impossible that all countries will collaborate 100% of the time. If advanced AI was available now, you would think it would currently be playing a part in some form in the Israel/Palestine and Ukraine/Russia conflicts. But to what extent?
If the threat of AI is so great that it is able to prevent war, then a side effect of that would be exploding population numbers - added to which, AI will allow people to live longer with all the new cures and treatments. Imagine if AI is able to cure cancer and prevent heart disease. Earth will remain the same size, so where will all these extra people live? Space colonies?
It's not hard to imagine that in the future people may be limited in the number of children they can have, or maybe selected couples only. Or that there are very few sick people. Maybe people will have to be euthanised at a certain age to make way for others? Perhaps all those old sci-fi books and movies weren't so far off the truth?
I honestly wouldn’t like to guess the answer to that, Red. If there even is an answer in these relatively early stages of AI development.
You paint a grim picture, but I guess the optimist in me still believes natural selection will continue to take its course, and people’s life cycles will end as a result of the natural order of things.
That’s infinitely preferable to me than imagining rampaging mobs of automaton death squads, randomly singling people out and saying the words “Sorry. Computer says ‘No’…”
Predictably, not everyone is buying into the doom & gloom….👇🏻
“I just refuse to believe that humans are going to be obsolete. It just seems like it’s an absurd concept,”
Life expectancy increases anyway, but AI is going to accelerate that. So while AI is preventing wars and coming up with new antibiotics and cures for various diseases - it had better come up with cures for things that affect the elderly like dementia, lack of mobility, incontinence etc, otherwise it will create a society of bedridden OAPs who require round the clock care.
The retirement age will go up along with life expectancy, but AI will be doing a lot of the jobs that humans do, so I'm not sure how that is going to work out.
Where will these extra people live, and what will they eat? We already have famine in the world. Perhaps synthetic food will become the norm, so instead of tucking into a Sunday roast, you will inject your nutrients or take a tablet - then go off to work as a carer, until it's your turn to be cared for. But what if AI robots become carers and human ones become obsolete? What then?
If I was writing a sci-fi book about the end of the human race, the first few pages would be based on what is actually happening now.
Re: AI willing to MURDER humans, experiment proves.
"Willing" is not the appropriate term nor is "Murder". A more appropriate use of words would be: AI given executive powers can result in the killing of many humans as a direct consequence of those executive powers.
Thinking is an unnecessary speculation when it comes to AI decisions.
Red - “If I was writing a sci-fi book about the end of the human race, the first few pages would be based on what is actually happening now.”
Truth is often stranger than fiction. Buckle up! 👇🏻
(seen on X) - Scientists in China are developing a ‘pregnancy robot’ capable of carrying a baby to term and giving birth. The humanoid will be equipped with an artificial womb that receives nutrients through a hose and will replicate the entire pregnancy process from conception to delivery. Society is destroying itself via AI.
AI with maternal instincts:
a) protective towards something (its "baby").
b) all-out aggression against anything it sees as a threat to its "baby" - which includes killing what it identifies as a threat.
Here we are talking about AI controlled automatons, AI killer drones, AI controlled "defence" systems. AI controlled trucks, AI controlled planes, AI controlled ships, AI controlled nuclear submarines ...