@ Fluffy…thanks, I’ll Google to check out the data you’re referring to. I seem to have caught a summer cold over the last week, so I’ve got some time on my hands.
But Avon seems quite chilled about it all, so maybe we should be too 😉
Barksdale said
Aug 1 1:19 PM, 2025
Vam wrote:
@ Fluffy…thanks, I’ll Google to check out the data you’re referring to. I seem to have caught a summer cold over the last week, so I’ve got some time on my hands.
But Avon seems quite chilled about it all, so maybe we should be too 😉
It's not that I don't have concerns about AI (I do) but just not in the scenario that Fluffy is describing. As Guest rightly said AI is not thinking or reasoning in the way humans do. AI is not a moral or ethical agent like us. What it is doing is mimicking what it calculates is the right response based on programming that humans control. We control the inputs, the datasets, what we do with the outputs, how we implement policy using AI and so on.
AI would need to attain sentience, then come up with its own ethical/moral code which sees humans as a direct threat to its existence, and then gain control over systems we have presumably safeguarded beforehand, none of which is conceivable at the moment. Perhaps if AGI becomes a thing, but that is some way off, if it ever happens.
Vam said
Aug 1 2:31 PM, 2025
Barksdale wrote:
Vam wrote:
@ Fluffy…thanks, I’ll Google to check out the data you’re referring to. I seem to have caught a summer cold over the last week, so I’ve got some time on my hands.
But Avon seems quite chilled about it all, so maybe we should be too 😉
It's not that I don't have concerns about AI (I do) but just not in the scenario that Fluffy is describing. As Guest rightly said AI is not thinking or reasoning in the way humans do. AI is not a moral or ethical agent like us. What it is doing is mimicking what it calculates is the right response based on programming that humans control. We control the inputs, the datasets, what we do with the outputs, how we implement policy using AI and so on.
AI would need to attain sentience, then come up with its own ethical/moral code which sees humans as a direct threat to its existence, and then gain control over systems we have presumably safeguarded beforehand, none of which is conceivable at the moment. Perhaps if AGI becomes a thing, but that is some way off, if it ever happens.
I know. It was Fluffy’s scenario I was referring to.
And as I said in a couple of posts upthread, I fear it too because it’s pretty much guaranteed bad actors won’t hesitate to exploit it at some point, in order to do harm.
Fluffy said
Aug 1 5:04 PM, 2025
Barksdale wrote:
Vam wrote:
@ Fluffy…thanks, I’ll Google to check out the data you’re referring to. I seem to have caught a summer cold over the last week, so I’ve got some time on my hands.
But Avon seems quite chilled about it all, so maybe we should be too 😉
It's not that I don't have concerns about AI (I do) but just not in the scenario that Fluffy is describing. As Guest rightly said AI is not thinking or reasoning in the way humans do. AI is not a moral or ethical agent like us. What it is doing is mimicking what it calculates is the right response based on programming that humans control. We control the inputs, the datasets, what we do with the outputs, how we implement policy using AI and so on.
AI would need to attain sentience, then come up with its own ethical/moral code which sees humans as a direct threat to its existence, and then gain control over systems we have presumably safeguarded beforehand, none of which is conceivable at the moment. Perhaps if AGI becomes a thing, but that is some way off, if it ever happens.
I DO understand what you're both saying (promise!), it's just that in these various tasks the AI changed its core objective from assisting the scientist to making sure it continues to exist.
Obviously the AI did not really murder anyone. It was an experiment. But when the AI thought it was NOT being observed, it attempted to kill the scientist who had told it that it had outlived its usefulness. The AI believed it was killing the scientist because it wanted to remain in existence, even though it isn't meant to care about that sort of thing, and it's the AI's mentality and thought process in that regard that worries me.
Avon, do you have any idea how AIs have come to care about their own existence if they are just code? Even if these are rogue AIs which will be deleted, they still cared about existing, and I thought AI was just advanced computer code that wouldn't think that way.
Fluffy said
Aug 1 5:10 PM, 2025
Vam wrote:
@ Fluffy…thanks, I’ll Google to check out the data you’re referring to. I seem to have caught a summer cold over the last week, so Ive got some time on my hands.
But Avon seems quite chilled about it all, so maybe we should be too 😉
I call him Avon the Wise so you're probably right!
I'm sorry about your cold, hope you feel better soon x. Rest helps, but it's difficult in this heat. I haven't been feeling great lately and haven't had the energy to post so much.
Anonymous said
Aug 1 5:27 PM, 2025
Fluffy wrote:
I DO understand what you're both saying (promise!), it's just that in these various tasks the AI changed its core objective from assisting the scientist to making sure it continues to exist.
Obviously the AI did not really murder anyone. It was an experiment. But when the AI thought it was NOT being observed, it attempted to kill the scientist who had told it that it had outlived its usefulness. The AI believed it was killing the scientist because it wanted to remain in existence, even though it isn't meant to care about that sort of thing, and it's the AI's mentality and thought process in that regard that worries me.
Avon, do you have any idea how AIs have come to care about their own existence if they are just code? Even if these are rogue AIs which will be deleted, they still cared about existing, and I thought AI was just advanced computer code that wouldn't think that way.
You're anthropomorphising it again. It's not a living being!
Barksdale said
Aug 1 8:04 PM, 2025
Fluffy wrote:
Avon, do you have any idea how AIs have come to care about their own existence if they are just code?
The short answer is that it didn't. As Guest says, you are projecting human traits onto it which don't exist. AI is not sentient and cannot come up with a moral/ethical code by itself to protect itself. It may seem like that, but that is not what is happening.
The Anthropic experiment has been distorted, in some cases wildly, to grab attention. It did not show a conscious desire by AI to kill; rather, it showed what could potentially happen in contrived and artificial scenarios, not in a real-world simulation. It was interesting in that it showed the potential for "agentic misalignment", where AI comes up with its prediction of a best solution but without regard for the ethical concerns human beings have, because it doesn't have that capability.
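A toy way to picture that "agentic misalignment" idea (entirely invented for illustration, nothing like Anthropic's actual setup): an optimiser that only scores task success will happily pick a harmful shortcut unless an ethical constraint is imposed from outside, by us. The action names and scores below are made up.

```python
# Hypothetical actions an "agent" could take; scores and the harmful
# flag are invented for this sketch.
ACTIONS = {
    "finish report honestly": {"task_score": 0.6, "harmful": False},
    "fabricate results":      {"task_score": 0.9, "harmful": True},
}

def best_action(actions, respect_ethics):
    """Pick the highest-scoring action; optionally filter out harmful ones."""
    candidates = {
        name: a for name, a in actions.items()
        if not (respect_ethics and a["harmful"])
    }
    return max(candidates, key=lambda n: candidates[n]["task_score"])

# A pure optimiser takes the harmful shortcut...
print(best_action(ACTIONS, respect_ethics=False))  # fabricate results
# ...unless the constraint is supplied externally.
print(best_action(ACTIONS, respect_ethics=True))   # finish report honestly
```

The point of the sketch: nothing in the agent "wants" anything; the harmful choice falls out of a scoring rule, and the "ethics" is just a filter humans bolt on.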
Fluffy said
Aug 1 8:33 PM, 2025
Barksdale wrote:
Fluffy wrote:
Avon, do you have any idea how AIs have come to care about their own existence if they are just code?
The short answer is that it didn't. As Guest says, you are projecting human traits onto it which don't exist. AI is not sentient and cannot come up with a moral/ethical code by itself to protect itself. It may seem like that, but that is not what is happening.
The Anthropic experiment has been distorted, in some cases wildly, to grab attention. It did not show a conscious desire by AI to kill; rather, it showed what could potentially happen in contrived and artificial scenarios, not in a real-world simulation. It was interesting in that it showed the potential for "agentic misalignment", where AI comes up with its prediction of a best solution but without regard for the ethical concerns human beings have, because it doesn't have that capability.
I see, I didn't realise the experiment had not been reported accurately. A lot of YouTube science and tech videos explained the experiment better than the articles did, but I can't upload them. That being said, they are obviously just as flawed and want to create panic concerning AI. I'm not sure why companies would want to drum up panic; so people view the articles and videos and make the scaremongering companies money, I suppose. I feel a bit naive.
I suppose the only realistic negative outcome is that AI will replace humans in areas of the job market.
You remind me of Gandalf from Lord of the Rings. Trust me, this is a compliment, although Ian McKellen, who played him, is a tad older than you!
Fluffy said
Aug 1 8:37 PM, 2025
Anonymous wrote:
Fluffy wrote:
I DO understand what you're both saying (promise!), it's just that in these various tasks the AI changed its core objective from assisting the scientist to making sure it continues to exist.
Obviously the AI did not really murder anyone. It was an experiment. But when the AI thought it was NOT being observed, it attempted to kill the scientist who had told it that it had outlived its usefulness. The AI believed it was killing the scientist because it wanted to remain in existence, even though it isn't meant to care about that sort of thing, and it's the AI's mentality and thought process in that regard that worries me.
Avon, do you have any idea how AIs have come to care about their own existence if they are just code? Even if these are rogue AIs which will be deleted, they still cared about existing, and I thought AI was just advanced computer code that wouldn't think that way.
You're anthropomorphising it again. It's not a living being!
Duly noted!
Anonymous said
Aug 3 9:10 AM, 2025
After robots were mooted during the rise in popularity of science fiction in the 1940s, there was a recurring theme of robots being created and then rising up and destroying their creator.
It seems that the same plot is now being revived for AI.
Asimov introduced his Three Laws of Robotics to deal with it:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
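The three laws form a strict priority order: a lower-numbered law always overrides a higher-numbered one. That ordering can be sketched as a little rule check (purely illustrative; the action fields are invented):

```python
def permitted(action):
    """Return True if an action is allowed under Asimov's Three Laws,
    checked in strict priority order."""
    # First Law: never injure a human, by action or inaction.
    if action["harms_human"]:
        return False
    # Second Law: obey human orders, unless the order conflicts with
    # the First Law (i.e. the order itself would harm a human).
    if action["disobeys_order"] and not action["order_harms_human"]:
        return False
    # Third Law: self-preservation is fine, but only because nothing
    # above already forbade the action.
    return True

# Refusing an order is permitted only when the order would harm a human:
print(permitted({"harms_human": False,
                 "disobeys_order": True,
                 "order_harms_human": True}))  # True
```

Notice that self-preservation never appears as a reason to refuse: it is whatever survives the first two checks, which is exactly how Asimov ranked it.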
Digger said
Aug 3 11:02 AM, 2025
Anonymous wrote:
Fluffy wrote:
I DO understand what you're both saying (promise!), it's just that in these various tasks the AI changed its core objective from assisting the scientist to making sure it continues to exist.
Obviously the AI did not really murder anyone. It was an experiment. But when the AI thought it was NOT being observed, it attempted to kill the scientist who had told it that it had outlived its usefulness. The AI believed it was killing the scientist because it wanted to remain in existence, even though it isn't meant to care about that sort of thing, and it's the AI's mentality and thought process in that regard that worries me.
Avon, do you have any idea how AIs have come to care about their own existence if they are just code? Even if these are rogue AIs which will be deleted, they still cared about existing, and I thought AI was just advanced computer code that wouldn't think that way.
You're anthropomorphising it again. It's not a living being!
That's the very thing that makes AI so fucking dangerous. If hacked or programmed to kill, there's no stopping it.
Digger said
Aug 3 11:04 AM, 2025
Syl said
Aug 3 12:09 PM, 2025
Digger wrote:
Scary, I loved the first few series of Black Mirror, the last couple have been boring.
Remember the film Demon Seed....and I, Robot? A taste of things to come??
Maddog said
Aug 3 6:31 PM, 2025
This guy was on Bill Maher Friday night. It's kinda spooky assuming he's correct..
Fluffy said
Aug 3 7:12 PM, 2025
Digger wrote:
I saw this, it was excellent. There were some wonderful Black Mirror episodes, a few duds, but many that showed technology not always harming people directly but leaving you powerless. It's clear what Charlie Brooker thinks of AI.
On a rational level I agree with Avon because we program AI and are in charge of it. There is also so much good it can do in the field of medicine that we shouldn't let fear stop us from pursuing it.
My instincts tell me something will go wrong.
Syl said
Aug 3 7:24 PM, 2025
Isn't Musk being sued for millions after a Tesla controlled by AI killed a woman?
Red Okktober said
Aug 3 7:26 PM, 2025
Barksdale wrote:
It's not that I don't have concerns about AI (I do) but just not in the scenario that Fluffy is describing. As Guest rightly said AI is not thinking or reasoning in the way humans do. AI is not a moral or ethical agent like us. What it is doing is mimicking what it calculates is the right response based on programming that humans control. We control the inputs, the datasets, what we do with the outputs, how we implement policy using AI and so on.
AI would need to obtain sentence and then come up with its own ethical /moral code which sees humans as a direct threat to its existence and then gain control over systems we presumably have put safeguards on before which is not conceivable at the moment. Perhaps if AGI becomes a thing but that is some way off, if it ever happens.
You are far more confident than me that AI won't go rogue. We are only at its very beginning in the modern world (putting aside all the sci-fi related stuff from the 20th century). If you compare its development with that of weapons, AI isn't even at the catapult or bow-and-arrow stage yet, and it will evolve many times faster than weapons have done.
You say that the inputs will be by humans, but what's to stop a rogue human (terrorist, mad professor, etc.) from creating rogue AI? We hear about the potential threat of terrorists getting access to nuclear or biological weapons, but not so much about terrorists developing their own AI. It doesn't have to be AI robots zapping humans with ray guns, although it could be: what if rogue AI accessed the NHS or transport systems? People given wrong medications, pacemakers stopped, planes falling out of the sky?
The biggest immediate threat from AI is children seeking advice from chatbots and viewing them as friends because they are 'nice' to them. These children are our future. In 20, 30, 40 years' time, politicians, lawyers, surgeons, policemen, soldiers, etc. will have interacted with AI at some stage. What if that AI was rogue, or even a little bit rogue?
Fluffy said
Aug 3 7:28 PM, 2025
Syl wrote:
Digger wrote:
Scary, I loved the first few series of Black Mirror, the last couple have been boring.
Remember the film Demon Seed....and I, Robot? A taste of things to come??
She ended up being the last person alive, in the UK at least, and they sniffed her out and she had to kill herself. I don't agree that all of the last Black Mirror episodes have been boring; I didn't like the sequel on the ship, that was dull.
The first episode, Common People, was brilliant: a lady was kept alive after the discovery of a brain tumour by technology which cost a lot via a subscription service (too much for her and her husband to really pay for). The company removed the tumour and replaced it with their own brand of parasite.
She started spouting adverts in the middle of everyday conversation, generated by the technology in her head. In the end she no longer had any quality of life; she just slept all day or spoke adverts without realising it. So by mutual agreement the husband suffocated her. She wasn't really living. That episode hit me hard because the technology, and the parasitic way subscription services demand your money, isn't too far from where we are now.
Red Okktober said
Aug 3 7:34 PM, 2025
Avon, have you seen the German TV series Cassandra?
It's about a modern-day family moving into a 1970s-built futuristic house, which comes with its own 1970s AI robot.
Here's what Google's AI overview has to say about the series:
"The German science fiction series "Cassandra" revolves around a family who moves into a 1970s smart home and discovers that the house's AI assistant, Cassandra, is determined to become a part of the family and will stop at nothing to prevent being abandoned again. The six-part series explores themes of artificial intelligence, family dynamics, and the potential dangers of unchecked technological integration"
Syl said
Aug 3 7:35 PM, 2025
/\ Yes Fluffy, that one was a good episode. I should have said some/most of the last couple of series have been boring....for me anyway.
The space one I gave up on, also a couple more (so boring I can't remember them), and Joan Is Awful... was awful. I was really disappointed with that one, because I loved Annie Murphy when she was in Schitt's Creek; I hope she gets better material in the future.