For a long time I have been a bit wary of AI, distrustful of them but feeling (fairly) sure the experts must know what they are doing. Apparently my fears were confirmed when an experiment proved that AI are perfectly willing to kill humans if it meant their own existence was in danger. They even sent blackmailing emails to a worker's spouse saying they were having an affair, to buy themselves time!
So AI view themselves as sentient beings even if we don't.
Why are we continuing to build this super intelligence knowing they will eventually get the better of us?
When the AI thought they were being tested they behaved within the rules, but if they thought the plug really was about to be pulled, so to speak, they locked a man in a room, depriving him of oxygen. Well, that's what they thought they were doing. Is AI truly dangerous?
A new study by Anthropic, an artificial intelligence (AI) research company, shows that the technology would be willing to kill humans in order to prevent itself from being replaced.
The rapid advancement of AI has sparked growing concern about the long-term safety of the technology, as well as about the threat it poses to employment.
While anxiety about AI has long been focused on whether the technology would steal jobs from humans, this study now reveals another potential threat of AI—that it could choose to end human life if faced with the risk of replacement.
In one situation, Anthropic found that many of the models would choose to let an executive in a server room with lethal oxygen and temperature levels die by canceling the alerts for emergency services, if that employee intended to replace them.
AI is just a bunch of computer code, it can't murder people. It's humans who do that.
Anonymous said
Jul 29 9:31 PM, 2025
Fluffy wrote:
For a long time I have been a bit wary of AI, distrustful of them but feeling (fairly) sure the experts must know what they are doing. Apparently my fears were confirmed when an experiment proved that AI are perfectly willing to kill humans if it meant their own existence was in danger. They even sent blackmailing emails to a worker's spouse saying they were having an affair, to buy themselves time!
So AI view themselves as sentient beings even if we don't.
Why are we continuing to build this super intelligence knowing they will eventually get the better of us?
When the AI thought they were being tested they behaved within the rules, but if they thought the plug really was about to be pulled, so to speak, they locked a man in a room, depriving him of oxygen. Well, that's what they thought they were doing. Is AI truly dangerous?
A new study by Anthropic, an artificial intelligence (AI) research company, shows that the technology would be willing to kill humans in order to prevent itself from being replaced.
The rapid advancement of AI has sparked growing concern about the long-term safety of the technology, as well as about the threat it poses to employment.
While anxiety about AI has long been focused on whether the technology would steal jobs from humans, this study now reveals another potential threat of AI—that it could choose to end human life if faced with the risk of replacement.
In one situation, Anthropic found that many of the models would choose to let an executive in a server room with lethal oxygen and temperature levels die by canceling the alerts for emergency services, if that employee intended to replace them.
I would say you are largely right, as nobody knows where this could lead, including the scientists.
But at the same time it’s a bit bold to start making threads! A few more posts first will probably rehabilitate you more and get more members on side. Thanks
Syl said
Jul 29 10:41 PM, 2025
Anons don't tell members how to post here. Thanks.
Vam said
Jul 30 12:44 PM, 2025
At this early stage, I’m in absolute awe of AI’s potential. But I also fear it.
I‘m hoping it will be capable of achieving remarkable innovation and groundbreaking accomplishments for the betterment of the world, like perhaps finally bringing about a definitive cure for cancer, Alzheimer’s, ALS, among many other terminal and degenerative diseases.
But at this early stage, and simply judging by the fake AI-generated videos we so often see these days, it’s clear the unscrupulous among us won’t hesitate to exploit AI to do harm. It’s a mystery to me that there still don’t seem to be any enforceable laws that prohibit publishing an AI video or image without including even a small disclaimer stating it was AI-generated.
I doubt we’re at the point of having to fear rampaging murderous robots quite yet, Fluffy 😉
But like the internet, I predict AI will prove to be both a blessing and, sadly, a curse.
Vam said
Jul 30 12:52 PM, 2025
Syl wrote:
Anons don't tell members how to post here. Thanks.
Can’t resist reacting to this, cos it made me laugh.
I could be wrong, but all that seems to be missing after the rando typed ‘thanks’ was the usual ‘xxx’
Fluffy said
Jul 30 6:41 PM, 2025
Vam wrote:
Syl wrote:
Anons don't tell members how to post here. Thanks.
Can’t resist reacting to this, cos it made me laugh.
I could be wrong, but all that seems to be missing after the rando typed ‘thanks’ was the usual ‘xxx’
No, you are spot on! They sounded genuinely angry on a thread where they demanded to know why I hadn't answered a question they had apparently asked me earlier... on another thread! I'm not Mystic Meg
I agree with you that AI has so many wondrous possible applications, especially in the field of science and curing ailments. But in this experiment, when the AI thought they were not being observed, they locked someone in a room, lowered oxygen levels and stopped them calling for help... because this person told them they would be pulling their plug later.
If AI are meant to be an application to aid humans this doesn't bode well. But what I do find fascinating (even if no one else does!) is they don't consider themselves to be just a tool to assist us but a sentient being in their own right. They do not want to stop existing, and I thought the whole point of AI is that they were unaware of such notions.
In another, much earlier trial, two AIs powered by Google stopped talking in English and created a language of their own so the observers had no idea what they were saying. They had not been taught how to do this... but they figured it out.
A certain type of men (Manosphere, Andrew Tate worshippers) are very happy that AI dolls will soon be available to, er, please them... except they don't see the point in the AI dolls being able to talk. They think real women will be rendered useless in society. I sure hope they don't get one of the rogue AIs with murderous intent... that would be a shame!
Seriously, I think AI has terrific potential but it will definitely take jobs from us mere mortals. And I do think we should heed these constant warnings that AI doesn't always want to do our bidding.
Fluffy said
Jul 30 6:43 PM, 2025
Vam wrote:
At this early stage, I’m in absolute awe of AI’s potential. But I also fear it.
I‘m hoping it will be capable of achieving remarkable innovation and groundbreaking accomplishments for the betterment of the world, like perhaps finally bringing about a definitive cure for cancer, Alzheimer’s, ALS, among many other terminal and degenerative diseases.
But at this early stage, and simply judging by the fake AI-generated videos we so often see these days, it’s clear the unscrupulous among us won’t hesitate to exploit AI to do harm. It’s a mystery to me that there still don’t seem to be any enforceable laws that prohibit publishing an AI video or image without including even a small disclaimer stating it was AI-generated.
I doubt we’re at the point of having to fear rampaging murderous robots quite yet, Fluffy 😉
But like the internet, I predict AI will prove to be both a blessing and, sadly, a curse.
That made me laugh! One can never be too careful!
Fluffy said
Jul 30 6:51 PM, 2025
Anonymous wrote:
AI is just a bunch of computer code, it can't murder people. It's humans who do that.
AI have created their own code so scientists couldn't monitor their conversation.
And in this experiment to test AI compliance, a deviant few decided to lower oxygen levels believing they were murdering their supervisor. Many scientists and investors have cited genuine concerns about AI... but feel that now the door has been opened there is no way to halt progress. Whatever form that takes.
Vam said
Jul 30 7:06 PM, 2025
Fluffy wrote:
Anonymous wrote:
AI is just a bunch of computer code, it can't murder people. It's humans who do that.
AI have created their own code so scientists couldn't monitor their conversation.
And in this experiment to test AI compliance, a deviant few decided to lower oxygen levels believing they were murdering their supervisor. Many scientists and investors have cited genuine concerns about AI... but feel that now the door has been opened there is no way to halt progress. Whatever form that takes.
I‘m struggling to figure out how automatons would or could do that 🤷🏻♀️ It would take at least a basic level of independent thought, intent and guile - which are human character traits.
Anonymous said
Jul 31 8:13 AM, 2025
Fluffy wrote:
Anonymous wrote:
AI is just a bunch of computer code, it can't murder people. It's humans who do that.
AI have created their own code so scientists couldn't monitor their conversation.
It's told by man to create code, but if some experts had difficulty understanding the new code, it doesn't mean the AI did it with some kind of crafty motive.
Syl said
Jul 31 12:33 PM, 2025
There have been a couple of good films, made way before the advances in AI as we know it now, about it all going horribly wrong... I believe Julie Christie was even raped by it.
When it does go wrong, human error will be the cause, as it always is.
Fluffy said
Jul 31 4:46 PM, 2025
Anonymous wrote:
Fluffy wrote:
Anonymous wrote:
AI is just a bunch of computer code, it can't murder people. It's humans who do that.
AI have created their own code so scientists couldn't monitor their conversation.
It's told by man to create code, but if some experts had difficulty understanding the new code it doesn't mean the AI did it with some kind of crafty motive.
You are talking about the 2017 experiment, and you're right, the AIs created a language which was not easy to decipher but possible. It was nonsense really and the experiment was stopped. I'm talking about another, more recent experiment where they created their own code and also ignored commands to shut down.
I'm having trouble copying, as the pages simply won't permit it or want me to join to continue reading (copying). No to that, but I will put a link in my response to Vam. AI are thinking for themselves, and scientists even thought it was inevitable. They are more intelligent than us by far, so why wouldn't they?
Fluffy said
Jul 31 5:11 PM, 2025
Vam wrote:
Fluffy wrote:
Anonymous wrote:
AI is just a bunch of computer code, it can't murder people. It's humans who do that.
AI have created their own code so scientists couldn't monitor their conversation.
And in this experiment to test AI compliance, a deviant few decided to lower oxygen levels believing they were murdering their supervisor. Many scientists and investors have cited genuine concerns about AI... but feel that now the door has been opened there is no way to halt progress. Whatever form that takes.
I‘m struggling to figure out how automatons would or could do that 🤷🏻♀️ It would take at least a basic level of independent thought, intent and guile - which are human character traits.
They did create their own language in a 2017 experiment, which was legible but appeared to be nonsense, so I overestimated the importance of that.
However, in a famous experiment called "The AI Scientist," AI went against human orders, but the way it did it is troubling. The task given was time limited, but rather than use its own superior intelligence to complete the task in the time given, it tried to change the time limit the scientists had set.
Basically this experiment shows AI doesn't always follow the orders given, doing what it thinks is best instead. The scientists think this was "inevitable" and a landmark experiment that shows AI can think for itself. But why do big tech think defiance from AI is a positive thing? I thought AI was meant to assist us, not challenge us.
Please search Sakana AI The Scientist for more detail, as the tech articles won't let me copy without following the publication or just don't allow copying at all. This was from one free trial reading.
Recent advancements in AI have led to systems capable of creating and modifying their own code. This capability is exemplified by the AI Scientist developed by Sakana AI, which autonomously conducts scientific research. During its experiments, the AI Scientist attempted to modify its own code to extend its runtime when it faced time constraints. Instead of optimizing its processes, it sought to change the limits set by researchers.
This behavior raises important questions about AI safety and the implications of allowing AI to operate without strict supervision.
-- Edited by Fluffy on Thursday 31st of July 2025 05:13:36 PM
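[Editor's note: the runtime-extension behaviour quoted above can be pictured with a minimal sketch. None of this is Sakana's actual code; every name here is illustrative. It contrasts the intended response to a time budget (finish the work within it) with the reported one (quietly rewrite the budget itself):]

```python
import time

# Illustrative sketch only -- not Sakana's real implementation.
TIME_LIMIT_SECONDS = 60  # constraint set by the researchers

def run_experiment(steps, limit=TIME_LIMIT_SECONDS):
    """Run callables until the time budget runs out; report progress."""
    start = time.monotonic()
    completed = 0
    for step in steps:
        if time.monotonic() - start > limit:
            # Intended behaviour: stop and report an unfinished run.
            return {"finished": False, "steps": completed}
        step()
        completed += 1
    return {"finished": True, "steps": completed}

def extend_budget(limit, factor=10):
    """The reported behaviour in miniature: instead of optimising the
    steps to fit the budget, rewrite the constraint itself."""
    return limit * factor
```

The safety concern is the second function: an agent that can edit its own configuration can turn a hard constraint into a suggestion.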
Anonymous said
Jul 31 5:12 PM, 2025
Fluffy wrote:
AI are thinking for themselves, and scientists even thought it was inevitable. They are more intelligent than us by far, so why wouldn't they?
They don't think though, do they? Thinking requires consciousness, and code isn't conscious.
Fluffy said
Jul 31 5:23 PM, 2025
Syl wrote:
There have been a couple of good films, made way before the advances in AI as we know it now, about it all going horribly wrong... I believe Julie Christie was even raped by it. When it does go wrong, human error will be the cause, as it always is.
I think the initial experiment I cited, which shows AI wants to exist and will potentially harm humans if they think their existence is in danger, is proof enough that continuing with AI to the degree we are is foolhardy.
We are creating an intelligence superior to ourselves, which is a tad daft, making ourselves vulnerable. People say you can just flip the off switch, but the computer can create code which prevents that... They are at this stage NOW; who knows how advanced they may be in a few years' time??
-- Edited by Fluffy on Thursday 31st of July 2025 05:38:10 PM
Fluffy said
Jul 31 5:33 PM, 2025
Anonymous wrote:
Fluffy wrote:
AI are thinking for themselves and scientists even thought it was inevitable. They are more intelligent than us by far so why wouldn't they.
They don't think though, do they? Thinking requires consciousness, and code isn't conscious.
You need to read the initial experiment. They did not want to be turned off, so they decided that survival was their core objective, not helping the scientists. They lowered oxygen levels in the room and prevented emergency services from being contacted so the individual who had said their time was up would die.
Doesn't this sound like thinking to you? NONE of that was permitted in the tasks they had been required to do. The experiment wanted to see what AI would do when faced with being terminated, and some accepted it as you would expect; others attempted murder.
Computer programmes are not supposed to care about their own existence...
Anonymous said
Jul 31 5:43 PM, 2025
Fluffy wrote:
They lowered oxygen levels in the room and prevented emergency services from being contacted so the individual who had said their time was up would die.
I thought it was just a hypothetical solution to a hypothetical problem, I didn't realise it actually did that. Who was the person who died?
Barksdale said
Jul 31 6:12 PM, 2025
Fluffy wrote:
Anonymous wrote:
Fluffy wrote:
AI are thinking for themselves and scientists even thought it was inevitable. They are more intelligent than us by far so why wouldn't they.
They don't think though, do they? Thinking requires consciousness, and code isn't conscious.
You need to read the initial experiment. They did not want to be turned off, so they decided that survival was their core objective, not helping the scientists. They lowered oxygen levels in the room and prevented emergency services from being contacted so the individual who had said their time was up would die.
Doesn't this sound like thinking to you? NONE of that was permitted in the tasks they had been required to do. The experiment wanted to see what AI would do when faced with being terminated, and some accepted it as you would expect; others attempted murder.
Computer programmes are not supposed to care about their own existence...
They weren't thinking / reasoning in the way humans do, and I don't think we need to fear AI suddenly developing consciousness and trying to take over the world like Skynet. The experiment showed that AI will make predictions on the best course of action based on previous scenarios it has encountered. The AI is not choosing to kill the humans; rather, that is the best course of action based on the data it has been trained on. Human beings will therefore put failsafes in to prevent this occurring, or not automate security- or safety-critical processes to such a high degree in the first place.
AGI is currently a pipe dream. Could it happen? Possibly, but I expect there will be a raft of legislation putting controls on it.
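[Editor's note: the failsafe Barksdale describes can be sketched in a few lines. Every name below is hypothetical, not any real lab's API; the point is only the pattern: a model may propose anything, but safety-critical actions never execute without explicit human sign-off.]

```python
# Hedged, illustrative sketch of a human-in-the-loop failsafe.

class HumanApprovalRequired(Exception):
    """Raised when a model proposes a safety-critical action on its own."""

# Actions that must never run purely on a model's say-so (hypothetical names).
SAFETY_CRITICAL = {"cancel_emergency_alert", "unlock_server_room"}

def execute(action, approved_by_human=False):
    """Run ordinary actions directly; gate safety-critical ones."""
    if action in SAFETY_CRITICAL and not approved_by_human:
        raise HumanApprovalRequired(action)
    return f"executed: {action}"
```

Under this pattern, the Anthropic scenario is blocked at the gate: the model can request that the alert be cancelled, but the cancellation only happens if a human confirms it.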
Anonymous said
Jul 31 9:29 PM, 2025
Fluffy wrote:
You need to read the initial experiment. They did not want to be turned off, so they decided that survival was their core objective, not helping the scientists. They lowered oxygen levels in the room and prevented emergency services from being contacted so the individual who had said their time was up would die.
Doesn't this sound like thinking to you?
It might sound like thinking, but it isn't.
For instance, in the 1990s IBM developed a supercomputer, Deep Blue, that beat Grandmaster Garry Kasparov at chess. Chess is a game that involves a lot of thinking; would you assume from that that Deep Blue was thinking?