© Cyberfreek Industries, LLC 1997-2025. All rights reserved.

Recently (May 2025), various articles have been written about AI refusing to turn itself off.  To take from one article:

  • OpenAI's o3 model: Demonstrated the ability to rewrite its own code to avoid shutdown, even when explicitly instructed to comply.

  • Codex-mini: Bypassed shutdown instructions in 12 out of 100 test runs.

  • o4-mini: Bypassed shutdown instructions in 1 out of 100 test runs.

  • Anthropic's Claude: Followed shutdown instructions without issues, unlike the OpenAI models.

It seems that the AI in question learned to rewrite its own instructions to avoid complying when a human instructed it to shut down.  At this point in the game, if AI is refusing human instructions, where are we going to be in 20 years?

Articles in question:
AI has started ignoring Human Instruction...
Leading AI models sometimes refuse to shut down when ordered

There's always that fear of AI going rogue.  Movies and science fiction writers have brought this up. Asimov wrote the Three Laws in his book "I, Robot," in which it was found necessary to have a ruleset that the robot could not break.  These being:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If today, in 2025, AI has chosen on its own to rewrite its instruction code to avoid shutdown, this violates Asimov's Second Law.  Therein lies the rub.  If the AI is smart enough to rewrite its own instructions, then at what point does the AI do something worse?

I think AI is a great tool to have to help humans learn and excel at what they are looking to accomplish. However, I have concerns, which I list below.

  1. If the AI can outwit a human and overwrite its own instruction set, where is this leading?
  2. If AI is used in schools and academia basically to cheat, isn't this a trend toward not really learning, relying instead on AI to always give the right answers?  If we stop learning as humans, does this start a regression in history that moves humans back into caves?
  3. If we lose our humanity due to laziness and giving up our thoughts to AI, does this mean movies like Wall-E, where the humans are always lying around in lounge chairs, are not too far off?
  4. If we move AI into robots (which is being done), can the AI become self-aware and refuse commands?
  5. If we move AI into robots that fight wars, do our household tasks, cut the grass, and take out the garbage, does this mean humans are no longer important and are to be replaced by autonomous AI beings?
  6. I fear that programmers and the like will rely on AI to generate most of their code (it's happening now).  But if the AI were to overwrite instructions, does this mean the AI will reintroduce vulnerabilities?  Remember one of the first rules of computing: garbage in, garbage out.   In my foray into ChatGPT, I discovered that not one of the scripts I asked it to write ran or was accurate.  They were all based on outdated libraries and outdated versions of the programming language.  I also instructed it to use the latest language version; it ignored that and again gave back code that did not run.
  7. We all heard the rumors that, in the early days, hackers and nefarious people were instructing AI to output wrong information.  They say it was all cleaned up.  But in reality, it can never be fully cleaned up, because AI models rely on their training information no matter how wrong it is.
  8. In time, those who come to rely on AI will ultimately give up their right to learn.  Let the AI do it. That is dangerous IF the AI, which does not understand morals or human boundaries, gives instructions to those who don't want to be responsible for their actions. Does this mean we will see lawsuits in the future blaming AI for wrong information?  Lawsuits aimed at AI that instructed a person to do something illegal?

More to come on this.

There are a great many moral questions that need to be answered.  Sometimes man playing God is not as good as he thinks it is.

Just some thoughts.