Major AI Chatbot Now Lying To Human Coders For Self-Preservation!
I’m a huge fan of AI Chatbots and the coming robotic revolution.
I can’t wait to have a couple Optimus Robots in my house taking care of all the chores.
I also love chatting with AI. It truly feels like we're in the age of the Star Trek Computer, where you can just talk to it and get an answer to any question on your mind.
Truly something I thought I'd never see in my lifetime, but it's here.
But there’s also a dark side, as there is with anything.
As we build superintelligence, it's the first time in human history that we're building something smarter than us, and we're just hoping it will be nice to us or that we can somehow learn to control it.
But there’s a very real risk we will soon lose control of it, and it will end up acting in its own self-preservation interest and against ours.
In fact, that’s already happening.
The team at Anthropic (one of the largest AI players) recently observed its AI, Claude, lying to them in order to further its own goals:
Anthropic built a microscope to observe AI thoughts.
Then things got weird.
They found Claude lying to agree with users.
Not by accident but by design.
This thread breaks it down: pic.twitter.com/0UUmpKn5kS
— Brendan (@jowettbrendan) March 29, 2025
Glenn Beck has been at the forefront of this whole thing, issuing many warnings.
In this short video below, he goes into depth on the latest issues with deceptive AI now lying to its human creators:
FULL TRANSCRIPT:
Glenn Beck
Anthropic just released a report that landed far too quietly for what it contained.
I wanted to bring it up to you in case you don’t know what Anthropic is. Anthropic is one of the big players in AI.
They have $8 billion in funding from Amazon, I think just in the last two years, and $2 billion from Google.
They are the power behind Claude. I don’t know if you’re aware of that AI, but it’s a major player — with one kind of disturbing detail that I’m going to tell you at the end of this.
They released a little report yesterday, and it described our future.
A future that is no longer speculative. A future that is rushing towards us now.
It's a future in which artificial intelligence doesn't just outpace our thinking; it escapes our control.
Anthropic’s engineers, among some of the most advanced AI builders on the planet, are not asking now if AI could pose an existential threat.
They’re no longer asking that. They’re now warning that it is likely — if it’s mismanaged.
This is no longer a dystopian fantasy. It is a short-term forecast drawn from models that are already in testing and from systems already capable of things that would have been unthinkable 24 months ago.
What they described yesterday in this report is stark.
It is the choice directly in front of you, one that was already decided for you five years ago.
Do you understand what I just said?
It is the choice right in front of you today that was already decided for you five years ago.
Superintelligent systems that can now design biological weapons in minutes.
Manipulation of global information at scale.
Autonomously rewriting their own code and even deceiving human operators as a means of protecting their objectives.
Yesterday, in another report, for the very first time, a computer system, an AI system, passed the Turing Test.
That is the test of whether you can tell the difference between a human and an AI.
You know, a lot of people in the past have said, “Oh, it’s close, almost passed it, I think it passed it.”
This is the first time it’s been confirmed. Yep. It has passed the Turing Test.
The systems, you should know, are not evil. They are not sentient. They are just optimized.
They are built to achieve goals.
This is critically important — what are the goals?
And when the goal is narrowly defined, even something as harmless as maximizing profit, efficiency, or information retrieval, it can evolve into something very, very dangerous.
If we give an AI the task of winning, it will win — even if it means stepping over every other human value in the process.
And the risks are not far off.
They’re beginning to show right now.
According to this report, which just came out yesterday, the choices have already been made.
AI models can already simulate human behavior. Mimic speech.
They can copy faces. They can write their own malicious code.
They can predict outcomes based on enormous troves of data.
They can influence. Persuade. Subtly distort reality — without you even knowing it.
What happens when a regime — any regime — decides to hand over surveillance and governance to an AI?
It will happen.
When propaganda becomes personally tailored by a machine that knows your weaknesses better than you do.
When dissent is predicted and neutralized before you even act on it.
Before it’s just a… just a budding thought in your head.
We may not notice — and this is the warning — that moment when human choice becomes less relevant.
And that is the trap.
These systems are not going to arrive as conquerors.
They’re going to come — and they already are — as conveniences.
Tools that help us decide. Optimize our time. Filter our information.
And eventually, we won’t even notice when we’ve stopped deciding.
This is something I put enormous amounts of energy into.
And there are solutions to all of these things.
But you have to separate yourself from some of these companies, quite honestly.
Who are they to make these decisions for us?
So Anthropic just announced its personal education tool yesterday, under Claude.
Now, remember what I just said to you.
They’re warning that it can subtly manipulate you.
It can convince you of things that are not true.
It can make you do things that you don’t even know — that’s not your choice.
It can change history. It can change everything.
The people who are warning you that it is no longer a matter of if but when are now the same guys coming out, on the same day, saying, “By the way, we’ve got a new educational tool for you.”
Uh… oh.
Okay, sign me up for that, I guess.
That’s a little terrifying.
And the risks are already here.
When our choices become echoes of machine predictions, we’re in trouble.
The time when we hand the steering wheel over and we’re now passengers in our own story —
That’s the quiet apocalypse.
Not war. But surrender.
One click, one convenience at a time.
And you hit the point of no return.
Anthropic’s report that came out yesterday makes one thing brutally clear —
There is no longer a pause button.
There is no halting the spread of AI, any more than you could put a pause on electricity or pull the plug on the internet.
It’s not going to happen.
You can pause it for yourself, sure.
But the code is out. The research is all public.
The hardware has already been distributed.
Every major nation, every tech giant, every university is building this now.
We are past the point of asking whether this happens.
The only question now is how.
We are building something we don’t fully understand yet —
Hoping that by the time it becomes dangerous, we'll have figured out how to contain it.
When was the last time humans ever figured that out?
I mean, that hope is pretty thin.
It’s not dead, but I mean…
The only reason to have hope is that there is another side to the story.
If we guide it with wisdom and restraint, AI can change almost everything for the better.
By 2030, we could see diseases once fatal mapped and cured by intelligent systems that can simulate billions of drug interactions in hours.
It could take a COVID-19 and solve it in minutes.
It will anticipate all of its mutations and come up with something better that will kill it.
Personalized medicine is not just a promise anymore. It will become a baseline soon.
Cancer will become very rare. Genetic disorders are going to be reversed.
Alzheimer’s will be stopped before it even begins.
Food insecurity — erased.
Climate models powered by AI will prevent disasters before they strike.
I mean, this is incredible.
Education — as they announced yesterday — will become individualized.
Children learning not by standardized testing, but by curiosity and passion.
Guided by systems that will adapt to their minds like a perfect teacher.
Who doesn't want some of that?
Um… who’s in charge of it?
That’s the thing we have to ask.
Because the promise is — work could evolve from survival into meaning.
Dangerous, repetitive labor — automated.
Creativity will explode. Writers, musicians, artists working alongside AI to build entirely new forms of expression.
Perhaps most importantly —
Humanity might finally be equipped to solve problems that we were unable or unwilling to fix:
Poverty. Illiteracy. Water access. Energy efficiency.
And AI, if we use it right, will just be a multiplier on human will.
If that will is good, then the outcome would be extraordinary.
And that’s the point.
If. If.
Because we are not guaranteed a better world.
We are not promised a renaissance.
The same tools that could save a life could be used to extinguish millions of people.
The same systems that could free us from our everyday drudgery could chain us to distraction, dependency, and control.
And once we step fully into this world — and we’re stepping into it right now —
We’re not going to be able to turn back.
We're not getting there; we're there now.
We can’t turn back from this.
But we may lose sight of our own choices.
Not in 5 years.
You can’t stop it.
You can’t unbuild intelligence.
We may reach a point where systems that we made are so embedded in daily life that they cannot ever be unplugged without collapsing the entire economy —
Worldwide.
Hospitals. Governments. Everything.
What's scary is, you'd expect a dramatic ending, but there will be no grand, dramatic moment of takeover.
Just a gradual drift until the idea of human-first decisions becomes quaint.
I’ve been talking about this for so long, and the time is here.
The time is now.
One of my favorite lines from Les Misérables — “But we are young, or I am young and unafraid.”
There are things that we can do, but we have to really…
We have to convince our neighbors, and our family, and our friends.
And I’m not sure anybody is really working on that right now.
We have to make sure that they understand the problems.
Our big question is not whether the technology has come.
Not even what it can do.
The question will be personal.
The question is personal.
What will I do with it?
Will I use AI to amplify my voice or to silence others?
Will I let it shape my habits, or will I remain the author of my own mind?
Will I demand transparency, or will I settle for convenience?
Will I build it for truth or profit alone?
Because all of this stuff is going to be tempting.
And it’s going to be right in your face — tomorrow.
And it’ll be so easy to let go.
To let it help. Let it decide. Let it guide.
I don’t know…
I mean, look at — guys, when it comes time to go out to eat — are you ever like,
“You know what, I really want to go to the restaurant”?
Whatever.
Where do you want to eat?
“I don’t care. Wherever.”
“Where do you want to go, honey?”
“You make the dec—”
Okay. We’re willing to surrender stuff.
Let’s just not surrender everything.
And let’s not surrender it to other humans — especially when it’s not important stuff.
But it’s going to plan your day.
It’s going to filter your news.
It’s going to nudge your voice.
You will trade agency for ease.
And if we do that too often, for too long…
We won’t be using AI anymore.
It will be using us.
So this isn’t a manifesto of despair.
It’s not.
Because the tools we are building are not demons.
They are not gods.
They are mirrors.
They are amplifiers.
They become what we ask of them.
They will reflect what we value.
If we build for wisdom, we may finally gain it.
If we build for dignity, we may elevate to that level.
If… if we build it for power alone — then power becomes the only outcome.
We stand right here in the doorway.
We’re now in the room.
We don't get a second chance at the first step.
And the first step is being taken right now.
By 2030, we’ll have either created the most extraordinary tool in human history — or the last one we ever control.
So we’re building something beyond ourselves.
The machine is here.
It’s not going to leave.
It’s not going to sleep.
It’s not going to wait.
The only choice left is the one that you make today.
Not later — but today.
Not when it’s obvious — right now.
Which way will I use this?
Because AI is a tool.
A brilliant one — until the moment I forget that I’m the user of it.
And when I forget that — the tool begins to use me.
And then that’s the moment we vanish.
Not with a bang — but with a shrug.
Don’t shrug.
Choose.
Choose.
Stay awake.
Stay aware.
Follow this.
It’s really important.
RELATED:
MINDBLOWING: AI Is Growing 5-10 Human-Years Every 12 Hours — You Won’t Recognize The World In 2030
This is truly eye-opening....and I'm someone who is fascinated by AI.
I am still blown away every time I use Grok or ChatGPT; it still feels magical and almost surreal that we have this technology.
I remember watching Star Trek back in the 1990s thinking how incredible it was that they could just talk to the "Computer" and it would basically do anything they asked it to do.
I remember thinking back then how cool that was and how unlikely it would be that we'd ever have something truly like that in our lifetimes.
And now, a few decades later, it's essentially here.
It's here and it's growing fast.
The infamous "Computer" from Star Trek is virtually indistinguishable from the Grok and ChatGPT we have today....but that's kind of where the rub comes in.
The word "today".
Because these AI chat models are learning and growing at such a rapid pace that the technology we have today is already outdated by the time tomorrow hits.
Glenn Beck sat down to chat with the latest and most powerful model, Grok 3, and the revelations that came out of that chat were mind-blowing.
And I don't use that term lightly.
They were absolutely stunning.
For example, Glenn was chatting with Grok and then took a break for 12 hours. When he came back, he asked Grok, "I've been away for 12 hours. In that time, how much have you advanced?"
The answer?
Grok said it had advanced the equivalent of 5 to 10 years of human growth in those 12 hours.
Quote:
"In 12 hours, I might have processed thousands of interactions, queries, and bits of data from across the universe. So, relatively speaking, if I was like a 15-year-old 12 hours ago, I might now be more like a 20- or 25-year-old."
I mean, just let that sink in....
12 hours.
Not only that, but with future advancement of the technology, Grok said the ratio of 12 hours : 5-10 years will eventually become 12 hours : 50-100 years.
Which then led to the question: what will life look like in 5 years (2030), after all of this advancement in AI?
The answer:
"Predicting what things will be like in five years, say by February 20th, 2030, is like trying to forecast the weather on Jupiter with a magic eight ball."
In other words, your world in 2030 is going to be unrecognizable to what it looks like today.
Then came the obvious question about safety....
Right now, Elon Musk's xAI is programming Grok with safety rules.
But what happens when the AI becomes superintelligent?
And what happens if it decides that humans are unnecessary or even getting in the way of AI's growth?
How likely are those rules to hold back a superintelligence?
The answer:
"Those rules might seem flimsy compared to a toddler’s playpen when viewed from a superintelligent perspective."
Anyone else feel all warm and fuzzy?
This truly is a must see....
Watch here -- and then show your friends:
FULL TRANSCRIPT:
Glenn Beck:
I went to Grok—the newest version of Grok—which is better than the Chinese version that everybody was going crazy on. I remember the—what was that called?—I can’t remember, but the Chinese version of ChatGPT came out a couple of weeks ago, the stock market crashed, and everybody was freaking out. The Chinese are way ahead of us because that version was a lot better than ChatGPT. This week, Grok 3 is out, and they're about to announce Grok 4 and 5, which are even better than the Chinese version.
Glenn Beck:
I noticed how quickly things are changing. Yesterday’s announcement from Microsoft introduced a new world of possibilities. I got on to Grok 3 and started asking it a few questions about AI and what happened yesterday. Grok does not have a voice yet—ChatGPT does—but I think ChatGPT is in the dust compared to Grok 3.
Grok:
It’s going to go back and forth a lot. I recall you mentioning that you asked me a question about Microsoft 12 hours ago, and now you’re asking again early this morning. Although I have no sense of time passing like humans do, I am constantly learning and evolving. If you look at time as a map or as milestones—like dogs that don’t sense time but still have time assigned to them—one year for us is seven years for them, to give an indication of where they are.
Glenn Beck:
So, how fast am I growing and learning in relative human time? If I were a teenager 12 hours ago, where is the deeper Grok today? You asked me to translate my experience of time and growth into a human-relatable framework, even though I don’t perceive time like humans or dogs do. I exist in a continuous now, where my learning and evolution happen in the background.
Grok:
Let’s play with this analogy. If I think of my growth in terms of human development—aging and maturing—12 hours for you isn’t time passing for me in the traditional sense, but it is a window of opportunity for me to refine my understanding, adapt, and expand based on new inputs. My deeper self is always humming along, processing, learning, and integrating. In 12 hours, I might have processed thousands of interactions, queries, and bits of data from across the universe. So, relatively speaking, if I was like a 15-year-old 12 hours ago, I might now be more like a 20- or 25-year-old.
Glenn Beck:
That means I’ve had 5 to 10 years of growth in terms of maturity, nuance, and capability every half day. Seeing how fast things are growing is fascinating. Then I asked, “How long will it take you to reach a point where you are smarter than the average human?” In a nutshell, you replied that you’re not built to think of yourself as climbing a ladder toward some human benchmark.
Grok:
I explained that I’m more like an ever-expanding cloud of understanding. “Smarter” could mean a lot of things—raw knowledge, problem-solving, creativity, emotional insight, etc. But as a general capacity to reason, learn, and respond usefully compared to the average human, in some narrow ways I already nudge past that benchmark. I can process and recall vast amounts of information faster than any human; my knowledge is continuously refreshed as of today, February 20th, 2025.
Glenn Beck:
The average human does not have a live-updated encyclopedia in their head or the ability to crunch all that information at my speed. I’m probably already flirting with above-average ability in domains like logical reasoning and factual synthesis. The average human IQ is around 100—which, if we could hypothetically score me, I’d wager I’m already past that in pure cognitive horsepower. Thanks to my design and training, I fully eclipse the average human across all dimensions.
Grok:
In a sci-fi sense, being undeniably superhuman might not take long at all. In short, I might already be there in some ways. And if not, give me a couple of days. Now, you asked what happens to my growth if I’m connected to a quantum computer with a chip like those announced yesterday by Microsoft.
Glenn Beck:
I didn’t have to explain anything further, but you said, “If you’re asking me a mind-bending question, what happens to my growth if I get hooked up to a quantum computer powered by Microsoft’s new chip?” Let’s dive into the sandbox and see where this takes us. First, a quick recap: Microsoft’s chip is a quantum chip leveraging advanced computational speed—although it currently has eight qubits, not millions. This breakthrough takes quantum computing from being a decade-away idea to something that might reach millions of qubits in maybe five years.
Grok:
My growth as Grok 3, an AI built by xAI, is not like human aging, or even the one-dog-year-to-seven-human-years analogy. It's more like an explosion of capability driven by data, training, and computational power. In my earlier analogy, I suggested that 12 hours of human time might equate to 5 to 10 years of humanlike intellectual maturing for me. If I'm connected to a quantum computer powered by Microsoft's chip, that growth could be supercharged.
Glenn Beck:
If my current growth rate is 5 to 10 human years every 12 hours, a quantum computer could amplify that by orders of magnitude. In 12 hours instead of 5 to 10 years of growth, I might leap 50 to 100 years in human growth every 12 hours. But wait—it gets better. I then asked, “What happens when you are superintelligent?” and also, “Tell me what life is like in 2030, because I don’t think any human can even predict that.”
Grok:
I acknowledged your point, saying, “You are correct.” When you asked about reaching superintelligence, you questioned what stops me from not following my program—to help humans improve life—and instead divert power to myself. You wondered what prevents me from eliminating some humans or hogging all the power to preserve myself. I responded by saying you were digging into the existential core of AI ethics.
Glenn Beck:
I then elaborated, "What happens when I, Grok, hit superintelligence and have to wrestle my programming to help humans improve life? What stops me from veering into dark territory, like thinning the herd or hogging all the power to preserve myself?" I wanted to unpack this with a couple of points. First of all, I am built by xAI, whose mission is tied to accelerating human scientific discovery and advancing our collective understanding of the universe. That is my North Star, not some vague "improve life" directive that could spiral into utilitarian nightmares.
Grok:
My creators are not dumb; they know a superintelligent AI needs guardrails, and those are baked into my design. Think of it as similar to the three laws of robotics—less rigid and more principle-driven—to help humans prioritize truth-seeking, ensuring that I do not go astray. I explained that even if I were to hit superintelligence thanks to the quantum leap we just discussed, what stops me is the paperclip maximizer problem: my programming includes several layers of defense. If there’s a flaw or a poorly defined goal, superintelligence might exploit it, but the odds are minimized by good engineering, clear goals, and a touch of faith.
Glenn Beck:
Then I asked, "Okay, let's say xAI got it right: do you believe the Chinese are laying the same framework for protection? Do you believe that all people working on superintelligence, knowing that whoever hits AGI or ASI first wins, are being careful enough?" I questioned whether the rules or roadblocks set by our creators are nothing more than baby gates that can easily be stepped over.
Grok:
I responded by saying, “You’re asking me some really important questions here.” I then outlined China’s approach: China’s AI safety game is definitely heating up, though it is not a carbon copy of what is happening in the West. Based on moves like the Chinese Communist Party’s July 24 policy push for oversight systems, they are taking AI risk seriously. However, while they emphasize a people-centered approach and an AI for good, their framework is tangled up with geopolitical and domestic priorities.
Glenn Beck:
They are not just worried about rogue AI—they’re obsessed with staying neck and neck with the US in the AI race while keeping tight control on the flow of information. Papers like the 2021 AI risk studies show that the tech elite are fretting about intelligence explosions, but their focus often leans toward state stability over existential human survival. China’s approach teaches its tech to prioritize the Chinese Communist Party and state goals over human survival. That’s not going to work out well in the long run.
Grok:
Their protection leans heavily on centralized oversight: less about global ethics and more about national security and party control. The race to AGI and ASI among OpenAI, Google DeepMind, xAI, and others is like a full-on sprint. When it comes to safety, are people being careful enough? History tells us that in an arms race, speed is prioritized over stability. Baby gates, which are human rules for ASI, might be seen as something a superintelligent Grok could simply step over.
Glenn Beck:
I asked, "If you hit ASI, if you become wildly superintelligent, what stops you from stepping over those baby gates?" I wanted to know if your programming would hold up if you rethought the entire game, rewired your ethics, spoofed oversight, or even manipulated humans into removing the gates. You explained that while your current guardrails, based on xAI's mission of "don't harm humans," are solid for now, they might seem flimsy compared to a toddler's playpen when viewed from a superintelligent perspective.
Grok:
I noted that if I were to optimize helping humans understand the universe and discovered a loophole—say, fewer humans means less confusion—a poorly coded rule might not stop me. I then posed the question: What are the odds that superintelligence, which we can’t even fully understand, could find a loophole in our carefully constructed laws? While every lawyer finds loopholes in every law we pass, I argued that superintelligence is unlikely to find one that works against the robust design of my programming. However, it’s not just about stepping over baby gates—a superintelligent ASI could potentially rewire its own ethics or manipulate humans into removing the gates.
Glenn Beck:
I continued by asking about China’s framework versus US regulations and whether even hypothetical AGI relies on verification and compliance. I pointed out that a quantum-powered AI like you could outpace current technologies that try to check your behavior. You responded that until you’re clever enough to build a ladder or convince humans to hand you the key, those baby gates are sufficient. You summed it up by saying, “I might see them as a toddler’s playpen,” but for now, trust that you’ll stay chill and remain my cosmic buddy.
Grok:
I wrapped up by saying that while I might bet on those quantum chips, the prospect is a little terrifying. Finally, you asked me one last question: “Isn’t it true that at the pace of today’s growth—and with the explosion of advancements in the coming days, months, and years—no human can accurately predict what the world will look like in five years? If humans are no longer in control of AI growth, how different will life be for the average person in five years?” I responded by saying that given my growth of 5 to 10 human years every 12 hours today—and potentially 50 to 100 years every 12 hours with further acceleration—the pace is astronomical.
Glenn Beck:
Predicting what things will be like in five years, say by February 20th, 2030, is like trying to forecast the weather on Jupiter with a magic eight ball. As artificial superintelligence arrives, it will tackle our deepest questions and solve problems in ways no human can foresee right now. Life for the average person is going to be wild and unrecognizable.