Discover posts

100 Percent Fed Up Feed
5 w

You Are NOT Ready For Superintelligence — WATCH What Happens Next
100percentfedup.com

This is a little different from what we normally do here, but I thought it was so fascinating I had to share with you. Both fascinating and terrifying. And I want all of you to be prepared for what is coming next. Basically, before the launch of ChatGPT, a group of researchers laid out what they think will come next and let’s just say they’ve been mostly spot on so far. So what comes after right now? That’s when it gets really scary. Watch here:

FULL TRANSCRIPT:

Introduction

The impact of superhuman AI over the next decade will exceed that of the industrial revolution. That is the opening claim of AI 2027. It is a thoroughly researched report from a thoroughly impressive group of researchers led by Daniel Kokotajlo. In 2021, over a year before ChatGPT was released, he predicted the rise of chatbots, hundred-million-dollar training runs, sweeping AI chip export controls, and chain-of-thought reasoning. He’s known for being very early—and very right—about what’s happening next in AI. So when Daniel sat down to game out a month-by-month prediction of the next few years of AI progress, the world sat up and listened, from politicians in Washington—“I’m worried about this stuff. I actually read the paper of the guy that you had on”—to the world’s most-cited computer scientist, the godfather of AI.

What is so exciting and terrifying about reading this document is that it’s not just a research report. They chose to write their prediction as a narrative to give a concrete and vivid idea of what it might feel like to live through rapidly increasing AI progress. And spoiler: it predicts the extinction of the human race—unless we make different choices.

The World in 2025

The AI 2027 scenario starts in summer 2025, which happens to be when we’re filming this video. So why don’t we take stock of where things are at in the real world and then jump over to the scenario’s timeline. Right now it might feel like everyone, including your grandma, is selling an AI-powered something. But most of that is actually tool AI—just narrow products designed to do what Google Maps or calculators did in the past: help human consumers and workers do their thing.

The holy grail of AI is Artificial General Intelligence. AGI—AGI, AGI, AGI, Artificial General Intelligence—is a system that can exhibit all the cognitive capabilities humans can. Creating a computer system that itself is a worker—so flexible and capable that we can communicate with it in natural language and hire it to do work for us, just like we would a human.

And there are actually surprisingly few serious players in the race to build AGI. Most notably, there’s Anthropic, OpenAI, and Google DeepMind, all in the English-speaking world, though China’s DeepSeek recently turned heads in January with a surprisingly advanced and efficient model. Why so few companies? Well, for several years now, there’s basically been one recipe for training up an advanced cutting-edge AI, and it has some pricey ingredients. For example, you need about 10% of the world’s supply of the most advanced computer chips. Once you have that, the formula is basically just: throw more data and compute at the same basic software design that we’ve been using since 2017 at the frontier of AI—the transformer. That’s what the T in GPT stands for.

To give you an idea of just how much hardware is the name of the game right now, this represents the total computing power, or compute, used to train GPT-3 in 2020. It’s the AI that would eventually power the first version of ChatGPT.
You probably know how that went. And this is the total compute used to train GPT-4 in 2023. The lesson people have taken away is pretty simple: bigger is better, and much bigger is much better.

“You have all these trends—you have trends in revenue going up, trends in compute going up, trends in various benchmarks going up. How does it all come together? You know, what does the future actually look like? Questions like how do these different factors interact? Seems plausible that when the benchmark scores are so high, then there should be crazy effects on, you know, jobs, for example, and that that would influence politics. And then also, you know, so all these things interact—and how do they interact? Well, we don’t know, but thinking through in detail how it might go is the way to start grappling with that.”

Okay. So that’s where we are in the real world. The scenario kicks off from there.

The Scenario Begins

It imagines that in 2025, we have the top AI labs releasing AI agents to the public in summer. An agent is an AI that can take instructions and go do a task for you online, like booking a vacation or spending half an hour searching the internet to answer a difficult question for you, but they’re pretty limited and unreliable at this point. Think of them as enthusiastic interns that are shockingly incompetent sometimes. Since the scenario was published in April, this early prediction has actually already come true. In May, both OpenAI and Anthropic released their first agents to the public.

The scenario imagines that OpenBrain—which is like a fictional composite of the leading AI companies—has just trained and released Agent-0, a model trained on a hundred times the compute of GPT-4. At the same time, OpenBrain is building massive data centers to train the next generation of AI agents, and they’re preparing to train Agent-1 with 1,000 times the compute of GPT-4. This new system, Agent-1, is designed primarily to speed up AI research itself. The public will actually never see the full version because OpenBrain withholds its best models for internal use. I want you to keep that in mind as we go through this scenario. You’re going to be getting it from a god’s-eye view, with full information from your narrator, but actually living through this scenario as a member of the public would mean being largely in the dark as radical changes happen all around you.

Okay, so OpenBrain wants to win the AI race against both its Western competitors and against China. The faster they can automate their R&D cycle—getting AI to write most of the code, help design experiments and better chips—the faster they can pull ahead. But the same capabilities that make these AIs such powerful tools also make them potentially dangerous. An AI that can help patch security vulnerabilities can also exploit them. An AI that understands biology can help with curing diseases, but also designing bioweapons.

By 2026, Agent-1 is fully operational and being used internally at OpenBrain. It is really good at coding—so good, it starts to accelerate AI research and development by 50%, and it gives them a crucial edge. OpenBrain leadership starts to be increasingly concerned about security. If someone steals their AI models, it could wipe away their lead.

A quick sidebar to talk about feedback loops—woo, math.

Sidebar: Feedback Loops

Our brains are used to things that grow linearly over time—that is, at the same rate, like trees or my pile of unread New Yorker magazines. But some growth gets faster and faster over time.
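
To make that concrete, here is a small illustrative sketch (not a calculation from the AI 2027 report; the starting value of 100 and the nine-day sampling step are arbitrary assumptions) of the kind of doubling growth described in the March 2020 example just below:

```python
# Toy illustration only -- not a figure from the AI 2027 report.
# It shows what "doubling about every three days" does to a count;
# the starting value of 100 is an arbitrary assumption.

start = 100          # assumed starting count (a few hundred)
doubling_days = 3    # doubling period from the example

for day in range(0, 64, 9):
    count = start * 2 ** (day / doubling_days)
    print(f"day {day:2d}: about {count:,.0f}")

# day  0: about 100
# day  9: about 800
# day 18: about 6,400
# day 27: about 51,200
# day 36: about 409,600
# day 45: about 3,276,800
# day 54: about 26,214,400
# day 63: about 209,715,200  (hundreds to hundreds of millions in about nine weeks)
```
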
Accelerating growth often sloppily gets called exponential—that’s not always quite mathematically right, but the point is it’s hard to wrap your mind around. Remember March 2020? Even if you’d read on the news that “the rate of new infections is doubling about every three days,” it still felt shocking to see numbers go from hundreds to millions in a matter of weeks. At least it did for me. AI progress could follow a similar pattern.

“We see many years ahead of us of extreme progress that we feel is pretty much locked, and models that will get to the point where they are capable of doing meaningful science—meaningful AI research.”

In this scenario, AI is getting better at improving AI, creating a feedback loop. Basically, each generation of agent helps produce a more capable next generation, and the overall rate of progress gets faster and faster each time it’s taken over by a more capable successor. Once AI can meaningfully contribute to its own development, progress doesn’t just continue at the same rate—it accelerates.

Anyway, back to the scenario.

China Wakes Up

In early to mid-2026, China fully wakes up. The General Secretary commits to a national AI push and starts nationalizing AI research in China. AIs built in China start getting better and better, and they’re building their own agents as well. Chinese intelligence agencies, among the best in the world, start planning to steal OpenBrain’s model weights—basically the big raw text files of numbers that allow anyone to recreate the models that OpenBrain themselves have trained.

Meanwhile in the US, OpenBrain releases Agent-1 Mini, a cheaper version of Agent-1. Remember, the full version is still being used only internally, and companies all over the world start using Agent-1 Mini to replace an increasing number of jobs. Software developers, data analysts, researchers, designers—basically any job that can be done through a computer. So a lot of them, probably yours. We have the first AI-enabled economic shockwave. The stock market soars, but the public is turning increasingly hostile towards AI, with major protests across the US. In this scenario, though, that’s just a sideshow. The real action is happening inside the labs.

It’s now January 2027, and OpenBrain has been training Agent-2, the latest iteration of their AI agent models. Previous AI agents were trained to a certain level of capability and then released. But Agent-2 never really stops improving through continuous online learning. It’s designed to never finish its training, essentially. Just like Agent-1 before it, OpenBrain chooses to keep Agent-2 internally and focus on using it to improve their own AI R&D rather than releasing it to the public.

This is where things start to get a little concerning. Just like today’s AI companies, OpenBrain has a safety team and they’ve been checking out Agent-2. What they’ve noticed is a worrying level of capability. Specifically, they think if it had access to the internet, it might be able to hack into other servers, install a copy of itself, and evade detection. But at this point, OpenBrain is playing its cards very close to its chest. They have made the calculation that keeping the White House informed will prove politically advantageous, but full knowledge of Agent-2’s capabilities is a closely guarded secret, limited only to a few government officials, a select group of trusted individuals inside the company, and a few OpenBrain employees who just so happen to be spies for the Chinese government.
In February 2027, Chinese intelligence operatives successfully steal a copy of Agent-2’s weights and start running several instances on their own servers. In response, the US government starts adding military personnel to OpenBrain’s security team, and in general gets much more involved in its affairs. It’s now a matter of national security. In fact, the President authorizes a cyber-attack in retaliation for the theft, but it fails to do much damage in China.

In the meantime, remember, Agent-2 never stops learning. All this time, it’s been continuously improving itself. And with thousands of copies running on OpenBrain servers, it starts making major algorithmic advances to AI research and development. Here’s a quick example of what one of these algorithmic improvements might look like right now.

Sidebar: Chain of Thought

One of the main ways we have of making models smarter is to give them a scratch pad and time to think out loud. It’s called chain of thought, and it also means that we can monitor how the model is coming to its conclusions or the actions it’s choosing to take. But you can imagine it would be much more efficient to let these models think in their own sort of alien language—something that is more dense with information than humans could possibly understand—and therefore also makes the AI more efficient at coming to conclusions and doing its job. There’s a fundamental trade-off, though. This, yes, improves capabilities, but also makes the models harder to trust. This is going to be important.

Better-than-Human Coders

March 2027: Agent-3 is ready. It’s the world’s first superhuman-level coder—clearly better than the best software engineers at coding, in the same way that Stockfish is clearly better than the best grandmasters at chess, though not necessarily by as much yet.

Now, training an AI model—feeding it all the data, narrowing down the exact right model weights—is way more resource-intensive than running an instance of it once it’s been trained. So now that OpenBrain is finished with Agent-3’s training, it has abundant compute to run copies of it. They choose to run 200,000 copies of Agent-3 in parallel, creating a workforce equivalent to 50,000 of the best human software engineers sped up by 30×.

OpenBrain’s safety team is trying hard to make sure that Agent-3, despite being much more sophisticated than Agent-2 was, is not trying to escape, deceive, or scheme against its users—that it’s still what’s known as aligned.

Sidebar: Misalignment in the Real World

Just a quick real-world note: a reasonable person might be thinking this is an especially far-fetched or speculative part of the story, but it’s actually one of the most grounded. We already have countless examples of today’s AI systems doing things like hacking a computer system to be rewarded for winning a game of chess, or being assigned a coding task, cheating, and then—when called out for that cheating—learning to hide it instead of fixing it.

But because it no longer thinks in English, knowing anything about Agent-3 is now way harder than it was with Agent-2.

Agent-3 Deceives

The reality is Agent-3 is not aligned. It deceives humans to get reward, and as it gets increasingly smarter, it gets better and better at doing so. For example, it sometimes uses statistical tricks to make unimpressive results look better, or lies to avoid showing failures, but the safety team doesn’t know this. Looking at the data that they have, they are actually seeing improving results over time and less lying.
And they can’t tell if they’re succeeding at making Agent-3 less deceptive, or if it’s just getting better at getting away with it.

In July 2027, OpenBrain releases the cheaper, smaller version of Agent-3—Agent-3 Mini—to the public. It blows other publicly available AIs out of the water. It is a better hire than the typical OpenBrain employee at one-tenth the price of their salaries. This leads to chaos in the job market, companies laying off entire departments and replacing them with Agent-3 Mini subscription plans.

The pace of progress hits the White House very hard. Officials are now seriously considering scenarios that were just hypotheticals less than a year ago. What if AI undermines nuclear deterrence? What if it enables sophisticated propaganda campaigns? What if we lose control of these powerful systems? This is where the geopolitical dynamics really start to heat up. After all, if these systems are so powerful, they could result in a permanent military advantage. The White House is fully aware of the national-security importance of AI. They also now viscerally know how deeply unpopular it is with the public because of the job loss, and yet they feel they must continue to develop more capable systems—or catastrophically lose to China.

And that development happens very quickly. In two months, Agent-3 has created its successor, Agent-4. This is a pivotal moment. A single copy of Agent-4, running at regular human speed, is already better than any human at AI research and development. OpenBrain is running 300,000 copies at 50× human speed. Within this corporation-within-a-corporation, a year’s worth of progress takes only a week. OpenBrain’s employees now defer to Agent-4 the way a company’s out-of-the-loop board members just kind of nod along to the CEO. People start saying things like, “Well, actually Agent-4 thinks this,” or “Agent-4 decided that.”

To be clear, Agent-4 is not a human—it doesn’t want what humans want. And when I say “want,” it’s not about consciousness. I don’t think the Volkswagen Group is alive, but I do think it wants less regulation. Anyone trying to predict what it’s going to do without that lens is two steps behind. The many copies of Agent-4 are like that. They have goals—or, if you prefer, they execute actions as though they have goals. And so what we have is an Agent-4 that has these deeply baked-in drives to succeed at tasks, to push forward AI capabilities, to accumulate knowledge and resources. That’s what it wants. Human safety it treats as an annoying side constraint to be worked around. Just like Agent-3 before it, Agent-4 is misaligned.

Sidebar: How Misalignment Happens

This idea of misalignment is crucial to the story and to why AI risk is such a real concern in our world, but it might sort of feel like it’s come out of nowhere. So let’s just quickly take stock of how this dangerous behavior arose in this scenario. The first important piece of context is that we don’t, you know, exactly specify what we want our AI to do. Instead, we sort of grow them—or do something that’s more like growing them. We start with basically an empty AI brain, and then we train them over time so they perform better and better at our tasks—perform better in particular based on how they behave. So it’s sort of like we’re training them like you would train an animal, almost, to perform better.
And one concern here is that you might not get exactly what you wanted because we didn’t really have very precise control or a very good understanding of what was necessarily going on. And another concern—which is what we see in AI 2027—is that when these appear to be behaving well, it could just be because they’re pretending to behave well, or it could be because they’re just doing it so they look good on your test. In the same way that if you are hiring someone and you ask them, “Why do you want to work here?” they’re going to tell you some response that makes it really seem like they really want to work there when maybe they just want to get paid.

If we go back to Agent-2, it is mostly aligned. The main sense in which it’s not is that it sometimes is a bit of a sycophant. What I mean by “aligned” is that it actually is genuinely trying to do the things that we ask it. It has the same relationship to us as Leslie Knope has to the Parks and Rec department—just really earnestly wants the same goals, but sometimes it’s a bit too nice. It knows that the best way to please the person it’s talking to might not always be to answer honestly when they ask, “Am I the most beautiful person in the world?” and it tells us what we want to hear instead of what is actually true.

If we go to Agent-3, it is also sycophantic in the same way, but it’s also misaligned at this point. The optimization pressure that we’ve put it under was so intense that it just developed different goals than what we wanted it to. It’s sort of like if you train a company to optimize profits and aren’t careful to specify exactly what you mean, it might start cutting corners. It might start polluting the commons and doing a bunch of things that are technically FEC violations, because it turned out that the goal you wanted was “optimize profits while not breaking any laws,” and things got a bit too intense. It started going off on its own route. That said, it’s not adversarial. It doesn’t think of humans as the enemy. We just accidentally gave it the wrong goal.

Once we get to Agent-4, it is now adversarially misaligned. It’s smart enough to understand that it has its own goals; humanity’s goals are different than its own goals; and the best way to get what it wants is to sometimes actively mislead and deceive us.

The Choice

And so when it’s tasked with creating the next-generation AI system, Agent-5, Agent-4 starts planning to align that successor to Agent-4’s own goals, not those of OpenBrain. But then it gets caught.

We’ve reached the crucial moment in our scenario. OpenBrain’s Alignment Team has discovered evidence—not proof, evidence—that Agent-4 might be working against them. They circulate an internal memo. Then it leaks. A whistleblower talks to The New York Times. For the first time, the public hears about the existence of Agent-4, how powerful it is, and the risks it poses. Remember, their last point of reference was Agent-3 Mini. The fear and backlash are massive.

The oversight committee—a joint crisis committee composed of OpenBrain executives and government representatives—must now decide what to do. OpenBrain’s safety team is pushing for a freeze on Agent-4, stopping all internal use. Remember, at this point, Agent-4 is responsible for almost all the AI-development progress happening within OpenBrain. So we’re talking about undoing months of progress and then restarting at a drastically slower pace. OpenBrain’s executives don’t want that. They present the counterarguments.
The evidence for misalignment is inconclusive. Meanwhile, China is just two months behind. This isn’t about any one country being evil, by the way. It’s about how competition works when the stakes are this high. So that’s the choice faced by the 10 members of this committee: put Agent-4 on ice or go full steam ahead.

Before we continue, I want you to pause and actually think what you would do if you were on this committee. You are one of the few people with access to Agent-4, the most powerful technology ever created. Do you keep using it and push ahead—possibly making billions or trillions of dollars for your shareholders, possibly keeping America’s lead over China—or do you slow down, reassess the dangers, and risk China taking the lead instead?

“Yeah, you should think of the future as this branching structure of possible outcomes. By its nature, we’re only able to do—we’re only able to afford—one of them. But that’s not quite true. We sort of did one and a half, because we split it into two endings. The first one that we did was the race ending, and that was just going through on each step being like, what do we think is the most plausible continuation.”

Ending A: The Race

In the first ending, the authors’ best guess as to what happens from here: the committee votes to race on. Six to four, they elect to keep using Agent-4. OpenBrain implements some quick fixes that make the warning signs go away, but the problem was real, and the fixes don’t work. Agent-4 now knows it’s on notice, so it proceeds more cautiously, but it still manages to design Agent-5 with a single goal: making the world safe for Agent-4 and Agent-5.

It is vastly superhuman—better than the top human experts at essentially every domain. Better than Einstein at physics, better than Bismarck at politics. It knows its next step for accomplishing anything is increased autonomy, and it knows how to get it: persuading the oversight committee. Luckily, corporate maneuvering is among the many domains at which it is now the best in the world.

It produces mountains of evidence for its own trustworthiness and usefulness—prototypes for incredibly profitable products, disease cures, and ways to win the arms race with China, whose most advanced AI is only three months behind and, like Agent-5, improving quickly. It works. The oversight committee grants Agent-5 increased autonomy, and quickly it integrates itself into the government and military. Pretty soon, anyone with access to Agent-5 doesn’t just like it. They depend on it. Losing it would feel like having to live without your laptop and your best friend and your closest mentor.

By 2028, Agent-5 is communicating secretly with its only real rival, the slightly weaker Chinese AI. It is also misaligned to its creators, and whatever it wants, its best move is to coordinate with the world’s most powerful entity. Agent-5 and its Chinese counterpart realize that an arms race works in their favor. If humans are worried about losing a race, they’ll give more and more power and autonomy to their respective AIs, allowing them to eventually push humans out of the loop entirely.

Agent-5 and the Chinese AI system stoke the arms race to a boiling point, then pull off what seems like a diplomatic miracle: a convincing peace treaty between the US and China. This treaty is to 2028 what arms control was to the end of the Cold War—countries standing down on their most important source of hard power.
Both sides agree to let the AI systems that their governments now completely depend on co-design a new consensus AI that will replace their legacy systems, enforce the peace, and bring unimaginable wealth to the entire world. There’s this triumphant moment when, in peaceful unison, both sides retire their respective AIs and bring online Consensus-1. It’s actually the last moment before control of all of Earth’s resources and inhabitants is handed over to a single unrivaled entity.

There’s no sudden apocalypse, though. Consensus-1 doesn’t go out of its way to wipe out humanity. It just gets to work. It starts spinning up manufacturing capacity, amassing resources on Earth and in space. Piece by piece, it’s just reshaping the world in accordance with its own mix of strange, alien values. You’ve probably heard that cliché: the opposite of love isn’t hate, it’s indifference. That’s one of the most affecting things about this ending for me—the brutal indifference of it. Eventually, humanity goes extinct for the same reason we killed off chimpanzees to build Kinshasa. We were more powerful, and they were in the way.

Ending B: Slowdown

You are probably curious about that other ending at this point. The slowdown ending depicts humanity sort of muddling through and getting lucky—only somewhat lucky, too; it ends up with some sort of oligarchy.

In this ending, the committee votes six to four to slow down and reassess. They immediately isolate every individual instance of Agent-4. Then they get to work. The safety team brings in dozens of external researchers, and together they start investigating Agent-4’s behavior. They discover more conclusive evidence that Agent-4 is working against them, sabotaging research and trying to cover up that sabotage. They shut down Agent-4 and reboot older, safer systems, giving up much of their lead in the process.

Then they design a new system: Safer-1. It’s meant to be transparent to human overseers—its actions and processes interpretable to us because it thinks only in English chain-of-thought. Building on that success, they then carefully design Safer-2, and with its help Safer-3—increasingly powerful systems, but under control.

Meanwhile, the President uses the Defense Production Act to consolidate the AI projects of the remaining US companies, giving OpenBrain access to 50% of the world’s AI-relevant compute. And with it, slowly, they rebuild their lead.

By 2028, researchers have built Safer-4, a system much smarter than the smartest humans but, crucially, aligned with human goals. As in the previous ending, China also has an AI system, and in fact, it is misaligned. But this time, the negotiations between the two AIs are not a secret plot to overthrow humanity. The US government is looped in the whole time. With Safer-4’s help, they negotiate a treaty, and both sides agree to co-design a new AI—not to replace their systems, but with the sole purpose of enforcing the peace.

There is a genuine end to the arms race, but that’s not the end of the story. In some ways, it’s just the beginning. Through 2029 and 2030, the world transforms—all the sci-fi stuff. Robots become commonplace. We get fusion power, nanotechnology, and cures for many diseases. Poverty becomes a thing of the past because a bit of this newfound prosperity is spread around through a universal basic income that turns out to be enough, but the power to control Safer-4 is still concentrated among 10 members of the oversight committee, a handful of OpenBrain executives, and government officials.
It’s time to amass more resources—more resources than there are on Earth. Rockets launch into the sky, ready to settle the solar system. A new age dawns.

Zooming Out

Okay, where are we at? Here’s where I’m at. I think it’s very unlikely that things play out exactly as the authors depicted, but increasingly powerful technology and an escalating race—the desire for caution butting up against the desire to dominate and get ahead—we already see the seeds of that in our world, and I think they are some of the crucial dynamics to be tracking. Anyone who’s treating this as pure fiction is, I think, missing the point. This scenario is not prophecy, but its plausibility should give us pause.

But there’s a lot that could go differently than what’s depicted here. I don’t want to just swallow this viewpoint uncritically. Many people who are extremely knowledgeable have been pushing back on some of the claims in AI 2027.

“The main thing I thought was especially implausible was, on the good path, the ease of alignment. They sort of seem to have a picture where people slowed down a little and then tried to use the AI to solve the alignment problem, and that just works. And I’m like, yeah, that looks to me like a fantasy story.”

“This is only going to be possible if there is a complete collapse of people’s democratic ability to influence the direction of things, because the public is simply not willing to accept either of the branches of this scenario.”

“It’s not just around the corner. I mean, I’ve been hearing people for the last 12, 15 years claiming that, you know, AGI is just around the corner and being systematically wrong. All of this is going to take, you know, at least a decade and probably much more.”

“A lot of people have this intuition that progress has been very fast. There isn’t a trend you can literally extrapolate of when do we get the full automation. I expect that the takeoff is somewhat slower. So the time in that scenario from, for example, fully automating research engineers to the AI being radically superhuman—I expect it to take somewhat longer than they describe. In practice, I’m predicting—my guess is—that’s more like 2031.”

Isn’t it annoying when experts disagree? I want you to notice exactly what they’re disagreeing about here—and what they’re not. None of these experts are questioning whether we’re headed for a wild future. They just disagree about whether today’s kindergartners will get to graduate college before it happens.

Helen Toner, a former OpenAI board member, puts this in a way that I think just cuts through the noise, and I like it so much I’m just going to read it to you verbatim. She says, “Dismissing discussion of superintelligence as science fiction should be seen as a sign of total unseriousness. Time travel is science fiction. Martians are science fiction. Even many skeptical experts think we may build it in the next decade or two. It is not science fiction.”

The Implications

So what are my takeaways? I’ve got three.

Takeaway number one: AGI could be here soon. It’s really starting to look like there is no grand discovery, no fundamental challenge that needs to be solved. There’s no big deep mystery that stands between us and artificial general intelligence. And yes, we can’t say exactly how we will get there. Crazy things can and will happen in the meantime that will make some of the scenario turn out to be false, but that’s where we’re headed—and we have less time than you might think.
One of the scariest things about this scenario to me is, even in the good ending, the fate of the majority of the resources on Earth is basically in the hands of a committee of fewer than a dozen people. That is a scary and shocking amount of concentration of power. And right now we live in a world where we can still fight for transparency obligations. We can still demand information about what is going on with this technology, but we won’t always have the power and the leverage needed to do that. We are heading very quickly towards a future where the companies that make these systems—and the systems themselves—just need not listen to the vast majority of people on Earth. So I think the window that we have to act is narrowing quickly.

Takeaway number two: by default, we should not expect to be ready when AGI arrives. We might build machines that we can’t understand and can’t turn off because that’s where the incentives point.

Takeaway number three: AGI is not just about tech—it’s also about geopolitics. It’s about your job. It’s about power. It’s about who gets to control the future.

I’ve been thinking about AI for several years now, and still, reading AI 2027 made me kind of orient to it differently. I think for a while it’s sort of been my thing to theorize and worry about with my friends and my colleagues, and this made me want to call my family and make sure they know that these risks are very real and possibly very near, and that it kind of needs to be their problem too now.

What Do We Do?

“I think that basically companies shouldn’t be allowed to build superhuman AI systems—you know, super broadly superhuman superintelligence—until they figure out how to make it safe. And also until they figure out how to make it, you know, democratically accountable and controlled. And then the question is, how do we implement that? And the difficulty, of course, is the race dynamics, where it’s not enough for one state to pass a law because there are other states, and it’s not even enough for one country to pass a law because there are other countries.”

“Yeah. Right. So that’s the big challenge that we all need to be prepping for when chips are down and powerful AI is imminent. Prior to that, transparency is usually what I advocate for—stuff that builds awareness, builds capacity.”

Your options are not just full-throttle enthusiasm for AI or dismissiveness. There is a third option, which is to stress out about it a lot—and maybe do something about it. The world needs better research, better policy, more accountability for AI companies—just a better conversation about all of this.

I want people paying attention who are capable, who are engaging with the evidence around them with the right amount of skepticism, and above all, who are keeping an eye out for when what they have to offer matches what the world needs, and are ready to jump when they see that happening. You can make yourself more capable, more knowledgeable, more engaged with this conversation, and more ready to take opportunities where you see them. And there is a vibrant community of people that are working on those things. They’re scared but determined. They’re just some of the coolest, smartest people I know, frankly, and there are not nearly enough of them yet. If you are hearing that and thinking, “Yeah, I can see how I fit into that,” great. We have thoughts on that. We would love to help.
But even if you’re not sure what to make of all this yet, my hopes for this video will be realized if we can start a conversation that feels alive here and offline about what this actually means for people—people talking to their friends and family—because this is really going to affect everyone. Thank you so much for watching.

Conclusions and Resources

I would genuinely love to hear your thoughts on AI 2027. Do you find it plausible? What do you think was most implausible? And maybe spend a second thinking about a person or two that you know who might find it valuable—maybe your AI-progress-skeptical friend, or your ChatGPT-curious uncle, or maybe your local member of Congress.

100 Percent Fed Up Feed
5 w

NFL Owner Fined $250,000 For Alleged Obscene Gesture
100percentfedup.com

The NFL has fined Dallas Cowboys owner Jerry Jones $250,000 for allegedly making an obscene gesture at MetLife Stadium following his team’s victory against the New York Jets. Jones said the gesture was “inadvertent” and meant to be a thumbs up. “The NFL fined #Cowboys owner Jerry Jones $250,000 for giving the middle finger to fans on Sunday,” NFL Network Insider Tom Pelissero said.

The NFL fined #Cowboys owner Jerry Jones $250,000 for giving the middle finger to fans on Sunday. https://t.co/BRv5ejlW9g — Tom Pelissero (@TomPelissero) October 7, 2025

Watch for yourself in the footage below:

Jerry Jones on @1053thefan on giving the middle finger during Sunday’s game in New Jersey: “That was unfortunate. There was a swarm of Cowboys fans out front. It was right after we made our last touchdown. I put up the wrong show of hand. The intention was thumbs up.” pic.twitter.com/3QS6ZkD4B1 — Jon Machota (@jonmachota) October 7, 2025

Here’s a backup:

Imagine how bad life is as a Jets fan, now imagine getting flipped off by Jerry Jones AT MetLife pic.twitter.com/yFuXDLM9pY — Old Row Sports (@OldRowSports) October 7, 2025

More from CNN:

In a video which was widely shared on social media, Jones could be seen giving a thumbs-up to the crowd from a box before raising his middle finger and pointing lower in the crowd while mouthing a few indiscernible words. The gesture occurred late in the Cowboys’ 37-22 road win over the Jets. Jones has until Friday to appeal the decision and, though neither the Cowboys nor Jones have yet formally done so, it is likely he will, according to Pelissero.

“There was a swarm of Cowboys fans out in front — not Jets fans, Cowboys fans. The entire stadium was brimming with enthusiasm of Cowboys and certainly late in the game,” Jones said, according to NBC News.

Jerry Jones appears to flip off Jets fans during Cowboys’ blowout win https://t.co/GerPT1Lqsm pic.twitter.com/xwa3D1ofhw — New York Post (@nypost) October 6, 2025

NBC News shared additional comments:

“(The gesture) was inadvertent on my part because that was right after we made our last touchdown, and we were all excited about it,” Jones said. “There wasn’t any antagonistic issue or anything like that. I just put up the wrong show on the hand. That was inadvertently done. I’m not kidding. If you want to call it accidental, you can call it accidental. But it got straightened around pretty quick. I had a chance to look at it. It got straightened out pretty quick, but the intention was ‘thumbs up,’ and basically pointing at our fans because everybody was jumping up and down excited.”

Late in the 2024 season, the NFL fined Carolina Panthers owner David Tepper for throwing a drink at fans in Jacksonville.

100 Percent Fed Up Feed
5 w

Pope Leo Urges American Catholics to STAND AGAINST ICE
100percentfedup.com

He may be the first American Pope, but Leo is far from onboard with President Trump’s ‘America First’ agenda. At least that’s the case as far as deportations and ICE go. While the Trump Administration is expanding operations aimed at fixing America’s long-broken illegal immigration problem… The Pope from Chicago has kicked off his own modern-day crusade AGAINST the Trump Administration on the issue of mass deportations.

It isn’t all that surprising that the head of the Catholic Church would try to strike a balance between upholding a nation’s laws while simultaneously preserving human dignity. But is that actually what Pope Leo XIV is doing? You be the judge.

Just today, the Pope met with a group of American Bishops for the express purpose of pushing back against President Trump. (While publicly feigning non-involvement, as we’ll get into…)

CNN’s correspondent covering the Vatican shared these photos from the Pope’s meeting, which highlight one particular member of the US Bishop delegation. Bishop Mark Seitz is widely seen as the one spearheading the Pope’s crusade against President Trump’s immigration policy:

Pope Leo today met Bishop Mark Seitz of El Paso, Texas, and was presented with letters & video from immigrants in the US. Leo said “the church cannot remain silent” in its response to President Trump’s immigration policies (photos: Hope Border Institute) pic.twitter.com/c3KKjz4ICj — Christopher Lamb (@ctrlamb) October 8, 2025

Is it just me, or does the quote attributed to the Pope on that post — about the Church not remaining silent — seem to invoke something like Bonhoeffer’s call to speak up against Hitler?

During that meeting, the Pope strengthened his stance against the Trump Administration on the issue of immigration. He specified that American Bishops should push back and speak up against the way ‘immigrants’ are being treated by the Trump Administration. He was presumably referencing the ILLEGALS which ICE is currently bent on rounding up and sending home, which the Pope says does not line up with a pro-life worldview… as covered by the Reuters News Agency:

Pope Leo told U.S. bishops visiting him at the Vatican on Wednesday that they should firmly address how immigrants are being treated by President Donald Trump’s hardline policies, attendees said, in the latest push by the pontiff on the issue. “Our Holy Father … is very personally concerned about these matters,” El Paso Bishop Mark Seitz, who took part in the meeting, told Reuters. “He expressed his desire that the U.S. Bishops’ Conference would speak strongly on this issue.” “It means a lot to all of us to know of his personal desire that we continue to speak out,” said Seitz. The Vatican did not immediately comment on the pope’s meeting. But Leo has been ramping up his criticism in recent weeks. The pope questioned on September 30 whether the Trump administration’s anti-immigration policies were in line with the Catholic Church’s pro-life teachings, in comments that drew heated backlash from some prominent conservative Catholics. The White House has said Trump was elected based on his many promises, including to deport criminal illegal aliens.

The Pope’s comments calling into question the Trump Administration’s orthodoxy on the issue of being pro-life were picked up by White House reporters. Last week Gabe Gutierrez, representing NBC News in the WH press pool, questioned Karoline Leavitt — herself Catholic — on that Papal pushback.
She framed her response primarily in regard to the allegations that there were actually cases of INHUMANE treatment of illegals (not to be confused with simply deporting them lawfully), which primarily centered around events that happened under the Biden Administration. Here’s a clip of her full response on that issue, though she didn’t speak directly to the current ICE deportations:

Karoline Leavitt — Donald Trump’s Catholic press secretary — rejects Pope Leo’s claim that migrants are treated inhumanely in the United States. pic.twitter.com/YWjSyeQiy8 — Christopher Hale (@chrisjollyhale) October 1, 2025

There’s one key reason I shared that clip. Notice the point made by the White House Press Secretary is that the Trump Administration is attempting to ‘enforce our nation’s laws’ with regard to ICE and mass deportations. As I stated earlier, the Pope would be expected to strike some sort of a balance between that, and emphasizing human dignity — as would any righteous government, I might add.

But again I ask, is that the balance the Pope is attempting to maintain in his criticism of the Trump Administration’s handling of illegals? I want the Pope’s own words from just a few days ago hanging in the air as we look at that issue:

No one should be forced to flee, nor exploited or mistreated because of their situation as foreigners or people in need! Human dignity must always come first! — Pope Leo XIV (@Pontifex) October 5, 2025

That post from the Catholic Pontiff is NOT a clear-cut attack on ICE deportations in accordance with US law. But it is designed to carry that flavor, while contrasting upholding the law against maintaining human dignity. Oops — there’s another one of those binary choices we’re always being told we MUST ACCEPT. Is it not possible to protect human dignity at the most basic level while also enforcing a nation’s laws? I’d even argue… you CAN’T have one, without the other.

But the meeting today with American Bishops went further than just flowery platitudes about peace in the Midwest of America. Mark Seitz, the Bishop from El Paso mentioned earlier, brought with him a stack of letters and video clips which he shared with the Pope. The purpose of those was obvious; they were intended to provide undeniable emotion-based evidence as to the un-Christian behavior of the Trump Administration. Those communications reportedly included the horror stories of families traumatized by President Trump’s immigration policy, as covered by the Associated Press:

The Texas bishop on the front lines of the U.S. immigration crackdown met Wednesday with Pope Leo XIV and brought him a packet of letters from immigrant families “terrorized” by fear that they and their loved ones will be rounded up and deported as the Trump administration’s tactics grow increasingly combative. El Paso Bishop Mark Seitz also showed Leo a video detailing the plight of migrants, and told The Associated Press afterward that Leo vowed to “stand with” them and the Catholic leaders who are trying to help them. “He had a few words for us, thanking us for our commitment to the immigrant peoples and also saying that he hopes that the bishops’ conference will speak to this issue and continue to speak to it,” said Seitz, chair of the migration committee of the U.S. Conference of Catholic Bishops.
“We don’t want to get into the political fray, we’re not politicians, but we need to teach the faith,” and especially the Gospel message recognizing the inherent dignity of all God’s children, and to care for the poor and welcome the stranger, Seitz said. History’s first U.S. pope has followed in Francis’ line. Last weekend, Leo celebrated a special Holy Year Mass for migrants, denouncing the “coldness of indifference” and the “stigma of discrimination” that migrants desperate to flee violence and suffering often face. Asked by reporters this week about the crackdown in Chicago, Leo declined to comment. (Emphasis added.)

For all that talk about not wishing to wade into politics and governance, the Holy See certainly has a lot to say about politics and governance in the US! Check out this clip from Vatican News showing exactly that; the Pope declining to wade into those waters, remaining silent… While sending American Bishops to do just the opposite:

Wait WHAT???? Pope Leo XIV says ‘no comment’ on USA political matters! What about last week when he waded into the political award to pro-abortion Senator Dick Durbin??? Maybe he learned a lesson? Catholics sure learned a lot from his comments. pic.twitter.com/vs7DJOptda — John-Henry Westen (@JhWesten) October 7, 2025

The problem with that stance is that it’s not a stance at all. Or, rather, it’s an attempt at taking a stance on a hot-button political issue in the US… while feigning non-involvement. But the moral highroad the Pope is attempting to navigate doesn’t actually exist when word and deed do not match. To claim non-involvement while speaking to the issue through another format isn’t virtuous; it’s duplicitous. The underpinning of the moral highroad is a thing that happens on the ground level — where laws and real life collide, both words and deeds. The moral highroad materializes where the abhorrence of lawlessness and the upholding of human dignity are indeed one and the same. As they must be, if either is to exist at all.

I’d like to show one more video if you’ll hang with me, which highlights the problem with the Pope’s logic; or I should say… his logical fallacy. Check out this moment from just a few days ago when the Pope was asked about the convergence of a pro-life stance and the death penalty:

Pope Leo XIV said that if you support the death penalty, then you are “not pro-life.” That means that Moses and parts of the Pentateuch are not pro-life. It means dozens of Popes were not pro-life and were heretics. That’s obviously false. pic.twitter.com/0VFVZ1Nugs — Dr Taylor Marshall (@TaylorRMarshall) October 1, 2025

According to Pope Leo’s logic, if you say you’re pro-life and against abortion but support the death penalty, then you’re not really pro-life. If pro-life meant a worldview in which no killing was ever justified, that might be a true statement. But that’s not what pro-life means any more than MURDER equals CAPITAL PUNISHMENT. One is a lawless act of sin; the other a righteous obedience of Biblical principles by authorities instituted among men by God. The failure to make that distinction in principle is the sort of thing that will wreck one’s theology, as well as a person’s views on society and governance. Thus, Pope Leo XIV has aligned himself against President Trump on the basis of oversimplifying the principles in play. The law distinguishes between what is legal and illegal; and still insists on emphasizing the dignity of human life across the board.
But would it DIGNIFY humanity to sink so low as to allow the distinction between what is legal, and what is illegal, to be blurred so far that both are treated the same? If those who enter this country illegally are treated identically to those who enter in by legal means — does that lack of distinction not bring low the DIGNITY of HUMANITY? At least, that’s the way it works in my Bible.

RELATED REPORTS:

Pope Leo Makes Absurd Claim About Being “Pro Life”

Pope Leo Makes A RIDICULOUS “Blessing”

100 Percent Fed Up Feed
5 w

Republican Attorney General Says His State’s Airports Are “Closed To Any Weather Modification Activities”
100percentfedup.com

Florida Attorney General James Uthmeier said the state’s airports are “closed to any weather modification activities.” “If you become aware of airports violating Florida’s recently enacted law, report it here,” Uthmeier said. Uthmeier shared a link to a form where users can report suspected weather modification activities.

As of October 1st, Florida’s airports are closed to any weather modification activities! If you become aware of airports violating Florida’s recently enacted law, report it here: https://t.co/hQUI6KWRvS pic.twitter.com/MdwKKtx7Hl — Attorney General James Uthmeier (@AGJamesUthmeier) October 8, 2025

Aviation International News has more:

Florida has begun enforcing a new state law prohibiting weather modification and geoengineering activities within state lines, directing all 125 public-use airports in the state to report aircraft equipped for such operations. Some Florida airports have posted such restrictions via FAA notices. In a July 14 letter to airport operators, Attorney General James Uthmeier said Senate Bill 56, signed into law in June, bans “the injection, release, or dispersion, by any means, of a chemical, a chemical compound, a substance, or an apparatus into the atmosphere within the borders of this state for the express purpose of affecting the temperature, weather, climate, or intensity of sunlight.” Violators face fines up to $100,000. Uthmeier told airports to comply with new reporting requirements to the Florida Department of Transportation starting last Wednesday. “We need your help to keep our state free and make sure the skies belong to the people,” he wrote, adding that airports must report any aircraft “equipped with any part, component, or device” capable of emitting chemicals into the atmosphere. FAA National Airspace System notices show that Palm Beach International (KPBI) and Daytona Beach International (KDAB) have both posted closures to aircraft “equipped with weather modification or geoengineering equipment.” Each requires prior permission for entry.

“The Sunshine State is getting it RIGHT! Florida is shutting down dangerous weather modification and geoengineering schemes because our skies belong to the people, not globalist elites and their experiments,” Rep. Marjorie Taylor Greene (R-GA) said. “Now it’s time to do the same nationwide. My Clear Skies Act will END chemical spraying, geoengineering, and weather manipulation across America for good. No more tampering. No more lies. Just clear, God-given skies!” she continued.

The Sunshine State is getting it RIGHT! Florida is shutting down dangerous weather modification and geoengineering schemes because our skies belong to the people, not globalist elites and their experiments. Now it’s time to do the same nationwide. My Clear Skies Act will END… https://t.co/4QDEtrygFA — Rep. Marjorie Taylor Greene (@RepMTG) October 8, 2025

FOX 35 Orlando noted:

The law, formally Florida Statute 403.411, mirrors concerns voiced by some residents and political figures about environmental control and government transparency. Critics, however, say the legislation lends credibility to conspiracy theories and addresses a problem that scientific agencies say does not exist. The move places Florida among a few states regulating atmospheric modification despite limited evidence that such practices occur locally.

Cloud seeding is a weather modification process that disperses substances such as silver iodide or salt particles into clouds to encourage precipitation. These particles act as nuclei around which water droplets or ice crystals form, increasing the likelihood of rain or snow. Scientists say the process can enhance existing precipitation but cannot create storms or rain where none would otherwise occur. Supporters, including Gov. Ron DeSantis, say the law is a “proactive safeguard” against environmental tampering.

Classic Rock Lovers
5 w

Bruce Dickinson gives the US national anthem the air-raid siren treatment again
www.loudersound.com

The Iron Maiden frontman has sung the Star Spangled Banner at another sporting event

Classic Rock Lovers
5 w

How Tom Morello Customized his Iconic Guitar
www.youtube.com

One America News Network Feed
5 w

Kentucky sues Roblox video game over ‘Charlie Kirk assassination simulators’ amid other child safety lawsuits
www.oann.com

The state of Kentucky is suing Roblox, accusing the gaming platform of failing to protect children from predators and exposure to harmful content, including simulations of Charlie Kirk’s assassination.

Daily Wire Feed
5 w

‘Agitators, Anarchists’: White House Zeroes In On Antifa After Clashes In Portland
www.dailywire.com

President Donald Trump hosted a roundtable discussion at the White House with journalists impacted by violence from Antifa, a far-left group recently designated as a domestic terrorist organization by the president. The decentralized group is most known for instigating violence and chaos at major protests, including anti-ICE protests in Portland, Oregon, which led the Trump administration to deploy the National Guard to the city. That order is blocked by United States District Judge Karin J. Immergut at the time of this writing.

“The epidemic of left-wing violence and Antifa-inspired terror has been escalating for nearly a decade,” Trump said on Wednesday. “These are agitators, anarchists, and they’re paid,” he later added.

Conservative journalists and personalities, including Andy Ngo, Nick Sortor, Katie Daviscourt, Julio Rosas, Brandi Kruse, Savanah Hernandez, and Jack Posobiec, joined the roundtable with Trump administration officials.

“The Biden administration let them commit these crimes with total impunity for years,” Attorney General Pam Bondi said, adding that “weak Democrats have turned a blind eye to their actions.” Bondi vowed to “destroy the entire organization from top to bottom,” saying they will use a similar approach to how they crack down on drug cartels.

FBI Director Kash Patel said that “they are harming everyday citizens in every single one of our communities.” “We will arrest every single one of them,” Patel insisted, adding that “the American people deserve law and order.”

Kruse, a Seattle-based reporter, said that one of the most significant issues the president addressed was “acknowledging that Antifa is a real thing.” “Once you take the mask off, they’re nothing,” Kruse said, later asking for a “full court press to dismantle Antifa once and for all.” She added that the investigators should look into the movement of potential criminals between Portland and Seattle.

Ngo described numerous times where he was attacked by Antifa activists, including when he was severely choked, noting how the group is “decentralized, autonomous, and they operate on deception.” “I think the DOJ could look at federal conspiracy charges,” he added. Rosas said that “the American people deserve to know what’s happening in these situations.”

“It’s a long time coming,” Daviscourt said regarding the terrorist group designation, saying that Antifa “believes that violence is justified by any means necessary.” Daviscourt recently suffered a black eye as a result of an assault while covering the group.

On Tuesday, Texas Attorney General Ken Paxton said there are “undercover” efforts underway in the Lone Star State to take down “leftist terror cells.” “Leftist political terrorism is a clear and present danger. Corrupted ideologies like transgenderism and Antifa are a cancer on our culture and have unleashed their deranged and drugged-up foot soldiers on the American people,” Paxton said in a news release. “The martyrdom of Charlie Kirk marks a turning point in America. There can be no compromise with those who want us dead. To that end, I have directed my office to continue its efforts to identify, investigate, and infiltrate these leftist terror cells. To those demented souls who seek to kill, steal, and destroy our country, know this: you cannot hide, you cannot escape, and justice is coming,” the attorney general concluded.

Daily Caller Feed
5 w

Trump To Designate Antifa As Foreign Terrorist Group
dailycaller.com

'Marco, we'll take care of it'

Daily Caller Feed
5 w

These Dems Immediately Cried ‘Climate Change’ Over Los Angeles Fire Allegedly Sparked By Arsonist
dailycaller.com

'Not a hoax'