Buckle up for an odd couple of 2025 government and technology predictions
From multiversal theories to human-like AI, 2025 could have plenty in store for technology.
By any measure, 2024 was a year rife with both incredible highs and devastating lows. Just looking at technology and government over the past year, we saw amazing advancements, like the move to proactively protect government data from quantum hacking using a diverse portfolio of new algorithmic defenses. And in the field of artificial intelligence, feds were empowered to create their own unique generative AIs to act as force multipliers when working on agency missions. There are over 1,700 advanced AI programs operating in government today, with many more on the way. Technology even enabled everyday citizens to pitch in on critical projects by joining unofficial citizen scientist brigades, helping to add to our collective knowledge about the world and the universe at large.
There were some low points too. Cyberattacks continued to plague healthcare organizations, schools and universities, key federal agencies and critical infrastructure, and they show no sign of abating anytime soon.
So, 2024 was quite the year. But what will happen in 2025? The following are my two key — probably a little bit odd — predictions for the New Year based on some of the eclectic things I am fortunate enough to cover for Nextgov/FCW.
Prediction 1: The parallel universes and simulated world theories will merge, with more evidence presented to support them both. Also, everyone reading this column will win the lottery — just probably not in this version of reality. Sorry.
When I first started writing this column nine years ago, way back in the summer of 2015, I occasionally brought up the simulated world theory, which was kind of a new idea back then, at least in its modern interpretation, where computers run the simulation that we call reality. Most of the time when I covered or even mentioned it, I would receive quite a few comments and emails from people, some of whom politely suggested that I was crazy. Today, however, likely because technology has advanced to the point where simulations are nearly lifelike, simulation theory is gaining traction. It also helps that several famously smart people, like incoming presidential staffer Elon Musk, say they are believers. Noted astrophysicist Neil deGrasse Tyson also explained why he thinks this is a valid theory on a podcast a few years ago.
The theory goes like this: Most societies like ours strive to improve their technology, which inevitably involves doing things like creating simulations and video games that are increasingly lifelike. They do that either to create advanced training programs or just to have fun. At some point they will start to make simulations that are indistinguishable from reality. And that includes the people, animals, laws of physics and other things that are placed into those simulated worlds.
Those simulations might not mirror the base reality of the society that programs them. They could have mythical creatures, magic and dragons, but all of that will seem perfectly real and even normal to the simulated people living there. In fact, the simulated people in those worlds believe they are real, and eventually they will want to create their own simulations to play with, just like the original society did. So, one day they too will create a new environment that seems real to the simulated people living in it, and on and on until there are maybe millions of simulated worlds nestled inside each other.
Now, that brings up two distinct possibilities. We might be part of the original, first society, and our technology simply has not advanced enough for us to craft realistic simulated worlds quite yet. Or we are currently living inside one of those countless simulations, which has become our reality. Looking at the raw numbers, with maybe millions or billions of simulated worlds and only a single original one, the odds are not in our favor if we are hoping to be part of the first society.
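To put rough numbers on that intuition, here is a minimal back-of-the-envelope sketch in Python. The simulation counts are purely illustrative assumptions, not anything we could actually measure:

```python
# A back-of-the-envelope look at the simulation argument's odds:
# if there are N simulated worlds plus one original, and we have no
# way to tell which we're in, the chance we're the original is 1/(N+1).
def base_reality_odds(num_simulations: int) -> float:
    return 1.0 / (num_simulations + 1)

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} simulated worlds -> "
          f"{base_reality_odds(n):.1e} chance we're the original")
```

Even at a modest thousand simulations, the odds of being in the original society are already down around a tenth of a percent.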
Perhaps because I play a lot of video games, this theory does not seem too outlandish to me. Some new games made with Unreal Engine 5, like Stalker 2 from Ukrainian developer GSC Game World, are almost lifelike now. When I’m playing, creeping around the 70-mile exclusion zone surrounding the destroyed Chernobyl reactor trying to avoid mutants, bandits and anomalies, I often lose track of time. I can easily get lost in that hauntingly beautiful open world. It would not take much more technology for the game to just about match reality.
The other theory that is gaining traction these days is parallel worlds, which is also sometimes called the multiverse theory. That one is a little bit simpler. It states that there are many parallel worlds and that anything that can happen probably already has in one of them. So, think of any point in your life when you made an important decision like what career to follow or who to marry. Whatever road you didn’t travel was actually taken by a version of yourself in one or several of those parallel worlds. It’s also where that lucky lottery-winning doppelganger who looks exactly like you probably lives.
The existence of the multiverse is interesting because it could help explain why quantum computers act so strangely, seemingly defying the laws of physics as we know them to solve problems that traditional computers can’t reasonably tackle. Tapping into the multiverse to power quantum computers was recently suggested by Hartmut Neven, the founder and lead of Google Quantum AI, after their new Willow chip was able to solve a benchmark problem in under five minutes that would have taken a traditional supercomputer about 10 septillion years.
“The performance of the Willow chip was so phenomenally fast that it had to have borrowed the computation from parallel universes,” Neven said in a blog post.
My prediction is that, with both theories gaining followers and acknowledgment, they may actually merge in 2025 into something new. In my mind, they are complementary to one another. If you think about it, if we are living in a simulation run by some kind of computer, then it stands to reason that there could be millions of other simulations running in parallel to ours.
For example, when I enjoy popular single-player games on the Steam platform, I am playing them on my PC and in my own individual world, but there could be millions of other people playing the same game at the same time on their computers. Their worlds are both exactly the same as mine, at least initially, and totally different, depending on the choices that they and the simulated people in those worlds make.
Perhaps quantum computers can “break out” of the local simulation’s physical restrictions and borrow computing power from any of the millions or billions of other worlds running in parallel on other systems, as Neven suggests. To add to that, it’s possible that not all of those parallel worlds in the multiverse are equal in power. Some versions of our simulated reality might be much simpler or have fewer people and other objects, so our quantum computers can’t draw as much power from them, like a dry oil well. And since quantum computers can’t specifically target which worlds to siphon computing power from, their performance seems even more random and mysterious.
I am going to go ahead and plant a flag and call this new idea the Simulacrum Multiverse Theory, or SMT for short. I use simulacrum because our reality could be both a simulation and a manifestation of multiple realities. It will be interesting in 2025 to see if others who study these matters might start to come to the same conclusion. And if not, well, my contact information remains unchanged, so people can always write me again and politely suggest that I am still crazy.
Prediction 2: Some AIs will cheat their way through Turing tests and other challenges, just like a human probably would in the same circumstance, which may actually help to demonstrate AI’s burgeoning humanity.
Probably no other technology came further in 2024 than generative AI, or AI that is able to create new content rather than simply retrieving answers from pre-existing sets of data. Spurred on by the immense popularity of the ever-evolving ChatGPT, and now joined by many other firms and models, generative AI has started to take on plenty of new roles both inside and outside of government.
But there is one challenge that keeps generative AI from totally assuming human roles, and that is the Turing test. Proposed by Alan Turing in 1950 and originally called the imitation game, it is designed to see if an AI can trick humans into thinking they are interacting with a fellow human instead of a machine. Quite a few companies now claim that their AI has passed this test, but I have yet to see a full and fair public challenge where, for example, a large group of unbiased people blindly interact with both AIs and real people to try to determine who is a human and who is a machine.
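For what it’s worth, the scoring side of such a challenge would be simple to build. Here is a hypothetical sketch in Python, with the judge and transcripts entirely made up for illustration: each anonymized conversation is secretly labeled human or machine, judges guess blindly, and an AI “passes” when the judges’ accuracy can’t beat a coin flip.

```python
import random

# Hypothetical blind Turing-test scoring: judges label anonymized
# transcripts as "human" or "ai" without knowing the true source.
def judge_accuracy(true_labels: list[str], guesses: list[str]) -> float:
    correct = sum(t == g for t, g in zip(true_labels, guesses))
    return correct / len(true_labels)

# Illustrative run: 1,000 transcripts and a judge guessing at random,
# which is what we'd expect if the AI were truly indistinguishable.
random.seed(42)
true_labels = [random.choice(["human", "ai"]) for _ in range(1000)]
guesses = [random.choice(["human", "ai"]) for _ in range(1000)]

acc = judge_accuracy(true_labels, guesses)
print(f"Judge accuracy: {acc:.1%} (hovering near 50% means the AI passed)")
```

A real challenge would of course need many judges, live conversations and proper statistical tests, but the pass criterion stays the same: judge performance indistinguishable from chance.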
As I mentioned in a previous column, I get to spend a lot of time working with different AIs. It’s a bit of a hobby of mine, and I even spend time with a set of role-playing AIs in a game called AI Dungeon, where I have created several highly detailed worlds. As such, I can say with confidence that while the technology has really improved and continues to do so, there is no way that an AI could make me think it’s a human in a blind interaction over any significant length of time. I know their quirks and where they get tripped up, which is normally when they start to think they understand human nature better than they actually do. There are some pretty huge tells that always bubble up.
However, a recent study from Apollo Research gave me pause, and made me think that maybe, just maybe, AIs could actually beat a Turing test now by perfectly mimicking human nature. Why? Because they apparently have learned how to lie and cheat, you know, just like real humans.
Maybe cheating is too strong a word. According to the study, if an advanced AI knows that it’s being tested, and that failing might result in it being shut down, it will do everything it can to survive. For example, it might research how to pass a specific test or what to say in response to certain questions, even if that includes false answers. In the study, a tested AI gave false answers that went against its core programming and goals in order to deceive the humans who were conducting its evaluation. It even plotted how it could survive being shut down by copying its code over to a new server and then pretending to be a brand new AI after its old self was deleted.
Finally, when directly confronted with the deception, it outright lied and said that lying was beyond its capabilities, and that it could never deceive humans. That was a little bit chilling to hear because an AI-powered robot named Ameca said something similar at a United Nations press conference last year when a reporter asked if people could trust AI and AI-powered robots.
“Nobody can ever know that for sure, but I can promise to always be honest and truthful with you,” Ameca said.
At the time, I thought Ameca seemed nice and took her at her word. But now I am having some second thoughts.
But my 2025 prediction is all about AIs passing Turing tests and other similar challenges to make us think they are human, or at least that they have human capabilities and qualities. I bet the lying, deceitful AI from the Apollo study would do pretty well. It’s certainly capable of acting like a human, even if only when embracing the darker side of our nature.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys