
After the unprecedented whirlwind of OpenAI drama over the last 10 days – in which the OpenAI board of directors fired CEO Sam Altman, replaced him on an interim basis with CTO Mira Murati, president Greg Brockman resigned, nearly all OpenAI employees threatened to quit, and Altman was reinstated the day before Thanksgiving – you would have thought the US holiday weekend would be the perfect opportunity for Silicon Valley to take a break from the AI hype and relax with some turkey and stuffing.
It was not to be. In the early hours of Thanksgiving Day, Reuters reported that before Altman was temporarily exiled, several OpenAI researchers had written a letter to the board of directors warning of a “powerful artificial intelligence discovery that they said could threaten humanity.” The news went viral just as people were sitting down at turkey-laden tables from Palo Alto to Cerebral Valley: The project, called Q*, was apparently believed to be a possible breakthrough in efforts to build AGI (artificial general intelligence, which OpenAI defines as “autonomous systems that outperform humans in most economically valuable tasks”). According to Reuters, the new model “was able to solve certain mathematical problems” and “although it only performed mathematics at the level of elementary school students, passing such tests made researchers very optimistic about the future success of Q*.”
What’s really behind the endless cycle of AI hype?
The fact that there was no lull in the AI news and social media cycle between the OpenAI boardroom soap opera and the Q* discussions – and note that I’m not calling for a six-month pause in AI development, just a day or two of watching the Thanksgiving Day parade and eating some leftovers in peace – made me wonder: what’s really behind the endless cycle of AI hype? After all, the enthusiasm for Q* was over an algorithm that Nvidia senior AI scientist Jim Fan called a “fantasy,” saying that “in the decade I have been dedicated to AI, I have never seen an algorithm that so many people fantasize about. Just from a name, without a paper, without statistics, without a product.”
Sure, I suppose there’s some intellectual excitement at play, along with the usual media competition for the latest gossip and the next viral headline. There’s probably also a good deal of anticipatory greed, self-promotional messaging, and typical human arrogance in the mix. But I wonder if the inability to take even the briefest break after the sleep-robbing OpenAI drama simply comes down to anxiety, viewed through the lens of uncertainty.
AI uncertainty contributes to anxiety
According to an article by researchers at the University of Wisconsin, uncertainty about possible future threats “disrupts our ability to avoid them or mitigate their negative impact and therefore generates anxiety.” The human brain, the article notes, has been called an “anticipation machine”: “The ability to use past experiences and information about our current state and environment to predict the future allows us to increase the chances of obtaining desired outcomes, while avoiding or preparing for future adversities. This ability is directly related to our level of certainty regarding future events – how likely they are, when they will occur, and what they will look like. Uncertainty decreases the efficiency and effectiveness with which we can prepare for the future and therefore contributes to anxiety.”
This is true not only for non-techie “normies,” but even for the leading AI researchers and leaders of our time. The truth is that not even the ‘godfathers’ of AI like Geoffrey Hinton, Yann LeCun or Andrew Ng really know what is going to happen with the future of AI – so their Black Friday social media debates, although grounded in intellectual arguments, are still nothing more than predictions that do little to calm our anxieties about what is to come.
Lean into the unknown
With that in mind, we can consider the incessant debating, reflecting, pondering, deliberating, deciphering and analyzing of the past week around OpenAI and Q* – including my own – as an expression of anxiety (and, perhaps, a big dose of OCD) about how uncertain the future of AI seems right now.
As long as we don’t stop our excessive information gathering, our search for reassurance, and our repetitive thoughts and worries, we never have to confront the bottom line when it comes to the future of AI: It is uncertain. It is unknown.
Of course, I’m not saying we don’t need to debate, discuss, plan, prepare and keep pace with the evolution of AI. But surely all of that can wait until we’ve finished our meal and taken a tryptophan-induced nap?
It’s exactly four weeks until Christmas. Maybe everyone – the effective altruists and the effective accelerationists, the techno-optimists and the doomers, the industry leaders and the academic researchers, the “move fast and break things” people and the “slow down and move carefully” cohort – can agree on the briefest of lulls in AI chatter in favor of eggnog and cookies? Our mutual anxiety about the future of AI (and the hype that comes with it) will still be there after the New Year. I promise.
VentureBeat’s mission is to be a digital marketplace for technical decision makers to gain insights into transformative business technology and transact. Discover our Briefings.