
The founders of artificial intelligence startup Together, which made headlines last month by replicating Meta’s LLaMA dataset with the goal of creating open source LLMs, are celebrating today after raising a $20 million seed round to build an open source cloud and artificial intelligence platform.
These days it seems like everyone in open source AI is toasting to recent success. For example, a wave of new open source LLMs has been released that are close enough in performance to the proprietary Google and OpenAI models, or at least good enough for many use cases, that some experts say most software developers will opt for the free versions. This has emboldened the open source AI community to push back against the past year’s shift toward closed, proprietary LLMs, which experts say will lead to “industrial capture,” in which cutting-edge AI technology ends up controlled by a few big, deep-pocketed tech companies.
And then there are the actual parties: Open source hub Hugging Face kicked off the party in early April with its “Woodstock of AI” meetup that drew over 5,000 people to the Exploratorium in downtown San Francisco. And this Friday, Stability AI, which created the popular open source image generator Stable Diffusion, and Lightning AI, which developed PyTorch Lightning, will host a “Come Together to Keep AI Open Source” meetup in New York City at a location that remains “secret” for now.
Big Tech considers its moat, or lack thereof
As open source AI gains momentum, Big Tech is weighing its options. Last week, a leaked Google memo from one of its engineers, titled “We Have No Moat,” claimed that the “inconvenient truth” is that neither Google nor OpenAI is positioned to “win this arms race.”
That, the engineer said, was due to open source AI. “Simply put, we are being outclassed,” the memo continued. “While our models still have a slight advantage in terms of quality, the gap is closing astonishingly quickly.”
Some say these concerns may reduce the willingness of big tech companies to share their LLM research. But Lightning AI CEO William Falcon told VentureBeat in March that this was already happening. OpenAI’s release of GPT-4, he explained, included a 98-page white paper “posing as research.”
“Now, because they have this pressure to monetize, I think literally today is the day that they went really closed source,” Falcon said after the release of GPT-4. “They just divorced from the community.”
Last month, Joelle Pineau, vice president of AI research at Meta, told VentureBeat that accountability and transparency in AI models are essential. “My hope, and it’s reflected in our strategy for data access, is to figure out how to enable transparency for verifiability audits of these models,” she said.
But even Meta, which has long been known as a particularly “open” big tech company (thanks to FAIR, the Fundamental AI Research team founded by Meta’s chief AI scientist Yann LeCun in 2013), may have its limits. In an MIT Technology Review article published yesterday by Will Douglas Heaven, Pineau said the company may not open up its code to outsiders forever. “Is this the same strategy that we will adopt for the next five years? I don’t know, because AI is moving so fast,” she said.
How long can the open source AI party last?
That’s where the problem lies for open source AI, and how its partying ways could come to a sudden halt. If big tech companies shut down access to their models entirely, their “secret recipes” could become even harder to uncover, as Falcon explained to VentureBeat. In the past, he said, even though Big Tech models might not be exactly replicable, the open source community at least knew the basic ingredients of the recipe. Now, there may be ingredients that no one can identify.
“Think if I give you a fried chicken recipe, we all know how to make fried chicken,” he said. “But suddenly I do something slightly different and you think, wait, why is this different? And you can’t even identify the ingredient. Or maybe it’s not even fried. Who knows?”
This, he said, sets a bad precedent. “You’re going to have all these companies that are no longer incentivized to open source things, to tell people what they’re doing,” he said, adding that the dangers of unmonitored models are real.
“If this model goes wrong, and it will, you’ve already seen it hallucinating and giving you false information, how is the community supposed to react?” he said. “How are ethical researchers supposed to go and suggest solutions and say, this way it doesn’t work, maybe modify it to do this other thing? The community is missing out on all of this.”