This week in AI: OpenAI debacle shows the dangers of going commercial | TechCrunch

Keeping up with an industry that evolves as quickly as AI is a difficult task. So until an AI can do it for you, here’s a helpful summary of recent stories in the world of machine learning, along with notable research and experiments that we didn’t cover on their own.

This week, it was impossible to ignore (for this journalist included, to the dismay of my sleep-deprived brain) the leadership controversy surrounding AI startup OpenAI. The board ousted Sam Altman, CEO and co-founder, reportedly for what they considered misplaced priorities on his part: commercializing AI at the expense of safety.

Altman was reinstated as CEO, largely through the efforts of Microsoft, a major backer of OpenAI, and most of the original board of directors was replaced. But the saga illustrates the dangers facing AI companies, even those as large and influential as OpenAI: the pull toward monetization only grows as commercially motivated funding sources gain influence.

It’s not that AI labs necessarily want to become entangled with venture firms and technology giants that are commercially aligned and eager for returns. But the very high costs of training and developing AI models make that fate almost impossible to avoid.

According to CNBC, the process of training a large language model like GPT-3, the predecessor to OpenAI’s flagship text-generating AI model GPT-4, could cost more than $4 million. That estimate doesn’t take into account the cost of hiring data scientists, artificial intelligence experts and software engineers, all of whom command high salaries.

It is no coincidence that many large AI labs have strategic agreements with public cloud providers; compute, especially at a time when the chips used to train AI models are in short supply (to the benefit of vendors like Nvidia), has become more valuable than gold to these labs. Anthropic, OpenAI’s main rival, has taken on investments from both Google and Amazon. Meanwhile, Cohere and Character.ai are backed by Google Cloud, which is also their exclusive compute infrastructure provider.

But, as this week demonstrated, these investments carry risk. The technology giants have their own agendas, and the weight to throw around to make sure their demands are met.

OpenAI attempted to maintain some independence with a unique “capped-profit” structure that limits investors’ total returns. But Microsoft showed that compute can be as powerful a lever over a startup as capital; much of Microsoft’s investment in OpenAI is in the form of Azure cloud credits, and the threat of withholding those credits would be enough to get the attention of any board of directors.

Barring a massive increase in investments in public supercomputing resources or AI grant programs, the status quo does not seem likely to change anytime soon. AI startups of a certain size, like most startups, are forced to give up control of their destinies if they want to grow. Hopefully, unlike OpenAI, they make a deal with the devil they know.

Here are some other notable AI stories from the past few days:

  • OpenAI is not going to destroy humanity: Has OpenAI invented AI technology with the potential to “threaten humanity”? Judging by some of the recent headlines, one might be inclined to think so. But there is no cause for alarm, experts say.
  • California eyes AI rules: The California Privacy Protection Agency is preparing for its next trick: putting guardrails around AI. Natasha writes that the state privacy regulator recently published draft rules on how people’s data can be used for AI, drawing inspiration from existing rules in the European Union.
  • Bard answers YouTube questions: Google has announced that its Bard AI chatbot can now answer questions about YouTube videos. Although Bard already had the ability to analyze YouTube videos with the launch of the YouTube extension in September, the chatbot can now give you specific answers to queries related to the content of a video.
  • X’s Grok ready to launch: Shortly after screenshots surfaced showing xAI’s Grok chatbot appearing in the X web app, X owner Elon Musk confirmed that Grok would be available to all of the company’s Premium+ subscribers sometime this week. While Musk’s pronouncements about product delivery timelines have not always held up, code changes in the X app itself show that the Grok integration is underway.
  • Stability AI launches video generator: AI startup Stability AI last week announced Stable Video Diffusion, an AI model that generates videos by animating existing images. Based on Stability’s existing Stable Diffusion text-to-image model, Stable Video Diffusion is one of the few video generation models available open source, or commercially, for that matter.
  • Anthropic Releases Claude 2.1: Anthropic recently released Claude 2.1, an improvement to its flagship large language model that keeps it competitive with OpenAI’s GPT series. Devin writes that the new Claude update has three major improvements: context window, precision, and extensibility.
  • OpenAI and open AI: Paul writes that the OpenAI debacle has highlighted the forces controlling the burgeoning AI revolution, leading many to wonder what happens when you bet on a centralized, proprietary player, and what happens if things then go wrong.
  • AI21 Labs raises money: AI21 Labs, a company that develops generative AI products similar to OpenAI’s GPT-4 and ChatGPT, last week raised $53 million, bringing its total raised to $336 million. The Tel Aviv-based startup, which builds a range of AI tools for generating text, was founded in 2017 by Mobileye co-founder Amnon Shashua along with Ori Goshen and Yoav Shoham, its co-CEOs.

More machine learning

Getting AI models to be more honest about when they need more information to produce a safe response is a difficult problem, since the model doesn’t actually know the difference between right and wrong. But by having the model expose its inner workings a bit, you can get a better idea of when it’s most likely to be lying.

Image credits: Purdue University

This work from Purdue creates a human-readable “Reeb map” of how a neural network represents visual concepts in its vector space. Elements it considers similar are grouped together, and overlaps with other areas could indicate similarities between those groups or confusion on the part of the model. “What we’re doing is taking these complicated sets of information that come out of the network and giving people an idea of how the network views the data at a macroscopic level,” said lead researcher David Gleich.
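The Reeb-map construction itself is beyond a quick snippet, but the underlying idea, pulling out a network’s internal representations and seeing which concepts land near each other or overlap, can be roughed out with ordinary clustering tools. Below is a minimal sketch assuming you have already extracted penultimate-layer embeddings for a labeled image set (random stand-in vectors are used so it runs as-is); it is not the Purdue method, just the neighboring-concepts intuition it builds on.

```python
# Minimal sketch: inspect how a model's embedding space groups concepts.
# Assumes `embeddings` would be penultimate-layer activations extracted
# elsewhere; random stand-ins are used here so the script runs as-is.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in embeddings for images from three hypothetical classes.
class_names = ["cat", "dog", "truck"]
centers = rng.normal(size=(3, 64)) * 3.0
embeddings = np.vstack([c + rng.normal(size=(100, 64)) for c in centers])
labels = np.repeat(class_names, 100)

# Cluster the embeddings and see which true classes share a cluster:
# heavy mixing between two classes hints the model may confuse them.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)
for k in range(3):
    members, counts = np.unique(labels[clusters == k], return_counts=True)
    print(f"cluster {k}:", dict(zip(members, counts)))

# Quick 2D projection for a "macroscopic" view of the space.
coords = PCA(n_components=2).fit_transform(embeddings)
print("2D layout of first five points:\n", coords[:5].round(2))
```

With real activations swapped in for the stand-ins, the per-cluster class counts give a crude version of the same question the Purdue tool answers more rigorously: which concepts does the network see as neighbors?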

Image credits: Los Alamos National Laboratory

If your data set is limited, it might be best if you don’t extrapolate too much from it, but if you must… perhaps a tool like “Senseiver” from Los Alamos National Laboratory is your best bet. The model is based on Google’s Perceiver and is able to take a handful of sparse measurements and apparently make surprisingly accurate predictions by filling in the gaps.

This could be used for things like climate measurements, other scientific readings, or even 3D data like low-fidelity maps created by high-altitude scanners. The model can run on edge devices, such as drones, which could then search for specific features on the fly (in the team’s test case, methane leaks) rather than simply collecting data to bring back and analyze later.
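Senseiver itself is a Perceiver-style attention model, which a snippet can’t do justice to; the sketch below only demonstrates the task it targets, recovering a dense field from a few scattered sensor readings, using an off-the-shelf Gaussian-process regressor on a synthetic grid. The setup and numbers are illustrative stand-ins, not anything comparable to the paper.

```python
# Sketch of the sparse-reconstruction task Senseiver targets: recover a
# dense 2D field from a handful of scattered measurements. This uses a
# plain Gaussian-process regressor, not the Perceiver-style model itself.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Synthetic "true" field on a 64x64 grid (e.g. a concentration map).
xs = np.linspace(0, 1, 64)
X, Y = np.meshgrid(xs, xs)
field = np.sin(3 * np.pi * X) * np.cos(2 * np.pi * Y)

# Pretend a drone sampled the field at only 40 random locations.
idx = rng.choice(64 * 64, size=40, replace=False)
coords = np.column_stack([X.ravel()[idx], Y.ravel()[idx]])
readings = field.ravel()[idx]

# Fit on the sparse readings, then predict the full grid.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-6)
gp.fit(coords, readings)
recon = gp.predict(np.column_stack([X.ravel(), Y.ravel()])).reshape(64, 64)

rmse = np.sqrt(np.mean((recon - field) ** 2))
print(f"reconstruction RMSE from 40 of 4096 points: {rmse:.3f}")
```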

Meanwhile, researchers are working to make the hardware that runs these neural networks more like a neural network itself. They made an array of 16 electrodes and then covered it with a bed of conductive fibers in a random but consistently dense network. When they overlap, these fibers can form connections or break them, depending on several factors. In some ways, it’s a lot like the way neurons in our brain form connections and then dynamically reinforce or abandon them.

The UCLA/University of Sydney team said the network was able to identify handwritten numbers with 93.4% accuracy, which actually outperformed a more conventional approach on a similar scale. It’s certainly fascinating, but it’s a long way from practical use, although self-organizing networks will probably eventually find their way into the toolbox.

Image credits: UCLA/University of Sydney

It’s good to see machine learning models helping people, and this week we have some examples of that.

A group of Stanford researchers is working on a tool called GeoMatch aimed at helping refugees and immigrants find the right placement for their situation and abilities. It is not an automated procedure: these decisions are currently made by placement officers and other officials who, while experienced and informed, cannot always be sure that their choices are supported by data. The GeoMatch model takes a series of characteristics and suggests a place where the person is likely to find solid employment.

“What once required hours of research for several people can now be done in minutes,” said project leader Michael Hotard. “GeoMatch can be incredibly useful as a tool that simplifies the process of gathering information and establishing connections.”
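As a rough illustration of the matching idea described above, scoring each candidate location by the predicted chance of solid employment given a person’s characteristics and then suggesting the best-scoring one, here is a minimal sketch. The features, locations, and per-location logistic-regression models are hypothetical stand-ins, not the Stanford team’s actual data or pipeline.

```python
# Illustrative sketch of a GeoMatch-style recommendation: estimate, per
# candidate location, the probability of solid employment given a person's
# characteristics, then suggest the best-scoring place. All data here is
# synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
locations = ["City A", "City B", "City C"]

# Hypothetical historical records: a few numeric characteristics, the
# location each person was assigned to, and whether they found stable work.
X_hist = rng.normal(size=(600, 3))
assigned = rng.integers(0, 3, size=600)
employed = (X_hist[:, 0] + (assigned == 1) * 0.5 + rng.normal(size=600)) > 0

# One employment model per location, trained on people placed there.
models = {}
for loc_id, name in enumerate(locations):
    mask = assigned == loc_id
    models[name] = LogisticRegression().fit(X_hist[mask], employed[mask])

# Recommend the location with the highest predicted employment probability.
newcomer = np.array([[1.2, 0.0, -0.3]])
scores = {name: m.predict_proba(newcomer)[0, 1] for name, m in models.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```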

At the University of Washington, robotics researchers just presented their work on an automated feeding system for people who cannot feed themselves. The system has gone through many versions and evolved with community feedback, and “we’ve gotten to the point where we can pick up almost everything a fork can handle. So we can’t have soup, for example. But the robot can handle everything from mashed potatoes or noodles to a fruit salad and a vegetable salad, as well as a pre-sliced pizza or a sandwich or pieces of meat,” said co-director Ethan K. Gordon in a Q&A published by the university.

Image credits: University of Washington

It’s an interesting talk that shows how projects like these are never really “finished”, but at each stage they can help more and more people.

There are a few projects out there to help blind people navigate the world, from Be My AI (powered by GPT-4V) to Microsoft’s Seeing AI, a collection of models designed specifically for everyday tasks. Google had its own, a path-finding app called Project Guideline, which aimed to help people stay on track when walking or jogging along a trail. Google just made it open source, which usually means the company is giving up on it, but its loss is other researchers’ gain: work done at a billion-dollar company can now be put to use in a personal project.

Finally, some fun in FathomVerse, a game/tool for identifying sea creatures the same way apps like iNaturalist and others identify leaves and plants. However, it needs your help, because animals like anemones and octopuses are soft and difficult to identify. So sign up for the beta and see if you can help get this off the ground!

Image credits: FathomVerse
