
This Week in AI: OpenAI moves away from safety


OpenAI CEO Sam Altman delivers the keynote address at the company’s first OpenAI DevDay conference in San Francisco on November 6, 2023. Image Credits: Justin Sullivan / Getty Images

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we’re upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly — so be on the lookout for more editions.

This week in AI, OpenAI once again dominated the news cycle (despite Google’s best efforts) with not only a product launch but also some palace intrigue. The company unveiled GPT-4o, its most capable generative model yet, and just days later effectively disbanded a team working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue.

The dismantling of the team generated a lot of headlines, predictably. Reporting — including ours — suggests that OpenAI deprioritized the team’s safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignation of the team’s two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.

Superintelligent AI is more theoretical than real at this point; it’s not clear when — or whether — the tech industry will achieve the breakthroughs necessary to create AI capable of accomplishing any task a human can. But this week’s coverage would seem to confirm one thing: that OpenAI’s leadership — in particular CEO Sam Altman — has increasingly chosen to prioritize products over safeguards.

Altman reportedly “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first dev conference last November. And he’s said to have been critical of Helen Toner, director at Georgetown’s Center for Security and Emerging Technology and a former member of OpenAI’s board, over a paper she co-authored that cast OpenAI’s approach to safety in a critical light — to the point where he attempted to push her off the board.

Over the past year or so, OpenAI has let its chatbot store fill up with spam and (allegedly) scraped data from YouTube against the platform’s terms of service while voicing ambitions to let its AI generate depictions of porn and gore. Certainly, safety seems to have taken a back seat at the company — and a growing number of OpenAI safety researchers have come to the conclusion that their work would be better supported elsewhere.

Here are some other AI stories of note from the past few days:

  • OpenAI + Reddit: In more OpenAI news, the company reached an agreement with Reddit to use the social site’s data for AI model training. Wall Street welcomed the deal with open arms — but Reddit users may not be so pleased.
  • Google’s AI: Google hosted its annual I/O developer conference this week, during which it debuted a ton of AI products. We rounded them up here, from the video-generating Veo to AI-organized results in Google Search to upgrades to Google’s Gemini chatbot apps.
  • Anthropic hires Krieger: Mike Krieger, one of the co-founders of Instagram and, more recently, the co-founder of personalized news app Artifact (which TechCrunch corporate parent Yahoo recently acquired), is joining Anthropic as the company’s first chief product officer. He’ll oversee both the company’s consumer and enterprise efforts.
  • AI for kids: Anthropic announced last week that it would begin allowing developers to create kid-focused apps and tools built on its AI models — so long as they follow certain rules. Notably, rivals like Google disallow their AI from being built into apps aimed at younger users.
  • AI film festival: AI startup Runway held its second-ever AI film festival earlier this month. The takeaway? Some of the more powerful moments in the showcase came not from AI but from the more human elements.

More machine learnings

AI safety is obviously top of mind this week with the OpenAI departures, but Google DeepMind is plowing onward with a new “Frontier Safety Framework.” Basically, it’s the organization’s strategy for identifying and hopefully preventing any runaway capabilities — it doesn’t have to be AGI; it could be a malware generator gone mad or the like.


The framework has three steps: (1) Identify potentially harmful capabilities in a model by simulating its paths of development; (2) evaluate models regularly to detect when they have reached known “critical capability levels”; and (3) apply a mitigation plan to prevent exfiltration of the model (by an outside actor or the model itself) or problematic deployment. There’s more detail here. It may sound like an obvious series of actions, but it’s important to formalize them, or everyone is just kind of winging it. That’s how you get the bad AI.
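To make those three steps a little more concrete, here’s a minimal, purely illustrative Python sketch of what a recurring check against “critical capability levels” might look like. The capability names, thresholds and mitigation hooks are hypothetical; this is a sketch of the general shape, not DeepMind’s implementation.

```python
# Hypothetical sketch of a recurring "critical capability level" check.
# Capability names, thresholds and mitigations are illustrative, not DeepMind's.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CapabilityEval:
    name: str                      # e.g. a dangerous capability identified in step 1
    critical_level: float          # score at which mitigations should kick in
    run: Callable[[], float]       # benchmark that returns a capability score


def evaluate_model(evals: List[CapabilityEval]) -> List[str]:
    """Step 2: run each eval and return capabilities at or past their critical level."""
    return [e.name for e in evals if e.run() >= e.critical_level]


def apply_mitigations(triggered: List[str]) -> None:
    """Step 3: placeholder mitigation plan (e.g. restrict deployment, lock down weights)."""
    for name in triggered:
        print(f"Mitigation required for capability: {name}")


# Step 1 (identifying potentially harmful capabilities) happens offline; here we
# just wire up the resulting evals and check them on a schedule.
evals = [CapabilityEval("malware generation", critical_level=0.8, run=lambda: 0.3)]
apply_mitigations(evaluate_model(evals))
```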

A rather different risk has been identified by Cambridge researchers, who are rightly concerned about the proliferation of chatbots trained on a dead person’s data to provide a superficial simulacrum of that person. You may (as I do) find the whole concept somewhat abhorrent, but it could be used in grief management and other scenarios if we are careful. The problem is that we are not being careful.


“This area of AI is an ethical minefield,” said lead researcher Katarzyna Nowaczyk-Basińska. “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.” The team identifies numerous scams and potential bad and good outcomes, and discusses the concept generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror predicts the future once again!

In less creepy applications of AI, physicists at MIT are looking at a useful (to them) tool for predicting a physical system’s phase or state, normally a statistical task that can grow onerous with more complex systems. But train a machine learning model on the right data, ground it with some known material characteristics of the system, and you have a considerably more efficient way to go about it. Just another example of how ML is finding niches even in advanced science.
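As a rough illustration only (not the MIT team’s actual method), the shape of such a model could look something like the toy Python sketch below, where the features and training data are invented stand-ins for simulated samples plus a known material characteristic.

```python
# Toy illustration: predict a system's phase from simulated samples plus a
# known material characteristic. Features, data and threshold are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
temperature = rng.uniform(0.5, 3.5, n)                      # simulated control parameter
magnetization = np.tanh(2.27 - temperature) + rng.normal(0, 0.1, n)
coupling = np.full(n, 1.0)                                   # known material characteristic
X = np.column_stack([temperature, magnetization, coupling])
y = (temperature < 2.27).astype(int)                         # 1 = ordered phase, 0 = disordered

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[2.0, 0.3, 1.0]]))                        # predicted phase for a new sample
```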

Over at CU Boulder, they’re talking about how AI can be used in disaster management. The tech may be useful for quickly predicting where resources will be needed, mapping damage, even helping train responders, but people are (understandably) hesitant to apply it in life-and-death scenarios.

Attendees at the workshop. Image Credits: CU Boulder

Professor Amir Behzadan is trying to move the ball forward on that, saying, “Human-centered AI leads to more effective disaster response and recovery practices by promoting collaboration, understanding and inclusivity among team members, survivors and stakeholders.” They’re still at the workshop phase, but it’s important to think deeply about this stuff before trying to, say, automate aid distribution after a hurricane.

Lastly, some interesting work out of Disney Research on diversifying the output of diffusion image generation models, which can produce similar results over and over for some prompts. Their solution? “Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.” I simply could not put it better myself.
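For the curious, here’s a rough sketch of what that sampling strategy describes, as I read it: at each inference step, the conditioning vector gets perturbed with Gaussian noise whose scale shrinks monotonically over the schedule. The linear schedule and the sampling-loop hooks below are my assumptions, not Disney Research’s code.

```python
# Rough sketch of "annealing the conditioning signal": add scheduled,
# monotonically decreasing Gaussian noise to the conditioning vector at each
# denoising step. The schedule and sampling-loop hooks are hypothetical.
import torch


def anneal_conditioning(cond: torch.Tensor, step: int, total_steps: int,
                        sigma_max: float = 0.5) -> torch.Tensor:
    """Return a noised copy of the conditioning vector for this step."""
    # Linear schedule: full noise strength at step 0, none at the final step.
    sigma = sigma_max * (1.0 - step / max(total_steps - 1, 1))
    return cond + sigma * torch.randn_like(cond)


# Usage inside a (hypothetical) diffusion sampling loop:
# for step in range(total_steps):
#     noisy_cond = anneal_conditioning(text_embedding, step, total_steps)
#     latents = denoise_step(latents, noisy_cond, step)  # model-specific
```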


The result is a much wider diversity in angles, settings, and general look in the image outputs. Sometimes you want this, sometimes you don’t, but it’s nice to have the option.
