Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

The agents are coming — the AI agents, that is.

This week, Anthropic released its newest AI model, an upgraded version of Claude 3.5 Sonnet that can interact with the web and desktop apps by clicking and typing, much like a person. It's not perfect. But 3.5 Sonnet with "Computer Use," as Anthropic's calling it, could be transformative in the workplace.

At least, that’s the elevator pitch.

Whether Anthropic’s new model lives up to the hype remains to be seen. But its arrival signifies Anthropic’s ambitions in the nascent AI agent market, which some analysts believe could be worth close to $50 billion by 2030.

Anthropic isn’t the only one investing resources in developing AI agents, which, broadly defined, automate tasks that previously had to be performed manually. Microsoft is testing agents that can use Windows PCs to book appointments and more, while Amazon is exploring agents that can proactively make purchases.

Organizations might be waffling on generative AI. But they’re pretty bullish on agents so far. A report out this month from MIT Technology Review Insights found that 49% of executives believe agents and other forms of advanced AI assistants will lead to efficiency gains or cost savings.

For Anthropic and its rivals building "agentic" technologies, that's welcome news indeed. AI isn't cheap to build — or run. Case in point, Anthropic is said to be in the process of raising billions of dollars in venture funding, and OpenAI recently closed a $6.5 billion funding round.

But I wonder if most agents today can really deliver on the hype.

Take Anthropic’s, for example. In an evaluation designed to test an AI agent’s ability to help with airline booking tasks, the new 3.5 Sonnet managed to complete less than half of the tasks successfully. In a separate test involving tasks like initiating a product return, 3.5 Sonnet failed roughly one-third of the time.

Again, the new 3.5 Sonnet isn’t perfect — and Anthropic readily admits this. But it’s tough to imagine a company tolerating failure rates that high for very long. At a certain point, it’d be easier to hire a secretary.

Still, businesses are showing a willingness to give AI agents a try — if for no other reason than keeping up with the Joneses. According to a survey from startup accelerator Forum Ventures, 48% of enterprises are beginning to deploy AI agents, while another third are “actively exploring” agentic solutions.

We’ll see how those early adopters feel once they’ve had agents up and running for a bit.

News

Data scraping protests: Thousands of creatives, including actor Kevin Bacon, novelist Kazuo Ishiguro, and the musician Robert Smith, have signed a petition against unlicensed use of creative works for AI training.

Meta tests facial recognition: Meta says it’s expanding tests of facial recognition as an anti-fraud measure to combat celebrity scam ads.

Perplexity gets sued: News Corp’s Dow Jones and the NY Post have sued growing AI startup Perplexity, which is reportedly looking to fundraise, over what the publishers describe as a “content kleptocracy.”

OpenAI’s new hires: OpenAI has hired its first chief economist, ex-U.S. Department of Commerce chief economist Aaron Chatterji, and a new chief compliance officer, Scott Schools, previously Uber’s compliance head.

ChatGPT comes to Windows: In other OpenAI news, OpenAI has begun previewing a dedicated Windows app for ChatGPT, its AI-powered chatbot platform, to certain customer segments.

xAI’s API: Elon Musk’s AI company, xAI, has launched an API for Grok, the generative AI model powering a number of capabilities on X.

Mira Murati raising: Former OpenAI CTO Mira Murati is reportedly fundraising for a new AI startup. The venture is said to focus on building AI products based on proprietary models.

Research paper of the week

Militaries around the world have shown great interest in deploying — or are already deploying — AI in combat zones. It’s controversial stuff, to be sure, and it’s also a national security risk, according to a new study from the nonprofit AI Now Institute.

The study finds that AI deployed today for military intelligence, surveillance, and reconnaissance already poses dangers because it relies on personal data that can be exfiltrated and weaponized by adversaries. It also has vulnerabilities, like biases and a tendency to hallucinate, that are currently without remedy, write the co-authors.

The study doesn’t argue against militarized AI. But it states that securing military AI systems and limiting their harms will require creating AI that’s separate and isolated from commercial models.

Model of the week

It was a busy week in generative AI video. No fewer than three startups released new video models, each with its own unique strengths: Haiper's Haiper 2.0, Genmo's Mochi 1, and Rhymes AI's Allegro.

But what really caught my eye was a new tool from Runway called Act-One. Act-One generates “expressive” character performances, creating animations using video and voice recordings as inputs. A human actor performs in front of a camera, and Act-One translates this to an AI-generated character, preserving the actor’s facial expressions.

[Image: Runway Act-One. Image Credits: Runway]

Granted, Act-One isn’t a model per se; it’s more of a control method for guiding Runway’s Gen-3 Alpha video model. But it’s worth highlighting for the fact that the AI-generated clips it creates, unlike most synthetic videos, don’t immediately veer into uncanny valley territory.

Grab bag

AI startup Suno, which is being sued by record labels for allegedly training its music-generating tools on copyrighted songs sans permission, doesn’t want yet another legal headache on its hands.

At least, that’s the impression I get from Suno’s recently announced partnership with content ID company Audible Magic, which some readers might recognize from the early days of YouTube. Suno says it’ll use Audible Magic’s tech to prevent uploads of copyrighted music for its Covers feature, which lets users create remixes of any song or sound.

Suno has told labels’ lawyers that it believes songs it used to train its AI fall under the U.S.’ fair-use doctrine. That’s up for debate. It wouldn’t necessarily help Suno’s case, though, if the platform was storing full-length copyrighted works on its servers — and encouraging users to share them.

