Hey Everyone,
There’s another green light for Generative A.I. startups in the United States, or should I say red light?
Around nine hours ago the WSJ confirmed that Google is investing up to $2 Billion in Anthropic, much like Amazon's previously announced investment of up to $4 Billion. After lucking out and getting $500 million back in the day from SBF, Anthropic has really taken off.
They are literally a spin-out from OpenAI with a more trustworthy philosophy. The commitment involves a $500 million upfront investment and an additional $1.5 billion to be invested over time.
One month ago it was the same story with Amazon, which means Anthropic is among the best-funded A.I. startups in the world. So if things go right, Anthropic can get $6 Billion in funding just from the Magnificent Seven, Amazon and Google more specifically.
Anthropic is doing things somewhat differently compared to OpenAI, and it also seems like a more trustworthy brand. If there were a startup that could make human-level AGI safer, it might be this team!
The March 2023 funding round was led by Spark Capital.
Why did Google invest in Anthropic in the first place?
We knew Google was interested in Anthropic at least nine months ago. This week’s earnings showed that Microsoft Azure’s cloud momentum has accelerated faster than Google Cloud’s, with some analysts saying it’s because of the speed with which Microsoft moved on AI with GPT-4.
[link]
Google, Salesforce and Zoom already participated in the Series C funding in March 2023, so this is the second time Google has really doubled down on Anthropic. One of the main reasons Google is doing so is its own need for a high-level understanding of A.I. safety for its Gemini LLM, the first really serious product from the united Google Brain and DeepMind teams.
Anthropic, an artificial intelligence startup founded in 2021 by former OpenAI research execs, is taking full advantage of the market hype, not to mention the x-risk debates. The creator of the Claude 2 chatbot, it was valued earlier this year at $4.1 billion.
Anthropic has even experimented with Collective Constitutional AI. Its research is unlike anything we typically see at Meta, Microsoft and so forth. Claude 2’s capabilities are more transparent and better framed for people.
Claude 2 has the ability to summarize up to about 75,000 words, which could be the length of a book. Users can input large data sets and ask for summaries in the form of a memo, letter or story. ChatGPT, by contrast, can handle about 3,000 words.
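If you want a feel for what that context window means in practice, here is a minimal sketch of feeding a long document to Claude 2 through Anthropic’s Python SDK (the claude-2 completions endpoint of this era); the file name, prompt wording and output length are placeholder assumptions, not anything Anthropic prescribes.

```python
# Minimal sketch: summarizing a book-length text with Claude 2 via the
# Anthropic Python SDK. Assumes ANTHROPIC_API_KEY is set in the environment;
# the input file and prompt wording are hypothetical placeholders.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()

with open("book_manuscript.txt") as f:  # a roughly 75,000-word document
    document = f.read()

completion = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=1000,  # cap on the length of the generated summary
    prompt=f"{HUMAN_PROMPT} Summarize the following document as a short memo:\n\n{document}{AI_PROMPT}",
)
print(completion.completion)
```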
Anthropic says it’s used by companies such as Slack, Notion, DuckDuckGo and Quora.
It’s a very close spin-out of OpenAI, with many of the same people involved. For instance, it was founded by Dario Amodei, OpenAI’s former vice president of research, and his sister Daniela Amodei, who was OpenAI’s vice president of safety and policy. Several other OpenAI research alumni were also on Anthropic’s founding team.
AI Laws and Safety
For all of OpenAI’s talk, its products are a bit less trustworthy, and pricey. Anthropic’s Constitutional AI (CAI) is an Anthropic-developed method for aligning general-purpose language models to abide by high-level normative principles written into a constitution. While the world is talking and forming groups, Anthropic is on the front lines.
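To make the idea a bit more concrete, here is a heavily simplified sketch of a CAI-style critique-and-revision loop; generate() is a hypothetical stand-in for any language-model call, and the two principles are illustrative, not Anthropic’s actual constitution.

```python
# Simplified sketch of a Constitutional-AI-style critique-and-revision loop.
# generate() is a hypothetical stand-in for a real LLM call, and the
# constitution below is illustrative only.
CONSTITUTION = [
    "Choose the response least likely to help someone cause harm.",
    "Choose the response that is most honest about its own uncertainty.",
]

def generate(prompt: str) -> str:
    """Stand-in for a real model call; swap in an actual API request."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_request: str) -> str:
    draft = generate(user_request)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Point out any way the response conflicts with the principle."
        )
        # ...then revise the draft in light of that critique.
        draft = generate(
            f"Principle: {principle}\nResponse: {draft}\nCritique: {critique}\n"
            "Rewrite the response so it follows the principle."
        )
    return draft  # in the published method, such revisions feed preference training
```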
The Anthropic team has previously conducted research into GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Back in 2021 Anthropic said:
“Anthropic’s goal is to make the fundamental research advances that will let us build more capable, general, and reliable AI systems.”
It would be very interesting if Anthropic acquired Perplexity, another heavily Google-backed search engine. Prior to the up-to-$4 Billion Amazon financing, Google had also signed a major cloud agreement with Anthropic. Anthropic is thus playing the field, instead of going fully closed-commercial and trying to exploit first-mover advantage like OpenAI is doing along with its world tours. Gone are the days when Anthropic only had one tenth of OpenAI’s funding.
Research by Arthur AI, a machine learning monitoring platform, found Claude 2 to be the most reliable chatbot in terms of “self-awareness,” meaning accurately gauging what it does and doesn’t know, and answering only questions it had training data to support.
If OpenAI is the AGI company of the 2020s, for whatever that means, Anthropic is certainly the trust-and-safety foundational model startup. I hope this means they will have very different business models. Cohere and many others will be great competitors to OpenAI’s enterprise ambitions.
Anthropic’s September 25th, 2023 deal with Amazon gives it up to $4 Billion. This means Anthropic can get the best from its partners, including Google, Amazon and Salesforce. Microsoft needs to double down on Cloud revenue from Azure since its Bing AI did not even remotely take search market share from Google. Microsoft’s cloud saw accelerating growth in the third quarter, while Google’s growth rate slowed.
OpenAI and Anthropic are now intermingled with the Magnificent Seven and their respective strategies for adopting Generative A.I. It means the fate of AGI might be in the hands of Silicon Valley’s elite. Given that Google pays Apple and others $26 billion a year just to be the default search engine, it’s a bit of a mafia tech hustle. How exactly does Anthropic’s R&D in AI safety apply here?
Anthropic’s mission has always been clear. Anthropic will focus on research into increasing the safety of AI systems; specifically, the company is focusing on increasing the reliability of large-scale AI models, developing the techniques and tools to make them more interpretable, and building ways to more tightly integrate human feedback into the development and deployment of these systems. But what sorts of compromises will it have to make just to keep up?
Cloud and advertising revenue will begin to get more tied to Generative A.I. adoption and efficacy. Amazon Web Services was the slowest grower among the three cloud leaders, and AWS has the most to lose from not being more aggressive in the A.I. space. They are going more the route of boosting ads and trying to be model-agnostic with Amazon Bedrock. But if Microsoft’s Azure cloud gains market share, they might have to acquire someone like Cohere.
OpenAI’s revenue is around $1.3 Billion, so conceivably it could pay back Microsoft’s incredibly large investment and gain its independence. A weirdly high valuation of $86 Billion might help? I’m not understanding the multiple at which OpenAI and Anthropic value themselves; $86 Billion on roughly $1.3 Billion of revenue works out to something like a 66x revenue multiple. In an era where valuations have dropped by 30-50% in recent months in many cases, it’s not very pragmatic. OpenAI is likely burning cash really quickly as well. With all of its talent, compute and antics, what does it have to show for it? GPT-4 is a very expensive API for most companies. ChatGPT, while a pleasant trailblazer in the first half of 2023, has slowed considerably.
The bleeding edge of Generative A.I. feels more like a game of thrones of Silicon Valley than an incredible oasis of innovation. Don’t tell that to the folk who are pretending this is safe A.I.
AWS will become Anthropic’s primary cloud provider for mission-critical workloads. Amazon is investing more than Google. Anthropic says it has found significant AWS customer demand for Claude 2. But when the numbers are in, will OpenAI or Anthropic ever be profitable? Gritty startups in the domain of foundational B2B enterprise A.I. might perform more efficiently, like Cohere, AI21 Labs, Aleph Alpha and others.
Still, OpenAI and Anthropic, along with Inflection and a few others, have managed to unlock the golden gates of Silicon Valley funding, both from VCs and from corporate wings. This means Silicon Valley has a huge first-mover advantage in this new stage of the HLAI movement. Whatever Generative A.I. evolves into, it might create a few decent companies.
Angels of Democratic AI?
In the early days, Anthropic’s Series A round was led by Jaan Tallinn, technology investor and co-founder of Skype. The round included participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research, Eric Schmidt, and others.
Eric Schmidt
Dustin Moskovitz
Jaan Tallinn
James McClave
Sam Bankman-Fried
Are these the angels of the new world? Sam Bankman-Fried, the CEO of FTX, led the Series B. Eric Schmidt has been a wheeler and dealer of late in the A.I. policy and anti-China debates, as well as the subject of some juicy rumours.
If Anthropic are the good guys, are we in trouble? SBF and Eric look like kids in the park of the Game of Thrones of AI.
Is taking funds from two of the Magnificent Seven really a great idea if you are trying to stay independent on the side of AI safety? The pressure to keep up with OpenAI must have been high.
AI Policy Divides?
We are likely going to live in a world of AI regulatory hot-and-cold war. Anthropic itself splitting from OpenAI may have created a schism that will never truly be resolved. Amodei, the former VP of research at OpenAI, launched Anthropic in 2021 as a public benefit corporation, taking with him a number of OpenAI employees, including OpenAI’s former policy lead Jack Clark. Amodei split from OpenAI after a disagreement over the company’s direction, namely the startup’s increasingly commercial focus.
But if Eric Schmidt is one of your backers and you get closer to Google, to the tune of $2 Billion, what do you expect to happen to your company in the long term? Google isn’t exactly known for being a good actor.
So behind the fact that A.I. startups can somehow attract billions is a world where AI is running wild without even minimal supervision. I think our artificial ignorance about what’s really going on protects us from our own naivete. Most startups that get too much funding don’t tend to do so well, historically speaking; just ask SoftBank’s Vision Fund.
Anthropic’s impressive ability to keep up with OpenAI may look less impressive once we compare their revenue with OpenAI’s in 2024, but time will tell. For now, their plan is proceeding accordingly. AI policy and regulation debates globally are like a picture of Silicon Valley dystopia itself, and if Anthropic is a shining light, how long can it last in such a cutthroat environment? That remains to be seen.