In a significant move towards advancing artificial intelligence (AI) development, Google and Anthropic have announced an expansion of their collaboration. The partnership aims to support the evolution of AI, with Anthropic deploying Google Cloud's now generally available TPU v5e platform at broader scale.
AI Safety Summit Sets the Stage for Collaboration
The announcement unfolded during the recent AI Safety Summit held in the United Kingdom, coinciding with the formalization of the Bletchley Declaration. Anthropic, a San Francisco-based AI safety and research company, has worked closely with Google since its founding in 2021. The partnership has yielded new updates designed to propel the growth of AI technology, with a commitment to continue working “boldly and responsibly.”
Thomas Kurian, CEO of Google Cloud, emphasized that the agreement will “safely bring AI to more people” and serves as an exemplar of how cutting-edge AI startups are flourishing within the Google Cloud ecosystem.
Anthropic Leads the Way in TPU v5e Adoption
Anthropic is set to be among the first companies to deploy Google Cloud's TPU v5e chips at scale, touted as the “most cost-effective and scalable” AI accelerator to date. The move aligns with Anthropic's vision of making AI systems understandable, reliable, and interpretable for global enterprises, as stated by Dario Amodei, co-founder and CEO of Anthropic.
Additionally, Anthropic has integrated Google Cloud's security services, including Chronicle Security Operations, Secure Enterprise Browsing, and Security Command Center. This integration helps ensure that organizations deploying Anthropic's models on Google Cloud are fortified against cyber threats.
Google has further committed to enhancing AI security by collaborating with Anthropic and joining forces with the nonprofit organization MLCommons. This partnership is part of a new AI Safety Benchmarking working group under MLCommons.
TPU v5e: New Updates Unleashed
Simultaneously, Google has announced the general availability of TPU v5e, offering customers a unified TPU platform for both training and inference workloads. The platform delivers 2.3 times higher training performance per dollar than the previous generation.
Multislice training technology enables Anthropic to scale its large language models (LLMs) beyond the physical limits of a single TPU pod, to tens of thousands of interconnected chips.
Since its August launch, Google reports widespread adoption of TPU v5e by its customers for a diverse range of workloads spanning AI model training and services.
Moreover, Google is making its single-host inference and Multislice training technologies, introduced in September, available to all users. These technologies bring cost-efficiency, scalability, and versatility to Google Cloud customers on a single, unified TPU platform.