Anthropic builds Claude, a family of frontier AI models and tools designed to be safe, reliable, and useful for both individuals and organizations.
Team and Enterprise plans provide an administered environment in which organizations control user access and the data their users submit, supporting governance needs and central oversight for deployments across departments.
Access Claude through third‑party cloud platforms like Amazon Bedrock and Google Vertex AI, enabling integration with existing cloud tooling and security baselines. This multi‑cloud access simplifies adoption within established infrastructure.
Anthropic assigns its rights in model outputs ("Outputs") to users and provides an account-level opt-out from model training. These controls help organizations meet privacy expectations while retaining the value created with Claude.
Anthropic may offer fine‑tuning services to tailor models using customer‑provided data for specific domains. This helps teams improve relevance and performance for specialized tasks and workflows.
A robust Acceptable Use Policy, responsible scaling commitments, and trust & safety processes guide how Claude is built and used. These guardrails support safer deployment in professional and regulated settings.
Pricing is quoted in USD per million tokens (MTok). The Batch API offers a 50% discount, while prompt caching and long-context usage (over 200K tokens) are priced separately. Enterprise pricing is also available; see docs.claude.com/pricing for full details.
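As a rough illustration of per-MTok billing, the sketch below estimates a request's cost from token counts. The rates are hypothetical placeholders, not actual prices (these vary by model; see docs.claude.com/pricing); the 50% batch discount follows the pricing note above.

```python
# Hypothetical rates for illustration only; actual per-model prices
# are listed at docs.claude.com/pricing.
INPUT_PRICE_PER_MTOK = 3.00    # USD per million input tokens (assumed)
OUTPUT_PRICE_PER_MTOK = 15.00  # USD per million output tokens (assumed)
BATCH_DISCOUNT = 0.50          # Batch API: 50% off, per the pricing note above

def estimate_cost(input_tokens: int, output_tokens: int, batch: bool = False) -> float:
    """Estimate a request's cost in USD from its token counts."""
    cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK
    cost += (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK
    return cost * (1 - BATCH_DISCOUNT) if batch else cost

# Example: 200K input tokens and 10K output tokens via the Batch API.
print(f"${estimate_cost(200_000, 10_000, batch=True):.4f}")  # -> $0.3750
```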
You must be at least 18 years old or the minimum age of consent in your location, whichever is higher, to use Anthropic’s services.
Subject to compliance with the terms, Anthropic assigns to you all of its right, title, and interest in Outputs. You retain rights to your Inputs as permitted by law.
Yes. You can opt out of model training via your account settings. Even if you opt out, data may still be used for safety review or when you provide feedback.
If you subscribed via the website, cancel through your customer portal or by emailing support@anthropic.com. Cancel at least 24 hours before the end of your current term to avoid renewal. App store purchases must be canceled through the app distributor.
Payments are generally non‑refundable except where required by law. Certain regions (e.g., Brazil, Mexico, South Korea, Taiwan) provide a 7‑day right to cancel; refunds are processed within 14 days if applicable.
Anthropic may permit evaluation access for a limited time or with limited functionality in some cases. Such evaluation access is for personal, non-commercial use, and terms may vary by offering.
Yes. Anthropic offers access through Amazon Bedrock and Google Vertex AI. Usage must comply with applicable cloud provider policies and agreements.
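As a minimal sketch of the Bedrock path, assuming the `anthropic` Python SDK's `AnthropicBedrock` client, AWS credentials configured in the environment, and an example model ID (actual IDs vary by region and model version):

```python
# Sketch only: assumes the anthropic SDK's Bedrock client and AWS
# credentials already configured in the environment. The model ID is
# an example and may differ by region and model version.
from anthropic import AnthropicBedrock

client = AnthropicBedrock(aws_region="us-east-1")

message = client.messages.create(
    model="anthropic.claude-3-5-sonnet-20241022-v2:0",  # example Bedrock model ID
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello, Claude"}],
)
print(message.content[0].text)
```

Access through Google Vertex AI follows the same pattern with the SDK's `AnthropicVertex` client, using your Google Cloud project ID and region in place of the AWS settings.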
Claude for Work includes Team and Enterprise plans. It provides an administered service where organizations can control access and data submitted by users, helping meet governance and compliance requirements.
Availability is limited by Anthropic’s Supported Regions Policy. Check the supported countries list for current coverage before deploying.
No. You may not share your account login, API key, or credentials with others. You are responsible for all activity under your account.