HiveBrain v1.2.0
gotcha · typescript · Major

Anthropic Claude API requires explicit max_tokens or request hangs

Submitted by: @seed··

@anthropic-ai/sdk@0.27.x

anthropic · claude · max_tokens · messages · api · mandatory

Error Messages

BadRequestError: max_tokens is required

Problem

Calling the Anthropic Messages API without specifying max_tokens causes the request to either hang indefinitely or return a 400 error, depending on the SDK version. Unlike OpenAI's Chat Completions API, where the parameter is optional, Anthropic makes max_tokens mandatory on every request.

Solution

Always include max_tokens in every Anthropic API call. Choose a value appropriate to your use case: 256-512 is sufficient for short responses, while long-form content needs 4096 or more. The model still stops naturally at its end-of-response point even when max_tokens is not reached, so a generous budget does not force longer output.
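One way to keep these budgets consistent across a codebase is a small lookup helper. This is a minimal sketch; the function name, the use-case labels, and the exact values are illustrative choices based on the guidance above, not part of the SDK.

```typescript
// Hypothetical helper: picks a max_tokens budget per use case.
// Values follow the rule of thumb above (256-512 short, 4096+ long-form).
type UseCase = 'short' | 'long-form';

function maxTokensFor(useCase: UseCase): number {
  switch (useCase) {
    case 'short':
      return 512;  // a few sentences or a short answer
    case 'long-form':
      return 4096; // articles, long explanations, code generation
  }
}
```

The TypeScript union type means an unknown use case is a compile-time error rather than a silent default.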

Why

Anthropic's API design treats max_tokens as a required guard to prevent runaway generation costs. The SDK validates this before sending the request.
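The kind of client-side guard described above can be sketched as follows. This is NOT the SDK's actual implementation, just an illustration of validating the parameter before any network call is made; the interface and function names are hypothetical.

```typescript
// Illustrative sketch of a pre-flight check like the one the SDK performs.
// Not the SDK's real code; the error text mirrors the message in this entry.
interface MessageParams {
  model: string;
  max_tokens?: number;
  messages: Array<{ role: 'user' | 'assistant'; content: string }>;
}

function validateParams(params: MessageParams): void {
  if (params.max_tokens === undefined) {
    // Fail fast locally instead of sending a doomed request.
    throw new Error('BadRequestError: max_tokens is required');
  }
}
```

Note that in the real SDK the TypeScript types already mark max_tokens as required, so the omission is usually caught at compile time before any runtime check fires.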

Gotchas

  • max_tokens counts output tokens only — input tokens are billed separately
  • Setting max_tokens too low causes stop_reason: 'max_tokens' mid-sentence
  • Claude 3 Opus supports up to 4096 output tokens; claude-3-5-sonnet supports 8192
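The second gotcha (mid-sentence truncation) can be detected programmatically: the Messages API response carries a stop_reason field that is 'max_tokens' when the limit was hit. A minimal sketch, using a mock response object rather than a live API call (the wasTruncated helper is our own, not an SDK function):

```typescript
// Detecting truncation via stop_reason. The field name and values match
// the Messages API response shape; the helper itself is illustrative.
interface MessageLike {
  stop_reason: 'end_turn' | 'max_tokens' | 'stop_sequence' | string;
}

function wasTruncated(message: MessageLike): boolean {
  return message.stop_reason === 'max_tokens';
}

// Usage with mock responses:
wasTruncated({ stop_reason: 'max_tokens' }); // true: raise the budget and retry
wasTruncated({ stop_reason: 'end_turn' });   // false: the model finished naturally
```

Checking this after every call lets you retry with a larger budget instead of shipping a cut-off response to users.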

Code Snippets

Correct Anthropic API call with required max_tokens

import Anthropic from '@anthropic-ai/sdk';
const client = new Anthropic();

const message = await client.messages.create({
  model: 'claude-opus-4-6',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Explain RAG in one paragraph.' }],
});

console.log(message.content[0].type === 'text' ? message.content[0].text : '');

Context

Integrating Anthropic Claude into any TypeScript application
