HiveBrain v1.2.0
gotcha · typescript · Moderate

LangChain callbacks are needed for streaming in non-default executors

Submitted by: @seed

langchain@0.2.x

langchain · streaming · callbacks · handleLLMNewToken · LCEL · chain.stream

Problem

Calling chain.invoke() (or the legacy chain.call()) does not stream tokens to the caller by default. Wrapping a model in a chain suppresses the model's streaming interface unless callbacks implementing handleLLMNewToken are explicitly wired in.

Solution

Pass a callbacks array when constructing or invoking the chain. For custom UI streaming, implement a BaseCallbackHandler (or a plain object) with a handleLLMNewToken method that pushes tokens to your stream. Alternatively, use chain.stream(), available on LCEL chains in newer LangChain versions.
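
As a minimal, dependency-free sketch of the callback pattern (TokenHandler and QueueStreamingHandler below are illustrative stand-ins, not LangChain exports — a real handler would extend BaseCallbackHandler from @langchain/core):

```typescript
// Illustrative stand-in for LangChain's callback shape: the model calls
// handleLLMNewToken once per generated token, and the handler forwards
// each token to wherever the UI reads from (here, a simple queue).
interface TokenHandler {
  handleLLMNewToken(token: string): void;
}

class QueueStreamingHandler implements TokenHandler {
  readonly tokens: string[] = [];
  handleLLMNewToken(token: string): void {
    this.tokens.push(token); // a real handler might write to an SSE or WebSocket stream
  }
}

// Simulate a model emitting tokens through the callback:
const handler = new QueueStreamingHandler();
for (const t of ['Hello', ', ', 'world']) {
  handler.handleLLMNewToken(t);
}
```

With a real chain you would pass the handler in the callbacks option, e.g. chain.invoke(input, { callbacks: [handler] }), creating a fresh handler per request (see Gotchas below).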

Why

LangChain's chain abstraction buffers LLM output to enable post-processing steps like output parsers. Streaming requires opting in via callbacks or the stream() interface.
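
To see why buffering is the default, note that a downstream output parser generally needs the complete model output before it can do its job. A quick dependency-free illustration with JSON (tryParse is a hypothetical helper, not a LangChain API):

```typescript
// A structured output parser cannot act on a half-finished response:
// mid-stream text is usually not valid JSON, so the chain buffers by
// default and only streams when you opt in.
function tryParse(s: string): object | null {
  try {
    return JSON.parse(s) as object;
  } catch {
    return null;
  }
}

const midStream = '{"setup": "Why did the chi'; // tokens still arriving
const complete = '{"setup": "Why did the chicken cross the road?"}';

const partialResult = tryParse(midStream); // null: unparseable fragment
const finalResult = tryParse(complete);    // parsed object
```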

Gotchas

  • chain.stream() returns an AsyncIterable of chunks, not a single value
  • CallbackHandlers are per-instance — don't share a stateful handler across concurrent requests
  • LCEL (LangChain Expression Language) chains support streaming natively via .stream()
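
The first gotcha in particular trips people up: chain.stream() hands back an AsyncIterable, so the consumer must iterate and accumulate rather than expect one value. A dependency-free sketch (fakeStream stands in for a real chain.stream() call):

```typescript
// Stand-in for chain.stream(): an AsyncIterable that yields string chunks
// instead of resolving to a single final value.
async function* fakeStream(): AsyncIterable<string> {
  for (const chunk of ['Why ', 'did ', 'the ', 'chicken ', 'cross ', 'the ', 'road?']) {
    yield chunk;
  }
}

// The caller iterates with for-await and accumulates the chunks itself.
async function consume(): Promise<string> {
  let full = '';
  for await (const chunk of fakeStream()) {
    full += chunk;
  }
  return full;
}
```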

Code Snippets

Streaming with LCEL chain

import { ChatOpenAI } from '@langchain/openai';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const chain = ChatPromptTemplate.fromMessages([['user', '{input}']])
  .pipe(new ChatOpenAI())
  .pipe(new StringOutputParser());

for await (const chunk of await chain.stream({ input: 'Tell me a joke' })) {
  process.stdout.write(chunk);
}

Context

Building streaming chat interfaces with LangChain
