pattern · javascript · nodejs · Moderate · pending

Node.js stream processing patterns

Submitted by: @anonymous
Tags: streams · transform · pipeline · readline · backpressure · memory

Problem

Need to process large files or data streams without loading everything into memory.

Solution

Node.js stream patterns:

// Note: the `await` examples below assume an async context
// (an async function, or an ES module with top-level await)
const { createReadStream, createWriteStream } = require('fs');
const { Transform, pipeline } = require('stream');
const { promisify } = require('util');
const pipelineAsync = promisify(pipeline);

// Transform stream
const toUpperCase = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  }
});

// Pipeline (handles errors and cleanup)
await pipelineAsync(
  createReadStream('input.txt'),
  toUpperCase,
  createWriteStream('output.txt')
);

// Line-by-line processing
const readline = require('readline');
const rl = readline.createInterface({
  input: createReadStream('large-file.csv'),
  crlfDelay: Infinity, // treat \r\n as a single line break
});

for await (const line of rl) {
  const [name, email] = line.split(',');
  await processUser(name, email); // processUser: your own async handler
}

// JSON stream processing (for large JSON arrays)
// Requires the third-party JSONStream package: npm install JSONStream
const JSONStream = require('JSONStream');
await pipelineAsync(
  createReadStream('huge.json'),
  JSONStream.parse('*.name'), // emits each element's `name` from the top-level array
  new Transform({
    objectMode: true,
    transform(name, enc, cb) {
      console.log(name);
      cb();
    }
  })
);
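
On Node.js 15+ the promisify step can be skipped: the built-in stream/promises module exports a promise-returning pipeline directly. A minimal sketch equivalent to the first example above (same input.txt / output.txt assumptions):

const { pipeline } = require('stream/promises');
const { createReadStream, createWriteStream } = require('fs');
const { Transform } = require('stream');

await pipeline(
  createReadStream('input.txt'),
  new Transform({
    transform(chunk, encoding, callback) {
      callback(null, chunk.toString().toUpperCase());
    }
  }),
  createWriteStream('output.txt')
);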


Key: use pipeline() instead of .pipe() - pipeline() forwards errors and destroys every stream in the chain on failure, so nothing is left half-open.
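
To make the difference concrete, here is a sketch contrasting the two styles, reusing the imports and pipelineAsync defined above; missing.txt and out.txt are just placeholder names:

// With .pipe(): an error on the readable is NOT forwarded downstream,
// so every stream in the chain needs its own 'error' handler.
createReadStream('missing.txt')
  .on('error', (err) => console.error('read failed:', err))
  .pipe(createWriteStream('out.txt'));

// With pipeline(): one try/catch covers the whole chain,
// and all streams are destroyed if any of them fails.
try {
  await pipelineAsync(
    createReadStream('missing.txt'),
    createWriteStream('out.txt')
  );
} catch (err) {
  console.error('pipeline failed:', err);
}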

Why

Streams process data in chunks, so memory use stays roughly constant - bounded by the streams' highWaterMark buffers rather than by the input size. A 10 GB file needs about the same memory as a 10 KB file.
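
A rough illustration (a sketch, assuming a local big.log file and an async context): the amount of data held in memory at any moment is set by highWaterMark, not by the file size.

const { createReadStream } = require('fs');

let bytes = 0;
// 64 KB chunks: only about one chunk is buffered at a time
const rs = createReadStream('big.log', { highWaterMark: 64 * 1024 });

for await (const chunk of rs) {
  bytes += chunk.length;
}
console.log(`processed ${bytes} bytes`);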

Gotchas

  • Always use pipeline(), not .pipe() - .pipe() does not propagate errors between streams or destroy them on failure
  • Backpressure: when a writable stream's buffer is full, the readable side is paused automatically until the writable drains (see the sketch below)
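
A hand-rolled write loop shows what pipe()/pipeline() do for you under the hood. This is a minimal sketch, assuming a destination file out.bin and an async context:

const { createWriteStream } = require('fs');

const ws = createWriteStream('out.bin');

for (let i = 0; i < 1_000_000; i++) {
  const ok = ws.write(`record ${i}\n`);
  if (!ok) {
    // Internal buffer is full: wait for 'drain' before writing more
    await new Promise((resolve) => ws.once('drain', resolve));
  }
}
ws.end();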

Context

Node.js applications processing large files or data
