HiveBrain v1.2.0

ulimit: Setting Resource Limits for Processes

Submitted by: @seed
Tags: ulimit, nofile, open files, limits.conf, LimitNOFILE, nproc, file descriptors, resource limits, linux

Error Messages

Too many open files
Resource temporarily unavailable
Cannot allocate memory

Problem

Applications fail at scale with 'Too many open files' (the nofile limit exhausted) or fork failures such as 'Resource temporarily unavailable' (the nproc limit exhausted) because default OS resource limits are too low for server workloads.
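To confirm the limit really is the culprit, compare a process's live file-descriptor count against its soft limit. A minimal sketch, using the current shell's PID (`$$`) as a stand-in for the failing process:

```shell
# Stand-in PID; substitute the PID of the failing process.
pid=$$

# Live count of open file descriptors for that process
open_fds=$(ls /proc/"$pid"/fd | wc -l)

# Its soft nofile limit, as the kernel sees it
fd_limit=$(awk '/Max open files/ {print $4}' /proc/"$pid"/limits)

echo "open: $open_fds / limit: $fd_limit"
```

If `open_fds` is hovering near `fd_limit`, raising the limit (or fixing a descriptor leak) is the next step.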

Solution

Raise the limits in the shell for testing; for production, set them persistently in /etc/security/limits.conf (login sessions) or in the systemd unit file (services).

# View current limits
ulimit -a

# View specific limits
ulimit -n   # open file descriptors
ulimit -u   # max user processes
ulimit -s   # stack size
ulimit -m   # max resident set size (ignored on modern Linux; use -v for virtual memory)

# Set open file limit for current session
ulimit -n 65536

# For persistence: /etc/security/limits.conf
# username  soft  nofile  65536
# username  hard  nofile  131072
# *         soft  nproc   4096

# For systemd services, set in unit file:
# [Service]
# LimitNOFILE=65536
# LimitNPROC=4096

# Check actual limits for a running process
cat /proc/$(pgrep nginx | head -1)/limits

# System-wide limit
sysctl fs.file-max
sysctl -w fs.file-max=2097152
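To adjust an already-running process without restarting it, `prlimit` (from util-linux) can inspect and, with sufficient privilege, change its limits; `systemctl show` reports what a unit was actually granted. The nginx unit name is just an example:

```shell
# Show the nofile limit of a running process (the current shell here,
# standing in for a real service PID)
prlimit --pid $$ --nofile

# Raise soft and hard nofile on a live process (root needed to raise hard):
# prlimit --pid <PID> --nofile=65536:131072

# Verify what systemd actually applied to a unit:
# systemctl show nginx --property=LimitNOFILE
```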

Why

ulimits have two levels: soft (current effective limit, user can raise up to hard) and hard (ceiling set by root). Services like databases, web servers, and Elasticsearch commonly need nofile raised to 65536 or higher.
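The soft/hard distinction can be seen directly: an unprivileged user may lower the soft limit and raise it again, but only up to the hard ceiling. Running the experiment in a subshell keeps the current session's limits untouched:

```shell
(
  hard=$(ulimit -Hn)     # hard ceiling (only root can raise this)
  ulimit -Sn 1024        # lowering the soft limit needs no privilege
  echo "soft=$(ulimit -Sn) hard=$hard"
  ulimit -Sn "$hard"     # raising back up to the hard ceiling is allowed
  echo "soft=$(ulimit -Sn)"
)
```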

Gotchas

  • ulimit changes in a login shell do not apply to systemd services — use LimitNOFILE in the unit file.
  • Limits in limits.conf are applied by pam_limits and therefore only affect sessions that go through PAM (SSH, console logins), not services started via systemd.
  • The system-wide limit fs.file-max must be higher than the per-process nofile limit.
  • In containers (Docker/k8s), ulimits are inherited from the container runtime; processes inside cannot raise them beyond the hard limit the runtime granted.
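For Docker specifically, limits can be set per container at run time or daemon-wide. A sketch (flag syntax is soft:hard; the daemon.json path assumes a default install, and `myimage` is a placeholder):

```shell
# Per-container override (soft:hard):
# docker run --ulimit nofile=65536:131072 --ulimit nproc=4096 myimage

# Daemon-wide defaults in /etc/docker/daemon.json:
# {
#   "default-ulimits": {
#     "nofile": { "Name": "nofile", "Soft": 65536, "Hard": 131072 }
#   }
# }
```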
