How to Fix Nginx 'Too Many Open Files' Error

Recently, I noticed that my self-hosted CDN node at https://imgurl.org/ occasionally became inaccessible. Server load was low, but the number of active Nginx connections stalled at around 1024. Nginx itself can comfortably handle far more than 1024 concurrent connections, so concurrency was not the bottleneck. Checking the Nginx error log revealed a stream of "Too many open files" errors: Nginx could not open any more file descriptors.

This error can stem from the system's ulimit restrictions, from Nginx's own configuration, or both. Let's start with the underlying concept.

What is ulimit?

The ulimit command is used to restrict shell users' access to system resources. If you are unfamiliar with this, the following explanation may help:

Imagine 10 users logged in to a Linux host at the same time. Without resource limits, if each user opened 500 documents of 10 MB each, the system would have to service roughly 50 GB of open file data, putting severe pressure on memory.

ulimit restricts the resources occupied by processes started by the shell. It supports various types of limits, including:

  • Size of core files created
  • Size of process data blocks
  • Size of files created by the shell process
  • Size of locked memory
  • Size of resident set
  • Number of open file descriptors
  • Maximum stack size
  • CPU time
  • Maximum number of threads per user
  • Maximum virtual memory usable by the shell process

It also supports both hard and soft resource limits.

In simple terms, the ulimit file-descriptor limit caps the number of descriptors a process can hold open at once, and on Linux that includes network sockets, pipes, and other handles, not just regular files. This prevents a single user from opening so many descriptors that the system exhausts its resources.

Checking ulimit

Now that we understand what ulimit does, we need to check the underlying system limits. The parameters for ulimit are as follows:

  • -a: display all current resource limit settings;
  • -c <core file size>: maximum size of core dump files, in blocks;
  • -d <data segment size>: maximum size of a process's data segment, in KB;
  • -f <file size>: maximum size of files the shell may create, in blocks;
  • -H: set hard resource limits (the ceiling; only root can raise it);
  • -m <memory size>: maximum resident set size, in KB;
  • -n <file count>: maximum number of file descriptors open simultaneously;
  • -p <buffer size>: pipe buffer size, in 512-byte blocks;
  • -s <stack size>: maximum stack size, in KB;
  • -S: set soft resource limits (the value processes actually hit first);
  • -t <CPU time>: maximum CPU time, in seconds;
  • -u <process count>: maximum number of processes a single user may run;
  • -v <virtual memory size>: maximum virtual memory available to the shell, in KB.

Since the Nginx error complains about open files, we can check the relevant limit directly with ulimit -n, which shows the maximum number of file descriptors a process may hold open simultaneously.

[root@bwh-cdn conf]# ulimit -n
1024

The output shows a limit of 1024 file descriptors. Once Nginx tries to open more than 1024 descriptors, including client sockets, it throws the "Too many open files" error.
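Note that ulimit -n reports the limit of the current shell; a long-running daemon keeps whatever limit it inherited when it started. One way to inspect a live process is through /proc. The sketch below uses the current shell's PID ($$) as a stand-in; on a real server you would substitute an Nginx worker PID (for example from pgrep -f 'nginx: worker'):

```shell
# Inspect the live "Max open files" limit of a running process via /proc.
# $$ (the current shell) stands in for an Nginx worker PID here.
pid=$$
grep 'Max open files' /proc/"$pid"/limits

# Count how many file descriptors that process currently holds open:
ls /proc/"$pid"/fd | wc -l
```

If a worker's descriptor count is hovering near its "Max open files" value, you have confirmed the cause before changing any configuration.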

Solution

Modify ulimit Limits

Execute ulimit -n 65535 to raise the open-file limit for the current shell. Here 65535 is the new cap on simultaneously open file descriptors; adjust it to your workload. Note that a non-root user can only raise the soft limit up to the existing hard limit.

The ulimit command only affects the current shell session and is lost when you log out. To make the change permanent, modify /etc/security/limits.conf, which PAM applies to new login sessions. Add the following configuration at the bottom:

* soft nproc 65535
* hard nproc 65535
* soft nofile 65535 
* hard nofile 65535
  • *: Represents global settings
  • soft: Represents soft limits
  • hard: Represents hard limits
  • nproc: Represents the maximum number of processes
  • nofile: Represents the maximum number of open files

After modifying the file, start a new login session and run ulimit -n to verify the settings have taken effect:

[root@rakcdn conf]# ulimit -n
65535

Modify Nginx Open File Limits

Add the following line to the main (top-level) context of nginx.conf, then reload the configuration with nginx -s reload:

worker_rlimit_nofile 65535;

The parameter worker_rlimit_nofile means: "Change the limit on the maximum number of file descriptors for Nginx worker processes. This allows increasing the limit without restarting the master process."
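As a rough illustration (the values are examples, not recommendations), worker_rlimit_nofile sits in the main context alongside the events block, and should comfortably exceed worker_connections, since a single proxied request can consume more than one descriptor:

```nginx
# nginx.conf — main (top-level) context; illustrative values only
worker_processes  auto;

# Per-worker file descriptor cap. Keep it well above worker_connections,
# because each proxied request can hold a client socket, an upstream
# socket, and possibly open files at the same time.
worker_rlimit_nofile 65535;

events {
    # Maximum simultaneous connections per worker; bounded in practice
    # by worker_rlimit_nofile and the system limits discussed above.
    worker_connections 10240;
}
```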

Summary

The above operations raised two separate ceilings: the system's ulimit and Nginx's own worker_rlimit_nofile. With both in place, Nginx workers can open enough file descriptors to serve high connection counts without hitting "Too many open files".
