Solving the "Too many open files" issue with Nginx

Publish: 2019-06-02 | Modify: 2019-06-02

Recently, I noticed that https://imgurl.org/ was occasionally unreachable. The server load was not high, and nginx was handling around 1024 connections, which should have been well within its capacity. However, the nginx error log contained "Too many open files" errors, meaning nginx could not open any more file descriptors. So the problem was not the number of connections itself.

This error may be related to the system's ulimit limit and nginx's own configuration. Let's understand the concept first.

What is ulimit?

The ulimit command limits the resources available to the shell and to the processes it starts. To see why this matters, imagine 10 users logged in to a Linux host at the same time. With no limits on system resources, those 10 users could each open 500 documents at once; assuming each document is 10 MB, the system's memory would come under serious pressure.

ulimit limits the resources consumed by shell processes and supports many limit types: the maximum size of core files, the maximum size of a program's data segment, the maximum size of files the shell may create, the maximum amount of locked memory, the maximum resident set size, the maximum number of open file descriptors, the maximum stack size, CPU time, the maximum number of processes for a single user, and the maximum virtual memory available to the shell. Each limit has both a hard and a soft value.

In simple terms, ulimit caps the number of file descriptors a user's processes may hold open (sockets and pipes count too, not just regular files), preventing a single user from opening so many that the system runs out of resources.

Checking ulimit

Now that we know what ulimit does, let's find out what limits the system currently imposes. The main ulimit options are:

-a: Display the current resource limits.
-c <core file size>: Set the maximum size of core files in blocks.
-d <data seg size>: Set the maximum size of a program's data segment in kilobytes.
-f <file size>: Set the maximum size of files created by the shell in blocks.
-H: Set the hard limit for the given resource.
-m <memory size>: Set the maximum resident set size in kilobytes.
-n <file descriptors>: Set the maximum number of open file descriptors.
-p <pipe size>: Report the pipe buffer size in 512-byte blocks (read-only in bash).
-s <stack size>: Set the maximum stack size in kilobytes.
-S: Set the soft limit for the given resource.
-t <cpu time>: Set the maximum amount of CPU time in seconds.
-u <processes>: Set the maximum number of processes available to a single user.
-v <virtual memory>: Set the maximum amount of virtual memory available to the shell.
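Soft and hard limits can be inspected separately with the -S and -H flags. A quick check (the values vary by system):

```shell
# Soft limit on open file descriptors -- what processes started from
# this shell actually get
ulimit -S -n

# Hard limit -- the ceiling an unprivileged process may raise its
# soft limit to
ulimit -H -n
```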

Since the nginx error above is about opening too many files, we can check the maximum number of open file descriptors directly with ulimit -n.

[root@bwh-cdn conf]# ulimit -n
1024

From the above command, we can see that the limit is 1024 file descriptors; once nginx holds more than 1024 open (connections count as descriptors), the "Too many open files" error appears.

Solution

Modifying ulimit limits

To change the open-file limit for the current shell, run ulimit -n 65535, where 65535 is the maximum number of file descriptors that may be open at once. Adjust the value to suit your environment.
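As a sketch, the session-local change looks like this (without root privileges, the new soft limit must not exceed the hard limit):

```shell
# Check the hard ceiling first
ulimit -H -n

# Raise the soft limit for this shell session only
ulimit -n 65535

# Confirm the new value
ulimit -n
```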

The modification made using the ulimit command is only valid for the current shell session and will be lost after exiting. If you want the changes to be permanent, you need to modify the /etc/security/limits.conf file. Add the following configuration at the bottom:

* soft nproc 65535
* hard nproc 65535
* soft nofile 65535 
* hard nofile 65535
  • *: Represents global settings
  • soft: Represents soft limits
  • hard: Represents hard limits
  • nproc: Specifies the maximum number of processes
  • nofile: Specifies the maximum number of open files
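Note that limits.conf is read by pam_limits when a login session starts, so it does not retroactively change processes that are already running. To see the limits a live process actually has, read its /proc entry (shown here for the current shell; substitute nginx's master PID to check a running nginx):

```shell
# "Max open files" is the process's effective RLIMIT_NOFILE;
# replace "self" with a PID, e.g. $(cat /var/run/nginx.pid)
grep 'Max open files' /proc/self/limits
```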

The change takes effect for new login sessions, so log in again and verify with ulimit -n:

[root@rakcdn conf]# ulimit -n
65535

Modifying nginx's open file limit

Edit nginx.conf, add the following line in the main (top-level) context, then reload the configuration with nginx -s reload:

worker_rlimit_nofile 65535;

According to the nginx documentation, worker_rlimit_nofile "changes the limit on the maximum number of open files (RLIMIT_NOFILE) for worker processes" and is "used to increase the limit without restarting the main process."
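As a rough rule of thumb, worker_rlimit_nofile should be at least as large as worker_connections, since each connection consumes at least one descriptor (and proxied connections consume more). A minimal sketch of the relevant nginx.conf fragments, using the values from this article:

```nginx
# main (top-level) context: per-worker file-descriptor limit
worker_rlimit_nofile 65535;

events {
    # each worker may handle up to this many simultaneous connections;
    # keep it at or below worker_rlimit_nofile
    worker_connections 1024;
}
```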

Conclusion

The fix involved two changes: raising the system ulimit and raising nginx's worker_rlimit_nofile. With both in place, nginx can open enough file descriptors and the errors stop.
