The open files limit is a setting in Linux that limits the number of open file descriptors that a process can have. A file descriptor is a number that identifies a file or other resource that a process can access. The open files limit can affect a variety of applications, such as web servers (like Nginx) and databases (like MongoDB). If the open files limit is too low, it can cause problems such as:

  • Nginx can’t accept new connections.
  • MongoDB can’t read or write data.
  • Other applications may also experience problems.
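
To make the idea of a file descriptor concrete, you can list the descriptors a process currently holds under /proc. Here the current shell's own PID ($$) is used purely as an illustration; the exact descriptors you see will vary:

$ ls /proc/$$/fd
0  1  2  255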

In this blog post, we will discuss the open files limit in more detail, and how to increase it if necessary.

What is the open files limit?

The open files limit can cause problems in a variety of contexts. For example:

  • On a web server, if the open files limit is too low, Nginx may not be able to accept new connections. This can lead to errors such as “503 Service Unavailable”.
  • On a database server, if the open files limit is too low, MongoDB may not be able to accept new connections or read and write data. This can lead to errors such as “Too many open files”.
  • On a system with a lot of concurrent processes, the open files limit can be reached even if each process is only opening a few files. This can lead to performance problems as processes have to wait for file descriptors to become available.

The default open files limit also depends on the kernel version and the Linux distribution you are using. For example, many distributions ship with a default soft limit of 1024 and a higher hard limit, often 4096 or more.

Here are some of the reasons why the open files limit can be too low:

  • The default open files limit is too low.
  • The number of concurrent processes is too high.
  • The application is opening a lot of files.
  • The kernel is not able to allocate enough file descriptors.

Here is an example of MongoDB exceeding the open files limit:

2023-09-09T12:14:20.616+0000 W  NETWORK  [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files
2023-09-09T12:14:20.623+0000 W  NETWORK  [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files
2023-09-09T12:14:20.623+0000 W  NETWORK  [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files
2023-09-09T12:14:20.625+0000 W  NETWORK  [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files
2023-09-09T12:14:20.629+0000 W  NETWORK  [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files
2023-09-09T12:14:20.630+0000 W  NETWORK  [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files
2023-09-09T12:14:20.644+0000 I  NETWORK  [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files
2023-09-09T12:14:20.644+0000 I  NETWORK  [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files
2023-09-09T12:14:20.644+0000 I  NETWORK  [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files
2023-09-09T12:14:20.644+0000 I  NETWORK  [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files
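
When you see errors like these, it is worth checking how many descriptors the mongod process actually holds and what limit it is running with. The commands below are a quick sketch that assumes a single mongod process is running; run them as root, since /proc/<pid>/fd is only readable by the process owner:

$ ls /proc/$(pidof mongod)/fd | wc -l
$ grep "Max open files" /proc/$(pidof mongod)/limits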

Solutions

If you are facing issues with the open files limit on your system, there are several potential solutions that you can try out. Let’s explore them:

  • Increase the open files limit: This involves adjusting the maximum number of files that can be opened simultaneously by your system. By increasing this limit, you allow your application to handle a greater number of files without encountering issues.
  • Reduce the number of concurrent processes: If your system is running an excessive number of processes concurrently, it can exhaust the available file descriptors. By reducing the number of active processes, you free up descriptors and alleviate pressure on the limit.
  • Reduce the number of files being opened: Sometimes, an application might unnecessarily open numerous files, placing additional strain on the open files limit. By optimizing your code or implementing more efficient file handling techniques, you can minimize the number of files being opened and improve performance.
  • Upgrade to a newer kernel: In some cases, outdated kernels may have limitations on the number of open files. Upgrading to a newer kernel version can help overcome these restrictions and provide enhanced capabilities for handling files.

In my opinion, among the suggested solutions, increasing the open files limit seems to be the most appropriate course of action. However, it is crucial to carefully evaluate your specific needs and constraints before implementing any changes. Remember, finding the right solution for your particular situation will ensure smoother file handling and improved system performance.

1. Step 1: Check current limits

To view the current limits for your user, run the “ulimit” command with the “-a” option:

ulimit -a

For example:

$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
coredump(blocks)     unlimited
memory(kbytes)       unlimited
locked memory(kbytes) 65536
process              31703
nofiles              1024
vmemory(kbytes)      unlimited
locks                unlimited
rtprio               0

What are soft limits and hard limits?

In Linux, soft limit and hard limit are two types of limits that can be set on the resources that a process can use:

  • The soft limit is the value that the kernel actually enforces: a process cannot use more of the resource than its current soft limit allows.
  • The hard limit acts as a ceiling for the soft limit: a process may raise its own soft limit up to the hard limit, but only a privileged (root) process can raise the hard limit itself.

For example, the default soft limit for open files is often 1024, so a process can have at most 1024 files open at once; attempts to open more fail with “Too many open files”. If the hard limit is 4096, the process (or the user who starts it) can raise the soft limit up to 4096, but no further without root privileges.

This split is useful: the soft limit gives every process a conservative default that protects the system, while still allowing an application such as a web server to raise its own limit up to the hard ceiling when it genuinely needs to handle a lot of traffic. The hard limit prevents any single process from consuming an excessive share of the system's file descriptors.

The soft limit and hard limit can be set for each user and for each process. The default limits are set by the system administrator, but users can change their own limits. To change the soft limit or hard limit, you can use the ulimit command.
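
For example, an unprivileged shell can raise its own soft limit for open files up to (but not beyond) the current hard limit; the value 4096 below assumes the hard limit is at least that high:

$ ulimit -Sn 4096   # raise the soft limit for this session only
$ ulimit -Sn        # confirm the new soft limit
4096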

To view the current soft limit on open files, execute the ulimit command with the “-Sn” option:

ulimit -Sn

To view the current hard limit on open files, execute the ulimit command with the “-Hn” option:

ulimit -Hn
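
For example, with stock limits you might see something like the following; the exact values depend on your distribution and configuration:

$ ulimit -Sn
1024
$ ulimit -Hn
4096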

2. Step 2: Increase open file limits

The limits.conf file is a configuration file that is used to set limits on the resources that users and processes can use. To increase the open files limit using the limits.conf file, you need to add a line to the file that specifies the new limit.

$ man limits.conf
NAME
       limits.conf - configuration file for the pam_limits module

DESCRIPTION
       The pam_limits.so module applies ulimit limits, nice priority and number of simultaneous login sessions limit to user login sessions. This description of the configuration file syntax applies to the
       /etc/security/limits.conf file and *.conf files in the /etc/security/limits.d directory.

 <<more>>

The format of the line is as follows:

<domain>        <type>  <item>  <value>

Where:

  • <domain> is the user name, a group name prefixed with “@”, or the wildcard “*”, that you want to set the limit for.
  • <type> is either the soft limit or the hard limit (use “-” to set both at once).
  • <item> is one of the following:
    • core - limits the core file size (KB)
    • data - max data size (KB)
    • fsize - maximum filesize (KB)
    • memlock - max locked-in-memory address space (KB)
    • nofile - max number of open file descriptors
    • rss - max resident set size (KB)
    • stack - max stack size (KB)
    • cpu - max CPU time (MIN)
    • nproc - max number of processes
    • as - address space limit (KB)
    • maxlogins - max number of logins for this user
    • maxsyslogins - max number of logins on the system
    • priority - the priority to run user process with
    • locks - max number of file locks the user can hold
    • sigpending - max number of pending signals
    • msgqueue - max memory used by POSIX message queues (bytes)
    • nice - max nice priority allowed to raise to values: [-20, 19]
    • rtprio - max realtime priority
    • chroot - change root to directory (Debian-specific)
  • <value> is the value to set for the item.

For example, to increase the open files limit for the user root to 2048, you would add the following lines to the limits.conf file:

# /etc/security/limits.conf
root soft nofile 2048
root hard nofile 2048
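
To apply the same limit to every user instead of a single account, the “*” wildcard can be used as the domain. Note that, per the limits.conf man page, wildcard and group entries are not applied to the root user, so root still needs its own explicit lines as shown above:

# /etc/security/limits.conf
*    soft    nofile    2048
*    hard    nofile    2048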

3. Step 3: Enable the pam_limits for the current session

PAM stands for Pluggable Authentication Module. The PAM module pam_limits.so provides functionality to set a cap on resource utilization. The command ulimit can be used to view current limits as well as set new limits for a session. The default values used by pam_limits.so can be set in /etc/security/limits.conf.

Enable the pam_limits module by adding the line session required pam_limits.so to the /etc/pam.d/common-session file (this is the path on Debian and Ubuntu; other distributions may keep their PAM session configuration in a different file under /etc/pam.d/):

# /etc/pam.d/common-session
.......... << contents >>
session required pam_limits.so

Here are some additional things to keep in mind when enabling the pam_limits module:

  • The pam_limits module may not be enabled by default on every distribution, so check the file before adding a duplicate line.
  • The pam_limits module can be used to check other limits as well, such as the number of processes that a user can run.
  • The pam_limits module can be configured to use different limits for different users or groups.
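
A quick way to check whether pam_limits is already enabled (again assuming the Debian/Ubuntu file layout used above):

$ grep pam_limits /etc/pam.d/common-session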

4. Step 4: System-wide limits

The ulimit values are per-process, user-level limits that are set by the ulimit command (or via limits.conf). The kernel also has a system-wide limit, fs.file-max, which caps the total number of file handles the kernel will allocate across all processes and is set with the sysctl command.

The per-user limit is typically much lower than the system-wide limit. It is designed to protect the system from a single user or process accidentally consuming too many file descriptors, while fs.file-max protects the kernel itself.

Raising a user's ulimit does not bypass the system-wide limit: if fs.file-max is exhausted, opening files fails for every process on the system, no matter how high their individual limits are.

For this reason, when you raise per-user limits significantly, make sure fs.file-max is large enough to accommodate what your processes may collectively open.

You can increase the system-wide limit on open files by setting the kernel parameter fs.file-max. For that purpose, you can use the sysctl utility:

$ sysctl -w fs.file-max=<value>

Or, set the fs.file-max parameter in the /etc/sysctl.conf file and run sysctl -p to reload the kernel settings from it. Append to the file rather than overwriting it:

$ cat <<EOF >> /etc/sysctl.conf
fs.file-max=<value>
EOF
$ sysctl -p

To verify the change, read the value back from /proc:

$ cat /proc/sys/fs/file-max
<value>
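
You can also see how close the system is to this limit: /proc/sys/fs/file-nr reports the number of allocated file handles, the number of allocated-but-unused handles, and the maximum. The first two figures below are only illustrative:

$ cat /proc/sys/fs/file-nr
2944	0	<value>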

5. Step 5: Reboot

Once you have completed the steps above, save the files and reboot the system (or, at a minimum, log the affected user out and back in and restart the affected services) so that the new limits take effect.

$ sudo reboot

Practical example

In this context, the most effective way to address Nginx being unable to accept new connections is to increase the open files limit. Raising the limit gives Nginx enough file descriptors to accept and process new connections, preventing bottlenecks and disruptions in serving content and allowing the site to handle a larger number of concurrent users.

1. Check nginx limits

By default, Nginx runs its worker processes as the nginx user, so we need to check that user's current limits with the following command:

$ sudo -u nginx -H sh -c "ulimit -a"
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
coredump(blocks)     unlimited
memory(kbytes)       unlimited
locked memory(kbytes) 65536
process              31703
nofiles              1024
vmemory(kbytes)      unlimited
locks                unlimited
rtprio               0

2. Increase open files limits for user nginx:

The open files limit is too low, so increase it by editing the /etc/security/limits.conf file for the nginx user, setting both the soft and the hard limit to the new value. For example, to increase the open files limit to 2048, append the following lines:

$ cat <<EOF >> /etc/security/limits.conf
nginx soft nofile 2048
nginx hard nofile 2048
EOF
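
You can quickly confirm that the two lines were appended:

$ tail -n 2 /etc/security/limits.conf
nginx soft nofile 2048
nginx hard nofile 2048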

3. Enable the pam_limits for the current session

The pam_limits module can be configured to use different limits for different users or groups, with the defaults defined in the /etc/security/limits.conf file. To apply the limits configured there, enable the module for login sessions (skip this if the line is already present):

$ cat <<EOF >> /etc/pam.d/common-session
session required pam_limits.so
EOF

4. Set system-wide limits

To set a system-wide limit with the sysctl command, use the -w option. For example, to increase the kernel open files limit to 65000:

$ sysctl -w fs.file-max=65000

To verify the change, use the following command:

$ cat /proc/sys/fs/file-max
65000
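
Because sysctl -w only changes the value for the currently running kernel, also persist it in /etc/sysctl.conf (as in Step 4 above) so that it survives the reboot in the next step:

$ cat <<EOF >> /etc/sysctl.conf
fs.file-max=65000
EOF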

5. Reboot and check results

Reboot to apply changes:

$ sudo reboot

Check the nginx user's limits again:

$ sudo -u nginx -H sh -c "ulimit -a"
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
coredump(blocks)     unlimited
memory(kbytes)       unlimited
locked memory(kbytes) 65536
process              31703
nofiles              2048
vmemory(kbytes)      unlimited
locks                unlimited
rtprio               0
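
You can also confirm that the running Nginx processes picked up the new limit by reading it straight from /proc. The master process PID is read from /run/nginx.pid here, which is a common location but may differ on your system; if the limits were applied, both the soft and hard values should now read 2048:

$ grep "Max open files" /proc/$(cat /run/nginx.pid)/limits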

Conclusion

In conclusion, the open files limit is an important setting that can affect the performance of a variety of applications. If the open files limit is too low, it can cause problems such as Nginx not being able to accept new connections or MongoDB not being able to read or write data.

If you are experiencing problems with an application that may be related to the open files limit, you can use the steps outlined in this blog post to troubleshoot the problem. By increasing the open files limit, you can often resolve these problems and improve the performance of your applications.

I hope this blog post has been helpful. If you have any further questions, please feel free to leave a comment below.