Linux is one of the most widely used operating systems on the web today, serving many websites (meteen.info is amongst them 😉). Out of the box, the OS ships with several limits to prevent your system from locking up and freezing completely. These limits, however, are set far too conservatively for higher-volume websites and services.
Word of warning! Implementing what I’m about to share with you might freeze your system, so please try and test raising the limits before you go live.
My flavor of Linux has changed a lot over the past years: I started out with openSUSE, moved to Fedora, then to CentOS, tried and got frustrated with Arch, before finally landing on Ubuntu, which I’m currently sticking with.
The reason I stumbled upon these limits was that I had issues with Janus restarting when roughly 50 users were in a video room, and with a Node.js socket server when more than 1,000 users were connected.
On Linux there is a file, /etc/security/limits.conf, in which you can persist your per-process limits like so:
# Nick Hooijenga
# Almost unlimited not recommended on live systems
* soft core unlimited
* hard core unlimited
* soft data unlimited
* hard data unlimited
* soft fsize unlimited
* hard fsize unlimited
* soft nofile 1000000
* hard nofile 1000000
* soft cpu unlimited
* hard cpu unlimited
* soft stack unlimited
* hard stack unlimited
* soft nproc 64010
* hard nproc 65545
* soft sigpending unlimited
* hard sigpending unlimited
* soft locks unlimited
* hard locks unlimited
* soft msgqueue unlimited
* hard msgqueue unlimited
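Since limits.conf is applied by PAM when a session starts, a quick sanity check (a sketch; the numbers you see depend on what you configured) is to open a fresh login shell after editing the file and inspect the limits with the shell built-in ulimit:

```shell
# Run in a NEW login session after editing /etc/security/limits.conf;
# pam_limits applies the file at session start, not retroactively.
ulimit -Sn   # soft limit on open file descriptors (nofile)
ulimit -Hn   # hard limit on open file descriptors
ulimit -u    # max user processes (nproc)
```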
Raising these limits and persisting them won’t affect already running processes. Contrary to popular belief, though, a reboot of your OS is not required if all you want is for newly started processes (like Apache, Nginx, HAProxy, etc.) to spawn with these limits; the file is applied by PAM when a new session starts.
If you are running a modern version of your OS, then chances are that prlimit is installed. Use it by running:
prlimit --pid $pid --$limit=$soft:$hard
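Before changing anything, prlimit can also simply read a process’s current limits. As a sketch, this queries the open-files limit of the current shell ($$ is the shell’s own PID):

```shell
# Query (without modifying) the open-files limit of the current shell
prlimit --pid $$ --nofile
```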
In the example below we find the root process of Apache and then raise its file descriptor limits (from the OS default of 1024) to 20,000 soft and 40,000 hard.
root@svr01:~# ps aux | grep apache2 | head -n 1
---- 1981870 0.0 0.0 621768 32456 ? Ss nov06 0:01 /usr/sbin/apache2 -k start
root@svr01:~# prlimit --pid 1981870 --nofile=20000:40000
By gradually upping the limits, you can discover the required limits and then persist them to your limits.conf file.
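While tuning, you can watch the effect of each prlimit call by reading the process’s limits straight from /proc. A small sketch, using the shell’s own PID as a stand-in for your service’s PID:

```shell
# /proc/<pid>/limits shows the soft and hard limits currently in force
pid=$$
grep "Max open files" /proc/$pid/limits
```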
If you’ve written custom software and expect its systemd service to respect the limits set in limits.conf, then I’m sorry to disappoint… systemd does not read limits.conf, so you’ll need to tweak your unit file by adding the corresponding Limit* directives.
As with the example above, these values are obviously not tuned; your mileage may vary. Don’t forget to run systemctl daemon-reload after making the changes.
I created a custom DNS server service at /etc/systemd/system/dns.service, with the service configured as follows:
Description=DNS Server Service
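Only the Description line survived here, so as a minimal sketch (the binary path and the limit values are hypothetical, but LimitNOFILE and LimitNPROC are the directives systemd actually understands, with soft:hard syntax):

```ini
[Unit]
Description=DNS Server Service
After=network.target

[Service]
# Hypothetical binary path; replace with your own
ExecStart=/usr/local/bin/dns-server
Restart=on-failure
# systemd equivalents of the limits.conf entries shown earlier
LimitNOFILE=20000:40000
LimitNPROC=64010:65545

[Install]
WantedBy=multi-user.target
```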
Now your custom service runs far beyond the OS-configured limits.
We’ve skipped the ulimit command, as it only controls the resources available to the current shell and to the processes it starts. The tweaks explained on this page let you permanently raise limits system-wide, within whatever the kernel itself allows (e.g. fs.file-max for open files).
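The scope difference is easy to demonstrate: a limit lowered with ulimit in a child shell does not leak back into the parent. This sketch lowers the soft nofile limit, which any process is allowed to do:

```shell
# Lower the soft open-files limit in a child shell only
bash -c 'ulimit -Sn 512; ulimit -Sn'   # prints 512
ulimit -Sn                             # parent shell's limit is unchanged
```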
If a process does not pick up the limits from
/etc/security/limits.conf, then add them to its service file (see the example). Verify this by looking up the process ID and then checking its running limits; in our example:
root@svr01:~# cat /proc/1981870/limits
Limit Soft Limit Hard Limit Units
Max open files 20000 40000 files