Elasticsearch: too many open files


Sometimes (in particular if you're just starting out with Elasticsearch and using the default options), you will see error messages like this in your log:

... Caused by: java.io.FileNotFoundException: /esdata/elasticsearch/elasticsearch-cluster001/nodes/0/indices/.marvel-2014.10.27/0/index/_fo.fdt (Too many open files)

The exact path will vary, but the important part is the message in parentheses: (Too many open files). It means that Elasticsearch needs to keep more files open than its process is currently allowed to.
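Before changing anything, it helps to see what the current limits actually are. A minimal sketch, assuming a Linux system (the commented pgrep pattern for finding a running node is an assumption; adjust it to your setup):

```shell
# Soft and hard limits on open file descriptors for the current shell;
# any process started from this shell inherits them.
echo "soft nofile limit: $(ulimit -Sn)"
echo "hard nofile limit: $(ulimit -Hn)"

# For a node that is already running, read the live limits from /proc
# (the pgrep pattern below is an assumption, not a fixed value):
# grep 'Max open files' "/proc/$(pgrep -f org.elasticsearch.bootstrap)/limits"
```

If the hard limit printed here is low (1024 is a common default), the error above is almost certainly this limit being hit.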

Quick fix

Edit the file /etc/security/limits.conf (present on most Linux distributions) and add this at the end:

elasticsearch soft nofile 32000
elasticsearch hard nofile 32000
root          soft nofile 32000
root          hard nofile 32000
# End of file
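Once the entries are in place (and after logging back in, so PAM re-applies the limits), you can sanity-check them. The sketch below copies the entries into a temporary file and parses the four limits.conf fields; the commented curl at the end shows how a live node reports the limit it actually sees, assuming the default localhost:9200 address:

```shell
# Reproduce the limits.conf entries in a temp file and list what each grants.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
elasticsearch soft nofile 32000
elasticsearch hard nofile 32000
root          soft nofile 32000
root          hard nofile 32000
EOF

# Fields are: <domain> <type> <item> <value>
awk '$3 == "nofile" { printf "%s: %s limit of %s open files\n", $1, $2, $4 }' "$tmp"
rm -f "$tmp"

# On a live node, the cluster itself reports the limit in effect
# (address and port are the defaults; adjust for your cluster):
# curl 'localhost:9200/_nodes/process?pretty' | grep max_file_descriptors
```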

HEADS-UP On Ubuntu, there are a few extra things you must do:

  • Add the following line to /etc/sysctl.conf:
    fs.file-max = 500000
  • Add the following line to /etc/pam.d/common-session-noninteractive:
    session required pam_limits.so
  • Finally, restart the system.
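After the reboot (or after applying the setting immediately with `sudo sysctl -p`), you can confirm the kernel-wide ceiling took effect. This assumes a Linux system with /proc mounted:

```shell
# fs.file-max is the total number of file handles the kernel will hand out
# across all processes; reading it from /proc requires no root privileges.
cat /proc/sys/fs/file-max

# `sysctl fs.file-max` prints the same value in "key = value" form.
```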


limits.conf is the file where you set limits on what users can do on your system. Limits on things like maximum CPU usage, maximum number of open files, or maximum memory allowed can be set here on a per-user or per-group basis.

For this issue, you should raise the limits for the elasticsearch user and for the root user (if you happen to be running Elasticsearch as root), as well as for any other user you run Elasticsearch as.
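For example, if you run Elasticsearch as a dedicated account named esuser (the name here is just a placeholder for whatever account you actually use), the corresponding limits.conf entries would be:

```
esuser soft nofile 32000
esuser hard nofile 32000
```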

