The nexus.log file is full of "too many open files" exceptions. How can I fix this?

This article applies to Nexus Repository version 2.

The Lucene search indexes maintained by Nexus Repository 2 can temporarily use 10-20 files per repository during re-index and search operations (this is a very rough approximation).

Nexus Repository 2 Professional will consume about 400 file handles on a clean start, so as the number of indexed repositories in a Nexus Repository instance increases, you may approach your system's default limit for open file handles. When you hit this limit, you will see "too many open files" exceptions in the logs.
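As a rough sanity check, the figures above can be turned into a back-of-the-envelope estimate (the 400 and 20 are the approximations quoted above; the repository count of 100 is an arbitrary example):

```shell
# Quick capacity estimate using the rough figures above (~400 handles on
# a clean start plus ~20 per repository during indexing); 100 repositories
# is an arbitrary example count:
repos=100
needed=$(( 400 + 20 * repos ))
echo "$needed"   # 2400 -- already above a typical 1024 default
```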

To recover from this you must first increase the limit, then repair the damage.

Checking the open file limit


Determine the user id of the user running the Nexus Repository 2 process. Then execute the following:

# su - <nexus_userid>
$ ulimit -Hn
$ ulimit -Sn
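Note that ulimit reports what a *new* shell would get. On Linux you can also inspect the limits of the process that is already running via /proc (a sketch; "self" is a stand-in, substitute the Nexus PID in practice):

```shell
# ulimit shows what a *new* shell gets; to see the limits of the process
# that is already running, read /proc (Linux only). "self" is used here
# as a stand-in -- substitute the Nexus PID in practice:
grep 'Max open files' /proc/self/limits
```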

Increasing the Open File Limit


Linux

The default open file limit is usually 1024.

To increase it, add the following to /etc/security/limits.conf (where "nexus" is the username of the account running Nexus):

#<domain>      <type>  <item>         <value>
nexus          hard    nofile          2048
nexus          soft    nofile          2048

This will take effect when the next login shell is executed. If you are running Nexus Repository 2 via the "nexus" init.d script and you have RUN_AS_USER set, all you need to do is restart the nexus service; the script performs a login when it switches users.
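Because the change only affects newly started login shells, a quick check is to spawn a fresh shell and print both limits (run this as the nexus user after the limits.conf change):

```shell
# Limits only change for newly started login shells; spawn a fresh shell
# and print the hard and soft limits to confirm the new values apply:
bash -c 'ulimit -Hn; ulimit -Sn'
```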

Note: On Ubuntu, you also need to add the following line to /etc/pam.d/common-session:

session required pam_limits.so

Note: If you're using systemd to launch the server, the above won't work. Instead, modify the service unit file to add a "LimitNOFILE" line in the [Service] section, for example:

[Unit]
Description=nexus service

[Service]
# Raise the open file limit for the service (the value here is an example)
LimitNOFILE=65536
ExecStart=/opt/nexus/bin/nexus start
ExecStop=/opt/nexus/bin/nexus stop

Then run "systemctl daemon-reload" and restart the service.



Solaris

The default limit per process is 256. This can be increased to 1024 by issuing "ulimit -n 1024".

To increase it beyond 1024, edit /etc/system and add:

* set hard limit on file descriptors
set rlim_fd_max = 4096
* set soft limit on file descriptors
set rlim_fd_cur = 4096

After a reboot, "ulimit -n 4096" will work.


Windows

The limit is 2048. We have not yet found a way to increase this; however, so far no one has hit this limit.


Mac OS X

There is a Stack Exchange answer that summarizes the steps for various versions of OS X.

Repairing the Damage

The most likely areas of damage after running out of file handles or filling up a disk are the Lucene and RDF indexes, databases, and the repository storage areas.

To fix the search indexes, schedule a "Repair Index" task against all repositories. There are also Lucene indexes for the system feeds, stored in "sonatype-work/nexus/timeline". These will be fixed automatically at startup if Nexus detects they are corrupted.

In the storage area, Nexus Repository 2 may have created zero length files. This happens because a directory entry can still be created on a full disk. To find these, run the following in sonatype-work/nexus/storage:

find . -type f -size 0

Double check that the above finds only the files you expect, and then remove them by running:

find . -type f -size 0 -delete
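The review-then-delete sequence can be rehearsed safely on a hypothetical scratch directory that mimics the storage layout (all paths and file names here are made up for the demonstration):

```shell
# Dry run on a scratch directory mimicking the storage layout, to
# demonstrate the review-then-delete sequence safely:
tmp=$(mktemp -d)
mkdir -p "$tmp/storage/releases"
touch "$tmp/storage/releases/empty.jar"            # simulated zero-length file
printf 'data' > "$tmp/storage/releases/good.jar"   # healthy file
find "$tmp/storage" -type f -size 0                # review the candidates first
find "$tmp/storage" -type f -size 0 -delete        # then delete them
remaining=$(find "$tmp/storage" -type f)
echo "$remaining"                                  # only good.jar survives
rm -rf "$tmp"
```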


Comments

Mark Pritchett: Repairing the Damage was needed in our case.

Peter Lynch: We are closing this article for comments. If you have a support license, please contact us by submitting a support ticket. If you do not have a support license, please use our Nexus Users List or our other free support resources.