OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable

If you are an administrator, you can:

  1. temporarily change the limit on the number of processes with the command ulimit -u [number]

  2. permanently change the limit, i.e. the nproc parameter in /etc/security/limits.conf (see the example entry below)
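
For reference, a line in /etc/security/limits.conf has the form <domain> <type> <item> <value>. A sketch with a made-up username and limit:

    someuser    soft    nproc    10000
    someuser    hard    nproc    10000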

If you are a user, you can:

  1. In bash
    $ export OPENBLAS_NUM_THREADS=2
    $ export GOTO_NUM_THREADS=2
    $ export OMP_NUM_THREADS=2
  2. In Python
    >>> import os
    >>> os.environ['OPENBLAS_NUM_THREADS'] = '1'

This should resolve the error caused by OpenBLAS spawning too many threads. The key is to set the number of threads below the per-user limit on your cluster.
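
For example, here is a minimal self-contained sketch (the thread count of 2 is illustrative) that applies the same idea from inside a script. The variables must be set before numpy is imported, because OpenBLAS creates its thread pool when the library loads:

import os

# Must run before `import numpy`: OpenBLAS reads these
# variables only once, when the library is loaded.
os.environ['OPENBLAS_NUM_THREADS'] = '2'
os.environ['GOTO_NUM_THREADS'] = '2'
os.environ['OMP_NUM_THREADS'] = '2'

import numpy as np

# numpy now uses at most 2 BLAS threads
a = np.random.rand(1000, 1000)
print((a @ a).trace())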


This is for others who encounter this error in the future. The cluster setup most likely limits the number of processes a user can run on an interactive node. The clue is in the second line of the error:

OpenBLAS blas_thread_init: pthread_create: Resource temporarily unavailable
OpenBLAS blas_thread_init: RLIMIT_NPROC 64 current, 64 max

Here the limit is set to 64. While this is quite sufficient for normal CLI use, it's probably not sufficient for interactively running Keras jobs (as the OP was) or, in my case, for running an interactive Dask cluster.

It may be possible to increase the limit from your shell with, say, ulimit -u 10000, but that's not guaranteed to work. It's best to notify the admins, as the OP did.
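
If you want to check this limit from inside Python rather than from the shell, the standard-library resource module can read it on Linux. A small sketch that reproduces the numbers OpenBLAS prints:

import resource

# RLIMIT_NPROC is the per-user process/thread limit that
# OpenBLAS reports when blas_thread_init fails.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"RLIMIT_NPROC {soft} current, {hard} max")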


Often this issue is related to the limit on the number of processes, which you can inspect with ulimit (on Linux):

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127590
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096         # <------------------culprit
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

A temporary solution is to increase this limit:

ulimit -u unlimited

Most servers I've encountered have this value set to match the number of pending signals (see ulimit -i). So, in the example above, I instead ran:

ulimit -u 127590

Then I added that line to my ~/.bashrc file so it is set on login.

For more info on how to permanently fix this, check out: https://serverfault.com/a/485277
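
The same adjustment can also be made from inside Python before importing numpy. This sketch raises the soft limit to the hard limit, which is the most an unprivileged process may do:

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
if soft < hard:
    # An unprivileged process may raise its soft limit
    # up to, but not beyond, the hard limit.
    resource.setrlimit(resource.RLIMIT_NPROC, (hard, hard))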


I had this problem running NumPy on an Ubuntu server. I got all of the following errors, depending on whether I tried to import numpy in a shell or ran my Django app:

  • PyCapsule_Import could not import module "datetime"
  • from numpy.core._multiarray_umath import (
  • OpenBLAS blas_thread_init: pthread_create failed for thread 25 of 32: Resource temporarily unavailable

I'm posting this answer since the problem drove me crazy. What fixed it for me was adding:

import os
os.environ['OPENBLAS_NUM_THREADS'] = '1'

before

import numpy as np

I guess the server had some limit on the number of threads it allows per user. Hope it helps someone!
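
If you want to verify the setting took effect, the third-party threadpoolctl package (assuming you can pip install it) reports how many threads each BLAS library actually got. A quick check, separate from the fix itself:

import os
os.environ['OPENBLAS_NUM_THREADS'] = '1'

import numpy  # loads OpenBLAS so it shows up below
from threadpoolctl import threadpool_info

# Each entry describes one native thread pool in use;
# OpenBLAS should now report num_threads = 1.
for pool in threadpool_info():
    print(pool['internal_api'], pool['num_threads'])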