
ID: 0012681
Project: CentOS-7
Category: filesystem
View Status: public
Last Update: 2017-12-22 08:07
Reporter: msubramm@gmail.com
Priority: high
Severity: major
Reproducibility: always
Status: new
Resolution: open
Platform: Linux
OS: CentOS Linux release
OS Version: 7.2.1511
Product Version: 7.2.1511
Target Version:
Fixed in Version:
Summary: 0012681: Too many open files error even after setting max limit
Description:
We have Docker installed on a CentOS box. We want to create multiple containers on it, but after 120 containers we are unable to spawn any more.

Error observed:
Too many open files

Once this error occurs, we are also unable to stop or start other services; they fail with the same error.



cat /proc/sys/fs/file-nr
7104 0 2097152


lsof|wc -l
409607
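A note on the lsof figure above: lsof also lists memory-mapped files, current directories, and executables, which are not open file descriptors, so `lsof | wc -l` overstates the descriptor count (compare 409607 here with the 7104 allocated handles in file-nr). A rough cross-check, counting actual entries under /proc (run as root to see every process), might look like:

```shell
# Count real open file descriptors by summing the entries in each
# process's /proc/<pid>/fd directory. Processes we cannot read
# (insufficient permissions, or processes that exited mid-scan) are
# silently skipped and contribute 0.
total=0
for d in /proc/[0-9]*/fd; do
    n=$(ls "$d" 2>/dev/null | wc -l)
    total=$((total + n))
done
echo "open fds: $total"
```

This figure tracks the first field of /proc/sys/fs/file-nr much more closely than the lsof line count does.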

ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15014
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 900000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 840000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited


cat /proc/3673/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 0 bytes
Max resident set unlimited unlimited bytes
Max processes 840000 840000 processes
Max open files 900000 900000 files
Max locked memory 524288 524288 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 15014 15014 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us

cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
stepping : 2
microcode : 0x25
cpu MHz : 2394.498
cache size : 30720 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bogomips : 4788.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
stepping : 2
microcode : 0x25
cpu MHz : 2394.498
cache size : 30720 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bogomips : 4788.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

free -m
              total        used        free      shared  buff/cache   available
Mem:           3534        2188         235          23        1109         761
Swap:             0           0





Steps To Reproduce:
1. Installed the Docker engine.
2. Using a script, tried to deploy go-server containers (125 containers).

After the containers were created, the last five failed with the "Too many open files" error.

systemctl stop cassandra
Error: Too many open files

Additional Information:
uname -a
Linux ip-10-0-0-11 3.10.0-327.28.2.el7.x86_64 #1 SMP Wed Aug 3 11:11:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Tags: No tags attached.


Notes

~0028421

tigalch (manager)

Your kernel, and very likely other packages as well, is outdated. Please update to the current kernel (and all packages to the 7.3 (1611) release) with 'yum update', and report back whether the issue persists afterwards.

~0028422

msubramm@gmail.com (reporter)

Updated the system
uname -a
Linux ip-10-0-0-11 3.10.0-327.28.2.el7.x86_64 #1 SMP Wed Aug 3 11:11:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@ip-10-0-0-11 ~]#
[root@ip-10-0-0-11 ~]#
[root@ip-10-0-0-11 ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)


Same issue: "Too many open files" error.

[root@ip-10-0-0-11 ~]# systemctl stop cassandra
Error: Too many open files
[root@ip-10-0-0-11 ~]#

~0028423

tigalch (manager)

No, you still have the old kernel running. Please update that as well: 3.10.0-514.6.1 is current.

~0028424

msubramm@gmail.com (reporter)

I upgraded the kernel and restarted the instance. Still the same issue.

uname -a
Linux ip-10-0-32-67.us-east-2.compute.internal 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@ip-10-0-32-67 ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)


cat /proc/sys/fs/file-nr
7104 0 2097152

~0030815

jhutar (reporter)

Hello. I realize this is an old thread, but still: I had the same issue and fixed it by setting "fs.inotify.max_user_instances=8192". I wrote a blog post about the symptoms and everything that did not work: https://jhutar.blogspot.cz/2017/12/error-too-many-open-files-when-inside.html
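For readers landing here with the same symptom, the sysctl change described in the note above would typically be applied along these lines (the sysctl.d file name is illustrative; the commands need root):

```shell
# Inspect the current inotify limits; the CentOS 7 default of 128
# instances per user is easy to exhaust when systemd and many
# containers each hold inotify instances.
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches

# Raise the per-user instance limit on the running system
sysctl -w fs.inotify.max_user_instances=8192

# Persist the setting across reboots (file name is illustrative)
echo "fs.inotify.max_user_instances=8192" > /etc/sysctl.d/99-inotify.conf
sysctl --system
```

This matches the symptom pattern in the report: the kernel's file-handle table (file-nr) is nowhere near its maximum, yet systemctl itself fails with "Too many open files", because EMFILE here comes from inotify_init() hitting the per-user instance cap, not from ordinary file descriptors.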

Issue History
Date Modified Username Field Change
2017-01-19 20:18 msubramm@gmail.com New Issue
2017-01-19 20:21 tigalch Note Added: 0028421
2017-01-19 20:37 msubramm@gmail.com Note Added: 0028422
2017-01-19 20:41 tigalch Note Added: 0028423
2017-01-19 22:01 msubramm@gmail.com Note Added: 0028424
2017-12-22 08:07 jhutar Note Added: 0030815