OutOfMemoryError in Hadoop
Error: unable to create new native thread
Error initializing attempt_201111090003_0013_r_000000_0:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:614)
at java.lang.UNIXProcess$1.run(UNIXProcess.java:157)
at java.security.AccessController.doPrivileged(Native Method)
at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:119)
at java.lang.ProcessImpl.start(ProcessImpl.java:81)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
at org.apache.hadoop.util.Shell.run(Shell.java:134)
at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:329)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:750)
at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1664)
at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:97)
at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1629)
When you see this kind of error while running Hadoop jobs, there can be a number of possible causes, owing in part to Hadoop's rather fragile implementation. One common cause is that your MapReduce program spawns more processes (or threads) than your OS allows per user; on many Linux systems the default limit is 1024, which you can check by running 'ulimit -u'. A typical way to hit that limit is a job in which you control the output file name based on the key-value pair in the reduce stage, so that each key gets its own output file (see the reducer sketch further below).
To solve this problem, raise the maximum number of processes available to the hadoop user by editing /etc/security/limits.conf. Adding the following two lines to limits.conf sets 100000 as the maximum number of processes for the user hadoop:
hadoop soft nproc 100000
hadoop hard nproc 100000
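After saving the file, the new limits only apply to fresh login sessions (they are set through PAM at login), so log in again as the hadoop user, check the value with 'ulimit -u', and restart the Hadoop daemons, since already-running processes keep the limit they were started with.
For reference, the reduce-stage scenario described above, naming output files after the key, is typically written with Hadoop's MultipleOutputs class. The sketch below is only an illustration, not the exact code from this post: it uses the newer org.apache.hadoop.mapreduce API, and the class name PerKeyOutputReducer and the word-count-style types are made up. The relevant point is that every distinct base path opens another writer (and another open HDFS output stream, which usually carries its own background thread), so a job with many distinct keys can chew through per-user process and thread limits quickly.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
public class PerKeyOutputReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private MultipleOutputs<Text, IntWritable> out;
    @Override
    protected void setup(Context context) {
        // One MultipleOutputs instance per reduce task; it manages the extra writers.
        out = new MultipleOutputs<Text, IntWritable>(context);
    }
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        // Write to a file whose base name is the key (e.g. "foo-r-00000") instead of
        // the default part-r-xxxxx. Each distinct base path opens another writer on
        // the node, so the number of open streams grows with the number of keys.
        // The key must render to a string that is legal in a file name.
        out.write(key, new IntWritable(sum), key.toString());
    }
    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Close all per-key writers, otherwise output may be lost or left dangling.
        out.close();
    }
}
The driver configures the job's output format as usual; if you adopt this pattern, keep an eye on how many distinct keys each reducer sees, because that is what drives the open-file and thread count on the node.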
Other useful resources about OutOfMemoryError in Hadoop can be found at the following links:
The dark side of hadoop;
NoSQL;
Dealing with outofmemoryerror-in-hadoop;