2014-04-29

Hadoop YARN: Failed to launch container.

When running Hive queries, the datanodes of my Hadoop 2.3.0 cluster hit out-of-memory errors. What settings should I configure so that the NodeManager does not go down?

2014-04-29 12:03:33,505 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Failed to launch container. 
java.lang.OutOfMemoryError: Java heap space 
    at java.lang.ClassLoader.findLoadedClass0(Native Method) 
    at java.lang.ClassLoader.findLoadedClass(ClassLoader.java:932) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:312) 
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:268) 
    at javax.xml.parsers.FactoryFinder.getProviderClass(FactoryFinder.java:112) 
    at javax.xml.parsers.FactoryFinder.newInstance(FactoryFinder.java:178) 
    at javax.xml.parsers.FactoryFinder.newInstance(FactoryFinder.java:147) 
    at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:265) 
    at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121) 
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2195) 
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2172) 
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2089) 
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:838) 
    at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:857) 
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1876) 
    at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:149) 
    at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:240) 
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:332) 
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:329) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:416) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) 
    at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:329) 
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:443) 
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:423) 
    at org.apache.hadoop.fs.FileContext.getLocalFSFileContext(FileContext.java:409) 
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:185) 
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79) 
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:166) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) 
2014-04-29 12:03:56,003 FATAL org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[Socket Reader #1 for port 8040,5,main] threw an Error. Shutting down now... 

Answer


You may need to increase the NodeManager's heap size. Set the following environment variable in the yarn-env.sh file in your Hadoop configuration directory, then restart the NodeManager:

export YARN_NODEMANAGER_HEAPSIZE=2048 

The default is 1000, i.e. 1,000 megabytes. The stack trace shows the OutOfMemoryError occurring inside the NodeManager process itself (while loading configuration during container launch), which is why raising the NodeManager heap, rather than the container memory, is the relevant fix here.
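As a rough sketch, the change could be applied like this; the paths assume a typical tarball install under $HADOOP_HOME and the yarn-daemon.sh script that ships with Hadoop 2.x, so adjust them for your own layout:

```shell
# Append the heap setting to yarn-env.sh (assumes $HADOOP_HOME points at your install)
echo 'export YARN_NODEMANAGER_HEAPSIZE=2048' >> "$HADOOP_HOME/etc/hadoop/yarn-env.sh"

# Restart the NodeManager on this node so the new heap size takes effect
"$HADOOP_HOME/sbin/yarn-daemon.sh" stop nodemanager
"$HADOOP_HOME/sbin/yarn-daemon.sh" start nodemanager

# Optionally confirm the process is back up and check its -Xmx flag
jps -l | grep -i nodemanager
```

This must be repeated (or pushed via your configuration management tool) on every node running a NodeManager, since yarn-env.sh is read locally by each daemon at startup.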