
Reduce tasks do not work in the WordCount example on Hadoop 2.6.0. I have installed a Hadoop cluster on several physical nodes: one server runs the NameNode, the ResourceManager and the JobHistory Server, and two servers run DataNodes. While configuring it I followed this tutorial.

To test MapReduce I tried to run WordCount, TeraSort, TeraGen, randomwriter and the other example programs from hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar. Everything that has only map tasks (TeraGen, randomwriter and so on) finishes with SUCCESS status. But when I start WordCount or WordMean, the single map task completes while reduce always stays at 0%; the job simply stops making progress. I actually do not know how to check whether any reducer is free. In the ResourceManager log (yarn-root-resourcemanager-yamaster.log) the only line I see after the successful map task is:

INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed... 

I tried to find a solution and found a similar question on SOF, but there is no answer there. What I have:

  • Hadoop web interface: master:50070
  • ResourceManager: master:8088
  • JobHistory Server: master:19888/jobhistory
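
Besides the web UIs above, I can also poke at the cluster from the command line; a minimal sketch with the stock yarn CLI (<Node-Id> is a placeholder taken from the node list, the application id is the job from the second update below):

yarn node -list -all          # every NodeManager the ResourceManager knows about, with its state
yarn node -status <Node-Id>   # detailed report for one node, including memory used and memory capacity
yarn application -list        # submitted/running applications with their state and progress
yarn application -status application_1422959549820_0005   # detailed status of the job whose reduce never starts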

UPDATE: I tried the -D mapred.reduce.tasks=0 option to run the WordCount example without reduce tasks:

hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount -D mapred.reduce.tasks=0 /bigtext.txt /bigtext_wc_1.txt

And it works: I get word count results. They are wrong, of course, but my job completes:

15/02/03 12:40:37 INFO mapreduce.Job: Running job: job_1422950901990_0004 
15/02/03 12:40:52 INFO mapreduce.Job: Job job_1422950901990_0004 running in uber mode : false 
15/02/03 12:40:52 INFO mapreduce.Job: map 0% reduce 0% 
15/02/03 12:41:03 INFO mapreduce.Job: map 100% reduce 0% 
15/02/03 12:41:04 INFO mapreduce.Job: Job job_1422950901990_0004 completed successfully 
15/02/03 12:41:05 INFO mapreduce.Job: Counters: 30 
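
(Side note: mapred.reduce.tasks is the old, deprecated name of this setting in Hadoop 2.x; if I understand correctly, the run above is equivalent to using the current property name:)

hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount -D mapreduce.job.reduces=0 /bigtext.txt /bigtext_wc_1.txt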

UPDATE #2: more information, from the application logs on the cluster:

2015-02-03 15:02:12,008 INFO [IPC Server handler 0 on 55452] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1422959549820_0005_m_000000_0 is : 1.0 
2015-02-03 15:02:12,025 INFO [IPC Server handler 1 on 55452] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1422959549820_0005_m_000000_0 
2015-02-03 15:02:12,028 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1422959549820_0005_m_000000_0 TaskAttempt Transitioned from RUNNING to SUCCESS_CONTAINER_CLEANUP 
2015-02-03 15:02:12,029 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1422959549820_0005_01_000002 taskAttempt attempt_1422959549820_0005_m_000000_0 
2015-02-03 15:02:12,030 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1422959549820_0005_m_000000_0 
2015-02-03 15:02:12,030 INFO [ContainerLauncher #1] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : slave102.hadoop.ot.ru:51573 
2015-02-03 15:02:12,063 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1422959549820_0005_m_000000_0 TaskAttempt Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED 
2015-02-03 15:02:12,084 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1422959549820_0005_m_000000_0 
2015-02-03 15:02:12,087 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1422959549820_0005_m_000000 Task Transitioned from RUNNING to SUCCEEDED 
2015-02-03 15:02:12,094 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1 
2015-02-03 15:02:12,792 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:1 RackLocal:0 
2015-02-03 15:02:12,794 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:4096, vCores:-1> 
2015-02-03 15:02:12,794 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold reached. Scheduling reduces. 
2015-02-03 15:02:12,795 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps assigned. Ramping up all remaining reduces:1 
2015-02-03 15:02:12,795 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:1 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:1 RackLocal:0 
2015-02-03 15:02:13,805 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1422959549820_0005: ask=1 release= 0 newContainers=0 finishedContainers=1 resourcelimit=<memory:6144, vCores:0> knownNMs=4 
2015-02-03 15:02:13,806 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1422959549820_0005_01_000002 
2015-02-03 15:02:13,808 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:1 RackLocal:0 
2015-02-03 15:02:13,808 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1422959549820_0005_m_000000_0: Container killed by the ApplicationMaster. 
Container killed on request. Exit code is 143 
Container exited with a non-zero exit code 143 
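
For completeness, this is how I collected the lines above: log aggregation is disabled in my yarn-site.xml, so container logs stay on each slave under yarn.nodemanager.log-dirs. Roughly (which slave hosted the ApplicationMaster container is an assumption on my part):

ssh slave102.hadoop.ot.ru
# one sub-directory per container of the application, under the configured yarn.nodemanager.log-dirs
ls /var/log/hadoop-yarn/containers/application_1422959549820_0005/
# container ..._000001 is the MRAppMaster; its syslog contains the lines quoted above
less /var/log/hadoop-yarn/containers/application_1422959549820_0005/container_1422959549820_0005_01_000001/syslog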

Configuration files.

hdfs-site.xml

<configuration> 
<!-- Properties for NameNode --> 
    <property> 
     <name>dfs.namenode.name.dir</name> 
     <value>/grid/hadoop1/nn</value> 
     <description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description> 
    </property> 
    <property> 
      <name>dfs.namenode.hosts</name> 
      <value>/opt/current/hadoop/etc/hadoop/slaves</value> 
      <description>List of permitted DataNodes.If necessary, use these files to control the list of allowable datanodes.</description> 
    </property> 
    <property> 
      <name>dfs.namenode.hosts.exclude</name> 
      <value>/opt/current/hadoop/etc/hadoop/excludes</value> 
      <description>List of excluded DataNodes. If necessary, use these files to control the list of allowable datanodes.</description> 
    </property> 
    <property> 
      <name>dfs.blocksize</name> 
      <value>268435456</value> 
      <description>HDFS blocksize of 256MB for large file-systems.</description> 
    </property> 
    <property> 
      <name>dfs.namenode.handler.count</name> 
      <value>100</value> 
      <description>More NameNode server threads to handle RPCs from large number of DataNodes.</description> 
    </property> 

<!-- Properties for DataNode --> 
    <property> 
      <name>dfs.datanode.data.dir</name> 
      <value>/grid/hadoop1/dn</value> 
      <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices.</description> 
    </property> 
</configuration> 

core-site.xml

<configuration> 
    <property> 
     <name>fs.defaultFS</name> 
     <value>hdfs://master:8020</value> 
     <description>Default hdfs filesystem on namenode host like - hdfs://host:port/</description> 
    </property> 
    <property> 
     <name>io.file.buffer.size</name> 
     <value>131072</value> 
     <description>Size of read/write buffer used in SequenceFiles.</description> 
    </property> 
</configuration> 

mapred-site.xml

<configuration> 
<!-- Configurations for MapReduce Applications -->   
     <property> 
       <name>mapreduce.framework.name</name> 
       <value>yarn</value> 
       <description>Execution framework set to Hadoop YARN.</description> 
     </property> 
     <property> 
       <name>mapreduce.map.memory.mb</name> 
       <value>1536</value> 
       <description>Larger resource limit for maps.</description> 
     </property> 
     <property> 
       <name>mapreduce.map.java.opts</name> 
       <value>-Xmx1024M</value> 
       <description>Larger heap-size for child jvms of maps.</description> 
     </property> 
     <property> 
       <name>mapreduce.reduce.memory.mb</name> 
       <value>3072</value> 
       <description>Larger resource limit for reduces.</description> 
     </property> 
     <property> 
       <name>mapreduce.reduce.java.opts</name> 
       <value>-Xmx2560M</value> 
       <description>Larger heap-size for child jvms of reduces.</description> 
     </property> 
     <property> 
       <name>mapreduce.task.io.sort.mb</name> 
       <value>512</value> 
       <description>Higher memory-limit while sorting data for efficiency.</description> 
     </property> 
     <property> 
       <name>mapreduce.task.io.sort.factor</name> 
       <value>100</value> 
       <description>More streams merged at once while sorting files.</description> 
     </property> 
     <property> 
       <name>mapreduce.reduce.shuffle.parallelcopies</name> 
       <value>50</value> 
       <description>Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.</description> 
     </property> 
<!-- Configurations for MapReduce JobHistory Server --> 
     <property> 
       <name>mapreduce.jobhistory.address</name> 
       <value>master:10020</value> 
       <description>MapReduce JobHistory Server host:port. Default port is 10020.</description> 
     </property> 
     <property> 
       <name>mapreduce.jobhistory.webapp.address</name> 
       <value>master:19888</value> 
       <description>MapReduce JobHistory Server Web UI host:port. Default port is 19888.</description> 
     </property> 
     <property> 
       <name>mapreduce.jobhistory.intermediate-done-dir</name> 
       <value>/mr-history/tmp</value> 
       <description>Directory where history files are written by MapReduce jobs.</description> 
     </property> 
     <property> 
       <name>mapreduce.jobhistory.done-dir</name> 
       <value>/mr-history/done</value> 
       <description>Directory where history files are managed by the MR JobHistory Server.</description> 
     </property> 
</configuration> 

yarn-site.xml

<configuration> 

<!-- Site specific YARN configuration properties --> 
<!-- Configurations for ResourceManager and NodeManager --> 
     <property> 
       <name>yarn.acl.enable</name> 
       <value>yes</value> 
       <description>Enable ACLs? Defaults to false.</description> 
     </property> 
     <property> 
       <name>yarn.admin.acl</name> 
       <value>false</value> 
       <description>ACL to set admins on the cluster. ACLs are of for comma-separated-usersspacecomma-separated-groups. Defaults to special value of * which means anyone. Special value of just space means no one has access.</description> 
     </property> 
     <property> 
       <name>yarn.log-aggregation-enable</name> 
       <value>false</value> 
       <description>Configuration to enable or disable log aggregation</description> 
     </property> 
<!-- Configurations for ResourceManager -->  
     <property> 
       <name>yarn.resourcemanager.address</name> 
       <value>master:8050</value> 
       <description>Value: host:port. If set, overrides the hostname set in yarn.resourcemanager.hostname.</description> 
     </property> 
     <property> 
       <name>yarn.resourcemanager.scheduler.address</name> 
       <value>master:8030</value> 
       <description>ResourceManager host:port for ApplicationMasters to talk to Scheduler to obtain resources. If set, overrides the hostname set in yarn.resourcemanager.hostname.</description> 
     </property> 
     <property> 
       <name>yarn.resourcemanager.resource-tracker.address</name> 
       <value>master:8025</value> 
       <description>ResourceManager host:port for NodeManagers. If set, overrides the hostname set in yarn.resourcemanager.hostname.</description> 
     </property> 
     <property> 
       <name>yarn.resourcemanager.admin.address</name> 
       <value>master:8141</value> 
       <description>ResourceManager host:port for administrative commands. If set, overrides the hostname set in yarn.resourcemanager.hostname.</description> 
     </property> 
     <property> 
       <name>yarn.resourcemanager.webapp.address</name> 
       <value>master:8088</value> 
       <description>web-ui host:port. If set, overrides the hostname set in</description> 
     </property>    
     <property> 
       <name>yarn.resourcemanager.hostname</name> 
       <value>master</value> 
       <description>ResourceManager host</description> 
     </property> 
     <property> 
       <name>yarn.resourcemanager.scheduler.class</name> 
       <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value> 
       <description>ResourceManager Scheduler class.</description> 
     </property> 
     <property> 
      <name>yarn.scheduler.maximum-allocation-mb</name> 
      <value>6144</value> 
      <description>Maximum limit of memory to allocate to each container request at the Resource Manager. In MBs</description> 
     </property> 

     <property> 
      <name>yarn.scheduler.minimum-allocation-mb</name> 
      <value>2048</value> 
      <description>Minimum limit of memory to allocate to each container request at the Resource Manager. In MBs</description> 
     </property> 
     <property> 
       <name>yarn.resourcemanager.nodes.include-path</name> 
       <value>/opt/current/hadoop/etc/hadoop/slaves</value> 
       <description>List of permitted NodeManagers. If necessary, use these files to control the list of allowable NodeManagers.</description> 
     </property> 
     <property> 
       <name>yarn.resourcemanager.nodes.exclude-path</name> 
       <value>/opt/current/hadoop/etc/hadoop/excludes</value> 
       <description>List of excluded NodeManagers. If necessary, use these files to control the list of allowable NodeManagers.</description> 
     </property> 
<!-- Configurations for NodeManager --> 
     <property> 
       <name>yarn.nodemanager.resource.memory-mb</name> 
       <value>2048</value> 
       <description>Resource i.e. available physical memory, in MB, for given NodeManager. Defines total available resources on the NodeManager to be made available to running containers</description> 
     </property> 
     <property> 
      <name>yarn.nodemanager.vmem-pmem-ratio</name> 
      <value>2.1</value> 
      <description>Maximum ratio by which virtual memory usage of tasks may exceed physical memory. The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio.</description> 
     </property> 
     <property> 
       <name>yarn.nodemanager.local-dirs</name> 
       <value>/grid/hadoop1/yarn/local</value> 
       <description>Comma-separated list of paths on the local filesystem where intermediate data is written.Multiple paths help spread disk i/o.</description> 
     </property>  
     <property> 
      <name>yarn.nodemanager.log-dirs</name> 
      <value>/var/log/hadoop-yarn/containers</value> 
      <description>Where to store container logs.</description> 
     </property> 
     <property> 
      <name>yarn.nodemanager.log.retain-second</name> 
      <value>10800</value> 
      <description>Default time (in seconds) to retain log files on the NodeManager Only applicable if log-aggregation is disabled.</description> 
     </property>  
     <property> 
      <name>yarn.nodemanager.remote-app-log-dir</name> 
      <value>/logs</value> 
      <description>HDFS directory where the application logs are moved on application completion. Need to set appropriate permissions. Only applicable if log-aggregation is enabled.</description> 
     </property> 
     <property> 
      <name>yarn.nodemanager.remote-app-log-dir-suffix</name> 
      <value>logs</value> 
      <description>Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam} Only applicable if log-aggregation is enabled.</description> 
     </property> 
     <property> 
      <name>yarn.nodemanager.aux-services</name> 
      <value>mapreduce_shuffle</value> 
      <description>Shuffle service that needs to be set for Map Reduce applications.</description> 
     </property> 

</configuration> 
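
For what it is worth, the ResourceManager web address configured above also serves a REST API, which is a quick way to double-check how much memory the scheduler actually sees on each node; a sketch (field names quoted from memory, so treat them as approximate):

curl -s http://master:8088/ws/v1/cluster/metrics   # cluster totals such as totalMB, availableMB, allocatedMB
curl -s http://master:8088/ws/v1/cluster/nodes     # one entry per NodeManager, with usedMemoryMB and availMemoryMB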

And finally, /etc/hosts:

127.0.0.1 localhost 

## BigData Hadoop Lab ## 
#Name Node 
172.25.28.100 master.hadoop.ot.ru master 
172.25.28.101 secondary.hadoop.ot.ru secondary 
#DataNodes on DL Servers 
172.25.28.102 slave102.hadoop.ot.ru slave102 
172.25.28.103 slave103.hadoop.ot.ru slave103 
172.25.28.104 slave104.hadoop.ot.ru slave104 
172.25.28.105 slave105.hadoop.ot.ru slave105 
172.25.28.106 slave106.hadoop.ot.ru slave106 
172.25.28.107 slave107.hadoop.ot.ru slave107 
#DataNodes on ARM Servers 
172.25.40.25 slave25.hadoop.ot.ru slave25 
172.25.40.26 slave26.hadoop.ot.ru slave26 
172.25.40.27 slave27.hadoop.ot.ru slave27 
172.25.40.28 slave28.hadoop.ot.ru slave28 

Answer


The answer turned out to be simple: all of the job's containers (map and reduce alike) were too big for my machines. In particular, each NodeManager offers only yarn.nodemanager.resource.memory-mb = 2048 MB, while every reduce container asks for mapreduce.reduce.memory.mb = 3072 MB, so no node could ever run a reduce container and the reduces stayed pending at 0%.

This error:

2015-02-03 15:02:13,808 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1422959549820_0005_m_000000_0: Container killed by the ApplicationMaster. 
Container killed on request. Exit code is 143 
Container exited with a non-zero exit code 143 

told me about it. For most of my servers the following settings turned out to be optimal (a sketch of how they map onto the config files above follows the list):

yarn.scheduler.minimum-allocation-mb=768 
yarn.scheduler.maximum-allocation-mb=3072 
yarn.nodemanager.resource.memory-mb=3072 
mapreduce.map.memory.mb=768 
mapreduce.map.java.opts=-Xmx512m 
mapreduce.reduce.memory.mb=1536 
mapreduce.reduce.java.opts=-Xmx1024m 
yarn.app.mapreduce.am.resource.mb=768 
yarn.app.mapreduce.am.command-opts=-Xmx512m 
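
Applied to the configuration files above, the change looks roughly like this (a sketch with only the changed properties; the yarn.* scheduler and NodeManager values go into yarn-site.xml, the MapReduce and ApplicationMaster values into mapred-site.xml):

<!-- yarn-site.xml : changed properties only -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>768</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>3072</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>3072</value>
</property>

<!-- mapred-site.xml : changed properties only (the yarn.app.mapreduce.am.* settings belong here as well) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>768</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx512m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>768</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx512m</value>
</property>

With yarn.nodemanager.resource.memory-mb = 3072, a single node can now hold the 768 MB ApplicationMaster, a 768 MB map container and the 1536 MB reduce container at the same time, so the reduces actually get scheduled.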
