2012-11-24

I am using embedded Pig to implement a graph algorithm. It works fine in local mode. The log below shows the jar file being created for the Pig job:

2012-11-23 22:00:00,651 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job4116346741117365374.jar 
2012-11-23 22:00:09,418 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job4116346741117365374.jar created 
2012-11-23 22:00:09,423 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up multi store job 
2012-11-23 22:00:09,431 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=296 
2012-11-23 22:00:09,431 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Neither PARALLEL nor default parallelism is set for this job. Setting number of reducers to 1 
2012-11-23 22:00:09,442 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission. 
2012-11-23 22:00:09,949 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job null has failed! Stop running all dependent jobs 
2012-11-23 22:00:09,949 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete 
2012-11-23 22:00:09,992 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 6015: During execution, encountered a Hadoop error. 
2012-11-23 22:00:09,993 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed! 
2012-11-23 22:00:09,994 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics: 

HadoopVersion PigVersion UserId StartedAt FinishedAt Features 
0.20.1 0.10.0 jierus 2012-11-23 21:52:38 2012-11-23 22:00:09 HASH_JOIN,GROUP_BY,DISTINCT,FILTER,UNION 

Some jobs have failed! Stop running all dependent jobs 
Failed Jobs: 
JobId Alias Feature Message Outputs 
N/A vec_comp,vec_comp_final,vec_comp_tmp HASH_JOIN,MULTI_QUERY Message: java.io.FileNotFoundException: File /tmp/Job4116346741117365374.jar does not exist. 
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361) 
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:192) 
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1184) 
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1160) 
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1132) 

On a fully distributed Hadoop cluster, however, I always get the error message above (see the last few lines). Does anyone know which part of my code or setup is wrong?


Do the cluster nodes have the correct permissions on /tmp? –
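As a quick sanity check for the comment above (a sketch; run it on each cluster node), /tmp should be world-writable with the sticky bit set:

```shell
# Inspect /tmp permissions; the mode should be drwxrwxrwt (octal 1777)
ls -ld /tmp

# If it is not, the standard permissions can be restored with:
# chmod 1777 /tmp
```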

Answer


It smells like you haven't specified a job tracker for Pig (HDFS alone is not enough!), e.g.:

<property> 
    <name>mapred.job.tracker</name> 
    <value>10.xx.xx.99:9001</value> 
</property>
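Since the question uses embedded Pig, the same settings can also be supplied programmatically when constructing the `PigServer`. A minimal sketch, assuming placeholder host names (`namenode`, `jobtracker`) that you would replace with your cluster's actual addresses:

```java
import java.util.Properties;

public class PigClusterProps {

    // Minimal sketch: the two properties Pig needs to submit jobs to a
    // real cluster rather than the local filesystem. The host names and
    // ports below are placeholders, not values from the question.
    static Properties clusterProps() {
        Properties props = new Properties();
        props.setProperty("fs.default.name", "hdfs://namenode:9000");
        props.setProperty("mapred.job.tracker", "jobtracker:9001");
        return props;
    }

    public static void main(String[] args) {
        Properties props = clusterProps();
        // In embedded Pig these properties would be passed to the server,
        // e.g. (requires the Pig jars on the classpath):
        // PigServer pig = new PigServer(ExecType.MAPREDUCE, props);
        System.out.println(props.getProperty("mapred.job.tracker"));
    }
}
```

Without `mapred.job.tracker`, Pig falls back to submitting against the local filesystem, which is consistent with the `FileNotFoundException` on the locally created jar in `/tmp`.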