
Hive transactional table compaction fails. From the job history (as recorded in the Hive metastore log):

No of maps and reduces are 0 job_1476884195505_0031 
Job commit failed: java.io.FileNotFoundException: File hdfs://hadoop1.openstacksetup.com:8020/apps/hive/warehouse/log.db/syslog_staged/hostname=cloudserver19/year=2016/month=10/day=24/_tmp_27c40005-658e-48c1-90f7-2acaa124e2fa does not exist. 
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:904) 
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113) 
at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966) 
at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962) 
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962) 
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorOutputCommitter.commitJob(CompactorMR.java:776) 
at org.apache.hadoop.mapred.OutputCommitter.commitJob(OutputCommitter.java:291) 
at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:285) 
at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
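
For context, the state of queued, running, and failed compactions can be inspected from the Hive CLI, which should show the failed attempts for this partition:

SHOW COMPACTIONS;   -- lists compaction requests and their state (initiated / working / failed / succeeded)
SHOW TRANSACTIONS;  -- lists currently open and aborted transactions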


I have set the Hive transaction properties below for the transactional table:

set hive.compactor.worker.threads=1;
set hive.compactor.initiator.on=true;
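
As an aside, these compactor settings take effect in the Hive metastore process (via hive-site.xml), not in a client session. A few related Initiator settings control when compaction is triggered; the values shown here are the Hive defaults of that era and are illustrative only:

set hive.compactor.check.interval=300;        -- seconds between scans for compaction candidates
set hive.compactor.delta.num.threshold=10;    -- number of delta directories that triggers a minor compaction
set hive.compactor.delta.pct.threshold=0.1;   -- delta-to-base size ratio that triggers a major compaction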

I also set properties to optimize compaction for the table, but the compactor Initiator now logs this warning:

2016-10-24 16:33:35,503 WARN [Thread-14]: compactor.Initiator (Initiator.java:run(132)) - Will not initiate compaction for log.syslog_staged.hostname=cloudserver19/year=2016/month=10/day=24 since last hive.compactor.initiator.failed.compacts.threshold attempts to compact it failed. 
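
The threshold named in that message is hive.compactor.initiator.failed.compacts.threshold (default 2): once that many automatic compaction attempts on a partition fail, the Initiator stops scheduling new ones for it. A compaction can still be requested manually for the affected partition, for example:

ALTER TABLE log.syslog_staged
  PARTITION (hostname = 'cloudserver19', year = '2016', month = '10', day = '24')
  COMPACT 'major';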

Answer


Please set the following:

set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set hive.support.concurrency=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.enforce.bucketing=true;
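
These must be in place before running DML against ACID tables. For completeness, a minimal sketch of a table definition that meets the ACID requirements in this Hive version (ORC storage, bucketing, transactional=true); the table and column names are made up for illustration:

CREATE TABLE demo_acid (
  id INT,
  msg STRING
)
CLUSTERED BY (id) INTO 4 BUCKETS    -- ACID tables must be bucketed in Hive 1.x
STORED AS ORC                       -- ACID requires the ORC file format here
TBLPROPERTIES ('transactional' = 'true');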