
Timeout exception on a Phoenix query over HBase data

I have a table in HBase (say, T1) containing more than 120,000 rows. When I run select count(*) from T1, it fails with the timeout exception below. Is there a way to change Phoenix's timeout parameter?

com.salesforce.phoenix.exception.PhoenixIOException: com.salesforce.phoenix.exception.PhoenixIOException: 136520ms passed since the last invocation, timeout is currently set to 60000 
    at com.salesforce.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
    at com.salesforce.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:217) 
    at com.salesforce.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:54) 
    at com.salesforce.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:76) 
    at com.salesforce.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:96) 
    at com.salesforce.phoenix.iterate.GroupedAggregatingResultIterator.next(GroupedAggregatingResultIterator.java:78) 
    at com.salesforce.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:49) 
    at com.salesforce.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:741) 
    at com.salesforce.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:113) 
    at com.salesforce.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:260) 
    at com.salesforce.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:207) 
Caused by: java.util.concurrent.ExecutionException: com.salesforce.phoenix.exception.PhoenixIOException: 136520ms passed since the last invocation, timeout is currently set to 60000 
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:262) 
    at java.util.concurrent.FutureTask.get(FutureTask.java:119) 
    at com.salesforce.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:211) 
    ... 9 more 
Caused by: com.salesforce.phoenix.exception.PhoenixIOException: 136520ms passed since the last invocation, timeout is currently set to 60000 
    at com.salesforce.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
    at com.salesforce.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:62) 
    at com.salesforce.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:86) 
    at com.salesforce.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:110) 
    at com.salesforce.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:75) 
    at com.salesforce.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:69) 
    at com.salesforce.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:184) 
    at com.salesforce.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:174) 
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:166) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:679) 
Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 136520ms passed since the last invocation, timeout is currently set to 60000 
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:283) 
    at com.salesforce.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:57) 
    ... 11 more 
Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: -3353955827223074008 
    at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2590) 
    at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:616) 
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320) 
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426) 

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:532) 
    at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:96) 
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:149) 
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:42) 
    at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:163) 
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:274) 
    ... 12 more 

Answers


Try raising phoenix.query.timeoutMs to a higher value in hbase-site.xml. The default is 10 minutes.

Reference: https://github.com/forcedotcom/phoenix/wiki/Tuning
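A minimal sketch of the relevant hbase-site.xml entry on the Phoenix client classpath, assuming an illustrative value of 600000 ms (10 minutes); pick a ceiling that covers your longest-running query:

    <!-- hbase-site.xml on the Phoenix client classpath (illustrative value) -->
    <property>
      <name>phoenix.query.timeoutMs</name>
      <!-- overall query timeout in milliseconds -->
      <value>600000</value>
    </property>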


I get the same exception when running a Spark application that reads my table (~50 million rows). I'm using HDP 2.4 with Ambari. I can enable Phoenix in the HBase settings and set the query timeout value there, but even after restarting the cluster it seems to have no effect... Can anyone help?


Try changing hbase.regionserver.lease.period and hbase.client.scanner.timeout.period in the server-side hbase-site.xml.
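A minimal sketch of the corresponding entries in the region servers' hbase-site.xml, assuming an illustrative 300000 ms (5 minutes) for both; the region servers need a restart for the change to take effect:

    <!-- hbase-site.xml on each region server (illustrative values) -->
    <property>
      <name>hbase.regionserver.lease.period</name>
      <!-- scanner lease period in milliseconds (older HBase property name) -->
      <value>300000</value>
    </property>
    <property>
      <name>hbase.client.scanner.timeout.period</name>
      <!-- scanner timeout in milliseconds (newer name for the same setting) -->
      <value>300000</value>
    </property>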
