2016-12-06

Pyspark + Hive avro table

I created a Hive Avro table and am trying to read it from PySpark. Basically, I want to run some basic queries against this Hive Avro table in PySpark and do analysis on it.

from pyspark import SparkContext
from pyspark.sql import HiveContext

hive_context = HiveContext(sc)  # sc is the SparkContext provided by the pyspark shell; create one with SparkContext() when running as a script
test = hive_context.table("default.test_avro")
test.registerTempTable("test_temp")
hive_context.sql("select * from test_temp").show()

However, I get the error below. "flight" is a nested record in the Avro schema.

: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.avro.AvroTypeException: Found test.net.flight, expecting union 
    at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:292) 
    at org.apache.avro.io.parsing.Parser.advance(Parser.java:88) 
    at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:267) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:155) 
    at org.apache.avro.generic.GenericDatumReader.readArray(GenericDatumReader.java:219) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:155) 
    at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:193) 
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:183) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:151) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142) 
    at org.apache.hadoop.hive.serde2.avro.AvroDeserializer$SchemaReEncoder.reencode(AvroDeserializer.java:111) 
    at org.apache.hadoop.hive.serde2.avro.AvroDeserializer.deserialize(AvroDeserializer.java:175) 
    at org.apache.hadoop.hive.serde2.avro.AvroSerDe.deserialize(AvroSerDe.java:201) 
    at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:381) 
    at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:380) 
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:312) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) 
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103) 
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47) 
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) 
    at scala.collection.AbstractIterator.to(Iterator.scala:1157) 
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265) 
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157) 
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252) 
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:88) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) 
    at scala.Option.foreach(Option.scala:236) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850) 
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:215) 
    at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:207) 
    at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385) 
    at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385) 
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56) 
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1903) 
    at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1384) 
    at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1314) 
    at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1377) 
    at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:178) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:497) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379) 
    at py4j.Gateway.invoke(Gateway.java:259) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:207) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: org.apache.avro.AvroTypeException: Found test.net.flight, expecting union 
    at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:292) 
    at org.apache.avro.io.parsing.Parser.advance(Parser.java:88) 
    at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:267) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:155) 
    at org.apache.avro.generic.GenericDatumReader.readArray(GenericDatumReader.java:219) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:155) 
    at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:193) 
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:183) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:151) 
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142) 
    at org.apache.hadoop.hive.serde2.avro.AvroDeserializer$SchemaReEncoder.reencode(AvroDeserializer.java:111) 
    at org.apache.hadoop.hive.serde2.avro.AvroDeserializer.deserialize(AvroDeserializer.java:175) 
    at org.apache.hadoop.hive.serde2.avro.AvroSerDe.deserialize(AvroSerDe.java:201) 
    at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:381) 
    at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:380) 
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:312) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) 
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103) 
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47) 
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) 
    at scala.collection.AbstractIterator.to(Iterator.scala:1157) 
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265) 
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157) 
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252) 
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:88) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    ... 1 more 

Can anyone help me resolve this?

Edit: here is the Avro schema.

{"namespace": "test", 
"type": "record", 
"name": "ticket", 
"fields": 
[ 
{"name": "name",   "type": "string"}, 
{"name": "date",  "type": "string"}, 
{"name": "carrier", "type": "string", "default": "null"}, 
{"name": "passengerNumber", "type": "int"}, 
{"name":"flights", 
"default": "null", 
"type":{ 
"type":"array", 
"items": { 
"name":"flight", "type":"record", "fields": 
[ 
    {"name":"orig", "type": "string"}, 
    {"name":"dest", "type": "string"}, 

] 
} 
} 
} 
] 
} 
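As a side note on the error text: Avro usually reports "Found ..., expecting union" when the schema the data was written with and the schema the table is being read with disagree at a position where one of them declares a union. In Avro, a field that is really meant to default to null is normally declared as a union with "null" as the first branch and a bare null default, rather than the string "null". Below is a minimal sketch of what the flights field would look like in that style, written as a Python dict purely for illustration; it is an assumption about the intent of the schema above, not a verified fix for this table.

import json

# Hypothetical reshaping of the "flights" field: the string default "null" is
# replaced by a real null default, and the type becomes a union whose first
# branch is "null" (Avro requires the default to match the first branch).
flights_field = {
    "name": "flights",
    "default": None,                 # serialises to JSON null
    "type": ["null", {
        "type": "array",
        "items": {
            "type": "record",
            "name": "flight",
            "fields": [
                {"name": "orig", "type": "string"},
                {"name": "dest", "type": "string"},
            ],
        },
    }],
}

print(json.dumps(flights_field, indent=2))  # paste into the .avsc when experimenting

The printed JSON can then be compared against the schema reported by avro-tools getschema to see where the two definitions diverge.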

Answer


It looks to me like your avsc schema file is incorrect. Try reading the table from Hive itself and check whether you get the same exception; if you do, it is a schema problem.

If you have the Avro data, use avro-tools to extract the schema file and place it at the HDFS/S3 location:

java -jar ~/avro-tools-1.7.4.jar getschema avrofile

Also, try reading the table the way below:

test = hive_context.sql("""select * from db_name.table_name""")
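A slightly fuller sketch of that second suggestion, assuming the table name default.test_avro from the question (the app name is just a placeholder):

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="avro_schema_check")  # in the pyspark shell, reuse the existing sc instead of creating one
hive_context = HiveContext(sc)

# Query the Hive table directly instead of registering a temp table first.
df = hive_context.sql("""select * from default.test_avro""")
df.printSchema()  # compare this layout against the .avsc printed by avro-tools getschema
df.show(5)        # the same AvroTypeException here would point back at the table's schema definition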


Thanks for the reply, but the schema looks correct to me; I get the same schema from that command. – SuWon


You could try it this way: –

