
Error while writing a Spark DataFrame to CSV in PySpark

I am trying to apply the ALS matrix factorization provided by MLlib. Below is my code:

from pyspark.sql.types import StringType
from pyspark.sql import SQLContext   # SQLContext lives in pyspark.sql, not pyspark
sqlContext = SQLContext(sc)

# read the input ratings CSV; header=False, so columns default to _c0, _c1, ...
t1 = sqlContext.read.csv("/user/hadoop/personalization/test1.csv", header=False)

from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating

# train an ALS matrix factorization model
model = ALS.train(t1, rank=2, iterations=20, seed=0)

# top-2 product recommendations per user, collected to the driver
products_for_users = model.recommendProductsForUsers(2).collect()


# build a DataFrame from the collected recommendations and write it out
l2 = sqlContext.createDataFrame(products_for_users)
l2.show()
l2.write.csv('l2.csv')

After running the last step, write.csv(), I am getting the error below. Could someone identify the source of the error?

Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 674, in csv
    self._jwrite.csv(path)
    File "/usr/lib/spark/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py", line 933, in __call__
    File "/usr/lib/spark/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
    File "/usr/lib/spark/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py", line 312, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o140.csv.
: java.lang.UnsupportedOperationException: CSV data source does not support struct<_1:struct<user:bigint,product:bigint,rating:double>,_2:struct<user:bigint,product:bigint,rating:double>> data type.
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$verifySchema$1.apply(CSVFileFormat.scala:186)
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$verifySchema$1.apply(CSVFileFormat.scala:183)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at org.apache.spark.sql.types.StructType.foreach(StructType.scala:95)
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.verifySchema(CSVFileFormat.scala:183)
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.prepareWrite(CSVFileFormat.scala:87)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$4.apply(InsertIntoHadoopFsRelationCommand.scala:121)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$4.apply(InsertIntoHadoopFsRelationCommand.scala:121)
    at org.apache.spark.sql.execution.datasources.BaseWriterContainer.driverSideSetup(WriterContainer.scala:105)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:140)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:487)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
    at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:551)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:211)
    at java.lang.Thread.run(Thread.java:745)

I believe the l2 DataFrame contains column(s) of a complex type. Could you please post the output of l2.show()? – ImDarrenG

+---+--------------------+
| _1|                  _2|
+---+--------------------+
|  1|[[1,1,4.076836144...|
|  2|[[2,6,4.933567648...|
|  3|[[3,7,19.06817406...|
+---+--------------------+ –

Answer

I was getting a similar error while writing a DataFrame to CSV. You can try the below:

l2.toPandas().to_csv('l2.csv') 
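
Note that toPandas() collects the entire DataFrame to the driver, so this only works when the result fits in driver memory. The underlying problem is the one named in the exception: the CSV source cannot write struct-typed columns. A minimal sketch of an alternative, assuming the recommendations have the shape shown in the comments above (pairs of a user id and a collection of Rating objects); the names flat_rdd and l2_flat and the output path are illustrative:

recs_rdd = model.recommendProductsForUsers(2)   # RDD of (user, [Rating, ...]) pairs

# flatten every Rating into a plain (user, product, rating) tuple so the
# DataFrame contains only primitive columns, which the CSV source supports
flat_rdd = recs_rdd.flatMap(
    lambda kv: [(kv[0], r.product, r.rating) for r in kv[1]])

l2_flat = sqlContext.createDataFrame(flat_rdd, ["user", "product", "rating"])
l2_flat.write.csv('l2_flat.csv')                # illustrative output path

Flattening before the write avoids both the struct-type error and the collect to the driver.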

This is a workaround, not a solution. – thecheech
