I have started learning Apache Spark MLlib in Java. I am following the 2.1.1 documentation on the official website. I have installed spark-2.1.1-bin-hadoop2.7 on Ubuntu 14.04 LTS, and I am trying to run this code:

Tags: java, apache-spark, mllib

import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.classification.LogisticRegressionModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JavaLogisticRegressionWithElasticNetExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
      .appName("JavaLogisticRegressionWithElasticNetExample")
      .master("local[*]")
      .getOrCreate();
    // Load training data 
    Dataset<Row> training = spark.read().format("libsvm") 
      .load("data/mllib/sample_libsvm_data.txt"); 

    LogisticRegression lr = new LogisticRegression() 
      .setMaxIter(10) 
      .setRegParam(0.3) 
      .setElasticNetParam(0.8); 

    // Fit the model 
    LogisticRegressionModel lrModel = lr.fit(training); 

    // Print the coefficients and intercept for logistic regression 
    System.out.println("Coefficients: " 
      + lrModel.coefficients() + " Intercept: " + lrModel.intercept()); 

    // We can also use the multinomial family for binary classification 
    LogisticRegression mlr = new LogisticRegression() 
      .setMaxIter(10) 
      .setRegParam(0.3) 
      .setElasticNetParam(0.8) 
      .setFamily("multinomial"); 

    // Fit the model 
    LogisticRegressionModel mlrModel = mlr.fit(training); 

    // Print the coefficients and intercepts for logistic regression with multinomial family 
    System.out.println("Multinomial coefficients: " + lrModel.coefficientMatrix() 
      + "\nMultinomial intercepts: " + mlrModel.interceptVector()); 

    spark.stop(); 
  }
}

I have installed spark-2.1.1-bin-hadoop2.7 on my system. This is the pom.xml file I have:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.1.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.10</artifactId>
    <version>2.1.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-mllib-local_2.10 -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib-local_2.10</artifactId>
    <version>2.1.1</version>
</dependency>

But I am getting the following exception. This kind of error occurs when different versions of Scala are used in the same program:

17/09/08 16:42:19 INFO SparkEnv: Registering OutputCommitCoordinator
Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$scope()Lscala/xml/TopScope$;
    at org.apache.spark.ui.jobs.AllJobsPage.<init>(AllJobsPage.scala:39)
    at org.apache.spark.ui.jobs.JobsTab.<init>(JobsTab.scala:38)
    at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:65)
    at org.apache.spark.ui.SparkUI.<init>(SparkUI.scala:82)
    at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:220)
    at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:162)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:452)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
    at JavaLogisticRegressionWithElasticNetExample.main(JavaLogisticRegressionWithElasticNetExample.java:12)
17/09/08 16:42:19 INFO DiskBlockManager: Shutdown hook called
17/09/08 16:42:19 INFO ShutdownHookManager: Shutdown hook called
17/09/08 16:42:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-8460a189-3039-47ec-8d75-9e0ca8b4ee5d
17/09/08 16:42:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-8460a189-3039-47ec-8d75-9e0ca8b4ee5d/userFiles-9b6994eb-1376-47a3-929e-e415e1fdb0c0


So what exactly is the problem/question here? – Derlin


Getting this exception: 17/09/08 16:42:19 INFO SparkEnv: Registering OutputCommitCoordinator Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$scope()Lscala/xml/TopScope$; at org.apache.spark.ui.jobs.AllJobsPage.<init>(AllJobsPage.scala:39) –

Answer


You are mixing Scala versions. In your dependencies (pom.xml), some libraries are built against Scala 2.10 and others against Scala 2.11.

Use spark-sql_2.10 instead of spark-sql_2.11 and it will be fine (or change the mllib artifacts to 2.11), as in the sketch below.
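
For reference, here is a minimal sketch of the dependency block with every Spark artifact on the same Scala line (2.11 here, which is what the prebuilt spark-2.1.1-bin-hadoop2.7 distribution is compiled against):

    <!-- All Spark artifacts share the _2.11 suffix, so only one Scala
         runtime ends up on the classpath. -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.1.1</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_2.11</artifactId>
        <version>2.1.1</version>
    </dependency>
    <!-- spark-mllib_2.11 already depends on spark-mllib-local_2.11
         transitively, so the explicit spark-mllib-local entry can
         usually be dropped. -->

Running mvn dependency:tree afterwards is an easy way to confirm that no _2.10 artifacts remain on the classpath.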


Thank you @Derlin. –
