
I have just installed Apache Spark 2.2.0-4 on Arch Linux, together with Scala 2.12.4 and Apache Hadoop 3.0. As soon as I run spark-shell I get the exception below. Why does spark-shell fail with "SymbolTable.exitingPhase ... java.lang.NullPointerException"?

Exception in thread "main" java.lang.NullPointerException 
at scala.reflect.internal.SymbolTable.exitingPhase(SymbolTable.scala:256) 
at scala.tools.nsc.interpreter.IMain$Request.x$20$lzycompute(IMain.scala:896) 
at scala.tools.nsc.interpreter.IMain$Request.x$20(IMain.scala:895) 
at scala.tools.nsc.interpreter.IMain$Request.headerPreamble$lzycompute(IMain.scala:895) 
at scala.tools.nsc.interpreter.IMain$Request.headerPreamble(IMain.scala:895) 
at scala.tools.nsc.interpreter.IMain$Request$Wrapper.preamble(IMain.scala:918) 
at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1337) 
at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1336) 
at scala.tools.nsc.util.package$.stringFromWriter(package.scala:64) 
at scala.tools.nsc.interpreter.IMain$CodeAssembler$class.apply(IMain.scala:1336) 
at scala.tools.nsc.interpreter.IMain$Request$Wrapper.apply(IMain.scala:908) 
at scala.tools.nsc.interpreter.IMain$Request.compile$lzycompute(IMain.scala:1002) 
at scala.tools.nsc.interpreter.IMain$Request.compile(IMain.scala:997) 
at scala.tools.nsc.interpreter.IMain.compile(IMain.scala:579) 
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567) 
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565) 
at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807) 
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681) 
at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395) 
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38) 
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37) 
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37) 
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214) 
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37) 
at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:98) 
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920) 
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) 
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) 
at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97) 
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909) 
at org.apache.spark.repl.Main$.doMain(Main.scala:70) 
at org.apache.spark.repl.Main$.main(Main.scala:53) 
at org.apache.spark.repl.Main.main(Main.scala) 
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.base/java.lang.reflect.Method.invoke(Method.java:564) 
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755) 
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180) 
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205) 
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119) 
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

After checking Spark Shell "Failed to Initialize Compiler" Error on a mac, I tried using JDK 8, but that solution does not work for me.

Please give me a clue as to what else this could be.

Edit 2017-12-30: this is my console output:

$ java -version 
java version "1.8.0_152" 
Java(TM) SE Runtime Environment (build 1.8.0_152-b16) 
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode) 

$ spark-shell 
/usr/bin/hadoop 
WARNING: HADOOP_SLAVES has been replaced by HADOOP_WORKERS. Using value of HADOOP_SLAVES. 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 

Failed to initialize compiler: object java.lang.Object in compiler mirror not found. 
** Note that as of 2.8 scala does not assume use of the java classpath. 
** For the old behavior pass -usejavacp to scala, or if using a Settings 
** object programmatically, settings.usejavacp.value = true. 

Failed to initialize compiler: object java.lang.Object in compiler mirror not found. 
** Note that as of 2.8 scala does not assume use of the java classpath. 
** For the old behavior pass -usejavacp to scala, or if using a Settings 
** object programmatically, settings.usejavacp.value = true. 
Exception in thread "main" java.lang.NullPointerException 
at scala.reflect.internal.SymbolTable.exitingPhase(SymbolTable.scala:256) 
at scala.tools.nsc.interpreter.IMain$Request.x$20$lzycompute(IMain.scala:896) 
at scala.tools.nsc.interpreter.IMain$Request.x$20(IMain.scala:895) 
at scala.tools.nsc.interpreter.IMain$Request.headerPreamble$lzycompute(IMain.scala:895) 
at scala.tools.nsc.interpreter.IMain$Request.headerPreamble(IMain.scala:895) 
at scala.tools.nsc.interpreter.IMain$Request$Wrapper.preamble(IMain.scala:918) 
at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1337) 
at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1336) 
at scala.tools.nsc.util.package$.stringFromWriter(package.scala:64) 
at scala.tools.nsc.interpreter.IMain$CodeAssembler$class.apply(IMain.scala:1336) 
at scala.tools.nsc.interpreter.IMain$Request$Wrapper.apply(IMain.scala:908) 
at scala.tools.nsc.interpreter.IMain$Request.compile$lzycompute(IMain.scala:1002) 
at scala.tools.nsc.interpreter.IMain$Request.compile(IMain.scala:997) 
at scala.tools.nsc.interpreter.IMain.compile(IMain.scala:579) 
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567) 
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565) 
at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807) 
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681) 
at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395) 
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38) 
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37) 
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37) 
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214) 
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37) 
at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:98) 
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920) 
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) 
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) 
at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97) 
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909) 
at org.apache.spark.repl.Main$.doMain(Main.scala:70) 
at org.apache.spark.repl.Main$.main(Main.scala:53) 
at org.apache.spark.repl.Main.main(Main.scala) 
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.base/java.lang.reflect.Method.invoke(Method.java:564) 
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755) 
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180) 
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205) 
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119) 
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

Edit 2017-12-31:

$ export SPARK_PRINT_LAUNCH_COMMAND=1 
$ spark-shell 
/usr/bin/hadoop 
WARNING: HADOOP_SLAVES has been replaced by HADOOP_WORKERS. Using value of HADOOP_SLAVES. 
Spark Command: /usr/lib/jvm/default-runtime/bin/java -cp /opt/apache-spark/conf/:/opt/apache-spark/jars/*:/etc/hadoop/:/usr/lib/hadoop-3.0.0/share/hadoop/common/lib/*:/usr/lib/hadoop-3.0.0/share/hadoop/common/*:/usr/lib/hadoop-3.0.0/share/hadoop/hdfs/:/usr/lib/hadoop-3.0.0/share/hadoop/hdfs/lib/*:/usr/lib/hadoop-3.0.0/share/hadoop/hdfs/*:/usr/lib/hadoop-3.0.0/share/hadoop/mapreduce/*:/usr/lib/hadoop-3.0.0/share/hadoop/yarn/:/usr/lib/hadoop-3.0.0/share/hadoop/yarn/lib/*:/usr/lib/hadoop-3.0.0/share/hadoop/yarn/* -Dscala.usejavacp=true -Xmx1g org.apache.spark.deploy.SparkSubmit --class org.apache.spark.repl.Main --name Spark shell spark-shell 
======================================== 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 

Failed to initialize compiler: object java.lang.Object in compiler mirror not found. 
** Note that as of 2.8 scala does not assume use of the java classpath. 
** For the old behavior pass -usejavacp to scala, or if using a Settings 
** object programmatically, settings.usejavacp.value = true. 

Failed to initialize compiler: object java.lang.Object in compiler mirror not found. 
** Note that as of 2.8 scala does not assume use of the java classpath. 
** For the old behavior pass -usejavacp to scala, or if using a Settings 
** object programmatically, settings.usejavacp.value = true. 
Exception in thread "main" java.lang.NullPointerException 
    at ... 

(same exception as before) 

$ /usr/lib/jvm/default-runtime/bin/java -version 
java version "9.0.1" 
Java(TM) SE Runtime Environment (build 9.0.1+11) 
Java HotSpot(TM) 64-Bit Server VM (build 9.0.1+11, mixed mode) 

Answers


TL;DR Spark supports Java 8 only. Java 9 is not supported, not even by 2.3.0-SNAPSHOT (built today from master).

With Java 9 first on the PATH you will get exactly the exception you faced:

$ java -version 
java version "9.0.1" 
Java(TM) SE Runtime Environment (build 9.0.1+11) 
Java HotSpot(TM) 64-Bit Server VM (build 9.0.1+11, mixed mode) 

Failed to initialize compiler: object java.lang.Object in compiler mirror not found. 
** Note that as of 2.8 scala does not assume use of the java classpath. 
** For the old behavior pass -usejavacp to scala, or if using a Settings 
** object programmatically, settings.usejavacp.value = true. 

Failed to initialize compiler: object java.lang.Object in compiler mirror not found. 
** Note that as of 2.8 scala does not assume use of the java classpath. 
** For the old behavior pass -usejavacp to scala, or if using a Settings 
** object programmatically, settings.usejavacp.value = true. 
Exception in thread "main" java.lang.NullPointerException 
    at scala.reflect.internal.SymbolTable.exitingPhase(SymbolTable.scala:256) 
    at scala.tools.nsc.interpreter.IMain$Request.x$20$lzycompute(IMain.scala:896) 
    at scala.tools.nsc.interpreter.IMain$Request.x$20(IMain.scala:895) 
    at scala.tools.nsc.interpreter.IMain$Request.headerPreamble$lzycompute(IMain.scala:895) 
    at scala.tools.nsc.interpreter.IMain$Request.headerPreamble(IMain.scala:895) 
    at scala.tools.nsc.interpreter.IMain$Request$Wrapper.preamble(IMain.scala:918) 
    at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1337) 
    at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1336) 
    at scala.tools.nsc.util.package$.stringFromWriter(package.scala:64) 
    at scala.tools.nsc.interpreter.IMain$CodeAssembler$class.apply(IMain.scala:1336) 
    at scala.tools.nsc.interpreter.IMain$Request$Wrapper.apply(IMain.scala:908) 
    at scala.tools.nsc.interpreter.IMain$Request.compile$lzycompute(IMain.scala:1002) 
    at scala.tools.nsc.interpreter.IMain$Request.compile(IMain.scala:997) 
    at scala.tools.nsc.interpreter.IMain.compile(IMain.scala:579) 
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567) 
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565) 
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807) 
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681) 
    at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395) 
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79) 
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1$$anonfun$apply$mcV$sp$2.apply(SparkILoop.scala:79) 
    at scala.collection.immutable.List.foreach(List.scala:381) 
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SparkILoop.scala:79) 
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79) 
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:79) 
    at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91) 
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:78) 
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78) 
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:78) 
    at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214) 
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:77) 
    at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:110) 
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920) 
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) 
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909) 
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97) 
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909) 
    at org.apache.spark.repl.Main$.doMain(Main.scala:76) 
    at org.apache.spark.repl.Main$.main(Main.scala:56) 
    at org.apache.spark.repl.Main.main(Main.scala) 
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.base/java.lang.reflect.Method.invoke(Method.java:564) 
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:878) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

As soon as Java 8 is the one on the PATH, Spark is fine, just as for the OP:


$ java -version 
java version "1.8.0_152" 
Java(TM) SE Runtime Environment (build 1.8.0_152-b16) 
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode) 

$ ./bin/spark-shell 
17/12/30 20:15:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 
17/12/30 20:15:12 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041. 
Spark context Web UI available at http://192.168.1.2:4041 
Spark context available as 'sc' (master = local[*], app id = local-1514661312813). 
Spark session available as 'spark'. 
Welcome to 
      ____              __ 
     / __/__  ___ _____/ /__ 
    _\ \/ _ \/ _ `/ __/ '_/ 
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0-SNAPSHOT 
      /_/ 

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152) 
Type in expressions to have them evaluated. 
Type :help for more information. 

scala> spark.version 
res0: String = 2.3.0-SNAPSHOT 
You are on Arch Linux with the Apache Spark package from AUR installed. Its source section shows seven files customized for the Linux distribution, including spark-env.sh, which contains this very interesting line setting JAVA_HOME:

export JAVA_HOME=/usr/lib/jvm/default-runtime 

That line is why spark-shell uses Java 9 regardless of what your PATH environment variable says.
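
This matters because Spark's launcher prefers JAVA_HOME over the PATH when choosing a JVM. A simplified sketch of the relevant logic in bin/spark-class (paraphrased, not the verbatim script):

if [ -n "${JAVA_HOME}" ]; then
  RUNNER="${JAVA_HOME}/bin/java"   # JAVA_HOME from spark-env.sh wins
else
  RUNNER="java"                    # fallback: first java found on the PATH
fi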

PROTIP: Use the SPARK_PRINT_LAUNCH_COMMAND environment variable to see the exact command, and hence the exact Java binary, that spark-shell starts with, e.g. SPARK_PRINT_LAUNCH_COMMAND=1 spark-shell. You can also run sh -x spark-shell, which is the more Linux-native way of debugging shell scripts such as spark-shell.
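
For example (the head filter below is just my suggestion to keep the trace short):

$ SPARK_PRINT_LAUNCH_COMMAND=1 spark-shell
$ sh -x "$(command -v spark-shell)" 2>&1 | head -n 40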

The solution is to configure /usr/lib/jvm/default-runtime to use Java 8 (not Java 9) by default, but that is ... well ... left as a home exercise. Happy Spark'ing!
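
A quick way to check what default-runtime currently resolves to, before and after changing it (readlink -f is standard coreutils):

$ readlink -f /usr/lib/jvm/default-runtime
$ /usr/lib/jvm/default-runtime/bin/java -version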


To clarify the problem: as you can see in my second edit, Spark was using Java 9 because spark-env.sh sets JAVA_HOME:

export JAVA_HOME=/usr/lib/jvm/default-runtime 

which on Arch Linux points to the default JDK. Check the /usr/lib/jvm folder to see the other installed JVM distributions.
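
Arch's java-runtime-common package also ships the archlinux-java helper, which lists the installed environments and marks the current default:

$ ls /usr/lib/jvm/
$ archlinux-java status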

To make a distribution the default, run:

archlinux-java set java-8-jdk 

Console session:

$ ls /usr/lib/jvm/ 
default/  default-runtime/  java-8-jdk/  java-8-openjdk/  java-9-jdk/ 
$ archlinux-java set java-8-jdk 
This script must be run as root 

$ sudo archlinux-java set java-8-jdk 
[sudo] password for yago: 
$ sudo archlinux-java set java-8-jdk 
$ spark-shell 
/usr/bin/hadoop 
WARNING: HADOOP_SLAVES has been replaced by HADOOP_WORKERS. Using value of HADOOP_SLAVES. 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 
2017-12-31 10:47:13,050 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
Spark context Web UI available at http://127.0.0.1:4040 
Spark context available as 'sc' (master = local[*], app id = local-1514717237307). 
Spark session available as 'spark'. 
Welcome to 
      ____              __ 
     / __/__  ___ _____/ /__ 
    _\ \/ _ \/ _ `/ __/ '_/ 
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0 
      /_/ 

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152) 
Type in expressions to have them evaluated. 
Type :help for more information. 

scala> 

What a joint effort to get unstuck! Thanks for posting the definitive answer. _Happy New Year 2018!_ –


Thank you, @Jacek Laskowski. Happy New Year – Yago
