I want to use a machine A to submit Spark jobs to a cluster. A has no Spark environment, only Java. When I run my Spark jar on this machine, an HTTP server starts. What is it?
[[email protected] ~]$ java -jar helloCluster.jar SimplyApp
log4j:WARN No appenders could be found for logger (akka.event.slf4j.Slf4jLogger).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
14/06/10 16:54:54 INFO SparkEnv: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/06/10 16:54:54 INFO SparkEnv: Registering BlockManagerMaster
14/06/10 16:54:54 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140610165454-4393
14/06/10 16:54:54 INFO MemoryStore: MemoryStore started with capacity 1055.1 MB.
14/06/10 16:54:54 INFO ConnectionManager: Bound socket to port 59981 with id = ConnectionManagerId(bj-230,59981)
14/06/10 16:54:54 INFO BlockManagerMaster: Trying to register BlockManager
14/06/10 16:54:54 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager bj-230:59981 with 1055.1 MB RAM
14/06/10 16:54:54 INFO BlockManagerMaster: Registered BlockManager
14/06/10 16:54:54 INFO HttpServer: Starting HTTP Server
14/06/10 16:54:54 INFO HttpBroadcast: Broadcast server started at http://10.10.10.230:59233
14/06/10 16:54:54 INFO SparkEnv: Registering MapOutputTracker
14/06/10 16:54:54 INFO HttpFileServer: HTTP File server directory is /tmp/spark-bfdd02f1-3c02-4233-854f-af89542b9acf
14/06/10 16:54:54 INFO HttpServer: Starting HTTP Server
14/06/10 16:54:54 INFO SparkUI: Started Spark Web UI at http://bj-230:4040
14/06/10 16:54:54 INFO SparkContext: Added JAR hdfs://master:8020/tmp/helloCluster.jar at hdfs://master:8020/tmp/helloCluster.jar with timestamp 1402390494838
14/06/10 16:54:54 INFO AppClient$ClientActor: Connecting to master spark://master:7077...
So what is the purpose of this server? And if machine A is behind a NAT, can I still use it to submit my jobs to the remote cluster?
By the way, this run fails. Error log:
14/06/10 16:55:05 INFO SparkDeploySchedulerBackend: Executor app-20140610165321-0005/7 removed: Command exited with code 1
14/06/10 16:55:05 ERROR AppClient$ClientActor: Master removed our application: FAILED; stopping client
14/06/10 16:55:05 WARN SparkDeploySchedulerBackend: Disconnected from Spark cluster! Waiting for reconnection...
14/06/10 16:55:11 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
Did you set the master on your SparkConf? And did you close the SparkContext you declared? – eliasah
@eliasah `val sc = new SparkContext("spark://master:7077", "Simple App", "/opt/spark-0.9.1-bin-cdh4" /* spark home */, List("hdfs://master:8020/tmp/helloCluster.jar") /* jar location */)` I don't understand what closing the SparkContext means? – hakunami
You should put `sc.stop()` at the end of your app so it can shut down correctly. – eliasah
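A minimal sketch of what eliasah suggests, using the Spark 0.9 `SparkContext` constructor already shown in the comment above. The `SimplyApp` object name matches the command-line argument from the question; the `parallelize` job itself is only illustrative, and running this requires the actual cluster and paths from the question.

```scala
import org.apache.spark.SparkContext

object SimplyApp {
  def main(args: Array[String]) {
    // Same constructor arguments as in the comment above.
    val sc = new SparkContext(
      "spark://master:7077",                          // master URL
      "Simple App",                                   // application name
      "/opt/spark-0.9.1-bin-cdh4",                    // Spark home on the workers
      List("hdfs://master:8020/tmp/helloCluster.jar") // jar shipped to executors
    )
    try {
      // Illustrative job; replace with your real work.
      val data = sc.parallelize(1 to 100)
      println(data.count())
    } finally {
      sc.stop() // release executors and shut the context down cleanly
    }
  }
}
```

Wrapping the work in `try`/`finally` ensures `sc.stop()` runs even if the job throws, so the driver deregisters from the master instead of lingering.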