
Spark on Mesos: tasks are scheduled on a single node

Suppose I run a pyspark shell against a Mesos cluster, and I want it to take up only 12 CPU cores. So I start it like this:

[email protected]:~$ pyspark --master mesos://e3.test:5050 --total-executor-cores 12 

and the usual startup output goes by:

Python 2.7.13 |Anaconda 2.5.0 (64-bit)| (default, Dec 20 2016, 23:09:15) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2 
Type "help", "copyright", "credits" or "license" for more information. 
Anaconda is brought to you by Continuum Analytics. 
Please check out: http://continuum.io/thanks and https://anaconda.org 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
17/01/31 08:16:31 INFO SparkContext: Running Spark version 1.6.2 
17/01/31 08:16:31 INFO SecurityManager: Changing view acls to: uu 
17/01/31 08:16:31 INFO SecurityManager: Changing modify acls to: uu 
17/01/31 08:16:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(uu); users with modify permissions: Set(uu) 
17/01/31 08:16:31 INFO Utils: Successfully started service 'sparkDriver' on port 53336. 
17/01/31 08:16:31 INFO Slf4jLogger: Slf4jLogger started 
17/01/31 08:16:32 INFO Remoting: Starting remoting 
17/01/31 08:16:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:59860] 
17/01/31 08:16:32 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 59860. 
17/01/31 08:16:32 INFO SparkEnv: Registering MapOutputTracker 
17/01/31 08:16:32 INFO SparkEnv: Registering BlockManagerMaster 
17/01/31 08:16:32 INFO DiskBlockManager: Created local directory at /var/tmp/spark/blockmgr-6b16ff11-b0bc-4a71-82f5-c69a363c8c1a 
17/01/31 08:16:32 INFO MemoryStore: MemoryStore started with capacity 511.1 MB 
17/01/31 08:16:32 INFO SparkEnv: Registering OutputCommitCoordinator 
17/01/31 08:16:32 INFO Utils: Successfully started service 'SparkUI' on port 4040. 
17/01/31 08:16:32 INFO SparkUI: Started SparkUI at http://r4.test:4040 
I0131 08:16:32.582038 24965 sched.cpp:226] Version: 1.1.0 
I0131 08:16:32.586931 24958 sched.cpp:330] New master detected at [email protected]:5050 
I0131 08:16:32.587162 24958 sched.cpp:341] No credentials provided. Attempting to register without authentication 
I0131 08:16:32.596922 24956 sched.cpp:743] Framework registered with 075ef8d0-de21-472d-8198-80805006b93d-0051 
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: Registered as framework ID 075ef8d0-de21-472d-8198-80805006b93d-0051 
17/01/31 08:16:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51135. 
17/01/31 08:16:32 INFO NettyBlockTransferService: Server created on 51135 
17/01/31 08:16:32 INFO BlockManagerMaster: Trying to register BlockManager 
17/01/31 08:16:32 INFO BlockManagerMasterEndpoint: Registering block manager r4.test:51135 with 511.1 MB RAM, BlockManagerId(driver, r4.test, 51135) 
17/01/31 08:16:32 INFO BlockManagerMaster: Registered BlockManager 
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0 
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_RUNNING 
Welcome to 
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Python version 2.7.13 (default, Dec 20 2016 23:09:15) 
SparkContext available as sc, HiveContext available as sqlContext. 

but I end up with only one registered executor:

>>> 17/01/31 08:16:35 INFO CoarseMesosSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (r5.test:42965) with ID 023af0f2-fc60-4d9d-a3db-301ab34764c9-S3 
17/01/31 08:16:35 INFO BlockManagerMasterEndpoint: Registering block manager r5.test:33239 with 511.1 MB RAM, BlockManagerId(023af0f2-fc60-4d9d-a3db-301ab34764c9-S3, r5.test, 33239) 

which means the whole Spark app runs on a single node. That is not the scheduling I want (mainly for data-locality reasons). What I expected is behaviour similar to a Spark standalone installation, where the --total-executor-cores get spread more or less evenly across the cluster.

How can I achieve this? The remaining options that mention executor/core counts have no effect here (they only apply to standalone and YARN setups).

Why does Spark on Mesos use a placement strategy that fills up nodes one by one instead of spreading the work out?
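A quick way to see the placement from the shell is to ask each task for the hostname it runs on. This is just a diagnostic sketch using the sc provided by the pyspark shell; with the single executor registered above it would be expected to report only r5.test:

import socket

# Run one tiny task per partition and record the hostname it executed on.
hosts = (sc.parallelize(range(100), 100)
           .map(lambda _: socket.gethostname())
           .distinct()
           .collect())
print(sorted(hosts))   # expected here: only the r5.test node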

UPD: None of the conf entries mentioned in the docs work either:

pyspark --master mesos://e3.test:5050 --conf spark.executor.cores=2 --conf spark.cores.max=12 

Answer

version 1.6.2 

is the problem. Newer Spark versions honor spark.executor.cores with the coarse-grained Mesos backend, which limits the number of cores per executor and so lets the scheduler spread the executors across nodes.
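For example, on Spark 2.x the same 12-core budget could be split into 2-core executors, giving the Mesos scheduler a chance to place them on different nodes. A minimal sketch, assuming the same Mesos master and an arbitrary app name:

from pyspark import SparkConf, SparkContext

# Cap the whole application at 12 cores, but limit each executor to 2 cores,
# so up to 6 executors can be launched instead of one 12-core executor.
# (spark.executor.cores is only honored by the coarse-grained Mesos backend
# from Spark 2.0 onwards.)
conf = (SparkConf()
        .setMaster("mesos://e3.test:5050")
        .setAppName("spread-executors")
        .set("spark.cores.max", "12")
        .set("spark.executor.cores", "2"))

sc = SparkContext(conf=conf)
# The Executors tab of the Spark UI (port 4040 on the driver) then shows
# on which agents the executors actually came up.

The same two settings can also be passed as --conf flags on the command line, as in the UPD above; the point is that the 1.6.2 Mesos backend simply ignores spark.executor.cores.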
