Spark practice: submitting a job to the cluster (submit job via cluster)
Created by Wang, Jerry, last modified on Sep 12, 2015
Run start-master.sh (under the sbin folder), then check the process with ps aux:
7334 5.6 0.6 1146992 221652 pts/0 Sl 12:34 0:05 /usr/jdk1.7.0_79/bin/java -cp /root/devExpert/spark-1.4.1/sbin/…/conf/:/root/devExpert/spar
Monitor the master node via its web UI: http://10.128.184.131:8080
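The master-startup steps above can be sketched as follows. SPARK_HOME and the hostname are the values from this article; the port numbers are Spark's standalone-mode defaults, so adjust them if your conf overrides them.

```shell
# Sketch of: start the master, verify the JVM process, note the URLs.
SPARK_HOME=/root/devExpert/spark-1.4.1
MASTER_HOST=$(hostname)                    # NKGV50849583FV1 in this article
MASTER_URL="spark://${MASTER_HOST}:7077"   # 7077 is the default master port
WEBUI_URL="http://${MASTER_HOST}:8080"     # 8080 is the default web UI port
echo "master URL: $MASTER_URL"
echo "web UI:     $WEBUI_URL"
# "$SPARK_HOME"/sbin/start-master.sh       # launches the master daemon
# ps aux | grep '[M]aster'                 # verify the master JVM is running
```

Workers and spark-submit both connect to the `spark://host:7077` URL; the `:8080` URL is only for the browser.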
Start two workers:
./spark-class org.apache.spark.deploy.worker.Worker spark://NKGV50849583FV1:7077 (under the bin folder)
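Running that command twice gives the two workers. A minimal sketch of launching both in the background (paths and master URL are the ones from this article; each worker registers with the master on its own randomly chosen port, which is why the log later shows ports 53710 and 34423):

```shell
SPARK_HOME=/root/devExpert/spark-1.4.1
MASTER_URL=spark://NKGV50849583FV1:7077
STARTED=0
for i in 1 2; do
  # Real launch line (commented out here so the sketch runs standalone):
  # "$SPARK_HOME"/bin/spark-class org.apache.spark.deploy.worker.Worker "$MASTER_URL" &
  STARTED=$((STARTED + 1))
done
echo "workers launched: $STARTED"
```

Both workers then appear on the master's web UI at http://10.128.184.131:8080.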
Submit the job to the cluster:
./spark-submit --class "org.apache.spark.examples.JavaWordCount" --master spark://NKGV50849583FV1:7077 /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
The job executes successfully:
./spark-submit --class "org.apache.spark.examples.JavaWordCount" --master spark://NKGV50849583FV1:7077 /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
added by Jerry: loading load-spark-env.sh !!!1
added by Jerry:…
/root/devExpert/spark-1.4.1/conf
added by Jerry, number of Jars: 1
added by Jerry, launch_classpath: /root/devExpert/spark-1.4.1/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.0.jar
added by Jerry,RUNNER:/usr/jdk1.7.0_79/bin/java
added by Jerry, printf argument list: org.apache.spark.deploy.SparkSubmit --class org.apache.spark.examples.JavaWordCount --master spark://NKGV50849583FV1:7077 /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
added by Jerry, I am in if-else branch: /usr/jdk1.7.0_79/bin/java -cp /root/devExpert/spark-1.4.1/conf/:/root/devExpert/spark-1.4.1/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.0.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-rdbms-3.2.9.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-core-3.2.10.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-api-jdo-3.2.6.jar -Xms512m -Xmx512m -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --master spark://NKGV50849583FV1:7077 --class org.apache.spark.examples.JavaWordCount /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
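The "added by Jerry" lines are debug echoes the author inserted into bin/spark-submit and bin/spark-class; they trace how the launcher turns the spark-submit invocation into a plain java command. A simplified sketch of that assembly, with variable names mirroring the debug output (the real spark-class builds the argument list via the launcher library, so this is conceptual only):

```shell
SPARK_HOME=/root/devExpert/spark-1.4.1
RUNNER=/usr/jdk1.7.0_79/bin/java   # resolved from JAVA_HOME, as in the debug output
LAUNCH_CLASSPATH="$SPARK_HOME/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.0.jar"
# spark-class prepends conf/ and the assembly jar to the classpath and runs
# org.apache.spark.deploy.SparkSubmit, which re-parses --master/--class and
# finally invokes the main class of the user jar:
CMD="$RUNNER -cp $SPARK_HOME/conf/:$LAUNCH_CLASSPATH org.apache.spark.deploy.SparkSubmit"
echo "$CMD"
```

This matches the "I am in if-else branch" line above: the printed command is exactly RUNNER plus the classpath plus SparkSubmit plus the original spark-submit arguments.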
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/08/15 14:08:02 INFO SparkContext: Running Spark version 1.4.1
15/08/15 14:08:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
15/08/15 14:08:03 WARN Utils: Your hostname, NKGV50849583FV1 resolves to a loopback address: 127.0.0.1; using 10.128.184.131 instead (on interface eth0)
15/08/15 14:08:03 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/08/15 14:08:03 INFO SecurityManager: Changing view acls to: root
15/08/15 14:08:03 INFO SecurityManager: Changing modify acls to: root
15/08/15 14:08:03 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/08/15 14:08:04 INFO Slf4jLogger: Slf4jLogger started
15/08/15 14:08:04 INFO Remoting: Starting remoting
15/08/15 14:08:04 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.128.184.131:44792]
15/08/15 14:08:04 INFO Utils: Successfully started service 'sparkDriver' on port 44792.
15/08/15 14:08:04 INFO SparkEnv: Registering MapOutputTracker
15/08/15 14:08:04 INFO SparkEnv: Registering BlockManagerMaster
15/08/15 14:08:04 INFO DiskBlockManager: Created local directory at /tmp/spark-6fc6b901-3ac8-4acd-87aa-352fd22cf8d4/blockmgr-4c660a56-0014-4b1f-81a9-7ac66507b9fa
15/08/15 14:08:04 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/08/15 14:08:05 INFO HttpFileServer: HTTP File server directory is /tmp/spark-6fc6b901-3ac8-4acd-87aa-352fd22cf8d4/httpd-b4344651-dbd8-4ba4-be1a-913ae006d839
15/08/15 14:08:05 INFO HttpServer: Starting HTTP Server
15/08/15 14:08:05 INFO Utils: Successfully started service 'HTTP file server' on port 46256.
15/08/15 14:08:05 INFO SparkEnv: Registering OutputCommitCoordinator
15/08/15 14:08:05 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
15/08/15 14:08:05 WARN QueuedThreadPool: 2 threads could not be stopped
15/08/15 14:08:05 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
15/08/15 14:08:05 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
15/08/15 14:08:06 WARN Utils: Service 'SparkUI' could not bind on port 4043. Attempting port 4044.
15/08/15 14:08:06 WARN Utils: Service 'SparkUI' could not bind on port 4044. Attempting port 4045.
15/08/15 14:08:06 INFO Utils: Successfully started service 'SparkUI' on port 4045.
15/08/15 14:08:06 INFO SparkUI: Started SparkUI at http://10.128.184.131:4045
15/08/15 14:08:06 INFO SparkContext: Added JAR file:/root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar at http://10.128.184.131:46256/jars/JavaWordCount-1.jar with timestamp 1439618886415
15/08/15 14:08:06 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@NKGV50849583FV1:7077/user/Master...
15/08/15 14:08:06 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150815140806-0003
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor added: app-20150815140806-0003/0 on worker-20150815125648-10.128.184.131-53710 (10.128.184.131:53710) with 8 cores
15/08/15 14:08:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150815140806-0003/0 on hostPort 10.128.184.131:53710 with 8 cores, 512.0 MB RAM
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor added: app-20150815140806-0003/1 on worker-20150815125443-10.128.184.131-34423 (10.128.184.131:34423) with 8 cores
15/08/15 14:08:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150815140806-0003/1 on hostPort 10.128.184.131:34423 with 8 cores, 512.0 MB RAM
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor updated: app-20150815140806-0003/0 is now LOADING
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor updated: app-20150815140806-0003/1 is now LOADING
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor updated: app-20150815140806-0003/0 is now RUNNING
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor updated: app-20150815140806-0003/1 is now RUNNING
… (intermediate log lines garbled in the source, including "block broadcast_0_piece0 stored a…") …
15/08/15 14:08:16 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/08/15 14:08:16 INFO SparkContext: Successfully stopped SparkContext
15/08/15 14:08:16 INFO Utils: Shutdown hook called
15/08/15 14:08:16 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/08/15 14:08:16 INFO Utils: Deleting directory /tmp/spark-6fc6b901-3ac8-4acd-87aa-352fd22cf8d4
15/08/15 14:08:16 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
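What the submitted job actually computes is a word-frequency count over test.txt. The same result can be cross-checked locally with a plain shell pipeline (using a hypothetical sample file here, since the article's test.txt content is not shown):

```shell
# Local cross-check of what JavaWordCount computes: per-word counts.
printf 'hello spark\nhello world\n' > /tmp/wc-sample.txt
# split on spaces, sort so duplicates are adjacent, count, highest first
tr -s ' ' '\n' < /tmp/wc-sample.txt | sort | uniq -c | sort -rn
```

For the sample above, "hello" appears twice and the other words once, which is the kind of output the Spark job prints per word.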
If one worker is shut down:
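The article ends before showing the result. To try it, one way is to kill the worker's JVM; a sketch of extracting the PID from `ps aux` output (the PID-picking helper is an illustration, not part of Spark; in standalone mode the master then marks the worker DEAD on the web UI):

```shell
# Pick the Worker JVM's PID out of `ps aux`-style output so it can be killed.
# The bracket trick '[o]rg...' keeps a grep over ps output from matching itself.
worker_pid_from_ps() {
  grep '[o]rg.apache.spark.deploy.worker.Worker' | awk '{print $2}' | head -n 1
}
# Usage against a live cluster:
#   PID=$(ps aux | worker_pid_from_ps)
#   [ -n "$PID" ] && kill "$PID"
```

After the kill, refreshing http://10.128.184.131:8080 should show only the surviving worker as ALIVE.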