Spark --- Startup, Run, and Shutdown

This note walks through the full lifecycle of a minimal Spark application, the bundled SparkPi example, by annotating the driver log of a single run: SparkContext startup, job/stage/task scheduling, and the orderly shutdown.
// scalastyle:off println
package org.apache.spark.examples

import scala.math.random

import org.apache.spark._

/** Computes an approximation to pi */
object SparkPi {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark Pi")
    val spark = new SparkContext(conf)
    // Number of partitions: taken from the first CLI argument, defaulting to 2
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    // Sample points uniformly in the square [-1, 1] x [-1, 1]; the fraction
    // that lands inside the unit circle approaches pi / 4.
    val count = spark.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}
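For quick experiments outside a cluster, the same logic can be pointed at a local master; a minimal sketch (setMaster("local[2]") and the sanity-check line are illustrative additions, not part of the shipped example):

import org.apache.spark.{SparkConf, SparkContext}

// Run the driver plus two worker threads inside a single JVM; handy for testing.
val conf = new SparkConf().setAppName("Spark Pi (local test)").setMaster("local[2]")
val sc = new SparkContext(conf)
println(sc.parallelize(1 to 4, 2).sum()) // trivial sanity check: prints 10.0
sc.stop()

The run below instead uses the bundled run-example launcher against a Spark 1.6.1 build; the driver log that follows is annotated step by step.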
[abc@search-engine---dev4 spark]$ ./bin/run-example SparkPi
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/06/07 03:43:20 INFO SparkContext: Running Spark version 1.6.1
16/06/07 03:43:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
#Set up ACL-based view/modify permissions for the submitting user
16/06/07 03:43:20 INFO SecurityManager: Changing view acls to: abc
16/06/07 03:43:20 INFO SecurityManager: Changing modify acls to: abc
16/06/07 03:43:20 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(abc); users with modify permissions: Set(abc)
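The SecurityManager's behavior is configuration-driven; by default, as the log shows, authentication and UI ACLs are disabled and only the submitting user (abc) holds view/modify rights. A sketch of tightening this (the property keys are standard Spark settings; the user lists are placeholders):

import org.apache.spark.SparkConf

// Illustrative security hardening; none of this is enabled in the run above.
val conf = new SparkConf()
  .set("spark.authenticate", "true")     // require a shared secret between daemons
  .set("spark.acls.enable", "true")      // enforce ACL checks
  .set("spark.ui.view.acls", "abc,ops")  // who may view the UI (placeholder list)
  .set("spark.modify.acls", "abc")       // who may kill or modify jobs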
16/06/07 03:43:21 INFO Utils: Successfully started service 'sparkDriver' on port 40568.
16/06/07 03:43:23 INFO Slf4jLogger: Slf4jLogger started
#Start the remote listening service on port 36739; in this version the communication layer is built on Akka (Spark 2.x later replaced it with a Netty-based RPC)
16/06/07 03:43:23 INFO Remoting: Starting remoting
16/06/07 03:43:23 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@127.0.0.1:36739]
16/06/07 03:43:23 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 36739.
#Register the MapOutputTracker and BlockManagerMaster endpoints with SparkEnv (the BlockManager itself registers later, once the executor is up)
16/06/07 03:43:23 INFO SparkEnv: Registering MapOutputTracker
16/06/07 03:43:23 INFO SparkEnv: Registering BlockManagerMaster
#Allocate storage: a local disk directory for blocks and an in-memory store
16/06/07 03:43:23 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-8a68c39e-40e5-43ca-b21e-081ef8d278e2
16/06/07 03:43:23 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
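The 511.1 MB figure is not arbitrary: under Spark 1.6's unified memory manager, the usable storage-plus-execution capacity is roughly (heap size - 300 MB reserved) * spark.memory.fraction (default 0.75). A back-of-the-envelope check in plain Scala (the ~982 MB usable heap for a default 1 GB driver is a JVM-dependent approximation):

// Rough reconstruction of the MemoryStore capacity shown in the log.
val heapMb = 982.0        // approx. usable heap for -Xmx1g (JVM-dependent assumption)
val reservedMb = 300.0    // memory Spark 1.6 reserves before applying the fraction
val fraction = 0.75       // default spark.memory.fraction in Spark 1.6
println(f"${(heapMb - reservedMb) * fraction}%.1f MB") // ~511.5 MB, matching the log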
16/06/07 03:43:23 INFO SparkEnv: Registering OutputCommitCoordinator
16/06/07 03:43:24 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/06/07 03:43:24 INFO SparkUI: Started SparkUI at http://127.0.0.1:4040
16/06/07 03:43:24 INFO HttpFileServer: HTTP File server directory is /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/httpd-796af3e2-122c-4780-9273-f4aa7d32bb04
#Start the HTTP file server, which serves the application's jars and files to executors (job status is viewed via the SparkUI on port 4040, started above)
16/06/07 03:43:24 INFO HttpServer: Starting HTTP Server
16/06/07 03:43:24 INFO Utils: Successfully started service 'HTTP file server' on port 54315.
#The SparkContext registers the locally built example jar, making it downloadable from http://127.0.0.1:54315
16/06/07 03:43:24 INFO SparkContext: Added JAR file:/usr/local/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar at http://127.0.0.1:54315/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1465285404966
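run-example adds the examples jar automatically; an application you submit yourself gets the same effect through SparkContext.addJar, which publishes the jar on this file server for executors to fetch. A minimal sketch (the path is a placeholder):

// Serve an extra jar to executors via the driver's HTTP file server.
sc.addJar("/path/to/extra-dependency.jar") // hypothetical path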
16/06/07 03:43:25 INFO Executor: Starting executor ID driver on host localhost
16/06/07 03:43:25 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 59217.
16/06/07 03:43:25 INFO NettyBlockTransferService: Server created on 59217
16/06/07 03:43:25 INFO BlockManagerMaster: Trying to register BlockManager
16/06/07 03:43:25 INFO BlockManagerMasterEndpoint: Registering block manager localhost:59217 with 511.1 MB RAM, BlockManagerId(driver, localhost, 59217)
16/06/07 03:43:25 INFO BlockManagerMaster: Registered BlockManager
#SparkContext submits a job to the DAGScheduler
16/06/07 03:43:26 INFO SparkContext: Starting job: reduce at SparkPi.scala:36
#The DAGScheduler receives job 0, which has 2 output partitions
16/06/07 03:43:26 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 2 output partitions
#The job is resolved into a single final stage, ResultStage 0
16/06/07 03:43:26 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:36)
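Note where the job actually begins: parallelize and map are lazy transformations that only build the RDD lineage, and it is the reduce action that makes SparkContext hand a job to the DAGScheduler. A minimal illustration:

val rdd = sc.parallelize(1 to 10, 2).map(_ * 2) // lazy: nothing is scheduled yet
val sum = rdd.reduce(_ + _)                     // action: submits a job, as in the log above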
#Before submitting a stage, the DAGScheduler first looks up that stage's parent stages. If the set of missing parents is empty, the stage is submitted right away;
#otherwise the missing parent stages are submitted recursively first. Stage 0 is then split into 2 tasks and handed to the TaskScheduler via its submitTasks method.
#(In early Spark releases, a trivial job with no dependencies and a single partition could run on a local driver thread without ever reaching the TaskScheduler; later releases removed that shortcut.)
16/06/07 03:43:26 INFO DAGScheduler: Parents of final stage: List()
16/06/07 03:43:26 INFO DAGScheduler: Missing parents: List()
16/06/07 03:43:26 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
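SparkPi involves no shuffle, so the parent list is empty and the job collapses into the single ResultStage 0. A job with a wide dependency would instead produce a parent ShuffleMapStage; a minimal sketch of such a lineage:

// reduceByKey forces a shuffle, so the DAGScheduler would create a
// ShuffleMapStage (parent) feeding the final ResultStage for collect().
val counts = sc.parallelize(Seq("a", "b", "a"), 2)
  .map(word => (word, 1))
  .reduceByKey(_ + _) // wide dependency: stage boundary
  .collect()          // action: ResultStage with one parent stage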
16/06/07 03:43:26 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1904.0 B, free 1904.0 B)
16/06/07 03:43:26 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1218.0 B, free 3.0 KB)
16/06/07 03:43:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:59217 (size: 1218.0 B, free: 511.1 MB)
16/06/07 03:43:26 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/06/07 03:43:26 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32)
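The broadcast_0 blocks stored above are the stage's serialized task binary, which the DAGScheduler ships to executors through Spark's broadcast mechanism; user code can use the same facility via sc.broadcast. A minimal sketch:

// Broadcast a read-only lookup table once per executor rather than once per task.
val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
val resolved = sc.parallelize(Seq("a", "b"), 2)
  .map(k => lookup.value.getOrElse(k, 0))
  .collect()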
#TaskSchedulerImpl, the concrete implementation of TaskScheduler, receives the 2 tasks submitted by the DAGScheduler as task set 0.0
16/06/07 03:43:26 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/06/07 03:43:26 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2152 bytes)
16/06/07 03:43:26 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2152 bytes)
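PROCESS_LOCAL in these lines is the best data-locality level: the task's data sits in the same JVM as the executor (lesser levels include NODE_LOCAL, RACK_LOCAL, and ANY). How long the scheduler holds a task waiting for a better level is governed by spark.locality.wait; a sketch of tuning it (the 10s value is purely illustrative):

// Wait longer for data-local slots before degrading locality (default is 3s).
val conf = new SparkConf().set("spark.locality.wait", "10s")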
#Upon receiving the tasks, the executor fetches the application jar from the driver's HTTP file server to local disk, runs the computation, and sends each task's result and status back to the driver
16/06/07 03:43:26 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/06/07 03:43:26 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/06/07 03:43:26 INFO Executor: Fetching http://127.0.0.1:54315/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1465285404966
16/06/07 03:43:27 INFO Utils: Fetching http://127.0.0.1:54315/jars/spark-examples-1.6.1-hadoop2.6.0.jar to /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/userFiles-b021b090-3024-421c-b4b0-73fc9f723f44/fetchFileTemp4760324069006875921.tmp
16/06/07 03:43:28 INFO Executor: Adding file:/tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/userFiles-b021b090-3024-421c-b4b0-73fc9f723f44/spark-examples-1.6.1-hadoop2.6.0.jar to class loader
16/06/07 03:43:29 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1031 bytes result sent to driver
16/06/07 03:43:29 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1031 bytes result sent to driver
#The TaskSetManager, TaskSchedulerImpl, and DAGScheduler each record the task completions
16/06/07 03:43:29 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 2131 ms on localhost (1/2)
16/06/07 03:43:29 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2189 ms on localhost (2/2)
16/06/07 03:43:29 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/06/07 03:43:29 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 2.217 s
16/06/07 03:43:29 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 2.877995 s
#The program's result is printed
Pi is roughly 3.14282
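The result is statistical, not exact: with n = 200000 samples the standard error of the estimate is about 4 * sqrt(p(1 - p) / n) ≈ 0.004 for p = pi/4, so a value like 3.14282 is within expectation. A plain-Scala sketch (no Spark needed) showing the error shrink as the sample count grows:

// Local Monte Carlo check of how accuracy improves with sample size.
def estimatePi(samples: Int): Double = {
  val hits = (1 to samples).count { _ =>
    val x = math.random * 2 - 1
    val y = math.random * 2 - 1
    x * x + y * y < 1
  }
  4.0 * hits / samples
}
for (n <- Seq(1000, 100000, 10000000)) println(s"n=$n -> pi ~ ${estimatePi(n)}")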
#spark.stop() triggers an orderly shutdown: the web UI, scheduler and tracker endpoints, storage layers, and remoting are stopped, and the temporary directories are deleted
16/06/07 03:43:29 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
16/06/07 03:43:29 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/06/07 03:43:29 INFO MemoryStore: MemoryStore cleared
16/06/07 03:43:29 INFO BlockManager: BlockManager stopped
16/06/07 03:43:29 INFO BlockManagerMaster: BlockManagerMaster stopped
16/06/07 03:43:29 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/06/07 03:43:29 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/06/07 03:43:29 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/06/07 03:43:29 INFO SparkContext: Successfully stopped SparkContext
16/06/07 03:43:29 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/06/07 03:43:29 INFO ShutdownHookManager: Shutdown hook called
16/06/07 03:43:29 INFO ShutdownHookManager: Deleting directory /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7/httpd-796af3e2-122c-4780-9273-f4aa7d32bb04
16/06/07 03:43:29 INFO ShutdownHookManager: Deleting directory /tmp/spark-3ef0b16c-fe81-482e-8446-30571da062e7