
Installing and Configuring a Spark 0.9 Cluster on CentOS 6.4


Spark is a fast, general-purpose cluster computing framework. Its core is written in Scala, and it provides high-level APIs for Scala, Java, and Python that make it very easy to develop parallel data-processing applications.
Below, we set up a Spark cluster computing environment and run a simple verification job to get a feel for computing with Spark. Both installing the runtime environment and writing processing code (in Scala; Spark's built-in shell lets you type Scala code directly to process data) turn out to be much simpler than with the Hadoop MapReduce framework, and Spark also interoperates well with HDFS (reading data from HDFS and writing results back to it).

Installation and Configuration

Download, install, and configure Scala:

wget http://www.scala-lang.org/files/archive/scala-2.10.3.tgz

Then copy the Spark installation directory (spark-0.9.0-incubating-bin-hadoop1, unpacked under ~/cloud/programs) to the other nodes s1, s2, and s3:

scp -r ~/cloud/programs/spark-0.9.0-incubating-bin-hadoop1 shirdrn@s1:~/cloud/programs/
scp -r ~/cloud/programs/spark-0.9.0-incubating-bin-hadoop1 shirdrn@s2:~/cloud/programs/
scp -r ~/cloud/programs/spark-0.9.0-incubating-bin-hadoop1 shirdrn@s3:~/cloud/programs/
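Before distributing, the configuration would typically be edited on m1. In a Spark 0.9 standalone setup, conf/spark-env.sh points at Scala and the master host, and conf/slaves lists the worker hosts; a minimal sketch, where the Scala path and hostnames are assumptions based on the commands above, not taken from the original:

# conf/spark-env.sh
export JAVA_HOME=/usr/java/default
export SCALA_HOME=/home/shirdrn/cloud/programs/scala-2.10.3
export SPARK_MASTER_IP=m1

# conf/slaves (one worker host per line)
s1
s2
s3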

Starting the Spark Cluster

We will use data stored on the HDFS cluster as the input for the computation, so the Hadoop cluster has to be installed, configured, and started successfully first; I am using Hadoop 1.2.1 here. Starting the Spark cluster is very simple, just run the following:

cd /home/shirdrn/cloud/programs/spark-0.9.0-incubating-bin-hadoop1/
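For Spark 0.9's standalone mode, the cluster is normally brought up on the master node with the bundled start script; a sketch, assuming the default sbin layout of the 0.9.0 distribution:

sbin/start-all.sh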

As you can see, a process named Master has been started on m1 and a process named Worker has been started on s1, as shown below; I also have the Hadoop cluster running here.
On the master node m1:

54968 SecondaryNameNode
The Master and Worker startup logs can be followed with:

tail -100f $SPARK_HOME/logs/spark-shirdrn-org.apache.spark.deploy.master.Master-1-m1.out
tail -100f $SPARK_HOME/logs/spark-shirdrn-org.apache.spark.deploy.worker.Worker-1-s1.out
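Besides jps and the log files, the standalone Master also serves a status page listing the registered workers; assuming the default web UI port (the original text does not mention the web UI), it would be reachable at a URL like:

http://m1:8080/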
The input data for the verification job is a web-access log file stored on HDFS; its records look like the following:

27.159.254.192 - - [21/Feb/2014:11:40:46 +0800] "GET /archives/526.html HTTP/1.1" 200 12080 "http://shiyanjun.cn/archives/526.html" "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0"
120.43.4.206 - - [21/Feb/2014:10:37:37 +0800] "GET /archives/417.html HTTP/1.1" 200 11464 "http://shiyanjun.cn/archives/417.html/" "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0"

We count how frequently each IP address appears in this file to verify that the Spark cluster can compute correctly. The job reads the log file from HDFS, counts the IP-address frequencies, and finally saves the result to a specified directory in HDFS.
First, start the Spark shell, which is used to submit the computation:

bin/spark-shell
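If you want the shell to run against the standalone cluster rather than locally, Spark 0.9 lets you point it at the master via the MASTER environment variable; a sketch, assuming the master listens on the default port 7077 on m1:

MASTER=spark://m1:7077 bin/spark-shell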

In the Spark shell, code can only be written in Scala.
Then, to count the IP-address frequencies, run the following code in the Spark shell:

val file = sc.textFile("hdfs://m1:9000/user/shirdrn/wwwlog20140222.log")
val result = file.flatMap(line => line.split("\\s+.*")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
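The res14 value shown in the job log below is an Array of (IP, count) pairs, i.e. the result collected into the driver (the log mentions "collect at <console>:13"); that corresponds to a call like:

result.collect()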

The file hdfs://m1:9000/user/shirdrn/wwwlog20140222.log above is the input log file. The log messages produced during processing look like this:

14/03/06 21:59:22 INFO MemoryStore: ensureFreeSpace(784) called with curMem=43296, maxMem=311387750
14/03/06 21:59:22 INFO MemoryStore: Block broadcast_11 stored as values to memory (estimated size 784.0 B, free 296.9 MB)
14/03/06 21:59:22 INFO DAGScheduler: Got job 10 (collect at <console>:13) with 1 output partitions (allowLocal=false)
14/03/06 21:59:22 INFO DAGScheduler: Submitting Stage 21 (MapPartitionsRDD[84] at reduceByKey at <console>:13), which has no missing parents
14/03/06 21:59:22 INFO DAGScheduler: Submitting 1 missing tasks from Stage 21 (MapPartitionsRDD[84] at reduceByKey at <console>:13)
14/03/06 21:59:22 INFO TaskSetManager: Starting task 21.0:0 as TID 19 on executor localhost: localhost (PROCESS_LOCAL)
14/03/06 21:59:22 INFO HadoopRDD: Input split: hdfs://m1:9000/user/shirdrn/wwwlog20140222.log:0+4179514
14/03/06 21:59:23 INFO TaskSetManager: Finished TID 19 in 211 ms on localhost (progress: 0/1)
14/03/06 21:59:23 INFO DAGScheduler: Stage 21 (reduceByKey at <console>:13) finished in 0.211 s
14/03/06 21:59:23 INFO DAGScheduler: Submitting Stage 20 (MapPartitionsRDD[86] at reduceByKey at <console>:13), which is now runnable
14/03/06 21:59:23 INFO DAGScheduler: Submitting 1 missing tasks from Stage 20 (MapPartitionsRDD[86] at reduceByKey at <console>:13)
14/03/06 21:59:23 INFO TaskSetManager: Starting task 20.0:0 as TID 20 on executor localhost: localhost (PROCESS_LOCAL)
14/03/06 21:59:23 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Getting 1 non-zero-bytes blocks out of 1 blocks
14/03/06 21:59:23 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote gets in 1 ms
14/03/06 21:59:23 INFO TaskSetManager: Finished TID 20 in 17 ms on localhost (progress: 0/1)
14/03/06 21:59:23 INFO DAGScheduler: Stage 20 (collect at <console>:13) finished in 0.016 s
14/03/06 21:59:23 INFO SparkContext: Job finished: collect at <console>:13, took 0.242136929 s
res14: Array[(String, Int)] = Array((27.159.254.192,28), (120.43.9.81,40), (120.43.4.206,16), (120.37.242.176,56), (64.31.25.60,2), (27.153.161.9,32), (202.43.145.163,24), (61.187.102.6,1), (117.26.195.116,12), (27.153.186.194,64), (123.125.71.91,1), (110.85.106.105,64), (110.86.184.182,36), (27.150.247.36,52), (110.86.166.52,60), (175.98.162.2,20), (61.136.166.16,1), (46.105.105.217,1), (27.150.223.49,52), (112.5.252.6,20), (121.205.242.4,76), (183.61.174.211,3), (27.153.230.35,36), (112.111.172.96,40), (112.5.234.157,3), (144.76.95.232,7), (31.204.154.144,28), (123.125.71.22,1), (80.82.64.118,3), (27.153.248.188,160), (112.5.252.187,40), (221.219.105.71,4), (74.82.169.79,19), (117.26.253.195,32), (120.33.244.205,152), (110.86.165.8,84), (117.26.86.172,136), (27.153.233.101,8), (123.12...

As you can see, part of the result of the map and reduce computation is printed.
Finally, to save the result to HDFS, just enter the following:

result.saveAsTextFile("hdfs://m1:9000/user/shirdrn/wwwlog20140222.log.result")
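saveAsTextFile writes the result as part-* files under the given directory. If you want the counts ordered by frequency before saving, you could sort the pair RDD by swapping key and value; a sketch that is not part of the original article, with a hypothetical output path:

val sorted = result.map(_.swap).sortByKey(false).map(_.swap)
sorted.saveAsTextFile("hdfs://m1:9000/user/shirdrn/wwwlog20140222.log.result.sorted")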