Spark Cluster Setup


I. Prerequisites

1. Install a Hadoop cluster.

2. Install Scala.

3. Assume three nodes: master, slave1, and slave2.
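For example, the prerequisites can be quickly verified on each node (assuming java, scala, and hadoop are already on the PATH):

java -version
scala -version
hadoop version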

II. Spark Cluster Setup

1. Create a directory to hold the Spark archive.

2. Extract the archive in that directory.

3. Rename the extracted directory (here it is renamed to spark-2.20, the name used in the paths below).
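A minimal sketch of steps 1-3, assuming the Spark 2.2.0 tarball has already been downloaded (the archive file name below is an example; use the one you actually downloaded):

mkdir -p /opt/software/spark
cd /opt/software/spark
tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz    # example archive name
mv spark-2.2.0-bin-hadoop2.7 spark-2.20    # rename to the directory name used below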

4. Go into the spark-2.20/conf directory and rename spark-env.sh.template to spark-env.sh.
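For example (cp is used here instead of mv so the template is kept; the path follows the layout above):

cd /opt/software/spark/spark-2.20/conf
cp spark-env.sh.template spark-env.sh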

5. Add the following to spark-env.sh:

export JAVA_HOME=/opt/softWare/java/jdk1.8.0_141
export SCALA_HOME=/opt/software/scala/scala-2.12.4
export HADOOP_HOME=/opt/software/hadoop/hadoop-2.7.3
export HADOOP_CONF_DIR=/opt/softWare/hadoop/hadoop-2.7.3/etc/hadoop
export SPARK_MASTER_IP=192.168.XXX.XX
#export SPARK_WORKER_INSTANCES=1    # number of Worker instances to start on each slave
export SPARK_WORKER_MEMORY=1g
#export SPARK_DIST_CLASSPATH=$(/home/hadoop/hadoop-2.7.2/bin/hadoop classpath)

Note: the lines commented out with # are optional. SPARK_MASTER_IP should be set to the master node's actual IP address, and the paths should match where the JDK, Scala, and Hadoop are installed on your machines.

6. In the spark-2.20/conf directory, rename slaves.template to slaves and add the following:

master
slave1
slave2
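The rename can be done the same way as in step 4, for example:

cd /opt/software/spark/spark-2.20/conf
cp slaves.template slaves

Every host listed in slaves runs a Worker, so with the list above a Worker will also start on the master node.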

7. Grant execute permissions.

chmod 777 /opt/software/spark/spark-2.20/bin/*
chmod 777 /opt/software/spark/spark-2.20/sbin/*

8. Copy the installation to the other two machines with scp.

scp -r /opt/software/spark slave1:/opt/software/spark
scp -r /opt/software/spark slave2:/opt/software/spark
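Note: this assumes /opt/software already exists on slave1 and slave2 while /opt/software/spark does not; if /opt/software/spark already exists on the target machines, scp -r will copy into a nested spark/ subdirectory instead.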

9. Configure the environment variables in /etc/profile on every host:

#spark
export SPARK_HOME=/opt/software/spark/spark-2.20
PATH=$SPARK_HOME/bin:$PATH

Then run: source /etc/profile
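To check that the variables took effect, you can run, for example:

spark-submit --version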

10. Start Spark.

cd /opt/software/spark/spark-2.20/sbin/
./start-all.sh
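Note: start-all.sh only needs to be run on the master node; it starts the Master locally and then uses SSH to start a Worker on every host listed in slaves, so passwordless SSH from master to the workers (normally already set up for the Hadoop cluster) is assumed.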

11. Check the running processes with jps.
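Run jps on each node. Since master is also listed in slaves, you should roughly expect (process names only, alongside the Hadoop daemons already running):

master : Master, Worker
slave1 : Worker
slave2 : Worker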

12. Check the Spark web UI.

Open ip:8080 in a browser, where ip is the master node's IP address (8080 is the default port of the standalone Master web UI).

