Note: For this setup I have used Spark version 1.5.2.
By default, jobs run in Spark's local mode, where the driver runs along with an executor in the same Java process.
Spark can run over a variety of cluster managers to access
the machines in a cluster. If you only want to run Spark by itself on a set of
machines, the built-in Standalone mode is the easiest way to deploy it. Spark
can also run over two popular cluster managers: Hadoop YARN and Apache Mesos.
Standalone Cluster Manager
- Copy a compiled version of Spark to the same location on all your machines—for example, /usr/local/spark.
- Set up password-less SSH access from your master machine to the others. This requires having the same user account on all the machines, creating a private SSH key for it on the master via ssh-keygen, and adding this key to the .ssh/authorized_keys file of all the workers. If you have not set this up before, you can follow these commands:
# On master: run ssh-keygen, accepting the default options
$ ssh-keygen -t dsa
Enter file in which to save the key (/home/hduser/.ssh/id_dsa): [ENTER]
Enter passphrase (empty for no passphrase): [EMPTY]
Enter same passphrase again: [EMPTY]

# On workers: copy ~/.ssh/id_dsa.pub from your master to the worker, then use:
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ chmod 644 ~/.ssh/authorized_keys
- Edit the conf/slaves file on your master and fill in the workers' hostnames. In my case there are two machines: one runs both the master and a worker, and the other runs only a worker.
- To start the cluster, run sbin/start-all.sh on your master (it is important to run it there rather than on a worker). If everything started, you should get no prompts for a password, and the cluster manager’s web UI should appear at http://masternode:8080 and show all your workers.
- To stop the cluster, run sbin/stop-all.sh on your master node.
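For the two-machine layout described above, conf/slaves could look like the following; the hostnames are placeholders, not from the original setup:

```
# conf/slaves on the master -- one worker hostname per line
masternode
workernode
```

Lines beginning with # are ignored, and the master machine is listed here as well because it also runs a worker.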
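The key-generation and append steps above can be sketched as a single script. This is a local simulation only: it writes into a temporary directory rather than a real worker's ~/.ssh, and it uses an rsa key (the commands above use dsa) — adjust to your setup.

```shell
#!/bin/sh
# Local simulation of the password-less SSH setup (demo only:
# targets a temp directory, not a real worker's ~/.ssh).
DEMO=$(mktemp -d)

# Generate a key pair with no passphrase (-N "") at a chosen path (-f)
ssh-keygen -t rsa -N "" -f "$DEMO/id_rsa" > /dev/null

# This append is what each worker does with the master's public key
cat "$DEMO/id_rsa.pub" >> "$DEMO/authorized_keys"
chmod 644 "$DEMO/authorized_keys"

ls "$DEMO"
```

On a real cluster, the copy-and-append step can also be done in one go with ssh-copy-id, which ships with OpenSSH.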
** Check the web UI at http://masternode:8080/
To submit an application:
$ spark-submit --master spark://masternode:7077 yourapp
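A fuller submission might look like the following; the jar path, main class, and memory setting are placeholders for illustration, not from the original setup:

```
$ spark-submit \
    --master spark://masternode:7077 \
    --class com.example.MyApp \
    --executor-memory 2G \
    target/myapp-1.0.jar
```

The --master URL must match the one shown at the top of the cluster manager's web UI, otherwise the driver will fail to register with the master.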
Launch spark-shell:
$ spark-shell --master spark://masternode:7077
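Once the shell comes up against the cluster, a quick sanity check is to run a small job and confirm it completes (the session below is illustrative):

```
scala> sc.parallelize(1 to 1000).count()
res0: Long = 1000
```

While the job runs, the application should also appear under "Running Applications" in the web UI at http://masternode:8080/.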