http://blog.sequenceiq.com/blog/2015/01/09/spark-1-2-0-docker/
Apache Spark on Docker
This repository contains a Dockerfile to build a Docker image with Apache Spark. This Docker image depends on our previous Hadoop Docker image, available at the SequenceIQ GitHub page. The base Hadoop Docker image is also available as an official Docker image.
Pull the image from the Docker Repository
$ sudo docker pull sequenceiq/spark:1.5.1
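You can verify that the image arrived with a quick listing (a sketch, using standard Docker commands):
# list the locally available sequenceiq/spark images
$ sudo docker images sequenceiq/spark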
Building the image
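The build needs the Dockerfile in your current directory; assuming you are building from the SequenceIQ docker-spark repository (an assumption, adjust to your own checkout), fetch it first:
# clone the repository that contains the Dockerfile, then build from its root
$ git clone https://github.com/sequenceiq/docker-spark.git
$ cd docker-spark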
$ sudo docker build --rm -t sequenceiq/spark:1.5.1 .
Running the image
- if you are using boot2docker, make sure your VM has more than 2 GB of memory
- in your /etc/hosts file, add $(boot2docker ip) as host 'sandbox' to make it easier to access your sandbox UI (see the sketch after the run commands below)
- open the YARN UI ports when running the container
$ sudo docker run -it -p 8088:8088 -p 8042:8042 -h sandbox sequenceiq/spark:1.5.1 bash
or
$ sudo docker run -it -h sandbox sequenceiq/spark:1.5.1 bash
Once inside the container, the Spark installation lives in /usr/local/spark:
cd /usr/local/spark
To run the container as a daemon instead:
$ sudo docker run -d -h sandbox sequenceiq/spark:1.5.1 -d
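For the /etc/hosts entry mentioned above, a one-liner along these lines should do (a sketch; assumes boot2docker is installed and running on the host):
# append the boot2docker VM's IP as host 'sandbox'
$ echo "$(boot2docker ip) sandbox" | sudo tee -a /etc/hosts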
Versions
Hadoop 2.6.0 and Apache Spark v1.5.1 on CentOS
Testing
There are two deploy modes that can be used to launch Spark applications on YARN: yarn-client and yarn-cluster.
Disable logs
Just execute this command in the Spark directory:
cp conf/log4j.properties.template conf/log4j.properties
Edit log4j.properties and replace its first line:
log4j.rootCategory=INFO, console
with
log4j.rootCategory=WARN, console
or
log4j.rootCategory=ERROR, console
Save and restart your shell. This works for me with Spark 1.1.0 and Spark 1.5.1 on OS X.
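If you prefer to script the edit, a sed one-liner works too (a sketch; note that OS X sed needs -i '' instead of -i):
# lower the root log level from INFO to WARN in place
$ sed -i 's/^log4j.rootCategory=INFO, console/log4j.rootCategory=WARN, console/' conf/log4j.properties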
YARN-client mode
In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
# run the spark shell
spark-shell \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1
# execute the following command, which should return 1000
scala> sc.parallelize(1 to 1000).count()
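While the shell is running, you can confirm that YARN sees the application (a sketch, run from a second terminal inside the container):
# list the applications currently running on YARN
$ yarn application -list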
YARN-cluster mode
In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application.
Estimating Pi (yarn-cluster mode):
# execute the following command, which should write "Pi is roughly 3.1418" into the logs
# note: you must specify the --files argument in cluster mode to enable metrics
spark-submit \
--class org.apache.spark.examples.SparkPi \
--files $SPARK_HOME/conf/metrics.properties \
--master yarn-cluster \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.5.1-hadoop2.6.0.jar
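In yarn-cluster mode the driver output lands in the YARN logs rather than on your screen; something like this retrieves it (the application ID below is a placeholder, substitute the one printed by spark-submit):
# fetch the aggregated logs for the finished application and look for the result
$ yarn logs -applicationId application_1452000000000_0001 | grep "Pi is roughly"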
Estimating Pi (yarn-client mode):
# execute the following command, which should print "Pi is roughly 3.1418" to the screen
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.5.1-hadoop2.6.0.jar