Wednesday 11 November 2015

vagrant-hadoop-spark-cluster

https://github.com/dnafrance/vagrant-hadoop-spark-cluster

1. Introduction

Vagrant project to spin up a cluster of four 32-bit CentOS 6.5 Linux virtual machines running Hadoop v2.6.0 and Spark v1.1.1.

Ideal as a development cluster on a laptop with at least 4GB of memory to spare.
  1. node1 : HDFS NameNode + Spark Master
  2. node2 : YARN ResourceManager + JobHistoryServer + ProxyServer
  3. node3 : HDFS DataNode + YARN NodeManager + Spark Slave
  4. node4 : HDFS DataNode + YARN NodeManager + Spark Slave

2. Prerequisites and Gotchas to be aware of

  1. At least 1GB of memory for each VM node. The default script creates 4 nodes, so you need 4GB for the nodes, in addition to the memory your host machine itself needs.
  2. Vagrant 1.7 or higher, VirtualBox 4.3.2 or higher
  3. Preserve the Unix/OSX end-of-line (EOL) characters while cloning this project; scripts will fail with Windows EOL characters.
  4. Project is tested on Ubuntu 32-bit 14.04 LTS host OS; not tested with VMware provider for Vagrant.
  5. The Vagrant box is downloaded to the ~/.vagrant.d/boxes directory. On Windows, this is C:/Users/{your-username}/.vagrant.d/boxes.

3. Getting Started

  1. Download and install VirtualBox
  2. Download and install Vagrant.
  3. Run vagrant box add centos65 http://files.brianbirkinbine.com/vagrant-centos-65-i386-minimal.box
  4. Clone this project with Git, and change directory (cd) into the project directory.
  5. Download Hadoop 2.6 into the /resources directory
  6. Download Spark 1.1.1 into the /resources directory
  7. Download Java 1.8 into the /resources directory
  8. Run vagrant up to create the VMs.
  9. Run vagrant ssh node1 to get into the first VM (likewise node2, node3, node4).
  10. Run vagrant destroy when you want to destroy and get rid of the VMs.
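Putting the steps together, a minimal bring-up session might look like this (the archive names under resources/ are assumptions based on the default versions; use whatever you actually downloaded):
$ vagrant box add centos65 http://files.brianbirkinbine.com/vagrant-centos-65-i386-minimal.box
$ git clone https://github.com/dnafrance/vagrant-hadoop-spark-cluster.git
$ cd vagrant-hadoop-spark-cluster
# place the downloaded archives into ./resources, e.g.
#   resources/hadoop-2.6.0.tar.gz
#   resources/spark-1.1.1-bin-hadoop2.4.tgz
#   resources/jdk-8u25-linux-i586.tar.gz
$ vagrant up
$ vagrant ssh node1
$ vagrant destroy    # when you are done with the cluster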

4. Modifying scripts for adapting to your environment

You need to modify the scripts to adapt the VM setup to your environment.
  1. List of available Vagrant boxes
  2. ./Vagrantfile
    To add/remove slaves, change the number of nodes:
    line 5: numNodes = 4
    To modify VM memory change the following line:
    line 13: v.customize ["modifyvm", :id, "--memory", "1024"]
  3. /scripts/common.sh
    To use a different version of Java, change the following line depending on the version you downloaded to /resources directory.
    line 4: JAVA_ARCHIVE=jdk-8u25-linux-i586.tar.gz
    To use a different version of Hadoop you've already downloaded to /resources directory, change the following line:
    line 8: HADOOP_VERSION=hadoop-2.6.0
    To use a different version of Hadoop to be downloaded, change the remote URL in the following line:
    line 10: HADOOP_MIRROR_DOWNLOAD=http://apache.crihan.fr/dist/hadoop/common/stable/hadoop-2.6.0.tar.gz
    To use a different version of Spark, change the following lines:
    line 13: SPARK_VERSION=spark-1.1.1
    line 14: SPARK_ARCHIVE=$SPARK_VERSION-bin-hadoop2.4.tgz
    line 15: SPARK_MIRROR_DOWNLOAD=../resources/spark-1.1.1-bin-hadoop2.4.tgz
  4. /scripts/setup-java.sh
    To install Java from the archive downloaded locally into the /resources directory, if different from the default version (1.8.0_25), change the version in the following line:
    line 18: ln -s /usr/local/jdk1.8.0_25 /usr/local/java
    To modify the version of Java to be installed from a remote location on the web, change the version in the following line:
    line 12: yum install -y jdk-8u25-linux-i586
  5. /scripts/setup-centos-ssh.sh
    To modify the version of sshpass to use, change the following lines within the function installSSHPass():
    line 23: wget http://pkgs.repoforge.org/sshpass/sshpass-1.05-1.el6.rf.i686.rpm
    line 24: rpm -ivh sshpass-1.05-1.el6.rf.i686.rpm
  6. /scripts/setup-spark.sh
    To modify the version of Spark to be used, if different from the default version (built for Hadoop 2.4), change the version suffix in the following line:
    line 32: ln -s /usr/local/$SPARK_VERSION-bin-hadoop2.4 /usr/local/spark
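For example, to point the cluster at a different Spark build (a hypothetical spark-1.2.0 built for Hadoop 2.4, purely for illustration), the edits in common.sh and setup-spark.sh must stay consistent with each other:
# /scripts/common.sh
SPARK_VERSION=spark-1.2.0
SPARK_ARCHIVE=$SPARK_VERSION-bin-hadoop2.4.tgz
SPARK_MIRROR_DOWNLOAD=../resources/spark-1.2.0-bin-hadoop2.4.tgz
# /scripts/setup-spark.sh
ln -s /usr/local/$SPARK_VERSION-bin-hadoop2.4 /usr/local/spark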

5. Post Provisioning

After you have provisioned the cluster, you need to run a few commands to initialize your Hadoop cluster. SSH into node1 using vagrant ssh node1. The commands below require root permissions: change to root using sudo su, or create a new user and grant it permissions if you want non-root access (in that case you'll need to do this on all the VMs).
Issue the following command.
  1. $HADOOP_PREFIX/bin/hdfs namenode -format myhadoop

Start Hadoop Daemons (HDFS + YARN)

SSH into node1 and issue the following commands to start HDFS.
  1. $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
  2. $HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
SSH into node2 and issue the following commands to start YARN.
  1. $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
  2. $HADOOP_YARN_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR start nodemanager
  3. $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start proxyserver --config $HADOOP_CONF_DIR
  4. $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR
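To verify that the daemons came up, run jps (bundled with the JDK) on each node; the names below are the standard Hadoop 2.6 process names you should expect to see:
$ jps
# node1: NameNode
# node2: ResourceManager, JobHistoryServer
# node3/node4: DataNode, NodeManager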

Test YARN

Run the following command to make sure you can run a MapReduce job.
yarn jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 100

Start Spark in Standalone Mode

SSH into node1 and issue the following command.
  1. $SPARK_HOME/sbin/start-all.sh

Test Spark on YARN

You can test if Spark can run on YARN by issuing the following command. Try NOT to run this command on the slave nodes.
$SPARK_HOME/bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn-cluster \
    --num-executors 10 \
    --executor-cores 2 \
    lib/spark-examples*.jar \
    100
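In yarn-cluster mode the driver's output goes to the YARN container logs rather than your terminal, so to see the Pi estimate you can pull the application logs (assuming log aggregation is enabled; the application id is a placeholder you copy from the spark-submit output or the ResourceManager UI):
yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX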

Test Spark using Shell

Start the Spark shell using the following command. Try NOT to run this command on the slave nodes.
$SPARK_HOME/bin/spark-shell --master spark://node1:7077
Then go here https://spark.apache.org/docs/latest/quick-start.html to start the tutorial. Most likely, you will have to load data into HDFS to make the tutorial work (in a cluster, Spark can only read a local file if it exists at the same path on every worker node, so HDFS is the practical choice).
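For instance, to make the quick-start examples work against this cluster, first copy a file into HDFS from node1 (the HDFS path here is an assumption; use whatever location suits you), then point sc.textFile(...) in the shell at the HDFS path instead of a local one:
$HADOOP_PREFIX/bin/hdfs dfs -mkdir -p /user/vagrant
$HADOOP_PREFIX/bin/hdfs dfs -put /usr/local/spark/README.md /user/vagrant/README.md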

6. Web UI

You can check the following URLs to monitor the Hadoop daemons (the ports are the stock Hadoop 2.6 and Spark standalone defaults; adjust if you changed them):
  1. NameNode : http://node1:50070
  2. ResourceManager : http://node2:8088
  3. JobHistory : http://node2:19888
  4. Spark : http://node1:8080

7. References

This project was put together with great pointers from all around the internet. All references are made inside the files themselves. Primarily, this project is forked from Jee Vang's Vagrant project.

Friday 6 November 2015

Apache Spark on Docker

https://github.com/sequenceiq/docker-spark
http://blog.sequenceiq.com/blog/2015/01/09/spark-1-2-0-docker/

Apache Spark on Docker


This repository contains a Docker file to build a Docker image with Apache Spark. This Docker image depends on our previous Hadoop Docker image, available at the SequenceIQ GitHub page. The base Hadoop Docker image is also available as an official Docker image.

Pull the image from Docker Repository

$ sudo docker pull sequenceiq/spark:1.5.1

Building the image

$ sudo docker build --rm -t sequenceiq/spark:1.5.1 .

Running the image

  • if using boot2docker make sure your VM has more than 2GB memory
  • in your /etc/hosts file add $(boot2docker ip) as host 'sandbox' to make it easier to access your sandbox UI
  • open yarn UI ports when running container
$ sudo docker run -it -p 8088:8088 -p 8042:8042 -h sandbox sequenceiq/spark:1.5.1 bash
 
or
 
$ sudo docker run -it -h sandbox sequenceiq/spark:1.5.1 bash
$ cd /usr/local/spark
  
or
$ sudo docker run -d -h sandbox sequenceiq/spark:1.5.1 -d
(the first -d detaches the container; the trailing -d is an argument to the image's bootstrap script, telling it to keep running as a daemon)
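If you started the container daemonized as above, you can still get a shell in it later with docker exec (the container id is a placeholder; read the real one from docker ps):
$ sudo docker ps
$ sudo docker exec -it <container-id> bash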

Versions

Hadoop 2.6.0 and Apache Spark v1.5.1 on CentOS

Testing

There are two deploy modes that can be used to launch Spark applications on YARN.

Disable logs

Just execute this command in the spark directory:
 
cp conf/log4j.properties.template conf/log4j.properties

Edit log4j.properties and replace the first line:
 
log4j.rootCategory=INFO, console

with

log4j.rootCategory=WARN, console

or

log4j.rootCategory=ERROR, console

Save and restart your shell. This works for me with Spark 1.1.0 and Spark 1.5.1 on OS X.
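If you prefer to script the edit, a one-line sed achieves the same replacement (assuming the stock template's INFO line; on OS X use sed -i '' instead of sed -i):
sed -i 's/rootCategory=INFO/rootCategory=WARN/' conf/log4j.properties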

YARN-client mode

In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
 
# run the spark shell
spark-shell \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1

# execute the following command, which should return 1000
scala> sc.parallelize(1 to 1000).count()

YARN-cluster mode

In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application.

Estimating Pi (yarn-cluster mode):
 
# execute the following command, which should write "Pi is roughly 3.1418" into the logs
# note you must specify --files argument in cluster mode to enable metrics
spark-submit \
--class org.apache.spark.examples.SparkPi \
--files $SPARK_HOME/conf/metrics.properties \
--master yarn-cluster \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.5.1-hadoop2.6.0.jar

Estimating Pi (yarn-client mode):
 
# execute the following command, which should print "Pi is roughly 3.1418" to the screen
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.5.1-hadoop2.6.0.jar
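Whichever mode you use, you can check on submitted applications from inside the container with the YARN CLI (the application id is a placeholder; copy the real one from the listing):
yarn application -list
yarn application -status application_XXXXXXXXXXXXX_XXXX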

Install Docker on Ubuntu 14.04 LTS

Introduction

Docker is a container-based software framework for automating deployment of applications. “Containers” are encapsulated, lightweight, and portable application modules.

Pre-Flight Check

  • These instructions are intended for installing Docker on Ubuntu 14.04 LTS.
  • I’ll be working from a Liquid Web Core Managed Ubuntu 14.04 LTS server, and I’ll be logged in as root.

Step 1: Installation of Docker

First, you’ll follow a simple best practice: ensuring the list of available packages is up to date before installing anything new.
apt-get update

Let’s install Docker by installing the docker.io package:
apt-get -y install docker.io

Link and fix paths with the following two commands:
ln -sf /usr/bin/docker.io /usr/local/bin/docker
sed -i '$acomplete -F _docker docker' /etc/bash_completion.d/docker


Finally, and optionally, let’s configure Docker to start when the server boots:
update-rc.d docker defaults
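As a quick sanity check that the client and daemon are working (exact output will vary with the package version), run:
docker version
docker info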

Step 2: Download a Docker Container

Let’s begin using Docker! Download the ubuntu Docker image:
docker pull ubuntu

Step 3: Run a Docker Container

Now, to set up a basic ubuntu container with a bash shell, we just run one command. docker run will run a command in a new container, -i attaches stdin and stdout, -t allocates a tty, and we’re using the standard ubuntu container.
docker run -i -t ubuntu /bin/bash

That’s it! You’re now using a bash shell inside an ubuntu Docker container.
To disconnect, or detach, from the shell without exiting it, use the escape sequence Ctrl-p + Ctrl-q.
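To get back into a detached container, list the running containers and attach to yours (the container id is a placeholder; read the real one from the docker ps output):
docker ps
docker attach <container-id>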

There are many community containers already available, which can be found through a search. In the command below I am searching for the keyword debian:
docker search debian

Tuesday 20 October 2015

Install Deep Learning tools in Ubuntu 14.04

1. List of available Deep Learning tools:
http://deeplearning.net/software_links/

2. Install Theano - Python-based DL
http://deeplearning.net/software/theano
http://deeplearning.net/tutorial/

$ sudo apt-get install python-numpy python-scipy python-dev python-pip python-nose g++ libopenblas-dev git

$ sudo pip install Theano
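A minimal check that the install worked (theano.__version__ is part of Theano's public API):

$ python -c "import theano; print(theano.__version__)"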


3. Install Matlab toolbox
https://github.com/rasmusbergpalm/DeepLearnToolbox




Thursday 3 September 2015

Most useful Linux commands

1. ls Command Examples
This is the most frequently used Linux command; it lists the contents of the current directory.
ls without any option lists the current working directory:
ls
By default ls won't show hidden files; to view the hidden files type:
ls -a
To list files in human-readable format use the option -lh:
ls -lh
To list files ordered by last modified time use the option -ltr:
ls -ltr
2. cd Command Examples
To change the directory in Linux we use the cd command:
cd /path/to/directory
To go one level up use:
cd ..
3. man Command Examples
Sometimes you remember the command but forget the options available with it; the man command helps you here. It shows the manual of a command and lists out all the options available with that command. Let's check the ls command manual:
man ls
It will show you all the options and their uses; to exit from man press q.
4. cp Command Examples
The cp command is used to copy one file to another:
cp file1 file2
Using cp with the -p option preserves the mode, ownership and timestamps; using cp with the -i option prompts before overwriting the file:
cp -p file1 file2
cp -i file1 file2
5. mv Command Example
The mv command is used to move a file from one location to another:
mv file1 /path/to/destination/
It also acts as a rename command; to rename, type:
mv file1 file2
The above command renames file1 to file2.
6. mkdir Command Example
In Linux/Unix we use the mkdir command to create a new directory, but to do that you must have write permission on the parent directory:
mkdir newdir
If you need to create nested directories, use mkdir with the -p option:
mkdir -p dir1/dir2/dir3
7. chmod Command Example
The chmod command is used to set permissions on files or directories, or to alter previously set permissions. Permissions can be set using symbolic or octal codes.
To add read, write and execute permission for the owner of a file:
chmod u+rwx file1
The same in octal form:
chmod 700 file1
To add read and write permission on a directory:
chmod u+rw dir1
8. date Command Example
date returns the current date and time of the system:
date
We can also format the output of the date command, like this:
date +"%d-%m-%Y %H:%M"
9. file Command Example
Sometimes you need to know the type of a file, or what kind of data it contains; for that you can use the file command:
file file1
10. tar Command Example
To archive files you can use the tar command:
tar -cf demo.tar temp
The above command creates an archive demo.tar with the contents of the temp directory. To see the contents of a tar archive use the -tf option:
tar -tf demo.tar
To extract the contents of an archived file use the xf option; here x means extract and f means file. It will copy the contents of the archive into the current directory:
tar -xf demo.tar
11. grep Command Example
Sometimes you need to search for a pattern because you only partially remember what you are looking for; in that case you can use the grep command:
grep pattern file1
In the above example the matching text is highlighted in red. With grep you can also search multiple files for the same pattern:
grep pattern file1 file2
If you need to search for the word codingbyte in all files whose names start with a in the current directory, type:
grep codingbyte a*
12. ssh Command Example
The ssh command is used to log in to a remote host securely. It provides encrypted communication between client and host:
ssh username@remotehost
13. rmdir Command Example
To remove a directory you can use the rmdir command; the directory you are removing must be empty:
rmdir dir1
If you need to delete a directory which is not empty, rmdir won't work; use rm with the -r option instead:
rm -r dir1
14. rm Command Example
The rm command is used to remove a file or directory. To remove a file you must have write permission on the directory where the file resides:
rm file1
If you want a confirmation before deleting, use the -i option:
rm -i file1
To remove a directory and all of its contents, use the -r option:
rm -r temp
The above command first recursively deletes all the files from the directory and its subdirectories, and then deletes the temp directory itself.
15. pwd Command Example
The pwd command shows the current working directory of the logged-in user:
pwd
16. ps Command Example
The ps command is used to check the currently running processes on the system. It lists the process id and other details of each process:
ps
To see fuller detail of the processes, use the f and u options with ps:
ps fu
17. passwd Command Example
If you need to change your password you can use the passwd command:
passwd
18. more Command Example
Suppose you want to open a big file and read it page by page; you cannot do this with the cat command. The more command allows you to read a big file one page at a time. To quit before reaching the end of the file, press q:
more file1
19. kill Command Example
To kill a running process you can use the kill command. It requires a process id, which you can get from the ps command:
kill 1234
20. lpr Command Example
To send the contents of a file to a printer, use the lpr command:
lpr -P printer1 file1
Here printer1 is the name of the printer and file1 is the file you are going to print.
21. gzip Command Example
The gzip command is used to create a .gz compressed file:
gzip file1
To uncompress the file created above, use the command below:
gunzip file1.gz
22. unzip Command Example
To uncompress a .zip file use the unzip command:
unzip file1.zip
It is also possible to see the contents of a zip file without uncompressing it, using the -l option:
unzip -l file1.zip
23. shutdown Command Example
To turn off your system, or to schedule the shutdown, you can use the shutdown command:
shutdown -h now
The above command turns off the system instantly. To schedule the shutdown after 20 minutes, use the command below:
shutdown -h +20
To reboot the system you can use:
shutdown -r now
24. free Command Example
To check memory usage you can use the free command. It shows the free memory, used memory and swap memory:
free
You can change the units of the display using -b, -k, -m or -g:
free -m
25. top Command Example
To see the top processes on the system use the top command. It shows the list sorted by CPU usage:
top
26. df Command Example
The df command is used to see the disk usage of each filesystem:
df
We can format the output using the -h option, which shows it in human-readable form:
df -h
27. whereis Command Example
To find the location of a specific Linux command (its binary, source and man pages), you can use the whereis command:
whereis ls
28. whatis Command Example
As the name suggests, the whatis command shows a one-line description of a command:
whatis ls
29. tail Command Example
Suppose you want to see the last few lines of a file; for that you can use the tail command. By default it shows 10 lines:
tail file1
You can also define how many lines you want to see using the -n option. The command below shows the last N lines of file1:
tail -n N file1
30. wget Command Example
wget is used to download software and other files from the internet. In my last post, How To Install WordPress on Ubuntu VPS, I used this command to download the WordPress setup file from the WordPress server:
wget https://wordpress.org/latest.tar.gz
I have tried my best to list the 30 most frequently used Linux commands and give examples for each. Please give your suggestions in the comments, and also share the post.