After building a server, I performed the following steps to set up the environment.
1. Create a USB Ubuntu Installer (on Mac)
Create a bootable USB drive on macOS that can be used to install Ubuntu 16.04.
cd ~/Downloads
# convert the ISO to a writable image (hdiutil appends .dmg, so rename it back)
hdiutil convert -format UDRW -o ubuntu.iso ubuntu-16.04.3-desktop-amd64.iso
mv ubuntu.iso.dmg ubuntu.iso
diskutil list
# plug in the USB, list again, and figure out its disk ID
diskutil list
diskutil unmountDisk /dev/disk3 # disk3 is my USB
sudo dd if=./ubuntu.iso of=/dev/rdisk3 bs=1m # disk3 is my USB
# eject the USB when dd finishes
diskutil eject /dev/disk3
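The steps above can be wrapped in a small dry-run helper that only prints the commands to run (the ISO name and disk ID are placeholders; always double-check the disk ID with `diskutil list`, since dd overwrites whatever it points at):

```shell
# Print (but do not run) the commands for writing the installer image.
# ISO and DISK are placeholders -- verify DISK with `diskutil list` first!
make_usb_commands() {
  local iso="$1" disk="$2"
  echo "diskutil unmountDisk /dev/${disk}"
  echo "sudo dd if=${iso} of=/dev/r${disk} bs=1m"
}
make_usb_commands ubuntu.iso disk3
```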
2. Install CUDA
Download the Nvidia display driver (NVIDIA-Linux-*.run) and the CUDA “runfile” installer from Nvidia.
# Install SSH server
sudo apt-get install openssh-server
# Turn off the lightdm
sudo service lightdm stop
sudo apt-get install vim
sudo apt-get install dkms build-essential linux-headers-generic
sudo vi /etc/modprobe.d/blacklist.conf
# insert the following lines at the end:
blacklist nouveau
#blacklist lbm-nouveau
#options nouveau modeset=0
#alias nouveau off
#alias lbm-nouveau off
#> echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
sudo update-initramfs -u
sudo reboot
sudo service lightdm stop
sudo ./NVIDIA-Linux-x86_64-390.48.run --no-x-check --no-nouveau-check --no-opengl-files
sudo reboot
sudo ./cuda_9.0.176_384.81_linux.run --no-opengl-libs
# This time, do not install the driver, and do not select the OpenGL or X configuration options
vi ~/.bashrc
# insert the following lines at the end (adjust cuda-X.Y to the version you
# actually installed; the runfile above installs to /usr/local/cuda-9.0):
export LD_LIBRARY_PATH="/usr/local/cuda-8.0/lib64/"
export CUDA_BIN=/usr/local/cuda-8.0/bin
export CUDA_LIB=/usr/local/cuda-8.0/lib64
export PATH=${CUDA_BIN}:$PATH
# > sudo reboot
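One caveat about the `export PATH=...` line above: it prepends another copy of the CUDA bin directory every time `.bashrc` is sourced. A small guard (a hypothetical helper, not part of the original setup) keeps PATH clean:

```shell
# prepend a directory to PATH only if it is not already present
prepend_path() {
  case ":$PATH:" in
    *":$1:"*) ;;               # already on PATH, do nothing
    *) PATH="$1:$PATH" ;;
  esac
}
prepend_path /usr/local/cuda-8.0/bin
export PATH
```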
3. Enable Nvidia Driver
After reboot, open “Additional Drivers” and choose “Using Nvidia binary driver”.
4. Install cuDNN
Download cuDNN 6.0 (the build matching CUDA 8.0).
tar -zxvf cudnn-8.0-linux-x64-v6.0.tgz
cd cuda
sudo cp lib64/lib* /usr/local/cuda-8.0/lib64/
sudo cp include/cudnn.h /usr/local/cuda-8.0/include/
cd /usr/local/cuda-8.0/lib64
# update links
sudo rm libcudnn.so libcudnn.so.6
sudo ln -s libcudnn.so.6.0.21 libcudnn.so.6
sudo ln -s libcudnn.so.6 libcudnn.so
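To confirm the copy worked, the installed version can be read back out of the header; cuDNN records it in the `CUDNN_MAJOR`/`CUDNN_MINOR`/`CUDNN_PATCHLEVEL` defines. A small sketch (the header path matches the copy destination above; adjust if yours differs):

```shell
# Report the cuDNN version recorded in a cudnn.h header.
cudnn_version() {
  awk '/#define CUDNN_MAJOR/ {ma=$3}
       /#define CUDNN_MINOR/ {mi=$3}
       /#define CUDNN_PATCHLEVEL/ {pl=$3}
       END {print ma "." mi "." pl}' "$1"
}
# usage: cudnn_version /usr/local/cuda-8.0/include/cudnn.h   (prints e.g. 6.0.21)
```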
5. Install Keras
# Step 1: Download Anaconda
# On Linux
wget https://repo.continuum.io/archive/Anaconda3-4.4.0-Linux-x86_64.sh
# On MacOS
curl -O https://repo.continuum.io/archive/Anaconda3-4.4.0-MacOSX-x86_64.sh
# Step 2: Install Anaconda (use all default settings; run the installer matching your OS)
bash Anaconda3-4.4.0-MacOSX-x86_64.sh
# Step 3: Restart your terminal
# Step 4: Create a virtual environment (so that it will not interfere with existing settings)
conda create -n keras python=3.5
# Step 5: Install Tensorflow
source activate keras
# GPU version on Linux
pip install tensorflow-gpu
# CPU version on Mac
pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0-py3-none-any.whl
# If the install fails with an error, try running the command again.
# Step 6: Install Keras
# on macOS
brew install graphviz
# On Linux
sudo apt-get install python-pydot python-pydot-ng graphviz
pip install keras
# Step 7: Install other Dependencies
conda install HDF5
conda install h5py
pip install pydot
pip install graphviz
pip install pillow
pip install opencv-python
# conda install -c https://conda.anaconda.org/menpo opencv3
# for visualizing the model
pip install quiver_engine
pip install keras-vis
# Step 8: Test
python
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import plot_model
model = Sequential()
model.add(Dense(10, input_shape=(700, 1)))
model.summary()
plot_model(model, to_file='abc.pdf', show_shapes=True)
exit()
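For the model above, `model.summary()` should report 20 trainable parameters: `Dense(10)` acts on the last axis of the `(700, 1)` input, so there is a 1×10 weight matrix plus 10 biases, and the output shape is `(None, 700, 10)`. The count follows directly:

```python
def dense_param_count(input_dim, units):
    """Parameters of a Dense layer: one weight per (input, unit) pair plus one bias per unit."""
    return input_dim * units + units

# Dense(10) on input shape (700, 1) acts on the last axis (size 1):
print(dense_param_count(1, 10))  # -> 20
```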
6. Install other dependencies
# moviepy
pip install moviepy
# If there is an ffmpeg error, please add the following lines at the top of your Python file:
import imageio
imageio.plugins.ffmpeg.download()
# ERROR
# This error can be due to the fact that ImageMagick is not installed on your computer.
# on ubuntu
sudo apt-get install libmagickwand-dev
# on mac
brew install imagemagick
# on ubuntu, to ensure TextClip works, edit the ImageMagick policy file
sudo vi /etc/ImageMagick-6/policy.xml
# and comment out the line: <policy domain="path" rights="none" pattern="@*" />
# BeautifulSoup4 | bs4
pip install BeautifulSoup4
7. Install Sublime
# On Linux (free to evaluate; continued use requires a paid license)
wget -qO - https://download.sublimetext.com/sublimehq-pub.gpg | sudo apt-key add -
echo "deb https://download.sublimetext.com/ apt/stable/" | sudo tee /etc/apt/sources.list.d/sublime-text.list
# OR
echo "deb https://download.sublimetext.com/ apt/dev/" | sudo tee /etc/apt/sources.list.d/sublime-text.list
sudo apt-get update
sudo apt-get install sublime-text
8. Setup LogmeIn
wget http://www.vpn.net/installers/logmein-hamachi_2.1.0.165-1_amd64.deb
sudo dpkg -i logmein-hamachi_2.1.0.165-1_amd64.deb
sudo hamachi login
sudo hamachi attach ***@gmail.com
sudo hamachi create <server_name> <password>
9. Setup OpenAI
conda create -n openai python=3.5
source activate openai
pip install gym
# if you want to use Breakout-ram-v0, please install atari
pip install gym[atari]
# Install tensorflow, keras, and keras-rl
pip install tensorflow
pip install keras
pip install keras-rl
Then, you can try the following code:
import numpy as np
import gym
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam
from rl.agents import DDPGAgent, SARSAAgent, DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory
ENV_NAME = 'Breakout-ram-v0'
# Get the environment and extract the number of actions.
env = gym.make(ENV_NAME)
np.random.seed(123)
env.seed(123)
nb_actions = env.action_space.n
# Next, we build a very simple model.
model = Sequential()
model.add(Flatten(input_shape=(1,) + env.observation_space.shape))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(nb_actions))
model.add(Activation('linear'))
print(model.summary())
# Finally, we configure and compile our agent. You can use every built-in Keras optimizer and even the metrics!
memory = SequentialMemory(limit=50000, window_length=1)
policy = BoltzmannQPolicy()
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=10, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
# Okay, now it's time to learn something! We visualize the training here for show, but this slows down training quite a lot. You can always safely abort the training prematurely using Ctrl + C.
dqn.fit(env, nb_steps=50000, visualize=True, verbose=2)
# After training is done, we save the final weights.
dqn.save_weights('dqn_{}_weights.h5f'.format(ENV_NAME), overwrite=True)
# Finally, evaluate our algorithm for 5 episodes.
dqn.test(env, nb_episodes=5, visualize=True)
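The `SequentialMemory` used above is, conceptually, a bounded experience-replay buffer: it keeps the most recent `limit` transitions and returns uniform random mini-batches for training. A stdlib-only sketch of the idea (simplified; keras-rl's version also handles window stacking, which this omits):

```python
import random
from collections import deque

class ReplayMemory:
    """Keep the last `limit` transitions; sample uniform random mini-batches."""
    def __init__(self, limit):
        self.buffer = deque(maxlen=limit)  # oldest entries are dropped automatically

    def append(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # never ask for more samples than the buffer holds
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```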
As a shortcut, the Python dependencies above can be installed in one go:
pip install tensorflow keras
conda install HDF5 h5py
pip install pydot graphviz pillow quiver_engine keras-vis opencv-python moviepy BeautifulSoup4
pip install mxnet-cu80mkl
Automatically mount HD/SSD
- Type the following command to get the UUID of your HD/SSD:
sudo blkid
The output looks similar to the following:
/dev/sda2: LABEL="Data" UUID="1438008138******" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="39e88529-****-****-****-************"
/dev/sdb2: LABEL="BigData" UUID="3868FD7668******" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="dec90a31-****-****-****-************"
/dev/sdc1: UUID="4a76c292-3384-4480-9672-4c8ab9******" TYPE="swap" PARTUUID="7bd24d17-****-****-****-************"
/dev/sdc2: UUID="447c124c-baf5-4a24-a2c7-ac413f******" TYPE="ext4" PARTUUID="a96835ae-****-****-****-************"
/dev/sdc3: UUID="88bd1683-c1fd-4097-b3c9-71998d******" TYPE="ext4" PARTUUID="7184ca16-****-****-****-************"
/dev/sda1: PARTLABEL="Microsoft reserved partition" PARTUUID="d52eb0c9-****-****-****-************"
/dev/sdb1: PARTLABEL="Microsoft reserved partition" PARTUUID="bc7c1633-****-****-****-************"
- Add the corresponding info to /etc/fstab:
sudo vi /etc/fstab
# add the following lines to the end of the file
UUID=1438008138****** /data/Data ntfs defaults 0 2
UUID=3868FD7668****** /data/BigData ntfs defaults 0 2
- These two HDs/SSDs will then be mounted automatically at the next boot; you can check the entries without rebooting by running `sudo mount -a`.
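For reference, each /etc/fstab entry above has six fields: the device (identified by UUID), the mount point, the filesystem type, the mount options, the dump flag, and the fsck pass number. A tiny hypothetical helper that assembles a line in the same shape (the UUID below is a placeholder):

```shell
# assemble an fstab entry: device UUID, mount point, filesystem type;
# "defaults 0 2" matches the options used above (no dump, fsck after the root fs)
fstab_line() {
  printf 'UUID=%s %s %s defaults 0 2\n' "$1" "$2" "$3"
}
# usage: fstab_line "1438008138abcdef" /data/Data ntfs
```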
Use remote jupyter on local
# on the remote machine: start jupyter without opening a browser
jupyter notebook --no-browser --port=8889
# on the local machine: forward local port 8888 to the remote notebook port 8889
ssh -N -L localhost:8888:localhost:8889 user@remote_host
# then open http://localhost:8888 in the local browser
# list ssh processes (e.g. to find a forwarding session to kill)
ps aux | grep ssh