Hanjie's Blog

An alpaca with dreams

DC Barrel jack

Place one of the jumpers on J48. J48 is located between the barrel jack connector and the camera connector.

The Jetson has two power profiles, called modes: mode 0 is 10 W and mode 1 is 5 W. To switch to 5-watt mode:

$ sudo nvpmodel -m 1

To switch back to 10-watt mode:

$ sudo nvpmodel -m 0

The default image on the Jetson Nano boots in 10-watt mode. There’s another utility named jetson_clocks with which you may want to become familiar.
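Which number maps to which power budget is easy to forget; a tiny helper of my own (not part of nvpmodel, purely illustrative) makes the mapping explicit:

```python
# Hypothetical helper: map a power budget in watts to the nvpmodel
# mode ID used on the Jetson Nano (mode 0 = 10 W, mode 1 = 5 W).
NANO_POWER_MODES = {10: 0, 5: 1}

def nvpmodel_command(watts: int) -> str:
    """Build the nvpmodel invocation for the requested power budget."""
    mode = NANO_POWER_MODES[watts]
    return f"sudo nvpmodel -m {mode}"

print(nvpmodel_command(5))   # sudo nvpmodel -m 1
print(nvpmodel_command(10))  # sudo nvpmodel -m 0
```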

Write Image to the microSD Card

Download the Jetson Nano Developer Kit SD Card Image, and note where it was saved on the computer.

  1. Do not insert your microSD card yet.
  2. Download, install, and launch Etcher.
  3. Click “Select image” and choose the zipped image file downloaded earlier.
  4. Insert your microSD card (click Ignore if your Mac shows a warning window).
  5. If you have no other external drives attached, Etcher will automatically select the microSD card as target device. Otherwise, click “Select drive” and choose the correct device.
  6. Click “Flash!” Your computer may prompt for your username and password before it allows Etcher to proceed. It will take Etcher about 10 minutes to write and validate the image if your microSD card is connected via USB 3.
  7. After Etcher finishes (your Mac may let you know it doesn’t know how to read the SD card), just click Eject and remove the microSD card.

Fan Control

sudo sh -c 'echo 30 > /sys/devices/pwm-fan/target_pwm'
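The value written to target_pwm is a PWM duty value; on my board the accepted range appears to be 0-255 (treat that as an assumption and verify against your kernel docs). A small sketch of my own that clamps the value before building the command:

```python
def fan_command(pwm: int) -> str:
    """Build the fan-control command, clamping into the assumed 0-255 PWM range."""
    pwm = max(0, min(255, pwm))
    return f"sudo sh -c 'echo {pwm} > /sys/devices/pwm-fan/target_pwm'"

print(fan_command(30))
print(fan_command(999))  # out-of-range request is clamped to 255
```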

SWAP

Since memory (4 GB) on the Jetson Nano is rather limited, I’d create and mount a swap file on the system. I referenced Create a Linux swap file for that, and made an 8GB swap file on my Jetson Nano DevKit.

sudo fallocate -l 8G /mnt/8GB.swap
sudo chmod 600 /mnt/8GB.swap
sudo mkswap /mnt/8GB.swap
sudo swapon /mnt/8GB.swap

sudo cp /etc/fstab /etc/fstab.bak
echo '/mnt/8GB.swap none swap sw 0 0' | sudo tee -a /etc/fstab

Reboot, then check the swap status:

sudo swapon -s

Filename Type Size Used Priority
/mnt/8GB.swap file 8388604 0 -1
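The swapon -s table above can also be checked from a script; a minimal parser sketch (my own, not part of any tool):

```python
def parse_swapon(output: str):
    """Parse `swapon -s` output into dicts, skipping the header row."""
    entries = []
    for line in output.strip().splitlines()[1:]:
        name, typ, size, used, prio = line.split()
        entries.append({"filename": name, "type": typ,
                        "size_kib": int(size), "used_kib": int(used),
                        "priority": int(prio)})
    return entries

sample = """Filename Type Size Used Priority
/mnt/8GB.swap file 8388604 0 -1"""
swaps = parse_swapon(sample)
print(swaps[0]["size_kib"])  # 8388604 KiB, i.e. ~8 GiB
```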

CUDA

Configure the environment:

sudo gedit ~/.bashrc

Add:

export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda

Reload:

source ~/.bashrc

Verifying:

nvcc -V

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sun_Sep_30_21:09:22_CDT_2018
Cuda compilation tools, release 10.0, V10.0.166
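If you want to verify the toolkit version from a script rather than by eye, the release number can be pulled out of the nvcc -V output; a small sketch:

```python
import re

def cuda_release(nvcc_output: str) -> str:
    """Extract the CUDA release number (e.g. '10.0') from `nvcc -V` output."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    return m.group(1) if m else ""

sample = ("nvcc: NVIDIA (R) Cuda compiler driver\n"
          "Cuda compilation tools, release 10.0, V10.0.166\n")
print(cuda_release(sample))  # 10.0
```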

Switch APT Sources

sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo gedit /etc/apt/sources.list

Replace with the Tsinghua University aarch64 mirrors:

deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
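Instead of pasting the list by hand, the stock entries can be rewritten in place; a sketch, under the assumption that the default Jetson sources.list points at http://ports.ubuntu.com/ubuntu-ports/:

```python
# Assumption: the stock entries use ports.ubuntu.com; adjust if yours differ.
MIRROR = "http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/"

def use_tuna_mirror(sources: str) -> str:
    """Rewrite stock ubuntu-ports URLs in sources.list text to the Tsinghua mirror."""
    return sources.replace("http://ports.ubuntu.com/ubuntu-ports/", MIRROR)

sample = "deb http://ports.ubuntu.com/ubuntu-ports/ bionic main\n"
print(use_tuna_mirror(sample))
```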
sudo apt-get update

CMAKE

Download CMake 3.14.5, then build and install it:

sudo apt remove cmake

cd $CMAKE_DOWNLOAD_PATH
./configure
make
sudo make install
cmake --version
cmake version 3.14.5

CMake suite maintained and supported by Kitware (kitware.com/cmake).

Dependencies

sudo apt-get install libgtk-3-dev libavcodec-dev libavformat-dev python-dev python-numpy python-tk libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev libswscale-dev libopenexr-dev libeigen3-dev libfaac-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev libx264-dev libqt4-dev libqt4-opengl-dev sphinx-common texlive-latex-extra libv4l-dev libatlas-base-dev libqtgui4 python3-pyqt5 gfortran python3-dev qt5-default

Python

sudo apt-get install python3-dev python3-pip python3-tk
sudo pip3 install numpy==1.16.4 matplotlib

sudo apt-get install python-dev python-pip python-tk
sudo pip2 install numpy==1.16.4 matplotlib

Jetson stats

sudo -H pip install jetson-stats

sudo jtop

OpenCV 3.4.2

Remove the preinstalled OpenCV:

sudo apt-get purge libopencv*

Download the sources

To resolve the conflict between OpenCV 3.4.2 and CUDA 10.0, download the NVIDIA VIDEO CODEC SDK, copy nvcuvid.h and cuviddec.h from Video_Codec_SDK/include/ to /usr/local/cuda/include/, and modify precomp.hpp, cuvid_video_source.hpp, frame_queue.hpp, video_parser.hpp and video_decoder.hpp under opencv/modules/cudacodec/src/:

#if CUDA_VERSION >= 9000 && CUDA_VERSION < 10000 
#include <dynlink_nvcuvid.h>
#else
#include <nvcuvid.h>
#endif
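The guard works because nvcc encodes CUDA_VERSION as major * 1000 + minor * 10, so the `>= 9000 && < 10000` test matches exactly the CUDA 9.x toolkits; a quick check of the encoding:

```python
def cuda_version_macro(major: int, minor: int) -> int:
    """Reproduce the CUDA_VERSION macro encoding: major * 1000 + minor * 10."""
    return major * 1000 + minor * 10

assert cuda_version_macro(9, 0) == 9000
assert cuda_version_macro(10, 0) == 10000
# CUDA 10.0 fails the 9.x guard, so it takes the <nvcuvid.h> branch:
print(cuda_version_macro(10, 0) >= 10000)  # True
```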

To fix OpenGL-related issues:

sudo gedit /usr/local/cuda/include/cuda_gl_interop.h

Here’s how the relevant lines (lines 62-68) of cuda_gl_interop.h look after the modification:

//#if defined(__arm__) || defined(__aarch64__)
//#ifndef GL_VERSION
//#error Please include the appropriate gl headers before including cuda_gl_interop.h
//#endif
//#else
#include <GL/gl.h>
//#endif
cd /usr/lib/aarch64-linux-gnu/
sudo ln -sf libGL.so.1.0.0 libGL.so

Build OpenCV

mkdir build
cd build

cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D BUILD_EXAMPLES=OFF -D BUILD_DOCS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_TESTS=OFF -D WITH_GTK_2_X=OFF -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_CUDA=ON -D CUDA_ARCH_BIN="5.3" -D WITH_CUBLAS=ON -D CMAKE_CXX_FLAGS="-std=c++11" -D CUDA_NVCC_FLAGS="-std=c++11 --expt-relaxed-constexpr" -D WITH_NVCUVID=ON -D BUILD_opencv_cudacodec=ON -D BUILD_opencv_python2=ON -D BUILD_opencv_python3=ON -D ENABLE_NEON=ON -D WITH_LIBV4L=ON -D OPENCV_EXTRA_MODULES_PATH=/home/hanjie/Software/opencv-3.4.2/opencv_contrib/modules ..
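That one-line cmake invocation is hard to review; the same command can be assembled from a dict of options. This is purely a convenience sketch of my own, showing only a subset of the flags above:

```python
def cmake_args(options: dict) -> list:
    """Turn an option dict into `-D KEY=VALUE` cmake arguments."""
    return [f"-D {k}={v}" for k, v in options.items()]

opts = {
    "CMAKE_BUILD_TYPE": "Release",
    "CMAKE_INSTALL_PREFIX": "/usr/local",
    "WITH_CUDA": "ON",
    "CUDA_ARCH_BIN": '"5.3"',   # Jetson Nano's Maxwell GPU
    "ENABLE_NEON": "ON",
}
cmd = "cmake " + " ".join(cmake_args(opts)) + " .."
print(cmd)
```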
-- General configuration for OpenCV 3.4.2 =====================================
-- Version control: unknown
--
-- Extra modules:
-- Location (extra): /home/hanjie/Software/opencv-3.4.2/opencv_contrib/modules
-- Version control (extra): 3.4.2
--
-- Platform:
-- Timestamp: 2019-07-09T09:09:02Z
-- Host: Linux 4.9.140-tegra aarch64
-- CMake: 3.14.5
-- CMake generator: Unix Makefiles
-- CMake build tool: /usr/bin/make
-- Configuration: Release
--
-- CPU/HW features:
-- Baseline: NEON FP16
-- required: NEON
-- disabled: VFPV3
--
-- C/C++:
-- Built as dynamic libs?: YES
-- C++11: YES
-- C++ Compiler: /usr/bin/c++ (ver 7.4.0)
-- C++ flags (Release): -std=c++11 -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
-- C++ flags (Debug): -std=c++11 -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
-- C Compiler: /usr/bin/cc
-- C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-narrowing -Wno-comment -Wimplicit-fallthrough=3 -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
-- C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-narrowing -Wno-comment -Wimplicit-fallthrough=3 -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
-- Linker flags (Release):
-- Linker flags (Debug):
-- ccache: NO
-- Precompiled headers: YES
-- Extra dependencies: dl m pthread rt /usr/lib/aarch64-linux-gnu/libGL.so /usr/lib/aarch64-linux-gnu/libGLU.so cudart nppc nppial nppicc nppicom nppidei nppif nppig nppim nppist nppisu nppitc npps cublas cufft -L/usr/local/cuda/lib64
-- 3rdparty dependencies:
--
-- OpenCV modules:
-- To be built: aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev cvv datasets dnn dnn_objdetect dpm face features2d flann freetype fuzzy hfs highgui img_hash imgcodecs imgproc java_bindings_generator line_descriptor ml objdetect optflow phase_unwrapping photo plot python2 python3 python_bindings_generator reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking video videoio videostab xfeatures2d ximgproc xobjdetect xphoto
-- Disabled: js world
-- Disabled by dependency: -
-- Unavailable: cnn_3dobj hdf java matlab ovis sfm ts viz
-- Applications: apps
-- Documentation: NO
-- Non-free algorithms: NO
--
-- GUI:
-- QT: YES (ver 5.9.5)
-- QT OpenGL support: YES (Qt5::OpenGL 5.9.5)
-- GTK+: NO
-- OpenGL support: YES (/usr/lib/aarch64-linux-gnu/libGL.so /usr/lib/aarch64-linux-gnu/libGLU.so)
-- VTK support: NO
--
-- Media I/O:
-- ZLib: /usr/lib/aarch64-linux-gnu/libz.so (ver 1.2.11)
-- JPEG: /usr/lib/aarch64-linux-gnu/libjpeg.so (ver 80)
-- WEBP: build (ver encoder: 0x020e)
-- PNG: /usr/lib/aarch64-linux-gnu/libpng.so (ver 1.6.34)
-- TIFF: /usr/lib/aarch64-linux-gnu/libtiff.so (ver 42 / 4.0.9)
-- JPEG 2000: build (ver 1.900.1)
-- OpenEXR: /usr/lib/aarch64-linux-gnu/libImath.so /usr/lib/aarch64-linux-gnu/libIlmImf.so /usr/lib/aarch64-linux-gnu/libIex.so /usr/lib/aarch64-linux-gnu/libHalf.so /usr/lib/aarch64-linux-gnu/libIlmThread.so (ver 2.2.0)
-- HDR: YES
-- SUNRASTER: YES
-- PXM: YES
--
-- Video I/O:
-- DC1394: YES (ver 2.2.5)
-- FFMPEG: YES
-- avcodec: YES (ver 57.107.100)
-- avformat: YES (ver 57.83.100)
-- avutil: YES (ver 55.78.100)
-- swscale: YES (ver 4.8.100)
-- avresample: NO
-- GStreamer:
-- base: YES (ver 1.14.1)
-- video: YES (ver 1.14.1)
-- app: YES (ver 1.14.1)
-- riff: YES (ver 1.14.1)
-- pbutils: YES (ver 1.14.1)
-- libv4l/libv4l2: 1.14.2 / 1.14.2
-- v4l/v4l2: linux/videodev2.h
-- gPhoto2: NO
--
-- Parallel framework: TBB (ver 2017.0 interface 9107)
--
-- Trace: YES (built-in)
--
-- Other third-party libraries:
-- Lapack: NO
-- Eigen: YES (ver 3.3.4)
-- Custom HAL: YES (carotene (ver 0.0.1))
-- Protobuf: build (3.5.1)
--
-- NVIDIA CUDA: YES (ver 10.0, CUFFT CUBLAS)
-- NVIDIA GPU arch: 53
-- NVIDIA PTX archs:
--
-- OpenCL: YES (no extra features)
-- Include path: /home/hanjie/Software/opencv-3.4.2/3rdparty/include/opencl/1.2
-- Link libraries: Dynamic load
--
-- Python 2:
-- Interpreter: /usr/bin/python2.7 (ver 2.7.15)
-- Libraries: /usr/lib/aarch64-linux-gnu/libpython2.7.so (ver 2.7.15rc1)
-- numpy: /usr/local/lib/python2.7/dist-packages/numpy/core/include (ver 1.15.0)
-- packages path: lib/python2.7/dist-packages
--
-- Python 3:
-- Interpreter: /usr/bin/python3 (ver 3.6.8)
-- Libraries: /usr/lib/aarch64-linux-gnu/libpython3.6m.so (ver 3.6.8)
-- numpy: /usr/local/lib/python3.6/dist-packages/numpy/core/include (ver 1.15.0)
-- packages path: lib/python3.6/dist-packages
--
-- Python (for build): /usr/bin/python2.7
--
-- Java:
-- ant: NO
-- JNI: NO
-- Java wrappers: NO
-- Java tests: NO
--
-- Matlab: NO
--
-- Install to: /usr/local
-- -----------------------------------------------------------------
make -j4
sudo make install

sudo /bin/bash -c 'echo "/usr/local/lib" > /etc/ld.so.conf.d/opencv.conf'
sudo ldconfig

protobuf

sudo apt-get install autoconf automake libtool curl make g++ unzip

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v3.6.0.1
git submodule update --init --recursive
./autogen.sh

./configure
make -j4
make check -j4
sudo make install -j4
sudo ldconfig # refresh shared library cache.
protoc --version
libprotoc 3.6.0
sudo pip3 uninstall protobuf
cd protobuf/python
python3 setup.py build
python3 setup.py test
sudo python3 setup.py install

TensorFlow

sudo apt-get install libhdf5-serial-dev hdf5-tools

# system-wide install
sudo pip3 install -U virtualenv

# Create a new virtual environment by choosing a Python interpreter and making a ./venv directory to hold it:
virtualenv --system-site-packages -p python3 ./deeplearning

# Activate the virtual environment using a shell-specific command:
source ./deeplearning/bin/activate

python3 -m pip install --upgrade pip
pip list
Package                       Version      
----------------------------- -------------
apt-clone 0.2.1
apturl 0.5.2
asn1crypto 0.24.0
beautifulsoup4 4.6.0
blinker 1.4
Brlapi 0.6.6
certifi 2018.1.18
chardet 3.0.4
cryptography 2.1.4
cupshelpers 1.0
cycler 0.10.0
defer 1.0.6
distro-info 0.18
feedparser 5.2.1
html5lib 0.999999999
httplib2 0.9.2
idna 2.6
keyring 10.6.0
keyrings.alt 3.0
kiwisolver 1.1.0
language-selector 0.1
launchpadlib 1.10.6
lazr.restfulclient 0.13.5
lazr.uri 1.0.3
louis 3.5.0
lxml 4.2.1
macaroonbakery 1.1.3
Mako 1.0.7
MarkupSafe 1.0
matplotlib 3.1.1
numpy 1.15.0
oauth 1.0.1
oauthlib 2.0.6
PAM 0.4.2
pip 19.1.1
protobuf 3.0.0
pycairo 1.16.2
pycrypto 2.6.1
pycups 1.9.73
pygobject 3.26.1
PyICU 1.9.8
PyJWT 1.5.3
pymacaroons 0.13.0
PyNaCl 1.1.2
pyparsing 2.4.0
pyRFC3339 1.0
python-apt 1.6.3+ubuntu1
python-dateutil 2.8.0
python-debian 0.1.32
pytz 2018.3
pyxdg 0.25
PyYAML 3.12
requests 2.18.4
requests-unixsocket 0.1.5
SecretStorage 2.3.1
setuptools 41.0.1
simplejson 3.13.2
six 1.11.0
ssh-import-id 5.7
system-service 0.3
systemd-python 234
ubuntu-drivers-common 0.0.0
unattended-upgrades 0.1
unity-scope-calculator 0.1
unity-scope-chromiumbookmarks 0.1
unity-scope-colourlovers 0.1
unity-scope-devhelp 0.1
unity-scope-firefoxbookmarks 0.1
unity-scope-manpages 0.1
unity-scope-openclipart 0.1
unity-scope-texdoc 0.1
unity-scope-tomboy 0.1
unity-scope-virtualbox 0.1
unity-scope-yelp 0.1
unity-scope-zotero 0.1
urllib3 1.22
virtualenv 16.6.1
wadllib 1.3.2
webencodings 0.5
wheel 0.33.4
xkit 0.0.0
zope.interface 4.3.2
pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3

Verifying:

# Python
import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.per_process_gpu_memory_fraction = 0.7
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

hello = tf.constant('Hello, TensorFlow!')
print(sess.run(hello))
Hello, TensorFlow!

And to exit virtualenv later:

deactivate  # don't exit until you're done using TensorFlow

Python Packages

sudo pip3 install cython
sudo pip3 install scipy sklearn pandas scikit-image

caffe

sudo apt-get install libhdf5-serial-dev libleveldb-dev libprotobuf-dev libsnappy-dev protobuf-compiler libgflags-dev libgoogle-glog-dev liblmdb-dev libatlas-base-dev libblas-dev libatlas-base-dev

sudo apt-get install --no-install-recommends libboost-all-dev
git clone https://github.com/BVLC/caffe.git

cd caffe
cp Makefile.config.example Makefile.config
gedit Makefile.config

Modify Makefile.config:

USE_CUDNN := 1

USE_OPENCV := 1
OPENCV_VERSION := 3

WITH_PYTHON_LAYER := 1

USE_PKG_CONFIG := 1

INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu/hdf5/serial

# PYTHON_INCLUDE := /usr/include/python2.7 \
# /usr/lib/python2.7/dist-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python-py36 python3.6m
PYTHON_INCLUDE := /usr/include/python3.6m \
/usr/local/lib/python3.6/dist-packages/numpy/core/include

CUDA_ARCH := -gencode arch=compute_53,code=sm_53 \
-gencode arch=compute_53,code=compute_53
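The CUDA_ARCH lines follow nvcc's -gencode syntax: one sm_XX entry per real architecture, plus a trailing compute_XX entry that embeds PTX for forward compatibility. A small generator sketch of my own:

```python
def gencode_flags(capabilities):
    """Build Caffe-style -gencode flags from compute capabilities like '5.3'."""
    flags = []
    for cap in capabilities:
        sm = cap.replace(".", "")
        flags.append(f"-gencode arch=compute_{sm},code=sm_{sm}")
    # PTX for the newest arch, for forward compatibility:
    newest = capabilities[-1].replace(".", "")
    flags.append(f"-gencode arch=compute_{newest},code=compute_{newest}")
    return flags

print(" \\\n".join(gencode_flags(["5.3"])))  # matches the Nano config above
```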

Modify the Makefile:

Change:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m
to:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
make -j4
make -j4 test

Verifying:

make runtest -j4
make pycaffe
import sys
# add the path to the python folder under caffe
sys.path.append("./caffe/python")
import caffe

or run sudo gedit ~/.bashrc and add:

export PYTHONPATH=path_to/caffe/python:$PYTHONPATH
source ~/.bashrc

Bazel

sudo apt-get install zlib1g-dev unzip openjdk-8-jdk

wget https://github.com/bazelbuild/bazel/releases/download/0.18.0/bazel-0.18.0-dist.zip
unzip bazel-0.18.0-dist.zip -d bazel-0.18.0
cd bazel-0.18.0
./compile.sh
sudo cp output/bazel /usr/local/bin
bazel help
bazel version
Build label: 0.18.0- (@non-git)
Build target: bazel-out/aarch64-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Sat Jul 13 12:03:24 2019 (1563019404)
Build timestamp: 1563019404
Build timestamp as int: 1563019404

Uninstall

sudo rm -fr ~/.bazel ~/.bazelrc
sudo rm -fr ~/.cache/bazel
sudo rm /usr/local/bin/bazel /etc/bazelrc /usr/local/lib/bazel -fr

TensorFlow C++

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout v1.12.0

wget https://gist.githubusercontent.com/sdeoras/533708a95a3de5ff6de1c6b018cabf6d/raw/d0ff1fdcabb99aac3dce02f020cad0e5e78a8a56/tf_jetson_nano_build_112.diff
git apply tf_jetson_nano_build_112.diff

./configure
WARNING: ignoring LD_PRELOAD in environment.
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.18.0- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3


Found possible Python library paths:
/home/hanjie/Software/caffe/python
/usr/lib/python3.6/dist-packages
/usr/lib/python3/dist-packages
/usr/local/lib/python3.6/dist-packages
Please input the desired Python library path to use. Default is [/home/hanjie/Software/caffe/python]
/usr/lib/python3/dist-packages
Do you wish to build TensorFlow with XLA JIT support? [Y/n]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 10.0


Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:


Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 7.3.1


Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/lib/aarch64-linux-gnu


Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.

Please specify the location where TensorRT is installed. [Default is /usr/lib/aarch64-linux-gnu]:/usr/src/tensorrt


Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 1.3


Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 5.3


Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:


Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=gdr # Build with GDR support.
--config=verbs # Build with libverbs support.
--config=ngraph # Build with Intel nGraph support.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=noignite # Disable Apacha Ignite support.
--config=nokafka # Disable Apache Kafka support.
Configuration finished

bazel build -c opt --local_resources 5000,1.0,1.0 --verbose_failures --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt="-DPNG_ARM_NEON_OPT=0" --config=noaws //tensorflow:libtensorflow_cc.so

bazel build -c opt --local_resources 5000,1.0,1.0 --verbose_failures --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --config=cuda --copt=-DPNG_ARM_NEON_OPT=0 //tensorflow:libtensorflow_framework.so


sudo mkdir -p /usr/local/include/tf/tensorflow
sudo ln -s abs_path_to_tensorflow/bazel-genfiles/ /usr/local/include/tf
sudo ln -s abs_path_to_tensorflow/tensorflow/cc /usr/local/include/tf/tensorflow
sudo ln -s abs_path_to_tensorflow/tensorflow/core /usr/local/include/tf/tensorflow
sudo ln -s abs_path_to_tensorflow/third_party /usr/local/include/tf
sudo ln -s abs_path_to_tensorflow/bazel-bin/tensorflow/libtensorflow_cc.so /usr/local/lib
sudo ln -s abs_path_to_tensorflow/bazel-bin/tensorflow/libtensorflow_framework.so /usr/local/lib
git clone https://github.com/abseil/abseil-cpp.git
sudo ln -s abs_path_to_abseil-cpp/absl /usr/local/include/tf/third_party

To uninstall, run the following commands:

sudo rm -r /usr/local/include/tf
sudo rm /usr/local/lib/libtensorflow_*.so

TensorFlow C++ Demo

demo.cpp

#include <tensorflow/core/platform/env.h>
#include <tensorflow/core/public/session.h>
#include <iostream>

using namespace std;
using namespace tensorflow;

int main()
{
    Session* session;
    Status status = NewSession(SessionOptions(), &session);
    if (!status.ok()) {
        cout << status.ToString() << "\n";
        return 1;
    }
    cout << "Session successfully created.\n";
    return 0;
}

CMakeLists.txt

cmake_minimum_required(VERSION 3.5)

project(demo)

set(CMAKE_CXX_STANDARD 11)

add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0)

find_package(OpenCV REQUIRED)
find_package(Eigen3 REQUIRED)

add_definitions(${EIGEN3_DEFINITIONS})

set(TENSORFLOW_INCLUDES
    /usr/local/include/tf/
    /usr/local/include/tf/bazel-genfiles
    /usr/local/include/tf/tensorflow/
    /usr/local/include/tf/third_party/
)

set(TENSORFLOW_LIBS
    /usr/local/lib/libtensorflow_cc.so
    /usr/local/lib/libtensorflow_framework.so)

include_directories(
    ${TENSORFLOW_INCLUDES}
    ${OpenCV_INCLUDE_DIRS}
    ${EIGEN3_INCLUDE_DIR}
)

add_executable(demo demo.cpp)
target_link_libraries(demo ${TENSORFLOW_LIBS} ${OpenCV_LIBS})

ZED Camera

Download the ZED SDK for JetPack 4.2 v2.8.2 from https://www.stereolabs.com/developers/release/:

chmod +x ZED_SDK_JP4.2_v2.8.2.run
./ZED_SDK_JP4.2_v2.8.2.run

The ZED SDK installer lets you set the Jetson to maximum performance mode. This makes sure the Jetson is ready to run the ZED SDK and your programs at the maximum of its capabilities.

systemctl disable jetson_clocks

Visual Studio Code

sudo gpg --keyserver keyserver.ubuntu.com --recv 0CC3FD642696BFC8
sudo apt-get update
sudo -s
. <( wget -O - https://code.headmelted.com/installers/apt.sh )

Environment

  • OS: Ubuntu 16.04.4 LTS
  • Kernel: 4.15.0-50-generic
  • CUDA: 9.0.176
  • GPU: 940MX
  • GPU driver: 384.13
  • GCC: 5.4.0
  • Python: 3.5.2

protobuf

sudo apt-get install autoconf automake libtool curl make g++ unzip

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v3.6.0
git submodule update --init --recursive
./autogen.sh

./configure
make -j4
make check -j4
sudo make install -j4
sudo ldconfig # refresh shared library cache.
protoc --version
libprotoc 3.6.0
sudo pip3 uninstall protobuf
cd protobuf/python
python3 setup.py build
python3 setup.py test
sudo python3 setup.py install

Bazel

Download bazel-0.18.1-installer-linux-x86_64.sh:

chmod +x bazel-0.18.1-installer-linux-x86_64.sh
sudo ./bazel-0.18.1-installer-linux-x86_64.sh --user

The --user flag installs Bazel to the $HOME/bin directory on your system and sets the .bazelrc path to $HOME/.bazelrc. Use the --help command to see additional installation options.

Check cuDNN Version

cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2

or

cat /usr/include/x86_64-linux-gnu/cudnn.h | grep CUDNN_MAJOR -A 2
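The grep output is just a set of #define lines; a small sketch of my own that assembles them into a dotted version string:

```python
import re

def cudnn_version(header_text: str) -> str:
    """Extract the cuDNN version from cudnn.h's #define lines."""
    defs = dict(re.findall(r"#define (CUDNN_\w+) (\d+)", header_text))
    return "{}.{}.{}".format(defs["CUDNN_MAJOR"], defs["CUDNN_MINOR"],
                             defs["CUDNN_PATCHLEVEL"])

sample = """#define CUDNN_MAJOR 7
#define CUDNN_MINOR 4
#define CUDNN_PATCHLEVEL 2"""
print(cudnn_version(sample))  # 7.4.2
```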

TensorFlow

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout v1.12.2
./configure


WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/home/luohanjie/.cache/bazel/_bazel_luohanjie/install/cdf71f2489ca9ccb60f7831c47fd37f1/_embedded_binaries/A-server.jar) to field java.lang.String.value
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.18.1 installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3


Found possible Python library paths:
/home/luohanjie/Documents/software/caffe/python
/usr/lib/python3.5/dist-packages
/usr/local/lib/python3.5/dist-packages
/usr/lib/python3/dist-packages
Please input the desired Python library path to use. Default is [/home/luohanjie/Documents/software/caffe/python]
/usr/lib/python3/dist-packages
Do you wish to build TensorFlow with Apache Ignite support? [Y/n]: n
No Apache Ignite support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [Y/n]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]:


Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:


Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 7.4.2


Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/include/x86_64-linux-gnu


Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.

Please specify the location where TensorRT is installed. [Default is /usr/lib/x86_64-linux-gnu]:/usr/src/tensorrt


Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 1.3


Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 5.0


Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:


Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=gdr # Build with GDR support.
--config=verbs # Build with libverbs support.
--config=ngraph # Build with Intel nGraph support.
Configuration finished

bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=nonccl //tensorflow:libtensorflow_cc.so

bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=nonccl //tensorflow:libtensorflow_framework.so

sudo mkdir -p /usr/local/include/tf/tensorflow
sudo ln -s abs_path_to_tensorflow/bazel-genfiles/ /usr/local/include/tf
sudo ln -s abs_path_to_tensorflow/tensorflow/cc /usr/local/include/tf/tensorflow
sudo ln -s abs_path_to_tensorflow/tensorflow/core /usr/local/include/tf/tensorflow
sudo ln -s abs_path_to_tensorflow/third_party /usr/local/include/tf
sudo ln -s abs_path_to_tensorflow/bazel-bin/tensorflow/libtensorflow_cc.so /usr/local/lib
sudo ln -s abs_path_to_tensorflow/bazel-bin/tensorflow/libtensorflow_framework.so /usr/local/lib
sudo ln -s tensorflow/contrib/makefile/downloads/absl/absl /usr/local/include/tf/third_party

To uninstall, run the following commands:

sudo rm -r /usr/local/include/tf
sudo rm /usr/local/lib/libtensorflow_*.so

TensorFlow C++ Demo

Demo.cpp

#include <tensorflow/core/platform/env.h>
#include <tensorflow/core/public/session.h>
#include <iostream>

using namespace std;
using namespace tensorflow;

int main()
{
    Session* session;
    Status status = NewSession(SessionOptions(), &session);
    if (!status.ok()) {
        cout << status.ToString() << "\n";
        return 1;
    }
    cout << "Session successfully created.\n";
    return 0;
}

CMakeLists.txt

cmake_minimum_required(VERSION 3.5)

set(CMAKE_CXX_STANDARD 11)

find_package(OpenCV REQUIRED)
find_package(Eigen REQUIRED)

add_definitions(${EIGEN_DEFINITIONS})

set(TENSORFLOW_INCLUDES
    /usr/local/include/tf/
    /usr/local/include/tf/bazel-genfiles
    /usr/local/include/tf/tensorflow/
    /usr/local/include/tf/third_party/
)

set(TENSORFLOW_LIBS
    /usr/local/lib/libtensorflow_cc.so
    /usr/local/lib/libtensorflow_framework.so)

include_directories(
    ${TENSORFLOW_INCLUDES}
    ${OpenCV_INCLUDE_DIRS}
    ${EIGEN_INCLUDE_DIR}
)

add_executable(demo demo.cpp)
target_link_libraries(demo ${TENSORFLOW_LIBS} ${OpenCV_LIBS})

Testing the DFANet semantic segmentation network, based on the paper DFANet: Deep Feature Aggregation for Real-Time Semantic Segmentation; its main selling point is real-time performance.

The tests use the cityscapes dataset, which can be downloaded here.

Server and Data Preparation

Assume we already have a remote docker server at root@0.0.0.0 -p 9999.

Dependencies

pytorch==1.0.0
python==3.6
numpy
torchvision
matplotlib
opencv-python
tensorflow
tensorboardX
apt install -y libsm6 libxext6
pip3 install opencv-python
pip3 install pyyaml
pip3 install tensorboardX

Check the PyTorch version:

import torch
print(torch.__version__)
# Linux, pip, Python 3.6, CUDA 9
pip3 install --upgrade pip
pip3 install --upgrade torch torchvision
Upload the local code and dataset to the server with scp:
scp -P 9999 local_file root@0.0.0.0:remote_directory
Unzip the file:
apt-get update
apt-get install zip -y
unzip local_file
Download DFANet:
git clone https://github.com/huaifeng1993/DFANet.git
cd DFANet
Pretrained model

Open utils/preprocess_data.py and modify the dataset paths:

cityscapes_data_path = "/home/luohanjie/Documents/SLAM/data/cityscapes"
cityscapes_meta_path = "/home/luohanjie/Documents/SLAM/data/cityscapes/gtFine"

Run the script to generate the labels:

python3 utils/preprocess_data.py
main.py

Open main.py and modify the dataset paths:

train_dataset = DatasetTrain(cityscapes_data_path="/home/luohanjie/Documents/SLAM/data/cityscapes",
                             cityscapes_meta_path="/home/luohanjie/Documents/SLAM/data/cityscapes/gtFine/")

val_dataset = DatasetVal(cityscapes_data_path="/home/luohanjie/Documents/SLAM/data/cityscapes",
                         cityscapes_meta_path="/home/luohanjie/Documents/SLAM/data/cityscapes/gtFine/")

2019.4.24: A function has been written to load the pretrained model that was trained on ImageNet-1k. The project for training the backbone can be downloaded from https://github.com/huaifeng1993/ILSVRC2012. Limited by my computing resources (only one RTX 2080), I trained the backbone on ILSVRC2012 for only 22 epochs, but it has a great impact on the results.

Since we don't have the ILSVRC2012 pretrained model, the flag needs to be turned off:

net = dfanet(pretrained=False, num_classes=20)
ERROR: TypeError: __init__() got an unexpected keyword argument 'log_dir'

Open train.py and change it to:

writer = SummaryWriter(logdir=self.log_dir)
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm)

This error occurs when the training code runs inside docker on the server with too large a batch size, so that shared memory runs out (docker limits shm). The fix is to set the Dataloader's num_workers to 0.

Open main.py and modify:

train_loader = DataLoader(dataset=train_dataset,
                          batch_size=10, shuffle=True,
                          num_workers=0)
val_loader = DataLoader(dataset=val_dataset,
                        batch_size=10, shuffle=False,
                        num_workers=0)

Train

python3 main.py