Intel® Deep Learning SDK Deployment Tool
User Guide

Copyright 2016 Intel Corporation
All Rights Reserved
Legal Information
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel, the Intel logo, Intel Core, VTune, and Xeon are trademarks of Intel Corporation in the U.S. and other countries.

* Other names and brands may be claimed as the property of others.

Copyright © 2016 Intel Corporation.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission from Khronos.
Revision History

Revision Number    Description        Revision Date
                   Initial version    December 2016
Contents

Legal Information
Introduction
  1.1 Related Information
  1.2 Installing Intel® Deep Learning SDK Deployment Tool
  1.3 Conventions and Symbols
  1.4 Introducing Intel® Deep Learning SDK Deployment Tool
Using Intel® Deep Learning SDK Deployment Tool
  2.1 Typical Usage Model
  2.2 Model Optimizer Overview
    2.2.1 Prerequisites
    2.2.2 Running the Model Optimizer
    2.2.3 Known Issues and Limitations
  2.3 Inference Engine Overview
    2.3.1 Building the Sample Applications
    2.3.2 Running the Sample Applications
End-to-End User Scenarios
  3.1 Inferring an Image Using the Intel® Math Kernel Library for Deep Neural Networks Plugin
Introduction

The Intel® Deep Learning SDK Deployment Tool User Guide provides guidance on how to use the Deployment Tool to optimize trained deep learning models and integrate the inference with application logic using a unified API. See the End-to-End User Scenarios chapter to find usage samples.

This guide does not provide information on the Intel® Deep Learning SDK Training Tool. For this information, see the Intel® Deep Learning SDK Training Tool User Guide.
1.1 Related Information

For more information on SDK requirements, new features, known issues and limitations, refer to the Release Notes document.
1.2 Installing Intel® Deep Learning SDK Deployment Tool

For installation steps, please refer to the Intel® Deep Learning SDK Deployment Tool Installation Guide.
1.3 Conventions and Symbols

The following conventions are used in this document:

SDK    Software Development Kit
API    Application Programming Interface
IR     Internal Representation of a deep learning network
CNN    Convolutional Neural Network
1.4 Introducing Intel® Deep Learning SDK Deployment Tool

The Deployment Tool is a feature of the Intel® Deep Learning SDK that enables you to:

• Optimize trained deep learning models for deployment.
• Deliver a unified API to integrate the inference with application logic.
The Deployment Tool comprises two main components:

Model Optimizer

The Model Optimizer is a cross-platform command line tool that:

• Takes as input a trained network that contains a certain network topology, parameters, and the trained weights and biases. The Model Optimizer currently supports only input networks produced using the Caffe* framework.
• Performs horizontal and vertical fusion of the network layers.
• Prunes unused branches in the network.
• Applies weights compression methods.
• Produces as output an Internal Representation (IR) of the network, a pair of files that describe the whole model:
  - Topology file: an .xml file that describes the network topology.
  - Trained data file: a .bin file that contains the weights and biases as binary data.

The produced IR is used as an input for the Inference Engine.
Inference Engine

The Inference Engine is a runtime that:

• Takes as input an IR produced by the Model Optimizer.
• Provides a unified API to run inference of the model in the target environment.
Using Intel® Deep Learning SDK Deployment Tool
2.1 Typical Usage Model

The typical usage of the Deployment Tool to perform inference of a trained deep neural network model includes the following steps:

1. Train a model using the Intel® Deep Learning SDK Training Tool or the Caffe* framework.
2. Provide the model in the Caffe* format to the Model Optimizer to produce the IR of the model based on the network topology, weight and bias values, and other parameters.
3. Test the model in the IR format using the Inference Engine in the target environment. The Deployment Tool contains sample Inference Engine applications. For more information, see the Running the Sample Applications section.
4. Integrate the Inference Engine into your application and deploy the model in the target environment.
2.2 Model Optimizer Overview

The Model Optimizer is a cross-platform command line tool that facilitates the transition between the training and deployment environments. The Model Optimizer:

• Converts a trained model from a framework-specific format to a unified framework-independent format (IR). The current version supports conversion of models in the Caffe* format only.
• Can optimize a trained model by removing redundant layers and fusing layers, for instance, Batch Normalization and Convolution layers.

The Model Optimizer takes a trained model in the Caffe* format (a .prototxt file with the network topology and a .caffemodel file with the network weights) and outputs a model in the IR format (an .xml file with the network topology and a binary .bin file with the network weights):

[Scheme: the Model Optimizer converts the .caffemodel and .prototxt input files into the IR, a pair of .xml and .bin files.]
The Model Optimizer is also included in distributions of the Intel® Deep Learning SDK Training Tool.
2.2.1 Prerequisites

The Model Optimizer is distributed as a set of binary files.
Before running the Model Optimizer, add the path to the libCaffe.so shared object and the path to the Model Optimizer executable binary to the LD_LIBRARY_PATH environment variable, for example as shown below.
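A minimal sketch of this setup, assuming (hypothetically) that both the ModelOptimizer binary and libCaffe.so reside in path_to_DLSDK/deployment_tools/model_optimizer/bin; adjust the path to your installation:

$ # Assumed location of the Model Optimizer binary and libCaffe.so
$ MO_DIR=path_to_DLSDK/deployment_tools/model_optimizer/bin
$ export LD_LIBRARY_PATH=$MO_DIR:$LD_LIBRARY_PATH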
2.2.2 Running the Model Optimizer

To run the Model Optimizer:

1. Change the current directory to the Model Optimizer directory. For example:

   $ cd path_to_DLSDK/deployment_tools/model_optimizer

2. Run the ./ModelOptimizer command with the desired command line arguments:
• -w: Path to a binary file with the model weights (a .caffemodel file).
• -i: Generate the IR.
• Desired precision (for now, must be FP32, because the MKLDNN plugin currently supports only FP32).
• Path to a file with the network topology (a .prototxt file).
• -b: Batch size; an optional parameter, equal to the number of CPU cores by default.
• -ms: Mean image values per channel.
• -mf: File with the mean image in the binaryproto format.
• -f: Network normalization factor (for now, must be set to 1, which corresponds to the FP32 precision).
Some models require subtracting the image mean from each image at both training and deployment time. There are two available options for subtraction: the -ms option allows you to subtract mean values per channel, and the -mf option allows you to subtract the whole mean image.
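As an illustration only, a hedged sketch of the two alternatives; the exact value syntax that -ms expects is not shown in this guide, so the comma-separated values below are an assumption, and the mean file name is hypothetical:

$ # Assumed syntax: subtract per-channel mean values
$ ./ModelOptimizer <other arguments> -ms 104,117,123
$ # Alternative: subtract a whole mean image stored in a binaryproto file (hypothetical file name)
$ ./ModelOptimizer <other arguments> -mf imagenet_mean.binaryproto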
The mean image file should be in the binaryproto format. For some common datasets, a ready-made mean image file can be downloaded.

The Model Optimizer creates a text .xml file and a binary .bin file with the model in the IR format in the Artifacts directory under your current directory.
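To check that the conversion produced the IR, you can list the output directory; a small sketch, assuming the files are written to bin/Artifacts under the Model Optimizer directory and named after the input model (the names may differ in your installation):

$ ls path_to_DLSDK/deployment_tools/model_optimizer/bin/Artifacts
$ # Expect a pair of IR files, for example:
$ #   model_name.xml  - network topology
$ #   model_name.bin  - weights and biases as binary data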
2.2.3 Known Issues and Limitations

The current version of the Model Optimizer has the following limitations:

• It is distributed for 64-bit Ubuntu* OS 14.04 only.
• It can process models in the Caffe* format only.
• It can process popular image classification network models, including AlexNet, GoogleNet, and VGG-16.
2.3.1 Building the Sample Applications

To build a sample application:

1. Create a build directory and go to it:

   $ mkdir build
   $ cd build

2. Run CMake to generate the Make files:

   $ cmake path_to_samples_directory

3. Run Make to build the application:

   $ make
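For example, a consolidated build sketch, assuming (hypothetically) that the samples are installed under path_to_DLSDK/deployment_tools/inference_engine/samples:

$ # Build the sample applications out of the source tree
$ mkdir build && cd build
$ cmake path_to_DLSDK/deployment_tools/inference_engine/samples
$ make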
2.3.2 Running the Sample Applications

Running the sample application for image classification

Running the application with the -h option shows the usage prompt:

-h                        Print a usage message.
-i "path1" "path2" ...    Path to a folder with images or paths to image files.
-m "path"                 Path to an .xml file with a trained model.
-p "name"                 Plugin name. For example, MKLDNNPlugin.
-pp "path"                Path to a plugin folder.
-ni N                     The number of iterations to do inference; 1 by default.
-l "path"                 Path to a file with labels for a model.
-nt N                     Number of top results to output; 10 by default.
-pc                       Enables printing of performance counts.
The sample command below demonstrates the use of the sample application for image classification to perform inference on an image:

./classification_sample -i path_to_image -m path_to_model_xml_file

By default, the application outputs the top 10 inference results. Add the -nt option to the previous command to modify the number of top output results. For example, to get the top N results, you can use the following command:

./classification_sample -i path_to_image -m path_to_model_xml_file -nt N
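A more complete invocation sketch that combines the options documented above; the labels file path and the plugin directory are hypothetical, and MKLDNNPlugin is assumed to be the plugin you use:

# Classify an image with the MKLDNN plugin, run inference 10 times,
# print the top 5 results, and enable performance counters.
./classification_sample -i path_to_image -m path_to_model_xml_file -p MKLDNNPlugin -pp /path/to/plugins/directory -l path_to_labels_file -nt 5 -ni 10 -pc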
Running the sample application for image segmentation

Running the application with the -h option shows the usage message:

-h            Print a usage message.
-i "path"     Path to a .bmp image.
-m "path"     Path to an .xml file with a trained model.
-p "name"     Plugin name. For example, MKLDNNPlugin.
You can use the following command to do inference on an image using a trained FCN8 network:

./segmentation_sample -i path_to_image/inputImage.bmp -m path_to_model_xml_file -p MKLDNNPlugin -pp /path/to/plugins/directory

The application outputs a segmented image (out.bmp).
End-to-End User Scenarios

3.1 Inferring an Image Using the Intel® Math Kernel Library for Deep Neural Networks Plugin

1. Go to the Model Optimizer directory:

   path_to_DLSDK/deployment_tools/model_optimizer
2. Add to the LD_LIBRARY_PATH variable the path to the libCaffe.so shared object and the Model Optimizer folder.

3. Configure the Model Optimizer for the MKLDNN plugin using the command line arguments listed in the Running the Model Optimizer section, and run the following command:
   ./ModelOptimizer -w path_to_caffemodel_file ... path_to/deploy.prototxt ... -f 1 -b 1 -ms ...
After the command completes successfully, the IR representation of the model is located here:

   path_to_DLSDK/deployment_tools/model_optimizer/bin/Artifacts
4. Compile the Inference Engine classification sample application as described in the Building the Sample Applications chapter.
5. Go to the folder with the compiled binaries under path_to_DLSDK.

6. Infer an image using the trained and optimized model:

   ./classification_sample -i path_to_image -m path_to_model_xml_file