Commit 756cbb3d authored by Gustavo Valiente

# PocketTensor
PocketTensor is a [Kerasify]( fork designed for running trained Keras models from a C++ application on embedded devices.
## Design goals
* Compatibility with sequential networks generated by Keras 2.x using Tensorflow backend.
* CPU support only (no GPU).
* Easy to build and run (no external dependencies).
* Unit tests for each supported layer.
## Improvements over Kerasify
* Thanks to the awesome [libsimdpp]( library, tensor operations have been rewritten using SIMD instructions to improve prediction performance.
* Memory (re)usage has been improved in order to reduce memory allocations.
* In addition to `float`, `double` precision tensors are also supported (see the `pt_tweakme.h` file).
* Tensor dimensions are rigorously validated on each layer to avoid using malformed models.
* Besides GCC and Clang, the Visual Studio compiler is properly supported.
## Hardware requirements
Since there's no GPU support, by default PocketTensor requires the following CPU SIMD instruction sets:
* ARM: NEON with floating point support.
* x86: AVX.
Required SIMD instruction sets are specified in the `pt_tweakme.h` file, so they can be modified with ease.
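For illustration, a tweak file of this kind typically exposes compile-time switches. The macro names below are hypothetical, not PocketTensor's actual ones; check the bundled `pt_tweakme.h` for the real switches:

```cpp
// Hypothetical sketch of pt_tweakme.h-style switches (names are NOT the real ones):

// Select the floating point precision used by tensors:
// #define PT_DOUBLE_PRECISION

// Select the SIMD instruction set libsimdpp should target:
// #define PT_SIMD_ARCH_X86_AVX
// #define PT_SIMD_ARCH_ARM_NEON_FLT_SP
```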
## Software requirements
Since a copy of libsimdpp comes bundled with this library, there are no external dependencies; the only software requirements are a C++11-compatible compiler and CMake >= 3.4.
PocketTensor has been tested with these compilers:
* GCC 4.9.
* MSVC 2017.
* The Clang version bundled with Apple LLVM 9.1.0.
## How to build
A CMakeLists.txt is provided with this library, so in order to use it you only need to include this file in your CMake project.
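A minimal consumer project could look like the sketch below; the subdirectory path and the `pocket-tensor` target name are assumptions for illustration, so use whatever names the bundled CMakeLists.txt actually defines:

```cmake
cmake_minimum_required(VERSION 3.4)
project(my_app)

# Pull in PocketTensor's CMakeLists.txt (path and target name are assumptions):
add_subdirectory(pocket-tensor)

add_executable(my_app main.cpp)
target_link_libraries(my_app pocket-tensor)
```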
To build and run the unit tests, you need to generate them first (the exact CMake invocation may differ depending on where the tests' `CMakeLists.txt` lives in your checkout):

```
mkdir tests_build
cd tests_build
cmake ..
make
```
## Usage
1) Use Keras to build (`model.compile(...)`) and train (``) your model as usual.
2) Now convert it to the Kerasify file format with `kerasify.export_model(model, 'example.model')`.
3) Finally load it in C++ (`pt::create("example.model")`) and use `model->predict(...)` to perform a prediction with your data.
The following example shows the full workflow:
```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from kerasify import export_model

test_x = np.random.rand(10, 10).astype('f')
test_y = np.random.rand(10).astype('f')

model = Sequential()
model.add(Dense(1, input_dim=10))
model.compile(loss='mean_squared_error', optimizer='adamax'), test_y, epochs=1)

print(model.predict(np.array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])))

export_model(model, 'example.model')
```
```cpp
// main.cpp:
#include <iostream>
#include "pt_model.h"
#include "pt_tensor.h"

int main()
    // Initialize model:
    auto model = pt::create("example.model");
    // REQUIRE(model);

    // Create input tensor:
    pt::Tensor in(10);
    in.setData({0, 1, 2, 3, 4, 5, 6, 7, 8, 9});

    // Run prediction:
    pt::Tensor out;
    bool success = model->predict(std::move(in), out);
    // REQUIRE(success);

    // Print output:
    std::cout << out << std::endl;
    return 0;
```
## Supported layer types
The most common layer types used in image recognition and sequence prediction are supported, making many popular model architectures possible:
* Convolutions: `Conv1D`, `Conv2D`, `LocallyConnected1D`.
* Sequence-related: `LSTM`, `Embedding`.
* Activations: `Linear`, `ReLU`, `ELU`, `Softplus`, `Softsign`, `Tanh`, `Sigmoid`, `HardSigmoid`, `Softmax`.
* Other: `Dense`, `Flatten`, `MaxPooling2D`, `BatchNormalization`, `ELU`.
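As a sanity check when comparing the C++ output against Keras, it helps to remember what the simplest of these layers computes: `Dense` with a linear activation is just a matrix product plus a bias. A minimal NumPy sketch of that computation (independent of PocketTensor; NumPy assumed available):

```python
import numpy as np

def dense(x, w, b):
    # Dense layer with linear activation: y = x . w + b
    return x @ w + b

rng = np.random.default_rng(0)
x = rng.random((1, 10)).astype('f')   # one sample, 10 features
w = rng.random((10, 1)).astype('f')   # weights: 10 inputs -> 1 output
b = np.zeros(1, dtype='f')            # bias

y = dense(x, w, b)
print(y.shape)  # (1, 1)
```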