learn_ml.deploy package

Submodules

learn_ml.deploy.convert_to_edgetpu module

API for preparing a model for deployment to a Google Coral board.

This script accepts a TF2 SavedModel and converts it into a quantized TFLite model, then compiles that model with the edgetpu_compiler tool. In addition to a command-line interface, it also exposes a Python API. To quantize the model, the script requires a small, representative dataset to determine the input ranges. The TensorFlow documentation says the dataset can be as small as 12 examples, but we recommend around 100. The dataset is accepted as a .npy file containing an array of inputs in the same shape as you pass to the model.

Typical usage example:

python3 convert_to_edgetpu.py foo/tf2_model bar/repr_dataset.npy
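The representative dataset is an ordinary NumPy array of stacked example inputs saved with np.save. A minimal sketch, assuming an image model with 224x224x3 inputs (the shape and use of random data are illustrative only; real examples from your training set should be used):

```python
import numpy as np

# Illustrative: stand-in for ~100 real examples drawn from training data,
# stacked along the first axis. The 224x224x3 input shape is an assumption.
examples = np.random.rand(100, 224, 224, 3).astype(np.float32)

# Save as the .npy file expected by convert_to_edgetpu.py.
np.save("repr_dataset.npy", examples)
```

The first axis indexes examples, so the converter can iterate over the array one input at a time.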

learn_ml.deploy.convert_to_edgetpu._convert_to_tflite(saved_model_dir, representative_dataset_dir, use_tf1=False)[source]

Converts the model to a fully 8-bit integer-quantized TFLite model.

By default, this uses the TF2 converter to convert the model to a TFLite model. It applies the post-training quantization scheme, which requires a representative dataset to quantize the model. If use_tf1 is set, it uses the TF1 converter instead.

Parameters
  • saved_model_dir – Path to the directory containing the TF2 SavedModel

  • representative_dataset_dir – Path to the .npy file containing the representative dataset. This NumPy array should contain multiple input examples.

  • use_tf1 (optional) – True to use the TF1 converter; False (the default) uses the TF2 converter.

Returns

A quantized TFLite model, which the caller must write to a file
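Because the converter returns the model rather than saving it, the caller persists it explicitly. A sketch of that step (the stand-in bytes and filename are illustrative; in real use you would pass the function's actual return value):

```python
# Illustrative stand-in for the bytes returned by _convert_to_tflite();
# a real caller would use that function's return value here.
tflite_model = b"\x00TFL3-example-bytes"

# The converter does not write to disk itself, so save the model
# to a .tflite file before handing it to the Edge TPU compiler.
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```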

learn_ml.deploy.convert_to_edgetpu._edgetpu_compile(tflite_model_path)[source]

Runs the edgetpu_compiler tool on the quantized TFLite model.

The function will write the Edge TPU compiled model to [MODEL_NAME]_edgetpu.tflite. Currently, this function only works on Debian-based systems.

Parameters

tflite_model_path – Path to the quantized .tflite model

Returns

None
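Under the hood this presumably shells out to Coral's edgetpu_compiler binary. A sketch of how such an invocation could be assembled (the helper name and the exact flag set are assumptions, not taken from this module's source; -o sets the compiler's output directory):

```python
import os

def build_edgetpu_command(tflite_model_path):
    """Build an edgetpu_compiler command line for a quantized model.

    The compiler writes [MODEL_NAME]_edgetpu.tflite into the directory
    given by -o. This helper and its flag choices are assumptions about
    what the module runs, for illustration only.
    """
    out_dir = os.path.dirname(tflite_model_path) or "."
    return ["edgetpu_compiler", "-o", out_dir, tflite_model_path]

# A real caller would then execute it, e.g. subprocess.run(cmd, check=True).
cmd = build_edgetpu_command("bar/model_quantized.tflite")
```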

learn_ml.deploy.convert_to_edgetpu._representative_dataset_gen_factory(dataset_dir)[source]

Creates a generator for elements of the representative dataset.

Helper function for creating a generator for the tflite converter.

Parameters

dataset_dir – Path to the .npy file containing the representative dataset. This NumPy array should contain multiple input examples.

Returns

A generator function that can be passed directly to the tflite converter.
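A factory like this typically loads the array once and returns a zero-argument generator function, the form the TFLite converter accepts for its representative_dataset option. A minimal sketch (the implementation details here are assumptions, not taken from the module's source):

```python
import numpy as np

def representative_dataset_gen_factory(dataset_path):
    """Sketch of a factory matching the behavior described above.

    Loads the .npy array of examples and returns a generator function
    that yields one input at a time. Assumed implementation, shown
    for illustration only.
    """
    examples = np.load(dataset_path)

    def gen():
        for example in examples:
            # The converter expects a list of input tensors per step;
            # add a batch dimension of 1 to each example.
            yield [example[np.newaxis, ...].astype(np.float32)]

    return gen
```

The returned gen can then be assigned to the converter's representative_dataset attribute before conversion.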

learn_ml.deploy.convert_to_edgetpu.convert_and_compile(saved_model_dir, representative_dataset_dir, use_tf1=False)[source]

Converts the model to TFLite and compiles it for the Edge TPU.

This function will convert a TF2 SavedModel to a model compiled for the Edge TPU. Once generated, the model will be saved to the file [MODEL_NAME]_edgetpu.tflite

Parameters
  • saved_model_dir – Path to the directory containing the TF2 SavedModel

  • representative_dataset_dir – Path to the .npy file containing the representative dataset. This NumPy array should contain multiple input examples.

  • use_tf1 (optional) – True to use the TF1 converter; False (the default) uses the TF2 converter.

Returns

None

learn_ml.deploy.deploy module

Provides a command-line interface and Python API for deploying a model to the Google Coral board.

Currently, this script is set up to deploy the model via SSH. It therefore requires the address of the Coral board and either a key or a password to access it. Once the model is deployed to the Coral, the script also starts a web server to display the results. The deploy script currently handles only classification problems; support for more model types is in the pipeline. The Python API exposes a deploy function. If called from the command line, you must pass a model (using the -m argument), the address of the Coral board (-a), and EITHER an identity file (-i) OR a password (-p).

Typical usage example:

python3 deploy.py -m foo/model_quantized.tflite -a x.x.x.x -i ~/id_rsa
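The either-identity-file-or-password rule maps naturally onto a mutually exclusive argument group. A sketch of how such a CLI could be declared with argparse (the flag names follow the usage text above, but this is not the module's actual parser):

```python
import argparse

# Sketch of a parser enforcing the CLI contract described above.
parser = argparse.ArgumentParser(description="Deploy a model to a Coral board.")
parser.add_argument("-m", "--model", required=True, help="Path to the tflite model")
parser.add_argument("-a", "--address", required=True, help="Address of the Coral board")

# Exactly one of -i / -p must be given.
auth = parser.add_mutually_exclusive_group(required=True)
auth.add_argument("-i", "--identity-file", help="Path to the SSH identity file")
auth.add_argument("-p", "--password", help="Password for SSH authentication")

# Mirrors the typical usage example above.
args = parser.parse_args(
    ["-m", "foo/model_quantized.tflite", "-a", "x.x.x.x", "-i", "~/id_rsa"]
)
```

With required=True on the group, argparse itself rejects invocations that supply both or neither of -i and -p.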

learn_ml.deploy.deploy.deploy(address, model, identity_file=None, password=None)[source]

Deploys a model to the Coral board.

Connects to the Coral board via SSH to deploy the model. You must pass a TFLite model; for maximum performance, pass a quantized TFLite model, which can be generated using the convert_to_edgetpu module. After deploying the model, the function will start a web server publishing the results.

Parameters
  • address – Address of the Coral board

  • model – Path to the TFLite model to deploy

  • identity_file (optional) – Path to the SSH identity file. Must be provided if password is not provided.

  • password (optional) – Password to use for SSH authentication. Must be provided if identity_file is not provided.

Returns

None

learn_ml.deploy.deploy.deploy_usb(model)[source]

Module contents