
detectron2onnx-inference

Public

Export a [detectron2](https://github.com/facebookresearch/detectron2) model to [ONNX](https://github.com/onnx/onnx) and run inference using the [caffe2 ONNX backend](https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html). This lets you run inference on a Raspberry Pi with acceptable inference times.

Created: 2021-03-12
Updated: 2024-12-04
Stars: 15