
Triton-TensorRT-Inference-CRAFT-pytorch

Public

Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a converter from PyTorch -> ONNX -> TensorRT and inference pipelines for both standalone TensorRT and Triton server (multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.

Created: 2021-07-13T22:02:24
Updated: 2024-11-14T15:35:53
Stars: 32
Stars Increase: 0