
licence-plate-triton-server-ensemble

Public

A Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. The repository uses a tech stack including YOLOv8, ONNX, EasyOCR, Triton Inference Server, OpenCV (cv2), MinIO, Docker, and Kubernetes, all deployed on an NVIDIA K80 GPU with CUDA 11.4.
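As an illustration of the kind of pre-processing such a Python backend would run before the YOLOv8 ONNX model, here is a minimal numpy-only sketch: letterbox-resize a camera frame to a square model input, then convert it to the NCHW float32 layout YOLOv8 expects. The function name, the 640-pixel input size, and the nearest-neighbour resize (a cv2-free stand-in for `cv2.resize`) are assumptions for this sketch, not code from the repository.

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 640) -> np.ndarray:
    """Letterbox an HWC uint8 image into a square NCHW float32 blob.

    Hypothetical sketch of a Triton Python-backend pre-processing step;
    real code would typically use cv2.resize with proper interpolation.
    """
    h, w = image.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index sampling (avoids a cv2 dependency).
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = image[ys][:, xs]
    # Pad to size x size with a neutral grey border, image centred.
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    # HWC uint8 -> NCHW float32 in [0, 1].
    blob = canvas.astype(np.float32) / 255.0
    return blob.transpose(2, 0, 1)[np.newaxis]

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print(preprocess(frame).shape)  # (1, 3, 640, 640)
```

In a Triton ensemble, a step like this would sit in a Python-backend model ahead of the ONNX detector, with a similar Python model after it for decoding boxes and handing crops to EasyOCR.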

Created: 2023-04-07T12:10:15
Updated: 2024-04-15T18:48:57
https://rushai.dev/article/5
Stars: 4