
vllm

Public

A high-throughput and memory-efficient inference and serving engine for LLMs

Created: 2023-02-09T19:23:20
Updated: 2024-05-09T16:46:28
https://docs.vllm.ai
Stars: 43.4K
Stars increase: 92
