
llm-cpp-inference


C++ wrapper for LLM inference using libcurl – a C++ implementation for interacting with locally served large language models (LLMs) via HTTP requests. Built on ollama and libcurl, the project demonstrates LLM inference on a local setup without relying on external APIs.
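The repository's source is not reproduced here, so the following is a minimal sketch of the approach the description outlines: POSTing a JSON request to a locally running ollama server with libcurl. The endpoint and JSON fields follow ollama's documented /api/generate API; the model name ("llama3") and the prompt are placeholder assumptions, not taken from the project.

```cpp
// Minimal sketch: query a local ollama server with libcurl.
// Assumptions: ollama is running on its default port 11434 and the
// "llama3" model is pulled; both are placeholders, not project settings.
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append each chunk of the response body to a std::string.
static size_t write_cb(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // ollama's generate endpoint; no external API is involved.
    const std::string url = "http://localhost:11434/api/generate";
    // "stream": false asks ollama to return one complete JSON object
    // instead of a stream of partial responses.
    const std::string body =
        R"({"model": "llama3", "prompt": "Why is the sky blue?", "stream": false})";

    std::string response;
    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        std::cerr << "curl error: " << curl_easy_strerror(rc) << "\n";
    else
        std::cout << response << "\n";  // raw JSON; the "response" field holds the text

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Compiled with `g++ main.cpp -lcurl` and run against a local `ollama serve` instance, this prints the raw JSON reply; a real client would parse out the `response` field with a JSON library.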

Created: 2024-10-14T07:29:35
Updated: 2024-11-25T10:23:48
Stars: 3 (+0)