llm-cpp-inference
A C++ wrapper for LLM inference using libcurl – a lightweight implementation for interacting with locally served language models (LLMs) over HTTP. Powered by ollama and libcurl, the project demonstrates LLM inference on a local setup without relying on external APIs.
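
As a minimal sketch of the idea, the snippet below POSTs a JSON prompt to a local ollama server with libcurl and prints the raw JSON reply. It assumes ollama's default port `11434` and its `/api/generate` endpoint; the model name `llama3` is illustrative, not necessarily what this repo uses.

```cpp
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append each chunk of the response body to a std::string.
static size_t write_cb(char* data, size_t size, size_t nmemb, void* userp) {
    auto* out = static_cast<std::string*>(userp);
    out->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // ollama's default local endpoint; "llama3" is an illustrative model name.
    const std::string url = "http://localhost:11434/api/generate";
    const std::string body =
        R"({"model":"llama3","prompt":"Why is the sky blue?","stream":false})";

    std::string response;
    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK) {
        std::cerr << "curl error: " << curl_easy_strerror(rc) << "\n";
    } else {
        std::cout << response << "\n";  // raw JSON returned by the server
    }

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Because the request goes to `localhost`, no API keys or network egress are involved; a JSON library (e.g. nlohmann/json) can be layered on top to extract the generated text from the response.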