glider-gguf
High-performance quantized language model
Common Product, Programming, GGUF, Quantized Model
PatronusAI/glider-gguf is a high-performance quantized language model hosted on the Hugging Face platform. It uses the GGUF format and is available in multiple precision and quantization variants, including BF16, Q8_0, Q5_K_M, and Q4_K_M. The model is built on the phi3 architecture and comprises 3.82 billion parameters. Its main strengths are efficient computation and a compact model size, making it well suited to scenarios that require fast inference with low resource consumption. The model is provided by PatronusAI and targets developers and enterprises that need natural language processing and text generation capabilities.
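A minimal sketch of how a GGUF checkpoint like this is typically used: download one quantized file from the Hugging Face Hub and run it locally with llama-cpp-python. The exact .gguf filename, context size, and generation parameters below are assumptions, not confirmed by the model page; check the repository's file list for the variant you want (Q8_0, Q5_K_M, Q4_K_M, etc.).

```python
# Sketch: fetch a quantized GGUF file and run local inference with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M variant (the smallest of the listed quantizations).
model_path = hf_hub_download(
    repo_id="PatronusAI/glider-gguf",
    filename="glider_Q4_K_M.gguf",  # assumed filename; verify on the model page
)

# Load the model; n_ctx and n_gpu_layers should be tuned to the host machine.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple text-completion call.
output = llm(
    "Summarize the advantages of GGUF quantization in one sentence.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```

Smaller quantizations such as Q4_K_M trade some accuracy for lower memory use and faster loading, while Q8_0 and BF16 stay closer to the original weights at a larger footprint.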
glider-gguf Visit Over Time
Monthly Visits: 26,103,677
Bounce Rate: 43.69%
Pages per Visit: 5.5
Visit Duration: 00:04:43