QwQ-32B-Preview-gptqmodel-4bit-vortex-v3
This is a 4-bit GPTQ quantization of Qwen's QwQ-32B-Preview model (itself derived from Qwen2.5-32B-Instruct), intended for efficient inference and deployment on resource-constrained hardware.
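A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under a repo id matching this page's title and that the GPTQModel library (the quantizer named in the title) is installed. The repo id and generation parameters below are illustrative assumptions, not confirmed by this page.

```python
# Illustrative sketch: loading the 4-bit checkpoint with GPTQModel.
# Assumed repo id -- adjust to the actual Hugging Face location.
MODEL_ID = "QwQ-32B-Preview-gptqmodel-4bit-vortex-v3"


def main():
    # Heavy imports and the multi-GB download are kept inside main()
    # so the sketch can be read/tested without pulling the weights.
    from gptqmodel import GPTQModel
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = GPTQModel.load(MODEL_ID)  # dequantizes 4-bit weights at inference time

    prompt = "Explain why the sky is blue in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Because the weights are stored in 4-bit GPTQ format, the checkpoint needs roughly a quarter of the VRAM of the full-precision 32B model, at the cost of the on-the-fly dequantization done per layer.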