QwQ-32B-Preview-gptqmodel-4bit-vortex-v3
This is a 4-bit GPTQ-quantized version of the QwQ-32B-Preview model (itself built on Qwen2.5-32B), produced with GPTQModel for efficient inference and low-resource deployment.
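As a minimal sketch of how such a checkpoint is typically used: recent versions of Hugging Face Transformers can load GPTQ-quantized weights directly, dispatching to the installed GPTQ kernels. The `ModelCloud/...` hub path, the chat-template call, and the prompt are assumptions for illustration, not confirmed by this page.

```python
# Assumed Hugging Face Hub path, inferred from the model name above.
MODEL_ID = "ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v3"

if __name__ == "__main__":
    # Heavy imports and the multi-GB weight download happen only when run
    # directly; requires a CUDA GPU plus the transformers and gptqmodel packages.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Build a chat prompt and generate a short reply.
    messages = [{"role": "user", "content": "Briefly explain what 4-bit quantization trades off."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights are stored in 4-bit precision, the 32B-parameter model fits in roughly a quarter of the memory its fp16 counterpart would need, at some cost in output quality.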