Qwen2.5-Coder-3B-Instruct-GPTQ-Int4
A 3B-parameter instruction-tuned model in the Qwen2.5-Coder series
Common Product · Programming · Code Generation · Code Inference
Qwen2.5-Coder is the latest series of code-specific Qwen large language models, designed for code generation, code reasoning, and code fixing. The series builds on Qwen2.5, scaling the training data to 5.5 trillion tokens that include source code, text-code grounding data, and synthetic data. The flagship Qwen2.5-Coder-32B is among the strongest open-source code LLMs, with coding capabilities comparable to GPT-4o. This particular model is the 3B-parameter instruction-tuned variant quantized to 4-bit with GPTQ; it is a causal language model built on the transformers architecture and trained through both pre-training and post-training stages.
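The quantized checkpoint can be used through the standard Hugging Face transformers chat workflow. Below is a minimal sketch, assuming the repository id Qwen/Qwen2.5-Coder-3B-Instruct-GPTQ-Int4 on the Hugging Face Hub, a CUDA-capable GPU, and a GPTQ backend (e.g. auto-gptq/optimum) installed alongside transformers and accelerate; the prompt is only an illustration.

```python
# Minimal sketch: loading the GPTQ-Int4 checkpoint with Hugging Face transformers.
# Assumes transformers + a GPTQ backend + accelerate are installed and a GPU is available;
# adjust model_name/device_map for your environment.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-3B-Instruct-GPTQ-Int4"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # keep the dtype stored in the quantized checkpoint
    device_map="auto",    # place weights on the available GPU(s)
)

# Illustrative coding prompt; any instruction-style request works the same way.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,   # append the assistant-turn marker before generation
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated completion.
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```

The same chat-template pattern applies to code-reasoning or code-fixing requests; only the message content changes.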
Qwen2.5-Coder-3B-Instruct-GPTQ-Int4 Visits Over Time
Monthly Visits: 19,075,321
Bounce Rate: 45.07%
Pages per Visit: 5.5
Visit Duration: 00:05:32