This is the GGUF quantized version of Qwen/Qwen3-14B, a 14-billion-parameter language model with strong reasoning ability. After conversion to GGUF, it can be run in local inference frameworks such as llama.cpp, LM Studio, Open WebUI, and GPT4All.
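As a minimal sketch of local use with llama.cpp, the quantized file can be loaded directly from the command line. The filename and quantization level below are assumptions; substitute the GGUF file you actually downloaded:

```shell
# Hypothetical filename: replace with the GGUF file you downloaded.
# -m  path to the quantized model
# -p  prompt text
# -n  maximum number of tokens to generate
./llama-cli -m Qwen3-14B-Q4_K_M.gguf \
    -p "Explain GGUF quantization in one sentence." \
    -n 128
```

Lower-bit quantizations (e.g. Q4_K_M) trade some accuracy for a smaller memory footprint; pick the level that fits your hardware.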
Tags: Natural Language Processing · GGUF · Multiple Languages