🔥 Update (April 5): I’ve released the complete training notebook, codebase, and a comprehensive PDF guide to help beginners and enthusiasts understand and reproduce this model's fine-tuning process.
❤️ Special thanks to the Unsloth open-source library and @KyleHessling1 for their support.
👉 GitHub Repository: Jackrong-llm-finetuning-guide. Visit the repo to dive into the codebase and reproduce the results locally or on Colab.
🔗 Qwopus3.5-27b Complete Fine-Tuning Guide (PDF)
A Note: My goal isn't just to detail a workflow, but to demystify LLM training. Beyond the social media hype, fine-tuning isn't an unattainable ritual—often, all you need is a Google account, a standard laptop, and relentless curiosity.
No one starts as an expert, but every expert was once brave enough to begin.
All training and testing for this project were self-funded. If you find this model or guide helpful, a Star ⭐️ on GitHub would be the greatest encouragement. Thank you! 🙏
NOTE
Models optimized toward the Claude series' reasoning style are released under the Qwopus3.5 series name; the latest version is 🌟Qwopus3.5-v3.
Recent advances in language agents have predominantly focused on improving reasoning accuracy through Chain-of-Thought (CoT) and self-reflection mechanisms, encouraging models to iteratively refine their reasoning before taking actions.
However, emerging evidence suggests that such "pre-action overthinking" is not always optimal for sequential decision-making. Instead, agent performance can be more effectively improved through a trial-and-error paradigm, where actions are executed early and refined based on environmental feedback.
Reflexion [1] demonstrates that agents can significantly improve decision-making by leveraging trial, error, and self-reflection — shifting the role of reflection from pre-action deliberation to post-action correction, enabling agents to learn from concrete execution outcomes rather than speculative reasoning.
Post-failure reflection and retry [2] substantially boost performance, providing strong empirical evidence that reflection is most effective when grounded in execution outcomes rather than purely internal reasoning.
For multi-step and tool-augmented agent systems, performance should not be optimized solely through deeper pre-execution reasoning. A more effective strategy is an execution-driven optimization loop — where agents perform lightweight initial reasoning, act in the environment, and iteratively refine their behavior based on feedback signals.
Paradigm Shift: from "reason-then-act" → "act-then-refine"
The objective is not to achieve optimal reasoning in a single pass, but to enable robust task completion through iterative interaction and correction.
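As a toy illustration of this act-then-refine loop (a sketch only, not the project's code; the guessing environment and all names are invented), the agent acts early on lightweight reasoning, then refines each subsequent action from concrete environment feedback:

```python
# Minimal, self-contained sketch of an "act-then-refine" (Reflexion-style)
# loop. The toy task: find a hidden number; execution feedback says
# whether a guess was too low or too high.

from dataclasses import dataclass

@dataclass
class Outcome:
    success: bool
    error: str = ""

class ToyEnv:
    """Environment with a hidden target; execution yields concrete feedback."""
    def __init__(self, target: int):
        self.target = target

    def execute(self, action: int) -> Outcome:
        if action == self.target:
            return Outcome(True)
        return Outcome(False, "too low" if action < self.target else "too high")

def act_then_refine(env: ToyEnv, lo: int = 0, hi: int = 100, max_trials: int = 10):
    """Light initial reasoning, early action, refinement from feedback."""
    memory = []  # verbal reflections accumulated across trials
    for _ in range(max_trials):
        action = (lo + hi) // 2            # lightweight initial guess
        outcome = env.execute(action)      # act in the environment
        if outcome.success:
            return action, memory          # task completed
        # Post-action correction: reflect on the concrete error.
        memory.append(f"guessed {action}: {outcome.error}")
        if "too low" in outcome.error:
            lo = action + 1
        else:
            hi = action - 1
    return None, memory                    # unsolved within the trial budget

action, trace = act_then_refine(ToyEnv(37))
# action → 37, after two reflections (50 too high, 24 too low)
```

The point of the sketch is the loop structure: no attempt at a perfect first answer, just a cheap initial action followed by feedback-driven refinement.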
Qwopus3.5-9B-v3 is a reasoning-enhanced model based on Qwen3.5-9B, designed to simultaneously improve reasoning stability and correctness while optimizing inference efficiency — ultimately achieving stronger cross-task generalization capabilities, particularly in programming.
Key Highlights:
The v2 model was primarily trained through SFT on CoT data distilled from strong teacher models such as Claude. While this can transfer high‑quality reasoning patterns, CoT traces from third‑party datasets do not always faithfully reflect a model’s true internal reasoning process — and after analysis, I found some portions may even be “fabricated”, meaning the traces were not actually generated by the claimed teacher model [3][4].
Prior work further shows that CoT explanations can act as post-hoc rationalizations rather than genuine step-by-step reasoning [3]. As a result, student models trained on such traces risk learning surface imitation rather than the underlying reasoning process, which motivated the following shift from v2 to v3:
| | v2 (Distillation) | v3 (Structural Alignment) |
|---|---|---|
| CoT Source | Third-party distilled traces | Curated, verifiable reasoning chains |
| Learning Target | Imitate teacher outputs | Learn process-level reasoning |
| Reasoning Style | Compressed, potentially fabricated | Explicit, step-by-step, faithful |
| Robustness | Lower on unseen tasks | Higher generalization |
v3 focuses on improving the faithfulness, completeness, and structural clarity of reasoning traces. Instead of imitating compressed teacher CoT, the model is trained to produce more explicit and verifiable intermediate steps — enabling a transition from “answer imitation” to process-level reasoning learning.
This improves both the interpretability and reliability of the reasoning process, providing a more stable foundation for downstream multi-step and agent-based tasks.
⚠️ Side Effect: The generated CoT length in v3 will be slightly longer than v2, as a direct consequence of more explicit intermediate reasoning.
Inference for all models was conducted under the Unsloth runtime environment using bfloat16 (BF16) precision, which balances numerical range against memory efficiency and is well suited to 9B-scale inference. Answer verification, partial chain-of-thought adjudication, and statistical analysis were cross-validated using GPT-5.4-Pro (Thinking) and Claude Opus 4.6 (Thinking) to ensure accuracy and reproducibility of the evaluation outcomes.
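As context for the BF16 choice, a back-of-the-envelope calculation of weight memory at 9B parameters (weights only, decimal GB; KV cache and activations come on top):

```python
# Approximate weight-only memory footprint of a 9B-parameter model
# at several common precisions.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Weight memory in decimal gigabytes."""
    return n_params * bytes_per_param / 1e9

n = 9e9  # 9B parameters
fp32 = weight_memory_gb(n, 4)    # 36.0 GB
bf16 = weight_memory_gb(n, 2)    # 18.0 GB -- half of FP32, same exponent range
int4 = weight_memory_gb(n, 0.5)  #  4.5 GB (quantized, for comparison)
```

BF16 keeps FP32's 8-bit exponent (so the dynamic range needed for inference is preserved) while halving memory, which is why it is a common default at this scale.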
HumanEval
I evaluated three 9B-scale Qwen-family models on the full 164-task HumanEval benchmark under a task-level adjudication protocol that resolves code-extraction pollution, answer/code separation issues, and clearly inferable truncated outputs using raw generations. Under this fair and strict evaluation setting, Qwopus3.5-9B-v3 achieves the best base pass@1 of 87.80% (144/164), outperforming both Qwen3.5-9B (82.93%, 136/164) and Claude-Distilled-v2 (82.32%, 135/164). On the stricter plus pass@1 evaluation, Qwopus3.5-9B-v3 extends its lead to 82.93% (136/164), compared with 77.44% (127/164) for the official baseline (+5.49 pp) and 78.66% (129/164) for the distilled variant.
| Model | Base pass@1 | Plus pass@1 | Rescues (From GPT) | Improvement vs Qwen3.5-9B |
|---|---|---|---|---|
| Qwopus3.5-9B-v3 | 87.80% (144/164) | 82.93% (136/164) | 1 | 📈 Base: +4.87 pp / Plus: +5.49 pp |
| Qwen3.5-9B | 82.93% (136/164) | 77.44% (127/164) | 2 | Baseline |
| Claude-Distilled-v2 | 82.32% (135/164) | 78.66% (129/164) | 0 | 📉 Base: -0.61 pp / 📈 Plus: +1.22 pp vs Qwen3.5-9B |
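The headline numbers in the table reduce to simple arithmetic; this sketch recomputes the pass@1 percentages and percentage-point deltas from the raw pass counts:

```python
# Recompute the HumanEval table's pass@1 percentages and the
# percentage-point improvements from the raw counts (164 tasks).

TOTAL = 164

def pass_at_1(passed: int, total: int = TOTAL) -> float:
    """pass@1 as a percentage, rounded to 2 decimals."""
    return round(100 * passed / total, 2)

v3_base, qwen_base = pass_at_1(144), pass_at_1(136)   # 87.8, 82.93
v3_plus, qwen_plus = pass_at_1(136), pass_at_1(127)   # 82.93, 77.44

base_delta = round(v3_base - qwen_base, 2)  # +4.87 pp
plus_delta = round(v3_plus - qwen_plus, 2)  # +5.49 pp
```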


Note: The test results presented here differ from the scores on the 9B-v2 model card because the context length was increased for this evaluation. Consequently, the number of tasks affected by context window truncation has changed for each model, leading to different final scores. Please ensure comparisons are made under the same variable settings.
All post-evaluation standard result files will be uploaded to this repository for transparency and reproducibility. These include:
- Jackrong_Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2_humaneval_all_evalonly_eval_results
- Jackrong_Qwopus3.5-9B-v3-test1_humaneval_all_evalonly_eval_results
- qwen_Qwen3.5-9B_humaneval_all_evalonly_eval_results

⚠️ Note on evaluation artifacts:
The released result files are based on raw model generations, which may contain formatting issues (e.g., Markdown wrappers, answer/code mixing), truncation, or minor token-level corruption.
I evaluated on 280 MMLU-Pro questions across the following domains: Biology, Chemistry, Computer Science, Health, Mathematics, Physics, and Other Sciences.
All question IDs are identical across both model runs.
| Model | Correct | Total | Accuracy |
|---|---|---|---|
| Qwen3.5-9B | 225 | 280 | 80.36% |
| Qwopus3.5-9B-v3 | 229 | 280 | 81.79% |
Result: Qwopus3.5-9B-v3 leads by +1.43 pp.
| Metric | Qwen3.5-9B | Qwopus3.5-9B-v3 |
|---|---|---|
| Avg think length | 7116 chars | 5313 chars |
| Passes / 10k chars | 1.26 | 1.66 |
| Chars / correct pass | 7938 | 6032 |
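The efficiency metrics above are mechanically related; a small sketch recomputing them from the table's values (passes per 10k characters is just the reciprocal of chars per correct pass, scaled):

```python
# Derivations behind the efficiency table: passes per 10k think-characters
# follows directly from chars per correct pass, and the percentage
# reductions quoted in the analysis are simple relative deltas.

def passes_per_10k(chars_per_correct: float) -> float:
    return round(10_000 / chars_per_correct, 2)

def reduction_pct(before: float, after: float) -> float:
    return round(100 * (before - after) / before, 1)

qwen_rate = passes_per_10k(7938)       # 1.26 for Qwen3.5-9B
v3_rate = passes_per_10k(6032)         # 1.66 for Qwopus3.5-9B-v3
think_cut = reduction_pct(7116, 5313)  # 25.3 (% shorter mean think length)
cost_cut = reduction_pct(7938, 6032)   # 24.0 (% fewer chars per correct pass)
```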

While the overall accuracy margin (+1.43 pp) is modest, Qwopus3.5-9B-v3 fundamentally shifts the accuracy-cost paradigm, achieving its victory while spending significantly less reasoning budget. With a 25.3% reduction in mean think length and 24.0% fewer characters per correct answer, this iteration is highly optimized for latency, token budget, and context pressure.
Furthermore, across the mixed domain profile, Qwopus3.5-9B-v3 offsets Qwen3.5-9B's slight edge in biology, CS, and math by excelling in physics and chemistry and by significantly lowering its unfinished-output rate. Its final rank owes as much to raw correctness as to an improved ability to finish its analyses cleanly and reliably.
    Base Model (Qwen3.5-9B)
            │
            ▼
    Qwen3.5-9B fine-tuned with Unsloth
            │
            ▼
    Supervised Fine-Tuning (SFT) + LoRA
    (Response-Only Training masked on "<|im_start|>assistant\n<think>")
            │
            ▼
    Qwopus3.5-9B-v3
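For readers new to "response-only training": the loss is masked on everything up to and including the assistant-start marker, so only assistant/response tokens are learned. A hypothetical sketch of that masking step (not the project's actual training code; token ids and the marker sequence are invented, and -100 is the conventional ignore index in Hugging Face-style trainers):

```python
# Toy sketch of response-only label masking: every token up to and
# including the assistant-start marker gets label -100 (ignored by the
# loss), so only response tokens contribute to training.
# Token ids and the marker sequence below are invented for illustration.

IGNORE_INDEX = -100

def mask_prompt_labels(input_ids, marker):
    """Copy input_ids to labels, masking everything through `marker`."""
    labels = list(input_ids)
    n, m = len(input_ids), len(marker)
    for i in range(n - m + 1):
        if input_ids[i:i + m] == marker:
            for j in range(i + m):      # prompt tokens + the marker itself
                labels[j] = IGNORE_INDEX
            break                       # mask only up to the first match
    return labels

# prompt ids [5, 6, 7], assistant marker [1, 2], response ids [8, 9]
labels = mask_prompt_labels([5, 6, 7, 1, 2, 8, 9], [1, 2])
# → [-100, -100, -100, -100, -100, 8, 9]
```

If no marker is found, nothing is masked, mirroring the fallback behavior one would want in practice.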
The model includes targeted optimizations addressing Qwen3.5's tendency toward excessive or repetitive reasoning on simple queries. By distilling the structured reasoning habits of top-tier models like Claude Opus, Qwopus3.5-9B-v3 adopts a highly organized, step-by-step cognitive layout.
Example:

    The user is asking about [Topic A] and how it differs from [Topic B].
    This is a [Task type] question.

    Let me break this down:
    1. What is [Topic A]?
       - [Fact/Mechanism 1]
       - [Fact/Mechanism 2]
    2. What is [Topic B]?
       - [Fact/Mechanism 1]
    3. Key differences:
       - [Comparison Point 1]
       - [Comparison Point 2]

    Let me make sure to be accurate: [...]
    Actually, I should double-check: is [Fact] used before [Fact]? Yes, typically...

    Let me provide a clear, well-structured answer:
The model was fine-tuned on a high-fidelity reasoning dataset, which was meticulously curated from a blend of premium open-source sources on Hugging Face. This dataset is the result of a rigorous mixing and cleaning process, specifically designed to filter out low-quality responses and ensure consistently strong logical performance across diverse analytical domains.
(Rest assured, the entire process is strictly by-the-book and 100% compliant with all terms and open-source licenses!)
Significant thanks to the Unsloth AI team for making rapid fine-tuning of large language models accessible. We also acknowledge the Qwen team and the open-source community developers producing exceptional distilled datasets.
This qwen3_5 model was trained 2x faster with Unsloth and Huggingface's TRL library.
If you use this model in your research or projects, please cite:
    @misc{jackrong_qwen35_9b_v3,
      title        = {Jackrong/Qwopus3.5-9B-v3},
      author       = {Jackrong},
      year         = {2026},
      publisher    = {Hugging Face},
      howpublished = {\url{https://huggingface.co/Jackrong/Qwopus3.5-9B-v3}}
    }
1. Shinn, N., Cassano, F., Berman, E., Gopinath, A., Narasimhan, K., & Yao, S. (2023). Reflexion: Language Agents with Verbal Reinforcement Learning. arXiv:2303.11366.
2. Bensal, S., Jamil, U., Bryant, C., Russak, M., Kamble, K., Mozolevskyi, D., Ali, M., & AlShikh, W. (2025). Reflect, Retry, Reward: Self-Improving LLMs via Reinforcement Learning. arXiv:2505.24726. https://arxiv.org/abs/2505.24726
3. Anthropic (2025). Reasoning Models Don't Always Say What They Think. https://www.anthropic.com/research/reasoning-models-dont-say-think
4. Lyu et al. (2023). Faithful Chain-of-Thought Reasoning. ACL. https://aclanthology.org/2023.ijcnlp-main.20/