---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
datasets:
- open-r1/codeforces-cots
license: mit
tags:
- code
pipeline_tag: text-generation
library_name: transformers
---

# Paper Page

[**Pruning the Unsurprising: Efficient Code Reasoning via First-Token Surprisal**](https://arxiv.org/abs/2508.05988)

# LogicCoder-8B

**LogicCoder-8B** is an 8B-parameter language model fine-tuned for code generation. It is based on DeepSeek-R1-Distill-Llama-8B and trained on the Python subset of the open-r1/codeforces-cots dataset.

The model was fine-tuned on pruned CoT examples produced by our **ASAP** method (**A**nchor-guided, **S**urpris**a**l-polished **P**runing), which yields highly compressed yet semantically informative reasoning traces (see the illustrative sketch at the end of this card).

GitHub Repository: [https://github.com/Zengwh02/ASAP](https://github.com/Zengwh02/ASAP)

# 🧠 Reasoning Mode

We recommend **explicitly activating reasoning mode by inserting `<think>` in the prompt**, as shown in the usage example below.

# 🔧 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("azzzacs/LogicCoder-8B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "azzzacs/LogicCoder-8B",
    device_map="auto",
    trust_remote_code=True,
).eval()

messages = [{"role": "user", "content": "Please write a Python quick sort algorithm."}]
# Append <think> after the assistant tag to explicitly activate reasoning mode,
# as recommended above.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) + "<|Assistant|><think>\n"
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

outputs = model.generate(
    model_inputs.input_ids,
    max_new_tokens=4096,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][len(model_inputs.input_ids[0]):], skip_special_tokens=False))
```
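The example above uses greedy decoding (`do_sample=False`) for reproducibility. If you prefer sampled outputs, a variant such as the following can be used; the temperature and top-p values here are illustrative defaults, not official recommendations for this model:

```python
# Sampling variant (parameter values are illustrative assumptions).
outputs = model.generate(
    model_inputs.input_ids,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    eos_token_id=tokenizer.eos_token_id,
)
```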
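# 📎 First-Token Surprisal (Illustrative)

For intuition only: the "first-token surprisal" in the paper title refers to scoring a reasoning step by `-log p(first token of step | prefix)` under a causal language model. The sketch below is a minimal, hypothetical illustration of that signal, not the official ASAP pruning pipeline; the step splitting, prompt, and choice of scoring model are all assumptions made for the example (see the GitHub repository for the actual implementation).

```python
# Minimal sketch (NOT the official ASAP implementation): compute the
# first-token surprisal of each reasoning step under a causal LM.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Llama-8B")
lm = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Llama-8B", device_map="auto"
).eval()

# Hypothetical reasoning trace, pre-split into steps for illustration.
prefix = "Q: Sort a list in Python.\n<think>\n"
steps = [
    "First, pick a pivot element.",
    "Then partition the list around the pivot.",
    "Finally, recurse on both halves.",
]

with torch.no_grad():
    for step in steps:
        prefix_ids = tok(prefix, return_tensors="pt").input_ids.to(lm.device)
        first_id = tok(step, add_special_tokens=False).input_ids[0]
        logits = lm(prefix_ids).logits[0, -1]  # next-token distribution at the step boundary
        logp = torch.log_softmax(logits, dim=-1)[first_id]
        print(f"{-logp.item():.2f} nats | {step}")  # higher = more surprising step
        prefix += step + "\n"
```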