## Specifications
| Property | Value |
|---|---|
| Parameters | 1.2B |
| Context Length | 32K tokens |
| Architecture | LFM2.5 (Dense) |
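A quick way to sanity-check these specifications is to inspect the model's config. This is a minimal sketch; the attribute names come from the standard `transformers` config API and are assumptions, not confirmed fields of the LFM2.5 config:

```python
from transformers import AutoConfig

# Load only the configuration, not the weights.
config = AutoConfig.from_pretrained("LiquidAI/LFM2.5-1.2B-Thinking")

print(config.model_type)  # architecture family
# Context length, if the config exposes it under this common name (assumption).
print(getattr(config, "max_position_embeddings", "not exposed"))
```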
- **Math & Logic:** Strong arithmetic and logical reasoning.
- **Chain-of-Thought:** Step-by-step problem decomposition.
- **Fine-tunable:** Compatible with TRL (SFT, DPO, GRPO); see the fine-tuning sketch below.
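Since the model is TRL-compatible, supervised fine-tuning follows TRL's usual pattern. The sketch below uses TRL's `SFTTrainer`; the dataset (`trl-lib/Capybara`) and output directory are illustrative placeholders, not official recommendations:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset in conversational format; swap in your own.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-Thinking",  # loaded from the Hub by name
    train_dataset=dataset,
    args=SFTConfig(output_dir="./lfm2.5-sft"),  # placeholder path
)
trainer.train()
```

DPO and GRPO follow the same shape with `DPOTrainer`/`GRPOTrainer` and their respective configs.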
## Quick Start
Supported runtimes:

- Transformers (shown below)
- llama.cpp
- vLLM (see the serving sketch at the end of this section)
Install:

```bash
pip install "transformers>=5.0.0" torch accelerate
```
Download & Run:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2.5-1.2B-Thinking"

# Load the weights in bfloat16 and spread them across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build the prompt with the model's chat template.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is machine learning?"}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output[0][len(input_ids[0]):], skip_special_tokens=True)
print(response)
```
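As a thinking model, LFM2.5-1.2B-Thinking emits a reasoning trace before its final answer. Continuing from the snippet above, here is a minimal sketch for separating the two, assuming the trace is wrapped in `<think>...</think>` tags (an assumption; check the model's chat template for the actual markers):

```python
# ASSUMPTION: the reasoning trace is delimited by <think>...</think> tags.
THINK_END = "</think>"

if THINK_END in response:
    reasoning, answer = response.split(THINK_END, 1)
    reasoning = reasoning.removeprefix("<think>").strip()
    answer = answer.strip()
else:
    # No trace found; treat the whole output as the answer.
    reasoning, answer = "", response.strip()

print("Reasoning:", reasoning)
print("Answer:", answer)
```

For serving instead of one-off generation, vLLM (listed above) offers an offline chat API. A minimal sketch, with placeholder sampling settings and assuming vLLM supports this architecture:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="LiquidAI/LFM2.5-1.2B-Thinking")
params = SamplingParams(temperature=0.7, max_tokens=512)  # placeholder values

# vLLM applies the model's chat template to the message list.
outputs = llm.chat(
    [{"role": "user", "content": "What is machine learning?"}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```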