LFM2.5-1.2B-Base is the pre-trained foundation model for the LFM2.5 series. Ideal for fine-tuning on custom datasets or building specialized checkpoints. Not instruction-tuned; use LFM2.5-1.2B-Instruct for chat applications.

Specifications

Property          Value
Parameters        1.2B
Context Length    32K tokens
Architecture      LFM2.5 (Dense)

Fine-tuning: TRL compatible (SFT, DPO, GRPO); see the sketch after this list.

Custom Training: Build domain-specific models.

32K Context: Extended context for long documents.
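
Since the base model ships without instruction tuning, supervised fine-tuning with TRL is the typical next step. Below is a minimal sketch using TRL's SFTTrainer; the dataset name (yourname/your-dataset) and the training hyperparameters are placeholders for illustration, not values from this model card.

# Minimal SFT sketch with TRL. Dataset name and hyperparameters are
# placeholders -- adjust for your data and hardware.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

model_id = "LiquidAI/LFM2.5-1.2B-Base"

# Any dataset with a "text" column works for plain SFT.
dataset = load_dataset("yourname/your-dataset", split="train")  # placeholder

config = SFTConfig(
    output_dir="lfm2.5-1.2b-sft",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model_id,      # SFTTrainer can load the model from the hub ID
    args=config,
    train_dataset=dataset,
)
trainer.train()

For memory-constrained setups, a peft LoraConfig can also be passed to SFTTrainer via its peft_config argument instead of training all 1.2B parameters.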

Quick Start

Install:
pip install "transformers>=5.0.0" torch accelerate
Download & Run:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2.5-1.2B-Base"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights on available GPU(s), falling back to CPU
    dtype="bfloat16",    # load weights in half precision to halve memory use
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Base model uses raw text completion (not chat template)
inputs = tokenizer("The future of AI is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
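
Because the base model completes raw text rather than following instructions, few-shot prompting usually steers it better than a bare question. The snippet below continues from the code above (reusing model and tokenizer); the translation prompt format is a common convention for base models, not something this model card prescribes.

# Base models respond better to few-shot prompts than to bare questions.
# Reuses `model` and `tokenizer` from the Quick Start snippet above.
few_shot = (
    "Translate English to French.\n"
    "English: Hello\nFrench: Bonjour\n"
    "English: Thank you\nFrench: Merci\n"
    "English: Good morning\nFrench:"
)
inputs = tokenizer(few_shot, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=16,
    do_sample=False,  # greedy decoding is fine for short, constrained completions
)
# Slice off the prompt tokens so only the new completion is printed.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))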