OpenHands LM 32B V0.1

all-hands/openhands-lm-32b-v0.1

About OpenHands LM 32B V0.1

OpenHands LM v0.1 is a 32B open-source coding model fine-tuned from Qwen2.5-Coder-32B-Instruct using reinforcement learning techniques outlined in SWE-Gym. It is optimized for autonomous software development agents and achieves strong performance on SWE-Bench Verified, with a 37.2% resolve rate. The model supports a 128K token context window, making it well-suited for long-horizon code reasoning and large codebase tasks.

OpenHands LM is designed for local deployment and runs on consumer-grade GPUs such as a single NVIDIA RTX 3090, enabling fully offline agent workflows without depending on proprietary APIs. This release is intended as a research preview; future updates aim to improve generalizability, reduce repetition, and offer smaller variants.
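
Because the model is exposed as a standard chat model, a common local setup is to serve it behind an OpenAI-compatible endpoint (vLLM and SGLang are typical choices) and point a client, or OpenHands itself, at that endpoint. The sketch below assumes such a server is already running at localhost:8000 and serves the model under its Hugging Face name; the host, port, and example prompt are illustrative assumptions, not part of this listing.

# Minimal sketch: query a locally served OpenHands LM through an
# OpenAI-compatible API. Assumes a server (e.g. vLLM or SGLang) is
# already running at localhost:8000 and exposes the model under the
# name "all-hands/openhands-lm-32b-v0.1" -- both are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-needed-for-local",       # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="all-hands/openhands-lm-32b-v0.1",  # assumed served model name
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."}
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)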

Specifications

Context Length: 16,384 tokens
Tokenizer: Other

Pricing

Prompt: 2.600
Completion: 3.400
Image: 0
Request: 0
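
The prompt and completion figures above are presumably quoted in USD per million tokens, the usual convention for listings of this kind, though the page does not state the unit. Under that assumption, a rough per-request cost estimate looks like the sketch below.

# Rough cost estimate, assuming the listed prices are USD per 1M tokens
# (an assumption -- the listing does not state the unit).
PROMPT_PRICE = 2.600      # USD per 1M prompt tokens (assumed unit)
COMPLETION_PRICE = 3.400  # USD per 1M completion tokens (assumed unit)

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE) / 1_000_000

# Example: a 12,000-token prompt with a 1,500-token completion.
print(f"${request_cost(12_000, 1_500):.4f}")  # -> $0.0363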

Last updated: 4/11/2025