Swallow: Llama 3.1 Swallow 8B Instruct V0.3
tokyotech-llm/llama-3.1-swallow-8b-instruct-v0.3
About Swallow: Llama 3.1 Swallow 8B Instruct V0.3
Llama 3.1 Swallow 8B is a large language model built by continual pre-training on Meta Llama 3.1 8B. Llama 3.1 Swallow enhances the Japanese language capabilities of the original Llama 3.1 while retaining its English language capabilities.
For continual pre-training, Swallow used approximately 200 billion tokens sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and coding content (see the Training Datasets section of the base model). The instruction-tuned (Instruct) models were built by supervised fine-tuning (SFT) on synthetic data built specifically for Japanese.
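For reference, the snippet below is a minimal sketch of running the Instruct model with Hugging Face Transformers, assuming the checkpoint is published under the model ID shown above. The prompt and generation settings are illustrative, not official recommendations.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokyotech-llm/llama-3.1-swallow-8b-instruct-v0.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32
    device_map="auto",           # place weights on available GPU(s)/CPU
)

# The Instruct variant expects chat-formatted input; the tokenizer's
# chat template produces the Llama 3.1-style prompt.
messages = [
    {"role": "user", "content": "日本の首都はどこですか？"}  # "What is the capital of Japan?"
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))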
Specifications
Context Length: 16,384 tokens
Tokenizer: Llama3
Pricing
Prompt: 0.099
Completion: 0.199
Image: 0
Request: 0
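As a worked example of the price table, the sketch below estimates the cost of a single request. It assumes the listed figures are USD per million tokens, which is a common catalog convention but an assumption here; the image and request surcharges are zero for this model.

# Hypothetical cost estimate for one request, assuming the prices above
# are USD per 1M tokens (an assumption; the table lists bare numbers).
PROMPT_PRICE = 0.099      # per 1M prompt tokens
COMPLETION_PRICE = 0.199  # per 1M completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (prompt_tokens * PROMPT_PRICE
            + completion_tokens * COMPLETION_PRICE) / 1_000_000

# Example: a 1,000-token prompt with a 500-token completion.
print(f"${request_cost(1_000, 500):.6f}")  # ≈ $0.000199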
Last updated: 4/11/2025