Nous: Hermes 2 Mixtral 8x7B DPO

nousresearch/nous-hermes-2-mixtral-8x7b-dpo

About Nous: Hermes 2 Mixtral 8x7B DPO

Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model, fine-tuned on top of the Mixtral 8x7B mixture-of-experts (MoE) LLM.

The model was trained on over 1,000,000 entries of primarily GPT-4-generated data, along with other high-quality data from open datasets across the AI landscape, and achieves state-of-the-art performance on a variety of tasks.

#moe
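
The model can be queried through any OpenAI-compatible endpoint that serves this slug. The sketch below is a minimal example, assuming an OpenRouter-style base URL and an API key stored in an `OPENROUTER_API_KEY` environment variable (both assumptions, not part of this page):

```python
# Minimal sketch: chat completion against an OpenAI-compatible endpoint
# serving this model slug. Base URL and env-var name are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed gateway URL
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed env var
)

response = client.chat.completions.create(
    model="nousresearch/nous-hermes-2-mixtral-8x7b-dpo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```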

Specifications

Context Length: 32,768 tokens

Tokenizer: Mistral

Pricing

Prompt: $0.60 / 1M tokens

Completion: $0.60 / 1M tokens

Image: $0

Request: $0
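
As a worked example of the rates above (assuming, as is conventional for such listings, that they are quoted in USD per 1M tokens), a request with 1,000 prompt tokens and 500 completion tokens would cost 1,000/1e6 × 0.60 + 500/1e6 × 0.60 = $0.0009:

```python
# Sketch: estimate per-request cost from the rates above.
# The per-1M-token interpretation of "0.600" is an assumption.
PROMPT_RATE = 0.60      # USD per 1M prompt tokens
COMPLETION_RATE = 0.60  # USD per 1M completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (prompt_tokens * PROMPT_RATE
            + completion_tokens * COMPLETION_RATE) / 1_000_000

# Example: 1,000 prompt tokens and 500 completion tokens -> $0.0009
print(f"${estimate_cost(1_000, 500):.4f}")
```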

Last updated: 4/11/2025