Kimi K2 (kimi-k2)

Price: $0.40 (input) • $2.00 (output)

Kimi K2 is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. Kimi K2 excels across a broad range of benchmarks, particularly in coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) tasks. It supports long-context inference up to 128K tokens and is designed with a novel training stack that includes the MuonClip optimizer for stable large-scale MoE training.

Context window: 262,000 tokens
Max output tokens: 100,000
Knowledge cutoff: —
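A request has to fit inside these limits: the prompt plus the requested output cannot exceed the 262,000-token context window, and the output itself is capped at 100,000 tokens. A minimal sketch of that budget check (the helper name and the example numbers are illustrative, not part of this card):

```python
CONTEXT_WINDOW = 262_000  # total tokens the model can attend to
MAX_OUTPUT = 100_000      # hard cap on generated tokens

def fits(prompt_tokens: int, max_output_tokens: int) -> bool:
    """True if a request stays within the documented limits."""
    if max_output_tokens > MAX_OUTPUT:
        return False
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits(200_000, 50_000))   # True: 250,000 <= 262,000
print(fits(200_000, 100_000))  # False: 300,000 > 262,000
```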

Modalities

Text: input and output
Image: not supported
Audio: not supported
Video: not supported

Endpoints

Chat Completions: v1/chat/completions
Responses: v1/responses
Realtime: v1/realtime
Assistants: v1/assistants
Batch: v1/batch
Fine-tuning: v1/fine-tuning
Embeddings: v1/embeddings
Image generation: v1/images/generations
Videos: v1/videos
Image edit: v1/images/edits
Speech generation: v1/audio/speech
Transcription: v1/audio/transcriptions
Translation: v1/audio/translations
Moderation: v1/moderations
Completions (legacy): v1/completions
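Of these endpoints, Chat Completions is the usual way to call a chat model like this one. A minimal sketch of the request body (the base URL and auth header are deployment-specific and omitted; the field names follow the common OpenAI-compatible convention, which is an assumption, not something this card specifies):

```python
import json

# Hypothetical request body for POST {base_url}/v1/chat/completions.
# The exact base URL and authentication depend on the hosting provider.
payload = {
    "model": "kimi-k2",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize MoE routing in one sentence."},
    ],
    "max_tokens": 512,   # must stay within the 100,000 output cap
    "stream": False,
}

body = json.dumps(payload)  # serialized JSON to send as the request body
print(body[:30])
```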

Features

Streaming: supported
Function calling: not supported
Structured outputs: not supported
Fine-tuning: not supported
Distillation: not supported
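Since streaming is the one supported feature, it is worth showing what consuming it looks like. Streamed responses typically arrive as server-sent events, one `data: {json}` line per chunk, terminated by `data: [DONE]`. A sketch of parsing such a stream (the chunk schema here is the common OpenAI-compatible convention; the provider's actual wire format may differ):

```python
import json

def parse_sse_line(line: str):
    """Extract the content delta from one SSE line, or None.

    Assumes the widespread OpenAI-compatible streaming shape:
    data: {"choices": [{"delta": {"content": "..."}}]}
    """
    if not line.startswith("data: "):
        return None
    data = line[len("data: "):]
    if data.strip() == "[DONE]":
        return None  # end-of-stream sentinel
    chunk = json.loads(data)
    return chunk["choices"][0]["delta"].get("content")

# Example chunks as they might appear on the wire:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = "".join(p for p in (parse_sse_line(l) for l in sample) if p)
print(text)  # Hello
```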

Tools

Tools supported by this model when using the Responses API.

Web search: not supported
File search: not supported
Image generation: not supported
Code interpreter: not supported
Computer use: not supported
MCP: not supported