Researchers from the Kimi Team have introduced Kimi k1.5, a next-generation multimodal LLM designed to overcome these challenges by integrating RL with extended context capabilities. The model employs techniques such as long-context scaling, which expands the context window to 128,000 tokens and allows it to reason over much larger problem contexts. Unlike prior approaches, Kimi k1.5 avoids complex machinery such as Monte Carlo tree search or learned value functions, opting instead for a streamlined RL framework (a hedged sketch of what such a value-function-free update could look like follows below). The research team also carefully curated the RL prompt set to improve the model's adaptability, drawing on diverse prompts that span STEM, coding, and general reasoning tasks.
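The article does not reproduce the team's training code, but the idea of RL without a value function can be illustrated with a simple baseline: use the mean reward of the responses sampled for the same prompt in place of a learned critic. The sketch below is a minimal, assumption-laden illustration; names like `policy`, `sample`, `log_prob`, and `reward_fn` are placeholders, not the Kimi team's API.

```python
import torch

def rl_step(policy, optimizer, prompt, reward_fn, k=8):
    """One REINFORCE-style update with a per-prompt mean-reward baseline
    instead of a learned value function (illustrative sketch only)."""
    responses = [policy.sample(prompt) for _ in range(k)]          # k rollouts per prompt
    rewards = torch.tensor([reward_fn(prompt, r) for r in responses])

    baseline = rewards.mean()                                      # replaces a critic network
    advantages = rewards - baseline                                # relative quality of each rollout

    loss = torch.tensor(0.0)
    for resp, adv in zip(responses, advantages):
        log_prob = policy.log_prob(prompt, resp)                   # summed token log-probabilities
        loss = loss - adv * log_prob                               # push up above-average responses
    loss = loss / k

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the baseline is computed from the sampled group itself, no extra value model has to be trained or stored, which is one way a "streamlined" RL pipeline can be realized.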
Kimi k1.5 also demonstrated significant gains in token efficiency through its long-to-short context training methodology, which transfers reasoning priors from long-context models to shorter-context models while preserving performance and reducing token consumption. The model achieved strong results across multiple benchmarks, including 96.2% exact-match accuracy on MATH500, a 94th-percentile ranking on Codeforces, and a 77.5% pass rate on AIME, surpassing state-of-the-art models such as GPT-4o and Claude Sonnet 3.5 by substantial margins.
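One plausible ingredient of such long-to-short training is length-aware reward shaping, in which correct answers that use fewer tokens receive a slightly higher reward. The sketch below is only an illustration under that assumption, with made-up constants; it is not the Kimi team's actual reward definition.

```python
def shaped_reward(is_correct: bool, num_tokens: int,
                  min_len: int, max_len: int) -> float:
    """Hypothetical length-aware reward: correctness gives the base reward,
    and shorter correct responses (relative to the sampled group) earn a bonus."""
    base = 1.0 if is_correct else 0.0
    if not is_correct or max_len == min_len:
        return base
    # Normalize length within the group of sampled responses: 0 = shortest, 1 = longest.
    frac = (num_tokens - min_len) / (max_len - min_len)
    return base + 0.5 * (1.0 - frac)   # up to +0.5 bonus for the shortest correct answer
```

Shaping of this kind would bias the policy toward concise chains of thought, which is consistent with the token-efficiency gains reported for the long-to-short approach.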