
DeepSeek AI, in collaboration with Tsinghua University, has unveiled an innovative approach aimed at revolutionizing reward modeling in large language models. The approach leverages increased inference-time compute and has produced DeepSeek-GRM, a 27-billion-parameter model built on Google's open-source Gemma-2-27B. The standout feature of DeepSeek-GRM is the integration of Self-Principled Critique Tuning (SPCT).
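
To make the idea of spending more inference-time compute on reward modeling concrete, here is a minimal Python sketch. It assumes a generative reward model whose every call independently produces a judgment (in DeepSeek-GRM, a set of principles plus a critique ending in a score); the `sample_judgment` stub and the mean-based aggregation are illustrative assumptions, not DeepSeek's exact recipe.

```python
import random


def sample_judgment(prompt: str, response: str) -> int:
    """Hypothetical stand-in for one generative reward-model call.

    A real call would generate principles and a critique, then emit a
    numeric score; here we fake the score with a random draw.
    """
    return random.randint(1, 10)


def scaled_reward(prompt: str, response: str, k: int = 8) -> float:
    """Scale inference-time compute: sample k independent judgments
    for the same (prompt, response) pair and aggregate the scores.

    Averaging is one simple aggregation choice; voting schemes are
    another. More samples (larger k) means more compute per reward.
    """
    scores = [sample_judgment(prompt, response) for _ in range(k)]
    return sum(scores) / k


if __name__ == "__main__":
    # Doubling k doubles the compute spent on a single reward estimate.
    print(scaled_reward("Explain transformers.", "A transformer is...", k=16))
```

The point of the sketch is the knob `k`: rather than improving the reward model only through more training, quality is bought at inference time by drawing several independent critiques and combining them.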