Understanding DeepSeek R1


DeepSeek-R1 is an open-weights language model built on DeepSeek-V3-Base that has been making waves in the AI community. Not only does it match, or even surpass, OpenAI's o1 model on many benchmarks, but it also ships with fully MIT-licensed weights. This makes it the first non-OpenAI/Google model to deliver strong reasoning capabilities in an open and accessible way.

What makes DeepSeek-R1 particularly exciting is its transparency. Unlike the less open approaches of some industry leaders, DeepSeek has published a detailed training methodology in their paper. The model is also remarkably cost-effective, with input tokens priced at just $0.14-0.55 per million (vs. o1's $15) and output tokens at $2.19 per million (vs. o1's $60).

Until around GPT-4, the conventional wisdom was that better models required more data and compute. While that still holds, models like o1 and R1 demonstrate an alternative: inference-time scaling through reasoning.

The Essentials

The DeepSeek-R1 paper introduced several models, but the main ones are R1 and R1-Zero. Alongside these is a series of distilled models that, while interesting, I won't cover here.

DeepSeek-R1 builds on two main ideas:

1. A multi-stage pipeline where a small set of cold-start data kickstarts the model, followed by large-scale RL.

2. Group Relative Policy Optimization (GRPO), a reinforcement learning method that relies on comparing multiple model outputs per prompt, avoiding the need for a separate critic.

R1 and R1-Zero are both reasoning models. This essentially means they perform Chain-of-Thought before answering. For the R1 series of models, this takes the form of thinking inside a <think> tag before answering with a final summary.
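As a small illustration of that format, here is a minimal sketch (assuming the <think>...</think> delimiters the R1 series emits; the helper name is made up) that splits a response into its reasoning trace and final answer:

```python
import re

def split_r1_response(text: str) -> tuple[str, str]:
    """Split an R1-style response into its reasoning trace and final answer."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", text, count=1, flags=re.DOTALL).strip()
    return reasoning, answer

reasoning, answer = split_r1_response(
    "<think>2 + 2 is simple addition, so the result is 4.</think>The answer is 4."
)
print(reasoning)  # 2 + 2 is simple addition, so the result is 4.
print(answer)     # The answer is 4.
```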

    R1-Zero vs R1

R1-Zero applies Reinforcement Learning (RL) directly to DeepSeek-V3-Base with no supervised fine-tuning (SFT). RL is used to optimize the model's policy to maximize reward. R1-Zero achieves excellent accuracy but sometimes produces confusing outputs, such as mixing multiple languages in a single response. R1 fixes that by incorporating limited supervised fine-tuning and multiple RL passes, which improves both accuracy and readability.

It is fascinating that some languages may express certain ideas better, which leads the model to pick the most expressive language for the task.

    Training Pipeline

The training pipeline that DeepSeek published in the R1 paper is immensely interesting. It shows how they built such strong reasoning models and what you can expect from each stage, including the problems the resulting model from each stage has and how they solved them in the next stage.

It's interesting that their training pipeline deviates from the usual one:

- The usual approach: Pretraining on a large dataset (training to predict the next word) to get the base model → supervised fine-tuning → preference tuning via RLHF
- R1-Zero: Pretrained → RL
- R1: Pretrained → multi-stage training pipeline with multiple SFT and RL stages

1. Cold-Start Fine-Tuning: Fine-tune DeepSeek-V3-Base on a few thousand Chain-of-Thought (CoT) samples to ensure the RL process has a good starting point. This gives a decent model to start RL from.
2. First RL Stage: Apply GRPO with rule-based rewards to improve reasoning correctness and formatting (such as forcing chain-of-thought into thinking tags). When they were near convergence in the RL process, they moved to the next step. The result of this step is a strong reasoning model, but with weak general capabilities, e.g., poor formatting and language mixing.
3. Rejection Sampling + general data: Create new SFT data through rejection sampling on the RL checkpoint (from step 2), combined with supervised data from the DeepSeek-V3-Base model. They collected around 600k high-quality reasoning samples.
4. Second Fine-Tuning: Fine-tune DeepSeek-V3-Base again on 800k total samples (600k reasoning + 200k general tasks) for broader capabilities. This step resulted in a strong reasoning model with general capabilities.
5. Second RL Stage: Add more reward signals (helpfulness, harmlessness) on top of the reasoning rewards to refine the final model. The result is DeepSeek-R1.

They also distilled several Qwen and Llama models on the reasoning traces to get the distilled-R1 models.

Model distillation is a technique where you use a teacher model to improve a student model by having the teacher generate training data for the student. The teacher is usually a larger model than the student.
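As a rough sketch of the data-generation half of distillation (the teacher model, prompts, and sampling settings below are illustrative assumptions, not the recipe from the paper), you could collect reasoning traces with Hugging Face transformers and then use them as ordinary SFT data for the student:

```python
# Sketch: generate distillation data from a teacher model with Hugging Face transformers.
# Model name, prompts, and sampling settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # stand-in teacher small enough to run on one GPU
tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name, torch_dtype=torch.bfloat16, device_map="auto")

prompts = ["What is 17 * 24? Show your reasoning.", "Is 97 prime? Explain."]
distill_data = []
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(teacher.device)
    out = teacher.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
    completion = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    distill_data.append({"prompt": prompt, "completion": completion})

# distill_data now serves as supervised fine-tuning data for a smaller student model.
```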

    Group Relative Policy Optimization (GRPO)

The basic idea behind using reinforcement learning for LLMs is to fine-tune the model's policy so that it naturally produces more accurate and useful answers. They used a reward system that checks not only for correctness but also for proper formatting and language consistency, so the model gradually learns to favor responses that meet these quality criteria.

In this paper, they encourage the R1 model to generate chain-of-thought reasoning through RL training with GRPO. Rather than adding a separate module at inference time, the training process itself nudges the model to produce detailed, step-by-step outputs, making the chain-of-thought an emergent behavior of the optimized policy.

What makes their approach particularly interesting is its reliance on simple, rule-based reward functions. Instead of depending on expensive external models or human-graded examples as in standard RLHF, the RL used for R1 relies on simple criteria: it may give a higher reward if the answer is correct, if it follows the expected <think>/<answer> format, and if the language of the answer matches that of the prompt. Not relying on a reward model also means you don't have to spend time and effort training one, and it doesn't take memory and compute away from your main model.
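To make that concrete, here's a toy sketch of such rule-based rewards (the weights, tag names, and checks are my own assumptions for illustration, not the exact rules from the paper):

```python
import re

def rule_based_reward(prompt: str, response: str, reference_answer: str) -> float:
    """Toy rule-based reward: correctness + format + language consistency.
    Weights and checks are illustrative assumptions, not DeepSeek's actual values."""
    reward = 0.0

    # Format reward: reasoning must be wrapped in <think>...</think>.
    if re.search(r"<think>.*?</think>", response, flags=re.DOTALL):
        reward += 0.5

    # Accuracy reward: compare the text after the reasoning block to a reference answer.
    final_answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()
    if final_answer == reference_answer.strip():
        reward += 1.0

    # Language-consistency reward: crude check that the answer stays in the prompt's script.
    def mostly_ascii(text: str) -> bool:
        return sum(ch.isascii() for ch in text) > 0.9 * max(len(text), 1)
    if mostly_ascii(prompt) == mostly_ascii(final_answer):
        reward += 0.25

    return reward

print(rule_based_reward("What is 2 + 2?", "<think>2 + 2 = 4</think>4", "4"))  # 1.75
```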

GRPO was introduced in the DeepSeekMath paper. Here's how GRPO works (see the sketch after this list):

1. For each input prompt, the model generates a group of different responses.
2. Each response receives a scalar reward based on factors like accuracy, formatting, and language consistency.
3. Rewards are adjusted relative to the group's performance, essentially measuring how much better each response is compared to the others.
4. The model updates its policy slightly to favor responses with higher relative advantages. It only makes small adjustments, using techniques like clipping and a KL penalty, to ensure the policy doesn't drift too far from its original behavior.
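Here is a minimal sketch of the group-relative advantage computation at the core of steps 3 and 4 (plain NumPy, with made-up reward values; the clipping and KL terms of the full objective are only described in comments):

```python
import numpy as np

def group_relative_advantages(rewards: list[float]) -> np.ndarray:
    """Normalize each response's reward against its group: (r - mean) / std.
    This group baseline replaces the learned critic/value model used in PPO-style RLHF."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)  # epsilon guards against a zero-variance group

# Say one prompt was sampled 4 times and a rule-based reward scored the responses:
rewards = [1.75, 0.5, 1.75, 0.0]
advantages = group_relative_advantages(rewards)
print(advantages)  # responses above the group mean get positive advantages

# In the full GRPO objective, every token of a response is weighted by its response-level
# advantage, with a clipped probability ratio (as in PPO) and a KL penalty toward the
# reference policy keeping each update small.
```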

A cool aspect of GRPO is its flexibility. You can use simple rule-based reward functions, for instance, awarding a bonus when the model correctly uses the <think> syntax, to guide the training.

While DeepSeek used GRPO, you could use alternative methods instead (PPO or PRIME).

For those looking to dive deeper, Will Brown has written quite a nice implementation of training an LLM with RL using GRPO. GRPO has also already been added to the Transformer Reinforcement Learning (TRL) library, which is another good resource. Finally, Yannic Kilcher has a great video explaining GRPO by going through the DeepSeekMath paper.

Is RL on LLMs the path to AGI?

As a final note on DeepSeek-R1 and the methodologies presented in their paper, I want to highlight a passage from the DeepSeekMath paper, based on a point Yannic Kilcher made in his video.

These findings indicate that RL enhances the model's overall performance by rendering the output distribution more robust, in other words, it seems that the improvement is attributed to boosting the correct response from TopK rather than the enhancement of fundamental capabilities.

In other words, RL fine-tuning tends to shape the output distribution so that the highest-probability outputs are more likely to be correct, even though the overall capability (as measured by the number of correct answers) is largely already present in the pretrained model.

This suggests that reinforcement learning on LLMs is more about refining and "shaping" the existing distribution of responses rather than endowing the model with entirely new capabilities. Consequently, while RL techniques such as PPO and GRPO can produce substantial performance gains, there appears to be an inherent ceiling determined by the underlying model's pretrained knowledge.

It's unclear to me how far RL will take us. Perhaps it will be the stepping stone to the next big milestone. I'm excited to see how it unfolds!

    Running DeepSeek-R1

I have used DeepSeek-R1 via the official chat interface for various problems, which it seems to solve well enough. The additional search functionality makes it even nicer to use.

Interestingly, o3-mini(-high) was released as I was writing this post. From my initial testing, R1 seems stronger at math than o3-mini.

I also rented a single H100 via Lambda Labs for $2/h (26 CPU cores, 214.7 GB RAM, 1.1 TB SSD) to run some experiments. The main goal was to see how the model would perform when deployed on a single H100 GPU, not to extensively test the model's capabilities.

671B via llama.cpp

DeepSeek-R1 1.58-bit (UD-IQ1_S) quantized model by Unsloth, with a 4-bit quantized KV-cache and partial GPU offloading (29 layers running on the GPU), running via llama.cpp:

29 layers seemed to be the sweet spot given this configuration.
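For reference, a comparable setup can be sketched with the llama-cpp-python bindings (the run above used the llama.cpp CLI directly; the model path, context size, and prompt below are assumptions for illustration):

```python
from llama_cpp import Llama

# Sketch of partial GPU offloading with llama-cpp-python; the original run additionally
# quantized the KV cache to 4 bits via llama.cpp's cache-type options.
llm = Llama(
    model_path="path/to/DeepSeek-R1-UD-IQ1_S.gguf",  # hypothetical path to Unsloth's 1.58-bit GGUF
    n_gpu_layers=29,  # offload 29 layers to the GPU, keep the rest on CPU
    n_ctx=8192,       # context window; adjust to the memory you have available
)

output = llm("Why is the sky blue? Answer briefly.", max_tokens=512)
print(output["choices"][0]["text"])
```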

    Performance:

A r/localllama user described being able to get over 2 tok/sec with DeepSeek R1 671B, without using their GPU, on their local gaming setup. Digital Spaceport wrote a full guide on how to run DeepSeek R1 671B fully locally on a $2000 EPYC server, on which you can get ~4.25 to 3.5 tokens per second.

As you can see, the tokens/s isn't quite bearable for any serious work, but it's fun to run these enormous models on accessible hardware.

What matters most to me is a combination of usefulness and time-to-usefulness in these models. Since reasoning models need to think before answering, their time-to-usefulness is usually higher than for other models, but their usefulness is also often higher. We need to both maximize usefulness and minimize time-to-usefulness.

70B via Ollama

70.6B params, 4-bit KM quantized DeepSeek-R1 running via Ollama:
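A comparable run can be reproduced with the ollama Python package, assuming the Ollama server is running and the deepseek-r1:70b tag has already been pulled (the prompt is just an example):

```python
import ollama  # assumes a local Ollama server and a prior `ollama pull deepseek-r1:70b`

response = ollama.chat(
    model="deepseek-r1:70b",  # 70B distilled R1; the default Ollama tag is 4-bit (Q4_K_M) quantized
    messages=[{"role": "user", "content": "How many prime numbers are there below 100?"}],
)
print(response["message"]["content"])  # output includes the <think>...</think> reasoning trace
```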

GPU utilization shoots up here, as expected, compared to the mostly CPU-powered run of the 671B model that I showed above.

    Resources

- DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
- [2402.03300] DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
- DeepSeek R1 - Notion (Building a fully local "deep researcher" with DeepSeek-R1 - YouTube)
- DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs
- The Illustrated DeepSeek-R1 - by Jay Alammar
- Explainer: What's R1 & Everything Else? - Tim Kellogg
- DeepSeek R1 Explained to your grandmother - YouTube

    DeepSeek

- Try R1 at chat.deepseek.com
- GitHub - deepseek-ai/DeepSeek-R1
- deepseek-ai/Janus-Pro-7B · Hugging Face (January 2025): Janus-Pro is a novel autoregressive framework that unifies multimodal understanding and generation. It can both understand and generate images.
- DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning (January 2025): This paper introduces DeepSeek-R1, an open-source reasoning model that rivals the performance of OpenAI's o1. It provides a detailed methodology for training such models using large-scale reinforcement learning techniques.
- DeepSeek-V3 Technical Report (December 2024): This report discusses the implementation of an FP8 mixed precision training framework validated on an extremely large-scale model, achieving both accelerated training and reduced GPU memory usage.
- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism (January 2024): This paper delves into scaling laws and presents findings that facilitate the scaling of large-scale models in open-source configurations. It introduces the DeepSeek LLM project, dedicated to advancing open-source language models with a long-term perspective.
- DeepSeek-Coder: When the Large Language Model Meets Programming - The Rise of Code Intelligence (January 2024): This research introduces the DeepSeek-Coder series, a range of open-source code models trained from scratch on 2 trillion tokens. The models are pre-trained on a high-quality project-level code corpus and employ a fill-in-the-blank task to enhance code generation and infilling.
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model (May 2024): This paper introduces DeepSeek-V2, a Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference.
- DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence (June 2024): This research introduces DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo in code-specific tasks.

Interesting events

- Hong Kong University replicates R1 results (Jan 25, '25).
- Hugging Face announces huggingface/open-r1: Fully open reproduction of DeepSeek-R1 to reproduce R1, fully open source (Jan 25, '25).
- OpenAI researcher confirms the DeepSeek team independently found and used some core ideas the OpenAI team used on the way to o1.

    Liked this post? Join the newsletter.