FSRS · spaced repetition · learning science · algorithms

How the FSRS Algorithm Works: A Complete Guide

Understand the Free Spaced Repetition Scheduler (FSRS) algorithm — how it calculates optimal review intervals, why it targets 70% recall, and how it outperforms SM-2.

Julius Brussee
February 17, 2026 · 12 min read


If you have ever crammed for an exam the night before, you already know the result: information that felt solid at midnight is gone by morning. Spaced repetition is the antidote. It is a learning technique that schedules reviews at carefully calculated intervals so that you revisit material right before you would forget it.

The idea is simple, but the scheduling math behind it is not. For decades, the dominant algorithm was SM-2, originally developed for the SuperMemo software in 1987. It worked, but it had clear limitations. In 2022, a new algorithm called FSRS (Free Spaced Repetition Scheduler) emerged from the open-source community, bringing machine learning and modern memory research to the problem.

This guide explains how FSRS works, why it is a significant step forward, and how it translates into better study sessions.

Why Timing Matters: The Forgetting Curve

In 1885, the German psychologist Hermann Ebbinghaus conducted a series of experiments on himself, memorizing lists of nonsense syllables and testing his recall at various intervals. His findings revealed what we now call the forgetting curve: memory decays exponentially after learning. Within 24 hours, roughly 70% of newly learned information is lost if no review takes place.

But Ebbinghaus also discovered something more useful. Each time you successfully retrieve a piece of information, the forgetting curve flattens. The memory becomes more durable. The second review interval can be longer than the first. The third can be longer still.

This is the foundation of every spaced repetition system: schedule reviews at increasing intervals, timed to catch the memory just before it fades. The question is how to calculate those intervals optimally.

The Problem with Fixed Intervals

Many study apps use fixed schedules: review after 1 day, then 3 days, then 7 days, then 14 days, and so on. This approach is better than no system at all, but it ignores a fundamental reality — different cards are different. A simple vocabulary word that connects to many things you already know is not the same as an abstract formula with no intuitive hooks.

Fixed intervals also ignore the learner. Someone who studies four hours a day will have different retention patterns than someone who studies twenty minutes a week. A one-size-fits-all schedule inevitably leads to two problems: wasting time on easy material you already know, and losing hard material because the interval was too long.

Adaptive scheduling solves both problems. It adjusts intervals based on the difficulty of the material and the learner's demonstrated performance.

SM-2: The Old Standard

The SM-2 algorithm, created by Piotr Wozniak for SuperMemo and later adopted by Anki, was the first widely used adaptive scheduler. Its core idea is the ease factor — a multiplier that grows or shrinks based on how well you answer.

Here is how SM-2 works in simplified terms:

  1. Each card starts with an ease factor of 2.5.
  2. After a review, you rate your answer on a scale of 0 to 5.
  3. If you scored 3 or above, the next interval is calculated as: previous_interval * ease_factor (after two fixed initial intervals of 1 and 6 days).
  4. If you scored below 3, the card resets to the beginning.
  5. The ease factor itself is adjusted after each review — good answers increase it, poor answers decrease it.
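The loop above can be sketched in a few lines of Python. This is a simplified illustration: it captures the ease-factor update and the pass/fail branching, but glosses over SM-2's fixed initial intervals.

```python
def sm2_update(interval: float, ease: float, quality: int) -> tuple[float, float]:
    """One simplified SM-2 step: returns (next_interval, new_ease).

    quality is the self-rating from 0 (blackout) to 5 (perfect recall).
    """
    # Ease-factor adjustment from the original SM-2 formula,
    # clamped at the conventional floor of 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if quality >= 3:
        next_interval = interval * ease  # pass: interval grows multiplicatively
    else:
        next_interval = 1.0              # fail: the card resets to the beginning
    return next_interval, ease
```

Note how a single low rating both resets the interval and permanently dents the ease factor. That interaction is the root of the problems described next.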

SM-2 was groundbreaking for its time, and it still works reasonably well. But it has several well-documented problems:

Ease factor hell. Once a card's ease factor drops, it tends to stay low. A single bad day can sentence a card to short intervals indefinitely, even if you have since mastered the material. The algorithm has no mechanism to recover from early mistakes.

No model of memory. SM-2 does not actually model how memory works. The ease factor is a purely mechanical multiplier with no connection to cognitive science. It does not distinguish between a card you forgot after 3 days versus one you forgot after 90 days, even though these represent very different states of learning.

One parameter per card. SM-2 tracks only the ease factor and the interval. It cannot represent the nuanced state of a memory — how stable it is, how inherently difficult the material is, or how likely you are to remember it at any given moment.

No cross-card learning. Each card is an island. SM-2 cannot learn from your overall study patterns to improve predictions for new cards.

Enter FSRS

FSRS was developed by Jarrett Ye as part of the open-spaced-repetition project. Unlike SM-2, which was designed through intuition and manual experimentation, FSRS is built on a mathematical model of memory that was optimized against millions of real review records.

The core innovation of FSRS is that it separates the concept of memory state from the concept of scheduling. Instead of a single ease factor, FSRS tracks three distinct parameters for every card:

Stability (S)

Stability represents how durable a memory is. Technically, it is defined as the time (in days) at which the probability of recall drops to 90%. A card with stability of 30 means that 30 days after the last review, you have a 90% chance of remembering it.

After a successful review, stability increases. How much it increases depends on the current stability, the difficulty of the card, and the retrievability at the moment of review. This is where FSRS diverges sharply from SM-2: reviewing a card when your recall probability is low (a harder retrieval) produces a larger stability gain than reviewing when recall is still high (an easy retrieval). This is consistent with the testing effect in cognitive psychology — effortful retrieval strengthens memory more than effortless retrieval.

Difficulty (D)

Difficulty is a property of the card itself, not of the learner. It is a value between 1 and 10 that represents how inherently hard the material is to retain. A card for a common word in a language you are studying might have a difficulty of 3. An obscure medical term with no mnemonic hooks might have a difficulty of 8.

Difficulty is initialized based on your first rating and then updated incrementally with each subsequent review. Importantly, difficulty in FSRS is mean-reverting — it naturally drifts back toward a neutral value over time, which prevents the "ease factor hell" problem that plagues SM-2.
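A toy version of such a mean-reverting update looks like this. The constants step, reversion, and neutral below are made-up placeholders standing in for FSRS's fitted weights, chosen only to show the mechanism.

```python
def update_difficulty(d: float, rating: int,
                      step: float = 0.6, reversion: float = 0.05,
                      neutral: float = 5.0) -> float:
    """Illustrative mean-reverting difficulty update (constants are
    placeholders, not FSRS's fitted parameters).

    rating: 1 = Again, 2 = Hard, 3 = Good, 4 = Easy.
    """
    d = d - step * (rating - 3)                    # Easy lowers D, Again raises it
    d = reversion * neutral + (1 - reversion) * d  # drift back toward neutral
    return min(10.0, max(1.0, d))                  # difficulty lives in [1, 10]
```

Even if a card picks up a string of bad ratings early on, the reversion term keeps pulling its difficulty back toward neutral, so the card is not punished forever.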

Retrievability (R)

Retrievability is the probability that you can successfully recall the card right now. It is not stored as a fixed value but calculated on the fly from stability and the elapsed time since the last review, using an exponential decay formula:

R = (1 + elapsed_days / (9 * S))^(-1)

This formula is derived from the power law of forgetting, which decades of memory research have shown to be a better fit for human forgetting than the simple exponential decay Ebbinghaus originally proposed.

When R is high (close to 1.0), you almost certainly remember the card. When R drops to your target recall threshold, it is time to review.
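The formula is a one-liner in Python, and plugging in an elapsed time equal to the stability confirms the 90% definition given above:

```python
def retrievability(elapsed_days: float, stability: float) -> float:
    """Recall probability from the power-law forgetting curve above."""
    return (1 + elapsed_days / (9 * stability)) ** -1

retrievability(0, 30)   # 1.0 -- right after a review, recall is certain
retrievability(30, 30)  # ~0.9 -- by the definition of stability
```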

How FSRS Calculates the Next Interval

With these three parameters, scheduling becomes a straightforward calculation. Given a target recall rate (the desired retrievability at the moment of the next review), FSRS solves for the interval:

interval = 9 * S * (1/R_target - 1)

Where S is the current stability and R_target is the desired recall probability at the next review. At a target of 0.9, the formula reduces to interval = S, so a card with 30 days of stability is scheduled 30 days out. Lower the target to 0.8 and the interval stretches to 2.25 × S, about 67.5 days for the same card.
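Solving the retrievability formula for elapsed time gives the scheduler, and a quick sanity check confirms that at a 0.9 target the interval simply equals the stability:

```python
def next_interval(stability: float, r_target: float) -> float:
    """Days until retrievability decays to r_target, per the formula above."""
    return 9 * stability * (1 / r_target - 1)

next_interval(30, 0.9)  # ~30 days: at a 0.9 target, interval equals stability
next_interval(30, 0.7)  # ~115.7 days: a lower target waits much longer
```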

After each review, FSRS updates stability based on several factors:

  • Current stability: Higher stability means the memory is already strong, so gains are smaller (diminishing returns).
  • Difficulty: Harder cards gain stability more slowly.
  • Retrievability at review time: Lower retrievability (harder recall) produces larger stability gains, rewarding effortful retrieval.

These update rules are controlled by a set of 19 parameters that have been optimized using machine learning on a dataset of over 700 million review records from real users. The default parameters work well out of the box, but FSRS also supports per-user optimization — if you have enough review history, the algorithm can tune its parameters specifically to your memory patterns.

The 70% Target Recall: Why Not 90%?

One of the most counterintuitive aspects of FSRS is that many implementations, including Revu, default to a target recall rate around 70-85% rather than 90% or higher. This seems wrong at first — why would you want to forget 15-30% of your cards at review time?

The answer lies in the concept of desirable difficulty, first articulated by Robert Bjork in 1994. Bjork's research demonstrated that conditions which make learning feel harder in the short term often produce stronger long-term retention. When you successfully retrieve a memory under difficult conditions (low retrievability), the resulting stability gain is much larger than when retrieval is easy.

A 90% target recall means you review cards while they are still relatively fresh. Retrieval is easy, and the stability gain is modest. You end up doing more reviews for less durable memories.

A 70% target recall means you wait longer between reviews. When you do review, retrieval requires genuine effort. The stability gain is substantial. Over time, you do fewer total reviews while building more durable memories.

This is not just theoretical. Empirical studies comparing different target recall rates consistently find that lower targets (in the 70-85% range) produce better long-term retention per unit of study time. The optimal value depends on personal preference — some learners find a 70% success rate frustrating, while others appreciate the efficiency. FSRS makes this a configurable parameter so each learner can find their own balance.
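The arithmetic behind the efficiency claim is easy to check with the scheduling formula. Even ignoring the larger stability gains from effortful retrieval (which only widen the gap), a 0.7 target already spaces reviews almost four times further apart than a 0.9 target:

```python
def interval(stability: float, r_target: float) -> float:
    return 9 * stability * (1 / r_target - 1)

ratio = interval(30.0, 0.7) / interval(30.0, 0.9)
# (1/0.7 - 1) / (1/0.9 - 1) = 27/7, roughly 3.86, independent of stability
```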

FSRS vs SM-2: A Concrete Comparison

The differences between FSRS and SM-2 are not merely theoretical. Benchmark studies on real user data show consistent, measurable improvements:

Prediction accuracy. FSRS predicts recall probability with significantly higher accuracy than SM-2. In head-to-head comparisons using log-loss (a standard metric for probability predictions), FSRS achieves roughly 30-50% lower error rates. This means FSRS is substantially better at knowing when you will forget a card.

Review efficiency. Because FSRS models memory state more accurately, it schedules intervals more precisely. Users typically see 20-35% fewer reviews compared to SM-2 while maintaining the same or better retention rates. For a student doing 200 reviews per day, that is 40-70 fewer reviews daily — time that can be spent learning new material.

No ease factor hell. FSRS's mean-reverting difficulty parameter and its separate stability tracking eliminate the pathological spiral where cards get stuck at low ease factors. If you master a card that was once difficult, FSRS recognizes the improvement.

Better handling of lapses. When you forget a card in SM-2, the interval resets to the beginning regardless of how long the previous interval was. FSRS is more nuanced: forgetting a card after a 180-day interval is treated differently than forgetting it after a 3-day interval, because the underlying memory state is different.

Adaptability. SM-2 uses the same algorithm for everyone. FSRS can optimize its parameters to match individual memory characteristics, and its default parameters were trained on a diverse dataset that represents a wide range of learners.

How Revu Uses FSRS

Revu implements FSRS as its core scheduling engine, integrated into a study experience designed for depth and consistency.

When you create a flashcard in Revu, the algorithm initializes it with default stability and difficulty values derived from FSRS's optimized parameters. Your first review establishes the card's initial difficulty based on how easily you recall it.

From there, every review updates the card's memory state. Revu calculates the optimal next review date using your configured target recall rate and presents cards in priority order — those closest to (or past) their optimal review time appear first.

The daily study session is designed around this scheduling. Rather than overwhelming you with every due card at once, Revu respects the algorithm's priorities and your available study time. Cards that have dropped below the target retrievability get priority. Cards that are still above threshold can wait.

This integration means that your study sessions are always focused on the material that benefits most from review at that exact moment, with no wasted effort on cards you already know and no lost cards that slipped through the cracks.

The Future of Evidence-Based Scheduling

FSRS represents a broader shift in how we think about learning software. Rather than relying on hand-tuned heuristics, the algorithm is built on a testable model of memory, validated against real data, and open to continuous improvement.

The open-spaced-repetition project continues to refine the algorithm. Recent work has explored incorporating additional signals — time spent on each review, patterns across related cards, and time-of-day effects. As the dataset of review records grows and the model improves, the scheduling will only get more precise.

For learners, the practical implication is straightforward: better algorithms mean less time reviewing and more durable knowledge. FSRS is not a marginal improvement over SM-2. It is a generational leap, and it makes the difference between a study system that sort of works and one that is genuinely optimized for how your memory actually functions.

The best spaced repetition algorithm is the one you actually use. But given the choice, you should use the one that respects your time by scheduling reviews when they matter most. That algorithm is FSRS.
