## Learning Curve

A learning curve visualizes changes in student performance over time. The line graph displays opportunities across the x-axis, and a measure of student performance along the y-axis. A good learning curve reveals improvement in student performance as opportunity count (i.e., practice with a given knowledge component) increases. It can also "describe performance at the start of training, the rate at which learning occurs, and the flexibility with which the acquired skills can be used" (Koedinger and Mathan 2004).

- Learning curve types
- Viewing different curves
- Viewing the details of a point on the curve
- Opportunity Cutoff
- Standard Deviation Cutoff
- Predicted Learning Curve
- Learning curve categorization

See also Learning Curve Examples and Learning Curve Algorithm, and our intro to learning curves video (11m:15s).

### Learning curve types

You can view a learning curve by student or by knowledge component (KC).

View | Description |
---|---|
By Knowledge Component | View an average across all selected KCs, or view a curve for an individual KC. In the "all selected KCs" graph, each point is an average across data for all selected students and KCs. In a graph for an individual KC, each point is an average across all selected students. |
By Student | View an average across all selected students, or view a curve for an individual student. In the "all selected students" graph, each point is an average across data for all selected students and KCs. In a graph for an individual student, each point is an average across all selected KCs. |

Toggle the inclusion of knowledge components or students by clicking their name in the navigation boxes on the left.

Change the measure of student performance by hovering over the y-axis on the graph and clicking the new measure.

Measures of student performance are described below. Regardless of metric, each point on the graph is an average across all selected knowledge components and students.

Measure | Description |
---|---|
Assistance Score | The number of incorrect attempts plus hint requests for a given opportunity |
Error Rate | The percentage of students that asked for a hint or were incorrect on their first attempt. For example, an error rate of 45% means that 45% of students asked for a hint or performed an incorrect action on their first attempt. Error rate differs from assistance score in that it provides data based only on the first attempt. As such, error rate makes no distinction between a student who made multiple incorrect attempts and a student who made only one. |
Number of Incorrects | The number of incorrect attempts for each opportunity |
Number of Hints | The number of hints requested for each opportunity |
Step Duration | The elapsed time of a step in seconds, calculated by adding all of the durations for transactions attributed to the step. |
Correct Step Duration | The step duration if the first attempt for the step was correct. This is the duration of time for which students are "silent", with respect to their interaction with the tutor, before they complete the step correctly; it is often called "reaction time" (on correct trials) in the psychology literature. If the first attempt is an error (incorrect attempt or hint request), the observation is dropped. |
Error Step Duration | The step duration if the first attempt for the step was an error (hint request or incorrect attempt). If the first attempt is correct, the observation is dropped. |
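To make the first two measures concrete, here is a rough sketch (not DataShop's actual implementation; the record fields are hypothetical) of how error rate and assistance score could be computed for one opportunity from per-student data:

```python
# Hypothetical per-student records for one opportunity of a KC.
# "first_attempt" is "correct", "incorrect", or "hint";
# "incorrects" and "hints" count all attempts and hint requests on the step.
records = [
    {"first_attempt": "correct",   "incorrects": 0, "hints": 0},
    {"first_attempt": "incorrect", "incorrects": 2, "hints": 1},
    {"first_attempt": "hint",      "incorrects": 0, "hints": 3},
    {"first_attempt": "correct",   "incorrects": 0, "hints": 0},
]

# Error rate: percentage of students whose FIRST attempt was an error
# (an incorrect attempt or a hint request).
errors = sum(1 for r in records if r["first_attempt"] != "correct")
error_rate = 100.0 * errors / len(records)

# Assistance score: incorrect attempts plus hint requests, averaged
# over the selected students for this opportunity.
assistance = sum(r["incorrects"] + r["hints"] for r in records) / len(records)

print(error_rate)  # 50.0 -- the second and third students erred first
print(assistance)  # 1.5  -- (0 + 3 + 3 + 0) / 4
```

Note how the second student (two incorrects and a hint) and the third student (one hint, first) contribute equally to error rate but very differently to assistance score.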

### Viewing different curves

#### To switch between learning curve types:

- Move your mouse pointer over the y-axis of the graph and click the new measure.

#### To switch between knowledge component and student views:

- Select the desired view from the navigation sidebar.

#### To examine a single knowledge component or student:

- Select its thumbnail from the gallery of available graphs on the bottom portion of the screen. The main graph then updates.

Available graphs are provided based on the selected samples, students, and knowledge components. (The default sample is titled 'All Data'.)

To compare conditions or other groups of data, you might define a number of samples that are subsets of 'All Data'. For more information on creating and modifying samples, see Sample Selector.

### Viewing the details of a point on the curve

Explore a single point on the curve by clicking it. You can then navigate points on the curve by using the previous- and next-opportunity arrows beneath the graph. Change the selected line in the graph by using the sample drop-down.

Each change of the selected point updates the point information beneath the graph. This displays the point's value and observation count, as well as counts of the various units of analysis—unique KCs, problems, steps, and students—that compose the point.

Click a count to see values for observations composing a point. Values shown in the table are averaged by unit of analysis, and exclude dropped or null observations. The "Obs" column displays the number of observations for each of the items in the table. This can be helpful because it indicates whether the value for the row is composed of more than one data point, as well as how the value for the item is weighted in determining the point value shown in the learning curve graph.

For views showing values by student or KC, links below the table allow you to change the selection of items in the main navigation boxes based on the values composing the point.

There are a few things to keep in mind when comparing the point details values with the total number of observations for a point and the value for that point:

- Multiple observations often fall under a single KC, problem, or step (but not student—there is only one observation per opportunity for a student), as indicated in the "Obs" column. In these cases, the number you see for that row is an average.
- Multiple KCs might be attributed to a single step, showing more KCs than there are observations.
- In no case should the number of problems, steps, or students exceed the number of observations (although future data might invalidate this claim, such as data attributing multiple possible steps to a single student action).
- To calculate the point value from the values in the details box, you need to find the average while taking into account the frequency of each item, indicated by the number in the "Obs" column.
- Dropped observations can remove data points from the calculation of the point value. These observations are not shown in the details box. The number of dropped observations is shown in parentheses after the number of included observations; these drops are the result of a standard deviation cutoff.
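To make the weighted-average bullet concrete, here is a sketch with made-up numbers showing how a point's value can be recovered from the details-box rows and their "Obs" counts:

```python
# Hypothetical details-box rows: each row's value is already an average
# over "obs" observations, so the point value is the observation-weighted
# mean of the rows, not a plain mean of the row values.
rows = [
    {"value": 0.40, "obs": 2},  # e.g. a step averaged over 2 observations
    {"value": 0.10, "obs": 1},
    {"value": 0.70, "obs": 3},
]

total_obs = sum(r["obs"] for r in rows)
point_value = sum(r["value"] * r["obs"] for r in rows) / total_obs

print(point_value)  # 0.5 -- (0.40*2 + 0.10*1 + 0.70*3) / 6
```

A plain average of the three row values (0.40) would not match the point value, because the third row carries three times the weight of the second.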

### Opportunity Cutoff

When examining a learning curve, it may be useful to limit which student/knowledge component pairs are included in the graph based on the number of opportunities students had with the knowledge component. DataShop calls this the opportunity cutoff. For example, specifying an opportunity cutoff max value of 5 would remove student/knowledge component pairs where students had more than 5 opportunities with the chosen knowledge component(s). This may remove outliers from the data and provide a better means for analysis.

You can set a minimum and/or maximum opportunity cutoff by entering numbers in the learning curve
navigation and pressing **Refresh Graph**.
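The filtering behind the opportunity cutoff can be sketched as follows (hypothetical data; this is an illustration of the rule, not DataShop's code):

```python
# Hypothetical opportunity counts per (student, KC) pair.
opportunities = {
    ("s1", "add-fractions"): 3,
    ("s2", "add-fractions"): 12,
    ("s3", "add-fractions"): 5,
}

min_cutoff, max_cutoff = 2, 5  # keep pairs with 2 to 5 opportunities

# Pairs outside the cutoff range are removed entirely from the graph.
kept = {
    pair: n
    for pair, n in opportunities.items()
    if min_cutoff <= n <= max_cutoff
}

print(kept)  # s2 is dropped: 12 opportunities exceeds the max of 5
```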

### Standard Deviation Cutoff

For latency curves ("Step Duration", "Correct Step Duration", and
"Error Step Duration"), you can set a **standard deviation cutoff**. This
is the number of standard deviations above and below the mean within which data points are included. Data
points (observations) falling outside the specified number of standard deviations are dropped from the graph;
the x-axis (number of opportunities) is not affected.

Standard deviation for an opportunity is calculated based on data for all knowledge components in the current knowledge-component model and the currently selected students. Therefore, changing the selected KCs will not affect the standard deviation values but changing the selected students may.

**Note:** If you set both a standard deviation cutoff and min and/or max opportunity cutoff,
DataShop calculates the standard deviation **before** applying the opportunity cutoff(s).
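A simplified sketch of the cutoff rule for a single opportunity (DataShop actually computes the standard deviation across all KCs in the current model and the selected students; the numbers here are made up):

```python
import statistics

# Hypothetical step durations (seconds) for one opportunity.
durations = [4.0, 5.0, 6.0, 5.5, 4.5, 5.2, 6.1, 60.0]  # 60.0 is an outlier

cutoff = 2.0  # standard deviations above/below the mean
mean = statistics.mean(durations)
sd = statistics.stdev(durations)

# Observations outside the cutoff are dropped from the point's average.
kept = [d for d in durations if abs(d - mean) <= cutoff * sd]
dropped = len(durations) - len(kept)

# Here 60.0 falls more than 2 standard deviations from the mean,
# so one observation is dropped and seven remain.
```

Note that a single extreme value also inflates the standard deviation itself, so with very few observations an outlier can survive the cutoff.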

### Predicted Learning Curve

The empirical learning curves (average observed error of a skill at each learning opportunity) calculated directly from the data contain a lot of noise and take the form of wiggly lines. This noise comes from various sources, such as recording errors or the environment in which the students worked. The predicted learning curve is much smoother. It is computed using the Additive Factor Model (AFM), a set of customized item-response models that predict how a student will perform for each skill at each learning opportunity. The predicted learning curve is the average predicted error of a skill at each learning opportunity; because the AFM model filters out much of the noise, predicted learning curves are much smoother than empirical ones.
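At its core, AFM is a logistic regression over student proficiency, KC difficulty, and KC learning rate. The following is a minimal single-KC sketch with made-up parameter values (real AFM sums contributions from every KC attributed to a step):

```python
import math

def afm_success_probability(theta, beta, gamma, opportunity):
    """Single-KC AFM sketch: log-odds of success = student proficiency
    (theta) + KC easiness (beta) + KC learning rate (gamma) times the
    number of prior opportunities with the KC."""
    logit = theta + beta + gamma * opportunity
    return 1.0 / (1.0 + math.exp(-logit))

# Made-up parameters for one student and one KC.
theta, beta, gamma = 0.2, -0.5, 0.15

# Predicted error rate (1 - success probability) over eight opportunities.
predicted_curve = [
    1.0 - afm_success_probability(theta, beta, gamma, t) for t in range(8)
]

# With a positive learning rate, the predicted error rate declines
# smoothly as opportunities accumulate.
assert predicted_curve == sorted(predicted_curve, reverse=True)
```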

While the empirical learning curve may give a visual clue as to how well a student may do over a set of learning opportunities, the predicted curves allow for a more precise prediction of a success rate at any learning opportunity.

There are several ways to use the predicted learning curves. One is to measure how much practice is needed to master a skill. When you see a learning curve that starts high and ends high, students probably finished the curriculum without mastering the skill corresponding to that learning curve. On the other hand, a learning curve that starts low and ends low with lots of learning opportunities probably implies that the skill is easy and students were over-practicing it. For a detailed example, see Is Over Practice Necessary? Improving Learning Efficiency with the Cognitive Tutor through Educational Data Mining (Cen, Koedinger, and Junker 2007).

The second use of predicted learning curves is to find a better set of skills that matches the student learning. An ideal predicted learning curve should be smooth and downward sloping. If a learning curve is too flat, goes up, or is too wiggly, the corresponding skill is probably not well-chosen and worth refining. For reference, see Learning Factors Analysis - A General Method for Cognitive Model Evaluation and Improvement (Cen, Koedinger, and Junker 2006).

#### To view the predicted learning curve (Error Rate learning curve only):

- Select "View Predicted" from the learning curve navigation box.

In DataShop, AFM computes the statistics of a cognitive model, including AIC, BIC, and the coefficients for student proficiency, initial knowledge component difficulty, and knowledge component learning rate; from these it generates the probability of success on each trial for each knowledge component. You can view the values of these parameters on the Model Values report (Learning Curve > Model Values).

For more information on the AFM algorithm, see the Model Values help page.

### Learning curve categorization

DataShop categorizes learning curves (KCs) into one of four categories described below. To turn on categorization,
select the checkbox **Categorize Curves**. **Learning curve categorization is only available for Error Rate
learning curves displayed by KC (not student) when only one sample is selected.**

The categorization algorithm first discards points in each curve based on the **student threshold**.
If a point has fewer than that number of students, it is ignored. Within the points remaining:

- If the number of points is below the **opportunity threshold**, then that curve has **too little data**.
- If a point on the curve ever dips beneath the **low error threshold**, then the curve is **low and flat**.
- If the last point of the curve is above the **high error threshold**, then the curve is **still high**.
- If the slope of the predicted learning curve (as determined by the AFM algorithm) is below the **AFM slope threshold**, then the curve shows **no learning**.
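The decision rules above can be sketched as a small function. All threshold defaults and data here are made up for illustration; they are not DataShop's actual values:

```python
def categorize(points, afm_slope, student_threshold=10,
               opportunity_threshold=3, low_error=0.2,
               high_error=0.4, afm_slope_threshold=0.001):
    """Sketch of the categorization rules, applied in the order described
    above. `points` is a list of (error_rate, num_students) pairs, one
    per opportunity; `afm_slope` is the slope of the predicted curve."""
    # Discard points observed by too few students.
    kept = [(err, n) for err, n in points if n >= student_threshold]
    if len(kept) < opportunity_threshold:
        return "too little data"
    if any(err < low_error for err, _ in kept):
        return "low and flat"
    if kept[-1][0] > high_error:
        return "still high"
    if afm_slope < afm_slope_threshold:
        return "no learning"
    return "good"

# A curve whose last point stays above the high-error threshold:
points = [(0.50, 12), (0.45, 12), (0.42, 11)]
print(categorize(points, afm_slope=0.01))  # still high
```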

#### Categories

- **low and flat**: students likely received too much practice for these KCs. The low error rate shows that students mastered the KCs but continued to receive tasks for them. Consider reducing the required number of tasks, or changing your system's knowledge-tracing parameters (if it uses knowledge tracing) so that students get fewer opportunities with these KCs.
- **no learning**: the slope of the predicted learning curve shows no apparent learning for these KCs. Explore whether the KC can be split into multiple KCs (via the KC Model export/import) that better account for variation in difficulty and transfer of learning that may be occurring across the tasks (problem steps) currently labeled by this KC. Video tutorial 1 shows the process of identifying a KC worth splitting, while tutorial 2 shows the process of exporting, modifying, and importing a KC model. The video Exploratory analysis with DataShop also shows this type of analysis.
- **still high**: students continued to have difficulty with these KCs. Consider increasing opportunities for practice.
- **too little data**: students didn't practice these KCs enough for the data to be interpretable. You might consider adding more tasks for these KCs or merging KCs (via the KC Model export/import). Video tutorial 2 shows the process of merging KCs.
- **good**: these KCs did not fall into any of the above "bad" or "at risk" categories. Thus, these are "good" learning curves in the sense that they appear to indicate substantial student learning.