Using and Benchmarking RANK: Vadis Consulting's turnkey prediction software

Team Leader

Jean-Francois Chevalier
Vadis Consulting

Team Members

Pierre Gramme
Vadis Consulting

Thierry Van de Merkt
Vadis Consulting

Libei Chen
Vadis Consulting

Philip Smet
Vadis Consulting


Supplementary online material

Provide a URL to a web page, technical memorandum, or a paper.

No response.


Provide a general summary with relevant background information: Where does the method come from? Is it novel? Name the prior art.

RANK is a predictive modeling tool designed by analysts for analysts. As a result, it combines powerful techniques with modeling experience. It automates many steps of the CRISP-DM methodology for building models. RANK is built to let an analyst quickly build models on huge data sets, with all the elements needed to control the model choices and their quality, so that attention can be focused on the most important parts of the modeling process: data quality, overfitting, stability, and robustness. Using RANK, the analyst gets support for many modeling phases: audit, variable recoding, variable selection, robustness improvement, result analysis, and industrialization.


Summarize the algorithms you used so that those skilled in the art understand what to do. Profile your methods as follows:

Data exploration and understanding

Did you use data exploration techniques to

  • [checked]  Identify selection biases
  • [checked]  Identify temporal effects (e.g. students getting better over time)
  • [checked]  Understand the variables
  • [not checked]  Explore the usefulness of the KC models
  • [not checked]  Understand the relationships between the different KC types

Please describe your data understanding efforts, and interesting observations:

We made an effort to understand how the test set compares to the training set. As we limited our investment in this competition to 10 man-days, we did not have time to gain a deeper understanding of the KC models.


Feature generation

  • [checked]  Features designed to capture the step type (e.g. enter given, or ... )
  • [checked]  Features based on the textual step name
  • [checked]  Features designed to capture the KC type
  • [checked]  Features based on the textual KC name
  • [checked]  Features derived from opportunity counts
  • [checked]  Features derived from the problem name
  • [checked]  Features based on student ID
  • [not checked]  Other features

Details on feature generation:

We created more than 500 variables.

Feature selection

  • [not checked]  Feature ranking with correlation or other criterion (specify below)
  • [checked]  Filter method (other than feature ranking)
  • [checked]  Wrapper with forward or backward selection (nested subset method)
  • [not checked]  Wrapper with intensive search (subsets not nested)
  • [not checked]  Embedded method
  • [not checked]  Other method not listed above (specify below)

Details on feature selection:

RANK uses a highly optimized implementation of the LARS algorithm with the LASSO modification. This technique is based on Efron, Hastie, Johnstone & Tibshirani [1] and selects the most pertinent variables for the scoring. The backward pruning in RANK iteratively eliminates variables whose removal does not change the ROC by more than a prescribed threshold. Using cross-validation, it ends with a variable selection that maximizes the area under the ROC curve.
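RANK's implementation is proprietary; as a rough sketch of the same two-stage idea, one can combine scikit-learn's `LassoLars` (a stand-in, not RANK's optimized LARS) with a backward-pruning loop driven by cross-validated AUC. The data, the threshold value, and the use of logistic regression as the downstream linear scorer are all assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LassoLars, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)

# Stage 1: LARS with the LASSO modification selects candidate variables.
lars = LassoLars(alpha=0.01).fit(X, y)
selected = list(np.flatnonzero(lars.coef_))

def cv_auc(cols):
    """Cross-validated AUC of a plain linear model on a column subset."""
    return cross_val_score(LogisticRegression(), X[:, cols], y,
                           cv=5, scoring="roc_auc").mean()

# Stage 2: backward pruning -- drop a variable whenever removing it
# degrades cross-validated AUC by less than a prescribed threshold.
threshold = 0.002
improved = True
while improved and len(selected) > 1:
    improved = False
    base = cv_auc(selected)
    for col in list(selected):
        trial = [c for c in selected if c != col]
        if base - cv_auc(trial) < threshold:
            selected = trial
            improved = True
            break
```

On this toy data the two informative columns survive pruning while weakly contributing noise columns are eliminated.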

Did you attempt to identify latent factors?

  • [not checked]  Cluster students
  • [not checked]  Cluster knowledge components
  • [not checked]  Cluster steps
  • [not checked]  Latent feature discovery was performed jointly with learning

Details on latent factor discovery (techniques used, useful student/step features, how were the factors used, etc.):

No response.

Other preprocessing

  • [not checked]  Filling missing values (for KC)
  • [not checked]  Principal component analysis

More details on preprocessing:

No response.


Base classifier

  • [not checked]  Decision tree, stub, or Random Forest
  • [checked]  Linear classifier (Fisher's discriminant, SVM, linear regression)
  • [not checked]  Non-linear kernel method (SVM, kernel ridge regression, kernel logistic regression)
  • [not checked]  Naïve Bayes
  • [not checked]  Bayesian Network (other than Naïve Bayes)
  • [not checked]  Neural Network
  • [not checked]  Bayesian Neural Network
  • [not checked]  Nearest neighbors
  • [not checked]  Latent variable models (e.g. matrix factorization)
  • [not checked]  Neighborhood/correlation based collaborative filtering
  • [not checked]  Bayesian Knowledge Tracing
  • [not checked]  Additive Factor Model
  • [not checked]  Item Response Theory
  • [not checked]  Other classifier not listed above (specify below)

Loss Function

  • [not checked]  Hinge loss (like in SVM)
  • [not checked]  Square loss (like in ridge regression)
  • [not checked]  Logistic loss or cross-entropy (like in logistic regression)
  • [not checked]  Exponential loss (like in boosting)
  • [not checked]  None
  • [not checked]  Don't know
  • [checked]  Other loss (specify below)

Regularizer

  • [checked]  One-norm (sum of weight magnitudes, like in Lasso)
  • [checked]  Two-norm (||w||^2, like in ridge regression and regular SVM)
  • [not checked]  Structured regularizer (like in group lasso)
  • [not checked]  None
  • [not checked]  Don't know
  • [not checked]  Other (specify below)

Ensemble Method

  • [not checked]  Boosting
  • [not checked]  Bagging (check this if you use Random Forest)
  • [not checked]  Other ensemble method
  • [not checked]  None

Were you able to use information present only in the training set?

  • [not checked]  Corrects, incorrects, hints
  • [not checked]  Step start/end times

Did you use post-training calibration to obtain accurate probabilities?

  • [not selected]  Yes
  • [selected]  No

Did you make use of the development data sets for training?

  • [not selected]  Yes
  • [selected]  No

Details on classification:

The final variable selection in RANK is based on the area under the ROC curve (using cross-validation). This is not optimal in this context.

Model selection/hyperparameter selection

  • [checked]  We used the online feedback of the leaderboard.
  • [checked]  K-fold or leave-one-out cross-validation (using training data)
  • [not checked]  Virtual leave-one-out (closed-form estimation of LOO with a single classifier training)
  • [not checked]  Out-of-bag estimation (for bagging methods)
  • [checked]  Bootstrap estimation (other than out-of-bag)
  • [checked]  Other cross-validation method
  • [not checked]  Bayesian model selection
  • [not checked]  Penalty-based method (non-Bayesian)
  • [not checked]  Bi-level optimization
  • [not checked]  Other method not listed above (specify below)

Details on model selection:

No response.


Final Team Submission

Scores shown in the table below are Cup scores, not leaderboard scores. The difference between the two is described on the Evaluation page.

A reader should also know from reading the fact sheet what the strength of the method is.

Please comment about the following:

Quantitative advantages (e.g., compact feature subset, simplicity, computational advantages).

We participated in this contest to validate that our software still provides state-of-the-art results in a very short time (as it did for the last three KDD Cups). According to the results on the sample, this goal seems to have been achieved. We spent only 10 man-days in total to build the model; 9 of them were spent on feature creation.

Qualitative advantages (e.g. compute posterior probabilities, theoretically motivated, has some elements of novelty).

Automatic variable recoding: RANK offers several recoding strategies. The most efficient one, initially designed for nominal variables, converts modalities into numeric values according to their relation with the target. RANK extends this recoding to numeric variables by coupling it with an efficient binning technique. The advantage of this recoding is that it handles non-normal distributions and spots highly non-linear relationships between any variable and the target.

Overfitting: RANK is designed to avoid overfitting. This is achieved through cross-validation, ridge regression, regrouping of small modalities, and missing-value treatment. Performance is carefully assessed on a large number of bootstrap samples.
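RANK's exact recoding is not disclosed; the following is a generic sketch of target-based recoding in the spirit of Weight of Evidence [3], with simple additive smoothing standing in for RANK's small-modalities regrouping. The data, column names, and the smoothing constant are illustrative assumptions:

```python
import pandas as pd

df = pd.DataFrame({
    "city":   ["A", "A", "B", "B", "B", "C"],
    "target": [1, 0, 1, 1, 1, 0],
})

prior = df["target"].mean()
k = 2.0  # smoothing strength: rare modalities are pulled toward the prior

stats = df.groupby("city")["target"].agg(["mean", "count"])
# Smoothed target rate per modality; small modalities shrink toward the
# prior, which plays the role of RANK's small-modalities regrouping.
encoding = (stats["mean"] * stats["count"] + prior * k) / (stats["count"] + k)
df["city_recoded"] = df["city"].map(encoding)

# For a numeric variable, bin first, then recode the bins the same way.
income = pd.Series([10.0, 12.0, 30.0, 35.0, 90.0, 95.0])
income_bin = pd.qcut(income, q=3, labels=False)
```

The recoded column is numeric and monotone in the modality's empirical target rate, so even a linear model can exploit highly non-linear relationships between the original variable and the target.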

Other methods. List other methods you tried.

No response.

How helpful did you find the included KC models?

  • [not selected]  Crucial in getting good predictions
  • [selected]  Somewhat helpful in getting good predictions
  • [not selected]  Neutral
  • [not selected]  Not particularly helpful
  • [not selected]  Irrelevant

If you learned latent factors, how helpful were they?

  • [not selected]  Crucial in getting good predictions
  • [not selected]  Somewhat helpful in getting good predictions
  • [not selected]  Neutral
  • [not selected]  Not particularly helpful
  • [selected]  Irrelevant

Details on the relevance of the KC models and latent factors:

No response.

Software Implementation


Availability:

  • [not checked]  Proprietary in-house software
  • [checked]  Commercially available in-house software
  • [not checked]  Freeware or shareware in-house software
  • [not checked]  Off-the-shelf third party commercial software
  • [not checked]  Off-the-shelf third party freeware or shareware


Programming language:

  • [checked]  C/C++
  • [not checked]  Java
  • [not checked]  Matlab
  • [not checked]  Python/NumPy/SciPy
  • [not checked]  Other (specify below)

Details on software implementation:

One unique feature of RANK is that it stores all the data required for model computation in the computer's RAM. RANK uses an advanced proprietary compression technique that stores 15 GB of data in barely 240 MB of RAM. This strategy minimizes data-access time and allows RANK to easily and reliably perform multi-pass model computations.
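The compression technique itself is proprietary and not described here. As an illustration of why columnar data like student-interaction logs compresses so well, a plain dictionary encoding already shrinks a low-cardinality text column by an order of magnitude (the column contents are invented for the example):

```python
import numpy as np

# A low-cardinality column, typical of student-interaction logs.
column = ["ENTER GIVEN", "CORRECT", "CORRECT", "HINT", "CORRECT"] * 20_000

# Dictionary encoding: store each distinct value once, keep one byte per row.
values, codes = np.unique(column, return_inverse=True)
codes = codes.astype(np.uint8)          # 3 distinct values fit in one byte

raw_bytes = sum(len(s) for s in column)             # raw text payload
packed_bytes = codes.nbytes + sum(len(s) for s in values)
ratio = raw_bytes / packed_bytes                    # ~7x on this toy column
```

Combining such dictionary encoding with bit-packing and run-length techniques across many columns makes a 64:1 ratio (15 GB into 240 MB) plausible on highly redundant log data.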

Hardware Implementation


Operating system:

  • [checked]  Windows
  • [not checked]  Linux or other Unix
  • [not checked]  Mac OS
  • [not checked]  Other (specify below)


Memory:

  • [selected]  <= 2 GB
  • [not selected]  <= 8 GB
  • [not selected]  >= 8 GB
  • [not selected]  >= 32 GB


Parallelism:

  • [checked]  Multi-processor machine
  • [not checked]  Run in parallel different algorithms on different machines
  • [not checked]  Other (specify below)

Details on hardware implementation. Specify whether you provide a self-contained application or libraries.

No response.

Code URL

Provide a URL for the code (if available):

No response.

Competition Setup

From a performance point of view, the training set was

  • [selected]  Too big (could have achieved the same performance with significantly less data)
  • [not selected]  Too small (more data would have led to better performance)

From a computational point of view, the training set was

  • [not selected]  Too big (imposed serious computational challenges, limited the types of methods that can be applied)
  • [selected]  Adequate (the computational load was easy to handle)

Was the time constraint imposed by the challenge a difficulty or did you feel enough time to understand the data, prepare it, and train models?

  • [not selected]  Not enough time
  • [selected]  Enough time
  • [not selected]  It was enough time to do something decent, but there was a lot left to explore. With more time performance could have been significantly improved.

How likely are you to keep working on this problem?

  • [not selected]  It is my main research area.
  • [not selected]  It was a very interesting problem. I'll keep working on it.
  • [selected]  This data is a good fit for the data mining methods I am using/developing. I will use it in the future for empirical evaluation.
  • [not selected]  Maybe I'll try some ideas, but it is not high priority.
  • [not selected]  Not likely to keep working on it.

Comments on the problem (What aspects of the problem did you find most interesting? Did it inspire you to develop new techniques?)

We avoided overfitting by selecting, from the complete training set, a build set similar to the test set.
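The held-out test rows in this cup were, roughly, a later problem of each student-unit pair than the training rows; an internal split that mimics that structure can be sketched as follows (the toy log and column names are assumptions, not the team's actual split code):

```python
import pandas as pd

log = pd.DataFrame({
    "student": ["s1"] * 4 + ["s2"] * 3,
    "unit":    ["U1"] * 7,
    "problem": ["P1", "P1", "P2", "P2", "P1", "P2", "P2"],
    "row":     range(7),
})

# Hold out each student's last problem within a unit as the internal
# "build" (validation) set, mirroring the test set's structure.
last_problem = log.groupby(["student", "unit"])["problem"].transform("last")
build = log[log["problem"] == last_problem]
train = log[log["problem"] != last_problem]
```

Validating on rows distributed like the test rows gives a much more honest estimate of leaderboard performance than a uniform random split.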


List references below.

1. B. Efron, T. Hastie, I. Johnstone and R. Tibshirani. Least Angle Regression. The Annals of Statistics, 2004, Vol. 32, No. 2, 407-499.
2. A. E. Hoerl and R. W. Kennard. Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics, 1970, 12: 55-67.
3. E. P. Smith, I. Lipkovich and K. Ye. Weight of Evidence (WOE): Quantitative Estimation of Probability of Impact. Blacksburg, VA: Virginia Tech, Department of Statistics, 2002.