Machine learning for climbing grades

Conventional assessment of route difficulty in rock climbing is a subjective process. A small number of people (often just one) assign a grade to a particular route, and there isn’t really a process for refining grades once they’ve been assigned (it’s just one opinion vs another). Most grading systems are ordinal scales, which means you can put the grades in order, but the difference or ratio between grades isn’t meaningful. Intentional biases, such as “sandbagging” (deliberately understating a route’s difficulty), are even part of climbing culture.

To address these problems, I developed a statistical model for grading rock climbing routes. It uses rating parameters for the difficulty of the routes and the performance of the climbers to predict ascent outcomes. The difference in ratings between a climber and a route determines the probability that the climber will ascend the route “successfully” (which loosely means getting to the top without weighting a rope or other mechanical devices). It’s based on a dynamic Bradley-Terry model, the same family of models underlying game and sports rating systems such as Elo, Glicko-2, and WHR.
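To make that concrete, here is a minimal sketch of the Bradley-Terry success probability. The function name and rating scale are illustrative, not taken from the actual package:

```python
import math

def ascent_probability(climber_rating: float, route_rating: float) -> float:
    """Bradley-Terry probability that the climber ascends the route cleanly.

    Ratings are on a log scale, so the probability depends only on the
    difference between climber and route: a climber rated equal to the route
    succeeds 50% of the time, and each +1 in the difference multiplies the
    odds of success by e.
    """
    return 1.0 / (1.0 + math.exp(route_rating - climber_rating))

# Example: a climber rated 1.0 above the route succeeds ~73% of the time.
print(ascent_probability(5.0, 4.0))
```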

While the model provides a mathematical theory for predicting ascent outcomes from the rating parameters, it’s not useful in practice without a process for estimating those parameters. So I implemented a machine learning algorithm for estimating them, based on the Whole-History Rating (WHR) algorithm. It’s a fast algorithm that uses second-order (Newton-Raphson) optimization to find the maximum a posteriori estimates of the model parameters. The implementation is available as a free, open-source software package at the climbing ratings project on GitHub.
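As a rough illustration of the estimation step, the toy sketch below uses Newton-Raphson updates to find the MAP rating of a single route from fixed climber ratings, under a Gaussian prior. The function and parameter names are mine, not the package’s; the real WHR-based algorithm updates climber and route ratings jointly and also models climbers’ ratings changing over time.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def estimate_route_rating(ascents, prior_mean=0.0, prior_var=1.0, iterations=10):
    """MAP estimate of a single route's rating by Newton-Raphson.

    `ascents` is a list of (climber_rating, outcome) pairs, where outcome is
    1 for a clean ascent and 0 otherwise. Climber ratings are held fixed here
    purely for illustration.
    """
    r = prior_mean
    for _ in range(iterations):
        gradient = -(r - prior_mean) / prior_var   # Gaussian prior term
        hessian = -1.0 / prior_var
        for climber_rating, outcome in ascents:
            p = sigmoid(climber_rating - r)        # predicted success probability
            gradient += p - outcome                # d(log-likelihood)/dr
            hessian -= p * (1.0 - p)               # d2(log-likelihood)/dr2
        r -= gradient / hessian                    # Newton-Raphson step
    return r

# Example: three climbers attempt the route; the weakest one falls.
print(estimate_route_rating([(1.0, 1), (0.5, 1), (-0.5, 0)]))
```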

So, how did it perform? With the help of theCrag.com, I fitted the model to hundreds of thousands of ascents from Australia. The output ratings were closely correlated with the conventional (subjectively assigned) grades. This is an important result because it shows that the principles of science (using observations to make testable predictions about the world) can be applied to grading climbs. I’ve published a more formal write-up of the results in an academic paper, Estimation of Climbing Route Difficulty using Whole-History Rating, and a layperson’s explanation in the climbing magazine Vertical Life.
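For context, since conventional grades are ordinal (as noted above), a rank correlation such as Spearman’s rho is one natural way to measure agreement between them and the model’s ratings. The snippet below is purely illustrative, with made-up numbers rather than the actual results:

```python
from scipy.stats import spearmanr

# Hypothetical model ratings and conventional (Ewbank) grades for five routes.
route_ratings = [1.2, 2.8, 0.4, 3.9, 2.1]
conventional_grades = [14, 19, 12, 23, 18]

# Spearman's rho compares the rank orderings, which suits ordinal grades.
rho, p_value = spearmanr(route_ratings, conventional_grades)
print(f"Spearman correlation: {rho:.2f} (p={p_value:.3f})")
```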

The real value for climbers came when theCrag.com integrated the model into their website to produce grades for the thousands of routes where they have sufficient ascent data. The machine-learned grades appear on each route’s description page under the grade citations as “grAId”, alongside the grades from guidebooks. Sometimes the machine-learned grade and the conventional grade differ, suggesting the conventional grade is inconsistent with climbers’ actual ascent experiences. This sparked some debate, but it also provided valuable feedback and opportunities for improvement.

I’m now starting to plan out the next steps from a technical perspective. Get in contact if you have ideas!
