Chrome omnibox keyword search broken

Update 2021-02-24: The Chrome developers have rolled back the change.

The latest stable Chrome (released February 4) breaks the way keywords can be used to invoke search engines from the omnibox (address bar). TL;DR: typing a space after the keyword no longer works, but Tab does. Disabling the omnibox-keyword-search-button flag reverts to the old behaviour.

Prior to the upgrade I could type “g foo” directly in the omnibox, hit Enter, and get Google search results for “foo”. After typing the space after the “g”, the keyword was expanded to “Search Google” or similar, so in the case of “g foo” you’d actually see something like “Search Google | foo”.

screenshot of omnibox with "g " expanded to "Search Google |"

This works because I configured Google search to use the “g” keyword, from the chrome://settings/searchEngines settings page, as documented by Google.

screenshot of Chrome search engine settings with Google using the "g" keyword

However, this stopped working in Chrome 88.0.4324.150. Instead of “g foo” invoking the search engine I’d configured with keyword “g”, it invoked the default search engine with the query “g foo”! This was extremely disconcerting since I’m very accustomed to using keyword prefixes to search different websites.

Fortunately, you can still get at the keyword search functionality by hitting Tab after the keyword rather than space. So in the previous example, typing “g”, then Tab, then “foo” will search for “foo”. The Tab key doesn’t work quite the same way that space used to: it moves focus to a button in the autocomplete list for the keyword search, but merely having focus is enough to activate the search when you continue typing.

screenshot of the "Search Google" button focused in the omnibox autocomplete list

The “Chrome search engine shortcuts no longer auto-fill with space” bug on the Chromium tracker explains that this is due to the new button, and that you can get space working again by opening chrome://flags/#omnibox-keyword-search-button and disabling the omnibox-keyword-search-button flag.

Machine learning for climbing grades

Conventional assessment of route difficulty for rock climbing is a subjective process. A small number of people (often just one) assign a grade for a particular route, and there isn’t really a process for refining grades once they’ve been assigned (it’s just one opinion vs another). Most of the grading systems are on an ordinal scale, which means you can rank the grades in order but the difference or ratio between grades isn’t meaningful. Intentional biases (such as “sandbagging”, deliberately under-grading a route) are even part of climbing culture.

To address these shortcomings, I developed a statistical model for grading rock climbing routes. The difficulty of a climbing route and the performance of a climber on a particular day are described by numerical ratings. The difference in ratings between a climber and a route determines the probability the climber will ascend the route “successfully”. For modern sport climbing, success loosely means getting to the top without weighting a rope or other mechanical devices. The climbing model is based on a dynamic Bradley-Terry model, which is a common model for game and sports rating systems such as Elo and Glicko-2.

While the statistical model provides a theory for predicting ascent outcomes based on ratings parameters, it’s not useful in practice without a process for estimating the parameters (individual ratings for climbers and routes) and hyperparameters (generalizations that are independent of individual climbers or routes, e.g. how hard the “average” route is, and how quickly climbers can improve). So I implemented an algorithm for estimating the parameters, based on the Whole-History Rating (WHR) algorithm. WHR is a fast algorithm that uses second-order (Newton-Raphson) optimization for finding the ratings for climbers and routes that maximize the likelihood of observing a particular set of ascents (known as the maximum a posteriori estimates). I used machine learning methods to choose the hyperparameters. The implementation is available as a free, open-source software package at the Climbing Ratings project on GitHub.
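To give a flavour of the Newton-Raphson optimization, here is a deliberately simplified one-parameter version: estimating a single route’s rating while holding the climber ratings fixed, with a Gaussian prior supplying the regularization. This is a sketch of the idea only; the function name, prior values, and data layout are hypothetical and don’t reflect the Climbing Ratings package’s actual API, which optimizes all climbers and routes over their whole histories:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def fit_route_rating(ascents, prior_mean=0.0, prior_var=1.0, iterations=10):
    """MAP estimate of one route's rating via Newton-Raphson.

    `ascents` is a list of (climber_rating, success) pairs, where
    success is 1 if the climber ascended the route cleanly, else 0.
    Maximizes the Bradley-Terry log-likelihood plus a Gaussian
    log-prior on the route rating.
    """
    r = prior_mean  # start at the prior mean
    for _ in range(iterations):
        # Gradient and Hessian of the Gaussian log-prior.
        grad = -(r - prior_mean) / prior_var
        hess = -1.0 / prior_var
        for climber, success in ascents:
            p = sigmoid(climber - r)  # modeled P(climber succeeds)
            grad += p - success       # d(log-likelihood)/dr
            hess += -p * (1.0 - p)    # d2(log-likelihood)/dr2
        r -= grad / hess              # Newton-Raphson step
    return r

# A route that repels every attempt ends up rated harder (higher)
# than the climbers who tried it, tempered by the prior.
print(fit_route_rating([(0.0, 0)] * 5))
```

The second-order step is what makes WHR fast: because the log-posterior here is concave, a handful of Newton iterations converge to the maximum a posteriori estimate.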

Continue reading “Machine learning for climbing grades”

Silencing the Ergodox EZ

Even after modifying my switches with silicone and dental floss, I still wasn’t satisfied with the noise of my Ergodox EZ. The dampened upstrokes were still causing a reverberation that I determined was coming from the Ergodox EZ case/PCB itself.

The Ergodox EZ CIY case has an integrated plate-mount design (where the “plate” is just part of the ABS upper shell), which is a design that’s notorious for producing reverb. The case is also very hollow, with 1–3mm gaps above the PCB and 2–4mm gaps below it. This all contributed to a mid-range “thonk” around 1kHz on upstrokes.

I successfully dampened the “thonk” sound by adding neoprene foam and rubber. This cost less than $10 (AUD) in materials and took about 90 minutes.

photo of Ergodox EZ top case with neoprene rubber strips
Top half of the Ergodox EZ with 1.5mm neoprene rubber strips inserted between the columns.

Continue reading “Silencing the Ergodox EZ”