Quantitative frameworks like ICE and RICE seem great on the surface -- "We're using data!"
Unfortunately, these frameworks come with some built-in biases.
First, there's the formula that collapses each project into a single score. How much weight should effort carry versus impact and confidence? Who decides the formula? Should it change over time?
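To make this concrete, here's a toy sketch (hypothetical projects and weights, not Prioritzr code) showing how two equally defensible weightings of the same scores produce opposite rankings:

```python
# Two hypothetical projects, scored 1-10 on impact, confidence, and effort.
projects = {
    "Checkout redesign": {"impact": 10, "confidence": 4, "effort": 8},
    "Bug-fix sprint":    {"impact": 6,  "confidence": 8, "effort": 3},
}

def score(p, w_impact, w_conf, w_effort):
    # One of many plausible formulas; the choice of weights is itself a bias.
    return w_impact * p["impact"] + w_conf * p["confidence"] - w_effort * p["effort"]

for weights in [(1, 1, 1), (2, 1, 0.5)]:
    ranking = sorted(projects, key=lambda name: score(projects[name], *weights), reverse=True)
    print(weights, "->", ranking)

# (1, 1, 1)   -> ['Bug-fix sprint', 'Checkout redesign']
# (2, 1, 0.5) -> ['Checkout redesign', 'Bug-fix sprint']
```

Same inputs, opposite conclusions -- whoever picks the weights picks the winner.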
Second, a collaborator with polarized scoring (e.g. everything is a 2 or a 10, with not much in between) will sway the final averages far more than someone who scores with nuance. That's really bad, right?
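Here's a minimal illustration with made-up votes: the nuanced voter slightly prefers X, but the polarized voter's wide spread decides the average on its own:

```python
# Two voters scoring two projects on a 1-10 scale.
nuanced   = {"X": 7, "Y": 6}   # prefers X, but stays within a narrow band
polarized = {"X": 2, "Y": 10}  # prefers Y, scoring at the extremes

for project in ("X", "Y"):
    print(project, (nuanced[project] + polarized[project]) / 2)

# X 4.5
# Y 8.0  <- the polarized voter's spread single-handedly sets the order
```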
Pairwise comparison with Prioritzr sidesteps both problems with a series of simple A-vs-B decisions and a rating algorithm that treats every vote equally. Plus, every collaborator votes from their full context instead of just the handful of factors baked into a scoring system.
Our pairwise comparison method is powered by the Glicko-2 rating system, a method originally designed for rating players in competitive games like chess and Go.
Each "player" is assigned a starting score and then loses or gains points based on the outcome of each much. When a lower-ranked "player" beats a higher-ranking player, more points are earned by the winner.