Better scoring rules

This git repository outlines three scoring rules that I believe might serve forecasting platforms better than the alternatives they currently use. The motivation behind it is my frustration with the scoring rules used on current forecasting platforms, like Metaculus, Good Judgment Open, Manifold Markets, INFER, and others. In Sempere and Lawsen, we outlined and categorized how current scoring rules go wrong, and I think that the three new scoring rules I propose avoid the pitfalls identified in that paper. In particular, these new rules incentivize collaboration.
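For concreteness, a scoring rule maps a probabilistic forecast and a realized outcome to a reward. Below is a minimal, illustrative Python sketch of two standard proper scoring rules, the Brier score and the logarithmic score; the critiques in Sempere and Lawsen concern how platforms deploy rules like these, not their textbook definitions.

```python
import math

def brier_score(p: float, outcome: int) -> float:
    """Brier score for a binary forecast: squared error between the
    stated probability and the 0/1 outcome. Lower is better."""
    return (p - outcome) ** 2

def log_score(p: float, outcome: int) -> float:
    """Logarithmic score: log of the probability assigned to the
    realized outcome. Higher (closer to 0) is better."""
    return math.log(p if outcome == 1 else 1 - p)

# A forecaster who reports 0.8 on an event that does happen:
print(brier_score(0.8, 1))  # ≈ 0.04
print(log_score(0.8, 1))    # ≈ -0.223
```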

I was also frustrated with the "reciprocal scoring" method recently proposed in Karger et al. It is a method that can be used to resolve questions which might otherwise seem unresolvable, or which would only resolve far in the future. But it resembles a Keynesian Beauty Contest, meaning that forecasters are not incentivized to predict reality directly, but rather to predict whichever opinion will be mainstream among forecasters. So I also propose two replacement scoring rules for reciprocal scoring.
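To see why a Keynesian Beauty Contest distorts incentives, consider the following hypothetical sketch. It is an assumption for illustration, not the exact mechanism in Karger et al.: each forecaster is scored by proximity to the median of the other forecasters' reports, so the score-maximizing report is your estimate of the consensus rather than your probability for the event.

```python
import statistics

def reciprocal_style_scores(reports: list[float]) -> list[float]:
    """Hypothetical reciprocal-style rule: each forecaster is penalized
    by squared distance to the median of everyone else's reports.
    This rewards predicting the crowd, not the world."""
    scores = []
    for i, report in enumerate(reports):
        others = reports[:i] + reports[i + 1:]
        consensus = statistics.median(others)
        scores.append(-((report - consensus) ** 2))
    return scores

# Your true belief is 0.9, but you expect the crowd to sit near 0.5:
honest = [0.5, 0.5, 0.5, 0.9]  # you report your actual belief
herd   = [0.5, 0.5, 0.5, 0.5]  # you report the expected consensus
print(reciprocal_style_scores(honest)[-1])  # ≈ -0.16
print(reciprocal_style_scores(herd)[-1])    # -0.0, the best possible score
```

Under this toy rule, a forecaster whose true belief is 0.9 does strictly better by reporting the expected crowd value of 0.5, regardless of what actually happens.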

I am choosing to publish these scoring rules on GitHub and on arXiv[1] because journals tend to be extractive[2] and time-consuming, and because I am in a position not to care about them. In any case, the three scoring rules are laid out in the papers in this repository.

Although Amplified Oracle builds upon Beat the house to ensure collaborativeness, I would recommend reading Amplified Oracle first, and then coming back to Beat the house if needed.

Issues (complaints or ideas) or pull requests (tweaks and improvements to our work) are both welcome. I would also like to thank Eli Lifland, Gavin Leech and Misha Yagudin for comments and suggestions, as well as Ezra Karger, SimonM, Jaime Sevilla and others for fruitful discussion.

Footnotes

  1. I am planning to upload these papers to arXiv around the 30th of April, so discussion and suggestions before then might be particularly valuable.

  2. For instance, the open access option in the International Journal of Forecasting has an embargo period of 24 months or a price of $1,200. I also think that the article publishing model is a bad fit, since the proposed scoring rules are fairly experimental.
