So in fact, the linear scoring rule is so improper that it only recommends that a forecaster report their true credence when that credence happens to fall in a set of measure zero. But if you instead adopt the quadratic scoring rule, you have a strictly proper accuracy metric called the Brier score.
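Here's a minimal sketch of the contrast (my own illustration, not from the post), using a hypothetical true credence of 0.7: the report that minimizes expected linear penalty is an extreme one, while the report that minimizes expected quadratic (Brier) penalty is the true credence itself.

```python
import numpy as np

# Hypothetical true credence in the proposition, chosen just for illustration.
c = 0.7
reports = np.linspace(0, 1, 1001)

# Expected penalty under credence c for each possible report r, where the
# outcome is 1 with probability c and 0 with probability 1 - c.
# Linear (absolute) penalty: E[|r - outcome|] = c*(1 - r) + (1 - c)*r
linear = c * (1 - reports) + (1 - c) * reports
# Quadratic (Brier) penalty: E[(r - outcome)^2] = c*(1 - r)^2 + (1 - c)*r^2
brier = c * (1 - reports) ** 2 + (1 - c) * reports ** 2

print("Best report under the linear rule:", round(reports[np.argmin(linear)], 3))  # 1.0, not 0.7
print("Best report under the Brier rule:", round(reports[np.argmin(brier)], 3))    # 0.7
```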
Are there strictly proper accuracy scores for fuzzy forecasts? This paper proves that you can't define a strictly proper scoring rule that takes only the upper and lower envelope of a representor and returns a real-valued accuracy score (pg 9). In other words, if a fuzzy forecaster reports to you only the highest and lowest credences they entertain on a proposition, and you have to reward them with a single dollar value, there's no way you can incentivize them to tell you their true beliefs.
[4] | There's a more technically cautious definition in §2.1 of this paper. |
The standard procedure for revising fuzzy beliefs when you gain new evidence is to update each of the credences in your representor by normal conditionalization. That is, if your representor at time $t$ is $\mathbb{P}_t$, and between $t$ and $t'$ you learn only that proposition $E$ is true, then your representor at time $t'$ should be

$$\mathbb{P}_{t'} = \{\, P(\cdot \mid E) : P \in \mathbb{P}_t \,\}.$$
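As a toy illustration (my own sketch, not from the post), here is what that update looks like over a made-up space of four worlds, with a representor containing just two probability functions:

```python
def conditionalize(prior, evidence):
    """Bayes' rule: P(w | E) = P(w) / P(E) for worlds w in E, and 0 otherwise."""
    p_evidence = sum(prior[w] for w in evidence)
    return {w: (prior[w] / p_evidence if w in evidence else 0.0) for w in prior}

# A representor is just a set of probability functions over the same worlds;
# these two priors and the four worlds are invented for the example.
representor = [
    {"w1": 0.10, "w2": 0.20, "w3": 0.30, "w4": 0.40},
    {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25},
]

# Learn only that the true world is in E = {w1, w2}.
evidence = {"w1", "w2"}

# Fuzzy updating: conditionalize each member of the representor on E.
updated_representor = [conditionalize(p, evidence) for p in representor]

for p in updated_representor:
    print(p)
```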
There's some writing about why this updating procedure is rationally required, but I don't find it terribly interesting as I'm not tempted to reject conditionalization in either the sharp or the fuzzy case.[5]
[5] | This article by Pettigrew makes the case for sharp Bayesians to update by conditionalization. I also recommend his interview on the Formal Philosophy Podcast. |
Are there cases where it's intuitively appealing for a fuzzy Bayesian to not conditionalize on new evidence?
Is it possible to run fair empirical tests of fuzzy Bayesians' performance compared to sharp Bayesians?
Do working statisticians sometimes use fuzzy probabilities in their models? If so, how do they do it, and why?