Fuzzy Bayesianism
Clearly real humans do not have sharp credences on most propositions of real interest outside of games and scientific experiments. Perfectly sharp credences are ruled out by both our computational and our evidential constraints. Even if I could access enough evidence to pin down a unique sharp posterior on some realistic proposition, my brain runs too slowly to update on all this evidence in reasonable time, and it’s too small to hold the resulting infinitely precise credences in memory. On the other hand, even an ideal Bayesian with infinite memory wouldn’t be able to justify a unique sharp posterior just on the basis of evidence I can access. I simply don’t know enough about the world to say non-arbitrarily that my credence on rain tomorrow should be 0.567… instead of 0.568…
Given that we all have these defects, we might wonder what mathematical formalism is appropriate for representing our fuzzy belief states. Sharp Bayesianism will judge all fuzzy believers like us irrational, but surely that’s not the end of the story. There are better and worse ways of forming beliefs under scarcity of evidence and bounded compute. Maybe there are principles of fuzzy Bayesianism that guarantee our beliefs will be as good as they can be given our constraints.
So the big questions are:
- How should you formally represent a fuzzy belief?
- How do you measure the accuracy of a fuzzy belief? (1)I.e., are there scoring rules with desirable properties that don’t assume precision?
- How should you update fuzzy beliefs when you receive new evidence?
- How do you act on a fuzzy belief?
Representing fuzzy beliefs #
A sharp Bayesian has a single credence function that takes in any proposition and returns a probability. A fuzzy Bayesian, by contrast, has a set of credence functions called a representor. Each function in the representor has to be normalized, non-negative, and countably additive, but the functions aren’t required to agree with each other.
Why would you think this is a reasonable model of fuzzy belief given that no real person carries a list of explicit credence functions around in their head? The answer is that realistic fuzzy agents can be represented as though they had representors.
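To make the formalism concrete, here’s a minimal sketch in Python of a representor over a toy finite sample space. The outcomes, numbers, and function names are all made up for illustration, not part of any standard library:

```python
# A toy sample space: possible weather tomorrow.
OUTCOMES = ["rain", "cloudy", "sun"]

# A sharp Bayesian carries one credence function; a fuzzy Bayesian carries a
# set of them. On a finite space each function just needs to be non-negative
# and sum to 1.
representor = [
    {"rain": 0.3, "cloudy": 0.4, "sun": 0.3},
    {"rain": 0.5, "cloudy": 0.3, "sun": 0.2},
    {"rain": 0.4, "cloudy": 0.4, "sun": 0.2},
]

def prob(p, proposition):
    """Probability a single credence function assigns to a proposition,
    where a proposition is modelled as a set of outcomes."""
    return sum(p[w] for w in proposition)

# The fuzzy believer's attitude toward "rain" is not one number but a spread:
rain = {"rain"}
lower = min(prob(p, rain) for p in representor)
upper = max(prob(p, rain) for p in representor)
print(f"credence in rain lies in [{lower}, {upper}]")  # [0.3, 0.5]
```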
Suppose we have an agent capable of saying, for some pairs of propositions, whether one proposition is at least as subjectively credible as the other. That is, they have a credibility relation $\trianglelefteq$ on the set of all propositions, and that relation may or may not be complete or transitive. Suppose also that there’s a second relation $\trianglelefteq'$ on the set of all propositions such that
- $A \trianglelefteq B \implies A \trianglelefteq' B.$ ($\trianglelefteq'$ always agrees with $\trianglelefteq.$)
- If $A \trianglelefteq B$ and $B \not\trianglelefteq A,$ then $B \not\trianglelefteq' A.$ ($\trianglelefteq'$ never makes two propositions equally credible if $\trianglelefteq$ said only that one was at least as credible as the other.)
- $\trianglelefteq'$ is complete, transitive, and continuous.
Then it’s possible to represent our agent as a fuzzy Bayesian who judges one proposition at least as credible as another if and only if all the functions in their representor agree that the more credible proposition is at least as likely as the less credible one.
(2)See Bradley DT w/ HF §11.5.2
More formally, the conditions above imply that there exists a maximal set $P$
of probability functions such that
\[A \trianglelefteq B \iff p(A) \le p(B)\;\forall p\in P.\]
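Continuing the toy Python sketch from above (again with made-up numbers), here’s how the represented credibility relation would be read off a representor:

```python
representor = [
    {"rain": 0.3, "cloudy": 0.4, "sun": 0.3},
    {"rain": 0.5, "cloudy": 0.3, "sun": 0.2},
]

def prob(p, proposition):
    return sum(p[w] for w in proposition)

def at_least_as_credible(A, B):
    """A is at most as credible as B: every credence function in the
    representor gives A at most as much probability as it gives B."""
    return all(prob(p, A) <= prob(p, B) for p in representor)

print(at_least_as_credible({"rain"}, {"rain", "cloudy"}))  # True: all functions agree
print(at_least_as_credible({"rain"}, {"cloudy"}))          # False: the functions disagree
print(at_least_as_credible({"cloudy"}, {"rain"}))          # False: so the relation is incomplete here
```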
I think this theorem gives us a pretty good reason to trust the formalism of fuzzy Bayesianism. Even if your representor isn’t part of your “mental furniture,” (3)Ahmed ED&C §1.4 we can accurately model you as if your probabilistic judgements were arrived at by consulting a representor—at least unless you’re severely inconsistent.
Scoring fuzzy beliefs #
Given a sharp credence on some proposition and knowledge of whether the proposition is actually true, we can define functions called scoring rules that will tell us how accurate the credence was. Some of these scoring rules have the nice property that if the true chance of the proposition being true is $p$
, the credence that will receive the highest expected accuracy score is also $p.$
Such scoring rules are called strictly proper. Another way of thinking about strict propriety is that if you ask an EV maximizer to give their credence on a proposition and offer them a reward proportional to their accuracy under a strictly proper scoring rule, they will always report their true credence.
(4)There’s a more technically cautious definition in §2.1 of this paper.
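For a rough statement (my own paraphrase of the standard definition, not a quotation from the footnoted paper): write $S(p, 1)$ and $S(p, 0)$ for the score credence $p$ earns when the proposition turns out true and false respectively. Then $S$ is strictly proper when, for every chance $x \in [0,1],$
\[x\,S(p, 1) + (1-x)\,S(p, 0)\]
is uniquely maximized at $p = x.$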
Aren’t all plausible scoring rules strictly proper? No. Here’s a scoring rule that looks intuitively reasonable to me but isn’t:
\[S(p) = 1 - |\bold{1}_A- p| \]
where $S$ is your accuracy, $A$ is some proposition, $p$ is your credence in $A,$ and $\bold{1}_A$ equals 1 if $A$ is true and 0 otherwise.
This scoring rule rewards you linearly for getting closer to the truth, with a maximum possible score of 1 and a minimum score of 0. The trouble with this scoring rule is that it doesn’t punish forecasters harshly enough for giving extreme credences in the wrong direction. If the true chance that $A$ is true is $x,$ your expected score for reporting credence $p$ is $x\big(1-|1-p|\big) + (1-x)\big(1-|0-p|\big) = xp + (1-x)(1-p),$ so the linear scoring rule says that the best credence to have is
\[\argmax_{p\in[0,1]} \big( xp + (1-x)(1-p)\big) = \begin{cases} 1 & \text{if } x>1/2 \\ 0 & \text{if } x<1/2 \\ [0,1] & \text{if } x=1/2 \end{cases}\]
So in fact, the linear scoring rule is so improper that it only recommends a forecaster report their true credence when that credence happens to fall in a set of measure zero. But if you instead adopt the quadratic scoring rule
\[S(p) = 1 - (\bold{1}_A- p)^2, \]
you have a strictly proper accuracy metric called the Brier score.
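As a quick numerical check, here’s a Python sketch comparing the credence each rule incentivizes when the true chance is 0.7. The crude grid search and the function names are mine, purely for illustration:

```python
def linear_score(report, outcome):
    """1 - |1_A - p|: the linear rule from above."""
    return 1 - abs(outcome - report)

def brier_score(report, outcome):
    """1 - (1_A - p)^2: the quadratic (Brier) rule."""
    return 1 - (outcome - report) ** 2

def best_report(score, true_chance, grid=1000):
    """Credence in [0, 1] (searched on a coarse grid) with the highest expected score."""
    def expected(p):
        return true_chance * score(p, 1) + (1 - true_chance) * score(p, 0)
    return max((i / grid for i in range(grid + 1)), key=expected)

print(best_report(linear_score, 0.7))  # 1.0: the linear rule pushes you to the extreme
print(best_report(brier_score, 0.7))   # 0.7: the Brier score elicits the true chance
```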
Are there strictly proper accuracy scores for fuzzy forecasts? This paper proves that you can’t define a strictly proper scoring rule that takes only the upper and lower envelope of a representor and returns a real-valued accuracy score (p. 9). In other words, if a fuzzy forecaster reports to you only the highest and lowest credences they entertain on a proposition, and you have to reward them with a single dollar value, there’s no way you can incentivize them to tell you their true beliefs.
Updating fuzzy beliefs #
The standard procedure for revising fuzzy beliefs when you gain new evidence is to update each of the credences in your representor by normal conditionalization. That is, if your representor at time $t$
is $P_t$
, and between $t$
and $t'$
you learn only that proposition $E$
is true, then your representor at time $t'$
should be
\[P_{t'} = \{p(\cdot |E)\,|\, p \in P_t\}\]
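Here’s what that looks like in the same toy Python setting (hypothetical numbers again): each member of the representor gets conditionalized separately, and the resulting posteriors can still disagree.

```python
representor = [
    {"rain_warm": 0.2, "rain_cold": 0.1, "dry_warm": 0.5, "dry_cold": 0.2},
    {"rain_warm": 0.3, "rain_cold": 0.2, "dry_warm": 0.3, "dry_cold": 0.2},
]

def conditionalize(p, E):
    """Return p(. | E) for a credence function given as a dict over outcomes."""
    p_E = sum(prob for w, prob in p.items() if w in E)
    return {w: (prob / p_E if w in E else 0.0) for w, prob in p.items()}

# Suppose all you learn between t and t' is that it will be warm:
E = {"rain_warm", "dry_warm"}
posterior = [conditionalize(p, E) for p in representor]

for p in posterior:
    print(round(p["rain_warm"], 3))  # 0.286 and 0.5: the updated credences still spread
```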
There’s some writing about why this updating procedure is rationally required, but I don’t find it terribly interesting as I’m not tempted to reject conditionalization in either the sharp or the fuzzy case. (5)This article by Pettigrew makes the case for sharp Bayesians to update by conditionalization. I also recommend his interview on the Formal Philosophy Podcast.
Questions #
- Are there cases where it’s intuitively appealing for a fuzzy Bayesian not to conditionalize on new evidence?
- Is it possible to run fair empirical tests of fuzzy Bayesians’ performance compared to sharp Bayesians?
- Do working statisticians sometimes use fuzzy probabilities in their models? If so, how do they do it, and why?
Reading List #
- Mahtani’s overview of imprecise probabilities ✔
- Seamus Bradley’s SEP article ✔
- Richard Bradley, DT with a Human Face chapter 11. ✔
- Schoenfield on “The accuracy and rationality of imprecise credence”
- Augustin’s chapter on “Statistics with Imprecise Probabilities”
- Elga presents a Dutch book against agents with fuzzy credences.
Last updated 5 September 2024