# Non-Bayesian Risk

Distortion risk measures

When can a risk preference be represented by a particular functional form? EU, SEU, CEU, Savage, etc.

A DRM evaluates a single $$X$$ with a single non-additive probability $$\nu$$, which can be viewed as defining a range of measures $$\mathsf{Q}\sim \mathsf{P}$$. But in any given situation there is no uncertainty about which $$\mathsf Q$$ the DM will use (assuming $$X$$ is increasing): it is not a random selection. It is not Bayesiable, in the sense that it obeys a different LLN; see the Marinacci papers below.
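A minimal numerical sketch of a DRM as a Choquet integral, with illustrative choices throughout: an empirical loss sample and the proportional-hazard distortion $$g(u)=\sqrt{u}$$. The concave distortion shifts weight toward the largest losses.

```python
import numpy as np

def distortion_rho(x, g):
    """Distortion risk measure of an empirical loss sample x:
    the Choquet integral of x against nu(A) = g(P(A))."""
    xs = np.sort(x)                       # x_(1) <= ... <= x_(n)
    n = len(xs)
    i = np.arange(1, n + 1)
    # weight on x_(i) is g((n-i+1)/n) - g((n-i)/n); weights sum to g(1) = 1
    w = g((n - i + 1) / n) - g((n - i) / n)
    return float(np.sum(w * xs))

losses = np.array([0.0, 1.0, 2.0, 10.0])
print(distortion_rho(losses, lambda u: u))   # identity distortion -> plain mean, 3.25
print(distortion_rho(losses, np.sqrt))       # concave g loads the tail, > 3.25
```

With the identity distortion the weights are uniform and the measure collapses to the sample mean; any concave $$g$$ yields a loading above the mean.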

Bayes has $$X=X\mid\theta$$, one of a family of RVs depending on an (unknown) parameter $$\theta$$. There is a single $$\mathsf P$$. Sample averages of $$X$$ converge to $$\mathsf{E}[X]$$; sample averages of $$X\mid\theta$$ converge to $$\mathsf{E}[X\mid\theta]$$.
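A simulation sketch of the point, with a made-up setup: $$\theta$$ is a coin bias drawn once from $$\{0.3, 0.7\}$$ with equal prior probability, and $$X\mid\theta\sim\mathrm{Bernoulli}(\theta)$$, so $$\mathsf{E}[X]=0.5$$. The long-run sample average settles on $$\mathsf{E}[X\mid\theta]$$, a random limit, not on the prior mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nature draws theta once; all observations are then X | theta.
theta = rng.choice([0.3, 0.7])
x_given_theta = rng.binomial(1, theta, size=200_000)

# The running average converges to E[X | theta] (0.3 or 0.7),
# not to the prior mean E[X] = 0.5.
print(theta, x_given_theta.mean())
```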

Schmeidler (1986): comonotonic additive functionals are Choquet integrals.
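A quick numerical check of comonotonic additivity, assuming equally likely states and a capacity of the illustrative distortion form $$\nu(A)=g(|A|/n)$$: the Choquet integral is additive over comonotonic vectors but not in general.

```python
import numpy as np

def choquet(f, g):
    """Choquet integral of a vector f over n equally likely states,
    with capacity nu(A) = g(|A|/n) for a distortion g."""
    fs = np.sort(f)
    n = len(fs)
    i = np.arange(1, n + 1)
    w = g((n - i + 1) / n) - g((n - i) / n)
    return float(np.sum(w * fs))

g = np.sqrt
X = np.array([1.0, 2.0, 4.0])
Y = np.array([0.0, 3.0, 5.0])   # comonotonic with X (same state ranking)
Z = np.array([5.0, 3.0, 0.0])   # anticomonotonic with X

# Additivity holds for the comonotonic pair...
print(choquet(X + Y, g), choquet(X, g) + choquet(Y, g))   # equal
# ...but fails for the anticomonotonic pair.
print(choquet(X + Z, g), choquet(X, g) + choquet(Z, g))   # not equal
```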

Gilboa (1987)

Gilboa and Schmeidler (1994): discrete-setting version of the next paper.

Gilboa and Schmeidler (1995): there is a “lifting” of a non-additive probability to a measure on a larger space. Cf. Stone spaces? (See also Shafer (1979).)

Marinacci (1999)

Ghirardato, Klibanoff, and Marinacci (1998)

Ghirardato (2001)

Epstein and Schneider (2003): “IID: Independently and indistinguishably distributed”; includes Marinacci’s LLN result.

Marinacci and Montrucchio (2004)

Maccheroni and Marinacci (2005)

Gilboa, Postlewaite, and Schmeidler (2009): it can be rational to ignore Bayes. Making up probabilities and then acting on them is not rational. The Bayesian recipe:

* Grand state space
* Pick a prior
* Bayesian updating
* Utility for the DM

In CS, Stats, ML: “small” state space (e.g., a single parameter) and no utility. More constrained.

(Book)

(See also Gilboa, Postlewaite, and Schmeidler (2008), Gilboa, Postlewaite, and Schmeidler (2012), and Gilboa (2015).)

Gilboa and Marinacci (2011)

Barillas, Hansen, and Sargent (2009)

Hansen and Sargent (2010)

Epstein and Seo (2015): more Marinacci-like results.

Gilboa (2015):

A mode of behavior is irrational for a decision-maker if, when the latter is exposed to the analysis of her choices, she would have liked to change her decision, or to make different choices in similar future circumstances. Note that this definition is based on a sense of regret. The analysis used for this test should not include new factual information.

We thus refine the notion of rationality as follows: a decision is subjectively rational for a decision-maker if she cannot be convinced that this decision is wrong; a decision is objectively rational for a decision-maker if she can be convinced that this decision is right. For a choice to be subjectively rational, it should be defensible once made; to be objectively rational, it needs to be able to beat other possible choices.

Bayesian updating is generally considered to be the only rational approach to learning once one has a prior. The normative appeal of Bayes’s formula has only rarely been challenged, partly because the formula says very little: it only suggests ignoring that which is known not to be the case. Following the empiricist principle of avoiding direct arguments with facts, Bayesian updating only suggests that probabilities be renormalized to retain the convention that they sum to unity.
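As an illustration with made-up numbers: conditioning on an event just zeroes out the excluded states and renormalizes, which is all Bayes’s formula does here.

```python
import numpy as np

# Hypothetical prior over three states; observing the event {s1, s2}
# (i.e., learning that s0 did not occur) zeroes out the excluded state
# and renormalizes so probabilities again sum to one.
prior = np.array([0.2, 0.3, 0.5])
event = np.array([False, True, True])

posterior = np.where(event, prior, 0.0)
posterior /= posterior.sum()
print(posterior)   # [0.    0.375 0.625]
```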

The notion of a ‘state of the world’ varies across disciplines and applications. In Bayesian statistics, as described above, the state may be the unknown parameter, coupled with the observations of the experiment. Should one ask, where would I get a prior over the state space, the answer might well be, experience. If the entire statistical problem has been encountered in the past, one may have some idea about a reasonable prior belief, or at least a class of distributions that one may select a prior from.

These extensions of the notion of a state of the world are very elegant, and are sometimes necessary to deal with conceptual problems. Moreover, one may always define the state space in a way that each state would provide answers to all relevant questions. But defining a prior over such a state space becomes a very challenging task, especially if one wishes this definition not to be arbitrary. The more informative are the states, the larger is the space, and, at the same time, the less information one has for the formation of a prior. If, for example, one starts with a parameter of a coin, $$p$$, one has to form a prior over the interval $$[0, 1]$$ and one may hope to have observed problems with coins that would provide a hint about the selection of an appropriate prior in the problem at hand. But if these past problems are now part of the description of a state, there are many more states, as each describes an entire sequence of inference problems. The prior should now be defined at the beginning of time, before the first of these problems has been encountered. Worse still, the choice of the prior over this larger space has, by definition, no information to rely on: should any such information exist, it should be incorporated into the model, requiring the ‘true’ prior to be defined on a yet larger space.

Suppose that two individuals, A and B, disagree about the probability of an event, and assign to it probabilities .6 and .4, respectively. Let us now ask individual A, If you’re so certain that the probability is .6, why can’t you convince B of the same estimate? Or, if B holds the estimate .4, and you can’t convince her that she’s wrong, why are you so sure of your .6? That is, we have already agreed that the estimate of .6 can’t be objectively rational, as B isn’t convinced by it. It is still subjectively rational to hold the belief .6, as there is no objective proof that it is wrong. However, A might come to ask herself, do I feel comfortable with an estimate that I cannot justify?

It follows that the Bayesian approach is supported by very elegant axiomatic derivations, but that it forces one to make arbitrary choices. Especially when the states of the world are defined, as is often the case in economic theory, as complete descriptions of history, priors have to be chosen without any compelling justification.

The Bayesian approach is quite successful at representing knowledge, but rather poor when it comes to representing ignorance. When one attempts to say, within the Bayesian language, ‘I do not know’, the model asks, ‘How much do you not know? Do you not know to degree .6 or to degree .7?’ One simply doesn’t have an utterance that means ‘I don’t have the foggiest idea’.

Hansen and Marinacci (2016)

## References

Barillas, Francisco, Lars Peter Hansen, and Thomas J. Sargent. 2009. “Doubts or variability?” Journal of Economic Theory 144 (6): 2388–418. https://doi.org/10.1016/j.jet.2008.11.014.
Epstein, Larry G., and Martin Schneider. 2003. “IID: Independently and indistinguishably distributed.” Journal of Economic Theory 113 (1): 32–50. https://doi.org/10.1016/S0022-0531(03)00121-2.
Epstein, Larry G., and Kyoungwon Seo. 2015. “Exchangeable capacities, parameters and incomplete theories.” Journal of Economic Theory 157: 879–917. https://doi.org/10.1016/j.jet.2015.02.010.
Ghirardato, Paolo. 2001. “Coping with Ignorance: Unforeseen Contingencies and Non-Additive Uncertainty.” Economic Theory 17 (2): 247–76.
Ghirardato, Paolo, Peter Klibanoff, and Massimo Marinacci. 1998. “Additivity with multiple priors.” Journal of Mathematical Economics 30 (4): 405–20. https://doi.org/10.1016/S0304-4068(97)00047-5.
Gilboa, Itzhak, Andrew Postlewaite, and David Schmeidler. 2008. “Probability and uncertainty in economic modeling.” Journal of Economic Perspectives 22 (3): 173–88.
Gilboa, Itzhak, and David Schmeidler. 1994. “Additive representation of non-additive measures and the Choquet integral.” Annals of Operations Research 52 (1): 43–65. https://doi.org/10.1007/BF02032160.
Gilboa, Itzhak. 1987. “Expected utility with purely subjective non-additive probabilities.” Journal of Mathematical Economics 16 (1): 65–88. https://doi.org/10.1016/0304-4068(87)90022-X.
———. 2015. “Rationality and the Bayesian paradigm.” Journal of Economic Methodology 22 (3): 312–34. https://doi.org/10.1080/1350178X.2015.1071505.
Gilboa, Itzhak, and Massimo Marinacci. 2011. “Ambiguity and the Bayesian Paradigm.”
Gilboa, Itzhak, Andrew Postlewaite, and David Schmeidler. 2009. “Is It Always Rational to Satisfy Savage’s Axioms?”
———. 2012. “Rationality of belief or: why Savage’s axioms are neither necessary nor sufficient for rationality.” Synthese 187 (1): 11–31.
Gilboa, Itzhak, and David Schmeidler. 1995. “Canonical Representation of Set Functions.” Mathematics of Operations Research 20 (1): 197–212.
Hansen, Lars Peter, and Massimo Marinacci. 2016. “Ambiguity aversion and model misspecification: An economic perspective.” Statistical Science 31 (4): 511–15. https://doi.org/10.1214/16-STS570.
Hansen, Lars Peter, and Thomas J. Sargent. 2010. “Fragile beliefs and the price of uncertainty.” Quantitative Economics 1 (1): 129–62. https://doi.org/10.3982/QE9.
Maccheroni, Fabio, and Massimo Marinacci. 2005. “A strong law of large numbers for capacities.” Annals of Probability 33 (3): 1171–78. https://doi.org/10.1214/009117904000001062.
Marinacci, Massimo. 1999. “Limit Laws for Non-additive Probabilities and Their Frequentist Interpretation.” Journal of Economic Theory 84 (2): 145–95. https://doi.org/10.1006/jeth.1998.2479.
Marinacci, Massimo, and Luigi Montrucchio. 2004. “A characterization of the core of convex games through Gateaux derivatives.” Journal of Economic Theory 116 (2): 229–48. https://doi.org/10.1016/S0022-0531(03)00258-8.
Schmeidler, David. 1986. “Integral representation without additivity.” Proceedings of the American Mathematical Society 97 (2): 255–61. https://doi.org/10.1090/S0002-9939-1986-0835875-8.
Shafer, Glenn. 1979. “Allocations of Probability.” The Annals of Probability 7 (5): 827–39.

posted 2022-03-07 | tags: pricing, insurance, risk, underwriting