Probability Theory

The different interpretations of probability, such as degree of rational belief (subjective), relative frequency (operational) or propensity (physical or dispositional), often give rise to confusion in deciding which is the most appropriate for a given application. It is the claim of this paper that this confusion underlies some of the claims that probability theory is inadequate for designing automatic decision systems, as well as mistakes due to inappropriate metaphors. In this field strong analogies with human decision making provide a guide to system design, and although a subjective interpretation of probability in human decision making could be justified and interpreted as “degree of belief”, it is difficult to consider this correct for a relatively simple autonomous and mechanistic decision module. It is the purpose of this paper to try to dispel the confusion and come to a conclusion on the correct interpretation or interpretations for applying probability theory.

Probability theory originated with Pascal and his response to a request for mathematical guidance on games of chance. The only practical help that his results should have provided to the Chevalier de Méré and other gamblers since is to avoid gambling: by working out the odds it can be calculated that the expected win is at best zero and usually negative in any realistic scenario. This advice has not been taken, in the first instance because Pascal did not make it explicit and in the second because most of those who gamble do not or cannot understand the results of probability theory. There is the additional reason that probability alone is not enough to reach a decision: decision theory is needed, with the restrictions that are put on utility functions for rational decisions.

Despite the negative result in the application for which it was invented, the theory has been developed to provide a rich body of deep and subtle results. Very useful theorems have also been discovered, giving rise to the applied branch of the theory known as statistics. The tools provided are essential to decision making, the insurance business, epidemiology, clinical trials and the evaluation of scientific experiments. Probability theory has also shown itself to be indispensable to basic physics, providing an essential part of quantum mechanics, our description of the subatomic world, and of statistical mechanics, the theory of how the atomic world gives rise to the macroscopic laws of thermodynamics.

As well as problems of interpretation there is the need to treat the manipulation of the formalism and the statement of the models rigorously. Relatively simple examples show that intuition can be a very poor guide in analysing the consequences of probability models. This rigorous approach has longer term advantages in designing reliable autonomous systems.

Treating probability as the degree of belief held by a system provides an implicit tie to what a system knows. Knowledge is commonly attributed only to intelligent beings, but the constructs of modal logic [1] provide definitions and a formalism to describe an idealised form of knowledge which can be attributed to mechanistic systems [2]. This provides a method of addressing the question of what a system knows without needing to define a probability model. In this paper it is argued that the relationship between knowledge, belief, uncertainty and randomness can be clarified by using a unified theory of probability and modality. This lays the foundation for a quite general theory of decision support by an agent with incomplete information about, and randomness in, its immediate, relevant environment, which is of great practical importance.

A system with the capacity for holding and eventually updating knowledge while acting in the face of uncertainty about how its knowledge of the world relates to the true world will be called an agent. In this paper “system” will be used very generally to cover avionics systems, interacting robots, telecommunications networks, participants in a battle-space engagement and even players in a game of “chicken”.

Given this diverse list of applications it would be surprising if only one theory could apply to them all, and it will be made clear in the next section that there are a number of probability theories. However, this paper will only address the question of what is the appropriate theory of probability for engineering applications and whether there are viable alternatives. Candidate alternatives are the theory of evidence [3], which will be discussed in this paper, and fuzzy set theory [4], which will not be. Dempster-Shafer theory will be shown in section 4.3 to be valid in so far as there is a probability model which provides the same capability, but otherwise it is ad hoc.

Probability theory will be reviewed before combining it with Kripke structures from modal logic. Then, as a prelude to the comparison with the Dempster-Shafer theory of evidence, probability structures will be introduced in section 4; these make use of the mapping from the state space over the proposition space into a truth value, but without introducing knowledge structures or temporal modality. Only once the probabilistic foundations of uncertainty representation have been clarified will a complete unification of temporal epistemic logic and probability theory be presented in section 7.

Probability theories

There are a number of probability theories in existence, some taking relative probability as fundamental [7] and others taking absolute probabilities as fundamental. These two differ in the uninterpreted axiomatic formulation, but even with identical formalisms there can be different theories due to different interpretations of the formalism. The problems dealt with here are in general due to inappropriate interpretation rather than formulation, and although there is much to commend the axiomatisation proposed by Popper, it is that of Kolmogorov which holds sway and is most familiar. Therefore, as the purpose of this paper is to dispel rather than contribute to confusion, the Kolmogorov formalism in its modern form will be used. It is likely that the technical results presented here can also be derived in Popper’s formalism, but this has not been checked. The importance of taking the formal statement of probability theory seriously is often neglected, and constructs such as the σ-algebra are treated as mere pedantry. A simple example will illustrate the use of σ-algebras, in particular to clarify what is meant by a complete probability model or description. To make the arguments clearer the discussion will be restricted to finite discrete sample spaces. This is in itself illustrative because it will be shown that σ-algebras still play a role in bringing clarity to the construction and interpretation of probability models.

To make the discussion precise, it is helpful to state the basic definitions of probability theory.

A probability space (Ω, Σ, μ) consists of a set Ω (called the sample space), a σ-algebra Σ of subsets of Ω (i.e., a set of subsets of Ω containing Ω and closed under complementation and countable union, but not necessarily consisting of all subsets of Ω) whose elements are called measurable sets, and a probability measure μ : Σ → [0, 1] satisfying the following properties:

P1. μ(X) ≥ 0 for all X ∈ Σ.

P2. μ(Ω) = 1.

P3. $\mu\left( \bigcup_{i=1}^{\infty} X_{i} \right) = \sum_{i=1}^{\infty}{\mu(X_{i})}$, if the Xi’s are pairwise disjoint members of Σ.

Property P3 is called countable additivity. The fact that Σ is closed under countable union guarantees that if each Xi ∈ Σ, then so is $\bigcup_{i=1}^{\infty} X_i$.
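For a finite sample space the axioms can be checked mechanically. The following sketch verifies the closure properties of Σ and the axioms P1–P3 (countable additivity reduces to finite additivity in this case); the three-element space and the numerical values of the measure are illustrative assumptions, not taken from the text.

```python
from itertools import combinations

# Hypothetical finite probability space: measurable sets are frozensets
# of outcomes, the measure is a dict from measurable set to value.
omega = frozenset({"H", "F1", "F2"})
sigma = {frozenset(), frozenset({"H"}), frozenset({"F1", "F2"}), omega}
mu = {frozenset(): 0.0,
      frozenset({"H"}): 0.3,
      frozenset({"F1", "F2"}): 0.7,
      omega: 1.0}

def is_probability_space(omega, sigma, mu):
    # Sigma must contain Omega and be closed under complement and union.
    if omega not in sigma:
        return False
    if any(omega - x not in sigma for x in sigma):
        return False
    if any(x | y not in sigma for x, y in combinations(sigma, 2)):
        return False
    # P1: non-negativity; P2: normalisation.
    if any(mu[x] < 0 for x in sigma) or abs(mu[omega] - 1.0) > 1e-12:
        return False
    # P3 (finite form): additivity on disjoint measurable sets.
    for x, y in combinations(sigma, 2):
        if not (x & y) and abs(mu[x | y] - (mu[x] + mu[y])) > 1e-12:
            return False
    return True

print(is_probability_space(omega, sigma, mu))  # True
```

Note that `sigma` deliberately omits the singletons {F1} and {F2}: a σ-algebra need not contain all subsets of Ω, which is the point of example 2.1 below.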

Example 2.1. The basic definitions will be illustrated with a simple example which shows all the constituents of the definition of a probability space. Consider a sample space Ω with three elements H, F1 and F2. The probability measure only distinguishes between H and F. Therefore H and F = F1 ∪ F2 are measurable sets, and closure under complementation and union forms the σ-algebra

Σ = {⌀, H,  F, Ω}

Within this probability model F1 and F2 are technically not measurable, so the question – what is the probability of F1? – cannot be answered because μ(F1) is not defined.

In the probability space of this example, (Ω, Σ, μ), the probability measure μ is not defined on 2Ω (the set of all subsets of Ω), but only on Σ. However, μ can be extended to 2Ω as follows: define functions $\mu_*$ and $\mu^*$, known as the inner measure and outer measure induced by μ. For an arbitrary subset A ⊆ Ω,

$\mu_*(A) = \sup\{\mu(X) \mid X \subseteq A \text{ and } X \in \Sigma\}$

$\mu^*(A) = \inf\{\mu(X) \mid X \supseteq A \text{ and } X \in \Sigma\}$

where sup denotes least upper bound and inf denotes greatest lower bound. If there are only finitely many measurable sets (in particular, if Ω is finite), then the inner measure of A is the measure of the largest measurable set contained in A, while the outer measure of A is the measure of the smallest measurable set containing A.

Returning to example 2.1, $\mu_*$(F1) = 0 and $\mu^*$(F1) = μ(F). These two values provide the minimum and maximum values that the probability measure of F1 could take in a new probability model consistent with the present one. Therefore probability theory has an internal mechanism which deals, as completely as the state of information allows, with subsets of the sample space which are not in the σ-algebra of measurable sets and so have no probability assigned. This provides a great deal of the capability of Dempster-Shafer theory [3], as pointed out by Fagin and Halpern [9]. This will be discussed in section 4.
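The inner and outer measures of the finite case can be sketched directly from the definitions: with finitely many measurable sets the sup and inf become a max and a min over the σ-algebra. The value μ(H) = 0.3 below is an illustrative assumption.

```python
# Finite probability space of example 2.1; mu(H) = 0.3 is assumed.
omega = frozenset({"H", "F1", "F2"})
sigma = {frozenset(), frozenset({"H"}), frozenset({"F1", "F2"}), omega}
mu = {frozenset(): 0.0, frozenset({"H"}): 0.3,
      frozenset({"F1", "F2"}): 0.7, omega: 1.0}

def inner(a):
    # Measure of the largest measurable set contained in A
    # (in the finite case the sup over subsets is attained).
    return max(mu[x] for x in sigma if x <= a)

def outer(a):
    # Measure of the smallest measurable set containing A.
    return min(mu[x] for x in sigma if x >= a)

f1 = frozenset({"F1"})
print(inner(f1), outer(f1))  # 0.0 0.7, i.e. mu_*(F1)=0, mu^*(F1)=mu(F)
```

The pair (0, μ(F)) is exactly the interval of values a refined, consistent model could assign to F1.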

Bayes’ formula

Bayes’ formula is the main tool for updating probabilities in line with information acquired. It will be stated and interpreted within a sensor system context. The formula is

$P\left( B \middle| A \right) = \frac{P\left( A \middle| B \right)P(B)}{P(A)}$

where A and B are measurable sets in the probability space chosen to model the situation, P(B) is the prior probability of B, P(A|B) is the likelihood of A given B and P(B|A) is the posterior probability of B given A. The formula is an expression of Bayes’ theorem, which is a simple consequence of the axioms of probability theory. The terminology just introduced does, however, introduce a notion of before and after which goes beyond the formal result; it is by going beyond the formal structure and introducing a direction of influence that Bayesian theory is created [10]. This can be made explicit by making the updating nature clearer. Consider eqn. (2.3) being used to update the prior probability for B at time z ∈ ℤ+ as

$P\left( B \middle| A \right)\left( z \right) = \frac{P\left( A \middle| B \right)\left( z \right)P(B)(z - 1)}{P(A)(z)}$

P(B|A)(z) → P(B)(z)

In both equations (2.3) and (2.4) the denominator has no more significance than a normalisation factor. However, the other terms do need further explanation. First of all, Bayes’ formula can only be applied if both A and B are measurable, with B assigned non-zero measure. The prior is basically a probability measure modelling the dependence of B on the background information. The likelihood function provides a measure for A being observed given B. The thing to stress is that these are models which may be wrong, even if due care is taken. This is a general property of mathematical models and not particular to probability theory. The sequential update interpretation is additional to the axioms required for the formal derivation of the Bayes formula. The fact that the likelihood function, describing the measurement process, does not change on updating the probabilities, and that it provides independent updates in consecutive steps, are contingent assumptions based on what is desirable in a measurement system. The first assumption would seem quite general and easy to achieve, but the second will depend on the details of refresh rates and other physical processes.
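The sequential update just described can be sketched as follows, assuming a fixed likelihood (sensor model) and independent readings; all numerical values are illustrative assumptions.

```python
# Assumed, time-independent sensor model P(indication | object).
likelihood = {("f", "F"): 0.9, ("f", "H"): 0.2,
              ("h", "F"): 0.1, ("h", "H"): 0.8}

def bayes_update(prior, indication):
    # One step of the sequential update: the posterior P(B|A)(z)
    # becomes the prior P(B)(z) for the next step.
    unnorm = {obj: likelihood[(indication, obj)] * p
              for obj, p in prior.items()}
    z = sum(unnorm.values())  # denominator: a mere normalisation factor
    return {obj: v / z for obj, v in unnorm.items()}

p = {"H": 0.5, "F": 0.5}            # assumed initial prior
for indication in ["f", "f", "h"]:  # independent readings assumed
    p = bayes_update(p, indication)
print(round(p["F"], 3))  # 0.717
```

Note how the likelihood table is never modified by the updates, while the prior is overwritten at every step; this asymmetry is exactly the contingent assumption discussed above.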

In a sensor system the likelihood plays the role of a sensor model in the sense that it relates the observation – in this case A – to the possible object B combined with background information. This is likely to be more in the system designer’s control than is the prior probability, which models the probability of a hypothesis B being true for a given state of background information. Ideally the likelihood function P(A|B) will be independent of time or will have an adaptive auto-calibration mechanism.

Example 2.2. To make the following discussion easier to follow, let example 2.1, discussed above, be made more concrete by letting H represent a helicopter, F1 a fighter with air-to-air missiles and F2 a fighter with air-to-ground weaponry. In addition, consider a ground-based sensor system which can distinguish between helicopters (with indication h) and fighters (with indication f) but not between different fighters. The sensor is imperfect and its performance is specified by the conditional probabilities

P(f|F),  P(f|H),  P(h|F) and P(h|H)

denoting the probability of the sensor giving an indication f or h in the presence of F or H. The situation is illustrated in the Venn diagram of figure 1. Notice that in this example all possible events are covered by the objects and the sensor indicators separately.


Figure 1 A Venn diagram showing how the objects H, F1 and F2 partition the elementary event space and how the sensor output maps on to the state space.

Already it is clear that the sample space, and therefore the σ-algebra and probability measure, cannot be the same as in example 2.1. The elements of the sample space ΩS are now provided by the partition of the space of elementary events by the intersection of object sets and sensor reading sets,

ΩS = {H ∩ h, H ∩ f, F1 ∩ h, F1 ∩ f, F2 ∩ h, F2 ∩ f}

and the $\sigma$-algebra, which still does not allow F1 or F2 to be measurable, is generated by the four atoms H ∩ h, H ∩ f, F ∩ h and F ∩ f.

The new probability measure is defined by its values on these atoms,

using the marginalisation relationships (e.g. P(f) = P(f|H)P(H) + P(f|F)P(F))

Figure 2 An exploded version of the Venn diagram, fig. 1, showing how the objects H, F1 and F2 partition the elementary event space and how the sensor output maps on to the state space.

and the conditional probabilities which characterise the sensor, for example μS(F ∩ f) = P(f|F)P(F).

It is a strong modelling assumption to identify H and F here with H and F of example 2.1, but some choice must always be made, which may require a critical re-evaluation of the system if no plausible model can be constructed. It can be assumed that the identification does hold for the discussion below.
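Assuming the identification holds, the construction of the joint measure on the atoms from the sensor model and the object priors can be sketched as follows; all numerical values are illustrative assumptions, not taken from the text.

```python
# Assumed object priors P(X) and sensor model P(y | X).
p_obj = {"H": 0.3, "F": 0.7}
p_sensor = {("h", "H"): 0.8, ("f", "H"): 0.2,
            ("h", "F"): 0.1, ("f", "F"): 0.9}

# Measure on the atoms X ∩ y, built from conditionals and priors.
mu_s = {(obj, y): p_sensor[(y, obj)] * p_obj[obj]
        for obj in p_obj for y in ("h", "f")}

# Marginalisation recovers the sensor-indication probability P(f).
p_f = mu_s[("H", "f")] + mu_s[("F", "f")]
print(round(sum(mu_s.values()), 3), round(p_f, 3))  # 1.0 0.69
```

The atoms carry the complete probability assignment: every measurable set of the new space is a union of them, so its measure follows by additivity.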

The complete new probability assignment can then be obtained, and Bayes’ formula, discussed above, can be used to obtain a target classification, for example

$P\left( F \middle| f \right) = \frac{P\left( f \middle| F \right)P(F)}{P(f)}$

or equivalently

$P\left( F \middle| f \right) = \frac{P\left( f \middle| F \right)P(F)}{P\left( f \middle| F \right)P(F) + P\left( f \middle| H \right)P(H)}$
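The classification step can be sketched numerically; the priors and sensor conditional probabilities below are illustrative assumptions.

```python
# Assumed priors and sensor model for the classification P(F | f).
p_prior = {"H": 0.3, "F": 0.7}
p_f_given = {"H": 0.2, "F": 0.9}   # P(f | object)

# Marginal P(f), the normalisation factor in Bayes' formula.
p_f = sum(p_f_given[o] * p_prior[o] for o in p_prior)

p_F_given_f = p_f_given["F"] * p_prior["F"] / p_f
print(round(p_F_given_f, 3))  # 0.913
```

An f indication thus raises the probability of a fighter from the assumed prior 0.7 to roughly 0.91.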

The inner and outer probabilities can be calculated as above; for F1 ∩ f,

$\mu_*$(F1 ∩ f) = 0 and $\mu^*$(F1 ∩ f) = μS(F ∩ f),

and similarly for F1 ∩ h. This sensor response cannot statistically distinguish between F1 and F2, therefore

P(f|F1) = P(f|F2) = P(f|F)

and similarly for other combinations of f, h with F1, F2. This means that in the application of the Bayes formula only the inner and outer measures play a role in inducing upper and lower posterior probabilities.

If bounds are wanted on the effect of sensor modifications leading to distinguishing between F1 and F2 with the sensor, then the inner and outer measures must be used appropriately to induce the upper and lower posterior probabilities. This would only apply to sensor modifications which conserved the numerical values in probability statements about F = F1 ∪ F2.

Most physical – and even software – sensor modifications will not have this conservative property, which means that the probability model (not probability theory) must be revised.
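Because F1 is not measurable, only bounds on a posterior such as P(F1|f) can be induced, as described above: the inner measure gives the lower bound 0 and the outer measure gives P(F|f) as the upper bound. A minimal sketch, with assumed numbers:

```python
# Assumed priors and sensor model, as in the classification sketch.
p_prior = {"H": 0.3, "F": 0.7}
p_f_given = {"H": 0.2, "F": 0.9}   # P(f | object)

p_f = sum(p_f_given[o] * p_prior[o] for o in p_prior)
p_F_f = p_f_given["F"] * p_prior["F"] / p_f   # posterior P(F | f)

# Induced bounds on P(F1 | f): inner measure 0, outer measure P(F | f).
lower, upper = 0.0, p_F_f
print(round(lower, 3), round(upper, 3))  # 0.0 0.913
```

Any consistent refinement of the model that made F1 measurable would have to place its posterior inside this interval, which is the sense in which probability theory already provides much of the Dempster-Shafer capability.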

A number of points illustrated by this example are worth listing at this stage:

• Although the absolute probabilities are fundamental in the formal theory of probability, the empirical assignments of probability will often have to be derived from conditional probabilities characterising sensor performance or other statistical dependencies.
• The relationship between the absolute probabilities and those obtained via the conditional probabilities is not a logical necessity but the result of the choice to relate the models in this way.
• Although an application has been used as an illustration, it has been treated in a purely formal manner. No interpretation of the probability space has been used.