Subjective probability and geometry: three metric theorems concerning random quantities

Affine properties are more general than metric ones because they are independent of the choice of a coordinate system. Nevertheless, a metric, that is to say, a scalar product which takes each pair of vectors and returns a real number, is meaningful when n vectors, all unit vectors orthogonal to each other, constitute a basis for the n-dimensional vector space A. In such a space n events E_i, i = 1, ..., n, whose Cartesian coordinates turn out to be x^i, are represented in a linear form. A metric is also meaningful when we transfer onto a straight line the n-dimensional structure of A into which the constituents of the partition determined by E_1, ..., E_n are visualized. The dot product of two vectors of the n-dimensional real space R^n is invariant: of these two vectors, the former represents the possible values for a given random quantity, while the latter represents the corresponding probabilities which are assigned to them in a subjective fashion. We deduce these original results, which are the foundation of our next and extensive study concerning the formulation of a geometric, well-organized and original theory of random quantities, from pioneering works which deal with a specific geometric interpretation of the concept of probability, unlike most current works, which keep the real and deep meaning of the notion of probability hidden because they consider it a success to give a uniquely determined answer to a problem even when it is indeterminate. Therefore, it is inevitable that our references limit themselves to these pioneering works.


1. Introduction
An event is conceptually a mental separation between subjective sensations. It coincides with a statement or proposition such that, by betting on it, we can establish in an unmistakable fashion whether it is true (= 1) or false (= 0), that is to say, whether it has occurred or not and so whether the bet has been won or lost (Good, 1962), (Jeffreys, 1961), (Koopman, 1940), (Kyburg jr. & Smokler, 1964). Therefore, an event is always a particular random quantity having only two possible values: 1 and 0. We evidently give an arithmetic or linear interpretation of events. For instance, the arithmetic sum of n events coincides with the random number of successes given by Y = E_1 + ... + E_n. Hence, the arithmetic operations are applicable to events as well. If A and B are events, the negation of A is Ā = 1 − A and such an event is true if A is false, while it is false if A is true; the negation of B is similarly B̄ = 1 − B. The logical product of A and B is A ∧ B = AB and such an event is true if both A and B are true, otherwise it is false; the logical sum of A and B is A ∨ B = 1 − (Ā ∧ B̄) = 1 − (1 − A)(1 − B), from which it follows that such an event is true if at least one of the two events is true; we have A ∨ B = A + B when A and B are incompatible events, because it is then impossible for them both to occur. The same evidently holds when we consider the logical product and the logical sum of more than two events. We can extend the logical or Boolean operations to the field of real numbers by making the following definitions: x ∧ y = min(x, y), x ∨ y = max(x, y), x̄ = 1 − x. Such definitions agree with those known in the field of the idempotent numbers 0 and 1. By applying both the logical and the arithmetic operations to events as well as to numbers we obtain a complete unification of these two distinct series of operations.
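The arithmetic interpretation of the logical operations described above can be sketched in a few lines of code. This is an illustrative sketch, not part of the paper: the function names are our own, and the values are restricted to the idempotent numbers 0 and 1.

```python
def negation(a):
    # "not A" = 1 - A
    return 1 - a

def logical_product(a, b):
    # A and B = AB, which equals min(a, b) on {0, 1}
    return a * b

def logical_sum(a, b):
    # A or B = 1 - (1 - A)(1 - B), which equals max(a, b) on {0, 1}
    return 1 - (1 - a) * (1 - b)

# The extended definitions agree with min/max/1-x on the idempotent numbers.
for a in (0, 1):
    for b in (0, 1):
        assert logical_product(a, b) == min(a, b)
        assert logical_sum(a, b) == max(a, b)
        assert negation(a) == 1 - a

# For incompatible events (AB = 0), the logical sum reduces to the arithmetic sum.
a, b = 1, 0
assert logical_sum(a, b) == a + b
```

The assertions confirm that the Boolean operations and the arithmetic ones coincide on {0, 1}, which is the unification the text refers to.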
Thus, it is not true that the logical operations are applicable only to events and the arithmetic ones only to numbers. An event could also be void: if E and H are events, then E|H is the trievent which is void if H is false, while if H is true, then E|H is true or false according to whether E is respectively true or false (de Finetti, 1981), (de Finetti, 1982). For any individual who does not know with certainty the true value of a quantity X, which is then random for him in a non-redundant usage of the term, there are two or more possible values for X. The set of these values is denoted by I(X) = {x_1, ..., x_n}, with x_1 < ... < x_n. In any case, the true value of each random quantity is only one, and the meaning to be given to "random" is that of unknown by the individual whose state of uncertainty is considered. Thus, random does not mean undetermined: it means established in an unequivocal fashion, so that a supposed bet based upon it would unmistakably be decided at the appropriate time. When one wonders whether infinitely many events of a set are all true, or which is the true event among an infinite number of events, one can never verify whether such statements are true or false. Indeed, these statements are infinite in number, so they do not coincide with any mental separation between subjective sensations. Therefore, they are conceptually meaningless. We can now understand why it is not a logical restriction to define a random quantity as a finite partition of incompatible and exhaustive events, so that one and only one of the possible values for X belonging to the set I(X) is necessarily true. The same symbol P evidently denotes both the prevision of a random quantity and the probability of an event (de Finetti, 1931a), (de Finetti, 1931b), (de Finetti, 1976). Given an evaluation of probability p_i referring to x_i, i = 1, ..., n, a prevision of X turns out to be P(X) = x_1 p_1 + ... + x_n p_n, where we have 0 ≤ p_i ≤ 1, i = 1, ..., n, and p_1 + ... + p_n = 1: it is rendered as a function of the probabilities p_i of the possible values for X. It is usually called the mathematical expectation of X or its mean value. It is a barycenter of the possible values for X. It is certainly possible to extend this result by using more advanced mathematical tools such as Stieltjes integrals. Nevertheless, such an extension adds nothing from a conceptual point of view and for this reason we do not consider it. Nothing is added from an operational point of view either: P(E) = p is the certain gain p subjectively considered equivalent to a unit gain conditional on the occurrence of E, while P(X) is the certain gain, or price, of X subjectively considered equivalent to X understood as a random gain. Thus, P is operationally a fair price expressed in terms of gain. An individual correctly makes a prevision of a random quantity X when he leaves the domain into which he recognizes a more or less extensive class of alternatives which appear possible to him in the current state of his information in order to distribute his sensations of probability among them in the way which appears most appropriate to him (de Finetti, 1930a), (de Finetti, 1930b), (Ramsey, 1960), (Savage, 1954). Such a class of alternatives is given by I(X) = {x_1, ..., x_n}. Therefore, probability is an additional notion: it comes into play after the range of all possible alternatives has been constituted, and the logic of the probable fills in this range by considering a probabilistic mass distributed upon it.
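The prevision P(X) = x_1 p_1 + ... + x_n p_n, with the coherence constraints 0 ≤ p_i ≤ 1 and p_1 + ... + p_n = 1, can be sketched directly. The die values and the uniform probabilities below are our own illustrative choice, not an example taken from the paper.

```python
def prevision(values, probs):
    # Prevision as a barycenter of the possible values, weighted by the
    # subjective probabilities assigned to them.
    assert len(values) == len(probs)
    assert all(0 <= p <= 1 for p in probs)
    assert abs(sum(probs) - 1) < 1e-12   # coherence: the p_i must sum to 1
    return sum(x * p for x, p in zip(values, probs))

# Illustrative example: a six-sided die with uniform subjective probabilities.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
print(prevision(values, probs))  # ≈ 3.5, the barycenter of {1, ..., 6}
```

The prevision is literally the scalar product of the vector of possible values and the vector of probabilities, which is the geometric reading developed in the later sections.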
Probability calculus is based on only one restriction, according to which it would be incoherent not to think that the probability of the logical sum of two incompatible events increases when the probabilities of these two events increase. Putting it differently, with A and B two incompatible events, since we have to consider A ∨ B = A + B, after evaluating both A and B in a coherent way, the same individual who evaluates the event-sum A ∨ B in such a way as to obtain P(A ∨ B) ≠ P(A) + P(B) is not coherent. We coherently have both 0 ≤ P(A) ≤ 1 and 0 ≤ P(B) ≤ 1 (de Finetti, 1972), (Coletti & Scozzafava, 2002).
2. Vector spaces and spaces of alternatives

When we consider one random quantity X, each of its possible values, for a given individual at a certain moment, is a real number in the space S of alternatives coinciding with a line on which an origin, a unit of length and an orientation are chosen. Every point of the line is assumed to correspond to a real number and every real number to a point: the real line is a vector space of dimension 1 over the field R of real numbers, that is to say, over itself. When we similarly consider two random quantities, X and Y, a Cartesian coordinate plane is the space S of alternatives: possible pairs (x, y) are the Cartesian coordinates of a possible point of this plane. Every point of a Cartesian coordinate plane is assumed to correspond to an ordered pair of real numbers and vice versa: R^2 is a vector space of dimension 2 over the field R of real numbers and it is called the two-dimensional real space. When we consider three random quantities, X, Y and Z, the three-dimensional real space R^3 corresponds to the set S of alternatives and possible triples (x, y, z) are the Cartesian coordinates of a possible point of this linear space. There is a bijection between the points of the vector space R^3 over the field R of real numbers and the ordered triples of real numbers. More generally, in the case of n random quantities, where n is an integer > 3, one can think of the Cartesian coordinates of the n-dimensional real space R^n. There is a bijection between the points of the vector space R^n over the field R and the ordered n-tuples of real numbers (Pompilj, 1949), (Pompilj, 1956). It is not necessary to refer to a Cartesian coordinate system, because one could theoretically think of any coordinate system, nor is it necessary to think in terms of a geometric space, because it could be enough to think in terms of a space in a merely abstract sense.
Nevertheless, it is always essential that different pairs of real numbers are made to correspond to distinct points, different triples of real numbers to different points and, more generally, distinct n-tuples of real numbers to distinct points. Sometimes it is useful to the theory of probability that the space S of alternatives does not coincide with the vector space or linear space A called the linear ambit: this linear space is an affine space over itself and it could be, for example, the three-dimensional vector space of geometric vectors represented by line segments connecting an initial point O of the three-dimensional affine space with any terminal point of the same affine space. Each line segment has a length and a direction and it can graphically be represented by an arrow. Given an orthonormal basis for such a vector space, it is possible to represent, for example, three random quantities, X, Y and Z, which are related by the equation X^2 + Y^2 + Z^2 = R^2. This equation represents a spherical surface with center (0, 0, 0) and radius R. Then, the space S of alternatives consists of the spherical surface on which the possible points of the set Q are placed: the possible points of Q may consist of all the points of this surface, or a part of it, or just a few points, according to the knowledge of a given individual at a certain moment and to possible restrictions and circumstances. The three-dimensional vector space A is isomorphic to R^3 and it has a Euclidean structure by virtue of the introduction of the standard scalar product or dot product on R^3, so we denote it by A = E^3. Such a structure manifests itself by means of the following definitions, which we attribute to R^n in order to avoid a loss of generality.

Definition 1. Let R^n be the n-dimensional real space over the field R of real numbers and let B_n = {e_1, ..., e_n} be the standard basis for R^n.
Then, for every vector x ∈ R^n, there is a unique linear combination of the basis vectors given by x = x_1 e_1 + ... + x_n e_n, the coordinate vector of x relative to B_n being the sequence of coordinates (x_1, ..., x_n). The scalars x_1, ..., x_n are the scalar components of the vector x.
Definition 2. The standard scalar product on R^n of two of its vectors, x and y, is the sum of the products of their corresponding scalar components, given by x · y = x_1 y_1 + ... + x_n y_n, whose result is always a real number, where we have x = x_1 e_1 + ... + x_n e_n and y = y_1 e_1 + ... + y_n e_n, with B_n = {e_1, ..., e_n} the standard basis for R^n.
Definition 3. The norm or length of the vector x ∈ R^n is given by ‖x‖ = √(x · x), for all x in R^n whose scalar components are x_1, ..., x_n. Each vector in R^n has a strictly positive length, except the zero vector, which has length zero.
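Definitions 2 and 3 can be sketched in a few lines. This is our own illustrative code, not part of the paper; the vectors chosen below are arbitrary.

```python
import math

def dot(x, y):
    # Standard scalar product on R^n: x . y = x_1 y_1 + ... + x_n y_n
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    # Norm of Definition 3: ||x|| = sqrt(x . x)
    return math.sqrt(dot(x, x))

x = [3.0, 4.0]
print(dot(x, x))         # 25.0
print(norm(x))           # 5.0
print(norm([0.0, 0.0]))  # 0.0: only the zero vector has length zero
```

As the text notes next, these two definitions suffice to recover distances and angles, i.e. the whole metric structure of Euclidean geometry.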
Therefore, the natural way to obtain the quantities characterizing Euclidean geometry, that is to say, distances between points and angles between lines or vectors, is by using the standard scalar product on R^n defined above. It coincides with the standard bilinear form on the vector space R^n, represented by the function B : R^n × R^n → R that is linear in each argument separately: such a function is symmetric, because we have B(x, y) = B(y, x) for all x, y in R^n, and positive-definite, because we have B(x, x) > 0 for every non-zero x and B(x, x) = 0 if and only if x is the zero vector. Moreover, with respect to the standard basis for R^n, the matrix representation of the standard bilinear form on R^n is the identity matrix of size n. Anyway, affine properties are more general than metric ones because they are independent of the choice of a coordinate system. They underlie essential notions of the theory of probability, in which the exclusive role of linearity is rarely underlined because of the prominence given to the logical operations and because of the non-immediacy of the arithmetic operations on events when such random entities are not understood as particular random quantities. It follows that one can understand why an axiomatic system based on affine properties has a fundamental importance not only in geometry but in mathematics as a whole and, therefore, in the theory of probability too. Indeed, one defines the concept of vector space by means of such a system. Nevertheless, a metric, that is to say, a scalar product, is meaningful when n vectors, which are all vectors of length 1 and orthogonal to each other, are a basis for the n-dimensional vector space A. In such a space n events E_i, i = 1, ..., n, whose Cartesian coordinates turn out to be x^i, with x^i = 1 or x^i = 0, are represented in a linear form and their linear combinations constitute the linear system L which is dual to A: we denote with upper indices the Cartesian coordinates of A because A and L are superposed. Such coordinates have to be understood as contravariant components of vectors of the vector space A, which is superposed onto its dual vector space L.

3. Linearization of random quantities
Let B⊥_n = {e_1, ..., e_n} be an orthonormal basis for the vector space E^n over the field R of real numbers. For every x ∈ E^n, we have x = x^i e_i, i = 1, ..., n, with {x^i} the set of the contravariant components of x according to the Einstein notation. Given E^n, its dual vector space is denoted by E^n* and it consists of all linear functionals on E^n: from B⊥_n its dual basis is given by the finite set {Φ^j}, j = 1, ..., n, and this dual basis is uniquely defined by the system of equations Φ^j(e_i) = δ^j_i, where δ^j_i is the Kronecker delta. When we consider a linear function F whose inputs are vectors of E^n and whose outputs are real numbers we have, for every x ∈ E^n, F(x) = F(x^j e_j) = x^j F(e_j) = F(e_j) Φ^j(x): it follows that F and F(e_j) Φ^j are the same linear function. Since we have F(e_j) = u_j, where u_j is an arbitrary real number, we can write F(e_j) Φ^j = u_j Φ^j, with {u_j} the set of the covariant components of each element of E^n*. Each element of E^n* can be expressed by means of the set {Φ^j}, j = 1, ..., n, in a unique fashion as a linear combination of this set, whose elements are linearly independent. The dual vector space E^n* has the same dimension as E^n, so it turns out that dim(E^n) = dim(E^n*) = n (de Finetti, 1970), (Pompilj, 1952), (Pompilj, 1956), (Pompilj, 1984). Having said that, we consider the following

Theorem 1. Let B⊥_n = {e_1, ..., e_n} be an orthonormal basis for A = E^n and let E_1, ..., E_n be possible events of the finite set {E_1, ..., E_n} of events whose constituents are linearly represented in E^n having x^1, ..., x^n as a coordinate system. Then, the linear combinations of E_1, ..., E_n constitute the linear system L which is dual to E^n: this system L consists of the random quantities X = u_1 E_1 + ... + u_n E_n whose possible values are given by the scalar products X = u_i x^i, i = 1, ..., n.
Proof. From n possible events E_i, i = 1, ..., n, with (1 − E_i) the negation of E_i, we obtain a finite partition, that is to say, a family of s ≤ 2^n incompatible and exhaustive events for which it is certain that one and only one event occurs. Such events are called the constituents C_1, ..., C_s of the partition determined by E_1, ..., E_n, or elementary cases, or atoms, so we have C_1 ∨ ... ∨ C_s = C_1 + ... + C_s = 1; they are obtained from the 2^n logical products E*_1 ∧ ... ∧ E*_n, where each E*_i is E_i or its negation (1 − E_i). It is possible to represent these constituents in E^n by means of the basis vectors: x = x^i e_i ∈ S, where S ⊂ E^n, is the vector having x^i = 0 or x^i = 1, i = 1, ..., n, as its contravariant components, and this means that if x^i = 0 then E_i does not occur, because its negation (1 − E_i) occurs, while if x^i = 1 then E_i occurs. In A = E^n the s constituents are the possible points Q of the set Q, with their coordinates represented by the n-tuples (x^1, ..., x^n) for which we have x^i = 0 or x^i = 1, i = 1, ..., n: if s = 2^n, then such coordinates are the ones of all the vertices of a unit hypercube which coincides with the space S of alternatives. Given a basis B⊥_n for A = E^n, its dual basis is {Φ^j}, j = 1, ..., n, so F(x) = u_j Φ^j(x) = u_j x^j, with j = 1, ..., n, is a real number. It is the scalar product of two vectors or points and, after choosing the covariant real numbers u_j, j = 1, ..., n, a possible value F(x) for the random quantity under consideration corresponds to the vector x ∈ S. Any random quantity has at most as many different possible values as there are constituents, and this occurs if such values are found on distinct hyperplanes u_i x^i = constant, i = 1, ..., n, whose linear equations constitute a linear system which has a unique solution given by the numbers u_j, j = 1, ..., n, of the n-tuple (u_1, ..., u_n).
When the real numbers u_j, j = 1, ..., n, vary, we obtain the possible values of the random quantities X = u_1 E_1 + ... + u_n E_n ∈ L: such random quantities can be understood as the gain (or the loss) of someone who receives (or pays) an amount u_1 if E_1 is true, ..., plus an amount u_n if E_n is true.
In A = E^3 the space S of alternatives is a unit cube having edges of length 1 and vertices represented by the ordered triples (x^1, x^2, x^3) for which we have x^i = 0 or x^i = 1, i = 1, 2, 3. The linear combinations of the events E_i, i = 1, 2, 3, constitute the linear system L which is dual to A, so the two vector spaces A and L have the same dimension. The vertices of the unit cube constitute the set Q of the possible points of A. The sum u_1 x^1 + u_2 x^2 + u_3 x^3 shows that each random quantity is the scalar product of two vectors or points belonging to two spaces, A and L, which are superposed. The components (or coordinates) of the first vector or point, which is an element of A, are given by the ordered triple (x^1, x^2, x^3); the components (or coordinates) of the second vector or point, which is an element of L, are given by the ordered triple (u_1, u_2, u_3). Any random quantity has at most as many distinct possible values as there are constituents (s = 2^3 = 8 if all the vertices of the cube are possible, or s < 8 if not), and they are found on different planes whose equations are given by u_1 x^1 + u_2 x^2 + u_3 x^3 = constant, with x^i the Cartesian coordinates of A and u_i the coordinates of the dual system L. When we take all the u_i = 1, we consider Y = E_1 + E_2 + E_3: it is the random number of successes. If all the vertices of the cube are possible, then the possible values for Y are distributed over 3 + 1 = 4 planes, so we have x^1 + x^2 + x^3 = constant = 0, 1, 2, 3 according to the binomial coefficients (3 choose 0) = 1, (3 choose 1) = 3, (3 choose 2) = 3, (3 choose 3) = 1, and this means that there is only one possible way of obtaining 0 successes in 3 events and 3 successes in 3 events, while there are three possible ways of obtaining 1 success in 3 events and 2 successes in 3 events. In this case, the possible values for Y are not all distinct.
If the 8 vertices of the cube are projected onto a diagonal, then two of them, (0, 0, 0) and (1, 1, 1), fall at the ends of the diagonal, three of them, (1, 0, 0), (0, 1, 0), (0, 0, 1), fall at 1/3 of the way along the diagonal, while the remaining vertices, (1, 1, 0), (0, 1, 1), (1, 0, 1), fall at 2/3 of it. Therefore, the number of successes is 0 in one case, 3 in one case, 1 in three cases and 2 in three cases. For instance, it is useful to consider a simple experiment which consists in the throw of a die having six faces, each showing a different number from 1 to 6. If a finite set of events is given by E_1 = {2, 4, 6}, E_2 = {2, 3, 4, 5, 6}, E_3 = {1, 2, 3}, then the possible constituents are not 2^3 = 8 but 5, and they constitute a partition of the certain event {1, 2, 3, 4, 5, 6}. If we observe that the face of the die that is uppermost when it comes to rest is 2, then E_1, E_2, E_3 are definitively true events, while their negations, Ē_1 = {1, 3, 5}, ..., Ē_3 = {4, 5, 6}, are definitively false events, so it turns out that E_1 = E_2 = E_3 = 1 and Ē_1 = Ē_2 = Ē_3 = 0. We can clearly know the Cartesian coordinates of all the constituents, which are points of A = E^3. The arithmetic product of the three events in each logical product is always 0, because its factors can only be 0 and 1, except in the elementary case E_1 ∧ E_2 ∧ E_3 = {2}, where its factors are all 1. Therefore, given an ordered triple (u_1, u_2, u_3) of real numbers, one and only one of the possible values of the random quantity expressed as a linear combination of the events E_i, i = 1, 2, 3, is true. The conclusions are conceptually the same when the face of the above die that is uppermost when it comes to rest is not 2 but another number.
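The die example can be checked by brute force: enumerate the 2^3 logical products E*_1 ∧ E*_2 ∧ E*_3 and keep the non-empty ones. This is our own verification sketch, with the events taken from the example above.

```python
from itertools import product

# Outcomes of the die and the three events of the example, given as subsets.
outcomes = set(range(1, 7))
events = [{2, 4, 6}, {2, 3, 4, 5, 6}, {1, 2, 3}]

constituents = []
for signs in product([1, 0], repeat=len(events)):
    # Intersect E_i (sign 1) or its complement (sign 0) over all i.
    cell = set(outcomes)
    for e, s in zip(events, signs):
        cell &= e if s == 1 else (outcomes - e)
    if cell:  # keep only the possible (non-empty) constituents
        constituents.append((signs, sorted(cell)))

print(len(constituents))  # 5, not 2**3 = 8: three logical products are empty
# The constituents partition the certain event {1, ..., 6}.
union = set().union(*(set(c) for _, c in constituents))
print(union == outcomes)  # True
```

Each surviving `signs` triple is exactly one vertex (x^1, x^2, x^3) of the unit cube that is a possible point of Q for this experiment.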

4. Affine combinations of possible values of a random quantity
The concept of logical dependence of one random quantity on another or others has exactly the meaning that it has in mathematical analysis when one considers a single-valued function. For example, if we have X_2 = f(X_1), where f is any rule with the property that each input of the set of inputs is related to exactly one output of the set of permissible outputs, then X_2 is logically dependent on X_1. The function x_2 = f(x_1) is defined for all the possible values for X_1 in the sense that each possible value x_1 for X_1 maps to one and only one possible value x_2 for X_2. Therefore, given any rule f, the set of possible values for X_2 depends on the knowledge of the set of possible values for X_1. More generally, we write X = f(X_1, ..., X_n) when we have a logical dependence of the random quantity X on n random quantities X_i, i = 1, ..., n. Conversely, if the set of possible values for X does not depend on the knowledge of the set of possible values for another or other random quantities, then X is logically independent of them. We say that X linearly depends on the random quantities X_1, ..., X_n when we have X = f(X_1, ..., X_n) and f expresses a linear combination of the random quantities X_1, ..., X_n: linear dependence is a more restrictive condition than logical dependence because it is a special case of logical dependence. On the other hand, logical independence is a more restrictive condition than linear independence for the same reason.
Logical dependence and logical independence, as well as linear dependence and linear independence, have an objective meaning because these notions are independent of the evaluation of the probabilities of events. So, by taking into account the fact that every event can definitively be true or false, we consider the following

Theorem 2. Let X be a random quantity which is a linear combination of n random quantities, with n a positive integer, linearly represented in E^n, where B⊥_n = {e_1, ..., e_n} is an orthonormal basis for E^n, and let I(X) be the finite set of possible values for X. Then, X can always be represented on a line on which an origin, a unit of length and an orientation are chosen, and each possible value for X is expressible as an affine combination of two possible and distinct values for it.
Proof. Since n events E_i, i = 1, ..., n, are n random quantities, the random quantity X is a linear combination of n events linearly represented in A = E^n by means of the binary n-tuples (x^1, ..., x^n), with x^i = 0 or x^i = 1, i = 1, ..., n. The constituents are visualized in E^n and each vector x = x^i e_i ∈ S, where S ⊂ E^n, corresponds to a possible value F(x) for X, with x a possible constituent and x^i = 0 or x^i = 1, i = 1, ..., n, its contravariant components. The possible values for X are not always all distinct. We already saw that F(x) = u_j Φ^j(x) = u_j x^j, j = 1, ..., n, and if s is the number of the distinct values of F(x), x ∈ S, then we can write F(x) = b_r, r = 1, ..., s: each b_r, r = 1, ..., s, is a real number which represents a hyperplane characterized by the same gain, in the sense that it is a set of points of the n-dimensional linear space on which X has the same value. We have I(X) = {b_1, ..., b_s}. There is a one-to-one correspondence between the vectors of E^n and the points of the space E^n of points, such points being the terminal points of geometric vectors whose initial points coincide with the origin of a Cartesian coordinate system. Therefore, given the vector u = u_j e_j ∈ E^n, j = 1, ..., n, it is related to the line ρ_0 ⊂ E^n which is defined by the infinite set of points {λu | λ ∈ R}. In particular, such a line always passes through the origin of E^n, which is given by the n-tuple (0, ..., 0). When b_r, r = 1, ..., s, varies, the equation F(x) = b_r, x ∈ E^n, characterizes a sheaf of hyperplanes which are all parallel, with ρ_0 orthogonal to all the hyperplanes defined by the equation F(x) = b_r, x ∈ E^n: to fix ideas, F(x) = b_r, x ∈ E^3, is the equation of a sheaf of planes which are all parallel, and the contravariant components of u ∈ E^3 may coincide with the covariant real numbers of F(x) = u_j Φ^j(x), j = 1, 2, 3. If x′ = λ′u + x′_0 and x″ = λ″u + x″_0 are two vectors of E^n, with their terminal points belonging to the hyperplanes having b′_r and b″_r as numerical values, we have respectively F(x′) = b′_r and F(x″) = b″_r: the components of x′ and x″ parallel to the vector u ∈ E^n are λ′u and λ″u, while x′_0 and x″_0 are their components orthogonal to u. We evidently have F(x′) = u_j Φ^j(x′) = λ′‖u‖² = b′_r and F(x″) = u_j Φ^j(x″) = λ″‖u‖² = b″_r, and this means that the orthogonal component of the vectors of E^n is insignificant as regards the line ρ_0 ⊂ E^n. Therefore, all the points of a same hyperplane F(x) = b_r, x ∈ E^n, correspond to the single point which represents the intersection of the line and the hyperplane, given by λ = b_r/‖u‖². If we consider the expressions x_1 = λu, x′_1 = λ′u and x″_1 = λ″u, for which we have F(x_1) = b_r, F(x′_1) = b′_r and F(x″_1) = b″_r, it turns out that λu = t λ′u + (1 − t) λ″u, with 0 ≤ t ≤ 1, that is to say, each point of the line ρ_0 ⊂ E^n which represents a possible value for X can be expressed as a non-negative affine combination of two points found on the same line. Such points represent two possible and distinct values for X. The above expression can be written (b_r/‖u‖²)u = t (b′_r/‖u‖²)u + (1 − t)(b″_r/‖u‖²)u, from which it follows that we have to consider b_r = t b′_r + (1 − t) b″_r, so it turns out that t = (b_r − b″_r)/(b′_r − b″_r).
Therefore, we can transfer onto a line the n-dimensional structure of E^n, because there is a one-to-one correspondence between the hyperplanes of a sheaf of parallel hyperplanes and the points of intersection of the line ρ_0 ⊂ E^n with those hyperplanes: this allows us to introduce in both n-dimensional sets, E^n consisting of vectors and E^n consisting of points, one-dimensional and isomorphic linear structures. Such structures are obviously one-dimensional and isomorphic vector spaces. On the other hand, there is a one-to-one correspondence between the points of a line on which an origin, a unit of length and an orientation are chosen and the possible values for X, whose number is finite. Hence, given the points A and B on a line, every point P on the same line is expressed as an affine combination of them given by P = t A + (1 − t) B, with t ∈ R. Two possible values for X on a line are logically independent, while three possible values on the same line are logically and linearly dependent (Pompilj, 1956).
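The affine weight t = (b_r − b″_r)/(b′_r − b″_r) obtained in the proof can be checked numerically. The three values below are our own illustration, not from the paper: any value lying between two distinct values is recovered exactly.

```python
def affine_weight(b, b1, b2):
    # Weight t such that b = t*b1 + (1 - t)*b2, for distinct values b1, b2.
    return (b - b2) / (b1 - b2)

# Illustrative values: express b = 4 as an affine combination of 10 and 2.
b, b1, b2 = 4.0, 10.0, 2.0
t = affine_weight(b, b1, b2)
print(t)                      # 0.25
print(t * b1 + (1 - t) * b2)  # 4.0, recovering b exactly
```

Since b lies between b2 and b1, the weight t falls in [0, 1], which matches the non-negative affine combination of the theorem.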
5. Non-intrinsic value and intrinsic value of a metric

Let B⊥_n = {e_1, ..., e_n} be an orthonormal basis for A = E^n and let {Φ^j}, j = 1, ..., n, be the dual basis of B⊥_n, with {Φ^j} a finite set of linear functionals on A = E^n together with a naturally induced linear structure. Therefore, A = E^n has a corresponding dual vector space that we denote by L. Given the random quantities X = u_1 E_1 + ... + u_n E_n, with {E_1, ..., E_n} a finite family of incompatible and exhaustive events, their previsions are given by P(X) = u_1 P(E_1) + ... + u_n P(E_n), with the contravariant components of p = p^i e_i ∈ E^n, i = 1, ..., n, which are P(E_1) = p^1, ..., P(E_n) = p^n. Having said that, it turns out that p^1 + ... + p^n = 1 in a coherent fashion. The linear function F(p) = u_j Φ^j(p) = u_j p^j, j = 1, ..., n, is defined for p in E^n and it is the scalar product of two vectors or points of two dual spaces, A and L, which are superposed. The sum P(X) = u_j p^j, j = 1, ..., n, may depend on the fact that the components or coordinates u_j ∈ L vary while the components or coordinates p^j ∈ A are constant. It may also depend on the fact that the components or coordinates p^j ∈ A vary while the components or coordinates u_j ∈ L are constant. It is absolutely unimportant whether u_j vary while p^j are constant or vice versa. This product evidently depends on the choice of the basis B⊥_n for A = E^n, from which there follows the construction of the dual basis {Φ^j} for L. Conversely, the value of a metric is intrinsic when it is independent of such a choice, as we will see in the following

Theorem 3. Let R^n be the n-dimensional real space over the field R of real numbers provided with the standard scalar product. Let B⊥_n be an orthonormal basis for R^n, so that any vector in R^n can be expressed as a unique linear combination of the basis vectors. Let B⊥_n′ be another basis for R^n. If the scalar product of two vectors x and p in R^n is denoted by (x · p) with regard to the first basis and by (x · p)′ with regard to the second basis, then it turns out that (x · p) = (x · p)′.
Proof. The concept of random quantity is specified when a numerical value is chosen in order to identify each event of a partition of incompatible and exhaustive events. Moreover, a probability reflecting the degree of belief in the occurrence of each of these numerical values is assigned to them. We have n possible values for the random quantity under consideration. They are represented by the vector or point x of R^n, while the probabilities which are subjectively assigned to them are represented by the vector or point p of R^n. Thus, its prevision coincides with the scalar product x · p on the n-dimensional real space R^n over the field R of real numbers. With regard to the first basis for R^n we have x = x^i e_i and p = p^j e_j, so their scalar product coincides with the expression given by (x · p) = x^i p^j e_i · e_j = x^i p^j g_ij, where g_ij is an element of the square matrix of order n which represents the components of the fundamental tensor. With regard to the second basis for R^n we have (x · p)′ = x′^i p′^j g′_ij. It turns out that e′_i = C^h_i e_h and e′_j = C^k_j e_k when we transform linear representations of vectors taken with respect to B⊥_n into their equivalent representations with respect to B⊥_n′, where C is the square matrix of order n associated with this change of basis. As a consequence, g′_ij = e′_i · e′_j = C^h_i C^k_j g_hk is the transformation law of the components of the fundamental tensor. The contravariant components of the vectors change according to x′^i = B^i_r x^r and p′^j = B^j_s p^s, where B is the inverse of the matrix C, so that B^i_r C^h_i = δ^h_r, and we have (x · p)′ = (B^i_r x^r)(B^j_s p^s) C^h_i C^k_j g_hk = (B^i_r C^h_i)(B^j_s C^k_j) x^r p^s g_hk = δ^h_r δ^k_s x^r p^s g_hk. Since we have δ^h_r δ^k_s x^r p^s g_hk = x^h p^k g_hk = (x · p), it turns out that (x · p)′ = (x · p).
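The invariance asserted by Theorem 3 can be verified numerically for the special case of an orthonormal change of basis, where the fundamental tensor stays the identity. The sketch below is our own: it uses a plane rotation in R^2 as the change of basis, and the values, probabilities and angle are illustrative.

```python
import math

def dot(u, v):
    # Standard scalar product on R^n.
    return sum(a * b for a, b in zip(u, v))

def rotate(v, theta):
    # Coordinates of v with respect to an orthonormal basis rotated by theta.
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] + s * v[1], -s * v[0] + c * v[1]]

x = [3.0, -1.0]   # possible values for a random quantity (illustrative)
p = [0.4, 0.6]    # subjective probabilities assigned to them (illustrative)

theta = 0.7       # an arbitrary rotation angle
xp, pp = rotate(x, theta), rotate(p, theta)

# (x . p) computed in the old and in the new basis coincide up to roundoff.
print(abs(dot(x, p) - dot(xp, pp)) < 1e-12)  # True
```

Since the rotation matrix is orthogonal, the components of the fundamental tensor with respect to the new basis are again δ_ij, so the prevision x · p is intrinsic, exactly as the proof shows in general.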
Since we have $g = I_n$, the identity matrix of order $n$, with respect to the orthonormal basis $B_n^{\perp}$, the prevision of the random quantity, whose possible values are represented in $\mathbb{R}^n$, coincides with the tensor product of the two vector spaces $\mathbb{R}^n$ and $\mathbb{R}^n$ over the same field $\mathbb{R}$, where $\mathbb{R}$ is a vector space of dimension 1 over itself. The tensor product $\mathbb{R}^n \otimes \mathbb{R}^n$ is itself a vector space. It is characterized by the bilinear function $\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n \otimes \mathbb{R}^n$ denoted by $(x, p) \mapsto x \otimes p$, which satisfies two properties. Regarding the first, if $g: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is a bilinear function, then there is a unique linear function $g^*: \mathbb{R}^n \otimes \mathbb{R}^n \to \mathbb{R}$ such that, for each pair $(x, p) \in \mathbb{R}^n \times \mathbb{R}^n$, we have $g(x, p) = g^*(x \otimes p)$. Regarding the second, since $B_n^{\perp}$ is a basis for $\mathbb{R}^n$, the set $\{e_1 \otimes e_1, \ldots, e_n \otimes e_n\}$, which consists of $n^2$ elements, is a basis for $\mathbb{R}^n \otimes \mathbb{R}^n$. The element $x \otimes p = x^i p^j \, e_i \otimes e_j$, a sum of $n^2$ terms, belongs to the vector space $\mathbb{R}^n \otimes \mathbb{R}^n$; nevertheless, its image $g^*(x \otimes p) = x^i p^j g_{ij}$ is a real number belonging to the vector space $\mathbb{R}$, where $\{1\}$ is a basis for $\mathbb{R}$ because each element $k \in \mathbb{R}$ is uniquely expressible as $k = k \cdot 1$. With respect to $B_n^{\perp}$ the linear function $g^*$ acts entrywise as
$$
\begin{pmatrix}
e_1 \otimes e_1 & \cdots & e_1 \otimes e_n \\
\vdots & \ddots & \vdots \\
e_n \otimes e_1 & \cdots & e_n \otimes e_n
\end{pmatrix}
\;\longmapsto\;
\begin{pmatrix}
e_1 \cdot e_1 & \cdots & e_1 \cdot e_n \\
\vdots & \ddots & \vdots \\
e_n \cdot e_1 & \cdots & e_n \cdot e_n
\end{pmatrix},
$$
and it turns out to be $g = \bar{g}$ because of the intrinsic character of the scalar product of $x$ and $p$ in $\mathbb{R}^n$. We have used an evident simplification: rather than considering two different vector spaces, each endowed with its own orthonormal basis, we considered two copies of the same space having the same basis; this common basis then changes because we examine a change of basis.
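The factorization of the bilinear scalar product through the tensor product can be made concrete: the outer product $x \otimes p$ collects the $n^2$ products $x^i p^j$ in the basis $\{e_i \otimes e_j\}$, and contracting it with the components $g_{ij}$ of the fundamental tensor returns the prevision as a single real number. A minimal NumPy sketch, with illustrative values assumed:

```python
import numpy as np

x = np.array([1.0, 2.0, 5.0])  # possible values (illustrative)
p = np.array([0.2, 0.5, 0.3])  # subjective probabilities (illustrative)
g = np.eye(3)                  # g_ij = e_i . e_j for an orthonormal basis

# x (x) p: an element of R^n (x) R^n with n^2 components x^i p^j
# in the basis {e_i (x) e_j}.
outer = np.outer(x, p)

# The unique linear map g* sends e_i (x) e_j to g_ij = e_i . e_j,
# so g*(x (x) p) = sum_ij g_ij x^i p^j, a single real number.
prevision = np.sum(g * outer)

assert np.isclose(prevision, x @ p)  # agrees with the scalar product
```

With $g = I_n$ the contraction keeps only the diagonal of the outer product, which is exactly the familiar expression $\sum_i x^i p^i$ for the prevision.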

Conclusions
We deduced three fundamental metric theorems which are the foundation of our next and extensive study concerning the formulation of a geometric, well-organized and original theory of random quantities. When the random quantities of L are expressed as linear combinations of other random quantities which are linearly represented in A, there is a one-to-one correspondence between the hyperplanes of a sheaf of hyperplanes and the points of intersection of a line with all parallel hyperplanes. Therefore, we can transfer onto a line the n-dimensional structure of A. On the other hand, there is a one-to-one correspondence between the points of a line on which an origin, a unit of length and an orientation have been chosen and the possible values of a random quantity; hence, given two points A and B on a line, every point P on the same line can be expressed as an affine combination of them. Each random quantity may also be understood as a finite sequence of ordered pairs of real numbers: we do not consider n point masses on a line, but rather two vectors or points of two n-dimensional real spaces with a Euclidean structure, both of which are the same linear space R^n. The former vector represents the possible values of the random quantity, while the latter represents the corresponding probabilities which are assigned to them in a subjective fashion. In this way we expressly consider en masse all the possible events concerning a given random quantity as well as all the corresponding probabilities. Hence, its prevision can be interpreted as the tensor product of two vector spaces over the same field R of real numbers.
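The two representations recalled above can be sketched side by side: a point P on a line as an affine combination of two points A and B (coefficients summing to one), and a random quantity as a finite sequence of ordered pairs (value, probability). The points, coefficient and pairs below are chosen purely for illustration.

```python
import numpy as np

# Two points of the line (illustrative coordinates).
A = np.array([0.0, 1.0])
B = np.array([4.0, 3.0])

def affine_point(t):
    """Affine combination (1 - t) * A + t * B: the coefficients sum to 1."""
    return (1.0 - t) * A + t * B

P = affine_point(0.25)  # a point one quarter of the way from A to B
assert np.allclose(P, [1.0, 1.5])

# A random quantity as a finite sequence of ordered pairs (value, probability).
pairs = [(1.0, 0.2), (2.0, 0.5), (5.0, 0.3)]
prevision = sum(v * q for v, q in pairs)  # scalar product of the two n-tuples
```

The pairwise representation and the two-vector representation carry the same information: unzipping the pairs into a value vector and a probability vector recovers the scalar product of the main text.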
The tensorial approach ensures a priori that the results are intrinsic with respect to any change of basis: it is not necessary to consider binary n-tuples as basis vectors of R n because we express our subjective and non-predetermined opinion on what is uncertain or possible, that is to say, we do not consider what is definitively true or false.