Developable or Not: Relating Developability to Information Loss

In this paper we present a lemma and two theorems. These theoretical results are used to test whether or not a given surface model is developable. We then work through several examples to demonstrate how to perform these tests. The theory and the examples are general purpose and are not restricted to any particular field. Although all examples are set in three-dimensional space, the results extend to finite n-dimensional Euclidean spaces. The objective of this paper is to establish the relationship between developable surfaces and information loss.


Introduction
Fitting a set of observed data to a preselected model is a fundamental practice used by almost all researchers. For instance, in a clinical trial, Tweedie used the inverse Gaussian distribution to study the effect of a drug on the first passage time taken by a jejunal biopsy capsule leaving the stomach to travel from the pylorus through the duodenum and into the jejunum. Working in collaboration with statisticians at the Clinical Cancer Research Institute in Liverpool, Tweedie

Condition for Developability
We define our notation as follows:

$$\mathrm{I} = E\,du^2 + 2F\,du\,dv + G\,dv^2, \qquad \mathrm{II} = e\,du^2 + 2f\,du\,dv + g\,dv^2,$$

where I is called the first fundamental form and II the second fundamental form; E, F and G are the coefficients of the first fundamental form, and e, f and g are the coefficients of the second fundamental form. Both N (the unit normal direction) and x are surface functions of u and v, which in turn depend on the curve C. Differentiating N and x with respect to the parameters, we obtain the identities

$$e = -N_u \cdot x_u, \qquad f = -N_u \cdot x_v = -N_v \cdot x_u, \qquad g = -N_v \cdot x_v.$$
Theorem 2.1. A necessary and sufficient condition that a surface be developable is that the Gaussian curvature vanishes. Since $K = (eg - f^2)/(EG - F^2)$ and $EG - F^2 > 0$ for a regular surface, we shall prove that $eg - f^2 = 0$ is not only a necessary, but also a sufficient condition for a surface to be developable.
For this we appeal to the identity $N_u \times N_v = K\,(x_u \times x_v)$: when $eg - f^2 = 0$ we have $N_u \times N_v = 0$, so that $N_u$ and $N_v$ are linearly dependent. In case (a), N depends on only one parameter and the surface is the envelope of a one-parameter family ($\infty^1$) of planes, and hence a developable. In case (b), we take as one set of coordinate curves on the surface the asymptotic curves, with equation $e\,du^2 + 2f\,du\,dv + g\,dv^2 = 0$.

Example 2.1. Rotate the catenary about the v-axis; the resulting surface of revolution is called the catenoid. It can be shown that the catenoid is locally isometric to the helicoid. We now show that this surface is not developable.
Let S be this surface of revolution and let $x(u, v) = (a\cosh v \cos u,\; a\cosh v \sin u,\; av)$.
We first show that the coefficients of the first fundamental form satisfy

$$E = G = a^2\cosh^2 v, \qquad F = 0. \quad (2.1)$$

From equations (2.1) and (2.3), we can further derive the coefficients of the second fundamental form; for example,

$$g = \frac{1}{\sqrt{EG - F^2}}\begin{vmatrix} a\cosh v\cos u & a\cosh v\sin u & 0 \\ -a\cosh v\sin u & a\cosh v\cos u & 0 \\ a\sinh v\cos u & a\sinh v\sin u & a \end{vmatrix} = a, \quad (2.4)$$

and similarly $e = -a$ and $f = 0$. In the Appendix, the Gaussian curvature is defined as

$$K = \frac{eg - f^2}{EG - F^2} = -\frac{1}{a^2\cosh^4 v}. \quad (2.7)$$

Even though the Gaussian curvature is different from zero, K decreases to zero as v increases to infinity. We may therefore claim that the Gaussian curvature drops to zero asymptotically, and the information loss can be ignored in that limit.
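As a quick cross-check of this example, the following sympy sketch (our illustration, not part of the original derivation) recomputes E, F, G, e, f, g and K for the catenoid parametrization above:

```python
# A minimal sympy check of Example 2.1 (our illustration, not part of the
# original derivation): recompute E, F, G, e, f, g and K for the catenoid
# x(u, v) = (a cosh v cos u, a cosh v sin u, a v).
import sympy as sp

u, v = sp.symbols('u v', real=True)
a = sp.symbols('a', positive=True)

x = sp.Matrix([a*sp.cosh(v)*sp.cos(u), a*sp.cosh(v)*sp.sin(u), a*v])
xu, xv = x.diff(u), x.diff(v)

# Coefficients of the first fundamental form.
E, F, G = xu.dot(xu), xu.dot(xv), xv.dot(xv)

# Unit normal, then coefficients of the second fundamental form.
n = xu.cross(xv)
N = n / sp.sqrt(n.dot(n))
e = sp.simplify(x.diff(u, 2).dot(N))
f = sp.simplify(x.diff(u).diff(v).dot(N))
g = sp.simplify(x.diff(v, 2).dot(N))

K = sp.simplify((e*g - f**2) / (E*G - F**2))
print(sp.simplify(E - G), sp.simplify(F))  # 0 0, i.e. E = G and F = 0  (2.1)
print(e, f, g)                             # -a 0 a, up to the orientation of N
print(K)                                   # -1/(a**2*cosh(v)**4)        (2.7)
print(sp.limit(K, v, sp.oo))               # 0: K vanishes asymptotically
```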

Monge Patch
It often occurs that a parametric equation is not available. However, we may know the equation z = f(x, y); in other words, we are given z as a function of (x, y). Hence we need to answer the following question: what is the second fundamental form when the surface is given by the equation z = f(x, y)? If we let $p = f_x$ and $q = f_y$, then

$$ds^2 = (1 + p^2)\,dx^2 + 2pq\,dx\,dy + (1 + q^2)\,dy^2. \quad (3.1)$$

When z = f(x, y) the second fundamental form is given by

$$\mathrm{II} = \frac{r\,dx^2 + 2s\,dx\,dy + t\,dy^2}{\sqrt{1 + p^2 + q^2}}, \qquad r = f_{xx},\; s = f_{xy},\; t = f_{yy}. \quad (3.7)$$

We can derive the following lemma.
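The following sympy sketch (an illustration we add here; f is a generic smooth placeholder function) verifies the Monge patch formulas (3.1) and (3.7) symbolically:

```python
# A sympy sketch we add for illustration (f is a generic smooth function):
# verify the Monge patch formulas (3.1) and (3.7), i.e. with p = f_x,
# q = f_y, r = f_xx, s = f_xy, t = f_yy the patch (x, y, f(x, y)) has
#   E = 1 + p^2, F = pq, G = 1 + q^2  and  e = r/W, f = s/W, g = t/W,
# where W = sqrt(1 + p^2 + q^2).
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)

r3 = sp.Matrix([x, y, f])                  # the Monge patch
ru, rv = r3.diff(x), r3.diff(y)
p, q = f.diff(x), f.diff(y)
W = sp.sqrt(1 + p**2 + q**2)

E, F, G = ru.dot(ru), ru.dot(rv), rv.dot(rv)
print(E - (1 + p**2), F - p*q, G - (1 + q**2))        # 0 0 0

N = ru.cross(rv) / W                       # ru x rv = (-p, -q, 1), length W
e = sp.simplify(r3.diff(x, 2).dot(N) - f.diff(x, 2)/W)
s_coeff = sp.simplify(r3.diff(x).diff(y).dot(N) - f.diff(x, y)/W)
g = sp.simplify(r3.diff(y, 2).dot(N) - f.diff(y, 2)/W)
print(e, s_coeff, g)                                  # 0 0 0
```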
Lemma. If the rectangular coordinates x, y, z are functions of two independent variables u, v, then we can express the surface as $r = r(u, v) = (x, y, z)$, where x = x(u, v), y = y(u, v), z = z(u, v). If we eliminate (u, v) we obtain an equation $F(x, y, z) = 0$. On the other hand, if the point (x, y, z) satisfies $F(x, y, z) = 0$ with $F_z \neq 0$, then we may take x = u and y = v, solve for z, and obtain z = f(x, y). For example, $F(x, y, z) = x^2 + y^2 - z$ has $F_z = -1 \neq 0$, so $z = f(x, y) = x^2 + y^2$.
The next theorem gives us another form of the developability condition for a surface $F(x, y, z) = 0$. During the derivation of this theorem we can apply the lemma above. The advantage of this matrix form is that it is easy to extend to n-dimensional functions. For example, if the given implicit function is $F(x_1, x_2, \dots, x_n) = 0$, then the Hessian matrix H(F) is of order $n \times n$, with second-order partial derivatives as elements, bordered by one column and one row holding the gradient vector $\nabla F$. Hence the numerator of the Gaussian curvature turns out to be the determinant of an $(n+1) \times (n+1)$ matrix.
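As an illustration of this matrix form, the sketch below (the helper name and the test surfaces are our own choices) builds the bordered Hessian for a surface $F(x, y, z) = 0$ and evaluates its determinant, the numerator of the Gaussian curvature under one standard convention, $K = -\det B / |\nabla F|^4$:

```python
# A hedged sketch of the matrix form above (the helper name and the test
# surfaces are our own choices).  For F(x, y, z) = 0 we border the Hessian
# H(F) with the gradient; under one standard convention the Gaussian
# curvature is K = -det(B)/|grad F|^4, so det(B) = 0 iff K = 0.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

def bordered_det(F):
    """Determinant of H(F) bordered by one row and column holding grad F."""
    vs = (x, y, z)
    H = sp.hessian(F, vs)
    grad = sp.Matrix([F.diff(v) for v in vs])
    B = H.row_join(grad).col_join(grad.T.row_join(sp.Matrix([[0]])))
    return sp.simplify(B.det())

print(bordered_det(x**2 + y**2 - 1))         # 0: the cylinder is developable
print(bordered_det(x**2 + y**2 + z**2 - 1))  # -16*(x**2+y**2+z**2): sphere, K != 0
```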
We now repeat the previous example by moving the term z to the right side and writing $F(x, y, z) = f(x, y) - z = 0$. Substituting the results of (3.18) into matrix (3.17) yields

$$F_{xx}F_{yy} - F_{xy}^2 = 0,$$

which is exactly the condition required for any developable surface.
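This reduction can also be checked symbolically. The sketch below (our own verification, with a generic smooth f) shows that for $F = f(x, y) - z$ the bordered determinant collapses to $-(rt - s^2)$, so the condition above is exactly $rt - s^2 = 0$:

```python
# Our own symbolic verification (generic smooth f): moving z to the right
# side, F(x, y, z) = f(x, y) - z, the bordered-Hessian numerator collapses
# to -(rt - s^2), so det(B) = 0 is exactly F_xx F_yy - F_xy^2 = rt - s^2 = 0.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = sp.Function('f')(x, y)
F = f - z

vs = (x, y, z)
H = sp.hessian(F, vs)
grad = sp.Matrix([F.diff(v) for v in vs])
B = H.row_join(grad).col_join(grad.T.row_join(sp.Matrix([[0]])))

rt_minus_s2 = f.diff(x, 2)*f.diff(y, 2) - f.diff(x, y)**2
print(sp.simplify(B.det() + rt_minus_s2))    # 0, i.e. det(B) = -(rt - s^2)
```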

Concluding Remark
Experimental data can usually be fitted by a multiple linear regression model. Suppose that the response y is related to n covariates, also known as explanatory variables, regressors, or predictors, $x_1, x_2, \dots, x_n$, in a linear functional form:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n + \varepsilon.$$
Statisticians call this regression analysis. Mathematically speaking, we fit the data to a hyperplane, that is, a plane in n-dimensional space. Since the Hessian matrix consists of all second-order partial derivatives, and every second-order partial derivative of a linear function vanishes, the elements of that matrix are all zero. This means that the determinant giving the Gaussian curvature equals zero. In this sense, we can be sure that multiple linear regression analysis, in general, has no information-loss problem. However, from Example 2.1 we see that the Gaussian curvature there is different from zero; therefore, there is information loss. It is natural to ask how much information is lost due to the use of such a model.

Efron (1975) considers arbitrary one-parameter families and quantifies how nearly "exponential" they are. A quantity $\gamma_\theta$, called the statistical curvature at $\theta$, is introduced such that $\gamma_\theta$ is identically zero if the model is exponential, and greater than zero for at least some $\theta$ values otherwise. Efron shows that models with small curvature enjoy nearly the good statistical properties of exponential families; large curvatures indicate a breakdown of this favorable situation. We adopt his fundamental proposition and extend it to n-dimensional space: if the n-dimensional Gaussian curvature equals zero, we can be sure there is no information loss; if the Gaussian curvature in n-dimensional space is different from zero, then a smaller curvature (in magnitude) is better than a larger one. In other words, a smaller Gaussian curvature causes less information loss.
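A short symbolic check of the hyperplane claim (our illustration; the choice n = 4 is arbitrary) confirms that the Hessian of a linear model vanishes identically, so the curvature numerator is zero:

```python
# A small illustration of the hyperplane claim (ours; the choice n = 4 is
# arbitrary): every second-order partial derivative of a linear model is
# zero, so the Hessian vanishes and the curvature numerator is identically 0.
import sympy as sp

n = 4
xs = sp.symbols(f'x1:{n + 1}', real=True)   # covariates x1..x4
bs = sp.symbols(f'b0:{n + 1}', real=True)   # coefficients b0..b4
y = sp.Symbol('y', real=True)

plane = bs[0] + sum(b*xi for b, xi in zip(bs[1:], xs))
F = plane - y                               # implicit form F = 0 of the fit

vs = (*xs, y)
H = sp.hessian(F, vs)
grad = sp.Matrix([F.diff(v) for v in vs])
B = H.row_join(grad).col_join(grad.T.row_join(sp.Matrix([[0]])))

print(H.is_zero_matrix)      # True: the Hessian of a hyperplane vanishes
print(sp.simplify(B.det()))  # 0: no information loss in this sense
```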
As a further illustration, return to the Monge patch of Section 3. If we are given the Monge-type polynomial $z = f(x, y) = bx + cy + a(px + qy)^n$, where the coefficients of x and y are all constants, is this surface developable? By (3.11) we must check whether $f_{xx}f_{yy} - f_{xy}^2 = rt - s^2 = 0$, where $r = f_{xx}$, $t = f_{yy}$, $s = f_{xy}$. Direct differentiation gives $f_x = b + anp(px + qy)^{n-1}$ and $f_y = c + anq(px + qy)^{n-1}$, and hence $r = an(n-1)p^2(px + qy)^{n-2}$, $s = an(n-1)pq(px + qy)^{n-2}$, and $t = an(n-1)q^2(px + qy)^{n-2}$, so $rt - s^2 = 0$ and the surface is developable. For an implicit function defined by the surface $F(x, y, z) = 0$, the Gaussian curvature can likewise be expressed in terms of the gradient vector $\nabla F$ and the Hessian matrix H(F).
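The differentiation above can be confirmed symbolically; the sketch below (our own check, treating n as a symbolic positive integer) reproduces $rt - s^2 = 0$:

```python
# Our own symbolic check of this example, with the exponent n treated as a
# symbolic positive integer: rt - s^2 = 0, so the surface is developable
# (it is a generalized cylinder over the direction (p, q)).
import sympy as sp

x, y = sp.symbols('x y', real=True)
a, b, c, p, q = sp.symbols('a b c p q', real=True)
n = sp.Symbol('n', integer=True, positive=True)

f = b*x + c*y + a*(p*x + q*y)**n
r = f.diff(x, 2)    # a*n*(n-1)*p**2*(p*x + q*y)**(n-2)
s = f.diff(x, y)    # a*n*(n-1)*p*q*(p*x + q*y)**(n-2)
t = f.diff(y, 2)    # a*n*(n-1)*q**2*(p*x + q*y)**(n-2)

print(sp.simplify(r*t - s**2))   # 0 -> developable
```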