Optimal Control and Necessary Optimality Conditions for Nonlinear and Perturbed Dynamic Problems

The main goal of this paper is to establish first-order necessary optimality conditions for a tumor growth model driven by cancer cell proliferation. The phenomenon is modeled by a system of partial differential equations in three space dimensions. We prove the existence and uniqueness of an optimal control, and the necessary conditions of optimality are established using the variational formulation.


Introduction
Mathematical models are increasingly used, particularly in medicine. Formalising biological phenomena such as tumours (Robiyn, 2004), which is the subject of our research, is an active topic both internationally and in Côte d'Ivoire. The first mathematical models of tumour growth that we know of date back to the 1930s; however, it was essentially towards the end of the 20th century that many of them were developed (William, 1932). Among the different models, a distinction is usually made between discrete and continuous models. In a discrete model, each entity is represented individually and reacts to given biophysical rules. Biological processes, such as progression through the cell cycle, can then be translated in detail. Moreover, the fate of every entity can be tracked, which facilitates comparisons between the mathematical model and experimental data. However, these advantages reach their limits in large cell populations: it then becomes necessary to monitor many entities, which proves costly in terms of numerical resolution. The use of continuous models makes it possible to overcome this disadvantage and to model large populations. Indeed, in a continuous model, elements are described in terms of population density and their actions are modelled by partial differential equations: this is an advantage for studying the mathematical properties of the model, but it makes it difficult to establish direct links between model parameters and physical measurements. In this paper we propose the use of optimal control theory (Fursikov, 2000; Sritharan, 1998) to provide a complete explanation of biological phenomena. Optimal control theory is the contemporary framework for analyzing and solving optimization problems; it was born in the 1960s with the work of (Pontryagin et al., 1962), building on earlier contributions by (Lagrange, 1788) and (Hamilton, 1827). Essentially, optimal control theory considers the problem of how to achieve an objective subject to external constraints, and it has mainly
been used in economics. To our knowledge, in the biosciences, optimal control theory has been applied to the design of optimal therapies, optimal harvesting policies and optimal investments in renewable resources, but not to the origin of observed biological behaviours. When designing an optimal therapy, optimal harvest or optimal investment, the goal is to achieve an objective external to the biological entities involved, namely: minimize (Raymond, 2013) the negative effects of drugs and diseases and maximize the current value of revenues, subject to biological laws describing existing effects. The appropriate mathematical approach to this problem is therefore optimal control theory. However, in addition to these well-known applications, optimal control theory is also the most appropriate approach for studying biological phenomena understood as the result of the behaviour of semi-autonomous bio-entities. Therefore, optimal control theory provides a comprehensive explanation of observed behaviours: bio-entities pursue their own specific goals, the actions of one bio-entity affect the ability of other entities to achieve their goals, and therefore all behaviours are interdependent. However, the interpretation of biological phenomena as the result of a set of optimal control problems has not yet been considered in current biomathematics. In this respect, using non-linear dynamic models as a starting point, the aim of this paper is to show how this application of optimal control theory is a promising approach for the analysis of biomedical questions, i.e.
to establish the necessary optimality conditions for a dynamic system on which one can act by means of a control to move from a given initial state to a prescribed final state. One of the attractive aspects of the control is that it introduces a functional taking into account the entire trajectory of the system up to a final horizon. The objective will therefore be to determine a control that makes it possible to steer the system according to its dynamics while minimizing the cost functional, that is, to determine a solution of optimal quality. The rest of the paper is organized as follows: after this introduction, in Section 2 we describe the mathematical model that we will study and present some function spaces; then, using the continuous linear Nemytskii and Hammerstein operators, we obtain the linearization of the problem under some stated assumptions. In Section 3, we formulate the optimal control problem and prove the existence and uniqueness of an optimal solution for the controlled system with the cost functional. In Section 4, we establish the gradient of the functional and formulate the adjoint problem of the initial problem, and finally the first-order necessary conditions of optimality associated with the problem are established in Section 5.

Mathematical Models for Tumour Dynamics
Tumour dynamics modelling is an active research area for biologists, mathematicians and engineers, and different approaches are used in the mathematical modelling of cancer and its control. (Swanson, 2000) models a multiform tumour (malignant brain tumour) using partial differential equations. Some researchers have also studied tumour growth using cellular automata, which may include very specific characteristics of the tumour, the patient and drug efficacy in the model (Kansal, 2000); (Gerlee, 2007). (Anderson & Chaplain, 1998) and (Anderson & Enderling, 2006) also used the cellular-automata approach to model tumour growth, angiogenesis and metastasis. A different approach is the work of (Pillis & Radunskaya, 2003), in which they construct a general tumour growth model, using ordinary differential equations, that describes the dynamics of tumour growth in terms of the numbers of healthy cells and immune cells. In this paper, we present a model based on the one presented in (Gossan, Yoro & Bally, 2018), consisting of non-linear reaction-diffusion differential equations describing the proliferative evolution of tumour cells across a given domain.
Let us denote by x the spatial variable describing the tumour and by t the time parameter. Consider a time-dependent reference region I_t = I × (0, T), T > 0, occupied by the tumour, where I is a bounded open set of R³, and let ∂I_t = ∂I × (0, T) be its sufficiently smooth lateral boundary. Let ⃗n denote the unit outward normal to the boundary ∂I.
Write ν = ν(x, t) and ξ_e = ξ_e(x, t) for the vector functions designating, respectively, the proliferation rate of cancer cells and the density of external forces (healthy cells + nutrients + constant drug supply), and let the scalar function ρ = ρ(x, t) be the volume density of tumour cells. The model is then described by the system (2.1), where D(ν) = ½(∇ν + (∇ν)ᵀ) is the deformation rate tensor.
The system (2.1) consists of nonlinear differential equations. Equation (2.1)₁, called the continuity equation, reflects the principle of mass conservation, and the function (ν • ∇)ρ is the transport term for the cells. Equation (2.1)₂, called the momentum equation, is derived by combining the fundamental principle of dynamics with equation (2.1)₁, while (2.1)₃ expresses the incompressibility of the system. The momentum equation contains a diffusion term in divergence form with viscosity coefficients µ and λ, together with the tumour cell convection term (ν • ∇)ν. We assume that, on the boundary of the domain I, the velocity satisfies the boundary condition (2.2). For physical reasons, µ and λ satisfy the conditions (2.3). Two diffusion terms thus appear in the momentum conservation equation, modelling the effects of small scales; indeed, the viscosity reflects friction forces at the microscopic level. To get an idea, one could imagine such forces as those that force a liquid to flow slowly. The pressure π depends on the variable density and is given by the following state law: π = κρ^α, κ ≥ 1, (2.4) where the adiabatic constant α satisfies α > (d − 1)/2 (d = 3).
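Since the displayed system (2.1) is referenced above only by its equation numbers, it may help the reader to see a standard compressible flow system assembled from the ingredients just described (continuity equation, momentum equation with viscosities µ and λ, deformation rate tensor D(ν), external force density ξ_e, and the state law (2.4)). The following is an illustrative standard form, an assumption of this note rather than a transcription of the paper's (2.1):

```latex
\left\{
\begin{aligned}
&\partial_t \rho + \operatorname{div}(\rho\,\nu) = 0
  && \text{in } I_t \quad \text{(mass conservation)},\\
&\partial_t(\rho\,\nu) + \operatorname{div}(\rho\,\nu\otimes\nu)
  - \operatorname{div}\!\bigl(2\mu\,D(\nu) + \lambda(\operatorname{div}\nu)\,\mathrm{I}\bigr)
  + \nabla\pi = \rho\,\xi_e
  && \text{in } I_t \quad \text{(momentum balance)},\\
&\pi = \kappa\rho^{\alpha}, \qquad \kappa \ge 1,\quad \alpha > (d-1)/2,\ d = 3.
\end{aligned}
\right.
```

The two viscous terms correspond to the "two diffusion terms" mentioned in the text, and the state law is exactly (2.4).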
Before stating the results, we must define the spaces in which we work. In this subsection, we introduce the notation that will be used throughout this paper.

Notations and Functional Framework
In this work, a number of symbols and definitions are used, generally introduced when they are needed. However, some general notations belonging to standard mathematical usage are collected here for reference. The following function spaces provide the framework for studying the optimality conditions of problem (2.1)–(2.2).
The underlying domain. Let I ⊂ R³ be a bounded domain with sufficiently smooth boundary ∂I. For T > 0, the interval (0, T) is the time interval under consideration and I_t = I × (0, T) is the space-time domain with lateral boundary ∂I_t = ∂I × (0, T).
Standard Lebesgue and Sobolev spaces. Let m be a non-negative integer. We denote by H^m(I; R³) the usual Sobolev space W^{m,2}(I; R³) as defined in (Lions & Magenes, 1972). We denote by D(I) the space of infinitely differentiable functions with compact support; its closure in the W^{m,p}(I; R³) norm (2 ≤ p < s < +∞) is denoted by W^{m,p}_0(I; R³). An alternative characterization in the case m = 1 and p = 2 is W^{1,2}_0(I; R³) = {ν ∈ W^{1,2}(I; R³) : γ₀ν = 0}, where γ₀ is the trace operator applied to ν. We also denote by L^p(I)³ = L^p(I; R³) the Lebesgue space on I equipped with the norm ∥.∥_p, and by ∥.∥_E the norm associated with a space E. If E is a Banach space, L^p(0, T; E) is the Banach space of measurable functions on (0, T) with values in E. For details concerning these spaces, see (Adams, 1945) or (Girault, 1986). We now consider the zero-divergence spaces introduced for problem (2.1)–(2.2).
We introduce the divergence-free spaces K⁰_div, K¹_div and X⁰, consisting of continuous functions of integrable square. These are Banach spaces for their respective norms.

Linearization of the Problem
The setting is as before; that is, we consider a bounded domain I with the same initial conditions. In this paragraph, we construct a linear functional perturbation that linearizes equation (2.1)₂ and we give the characteristics of the functions that compose it. Consider the term (ν • ∇)ν appearing in equation (2.1)₂: it is at the root of the difficulties encountered in solving this problem. We therefore linearize the system by substituting this term with the following perturbation: φ is a measurable function of (x, t), twice continuously differentiable with respect to (v, w) ∈ R³ × R⁹, and H_p = Pϑ is a continuous integral operator (see Silvia, 2014) which matches any function ϑ to H_p. In expanded form, H_p is the space-time integral of the kernel P(x − y, t − t′) against ϑ, where P(x − y, t − t′) is a linear and continuous operator on I × (0, T). Using the new functions introduced, the initial value problem (2.1) is reformulated as system (2.7). Note that this system is a simpler version of system (2.1), since the term (ν • ∇)ν has been replaced by F(H, φ). This approach introduces new variables v, w, regarded respectively as an argument of the field ν(x, t) and of its divergence. We will then make some hypotheses on the functions φ(x, t, v, w) and ϑ(y, t′, v, w), and then give the definition of the general solution of problem (2.7)₁ − (2.7)₅ subject to the boundary condition (2.2).
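To fix ideas, the action of a linear, continuous space–time kernel operator of the type H_p = Pϑ can be sketched numerically. The discretization below is a hypothetical 1-D analogue (the paper works over I ⊂ R³), with a causal convolution in time; the names `apply_Hp`, `P`, `theta` are ours, not the paper's:

```python
import numpy as np

# Hypothetical discretization of the integral operator H_p = P * theta,
# i.e. a space-time integral of the kernel P(x - y, t - t') against theta,
# on a 1-D spatial grid for illustration (the paper works in R^3).
def apply_Hp(P, theta, dx, dt):
    """Apply the space-time convolution kernel P to theta.

    P, theta: 2-D arrays indexed by (space, time).
    Returns an array of the same shape, using a causal time convolution
    (contributions only from t' <= t) and zero padding in space.
    """
    nx, nt = theta.shape
    out = np.zeros_like(theta, dtype=float)
    for t in range(nt):
        for tp in range(t + 1):  # causal in time: t' <= t
            # spatial convolution truncated to the grid
            out[:, t] += np.convolve(P[:, t - tp], theta[:, tp], mode="same") * dx * dt
    return out
```

Because the kernel enters linearly, `apply_Hp` is a linear map in ϑ, mirroring the linearity and continuity assumed of P.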

Assumptions and Definition
(H-1): Let β, β̄ > 0 and T > 0 be fixed. For every (v, w) ∈ R³ × R⁹, the functions (x, t, v, w) −→ φ(x, t, v, w) and (y, t′, v, w) −→ ϑ(y, t′, v, w) are measurable and satisfy the conditions (2.9); they are twice continuously differentiable with respect to the couple (v, w). Moreover, (2.11) holds. (H-3): Let A_ϵ and B_ϵ be two nonlinear F-differentiable and G-differentiable operators. We denote by A″_ϵ and B″_ϵ the respective second differentials of A_ϵ and B_ϵ (for these notations see Trenoguine, 1985). For an increment h independent of g, we have (2.13); for h = g we deduce the formulas (2.15). Let us now give the definition of the generalized solution of the perturbed problem (2.7)₁ − (2.7)₅.
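The second-differential formulas (2.13)–(2.15) are Taylor-type expansions. For a quadratic nonlinearity such as the convection term (ν • ∇)ν, the second-order expansion is exact. The following 1-D periodic sketch (a hypothetical grid and operators of our own, not the paper's A_ϵ, B_ϵ) illustrates this:

```python
import numpy as np

# 1-D analogue of the quadratic convection nonlinearity:
#   F(u) = u u_x,  F'(u)h = u h_x + h u_x,  F''(u)(h, g) = h g_x + g h_x.
# Since F is quadratic, F(u + h) = F(u) + F'(u)h + (1/2) F''(u)(h, h) exactly.

def d(u, dx):
    # central difference with periodic wrap-around
    return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def F(u, dx):
    return u * d(u, dx)

def dF(u, h, dx):
    return u * d(h, dx) + h * d(u, dx)

def d2F(h, g, dx):
    return h * d(g, dx) + g * d(h, dx)
```

On the grid, the residual F(u + h) − F(u) − F′(u)h − ½F″(u)(h, h) vanishes up to roundoff, which is the finite-dimensional shadow of formulas (2.13)–(2.15).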
Similarly, it has been proved that, since P(x − y, t − t′) is a continuous linear application and the operators A′_ϵ(ν) and B′_ϵ(ν) satisfy the Lipschitz condition, then by Hadamard's theorem, for all ν₀ ∈ V(0, T), the operator defined on V(0, T) with values in X⁰_{ξ_e} × L^{2s/(s+1)}(I; R³) admits a continuous inverse from X⁰_{ξ_e} × L^{2s/(s+1)}(I; R³) into V(0, T); moreover, it is a homeomorphism.
Remark 1. Under some smoothness conditions on the operator, and using Hadamard's theorem on the strong differentiability of inverse functions, the operator R_ϵ(V) is strongly differentiable. This notion of differentiability is weaker than Fréchet's; however, it allows us to establish the necessary conditions for the optimality problems related to these equations.

The Formulation of the Optimal Control Problem
The theory of optimal control of dynamic problems has important applications in both engineering and the human sciences.
Optimal control of a biological process in order to achieve a desired goal is important for many medical applications.
In such optimal control problems, the control variable that produces the optimal state can be obtained by minimizing or maximizing a performance functional. General optimal control problems with non-convex costs have been studied in depth for nonlinear systems by many researchers (Fattorini, 1996; Barbu, 1993; Li and Yong, 1993, and the references cited therein). However, in practical applications to partial differential equations, there is some research involving initial-value controls in which the attached cost functional is not necessarily non-convex. With this in mind, we study convex-cost optimal control problems for (2.7)₁ − (2.7)₅. Let ℓ(ν, ρ) and ψ(ξ_e(x, t)) be two convex functions, respectively modelling the density of proliferating cells and of necrotic cells (Jean Baptiste, 2003), at a time t ≤ T. The cost J(ν, ρ, ξ_e) attached to (2.7)₁ − (2.7)₅ is given by the general integral cost (3.1).
We assume the following conditions on ψ and ℓ in (3.1). C1: The mapping ξ_e −→ ψ(ξ_e) is convex and lower semicontinuous. C2: The mapping ξ_e −→ ψ(ξ_e) is locally Lipschitz. The main objective is to establish the existence of an optimal control minimizing the cost functional (3.1) subject to the constraint (2.7)₁ − (2.7)₅, and to prove the first-order necessary optimality condition using the variational principle and the Fréchet differentiability of the functional.
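The displayed cost (3.1) is cited above only by its number. A cost of the general integral type described, combining a running state cost ℓ(ν, ρ) with a convex control cost ψ(ξ_e), would read as follows; this is a hedged sketch of the typical form, not a transcription of the paper's (3.1):

```latex
J(\nu,\rho,\xi_e)
  = \int_0^T\!\!\int_I \ell\bigl(\nu(x,t),\rho(x,t)\bigr)\,dx\,dt
  + \int_0^T\!\!\int_I \psi\bigl(\xi_e(x,t)\bigr)\,dx\,dt .
```

Conditions C1–C2 then concern the second integrand, while the convexity of ℓ governs the first.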
Definition 3. Let X_ad be a closed and convex subset composed of controls ξ_e ∈ L¹((0, T); L^{2s/(s−1)}(I; R³)). Definition 4. The admissible class Û_ad of triplets (ν, ρ, ξ_e) is defined as the set of states (ν, ρ), with initial data ν₀ ∈ K¹_div and ρ₀ ∈ W^{1,2}(I; R), solving the system (2.7)₁ − (2.7)₅ with control ξ_e ∈ X_ad. The optimal control problem that we study in this paper is (3.2): minimize J over Û_ad, with the constraints given in the form of equalities and inequalities. Definition 5. A solution to problem (3.2) is called an optimal solution, and the optimal triplet is denoted by (ν*, ρ*, ξ*_e). The control ξ*_e is called an optimal control, i.e. a control corresponding to the best cost.

Existence and Uniqueness of Optimal Control
In this section, we use the notion of minimizing sequences to prove the existence and uniqueness of an optimal triplet (ν*, ρ*, ξ*_e) for the functional (3.1) in Û_ad. This is the content of the following theorem.
Step 3. It remains to show that the limit (ν*, ρ*, ξ*_e) is an optimal triplet. By assumptions C1 and C3, the functions ψ and ℓ are convex and lower semicontinuous, so they are weakly lower semicontinuous. Since the cost functional J is convex on L²((0, T); K⁰_div) × L²((0, T); W^{1,2}(I; R)) × X_ad, let us show that J is lower semicontinuous. Let (ν_n, ρ_n, ξ_{en}) be a sequence converging weakly to (ν, ρ, ξ_e) in this space. The lower semicontinuity of ψ and ℓ yields (3.7), and these hypotheses allow us to conclude that J is lower semicontinuous. Returning to the minimizing sequence, we deduce that lim inf_{n→∞} J(ν_n, ρ_n, ξ_{en}) ≥ J(ν*, ρ*, ξ*_e). (3.9) Moreover, taking into account that inf_{(ν,ρ,ξ_e)∈Û_ad} J(ν, ρ, ξ_e) ≤ J(ν*, ρ*, ξ*_e) by definition, from (3.9) we deduce (3.10). Uniqueness: since J is strictly convex, the minimum is unique, and thus the problem admits a unique solution. Indeed, let ξ*_e ∈ Û_ad and ξ̄*_e ∈ Û_ad be two optimal controls, each attaining the minimum. As Û_ad is a convex, nonempty admissible set, for ϵ ∈ (0, 1) the convex combination ϵξ*_e + (1 − ϵ)ξ̄*_e is admissible, and strict convexity gives J(ϵξ*_e + (1 − ϵ)ξ̄*_e) < min_{ξ_e ∈ Û_ad} J(ξ_e), which is a contradiction unless ξ*_e = ξ̄*_e.
This completes the proof. In the estimates (3.11) and (3.12) below, the constants d_i (i = 1, 2) are independent of δξ_e.
Proof. The estimate (3.11) is obtained from the fact that δν and δξ_e satisfy the corresponding a priori estimate.
Then, at t = 0 with q₀ = 0, we get the result. The estimate (3.12) is obtained using the bound (3.19) of Lemma 4 (see Gossan, 2018); combining it with inequality (3.11) yields (3.12).

Adjoint System
As is well known from the control theory literature, in order to obtain the necessary conditions of optimality, we need the adjoint equations corresponding to the system (2.7)₁ − (2.7)₅. Using Theorem 6, we obtain the following result.
Theorem 8. There exist an optimal control ξ*_e and a corresponding solution (ν*, ρ*) minimizing J(ξ_e). Moreover, there exist adjoint variables (G, Y) solving the dual problem (4.1). Proof. Let us introduce a new functional L(ν, ρ, ξ_e, G, Y), associated with the cost functional J defined in (3.1) by (4.2), where G and Y denote the adjoint variables associated with ν and ρ respectively, and the functions M₁ and M₂ are defined accordingly. We differentiate the functional (4.2) in the Fréchet sense with respect to the variables (ν, ρ, ξ_e) and obtain the following system.
Next, the adjoint variables G, Y and the control ξ_e satisfy the system (4.5). Thus, from (4.5), and taking into account the fact that ∇G = 0, it follows that the adjoint variables (G, Y) satisfy the following adjoint system.

Variation Calculation
The calculus of variations studies optimal shapes, times, speeds, energies, volumes, and so on. The laws of physics and of astronomical mechanics, as well as all the natural and technical sciences, obey variational principles, and the main purpose of the calculus of variations is to find the solutions governed by these principles. The calculus of variations has a long history and is renewed with developments in mathematics and the other sciences. These calculations make it possible to establish the necessary optimality conditions for solving this type of problem. To this end, let us state the theorem on the Fréchet differentiability of the functional, together with the dual system of problem (2.7)₁ − (2.7)₅.
Theorem 9. Suppose all the conditions of Definition 6 are satisfied. Let A_ϵ and B_ϵ be two differentiable nonlinear operators, and let ψ be differentiable with respect to ξ_e, with derivative ψ_{ξ_e} satisfying the Lipschitz condition in ξ_e. The function ℓ is differentiable with respect to ν and ρ, and the partial derivatives ℓ_ρ and ℓ_ν satisfy the Lipschitz condition in ν and ρ.
Then the functional J(ξ_e) is differentiable and its gradient is determined by the stated formula. Proof. Suppose all the hypotheses of Theorem 9 on the functions ψ and ℓ(ν, ρ) are satisfied. We examine problem (2.7)₁ − (2.7)₅ with the perturbed control ξ^ε_e(x, t), linked to the solution (ν^ε, ρ^ε) of this problem, and the value of the functional J(ξ^ε_e). To this end, consider the corresponding increments; the increment of the functional J(ξ^ε_e) is written out in (4.9). Taking into account (2.16)–(2.17), and the fact that the small variations δν, δρ and δξ_e satisfy these integral identities, the increment of the functional can be rewritten; using formulas (2.14) and (2.15), and owing to the results (4.11) and (4.12), we deduce from the existence of the solution of the adjoint problem (4.1) the expression for ∆J. With the estimates (3.11) and (3.12) from Theorem 7 and the assumptions of Theorem 9, this holds for all t ∈ (0, T). The remaining terms I₄ and I₅ are evaluated in the same way. We finally conclude that the functional J(ξ_e) is Fréchet differentiable with respect to ξ_e, and its gradient is given by the stated formula, where G is a solution of the dual problem (4.5). Thus, the theorem is proved.
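The state/adjoint/gradient pattern used in this proof can be mirrored on a toy finite-dimensional problem. Everything below (the scalar dynamics x′ = −x + u, quadratic tracking cost, explicit Euler grid, and the names `solve_state`, `cost`, `gradient`) is a hypothetical analogue chosen only to illustrate how a backward adjoint sweep yields the gradient of the cost with respect to the control; it is not the paper's PDE system:

```python
import numpy as np

# Toy adjoint-based gradient for
#   J(u) = 0.5 * sum_k h*(x_k - xd)^2 + 0.5 * alpha * sum_k h*u_k^2
# subject to explicit-Euler dynamics x_{k+1} = x_k + h*(-x_k + u_k).

def solve_state(u, x0, h):
    x = np.empty(len(u) + 1)
    x[0] = x0
    for k in range(len(u)):
        x[k + 1] = x[k] + h * (-x[k] + u[k])
    return x

def cost(u, x0, xd, h, alpha):
    x = solve_state(u, x0, h)
    return 0.5 * h * np.sum((x[1:] - xd) ** 2) + 0.5 * alpha * h * np.sum(u ** 2)

def gradient(u, x0, xd, h, alpha):
    x = solve_state(u, x0, h)
    N = len(u)
    p = np.zeros(N + 1)
    p[N] = h * (x[N] - xd)            # terminal adjoint condition
    for k in range(N - 1, 0, -1):     # backward (adjoint) sweep
        p[k] = (1 - h) * p[k + 1] + h * (x[k] - xd)
    return alpha * h * u + h * p[1:]  # dJ/du_k = alpha*h*u_k + h*p_{k+1}
```

Because cost and gradient are derived from the same discrete system, the adjoint gradient is exact for the discrete problem and can be verified against central finite differences.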

Necessary and Sufficient Condition of Optimality
We proved in Subsection 3.2 that problem (3.1) admits an optimal triplet (ν*, ρ*, ξ*_e) ∈ Û_ad. In this section, we characterize the optimal control by giving the first-order necessary conditions of optimality for a family (P_k) of integrable functions. First, we give the following lemma from optimal control theory.

First Variation of Functionals
Let Û_ad be a convex, closed and bounded set in the space L^{2s/(s−1)}(I, R³), and write ∆J ≡ ϵδJ + O(ϵ) with ϵ ∈ (0, 1); δJ is called the first variation of the functional J. Recall that δJ(ξ_e, ⃗p) = d/dϵ J(ξ_e + ϵ⃗p)|_{ϵ=0}, where ⃗p belongs to a vector space V. Consider a family of functions (P_k), where the G^(k) are solutions of the adjoint problem. The first variation δJ^(k)(ξ_e) of the functional J^(k)(ξ_e) is determined in the same way.
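The definition δJ(ξ_e, ⃗p) = d/dϵ J(ξ_e + ϵ⃗p)|_{ϵ=0} can be checked numerically for a simple quadratic functional, an illustrative stand-in of our own rather than the paper's cost:

```python
import numpy as np

# First variation of J(u) = ∫_0^1 u(t)^2 dt (rectangle rule, step h).
# Analytically, δJ(u, p) = 2 ∫_0^1 u(t) p(t) dt.

def J(u, h):
    return h * np.sum(u ** 2)

def first_variation_fd(u, p, h, eps=1e-6):
    # central-difference approximation of d/dϵ J(u + ϵ p) at ϵ = 0
    return (J(u + eps * p, h) - J(u - eps * p, h)) / (2.0 * eps)
```

Since J is quadratic in u, the central difference agrees with the analytic value 2h Σ u_k p_k up to roundoff, which is exactly the ∆J ≡ ϵδJ + O(ϵ) expansion above.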

Establishing the Necessary Optimality Conditions
Let V denote a vector space of dimension dim(V) = p + q + 1, and let F_a^(k) be a family of functional variations in V, defined by {F_a^(k)(ξ_e, G^(k))}, where the γ_k are non-negative coefficients. Let us prove that the set K^(pq) is a convex cone in the vector space V.
To this end, we define the corresponding family with ϵ ∈ (0, 1) and compute its first variation; we conclude that the set K^(pq) is a convex cone in the vector space V.
Definition 11. The constraints at the point ξ*_e, among the restrictions J^(k)(ξ*_e) ≤ 0, for which J^(k)(ξ*_e) = 0 are called active; those for which J^(k)(ξ*_e) < 0 are called inactive at ξ*_e. When J^(k)(ξ*_e) < 0, it is clear that for ϵ small enough we also have J^(k)(ξ*_e + ϵ⃗p) ≤ 0; but if J^(k)(ξ*_e) = 0 for some indices k, it is not easy to find a vector ξ*_e ∈ L¹((0, T); L^{2s/(s−1)}(I; R³)) such that, for ϵ small enough, ξ*_e + ϵ⃗p satisfies all the constraints in (3.2a). It is therefore necessary to impose additional conditions on the constraints, called qualification conditions; it is under these conditions that we can make "variations" around a point ξ*_e to test its optimality. In what follows, we consider a hyperplane H^(pq) supporting the cone K^(pq), such that the entire cone lies in one of the closed half-spaces defined by H^(pq) (the hyperplane enjoying this property need not be unique). The equation of H^(pq) can be written ∑_{i=0}^{p} ϖ_i x_i = c, where x_0, ..., x_p are the current coordinates, the ϖ_i are the coefficients of the equation of this hyperplane and c ∈ R. Since multiplying all the coefficients ϖ_i by the same non-zero number does not modify the hyperplane H^(pq), we may assume, after changing the signs of all the ϖ_i if necessary, that the cone K^(pq) lies in the half-space H^(pq)_− : ∑_{i=0}^{p} ϖ_i x_i ≤ c. Suppose the cone K^(pq) = {F_a^(k)} is built from the optimal control ξ*_e(x, t), and let H^(pq)_− be the half-space of the hyperplane H^(pq) defined by a functional J^(k) = (J^(0), ..., J^(p+q)) ∈ V; we then have the assumptions (5.9)–(5.10). If J^(k) is C-differentiable for k = 0, ..., p + q at the point ξ*_e(x, t), and J^(k), k = p + 1, ..., p + q, is continuous in a neighbourhood of ξ*_e(x, t), then we obtain (5.17). Thus, (5.14) can be transformed using formula (5.15) again, taking into account the fact that F_a^(k) ∈ K^(pq) satisfies inequality (5.13). This completes the proof. If the control ξ*_e ∈ Û_ad is the minimum point of the functional J(ξ_e), then the necessary condition for the control has been obtained.

Conclusion
In this paper, we have studied an optimal control problem governed by nonlinear dynamic equations with dynamic and volume viscosities. We obtained the existence of an optimal solution to this control problem and established the first-order necessary condition. Our results lay the foundation for numerical experiments on this optimal control problem, which will be the subject of future work.