Picard theorems for Keller mappings in dimension two and the phantom curve

Let $F=(P,Q)\in\mathbb{C}[X,Y]^{2}$ be a polynomial mapping over the complex field $\mathbb{C}$. Suppose that $$ \det\,J_{F}(X,Y):=\frac{\partial P}{\partial X}\frac{\partial Q}{\partial Y}- \frac{\partial P}{\partial Y}\frac{\partial Q}{\partial X}=a\in\mathbb{C}^{\times}. $$ A mapping that satisfies the assumptions above is called a Keller mapping. In this paper we estimate the size of the co-image of $F$. We give a sufficient condition for the surjectivity of a Keller mapping in terms of its phantom curve. This curve is closely related to the asymptotic variety of $F$.
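As a quick illustration of the defining condition, the following sympy computation (our own toy example, not taken from the paper) evaluates $\det J_F$ for the elementary mapping $F=(X,\,Y+X^{2})$, which is Keller since its Jacobian determinant is the nonzero constant $1$:

```python
import sympy as sp

X, Y = sp.symbols('X Y')

# Toy example: F = (P, Q) = (X, Y + X**2) is an elementary automorphism,
# hence a Keller mapping: det J_F is the nonzero constant 1.
P, Q = X, Y + X**2
J_F = sp.Matrix([[sp.diff(P, X), sp.diff(P, Y)],
                 [sp.diff(Q, X), sp.diff(Q, Y)]])
print(J_F.det())  # 1, an element of C^x
```

Of course, this example is an automorphism; the whole point of the paper is the behavior of Keller mappings that are not known to be invertible.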

These results are also true over fields $K$ different from $\mathbb{C}$. We also prove: Let $F(X,Y)\in\mathbb{C}[X,Y]^{2}$, $F:\mathbb{C}^{2}\to\mathbb{C}^{2}$, be a Keller mapping. Then $F(\mathbb{C}^{2})=\mathbb{C}^{2}$.
The proofs are based on a careful analysis of the asymptotic behavior of the mapping at infinity. The set of all the asymptotic values of $F$ is called the asymptotic variety of $F$ and is denoted by $A(F)$. If $F$ is not an automorphism of $\mathbb{C}^{2}$ then this set is a planar algebraic curve; otherwise it is empty. Each component of $A(F)$ is a polynomial curve, i.e. it has a normal parametrization by polynomials. Equivalently, it has a unique place on the line at infinity (in the projectivization). However, no such component is isomorphic to $\mathbb{C}$, and hence each must be singular. These are all well-known results. We can refine the description of that structure. There is a finite set of rational but not polynomial mappings which we call a geometric basis of $F$. It is denoted by $R_{0}(F)$ and contains rational mappings of the form $L\circ(X^{-\alpha},\,X^{\beta}Y+X^{-\alpha}\Phi(X))$, where $L(X,Y)$ is a fixed invertible linear mapping (depending only on $F$), $\alpha\in\mathbb{Z}^{+}$, $\beta\in\mathbb{Z}^{+}\cup\{0\}$, $\Phi(X)\in\mathbb{C}[X]$ and $\deg\Phi<\alpha+\beta$. Moreover, the powers of $X$ that effectively appear in $X^{\alpha+\beta}Y+\Phi(X)$ have gcd equal to $1$. The cardinality of $R_{0}(F)$ equals the number of components of the asymptotic variety $A(F)$. Each rational mapping $R\in R_{0}(F)$ satisfies an identity of the form $F\circ R=G_{R}\in\mathbb{C}[X,Y]^{2}$. We call this a double asymptotic identity of $F$, and we call the corresponding polynomial mapping $G_{R}$ the $R$-dual of $F$. The irreducible component of $A(F)$ that corresponds to $R\in R_{0}(F)$ is the polynomial curve with the following normal parametrization (meaning a surjective parametrization): $\{G_{R}(0,Y)\mid Y\in\mathbb{C}\}$. We call this component the $R$-component of $A(F)$. If its implicit representation is $H_{R}(X,Y)=0$ for some irreducible $H_{R}\in\mathbb{C}[X,Y]$, then $(H_{R}\circ G_{R})(0,Y)\equiv 0$. In fact we prove that $(H_{R}\circ G_{R})(X,Y)=X^{\beta-\alpha}S_{R}(X,Y)$, where $1\le\beta-\alpha$ and $S_{R}\in\mathbb{C}[X,Y]$. The planar algebraic curve $\{S_{R}(X,Y)=0\}$ is called the $R$-phantom curve.
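The shape of the mappings in a geometric basis can be probed concretely. The following sympy sketch (our own illustrative parameters, not data from the paper) computes the Jacobian determinant of $R_{0}=(X^{-\alpha},\,X^{\beta}Y+X^{-\alpha}\Phi(X))$ for $\alpha=1$, $\beta=3$, $\Phi=X^{2}+1$; pre-composing with the invertible linear $L$ would only multiply the result by a nonzero constant:

```python
import sympy as sp

X, Y = sp.symbols('X Y')

# Illustrative check (our own example): for
# R0 = (X**-alpha, X**beta * Y + X**-alpha * Phi(X)), alpha = 1, beta = 3,
# Phi = X**2 + 1 (so deg Phi < alpha + beta), the Jacobian determinant is
# -alpha * X**(beta - alpha - 1).  The linear factor L is omitted; it only
# contributes a nonzero constant factor.
alpha, beta = 1, 3
Phi = X**2 + 1
R1 = X**(-alpha)
R2 = X**beta * Y + X**(-alpha) * Phi
jac = sp.Matrix([[sp.diff(R1, X), sp.diff(R1, Y)],
                 [sp.diff(R2, X), sp.diff(R2, Y)]])
det = sp.simplify(jac.det())
print(det)  # -X, i.e. -alpha * X**(beta - alpha - 1)
```

Note that when $\beta=\alpha+1$ this determinant is the nonzero constant $-\alpha$, which is the key to Theorem 3.4 below.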
If $\{S_{R}(X,Y)=0\}\cap\mathrm{sing}(R)=\emptyset$ for every $R\in R_{0}(F)$, then the mapping $F$ must be surjective! The reason is the following. If the finite set $\mathbb{C}^{2}-F(\mathbb{C}^{2})$ is non-empty, then any $(a,b)\in\mathbb{C}^{2}-F(\mathbb{C}^{2})$ is an asymptotic value of $F$. We call such an asymptotic value a Picard-exceptional value of $F$ (as is the terminology in the theory of analytic functions). The mapping $F$ is surjective if and only if it has no Picard-exceptional values. Let $R$ be an element of $R_{0}(F)$ that corresponds to $(a,b)$. This means that $(a,b)\in\{H_{R}(X,Y)=0\}-F(\mathbb{C}^{2})$. As explained above, we have $\{H_{R}(X,Y)=0\}=\{G_{R}(0,Y)\mid Y\in\mathbb{C}\}$. The inverse image $G_{R}^{-1}(\{H_{R}(X,Y)=0\})$ equals the union $\{X=0\}\cup\{S_{R}(X,Y)=0\}$, i.e. the union of the singular set of $R$ and the $R$-phantom curve. If these two sets are disjoint, then $R$ is defined at every point of the $R$-phantom curve $\{S_{R}(X,Y)=0\}$. In particular, $G_{R}(\{S_{R}(X,Y)=0\})=F(R(\{S_{R}(X,Y)=0\}))\subseteq F(\mathbb{C}^{2})$. This means that if $F$ has Picard-exceptional values on $\{H_{R}(X,Y)=0\}$, they must belong to the difference set $\{H_{R}(X,Y)=0\}-G_{R}(\{S_{R}(X,Y)=0\})$. We will prove that the asymptotic variety $A(F)$ contains no Picard-exceptional values of $F$. Since this is true for any $R\in R_{0}(F)$, it follows that $F$ has no Picard-exceptional values. Hence $F$ is a surjective mapping. We will prove that for certain choices of the parameters $\alpha$ and $\beta$ the condition $\{S_{R}(X,Y)=0\}\cap\mathrm{sing}(R)=\emptyset$ is satisfied. As a corollary we are able to prove: If $N\in\mathbb{Z}^{+}$ and $a_{1},\ldots,a_{N}\in\mathbb{C}$, then $I((X^{-N},\,X^{N+1}Y+a_{N}X^{N}+\cdots+a_{1}X))$ contains no Jacobian pair. This result should be compared to the main theorems in [3,4,5], which handled other families of subalgebras of $\mathbb{C}[X,Y]$. The case $N=1$ is in the intersection of the two families and was originally proved by L. Makar-Limanov using techniques from weighted graded algebras.
We recall that Pinchuk's counterexample to the Real Jacobian Conjecture is contained in the real version $\mathbb{R}[V,VU,VU^{2}+U]$ of the case $N=1$.

2 The structure of the asymptotic variety
An $F$-asymptotic value $(a,b)\in\mathbb{C}^{2}$ is a limiting value of $F:\mathbb{C}^{2}\to\mathbb{C}^{2}$ along a smooth curve that tends to infinity. The set of all the $F$-asymptotic values is called the $F$-asymptotic variety and it is denoted by $A(F)$. Any smooth curve as above is called an asymptotic tract of $F$ that corresponds to $(a,b)$.
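A concrete (hypothetical, non-Keller) illustration of these notions: for $F(X,Y)=(X,XY)$ and the asymptotic tract $(X,Y)=(1/t,\,ct)$ with $t\to\infty$, a sympy limit shows that $F\to(0,c)$, so $A(F)$ contains the line $\{0\}\times\mathbb{C}$:

```python
import sympy as sp

t, c = sp.symbols('t c', positive=True)

# Toy example of an asymptotic value: F(X, Y) = (X, X*Y) along the tract
# (X, Y) = (1/t, c*t), which tends to infinity as t -> oo.
F1 = 1/t              # first coordinate of F on the tract
F2 = (1/t) * (c*t)    # second coordinate of F on the tract
lim = (sp.limit(F1, t, sp.oo), sp.limit(F2, t, sp.oo))
print(lim)  # (0, c): the whole line {0} x C consists of asymptotic values
```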
, $\deg\varphi<\alpha+\beta$ and the gcd of the powers of $X$ that effectively appear in $X^{\alpha+\beta}Y+\varphi(X)$ is $1$) with the following two properties: Moreover, the set of all those rational mappings $R$ can be chosen to be a finite set.

Proof.
We consider the transformed mapping: Any asymptotic value of $F$ is a limiting value of $F_{1}(U,V)$ as $U\to 0$ while $V$ remains bounded. We use the following representation: Here the $A_{j}(V)$ are the polynomial coefficient pairs of the powers of $U$. When $U\to 0$ and $V$ remains bounded we will have $F_{1}(U,V)\to\infty$ unless $V$ tends to a zero of the free term $A_{0}(V)$, that is, a simultaneous zero of the pair of polynomials $A_{0}(V)$. The reason is that if $V$ is bounded away from the zeros of $A_{0}(V)$, then the term $A_{0}(V)/U^{N}$ dominates the finite sum $F_{1}(U,V)$ (in equation (*)) when $U\to 0$. We conclude that in order to determine all the possible asymptotic values of $F$, we should consider the limits of $F_{1}(U,V)$ when $U\to 0$ and $V\to a_{0}$, where $a_{0}$ is a zero of $A_{0}$. Let us represent the coefficients of $A$ as follows, where if $A_{j}\equiv(0,0)$ we agree that $p_{j}=\infty$, and otherwise $p_{j}\ge 0$ and $B_{j}(a_{0})\neq(0,0)$. Next we make the transformation $V=a_{0}+WU^{p}$, where $p$ is a positive number that will be determined soon (in fact it is going to be rational). Putting everything together we obtain, We want to determine the value of $p$ that will lead to a finite limiting value of $F_{1}(U,V)$. If $p$ is a small positive number, then $pp_{0}<pp_{j}+j$ for all $1\le j\le N$, and the term with $j=0$ dominates the finite sum that represents $F_{1}$. If $pp_{0}=N$ and $pp_{0}<pp_{j}+j$ for $1\le j\le N$, then with this choice $p=N/p_{0}$ we obtain the family of limiting values $B_{0}(a_{0})W^{p_{0}}$. In the complementary case, we choose $p>0$ so that $pp_{0}\le pp_{j}+j$ for $1\le j\le N$, with equality for at least one value $j_{0}$ of $j$. We can write this choice of a value for $p$ as follows, $p=\min\{\,j/(p_{0}-p_{j})\mid 1\le j\le N,\ p_{j}<p_{0}\,\}$. We substitute this and obtain: where $C(U,W)$ is a mapping whose coordinates are polynomial in $U$ and $U^{p}$. It is also a polynomial in $W$ of degree $N$ or less. Finally, $C(0,W)$ is a polynomial in $W$ of degree $p_{0}$, which contains only the powers $W^{p_{j}}$ that satisfy the condition $pp_{0}=pp_{j}+j$. Let $p=b_{0}/c$, where $b_{0}$ and $c$ are coprime positive integers.
Then, to avoid fractional powers, we further make the substitution $U=Z^{c}$. We get, Here $D(Z,W)$ is a polynomial pair in $Z$ and $W$, and $D(0,W)=C(0,W)$ is a polynomial pair in $W$ of degree $p_{0}$. We noted above that the powers $W^{p_{j}}$ that appear in $D(0,W)$ satisfy $pp_{0}=pp_{j}+j$, i.e. $b_{0}(p_{0}-p_{j})=cj$ for these $j$'s. Since $b_{0}$ and $c$ are coprime and since $p_{0}-p_{j}\in\mathbb{Z}$, $b_{0}$ must be a divisor of $j$, and the difference $p_{0}-p_{j}$ is a multiple of $c$. We conclude that $D(0,W)$ is of the form $D(0,W)=W^{\alpha}E(W^{c})$, where $\alpha\in\mathbb{Z}^{+}\cup\{0\}$ and $E(X)$ is a polynomial pair in $X$ of degree $1$ or more. We see that the form of $F_{1}$ after our transformations, as given in equation (**), is of the same type in the indeterminates $Z$ and $W$ as it was in equation (*) in the indeterminates $U$ and $V$. Thus we can repeat the sequence of transformations with suitable parameters. We conclude that any limiting value of $F_{1}$ will be attained along a curve of the form $V=a_{0}+WU^{p}$, where $U\to 0$, $W$ is bounded and $p$ is a positive rational number. If the parameter $p$ is chosen to be smaller than our minimum value formula, then $F_{1}\to\infty$ along the curve. So finite limiting values of $F_{1}$ can be achieved only for this minimal value of $p$ or for larger ones. So with that minimal chosen value of $p$, asymptotic values of $F_{1}$ that correspond to the zero $a_{0}$ of the pair $A_{0}$ are achieved when $Z\to 0$ and $W$ is bounded. If the value of $p$ is chosen to be larger than the minimum then, in fact, $W\to 0$. On repetition of the process, each asymptotic value will correspond to a zero $a_{1}$ of $D(0,W)$. If the multiplicity of the zero $a_{1}$ is $q_{0}$, then the transformation we apply is $W=a_{1}+SZ^{q}$, where $q$ is chosen by our minimum process (as with $p$).
We have $q_{0}\le p_{0}\le N$. The case $q_{0}=p_{0}$ implies $D(0,W)=d\cdot(W-a_{1})^{p_{0}}$ with a nonzero constant $d$, so $D(0,W)$ contains all the powers of $W$ from $0$ to $p_{0}$. Hence the denominator of $q$ is $c=1$, and the power of $Z$ in the denominator is $N-p_{0}$ or less. We conclude that each iteration of the process either strictly reduces the degree of the leading term, or strictly reduces the power of the denominator by a positive integer. Hence the process must terminate. The way it terminates is as follows. If we use the notation $A(U,V)/U^{N}$, the final transformation must be of the form $V=a_{0}+WU^{p}$ with $p=N/p_{0}$. The other possibility is that this choice of value for $p$ coincides with the minimum $\min\{j/(p_{0}-p_{j})\}$. We obtain a curve of asymptotic values $D(0,W)$, a polynomial in $W$ of degree $p_{0}\ge 1$. For as the denominator in the last iteration becomes $1$, our sequence of transformations assigns a new polynomial $D(Z,W)$ to the original $F(X,Y)$. Clearly, if we perform the process at every zero of the leading term at every stage, we obtain all the asymptotic values of $F$. Clearly we obtain in this way finitely many rational mappings of the type prescribed in the statement of our theorem. We obtain those mappings by composing the sequence of transformations (and inverting the result). In general, asymptotic values of $F$ will be achieved as limits of $F$ along more than one of the rational curves in our construction.
Remark 2.2. We did not use the Jacobian condition in our proof. This theorem is valid for polynomial mappings that are not necessarily Keller.
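The exponent-balancing step of the proof can be made concrete on a scalar toy instance (our own, not from the paper): take $f(U,V)=V^{2}/U+1$, so $N=1$, $A_{0}(V)=V^{2}$ with the zero $a_{0}=0$ of order $p_{0}=2$, and the balancing choice is $p=N/p_{0}=1/2$:

```python
import sympy as sp

U, W = sp.symbols('U W', positive=True)
V = sp.symbols('V')

# Scalar toy instance of the substitution V = a_0 + W*U**p with a_0 = 0:
# f(U, V) = V**2/U + 1 has N = 1, A_0(V) = V**2, p_0 = 2, so p = N/p_0 = 1/2.
f = V**2/U + 1
p = sp.Rational(1, 2)
g = sp.simplify(f.subs(V, W * U**p))
lim = sp.limit(g, U, 0)
print(lim)  # W**2 + 1: a one-parameter family of finite limiting values
```

This exhibits the phenomenon described above: along the curves $V=WU^{1/2}$ a whole curve of finite limiting values $B_{0}(a_{0})W^{p_{0}}+\cdots$ appears, while any smaller choice of $p$ sends $f\to\infty$.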
Theorem 2.3. Let $F\in\mathbb{C}[X,Y]^{2}$ be a Keller mapping which is not an automorphism. Then there exists a non-empty set $R_{0}(F)$ of rational mappings $R\in\mathbb{C}(X,Y)^{2}-\mathbb{C}[X,Y]^{2}$ that satisfies: (i) The cardinality of $R_{0}(F)$ equals the number of the irreducible components of the $F$-asymptotic variety $A(F)$.

Proof.
We choose an invertible linear mapping $L(U,V)$ so that $F\circ L$ satisfies the $V$-degree condition. Then the construction outlined in the proof of Theorem 2.1 gives us finitely many rational non-polynomial mappings of the required form. This proves part (iv). Any asymptotic value of $F$ equals $G_{R}(0,Y)$ for some $R$ and some $Y\in\mathbb{C}$. Thus $\{G_{R}(0,Y)\mid Y\in\mathbb{C}\}$ is a normal parametrization (i.e. a surjective one) of one of the irreducible components of $A(F)$, and we conclude the proof of Theorem 2.3.
Any set $R_{0}(F)$ that satisfies properties (i)-(iv) of Theorem 2.3 is called a geometric basis of $F$.
Proof. Parts (also a refined version in [2]): each such component cannot be isomorphic to $\mathbb{C}$. Since the only non-singular irreducible plane algebraic curves are isomorphic images of $\mathbb{C}$, it follows that each such component of $A(F)$ must be a singular plane algebraic curve. The proof of part (iv): this follows by $G_{R}=F\circ R$ off $\mathrm{sing}(R)$ and by the Jacobian condition (satisfied by $F$). We denote and from the above note we deduce that: which is not possible for an irreducible curve such as this one. To clarify further the relations among the parameters $\alpha$, $\beta$ and $\gamma(R)$ (after part (iv) of Theorem 2.5) we add the following proposition.
Proposition 2.6. More relations among the parameters $\alpha$, $\beta$ and $\gamma(R)$ are given by:

Proof.
We start with the identity $(H_{R}\circ G_{R})(X,Y)=X^{\gamma(R)}S_{R}(X,Y)$. We use the notation $G_{R}=(G_{1},G_{2})$ and $H_{R}(U,V)$, and obtain: To prove part (i) we use the assumption $\gamma(R)=1$ and substitute $X=0$ in the system above. The resulting system is: Recalling the note we had in the proof of part (iii) of Theorem 2.5, $\det J_{G_{R}}(X,Y)=-\alpha X^{\beta-\alpha-1}$, and so, depending on whether $\beta-\alpha-1=0$ or $\beta-\alpha-1>0$, the matrix $J_{G_{R}}(0,Y)^{T}$ is either always invertible (when $\beta-\alpha-1=0$) or always non-invertible (when $\beta-\alpha-1>0$). In our case the matrix $J_{G_{R}}(0,Y)^{T}$ cannot always be non-invertible, because this would mean that the two equations in the second system above are proportional, which is absurd because $S_{R}(0,Y)\not\equiv 0$. Thus, in the case $\gamma=1$ we must have $\beta-\alpha-1=0$. So the second system has a unique solution for all $Y\in\mathbb{C}$, and in particular, To prove part (ii) we use the assumption $\gamma(R)\ge 2$ and substitute $X=0$ in the first system. We obtain the following system: In this case the matrix $J_{G_{R}}(0,Y)^{T}$ cannot always be invertible, because it would imply that Hence, in this case we must have $\beta-\alpha-1>0$. In particular, if $\{S_{R}(X,Y)=0\}\cap\mathrm{sing}(R)=\emptyset$ and $\gamma=1$, then by (i) $\mathrm{sing}(H_{R}(X,Y)=0)=\emptyset$, which is a contradiction. Hence $\gamma\ge 2$.
Remark 2.7. In the next section we will prove a more accurate version of Theorem 2.5 (iv).
3 The relation $\gamma=\beta-\alpha$, and the geometry of the R-phantom curve
Theorem 3.1. Let $F$ be a Keller mapping which is not a $\mathbb{C}^{2}$-automorphism. Assume that $F$ satisfies the $Y$-degree condition.
Proof.
This follows by Hilbert's Nullstellensatz and the irreducibility of $H_{R}$ (where $\gamma$ absorbs all the $X$-powers). So We think of this system as a $2\times 2$ linear system in the two unknowns $(\partial H_{R}/\partial U)(G_{R})$ and $(\partial H_{R}/\partial V)(G_{R})$. The coefficient matrix is We evaluate at $X=0$: We consider the second equation and recall that the specialization $X=0$ is an operator that commutes with $\partial/\partial Y$. Hence But both possibilities cannot occur, because this curve is the $R$-component of $A(F)$, $H_{R}(X,Y)=0$, which is a singular non-degenerate planar algebraic curve. We conclude that

Proof.
Using the proof of Theorem 3.1 we get: This proves that

Proof.
If $(X_{0},Y_{0})$ is a singular point of the $R$-phantom curve which is off $\mathrm{sing}(R)$, then also $(\partial H_{R}/\partial U)(G_{R}(X_{0},Y_{0}))=(\partial H_{R}/\partial V)(G_{R}(X_{0},Y_{0}))=0$. This follows by the determinantal formulas in the proof of Theorem 3.1. Hence $G_{R}(X_{0},Y_{0})$ is a singular point of $H_{R}(X,Y)=0$. This and Corollary 3.2 prove that $G_{R}(\mathrm{sing}(S_{R}=0))\cup G_{R}(\{S_{R}=0\}\cap\mathrm{sing}(R))\subseteq\mathrm{sing}(H_{R}=0)$. The singular locus $\mathrm{sing}(H_{R}=0)$ also contains points $(G_{1}(a,b),G_{2}(a,b))$ for which $S_{R}(a,b)=0$; if $a\neq 0$ then $G_{R}$ is a local diffeomorphism at $(a,b)$, which implies that $(a,b)$ is also a singular point of $S_{R}(X,Y)=0$. But such a point was already accounted for in the set on the left-hand side.
A very important fact that we would like to point out, in view of the result in the next section, is that the disjointness of the singular locus of $R$ and the $R$-phantom curve implies the surjectivity of the mapping $F$. The next result proves that this disjointness holds true in the special case $\beta=\alpha+1$.
Theorem 3.4. Let $F$ be a Keller mapping which is not a $\mathbb{C}^{2}$-automorphism.
In this case we have $\det J_{R}(X,Y)=-\alpha$ for all $(X,Y)\notin\mathrm{sing}(R)$. By the relation $F\circ R=G_{R}$, the $R$-dual of $F$, it follows that $\det J_{G_{R}}\in\mathbb{C}^{\times}$ (since $F$ is Keller). Thus in this case $G_{R}$ is Keller as well. We know that the pre-image of the $R$-irreducible component of $A(F)$ by $G_{R}$ equals the union $\{X=0\}\cup\{S_{R}(X,Y)=0\}$. Now let us assume, in order to get a contradiction, that $\mathrm{sing}(R)\cap\{S_{R}(X,Y)=0\}\neq\emptyset$. Say $(a,b)\in\mathrm{sing}(R)\cap\{S_{R}(X,Y)=0\}$ (so $a=0$). Then there exist two sequences $(a_{n}^{1},b_{n}^{1})\in\mathrm{sing}(R)$, $(a_{n}^{2},b_{n}^{2})\in\{S_{R}(X,Y)=0\}$ so that: Hence $(a,b)$ is a singular point of the mapping $G_{R}(X,Y)$, and in particular $\det J_{G_{R}}(a,b)=0$. This contradicts the fact that in our case $\beta=\alpha+1$, which, as explained above, implies that $G_{R}(X,Y)$ is Keller, i.e. $\det J_{G_{R}}\in\mathbb{C}^{\times}$. This completes the proof of the theorem.
Corollary 3.5. If $N\in\mathbb{Z}^{+}$ and $a_{1},\ldots,a_{N}\in\mathbb{C}$, then $I((X^{-N},\,X^{N+1}Y+a_{N}X^{N}+\cdots+a_{1}X))$ contains no Jacobian pair.

Proof.
Suppose to the contrary that $F\in I((X^{-N},\,X^{N+1}Y+a_{N}X^{N}+\cdots+a_{1}X))$ is a Keller mapping. Then $R(X,Y)=(X^{-N},\,X^{N+1}Y+a_{N}X^{N}+\cdots+a_{1}X)\in R_{0}(F)$, and by Proposition 2.6(i) we have $\mathrm{sing}(H_{R}(X,Y)=0)=\{S_{R}(X,Y)=0\}\cap\mathrm{sing}(R)$. On the other hand, by case 2 in the proof of Theorem 3.4, this implies that $\mathrm{sing}(H_{R}(X,Y)=0)=\emptyset$. This contradicts the fact that the $R$-component of $A(F)$, $\{H_{R}(X,Y)=0\}$, is a singular planar algebraic curve.
Remark 3.6. By Corollary 3.5 with the value $N=1$, we get the result that $\mathbb{C}[V,VU,VU^{2}+U]$ (which equals $I((X^{-1},X^{2}Y-X))$) contains no counterexample to the Jacobian Conjecture. This was originally proved by L. Makar-Limanov (see [3,4]). He used, in a clever way, a grading technique on this algebra, giving the weights $\pm 1$ to the indeterminates $U$ and $V$ respectively. We recall that Pinchuk's counterexample to the Real Jacobian Conjecture is contained in the real version $\mathbb{R}[V,VU,VU^{2}+U]$.
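One direction of the identity $\mathbb{C}[V,VU,VU^{2}+U]\subseteq I((X^{-1},X^{2}Y-X))$ can be checked on generators with sympy: under the substitution $(U,V)=(X^{-1},X^{2}Y-X)$ each generator of Makar-Limanov's algebra becomes a polynomial in $X,Y$ (this sketch verifies the containment only, not the equality):

```python
import sympy as sp

X, Y = sp.symbols('X Y')

# Substitute (U, V) = R(X, Y) = (X**-1, X**2*Y - X) into the generators
# V, V*U, V*U**2 + U of Makar-Limanov's algebra C[V, VU, VU^2 + U].
U, V = X**(-1), X**2*Y - X
gens = [sp.expand(V), sp.expand(V*U), sp.expand(V*U**2 + U)]
print(gens)  # [X**2*Y - X, X*Y - 1, Y]: all three are polynomials in X, Y
```

In particular the negative powers of $X$ cancel completely, e.g. $VU^{2}+U=Y-X^{-1}+X^{-1}=Y$.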

4 A necessary condition on the phantom curves for the surjectivity of the mapping
Proof.
If $F\in\mathrm{Aut}(\mathbb{C}^{2})$, then the claim is true. If $F\notin\mathrm{Aut}(\mathbb{C}^{2})$, then $F$ is a counterexample to the Jacobian Conjecture. In this case it has a non-empty geometric basis $R_{0}(F)$. By pre-composing $F$ with a suitable invertible linear mapping $L:\mathbb{C}^{2}\to\mathbb{C}^{2}$ we can arrange that $F$ satisfies the $Y$-degree condition: $\deg F=\deg_{Y}P=\deg_{Y}Q$. This implies that each $R\in R_{0}(F)$ can be chosen to have the following form: , $\deg\varphi<\alpha+\beta$ and the gcd of the set of $X$-powers that effectively appear in $X^{\alpha+\beta}Y+\varphi(X)$ equals $1$. Also, if $H_{R}(X,Y)=0$ is an implicit representation of the $R$-irreducible component of $A(F)$, then by Theorem 3.1, $(H_{R}\circ G_{R})(X,Y)=X^{\beta-\alpha}S_{R}(X,Y)$. We take all the rational mappings $R\in R_{0}(F)$. As is the tradition in complex analysis, we call the asymptotic values of $F$ which do not belong to its image the Picard exceptional values of $F$. We denote this set by $\mathrm{Picard}(F)$, and for every $R\in R_{0}(F)$ we denote the $R$-Picard exceptional values of $F$ by $\mathrm{Picard}_{R}(F)$. Thus: $\mathrm{Picard}_{R}(F)=\mathrm{Picard}(F)\cap\{H_{R}(X,Y)=0\}$. Hence we have the representation $\mathrm{Picard}(F)=\bigcup_{R\in R_{0}(F)}\mathrm{Picard}_{R}(F)$. As is well known, we have $\mathbb{C}^{2}-F(\mathbb{C}^{2})=\mathrm{Picard}(F)$ and the finiteness $0\le|\mathrm{Picard}(F)|<\infty$. Our theorem is merely the assertion $\mathrm{Picard}(F)=\emptyset$ or, equivalently, $\mathrm{Picard}_{R}(F)=\emptyset$ for all $R\in R_{0}(F)$. We will prove this last assertion. Let us fix an element of the geometric basis of $F$, $R\in R_{0}(F)$. By the above, the difference set is a finite set which consists exactly of those asymptotic values of $F$ which are the Picard exceptional values of $F$. We know that $\{X=0\}\cap\{S_{R}(X,Y)=0\}$ is empty. It follows that $R$ is defined at all the points of the $R$-phantom curve $\{S_{R}(X,Y)=0\}$. Hence, by the definition of the $R$-dual mapping of $F$ we have, Since $R$ is an arbitrary member of the geometric basis $R_{0}(F)$ of $F$, this will imply that $F$ is surjective. Thus we now turn to prove that the corresponding subset of the $R$-phantom curve is bounded.
This is not possible. It is worth giving a second proof of the claim: as in the first proof, $f(T)\to 0$ and $g(T)$ stays bounded. On the other hand, $(f(T),g(T))$ is a parametrization of $L$, which is a component of $S_{R}(X,Y)=0$. Now we have the representation $S_{R}(X,Y)=e_{R}+X\cdot T_{R}(X,Y)$ for some $e_{R}\in\mathbb{C}^{\times}$ and some $T_{R}(X,Y)\in\mathbb{C}[X,Y]$. Thus $S_{R}(f(T),g(T))\equiv 0$, which implies that $e_{R}+f(T)\cdot T_{R}(f(T),g(T))\equiv 0$. But since $f(T)\to 0$ and $g(T)$ stays bounded, we deduce that $e_{R}=0$, which contradicts $e_{R}\in\mathbb{C}^{\times}$. Thus only possibility (2) occurs. In this case $f(T)\to c\in\mathbb{C}^{\times}\cup\{\infty\}$. If $c\in\mathbb{C}^{\times}$ then $g(T)\to c_{1}\in\mathbb{C}$ (otherwise the curve $R(L)=\{(f^{-\alpha},f^{\beta}g+f^{-\alpha}\varphi(f))\}$ is not bounded). This implies that $L=\{(f(T),g(T))\}$ is a bounded curve, which is not possible. We deduce that $f(T)\to\infty$, and We deduce that the asymptotic value of $G_{R}$ along $L$ is $F(0,d)$; in particular, it belongs to the image of $F$, $F(\mathbb{C}^{2})$. Conclusion: the $R$-dual mapping $G_{R}$ of $F$ maps the $R$-phantom curve $S_{R}(X,Y)=0$ into $F(\mathbb{C}^{2})$, and moreover the asymptotic values of $G_{R}$ along the components of $S_{R}(X,Y)=0$ also belong to $F(\mathbb{C}^{2})$. This concludes the proof of the surjectivity of the Keller mapping $F$.
5 More on the structure of the R-phantom curve and a type of a Picard (small) Theorem
Let $F(U,V)$ be a Keller mapping which is not a $\mathbb{C}^{2}$-automorphism. We recall that $\alpha,\beta\in\mathbb{Z}^{+}$, $\beta-\alpha-1>0$, and the gcd of all the $X$-powers that effectively appear in $X^{\alpha+\beta}Y+\Phi(X)$ equals $1$, where $\Phi(X)\in\mathbb{C}[X]$, $\deg\Phi<\alpha+\beta$.
We can further assume that $G_{R}=(G_{1},G_{2})$ is the $R$-dual of $F$. Thus we obtain by differentiation: We are interested in the intersection points of the $R$-phantom curve and of $\mathrm{sing}(R)=\{X=0\}$. Let $(0,Y_{0})$ be such a point. Then $S_{R}(0,Y_{0})=0$. We can represent $S_{R}(X,Y)$ as follows: where $X\mid\Psi(X,Y)$, $\deg_{X}\Psi(X,Y)\le\alpha$, $\deg_{X}h_{3}(X)<\alpha$. By computing the derivatives of $S_{R}(X,Y)$ (in (*)) and substituting $(X,Y)=(0,Y_{0})$ we obtain So far we have used the equation $(\partial H_{R}/\partial V)(G_{R}(0,Y_{0}))=0$. We now turn to the second component of the gradient of $H_{R}$ at the singular point $G_{R}(0,Y_{0})$.
We conclude that Let us suppose that the $R$-phantom curve $S_{R}(X,Y)=0$ intersects $\mathrm{sing}(R)$ in the set of points $(0,Y_{j})$, $0\le j\le N-1$. Then by the above calculation we have, for each $0\le j\le N-1$: We substitute $X=0$ and obtain: Since $Y_{0},\ldots,Y_{N-1}$ is the total set of zeros of $S_{R}(0,Y)$ we obtain:

Proof.
This follows by taking $X\to 0$ and remembering that $\Psi(0,Y)=0$; we obtain the following estimate for $X\to 0$ and $Y$ fixed: The $R$-phantom curve does not intersect $\mathrm{sing}(R)$ iff $\{Y_{0},\ldots,Y_{N-1}\}=\emptyset$, and this is equivalent to: This is equivalent to: This is equivalent to: which is equivalent to We proved the following: for some $h_{6}(X,Y)\in\mathbb{C}[X,Y]$.
Remark 5.4. Theorem 4.1 says that the disjointness condition $\{S_{R}(X,Y)=0\}\cap\mathrm{sing}(R)=\emptyset$ that appears in Theorem 5.3 is sufficient for the surjectivity of the Keller mapping $F$, i.e. for $F(\mathbb{C}^{2})=\mathbb{C}^{2}$. At this point it looks as if the condition given in Theorem 5.3, which is equivalent to $\{S_{R}(X,Y)=0\}\cap\mathrm{sing}(R)=\emptyset$, is improbable to hold for all Keller mappings. The reason is that a priori we only have: This follows immediately from $H_{R}\circ F\in I(R)$. However, according to Theorem 5.3 we would like to have: which is far away from what we have. However, we recall that at the beginning of this section we noticed that: This looks promising. We only have a difference of $1$ between $\alpha$ and $\alpha-1$.

Proof.
Each $R\in R_{0}(F)$ contributes the $R$-Picard exceptional values of $F$ from among the $G_{R}$-images of the intersection points of the two curves $\mathrm{sing}(R)$ and the $R$-phantom curve. These give the entire set of the Picard exceptional values of $F$. This proves the first equation. The second follows by Corollary 3.3, which implies that: How large can the set of the Picard exceptional values of $F$ be?

Proof.
Our starting point will be the result in Theorem 5.5: By the Bezout Theorem, $|\mathrm{sing}(R)\cap\{S_{R}(X,Y)=0\}|\le\deg S_{R}(X,Y)$. In fact, by Proposition 5.1 we have a bound on $|\mathrm{sing}(R)\cap\{S_{R}(X,Y)=0\}|$. We know that $G_{R}^{-1}(G_{R}(\mathrm{sing}(R)))=\mathrm{sing}(R)\cup\{S_{R}(X,Y)=0\}$ and we are led to consider It follows that We need to estimate $\deg G_{R}$. Off $\mathrm{sing}(R)$ we have: Let us take a coordinate of $F$, $P(U,V)=\sum_{i+j\le N}a_{ij}U^{i}V^{j}$. Then on composition with $R$ we get The degree of a generic monomial is $(\beta+1)j-i\alpha$, $i+j\le N$. So we are looking at $\max\{(\beta+1)j-i\alpha\mid i+j=N\}=(\beta+1)\cdot N$. We arrive at the estimate $\deg G_{R}\le(\beta+1)\cdot\deg F$, and hence From this it follows that Theorem 5.6 gives a cubic estimate (in terms of the degree of $F$) for the size of the set of the Picard exceptional values of $F$.
Remark 5.7. The independent interest of the results of this section is that we get a type of Picard (small) Theorem for polynomial étale mappings $K^{2}\to K^{2}$. This result, mostly, does not require the field $K$ to be algebraically closed. It does assume the special form of the elements in the geometric basis of $F$, i.e.
In the algebraically closed case $K=\mathbb{C}$ we can do much better. This will be done in the last (which is the next) section of this paper.
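The degree count behind $\deg G_{R}\le(\beta+1)\cdot\deg F$ can be sanity-checked on a single monomial (our own toy data, not from the paper): composing $P=U^{i}V^{j}$ with $(U,V)=(X^{-\alpha},X^{\beta}Y)$ (dropping $\Phi$, which only contributes lower-degree terms) gives top degree $(\beta+1)j-\alpha i$:

```python
import sympy as sp

X, Y = sp.symbols('X Y')

# Toy degree count: compose the monomial U**i * V**j with
# (U, V) = (X**-alpha, X**beta * Y); the result has total degree
# (beta + 1)*j - alpha*i.  Take alpha = 1, beta = 2, i = 1, j = 2.
alpha, beta, i, j = 1, 2, 1, 2
U, V = X**(-alpha), X**beta * Y
comp = sp.expand(U**i * V**j)
print(comp)  # X**3*Y**2, of total degree 5 = (beta + 1)*j - alpha*i
deg = sp.Poly(comp, X, Y).total_degree()
```

Maximizing $(\beta+1)j-\alpha i$ over $i+j=N$ (at $i=0$, $j=N$) recovers the bound $(\beta+1)N$ used above.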
6 Keller mappings over C in dimension 2 are surjective
Theorem 6.1. Let $F(X,Y)\in\mathbb{C}[X,Y]^{2}$, $F:\mathbb{C}^{2}\to\mathbb{C}^{2}$, be a Keller mapping. Then $F(\mathbb{C}^{2})=\mathbb{C}^{2}$.

Proof.
We are going to study the set of Picard exceptional values of $F$, $\mathbb{C}^{2}-F(\mathbb{C}^{2})$. Let us denote the coordinates of $F$ by $F(X,Y)=(P(X,Y),Q(X,Y))$. Then by definition we have $(a,b)\notin F(\mathbb{C}^{2})$ iff the polynomial system $P(X,Y)=a$, $Q(X,Y)=b$ has no solution. This is equivalent (Hilbert's Nullstellensatz) to the existence of $A_{1}(X,Y),B_{1}(X,Y)\in\mathbb{C}[X,Y]$ such that $A_{1}\cdot(P-a)+B_{1}\cdot(Q-b)\equiv 1$. Since we assumed that $F$ is not onto, it is not a $\mathbb{C}^{2}$-automorphism. Hence $R_{0}(F)\neq\emptyset$. We pre-compose the last identity with $R\in R_{0}(F)$, that $R$ which corresponds to $(a,b)$. We get, Here $G_{R}(U,V)=(G_{1}(U,V),G_{2}(U,V))$ is the $R$-dual mapping of $F$. We clear the denominator $U^{N}$ in the last identity. Thus we get where either $U\nmid A(U,V)$ or $U\nmid B(U,V)$. We know that there exists $Y_{0}\in\mathbb{C}$ such that $S_{R}(0,Y_{0})=0$ and $(a,b)=(G_{1}(0,Y_{0}),G_{2}(0,Y_{0}))$. Thus the above is equivalent to where either $U\nmid A(U,V)$ or $U\nmid B(U,V)$. Let the $R$-Picard exceptional set of $F$ be the set of the $G_{R}(0,Y_{j})$, where $R\in R_{0}(F)$ and $S_{R}(0,Y_{j})=0$. Then the polynomial viewed as a polynomial in $Y$ ($U$, $V$ are parameters that are kept fixed) vanishes exactly at the points $Y_{j}$ (and at no other points). If we denote the multiplicities at the $Y_{j}$ by $m_{j}\ge 1$, then Note that the multiplicities $m_{j}=m_{j}(U,V)$ depend on $(U,V)$ but are locally constant functions defined on $\mathbb{C}^{2}$. Plugging $(U,V)=(0,Y)$ into the last identity we obtain $C(0,Y)\prod_{j}(Y-Y_{j})^{m_{j}}\equiv 0$, hence $C(0,Y)\equiv 0$, and so we have the representation $C(U,V)=U^{l}D(U,V)$, $U\nmid D(U,V)$, $l\ge 1$. Hence where either $U\nmid A(U,V)$ or $U\nmid B(U,V)$, $N\ge 0$, $l\ge 1$, $U\nmid D(U,V)$, $m_{j}\ge 1$. We substitute $U=0$ into (**) and obtain $A(0,V)(G_{1}(0,V)-G_{1}(0,Y))+B(0,V)(G_{2}(0,V)-G_{2}(0,Y))=0$ if $N>0$, and $=1$ if $N=0$.
If $N=0$ we get the contradiction $0=1$ when $V=Y$. If $N>0$, then since $Y$, $U$ and $V$ are independent in (**) and $U$ divides the right-hand side, $U$ must divide the left-hand side $A(U,V)(G_{1}(U,V)-G_{1}(0,Y))+B(U,V)(G_{2}(U,V)-G_{2}(0,Y))$ for every choice of $V$ and $Y$. This forces $U\mid A(U,V)$ and $U\mid B(U,V)$, which contradicts (**). Thus the assumption $\mathbb{C}^{2}-F(\mathbb{C}^{2})\neq\emptyset$ (i.e. $(a,b)\notin F(\mathbb{C}^{2})$) leads to a contradiction, and hence $\mathbb{C}^{2}-F(\mathbb{C}^{2})=\emptyset$.
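The Nullstellensatz step at the start of the proof can be illustrated on a hypothetical non-Keller example: $F=(X,XY)$ misses the value $(a,b)=(0,1)$, and a certificate $A_{1}(P-a)+B_{1}(Q-b)\equiv 1$ exists with $A_{1}=Y$, $B_{1}=-1$:

```python
import sympy as sp

X, Y = sp.symbols('X Y')

# Toy Nullstellensatz certificate: F = (X, X*Y) misses (a, b) = (0, 1),
# since P = 0 forces X = 0 and then Q = 0 != 1.  The identity below
# certifies that the system P = a, Q = b has no solution.
P, Q = X, X*Y
a, b = 0, 1
A1, B1 = Y, -1
cert = sp.expand(A1*(P - a) + B1*(Q - b))
print(cert)  # 1: the ideal (P - a, Q - b) is the whole ring C[X, Y]
```

For a Keller mapping the proof above shows that no such certificate can survive pre-composition with $R\in R_{0}(F)$, which is exactly the contradiction that yields $F(\mathbb{C}^{2})=\mathbb{C}^{2}$.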