Hodge index theorem

In mathematics, the Hodge index theorem for an algebraic surface V determines the signature of the intersection pairing on the algebraic curves C on V. It says, roughly speaking, that the space spanned by such curves (up to linear equivalence) contains a one-dimensional subspace on which the pairing is positive definite (not uniquely determined), and decomposes as a direct sum of one such one-dimensional subspace and a complementary subspace on which the pairing is negative definite.

In a more formal statement, specify that V is a non-singular projective surface, and let H be the divisor class on V of a hyperplane section of V in a given projective embedding. Then the intersection H · H = d, where d is the degree of V (in that embedding). Let D be the vector space of rational divisor classes on V, up to algebraic equivalence. The dimension of D is finite and is usually denoted by ρ(V). The Hodge index theorem says that the subspace spanned by H in D has a complementary subspace on which the intersection pairing is negative definite. Therefore the signature (often also called index) is (1, ρ(V) − 1).
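
Written out in symbols, the statement above amounts to the following LaTeX sketch; the notation H^⊥ for the orthogonal complement of H under the intersection pairing is introduced here for convenience and does not appear in the original text:

\[
  D \;=\; \operatorname{NS}(V) \otimes_{\mathbb{Z}} \mathbb{Q},
  \qquad \dim_{\mathbb{Q}} D \;=\; \rho(V),
  \qquad H \cdot H \;=\; d \;>\; 0,
\]
\[
  D \;=\; \mathbb{Q}H \,\oplus\, H^{\perp},
  \qquad x \cdot x \;<\; 0 \quad \text{for every nonzero } x \in H^{\perp},
\]

so that the intersection pairing on D has signature (1, ρ(V) − 1).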

The abelian group of divisor classes up to algebraic equivalence is now called the Néron-Severi group; it is known to be a finitely-generated abelian group, and the result is about its tensor product with the rational number field. Therefore ρ(V) is equally the rank of the Néron-Severi group (which can have a non-trivial torsion subgroup, on occasion).
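
A minimal worked example (the surface here is chosen for illustration and is not mentioned in the original text): for the quadric surface V = P¹ × P¹, the Néron-Severi group is free of rank 2, generated by the classes f₁, f₂ of the two rulings, so ρ(V) = 2 and the signature predicted by the theorem is (1, 1):

\[
  \operatorname{NS}(\mathbb{P}^1 \times \mathbb{P}^1) \;\cong\; \mathbb{Z} f_1 \oplus \mathbb{Z} f_2,
  \qquad f_1 \cdot f_1 \;=\; f_2 \cdot f_2 \;=\; 0,
  \qquad f_1 \cdot f_2 \;=\; 1,
\]
\[
  \text{intersection matrix }
  \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
  \qquad \text{eigenvalues } \pm 1,
  \qquad \text{signature } (1,\,1) \;=\; (1,\ \rho(V) - 1).
\]

Under the Segre embedding in P³ the hyperplane class is H = f₁ + f₂, with H · H = 2 (the degree of the quadric); its orthogonal complement is spanned by f₁ − f₂, on which the pairing takes the value (f₁ − f₂) · (f₁ − f₂) = −2. Any other ample class would serve equally well as the positive direction, illustrating that the positive-definite line is not uniquely determined.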

This result was proved in the 1930s by W. V. D. Hodge, for varieties over the complex numbers, after it had for some time been a conjecture of the Italian school of algebraic geometry (in particular of Francesco Severi, who in this case showed that ρ < ∞). Hodge's methods were the topological ones brought in by Lefschetz. The result holds over general (algebraically closed) fields.
