Elementary symmetric polynomial

  1. Definition

  2. Examples

  3. Properties

  4. Fundamental theorem of symmetric polynomials

      4.1 Proof sketch

      4.2 Alternative proof

  5. See also

  6. References

{{No footnotes|date=January 2017}}

In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial {{math|P}} is given by an expression involving only additions and multiplications of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree {{math|d}} in {{math|n}} variables for each nonnegative integer {{math|d ≤ n}}, and it is formed by adding together all distinct products of {{math|d}} distinct variables.

Definition

The elementary symmetric polynomials in {{math|n}} variables {{math|X1, …, Xn}}, written {{math|ek(X1, …, Xn)}} for {{math|k {{=}} 0, 1, …, n}}, are defined by

<math>\begin{align}
e_0(X_1, \ldots, X_n) &= 1, \\
e_1(X_1, \ldots, X_n) &= \sum_{1 \le j \le n} X_j, \\
e_2(X_1, \ldots, X_n) &= \sum_{1 \le j < k \le n} X_j X_k, \\
e_3(X_1, \ldots, X_n) &= \sum_{1 \le j < k < l \le n} X_j X_k X_l,
\end{align}</math>

and so forth, ending with

<math>e_n(X_1, \ldots, X_n) = X_1 X_2 \cdots X_n.</math>

In general, for {{math|k ≥ 0}} we define

<math>e_k(X_1, \ldots, X_n) = \sum_{1 \le j_1 < j_2 < \cdots < j_k \le n} X_{j_1} X_{j_2} \cdots X_{j_k},</math>
so that {{math|ek(X1, …, Xn) {{=}} 0}} if {{math|k > n}}.

Thus, for each non-negative integer {{mvar|k}} less than or equal to {{mvar|n}} there exists exactly one elementary symmetric polynomial of degree {{mvar|k}} in {{mvar|n}} variables. To form the one that has degree {{mvar|k}}, we take the sum of all products of {{mvar|k}}-subsets of the {{mvar|n}} variables. (By contrast, if one performs the same operation using multisets of variables, that is, taking variables with repetition, one arrives at the complete homogeneous symmetric polynomials.)
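Because each {{math|ek}} is just a sum over {{math|k}}-element subsets, it is straightforward to evaluate at concrete values. The following minimal Python sketch (the helper name {{math|e}} is ours, chosen for illustration) implements the definition directly; note that it automatically reproduces the conventions {{math|e0 {{=}} 1}} (empty product) and {{math|ek {{=}} 0}} for {{math|k > n}} (empty sum).

<syntaxhighlight lang="python">
from itertools import combinations
from math import prod

def e(k, xs):
    """Evaluate e_k at the values xs: the sum, over all k-element
    subsets of xs, of the product of the chosen entries."""
    return sum(prod(c) for c in combinations(xs, k))

print([e(k, [1, 2, 3]) for k in range(5)])  # [1, 6, 11, 6, 0]
</syntaxhighlight>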

Given an integer partition (that is, a finite non-increasing sequence of positive integers) {{math|λ {{=}} (λ1, …, λm)}}, one defines the symmetric polynomial {{math|eλ(X1, …, Xn)}}, also called an elementary symmetric polynomial, by

<math>e_\lambda(X_1, \ldots, X_n) = e_{\lambda_1}(X_1, \ldots, X_n) \cdot e_{\lambda_2}(X_1, \ldots, X_n) \cdots e_{\lambda_m}(X_1, \ldots, X_n).</math>

Sometimes the notation {{math|σk}} is used instead of {{math|ek}}.

Examples

The following lists the {{math|n}} elementary symmetric polynomials for the first four positive values of {{math|n}}. (In every case, {{math|e0 {{=}} 1}} is also one of the polynomials.)

For {{math|n {{=}} 1}}:

<math>e_1(X_1) = X_1.</math>

For {{math|n {{=}} 2}}:

<math>\begin{align}
e_1(X_1, X_2) &= X_1 + X_2, \\
e_2(X_1, X_2) &= X_1 X_2.
\end{align}</math>

For {{math|n {{=}} 3}}:

<math>\begin{align}
e_1(X_1, X_2, X_3) &= X_1 + X_2 + X_3, \\
e_2(X_1, X_2, X_3) &= X_1 X_2 + X_1 X_3 + X_2 X_3, \\
e_3(X_1, X_2, X_3) &= X_1 X_2 X_3.
\end{align}</math>

For {{math|n {{=}} 4}}:

<math>\begin{align}
e_1(X_1, X_2, X_3, X_4) &= X_1 + X_2 + X_3 + X_4, \\
e_2(X_1, X_2, X_3, X_4) &= X_1 X_2 + X_1 X_3 + X_1 X_4 + X_2 X_3 + X_2 X_4 + X_3 X_4, \\
e_3(X_1, X_2, X_3, X_4) &= X_1 X_2 X_3 + X_1 X_2 X_4 + X_1 X_3 X_4 + X_2 X_3 X_4, \\
e_4(X_1, X_2, X_3, X_4) &= X_1 X_2 X_3 X_4.
\end{align}</math>
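The lists above can be regenerated symbolically. A short SymPy sketch (reusing the illustrative helper {{math|e}} from the sketch in the Definition section, which works unchanged on symbolic values) produces the list for {{math|n {{=}} 3}}:

<syntaxhighlight lang="python">
import sympy as sp

X = sp.symbols('X1:4')  # the variables X1, X2, X3
for k in range(4):
    print(f'e_{k} =', sp.expand(e(k, X)))
# e_0 = 1
# e_1 = X1 + X2 + X3
# e_2 = X1*X2 + X1*X3 + X2*X3
# e_3 = X1*X2*X3
</syntaxhighlight>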

Properties

The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity

<math>\prod_{j=1}^{n} (\lambda - X_j) = \lambda^n - e_1(X_1, \ldots, X_n)\,\lambda^{n-1} + e_2(X_1, \ldots, X_n)\,\lambda^{n-2} - \cdots + (-1)^n e_n(X_1, \ldots, X_n).</math>

That is, when we substitute numerical values for the variables {{math|X1, X2, …, Xn}}, we obtain the monic univariate polynomial (with variable {{math|λ}}) whose roots are the values substituted for {{math|X1, X2, …, Xn}} and whose coefficients are, up to sign, the elementary symmetric polynomials. These relations between the roots and the coefficients of a polynomial are called Vieta's formulas.
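The identity can be checked symbolically; a minimal SymPy sketch for {{math|n {{=}} 3}} (again reusing the illustrative helper {{math|e}} from the Definition section):

<syntaxhighlight lang="python">
import sympy as sp

lam = sp.Symbol('lambda')
X = sp.symbols('X1:4')
lhs = sp.expand(sp.prod([lam - x for x in X]))
rhs = sum((-1)**k * e(k, X) * lam**(3 - k) for k in range(4))
assert sp.expand(lhs - rhs) == 0  # both sides agree identically
</syntaxhighlight>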

The characteristic polynomial of a square matrix is an example of an application of Vieta's formulas. The roots of this polynomial are the eigenvalues of the matrix. When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain, up to sign, the coefficients of the characteristic polynomial, which are invariants of the matrix. In particular, the trace (the sum of the diagonal elements) is the value of {{math|e1}} at the eigenvalues, and thus the sum of the eigenvalues. Similarly, the determinant is, up to sign, the constant term of the characteristic polynomial; more precisely, the determinant is the value of {{math|en}} at the eigenvalues. Thus the determinant of a square matrix is the product of its eigenvalues.
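A numerical check of these facts, in a minimal NumPy sketch with an arbitrarily chosen symmetric {{math|3 × 3}} matrix:

<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
lam = np.linalg.eigvals(A)                   # eigenvalues of A
ek = lambda k: sum(np.prod(c) for c in combinations(lam, k))
assert np.isclose(ek(1), np.trace(A))        # e1(eigenvalues) = trace
assert np.isclose(ek(3), np.linalg.det(A))   # e3(eigenvalues) = determinant
</syntaxhighlight>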

The set of elementary symmetric polynomials in {{math|n}} variables generates the ring of symmetric polynomials in {{math|n}} variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring {{math|ℤ[e1(X1, …, Xn), …, en(X1, …, Xn)]}}. (See below for a more general statement and proof.) This fact is one of the foundations of invariant theory. For other systems of symmetric polynomials with a similar property, see power sum symmetric polynomials and complete homogeneous symmetric polynomials.

Fundamental theorem of symmetric polynomials

For any commutative ring {{math|A}}, denote the ring of symmetric polynomials in the variables {{math|X1, …, Xn}} with coefficients in {{math|A}} by {{math|A[X1, …, Xn]Sn}}. This is a polynomial ring in the {{math|n}} elementary symmetric polynomials {{math|ek(X1, …, Xn)}} for {{math|k {{=}} 1, …, n}}. (Note that {{math|e0}} is not among these polynomials; since {{math|e0 {{=}} 1}}, it cannot be a member of any set of algebraically independent elements.)

This means that every symmetric polynomial {{math|P(X1, …, Xn) ∈ A[X1, …, Xn]Sn}} has a unique representation

<math>P(X_1, \ldots, X_n) = Q\bigl(e_1(X_1, \ldots, X_n), \ldots, e_n(X_1, \ldots, X_n)\bigr)</math>

for some polynomial {{math|Q ∈ A[Y1, …, Yn]}}. Another way of saying the same thing is that the ring homomorphism that sends {{math|Yk}} to {{math|ek(X1, …, Xn)}} for {{math|k {{=}} 1, …, n}} defines an isomorphism between {{math|A[Y1, …, Yn]}} and {{math|A[X1, …, Xn]Sn}}.
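For example, for {{math|n {{=}} 2}} the symmetric polynomial <math>X_1^2 + X_2^2</math> has the unique representation

<math>X_1^2 + X_2^2 = e_1(X_1, X_2)^2 - 2\,e_2(X_1, X_2) = (X_1 + X_2)^2 - 2X_1X_2,</math>

so here <math>Q(Y_1, Y_2) = Y_1^2 - 2Y_2</math>.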

Proof sketch

The theorem may be proved for symmetric homogeneous polynomials by a double mathematical induction with respect to the number of variables {{math|n}} and, for fixed {{math|n}}, with respect to the degree of the homogeneous polynomial. The general case then follows by splitting an arbitrary symmetric polynomial into its homogeneous components (which are again symmetric).

In the case {{math|n {{=}} 1}} the result is obvious because every polynomial in one variable is automatically symmetric.

Assume now that the theorem has been proved for all polynomials in {{math|m < n}} variables and all symmetric polynomials in {{math|n}} variables with degree {{math|< d}}. Every homogeneous symmetric polynomial {{math|P}} in {{math|A[X1, …, Xn]Sn}} can be decomposed as a sum of two homogeneous symmetric polynomials

<math>P(X_1, \ldots, X_n) = P_{\text{lacunary}}(X_1, \ldots, X_n) + X_1 \cdots X_n \cdot Q(X_1, \ldots, X_n).</math>
Here the "lacunary part" {{math|Placunary}} is defined as the sum of all monomials in {{math|P}} which contain only a proper subset of the {{math|n}} variables {{math|X1, …, Xn}}, i.e., where at least one variable {{math|Xj}} is missing.

Because {{math|P}} is symmetric, the lacunary part is determined by its terms containing only the variables {{math|X1, …, Xn − 1}}, i.e., which do not contain {{math|Xn}}. More precisely: If {{math|A}} and {{math|B}} are two homogeneous symmetric polynomials in {{math|X1, …, Xn}} having the same degree, and if the coefficient in {{math|A}} of each monomial which contains only the variables {{math|X1, …, Xn − 1}} equals the corresponding coefficient of {{math|B}}, then {{math|A}} and {{math|B}} have equal lacunary parts. (This is because every monomial which can appear in a lacunary part must lack at least one variable, and thus can be transformed by a permutation of the variables into a monomial which contains only the variables {{math|X1, …, Xn − 1}}.)

But the terms of {{math|P}} which contain only the variables {{math|X1, …, Xn − 1}} are precisely the terms that survive the operation of setting {{math|Xn}} to 0, so their sum equals {{math|P(X1, …, Xn − 1, 0)}}, which is a symmetric polynomial in the variables {{math|X1, …, Xn − 1}} that we shall denote by {{math|P̃(X1, …, Xn − 1)}}. By the inductive assumption, this polynomial can be written as

<math>\tilde{P}(X_1, \ldots, X_{n-1}) = \tilde{Q}(\sigma_{1,n-1}, \ldots, \sigma_{n-1,n-1})</math>

for some polynomial {{math|Q̃}}. Here the doubly indexed {{math|σj,n − 1}} denote the elementary symmetric polynomials in {{math|n − 1}} variables.

Consider now the polynomial

<math>R(X_1, \ldots, X_n) := \tilde{Q}(\sigma_{1,n}, \ldots, \sigma_{n-1,n}).</math>

Then {{math|R(X1, …, Xn)}} is a symmetric polynomial in {{math|X1, …, Xn}}, of the same degree as {{math|Placunary}}, which satisfies

<math>R(X_1, \ldots, X_{n-1}, 0) = \tilde{Q}(\sigma_{1,n-1}, \ldots, \sigma_{n-1,n-1}) = P(X_1, \ldots, X_{n-1}, 0)</math>

(the first equality holds because setting {{math|Xn}} to 0 in {{math|σj,n}} gives {{math|σj,n − 1}}, for all {{math|j < n}}). In other words, the coefficient in {{math|R}} of each monomial which contains only the variables {{math|X1, …, Xn − 1}} equals the corresponding coefficient of {{math|P}}. As shown above, this means that the lacunary part of {{math|R}} coincides with that of the original polynomial {{math|P}}. Therefore the difference {{math|P − R}} has no lacunary part, and is therefore divisible by the product {{math|X1···Xn}} of all variables, which equals the elementary symmetric polynomial {{math|σn,n}}. Then writing {{math|P − R {{=}} σn,nQ}}, the quotient {{math|Q}} is a homogeneous symmetric polynomial of degree less than {{math|d}} (in fact, of degree at most {{math|d − n}}), which by the inductive assumption can be expressed as a polynomial in the elementary symmetric functions. Combining the representations for {{math|P − R}} and {{math|R}}, one finds a polynomial representation for {{math|P}}.
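To illustrate the induction step, here is a small worked case (our illustration, not part of the original argument): take {{math|n {{=}} 2}} and <math>P = X_1^2 + X_2^2</math>. Setting {{math|X2}} to 0 gives <math>\tilde{P}(X_1) = X_1^2 = \sigma_{1,1}^2</math>, so <math>\tilde{Q}(Y) = Y^2</math> and <math>R = \sigma_{1,2}^2 = (X_1 + X_2)^2</math>. The difference <math>P - R = -2X_1X_2 = -2\sigma_{2,2}</math> indeed has no lacunary part, and we conclude <math>P = \sigma_{1,2}^2 - 2\sigma_{2,2}</math>.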

The uniqueness of the representation can be proved inductively in a similar way. (It is equivalent to the fact that the {{math|n}} polynomials {{math|e1, …, en}} are algebraically independent over the ring {{math|A}}.) The fact that the polynomial representation is unique implies that {{math|A[X1, …, Xn]Sn}} is isomorphic to {{math|A[Y1, …, Yn]}}.

Alternative proof

The following proof is also inductive, but does not involve polynomials other than those symmetric in {{math|X1, …, Xn}}, and also leads to a fairly direct procedure for effectively writing a symmetric polynomial as a polynomial in the elementary symmetric ones. Assume the symmetric polynomial to be homogeneous of degree {{mvar|d}}; different homogeneous components can be decomposed separately. Order the monomials in the variables {{mvar|Xi}} lexicographically, where the individual variables are ordered {{math|X1 > … > Xn}}; in other words, the dominant term of a polynomial is the one with the highest occurring power of {{math|X1}}, and among those the one with the highest power of {{math|X2}}, and so on. Furthermore, parametrize all products of elementary symmetric polynomials that have degree {{mvar|d}} (they are in fact homogeneous) by partitions of {{mvar|d}}, as follows. Order the individual elementary symmetric polynomials {{math|ei(X1, …, Xn)}} in the product so that those with larger indices {{mvar|i}} come first, then build for each such factor a column of {{mvar|i}} boxes, and arrange those columns from left to right to form a Young diagram containing {{mvar|d}} boxes in all. The shape of this diagram is a partition of {{mvar|d}}, and each partition {{mvar|λ}} of {{mvar|d}} arises for exactly one product of elementary symmetric polynomials, which we shall denote by {{math|eλt(X1, …, Xn)}} (the {{mvar|t}} is present only because traditionally this product is associated to the transpose partition of {{mvar|λ}}). The essential ingredient of the proof is the following simple property, which uses multi-index notation {{math|X^λ}} for monomials in the variables {{math|Xi}}.

Lemma. The leading term of {{math|eλt(X1, …, Xn)}} is {{math|X^λ}}.

Proof. The leading term of the product is the product of the leading terms of each factor (this is true whenever one uses a monomial order, like the lexicographic order used here), and the leading term of the factor {{math|ei(X1, …, Xn)}} is clearly {{math|X1X2···Xi}}. To count the occurrences of the individual variables in the resulting monomial, fill the column of the Young diagram corresponding to the factor concerned with the numbers {{math|1, …, i}} of the variables; then all boxes in the first row contain 1, those in the second row contain 2, and so forth, which means the leading term is {{math|X^λ}}.
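For instance, for {{math|λ {{=}} (2, 1)}} the Young diagram has columns of heights 2 and 1, so {{math|eλt {{=}} e2e1}}; its leading term is <math>(X_1X_2) \cdot X_1 = X_1^2 X_2 = X^{(2,1)}</math>, as the lemma asserts.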

Now one proves by induction on the leading monomial in lexicographic order that any nonzero homogeneous symmetric polynomial {{mvar|P}} of degree {{mvar|d}} can be written as a polynomial in the elementary symmetric polynomials. Since {{mvar|P}} is symmetric, its leading monomial has weakly decreasing exponents, so it is some {{math|X^λ}} with {{mvar|λ}} a partition of {{mvar|d}}. Let the coefficient of this term be {{mvar|c}}; then {{math|P − c eλt(X1, …, Xn)}} is either zero or a symmetric polynomial with a strictly smaller leading monomial. Writing this difference inductively as a polynomial in the elementary symmetric polynomials, and adding back {{math|c eλt(X1, …, Xn)}}, one obtains the sought-for polynomial expression for {{mvar|P}}.
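This induction step translates into an effective algorithm. The following SymPy sketch (all function and symbol names are ours, chosen for illustration) rewrites a symmetric polynomial as a polynomial in placeholder symbols {{math|E1, …, En}} standing for {{math|e1, …, en}}: it repeatedly reads off the leading term {{math|c X^λ}} in lexicographic order and subtracts {{math|c eλt}}, computing {{math|eλt}} as the product of one factor {{math|ei}} for each column of height {{math|i}} in the Young diagram of {{mvar|λ}}, i.e. {{math|∏i ei^(λi − λi+1)}}.

<syntaxhighlight lang="python">
import sympy as sp
from itertools import combinations

def elem(k, xs):
    """e_k as a symbolic polynomial in the variables xs."""
    return sum(sp.prod(c) for c in combinations(xs, k))

def to_elementaries(p, xs):
    """Express a symmetric polynomial p in the variables xs as a
    polynomial in E1..En, where Ei stands for e_i(xs)."""
    n = len(xs)
    E = sp.symbols(f'E1:{n + 1}')
    result = sp.Integer(0)
    p = sp.expand(p)
    while p != 0:
        lead, c = sp.Poly(p, *xs).terms(order='lex')[0]  # leading exponents, coefficient
        lam = list(lead) + [0]          # weakly decreasing by symmetry: a partition
        term_x, term_e = sp.Integer(1), sp.Integer(1)
        for i in range(n):              # e_{lambda^t} = prod_i e_i^(lam_i - lam_{i+1})
            m = lam[i] - lam[i + 1]
            if m:
                term_x *= elem(i + 1, xs) ** m
                term_e *= E[i] ** m
        result += c * term_e
        p = sp.expand(p - c * term_x)   # strictly smaller leading monomial
    return result

X = sp.symbols('X1:4')
print(to_elementaries(X[0]**2 + X[1]**2 + X[2]**2, X))  # E1**2 - 2*E2
</syntaxhighlight>

Termination is exactly the content of the induction: by the lemma, each subtraction strictly lowers the leading monomial in lexicographic order.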

The fact that this expression is unique, or equivalently that all the products (monomials) {{math|eλt(X1, …, Xn)}} of elementary symmetric polynomials are linearly independent, is also easily proved. The lemma shows that all these products have different leading monomials, and this suffices: if a nontrivial linear combination of the {{math|eλt(X1, …, Xn)}} were zero, one focuses on the contribution with nonzero coefficient and with (as a polynomial in the variables {{math|Xi}}) the largest leading monomial; the leading term of this contribution cannot be cancelled by any other contribution of the linear combination, which gives a contradiction.

See also

  • Symmetric polynomial
  • Complete homogeneous symmetric polynomial
  • Schur polynomial
  • Newton's identities
  • MacMahon Master theorem
  • Symmetric function
  • Representation theory

References

  • {{cite book|authorlink=I. G. Macdonald|last=Macdonald|first=I. G.|date=1995|title=Symmetric Functions and Hall Polynomials|edition=2nd|location=Oxford|publisher=Clarendon Press|ISBN=0-19-850450-0}}
  • {{cite book|authorlink=Richard P. Stanley|last=Stanley|first=Richard P.|date=1999|title=Enumerative Combinatorics, Vol. 2|location=Cambridge|publisher=Cambridge University Press|ISBN=0-521-56069-1}}

[[Category:Homogeneous polynomials]]
[[Category:Symmetric functions]]
[[Category:Articles containing proofs]]
