Hypergeometric Expansion#
This page describes how the function hyperexpand()
and related code
work. For usage, see the documentation of the simplify module.
Hypergeometric Function Expansion Algorithm#
This section describes the algorithm used to expand hypergeometric functions. Most of it is based on the papers [Roach1996] and [Roach1997].
Recall that the hypergeometric function is (initially) defined as

$${}_pF_q\left(\begin{matrix} a_1, \ldots, a_p \\ b_1, \ldots, b_q \end{matrix}\middle| z\right) = \sum_{n=0}^\infty \frac{(a_1)_n \cdots (a_p)_n}{(b_1)_n \cdots (b_q)_n} \frac{z^n}{n!},$$

where $(a)_n = a (a+1) \cdots (a+n-1)$ denotes the rising factorial (Pochhammer symbol).
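As a quick sanity check (not part of the algorithm itself), the defining series can be compared against SymPy's `hyper` on truncated expansions:

```python
from sympy import symbols, hyper, rf, factorial, series, simplify

a, b, c, z = symbols("a b c z")

# Truncate the defining series: sum of (a)_n (b)_n / (c)_n * z**n / n!
truncated = sum(rf(a, n)*rf(b, n)/rf(c, n) * z**n/factorial(n)
                for n in range(4))

# Expand SymPy's 2F1 to the same order and compare
expanded = series(hyper((a, b), (c,), z), z, 0, 4).removeO()
print(simplify(truncated - expanded))
```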
It turns out that there are certain differential operators that can change the
$a_p$ and $b_q$ parameters by integers. If a sequence of such
operators is known that converts the set of indices $a_r^0$
and $b_s^0$ into $a_p$ and $b_q$, then we shall say the pair
$(a_p, b_q)$ is reachable from
$(a_r^0, b_s^0)$. Our general strategy is thus as
follows: given a set $a_p, b_q$ of parameters, try to look up an origin
$a_r^0, b_s^0$ for which we know an expression, and then apply the
sequence of differential operators to the known expression to find an
expression for the Hypergeometric function we are interested in.
Notation#
In the following, the symbol $a$ will always denote a numerator parameter
and the symbol $b$ will always denote a denominator parameter. The
subscripts $p, q, r, s$ denote vectors of that length, so e.g. $a_p$
denotes a vector of $p$ numerator parameters. The subscripts
$i$ and $j$ denote “running indices”, so they should usually be used in
conjunction with a “for all $i$”. E.g. $a_i < 4$ for all $i$.
Uppercase subscripts $I$ and $J$ denote a chosen, fixed index. So
for example $a_I > 0$ is true if the inequality holds for the one index $I$
we are currently interested in.
Incrementing and decrementing indices#
Suppose $a_i \ne 0$. Set $A(a_i) = \frac{z}{a_i}\frac{\mathrm{d}}{\mathrm{d}z} + 1$. It is then easy to show that

$$A(a_i) \, {}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right) = {}_pF_q\left(\begin{matrix} a_p + e_i \\ b_q \end{matrix}\middle| z\right),$$

where $e_i$ is the $i$-th unit vector. Similarly for $b_j \ne 1$ we set $B(b_j) = \frac{z}{b_j - 1}\frac{\mathrm{d}}{\mathrm{d}z} + 1$ and find

$$B(b_j) \, {}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right) = {}_pF_q\left(\begin{matrix} a_p \\ b_q - e_j \end{matrix}\middle| z\right).$$

Thus we can increment upper and decrement lower indices at will, as long as we
don’t go through zero. The $A(a_i)$ and $B(b_j)$ are called shift
operators.
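The action of a shift operator can be checked with SymPy on truncated series; a small sketch for ${}_2F_1$:

```python
from sympy import symbols, hyper, diff, series, simplify

a, b, c, z = symbols("a b c z")

# A(a) = (z/a) * d/dz + 1 applied to 2F1(a, b; c; z) ...
f = hyper((a, b), (c,), z)
shifted = z/a*diff(f, z) + f

# ... should equal 2F1(a + 1, b; c; z); compare truncated series
lhs = series(shifted, z, 0, 4).removeO()
rhs = series(hyper((a + 1, b), (c,), z), z, 0, 4).removeO()
print(simplify(lhs - rhs))
```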
It is also easy to show that

$$\frac{\mathrm{d}}{\mathrm{d}z} \, {}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right) = \frac{a_1 \cdots a_p}{b_1 \cdots b_q} \, {}_pF_q\left(\begin{matrix} a_p + 1 \\ b_q + 1 \end{matrix}\middle| z\right),$$

where $a_p + 1$ is the vector $a_1 + 1, a_2 + 1, \ldots$ and similarly for $b_q + 1$.
Combining this with the shift operators, we arrive at one form of the
Hypergeometric differential equation:

$$\left[\frac{\mathrm{d}}{\mathrm{d}z} \prod_{j=1}^q B(b_j) - \frac{a_1 \cdots a_p}{(b_1 - 1) \cdots (b_q - 1)} \prod_{i=1}^p A(a_i)\right] {}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right) = 0.$$

This holds if all shift operators are defined, i.e. if no $a_i = 0$
and no $b_j = 1$. Clearing denominators and multiplying through by $z$ we
arrive at the following equation:

$$\left[z \frac{\mathrm{d}}{\mathrm{d}z} \prod_{j=1}^q \left(z \frac{\mathrm{d}}{\mathrm{d}z} + b_j - 1\right) - z \prod_{i=1}^p \left(z \frac{\mathrm{d}}{\mathrm{d}z} + a_i\right)\right] {}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right) = 0.$$

Even though our derivation does not show it,
it can be checked that this equation holds whenever the ${}_pF_q$ is
defined.
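The last form of the equation is easy to test on an example; the following sketch checks it for ${}_2F_1$ on a truncated series, writing `theta` for the operator $z\frac{\mathrm{d}}{\mathrm{d}z}$:

```python
from sympy import symbols, hyper, diff, series, simplify

a, b, c, z = symbols("a b c z")

def theta(expr):
    # the operator theta = z * d/dz
    return z*diff(expr, z)

f = hyper((a, b), (c,), z)
# [theta*(theta + c - 1) - z*(theta + a)*(theta + b)] f = 0
lhs = theta(theta(f) + (c - 1)*f) - z*(theta(theta(f) + b*f) + a*(theta(f) + b*f))
print(simplify(series(lhs, z, 0, 4).removeO()))
```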
Notice that, under suitable conditions on $a_p, b_q$, each of the
operators $A(a_i)$, $B(b_j)$ and $z \frac{\mathrm{d}}{\mathrm{d}z}$
can be expressed in terms of $A(a_I)$ or $B(b_J)$. Our next aim is to write the Hypergeometric differential
equation as follows:

$$\left[XX \, A(a_I) - r\right] {}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right) = 0,$$

for some operator $XX$ and some constant $r$ to be determined. If $r \ne 0$, then we can write this as

$$\frac{1}{r} XX \, A(a_I) \, {}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right) = {}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right),$$

and so $\frac{1}{r} XX$ undoes the shifting of $A(a_I)$,
whence it will be called an inverse-shift operator.
Now $A(a_I)$ exists if $a_I \ne 0$, and then
$z \frac{\mathrm{d}}{\mathrm{d}z} = a_I A(a_I) - a_I$. Observe also that all the
operators $A(a_i)$, $B(b_j)$ and $z \frac{\mathrm{d}}{\mathrm{d}z}$ commute. We have

$$z \prod_{i=1}^p \left(z \frac{\mathrm{d}}{\mathrm{d}z} + a_i\right) = z \left(\prod_{i \ne I} \left(z \frac{\mathrm{d}}{\mathrm{d}z} + a_i\right)\right) a_I A(a_I),$$

so this gives us the first half of $XX$. The other half does not have such a
nice expression. We find

$$z \frac{\mathrm{d}}{\mathrm{d}z} \prod_{j=1}^q \left(z \frac{\mathrm{d}}{\mathrm{d}z} + b_j - 1\right) = \left(a_I A(a_I) - a_I\right) \prod_{j=1}^q \left(a_I A(a_I) - a_I + b_j - 1\right).$$

Since the first half had no constant term (viewed as a polynomial in $A(a_I)$), we infer
$r = a_I \prod_{j=1}^q \left(b_j - 1 - a_I\right)$.
This tells us under which conditions we can “un-shift” $A(a_I)$, namely
when $a_I \ne 0$ and $r \ne 0$. Substituting $a_I - 1$
for $a_I$ then tells us under what conditions we can decrement the index
$a_I$. Doing a similar analysis for $B(b_J)$, we arrive at the
following rules:

- An index $a_I$ can be decremented if $a_I \ne 1$ and $a_I \ne b_j$ for all $b_j$.
- An index $b_J$ can be incremented if $b_J \ne 0$ and $b_J \ne a_i$ for all $a_i$.
Combined with the conditions (stated above) for the existence of shift operators, we have thus established the rules of the game!
Reduction of Order#
Notice that, quite trivially, if $a_I = b_J$, we have

$${}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right) = {}_{p-1}F_{q-1}\left(\begin{matrix} a_p^* \\ b_q^* \end{matrix}\middle| z\right),$$

where $a_p^*$ means $a_p$ with $a_I$ omitted, and similarly for $b_q^*$. We call this reduction of
order.
In fact, we can do even better. If $a_I - b_J \in \mathbb{Z}_{>0}$, then
it is easy to see that $\frac{(a_I)_n}{(b_J)_n}$ is actually a polynomial
in $n$. It is also easy to see that
$\left(z\frac{\mathrm{d}}{\mathrm{d}z}\right) z^n = n z^n$. Combining these two remarks
we find:

If $a_I - b_J \in \mathbb{Z}_{>0}$, then there exists a polynomial $p(n)$
(of degree $a_I - b_J$) such that $\frac{(a_I)_n}{(b_J)_n} = p(n)$ and

$${}_pF_q\left(\begin{matrix} a_p \\ b_q \end{matrix}\middle| z\right) = p\left(z\frac{\mathrm{d}}{\mathrm{d}z}\right) {}_{p-1}F_{q-1}\left(\begin{matrix} a_p^* \\ b_q^* \end{matrix}\middle| z\right).$$

Thus any set of parameters $a_p, b_q$ is reachable from a set of
parameters $c_r, d_s$ where $c_i - d_j \in \mathbb{Z}$
implies $c_i < d_j$. Such a set of parameters $c_r, d_s$ is called
suitable. Our database of known formulae should only contain suitable origins.
The reasons are twofold: firstly, working from suitable origins is easier, and
secondly, a formula for a non-suitable origin can be deduced from a lower order
formula, and we should put this one into the database instead.
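The claim about the quotient of Pochhammer symbols is easy to illustrate with SymPy's rising factorial `rf`; a small sketch with $a_I - b_J = 2$ (the symbols `n` and `b` here are purely for illustration):

```python
from sympy import symbols, rf, gamma, simplify

n, b = symbols("n b", positive=True)

# If a_I = b_J + 2, the quotient (a_I)_n / (b_J)_n is a polynomial in n
ratio = (rf(b + 2, n)/rf(b, n)).rewrite(gamma)
poly = simplify(ratio)
print(poly)
```

The result is $(b+n)(b+n+1)/(b(b+1))$, a degree-2 polynomial in $n$ up to a constant factor.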
Moving Around in the Parameter Space#
It remains to investigate the following question: suppose $a_p, b_q$ and
$a_p^0, b_q^0$ are both suitable, and also $a_i - a_i^0 \in \mathbb{Z}$,
$b_j - b_j^0 \in \mathbb{Z}$. When is $(a_p, b_q)$
reachable from $(a_p^0, b_q^0)$? It is clear that we can treat all
parameters independently that are incongruent mod 1. So assume that $a_i$
and $b_j$ are congruent to $r$ mod 1, for all $i$
and $j$. The same then follows for $a_i^0$
and $b_j^0$.
If $r \ne 0$, then any such $(a_p, b_q)$
is reachable from any $(a_p^0, b_q^0)$. To see this notice that there exist constants
$c$, $c^0$, congruent mod 1, such that
$a_i < c < b_j$ for all $i$ and $j$, and similarly
$a_i^0 < c^0 < b_j^0$. If $n = c - c^0 > 0$ then
we first inverse-shift all the $b_j^0$ $n$ times up, and then
similarly shift up all the $a_i^0$ $n$ times. If $n < 0$ then
we first inverse-shift down the $a_i^0$ and then shift down the
$b_j^0$. This reduces to the case $c = c^0$. But evidently we can
now shift or inverse-shift around the $a_i^0$ arbitrarily so long as we
keep them less than $c$, and similarly for the $b_j^0$ so long as
we keep them bigger than $c$. Thus $(a_p, b_q)$
is reachable from $(a_p^0, b_q^0)$.
If $r = 0$ then the problem is slightly more involved. WLOG no parameter
is zero. We now have one additional complication: no parameter can ever move
through zero. Hence $(a_p, b_q)$ is reachable from $(a_p^0, b_q^0)$ if
and only if the number of $a_i < 0$ equals the number of $a_i^0 < 0$, and similarly for the $b_j$
and $b_j^0$. But in a suitable set
of parameters, all $b_j > 0$! This is because the Hypergeometric function
is undefined if one of the $b_j$ is a non-positive integer and all $a_i$
are smaller than the $b_j$. Hence the number of $b_j < 0$ is
always zero.
We can thus associate to every suitable set of parameters $a_p, b_q$,
where no $a_i = 0$, the following invariants:

- For every $r \in [0, 1)$ the number $\alpha_r$ of parameters $a_i \equiv r \pmod{1}$, and similarly the number $\beta_r$ of parameters $b_j \equiv r \pmod{1}$.
- The number $\gamma$ of integers $a_i$ with $a_i < 0$.
The above reasoning shows that $(a_p, b_q)$ is reachable from
$(a_p^0, b_q^0)$ if and only if the invariants $\alpha_r$, $\beta_r$ and $\gamma$ all
agree. Thus in particular “being reachable from” is a symmetric relation on
suitable parameters without zeros.
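In practice this means hyperexpand succeeds not only for tabulated origins, but for everything reachable from them. A small illustration: the closed form of ${}_2F_1(1, 1; 2; z)$ is classical, and ${}_2F_1(2, 1; 3; z)$ has parameters reachable from it by shift operators:

```python
from sympy import symbols, hyper, hyperexpand, log, simplify

z = symbols("z")

# 2F1(1, 1; 2; z) = -log(1 - z)/z (a classical closed form)
origin = hyperexpand(hyper((1, 1), (2,), z))
print(origin)

# 2F1(2, 1; 3; z) is reachable from it, so hyperexpand finds
# a closed form for it as well
reached = hyperexpand(hyper((2, 1), (3,), z))
print(reached)
```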
Applying the Operators#
If all goes well then for a given set of parameters we find an origin in our
database for which we have a nice formula. We now have to apply (potentially)
many differential operators to it. If we do this blindly then the result will
be very messy. This is because with Hypergeometric type functions, the
derivative is usually expressed as a sum of two contiguous functions. Hence if
we compute $N$ derivatives, then the answer will involve $2^N$
contiguous functions! This is clearly undesirable. In fact we know from the
Hypergeometric differential equation that we need at most $\max(p, q+1)$
contiguous functions to express all derivatives.
Hence instead of differentiating blindly, we will work with a
$\mathbb{C}(z)$-module basis: for an origin $a_r^0, b_s^0$
we either store
(for particularly pretty answers) or compute a set of $N$
functions (typically $N = \max(r, s+1)$) with the property that the derivative of
any of them is a $\mathbb{C}(z)$-linear combination of them. In formulae,
we store a vector $B$
of $N$ functions, a matrix $M$
and a vector $C$ (the latter two with entries in $\mathbb{C}(z)$), with
the following properties:

$${}_rF_s\left(\begin{matrix} a_r^0 \\ b_s^0 \end{matrix}\middle| z\right) = C B, \qquad z\frac{\mathrm{d}}{\mathrm{d}z} B = M B.$$
Then we can compute as many derivatives as we want and we will always end up
with a $\mathbb{C}(z)$-linear combination of at most $N$ special
functions.
As hinted above, $B$, $M$ and $C$ can either all be stored
(for particularly pretty answers) or computed from a single ${}_rF_s$
formula.
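As a toy illustration of such a basis (a made-up two-function example, not an actual database entry), the pair $\cosh(2\sqrt{z})$ and $\sqrt{z}\sinh(2\sqrt{z})$ is closed under $z\frac{\mathrm{d}}{\mathrm{d}z}$:

```python
from sympy import symbols, Matrix, Rational, cosh, sinh, sqrt, diff, simplify, zeros

z = symbols("z")

B = Matrix([cosh(2*sqrt(z)), sqrt(z)*sinh(2*sqrt(z))])
M = Matrix([[0, 1],
            [z, Rational(1, 2)]])

# z * dB/dz = M * B, so every derivative of C*B stays in the span of B
residual = (z*diff(B, z) - M*B).applyfunc(simplify)
print(residual)
```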
Loose Ends#
This describes the bulk of the hypergeometric function algorithm. There are a few further tricks, described in the hyperexpand.py source file. The extension to Meijer G-functions is also described there.
Meijer G-Functions of Finite Confluence#
Slater’s theorem essentially evaluates a $G$-function as a sum of residues.
If all poles are simple, the resulting series can be recognised as
hypergeometric series. Thus a $G$-function can be evaluated as a sum of
Hypergeometric functions.
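For instance, the simplest case with a single series of simple poles evaluates to a single (here exponential) term; a quick illustration with SymPy:

```python
from sympy import symbols, meijerg, hyperexpand, exp, simplify

z = symbols("z")

# G^{1,0}_{0,1}(z | -; 0): the residues of Gamma(-s) z^s at s = 0, 1, 2, ...
# sum to a 0F0 series, i.e. an exponential
g = meijerg([[], []], [[0], []], z)
res = hyperexpand(g)
print(res)
```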
If the poles are not simple, the resulting series are not hypergeometric. This
is known as the “confluent” or “logarithmic” case (the latter because the
resulting series tend to contain logarithms). The answer depends in a
complicated way on the multiplicities of various poles, and there is no
accepted notation for representing it (as far as I know).
However if there are only finitely many
multiple poles, we can evaluate the function as a sum of hypergeometric
functions, plus finitely many extra terms. I could not find any good reference
for this, which is why I work it out here.
Recall the general setup. We define

$$G(z) = \frac{1}{2\pi i} \int_L \frac{\prod_{j=1}^m \Gamma(b_j - s) \prod_{j=1}^n \Gamma(1 - a_j + s)}{\prod_{j=m+1}^q \Gamma(1 - b_j + s) \prod_{j=n+1}^p \Gamma(a_j - s)} z^s \mathrm{d}s,$$

where $L$ is a contour starting and ending at $+\infty$, enclosing all of the
poles of $\Gamma(b_j - s)$ for $j = 1, \ldots, m$ once in the negative
direction, and no other poles. Also the integral is assumed absolutely
convergent.
In what follows, for any complex numbers $a, b$, we write $a \equiv b \pmod{1}$ if
and only if there exists an integer $k$ such that $a - b = k$. Thus there are
double poles iff $b_i \equiv b_j \pmod{1}$ for some $i \ne j \le m$.
We now assume that whenever $b_j \equiv a_i \pmod{1}$ for $j \le m$, $i > n$,
then $b_j < a_i$. This means that no quotient of the relevant gamma functions
is a polynomial, and can always be achieved by “reduction of order”. Fix a
complex number $c$ such that the set $\{b_j \mid j \le m,\; b_j \equiv c \pmod{1}\}$ is
not empty. Enumerate this set as $b, b + k_1, \ldots, b + k_u$, with $k_i$
non-negative integers. Enumerate similarly
$\{a_j \mid j > n,\; a_j \equiv c \pmod{1}\}$ as $b + l_1, \ldots, b + l_v$.
Then $l_i > k_j$ for all $i$ and $j$. For finite confluence, we need to assume
$v \ge u$ for all such $c$.
Let $c_1, \ldots, c_w$ be distinct mod 1
and exhaust the congruence classes
of the $b_j$ with $j \le m$. I claim

$$G(z) = -\sum_{i=1}^w \left(F_i(z) + R_i(z)\right),$$

where $F_i$ is a hypergeometric function and $R_i$ is a finite sum, both
to be specified later. Indeed corresponding to every $c_i$ there is
a sequence of poles, at most finitely many of them multiple poles. This is where
the $i$-th term comes from.
Hence fix again $c$, enumerate the relevant $b_j$ as
$b, b + k_1, \ldots, b + k_u$. We will look at the pole series
corresponding to $b + k_u$. The other $b_j$
are not treated specially. The
corresponding gamma functions have poles at (potentially) $s = b + k_u + t$
for $t = 0, 1, 2, \ldots$. For $t \ge l_v - k_u$, the pole of the integrand is simple. We thus set

$$F(z) = \sum_{t \ge l_v - k_u} \operatorname{res}_{s = b + k_u + t}\, \Delta(s) z^s,$$

where $\Delta(s)$ denotes the quotient of gamma functions in the definition of
$G(z)$; as in Slater’s theorem, these residues at simple poles combine into a
hypergeometric series.
We finally need to investigate the other poles, at $s = b + k_u + t$ for
$0 \le t < l_v - k_u$, where the integrand has a pole of higher multiplicity.
The residue is computed by expanding every factor of the integrand around the
pole: each singular gamma function contributes a simple pole with known
residue, the remaining gamma functions contribute their Taylor expansions
(bringing in polygamma values), $z^s$ contributes powers of $\log z$, and in
the resulting products one omits the terms we treated specially. We thus
arrive at finitely many extra terms of the form $C \, z^{b + k_u + t} P(\log z)$,
where $P$ is a polynomial and $C$ designates the factor in the residue
independent of $t$; their sum is $R(z)$.
(This result can also be written in slightly simpler form by converting
all the $k_i$ and $l_i$ back to the original parameters
$a_j$, $b_j$, but doing so is going to require still more
notation and is not helpful for computation.)
Extending The Hypergeometric Tables#
Adding new formulae to the tables is straightforward. At the top of the file
sympy/simplify/hyperexpand.py
, there is a function called
add_formulae()
. Nested in it are defined two helpers,
add(ap, bq, res)
and addb(ap, bq, B, C, M)
, as well as dummys
a
, b
, c
, and z
.
The first step in adding a new formula is by using add(ap, bq, res)
. This
declares hyper(ap, bq, z) == res
. Here ap
and bq
may use the
dummys a
, b
, and c
as free symbols. For example the well-known formula

$${}_1F_0\left(\begin{matrix} -a \\ - \end{matrix}\middle| z\right) = (1 - z)^a$$

is declared by the following line:
add((-a, ), (), (1-z)**a)
.
From the information provided, the matrices $B$, $C$ and $M$ will be computed,
and the formula is now available when expanding hypergeometric functions.
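Once a formula is in the table, it (and everything reachable from it) is used automatically; for example, the ${}_1F_0$ entry above covers any ${}_1F_0$ (a quick check):

```python
from sympy import symbols, hyper, hyperexpand, Rational, simplify

z = symbols("z")

# 1F0(3/2;; z) matches the table entry with a = -3/2
res = hyperexpand(hyper((Rational(3, 2),), (), z))
print(res)
```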
Next the test file
sympy/simplify/tests/test_hyperexpand.py
should be run,
in particular the test test_formulae()
. This will test the newly added
formula numerically. If it fails, there is (presumably) a typo in what was
entered.
Since all newly-added formulae are probably relatively complicated, chances
are that the automatically computed basis is rather suboptimal (there is no
good way of testing this, other than observing very messy output). In this
case the matrices $B$, $C$ and $M$ should be computed by hand. Then the helper
addb
can be used to declare a hypergeometric formula with hand-computed
basis.
An example#
Because the explanation so far might be very theoretical and difficult to
understand, we walk through an explicit example now. We take the Fresnel
function $C(z)$, which obeys the following hypergeometric representation:

$$C(z) = z \cdot {}_1F_2\left(\begin{matrix} \frac{1}{4} \\ \frac{1}{2}, \frac{5}{4} \end{matrix}\middle| -\frac{\pi^2 z^4}{16}\right).$$
First we try to add this formula to the lookup table by using the
(simpler) function add(ap, bq, res)
. The first two arguments
are simply the lists containing the parameter sets of ${}_1F_2$.
The
res
argument is a little bit more complicated. We only know $C(z)$
in terms of ${}_1F_2(\ldots \mid f(z))$ with
$f$ a function of $z$, in our case

$$f(z) = -\frac{\pi^2 z^4}{16}.$$
What we need is a formula where the hypergeometric function has
only $z$ as its argument, ${}_1F_2(\ldots \mid z)$. We
introduce the new complex symbol $w$ and search for a function $g(w)$
such that

$$f(g(w)) = w$$

holds. Then we can replace every $z$ in $C(z)$ by $g(w)$.
In the case of our example the function $g$ could look like

$$g(w) = \frac{2}{\sqrt{\pi}} \exp\left(\frac{i \pi}{4}\right) w^{\frac{1}{4}}.$$
We get these functions mainly by guessing and testing the result. Hence
we proceed by computing $f(g(w))$ (and simplifying naively)

$$f(g(w)) = -\frac{\pi^2 g(w)^4}{16} = -\frac{\pi^2}{16} \cdot \frac{16}{\pi^2} \exp\left(i \pi\right) w = w$$

and indeed get back $w$. (In case of branched functions we have to be aware of
branch cuts. In that case we take $w$ to be a positive real number and check
the formula. If what we have found works for positive $w$, then just replace
exp
inside any branched
function by exp_polar
and what
we get is right for all $w$.) Hence we can write the formula as

$$C(g(w)) = g(w) \cdot {}_1F_2\left(\begin{matrix} \frac{1}{4} \\ \frac{1}{2}, \frac{5}{4} \end{matrix}\middle| w\right)$$

and trivially

$${}_1F_2\left(\begin{matrix} \frac{1}{4} \\ \frac{1}{2}, \frac{5}{4} \end{matrix}\middle| w\right) = \frac{C(g(w))}{g(w)} = \frac{\sqrt{\pi} \exp\left(-\frac{i \pi}{4}\right)}{2 w^{\frac{1}{4}}} C\left(\frac{2}{\sqrt{\pi}} \exp\left(\frac{i \pi}{4}\right) w^{\frac{1}{4}}\right),$$

which is exactly what is needed for the third parameter,
res
, in add
. Finally, the whole function call to add
this rule to the table looks like:
add([S(1)/4],
[S(1)/2, S(5)/4],
fresnelc(exp(pi*I/4)*root(z,4)*2/sqrt(pi)) / (exp(pi*I/4)*root(z,4)*2/sqrt(pi))
)
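The declared identity can also be spot-checked numerically before running the test suite (a sanity check only; the test point $w = 3/10$ is arbitrary):

```python
from sympy import S, rf, factorial, fresnelc, exp, I, pi, sqrt, root, Abs

w = S(3)/10  # arbitrary test point

# Sum the 1F2 series directly from the definition (it converges everywhere)
lhs = sum(rf(S(1)/4, n)/(rf(S(1)/2, n)*rf(S(5)/4, n))*w**n/factorial(n)
          for n in range(20)).evalf()

# The res expression from the add(...) call above, evaluated at the same point
g = exp(I*pi/4)*root(w, 4)*2/sqrt(pi)
rhs = (fresnelc(g)/g).evalf()

print(Abs(lhs - rhs).evalf())
```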
Using this rule we will find that it works, but the results are not really nice
in terms of simplicity and the number of special function instances involved.
We can obtain much better results by adding the formula to the lookup table
in another way. For this we use the (more complicated) function addb(ap, bq, B, C, M)
.
The first two arguments are again the lists containing the parameter sets of
${}_1F_2$. The remaining three are the matrices mentioned earlier
on this page.
We know that the $n$-th derivative can be
expressed as a linear combination of lower order derivatives. The matrix $B$
contains the basis $\{B_0, B_1, \ldots\}$
and is of shape $n \times 1$. The best way to get $B$
is to take the first $n$
derivatives of the expression for ${}_1F_2$
and take out useful pieces. In our case we find that
$n = \max(1, 2 + 1) = 3$. For computing the derivatives,
we have to use the operator $z \frac{\mathrm{d}}{\mathrm{d}z}$. The
first basis element $B_0$
is set to the expression for ${}_1F_2$
from above:

$$B_0 = \frac{\sqrt{\pi} \exp\left(-\frac{i \pi}{4}\right)}{2 z^{\frac{1}{4}}} C\left(\frac{2}{\sqrt{\pi}} \exp\left(\frac{i \pi}{4}\right) z^{\frac{1}{4}}\right)$$
Next we compute $z \frac{\mathrm{d}}{\mathrm{d}z} B_0$. For this we can
directly use SymPy!
>>> from sympy import Symbol, sqrt, exp, I, pi, fresnelc, root, diff, expand
>>> z = Symbol("z")
>>> B0 = sqrt(pi)*exp(-I*pi/4)*fresnelc(2*root(z,4)*exp(I*pi/4)/sqrt(pi))/\
... (2*root(z,4))
>>> z * diff(B0, z)
z*(cosh(2*sqrt(z))/(4*z) - sqrt(pi)*exp(-I*pi/4)*fresnelc(2*z**(1/4)*exp(I*pi/4)/sqrt(pi))/(8*z**(5/4)))
>>> expand(_)
cosh(2*sqrt(z))/4 - sqrt(pi)*exp(-I*pi/4)*fresnelc(2*z**(1/4)*exp(I*pi/4)/sqrt(pi))/(8*z**(1/4))
Formatting this result nicely we obtain

$$B_1' = \frac{1}{4} \cosh\left(2 \sqrt{z}\right) - \frac{\sqrt{\pi} \exp\left(-\frac{i \pi}{4}\right)}{8 z^{\frac{1}{4}}} C\left(\frac{2}{\sqrt{\pi}} \exp\left(\frac{i \pi}{4}\right) z^{\frac{1}{4}}\right)$$
Computing the second derivative we find
>>> from sympy import (Symbol, cosh, sqrt, pi, exp, I, fresnelc, root,
... diff, expand)
>>> z = Symbol("z")
>>> B1prime = cosh(2*sqrt(z))/4 - sqrt(pi)*exp(-I*pi/4)*\
... fresnelc(2*root(z,4)*exp(I*pi/4)/sqrt(pi))/(8*root(z,4))
>>> z * diff(B1prime, z)
z*(-cosh(2*sqrt(z))/(16*z) + sinh(2*sqrt(z))/(4*sqrt(z)) + sqrt(pi)*exp(-I*pi/4)*fresnelc(2*z**(1/4)*exp(I*pi/4)/sqrt(pi))/(32*z**(5/4)))
>>> expand(_)
sqrt(z)*sinh(2*sqrt(z))/4 - cosh(2*sqrt(z))/16 + sqrt(pi)*exp(-I*pi/4)*fresnelc(2*z**(1/4)*exp(I*pi/4)/sqrt(pi))/(32*z**(1/4))
which can be printed as

$$B_2' = \frac{\sqrt{z}}{4} \sinh\left(2 \sqrt{z}\right) - \frac{1}{16} \cosh\left(2 \sqrt{z}\right) + \frac{\sqrt{\pi} \exp\left(-\frac{i \pi}{4}\right)}{32 z^{\frac{1}{4}}} C\left(\frac{2}{\sqrt{\pi}} \exp\left(\frac{i \pi}{4}\right) z^{\frac{1}{4}}\right)$$
We see the common pattern and can collect the pieces. Hence it makes sense to
choose $B_1$ and $B_2$ as follows:

$$B = \begin{pmatrix} B_0 \\ B_1 \\ B_2 \end{pmatrix} = \begin{pmatrix} \frac{\sqrt{\pi} \exp\left(-\frac{i \pi}{4}\right)}{2 z^{\frac{1}{4}}} C\left(\frac{2}{\sqrt{\pi}} \exp\left(\frac{i \pi}{4}\right) z^{\frac{1}{4}}\right) \\ \cosh\left(2 \sqrt{z}\right) \\ \sqrt{z} \sinh\left(2 \sqrt{z}\right) \end{pmatrix}$$

(This is in contrast to the basis that would
have been computed automatically if we had used just
add(ap, bq, res)
.)
Because it must hold that ${}_1F_2\left(\ldots \middle| z\right) = C B$,
the entries of $C$
are obviously

$$C = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}.$$
Finally we have to compute the entries of the matrix $M$
such that $z \frac{\mathrm{d}}{\mathrm{d}z} B = M B$
holds. This is easy.
We already computed the first part $z \frac{\mathrm{d}}{\mathrm{d}z} B_0$
above. This gives us the first row of
$M$. For the second row we have:
>>> from sympy import Symbol, cosh, sqrt, diff
>>> z = Symbol("z")
>>> B1 = cosh(2*sqrt(z))
>>> z * diff(B1, z)
sqrt(z)*sinh(2*sqrt(z))
and for the third one
>>> from sympy import Symbol, sinh, sqrt, expand, diff
>>> z = Symbol("z")
>>> B2 = sinh(2*sqrt(z))*sqrt(z)
>>> expand(z * diff(B2, z))
sqrt(z)*sinh(2*sqrt(z))/2 + z*cosh(2*sqrt(z))
Now we have computed the entries of this matrix to be

$$M = \begin{pmatrix} -\frac{1}{4} & \frac{1}{4} & 0 \\ 0 & 0 & 1 \\ 0 & z & \frac{1}{2} \end{pmatrix}.$$
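The hand-computed basis can be verified in one go (a consistency check only, not part of the table entry itself):

```python
from sympy import (Symbol, Matrix, Rational, sqrt, exp, I, pi, cosh, sinh,
                   fresnelc, root, diff, simplify, zeros)

z = Symbol("z")

B = Matrix([sqrt(pi)*exp(-I*pi/4)*fresnelc(2*root(z, 4)*exp(I*pi/4)/sqrt(pi))/(2*root(z, 4)),
            cosh(2*sqrt(z)),
            sqrt(z)*sinh(2*sqrt(z))])
M = Matrix([[Rational(-1, 4), Rational(1, 4), 0],
            [0, 0, 1],
            [0, z, Rational(1, 2)]])

# z * dB/dz - M*B must vanish identically
residual = (z*diff(B, z) - M*B).applyfunc(simplify)
print(residual)
```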
Note that the entries of $C$ and $M$
should typically be
rational functions in $z$, with rational coefficients. This is all
we need to do in order to add a new formula to the lookup table for
hyperexpand
.
Implemented Hypergeometric Formulae#
A vital part of the algorithm is a relatively large table of hypergeometric function representations. The following automatically generated list contains all the representations implemented in SymPy (of course many more are derived from them). These formulae are mostly taken from [Luke1969] and [Prudnikov1990]. They are all tested numerically.
References#
[Roach1996] Kelly B. Roach. Hypergeometric Function Representations. In: Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation, pages 301-308, New York, 1996. ACM.
[Roach1997] Kelly B. Roach. Meijer G Function Representations. In: Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, pages 205-211, New York, 1997. ACM.
[Luke1969] Luke, Y. L. (1969), The Special Functions and Their Approximations, Volume 1.
[Prudnikov1990] A. P. Prudnikov, Yu. A. Brychkov and O. I. Marichev (1990). Integrals and Series: More Special Functions, Vol. 3, Gordon and Breach Science Publishers.