 5th IMACS Conference on Iterative Methods in Scientific Computing May 28-31, 2001 Foundation for Research and Technology - Hellas (FORTH) Heraklion, Crete, Greece

Organizers
Apostolos Hadjidimos, Elias Houstis, Emmanuel Vavalis

Hermitian methods for computing eigenvalues
by
Ioannis Koutis
Carnegie Mellon University
Coauthors: Efstratios Gallopoulos (University of Patras)



# Hermitian methods for computing eigenvalues

### I. Koutis (Carnegie Mellon University, jkoutis@cs.cmu.edu) and E. Gallopoulos (University of Patras, stratis@hpclab.ceid.upatras.gr)

In 1998, D. P. O'Leary and G. W. Stewart proposed a simple iterative scheme (SVRQI) for the approximation of simple eigenvalues. If $z$ is an approximation of a simple eigenvalue of a matrix $M$, and $u$ and $v$ are the left and right singular vectors corresponding to the minimum singular value $\sigma_{\min}$ of $zI - M$, then the next approximation is taken to be
$$\hat{z} = \frac{u^H M v}{u^H v}.$$
The method was derived as a variant of the well-known Rayleigh quotient iteration (RQI), from the observation that eigenvectors corresponding to a simple zero eigenvalue are also singular vectors corresponding to a simple zero singular value. A proof that SVRQI is locally quadratically convergent was given.
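The SVRQI step is straightforward to sketch with NumPy; the following is a minimal illustration (the function name, tolerance, and iteration cap are our own choices, not from the paper), using the SVD of $zI - M$ at each step:

```python
import numpy as np

def svrqi(M, z0, tol=1e-12, max_iter=50):
    """Sketch of the SVRQI iteration for approximating a simple eigenvalue.

    At each step, take the SVD of z*I - M, read off the left/right
    singular vectors u, v of the smallest singular value, and set
    the next approximation to (u^H M v) / (u^H v).
    """
    n = M.shape[0]
    z = z0
    for _ in range(max_iter):
        U, s, Vh = np.linalg.svd(z * np.eye(n) - M)
        u = U[:, -1]           # left singular vector for sigma_min
        v = Vh[-1, :].conj()   # right singular vector for sigma_min
        z_new = (u.conj() @ M @ v) / (u.conj() @ v)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```

Started close enough to a simple eigenvalue, the iterates converge quadratically, consistent with the local convergence result mentioned above.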

Also in 1998, we independently proposed the following simple iteration for the computation of simple eigenvalues:
$$\hat{z} = z - \frac{\sigma_{\min}}{|v^H u|} \cdot \frac{v^H u}{|v^H u|}.$$
At first glance our method appears to be different, but a little algebraic manipulation shows that it is equivalent to SVRQI.
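The equivalence can be confirmed numerically: evaluating both formulas at the same point $z$ for a random matrix gives the same next iterate (the seed, size, and test point below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
z = 0.3 + 0.2j

U, s, Vh = np.linalg.svd(z * np.eye(n) - M)
u, v, smin = U[:, -1], Vh[-1, :].conj(), s[-1]

# SVRQI form: (u^H M v) / (u^H v)
z_svrqi = (u.conj() @ M @ v) / (u.conj() @ v)

# Steepest-descent form: z - (sigma_min / |v^H u|) * (v^H u / |v^H u|)
w = v.conj() @ u
z_grad = z - (smin / abs(w)) * (w / abs(w))

# The two agree up to roundoff: since (zI - M)v = sigma_min * u,
# u^H M v = z u^H v - sigma_min, so both equal z - sigma_min / (u^H v).
```

The one-line algebra in the final comment is essentially the manipulation alluded to in the text.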

We derived our scheme in the context of our research on the matrix $\epsilon$-pseudospectrum, which is defined as the set of points $z$ that satisfy $\sigma_{\min}(zI - A) = \epsilon$. For different values of $\epsilon$, the $\epsilon$-pseudospectra form nested curves around the eigenvalues, which can be thought of as the 0-pseudospectrum. A theorem by J. Sun states that the gradient of a simple singular value $\sigma(x, y) = \sigma((x+iy)I - A)$ with left and right singular vectors $u$ and $v$ is given by
$$\nabla\sigma(x, y) = (\mathrm{Re}(v^H u), \mathrm{Im}(v^H u)).$$
The negative gradient of $\sigma_{\min}$ gives the direction of steepest decrease of $\sigma_{\min}$.
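Sun's formula is easy to sanity-check against central finite differences of $\sigma_{\min}$; in the sketch below, the matrix, evaluation point, and step size are arbitrary choices of ours:

```python
import numpy as np

def smin(x, y, A):
    """Smallest singular value of (x + iy)I - A."""
    n = A.shape[0]
    return np.linalg.svd((x + 1j * y) * np.eye(n) - A, compute_uv=False)[-1]

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
x, y, h = 0.4, -0.7, 1e-6

# Analytic gradient from Sun's theorem: (Re(v^H u), Im(v^H u))
U, s, Vh = np.linalg.svd((x + 1j * y) * np.eye(n) - A)
u, v = U[:, -1], Vh[-1, :].conj()
w = v.conj() @ u
grad = np.array([w.real, w.imag])

# Central finite differences of sigma_min in x and y
gx = (smin(x + h, y, A) - smin(x - h, y, A)) / (2 * h)
gy = (smin(x, y + h, A) - smin(x, y - h, A)) / (2 * h)
```

The two columns of numbers agree to the accuracy expected of the finite-difference step.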

It follows that in \epsilon-pseudospectra terms, the iterative step of SVRQI is (as it is now apparent in our formulation) a steepest descent to smaller values of \epsilon, with step size \sigmamin/|vHu|. Our choice of step size is based to an analysis of the behavior of the matrix pseudospectrum. One of the basic results of our analysis is the ''exclusion'' theorem which states that (with the exception of some small sets of points) the open disk with center z and radius \sigmamin/|vHu| does not contain an eigenvalue.
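The exclusion radius is cheap to compute once the SVD is available. The sketch below (setup is illustrative) computes it for a random matrix and compares it with the true distance from $z$ to the spectrum; note that $|v^H u| \le 1$ always, so the exclusion disk is at least as large as the disk of radius $\sigma_{\min}$, which can never contain an eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
M = rng.standard_normal((n, n))
z = 0.5 + 0.5j

U, s, Vh = np.linalg.svd(z * np.eye(n) - M)
u, v, sm = U[:, -1], Vh[-1, :].conj(), s[-1]
radius = sm / abs(v.conj() @ u)   # exclusion radius sigma_min / |v^H u|

eigs = np.linalg.eigvals(M)
dist = np.min(np.abs(eigs - z))   # true distance to the spectrum

# sigma_min <= dist holds for every matrix and every z; the exclusion
# theorem asserts that, outside some small exceptional sets of points,
# even the larger open disk of the radius above contains no eigenvalue.
```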

Our radically different perspective enables us to explain and quantitatively characterize the global convergence of SVRQI. It also suggests that the information collected in the course of a single eigenvalue computation, in the form of exclusion disks, can subsequently be used to "drive" and improve the computation of other eigenvalues. More generally, we show that the exclusion theorem can serve as a very effective tool for a computationally inexpensive but often very tight localization of the matrix spectrum.

Our approach naturally leads to an interesting observation for the class of normal matrices: SVRQI always finds an eigenvalue in exactly one step. We extend this observation by proving that, for any $z$, a single SVD of the matrix $zI - M$ and a few additional computations are enough to give all the eigenvalues of a normal matrix $M$. Recall that RQI never returns an eigenvalue in one step, and that there are cases in which it provably diverges. So, even though SVRQI was derived as a modification of RQI, we could argue that RQI can be seen as a "flawed" version of SVRQI.
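The one-step behavior on normal matrices is easy to observe: build a normal matrix by unitary conjugation of a diagonal, apply a single SVRQI step from an arbitrary point, and the result is already an eigenvalue (the construction and starting point below are our own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
lam = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # chosen spectrum
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
M = Q @ np.diag(lam) @ Q.conj().T   # normal by construction

z = 0.0 + 0.0j  # arbitrary starting point
U, s, Vh = np.linalg.svd(z * np.eye(n) - M)
u, v = U[:, -1], Vh[-1, :].conj()
z1 = (u.conj() @ M @ v) / (u.conj() @ v)

# For a normal M, z1 coincides (up to roundoff) with the eigenvalue
# of M nearest to z.
```

Intuitively, for normal $M$ the singular vectors of $zI - M$ are (up to phase) eigenvectors of $M$, so the Rayleigh-quotient-like step lands exactly on an eigenvalue.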