
A more intelligent hunt?

What are the prospects for a more intelligent `hunt' procedure? It is interesting that the individual matrix elements of $F(k)$ and $G(k)$ remain linear in $k$ up to a scale of $\sim {\mathsf{L}}^{-1}$, the inverse system length (because the basis functions generally oscillate at wavenumber $k$). This scale is $O(k^{d-1}{\mathsf{A}})$ times larger than the average level-spacing in $k$. This suggests that information about all the tension minima in a $k$-range of $O({\mathsf{L}}^{-1})$ might be contained in $F$ and $G$ and their $k$-derivatives at a single $k$ value.

As a `toy problem' example, imagine that the values of $k$ are sought at which a parameter-dependent symmetric matrix $T(k)$ of order $N$ has a zero eigenvalue. We assume linear dependence $T(k) = T + k S$. The zero-eigenvalue condition reads $(T + k S) {\mathbf x} = {\mathbf 0}$. This is simply the generalized eigenvalue equation $T {\mathbf x} = -k \, S {\mathbf x}$, whose once-off diagonalization predicts all $N$ solutions for $k$. This sounds promising; however, the zero-crossings produced in the eigenvalues of $T(k)$ are linear (they pass through zero with finite slope, changing sign in the process). Unfortunately, the methods of this chapter require detection of eigenvalues of positive definite matrices, which reach (close to) zero in a quadratic fashion (they cannot change sign). Therefore the above trick is no help.
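For concreteness, a minimal numerical sketch of this once-off diagonalization (illustrative only, and not from the original text: random symmetric matrices stand in for $T$ and $S$, and the tolerance is arbitrary) is:

\begin{verbatim}
import numpy as np
from scipy.linalg import eigvals, eigvalsh

# Sketch: find all k such that T + k*S is singular, from one
# generalized eigenvalue decomposition.  T, S are random stand-ins.
rng = np.random.default_rng(0)
N = 6
T = rng.standard_normal((N, N)); T = T + T.T   # symmetric T
S = rng.standard_normal((N, N)); S = S + S.T   # symmetric S

# (T + k S) x = 0  <=>  T x = k (-S) x : a generalized eigenproblem.
k_all = eigvals(T, -S)                 # all N solutions at once
k_real = np.sort(k_all.real[np.abs(k_all.imag) < 1e-8])

# Check: at each predicted k, T + k*S has an eigenvalue at zero,
# which crosses linearly (changes sign), unlike the quadratic
# touching exhibited by the positive definite tension matrices.
for k in k_real:
    lam = eigvalsh(T + k * S)
    print(f"k = {k:+.6f}   eigenvalue nearest zero: "
          f"{lam[np.abs(lam).argmin()]:+.2e}")
\end{verbatim}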

In fact, the above linearization of $F(k)$ and $G(k)$ is deceptive, since their positive-definiteness cannot be maintained without keeping higher powers of $k$. The positive definite nature of $F$ and $G$ arises because each is the square of another matrix (e.g. $F = A^{{\mbox{\tiny T}}} A$, see Appendix G). It is these underlying matrices whose entries can unproblematically be linearized in $k$. Imagine again the toy problem, now with $T(k) = A^{{\mbox{\tiny T}}}(k) A(k)$ and $A(k) = A + k B$. The eigenvalues of $T$ are the squares of the singular values of $A$ [81]. So the problem is now to predict the $k$ values at which singular values of $A(k)$ reach zero (of course, this is the problem common to all Class b) methods, and is therefore of huge interest). Unfortunately the generalized singular value decomposition [81] of $A$ and $B$ does not predict these $k$ values. Even though all the information about them is contained in $A$ and $B$, I am unaware of a suitable decomposition which returns them. This is an area for future research, and would have a huge impact on the large physics and engineering community currently using Class b) methods.
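Lacking such a decomposition, the fallback is exactly the kind of scan this chapter tries to avoid. A minimal sketch (again not from the original text: random rectangular stand-ins for $A$ and $B$, and an arbitrary grid) that locates the minima of the smallest singular value of $A + kB$ is:

\begin{verbatim}
import numpy as np

# Sketch of the fallback: scan k and locate local minima of the
# smallest singular value of A + k*B.  A, B are random stand-ins.
rng = np.random.default_rng(1)
M, N = 10, 6                       # overdetermined, as for F = A^T A
A = rng.standard_normal((M, N))
B = rng.standard_normal((M, N))

ks = np.linspace(-3.0, 3.0, 2001)
smin = np.array([np.linalg.svd(A + k * B, compute_uv=False)[-1]
                 for k in ks])

# Interior local minima of the smallest singular value:
dips = np.nonzero((smin[1:-1] < smin[:-2])
                  & (smin[1:-1] < smin[2:]))[0] + 1
for i in dips:
    print(f"near k = {ks[i]:+.3f}, "
          f"smallest singular value = {smin[i]:.3e}")
\end{verbatim}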

One untested idea on this front is to rewrite the toy problem as $( T + k D + k^2 E) {\mathbf x} = {\mathbf 0}$, where $D = B^{{\mbox{\tiny T}}} A + A^{{\mbox{\tiny T}}} B$ and $E = B^{{\mbox{\tiny T}}} B$. Introducing ${\mathbf y} \equiv k {\mathbf x}$, this nonlinear eigenvalue problem can be converted into a linear one of order $2N$ (see e.g. [161]),

\begin{displaymath}
\left( \begin{array}{cc} 0 & 1 \\ -E^{-1} T & -E^{-1} D \end{array} \right)
\left( \begin{array}{c} {\mathbf x} \\ {\mathbf y} \end{array} \right)
= k \left( \begin{array}{c} {\mathbf x} \\ {\mathbf y} \end{array} \right) .
\end{displaymath} (5.24)

The factor-8 increase in diagonalization time (the matrix order doubles, and dense diagonalization cost scales as the cube of the order) would only be bearable if more than a few $k$-values were returned each time. Tests of this are needed.
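Such a test might look like the following sketch (my own illustration, not the thesis's: random square stand-ins for $A$ and $B$, so that $A + kB$ genuinely becomes singular at certain real $k$, and an arbitrary tolerance):

\begin{verbatim}
import numpy as np

# Sketch of the 2N x 2N linearization (5.24), with y = k*x.
# A, B are random square stand-ins, so E = B^T B is invertible
# and det(A + k*B) = 0 at N (generally complex) values of k.
rng = np.random.default_rng(2)
N = 5
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, N))

T = A.T @ A
D = B.T @ A + A.T @ B
E = B.T @ B
Einv = np.linalg.inv(E)

C = np.block([[np.zeros((N, N)), np.eye(N)],
              [-Einv @ T,        -Einv @ D]])
k_all = np.linalg.eigvals(C)
# det(T + k*D + k^2*E) = det(A + k*B)^2, so each root appears twice.
k_real = np.sort(k_all.real[np.abs(k_all.imag) < 1e-5])

# Independent check: A + k*B should have a vanishing singular value.
for k in k_real:
    smin = np.linalg.svd(A + k * B, compute_uv=False)[-1]
    print(f"k = {k:+.6f}   smallest singular value = {smin:.2e}")
\end{verbatim}

In this square toy case the real companion eigenvalues (each doubled) do reproduce the singular points of $A + kB$; whether the remaining complex eigenvalues carry useful information about near-singular $k$ in the rectangular case of interest is precisely what remains to be tested.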

Figure 5.6: Inverse square-root of the higher generalized eigenvalues of (5.14), as a function of $k$. As before, the minima in the largest eigenvalue correspond to billiard eigenstates. The long-lived nature of the parabolic form of $\lambda_n^{-1}$ is visible as the survival of the straight lines over a range of $O({\mathsf{L}}^{-1})$ around these minima. Note that the avoided crossings are small, tending to zero towards the bottom of the plot.
\begin{figure}\centerline{\epsfig{figure=fig_ipwdm/gev.eps,width=\hsize}}\end{figure}

