The matrix representation of quantum mechanics

We use the matrix representation of quantum mechanics to approximate the energy spectrum of an anharmonic oscillator.

We make use of the particle-in-a-box basis. The box size has to be chosen sufficiently large that the truncation of x-space to a finite domain is justified.

An alternative would have been to choose a harmonic oscillator basis (as done in Quantum Mechanics using Maple).

We construct the eigenfunctions using the new LinearAlgebra package of Maple 6. At the end we compare the approximate eigenfunction with a numerical solution to the differential-equation eigenvalue problem.

We solve the problem by calculating matrix elements in the coordinate representation. Note, however, that worksheets exist to carry out the calculations based on the commutation relations of quantum mechanics: commut1.mws (Maple 5) and define.mws (Maple 6).

We start with the basis. The box size will put limits on the accuracy of the higher-lying eigenvalues/eigenfunctions. We also choose a truncation size for the matrix eigenvalue problem.

> restart; Digits:=15: with(LinearAlgebra):

> X:=8; N:=20;

X := 8

N := 20

> uB:=n->if is(n,even) then sqrt(1/X)*sin(n*Pi*x/(2*X)) else sqrt(1/X)*cos(n*Pi*x/(2*X)) end if:

> plot([uB(1),uB(2),uB(3),uB(4)],x=-X..X,color=[red,blue,green,black]);

[Maple Plot]

> int(uB(2)^2,x=-X..X);

1

The basis states are normalized; they represent the modes that the box can accommodate (n-1 counts the number of nodes inside the box, and the waves vanish at the box edges +X and -X). What are the eigenenergies? We have kinetic energy only, and evaluate it over the allowed range; the result agrees with the textbook formula E[n] = n^2*Pi^2/(8*X^2) (in units where hbar = m = 1). The kinetic-energy matrix will be diagonal in our basis, as the basis functions are eigenfunctions of the kinetic-energy operator.

> Tkin:=psi->int(psi*(-1/2*diff(psi,x$2)),x=-X..X);

Tkin := proc (psi) options operator, arrow; int(-1/2*psi*diff(psi,`$`(x,2)),x = -X .. X) end proc

> [Tkin(uB(1)),Tkin(uB(2)),Tkin(uB(3)),Tkin(uB(4))];

[1/512*Pi^2, 1/128*Pi^2, 9/512*Pi^2, 1/32*Pi^2]

> seq(n^2*Pi^2/8/X^2,n=1..4);

1/512*Pi^2, 1/128*Pi^2, 9/512*Pi^2, 1/32*Pi^2

Now let us define our potential of interest:

> V:=x^4;

V := x^4

We define matrix elements where, again, space is restricted to a finite interval. This is imposed by our choice of basis! We allow two different functions in bra and ket.

> Vpot:=(phi,psi)->int(expand(phi*psi*V),x=-X..X);

Vpot := proc (phi, psi) options operator, arrow; int(expand(phi*psi*V),x = -X .. X) end proc

We are ready to assemble the Hamiltonian matrix:

> HM:=Matrix(N):

We make use of the symmetry of the Hamiltonian matrix: this saves almost a factor of 2 in computation time.

> for i from 1 to N do
    for j from 1 to i do
      HM[i,j]:=Vpot(uB(i),uB(j));
      if i=j then HM[i,j]:=HM[i,j]+Tkin(uB(i)) fi;
    od;
  od:

> for i from 1 to N do: for j from i+1 to N do: HM[i,j]:=HM[j,i]: od: od:

> print(SubMatrix(HM,1..4,1..4));

[4 x 4 submatrix of HM]

All integrals were calculated in closed form. The matrix has a sparse structure: the chosen potential respects parity symmetry, and the basis consists of even/odd functions. Therefore, one could solve the even- and odd-parity eigenvalue problems separately. We keep the treatment general, however, so that one may insert a non-symmetric potential function above.
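As a quick spot-check of this block structure (a sketch, not part of the original worksheet flow): the potential matrix element between the even basis function uB(1) and the odd basis function uB(2) must vanish, since the integrand is odd.

> Vpot(uB(1),uB(2));  # even * odd * even integrand is odd -- integrates to 0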

We cannot expect the matrix diagonalization to be carried out symbolically. Therefore, we switch to floating-point evaluation:

> HMf:=map(evalf,HM):

> evals:=Eigenvalues(HMf, output='list'):

> ev_s:=sort(map(Re,evals));

ev_s := [.670845440760000855, 2.40980154796841317, ...

Exercise 1:

Carry out matrix diagonalizations with submatrices of the N-by-N Hamiltonian matrix, and observe the behaviour of the low-lying eigenvalues as a function of truncation size.
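A possible starting point for this exercise (a sketch only; the truncation sizes in the list are arbitrary choices):

> for k in [5,10,15,20] do
    evk:=sort(map(Re,Eigenvalues(map(evalf,SubMatrix(HM,1..k,1..k)),output='list')));
    print(k,evk[1],evk[2]);   # watch the two lowest eigenvalues settle as k grows
  od: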

Now for the eigenfunctions. We need a sorting procedure to arrange the results of the eigenvector calculation in order of increasing eigenvalue.

> VE:=Eigenvectors(HMf,output='list'):

> Vp:=[seq([Re(VE[i][1]),VE[i][2],map(Re,VE[i][3])],i=1..nops(VE))]:

> Min:=proc(x,y)
    if type(x,numeric) and type(y,numeric) then
      if x<=y then RETURN(true) else RETURN(false) fi;
    elif type(x,list) and type(y,list) and type(x[1],numeric) and type(y[1],numeric) then
      if x[1]<=y[1] then RETURN(true) else RETURN(false) fi;
    elif convert(x,string)<=convert(y,string) then RETURN(true)
    else RETURN(false)
    fi;
  end:

> VEs:=sort(Vp,Min):

Suppose that we would like to see the eigenfunctions corresponding to the four lowest-lying eigenvalues. The eigenvector for a given eigenvalue contains the expansion coefficients for the expansion of the eigenstate in terms of the chosen basis states.

> for i from 1 to 4 do:

> psi0:=add(VEs[i][3][j]*uB(j),j=1..N):

> No:=1/sqrt(int(expand(psi0^2),x=-X..X));

> phi_a[i]:=add(No*VEs[i][3][j]*uB(j),j=1..N): od:

> plot([seq(phi_a[i],i=1..4)],x=-5..5,color=[red,blue,green,black]);

[Maple Plot]

The four functions should have 0,1,2,3 nodes. The additional nodes at large x are clearly an artefact of the calculation.

Let us check the accuracy of these approximate eigenfunctions one by one:

> ev_s[1];

.670845440760000855

We can construct a shooting-method solution: we use dsolve in numeric mode to integrate the Schroedinger equation outward from the origin.

For the even-parity states we choose boundary conditions u(0)=1, D(u)(0)=0; for the odd-parity states, u(0)=0, D(u)(0)=1.

We adjust the trial value of the energy to ensure that the wavefunction vanishes at the 'boundary' (where the potential grows large).

This 'boundary' value is state-dependent. One iterates the procedure by hand, as sketched below; one can also write a bisection algorithm to automate the eigenvalue search (this is done in the last part of this worksheet).
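To illustrate the hand iteration before automating it, one can overlay solutions for several trial energies (a sketch; the trial values are illustrative choices surrounding the matrix eigenvalue found above):

> PL:=NULL:
> for E_trial in [0.60, 0.67, 0.75] do
    solE:=dsolve({-1/2*diff(u(x),x$2)+(V-E_trial)*u(x)=0, u(0)=1, D(u)(0)=0},
                 u(x), numeric, output=listprocedure):
    u_E:=subs(solE,u(x)):
    PL:=PL, plot('u_E(x)', x=0..2.5):   # off-eigenvalue solutions curve away from the axis
  od:
> plots[display](PL, view=[0..2.5,-2..2]);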

> IC:=u(0)=1,D(u)(0)=0;

IC := u(0) = 1, D(u)(0) = 0

> E_t:=0.668;

> SE:=-1/2*diff(u(x),x$2)+(V-E_t)*u(x)=0;

> sol:=dsolve({SE,IC},u(x),numeric,output=listprocedure):

> u_s:=subs(sol,u(x)):

E_t := .668

SE := -1/2*diff(u(x),`$`(x,2))+(x^4-.668)*u(x) = 0

> P1:=plot('u_s(x)',x=0..2.5): plots[display](P1);

[Maple Plot]

Now we change the normalization of the result of the matrix diagonalization to make the comparison:

> P2:=plot(phi_a[1]/subs(x=0,phi_a[1]),x=0..4,color=blue):

> plots[display](P1,P2);

[Maple Plot]

Exercise 2:

Check the accuracy of the eigenenergies of the first and second excited states, and compare the graphs of the eigenfunctions as obtained by matrix diagonalization versus numerical integration.

Exercise 3:

Check the results obtained for different choices of the box size that defines the free-particle basis.

It is possible to assess the accuracy of the matrix diagonalization without referring to the numerical calculation. One can calculate the expectation value of the Hamiltonian in the coordinate representation for a given eigenstate; this agrees with the eigenvalue obtained from the matrix diagonalization. Then one can ask about the distribution of energies in the state: the deviation from the average, Delta E = sqrt(<H^2> - <H>^2), is calculated from the matrix element of the square of the Hamiltonian.

The functions to calculate these are defined below. However, they work only when the truncation size of the matrix is relatively small. Otherwise too much memory is consumed in the evaluation of the integrals.

> Havg:=psi->int(simplify(psi*(-1/2*diff(psi,x$2)+V*psi)),x=-X..X);

Havg := proc (psi) options operator, arrow; int(simplify(psi*(-1/2*diff(psi,`$`(x,2))+V*psi)),x = -X .. X) end proc

> E1_avg:=Havg(phi_a[1]);

E1_avg := .670845440759

> H2me:=psi->int(simplify((-1/2*diff(psi,x$2)+V*psi)^2),x=-X..X);

H2me := proc (psi) options operator, arrow; int(simplify((-1/2*diff(psi,`$`(x,2))+V*psi)^2),x = -X .. X) end proc

> Edev:=sqrt(abs(E1_avg^2-H2me(phi_a[1])));

Edev := .266222705721850

The deviation is much larger than what we would have expected from our comparison with the numerical value. Nevertheless, the number does track the accuracy of the given state: when the matrix size is increased, the deviation from the average energy decreases. The large deviation likely comes from the x-range where the approximate wavefunction oscillates. This can be verified by restricting the calculation to a smaller x-range where the true ground-state eigenfunction is not yet negligible, as sketched below.
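A sketch of such a restricted check (the range boundary 3 is an illustrative choice based on the plots above, and the helper names Havg_r and H2me_r are ours, not from the worksheet; the integrals may still be slow for N = 20):

> Havg_r:=(psi,a)->int(simplify(psi*(-1/2*diff(psi,x$2)+V*psi)),x=-a..a):
> H2me_r:=(psi,a)->int(simplify((-1/2*diff(psi,x$2)+V*psi)^2),x=-a..a):
> sqrt(abs(Havg_r(phi_a[1],3.)^2-H2me_r(phi_a[1],3.)));  # expected to come out much smaller than Edev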

Another way to explore the accuracy of the matrix diagonalization result is to graph the local energy in position space. For an eigenstate, the function f(x) = (H psi)/psi should equal the eigenvalue at all x.

> f:=psi->(-1/2*diff(psi,x$2))/psi+V;

f := proc (psi) options operator, arrow; -1/2*diff(psi,`$`(x,2))/psi+V end proc

> plot(f(phi_a[1]),x=0..3,view=[0..3,0..2]);

[Maple Plot]

This graph should have a sobering effect on any enthusiasm that may have developed after obtaining a reasonably accurate eigenvalue and an approximate eigenfunction that appeared to follow the numerically calculated one: the local energy is a highly sensitive quantity. In particular, we notice that the 'particle-in-a-box' basis has difficulty reproducing the correct fall-off of the wavefunction.

We should not be deterred too much by this mixture of success and disappointment: eigenenergies are much easier to obtain than accurate eigenfunctions, and we should proceed cautiously. Matrix diagonalization is often the only tool available to solve the Schroedinger equation. Usually we also have the possibility of improving matters by adapting the basis to the problem at hand, thereby reducing the errors.

Exercise 4:

Graph the local energy for matrix diagonalization solutions differing by matrix size, and observe how the better converged calculations approximate the eigenvalue locally. Repeat the calculation for a smaller value of X (with X larger than the point at which the numerically obtained wavefunction goes to zero), and make your observations.

We carry out the calculation for the first excited state:

> ev_s[2];

2.40980154796841317

> IC:=u(0)=0,D(u)(0)=1;

IC := u(0) = 0, D(u)(0) = 1

> E_t:=2.392;

> SE:=-1/2*diff(u(x),x$2)+(V-E_t)*u(x)=0;

> sol:=dsolve({SE,IC},u(x),numeric,output=listprocedure):

> u_s:=subs(sol,u(x)):

E_t := 2.392

SE := -1/2*diff(u(x),`$`(x,2))+(x^4-2.392)*u(x) = 0

> P1:=plot('u_s(x)',x=0..2.5): plots[display](P1,view=[0..2.5,-1..1]);

[Maple Plot]

We observe that the numerically obtained eigenvalues are slightly below the ones obtained from the matrix diagonalization.

> P2:=plot(phi_a[2]/subs(x=0,diff(phi_a[2],x)),x=0..4,color=blue):

> P3:=plot(V-E_t,x=0..3,color=green):

> plots[display](P1,P2,P3,view=[0..3,-2.5..1]);

[Maple Plot]

We have shifted the graph of the eigenfunction to indicate the location of the classical turning point (the point where the shifted potential V - E_t crosses the x-axis). The plot shows the size of the tunneling region, as well as the fact that the point where the numerical eigenfunction touches down on the x-axis (it diverges at large x, due to the finite accuracy of the trial eigenenergy and to numerical problems) is well inside the classically forbidden region for the given state. For higher-lying states the 'touchdown' point moves to larger x-values. Eventually, for large n, the basis set for the matrix diagonalization becomes inappropriate due to the finite X-value.
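For reference, the classical turning point for this state can be computed directly (a sketch; fsolve with a bracketing range):

> fsolve(V=E_t, x, 1..2);  # solves x^4 = 2.392; the turning point lies near x = 1.24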

It is important to explore the accuracy of states higher than the ground states of the even- and odd-parity sectors. These states are harder to calculate, since one needs to find the locations of nodes that are not pre-determined. The eigenstates calculated by matrix diagonalization are all mutually orthogonal. Given their approximate nature, however, one cannot claim that the second excited state is orthogonal to the exact ground state. Experience shows that the accuracy of the higher-lying eigenvalues (and their eigenfunctions) is substantially lower than what was found for the two lowest states (for the same Hamiltonian matrix). One can try to beat the problem by increasing the matrix size.
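The claimed mutual orthogonality is easy to spot-check (a sketch; the overlap should vanish to working precision, since the eigenvectors of a real symmetric matrix are orthogonal):

> int(expand(phi_a[1]*phi_a[3]),x=-X..X);  # overlap of ground and second excited states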

We demonstrate the problem on the example of the first excitation in the even-parity sector:

> ev_s[3];

5.32254047313261225

> IC:=u(0)=1,D(u)(0)=0;

IC := u(0) = 1, D(u)(0) = 0

> E_t:=4.697;

> SE:=-1/2*diff(u(x),x$2)+(V-E_t)*u(x)=0;

> sol:=dsolve({SE,IC},u(x),numeric,output=listprocedure):

> u_s:=subs(sol,u(x)):

E_t := 4.697

SE := -1/2*diff(u(x),`$`(x,2))+(x^4-4.697)*u(x) = 0

> P1:=plot('u_s(x)',x=0..3): plots[display](P1,view=[0..3,-1.1..1]);

[Maple Plot]

We observe that the numerically obtained eigenvalue lies well below the one obtained from the matrix diagonalization.

> P2:=plot(phi_a[3]/subs(x=0,phi_a[3]),x=0..4,color=blue):

> P3:=plot(V-E_t,x=0..3,color=green):

> plots[display](P1,P2,P3,view=[0..3,-5..1]);

[Maple Plot]

We observe that the third eigenvalue is not approximated well by the 20-by-20 matrix diagonalization. This could be improved by simply choosing a smaller box: the numerical wavefunction, together with the location of the classical turning point (beyond which the tail of the wavefunction indicates tunneling), shows that the box size can be chosen substantially smaller. This would lead to more accurate results, as the basis functions would be more flexible in the region of interest. A sketch of such a re-run follows.
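A minimal sketch of the re-run (assuming X = 4 still lies beyond the 'touchdown' region; uB, Tkin, and Vpot all pick up the new box size through the global X):

> X:=4:
> HM4:=Matrix(N):
> for i to N do
    for j to i do
      HM4[i,j]:=Vpot(uB(i),uB(j));
      if i=j then HM4[i,j]:=HM4[i,j]+Tkin(uB(i)) fi;
    od;
  od:
> for i to N do for j from i+1 to N do HM4[i,j]:=HM4[j,i] od od:
> [op(1..4,sort(map(Re,Eigenvalues(map(evalf,HM4),output='list'))))];
> X:=8:   # restore the original box size for the remainder of the worksheet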


Solution of the numerical problem by bisection for the general case of non-symmetric potentials.

We start the integration from x = -s and from x = s, and try to match the two solutions at the origin by requiring that u'(0)/u(0) agree from the left and from the right.

The boundary conditions at these two points distinguish even- and odd-parity states: for even-parity states the derivatives of the wavefunction at the two endpoints are opposite to each other; for odd-parity states they are identical. These choices determine the normalization of the propagated solutions. Given that we match u'(0)/u(0), this should not matter at all. Let us illustrate what happens as we integrate from the outside inwards.

> s:=3.;

s := 3.

> eta:=3*10^(-4):

> IC1:=u(s)=0,D(u)(s)=eta: IC2:=u(-s)=0,D(u)(-s)=-eta:


> E_t:=4.697;

> SE:=-1/2*diff(u(x),x$2)+(V-E_t)*u(x)=0;

> sol1:=dsolve({SE,IC1},u(x),numeric,output=listprocedure): u_1:=subs(sol1,u(x)):

> sol2:=dsolve({SE,IC2},u(x),numeric,output=listprocedure): u_2:=subs(sol2,u(x)):

E_t := 4.697

SE := -1/2*diff(u(x),`$`(x,2))+(x^4-4.697)*u(x) = 0

> P1:=plot('u_1(x)',x=-3..3): plots[display](P1,view=[-3..3,-1..1]);

[Maple Plot]

A matching method relies on the following properties that we require of an eigenfunction:

normalizability, i.e., a vanishing function at s and -s;

a continuous and differentiable solution for -s < x < s.

As the Schroedinger equation is of second order, we can produce two linearly independent solutions that both depend on the same trial value for the eigenenergy. One depends on two initial conditions at -s, the other on two initial conditions at s. We calculate the difference between u'(0)/u(0) for the integrations from the left and from the right. This mismatch crosses zero at an energy eigenvalue, and therefore we can invoke a root search; the latter is carried out by the bisection algorithm.

> Match:=proc(E_t) local SE,eta,IC1,IC2,sol1,sol2,u_1,up_1,u_2,up_2; global V,s;

> SE:=-1/2*diff(u(x),x$2)+(V-E_t)*u(x)=0; eta:=3*10^(-4):

> IC1:=u(s)=0,D(u)(s)=eta: IC2:=u(-s)=0,D(u)(-s)=-eta:

> sol1:=dsolve({SE,IC1},u(x),numeric,output=listprocedure): u_1:=subs(sol1,u(x)):

> sol2:=dsolve({SE,IC2},u(x),numeric,output=listprocedure): u_2:=subs(sol2,u(x)):

> up_1:=subs(sol1,diff(u(x),x)): up_2:=subs(sol2,diff(u(x),x)):

> up_1(0)/u_1(0)-up_2(0)/u_2(0); end:

The bisection algorithm is written with two procedures: a basic bisection step, and a driver which reports on progress:

> Bis1:=proc(x1,x2,x3,f1,f2,f3);
if evalf(f1*f3) < 0 then
RETURN([x1,x3,f1,f3]);
else
RETURN([x3,x2,f3,f2]); fi;
end:

> Bisect:=proc(f,a,b) local res,x1,x2,x3,f1,f2,f3,i;x1:=a: x2:=b: f1:=evalf(f(x1)): f2:=evalf(f(x2)):
if evalf(f1*f2)>0 then RETURN("No bracketed root",f1,f2);
else
x3:=0.5*(x1+x2); f3:=evalf(f(x3)); fi;
for i from 1 to 50 do:
res:=Bis1(x1,x2,x3,f1,f2,f3);
x1:=res[1]; f1:=res[3]; x2:=res[2]; f2:=res[4]; x3:=0.5*(x1+x2);
if abs(x1-x2) < 10^(-2) then print("Reached level 2",x3); fi; if abs(x1-x2) < 10^(-7) then RETURN(x3) fi;
f3:=evalf(f(x3)); od:
RETURN("Loop exhausted", x3); end:

> s:=3;

s := 3

> Bisect(Match,4.65,4.75);

4.69679541587831

Let us pick a non-symmetric potential and calculate the eigenvalues:

> V:=x^4-2*x^3-x^2/2+2/2*x+1;

V := x^4-2*x^3-1/2*x^2+x+1

> P0:=plot(V,x=-5..5,view=[-2..3,-1..5],color=green): plots[display](P0);

[Maple Plot]

> s:=3;

s := 3

> Bisect(Match,0,1);

.899318367242813

> E1:=%;

E1 := .899318367242813

To graph the solution we use the best eigenvalue obtained, and integrate the Schroedinger equation once more:

> eta:=1*10^(-6):

> IC1:=u(s)=0,D(u)(s)=eta: IC2:=u(-s)=0,D(u)(-s)=eta:

> E_t:=E1;

> SE:=-1/2*diff(u(x),x$2)+(V-E_t)*u(x)=0;

> sol1:=dsolve({SE,IC1},u(x),numeric,output=listprocedure): u_1:=subs(sol1,u(x)):

> sol2:=dsolve({SE,IC2},u(x),numeric,output=listprocedure): u_2:=subs(sol2,u(x)):

E_t := .899318367242813

SE := -1/2*diff(u(x),`$`(x,2))+(x^4-2*x^3-1/2*x^2+x+.100681632757187)*u(x) = 0

> P1:=plot('u_1(x)',x=-s..s,color=red): P2:=plot('u_2(x)',x=-s..s,color=blue):

> P00:=plot(E1,x=-3..3,color=black):

> plots[display](P00,P0,P1,P2,view=[-3..3,-1..6]);

[Maple Plot]

We have chosen the initial condition for solution 2 such that a positive ground-state wavefunction is calculated. Solution 1 (integrated inward from the boundary condition at x = s) is small and cannot be seen on this scale (it is covered by the x-axis). Both solutions are normalized through the choice of eta. It is interesting to observe how the wavefunction reaches through the classically forbidden regime to extend over the second minimum.

Exercise 5:

Calculate some of the excited eigenstates for this potential by supplying other bracketing values to the procedure Bisect (see the sketch at the end of this worksheet).

Exercise 6:

Modify the potential function such that there is one broad and one sharp minimum of approximately equal depth, by choosing different coefficients for the monomials in x. Calculate the ground state and observe its shape. When does it become double-humped, and when does the second hump (over the narrower potential well) disappear?
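As a graphical starting point (the coefficient values below are illustrative guesses only), one can scan the x^3 coefficient to see how the relative widths and depths of the two wells change before committing to the eigenvalue calculation:

> plot([seq(x^4-c*x^3-x^2/2+x+1, c=[1,3/2,2])], x=-2..3,
       view=[-2..3,-2..6], color=[red,blue,green]);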

Matrix diagonalization:

We can get an idea about the location of the low-lying eigenvalues by carrying out the matrix diagonalization. We simply copy the lines from the beginning of the worksheet.

> HM:=Matrix(N):

We make use of the symmetry of the Hamiltonian matrix: this saves almost a factor of 2 in computation time.

> for i from 1 to N do
    for j from 1 to i do
      HM[i,j]:=Vpot(uB(i),uB(j));
      if i=j then HM[i,j]:=HM[i,j]+Tkin(uB(i)) fi;
    od;
  od:

> for i from 1 to N do: for j from i+1 to N do: HM[i,j]:=HM[j,i]: od: od:

> print(SubMatrix(HM,1..4,1..4));

[4 x 4 submatrix of HM]

The Hamiltonian matrix is no longer sparse, as the potential is not symmetric.
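The loss of the block structure can be seen in a single matrix element (a sketch mirroring the earlier parity check): the odd terms in V now couple even and odd basis functions.

> Vpot(uB(1),uB(2));  # no longer vanishes: -2*x^3 and x make the integrand partly even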

> HMf:=map(evalf,HM):

> evals:=Eigenvalues(HMf, output='list'):

> ev_s:=sort(map(Re,evals));

ev_s := [.923716591294165390, 1.98711814266109088, ...

> VE:=Eigenvectors(HMf,output='list'):

> Vp:=[seq([Re(VE[i][1]),VE[i][2],map(Re,VE[i][3])],i=1..nops(VE))]:

> VEs:=sort(Vp,Min):

> for i from 1 to 4 do:

> psi0:=add(VEs[i][3][j]*uB(j),j=1..N):

> No:=1/sqrt(int(expand(psi0^2),x=-X..X));

> phi_a[i]:=add(No*VEs[i][3][j]*uB(j),j=1..N): od:

> plot([seq(phi_a[i],i=1..4)],x=-5..5,color=[red,blue,green,black]);

[Maple Plot]

We observe that the matrix diagonalization captures the ground-state eigenfunction well (the eigenvalue is off by nearly 3%).

The first excited state has a bigger hump over the shallower part of the potential.

The higher states are not unlike the eigenstates of the symmetric quartic oscillator treated above, but shifted to the right. At higher eigenenergies the particle mostly explores the steep x^4 rise of the potential.
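These matrix eigenvalues also supply bracketing values for the bisection search of Exercise 5. A sketch (assuming, as observed for the ground state, that the true eigenvalue lies within a few percent of the matrix value, so the bracket below should contain a sign change; if not, widen it):

> Bisect(Match, ev_s[2]-0.3, ev_s[2]+0.3);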

Exercise 7:

Choose a potential with a deep and narrow potential well, and make a detailed comparison between the ground state as obtained by matrix-diagonalization, and by the numerical method.
