{VERSION 5 0 "IBM INTEL NT" "5.0" } {USTYLETAB {CSTYLE "Maple Input" -1 0 "Courier" 0 1 255 0 0 1 0 1 0 0 1 0 0 0 0 1 }{CSTYLE "2D Math" -1 2 "Times" 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 1 }{CSTYLE "2D Input" 2 19 "" 0 1 255 0 0 1 0 0 0 0 0 0 0 0 0 1 } {CSTYLE "" -1 256 "" 1 16 0 0 0 0 0 1 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 257 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 258 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 259 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 260 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 } {CSTYLE "" -1 261 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 262 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 263 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 264 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 265 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 } {CSTYLE "" -1 266 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 }{PSTYLE "Normal " -1 0 1 {CSTYLE "" -1 -1 "Times" 1 12 0 0 0 1 2 2 2 2 2 2 1 1 1 1 }1 1 0 0 0 0 1 0 1 0 2 2 0 1 }} {SECT 0 {EXCHG {PARA 0 "" 0 "" {TEXT 256 54 "The mapped Fourier grid m ethod for the Coulomb problem" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }} {PARA 0 "" 0 "" {TEXT -1 42 "E.Fattal, R. Baer, R. Kosloff, Phys. Rev. " }{TEXT 257 4 "E 53" }{TEXT -1 49 ", 1217 (1996); F. Brau, C. Semay, J. Comp. Phys. " }{TEXT 258 3 "139" }{TEXT -1 84 ", 127 (1998); V. Ko kouline, O. Dulieu, R. Kosloff, F. Masnou-Seeuws, J. Chem. Phys. " } {TEXT 259 3 "110" }{TEXT -1 134 ", 9865 (1999) for implementation deta ils. The Fourier grid method itself goes back to the pioneering work o f R. Meyer, J. Chem. Phys. " }{TEXT 260 2 "52" }{TEXT -1 14 ", 2053 (1 970)." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 218 "In fact, some of the references by Kosloff et al are confusing the is sues as to what basis is used. 
The work is in coordinate space in a position eigenstate basis (a Kronecker delta basis, which has the right limit as " }{TEXT 19 2 "dx" }{TEXT -1 75 " is sent to zero, i.e., the basis function is a Kronecker delta divided by " }{TEXT 19 2 "dx" }{TEXT -1 27 ", with the assumption that " }{TEXT 19 8 "x_j=j*dx" }{TEXT -1 118 "). Important implementation details concern questions like: even or odd number of grid points, and whether the entire " }{TEXT 19 1 "x" }{TEXT -1 18 "-axis or only the " }{TEXT 19 3 "x>0" }{TEXT -1 156 " region is used in the problem. The Fourier grid method makes use of the fact that the Fourier transform of the Kronecker delta function is a plane wave in " }{TEXT 19 1 "p" }{TEXT -1 337 "-space. This permits a direct calculation of kinetic energy matrix elements in coordinate representation as a non-diagonal matrix. The potential matrix is, of course, calculated as a diagonal matrix with no matrix elements to be evaluated. For some Fourier grids the kinetic energy matrix can be calculated in closed form (cf. R. Meyer)." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 116 "Our work here uses a mix of methods from the above references. It implements the Fourier-sine representation on the " }{TEXT 19 6 "(0,Pi)" }{TEXT -1 785 " interval. This then corresponds really to a Fourier series expansion which is close in spirit to the original work of Meyer, and also to the more recent work of Brau and Semay. It has a drawback in that we do not have a closed-form expression for the kinetic energy matrix. On the other hand the matrix can be calculated once and stored, and then re-used for many applications. Its computational cost becomes considerable only for very large grids. The advantage of the method is that it makes use of a pure sine expansion, i.e., the symmetry of the radial problem is implemented directly. In order to attack Coulomb-type problems we then implement a mapping along the lines of the first reference, i.e., the work of Kosloff et al with a further adaptation in order to connect to the " }{TEXT 19 6 "(0,Pi)" }{TEXT -1 95 " interval. First, however, we demonstrate the virtues of the mapping as presented in that work." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 172 "The rectangular Fourier grid does not work well for problems in which there is a correlation between position and momentum variables, such as in the Coulomb problem: small " }{TEXT 19 1 "r" }{TEXT -1 21 " corresponds to high " }{TEXT 19 1 "p" }{TEXT -1 14 ", while large " }{TEXT 19 1 "r" }{TEXT -1 22 " corresponds to small " }{TEXT 19 1 "p" }{TEXT -1 45 ". The singularities in the potential in both " }{TEXT 19 1 "r" }{TEXT -1 5 " and " }{TEXT 19 1 "p" }{TEXT -1 176 " space are responsible for an inadequacy of the rectangular Fourier grid. The idea behind the mapping is to use the classical phase space region as a guide. For a given energy " }{TEXT 19 8 "E=H(r,p)" }{TEXT -1 203 " the classically allowed region maps out a volume in phase space. The wave function for a bound state will decrease exponentially beyond this regime. The idea is to apply a canonical transformation from " }{TEXT 19 7 "(p,q=r)" }{TEXT -1 6 " into " }{TEXT 19 5 "(P,Q)" }{TEXT -1 38 ", and to employ a rectangular grid in " }{TEXT 19 5 "(P,Q)" }{TEXT -1 151 ". Our objective is to demonstrate that a good approximation to the bound-state spectrum can be obtained when using a Fourier collocation method in the " }{TEXT 19 5 "(P,Q)" }{TEXT -1 11 " variables." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 42 "restart; with(LinearAlgebra): with(plots):" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 135 "Let us demonstrate the problem by contour diagrams for classical energies of some Rydberg levels. Atomic units (hbar=m_e=e=1) are used.
" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 18 "En:=n->-1/(2*n^2);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 15 "Hpq:=p^2/2-1/q;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 101 "contourplot(Hpq,q=0..100,p=-1..1,levels=[En(1),En(2),En(3),En(4),En(5)],numpoints=10000,filled=true);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 388 "Kosloff et al. make the point about the missing classical phase space volume at large momenta and small r. One might think that this is not all that relevant for the quantum problem, because much of this space is inaccessible in quantum mechanics due to the zero-point motion, but it turns out to be important. In any case, the picture shows dramatically that a rectangular region in the " }{TEXT 19 5 "(q,p)" }{TEXT -1 16 " variables with " }{TEXT 19 3 "q=r" }{TEXT -1 309 " is very inefficient. Therefore, the objective is to optimize the calculation by filling the classically accessible region with a rectangular region in a mapped variable. Kosloff et al make use of an arctan transformation (which we have used in different form with finite-difference methods, e.g., J. Phys. B " }{TEXT 265 2 "17" }{TEXT -1 24 ", 2591 and Phys. Rev. A " }{TEXT 266 2 "44" }{TEXT -1 9 ", R5346)." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "beta:=2.5E-4; A:=3999.9;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 26 "rM:=Q->Q-A*arctan(beta*Q);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "plot(rM(Q),Q=0..5000);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 23 "An equidistant mesh in " }{TEXT 19 1 "Q" }{TEXT -1 39 " will concentrate many points at small " }{TEXT 19 1 "r" }{TEXT -1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 77 "One needs to calculate the transformation. 
For a single-variable map we have:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 39 "beta:='beta': A:='A': J:=diff(rM(Q),Q);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 36 "The Hamiltonian then simply becomes:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "HPQ:=P^2/(2*J^2)-1/rM(Q);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 61 "Re-introduce the numerical values for the mapping parameters:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "beta:=2.5E-4; A:=3999.9;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 111 "contourplot(HPQ,Q=-200..1500,P=-0.02..0.02,levels=[En(1),En(2),En(3),En(4),En(5)],numpoints=10000,filled=true);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 59 "The remap of the variables (canonical transformation) from " }{TEXT 19 5 "(q,p)" }{TEXT -1 4 " to " }{TEXT 19 5 "(Q,P)" }{TEXT -1 200 " leads to a rectangular area in which one has not only good efficiency in terms of coverage of the classically accessible region, but, in fact, also enclosure of that region which was not possible in " }{TEXT 19 5 "(q,p)" }{TEXT -1 88 " with the same amount of effort. The diagrams are similar to Figs. 3,4 in the reference." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 124 "Now we are armed to proceed with the spectrum calculation. The kinetic energy in the transformed variables is a function of " }{TEXT 19 1 "P" }{TEXT -1 5 " and " }{TEXT 19 1 "Q" }{TEXT -1 94 ". Therefore, one has to figure out a proper way to deal with it, i.e., whether one should use " }{TEXT 19 6 "P/J(Q)" }{TEXT -1 42 " as an operator and square it, or whether " }{TEXT 19 10 "P^2/J(Q)^2" }{TEXT -1 90 " can be carried out in sequence. In any case, we do not have a multiplicative operator in " }{TEXT 19 1 "P" }{TEXT -1 112 "-space for the kinetic energy. Meyer has included the possibility of such a kinetic energy operator in his work." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 202 "We are seeking a matrix representation of the Hamiltonian which we will then diagonalize. 
In order to avoid tr ouble with the periodic boundary conditions implied in the Fourier met hod we discretize the " }{TEXT 19 6 "[-L,L]" }{TEXT -1 126 " interval \+ and look for anti-symmetric eigenfunctions. The first reference provid es some hints, but no details in this respect." }}}{EXCHG {PARA 0 "> \+ " 0 "" {MPLTEXT 1 0 41 "restart; with(LinearAlgebra): Digits:=14:" }}} {EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "n:=6; N:=2^n;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "dQ:=100;" }}}{EXCHG {PARA 0 "> " 0 " " {MPLTEXT 1 0 21 "dP:=evalf(Pi/(N*dQ));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 14 "P_max:=N/2*dP;" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 96 "These parameter settings provide a rectangular area that can be id entified in the above diagram." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }} {PARA 0 "" 0 "" {TEXT -1 108 "Perhaps we should first demonstrate how \+ poorly the regular Fourier grid method does. In this case we choose:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "dq:=0.5;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "dp:=evalf(Pi/(N*dq));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 14 "p_max:=N/2*dp;" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 103 "Construct the kinetic energy matrix in coordinate repres entation using the fact that it is diagonal in " }{TEXT 19 1 "p" } {TEXT -1 102 " space, i.e., by inserting plane-wave completeness relat ions approximated by their truncated versions." }}}{EXCHG {PARA 0 "> \+ " 0 "" {MPLTEXT 1 0 16 "TM:=Matrix(N,N):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "L:=N*dq;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 35 "BF:=(k,j)->sqrt(2/N)*sin(Pi*k*j/N);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 38 "simplify(add(BF(1,j)*BF(1,j),j=1..N));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 38 "simplify(add(BF(1,j)*BF(2,j),j=1..N ));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 79 "This serves as a check tha t the basis is orthonormal. Try some other values of " }{TEXT 19 1 "k " }{TEXT -1 11 " to verify." 
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 87 "The kinetic energy matrix is easy to assemble now: we have a basis of Kronecker deltas " }{TEXT 19 21 "<xi|xj> = KND(i,j)/dx" }{TEXT -1 75 ". The Fourier series amplitudes of these functions are the basis functions " }{TEXT 19 17 "<pk|xj> = BF(k,j)" }{TEXT -1 83 " for which we know the momentum eigenvalues, and therefore the kinetic energies as " }{TEXT 19 14 "1/2*(Pi*k/L)^2" }{TEXT -1 61 ". One way to prove it is to differentiate the basis function " }{TEXT 19 7 "BF(k,j)" }{TEXT -1 23 " twice with respect to " }{TEXT 19 1 "x" }{TEXT -1 17 " after replacing " }{TEXT 19 4 "j*dq" }{TEXT -1 4 " by " }{TEXT 19 1 "x" }{TEXT -1 28 ", which with the definition " }{TEXT 19 6 "L=N*dq" }{TEXT -1 14 " brings out a " }{TEXT 19 11 "-(Pi*k/L)^2" }{TEXT -1 33 " factor, and then to multiply by " }{TEXT 19 6 "(-1/2)" }{TEXT -1 17 " in atomic units." }}{PARA 0 "" 0 "" {TEXT -1 26 "This is implemented below:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 112 "for j from 1 to N do: for jp from 1 to N do: TM[jp,j]:=evalf(add(BF(k,j)*BF(k,jp)*(Pi*k/L)^2/2,k=1..N)); od: od:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "TM[1,1],TM[1,2],TM[2,1];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 69 "It looks like the matrix is symmetric, which shouldn't be a surprise." }}{PARA 0 "" 0 "" {TEXT -1 118 "Now let us add a potential matrix: fortunately we avoid the origin in our mesh and circumvent the Coulomb singularity." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 52 "for j from 1 to N do: TM[j,j]:=TM[j,j]-1/(j*dq); od:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "EV:=Eigenvalues(TM):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 12 "EV[1],EV[2];" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 35 "EV0:=sort([seq(Re(EV[n]),n=1..N)]);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 64 "This method results in rather poor eigenvalues when compared to " }{TEXT 19 8 "-0.5/n^2" }{TEXT -1 116 " . 
They look like upper bounds, but seem to reflect a bad coverage of \+ the classically accessible phase space region." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT 261 11 "Exercise 1:" }}{PARA 0 " " 0 "" {TEXT -1 31 "Choose different values of the " }{TEXT 19 2 "dq" }{TEXT -1 175 " spacing and observe the dependence of the low-lying ei genvalues (the lowest bound states). Then increase the number of point s and record the improvement for given choices of " }{TEXT 19 2 "dq" } {TEXT -1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 32 "How do we implement the idea in " }{TEXT 19 2 "PQ" } {TEXT -1 30 " space? The momentum operator " }{TEXT 19 1 "P" }{TEXT -1 12 " is given as" }{TEXT 19 17 " -I*1/J(Q) * d/dQ" }{TEXT -1 40 ", \+ which means that we have to implement " }{TEXT 19 26 "-0.5/J(Q)*d/dQ 1 /J(Q) d/dQ" }{TEXT -1 380 " as kinetic energy operator. We will follow two approaches: first, we implement this directly; then we follow an \+ idea implemented by Kokoouline et al to construct a symmetric matrix a fter transforming the wavefunction. The second method appears to be so mewhat less efficient than the first one, although some savings can be obtained from using a simplified matrix diagonalization." 
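Before moving on, the plain Fourier-sine grid calculation of this section can also be sketched outside of Maple. The following NumPy transcription (an illustration added here, not part of the original worksheet; parameter names mirror the Maple code) builds TM from the sine basis and exhibits the mediocre Coulomb eigenvalues:

```python
import numpy as np

# Unmapped Fourier-sine grid for the s-wave Coulomb problem, as in the worksheet:
# grid x_j = j*dq, basis BF(k,j) = sqrt(2/N)*sin(pi*k*j/N), kinetic
# eigenvalue (pi*k/L)^2/2 with L = N*dq, potential -1/r on the diagonal.
N, dq = 64, 0.5
L = N * dq
j = np.arange(1, N + 1)
k = np.arange(1, N + 1)
BF = np.sqrt(2.0 / N) * np.sin(np.pi * np.outer(k, j) / N)  # BF[k-1, j-1]
Tk = 0.5 * (np.pi * k / L) ** 2                             # kinetic energies
TM = BF.T @ (Tk[:, None] * BF)                              # kinetic matrix on the grid
H = TM - np.diag(1.0 / (j * dq))                            # add the Coulomb potential
E = np.sort(np.linalg.eigvalsh(H))
print(E[:3])  # compare with the exact -1/(2 n^2): -0.5, -0.125, -0.0555...
```

The lowest entries should be recognizably hydrogenic but noticeably less accurate than the mapped-grid results below, in line with the discussion above.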
}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 41 "restart; with(LinearAlgebra): Digits:=14:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 6 "N:=20;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "dx:=evalf(Pi/(N+1));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 83 "We supplement the Kosloff mapping with a scale transformation (using the parameter " }{TEXT 19 2 "s0" }{TEXT -1 27 ") to map from the interval " }{TEXT 19 7 "(0,Pi) " }{TEXT -1 4 "for " }{TEXT 19 1 "x" }{TEXT -1 8 " into a " }{TEXT 19 1 "Q" }{TEXT -1 18 " range, then into " }{TEXT 19 1 "r" }}{PARA 0 "" 0 "" {TEXT -1 14 "according to " }{TEXT 19 28 "r = s0*x-A*arctan(beta*s0*x)" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 60 "We need to assemble sine-sine and sine-cosine matrices with " }{TEXT 19 3 "k^2" }{TEXT -1 5 " and " }{TEXT 19 1 "k" }{TEXT -1 139 " factors respectively. This follows from the observation that the kinetic energy operator now has both first and second derivatives, i.e., " }{TEXT 19 4 "d/dQ" }{TEXT -1 5 " and " }{TEXT 19 8 "d^2/dQ^2" }{TEXT -1 73 ". The first derivative turns the sine-basis function into a cosine times " }{TEXT 19 1 "k" }{TEXT -1 108 ". We pre-assemble the matrices, and then we can use them for calculations with differently scaled variables." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 35 "CSM:=Matrix(N,N): SSM:=Matrix(N,N):" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 22 "We do not include the " }{TEXT 19 2 "- " }{TEXT -1 22 "sign, because we want " }{TEXT 19 4 "-1/2" }{TEXT -1 70 " times the second derivative anyway, and therefore we skip the factor " }{TEXT 19 1 "2" }{TEXT -1 163 " as well! Rather than defining the basis function as done in the previous section we work with sines and incorporate the normalization by correcting the sums by a " }{TEXT 19 7 "2/(N+1)" }{TEXT -1 8 " factor."
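The 2/(N+1) normalization rests on the discrete orthogonality of the sines at the interior grid points x_j = j*Pi/(N+1). A quick NumPy check (added here for illustration, not part of the original worksheet) confirms it:

```python
import numpy as np

# Discrete sine orthogonality behind the 2/(N+1) normalization:
# sum_{j=1..N} sin(k*j*dx)*sin(k'*j*dx) = (N+1)/2 * delta(k,k')
# for dx = pi/(N+1) and 1 <= k, k' <= N.
N = 20
dx = np.pi / (N + 1)
k = np.arange(1, N + 1)
j = np.arange(1, N + 1)
S = np.sin(dx * np.outer(k, j))      # S[k-1, j-1] = sin(k*j*dx)
G = (2.0 / (N + 1)) * S @ S.T        # Gram matrix with the 2/(N+1) factor
print(np.allclose(G, np.eye(N)))     # -> True: the scaled sines are orthonormal
```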
}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 182 "for i from 1 to N do: for j from 1 to N do: SSM[i,j]:=evalf(1/(N+1)*add(k^2*sin(k*j*dx)*sin(k*i*dx),k=1..N)): CSM[i,j]:=evalf(1/(N+1)*add(k*sin(k*j*dx)*cos(k*i*dx),k=1..N)): od: od:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 27 "CSM[2,1],CSM[2,2],CSM[1,2];" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 27 "SSM[2,1],SSM[2,2],SSM[1,2];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 106 "The cosine-sine matrix is not symmetric. This can pose a problem for efficiency in matrix diagonalization." }}{PARA 0 "" 0 "" {TEXT -1 105 "From these pre-computed matrices we can assemble the Hamiltonian matrix following a choice of parameters." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "r:=x->s0*x-A*arctan(beta*s0*x);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "J:=D(r);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 9 "Jp:=D(J);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 97 "We include the possibility of non-zero angular momentum, i.e., we define a centrifugal potential." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "L:=0; Vcf:=x->L*(L+1)/2/r(x)^2;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "A:=3999.9: beta:=0.00025:" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 20 "In order to map the " }{TEXT 19 6 "(0,Pi)" }{TEXT -1 21 " interval to a large " }{TEXT 19 1 "Q" }{TEXT -1 99 " range (as is required according to the graph in the first section) we choose a large scale factor " }{TEXT 19 2 "s0" }{TEXT -1 1 ":" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "s0:=500;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 16 "HM:=Matrix(N,N):" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 48 "Note that the Jacobian factors that follow from " }{TEXT 19 6 "1/J(Q)" }{TEXT -1 154 " simply multiply the derivative matrices that are computed in momentum representation (or in trig representation). The differential operator is given for " }{TEXT 19 11 "F(Q)=1/J(Q)" }{TEXT -1 4 " as " }{TEXT 19 38 "-(F(Q)*F'(Q)*d/dQ + F(Q)^2*d^2/dQ^2)/2" }{TEXT -1 1 "."
}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 170 "for i from 1 to N do: f1:=Jp(i*dx): f2:=1/J(i*dx): for j from 1 to N do: HM[i,j]:=(CSM[i,j]*f1*f2 + SSM[i,j])*f2^2: od: HM[i,i]:=HM[i,i]+evalf(-1/r(i*dx)+Vcf(i*dx)): od:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "EV:=Eigenvalues(HM);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 6 "EV[1];" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 17 "convert(EV,list);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "EVs:=sort(map(Re,%));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 56 "We have excellent results for a number of eigenenergies!" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT 262 11 "Exercise 2:" }}{PARA 0 "" 0 "" {TEXT -1 89 "Explore the error (absolute and relative) of the lowest 10 eigenvalues by comparing with " }{TEXT 19 8 "-0.5/n^2" }{TEXT -1 57 ". Then increase the angular momentum quantum number from " }{TEXT 19 3 "L=0" }{TEXT -1 85 " to non-zero values and observe the agreement (note that the spectrum then starts at " }{TEXT 19 5 "n=L+1" }{TEXT -1 2 ")." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 165 "The next question concerns the eigenvectors. We need to sort the spectrum, i.e., put the eigenvectors as computed in Maple in an order that corresponds to the order " }{TEXT 19 10 "n=1,2,3..."
}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 35 "VE:=Eigenvectors(HM,output='list'):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 66 "Vp:=[seq([Re(VE[i][1]),VE[i][2],map(Re,VE[i][3])],i=1..nops(VE))]:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 344 "Min:=proc(x,y); if type(x,numeric) and type(y,numeric) then if x<=y then RETURN(true): else RETURN(false): fi; elif type(x,list) and type(y,list) and type(x[1],numeric) and type(y[1],numeric) then if x[1]<=y[1] then RETURN(true): else RETURN(false): fi; elif convert(x,string)<=convert(y,string) then RETURN(true): else RETURN(false): fi: end:\n" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 19 "VEs:=sort(Vp,Min):\n" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "n_sel:=2; VEs[n_sel];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 70 "Let us look at the eigenvector as a function of the radial coordinate " }{TEXT 19 7 "r(i*dx)" }{TEXT -1 1 ":" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 47 "plot([seq([r(i*dx),VEs[n_sel][3][i]],i=1..N)]);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 69 "How can we interpolate the functions? This is accomplished using the " }{TEXT 19 4 "sinc" }{TEXT -1 52 " function according to the Shannon sampling theorem." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 18 "sinc:=x->sin(x)/x;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 60 "fE:=n->evalf(add(VEs[n][3][i]*sinc(Pi/dx*(x-i*dx)),i=1..N));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 64 "We display the lowest five states as a function of the variable " }{TEXT 19 10 "0 < x < Pi" }{TEXT -1 78 " . We graph their densities, and keep in mind that these are radial densities:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 75 "plot([seq(fE(i)^2,i=1..5)],x=0..3.14,color=[red,blue,green,black,magenta]);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 23 "The results are smooth." }}{PARA 0 "" 0 "" {TEXT -1 41 "Now we want to see them as a function of " }{TEXT 19 1 "r" }{TEXT -1 86 ", and to compare them with the real thing! A parametric plot allows us to have a look:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 102 "plot([seq([r(x),fE(i)^2,x=0..3.14],i=1..5)],color=[red,blue,green,black,magenta],view=[0..30,0..0.5]);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 153 "Evidently the orbitals are not normalized consistently. The area under the curves was consistent from orbital to orbital when looked at as a function of " }{TEXT 19 1 "x" }{TEXT -1 38 ", but clearly not so as a function of " }{TEXT 19 1 "r" }{TEXT -1 98 ". Thus, we need to compute normalization constants if we wish to compare with the exact functions." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 48 "The constants can be calculated by summing over " }{TEXT 19 1 "x" }{TEXT -1 112 ", but the Jacobian has to be included. In this way the r-integral is reduced to an integral over x according to " }{TEXT 19 9 "dr=J(x)dx" }{TEXT -1 1 "." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 71 "Dx:=evalf(Pi/31): Nc:=evalf(Dx*add(subs(x=i*Dx,fE(1)^2*J(x)),i=1..30));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 75 "P1:=plot([r(x),-1/sqrt(Nc)*fE(1),x=0..3.14],color=red,view=[0..10,0..0.8]):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 35 "int(4*r^2*exp(-2*r),r=0..infinity);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 42 "P1e:=plot(2*r*exp(-r),r=0..10,color=blue):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 23 "plots[display](P1,P1e);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 66 "It is hard to distinguish the two, so we can graph the difference:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 97 "plot([r(x),2*r(x)*exp(-r(x))+1/sqrt(Nc)*fE(1),x=0..3.14],color=red,view=[0..10,-0.0002..0.0002]);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT 263 11 "Exercise 3:" }}{PARA 0 "" 0 "" {TEXT -1 246 "Investigate the relative error for other low-lying eigenstates. You will need to define the radial functions for these states. Investigate how a change in the scale parameter s0 affects the accuracy of individual eigenenergies and eigenfunctions."
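For reference, the whole mapped construction of this section condenses into a short NumPy script (an illustrative transcription added here, not part of the original worksheet; np.linalg.eigvals plays the role of Maple's Eigenvalues, and the parameters are those used above):

```python
import numpy as np

# Mapped Fourier-sine grid for hydrogen (L=0), following the worksheet:
# r(x) = s0*x - A*arctan(beta*s0*x) on the sine grid x_j = j*pi/(N+1).
N = 20
dx = np.pi / (N + 1)
x = dx * np.arange(1, N + 1)
k = np.arange(1, N + 1)
A, beta, s0 = 3999.9, 2.5e-4, 500.0

r  = lambda t: s0*t - A*np.arctan(beta*s0*t)                    # mapping x -> r
J  = lambda t: s0 - A*beta*s0/(1.0 + (beta*s0*t)**2)            # Jacobian dr/dx
Jp = lambda t: 2.0*A*(beta*s0)**3*t/(1.0 + (beta*s0*t)**2)**2   # dJ/dx

S = np.sin(np.outer(k, x))                   # S[k-1, i-1] = sin(k*x_i)
C = np.cos(np.outer(k, x))
SSM = S.T @ (k[:, None]**2 * S) / (N + 1)    # represents -(1/2) d^2/dx^2
CSM = C.T @ (k[:, None] * S) / (N + 1)       # represents +(1/2) d/dx

# HM[i,j] = (CSM*Jp/J + SSM)/J^2 with row factors, plus -1/r on the diagonal
HM = (CSM * (Jp(x) / J(x))[:, None] + SSM) / (J(x)**2)[:, None]
HM += np.diag(-1.0 / r(x))

E = np.sort(np.linalg.eigvals(HM).real)
print(E[:4])   # compare with the exact levels -1/(2 n^2) = -0.5, -0.125, ...
```

The lowest entries should track the worksheet's sorted EVs list closely, with the accuracy degrading for the highest levels.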
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 166 "Next we implemen t a method to obtain a symmetric Hamiltonian matrix as presented in th e work of Kokoouline et al. The first steps are similar to the previou s section:" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "> " 0 " " {MPLTEXT 1 0 41 "restart; with(LinearAlgebra): Digits:=14:" }}} {EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 6 "N:=20;" }}}{EXCHG {PARA 0 "> \+ " 0 "" {MPLTEXT 1 0 20 "dx:=evalf(Pi/(N+1));" }}}{EXCHG {PARA 0 "" 0 " " {TEXT -1 40 "We need only a second-derivative matrix:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 17 "SSM:=Matrix(N,N):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 118 "for i from 1 to N do: for j from 1 to N \+ do: SSM[i,j]:=evalf(1/(N+1)*add(k^2*sin(k*j*dx)*sin(k*i*dx),k=1..N)): \+ od: od:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 27 "SSM[2,1],SSM[2, 2],SSM[1,2];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 53 "This matrix is sy mmetric. We avoid first derivatives." }}{PARA 0 "" 0 "" {TEXT -1 43 "T he scale transformation is used as before:" }}}{EXCHG {PARA 0 "> " 0 " " {MPLTEXT 1 0 31 "r:=x->s0*x-A*arctan(beta*s0*x);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "J:=D(r);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 9 "Jp:=D(J);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 11 "Jpp:=D(Jp);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 130 "The diagonal \+ multiplicative terms of the kinetic energy matrix are collected in an \+ effective potential function. 
The wavefunction " }{TEXT 19 6 "Psi(Q)" }{TEXT -1 22 " has been scaled into " }{TEXT 19 17 "Phi(Q)/sqrt(J(Q)) " }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 44 "Vc:=x->7/8*(Jp(x)/J(x)^ 2)^2-Jpp(x)/4/J(x)^3;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "A: =3999.9: beta:=0.00025:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "s 0:=800;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 16 "HM:=Matrix(N,N): " }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "L:=0: Vcf:=x->L*(L+1)/2 /r(x)^2;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 161 "for i from 1 t o N do: f2:=1/J(i*dx): for j from 1 to N do: HM[i,j]:=SSM[i,j]*(f2^2+1 /J(j*dx)^2)/2: od: HM[i,i]:=HM[i,i]-evalf(1/r(i*dx)-Vcf(i*dx)-Vc(i*dx )): od:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "HM[1,2],HM[2,1], HM[2,2];" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "EV:=Eigenvalues (HM);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 6 "EV[1];" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 36 "EVs:=sort(map(Re,convert(EV,list))) ;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 23 "Eh:=n->evalf(-1/2/n^2) ;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 28 "seq(Eh(i+L)-EVs[i],i=1 ..20);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 431 "Depending on the choic e of s0 we can reduce the error for either the lowest eigenvalues or f or some intermediate ones. It appears, however, as if the solution of \+ the resulting symmetric eigenvalue problem is harder than the solution of the original non-symmetric problem, i.e., the convergence is not a s fast. It is harder to converge on the low-lying states - this may be a price to be paid for scaling away the wavefunction with " }{TEXT 19 9 "1/sqrt(J)" }{TEXT -1 1 "." 
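The form of Vc can be verified by a short calculation (not spelled out in the worksheet): substituting Phi = sqrt(J)*Psi and symmetrizing the transformed kinetic operator gives, with J' = Jp and J'' = Jpp,

```latex
\tilde{T}
 = -\tfrac{1}{2}\,J^{-1/2}\,\frac{d}{dx}\,\frac{1}{J}\,\frac{d}{dx}\,J^{-1/2}
 = -\frac{1}{4}\left(\frac{1}{J^{2}}\frac{d^{2}}{dx^{2}}
   + \frac{d^{2}}{dx^{2}}\frac{1}{J^{2}}\right)
   + \frac{7}{8}\frac{J'^{2}}{J^{4}} - \frac{J''}{4J^{3}} ,
```

whose anticommutator part is exactly the symmetrized combination SSM[i,j]*(1/J(i*dx)^2+1/J(j*dx)^2)/2 used in the loop above, and whose multiplicative remainder is the effective potential Vc.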
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }} {PARA 0 "" 0 "" {TEXT 264 11 "Exercise 4:" }}{PARA 0 "" 0 "" {TEXT -1 279 "Change the Coulomb potential to a screened Coulomb potential form (e.g., an appropriate potential for Helium singlet structure as used \+ in AtomicModel.mws) and compare the efficiency of the methods of the l ast two sections in obtaining eigenenergy spectra for the L=0,1,2 sect ors." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{MARK "52 0 0" 47 }{VIEWOPTS 1 1 0 1 1 1803 1 1 1 1 } {PAGENUMBERS 0 1 2 33 1 1 }