
Classical kinetic theory for evaporative cooling in an atom trap

Classical statistical mechanics allows one to describe an ensemble of atoms which move in an external potential, and which interact with each other by means of elastic collisions. During these binary collisions energy is exchanged between the atoms. The common motion of many non-interacting particles in an external field is described by the classical Liouville equation, which represents a reformulation of the many Newton equations in terms of a single phase-space distribution function. When an elastic collision integral is added to this equation it becomes the Boltzmann equation. The Liouville and Boltzmann equations are formulated for continuous distribution functions, but practical realizations are often performed by returning to discrete test-particle equations. These are Newton (Hamilton) equations for particles moving in a common potential - which may include a long-distance interaction between the particles - supplemented by a collision term which represents the elastic exchange of energy among particle pairs. The Boltzmann equation is interesting insofar as it displays irreversible behaviour caused by the collision term: the Liouville dynamics is based on Newton's equations, which are symmetric under a reversal of time; the collision integral, however, changes the particle motions in such a way that different initial distributions evolve towards Maxwell-Boltzmann type equilibrium distributions. Obviously, this cannot work in reverse, i.e., the equation cannot undo the relaxation and turn an equilibrium distribution (valid at large times) back into the original distribution by going backwards in time (the non-uniqueness alone would prohibit that). In that sense it contains the arrow of time as observed in nature (2nd law of thermodynamics), but at the same time it remains a heuristic equation. For a discussion of this and other features it is worthwhile to consult texts on statistical mechanics; a good starting point is Gregory H. Wannier: Statistical Physics (paperback reprint by Dover of the 1966 Wiley edition).
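For reference, the standard form of the equation reads as follows (the left-hand side alone, set to zero, is the Liouville transport equation for non-interacting particles in the external potential V; the right-hand side is the binary elastic collision integral):

\frac{\partial f(\mathbf{r},\mathbf{p},t)}{\partial t} \;+\; \frac{\mathbf{p}}{m}\cdot\nabla_{\mathbf{r}} f \;-\; \nabla_{\mathbf{r}}V(\mathbf{r})\cdot\nabla_{\mathbf{p}} f \;=\; \left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}}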

This kinetic approach allows one to understand how a sample acquires an equilibrium state through energy exchange by elastic collisions. We can start the atoms all with equal velocity, or we can equidistribute the energies initially over all possible states: when we wait long enough an equilibrium state will develop which is characterized by the average kinetic energy available to the ensemble. The steady-state or equilibrium distribution produced by the elastic collisions between the atoms is the well-known Maxwell-Boltzmann distribution.

Collisions have the tendency to turn a pair of particles of similar energy into a pair in which one partner is slower and one is faster. In the present worksheet this is demonstrated by simulating evaporative cooling. The idea is to remove the fast particles (in the evaporative cooling of a cup of coffee or tea the hottest molecules leave the liquid in the form of steam, thereby leaving behind an ensemble which is colder on average). In an atom trap one can have a finite barrier height which separates the trap from the environment. The idea is that once an atom has acquired enough energy to go over the barrier it is gone for good. If we keep the barrier height fixed, the cooling will work only up to some point: once the whole ensemble has cooled off such that the average kinetic energy is well below the barrier height, it is unlikely that further elastic collisions will produce many more atoms of sufficient energy to make it over the barrier. Therefore, efficient cooling requires a lowering of the trap height as the ensemble cools down.

Of course we cannot integrate the full Boltzmann equation in Maple for a three-dimensional system. In fact, a substantial simplification is possible when the system is ergodic. This means that the system visits all energetically accessible points in phase space if given sufficient time, and that different sequences of the time history selected at random look similar to each other. In such a system it is not important to know (in order to characterize the state) how the convection of the particle flow in phase space looks in detail. One can characterize the system by its energy dependence alone. The phase-space distribution function f(r,p,t)  is replaced by the product of a density of states rho(E)  times the level population f(E) . The density of states rho(E)  is usually a simple function of the potential. Commonly one employs the technique of counting the allowed quantum energy levels (taking into account their degeneracy in a multi-dimensional setting such as the spherical harmonic oscillator).

The details of reducing the six-dimensional phase-space Boltzmann equation to a one-dimensional energy-space equation are given, e.g., in the paper by O. J. Luiten, M. W. Reynolds, and J. T. M. Walraven, Phys. Rev. A 53, 381 (1996). The idea is to discretize the energy range into bins according to E=(i-1/2)*dE  with i=1..N , covering predominantly the energy range from the trap bottom up to the barrier height. The time evolution of the level population f_i=f(E_i)  is determined by a collision term which involves the product of level populations for two arbitrary states with energies E_k  and E_l  that connect to the pair of states E_i  and E_j . While determining the evolution of f(E_i)  one sums over all possible k,l while restricting E_j=E_k+E_l-E_i , thus ensuring energy conservation. The collision term is weighted with the probability for collisions to take place (the cross section, which is assumed to be energy-independent at low energies, enters). The equation is also weighted by the density of states: on the left by rho_i , and on the right by rho_h , where the energy index h=min(i,j,k,l) . The selection of the energy index on the right follows from the reduction of the collision integral when the phase-space probability is replaced by the energy-level population. Schematically we arrive at a system of differential equations:

rho_i * diff(f_i,t) = c * add( add( rho_h*(f_k*f_l - f_i*f_j), k=1..N ), l=1..N )    where   j = k+l-i ,  h = min(i,j,k,l)

Here the level population f_i  is dimensionless and the constant c contains the dimensionful quantities, c=m*sigma*dE^2/(Pi^2*hbar^3), such that c*dt  represents a measure of the total number of collisions that occurred. Furthermore, rho_i = rho(E_i)  is usually a power-law density of states: one can show that it goes like E_i^(1/2+d)  for potentials of the form V(r)=a*r^(3/d) , which gives E_i^2  for the isotropic 3D harmonic oscillator (d=3/2).
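A quick sketch of where this power law comes from (semiclassical phase-space counting; prefactors are omitted, and the power-law shape of the trap is the only assumption):

\rho(E) \;\propto\; \int d^3r\,d^3p\;\delta\!\left(E - \frac{p^2}{2m} - V(\mathbf{r})\right) \;\propto\; \int d^3r\;\sqrt{E - V(\mathbf{r})} \;\propto\; E^{\,d+1/2} \qquad \text{for } V(r)=a\,r^{3/d},

so that the isotropic harmonic trap (3/d = 2, i.e., d = 3/2) indeed gives rho(E) proportional to E^2.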

This system of differential equations can be solved using dsolve[numeric] , but it is more straightforward to apply an extrapolation (Euler) method, i.e., to discretize time. To illustrate the method we assume that the energy scale is set by the trap depth (in our scaled dimensionless variables we initially distribute all particles equally over the energy range E=0..1 ). At first we will not remove particles, i.e., we start the simulation with f_i  representing a step function cut off at E=1 , and just watch the evolution of this distribution in time.

>    restart; with(plots):

Warning, the name changecoords has been redefined

We introduce parameters for the dimensionless trap height, the total number of energy bins N , and the number of energy bins devoted to the range above the barrier height N0 :

>    E_t:=1;

E_t := 1

>    N:=40;

N := 40

>    N0:=10;

N0 := 10

>    dE:=E_t/(N-N0);

dE := 1/30

The bin energies and the density of states are now specified. We begin by choosing the density of states as appropriate for a harmonic oscillator in 3 dimensions.

>    E:=i->(i-0.5)*dE;

E := proc (i) options operator, arrow; (i-.5)*dE end proc

>    rho:=i->E(i)^2;

rho := proc (i) options operator, arrow; E(i)^2 end proc

>    plot([seq([E(i),rho(i)],i=1..N)]);

[Maple Plot]

This is the level density of a potential that continues harmonically beyond E=1 . We choose all energy levels to be equally populated below E=1 , and to be empty above.

>    fv:=array([seq(1,i=1..N-N0),seq(0,i=N-N0+1..N)]);

fv := vector([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

The distribution over the energies has to take the density of states into account. Note that we should really be displaying a histogram; since the plot simply connects the bin values, the step at E=1  does not appear sharp.

>    PL0:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)]): display(PL0);

[Maple Plot]
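If one wants the histogram look mentioned above, a minimal sketch is the following (it assumes that bin i spans the interval [(i-1)*dE, i*dE]; PL0h is a name introduced here for illustration and is not used elsewhere in the worksheet):

>    PL0h:=plot([seq(op([[(i-1)*dE,rho(i)*fv[i]],[i*dE,rho(i)*fv[i]]]),i=1..N)]): display(PL0h);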

The product c*dt  has to be small for Euler's extrapolation method to work. The algorithm updates the level population according to f_i(t+dt) = f_i(t) + dt*(df_i/dt) . Implementation details: the sums on the right are truncated when the index j  falls outside the range 1..N . The update loop at the end is easily modified to set the distribution to zero for energies above the barrier height, i.e., above E=1 .

>    cdt:=1/1000;

cdt := 1/1000

>    N_col:=100;

N_col := 100

>    for icol from 1 to N_col do:                      # loop over time steps (each advances by cdt = c*dt)

>    for i from 1 to N do:                              # update each energy bin in turn

>    RHS:=0:

>    for k from 1 to N do:                              # double sum over the collision partners k,l

>    for l from 1 to N do:

>    j:=k+l-i:                                          # energy conservation fixes the fourth bin

>    if j>0 and j<N+1 then

>    h:=min(i,j,k,l): RHS:=RHS+rho(h)/rho(i)*(fv[k]*fv[l]-fv[i]*fv[j]):

>    fi:

>    od: od:

>    if i<=N then fv[i]:=fv[i]+cdt*RHS: else fv[i]:=0: fi: od:      # Euler step; no atoms are removed in this first run

>    if trunc(icol/10)*10=icol then PL[trunc(icol/10)]:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)],title=cat("Collision number : ",(N/2)*icol*cdt/dE)): fi: od:      # store a snapshot every 10 steps

>    display([PL0,seq(PL[i],i=1..N_col/10)],insequence=true);

[Maple Plot]

We observe that the atoms distribute themselves over the available energy states and reach what looks like a Maxwell-Boltzmann distribution that corresponds to some temperature (the distribution peaks at some energy, and has a well-defined average).

>    E_avg:=add(E(i)*fv[i]*rho(i),i=1..N)/add(fv[i]*rho(i),i=1..N);

E_avg := .7297707934
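As a quick consistency check one can overlay a Maxwell-Boltzmann curve on the simulated points. This is only a sketch: it assumes the untruncated 3D-oscillator relation <E> = 3*T to read off a provisional temperature from E_avg , and the names T_fit , A_fit  and PL_MB  are introduced here for illustration only.

>    T_fit:=E_avg/3:                                                         # provisional temperature from <E> = 3*T

>    A_fit:=add(rho(i)*fv[i],i=1..N)/add(E(i)^2*exp(-E(i)/T_fit),i=1..N):    # normalize to the same particle number

>    PL_MB:=plot(A_fit*x^2*exp(-x/T_fit),x=0..E(N),linestyle=3):

>    display([plot([seq([E(i),rho(i)*fv[i]],i=1..N)]),PL_MB]);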

Clearly the computed E_avg  does not provide the correct average, because the distribution has been chopped off at (N-1/2)*dE . In fact, the truncation of the distribution at the tail is the key idea behind evaporative cooling. We will now introduce a removal of atoms at E(i)>1 , and look at the long-term behaviour. We start with the same distribution as before:

>    fv:=array([seq(1,i=1..N-N0),seq(0,i=N-N0+1..N)]);

fv := vector([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

>    PL0:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)]):

We run the collision loop again, but now in the update of the energy population fv[i]  we set the distribution function to zero for atoms above the barrier height, i.e., for E>1 . This corresponds to the removal of atoms which have acquired such energies. The idea is valid on physical grounds provided the time between collisions is long enough for evaporating atoms to actually leave the trap.

>    N_col:=100;

N_col := 100

>    for icol from 1 to N_col do:

>    for i from 1 to N do:

>    RHS:=0:

>    for k from 1 to N do:

>    for l from 1 to N do:

>    j:=k+l-i:

>    if j>0 and j<N+1 then

>    h:=min(i,j,k,l): RHS:=RHS+rho(h)/rho(i)*(fv[k]*fv[l]-fv[i]*fv[j]):

>    fi:

>    od: od:

>    if i<=N-N0 then fv[i]:=fv[i]+cdt*RHS: else fv[i]:=0: fi: od:

>    if trunc(icol/10)*10=icol then PL[trunc(icol/10)]:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)],title=cat("Collision number : ",(N/2)*icol*cdt/dE)): fi: od:

>    display([PL0,seq(PL[i],i=1..N_col/10)],insequence=true,view=[0..1,0..0.8]);

[Maple Plot]

>    E_avg:=add(E(i)*fv[i]*rho(i),i=1..N)/add(fv[i]*rho(i),i=1..N);

E_avg := .3811676206

One observes that the distribution changes rapidly for low collision numbers into a characteristic shape with a maximum level population inside the trap. With increasing collision number the maximum shifts towards lower energies, but the cooling becomes a rather slow process in the long term, because it becomes increasingly unlikely that collisions promote particles over the high barrier. A solution to this problem is to reduce the barrier height as the ensemble cools down.

The calculated average energy is now a more realistic measure, given that the distribution is confined to the interval below the barrier. Before we proceed with the idea of lowering the barrier height we improve the algorithm in two respects.

Exercise 1:

Increase the number of collisions by running the loop over longer times and observe the modest change in the final distribution. Also perform small variations of the product c*dt  and verify whether your results are consistent for fixed collision number.

>   

A good question to ask is: how much would we gain from a second-order, O(dt^2)  method? Twice the computational effort per time step might well pay off, since for such a method halving the time step improves the accuracy roughly 4-fold while only doubling the cost.

>    restart; with(plots):

Warning, the name changecoords has been redefined

>    E_t:=1;

E_t := 1

>    N:=40;

N := 40

>    N0:=10;

N0 := 10

>    dE:=E_t/(N-N0);

dE := 1/30

>    E:=i->(i-0.5)*dE;

E := proc (i) options operator, arrow; (i-.5)*dE end proc

>    rho:=i->E(i)^2;

rho := proc (i) options operator, arrow; E(i)^2 end proc

We need to farm out the RHS-calculation to a procedure:

>    RHSpr:=proc(fv,i) local j,k,l,h,RHS; global N,cdt,rho;

>    RHS:=0:

>    for k from 1 to N do:

>    for l from 1 to N do:

>    j:=k+l-i:

>    if j>0 and j<N+1 then

>    h:=min(i,j,k,l): RHS:=RHS+rho(h)/rho(i)*(fv[k]*fv[l]-fv[i]*fv[j]):

>    fi:

>    od: od: RHS; end:

>    N_col:=100;

N_col := 100

>    cdt:=1/400;

cdt := 1/400

>    fv:=array([seq(1,i=1..N-N0),seq(0,i=N-N0+1..N)]);

fv := vector([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

>    PL0:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)]):

The collision loop is modified by introducing a simple predictor-corrector method: extrapolation is used as before in the first step to predict the level population at t+dt , and then the rate of change in fv[i]  is recomputed from this predicted population. The real time-step is carried out using an average of both rate of change calculations.
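Written out, the scheme is the standard Heun (second-order Runge-Kutta) step; here F_i[f] denotes the rate of change returned by RHSpr(fv,i)  and c*dt is the scaled time step cdt :

g_i \;=\; f_i(t) + c\,\Delta t\;F_i[f(t)], \qquad f_i(t+\Delta t) \;=\; f_i(t) + \frac{c\,\Delta t}{2}\,\Bigl(F_i[f(t)] + F_i[g]\Bigr)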

>    for icol from 1 to N_col do:

>    for i from 1 to N do: RHSp[i]:=RHSpr(fv,i):

>    if i<=N-N0 then gv[i]:=fv[i]+cdt*RHSp[i]: else gv[i]:=0: fi: od:

>    for i from 1 to N do: RHS:=0.5*(RHSpr(gv,i)+RHSp[i]):

>    if i<=N-N0 then fv[i]:=fv[i]+cdt*RHS: else fv[i]:=0: fi: od:

>    if trunc(icol/10)*10=icol then PL[trunc(icol/10)]:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)],title=cat("Collision number : ",trunc((N/2)*icol*cdt/dE))): fi: od:

>    display([PL0,seq(PL[i],i=1..N_col/10)],insequence=true,view=[0..1,0..0.5]);

[Maple Plot]

>    E_avg:=add(E(i)*fv[i]*rho(i),i=1..N)/add(fv[i]*rho(i),i=1..N);

E_avg := .3280081282

Exercise 2:

Explore the stability regime for the improved algorithm, i.e., increase cdt  and observe at what collision number the solution becomes unstable.

Exercise 3:

Explore the dependence of the results on the number of energy bins kept above the trap barrier. In order to make a quantitative comparison track the maximum of the distribution as a function of collision number.

For cooling to low temperatures (i.e., low average energies in the trap) the fixed-barrier method becomes inefficient. The obvious strategy is to lower the trap height, i.e., to allow particles of ever lower energy to evaporate as time goes on. We lower the trap height gradually over the time of the run (below, from E(N)=1  down to E(N/2) ). This is accomplished by defining an index N_trap  which varies with time (collision number), and which is used in the update algorithm as the cut-off parameter beyond which the distribution function fv[i]  is set to zero at the end of a propagation step.

First we improve the algorithm by avoiding bookkeeping for the over-the-barrier states. We discretize the energy range for the (initially) trapped states and propagate only those. The double sum over k,l  in the calculation of the right-hand side is extended to cover N0  additional energy levels above the trap, whose populations are taken to be zero.

>    restart; with(plots):

Warning, the name changecoords has been redefined

>    E_t:=1;

E_t := 1

>    N:=40;

N := 40

>    dE:=E_t/(N);

dE := 1/40

>    E:=i->(i-0.5)*dE;

E := proc (i) options operator, arrow; (i-.5)*dE end proc

>    rho:=i->E(i)^2;

rho := proc (i) options operator, arrow; E(i)^2 end proc

>    cdt:=1/400;

cdt := 1/400

>    N_col:=50;

N_col := 50

>    RHSpr:=proc(fv,i) local j,k,l,h,RHS,fvk,fvl,fvj,N0; global N,cdt,rho;

>    RHS:=0: N0:=15;

>    for k from 1 to N+N0 do: if k<=N then fvk:=fv[k]: else fvk:=0: fi:

>    for l from 1 to N+N0 do: if l<=N then fvl:=fv[l]: else fvl:=0: fi:

>    j:=k+l-i:

>    if j>0 and j<=N+N0 then if j<=N then fvj:=fv[j]: else fvj:=0: fi:

>    h:=min(i,j,k,l): RHS:=RHS+rho(h)/rho(i)*(fvk*fvl-fv[i]*fvj):

>    fi:

>    od: od: RHS; end:

>    fv:=array([seq(1,i=1..N)]);

fv := vector([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

>    PL0:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)]):

>    for icol from 1 to N_col do:

>    for i from 1 to N do: RHSp[i]:=RHSpr(fv,i):

>    gv[i]:=fv[i]+cdt*RHSp[i]: od:

>    for i from 1 to N do: RHS:=0.5*(RHSpr(gv,i)+RHSp[i]):

>    fv[i]:=fv[i]+cdt*RHS: od:

>    if trunc(icol/10)*10=icol then PL[trunc(icol/10)]:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)],title=cat("Collision number : ",trunc((N/2)*icol*cdt/dE))): fi: od:

>    display([PL0,seq(PL[i],i=1..N_col/10)],insequence=true,view=[0..1,0..0.5]);

[Maple Plot]

>    E_avg:=add(E(i)*fv[i]*rho(i),i=1..N)/add(fv[i]*rho(i),i=1..N);

E_avg := .3341093444

>    N_TP:=add(fv[i]*rho(i),i=1..N);

N_TP := 6.250135320

Exercise 4:

It is fairly straightforward now to explore the following questions: how does the average energy after M collisions depend on the number of states kept in the calculation above the trap barrier? How sensitive are the results to the resolution dE ? How do the results differ for other powers in the density of states?

Now we come to the main point: we will lower the barrier height for the trap as a function of time (actually as a function of the number of collisions between the atoms). Over the total number of collisions we reduce the trap height from E(N)=1  to E(N1) . The index used as the cut-off criterion in the update of fv[i]  is called N_trap ; it is recalculated in the collision loop so that the barrier comes down gradually. For a start we choose to lower the barrier height by one half, i.e., N1=N/2 .

>    N1:=N/2;

N1 := 20

>    N_col:=100;

N_col := 100

>    cdt:=1/800;

cdt := 1/800

>    fv:=array([seq(1,i=1..N)]);

fv := vector([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

>    PL0:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)]):

>    for icol from 1 to N_col do: N_trap:=N-trunc((N-N1)*(icol/N_col));

>    for i from 1 to N do: RHSp[i]:=RHSpr(fv,i):

>    if i<=N_trap then gv[i]:=fv[i]+cdt*RHSp[i]: else gv[i]:=0: fi: od:

>    for i from 1 to N do: RHS:=0.5*(RHSpr(gv,i)+RHSp[i]):

>    if i<=N_trap then fv[i]:=fv[i]+cdt*RHS: else fv[i]:=0: fi: od:

>    if trunc(icol/10)*10=icol then PL[trunc(icol/10)]:=plot([seq([E(i),(rho(i)*fv[i])],i=1..N)],title=cat("Collision number : ",(N/2)*icol*cdt/dE)): fi: od:

>    display([PL0,seq(PL[i],i=1..N_col/10)],insequence=true,view=[0..1.0,0..1.0]);

[Maple Plot]

>    E_avg:=add(E(i)*fv[i]*rho(i),i=1..N)/add(fv[i]*rho(i),i=1..N);

E_avg := .2084728006

>    N_TP:=add(fv[i]*rho(i),i=1..N);

N_TP := 4.278828749

Evidently the distribution peaks now at a lower energy value, but for the same number of collisions a smaller fraction of bound particles remains.
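To put a number on this, one can compare the remaining trapped-particle measure N_TP  with that of the initial, uniformly filled distribution (a sketch; it simply uses the start value fv[i]=1  for all i as the reference):

>    evalf(N_TP/add(rho(i),i=1..N));        # fraction of the original sample still trapped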

Exercise 5:

Carry out Boltzmann simulations in which the reduction in barrier height happens faster (or more slowly) by changing the total number of collisions (controlled by the parameter N_col , which is proportional to the collision number) in the above loop. Is the final result simply a function of collision number, or are there differences when the barrier is lowered at a different rate with collision number?

Exercise 6:

Change the power law for the density of states from the harmonic oscillator case and observe the efficiency of evaporative cooling with collision number. How do the results depend on the power when it is changed in steps of 1/2 up or down from n=2 ?

>   

>