Nuclear decays: Poisson statistics

We consider sequences of nuclear decay data as collected by a Geiger counter. The data are shown to obey Poisson statistics, which is evidence that the events originate from independent spontaneous decays. In the limit of a large data sample (very high count rate) the distribution of events becomes Gaussian. A detailed discussion can be found in John R. Taylor: An Introduction to Error Analysis, chapters 10-12 (University Science Books, 1982; 2nd ed. 1998). For readers who are less familiar with probability distributions we begin with a section on the binomial distribution.

> restart;

Binomial distribution

We begin with the binomial distribution to illustrate the statistics of a familiar experiment with several possible discrete outcomes (throwing dice, etc.), before proceeding with nuclear decays. Any physics measurement (or indeed any measurement at all) results in a distribution of results (think of the repeated measurement of the period of a pendulum, or the measurement of the height of the female population in Canada). Usually we think that a physics measurement should have a unique answer: in the case of the pendulum period we might expect that only one outcome is possible, while for the second example we clearly expect a distribution with a well-defined average and spread of data around that average. Nevertheless, both situations have something in common: due to measurement errors we obtain a distribution of results in the first case as well (think of using an electronic stopwatch, and realize that at some level of precision we start to measure our reaction time). Even in automated experiments (using a light beam/photodetector set-up) the numerical result will differ from measurement to measurement when the readout is sufficiently precise. The remaining question is to understand the properties of the distribution of values in the limit that the number of measurements taken is very large, or infinite. This is the main reason for studying the mathematical properties of distributions. The distribution tells us the probability of achieving a certain result in a particular measurement, and knowledge of this probability is crucial when assessing the validity of a measurement.

The distribution used most often in any experimental science is the Gaussian (or normal) distribution. It applies when a measurement is subject to many sources of small and random errors. It is a limiting distribution in the sense that an experiment can be said to follow this distribution only in the limit of a large number of measurements.

The binomial distribution (and also the Poisson distribution discussed later) applies to finite sequences of data points. Let us illustrate it with the example of throwing three dice and asking for the number of times that a particular face appears (e.g., let us speak of an ace when the face with six points shows). For a given throw the possible outcomes are nu = 0, 1, 2, or 3 aces.

This situation is easily figured out: the probability to throw an ace with a single die is 1/6, as there are six faces and we assume that the die is perfect. Whether a die is perfect or not can later be assessed by observing whether its runs follow the correct distribution. The same is true for a random number generator.

Clearly the probability to throw three aces with three independent dice is given by

> P3w3:=(1/6)^3; evalf(%);

P3w3 := 1/216

0.004629629630

This corresponds to a probability of about 0.5 percent. Now we figure out the probability for two aces. It should be the product of the probabilities for throwing two aces and for not throwing a third one, i.e., we use the fact that not throwing an ace with one of the dice has a probability given by the complement of 1/6 with respect to 1. Naively, we might think that the answer is given by the simple product:

> wrongP2w3:=(1/6)^2*(1-1/6); evalf(%);

wrongP2w3 := 5/216

0.02314814815

This answer corresponds to the case where the first two dice show aces and the third one does not. The problem is that, unlike in the first case, where the three dice were indistinguishable (they all showed the same result), we can now have three different configurations: suppose we mark by A a face showing an ace, and by B a 'no ace' face; then we can have with equal probabilities (A, A, B), (A, B, A), and (B, A, A). The probability for any one of these three configurations is the one calculated above by the simple product. Why is this so? One should think about the likelihood of the outcome of any such configuration, e.g., (A, B, A). Its probability is given by the probability to roll an ace with the first die (1/6), times the probability of not getting an ace with the second die (5/6), times the probability of getting one with the third die (1/6). This example shows that we need to be careful in probability theory, and that it is always easiest to find the probabilities for the most detailed (exclusive) information. The more inclusive information is obtained by summing over the possibilities that make up the statement.

Therefore, we have a probability of almost 7 %:

> P2w3:=3*(1/6)^2*(1-1/6); evalf(%);

P2w3 := 5/72

0.06944444444

Now we can figure out the remaining probabilities: we consider all the explicit possibilities for rolling one ace:

(A, B, B), (B, A, B), (B, B, A) - this gives us the same multiplicity as with two aces:

> P1w3:=3*(1-1/6)^2*(1/6); evalf(%);

P1w3 := 25/72

0.3472222222

Finally we have an easy one again:

> P0w3:=(5/6)^3; evalf(%);

P0w3 := 125/216

0.5787037037

We can check that the probabilities sum to unity (otherwise these wouldn't be probabilities!)

> P0w3+P1w3+P2w3+P3w3;

1
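As an independent check of the combinatorial reasoning, one can enumerate all 6^3 = 216 equally likely outcomes by brute force. The following sketch (the counter names cnt0..cnt3 are ours) should reproduce the list [125/216, 25/72, 5/72, 1/216]:

> cnt0:=0: cnt1:=0: cnt2:=0: cnt3:=0:

> for d1 to 6 do for d2 to 6 do for d3 to 6 do
> na:=0: if d1=6 then na:=na+1 fi: if d2=6 then na:=na+1 fi: if d3=6 then na:=na+1 fi:
> if na=0 then cnt0:=cnt0+1 elif na=1 then cnt1:=cnt1+1 elif na=2 then cnt2:=cnt2+1 else cnt3:=cnt3+1 fi:
> od: od: od:

> [cnt0/216, cnt1/216, cnt2/216, cnt3/216]; # compare with P0w3, P1w3, P2w3, P3w3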

A graphical representation of these results can be obtained by vertical bars:

> plot([[0,P0w3*t,t=0..1],[1,P1w3*t,t=0..1],[2,P2w3*t,t=0..1],[3,P3w3*t,t=0..1]],color=[red,blue,green,black],axes=boxed);

[Maple Plot]

Exercise B1:

Repeat the above steps with four dice, i.e., find the probabilities for 0, 1, 2, 3, 4 aces.

The general form of the binomial distribution is specified by the following parameters:

- the number of independent trials: n (throwing n dice, tossing n coins, looking at n electrons in an atomic shell, etc.);

- the number of successes: nu (getting an ace, getting heads on a coin, having an electron ionized, etc.);

- the probability of success: p, and of failure: q = 1 - p, in any one trial (p = 1/6 for getting an ace on a die; p = 1/2 for getting heads on a coin; the probability of ionization of an electron in a given atomic shell by an X-ray of a given energy).

The probability for nu successes in n trials has to be proportional to the nu-th power of p and the (n-nu)-th power of q. The combinatorial factor that counts the number of possibilities can be figured out using a mathematical identity: we know that 1 = p + q, and we can raise this expression to the nth power. Examples are given below:

> expand(1^3=(p+q)^3);

1 = p^3 + 3 p^2 q + 3 p q^2 + q^3

> expand(1^4=(p+q)^4);

1 = p^4 + 4 p^3 q + 6 p^2 q^2 + 4 p q^3 + q^4

We recognize the following from our example discussed above:

- the expansion of (p+q)^n generates the required powers of the single-event probabilities for all possible outcomes;

- the sum of the terms on the RHS equals one, i.e., we can think of the identity as summing all possible probabilities;

- for n =3 (and n =4 in the exercise) the binomial coefficients count the number of configurations for a given scenario as counted explicitly above.

Maple has a built-in function

> binomial(4,2);

6

but it is straightforward to define the function known from the mathematical expansion:

> bc:=(n,nu)->n!/(nu!*(n-nu)!);

bc := (n, nu) -> n!/(nu! (n - nu)!)

> bc(4,2);

6

Armed with this coefficient we can define the binomial distribution:

> bc(n,nu)*p^nu*(1-p)^(n-nu);

n! p^nu (1 - p)^(n - nu) / (nu! (n - nu)!)

> BD:=unapply(%,p,n,nu);

BD := (p, n, nu) -> n! p^nu (1 - p)^(n - nu) / (nu! (n - nu)!)

For a given case (given single-event probability value p and fixed number of trials n ) we can graph the probabilities for nu successes:

> ps:=0.35;

ps := 0.35

> plot([[0,BD(ps,4,0)*t,t=0..1],[1,BD(ps,4,1)*t,t=0..1],[2,BD(ps,4,2)*t,t=0..1],[3,BD(ps,4,3)*t,t=0..1],[4,BD(ps,4,4)*t,t=0..1]],color=[red,blue,green,brown,magenta],axes=boxed);

[Maple Plot]

Exercise B2:

Explore the possibilities of the binomial distribution for fixed n = 4 by changing the single-event probability ps in the above example. What happens for ps = 0.5? Explore also n = 5. Graph the probabilities for all possible successes nu = 0..n as a function of the single-event probability; a possible starting point is sketched below.
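The following one-liner may serve as a starting point for the last part: it graphs the n = 4 probabilities for nu = 0..4 successes as functions of the single-event probability p. Note what happens at p = 0.5, where the distribution becomes symmetric:

> plot([seq(BD(p,4,nu),nu=0..4)],p=0..1,color=[red,blue,green,brown,magenta],axes=boxed);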

There are further properties that one should explore. The general expression BD(ps, n, nu) gives the probability of nu successes in n trials for a given single-trial success probability ps. One can ask what the average number of successes is. This is obtained by averaging nu probabilistically, i.e., by summing nu with the probability as a weight coefficient.

We try a few examples (to anticipate the result of a mathematical derivation).

> N:=5; sum(nu*BD(p,N,nu),nu=0..N); simplify(%);

N := 5

5 p (1 - p)^4 + 20 p^2 (1 - p)^3 + 30 p^3 (1 - p)^2 + 20 p^4 (1 - p) + 5 p^5

5 p

The average number of successes is simply given by n times the single-event probability. Sometimes this quantity is all that one is interested in. On the other hand one can argue that this is all that is required from a measurement, and that the individual success probabilities can be reconstructed via the deduced single-trial probability (assuming that the experiment obeys binomial statistics).

The standard deviation for the number of successes nu can also be calculated. It is defined as follows: sigma squared is the average of the squared deviation of nu from its average.

> nubar:=N*p;

nubar := 5 p

> sigsq:=simplify(sum((nu-nubar)^2*BD(p,N,nu),nu=0..N));

sigsq := 5 p (1 - p)

We can generalize this finding (try other values of N ). It is also possible to show that this result is identical to the following difference:

> simplify(sum(nu^2*BD(p,N,nu),nu=0..N))-nubar^2;

5 p - 5 p^2

The quantity sigma (or its square) measures the width of the distribution of probabilities for nu successes. The two results for the average and the width of the binomial distribution tell us which parameters to choose in a Gaussian approximation to the binomial distribution. The latter becomes a valid approximation for large n.
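Before turning to the graphical comparison, a quick loop (the loop variable N0 is ours) confirms the pattern nubar = N p and sigma^2 = N p (1 - p) for several values of N:

> for N0 in [2,3,4,6,8] do
> print(N0, simplify(sum(nu*BD(p,N0,nu),nu=0..N0)), simplify(sum((nu-N0*p)^2*BD(p,N0,nu),nu=0..N0)));
> od: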

> with(plots):

> N:=7; ps:=0.4;

N := 7

ps := 0.4

> P1:=plot([seq([nu,BD(ps,N,nu)*t,t=0..1],nu=0..N)],color=blue):

> nubar:=ps*N; sigma:=sqrt(N*ps*(1-ps));

nubar := 2.8

sigma := 1.296148140

> P2:=plot(exp(-(nu-nubar)^2/(2*sigma^2))/(sqrt(2*Pi)*sigma),nu=-0.1..8,color=red):

> display(P1,P2,axes=boxed);

[Maple Plot]

The comparison shows that the discrete binomial distribution can be approximated reasonably well by a continuous Gaussian distribution evaluated at discrete nu values, already for moderate values of n.
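To put a number on this statement, one can evaluate the largest pointwise difference between the two distributions under the assignments above (the index name k is ours); it should come out at the level of a percent:

> max(seq(abs(BD(ps,N,k)-evalf(exp(-(k-nubar)^2/(2*sigma^2))/(sqrt(2*Pi)*sigma))),k=0..N));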

We can now also understand how two seemingly different tasks can be put on a common footing: measurements where we obviously expect a broad distribution (the height of some population, with average and deviation the meaningful quantities), and others where we seek a 'single number' only. In the latter case the 'single number' is given by the average of the distribution, and the deviation becomes a measure of the uncertainty in the measured result. While in an ideal world we would like the uncertainty to be negligible compared to the 'single number', it is essential to determine the uncertainty from the distribution of repeated measurements in order to assess the quality (accuracy) of the measured quantity.

To really understand the concept of a probability distribution it is essential to analyze sequences of real data. We encourage the reader to perform 'real' experiments with coins, dice, etc. It is also possible to analyze sequences produced with Maple's random number generator (whose quality falls short of the state of the art).

> die:=rand(1..6);

[Maple Math]

> die();

[Maple Math]

We generate a sequence of tosses with three dice:

> M:=200; n:=3;

M := 200

n := 3

> S1:=[seq([die(),die(),die()],i=1..M)]:

> S1[1],S1[2];

[Maple Math]

We can analyze the presence of particular faces:

> F:=6;

F := 6

> DieAnalysis:=proc(S) local nu,C,i,toss,isucc,j; global F,M,n,p_s;

> # C[nu] counts the tosses showing face F exactly nu times; p_s accumulates the successes

> for nu from 0 to n do: C[nu]:=0; od: p_s:=0;

> for i from 1 to M do: toss:=S[i]: isucc:=0:

> for j from 1 to n do:

> if toss[j]=F then isucc:=isucc+1; p_s:=p_s+1; fi;

> od: C[isucc]:=C[isucc]+1;

> od: p_s:=evalf(p_s/(M*n)); print("Single-event probability: ",p_s);

> # return the measured frequencies for nu = 0..n successes

> [seq(evalf(C[nu]/M),nu=0..n)]; end:

> DieAnalysis(S1);

[Maple Math]

[Maple Math]

> ps:=evalf(1/6);

ps := 0.1666666667

> [seq(BD(ps,n,nu),nu=0..n)];

[Maple Math]

> [seq(BD(p_s,n,nu),nu=0..n)];

[Maple Math]

We appear to have results that are consistent with the binomial distribution.

We note that in addition to measuring the nu-fold event probabilities directly from the sample, we have the option to measure the single-event probability and to generate the nu-fold event probabilities from the binomial distribution. This allows one to 'beat' the statistics, i.e., to generate predictions for unlikely events.
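For example, even if no throw with three aces occurred in the run above, the measured single-event probability p_s predicts how many triple-ace throws to expect in a sample of M = 200 (a one-line sketch using the globals set by DieAnalysis):

> evalf(M*BD(p_s,n,3)); # expected number of throws showing three aces; typically below one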

We leave it as an exercise to consider longer sequences of tosses to observe whether the consistency improves with the sample size. We can also look at the distribution of frequencies of success for other faces:

> F:=1;

F := 1

> DieAnalysis(S1);

[Maple Math]

[Maple Math]

> [seq(BD(p_s,n,nu),nu=0..n)];

[Maple Math]

> F:=2;

F := 2

> DieAnalysis(S1);

[Maple Math]

[Maple Math]

> [seq(BD(p_s,n,nu),nu=0..n)];

[Maple Math]

The scatter in the answers for the different faces demonstrates that we have to be very careful when drawing conclusions from measurements with small sample sizes (low numbers of repetitions)! Even the single-event probabilities vary substantially. The deviations shrink only slowly as the sample size is increased (typically as 1/sqrt(M)).

Exercise B3:

Determine the single-event probabilities for two different faces. Determine whether the measurement from samples approaches the expected value of 1/6 when the sample size is increased by different factors (e.g., 2, 10, 50). Is the approach consistent with 1/sqrt(M) behaviour? A possible starting point is sketched below.
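One way to start (the names est_p, M0, and f6 are ours): estimate the single-event probability from samples of increasing size, and monitor the deviation from 1/6 scaled by sqrt(M0). If the 1/sqrt(M) behaviour holds, the last printed column should stay roughly constant:

> est_p:=proc(M0) local i,cnt; cnt:=0;
> for i from 1 to M0 do: if die()=6 then cnt:=cnt+1; fi; od;
> evalf(cnt/M0); end:

> for M0 in [250,500,2500,12500] do
> f6:=est_p(M0): print(M0, f6, evalf(abs(f6-1/6)*sqrt(M0)));
> od: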

Exercise B4:

Analyze the case where four dice (or some other number) are tossed simultaneously.

>

Poisson distribution with chi-squared test

The Poisson distribution is discrete (like the binomial distribution) and applies to the following case: suppose a random process occurs, such as the decay of radioactive nuclei in a sample. We cannot predict at what time any single nucleus will undergo a transition, but we do know that the nuclei in the sample decay at some rate, i.e., a certain number of events is expected per unit time. If we pick some time interval, we expect on average a certain number of decayed atoms (a fractional number, as it is an average). In reality, when we record time sequences, we obtain different numbers of decays in such intervals, because the number of decays is distributed around the average. The Poisson distribution is used in this situation as a simplified version of the binomial distribution. It depends on an average rate and a chosen time interval, and a decay is counted as a success. The reason why we can simplify the binomial distribution is that the number of trials is extremely large (a fraction of Avogadro's number), while the probability of success (decay of an individual nucleus) is extremely small (of the order of the inverse of Avogadro's number per unit time). In this limit, with the mean mu = n p held fixed, the no-success factor (1-p)^(n-nu) approaches exp(-mu), and a limiting property of the binomial coefficient for large n leads to the simple form given below.

The Poisson distribution is given as:

> PD:=(mu,nu)->exp(-mu)*mu^nu/nu!;

PD := (mu, nu) -> exp(-mu) mu^nu / nu!

Here mu is the expected mean number of counts in the chosen time interval, and nu labels the possibilities of observing 0, 1, 2, ... counts.
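Before using PD we can convince ourselves numerically that it emerges from the binomial distribution in the stated limit: hold the mean mu0 = n0*p fixed while the number of trials n0 grows (the names mu0 and n0 are ours; the binomial values should approach the Poisson value printed alongside):

> mu0:=2.0:

> for n0 in [10,100,1000] do
> print(n0, evalf(BD(mu0/n0,n0,2)), evalf(PD(mu0,2)));
> od: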

Note that in an actual experiment with a Geiger-Mueller counter one does not observe all decays, but only a fraction thereof (depending on the efficiency of the tube for particular forms of radiation, on the count rate, which leads to a certain dead time for the tube, on the supply voltage of the tube, etc.). This does not alter, however, the fact that one can determine the statistical nature of the radiation, as one simply looks at a subsample of the actual decay sequence (one is also limited by the direction of observation).
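A small check of this subsampling statement: if each decay is recorded only with probability eff, the recorded counts again follow a Poissonian, with mean eff*mu instead of mu. Numerically (the names mu1, eff, and nu1 are ours; both lines should print approximately the same value):

> mu1:=3.0: eff:=0.5:

> evalf(sum(PD(mu1,nu1)*binomial(nu1,2)*eff^2*(1-eff)^(nu1-2),nu1=2..60)); # probability of recording 2 counts

> evalf(PD(eff*mu1,2));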

Why is mu the mean (or average) count number for the chosen time interval? Let us calculate the probabilistic average number of observed decays (successes):

> mu:='mu':

> simplify(sum(nu*PD(mu,nu),nu=1..infinity));

mu

> simplify(sum(PD(mu,nu),nu=0..infinity));

1

Note that the first sum started at one, as no contribution is made towards the average by nu = 0; the second sum verifies that the distribution is normalized. The mathematical steps required in the calculation (done for us by Maple) are shown in J.R. Taylor's book.

The procedure when analyzing a sequence is as follows:

First one determines the average count rate by taking the total number of decays and dividing by the total time interval (which is subdivided into a certain number of sub-intervals). From this one obtains an average number of decays per sub-interval. Finally one forms a histogram by recording the frequency of intervals with 0, 1, 2, ... counts. This histogram is to be compared with the predictions of PD for the given mu value. It is possible to re-analyze the same decay sequence using a different choice of sub-interval length.

Let us look at the Poisson distribution for two cases: one with mu < 1, for which the number of zero-count intervals dominates, and another one where one can see how the Gaussian distribution emerges as a limit again.

> mu:=0.6;

> plot([seq([nu,PD(mu,nu)*t,t=0..1],nu=0..10)],color=blue,axes=boxed);

mu := 0.6

[Maple Plot]

> mu:=3.4;

> plot([seq([nu,PD(mu,nu)*t,t=0..1],nu=0..10)],color=blue,axes=boxed);

mu := 3.4

[Maple Plot]

It would make sense to add to the graph an indication of the mu value, to show the location of the average number of decays per chosen interval for this asymmetric distribution. To find out which Gaussian distribution to use in the limiting case, one needs the standard deviation.

> sigsq:=simplify(sum((nu-mu)^2*PD(mu,nu),nu=0..infinity));

sigsq := mu

Again, this can also be calculated from:

> simplify(sum(nu^2*PD(mu,nu),nu=0..infinity)-mu^2);

mu

This result may appear puzzling: the deviation is given by sqrt(mu)! Indeed, if one quotes the average together with its deviation as an interval, one obtains mu ± sqrt(mu). When using a larger count interval we thus also obtain a larger absolute uncertainty, which may seem improper. The fractional uncertainty sqrt(mu)/mu = 1/sqrt(mu), however, is reduced for an increased time interval.
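A quick illustration (the name m2 is ours): as the mean count mu grows, the absolute width sqrt(mu) grows, but the relative width sqrt(mu)/mu shrinks:

> seq(evalf([m2, sqrt(m2), sqrt(m2)/m2]), m2=[4, 100, 2500]);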

> mu:=8.3;

> P1:=plot([seq([nu,PD(mu,nu)*t,t=0..1],nu=0..20)],color=blue):

mu := 8.3

> sigma:=sqrt(mu);

sigma := 2.880972058

> P2:=plot(exp(-(nu-mu)^2/(2*sigma^2))/(sqrt(2*Pi)*sigma),nu=-0.1..20,color=red):

> display(P1,P2,axes=boxed);

[Maple Plot]

Note that there is a systematic tendency for the Gaussian distribution to underestimate the count probability to the left of the average. The Poisson distribution is asymmetric, and this asymmetry becomes negligible only for a sufficiently large average number of counts in the chosen time interval.

To properly understand the Poisson distribution one should analyze data runs from a Geiger counter. These are usually computer-interfaced in undergraduate laboratories today, which has the drawback that the software performs all the data analysis, and the students no longer manipulate the raw data. For this reason we use Maple to generate event sequences which are then analyzed.

A sequence of events corresponds to the sequence of clicks that one hears as the Geiger counter picks up the radiation (either from natural background, i.e., mostly cosmic radiation, or from a small radioactive source, such as a ceramic plate or tile that has been coated with a radioactive layer containing, e.g., uranium oxide to enhance the brightness).

We use a random number generator to produce a time sequence of events; we assume that a tick represents a second, and generate events at a given rate.

A random number generator for integers in the interval [1,100] is used in conjunction with a given rate to decide for each time unit whether an observable decay occurs or not. A decay is recorded as a one, a non-decay as a zero.

> rnd:=rand(1..100);

[Maple Math]

> rate:=1/10;

rate := 1/10

> tmax:=3000;

tmax := 3000

> TS:=[]: for i from 1 to tmax do: if rnd() <= 100*rate then TS:=[op(TS),[i,1]]: else TS:=[op(TS),[i,0]]: fi: od: # record [time,1] for a decay, [time,0] otherwise

We graph a subset of the count data:

> plot([seq(TS[i],i=1..200)],style=point);

[Maple Plot]

This sequence could have come from a Geiger counter. Note the irregularity: there are stretches with no counts, followed by segments where the counts appear to cluster. This irregularity is noticeable when listening to the clicks of a Geiger counter. Now we pick a time interval for the decay analysis:

> tint:=25; # length of interval in seconds (time units)

tint := 25

We can determine the expected average number of events per time interval:

> nu_bar_exp:=tint*rate;

nu_bar_exp := 5/2

> Nt:=tmax/tint; # number of time intervals contained in sample

Nt := 120

Now we need to generate the histogram (frequency distribution): for each time interval we find out how many events occurred.

We begin by resetting the frequency counters. Assume that Emax events during an interval is the maximum (increase it if the program below complains):

> Emax:=15;

Emax := 15

> for icnt from 0 to Emax do: C[icnt]:=0; od: myrate:=0:

> # loop over the Nt sub-intervals: count the events in each, build the frequency table C, and accumulate the overall rate
> for i from 1 to Nt do: icnt:=0: for j from 1 to tint do: if TS[(i-1)*tint+j][2]=1 then icnt:=icnt+1; myrate:=myrate+1; fi; od: if icnt <= Emax then C[icnt]:=C[icnt]+1; else print("Increase Emax, please"); fi; od: myrate:=evalf(myrate/tmax);

[Maple Math]

> mymu:=myrate*tint;

[Maple Math]

To compare the measured frequencies with the Poisson distribution we have to convert the probabilities calculated from the Poissonian into expected numbers of intervals (multiplication by Nt):

> P2:=plot([seq([nu-0.05,Nt*PD(mymu,nu)*t,t=0..1],nu=0..Emax)],color=red):

> P1:=plot([seq([icnt+0.05,C[icnt]*t,t=0..1],icnt=0..Emax)],color=blue): display(P1,P2,axes=boxed);

[Maple Plot]

We can make the following observations:

1) for the chosen moderate number of recordings (event/no-event per second) the simulation does well on the event rate and on the expected average number of events per time interval;

2) a reasonable result is obtained for the entire distribution; note that the Poissonian drawn for comparison is based on input from the data run (the variable mymu for the average number of events per time interval), and not on the predetermined rate used to generate the sequence. Substantial discrepancies occur for individual bins in the histogram. This should not cause too much alarm: the Poisson distribution represents a limiting probability distribution that is reached only in the limit of many repeated experiments;

3) we should explore further the consistency of the data with a Poisson distribution, either by considering larger data sets or by performing a systematic test of the hypothesis (chi-squared test). The latter is attempted further below.

Exercise P1:

Explore the above data by choosing different subinterval lengths tint .

Exercise P2:

Repeat the above steps for cases where the decay rate is increased and decreased respectively.

We now consider an interesting additional check, namely a test of the hypothesis that the distribution generated from the independent events is indeed compatible with a Poisson distribution. A chi-squared test of a hypothesis is a quantitative measure that essentially compares the distributions as in the above graph: it sums the squares of the discrepancies and turns the result into a useful measure.

In the graph above we have used the data to deduce the mu value for the Poissonian, and then generated a distribution of expected values for observing nu events per time interval. (A limitation of the Poissonian is perhaps noticeable here: our simulation, like any Geiger counter sequence, has a maximum number of observable events, namely when the counter is clicking all the time, while the Poissonian distributes the probability over an infinite interval; this is the price to be paid for the replacement of a binomial by a Poisson distribution. In practice this shouldn't be a problem, as the predictions for high event counts are small.)

We can calculate the chi-squared for the results obtained above (make sure that the statements that produce the graph above were executed) from the expected and observed results:

> chi_sq:=add((Nt*PD(mymu,nu)-C[nu])^2/(Nt*PD(mymu,nu)),nu=0..Emax);

[Maple Math]

Is this an acceptable value or not? This depends on the number of degrees of freedom. Here we have a little problem: should we use all bins up to Emax, which includes bins with no counts and small expected probabilities, or not? After all, there was some arbitrariness in choosing this variable, which acts as a cut-off for the diagram.

The criterion for acceptance is that chi-squared be much less than the number of degrees of freedom d; the hypothesis is rejected when chi-squared is much bigger than d. The question is whether we can say that d = 16 (= Emax + 1).

Judging from the graph, the right attitude is to pick the 10 visible points:

> chi_sq:=add((Nt*PD(mymu,nu)-C[nu])^2/(Nt*PD(mymu,nu)),nu=0..9);

[Maple Math]

The conclusion based on the limited data set is that the data are not inconsistent with a Poisson distribution, and that further testing is desirable. Further testing means running with bigger data sets. An important issue when performing these tests is to choose reasonable time intervals (which depend on the actual count rate). The chi-squared comparison has to have a reasonable number of contributing data points (say six or more, to account for the two degrees of freedom to be subtracted, as explained below).

One should also be a bit more precise when assessing the number of degrees of freedom d. We have simply taken the number of non-negligible theoretical data points (we could have used the number of data bins). It should be kept in mind that we really have one less independent value, as the data were used to infer the average number of decays per time interval (mymu). There can be other constraints (usually the number of data points; in the case of a Gaussian distribution also its deviation) which further reduce the number of degrees of freedom. A reduced chi-squared is defined as chi-squared/d, and for this quantity the borderline between acceptance and rejection of the hypothesis is the value one. It is evident that for acceptance we need to obtain much less than one, and vice versa for rejection.

A healthier attitude for dealing with the open-ended Poisson distribution is to declare the bin neighbouring the largest nu value that still received non-zero counts to be one bin for all events beyond it. In our case that would be nu = 9 (now standing for nu >= 9, and receiving 0 counts), and d would be 10 - 2 = 8, i.e., a borderline result [note that the precise value in the latter statement can change for repeated data runs].

> chi_sq/8;

[Maple Math]
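The regrouping itself can be sketched as follows (the names Cpool, Epool, and chi_sq_pool are ours; this assumes the counters C, Nt, Emax, and mymu from the run above are still in place). All observed counts beyond nu = 8 are pooled into one tail bin, whose expectation uses the full Poissonian tail:

> Cpool:=add(C[nu],nu=9..Emax): # observed counts in the pooled tail bin

> Epool:=Nt*(1-add(PD(mymu,nu),nu=0..8)): # expected counts in the tail bin

> chi_sq_pool:=add((Nt*PD(mymu,nu)-C[nu])^2/(Nt*PD(mymu,nu)),nu=0..8)+(Epool-Cpool)^2/Epool;

> chi_sq_pool/8; # reduced chi-squared with d = 10-2 = 8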

Our own test with the code above produced the following interesting result:

- for data runs with up to 3000 time ticks (at a decay rate of 1/10 per tick), consistency with a Poissonian was observed (reduced chi-squared between 0.4 and 1.4);

- for data runs with 5000 time ticks and more, the Poisson hypothesis was not confirmed (the reduced chi-squared often came out around 1.5, rather than falling consistently below one).

This allows one to conclude that, at the level of random number sequences in the hundreds of thousands, the pseudo-random number generator has problems with correlations, i.e., the random numbers are not truly independent of each other.

We can also conclude that it is hard to test a hypothesis: we have to run long simulations, and we have to know that the tools are accurate enough. Short simulations are often sufficient to obtain a qualitative understanding.

Exercise P3:

Explore the chi-squared test of the Poisson distribution hypothesis with different count rates.

>