{VERSION 4 0 "IBM INTEL NT" "4.0" } {USTYLETAB {CSTYLE "Maple Input" -1 0 "Courier" 0 1 255 0 0 1 0 1 0 0 1 0 0 0 0 1 }{CSTYLE "2D Math" -1 2 "Times" 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 1 }{CSTYLE "2D Comment" 2 18 "" 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 } {CSTYLE "2D Input" 2 19 "" 0 1 255 0 0 1 0 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 256 "" 1 16 0 0 0 0 0 1 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 257 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 258 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 259 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 } {CSTYLE "" -1 260 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 261 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 262 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 263 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 264 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 } {CSTYLE "" -1 265 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 266 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 267 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 268 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 269 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 } {CSTYLE "" -1 270 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 271 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 272 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 273 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 274 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 } {CSTYLE "" -1 275 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 }{CSTYLE "" -1 276 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 }{CSTYLE "" -1 277 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 278 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 }{CSTYLE "" -1 279 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 } {CSTYLE "" -1 280 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 281 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 282 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 283 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 284 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 } {CSTYLE "" -1 285 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 286 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" 
-1 287 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 }{CSTYLE "" -1 288 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 }{CSTYLE "" -1 289 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 290 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 291 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 292 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 293 "" 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 }{CSTYLE "" -1 294 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 }{CSTYLE "" -1 295 "" 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 }{PSTYLE "Normal " -1 0 1 {CSTYLE "" -1 -1 "" 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 }0 0 0 -1 -1 -1 0 0 0 0 0 0 -1 0 }} {SECT 0 {EXCHG {PARA 0 "" 0 "" {TEXT 256 51 "Experimental uncertaintie s: the normal distribution" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 1431 "We consider the statistical analysis of random \+ experimental uncertainties in this worksheet. This will allow us to un derstand how to analyze repeated measurements of the same quantity, as suming that the uncertainties are caused by random measurement errors. These can include fluctuations in currents in a circuit (depending on someone else's loads in the power grid), variable reaction time when \+ a stopwatch is used, parallax problems when reading a length. Some of \+ these can also give rise to systematic errors, whereby all data values taken are consistently under- (or over-) estimated by the experimente r. These are much harder to track. The most typical systematic error t hat I observed in our laboratory over the years was the uncalibrated o ut-of-tune timebase of a time chart recorder (used in a Cavendish expe riment to track the motion of two lead balls via a photodiode pick-up \+ of a laser beam). This error was easily spotted with a stopwatch, \+ but no student was able to pick up on it (they simply believed the rea ding on the knob). A 20% systematic error in the timebase resulted in \+ a wrong value of Newton's constant.
Most students had a more accurate \+ value from the manual experiment (where they tracked the motion by mar kings on graph paper pasted onto a wall), and were surprised that the \+ automated experiment gave inferior results (and often inflated their u ncertainties to avoid conflict with the literature value)." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 386 "A detailed analy sis of experimental uncertainties is crucial in order to assess whethe r systematic errors are present in someone's experiment or not. We wil l not enter into the question here as to why the experimental uncertai nties can (usually) be assumed to follow a normal (Gaussian) distribut ion. The prerequisites for this situation are that there are many sour ces of small errors." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 " " {TEXT -1 412 "We are interested in a single quantity that is measure d repeatedly. For the measurement of linear (or other) relationships a n extension of the arguments presented here leads to the linear least \+ squares (or quadratic) fit. In this worksheet we will simulate random \+ errors by using normally distributed random numbers to test the theore tical ideas which are proposed for the analysis of experimental uncert ainties." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 425 "Every experimenter will intuitively form averages from repeated m easurements in order to arrive at an improved experimental result for \+ the quantity to be measured. Understanding experimental uncertainties \+ will not only lead to a proper justification, but also to a measure of the expected accuracy of the averaged result. For this purpose we hav e to adopt a model which tells us how the repeated measurements are di stributed." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "restart;" }}} {EXCHG {PARA 0 "" 0 "" {TEXT -1 109 "To illustrate the concept of aver age and standard deviation we follow the textbook approach (John R. 
Ta ylor: " }{TEXT 257 33 "An Introduction to Error Analysis" }{TEXT -1 418 ", chapters 4 and 5, University Science Books 1982, 2nd ed. 1998). Suppose that we have carried out a measurement for the spring constan t of a typical spring (out of a box containing a set of identically ma nufactured ones) repeatedly by timing the period of oscillation while \+ a well-known mass was attached. The series of 10 measurements is given (in Newton/meter) as the following list of values with 2-digit accura cy:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 36 "kL:=[86,85,84,89,86, 88,88,85,83,85];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 708 "The choice o f 2-digit accuracy was reasonable. Our timing device gave more digits \+ of readout, but due to the substantial fluctuations in the second digi t we decided to round the numbers to two digits. (Common mistakes made by students are to: (a) carry all 10 digits that their calculator prod uced while dividing two numbers that were measured only to 2 or 3 figu res; (b) to trust digital devices that display much more than the sta ted accuracy in the manual (digital voltmeters and ammeters); (c) igno ring the fact that timing motion with digital stopwatches does not eli minate human error, and reaction time; (d) measurements with calipers \+ can be tricky due to bending, improper holding of the instrument.)" }} {PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 400 "Suppose \+ that all numbers would have been equal in the set of measurements. The n there would be no deviation from the average. In that case we might \+ ask ourselves whether more digits can be read from the instrument, and whether it is worthwhile to make an attempt at more precision. At thi s point we probably could proceed and carry out an uncertainty estimat e with the data set with 3-digit accuracy."
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 50 "The average value is presumably our best estimate:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 12 "N:=n ops(kL);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 34 "k_a:=evalf(add( kL[i],i=1..N)/N,3);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 141 "Before we proceed to calculate the deviation we illustrate the data by forming \+ a histogram (a frequency distribution or binning of the data)." }} {PARA 0 "" 0 "" {TEXT -1 189 "We choose a bin size and count how many \+ measurements fall into each bin. We could pick 10 bins around the calc ulated average value, or simply cover the range between 80 and 90 with 20 bins." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "dk:=0.5;" }}} {EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "k_i:=i->80+(i-0.5)*dk;" }}} {EXCHG {PARA 0 "" 0 "" {TEXT -1 155 "One way to proceed to bin the dat a is to subtract 80 from the value, multiply the remainder by 2 and r ound it to an integer (by adding 0.5 and truncating):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 35 "i_k:=k->trunc(2*(k-80-0.5*dk)+0.5); " }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 57 "j:=1; i_k(kL[j]),k_i(i_ k(kL[j])),kL[j],k_i(i_k(kL[j])+1);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 169 "The defined functions are consistent, i.e., the index used in \+ the binning process works out such that it assigns data values to fall to the right of the bordering value." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 35 "for i from 1 to 20 do: C[i]:=0: od:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 90 "for n from 1 to N do: ict:=i_k(kL[n]); if ict>0 and ict<=20 then C[ict]:=C[ict]+1; fi; od:" }}}{EXCHG {PARA 0 " > " 0 "" {MPLTEXT 1 0 50 "with(plots): P1:=plot([k_a,t,t=0..2.5],color =red):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 88 "P2:=plot([seq([k_ i(ict)+0.5*dk,t*C[ict],t=0..1],ict=1..20)],color=blue): display(P1,P2) ;" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 525 "Is this data set following \+ a Gaussian distribution?
At first we might think that it does not (sus picious gap to the right of the average). On second thought (and suppo rted by testing done on the binomial and Poisson distributions in a se parate worksheet using a chi-squared test of a hypothesis) we remain o pen-minded: the data set is far too small to rule out that a much larg er data set obtained with the same measurement procedure will reach a \+ Gaussian as its limiting distribution (limit of infinitely many measur ements!)" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 467 "The standard deviation of the data is a quantity that looks at th e distances of the individual measurements from the calculated average . The distances are added quadratically in order not to cancel positiv e against negative errors. We average the squared deviations, and then take the square root to obtain a width (the deviation). When averagin g the sum of the squared deviation we make a small adjustment to accou nt for the fact that one degree of freedom from the " }{TEXT 258 1 "N " }{TEXT -1 97 " available from the data set is lost, as the average w as obtained from the data set itself. Thus:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 54 "sigma:=evalf(sqrt(add((kL[n]-k_a)^2,n=1..N)/(N-1)) ,3);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 35 "This number represents th e average " }{TEXT 262 35 "deviation of individual data points" } {TEXT -1 276 " from the average value. It is the typical uncertainty a ssociated with an individual measurement. If we make single measuremen ts for other springs from the box of supposedly identical springs, and find that the value is within sigma away from our calculated average \+ (based on " }{TEXT 259 1 "N" }{TEXT -1 223 " measurements on the first spring), then we should not be concerned at all about the question wh ether the springs are made identically to a sufficient degree, as they appear to be identical to within our measuring accuracy."
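As a cross-check of these formulas, the same numbers can be reproduced outside Maple. The following is a minimal Python sketch (the list kL is copied from the worksheet; the variable names mirror the Maple ones):

```python
import math

# Spring-constant data copied from the worksheet (N/m)
kL = [86, 85, 84, 89, 86, 88, 88, 85, 83, 85]
N = len(kL)

# Best estimate: the plain average
k_a = sum(kL) / N

# Sample standard deviation: the 1/(N-1) factor accounts for the
# degree of freedom used up by computing the average from the data itself
sigma = math.sqrt(sum((k - k_a) ** 2 for k in kL) / (N - 1))

# Standard deviation of the mean
sig_avg = sigma / math.sqrt(N)

print(round(k_a, 1), round(sigma, 2), round(sig_avg, 2))  # 85.9 1.91 0.6
```

The printed values reproduce the worksheet's result of (85.9 plusminus 0.6) N/m, with an individual-measurement deviation of about 1.9 N/m.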
}}{PARA 0 " " 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 182 "We did promise, h owever, more. We promised to be able to assess from the distribution t he accuracy of the determined average value, which is presumably the b est value we have (after " }{TEXT 260 1 "N" }{TEXT -1 35 " measurement s). The uncertainty or " }{TEXT 261 30 "standard deviation of the mean " }{TEXT -1 91 " - provided the data do have a Gaussian as its limitin g distribution - can be estimated as:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "sig_avg:='sigma/sqrt(N)';" }}}{EXCHG {PARA 0 "> " 0 " " {MPLTEXT 1 0 15 "evalf(sig_avg);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 78 "This is smaller than the average uncertainty of the individual \+ data points by " }{TEXT 19 7 "sqrt(N)" }{TEXT -1 111 ", which is a goo d reason to take as many data as possible as the reduction in this unc ertainty with increasing " }{TEXT 263 1 "N" }{TEXT -1 9 " is slow." }} {PARA 0 "" 0 "" {TEXT -1 86 "Thus, the mean (average) together with it s deviation can be listed for our example as " }}{PARA 0 "" 0 "" {TEXT -1 27 "(85.9 plusminus 0.6) [N/m]." }}}{EXCHG {PARA 0 "" 0 "" {TEXT 264 19 "Normal distribution" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }} {PARA 0 "" 0 "" {TEXT -1 445 "To compare the histogram to a normal dis tribution it has to be properly normalized. We have just determined th e two parameters that control the shape of the limiting Gaussian distr ibution, namely the position of the average (a Gaussian is symmetric), and the width (the deviation). The Gaussian (or normal) distribution \+ is normalized by an integral to unit area. 
The frequency distribution \+ at present is normalized to the number of measurements:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "add(C[ict],ict=1..20);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 41 "It is expedient to normalize it to unity: " }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 55 "for ict from 1 to 20 do : C[ict]:=evalf(C[ict]/N,3); od:" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 95 "The average value of the spring constant can now be obtained by a \+ probabilistic sample average:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 49 "evalf(add((k_i(ict)+0.5*dk)*C[ict],ict=1..20),3);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 100 "Other averages can be obtained similarly , e.g., the deviation squared of the individual data points:" }}} {EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 55 "evalf(add((k_i(ict)+0.5*dk-k _a)^2*C[ict],ict=1..20),3);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 17 "evalf(sqrt(%),3);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 83 "This is \+ slightly less than the value obtained before, as it does not replace t he 1/" }{TEXT 266 1 "N" }{TEXT -1 64 " factor in the formation of the \+ average squared deviation by 1/(" }{TEXT 265 1 "N" }{TEXT -1 4 "-1)." 
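The 1/N versus 1/(N-1) distinction can be made concrete with Python's statistics module (an illustrative aside, not part of the worksheet):

```python
import statistics

kL = [86, 85, 84, 89, 86, 88, 88, 85, 83, 85]

# Population form: 1/N factor, which is what the normalized-histogram
# average of the squared deviations computes
sigma_pop = statistics.pstdev(kL)   # ~1.81

# Sample form: 1/(N-1) factor, as used for sigma earlier
sigma_smp = statistics.stdev(kL)    # ~1.91
```

The population value is slightly smaller, exactly as observed for the binned data above.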
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 58 "To make comparison with the limiting distribution we plot:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 191 "P1:=plot([k_a,t,t=0..0.25],color=red): P 4:=plot([[k_a-sig_avg,t,t=0..0.25],[k_a+sig_avg,t,t=0..0.25]],color=ma genta): P5:=plot([[k_a-sigma,t,t=0..0.15],[k_a+sigma,t,t=0..0.15]],col or=violet):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 78 "P3:=plot(exp (-(k-k_a)^2/(2*sigma^2))/(sqrt(2*Pi)*sigma),k=80..90,color=green):" }} }{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 97 "P2:=plot([seq([k_i(ict)+0.5 *dk,t*C[ict],t=0..1],ict=1..20)],color=blue): display(P1,P2,P3,P4,P5); " }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 263 "We have added a few vertical lines to the comparison: the magenta-coloured lines indicate the stan dard deviation of the mean; the violet-grey coloured lines indicate the standard deviation for the Gaussian distribution associated with the \+ individual measurements." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 " " 0 "" {TEXT -1 344 "It is the obligation of the experimenter to demon strate that the errors in the data are indeed normally distributed, i.e ., that the agreement with a Gaussian distribution improves for larger data sets. Once this is demonstrated, the probability theory based on the normal distribution can be used to further assess the confidence \+ in our result." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 189 "First we demonstrate that the continuous Gaussian distri bution results in the expected values for the total probability (area \+ under the curve), the average value, and the deviation squared."
}}} {EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 74 "int(exp(-(k-k_a)^2/(2*sigma^ 2))/(sqrt(2*Pi)*sigma),k=-infinity..infinity);" }}}{EXCHG {PARA 0 "> \+ " 0 "" {MPLTEXT 1 0 76 "int(k*exp(-(k-k_a)^2/(2*sigma^2))/(sqrt(2*Pi)* sigma),k=-infinity..infinity);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 84 "int((k-k_a)^2*exp(-(k-k_a)^2/(2*sigma^2))/(sqrt(2*Pi)*sigma),k =-infinity..infinity);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 85 "One can ask how much probability is contained within the standard deviation i nterval:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 75 "int(exp(-(k-k_a )^2/(2*sigma^2))/(sqrt(2*Pi)*sigma),k=k_a-sigma..k_a+sigma);" }}} {EXCHG {PARA 0 "" 0 "" {TEXT -1 334 "We say that we have 68% confidenc e that a measurement will fall within this interval. Why do we care ab out this statement (given that we have determined an accurate 'best' v alue by forming the average)? The answer is that what we learn about i ndividual measurements can be transferred later to assess the accuracy of the 'best' value." }}{PARA 0 "" 0 "" {TEXT -1 219 "So let us conti nue with considering the individual measurements. A 68% confidence sou nds not so terribly high. We can carry out area calculations, i.e., co nfidence estimates for larger than one-sigma intervals. 
For the " } {XPPEDIT 18 0 "2*sigma;" "6#*&\"\"#\"\"\"%&sigmaGF%" }{TEXT -1 6 "- an d " }{XPPEDIT 18 0 "3*sigma;" "6#*&\"\"$\"\"\"%&sigmaGF%" }{TEXT -1 19 "-intervals we have:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 79 " int(exp(-(k-k_a)^2/(2*sigma^2))/(sqrt(2*Pi)*sigma),k=k_a-2*sigma..k_a+ 2*sigma);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 79 "int(exp(-(k-k_ a)^2/(2*sigma^2))/(sqrt(2*Pi)*sigma),k=k_a-3*sigma..k_a+3*sigma);" }}} {EXCHG {PARA 0 "" 0 "" {TEXT -1 217 "Therefore, once we have determine d sigma for our distribution, we can say with a definite confidence th at subsequent measurements (by ourselves, or someone else provided our systematics are identical) will fall into (" }{XPPEDIT 18 0 "t*sigma; " "6#*&%\"tG\"\"\"%&sigmaGF%" }{TEXT -1 17 ")-intervals, for " }{TEXT 267 1 "t" }{TEXT -1 9 "=1,2,...." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }} {PARA 0 "" 0 "" {TEXT -1 533 "What relevance does this have for assess ing our best value? One could perform a statistical analysis of the av erage value, i.e., run 10 datasets with 10 measurements respectively, \+ compute the averages, and then the average of averages with known unce rtainty. The previous assessment of the standard deviation of the mean is a shortcut (but it is a guess as opposed to a measurement). It gue sses that these measurements of the averages ('best' values) will be d istributed randomly according to a Gaussian with a width estimated to \+ be " }{TEXT 19 7 "sig_avg" }{TEXT -1 207 ". Therefore, we can consider the interval surrounding the best estimate (red vertical line with ma genta-coloured sidebars) to be the 68% confidence interval for the tru e value. We can draw the interval with " }{TEXT 19 9 "2*sig_avg" } {TEXT -1 218 " and be 95% confident that the true value lies within th is interval. Taylor provides a proof about the Gaussian distribution o f the best values based on Gaussian distributed individual data measur ements in section 5.7." 
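The confidence integrals just evaluated have the closed form erf(t/sqrt(2)) for a (t*sigma)-interval around the mean; a small Python sketch reproduces the same numbers without symbolic integration:

```python
from math import erf, sqrt

# Area of a Gaussian between mean-t*sigma and mean+t*sigma, i.e. the
# probability that a measurement falls within t deviations of the mean
def conf(t):
    return erf(t / sqrt(2.0))

print(round(conf(1), 4))  # 0.6827
print(round(conf(2), 4))  # 0.9545
print(round(conf(3), 4))  # 0.9973
```

These are the familiar 68%, 95%, and 99.7% confidence levels for the one-, two-, and three-sigma intervals.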
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 35 "The lessons to be learned here are:" }}{PARA 0 "" 0 "" {TEXT -1 365 "a) the one-sigma interval is not a holy grail. It con tains only a 68% confidence assessment! One-sigma 'new' physics (propa gated with big hoopla in Physical Review Letters or Physics Letters an d suspected by the critical minds) often goes away! For three-sigma ev ents the chances are much less that they are the result of hype rather than solid experimentation. By " }{TEXT 270 1 "t" }{TEXT -1 72 "-sigm a 'new' physics we mean unexpected results that lie outside of the " } {TEXT 271 1 "t" }{TEXT -1 51 "-sigma confidence interval of convention al physics." }}{PARA 0 "" 0 "" {TEXT -1 316 "b) the reader may wish to understand why the average represents the best estimate for the measu red value. The argument involves turning things around, i.e., assuming a limiting Gaussian distribution with yet undetermined parameters to \+ calculate the probability to arrive at a certain measured data set (a \+ product of " }{TEXT 268 1 "N" }{TEXT -1 298 " probabilities). A so-cal led principle of maximum likelihood is invoked which results in a dete rmination of the peak of the Gaussian at the position given by the ave rage of the measured data values (the arguments are given in Taylor's \+ book, and elaborated on in applied probability theory courses)." }} {PARA 0 "" 0 "" {TEXT -1 457 "c) there are probably lessons to be lear ned also for error propagation. Given that we are arriving at an under standing of uncertainties for one variable, we can learn that normally distributed errors in two (or more) variables conspire to form errors in some final result (that may involve some mathematical operation be tween the two basic variables). Taylor explains how addition of errors in quadrature follows from the normal distribution in section 5.6."
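Point c) can be previewed with a quick simulation: for a sum of two independent, normally distributed errors, the deviations combine in quadrature. A hedged Python sketch (the widths sx and sy are made-up values chosen only for illustration):

```python
import math
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# Two independent, normally distributed error sources with assumed
# widths sx and sy, entering a sum f = x + y
sx, sy = 0.5, 1.2
samples = [random.gauss(0.0, sx) + random.gauss(0.0, sy)
           for _ in range(200_000)]

observed = statistics.stdev(samples)
predicted = math.hypot(sx, sy)  # sqrt(sx^2 + sy^2), the quadrature sum
```

With these widths the quadrature sum is 1.3, noticeably smaller than the plain sum 1.7: independent errors partially cancel.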
} }{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT 269 11 "Simulations" }}{PARA 0 "" 0 "" {TEXT -1 223 "We can verify our claims by generating virtual measurements in Maple, i.e., computer simulatio ns of normally distributed measurements. Then we apply the formulae an d check whether our findings are consistent with the input." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 225 "The first task is to find a distribution of random numbers which follows a Gaussian. Usually, pseudo-random number generators produce randoms that fill th e [0,1] interval uniformly. Normally distributed randoms fill the rang e" }{TEXT 19 19 "-infinity..infinity" }{TEXT -1 63 " in such a way tha t a histogram produces a Gaussian of width 1." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 12 "with(stats):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 35 "RNd:=[stats[random, normald](150)]:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 23 "with(stats[statplots]):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "histogram(RNd,area=1);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 114 "Without the option area=1 the histogram would \+ be arranged in such a way that each bar would contain the same area. " }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 314 "We c an use the numbers produced with a deviation of 1 to generate errors b y simply scaling the numbers with a constant factor. The average comes out to be zero, and thus we can add the scaled numbers to some hypoth etical measurement value. Suppose we have measured an angle of val=38 \+ degrees with some uncertainty." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "val:=38;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "N:=nops( RNd);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 36 "MData:=[seq(val+0. 5*RNd[i],i=1..N)]:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "histo gram(MData,area=1);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 151 "Let us ap ply our expressions to recover the true value.
In the example below it is important to carry more than 3 digits when carrying out the averag e!" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 37 "v_a:=evalf(add(MData[ i],i=1..N)/N,7);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 57 "sigma:= evalf(sqrt(add((MData[n]-v_a)^2,n=1..N)/(N-1)),5);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "sig_avg:='sigma/sqrt(N)';" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "sig_avg:=evalf(sig_avg);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 67 "We have a 68% confidence interval which c an exclude the true value." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 44 "[evalf(v_a-sig_avg,4),evalf(v_a+sig_avg,4)];" }}}{EXCHG {PARA 0 " " 0 "" {TEXT -1 65 "For the 95% CI we almost certainly include the cor rect value for " }{TEXT 272 1 "N" }{TEXT -1 5 ">100:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 48 "[evalf(v_a-2*sig_avg,4),evalf(v_a+2*sig_a vg,4)];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 211 "If we don't include t he correct value in the two-sigma interval, the cause for this discrep ancy comes from the fact that the average of the sample taken accordin g to a normal distribution has a systematic drift:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 28 "evalf(add(RNd[i],i=1..N)/N);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT 274 11 "Exercise 1:" }}{PARA 0 "" 0 "" {TEXT -1 140 "Explore the question whether the true value lies within the 1-sig ma and within the 2-sigma confidence intervals around the 'best' answe r by:" }}{PARA 0 "" 0 "" {TEXT -1 52 "a) changing the sample size (inc rease and decrease);" }}{PARA 0 "" 0 "" {TEXT -1 114 "b) changing the \+ true value of the measured quantity (increase and decrease), while the error size is kept the same" }}{PARA 0 "" 0 "" {TEXT -1 91 "c) changi ng the error size, while keeping the original true value of the measur ed quantity." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 244 "Keep in mind that the simulation in Maple provides us with the luxury of many repeated measurements. 
In the undergraduate laboratory setting we often forego this luxury (unless experiments are automated ), and pay the price of reduced precision." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 264 "We conclude the worksh eet with an example of error propagation. Let us assume that the quant ity measured above represents an angle of refraction in a simple exper iment with a lightbeam at an air/glass interface. Let us assume that w e know the index of refraction (" }{TEXT 273 1 "n" }{TEXT -1 167 "=1.5 1 for glass) to a higher degree of precision than the angle measuremen t. What is the deviation in the incident angle? We first look at the t ransformed data sample:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 49 " fun:=x->evalf(180/Pi*arcsin(sin(Pi/180*x)*1.51));" }}}{EXCHG {PARA 0 " > " 0 "" {MPLTEXT 1 0 23 "IAngle:=map(fun,MData):" }}}{EXCHG {PARA 0 " > " 0 "" {MPLTEXT 1 0 10 "IAngle[1];" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "histogram(IAngle,area=1);" }}}{EXCHG {PARA 0 "> " 0 " " {MPLTEXT 1 0 39 "IA_a:=evalf(add(IAngle[i],i=1..N)/N,7);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 61 "sigmaIA:=evalf(sqrt(add((IAngle[n]- IA_a)^2,n=1..N)/(N-1)),5);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 29 "sig_avgIA:='sigmaIA/sqrt(N)';" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 28 "sig_avgIA:=evalf(sig_avgIA);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 59 "Let us compare the true value with the confidence inte rval:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 27 "IA_true:=evalf(fun (val),5);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 50 "[evalf(IA_a-si g_avgIA,4),evalf(IA_a+sig_avgIA,4)];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 64 "We notice that the deviation tripled. How can this be explained ?" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 105 "We \+ made the claim that the uncertainty in the index of refraction is negl igible.
Thus we use Snell's law " }{TEXT 19 17 "sin(IA)=n*sin(RA)" } {TEXT -1 52 " to relate the uncertainties in the refracted angle " } {TEXT 19 2 "RA" }{TEXT -1 42 " to the uncertainty in the incident angl e " }{TEXT 19 2 "IA" }{TEXT -1 57 ". Considering the differentials res ults in the statement:" }}{PARA 0 "" 0 "" {TEXT -1 5 "LHS: " }{TEXT 19 24 "d[sin(IA)]=cos(IA)*d[IA]" }}{PARA 0 "" 0 "" {TEXT -1 5 "RHS: " }{TEXT 19 28 "n*d[sin(RA)]=n*cos(RA)*d[RA]" }}{PARA 0 "" 0 "" {TEXT -1 30 "We can solve the equation for " }{TEXT 19 5 "d[IA]" }{TEXT -1 2 ": " }}{PARA 0 "" 0 "" {TEXT 19 33 "d[IA] = n*(cos(RA)/cos(IA))*d[RA ]" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 51 "evalf(1.51*cos(val*Pi/ 180)/cos(Pi/180*IA_a)*sigma);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 26 " This calculated value for " }{TEXT 19 7 "sigmaIA" }{TEXT -1 111 " is v ery close indeed to the one measured above. Thus, it is important to c onsider error propagation carefully!" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 283 "We can illustrate the error propagatio n for this nonlinear function by a graph. This also serves to justify \+ why a linearized treatment of the function in the neighbourhood of the mean argument value is justified (the linearization is implied by the use of first-order differentials)." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 40 "f_ex:=180/Pi*arcsin(1.51*sin(Pi/180*x));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 44 "f_lin:=convert(taylor(f_ex,x=38,2), polynom);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 6 "sigma;" }}} {EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 49 "P1:=plot([f_ex,f_lin],x=37..
39,color=[red,blue]):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 184 "P 2:=plot([[val-sigma,65],[val-sigma,subs(x=val-sigma,f_lin)],[0,subs(x= val-sigma,f_lin)]],color=green): P3:=plot([[val,65],[val,subs(x=val,f_ lin)],[0,subs(x=val,f_lin)]],color=yellow):" }}{PARA 0 "> " 0 "" {MPLTEXT 1 0 103 "P4:=plot([[val+sigma,65],[val+sigma,subs(x=val+sigma ,f_lin)],[0,subs(x=val+sigma,f_lin)]],color=brown):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 77 "display(P1,P2,P3,P4,scaling=constrained,t itle=\"incident vs refracted angle\");" }}}{EXCHG {PARA 0 "" 0 "" {TEXT 288 11 "Exercise 2:" }}{PARA 0 "" 0 "" {TEXT -1 35 "Determine th e relationship between " }{TEXT 19 5 "sigma" }{TEXT -1 5 " and " } {TEXT 19 8 "sigma_IA" }{TEXT -1 38 " for a different mean refracted an gle " }{TEXT 289 1 "x" }{TEXT -1 59 " both from the calculation, and f rom a corresponding graph." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 62 "We can also look at the relative deviatio ns in the two angles:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "si gmaIA/IA_a;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 10 "sigma/val;" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 75 "The relative deviation doubled \+ when going from refracted to incident angle." }}}{EXCHG {PARA 0 "" 0 " " {TEXT -1 0 "" }{TEXT 276 11 "Exercise 3:" }}{PARA 0 "" 0 "" {TEXT -1 150 "Change the value of the exactly known refracted angle and repe at the simulation. Compare the prediction of the simulation on the rel ationship between " }{TEXT 19 5 "d[IA]" }{TEXT -1 5 " and " }{TEXT 19 5 "d[RA]" }{TEXT -1 148 ", and compare with the analytical result base d on error propagation analysis. What happens in the above example whe n the glass is replaced by water?" }}{PARA 0 "" 0 "" {TEXT -1 41 "The \+ index of refraction for water equals " }{TEXT 277 1 "n" }{TEXT -1 6 "= 1.33."
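The Snell's-law propagation test can also be prototyped outside Maple. The following Python sketch mirrors the worksheet's simulation (n=1.51, a 38-degree refracted angle, and a 0.5-degree spread matching the 0.5*RNd scaling above; the helper names are illustrative):

```python
import math
import random
import statistics

random.seed(2)  # fixed seed for reproducibility

n = 1.51        # index of refraction of glass, as in the worksheet
RA0 = 38.0      # true refracted angle in degrees (val above)
s_RA = 0.5      # assumed spread of the simulated angle data

def incident(ra_deg):
    # Snell's law sin(IA) = n*sin(RA), with angles handled in degrees
    return math.degrees(math.asin(n * math.sin(math.radians(ra_deg))))

RA = [random.gauss(RA0, s_RA) for _ in range(100_000)]
IA = [incident(r) for r in RA]
s_IA = statistics.stdev(IA)

# First-order propagation: d[IA] = n*(cos(RA)/cos(IA))*d[RA]
IA0 = incident(RA0)
pred = n * math.cos(math.radians(RA0)) / math.cos(math.radians(IA0)) * s_RA
```

The propagation factor n*cos(RA)/cos(IA) evaluates to about 3.2 here, reproducing the observation that the deviation roughly triples when going from the refracted to the incident angle.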
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT 275 11 "Exercise 4:" }}{PARA 0 "" 0 "" {TEXT -1 289 "Use some other physics l aw that involves a nonlinearity to relate the uncertainties in two var iables (a measured variable with known uncertainty is connected to a d educed variable). Perform a simulation, and confirm your results by an analytical derivation (as done above for Snell's law)." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT 287 17 "Systematic \+ errors" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 809 "So far we have assumed that the only sources of error were random , and that no systematic deviations were present, i.e., that the exper imental apparatus was optimized at the level of desired precision. One possibility to check the measuring apparatus is by comparison (e.g., identically built ammeters are compared in their readings of t he same current, known objects are measured with different calipers or micrometer screws, etc.). In this way one can establish the presence \+ of systematic errors in the equipment (e.g., one ammeter is measuring \+ consistently high values, etc.). Often we do not have the luxury of co mparing equipment in this way and rely on tolerance specifications of \+ the manufacturer who was able and obliged to establish the systematic \+ tolerances for the equipment as it left the factory." }}{PARA 0 "" 0 " " {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 890 "An important question \+ that arises in this context is how to deal with this systematic error. Suppose we have established a standard deviation interval for a measu rement sequence. How should we add the systematic measurement error to the statistical uncertainty? John R. Taylor points out (in chapter 4) that two situations can arise: either the error is known for the equi pment never to exceed a certain tolerance (e.g., 1% of the reading; or 2% of the maximum reading on the scale).
In this case one would simpl y add the systematic error to the statistical one. On the other hand, \+ it might be true that based on a sequence of comparisons the systemati c error is known to fall within a normal distribution, i.e., 70% of th e instruments are better than 1%, but some of them are outside this in terval. In this latter case the systematic error has to be added in qu adrature to the uncertainty." }}{PARA 0 "" 0 "" {TEXT -1 50 "D[x]_tot \+ = sqrt(D[x]_systematic^2 + D[x]_random^2)" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 488 "A justification for adding indepe ndent errors (and uncertainties) in quadrature is given below. Usually it is the second method of adding systematic and random errors that w e wish to use in a teaching laboratory. Even if equipment was certifie d to be within a certain tolerance range, we cannot be assured that af ter years of use (and possibly abuse) it was still performing to norm. We have more confidence in the statement that with some likelihood it still performed to specifications." }}{PARA 0 "" 0 "" {TEXT -1 0 "" } }}{EXCHG {PARA 0 "" 0 "" {TEXT 278 46 "Uncertainty in a function of se veral variables" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 188 "We have demonstrated in the previous section how an erro r propagates for a non-linear function of one variable. Effectively, t he function was linearized in the range around the mean value " } {TEXT 19 3 "x_a" }{TEXT -1 15 " and the range " }{TEXT 19 22 "x_a-sigm a .. x_a+sigma" }{TEXT -1 51 " was mapped into a corresponding range s urrounding " }{TEXT 19 6 "f(x_a)" }{TEXT -1 28 " using the first deriv ative " }{TEXT 280 1 "f" }{TEXT -1 2 "'(" }{TEXT 279 1 "x" }{TEXT -1 65 "). Depending on the magnitude of the derivative the deviation in \+ " }{TEXT 281 1 "f" }{TEXT -1 8 " (i.e., " }{TEXT 19 7 "sigma_f" } {TEXT -1 42 ") can increase or decrease as compared to " }{TEXT 19 5 " sigma" }{TEXT -1 79 ". 
Now we demonstrate for a function of two variab les that the uncertainties in " }{TEXT 286 1 "x" }{TEXT -1 5 " and " } {TEXT 285 1 "y" }{TEXT -1 61 " have to be added in quadrature to obtai n the uncertainty in " }{TEXT 284 1 "f" }{TEXT -1 1 "(" }{TEXT 283 1 " x" }{TEXT -1 2 ", " }{TEXT 282 1 "y" }{TEXT -1 2 ")." }}{PARA 0 "" 0 " " {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 75 "First we generate a sec ond sequence of normally distributed random numbers." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 36 "RNd2:=[stats[random, normald](150)]:" }}} {EXCHG {PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 287 "W e pick Snell's law again, and consider the following problem: suppose \+ we measure corresponding incident and refracted angles in order to det ermine the index of refraction of the optically dense medium (we assum e that for the purposes of the experiment air has an index of refracti on of " }{TEXT 290 1 "n" }{TEXT -1 28 "=1, i.e., the vacuum value)." } }{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 216 "We meas ure the incident angle with the same accuracy as the refracted angle. \+ The data set for the refracted angle is kept from the previous section , and we produce a data set for the incident angles (consistent with \+ " }{TEXT 291 1 "n" }{TEXT -1 7 "=1.51)." 
}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 43 "MDataIA:=[seq(IA_true+0.5*RNd2[i],i=1..N)]:" }}} {EXCHG {PARA 0 "" 0 "" {TEXT -1 220 "For the purposes of our simulatio n we simply associate with each datapoint from the set of refracted an gles a corresponding datapoint from the set of incident angles to obta in a set of values for the index of refraction:" }}}{EXCHG {PARA 0 "> \+ " 0 "" {MPLTEXT 1 0 73 "MDatan:=[seq(evalf(sin(Pi/180*MDataIA[i])/sin( Pi/180*MData[i])),i=1..N)]:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "histogram(MDatan,area=1);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 34 " We calculate the 'best' value for " }{TEXT 293 1 "n" }{TEXT -1 91 " by forming the average, the deviation of the data, and the standard devi ation of the mean:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 38 "n_a:= evalf(add(MDatan[i],i=1..N)/N,7);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 60 "sigma_n:=evalf(sqrt(add((MDatan[n]-n_a)^2,n=1..N)/(N- 1)),5);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 29 "sig_avg_n:='sigm a_n/sqrt(N)';" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 28 "sig_avg_n: =evalf(sig_avg_n);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 30 "[n_a- sig_avg_n,n_a+sig_avg_n];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 18 "The \+ true value of " }{TEXT 292 1 "n" }{TEXT -1 55 "=1.51 falls barely outs ide the 70% confidence interval." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }} {PARA 0 "" 0 "" {TEXT -1 35 "How can we predict the uncertainty " } {TEXT 19 7 "sigma_n" }{TEXT -1 67 " from the known uncertainties in th e refracted and incident angles?" 
}}{PARA 0 "" 0 "" {TEXT -1 233 "Firs t we should verify that the uncertainty in the incident angle is of th e same magnitude as for the refracted angle (note that it independentl y given in this section, and not determined via Snell's law as in the \+ previous section!):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 40 "IA_a :=evalf(add(MDataIA[i],i=1..N)/N,7);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 63 "sigma_IA:=evalf(sqrt(add((MDataIA[n]-IA_a)^2,n=1..N)/ (N-1)),5);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 70 "We need to apply th e addition in quadrature rule given for a function " }{TEXT 19 18 "q(x 1, x2, ..., xn)" }{TEXT -1 4 " as:" }}{PARA 0 "" 0 "" {TEXT 19 89 "D[q ] = sqrt((diff(q, x1)*D[x1])^2 + (diff(q, x2)*D[x2])^2 + ... + (diff(q , xn)*D[xn] )^2)" }}{PARA 0 "" 0 "" {TEXT -1 60 "where the derivatives are evaluated at the average value of " }{TEXT 19 17 "[x1, x2, ..., x n]" }{TEXT -1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 25 "Our function is giv en as:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "RF_ind:=sin(x1)/s in(x2);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 58 "Dn:=sqrt((diff(R F_ind,x1)*Dx1)^2+(diff(RF_ind,x2)*Dx2)^2);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 66 "Here x1 and x2 are the incident and refracted angles resp ectively." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 82 "evalf(subs(x1= Pi/180*IA_a,x2=Pi/180*v_a,Dx1=Pi/180*sigma_IA,Dx2=Pi/180*sigma,Dn));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 204 "It is important in the above e xpression to provide the conversion factors for the angles from degree s to radians not only in the arguments to the sine functions, but also for the uncertainties themselves!" }}{PARA 0 "" 0 "" {TEXT -1 142 "Th e agreement of the predicted uncertainty with the one observed in the \+ simulation (which represents a statistical measurement!) is very good. 
" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT 294 11 "Exercise 5:" }}{PARA 0 "" 0 "" {TEXT -1 135 "Repeat the above simu lation with a larger deviation in the incident and refracted angles. P ick different incident and refracted angles." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT 295 11 "Exercise 6:" }}{PARA 0 "" 0 " " {TEXT -1 264 "Repeat the simulation with a very much reduced statist ical sample as corresponds to teaching laboratory measurements (or as \+ happens sometimes in life science research when circumstances do not p ermit large sample sizes), e.g., N=5. Are the conclusions still valid? " }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}{MARK "0 0 0" 41 } {VIEWOPTS 1 1 0 1 1 1803 1 1 1 1 }{PAGENUMBERS 0 1 2 33 1 1 }