# probability help on a practice exam

I need help with the non-book questions (the questions without numbers). Please provide correct solutions with a brief explanation for each problem, e.g. "this is a normal distribution." I provided a part of the book just for reference.
practice_exam.pdf

pages_from_statistical_inference_george_casella_roger_l.berger.pdf


EXERCISE 1. A k-word is a sequence of k letters. How many different 11-words can be obtained by permuting the letters of the word Mississippi?
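A quick numerical check (my sketch, not an official solution), assuming the usual multinomial-coefficient argument over the letter multiplicities of "Mississippi" (i×4, s×4, p×2, M×1):

```python
from collections import Counter
from math import factorial

def distinct_arrangements(word):
    # multinomial coefficient: n! divided by the factorial of each letter's multiplicity
    n = factorial(len(word))
    for multiplicity in Counter(word).values():
        n //= factorial(multiplicity)
    return n

print(distinct_arrangements("Mississippi"))  # 11! / (4! * 4! * 2! * 1!) = 34650
```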
EXERCISE 2. Let {Aα, α ∈ Λ} be a collection of sets. Prove De Morgan's laws:
• (⋃α Aα)^c = ⋂α Aα^c
• (⋂α Aα)^c = ⋃α Aα^c
EXERCISE 3. Suppose that a deck of 52 bridge cards is dealt to the four players W, N, S and E at
random.
a. What is the probability that each player will have an ace?
b. What is the probability that a given player receives 13 different face values?
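Both parts of Exercise 3 can be checked with exact rational arithmetic. This is a sketch under the standard counting arguments (an assumption of mine, not the graders' solution): for part a, each player gets one ace in 4! · 48!/(12!)⁴ of the 52!/(13!)⁴ ordered deals; for part b, a hand with 13 distinct face values picks one of 4 suits per value.

```python
from fractions import Fraction
from math import comb, factorial

# part a: each of the four 13-card hands contains exactly one ace
favorable = factorial(4) * Fraction(factorial(48), factorial(12) ** 4)
total = Fraction(factorial(52), factorial(13) ** 4)
p_a = favorable / total

# part b: a given hand holds 13 different face values (one suit choice per value)
p_b = Fraction(4 ** 13, comb(52, 13))

print(float(p_a), float(p_b))  # roughly 0.1055 and 1.06e-4
```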
EXERCISE 4. (a) The inclusion-exclusion principle.
(b). My telephone rings 12 times each week, the calls being randomly distributed among 7 days. What is
the probability that I get at least one call each day?
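Part b can be sanity-checked with the inclusion-exclusion principle from part a. The sketch below (my framing) sums over which days receive no call:

```python
from math import comb

def p_all_days(calls=12, days=7):
    # inclusion-exclusion over the empty days:
    # sum_j (-1)^j * C(days, j) * (days - j)^calls ordered assignments
    favorable = sum((-1) ** j * comb(days, j) * (days - j) ** calls
                    for j in range(days + 1))
    return favorable / days ** calls

print(p_all_days())  # roughly 0.2285
```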
EXERCISE 5. Show that if P(·) and Q(·) are probability measures defined on the same σ-algebra A, and if for a given number α, 0 < α < 1, we define Pα by Pα(E) = (1 − α)P(E) + αQ(E) for all E ∈ A, then Pα is also a probability measure on A. Use this construction to give an example of a random variable that is neither discrete nor absolutely continuous.
EXERCISE 6. A two-step experiment consists of casting a die and then selecting a chip from one of two urns, as follows: a die is cast, and urn I is selected if 5 or 6 shows up; otherwise urn II is selected. Urn I contains 3 red chips and 7 white chips, and urn II contains 8 red chips and 2 white chips. From the selected urn a chip is drawn at random.
- What is P(R), the probability that the drawn chip is red?
- What is the (posterior) probability that urn I was selected, given that a red chip was drawn?
EXERCISE 7. Consider the experiment of drawing a card at random from a deck. Are the events E = "diamond" and F = "ace" independent?
EXERCISE 8. 1.33, 36, 39
EXERCISE 9. Let E, F, G ∈ A. Show that P(E ∩ F ∩ G) = P(G) · P(F|G) · P(E|F ∩ G), provided that P(F ∩ G) > 0.
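For Exercise 6, the two-stage tree can be checked with exact fractions; this sketch applies the law of total probability and Bayes' rule:

```python
from fractions import Fraction

p_urn1 = Fraction(2, 6)           # die shows 5 or 6
p_urn2 = Fraction(4, 6)           # otherwise
p_red_given_1 = Fraction(3, 10)   # urn I: 3 red, 7 white
p_red_given_2 = Fraction(8, 10)   # urn II: 8 red, 2 white

# law of total probability
p_red = p_urn1 * p_red_given_1 + p_urn2 * p_red_given_2
# Bayes' rule for the posterior of urn I given a red draw
posterior_urn1 = p_urn1 * p_red_given_1 / p_red

print(p_red, posterior_urn1)  # 19/30 and 3/19
```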
EXERCISE 10. 1.26, 38
EXERCISE 11. 1.51
EXERCISE 12. 1.47, 1.53
EXERCISE 13. All homework assigned in class that is not listed in the previous exercises.
EXERCISE 14. 2.3
EXERCISE 15. 2.9
EXERCISE 16. 2.20
EXERCISE 17. 2.22
EXERCISE 18. 2.30
EXERCISE 19. A standard drug is known to be effective in 80% of the cases in which it is used. A new drug is tested on 100 randomly selected patients and found to be effective in 85 of them. Evaluate the probability that this would have happened had those patients been administered the standard drug. Which of the two drugs is more effective? Justify your answer.
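This asks for a binomial tail probability under the standard-drug hypothesis. A sketch of the exact computation (my framing of the problem, not the official solution):

```python
from math import comb

def binom_tail(n, p, k):
    # P(X >= k) for X ~ Binomial(n, p): effectiveness in at least k of n patients
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# probability of 85 or more successes in 100 trials if the true rate is 0.80
print(binom_tail(100, 0.8, 85))  # roughly 0.13
```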
EXERCISE 20. 2.36
EXERCISE 21. The number of trucks arriving per hour at a receiving dock has a Poisson distribution with a mean of 2 per hour. Find the probability that at least two trucks arrive at this receiving dock in one hour.
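This is a Poisson tail; complementing on the 0- and 1-arrival cases gives a closed form, sketched below:

```python
from math import exp

lam = 2.0  # mean arrivals per hour
# P(X >= 2) = 1 - P(X = 0) - P(X = 1) = 1 - e^{-lam} * (1 + lam)
p_at_least_two = 1 - exp(-lam) * (1 + lam)
print(p_at_least_two)  # roughly 0.594
```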
EXERCISE 22. 3.4
EXERCISE 23. 3.8
EXERCISE 24. 3.16
EXERCISE 25. 3.20
EXERCISE 26. 3.7
EXERCISE 27. 3.11a
EXERCISE 28. 3.17
EXERCISE 29. 3.23
EXERCISE 30. 3.37
EXERCISE 31. 3.38
EXERCISE 32. Assume X is a random variable with finite moments up to second order. Show that the mean of X minimizes the expected squared distance from X to an arbitrary point.
EXERCISE 33. Assume X is an absolutely continuous r.v. whose density function is symmetric about a point x = a. Show that if X has a mean, then the mean and the median are both equal to a.
EXERCISE 34. Assume X has a normal distribution with mean μ and standard deviation σ. For k = 1, 2, 3, calculate P(|X − μ| ≥ kσ). Are these values in agreement with Chebyshev's inequality?
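A sketch of the comparison via the error function (the three exact normal tails next to the Chebyshev bound 1/k²):

```python
from math import erf, sqrt

def normal_two_sided_tail(k):
    # P(|X - mu| >= k*sigma) for normal X equals 2*(1 - Phi(k)),
    # with Phi the standard normal CDF written via erf
    return 2 * (1 - 0.5 * (1 + erf(k / sqrt(2))))

for k in (1, 2, 3):
    print(k, round(normal_two_sided_tail(k), 4), 1 / k ** 2)  # tail vs Chebyshev bound
```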
EXERCISE 35. a. Show that the function f(x) = (1/√(2π)) e^(−x²/2), x ∈ ℝ, is the p.d.f. of a random variable X.
b. Find the p.d.f. of Y = X², where X is the random variable in part a.
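For part b, Y = X² with X standard normal has the chi-square distribution with one degree of freedom. A Monte Carlo sketch (checking only the first two moments, mean 1 and variance 2, rather than the full density):

```python
import random

random.seed(1)
ys = [random.gauss(0, 1) ** 2 for _ in range(200_000)]
y_mean = sum(ys) / len(ys)
y_var = sum((y - y_mean) ** 2 for y in ys) / len(ys)
print(round(y_mean, 2), round(y_var, 2))  # chi-square(1) has mean 1, variance 2
```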
EXERCISE 36. Let Zn be the standardized form of a r.v. Xn that has a binomial distribution B(n, 0.5). What is the limiting distribution of Zn as n → ∞? Justify your answer.
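By the central limit theorem the limit is the standard normal. A simulation sketch of the standardized binomial (the sample sizes below are arbitrary choices of mine):

```python
import random

random.seed(2)
n, p, trials = 100, 0.5, 10_000
mu, sigma = n * p, (n * p * (1 - p)) ** 0.5

# standardized binomial draws; the CLT says their distribution tends to N(0, 1)
zs = [(sum(random.random() < p for _ in range(n)) - mu) / sigma
      for _ in range(trials)]
z_mean = sum(zs) / len(zs)
z_var = sum((z - z_mean) ** 2 for z in zs) / len(zs)
print(round(z_mean, 2), round(z_var, 2))  # near 0 and 1
```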
Table 1: Areas under the standard normal density curve from 0 to x.

| x   | 0.00    | 0.01    | 0.02    | 0.03    | 0.04    | 0.05    | 0.06    | 0.07    | 0.08    | 0.09    |
|-----|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| 0.0 | 0.00000 | 0.00399 | 0.00798 | 0.01197 | 0.01595 | 0.01994 | 0.02392 | 0.02790 | 0.03188 | 0.03586 |
| 0.1 | 0.03983 | 0.04380 | 0.04776 | 0.05172 | 0.05567 | 0.05962 | 0.06356 | 0.06749 | 0.07142 | 0.07535 |
| 0.2 | 0.07926 | 0.08317 | 0.08706 | 0.09095 | 0.09483 | 0.09871 | 0.10257 | 0.10642 | 0.11026 | 0.11409 |
| 0.3 | 0.11791 | 0.12172 | 0.12552 | 0.12930 | 0.13307 | 0.13683 | 0.14058 | 0.14431 | 0.14803 | 0.15173 |
| 0.4 | 0.15542 | 0.15910 | 0.16276 | 0.16640 | 0.17003 | 0.17364 | 0.17724 | 0.18082 | 0.18439 | 0.18793 |
| 0.5 | 0.19146 | 0.19497 | 0.19847 | 0.20194 | 0.20540 | 0.20884 | 0.21226 | 0.21566 | 0.21904 | 0.22240 |
| 0.6 | 0.22575 | 0.22907 | 0.23237 | 0.23565 | 0.23891 | 0.24215 | 0.24537 | 0.24857 | 0.25175 | 0.25490 |
| 0.7 | 0.25804 | 0.26115 | 0.26424 | 0.26730 | 0.27035 | 0.27337 | 0.27637 | 0.27935 | 0.28230 | 0.28524 |
| 0.8 | 0.28814 | 0.29103 | 0.29389 | 0.29673 | 0.29955 | 0.30234 | 0.30511 | 0.30785 | 0.31057 | 0.31327 |
| 0.9 | 0.31594 | 0.31859 | 0.32121 | 0.32381 | 0.32639 | 0.32894 | 0.33147 | 0.33398 | 0.33646 | 0.33891 |
| 1.0 | 0.34134 | 0.34375 | 0.34614 | 0.34849 | 0.35083 | 0.35314 | 0.35543 | 0.35769 | 0.35993 | 0.36214 |
| 1.1 | 0.36433 | 0.36650 | 0.36864 | 0.37076 | 0.37286 | 0.37493 | 0.37698 | 0.37900 | 0.38100 | 0.38298 |
| 1.2 | 0.38493 | 0.38686 | 0.38877 | 0.39065 | 0.39251 | 0.39435 | 0.39617 | 0.39796 | 0.39973 | 0.40147 |
| 1.3 | 0.40320 | 0.40490 | 0.40658 | 0.40824 | 0.40988 | 0.41149 | 0.41308 | 0.41466 | 0.41621 | 0.41774 |
| 1.4 | 0.41924 | 0.42073 | 0.42220 | 0.42364 | 0.42507 | 0.42647 | 0.42785 | 0.42922 | 0.43056 | 0.43189 |
| 1.5 | 0.43319 | 0.43448 | 0.43574 | 0.43699 | 0.43822 | 0.43943 | 0.44062 | 0.44179 | 0.44295 | 0.44408 |
| 1.6 | 0.44520 | 0.44630 | 0.44738 | 0.44845 | 0.44950 | 0.45053 | 0.45154 | 0.45254 | 0.45352 | 0.45449 |
| 1.7 | 0.45543 | 0.45637 | 0.45728 | 0.45818 | 0.45907 | 0.45994 | 0.46080 | 0.46164 | 0.46246 | 0.46327 |
| 1.8 | 0.46407 | 0.46485 | 0.46562 | 0.46638 | 0.46712 | 0.46784 | 0.46856 | 0.46926 | 0.46995 | 0.47062 |
| 1.9 | 0.47128 | 0.47193 | 0.47257 | 0.47320 | 0.47381 | 0.47441 | 0.47500 | 0.47558 | 0.47615 | 0.47670 |
| 2.0 | 0.47725 | 0.47778 | 0.47831 | 0.47882 | 0.47932 | 0.47982 | 0.48030 | 0.48077 | 0.48124 | 0.48169 |
| 2.1 | 0.48214 | 0.48257 | 0.48300 | 0.48341 | 0.48382 | 0.48422 | 0.48461 | 0.48500 | 0.48537 | 0.48574 |
| 2.2 | 0.48610 | 0.48645 | 0.48679 | 0.48713 | 0.48745 | 0.48778 | 0.48809 | 0.48840 | 0.48870 | 0.48899 |
| 2.3 | 0.48928 | 0.48956 | 0.48983 | 0.49010 | 0.49036 | 0.49061 | 0.49086 | 0.49111 | 0.49134 | 0.49158 |
| 2.4 | 0.49180 | 0.49202 | 0.49224 | 0.49245 | 0.49266 | 0.49286 | 0.49305 | 0.49324 | 0.49343 | 0.49361 |
Table 2: Areas under the standard normal density curve from 0 to x (continued).

| x   | 0.00    | 0.01    | 0.02    | 0.03    | 0.04    | 0.05    | 0.06    | 0.07    | 0.08    | 0.09    |
|-----|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| 2.5 | 0.49379 | 0.49396 | 0.49413 | 0.49430 | 0.49446 | 0.49461 | 0.49477 | 0.49492 | 0.49506 | 0.49520 |
| 2.6 | 0.49534 | 0.49547 | 0.49560 | 0.49573 | 0.49585 | 0.49598 | 0.49609 | 0.49621 | 0.49632 | 0.49643 |
| 2.7 | 0.49653 | 0.49664 | 0.49674 | 0.49683 | 0.49693 | 0.49702 | 0.49711 | 0.49720 | 0.49728 | 0.49736 |
| 2.8 | 0.49744 | 0.49752 | 0.49760 | 0.49767 | 0.49774 | 0.49781 | 0.49788 | 0.49795 | 0.49801 | 0.49807 |
| 2.9 | 0.49813 | 0.49819 | 0.49825 | 0.49831 | 0.49836 | 0.49841 | 0.49846 | 0.49851 | 0.49856 | 0.49861 |
| 3.0 | 0.49865 | 0.49869 | 0.49874 | 0.49878 | 0.49882 | 0.49886 | 0.49889 | 0.49893 | 0.49896 | 0.49900 |
| 3.1 | 0.49903 | 0.49906 | 0.49910 | 0.49913 | 0.49916 | 0.49918 | 0.49921 | 0.49924 | 0.49926 | 0.49929 |
| 3.2 | 0.49931 | 0.49934 | 0.49936 | 0.49938 | 0.49940 | 0.49942 | 0.49944 | 0.49946 | 0.49948 | 0.49950 |
| 3.3 | 0.49952 | 0.49953 | 0.49955 | 0.49957 | 0.49958 | 0.49960 | 0.49961 | 0.49962 | 0.49964 | 0.49965 |
| 3.4 | 0.49966 | 0.49968 | 0.49969 | 0.49970 | 0.49971 | 0.49972 | 0.49973 | 0.49974 | 0.49975 | 0.49976 |
| 3.5 | 0.49977 | 0.49978 | 0.49978 | 0.49979 | 0.49980 | 0.49981 | 0.49981 | 0.49982 | 0.49983 | 0.49983 |
| 3.6 | 0.49984 | 0.49985 | 0.49985 | 0.49986 | 0.49986 | 0.49987 | 0.49987 | 0.49988 | 0.49988 | 0.49989 |
| 3.7 | 0.49989 | 0.49990 | 0.49990 | 0.49990 | 0.49991 | 0.49991 | 0.49992 | 0.49992 | 0.49992 | 0.49992 |
| 3.8 | 0.49993 | 0.49993 | 0.49993 | 0.49994 | 0.49994 | 0.49994 | 0.49994 | 0.49995 | 0.49995 | 0.49995 |
| 3.9 | 0.49995 | 0.49995 | 0.49996 | 0.49996 | 0.49996 | 0.49996 | 0.49996 | 0.49996 | 0.49997 | 0.49997 |
| 4.0 | 0.49997 | 0.49997 | 0.49997 | 0.49997 | 0.49997 | 0.49997 | 0.49998 | 0.49998 | 0.49998 | 0.49998 |
Preface to the First Edition
When someone discovers that you are writing a textbook, one (or both) of two questions will be asked. The first is “Why are you writing a book?” and the second is
“How is your book different from what’s out there?” The first question is fairly easy
to answer. You are writing a book because you are not entirely satisfied with the
available texts. The second question is harder to answer. The answer can’t be put
in a few sentences so, in order not to bore your audience (who may be asking the
question only out of politeness), you try to say something quick and witty. It usually
doesn’t work.
The purpose of this book is to build theoretical statistics (as different from mathematical statistics) from the first principles of probability theory. Logical development, proofs, ideas, themes, etc., evolve through statistical arguments. Thus, starting from the basics of probability, we develop the theory of statistical inference using techniques, definitions, and concepts that are statistical and are natural extensions and consequences of previous concepts. When this endeavor was started, we were not sure how well it would work. The final judgment of our success is, of course, left to the reader.
The book is intended for first-year graduate students majoring in statistics or in
a field where a statistics concentration is desirable. The prerequisite is one year of
calculus. (Some familiarity with matrix manipulations would be useful, but is not
essential.) The book can be used for a two-semester, or three-quarter, introductory
course in statistics.
The first four chapters cover basics of probability theory and introduce many fundamentals that are later necessary. Chapters 5 and 6 are the first statistical chapters.
Chapter 5 is transitional (between probability and statistics) and can be the starting
point for a course in statistical theory for students with some probability background.
Chapter 6 is somewhat unique, detailing three statistical principles (sufficiency, likelihood, and invariance) and showing how these principles are important in modeling data. Not all instructors will cover this chapter in detail, although we strongly recommend spending some time here. In particular, the likelihood and invariance principles are treated in detail. Along with the sufficiency principle, these principles, and the thinking behind them, are fundamental to total statistical understanding.
Chapters 7-9 represent the central core of statistical inference, estimation (point
and interval) and hypothesis testing. A major feature of these chapters is the division
into methods of finding appropriate statistical techniques and methods of evaluating
these techniques. Finding and evaluating are of interest to both the theorist and the
practitioner, but we feel that it is important to separate these endeavors. Different concerns are important, and different rules are invoked. Of further interest may be the sections of these chapters titled Other Considerations. Here, we indicate how the rules of statistical inference may be relaxed (as is done every day) and still produce meaningful inferences. Many of the techniques covered in these sections are ones that are used in consulting and are helpful in analyzing and inferring from actual problems.
The final three chapters can be thought of as special topics, although we feel that
some familiarity with the material is important in anyone’s statistical education.
Chapter 10 is a thorough introduction to decision theory and contains the most modern material we could include. Chapter 11 deals with the analysis of variance (oneway
and randomized block), building the theory of the complete analysis from the more
simple theory of treatment contrasts. Our experience has been that experimenters are
most interested in inferences from contrasts, and using principles developed earlier,
most tests and intervals can be derived from contrasts. Finally, Chapter 12 treats
the theory of regression, dealing first with simple linear regression and then covering
regression with “errors in variables.” This latter topic is quite important, not only to
show its own usefulness and inherent difficulties, but also to illustrate the limitations
of inferences from ordinary regression.
As more concrete guidelines for basing a one-year course on this book, we offer the
following suggestions. There can be two distinct types of courses taught from this
book. One kind we might label “more mathematical,” being a course appropriate for
students majoring in statistics and having a solid mathematics background (at least 1½ years of calculus, some matrix algebra, and perhaps a real analysis course). For such students we recommend covering Chapters 1-9 in their entirety (which should take approximately 22 weeks) and spend the remaining time customizing the course with selected topics from Chapters 10-12. Once the first nine chapters are covered,
the material in each of the last three chapters is self-contained, and can be covered
in any order.
Another type of course is “more practical.” Such a course may also be a first course
for mathematically sophisticated students, but is aimed at students with one year of
calculus who may not be majoring in statistics. It stresses the more practical uses of
statistical theory, being more concerned with understanding basic statistical concepts and deriving reasonable statistical procedures for a variety of situations, and less concerned with formal optimality investigations. Such a course will necessarily omit
a certain amount of material, but the following list of sections can be covered in a
one-year course:
| Chapter | Sections |
|---------|----------|
| 1 | All |
| 2 | 2.1, 2.2, 2.3 |
| 3 | 3.1, 3.2 |
| 4 | 4.1, 4.2, 4.3, 4.5 |
| 5 | 5.1, 5.2, 5.3.1, 5.4 |
| 6 | 6.1.1, 6.2.1 |
| 7 | 7.1, 7.2.1, 7.2.2, 7.2.3, 7.3.1, 7.3.3, 7.4 |
| 8 | 8.1, 8.2.1, 8.2.3, 8.2.4, 8.3.1, 8.3.2, 8.4 |
| 9 | 9.1, 9.2.1, 9.2.2, 9.2.4, 9.3.1, 9.4 |
| 11 | 11.1, 11.2 |
| 12 | 12.1, 12.2 |
If time permits, there can be some discussion (with little emphasis on details) of the
material in Sections 4.4, 5.5, and 6.1.2, 6.1.3, 6.1.4. The material in Sections 11.3 and
12.3 may also be considered.
The exercises have been gathered from many sources and are quite plentiful. We feel that, perhaps, the only way to master this material is through practice, and thus we have included much opportunity to do so. The exercises are as varied as we could make them, and many of them illustrate points that are either new or complementary
to the material in the text. Some exercises are even taken from research papers. (It
makes you feel old when you can include exercises based on papers that were new
research during your own student days!) Although the exercises are not subdivided
like the chapters, their ordering roughly follows that of the chapter. (Subdivisions
often give too many hints.) Furthermore, the exercises become (again, roughly) more
challenging as their numbers become higher.
As this is an introductory book with a relatively broad scope, the topics are not covered in great depth. However, we felt some obligation to guide the reader one
step further in the topics that may be of interest. Thus, we have included many
references, pointing to the path to deeper understanding of any particular topic. (The
Encyclopedia of Statistical Sciences, edited by Kotz, Johnson, and Read, provides a
fine introduction to many topics.)
To write this book, we have drawn on both our past teachings and current work. We have also drawn on many people, to whom we are extremely grateful. We thank our colleagues at Cornell, North Carolina State, and Purdue, in particular, Jim Berger, Larry Brown, Sir David Cox, Ziding Feng, Janet Johnson, Leon Gleser, Costas Goutis, Dave Lansky, George McCabe, Chuck McCulloch, Myra Samuels, Steve Schwager, and Shayle Searle, who have given their time and expertise in reading parts of this manuscript, offered assistance, and taken part in many conversations leading to constructive suggestions. We also thank Shanti Gupta for his hospitality, and the library at Purdue, which was essential. We are grateful for the detailed reading and helpful suggestions of Shayle Searle and of our reviewers, both anonymous and nonanonymous (Jim Albert, Dan Coster, and Tom Wehrly). We also thank David Moore and George McCabe for allowing us to use their tables, and Steve Hirdt for supplying
us with data. Since this book was written by two people who, for most of the time,
were at least 600 miles apart, we lastly thank Bitnet for making this entire thing
possible.
George Casella
Roger L. Berger
"We have got to the deductions and the inferences," said Lestrade, winking at me. "I find it hard enough to tackle facts, Holmes, without flying away after theories and fancies."
The Boscombe Valley Mystery
Contents

1 Probability Theory  1
  1.1 Set Theory  1
  1.2 Basics of Probability Theory  5
    1.2.1 Axiomatic Foundations  5
    1.2.2 The Calculus of Probabilities  9
    1.2.3 Counting  13
    1.2.4 Enumerating Outcomes  16
  1.3 Conditional Probability and Independence  20
  1.4 Random Variables  27
  1.5 Distribution Functions  29
  1.6 Density and Mass Functions  34
  1.7 Exercises  37
  1.8 Miscellanea  44

2 Transformations and Expectations  47
  2.1 Distributions of Functions of a Random Variable  47
  2.2 Expected Values  55
  2.3 Moments and Moment Generating Functions  59
  2.4 Differentiating Under an Integral Sign  68
  2.5 Exercises  76
  2.6 Miscellanea  82

3 Common Families of Distributions  85
  3.1 Introduction  85
  3.2 Discrete Distributions  85
  3.3 Continuous Distributions  98
  3.4 Exponential Families  111
  3.5 Location and Scale Families  116
  3.6 Inequalities and Identities  121
    3.6.1 Probability Inequalities  122
    3.6.2 Identities  123
  3.7 Exercises  127
  3.8 Miscellanea  135

4 Multiple Random Variables  139
  4.1 Joint and Marginal Distributions  139
  4.2 Conditional Distributions and Independence  147
  4.3 Bivariate Transformations  156
  4.4 Hierarchical Models and Mixture Distributions  162
  4.5 Covariance and Correlation  169
  4.6 Multivariate Distributions  177
  4.7 Inequalities  186
    4.7.1 Numerical Inequalities  186
    4.7.2 Functional Inequalities  189
  4.8 Exercises  192
  4.9 Miscellanea  203

5 Properties of a Random Sample  207
  5.1 Basic Concepts of Random Samples  207
  5.2 Sums of Random Variables from a Random Sample  211
  5.3 Sampling from the Normal Distribution  218
    5.3.1 Properties of the Sample Mean and Variance  218
    5.3.2 The Derived Distributions: Student's t and Snedecor's F  222
  5.4 Order Statistics  226
  5.5 Convergence Concepts  232
    5.5.1 Convergence in Probability  232
    5.5.2 Almost Sure Convergence  234
    5.5.3 Convergence in Distribution  235
    5.5.4 The Delta Method  240
  5.6 Generating a Random Sample  245
    5.6.1 Direct Methods  247
    5.6.2 Indirect Methods  251
    5.6.3 The Accept/Reject Algorithm  253
  5.7 Exercises  255
  5.8 Miscellanea  267

6 Principles of Data Reduction  271
  6.1 Introduction  271
  6.2 The Sufficiency Principle  272
    6.2.1 Sufficient Statistics  272
    6.2.2 Minimal Sufficient Statistics  279
    6.2.3 Ancillary Statistics  282
    6.2.4 Sufficient, Ancillary, and Complete Statistics  284
  6.3 The Likelihood Principle  290
    6.3.1 The Likelihood Function  290
    6.3.2 The Formal Likelihood Principle  292
  6.4 The Equivariance Principle  296
  6.5 Exercises  300
  6.6 Miscellanea  307

7 Point Estimation  311
  7.1 Introduction  311
  7.2 Methods of Finding Estimators  312
    7.2.1 Method of Moments  312
    7.2.2 Maximum Likelihood Estimators  315
    7.2.3 Bayes Estimators  324
    7.2.4 The EM Algorithm  326
  7.3 Methods of Evaluating Estimators  330
    7.3.1 Mean Squared Error  330
    7.3.2 Best Unbiased Estimators  334
    7.3.3 Sufficiency and Unbiasedness  342
    7.3.4 Loss Function Optimality  348
  7.4 Exercises  355
  7.5 Miscellanea  367

8 Hypothesis Testing …