
Title: answers to exercises - mathematical statistics with applications (7th edition).pdf
File Size: 1.3 MB
Total Pages: 20
Table of Contents
                            Answers to Exercises
Front Cover
Title Page
Copyright
Contents
Preface
Note to the Student
1 What Is Statistics?
	1.1 Introduction
		Exercises
	1.2 Characterizing a Set of Measurements: Graphical Methods
		Exercises
	1.3 Characterizing a Set of Measurements: Numerical Methods
		Exercises
	1.4 How Inferences Are Made
	1.5 Theory and Reality
	1.6 Summary
	References and Further Readings
	Supplementary Exercises
2 Probability
	2.1 Introduction
	2.2 Probability and Inference
	2.3 A Review of Set Notation
		Exercises
	2.4 A Probabilistic Model for an Experiment: The Discrete Case
		Exercises
	2.5 Calculating the Probability of an Event: The Sample-Point Method
		Exercises
	2.6 Tools for Counting Sample Points
		Exercises
	2.7 Conditional Probability and the Independence of Events
		Exercises
	2.8 Two Laws of Probability
		Exercises
	2.9 Calculating the Probability of an Event: The Event-Composition Method
		Exercises
	2.10 The Law of Total Probability and Bayes’ Rule
		Exercises
	2.11 Numerical Events and Random Variables
		Exercises
	2.12 Random Sampling
	2.13 Summary
	References and Further Readings
	Supplementary Exercises
3 Discrete Random Variables and Their Probability Distributions
	3.1 Basic Definition
	3.2 The Probability Distribution for a Discrete Random Variable
		Exercises
	3.3 The Expected Value of a Random Variable or a Function of a Random Variable
		Exercises
	3.4 The Binomial Probability Distribution
		Exercises
	3.5 The Geometric Probability Distribution
		Exercises
	3.6 The Negative Binomial Probability Distribution (Optional)
		Exercises
	3.7 The Hypergeometric Probability Distribution
		Exercises
	3.8 The Poisson Probability Distribution
		Exercises
	3.9 Moments and Moment-Generating Functions
		Exercises
	3.10 Probability-Generating Functions (Optional)
		Exercises
	3.11 Tchebysheff’s Theorem
		Exercises
	3.12 Summary
	References and Further Readings
	Supplementary Exercises
4 Continuous Variables and Their Probability Distributions
	4.1 Introduction
	4.2 The Probability Distribution for a Continuous Random Variable
		Exercises
	4.3 Expected Values for Continuous Random Variables
		Exercises
	4.4 The Uniform Probability Distribution
		Exercises
	4.5 The Normal Probability Distribution
		Exercises
	4.6 The Gamma Probability Distribution
		Exercises
	4.7 The Beta Probability Distribution
		Exercises
	4.8 Some General Comments
	4.9 Other Expected Values
		Exercises
	4.10 Tchebysheff’s Theorem
		Exercises
	4.11 Expectations of Discontinuous Functions and Mixed Probability Distributions (Optional)
		Exercises
	4.12 Summary
	References and Further Readings
	Supplementary Exercises
5 Multivariate Probability Distributions
	5.1 Introduction
	5.2 Bivariate and Multivariate Probability Distributions
		Exercises
	5.3 Marginal and Conditional Probability Distributions
		Exercises
	5.4 Independent Random Variables
		Exercises
	5.5 The Expected Value of a Function of Random Variables
	5.6 Special Theorems
		Exercises
	5.7 The Covariance of Two Random Variables
		Exercises
	5.8 The Expected Value and Variance of Linear Functions of Random Variables
		Exercises
	5.9 The Multinomial Probability Distribution
		Exercises
	5.10 The Bivariate Normal Distribution (Optional)
		Exercises
	5.11 Conditional Expectations
		Exercises
	5.12 Summary
	References and Further Readings
	Supplementary Exercises
6 Functions of Random Variables
	6.1 Introduction
	6.2 Finding the Probability Distribution of a Function of Random Variables
	6.3 The Method of Distribution Functions
		Exercises
	6.4 The Method of Transformations
		Exercises
	6.5 The Method of Moment-Generating Functions
		Exercises
	6.6 Multivariable Transformations Using Jacobians (Optional)
		Exercises
	6.7 Order Statistics
		Exercises
	6.8 Summary
	References and Further Readings
	Supplementary Exercises
7 Sampling Distributions and the Central Limit Theorem
	7.1 Introduction
		Exercises
	7.2 Sampling Distributions Related to the Normal Distribution
		Exercises
	7.3 The Central Limit Theorem
		Exercises
	7.4 A Proof of the Central Limit Theorem (Optional)
	7.5 The Normal Approximation to the Binomial Distribution
		Exercises
	7.6 Summary
	References and Further Readings
	Supplementary Exercises
8 Estimation
	8.1 Introduction
	8.2 The Bias and Mean Square Error of Point Estimators
		Exercises
	8.3 Some Common Unbiased Point Estimators
	8.4 Evaluating the Goodness of a Point Estimator
		Exercises
	8.5 Confidence Intervals
		Exercises
	8.6 Large-Sample Confidence Intervals
		Exercises
	8.7 Selecting the Sample Size
		Exercises
	8.8 Small-Sample Confidence Intervals for μ and μ₁ − μ₂
		Exercises
	8.9 Confidence Intervals for σ²
		Exercises
	8.10 Summary
	References and Further Readings
	Supplementary Exercises
9 Properties of Point Estimators and Methods of Estimation
	9.1 Introduction
	9.2 Relative Efficiency
		Exercises
	9.3 Consistency
		Exercises
	9.4 Sufficiency
		Exercises
	9.5 The Rao–Blackwell Theorem and Minimum-Variance Unbiased Estimation
		Exercises
	9.6 The Method of Moments
		Exercises
	9.7 The Method of Maximum Likelihood
		Exercises
	9.8 Some Large-Sample Properties of Maximum-Likelihood Estimators (Optional)
		Exercises
	9.9 Summary
	References and Further Readings
	Supplementary Exercises
10 Hypothesis Testing
	10.1 Introduction
	10.2 Elements of a Statistical Test
		Exercises
	10.3 Common Large-Sample Tests
		Exercises
	10.4 Calculating Type II Error Probabilities and Finding the Sample Size for Z Tests
		Exercises
	10.5 Relationships Between Hypothesis-Testing Procedures and Confidence Intervals
		Exercises
	10.6 Another Way to Report the Results of a Statistical Test: Attained Significance Levels, or p-Values
		Exercises
	10.7 Some Comments on the Theory of Hypothesis Testing
		Exercises
	10.8 Small-Sample Hypothesis Testing for μ and μ₁ − μ₂
		Exercises
	10.9 Testing Hypotheses Concerning Variances
		Exercises
	10.10 Power of Tests and the Neyman–Pearson Lemma
		Exercises
	10.11 Likelihood Ratio Tests
		Exercises
	10.12 Summary
	References and Further Readings
	Supplementary Exercises
11 Linear Models and Estimation by Least Squares
	11.1 Introduction
	11.2 Linear Statistical Models
	11.3 The Method of Least Squares
		Exercises
	11.4 Properties of the Least-Squares Estimators: Simple Linear Regression
		Exercises
	11.5 Inferences Concerning the Parameters βᵢ
		Exercises
	11.6 Inferences Concerning Linear Functions of the Model Parameters: Simple Linear Regression
		Exercises
	11.7 Predicting a Particular Value of Y by Using Simple Linear Regression
		Exercises
	11.8 Correlation
		Exercises
	11.9 Some Practical Examples
		Exercises
	11.10 Fitting the Linear Model by Using Matrices
		Exercises
	11.11 Linear Functions of the Model Parameters: Multiple Linear Regression
	11.12 Inferences Concerning Linear Functions of the Model Parameters: Multiple Linear Regression
		Exercises
	11.13 Predicting a Particular Value of Y by Using Multiple Regression
		Exercises
	11.14 A Test for H₀: β_{g+1} = β_{g+2} = · · · = β_{k} = 0
		Exercises
	11.15 Summary and Concluding Remarks
	References and Further Readings
	Supplementary Exercises
12 Considerations in Designing Experiments
	12.1 The Elements Affecting the Information in a Sample
	12.2 Designing Experiments to Increase Accuracy
		Exercises
	12.3 The Matched-Pairs Experiment
		Exercises
	12.4 Some Elementary Experimental Designs
		Exercises
	12.5 Summary
	References and Further Readings
	Supplementary Exercises
13 The Analysis of Variance
	13.1 Introduction
	13.2 The Analysis of Variance Procedure
		Exercises
	13.3 Comparison of More Than Two Means: Analysis of Variance for a One-Way Layout
	13.4 An Analysis of Variance Table for a One-Way Layout
		Exercises
	13.5 A Statistical Model for the One-Way Layout
		Exercises
	13.6 Proof of Additivity of the Sums of Squares and E(MST) for a One-Way Layout (Optional)
	13.7 Estimation in the One-Way Layout
		Exercises
	13.8 A Statistical Model for the Randomized Block Design
		Exercises
	13.9 The Analysis of Variance for a Randomized Block Design
		Exercises
	13.10 Estimation in the Randomized Block Design
		Exercises
	13.11 Selecting the Sample Size
		Exercises
	13.12 Simultaneous Confidence Intervals for More Than One Parameter
		Exercises
	13.13 Analysis of Variance Using Linear Models
		Exercises
	13.14 Summary
	References and Further Readings
	Supplementary Exercises
14 Analysis of Categorical Data
	14.1 A Description of the Experiment
	14.2 The Chi-Square Test
	14.3 A Test of a Hypothesis Concerning Specified Cell Probabilities: A Goodness-of-Fit Test
		Exercises
	14.4 Contingency Tables
		Exercises
	14.5 r × c Tables with Fixed Row or Column Totals
		Exercises
	14.6 Other Applications
	14.7 Summary and Concluding Remarks
	References and Further Readings
	Supplementary Exercises
15 Nonparametric Statistics
	15.1 Introduction
	15.2 A General Two-Sample Shift Model
	15.3 The Sign Test for a Matched-Pairs Experiment
		Exercises
	15.4 The Wilcoxon Signed-Rank Test for a Matched-Pairs Experiment
		Exercises
	15.5 Using Ranks for Comparing Two Population Distributions: Independent Random Samples
	15.6 The Mann–Whitney U Test: Independent Random Samples
		Exercises
	15.7 The Kruskal–Wallis Test for the One-Way Layout
		Exercises
	15.8 The Friedman Test for Randomized Block Designs
		Exercises
	15.9 The Runs Test: A Test for Randomness
		Exercises
	15.10 Rank Correlation Coefficient
		Exercises
	15.11 Some General Comments on Nonparametric Statistical Tests
	References and Further Readings
	Supplementary Exercises
16 Introduction to Bayesian Methods for Inference
	16.1 Introduction
	16.2 Bayesian Priors, Posteriors, and Estimators
		Exercises
	16.3 Bayesian Credible Intervals
		Exercises
	16.4 Bayesian Tests of Hypotheses
		Exercises
	16.5 Summary and Additional Comments
	References and Further Readings
Appendix 1 Matrices and Other Useful Mathematical Results
	A1.1 Matrices and Matrix Algebra
	A1.2 Addition of Matrices
	A1.3 Multiplication of a Matrix by a Real Number
	A1.4 Matrix Multiplication
	A1.5 Identity Elements
	A1.6 The Inverse of a Matrix
	A1.7 The Transpose of a Matrix
	A1.8 A Matrix Expression for a System of Simultaneous Linear Equations
	A1.9 Inverting a Matrix
	A1.10 Solving a System of Simultaneous Linear Equations
	A1.11 Other Useful Mathematical Results
Appendix 2 Common Probability Distributions, Means, Variances, and Moment-Generating Functions
	Table 1 Discrete Distributions
	Table 2 Continuous Distributions
Appendix 3 Tables
	Table 1 Binomial Probabilities
	Table 2 Table of e⁻ˣ
	Table 3 Poisson Probabilities
	Table 4 Normal Curve Areas
	Table 5 Percentage Points of the t Distributions
	Table 6 Percentage Points of the χ² Distributions
	Table 7 Percentage Points of the F Distributions
	Table 8 Distribution Function of U
	Table 9 Critical Values of T in the Wilcoxon Matched-Pairs, Signed-Ranks Test; n = 5(1)50
	Table 10 Distribution of the Total Number of Runs R in Samples of Size (n1, n2); P(R < a)
	Table 11 Critical Values of Spearman’s Rank Correlation Coefficient
	Table 12 Random Numbers
Answers to Exercises
Index
                        
Document Text Contents
Page 1

ANSWERS

Chapter 1

1.5 a 2.45 − 2.65, 2.65 − 2.85
b 7/30
c 16/30

1.9 a Approx. .68
b Approx. .95
c Approx. .815
d Approx. 0

1.13 a ȳ = 9.79; s = 4.14
b k = 1: (5.65, 13.93); k = 2: (1.51, 18.07); k = 3: (−2.63, 22.21)

1.15 a ȳ = 4.39; s = 1.87
b k = 1: (2.52, 6.26); k = 2: (0.65, 8.13); k = 3: (−1.22, 10)

1.17 For Ex. 1.2, range/4 = 7.35; s = 4.14; for Ex. 1.3, range/4 = 3.04; s = 3.17; for Ex. 1.4, range/4 = 2.32, s = 1.87.
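
The intervals in 1.13 and 1.15 are ȳ ± ks for k = 1, 2, 3, the form used with Tchebysheff's theorem and the empirical rule. A minimal sketch (plain Python, no packages) that reproduces them from the printed mean and standard deviation:

```python
# Reproduce the ybar +/- k*s intervals printed in 1.13 and 1.15.
def k_sigma_intervals(ybar, s, ks=(1, 2, 3)):
    """Return (ybar - k*s, ybar + k*s) for each k."""
    return {k: (round(ybar - k * s, 2), round(ybar + k * s, 2)) for k in ks}

print(k_sigma_intervals(9.79, 4.14))  # 1.13: (5.65, 13.93), (1.51, 18.07), (-2.63, 22.21)
print(k_sigma_intervals(4.39, 1.87))  # 1.15: (2.52, 6.26), (0.65, 8.13), (-1.22, 10.0)
```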

1.19 ȳ − s = −19 < 0

1.21 .84
1.23 a 16%

b Approx. 95%
1.25 a 177
c ȳ = 210.8; s = 162.17
d k = 1: (48.6, 373); k = 2: (−113.5, 535.1); k = 3: (−275.7, 697.3)

1.27 68% or 231 scores; 95% or 323 scores
1.29 .05
1.31 .025
1.33 (0.5, 10.5)
1.35 a (172 − 108)/4 = 16
b ȳ = 136.1; s = 17.1
c a = 136.1 − 2(17.1) = 101.9; b = 136.1 + 2(17.1) = 170.3

Chapter 2

2.7 A = {two males} = {(M1, M2), (M1, M3), (M2, M3)}
B = {at least one female} = {(M1, W1), (M2, W1), (M3, W1), (M1, W2), (M2, W2), (M3, W2), (W1, W2)}
B̄ = {no females} = A; A ∪ B = S; A ∩ B = ∅; A ∩ B̄ = A

2.9 S = {A+, B+, AB+, O+, A−, B−,
AB−, O−}

2.11 a P(E5) = .10; P(E4) = .20
b p = .2

2.13 a E1 = very likely (VL); E2 = somewhat likely (SL); E3 = unlikely (U); E4 = other (O)
b No; P(VL) = .24, P(SL) = .24, P(U) = .40, P(O) = .12
c .48

2.15 a .09
b .19

2.17 a .08
b .16
c .14
d .84

2.19 a (V1, V1), (V1, V2), (V1, V3),
(V2, V1), (V2, V2), (V2, V3),
(V3, V1), (V3, V2), (V3, V3)

b If equally likely, all have
probability of 1/9.

c P(A) = 1/3; P(B) = 5/9;
P(A ∪ B) = 7/9;
P(A ∩ B) = 1/9

2.27 a S = {CC, CR, CL, RC, RR, RL,
LC, LR, LL}

b 5/9
c 5/9


Page 2


2.29 c 1/15
2.31 a 3/5; 1/15

b 14/15; 2/5
2.33 c 11/16; 3/8; 1/4
2.35 42
2.37 a 6! = 720

b .5
2.39 a 36

b 1/6
2.41 9(10)^6
2.43 504 ways
2.45 408,408
2.49 a 8385

b 18,252
c 8515 required
d Yes

2.51 a 4/19,600
b 276/19,600
c 4,140/19,600
d 15,180/19,600

2.53 a 60 sample points
b 36/60 = .6

2.55 a C(90, 10)
b C(20, 4)C(70, 6)/C(90, 10) = .111
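
Part b of 2.55 is a hypergeometric-style count: choose 4 from a group of 20 and 6 from the other 70, out of all 10-subsets of 90. A quick check with Python's math.comb (the group sizes are taken from the printed coefficients):

```python
from math import comb

# C(20, 4) * C(70, 6) / C(90, 10)
p = comb(20, 4) * comb(70, 6) / comb(90, 10)
print(round(p, 3))  # 0.111
```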

2.57 (4 × 12)/1326 = .0362
2.59 a .000394

b .00355

2.61 a 364^n/365^n
b .5005
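
Part a of 2.61 is the probability that none of n other people shares your birthday. The .5005 in part b is what 1 − (364/365)^n gives at n = 253; that value of n is inferred from the printed answer rather than stated here. A quick check:

```python
# P(at least one of n people shares your birthday) = 1 - (364/365)**n
n = 253  # assumed: the n that yields the printed .5005
print(round(1 - (364 / 365) ** n, 4))  # 0.5005
```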

2.63 1/56
2.65 5/162
2.67 a P(A) = .0605

b .001344
c .00029

2.71 a 1/3
b 1/5
c 5/7
d 1
e 1/7

2.73 a 3/4
b 3/4
c 2/3

2.77 a .40 b .37 c .10
d .67 e .6 f .33
g .90 h .27 i .25

2.93 .364
2.95 a .1

b .9

c .6
d 2/3

2.97 a .999
b .9009

2.101 .05
2.103 a .001

b .000125
2.105 .90
2.109 P(A) ≥ .9833
2.111 .149
2.113 (.98)^3(.02)
2.115 (.75)^4
2.117 a 4(.5)^4 = .25
b (.5)^4 = 1/16
2.119 a 1/4

b 1/3
2.121 a 1/n
b 1/n; 1/n
c 3/7
2.125 1/12
2.127 a .857

c No; .8696
d Yes

2.129 .4
2.133 .9412
2.135 a .57

b .18
c .3158
d .90

2.137 a 2/5
b 3/20

2.139 P(Y = 0) = (.02)^3;
P(Y = 1) = 3(.02)^2(.98);
P(Y = 2) = 3(.02)(.98)^2;
P(Y = 3) = (.98)^3
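
These four probabilities are the binomial(3, .98) distribution, P(Y = k) = C(3, k)(.98)^k(.02)^(3−k). A one-loop check:

```python
from math import comb

# Y ~ Binomial(n = 3, p = .98)
for k in range(4):
    print(k, comb(3, k) * 0.98**k * 0.02**(3 - k))
```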

2.141 P(Y = 2) = 1/15; P(Y = 3) = 2/15;
P(Y = 4) = 3/15; P(Y = 5) = 4/15;
P(Y = 6) = 5/15

2.145 18!
2.147 .0083
2.149 a .4

b .6
c .25

2.151 4[p^4(1 − p) + p(1 − p)^4]
2.153 .313
2.155 a .5

b .15
c .10
d .875

Page 10


5.163 b F(y1, y2) = y1y2[1 − α(1 − y1)(1 − y2)]
c f(y1, y2) = 1 − α[(1 − 2y1)(1 − 2y2)], 0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1
d Choose two different values for α with −1 ≤ α ≤ 1.

5.165 a (p1e^(t1) + p2e^(t2) + p3e^(t3))^n
b m(t, 0, 0)
c Cov(X1, X2) = −np1p2

Chapter 6

6.1 a (1 − u)/2, −1 ≤ u ≤ 1
b (u + 1)/2, −1 ≤ u ≤ 1
c 1/√u − 1, 0 ≤ u ≤ 1
d E(U1) = −1/3, E(U2) = 1/3, E(U3) = 1/6
e E(2Y − 1) = −1/3, E(1 − 2Y) = 1/3, E(Y²) = 1/6
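
Parts d and e suggest U1 = 2Y − 1, U2 = 1 − 2Y, and U3 = Y². The answers are consistent with Y having density f(y) = 2(1 − y) on (0, 1); that density is an assumption read back from the answers, not stated in this key. Under it, F(y) = 2y − y², so Y = 1 − √(1 − U), and a Monte Carlo sketch recovers the three expectations:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(1_000_000)
y = 1 - np.sqrt(1 - u)     # inverse-CDF draw from the assumed density f(y) = 2(1 - y)

print((2 * y - 1).mean())  # ~ -1/3, matching E(U1)
print((1 - 2 * y).mean())  # ~  1/3, matching E(U2)
print((y ** 2).mean())     # ~  1/6, matching E(U3)
```
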
6.3 b fU(u) = (u + 4)/100 for −4 ≤ u ≤ 6; 1/10 for 6 < u ≤ 11
c 5.5833

6.5 fU(u) = (1/16)((u − 3)/2)^(−1/2), 5 ≤ u ≤ 53
6.7 a fU(u) = (1/√(2π))u^(−1/2)e^(−u/2), u ≥ 0
b U has a gamma distribution with α = 1/2 and β = 2 (recall that Γ(1/2) = √π).
6.9 a fU (u) = 2u, 0 ≤ u ≤ 1
b E(U ) = 2/3
c E(Y1 + Y2) = 2/3

6.11 a fU(u) = 4ue^(−2u), u ≥ 0, a gamma density with α = 2 and β = 1/2
b E(U) = 1, V(U) = 1/2
6.13 fU(u) = F′U(u) = (u/β²)e^(−u/β), u > 0

6.15 [−ln(1 − U)]^(1/2)
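
6.15 is the inverse-transform recipe: if U is uniform on (0, 1), then Y = [−ln(1 − U)]^(1/2) has F(y) = 1 − e^(−y²), a Weibull distribution with m = 2 (the form appearing in 6.27, here with β = 1). A minimal simulation check under that reading, using E(Y) = Γ(3/2) = √π/2 ≈ .8862:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(1_000_000)
y = np.sqrt(-np.log(1 - u))  # inverse transform for F(y) = 1 - exp(-y**2)

print(y.mean())              # ~ 0.8862 = sqrt(pi)/2
```
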
6.17 a f(y) = αy^(α−1)/θ^α, 0 ≤ y ≤ θ
b Y = θU^(1/α)
c y = 4√u. The values are 2.0785, 3.229, 1.5036, 1.5610, 2.403.
6.25 fU(u) = 4ue^(−2u) for u ≥ 0
6.27 a f(w) = (2/β)we^(−w²/β), w ≥ 0, which is a Weibull density with m = 2.
b E(Y^(k/2)) = Γ(k/2 + 1)β^(k/2)

6.29 a fW(w) = (1/(Γ(3/2)(kT)^(3/2)))w^(1/2)e^(−w/kT), w > 0
b E(W) = (3/2)kT

6.31 fU(u) = 2/(1 + u)³, u ≥ 0
6.33 fU(u) = 4(80 − 31u + 3u²), 4.5 ≤ u ≤ 5
6.35 fU (u) = − ln(u), 0 ≤ u ≤ 1
6.37 a mY1(t) = 1 − p + pe^t
b mW(t) = E(e^(tW)) = [1 − p + pe^t]^n
6.39 fU(u) = 4ue^(−2u), u ≥ 0
6.43 a Ȳ has a normal distribution with mean µ and variance σ²/n
b P(|Ȳ − µ| ≤ 1) = .7888
c The probabilities are .8664, .9544, .9756; as the sample size increases, so does the probability that |Ȳ − µ| ≤ 1.

6.45 c = $190.27
6.47 P(U > 16.0128) = .025
6.51 The distribution of Y1 + (n2 − Y2) is binomial with n1 + n2 trials and success probability p = .2

6.53 a Binomial(nm, p), where ni = m
b Binomial(n1 + n2 + · · · + nn, p)
c Hypergeometric(r = n, N = n1 + n2 + · · · + nn)
6.55 P(Y ≥ 20) = .077
6.65 a f(u1, u2) = (1/(2π))e^(−[u1² + (u2 − u1)²]/2) = (1/(2π))e^(−(2u1² − 2u1u2 + u2²)/2)
b E(U1) = E(Z1) = 0, E(U2) = E(Z1 + Z2) = 0, V(U1) = V(Z1) = 1, V(U2) = V(Z1 + Z2) = V(Z1) + V(Z2) = 2, Cov(U1, U2) = E(Z1²) = 1

Page 11


c Not independent, since ρ ≠ 0.
d This is the bivariate normal distribution with µ1 = µ2 = 0, σ1² = 1, σ2² = 2, and ρ = 1/√2

6.69 a f(y1, y2) = 1/(y1²y2²), y1 > 1, y2 > 1
e No

6.73 a g(2)(u) = 2u, 0 ≤ u ≤ 1
b E(U2) = 2/3, V (U2) = 1/18

6.75 (10/15)^5

6.77 a [n!/(( j − 1)!(k − 1 − j)!(n − k)!)] yj^( j−1)[yk − yj]^(k−1−j)[θ − yk]^(n−k)/θ^n, 0 ≤ yj < yk ≤ θ
b [(n − k + 1) j/((n + 1)²(n + 2))]θ²
c [(n − k + j + 1)(k − j)/((n + 1)²(n + 2))]θ²

6.81 b 1 − e^(−9)
6.83 1 − (.5)^n
6.85 .5
6.87 a g(1)(y) = e^(−(y−4)), y ≥ 4
b E(Y(1)) = 5

6.89 fR(r) = n(n − 1)r^(n−2)(1 − r), 0 ≤ r ≤ 1

6.93 f(w) = (2/3)(1/√w − w), 0 ≤ w ≤ 1

6.95 a fU1(u) = 1/2 for 0 ≤ u ≤ 1; 1/(2u²) for u > 1
b fU2(u) = ue^(−u), 0 ≤ u
c Same as Ex. 6.35.

6.97 p(W = 0) = p(0) = .0512,
p(1) = .2048, p(2) = .3264,
p(3) = .2656, p(4) = .1186,
p(5) = .0294, p(6) = .0038,
p(7) = .0002

6.101 fU(u) = 1, 0 ≤ u ≤ 1; therefore, U has a uniform distribution on (0, 1)

6.103 1/(π(1 + u1²)), −∞ < u1 < ∞

6.105 [1/B(α, β)]u^(β−1)(1 − u)^(α−1), 0 < u < 1

6.107 fU(u) = 1/(4√u) for 0 ≤ u < 1; 1/(8√u) for 1 ≤ u ≤ 9

6.109 P(U = C1 − C3) = .4156;
P(U = C2 − C3) = .5844

Chapter 7

7.9 a .7698
b For n = 25, 36, 49, and 64, the probabilities are (respectively) .8664, .9284, .9642, .9836.
c The probabilities increase with n.
d Yes
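
The part b values track 2Φ(.3√n) − 1, i.e., P(|Ȳ − µ| ≤ .3σ) for a normal sample mean; the .3σ bound is inferred from the numbers, not stated in this key. A check using only the standard library (Φ via math.erf); the tiny discrepancies against .9284 and .9642 come from normal-table rounding:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

for n in (25, 36, 49, 64):
    # P(|Ybar - mu| <= .3*sigma) = 2*Phi(.3*sqrt(n)) - 1  (assumed bound)
    print(n, round(2 * phi(0.3 * sqrt(n)) - 1, 4))  # .8664, .9281, .9643, .9836
```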

7.11 .8664
7.13 .9876
7.15 a E(X̄ − Ȳ) = µ1 − µ2
b V(X̄ − Ȳ) = σ1²/m + σ2²/n
c The two sample sizes should be at least 18.
7.17 P(Z1² + · · · + Z6² ≤ 6) = .57681
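
Z1² + · · · + Z6² is χ² with 6 df, and for even df = 2m the χ² CDF has a closed form, P(χ² ≤ x) = 1 − e^(−x/2) Σ_{k=0}^{m−1} (x/2)^k/k!, which reproduces .57681 exactly:

```python
from math import exp, factorial

def chi2_cdf_even_df(x, df):
    """Chi-square CDF for even df via the closed-form Poisson sum."""
    m = df // 2
    return 1 - exp(-x / 2) * sum((x / 2) ** k / factorial(k) for k in range(m))

print(round(chi2_cdf_even_df(6, 6), 5))  # 0.57681
```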

7.19 P(S2 ≥ .065) = .10
7.21 a b = 2.42

b a = .656
c .95

7.27 a .17271
b .23041
d .40312

7.31 a 5.99, 4.89, 4.02, 3.65, 3.48, 3.32
c 13.2767
d 13.2767/3.32 ≈ 4

7.35 a E(F) = 1.029
b V(F) = .076
c 3 is 7.15 standard deviations above this mean; unlikely value.
7.39 a normal, with E(θ̂) = θ = c1µ1 + c2µ2 + · · · + ckµk and V(θ̂) = (c1²/n1 + c2²/n2 + · · · + ck²/nk)σ²
b χ² with n1 + n2 + · · · + nk − k df
c t with n1 + n2 + · · · + nk − k df

7.43 .9544
7.45 .0548
7.47 153
7.49 .0217
7.51 664
7.53 b Ȳ is approximately normal: .0132.
7.55 a random sample; approximately 1.

b .1271

Page 20


15.49 a .0256
b An unusually small number of runs (judged at α = .05) would imply a clustering of defective items in time; do not reject.

15.51 R = 13, do not reject
15.53 rS = .911818; yes.
15.55 a rS = −.8449887

b Reject
15.57 rS = .6768, use two-tailed test, reject
15.59 rS = 0; p-value < .005

15.61 a Randomized block design
b No
c p-value = .04076, yes

15.63 T = 73.5, do not reject, consistent with
Ex. 15.62

15.65 U = 17.5, fail to reject H0
15.67 .0159
15.69 H = 7.154, reject
15.71 Fr = 6.21, do not reject
15.73 .10

Chapter 16

16.1 a β(10, 30)
b n = 25
c β(10, 30), n = 25
d Yes
e Posterior for the β(1, 3) prior.
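
Posteriors like the β(10, 30) in 16.1 come from the conjugate beta–binomial update: a β(a, b) prior combined with y successes in n binomial trials gives a β(a + y, b + n − y) posterior. A generic sketch; the numbers below are illustrative, not the exercise's data:

```python
def beta_binomial_update(a, b, y, n):
    """Posterior (a', b') for a Beta(a, b) prior after y successes in n trials."""
    return a + y, b + n - y

# Illustrative only: Beta(2, 5) prior, 8 successes in 25 trials.
print(beta_binomial_update(2, 5, 8, 25))  # (10, 22)
```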

16.3 c Means get closer to .4, std dev decreases.
e Looks more and more like a normal distribution.

16.7 a (Y + 1)/(n + 4)
b (np + 1)/(n + 4); np(1 − p)/(n + 4)²

16.9 b (α + 1)/(α + β + Y); (α + 1)(β + Y − 1)/[(α + β + Y + 1)(α + β + Y)]
16.11 e Ȳ[nβ/(nβ + 1)] + αβ[1/(nβ + 1)]

16.13 a (.099, .710)
b Both probabilities are .025
c P(.099 < p < .710) = .95
h Shorter for larger n.

16.15 (.06064, .32665)
16.17 (.38475, .66183)
16.19 (5.95889, 8.01066)
16.21 Posterior probabilities of null and

alternative are .9526 and .0474,
respectively, accept H0.

16.23 Posterior probabilities of null and alternative are .1275 and .8725, respectively, accept Ha.

16.25 Posterior probabilities of null and
alternative are .9700 and .0300,
respectively, accept H0.
