Math 3339
Section 19377
MWF 10-11am SR 116
(and section 20582 – online)
Bekki George
bekki@math.uh.edu
639 PGH
Office Hours:
11:00 - 11:45am MWF or by appointment
Section 6.1 - Some General Concepts of Point Estimation
Definition
A point estimate of a parameter θ is a single number that can be regarded
as a sensible value for θ.
A point estimate is obtained by selecting a suitable statistic and computing
its value from the given sample data. The selected statistic is called the
point estimator of θ.
Example: Consider the accompanying 20 observations on dielectric
breakdown voltage for pieces of epoxy resin.
24.46 25.61 26.25 26.42 26.66 27.15 27.31 27.54 27.74 27.94
27.98 28.04 28.28 28.49 28.50 28.87 29.11 29.13 29.50 30.88
A normal probability plot of these data is quite straight, so
we now assume that the distribution of breakdown voltage is normal with
mean value µ.
Because normal distributions are symmetric, µ is also the median of the
distribution.
The given observations are then assumed to be the result of a random
sample X1, X2, . . . , X20 from this normal distribution.
Consider the following estimators and resulting estimates for µ:
a. Estimator = X̄ (the sample mean), estimate = x̄ = Σxi/n = 555.86/20 = 27.793
b. Estimator = X̃ (the sample median), estimate = x̃ = (27.94 + 27.98)/2 = 27.960
c. Estimator = [min(Xi) + max(Xi)]/2, the average of the two extreme observations,
   estimate = [min(xi) + max(xi)]/2 = (24.46 + 30.88)/2 = 27.670
d. Estimator = X̄tr(10), the 10% trimmed mean (discard the smallest and largest
   10% of the sample and then average the rest),
   estimate = x̄tr(10) = 27.838
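These four estimates can be reproduced directly from the data. A minimal Python sketch (the array name and the use of NumPy are mine, not part of the notes):

import numpy as np

# The 20 breakdown-voltage observations from the example above
x = np.array([24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54, 27.74, 27.94,
              27.98, 28.04, 28.28, 28.49, 28.50, 28.87, 29.11, 29.13, 29.50, 30.88])

xbar     = x.mean()                  # (a) sample mean, 27.793
xmed     = np.median(x)              # (b) sample median, 27.960
midrange = (x.min() + x.max()) / 2   # (c) average of the two extremes, 27.670
xsorted  = np.sort(x)
xtr10    = xsorted[2:-2].mean()      # (d) 10% trimmed mean: drop the 2 smallest
                                     #     and 2 largest of the 20 values, 27.838

print(xbar, xmed, midrange, xtr10)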
Each one of the estimators (a)–(d) uses a different measure of the center of
the sample to estimate µ. Which of the estimates is closest to the true
value?
We cannot answer this without knowing the true value.
A question that can be answered is, “Which estimator, when used on other
samples of Xi’s, will tend to produce estimates closest to the true value?”
We will shortly consider this type of question.
Definition
A point estimator θˆ is said to be an unbiased estimator of θ if E(θˆ) = θ
for every possible value of θ. If θˆ is not unbiased, the difference E(θˆ) – θ
is called the bias of θˆ.
That is, θˆ is unbiased if its probability (i.e., sampling) distribution is always
“centered” at the true value of the parameter.
Proposition
When X is a binomial rv with parameters n and p, the sample proportion
pˆ = X / n is an unbiased estimator of p.
No matter what the true value of p is, the sampling distribution of the
estimator pˆ will be centered at the true value p.
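As an illustration (a simulation sketch of mine, not part of the notes; n = 50 and p = 0.3 are arbitrary), the average of pˆ over many samples settles at p:

import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 0.3                          # arbitrary illustrative values
X = rng.binomial(n, p, size=100_000)    # many binomial counts X
p_hat = X / n                           # the corresponding sample proportions
print(p_hat.mean())                     # approximately 0.3, consistent with E(p_hat) = p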
Proposition
Let X1, X2, . . . , Xn be a random sample from a distribution with mean µ and
variance σ².
Then the estimator
S² = Σ(Xi − X̄)² / (n − 1)
is unbiased for estimating σ².
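A quick simulation sketch (mine, with arbitrary µ, σ², and n) illustrates why the divisor must be n − 1 rather than n:

import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n = 5.0, 4.0, 10                   # arbitrary illustrative values
samples = rng.normal(mu, np.sqrt(sigma2), size=(200_000, n))
s2_unbiased = samples.var(axis=1, ddof=1)      # divides by n - 1
s2_biased   = samples.var(axis=1, ddof=0)      # divides by n
print(s2_unbiased.mean())   # approximately 4.0 = sigma^2
print(s2_biased.mean())     # approximately 3.6 = sigma^2 * (n - 1)/n, biased low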
Proposition
If X1, X2, . . . , Xn is a random sample from a distribution with mean µ, then
X̄ is an unbiased estimator of µ. If in addition the distribution is continuous
and symmetric, then the sample median X̃ and any trimmed mean are also
unbiased estimators of µ.
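This can be checked by simulation; the sketch below (mine, using an arbitrary normal population) averages the three estimators over many samples of size 20:

import numpy as np

rng = np.random.default_rng(2)
mu, n = 10.0, 20                                   # arbitrary illustrative values
samples = rng.normal(mu, 2.0, size=(100_000, n))   # a symmetric (normal) population
means   = samples.mean(axis=1)
medians = np.median(samples, axis=1)
xsorted = np.sort(samples, axis=1)
trimmed = xsorted[:, 2:-2].mean(axis=1)            # 10% trimmed mean (drop 2 from each end)
print(means.mean(), medians.mean(), trimmed.mean())  # each is approximately 10.0 = mu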
Definition
The standard error of an estimator θˆ is its standard deviation σθˆ = √V(θˆ).
It is the magnitude of a typical or representative deviation between an
estimate and the true value of θ.
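For example, for the sample mean, V(X̄) = σ²/n, so σX̄ = σ/√n; when σ is unknown it is estimated by s/√n. For the breakdown-voltage data this works out to roughly 1.462/√20 ≈ 0.33, as the short sketch below (mine) confirms:

import numpy as np

x = np.array([24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54, 27.74, 27.94,
              27.98, 28.04, 28.28, 28.49, 28.50, 28.87, 29.11, 29.13, 29.50, 30.88])
s  = x.std(ddof=1)           # sample standard deviation, about 1.462
se = s / np.sqrt(len(x))     # estimated standard error of the sample mean, about 0.327
print(s, se)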
EMCF 17
1. The sampling distribution of a statistic is
a. The probability that we obtain the statistic in repeated random samples
b. The mechanism that determines whether randomization was effective
c. The distribution of values taken by a statistic in all possible samples of
the same sample size from the same population
d. The extent to which the sample results differ systematically from the truth
2. A statistic is said to be unbiased if
a. The survey used to obtain the statistic was designed so as to avoid even
the hint of racial or sexual prejudice
b. The mean of its sampling distribution is equal to the true value of the
parameter being estimated
c. Both the person who calculated the statistic and the subjects whose
responses make up the statistic were truthful
d. It is used for honest purposes only
3. The number of undergraduates at Johns Hopkins University is approximately
2000, while the number at Ohio State University is approximately
40,000. At both schools a simple random sample of about 3% of the
undergraduates is taken. Which of the following is the best conclusion?
a. The sample from Johns Hopkins has less sampling variability than that
from Ohio State
b. The sample from Johns Hopkins has more sampling variability than that
from Ohio State
c. The sample from Johns Hopkins has almost the same sampling variability
as that from Ohio State
d. It is impossible to make any statement about the sampling variability of
the two samples since the students surveyed were different.
4. The sample statistic x̄ is the point estimate of
a. the population standard deviation σ.
b. the population median.
c. the population mean µ.
d. the population mode.
5. Choose A