Stat 301 – HW 3

Due midnight, Friday, Jan. 27

 

Please remember to put your name(s) inside the file and, if submitting jointly, to join a HW 3 group first. Please use Word or PDF format only. Remember to integrate your output with your discussion; points will be deducted if output is missing.

 

Due to injury or other medical conditions, people can develop blindness on just one side of their visual field. “Blindsight” is a condition in which people are blind but can still respond to things they cannot consciously see. Because of brain damage, a patient whom researchers called GY (Persaud et al., Nature Neuroscience, 2007) was right-side blind, meaning he could not see anything in his right field of vision.  To test GY for blindsight, the researchers had him face a video monitor and told him that “either a square-wave horizontal grating pattern would be presented in his right visual field (within his scotoma) or no stimulus would be presented. The size, contrast, and position of the stimuli were selected on the basis of previous experiments with GY, so that he would be expected to make present-absent decisions with an accuracy above chance, but would report no visual sensation. He was told that pattern and blank trials would be equally frequent. … He was asked to respond 'yes' if he thought the stimulus had been presented and 'no' if he thought it had not.”  In a set of trials in his blind field, GY correctly made 70% of the yes-no discriminations.

(a) Identify the observational units and variable of interest in this study.

(b) Is it reasonable to model this process as binomial (in other words, how did they/would you try to ensure independence and constant probability of success for this random process)?

(c) Define the parameter in words (in context).

(d) Report the sample proportion, p̂.

 

We can’t decide whether this result is better than random chance without knowing the sample size. 

(e) Suppose the sample size had been n = 10.  Use the One Proportion Inference applet to find the one-sided binomial p-value. Make sure you include a copy of the null distribution for the sample proportion with the p-value shaded and showing the mean and standard deviation. (Recommendation: Verify the SD calculation by hand.)
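
(Optional check: if you want to verify the applet's answer in R, here is a minimal sketch assuming p̂ = 0.70 corresponds to 7 correct answers out of 10 trials.)
pbinom(6, size = 10, prob = 0.5, lower.tail = FALSE)   # exact one-sided p-value: P(X >= 7) when n = 10, pi = 0.5
sqrt(0.5 * 0.5 / 10)                                   # SD of the sample proportion under the null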

(f) Check the Normal Approximation box and report the theory-based p-value for the one-sample z-test.  Would you consider the p-values similar? Does the similarity/lack of similarity of these values surprise you? Explain.
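
(Similarly, a short R check of the theory-based p-value, again assuming p̂ = 0.70 with n = 10:)
z <- (0.70 - 0.50) / sqrt(0.5 * 0.5 / 10)   # standardized statistic
pnorm(z, lower.tail = FALSE)                # one-sided normal-approximation p-value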

 

(g) The actual sample size was n = 200.  Find the one-sided binomial p-value. Make sure you include a copy of the null distribution for the sample proportion with the p-value shaded, and showing the mean and standard deviation. Is this p-value larger or smaller than in (e)? Is this what you expected for this change in sample size? Explain. 
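
(R check, assuming p̂ = 0.70 now corresponds to 140 correct answers out of 200 trials:)
pbinom(139, size = 200, prob = 0.5, lower.tail = FALSE)   # exact one-sided p-value: P(X >= 140) when pi = 0.5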

 

(h) Compare the shape, center, and spread of the two null distributions you created in (e) and (g). Which features are the same/which are different with the change in sample size?

(i) Describe another way the two distributions differ (visually).

 

(j) For n = 200, calculate the standardized statistic for the sample proportion (the “test statistic”).  (Show your work.) Then compare the exact binomial p-value to the normal approximation p-value.  Would you consider the p-values similar? 
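
(A sketch of the calculation in R, assuming p̂ = 0.70 and n = 200; this does not replace showing your work by hand:)
z <- (0.70 - 0.50) / sqrt(0.5 * 0.5 / 200)                # standardized statistic
pnorm(z, lower.tail = FALSE)                              # normal-approximation p-value
pbinom(139, size = 200, prob = 0.5, lower.tail = FALSE)   # exact binomial p-value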

(k) Ok, that was a silly question.  Suppose n = 200 and p̂ = 0.55. Find the exact binomial and theory-based p-values.  Are they similar? Does the similarity/lack of similarity of these values surprise you? Explain.
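
(Optional R check for (k), assuming p̂ = 0.55 corresponds to 110 successes in 200 trials:)
pbinom(109, size = 200, prob = 0.5, lower.tail = FALSE)   # exact binomial p-value
z <- (0.55 - 0.50) / sqrt(0.5 * 0.5 / 200)
pnorm(z, lower.tail = FALSE)                              # theory-based (normal) p-value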

 

(l) The continuity correction is discussed on pp. 58-59.  Use the applet to perform a continuity correction for the calculations in (k).  Does this continuity correction improve the normal approximation to the exact binomial calculation for this situation? (Make sure you are including sufficient output.)
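
(If you also want to verify the applet in R: one common form of the correction moves the observed count half a unit toward the null mean before applying the normal approximation. A sketch, again assuming 110 successes in 200 trials:)
z_cc <- (109.5/200 - 0.50) / sqrt(0.5 * 0.5 / 200)        # continuity-corrected statistic
pnorm(z_cc, lower.tail = FALSE)
prop.test(110, 200, p = 0.5, alternative = "greater", correct = TRUE)$p.value   # same idea via the Yates correction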

 

From the 200 trials (p̂ = 0.70), we have strong evidence that GY’s probability is larger than 0.50.  So what is GY’s actual (long-run) probability of correctly answering? This is what a confidence interval tells us/helps us estimate.

(m) Use R or JMP to find the exact binomial confidence interval.  Interpret the interval in context. (If I don’t give you a confidence level, assume 95%.)
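
(A minimal R sketch, assuming 140 successes in 200 trials; binom.test reports the exact Clopper-Pearson interval:)
binom.test(140, 200)$conf.int   # exact 95% binomial confidence interval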

(n) Repeat (m) for a 95% z-confidence interval.  Are the z-interval and exact binomial confidence intervals similar? Is this what you would expect for these data? Explain.
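
(The z-interval can be computed directly from its formula; a sketch with p̂ = 0.70 and n = 200:)
phat <- 140 / 200
phat + c(-1, 1) * qnorm(0.975) * sqrt(phat * (1 - phat) / 200)   # 95% one-proportion z-interval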

 

(We can discuss on Friday)

(o) Repeat (m) for a 95% adjusted-Wald confidence interval.  Make sure it’s clear how you are doing so.
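
(A sketch assuming the "plus four" version of the adjusted-Wald interval, i.e., add two successes and two failures and then use the z-interval formula:)
ptilde <- (140 + 2) / (200 + 4)
ptilde + c(-1, 1) * qnorm(0.975) * sqrt(ptilde * (1 - ptilde) / (200 + 4))   # 95% adjusted-Wald interval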

(p) Compare the widths of the three confidence intervals you have found. (Use 4 decimal places.) Which is the shortest?
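
(One way to get the three widths to four decimal places in R, reusing the intervals sketched above:)
exact <- binom.test(140, 200)$conf.int
wald  <- 0.70 + c(-1, 1) * qnorm(0.975) * sqrt(0.70 * 0.30 / 200)
adj   <- (142/204) + c(-1, 1) * qnorm(0.975) * sqrt((142/204) * (1 - 142/204) / 204)
round(c(exact = diff(exact), wald = diff(wald), adjusted = diff(adj)), 4)   # interval widths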

 

(q) Consider the one-proportion z-confidence interval in (n). Describe (verify?) how the interval will differ for n = 200 and p̂ = 0.55 (consider the midpoint and the width). Explain.

(r) Consider the one-proportion z-confidence interval in (n). Describe (verify?) how the interval will differ for n = 10 and p̂ = 0.70 (consider the midpoint and the width). Explain.
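
(To verify (q) and (r), you can simply re-run the z-interval formula with the new values; a sketch:)
zci <- function(phat, n) phat + c(-1, 1) * qnorm(0.975) * sqrt(phat * (1 - phat) / n)
zci(0.70, 200)   # interval from (n)
zci(0.55, 200)   # (q): same n, smaller p-hat
zci(0.70, 10)    # (r): same p-hat, much smaller n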

(s) Report the half-width (aka margin of error) for the one-proportion z-confidence interval in (n).  Compare this value to 1/√n. Based on the formula for the margin of error in the 95% one-proportion z-confidence interval, why does this approximation make sense? 
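
(A quick numerical comparison in R:)
qnorm(0.975) * sqrt(0.70 * 0.30 / 200)   # margin of error from (n)
1 / sqrt(200)                            # the short-cut value 1/sqrt(n)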

(t) So a short-cut approximation to the one-proportion z-confidence interval is p̂ ± 1/√n.  If anything, this interval will be wider than it needs to be (is “conservative”).  Why?  (Hint: How does the standard deviation formula change with p̂?)
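
(A quick check of the hint: evaluate sqrt(p̂(1 - p̂)) over a range of p̂ values and note where it is largest.)
phat <- seq(0.05, 0.95, by = 0.05)
round(sqrt(phat * (1 - phat)), 3)   # largest when phat = 0.5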