How biases can arise in sampling
In that regard, he succeeded. Funk, editor of The Literary Digest, responded as follows: Gallup should "confine his political crystal-gazing to the offices of the American Institute of Public Opinion and leave our Literary Digest and its figures politely and completely alone. We've been through many poll battles. We've been buffeted by the gales of claims and counter-claims. But never before has any one foretold what our poll was going to show before it was even started."
There certainly is nothing to justify this kind of precision in divvying up error among specific causes.
Dr. George Gallup and the Literary Digest Poll
It's convenient to use the spectacular failure of their straw poll as a cautionary tale, but without any real documentation of what they did, or data to analyze, any attribution of specific causality is little more than speculation. Here are my comments. I am not replacing one myth with another, as you seem to suggest. I also agree when you say that it was meant to "generate publicity for Gallup's fledgling organization."
The statement has to be analyzed within an overall strategy to dominate the opinion polling domain.
The 1936 Election: Case Study I
They wanted to be top dogs and knock the Digest off its pedestal. They used the Digest incident, especially in the early years, to promote what they considered to be a superior "product". Of course, they were to receive a nasty shock in 1948, although it was not fatal to them, as it was for the Digest.
First, the data in their raw form require some editing, because the file contains some anomalies; for example, you might have noticed anomalies in Squire's Table 2.
Squire analyzed the data at face value.
In its August 22, 1936, issue, The Literary Digest announced: Once again, [we are] asking more than ten million voters -- one out of four, representing every county in the United States -- to settle November's election in October. Next week, the first answers from these ten million will begin the incoming tide of marked ballots, to be triple-checked, verified, five-times cross-classified and totaled.
When the last figure has been totted and checked, if past experience is a criterion, the country will know to within a fraction of 1 percent the actual popular vote of forty million [voters]. There were two basic causes of the Literary Digest's downfall. The first major problem with the poll was the selection process for the names on the mailing list, which were taken from telephone directories, club membership lists, lists of magazine subscribers, etc.
Such a list is guaranteed to be slanted toward middle- and upper-class voters, and by default to exclude lower-income voters. One must remember that in 1936, telephones were much more of a luxury than they are today.
Furthermore, at a time when there were still 9 million people unemployed, the names of a significant segment of the population would not show up on lists of club memberships and magazine subscribers. At least with regard to economic status, the Literary Digest mailing list was far from being a representative cross-section of the population. This is always a critical problem, because voters are generally known to vote their pocketbooks, and it was magnified in the 1936 election, when economic issues were preeminent in the minds of the voters.
This sort of sample bias is called selection bias. The second problem with the Literary Digest poll was that out of the 10 million people whose names were on the original mailing list, only about 2.4 million responded. Thus, the size of the sample was about one-fourth of what was originally intended. People who respond to surveys are different from people who don't, not only in the obvious way (their attitude toward surveys) but also in more subtle and significant ways.
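The selection-bias mechanism can be sketched in a few lines of Python. All the numbers below are invented for illustration (they are not the actual 1936 figures); the point is only that a frame that over-represents one group shifts the estimate even if every listed person responds.

```python
# Invented illustrative numbers -- not the actual 1936 figures.
p_roosevelt_listed = 0.40    # assumed support among voters on the mailing list
p_roosevelt_unlisted = 0.70  # assumed support among voters absent from the list
listed_share_of_electorate = 0.50  # half the electorate appears on such lists
listed_share_of_frame = 1.00       # but the frame contains *only* listed voters

# True support in the full electorate:
true_support = (listed_share_of_electorate * p_roosevelt_listed
                + (1 - listed_share_of_electorate) * p_roosevelt_unlisted)

# What a perfectly executed poll of the frame would report:
frame_estimate = (listed_share_of_frame * p_roosevelt_listed
                  + (1 - listed_share_of_frame) * p_roosevelt_unlisted)

print(round(true_support, 2))    # 0.55
print(round(frame_estimate, 2))  # 0.4 -- biased low, even with a 100% response rate
```

The gap between the two printed numbers is pure selection bias: no amount of careful tallying of the returned ballots can recover the unlisted voters' opinions.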
Gallup based his claim on a survey of people who were selected at random from the same lists (the sample frame) used by The Literary Digest.
Gallup used the same method as The Literary Digest to collect information — by mail. Gallup got enormous publicity.
The population that Gallup and his colleagues wanted to survey was the population of voters who were to be surveyed by The Literary Digest. For this they had a very well-defined sample frame: the lists used by the Digest itself. Gallup sampled randomly from these lists.

Problems with the response rate

When sampling human populations in particular, the willingness of invited participants to respond to the invitation and participate in the study can be an important issue.
Increasingly, targeted potential respondents are unwilling to participate. The response rate is the percentage of targeted potential respondents who actually respond. In sampling from a human population, there are many reasons why a sampled unit might not participate in the study. If the details in the sample frame are incorrect, it may be impossible to contact the individual. Contact details might be correct, but busy individuals may be hard to reach.
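The response-rate definition above is simple enough to write down directly. As a worked example, a sketch using the Literary Digest's widely cited figures of roughly 10 million ballots mailed and about 2.4 million returned:

```python
def response_rate(responded, targeted):
    """Percentage of targeted potential respondents who actually responded."""
    return 100.0 * responded / targeted

# The Literary Digest: roughly 10 million ballots mailed, about 2.4 million returned.
print(response_rate(2_400_000, 10_000_000))  # 24.0
```

A 24% response rate means three out of four targeted voters are represented only by silence, which is what opens the door to the non-response bias discussed below.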
People might refuse to participate for a wide range of reasons. This is an important issue in human surveys; it can be less problematic in surveys of other entities. In Gallup's news report of 31 October 1936, where he gave the result of the final pre-election poll by the American Institute of Public Opinion, he stated that "the number of ballots distributed in the poll was …". There is little information available about the response rate for the American Institute of Public Opinion poll.
It seems that the response rate to the American Institute of Public Opinion election poll must have been lower than that for the Literary Digest poll.
Gallup would have needed about 75 responses for a response rate comparable to that of The Literary Digest. Why is The Literary Digest's response rate so often highlighted as a concern? One reason is that, when there is a poor response rate, there is a potential for non-response bias.
We discuss this next.

Non-response bias

People who have been randomly selected to be part of a survey but refuse the invitation to participate can be different from the people who agree to participate. Non-response bias occurs when the people who respond to the survey are different, on average, from those who do not.
More generally, the units in a sample that cannot be contacted and the units in the sample that can be contacted may differ in important ways that relate to the purpose of the survey. Often, unfortunately, the possibility for non-response bias is ignored.
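A small simulation makes the point concrete. The setup below is entirely hypothetical: assume 55% of voters favor candidate A, but A's supporters return their ballots only 20% of the time versus 60% for everyone else. Mailing more ballots then tightens the poll around the wrong answer.

```python
import random

random.seed(1)

# Hypothetical parameters, chosen only to illustrate non-response bias.
P_A = 0.55        # true share of voters favoring candidate A
RESPOND_A = 0.20  # assumed response rate among A's supporters
RESPOND_B = 0.60  # assumed response rate among everyone else

def poll(n_mailed):
    """Mail n ballots to a random sample of voters; tally only those returned."""
    returned_a = returned_b = 0
    for _ in range(n_mailed):
        favors_a = random.random() < P_A
        responds = random.random() < (RESPOND_A if favors_a else RESPOND_B)
        if responds:
            if favors_a:
                returned_a += 1
            else:
                returned_b += 1
    return returned_a / (returned_a + returned_b)

for n in (1_000, 100_000):
    print(n, round(poll(n), 3))
# Both estimates land around 0.29, far from the true 0.55. The larger
# mailing shrinks the random scatter but leaves the bias untouched.
```

With these assumed rates, the expected estimate among responders is 0.55 x 0.20 / (0.55 x 0.20 + 0.45 x 0.60), which is about 0.29, regardless of how many ballots go out.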
The lower the response rate, the more scope there is for non-response bias. If a large proportion of a sample fails to respond, having a large sample will not help: the bias remains no matter how large the sample is.

Exercise 5

Consider carrying out a long-term study of drug and alcohol use in young men, where you select a random sample of boys aged 13 to 16 and intend to follow them up several times into adulthood.
How might you select the sample of boys? What issues might arise in following these adolescents over time?