
Nationally Representative Data is (sometimes) Bad Data for Psychology


We psychologists hit a wall when we try to publish in sociology or political science journals: Our data is almost never obtained from nationally representative samples. We usually obtain convenience samples, either of undergraduate students at our universities, or from visitors to a website (such as YourMorals.org). These two methods invariably produce samples that are much better educated than would be obtained from nationally representative samples such as those used by the major polling organizations. Sociologists and political scientists therefore think that our findings are ungeneralizable and useless.

But the truth is more complex, and in some ways, our data is better than theirs. In an article just published in Public Opinion Quarterly, Linchiat Chang and Jon Krosnick report the results of a national field experiment in which they collected responses to a complex survey on political knowledge and voting behavior using three parallel methods:

  1. Representative-Telephone: They used Random Digit Dialing to obtain a nationally representative sample, which was then interviewed by a live interviewer (this was supposedly the gold standard).
  2. Representative-Internet: They hired Knowledge Networks to administer the survey. KN has painstakingly assembled a panel of people that is more or less nationally representative. Many of these people were given free Web-TV to allow them to connect to the internet. They are paid to respond to specific studies that they do not select.
  3. Volunteer-Internet: They used a collection of volunteers who had signed up with the Harris Poll Online in response to advertisements on the search engine Excite.com. Harris then emailed invitations to a subset of these volunteers in an effort to approach representativeness by age, sex, and region of the country, but because each person chooses whether to respond to each invitation, based on its content, the final sample was not at all representative.

Chang and Krosnick found–not surprisingly–that methods 1 and 2 yielded data that was more representative of the US population than did method 3. So if what you really need to know is the percentage of Americans who believe X, then you should do your darndest to assemble a nationally representative sample.

But psychologists rarely want to know the percentage of Americans who believe X. We’re trying to figure out how minds work, particularly different kinds of minds, or minds that have just been exposed to different primes or procedures. We need participants who can read and understand directions and then answer questions honestly and thoughtfully. Chang and Krosnick found that the Volunteer-Internet method yielded the highest quality data.

Data collected by telephone was the lowest quality data: these participants showed more random measurement error, more survey satisficing (just giving any answer to keep moving), and more social desirability bias (because they were talking to a live person).

Data collected by internet was much better: the internet has so many advantages over the telephone, such as the fact that people can see the questions and response choices — they don’t have to hold it all in their heads, as in a telephone survey. But the best data of all came from the Volunteer-Internet sample–lowest rates of random error and satisficing–because they were, on average, more interested in the topic, more motivated to do a good job, more experienced with internet usage and web surveys, and (most likely) somewhat smarter than the Representative-Internet population.

And here’s another problem with probability samples: they are so expensive to obtain that the sample sizes are usually not very large–typically just one or two thousand–and the researcher is usually forced to use very short instruments, often just a single item, to measure the key constructs. In contrast, on a volunteer site such as YourMorals.org, where people come because they are interested, we can use much longer and better instruments, and we can collect data from tens of thousands of people.
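To make the measurement point concrete, here is a minimal Python sketch (the numbers are hypothetical, not taken from Chang and Krosnick) using two standard psychometric formulas: the Spearman-Brown formula for the reliability of a longer scale, and the classic attenuation formula showing how unreliable measures shrink an observed correlation.

    # Hypothetical illustration: why longer instruments matter for detecting
    # a correlation, regardless of how the sample was drawn.

    def spearman_brown(single_item_reliability, n_items):
        """Reliability of a scale built from n parallel items (Spearman-Brown)."""
        r = single_item_reliability
        return n_items * r / (1 + (n_items - 1) * r)

    def attenuated_r(true_r, reliability_x, reliability_y):
        """Correlation we expect to observe, after attenuation by measurement error."""
        return true_r * (reliability_x * reliability_y) ** 0.5

    true_r = 0.40           # hypothetical true correlation between two constructs
    single_item_rel = 0.45  # hypothetical reliability of a one-item measure

    for n_items in (1, 5, 20):
        rel = spearman_brown(single_item_rel, n_items)
        print(f"{n_items:2d} items: reliability = {rel:.2f}, "
              f"observed r = {attenuated_r(true_r, rel, rel):.2f}")

With these made-up numbers, a single item lets you observe only about half of the true correlation (roughly .18 instead of .40), while a 20-item scale recovers nearly all of it; no amount of extra sampling rigor fixes that loss.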

The bottom line is that each method of data collection has advantages and disadvantages. There are times when psychologists want to know the percent of Americans who believe X (as in a survey about explicit attitudes about affirmative action); in such cases they should try to obtain a representative sample. But in the great majority of cases our questions are different. If we want to know how explicit attitudes about race relate to implicit attitudes, we’re asking a question about mental mechanisms, not about population distributions. We’re probably better off using a Volunteer-Internet sample, such as YourMorals.org, or ProjectImplicit.org.

Of course, generalizability matters. If we’re going to make claims about differences between liberals and conservatives from data collected at YourMorals, then we’ve got to be confident that those claims will generalize to liberals and conservatives more broadly. One simple way we do this is to track participants by the site that referred them to YourMorals. If our results hold up for people who came from a social conservative site, a libertarian site, a scientist’s site, and a mainstream media site, then we have good reason to believe that they will generalize quite broadly. (Ravi Iyer has done such an analysis, and demonstrated robustness across referral sites.)
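Concretely, such a check can be as simple as re-estimating the effect within each referral-site subgroup. The sketch below is hypothetical (the file and column names are made up, not YourMorals’ actual data pipeline), but it shows the shape of the analysis:

    # Hypothetical robustness check: does a liberal/conservative difference
    # hold within each referral-site subgroup?
    # Assumes a CSV with made-up columns: referral_site, politics, purity_score
    import pandas as pd

    df = pd.read_csv("yourmorals_responses.csv")  # hypothetical file name

    for site, group in df.groupby("referral_site"):
        means = group.groupby("politics")["purity_score"].mean()
        diff = means["conservative"] - means["liberal"]
        print(f"{site:30s} n={len(group):6d}  conservative minus liberal = {diff:+.2f}")

If the difference has the same sign and a similar size whether participants arrived from a social-conservative site, a libertarian site, a scientist’s blog, or a mainstream-media site, it is unlikely to be an artifact of who happens to find YourMorals.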

A second method of establishing generalizability is to attempt to replicate our findings in a nationally representative sample. However, such replication may be prohibitively expensive (if the task takes more than 10 minutes), and a failure to replicate would be inconclusive (it may simply be a Type-II error due to the lower interest, motivation, or ability of the participants. The effect might still be generalizable; it just can’t be found in lower-quality data).
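That worry about Type-II errors can be illustrated with a small simulation (a sketch with made-up numbers, not an analysis of any actual replication attempt): hold the true effect constant, add the extra response noise you would expect from less interested, satisficing respondents, and watch the detection rate fall.

    # Hypothetical simulation: the same true effect is detected far less often
    # when responses are noisier, as with less motivated respondents.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.3   # hypothetical raw mean difference between two groups
    n_per_group = 200
    n_sims = 2000

    def detection_rate(noise_sd):
        hits = 0
        for _ in range(n_sims):
            a = rng.normal(0.0, noise_sd, n_per_group)
            b = rng.normal(true_effect, noise_sd, n_per_group)
            if stats.ttest_ind(a, b).pvalue < 0.05:
                hits += 1
        return hits / n_sims

    print("careful, motivated respondents (sd = 1.0):", detection_rate(1.0))
    print("noisy, satisficing respondents (sd = 2.0):", detection_rate(2.0))

With these numbers the effect is found most of the time in the cleaner data and only a minority of the time in the noisier data, even though the true effect is identical in both. A failed replication in the noisier sample therefore says little about whether the effect is real.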

There’s an old saw in the social sciences: Psychologists can’t sample, sociologists can’t measure. Both skills are valuable, but for psychologists, good measurement is usually more important than good sampling.

