Are telephone surveys statistically valid? [telecom]

With the coming of the election season, I've been bombarded with telephone survey calls on my landline. No sooner had the primary finished than they started on the fall election. Surveyors have also discovered robo-calling, where we're asked to press a button depending on our answer.

Fortunately, I do not get such calls on my cell phone.

My question: These days, lots of people have abandoned their residential landline. Given that, are landline surveys still statistically valid?

Historically, telephone polls have been wrong before. The 1936 Literary Digest poll, whose sample was drawn largely from telephone directories, predicted Landon would beat FDR. It was wrong: pollsters later realized that, at the time, telephone subscribers skewed heavily Republican, so the sample was skewed.

And of course we all know about the 1948 polling fiasco, where everyone was sure Dewey would win, but Truman squeaked by with a narrow victory.

Reply to
HAncock4

That's because robocalls to cell phones are illegal.

Quality polling operations that are still doing probability sampling (the "gold standard" for polling) will include cell numbers in their samples. This is more expensive because they have to have a live operator on the line to handle the call. Other pollsters use nonprobability samples (like Internet panels) but run less frequent cross-validation polls to ensure that their Internet surveys actually give comparable results.

The standard for survey research is 95% confidence, meaning that the true value should fall within the stated margin of sampling error 19 times out of 20; put another way, the stated interval can miss one time in twenty.
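For the curious, here is a minimal sketch of the arithmetic behind that statement, assuming a simple random sample and the usual normal approximation (real polls also adjust for design effects):

    import math

    def margin_of_error(p, n, z=1.96):
        """Half-width of a 95% confidence interval for a proportion.
        Assumes a simple random sample; z = 1.96 gives 95% confidence
        (right 19 times out of 20, wrong one time in twenty)."""
        return z * math.sqrt(p * (1 - p) / n)

    # A typical national poll: 1,000 respondents, a candidate at 50%.
    print(f"+/- {margin_of_error(0.5, 1000):.1%}")  # about +/- 3.1 points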

-GAWollman

Reply to
Garrett Wollman

Are *any* telephone surveys statistically valid? I see a number of problems (even if they call land lines and cell phones):

- People select themselves out by hanging up when they hear that it's a survey. This is probably correlated with how much they are pestered by telephone solicitors giving "surveys" that are really sales pitches, which in turn relates to economic status.

- Some people won't "talk" to a robocaller, which may be correlated with how often they are targeted by robocallers, which in turn may be correlated with economic status.

- Calling random phone numbers may result in more calls to (relatively) wealthy families with several phone numbers than to (relatively) poorer families with only one phone number (landline or cellular); see the sketch after this list. Of course, some of those extra numbers may belong to teenagers and children, who generally aren't wanted in political polls.

- Certain people on the lower end of the economic spectrum (e.g. the adult who doesn't usually carry the household's only cell phone) may not be reachable by an incoming call unless you know whom to ask for, and maybe not even then, if the phone is at someone's workplace.

- Some people won't give out personal information on the phone, which likely correlates with the person's perception of how much they have to lose by giving it out. Identity theft is a much bigger issue than it was 10-20 years ago.

- Many surveys won't continue if you don't give them personal information (age, sex, family size, income level, political party, etc.); the survey taker simply hangs up. Some surveys apparently require that I tell them whether I'm a man or a woman, even though it's quite obvious from my voice, and even if they have my first name (some names could be male or female; as far as I know, mine isn't one of them). One survey asked whether I had a phone, and I asked in response if they intended to steal it. (They didn't take that as a "yes".) What did this guy think I was using to talk to him? A carrier pigeon?

- Some of the questions are horrible, and I'm likely to interpret them literally, like "What would you say your age is: 18-29, 30-39, 40-49, ...?" The correct answer is that I'd say nothing; I'd *never* say my age as a range.

- There are so many people asking you to take surveys now (including most sales clerks urging you to take the survey on the company website), and so much blatant campaigning for a higher rating (say, "highly satisfied" or "5 stars"), that the word "survey" has gotten a bad image. I don't trust the results of such surveys because of the bribes (often coupons) given for a good rating (particularly bad for the Facebook "like").

- Some people may still have a landline but not necessarily answer incoming calls on it if their cell phone is working. It's there for calling 911 in a power failure or when the cell phone battery is dead, and because the phone company hid the option to order DSL without phone service.

- Surveys that include cell phones can (especially) get the caller's number onto personal block lists which, after a few dozen additions, might start to block a significant number of calls. I don't know the extent to which block lists are exchanged on the Internet. I do know that I can look up a missed call's number on the Internet, find out something about why they're calling, and perhaps block them.

- Some people (cell phone users especially) don't answer calls from numbers they don't recognize. Isn't Caller ID (number) available on pretty much all cell phones? And even if you don't pay for Caller ID (name), a smartphone (or even a not-so-smart phone) looks up the number in your contact list and displays the name.

- I won't allow a survey taker to put words in my mouth. So if I don't like the choices, I'll make up one of my own and stick to it. "If the election were held today, which Presidential candidate would you vote for?" Richard Nixon, deceased. He's much better than any of the live candidates. Smells better, too.
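To make the multiple-numbers point above concrete, here is a toy simulation (all figures invented; this is not any pollster's actual procedure). In random-digit dialing, a household's chance of being reached grows with its number of lines, so any opinion correlated with line count gets overweighted:

    import random
    random.seed(1)

    # Invented population: 300 three-line households, 700 one-line households,
    # with opinion (conveniently) correlated with the number of lines.
    households = ([{"lines": 3, "favor": True}] * 300 +
                  [{"lines": 1, "favor": False}] * 700)

    # Random-digit dialing draws *numbers*, so a household's chance of
    # selection is proportional to how many numbers it has.
    numbers = [h for h in households for _ in range(h["lines"])]
    sample = random.sample(numbers, 200)

    true_share = sum(h["favor"] for h in households) / len(households)
    rdd_share = sum(h["favor"] for h in sample) / len(sample)
    print(f"true share in favor: {true_share:.0%}")  # 30%
    print(f"RDD-sampled share:   {rdd_share:.0%}")   # near 900/1600 = 56%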

Question: Some low-income people (in the USA) can get a subsidized cell phone with a limited calling package for almost nothing, or nothing. Can they still get a subsidized landline? Or does the program give out only cell phones now? Could it be that all of the people with subsidized phones have cell phones only? That would be a significant bias for landline-only surveys, although there are other biases that might partially cancel it out.

Reply to
Gordon Burditt

Probably not. Apparently, back when phone surveys were new, everyone was happy to talk to survey takers. These days, pollsters are lucky to get responses from 5% of the people who answer. No doubt a lot of this is due, as you say, to the vast increase in junk calling, making people much less likely to talk to any stranger on the phone, and distrustful of anyone who claims to be taking a survey. ("If I told you that you'd won a free cruise to the Bahamas, would you be a) amazed, b) thrilled, or c) excited?")

Yes, Lifeline service for fixed phones is still around, but since you only get one Lifeline phone, it might as well be one you can take with you when you're away from home.

R's, John

Reply to
John Levine

Yes. Around here, the going package seems to be about 600 minutes/month, on stupidphones (isn't that the antonym of smartphone?) that do texts but no data.

> Can they still get a subsidized landline? Or does the program give out only cell phones now?

Yes, but you can't get subsidies for both: only one line, whichever it is, per address. There's clearly some central registry, because they check.

> Could it be that all of the people with subsidized phones have cell phones only?

My impression (living in a poor neighborhood) is that subsidized cell phones are far more common, possibly because their carriers actually market them, setting up tables or booths in places where poor people congregate, whereas the LECs don't tell anyone about the service. Of course, a lot of people also find cell phones more convenient, and far easier to deal with if you're moving (or don't have a permanent residence).

Reply to
Dave Garland

The folks at the Pew Research Center still think otherwise. Although they are concerned about the non-participation rate, what it has meant is that they now have to call 20 numbers to get one survey participant rather than 5 as in the days of yore. They have released a lot of statistical information about who responds to surveys, over what modalities, and how their demographics line up with the population as a whole (as determined by the census).

Of course you could argue it another way: they are "statistically valid" by construction; the only question is whether the population being sampled is sufficiently similar to the population of interest to allow for generalization.

-GAWollman

Reply to
Garrett Wollman

If I were them, I'd compare the surveyed population with the US census, which is about as accurate a description of the US population as likely exists.

Reply to
Barry Margolin

That is exactly what they do, as part of the reweighting process. This does, however, mean that the effect of sampling error is magnified for some demographic groups, depending on how poorly those groups are represented in the sample.
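One way to quantify that magnification is Kish's effective-sample-size approximation: heavy weights on an underrepresented group shrink the information the sample actually carries. A minimal sketch, with invented group shares:

    # Kish's approximation: n_eff = (sum of weights)^2 / (sum of squared weights).
    def effective_sample_size(weights):
        return sum(weights) ** 2 / sum(w * w for w in weights)

    # Invented example: 1,000 interviews, but a group that is 20% of the
    # population yielded only 50 respondents (each weighted 0.20*1000/50 = 4.0);
    # the other 950 respondents stand for 80% of the population (weight ~0.84).
    weights = [4.0] * 50 + [0.8 * 1000 / 950] * 950
    print(round(effective_sample_size(weights)))  # about 680, not 1,000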

-GAWollman

Reply to
Garrett Wollman

I don't think that is possible unless survey participation is mandated at gunpoint, and in such a survey you won't get accurate opinions about gun control. I don't think "statistically valid by construction" exists. There are way more dimensions to people than age, sex, and race (and maybe religion), and it's easy to match someone on all of these yet be on opposite sides of an issue. Consider a labor-union strike: in any issue related to the strike, it DOES MATTER whether the person in question is labor (the striking union), labor (another union), management, a disgruntled customer, or uninvolved.

(Did you make sure that the sample matches the population in income? weight? height? education level? number and ages of children? marital status? number of divorces? employer? personality? job title? food allergies? gun ownership? number of abortions? drug use? number of minutes in cell phone plan? favorite sports team? Internet Service Provider? )

Self-selection is a less blatant problem on satisfaction surveys, but it is still an issue. Those with extreme views (pro or con) are more likely to expend effort making sure their survey is counted: if the call drops, they'll try again; they'll keep retrying an overloaded Internet server; they'll bug tech support for a way around the problem. Some guy who thinks he has been cheated out of $10 may be willing to spend $100 to take down the offending merchant (say, by ballot-box stuffing rating surveys with 1-star reviews) even if he's got no chance of getting his money back.

Any survey can be blatantly ruined by stupidity, and that includes limiting your view of the population to age, sex, and race. If self-selection or self-deselection (e.g. hanging up on hearing the word "survey", or putting that word in an email spam filter) is correlated with the questions on the survey, you're in trouble. One of the worst problems is a survey that can only be reached by phone, or by Internet, asking questions related to phone service or Internet service.

Matching the census still doesn't protect you against a HUGE bias caused by the method you use to conduct the survey, especially when that bias has nothing to do with age, sex, or race.

Example: if Trump wants to shut down the Internet (and anyone takes this seriously), and Hillary wants to leave it open, you may well have a strong bias if you take a poll on the Presidential race over the Internet. Network neutrality is a similar issue. That's a problem for Presidential race polls that hasn't existed in the past.

I recall a discussion of a campus-wide student opinion poll at RPI for some purpose, which it was proposed to conduct by phone. (The Internet didn't exist yet.) Several problems were listed, including:

  1. Freshmen did not have phones. (True when I was there; cell phones were pretty much nonexistent in the early 1970s. A dorm holding maybe 100 freshmen had 3 pay phones near the common room, and the numbers to call *IN* were known to only a few, which excluded most freshman dorm residents.) Even today a phone poll may not be feasible: presumably nearly every student has a cell phone, but is there a phone book of campus residents? Another problem, if the poll were supposed to represent something beyond the limits of the campus:
  2. The female RPI engineering student was so rare that less than half of the sophomore class claimed to have seen one, ever. (Someone actually did a poll on this; remember, early 1970s.) It was easy to stand up in an Electrodynamics class, look around, and identify a woman or two, but if "engineering student" required "an engineering major", you'd have to approach them with "Hi, what's your major?", and if male engineering students could do that, they'd be talking about something else. You also couldn't be sure those women weren't girlfriends or parents of class members they were sitting next to, not enrolled at RPI at all. If this was supposed to be a campus-wide poll, fine: women were rare, and the survey reflects that. If it was supposed to predict a national political race, women as voters are NOT rare, and that's a problem for the survey.

The basic 2010 census didn't ask whether you had a phone, or what kind of phone. Nor did it ask whether you had an indoor toilet, or whether you were a Democrat, Republican, Tea Party, or Weed Party voter. It didn't ask about sexual preference, if any, which restroom you used, or what it said on your birth certificate, if any. It didn't ask whether you were a citizen or in the USA legally. There may have been longer forms that asked about phones, but the basic 2010 census form was pretty short, and seemed mostly aimed at a total count and the distribution of age/sex/race.

The census may be a great thing for checking that your survey ends up representing the population by age, sex, and race (whatever sex and race are: census takers were supposed to encourage people to give answers on the list, but if someone insisted they were Romulan, take them at their word, and do NOT look at them and try to guess what they are. If they said they were a woman, write it down, regardless of how much they looked like a man dressed as a woman). It doesn't help with political party affiliation. It doesn't help much with weighting the left and right "wings" of each party properly. And those "wings" are multi-dimensional: a person may be a fiscal conservative and a liberal on social issues.

Suppose that about 48% of the population is in favor of the death penalty for making an unsolicited survey call, 48% is against it, and 4% is undecided (40% undecided among those who don't have a phone), so it's a dead heat. Suppose you call a whole bunch of people asking whether they favor or oppose such a law, and whether they have a phone. What kind of results would you expect to get?

I think you'd end up with, out of the people who actually responded to the survey, about 20% in favor and 80% against the law (survey writers tend not to allow "undecided" as a choice, which I think is a big mistake, although most political polls do allow it), and about 100.000000% of them have a phone. Among those who hung up, there might be 80% in favor and 20% against, and 100.000000% of these people also have a phone, but you get no survey results from them.

That survey got opinion on the law badly wrong, and missed the don't-have-a-phone people entirely (yes, a very small number, but not zero, and you might wrongly conclude that Lifeline service was already helping everyone who needed it and doesn't need a bigger budget). It failed because of sampling bias, but not a sampling bias that checking against the census could warn you about.
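A toy simulation of that thought experiment (the response rates are invented to roughly match the numbers above) shows how differential nonresponse flips the result while the sample still matches the population perfectly on "has a phone":

    import random
    random.seed(42)

    # Invented population matching the numbers above:
    # 48% favor, 48% against, 4% undecided.
    population = (["favor"] * 480 + ["against"] * 480 + ["undecided"] * 40) * 100

    # Invented response rates: supporters of executing survey callers
    # mostly hang up on survey callers; opponents mostly answer.
    response_rate = {"favor": 0.02, "against": 0.08, "undecided": 0.04}
    responses = [p for p in population if random.random() < response_rate[p]]

    for opinion in ("favor", "against", "undecided"):
        print(f"{opinion:>9}: {responses.count(opinion) / len(responses):.0%}")
    # favor comes out near 20% and against near 80%, although the population
    # is a dead heat -- and 100.000000% of respondents have a phone.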

Reply to
Gordon Burditt

Certainly it does. If you take a random sample of a population, statistics gives you limits on the error when you generalize from the sample to the population. As I indicated, this does not imply that the population you sampled is representative of any other population, including the population you *intended* to sample.

People who do serious survey work put a lot of effort into validating their samples to make sure that they actually *are* representative of the population. Pew, for example, regularly does studies looking into the effect of different survey modalities on the sorts of questions they are interested in.

In any sample large enough to generalize meaningfully about public opinion, all of these groups have a high probability of being represented in proportion to their overall prevalence in the population. (That's practically the definition of "sufficiently large sample".)
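A quick back-of-the-envelope check of that claim, using the normal approximation to the binomial (the prevalence and sample size here are hypothetical):

    import math

    def prob_within(p, n, delta):
        """P(a group with prevalence p lands within p +/- delta of its true
        share in a simple random sample of n); normal approximation."""
        se = math.sqrt(p * (1 - p) / n)
        return math.erf((delta / se) / math.sqrt(2))

    # A group that is 10% of the population, in a sample of 1,000:
    # chance its sample share falls within 2 points of 10%.
    print(f"{prob_within(0.10, 1000, 0.02):.0%}")  # about 96%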

The issue of non-response bias is a serious one, and may eventually spell the end of random-digit-dial surveys for opinion research. However, non-response bias is, today, still believed to be something that can be managed with fairly crude demographic reweighting.

A great deal of marketing research no longer uses telephone surveys, because the cost is not considered worth the marginal improvement in generalizability compared to the ease of running Internet-based pre-screened panels. See Pew's report on this. (Pew in general does a fabulous job of explaining and validating their survey methods.)

-GAWollman

Reply to
Garrett Wollman

The question is: which 5% do they get to answer? If they can work out a good cross-section of the population from that 5%, that's one thing. If, as I suspect, it's mostly people with nothing else to do who tend to answer, then it's very difficult to weight your sample accurately.

--scott

Reply to
Scott Dorsey

That's what they do. They make categories of people, put individual respondents into those categories based on where they live and how they answer some demographic questions, then weight the answers in each category by the number of people in that category in the census (or a similar population description).
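A minimal sketch of that cell-weighting step, with invented categories and answers (real pollsters weight on several dimensions at once, often by iterative raking rather than this single pass):

    # Invented sample of (category, answer) pairs.
    sample = [("18-29", "yes"), ("18-29", "no"),
              ("30-64", "yes"), ("30-64", "yes"), ("30-64", "no"),
              ("65+", "no"), ("65+", "no"), ("65+", "no"),
              ("65+", "yes"), ("65+", "no")]

    # Invented census shares for the same categories.
    population_share = {"18-29": 0.20, "30-64": 0.60, "65+": 0.20}

    n = len(sample)
    sample_share = {c: sum(cat == c for cat, _ in sample) / n
                    for c in population_share}
    weight = {c: population_share[c] / sample_share[c] for c in population_share}

    raw_yes = sum(ans == "yes" for _, ans in sample) / n
    weighted_yes = sum(weight[cat] for cat, ans in sample if ans == "yes") / n
    print(f"raw yes: {raw_yes:.0%}, weighted yes: {weighted_yes:.0%}")  # 40% -> 54%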

Several problems come in: first, you may have people with very different voting patterns winding up in the same category. Second, if very few people in one category respond, each answer in that category counts for a lot and the noise floor rises. Third, the actual population may not be the same as the census population.

Now, for election predictions it gets even more fun, because the people who go out to vote are not an even cross-section of the population. So the first thing the election prognosticators need to do is figure out just who is going to vote and who is not, so they can figure out what the population weights need to be. This turns out to be more difficult than expected sometimes.
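A toy version of that extra step (the candidate labels and turnout propensities are invented): scale each respondent by an estimated probability of voting before tallying, and the likely-voter result can differ from, or even reverse, the raw one:

    # Invented respondents: (candidate preference, estimated turnout probability).
    respondents = [("A", 0.9), ("A", 0.9), ("A", 0.9),  # fewer, likelier to vote
                   ("B", 0.5), ("B", 0.5), ("B", 0.5), ("B", 0.5)]

    def share(candidate, use_weights):
        weights = [(w if use_weights else 1.0) for _, w in respondents]
        favor = [w for (c, _), w in zip(respondents, weights) if c == candidate]
        return sum(favor) / sum(weights)

    for cand in ("A", "B"):
        print(cand, f"raw {share(cand, False):.0%}",
              f"likely-voter {share(cand, True):.0%}")
    # Raw: A 43%, B 57.  Likely-voter weighted: A 57%, B 43%.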

--scott

Reply to
Scott Dorsey
