"Random" survey's

News: "Random" survey's

Change the way HH city does business… "random" surveys should not be "random" if residents who want to participate in a survey are told they cannot because the survey is random. OPEN government is not conducting a random survey and then using the results of that survey to make policy! Let's see an open call to ALL residents (taxpayers and others), property owners and renters alike!

From my college psychology textbook (circa 1988), Southern New Hampshire University:

Common problems with random surveys.
1.) Set questions with set answers (meaning the person taking the survey has to pick a certain answer even if it does not fit how they feel on the subject).

2.) No way to expand on your answers. When I am asked a question, I do not like to give just a yes or no answer; I like to state the reasoning behind my decisions and my thought process.

3.) When giving surveys, you have to make sure they are being answered by a variety of people. I find most surveys are limited to a certain group, i.e., housewives, working mothers, students, the middle class, etc. To get the most out of a survey it has to be offered to everyone, from every background, race, nationality, and religious and political affiliation, and this is a daunting task.

4.) Incentive to take the survey. If the survey takes more than a few minutes, then people will not complete it or even start it. Time is valuable and we all have so little of it.

5.) I know personally from taking a few surveys that it is aggravating not to know what was concluded from the survey. If I have taken time out of my day to answer the survey, at the very least I could be informed of the results.

6.) With surveys, or really any other test, you can never be sure whether you are being given the truthful answer or the answer the survey taker thinks you want to hear, so scientifically they provide no real data.

Here are some more:

Checklist of Potential Problems with Random Surveys
A. Sampling Procedures
1. Is the sample a haphazard (nonprobability) sample or some variant of a probability (random) sample?

Examples of haphazard samples are "person-on-the-street" interviews, letters to the editor, call-in polls, "straw" polls, the Literary Digest poll, etc. Problems include bias and nonrepresentativeness.

Probability samples give each individual from the population an equal chance of being selected. They allow for generalizability with some degree of sampling error.
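To make "equal chance of being selected" concrete, here is a minimal Python sketch of a simple random sample; the household list and the sample size of 600 are illustrative assumptions, not the city's actual sampling frame.

```python
import random

# Hypothetical sampling frame: in a real survey this would be the city's full
# list of households or registered addresses (an assumption for this sketch).
population = [f"household_{i}" for i in range(1, 20001)]

random.seed(1)  # seeded only so the example is repeatable
sample = random.sample(population, k=600)  # every household has the same 600/20000 chance

print(f"Selected {len(sample)} of {len(population)} households, e.g. {sample[:3]}")
```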

2. What is the size of the sample? What is the "sampling error," or the "accuracy level" of the survey and how does this affect the interpretation of the survey findings?

Smaller samples (especially less than about 600 respondents) begin to yield intolerably high levels (4% and higher) of sampling error: the error or inaccuracy in being able to generalize from sample results to the population. For example, for a sample size of 600 and a sampling error of + or - 4%, if we find that 50% of the respondents in the sample prefer candidate X to candidate Y, this actually means that we are relatively certain (there is a 95% probability) that between 46% and 54% of the American public prefer candidate X to Y.

Also, sampling errors are larger for smaller subgroups (e.g., women vs. men) of the survey. Of course, if accuracy isn't all that important, higher levels of sampling error may be tolerable.
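As a rough check on the figures above, this Python sketch applies the usual normal-approximation formula for a proportion's margin of error, z * sqrt(p(1-p)/n) with z = 1.96 for 95% confidence; the 50/50 split and the 300-person subgroup are illustrative assumptions.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Full sample of 600 at a 50/50 split: roughly +/- 4%, i.e. the 46%-54% range above.
print(f"n=600: +/- {margin_of_error(0.5, 600):.1%}")

# A subgroup half that size (e.g., only the women in the sample) has a wider margin.
print(f"n=300: +/- {margin_of_error(0.5, 300):.1%}")
```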

3. Was the interviewing done face-to-face or over the telephone? How does this affect results? If a telephone interview, was random digit dialing used to select respondents?
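For readers unfamiliar with random digit dialing, here is an illustrative Python sketch of the idea: numbers are generated at random within known area-code/exchange blocks, so households with unlisted numbers can still be reached. The area code and exchanges are made-up examples.

```python
import random

random.seed(7)  # seeded only so the example is repeatable
area_code = "937"                  # illustrative area code
exchanges = ["233", "236", "254"]  # hypothetical local exchanges

def random_digit_dial() -> str:
    """Generate one phone number uniformly at random within the listed exchanges."""
    exchange = random.choice(exchanges)
    line = f"{random.randint(0, 9999):04d}"  # last four digits chosen uniformly
    return f"({area_code}) {exchange}-{line}"

print([random_digit_dial() for _ in range(5)])
```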

4. What was the "response rate" of the survey–i.e., the percentage of those selected who actually participated (as opposed to refusing)? How can this affect the survey results?

5. Note: Sampling errors are just the "tip of the iceberg" in terms of problems or errors with public opinion polls. Reporting response rates, sampling errors, etc. in articles on political polls tends to give the reader a false sense of the accuracy of polling results, as if such errors are the only ones we need to know about and as if most of the "error" in a survey can be estimated with scientific precision.

In fact, other problems associated with question wording, question order, and the interpretation of survey findings are often more important than sampling errors. Indeed, if the poll is done by a reputable firm, the sampling procedure is probably one of the least important aspects of the survey to know about.

B. Question Wording
1. Is the question "loaded" or biased in some way? Does it "lead" respondents to answer in a particular manner? Does it present different sides of an issue fairly?

2. Is the question susceptible to social desirability biases, so that some answers might appear more socially acceptable or "politically correct"?

3. Is the question clear and unambiguous, simple and straightforward? Or are there several issues at stake in an unnecessarily complicated question? And does the question require knowledge that many people may not have, or use terms that some people might not understand? If so, the question may be "testing" familiarity and measuring "nonattitudes" rather than soliciting real opinions.

4. Are responses affected by the context of the question–i.e., previous questions, question order, and the like?

5. Other question wording effects (see Erikson and Tedin, Ch 2): Are there likely to be framing effects? Are the arguments balanced? Are multiple Likert items balanced?

C. Interpreting Survey Results

1. Is there any reason to think that the polling organization or sponsor is distorting the results of the poll for its own benefit?

2. Are there alternative interpretations or explanations for the results, besides those being reported or intimated? Could differences in responses across groups, over time, etc. be due to some other reason than those suggested in the article?

3. What are the goals of the analyst? Mere description, explanation, or prediction?

4. What "model" of polling and public opinion do pollsters and reporters seem to have in mind in describing and interpreting the results of a poll? Two typical types of interpretations of political polls of candidate or policy support:

Public opinion as "elections": Is the public opinion poll being interpreted as a sort of "interim election" or a "mandate from the people" that should be followed by the nation's leaders (George Gallup's position)? Are the results being used to predict political behavior or support weeks and months from now? If so, the political attitudes being measured must be salient, stable, and "strong" so that the "snap-shot" picture provided by the public opinion poll is not a serious distortion.

Public opinion as a "puzzle" that needs further probing and explanation: Or is the poll being used to understand the sources and dynamics of public opinion, which is acknowledged to be complex and ever-changing? If so, is it acknowledged that much of public opinion is often subject to change, and is sometimes amorphous, somewhat weak and passive, with only a minority mobilized pro or con? Is there an attempt to understand how public opinion changes in response to events and how those changes produce trends in the "climate" of public opinion? Is there an attempt to document trends in public opinion over time, to understand the origins of public opinion, or document and explain differences in public opinion across different social, political, and information groups in the population?

How do politicians, journalists, social scientists and the public differ in the way they are likely to interpret polls, based on their goals?

5. Would using other methods in addition to, or in lieu of, surveys help us to overcome the limitations of opinion polls?

Use depth interviews or focus groups to delve beneath the surface of superficial survey responses and understand how people arrive at their opinions in the first place?

Use lab experiments or survey experiments to disentangle causes from effects in public opinion?

Use Q-methodology to understand the different meanings and subjective frames of reference that people use to interpret terms and questions?

Excellent

Excellent background piece. The results can be found in the Administration Committee agenda and will be discussed at 6:00 Tuesday, Feb 4, 2014, at the Administration Committee meeting - City Hall, 6131 Taylorsville Rd.

Hope to see you there.


Agenda