Althaus (2003) Collective Preferences in Democratic Politics
This book examines the quality of political representation provided by surveys, conceptualizing public opinion research in the “classic tradition” (Lazarsfeld 1957): using empirical methods to pursue and refine the foundational questions about popular sovereignty raised by political theorists. “It does so through a statistical analysis of representation in surveys where quality is analyzed from the standpoint of two foundational concepts in democratic theory: the degree to which surveys regard and promote the political equality of all individuals in a population and the likelihood that surveys represent the political interests of all individuals in a population” (9).
Introduction
What is public opinion? In what form ought it be recognized?
What is its nature? What characteristics should it possess?
What kind of political power does it have? What kind of power should it be given?
These should/ought questions are more normative than empirical, but still very important to ask.
Opinion surveys:
Relevant? Since the 1930s there has been growing acceptance of polls as a proxy for the people’s voice. Sample surveys allow scholars to look beyond the vote as a signal of needs and wants; they are a more expansive and comprehensive measure of preferences than a vote.
Advocates: Useful for mass democracies because they can reveal what the people are thinking (Verba 1996) (2). Page & Shapiro (1992) and Converse (1990) also find surveys laudable, and “conclude that the traditional understanding of public opinion as volatile and capricious is incorrect” (2). This is the idea of a RATIONAL public, also supported by Jacobs & Shapiro (2000). Yet these authors are all empiricists (and not all empiricists applaud polling results; many recognize that what individuals say in polls is shallow, vacillating, illogical, and coarse) (7). In contrast to these scholars, few political philosophers would aver that polling/samples offer a justification for democratic rule. Such scholars are suspicious of the capacity of aggregated preferences to reveal the common good (Arendt 1958; Barber 1984; Habermas 1989, 1996).
Polling criticisms:
sampling error
question wording
draw attention away from “real” concerns (Ginsberg 1986)
construe a fictitious public mind (Bourdieu 1979)
Thus we still have to call into question “whether opinion surveys can tell us reliably what the people really want” (3).
This book addresses two major concerns:
(1) Do citizens have enough knowledge about the political world to regularly formulate policy preferences that are consistent with their needs, wants, and values?
(2) Is the quality of political representation provided by opinion surveys adequate for the uses to which they are put in democratic politics?
Overall, political knowledge among citizens is low and thus not to be trusted (Almond 1950; Berelson, Lazarsfeld and McPhee 1954; Campbell, Converse et al. 1960; Converse 1964; Patterson 1980).
I. But some scholars contend that aggregation of the informed and uninformed balances things out (Converse 1990; Erikson, MacKuen & Stimson 1992; Miller 1996; Bartels 1996; Delli Carpini & Keeter 1996). “When aggregated, the argument goes, the more or less random responses from ill-informed or unopinionated respondents should tend to cancel each other out” (12). Page and Shapiro say that informed opinions persist because uninformed opinions cancel out (but why assume they would not err in the same direction?).
Althaus argues there is NOT a balance/canceling out. Uninformed people do not give randomly balanced responses; they are influenced by the information environment. In effect, the uninformed are making the decisions, because the INFORMED are canceling one another out, so uninformed opinions drive the collective choice. Althaus says the uninformed GROUP on the same answers (framing effects). Informed people cluster around the poles, and thus cancel each other out, while the uninformed choose the middle, the more accessible frame. This is especially true in a two-party system, and especially true in surveys that offer a 7-point scale (the uninformed choose the midpoint).
** (informed, canceled out)
XXXXXXXXXXX (uninformed, mean)
** (informed, canceled out)
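The canceling-out dispute can be sketched with a toy simulation (my illustration, not Althaus’s; the scale, shares, and bias value are invented). If uninformed answers were truly random they would wash out of the average, but when a shared frame pulls them one way, the informed poles cancel and the uninformed bias survives aggregation:

```python
import random

random.seed(0)

def collective_mean(n, informed_share, frame_bias):
    """Average response on a -3..+3 scale for one simulated survey.

    Informed respondents hold offsetting views at the poles (+3 / -3),
    so they cancel in the aggregate. Uninformed respondents drift toward
    whatever answer the question frame makes accessible (frame_bias),
    so their "noise" is correlated rather than random.
    """
    total = 0.0
    for _ in range(n):
        if random.random() < informed_share:
            total += random.choice([-3, 3])           # polarized; cancels out
        else:
            total += frame_bias + random.gauss(0, 1)  # shared pull
    return total / n

# If uninformed errors were truly random (frame_bias = 0), aggregation works:
print(round(collective_mean(100_000, 0.3, 0.0), 2))  # ≈ 0.0
# A shared frame pulling uninformed answers one way moves the aggregate:
print(round(collective_mean(100_000, 0.3, 1.0), 2))  # ≈ 0.7
```

With 70% uninformed respondents sharing a one-point pull, the collective mean lands near 0.7 even though every informed respondent is counted; this is the sense in which the uninformed, not the informed, end up steering the aggregate.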
II. Others argue citizens—though low on political ken —use heuristic shortcuts “interpretive schema or cues from political elites” (13)—in place of factual knowledge (Popkin 1991, 1993; Kuklinski & Quirk 2000; Iyengar 1990).
But, Althaus finds both to lack adequate empirical support (14) and states: “both revisionist perspectives tend to overlook an important fact: low information levels are only half the problem. Just as important is the observation that some kinds of people tend to be better informed than others” (14).
What other way could we measure the “skewed noise”?
Political Knowledge:
Low levels and uneven social distribution of political knowledge affect the quality of representation afforded by collective preferences. Much of the variation in knowledge levels is due to individual differences in (1) motivation, (2) ability and (3) opportunity.
(1) Motivation: influenced by a person’s interest in politics, civic duty, and anxiety about the future (I would also argue social acceptance, given one’s social circle; in some, albeit small, pockets, it’s cool to know this stuff).
(2) Ability: influenced by education. It is the ability to process the info.
(3) Opportunity: mobilization efforts; exposure to media markets, so there is a geographical component here as well.
Who are the politically savvy? They share characteristics (other than information advantages):
- groups with distinctive and competitive interests
- white, middle class, male, middle-aged, married
- college grads
- affluent
Consequences?
(1) people who are knowledgeable tend to give opinions more often
(2) people who are well informed are better able to form opinions consistent with political predispositions (Zaller 1992; Delli Carpini & Keeter 1996).
Consequences of the consequences?
Information effects: “a bias in the shape of collective opinion caused by the low levels and uneven social distribution of political knowledge in a population.”
Collective opinion: any aggregation of individual preferences.
Bias: a distortion away from what collective opinion might look like if all relevant groups in a population were equally and optimally well informed about politics.
Thus majority opinions are driven by a SMALL NUMBER of respondents who have an intense and unified view. People from higher SES groups tend to dominate the public opinion channels—contacting officials, volunteering for campaigns, contributing money.
Thus, conventional wisdom might lead one to believe that the special value of a sample survey is that it is more representative. Yet Althaus argues it may not be as representative as commonly thought, and that we need to reconsider how we use opinion surveys in the political process.
Chapter 2
Putting the power of aggregation to the test; an assessment of the information-pooling properties of collective opinion (aggregation and Philip Converse).
Althaus argues that collective rationality models (Converse 1990; Erikson, MacKuen & Stimson 1992; Miller 1996; Bartels 1996; Delli Carpini & Keeter 1996) suffer from several conceptual problems; using a computer simulation of 28,000 opinion distributions, Althaus finds…
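The information-pooling premise behind these collective rationality models can be sketched as a small Monte Carlo (an illustration with invented parameters, not Althaus’s 28,000-distribution simulation). Note the assumption it bakes in, the very one Althaus contests: uninformed answers here are genuine coin flips.

```python
import random

random.seed(1)

def majority_matches(n, informed_share):
    """One simulated opinion distribution on a binary question.

    Informed respondents pick the 'better' option with probability 0.8;
    uninformed respondents answer at random (the contested assumption).
    Returns True when the collective majority lands on that option.
    """
    votes = 0
    for _ in range(n):
        if random.random() < informed_share:
            votes += random.random() < 0.8
        else:
            votes += random.random() < 0.5
    return votes > n / 2

# Share of 1,000 simulated surveys (500 respondents each) where
# aggregation recovers the informed majority:
for share in (0.05, 0.2, 0.5):
    hits = sum(majority_matches(500, share) for _ in range(1000))
    print(share, hits / 1000)
```

Even a small informed sliver recovers the informed majority most of the time, which is why the pooling argument is attractive; replace the coin flips with systematically skewed uninformed answers (as in Althaus’s framing argument) and the result collapses.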
Chapter 3
Examines whether the opinions expressed in sample surveys possess the critical characteristics necessary for collective rationality to work.
The less-informed are more likely to respond “don’t know” or “no opinion.” This hints at the political knowledge issue, which I cover in detail above. Indeed, the opinions expressed in sample surveys lack the characteristics collective rationality requires: the well-informed respond at higher rates, and the ill-informed give like-minded rather than random responses (whether purposively or not).
Chapter 4
Tests the impact of information effects (see definition above) on collective preferences.
Data: 1988, 1992, 1996 ANES survey.
Model: what collective prefs might look like if all respondents were as well informed about politics as the most knowledgeable ones.
The model shows what Althaus predicts in chapter 3: the mass public’s low levels of political info cause surveys to misrepresent views. Other findings: collective opinions appear more progressive on some issues and more conservative on others than they would if fully informed; “After controlling for info effects, collective opinion tends to become less approving of presidents and Congress, more dovish and interventionist on foreign policy, less conservative on social, enviro, and equal rights issues, and more conservative on morality issues and questions about the proper limits of gov activity.”
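As I understand the simulation method (the coefficients and respondents below are invented stand-ins, not Althaus’s estimates), opinion is modeled as a function of demographics and political knowledge, then each respondent’s opinion is re-predicted with knowledge raised to its maximum while demographics stay fixed; the gap between the two collective averages is the information effect:

```python
import math

def logistic(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical coefficients from a regression of policy support on
# education (years), income (decile), and knowledge (0-1 scale), with a
# knowledge x education interaction -- stand-ins for illustration only.
B0, B_EDU, B_INC, B_KNOW, B_KNOW_EDU = -1.0, 0.05, 0.02, 0.8, -0.10

def p_support(edu, inc, know):
    return logistic(B0 + B_EDU * edu + B_INC * inc
                    + B_KNOW * know + B_KNOW_EDU * know * edu)

# Four hypothetical respondents: (education, income decile, knowledge)
respondents = [(12, 3, 0.2), (16, 7, 0.9), (10, 2, 0.1), (14, 5, 0.5)]

surveyed = sum(p_support(e, i, k) for e, i, k in respondents) / len(respondents)
# Counterfactual: same demographics, knowledge set to the maximum (1.0)
fully_informed = sum(p_support(e, i, 1.0) for e, i, _ in respondents) / len(respondents)

information_effect = fully_informed - surveyed
print(round(surveyed, 3), round(fully_informed, 3), round(information_effect, 3))
# → 0.368 0.318 -0.05
```

In this toy case collective support drops about five points in the fully informed counterfactual; per chapter 4, the direction of such shifts varies by issue.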
Chapter 5
This chapter looks at when and why information effects are highest.
When questions are more cognitively demanding, there are greater disparities in responses between the well- and ill-informed. Differences also arise from the social and psychological factors that “influence how people establish and update their political prefs.”
Chapter 6
This chapter assesses information effects over time (time series data from 1980-1998). Findings: “information effects tend to grow smaller when the political environment motivates citizens to process information systematically, although many of these changes turn out to be short-lived.”
This chapter also discusses the “simulated measures of fully informed opinion.” I am still confused as to how they “simulate” such a measure, but when it is done it accurately predicts collective policy prefs. The author does note that although the statistical simulations are spot-on, they don’t represent the latent interests of the population (is this because the latent interests are never expressed? I’m still a bit confused, and not motivated enough to answer my own question).
Chapter 7
This chapter looks at how information effects complicate the use of opinion surveys in political decision making. I’ve already touched upon many of the problems (misrep, overrep, etc.). This chapter addresses them in more depth, but also suggests two conditions that would help overcome the representation issues (information effects):
(1) ensure the sample has the same demographic characteristics as the population it is supposed to represent.
(2) ensure the well-informed give the same mix of opinions as the less informed.
But concludes “the absence of info effects should not be taken as a sign of enlightened prefs. Instead, the absence of info effects confirms only that a collective pref provides good info about what the people want, rightly or wrongly.”
Chapter 8
Information effects can influence the usefulness of survey results; surveys can be problematic, but also have potential. If the suggestions in chapter 7 are met then public opinion surveys have a place in directing democratic politics.
Conclusion:
“The primary culprit is not any inherent shortcoming in the methods of survey research. Rather, it is the limited degree of knowledge held by ordinary citizens about public affairs and the tendency for some kinds of people to be better informed than others” (10).
Class notes:
Collective choice with more information doesn’t change at the aggregate.
General knowledge vs. specific issue
ISSUE KNOWLEDGE inconsistent with schematic network?
Aggregation on a particular point (e.g., blacks and affirmative action) can work even though, in general, the group might be characterized as uninformed. Thus familiarity with an issue makes a BIG difference. The question being asked matters.
When people are not “activated” they can still make the policy decision? (not if public opinion doesn’t drive policy!).
Even without an authentic opinion, you can drive a policy.
Dispersion v. Consistency of opinions
Convergence hypothesis:
Althaus finds that it holds
If this is the case, then info effects in the collective… there is not a convergence in the middle. More people settle not in the middle but off to the sides, increasing information effects and creating a divergence between expectation and result.
Depletion hypothesis: the information effect (the difference between the opinion you get at time one and the simulated fully informed opinion). People who give the DK answers have less knowledge but are demographically the same (151). The DKs are disproportionately female; men guess (Luskin).
Not much of an impact. (Is it because the white men vs. women comparison is doing the work of explaining the difference?)
If it’s not the depletion effect, what is it?
(1) Survey method (framing: is the way we ask the question forcing people to answer in a way in which they otherwise would not?).
(2) 2
(3) social distribution of policy-specific knowledge. Are you able to tap into policy-specific ken instead of general knowledge?
(4) Relative salience of political ken; do we see info effects decrease in presidential elections because the information matters (socially/self-monitoring)?
The purpose of these tests and of establishing info effects:
Are we using polls to set policy? Is policy failing to reflect the people because of information effects? If yes, we need to improve surveys: limit the availability of a middle answer, adjust framing, etc. By knowing the general ways frames etc. influence info effects, we can distinguish between good surveys and bad surveys (straw polls). Concludes: surveys are problematic. Duh.
Take home effects:
Value issues are more vulnerable to change: these are issues we think we know, yet with more info we change them! Collective value opinion is more vulnerable to change than policy opinion; policy opinion is actually more authentic. (127)
THERE ARE INFORMATION EFFECTS: there are consistent differences, and they are LARGE (1/5 to 1/3 of collective prefs will change with more information). Milner links this to institutions.