The 2018 Ranked Choice Voting Election in Maine’s Congressional District No. 2

Maine’s recent congressional election – the first federal election in the U.S. held under Ranked Choice Voting (RCV) – took place against a backdrop of continuing Republican Party opposition to the recently introduced voting system. State GOP leaders not only called on their voters to rank just the party’s candidates, but also sought a court ruling to prevent RCV from determining the outcome of the U.S. House of Representatives election in Congressional District No. 2, and subsequently requested a recount of all ballots in the election (later called off while it was underway).

Nonetheless, a significant minority of Republican voters in the district ignored party exhortations and indicated valid rankings for at least two candidates, while substantial minorities of non-GOP voters gave only a first preference to Democratic or independent candidates. At the same time, voters who engaged in bullet voting – indicating a preference for just one candidate – constituted a minority of the voting electorate in CD-2. This is notable when one considers that in both the 2016 and 2018 RCV referendums held in Maine, a majority of voters in CD-2 rejected the switch from plurality voting.

Moreover, a federal judge first allowed the RCV count to take place, and subsequently issued a ruling upholding the constitutionality of the election in the congressional district. Incumbent Republican Bruce Poliquin obtained the largest number of first preference votes, but fell short of an absolute majority; he then lost the second and final round of counting to Democratic challenger Jared Golden, who prevailed with 142,440 votes (50.6%) to Poliquin’s 138,931 (49.4%) after independent candidates Tiffany Bond and Will Hoar were eliminated and their supporters’ next valid preferences were transferred to the remaining two candidates. The First Circuit Court of Appeals subsequently denied Congressman Poliquin’s motion for an injunction to prevent Golden from being declared the winner, and Poliquin – who wanted the election outcome determined solely by the first preference count, or by a re-run under plurality voting – dropped his lawsuit challenging RCV shortly thereafter.

The following table, based on a tally of 296,077 cast vote records in CD-2 published by Maine’s Secretary of State on his official website, shows the distribution of first preference votes for each candidate. Ballots are classified as having at least a valid second preference for another candidate (“Preference”), or no second or successive preferences for a different candidate (“Bullet”). The “Other” category groups ballots with a valid first preference but no valid second or successive preferences, due to overvoting on the second preference ranking (indicating a second preference for more than one candidate), undervoting (leaving blank more than one consecutive ranking beyond the first preference while indicating preferences for other candidates), or a combination of both. State of Maine 2018 Ranked Choice Voting (RCV) Election Data has frequency counts for all 1,564 tallied preference combinations in the CD-2 election.

Candidate       Bullet      %   Preference      %   Other     %     Total
Bond (I)         4,333   26.2       12,106   73.1     113   0.7    16,552
Golden (D)      51,423   39.0       79,551   60.3   1,039   0.8   132,013
Hoar (I)         2,120   30.8        4,713   68.6      42   0.6     6,875
Poliquin (R)    89,228   66.5       43,955   32.8   1,001   0.7   134,184
Total          147,104   50.8      140,325   48.5   2,195   0.7   289,624

There were 435 overvotes and 6,018 undervotes in the first preference count; the latter figure – which included 5,711 ballots undervoted on all available rankings – is noticeably lower than the reported number of blank ballots in the plurality-based 2014 and 2016 U.S. House elections in CD-2, and it is also lower than the number of blank ballots in the district for this year’s gubernatorial election in Maine, which was also carried out by plurality voting. At the very least, these numbers indicate the introduction of RCV did not bring about an increase in the number of blank or invalid ballots. In addition, the very low number of overvotes strongly suggests there was little voter confusion about the new electoral system.

Of the 147,104 voters in CD-2 who indicated valid preferences for just one candidate, 137,971 marked a single ranking only (including 315 ballots on which that single ranking was a second preference with no first preference), while an additional 9,133 voters marked more than one ranking, but all for the same candidate; a large majority of these – 7,706 voters – gave all five preference rankings to their chosen candidate. Under Maine’s RCV counting rules, these votes had the same effect as indicating only a first preference for the selected candidate. However, while ballots with valid preferences for just a single candidate constitute a narrow majority of the valid first preference votes, they represent a minority – 49.7% – of all votes cast in the district. By contrast, in both the 2016 and 2018 RCV referendums, CD-2 reported a majority of votes against RCV among both the valid and overall vote totals. Moreover, even among voters casting valid first preferences, those who marked a single ranking only were a minority of 47.6%.
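
To make the classification concrete, here is a minimal sketch (in Python) of how a single cast vote record might be sorted into the categories above. It assumes a ballot is a list of five rankings, each holding a candidate name, the string 'OVERVOTE', or None for a blank; this is an illustration consistent with the rules as described, not the state’s actual tabulation code.

```python
def classify_ballot(rankings):
    """Sort one cast vote record into 'Bullet', 'Preference', or 'Other'.

    rankings: list of 5 entries, each a candidate name, 'OVERVOTE', or None.
    """
    seen = []     # distinct candidates reachable under the counting rules
    blanks = 0
    for mark in rankings:
        if mark is None:
            blanks += 1
            if blanks > 1:       # more than one consecutive blank exhausts the ballot
                break
            continue
        if mark == 'OVERVOTE':   # an overvoted ranking exhausts the ballot
            break
        blanks = 0
        if mark not in seen:
            seen.append(mark)
    if not seen:
        return 'No valid preference'
    if len(seen) >= 2:
        return 'Preference'      # valid rankings for at least two candidates
    # One candidate reached: 'Other' if an overvote or consecutive blanks cut
    # off a ranking for some other candidate, otherwise a bullet vote.
    cut_off = any(m not in (None, 'OVERVOTE', seen[0]) for m in rankings)
    if cut_off and ('OVERVOTE' in rankings or blanks > 1):
        return 'Other'
    return 'Bullet'

# Examples: a pure bullet vote, a transferable ballot, and an 'Other' ballot.
print(classify_ballot(['Poliquin'] * 5))                            # Bullet
print(classify_ballot(['Bond', 'Golden', None, None, None]))        # Preference
print(classify_ballot(['Golden', 'OVERVOTE', 'Bond', None, None]))  # Other
```

Running a classifier of this kind over all 296,077 records and cross-tabulating by effective first preference would reproduce the table above, give or take edge cases in the official rules.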

Bullet voting for the two major-party candidates had no impact on the CD-2 election outcome, since their first preferences were tallied in the second count as preferences for continuing candidates. However, the 6,453 ballots with valid rankings for either Bond only or Hoar only made up the bulk of the 8,253 non-transferable votes in the second count (most of the remaining 1,800 votes in that group had valid rankings for both Bond and Hoar, but not for the other two candidates). It has been suggested that these voters were confused as to which candidates would make it to the second count, but a far more likely explanation is that they simply wished to support the independent candidates only and didn’t care for either of the two major-party candidates. In fact, their behavior is functionally the same as that of voters in traditional runoff systems casting a blank or invalid ballot in the runoff election, after the candidates they originally supported were eliminated in the first round of voting. Moreover, Bond and Hoar first preference voters had the lowest proportions of bullet voting – combined, just under two out of seven ballots cast for them.

The overall distribution of bullet votes and preference votes in the CD-2 election closely resembles the 2016 and 2018 RCV referendum outcomes, and it would seem this is not entirely a coincidence: in towns with more than ten voters, there were moderately strong correlations between bullet voting in 2018 and opposition to RCV in 2016 (0.62), as well as between preference votes in 2018 and support for the new electoral system two years earlier (0.64); when the correlations were calculated on the basis of valid votes only, both stood at 0.63.
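
For replication-minded readers, the correlations are a few lines of pandas once town-level totals are assembled; the file and column names below are hypothetical placeholders, not the Secretary of State’s actual data layout.

```python
import pandas as pd

# One row per town: 2018 bullet/preference shares and 2016 referendum shares.
towns = pd.read_csv("cd2_towns.csv")             # hypothetical file name
towns = towns[towns["voters_2018"] > 10]         # towns with more than ten voters

print(towns["bullet_share_2018"].corr(towns["no_rcv_2016"]))    # reported as 0.62
print(towns["pref_share_2018"].corr(towns["yes_rcv_2016"]))     # reported as 0.64
```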

In conclusion, not all GOP voters in CD-2 ranked Congressman Poliquin alone (nearly a third of his first-preference voters cast a preference vote), nor did all Golden voters – or those backing independent candidates Bond and Hoar – rank other candidates: almost three out of eight non-Poliquin voters bullet voted. There was little evidence of voter confusion, and casting a preference vote or a bullet vote may have been indicative of ongoing support for RCV or opposition to it, respectively; if so, the election outcome did not point to growing opposition to the newly adopted electoral system. Just as important, these findings should leave no doubt that RCV is not a clear-cut partisan issue.

NYT endorses a larger House, with STV

Something I never thought I would see: The editorial board of one of the most important newspapers in the United States has published two separate editorials, one endorsing an increase in the size of the House of Representatives (suggesting 593 seats) and another endorsing the single transferable vote (STV) form of proportional representation for the House.

It is very exciting that the New York Times has printed these editorials promoting significant institutional reforms that would vastly improve the representativeness of the US House of Representatives.

The first is an idea originally proposed around 50 years ago by my graduate mentor and frequent coauthor, Rein Taagepera, based on his scientific research that resulted in the cube root law of assembly size. The NYT applies this rather oddly to both chambers, taking the cube root of the total population (a bit under 700) and then subtracting the Senate’s 100 seats to arrive at its proposed 593 for the House. But this is not something I will quibble with. Even an increase to 550 or 500 would be well worth doing, while going to almost 700 is likely too much, the cube root notwithstanding.

The second idea goes back to the 19th century (see Thomas Hare and Henry R. Droop) but is as fresh and valid an idea today as it was then. The NYT refers to it as “ranked choice voting in multimember districts” and I have no problem whatsoever with that branding. In fact, I think it is smart.

Both ideas could be adopted separately, but reinforce each other if done jointly.

They are not radical reforms, and they are not partisan reforms (even though we all know that one party will resist them tooth and nail and the other isn’t exactly going to jump on them any time soon). They are sensible reforms that would bring US democracy into the 21st century, or at least into the 20th.

And, yes, we need to reform the Senate and presidential elections, too. But those are other conversations…

The US Supreme Court gerrymandering case

I do not have time to dissect the arguments before the US Supreme Court in the case concerning the permissibility of the partisan gerrymander in Wisconsin. It clearly is a case of great importance to issues we care about at this blog. So, feel free to discuss here.

I highly recommend two pieces by Michael Latner:

Sociological Gobbledygook or Scientific Standard? Why Judging Gerrymandering is Hard (4 Oct.)

Can Science (and The Supreme Court) End Partisan Gerrymandering and Save the Republic? Three Scenarios (2 Oct.)

Republicans will likely keep their House majority – even if Clinton wins by a landslide – and it’s because of gerrymandering.

By Michael Latner

While the presidential race has tightened, the possibility of Donald Trump being defeated by a wide margin has some Republicans worried about their odds of retaining control of Congress. However, only a handful of Republican-controlled districts are vulnerable. Speaker Paul Ryan’s job security and continuing GOP control of the House are almost assured, even if Democrats win a majority of the national Congressional vote. How is it that the chamber supposedly responsive to “The People Alone” can be so insulated from popular sentiment? It is well known that the Republican Party has a competitive advantage in the House because it wins more seats by narrow margins, and thus has more efficiently distributed voters. What is poorly understood is that the current level of observed bias favoring the GOP is the result of political choices made by those drawing district boundaries.

This is a controversial claim, one that is commonly challenged. However, in Gerrymandering in America: The House of Representatives, the Supreme Court, and the Future of Popular Sovereignty, a new book co-authored by Anthony J. McGann, Charles Anthony Smith, Alex Keena and myself, we test several alternative explanations of partisan bias and show that, contrary to much professional wisdom, the bias that insulates the GOP House majority is not a “natural” result of demographic sorting or the creation of “majority-minority” districts in compliance with the Voting Rights Act of 1965. It is the result of unrestrained partisan gerrymandering that occurred after the 2010 Census, and in the wake of the Supreme Court’s 2004 decision in Vieth v. Jubelirer, which removed legal disincentives for parties to maximize partisan advantage in the redistricting process.

Partisan bias tripled after congressional redistricting

We measure partisan bias using the symmetry standard, which asks: What if the two parties both received the same share of the vote under a given statewide districting plan? Would they get the same share of seats? If not, which party would have an advantage?

We calculate seats/votes functions on the assumption of uniform partisan swing – if a party gains 5% nationally, it gains 5% in every district, give or take an allowance for local factors (simulated through random effects, which reduce our estimates of bias). Linear regressions estimate the level of support the Democrats can expect in each district if they win 50% of the national vote, given the party’s actual national support in recent elections, and we generate a thousand simulated elections with hypothetical vote swings and different random local effects for each district to create our seats/votes functions.
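
As a rough illustration of the procedure (not the book’s actual code), a uniform-swing simulation with hypothetical district baselines might look like this in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the regression step: each district's expected
# Democratic share when the national vote splits 50/50.
baseline = rng.normal(0.50, 0.15, size=435)

def seat_share(v, sd_local=0.03, sims=1000):
    """Mean Democratic seat share across simulations at national vote share v."""
    swing = v - 0.50                                   # uniform partisan swing
    local = rng.normal(0.0, sd_local, (sims, baseline.size))
    wins = (baseline + swing + local) > 0.50           # district-by-district outcomes
    return wins.mean()

# Symmetry check: a plan is unbiased if the parties' seat shares match
# when their vote shares are swapped.
for v in (0.48, 0.50, 0.52):
    print(f"v={v:.2f}  D seats={seat_share(v):.3f}  "
          f"R seats at same vote={1 - seat_share(1 - v):.3f}")
```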

Figure 1: Seats/Votes function for Congress 2002-2010

Figure 2: Seats/Votes function for Congress 2012

Figures 1 and 2 show the seats/votes functions under the 2002-2010 Congressional districts and the 2012 post-redistricting districts, respectively. We observe a 3.4% asymmetry in favor of Republicans under the older districts. This is still statistically significant, but it is only about a third of the 9.39% asymmetry we observe in 2012. Graphically, the seats/votes function in Figure 1 comes far closer to the 50% votes/50% seats point. The bias at 50% of the vote is less than 2% under the older state districting plans, compared to 5% in 2012. That is, if the two parties win an equal number of votes, the Republicans will win 55% of House seats. Furthermore, the Democrats would have to win about 55% of the vote to have a 50/50 chance of winning control of the House in 2016. Thus it is not impossible that the Democrats will regain control of the House, but it would take a performance similar to or better than 2008, when multiple factors were favorably aligned for the Democrats.

Increased bias did not result from “The Big Sort” 

Perhaps the most popular explanation for increased partisan bias comes from the “Big Sort” hypothesis, which holds that liberals and conservatives have migrated to areas dominated by people with similar views. Specifically, because Democrats tend to be highly concentrated in urban areas, it is argued, Democratic candidates tend to win urban districts by large margins and “waste” their votes, leaving the Republicans to win more districts by lower margins.

The question we need to consider is whether the concentration of Democratic voters has changed relative to that of Republican voters since the previous districts were in place. In particular, if it is the case that urban concentration causes partisan bias, then we would expect to find relative Democratic concentration increasing in those states where partisan bias increases. In order to address this question, we measure the concentration of Democratic voters relative to that of Republicans with the Pearson moment coefficient of skewness, using county-level data from the 2004 and 2012 presidential elections. As shown in Table 1, in most states the level of skewness toward the Democrats actually decreased in 2012.

Table 1: Change in skewness of the Democratic vote distribution (county level), 2004–2012

In twenty-seven of the thirty-eight states with at least three districts, the relative concentration of Democratic voters compared to Republican voters declined. Moreover, among the states where partisan bias increased between the 2000 and 2010 districting rounds, those with no increase in skewness outnumber those with an increase by more than two to one. We also find reduced skewness in most of the states with statistically significant partisan bias in 2012.
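
As a sketch, the skewness comparison can be computed directly from county returns; the file and column names here are hypothetical placeholders:

```python
import pandas as pd
from scipy.stats import skew

# Hypothetical layout: one row per county, with year, state, and the
# Democratic share of the presidential vote.
counties = pd.read_csv("county_pres_votes.csv")

for year in (2004, 2012):
    by_state = counties[counties["year"] == year].groupby("state")["dem_share"]
    # Pearson moment coefficient of skewness of each state's county shares;
    # a decline between years means less relative Democratic concentration.
    print(year, by_state.apply(skew).head())
```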

Of course, we should not conclude that geographical concentration does not make it easier to produce partisan bias. North Carolina was able to produce a highly biased plan without the benefit of a skewed distribution of counties, but to achieve this, the state General Assembly had to draw some extremely oddly shaped districts. While the urban concentration of Democratic voters makes producing districting plans biased toward the Republicans slightly easier, it makes producing pro-Democratic gerrymanders very hard. In Illinois, the Democratic-controlled state legislature drew some extremely non-compact districts but still only managed to produce a plan that was approximately unbiased between the parties.

Increasing racial diversity does not require partisan bias

Another “natural” explanation for partisan bias, one that is especially popular among Southern GOP legislators, is that it is impossible to draw districts that are unbiased while at the same time providing minority representation in compliance with the Voting Rights Act of 1965. The need to draw more majority-minority districts, it is argued, disadvantages the Democrats because it forces the inefficient concentration of overwhelmingly Democratic minority voters.

There are four states with four or more majority-minority districts – California, Texas, Florida, and New York – and they account for more than 60% of the total number of majority-minority districts. Texas and Florida have statistically significant partisan bias, but California and New York do not, and neither does Illinois, so the need to draw majority-minority districts does not make it impossible to draw unbiased districting plans. Yet many of the states that saw partisan bias increase do have majority-minority districts – or rather, in most cases, a single majority-minority district. It is possible that packing more minority voters into existing majority-minority districts creates partisan bias.

To test this possibility, we take the average percentage of African-Americans and Latinos in districts where those groups made up a majority of the population in the 113th Congress, and subtract the corresponding averages for the 110th Congress districts. Figures 3 and 4 display the results for these states. For majority-Latino districts, we find no evidence that states with increased Latino density have more biased redistricting plans.

Figure 3: Majority-Latino District Density Change and Symmetry Change

Figure 4: Majority-Black District Density Change and Symmetry Change

By contrast, states with increased majority-Black densities have clearly adopted more biased districting plans. Among the states with substantial reductions in partisan symmetry, only Louisiana (−1.9%), Ohio (−1.0%), and Pennsylvania (−0.9%) had lower average percentages of African Americans in their majority-minority districts after redistricting. The three states with the largest increases in majority-Black district density, Tennessee (5.2%), North Carolina (2.4%), and Virginia (2.2%), include some of the most biased plans in the country. This is not in any way required by the Voting Rights Act – indeed, it reduces the influence of African-American voters by using their votes inefficiently. However, it is consistent with a policy of state legislatures seeking partisan advantage by packing African-American voters, who overwhelmingly vote for the Democratic Party, into districts where the Democratic margin will be far higher than necessary.

Demography is not destiny

The bias we observe is not the inevitable effect of factors such as the urban concentration of Democratic voters or the need to draw majority-minority districts. It is for the most part possible to draw unbiased districting plans in spite of these constraints. Thus if state districting authorities draw districts that give a strong advantage to one party, this is a choice they have made – it was not forced on them by geography. The high level of partisan bias protecting GOP House control can only be explained in political terms. As we show in our book, pro-Republican bias increased almost exclusively in states where the GOP controlled the districting process.

One of the immediate consequences of unrestrained partisan gerrymandering is that, short of a landslide Democratic victory resembling 2008, the Republicans are very likely to retain control of the House. But one of the more profound consequences is that redistricting has upended one of the bedrock constitutional principles, popular sovereignty. Without an intervention by the courts, political parties are free to manipulate House elections to their advantage without consequence.

Economix: Expand the US House

It is good to see the undersized nature of the US House of Representatives get attention in the New York Times‘s Economix blog. The author is Bruce Bartlett, who “held senior policy roles in the Reagan and George H.W. Bush administrations and served on the staffs of Representatives Jack Kemp and Ron Paul”.

Bartlett notes that,

according to the Inter-Parliamentary Union, the House of Representatives is on the very high side of population per representative at 729,000. The population per member in the lower house of other major countries is considerably smaller: Britain and Italy, 97,000; Canada and France, 114,000; Germany, 135,000; Australia, 147,000; and Japan, 265,000.

The strongest empirical relationship of which I am aware between population size and assembly size is the cube root law. Backed by a theoretical model, it was originally proposed by Rein Taagepera in the 1970s. A nation’s assembly tends to be about the cube root of its population, as shown in this graph.*

Figure: Assembly size and population, with the cube-root line; US House size plotted at each census, 1830–2010

Note the flat line for the USA, indicating no increase in House size since a time when the population was less than a third of what it is today. This recent static period contrasts with earlier times, depicted by the zig-zag black line, in which the USA regularly adjusted House size, keeping it reasonably close to the cube-root expectation.

At only about two thirds of the cube-root value of the population (as of the 2010 census), the current US House is indeed one of the world’s most undersized. However, there are some even more deviant cases. Taking actual size over expected size (from the cube root), the USA has the seventh most undersized first or sole chamber among the thirty-one democracies in my comparison set. The seven are:

    .466 Colombia
    .469 Chile
    .518 India
    .538 Australia
    .590 Netherlands
    .614 Israel
    .659 USA

As expected, the mean ratio for the thirty-one countries is very close to one (0.992, with a standard deviation of .37). The five most oversized, all greater than 1.4, are France, Germany, UK (at 1.67), Sweden, and Hungary. (The latter was at a whopping 1.80, but has since sharply reduced its assembly size.) Spain, Denmark, Switzerland, Portugal, and Mexico all get the cube root prize for having assembly sizes from .975 to 1.03 of the expectation.

One thing I did not know is that an amendment to the original US constitution was proposed by Madison. According to Bartlett, it read:

After the first enumeration required by the first article of the Constitution, there shall be one representative for every 30,000, until the number shall amount to 100, after which the proportion shall be so regulated by Congress, that there shall be not less than 100 representatives, nor less than one representative for every 40,000 persons, until the number of representatives shall amount to 200; after which the proportion shall be so regulated by Congress, that there shall not be less than 200 representatives, nor more than one representative for every 50,000 persons.

Obviously, Madison’s formula would have run into some excessive size issues over time. And Bartlett does not suggest how much the House should be increased, only noting that its ratio of one Representative for every 729,000 people is excessive. On the other hand, Madison’s ratio of one per 50,000 would produce an absurdly large House! It is precisely the need to balance the citizen-representative ratio against the need for representatives to communicate effectively with one another that led Taagepera to devise the cube-root model, which, as we have seen, fits actual legislatures very well.

The cube root rule says the USA “should have” a House of around 660 members today, which would remain a workable size. (If the USA and UK swapped houses, each would be at just about the “right” size!) Even an increase to just 530 would put it within about 80% of the cube root.
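
The arithmetic behind these figures is easy to verify; in the sketch below the population figures are rough, and the exact “expected” size moves with the estimate used:

```python
# Cube-root rule: expected assembly size S = P^(1/3).
def expected_size(population):
    return population ** (1 / 3)

print(round(expected_size(309e6)))  # 2010 US census: about 676 seats
print(round(expected_size(287e6)))  # the ~660 cited corresponds to a somewhat
                                    # smaller population figure
print(round(expected_size(62e6)))   # UK: about 396, vs. an actual 650 (ratio ~1.6)

# Madison's eventual one-per-50,000 ratio, applied to the 2010 population:
print(309e6 / 50e3)                 # 6180.0 seats -- absurdly large, as noted
```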

As Bartlett notes, at some point the US House will be in violation of the principle of one person, one vote (due to the mandatory representative for each state, no matter how small). However, a case filed in 2009 went nowhere.



* Each country is plotted according to its population, P (in millions), and the size, S, of its assembly. In addition, the size of the US House is plotted against US population at each decennial census from 1830 to 2010.
The solid diagonal line corresponds to the “cube root rule”: S=P^(1/3).
The dashed lines correspond to the cube root of twice or half the actual population, i.e. S=(2P)^(1/3) and S=(.5P)^(1/3).

A variant of the graph will be included in Steven L. Taylor, Matthew S. Shugart, Arend Lijphart, and Bernard Grofman, A Different Democracy (Yale University Press).

An even earlier version of the graph was posted here at F&V in 2005.

Citations are always nice–Increasing the size of the US House

David Freddoso, writing at Conservative Intelligence Briefing, makes the case for increasing the size of the US House, citing one of my previous posts advocating the same. He makes two additional and valuable points: (1) “The Wyoming Rule”, by which the standard Representative-to-population ratio would be that of the smallest entitled unit, is misleading as to how representation is currently (mal-)apportioned; (2) increasing the size of the House would not, as is sometimes assumed, benefit Democrats and liberals.

David quibbles with the Wyoming part of the story, noting that “Wyoming is not the most overrepresented state — by a long way, that distinction goes to Rhode Island, with its two districts, average population 528,000”, whereas Wyoming has a population of 568,000 (and one seat).

I would note that this is a very small quibble indeed, as the Wyoming Rule–which, to be fair, I neither named nor invented–refers to the “smallest entitled unit”, not to the “most over-represented unit”. Of course, every state is a unit entitled to at least one seat, but a state with two members can indeed be over-represented to a greater degree than some state with one. Whichever we base it on–smallest entitled or most over-represented–the principle is the same: expand the House.

David proposes a House of 535, and has a table of how that would change each state’s current representation. I would go higher (600 or so), but the precise degree of increase is an even smaller quibble. I am pleased to see this idea being promoted in conservative (or liberal, or whatever) circles. And it’s always nice to be cited.

________
See also:

Reapportionment–a better way?; this includes a discussion of the cube-root rule of assembly size, and a graph of how the US relationship of House size to population compares to that of several other countries, and how it has changed over time as the US population has grown, but the House stopped doing so.

US House size, continued

Distortions of the US House: It’s not how the districts are drawn, but that there are (single-seat) districts

In the New York Times, Sam Wang has an essay under the headline “The Great Gerrymander of 2012”. In it, he outlines the results of a method aimed at estimating the partisan seat allocation of the US House if there were no gerrymandering.

His method proceeds “by randomly picking combinations of districts from around the United States that add up to the same statewide vote total” to simulate an “unbiased” allocation. He concludes:

Democrats would have had to win the popular vote by 7 percentage points to take control of the House the way that districts are now (assuming that votes shifted by a similar percentage across all districts). That’s an 8-point increase over what they would have had to do in 2010, and a margin that happens in only about one-third of Congressional elections.

Then, rather buried within the middle of the piece is this note about 2012:

if we replace the eight partisan gerrymanders with the mock delegations from my simulations, this would lead to a seat count of 215 Democrats, 220 Republicans, give or take a few.

In other words, even without gerrymandering, the House would have experienced a plurality reversal, just a less severe one. The actual seat breakdown is currently 201D, 234R. By Wang’s calculations, then, gerrymandering cost the Democrats seats equivalent to about 3.2% of the House. Yes, that is a lot, but it is just short of the 3.9% that is the full difference between the party’s actual 201 and the barest of majorities (218). But, actually, the core problem derives from the electoral system itself. Or, more precisely, from an electoral system designed to represent geography having to allocate a balance of power among organizations that transcend geography–national political parties.

Normally, with 435 seats and the 49.2%-48.0% breakdown of votes that we had in 2012, we should expect the largest party to have about 230 seats. ((Based on the seat-vote equation.)) Instead it won 201. That deficit between expectation and reality is equivalent to 6.7% of the House, suggesting that gerrymandering cost the Democrats about half the seats that a “normally functioning” plurality system would have netted them.
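
For the curious, here is a sketch of the footnoted seat-vote equation in its classic cube-law form (exponent n = 3, a simplification; the full equation derives the exponent from the numbers of votes and seats, which presumably accounts for the slightly higher ~230 figure):

```python
def largest_party_seats(v1, v2, total_seats=435, n=3):
    """Seats for the leading party under s1/s2 = (v1/v2)**n (cube law when n=3)."""
    ratio = (v1 / v2) ** n            # seat ratio between the two leading parties
    return round(total_seats * ratio / (1 + ratio))

print(largest_party_seats(49.2, 48.0))  # about 226 seats on a 49.2-48.0 vote split
```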

However, the “norm” here refers to two (or more) national parties without too much geographic bias in where those parties’ voters reside. Only if the geographic distribution is relatively unbiased does the plurality system deliver its supposed advantage in partisan systems: giving the largest party a clear edge in political power (here, the majority of the House). Add in a little bit of one big party being over-concentrated, and you can get situations in which the largest party in votes is under-represented, and sometimes not even the largest party in seats.

As I have noted before, plurality reversals are inherent to the single-seat district, plurality, electoral system, and derive from inefficient geographic vote distributions of the plurality party, among other non-gerrymandering (as well as non-malapportionment) factors. Moreover, they seem to have happened more frequently in the USA than we should expect. While gerrymandering may be part of the reason for bias in US House outcomes, reversals such as occurred in 2012 can happen even with “fair” districting. Wang’s simulations show as much.

The underlying problem is, again, because all the system really does is represent geography: which party’s candidate gets the most votes here, there, and in each district? And herein lies the big transformation in the US electoral and party systems over recent decades, compared to the party system that was in place in the “classic” post-war system: it is no longer as much about local representation as it once was, and is much more about national parties with distinct and polarized positions on issues.

Looking at the relationship between districts and partisanship, John Sides, in the Washington Post’s Wonk Blog, says “Gerrymandering is not what’s wrong with American politics.” Sides turns the focus directly on partisan polarization, showing that almost without regard to district partisanship, members of one party tend to vote alike in recent congresses. The result is that when a district (or, in the Senate, a state) swings from one party to another, the voting of the district’s membership jumps clear past the median voter from one relatively polarized position to the other.

Of course, this is precisely the point Henry Droop made in 1869, and that I am fond of quoting:

As every representative is elected to represent one of these two parties, the nation, as represented in the assembly, appears to consist only of these two parties, each bent on carrying out its own programme. But, in fact, a large proportion of the electors who vote for the candidates of the one party or the other really care much more about the country being honestly and wisely governed than about the particular points at issue between the two parties; and if this moderate non-partisan section of the electors had their separate representatives in the assembly, they would be able to mediate between the opposing parties and prevent the one party from pushing their advantage too far, and the other from prolonging a factious opposition. With majority voting they can only intervene at general elections, and even then cannot punish one party for excessive partisanship, without giving a lease of uncontrolled power to their rivals.

Both the essays by Wang and by Sides, taken together, show ways in which the single-seat district, plurality, electoral system simply does not work for the USA anymore. It is one thing if we really are representing district interests, as the electoral system is designed to do. But the more partisan a political process is, the more the functioning of democracy would be improved by an electoral system that represents how people actually divide in their partisan preferences. The system does not do that. It does even less well the more one of the major parties finds its votes concentrated in some districts (e.g. Democrats in urban areas). Gerrymandering makes the problem worse still, but the problem is deeper: the uneasy combination of a geography-based electoral system and increasingly distinct national party identities.

Spurious majorities in the US House in Comparative Perspective

In the week since the US elections, several sources have suggested that there was a spurious majority in the House, with the Democratic Party winning a majority–or more likely, a plurality–of the votes, despite the Republican Party having held its majority of the seats.

It is not the first time there has been a spurious majority in the US House, but it is quite likely that this one is getting more attention ((For instance, Think Progress.)) than those in the past, presumably because of the greater salience now of national partisan identities.

Ballot Access News lists three other cases over the past 100 years: 1914, 1942, and 1952. Sources disagree, but there may have been one other between 1952 and 2012. Data I compiled some years ago showed a spurious majority in 1996, if we go by The Clerk of the House. However, if we go by the Federal Election Commission, we had one in 2000, but not in 1996. And I understand that Vital Statistics on Congress shows no such event in either 1996 or 2000. A post at The Monkey Cage cites political scientist Matthew Green as including 1996 (but not 2000) among the cases.

Normally, in democracies, we more or less know how many votes each party gets. In fact, it’s all over the news media on election night and thereafter. But the USA is different. “Exceptional,” some say. In any case, I am going to go with the figure of five spurious majorities in the past century: 1914, 1942, 1952, 2012, plus 1996 (and we will assume 2000 was not one).

How does the rate of five (or, if you like, four) spurious majorities in 50 elections compare with the wider world of plurality elections? I certainly do not claim to have the universe of plurality elections at my fingertips. However, I did collect a dataset of 210 plurality elections–not including the USA–for a book chapter some years ago, ((Matthew Soberg Shugart, “Inherent and Contingent Factors in Reform Initiation in Plurality Systems,” in To Keep or Change First Past the Post, ed. By André Blais. Oxford: Oxford University Press, 2008.)) so we have a good basis of comparison.

Out of 210 elections, there are 10 cases of the second party in votes winning a majority of seats. There are another 9 cases of reversals of the leading parties in which no one won over 50% of seats. So reversals leading to a spurious majority occurred in 4.8% of all these elections; including minority situations, reversals occurred in 9%. The US rate would be 10%, apparently.

But in theory, a reversal should be much less common with only two parties of any significance. Sure enough: the mean effective number (N) of seat-winning parties in the spurious majorities in my data is just under 2.5, with only one under 2.2 (Belize, 1993, N=2.003, in case you were wondering). So the incidence in the US is indeed high–given that N by seats has never been higher than 2.08 in US elections since 1914, ((The original version of this statement, that “N is almost never more than 2.2 here” rather exaggerated House fragmentation!)) and that even without this N restriction, the rate of spurious majorities in the US is still higher than in my dataset overall.

I might also note that a spurious majority should be rare with a large assembly size (S). While the US assembly is small for the country’s population–well below what the cube-root law would suggest–it is still large in an absolute sense. Indeed, no spurious majority in my dataset of national and subnational elections from parliamentary systems has happened with S>125!

So, put in comparative context, the US House exhibits an unusually high rate of spurious majorities! Yes, evidently the USA is exceptional. ((Spurious majorities are even more common in the Senate, where no Republican seat majority since at least 1952 has been based on a plurality of votes cast. But that is another story.))

As to why this would happen, some of the popular commentary is focusing on gerrymandering (the politically biased delimitation of districts). This is quite likely part of the story, particularly in some states. ((For instance, see the map of Pennsylvania at the Think Progress link in the first footnote.))

However, one does not need gerrymandering to get a spurious majority. As political scientists Jowei Chen and Jonathan Rodden have pointed out (PDF), there can be an “unintentional gerrymander,” too, which results when one party has its votes less optimally distributed than the other. The plurality system, in single-seat districts, does not tote up party votes and then allocate seats in the aggregate. All that matters is in how many districts you had the lead–of at least one vote. Thus a party that runs up big margins in some of its districts will tend to augment its total in the “votes” column at a faster rate than its total in the “seats” column. This is quite likely the problem Democrats face, which would have contributed to their losing the seat majority despite their (apparent) plurality of the votes.
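
A toy example, with made-up numbers, illustrates the mechanism:

```python
# Two parties tie statewide, but party A piles up votes in one safe district
# while party B wins the rest narrowly. Five districts of 100 voters each.
party_a = [90, 40, 40, 40, 40]
party_b = [10, 60, 60, 60, 60]

print(sum(party_a), sum(party_b))                    # 250 250 -- a tie in votes
print(sum(a > b for a, b in zip(party_a, party_b)))  # 1 -- A wins only one seat
```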

Consider the following graph, which shows the distribution (via kernel densities) of vote percentages for the winning candidates of each major party in 2008 and 2010.

Figure: Kernel densities of winning candidates’ vote percentages, by party, 2008 and 2010

We see that in the 2008 concurrent election, the Democrats (solid blue curve) have a much longer and higher tail of the distribution in the 70%-100% range. In other words, compared to Republicans the same year, they had more districts in which they “wasted” votes by accumulating many more in the district than needed to win it. Republicans, by contrast, tended that year to win more of their races by relatively tighter margins–though their peak is still around 60%, not 50%. I want to stress, the point here is not to suggest that 2008 saw a spurious majority. It did not. Rather, the point is that even in a year when Democrats won both the vote plurality and seat majority, they had a less-than-optimal distribution, in the sense of being more likely to win by big margins than were Republicans.

Now, compare the 2010 midterm election, in which Republicans won a majority of seats (and at least a plurality of votes). Note how the Republican (dashed red) distribution becomes relatively bimodal. Their main peak shifts right (in more ways than one!) as they accumulate more votes in already safe seats, but they develop a secondary peak right around 50%, allowing them to pick up many seats narrowly. That the peak for winning Democrats’ votes moved so much closer to 50% suggests how much worse the “shellacking” could have been! Yet even in the 2010 election, the tail on the safe-seats side of the distribution still shows more Democratic votes wasted in ultra-safe seats than is the case for Republicans. ((It is interesting to note that 2010 was very rare in not having any districts uncontested by either major party.))

I look forward to producing a similar graph for the 2012 winners’ distribution, but will await more complete results. A lot of ballots remain to be counted and certified. The completed count is not likely to reverse the Democrats’ plurality of the vote, however.

Given higher Democratic turnout in the concurrent election of 2012 than in the 2010 midterm election, it is likely that the distributions will look more like 2008 than like 2010, except with the Republicans retaining enough of those relatively close wins to have held on to their seat majority.
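
For readers who want to draw the same kind of graph once the returns are certified, here is a minimal sketch; the input file and its columns are hypothetical stand-ins:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

winners = pd.read_csv("house_winners.csv")  # hypothetical: year, party, vote_pct
grid = np.linspace(40, 100, 200)            # winners' vote shares run from ~40% up

for (year, party), grp in winners.groupby(["year", "party"]):
    density = gaussian_kde(grp["vote_pct"])  # kernel density of winning shares
    plt.plot(grid, density(grid), label=f"{party} {year}")

plt.xlabel("Winning candidate's vote share (%)")
plt.ylabel("Density")
plt.legend()
plt.show()
```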

Finally, a pet peeve, and a plea to my fellow political scientists: Let’s not pretend there are only two parties in America. Since 1990, it has become uncommon, actually, for one party to win more than half the House votes. Yet my colleagues who study US elections and Congress continue to speak of “majority”, by which they mean more than half the mythical “two-party vote”. In fact, in 1992 and every election from 1996 through at least 2004, neither major party won 50% of the House votes. I have not ever aggregated the 2006 vote. In 2008, Democrats won 54.2% of the House vote, Republicans 43.1%, and “others” 2.7%. I am not sure about 2010 or 2012. It is striking, however, that the last election of the Democratic House majority and all the 1995-2007 period of Republican majorities, except for the first election in that sequence (1994), saw third-party or independent votes high enough that neither party was winning half the votes.

Assuming spurious majorities are not a “good” thing, what could we do about it? Democrats, if they are developing a systematic tendency to be victims of the “unintentional gerrymander”, would have an objective interest in some sort of proportional representation system–perhaps even as much as that unrepresented “other” vote would have.

ESP champs 2008-2010

Just poking around a bit further in the Electoral Separation of Purpose data, as pictured and explained previously.

I wondered who the “ESP Champs” were of these cycles.

For 2008, I hereby crown Gene Taylor of Mississippi, who won 74.5% in his district on the same day that Obama managed 31.7%. Now that’s separation of purpose!

He still managed 47% even in 2010. Not bad, but not good enough.

In fact, that 2010 result makes Taylor one of only four Democrats to have won, at the midterm, more than 45% of the vote in a district in which Obama had won under 35%. But to be crowned champion for 2010, you should actually have won your race. So the 2010 title belongs to…

Dan Boren of Oklahoma, who won 56.5% in a district in which Obama had won 34.5%. This result still represented a massive adverse swing against Boren, who had 70.5% in 2008. But he held on.

Boren and Taylor, by the way, are both Blue Dogs.

With ESP numbers like these, we can see why some “blue” congressmen in deeply “red” districts were less than keen these past two years in coming to the support of Obama’s policy priorities. (This was a topic that generated considerable discussion in another thread earlier this month.)

The new House and polarization

Adam Bonica has posted some must-see graphs at Ideological Cartography. The graphs really drive home just how polarized the new US House of Representatives will be. The mean Democrat and mean Republican (and I suppose “mean” has both meanings here!) will be farther apart than any recent House, and the median of the entire House will be much more to the right than any in the past–notably more than the one elected in 1994. This follows the House elected in 2006, which was by far the most left-leaning House we have seen.

Another of Bonica’s graphs shows the extent to which entering Republicans are heavily skewed right. Exiting Democrats were less concentrated at any ideological position within their party, but the ranks of the moderates are going to be notably thinner.

Bonica concludes that “The polarization resulting from the 2010 Midterms is fundamentally different and more worrisome than what had preceded it.” Worrisome indeed.

As for the Senate, he has an animated view of polarization since 1967.

Obama and ESP

When Barack Obama was elected President in 2008, the election produced the second lowest value of “Electoral Separation of Purpose” of the preceding five decades.

Electoral Separation of Purpose (ESP) is a concept developed in David J. Samuels and Matthew S. Shugart, Presidents, Parties, and Prime Ministers (Cambridge, 2010). It starts with the difference between presidential and legislative votes, at the district level, for a given party. It can then be expressed as a summary indicator: the average of the absolute values of all these differences.
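
In code, the indicator is a one-liner once district-level returns are assembled; the file and column names below are hypothetical stand-ins:

```python
import pandas as pd

districts = pd.read_csv("district_votes.csv")  # one row per House district
# ESP: mean absolute gap between a party's presidential and House vote shares.
esp = (districts["dem_pres_pct"] - districts["dem_house_pct"]).abs().mean()
print(esp)  # e.g., the book reports 10.45 for the Democrats in 2008
```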

For Obama and the Democrats in 2008, ESP=10.45. In the book, we considered 42 observations for the USA (both parties in 21 elections through 2004); the only one lower than what we would see in 2008 was 8.79 for Democrats in 1996, when Bill Clinton won reelection.

That ESP would be relatively low in the Obama era is yet another window on the much talked-about “polarization” of US politics: votes for Congress now tend to be more similar to presidential votes at the (House) district level. In other words, the fates of members of the House are more tied to that of their co-partisan president (or presidential candidate) than used to be the case. Voters apparently do not “want different things” from congress and president as much as they once did (for instance, 1972 and 1974, ESPs of 20.4 and 25.8, respectively).

It is worth putting the 2008 election in comparative perspective, comparing both to other countries and to past US elections. When compared to other countries, a value of 10.45 is not especially low. Even when we eliminate all cases where presidential and legislative votes are “fused” (meaning ticket-splitting is impossible, so ESP=0), we still find that the 2008 Democratic ESP is at about the 60th percentile among 383 party-year observations from around the world. Even with polarization and tied fates, there is still a lot of room for divergence between presidential and congressional vote shares in the USA.

What is interesting is the pattern of this divergence. Below is the graph, where each data point is one of the House districts in 2008. Ignore the distinction between triangles and circles for now; we’ll get to that.

Figure: ESP, US Democrats, 2008


It is striking that in districts where the Democrat has over 50% of the legislative vote, Obama tends to run behind his co-partisan House candidate. That is, there are notably more points above the equality line for winning House Democratic districts than there are below the diagonal. Districts where he runs ahead of the Democratic House candidate tend to be where the party loses the congressional race. For instance, if Obama won about 60% of the vote in a given district, the Democrat tended to win around two thirds of the House vote. But if Obama won around 45% of the vote, the Democratic House candidate tended to get closer to 35% of the vote.

This pattern, which would be reflected by some sort of S-curve, had I bothered to try to plot it, seems to be a common feature of US elections. The graph for Republicans in 2004 (ESP=10.98) looks very similar (see p. 135 of the book). It is not a prevalent pattern in other countries. I suspect it has something to do with the “personal vote” of Representatives; incumbents run ahead of their party’s presidential candidate because some voters who vote for the presidential candidate of the other party nonetheless support the incumbent. However, I have not yet broken the data down by incumbency. In the losing districts, of course, much of it has to do with the Democrats’ not recruiting high-quality candidates in districts they were not likely to win anyway (but having a “high-quality” presidential candidate). This is a companion to the personal-vote story: the Republican candidate was stronger and able to retain, for his party, voters who also voted for Obama.

Does the graph shed any light on the electoral debacle suffered by Democrats this week? Not directly, although one can see at a glance the numerous districts in which the Democrat won despite the district having voted for McCain. Now here is where those triangles come in: they represent the districts that the Democrats lost in the 2010 midterm election. Not surprisingly, there are a lot of those in the part of the graph where Obama’s vote is less than 50%. In fact, over half the Democratic losses came in McCain 2008 districts. If that’s not a (mini-)realignment, it certainly is a readjustment.

However, the Democrats lost 29 districts in which Obama had won a majority in 2008. And here is where the pattern of 2008 Democratic House winners frequently having run ahead of Obama becomes so important. They had a “cushion” against an adverse swing against them, stemming from Obama’s unpopularity at the midterm, and they most certainly needed it!

Figure: ESP, US Democrats, 2010

In this second graph we see that ESP actually declined further in 2010. At first, it may seem odd that one could go from unified to divided government, yet electoral separation of purpose decreased. But that is what happened. In 2010, ESP for Democrats dropped to 10.00. Note the near disappearance of winning Democrats who are more than about ten percentage points above where Obama was in their district in 2008. In fact, what really stands out here is the extent to which Democrats who won over 50% of their own district vote are concentrated very near, or slightly below, the equality line. That’s a good case of tied fates!

The S-curve pattern is gone, other than a continued bow in losing Democratic districts, where Obama’s 2008 vote is still higher (and often by a bigger margin) than the Democratic House candidate in 2010.

There are still some survivors in McCain districts, and they are about the only ones to still be running well ahead of Obama. If they could survive the great Democratic fall of 2010, they just might survive anything.

Now for the cross-time comparison. The following graph shows the ESP values for the president’s party for every US election since 1956, except for years following reapportionment and redistricting (and 1966, for mysterious reasons).

Figure: ESP in the USA since 1956

There is a clear trend in recent elections of declining ESP. No election for which we have data had ESP for the president’s party below 12.0 until 1996. The 1970s, and to a lesser extent the 1980s, were the days of high ESP, with Republicans often winning the presidency but Democrats keeping the House. Even in 1976, when Carter won, ESP was 14.55. Maybe this explains why Carter had so much trouble with his own party: they knew the president was less popular than they were. The graph from that election (not posted; I can’t post everything!) shows a huge bow of the S-curve above the equality line where practically all the Democratic House winners are found.

But note the almost steady downward trend after 1984, when Reagan was reelected. The 1994 midterm, when Democrats lost their House majority under Clinton, continued that downward trend. So 2010 is not unique in being an election that produces a transition to divided government yet sees ESP drop. However, in spite of the decline in ESP, it was still the case then that most Democratic winners in 1994 were running ahead of where Clinton had been in 1992. Part of this is owed to the three-way presidential race in 1992. (All these graphs show actual vote percentages, not percentages of the “two-party vote.”) But then Clinton and the Democrats had tightly shared fates in 1996.

After a big upward blip in ESP in 1998, when Democrats had a rare seat gain in a midterm election, we enter the 2000s with ESP hovering in the 10-12 range.^

We really are in uncharted territory by US standards. We have not seen such closely tied presidential and legislative electoral fates at any other point in the last five decades or more.

What this might mean going forward is hard to say. I don’t have that kind of ESP! Or maybe it is not so hard. If Obama is reelected in 2012, it is unlikely to be with a broad personal victory like Nixon in 1972 and Reagan in 1984, which represent two of the three highest ESP concurrent elections. (The other is 1988, when the senior Bush effectively won Reagan’s “third term.”) But therein lies a ray of good news for Democrats–who are surely looking for such rays about now. Normally, if a President is reelected, he does so without much of a “pull” on the House races. However, we have already seen two incumbent presidents win a second term with a drop in ESP. In addition to Clinton, already mentioned as the lowest US ESP so far, the same happened with G.W. Bush (ESP=12.27 when he, uh, became president in 2000,* and a drop to 10.98 in 2004).

In such a low-ESP environment, with partisan fates so tied, it is entirely plausible that a reelected Obama would carry enough of that cluster of districts near 50% to regain a House majority. If he loses, of course, then so might several more Democratic House members. Such are the perils of governing and campaigning when electoral separation of purpose is tending to run so low, by historic US standards.


^ The 1998 plot shows a large number of Democratic winners well above where they had been in 1996, and thus also well above where Clinton ran in their districts in his low-ESP reelection in 1996. (This footnote was added a couple of days after initial planting.)

* ESP for Democrats in 2000 was a little higher (13.07), presumably because Gore ran well behind many Democratic incumbents. That the value would be so much higher than it had been for the Clinton-Gore team in 1996 really drives home how much Gore failed to cement the Democratic coalition that swung so tightly behind Clinton in 1996.

Self-execution

Given that, some time around the mid-1990s, the US entered the brave new world of relatively unified partisan voting–relative to its own past, not to most democracies–it is hardly surprising that recent Houses of Representatives have used things like “self-executing” rules to pass bills.

I scarcely pay attention to the various noise machines that constitute “debate” in the US media, but some of it penetrates anyway, and I have been befuddled over all the flap over the use of such procedures.

Some on the right (which, to be clear, used the tactic when it was in power) even claim that what the Democrats are prepared to do today to pass their bill is unconstitutional. Last time I checked, the constitution was pretty clear that each chamber of the legislature had blanket authority to do what it wants with regard to internal procedure. (In fact, it is that blanket authority upon which rests the right’s cherished–at least for now–Senate filibuster rule.)

I will count myself as among those who would like to see more, not less, use of self-executing rules. Along with similar (and similarly derided) rules like “fast track,” such rules are among the few devices that exist in the fragmented US political system for promoting collective accountability. By limiting amendments and debate, and likewise limiting individual accountability of members for difficult votes, self-executing rules and fast track enhance the capacity of parties to act–and to be held accountable at the next election. In other words, they are fundamental devices of democracy.

See also the good insights at PoliBlog (on the procedures and on the bill itself).