Quoted in 538 article about House size

Very pleased with this article by Geoffrey Skelley at FiveThirtyEight: “How The House Got Stuck at 435 Seats.” The author interviewed me for the piece, and references my work and that of coauthors, including Rein Taagepera, the discoverer of the cube root law of assembly size.

There is also a good visualization tool for how each state would be over- or under-represented with various House sizes.

Chamber size and party ‘strength’

What do folks think the correct answer to this question is: How does the size of an assembly affect the strength of political parties?

By strength, I mean the relative freedom of the individual member to cultivate constituency ties and to dissent from party leadership on legislative votes. And I mean this holding other factors constant.

Suppose a country’s assembly is significantly smaller than its expected size, per the cube-root law. If nothing else changes, how would raising the size be expected to affect the strength of parties?

Obviously, I am thinking about potentially expanding the US House, so a starting point of non-hierarchical parties, and only two of them (and presidentialism, etc.). But I am interested in the question more broadly, and whether features of US party and legislative politics, aside from the small House size, change the impact of increased size on party strength in a manner that might be different from how it would play out in other contexts.

I ask because I genuinely do not know. I could see it going either way. A larger house, for a given population, means each member represents fewer voters, obviously. This could make personal-vote and constituency-service strategies more viable, thereby in some sense making parties “weaker”. On the other hand, a larger assembly (here, independent of population) makes internal collective action more challenging. This could result in members delegating (or simply losing) more authority to internal party leadership, making parties “stronger.” Note that these possible directions of change are closely connected to the two factors that go into the cube root law itself–this is a logical model that is based on balancing (and minimizing overall) two types of “communication channels”: those between legislators and constituents, and those among legislators themselves.
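
To make the channel-balancing intuition concrete, here is a minimal numerical sketch. It is a stylized stand-in rather than Taagepera's actual derivation, and the population figure and load function are illustrative assumptions only: if each member must handle the P/S channels to constituents plus the roughly S²/2 channels among the members themselves, the per-member load is minimized near S = P^(1/3).

```python
# Stylized sketch only (not Taagepera's exact model): per-member communication
# load = constituent channels (P/S) + intra-assembly channels (~S^2/2).
def per_member_load(seats: int, population: int) -> float:
    return population / seats + seats * seats / 2.0

population = 330_000_000  # illustrative figure, roughly the current US population
best = min(range(100, 2001), key=lambda s: per_member_load(s, population))
print(best, round(population ** (1 / 3)))  # both come out near 690
```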

It is possible both directions of change can happen at the same time, implying parties get weaker in some ways and stronger in others. That is, more constituency-oriented behavior, but also more party leadership control over votes and especially over speaking time. I am not sure what that means for overall strength. Maybe that isn’t even the right way to frame the question; skepticism over my own question framing is why I use the inverted commas in the title of this post.

Finally, theoretically and all else equal, a larger assembly means more parties should be represented (per the Seat Product Model). I have my doubts that this would be realized in the US, however, given all the other barriers to third-party representation. Unless the House were truly huge, I do not expect much impact there as long as it is elected in single-seat districts, and with primaries (or with “top two” or even “top four” or five). However, parties’ internal strength could be affected. But which way?

Emergency electoral reform: OLPR for the US House

Because the constitutional emergency is likely too deep to just turn the page, small-d democrats face an emergency of another kind. The need to adopt proportional representation has never been greater. The country simply can’t afford the risk that the Republican Party does nothing fundamental to reform itself, and wins back the House in 2022. A change to some form of moderate proportional representation (PR) is essential.

Given the current balance of power in the House, the Republicans would need to flip only about seven seats in 2022. (There are currently three vacancies.) With rare exceptions, presidents’ parties lose votes and seats in midterm elections. With the balance so tight, there is almost nothing to stop Republicans from winning back control of the House, other than perhaps if they descend into internal party chaos. They just might do that. They might even split. But I don’t like seeing the fate of the republic depend on Republicans finding yet another way to squander an easy electoral win that is theirs for the taking.

I am not arguing for a change to PR only for the sake of the Democratic Party. In fact, my argument is that this is a way for Republicans to save their own party. The country needs functioning pro-democratic parties on both the center-left and the center-right. At the moment, it has such a party only on the center-left, and even that is a temporary ceasefire amidst a deepening internal division.

Cleavages in American politics today and the need for PR

I would identify three key cleavages in American politics at the moment. (Note: issue positions and cleavages are very much not my academic specialty. I admit I am simplifying, but the divisions I identify should be reasonably accurate as a broad summary.)

First, there is the Republican–Democratic cleavage. This one is almost evenly divided, which explains a lot of the current partisan polarization. Hold together just enough–avoid the proverbial circular firing squad–and you can win.

Second, there is the democratic–authoritarian cleavage. On this one, the pro-democratic segment extends all the way from the leftmost large-d Democrats to somewhere near the middle of the Republican Party. The pro-Trump, white-supremacist, election-denying wing of the Republican Party has shown itself to be completely willing to set aside democracy, and even to promote/tolerate political violence, in order to advance its political agenda. This wing is a cancer that must be removed from the right-wing bloc that currently consists solely of the Republican Party.

Third, there is, for lack of a better term, the capitalist–socialist cleavage. This one obviously divides the Democratic Party. On one side are Democrats who generally take a more gradualist view of the need for economic policy change and who, like nearly all of the right, are free-market oriented. On the other, left or “progressive,” side are Democrats who emphasize various proposals to remake the economic model (including less commitment to free trade), whether or not “socialist” is the correct term or even the term they favor. Think Bernie Sanders and his supporters, as well as some of the “progressive” wing of the Democratic Party.

Basically, the point is that there are (at least) two “rights” and two “lefts,” but currently only one party on the right and one on the left. And the emergency is that one of the “rights” has abandoned democracy and shown a willingness to accept political violence.

The need for PR is to let the free-market small-d democrats in the currently existing parties act independently of their more extreme wings. This is precisely what PR systems permit–each side’s extreme can be its own party rather than a wing of one majority-seeking party, without raising concerns over “spoilers” that arise under plurality elections.1

As I already conceded, I am oversimplifying a complex political scene for the sake of argument. I also am not going to go into the details of how actual coalitions would work under this stylized latent four-party system that PR would allow to break forth. Both the need for electoral coalitions in single-winner offices (Senator, President, governors), and forging legislative coalitions among these parties in the House, would complicate the flexibility of alliances that one obtains when PR is used to elect a single dominant institution (as in many parliamentary systems). The point is simply that PR offers the best means of generating center-spanning coalitions to control House majority outcomes, in contrast to the current system’s generation of majorities that include a fringe–a nakedly authoritarian fringe in the case of the party most likely to win a majority in 2022 under current rules.

So, we need PR to save democracy. But what kind of PR? I would take any kind over the system we have now! But I think there is one that recommends itself because it is the easiest to implement, for voters to understand, and for election authorities to administer.

A model of open-list PR for the US

I favor open-list PR not because it is the “best” system or my personal favorite. Strong cases can be made for single transferable vote (STV, which is a form of ranked-choice voting in multi-seat districts) or for mixed-member proportional (MMP). However, open-list proportional representation (OLPR) best meets the criteria of simplicity in implementation, voting, and administration. My argument for OLPR is inspired partly by my own sense of what is workable, but even more by a post by Jack Santucci.

It literally could be made our electoral system tomorrow, as follows (I am setting aside the fact that there is a reapportionment and redistricting taking place in 2021-22 in my “tomorrow” scenario). Take 3–5 existing contiguous single-seat districts and merge them into the multi-seat districts needed for PR. Thus the proposal is for districts with district magnitude (M) of three to five. (Later I will address states that have fewer than three Representatives.)

Each voter would have one vote for a candidate, just as now, but the ballot would list all the candidates of each party that are running in the larger multiseat district (up to M candidates per list). The initial allocation of seats would be based on summing votes of party candidates nominated to each list, using one of the standard PR allocation rules (I’d favor D’Hondt, but various others could be fine). Then, once each list’s seat total is determined through the application of the PR formula to its collective vote total, its top s vote-earners get the list’s seats (where s is simply the number of seats the list has won). This is standard OLPR, or more formally, it is quasi-list PR, because there is no opportunity to cast a vote for the list as a whole.2
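
To illustrate the mechanics, here is a minimal sketch of that allocation step: candidate votes are pooled by list, the lists' seats are allocated by D'Hondt, and each list's seats go to its top vote-earners. The district, list names, and vote totals are invented purely for illustration.

```python
from collections import Counter

def dhondt(list_totals: dict[str, int], seats: int) -> Counter:
    """Allocate seats among lists by D'Hondt (highest averages, divisors 1, 2, 3, ...)."""
    won = Counter()
    for _ in range(seats):
        best = max(list_totals, key=lambda p: list_totals[p] / (won[p] + 1))
        won[best] += 1
    return won

def olpr_winners(candidate_votes: dict[str, dict[str, int]], seats: int) -> list[str]:
    """Pool candidate votes by list, allocate seats by D'Hondt, then fill each
    list's seats with its top vote-earning candidates."""
    list_totals = {lst: sum(votes.values()) for lst, votes in candidate_votes.items()}
    allocation = dhondt(list_totals, seats)
    winners = []
    for lst, s in allocation.items():
        ranked = sorted(candidate_votes[lst], key=candidate_votes[lst].get, reverse=True)
        winners.extend(ranked[:s])
    return winners

# Hypothetical three-seat district with three lists (all names and totals made up):
district = {
    "List A": {"A1": 40_000, "A2": 25_000, "A3": 10_000},
    "List B": {"B1": 30_000, "B2": 22_000, "B3": 8_000},
    "List C": {"C1": 15_000, "C2": 5_000},
}
print(olpr_winners(district, seats=3))  # ['A1', 'A2', 'B1']
```

In this made-up example, List A's pooled 75,000 votes earn it two of the three seats, List B's 60,000 earn it one, and List C's 20,000 earn it none, so A1, A2, and B1 are elected.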

An important question is how to handle nominations to the lists. Personally, I’d prefer to get rid of primaries: when there is a wider range of choice among both party lists and candidates on those lists, primaries arguably are not needed. However, no proposal that abolishes primaries is likely to fly, politically. I would not let that bog the emergency reform down. I propose embracing ideas that are already out there and being pushed by the independent-politics reformers, such as “top two” and “top four”.

How would this work? One could continue to hold a “primary” in each of the existing single-seat districts; I will now call these nominating districts to distinguish them from the larger general-election districts. The goal here is to avoid making it as unwieldy as it could be if primaries were held in the larger districts to be used in the general election. The first round (call it a primary even though it would stretch the definition thereof) would advance the top c candidates from each nominating district, where c could be four but could be some other number agreed upon.3 Presumably, as is the case in California’s “top two” currently, the candidates themselves would indicate what party they affiliated with on the primary ballot, but use of the label would not be restricted by any central actor in the state party (or the party at any other level).4

Then, between this primary and some date in advance of the general election, let the top cM candidates for the larger general-election district negotiate who goes on whose list and how those lists are branded. The party labels could be ones that are registered in advance of the election (i.e., before the primary) as is currently the case in many states, or it could be left completely open for actors to negotiate between rounds. This is an important detail, but not one I think should be essential to advancing the wider proposal. It could even be a matter of individual states to sort out.

The idea here is that the top-c first round in nominating districts, followed by negotiations over lists for the general, encourages those who have advanced to a slot on the general election ballot to cooperate in order to maximize their seat-winning potential in the OLPR process. At the same time, however, it allows these candidates and their allies to keep anyone who has qualified for a ballot slot off their list if he or she is too extreme for the brand they want to cultivate.

If general-election lists are restricted to M candidates, then in any case where two or more of the same party have qualified from a given nominating district, one will have to be left off the list, unless there is another nominating district where no candidate of that party qualified. The objective here is not to force any set of candidates to run together. Local actors, including the candidates, decide. They have to balance the supporters that a given candidate can bring against the risk that the candidate drives away other voters, in a context in which any given list is likely to win one or two seats in a three-seat general-election district (or one to three in a five-seat district), rather than 100% of a district’s representation, as under the current system. I am not wedded to the various components of this idea, and am completely open to other ideas. The wider point is that there are reformers who dislike parties and there are reformers who want stronger parties. I am looking for a way to thread a narrow needle and build a reform coalition–under emergency conditions.

What do we do when coordination fails–when more candidates of a given party have qualified in some district than the M who can fit on a list, and they cannot agree on which M get to use the name? While I would not normally advocate multiple lists within a party, I’d be willing to allow it to make the idea of lists and PR work. Also, any candidates who, having qualified in the primary, do not find partners to go in together on a list should be free to run as independents.

I should conclude this section by noting that my OLPR proposal is totally severable from my nominating-districts and “top c” proposals. If the latter get in the way of OLPR, I am happy to drop either or both. My ambition is to help make the transition to OLPR politically smoother, by retaining smaller geographic entities as politically meaningful aspects of the implementation of PR (through the nominating districts), and retaining the “bottom up” qualification of general-election candidates that is a hallmark of the current system. The overriding objective is to let different wings of current parties compete separately, outside of a majoritarian context in which splits become spoilers, and general-election candidates are sometimes extremists themselves or are in debt to extremists in their party. Avoiding these pitfalls of the current system is the very essence of PR.

Other issues

I am assuming this proposal stays within the current 435-member House. There are arguments to be made in favor of increasing the size of the House, but I have my doubts that a larger House is by itself inherently valuable. It certainly is not worth the risk of its becoming a poison pill that prevents PR. If advocates of electoral reform make a larger House seem like a condition of electoral reform, the cause of reform is probably doomed.

With a 435-seat House, and even with any House of reasonably achievable larger size, there will remain states with only one or two members. These states will obviously not be able to have districts that elect 3–5 members apiece. So what? Many PR systems have a few districts with one or two members, even when their national average magnitude is larger. This is not a reason to reject a proposal for reform. States that have one Representative could be encouraged to adopt ranked-choice voting, but should not be required to do so.

I should address why I do not advocate STV as the overall system for the House, given the current fashion in some circles for ranked-choice voting solutions. This is not the place to go into reasons why STV may not be desirable in its own right. It has some strong positive features, but also some negative ones. The biggest negatives are the need for voter education, substantially changed ballot formats, and already overstretched election administrators having to adapt their routines to make the more complex counts work. OLPR allows all of this to be as close to the status quo as possible, while still getting PR.5

What about MMP? I have been known to argue it is a good system. However, absent a substantial increase in House size, it has some real drawbacks. The single-seat districts have to become considerably larger geographically for MMP to work with the existing state delegation sizes. (The list tier for MMP in the US surely would be state-by-state, or regions within larger states, not nationwide or otherwise multi-state.) The OLPR proposal that I am advancing here also means larger general-election districts, but has the advantage of having more than one member elected from each of these larger districts, while also retaining the more compact districts for nominations. An additional drawback of MMP in the American national context is how to implement the list tier: either closed lists, which might be politically unpalatable, or open lists alongside the two-tier structure, which adds a considerable further complication.6

So, no, I have not abandoned my general preference for MMP, nor am I claiming STV is a “bad” system. I simply am arguing that OLPR is a good solution to an immediate emergency for democracy.

Conclusion

We must find a way to prevent a new House majority from being elected in 2022 that is under the effective control of an anti-democratic wing. The voters who prefer a center-right party are not going to vote for the existing Democratic Party as long as they fear (rightly or wrongly) that that party is coming under the control of its own extreme “socialist” wing. Voters need choices that are more moderate, as well as parties that can represent voters with grievances that lead them to reject mainstream politics. What we need to avoid is a mainstream party winning a majority of seats while under the control of its grievance-based authoritarian extreme.

I am under no illusions that this will be easy. I certainly accept that any PR proposal is more likely to fail than to pass. It requires more institutionally oriented Republicans to see a clear and present danger in continuing to work within a party that has a strong and undeniable anti-democratic tendency, and to conclude that the tendency is too large to be contained from within. It also is not going to be immediately embraced by the Democratic establishment, which just won all three elected components of the federal government and would first have to realize just how fragile and transient its control is.

Difficult though it is to get this proposal accepted, we are in a situation where an emergency exists for democracy. So let’s get to work!

_________________

[Over the years I have done many posts on the idea of adopting proportional representation (of some form) in the US. Please click here and scroll to see them all.]

_________________

Notes

  1. Advocates of ranked-choice voting in single-seat districts (also known as the “alternative vote” or “instant runoff”/IRV) will say that their preferred system also avoids the spoiler problem. This is not fully correct. The issue is that this view takes a district-level perspective. The point of PR is to avoid “spoilers” in larger ideological blocs. Getting the same result from IRV requires something approaching uniform distribution of those blocs across districts, or at least for each group within a bloc to have its own local strongholds, so that the parties/factions within a bloc can meaningfully trade preferences. Otherwise, it mostly leads to the same issues as plurality voting, whereby to win, the larger party/faction within the bloc must appeal to voters of the other. The case for IRV in the current emergency would rest on an assumption that, within the right, the authoritarians are the smaller component. If they are not, they will either win from preferences of those on the moderate right, or will potentially win pluralities of the vote when many voters don’t give second preferences. (We can’t be certain that voters for the mainstream center-right will preference a party on the mainstream left. Maybe they will, maybe not.) This brings me to the final issue: IRV advocates tend to overlook that the best case for the system assumes compulsory preferences, which are unlikely to be adopted (and may even be held unconstitutional) in the US. If many voters give only first choices, then IRV is more or less the same as plurality.
  2. Such an option could be added, but I am trying to keep it as familiar as possible while still getting PR.
  3. It might be wise to set c to the same value as the general-election M; it certainly should not be much smaller than M.
  4. I don’t think anything that generates such control over labels is politically palatable in current American politics, even though most political scientists would say it is desirable.
  5. If the reform included a clause allowing individual states to opt for STV instead of OLPR, I would not object.
  6. There is also the need to prevent parties from gaming MMP with “dummy” lists. This has been discussed previously on this blog. It is a serious problem that can’t be dismissed, and so I’d rather just sidestep it in designing a proportional system for the US in the present moment.

US House size increase: Inherently valuable?

We have frequently discussed here the question of the size of the US House. As regular readers will know, the House is undersized, relative to the cube root law, under which an assembly’s size is expected to be approximately the cube root of the country’s population. The law is both theoretical (grounded in a logical model) and quite strong empirically (see the graph posted years ago). However, the US House is far smaller than the cube root predicts, which would be somewhere north of 600 seats. In fact, the House has been fixed at 435 for more than a century,1 even as the population has grown greatly.

So there is a good political science case to be made for expanding House size. My question here is whether expanding the House is something that reformers should pursue for its own sake. Or is it of subordinate value?

I ask because many advocates of a move to proportional representation (PR) will tend to believe that PR would work better in a larger House. The larger the House, the fewer states there are with only one Representative, wherein obviously a plurality or majority system remains the only option.

Strategically, however, it could be a mistake for the PR movement to hitch its wagon to the House expansion movement. If PR is attached to the idea of “more politicians” it is probably in a lot of trouble. Advocates for democracy reform might prefer both a larger House and PR, but wouldn’t most of us prefer PR to a larger House, if we can have only one or the other? (Perhaps I will engage in blasphemy, but I might trade off a somewhat smaller House if it were necessary to get PR. In other words, I value PR ahead of almost any reform I can imagine.)

Another way to look at this is, would the reformist “capital” spent on getting a larger House be worth it if we ended up with 650 single-seat districts instead of 435? I have my doubts.

While a larger House should result in more parties represented, independent of the electoral system, I am not sure we would actually see that under otherwise existing US political and institutional conditions. As I’ve noted many times, the Seat Product Model says that the US “should” have a party system with more than two parties, with the largest one averaging around 47% of the seats, instead of our actual average, which is obviously greater than 50%. It should have an effective number of seat-winning parties of about 2.75, even with 435 seats. With 650, the expectation rises to 2.94 (and a largest averaging just under 45% of the seats). In the real USA where there are really only two parties, and we keep single-seat districts, do we have any reason to believe just adding about 200 seats (let alone a more realistic 100 or so) would result in any increase in representation of other parties? I doubt it.
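
For anyone who wants to check the arithmetic, those figures follow directly from the Seat Product Model's published formulas, using the seat product MS (mean district magnitude times assembly size): N_S ≈ (MS)^(1/6) for the effective number of seat-winning parties and (MS)^(-1/8) for the largest party's seat share. A quick sketch:

```python
# Seat Product Model expectations with single-seat districts (M = 1).
for assembly_size in (435, 650):
    ms = 1 * assembly_size
    n_s = ms ** (1 / 6)    # effective number of seat-winning parties
    s1 = ms ** (-1 / 8)    # expected seat share of the largest party
    print(f"S={assembly_size}: N_S = {n_s:.2f}, largest party = {s1:.1%}")
# S=435: N_S = 2.75, largest party = 46.8%
# S=650: N_S = 2.94, largest party = 44.5%
```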

So, why bother? Is the value of a smaller number of people per Representative so strong that we want it regardless of how the party system pans out? I worry it actually could have a deleterious effect. Other things equal, more seats means more homogeneous districts. Some of those could be minority districts that can’t now be drawn (given other criteria in district line-drawing) and, of course, those minorities in theory could be minority-party supporters as well as nonpartisan minorities (racial and ethnic, etc.). The latter is valuable, of course. But a concern is that in an existing and likely persistent two-party system you simply end up with more safe seats (Brian Frederick notes this possibility in his book on US House size, even as he argues in favor of an increased size). We have plenty of safe seats already! If we had multiparty politics to start with, I think a larger House would help smaller parties win more seats, and possibly render districts on average more competitive. But in a two-party system, I think it makes districts on average less competitive. (I am not sure about this, so discuss away in the comments!) As for racial and ethnic minorities, I am skeptical that we get enough of a boost from a larger number of single-seat districts to make the tradeoffs in less competitive elections worth it. They’d be better represented by PR anyway, obviously.

Bottom line: With so many reformist needs in US democracy, I don’t think House size is worth pursuing, unless it can be in a package that gets us PR. It certainly should not be allowed to be the poison pill that prevents getting PR, as I fear it could be, were we ever otherwise in a place where PR was a live option.

____

  1. Except for temporary increases to accommodate Alaska and Hawaii; at the next census and reapportionment, it reverted to 435.

The 2018 Ranked Choice Voting Election in Maine’s Congressional District No. 2

Maine’s recent congressional election – the first-ever federal poll in the U.S. to be held under Ranked Choice Voting (RCV) – took place against a backdrop of continuing opposition by the Republican Party to the recently introduced voting system. State GOP leaders not only called on their voters to rank just the party’s candidates, but sought as well a court ruling to prevent RCV from determining the outcome of the U.S. House of Representatives election in Congressional District No. 2, and subsequently a recount of all ballots in the election (later called off while it was underway).

Nonetheless, a significant minority of Republican voters in the district ignored party exhortations and indicated valid rankings for at least two candidates, while substantial minorities of non-GOP voters gave only a first preference to Democratic or independent candidates. At the same time, the number of voters who engaged in bullet voting – indicating a preference for just one candidate – constituted a minority of the voting electorate in CD-2. This is notable when one considers that in both the 2016 and 2018 RCV referendums held in Maine, a majority of voters in CD-2 rejected the switch from plurality voting.

Moreover, a federal judge first allowed the RCV count to take place, and subsequently issued a ruling upholding the constitutionality of the election in the congressional district. Incumbent Republican Bruce Poliquin obtained the largest number of first preference votes but fell short of an absolute majority; he then lost the second and final round of counting to Democratic challenger Jared Golden, who prevailed with 142,440 votes (50.6%) to Poliquin’s 138,931 (49.4%) after the elimination of independent candidates Tiffany Bond and Will Hoar, whose second preferences were transferred to the remaining two candidates. The First Circuit Court of Appeals subsequently denied Congressman Poliquin’s motion for an injunction to prevent Golden from being declared the winner, and Poliquin – who wanted the election outcome determined solely by the first preference count, or by a re-run under plurality voting – dropped his lawsuit challenging RCV shortly thereafter.

The following table, based on a tally of 296,077 cast vote records in CD-2 published by Maine’s Secretary of State on his official website, shows the distribution of first preference votes for each candidate, according to whether the ballot also had at least a valid second preference for another candidate (“Preference”) or no second or successive preferences for a different candidate (“Bullet”). The “Other” category groups ballots with valid first preferences but no valid second or successive preferences, due to either overvoting on the second preference ranking (indicating a second preference for more than one candidate), undervoting (leaving blank more than one consecutive ranking beyond the first preference while indicating preferences for other candidates), or a combination of both. State of Maine 2018 Ranked Choice Voting (RCV) Election Data has frequency counts for all 1,564 tallied preference combinations in the CD-2 election.

Candidate       Bullet      %   Preference      %   Other    %     Total
Bond (I)         4,333   26.2       12,106   73.1     113   0.7    16,552
Golden (D)      51,423   39.0       79,551   60.3   1,039   0.8   132,013
Hoar (I)         2,120   30.8        4,713   68.6      42   0.6     6,875
Poliquin (R)    89,228   66.5       43,955   32.8   1,001   0.7   134,184
Total          147,104   50.8      140,325   48.5   2,195   0.7   289,624

There were 435 overvotes and 6,018 undervotes in the first preference count; the latter figure – which included 5,711 ballots undervoted on all available rankings – is noticeably lower than the reported number of blank ballots in the plurality-based 2014 and 2016 U.S. House elections in CD-2, and it is also lower than the number of blank ballots in the district for this year’s gubernatorial election in Maine, which was also carried out by plurality voting. At the very least, these numbers indicate the introduction of RCV did not bring about an increase in the number of blank or invalid ballots. In addition, the very low number of overvotes strongly suggests there was little voter confusion about the new electoral system.

Of the 147,104 voters in CD-2 who indicated valid preferences for just one candidate, 137,971 marked a single ranking (in 315 of these cases the single ranking was a second preference rather than a first), while an additional 9,133 voters marked more than one ranking but all for the same candidate; a large majority of the latter – 7,706 voters – gave all five preference rankings to their chosen candidate. Under Maine’s RCV counting rules, these votes had the same effect as indicating only a first preference for the selected candidate. However, while ballots with valid preferences for just a single candidate constitute a narrow majority of the valid first preference votes, they represent a minority of 49.7% of all votes cast in the district. By contrast, in both the 2016 and 2018 RCV referendums, CD-2 reported a majority of votes against RCV among both the valid and the overall vote totals. Moreover, even among voters casting valid first preferences, those who indicated a first preference only were a minority of 47.6%.

Bullet voting for the two major-party candidates had no impact on the CD-2 election outcome, since their first preferences were tallied in the second count as preferences for continuing candidates. However, the 6,453 ballots with valid rankings for either Bond only or Hoar only made up the bulk of the 8,253 non-transferable votes in the second count (most of the remaining 1,800 votes in that group had valid rankings for both Bond and Hoar, but not for the other two candidates). It has been suggested that these voters were confused as to which candidates would make it to the second count, but a far more likely explanation is that they simply wished to support the independent candidates only and didn’t care for either of the two major-party candidates. In fact, their behavior is functionally the same as that of voters in traditional runoff systems casting a blank or invalid ballot in the runoff election, after the candidates they originally supported were eliminated in the first round of voting. Moreover, Bond and Hoar first preference voters had the lowest proportion of bullet voting, at just under two out of seven ballots cast for them.

The overall distribution of bullet votes and preference votes in the CD-2 election closely resembles the 2016 and 2018 RCV referendum outcomes, and it would seem this is not entirely a coincidence: in towns with more than ten voters, there were moderately strong correlations between bullet voting in 2018 and opposition to RCV in 2016 (0.62), as well as between preference votes in 2018 and support for the new electoral system two years earlier (0.64); when the correlations were calculated on the basis of valid votes only, both stood at 0.63.
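
For readers who want to reproduce the town-level comparison, the statistic is just a Pearson correlation across towns. A minimal sketch, with an entirely hypothetical file and column names standing in for the cast-vote-record and referendum data:

```python
import pandas as pd

towns = pd.read_csv("cd2_towns.csv")          # hypothetical town-level file
towns = towns[towns["votes_2018"] > 10]       # towns with more than ten voters
# Share of 2018 ballots ranking one candidate only vs. share voting "no" on RCV in 2016
print(towns["bullet_share_2018"].corr(towns["no_on_rcv_2016"]))  # Pearson r by default
```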

In conclusion, not all GOP voters in CD-2 ranked Congressman Poliquin only (nearly a third cast a preference vote), nor did all Golden voters (or those backing independent candidates Bond and Hoar) rank additional candidates – almost three out of eight non-Poliquin voters did not. There was little evidence of voter confusion, and casting a preference vote or a bullet vote may have been indicative of ongoing support for RCV or opposition to it, respectively; if so, the election outcome did not point to growing opposition to the newly adopted electoral system. Just as important, these findings should leave no doubt that RCV is not a clear-cut partisan issue.

NYT endorses a larger House, with STV

Something I never thought I would see: The editorial board of one of the most important newspapers in the United States has published two separate editorials, one endorsing an increase in the size of the House of Representatives (suggesting 593 seats) and another endorsing the single transferable vote (STV) form of proportional representation for the House.

It is very exciting that the New York Times has printed these editorials promoting significant institutional reforms that would vastly improve the representativeness of the US House of Representatives.

The first is an idea originally proposed around 50 years ago by my graduate mentor and frequent coauthor, Rein Taagepera, based on his scientific research that resulted in the cube root law of assembly size. The NYT applies this rather oddly to both chambers, then subtracts 100 from the cube root result. But this is not something I will quibble with. Even an increase to 550 or 500 would be well worth doing, while going to almost 700 is likely too much, the cube root notwithstanding.

The second idea goes back to the 19th century (see Thomas Hare and Henry R. Droop) but is as fresh and valid an idea today as it was then. The NYT refers to it as “ranked choice voting in multimember districts” and I have no problem whatsoever with that branding. In fact, I think it is smart.

Both ideas could be adopted separately, but reinforce each other if done jointly.

They are not radical reforms, and they are not partisan reforms (even though we all know that one party will resist them tooth and nail and the other isn’t exactly going to jump on them any time soon). They are sensible reforms that would bring US democracy into the 21st century, or at least into the 20th.

And, yes, we need to reform the Senate and presidential elections, too. But those are other conversations…

The US Supreme Court gerrymandering case

I do not have time to dissect the arguments before the US Supreme Court in the case concerning the permissibility of the partisan gerrymander in Wisconsin. It clearly is a case of great importance to issues we care about at this blog. So, feel free to discuss here.

I highly recommend two pieces by Michael Latner:

Sociological Gobbledygook or Scientific Standard? Why Judging Gerrymandering is Hard (4 Oct.)

Can Science (and The Supreme Court) End Partisan Gerrymandering and Save the Republic? Three Scenarios (2 Oct.)


Republicans will likely keep their House majority – even if Clinton wins by a landslide – and it’s because of gerrymandering.

By Michael Latner

While the presidential race has tightened, the possibility of Donald Trump being defeated by a wide margin has some Republicans worried about their odds of retaining control of Congress. However, only a handful of Republican-controlled districts are vulnerable. Speaker Paul Ryan’s job security and continuing GOP control of the House is almost assured, even if Democrats win a majority of the national Congressional vote. How is it that the chamber supposedly responsive to “The People Alone” can be so insulated from popular sentiment? It is well known that the Republican Party has a competitive advantage in the House because they win more seats by narrow margins, and thus have more efficiently distributed voters. What is poorly understood is how the current level of observed bias favoring the GOP was the result of political choices made by those drawing district boundaries.

This is a controversial claim, one that is commonly challenged. However, in Gerrymandering in America: The House of Representatives, the Supreme Court, and the Future of Popular Sovereignty, a new book co-authored by Anthony J. McGann, Charles Anthony Smith, Alex Keena and myself, we test several alternative explanations of partisan bias and show that, contrary to much professional wisdom, the bias that insulates the GOP House Majority is not a “natural” result of demographic sorting or the creation of “majority-minority” districts in compliance with the Voting Rights Act of 1965. It is the result of unrestrained partisan gerrymandering that occurred after the 2010 Census, and in the wake of the Supreme Court’s 2004 decision in Vieth v Jubelirer, which removed legal disincentives for parties to maximize partisan advantage in the redistricting process.

Partisan bias tripled after congressional redistricting

We measure partisan bias using the symmetry standard, which asks: What if the two parties both received the same share of the vote under a given statewide districting plan? Would they get the same share of seats? If not, which party would have an advantage?

We calculate seats/votes functions on the assumption of uniform partisan swing – if a party gains 5% nationally, it gains 5% in every district, give or take an allowance for local factors (simulated through random effects that reduce our estimates of bias). Linear regressions provide an estimate of the level of support the Democrats would expect in each district if they won 50% of the national vote, given the party’s national support in actual elections. We then generate a thousand simulated elections, with hypothetical vote swings and different random local effects for each district, to create our seats/votes functions.
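
A bare-bones version of that simulation logic might look like the following sketch. The district-level baselines here are random placeholders, not the regression estimates used in the book, so the output illustrates the method only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_districts = 435
# Placeholder for each district's estimated Democratic share when the party
# wins 50% nationally (in the book these come from regressions on real results).
dem_share_at_50 = rng.normal(loc=0.50, scale=0.15, size=n_districts)

def expected_seats(national_share: float, local_sd: float = 0.03) -> float:
    """Mean Democratic seat count over 1,000 simulated elections:
    uniform swing plus random district-level effects."""
    swing = national_share - 0.50
    sims = dem_share_at_50 + swing + rng.normal(0.0, local_sd, (1000, n_districts))
    return (sims > 0.5).sum(axis=1).mean()

for v in (0.48, 0.50, 0.52, 0.55):
    print(f"{v:.0%} of the vote -> roughly {expected_seats(v):.0f} seats")
```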

Figure 1: Seats/Votes function for Congress 2002-2010

Figure 2: Seats/Votes function for Congress 2012

Figures 1 and 2 show the seats/votes functions under the 2002-2010 Congressional districts, and the 2012 post-redistricting districts, respectively. We observe a 3.4% asymmetry in favor of Republicans under the older districts. This is still statistically significant, but it is only about a third of the 9.39% asymmetry we observe in 2012. Graphically, the seats/votes function in Figure 1 comes far closer to the 50% votes/50% seats point. The bias at 50% of the vote is less than 2% under the older state districting plans, compared to 5% in 2012. That is, if the two parties win an equal number of votes, the Republicans will win 55% of House seats. Furthermore, the Democrats would have to win about 55% of the vote to have a 50/50 chance of winning control of the House in 2016. Thus it is not impossible that the Democrats will regain control of the House, but it would take a performance similar to or better than 2008, when multiple factors were favorably aligned for the Democrats.

Increased bias did not result from “The Big Sort” 

Perhaps the most popular explanation for increased partisan bias comes from the “Big Sort” hypothesis, which holds that liberals and conservatives have migrated to areas dominated by people with similar views. Specifically, because Democrats tend to be highly concentrated in urban areas, it is argued, Democratic candidates tend to win urban districts by large margins and “waste” their votes, leaving the Republicans to win more districts by lower margins.

The question we need to consider is whether the concentration of Democratic voters has changed relative to that of Republican voters since the previous districts were in place. In particular, if it is the case that urban concentration causes partisan bias, then we would expect to find relative Democratic concentration increasing in those states where partisan bias increases. In order to address this question, we measure the concentration of Democratic voters relative to that of Republicans with the Pearson moment coefficient of skewness, using county-level data from the 2004 and 2012 presidential elections. As shown in Table 1, in most states the level of skewness toward the Democrats actually decreased in 2012.

[Table 1: change in the relative concentration (skewness) of Democratic voters, 2004 vs. 2012, by state.]
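
The measure itself is the standard third-moment statistic, skewness = E[(x − mean)³]/sd³, applied to county-level Democratic vote shares within a state. A minimal sketch with a made-up county vector (a strongly right-skewed distribution indicates Democratic votes piled into a few counties):

```python
import numpy as np
from scipy.stats import skew

# Made-up county-level Democratic vote shares for one hypothetical state.
dem_share = np.array([0.35, 0.38, 0.41, 0.44, 0.47, 0.52, 0.58, 0.71, 0.82])
print(skew(dem_share))  # positive: Democratic voters relatively concentrated
```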

In twenty-seven out of the thirty-eight states with at least three districts, the relative concentration of Democratic voters compared to Republican voters declines. Moreover, in those states where partisan bias increased between the 2000 and 2010 districting rounds, those with an increase in skewness are outnumbered by those where there was no increase in skewness, by more than two to one. We also find that there was reduced skewness in most of the states where there was statistically significant partisan bias in 2012.

Of course, we should not conclude that geographical concentration does not make it easier to produce partisan bias. North Carolina was able to produce a highly biased plan without the benefit of a skewed distribution of counties, but to achieve this, the state General Assembly had to draw some extremely oddly shaped districts. While the urban concentration of Democratic voters makes producing districting plans biased toward the Republicans slightly easier, it makes producing pro-Democratic gerrymanders very hard. In Illinois, the Democratic-controlled state legislature drew some extremely non-compact districts but still only managed to produce a plan that was approximately unbiased between the parties.

Increasing racial diversity does not require partisan bias

Another “natural” explanation for partisan bias, one that is especially popular among Southern GOP legislators, is that it is impossible to draw districts that are unbiased while at the same time providing minority representation in compliance with the Voting Rights Act of 1965. The need to draw more majority-minority districts, it is argued, disadvantages the Democrats because it forces the inefficient concentration of overwhelmingly Democratic minority voters.

There are four states with four or more majority-minority districts – California, Texas, Florida, and New York – and they account for more than 60% of the total number of majority-minority districts. Texas and Florida have statistically significant partisan bias, but California, Illinois, and New York do not, so the need to draw majority-minority districts does not make it impossible to draw unbiased districting plans. Yet many of the states that saw partisan bias increase do have majority-minority districts – or rather a single majority-minority district in most cases. It is possible that packing more minority voters into existing majority-minority districts creates partisan bias.

To test this possibility, we take the districts where African-Americans or Latinos made up a majority of the population and subtract their average minority percentage under the 110th Congress districts from the corresponding average under the 113th Congress. Figures 3 and 4 display the results for these states. For majority-Latino districts, we find no evidence that states with increased Latino density have more biased redistricting plans.

Figure 3: Majority-Latino District Density Change and Symmetry Change

Figure 4: Majority-Black District Density Change and Symmetry Change

By contrast, states with increased majority-Black densities have clearly adopted more biased districting plans. Among the states with substantial reductions in partisan symmetry, only Louisiana (−1.9 %), Ohio (−1.0 %), and Pennsylvania (−0.9 %) had lower average percentages of African Americans in their majority-minority districts after redistricting. The three states with the largest increases in majority-Black district density, Tennessee (5.2 %), North Carolina (2.4 %), and Virginia (2.2 %), include some of the most biased plans in the country. This is not in any way required by the Voting Rights Act – indeed, it reduces the influence of African-American voters by using their votes inefficiently. However, it is consistent with a policy of state legislatures seeking partisan advantage by packing African-American voters, who overwhelmingly vote for the Democratic Party, into districts where the Democratic margin will be far higher than necessary.

Demography is not destiny

The bias we observe is not the inevitable effect of factors such as the urban concentration of Democratic voters or the need to draw majority-minority districts. It is for the most part possible to draw unbiased districting plans in spite of these constraints. Thus if state districting authorities draw districts that give a strong advantage to one party, this is a choice they have made – it was not forced on them by geography. The high level of partisan bias protecting GOP House control can only be explained in political terms. As we show in our book, pro-Republican bias increased almost exclusively in states where the GOP controlled the districting process.

One of the immediate consequences of unrestrained partisan gerrymandering is that, short of a landslide Democratic victory resembling 2008, the Republicans are very likely to retain control of the House. But one of the more profound consequences is that redistricting has upended one of the bedrock constitutional principles, popular sovereignty. Without an intervention by the courts, political parties are free to manipulate House elections to their advantage without consequence.


Economix: Expand the US House

It is good to see the undersized nature of the US House of Representatives get attention in the New York Times‘s Economix blog. The author is Bruce Bartlett, who “held senior policy roles in the Reagan and George H.W. Bush administrations and served on the staffs of Representatives Jack Kemp and Ron Paul”.

Bartlett notes that,

according to the Inter-Parliamentary Union, the House of Representatives is on the very high side of population per representative at 729,000. The population per member in the lower house of other major countries is considerably smaller: Britain and Italy, 97,000; Canada and France, 114,000; Germany, 135,000; Australia, 147,000; and Japan, 265,000.

The strongest empirical relationship of which I am aware between population size and assembly size is the cube root law. Backed by a theoretical model, it was originally proposed by Rein Taagepera in the 1970s. A nation’s assembly tends to be about the cube root of its population, as shown in this graph.*

[Graph: assembly size plotted against population for national lower houses, with the cube-root line; see the note at the end of this post.]

Note the flat line for the USA, indicating no increase in House size since the population was less than a third of what it is today. This recent static period is in contrast to earlier times, depicted by the zig-zag black line, in which the USA regularly adjusted House size, keeping it reasonably close to the cube-root expectation.

At only about two thirds of the cube-root value of the population (as of the 2010 census), the current US House is indeed one of the world’s most undersized. However, there are some even more deviant cases. Taking actual size over expected size (from the cube root), the USA has the seventh most undersized first or sole chamber among thirty-one democracies in my comparison set. The seven are:

    .466 Colombia
    .469 Chile
    .518 India
    .538 Australia
    .590 Netherlands
    .614 Israel
    .659 USA

As expected, the mean ratio for the thirty-one countries is very close to one (0.992, with a standard deviation of .37). The five most oversized, all greater than 1.4, are France, Germany, UK (at 1.67), Sweden, and Hungary. (The latter was at a whopping 1.80, but has since sharply reduced its assembly size.) Spain, Denmark, Switzerland, Portugal, and Mexico all get the cube root prize for having assembly sizes from .975 to 1.03 of the expectation.
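
The ratios above are simply actual assembly size divided by the cube root of population, S / P^(1/3). A quick sketch with rough, circa-2010 population figures (these are not the exact data behind the list, so the results differ slightly from the numbers shown):

```python
# Actual assembly size divided by the cube-root expectation, S / P^(1/3).
examples = {
    "USA (House, 435 seats)": (435, 309_000_000),
    "UK (Commons, 650 seats)": (650, 63_000_000),
    "Netherlands (Tweede Kamer, 150 seats)": (150, 16_600_000),
}
for name, (seats, population) in examples.items():
    print(f"{name}: {seats / population ** (1 / 3):.2f}")
# roughly 0.64, 1.63, and 0.59 with these population figures
```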

One thing I did not know is that an amendment to the original US constitution was proposed by Madison. According to Bartlett, it read:

After the first enumeration required by the first article of the Constitution, there shall be one representative for every 30,000, until the number shall amount to 100, after which the proportion shall be so regulated by Congress, that there shall be not less than 100 representatives, nor less than one representative for every 40,000 persons, until the number of representatives shall amount to 200; after which the proportion shall be so regulated by Congress, that there shall not be less than 200 representatives, nor more than one representative for every 50,000 persons.

Obviously, Madison’s formula would have run into some excessive size issues over time. And Bartlett does not suggest how much the House should be increased, only noting that its ratio of one Representative for every 729,000 people is excessive. On the other hand, Madison’s ratio of one per 50,000 would produce an absurdly large House! It was precisely to balance the citizen-representative ratio against the need for representatives to communicate effectively with one another that Taagepera devised the cube-root model, which, as we have seen, fits actual legislatures very well.

The cube root rule says the USA “should have” a House of around 660 members today, which would remain a workable size. (If the USA and UK swapped houses, each would be at just about the “right” size!) Even an increase to just 530 would put it within about 80% of the cube root.

As Bartlett notes, at some point the US House will be in violation of the principle of one person, one vote (due to the mandatory representative for each state, no matter how small). However, a case filed in 2009 went nowhere.



* Each country is plotted according to its population, P (in millions), and the size, S, of its assembly. In addition, the size of the US House is plotted against US population at each decennial census from 1830 to 2010.
The solid diagonal line corresponds to the “cube root rule”: S=P^(1/3).
The dashed lines correspond to the cube root of twice or half the actual population, i.e. S=(2P)^(1/3) and S=(.5P)^(1/3).

A variant of the graph will be included in Steven L. Taylor, Matthew S. Shugart, Arend Lijphart, and Bernard Grofman, A Different Democracy (Yale University Press).

An even earlier version of the graph was posted here at F&V in 2005.

Citations are always nice–Increasing the size of the US House

David Fredosso, writing at Conservative Intelligence Briefing, makes the case for increasing the size of the US House, citing one of my previous posts advocating the same. He makes two additional and valuable points: (1) “The Wyoming Rule”, by which the standard Representative-to-population ratio would be that of the smallest entitled unit, is misleading as to how representation is currently (mal-)apportioned; (2) Increasing the size of the House would not, as is sometimes assumed, be of benefit to Democrats and liberals.

David quibbles with the Wyoming part of the story, noting that “Wyoming is not the most overrepresented state — by a long way, that distinction goes to Rhode Island, with its two districts, average population 528,000”, whereas Wyoming has a population of 568,000 (and one seat).

I would note that this is a very small quibble indeed, as the Wyoming Rule–which, to be fair, I neither named nor invented–refers to “smallest entitled unit” not to “most over-represented unit”. Of course, every state is a unit entitled to at least one, but sometimes a state with two members indeed will be over-represented to a greater degree than some state with one member. Whichever we base it on–smallest entitled or most over-represented–the principle is the same: expand the House.

David proposes a House of 535, and has a table of how that would change each state’s current representation. I would go higher (600 or so), but the precise degree of increase is an even smaller quibble. I am pleased to see this idea being promoted in conservative (or liberal, or whatever) circles. And it’s always nice to be cited.

________
See also:

Reapportionment–a better way?; this includes a discussion of the cube-root rule of assembly size, and a graph of how the US relationship of House size to population compares to that of several other countries, and how it has changed over time as the US population has grown, but the House stopped doing so.

US House size, continued

Distortions of the US House: It’s not how the districts are drawn, but that there are (single-seat) districts

In the New York Times, Sam Wang has an essay under the headline, “The Great Gerrymander of 2012.” In it, he outlines the results of a method aimed at estimating the partisan seat allocation of the US House if there were no gerrymandering.

His method proceeds “by randomly picking combinations of districts from around the United States that add up to the same statewide vote total” to simulate an “unbiased” allocation. He concludes:

Democrats would have had to win the popular vote by 7 percentage points to take control of the House the way that districts are now (assuming that votes shifted by a similar percentage across all districts). That’s an 8-point increase over what they would have had to do in 2010, and a margin that happens in only about one-third of Congressional elections.

Then, rather buried within the middle of the piece is this note about 2012:

if we replace the eight partisan gerrymanders with the mock delegations from my simulations, this would lead to a seat count of 215 Democrats, 220 Republicans, give or take a few.

In other words, even without gerrymandering, the House would have experienced a plurality reversal, just a less severe one. The actual seat breakdown is currently 201D, 234R. That is, by Wang’s calculations, gerrymandering cost the Democrats seats equivalent to about 3.2% of the House. Yes, that is a lot, but it is just short of the 3.9% that is the full difference between the party’s actual 201 and the barest of majorities (218). But, actually, the core problem derives from the electoral system itself. Or, more precisely, from an electoral system designed to represent geography having to allocate a balance of power among organizations that transcend geography–national political parties.

Normally, with 435 seats and the 49.2%-48.0% breakdown of votes that we had in 2012, we should expect the largest party to have about 230 seats. ((Based on the seat-vote equation.)) Instead it won 201. That deficit between expectation and reality is equivalent to 6.7% of the House, suggesting that gerrymandering cost the Democrats just over half the seats that a “normally functioning” plurality system would have netted it.
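
For the curious, here is a sketch of one common parameterization of the seat–vote equation (seat shares proportional to vote shares raised to the power n = log V / log S, where V is the number of voters and S the number of seats). The vote total is a rough figure for 2012, and this particular version lands in the mid-220s rather than exactly 230, so treat it as an illustration of the logic, not a reproduction of the calculation behind the footnote above.

```python
from math import log

S, V = 435, 1.22e8            # seats; rough total House votes cast in 2012
n = log(V) / log(S)           # responsiveness exponent; about 3 (the classic cube law)
v_dem, v_rep = 0.492, 0.480   # 2012 national House vote shares
dem_share = v_dem ** n / (v_dem ** n + v_rep ** n)
print(round(n, 2), round(dem_share * S))  # about 3.06 and ~226 seats
```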

However, the “norm” here refers to two (or more) national parties without too much geographic bias in where those parties’ voters reside. Only if the geographic distribution is relatively unbiased does the plurality system deliver its supposed advantage in partisan systems: giving the largest party a clear edge in political power (here, the majority of the House). Add in a little bit of one big party being over-concentrated, and you can get situations in which the largest party in votes is under-represented, and sometimes not even the largest party in seats.

As I have noted before, plurality reversals are inherent to the single-seat district, plurality, electoral system, and derive from inefficient geographic vote distributions of the plurality party, among other non-gerrymandering (as well as non-malapportionment) factors. Moreover, they seem to have happened more frequently in the USA than we should expect. While gerrymandering may be part of the reason for bias in US House outcomes, reversals such as occurred in 2012 can happen even with “fair” districting. Wang’s simulations show as much.

The underlying problem, again, is that all the system really does is represent geography: which party’s candidate gets the most votes in each district? And herein lies the big transformation in US electoral and party politics over recent decades, compared to the “classic” post-war party system: it is no longer as much about local representation as it once was, and much more about national parties with distinct and polarized positions on issues.

Looking at the relationship between districts and partisanship, John Sides, in the Washington Post’s Wonk Blog, says “Gerrymandering is not what’s wrong with American politics.” Sides turns the focus directly on partisan polarization, showing that, almost without regard to district partisanship, members of the same party tend to vote alike in recent Congresses. The result is that when a district (or, in the Senate, a state) swings from one party to the other, the voting record of its member jumps clear past the median voter, from one relatively polarized position to the other.

Of course, this is precisely the point Henry Droop made in 1869, and that I am fond of quoting:

As every representative is elected to represent one of these two parties, the nation, as represented in the assembly, appears to consist only of these two parties, each bent on carrying out its own programme. But, in fact, a large proportion of the electors who vote for the candidates of the one party or the other really care much more about the country being honestly and wisely governed than about the particular points at issue between the two parties; and if this moderate non-partisan section of the electors had their separate representatives in the assembly, they would be able to mediate between the opposing parties and prevent the one party from pushing their advantage too far, and the other from prolonging a factious opposition. With majority voting they can only intervene at general elections, and even then cannot punish one party for excessive partisanship, without giving a lease of uncontrolled power to their rivals.

Both the essays by Wang and by Sides, taken together, show ways in which the single-seat district, plurality, electoral system simply does not work for the USA anymore. It is one thing if we really are representing district interests, as the electoral system is designed to do. But the more partisan a political process is, the more the functioning of democracy would be improved by an electoral system that represents how people actually divide in their partisan preferences. The system does not do that. It does even less well the more one of the major parties finds its votes concentrated in some districts (e.g. Democrats in urban areas). Gerrymandering makes the problem worse still, but the problem is deeper: the uneasy combination of a geography-based electoral system and increasingly distinct national party identities.

Spurious majorities in the US House in Comparative Perspective

In the week since the US elections, several sources have suggested that there was a spurious majority in the House, with the Democratic Party winning a majority–or more likely, a plurality–of the votes, despite the Republican Party having held its majority of the seats.

It is not the first time there has been a spurious majority in the US House, but it is quite likely that this one is getting more attention ((For instance, Think Progress.)) than those in the past, presumably because of the greater salience now of national partisan identities.

Ballot Access News lists three other cases over the past 100 years: 1914, 1942, and 1952. Sources disagree, but there may have been one other between 1952 and 2012. Data I compiled some years ago showed a spurious majority in 1996, if we go by The Clerk of the House. However, if we go by the Federal Election Commission, we had one in 2000, but not in 1996. And I understand that Vital Statistics on Congress shows no such event in either 1996 or 2000. A post at The Monkey Cage cites political scientist Matthew Green as including 1996 (but not 2000) among the cases.

Normally, in democracies, we more or less know how many votes each party gets. In fact, it’s all over the news media on election night and thereafter. But the USA is different. “Exceptional,” some say. In any case, I am going to go with the figure of five spurious majorities in the past century: 1914, 1942, 1952, 2012, plus 1996 (and we will assume 2000 was not one).

How does the rate of five (or, if you like, four) spurious majorities in 50 elections compare with the wider world of plurality elections? I certainly do not claim to have the universe of plurality elections at my fingertips. However, I did collect a dataset of 210 plurality elections–not including the USA–for a book chapter some years ago, ((Matthew Soberg Shugart, “Inherent and Contingent Factors in Reform Initiation in Plurality Systems,” in To Keep or Change First Past the Post, ed. André Blais. Oxford: Oxford University Press, 2008.)) so we have a good basis for comparison.

Out of 210 elections, there are 10 cases of the second party in votes winning a majority of seats. There are another 9 cases of reversals of the two leading parties in which no party won over 50% of the seats. So reversals producing a spurious majority occur in 4.8% of these elections; counting the minority-situation reversals as well, the rate is 9%. The US rate, at five in fifty, would apparently be 10%.
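Spelling those rates out (numbers as reported above):

```python
# The comparative rates, spelled out.
spurious_rate = 10 / 210             # reversals yielding a seat majority: ~4.8%
all_reversal_rate = (10 + 9) / 210   # all reversals, including minority situations: ~9.0%
us_rate = 5 / 50                     # five spurious majorities in 50 US House elections: 10%
print(f"{spurious_rate:.1%}, {all_reversal_rate:.1%}, {us_rate:.1%}")
```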

But in theory, a reversal should be much less common with only two parties of any significance. Sure enough: the mean effective number (N) of seat-winning parties in the spurious majorities in my data is just under 2.5, with only one under 2.2 (Belize, 1993, N=2.003, in case you were wondering). So the incidence in the US is indeed high–given that N by seats has never been higher than 2.08 in US elections since 1914, ((The original version of this statement, that “N is almost never more than 2.2 here” rather exaggerated House fragmentation!)) and that even without this N restriction, the rate of spurious majorities in the US is still higher than in my dataset overall.
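For readers not steeped in this literature, the effective number of parties used here is the standard Laakso-Taagepera index: one over the sum of squared shares. A minimal sketch, using the current 234R-201D seat split mentioned above purely as an illustration:

```python
# Laakso-Taagepera effective number of parties: N = 1 / sum of squared shares.
def effective_N(seat_totals):
    total = sum(seat_totals)
    shares = [s / total for s in seat_totals]
    return 1 / sum(p ** 2 for p in shares)

# Illustration with the current House split (234 R, 201 D):
print(round(effective_N([234, 201]), 2))  # about 1.99
```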

I might also note that a spurious majority should be rare with large assembly size (S). While the US assembly is small for the country’s population–well below what the cube-root law would suggest–it is still large in an absolute sense. Indeed, no spurious majority in my dataset of national and subnational elections from parliamentary systems has happened with S>125!
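As a rough check on just how far below the cube-root expectation the House sits, take the cube root of the population; the population figure below is an assumption (roughly the 2010 census count), used only for illustration.

```python
# Cube-root law of assembly size: expected S is roughly the cube root of the population.
population = 309_000_000   # assumed, roughly the 2010 census count
expected_size = population ** (1 / 3)
print(round(expected_size))  # about 676, versus the actual 435
```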

So, put in comparative context, the US House exhibits an unusually high rate of spurious majorities! Yes, evidently the USA is exceptional. ((Spurious majorities are even more common in the Senate, where no Republican seat majority since at least 1952 has been based on a plurality of votes cast. But that is another story.))

As to why this would happen, some of the popular commentary is focusing on gerrymandering (the politically biased delimitation of districts). This is quite likely part of the story, particularly in some states. ((For instance, see the map of Pennsylvania at the Think Progress link in the first footnote.))

However, one does not need gerrymandering to get a spurious majority. As political scientists Jowei Chen and Jonathan Rodden have pointed out (PDF), there can be an “unintentional gerrymander,” too, which results when one party has its votes less optimally distributed than the other. The plurality system, in single-seat districts, does not tote up party votes and then allocate seats in the aggregate. All that matters is in how many districts a party had the lead–even if only by a single vote. Thus a party that runs up big margins in some of its districts will tend to augment its “votes” column at a faster rate than its “seats” column. This is quite likely the problem the Democrats face, and it would have contributed to their losing the seat majority despite their (apparent) plurality of the votes.
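A toy example, with entirely made-up numbers, shows the mechanism at work: a party can lead comfortably in total votes yet trail in districts won, simply because its victories are more lopsided.

```python
# Toy illustration of the "unintentional gerrymander."
# Hypothetical numbers: five districts of 100 voters each, two parties A and B.
districts = [
    (90, 10),  # landslide for Party A
    (85, 15),  # another A landslide
    (45, 55),  # narrow B win
    (48, 52),  # narrow B win
    (47, 53),  # narrow B win
]

a_votes = sum(a for a, b in districts)
b_votes = sum(b for a, b in districts)
a_seats = sum(1 for a, b in districts if a > b)
b_seats = sum(1 for a, b in districts if b > a)
print(a_votes, b_votes)  # 315 vs 185: Party A wins the votes handily
print(a_seats, b_seats)  # 2 vs 3: Party B wins the seats
```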

Consider the following graph, which shows the distribution (via kernel densities) of vote percentages for the winning candidates of each major party in 2008 and 2010.

Kernel density winning votes 2008-10

We see that in the 2008 concurrent election, the Democrats (solid blue curve) have a much longer and higher tail of the distribution in the 70%-100% range. In other words, compared to Republicans the same year, they had more districts in which they “wasted” votes by accumulating many more than were needed to win. Republicans, by contrast, tended that year to win more of their races by relatively tight margins–though their peak is still around 60%, not 50%. I want to stress that the point here is not to suggest that 2008 saw a spurious majority. It did not. Rather, the point is that even in a year when Democrats won both the vote plurality and the seat majority, they had a less-than-optimal distribution, in the sense of being more likely to win by big margins than were Republicans.

Now, compare the 2010 midterm election, in which Republicans won a majority of seats (and at least a plurality of votes). Note how the Republican (dashed red) distribution becomes relatively bimodal. Their main peak shifts right (in more ways than one!) as they accumulate more votes in already safe seats, but they develop a secondary peak right around 50%, allowing them to pick up many seats narrowly. That the peak for winning Democrats’ votes moved so much closer to 50% suggests how much worse the “shellacking” could have been! Yet even in the 2010 election, the tail on the safe-seats side of the distribution still shows more Democratic votes wasted in ultra-safe seats than is the case for Republicans. ((It is interesting to note that 2010 was a rare election in which no district went uncontested by either major party.))
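For anyone who wants to reproduce this kind of figure, here is a minimal sketch of the approach. The file name and column names are placeholders for illustration, not the actual dataset behind the graph above.

```python
# Sketch of the kernel-density figure. Assumes a hypothetical file 'winners.csv'
# with columns: year, party ('D' or 'R'), and winner_vote_share (in percent).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("winners.csv")

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, year in zip(axes, (2008, 2010)):
    sub = df[df["year"] == year]
    sub.loc[sub["party"] == "D", "winner_vote_share"].plot.kde(
        ax=ax, color="blue", label="Democratic winners")
    sub.loc[sub["party"] == "R", "winner_vote_share"].plot.kde(
        ax=ax, color="red", linestyle="--", label="Republican winners")
    ax.set_title(str(year))
    ax.set_xlabel("Winning candidate's vote share (%)")
    ax.legend()
plt.tight_layout()
plt.show()
```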

I look forward to producing a similar graph for the 2012 winners’ distribution, but will await more complete results. A lot of ballots remain to be counted and certified. The completed count is not likely to reverse the Democrats’ plurality of the vote, however.

Given higher Democratic turnout in the concurrent election of 2012 than in the 2010 midterm election, it is likely that the distributions will look more like 2008 than like 2010, except with the Republicans retaining enough of those relatively close wins to have held on to their seat majority.

Finally, a pet peeve, and a plea to my fellow political scientists: Let’s not pretend there are only two parties in America. Since 1990, it has become uncommon, actually, for one party to win more than half the House votes. Yet my colleagues who study US elections and Congress continue to speak of “majority”, by which they mean more than half the mythical “two-party vote”. In fact, in 1992 and every election from 1996 through at least 2004, neither major party won 50% of the House votes. I have not ever aggregated the 2006 vote. In 2008, Democrats won 54.2% of the House vote, Republicans 43.1%, and “others” 2.7%. I am not sure about 2010 or 2012. It is striking, however, that the last election of the Democratic House majority and all the 1995-2007 period of Republican majorities, except for the first election in that sequence (1994), saw third-party or independent votes high enough that neither party was winning half the votes.

Assuming spurious majorities are not a “good” thing, what could we do about it? Democrats, if they are developing a systematic tendency to be victims of the “unintentional gerrymander”, would have an objective interest in some sort of proportional representation system–perhaps even as much as that unrepresented “other” vote would have.

ESP champs 2008-2010

Just poking around a bit further in the Electoral Separation of Purpose data, as pictured and explained previously.

I wondered who the “ESP Champs” were of these cycles.

For 2008, I hereby crown Gene Taylor of Mississippi, who won 74.5% in his district on the same day that Obama managed 31.7%. Now that’s separation of purpose!

He still managed 47% even in 2010. Not bad, but not good enough.

In fact, that 2010 result makes Taylor one of only four Democrats to have won, at the midterm, more than 45% of the vote in a district in which Obama had won under 35%. But to be crowned champion for 2010, you should actually have won your race. So the 2010 title belongs to…

Dan Boren of Oklahoma, who won 56.5% in a district in which Obama had won 34.5%. This result still represented a massive adverse swing against Boren, who had 70.5% in 2008. But he held on.
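The formal ESP measure is defined in the earlier post linked above; purely as an illustration of the gaps being described here, the raw district-level differences work out as follows.

```python
# Raw gap between the House winner's vote share and Obama's 2008 share in the
# same district -- an illustration of the separation being described, not
# necessarily the formal ESP measure defined in the earlier post.
champs = {
    "Gene Taylor (MS, 2008 House race)": (74.5, 31.7),
    "Dan Boren (OK, 2010 House race)":   (56.5, 34.5),
}
for name, (house_share, obama_share) in champs.items():
    print(name, round(house_share - obama_share, 1))
# Gene Taylor: 42.8 points; Dan Boren: 22.0 points
```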

Boren and Taylor, by the way, are both Blue Dogs.

With ESP numbers like these, we can see why some “blue” congressmen in deeply “red” districts were less than keen these past two years in coming to the support of Obama’s policy priorities. (This was a topic that generated considerable discussion in another thread earlier this month.)