Attorneys General–institutions matter

Now that indictments have been announced against the (outgoing–dare I say?) Prime Minister of Israel, it is worth reviewing the institutional basis of the office of Attorney General in Israel.

I am seeing some casual takes on Twitter about why the US doesn’t have an Attorney General who takes a tougher line against law-breaking at the top of government. But the offices could hardly be more different. The US Attorney General is a cabinet appointee. The President picks who holds that position, subject only to Senate majority confirmation. Of course, Trump has had a highly compliant Senate majority throughout his presidency.

Had the office been structured like Israel’s, Trump could not have installed occupants of the office as awful for the rule of law as his have been. So it is worth sketching how the process of appointing the Israeli Attorney General works. My source for this is Aviad Bakshi, Legal Advisers and the Government: Analysis and Recommendations, Kohelet Policy Forum, Policy Paper No. 10, February 2016.

a. There shall be formed a permanent selection committee that shall screen suitable candidates, one of which shall be appointed to the position by the government. The term of each committee shall be four years. 

b. The chairman shall be a retired justice of the Supreme Court who shall be appointed by the President (Chief Justice) of the Supreme Court upon the approval of the Minister of Justice, and the other members shall be: a retired Minister of Justice or retired Attorney General appointed by the government; a Knesset Member elected by the Constitution, Law and Justice Committee of the Knesset; a scholar elected by a forum comprising deans of law schools; an attorney elected by the Israel Bar Association. 

c. The AGI term duration shall be six years, with no extension, irrespective of the term of the government. 

d. The government may remove the AGI from his position due to specific reasons.… These reasons include, in addition to personal circumstances of the AGI, disagreements between the AGI and the government that prevent efficient cooperation. In such an event the selection committee shall convene to discuss the subject and shall submit its opinion to the government, in writing. However, the opinion of the committee is not binding, and the government may decide to remove the AGI contrary to the recommendation of the committee. The AGI shall have the right to a hearing before the government and before the committee. 

All of this makes for a reasonably independent office. Even though appointment and dismissal remain in the hands of the government, the screening and term provisions make it an arm’s-length relationship. The occupant of the post is not a cabinet minister, as the US Attorney General is, nor a direct appointee of the head of government or the cabinet.

Worlds apart, institutionally.

And this is even before we get into the parliamentary vs. presidential distinction. A president is–for better or worse–meant to be hard to indict, let alone remove. That’s why the main tool against a potentially criminal executive in the US and many other presidential systems is lodged in the congress, through impeachment, and not in a state attorney. A prime minister in a parliamentary system, on the other hand, by definition has no presumption of a fixed term.

The normal way to get rid of a PM is, of course, a vote of no confidence or the PM’s own party or coalition partners withdrawing support. But that’s the point–they are constitutionally unprotected when the political winds, let alone the legal ones, turn against them.

In the broader institutional context of a parliamentary system, it is presumably much easier to take the step of also designing an independent Attorney General’s office that has the ability to indict a sitting head of government.

On the other hand, there is still no obvious way to remove Netanyahu from office any time soon, unless his own party rebels against him. Even though Trump’s own party will probably block the super-majority in the Senate needed to remove him from office*, the resolution of the case against Trump might happen considerably sooner than any resolution of Netanyahu’s case. Barring a rebellion by his current allies, Netanyahu may remain PM for another 4-5 months, through a now-likely third election (since last April) and the post-election coalition bargaining process.

* Assuming the House majority impeaches him, which now looks all but inevitable.

Early STV voting equipment

Voting technology is one obstacle to wider use of ranked-choice voting. Although groups like OpaVote have had open-source fixes for years, US jurisdictions tend to rely on commercial vendors. A decade ago, many of them resisted developing the technology. Now, of course, voters can “complete the arrow,” as is done in San Francisco, or bubble in a candidate-by-ranking matrix, as was done in Maine last week.

The challenges get thornier with STV elections. Because such races are multi-winner, there sometimes are very many candidates, which can mean confused voters and burdensome vote counts. Not until 1991 did Cambridge (MA) solve these problems by computerizing its electoral system–though that could have happened as early as 1936, when many cities still were holding STV elections.
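To see why STV counts were so burdensome by hand, consider what a count actually involves: a quota, surplus transfers, and eliminations. Below is a deliberately toy sketch in Python, using fractional (“Gregory-style”) surplus transfers. This is an illustration of the general logic only; real rules, including Cambridge’s, differ in many details.

```python
from fractions import Fraction

def stv(ballots, seats):
    """Toy STV count with fractional (Gregory-style) surplus transfers.
    ballots: list of rankings, highest preference first. Returns winners
    in order of election. Real-world rules differ in many details."""
    quota = len(ballots) // (seats + 1) + 1              # Droop quota
    hopeful = {c for b in ballots for c in b}            # still in the running

    def top(b):                                          # highest-ranked hopeful on a ballot
        return next((c for c in b if c in hopeful), None)

    piles = {}                                           # candidate -> [(ballot, weight)]
    for b in ballots:
        c = top(b)
        if c:
            piles.setdefault(c, []).append((b, Fraction(1)))

    elected = []
    while len(elected) < seats and hopeful:
        totals = {c: sum((w for _, w in piles.get(c, [])), Fraction(0))
                  for c in hopeful}
        over = [c for c in hopeful if totals[c] >= quota]
        if over:                                         # elect the largest total
            c = max(over, key=lambda x: totals[x])
            elected.append(c)
            hopeful.discard(c)
            keep = Fraction(quota) / totals[c]           # fraction of each ballot kept
            moving = [(b, w * (1 - keep)) for b, w in piles.pop(c)]
        else:                                            # nobody reaches quota: exclude weakest
            c = min(hopeful, key=lambda x: totals[x])
            hopeful.discard(c)
            moving = piles.pop(c, [])
        for b, w in moving:                              # pass ballots to next hopeful choice
            nxt = top(b)
            if nxt and w:
                piles.setdefault(nxt, []).append((b, w))
    return elected
```

Even this stripped-down version makes clear how much bookkeeping every round requires–exactly the work that punch-card tabulation (and, later, Cambridge’s computerization) took off the hands of human counters.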

As it turns out, IBM had found a way to mechanize the voting process. George Hallett of the erstwhile Proportional Representation League writes:

Among the most persuasive arguments against P. R., in spite of their essential triviality, have been the objections that it required several days to get the result in a large election and that it required paper ballots and hand counting, both of which in plurality elections without the safeguards of a central count have acquired an evil reputation. In connection with the possible early use of P. R. in New York City, where these objections would be stronger than ever psychologically, an effective answer to them has now been devised.


IBM’s system used standard punch-card readers to count STV ballots at a rate of 400 per minute. According to Hallett, “the final result of a P. R. election in New York City can easily be determined by some time in the morning of the day after election.”

Voters would use a series of dials to rank candidates, one through 20. Then, as some will recall of lever machines, the machine recorded a voter’s rankings when they pulled the lever to open the curtain; opening the curtain punched the holes into the punch-card ballot.

Here is the quotation in its context (albeit a bit blurry):

Other features of the system were:

  • Precinct-based error correction. A voter could not give the same ranking to more than one candidate. Nor could a voter skip a ranking.
  • Freedom of choice. A voter could rank as few candidates as they wanted. They also could rank as many as they wanted. Although the machine was built for 20 rankings, there appears to have been accommodation for write-in and additional candidates. Finally, a voter could go back and change their mind about a ranking.
  • Early “cyber-security.” Now we worry about nefarious actors loading malware onto touchscreens. Back in the 1930s, however, the worry was that poll workers might stuff a ballot box or throw out ballots they did not like. IBM’s solution was simple. Poll workers would not have access to individual ballots. Once a voter voted, the ballot fell into a sealed container, only to be opened in the central-count location.
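The error-correction rules amount to a validity check that today would be done in software. Here is a minimal sketch of the two rules as described–no duplicate rankings, no skipped rankings–in Python. The function name and the 20-ranking limit mirror the description above; everything else is illustrative.

```python
def valid_rankings(ranks, max_rank=20):
    """Validate a ballot under the two rules described above.
    ranks: dict of candidate -> integer ranking (1 = first choice).
    Returns (ok, reason)."""
    used = sorted(ranks.values())
    if len(used) != len(set(used)):
        return False, "duplicate ranking"     # same rank given to two candidates
    if any(r < 1 or r > max_rank for r in used):
        return False, "ranking out of range"  # the machine offered rankings 1-20
    if used != list(range(1, len(used) + 1)):
        return False, "skipped ranking"       # e.g. 1, 2, 4 with no 3
    return True, "ok"
```

Note that ranking only some candidates passes the check, consistent with the “freedom of choice” feature: the rules constrain how rankings are used, not how many are used.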

Why the machine did not catch on remains a mystery. IBM appears to have been pitching it to New York City in advance of the November referendum, which put STV into place from 1937 to 1947. Those passing by 41 Park Row could see a demonstration model at the Citizens Union office.

It is a shame that New York (and other cities) did not go with the system. According to Mott (1926), the average invalid-ballot rate in 19 elections to that point was 9.1 percent. My data reveal invalid rates of up to 18 percent (Manhattan and Brooklyn, 1941). Part of this was outright abstention. Another part was the lack of interest in discerning voter intent, handling skipped rankings leniently, and so forth. IBM’s machine, however, would have addressed some of those issues, all while educating voters in the act of voting.

Presidentialism and diverging intraparty electoral incentives

“We’ve got people running for president all trying to find their base, and then you’ve got people from Trump states that are trying to continue to legislate the way we always have — by negotiation.”
 
Thank you, Sen. Claire McCaskill (D-Missouri), for a wonderful quote about how presidential systems can fracture electoral incentives within a party.

The DNC is not the party leadership (in any meaningful sense)

Quick political science lesson. Political parties in the US are non-hierarchical.* That’s a fancy way of saying neither their candidates for office nor their platforms are determined by a central authority.
In other words, the DNC chair is not worth getting all worked up over. If you want to change the party, get some candidates who can win primaries for state legislative and congressional races. Oh, and make sure that said candidates also could realistically win the general election. That is all.

____
* As explained in Chapter 6 of A Different Democracy.

Voter choice or partisan interest? The case of ranked-choice voting in Maine

Galvanized by the first-ever ranked-choice-voting (RCV) win in a U.S. state, reformers just hours ago held a conference call to build their movement. Ranked-choice voting is a set of voting rules kinder to “outsiders” than our ubiquitous plurality system. Given the unusual strength of America’s two-party system, why do outsider-friendly electoral reforms ever win?

My answer is: a replacement institutional template, losing-party self-interest, and ruling-party disunity. In a recently published paper, I show how this logic can explain the spread of “multi-winner ranked-choice voting” (i.e., proportional representation or PR) in the first part of the 20th century. Losing parties and disgruntled ruling-party factions promote voting-system change in a bid for policy-making influence. Voting reform organizations supply the replacement template.

Does my answer also explain the RCV win in Maine? And is that enough to buy my argument? If the answers are “yes,” reformers should concentrate on jurisdictions with sizable out-parties and fractious ruling parties.

Americanist political scientists would also change the way they think about election “reform.” The dominant trend for more than a century has been to see party and reform as exclusive. Fifty years ago, we would have read about conflict between “machine politics” and “good government.” Now we read about “activists” versus “compromisers,” legacies of Progressivism, and reformer “process-obsession.” What if party itself were a critical reform ingredient? As Jessica Trounstine reminds us in her excellent book, Democratic boss Thomas Pendergast was more than happy to turn the model city charter (without PR) to his own “machine” ends in Kansas City.

Let’s see if my template-loser-faction model explains what just happened in Maine.

The template

“Maine has not elected a governor to a first term with majority support since 1966,” said Jill Ward, President of the League of Women Voters of Maine. “Ranked Choice Voting restores majority rule and puts more power in the hands of voters.” – quoted from FairVote.org

Efforts to enact RCV began in 2001.

The losing party

Circumstantial evidence suggests that, from 2001 until the 2014 re-election of Gov. Paul LePage (R), the Democratic Party either:

1) controlled a policy veto point via the governorship, or

2) did not expect “independent” voters’ ballot transfers under single-winner RCV to help elect its candidates.

How is 2014 different for Democratic Party expectations? If the rhetoric of the current governor is any indication, the Maine Republican Party has become more socially conservative. Perhaps it is now so socially conservative (in Democrats’ minds) that the Democratic Party thinks “independent” voters would rank its candidates over Republicans. Maybe Democrats are thinking: “If we had RCV, we wouldn’t be the losing party.”

The disgruntled, ruling-party faction

My hunch is that this is a group of fiscal conservatives, no longer at home in either state party. That doesn’t make them a disgruntled, ruling-party faction, but it might have made them willing to consider Republicans in earlier years. Consider:

  • Proponent of record for Question 5: An Act to Establish Ranked-choice Voting. Liberal on some economic issues, but supports consumption taxes and income-tax reduction.
  • Two-time independent candidate for governor. Liberal on the environment, ambiguous on economics, but not a conventional Democrat of yore. Endorsed independent candidate Angus King (over the Democrat) to replace outgoing Sen. Olympia Snowe, a famed “moderate” Republican.
  • One-time independent candidate for governor. Quits Democratic Party to run. Wanted Maine “to be the Free Enterprise State.”

Predictions and evidence

Last month I predicted that a coalition of regular Democrats and “the independents” would put RCV over the top. Republicans threw me a curve ball by endorsing RCV the very next day, but, as the proprietor of this blog has written, such endorsements can be strategic.

If I was right, Democrats and “the independents” should have voted for RCV, but the Republicans should not have.

Below I give a rough test of these hypotheses. Here are precinct-level results of the vote in favor of RCV by the vote for each major-party presidential candidate. (Vote shares are overall, not of the two-party vote.) This is preliminary. I only have data so far for 87 percent of precincts, the state has not released official results, and I have not looked at the correlation of RCV support with partisanship in other offices. I don’t yet have a way to get at behavior by “the independents.” Finally, I have not yet run an ecological inference analysis, but I plan to remedy all this later.

As you can see, Democrats seemed to like RCV, and Republicans did not, at least as revealed by presidential voting.
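The precinct-level association just described needs nothing fancier than a weighted Pearson correlation, weighting precincts by ballots cast so that tiny precincts do not dominate. A sketch in plain Python follows; the input lists are hypothetical precinct-level shares, since the official data had not been released when this was written.

```python
import math

def weighted_corr(x, y, w):
    """Weighted Pearson correlation of two precinct-level series.
    w: weights, e.g. ballots cast per precinct."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw        # weighted means
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my)
              for wi, xi, yi in zip(w, x, y)) / sw        # weighted covariance
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)

# hypothetical precinct shares: pro-RCV vote, Democratic presidential vote, ballots cast
rcv = [0.62, 0.48, 0.55, 0.40]
dem = [0.58, 0.41, 0.52, 0.35]
ballots = [900, 1200, 700, 400]
r = weighted_corr(rcv, dem, ballots)
```

A positive r would be consistent with the pattern in the scatterplot; as noted above, a proper test would move on to ecological inference rather than stop at an aggregate correlation.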

The role of uncertainty

Why don’t “the independents” simply join the Democratic Party if they dislike current Republican positions as much as the Democrats? This is what’s really interesting about the adoption and use of RCV. I argue that groups in reformist alliances do not plan to cooperate on all pieces of legislation. Let’s say Maine ends up with an “independent” governor or a sizable contingent of “independents” in its state legislature. I would not be surprised if we see them working with Democrats on some legislation (e.g., “social”), then with Republicans on other bills (e.g., taxes).

Why don’t Democrats foresee this possibility? Perhaps they recognize that single-winner RCV is not the same as PR. Consequently they may reason that “independents” will not become a bargaining force. Rather, “independent” ballots will bolster the position of Democrats in government.

Then why are “independents” going along with a reform that’s good for Democrats? Perhaps they disagree with Democrats on who’s likely to benefit from strategic voting. As Gary Cox reminds us, strategic voting depends in the end on voter expectations, shaped by elite messaging about precisely which party or candidate is “hopeless” under a given electoral system. The perception that RCV has made elections kinder to outsiders is important. If there really are many sincerely “independent” voters, “independent” candidates may get a toehold in government.

And that’s when things get interesting.

First (and last?) USA 2016 post

I’ve had nothing to say here about the US election. Well, I voted today. I am glad it is almost over. It has been depressing to watch this campaign. I offer this space for readers who feel F&V should have such a space. That is all from me (for now).

More on American columnists discovering comparative politics

At Think Progress, Ian Millhiser offers another in the recent series of examples of American columnists noticing comparative politics. This is good!

Millhiser suggests we look to Chile’s current presidential democracy for models of how to prevent government shutdowns. As he notes, correctly, Chile’s president has exclusive power under the country’s constitution to propose legislation in areas relating to finance and budget, along with “urgency” provisions and restrictions on congressional authority to change executive proposals.

In other words, a presidential (separation-of-powers) model does not necessarily have to leave the executive dependent on legislative initiative to pass a budget or other financial matters.

While the recognition of other models is good, I am afraid I have to stop short of advocating the Chilean solution. If I decried the possible “Latin Americanization” of US presidentialism during the previous administration, I hardly can advocate it now.

A Different Democracy

The draft chapters for a co-authored book project in which I am involved are posted on my academic pages for anyone who might be interested.

A DIFFERENT DEMOCRACY?

A Systematic Comparison of the American System with 30 Other Democracies

By Steven L. Taylor, Matthew S. Shugart, Arend Lijphart, and Bernard Grofman

It is often said that the United States has an exceptional democracy. To what degree is this claim empirically true? If it is true, in what ways is US democracy different and do those differences matter? What explanations exist for these differences?

The study examines the choices made by the designers of the US government at the Philadelphia convention of 1787 and the institutional structures that evolved from those choices and compares them to 30 other democracies. The basic topics for comparison are as follows: constitutions, federalism, political parties, elections, interest groups, legislative power, executive power, judicial power, bureaucracies, and public policy.

Each chapter starts with a discussion of the feasible option set available on each type of institutional choice and the choices made by the US founders as a means of introducing the concepts, as well as discussing how specific choices made in the US led to particular outcomes. This is done by looking at the discussions on these topics from the Federalist Papers and the debates from the Philadelphia Convention. This approach allows a means of explaining the concepts in a comparative fashion (e.g., federal v. unitary government, unicameralism v. bicameralism, etc.) before moving into the comparisons of the US system to our other 30 democracies, which make up the second half of each chapter.

Each chapter contains an explicit list of specific differences between the US and the other democracies, as well as comparative data in tabular and graphical formats. The current draft of our book has 64 tables, 16 figures, and 10 text boxes. All of the figures and tables contain comprehensive comparative data featuring all 31 cases (save in a handful of instances) or specific thematic subsets of the 31 cases (e.g., presidential systems or bicameral legislatures).

The book is now under contract with Yale University Press.

Comments are welcome (but act fast!).

Distortions of the US House: It’s not how the districts are drawn, but that there are (single-seat) districts

In the New York Times, Sam Wang has an essay under the headline, “The Great Gerrymander of 2012”. In it, he outlines the results of a method aimed at estimating the partisan seat allocation of the US House if there were no gerrymandering.

His method proceeds “by randomly picking combinations of districts from around the United States that add up to the same statewide vote total” to simulate an “unbiased” allocation. He concludes:

Democrats would have had to win the popular vote by 7 percentage points to take control of the House the way that districts are now (assuming that votes shifted by a similar percentage across all districts). That’s an 8-point increase over what they would have had to do in 2010, and a margin that happens in only about one-third of Congressional elections.

Then, rather buried within the middle of the piece is this note about 2012:

if we replace the eight partisan gerrymanders with the mock delegations from my simulations, this would lead to a seat count of 215 Democrats, 220 Republicans, give or take a few.

In other words, even without gerrymandering, the House would have experienced a plurality reversal, just a less severe one. The actual seat breakdown is currently 201 D, 234 R. By Wang’s calculations, then, gerrymandering cost the Democrats seats equivalent to about 3.2% of the House (14 of 435 seats). Yes, that is a lot, but it is just short of the 3.9% that separates the party’s actual 201 seats from the barest of majorities (218). The core problem, though, derives from the electoral system itself–or, more precisely, from an electoral system designed to represent geography having to allocate a balance of power among organizations that transcend geography: national political parties.

Normally, with 435 seats and the 49.2%-48.0% breakdown of votes that we had in 2012, we should expect the largest party to have about 230 seats. ((Based on the seat-vote equation.)) Instead it won 201. That deficit between expectation and reality is equivalent to 6.7% of the House, suggesting that gerrymandering cost the Democrats just under half the seats that a “normally functioning” plurality system would have netted it.
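For readers curious about the footnoted seat-vote equation: it takes the form S1/S2 = (V1/V2)^n, where S and V are the two largest parties’ seat and vote totals. With the classic cube-law exponent n = 3 this yields about 226 seats on 2012’s vote split; the figure of roughly 230 cited above presumably reflects a somewhat higher fitted exponent. Treat the snippet below as a sketch of the logic, not a reproduction of the author’s calculation.

```python
def expected_seats(v1, v2, n=3.0, house=435):
    """Seat-vote equation S1/S2 = (V1/V2)**n for the two largest parties.
    v1, v2: vote shares (need not sum to 1, since only their ratio matters).
    Returns the expected seat total for party 1."""
    ratio = (v1 / v2) ** n
    return house * ratio / (1 + ratio)

# 2012 US House vote: 49.2% Democratic, 48.0% Republican
print(round(expected_seats(0.492, 0.480)))   # cube law (n=3) -> 226
```

The exponent n captures how strongly the system exaggerates the winner’s edge: n = 1 would be pure proportionality, while larger n gives the plurality party a bigger seat bonus.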

However, the “norm” here refers to two (or more) national parties without too much geographic bias in where those parties’ voters reside. Only if the geographic distribution is relatively unbiased does the plurality system deliver its supposed advantage in partisan systems: giving the largest party a clear edge in political power (here, a majority of the House). Add in a little over-concentration of one big party’s vote, and you can get situations in which the largest party in votes is under-represented, and sometimes not even the largest party in seats.

As I have noted before, plurality reversals are inherent to the single-seat district, plurality, electoral system, and derive from inefficient geographic vote distributions of the plurality party, among other non-gerrymandering (as well as non-malapportionment) factors. Moreover, they seem to have happened more frequently in the USA than we should expect. While gerrymandering may be part of the reason for bias in US House outcomes, reversals such as occurred in 2012 can happen even with “fair” districting. Wang’s simulations show as much.

The underlying problem is, again, because all the system really does is represent geography: which party’s candidate gets the most votes here, there, and in each district? And herein lies the big transformation in the US electoral and party systems over recent decades, compared to the party system that was in place in the “classic” post-war system: it is no longer as much about local representation as it once was, and is much more about national parties with distinct and polarized positions on issues.

Looking at the relationship between districts and partisanship, John Sides, in the Washington Post’s Wonk Blog, says “Gerrymandering is not what’s wrong with American politics.” Sides turns the focus directly on partisan polarization, showing that almost without regard to district partisanship, members of one party tend to vote alike in recent congresses. The result is that when a district (or, in the Senate, a state) swings from one party to another, the voting of the district’s membership jumps clear past the median voter from one relatively polarized position to the other.

Of course, this is precisely the point Henry Droop made in 1869, and that I am fond of quoting:

As every representative is elected to represent one of these two parties, the nation, as represented in the assembly, appears to consist only of these two parties, each bent on carrying out its own programme. But, in fact, a large proportion of the electors who vote for the candidates of the one party or the other really care much more about the country being honestly and wisely governed than about the particular points at issue between the two parties; and if this moderate non-partisan section of the electors had their separate representatives in the assembly, they would be able to mediate between the opposing parties and prevent the one party from pushing their advantage too far, and the other from prolonging a factious opposition. With majority voting they can only intervene at general elections, and even then cannot punish one party for excessive partisanship, without giving a lease of uncontrolled power to their rivals.

Both the essays by Wang and by Sides, taken together, show ways in which the single-seat district, plurality, electoral system simply does not work for the USA anymore. It is one thing if we really are representing district interests, as the electoral system is designed to do. But the more partisan a political process is, the more the functioning of democracy would be improved by an electoral system that represents how people actually divide in their partisan preferences. The system does not do that. It does even less well the more one of the major parties finds its votes concentrated in some districts (e.g. Democrats in urban areas). Gerrymandering makes the problem worse still, but the problem is deeper: the uneasy combination of a geography-based electoral system and increasingly distinct national party identities.

Impact of hypothetical congressional-district allocation of US presidential electoral votes

In recent weeks there has been considerable attention to proposals by some Republican politicians to change the allocation of presidential electoral votes from statewide winner-take-all to congressional districts–at least in states where doing so would help Republicans. If this method had been used for all electoral votes in presidential contests from 1968 to 2008, what would its impact have been?

I happen to have district-level presidential votes for each of these elections (but not, yet, for 2012*). The graph below plots both the actual and hypothetical** electoral vote percentages for each party against the popular vote. Red for Republican, blue for Democrat. The solid symbols indicate the actual percentage of electoral votes obtained, while the open symbols indicate the hypothetical allocation by congressional district. The plotted curves are local regression (lowess) curves for each party under each condition (solid for actual, dashed for hypothetical).

[Figure: actual and hypothetical electoral-vote percentages for each party, plotted against the two-party popular vote]

The exercise shows how any discussion of shifting to this method of allocation should be talked about for what it is: a GOP-biased proposal. Note that, under the actual allocation, the two curves are close to one another, at least through the part of the graph where it really matters–the relatively close elections. There does appear to be a slight Republican bias in the actual method, as that party’s line crosses over 50% of the electoral votes at almost exactly 50% of the (two-party) popular vote, while the curve for Democrats crosses over at just over 50% of the popular vote. In other words, the data plot predicts the Democrat needs a bigger vote lead to get the electoral vote majority. But the effect appears very small, consistent with what Thomas, King, Gelman, and Katz find.

However, under the hypothetical congressional-district allocation, there is a clear Republican bias. The Republican curve crosses over 50% of the electoral vote well to the left of the 50% popular-vote line, while that for Democrats does not break over 50% of the electoral vote until the party has a clear majority of the popular (two-party) vote.

The 2012 result is shown in the graph for the actual allocation, though I did not have the district data readily available. The 2012 result closely matches the fitted curve for the actual result. If it also matched the fitted curve for district allocation, the electoral-college result would have been very close indeed.***

Only in the case of landslides in the popular vote does the congressional-district method result in greater “proportionality”, as indicated by the flatter curve for congressional-district allocation. Otherwise, there is no sense in which the Republican proposal is “proportional”; rather, it is a partisan power grab. It is a power grab especially when employed only in states where the Republican candidate tends to have a better geographical spread of the votes in the state; it is a power grab even if employed for all electors, as assumed in the hypothetical allocations shown here.

Let’s turn to individual elections. Below is the change for the Republican candidate in electoral votes if the congressional-district method had been used instead of the actual statewide winner-take-all:

1968: -8
1972: -46
1976: 28
1980: -97
1984: -57
1988: -49
1992: 48
1996: 34
2000: 15
2004: 31
2008: 64

As we already saw from the graph, in landslide years, the Republican wins fewer electors via congressional districts. The only electoral-college landslides we have had during this time have been by the Republican candidate: 1972, 1980, 1984, 1988. All of these but 1980 were also huge wins in the popular vote. In every election since 1992, the Republican gains electors regardless of whether he wins or loses the actual election. He also gains in 1976. (In 1968, both major-party candidates lose electoral votes, as George Wallace obtains 56 under the congressional-district allocation, against the 46 he actually won.)

In 2004, despite its being a very close election, Bush would have won 317 electoral votes with a district plan, against the actual 286. His total also would have been better in 2000: 286 vs. the “actual” 271. Had Florida’s electors been awarded properly in 2000 under statewide allocation, Bush’s total would have been only 246, meaning that the congressional-district plan would have netted him 40 extra electors, despite losing the popular vote. That’s even more than his 31-elector gain in 2004, when he actually won the popular vote. (It might have been a slightly bigger change in 2000, as I am missing three districts that year: AR3, IN10, and LA2. For two of these, congressional votes are also missing; IN10 was a very safe Democratic district in the 2000 House race.)

Of course, an objection to any simulation such as this is that we do not know how campaign strategy might have changed under different rules. That is certainly true; if each House district actually would have awarded an electoral vote, campaigns would have targeted the marginal districts, some of which would have swung the other way. In other words, the votes themselves could have been different.

We can get a broad understanding of the opportunities for potentially swinging electoral votes by considering how often a district is marginal in the presidential contest.

There are 4,782 observations.**** Of these, 730 (15.26%) were decided by less than 5 percentage points.

Of course, this varies a great deal by year, as shown below (number in parentheses indicates winner’s margin under a congressional-district allocation):

1968, 71 (104)
1972, 28 (410)
1976, 102 (2)
1980, 77 (247)
1984, 36 (398)
1988, 61 (216)
1992, 103 (106)
1996, 82 (152)
2000, 62 (37)
2004, 44 (96)
2008, 64 (64)
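Following the comparison made here–an election is potentially swingable by district-focused campaigning when the number of close districts is at least as large as the hypothetical electoral-vote margin–the list above can be checked mechanically. The data below simply restate the figures from the list.

```python
# (year, districts decided by <5 points, winner's EV margin under district allocation)
data = [
    (1968, 71, 104), (1972, 28, 410), (1976, 102, 2), (1980, 77, 247),
    (1984, 36, 398), (1988, 61, 216), (1992, 103, 106), (1996, 82, 152),
    (2000, 62, 37), (2004, 44, 96), (2008, 64, 64),
]

# elections where close districts >= the hypothetical winner's margin
swingable = [year for year, close, margin in data if close >= margin]
print(swingable)   # -> [1976, 2000, 2008]
```

Note that 1992 narrowly misses this crude threshold (103 close districts against a margin of 106), which is why it is discussed below as a borderline case.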

Obviously, 1976 could have been swung by district-focused campaigning: there were many more close districts than the margin (two electors!) that Carter would have won by under a district-based allocation. Not surprisingly, 2000 is another year when districts within the margin of 5% outnumbered the overall electoral-vote margin under the hypothetical allocation. In 2008 there are as many close districts as the electoral-vote margin, and in 1992 the two figures are within a few districts of one another. Looking only at these four elections, we can see which party had the greater number of marginal district wins.

year Rep Dem
1976 62 40
1992 46 57
2000 27 35
2008 37 27

This suggests that Bush’s district-based win in 2000 would have been relatively secure, as he had fewer close races to defend against the Gore campaign’s (hypothetical) district-swing efforts. And there would have been little risk of the Republican swinging the 1992 or 2008 outcome, though the Republican could have made the race closer. But 1976 really would have been a complete toss-up, depending on how various individual district contests turned out.

We might think that the candidate who trails in the popular vote would have more marginal districts to defend, but this is not true in either 1992 or 2000.

All in all, it is clear that congressional-district allocation of electors benefits one party more than the other, and that in a close election, the Republican candidate would be likely to have an advantage. The Republican might even be able to win with less than 49% of the two-party vote.

It is easy to see why Republicans might like a district-based electoral college. It is much harder to see why anyone would think it was a democratic (small or large d) improvement over the current method, bad though that may be.

I am actually somewhat happy that some Republicans have opened the issue of electoral-vote allocation. The country needs this conversation. However, what it needs is not one party pushing a plan that would be blatantly distorting in its favor. It needs the Democrats to engage the conversation, and come out in favor of the National Popular Vote plan, which would remove partisan bias from presidential elections.


___________________
* See sixth comment, below.

** As is standard for such proposals, I assume that the winner of the statewide plurality of the popular vote would be awarded two electors, in addition to a number corresponding to the number of individual House districts won. Two small states, Maine and Nebraska, are the only two states to have used such an allocation in at least some of the years analyzed.

*** Andrew Gelman suggests that Romney might have won, given the “huge” distortion of congressional-district allocation.

**** Would be 435*11=4,785, if not for the missing districts.

Spurious majorities in the US House in Comparative Perspective

In the week since the US elections, several sources have suggested that there was a spurious majority in the House, with the Democratic Party winning a majority–or more likely, a plurality–of the votes, despite the Republican Party having held its majority of the seats.

It is not the first time there has been a spurious majority in the US House, but it is quite likely that this one is getting more attention ((For instance, Think Progress.)) than those in the past, presumably because of the greater salience now of national partisan identities.

Ballot Access News lists three other cases over the past 100 years: 1914, 1942, and 1952. Sources disagree, but there may have been one other between 1952 and 2012. Data I compiled some years ago showed a spurious majority in 1996, if we go by The Clerk of the House. However, if we go by the Federal Election Commission, we had one in 2000, but not in 1996. And I understand that Vital Statistics on Congress shows no such event in either 1996 or 2000. A post at The Monkey Cage cites political scientist Matthew Green as including 1996 (but not 2000) among the cases.

Normally, in democracies, we more or less know how many votes each party gets. In fact, it’s all over the news media on election night and thereafter. But the USA is different. “Exceptional,” some say. In any case, I am going to go with the figure of five spurious majorities in the past century: 1914, 1942, 1952, 2012, plus 1996 (and we will assume 2000 was not one).

How does the rate of five (or, if you like, four) spurious majorities in 50 elections compare with the wider world of plurality elections? I certainly do not claim to have the universe of plurality elections at my fingertips. However, I did collect a dataset of 210 plurality elections–not including the USA–for a book chapter some years ago, ((Matthew Soberg Shugart, “Inherent and Contingent Factors in Reform Initiation in Plurality Systems,” in To Keep or Change First Past the Post, ed. André Blais. Oxford: Oxford University Press, 2008.)) so we have a good basis of comparison.

Out of 210 elections, there are 10 cases of the second party in votes winning a majority of seats. There are another 9 cases of reversals of the leading parties in which no party won over 50% of seats. So reversals leading to a spurious majority occurred in 4.8% of these elections; including minority situations, reversals occurred in 9%. The US rate, at 5 in 50, would be 10%.

But in theory, a reversal should be much less common with only two parties of any significance. Sure enough: the mean effective number (N) of seat-winning parties in the spurious majorities in my data is just under 2.5, with only one case under 2.2 (Belize, 1993, N=2.003, in case you were wondering). So the incidence in the US is indeed high–given that N by seats has never been higher than 2.08 in US House elections since 1914, ((The original version of this statement, that “N is almost never more than 2.2 here,” rather exaggerated House fragmentation!)) and that even without this N restriction, the rate of spurious majorities in the US is still higher than in my dataset overall.
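The N referred to here is the standard effective number of parties (the Laakso–Taagepera index): one divided by the sum of squared seat shares. A quick sketch, with invented seat totals:

```python
# Effective number of parties (Laakso-Taagepera): N = 1 / sum(share_i ** 2).
def effective_number(seats):
    total = sum(seats)
    return 1.0 / sum((s / total) ** 2 for s in seats)

print(effective_number([50, 50]))    # an even two-party split: exactly 2.0
print(effective_number([242, 193]))  # a lopsided two-party chamber: below 2.0
```

The index discounts small parties in proportion to their size, which is why a chamber with two major parties and a scattering of tiny ones still comes out only slightly above 2.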

I might also note that a spurious majority should be rare with a large assembly size (S). While the US assembly is small for the country’s population–well below what the cube-root law would suggest–it is still large in an absolute sense. Indeed, no spurious majority in my dataset of national and subnational elections from parliamentary systems has occurred with S>125!

So, put in comparative context, the US House exhibits an unusually high rate of spurious majorities! Yes, evidently the USA is exceptional. ((Spurious majorities are even more common in the Senate, where no Republican seat majority since at least 1952 has been based on a plurality of votes cast. But that is another story.))

As to why this would happen, some of the popular commentary is focusing on gerrymandering (the politically biased delimitation of districts). This is quite likely part of the story, particularly in some states. ((For instance, see the map of Pennsylvania at the Think Progress link in the first footnote.))

However, one does not need gerrymandering to get a spurious majority. As political scientists Jowei Chen and Jonathan Rodden have pointed out (PDF), there can be an “unintentional gerrymander,” too, which results when one party’s votes are less optimally distributed than the other’s. The plurality system, in single-seat districts, does not tote up party votes and then allocate seats in the aggregate. All that matters is in how many districts a party had the lead–even if by a single vote. Thus a party that runs up big margins in some of its districts will tend to augment its “votes” column at a faster rate than its “seats” column. This is quite likely the problem Democrats face, and it would have contributed to their losing the seat majority despite their (apparent) plurality of the votes.
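A toy illustration of the point, with invented numbers: Party A piles up huge margins in two districts while Party B wins three narrowly, so B takes the seat majority on far fewer total votes.

```python
# (district, votes_A, votes_B) -- all numbers invented for illustration.
districts = [
    ("D1", 90_000, 10_000),  # A landslide: many "wasted" A votes
    ("D2", 85_000, 15_000),  # another A landslide
    ("D3", 48_000, 52_000),  # narrow B win
    ("D4", 47_000, 53_000),  # narrow B win
    ("D5", 49_000, 51_000),  # narrow B win
]

votes_a = sum(a for _, a, _ in districts)
votes_b = sum(b for _, _, b in districts)
seats_a = sum(1 for _, a, b in districts if a > b)
seats_b = len(districts) - seats_a

print(f"A: {votes_a:,} votes, {seats_a} seats")  # A: 319,000 votes, 2 seats
print(f"B: {votes_b:,} votes, {seats_b} seats")  # B: 181,000 votes, 3 seats
```

No district lines had to be drawn with intent here; the reversal follows entirely from where each party’s voters happen to be concentrated.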

Consider the following graph, which shows the distribution (via kernel densities) of vote percentages for the winning candidates of each major party in 2008 and 2010.

Kernel density winning votes 2008-10

We see that in the 2008 concurrent election, the Democrats (solid blue curve) have a much longer, higher tail of the distribution in the 70%–100% range. In other words, compared to Republicans the same year, they had more districts in which they “wasted” votes by accumulating many more in the district than needed to win it. Republicans, by contrast, tended that year to win more of their races by relatively tighter margins–though their peak is still around 60%, not 50%. I want to stress that the point here is not to suggest that 2008 saw a spurious majority. It did not. Rather, the point is that even in a year when Democrats won both the vote plurality and the seat majority, they had a less-than-optimal distribution, in the sense of being more likely than Republicans to win by big margins.

Now, compare the 2010 midterm election, in which Republicans won a majority of seats (and at least a plurality of votes). Note how the Republican (dashed red) distribution becomes relatively bimodal. Their main peak shifts right (in more ways than one!) as they accumulate more votes in already safe seats, but they develop a secondary peak right around 50%, allowing them to pick up many seats narrowly. That the peak for winning Democrats’ votes moved so much closer to 50% suggests how much worse the “shellacking” could have been! Yet even in the 2010 election, the tail on the safe-seats side of the distribution still shows more Democratic votes wasted in ultra-safe seats than is the case for Republicans. ((It is interesting to note that 2010 was very unusual in having no districts left uncontested by either major party.))
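The kind of kernel-density comparison shown in the graph can be sketched as follows. The winning vote shares here are randomly generated stand-ins, not the actual 2008/2010 returns, and the plotting call is left as a comment.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=3.0):
    # Simple Gaussian kernel density estimate evaluated over the given grid.
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
# Invented winners' vote shares: one party with a longer high tail (more
# "wasted" votes in safe seats), the other clustered nearer the margin.
dem_winners = np.clip(rng.normal(68, 12, 200), 50, 100)
rep_winners = np.clip(rng.normal(60, 8, 180), 50, 100)

grid = np.linspace(50, 100, 200)
dem_density = gaussian_kde(dem_winners, grid)
rep_density = gaussian_kde(rep_winners, grid)
# With matplotlib: plt.plot(grid, dem_density); plt.plot(grid, rep_density)
```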

I look forward to producing a similar graph for the 2012 winners’ distribution, but will await more complete results. A lot of ballots remain to be counted and certified. The completed count is not likely to reverse the Democrats’ plurality of the vote, however.

Given higher Democratic turnout in the concurrent election of 2012 than in the 2010 midterm election, it is likely that the distributions will look more like 2008 than like 2010, except with the Republicans retaining enough of those relatively close wins to have held on to their seat majority.

Finally, a pet peeve, and a plea to my fellow political scientists: Let’s not pretend there are only two parties in America. Since 1990, it has become uncommon, actually, for one party to win more than half the House votes. Yet my colleagues who study US elections and Congress continue to speak of “majority”, by which they mean more than half the mythical “two-party vote”. In fact, in 1992 and every election from 1996 through at least 2004, neither major party won 50% of the House votes. I have not ever aggregated the 2006 vote. In 2008, Democrats won 54.2% of the House vote, Republicans 43.1%, and “others” 2.7%. I am not sure about 2010 or 2012. It is striking, however, that the last election of the Democratic House majority and all the 1995-2007 period of Republican majorities, except for the first election in that sequence (1994), saw third-party or independent votes high enough that neither party was winning half the votes.

Assuming spurious majorities are not a “good” thing, what could we do about it? Democrats, if they are developing a systematic tendency to be victims of the “unintentional gerrymander”, would have an objective interest in some sort of proportional representation system–perhaps even as much as that unrepresented “other” vote would have.

Ballot order controversy

A legal issue in Connecticut over the order in which each party’s candidate will appear on the ballot:

[A] lawsuit is causing a delay on the final order of candidates for Election Day ballots. The GOP took Secretary of the State Denise Merrill to court after she decided Democrats should get the top ballot line. Republicans say state law dictates otherwise…

State statute says “the party whose candidate for governor polled the highest number of votes in the last-preceding election” gets the first position on the ballot. But Democrat Dannel P. Malloy appeared on the ballot twice in 2010, on the Democratic and Working Families Party ballot lines. More votes were cast for Tom Foley on the Republican line than they were for Malloy on the Democratic line, but the Working Families Party votes handed Malloy the election.

See the Post-Chronicle story for all the gory detail.

Not counting votes of withdrawn candidate?

Ed sends along the following note, based on a story at Daily Kos Elections:

Apparently the South Carolina election commission decided simply not to count the votes for a candidate on the ballot in a Democratic primary, who had stated earlier that he was withdrawing from the race. The South Carolina Democratic Party objected to this and has threatened a lawsuit.

I had always assumed that election commissions in the US were jointly controlled by the local Democratic and Republican Parties (actually a big problem with US elections; they should be more independent). This seems to be a pretty blatant violation of election law by the Elections Commission itself, over the objections of one of the big two. Is the story being distorted, or is there really more explanation or precedent for this than appears at first glance?

California’s new electoral system

This is what California’s ballot for US Senate looked like today.

2012 June top-two ballot columns

This is an image from Orange County; there would be regional variations in format. This example seems especially bad, with some of the candidates, including the incumbent, listed in a short second column. ((The ballot where I voted managed to have all these candidates in a single column.))

That’s 24 candidates, including several with the same indicated “party preference” as others running. The electoral system is now “top two”. Rather than an actual primary, in which each of the recognized parties would winnow its field to one candidate for the general election in November, the top two–regardless of party, and regardless of whether one obtains an overall majority today–will face each other in November. And only the top two, meaning no minority-party presence (unless one of the third-party candidates somehow manages to finish in the top two). ((Strangely, one of the recognized parties, the Greens, has no candidate even in this first round.))

I am not a fan of this new system. I did not cast a vote in this particular contest.