
Apportionment Simulator



If you're an American, you are likely aware that the House of Representatives is apportioned among the states after every Census. This is specified in Article I, Section 2, Clause 3 of The Constitution. That clause has an apportionment in it (loadable below as the 1789 Apportionment), but that was always designed to be temporary until a Census was taken in 1790. That clause also states that "each State shall have at Least one Representative", and includes the Three-Fifths Compromise (which was repealed by the first sentence of Amendment XIV).

What you may not know is that the mathematics behind the apportionment are not always the same; that is, there are multiple methods of performing this task mathematically. What this page was originally designed to do was to take population data and perform the apportionment using different methods of which I am aware.

In the process of trying to implement that, I learned a lot: about how these methods work (more similarly than I thought, in some cases), about the history of their actual implementation, and about American history in general, with some data-driven perspectives on it, especially between 1860 and 1870.

What I Thought I Knew

When I've learned about apportionment in the past, including when I've taught it, we always start with Hamilton's Method. It goes as follows:

  1. Compute the Standard Divisor, which is the Total Population divided by the Size Of The House
  2. Compute each state's Standard Quota, which is the state's population divided by the Standard Divisor
  3. Give each state its Lower Quota of representatives (with a minimum of 1)
  4. Subtract the Lower Quota from the Standard Quota for each state, giving the Remainder
  5. Sort the states by descending order of their remainders
  6. Allocate one additional representative to each state in this sorted order until there are no more representatives to allocate
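A minimal sketch of these steps in JavaScript (the function and variable names here are mine, not the simulator's):

```javascript
// Hamilton's Method: Lower Quotas first, then largest remainders.
// pops is an array of state populations; seats is the Size Of The House.
function hamilton(pops, seats) {
  const divisor = pops.reduce((a, b) => a + b, 0) / seats;    // Standard Divisor
  const quotas = pops.map(p => p / divisor);                  // Standard Quotas
  const alloc = quotas.map(q => Math.max(1, Math.floor(q)));  // Lower Quotas, min 1
  let remaining = seats - alloc.reduce((a, b) => a + b, 0);
  // Sort state indices by descending remainder
  const order = quotas
    .map((q, i) => [q - Math.floor(q), i])
    .sort((a, b) => b[0] - a[0]);
  for (let k = 0; remaining > 0; k++, remaining--) {
    alloc[order[k][1]] += 1;  // one extra seat each, largest remainder first
  }
  return alloc;
}
```

For example, with populations of 21000, 25000, and 54000 and a House of 10, the quotas are 2.1, 2.5, and 5.4; the Lower Quotas use 9 seats, and the last seat goes to the middle state (remainder 0.5), giving 2, 3, and 5.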

Then we learn about three Divisor Methods: Adams, Webster, and Jefferson. They all work the same way, with the only difference being in rounding:

  1. Pick a Divisor (usually we start with the Standard Divisor, though it doesn't often work)
  2. Compute each state's Rounded Quota, which is the state's population divided by the Divisor, then rounded as follows: down for Jefferson, up for Adams, and to the nearest integer for Webster
  3. Give each state its Rounded Quota of representatives (with a minimum of 1)
  4. Compute the Size Of The House under this allocation
  5. If that Size isn't the one required, adjust the Divisor and repeat from step 2
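The Divisor loop might be sketched like this, assuming the usual rounding rules (Jefferson down, Adams up, Webster to the nearest) and using bisection to adjust the Divisor; the page's actual search strategy may well differ:

```javascript
// A generic Divisor Method: only the rounding rule differs.
const ROUND = {
  jefferson: Math.floor,  // round down
  adams: Math.ceil,       // round up
  webster: Math.round,    // round to nearest
};

function divisorMethod(pops, seats, round) {
  let lo = 1, hi = pops.reduce((a, b) => a + b, 0);  // bracket the Divisor
  for (let i = 0; i < 200; i++) {                    // safety break
    const d = (lo + hi) / 2;
    const alloc = pops.map(p => Math.max(1, round(p / d)));
    const size = alloc.reduce((a, b) => a + b, 0);
    if (size === seats) return alloc;
    if (size > seats) lo = d; else hi = d;  // too many seats: raise the Divisor
  }
  return null;  // gave up -- probably a tie
}
```

Note how raising the Divisor shrinks every quota, so the House size moves monotonically, which is what makes a bracketing search possible at all.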

We stop teaching at this point, which is a shame because we don't then teach what is actually used nowadays: the Huntington-Hill Method. When I learned this method, it was described as follows:

  1. Allocate each state 1 representative (this satisfies the Constitutionally-required "minimum of 1" above)
  2. For each state, compute a Priority Value, which is the state's population divided by the geometric mean of the current allocated number of representatives and one more than the current allocated number
  3. Allocate one additional representative to the state with the highest Priority Value, and recalculate the next Priority Value for that state from the new allocated number
  4. Repeat the last step until there are no more representatives to allocate
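As a sketch (again with names of my own), the Priority loop looks like this:

```javascript
// Huntington-Hill as a Priority method: every state starts with one
// seat, and each further seat goes to the state with the highest
// population / sqrt(n * (n + 1)) -- the geometric mean of its current
// seat count n and n + 1.
function huntingtonHill(pops, seats) {
  const alloc = pops.map(() => 1);  // the Constitutional minimum of 1
  const priority = i => pops[i] / Math.sqrt(alloc[i] * (alloc[i] + 1));
  for (let s = pops.length; s < seats; s++) {
    let best = 0;
    for (let i = 1; i < pops.length; i++) {
      if (priority(i) > priority(best)) best = i;
    }
    alloc[best] += 1;  // award the seat; that state's priority drops
  }
  return alloc;
}
```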

This is kind of like Hamilton's Method in the sense that the next representative is the next state in a sort, but it starts from the beginning rather than from the Lower Quotas, so it's really an entirely different algorithm than the Divisor Methods, which explains in part why we don't teach it.

What I Was Wrong About

I had always thought that apportionment required a Size Of The House at the start. In fact, prior to 1850, the apportionment was done without it; to quote from the Census Bureau, "The size of the House of Representatives was not predetermined, but resulted from the calculation." How is this even possible? It turns out that what they did was simply choose the Standard Divisor, run the algorithm, and then take whatever they got rather than tweak the Divisor to get the "right" Size. (I learned this on the second day of the project.)
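In code terms, the pre-1850 approach is almost trivially short: the mandate is the Divisor itself (a Target Par Value), and the House size is whatever falls out. A sketch, with names of my own:

```javascript
// Fix a divisor, round each state's quota, and accept the resulting size.
function apportionByDivisor(pops, divisor, round) {
  return pops.map(p => Math.max(1, round(p / divisor)));
}

// The Size Of The House is then just the sum of the allocation.
function houseSize(alloc) {
  return alloc.reduce((a, b) => a + b, 0);
}
```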

I also believed, as I wrote above, that Huntington-Hill was a Priority method, and Adams/Webster/Jefferson were Divisor methods, requiring completely different algorithms. However, I found an unexpected contradiction to that. When reading the Census Bureau's briefs on apportionment (since 1940), they describe Huntington-Hill exactly as I have above, as a Priority algorithm, but looking at their method comparison list, they state, "The Huntington-Hill Method is a modified version of the Webster method, but it uses a slightly different rounding method." Again, how is this possible? If it's just a different rounding than Webster, that makes it a Divisor method! I discovered that this was accurate.
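The reconciliation lies in the rounding rule. Webster rounds a quota at the arithmetic mean of its two neighboring integers (i.e., ordinary rounding); Huntington-Hill rounds it at their geometric mean. A sketch of that rule, usable as the rounding function in a generic Divisor algorithm:

```javascript
// Round a quota q at the geometric mean of floor(q) and floor(q) + 1.
// Webster would round at the arithmetic mean (lower + 0.5) instead.
function roundGeometric(q) {
  const lower = Math.floor(q);
  return q > Math.sqrt(lower * (lower + 1)) ? lower + 1 : lower;
}
```

Note that any quota below 1 rounds up (the geometric mean of 0 and 1 is 0), which dovetails neatly with the minimum-of-one requirement.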

Implementation Notes

As with any programming project, I've changed how I'm doing things at least a dozen times since I started (which as I write this was only seven days ago).

Over the course of this project, I've had close to a dozen "different" apportionment methods, some of which turned out to actually not be different.

The first thing under "What I Was Wrong About" led to the first Big Change: I could no longer assume that the mandate was the Size Of The House; it could instead be a Target Par Value. So I now needed a way to choose the mandate, and I had to implement both kinds.

Related to this: I had (but never implemented) three methods based on the Wyoming idea, viz., that the least-populous state (Wyoming, currently, hence the name) should receive one representative and the rest of the states should scale from there. I suddenly realized sometime on the fourth day, I think, of this project – after I knew about Target Par Value methods – that the Wyoming idea was just a specific case of Target Par Value, and I was able to implement it as such and remove them as separate cases.

The second Big Change was actually with the popsets. I had assumed that I would only have one population set for every Census, but then that went awry when I realized I could see what apportionment would have looked like without the Three-Fifths Compromise, and that was worth a lot more than limiting myself to one popset per year. So I had to label the popsets, which shifted their encodings one index right. Eventually I added the Apportionments as if they were population sets, which allowed some better testing, but then I wanted to streamline the calculation there, and that really cluttered the option list, so I split it into two, year and poptype.

By the time I got to writing the implementation (the doapportionment() function), I had four cases: Adams/Webster/Jefferson with a Target Par Value mandate, Hamilton with a Size Of The House mandate, Adams/Webster/Jefferson as a Divisor method with a Size Of The House mandate, and Huntington-Hill as a Priority method with a Size Of The House mandate. I also had an idea for another Priority method, where the next representative is awarded to whichever state at that moment has the highest par value, which I called "Descending Average".

The really stunning discovery for me was learning, as I wrote in the second thing under "What I Was Wrong About", that there were two different implementations theoretically possible for Huntington-Hill. I tried making "Huntington-Hill D", a Divisor version, and tested if it gave the same results as the (known to me) Priority version. It did, as far as I could tell with hand-testing.

This made me wonder if there is a way to implement Adams/Webster/Jefferson as Priority methods. When I attempted it, I realized that what I thought should give Adams's Method by Priority was the "Descending Average" method I had invented, so if it worked, it was actually a method that was 200 years old. I put in the implementations that I thought might work, as Adams P/Webster P/Jefferson P, and then I wrote some code to try to test them. With every population set I had, with every House size from 100 to 998, they came out the same. I later modified the function after making some other interface changes (like the year and poptype split), but this button will still launch it, comparing algotypes 'division' and 'priority': It will, I believe, return 'true' for inputs of 'Jefferson', 'Adams', 'Webster', and 'Huntington-Hill'. (It may take a couple minutes, though, since it's testing at least 80,000 cases.) It's not a proof, but it seems likely.

I am flabbergasted that two completely different implementations of Huntington-Hill, as a Divisor method and as a Priority method, appear to be identical. Not just that, either; every Divisor method I had (Adams/Webster/Jefferson) can be implemented as a Priority method. Why did I never learn this? How did I not know this?

So now that I had two copies of four different methods, I decided to enable the user to select the implementation (Divisor or Priority), even though as far as I know they come out the same.

Once I knew Huntington-Hill uses the Geometric Mean for rounding, I wondered what happens if the Harmonic Mean is used, so I tried to implement that. It turns out that if implemented correctly (which I'm sure my Divisor version is, but possibly not my Priority version) it's Dean's Method, which I'd never heard of until after I implemented it.

When typing above that the implementation I knew of Huntington-Hill as a Priority method was "kind of like Hamilton's Method in the sense that …" (which was on the seventh day of the project), it occurred to me to try implementing Hamilton as an actual Priority method, where the Priority at any given moment is the Standard Quota minus the Current Allocation. Much to my surprise, it worked. This disproved my earlier hypothesis that Hamilton could not be implemented as either Divisor or Priority, and calls into question another hypothesis, that every Divisor method can be implemented as Priority and vice versa. EDIT an hour later, after typing more of the above: No, not every Priority method can be implemented as a Divisor method, because every Divisor method can be implemented with either mandate, and Hamilton absolutely cannot be implemented with the Target Par Value mandate.
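A sketch of that idea (names mine): every state starts at the minimum of one seat, and the priority of a state is its Standard Quota minus its Current Allocation.

```javascript
// Hamilton as a Priority method.
function hamiltonPriority(pops, seats) {
  const divisor = pops.reduce((a, b) => a + b, 0) / seats;  // Standard Divisor
  const alloc = pops.map(() => 1);                          // minimum of 1
  for (let s = pops.length; s < seats; s++) {
    let best = 0;
    for (let i = 1; i < pops.length; i++) {
      // priority = Standard Quota - Current Allocation
      if (pops[i] / divisor - alloc[i] > pops[best] / divisor - alloc[best]) {
        best = i;
      }
    }
    alloc[best] += 1;
  }
  return alloc;
}
```

In small hand tests this agrees with the classic largest-remainder formulation, consistent with the observation above, though that's an observation rather than a proof.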

Early on, I decided that I didn't want to have to recompute … well, anything … every time a change was made to the population form (i.e., a number got changed, a merge or unmerge was performed, or statehood was granted/removed). Yet late on the seventh day, I did just that.

Any population set listed as Actual – meaning the actual population numbers used for Apportionment that year – should be from the Census Bureau's Apportionment Brief from that year. Most of these Briefs also give other information, depending on the year. In 2020, there is both Apportionment (i.e., Actual) and Resident population. In 1830, there is Free and Slave population as well as Apportionment (i.e., Actual) population; I added the Free and Slave to get the Aggregate. Any population sets (not Apportionment sets) labeled without the word "Report" come from contemporaneous briefs.

For both 1850 and 1870, the Census Bureau put out a recap of all the Censuses from 1790 to the then-present. What makes this significant is geographic changes: in the 1850 report, there are population numbers for Maine back to 1790, and similarly in the 1870 report for West Virginia. This data is noted as (1850 Report) or (1870 Report) in the pull-downs. The data recapped gives White, Free Colored, Slave, and Total; I've computed the Free total and the Computed (i.e., Three-Fifths) total, along with White and Aggregate, from these reports.

I didn't have a problem with 'undefined' showing up when the population was, well, undefined (e.g., the population of Minnesota in 1850, when there was no Minnesota – though Minnesota does have an 1850 population in the 1870 report), but it caused some problems with merging data, so on the eighth day, I removed them (by defining everything undefined as '').

Ties are a problem. I had to set up an alternate breakpoint for the Divisor methods, for when the interval available to tweak the divisor became too small to move at all. (I used 5 * Number.EPSILON, because I saw it hang up on 4ε, and infinite loops are no fun.) As a practical matter, ties don't happen in real state populations, but once I put in apportionments as if they were populations, I started seeing trouble. This doesn't happen with the Priority methods, though they have their own issues with ties: the algorithm I use to sort just takes the first tied state every time.

I decided to get around this by making a tiny adjustment to the population numbers: adding 100000ε times the index number to each population; that way, while they may still be equal to at least eight decimal places, they won't be tied in the algorithm, and so it won't break. I still kept the safety break in the Divisor algorithm, but I thought it shouldn't be needed. Ha ha. It was needed almost immediately: 2020 Apportionment, Jefferson, House Size 104; only slightly better than the House Size 101 that broke it before. So that workaround wasn't nearly good enough. Then I tweaked the example I was using to test breaking the Divisor method, aptly titled 'break divisor method', and the safety break failed massively.
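The perturbation itself, as described, is a one-liner (sketched with my own names):

```javascript
// Nudge each population by 100000 * epsilon * its index so that
// identical populations no longer compare exactly equal.
function perturb(pops) {
  return pops.map((p, i) => p + 100000 * Number.EPSILON * i);
}
```

One plausible reason this wasn't good enough: the nudge is absolute (roughly 2e-11 per index step), so for populations in the millions it can fall below half an ulp, in which case the addition rounds away entirely and the tie survives.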

What I've done as of now is threefold:
