Monday, November 10, 2014

How skewed is the US Senate?

A recent article in The Economist recommends repealing the Senate filibuster because of the small share of the population needed to block legislation.  In theory, as little as 11% of the population could be represented by the 41 senators needed to block anything except a budget.  A conversation with my bud PD led to the question: even though the Senate is skewed by design, how does the present Senate compare to the first one based on a census, and would the Founding Fathers approve of today's circumstances?

It turns out that the filibuster was created decades later and first used in 1837.  Therefore, even if the population were distributed the same, the represented population needed to block legislation would be considerably higher.  Without a filibuster, it would take at least 18% of the population to block legislation rather than the aforementioned 11%.  Working from the 1790 Decennial Census, combining Maine and Massachusetts, and ignoring the filibuster, it took senators representing 24% of the population to block legislation.  Interestingly, it takes senators representing a minimum of 26% of today's population to pass legislation in the face of a filibuster, which is very close to the 1790 proportion.
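The blocking-share arithmetic above can be sketched in a few lines of code.  The idea is greedy: since every state gets two senators, the cheapest blocking coalition takes senators from the least-populous states first.  The populations in the example are hypothetical placeholders, not census figures.

```python
def min_blocking_share(state_pops, senators_needed):
    """Smallest share of the population whose senators can block legislation.

    Each state has 2 senators; greedily take senators from the
    least-populous states until the blocking threshold is reached.
    """
    pops = sorted(state_pops)          # least-populous states first
    total = sum(pops)
    senators = 0
    represented = 0
    for p in pops:
        represented += p
        senators += 2
        if senators >= senators_needed:
            break
    return represented / total

# Hypothetical example: five states with made-up populations.
pops = [1, 2, 3, 10, 20]
# With 10 senators total, blocking a simple majority needs 6 senators,
# i.e. the 3 smallest states: (1 + 2 + 3) / 36 of the population.
print(min_blocking_share(pops, 6))
```

The same function answers the filibuster question by changing `senators_needed` (41 for a filibuster, versus a bare majority otherwise).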

Note that some of the difference above is due to the smaller number of states and the discrete nature of the voting.  Below is the cumulative population distribution plotted against the proportion of US Senate representation for 1790 (red) and 2010 (blue).  Visually, one can see that the 2010 Senate is more skewed.  For kicks, the Gini coefficients for 1790 and 2010 are 39% and 50%, respectively.  I'm no historian, so someone else will have to address the question of what the Founding Fathers would think of today's circumstances ... or if what they would think matters.  More to the point of the original article, it should be clear that small-population states have a much bigger influence in the Senate than in the past when it comes to obstructing legislation.

Cumulative Population against Cumulative Senate Representation
RED: 1790 population
BLUE: 2010 population
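For the curious, a Gini coefficient like those quoted above can be computed from a Lorenz-style curve of cumulative population share versus cumulative Senate-seat share.  Here is a minimal sketch of that calculation; the example populations are hypothetical placeholders, not census figures.

```python
def senate_gini(state_pops):
    """Gini coefficient of Senate representation relative to population.

    Sort states by senators per capita ascending (equivalently, by
    population descending, since every state has 2 senators), accumulate
    population share on x and seat share on y, and compute
    Gini = 1 - 2 * (area under the Lorenz curve, via trapezoids).
    """
    pops = sorted(state_pops, reverse=True)  # most underrepresented first
    total_pop = sum(pops)
    n = len(pops)
    x_prev = y_prev = 0.0
    cum_pop = 0
    area = 0.0
    for i, p in enumerate(pops, start=1):
        cum_pop += p
        x = cum_pop / total_pop  # cumulative population share
        y = i / n                # cumulative Senate-seat share
        area += (x - x_prev) * (y_prev + y) / 2
        x_prev, y_prev = x, y
    return 1 - 2 * area

# Equal populations give perfect proportionality (Gini = 0);
# skewed populations push the coefficient toward 1.
print(senate_gini([5, 5, 5, 5]))
print(senate_gini([10, 1, 1]))
```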
   

Saturday, June 28, 2014

Shimano and Tektro brake comparison ... or what models of v-brakes sit flush with the end of the brake boss stud?

During the process of switching from long pull to short pull v-brakes on a folding bike, I noticed a quirk between the Shimano and Tektro brakes with respect to mounting on the brake bosses.

Long story short, it appears that the brake-boss base -- the wider part where the springs of the v-brakes mount -- is "too long" for the Shimano v-brakes.

Ordinary brake boss.  I'm calling the wider part of the boss
where the brake's spring resides the base.
Black Tektro is flush.  Notice that the entire silver Shimano brake
is moderately wider.  However, the spring portion of the Tektro
brake is wider than its Shimano counterpart.
Notice the gap between the brake and
the left edge of the brake boss.  The spring end is in
the hole of the brake boss.

A quick comparison between the two brakes reveals that the Tektro spring portion of the brake is deeper than its Shimano counterpart.  In fact, once you take into account that the entire Shimano brake is somewhat wider than the Tektro brake, the Shimano brake extends quite a bit past the end of the stud.

Tektro brake is flush with the stud.
Shimano brake is roughly 5-10 mm past the end of the stud.
I gather from a few online discussions that the gap between the brake-boss base and the brake is standard for some models and not an issue: the post is more than stiff enough to support the brake, and the bearings (or whatever allows the brake to rotate) are inside the brake rather than between the stud and the brake.

The sad thing is that after learning all of this, I determined that these Shimano brakes will not work with the folding bike.  The front rack mounts onto the front of the brakes as well as to an eyelet.  Changing brakes would move the mounting points and make it more than a little difficult (impossible?) to fit the rack.

So does anyone know if a Tektro short pull v-brake -- say the Tektro RX5 -- sits flush with the end of the stud?  Are there other options?  Thanks.  

Saturday, March 15, 2014

Let's insert some sanity into prison policy

The Onion nails a serious issue with spectacular satire.

15 Years In Environment Of Constant Fear Somehow Fails To Rehabilitate Prisoner

How terrible are our prisons?  The Justice Department studied the issue and estimated that in 2008 there were 216,000 rape victims in US prisons, suggesting that more than half of all rape victims are men.  Notice that this is a count of rape victims, such that repeated rapes of the same victim are counted once.  The bipartisan Commission on Safety and Abuse in America's Prisons describes the high level of overcrowding and violence in its 2006 report.

Of course, there are some terrible people in prison who warrant severe punishment.  However, sixty percent of the inmate population are there for nonviolent offenses; one-fourth are there for nonviolent drug offenses.  Setting aside whether sending nonviolent criminals into this environment is moral, the natural response is that severe punishment deters crime.  However, the literature overwhelmingly supports certainty of punishment as a far more effective deterrent than severity of punishment.

During this time of budget austerity pressures, people are finally noticing that incarcerating inmates is expensive.  In a state like California, it costs $47,000 per inmate annually; the average across states is a little more than $32,000.  Note that these are accounting costs that fail to capture lost work opportunities and time away from parenting and loved ones.

Naturally, we should support the Attorney General as he recommends lowering sentences for nonviolent drug offenders.  More broadly, we should consider whether it makes sense for us to pursue the expensive yet less effective strategy of incarcerating nonviolent inmates in inhumane environments for lengthy periods of time.  

Tuesday, March 4, 2014

The speed of a vehicle and pedestrian mortality

The more than four thousand pedestrians killed every year in motor vehicle collisions motivated the following graphic.


Naturally, we expect the likelihood of mortality to increase as the vehicle travels faster, and physics suggests the relationship will be nonlinear.  But like anything else, we want to make decisions based on accurate information and an accurate understanding of risk.

These estimates originate from research using data from the 1980s and earlier that was more likely to report severe injuries.  From the abstract of a literature review published in 2011:
Without exceptions, papers written before 2000 were based on direct analyses of data that had a large bias towards severe and fatal injuries. The consequence was to overestimate the fatality risks. We also found more recent research based on less biased data or adjusted for bias. While still showing a steep increase of risk with impact speed, these later papers provided substantially lower risk estimates than had been previously reported.
The same authors produced the following table in their 2009 paper.

Rosen and Sander (2009)
Clearly, estimates that correct for the bias or use less biased data are much lower than those suggested by the popular graphic.  Mind you, it's still the case that the likelihood of mortality rapidly increases as vehicle speed increases and that serious injuries are important too.

From DOT HS 809 021 October 1999
Note: Given the 2011 literature review, the table is potentially biased since the paper was written so close to the year-2000 threshold.  The mortality estimates roughly match later estimates, however, suggesting that the table is based on less biased data.
Broadly speaking, as a person who advocates transportation networks with strong walking and bicycling options, I think relying on bad or biased estimates when better ones are readily available is a terrible strategy.  Besides being ethically questionable, it (1) makes advocates look naive, (2) sets expectations too high, and (3) leads to poor decision-making.  For example, suppose we believe the graphic's claim that a pedestrian struck by a vehicle traveling at 40 mph is almost certain to die.  One might reasonably conclude that there is no point in traffic-calming a 50 mph arterial, since nothing is gained until you get below 40 mph.  However, that's not the case based on the more robust estimates.

EDIT:

We can see the bias in this graph by Rosen: the risk curves are dramatically shifted left when the biased data are used.