Checklists, Cheap Heuristics

Checklists stifle creativity. Checklists are based on past experience and understanding, while creativity looks toward what has yet to be seen. The problem with checklists is that it is easy to assume every potentiality has been considered: if all the boxes are checked, then success is assured. Instead we want people to learn the lessons behind each item on a checklist, to internalize the truths those items encode. We strike a balance between relearning everything and taking away the freedom of individuals to find their own truths. We want individuals to try things that would never be tried by rote following of the checklist.

Checklists for causality. If you want to prove A caused B, for example that taking drug A caused the plaintiff to suffer damage B, then causal checklists such as those discussed below can be helpful in determining causality. But the authors of these lists caution that they are only mnemonics, not mechanical exercises. Any good trial lawyer can get a yes or a no for any item on either of these lists, depending on which answer serves the interests of their client.

Checklists stifle freedom. They are simplified models. Checklists are lists with a mandate: you must place a check in each box whether or not the item next to it has any relevance to what you are doing; you check off items that are not even important to your task. Some must even be filled out in 1-2-3 order. I resent being given a checklist I did not develop. Internally I do not know whether to believe the items on it; I submit but am not convinced. It takes an exceptional manager to bring the staff along on this journey. Show responsibility by learning the lessons of the checklist, and we allow you the freedom to deviate from it.

Checklists encourage legalistic and mechanical behaviors. If you judge me on my filling in all the checks, I will excel at filling them in. I will design my work and set your expectations around the checklists. Checklists also discourage thinking about the unexpected. If I look first at the checklist, then I will be blinded to other opportunities and options. I can convince myself that my observations magically fit into the current checklist. Some personality types fall in love with checklists and insist on their rigorous application.

Lipinski's Rule-of-Five gives a thoughtful illustration. This is a rule of thumb for evaluating the drug-likeness of a molecule, that is, for determining whether a molecule has the properties of a likely oral drug in humans. The rule-of-five states that, in general, an oral drug (e.g., tablet, capsule) should meet the following four hurdles:

  • Not more than 5 hydrogen bond donors
  • Not more than 10 hydrogen bond acceptors
  • A molecular weight under 500 g/mol
  • An octanol-water partition coefficient (log P) less than 5

All numbers are multiples of five, hence the name. For the purposes of this discussion the significance of each rule is unimportant; the rules affect the pharmacological properties of the drug (e.g., the ability of the molecule to be absorbed by the gut, or to be harmlessly metabolized by the liver).
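The four hurdles above amount to a small mechanical check. A minimal sketch in Python follows; the function name is my own, and in practice the property values would come from a cheminformatics toolkit rather than being typed in by hand:

```python
def rule_of_five_violations(h_donors, h_acceptors, mol_weight, log_p):
    """Return the list of Lipinski rule-of-five criteria a molecule violates.

    Inputs: hydrogen bond donor count, hydrogen bond acceptor count,
    molecular weight in g/mol, and log P (octanol-water partition coefficient).
    """
    violations = []
    if h_donors > 5:
        violations.append("more than 5 hydrogen bond donors")
    if h_acceptors > 10:
        violations.append("more than 10 hydrogen bond acceptors")
    if mol_weight >= 500:
        violations.append("molecular weight not under 500 g/mol")
    if log_p >= 5:
        violations.append("log P not less than 5")
    return violations

# Aspirin-like values (1 donor, 4 acceptors, ~180 g/mol, log P ~1.2)
# satisfy all four hurdles:
print(rule_of_five_violations(1, 4, 180.16, 1.2))  # → []
```

Note that, like any checklist, the function returns which boxes went unchecked; it says nothing about why those properties matter pharmacologically.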

I casually dismissed the rule-of-five in discussions with a research client at a major pharmaceutical company. The Pharmaceutical Sciences department can often devise workarounds for the effects of violating any one of these rules (e.g., fast-dissolve wafers for the tongue). The client shot back:

  "I very much believe in the Lipinski Rule of Five. If any of our drug candidates violates more than two of the criteria I discontinue the candidate." (Pharmaceutical Industry Colleague, 2008, personal communication)

At first blush the client's statement seemed illogical. What good are rules if you can violate two out of four of them? Later I recalled the three-body problem. If these rules are dependent on each other, that is, if to achieve a molecular weight under 500 g/mol I must increase the number of hydrogen bond donors, and so on, then violating more than two of the rules can create an intractable problem. The Pharmaceutical Sciences department can compensate for the effects of up to two violations; with three violations they will likely never succeed. So, for tactical lists like the rule-of-five, the compromise taken by this client may represent a workable balance between efficiency and questions of causality.
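The client's policy is easy to state as code. The sketch below is my paraphrase of the rule, not the client's actual screening process; the candidate names and property values are invented for illustration:

```python
def count_violations(props):
    """Count Lipinski rule-of-five violations.

    props: dict with keys 'donors', 'acceptors', 'mw' (g/mol), 'logp'
    (hypothetical key names chosen for this example).
    """
    return sum([
        props["donors"] > 5,
        props["acceptors"] > 10,
        props["mw"] >= 500,
        props["logp"] >= 5,
    ])

def keep_candidate(props):
    # The client's policy: up to two violations can be worked around;
    # more than two and the candidate is discontinued.
    return count_violations(props) <= 2

candidates = {
    "A": {"donors": 1, "acceptors": 4,  "mw": 180.0, "logp": 1.2},  # 0 violations
    "B": {"donors": 6, "acceptors": 11, "mw": 550.0, "logp": 3.0},  # 3 violations
}
kept = [name for name, props in candidates.items() if keep_candidate(props)]
print(kept)  # → ['A']
```

The point of the threshold is exactly the dependency argument above: each single violation is fixable in isolation, but three interacting violations leave no degrees of freedom to work with.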

A. B. Hill and Daubert are examples of meta checklists. These lists simplistically suggest that you can demonstrate (or strongly infer) causality if you satisfy their rules. They are often used by practitioners to justify their ignorance of underlying causal mechanisms. Observed correlations (direction, time, space) do not provide a basis for inferring underlying causality, even if used as part of a dynamic systems model. See The Hidden Side of Causality.

Austin Bradford (A. B.) Hill published his criteria for assessing causation in a seminal 1965 article. This list is often referred to as a checklist for causality, despite this not being Hill's intention: "none of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required sine qua non." Hill himself gives several counterexamples to his own rules in the article.

The Daubert standard is a legal precedent set in 1993 by the Supreme Court of the United States regarding the admissibility of scientific evidence in legal proceedings. The citation is Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993). Daubert directed the courts to consider several factors in deciding whether to admit scientific evidence: testability of the evidence, the use of peer review, potential error rates, etc. The court specifically instructed that the Daubert criteria were mere guidelines, not indispensable characteristics of admissible evidence. For example, as late as 1997 it was noted that latent fingerprint evidence might not pass the Daubert criteria. Nor are the criteria exclusive; other courts have added their own criteria to the list.

Occam's razor is often used as another simplistic gauge of causality: all else being equal, the simpler of two explanations should be preferred. A famous exception is provided by the Shapley-Curtis Debate in astronomy between Harlow Shapley and Heber Doust Curtis, held on April 26, 1920. Curtis's ontologically more extravagant claim, that the spiral nebulae are separate "island universe" galaxies, ultimately proved correct. This debate and others like it are examples of how Occam's Razor can fail in decision-making under uncertainty.

Checklists can be valuable if used judiciously: I use them myself. Checklists represent codified knowledge and provide shortcuts under tight deadlines. Based on the codified experience of our predecessors, they keep us from going down unfruitful paths, help us not to overlook important factors, and remind us to pay closer attention when we see infrequent results. They can be used to educate newer employees. However, they are merely a tool, and a blunt one. Experienced staff should consult them only retrospectively, as a mnemonic.

Hill, A. B. (1965). "The Environment and Disease: Association or Causation?" Proceedings of the Royal Society of Medicine, 58(5), 295-300.