No Blowin' Smoke

Pharmaceutical R&D may be the only organization in the world that gets to fill out its own report card. R&D Management decides its own grading criteria (i.e., product profiles) and also decides whether or not a particular R&D prototype+ meets those criteria (i.e., check-marks on the profile).

It’s amazing. We forecast a full 18 months in advance how many drug candidates will reach each milestone (IND, Phase 1, 2, 3, and NDA). We do this in the 3rd quarter of the preceding year, and we always either just hit or slightly exceed those numbers! Delivery against those milestones is always shaped like a hockey stick: all the results magically appear in the very last quarter of the forecast year!
– DIA Pharmaceutical Industry Survey+ Participant

Measuring interim results in innovative R&D is devilishly difficult. The challenge comes from the subjective nature of progress. As in the above example, how can we challenge whether or not a specific drug compound meets certain absorption criteria? If I meet the criteria I want credit; if I don’t, I shrug it off as something that can be taken care of in later development steps (e.g., formulation, packaging). Counting drug candidates or milestones doesn’t work. Checklists+ and quality gates will never work. Our goal is to instill a measure of results that does not encourage legalism+: adhering to the letter, but not the spirit, of the law.

R&D for innovative products is an exercise in creativity, tacking and turning, exploration of many possible opportunities, and preparing for luck should it happen. Teams are assessed, in the end, on their achievements, measured taking into account all the unexpected happenings throughout the year. We do not crush creativity (or drive it underground into skunk works) through our team assessments.

Progress, Promise, Portfolio, Prior Commitments, and Participation (the 5 P’s): these are the assessment criteria for our mid-game+ investments.

  • Progress means results. What have you done for me lately? This incorporates the blockbuster+ pursuit, maintenance of a scientific competitive edge+, and interim commercial results.
  • Promise is what’s to come. Based on all the tacking and turning over the last period, what can you promise me for results in the upcoming period? Promises refer to future results, but you’re not held to results in any specific direction (i.e., goals are merely captured as a courtesy).
  • Portfolio refers to the strategic direction of the corporation: reduced investment in the cardiovascular market, increased reliance on personalized medicine, etc. Researchers can get stung by shifting portfolio priorities. You are constantly evaluated against our latest portfolio priorities, but you are excused, and will do just fine, should those priorities shift away from you.
  • Prior commitments refer to a kitchen pass due to unforeseen events, for example delays in commercialization+ of a team’s product for reasons beyond their control. Also included are commitments to multi-year funding. This is the safety valve for missteps on the part of the funding agent+.
  • Participation refers to contributions to the franchise: flipped matrix+, annual confabs, corporate recruiting events, etc. This refers to commitments of time and energy by Investible Unit+ management and team members.

Of all the P’s, progress is king. Do well in progress, and almost by extension you will have done well in the other P’s.

For the most part team assessments are a simple go, no-go decision: you gain access to continued funding, or you are scheduled for a gradual end to funding (end-game+). Exceptional results can occasionally gain access to exceptional funding (e.g., multi-year commitments).
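The funding decision described above can be sketched as a simple rule. The rating scale and decision thresholds here are illustrative assumptions, not the author's actual mechanism:

```python
# Illustrative sketch of the go, no-go funding decision built on the 5 P's.
# The three-level rating scale and the specific rules are assumptions
# made for illustration only.

FIVE_PS = ["progress", "promise", "portfolio", "prior_commitments", "participation"]

def funding_decision(ratings):
    """ratings: dict mapping each P to 'missed', 'met', or 'exceeded'."""
    # Progress is king: a miss on progress means a gradual end to funding.
    if ratings["progress"] == "missed":
        return "no-go (end-game)"
    # Exceptional results across the board can earn exceptional funding.
    if all(ratings[p] == "exceeded" for p in FIVE_PS):
        return "go (multi-year commitment)"
    return "go (continued funding)"

team = {"progress": "met", "promise": "exceeded", "portfolio": "met",
        "prior_commitments": "met", "participation": "met"}
print(funding_decision(team))  # go (continued funding)
```

The point of the sketch is the asymmetry: progress alone can end funding, while the other P's mostly modulate the size and duration of the commitment.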

Independent Evaluation

Independent Evaluation is an academic discipline with its own special tools, techniques, and rigor. It’s a discipline dedicated to the evaluation of team progress in the face of great uncertainty. Independent Evaluation finds its home at the American Evaluation Association – an organization dedicated to the continuous improvement of evaluation. The beauty of this for industry is that the academic heavy lifting has been, and will continue to be, carried out by a robust organization: we merely adapt the findings of the AEA to the industrial R&D setting. One key task is to translate AEA findings into the needs of interim team assessments.

The Independent Evaluator+ is a role similar to that of a general contractor. This is the professional who stands between the owner who funds the project and the tradesmen. The general contractor has a good relationship with the funder (i.e., the owner) and with the tradesmen. He or she is the conduit of trust between the two parties, smoothing out minor differences that can, if left unchecked, escalate out of proportion. The beauty of the general contractor model is that it allows evaluation at a distance – arm’s-length+ evaluation. It eliminates the need for the owner to be on site every day and yet gives the owner constant assurance that his or her funds are being cared for. The owner can step back from the detailed work and make objective decisions about the overall direction of the investment.

It’s the job of the general contractor to make sure the tradesmen are doing the best they can with the resources of the funding agent. We want to delight the funding agent, to far exceed their expectations. The general contractor performs a subjective assessment of the work of the tradesmen, balancing their desire for craftsmanship with the owner’s sense of commercial urgency.

You can’t fool our Independent Evaluators. They can’t be co-opted; they won’t be allowed to ‘go native’. Even if one evaluator is more easily impressed than another, they themselves are subject to rigorous contrast and competition with their peers. The best an Independent Evaluator can do for you is to represent you fairly and objectively: hyperbole is quickly sensed and shot down by his or her peers.

The advantages of performing evaluations simultaneously, instead of relying on serial comparisons over time (the current model in the Venture Capital industry), are many. It’s easier to compare and contrast investments when both are standing in front of you. It’s easier to avoid Retrospective Distortion+. It provides a more thorough evaluation: evaluators stop to pick up insights touted by their colleagues that they may have missed or discounted; they learn to downplay certain arguments seen to be uninspiring in the eyes of their colleagues; they get better at evaluation by seeing how peers present evidence for their ‘clients’.

The Independent Evaluator plays the role of facilitator. We capture practices shown to improve productivity, and we share those practices across the teams. Independent Evaluators see which types of evidence are inspiring to their peers and carry this insight back into their facilitation of the research team, to improve performance in the next round of evaluations. These often involve tactical or operational practices, for example, techniques for reducing investigator bias. A key role of the Independent Evaluator, being on the ground, is to facilitate the capture and sharing of these practices across their teams.

For example, teams that thread the skinny path from the laboratory to the clinic may come out ahead in side-by-side comparisons. Show me how all the evidence fits together: I’m much more swayed by a string of evidence, supported by commercial estimates, that runs from the laboratory to the marketplace, and I’m quite tolerant of ambiguity in individual pieces of evidence. Teams that thread the skinny path can do better in assessments, and good Independent Evaluators make this clear to all teams.

Why not just designate team management as the general contractor? Independence. The fate of management lies in the success of the team; the fate of the Independent Evaluator lies in the accurate assessment of team progress. Independent Evaluation gives all participants a greater sense of a level playing field. We wouldn’t use the team managers to referee a football game, would we? Throughout the year you are given a sense of how you’re doing, communicated by your personal evaluator, along with specific activities you can do to improve your position in the next evaluation.

Our Independent Evaluator is a seasoned industry veteran: one with significant technical skills in the industry and been-there-done-that experience, but also trained in the special ways of independent evaluation. The only way for anyone to gain a real sense of the challenges facing teams involved in creative endeavors is to jump in, sleeves rolled up, and contribute to the activities of the team on a daily basis. The Independent Evaluator is under-the-hood and in the marketplace, measuring the progress of the team against customer needs on an almost daily basis.

The independent evaluator sees many more ‘transactions’ between teams and funding agents than most team members will see in their entire careers. Evaluators are expert at translating the wants and needs of the funding agent into terms that can be acted upon by the technical experts, and vice versa. The independent evaluator develops a specialized evaluative skill set, and by virtue of this specialization allows both parties to achieve much more of what they want from their relationship.

The Independent Evaluator gives the team an assessment, level-set against assessments across all the teams, and defends that assessment against his or her evaluation peers. All evaluations are subjected to a thorough cross-calibration. In the end we develop a Consumer Reports+ for each team for each of the P’s: met, exceeded, or missed. This is not a forced ranking; it’s a forced revelation+ of the true meaning of the results.
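The cross-calibrated report described above could take the shape of a simple grid: one row per team, one rating per P. The team names and ratings below are invented for illustration:

```python
# Sketch of a Consumer Reports-style grid for team assessments.
# One row per team, one rating ("missed", "met", "exceeded") per P.
# All team names and ratings here are invented for illustration.

FIVE_PS = ["progress", "promise", "portfolio", "prior_commitments", "participation"]

assessments = {
    "Team A": {"progress": "exceeded", "promise": "met", "portfolio": "met",
               "prior_commitments": "met", "participation": "missed"},
    "Team B": {"progress": "met", "promise": "met", "portfolio": "missed",
               "prior_commitments": "met", "participation": "exceeded"},
}

# Render the grid: a revelation of results per P, not a forced ranking.
header = "team".ljust(10) + "".join(p.ljust(20) for p in FIVE_PS)
print(header)
for team, ratings in assessments.items():
    print(team.ljust(10) + "".join(ratings[p].ljust(20) for p in FIVE_PS))
```

Note there is deliberately no aggregate score column: each team is revealed P by P rather than ranked against the others.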

Assent and consequences are key. Assessment of team progress is an exercise in which someone else decides my fate. You need my assent to your decision, even in cases where your decision is to shut me down. My decisions, the thousands of tactical and operational decisions I made throughout the year, collapse down to a single go, no-go decision during evaluation. The consequences of all my past decisions are suddenly unambiguous. A complete decision mechanism requires my assent to someone else’s assessment of my progress, clarity on how all my daily decisions will eventually collapse into that assessment, and a communication mechanism that allows me to adjust my daily decisions along the way to gain the best results in the final assessment.

I may not like the outcome, but I will assent to it because I have come to believe the assessment is fair and balanced. Satisfaction is achieved. The thousands of decisions made by me throughout the year culminate in a single decision made for me by others. The accounting of consequences for these thousands of decisions will begin to be made with an eye toward the major decisions: how will today’s decision contribute to my team’s assessment at the end of the year? I am forced to incorporate the perceptions of others into my accounting. The interim assessment is, in effect, the customer for all the decisions I make throughout the year and I am smart enough to want to keep my customer and the Independent Evaluator informed of my progress.


July 2010

Further Reading