Re-read of the book
“How To Measure Anything: Finding the Value of ‘Intangibles’
in Business” (3rd edition) by Douglas W. Hubbard

Responses to Thomas Cagley’s Re-Read Saturday blog post series.
I am posting my responses here before replying over there (chapter by chapter).

Part-1 (this) covers chapters 1–7 of the re-read
Part-2 covers chapters 8–14 of the re-read

Chapter 1:  The Challenge of Intangibles

This is not a re-read for me, but two books I have read this year have referenced Hubbard’s “How To Measure Anything”, so I am excited to get into this material and this re-read series.

In Hubbard’s own words: “measure what matters, make better decisions,” which could also have been the title of this book.

And on page 7:  “Upon reading the first edition of this book, a business school professor remarked that he thought I had written a book about some esoteric field called “decision analysis” and disguised it under a title about measurement so that people from business and government would read it.  I think he hit the nail on the head.”

The book “Lean Enterprise” (O’Reilly) references this book, and in its Chapter 5 I found a quote from another book, “Lean Analytics,” which I think echoes Hubbard’s thinking:
“If you have a piece of data on which you cannot act, it’s a vanity metric.”

 

Chapter 2:  An Intuitive Measurement Habit:  Eratosthenes, Enrico, and Emily

This is the motivation chapter, where Hubbard provides three different, inspirational, and instructive examples of measurements by Chapter 2’s named heroes — Eratosthenes, Enrico, and Emily.

From Eratosthenes, we learn “He wrung more information out of the few facts he could confirm instead of assuming the hard way was the only way.” (p. 17).

From Enrico Fermi, we learn “start to ask what things about it you do know” (p. 19).

From Emily Rosa (and Hubbard), we learn a stated benefit should have something tangible associated with it — “If the therapists can do what they claim, then they must, Emily reasoned, at least be able to feel the energy field” [in her experiment] (p. 21).

Hubbard’s own example:  “If quality and innovation really did get better, shouldn’t someone at least be able to tell that there is any difference?” (p. 24)

Hubbard challenges us and sets up the coming chapters:  “Usually things that seem immeasurable in business reveal themselves to much simpler methods of observation, once we learn to see through the illusion of immeasurability.” (p. 25)

 

Chapter 3:  The Illusion of Intangibles:  Why Immeasurables Aren’t

There is so much valuable information in this chapter about measurements and defending measurements from the doubters.

One other interesting thing I learned, from footnote #14: Mark Twain did not originate the saying “Lies, Damned Lies, and Statistics,” although, as Hubbard points out, he did help popularize it.

My notes from Chapter 3:

  1. Claude Shannon (Electrical Engineer, Mathematician) – Information Theory and how it applies to measurements (p. 31)
  2. Stanley Smith Stevens (Psychologist) – Scales of Measurement (p. 33)
  3. There is a Measurement Theory (p. 34)
  4. Bayesian Measurement, reduce uncertainty of the observer (p. 34)
  5. Measurement Clarification Chain (p. 39)
  6. The Power of Small Samples:  The Rule of Five (pp. 42–43); see the sketch after this list
  7. Usually, Only a Few Things Matter — But They Usually Matter a Lot (p. 49)
  8. Paul Meehl (Psychologist) – “showed simple statistical models were outperforming subjective expert judgements in almost every area of judgement he investigated including predictions of business failures and outcomes of sporting events.” (p. 51)
  9. Defending Measurements (e.g., “this measurement cannot apply to this unique situation”)
    1. The Broader Objection to the Usefulness of Statistics (p. 52)
    2. Ethical Objections to Measurement (p. 55)
  10. Four Useful Measurement Assumptions (p. 59)
  11. Hubbard ends Chapter 3 with “Useful, New Observations Are More Accessible than You Think” (p. 65)
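
A quick aside on item 6: the Rule of Five says there is a 93.75% chance that the median of any population lies between the smallest and largest values in a random sample of five, since the only misses are all five landing below the median or all five landing above it (2 × 0.5^5 = 6.25%). Here is a minimal Python sketch to verify it by simulation; the lognormal population is an arbitrary choice of mine, since the rule holds for any distribution shape:

```python
import random
import statistics

# Rule of Five: P(population median lies between the min and max of a
# random sample of 5) = 1 - 2 * (1/2)**5 = 0.9375, for any population.
random.seed(1)

# An arbitrary skewed population; the rule does not depend on its shape.
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
true_median = statistics.median(population)

trials = 10_000
hits = sum(
    min(s) <= true_median <= max(s)
    for s in (random.sample(population, 5) for _ in range(trials))
)
print(f"Expected: 0.9375, observed: {hits / trials:.4f}")
```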

 

Chapter 4:  Clarifying the Measurement Problem

Thomas, your summary of Chapter 4 is very thorough!

Did you know there is a companion HTMA workbook for the HTMA 3rd edition book we are re-reading?

One question from the workbook that I missed involves the concept of a “false dichotomy” (see p. 75 in the HTMA book) when exploring a decision to measure.

Make sure the decision you are supporting through a measurement is not a false dichotomy. Typically, as Hubbard explains, that is a “yes/no choice between two extremes” where one of the options is not a feasible alternative, or, as I think of it, a “Mom and apple pie” decision statement.

My example: suppose your decision is whether you should exercise or not. Drill down on that decision statement, define what type of exercise program you would engage in (e.g., gym workouts, running, bike riding, hiking), and then figure out the measurement to support that decision.

Hubbard’s examples were (1) clean drinking water (book) and (2) worker safety (workbook).

My notes from Chapter 4:

  1. 5-point process (p. 71) – starting with “What is the decision this measurement is supposed to support?”
  2. False dichotomy (p. 75)
  3. Requirements for a Decision (p. 78)
  4. If you understand it, you can model it (p. 80)
    1. decomposition to improve estimates (p. 81); see the sketch after this list
  5. Definitions of Uncertainty and Risk (p. 84)  **** (4 stars!)
  6. Clarified Decision example – IT Security at the VA – (pages 84 – 90)
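
On item 4.1 (decomposition): Hubbard’s point is that a quantity that seems unmeasurable can often be broken into parts that are each easier to estimate. A minimal sketch of that idea; the downtime scenario and every number in it are hypothetical, my own illustration rather than an example from the book:

```python
# Fermi-style decomposition (all numbers are hypothetical): estimate the
# annual cost of unplanned outages by breaking it into parts that are
# easier to estimate than the total itself.
outages_per_year = 12        # rough count from incident tickets
hours_per_outage = 3         # typical time to restore service
users_affected = 400         # average users idle during an outage
cost_per_user_hour = 35.0    # loaded hourly cost of an idle user, in dollars

annual_cost = (outages_per_year * hours_per_outage
               * users_affected * cost_per_user_hour)
print(f"Estimated annual outage cost: ${annual_cost:,.0f}")  # $504,000
```

Each factor is easier to bound than the total, and the act of decomposing often reduces uncertainty before any new data is collected.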

 

Chapter 5:  Calibrated Estimates:  How Much Do You Know Now?

I learned a new skill reading this chapter and practicing the sample calibration tests.

My approach for using the calibration tests in this chapter and in the appendix of the book is to take five questions at a time, which helps if you are time-challenged.

I was happy that my 90% range did cover the actual value for the average percentage of design in software projects.

And I also recommend the Freakonomics podcast episode titled “How to Be Less Terrible at Predicting the Future” (January 14, 2016).

My notes from Chapter 5:

  1. Calibration is a skill that can be learned.
  2. Work on the low-end and high-end as two separate questions.
  3. Go wide enough to be more than 90% confident, then bring each end of your range in until you are at your 90% interval.
  4. For subjects you know nothing about, do a little research first.
  5. Practice (see the scoring sketch after this list).
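
One way to practice is to score yourself after each batch of questions. A minimal sketch, with made-up intervals and answers (only the Golden Gate Bridge entry is a real fact), that checks how many of your stated 90% intervals actually contain the true values:

```python
# Score a batch of calibration questions: each entry is
# (your lower bound, your upper bound, the true value).
answers = [
    (1900, 1950, 1937),   # year the Golden Gate Bridge opened
    (100, 400, 330),      # remaining entries are made up for illustration
    (5, 50, 17),
    (1000, 9000, 6650),
    (10, 90, 25),
]

hits = sum(low <= true <= high for low, high, true in answers)
print(f"{hits}/{len(answers)} intervals contained the true value "
      f"({hits / len(answers):.0%}; a calibrated estimator hits about 90%)")
```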

 

Chapter 6:  Quantifying Risk through Modeling

I have drunk the Kool-Aid on this stuff.  Forecasting using Monte Carlo simulation is a much better approach than relying on single-point estimates.

Author Daniel S. Vacanti also has some words to say about this in his book “Actionable Agile Metrics for Predictability”, and Vacanti’s book also references Hubbard’s HTMA book.

Hubbard talks about selecting the correct probability distribution for your Monte Carlo simulation.  Vacanti states you need not worry about this if you have the data, and Vacanti’s book is all about collecting that data for your processes (incoming and outgoing).

I use this exact approach to collect data about the Scrum sprint process.  The clock starts when a user story is accepted into a sprint and the sprint begins.  The clock stops when that user story is deployed into production, postponed, or returned to the backlog.  Start-to-stop cycle time is what I report on, but I also collect two other intermediate events, dev-complete and business-accepted, to help figure out how to reduce the end-to-end cycle time.

I then use this data to forecast using Monte Carlo simulations with the help of Daniel S. Vacanti’s Actionable Agile online tool.
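
Here is a minimal sketch of that kind of throughput-based Monte Carlo forecast, assuming a hypothetical history of stories completed per day; this is a generic bootstrap of my own, not the exact algorithm inside the Actionable Agile tool:

```python
import random

random.seed(7)

# Hypothetical history: stories completed per working day in recent sprints.
daily_throughput = [0, 1, 0, 2, 1, 0, 3, 1, 1, 0, 2, 1, 0, 1, 2]

remaining_stories = 20
trials = 10_000

def days_to_finish(backlog):
    """One trial: resample historical daily throughput (with replacement)
    until the backlog is exhausted; return the number of days it took."""
    days = done = 0
    while done < backlog:
        done += random.choice(daily_throughput)
        days += 1
    return days

results = sorted(days_to_finish(remaining_stories) for _ in range(trials))

# Read the forecast off the percentiles of the simulated outcomes.
for pct in (50, 85, 95):
    print(f"{pct}th percentile: {results[trials * pct // 100 - 1]} days")
```

The percentile you quote is a risk statement: “85% of the simulated futures finished in N days or fewer.”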

My notes from Chapter 6:

  1. p. 123, “if a measurement matters to you at all, it is because it must inform some decision that is uncertain and has negative consequences if it turns out wrong.”
  2. How NOT to quantify risk – low, medium, high; you cannot write a check to an insurance company for the amount “medium”.
  3. It is not enough to just read this chapter; go get the book’s worksheets at the HTMA companion website and examine them.
  4. The book’s companion workbook challenges you not just to read the material but to work with it and develop the skills needed to use these concepts confidently.
  5. p. 134, “So we don’t ask whether a model lacks some detail.  Of course it does.  What we ask is whether our model improved on the alternative model by enough to justify the cost of the new model.”
  6. pp. 138–139, Exhibit 6.7, A Few Monte Carlo Tools
    (Also note that Professor Sam Savage (Stanford) has a tool; Hubbard wrote about Professor Savage’s work (pp. 136–137) on “how to institutionalize the whole process” and having a CPO, a Chief Probability Officer.)
  7. p. 140, “Risk Paradox:  If an organization uses quantitative risk analysis at all, it is usually for routine operational decisions.  The largest, most risky decisions get the least amount of proper risk analysis.”  (e.g., IT investments)

 

Chapter 7:  Quantifying the Value of Information

What is actually worth measuring?

How many times have we been part of a project where the convenient measurement gets all the attention, even though we intuitively know that measurement is a proxy measurement at best?

This chapter gives the reader who is willing to study and work at it a method to figure out what to actually measure that will make an economic difference.

My notes from Chapter 7:

  1. Must work the accompanying Excel worksheets to develop this skill.
  2. (p. 145) Re-read “The McNamara Fallacy”; it captures what goes wrong with many measurement programs.
  3. (p. 149) Value of Information formula; see the sketch after this list
  4. (p. 157) Zilliant: A Pricing Example (case study) …
  5. (p. 160) The Value versus Cost of Partial Information graph
  6. (p. 162) key concept – A Common Measurement Myth — when there is a lot of uncertainty, a lot of data is NOT required.
  7. (p. 167) The Measurement Inversion (what is really important to measure, and what is not) … go to top of page 168 for a summary
  8. (p. 171 – 172) Part-1 summary
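
For item 3, the simplest version of the formula is worth committing to memory: the expected opportunity loss (EOL) of a decision is the chance of being wrong times the cost of being wrong, and the value of perfect information (EVPI) is the EOL that the information would eliminate. A minimal sketch with hypothetical numbers:

```python
# Expected Value of Perfect Information (EVPI) for a simple binary decision,
# following Hubbard's "chance of being wrong times cost of being wrong".
# All numbers are hypothetical, for illustration only.
p_project_succeeds = 0.6    # calibrated estimate that the investment pays off
loss_if_it_fails = 400_000  # cost sunk if we proceed and it fails, in dollars

# If we proceed, we are "wrong" whenever the project fails.
eol_proceed = (1 - p_project_succeeds) * loss_if_it_fails

# Perfect information would let us avoid that loss entirely, so it is the
# most any measurement of this variable could ever be worth.
evpi = eol_proceed
print(f"EVPI = ${evpi:,.0f}")  # EVPI = $160,000
```

Any real measurement delivers only partial uncertainty reduction, so its value is some fraction of the EVPI, which is exactly what the Value versus Cost of Partial Information graph (item 5) illustrates.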

End of Part-1 of the re-read of HTMA (How To Measure Anything), chapters 1 through 7.