Risk of missing the delivery schedule because of poor predictions

Core to the Project Manager, Scrum Master, Product Manager, team, and stakeholders is risk
#2  Risk of missing the delivery schedule because of poor predictions
from a previous blog entry, “Seven Risks in Software Development”.

On LinkedIn, I state that, as an Agile Project Manager, I do this …

Improve your agile process and create reliable, predictable deliveries.  I coach teams to use data and analytics that show how the workflow is actually performing.  These analytics start conversations and inform actions, resulting in measurable improvements.

A bold statement and not always easy to achieve.  What is behind this?

I have distilled Daniel S. Vacanti’s work “Actionable Agile Metrics for Predictability” into my value statement and practice.

Vacanti –
“Simply stated, flow is the movement and delivery of customer value through a process.”

The essence is to treat development as a process and be very clear on the start and end dates for all work items.  Then use simple analytics and visualizations to see if you have a stable process.  For example, is the scrum process being used actually stable?

And what exactly is stable?  Vacanti answers this question very clearly by spelling out the
5 assumptions behind Little’s Law.  You can see whether a process is stable or unstable by using the analytics Vacanti recommends:  Cumulative Flow Diagrams, Cycle Time Scatterplots, and Cycle Time Histograms.  Vacanti also warns that even if your process looks stable, it might not be; you must also make sure the process is not accumulating flow debt.

Once we have that stable (or near-stable) process, we can use Monte Carlo simulations on past performance to make accurate, probabilistic schedule predictions.

Example:  we have an 85% chance that a user story accepted into a sprint will be completed in 20 days or less.  That might be the Service Level Objective the delivery team uses with the business stakeholders for delivery agreements.

In this example, the only estimation required is being confident that the user story taken into the sprint can be completed within the time frame of the sprint.  And we let past data and Monte Carlo simulations help us with lightweight planning and scheduling.
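To make this concrete, here is a minimal sketch of the two calculations described above:  a percentile of historical cycle times as a single-story Service Level Objective, and a Monte Carlo forecast built by resampling past daily throughput.  All of the numbers (`cycle_times`, `daily_throughput`, the 10-story backlog) are hypothetical illustrations, not data from the book.

```python
import random

# Hypothetical history: stories finished per day, and cycle times (days)
# of completed stories, taken from the team's board.
daily_throughput = [0, 1, 2, 0, 1, 3, 1, 0, 2, 1, 1, 0, 2, 1, 0]
cycle_times = [3, 5, 8, 4, 12, 6, 9, 15, 7, 5, 11, 4, 6, 18, 8]

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(0, int(len(ordered) * pct / 100) - 1)
    return ordered[rank]

def forecast_days(backlog_size, throughput_history, trials=10_000, seed=1):
    """Monte Carlo: how many days to finish `backlog_size` stories?

    Each trial replays randomly sampled past days until the backlog empties.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        remaining, days = backlog_size, 0
        while remaining > 0:
            remaining -= rng.choice(throughput_history)
            days += 1
        outcomes.append(days)
    return outcomes

# Single-story SLO: the 85th percentile of historical cycle times.
print("Single-story SLO:", percentile(cycle_times, 85), "days")

# Schedule forecast for a 10-story backlog at the 85% confidence level.
runs = forecast_days(10, daily_throughput)
print("10 stories, 85% confident within:", percentile(runs, 85), "days")
```

The forecast is only as trustworthy as the stability of the process behind the history, which is exactly why Vacanti spends so much time on the stability assumptions first.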

I have used this approach and it is a great way to visualize the team’s output and also start those improvement conversations.

As an Agile Project Manager, accurately stating the delivery schedule is often how the project’s performance is judged.

More from Vacanti’s book “Actionable Agile Metrics for Predictability”

Little’s Law, reframed for knowledge work, is Cycle Time = WIP / Throughput.
This tells us that, for a stable process, one lever we have is to reduce WIP to shorten Cycle Time and deliver value sooner to the customer.
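The arithmetic behind that lever is worth seeing once.  A tiny sketch, with hypothetical averages (the 12-item WIP and 3-per-day throughput are made-up numbers):

```python
# Little's Law, reframed for knowledge work:
#   Cycle Time = WIP / Throughput
# Hypothetical averages, measured over the same time interval:
avg_wip = 12.0          # work items in progress, on average
avg_throughput = 3.0    # work items finished per day, on average

avg_cycle_time = avg_wip / avg_throughput
print(f"Average cycle time: {avg_cycle_time:.1f} days")  # 4.0 days

# The lever: halve WIP (throughput held steady) and cycle time halves too.
print(f"At a WIP of 6:     {6.0 / avg_throughput:.1f} days")  # 2.0 days
```

Note the law relates long-run averages over the same measurement window; it says nothing about any individual work item.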

Vacanti’s 5 assumptions for Little’s Law to hold, in my words, are …

  1. Work coming in = work leaving, on average.
  2. All work accepted into the process is started and eventually completes.
    Note:  we can have a tag for abandoned work, for when we know we must remove a work item because it will not be delivered, either “never” or “not in the foreseeable future”.
  3. The amount of WIP is about the same, at the beginning and ending of the time interval being measured.
  4. Do not allow WIP to arbitrarily age inside the process.  That is, work that is accepted into the process does not stop very often.  It may stop because of an outside dependency.  It may stop because of a “self-inflicted process wound”:  stopping work on an active item to service a higher-priority work item recently introduced.
  5. Must use consistent measurements for all three variables:  Cycle Time, WIP, and Throughput.
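Assumptions 1 and 3 can be spot-checked directly from a work item log with the start and end dates mentioned earlier.  A minimal sketch, using a hypothetical four-item log (the dates are invented for illustration):

```python
from datetime import date

# Hypothetical work item log: (started, finished);
# finished is None while the item is still in progress.
items = [
    (date(2016, 9, 1), date(2016, 9, 6)),
    (date(2016, 9, 2), date(2016, 9, 9)),
    (date(2016, 9, 5), date(2016, 9, 12)),
    (date(2016, 9, 8), None),
]

def wip_on(day, items):
    """Count items started on or before `day` and not yet finished."""
    return sum(1 for s, f in items if s <= day and (f is None or f > day))

def flow_summary(items, start, end):
    """Quick checks on assumptions 1 and 3: arrivals vs. departures,
    and WIP at the start vs. the end of the measured interval."""
    return {
        "arrivals": sum(1 for s, _ in items if start <= s <= end),
        "departures": sum(1 for _, f in items
                          if f is not None and start <= f <= end),
        "wip_start": wip_on(start, items),
        "wip_end": wip_on(end, items),
    }

print(flow_summary(items, date(2016, 9, 1), date(2016, 9, 15)))
```

If arrivals consistently outpace departures, or WIP at the end of each interval keeps creeping above WIP at the start, the process is not stable and the predictions above cannot be trusted yet.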

Vacanti provides many great metaphors in his book.

  1. Airport arrivals and departures — what would happen to an airport if the aircraft arrival rate was constantly greater than the departure rate?
  2. Airport security lines and the effect special queues (e.g., prescreened passengers) have on the average wait times for the regular queue in small airports where one security agent attends all queues.  Good luck making your flight if you are in that “punter” queue.
  3. Is the process you are managing a Ponzi scheme?
    Yes, it might be a Ponzi scheme if, more often than not, resources are taken away from active work items to service new work items.  No wonder business stakeholders get frustrated with delivery teams, even though these stakeholders are often the primary source of new, high-priority work items.



Re-read of The 5 Team Dysfunctions

Tom Cagley‘s re-read Saturday series continues with the book “The Five Dysfunctions of a Team – A Leadership Fable” by Patrick Lencioni.

Part 1 – setting the stage for the story line.  Lencioni introduces the setting, the main character (Kathryn), and the team.  And he hints at a turbulent ride for Kathryn, the new CEO of the fictitious company DecisionTech.

Right upfront Lencioni tells why he wrote this book and why readers should pay attention.

“Teamwork remains the one sustainable competitive advantage that has been largely untapped.” (from the companion book, “Overcoming The Five Dysfunctions of a Team – A Field Guide”)

More often, teams focus their attention on customer needs, release dates, budgets, technology, usage data, etc.  All of these are important, but without good working relationships among the team, much waste is produced and opportunities are missed.

Extra:  for a similar message from a different angle, see Simon D’Arcy’s Next Level Culture site.

The User Experience Risk

What has influenced my thinking (so far) about
#7  Risk of end-users not using or liking the product
from a previous blog entry “Seven Risks in Software Development”.

From my work experience, everyone has an opinion about what a good user experience (UX) is.  Few are really talented in UX, except those who have studied and practiced it.  And even for the talented ones, designs and implementations must be checked to see if they work as intended with the targeted users.

UX is front and center in every product.  UX is the visible part of the product.  Thus good designers carefully and patiently explain the design, taking in feedback when appropriate, but most often explaining the design — several times to various stakeholders.

This risk is inspired by Marty Cagan‘s (product management consultant) basic question:  will the users use the product?

Dan Olsen, author of “The Lean Product Playbook” has two simple models I really like.

Model 1:  Olsen’s Hierarchy of Web User Needs, that helps put UX in perspective.

Layer#1:  Is the site available for the user?
Layer#2:  Is the site too slow?
Layer#3:  Does the functionality work?  (Quality)
Layer#4:  Does the functionality bring value to the user?
Layer#5:  How easy is the web page to use?

UX goes directly at Layer#5, but all layers are important to the users.  UX designers should be working with Product Management, User Research, Development, and others to make sure the right functionality is being developed and the product is working well at all layers.

Model 2, also from Dan Olsen, is the UX Design Iceberg.  On top is the Visual Design.  Below the Visual Design, and below the water’s surface, is Interaction Design, followed by Information Architecture, and at the base is Conceptual Design.

For more on Information Architecture, follow the thought-provoking work
of Abby Covert, whose tag line is “Make the unclear be clear”.

(Dan) Olsen’s Law of Usability:
“The more user effort required to take an action, the lower the percentage of users who will take that action.  The less user effort required, the higher the percentage of users who will take that action.”

Olsen’s Law of Usability is a good way for the broader team to think about UX and why a good UX matters.  For a deeper look, check out BJ Fogg’s Behavior Model –> B = MAT.

That is, Behavior = Motivation + Ability + Trigger
where Ability (to perform an action) is what Olsen’s Law of Usability is referring to.

And Nir Eyal uses BJ Fogg’s behavior model and other research to develop his popular Hooked model for products.

Dan Olsen runs and hosts a Lean Product Management & UX meetup in Palo Alto that has featured both BJ Fogg and Nir Eyal as speakers.  Dan Olsen’s YouTube channel captures many of these great talks, so enjoy from afar.

I firmly believe great products are a collaborative result of Product Management (business side) + User Experience (end-user interests) + Development (technical side) all working well together, and not one or two of these disciplines dominating the conversation.


Seven Risks in Software Development

What might prevent us from delivering software that matters, on-time?

My top 7 list, and what to do about each, is in the “Further Information” links that will be posted in the next few weeks (Sep – Oct 2016).

#1 Risk of delivering little or no value to the customer or organization.
Product Management typically owns the product-market fit risk.

It is very disappointing to spend time and energy delivering a feature that is not used or cannot be supported by the stakeholders.  All team members should be asking challenging questions around which features are really important and how we know that.

Key questions are:

  1. Is there a business (or funding) model that makes sense to the organization?
  2. Will the end-user find the feature valuable?

#2  Risk of missing the delivery schedule because of poor predictions.
Project Managers / Scrum Masters / Delivery Managers typically own the schedule risk.

Schedule outcomes are best predicted from recent past performance; stick with predicting the likely outcome of the next sprint.  And worry about those features that cross over the 50% cycle time mark.  They are the candidates for being really late (see #5 below).

Further Information about risk #2

#3  Risk of unplanned work disrupting the work process and schedule.
Product Manager and the development team typically wrestle with the high-priority request disrupting the current work plan.

Scrum tries to prevent this by the scope-of-work agreement for a sprint.
Kanban has work-in-process (WIP) limits to help prevent this.

Essentially, when this happens the new work “cuts” in the queue and rarely will a team have enough capacity to absorb the new work without delaying at least one work-in-process item.  Most teams will not have enough “slack” in-place to deal with this, so some work item(s) stops temporarily, while the high priority item is serviced.

A Kanban high-priority swim lane really only acknowledges that this situation comes up often, unless there is actually excess capacity in place to service the request.

At the very least, make high-priority work requests, and the effect they have on the work plan, visible.
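One way to make that effect visible is simple arithmetic:  replay the queue with and without the cut-in item and show the slip.  A minimal sketch, with hypothetical items and effort estimates:

```python
def finish_days(queue):
    """Single-piece flow: each (name, effort-in-days) item finishes
    after everything ahead of it in the queue."""
    day, finishes = 0, {}
    for name, effort in queue:
        day += effort
        finishes[name] = day
    return finishes

planned = [("A", 3), ("B", 2), ("C", 4)]
baseline = finish_days(planned)

# A high-priority item "X" (2 days) cuts in after A:
# every item behind it slips by X's effort.
disrupted = finish_days([("A", 3), ("X", 2), ("B", 2), ("C", 4)])

for name in ("B", "C"):
    print(f"{name}: day {baseline[name]} -> day {disrupted[name]}")
```

Even this toy model shows the point:  without slack, the expedited item’s entire effort lands on everything queued behind it.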

#4  Risk of poor quality in the delivery.
We all own product quality.

Quality is the silent lever when we focus too much on the schedule delivery.  Typically, scope is the only visible leverage the team has to negotiate towards a release schedule. Budget and schedule are almost always a given by the organization.  So if scope cannot be reduced and the schedule remains in-place, quality can suffer – either explicitly or implicitly.

#5  Risk of a work item becoming an outlier … way off!
Project Managers / Scrum Masters / Delivery Managers typically own the schedule risk.

There is another schedule risk we touched upon in #2 above:  in software development, a few work items can be significantly delayed … way off.

Something is delaying a work item and there is a chance of a big delay.  It probably has some sort of dependency the development team cannot influence, and thus is sitting in a wait state (on hold).  The delivery of this work item can become atypical:  it skews to the right on the timeline, several weeks past the delivery-history mid-point.

These are the work items the development team should spend time identifying, so we can call attention to them as soon as possible and come up with alternate plans or mitigations.

As stated in #2 above, once a work item crosses the 50% cycle time marker from past delivery data, it becomes more at risk of becoming a schedule outlier.  Treat WIP crossing the 50% cycle time as a trigger for team and program conversations.
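That trigger is easy to automate from the board.  A minimal sketch, where the board contents, the story names, and the 7-day median cycle time are all hypothetical:

```python
from datetime import date

def at_risk_items(wip, today, median_cycle_time_days):
    """Flag in-progress items whose age exceeds the historical 50% cycle time.

    Crossing the median is the trigger for a team or program conversation:
    these items are the candidates to become schedule outliers.
    """
    return [(name, (today - started).days)
            for name, started in wip
            if (today - started).days > median_cycle_time_days]

# Hypothetical board state and a hypothetical 7-day median cycle time.
board = [("Story-101", date(2016, 9, 1)),
         ("Story-107", date(2016, 9, 14)),
         ("Story-110", date(2016, 9, 18))]

print(at_risk_items(board, date(2016, 9, 20), median_cycle_time_days=7))
# Story-101 has aged 19 days, well past the 7-day median.
```

Run daily, a list like this keeps aging items from sitting silently in a wait state until they are already very late.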

#6  Risk of the team not working well together.
Universal risk to all roles.

People create and deliver products.  Good working relationships are essential.  In the context of THIS post, make sure the team is not overburdened with work and a tight deadline.  Demand and capacity must match.

Scrum is important because of the idea that the team can accept a work item into a sprint, or not, depending upon the team’s capacity.

Kanban is important because of the idea of WIP limits.

Good team dynamics go way beyond the overburdening of a team; however, overburdening leads to technical debt and a feeling of “whatever”.

#7  Risk of end-users not using or liking the product.
User experience matters – even when you have everything else right.

This is different than #1 above.  Here we have confidence and data that the product is well supported by the organization and does provide value to the customer.  It is “just” that the UI is bad.  The user may be confused and cannot find the feature.  Or the user has a difficult time with the feature and will avoid using it.

Further information about risk #7

More …
Of course, this risk list is not complete.  We can think of security vulnerabilities as another risk, for example.  Use this list as a springboard for your own risk lists, depending on your projects and purposes.

Ordering of risks
This list of risks is not in priority order.  However, #1 (product-market fit), for me, is most often the top priority.  But I can see a strong argument for #6 (Teams not working well together) too, especially in some situations.

Ash Maurya states in his book “Scaling Lean:  Mastering the Key Metrics for Startup Growth” that “Incorrect prioritization of risks is one of the top contributors to waste” (p. 13), so prioritizing your risks is very important.


I will be creating “Further information” posts for each of these risks in the upcoming weeks.

  1. Further Information on risk #2
    Risk of missing the delivery schedule because of poor predictions
  2. Further Information on #7
    Risk of end-users not using or liking the product





Extreme Programming Explained

Tom Cagley’s re-read Saturday series continues with the book “Extreme Programming Explained (Embrace Change)” by Kent Beck with Cynthia Andres (second edition).

XP is much more about software practices, whereas Scrum is more about project management in an agile setting.  Read this interesting Continuous Delivery blog, which complains that Scrum leaves off the software practices and mentions that Continuous Delivery was built upon XP and Lean practices.

[Note:  after reading this book I have changed my mind on the above point, which I made going into the re-read.  XP is as much about team dynamics and software project management as Scrum is.  XP adds good technical practices into the framework, and that difference is noticed more than the other parts.]

Preface & Chapter 1  Tom, I liked your back-then and now comments.

I am still waiting for my copy of the book to show up.  I look forward to the read, as good technical practices make agile work.

For example, scrum is a good framework to follow for several reasons:  communications, feedback, and team improvements.  However, without good technical practices, scrum will fall flat.

Martin Fowler makes a much stronger case in his classic blog, FlaccidScrum.

Chapters 2 & 3 [“Learning to Drive” & “Values, Principles, and Practices”]

Chapter 3’s figure 1 bridge is an excellent visual:  { values <- principles -> practices }.

Chapters 4 & 5 [“Values” & “Principles”]

It is hard to argue with any virtuous Values and Principles set, whether they come from Agile, Lean, Kanban, XP, Waterfall …

I like Jurgen Appelo’s approach in Management 3.0 (the book) and the “Do-It-Yourself Team Values” exercise.  Pick what makes sense for the team and the situation, and do not pick too many (3 – 7), so the team has a focus.  After mastering certain value – principle – practice combinations, you may choose other values / principles to focus on.

And we read similar advice in Extreme Programming Explained, where other principles might be included depending on the team’s situation (e.g., traceability for safety-critical systems).

Note:  Waterfall, really?  Gil Broza actually documented the Waterfall values in his “The Agile Mind-Set” book.  The values themselves read well (e.g., “Get it right the first time”). But in most cases, for software development, they are VERY difficult to actually do without increasing project risks.

Chapters 6 & 7 (“Practices” & “Primary Practices”)

Put a “*” on chapter 7, great advice!  I particularly liked the last primary practice, Incremental Design:  defer design changes to the last responsible moment (real options thinking).

Beck cautions us “Without daily attention to design, the cost of change does skyrocket.” (page 52)

Beck’s simple heuristic, look for and eliminate duplication, is excellent!

Chapters 8 & 9 (“Getting Started” & “Corollary Practices”)

Beck provides practical advice on how to get started implementing XP practices within a team setting or even just by yourself.  At the end of chapter 8, there is a nice map of “Energetic Work” (figure 8) to keep in mind.

Chapter 9 covers the 11 Corollary Practices in XP.  Interestingly, “Real Customer Involvement” is listed as a corollary practice rather than a primary practice.  Beck explains why it is listed here:  some development fundamentals should be addressed first; the two mentioned are (1) accurate estimates and (2) low defect rates.

Chapters 10 & 11 (The Whole XP Team & The Theory of Constraints)

Wow, chapter 11 changes the subject from chapter 10 — really two separate topics here.

Beck’s explanation of how different disciplines (architects, interaction designers, testers) resist working together at the same time is classic.  However, for user experience and feature validation, I do think a Lean approach should be considered in most cases.

Validate the feature and/or user-interaction with a prototype and with customers BEFORE it becomes part of the development backlog.

Why?  “The amount of waste and rework is very high because backlog items have not been validated”, from Marty Cagan’s Dual-Track Scrum blog (first paragraph).

Beck’s explanation of ToC (Theory of Constraints) using laundry is one I can easily understand!  Again, I wonder how this chapter would be re-written today in light of Kanban and WIP (work-in-process) limits in the software context.

Chapters 12 & 13  (Planning:  Manage Scope & Testing:  Early, Often, and Automated)

Two great, short, chapters here.

*  [comment #1]

“Planning:  Manage Scope” is a must-read for any software project manager.  Beck writes well what an experienced Project Manager has already learned.

See page 92 …

  1. “Time and costs are generally set outside the project.”
  2. “Lowering the quality of your work [to meet schedule] doesn’t eliminate work, it just shifts it later so delays are not clearly your responsibility.”

Therefore, scope is almost always the ONLY negotiable item in the project management “iron triangle” of Schedule, Budget, and Scope.  This explains why Scrum’s sprint planning sessions were invented and are so popular today.

And I like Beck’s definition of ‘”Complete” means ready for deployment; including all the testing, implementation, refactoring, and discussions with the users.’ (p. 93)

The opening sentence of Chapter 13 is profound – “Defects destroy the trust required for effective software development”.  Having seen defects from both the inside and outside, I agree!

“Here is the dilemma in software development:  defects are expensive, but eliminating defects is also expensive.”

And one argument I had not considered before – “Investments in defect reduction makes sense as an investment in teamwork.”  Normally we consider the impact of defects to the business and end-users, but not the impact of defects to the team itself.

Chapters 14 & 15 (Designing:  The Value of Time & Scaling XP)

I liked much of what Beck and Andres state about Design.

  1. Do design!  And design in small increments whenever possible.
  2. Design at the last responsible moment.
  3. Designing software is not the same as designing physical things (e.g., buildings).  Although with today’s complexity between numerous systems, this is less true.
  4. Simplicity in design and some specific ideas to achieve it.
  5. Was Beck the first to use the popular phrase “big ball of mud” back in 2004?

One idea I think should be reworded is “Weekly delivery of the requested functionality is the cornerstone of the relationship.” (p. 109)

I think this idea pushes teams to just follow the customer (or Product Owner) lead, even when the team knows technical debt is piling up and there likely will be no plan to address it in the future.  And we do know, there are always new features the customer wants delivered yesterday.

I actually think Beck and Andres were cautioning us against Big Design Up Front (BDUF), rather than telling us to ignore technical debt in favor of delivering the next new feature.

Scaling XP (Chapter 15)

I think Beck left a marketing opportunity behind by not defining some elaborate scaling framework for XP🙂

Some really good thoughts here.  Beck is a pragmatist, as am I.

  1. “The project manager makes sure the organization’s expectations are met.”
    … while the team is using XP practices  (p. 113)
  2. “Don’t push your new-found (XP) knowledge and power on others for your own benefit.”  (p. 114)
  3. “I never became an actuary, but the resulting system (and team) was much stronger than if the actuary worked on his little corner of the system while I worked alone on the user interface.” (p. 115)
  4. “The XP strategy for dealing with excess complexity is always the same: chip away at the complexity while continuing to deliver.” (p. 115)

Chapters 16 & 17 (Interview & Creation Story)

Short and interesting!

About the Interview, I wonder how the XP transformation did over the next couple of years.  XP was clearly a mandated change for the organization:  train everyone and watch it take hold.  But to start, 1/3 were for it, 1/3 were neutral, and 1/3 had serious questions.

One bright call out, quality did improve!

[** comment #2]

The Creation Story (Chapter 17) is how the development team formed and became much better than the previous team by using the XP framework and practices.  This XP Creation Story project ran like the scrum projects I am familiar with (p. 127).

How were the backlog user stories identified for each iteration?  The customer (product owner) chose.  But how did the customer know?  In this case, the customer was a subject matter expert and the domain space was already very well understood:  payroll.

Contrast this to many of today’s projects.  Teams are often working in a domain that is emerging, have market acceptance risks, and there may not be a domain expert.  So we have ideas about quickly validating the selected backlog user stories (e.g., Lean Startup) before the true development starts (i.e., dual track agile).

(p. 128) “End-to-end is further than you think”

Chapters 18 & 19 (Taylorism and Software & Toyota Production System)

Taylorism – Frederick Taylor, early twentieth-century industrial engineer, concerned with efficiency.

Beck & Andres (p. 132)
“Things usually go according to plan,
Micro-optimization leads to macro-optimization,
People are mostly interchangeable and need to be told what to do.”

This does not work well in knowledge work (e.g., software development).
Agile values state the opposite.

Toyota Production System (TPS) – the foundation of lean software development.
A great, short summary of TPS!

Beck & Andres write about a worker cautiously pulling the cord shortly after a quality issue was spotted.  In Lean, this is the Andon Cord (pull it immediately).  And Yea!!! for any organization culture which supports the Andon Cord philosophy.  It definitely would take nerve to “stop the workflow” in an organization mainly focused on throughput; it won’t happen very often, as this action would be viewed as a risky move.

Chapters 20 & 21 (Applying XP & Purity)

(p. 139) “You should see big improvements in the first weeks and months, but those improvements only set the stage for the big leaps forward that happen further down the road.”  — this selling statement could apply to any framework.  Could happen that way, but no guarantee.

Two implementation tips I like are …

  1.  Speaking about using a unit test framework  –
    (p. 141) “Expecting others to do what you are not willing to try yourself is disrespectful and ineffective.”   This advice, of course, applies in many other situations.
  2. (p. 141) “Your organization learns to deploy solid software predictably, then invites external customers to be part of planning.”
    Wow, once you arrive here, your team is golden!

Chapter 21 (Purity)

(p. 146) “It’s worse to fail with an XP team than to succeed with a pure waterfall team.  The goal is successful and satisfying relationships and projects, not membership in the XP club.”

XP, as with any process & practices framework, is a means to a goal, not the goal itself.

Chapters 22 & 23 (Offshore Development & The Timeless Way of Programming)

Beck and Andres make a good case for using the wording “multi-site” instead of “offshore”.  But the chapter is titled “Offshore Development”; then again, this chapter was written a few years back (2005).  The term “distributed teams” is commonly used today (2016).

(p. 150)  “Jobs aren’t going in search of low salaries.  Jobs are going in search of integrity and accountability.”  That is, capability to deliver value first and labor costs second.

Chapter 23 (The Timeless Way of Programming)

Wow!  Note to self:  re-read this chapter for culture-change thoughts about creating higher-value software deliveries from teams.

It is vital that we achieve a balance in the organization between business concerns, user experience concerns, and technical concerns.

*** [comment #3]

(p. 154) “With more experience I began to see the opposite imbalance [of technical dominance], where business concerns dominated development.  Deadlines and scope set for only business reasons do not maintain integrity of the team.  The concerns of users and sponsors are important, but the needs of the developers are also valid.  All three need to inform each other.”

(p.154) “My goal is now to help teams routinely bring technical and business concerns into harmony.”
Note:  I bet this sentence would be re-worded slightly today by adding User Experience as a separate concern to Business and Technical.

A 2005 prophecy about Agile Adoption …
(p. 154) “Without a change of heart, all the practices and principles in the world will produce only small, short-term gains.”

And there are two more good thoughts …
(p. 155) “XP relies on the growth of powerful [capable] programmers; able to quickly estimate, implement, and deploy reliable software.”

(p. 155) “Sharing power is pragmatic, not idealistic.”

Chapters 24 & 25 (Community and XP & Conclusion)

Brief wrap-up statements in both of these chapters.

(p. 158)  “Accountability is particularly important when making changes.”

Do NOT ignore the great Annotated Bibliography at the end of the written text.  Some really interesting categories and titles there.


Great read (or re-read) for software professionals, including software project managers and scrum masters!

My favorite quote from this book is
“End-to-end is further than you think” (Chapter 17 – Creation Story, page 128)

End of the re-read of “Extreme Programming Explained (Embrace Change)”


Re-read Saturday of “Commitment …

Re-read of the book
“Commitment:  Novel About Project Risk” (1st edition) by Olav Maassen, Chris Matts, Chris Geary (a.k.a., Options Expire)

Responses to Thomas Cagley’s re-read Saturday blog posting series.
I am posting my responses here, before leaving a reply over there (chapter by chapter)

This book is a graphic business novel with some blog-like writing sprinkled in, so it reads fast.

Chapters 1 and 2:  a traditional Project Manager is replaced by Rose.

Rose takes up where David leaves off, since that is the only way of running a project she knows, and is heading down the same, no-win path.  The project plan is rejected again at the end of chapter 2.

The sketch of the project team facing their newly appointed (and unpopular) project manager (Rose) is classic — nobody is happy

Rose has been and is overworking, and her younger sister Lilly points out she is heading down the path of being a “moany old maid soon”.

The team is not happy with being asked to work extra hours to make this project happen with the current plan.  A plan the team was handed and did not help create.

And the only thing that will make these stakeholders happy is profit, and this project is at-risk.

Chapter 3:  Rose learns a different way to think about project management.

Jon:  “So they can co-ordinate their activity.”
Rose:  “But I do that.”

I like the idea of Host Leadership, rather than Servant Leadership; see “I’m Not a Servant – I’m a Host!  A New Metaphor for Leadership in Agile?” by Pierluigi Pugliese (others have said this too).

Notes from chapter 3

  1. Intro to flow principles, Kanban — and away from directive leadership.
  2. Avoid committing too early.  That is, avoid those big, detailed plans upfront when things are likely to change (e.g., software development projects).
  3. Blog about technical debt (“dirty dishes”)

Chapter 4:  Rose practices a new style of project management, based on kanban, agile, and lean …

We now see

  1. Planning collaboration with the team
  2. Visualization of work
  3. Stop starting and start finishing
  4. A reminder:
  5. Staff liquidity … people breaking out of their silos to break through the project bottlenecks.
  6. Probing on the project bottleneck to see what the problem is.  In this story it was “the specifications do not have enough detail for us to test”
  7. Deliver value to the customer in small increments, so adjustments can be made along the way, without risking getting way off track and not delivering what the customer values.
  8. Value analogy:  ordering a cup of tea, rather than ordering a tea-bag.
  9. Feature Injection is added to the story line.  Figure out the value by starting at the end — ask “why” questions to figure out what the customer is really after or really values.  [And then test your hypothesis] … “Hunt for Value” [and then verify you found it]

Chapter 5:  Rose explains “staff liquidity” to a skeptical executive.
This is like T-shaped skills.  And fortunately, most people are curious and enjoy learning and developing new skills.

Chapter 6:  the plot thickens, as Rose’s project is under scrutiny.
But first a little explanation of Game Theory and how it applies to group dynamics.

People’s tendency is to avoid uncertainty, even if that means making a decision too early.  Rose is managing the project with “real options”, deferring the key decisions until the last practical moment, introducing more uncertainty and fostering collaboration.

There is a blog post in chapter 6 telling us not to go too crazy with real options and to only consider options for the key decisions.  Plus, introducing too much uncertainty is bad for team dynamics:  “choice overload” or “decision paralysis”.

Perhaps “choice overload” affects team performance and perhaps not, see “More Is More: Why the Paradox of Choice Might Be a Myth“.  The situation and the end results will help you and the team make that call.

An aside:  in the last re-read, How To Measure Anything by Douglas Hubbard, the last chapter also discusses real options.  I think both this book and Hubbard are saying similar things about real options: you can’t put a financial value on business decisions using the Black-Scholes formula.  Estimating the probability inputs for the formula is the difficult task that does not translate well outside of the financial markets.

One last note:  I applaud the authors’ often subtle references to exercise as a way to manage stress.  Rose, after an emotional setback, has a great workout in chapter 6.

Chapter 7:  Rose finds out that the project sponsor is pulling out; option planning begins, starting with brainstorming the possible scenarios.

And for each scenario, a plan of action to either lessen the blow or be in position to take advantage of a possible opportunity.

Rose ushers out the elephant in the room (people may lose their jobs very soon), by helping the team put their personal scenarios in-place, before tackling the scenarios for the project.

This chapter’s blog title is “Increasing your psychic odds with Scenario Planning”.  Now Rose has the team thinking on their feet to come up with options to save the project.

Chapter 8:  Rose becomes the hero.  She is totally prepared for the customer meeting by taking the customer’s perspective and knowing several options.  Rose is very confident and poised during the customer meeting, which impresses her managers.

Final Thoughts:  I totally misestimated the time this re-read would take.  I re-read this book in 2-3 hours in the first week.  Mainly because this book is such a great read with engaging pictures.  But slowing down the re-read pace did help me learn more about what was behind Rose’s growth and success.

End of the re-read of “Commitment …” blog entry




Re-Read Saturdays: HTMA (part-2)

Re-read of the book
“How To Measure Anything: Finding the Value of ‘Intangibles’
in Business” (3rd edition) by Douglas W. Hubbard

Responses to Thomas Cagley’s re-read Saturday blog posting series.
I am posting my responses here, before a reply over there (chapter by chapter).

Part-1:  key concepts are what measurements are, why we measure, what risk is, and choosing what to measure (Chapters 1 – 7).

Part-2 (this post) is about the rest of the book, chapters 8 – 14 of the re-read.

Chapter 8:  The Transition: from What to Measure to How to Measure

The title of this chapter (The Transition: from What to Measure to How to Measure) is perfect for moving forward into part-2 of HTMA.

Hubbard summarizes what we can do to improve our measurements at the end of the chapter (p. 195) — (1) Work through the consequences, (2) Be iterative (yes! sounds familiar), (3) Consider multiple approaches, (4) What’s the really simple question that makes the rest of the measurements moot, and (5) Just do it.

My Notes from Chapter 8

  1. (p. 176-177) 6 questions to help determine the measurement methods
  2. (p. 178-179) 7 measurement instruments
  3. (p. 181) Decompose it (definition)
  4. (p. 183) Decomposition effect
  5. (p. 187) Some basic methods of observations
  6. (p. 190) Quick Glossary of (Measurement) Error
  7. (p. 193) A Few Types of Observation Biases
  8. (p. 194) Choose and Design the Instrument


Chapter 9: Sampling Reality: How Observing Some Things Tells Us About All Things

There is a lot of information in this chapter.  Hubbard’s narrative discussing how to measure the number of fish in a lake (p. 214 – 215) helps me understand how this book lives up to its title.
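The lake narrative is the classic capture-recapture method. As a minimal sketch (with made-up numbers, not Hubbard’s), the Lincoln-Petersen estimate works like this:

```python
def lincoln_petersen(tagged, second_catch, recaptured):
    """Capture-recapture estimate of population size: tag a first catch,
    release it, then see what fraction of a second catch carries tags.
    N is approximately tagged * second_catch / recaptured."""
    return tagged * second_catch / recaptured

# Tag and release 1,000 fish; a later catch of 1,000 contains 50 tags.
estimate = lincoln_petersen(1000, 1000, 50)
print(estimate)  # 20000.0
```

The intuition: if 5% of the second catch is tagged, the 1,000 tagged fish are probably about 5% of the whole lake.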

My Notes from Chapter 9

  1. Mathless 90% CI, p. 211 Exhibit 9.4
  2. See relations in the data, p. 236  Examples of Correlated Data
  3. p. 242  The two biggest mistakes in interpreting correlation.
    1. Assuming correlation proves causation
    2. Assuming correlation isn’t evidence of causation

Chapter 10:  Bayes:  Adding to What You Know Now

p. 247  “One of the key assumptions in most introduction-to-statistics courses is that the only thing you ever knew about a population are the samples you are about to take.  In fact, this is virtually never true in real-life situations.”

p. 248 “Bayes’ theorem is simply a relationship of probabilities and “conditional” probabilities. …”
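For a single binary hypothesis, the theorem reduces to a few lines of code. A minimal sketch, with illustrative numbers (not from the book):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem for a binary hypothesis H given evidence E:
    P(H|E) = P(E|H) * P(H) / P(E), with P(E) expanded over H and not-H."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# Illustrative: a 30% prior that a release is defect-prone; a smoke-test
# failure occurs in 80% of defect-prone releases but only 10% of healthy ones.
p = posterior(0.30, 0.80, 0.10)
print(round(p, 3))  # 0.774
```

The point Hubbard makes is exactly this mechanic: the prior you already hold is updated, not discarded, when new samples arrive.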

p. 258 the Instinctive Bayesian Approach

p. 262 Exhibit 10.5 Confidence versus Information Emphasis

p. 264 Peter Tippett overcoming the “all things must be done” thinking that prevents measurements.

p. 276 “The Lessons of Bayes” (summary)

Chapter 11:  Preference and Attitudes:  The Softer Side of Measurement

The hypothetical utility curves help with subjective trade-off evaluations (pages 300-301).

Example:  how does the performance of a software team completing on-time at 99% with a 95% error-free rate compare with another software team completing on-time at 92% with a 99% error-free rate?  Check the organization’s utility curve for this trade-off.
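As a toy sketch of that comparison (the linear weights below are invented for illustration; a real organizational utility curve is elicited, and usually nonlinear):

```python
def utility(on_time, error_free, w_on_time=0.4, w_error_free=0.6):
    """Illustrative linear utility over two performance dimensions.
    The weights encode the organization's trade-off preference; here
    error-free delivery is weighted slightly above on-time delivery."""
    return w_on_time * on_time + w_error_free * error_free

team_a = utility(0.99, 0.95)  # ≈ 0.966
team_b = utility(0.92, 0.99)  # ≈ 0.962
```

With these made-up weights Team A edges out Team B; flip the weights and the answer flips too, which is exactly why the trade-off must come from the organization’s own curve.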

My Notes

  1. Stated preferences versus Revealed preferences … Lean thinking
  2. This chapter has some good tips about designing surveys that can help measure and reduce uncertainty.
  3. p. 291 correlate subjective responses with objective measures (see Measuring Happiness)

Chapter 12:  The Ultimate Measurement Instrument:  Human Judges

Very good chapter notes!

Adding to (p. 325) “The Big Measurement Don’t – Above all else, don’t use a method that adds more error to the initial estimate.”, Hubbard warns us about using arbitrary scores (e.g., a scale of 1 – 5).

(p. 327) “I’ve always considered an arbitrary score to be a sort of measurement wannabe.”  Hubbard then proceeds to list six reasons to support his statement.

Chapter 13:  New Measurement Instruments for Management

This chapter is about new ways of measuring using resources on the internet:  two books called out are

  1. Eric Siegel “Predictive Analytics:  The Power to Predict Who Will Click, Buy, Lie, or Die”
  2. Hubbard’s third book “Pulse:  The New Science of Harnessing Internet Buzz to Track Threats and Opportunities”

Pages 351 – 352: Hubbard summarizes four Subjective Assessment Methods, including the one discussed in this chapter, the Prediction Market.  The other three, from previous chapters, are (1) Calibration Training, (2) the Lens Model, and (3) the Rasch Model.

Thank you Thomas for selecting Commitment – Novel About Managing Project Risk by Olav Maassen, Chris Matts, and Chris Geary (Illustrator)  as the next re-read.  It will be a quick and fun read, and help any project leader.

My 90% calibration estimate to complete this re-read is 2 – 3 weeks, even though it is 216 pages (hard cover edition); the pages turn fast!

Chapter 14:  A Universal Measurement Method: Applied Information Economics

I like the summary of this book which comes from question #23 (Chapter 14)  in the HTMA Workbook, and I am quoting both the question and answer …

Summarize six points the author makes about the AIE philosophy.

  1. If it’s really that important, it’s something you can define.  If it’s something you think exists at all, it’s something you’ve already observed somehow.
  2. If it’s something important and something uncertain, you have a cost of being wrong and a chance of being wrong.
  3. You can quantify your current uncertainty with calibrated estimates.
  4. You can compute the value of information by knowing the “threshold” of the measurement where it begins to make a difference compared to your existing uncertainty.
  5. Once you know what it is worth to measure something, you can put the measurement effort in context and decide on the effort it should take.
  6. Knowing just a few methods for random sampling, controlled experiments, Bayesian methods, or even merely improving on the judgements of experts can lead to a significant reduction in uncertainty.

Hubbard’s last paragraph in the HTMA book tells us how to start applying this knowledge (p. 385): “… and the practical cases described make you a little more skeptical about claims that something critical to your business cannot be measured”.

Last words about HTMA

Nice summary of HTMA Thomas!

This material, like running, takes more than just reading about it. It takes practice / training, and there are supplemental Excel worksheets online to study.

One of my favorite parts of HTMA is where Hubbard explains how to estimate the number of fish in a lake (p. 214 – 215).

The bridge: HTMA briefly discusses options and how “real” Options are over-used (p. 383-384). One of the themes in the next book up “Commitment…” is “real” Options. It will be interesting to compare notes.



Re-Read Saturdays: HTMA (part-1)

Re-read of the book
“How To Measure Anything: Finding the Value of ‘Intangibles’
in Business” (3rd edition) by Douglas W. Hubbard

Responses to Thomas Cagley’s re-read Saturday blog posting series.
I am posting my responses here, before a reply over there (chapter by chapter).

Part-1 (this) covers chapters 1 – 7 of the re-read
Part-2 covers chapters 8 – 14 of the re-read

Chapter 1:  The Challenge of Intangibles

This is not a re-read for me, but two books I have read this year have referenced Hubbard’s “How To Measure Anything”, so I am excited to get into this material and this re-read series.

In Hubbard’s own words “measure what matters, make better decisions” which also could have been the title for this book.

And on page 7:  “Upon reading the first edition of this book, a business school professor remarked that he thought I had written a book about some esoteric field called “decision analysis” and disguised it under a title about measurement so that people from business and government would read it.  I think he hit the nail on the head.”

The book “Lean Enterprise” (O’Reilly) references this book, and I found a quote which I think echoes Hubbard’s thinking.  In Chapter 5 (Lean Enterprise), there is a quote from another book, “Lean Analytics” … (the quote)
“If you have a piece of data on which you cannot act, it’s a vanity metric”.


Chapter 2:  An Intuitive Measurement Habit:  Eratosthenes, Enrico, and Emily

This is the motivation chapter, where Hubbard provides three different, inspirational, and instructive examples of measurements by Chapter 2’s named heroes — Eratosthenes, Enrico, and Emily.

From Eratosthenes, we learn “He wrung more information out of the few facts he could confirm instead of assuming the hard way was the only way.” (p. 17).

From Enrico Fermi, we learn “start to ask what things about it you do know” (p. 19).

From Emily Rosa (and Hubbard), we learn a stated benefit should have something tangible associated with it — “If the therapists can do what they claim, then they must, Emily reasoned, at least be able to feel the energy field.” {in my experiment} (p.21).

Hubbard’s own example:  “If quality and innovation really did get better, shouldn’t someone at least be able to tell that there is any difference?” (p. 24)

Hubbard challenges us and sets up the coming chapters:  “Usually things that seem immeasurable in business reveal themselves to much simpler methods of observation, once we learn to see through the illusion of immeasurability.” (p. 25)


Chapter 3:  The Illusion of Intangibles:  Why Immeasurables Aren’t

There is so much valuable information in this chapter about measurements and defending measurements from the doubters.

One other interesting thing I did learn from footnote #14, Mark Twain did not originally come up with “Lies, Damned Lies, and Statistics”, although as Hubbard points out, Mark Twain did help popularize this saying.

My notes from Chapter 3:

  1. Claude Shannon (Electrical Engineer, Mathematician) – Information Theory and how it applies to measurements (p. 31)
  2. Stanley Smith Stevens (Psychologist) – Scales of Measurement (p. 33)
  3. There is a Measurement Theory (p. 34)
  4. Bayesian Measurement, reduce uncertainty of the observer (p. 34)
  5. Measurement Clarification Chain (p. 39)
  6. The Power of Small Sample:  The Rule of Five (p. 42 – 43)
  7. Usually, Only a Few Things Matter — But They Usually Matter a Lot (p. 49)
  8. Paul Meehl (Psychologist) – “showed simple statistical models were outperforming subjective expert judgements in almost every area of judgement he investigated including predictions of business failures and outcomes of sporting events.” (p. 51)
  9. Defending Measurements (e.g., “this measurement cannot apply to this unique situation”)
    1. The Broader Objection to the Usefulness of Statistics (p. 52)
    2. Ethical Objections to Measurement (p. 55)
  10. Four Useful Measurement Assumptions (p. 59)
  11. Hubbard ends Chapter 3 with “Useful, New Observations Are More Accessible than You Think” (p. 65)
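Item 6, the Rule of Five, is easy to check empirically: there is a 93.75% chance that the median of any population falls between the smallest and largest of just 5 random samples. A simulation sketch (my own illustration, not the book’s):

```python
import random

def rule_of_five(population, median, trials=20_000):
    """Fraction of trials in which the min..max range of 5 random
    samples brackets the known population median."""
    hits = 0
    for _ in range(trials):
        sample = random.sample(population, 5)
        if min(sample) <= median <= max(sample):
            hits += 1
    return hits / trials

random.seed(1)
population = list(range(10_000))
median = sorted(population)[len(population) // 2]
rate = rule_of_five(population, median)
print(rate)  # close to the theoretical 1 - 2 * 0.5**5 = 0.9375
```

The theory behind the number: a miss requires all 5 samples on the same side of the median, which happens with probability about 2 × 0.5⁵ = 1/16.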


Chapter 4:  Clarifying the Measurement Problem

Thomas, your summary of Chapter 4 is very thorough!

Did you know there is a companion HTMA workbook for the HTMA 3rd edition book we are re-reading?

One question from the workbook I missed is the concept of “false dichotomy” (see p. 75 in the HTMA book) when exploring a decision to measure.

Make sure the decision you are supporting through a measurement is not a false dichotomy.  That is, a choice where one of the alternatives is not really feasible.  Typically, as Hubbard explains, it is a “Yes/no choice between two extremes” or, as I think about it, a “Mom and apple pie” decision statement.

My example:  suppose your decision is whether you should exercise or not.  Drill down on that decision statement and define what type of exercise program you should engage in instead (e.g., gym workouts, running, bike riding, hiking) and then figure-out the measurement to support that decision.

Hubbard’s examples were (1) clean drinking water (book) and (2) worker safety (workbook).

My notes from Chapter 4:

  1. 5 point process (p. 71) – starting with “What is the decision this measurement is supposed to support?”
  2. False dichotomy (p. 75)
  3. Requirements for a Decision (p. 78)
  4. If you understand it, you can model it (p. 80)
    1. decomposition to improve estimates (p. 81)
  5. Definitions of Uncertainty and Risk (p. 84)  **** (4 stars!)
  6. Clarified Decision example – IT Security at the VA – (pages 84 – 90)


Chapter 5:  Calibrated Estimates:  How Much Do You Know Now?

I learned a new skill reading this chapter and practicing the sample calibration tests.

My approach for using the calibration tests in this chapter and in the appendix of the book is to take 5 questions at a time.  That is, if you are time-challenged.

I was happy that my 90% range did cover the actual value of the average percentage of Design in Software projects.

And I also recommend the Freakonomics podcast titled “How to Be Less Terrible at Predicting the Future” (January 14, 2016).

My notes from chapter 5

  1. Calibration is a skill that can be learned.
  2. Work on the low-end and high-end as two separate questions.
  3. Go wide enough to be more than 90% confident, then bring in your estimates to your 90% interval, on each end.
  4. For subjects you know nothing about, do a little research first.
  5. Practice.


Chapter 6:  Quantifying Risk through Modeling

I have drunk the Kool-Aid on this stuff.  Forecasting using Monte Carlo simulation is a much better way.

Author Daniel S. Vacanti also has some words to say about this in his book “Actionable Agile Metrics for Predictability“, and Vacanti’s book also references Hubbard’s HTMA book.

Hubbard talks about selecting the correct probability distribution for your Monte Carlo simulation.  Vacanti states you need not worry about this if you have the data.  And Vacanti’s book is all about collecting the data for processes (incoming and outgoing).

I use this exact approach to collect data about the scrum sprint processes.  The clock starts when a user story is accepted into a sprint and the sprint begins.  The clock ends when either that user story is deployed into production, postponed, or returned into the Backlog.  Start – Stop cycle-time is what I report on, but I also collect two other intermediate events – dev-complete and business accepted to help figure-out how-to reduce the end-to-end cycle-time.

I then use this data to forecast using Monte Carlo simulations with the help of Daniel S. Vacanti’s Actionable Agile online tool.
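As a minimal sketch of that kind of forecast (the cycle times below are illustrative, not real project data, and the model is simplified to one story in flight at a time; tools like Actionable Agile account for parallel work): resample historical cycle times many times and read a percentile off the simulated outcomes.

```python
import random

def forecast_days(cycle_times, stories, trials=10_000, percentile=0.85):
    """Bootstrap Monte Carlo: simulate many possible futures by drawing
    from historical cycle times, then return the total duration we would
    beat in `percentile` of the simulated futures."""
    totals = sorted(
        sum(random.choice(cycle_times) for _ in range(stories))
        for _ in range(trials)
    )
    return totals[int(percentile * trials) - 1]

random.seed(7)
history = [3, 5, 8, 2, 13, 4, 6, 9, 5, 7]  # days per story, illustrative
result = forecast_days(history, stories=10)
print(result)
```

Read the output as “an 85% chance of finishing 10 stories within this many days”, which is the shape of Service Level Objective quoted to stakeholders.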

My Notes from chapter 6

  1. p. 123, “if a measurement matters to you at all, it is because it must inform some decision that is uncertain and has negative consequences if it turns out wrong.”
  2. How NOT to quantify risk – low, medium, high; write a check to an insurance company with the amount “medium”.
  3. It is not enough to just read this chapter; go get the book’s worksheets at the HTMA companion web-site and examine them.
  4. The book’s companion workbook challenges you to not just read the material but to work with it and develop the skills needed to use these concepts confidently.
  5. p. 134, “So we don’t ask whether a model lacks some detail.  Of course it does.  What we ask is whether our model improved on the alternative model by enough to justify the cost of the new model.”
  6. p. 138 – 139, Exhibit 6.7 A Few Monte Carlo Tools
    (Also, note Professor Sam Savage (Stanford) has a tool, and Hubbard wrote about Professor Savage’s work (p. 136 – 137), “how-to institutionalize the whole process” and having a CPO = Chief Probability Officer.)
  7. p. 140, “Risk Paradox:  If an organization uses quantitative risk analysis at all, it is usually for routine operational decisions.  The largest, most risky, decisions get the least amount of proper risk analysis.”  (e.g., IT investments)


Chapter 7:  Quantifying the Value of Information

What is actually worth measuring?

How many times have we been part of a project where the convenient measurement is the focus of all the attention, while we intuitively know this measurement is a proxy measurement at best?

This chapter gives the reader willing to study and work at it a method to figure out what to actually measure that will make an economic difference.

My Notes from Chapter 7

  1. Must work the accompanying Excel worksheets to develop this skill.
  2. (p. 145) re-read the “The McNamara Fallacy”, it captures what goes wrong with many measurement programs.
  3. (p. 149) Value of Information formula
  4. (p. 157) Zilliant: A Pricing Example (case study) …
  5. (p. 160) The Value versus Cost of Partial Information graph
  6. (p. 162) key concept – A Common Measurement Myth — when there is a lot of uncertainty, a lot of data is NOT required.
  7. (p. 167) The Measurement Inversion (what is really important to measure, and what is not) … go to top of page 168 for a summary
  8. (p. 171 – 172) Part-1 summary

End of part 1 of re-read of HTMA (How To Measure Anything), chapters 1 through 7




Mythical Man-Month Re-read Replies (part 3 of 3)



Software Process and Measurement (SPaM Cast) podcaster, author, and Agile Consultant Thomas Cagley ran a re-read Saturday series on Dr. Fred Brooks’s classic computer science book “The Mythical Man-Month“.

I replied to all 18 of Thomas’s Mythical Man-Month wordpress Posts.

Herein are the replies 13 through 18 I made.
My replies posts of Part-1 and Part-2 are already out.

Essay 13 reply (October 04, 2015) …
The Whole and the Parts

Masterpieces! The tar-pit picture on the front cover of The Mythical Man-Month and the introduction to this chapter. Brooks combines a picture of Mickey Mouse as the wizard in Fantasia (The Walt Disney Company) and a profound quote from Shakespeare, King Henry IV, part 1.

“I can call the spirits from the vasty deep.
Why so can I, or so can any man; but will they come when you do call for them?”

This quote rings true across many disciplines; for technology, this quote and image are as fun today as they were 40 years ago when Brooks was writing.

Essay 14 reply (October 25, 2015) …
Hatching a Catastrophe

Thomas, excellent analogy about frogs in boiling water. Brooks’s memorable line “How does a project get to be a year late? … One day at a time.” has been a favorite of mine my whole career.

This essay should be read by every Project / Program Manager, it is a classic.

On page 161, Brooks calls what today we would call the Program Management Office an “irritant”, but tells us that an A. M. Pietrasanta was able to run an effective PMO. I wish Brooks had written more details about this remarkable accomplishment.

Essay 15 reply (November 01, 2015) …
The Other Face

Roll your eyes, this (documentation) isn’t a very exciting subject for most.

I had to comb this chapter a couple of times to find some useful tips from Brooks; unlike the previous chapters, where usefulness and interesting information just jumps out.

Are you using a standard that is not being used as intended?

Flow charting was such a standard practice, mandated as a deliverable when I started my technology career.

p. 168 “Many shops proudly use machine programs to generate this “indispensable design tool” from the completed code.”

Flow charting back then meant documenting the entire program flow. Flowcharts are a great teaching tool and also a good visual design tool when used informally (paper and pencil, and high-level). But formally replicating the exact code logic is a waste of time.

I doubt if anyone consulted independent flow charts more than once. Why? Because they could not be trusted as a source-of-truth, but source-code has always been a trusted source-of-truth.

Any documentation extracted from the source-code can be trusted (assuming reliable software). Brooks does discuss “self-documenting programs” and I am sure Brooks became a fan of Javadoc.  Wait, perhaps Brooks’s writing in this chapter helped influence the creation of Javadoc.

Essay 16  reply (November 08, 2015) …
No Silver Bullet – Essence and Accident In Software Engineering

I am no longer in re-read territory; I read the original publication, and this essay was published 11 years after it.

In 1986, the IBM mainframe dominance that Brooks helped start was eroding, with departmental computers from the likes of HP, DEC, and Sun, and even from IBM itself.

The era of the PC was in full swing: IBM PC clones abounded running Microsoft’s MS-DOS, Apple’s Macintosh had entered the market, Byte magazine was in its heyday, and the HTTP protocol had yet to be invented.

As Thomas states above, this essay is the longest yet. I wonder if Brooks’s continued career as the Computer Science Chair at the University of North Carolina (Chapel Hill) has influenced his writing style. I like the earlier chapters’ succinct style better.

And as Thomas infers above, there are many seeds of Agile written about in this essay.

Essay 17  reply (November 15, 2015) …
No Silver Bullet, Refired

Agreed, Brooks’s target audience has definitely changed in this chapter, from the practitioner to academia. It reminds me of the director’s cuts you find on some DVDs; most viewers are only interested in the full feature movie.

Brooks is asking and answering a very important question: what in software development has improved, and what has not improved so much, over the last 20 years?

I want to focus my response on an area Brooks wrote about in the previous chapter and one that I currently wrestle with.

p. 199 “The hardest single part of building a software system is deciding precisely what to build.”

Some use Lean techniques in software today to discover the right thing in software to build.

SPaMCAST 342 – Gorman, Gottesdiener, Discover to Deliver Revisited
discusses this subject and is an excellent Podcast to re-listen to.

PMI is now offering training and certification as a “Professional in Business Analysis” (PBA).

During “The Mythical Man-Month” time period (20 years), the recognition of “What to build?” emerged as a question equally important as “How to build it?”.

Essay 18 reply (December 06, 2015) …
Mythical Man-Month After 20 Years

Thank you Thomas for choosing
“The Mythical Man-Month” and running with it!

The graphics and quotes at the beginning of each essay are always interesting. This essay begins with …

“I know no way of judging the future but by the past.” Patrick Henry
followed by
“You can never plan the future by the past.” Edmund Burke

Brooks is back in form with this long and thoughtful essay.

As Thomas states above, this re-read was insightful for today’s world; we either learn or re-learn from this re-read experience.

For me, during this re-read, I paid a little more attention to Brooks the man. I learned that Brooks, besides being very perceptive and a good writer, is also a person of high integrity, a religious man, and a computer scientist first and a manager second.

I also learned through Amazon recommendations, that Brooks has another interesting book patterned after the “The Mythical Man-Month” which joins my already crowded book-list – “The Design of Design: Essays from a Computer Scientist” (copyright 2010).


End of Part-3 (of 3),  contains the replies from essays 13 through 18
Part-1 contains the replies from essays 1 through 6
Part-2 contains the replies from essays 7 through 12

Mythical Man-Month Re-read Replies (part 2 of 3)



Software Process and Measurement (SPaM Cast) podcaster, author, and Agile Consultant Thomas Cagley ran a re-read Saturday series on Dr. Fred Brooks’s classic computer science book “The Mythical Man-Month“.

I replied to all 18 of Thomas’s Mythical Man-Month wordpress Posts.

Herein are the replies 7 through 12 I made.
You can anticipate my next wordpress Post; Part-1 is already out.

Essay 7 reply (August 26, 2015) …
Why Did the Tower of Babel Fall?

Reduce the need for communication: Dr. Brooks states that D. L. Parnas proposed a radical solution (back in the 1970s), information hiding in modular programming. A person doesn’t need to know everything; (p. 78) “the programmer is most effective if shielded from, rather than exposed to the details of construction of the system parts other than his (her) own”.

We see this everyday; I can know the time without knowing how the clock works. I can drive a car without knowing how the motor works. I can “leave a reply” without knowing how the web-site works.

My favorite quote from this chapter (p. 80): “Thinkers are rare; doers are rarer; and thinker-doers are rarest.”

Essay 8 reply (August 30, 2015) …
Calling the Shot

Thomas, you nailed this chapter.

“Prediction is very difficult, especially if it’s about the future.”
– Niels Bohr

I like Brooks’s good-enough estimating rules for his time / situation (p. 93),
“My guidelines in the morass of estimating complexity is that compilers are three times as bad as normal batch application programs, and operating systems are three times as bad as compilers.”

Essay 9 reply (September 06, 2015) …
Ten Pounds in a Five–Pound Package

Thomas, to amplify on #3 above, and something I see a lot: Brooks (p. 100) “Fostering a total-system, user-oriented attitude may well be the most important function of the programming manager.”

From Lean, we learn the words “may well be” can be replaced by “is”. And for that matter, “programming manager” can be replaced by “everyone”, certainly everyone that strives to provide leadership.

Another quote I like (p. 98) “Like any cost, size itself is not bad, but unnecessary size is”.

Essay 10 reply (September 13, 2015) …
The Documentary Hypothesis

It was Brooks’s reference to Conway’s Law (p. 111), which I was researching months ago, that sparked my interest in re-reading The Mythical Man-Month.

(p. 112) “But only the written plan is precise and communicable. Such a plan consists of documents on what, when, how much, where, and who.”

Brooks must have been thinking about project management / project execution when he wrote this. The “why” is missing. We all, on occasion, fall into the trap of assuming the “why” is understood. Or worse, telling people to “just get it done”.

Essay 11 reply (September 20, 2015) …
Plan to Throw One Away

As Thomas states above, Brooks understood the nature of software development and foreshadowed much of what we now call agile.

Brooks also discussed software defects in this chapter, see figure 11.2 (p. 121) “Bug occurrence as a function of release age”

“These things get shaken out, and all goes well for several months. Then the bug rate begins to climb again. Miss Campbell believes this is due to the arrival of users at a new plateau of sophistication”

Essay 12 reply (September 27, 2015) …
Sharp Tools

There are many ideas to think about this week. I will respond to something that wasn’t around when The Mythical Man-Month was written and take a broader definition of software tools – open-source.

Specifically the open-source that finds its way into the software product / solution being delivered. Security concerns do need to drive the list of software tools / components selected to a smaller and vetted list.

1) Reduce the product’s attack surface
2) A more focused organizational response when security vulnerabilities are found

End of part-2, contains the replies from essays 7 through 12
Part-1 has replies from 1 through 6 and
Part-3 has replies from 13 – 18