Appropriate Gating

I spotted a question, asked by Jim Stewart, on LinkedIn. It was in a Group I don't belong to, so I couldn't post a reply immediately. So I thought I'd write a slightly more explanatory comment and post it here instead. The original discussion is here.

As per Jim’s questions: how do you maintain a robust stage gate process for “lighter” projects while making it less onerous? Is there anything beyond the standard options of removing or consolidating gates? And if everything in a stage gate checklist is required, how do you determine what is non-essential?

Graham’s reply contained good Portfolio Management solutions. (BTW, hi Graham - good to bump into you.) As well as alternative assurance (tolerance, management by exception, independent assurance checks), you can reach some form of consensus about the risk inherent in projects and smaller initiatives by classifying them. The level of gating would then explicitly match the level of risk. As Jim points out, this is good, although it leaves the problem of having an effective vetting process for those that “escape” the tougher classifications. 
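
To make the classification idea a little more concrete, here is a minimal sketch of what matching a gating tier to risk could look like. It is my own illustration, not Graham’s or anyone’s standard: the attributes, thresholds, and tier names are all assumptions you would replace with whatever your organisation actually agrees on.

```python
# Hypothetical illustration: map a few agreed project attributes to a gating tier.
# Attribute names, thresholds and tier labels are placeholders, not a standard.

def gating_tier(budget_keur: float, duration_months: int, novelty: str) -> str:
    """Return a gating tier; heavier tiers get more gates and deeper checklists."""
    score = 0
    score += 2 if budget_keur > 500 else (1 if budget_keur > 100 else 0)
    score += 2 if duration_months > 12 else (1 if duration_months > 6 else 0)
    score += 2 if novelty == "new technology" else (1 if novelty == "new supplier" else 0)

    if score >= 5:
        return "full gating"        # every gate, full checklist
    if score >= 3:
        return "standard gating"    # consolidated gates
    return "light gating"           # single review plus management by exception

print(gating_tier(budget_keur=80, duration_months=4, novelty="routine"))            # light gating
print(gating_tier(budget_keur=700, duration_months=18, novelty="new technology"))   # full gating
```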

To make it worse, the root of the problem is that organisations really do need to deliver faster, which leads to fast-tracking (bypassing controls) and to taking on more risk. 

Let’s not be under any illusions as to how difficult the problem that Jim highlights really is. It is very difficult for the following reasons:

  • 9 out of 10 organisations have no organisational memory: no actionable data about what failed, when, why, and under whose watch. This rules out any objective claim that an improvement is cost-beneficial. Compared to what?
  • The vast majority of organisations have a very low level of maturity with respect to the change and delivery process. This rules out swift action that can be cascaded in a controlled manner. 
  • The capability to estimate properly is very rare in organisations. There is an optimism bias in nearly all project workers, the effects of which are made much worse by management’s need to challenge. Most projects end up with estimates that are as beautiful and satisfying as the Emperor’s new clothes. Yet even that situation is quite positive and pragmatic compared to the estimation of risk, which is what matters in this case: it is very, very rare to find risk that is quantified by calibrated estimators (a sketch of what that could look like follows this list). Most risks and issues are colourfully graded on arbitrary five-point scales, totally trapped in silos of subjectivity, which make a mockery of Portfolio rankings. Without a way to estimate risk you cannot even know which projects are relatively riskier, so you can’t put them in buckets safely. 
  • And finally, just when you thought you could mitigate all this, may I remind you of the loss-aversion bias identified by Kahneman & Tversky...
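
As an aside on the “calibrated estimators” point, the sketch below shows one way quantified risk could look, assuming each risk is expressed as a probability of occurring plus a calibrated 90% range for its cost impact; the numbers, the uniform distribution, and the kEUR units are placeholders rather than a recommendation. The pay-off is that projects end up on the same monetary scale, so they can be ranked and bucketed defensibly rather than colour-coded.

```python
# Illustrative only: turn per-risk probability + calibrated 90% cost ranges
# into a simulated loss distribution per project, so projects can be compared
# on one scale instead of five-point colour ratings.
import random
import statistics

def simulate_loss(risks, trials=10_000):
    """risks: list of (probability, low_90, high_90) tuples; returns simulated total losses."""
    losses = []
    for _ in range(trials):
        total = 0.0
        for p, low, high in risks:
            if random.random() < p:
                # Crude stand-in for a calibrated range: uniform between the 90% bounds.
                total += random.uniform(low, high)
        losses.append(total)
    return losses

project_a = [(0.3, 50, 200), (0.1, 100, 400)]   # (probability, low kEUR, high kEUR)
project_b = [(0.6, 10, 40), (0.5, 20, 80)]

for name, risks in [("A", project_a), ("B", project_b)]:
    losses = sorted(simulate_loss(risks))
    print(name,
          "expected loss:", round(statistics.mean(losses), 1),
          "90th percentile:", round(losses[int(0.9 * len(losses))], 1))
```
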
But there is always a way out. There is always a best next step, no matter how long and tortuous the road ahead.

My view is that the most pragmatic thing to do, under the pressure to cut corners and go faster, is to pick one or more of the projects most in demand for fast-tracking and run them as pilots under experimental controls. This achieves the following:

  • It responds to the needs of the organisation (customer)
  • It brings the risks of the project(s) in question out in the open so that they can be managed explicitly
  • It creates real, local data about risk vs controls, success vs failure
  • It creates a baseline for potential new, less onerous controls, backed by evidence that they are more effective, at least in the target organisation. 

The “how” depends totally on the organisational environment, but the most impressive gains come from the simple decision to make the people who are accountable (PMs, sponsors, key stakeholders) reach a consensus about the risk of each pilot project. Accountability is the key. Decisions must be documented so that it is clear who signed off on the minimal controls in each case. They will get the credit when a pilot works; they will have to shoulder the blame if a pilot fails the organisation. 
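
For what it’s worth, here is a minimal sketch of what such a documented sign-off could look like as a record. The field names and values are my own assumptions; whatever governance tooling you already have would dictate the real shape.

```python
# Hypothetical shape of a sign-off record for a fast-tracked pilot.
# Field names are illustrative; the point is that accountability is explicit.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PilotSignOff:
    project: str
    agreed_risk_level: str          # the consensus reached by the accountable people
    controls_waived: list[str]      # which standard gates/checks are being skipped
    controls_retained: list[str]    # the minimal controls that remain
    accountable: list[str]          # PM, sponsor, key stakeholders who signed
    signed_on: date = field(default_factory=date.today)

record = PilotSignOff(
    project="CRM upgrade fast-track",
    agreed_risk_level="medium",
    controls_waived=["full business case gate", "stage 2 checklist"],
    controls_retained=["independent assurance check", "management by exception tolerances"],
    accountable=["J. Doe (PM)", "A. Smith (Sponsor)"],
)
print(record)
```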

The key to working out the “how” is to realise that this is first and foremost a risk evaluation problem; the mechanics of stage gate checklists follow fairly safely if the risk analysis and mitigation are reliable. The problem of vetting projects for the lighter-touch approach also becomes fairly objective. 

The alternative controls need to enlist as many business-as-usual (BAU) operational risk processes as possible. The more integrated into the organisation the controls are, the faster you can get the pilots going, and the safer they will be. Don't invent too many new things at this point. Measure the success of the controls along the pre-agreed risk categories used to analyse the projects’ risk in the first place. 
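
Purely as an assumption about how that measurement might be recorded, the idea is simply to log every incident against the same risk categories used in the original analysis and then compare pilots with conventionally gated projects; something as small as the sketch below would do to start. The project names, regimes, and categories shown are placeholders.

```python
# Illustrative only: log incidents against the same pre-agreed risk categories
# used in the projects' risk analysis, then compare control regimes.
from collections import defaultdict

# Hypothetical incident log: (project, control regime, risk category)
incidents = [
    ("Pilot-1", "light gating", "supplier delay"),
    ("Pilot-1", "light gating", "scope creep"),
    ("Project-X", "full gating", "scope creep"),
]

# Count incidents per risk category within each control regime.
by_regime = defaultdict(lambda: defaultdict(int))
for _project, regime, category in incidents:
    by_regime[regime][category] += 1

for regime, counts in by_regime.items():
    print(regime, dict(counts))
```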

Please let me know how it goes. I'm genuinely interested.