A few weeks ago one of my guys sent me this article from The Agile Executive which is - although not explicitly stated - about the collision between ITIL infrastructure governance and agile development methodologies. I use the word 'collision' very deliberately, because those guys deserve credit for how well they've managed to knit those (often opposing) worlds together. Which brings me nicely onto gates.
Gates, defined as the "You Shall Not Pass" checkpoints which must be navigated throughout a project, are not inherently bad. They just get misused, in the same way that anything we do has the potential to be applied overzealously.
Gates have a place in delivery and fulfill an important function. They serve as a kind of final quality checklist to make sure that all those things you said mattered and had to be done actually have been done. As long as they are few, important, and near the end of the development cycle, you can make this work.
Where this tends to go wrong is when gates are established without clear and measurable clearance criteria defined up front. What you have then is an obstacle with variable and subjective success criteria - and then everyone wonders why a project arrives at the gate and struggles to meet the requirements to move on. An understanding of exactly what's on the checklists a project will face on its way to going live, and exactly how each item will be measured, has to be front-loaded, so that development teams know exactly how to move a piece of work through the lifecycle smoothly and can organize resources in advance.
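To make the point concrete: if clearance criteria really are measurable, they can be codified as automated checks rather than left to a subjective judgment call at the gate itself. Here's a minimal sketch in Python - the gate name, metrics, and thresholds are all invented for illustration, not taken from any particular governance framework:

```python
# Hypothetical sketch: gate criteria published up front as machine-checkable rules.
# Metric names and thresholds below are invented examples.

from dataclasses import dataclass

@dataclass
class Criterion:
    name: str         # what is being measured
    threshold: float  # minimum acceptable value, agreed before work starts
    actual: float     # value measured when the project reaches the gate

    def passes(self) -> bool:
        return self.actual >= self.threshold

def evaluate_gate(gate_name: str, criteria: list) -> bool:
    """Print pass/fail for every criterion; the gate clears only if all pass."""
    all_passed = True
    for c in criteria:
        status = "PASS" if c.passes() else "FAIL"
        print(f"[{gate_name}] {c.name}: {c.actual} (need >= {c.threshold}) {status}")
        all_passed = all_passed and c.passes()
    return all_passed

# Example: a 'release readiness' gate whose criteria were agreed up front.
criteria = [
    Criterion("unit test coverage (%)", 80.0, 83.5),
    Criterion("runbook sections complete (%)", 100.0, 100.0),
]
print("Gate cleared:", evaluate_gate("release-readiness", criteria))
```

The point of the sketch is that everything in `criteria` exists before the project starts, so a team always knows the exact distance between where they are and what the gate demands.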
If this isn't established early and you get a project logjam downstream, then you often end up having to compromise - and that means compromising on the key quality or operational requirements that you believed were important enough to gate on - in order to restart the flow. The good old-fashioned 'defects fixed later cost more' curve still applies here.
Another common misuse of gates is introducing them into a process primarily for reporting purposes. It is true that a regular set of gates established throughout the end-to-end project process provides a set of convenient handholds against which to benchmark progress (we are at 'customer feedback on specification' or entering 'test run 3' etc) but there are a couple of downsides that go along with that.
Firstly, it tends to encourage over-reliance on artifacts such as documentation and reports, rather than working software, as a primary work output. Secondly, progress becomes measured by the clearance of gate after gate, step after step, and revisiting a previous step is seen as a step backwards. That lines up with an easy-to-describe linear view of the world; however, in most complex projects in most organizations things are not that simple, and progress often involves degrees of overlap and leapfrogging.