Toward empirical IT governance

Overheard in a discussion among presumed Enterprise Architects: “I know that’s the principle – but what’s the standard?”

That took me back – back to the heady days I spent facilitating workshops designed to capture principles that “ought to” govern the IT development process (including definition, design and sustainment) at my then-employer. I’m sure it comes as no surprise that my title was Enterprise Architect.

The process was exhilarating; I found I loved designing and facilitating the workshops, just as I had enjoyed participating in similar workshops as a subject-matter expert. Reflecting on it now, however, I question the premise – and a cherished premise it is – of the primacy of principles. After all, we were capturing the opinions of individuals asked, essentially, to predict the future (not in detail, but in general).

One could depict the model (while lampooning it, a bit, to be sure) using the following diagram:

[Diagram: EA – Principles → Standards → IT Systems & Processes]

The “right” set of broad Principles influences Standards – by their nature, more specific, focused and “actionable” – which are reflected, in turn, in the IT Systems & Processes of the organization. This isn’t ridiculous or hopeless – but, when it operates in just one direction, top-down, it’s a bit like balancing a pyramid on its point.

Instead of, or in addition to, the abstract approach reflecting Enterprise Architecture, we need a complementary process reflecting Computer Science – with the emphasis on “science”.

The scientific method is more bottom-up than top-down. It upholds the primacy of empirical evidence – what’s actually happening. The scientist may start with a hypothesis – but can just as easily start with direct investigation and experimentation.

[Diagram: (Computer) Science – Empirical Evidence → Theory → Laws]
As depicted above, the scientific method accepts only Empirical Evidence as “gospel”, establishing a body of Theory only to the extent that it assists in understanding or explaining why we observe what we observe, and not something else. At the top of this pyramid teeter extremely powerful, yet fragile, Laws. A Law predicts the future – often, very precisely – but must be abandoned (or radically reformed) the moment we find its predictions to be misleading.

Almost every scientific discipline has discovered Laws, and Computer Science is no exception. Consider Moore’s Law – a good example not only because of its renown and the reliability (so far) of its predictions, but also because of its limitations, and because it does more than predict future outcomes: it influences them at their source, by shaping the behaviour of the very organizations that create the artifacts making up its domain.
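
To make the prediction point concrete, here is a toy sketch (mine, not a claim about how anyone actually models it): Moore’s Law read as a bare doubling rule. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) and the two-year doubling period are the commonly cited figures; pushing the rule far into the future is exactly where its limitations begin to show.

```python
# Toy sketch only: Moore's Law read as a bare doubling rule.
# Assumptions (for illustration): a 1971 baseline of ~2,300 transistors
# (the Intel 4004) and a doubling period of two years.

def projected_transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Project a transistor count assuming one doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```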

If we were (informally; we’re technologists, not scientists, by trade) to “do” Computer Science within the domain of our own IT organization, what would that look like?

First, we’d gather the empirical evidence. What is actually happening in our shop? How do we evaluate proposals, green-light projects, select and implement systems? What are our support and sustainment processes (both heretofore documented and – especially – undocumented)?

Next, we’d try to articulate why things work the way they do in our shop. What are the general themes and trends we’re seeing? Can we spot commonalities among “data points” we would have assumed were unrelated? If so, what do we see when we extrapolate? Does the resulting “curve” fit the “data”?
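
Concretely, that last step might look like the following sketch. The monthly figures are invented and the “curve” is just a straight line fitted with numpy, but it shows the shape of the check: fit, then ask how well the fit matches the observations.

```python
import numpy as np

# Hypothetical monthly support costs (hours) over a year of observation.
months = np.arange(1, 13)
costs = np.array([82, 79, 85, 81, 83, 80, 84, 82, 78, 83, 81, 80])

# Fit a straight-line "curve" and see how well it matches the data.
slope, intercept = np.polyfit(months, costs, 1)
fitted = slope * months + intercept
rms_error = np.sqrt(np.mean((costs - fitted) ** 2))

print(f"slope ~ {slope:.2f} hours/month, RMS error ~ {rms_error:.1f} hours")
# A slope near zero with small residuals suggests "costs are roughly
# constant" fits these observations better than any rising or falling trend.
```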

If so, we’ve hit upon a theory that fits our observations. We may even articulate a (tentative) law. For example, we may discover – as counterintuitive as it seems – that our support team’s costs are pretty much fixed; that they vary hardly at all with the urgency and severity of the “tickets” (reported issues) to which those costs are allocated. Intriguing! It would seem the “Law of Fixed Support Costs” applies in our domain.
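
A minimal sketch of how we might look for that pattern in our own ticket data – the field names and figures below are entirely hypothetical – is to group closed tickets by severity and see whether the average cost actually moves:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical closed-ticket records: (severity, hours spent resolving).
tickets = [
    ("low", 6.5), ("low", 7.0), ("medium", 6.8), ("medium", 7.2),
    ("high", 7.1), ("high", 6.9), ("critical", 7.3), ("critical", 6.7),
]

hours_by_severity = defaultdict(list)
for severity, hours in tickets:
    hours_by_severity[severity].append(hours)

for severity, hours in hours_by_severity.items():
    print(f"{severity:>8}: mean {mean(hours):.1f} h across {len(hours)} tickets")

# If the means are essentially flat across severities, a "Law of Fixed
# Support Costs" becomes a candidate theory for this shop's observations.
```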

That such a law would apply might naturally disturb us – and we might launch a deeper inquiry into what it is about our support processes that doesn’t “care” about urgency or severity. Perhaps we’ll find that the phenomenon is an illusion – a side-effect of “bad science” on the part of those who report issues or enter support tickets; we may find, for example, that nearly every ticket is entered with a high urgency (a quick check for this is sketched below). Alternatively, we may find that our shop, by its nature, just works that way: perhaps we’re very small, with most issues handled by the same few support people, who have to consult the appropriate subject-matter experts to resolve nearly every ticket – resulting in a fairly high and invariant overhead. In that case, the discovery of the “Law of Fixed Support Costs” might inspire us to sign a fixed-cost support contract with a service provider.
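
That quick check could be as simple as counting how the urgency field is actually used – again a sketch over made-up data. A field that is nearly always “high” carries no information and can’t explain anything:

```python
from collections import Counter

# Hypothetical urgency values as entered by issue reporters.
reported_urgencies = ["high"] * 46 + ["medium"] * 3 + ["low"] * 1

counts = Counter(reported_urgencies)
total = sum(counts.values())
for urgency, n in counts.most_common():
    print(f"{urgency:>6}: {n:3d} tickets ({n / total:.0%})")

# If one value dominates (here 92% "high"), the field is effectively a
# constant: it can't explain variation in cost, and the apparent "law"
# may just reflect how tickets are entered.
```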

To sum up: top-down Enterprise Architecture may have its merits, and ideally it can influence behaviour – but (Computer) Science, performed validly, will more reliably reflect what’s really going on. Use a Computer Science-based approach as a check against the “solution in search of a problem”.