My time with my tribe

Five or six years ago, a couple of the guys from Enterprise Architecture (EA) paid a visit to the information services (IS) group where I worked as a solutions architect. They were on a “road trip”, they told us, visiting each IS business solutions group in the company to help us identify our target architecture.

I didn’t know we needed help. I didn’t know we needed a target architecture. But I didn’t care; I was too intrigued by the box of markers, sticky notes, and other materials. This was the “facilitation box”, I was told.

That workshop – the intriguing process the EA guys led us through, the fun we had, and the robust, organic result we obtained – infected me with the facilitation bug. I volunteered or was invited to attend a handful of other such workshops, on a variety of topics, over the next couple of years – until I joined EA and got a chance to lead workshops, myself! I took some of the same training the other EAs had completed, and learned that the apparent magic they worked was actually technology – the Technology of Participation (ToP), to be precise, created by the Institute of Cultural Affairs, and provided in Canada by ICA Associates.

The Gathering of the Tribe

Before long, I left EA, and then the company. But I continued to call myself a facilitator, and to find opportunities to use the skills in the context of my work as a consulting IT architect. An architect can spend a lot of time facilitating: conducting stakeholder interviews, running future state planning workshops, chairing meetings of communities of practice, leading design or code reviews, etc. By the end of 2014, I was even thinking: Could I ever make a living primarily as a facilitator, rather than as an IT architect?

As I was researching this exciting and daunting path, I learned that the International Association of Facilitators (IAF) would be holding its 2015 North American regional conference in Banff, Alberta – right next door to Calgary, where I make my home. I decided I had to make time for something so timely – so I registered for IAFNA 2015. I was one of a “tribe” of 140 (or so) delegates joining many wonderful presenters and organizers for prepared sessions, peer discussions, and social events, from May 14th to 16th.

My impressions of IAFNA were many, and my notes, copious. The conference ended just a couple of days prior to this writing, so I have barely begun to sort out everything I learned, or what I will use in my work, or how. I will include a few thumbnail sketches, here, and may delve into some of these concepts and tools in more depth, in future posts.

Process innovation (and the role of ELMO)

I attended a full-morning session with the warm and engaging Michael Wilkinson of Leadership Strategies, on facilitating the strategic planning process. Michael somehow managed to distill the key content of a three-day course into three hours. He introduced us to the Drivers Model, in which stakeholders assess the Current state, characterize their (future) Vision, and identify the Barriers which they must overcome. I was impressed with the comprehensiveness and clarity of the model, which also includes all the following elements (among others):

  • Goals (broad, “infinite”); together, these comprise the Vision
  • Objectives (specific, finite, S.M.A.R.T.) – the results that we must see
  • Critical Success Factors: conditions which we must create
  • Strategies: activities to undertake in service of the Objectives

I am familiar with other strategic planning models; none is quite as detailed and refined as the Drivers Model (it seems to me, at first blush). The distinctions it makes will definitely help me to tailor my strategy-oriented workshops.

As a bonus, Michael also introduced us (those who hadn’t heard the term) to ELMO: “Enough! Let’s Move On.” But Michael’s ELMO was trumped, quite vividly, in the very next session I happened to attend.

ELMO in Banff

Jennifer Bentley and Belinda Honigfort, IT requirements analysts representing Nationwide Insurance, walked us through an expedited IT project launch process Nationwide calls “Rapid Alignment”. Basically, Rapid Alignment ties each phase of review and approval of a project charter (which defines the key elements and parameters of the project, such as scope, approach, and estimated timeframe and cost) to a storyboard, which is presented and discussed, panel-by-panel (or slide-by-slide, when developed using PowerPoint), with the stakeholders. Large-format printouts are used (laid out in sequence on the wall, beforehand), so any changes can be noted directly on the printed panel.

Having first oriented the group to the “rules of engagement” – for example, no smartphone use during the session – a facilitator keeps the process moving, using tools such as ELMO, in which any participant can call for closure on the current discussion topic, if a majority of others agree. (Each of us came away with a laminated ELMO face – one of the best items of swag I can remember!) Once consensus (defined as, “I can live with this and support it”) is achieved on a particular panel (perhaps with noted changes), there is no going back; thus, the “story” has to be clear, complete, and told in a linear fashion.
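For the programmers in the tribe: the ELMO rule, as I understood it, boils down to a simple majority test. Here is a toy sketch in Python – the names and the exact voting rule are my own guesses, not Nationwide's actual process:

```python
def elmo_called(votes, caller):
    """Toy model of the ELMO rule described above: a participant calls
    'Enough! Let's Move On', and the topic closes only if a majority of
    the OTHER participants agree. 'votes' maps participant -> True/False."""
    others = [agree for name, agree in votes.items() if name != caller]
    return sum(others) > len(others) / 2

# Hypothetical example: Dana calls ELMO; three of the four others agree.
votes = {"Dana": True, "Ali": True, "Ben": True, "Cam": True, "Eve": False}
print(elmo_called(votes, "Dana"))
```

The caller's own vote is excluded, since the rule as presented turned on whether the rest of the room wanted to move on.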

Jennifer and Belinda closed by showing us the impressive savings in time and cost Nationwide has achieved in the couple of years it has been using Rapid Alignment. Facilitation pays!

Going Digital

My Friday at IAFNA ended with a session that served as both a tantalizing taste of something new, and a reminder of something I had almost forgotten. Paul Penny of INTJenuity (yes, Paul confirms he’s an INTJ – like me) showed us what Facilitating with Digital Mind Maps is all about.

Mind mapping has long been used as a technique in brainstorming and in making “shared sense” of a complex topic under discussion. Only recently, however, have software and display technologies made mind mapping using a digital tool a feasible option in “real time” facilitation sessions. Using Mindjet MindManager with his Macbook and portable HD projector, Paul showed us what a natural and intuitive format a digital mind map can be for both the design/structure of a facilitated session and its results. Mind maps can be alternately hierarchical and nonlinear, show both summary and details, and can be used both to analyze and to synthesize information.

Workshop design mind map template (copyright INTJenuity)

Personally, I was reminded that I own MindManager – and that I haven’t been making much use of it, lately. Whether or not I eventually find myself using a digital mind mapping tool during a session I am facilitating, I will definitely use it, offline, to organize my own thoughts, learnings, and designs. In fact, while writing this, I have decided to use MindManager to record and relate my takeaways from IAFNA, merging them with the knowledge and skills I already have, in a map of my personal practice of facilitation.

“Magic” and “technology” (redux)

I started my Saturday with Jo Nelson, a “hall of famer” in facilitation circles. This was the first time I had the pleasure of encountering Jo, in person, though I had certainly heard of her through her affiliation with ICA Associates. She gave us all a copy of her “master compendium” of facilitation tools and techniques, as well as the template she uses for “orchestration” of facilitated events (including a box for each “movement” of the “symphony”). Together, these materials comprise a “magic facilitation toolkit”. (Of course, a fair bit of it is branded “Technology of Participation”. Facilitation is both art and science.)

Jo invited us, in our small groups, to “play with” a couple of the 70-odd tools, and to plan a brief (imaginary) facilitated session using the template. This is a good place to state my admiration for my “peer” delegates (most of whom are patently my betters). In every session, I was struck by the tremendous wisdom and talent of my fellow participants, almost all professional facilitators, many of whom had 20 or 30 years of experience. I’m sure it was a pleasure for the presenters, too, to guide groups with such respect for the process and profession of facilitation.

Walking the line; staying the course

Of course, a room full of facilitators is not the typical group of stakeholders in attendance for a workshop. A facilitator has to work with groups and individuals that may be skeptical – or downright distrustful – of the process, the facilitator, or certain other participants or factions within the group. My last two sessions at IAFNA both dealt with the disruptions that threaten facilitated events – and what the facilitator can do about them.

In Walking the Fine Line Between Creativity and Chaos, Lise Hebabi of Intersol presented a framework defining four categories of challenges to an event – Rebellion, Revolution, Power Grab, and Coup – based on whether the challenge is led by an individual or by the group as a whole, and whether the target of the challenge is the process or the facilitator. Lise offered possible strategies (Ignore, Deflect, Acknowledge, Confront, etc.) for dealing with each category of crisis, and had us share stories from our own experience, along with what worked – or didn’t – to get things back on track.

Dr. Rebecca Sutherns entitled her session Staying the Course When Things Go Sideways: Increasing a Facilitator’s Agility. Similar “nightmare scenarios” as in the previous session were raised and discussed; in addition, Rebecca had us consider the non-human factors – e.g. environment, logistics – that can torpedo a session. Rebecca collected our input and sent us a compendium of our best strategies and her own, after the session.

Though they dealt with unpleasant situations, these sessions were imbued with humour. We laughed along with our fellow facilitators – our fellow tribespeople – as they relived being told, “We don’t do sticky notes,” and related the sheer panic they felt when they realized the “room” where they were to conduct their workshop had no walls. (That’s when they discovered that the floor can be a facilitator’s best friend.) Key learning: Anticipate, prepare, and always bring a Plan B.

So: Am I a Facilitator, now – or still an IT Architect? Yes, and yes.

A slice of Pi

It’s been a while – but I’ve always loved to tinker with systems that could be considered a little unusual – or “geeky”. Eight or ten years ago, installing Debian GNU/Linux on my old, spare x86 PC satisfied my hobbyist urges. (I used it as a home Internet gateway, performing network address translation; I was one of the first people I knew with multiple Internet-connected computers at home.) Recently, the Raspberry Pi computer-on-a-card has cropped up, intriguingly, in a non-IT context: its ability to run XBMC – a home-theatre media system.

The idea of setting up a Pi as a “media box” incubated in my brain for a month or so. Finally, a post I read on a hockey blog I frequent (I’m a big sports fan) convinced me that XBMC on Raspberry Pi is likely a viable means of watching the Montreal Canadiens games I will otherwise miss, due to a change in availability of a lot of those games in my cable TV broadcast region. So I came to a decision – and ordered a Raspberry Pi B+ from Allied Electronics’ online store.

My (somewhat hasty) research led me to believe I had everything I needed, besides the Pi, itself. I own a USB keyboard, a TV with an HDMI port and cable, a couple of different USB power sources and USB-to-micro-USB cables, and a 4 GB SD card.

When the Pi arrived, and I unpacked it, I was struck by two things: first, how truly tiny it is – its footprint is about that of a poker-size playing card – and second, that it (the Model B+) accepts a micro-SD card – not an SD card, as was shown in pictures and video I’d seen of the (original, two-year-old) Model B.

Raspberry Pi Model B+ box with MacBook Air (for scale). (The actual Pi card is quite a bit smaller, yet!)

A day – and ten bucks – later, I was ready to set up my media centre system. I had decided, based on reviews and tests – especially those on Anand Subramanian’s excellent blog – to go with the Raspbmc customization of Debian Linux. Preparation of the micro-SD card with a Raspbmc image was much faster than expected; I had read that the writing of the nearly-2 GB image to the card could take “a long time”. I made this part very easy by using ApplePi-Baker for Mac OS.

I plugged in the Ethernet cable, the keyboard, and the HDMI cable, and turned on the TV, before plugging in the USB power source (thus powering-up the Pi, as it has no On/Off switch). Raspbmc’s auto-update scripts kicked in, automatically updating the OS, itself, and then XBMC. Impressive!

Pi B+ at moment of first power-up. I added: USB keyboard (white), Ethernet (dark grey), HDMI (light grey), 5V micro-USB power (black), 8 GB micro-SD card w/ Raspbmc Installer image (on underside, not shown).

My elation quickly turned to dismay, however, when the system kept rebooting itself, displaying the falsely-reassuring message, “Relax; XBMC will restart shortly,” over and over again. I was afraid this behaviour was power-related; the most stressed variable in the troubleshooting advice on the Raspbmc support forum is power: Connect a steady, sufficient power supply. I had initially used my iPad 4’s power source, rated at 5.2 volts and 2.4 amperes; the Pi requires only 5.0V and will draw a maximum of perhaps 1.5A, depending on connected peripherals – and the advice on various Pi-related sites seemed to be to make sure the Pi is not underpowered. But I was suddenly afraid I had overloaded my Pi and “fried” one of its components. The message I got when I escaped to the Linux shell command line and typed “xbmc” was, “Install an appropriate graphics driver.” Yikes. I tried connecting the Pi to my other power supply – my Samsung smartphone charger – which, by all accounts, is barely sufficient to run the Pi with little or nothing connected; however, the continuous rebooting continued.

After poking around for a while on the forums, reading about others’ experiences with the Raspbmc “relax loop” (heh!), I came upon a post mentioning a “corrupted SD image”. I decided to reinstall – and to use an alternate method: the “Network Installer” image, which the Raspbmc site cites as “recommended”. (My first go-round had used the full-distribution image download link.)
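In hindsight, a corrupted image is easy to catch before flashing: compare the downloaded file’s checksum against the one published on the download page. A quick sketch – the file name and the idea of a published SHA-256 sum are illustrative, not specific to Raspbmc:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a (potentially large) image file in 1 MB chunks, so we
    never load the whole multi-GB image into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: 'expected' would come from the project's download page.
# if sha256_of("raspbmc-full.img") != expected:
#     print("Image is corrupt; re-download before writing it to the card.")
```

Thirty seconds of hashing could have saved me an evening of “relaxing”.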

Success! Once more, impressive, automatic download-and-install scripts kicked in, the moment the Pi booted up. And this time, the automation came to an end – with the glorious, 1080p graphical user interface of XBMC version 13.2 (Gotham). (By the way, I am no longer worried about using my iPad power supply with the Pi; it has been operating beautifully, giving no sign of overheating.)

XBMC System Info (screen photo of Toshiba 46XF550U)

XBMC is a sophisticated platform that will require some exploring; perhaps, once I have it somewhat under my belt, I’ll write more about it, here. For now, let me conclude by saying that the Raspberry Pi is a boon to learners, tinkerers, and makers, everywhere! I know I’ll do a lot more with it than watch hockey.

Toward empirical IT governance

Overheard in a discussion among presumed Enterprise Architects: “I know that’s the principle – but what’s the standard?”

That took me back – back to the heady days I spent facilitating workshops designed to capture principles that “ought to” govern the IT development process (including definition, design and sustainment) at my then-employer. I’m sure it comes as no surprise that my title was Enterprise Architect.

The process was exhilarating; I found I loved designing and facilitating the workshops, just as I had enjoyed participating in similar workshops as a subject-matter expert. Reflecting on it, now, however, I question the premise – and a cherished premise, it is – of the primacy of principles. After all, we were capturing the opinions of individuals asked, essentially, to predict the future (not in detail, but in general).

One could depict the model (while lampooning it, a bit, to be sure) using the following diagram:


The “right” set of broad Principles influences Standards – by their nature, more specific, focused and “actionable” – which are reflected, in turn, in the IT Systems & Processes of the organization. This isn’t ridiculous or hopeless – but, when it operates in just one direction, top-down, it’s a bit like balancing a pyramid on its point.

Instead of, or in addition to, the abstract approach reflecting Enterprise Architecture, we need a complementary process reflecting Computer Science – with the emphasis on “science”.

The scientific method is more bottom-up than top-down. It upholds the primacy of empirical evidence – what’s actually happening. The scientist may start with a hypothesis – but can just as easily start with direct investigation and experimentation.

(Computer) Science

As depicted above, the scientific method accepts only Empirical Evidence as “gospel”, establishing a body of Theory only to the extent that it assists in understanding or explaining why we observe what we observe, and not something else. At the top of this pyramid teeter extremely powerful, yet fragile, Laws. A Law predicts the future – often, very precisely – but must be abandoned (or radically reformed) the moment we find its predictions to be misleading.

Almost every scientific discipline has discovered Laws, and Computer Science is no exception; consider Moore’s Law – a good example not only because of its renown and the reliability of its predictions (so far), but also because of its limitations, and because it not only predicts future outcomes but influences them at their source (in this case, the behaviour of the organizations that create the very artifacts comprising the domain of Moore’s Law, itself).
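The predictive form of such a law fits in a one-liner. Moore’s Law – transistor counts doubling roughly every two years – projects forward like this:

```python
def moores_law(n0, years, doubling_period=2.0):
    """Project a transistor count forward under Moore's Law:
    one doubling every ~2 years."""
    return n0 * 2 ** (years / doubling_period)

# From ~2,300 transistors (Intel's 4004, 1971), forty years of doubling
# every two years lands in the low billions -- the right order of
# magnitude for CPUs circa 2011.
print(round(moores_law(2300, 40)))
```

The point is the fragility, too: the day the projection stops matching fabricated chips, the Law (in this form) must be reformed or abandoned.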

If we were (informally; we’re technologists, not scientists, by trade) to “do” Computer Science within the domain of our own IT organization, what would that look like?

First, we’d gather the empirical evidence. What is actually happening in our shop? How do we evaluate proposals, green-light projects, select and implement systems? What are our support and sustainment processes (both heretofore documented and – especially – undocumented)?

Next, we’d try to articulate why things work the way they do in our shop. What are the general themes and trends we’re seeing? Can we spot commonalities among “data points” we would have assumed were unrelated? If so, what do we see when we extrapolate? Does the resulting “curve” fit the “data”?

If so, we’ve hit upon theory that fits our observations. We may even articulate a (tentative) law. For example, we may discover that – as counterintuitive as it seems – our support team’s costs seem pretty much fixed; that they vary hardly at all with the urgency and severity of the “tickets” (reported issues) to which those costs are allocated. Intriguing! It would seem the “Law of Fixed Support Costs” applies in our domain.

That such a law would apply might naturally disturb us – and we might launch a deeper inquiry into what it is about our support processes that doesn’t “care” about urgency or severity. Perhaps we’ll find that the phenomenon is an illusion – a side-effect of “bad science” on the part of those who report issues and/or enter support tickets; we may find that nearly every ticket is entered with a high urgency, for example. Conversely, we may find that our shop, by its nature, just works that way; perhaps we’re very small, with most issues requiring the attention of the same support people, who have to consult the appropriate subject matter experts in the resolution of nearly every ticket – resulting in a fairly high and invariant overhead. In this case, the discovery of the “Law of Fixed Support Costs” might inspire us to sign a fixed-cost support contract with a service provider.

To sum up: Top-down Enterprise Architecture may have its merits, and ideally can influence behaviour – but (Computer) Science (performed validly) will reflect more reliably what’s really going on. Use a Computer Science-based approach as a check against the “solution in search of a problem”.

“Turing” the landscape of the Imitation Game

A chatbot designed to respond (in English) like a 13-year-old Ukrainian boy (with limited English skills) was recently reported to have passed the Turing Test. Many commentators were quick to demonstrate that the ‘bot emphatically did not – and cannot – pass any version of the Turing Test having any meaningful connection to intelligence.

In my view, the chatbot, “Eugene Goostman”, merely entered the wrong competition. If it were to enter a competition in which every entrant was introduced as a 13-year-old from Ukraine, and all entrants had been either programmed or coached to impersonate a 13-year-old from Ukraine, we could more validly compare its results to that of its competitors. In the Reading University competition, however, other entrants were (tacitly) assigned different roles – roles arguably more difficult to pull off, such as “full-grown, educated, native English-speaking person.”

When Alan Turing famously proposed that the Imitation Game be used as the “gold standard” for machine intelligence, he did so with a challenging version of the game in mind – the version that humans play. To be considered decently good at the Imitation Game, a player (say, a man or a computer program charged with imitating a woman) would have to prove indistinguishable from “the genuine article” (a variety of actual woman players) a great deal of the time. The perfect “female impersonator” would be able to convince an impartial judge just as often, on average, as a woman can. (Bear in mind that even a genuine woman will be mistaken for a non-woman pretending to be a woman, a certain percentage of the time – especially by a judge wary of being duped.)

If a computer program could convince a panel of skeptical judges that it is a person just as often as the average person could do, that program would be a Master Im-person-ator – and indisputably intelligent.

That’s a big “if”. The best we can (generously) allow, at present, is that a computer program may now perform adequately well at an extremely limited and “dumbed-down” version of the Turing Test. We might envision the space of all Turing-Related Tests as a plane, of which a few tiny slivers – representing such roles as “ignorant, sassy teenager” and “abusive, paranoid weirdo” – are coloured either yellow or a very faint shade of green. All other “tiles” in the plane – “full-grown, educated, native English-speaking person”, “woman”, “award-winning journalist”, etc. –  are either red or uncoloured.
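The pass-rate criterion above – a program “masters” the game only if it convinces skeptical judges about as often as genuine humans do – reduces to a trivial comparison. The rates and tolerance below are invented purely for illustration:

```python
def passes_imitation_game(program_rate, human_rate, margin=0.05):
    """A program 'masters' the Imitation Game if judges accept it as
    human about as often as they accept actual humans (within a margin)."""
    return program_rate >= human_rate - margin

# Invented numbers: even genuine humans get doubted some of the time
# by a judge wary of being duped.
human_baseline = 0.70  # judges accept a real person 70% of the time
chatbot = 0.33         # a role-restricted chatbot's rate, by contrast
print(passes_imitation_game(chatbot, human_baseline))
```

Note that the baseline itself has to be measured, against the same panel of judges – that is what makes the full version of the game so demanding.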

Drawing blood

When I read an article that explained the Heartbleed bug, clearly and simply, I had an epiphany: Vulnerabilities in systems are revealed by simple prodding.

You may have believed, as I did, that hacks are deep and ingenious – proprietary to uber-geeks. Based on Heartbleed, however, my intuition now tells me that most technical hacks are discovered through the most elementary of experimental techniques: Apply a stimulus to the subject, and see if/how it reacts. When the subject is a “dumb” piece of software, one may not even have to guard against its “waking up” and raising an alarm.

Hacking people is usually a bit more subtle – but it doesn’t have to be, if the hacker doesn’t care that his mark knows he’s being hacked. Vladimir Putin is proving himself to be a master of this technique, which requires more brio than brains. Through poking the anti-Bear, he gathers invaluable information; basically, he learns what he can get away with.

The hacker (or “cybersecurity engineer”) prods the armour of networks and systems, sometimes with shockingly blunt instruments – and often finds that armour full of holes.

Donning a white hat, let me say that I have long been a proponent of automated testing of information systems. No one enjoys bleeding; let’s have our robots poke at our armour, randomly and thoroughly, and then patch its holes before we wear it in battle.
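That robotic poking has a name: fuzz testing – throw random, malformed input at a component and watch for crashes. A minimal sketch against a deliberately buggy toy parser (nothing here is real OpenSSL code; the “protocol” and its length-byte bug are invented to echo Heartbleed’s shape):

```python
import random

def fragile_parse(data: bytes):
    """A deliberately buggy toy parser: it trusts a claimed length byte,
    Heartbleed-style, and 'reads' past the end when the claim is a lie."""
    claimed_len = data[0]
    payload = data[1:]
    if claimed_len > len(payload):
        raise IndexError("read past end of payload")  # the 'bleed'
    return payload[:claimed_len]

random.seed(1)  # fixed seed, so the poke sequence is reproducible
failures = 0
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
    try:
        fragile_parse(blob)
    except Exception:
        failures += 1  # each crash is a lead for the armour-patchers
print(f"{failures} of 1000 random pokes drew blood")
```

No ingenuity required: the stimulus is random noise, and the holes announce themselves.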

Business Intelligence: From “D’uh” to “Aha!”

In a previous post, I offered an operational definition of a “smart” system, positing that if the output of a system is:

  • clear,
  • accurate,
  • timely,
  • consistent,
  • trusted, and
  • pertinent; and therefore
  • more valuable than the input

…then the system is (relatively) “smart.” I’m hoping that’s fairly easy to buy, at least provisionally; after all, only a pretty stupid system would produce output that was ambiguous, inaccurate, outdated, self-contradictory, irrelevant, or of no more value than its input.

The logicians out there might observe that a smart system, as characterized above, is a lot like a valid argument: it is “truth-preserving.” If the premises (input) of a valid argument are true, its conclusion (output) must be true. We demand a bit more than that from our smart system, however; we require that its output not make us say “d’uh!” In order to be more valuable than its input, a smart system’s output must be, to some extent, unobvious; it must elicit more of an “aha!” The smart system is like the clever, valid argument. (Gödel’s Theorem, however, is too clever for any system. Just a bit less clever than that will do.)

Let’s look at a particular class of information system – the “Business Intelligence” (BI) system. Its name, at least, seems to promise “smarts”. Of course, “intelligence”, in this context, refers more to the output of a successful process – an investigation, perhaps – than to the nature of the process, itself. Consider military intelligence: extremely valuable information that “our side” came by through a process of putting together a bunch of seemingly unrelated and uninteresting scraps of data. Traditionally, the “putting together” – the process – is organic to a group of clever human beings, who analyze, organize and scan the data for patterns – the “signal” hidden in the “noise”. Increasingly, however, that raw, relatively uninteresting data is fed into automated systems that store, characterize and index it, and assist the human investigators greatly in their analytic process – for example, by producing graphical representations of the data (“visualizing” it). Appropriately, such systems, as a class, are called “analytics”. “BI” is a term often used interchangeably with “analytics” in referring to such systems, when used in search of business-oriented “intelligence”.
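To make the “signal hidden in the noise” concrete, here is a toy sketch: individually boring records, aggregated into a small “aha”. The data is invented:

```python
from collections import Counter

# Hypothetical raw 'scraps': individually uninteresting support events.
events = [
    ("Mon", "login-fail"), ("Tue", "timeout"), ("Fri", "login-fail"),
    ("Fri", "login-fail"), ("Wed", "timeout"), ("Fri", "login-fail"),
    ("Thu", "login-fail"), ("Fri", "timeout"), ("Fri", "login-fail"),
]

# The 'putting together': count login failures by day, exposing a spike.
by_day = Counter(day for day, kind in events if kind == "login-fail")
day, n = by_day.most_common(1)[0]
print(f"Aha: {n} of {sum(by_day.values())} login failures happen on {day}")
```

A real analytics system does this at vastly greater scale, and then draws you the picture – but the essential move is the same: from scraps to pattern.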

To wrap up a post dedicated to the unobvious, let me (rather perversely) channel Captain Obvious: I would rather work toward the “aha” than the “d’uh.” Rather than automating a filing cabinet (although businesses do need more secure, accessible file space than ever before), I would rather build a system that can – by virtue of being “smart” – sift through a mountain of facts, identify and illustrate patterns, and arrive at a modicum of intelligence.

Intelligent systems: Dimensions

Via social media, I received a frank and pithy comment on my previous post (Machine intelligence: Scary? Necessary.):

Machine Intelligence? Dream on. Machines can only really appear intelligent to humans who are not.

In pondering how to respond, I realized I was a bit intellectually lazy when I implied that there might be some “threshold of intelligence.” To the contrary, intelligence is, to my mind, a continuum. Any system (including biological systems) that exhibits any sort of variable behavior in response to variable data (including stimuli) has non-zero intelligence. An earthworm, for example, is obviously very “stupid” in comparison with many creatures – but it is capable of cognition, as a function of its evolutionary “programming”. The algorithms which govern its behaviour are sophisticated and cannot be easily deconstructed – nor (yet) duplicated by human programmers of non-biological systems (“robotic worms”, if such things existed).

I also want to avoid depicting intelligence as a scalar quantity. To try to measure the conventional IQ of a computer system – or any non-human system – is folly. (The value of IQ has been challenged even as a measure of relative human intelligence – but that’s another discussion.) When we think of an intellect – that of a worm, a person, or a cybernetic system – as a “shape” having at least two dimensions – let’s call them breadth and depth – we arrive at a somewhat more vivid and meaningful basis for comparison.

IBM Watson (the example from my previous post) exhibits an impressive breadth of intellect, given that it is designed to process English-language text that might relate to any topic. In depth, however, it reveals its “stupidity”. For example, Watson is not designed to be original or creative, whatsoever; it is designed to play Jeopardy, to which originality would be, if anything, a disadvantage. Watson’s “knowledge” of any particular subtopic will be revealed to be woefully inconsistent and brittle, upon probing. It’s quite possible – even likely – that Watson will answer several advanced questions on a given topic, successfully – but then come up clueless on what human experts would agree is a basic question. Of course, that’s because Watson is incapable of recognizing and assimilating the core body of knowledge on a topic, distinguishing the fundamental laws from the “esoterica”. It has no deep understanding of any topic, and grasps no themes or theories that might allow it to come up with an answer by extension or analogy. We could quite aptly characterize Watson’s intellect as “a mile wide and an inch deep.”

For an example of an information system having quite the opposite “intellectual dimensions” to IBM Watson – that is, extremely limited breadth, but significant (and, I find, impressive) depth – see Copycat, designed by the Fluid Analogies Research Group, headed by Douglas R. Hofstadter, at the Center for Research on Concepts and Cognition, Indiana University, Bloomington. Copycat is a system designed to generate solutions to a certain kind of analogy-based problem, and to do so in a way that resembles the human process (as self-reported by human solvers). Here’s an example of Copycat being “smart”:

Human “says” (via coded terms): “I turn the string abc into abd. Copycat, you do the same with xyz.”

Copycat “answers”: “I turn xyz into wyz.” [Like most humans, Copycat “prefers” this answer to wxz, or any other.]
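For fun, here is a toy sketch that reproduces Copycat’s preferred answer. To be clear, this is nothing like Copycat’s actual fluid-concepts architecture – it just hard-codes one rule, plus the mirrored “escape” at the alphabet’s edge that humans (and Copycat) tend to prefer:

```python
def analogy(src, dst, target):
    """Toy letter-string analogy: if src->dst bumps the LAST letter to its
    successor, do the same to target -- unless that letter is 'z', which has
    no successor; then mirror the rule and bump the FIRST letter to its
    predecessor (the answer Copycat 'prefers')."""
    if dst == src[:-1] + chr(ord(src[-1]) + 1):  # recognize 'successor of last'
        if target[-1] != "z":
            return target[:-1] + chr(ord(target[-1]) + 1)
        return chr(ord(target[0]) - 1) + target[1:]  # mirrored at the edge
    return None  # this toy recognizes no other rule

print(analogy("abc", "abd", "ijk"))  # -> ijl
print(analogy("abc", "abd", "xyz"))  # -> wyz
```

The interesting part of Copycat is precisely what this toy lacks: it discovers such mappings, and their mirror-symmetric escapes, rather than having them spelled out.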

As a final example (or class of examples), let’s consider “crowdsourced” knowledge bases. Arguably, these systems are highly intelligent (by the terms laid out, herein), able to marry the capabilities of data/knowledge management software to store and index a comprehensive library of facts (breadth) with the “wisdom of the crowd”, inherent in aspects of human intelligence, intuition and social interaction – providing depth. One could convincingly argue that the overwhelming lion’s share of such a system’s intellect is due to the contribution of the crowd of humans – but then, the “one” making that argument would presumably be a human, and subject to bias. Suffice it to say that the results (outputs) of such a system – assuming the non-human elements are cleverly designed – are likely to be far “smarter” than the results of a crowd of humans thrown together in a room and asked to draw conclusions on a given topic, absent any “machine augmentation” of their collective intellect.

To sum up my reply to my critic: It doesn’t matter so much whether a given system is relatively “smart” or “stupid”, as it matters that systems can possess intelligence – and that not all systems possessing intelligence are 100% biological.

Next topic: So-called “Business Intelligence” (in capitals, no less!). Stay tuned!

Machine intelligence: Scary? Necessary.

We owe a lot of rich drama and philosophy to the concept of machine intelligence: from the pure horror (or, uh, campy comedy?) that robot overlords inspire, through Asimov’s deep probing of the nature of humanity, to the idea of the Singularity – rephrased as “Transcendence” in the upcoming movie of that name.

But what does an intelligent machine necessarily “transcend” (other than “the threshold of intelligence” – which is tautological)? Mechanisms – profoundly complex mechanisms, to be sure, that elude our current understanding – underlie human intelligence. Indeed, the human brain is routinely likened to a wonderful machine. Why should a machine that succeeds in performing a feat we normally associate with intelligence – such as consuming data from a wide variety of sources, “remembering” (storing) that data in a form it can readily access and correlate, and using what it has “learned” to arrive at a likely answer to a question of fact – not be characterized as intelligent?
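That consume–remember–correlate–answer loop can be sketched in a few lines. (A hypothetical toy, of course – no real product works at this scale – with the class and method names being my own invention.)

```python
from collections import defaultdict

class FactStore:
    """Toy 'intelligent machine': it consumes subject-predicate-object
    facts, 'remembers' them in an indexed form, and uses what it has
    'learned' to answer questions of fact."""

    def __init__(self):
        # Index keyed on (subject, predicate) for ready access.
        self.index = defaultdict(set)

    def learn(self, subject, predicate, obj):
        """Consume and store one fact."""
        self.index[(subject, predicate)].add(obj)

    def ask(self, subject, predicate):
        """Answer a question of fact from what has been learned."""
        return sorted(self.index.get((subject, predicate), set()))

store = FactStore()
store.learn("Watson", "built-by", "IBM")
store.learn("Copycat", "built-by", "FARG")
print(store.ask("Watson", "built-by"))  # ['IBM']
```

Trivial, yes – but every behaviour in it (ingesting, remembering, answering) is one we would unhesitatingly call intelligent if a person did it.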

I submit that many “machines” (human-designed systems) display intelligence, today. To be sure, it is a far less flexible and remarkable intelligence than you, dear human, have displayed in arriving at this blog post and (I hope) making sense of it, in the context of your own background, interests, and motivations. It is also far less mysterious. I believe it is merely the fact that we do not (yet) understand how we are so intelligent that makes most of us believe – nay, insist – that we are far more intelligent than anything we design shall ever be.

Whatever you believe, as an information technology professional I trust you will agree that the “smarter” the system we can create for our client, the better. Let’s further agree that the output of a “smart” system is:

  • clear,
  • accurate,
  • timely,
  • consistent,
  • trusted, and
  • pertinent; and therefore
  • more valuable than the input.

That sets the bar nice and low for “machine intelligence” – or does it?

In future posts, I’ll share some of the best practices I’ve discovered for creating and maintaining intelligent “information machines”. Meanwhile, if this post inspires or provokes you, I’m sure you’ll let me know about it!

Raison d’être

Hello, World! 😉

My name is Sean Konkin, and I am an information technology professional (read: IT Guy). Through 25 years of trials and tribulations (read: tests, defects and rework) I have earned the right to call myself an architect.

I was immersed in the culture of oil & gas business systems for a long time – most of the last 20 years, in fact – and have recently resurfaced. I was laid off from my job as an IT Architect for a large energy company, and am taking this opportunity to look all around me, instead of straight ahead along the path of corporate strategy. The information technology landscape has changed a lot, from the viewpoint of the professional, since the last time I found myself at this sort of crossroads!

For one thing, it’s all around us. We (as workers and consumers) no longer walk to a single room in our houses or allocate a single compartment in our minds to the means of “going online”. We live online – unless we consciously and effortfully opt out. We keep a growing portion of ourselves in the so-called cloud. The Internet programs us. As professionals, we are expected somehow to stay ahead of the Internet, in terms of the risks and opportunities it presents to our employers and clients. No small feat!

Another huge shift is a function of both the development of IT and my own development as a professional: the meaning of “development” (in a systems, rather than personal or industry, sense). I got my start in the industry just at the point the title “Programmer” was giving way to “Developer” – but before the difference was anything more than aesthetic. I remember the developers’ bullpen in the first company I worked for: a darkened half-floor of cubicles stacked with empty Jolt Cola cans, occupied by jeans-clad Unix programmers (all male, of course). That group of “developers” was completely and intentionally different from the rest of the company and its clients. Their job function, Systems Development (really, programming), was mysterious, requiring a completely different set of skills from that of anyone else at the company – and seemingly a completely different personality. IT Guys were geeks – and proud of it.

Fast-forward 20 years: I’m an “Architect”. I love to program (or “code”; it’s now a verb) – but the task is completely different and much less mysterious than it used to be. How often does a developer write a single program, in a single language, to solve a problem, anymore? Proficiency in languages is still important, as is understanding the problem – but more important is the ability to see, communicate, and build a complex solution out of the various components at hand, using available tools. (I realize that’s a value statement, and certainly arguable; I’d love to elucidate and defend it; perhaps that will become part of the raison d’être of this blog.) My main point, here, is that I now find myself in an environment where programming and other highly technical, specialized skills are secondary, and to a certain extent taken “as read”; the defining essence of IT systems development proficiency is inherent in much more nebulous attributes, such as abstraction, modelling, vision, and synthesis.

I am very curious as to how today’s IT landscape looks from the perspective of other “IT Guys” (inclusive of “Gals”, of course). If you’re another 40- or 50-something veteran, I’d love to commiserate! If you’re just starting out, your perspective is extremely valuable, untrammelled by the baggage we old hackers carry. If, through my posts and your comments (you’ll need to Register, at left), we can better understand and occupy our place as leaders in information technology, I’ll be most gratified.