Drawing blood

When I read an article that explained the Heartbleed bug, clearly and simply, I had an epiphany: Vulnerabilities in systems are revealed by simple prodding.

You may have believed, as I did, that hacks are deep and ingenious – proprietary to uber-geeks. Based on Heartbleed, however, my intuition now tells me that most technical hacks are discovered through the most elementary of experimental techniques: Apply a stimulus to the subject, and see if/how it reacts. When the subject is a “dumb” piece of software, one may not even have to guard against its “waking up” and raising an alarm.

Hacking people is usually a bit more subtle – but it doesn’t have to be, if the hacker doesn’t care that his mark knows he’s being hacked. Vladimir Putin is proving himself to be a master of this technique, which requires more brio than brains. By poking the anti-Bear, he gathers invaluable information; basically, he learns what he can get away with.

The hacker (or “cybersecurity engineer”) prods the armour of networks and systems, sometimes with shockingly blunt instruments – and often finds that armour full of holes.

Donning a white hat, let me say that I have long been a proponent of automated testing of information systems. No one enjoys bleeding; let’s have our robots poke at our armour, randomly and thoroughly, and then patch its holes before we wear it in battle.
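The robot-poking I have in mind is essentially fuzz testing: throw random inputs at a target and watch for the ones that draw blood. Here is a minimal sketch in Python; the `parse_record` function and its Heartbleed-style bug (blindly trusting a claimed length) are inventions of mine, standing in for whatever system is under test:

```python
import random

def parse_record(data: bytes) -> str:
    """Hypothetical parser standing in for the system under test.
    A real target would be a network service or file parser."""
    if len(data) < 2:
        raise ValueError("record too short")  # graceful rejection
    claimed_len = data[0]
    payload = data[1:]
    # The planted bug (Heartbleed-style): the claimed length is trusted
    # without checking it against the payload actually supplied.
    if claimed_len > len(payload):
        raise IndexError("over-read past end of payload")
    return payload[:claimed_len].hex()

def fuzz(trials: int = 1000, seed: int = 0) -> list[bytes]:
    """Apply random stimuli to the target and collect those that wound it."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            parse_record(data)
        except IndexError:      # the armour failed: record the stimulus
            failures.append(data)
        except ValueError:      # the armour held: input rejected cleanly
            pass
    return failures
```

Even this crude, entirely random prodding finds the hole almost immediately; real fuzzers add coverage feedback and input mutation, but the elementary stimulus-and-response principle is the same.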

Business Intelligence: From “D’uh” to “Aha!”

In a previous post, I offered an operational definition of a “smart” system, positing that if the output of a system is:

  • clear,
  • accurate,
  • timely,
  • consistent,
  • trusted, and
  • pertinent; and therefore
  • more valuable than the input

…then the system is (relatively) “smart.” I’m hoping that’s fairly easy to buy, at least provisionally; after all, only a pretty stupid system would produce output that was ambiguous, inaccurate, outdated, self-contradictory, irrelevant, or of no more value than its input.

The logicians out there might observe that a smart system, as characterized above, is a lot like a valid argument: it is “truth-preserving.” If the premises (input) of a valid argument are true, its conclusion (output) must be true. We demand a bit more than that from our smart system, however; we require that its output not make us say “d’uh!” In order to be more valuable than its input, a smart system’s output must be, to some extent, unobvious; it must elicit more of an “aha!” The smart system is like the clever, valid argument. (Gödel’s Theorem, however, is too clever for any system. Just a bit less clever than that will do.)
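The contrast can be made concrete. Below is a toy sketch of my own devising: both systems are truth-preserving, but one merely restates its premises ("d'uh"), while the other derives unobvious facts from them ("aha") by computing the transitive closure of a hypothetical "reports_to" relation:

```python
def duh_system(premises: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Truth-preserving but worthless: the output restates the input."""
    return set(premises)

def aha_system(premises: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Also truth-preserving, but it derives facts not explicit in the
    input: the transitive closure of a 'reports_to' relation."""
    facts = set(premises)
    while True:
        derived = {(a, d) for a, b in facts for c, d in facts if b == c}
        new = derived - facts
        if not new:
            return facts
        facts |= new

# Premises: ann reports to bob, bob to cara, cara to dev.
chain = {("ann", "bob"), ("bob", "cara"), ("cara", "dev")}
# aha_system also concludes, unobviously, that ann reports (indirectly)
# to dev – a fact true whenever the premises are, but stated nowhere.
```

If the premises are true, every derived pair is true; the "smart" system has added value without sacrificing validity.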

Let’s look at a particular class of information system – the “Business Intelligence” (BI) system. Its name, at least, seems to promise “smarts”. Of course, “intelligence”, in this context, refers more to the output of a successful process – an investigation, perhaps – than to the nature of the process itself. Consider military intelligence: extremely valuable information that “our side” came by through a process of putting together a bunch of seemingly unrelated and uninteresting scraps of data.

Traditionally, the “putting together” – the process – is organic to a group of clever human beings, who analyze, organize and scan the data for patterns – the “signal” hidden in the “noise”. Increasingly, however, that raw, relatively uninteresting data is fed into automated systems that store, characterize and index it, and assist the human investigators greatly in their analytic process – for example, by producing graphical representations of the data (“visualizing” it). Appropriately, such systems, as a class, are called “analytics”. “BI” is a term often used interchangeably with “analytics” in referring to such systems, when used in search of business-oriented “intelligence”.
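Finding the “signal” hidden in the “noise” often comes down to something statistically humble. A sketch, using invented data – a pile of individually uninteresting failed-login records – in which one source stands out from the background:

```python
from collections import Counter
from statistics import mean, pstdev

def find_signal(records: list[str], threshold: float = 2.0) -> list[str]:
    """Tally raw, individually uninteresting records by source and
    flag any source whose count stands well clear of the noise
    (more than `threshold` standard deviations above the mean)."""
    counts = Counter(records)
    mu = mean(counts.values())
    sigma = pstdev(counts.values())
    if sigma == 0:
        return []  # perfectly uniform data: no signal to surface
    return [src for src, n in counts.items() if (n - mu) / sigma > threshold]

# Hypothetical raw scraps: background noise plus one persistent offender.
noise = [f"host{i}" for i in range(20) for _ in range(3)]
signal = ["host99"] * 40
print(find_signal(noise + signal))  # → ['host99']
```

No single record in the pile is interesting; the value – the intelligence – emerges only from aggregation, which is precisely what analytics systems automate at scale.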

To wrap up a post dedicated to the unobvious, let me (rather perversely) channel Captain Obvious: I would rather work toward the “aha” than the “d’uh.” Rather than automating a filing cabinet (although businesses do need more secure, accessible file space than ever before), I would rather build a system that can – by virtue of being “smart” – sift through a mountain of facts, identify and illustrate patterns, and arrive at a modicum of intelligence.