Today is a worldwide day of action for Internet freedom. I can’t add much to the eloquence and passion of the founder of Reddit. Please read and share:
It’s been a while – but I’ve always loved to tinker with systems that could be considered a little unusual – or “geeky”. Eight or ten years ago, installing Debian GNU/Linux on my old, spare x86 PC satisfied my hobbyist urges. (I used it as a home Internet gateway, performing network address translation; I was one of the first people I knew with multiple Internet-connected computers at home.) Recently, the Raspberry Pi computer-on-a-card has cropped up, intriguingly, in a non-IT context: its ability to run XBMC – a home-theatre media system.
The idea of setting up a Pi as a “media box” incubated in my brain for a month or so. Finally, a post I read on a hockey blog I frequent (I’m a big sports fan) convinced me that XBMC on Raspberry Pi is likely a viable means of watching the Montreal Canadiens games I will otherwise miss, due to a change in availability of a lot of those games in my cable TV broadcast region. So I came to a decision – and ordered a Raspberry Pi B+ from Allied Electronics’ online store.
My (somewhat hasty) research led me to believe I had everything I needed, besides the Pi, itself. I own a USB keyboard, a TV with an HDMI port and cable, a couple of different USB power sources and USB-to-micro-USB cables, and a 4 GB SD card.
When the Pi arrived, and I unpacked it, I was struck by two things: first, how truly tiny it is – its footprint is about that of a poker-size playing card – and second, that it (the Model B+) accepts a micro-SD card – not an SD card, as was shown in pictures and video I’d seen of the (original, two-year-old) Model B.
A day – and ten bucks – later, I was ready to set up my media centre system. I had decided, based on reviews and tests – especially those on Anand Subramanian’s excellent blog – to go with the Raspbmc customization of Debian Linux. Preparation of the micro-SD card with a Raspbmc image was much faster than expected; I had read that the writing of the nearly-2 GB image to the card could take “a long time”. I made this part very easy by using ApplePi-Baker for Mac OS.
I plugged in the Ethernet cable, the keyboard, and the HDMI cable, and turned on the TV, before plugging in the USB power source (thus powering-up the Pi, as it has no On/Off switch). Raspbmc’s auto-update scripts kicked in, automatically updating the OS, itself, and then XBMC. Impressive!
My elation quickly turned to dismay, however, when the system kept rebooting itself, displaying the falsely-reassuring message, “Relax; XBMC will restart shortly,” over and over again. I was afraid this behaviour was power-related; the most stressed variable in the troubleshooting advice on the Raspbmc support forum is power: Connect a steady, sufficient power supply. I had initially used my iPad 4’s power source, rated at 5.2 volts and 2.4 amperes; the Pi requires only 5.0V and will draw a maximum of perhaps 1.5A, depending on connected peripherals – and the advice on various Pi-related sites seemed to be to make sure the Pi is not underpowered. But I was suddenly afraid I had overloaded my Pi and “fried” one of its components. The message I got when I escaped to the Linux shell command line and typed “xbmc” was, “Install an appropriate graphics driver.” Yikes. I tried connecting the Pi to my other power supply – my Samsung smartphone charger – which, by all accounts, is barely sufficient to run the Pi with little or nothing connected; however, the continuous rebooting continued.
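For what it’s worth, the ratings above can be sanity-checked with grade-school arithmetic (watts = volts × amperes). A quick sketch, using only the figures quoted in this post:

```python
# Rough power-budget check for the Pi, using the ratings quoted above.
def watts(volts, amps):
    return volts * amps

ipad_supply = watts(5.2, 2.4)   # iPad 4 charger: about 12.5 W available
pi_max_draw = watts(5.0, 1.5)   # Pi worst case, peripherals attached: 7.5 W

headroom = ipad_supply - pi_max_draw
print(f"Supply: {ipad_supply:.1f} W, Pi max draw: {pi_max_draw:.1f} W, "
      f"headroom: {headroom:.2f} W")
```

By wattage alone, the iPad charger has ample headroom; the Pi’s real sensitivity (as the forums stress) is to voltage sag under load, which raw nameplate ratings don’t capture.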
After poking around for a while on the forums, reading about others’ experiences with the Raspbmc “relax loop” (heh!), I came upon a post mentioning a “corrupted SD image”. I decided to reinstall – and to use an alternate method: the “Network Installer” image, which Raspbmc.com cites as “recommended”. (My first go-round used the Raspbmc full distribution image download link on RaspberryPi.org.)
Success! Once more, impressive, automatic download-and-install scripts kicked in, the moment the Pi booted up. And this time, the automation came to an end – with the glorious, 1080p graphical user interface of XBMC version 13.2 (Gotham). (By the way, I am no longer worried about using my iPad power supply with the Pi; it has been operating beautifully, giving no sign of overheating.)
XBMC is a sophisticated platform that will require some exploring; perhaps, once I have it somewhat under my belt, I’ll write more about it, here. For now, let me conclude by saying that the Raspberry Pi is a boon to learners, tinkerers, and makers, everywhere! I know I’ll do a lot more with it than watch hockey.
Overheard in a discussion among presumed Enterprise Architects: “I know that’s the principle – but what’s the standard?”
That took me back – back to the heady days I spent facilitating workshops designed to capture principles that “ought to” govern the IT development process (including definition, design and sustainment) at my then-employer. I’m sure it comes as no surprise that my title was Enterprise Architect.
The process was exhilarating; I found I loved designing and facilitating the workshops, just as I had enjoyed participating in similar workshops as a subject-matter expert. Reflecting on it, now, however, I question the premise – and a cherished premise, it is – of the primacy of principles. After all, we were capturing the opinions of individuals asked, essentially, to predict the future (not in detail, but in general).
One could depict the model (while lampooning it, a bit, to be sure) using the following diagram:
The “right” set of broad Principles influences Standards – by their nature, more specific, focused and “actionable” – which are reflected, in turn, in the IT Systems & Processes of the organization. This isn’t ridiculous or hopeless – but, when it operates in just one direction, top-down, it’s a bit like balancing a pyramid on its point.
Instead of, or in addition to, the abstract approach reflecting Enterprise Architecture, we need a complementary process reflecting Computer Science – with the emphasis on “science”.
The scientific method is more bottom-up than top-down. It upholds the primacy of empirical evidence – what’s actually happening. The scientist may start with a hypothesis – but can just as easily start with direct investigation and experimentation.
As depicted above, the scientific method accepts only Empirical Evidence as “gospel”, establishing a body of Theory only to the extent that it assists in understanding or explaining why we observe what we observe, and not something else. At the top of this pyramid teeter extremely powerful, yet fragile, Laws. A Law predicts the future – often, very precisely – but must be abandoned (or radically reformed) the moment we find its predictions to be misleading.
Almost every scientific discipline has discovered Laws, and Computer Science is no exception. Consider Moore’s Law: a good example not only because of its renown and the reliability (so far) of its predictions, but also because of its limitations, and because it does more than predict future outcomes; it influences them at their source – in this case, the behaviour of the organizations that create the very artifacts comprising the domain of Moore’s Law, itself.
If we were (informally; we’re technologists, not scientists, by trade) to “do” Computer Science within the domain of our own IT organization, what would that look like?
First, we’d gather the empirical evidence. What is actually happening in our shop? How do we evaluate proposals, green-light projects, select and implement systems? What are our support and sustainment processes (both heretofore documented and – especially – undocumented)?
Next, we’d try to articulate why things work the way they do in our shop. What are the general themes and trends we’re seeing? Can we spot commonalities among “data points” we would have assumed were unrelated? If so, what do we see when we extrapolate? Does the resulting “curve” fit the “data”?
If so, we’ve hit upon theory that fits our observations. We may even articulate a (tentative) law. For example, we may discover – as counterintuitive as it seems – that our support team’s costs seem pretty much fixed; that they vary hardly at all with the urgency and severity of the “tickets” (reported issues) to which those costs are allocated. Intriguing! It would seem the “Law of Fixed Support Costs” applies in our domain.
That such a law would apply might naturally disturb us – and we might launch a deeper inquiry into what it is about our support processes that doesn’t “care” about urgency or severity. Perhaps we’ll find that the phenomenon is an illusion – a side-effect of “bad science” on the part of those who report issues and/or enter support tickets; we may find that nearly every ticket is entered with a high urgency, for example. Conversely, we may find that our shop, by its nature, just works that way; perhaps we’re very small, with most issues requiring the attention of the same support people, who have to consult the appropriate subject matter experts in the resolution of nearly every ticket – resulting in a fairly high and invariant overhead. In this case, the discovery of the “Law of Fixed Support Costs” might inspire us to sign a fixed-cost support contract with a service provider.
To sum up: Top-down Enterprise Architecture may have its merits, and ideally can influence behaviour – but (Computer) Science (performed validly) will reflect more reliably what’s really going on. Use a Computer Science-based approach as a check against the “solution in search of a problem”.
A chatbot designed to respond (in English) like a 13-year-old Ukrainian boy (with limited English skills) was recently reported to have passed the Turing Test. Many commentators were quick to demonstrate that the ‘bot emphatically did not – and cannot – pass any version of the Turing Test having any meaningful connection to intelligence.
In my view, the chatbot, “Eugene Goostman”, merely entered the wrong competition. If it were to enter a competition in which every entrant was introduced as a 13-year-old from Ukraine, and all entrants had been either programmed or coached to impersonate a 13-year-old from Ukraine, we could more validly compare its results to that of its competitors. In the Reading University competition, however, other entrants were (tacitly) assigned different roles – roles arguably more difficult to pull off, such as “full-grown, educated, native English-speaking person.”
When Alan Turing famously proposed that the Imitation Game be used as the “gold standard” for machine intelligence, he did so with a challenging version of the game in mind – the version that humans play. To be considered decently good at the Imitation Game, a player (say, a man or a computer program charged with imitating a woman) would have to prove indistinguishable from “the genuine article” (a variety of actual woman players) a great deal of the time. The perfect “female impersonator” would be able to convince an impartial judge just as often, on average, as a woman can. (Bear in mind that even a genuine woman will be mistaken for a non-woman pretending to be a woman, a certain percentage of the time – especially by a judge wary of being duped.)
If a computer program could convince a panel of skeptical judges that it is a person just as often as the average person could do, that program would be a Master Im-person-ator – and indisputably intelligent.
That’s a big “if”. The best we can (generously) allow, at present, is that a computer program may now perform adequately well at an extremely limited and “dumbed-down” version of the Turing Test. We might envision the space of all Turing-Related Tests as a plane, of which a few tiny slivers – representing such roles as “ignorant, sassy teenager” and “abusive, paranoid weirdo” – are coloured either yellow or a very faint shade of green. All other “tiles” in the plane – “full-grown, educated, native English-speaking person”, “woman”, “award-winning journalist”, etc. – are either red or uncoloured.
When I read an article that explained the Heartbleed bug, clearly and simply, I had an epiphany: Vulnerabilities in systems are revealed by simple prodding.
You may have believed, as I did, that hacks are deep and ingenious – proprietary to uber-geeks. Based on Heartbleed, however, my intuition now tells me that most technical hacks are discovered through the most elementary of experimental techniques: Apply a stimulus to the subject, and see if/how it reacts. When the subject is a “dumb” piece of software, one may not even have to guard against its “waking up” and raising an alarm.
Hacking people is usually a bit more subtle – but it doesn’t have to be, if the hacker doesn’t care that his mark knows he’s being hacked. Vladimir Putin is proving himself to be a master of this technique, which requires more brio than brains. Through poking the anti-Bear, he gathers invaluable information; basically, he learns what he can get away with.
The hacker (or “cybersecurity engineer”) prods the armour of networks and systems, sometimes with shockingly blunt instruments – and often finds that armour full of holes.
Donning a white hat, let me say that I have long been a proponent of automated testing of information systems. No one enjoys bleeding; let’s have our robots poke at our armour, randomly and thoroughly, and then patch its holes before we wear it in battle.
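That random, thorough poking is, in essence, fuzz testing. Here’s a minimal sketch; the length-prefix parser is an invented stand-in (loosely Heartbleed-flavoured) for any input-handling code:

```python
import random

def parse_length_prefixed(msg: bytes) -> bytes:
    """Toy parser: the first byte declares the payload length.
    Deliberately buggy, Heartbleed-style: it trusts the declared
    length instead of checking it against the actual payload."""
    declared = msg[0]
    return msg[1:1 + declared]  # silently wrong when declared > payload

def fuzz(parser, rounds=1000, seed=42):
    """Poke the parser with random messages; collect inputs that
    violate a simple property (output length equals declared length)."""
    rng = random.Random(seed)
    failures = []
    for _ in range(rounds):
        declared = rng.randrange(0, 32)
        payload = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 32)))
        msg = bytes([declared]) + payload
        if len(parser(msg)) != declared:
            failures.append(msg)
    return failures

bad_inputs = fuzz(parse_length_prefixed)
print(f"{len(bad_inputs)} of 1000 random pokes exposed the bug")
```

No ingenuity required: blunt, random stimuli and a single property check are enough to expose the hole.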
In a previous post, I offered an operational definition of a “smart” system, positing that if the output of a system is:
- trusted, and
- pertinent; and therefore
- more valuable than the input
…then the system is (relatively) “smart.” I’m hoping that’s fairly easy to buy, at least provisionally; after all, only a pretty stupid system would produce output that was ambiguous, inaccurate, outdated, self-contradictory, irrelevant, or of no more value than its input.
The logicians out there might observe that a smart system, as characterized above, is a lot like a valid argument: it is “truth-preserving.” If the premises (input) of a valid argument are true, its conclusion (output) must be true. We demand a bit more than that from our smart system, however; we require that its output not make us say “d’uh!” In order to be more valuable than its input, a smart system’s output must be, to some extent, unobvious; it must elicit more of an “aha!” The smart system is like the clever, valid argument. (Gödel’s Theorem, however, is too clever for any system. Just a bit less clever than that will do.)
Let’s look at a particular class of information system – the “Business Intelligence” (BI) system. Its name, at least, seems to promise “smarts”. Of course, “intelligence”, in this context, refers more to the output of a successful process – an investigation, perhaps – than to the nature of the process, itself. Consider military intelligence: extremely valuable information that “our side” came by through a process of putting together a bunch of seemingly unrelated and uninteresting scraps of data. Traditionally, the “putting together” – the process – is organic to a group of clever human beings, who analyze, organize and scan the data for patterns – the “signal” hidden in the “noise”. Increasingly, however, that raw, relatively uninteresting data is fed into automated systems that store, characterize and index it, and assist the human investigators greatly in their analytic process – for example, by producing graphical representations of the data (“visualizing” it). Appropriately, such systems, as a class, are called “analytics”. “BI” is a term often used interchangeably with “analytics” in referring to such systems, when used in search of business-oriented “intelligence”.
To wrap up a post dedicated to the unobvious, let me (rather perversely) channel Captain Obvious: I would rather work toward the “aha” than the “d’uh.” Rather than automating a filing cabinet (although businesses do need more secure, accessible file space than ever before), I would rather build a system that can – by virtue of being “smart” – sift through a mountain of facts, identify and illustrate patterns, and arrive at a modicum of intelligence.
Via social media, I received a frank and pithy comment on my previous post (Machine intelligence: Scary? Necessary.):
Machine Intelligence? Dream on. Machines can only really appear intelligent to humans who are not.
In pondering how to respond, I realized I was a bit intellectually lazy when I implied that there might be some “threshold of intelligence.” To the contrary, intelligence is, to my mind, a continuum. Any system (including biological systems) that exhibits any sort of variable behaviour in response to variable data (including stimuli) has non-zero intelligence. An earthworm, for example, is obviously very “stupid” in comparison with many creatures – but it is capable of cognition, as a function of its evolutionary “programming”. The algorithms which govern its behaviour are sophisticated and cannot be easily deconstructed – nor (yet) duplicated by human programmers of non-biological systems (“robotic worms”, if such things existed).
I also want to avoid depicting intelligence as a scalar quantity. To try to measure the conventional IQ of a computer system – or any non-human system – is folly. (The value of IQ has been challenged even as a measure of relative human intelligence – but that’s another discussion.) When we think of an intellect – that of a worm, a person, or a cybernetic system – as a “shape” having at least two dimensions – let’s call them breadth and depth – we arrive at a somewhat more vivid and meaningful basis for comparison. IBM Watson (as the example from my previous post) exhibits an impressive breadth of intellect, given that it is designed to process text in the English language that might relate to any topic. In depth, however, it reveals its “stupidity”. For example, Watson is not designed to be original or creative, whatsoever; it is designed to play Jeopardy, to which originality would be, if anything, a disadvantage. Watson’s “knowledge” of any particular subtopic will be revealed to be woefully inconsistent and brittle, upon probing. It’s quite possible – even likely – that Watson will answer several advanced questions on a given topic, successfully – but then come up clueless on what human experts would agree is a basic question. Of course, that’s because Watson is incapable of recognizing and assimilating the core body of knowledge on a topic, distinguishing the fundamental laws from the “esoterica”. It has no deep understanding of any topic, and grasps no themes or theories that might allow it to come up with an answer by extension or analogy. We could quite aptly characterize Watson’s intellect as “a mile wide and an inch deep.”
For an example of an information system having quite the opposite “intellectual dimensions” as IBM Watson – that is, an extremely limited breadth, but significant (and, I find, impressive) depth – see Copycat, designed by the Fluid Analogies Research Group, headed by Douglas R. Hofstadter, at the Center for Research on Concepts and Cognition, Indiana University, Bloomington. Copycat is a system designed to generate solutions to a certain kind of analogy-based problem, and to do it in a way that resembles the human process (as self-reported by human solvers). Here’s an example of Copycat being “smart”:
Human “says” (via coded terms): “I turn the string abc into abd. Copycat, you do the same with xyz.”
Copycat “answers”: “I turn xyz into wyz.” [Like most humans, Copycat “prefers” this answer to wxz, or any other.]
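Copycat’s real architecture (a stochastic swarm of “codelets”) is far beyond a few lines of code, but the wyz preference can be mimicked with a crude rule-based toy of my own invention, nothing like the actual program:

```python
# A crude letter-string analogy toy (NOT Copycat's architecture):
# "abc -> abd" is read as "advance the rightmost letter"; on "xyz"
# that rule hits the edge of the alphabet, so we fall back on the
# mirror reading many humans (and Copycat) prefer: treat z as the
# anchor at the right end, and retreat the LEFTMOST letter instead.

def advance(ch):
    if ch == 'z':
        raise ValueError("no successor for 'z'")
    return chr(ord(ch) + 1)

def retreat(ch):
    return chr(ord(ch) - 1)

def apply_analogy(target):
    try:
        return target[:-1] + advance(target[-1])
    except ValueError:
        # Mirror the rule: z anchors the right end as a anchors the
        # left, so change the opposite end in the opposite direction.
        return retreat(target[0]) + target[1:]

print(apply_analogy("xyz"))  # wyz
print(apply_analogy("ijk"))  # ijl
```

The toy hard-codes the insight that Copycat discovers for itself; that discovery is precisely where the “depth” lies.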
As a final example (or class of examples), let’s consider “crowdsourced” knowledge bases. Arguably, these systems are highly intelligent (by the terms laid out, herein), able to marry the capabilities of data/knowledge management software to store and index a comprehensive library of facts (breadth) with the “wisdom of the crowd”, inherent in aspects of human intelligence, intuition and social interaction – providing depth. One could convincingly argue that the overwhelming lion’s share of such a system’s intellect is due to the contribution of the crowd of humans – but then, the “one” making that argument would presumably be a human, and subject to bias. Suffice it to say that the results (outputs) of such a system – assuming the non-human elements are cleverly designed – are likely to be far “smarter” than the results of a crowd of humans thrown together in a room and asked to draw conclusions on a given topic, absent any “machine augmentation” of their collective intellect.
To sum up my reply to my critic: It doesn’t matter so much whether a given system is relatively “smart” or “stupid”, as it matters that systems can possess intelligence – and that not all systems possessing intelligence are 100% biological.
Next topic: So-called “Business Intelligence” (in capitals, no less!). Stay tuned!
We owe a lot of rich drama and philosophy to the concept of machine intelligence. From the pure horror (or, uh, campy comedy?) robot overlords inspire, through Asimov’s deep probing of the nature of humanity, to the idea of the Singularity – rephrased as “Transcendence” in the upcoming movie of that name.
But what does an intelligent machine necessarily “transcend” (other than “the threshold of intelligence” – which is tautological)? Mechanisms – profoundly complex mechanisms, to be sure, that elude our current understanding – underlie human intelligence. Indeed, the human brain is routinely likened to a wonderful machine. Why should a machine that succeeds in performing a feat we normally associate with intelligence – such as consuming data from a wide variety of sources, “remembering” (storing) that data in a form it can readily access and correlate, and using what it has “learned” to arrive at a likely answer to a question of fact – not be characterized as intelligent?
I submit that many “machines” (human-designed systems) display intelligence, today. To be sure, it is a far less flexible and remarkable intelligence than you, dear human, have displayed in arriving at this blog post and (I hope) making sense of it, in the context of your own background, interests, and motivations. It is also far less mysterious. I believe it is merely the fact that we do not (yet) understand how we are so intelligent that makes most of us believe – nay, insist – that we are far more intelligent than anything we design shall ever be.
Whatever you believe, as an information technology professional I trust you will agree that the “smarter” the system we can create for our client, the better. Let’s further agree that the output of a “smart” system is:
- trusted, and
- pertinent; and therefore
- more valuable than the input
That sets the bar nice and low for “machine intelligence” – or does it?
In future posts, I’ll share some of the best practices I’ve discovered for creating and maintaining intelligent “information machines”. Meanwhile, if this post inspires or provokes you, I’m sure you’ll let me know about it!
Hello, World! 😉
My name is Sean Konkin, and I am an information technology professional (read: IT Guy). Through 25 years of trials and tribulations (read: tests, defects and rework) I have earned the right to call myself an architect.
I was immersed in the culture of oil & gas business systems for a long time – most of the last 20 years, in fact – and have recently resurfaced. I was laid off from my job as an IT Architect for a large energy company, and am taking this opportunity to look all around me, instead of straight ahead along the path of corporate strategy. The information technology landscape has changed a lot, from the viewpoint of the professional, since the last time I found myself at this sort of crossroads!
For one thing, it’s all around us. We (as workers and consumers) no longer walk to a single room in our houses or allocate a single compartment in our minds to the means of “going online”. We live online – unless we consciously and effortfully opt out. We keep a growing portion of ourselves in the so-called cloud. The Internet programs us. As professionals, we are expected somehow to stay ahead of the Internet, in terms of the risks and opportunities it presents to our employers and clients. No small feat!
Another huge shift is a function of both the development of IT and my own development as a professional: the meaning of “development” (in a systems, rather than personal or industry, sense). I got my start in the industry just at the point the title “Programmer” was giving way to “Developer” – but before the difference was anything more than aesthetic. I remember the developers’ bullpen in the first company I worked for: a darkened half-floor of cubicles stacked with empty Jolt Cola cans, occupied by jeans-clad Unix programmers (all male, of course). That group of “developers” was completely and intentionally different from the rest of the company and its clients. Their job function, Systems Development (really, programming) was mysterious, requiring a completely different set of skills than that of anyone else at the company – and seemingly a completely different personality. IT Guys were geeks – and proud of it.
Fast-forward 20 years: I’m an “Architect”. I love to program (or “code”; it’s now a verb) – but the task is completely different and much less mysterious than it used to be. How often does a developer write a single program, in a single language, to solve a problem, anymore? Proficiency in languages is still important, as is understanding the problem – but more important is the ability to see, communicate, and build a complex solution out of the various components at hand, using available tools. (I realize that’s a value statement, and certainly arguable; I’d love to elucidate and defend it; perhaps that will become part of the raison d’etre of this blog.) My main point, here, is that I now find myself in an environment where programming and other highly technical, specialized skills are secondary, and to a certain extent taken “as read”; the defining essence of IT systems development proficiency is inherent in much more nebulous attributes, such as abstraction, modelling, vision, and synthesis.
I am very curious as to how today’s IT landscape looks from the perspective of other “IT Guys” (inclusive of “Gals”, of course). If you’re another 40- or 50-something veteran, I’d love to commiserate! If you’re just starting out, your perspective is extremely valuable, untrammelled by the baggage we old hackers carry. If, through my posts and your comments (you’ll need to Register, at left), we can better understand and occupy our place as leaders in information technology, I’ll be most gratified.