Intelligent systems: Dimensions

Via social media, I received a frank and pithy comment on my previous post (Machine intelligence: Scary? Necessary.):

Machine Intelligence? Dream on. Machines can only really appear intelligent to humans who are not.

In pondering how to respond, I realized I was a bit intellectually lazy when I implied that there might be some “threshold of intelligence.” To the contrary, intelligence is, to my mind, a continuum. Any system (including biological systems) that exhibits any sort of variable behaviour in response to variable data (including stimuli) has non-zero intelligence. An earthworm, for example, is obviously very “stupid” in comparison with many creatures – but it is capable of cognition, as a function of its evolutionary “programming”. The algorithms which govern its behaviour are sophisticated and cannot be easily deconstructed – nor (yet) duplicated by human programmers of non-biological systems (“robotic worms”, if such things existed).

I also want to avoid depicting intelligence as a scalar quantity. To try to measure the conventional IQ of a computer system – or any non-human system – is folly. (The value of IQ has been challenged even as a measure of relative human intelligence – but that’s another discussion.) If we think of an intellect – that of a worm, a person, or a cybernetic system – as a “shape” having at least two dimensions – let’s call them breadth and depth – we arrive at a somewhat more vivid and meaningful basis for comparison.

IBM Watson (the example from my previous post) exhibits an impressive breadth of intellect, given that it is designed to process English-language text that might relate to any topic. In depth, however, it reveals its “stupidity”. For example, Watson is not designed to be original or creative whatsoever; it is designed to play Jeopardy, a game in which originality would be, if anything, a disadvantage. Probe Watson’s “knowledge” of any particular subtopic, and it will be revealed as woefully inconsistent and brittle. It’s quite possible – even likely – that Watson will successfully answer several advanced questions on a given topic, but then draw a blank on what human experts would agree is a basic question. Of course, that’s because Watson is incapable of recognizing and assimilating the core body of knowledge on a topic, or of distinguishing the fundamental laws from the “esoterica”. It has no deep understanding of any topic, and grasps no themes or theories that might allow it to arrive at an answer by extension or analogy. We could quite aptly characterize Watson’s intellect as “a mile wide and an inch deep.”

For an example of an information system having quite the opposite “intellectual dimensions” from IBM Watson – that is, an extremely limited breadth, but significant (and, I find, impressive) depth – see Copycat, designed by the Fluid Analogies Research Group, headed by Douglas R. Hofstadter, at the Center for Research on Concepts and Cognition, Indiana University, Bloomington. Copycat is a system designed to generate solutions to a certain kind of analogy-based problem, and to do so in a way that resembles the human process (as self-reported by human solvers). Here’s an example of Copycat being “smart”:

Human “says” (via coded terms): “I turn the string abc into abd. Copycat, you do the same with xyz.”

Copycat “answers”: “I turn xyz into wyz.” [Like most humans, Copycat “prefers” this answer to wxz, or any other.]
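
To give a concrete feel for the letter-string domain Copycat works in, here is a minimal Python sketch. To be clear, this is not Copycat – there is no slipnet, no workspace, no codelets, no temperature, and nothing probabilistic – just two hand-coded rules a naive solver might try, which is enough to show why wyz is the interesting answer once the literal “replace the last letter with its successor” rule falls off the end of the alphabet.

```python
# A toy illustration of the letter-string analogy domain Copycat works in.
# This is NOT Copycat itself: no slipnet, workspace, codelets or temperature,
# and nothing stochastic. It simply hard-codes two candidate rules for
# "abc -> abd; xyz -> ?" to show why "wyz" is the elegant answer.

import string

ALPHABET = string.ascii_lowercase


def successor(letter: str) -> str | None:
    """Next letter of the alphabet, or None if we fall off the end at 'z'."""
    i = ALPHABET.index(letter)
    return ALPHABET[i + 1] if i + 1 < len(ALPHABET) else None


def predecessor(letter: str) -> str | None:
    """Previous letter of the alphabet, or None if we fall off the end at 'a'."""
    i = ALPHABET.index(letter)
    return ALPHABET[i - 1] if i > 0 else None


def answer(target: str) -> str:
    """
    Apply the rule inferred from 'abc -> abd' (replace the rightmost letter
    with its alphabetic successor) to `target`. When that rule breaks down
    because the rightmost letter is 'z', "slip" to the mirror-image rule:
    replace the *leftmost* letter with its *predecessor*.
    """
    succ = successor(target[-1])
    if succ is not None:
        return target[:-1] + succ      # literal rule works: e.g. ijk -> ijl
    pred = predecessor(target[0])
    if pred is None:
        raise ValueError(f"no sensible mapping for {target!r}")
    return pred + target[1:]           # slipped rule: xyz -> wyz


if __name__ == "__main__":
    print(answer("ijk"))  # ijl  (the literal rule applies)
    print(answer("xyz"))  # wyz  (successor of 'z' fails; the mirror rule kicks in)
```

The real Copycat explores many competing rules and mappings stochastically, in parallel, and only sometimes settles on wyz; the sketch above hard-codes that preference purely for illustration.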

As a final example (or class of examples), let’s consider “crowdsourced” knowledge bases. Arguably, these systems are highly intelligent by the terms laid out herein: they marry the capability of data/knowledge management software to store and index a comprehensive library of facts (breadth) with the “wisdom of the crowd” – the human intelligence, intuition and social interaction that provide depth. One could convincingly argue that the lion’s share of such a system’s intellect is due to the contribution of the crowd of humans – but then, the “one” making that argument would presumably be a human, and subject to bias. Suffice it to say that the results (outputs) of such a system – assuming the non-human elements are cleverly designed – are likely to be far “smarter” than the results of a crowd of humans thrown together in a room and asked to draw conclusions on a given topic, absent any “machine augmentation” of their collective intellect.
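
If it helps to make the two-dimensional picture concrete, here is a tongue-in-cheek Python sketch – with entirely made-up numbers – representing the systems discussed above as (breadth, depth) pairs rather than as a single IQ-style score.

```python
# A tongue-in-cheek sketch of "intellect as a shape" rather than a scalar IQ.
# The numbers are entirely made up; the point is only that a single score
# cannot capture the differences between these systems.

from typing import NamedTuple


class Intellect(NamedTuple):
    breadth: float  # range of topics/inputs the system can handle (0..10)
    depth: float    # degree of understanding within any one topic (0..10)


examples = {
    "IBM Watson":      Intellect(breadth=9.0, depth=1.0),  # "a mile wide and an inch deep"
    "Copycat":         Intellect(breadth=0.5, depth=7.0),  # one tiny domain, handled insightfully
    "Crowdsourced KB": Intellect(breadth=8.0, depth=6.0),  # software breadth + human depth
    "Earthworm":       Intellect(breadth=0.2, depth=0.5),  # non-zero, which is the whole point
}

for name, ix in examples.items():
    # Collapsing the shape to one number (an "area", say) throws away exactly
    # the information that makes the comparison interesting.
    print(f"{name:15s} breadth={ix.breadth:4.1f}  depth={ix.depth:4.1f}  'area'={ix.breadth * ix.depth:5.2f}")
```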

To sum up my reply to my critic: it matters less whether a given system is relatively “smart” or “stupid” than that systems can possess intelligence at all – and that not all systems possessing intelligence are 100% biological.

Next topic: So-called “Business Intelligence” (in capitals, no less!). Stay tuned!

Raison d’etre

Hello, World! 😉

My name is Sean Konkin, and I am an information technology professional (read: IT Guy). Through 25 years of trials and tribulations (read: tests, defects and rework) I have earned the right to call myself an architect.

I was immersed in the culture of oil & gas business systems for a long time – most of the last 20 years, in fact – and have recently resurfaced. I was laid off from my job as an IT Architect for a large energy company, and am taking this opportunity to look all around me, instead of straight ahead along the path of corporate strategy. The information technology landscape has changed a lot, from the viewpoint of the professional, since the last time I found myself at this sort of crossroads!

For one thing, it’s all around us. We (as workers and consumers) no longer walk to a single room in our houses or allocate a single compartment in our minds to the means of “going online”. We live online – unless we consciously and effortfully opt out. We keep a growing portion of ourselves in the so-called cloud. The Internet programs us. As professionals, we are expected somehow to stay ahead of the Internet, in terms of the risks and opportunities it presents to our employers and clients. No small feat!

Another huge shift is a function of both the development of IT and my own development as a professional: the meaning of “development” (in a systems, rather than personal or industry, sense). I got my start in the industry just at the point when the title “Programmer” was giving way to “Developer” – but before the difference was anything more than aesthetic. I remember the developers’ bullpen in the first company I worked for: a darkened half-floor of cubicles stacked with empty Jolt Cola cans, occupied by jeans-clad Unix programmers (all male, of course). That group of “developers” was completely and intentionally different from the rest of the company and its clients. Their job function, Systems Development (really, programming), was mysterious, requiring a completely different set of skills than that of anyone else at the company – and seemingly a completely different personality. IT Guys were geeks – and proud of it.

Fast-forward 20 years: I’m an “Architect”. I love to program (or “code”; it’s now a verb) – but the task is completely different and much less mysterious than it used to be. How often does a developer write a single program, in a single language, to solve a problem anymore? Proficiency in languages is still important, as is understanding the problem – but more important is the ability to see, communicate, and build a complex solution out of the various components at hand, using available tools. (I realize that’s a value statement, and certainly arguable; I’d love to elucidate and defend it; perhaps that will become part of the raison d’etre of this blog.) My main point here is that I now find myself in an environment where programming and other highly technical, specialized skills are secondary, and to a certain extent taken “as read”; the defining essence of IT systems development proficiency lies in much more nebulous attributes, such as abstraction, modelling, vision, and synthesis.

I am very curious as to how today’s IT landscape looks from the perspective of other “IT Guys” (inclusive of “Gals”, of course). If you’re another 40- or 50-something veteran, I’d love to commiserate! If you’re just starting out, your perspective is extremely valuable, untrammelled by the baggage we old hackers carry. If, through my posts and your comments (you’ll need to Register, at left), we can better understand and occupy our place as leaders in information technology, I’ll be most gratified.