Elementary, My Dear Watson.

I’ve recently been working through the book The Rise of the Robots: Technology and the Threat of Mass Unemployment by Martin Ford.  If you are a fan of algorithms, AI, or economics, it is a good and chilling read, packed with real-world stats and allegory. (Note: there are two titles for the book, one ending in “…The Threat of Mass Unemployment” and the other ending in “…The Threat of a Jobless Future.”  Rest assured they are the same book; I didn’t bother to look into why there are two titles floating around.)

This book is filled with stories of the colossal leaps that have characterized information and automation technology over the past 80 years.  There is a law mentioned in this book, called Moore’s Law, that basically says computing capacity has a way of increasing exponentially, doubling on a regular schedule, like a car that starts rolling at half a mile an hour and doubles its speed every second.  Take a moment to do a moment’s worth of calculations. You might want to use a calculator.  Doesn’t take long for that number to get big, does it?
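If you’d rather let the machine do that moment of math for you, here’s a minimal sketch of the doubling analogy.  To be clear, the half-mile-an-hour car and the per-second doubling are the analogy’s numbers, not real hardware benchmarks.

```python
# Toy illustration of the doubling analogy above: a car that starts at
# 0.5 mph and doubles its speed every second. The numbers come from the
# analogy, not from any real chip benchmark.
speed_mph = 0.5
for second in range(1, 31):
    speed_mph *= 2
    if second % 5 == 0:
        print(f"after {second:2d} seconds: {speed_mph:,.0f} mph")
```

Thirty seconds in, that car is doing roughly half a billion miles per hour.  That’s the “big” I was talking about.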

According to Ford, Moore’s Law has more or less become the standing assumption behind Machine Learning, or “Deep Learning.”  At one point in chapter 4 of his book, he captures the confidence we now have in our machines by saying (paraphrased), “Even without upgrades to software, learning machines will continue to exponentially improve their performance over the coming years just because of Moore’s Law.  Their computing power will increase so much that humans are almost sure to be left in the dust.”

One of my favorite examples starts with the saga of Deep Blue, IBM’s chess machine.  Many of you are probably too young to remember (I am) that around 1996 IBM built a machine capable of almost instantaneously examining enormous numbers of potential chess moves and estimating the likelihood of success or failure in any given position.  It defeated the reigning world chess champion, Garry Kasparov, on its second attempt.  That was the first time a machine had triumphed over a reigning world champion in a match.
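Deep Blue’s real search ran on custom hardware and was far more sophisticated than anything I could fit here, but the core idea (look ahead through possible moves, score the resulting positions, and pick the move whose worst case is best) can be sketched in a few lines.  Everything in this snippet, the position representation and the scoring function included, is a stand-in for illustration, not IBM’s actual code.

```python
# Toy game-tree search in the spirit of a chess engine: look ahead through
# possible moves, score the resulting positions with a heuristic, and choose
# assuming the opponent always answers with their best reply. Deep Blue's
# real search was vastly deeper and ran on purpose-built hardware.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Best achievable score from `position`, looking `depth` moves ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)  # heuristic likelihood of success here
    child_scores = (
        minimax(apply_move(position, move), depth - 1, not maximizing,
                legal_moves, apply_move, evaluate)
        for move in moves
    )
    return max(child_scores) if maximizing else min(child_scores)
```

The machine’s whole “understanding” of chess lives in that evaluate function and in how many moves deep it can afford to look, which is exactly where raw computing power pays off.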

Beating the world champion may not sound like much, but consider that this was 1996.  People were still dressing like this.  It was, to put it mildly, a big achievement. #understatement

Years later, though, IBM decided to set the bar even higher: winning a match of Jeopardy! against real human champions.  This was a whole different game, not based on a system of clearly defined, machine-friendly rules.  It soon became blindingly obvious that Deep Blue was completely ill-equipped for the job, and IBM would have to start over from scratch.  To make a long story short, IBM brought together a team of about 20 computer scientists from organizations and universities all over to give birth to its new brainchild, Watson.

It is hard to overstate the differences here.  Whereas Deep Blue computed by examining possibilities and probabilities of success within a closed, rigid system, Watson used a sort of trial-and-error self-correction to examine millions of resources and documents (millions at the time; who knows how many now) and teach itself the best way to arrive at answers to open-ended questions using correlation alone.

Watson was not only able to find answers to questions like “Who shot John F. Kennedy?”, but it could also understand the differences and similarities between phrases like “Mark down those notes,” “Ouch, that’s gonna leave a mark,” and “Mark my words,” or phrases like “Look who turned up,” “Turn up the volume,” “Turn out your collar,” and “How did it turn out?”

That stuff is easy for a human, but can you imagine teaching a computer to know the difference, or writing a program that can teach itself the difference?  You don’t have to, because it exists right now, has existed for years, and can do so much more.  Watson is so reliable that it has been reapplied to medicine, where it can accurately diagnose medical issues and discover connections a human doctor might never make, because it can instantaneously reference vastly more cross-disciplinary material than a single human could consume in a lifetime.

The biggest takeaway from this is a response to a phrase that anyone who took a high school statistics course will be familiar with: correlation does not equal causation.  Watson reaches into the “Big Data” pool, pulls out a big fat “Oh yeah?! So what?!”, and thumps high-school-you in the nose with it.

Apparently, although correlation and causation are not the same, with enough data it just. Doesn’t. Matter.

You (and by “you” I don’t mean you; I mean Watson) can start with extensive correlation and make accurate, actionable predictions and observations, then turn around and use a second, much newer piece of software called WatsonPaths (sorry, no Wiki for that yet) to attempt to explain the causative logic.  What?!? The word “attempt” is important because, as it turns out, Watson rarely knows why its choices were right.  It just knows that, based on past correlative data, they are right.
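To make the correlation-only idea concrete, here is a toy sketch of answering by co-occurrence alone: score each candidate answer by how often it showed up alongside the observed clues in past records, with no model of why the link exists.  This is my own illustration of the idea, not IBM’s method, and every name in it is made up.

```python
# Toy "prediction from correlation alone": rank candidate answers by how
# often they co-occurred with the observed evidence in historical records.
# There is no causal model here at all, only counting.
from collections import Counter

def train(records):
    """records: iterable of (evidence_clues, correct_answer) pairs."""
    co_counts = Counter()        # (clue, answer) co-occurrence counts
    answer_counts = Counter()    # how often each answer appeared at all
    for clues, answer in records:
        answer_counts[answer] += 1
        for clue in clues:
            co_counts[(clue, answer)] += 1
    return co_counts, answer_counts

def predict(clues, co_counts, answer_counts):
    """Pick the answer most strongly correlated with the observed clues."""
    scores = {
        answer: sum(co_counts[(clue, answer)] / total for clue in clues)
        for answer, total in answer_counts.items()
    }
    return max(scores, key=scores.get)
```

Nothing in there knows why a clue and an answer travel together; it only knows that, historically, they do.  That’s the whole trick.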

Want to know something else impressive? Watson, with all of this correlative processing power, knows when to say, “You know what, I’m just not certain enough to answer that. Pass.”  It knows when to say “I don’t know.”  That is admirable in a person, to be sure, but not particularly impressive, because people understand the “why” of a question before the “what.”  But for a machine that decides based on past data alone (not probability, but history) to be able to recognize that the links between correlations are too weak and hold back a wrong answer… well, that blows my mind.
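If you want a feel for how a “pass” can fall out of nothing but scores, here’s a minimal sketch: answer only when the best candidate clears a confidence threshold and beats the runner-up by a clear margin, otherwise keep quiet.  The threshold and margin numbers are arbitrary choices of mine, not anything Watson actually uses.

```python
# Toy "know when to pass" rule: answer only when the top candidate is both
# confident enough and clearly ahead of the runner-up. Threshold and margin
# are arbitrary illustration values, not Watson's real tuning.

def answer_or_pass(scores, threshold=0.60, margin=0.15):
    """scores: dict mapping candidate answers to confidence in [0, 1]."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_answer, best_score = ranked[0]
    runner_up_score = ranked[1][1] if len(ranked) > 1 else 0.0
    if best_score >= threshold and best_score - runner_up_score >= margin:
        return best_answer
    return "Pass."  # not certain enough to risk a wrong answer

# answer_or_pass({"Kennedy": 0.90, "Lincoln": 0.20})  -> "Kennedy"
# answer_or_pass({"Kennedy": 0.50, "Lincoln": 0.45})  -> "Pass."
```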

Let’s just sidestep the conversation about the tremendous statistical implications of all that for a moment and appreciate that we can now use Watson and WatsonPaths to assist and even train our medical professionals.  I’ll drink to that.  Isn’t it amazing to think that the closest thing we have to AI is available to health professionals? Something that can listen to diagnostic inputs supplied through figures of speech, idiom, metaphor, etc., and spit out conclusive, actionable data in narrative form?

If you answered anything other than, “Yes,” your opinion is demonstrably wrong.

Oh, did I forget to mention that? Yes, machines can now give their answers not only in grammatically correct English but in phrasing that is easy on the ears.  In fact, there exists a company Forbes Magazine doesn’t want you to know about (Narrative Science) that owns an algorithm, which may or may not be called Quill (you’re welcome), used by several news organizations, Forbes included.  It can detect a topic of public interest, do days’ worth of relevant research, and produce an eye-catching, attention-holding article every thirty seconds.  Every Thirty Seconds!!

Imagine what companies that utilize that software might do to prevent you from knowing about it… Now imagine what a political party with… no, this is strictly hypothetical.

Strictly Hypothetical

Where was I? Oh yeah: imagine a political party with, say, a Facebook account, that wants to flood the internet with content leaning a certain way.  How far of a stretch would you say that would be?  Sounds almost familiar, doesn’t it? Just food for thought.

How is all of that not AI, you may ask?  I dunno, man: the smart guys say it isn’t, and I don’t have time to get into that today.

Anyway, I’ll close with this: machines are getting more advanced every year, and according to Moore’s Law that advancement is about exponential acceleration, not mere speed.  The stuff I’ve talked about is old news, if you can believe it, and I’ve barely scratched the surface.  In his book (remember that, from WAY up at the top?), Ford uses all of this as ammunition to soapbox about how machines are going to take over the jobs of most of our workforce.  He probably has a point, but I want to go here with it: if we don’t have real AI sometime in the next 10 years, I’ll give one of you $5.

First come, first served,

I’m totally serious,

Putting a fiver on the line,

M.
