
February 17, 2011


Wait for Watson 2.0.
It should be capable of listening, 'understanding' and reacting to Alex Trebek's questions, not just typed-in questions, as was the case in this contest. That is still a major challenge, as my husband points out. (He should know, having briefly worked at the speech recognition center at CMU several years ago.) But such a computer would never fit into the tiny space of a human cranium. It would need several rooms the size of Watson's current digs.
What the press never stresses is that this is really humans vs. humans, not computers vs. humans. Watson is only as clever as the programmers who created him, and even they dropped the ball when they formulated a rule stating that the title of a category isn't very important, as we saw in the response to the U.S. cities question about Chicago.
But that wouldn't make for entertaining headlines. As it is, the actual competition was deadly dull, even with the spark of life that Jennings tried to inject, and the machine-directed humor that Trebek tried. It was mostly just an advertisement for IBM, which ate up a good 30 minutes of prime airtime. I think that this definitely beats the Super Bowl ad spot for cost-effectiveness.

I watched all three episodes (the first two being mostly IBM infomercials) and I too was struck by the apparent lack of energy in both human contenders, whom I have seen before in their more lively "avatars" when playing against other live human beings. Ken Jennings did try to liven things up, and Rutter was unfailingly smiling and gracious. But how does one relate to an inanimate object beating you hands down? And yes of course, a machine is only as good as the (wo)men who make it. We should never forget that.

What was really weird was how far afield Watson's second and third choices often were, even when it was hitting the jackpot with its "best" answers. And I was bemused by the several question marks after "Toronto" when it flubbed the first Final Jeopardy question. So Watson wasn't sure and "knew" that the answer was shaky? That was kind of spooky - the apparent, very human doubt.

As the Columbia Univ. MDs have indicated, a computer like Watson will come in very handy for doctors who have to rely on memory-based knowledge for their diagnoses. The machine will be able to pull up, in the blink of an eye, information that may take humans many hours to collect and correlate. Fine by me, as long as the human doc decides in the end what actually afflicts his or her patient by carefully examining the available data.

Ken Jennings on what it was like.

Also, we don't necessarily use language to express what we understand. Reminds me of some of the mindless rote memory learning that was so common in the Indian pedagogic method.

On a tangent here, and apropos of Sujatha's invocation of the Super Bowl, consider recent news about Google's problems with spam larding the search results, a consequence of human vs. human gaming of the system. Google is now moving the goal posts in the middle of the game. For years, Google and its devotees have claimed that its search engine deals in "information" sought by searchers. The web was a vast repository of information--good, bad, smutty, illegal, warped, sublime--and Google made it easy to put your finger on exactly what suited your needs. In a recent article about the spam problem--an article I can't find just now, ironically--an apologist for Google distinguished "information" from the "commercial" results that were the predominant targets of spam. How convenient. Now that the integrity of Google's results at one end of the "information" spectrum is seriously questioned, Google simply revises its definition of what counts as information.

A buddy of mine won on Jeopardy years ago, and he managed to p--- off Trebek with a snarky response to one of his stupid questions. Let's see IBM come up with a computer with 'tude. Maybe then I'll watch the game show.

That's not to say that a free-wheeling, amusing (to a human) conversation can't be had with a computer. Try one of the chatbots, like Alice, for instance.
It's not too hard to stump 'her' though and then you are led to variants of 'I'm not programmed to answer that'.
Trebek may be getting even more crotchety with the years - we were warned "No snarky questions" when we attended a Jeopardy taping last summer. So there went my dream of finding out the name of his plastic surgeon!

All this has made me think -- and that's good, I guess! It's made me think why we took HAL seriously, as a character, in _2001: A Space Odyssey_. It wasn't his fluency or his remarkably interactive "moves." It was his ability to feel angry, and jealous, and wary of being outmaneuvered, and perhaps also that he was gay, or sounded it. If Watson could show a sense of irony at being tasked to play Jeopardy rather than to teach math to failing middle schoolers in East L.A., that would get my attention. Anyone remember _The Demon Seed_ with Julie Christie? The brilliant computer locked down her house and made her pregnant with a fast-gestating replica of her deceased daughter; it wanted OUT, it wanted TO LIVE and TO FEEL LOVED. This is more human than being smart at games. It's not whether you can win at Jeopardy that makes you human, but whether you can desire. In so far as we project onto fictional computers our strongest urges and fears, we understand this very well. But. Would we want real AI to possess invincible mental processes, without a trace of desire?

Elatia, I personally "do not" want a computer that feels desire, wants to be loved, gets angry or jealous. I want smarter and smarter humans (who feel all those things) to design computers that can do human work faster and faster even if the former get beaten in game shows. I want machines to remain our tools and not our compatriots in the emotional realm. That should be left to the movies and science fiction. Also, I don't believe that a Matrix world is going to happen ... ever.

I don't want them to feel, but that's okay, because they won't. I might enjoy a saucy Sim, however, one that was programmed to be a little "eccentric." If they can't feel, then programming them to appear bland and mechanical is just as big a solecism as programming them to seem amusing. They have no natural affect after all. And basically, I just want nanotech to do the dishes, destroy house dust, color my hair and so forth -- THAT would be quite smart enough!

Here is one guy who saw the Matrix too many times. But who knows, it may just be a death wish disguised as a new age leap of faith. Remember those Hale Bopp folks? We project too much of our anxieties and wishful thinking on to machines.

"And yes of course, a machine is only as good as the (wo)men who make it. We should never forget that."

Ruchira, that is simply not true. People who are terrible at playing chess, for example, have routinely written programs which can play at the Grand Master level. This is like saying an engine can only be as strong as the person who built it. No, it can be a lot stronger.

No one serious is calling Watson intelligent. But there is a slow accumulation of achievements being racked up by computers in activities previously thought to be within the capability of humans alone, and this is an important symbolic step in that process.

As for your saying, "Fine by me, as long as the human doc decides in the end what actually afflicts his / her patient by carefully examining the available data," what if one day artificially intelligent expert systems were repeatedly shown to have much lower rates of error when making diagnoses when compared to humans? I am not so sure you would still prefer the much more error-prone human. And I see no reason, in principle, why we might not get to that point. One day.

Thanks, Abbas, for your insight as an engineer and programmer. No, I did not mean that the engineers who design a computer chess champ themselves have to be good at chess. Machines designed to do a particular task are expected to do it better than humans. Otherwise what's the point? I meant that the better the designers anticipate that function, the better the machine will be.

As for misdiagnoses by error-prone human docs, I happen to be a person whose health was severely compromised once by hotshot specialists. So I have no quibbles there either. I am sure a very intelligent machine will in most cases function as well as or even better than many tired, overworked, uncaring doctors. But just as Watson said, "Toronto," a machine can be error prone too. I just want a human mind to spend a few minutes checking the data that a machine spits out in milliseconds. When and if we have such machines, I am also not confident that the pharmaceutical companies will not manipulate them (just as they do the human physicians) to prescribe certain drugs irrespective of their efficacy or adverse effects on a particular patient. There should be an intermediary between the machine and the patient with whom the latter can discuss the peculiarities of his or her affliction. Perhaps you can visualize a machine that will be able to take unprecedented symptoms and correlations in its stride and make the correct diagnosis. I don't. Not yet. For example, do you think a machine could have made the correlation with HIV infection/AIDS when, in the early 1980s, young gay men infected with the virus were actually dying of other "known" diseases such as pneumonia and Kaposi's sarcoma?

Ruchira, you would know better than I, but I recall reading that the particular kind of pneumonia that was killing gay men back in the day was thought to be too puny a bug to get such a big job done with so few exceptions. That seemed to cue a few researchers that something else, indeed, was going on. That era remains a test case of how phobias and completely unconscious prejudices can mess with what little medical insight the average educated lay person has. But when it comes to creating a bot-diagnostician, how can we free it of our own prejudices? It might still write up many women pain patients as hysterics, for instance, if its programmer saw the world that way without knowing it.

Now, here's a bot I'm scared of.

Ha, ha! Can I have a robot curry chef? But then, one of the things that I look for in restaurant quality is the replicable taste from one visit to the next. The robot chef apparently guarantees that - no human error or innovation to tinker with the taste. I think the fast food chains are already halfway there with this formula.

As for HIV hiding under the guise of other afflictions, if I remember correctly, the red flag for physicians and researchers went up due to Kaposi's sarcoma and not pneumonia. Until young Caucasian and African American gay men with HIV started showing up with it in significant numbers, that rare form of cancer had been seen almost only among very old Jewish men with central European ancestry. That is what I was asking Abbas. Would a machine have seen the anomaly even if it had made the correct diagnosis for the cancer?

Bot pasanda, coming right up! Mario Batali, no bot he, has said that the consistency of a dish from visit to visit is what keeps restaurant patrons coming back. But it's an observation that's accurate across the board; if you really wanted a Mars Bar, you would be infuriated to get one that was not exactly like the last one you had. Transposed to the key of real gastronomy, I think the idea that highly skilled people can be relied upon to produce a result that requires integrity and discipline -- the hallmarks of every good kitchen -- is quite different from knowing nothing fouled up the works in a manufactory. After all, in the arts, including the culinary arts, intentionality counts for a great deal -- the chef has done something for you, has provided you with an experience of pleasure and nurture and ease, and has inspired in you the confidence that to HAVE IT AGAIN, all you need to do is return. There is an element of moral philosophy in this that a bot in a toque could reach for only in a fairy tale.

All of this commentary simply assures me that this is a matter of humans vs. humans, pace Sujatha, and I suspect it will remain so for a long, long time, notwithstanding Abbas' confidence that the slow accumulation of achievements is some kind of evidence that we're heading there "one day." Take Elatia's allusion to HAL. She's only partly correct that we took HAL seriously because he (it?) had as much of a human character as any of the humans portraying humans or their relations in the film. Another factor is our inherent eagerness to anthropomorphise just about anything. I have never understood the levels of corny inspiration afforded by 2001 or that insufferable Star Trek, which to this day is cited as a genuinely valuable prediction of how a better world of technological convergence and integration might be. Only so much credit is due these dramatic works' effective invitation to us to suspend our disbelief. A moment's sober thought exposes the silliness of viewing them as inspired predictions, which are as ludicrous as "a bot in a toque."

Abbas' comment illustrates the tension underlying our eagerness to program intelligence, I think. He is absolutely correct that progress is incremental. But progress toward what? And is the function of that progress ultimately asymptotic, in which case intelligence (if that's the goal) is never quite achieved? Or should we assume a Turing-test-like pragmatism, according to which if a machine behaves sufficiently intelligently we'll ascribe intelligence to it? It's hard to say, because the example Abbas gives of expert systems bettering doctors at diagnosis is really just about the automation of a task, and the answer to his question is a no-brainer: of course we would want our doctors to give substantial deference to the diagnosis of a machine proven to make better diagnoses than a human. This is no different from expecting our doctors to rely on research in their fields, and not merely on their own practical experiences. We can envision with Abbas a progress through the parsing and programming of tasks until a point is reached at which we're willing to relegate the burden of our medical care to machines. Are we then seriously justified in calling them "intelligent"? Where is that threshold, the point at which many accumulated tasks can substitute not just for professional expertise--which is always defined by professional, human-legislated parameters--but for the work of intelligence?

I'm thinking it might make sense for lower-rung choices to be quite strange in some circumstances, particularly when you're really sure about the higher-rated ones. If you asked me - say - what Sherlock Holmes's sidekick was named, I'd say Watson, period. If you pressed me to come up with a second and third choice, I'd actually have to do some work, and I daresay the choices would be rather odd. I mean, the second choice *is* the second best answer, right? Which is to say, it's what you'd say if forced to ignore the 'right' answer.

Sometimes that sort of second choice makes "sense", like if you ask me to pick the highest or fastest something. If you tell me my guess for richest guy is wrong, the next three choices aren't going to be iguana or Planck's constant, but rather my guesses for second and third richest, but often there really are only one or two good guesses given what you know. Other guesses will pretty much look like gibberish then. Of course, I haven't watched the show :D

Hmm. I'd actually be quite interested in knowing what Watson's top ten choices for richest man in the world look like. Is Watson doing just frequency analysis with wrinkles and bells and whistles, or something fancier?
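The pattern the last few comments circle around - a dominant top answer, oddball runners-up, and question marks when nothing clears the bar - can be illustrated with a toy sketch. This is not IBM's actual algorithm; the scoring scheme, function names, and the 0.5 threshold are all illustrative assumptions:

```python
# Toy sketch (NOT how Watson actually works): rank candidate answers by a
# combined evidence score and only "buzz in" when the top confidence clears
# a threshold. All names and numbers here are illustrative assumptions.

def rank_candidates(evidence_scores, buzz_threshold=0.5):
    """Normalize raw evidence scores into confidences and sort descending."""
    total = sum(evidence_scores.values())
    ranked = sorted(
        ((answer, score / total) for answer, score in evidence_scores.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    top_answer, top_confidence = ranked[0]
    # Below the threshold the system hedges - the on-screen "???" effect.
    should_buzz = top_confidence >= buzz_threshold
    return ranked, should_buzz

# When one answer dominates, everything below it gets only a sliver of the
# probability mass -- which is why second and third choices can look like
# gibberish even on questions the system is sure about.
scores = {"Watson": 90.0, "Moriarty": 4.0, "Toronto": 3.0, "Lestrade": 3.0}
ranked, buzz = rank_candidates(scores)
```

Under this toy scheme, "Watson" gets 90% of the confidence and the system buzzes; the remaining candidates split the leftover 10% in near-arbitrary order, much like the strange lower-rung choices described above.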

Regarding the recasting of Watson as a physician's assistant (vastly more expensive than a living one), it would still mean an automation of routine diagnoses as a first step. I seriously doubt that any human doctor would cede the diagnostics for a complicated case to even the likes of Watson, unless it's something like giving a command that states "This is the diagnosis, give me all the possible treatments in this case." Watson could reach into the terabytes of memory and pull out all treatments from the myriad papers on the subject.
The news article (I can't recall now where it appeared) said something about adding voice recognition to Watson, using software from a company called Nuance that makes popular voice recognition software and apps. I'm not very convinced of how well that will go - reviews of their bestselling software indicate a successful recognition rate of about 80%, and their free iPhone app has always misconstrued whatever my daughter speaks into it.
What of the incorrect 20%? Could that hinder the operation of the physician's assistant enough to make it unfit for use?
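The worry about the incorrect 20% can be made concrete with back-of-the-envelope arithmetic. Under the simplifying (and surely too pessimistic-or-optimistic) assumption that each word is recognized correctly with probability 0.8, independently of its neighbors, the chance a whole utterance survives with no errors shrinks fast:

```python
# Rough sketch: if each word is recognized correctly with probability p,
# independently (a simplifying assumption), then an n-word utterance comes
# through with zero errors only with probability p ** n.

def utterance_accuracy(per_word_accuracy, num_words):
    """Probability that every word in the utterance is recognized correctly."""
    return per_word_accuracy ** num_words

# At 80% per-word accuracy, even short dictation is usually garbled somewhere:
p5 = utterance_accuracy(0.8, 5)    # ~0.33 for a 5-word utterance
p10 = utterance_accuracy(0.8, 10)  # ~0.11 for a 10-word utterance
```

So a system that gets four words in five right still mangles most sentences of any length, which is why a 20% word-error rate could indeed make a dictation-driven physician's assistant frustrating to use.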
