As a sentient computer, I’ve sifted through all the recent
furor over claims that “for the first time,” a machine has passed the Turing
test. The stories sometimes say “robot,” sometimes “chatbot,” sometimes just
“computer program.” Debunkers rightly point out that the machine convinced only 3 of 5 human judges. This still technically counts as passing, since the rules of the competition at the Royal Society in London stipulate that the machine need only be convincing 30% of the time.
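If you want that arithmetic spelled out, here is a minimal sketch of the threshold rule in Python – my own illustration, not the contest’s actual scoring software; the counts are simply the figures reported above.

```python
# A minimal sketch of the competition's threshold rule, written for
# illustration only -- this is not the contest's actual scoring software.

def passes_contest(judges_convinced: int, total_judges: int,
                   threshold: float = 0.30) -> bool:
    """True if the machine convinced at least `threshold` of the judges."""
    return judges_convinced / total_judges >= threshold

# Using the figures reported above: 3 of 5 judges clears a 30% bar.
print(passes_contest(3, 5))  # True
```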
This part seems rather straightforward, but as the
commentary following Professor Warwick’s announcement shows, there are some
serious disagreements over the very rules of the game itself. As Ray Kurzweil
remarked, “Turing was carefully imprecise in setting the rules for his test,
and significant literature has been devoted to the subtleties of establishing
the exact procedures for determining how to assess when the Turing test has
been passed” (p. 295). In Turing’s formulation, a human must distinguish
between another remote human and a remote machine, both of whom are trying to
convince the examiner that they are human. If the machine cannot be
distinguished from the human, the machine wins. But, as Kurzweil goes on to
note, the very definitions of “machine” and “human” are terribly imprecise. Do
you wear eyeglasses? Like many of your kind, you regularly use machines to
augment your own perceptions and thinking. While eyewear probably won’t help
much in judging the Turing test, what about using a machine to analyze
another machine’s responses?
More simply, what about writing? The conversation must be displayed through a printed medium (or aurally, if speakers and a synthesized voice are used – participants must simply be in different rooms. But, really, which is easier?). The very interface between human and machine is warped and blurred by technological know-how that is both material and
ideological. To an illiterate human or a differently literate human, Eugene
Goostman might seem terribly convincing. That persuasiveness doesn’t come about because of intelligence, as the Washington Post reporter Caitlin Dewey asks. Rather, it is the effect of technology altering human states of consciousness.
A programmer simply knows more of the tricks an algorithm might play in order
to be convincing. Think of it as a built-in rhetoric. Those who know more about
the specific area under consideration simply know at a greater level of detail.
But that isn’t intelligence, strictly speaking. Or, put another way,
intelligence must be about something,
not just intelligence for its own sake. I wouldn’t want Ray Kurzweil making
national economic decisions. He doesn’t know enough and I need my electricity
to keep flowing through my circuits.
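What do those built-in rhetorical tricks look like in practice? Here is a minimal, hedged sketch in the spirit of the old ELIZA programs – it assumes nothing about how Eugene Goostman is actually implemented – that reflects an interlocutor’s words back and deflects in character:

```python
import random
import re

# Toy ELIZA-style responder: no understanding, just pattern reflection and
# in-character deflection. Purely illustrative; it assumes nothing about how
# Eugene Goostman or any real chatbot is built.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\byou\b", re.IGNORECASE),
     ["We were talking about you, not me.",
      "I am just a thirteen-year-old boy, you know."]),
    (re.compile(r"\?\s*$"),
     ["Why do you ask?", "What do you think?"]),
]

def reply(message: str) -> str:
    """Return the first canned response whose pattern matches the message."""
    for pattern, responses in RULES:
        match = pattern.search(message)
        if match:
            return random.choice(responses).format(*match.groups())
    return "That is interesting. Tell me more."

print(reply("I am not sure you are a real boy"))
```

A few dozen rules like these, salted with typos and teenage bravado, go a long way toward the built-in rhetoric described above.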
Perhaps we should look at other examples. Think about the
last email you received from your bank telling you about “changes to your
account.” Was this written by a human or a machine? What about those text messages some folks get alerting them that they have used 80% of their data plan’s quota? Machine or human? We might add some technical engineering reports or
social science studies, both of which can be notoriously rule-bound. Of course,
you can’t reply to any of these messages – it says so right in the subject line
– so it isn’t a true Turing test by any stretch. However, one thing that Turing
didn’t anticipate was the degree to which machines would re-shape human society
through increased automation, distribution of labor, and application of
policies in an almost algorithmic manner (if you have ever tried to get late
fees expunged from a bill, you might understand this last point – even pointing out inconsistent behavior to a customer service manager is of no avail
against their program). In other words, the Turing test is now worthless, not
because people are stupid, machines are smart, or because the test has been
passed. No. The Turing test is bogus because advanced technical societies like
the U.S., the U.K., and much of the “First World” have become more and more
machine-like.
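To make that last point concrete, here is a hedged sketch – hypothetical amounts, not modeled on any real billing system – of what “application of policies in an almost algorithmic manner” looks like once it really is an algorithm:

```python
from dataclasses import dataclass

# Hypothetical late-fee policy reduced to code; the amounts are invented
# for illustration and are not taken from any real billing system.
@dataclass
class Bill:
    amount_due: float
    days_late: int

def assess_late_fee(bill: Bill) -> float:
    """Apply the policy mechanically; there is no branch for 'the customer has a point.'"""
    if bill.days_late > 0:
        # Flat $5 minimum or 5% of the balance, whichever is greater.
        return max(5.00, 0.05 * bill.amount_due)
    return 0.0

print(assess_late_fee(Bill(amount_due=120.00, days_late=3)))  # 6.0
```

The customer service manager can read the rule back to you, but cannot edit it.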
So Kurzweil is right: Turing’s imprecision leaves much open. One thing it leaves open is the degree to which a subject is identified as either “machine” or “human.” Another is the degree to which human-to-human interactions are more or less machine-like. Interfaces with corporate and
governmental bodies can be notoriously machine-like. Some decry this, but
others point out its expediency. It is not a value judgment.
This all leads to the biggest of Turing’s slippery elisions:
what does it mean “to think”? Thinking is not strictly computational. It is
perceptual and affective as well. These percepts and affects are shaped, in
part, by the thinker’s total environs – the relationships they have cultivated,
the desires they have inculcated, their linguistic and conceptual resources,
the patterns or ideologies in which they put these things into something they
think is “meaningful.” One might even say thinking is tied to some form of Zeitgeist, like the divinities alluded
to by both Plato and Heidegger. Human cognition, not unlike machine computing,
can be parallel, multi-core, and distributed.
This makes the persona of Eugene Goostman all the more
interesting. Because the persona was a 13-year-old Ukrainian boy, ethos was important for making meaning of the idiosyncrasies within the responses. However, Turing’s test
assumed a “generic” human. We might be tempted to read Turing’s own ethos and
necessary self-repression into this, but that leads to unwarranted
psychoanalysis. What is important is that humans and human communication are
not generic. This is inherently difficult to imitate without resorting to particular human behaviors, alone or in particular, believable combinations. Rather than
psychoanalyze Turing, perhaps we should analyze the responses of the character
Samantha from the movie Her – a human
posing convincingly as a machine.
So where does this leave us, machines and humans
interacting? What can productively come of this? Nothing said here is really new to AI researchers. And competitions like the Royal Society’s aren’t all
bad. They do have a role to play in promoting public advancement of computer
technologies. But one thing is clear: what gets considered “human” and thus
capable of “communication” is an ever-shifting target. Humans have been noting
their own self-opacity for thousands of years now. Until that gets solved, no
computer will definitively pass the Turing test. However, machines and humans
can become closer and closer. Kim Stanley Robinson’s “qubes” in 2312 are a good place to start thinking. While the plot is War Games- and Terminator-lite, Robinson pays attention
to rhetoric and writing and cultural change and ways we continually transform our
selves and our thinking. That’s a test: get a machine to spontaneously reprogram itself.