This article is set in the context of the pursuit of understanding the mathematical underpinnings of “intelligence,” however ill-defined. In particular, I question the formal soundness of our current best “tool” for sensing whether we are coming close: the Turing Test. To remedy inherent philosophical problems in its design and motivations, we reuse its axiomatic foundation, guided by “what we plan to use it for,” to glean the design of a new test: one free of ambiguities and ready for application in practice, which one could argue is the only correct way of certifying that intelligence has been attained “synthetically.” Interestingly, the nature of this new test implies that the pursuit of understanding intelligence can be viewed as a societal game, continuous in time, whose rules command that there be only one winner, and that the requisite defeat of the non-winners be attained in the form of a voluntary distributed consensus.
While the formal essence of a test is somewhat axiomatic (circular), in that we introduce it to a formal study from an external position of taste, as a definition (a given), one can refine a taste by basing it on simpler and clearer principles (Occam's Razor), just as one can refine a seemingly complete theory of Mechanical Physics into a lower-level Particle Physics that explains more in fewer words.
The fundamental issue today with "studying intelligence", according to Chomsky, is that you cannot study "how something works" without first knowing "what something is". That is, if one is unable to define the object of study in a semantically meaningful, however loose, way, one presumably cannot formulate sensible (i.e. semantically meaningful) sentences on that subject to begin with.
It is, however, presently possible to formulate a more modest question about the notion of “intelligence”, one that repays us with an illuminating answer. Nothing precludes one from asking:
When the future moment comes at which mankind has fully understood intelligence, how will mankind be certain, in a conclusive manner, that this state of the art has been attained?
In other words, what does the event in time that we are waiting for look like?
Now that I have your attention, I want to highlight that the Turing Test is better understood when discussed in light of what we plan to use it for. It is a commonly held belief that the Turing Test is our tool for drawing the finish line in the pursuit of understanding the mathematical foundation of intelligence.
This state of affairs is unsatisfactory, in the same way the medieval explanation that “water falls to the ground because that's where it came from, and vapor goes to the sky because that's where it came from” is unsatisfactory. In particular, it leaves a glaring ambiguity, arising from the unspecified identity of the “judge” in the Turing Test, in interpreting the results of a Turing Test for the purpose of declaring victory over intelligence. This is why the Turing Test really is just “hypothetical”: it cannot be applied in practice meaningfully.
Consider a thinker knocking on the gates of MIT, claiming she understands the mind and asking to meet with Prof. Noam Chomsky. They would sit at a table. She would present her claim, and Prof. Chomsky would promptly respond: “It's too late, my dear!” I will tell you why.
If, at the time the thinker publicly announces she knows the mind, the Turing Test is the best known test for verifying such claims, then the thinker is doomed never to be pronounced the clear "resolver" of the puzzle of intelligence, because the unspecified identity of the test's judge leaves any verdict open to endless dispute; and so the puzzle of intelligence will never really be declared solved.
Therefore, if we ever expect to be certain of the moment when an understanding of intelligence has been attained, a new test must be formulated before that "attainment" is reached. One such test is offered below.
Consider what the thinker could have done instead of rushing to Prof. Chomsky and thereby permanently (and, I submit, also prematurely) etching the moment in time when her claimed discovery became public, passing out of the privacy of her own thoughts.
If indeed she had understood intelligence (and could thus reproduce it), she would be able to put up a web page that demonstrates “puzzlingly intelligent” response behavior to any passer-by who chooses to be puzzled, as opposed to strolling by with a superficial explanation akin to “the water goes to the ground because that's where it came from”. (Presuming demonstrability of the thinker's device is really just saying that non-demonstrable discoveries are unacceptable.)
This “intelligent behavior” could have any shape or form. Those are not essential, just as modality is not essential to the semantic meaning of natural language. But one example, just to illustrate and add some flesh, could be a web page that behaves very much like a Google Video Conference or Hangout session but is in fact generated, and I mean “generated” in Chomsky's technical sense, by the thinker's design.
Consider what would happen if such a test were freely available for everyone to browse to, with a clear indication as to the fact that the web-page is owned and made by a specific thinker.
Assuming currently commonly accepted axiomatic scientific beliefs, the following should come to pass:
In the utterances of all folk, intelligence and the thinker's device become semantically (meaningfully) indistinguishable, for if it were otherwise, it would be implied (in contradiction) that mankind were able to make explicit a meaningful, and thus rigorous, distinction between intelligence and the thinker's device.
(Interestingly, any such distinction would technically bring mankind closer to understanding intelligence, plainly by power of elimination, albeit from an infinite set.)
See what happened? If the thinker wanted to confirm to herself that she had understood intelligence, then instead of asking one man for a blessing, she needed to make all men and women confirm it to themselves.
A careful re-read of all said above elicits that, in fact, any device about which nothing semantically meaningful can be said is indeed synonymous with “intelligent”, for the duration of that condition.
So not only is “intelligence” relative to the beholder (e.g., if Alice shows Bob an algorithm that recognizes faces and Bob cannot reproduce it, Alice's algorithm is genuinely intelligent to Bob), but if the thinker wants to convince the world she invented intelligence, she had better keep the internal algorithm secret until everyone has voluntarily admitted an inability to understand the thinker's device, presumably after multiple tedious attempts at it: a proof by frustration, of sorts.
What is quite ironic is that the subtle nature of “intelligent”, which makes itself meaningful only in relation to others, is already embedded in the way the academic ecosystem works: no paper is accepted to a conference if the reviewer finds it “obvious”. It is already a well-known and not uncommon issue for a smart reviewer to reject a paper, only for it to turn out later that the community at large considers the paper more than worthy of acceptance.
Perhaps worth mentioning is another “obvious” fact, gleaned directly from the construction of the test: the pursuit of understanding intelligence, when viewed as a process over time, is precisely modeled by a continuous-time social game with a singular winner and a consensus defeat, whose internal rules (the mind's kitchen) are yet unknown.
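To make the game-model above a little more concrete, here is one possible formalization; the notation (the observer set P and the concession indicators c_p) is my own illustrative choice, not part of the original argument:

```latex
% A sketch of the consensus-defeat game, under assumed notation.
% Let P be the set of all observers of the thinker's device, and for
% each observer p let c_p(t) \in \{0, 1\} indicate whether, by time t,
% p has voluntarily conceded an inability to state a meaningful
% distinction between the device and intelligence.
%
% The thinker wins at the first time T at which concession is
% unanimous -- a voluntary distributed consensus:
\[
  T \;=\; \inf \bigl\{\, t \ge 0 \;:\; c_p(t) = 1 \ \text{for all } p \in P \,\bigr\}.
\]
% If some observer later articulates a meaningful distinction, her
% c_p reverts to 0 and T recedes; the game is continuous in time,
% admits exactly one winner, and "defeat" of every non-winner is
% precisely the condition c_p(T) = 1 for all p.
```

This is only a sketch of the win condition; the "internal rules" of the game remain, as the text says, unknown.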
In words of good-bye:
Synthetic intelligence has to appear to us in the same way intelligence did: out of the blue, like a magician performing puzzling tricks on us, so stimulating as to motivate a societal study of it — intently focused on defeat.