was your post inspired by that photoset of the film though? i’d never heard of it before but it looks good. i think?
I did indeed think about the Chinese Room because of that photoset!
Following on from your post:
One interesting thing to me about this thought experiment is that, while ‘Chinese’ in this sense is simply meant to stand in for any language you have no frame of reference for - one with no recognizable antecedents or links to your own language - written standard Chinese may actually be one of the worst languages to pick as an example, as many characters still retain a meaningful ideographic structure. Aside from direct pictographic representation, the radical system can sometimes even allow meaning to be inferred from an unfamiliar character. But to come to this understanding, lacking a Rosetta, the naive user would have to make one (or many) quite massive intuitive leaps. They might have to somehow develop a recognition that, for example, the character 人 looks a little like a person - that the character 口 might in fact be a mouth. However, given enough time (and a sufficient lack of other stimulation), is this so terribly implausible?
In considering the implications of this, we do end up circling back around to the problem of strong AI. I am not going to pretend for a moment that you can actually understand written Chinese in any depth based purely on what the characters look like; even the most basic elements have in many cases drifted far from their pictographic origins, and puzzling out a compound character based solely on the meaning of its constituent parts has a very low chance of success. But if you could feasibly make such an initial leap - could intuit that first link between orthography and meaning - what would you be drawing on? A lifetime of experience in abstraction and pictorial representation, certainly. A feeling for what is likely to be important in human communication. A strong skill in pattern recognition. Etc. We could further break down the elements required for that tremendous mental jump, and perhaps even quantify them. And having quantified the essential components down to a minute degree, can we transfer them to a machine?
The question then becomes, to me: can we endow machines with intuition? This seems on the face of it nonsensical - our understanding of intuition is that it is by nature illogical, that it lets you leap from proposition to conclusion with no intermediate steps. But we are also (ironically?) terrible at intuiting the processes of our own brains. Is intuition simply reasoning based on operations too fast and too subconscious for us to break down and bring to conscious analysis? In other words, can it be reverse-engineered? The problem of intuition then simply becomes one of mechanical complexity, which is nominally (if not realistically) reducible and therefore solvable.
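To make ‘reducible’ a little more concrete: here’s a deliberately toy sketch in Python (every name and the example stream are my own invention) where a ‘hunch’ about which symbol comes next is, under the hood, nothing but frequency counting. I’m not claiming this is intuition - only that the distance between an inexplicable leap and a mechanical tally can be smaller than it feels.

```python
from collections import Counter, defaultdict

def learn_transitions(stream):
    """Tally which symbol tends to follow which. No rules, just counting."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(stream, stream[1:]):
        follows[prev][nxt] += 1
    return follows

def hunch(follows, symbol):
    """'Intuit' the likeliest continuation - really just the biggest tally."""
    if symbol not in follows:
        return None  # no prior exposure, no leap
    return follows[symbol].most_common(1)[0][0]

# A stream of opaque symbols, as the room's occupant might see them.
stream = "ABXABXABYABXABX"
follows = learn_transitions(stream)
print(hunch(follows, "A"))  # -> 'B': feels like a guess, is only arithmetic
print(hunch(follows, "B"))  # -> 'X': the majority continuation wins
```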
I’m not a philosopher and not much of a neuroscientist, and I don’t want to dive into the ghost-in-the-shell problem. I started replying mostly because the Chinese Room, to me, always raises more questions than it answers. The one I usually get hung up on is that the biggest difference between locking a machine in a room and having it regurgitate set answers to opaque statements, and doing the same with a human, is that eventually the human will get bored. It’s when we’re bored that we start looking for patterns, making up games, deliberately fucking around, or even trying to understand the impenetrable. If, before trying to make a machine that understands language, we could instead make a machine that gets bored, would we have constructed the basis for strong AI? Or is it a circular problem, where we first need the intelligence in order to get bored?
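For what it’s worth, ‘a machine that gets bored’ isn’t entirely untestable hand-waving - you can cash boredom out, very crudely, as a falling novelty signal. A minimal sketch (Python again; the class name, threshold, and window size are all invented for illustration): an agent that answers from a fixed rulebook but tracks how often anything new arrives, and starts deviating from the script once nothing has surprised it for a while.

```python
import random

class EasilyBored:
    """A toy agent that answers by rote but tracks how surprising its
    inputs are, and starts deviating once the surprise dries up."""

    def __init__(self, threshold=0.1, window=20):
        self.rulebook = {}          # stimulus -> canned response
        self.surprises = []         # 1.0 if a stimulus was new, 0.0 if familiar
        self.threshold = threshold  # below this mean novelty, we're "bored"
        self.window = window        # how much recent history counts

    def bored(self):
        recent = self.surprises[-self.window:]
        return len(recent) == self.window and sum(recent) / self.window < self.threshold

    def respond(self, stimulus):
        novel = stimulus not in self.rulebook
        self.surprises.append(1.0 if novel else 0.0)
        if novel:
            self.rulebook[stimulus] = f"canned-reply-{len(self.rulebook)}"
        if self.bored():
            # The moment of interest: nothing new is coming through the slot,
            # so the agent stops following the rulebook and starts fucking around.
            return random.choice(list(self.rulebook.values())) + "?!"
        return self.rulebook[stimulus]

room = EasilyBored()
for _ in range(100):
    room.respond(random.choice("ABC"))  # the same three slips of paper, forever
print(room.bored())                     # -> True: no novelty in a long while
```

Whether that deviation is the first flicker of looking-for-patterns or just injected noise is, of course, exactly the circular problem again.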