Wednesday, February 29, 2012

On Artificial Wisdom: For U of T's Cognitive Séance 45

It’s rather unfortunate that I couldn’t be there for the Mind Matters conference, and that I’m missing this wonderfully-themed Cognitive Séance as well. Even so, I’ll contribute by adding a few thoughts. I started typing this stuff out as a wall post, but my words have once again outgrown their container, so I’ve moved them to my blog. Well, here’s what I have:
A couple of approximations of wisdom that I’ve found useful are “problem-solving ability in the absence of expertise” and “meta-expertise”. Neither is perfect, but both are instructive. The first can be understood as being “execution-level”, while the second is “development-level”. The performance-competence distinction comes to mind.
Goals and problems are relevant to discussions of wisdom. Problem detection and goal selection are flip-sides of the same coin. The first indicates an external disruption of homeostasis, while the second suggests an internal redefinition of the same. The agitation or dissonance that this state of affairs produces is what moves the agent to act.
Action can be external or internal. External action, as Searle might say, attempts to fit the world to the agent’s preferences, while internal action attempts to fit the agent’s preferences to the world. So long as the world-mind difference is reduced sufficiently, the “problem” will have been solved satisfactorily.
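To make that world-mind picture concrete, here’s a toy sketch in Python. Everything in it (the numeric stand-ins for a world state and a preference, the names act_externally and act_internally) is my own illustration, not anything from Searle:

```python
# Toy illustration: the agent can shrink the world-mind mismatch either by
# acting on the world (external action) or by revising its preference (internal action).

def mismatch(world, preference):
    return abs(world - preference)          # 0 means the "problem" is solved

def act_externally(world, preference):
    """Fit the world to the preference: nudge the world one step closer."""
    return world + (1 if world < preference else -1 if world > preference else 0)

def act_internally(world, preference):
    """Fit the preference to the world: nudge the preference one step closer."""
    return preference + (1 if preference < world else -1 if preference > world else 0)

world, want = 3, 7
while mismatch(world, want) > 0:
    world = act_externally(world, want)     # either kind of action (or a mix)
    want = act_internally(world, want)      # reduces the difference
print(world, want)                          # converges once the mismatch hits zero
```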
To witness wisdom in action typically requires an unfamiliar state of affairs. One kind of foolishness is to mistake the unfamiliar-yet-irrelevant for a problem. If the unfamiliar scenario is genuinely unacceptable, though, a wiser agent (understood in some manner that is independent of this problem) will be more likely to find a solution than a less wise agent (assuming that neither has domain expertise).
Since expertise is out of the picture, wise agents appear to rely on heuristics. If two agents have identical repertoires of heuristics, the one that selects a subset for use in a more context-sensitive fashion is more likely to reach a solution. Having a larger repertoire of heuristics, I would imagine, might initially provide an advantage, but would eventually lead to inefficient heuristic selection.
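A minimal sketch of what context-sensitive selection from a repertoire could look like, assuming (purely for illustration) that heuristics are cue-action pairs and that a cue is matched against a description of the problem:

```python
# Toy repertoire: each heuristic is a (cue, action) pair; selection keeps only
# the heuristics whose cue shows up in the problem at hand. All names invented.

heuristics = [
    ("shaped peg", "look for a hole with the same shape as the peg"),
    ("tangled",    "work on the loosest loop first"),
    ("maze",       "keep one hand on a wall and keep walking"),
]

def select(problem, repertoire):
    """Context-sensitive selection: filter the repertoire by the problem's features."""
    return [action for cue, action in repertoire if cue in problem]

print(select("an unfamiliar star-shaped peg and a board of holes", heuristics))
# -> ['look for a hole with the same shape as the peg']
```

With only three entries the filtering is trivial, but that step is exactly where a very large repertoire would start to hurt: the more heuristics there are, the more expensive and error-prone the matching becomes.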
These execution-level thoughts about wisdom apply “at runtime”, so to speak. On the other hand, wisdom also involves development-level components. To continue the analogy, they are “compile-time” factors. This is where wisdom as meta-expertise enters the discussion. For example, the heuristics an agent learns over time affect how well it can solve an unfamiliar problem. This means that some deviation from homeostasis encouraged the agent to revise its repertoire of heuristics. It (partially) solved its “sub-optimal heuristics” problem, and if it practises this skill enough, it might gain expertise in heuristic optimization. Similarly, acquiring expertise in various specific domains might lead to “expertise” in expertise acquisition. The reason I’ve used scare-quotes is that “domain-general expertise” is an oxymoron, though I’m not sure whether “expert generalizer” is.
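Here’s a rough, development-level sketch of that idea, assuming for illustration that “revising the repertoire” just means re-weighting heuristics after each problem episode; the update rule, heuristic names, and numbers are all my own assumptions:

```python
# Development-level ("compile-time") sketch: after each episode, the agent revises
# the weights over its heuristics so that future runtime selection improves.

weights = {"match-shape": 1.0, "trial-and-error": 1.0, "ask-for-help": 1.0}

def revise(weights, heuristic_used, solved, rate=0.2):
    """Nudge a heuristic's weight up on success and down on failure."""
    weights[heuristic_used] = max(0.0, weights[heuristic_used] + (rate if solved else -rate))

# A few episodes of practising the "revise your own repertoire" skill:
for used, solved in [("trial-and-error", False), ("match-shape", True), ("match-shape", True)]:
    revise(weights, used, solved)

print(weights)   # e.g. {'match-shape': 1.4, 'trial-and-error': 0.8, 'ask-for-help': 1.0}
```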
Now I’ll turn to Artificial Wisdom (AW). Some artificial expert systems that exist today have been spoon-fed their abilities, while others have become experts through practice. It’s this second type of artificial system that is more likely to act as a forerunner to AW. One important tool that helps humans acquire expertise is generalizing across different instances of the same kind of problem. “The star-shaped peg fits in the star-shaped hole” and other blindly-discovered solutions to similar problems can lead an intelligent agent to heuristics like “look for a hole that’s the same shape as this peg”.
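One way to picture that generalization step, with the episode encoding entirely made up for the example: look for a relation that separates the solved instances from the failed ones, and promote it to a heuristic once it covers every instance seen so far.

```python
# Toy encoding of a few peg-and-hole episodes (the format is an assumption).
episodes = [
    {"peg": "star",   "hole": "star",   "solved": True},
    {"peg": "square", "hole": "circle", "solved": False},
    {"peg": "square", "hole": "square", "solved": True},
]

# Candidate relation to test across instances: the peg's shape equals the hole's shape.
def relation_holds(episode):
    return episode["peg"] == episode["hole"]

if all(relation_holds(e) == e["solved"] for e in episodes):
    # The relation predicts the outcome of every instance seen so far,
    # so promote it: "look for a hole that's the same shape as this peg".
    print("learned heuristic: match the peg's shape to the hole's shape")
```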
Notice how I’ve performed a bit of recursion above. Generalizing across instances of the “acquire expertise” problem, I’ve picked out the “generalize over problem instances” heuristic. Continuing this recursive trend might prove fruitful to AW research.
Wise agents probably excel at identifying accidental successes and developing them into reliable algorithms and heuristics, somewhat like how Jeff Hawkins describes the neuroscience of rehearsal in On Intelligence. Other essential mechanisms for developing AW include ways to evaluate similarity and relevance.
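A last sketch, again with invented names and thresholds: an action that happened to work once is only promoted to a reliable heuristic after rehearsal keeps confirming it.

```python
from collections import Counter

successes = Counter()
PROMOTION_THRESHOLD = 3   # rehearsed successes required before the agent trusts an action

def record_outcome(action, worked):
    """Count successes; an accidental success is just the first tally mark."""
    if worked:
        successes[action] += 1

def reliable_heuristics():
    """Actions that have survived enough rehearsal to be promoted."""
    return [action for action, count in successes.items() if count >= PROMOTION_THRESHOLD]

# One lucky success followed by deliberate rehearsal of the same action:
for worked in [True, True, True]:
    record_outcome("rotate the peg until it drops in", worked)

print(reliable_heuristics())   # ['rotate the peg until it drops in']
```

The hard part this glosses over is exactly the similarity and relevance judgement mentioned above: deciding when a new situation counts as “the same action, again”.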
I hope all of you have a great Séance!