Moravec’s paradox
The more intuitive and effortless something seems to humans, the harder it typically is to replicate in artificial systems: high-level math and chess turned out to be easy for computers, while vision and movement are hard.
Take-away: Our intuitive judgments can integrate much more information than our explicit reasoning.
Move 37
AlphaGo gained a strategic advantage against the top Go professional Lee Sedol by playing a move that Go experts at first dismissed as clearly wrong.
Take-away: Even disciplines with long traditions and good feedback mechanisms can get stuck in sub-optimal approaches.
Inserting randomness
Machine learning systems can avoid getting stuck in local optima by injecting randomness into their learning: noisy gradient updates, random restarts, ε-greedy exploration, annealing schedules (see the sketch below).
Take-away: obvious
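A minimal sketch of the idea, with a made-up 1-D objective and hand-picked learning rate, noise scale, and decay schedule (none of them taken from any particular system): plain gradient descent started near a shallow local minimum stays there, while the same descent with annealed Gaussian noise, in the spirit of simulated annealing, can hop over the barrier into the deeper basin.

```python
import random

def f(x):
    # Toy 1-D objective: shallow local minimum near x ≈ -0.8 (f ≈ -0.76),
    # deeper global minimum near x ≈ 1.4 (f ≈ -2.1).
    return x**4 - x**3 - 2 * x**2 + 0.5 * x

def grad(x, eps=1e-5):
    # Central-difference gradient; good enough for a toy example.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def descend(x, steps=20000, lr=0.01, noise=0.0, decay=0.9995):
    # Plain gradient descent when noise == 0. With noise > 0, Gaussian
    # perturbations (annealed by `decay`) let the iterate hop out of
    # shallow basins before it settles down.
    sigma = noise
    for _ in range(steps):
        x = x - lr * grad(x) + random.gauss(0.0, sigma)
        sigma *= decay
    return x

random.seed(0)
x0 = -1.5                       # start in the basin of the shallow minimum
plain = descend(x0, noise=0.0)
noisy = descend(x0, noise=0.3)
print(f"plain GD: x = {plain:+.2f}, f(x) = {f(plain):+.2f}")   # stays near -0.8
print(f"noisy GD: x = {noisy:+.2f}, f(x) = {f(noisy):+.2f}")   # usually ends near +1.4
```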
Montezuma’s Revenge
Agents that learn from external rewards have a hard time acquiring behaviors that don’t produce any external reward for a long time. The Atari game Montezuma’s Revenge, where rewards are extremely sparse, is the classic example.
Proposed solutions? Internal signals such as artificial curiosity, random exploration, or imitation of human behavior (a toy version of the first is sketched below).
Take-away: In domains with scant external rewards, try to find a source of intrinsic motivation or great mentors.
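A toy sketch of the intrinsic-motivation idea (not any particular published method; the chain environment, the first-visit bonus, and every parameter below are invented for illustration): tabular Q-learning on a chain of states where the only extrinsic reward sits at the far end. The "curious" variant adds a bonus the first time a state is visited in an episode; the plain ε-greedy variant gets no learning signal until it stumbles onto the goal.

```python
import random

def run(episodes=400, n_states=15, ep_len=60, alpha=0.5, gamma=0.9,
        epsilon=0.1, beta=1.0, curious=False, seed=0):
    """Tabular Q-learning on a sparse-reward chain: states 0..n_states-1,
    actions 0 = left / 1 = right, and the only extrinsic reward is +1 for
    reaching the last state. With curious=True the agent also receives an
    intrinsic bonus (beta) the first time it visits a state in an episode,
    a crude stand-in for an artificial-curiosity signal."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    goals = 0                               # episodes in which the goal was reached
    for _ in range(episodes):
        s, seen = 0, {0}
        for _ in range(ep_len):
            if rng.random() < epsilon:      # random exploration
                a = rng.randrange(2)
            else:                           # greedy action (ties broken towards left)
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0     # sparse extrinsic reward
            if curious and s2 not in seen:
                r += beta                              # intrinsic novelty bonus
            seen.add(s2)
            done = s2 == n_states - 1
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])      # Q-learning update
            s = s2
            if done:
                goals += 1
                break
    return goals

print("episodes reaching the goal, plain e-greedy:", run(curious=False), "/ 400")
print("episodes reaching the goal, novelty bonus: ", run(curious=True), "/ 400")
```

Run as-is, the plain agent should report close to zero successful episodes, while the novelty-driven agent typically reaches the goal in most episodes once it has pushed its way along the chain.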
Navigation
It seems likely that our high-level cognition is partly built on repurposed mechanisms for navigating the physical environment. This might be why thinking spatially (e.g. techniques like the memory palace) works so well for creative problem solving.