[Rough Notes] General AI and Cultural Evolution

(Completely speculative)

Related: Raw notes: Henrich (2016) Google Talk – The Secret of Our Success

  • Small changes in brain architecture cause big changes in intelligence. An important intuition pump for AGI being a big deal (and possibly fooming past humans) is the difference between humans and chimpanzees in terms of their capacity to succeed across environments.
    • The genomic differences between humans and chimps are quite small. Varki (2005):
      • Last common ancestor 5-7 million years ago
      • About 4% of nucleotides differ between chimps and humans: roughly 35 million single-nucleotide differences, plus ~90 Mb of insertions and deletions (of the ~725 Mb total)
  • (Controversial) A key part of human advantage lies in cultural evolution. It seems that the core difference between humans and chimps is the capacity for sustained cultural evolution, powered by behavior imitation and language.
  • The ability to decompose problems may be a core ingredient. The most impressive things that humans have achieved (like the Apollo program) are not just a result of transmitting trigger-action patterns throughout a relevant group, but also the ability to break down a problem (safely get a person to the moon) to sub-parts, then delegate the parts to other agents.
  • Human-level intelligence will not be a milestone. Computers will already be vastly superior in memory, transmission fidelity and communication speed (as well as many other things) once they get to the chimp level of perceptual intelligence and the human level of cultural transmission and real-world problem decomposition.
    • Random thought: lossy compression like humans do with language might be necessary for speeding up progress in multi-agent system intelligence.
  • The ability to open-source and copy software changes evolutionary dynamics, making them much faster.
  • Understanding memetic evolution may be key to the alignment of general AI. A natural problem in a model of general AI such as the one outlined above will be information cascades that lead to not-OK trigger-action patterns and outcomes. Related: Hypolinks: Cranial Deformation
  • Systems management, operations research and other models of group alignment might prove useful. Humans managed to generate lots of practical knowledge about aligning groups (and individuals within them) with goals. Some of those may work for aligning multi-agent systems.
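The decomposition-and-delegation idea above can be made concrete with a toy sketch. This is purely illustrative: the goal names, the decomposition table, and the agents are all invented here, and real systems would need search, negotiation, and failure handling rather than a fixed lookup.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A hypothetical agent with a set of goals it can solve directly."""
    name: str
    skills: set = field(default_factory=set)

    def can_solve(self, goal):
        return goal in self.skills

def solve(goal, decompositions, agents):
    """Return a plan: either (agent, goal) if some agent can solve the goal
    directly, or a list of sub-plans obtained by decomposing the goal and
    delegating each part."""
    for agent in agents:
        if agent.can_solve(goal):
            return (agent.name, goal)
    if goal in decompositions:
        return [solve(sub, decompositions, agents) for sub in decompositions[goal]]
    raise ValueError(f"no agent or decomposition for {goal!r}")

# Invented example data, loosely echoing the Apollo example in the text.
decompositions = {
    "moon landing": ["build rocket", "train crew"],
    "build rocket": ["design engine", "assemble stages"],
}
agents = [
    Agent("propulsion team", {"design engine", "assemble stages"}),
    Agent("astronaut office", {"train crew"}),
]

plan = solve("moon landing", decompositions, agents)
# plan nests the delegation structure:
# [[("propulsion team", "design engine"), ("propulsion team", "assemble stages")],
#  ("astronaut office", "train crew")]
```

The point of the sketch is the recursion: no single agent holds the whole plan, yet the group solves the top-level goal by breaking it into parts that individual agents can handle.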

Some related thoughts

  • The basic unit of interest in cultural evolution is something like “trigger-action patterns”. That is, whether someone can recite a passage from the Bible is not of interest in terms of “positive-sum intelligence” unless it influences their behavior (as the Ten Commandments do). This breaks down into something like perceive-classify-intervene. Sometimes, under uncertainty, there will be more explicit computation between perceive and classify. The ability to change the link between classify and intervene is also important.
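The perceive-classify-intervene idea can be sketched in a few lines. Everything here is a made-up toy: the classifier, the labels, and the policy table are assumptions, chosen only to show that when the classify→intervene link is stored as data, it can be copied (cultural transmission) or rewritten (learning).

```python
def classify(percept):
    """Toy classifier: map a raw percept to a class label."""
    return "threat" if "snake" in percept else "neutral"

# The mutable classify→intervene link, represented as plain data.
policy = {"threat": "flee", "neutral": "ignore"}

def act(percept):
    """Perceive → classify → intervene."""
    return policy[classify(percept)]

act("snake in the grass")    # -> "flee"
policy["threat"] = "freeze"  # rewiring the classify→intervene link
act("snake in the grass")    # -> "freeze"
```

Because the policy is data rather than hard-wired behavior, another agent can imitate it wholesale, which is the sense in which trigger-action patterns, not inert facts, are the unit that cultural evolution operates on.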
