Humans in general rely too much on their inside views and too little on outside views.
The question of how much weight to put on one or the other often comes up in discussions of AI timelines, among other things ("this time is different" vs. "another winter is coming").
I’ve heard a reasonable critique of the EA community for sitting too far toward the outside-view end of the spectrum (which I parse as a euphemism for “a bunch of philosophers trying to make technology go well without understanding it”).
An analogy that I find quite persuasive: if you were wondering about the prospects for nuclear bombs in the 1930s and trying to assess expert opinion, you’d want to gauge whether both sides had thought about atoms (yes) and chain reactions (not everyone), and then be able to zoom in on the opinions of those who had engaged with the relevant details.
In the case of deep learning, it’s very clear that, say, Gary Marcus is strawmanning the developers and not citing developments like the Neural Turing Machine, NALU and many others that aim at symbolic and arithmetic reasoning in neural nets. The timeline estimates of deep learning optimists already incorporate this information, so his critique doesn’t give much reason to update.
I certainly feel like I can make more informed judgments as a result of having studied machine learning, math and the natural sciences over the last few years. That said, getting good feedback in the domains I’m interested in is hard, and I might just be kidding myself. Fortunately, prediction platforms like Metaculus seem to be reaching a threshold of usefulness for giving feedback on inside-view forecasts.
- I expect that it will often be the case that people skeptical of a technology cannot correctly represent the other position, and perhaps even show signs of worse epistemic standards, but will in fact be closer to correct, based on fuzzy intuitions about how society and institutions work that the optimists haven’t built up.
- I’m worried about a trap of the form: for every plausibly big-deal technology, there will be good inside-view arguments justifying hype around it. If you choose what to work on by the expected value of your actions, you will almost inevitably end up working on whatever technology has the nicest fairy tale of impact.