Human priors: you can’t have your cake and eat it too

by Emin Orhan

This is just a brief post to highlight a brilliant paper I have read recently and to share some thoughts sparked off by it. The paper in question is Investigating human priors for playing video games by Rachit Dubey and colleagues. In this paper, the authors address a very basic question: What kinds of specific prior knowledge do humans bring to the task of learning a new video game that make them much more efficient (in terms of the amount of training experience required) than neural networks learning a similar game through deep RL algorithms?

Their approach to this question is thoroughly (and admirably) empirical. By designing versions of the basic game that eliminate different kinds of structure, they are able not only to identify, but also to quantify the relative importance of, the various regularities or meaningful structures in the game that human players exploit. The idea is that if the removal of a particular meaningful structure or regularity severely affects the learning performance of human players, then that must be an important regularity that humans rely on heavily in learning the game. Conversely, if the removal of a meaningful structure or regularity does not affect the learning performance of human players, it is reasonable to assume that the regularity is not an important part of the prior knowledge that human players bring to learning the new game.
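
To make the quantification logic concrete, here is a toy sketch of the ablation idea: the importance of a given prior is read off from how much human learning performance degrades when the corresponding structure is removed from the game. The ablation names and performance numbers below are made-up placeholders of mine, not data from the paper.

```python
# Toy illustration of the ablation logic: the more human performance degrades
# when a structure is removed, the more important the corresponding prior.
# All names and numbers are hypothetical placeholders, not data from the paper.
baseline_time = 100.0  # hypothetical time (seconds) to finish the original game

ablated_times = {
    "object semantics removed": 180.0,
    "object affordances removed": 300.0,
    "object concept / visual similarity removed": 900.0,
}

for ablation, time_to_finish in ablated_times.items():
    # Relative slowdown versus the original game serves as an importance score.
    importance = time_to_finish / baseline_time
    print(f"{ablation}: {importance:.1f}x slower than the original game")
```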

Using this logic, the authors show that the following are all parts of the prior knowledge that people draw on in learning a new game, listed in order of increasing importance: (i) knowledge about how to interact with objects (e.g. one can climb a ladder by pressing the up key rather than zigzagging left and right, or jump over monsters by pressing the up and right (or left) keys); (ii) knowledge about object affordances (e.g. platforms support walking and ladders support climbing) and object semantics (e.g. one has to avoid fires and angry-looking monsters); and (iii) the concept of an object as distinct from the background, together with a prior that expects visually similar things to behave similarly.

One might naively think that it would be a great idea to build all of this prior knowledge into our neural networks (if only we knew how to do that!). Doing so would be expected to greatly increase the sample efficiency of neural networks when they encounter similar problems. However, one has to be careful here, since building in any kind of prior knowledge always comes at a cost: namely, if the problems encountered by the model do not conform to the assumed prior, the prior, rather than improving sample efficiency, may in fact worsen it. The authors are (again, admirably) very clear about this potential drawback of prior knowledge, which they illustrate with a simple example video game:

However, being equipped with strong prior knowledge can sometimes lead to constrained exploration that might not be optimal in all environments… Thus, while incorporating prior knowledge in RL agents has many potential benefits, future work should also consider challenges regarding under-constrained exploration in certain kinds of settings.

Building in too many human priors could also make models vulnerable to a sort of “adversarial attack” in which an adversary designs “unnatural” versions of a task that are difficult to solve for a model with a lot of human priors (similar to the games created in the paper), a sort of inverse-CAPTCHA. A less constrained model with fewer human priors would be less vulnerable to such attacks. Besides, although human priors (especially more general ones such as the concept of an object or the visual similarity prior) are probably broadly useful in vision- or language-related problems, a growing number of the problems we face today in the applied sciences are not obviously amenable to any kind of useful human prior: problems in bioinformatics or molecular design come to mind, for example. Trying to prematurely incorporate human priors into our models (whatever they might be) may hinder performance in such cases. Furthermore, human priors (especially more specific ones such as those that arise in naive physics or naive psychology) are often plain wrong, in which case incorporating them into our models doesn’t make any sense.

This is where the importance of meta-learning comes in, I think. Rather than trying to build in assumptions a priori, in many cases it would be more promising to meta-learn prior knowledge or assumptions that are broadly useful for a particular class of problems. Although there has recently been some interesting work in this direction, the prior knowledge or assumptions meta-learned in these works are quite primitive, e.g. meta-learning a good initialization (sketched below). Meta-learning more substantial prior knowledge (for example, in the form of a model architecture) very much remains an interesting open problem.
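
As an illustration of the “primitive” kind of meta-learned prior mentioned above, here is a minimal sketch of meta-learning an initialization in the style of first-order MAML/Reptile-like methods. The task family (random sinusoid regression), the random-feature model, and all hyperparameters are illustrative assumptions of mine, not anything from the post or the paper.

```python
# A minimal sketch of meta-learning an initialization (first-order MAML style).
# The task family, model, and hyperparameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Fixed random cosine features shared across tasks (keeps the model tiny).
FEAT_W = rng.normal(size=(1, 32))
FEAT_B = rng.uniform(0.0, 2.0 * np.pi, size=32)

def features(x):
    return np.cos(x[:, None] * FEAT_W + FEAT_B)  # shape: (len(x), 32)

def predict(params, x):
    return features(x) @ params

def mse_grad(params, x, y):
    """Gradient of mean squared error with respect to params."""
    residual = predict(params, x) - y
    return 2.0 * features(x).T @ residual / len(x)

def sample_task():
    """A task is a random sinusoid y = A * sin(x + phase)."""
    amplitude = rng.uniform(0.5, 2.0)
    phase = rng.uniform(0.0, np.pi)
    return lambda x: amplitude * np.sin(x + phase)

# Meta-training: learn an initialization that adapts quickly to new tasks.
meta_params = np.zeros(32)
inner_lr, meta_lr = 0.1, 0.01
for _ in range(2000):
    task = sample_task()
    x_support = rng.uniform(-5.0, 5.0, size=10)
    x_query = rng.uniform(-5.0, 5.0, size=10)
    # Inner loop: adapt to the task with one gradient step from the shared init.
    adapted = meta_params - inner_lr * mse_grad(meta_params, x_support, task(x_support))
    # Outer loop (first-order approximation): nudge the initialization toward
    # parameters that perform well on held-out query points *after* adaptation.
    meta_params -= meta_lr * mse_grad(adapted, x_query, task(x_query))

# After meta-training, `meta_params` is a prior in the form of an initialization:
# a few gradient steps on a handful of points from a new sinusoid adapt it quickly.
```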
