In this post, I’d like to take issue with arguments of this sort. Actually, my objection to these kinds of arguments is already well-known. Zador has a section in his paper addressing precisely this objection (the section is titled “supervised learning or supervised evolution?”). So, what is the objection? The objection is that arguments of this sort conflate biology and simulation. They assume that the learning that happens in an artificial neural network is comparable to the learning that happens in a biological system over its individual lifespan. But there’s no good reason to think of artificial learning in this way. We should rather think of it as a combination of the learning that happens over an individual lifespan *and* the adaptations that take place over evolutionary time scales. When we think of artificial learning in this light, the sample efficiency argument in favor of animals falls by the wayside, because biological evolution has been running the most intense optimization algorithm in the biggest and the most detailed simulation environment ever (called “the real world”) for billions of years (so much for “one-shot” learning!).

As I said, Zador is aware of this objection, so what is his response to it? As far as I can tell, he doesn’t really have a very convincing one. He correctly points out the differences between biological optimization and learning in artificial networks, but these differences don’t mean that the two processes can’t generate functionally equivalent networks.

For example, biological optimization runs two nested optimization loops, the inner loop characterizing the learning processes in individual lifespans, the outer loop characterizing the adaptations over evolutionary time scales. This is similar to a learning paradigm called *meta-learning* in machine learning. And because of its similarity to biology, Zador is very much sympathetic to meta-learning. But in my mind the jury is still out on whether meta-learning has any significant advantages over other standard learning paradigms in machine learning. There are recent results suggesting that in practical problems one doesn’t really need the two separate optimization loops in meta-learning (one loop is all you need!). Moreover, if one trains one’s model in a sufficiently diverse range of problems (but crucially using a standard learning paradigm, such as supervised learning or reinforcement learning), “meta-learning” like effects emerge automatically without any need for two separate optimization loops.

The core problem here, I think, is again conflating biology and simulation. Just because we see something in biology doesn’t mean we should emulate it blindly. Biology is constrained in many ways simulation isn’t (and vice versa). Of course it makes sense to use two separate optimization loops in biology, because individual lifespans are limited, but this isn’t true in simulation! We can run our models arbitrarily long on arbitrarily many tasks in simulation.

I think this (i.e. the mismatch between biology and simulation) is also why naive ways of emulating the brain’s innate inductive biases, like trying to directly replicate the concept of “cell types” in the brain, are usually not very effective in artificial neural networks. In my opinion, these features are essentially consequences of the brain’s suboptimal learning algorithms (over developmental time scales): because lifetime learning is limited, the brain has to off-load a significant chunk of the optimization burden to evolution, which needs to craft these intricate cell types to compensate for the suboptimality of learning over developmental time scales. Learning in artificial neural networks, on the other hand, is much more powerful: it is not constrained by all the things that biological learning is constrained by (for example, locality and limited individual lifespans), so it doesn’t really need to resort to these kinds of tricks (like different innate cell types) to learn something functionally equivalent over an individual lifespan.

I have to first clarify what exactly I mean by “deep credit assignment” or “something like backprop”. This is still not going to be very exact, but by this I basically mean a global credit assignment scheme that propagates precise “credit signals” from elsewhere in a deep network in order to compute a local credit signal. I thus include any gradient-based method (first-, second-, or higher-order) in this category, as well as imperfect, heuristic versions of it such as feedback alignment or weight mirrors. There are some borderline cases such as decoupled neural interfaces that compute credits locally, but also learn gradient estimates over longer timescales. I’m inclined to include these in the “deep credit assignment” category as well, but I would have to think a bit more carefully about it before doing so confidently.

Now moving on to the two reasons that make me think the brain probably doesn’t do “deep credit assignment”. The first reason is this. I think it is very natural to think that the brain should be doing something like backprop, because it is what works today! Deep neural networks trained with gradient descent have been enormously successful in a variety of important and challenging tasks, like object recognition, object detection, speech recognition, machine translation etc. But it is very important to also remember that these successes depend on the current hardware technology. The methods that work well today are methods that work well on *current hardware*. But the current hardware technology is constrained in a variety of ways that brains aren’t.

Take the example of memory. Fast memory (for example, on-chip memory) is extremely limited in size in the current chip technology (although this may be beginning to change with novel chip designs such as Graphcore’s IPU and Cerebras’s WSE). But, there is no reason to think that the brain is limited in the same way, because the brain has a completely different computational architecture!

What is the significance of this observation? Well, even today we know that there are promising alternatives to backprop training in deep nets that are currently intractable precisely because of memory constraints: I’m thinking, in particular, of the *deep nets as Gaussian processes* (GPs) perspective. Amazingly, this method doesn’t require *any* training in the usual sense. Inference is accomplished through a single forward pass, just like in backprop-trained nets. The catch is that, unlike in backprop-trained nets, this forward pass doesn’t scale well with the data size: it requires manipulating huge matrices. To my knowledge, these methods currently remain completely intractable for, say, ImageNet-scale data, where the said matrices become terabyte-sized (there’s this recent paper that carries out exact GP computations on a million data points, but they use low-dimensional datasets and they don’t use deep-net kernels in their experiments; exact GPs on high-dimensional data using deep-net kernels remain highly intractable, to the best of my knowledge).
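To make the memory bottleneck concrete, here is a minimal sketch of exact GP regression in NumPy. This is my own toy rendering with a generic RBF kernel standing in for a deep-net kernel, and the function names and noise hyper-parameter are my choices: prediction is a single “forward pass”, but it requires building and solving against the full n × n kernel matrix.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Squared-exponential kernel; a deep-net (NNGP) kernel would slot in here instead.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    # "Training-free" inference: a single solve against the full n x n kernel matrix.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)  # O(n^3) time, O(n^2) memory

# At ImageNet scale (n ~ 1.2M), K alone would occupy roughly
# (1.2e6 ** 2) * 8 bytes ~ 1.2e13 bytes, i.e. ~11 TB in float64.
```

The back-of-the-envelope comment at the end is exactly why “terabyte-sized matrices” is not hyperbole.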

As a side note here, I would like to express my gut feeling that there may be more tractable versions of this idea (i.e. *training-free* or *almost training-free* deep nets) that are not being explored thoroughly enough by the community. One simple idea that I have been thinking about recently is the following. Suppose we take the architecture of your favorite deep neural network and set the parameters of this network layer by layer using only information inherent in the training data, say, a large image dataset. This would work as follows. Suppose the network has *k* filters in its first (convolutional) layer. Then we can either crop *k* random patches of the appropriate size from the training images or maybe do something more intelligent, like exhaustively cropping all non-overlapping patches of the appropriate size and then doing something like *k*-means clustering to reduce the number of crops to *k* and then setting those to be the first layer filters. This then fixes the first layer parameters (assuming the biases are zero). We can iterate this process layer by layer, at each layer computing a number of clusters over the activations of the previous layer across the entire training data and then basically “pasting” those clusters to the layer weights. Note that in this scheme even though there is learning (after all we are using the training data), it is minimal and non-parametric (we’re doing only *k*-means, for example), and nothing like a gradient-based learning scheme. I think it would be interesting to find out how well one can do with a scheme like this that uses minimal learning and utilizes almost exclusively the prior information inherent in the network architecture and the training data instead.
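The first-layer step of the scheme above can be sketched in a few lines. All names and hyper-parameters here are my own choices, and the tiny hand-rolled k-means is only there to keep the sketch self-contained: crop random patches from the training images, cluster them, and use the *k* cluster centers as the layer’s filters.

```python
import numpy as np

def kmeans(X, k, iters=20, rng=None):
    # Minimal k-means, just to keep the sketch self-contained.
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def kmeans_filters(images, k, patch=5, n_patches=10000, rng=0):
    # Set the first-layer filters of a conv net without any gradients: crop
    # random patches from the training images, cluster them, and "paste" the
    # k cluster centers into the layer weights (biases fixed at zero).
    rng = np.random.default_rng(rng)
    n, h, w, c = images.shape
    patches = np.empty((n_patches, patch * patch * c))
    for i in range(n_patches):
        img = images[rng.integers(n)]
        y, x = rng.integers(h - patch + 1), rng.integers(w - patch + 1)
        patches[i] = img[y:y + patch, x:x + patch].ravel()
    return kmeans(patches, k, rng=rng).reshape(k, patch, patch, c)
```

Subsequent layers would repeat the same recipe on the previous layer’s activations computed over the training set.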

So, this was the first reason why I think the brain probably doesn’t do something like backprop, to wit: backprop seems to me too closely wedded to the current hardware technology. My hunch is that there are many more interesting, novel (and probably more biologically plausible) ways of building intelligent systems that don’t require anything like backprop, but we’re currently not exploring or considering these because they remain intractable with current hardware (large scale GPs being one concrete example).

The second reason is that even with the current hardware we have some recent hints that purely local learning schemes that don’t require any deep credit assignment can rival the performance of backprop training in realistic tasks (and if the brain doesn’t *have to* do something, the chances are it’s not going to do it!). I’d like to mention two recent papers, in particular: *Greedy layerwise learning can scale to ImageNet* by Belilovsky et al. and *Training neural networks with local error signals* by Nokland and Eidnes. These papers both introduce completely local, layerwise training schemes and show that they can work as well as end-to-end backprop in standard image recognition problems.

Although these results are impressive, I still consider them initial efforts. I feel pretty confident that if more collective effort is put into this field, even better local training schemes will be discovered. So, to me these results suggest that the problems we typically solve with end-to-end deep learning these days may not be hard enough to require the full force of end-to-end backprop. Furthermore, with each new clever local learning trick, we will discover these problems to be even easier than we had previously imagined, in the end coming full circle from a time when we considered computer vision an easy problem, to discovering that it is in fact hard, to discovering that it isn’t *that* hard after all!

**Update (01/16/2020):** I just found out that the idea I described in this post for building a gradient descent-free image recognition model using *k*-means clustering was explored in some early work by Adam Coates and colleagues with promising results (for example, see this paper and this). They use relatively small datasets and simple models in these papers (early days of deep learning!), so maybe it is time to reconsider this idea again and try to scale it up.

The previous state-of-the-art results on ImageNet-A, ImageNet-C, and ImageNet-P were reported in a paper I posted on arXiv in July this year; they were achieved by a large model trained by Facebook AI researchers on ~1B images from Instagram using “weak” (i.e. noisy) labels and then fine-tuned on ImageNet (these models are called ResNeXt WSL models, WSL standing for *weakly supervised learning*). People who have worked on these benchmarks before will appreciate how impressive these numbers are. Particularly impressive for me are the ImageNet-A results. This benchmark was itself introduced in the summer of this year, and given the lackluster performance of even the best ResNeXt WSL models reported in my paper, I thought it would take a while to see reasonably high accuracies on this challenging benchmark. I was spectacularly wrong!

So, how did they do it? Their method relies on the old idea of co-training: starting from a model trained on a relatively small amount of high-quality labeled examples (in this case, ImageNet trained models), they infer labels on a much larger unlabeled dataset (in this case, the private JFT-300M dataset), then they train a model on the combined dataset (labeled + unlabeled) using random data augmentation during training, then they iterate this whole process several times.
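Schematically, the loop looks something like the sketch below. This is my own toy rendering, not the actual large-scale pipeline: the nearest-centroid “model” and the Gaussian-jitter “augmentation” are purely illustrative stand-ins for the real networks and augmentation schemes.

```python
import numpy as np

def self_training(X_lab, y_lab, X_unlab, fit, predict, rounds=3, noise=0.05, seed=0):
    # Iterated bootstrapping: a teacher fit on the labeled set pseudo-labels the
    # unlabeled pool; a student is retrained on the union with data augmentation
    # (here just Gaussian jitter); the student becomes the next teacher.
    rng = np.random.default_rng(seed)
    model = fit(X_lab, y_lab)                              # seed teacher
    for _ in range(rounds):
        y_pseudo = predict(model, X_unlab)                 # infer labels on unlabeled data
        X = np.vstack([X_lab, X_unlab])
        y = np.concatenate([y_lab, y_pseudo])
        X_aug = X + noise * rng.standard_normal(X.shape)   # stand-in for augmentation
        model = fit(X_aug, y)                              # retrain on combined data
    return model

# A toy nearest-centroid "model" to make the loop concrete:
def fit_centroids(X, y):
    return {c: X[y == c].mean(0) for c in np.unique(y)}

def predict_centroids(model, X):
    classes = sorted(model)
    dists = np.stack([((X - model[c]) ** 2).sum(1) for c in classes], axis=1)
    return np.array(classes)[dists.argmin(1)]
```

The interesting part is how much mileage this simple loop gets once the models and datasets are large enough.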

In my arXiv paper posted back in July, I had confidently claimed that:

We find it unlikely that simply scaling up the standard object classification tasks and models to even more data will be sufficient to feasibly achieve genuinely human-like, general-purpose visual representations: adversarially robust, more shape-based and, in general, better able to handle out-of-sample generalization.

Although the Google paper doesn’t use a “standard” training paradigm, I would definitely consider it pretty close (after all, they simply find a much better way to make use of the large amount of unlabeled data, bootstrapping from a relatively small amount of labeled data, otherwise the setup is a pretty standard semi-supervised learning setup). So, I would happily admit that these results at least partially disprove my claim (it still remains to be seen to what extent this model behaves more “human-like”, I would love to investigate this thoroughly once the trained models are released).

This paper also highlights a conflict that I feel very often these days (also discussed in this earlier post). Whenever I feel pretty confident that standard deep learning models and methods have hit the wall and that there’s no way to make significant progress without introducing more priors, somebody comes along and shatters this idea by showing that there’s actually still a lot of room for progress by slightly improved (yet still very generic) versions of the standard models and methods (with no need for stronger priors). I guess the lesson is that we really don’t have very good intuitions about these things, so it’s best not to have very strong opinions about them. In my mind, the empirical spirit driving machine learning these days (“just try it and see how well it works”) is probably the best way forward at this point.

Another lesson from this paper is that bootstrapping, self-training type algorithms might be powerful beyond our (or at least my) paltry imagination. GANs and self-play type algorithms in RL are other examples of this class. We definitely have to better understand when and why these algorithms work as well as they seem to do.

**Update (12/03/19):** Another interesting paper from Google came out recently, proposing adversarial examples as a data augmentation strategy in training large scale image recognition models. Surprisingly, this doesn’t seem to lead to the usual clean accuracy drop if the BatchNorm statistics are handled separately for the adversarial examples vs. the clean examples and the perturbation size is kept small. Interestingly for me, the paper also reports non-trivial ImageNet-A results for the large baseline EfficientNet models. For example, the standard ImageNet-trained EfficientNet-B7 model has a reported top-1 accuracy of 37.7%. This is far better than the 16.6% top-1 accuracy achieved by the largest ResNeXt WSL model. These large EfficientNet models use higher resolution images as inputs, so it seems like just increasing the resolution gains us non-trivial improvements on ImageNet-A. This doesn’t diminish the impressiveness of the self-training results discussed in the main post above, but it suggests that part of the improvements there can simply be attributed to using higher resolution images.

In a reading group on program induction, we’ve recently discussed an interesting paper by Mollica & Piantadosi on learning kinship words, e.g. *father*, *mother*, *sister*, *brother*, *uncle*, *aunt*, *wife*, *husband* etc. In this paper, they are formalizing this as a probabilistic program induction problem. This approach comes with all the benefits of explicit program-like representations: compositionality, sample efficiency, systematic generalization etc. However, I’m always interested in neurally plausible ways of implementing these types of representations. The paper discussed an earlier work by Paccanaro & Hinton, which proposes a vector space embedding approach to the same problem. So, I decided to check out that paper.

Paccanaro & Hinton model people as vectors and relations between people as matrices (so *R v_a = v_b* might mean “*b* is the father of *a*”). The idea is then to learn vector representations of the people in the domain, *v_a*, and matrix representations of the relations between them, *R*, such that the distance between *R v_a* and *v_b* is minimized if the corresponding relation holds between *a* and *b*, and maximized otherwise. This is (by now) a very standard approach to learning vector space embeddings of all sorts of objects. I have discussed this same approach in several other posts on this blog (e.g. see here and here).

Paccanaro & Hinton model each relation with a separate unconstrained matrix. Unfortunately, I think this is not really the best way to approach this problem, since it ignores a whole lot of symmetry and compositionality in relationships (which very likely negatively impacts the generalization performance and the sample efficiency of the model): for example, if *a* is the father of *b*, then *b* is a son or a daughter of *a*. Most primitive relations are actually identical up to inversion and gender. Other relations can be expressed as compositions of more primitive relations, as in Mollica & Piantadosi.

So, I tried to come up with a more efficient scheme than Paccanaro & Hinton. My first attempt was to use only two primitive relations, *M* (e.g. *mother of*) and *W* (e.g. *wife of*), and to use matrix inversion and transpose to express the symmetric and opposite-gendered versions of a relationship. Here are some examples:

*M*: *mother*

*M*ᵀ: *father*

*M*⁻¹: *daughter*

*M*⁻ᵀ: *son*

*M M*ᵀ: *father’s mother*

*M*ᵀ *M*: *mother’s father*

*M*⁻ᵀ *M*: *mother’s son* *(brother)*

*M*⁻¹ *M*ᵀ: *father’s daughter* *(sister)*

At this point, we run into a problem: *mother’s daughter* and *father’s son* always evaluate to *self* (the identity matrix), and this just doesn’t quite feel right. Intuitively, we feel that extensions of these concepts should include our sisters and brothers as well, not just us. The fundamental problem here is that we want at least some of these concepts to be able to pick out a *set* of vectors, not just a single vector; but this is simply impossible when we’re using matrices to represent these concepts (when applied to a vector, they will give back another vector). This seems like a fairly basic deficiency in the expressive power of this type of model. If anybody reading this has any idea about how to deal with this issue in the context of vector space models, I’d be interested to hear about it.
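Setting that deficiency aside, the matrix algebra here is easy to sanity-check numerically. Writing *M* for the *mother of* primitive, with transpose as the gender flip and inverse as the child-of direction, *mother’s daughter* and *father’s son* collapse to the identity, while *mother’s son* does not. A quick NumPy check with a random (hence almost surely invertible) matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))          # "mother of" primitive (random invertible matrix)

mothers_daughter = np.linalg.inv(M) @ M  # daughter-of applied after mother-of
fathers_son = np.linalg.inv(M.T) @ M.T   # son-of applied after father-of
mothers_son = np.linalg.inv(M.T) @ M     # brother

I = np.eye(4)
assert np.allclose(mothers_daughter, I)  # collapses to *self*
assert np.allclose(fathers_son, I)       # likewise
assert not np.allclose(mothers_son, I)   # a genuinely different map
```

So the collapse to *self* is a structural fact about this representation, not an artifact of any particular choice of *M*.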

Another question is: assuming something like this is a reasonably good model of kinship relations (or similar relations), how do we learn the right concepts given some relationship data (e.g. observed pairs of people standing in the *mother of*, *sister of*, etc. relations)? If we want to build an end-to-end differentiable model, one idea is to use something like a deep sparsely gated mixture of experts model where at each “layer” we pick one of our 7 primitive relations (indexed from 0 to 6):

*v_out = R_{g_L} ⋯ R_{g_2} R_{g_1} v_in*, where each *R_{g_l}* is one of {*I*, *M*, *M*ᵀ, *M*⁻¹, *M*⁻ᵀ, *W*, *W*⁻¹} (the identity plus the two primitives and their gendered/inverted variants)

and the specific gating chosen depends on the input and output vectors, *v_in* and *v_out*.

So, to give an example, if we allow up to 5 applications of the primitives, the output of the gating function for a particular input-output pair might be something like (0, 0, 0, 1, 2), or a suitable continuous relaxation of this. Reading the applications left to right (with index 1 denoting *M* and index 2 denoting *M*ᵀ), this particular gating expresses the relationship *M*ᵀ *M* (*mother’s father*), whereas (0, 0, 0, 2, 1) would correspond to *M M*ᵀ (*father’s mother*). If we use a suitably chosen continuous relaxation for the discrete gating function, the whole model becomes end-to-end differentiable and can be trained in the same way as in Paccanaro & Hinton. We can also add a bias favoring the identity primitive over the others in order to learn simpler mappings (as in Mollica & Piantadosi). It would be interesting to test how well this model performs compared to the probabilistic program induction model of Mollica & Piantadosi and compared to less constrained end-to-end differentiable models.

**Update (10/11/19):** There’s some obvious redundancy in the scheme for representing compositional relations described in the last paragraph: applications of the identity don’t have any effect on the resulting matrix, and successive applications of *M* and *M*⁻¹ (or *M*ᵀ and *M*⁻ᵀ, or *W* and *W*⁻¹) cancel each other out. So, a leaner scheme might be to first decide on the number of non-identity primitives to be applied and generate a sequence of exactly that length using only the 6 non-identity primitives. The successive application of inverted pairs can be further eliminated by essentially hard-coding this constraint into the gating function. These details may or may not turn out to be important.

To do this, I compared models that employ a mixture of learning and memorizing strategies. Given a finite amount of “training” data, a *k*%-learner uses *k*% of this data for learning and memorizes the rest of the data using a simple key-value based cache memory. A 100%-learner is a pure learner that is typical in machine learning. For the learning model, I used a ResNet-32 and for the memory model, I used the cache model described in this paper. The predictions of a *k*%-learner are given by a linear combination of the predictions obtained from the learner (ResNet-32) and the predictions obtained from the cache memory:

*prediction = w * (prediction from the learning model) + (1 - w) * (prediction from the cache memory)*

where *w* is a hyper-parameter that is estimated separately for each *k*%-learner (I assume that the cost of learning a single hyper-parameter is negligible compared to the cost of learning the parameters of a model).

Suppose I already used up *k*% of the data for training my ResNet-32 model and this achieves a generalization accuracy of *x*. Now the question is: what should I do with the rest of the data? I can either use that data to continue to train my model, which leads to a 100%-learner; let’s say this 100%-learner achieves an accuracy of *y*. Alternatively, I can just memorize the remaining data by caching (with the help of my partially trained ResNet-32 model), which leads to a *k*%-learner; let’s say this *k*%-learner achieves an accuracy of *z*. Then, given that I have already used *k*% of the data for learning, the relative value of learning the remaining data over just memorizing it is defined by:

*relative_value_of_learning(k) = (y-x) / (z-x)*

that is, the improvement in accuracy achieved by a 100%-learner divided by the improvement in accuracy achieved by the *k*%-learner. A large value here indicates that learning is much more valuable than memorizing (*i.e.* it pays off to learn from the remaining data rather than just memorizing it) and a value of 1 would indicate that learning and memorizing are equally valuable. In the latter case, given that learning is usually computationally much more expensive than memorizing, we would probably be inclined to memorize rather than learn.
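In code, the pieces of this setup are very simple (this is a paraphrase in my own notation; `theta` and the plain dot-product cache are stand-ins for the cache model in the cited paper):

```python
import numpy as np

def cache_predict(queries, keys, values, theta=30.0):
    # Similarity-weighted vote over memorized (feature, one-hot label) pairs;
    # features are assumed unit-normalized.
    sims = queries @ keys.T
    w = np.exp(theta * (sims - sims.max(axis=1, keepdims=True)))
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

def combined_predict(p_learner, p_cache, w):
    # The k%-learner's prediction: a convex combination of the two models.
    return w * p_learner + (1 - w) * p_cache

def relative_value_of_learning(x, y, z):
    # x: accuracy after learning on k% of the data
    # y: accuracy of the 100%-learner; z: accuracy of the k%-learner
    return (y - x) / (z - x)
```

For instance, x = 0.50, y = 0.90, z = 0.54 gives a relative value of 10: continuing to learn bought ten times the improvement that memorizing did.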

The following figure shows the *relative_value_of_learning(k)* as a function of *k* for the CIFAR-10 benchmark.

So, by this measure, learning is ~10 times as valuable as memorizing in this task. There appears to be a decreasing trend in the value of learning as *k* becomes larger, but the data is a bit noisy (ideally, I should have run this simulation multiple times to get more reliable estimates).

Is this result surprising? It was surprising to me! I was expecting the relative value of learning to be smaller and the curve shown above to approach 1 much more quickly. So, now I am a bit less skeptical of the growing literature on biologically plausible analogues of backpropagation after this exercise. There is definitely a lot of value in learning good representations (much more value than I had initially thought).

Some caveats: this exercise is specific to a particular task and particular learning and memorizing models. The results might be different in different setups. Given that much of the effort in machine learning is directed toward coming up with better pure learning models (rather than better memory models), I expect that the relative value of learning estimated here is an overestimate, in the sense that one can improve the performance of memorizing models by using more sophisticated memory models than the simple key-value cache model assumed in this exercise.

Finally, an analysis like this should help us perform a cost-benefit analysis for learning vs. memorizing both in natural and artificial agents. Coming up with cost estimates is probably easier in artificial agents: for example, one can estimate the FLOPS involved in learning vs. memorizing a given amount of data; or one can include memory costs as well. Depending on our exact cost function, the optimal strategy would involve a specific mix, or a specific trajectory, of learning vs. memorizing during the lifetime of the agent.

To be fair, there are good reasons to think like this. Deep learning models trained in this way are currently the best models of the ventral visual cortical areas in primates and just considering their zeroth-order performance, nothing else really even comes close to achieving near human or, in some cases, even super-human performance in sufficiently challenging object recognition tasks.

Of course, when we look a bit more closely, there are also very good reasons to be skeptical of the claims that these models are adequate models of the primate visual systems in general (and human visual system in particular). Chief among those reasons is the surprising (almost shocking) sensitivity of these models to adversarial *and* natural perturbations, very unlike human vision. Another reason to be skeptical is that when people actually do a more fine-grained analysis of how humans vs. deep vision models perform on realistic image recognition tasks, they find significant differences between how the two behave.

In this post, I would like to add one more reason to the skeptic’s arsenal and argue that current deep learning models for object recognition behave *psychologically* unrealistically and that our brains don’t seem to me to be solving object recognition type problems in the same way. My argument is exceedingly simple. It’s an argument from subjective experience and it goes as follows. When I recognize an object, it usually comes with a strong sense of novelty or familiarity. When I recognize a coffee mug, for instance, I don’t just recognize it as *a* mug, but as *this particular* mug that I have seen before (maybe even as my own mug) or as a novel mug that I haven’t seen before. This sense of familiarity/novelty comes automatically, involuntarily, even when we are not explicitly trying to judge the familiarity/novelty of an object we are seeing. More controlled psychological experiments also confirm this: humans have a phenomenally good memory for familiarity with a massive capacity even in difficult one-shot settings (see e.g. this classic study by Lionel Standing or this more recent study by Tim Brady and colleagues).

In other words, our acts of recognition have a strong and automatic episodic component. This episodic component is mostly lacking in current deep vision models. They don’t have a natural way of telling whether an object is novel or familiar at the same time as they are performing the recognition task.

There may be indirect ways of doing this in trained networks; for example, maybe novel and familiar (i.e. training and test) objects produce different activation distributions in a trained network. I actually don’t know if this is the case or not, but my point is just that current deep vision models do not perform this computation naturally and automatically as part of the computation they perform for recognizing the objects in the first place. This appears to me to be a big difference from the way we humans seem to do similar tasks.

So, how can we add this episodic component to the current generation of deep vision models? Shameless plug: I wrote a paper on this. The solution turns out to be really simple: just cache everything you can (ideally everything you have seen so far), using sufficiently high-level features (not too low-level stuff). And use the cache while making predictions. Retrieval from the cache is essentially a form of episodic memory. This is not even a novel solution. People have been proposing similar ideas in reinforcement learning and in language modeling (in fact, my paper was directly inspired by this last paper). In my paper, I showed that this cache-based model is incredibly robust to adversarial perturbations, so much so that when using only the cache memory to make predictions, I wasn’t able to generate *any* convincing adversarial examples, even with very strong attack methods (similar robustness results have been demonstrated in other papers as well). I strongly believe such cache-based models will also be much more adequate models of the human (and primate) visual system.
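A minimal sketch of the idea looks like this (my own toy rendering, not the exact model in my paper): cache unit-normalized high-level features together with their labels, and let the same retrieval that drives classification return a familiarity signal as a by-product.

```python
import numpy as np

class EpisodicCache:
    # Cache (high-level feature, label) pairs for everything seen so far; the
    # same retrieval that supports classification yields familiarity for free.
    def __init__(self):
        self.keys, self.labels = [], []

    def add(self, feature, label):
        self.keys.append(feature / np.linalg.norm(feature))
        self.labels.append(label)

    def query(self, feature, k=5):
        f = feature / np.linalg.norm(feature)
        sims = np.array(self.keys) @ f
        top = sims.argsort()[-k:]                  # k most similar stored episodes
        votes = np.bincount(np.array(self.labels)[top])
        familiarity = float(sims.max())            # high iff something similar was seen
        return votes.argmax(), familiarity
```

The point is architectural: familiarity falls out of the same forward computation as the class prediction, rather than requiring a separate system bolted on afterward.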

In a recent interview, Geoff Hinton said something quite similar to what I have tried to argue in this post about the difference between the current generation of deep learning models and the brain (if I interpret it correctly):

The brain is solving a very different problem from most of our neural nets… I think the brain isn’t concerned with squeezing a lot of knowledge into a few connections, it’s concerned with extracting knowledge quickly using lots of connections.

I think Hinton is fundamentally right here and I think a massive episodic memory is one of the basic mechanisms the brain uses to “extract knowledge quickly using lots of connections.” Among other things, I think one of the important implications of this point of view is that the current emphasis in some circles on trying to find sophisticated and powerful learning algorithms in the brain, which I alluded to above, may be misplaced. I actually think that backpropagation is probably much more sophisticated and powerful than anything we will find in the brain. Any learning algorithm in the brain is restricted in various ways that machine learning algorithms don’t have to be (e.g. locality, respecting the rules governing different cell types, etc.). On the other hand, in terms of the sheer number of neurons and the sheer number of connections, the human brain is massive compared to any model we have ever trained. It seems to me that we will soon find out that the algorithms relevant for the kind of machine the brain is are very different from the machine learning algorithms relevant for today’s piddling models (piddling relative to the size of the human brain, of course). For example, I have always thought that hashing algorithms, essential for performing similarity search over very large sets of high-dimensional objects, should be at least as important and as relevant as backpropagation (and probably more) in our quest to understand the brain. And I have at least some corroborating evidence from the fly brain, of all places!
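As a concrete example of what I mean, here is the classic random-hyperplane hashing trick (SimHash) in a few lines of NumPy: nearby feature vectors receive similar binary codes, so similarity search over a huge memory reduces to cheap Hamming-distance comparisons (the fly olfactory circuit has been argued to implement a variant of this idea).

```python
import numpy as np

def simhash(X, n_bits=16, seed=0):
    # Random-hyperplane LSH: each bit records which side of a random hyperplane
    # a vector falls on, so nearby vectors share most of their bits.
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

def hamming(a, b):
    return int((a != b).sum())
```

Searching a massive store of memories then amounts to comparing short binary codes instead of dense high-dimensional vectors.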

There are several problematic aspects of this line of argument. The first and probably the most obvious one is that when people make a claim like this, they rarely, if ever, quantify how much linguistic data a child *actually* receives during the first few years of its life. Secondly, even if one does go ahead and quantify exactly the kind and amount of data a child receives during language acquisition, one still has to do the hard work and show that convergence to the correct grammar cannot happen (or is very unlikely to happen) with relatively weak, generic biases, but instead requires strong language-specific biases (i.e. that the biases have to be in the form of some kind of universal *grammar*). This can be tested either with architecture-agnostic methods such as Bayesian learners or with specific learning architectures like neural networks. Perfors et al., for example, show, through the Bayesian route, that linguistic input contains enough information to favor a hierarchical (as opposed to linear or flat) grammar with no prior bias favoring hierarchical grammars, directly refuting the often-made Chomskyan claim that language learners must have a strong innate prior bias in favor of hierarchical grammars. This just demonstrates how error-prone our intuitions can be regarding the learnability or otherwise of certain structures from data without strong priors and the importance of *actually* *checking* what one can or cannot learn from given data.

As we are increasingly able to train very large models on very large datasets, I think we are beginning to grapple with a fundamental question about the nature of human-level or super-human intelligence: how far can we go with fairly generic, but very large architectures trained on very large datasets optimizing very generic objectives like prediction or curiosity? Is it possible to get all the way to human-level or super-human perceptual and cognitive abilities in this way, or alternatively is it necessary to incorporate strong inductive biases into the network architecture and we simply don’t know how to do this yet, both because we don’t know what the right inductive biases are and also because we don’t know how to implement them in our models? Personally, I would easily rate this as one of the most important outstanding questions in cognitive sciences and AI today.

My own thinking on this question has been evolving in the direction of the first possibility lately, i.e. that we can learn a lot more than we might naively imagine using fairly generic architectures and fairly generic unsupervised objectives. Part of the reason for this shift is a whole slew of recent work demonstrating that one can indeed learn highly non-trivial, surprising things even from relatively modest amounts of data using very generic network architectures and generic training objectives. In this post, I’d like to highlight a few of these recent results.

Building on earlier work demonstrating the power of transfer learning in language-related tasks, there has been a lot of progress this year in unsupervised pre-training of language models with large amounts of unlabeled data. For example, Radford et al. first pre-trained a large Transformer model on a language modeling task (i.e. given some prior context, predict the next token) with a relatively large dataset (see also this earlier paper by Howard & Ruder that implements essentially the same idea, but with a different model and different datasets). They then fine-tuned this pre-trained model on a variety of downstream supervised classification tasks and observed large gains in most downstream tasks over state of the art models specifically trained on those tasks. The dataset that they pre-trained the model on was a corpus of ~7000 books. Although this may seem like a big dataset (and it is big compared to the datasets typically used in NLP research), it is in fact a minuscule dataset relative to how large it could *potentially be*. For example, as of October 2015, Google Books contained ~25 million books, which is about 4 orders of magnitude larger than the corpus used in this study. I’m not sure about the number of parameters in the Transformer model used in this paper. By comparison, though, the human brain has on the order of $10^{14}$ synapses. We’ve never even come close to running models this big. Now try to imagine the capabilities of a system of that scale trained on a corpus of millions of books. It’s almost certain that such a system would shatter all state of the art results on pretty much any NLP benchmark that exists today.
It would almost definitely lead to qualitatively recognizable improvements in natural language understanding and common-sense reasoning skills, just as today’s neural machine translation systems are recognizably better than earlier machine translation systems, due in large part to much bigger models trained on much bigger datasets.
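The pre-training objective itself is easy to state in code. Here is a deliberately tiny, counting-based stand-in for the next-token prediction task (the real systems are, of course, large Transformers trained with a cross-entropy loss, but the objective is the same):

```python
from collections import Counter, defaultdict

def train_bigram_lm(tokens):
    """Estimate p(next token | previous token) by counting bigrams and
    return a greedy next-token predictor. A toy stand-in for the language
    modeling objective: given prior context, predict the next token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return {prev: c.most_common(1)[0][0] for prev, c in counts.items()}

predictor = train_bigram_lm("the cat sat on the mat".split())
predictor["on"]   # -> 'the'
```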

Another conceptually similar model that has been shown to work even better is the more recent BERT model by Devlin et al. The two major innovations in this paper over the Radford et al. paper are (i) the use of a bidirectional attention model, instead of the unidirectional (strictly left-to-right) attention model used in Radford et al.; and (ii) the use of two novel unsupervised pre-training objectives. Specifically, they use a masked token prediction task, where the goal is to predict some masked word or words in a sequence, rather than the more standard left-to-right prediction task used in Radford et al. and other language modeling papers. This objective allows the bidirectional attention model to make use of both the left and the right context in order to predict the masked words. In addition, they also use a novel unsupervised next sentence prediction task, where the objective is simply to predict whether two given input sentences actually follow each other or not. Training examples for this objective can be easily generated from the corpus. The motivation behind this second objective is to force the model to learn the relationships between sentences, rather than relationships between lower-level units such as words. This second objective turns out to be crucial for significantly improved performance in question answering and natural language inference tasks. The datasets used for pre-training the model amount to the equivalent of some ~30,000 books by my estimation. This is significantly bigger than the dataset used by Radford et al.; however, it’s still a few orders of magnitude smaller than the number of books that were available on Google Books as of October 2015.
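Both pre-training objectives are easy to illustrate at the data level. Here is a hedged sketch of how such training examples might be generated from a corpus (the masking recipe and all names here are simplifications of my own, not the paper’s exact procedure):

```python
import random

def make_training_examples(sentences, mask_prob=0.15, seed=0):
    """Generate BERT-style pre-training examples from tokenized sentences:
    (i) mask random tokens for the masked-word prediction objective, and
    (ii) pair each sentence with its true successor or a random other
    sentence for the next sentence prediction objective."""
    rng = random.Random(seed)
    examples = []
    for i, sent in enumerate(sentences):
        # Masked LM: replace ~15% of tokens with [MASK]; the targets
        # (what the model must predict) are the original tokens.
        masked, targets = [], {}
        for j, tok in enumerate(sent):
            if rng.random() < mask_prob:
                masked.append("[MASK]")
                targets[j] = tok
            else:
                masked.append(tok)
        # Next sentence prediction: half the time take the true next
        # sentence (label True), otherwise a random other one (label False).
        if i + 1 < len(sentences) and rng.random() < 0.5:
            nxt, is_next = sentences[i + 1], True
        else:
            others = [s for k, s in enumerate(sentences) if k != i + 1]
            nxt, is_next = rng.choice(others), False
        examples.append((masked, targets, nxt, is_next))
    return examples

corpus = [["the", "cat", "sat"], ["on", "the", "mat"], ["dogs", "bark", "loudly"]]
examples = make_training_examples(corpus)
```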

The bidirectional BERT model significantly outperforms the Radford et al. model on the GLUE benchmark even after controlling for the model size. This suggests that although both the model architecture and the pre-training objectives in the paper are still quite generic, not all generic architectures and objectives are the same, and finding the “right” architectures and objectives for unsupervised pre-training requires careful thinking and ingenuity (not to mention a lot of trial and error).

Large-scale study of curiosity-driven learning is another paper that came out this year demonstrating the power of unsupervised learning in reinforcement learning problems. In this quite remarkable paper, the authors show that an agent receiving absolutely no extrinsic reward from the environment (not even the minimal “game over” type terminal reward signal) and instead learning entirely based on an internally generated prediction error signal can learn useful skills in a variety of highly complex environments. The prediction error signal here is the error of an internal model that predicts a representation of the next state of the environment given the current observation and the action taken by the agent. As the internal model is updated over training to minimize the prediction error, the agent takes actions that lead to more unpredictable or uncertain states. One of the important messages of this paper is, again, that not all prediction error signals, hence not all training objectives, are equal. For example, trying to predict the pixels or, in general, some low-level representation of the environment doesn’t really work. The representations have to be sufficiently high-level (i.e. compact or low-dimensional). This is consistent with the crucial importance of the high-level next sentence prediction task in the BERT paper reviewed above.
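The core mechanism can be caricatured in a few lines of NumPy. This is a toy linear stand-in for the paper’s learned networks (the class and parameter names are my own, purely illustrative):

```python
import numpy as np

class CuriosityModule:
    """Toy sketch of curiosity-driven learning: a linear forward model
    predicts the next state from (state, action); its squared prediction
    error is the intrinsic reward, and the model is updated online so
    that familiar transitions become unrewarding over time."""

    def __init__(self, state_dim, action_dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((state_dim, state_dim + action_dim))
        self.lr = lr

    def intrinsic_reward(self, state, action, next_state):
        x = np.concatenate([state, action])
        error = next_state - self.W @ x          # forward-model prediction error
        self.W += self.lr * np.outer(error, x)   # update model to shrink the error
        return float(np.sum(error ** 2))         # the error itself is the reward

# Replaying the same transition makes it predictable, so the reward decays:
cm = CuriosityModule(state_dim=2, action_dim=1)
s, a, s2 = np.array([1.0, 0.0]), np.array([0.5]), np.array([0.0, 1.0])
rewards = [cm.intrinsic_reward(s, a, s2) for _ in range(50)]
```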

As the authors note, however, this kind of prediction error objective can suffer from a severe pathology, sometimes called the noisy TV problem (in the context of this paper, this problem can be more appropriately called a “pathological gambling” problem): if the agent itself is a source of stochasticity in the environment, it may choose to exploit this to always choose actions that lead to high-entropy “chancy” states. This strategy may in turn lead to pathological behaviors completely divorced from any external goals or objectives relevant to the task or tasks at hand. The authors illustrate this kind of behavior by introducing a “noisy TV” in one of their tasks and allowing the agent to change the channel on the TV. Predictably, the agent learns to just keep changing the channel, without making any progress in the actual external task, because this strategy produces high-entropy states that can be used to keep updating its internal model, i.e. an endless stream of “interesting”, unpredictable states (incidentally, this kind of pathological behavior seems to be common in humans as well).

Once more, this highlights the importance of choosing the right kind of unsupervised learning objective that would be less prone to such pathologies. One simple way to reduce this kind of pathology might be to yoke the intrinsic reward of prediction error to whatever extrinsic reward is available in the environment: for example, one may value the intrinsic reward only to the extent that it leads to an increase in the extrinsic reward after some number of actions.

To summarize the main points I’ve tried to make in this post and to conclude with a few final thoughts:

- Unsupervised learning with generic architectures and generic training objectives can be much more powerful than we might naively think. This is why we should refrain from making *a priori* judgments about the learnability or otherwise of certain structures from given data without hard empirical evidence.
- I predict that as we apply these approaches to ever larger models and datasets, the capabilities of the resulting systems will continue to surprise us.
- Although fairly generic architectures and training objectives have so far worked quite well, not all generic training objectives (and architectures) are the same. Some work demonstrably better than others. Finding the right objectives (and architectures) requires careful thinking and a lot of trial and error.
- One general principle, however, seems to be that one should choose objectives that force the model to learn *high-level* features or variables in the environment and the relationships between them. Understanding more rigorously why this is the case is an important question in my opinion: are low-level objectives fundamentally incapable of learning the kinds of things learnable through high-level objectives, or is it more of a sample efficiency problem?
- In addition to the examples given above, another great example of the importance of this general principle is the generative query network (GQN) paper by Deepmind, where the authors demonstrate the power of a novel objective that forces the model to learn the high-level latent variables in a visual scene and the relationships between those variables. More specifically, the objective proposed in this paper is to predict what a scene would look like from different viewpoints, given its appearance from a single viewpoint. This is a powerful objective, since it requires the model to figure out the 3d geometry of the scene, the properties of the objects in the scene and their spatial relationships with each other, etc. from a single image. Coming up with similar objectives in other domains (e.g. in language) is, I think, a very interesting problem.
- Probing the capabilities of the resulting trained systems in detail to understand exactly what they can or cannot do is another important problem, I think. For example, do pre-trained language models like BERT display compositionality? Are they more or less compositional than the standard seq2seq models? Etc.

**Update:** Here’s an accessible NY Times article on the recent progress in unsupervised pre-training of language models.

**Update (01/07/19):** Yoav Goldberg posted an interesting paper evaluating the syntactic abilities of the pre-trained BERT model discussed in this post on a variety of English syntactic phenomena such as subject-verb agreement and reflexive anaphora resolution, concluding that “BERT model performs remarkably well on all cases.”

Researchers have developed methods to deal with this degeneracy problem and hence to overcome training difficulties in RNNs. One of the most well-known of these methods is the identity (or, more generally, orthogonal) initialization of the recurrent weight matrix. Others have proposed constraining the weight matrix to remain orthogonal throughout training, instead of orthogonalizing it at initialization only. The logic behind both of these methods is that since orthogonal transformations are isometries of the Euclidean space, applying a bunch of these transformations in a cascade does not lead to a degeneration of the metric (by “degeneration” here, I mean the collapse of the metric along the overwhelming majority of the directions in the input space and the astronomical expansion of the metric along a very small number of remaining directions). This is guaranteed in the linear case and, again, one hopes and expects (with some justification) that things are not all that different in the nonlinear case. In other words, a sequence of orthogonal transformations propagates vectors in Euclidean space without distortion, i.e. without changing their norms or the distances between them.
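This isometry property is easy to check numerically; a small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# A random orthogonal matrix, obtained from a QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

u0, v0 = rng.standard_normal(n), rng.standard_normal(n)
u, v = u0.copy(), v0.copy()
for _ in range(100):            # a cascade of 100 orthogonal transformations
    u, v = Q @ u, Q @ v
# Norms and pairwise distances survive the cascade undistorted (up to
# floating-point error), whereas a generic random matrix would collapse
# them along most directions or blow them up along a few.
```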

This is all true and fine; however, this analysis ignores a crucial factor that is relevant in training neural networks, namely the effect of *noise*. Noise comes in both through the stochasticity of SGD and sometimes through direct noise injection (as in Dropout) for regularization purposes. It is a bit hard to precisely characterize the noise that arises due to SGD, but let us assume for the sake of simplicity that the noise is additive, so that what we propagate in the end is some kind of “signal + noise”. Now, although it is true that orthogonal transformations propagate the signal without distortion, they also propagate the noise without distortion. But, ultimately, we probably want a transformation that maximizes something like the signal-to-noise ratio (SNR) of the propagated signal + noise. It is then no longer obvious that orthogonal transformations are optimal for this purpose, because one can, for example, imagine transformations that amplify the signal more than they amplify the noise (hence distorting both the signal and the noise), thus yielding a better SNR than an orthogonal transformation.

And indeed it turns out that for linear systems with additive Gaussian noise, one can mathematically show that the optimal transformations (in the sense of maximizing the total SNR of the propagated signal + noise) are *not orthogonal*. In fact, one can say something even stronger: any optimal transformation has to be non-normal (a normal matrix is a unitarily diagonalizable matrix; all orthogonal matrices are normal, but the reverse is not true). This is the main result of this beautiful and insightful paper by Surya Ganguli and colleagues. Perhaps the simplest example of an optimal transformation in this sense is a feedforward chain, $W_{ij} = \alpha\, \delta_{i,j+1}$ with $\alpha > 1$, where $\delta_{i,j}$ is the Kronecker delta function. This particular example maximizes the total SNR through a mechanism known as transient amplification: it exponentially amplifies the norm of its input transiently before the norm eventually decays to zero.
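Transient amplification is easy to see in a few lines of NumPy (a sketch assuming an illustrative chain gain of $\alpha = 1.5$ and a chain of length 20):

```python
import numpy as np

n, alpha = 20, 1.5
# Feedforward chain W_ij = alpha * delta_{i, j+1}: a weighted shift matrix.
W = alpha * np.eye(n, k=-1)

v = np.zeros(n)
v[0] = 1.0                      # unit impulse at the top of the chain
norms = []
for _ in range(n + 5):
    v = W @ v
    norms.append(np.linalg.norm(v))
# The norm grows like alpha**t while the activity travels down the chain,
# then collapses to exactly zero once it falls off the end.
```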

This brings me to the main message of this post: *the commonly used orthogonal initializations for recurrent neural networks are likely suboptimal because of the often neglected effect of noise*. Further evidence for this claim comes from looking at the trained recurrent connectivity matrices in tasks that require memory. In this work (currently under review), we have shown that the trained recurrent connectivity matrices in such tasks always end up non-normal, with a feedforward structure hidden in the recurrent connectivity, even when they are initialized with an approximately normal matrix. How non-normal the trained matrices end up depends on a wide range of factors, and investigating those factors was the main motivation for our paper. So, initializing RNNs with a non-normal matrix would potentially be a useful inductive bias for these networks.

In ongoing work, I have been investigating the merits of various non-normal initialization schemes for non-linear RNNs. One particular non-normal initialization scheme that seems to work quite well (and that is very easy to implement) is combining an identity matrix (or a scaled identity matrix) with a chain structure (which was shown by Ganguli et al. to be optimal in the case of a linear model with additive Gaussian noise). More details on these results will be forthcoming in the following weeks, I hope. Another open question at this point is whether non-normal initialization schemes are also useful for the more commonly used gated recurrent architectures like LSTMs or GRUs. These often behave very differently than vanilla recurrent networks, so I am not sure whether non-normal dynamics in these architectures will be as useful as it is in vanilla RNNs.
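For concreteness, such an initializer can be sketched as follows (the function name, parameters, and scales here are my own illustrative choices, not the paper’s exact scheme):

```python
import numpy as np

def nonnormal_init(n, diag_scale=1.0, chain_scale=1.0):
    """Scaled identity plus a feedforward chain on the subdiagonal."""
    return diag_scale * np.eye(n) + chain_scale * np.eye(n, k=-1)

W = nonnormal_init(5)
# W is non-normal: it does not commute with its transpose.
```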

**Update (06/15/19):** Our work on a new non-normal initialization scheme for RNNs described in this post is now on arxiv. The accompanying code for reproducing some of the results reported in the paper is available in this public repository.

Coming up with the right inductive biases for a particular type of task (or types of tasks) is not always straightforward and it is, in my mind, one of the things that make machine learning a creative enterprise. Here, by the “right inductive biases”, I mean inductive biases that (i) only exploit the structure in the problem (or problems) we are interested in and nothing more or less, but (ii) are also flexible enough that if the same model is applied to a problem that doesn’t display the relevant structure exactly, the model doesn’t break down disastrously (some “symbol”-based neural machines may suffer from such fragility).

In this post, I’d like to briefly highlight two really nice recent papers that introduce very simple inductive biases that enable neural networks to train faster and generalize better in particular types of problems.

The first one is from Uber AI: An intriguing failing of convolutional neural networks and the CoordConv solution. In this paper, the authors first observe that state of the art convolutional networks fail quite badly in tasks that require spatial coordinate transformations, for example, changing from Cartesian coordinates to image-based coordinates or vice versa (e.g. given the Cartesian coordinates $(x, y)$, draw a square of a certain size centered at $(x, y)$). This may not be too surprising, since convolutional networks are explicitly designed to be translation-invariant, hence to ignore any spatial information, but the authors correctly note that ignoring spatial information completely (being rigidly translation-invariant) may not always be advisable (this may lead to failures of the type mentioned in (ii) above). It is much better to provide the model with the spatial information and let it figure out for itself how translation-invariant it needs to be in any particular task. This is exactly what the authors do. Specifically, they provide the spatial information in an explicit format through additional (fixed) channels that represent the Cartesian coordinates of each “pixel”. For image-based tasks, one thus needs only two additional channels, representing the $x$ and $y$ coordinates of each pixel. Pictorially, their scheme, which they call *CoordConv*, looks like this (Figure 3 in the paper):

That’s basically it. If the task at hand is highly translation-invariant, the model can learn to set the weights coming from those two Cartesian coordinate channels to small values; if the task at hand requires precise spatial information, on the other hand, the model can learn to utilize those channels appropriately. NLP people may recognize the conceptual similarity of this scheme to the positional encodings of items in sequence-based tasks. For the NLP people, we may thus summarize their contribution by saying that they extend the positional encoding idea from the temporal domain (in sequence-based tasks) to the spatial domain (in image-based tasks). It’s always a good idea to think about such exchanges between different domains!
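In code, the augmentation amounts to concatenating two fixed coordinate channels to the input. A minimal sketch (the function name and the $[-1, 1]$ scaling are my assumptions, not necessarily the paper’s exact convention):

```python
import numpy as np

def add_coord_channels(images):
    """Append two fixed channels holding each pixel's (x, y) coordinates,
    scaled to [-1, 1], to a batch of images shaped
    (batch, height, width, channels)."""
    b, h, w, _ = images.shape
    # ys varies along rows, xs along columns.
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    coords = np.broadcast_to(np.stack([xs, ys], axis=-1), (b, h, w, 2))
    return np.concatenate([images, coords], axis=-1)

out = add_coord_channels(np.zeros((4, 8, 8, 3)))   # shape (4, 8, 8, 5)
```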

The authors then go on to demonstrate that introducing a few of these *CoordConv* layers in standard architectures improves performance in a diverse range of tasks (but *not* in *all* tasks), including object detection, GAN training and Atari playing.

The second paper I’d like to highlight, called Neural Arithmetic Logic Units, starts from the observation that generic neural network architectures cannot generalize well in numerical tasks requiring arithmetic operations such as addition, multiplication, etc., even when they may successfully fit any given training data in such tasks (and sometimes they cannot even achieve that). The authors of this paper introduce very simple, elegant and easy-to-implement inductive biases that enable generic models (LSTMs and MLPs) to extrapolate from training data much better in such tasks. The basic idea is to “nudge” standard neural network operations (linear combination, pointwise nonlinearity etc.) to behave like arithmetic operators. For instance, for addition, they parametrize a dense weight matrix as:

$$W = \tanh(\hat{W}) \odot \sigma(\hat{M}),$$

where $\odot$ denotes elementwise multiplication, $\tanh$ is the hyperbolic tangent, and $\sigma$ is the sigmoid nonlinearity. In the saturated regime, this parametrization encourages $W$ to have entries in $\{-1, 0, 1\}$, and so a linear combination using this kind of $W$, i.e. $a = Wx$, tends to behave like an addition or subtraction of its inputs (without scaling). In light of the preceding discussion, it is important to note here again that the model does not *force* this kind of behavior, but rather it merely *facilitates* it.
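The saturated-regime behavior is easy to verify numerically (a NumPy sketch; the particular saturating values of $\hat{W}$ and $\hat{M}$ below are illustrative, hand-picked rather than learned):

```python
import numpy as np

def nac_weights(w_hat, m_hat):
    """NAC parametrization: W = tanh(w_hat) * sigmoid(m_hat), which pushes
    the entries of W toward -1, 0 or 1 when w_hat and m_hat saturate."""
    return np.tanh(w_hat) * (1.0 / (1.0 + np.exp(-m_hat)))

# Saturated parameters selecting "add inputs 0 and 1, subtract input 2":
w_hat = np.array([[10.0, 10.0, -10.0]])
m_hat = np.array([[10.0, 10.0, 10.0]])
W = nac_weights(w_hat, m_hat)      # approximately [[1, 1, -1]]
y = W @ np.array([3.0, 4.0, 5.0])  # approximately 3 + 4 - 5 = 2
```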

As an inductive bias for multiplication, they use the exponentiated sum of logs formulation:

$$m = \exp\big(W(\log(|x| + \epsilon))\big),$$

using the same matrix $W$ as above (with $\epsilon$ a small constant guarding against taking the log of zero). This (approximately) expresses the multiplication of the elements in $x$ selected by $W$. A linear combination of these addition and multiplication operations, gated by a sigmoid unit (called a NALU in the paper), can then function as either an addition or a multiplication operation (which can be learned as appropriate). One can then stack these operations to express, in principle, arbitrarily complex arithmetic operations.
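The multiplicative path can be sketched similarly (again with illustrative, hand-picked saturated parameters rather than learned ones):

```python
import numpy as np

def nalu_multiply(x, w_hat, m_hat, eps=1e-7):
    """Multiplicative NALU path: exp(W @ log(|x| + eps)) with
    W = tanh(w_hat) * sigmoid(m_hat) computes approximate products
    of the inputs selected by W."""
    W = np.tanh(w_hat) * (1.0 / (1.0 + np.exp(-m_hat)))
    return np.exp(W @ np.log(np.abs(x) + eps))

# Saturated parameters selecting "multiply inputs 0 and 1, ignore input 2":
w_hat = np.array([[10.0, 10.0, 0.0]])   # tanh(0) = 0 zeroes out input 2
m_hat = np.array([[10.0, 10.0, 10.0]])
y = nalu_multiply(np.array([3.0, 4.0, 5.0]), w_hat, m_hat)  # approximately 3 * 4 = 12
```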

This beautiful, simple idea apparently works fantastically well! I was quite impressed by the results in the paper. However, I would have liked to see (i) some results with more complex arithmetic operations than they report in the paper and also (ii) some results with tasks that do not have a strong arithmetic component to gauge how strong the introduced arithmetic inductive biases are. Again, the idea is to see whether, or how badly, the model fails when faced with a task without a strong arithmetic component. Ideally, we would hope that the model does not fail too badly in such cases.

**Note:** I will collect, and report here, examples of inductive biases, like the ones I discussed in this post, that I encounter in the literature, with brief descriptions of the bias introduced, how it is supposed to work and what kinds of problem it is intended to be applied to. To facilitate this, I tagged this post with the tag inductive biases and I will file similar posts under the same tag in the future.

There is a well-known construction of real numbers from rational numbers that defines real numbers as equivalence classes of Cauchy sequences of rational numbers. Two Cauchy sequences of rational numbers are counted as equal if their difference converges to zero in the Euclidean metric (choosing other metrics here gives rise to some interesting, alternative number systems, e.g. *p*-adic numbers).

In an analogous fashion, one can define a new number system starting from the reals this time. This new number system, called *hyperreal numbers* ($\mathbb{R}^*$), is defined in terms of equivalence classes of sequences of real numbers. Two sequences of real numbers, $(a_n)$ and $(b_n)$, are considered equal (hence represent the same hyperreal number) iff the set of indices where they agree, $\{n : a_n = b_n\}$, is *quasi-big*. The technical definition of a quasi-big set is somewhat complicated, but the most important properties of quasi-big sets can be listed as follows:

- No finite set is quasi-big.
- If $A$ and $B$ are quasi-big sets, then so is $A \cap B$.
- If $A$ is quasi-big and $A \subseteq B$, then $B$ is also quasi-big.
- For any set $A$, either $A$ or its complement is quasi-big.

Given this definition of hyperreal numbers, the following properties can be established relatively straightforwardly:

- $\mathbb{R} \subset \mathbb{R}^*$, i.e. hyperreals strictly contain the reals (we map any real $r$ to the equivalence class containing the constant sequence $(r, r, r, \ldots)$).
- $\mathbb{R}^*$ contains an *infinitesimal*, i.e. a number $\epsilon$ such that $\epsilon > 0$, yet $\epsilon < r$ for every positive real number $r$ (to see this, consider the sequence $(1, 1/2, 1/3, \ldots)$).
- In a formal language $L$ that is expressive enough to develop the entire calculus in, any given sentence is true with respect to $\mathbb{R}$ iff it is true with respect to $\mathbb{R}^*$. This is an extremely useful property that can be derived from Łoś’s Theorem.

At this point, you might be (and you should be!) wondering whether Properties 2 and 3 above are consistent with each other. Property 2 says there’s an infinitesimal number in $\mathbb{R}^*$, but we know that there’s no infinitesimal number in $\mathbb{R}$. So, how is it possible that every sentence true in $\mathbb{R}$ is also true in $\mathbb{R}^*$? The answer is that the language $L$ mentioned in Property 3, although powerful enough to allow us to do calculus, is a rather restricted language. In particular, it doesn’t allow us to *define* what a real number is. This is because the definition of a real number crucially relies on the completeness axiom. The completeness axiom cannot be expressed in $L$, because it requires talking about sets of numbers, something that turns out not to be possible in the language $L$. So, Property 2 cannot be expressed in the language $L$, hence $L$ is not powerful enough to distinguish hyperreals from reals.

Here are some further useful properties of hyperreal numbers:

- We mentioned above that there is at least one infinitesimal hyperreal. Are there more? Yes, definitely! There are in fact an infinite number of them: for, if $\epsilon$ is an infinitesimal and $r$ is a nonzero real number, then $r\epsilon$ is also infinitesimal. Moreover, if $\epsilon$ and $\delta$ are infinitesimals, then so are $\epsilon + \delta$ and $\epsilon\delta$.
- We say that a hyperreal is *infinite* iff it is either greater than all real numbers or smaller than all real numbers. A hyperreal is *finite* iff it is not infinite. It is easy to show that if $\epsilon$ is infinitesimal, $1/\epsilon$ is an infinite hyperreal.
- $\mathbb{R}^*$ contains infinite integers. The proof is very easy, using Property 3 above. We know that for any $x$ in $\mathbb{R}$, there is an integer greater than $x$ (note that one can express this statement in the language $L$, because “$x$ is an integer”, unlike “$x$ is a real”, *can* be defined in $L$). Since this is true for $\mathbb{R}$, it must be true for $\mathbb{R}^*$ as well. In particular, there must be an integer greater than $1/\epsilon$. But we just observed that $1/\epsilon$ is infinite, hence that integer must be infinite too! In fact, it immediately follows that there must be an infinite number of infinite integers.
- We say that two hyperreals $x$ and $y$ are *infinitely close*, denoted by $x \approx y$, if $x - y$ is infinitesimal or zero.
- Let’s say that a hyperreal $x$ is *nonstandard* if $x$ is not real. Then, it’s easy to show that for any real $r$ and nonstandard $x$, $r + x$ (or $rx$, for $r \neq 0$) is nonstandard.
- If $x$ is any finite hyperreal, then there exists a unique real infinitely close to $x$ (called the *standard part* of $x$, denoted $\mathrm{st}(x)$).

One can develop the entire calculus using hyperreal numbers instead of the reals. This is sometimes very useful, as some results turn out to be much easier to state and prove in $\mathbb{R}^*$ than in $\mathbb{R}$, and we know, by Property 3 above, that we won’t ever be led astray by doing so. Just to give a few examples: in $\mathbb{R}^*$, we can define a function $f$ to be *continuous* at $x$ iff $y \approx x$ implies $f(y) \approx f(x)$. In $\mathbb{R}$, on the other hand, we would have to invoke the notion of a *limit* to define continuity, which in turn is defined in terms of a well-known $\epsilon$-$\delta$-type argument.

Using hyperreals, derivatives are also very easy to define. The derivative of a function $f$ at $x$ is defined as:

$$f'(x) = \mathrm{st}\!\left(\frac{f(x+\epsilon) - f(x)}{\epsilon}\right),$$

where $\epsilon$ is an infinitesimal and $\mathrm{st}(\cdot)$ denotes the standard part, i.e. the unique real infinitely close to a given finite hyperreal (it’s easy to check that the definition doesn’t depend on which infinitesimal is chosen). Again, the definition of a derivative in standard analysis would involve limits.

Taking the derivatives of specific functions is equally easy, since we can manipulate infinitesimals just like we manipulate real numbers (again by Property 3 above).
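As a concrete illustration, here is the derivative of $f(x) = x^2$ computed this way, writing $\mathrm{st}(\cdot)$ for the standard part (the unique real infinitely close to a given finite hyperreal):

```latex
f'(x) = \mathrm{st}\!\left(\frac{(x+\epsilon)^2 - x^2}{\epsilon}\right)
      = \mathrm{st}\!\left(\frac{2x\epsilon + \epsilon^2}{\epsilon}\right)
      = \mathrm{st}(2x + \epsilon)
      = 2x
```

No limits are needed: we divide by $\epsilon$ (legitimate, since $\epsilon \neq 0$) and then simply discard the leftover infinitesimal.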

One important difference between $\mathbb{R}$ and $\mathbb{R}^*$ is that $\mathbb{R}$ is complete (in fact, the construction of the reals as equivalence classes of Cauchy sequences of rationals, which was mentioned at the beginning of this post, is known as the completion of the rationals with respect to the Euclidean metric), whereas $\mathbb{R}^*$ is not complete. As mentioned before, this is not inconsistent with Property 3 above, because completeness cannot be expressed in the language $L$.

**Bonus:** I recently discovered this video of a great lecture by Terry Tao on ultraproducts. The ultraproduct is the technical construction I glossed over above when I defined *quasi-bigness*. Most of the lecture is quite accessible, and I recommend it to anybody who wants to learn more about this topic.