NIPS 2016 — Final Highlights Days 4–6: Likelihood-free inference, Dessert analogies, and much more.

Ross Fadely, Insight · Dec 12, 2016

Jeremy Karnowski & Ross Fadely, Insight Artificial Intelligence

Simulation of particle collisions producing the Higgs boson. Credit: CERN.

Missed our highlights from NIPS 2016? Check out our Day 1, Day 2, and Day 3 posts. Want to learn about applied Artificial Intelligence from leading practitioners in Silicon Valley or New York? Learn more about the Insight Artificial Intelligence Fellows Program.

Particle Physics and Likelihood-Free Inference

The past two years at NIPS have featured workshops focused on applying machine learning and probabilistic inference to particle physics problems. Continuing this theme, Kyle Cranmer was featured as a keynote speaker this year. In his talk he discussed the many areas where recent advances have improved analyses in particle experiments, including the use of feed-forward, convolutional, and generative adversarial neural networks.

The graphical model describing the ATLAS experiment at CERN. Each stage involves significant knowledge, analysis, and/or inference to make the experiment a success. In his keynote, Kyle Cranmer walked through tackling various parts of this model.

A key problem featured in Cranmer’s talk is what to do when you want to infer model parameters but cannot evaluate a likelihood function. In cases where you have a generative model (or simulation) for the data, you can use a likelihood-free inference approach, such as Approximate Bayesian Computation (ABC), which lets you approximate the posterior over model parameters without ever writing down an explicit likelihood.
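To make the idea concrete, here is a minimal rejection-ABC sketch in Python. The toy simulator, summary statistic, prior, and tolerance below are our own illustrative choices (not anything from Cranmer’s talk): we pretend the data come from a Gaussian with unknown mean, draw candidate means from the prior, simulate, and keep only the candidates whose simulated data resemble the observation.

```python
import numpy as np

# Toy "simulator": generate data given a parameter theta.
# In a real setting this would be an expensive physics simulation.
def simulate(theta, n, rng):
    return rng.normal(loc=theta, scale=1.0, size=n)

# Summary statistic used to compare simulated and observed data.
def summary(x):
    return x.mean()

def abc_rejection(observed, prior_sampler, n_draws=20000, tol=0.05, seed=0):
    """Rejection ABC: keep prior draws whose simulated data resemble the observation."""
    rng = np.random.default_rng(seed)
    obs_stat = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)                  # draw a candidate from the prior
        sim = simulate(theta, len(observed), rng)   # simulate data under that candidate
        if abs(summary(sim) - obs_stat) < tol:      # accept if summaries are close
            accepted.append(theta)
    return np.array(accepted)                       # approximate posterior samples

# Usage: observed data generated with a "true" mean of 2.0.
rng = np.random.default_rng(42)
observed = rng.normal(loc=2.0, scale=1.0, size=100)
posterior = abc_rejection(observed, prior_sampler=lambda r: r.uniform(-5.0, 5.0))
print(posterior.mean(), posterior.std())
```

The accepted draws form an approximate posterior sample; the smaller the tolerance, the better the approximation, at the cost of running (and rejecting) many more simulations.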

These techniques and ideas underlie many unsupervised and generative models, which are currently an extremely active area of research. During his talk and in follow-up discussions, it became clear that Cranmer is developing ideas under which these likelihood-free techniques would no longer be approximate. We are excited, as these ideas may have a huge impact on autoencoding and adversarial models. Stay tuned for more.

Comic relief via cake

If you have seen Yann LeCun talk this year, you have quite likely seen his cake slide describing how much information is available to different machine learning tasks: unsupervised learning as the cake, supervised learning as the icing, and reinforcement learning as the cherry on top. Whether people were just looking for a laugh or were growing tired of the analogy, this slide became the source of many jokes throughout NIPS, including a panelist who jokingly asked Yann whether he was “ready to talk about the cake”.

The crescendo, perhaps, came during Kyle Cranmer’s keynote, where he took the same cake image and turned it into a cosmic pie chart, with the cake as dark energy, the icing as dark matter, and the cherry as normal matter. The audience (ourselves included) erupted in laughter. To add to the fun, we created a similar version relating Data Science, Data Engineering, and AI roles (see our post here).

Highlights: Talks and Workshops

In some ways, the remainder of NIPS was no different from the previous days: tons of cool results and interesting discussions. Where it differed was that the sessions were much more focused on specific areas. We found a number of them really exciting; here are just a few highlights:

  • Kilian Weinberger talked about convolutional networks of immense depth. His team’s paper on Deep Networks with Stochastic Depth shows that randomly dropping layers during training improves on the award-winning ResNets from Microsoft’s group. Discussing the work, he showed that they can train a 1202-layer network on CIFAR-10 and get better performance than ResNet models (a minimal sketch of the stochastic-depth idea appears after this list). Finally, he talked about Densely Connected Convolutional Networks, which add direct connections to each layer from all of the layers below it, leading to state-of-the-art performance.
  • Another trend has been the recent advances in tensor methods and their applications. Saturday’s workshop on Learning with Tensors featured the talk On Depth Efficiency of Convolutional Networks: the use of Hierarchical Tensor Decomposition for Network Design and Analysis (work with Nadav Cohen and Or Sharir). The talk demonstrated an equivalence between convolutional networks and hierarchical tensor decompositions, which allows for a more theoretical understanding of the space of network configurations, the expressive power of deep networks, and the benefits gained by adding more layers. Further work in this area will help formalize many current techniques and suggest future directions.
  • Another interesting trend we saw across a few different workshops was more straight talk and practical advice. Talks like Soumith Chintala’s How to Train a GAN and John Schulman’s The Nuts and Bolts of Deep Reinforcement Learning Research show the importance of sharing what actually happens during research if we want to accelerate its pace. This kind of knowledge is hard to fit into final paper versions, so we are excited to see it being shared.
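As promised above, here is a minimal sketch of the stochastic-depth idea in PyTorch-style Python. This is our own simplified toy example, not the authors’ implementation: each residual block is skipped with some probability during training, and at test time the block’s output is scaled by its survival probability.

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """A simplified residual block that is randomly skipped during training.

    With probability (1 - p_survive) the residual branch is dropped entirely;
    at test time the branch output is scaled by p_survive instead.
    """
    def __init__(self, channels, p_survive=0.8):
        super().__init__()
        self.p_survive = p_survive
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        if self.training:
            # Flip a coin: either run the residual branch or skip it entirely.
            if torch.rand(1).item() < self.p_survive:
                out = x + self.branch(x)
            else:
                out = x
        else:
            # At test time, use the expected contribution of the branch.
            out = x + self.p_survive * self.branch(x)
        return self.relu(out)

# Usage: stack a few blocks with survival probability decaying with depth.
blocks = nn.Sequential(*[StochasticDepthBlock(16, p_survive=1.0 - 0.5 * i / 9)
                         for i in range(10)])
y = blocks(torch.randn(2, 16, 32, 32))
```

In the paper the survival probability decays linearly with depth (roughly what the usage example mimics); dropping whole layers acts both as regularization and as a way to reduce the expected depth, and hence the training time, of very deep networks.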

Wrapping up NIPS 2016

NIPS this year was full of excitement and innovation. Once again there were record numbers of attendees and submissions, and a larger presence of researchers and teams from industry. It’s really a shame we can’t distill all of the interesting results that emerged, but we hope some of our highlights were useful.

Like all interesting events, we also walked away with a number of questions:

  • Memory networks made huge advances in 2016; why were they not more prevalent this year?
  • The proportion of women attending rose from 13% to just 15%. How and when will diversity increase at NIPS and other conferences?
  • Almost all reinforcement learning efforts revolved around game playing. When will research step more heavily beyond environments like Atari?
  • How will the increasing monetization of AI affect research? Will the trend towards openness continue?

All in all, our experience at NIPS this year was great — hats off to the organizers! Looking forward to seeing what next year brings.
