Future of machine learning, w. Geoffrey Hinton, Yoshua Bengio, Yann LeCun (Part 2)
We hear the second part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal), and Yann LeCun (Facebook and NYU), who talk with us about the history (and future) of research on neural nets.
We explore how to use Determinantal Point Processes (DPPs). Alex Kulesza and Ben Taskar (who passed away recently) did some really exciting work in this area; for more on DPPs, check out their paper on the topic.
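To make the idea concrete: a DPP assigns probability to subsets of items in proportion to the determinant of a kernel submatrix, so similar items tend not to appear together. Here is a minimal sketch (not from the episode; the kernel values and function name are illustrative) of the standard L-ensemble formula P(A) = det(L_A) / det(L + I):

```python
import numpy as np

def dpp_subset_probability(L, subset):
    """Probability that an L-ensemble DPP with kernel L draws exactly `subset`.

    P(A) = det(L_A) / det(L + I), where L_A is the submatrix of L
    indexed by the items in A.
    """
    n = L.shape[0]
    if len(subset) == 0:
        num = 1.0  # determinant of the empty matrix is 1 by convention
    else:
        num = np.linalg.det(L[np.ix_(subset, subset)])
    return num / np.linalg.det(L + np.eye(n))

# Toy similarity kernel over 3 items: items 0 and 1 are very similar,
# item 2 is dissimilar to both.
L = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

p_similar = dpp_subset_probability(L, [0, 1])  # similar pair: low probability
p_diverse = dpp_subset_probability(L, [0, 2])  # diverse pair: higher probability
```

Because det(L_{0,1}) = 1 - 0.9² is small while det(L_{0,2}) = 1, the diverse pair is markedly more likely, which is exactly the "diversity-promoting" behavior DPPs are used for.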
Also, we take a listener question about whether machine learning is just function approximation (spoiler alert: it is, and then again, it isn’t).
If you liked this article, you may also be interested in:
- Talking Machines: History of machine learning, w. Geoffrey Hinton, Yoshua Bengio, Yann LeCun (part 1)
- Artificial General Intelligence that plays Atari video games: How did DeepMind do it?
- Inside DeepMind
- Google’s robot and artificial intelligence acquisitions are anything but scary
- Google’s DeepMind acquisition in reinforcement learning
- Why robots will not be smarter than humans by 2029