
Talking Machines

In episode four of season three, Neil introduces us to the ideas behind the bias-variance dilemma (and how we can think about it in our daily lives). Plus, we answer a listener question about how to make sure your neural networks don’t get fooled. Our guest for this episode is Jeff Dean, Google Senior Fellow in the Research Group, where he leads the Google Brain project. We talk about a closet full of robot arms (the arm farm!), image recognition for diabetic retinopathy, and equality in data and the community.

by   -   June 16, 2017
Credit: Wikimedia Commons

In episode three, season three of Talking Machines, we dive into overfitting, take a listener question about unbalanced data and speak with Professor (Emeritus) Tom Dietterich from Oregon State University.

by   -   May 30, 2017

In episode two of season three, Neil takes us through the basics of dropout, we chat about the definition of inference (it’s more about context than you think!), and we hear an interview with Jennifer Chayes of Microsoft.

by   -   April 28, 2017

Talking Machines is entering its third season and going through some changes. Our founding host Ryan is moving on, and in his place Neil Lawrence of Amazon is taking over as co-host. We say thank you and goodbye to Ryan with an interview about his work.

by   -   September 9, 2016


In episode sixteen of season two, we get an introduction to Restricted Boltzmann Machines, take a listener question about tuning hyperparameters, plus, speak with Eric Lander of the Broad Institute.

by   -   August 17, 2016
Generative Art on PBS/YouTube

In episode fifteen of season two, we talk about Hamiltonian Monte Carlo, take a listener question about unbalanced data, plus, speak with Doug Eck of Google’s Magenta project.

by   -   July 27, 2016
Testing lead in water during the Flint water crisis. Image credit: CC0 Public Domain

In episode fourteen of season two, we discuss Perturb-and-MAP and answer a listener question about classic artificial intelligence ideas being used in modern machine learning. Plus, we speak with Jake Abernethy from the University of Michigan about municipal data and his work on the Flint water crisis.

by   -   July 8, 2016
Reuters dataset (in 2D) landmark t-SNE using semantic hashing. Source: vdmaaten.github.io/tsne

In episode thirteen of season two, we talk about t-Distributed Stochastic Neighbor Embedding (t-SNE), take a listener question about statistical physics, plus, speak with Hal Daumé III of the University of Maryland (who is great to follow on Twitter).

by   -   June 20, 2016
Generating faces with Torch. Photo source: torch.ch

In episode twelve of season two, we discuss generative adversarial networks, take a listener question about using machine learning to improve or create products, and lastly, speak with Iain Murray from the University of Edinburgh.

by   -   June 6, 2016
Source: Pexels/CC0

In episode eleven of season two, we talk about the machine learning toolkit Spark and answer a listener question about the difference between Neural Information Processing Systems (NIPS) and the International Conference on Machine Learning (ICML). Plus, we speak with Sinead Williamson from The University of Texas at Austin.

by   -   May 24, 2016
Stem cell. Source: CC0

In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct Learning originated by Professor Leslie Valiant of SEAS at Harvard, we take a listener question about generative systems, plus we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.

by   -   May 9, 2016


In episode nine of season two, we talk about sparse coding and take a listener question about the next big demonstration for AI after AlphaGo. Plus, we talk with Clement Farabet about MADBITS and the work he’s doing at Twitter Cortex.

by   -   April 8, 2016

Episode seven of season two is a little different from our usual episodes; Ryan and Katherine returned from a conference where they got to talk with Neil Lawrence from the University of Sheffield about some of the larger issues surrounding machine learning and society. They discuss anthropomorphic intelligence, data ownership, and the ability to empathize. The entire episode is given over to this conversation in hopes that it will spur more discussion of these important issues as the field continues to grow.

by   -   March 27, 2016

In episode six of season two, we talk about how to build software for machine learning (and what the roadblocks are), we take a listener question about how to start exploring a new dataset, plus, we talk with Rob Tibshirani of Stanford University.

by   -   February 17, 2016

In episode three of season two, Ryan walks us through the AlphaGo results, plus, we talk with Michael Littman about his work, robots, and making music videos.




