It is Saturday and, yes, the conference is still going on, though the schedule is light and there are no posters.

Herded Gibbs Sampling by Luke Bornn from Harvard is very promising. The Gibbs sampler they proposed achieves a convergence rate of O(1/T). He compared results from herded Gibbs sampling and regular Gibbs sampling: both reach a similar level of accuracy, but herded Gibbs gets there much faster. Worth a try if applicable.
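The O(1/T) rate comes from herding, a deterministic substitute for random sampling; herded Gibbs applies it to each full conditional. As a minimal sketch (my own illustration, not the authors' code), here is herding on a single discrete distribution: the empirical frequencies of the deterministic sequence track the target probabilities with error shrinking like O(1/T), versus O(1/sqrt(T)) for i.i.d. draws.

```python
import numpy as np

def herded_samples(p, T):
    """Deterministically 'sample' T draws from discrete distribution p via herding."""
    p = np.asarray(p, dtype=float)
    w = p.copy()                # one herding weight per state
    out = np.empty(T, dtype=int)
    for t in range(T):
        x = int(np.argmax(w))   # pick the state with the largest weight
        out[t] = x
        w += p                  # every state accumulates its probability mass
        w[x] -= 1.0             # the chosen state pays back one unit
    return out

p = np.array([0.5, 0.3, 0.2])
T = 1000
s = herded_samples(p, T)
freq = np.bincount(s, minlength=len(p)) / T
# the weights stay bounded, so |freq - p| shrinks like O(1/T)
```

In herded Gibbs, this per-distribution weight vector is kept for each variable (and, in the full version, for each configuration of its neighbors), replacing the random draw from the conditional at every sweep.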

The talk Feature Learning in Deep Neural Networks – A Study on Speech Recognition Tasks by Dong Yu from Microsoft yesterday was also very impressive. He showed that deep networks indeed help quite dramatically in speech recognition.

The Manifold of Human Emotions by Seungyeon Kim from Georgia Tech is very interesting. He used review data to define 32 emotions and then, with very intuitive assumptions, was able to find the manifold of those 32 emotions. I appreciated his passionate and detailed explanation of his work.

Overall, my feeling is that, judged by performance alone, deep learning indeed works exceedingly well. On the other hand, we really don’t know why it works, or what knowledge can be gained from it beyond the predictions themselves. Anyway, better is better: nobody wants a product that impresses users with more errors, and the product is a black box to users anyway.