Pixel Club: Curriculum Learning in Perceptual Learning and Affect Recognition, With and Without Deep Learning

Speaker:
Daphna Weinshall (Hebrew University of Jerusalem)
Date:
Tuesday, 5.12.2017, 11:30
Place:
Room 337 Taub Bld.

I will talk about two recent results concerning problems motivated by human cognition.

(i) In the first part I will talk about modeling phenomena in perceptual learning. Building on the powerful tools currently available for training Convolutional Neural Networks (CNNs), networks whose original architecture was inspired by the visual system, we revisited some of the open computational questions in perceptual learning. We first replicated two representative sets of perceptual learning experiments by training a shallow CNN to perform the relevant tasks. These networks qualitatively showed some hallmark phenomena of perceptual learning, including specificity and learning enabling (the latter is strongly related to curriculum learning). By analyzing the dynamics of weight modifications in the networks, we identified patterns which appeared to be instrumental for the transfer (or generalization) of learned skills from one task to another in the simulated networks.

(ii) In the second part I will talk about predicting a person's emotional response from spontaneous facial activity. Our approach is based on the inferred activity of facial muscles over time, as captured by a depth camera recording an individual's facial activity. I will present a method which successfully predicts a four-dimensional representation of affect (Valence, Arousal, Likability, Rewatch), with the numerical correlation between the predicted response and the actual video emotional tag ranging from 0.64 to 0.76. These numbers are comparable to human performance in affect recognition.
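As an illustration of the kind of per-dimension evaluation described in the second part, here is a minimal sketch assuming the reported correlation is a Pearson coefficient computed separately for each affect dimension; the function names and all numbers below are invented for this example and are not the speaker's code or data.

```python
# Hypothetical sketch: comparing predicted affect to actual video emotional
# tags with one Pearson correlation coefficient per affect dimension.

import math

DIMENSIONS = ["valence", "arousal", "likability", "rewatch"]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: predicted vs. actual ratings for five videos (invented numbers).
predicted = {
    "valence":    [0.1, 0.4, 0.6, 0.8, 0.3],
    "arousal":    [0.2, 0.5, 0.7, 0.9, 0.4],
    "likability": [0.3, 0.2, 0.8, 0.7, 0.5],
    "rewatch":    [0.1, 0.3, 0.6, 0.9, 0.2],
}
actual = {
    "valence":    [0.2, 0.3, 0.7, 0.9, 0.2],
    "arousal":    [0.1, 0.6, 0.6, 0.8, 0.5],
    "likability": [0.4, 0.1, 0.9, 0.6, 0.4],
    "rewatch":    [0.2, 0.2, 0.7, 0.8, 0.3],
}

correlations = {d: pearson(predicted[d], actual[d]) for d in DIMENSIONS}
for d, r in correlations.items():
    print(f"{d}: r = {r:.2f}")
```

With real data, each list would hold one entry per test video, and the four resulting coefficients would be summarized as the reported range.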
