Thursday, June 13, 2024
11:00 - 12:00
Deep neural networks develop representations of the data they are trained on, with a level of abstraction that increases with depth, much as our brain does. But what is abstraction? It is usually assessed qualitatively, by inspecting the features of the data to which internal nodes respond. Can the level of abstraction instead be quantified in terms of the statistical properties of the activity of internal nodes, without reference to what is represented (i.e. the data)? We address these questions by analysing the internal representations of deep belief networks trained on benchmark datasets.
Dutch Institute for Emergent Phenomena (DIEP)
IAS second floor library room
Group Seminar
biophysics, complexity, condensed matter theory, emergence, soft matter
Matteo Marsili