https://www.youtube.com/watch?v=0eE-CTX96v8
https://www.youtube.com/watch?v=_5SnvhRyxkc
Our December 2022 *Understanding the Nature of Inference* colloquium talk will be given by Brian Cantwell Smith, the Reid Hoffman Professor of Artificial Intelligence and the Human at the University of Toronto. The title of the talk and the discussion that follows is “Inference in a Non-Conceptual World.”
Classical models of inference, such as those based on logic, take inference to be *conceptual* – i.e., to involve representations formed of terms, predicates, relation symbols, and the like. Conceptual representation of this sort is assumed to reflect the structure of the world: objects of various types, exemplifying properties, standing in relations, grouped together in sets, etc. These paired, roughly algebraic assumptions (one epistemic, the other ontological) form the basis of classical logic and traditional AI (GOFAI, or “good old-fashioned AI”).
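To make the two assumptions concrete, here is a minimal sketch (my illustration, not anything from the talk): a GOFAI-style conceptual representation, in which the world is carved in advance into discrete objects, properties, and relations, and inference is symbol manipulation over those structures. Every name in it (`Fact`, `Rule`, `forward_chain`, the cat/mat example) is a hypothetical stand-in.

```python
# Minimal sketch (not from the talk): a GOFAI-style conceptual representation.
# The world is assumed to come pre-packaged as discrete objects, properties,
# and relations; inference is symbol manipulation over those structures.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    predicate: str   # a relation symbol, e.g. "On"
    args: tuple      # discrete objects, e.g. ("cat", "mat")

@dataclass(frozen=True)
class Rule:
    premises: tuple  # Facts that must already hold
    conclusion: Fact # Fact licensed when they do
    # (Propositionalized for brevity; real GOFAI systems use
    # variables and unification.)

def forward_chain(facts: set, rules: list) -> set:
    """Apply rules until no new facts are derivable (naive forward chaining)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if all(p in derived for p in rule.premises) and rule.conclusion not in derived:
                derived.add(rule.conclusion)
                changed = True
    return derived

# The ontological assumption made explicit: "cat" and "mat" are objects,
# "On" and "Supported" are relations/properties they exemplify.
facts = {Fact("On", ("cat", "mat"))}
rules = [Rule((Fact("On", ("cat", "mat")),), Fact("Supported", ("cat",)))]
print(forward_chain(facts, rules))
```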
In this talk, Professor Brian Cantwell Smith will argue that the world itself is not conceptual, in the sense of not consisting (at least au fond) of objects, properties, relations, etc. That is, he will argue against the ontological assumption. Rather, he believes that taking the world to consist of the familiar ontological furniture of objects, properties, and the like results from epistemic processes of abstraction and idealization. Denser representations with so-called “nonconceptual content” can come closer to what is known as “ground truth”. Deep learning models and other developments in contemporary AI can therefore be understood as initial steps toward understanding inference over surpassingly rich fields of undiscretized features.
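For contrast with the sketch above, here is an equally minimal sketch (again my illustration, with invented feature values and category names, not anything Smith proposes) of a dense, “nonconceptual” representation: a continuous feature field with no built-in objects or predicates, where the familiar ontology appears only after an abstraction step, crudely modeled here as thresholding.

```python
# Minimal sketch (illustrative, not Smith's): a "nonconceptual" dense
# representation, i.e. a continuous feature field with no built-in objects,
# properties, or relations. All feature values and names below are invented.

import numpy as np

rng = np.random.default_rng(0)

# The undiscretized representation: just a vector of continuous features,
# with no terms, predicates, or relation symbols anywhere in it.
scene = rng.random(8)

# Abstraction/idealization: impose an ontology after the fact by carving
# the continuous field into discrete, named properties.
ONTOLOGY = ["cat-like", "mat-like", "on-ness", "redness",
            "edge", "texture", "motion", "depth"]

def conceptualize(field: np.ndarray, threshold: float = 0.5) -> set:
    """Discretize a dense field into symbolic 'facts' by thresholding."""
    return {name for name, value in zip(ONTOLOGY, field) if value > threshold}

print(scene)                 # the nonconceptual content: dense, continuous
print(conceptualize(scene))  # the conceptual gloss: discrete and lossy
```

On this picture, the thresholding step is where the paired, roughly algebraic assumptions get imposed, and where information in the dense field is thrown away.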