Unfortunately, registration is closed. If you have any questions or concerns please email us at comco@uni-osnabrueck.de

Computational Cognition 2021

online video conference
Hosted by the RTG "Computational Cognition"
from Osnabrück, Germany

Topics covered

The COMCO 2021 workshop aims to contribute to the re-integration of Cognitive Science and Artificial Intelligence. There is a schism between low- and high-level cognition: much is known about the neural signals underlying basic sensorimotor processes, and a fair amount about the cognitive processes involved in reasoning, problem solving, and language. However, explaining how high-level cognition can arise from low-level mechanisms is a long-standing open problem in Cognitive Science.

To bridge this gap, the workshop tackles problems such as grammar learning, structured representations, and the production of complex behaviors with neural modeling. COMCO 2021 brings together experts who study the mind from a computational point of view to better understand human and machine intelligence.

Featured Speakers

Tom Griffiths

Director of the Computational Cognitive Science Lab at Princeton University. Research focused on developing mathematical models of higher-level cognition and understanding the underlying principles of our ability to solve computational problems in everyday life. Co-author of the best-selling book "Algorithms to Live By: The Computer Science of Human Decisions".

Associated Talk: Understanding human intelligence through human limitations

Richard Baraniuk

Victor E. Cameron Professor of Electrical and Computer Engineering at Rice University; Founding Director of OpenStax; Fellow of the American Academy of Arts and Sciences, the National Academy of Inventors, and the American Association for the Advancement of Science; and recipient of the DOD Vannevar Bush Faculty Fellow Award, the IEEE Signal Processing Society Technical Achievement Award, and the IEEE James H. Mulligan, Jr. Education Medal. Research focused on new theory, algorithms, and hardware for sensing, signal processing, and machine learning.

Associated Talk: Deep Network Spline Geometry

Andrea E. Martin

Group leader at the Max Planck Institute for Psycholinguistics and principal investigator of the Language and Computation in Neural Systems group at the Donders Centre for Cognitive Neuroimaging. Research focused on how language is represented and processed in the mind and brain.

Associated Talk: Boundary conditions for language in biological and artificial neural systems

Agnieszka Wykowska

Professor at the Italian Institute of Technology, heading the research group "Social Cognition in Human-Robot Interaction"; Editor-in-Chief of the International Journal of Social Robotics; Associate Editor of Frontiers in Psychology; and President-elect of the European Society for Cognitive and Affective Neuroscience (ESCAN). Research focused on human-robot interaction, cognitive and social neuroscience, and intentional cognition.

Associated Talk: Using robots to study mechanisms of human cognition

Jacob Andreas

X Consortium Assistant Professor at MIT. Research focused on understanding the computational foundations of efficient language learning and building general-purpose intelligent systems that can communicate effectively with humans and learn from human feedback.

Associated Talk: Implicit representations of meaning in neural language models

Claire Sergent

Professor and researcher in the Integrative Neuroscience and Cognition Center at the Université de Paris. Research focused on perception, attention, and consciousness using cognitive psychology, EEG, MEG, and fMRI.

Associated Talk: Brain dynamics associated with conscious processing

Agenda

Day 1, Thursday, September 23 (CEST)

Welcome
Agnieszka Wykowska

Using robots to study mechanisms of human cognition

Speaker: Agnieszka Wykowska, Professor
Affiliation: Italian Institute of Technology, Italy

Robots are receiving increasing attention in scientific areas beyond robotics. The field of human-robot interaction, for example, focuses not only on developing new technological solutions for robots but also on how humans interact with such artificial entities. In my research, I use robots as sophisticated stimuli for examining the mechanisms of human cognition. Robots allow for more ecological validity than screen-based stimuli while at the same time affording excellent experimental control. In this talk, I will present a series of studies that used the humanoid robot iCub to address specific cognitive mechanisms, such as attention, cognitive control, decision making, and theory of mind. The results of these studies reveal that the social component should be taken into account in models of cognition, specifically of attention and cognitive control. This might also inspire the way in which artificial cognitive architectures are built.
Break
Johannes Niediek

Contributed talk: Understanding rat behavior in a naturalistic task via non-deterministic policies

Speaker: Johannes Niediek, Postdoctoral researcher
Affiliation: The Hebrew University of Jerusalem, Israel
Collaborators: Maciej M Jankowski, Ana Polterovich, Alex Kazakov and Israel Nelken

Jasmin Walter

Contributed talk: Finding landmarks – an investigation of viewing behavior during spatial navigation in VR using a graph-theoretical analysis approach

Speaker: Jasmin Walter, PhD student
Affiliation: Osnabrück University, Germany
Collaborators: Lucas Essmann, Sabine U. König and Peter König

Lunch break
Claire Sergent

Brain dynamics associated with conscious processing

Speaker: Claire Sergent, Professor
Affiliation: Université de Paris, France

Using experimental psychology and neuroimaging, my team and I investigate the brain dynamics associated with conscious processing of sensory stimuli. I will present empirical evidence suggesting that conscious processing might specifically relate to a bifurcation in global brain activity following the first 200 ms of sensory processing. Our results also suggest that, in contrast to the first stages of sensory processing, the onset of these “conscious” mechanisms can be quite flexible in time. I will discuss how these findings might help update the global neuronal workspace model of conscious access.
Break
Pau Vilimelis Aceituno

Contributed talk: Emergent resonances in recurrent neural networks and their effects on learning

Speaker: Pau Vilimelis Aceituno, Postdoctoral researcher
Affiliation: ETH Zurich, Switzerland

Huang Ham

Contributed talk: Social Context Shapes Value Representation during Learning

Speaker: Huang Ham, Research Specialist
Affiliation: University of Pennsylvania, USA
Collaborator: Adrianna Jenkins

Coffee break & chat in Gathertown

Poster session in Gathertown

Richard Baraniuk

Deep Network Spline Geometry

Speaker: Richard Baraniuk, Professor
Affiliation: Rice University, USA

We study the geometry of deep learning through the lens of approximation theory via spline functions and operators. Our key result is that a large class of DNs can be written as a composition of max-affine spline operators (MASOs), which provide a powerful portal through which to view and analyze their inner workings. For instance, conditioned on the input signal, the output of a MASO DN can be written as a simple affine transformation of the input. This implies that a DN constructs a set of signal-dependent, class-specific templates against which the signal is compared via a simple inner product; we explore the links to the classical theory of optimal classification via matched filters and the effects of data memorization. The spline partition of the input signal space that is implicitly induced by a MASO directly links DNs to the theory of vector quantization (VQ) and K-means clustering, which opens up a new geometric avenue to study how DNs organize signals in a hierarchical and multiscale fashion.
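For readers unfamiliar with the max-affine spline view, the core fact is easy to verify on a toy network: a ReLU network is piecewise affine, so conditioned on an input x it computes f(x) = A_x x + b_x, where A_x and b_x depend only on which units are active at x. The sketch below is our illustration with random placeholder weights, not the speaker's code; it recovers this signal-dependent affine map for a one-hidden-layer network.

```python
# Minimal sketch (illustrative, not the speaker's code): a ReLU network is
# piecewise affine, so conditioned on an input x it acts as f(x) = A_x x + b_x.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)  # placeholder weights
W2, b2 = rng.standard_normal((3, 16)), rng.standard_normal(3)

def f(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU: a max-affine spline with two pieces
    return W2 @ h + b2

x = rng.standard_normal(4)
mask = (W1 @ x + b1 > 0).astype(float)   # which spline region x falls in
A_x = W2 @ (mask[:, None] * W1)          # signal-dependent "template" matrix
b_x = W2 @ (mask * b1) + b2
assert np.allclose(f(x), A_x @ x + b_x)  # f is exactly affine at x
```

The rows of A_x play the role of the class-specific templates mentioned above: the network's output for x is just an inner product of x with each template, plus an offset.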
Session close of Day 1

Day 2, Friday, September 24 (CEST)

Andrea E. Martin

Boundary conditions for language in biological and artificial neural systems

Speaker: Andrea E. Martin, Group Leader / PI
Affiliation: Max Planck Institute for Psycholinguistics, Netherlands

Human language is a fundamental biological signal with computational properties that are markedly different from those of other perception-action systems: hierarchical relationships between sounds, words, phrases, and sentences, and the unbounded ability to combine smaller units into larger ones. These and other formal properties have long made language difficult to account for from a biological-systems perspective and within models of cognition. I focus on this foundational puzzle (essentially, what does a system need in order to represent information that is both algebraic and statistical?) and discuss the computational requirements, including the role of neural oscillations across time, for what I believe is necessary for a system to represent language. I build on examples from cognitive neuroimaging data and computational simulations, and outline a developing theory that integrates basic insights from linguistics and psycholinguistics with the currency of neural computation, in turn demarcating the boundary conditions for artificial systems making contact with human language.
Break

Poster session in Gathertown

Lunch break
Samuel David Jones

Contributed talk: Under-resourced or overloaded? Rethinking working memory deficits in developmental language disorder

Speaker: Samuel David Jones, Senior Researcher
Affiliation: Lancaster University, UK
Collaborator: Gert Westermann

Lucas Castillo

Contributed talk: Human Random Generation as a Locally-Bound Process

Speaker: Lucas Castillo, PhD student
Affiliation: University of Warwick, UK
Collaborators: Pablo León-Villagrá, Nick Chater and Adam Sanborn

Break
Jacob Andreas

Implicit representations of meaning in neural language models

Speaker: Jacob Andreas, Assistant Professor
Affiliation: MIT, USA

Neural language models are trained on text corpora to place probability distributions over sequences of words. They produce representations of language that have led to dramatic improvements in downstream tasks as diverse as translation, question answering, and image captioning. Language models' usefulness is partly explained by the fact that they robustly encode aspects of linguistic structure, including syntactic categories and dependency relations. But the extent to which language modeling induces representations of meaning (and the broader question of whether it is even possible in principle to learn about meaning from text alone) has remained a subject of ongoing debate in NLP and linguistics. I'll describe recent work showing that current neural language models build structured representations of meaning that simulate entities and situations as they evolve throughout a discourse. These representations can be linearly decoded into formal descriptions of semantic state analogous to the "file cards" of Heim (1983) and the discourse representation structures of Kamp (1984). They can be directly manipulated to produce predictable changes in generated text, and supervised to improve generation quality. Together, these results suggest that the effectiveness of the modern NLP toolkit stems in part from its ability to learn some aspects of meaning with only linguistic form as training data.
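As a concrete illustration of what "linearly decoded" means here, the sketch below fits a linear probe on hidden states. The arrays `hidden_states` and `state_labels` are hypothetical random stand-ins for LM activations and per-token semantic-state annotations, so this shows the general probing recipe rather than the speaker's experiments.

```python
# Minimal linear-probe sketch (general technique, not the speaker's code).
# hidden_states and state_labels are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.standard_normal((1000, 64))  # stand-in for LM activations
state_labels = rng.integers(0, 3, size=1000)     # stand-in semantic-state labels

X_tr, X_te, y_tr, y_te = train_test_split(
    hidden_states, state_labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # linear decoder
print("probe accuracy:", probe.score(X_te, y_te))
```

If real activations carried semantic state linearly, the probe would score well above chance; on the random placeholders here it stays near chance by construction.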
Panel discussion

With Andrea Martin, Jacob Andreas, Agnieszka Wykowska, and Tom Griffiths

Best poster award of 250€ kindly sponsored by halocline

Break
Sarah Fabi

Contributed talk: Fostering Compositionality in Generative RNNs to Solve the Omniglot challenge

Speaker: Sarah Fabi, Postdoctoral researcher
Affiliation: University of Tübingen, Germany
Collaborators: Sebastian Otte and Martin V. Butz

Tom Griffiths

Understanding human intelligence through human limitations

Speaker: Tom Griffiths, Professor
Affiliation: Princeton University, USA

As machines continue to exceed human performance in a range of tasks, it is natural to ask how we might think about human intelligence in a future populated by superintelligent machines. One way to do this is to think about the unique computational problems posed by human lives, and in particular by our finite computational resources and finite lifespan. Thinking in these terms highlights two problems: making efficient use of our cognitive resources, and being able to learn from limited amounts of data. It also sets up a third problem: solving computational problems beyond the scale of any one individual. I will argue that these three problems pick out the key characteristics of human intelligence, and highlight some recent progress in understanding how human minds solve them.
Session close & Goodbye
Virtual happy hour in Gathertown

Poster sessions

Dear participants, you are free to prepare your poster presentation in one of the following ways:
1. The “classical” poster format (a single big slide which you can zoom in and out of), or
2. Multiple separate slides (we recommend no more than 5 slides).

Day 1, Thursday

Temporal Distortion Related Connectivity Mapping in Auditory Event Recognition

Bora Çelebi, Alp Tuna, Ahmet Mete Karayaka, Filiz Tezcan and Funda Yildrim

Implausible alternatives paradoxically increase confidence in a perceptual decision

Nicolás A. Comay, Gabriel Della Bella, Mariano Sigman, Guillermo Solovey and Pablo Barttfeld

A Framework to Infer Movement Planning from Observed Trajectories using Inverse Planning

Paulina Friemann, Joschka Boedecker and Andrew D. Straw

(La)Place Cells for Robot Navigation

Howard Goldowsky

Explaining empirical data of speaker’s use of conditionals with a probabilistic model

Britta Grusdt and Michael Franke

Toward Learning-Aided Interactive and Inclusive Robot Museum docent

Tribhi Kathuria, X. Jessie Yang, and Maani Ghaffari Jadidi

To understand is to predict: machine learning identifies low-frequency entrainment to visual stimuli as the basis of sign language comprehension via predictive processing

Evie A. Malaia, Julia Krebs, Sean Borneman and Ronnie Wilbur

Irrelevant robot social signals

Lorenzo Parenti, Abdulaziz Abubshait, Jairo Perez Osorio and Agnieszka Wykowska

Prospective Temporal Locations Tracked by Neural Power Modulations and Captured by Recurrent Neural Networks

Xiangbin Teng and Ru-Yuan Zhang

Learning Hidden Causal Structure from Event Sequences

Simon Valentin, Neil R. Bramley and Christopher G. Lucas

Unsupervised text segmentation predicts eye fixations during reading

Jinbiao Yang, Antal van den Bosch and Stefan L. Frank

Day 2, Friday

Adaptive and Satisficing Cognition for Theory of Mind in Interaction

Jan Pöppel and Stefan Kopp

Expectation Adaptation Models Hindi Preverbal Constituent Ordering

Sidharth Ranjan, Rajakrishnan Rajkumar and Sumeet Agarwal

ToM (Theory of Mind)-ML: Machine Learning predicts Mentalization

Varad Srivastava and Minaxi Goel

Modelling the acquisition of grammars with STDP

Sophie Lehfeldt, Jutta L. Mueller and Gordon Pipa

Investigating factors that influence human visual attention on city car rides

Marc Vidal De Palol, Debora Nolte, Peter König and Gordon Pipa