Lassonde Building Room 0003F
4700 Keele St
Toronto, ON M3J 1P3
Canada
Office: +1 416 736 2100 ext. 22932
Fax: +1 416 736 5814
A central theme of my work is the interplay between higher-level visual processes and low-level feature encoding. I study this by combining behavioural and neural measurements with computational modelling and quantitative characterizations of the environment in which an organism acts.
Visual processing often operates in a recurrent fashion. For example, I showed that paying attention to different task requirements can modulate some of the first cortical responses to a visual stimulus (e.g. Fründ et al, 2008), and that the temporal context in which a simple visual stimulus is presented influences how an observer responds to that stimulus (Fründ et al, 2014). Stimulus representations at intermediate levels of the visual hierarchy are ideal for studying this interplay of low-level feature encoding and high-level visual cognition. I was able to demonstrate that the visual system of primates is remarkably well adapted to the statistical properties of curvature in natural images (Fründ & Elder, 2013, 2014). Yet, although curvature is an important characteristic of two-dimensional shapes in our environment, the shapes we perceive are more than a loose collection of curved line segments. It is likely that recurrent processing is a core neural mechanism for representing the complex and highly non-linear inter-dependencies in natural images that we perceive as objects and shapes.
Understanding the human visual system becomes harder and harder as tasks and stimuli become more complex. Despite limited understanding of the human visual system, recent advances in machine vision have allowed artificial systems to achieve impressive performance in complex visual tasks such as large scale categorization or image captioning. This is made possible by using a class of very flexible machine learning algorithms referred to as artificial neural networks, and in particular by developments that allow these algorithms to mimic some of the hallmarks of processing in the brain.
My current research focusses on what the study of human perception can learn from the success of these models, and on how insights from these models can deepen our understanding of human perception and the neural processes underlying vision.
When needed, the perception of objects from simple shapes can be very fast. As part of my doctoral thesis, I observed that even one of the very first responses in the human electroencephalogram discriminates between meaningful and meaningless shapes.
However, what exactly makes a "shape"? What features do observers use to answer this question? I address these questions in an ongoing collaboration with James Elder. Together, we developed a class of generative statistical models for shapes that occur in natural images, such as photographs. We can adapt these models to match natural shapes with respect to a well-defined set of features while remaining maximally random otherwise. Being generative, these models allow us to synthesize shapes that match natural shapes with respect to the features represented by the distribution. Using psychophysical measurements and ideal observer modelling, we showed that humans are sensitive to local contour properties of shapes but most likely also use global properties to discriminate between different shapes and to segment coherent shapes from random backgrounds.
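As a toy illustration of this style of generative model, one can sample a random contour by drawing turning angles from a von Mises distribution whose parameters stand in for a matched natural-shape statistic. This is only a minimal sketch under made-up assumptions, not the actual model from this work, which constrains a richer, well-defined feature set; all function names and parameter values are illustrative.

```python
import numpy as np

def sample_contour(n_segments=50, kappa=4.0, seed=0):
    """Sample a random contour as a chain of unit-length segments whose
    turning angles follow a von Mises distribution.  The concentration
    kappa is a stand-in for matching natural turning-angle statistics."""
    rng = np.random.default_rng(seed)
    # A closed convex polygon has a mean turning angle of 2*pi/n.
    mu = 2 * np.pi / n_segments
    turns = rng.vonmises(mu, kappa, size=n_segments)
    # Integrate turning angles to get segment orientations,
    # then integrate unit steps to get vertex positions.
    orientations = np.cumsum(turns)
    steps = np.stack([np.cos(orientations), np.sin(orientations)], axis=1)
    vertices = np.cumsum(steps, axis=0)
    return vertices

contour = sample_contour()
```

Larger values of `kappa` concentrate the turning angles around the mean and so yield smoother, more circle-like contours; smaller values yield more jagged, random ones.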
Behavioural studies of visual perception typically present a sequence of images to an observer (by observer, vision scientists mean either a human or potentially also an animal). Observers often report that they adapt their response behaviour over the time course of a psychophysical experiment. In other words, the response of an observer depends on the image that the observer currently sees and on everything that has happened so far in the experiment, including the observer's own previous responses.
This differs from one of the standard assumptions in virtually all models of visual perception: these models assume that all responses in an experiment are independent realizations of a corresponding random variable. We used one of the simplest models of visual perception to study the impact of violating this independence assumption. The psychometric function is routinely used in psychophysical studies of visual perception to quantify the sensitivity or bias of observers. We showed that these violations can indeed result in incorrect inferences about psychometric functions, and we proposed a very generic way to correct for these errors.
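The standard approach in question can be sketched as a maximum-likelihood fit of a psychometric function to binomial data, under the usual assumption that every trial is independent. The sketch below is a generic illustration, not the specific correction proposed in this work; the simulated data, parameter values, and the choice of a cumulative Gaussian with a fixed 2AFC guess rate are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, intensity, n_correct, n_trials):
    """Negative log-likelihood of a cumulative-Gaussian psychometric
    function for 2AFC data (guess rate 0.5, no lapses), assuming
    independent binomial trials.  sigma is fit on a log scale."""
    mu, log_sigma = params
    p = 0.5 + 0.5 * norm.cdf(intensity, loc=mu, scale=np.exp(log_sigma))
    p = np.clip(p, 1e-6, 1 - 1e-6)  # guard against log(0)
    return -np.sum(n_correct * np.log(p)
                   + (n_trials - n_correct) * np.log(1 - p))

# Simulated 2AFC data: proportion correct rises with stimulus intensity.
rng = np.random.default_rng(1)
intensity = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n_trials = np.full(5, 50)
true_p = 0.5 + 0.5 * norm.cdf(intensity, loc=2.0, scale=1.5)
n_correct = rng.binomial(n_trials, true_p)

fit = minimize(neg_log_likelihood, x0=[1.0, 0.0],
               args=(intensity, n_correct, n_trials),
               method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

When responses are serially dependent, the data are effectively less informative than this binomial likelihood assumes, which is why confidence intervals derived from such a fit can be misleadingly narrow.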
More recently, we built a model that combines the current stimulus with events on previous trials. This model allows us to tease apart the effects of the current stimulus and of events on previous trials. We observe that effects from previous trials are very heterogeneous but at the same time very strong: on difficult trials, the recent experimental history is nearly as good a predictor as the current trial. This contradicts the naive assumption that our perception is mainly a representation of the environment. It suggests that our perception is rather a combination of the world around us and our own assumptions and expectations about this world.
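In spirit, such a model can be sketched as a logistic regression that predicts each binary response from the current stimulus plus a regressor coding the previous response. The sketch below simulates a session with a built-in history effect and then recovers both weights; it is a simplified stand-in for the actual model, and all variable names and values are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simulate a session in which each binary response depends on the
# current signed stimulus AND on the previous response (history effect).
rng = np.random.default_rng(0)
n_trials = 2000
w_stim, w_hist = 2.0, 1.0  # ground-truth weights for the simulation
stimulus = rng.choice([-1.0, -0.2, 0.2, 1.0], size=n_trials)
responses = np.zeros(n_trials)
for t in range(n_trials):
    hist = 2.0 * responses[t - 1] - 1.0 if t > 0 else 0.0
    p = sigmoid(w_stim * stimulus[t] + w_hist * hist)
    responses[t] = float(rng.random() < p)

# Design matrix: current stimulus plus a previous-response regressor,
# coded as -1/+1 (0 on the first trial, which has no history).
prev_resp = np.concatenate([[0.0], 2.0 * responses[:-1] - 1.0])
X = np.column_stack([stimulus, prev_resp])

# Fit both weights by gradient ascent on the logistic log-likelihood.
w_hat = np.zeros(2)
for _ in range(1000):
    p = sigmoid(X @ w_hat)
    w_hat += X.T @ (responses - p) / n_trials
```

With 2000 simulated trials, the recovered weights should land close to the generating values: the stimulus weight dominates, but the history weight is clearly non-zero, mirroring the finding that recent trials are strong predictors of responses.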
In the fall 2017 term, I am teaching "Sensation and Perception" (PSYC 2220B) and "Intermediate Research Methods" (PSYCH 3010).
In "Sensation and Perception" we study how our mind creates the experience of a coherent world from the physical phenomena that arrive at our receptors. After a brief introduction to perception in general, the course focusses on vision from both a behavioural as well as a neuroscientific perspective.
Assessment of learning progress is achieved through a combination of brief, weekly online tests and a written final exam in which students summarize their learning outcomes.
In "Intermediate Research Methods", we address questions about the operationalization of theoretical concepts and the formulation of a research question, and we discuss aspects of planning an experiment, proper experimental control, and the appropriate embedding of a study in the context of prior research. Students practise these skills by designing and executing a small research study of their own.
Assessment of learning progress is achieved through a series of written assignments that cover the study's motivation, design, critique of other students' writing, and discussion of results and limitations.
I am an Assistant Professor of Computational Neuroscience at York University, Toronto, ON. I am also a member of the Centre for Vision Research and an associate member of the Vision: Science To Applications (VISTA) program at York.
I accept students from the graduate programs in Psychology (PhD, MSc) and Computer Science (MSc).
My research combines behavioural and neural data with computational modeling to understand how high level visual processes interact with low level feature encoding.
I received my PhD in Christoph Herrmann's lab and then went on to do postdoctoral work with Felix Wichmann and James Elder. Before taking on my current position at York, I worked as a Data Scientist at Zalando and later as an A.I. engineer at Twenty Billion Neurons.
I am an undergraduate student majoring in Psychology and minoring in Kinesiology at York University. My interest in perception research stems from a background in photography, as well as a curiosity about visual perception.
I am a BSc Psychology student at York University. I'm interested in the neural activity underlying visual processing. My previous research experience is in behavioural studies, working at Baycrest Hospital with seniors with varying degrees of dementia.