D-131 | Integrating Ideal Bayesian Searcher and Neural Networks Models for Eye Movement Prediction in a Hybrid Search Task

SAN 2024 Annual Meeting

Theoretical and Computational Neuroscience
Author: Gonzalo Ruarte | Email: gruarte@dc.uba.ar


Gonzalo Ruarte¹,², Gastón Bujía¹,², Damián Care, Matías Ison, Juan Kamienkowski¹,²,⁴

1. CONICET – Universidad de Buenos Aires. Instituto en Ciencias de la Computación (ICC). Laboratorio de Inteligencia Artificial Aplicada (LIAA). Buenos Aires, Argentina.
2. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Departamento de Ciencias de la Computación. Buenos Aires, Argentina.
3. University of Nottingham, Nottingham, United Kingdom.
4. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Maestría en Explotación de Datos y Descubrimiento del Conocimiento. Buenos Aires, Argentina.

Visual search, in which observers look for a specific item, is a crucial aspect of daily human interaction with the visual environment. Hybrid search extends this by requiring observers to search for any item from a memorized set. While there are models that simulate human eye movements during visual search in natural scenes, none has been extended to memory search tasks. In this work, we present an improved version of the Ideal Bayesian Searcher model based on the Entropy Limit Minimization (ELM) model that not only outperforms previous models in visual search but is also capable of performing hybrid search tasks. Briefly, by adjusting the model's peripheral visibility, we made early search stages more efficient and closer to human behavior. Additionally, limiting the model's memory reduced its success in longer searches, mirroring human performance. The key challenge in hybrid search is that participants might search for a different object at each step. To address this, we developed target selection strategies. We tested the model on the VISIONS benchmark (https://github.com/NeuroLIAA/visions) and against human participants performing a novel hybrid search task with natural-scene backgrounds. Altogether, our improved model not only performs hybrid search tasks but also behaves in close alignment with human performance across both tasks, advancing our understanding of the complex processes underlying visual search while maintaining interpretability.
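The core loop of an ELM-style Bayesian searcher can be illustrated with a minimal sketch: maintain a posterior over target locations on a grid, fixate where the visibility-weighted posterior mass is largest, and down-weight the inspected region after a miss. The grid size, square visibility window, miss probability, and the max-combination rule for the hybrid-search memory set are illustrative assumptions for exposition, not the exact implementation evaluated in this work.

```python
import numpy as np

def elm_next_fixation(posterior, visibility_radius=1):
    """ELM-style rule (sketch): fixate the location whose visible
    neighborhood carries the most posterior probability of containing
    the target. The square window is a stand-in for a peripheral
    visibility map."""
    rows, cols = posterior.shape
    gain = np.zeros_like(posterior)
    r = visibility_radius
    for i in range(rows):
        for j in range(cols):
            gain[i, j] = posterior[max(0, i - r):i + r + 1,
                                   max(0, j - r):j + r + 1].sum()
    return np.unravel_index(np.argmax(gain), gain.shape)

def bayesian_update(prior, fixation, visibility_radius=1, miss_prob=0.3):
    """Ideal Bayesian step after a miss: down-weight locations inside
    the visible window by the (assumed) miss probability, renormalize."""
    post = prior.copy()
    i, j = fixation
    r = visibility_radius
    post[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1] *= miss_prob
    return post / post.sum()

def hybrid_posterior(per_target_posteriors):
    """One hypothetical target-selection strategy for hybrid search:
    follow the currently most promising memory-set item by taking the
    maximum posterior across targets at each location, renormalized."""
    combined = np.max(np.stack(per_target_posteriors), axis=0)
    return combined / combined.sum()
```

For example, starting from a near-uniform prior with a single salient peak, the selected fixation lands within one cell of the peak, and the post-miss posterior at the peak shrinks while remaining a proper distribution.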
