Abstract
The binding problem—how the visual system links features like color, shape, and location into coherent object representations—remains a foundational challenge in both neuroscience and artificial intelligence. Inspired by the dual-stream theory of visual processing, this study investigates whether spatial constraint maps outperform non-spatial maps (e.g., luminance and orientation) in supporting accurate feature binding. We conducted a mixed-methods study involving a behavioral experiment with 36 participants and a computational simulation using dual-pathway convolutional neural networks. Participants completed a visual matching task under two conditions: one with a spatial map and the other with a non-spatial map. Results showed significantly higher accuracy (92.6% vs. 84.2%), faster reaction times (615 ms vs. 748 ms), and fewer misbinding errors (3.2% vs. 9.5%) in the spatial map condition. Computational models mirrored this pattern: a spatial-aware binding network (SABN) achieved superior performance and attributed 67% of its decision weight to spatial features. Simulated neural activations revealed increased engagement in the parietal cortex during spatial binding. These findings align with previous neuroscientific and AI research, affirming that spatial constraints play a central role in solving the binding problem. The study advances a scalable and biologically plausible framework for visual feature integration.
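To make the dual-pathway idea concrete, the following is a minimal, hypothetical sketch of such an architecture in Python (PyTorch): one pathway receives the stimulus together with a spatial constraint map as an extra channel, the other receives only non-spatial feature maps, and a fusion head combines both for the matching decision. The class name (DualPathwayNet), channel counts, layer sizes, and fusion scheme are assumptions for illustration only, not the SABN implementation used in the study.

```python
# Hypothetical sketch of a dual-pathway CNN; not the authors' SABN.
import torch
import torch.nn as nn

class DualPathwayNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # "Where" pathway: image (3 channels) plus a 1-channel spatial constraint map.
        self.spatial_path = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # "What" pathway: non-spatial feature maps (e.g., luminance and orientation).
        self.feature_path = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fusion head: concatenated pathway features drive the matching decision.
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, image_with_spatial_map, nonspatial_maps):
        s = self.spatial_path(image_with_spatial_map).flatten(1)
        f = self.feature_path(nonspatial_maps).flatten(1)
        return self.classifier(torch.cat([s, f], dim=1))

# Example forward pass with dummy 64x64 inputs (batch of 8).
model = DualPathwayNet()
logits = model(torch.randn(8, 4, 64, 64), torch.randn(8, 2, 64, 64))
```

Under this kind of setup, the relative contribution of each pathway to the decision can be estimated post hoc (for example, via feature-attribution methods), which is the sense in which a decision-weight percentage such as the 67% reported above can be assigned to spatial features.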
