New results from AUSMURI project

We’re excited to report a number of new results and publications from the AUSMURI project.

November 2020

Learning receptive field properties of complex cells in V1

Yanbo Lian, Ali Almasi, David B. Grayden, Tatiana Kameneva, Anthony N. Burkitt, Hamish Meffin, “Learning receptive field properties of complex cells in V1”, DOI: 10.1101/2020.05.18.101873

This paper presents new results on learning the properties of complex cells in the primary visual cortex, showing that a biologically based learning model can account for the experimental data on complex cells. Together with our previous work demonstrating how simple cells in the primary visual cortex can be learnt using efficient coding, this work provides a strong basis for understanding the structure and function of the primary visual cortex, and thereby the foundation for how vision is processed in the brain.

Learning an efficient place cell map from grid cells using non-negative sparse coding

Yanbo Lian, Anthony N. Burkitt, “Learning an efficient place cell map from grid cells using non-negative sparse coding”, DOI: 10.1101/2020.08.12.248534

These new results on learning an efficient place map using the principle of sparse coding demonstrate the importance of sparse coding in understanding the brain: it is an underlying principle not only of the processing of visual information, but also of the processing of spatial information in the navigational system of the brain.
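For readers curious about the underlying idea, here is a minimal sketch of non-negative sparse coding in Python. It is purely illustrative: the dimensions, learning rates and the simple projected-gradient updates are assumptions made for this example, not the model used in the paper. The columns of the dictionary A play the role of learnt fields, and the non-negative sparse code s stands in for place-cell activity driven by grid-cell input.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_place = 100, 50            # grid-cell inputs and place-cell units (assumed sizes)
lam, lr_s, lr_a = 0.1, 0.01, 0.001   # sparsity weight and learning rates (assumed)

A = rng.random((n_grid, n_place))    # dictionary; columns become the learnt fields
X = rng.random((n_grid, 2000))       # stand-in for grid-cell activity patterns

for x in X.T:
    # Infer a non-negative sparse code s for this input by projected gradient descent
    s = np.zeros(n_place)
    for _ in range(100):
        grad = A.T @ (A @ s - x) + lam        # reconstruction error plus L1 penalty
        s = np.maximum(s - lr_s * grad, 0.0)  # non-negativity enforced by projection
    # Hebbian-like dictionary update, then project and renormalise the columns
    A = np.maximum(A + lr_a * np.outer(x - A @ s, s), 0.0)
    A /= np.maximum(np.linalg.norm(A, axis=0, keepdims=True), 1e-12)
```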

Predictive visual motion extrapolation emerges spontaneously and without supervision from a layered neural network with spike-timing-dependent plasticity

Anthony N. Burkitt, Hinze Hogendoorn, “Predictive visual motion extrapolation emerges spontaneously and without supervision from a layered neural network with spike-timing-dependent plasticity”, DOI: 10.1101/2020.08.01.232595

This paper sheds new light on our ability to track and respond to rapidly changing visual stimuli, such as a fast-moving tennis ball. This ability indicates that the brain is capable of extrapolating the trajectory of a moving object in order to predict its current position, despite the delays that result from neural transmission. In this study we show how the neural circuits underlying this ability can be learned through spike-based learning, and that these circuits emerge spontaneously and without supervision.
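As a flavour of the learning rule involved, the snippet below sketches a standard pair-based STDP update in Python. It is only illustrative: the amplitudes, time constants, hard weight bounds and the all-pairs accumulation are assumptions made for this example, not the network or parameters used in the paper.

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012        # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0     # STDP time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre in ms."""
    dt = t_post - t_pre
    if dt >= 0:                                 # pre before post: potentiation
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)    # post before pre: depression

# Example: accumulate the updates over all spike pairs seen by one synapse
pre_spikes = [10.0, 55.0, 120.0]
post_spikes = [12.0, 50.0, 130.0]
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))
print(w)
```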

Event-based visual place recognition with ensembles of temporal windows

Tobias Fischer, Michael Milford, “Event-based visual place recognition with ensembles of temporal windows”, IEEE Robotics and Automation Letters (vol. 5, issue 4, pages 6924–6931, 2020). DOI: 10.1109/LRA.2020.3025505

In this article we dived deeper into the relationship between temporal windows and place recognition performance, finding that performance is highly dependent on the environment, conditions, camera motion and other factors. We developed a new ensemble scheme (both a full and a computationally efficient ‘approximate’ version) that achieved consistently superior performance compared with single-window and conventional model-based ensemble approaches.
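To give a rough idea of the ensemble scheme, the sketch below combines place-matching distance matrices computed at several temporal window lengths. It is a simplified illustration only: the per-pixel event-count descriptor, the 32x32 sensor size, the window lengths and the normalise-and-average fusion are assumptions made for this example, not the pipeline described in the paper.

```python
import numpy as np

def descriptor(events, window, t_ref):
    """Crude per-place descriptor: per-pixel counts of the events that fall in the
    last `window` seconds before t_ref, assuming a 32x32 event sensor."""
    img = np.zeros(32 * 32)
    for t, x, y in events:           # each event is a (timestamp, x, y) tuple
        if t_ref - window <= t <= t_ref:
            img[y * 32 + x] += 1
    return img / max(img.sum(), 1e-9)

def ensemble_match(query_places, ref_places, windows=(0.05, 0.1, 0.2)):
    """query_places / ref_places: lists of (events, t_ref) pairs, one per place.
    Returns the best-matching reference index for every query place."""
    fused = np.zeros((len(query_places), len(ref_places)))
    for w in windows:                # one ensemble member per temporal window length
        q = np.array([descriptor(ev, w, t) for ev, t in query_places])
        r = np.array([descriptor(ev, w, t) for ev, t in ref_places])
        d = np.linalg.norm(q[:, None, :] - r[None, :, :], axis=2)
        fused += d / max(d.max(), 1e-9)   # normalise each member before fusing
    return fused.argmin(axis=1)
```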

Bioinspired bearing only motion camouflage UAV guidance for covert video surveillance of a moving target

Andrey V. Savkin, Hailong Huang, “Bioinspired bearing only motion camouflage UAV guidance for covert video surveillance of a moving target”, IEEE Systems Journal, pages 1–4, 2020.

This paper focuses on video monitoring of a moving ground target using an unmanned aerial vehicle (UAV). The aim is to navigate the UAV so that it is able to monitor the target while concealing its apparent motion with respect to the target’s visual system. A computationally simple sliding mode closed-loop control algorithm is developed that mimics the motion camouflage stealth behavior observed in some attacking animals. The proposed guidance law is based on bearing-only measurements and does not require any information on the target’s velocity or the distance to the target. Simulations are conducted to demonstrate the effectiveness of the proposed method.
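The toy simulation below illustrates the flavour of a bearing-only guidance loop in Python. It is not the control law derived in the paper: the gains, speeds, planar kinematics and the simple sign-based turn-rate rule are assumptions made for this example, and it only drives the measured bearing rate towards zero, without the standoff-distance and camera-footprint considerations addressed in the paper.

```python
import numpy as np

dt, gain = 0.05, 2.0                 # time step (s) and turn-rate gain (assumed)
v_uav, v_tgt = 12.0, 5.0             # UAV and target speeds in m/s (assumed)
uav, tgt = np.array([0.0, 0.0]), np.array([100.0, 40.0])
heading = 0.0

def bearing(p_from, p_to):
    d = p_to - p_from
    return np.arctan2(d[1], d[0])    # line-of-sight angle, the only measurement used

prev_los = bearing(uav, tgt)
for _ in range(2000):
    tgt = tgt + dt * np.array([v_tgt, 0.0])                 # target moves east
    los = bearing(uav, tgt)
    d_los = (los - prev_los + np.pi) % (2 * np.pi) - np.pi  # wrapped bearing change
    prev_los = los
    # Sliding-mode-style turn rate: turn in the direction the line of sight rotates,
    # pushing the bearing rate (and hence the UAV's apparent motion) towards zero.
    heading += dt * gain * np.sign(d_los)
    uav = uav + dt * v_uav * np.array([np.cos(heading), np.sin(heading)])
```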