
Can machines be taught to see like humans?

There are robots that walk like humans and computer algorithms that mimic human behavior. But what about machines that see like humans? Building them is the goal of a project currently under way at the USC School of Engineering.

To develop new algorithms for visual processing, the researchers learn from the human brain and try to transfer two of its principles to machines: top-down attention and compositionality. The former refers to the decision tree we automatically run through when looking for something. For example, if you want to find a stapler in an office, you look for it on the desk or the table, not at the ceiling. Compositionality refers to our hierarchical way of recognizing objects: you know what a wheel is, and you can recognize one whether it belongs to a bike or a car.
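To make the top-down attention idea concrete, here is a minimal sketch in Python. The location names, prior probabilities, and scoring scheme are invented for illustration and are not taken from the published work; the point is only that candidate regions are visited in order of how likely they are to contain the target, so the desk is checked before the ceiling.

```python
# Toy illustration of top-down attention: visit scene regions in order of a
# prior over where the target object tends to appear. All priors and region
# names below are assumptions made up for this example.

def top_down_search(target, location_priors, scene):
    """Visit scene regions in decreasing prior probability for `target`."""
    # Rank candidate locations by how plausible they are for this object.
    ranked = sorted(location_priors[target].items(),
                    key=lambda kv: kv[1], reverse=True)
    for location, prior in ranked:
        print(f"checking {location} (prior {prior:.2f})")
        if target in scene.get(location, []):
            return location
    return None

# Hypothetical priors: a stapler is expected on a desk, almost never on the ceiling.
location_priors = {"stapler": {"desk": 0.6, "table": 0.3, "floor": 0.09, "ceiling": 0.01}}
scene = {"desk": ["lamp"], "table": ["stapler", "mug"]}

print(top_down_search("stapler", location_priors, scene))  # checks desk, then finds it on the table
```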

Both of these tasks are difficult for a computer; today's recognition software is mostly task-specific. The current research therefore focuses on compiling a dictionary of basic visual components and writing algorithms that define how these components can combine to form different objects. The long-range objective is a smart camera that approaches the cognitive abilities of the human cortex.
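A minimal sketch of that compositional dictionary, again with invented part names and object rules rather than the researchers' actual vocabulary: a shared set of parts plus rules stating which combinations form which objects, so a single part detector, say for a wheel, contributes to recognizing several objects.

```python
# Toy compositional recognizer: objects are defined as combinations of parts
# drawn from a shared dictionary. Parts and rules are illustrative only.

PART_DICTIONARY = {"wheel", "frame", "handlebar", "chassis", "windshield"}

# Composition rules: an object is recognized once all its required parts are found.
OBJECT_RULES = {
    "bike": {"wheel", "frame", "handlebar"},
    "car": {"wheel", "chassis", "windshield"},
}

def recognize(detected_parts):
    """Return all objects whose required parts were all detected."""
    parts = set(detected_parts) & PART_DICTIONARY
    return [obj for obj, required in OBJECT_RULES.items() if required <= parts]

print(recognize(["wheel", "frame", "handlebar"]))     # ['bike']
print(recognize(["wheel", "chassis", "windshield"]))  # ['car']
```

The design point is reuse: the wheel appears once in the dictionary but in both object definitions, mirroring how a person who knows what a wheel is can recognize it on a bike or a car.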
Thomas Jagau
Read more:

– A. Borji, D. N. Sihite, L. Itti, Vision Research 91, 62 (2013).