
Putting the ‘eye’ in AI: what can computers teach us about human vision?

Studying how AI systems process visual information could help us understand our own visual system.
Zaya Altangerel
Senior Content Producer
Image credit: v2osk / Deep Dream Generator

Artificial intelligence (AI) systems used for facial recognition are notorious for their racial and gender bias. The lack of diversity in the photographic data used for AI training is usually highlighted as the root cause. Well, it turns out that the human brain has the same problem – and we don’t really know why.

Own-race bias is the phenomenon where humans struggle to differentiate between individuals of another race. It’s been researched for decades, but the scientific jury is still out on its true cause – because we’re still trying to understand the finer details of how the human visual system works.

“If what we’re trying to do is to predict whether something is likely to be visible or not, then we’re pretty reliable at that and we can do that taking into account a fair range of clinical disorders,” says Winthrop Professor David Badcock, Director of the Human Vision Laboratory at UWA.

“There are other things which seem incredibly fundamental and profound, which we still can't explain.”

And the phenomenon of own-race bias is one such fundamental question that is yet to be answered.

Level up

From the instant light rays enter your eye, millions of cells interact with each other to process the information and turn it into an action. To fully understand vision, we need to understand every step of that process.

“You’ve got multiple stages of processing, which all feed information forward,” says David.

We have a good idea of how to teach computers to recognise objects. Could that help us investigate human vision too?

Video credit: TED Talks

“And in each of those stages there’s also feedback that comes back from higher order areas [of the brain] to low order areas, which influence the way the lower areas process.”

So far, researchers know that the early stages of the visual system can only process very small bits of information because there’s a limit to what a single cell can do. Each cell can detect only a tiny fraction of the object you’re looking at.

Somewhere in the middle, David says, our brains put those fractions together into objects.

“They put together the information from the lower levels into groups and say, right, well, that’s all one object. And that’s all another object.”
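To make that idea a little more concrete, here is a minimal back-of-the-envelope sketch – not from the article – of how the patch of the image that a single unit responds to grows as you stack processing layers, the way it does in the convolutional networks often used as rough analogues of the visual system. The layer names, kernel sizes and strides are invented purely for illustration.

```python
# Sketch: how far "one cell" can see as information is fed forward through layers.
# Standard receptive-field recursion for stacked convolutions:
#   rf_out = rf_in + (kernel - 1) * jump,  jump_out = jump_in * stride

layers = [
    # (name, kernel_size, stride) -- hypothetical layer stack
    ("layer1", 3, 1),
    ("layer2", 3, 2),
    ("layer3", 3, 2),
    ("layer4", 3, 2),
]

rf, jump = 1, 1  # start from a single input pixel
for name, kernel, stride in layers:
    rf += (kernel - 1) * jump   # each layer widens the patch a single unit responds to
    jump *= stride
    print(f"{name}: one unit sees a {rf}x{rf} pixel patch of the image")
```

In this toy stack, a first-layer unit only ever responds to a 3×3 pixel patch, while a fourth-layer unit effectively covers a 17×17 patch, a crude analogue of cells early in the visual pathway detecting only a tiny fraction of an object.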

Like computer vision, human vision seems to be processed in multiple layers – and also like machine vision, we’re not always sure how those layers work together.

Image credit: Getty Images

Later in the visual system, our brains work out what those groups of things actually are.

“The difficulty is connecting one level to another,” says David.

So scientists are still trying to figure out the exact rules our brains use to process visual information across the different levels. This is where AI could help.

A potential path

One way of doing it is to build a computer simulation that has the same sort of stages that the human visual system does. Then, researchers show it lots of images, and ask it to learn to identify objects in them.

“Then when that’s done, we can look at the layers of this artificial network we’ve created and say, ‘Well, what does it seem to be doing at this level?’ Which gives us a hint about what kind of information it’s using and what rules [it’s] choosing to put things together,” explains David.
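As a rough illustration of what “looking at the layers” can mean in practice, here is a small sketch assuming a Python deep-learning library (PyTorch). The tiny three-stage network and the random stand-in image are invented for the example; real studies use far larger models trained on real photographs. The point is simply that each stage’s output can be captured and examined.

```python
# Sketch: peek at what each stage of a small artificial network produces for an image.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),              # low level: edges, spots
    nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),   # mid level: parts
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # higher level: whole objects?
)

activations = {}

def save_activation(name):
    # Forward hook: record the output of a layer whenever the model runs.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for i, layer in enumerate(model):
    layer.register_forward_hook(save_activation(f"stage_{i}"))

image = torch.rand(1, 3, 64, 64)   # stand-in for a real photograph
model(image)

for name, act in activations.items():
    print(name, tuple(act.shape))  # inspect what each stage is representing
```

Feeding many images through and comparing the responses across stages is one way researchers can then ask what kind of information each level seems to be using.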

It’s way easier to look inside an AI than it is to look inside a working human brain.

Image credit: Getty Images

And the theory is that if we build a model that’s “reasonably close” to what the human visual system is like, it will give us some information about what rules are being used by our brains.

“Identifying faces in the crowd and those sorts of technologies … if you want to learn how to do that, then you can do that without trying to copy the human visual system,” says David.

“But we want to understand how human vision does it. And so, we use the AI as a tool to try and give us clues about what would be useful, what might work.”

And hopefully, the results will finally help us understand our own visual biases too. Maybe, like AI, we just need a little diversity in the data we learn from.

About the author
Zaya Altangerel
Zaya is a writer with a question for everyone and everything, but she's especially interested in music, food and science.
