Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Posted by Poplicola 9 years ago to Science

This research offers some interesting evidence that may call into question the semi-popular contention (particularly among some "Strong AI" and "Singularity" advocates) that human brains can be fully modeled in software with artificial neural networks.
SOURCE URL: http://www.evolvingai.org/fooling





  • Posted by iroseland 9 years ago
    Actually...

    This mostly points to some holes in the way the work is currently being done.

    As humans we have some interesting abilities. We can see something and have it remind us of something else, because our brain speeds up recognition through association. It's why we see faces in wood grain, or look at clouds and see something other than just clouds.
    These image recognition systems are only good at one thing and do not yet know the difference between what something is and what it merely looks like. Training neural nets on that second part is going to be massively more complicated than teaching them to pick up enough data points to say object = baseball, since it would require looking at even more data and the net being able to say "this reminds me of a baseball, but it is not a baseball." Once it can get there, it would be able to say that's just static, but that the shape of a baseball is buried in it.
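
    A minimal sketch of that failure mode, assuming PyTorch and torchvision (nothing in the thread names a framework, and the choice of model here is illustrative): a pretrained classifier still returns a confident label for pure static, because softmax spreads probability only over the known classes and "none of the above" is never an available answer.

        import torch
        import torchvision.models as models

        # Load a pretrained ImageNet classifier (hypothetical model choice).
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.eval()

        # Pure static: random pixels, no real object in the image at all.
        static = torch.rand(1, 3, 224, 224)
        with torch.no_grad():
            probs = torch.softmax(model(static), dim=1)

        confidence, label = probs.max(dim=1)
        # Softmax sums to 1 over the 1000 known classes, so the net must
        # pick some label; it cannot answer "that's just static".
        print(f"class {label.item()} at {confidence.item():.1%} confidence")

    The fooling images from the linked paper push on exactly this property: they are optimized to maximize a single class score, so the net reports near-certainty on images no human would recognize.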

