
"these models are vulnerable to a kind of 'typographic attack' where adding adversarial text to images can cause them to be systematically misclassified."
📄 distill.pub/2021/multimodal-ne
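The attack described in the quote is physically simple: paste a piece of paper with a word like "iPod" onto a photo of something else, and the model often classifies by the text instead of the object. A minimal sketch of constructing such an image, assuming Pillow is installed (the function name `add_typographic_label` and the placeholder image are illustrative; actually checking the misclassification would require running CLIP, which is omitted here):

```python
# Hypothetical sketch: build a "typographic attack" image by pasting a
# text label onto a photo, as described in the Distill article.
from PIL import Image, ImageDraw

def add_typographic_label(img: Image.Image, text: str = "iPod") -> Image.Image:
    """Paste a white paper-like label with the given text onto the image."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    w, h = out.size
    # White rectangle as the "paper label", roughly centered.
    box = (w // 4, h // 3, 3 * w // 4, 2 * h // 3)
    draw.rectangle(box, fill="white", outline="black")
    draw.text((box[0] + 10, box[1] + 10), text, fill="black")
    return out

# Stand-in for a photo of a real object (here just a solid green square,
# since no actual photo is bundled with this sketch).
photo = Image.new("RGB", (224, 224), "green")
attacked = add_typographic_label(photo, "iPod")
attacked.save("attacked.png")
```

The `attacked.png` output is what would then be fed to a model like CLIP to test whether the pasted word dominates the prediction.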

@liaizon
This paper explains it in a very interesting way too, with plenty of data supporting the idea of neurons that respond to concepts however they appear.

@liaizon

WHO WILL WIN

A neural network trained on thousands of images

VS

iPod iPod iPod iPod iPod iPod iPod iPod

@liaizon Who knew that the people wearing shirts that said "pants" were way ahead of their time?

wakest's server

the personal instance of Liaizon Wakest