{"id":40476,"date":"2021-08-02T11:05:29","date_gmt":"2021-08-02T11:05:29","guid":{"rendered":"https:\/\/www.labri.fr\/perso\/barla\/blog\/?p=40476"},"modified":"2021-08-02T11:05:31","modified_gmt":"2021-08-02T11:05:31","slug":"perception-dynamics-w-roland","status":"publish","type":"post","link":"https:\/\/www.labri.fr\/perso\/barla\/blog\/?p=40476","title":{"rendered":"Perception dynamics w\/ Roland"},"content":{"rendered":"<p id=\"top\" \/>\n<p class=\"has-text-align-right\"><em>I was following your keynote at EGSR from Youtube, so I could not ask questions easily. But as you might guess I do have some questions, especially because I had read your paper with Kate and Bart. You put an incredible amount of work in it, and even though I&#8217;m always on the fence with deep learning, I&#8217;ve found it very interesting.<\/em><\/p>\n\n\n\n<p>Thanks! \u00a0I think deep learning is just a tool like any other. \u00a0There are bogus ways of using it, but I think it\u2019s a good tool for testing the feasibility that certain kinds of learning objective can solve vision tasks. \u00a0That\u2019s how we used it here. \u00a0It\u2019s not just \u2018deep learning can simulate the brain\u2019, but a bit more nuanced that (btw, those are my words, not yours of course)<\/p>\n\n\n\n<p class=\"has-text-align-right\"><em><br>I could ask you tons of questions, but I&#8217;ll restrict myself to one particular observation you made during the Q&amp;A. You pointed out that the search for a metric for material perception does not make much sense from a visual perception standpoint, since it does not consider the task that could drastically affect the way we attend at a material.<br><\/em><\/p>\n\n\n\n<p>Sorry, then I misspoke, or misunderstood the question. \u00a0I don\u2019t think that a material perception metric does not make sense. \u00a0I think that some graphics researchers seem to treat the visual system like it is some kind of black box that takes an image as input and produces a per-pixel \u2018perceptual estimate&#8217; as output. \u00a0Like when people talk about visual difference predictors. \u00a0I wasn\u2019t just talking about BRDF similarity metrics or something like that, I meant more broadly, when we look at images, we usually have some task in mind, and depending on the task we will draw on different information in the image. \u00a0So the same input can lead to different judgments depending on the task. \u00a0That\u2019s all I meant.<br>There are still challenges to developing a perceptual BRDF metric though \u2026 Like the fact that a given difference in BRDF can be super easy to distinguish in one shape+lighting+viewpoint configuration, and totally undetectable in another. \u00a0So then in absolute terms, are the BRDFs similar or different? \u00a0That\u2019s a challenge.<br><\/p>\n\n\n\n<p class=\"has-text-align-right\"><em>Then I wondered: how does this translate into your latent space representation? Would it mean that different subsets of latent variables could take the lead depending on the task? More generally, how do the dynamics of vision (e.g., eye fixations) come into play in your current view of material perception? I know this is not part of the Nature paper, but you did point out that idea in another paper (Current Opinions on Behavioral Sciences I think); so I guess you already have something in mind about that&#8230;<\/em><\/p>\n\n\n\n<p>Well we definitely use different eye movement strategies depending on the task. 
*Then I wondered: how does this translate into your latent space representation? Would it mean that different subsets of latent variables could take the lead depending on the task? More generally, how do the dynamics of vision (e.g., eye fixations) come into play in your current view of material perception? I know this is not part of the Nature paper, but you did point out that idea in another paper (Current Opinion in Behavioral Sciences, I think); so I guess you already have something in mind about that…*

Well, we definitely use different eye movement strategies depending on the task. Here is a classic figure from Yarbus (one of the first to study eye movements) in which people viewed a painting but were asked to make different judgments about it. Obviously, different information is more or less relevant depending on the judgment. Matteo Toscani and Katja Doerschner have done a little bit of work on this in the context of material perception.

[Figure: Yarbus's classic eye-movement recordings of observers viewing the same painting under different task instructions.]
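An aside on the latent-variable part of the question: one very simple way to picture "different subsets of latent variables taking the lead" is a shared latent code with separate task-specific readouts. The sketch below is pure speculation on my part, not the architecture from the Nature paper; the task names, dimensions, and linear readouts are all made up for illustration.

```python
# Hypothetical sketch: a shared latent representation with task-specific
# readouts. Each task head learns its own weighting over the same latent code,
# so different subsets of latent variables can dominate depending on the task.
import torch
import torch.nn as nn

class SharedLatentModel(nn.Module):
    def __init__(self, image_dim=64 * 64, latent_dim=10, tasks=("gloss", "lightness")):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # One linear readout per task over the *same* latent code.
        self.heads = nn.ModuleDict({t: nn.Linear(latent_dim, 1) for t in tasks})

    def forward(self, image, task):
        z = self.encoder(image)
        return self.heads[task](z)

    def latent_usage(self, task):
        # Magnitude of each latent's readout weight: which latents this task relies on.
        return self.heads[task].weight.detach().abs().squeeze(0)

model = SharedLatentModel()
x = torch.rand(1, 64 * 64)                       # a stand-in "image"
print(model(x, "gloss"), model(x, "lightness"))  # same input, task-dependent judgments
print(model.latent_usage("gloss"))               # per-latent reliance for one task
```

After training the two heads on different judgments, comparing `latent_usage` across tasks would show whether they indeed rely on different subsets of the shared latents.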
But regarding dynamics more generally: absolutely, they are crucial. I didn't have time to show any of Kate's fantastic work using video prediction objectives. We find that such networks learn to disentangle scene factors, and even contain specific units that are sensitive to particular types of causal event (e.g., reflectance edges vs. shadows). It makes sense, because predicting what is going to come next in terms of pixel values is only really successful if you have a deep, causal understanding of the processes responsible for the observed pixel values. Predictions get a lot better if you understand that a given edge in the image is due to a reflectance change, an object boundary, a shadow, or a highlight, for example.

[A toy sketch of this kind of next-frame prediction objective appears at the end of this post.]

*I should say that I'm still not entirely convinced that the mechanisms underlying visual perception can be directly related to what you obtained with unsupervised learning…*

That's a longer conversation 🙂
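As a coda, here is the toy sketch of a next-frame prediction objective promised above. It is my own minimal illustration of the general idea, not Kate's actual model or training setup; the architecture, data shapes, and hyperparameters are all invented for the example.

```python
# A minimal sketch of a next-frame (video) prediction objective, using a toy
# convolutional predictor; this is an illustration of the general idea only.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, channels=1, hidden=32, context=3):
        super().__init__()
        # Takes `context` past frames stacked on the channel axis,
        # predicts the next frame.
        self.net = nn.Sequential(
            nn.Conv2d(channels * context, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, past_frames):                # (B, context*C, H, W)
        return self.net(past_frames)

model = NextFramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy data: random "videos" of 4 frames; real training would use rendered scenes
# where edges arise from reflectance changes, shadows, boundaries, or highlights.
video = torch.rand(8, 4, 1, 16, 16)                # (B, T, C, H, W)
past = video[:, :3].flatten(1, 2)                  # stack 3 context frames
target = video[:, 3]                               # the frame to predict

pred = model(past)
loss = nn.functional.mse_loss(pred, target)        # pixel-prediction objective
loss.backward()
opt.step()
print(f"prediction loss: {loss.item():.4f}")
```

The interesting part is how little the objective asks for: the network is rewarded only for predicting future pixels, yet, as Roland describes, doing that well pushes it toward representing the causes behind those pixels.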