{"id":40188,"date":"2019-11-19T12:39:33","date_gmt":"2019-11-19T12:39:33","guid":{"rendered":"http:\/\/www.labri.fr\/perso\/barla\/blog\/?p=40188"},"modified":"2019-11-19T18:14:56","modified_gmt":"2019-11-19T18:14:56","slug":"scales-w-jacob","status":"publish","type":"post","link":"https:\/\/www.labri.fr\/perso\/barla\/blog\/?p=40188","title":{"rendered":"Scales w\/ Jacob"},"content":{"rendered":"<p id=\"top\" \/>\n<p><strong>Jacob:<\/strong> We are still working on \u201cScale Ambiguities in Material Recognition\u201d as  it\u2019s now called. I presented preliminary results of this at ECVP, which  you can see&nbsp;<a class=\"\" href=\"https:\/\/www.dropbox.com\/s\/7gvtiefkour9387\/Session7.2-JacobCheeseman.pptx?dl=1\">here<\/a>.  We have several ideas of what we want to do next, including some  image-computable metrics (e.g., fractal dimensionality) that might  correlate with our other measures, synthesizing new examples from the  image statistics of each material category, taking a different  approach to the analyses that better deals with individual images  rather than the whole set, and so on.<\/p>\n\n\n\n<p class=\"has-text-align-left\"><strong>Pascal: <\/strong>I&#8217;m wondering what gives away scale in the non-ambiguous cases, even when context cues are carefully removed\u2026 If the world were actually fractal, you should be able to find ambiguous images much more easily, right? <br>This in itself is very interesting.<br>In your opinion, what aspects of an image are sufficient to disambiguate its scale?  Looking at your images, I can see at least two candidates: the light field, revealed through shadows and shading &#8211; it seems that fronto-parallel lighting is most ambiguous; and perspective &#8211; somehow we can recognize an oblique view in most cases; again a top view seems to increase ambiguity. <br>I&#8217;m wondering whether high-frequency details could be added or removed to change the perceived scale. If you start from a synthesized texture, then you could also imagine changing shadows, shading or perspective\u2026<\/p>\n\n\n\n<p><strong>Jacob:<\/strong> In the current set of experiments we haven\u2019t systematically limited  scene context, but the composition of the photographs certainly  contributes to their ambiguity. We tried taking photographs ourselves  early on, but it turns out to be quite difficult to first  imagine alternative interpretations of a given scene, and then compose  the photo in a way that captures all of them. For this first attempt we  decided to simply scour the internet looking for images that already  contained multiple interpretations\u2014or so we  thought! <br>I think you\u2019re probably right about lighting and  perspective being used to disambiguate scale, although I suspect that  the exact arrangement that maximizes ambiguity depends on the material  category. For example, I think the planted fields can  look like textiles under diverse conditions of lighting and viewpoint,  but the confusion between water and marble depends on a relatively more  limited conditions (e.g., fronto-parallel). It might be different for  confusions between other categories like stone  and wood, but this is just my hunch at this point.&nbsp; <\/p>\n\n\n\n<p><strong>Pascal: <\/strong>That&#8217;s perhaps what I find most interesting about this subject. It suggests that we are sensitive to specific image patterns: vein-like\/web-like patterns for marble or foam on water, slightly fluffy\/sticking-out patterns for planted fields and textiles. 
**Pascal:** That's perhaps what I find most interesting about this subject. It suggests that we are sensitive to specific image patterns: vein-like/web-like patterns for marble or foam on water, slightly fluffy/sticking-out patterns for planted fields and textiles. But without additional spatial cues (light field, perspective), it can become hard to tease apart what gave rise to those patterns. Some patterns may be characteristic of specific material categories and may thus be less scale-ambiguous: stone or wood, as you mentioned. Still, an artist (or experimenter ;-)) could play with these ambiguities and imitate stone- or wood-like patterns at much larger scales!
I guess what's interesting here is how much you can mess with the image patterns to increase ambiguity; or, conversely, how you could manipulate patterns to push subjects toward a wrong interpretation…

**Jacob:** Regarding effects of spatial frequency: we've thought about "[miniaturizing](https://en.wikipedia.org/wiki/Miniature_faking)" the images by blurring the surrounding edges, but we haven't run this experiment just yet. The effectiveness of this manipulation seems like it would also depend on lighting and perspective information, so it could be interesting to see for which images it works best. 🙂

**Pascal:** That's one very interesting instance of what I meant just above :-); did you mean adding "tilt-shift" effects, for instance? I wonder if it's also possible to add perspective or shadow/shading patterns that push the interpretation towards larger scales.
And in cases where the images are in a fronto-parallel configuration, maybe playing with some frequency bands (as in http://www.cs.cornell.edu/projects/band_sifting_filters/) could raise ambiguity, or, conversely, tip the interpretation towards small or large scales?

**Jacob:** I would think that blurring the top and bottom of the image to simulate tilt-shift would only tend to work if it is congruent with perspective information already in the image. That is, in a fronto-parallel situation, doing this might tilt the apparent surface plane a bit, but maybe it wouldn't indicate much about scale. As you suggest, maybe different combinations of shading and blurring could give a convincing impression of scale; my guess is that such manipulations would be image-specific.
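For reference, the miniature-faking manipulation discussed above can be prototyped cheaply by blending the original with a single blurred copy, with a blend weight that grows away from a horizontal in-focus band. This is a rough sketch under assumed parameters (band position, blur radius, file names), not the experiment's actual stimulus pipeline:

```python
import numpy as np
from PIL import Image, ImageFilter

def tilt_shift(img, center=0.5, band=0.15, max_blur=8.0):
    """Cheap miniature faking for an RGB image: sharp within a horizontal
    band around `center` (a fraction of image height), increasingly
    blurred toward the top and bottom edges."""
    sharp = np.asarray(img, dtype=np.float64)
    blurred = np.asarray(img.filter(ImageFilter.GaussianBlur(max_blur)),
                         dtype=np.float64)
    h = sharp.shape[0]
    out = np.empty_like(sharp)
    for y in range(h):
        d = abs(y / h - center)                             # distance from focus line
        w = min(max((d - band) / (0.5 - band), 0.0), 1.0)   # 0 in band, 1 at edge
        out[y] = (1 - w) * sharp[y] + w * blurred[y]
    return Image.fromarray(out.astype(np.uint8))

tilt_shift(Image.open("photo.jpg").convert("RGB")).save("photo_miniature.jpg")
```

Note that blending with a single blurred copy only approximates a true spatially varying blur, and nothing here checks that the simulated depth of field agrees with the perspective already in the photo, which is precisely Jacob's reservation.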
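And a much-simplified take on the frequency-band idea Pascal raises: split the image into coarse and fine bands with a Gaussian low-pass and rescale the fine band. The actual band-sifting filters on the linked Cornell page additionally sift subbands by sign and magnitude; this sketch only does the frequency split, assumes a grayscale uint8 image, and the `sigma` and `gain` values are arbitrary assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_fine_band(img, sigma=4.0, gain=1.5):
    """Boost (gain > 1) or attenuate (gain < 1) spatial detail finer
    than roughly `sigma` pixels in a grayscale uint8 image."""
    img = img.astype(np.float64)
    low = gaussian_filter(img, sigma=sigma)   # coarse-scale content
    fine = img - low                          # fine-scale detail band
    return np.clip(low + gain * fine, 0, 255).astype(np.uint8)
```

Intuitively, boosting the fine band adds the high-frequency detail Pascal asks about, which might push the percept toward a smaller, nearer surface; attenuating it should do the opposite.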