{"id":4241,"date":"2010-07-07T09:29:07","date_gmt":"2010-07-07T09:29:07","guid":{"rendered":"http:\/\/www.labri.fr\/perso\/barla\/blog\/?p=4241"},"modified":"2019-08-17T10:41:29","modified_gmt":"2019-08-17T10:41:29","slug":"front-end-vision-and-multiscale-image-analysis","status":"publish","type":"post","link":"https:\/\/www.labri.fr\/perso\/barla\/blog\/?p=4241","title":{"rendered":"~Front-end vision and multiscale image analysis"},"content":{"rendered":"<p id=\"top\" \/><em>Bart M. ter Haar Romery (Ed.)<\/em><\/p>\n<h2>Foundations of scale space<\/h2>\n<ul>\n<li>Axioms of a visual front-end : linearity (no knowledge, no model, no memory), spatial shift invariance (no preferred location), isotropy (no preferred orientation), scale invariance (no preferred size, or scale of the aperture) &#8211; p.15<\/li>\n<li>All partial derivatives of the Gaussian kernel are solutions too of the diffusion equation. &#8211; p.27<\/li>\n<li>So the \ufb01rst important result is that we have found the Gaussian kernel and all of its partial derivatives as the unique set of kernels for a front-end visual system that satis\ufb01es the constraints : no preference for location, scale and orientation, and linearity. We have found a one-parameter family of kernels, where the scale s is the free parameter. &#8211; p.27<\/li>\n<li>Differentiation and observation are done in a single step : convolution with a Gaussian derivative kernel. &#8211; p.28<\/li>\n<li>Differentiation is now done by integration, namely by the convolution integral. &#8211; p.28<\/li>\n<li>The Gaussian kernel is the physical analogue of a mathematical point, the Gaussian derivative kernels are the physical analogons of the mathematical differential operators. Equivalence is reached for the limit when the scale of the Gaussian goes to zero &#8211; p.29<\/li>\n<li>One should never change the input data, but only make modi\ufb01cations to the process of observation where one has access : the \ufb01lter through which the measurement is done. The visual system does the same : it employs \ufb01lters at many sizes and shapes. &#8211; p.30<\/li>\n<li>There exist many such derivations for an uncommitted kernel, all leading to the same unique result : the Gaussian kernel. &#8211; p.35<\/li>\n<\/ul>\n<h2>The Gaussian kernel<\/h2>\n<ul>\n<li>The half width at half maximum is often used to approximate \u03c3 . &#8211; p.38<\/li>\n<li>The Gaussian is a self-similar function. Convolution with a Gaussian is a linear operation, so a convolution with a Gaussian kernel followed by a convolution with again a Gaussian kernel is equivalent to convolution with the broader kernel. Note that the squares of \u03c3 add, not the \u03c3 \u2019s themselves. Of course we can concatenate as many blurring steps as we want to create a larger blurring step. &#8211; p.39<\/li>\n<li>If we walk along the spatial axis in footsteps expressed in scale-units all kernels are of equal size or \u2019width\u2019 (but due to the normalization constraint not necessarily of the same amplitude). We now have a \u2019natural\u2019 size of footstep to walk over the spatial coordinate : a unit step in x is now \u03c3 2 , so in more blurred images we make bigger steps. The new coordinate is called the natural coordinate. &#8211; p.40 <em>Useful for fitting at different scales !<\/em><\/li>\n<li>Because higher dimensional Gaussian kernels are regular products of one-dimensional Gaussians, they are called separable. 
<h2>Gaussian derivatives</h2>
<ul>
<li>The zeroth order derivative is the Gaussian function itself. The even order (including the zeroth order) derivative functions are even functions (i.e. symmetric around zero) and the odd order derivatives are odd functions (antisymmetric around zero). – p.53</li>
<li>Here are the Hermite polynomials from zeroth to fifth order: 1; 2x; −2 + 4x²; −12x + 8x³; 12 − 48x² + 16x⁴; 120x − 160x³ + 32x⁵. – p.55</li>
<li>So now we are able to calculate the 1D Gaussian derivative functions gd(x, n, s) directly with the Hermite polynomials, again incorporating the 1/(σ√(2π)) normalization factor. <em>(see the sketch after this list)</em> – p.55</li>
<li>Gaussian derivative kernels also act as bandpass filters. The maximum of the frequency response is at ω = √n/σ. – p.58</li>
<li>The number of zero-crossings is equal to the order of differentiation, because the Gaussian weighting function is a positive definite function. – p.59</li>
<li>In the limiting case of infinite order the Gaussian derivative function becomes a sinusoidal function. – p.60</li>
<li>The Gabor family of receptive fields is given by a sinusoidal function (at the specified spatial frequency) under a Gaussian window. – p.65</li>
<li>Gabor functions can look very much like Gaussian derivatives, but there are essential differences: Gabor functions have an infinite number of zero-crossings on their domain; the amplitude of the sinusoidal function never exceeds the Gaussian envelope. – p.65 <em>But we don't care about zero-crossings!</em></li>
<li>Gaussian derivative kernels of higher dimensions are simply made by multiplication. – p.67</li>
</ul>
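<p><em>A sketch (not the book's code) of gd(x, n, s) built from the physicists' Hermite polynomials, as described above; NumPy's <code>numpy.polynomial.hermite</code> uses exactly that convention. The sanity check at the end is my own addition.</em></p>
<pre><code># n-th order 1-D Gaussian derivative via Hermite polynomials:
#   d^n/dx^n G(x; sigma) = (-1/(sigma*sqrt(2)))^n * H_n(x/(sigma*sqrt(2))) * G(x; sigma)
import numpy as np
from numpy.polynomial.hermite import hermval

def gd(x, n, sigma):
    x = np.asarray(x, dtype=float)
    gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    hermite_n = hermval(x / (sigma * np.sqrt(2)), [0] * n + [1])   # H_n evaluated pointwise
    return (-1.0 / (sigma * np.sqrt(2)))**n * hermite_n * gauss

# sanity check: gd(x, 1, 1) should match a numerical derivative of gd(x, 0, 1)
x = np.linspace(-5, 5, 1001)
print(np.max(np.abs(np.gradient(gd(x, 0, 1.0), x) - gd(x, 1, 1.0))))   # small
</code></pre>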
<h2>Differential structure of images</h2>
<ul>
<li>What we want is invariance under the transformations of translation and rotation. A function is said to be invariant under a group of transformations if the transformation has no effect on the value of the function. The only geometrical entities that make physical sense are invariants. In the words of Hermann Weyl: "any invariant has a specific meaning", and as such they are widely studied in computer vision theories. – p.100</li>
<li>We introduce the notion of intrinsic geometry: we would like to have every point described in such a way that, if we have the same structure, or local landscape form, the description is always the same, no matter the rotation. This can be accomplished by setting up in each point a dedicated coordinate frame which is determined by some special local directions given by the local landscape itself. – p.103</li>
<li>The isophote curvature κ is a rotationally and translationally invariant feature. It takes high values at extrema. <em>(see the sketch after this list)</em> – p.111</li>
<li>The eigenvalues of the Hessian matrix at a point correspond to the principal curvatures of the surface at that point. – p.119</li>
<li>The shape index runs from −1 (cup) via the shapes trough, rut, and saddle rut to zero, the saddle, and then goes via saddle ridge, ridge, and dome to the value of +1, the cap (the shape index is undefined for the flat patch, where both principal curvatures vanish). – p.120</li>
<li>The length of the vector defines how curved a shape is, which gives Koenderink's definition of curvedness. – p.120</li>
<li>The principal curvature directions are given by the eigenvectors of the Hessian matrix. – p.122</li>
<li>The Gaussian curvature K is defined as the product of the two principal curvatures. It is equal to the determinant of the Hessian matrix. – p.123</li>
<li>The mean curvature H is related to the trace of the Hessian matrix: H = ½(Lxx + Lyy). – p.124</li>
<li>The directional derivative of the principal curvature in the direction of the principal direction is called the extremality. Because there are two principal curvatures, there are two extremalities. The product of the extremalities is called the Gaussian extremality, a true local invariant. – p.125</li>
<li>When we study the curvature of the isophotes in the middle of the image, at the location of the T-junction, we see the isophote 'sweep' from highly curved to almost straight for decreasing intensity. So the geometric reasoning is "the isophote curvature changes a lot when we traverse the image in the w direction". – p.129</li>
<li>The derivative of the isophote curvature in the direction of the gradient is quite a complex third order expression. – p.129</li>
<li>The magnitude of image intensities and invariant features decreases rapidly at larger scales. This is due to the non-scale-invariant use of the differential operators. – p.132</li>
<li>It has been shown by Hilbert that any invariant of finite order can be expressed as a polynomial function of a set of irreducible invariants. This is an important result. For scalar images, for example, these invariants form the fundamental set of image primitives in which all local intrinsic properties can be described. In other words: any invariant can be expressed as a polynomial combination of the irreducible invariants. – p.134</li>
<li>Note that the first derivative with respect to v is missing. But Lv ≡ 0 is just the gauge condition! There is always that one degree of freedom to rotate the coordinate system in such a way that the tangential derivative vanishes. – p.134</li>
<li>The number of irreducible invariants for a given order is equal to the number of partial derivative coefficients in the local Taylor expansion, minus 1 for the gauge condition. These irreducibles form a basis for the differential invariant structure. – p.134</li>
</ul>
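<p><em>A sketch (mine, not the book's) of the isophote curvature in gauge coordinates, κ = −Lvv/Lw, written out in Cartesian Gaussian derivatives; the axis ordering of scipy's <code>order</code> argument and the flat-region guard are implementation choices assumed here.</em></p>
<pre><code># Isophote curvature kappa = -L_vv / L_w computed from Gaussian derivative filters:
#   kappa = -(Lxx*Ly^2 - 2*Lx*Ly*Lxy + Lyy*Lx^2) / (Lx^2 + Ly^2)^(3/2)
import numpy as np
from scipy.ndimage import gaussian_filter

def isophote_curvature(L, sigma):
    Lx  = gaussian_filter(L, sigma, order=(0, 1))   # derivative along columns (x)
    Ly  = gaussian_filter(L, sigma, order=(1, 0))   # derivative along rows (y)
    Lxx = gaussian_filter(L, sigma, order=(0, 2))
    Lyy = gaussian_filter(L, sigma, order=(2, 0))
    Lxy = gaussian_filter(L, sigma, order=(1, 1))
    grad2 = Lx**2 + Ly**2                           # squared gradient magnitude L_w^2
    num = Lxx * Ly**2 - 2 * Lx * Ly * Lxy + Lyy * Lx**2
    return -num / np.maximum(grad2, 1e-12)**1.5     # guard against flat regions

rng = np.random.default_rng(1)
L = gaussian_filter(rng.random((64, 64)), 2.0)      # a smooth test image
kappa = isophote_curvature(L, sigma=2.0)
</code></pre>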
<h2>Natural limits on observation</h2>
<ul>
<li>There is a limit to the order of differentiation for a given scale of operator and required accuracy. The limit is due to the Gaussian derivative kernel no longer 'fitting' in its Gaussian envelope, an effect known as aliasing. – p.141</li>
<li>As a rule of thumb, for derivatives up to 4th order, the scale should not be less than one pixel. – p.141</li>
</ul>

<h2>Differentiation and regularization</h2>
<ul>
<li>Regularization is the technique to make data behave well when an operator is applied to them. Such data could, for example, be functions that are impossible or difficult to differentiate, or discrete data where a derivative seems not to be defined at all. – p.143</li>
<li>In mathematical terms it is said that the operation of differentiation is ill-posed, the opposite of well-posed. Jacques Hadamard stated the conditions for well-posedness: the solution must exist; the solution must be uniquely determined; the solution must depend continuously on the initial or boundary data. – p.143</li>
<li>When we recall the importance of doing a measurement uncommitted, we surely should not modify our data in any way. We need a regularization of the operator, not the operand. Actually, the only control we have when we do a measurement is in our measurement device. There we can change the size, location, orientation, sensitivity profiles etc. of our filtering kernels. That is something completely different from the methods described above. It is one of the cornerstones of scale-space theory that the only control allowed is in the filters. As such, scale-space theory can be considered the 'theory of apertures'. – p.144</li>
<li>Taking the derivative of this 'observed' function is then equivalent to convolving with the derivative of the test function. This is just what the receptive fields of the front-end visual system do: regularization and differentiation. It is one of the key results of scale-space theory. <em>(see the sketch after this list)</em> – p.152</li>
</ul>
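<p><em>A small illustration (not from the book) of why differentiating through the aperture is the well-posed route: finite differences on noisy samples explode, while the Gaussian derivative of the same data does not. The test signal, noise level and σ are arbitrary choices.</em></p>
<pre><code># Naive differentiation of noisy data vs. regularized (Gaussian-derivative) differentiation.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0, 4 * np.pi, 2000)
dx = x[1] - x[0]
rng = np.random.default_rng(2)
signal = np.sin(x) + 0.05 * rng.standard_normal(x.size)          # noisy observation of sin(x)

naive = np.gradient(signal, dx)                                   # differentiate the data directly
regularized = gaussian_filter1d(signal, sigma=20, order=1) / dx   # differentiate through the aperture

# compare both estimates of the true derivative cos(x):
print(np.std(naive - np.cos(x)), np.std(regularized - np.cos(x)))   # the second is far smaller
</code></pre>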
<h2>The front-end visual system: the retina</h2>
<ul>
<li>The human visual system is a multi-scale sampling device of the outer world. It exploits this strategy by the creation of so-called receptive fields (RFs) on the retina: groups of receptors assembled in such a way that they form a set of apertures of widely varying size. Together they measure a scale-space of every image. The hierarchical structure of the input image is contained in this multi-scale stack of images measured at a range of scales. We call this the deep structure. – p.154</li>
<li>The human visual system does ensemble measurements: for every (perceivable) aspect of the stimulus it has a dedicated set of detectors (receptive fields or receptive field pairs). They span the full measurement range of the parameter, i.e. for every location, order of spatial and temporal differentiation of the stimulus, for every orientation, for every velocity in every direction, for every disparity, etc. – p.154</li>
<li>The visual system is considered layered: its first stages measure the geometrical structure by multi-scale partial derivatives in space and time, subsequent layers perform an analysis of the contextual structure by perceptual grouping and hierarchical topological analysis, and the highest stages do the cognitive, highly associative tasks. This rough division into processing layers is also known as front-end, intermediate or high-level visual processing. – p.154</li>
</ul>

<h2>A scale-space model for the retinal sampling</h2>
<ul>
<li>It is an intriguing observation that the multi-scale sampling of the outside world by the visual system takes place at the retinal level. All scales are separately and probably independently sampled from the incoming intensity distribution. In multi-scale computer vision applications the different scale representations are generated afterwards. The fundamental reason to sample at this very first retinal level is to observe the world at all scales simultaneously. – p.176 <em>Doesn't it depend on eccentricity?</em></li>
<li>The retina is a multi-scale sampling device. A scale-space-inspired model for the retinal sampling at this very first level considers the retina as a stack of superimposed retinas, each at a different scale. As a consequence of scale invariance, each scale is likely to be treated equally, and to be equipped with the same processing capacity in the front-end. This leads to the model that each retina at a particular scale consists of the same number of receptive fields tiling the space, which may explain the linear decrease of acuity with eccentricity. – p.177 <em>Really?</em></li>
</ul>

<h2>The front-end visual system – LGN and cortex</h2>
<ul>
<li>The receptive field sensitivity profiles of LGN cells are the same as those of retinal ganglion cells: circular center-surround receptive fields, with on-center and off-center in equal numbers, and at the same range of scales. However, the receptive fields are not constant in time, i.e. not stationary. – p.182</li>
<li>A good model describing this spatio-temporal behavior is the product of a Laplacian of Gaussian for the spatial component, multiplied with a first order derivative of a Gaussian for the temporal domain. <em>(see the sketch after this list)</em> – p.182</li>
<li>As in the retina, 50% of the center-surround cells are on-center and 50% are off-center. This may indicate that the foreground and the background are just as important. – p.182</li>
<li>It is well known that the main projection area after the LGN for the primary visual pathway is the primary visual cortex in Brodmann's area 17. A striking recent finding is that 75% of the fibers in this bundle are corticofugal ('away from the cortex') and project from the cortex to the LGN! – p.183</li>
<li>This is an ideal mechanism for feedback control to the early stage of the thalamus. We discuss two possible mechanisms: geometry-driven diffusion; long-range interactions for perceptual grouping. – p.183</li>
<li>One possible model is to adapt the receptive field profile in the LGN with local geometric information from the cortex, leading e.g. to edge-preserving smoothing: when we want to apply small-scale receptive fields at edges, to see them at high resolution, and large-scale receptive fields in homogeneous areas, to exploit the noise reduction at coarser scales, the model states that the edginess measure extracted with the simple cells in the cortex may tune the receptive field size in the LGN. At edges we may reduce the LGN observation scale strongly in this way. – p.184</li>
<li>Of course we may modulate with any order of differential geometric information that we need in modeling this geometry-driven, adaptive filtering process. We may also modulate the size of the LGN receptive field, or its shape. By making a receptive field much more elongated along an edge than across it, we can smooth along the edge more than across it, thus effectively reducing the local noise without compromising the edge strength. In a similar fashion we can make the receptive field e.g. banana-shaped by modulating its curvature, so that it follows the edge locally even better, etc. – p.184</li>
<li>An intriguing possibility is the exploitation of the filterbank of oriented filters we encounter in the visual cortex. – p.184</li>
<li>The mapping from the retina to the cortical surface is a log-polar mapping. – p.185</li>
<li>The cortical columns form a repetitive structure of little areas, about 1 x 1 mm, which can be considered the visual pixels. Each column contains all processing filters for local geometrical analysis of that pixel. Hubel and Wiesel found a wide variety of filter responses, and classified them broadly as simple cells, complex cells and hypercomplex (end-stopped) cells.</li>
<li>The receptive field sensitivity profiles of simple cells have a remarkable resemblance to Gaussian derivative kernels, as was first noted by Koenderink. He proposed the Gaussian derivative family as a taxonomy (structured naming) for the simple cells. – p.187</li>
<li>As with the LGN receptive fields, all the cortical simple cells exhibited dynamic behaviour. The receptive field sensitivity profile is not constant over time, but is modulated. – p.188</li>
<li>Complex cell receptive fields are not that interesting when measured with just one stimulus, but they reveal very interesting internal structure when studied with two or more stimuli simultaneously. – p.189</li>
<li>Many cells exhibit some form of strong directional sensitivity for motion. – p.189</li>
</ul>
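<p><em>A rough sketch (mine, not the book's formulas) of the separable spatio-temporal receptive-field model mentioned above: a spatial Laplacian-of-Gaussian profile multiplied by a first-order temporal Gaussian derivative. The scales σ and τ and the grid are arbitrary illustration values.</em></p>
<pre><code># Separable spatio-temporal center-surround model: LoG(x, y; sigma) * dG/dt(t; tau).
import numpy as np

def gaussian(u, s):
    return np.exp(-u**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

def laplacian_of_gaussian(x, y, sigma):          # spatial part: center-surround profile
    return gaussian(x, sigma) * gaussian(y, sigma) * ((x**2 + y**2) / sigma**4 - 2.0 / sigma**2)

def temporal_derivative(t, tau):                 # temporal part: first-order Gaussian derivative
    return -t / tau**2 * gaussian(t, tau)

x, y, t = np.meshgrid(np.linspace(-10, 10, 41),
                      np.linspace(-10, 10, 41),
                      np.linspace(-10, 10, 41), indexing='ij')
rf = laplacian_of_gaussian(x, y, sigma=3.0) * temporal_derivative(t, tau=3.0)
# rf is a center-surround field whose polarity flips as time passes, as described in the text.
</code></pre>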
<h2>The front-end visual system – cortical columns</h2>
<ul>
<li>A hypercolumn is a functional unit of cortical structure. It is the hardware that processes a single 'pixel' in the visual field for both eyes. There are thousands of identical hypercolumns tiling the cortical surface. – p.197</li>
<li>They contain cells of all sizes, orientations, differential orders, velocity magnitudes and directions, disparities and colors, for both the left and right eye. It is a highly redundant filterbank representation. – p.198</li>
<li>It is not known how the different scales (sizes of receptive fields) and the differential orders are located in the hypercolumns. The distance from the singularity in the pinwheel and the depth in the hypercolumn are possible candidates for this mapping. – p.199</li>
<li>Connections between cortical columns may be particularly important for close-range perceptual grouping. It has been shown that the projections are made only to those neighboring cells that have the same functional specificity. – p.200</li>
<li>Worth mentioning in the context of biomimicking vision into a mathematical framework is the amazing fact that vision totally disappears in a few seconds (!) when the image is stabilized on the retina. – p.200</li>
<li>Two cells can determine whether they have a neighborhood relation if they are correlated. Neighboring geometrical properties have to correspond, such as intensity, contours with the same orientation, etc. – p.202</li>
<li>In differential geometric terminology: there needs to be similar differential structure between two neighboring pieces. At zeroth order, the same intensity makes it highly likely that they are connected. So does a similar gradient with the same slope and direction, the same curvature, etc. It of course applies to all descriptive features: the same color, texture, etc. The N-jet has to be interrelated between neighboring cortical hypercolumns at many scales. – p.203</li>
<li>Receptive fields substantially overlap, and they should, in order to create a correlation between neighboring fibers. However, they overlap because we have a multi-scale sampling structure. At a single scale, our model presumes a tight hexagonal tiling of the plane. There is a deep notion here of the correlation between different scales, and of the sampling at a single location by receptive fields of different scale. – p.203</li>
</ul>

<h2>Deep structure</h2>
<ul>
<li>The key point is that not only do we need to connect observations at different locations – we also need to link observations at different scales. In the words of Koenderink, we must study the family of scale-space images as a family, and define the 'deep' structure. 'Deep' refers to the extra dimension of scale in a scale-space, like the sea has a surface and depth. – p.215</li>
<li>Since the normalized feature detector allows comparison of detector responses across scale, the scale selection can be done automatically. <em>(see the sketch after this list)</em> – p.219</li>
<li>Conceptually, we follow the singularity points for the feature detector through scale-space and locate the scale where the normalized feature strength is maximal. This is the appropriate scale for detecting the feature and for extracting information about the feature. However, we have more information: the nice continuous behaviour across scales allows us to locate the optimal scale explicitly as the singularities of a local differential operator's output in scale-space. – p.220 <em>This is way too sensitive!</em></li>
</ul>
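<p><em>A minimal example (mine) of automatic scale selection as described above, using the scale-normalized Laplacian σ²∇²L as the feature detector: for a synthetic Gaussian blob, the normalized response at the blob center peaks at the blob's own scale. Blob size and scale range are arbitrary.</em></p>
<pre><code># Scale selection with a normalized detector: track sigma^2 * |Laplacian| over scale at a blob
# center and pick the scale where it is maximal.
import numpy as np
from scipy.ndimage import gaussian_laplace

size, sigma_blob = 129, 6.0
yy, xx = np.mgrid[:size, :size] - size // 2
image = np.exp(-(xx**2 + yy**2) / (2 * sigma_blob**2))       # synthetic blob of known size

scales = np.linspace(1.0, 15.0, 60)
response = [s**2 * abs(gaussian_laplace(image, s)[size // 2, size // 2]) for s in scales]
print(scales[int(np.argmax(response))])    # close to sigma_blob = 6.0, the blob's intrinsic scale
</code></pre>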
<h2>Deblurring Gaussian blur</h2>
<ul>
<li>In the diffusion equation we can replace the derivative of the image with respect to scale by the Laplacian of the image, and that can be computed by applying Gaussian derivatives to the image. – p.278</li>
<li>It is a well-known fact in image processing that subtraction of the Laplacian (times some constant depending on the blur) sharpens the image. We see here that this is nothing else than the first order result of our deblurring approach using scale-space theory. For higher order deblurring the formulas get more complicated and higher derivatives are involved. <em>(see the sketch after this list)</em> – p.280</li>
<li>The regularization property of the Gaussian kernel makes the scale-space continuous, which means infinitely differentiable in both the spatial and the scale domain. It was proposed by Florack to expand the scale-space of a blurred image into the negative scale direction by means of a Taylor expansion. The higher order derivatives with respect to scale in this expansion can be expressed in spatial Laplacians of the image, due to the constraint of the isotropic diffusion equation. – p.284</li>
</ul>
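<p><em>A first-order sketch (mine, assuming the diffusion-equation convention t = σ²/2) of the Laplacian-subtraction deblurring described above; the constant in front of the Laplacian and the scale used to estimate it are illustrative choices, not the book's.</em></p>
<pre><code># First-order deblurring: L_deblurred ~ L_blurred - t * Laplacian(L_blurred), with t = sigma^2 / 2
# (one Taylor step back along the scale axis, since dL/dt = Laplacian(L) in this convention).
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

rng = np.random.default_rng(3)
sharp = gaussian_filter(rng.random((128, 128)), 1.0)    # "ground truth" test image
sigma = 1.5
blurred = gaussian_filter(sharp, sigma)

deblurred = blurred - (sigma**2 / 2) * gaussian_laplace(blurred, 1.0)   # Laplacian at a small scale

print(np.mean((blurred - sharp)**2), np.mean((deblurred - sharp)**2))   # second error is smaller
</code></pre>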
<h2>Color differential structure</h2>
<ul>
<li>The front-end visual system has implemented the shifted spatial kernels with a grid of receptive fields on the retina, so the shifting is implemented by the simultaneous measurement of all the neighboring receptive fields. The temporal kernels are implemented as time-varying LGN and cortical receptive fields. However, having a wide range of receptive fields whose sensitivity shifts over the wavelength axis would require a lot of different photo-sensitive dyes (rhodopsins) in the receptors, one for each shifted color sensitivity. – p.315</li>
<li>The visual system may have opted for a cheaper solution: the convolution is calculated at just a single position on the wavelength axis, at around λ0 = 520 nm, with a standard deviation of the Gaussian kernel of about σλ = 55 nm. The integration is done over the range of wavelengths that is covered by the rhodopsins, i.e. from about 350 nm (blue) to 700 nm (red). The values for λ0 and σλ are determined from the best fit of a Gaussian to the spectral sensitivity as measured psychophysically in humans, i.e. the Hering model. – p.315</li>
<li>We just do a single measurement with a Gaussian aperture over the wavelength axis at the position λ0. Similarly, the derivatives with respect to λ describe the first and second order spectral derivatives respectively. – p.315</li>
<li>We recall from the human vision chapter that the color-sensitive receptive fields come in the combinations red-green and yellow-blue center-surround receptive fields. The subtraction of yellow and blue in these receptive fields is well modeled by the first order derivative with respect to λ; the subtraction of red and green (minus the blue) is well modeled by the second order derivative with respect to λ. Alternatively, one can say that the zeroth order receptive field measures the luminance, the first order the 'blue-yellowness', and the second order the 'red-greenness'. – p.316</li>
<li>Geusebroek et al. give the best linear transform from the XYZ values to the Gaussian color model. – p.318</li>
</ul>

<h2>Steerable kernels</h2>
<ul>
<li>Orientation plays an important role as a parameter in establishing similarity relations between neighboring points. As such, it is an essential ingredient of methods for perceptual grouping. – p.329</li>
<li>The first order Gaussian derivative kernel in another orientation can readily be made from its basic constituents: it is well known that a kernel with orientation φ can be constructed as cos(φ) ∂G/∂x + sin(φ) ∂G/∂y. <em>(see the sketch after this list)</em> – p.331</li>
<li>A class of filters where a filter of any orientation can be constructed from a linear combination of other functions is called a steerable filter. The rotational components form a basis. A basis will contain more elements when we go to higher order. – p.331</li>
<li>There are two important classes of basis functions: basis functions that are rotated copies of the Gaussian derivative itself; and basis functions taken from the set of all partial derivatives in the Cartesian framework. – p.331</li>
<li>A Gaussian derivative kernel can be steered, i.e. made in any orientation, by a linearly weighted sum of rotated versions of itself, the basis functions. For order n there are n + 1 functions required, equally spaced over the angular range 0 to π. – p.335</li>
<li>When we just rotate our coordinates, we get a particularly convenient representation for computer implementations. – p.336</li>
</ul>
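<p><em>A short sketch (mine) of the first-order steering formula quoted above, applied to filtered images rather than kernels (the two are equivalent because convolution is linear); the test image and the angle φ are arbitrary.</em></p>
<pre><code># Steering a first-order Gaussian derivative to orientation phi from the x/y basis responses.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
L = gaussian_filter(rng.random((128, 128)), 2.0)       # smooth test image
sigma, phi = 3.0, np.deg2rad(30)

Lx = gaussian_filter(L, sigma, order=(0, 1))           # response of dG/dx (along columns)
Ly = gaussian_filter(L, sigma, order=(1, 0))           # response of dG/dy (along rows)
L_phi = np.cos(phi) * Lx + np.sin(phi) * Ly            # derivative in the direction phi

# steering with phi = 0 reproduces Lx exactly:
print(np.allclose(np.cos(0) * Lx + np.sin(0) * Ly, Lx))
</code></pre>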
<h2>Scale-time</h2>
<ul>
<li>Time and space are incommensurable dimensions (measurements along these dimensions have different units), so we need a scale-space for space and a scale-space for time. – p.345</li>
<li>Time measurements can essentially be processed in two ways: as pre-recorded frames or instances, or in real time. Temporal measurements stored for later replay or analysis, on whatever medium, fall in the first category. Humans continuously perform a temporal analysis with their senses; they measure in real time and belong to the second category. The scale-space treatment of these two categories will turn out to be essentially different. – p.345</li>
<li>Pre-recorded sequences can be analyzed in a manner completely analogous to the spatial treatment of scaled operators; we just interchange space with time. – p.346</li>
<li>In the real-time measurement and analysis of temporal data we have a serious problem: the time axis is only a half axis, the past. There is a sharp and unavoidable boundary on the time axis: the present moment. This means that we can no longer apply our standard Gaussian kernels, because they have an (in theory) infinite extent in both directions. There is no way to include the future in our kernel; it would be a strong violation of causality. – p.346</li>
<li>Mixed partial spatio-temporal operators are spatial Gaussian derivative kernels concatenated with temporal Gaussian derivative kernels. This concatenation is a multiplication due to the separability of the dimensions involved. – p.347</li>
<li>So the appearance of a spatio-temporal operator is as a spatial operator changing over time, with a speed indicated ('tuned') by the temporal scale parameter. – p.348</li>
<li>For real-time systems the situation is completely different. We noted in the introduction that we can only deal with the past, i.e. we only have the half time-axis. The solution, proposed by Koenderink, is to remap (reparametrize) the half t-axis into a full axis. The question is then how this should be done. <em>(see the sketch after this list)</em> – p.349</li>
<li>Interestingly, we more often encounter a logarithmic parametrization of a half axis when the physics of observations is involved. – p.352</li>
<li>Recent precise measurements of the spatio-temporal properties of macaque monkey and cat LGN and cortical receptive fields give support to the scale-time theory for causal time sampling. – p.355</li>
</ul>
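<p><em>A very loose sketch (mine, not Koenderink's actual scale-time kernel) of the remapping idea above: map elapsed past time to a logarithmic coordinate, which turns the half axis into a full axis, and smooth with an ordinary Gaussian there; the kernel then stays causal by construction. The parameters τ and σ are placeholders.</em></p>
<pre><code># Causal temporal smoothing by Gaussian weighting in a logarithmically remapped past-time coordinate.
import numpy as np

def causal_smooth(samples, dt, tau=1.0, sigma_s=0.5):
    """samples[-1] is 'now'; returns a causally weighted estimate of the present value."""
    elapsed = dt * np.arange(len(samples), 0, -1)      # time since each sample, strictly positive
    s = np.log(elapsed / tau)                          # logarithmic reparametrization of the half axis
    weights = np.exp(-s**2 / (2 * sigma_s**2))         # ordinary Gaussian in the remapped coordinate
    return np.sum(weights * samples) / np.sum(weights)

t = np.linspace(0, 10, 500)
print(causal_smooth(np.sin(t), dt=t[1] - t[0]))        # only past samples contribute
</code></pre>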
<h2>Geometry-driven diffusion</h2>
<ul>
<li>Linear, isotropic diffusion cannot preserve the position of the differential invariant features over scale. A solution is to make the diffusion, i.e. the amount of blurring, locally adaptive to the structure of the image. – p.361</li>
<li>This adaptive filtering process is possible with three classes of (all nonlinear) mathematical approaches, which are in essence equivalent: nonlinear partial differential equations (PDEs), i.e. nonlinear diffusion equations which evolve the luminance function as some function of a flow (known as the 'nonlinear PDE approach'); curve evolution of the isophotes (curves in 2D, surfaces in 3D) in the image (known as the 'curve evolution approach'); and variational methods that minimize some energy functional on the image (known as the 'energy minimization approach' or 'variational approach'). – p.361</li>
<li>The word 'nonlinear' implies the inclusion of a nonlinearity in the algorithm. This can be done in an infinite variety of ways, and it takes geometric reasoning to come up with the right nonlinearity for the task. – p.361</li>
<li>In the early visual pathway we see an abundance of feedback. A striking finding is that the majority of fibers (roughly 75%!) in the optic radiation (the fibers between the LGN and the primary visual cortex) project in a retrograde (backwards) fashion, from cortex to LGN. These cortico-thalamic projections may well tune the receptive fields with the differential geometric information extracted with the receptive fields in the visual cortex. – p.362</li>
<li>The nonlinear diffusion paradigm enables geometric reasoning: we may put knowledge into the task of the evolution of the image. Examples of such reasoning statements are: "reduce the diffusion at locations where edges (or other local features such as corners, T-junctions, etc.) occur"; or "adapt the diffusion so it is maximized along edges and minimized across edges"; or "enhance the diffusion in the direction of ridges and reduce the diffusion perpendicular to them"; etc. – p.363</li>
<li>Perona and Malik proposed to make the conductivity a function of the gradient magnitude in order to reduce the diffusion at the location of edges. The geometric reasoning here is to let intra-region smoothing occur preferentially over inter-region smoothing. <em>(see the sketch after this list)</em> – p.364</li>
<li>The principal influence on the local conductivity should be to direct the flow in the direction of the gradient only: we want a lot of diffusion along the edges, but virtually no diffusion across the edges. – p.378</li>
<li>There are a number of differences between this equation and the Perona &amp; Malik equation: the flow (or flux) is independent of the magnitude of the gradient; there is no extra free parameter; in the P&amp;M equation the diffusion decreases when the gradient is large, resulting in contrast-dependent smoothing; this equation is gray-scale invariant. – p.379</li>
<li>Table of some popular nonlinear diffusion equations with their name, the PDE formula for the luminance evolution, the PDE formula for (isophote) curve evolution, the maximum allowed timestep for nearest-neighbor implementations (N.N.), and the maximum allowed timestep for the Gaussian derivative implementation. – p.384</li>
<li>The relation between mathematical morphology and normal flow now becomes clear: the motion of the contour of the image is governed by the structuring element in exactly the same way as the level set is moved in the direction of the normal. This is only true for an isotropic convex (i.e. round) structuring element. – p.387</li>
<li>It was shown by van den Boomgaard and Dorst that a parabolic structuring element leads to Gaussian blurring. This establishes an elegant equivalence between mathematical morphology and Gaussian scale-space. Florack, Maas and Niessen related mathematical morphology and Gaussian scale-space by showing that both theories are cases of a more general formulation. It can be shown that dilation or erosion with a ball is mathematically equivalent to constant motion flow, where the isophotes are considered as curves and are moved in the gradient (or opposite) direction. – p.390</li>
</ul>
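<p><em>A compact explicit Perona-Malik scheme (mine, not the book's implementation) with the classic exponential conductivity c(|∇L|) = exp(−(|∇L|/k)²); the edge threshold k, the time step and the test image are illustrative choices only.</em></p>
<pre><code># Explicit Perona-Malik diffusion: L_t = div( c(|grad L|) grad L ), discretized with the
# classic 4-neighbour scheme; the conductivity drops where the local gradient is strong.
import numpy as np
from scipy.ndimage import gaussian_filter

def perona_malik(L, n_iter=50, k=0.1, dt=0.2):
    L = L.astype(float).copy()
    c = lambda d: np.exp(-(d / k)**2)             # conductivity: small at strong edges
    for _ in range(n_iter):
        dN = np.roll(L, -1, axis=0) - L           # differences to the four nearest neighbours
        dS = np.roll(L,  1, axis=0) - L
        dE = np.roll(L, -1, axis=1) - L
        dW = np.roll(L,  1, axis=1) - L
        L += dt * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
    return L

rng = np.random.default_rng(5)
edges = gaussian_filter((rng.random((64, 64)) > 0.5).astype(float), 1.0)
noisy = edges + 0.05 * rng.standard_normal((64, 64))
smoothed = perona_malik(noisy)                    # noise is reduced while the edges survive
</code></pre>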