The visual information relayed to V1 is not coded in terms of spatial (or optical) imagery{{citation needed|date=July 2020}} but is better described as [[edge detection]]. As an example, for an image comprising one half black and the other half white, the dividing line between black and white has the strongest local contrast (that is, an edge) and is encoded, while few neurons code the brightness information (black or white per se). As information is relayed to subsequent visual areas, it is coded as increasingly non-local frequency/phase signals. Note that, at these early stages of cortical visual processing, the spatial location of visual information is well preserved amid the local-contrast (edge) encoding.
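
As a purely illustrative sketch (not taken from the article or its sources), the half-black/half-white example can be mimicked numerically: a simple neighbouring-pixel difference responds only at the dividing line, while the uniform regions yield no signal. The 1-D image size and the difference operator below are assumptions chosen for brevity.

<syntaxhighlight lang="python">
import numpy as np

# Toy 1-D "image": left half black (0), right half white (1)
image = np.array([0.0] * 8 + [1.0] * 8)

# Crude local-contrast operator: difference between neighbouring pixels
local_contrast = np.diff(image)

print(image)
print(local_contrast)  # zero everywhere except at the black/white boundary (the edge)
</syntaxhighlight>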
 
A theoretical explanation of the computational function of the simple cells in the primary visual cortex has been presented by Lindeberg.<ref name=Lin13BICY>{{cite journal | last1 = Lindeberg | first1 = T. | year = 2013| title = A computational theory of visual receptive fields | doi = 10.1007/s00422-013-0569-z | pmid = 24197240 | pmc = 3840297 | journal = Biological Cybernetics | volume = 107 | issue = 6| pages = 589–635 }}</ref><ref name=Lin21Heliyon>{{cite journal | last1 = Lindeberg | first1 = T. | year = 2021| title = Normative theory of visual receptive fields | doi = 10.1016/j.heliyon.2021.e05897 | journal = Heliyon | volume = 7 | issue = 1| pages = e05897:1–20 | pmid = 33521348 | pmc = 7820928 | bibcode = 2021Heliy...705897L | doi-access = free }}</ref><ref name=Lin23Front>[https://dx.doi.org/10.3389/fncom.2023.1189949 T. Lindeberg "Covariance properties under natural image transformations for the generalized Gaussian derivative model for visual receptive fields", Frontiers in Computational Neuroscience, 17:1189949, 2023.]</ref> This theory describes how receptive field shapes similar to those found in the biological receptive field measurements performed by DeAngelis et al.<ref>{{cite journal | last1 = DeAngelis | first1 = G. C. | last2 = Ohzawa | first2 = I. | last3 = Freeman | first3 = R. D. | year = 1995 | title = Receptive field dynamics in the central visual pathways | journal = Trends in Neurosciences | volume = 18 | issue = 10| pages = 451–457 | doi=10.1016/0166-2236(95)94496-r | pmid=8545912| s2cid = 12827601 }}</ref><ref>G. C. DeAngelis and A. Anzai, "A modern view of the classical receptive field: linear and non-linear spatio-temporal processing by V1 neurons". In: Chalupa, L.M., Werner, J.S. (eds.) The Visual Neurosciences, vol. 1, pp. 704–719. MIT Press, Cambridge, 2004.</ref> can be derived as a consequence of structural properties of the environment in combination with internal consistency requirements to guarantee consistent image representations over multiple spatial and temporal scales. It also describes how the characteristic receptive field shapes, tuned to different scales, orientations and directions in image space, allow the visual system to compute invariant responses under natural image transformations at higher levels in the visual hierarchy.<ref name=Lin13PONE>{{cite journal | last1 = Lindeberg | first1 = T. | year = 2013| title = Invariance of visual operations at the level of receptive fields | doi = 10.1371/journal.pone.0066990 | pmid = 23894283 | pmc = 3716821 | journal = PLOS ONE | volume = 8 | issue = 7| page = e66990 | arxiv = 1210.0754 | bibcode = 2013PLoSO...866990L | doi-access = free }}</ref><ref name=Lin21Heliyon/><ref name=Lin23Front/>
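
The cited model treats simple-cell receptive fields as (generalized) Gaussian derivatives over space and time. As a minimal, hedged sketch of the spatial part only — the kernel construction, scale values and normalization below are illustrative assumptions, not the implementation from the cited papers — a first-order Gaussian derivative responds selectively to an edge, with a response magnitude that depends on the chosen spatial scale:

<syntaxhighlight lang="python">
import numpy as np

def gaussian_derivative_kernel(sigma):
    """First-order spatial Gaussian derivative, a simple model of a simple-cell profile."""
    radius = int(np.ceil(4 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    gaussian = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return -x / sigma**2 * gaussian  # d/dx of the Gaussian

signal = np.array([0.0] * 20 + [1.0] * 20)  # a step edge, as in the example above

for sigma in (1.0, 3.0):  # two spatial scales
    response = np.convolve(signal, gaussian_derivative_kernel(sigma), mode="same")
    # the peak response sits at the edge; its magnitude varies with scale
    print(sigma, round(float(response.max()), 3))
</syntaxhighlight>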
 
{{anchor|saliencyMap}}
V4 is the third cortical area in the [[Two-streams hypothesis#Ventral stream|ventral stream]], receiving strong feedforward input from V2 and sending strong connections to the [[Inferior temporal gyrus|PIT]]. It also receives direct input from V1, especially for central space. In addition, it has weaker connections to V5 and [[Angular gyrus|dorsal prelunate gyrus]] (DP).
 
V4 is the first area in the [[Two-streams hypothesis#Ventral stream|ventral stream]] to show strong attentional modulation. Most studies indicate that [[selective attention]] can change firing rates in V4 by about 20%. A seminal paper by Moran and Desimone characterizing these effects was the first to report attention effects anywhere in the visual cortex.<ref>{{cite journal|last1=Moran|first1=J|last2=Desimone|first2=R|title=Selective Attention Gates Visual Processing in the Extrastriate Cortex|journal=Science|date=1985|volume=229|issue=4715|pages=782–4|doi=10.1126/science.4023713|pmid=4023713|bibcode=1985Sci...229..782M|citeseerx=10.1.1.308.6038}}</ref>
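
To make the size of this modulation concrete, here is a toy numerical sketch (not drawn from the cited study): a baseline tuning curve is scaled by a multiplicative attentional gain of 1.2, i.e. roughly the 20% change in firing rate described above. The Gaussian tuning-curve shape, the numbers, and the purely multiplicative gain are simplifying assumptions.

<syntaxhighlight lang="python">
import numpy as np

orientations = np.linspace(-90, 90, 7)                      # degrees from the preferred orientation
baseline = 30.0 * np.exp(-orientations**2 / (2 * 25.0**2))  # toy Gaussian tuning curve (spikes/s)

attentional_gain = 1.2                                      # ~20% increase, as in the text
attended = attentional_gain * baseline

for theta, r0, r1 in zip(orientations, baseline, attended):
    print(f"{theta:+6.1f} deg   unattended {r0:5.1f}   attended {r1:5.1f} spikes/s")
</syntaxhighlight>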
 
Like V2, V4 is tuned for orientation, spatial frequency, and color. Unlike V2, V4 is tuned for object features of intermediate complexity, like simple geometric shapes, although no one has developed a full parametric description of the tuning space for V4. Visual area V4 is not tuned for complex objects such as faces, as areas in the [[Inferior temporal gyrus|inferotemporal cortex]] are.