J Neurophysiol. 2013 Nov 1; 110(9): 2227–2235.
Published online 2013 Jul 31. https://doi.org/10.1152/jn.00175.2013
PMCID: PMC3841923
PMID: 23904496

Redundancy gains in retinotopic cortex

Abstract

It is widely claimed that interactions among simultaneously presented visual stimuli are suppressive and that these interactions primarily occur when stimuli fall within the same receptive field (Desimone and Duncan 1995). Here, we show evidence for a novel form of interaction between simultaneously presented but distant stimuli that does not fit either pattern. To examine interactions between simultaneously presented stimuli, we measure the response to a single stimulus as a function of whether or not other stimuli are also presented simultaneously, and we further ask how the response to a given stimulus is affected by whether the simultaneously present stimuli are identical or different from each other. Our method reveals a new phenomenon of “redundancy gain:” the visual response to a stimulus is higher when accompanied by identical stimuli than when that stimulus is presented alone, even though the stimuli are displayed in separate visual quadrants. This pattern is observed throughout the visual hierarchy, including V1 and V2, and we show that it is distinct from the well-known simultaneous suppression effect (Kastner et al. 1998). We propose that the redundancy gain in early retinotopic cortex results from feedback from higher visual areas and may underlie perceptual averaging and other ensemble coding phenomena observed behaviorally.

Keywords: redundancy gain, ensemble coding, early retinotopic cortex, long-range interaction

How can we quickly extract the gist of complex visual scenes (Potter 1976, 2012) given the well-documented limitations in our ability to process and represent multiple objects at the same time (Kastner et al. 1998, 2001; Luck and Vogel 1997; Pylyshyn and Storm 1988)? Part of the answer is that we do not merely represent objects independently of each other but instead extract more efficient representations of the overall visual array (Alvarez 2011; Brady and Tenenbaum 2013). But how exactly does the visual system represent multiple objects, and is the neural representation of a display containing multiple objects sensitive to the similarity and differences among the objects? Using functional magnetic resonance imaging (fMRI), we addressed these questions by presenting participants with multiple objects that are either identical to or different from one another and measuring brain activity as a function of the number of objects (one or multiple) and object similarity (same or different). Because the nature of visual interaction may depend on receptive field size (Kastner et al. 1998, 2001), we investigated both high-level visual areas, where receptive fields are large, and early visual areas, where receptive fields are small.

Behavioral studies have revealed two patterns of interaction among multiple objects: competition and ensemble coding. Competition occurs when multiple unrelated objects must be identified and retained. For example, several studies have demonstrated a “simultaneous presentation disadvantage,” where accuracy in identifying masked words, digits, or colors is lower if the items are presented simultaneously rather than sequentially (reviewed by Pashler 1998). It has even been proposed that momentary perceptual awareness may be limited to a single visual feature at a time (Huang et al. 2007). In contrast to the limited ability to represent individual unrelated items, the visual system has complementary mechanisms for processing the ensemble properties of a large group of objects (Ariely 2001; Chong and Treisman 2003). When presented with a visual array, participants can rapidly extract the mean size, location, orientation, gender, and facial expression, even though they may be at chance level in identifying each individual stimulus (Alvarez and Oliva 2008; Haberman and Whitney 2007, 2009; Parkes et al. 2001). What is the neural basis for these two kinds of interactions?

Competitive interactions among multiple objects have been studied both neurophysiologically and with neuroimaging. According to the biased competition model (Desimone and Duncan 1995), multiple visual objects compete for neural representation within the same receptive field. This competition results in a simultaneous presentation disadvantage (analogous to the behavioral findings), particularly in higher visual areas whose receptive fields are large enough to encompass multiple objects (Beck and Kastner 2009). However, few imaging studies have examined mechanisms that could support ensemble coding. In fact, fMRI studies that measure responses to arrays containing multiple stimuli have generally revealed suppression rather than enhancement. For example, the blood oxygen level-dependent (BOLD) response to a Gabor patch is reduced when the patch is flanked by identically oriented Gabors (Joo et al. 2012). In contrast, behavioral studies have shown that people perceive and remember a face better if it co-occurs with other, identical faces (Jiang et al. 2010), and they are more accurate in judging the biological motion of an individual walker when it is surrounded by other, similar walkers (Sweeny et al. 2013). To date, few fMRI studies have explored the neural correlates of this behavioral redundancy gain.

To examine the interaction of neural representations for multiple stimuli, we presented participants with complex stimuli and manipulated their number and similarity. Each display contained a single stimulus, four identical stimuli, or four different stimuli from the same general category (faces, scenes, or common objects; Fig. 1). To reduce crowding and low-level lateral masking, the four items occupied different visual quadrants. We measured brain activity in retinotopic visual areas (V1 through V4v and V3A) as well as ventral category-selective regions, including the fusiform face area (FFA), the parahippocampal place area (PPA), and the lateral occipital complex (LOC), while participants engaged in a demanding central fixation task. This design allowed us to address several questions.

Fig. 1. A schematic illustration of 3 stimulus conditions (single stimulus, 4-same, and 4-different) in 3 stimulus categories (faces, scenes, and objects). Participants monitored the brief dimming of the central fixation dot that occurred at random moments on 20% of the trials.

First, how does the visual system represent multiple objects, and is the neural representation of multiple objects sensitive to the similarity and differences among the objects? One possibility is that identical visual objects could produce neural adaptation (much like repetition attenuation; Grill-Spector and Malach 2001), in which case brain activity should be lower for identical stimuli than for different stimuli. Alternatively, similar objects may facilitate ensemble coding, as found behaviorally (Jiang et al. 2010; Sweeny et al. 2013), resulting in increased brain activity for identical stimuli compared with different stimuli or a single stimulus.

Second, does the same rule that governs neural competition within the receptive field also apply to ensemble coding? If so, then any redundancy effects from identical stimuli should be restricted to visual areas with large receptive fields. Alternatively, ensemble coding in higher visual areas may feed back to early visual areas, yielding redundancy effects even in V1 and V2.

Third, are any redundancy gain effects dissociable from the well-known simultaneous suppression effect (Kastner et al. 1998)?

METHODS

Participants.

Eight volunteers (7 women) completed experiment 1 and eight others (4 women) completed experiment 2. Participants were 26–31 yr old and had normal or corrected-to-normal visual acuity and normal color vision. In experiment 1, one participant was excluded due to excessive head motion (>5 mm), and one participant was excluded due to poor behavioral performance (<60% accuracy). The study was approved by Institutional Review Boards at Harvard University, MGH/Partners, and MIT. All participants completed retinotopic mapping, localizer scans for ventral category-selective regions of interest (ROI), and the main experiment.

Experiment 1 stimuli, procedure, and design.

The main purpose of experiment 1 was to examine neural interactions among multiple visual stimuli as a function of the number of objects (1 or multiple) and object similarity (same or different).

Each item subtended 5.25° × 5.25° and was centered 5.13° away from a red fixation point (0.15° × 0.15°). The items were selected from three superordinate categories: faces, scenes, and common objects. Face and object images were selected from Face Place and the Object Databank provided by Michael J. Tarr (Center for the Neural Basis of Cognition and Department of Psychology, Carnegie Mellon University; http://www.tarrlab.org/; funding provided by National Science Foundation Award 0339122), and scene images were selected from Aude Oliva's scene database (Oliva and Torralba 2001). After items were selected, they were converted into grayscale images. Each category contained 108 possible exemplars in experiment 1 and 160 possible exemplars in experiment 2.
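
As an illustration of this display geometry, the sketch below computes the four item centers in pixels from the reported values (5.25° items centered 5.13° from fixation). The screen resolution, projected image size, and placement of the items on the 45° diagonals are assumptions made only for this example; of the display parameters, only the 110-cm viewing distance is taken from the METHODS.

```python
import math

# Assumed display parameters (not reported in the article); only the 110-cm
# viewing distance comes from METHODS.
SCREEN_PX = (1024, 768)      # projector resolution (assumption)
SCREEN_CM = (40.0, 30.0)     # projected image width/height in cm (assumption)
VIEW_DIST_CM = 110.0         # viewing distance reported in the article

ITEM_SIZE_DEG = 5.25         # each item subtended 5.25 x 5.25 deg
ECCENTRICITY_DEG = 5.13      # item centers were 5.13 deg from fixation

def deg_to_px(deg):
    """Convert a visual angle to pixels along the horizontal axis."""
    cm = 2 * VIEW_DIST_CM * math.tan(math.radians(deg) / 2)
    return cm * SCREEN_PX[0] / SCREEN_CM[0]

# Place one item per quadrant; the 45-deg diagonal placement is an assumption.
cx, cy = SCREEN_PX[0] / 2, SCREEN_PX[1] / 2
d = deg_to_px(ECCENTRICITY_DEG) / math.sqrt(2)
quadrant_centers = {
    "upper_left":  (cx - d, cy - d),
    "upper_right": (cx + d, cy - d),
    "lower_left":  (cx - d, cy + d),
    "lower_right": (cx + d, cy + d),
}
item_size_px = deg_to_px(ITEM_SIZE_DEG)
```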

Participants viewed images containing 1) a single item in one of the visual quadrants along with three placeholders (outlined white boxes) in the empty quadrants (“single stimulus”), 2) four identical items, one in each quadrant (“4-same”), or 3) four different items from the same category (e.g., 4 faces), one in each quadrant (“4-different”). These items were presented simultaneously for 0.5 s, followed by a 0.5-s blank interval. There were four separate single-stimulus blocks. The single stimulus (plus 3 white-box placeholders) occupied the same visual quadrant within a given block but varied in its position across the four blocks. This design allowed us to examine retinotopically mapped visual activity for the quadrant that was stimulated throughout a block of testing.

Participants completed nine scans using a blocked design. The stimuli used for a given scan came from the same category: faces, scenes, or objects. There were three scans per stimulus category. Each scan lasted 434 s and comprised 14 stimulus blocks (each for 16 s, containing 16 trials of 1 s each: 0.5-s stimulus plus 0.5-s blank) preceded and followed by blank fixation periods (each 16 s). The 14 stimulus blocks contained 7 experimental conditions presented twice in a mirror-reversed testing order. The order of the 7 conditions was random. These conditions were 4-same, 4-different, single-upper left, single-upper right, single-lower left, single-lower right, and one more condition designed to test other hypotheses, which is not reported here.
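
A minimal sketch of how such a mirror-reversed block order could be generated for one scan is given below; it is an illustration of the design as described, not the authors' stimulus code, and the unreported seventh condition is labeled generically as "other".

```python
import random

# The 7 conditions of an experiment 1 scan; "other" stands in for the
# additional condition that is not reported in this article.
CONDITIONS = ["4-same", "4-different", "single-upper-left", "single-upper-right",
              "single-lower-left", "single-lower-right", "other"]

def scan_block_order(rng=random):
    """One scan's 14 stimulus blocks: a random order of the 7 conditions
    followed by that same order mirror-reversed."""
    order = CONDITIONS[:]
    rng.shuffle(order)
    return order + order[::-1]

print(scan_block_order())   # 14 entries forming a palindrome of conditions
```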

Experiment 1 task.

Because active tasks may lead to different task difficulties across conditions, we did not impose a task on the visual stimuli. Instead, participants performed a demanding central fixation task (Kastner et al. 1998): they pressed a key whenever the fixation dot briefly dimmed. Dimming occurred at an unpredictable moment on 20% of the trials. The fixation task also helped participants maintain central fixation.
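
A sketch of how the dimming events could be scheduled within a 16-trial block is shown below. Whether the 20% rate was enforced per block or applied independently per trial is not stated in the text, so the per-block version here is an assumption.

```python
import random

def dimming_schedule(n_trials=16, p_dim=0.20, rng=random):
    """Mark which trials of a block contain a brief fixation dimming.
    Enforces roughly 20% dimming trials per block (an assumption) at
    unpredictable positions."""
    n_dim = round(n_trials * p_dim)                 # ~3 of 16 trials
    dim_trials = set(rng.sample(range(n_trials), n_dim))
    return [i in dim_trials for i in range(n_trials)]
```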

Experiment 2 design and procedure.

Experiment 2 contrasted two different methods that have been used to examine the neural interactions among multiple visual objects. First, we used the direct method of experiment 1 and compared BOLD responses to a single stimulus, four identical items, and four different items presented simultaneously. Second, we adapted the paradigm of Kastner et al. (1998) to examine simultaneous suppression. To this end, some experimental blocks included sequentially rather than simultaneously presented stimuli.

Stimuli in experiment 2 (4-same, 4-different, or a single stimulus plus 3 placeholders) were presented either sequentially or simultaneously, producing six experimental conditions (Fig. 2). In the simultaneous mode, items in the four quadrants were presented simultaneously for 0.5 s, followed by a blank interval of 1.5 s. In the sequential mode, the four items were presented sequentially at a rate of 0.5 s per item. In both presentation modes, when a trial contained a single stimulus plus placeholders, the precise quadrant of the single stimulus was randomly determined in each trial (rather than fixed in a block of trials), with the constraint that all quadrants were stimulated equally often.
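
For concreteness, the 2-s trial structure of the two presentation modes can be sketched as follows. This is a schematic of the timing described above, not the authors' presentation code, and the randomized quadrant order in the sequential mode is an assumption (the text does not specify it).

```python
import random

def trial_schedule(mode, items, rng=random):
    """Return a list of (onset_s, duration_s, items_shown) events for one 2-s trial.
    items: dict mapping quadrant name -> image, with 4 entries in the 4-same and
    4-different conditions (hypothetical input structure)."""
    if mode == "simultaneous":
        # All 4 items together for 0.5 s, then a 1.5-s blank.
        return [(0.0, 0.5, dict(items)), (0.5, 1.5, {})]
    elif mode == "sequential":
        # One item at a time, 0.5 s each; order assumed to be randomized.
        order = list(items)
        rng.shuffle(order)
        return [(i * 0.5, 0.5, {q: items[q]}) for i, q in enumerate(order)]
    raise ValueError("mode must be 'simultaneous' or 'sequential'")
```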

Fig. 2. An example of a 4-different trial in 2 different presentation modes of experiment 2.

Participants completed nine scans (3 scans per stimulus category) in experiment 2. Each scan lasted for a total of 424 s, comprising 12 stimulus blocks (each for 18 s) preceded and followed by blank fixation periods (each for 16 s). Each stimulus block of 18 s contained 9 trials (2 s each). The 12 stimulus blocks contained 6 experimental conditions repeated twice in a mirror-reversed condition order. The order of conditions was counterbalanced across participants.
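
One reading of this timing that reproduces the 424-s total is a 16-s fixation period before the first block, between successive blocks, and after the last block (13 periods in all); this interpretation is an inference from the arithmetic rather than an explicit statement in the text.

```python
n_blocks, block_s = 12, 18
n_fix, fix_s = n_blocks + 1, 16   # fixation before, between, and after blocks (inferred)
total_s = n_blocks * block_s + n_fix * fix_s
print(total_s)                    # 216 + 208 = 424 s, matching the reported scan length
```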

Retinotopic mapping.

Retinotopic mapping of the visual cortical areas (V1, V2, V3, V4v, and V3A) was conducted using standard meridian mapping (Grill-Spector et al. 1999; Shim et al. 2010) with horizontal and vertical “bowtie” stimuli composed of counterphase flashing checkerboards. Visual field representations were delineated by alternating representations of the vertical and horizontal meridians (Wandell et al. 2007).

Localizer scans.

Category-selective ventral areas were localized using a blocked design involving faces, scenes, objects, and scrambled images (16 stimulus blocks lasting for 16 s each, preceded and followed by 16-s blank fixation periods every 4 blocks; Schwarzlose et al. 2008). For each participant, voxels that showed greater activation for the preferred category than a nonpreferred category at P < 0.01 (uncorrected for multiple comparisons) were included in the subsequent ROI analysis: FFA (faces > objects), PPA (scenes > objects), and LOC (objects > scrambled objects). One participant in experiment 2 did not show significant activation for faces in the localizer scan; the FFA analysis in this experiment did not include data from that participant. All other category-selective ROIs were found in all participants. Because activity in response to nonpreferred stimuli in the category-selective areas was low and did not yield meaningful data, ROI data from a given category-selective region (e.g., FFA) are from the three scans where a preferred category was presented (e.g., faces).
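
The voxel-selection rule can be illustrated schematically as below, assuming per-block response estimates for the preferred and nonpreferred categories are already in hand. The inputs are synthetic, the one-tailed test is an assumption, and the actual analysis used FreeSurfer's GLM tools rather than this code.

```python
import numpy as np
from scipy import stats

def roi_mask(betas_pref, betas_nonpref, alpha=0.01):
    """Voxels responding more to the preferred than the nonpreferred category
    at P < alpha, uncorrected. Inputs: (n_blocks, n_voxels) arrays of per-block
    response estimates (hypothetical)."""
    t, p = stats.ttest_rel(betas_pref, betas_nonpref, axis=0)
    return (t > 0) & (p / 2 < alpha)   # one-tailed preferred > nonpreferred (assumed)

# Synthetic example: 16 localizer blocks, 5,000 voxels; FFA-style contrast.
rng = np.random.default_rng(0)
faces = rng.normal(1.0, 1.0, size=(16, 5000))
objects = rng.normal(0.8, 1.0, size=(16, 5000))
ffa_voxels = roi_mask(faces, objects)
print(ffa_voxels.sum(), "voxels selected")
```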

fMRI scanning.

fMRI data were collected on a Siemens 3T Trio scanner using a standard 12-channel head coil. Six participants in experiment 2 were tested at the Martinos Imaging Center in Charlestown, MA; all other participants were tested at the MIT McGovern Institute. For all participants, we collected two scans of a high-resolution T1 structural image (1 × 1 × 1.33 mm) using the magnetization-prepared rapid acquisition gradient-echo (MP-RAGE) sequence, which was used for brain surface reconstruction. All fMRI scans used standard T2*-weighted gradient-echo echo-planar imaging (EPI) sequences (repetition time = 2,000 ms, echo time = 30 ms, flip angle = 90°, in-plane resolution = 3.125 × 3.125 mm). We obtained data from 28 axial slices, 4 mm thick with no gap between slices, providing coverage of the whole brain except the base of the cerebellum. Participants viewed the back-projected stimuli via a mirror; the viewing distance was 110 cm.

fMRI data analysis.

The fMRI data were preprocessed to correct for head motion and to remove linear drifts. Voxels were smoothed with a Gaussian kernel (full width at half-maximum = 6 mm) and were intensity normalized. The cortical surface of each participant's brain was reconstructed using FreeSurfer (http://surfer.nmr.mgh.harvard.edu; Fischl et al. 1999, 2001). fMRI data were analyzed using FreeSurfer and in-house MATLAB scripts. The hemodynamic response of each voxel was estimated using a gamma function (delta = 2.25, tau = 1.25).
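
The gamma function with delta = 2.25 and tau = 1.25 presumably refers to the gamma-variate hemodynamic model used in FreeSurfer's analysis stream; the sketch below assumes the common form with an exponent of 2, which is not stated in the text.

```python
import numpy as np

def gamma_hrf(t, delta=2.25, tau=1.25, exponent=2.0):
    """Gamma-variate hemodynamic response model:
    h(t) = ((t - delta) / tau)**exponent * exp(-(t - delta) / tau) for t > delta,
    and 0 otherwise. Only delta and tau are given in the text; the exponent of 2
    is an assumption."""
    s = np.clip((np.asarray(t, dtype=float) - delta) / tau, 0.0, None)
    h = s**exponent * np.exp(-s)
    return h / h.max() if h.max() > 0 else h   # peak-normalized for convenience

hrf = gamma_hrf(np.arange(0, 32, 2.0))         # sampled at the 2-s repetition time
```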

Activation corresponding to each stimulus condition in each experiment was calculated in independently defined ROIs (see Retinotopic mapping). The V1, V2, and V3 ROIs consisted of four subregions, one for each quadrant; the V4v and V3A ROIs consisted of two subregions, one for each hemifield. We included voxels in the ROIs that showed significantly greater activation during visual stimulation (all conditions collapsed) than blank fixation in the main experimental runs (P < 0.01, uncorrected for multiple comparisons). Because the selection of the voxels was based on all conditions, the ROI analysis was not intrinsically biased toward any condition. In experiment 2, one participant was excluded from the retinotopic analysis (V1 through V3A) because no voxels in V1 showed greater activity during visual stimulation than the fixation baseline. The data from retinotopically mapped areas were collapsed across all three stimulus categories (faces, scenes, and objects) because results were similar for all categories. We averaged percent signal change from 6 to 20 s after the onset of each stimulus block using the modeled estimates of fMRI signal. Mean responses of each ROI were calculated for each participant. A repeated-measures analysis of variance (ANOVA) was then conducted to produce a second-level, random-effects group analysis.
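
A schematic version of the last two steps, window-averaging the modeled percent signal change over 6-20 s and comparing conditions across participants, is sketched below with synthetic numbers; the real estimates came from the FreeSurfer GLM, and the group statistics in RESULTS were computed with repeated-measures ANOVAs and paired t-tests.

```python
import numpy as np
from scipy import stats

TR = 2.0

def window_average(psc, tr=TR, window=(6.0, 20.0)):
    """Average modeled percent-signal-change values over 6-20 s after block onset.
    psc: 1-D array sampled every TR (hypothetical input)."""
    t = np.arange(len(psc)) * tr
    keep = (t >= window[0]) & (t <= window[1])
    return float(np.mean(psc[keep]))

# Second-level comparison across the 6 participants of experiment 1, e.g.
# 4-same vs. single stimulus in one ROI (synthetic values for illustration only).
psc_4same  = np.array([0.62, 0.55, 0.71, 0.58, 0.66, 0.60])
psc_single = np.array([0.41, 0.38, 0.52, 0.35, 0.47, 0.44])
t_stat, p_val = stats.ttest_rel(psc_4same, psc_single)
```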

RESULTS

Experiment 1.

In experiment 1 we examined neural activity to a single stimulus, four identical stimuli, and four different stimuli presented simultaneously in different visual quadrants. If identical stimuli produce repetition attenuation, then activity in the 4-same condition should be lower than that in the 4-different condition. Contrary to this prediction, throughout the visual hierarchy we observed greater activity in the 4-same condition than in both the 4-different and the single-stimulus conditions (Fig. 3). We refer to the enhanced BOLD response to identical stimuli as a “redundancy gain.”

Fig. 3. Results from experiment 1. A: data from the quarterfield representations of V1, V2, and V3 that were either constantly stimulated by the single stimulus or not stimulated. B: data from the category-selective ventrotemporal regions (FFA, fusiform face area; PPA, parahippocampal place area; LOC, lateral occipital complex). Error bars show ±1 SE of the mean.

The clearest evidence for a redundancy gain came from retinotopically mapped regions with quarterfield representations. Stimuli from different quadrants project to spatially segregated regions of V1, V2, and V3 (Wandell et al. 2007). We therefore examined activation to a single stimulus in the stimulated quarterfield as a function of whether the other quadrants were empty (single stimulus) or were occupied by the same stimuli (4-same) or by different stimuli (4-different). As shown in Fig. 3A, activity in V1, V2, and V3 was higher in the 4-same condition than in both the single-stimulus condition [t(5) = 6.046, P = 0.002 in V1; t(5) = 4.005, P = 0.010 in V2; t(5) = 3.657, P = 0.015 in V3] and the 4-different condition [t(5) = 3.165, P = 0.025 in V1; t(5) = 3.293, P = 0.022 in V2; t(5) = 3.295, P = 0.022 in V3]. These data indicate that the presence of identical stimuli in other quadrants enhances BOLD responses. In addition, activity in the 4-different condition was not significantly lower than that in the single-stimulus condition [t(5) = −1.483, P = 0.198 in V2; t(5) = −1.184, P = 0.290 in V3; t(5) = 4.288, P = 0.008 in the opposite direction in V1], revealing no evidence for suppression among multiple different stimuli. Thus the presence of multiple identical stimuli enhances early visual activity, showing a redundancy gain in retinotopically mapped visual areas.

The presence of a redundancy gain in V1 and V2 is surprising given that receptive fields in these regions are too small to encompass stimuli from separate quadrants. One may be concerned about the selection of the ROIs: if the ROIs for a given visual quadrant include voxels from neighboring quadrants, then activity could be higher in the multiple-stimulus conditions than in the single-stimulus condition. However, this concern does not explain why activity is higher in the 4-same condition than in the 4-different condition. Moreover, if the ROIs had included voxels from neighboring quadrants, then ROIs from the nonstimulated quadrants should include voxels that received visual stimulation, and hence activity in the nonstimulated quadrants should exceed the fixation baseline. This was not the case. As shown in Fig. 3A, mean activation in nonstimulated quadrants in the single-stimulus condition was below baseline [t(5) = −2.424, P = 0.060 in V1; t(5) = −3.487, P = 0.018 in V2; t(5) = −4.040, P = 0.010 in V3], making it unlikely that the ROIs included voxels from neighboring quadrants. Finally, because no stimulus fell in the central 2° around the horizontal and vertical meridians, and only voxels that showed greater activation during visual stimulation than fixation were included in the ROIs, voxels corresponding to the central 2° were mostly excluded from our analysis, minimizing the possibility that voxels from different quadrants were mixed within an ROI.

What could be the source of the redundancy gain? Because of the small receptive field size of V1 and V2, the redundancy gain likely reflects feedback activity from higher visual areas. Consistent with this interpretation, the redundancy gain was seen in ventral category-selective regions (Fig. 3B). The BOLD response was higher in the 4-same condition than in the 4-different condition in the FFA [t(5) = 4.101, P = 0.009] and the LOC [t(5) = 3.623, P = 0.015], with a similar trend in the PPA [t(5) = 2.07, P = 0.093]. In addition, all three regions showed greater activity in the 4-same condition than in the single-stimulus condition [t(5) = 7.235, P = 0.001 in FFA; t(5) = 5.954, P = 0.002 in LOC; t(5) = 3.920, P = 0.011 in PPA] and greater activity in the 4-different condition than in the single-stimulus condition [t(5) = 3.798, P = 0.013 in FFA; t(5) = 2.426, P = 0.06 in LOC; t(5) = 2.966, P = 0.031 in PPA].

Experiment 2.

Most neurophysiology and neuroimaging studies have found that interactions among simultaneously presented visual stimuli are suppressive and that these interactions primarily occur when stimuli fall within the same receptive field (Beck and Kastner 2009; Desimone and Duncan 1995). However, the redundancy gain revealed in experiment 1 does not fit either pattern, suggesting that it reflects a novel form of interaction. To more directly examine the relationship between redundancy gain and simultaneous suppression, in experiment 2 we presented stimuli from the three conditions (single-stimulus plus 3 placeholders, 4-same, and 4-different) either simultaneously or sequentially. Because the 4-same and 4-different conditions are well matched in low-level properties, our analyses focus on these conditions (see Table 1 for the full data set).

Table 1.

Mean percent signal change in all conditions of experiment 2

Area   n   SIM 4-Diff    SEQ 4-Diff    SIM 4-Same    SEQ 4-Same    SIM Single    SEQ Single
V1     7   0.64 (0.06)   0.63 (0.04)   0.79 (0.03)   0.62 (0.12)   0.32 (0.09)   0.33 (0.08)
V2     7   0.55 (0.06)   0.60 (0.06)   0.69 (0.03)   0.57 (0.10)   0.33 (0.08)   0.36 (0.07)
V3     7   0.45 (0.06)   0.60 (0.06)   0.59 (0.03)   0.55 (0.09)   0.30 (0.07)   0.39 (0.06)
V4v    7   0.51 (0.07)   0.75 (0.08)   0.67 (0.07)   0.68 (0.11)   0.34 (0.07)   0.43 (0.07)
V3A    7   0.34 (0.04)   0.60 (0.05)   0.48 (0.02)   0.56 (0.06)   0.28 (0.06)   0.42 (0.05)
FFA    7   0.45 (0.08)   0.72 (0.06)   0.69 (0.09)   0.76 (0.10)   0.45 (0.07)   0.43 (0.08)
PPA    8   0.32 (0.08)   0.84 (0.08)   0.59 (0.06)   0.69 (0.08)   0.26 (0.09)   0.33 (0.07)
LOC    8   0.46 (0.07)   0.87 (0.10)   0.56 (0.07)   0.69 (0.12)   0.30 (0.07)   0.51 (0.10)

Values are means (SE) from 3 stimulus conditions: 4-different (4-Diff), 4-same, or single stimulus with simultaneous (SIM) or sequential (SEQ) presentation. Data from V1 to V3A were from all 6 runs (including objects, faces, and scenes). Data from the fusiform face area (FFA), parahippocampal place area (PPA), and lateral occipital complex (LOC) were from 2 runs containing a preferred category.

When the four stimuli were different from each other, we observed a simultaneous suppression effect that scaled with receptive field size (Fig. 4A): activity was significantly lower when stimuli were presented simultaneously rather than sequentially. This effect was seen in ventral category-selective regions, whose receptive fields span up to 30° (Desimone et al. 1984) [t(6) = −3.701, P = 0.01 in the FFA; t(7) = −4.421, P = 0.003 in the PPA; t(7) = −5.683, P = 0.001 in the LOC]. Simultaneous suppression was also evident in retinotopically mapped regions with relatively large receptive field size [t(6) = −4.753, P = 0.003 in V3; t(6) = −6.390, P = 0.001 in V4v; t(6) = −7.736, P < 0.001 in V3A] but was absent in early visual areas with small receptive field size [t(6) = 0.202, P = 0.846 in V1; t(6) = −1.212, P = 0.271 in V2]. Thus, consistent with the findings of Kastner et al. (1998), BOLD responses to four different stimuli are lower when they are presented simultaneously rather than successively, and we extend this result to complex meaningful stimuli (faces, scenes, and objects) presented in different visual quadrants.

Fig. 4. Effects of presentation mode (simultaneous or sequential presentation) in experiment 2. A: data from the 4-different condition revealed a simultaneous suppression effect that scaled with receptive field size. B: simultaneous suppression was abolished in the 4-same condition. SIM, simultaneous presentation; SEQ, sequential presentation.

In contrast, when the four stimuli were identical to each other (Fig. 4B), simultaneous suppression was rendered insignificant in ventral category-selective regions [t(6) = −1.158, P = 0.291 in FFA; t(7) = −0.764, P = 0.47 in PPA; t(7) = −1.431, P = 0.195 in LOC]. It was also absent in retinotopically organized higher visual areas [t(6) = 0.535, P = 0.612 in V3; t(6) = −0.033, P = 0.975 in V4v; t(6) = −1.295, P = 0.243 in V3A] as well as in early retinotopic cortex [t(6) = 1.591, P = 0.163 in V1; t(6) = 1.389, P = 0.214 in V2]. An ANOVA on presentation mode (simultaneous or sequential) and stimulus similarity (4-same or 4-different) revealed a significant interaction in most ROIs, indicating that simultaneous suppression was attenuated when the four stimuli were identical [category-selective regions: F(1,6) = 6.635, P = 0.042 in FFA; F(1,7) = 6.190, P = 0.042 in PPA; F(1,7) = 11.892, P = 0.011 in LOC; higher retinotopic visual areas: F(1,6) = 6.188, P = 0.047 in V3; F(1,6) = 6.014, P = 0.05 in V4v; F(1,6) = 11.135, P = 0.016 in V3A; early visual areas: F(1,6) = 2.093, P = 0.198 in V1 and F(1,6) = 3.416, P = 0.114 in V2].
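
For reference, a 2 × 2 repeated-measures ANOVA of this form can be run as sketched below. The data are synthetic, and the choice of statsmodels' AnovaRM is ours for illustration; the article does not state which software computed these tests.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic percent-signal-change values for one ROI: 7 participants x
# 2 presentation modes x 2 similarity levels (illustration only).
rng = np.random.default_rng(1)
rows = [{"subject": s, "mode": m, "similarity": c, "psc": rng.normal(0.6, 0.1)}
        for s in range(7) for m in ("SIM", "SEQ") for c in ("4-same", "4-different")]
df = pd.DataFrame(rows)

# The mode x similarity interaction term tests whether simultaneous
# suppression is attenuated when the four stimuli are identical.
result = AnovaRM(df, depvar="psc", subject="subject",
                 within=["mode", "similarity"]).fit()
print(result.anova_table)
```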

Replicating the findings from experiment 1 when stimuli were presented simultaneously, the BOLD response was significantly higher in the 4-same condition than in the 4-different condition (Fig. 5). This redundancy gain effect was shown throughout the visual hierarchy, including the ventral category-selective regions [t(6) = 2.595, P = 0.041 in FFA; t(7) = 2.987, P = 0.02 in PPA; t(7) = 2.127, P = 0.071 in LOC] and retinotopically organized higher visual areas [t(6) = 3.891, P = 0.008 in V3; t(6) = 3.547, P = 0.012 in V4v; t(6) = 4.786, P = 0.003 in V3A], as well as V1 and V2 [t(6) = 3.107, P = 0.021 in V1; t(6) = 3.602, P = 0.011 in V2].

Fig. 5. Percent signal change in response to simultaneously presented stimuli in experiment 2.

The ROI analyses reported above included voxels that showed greater activity to all conditions than the fixation baseline. To ensure that the voxel selection did not somehow introduce a disadvantage for the 4-different condition, we re-defined the ROIs by including voxels that showed greater activity in the 4-different condition than in the fixation baseline. Defining the ROIs on the basis of the 4-different condition did not change the pattern of results. Visual activity was still higher in the 4-same condition than in the 4-different condition, suggesting that the redundancy gain is a robust phenomenon.

The redundancy gain cannot be accounted for by differences in participants' attentional state. Performance in the central fixation task was not at ceiling: 88.7% in experiment 1 and 80.5% in experiment 2. It was unaffected by stimulus condition (4-same, 4-different, or single) [F(2,10) = 0.486, P = 0.629 in experiment 1 and F(2,14) = 1.726, P = 0.214 in experiment 2].

DISCUSSION

In this study we demonstrate redundancy gains in retinotopic cortex: neural activity in response to a stimulus in early retinotopic cortex was greater when identical stimuli appeared simultaneously in other visual quadrants than when that stimulus was presented alone. This long-range contextual effect was observed in all early retinotopic regions with quarterfield representation (V1, V2, and V3; Wandell et al. 2007). Because the objects were separated by 5° or more and appeared in distinct visual quadrants, local computations occurring within the classic receptive field cannot account for this effect. In contrast to the increase in response to a given item when identical stimuli were presented simultaneously in other quadrants, there was no effect on an item's response when it appeared simultaneously with other different items. Thus visual responses in early visual cortex are enhanced by distant, redundant context but are not suppressed by distant, heterogeneous context. This effect is importantly different from any contextual effects reported previously, including the simultaneous suppression effect (Kastner et al. 1998). Instead, this effect appears to reflect a novel form of feedback from higher cortical areas, one that may reflect the neural correlate of the redundancy gain observed behaviorally when multiple copies of the identical stimulus are presented simultaneously, compared with a single instance (Jiang et al. 2010). Next we discuss the relationship between our findings and other kinds of context effects in retinotopic cortex reported previously.

Biased competition.

According to the biased competition model, simultaneously presented visual stimuli compete for neural representation when they are presented in the same receptive field, leading to suppressive effects that increase in magnitude as one ascends the visual hierarchy (Kastner et al. 1998, 2001). These effects, which were first shown physiologically (Miller et al. 1993; Moran and Desimone 1985; Reynolds et al. 1999), were later reported in fMRI as a lower response when stimuli were presented simultaneously than when they were presented successively (Kastner et al. 1998, 2001). Using this method, we show lower responses to simultaneous than to successive stimuli in the ventral category-selective regions (FFA, PPA, and LOC) and in higher retinotopic regions with relatively large receptive field size (V3, V4v, and V3A), but not in early areas V1 and V2. Because simultaneous suppression strengthens in cortical regions with greater receptive field size, our findings on simultaneous suppression are consistent with the biased competition model, and we extend the applicability of the model to complex visual stimuli (faces, objects, and scenes) placed in different visual quadrants. However, because the stimuli in our study are far apart in different quadrants (and hence different receptive fields), the redundancy gain we observed in early visual areas must reflect a different phenomenon than biased competition (which occurs only when stimuli fall within the same receptive fields). Although several prior studies have reported that biased competition was eliminated when nearby items were identical in color and orientation (Beck and Kastner 2005, 2007) or when they formed strong perceptual groups via good continuity cues (McMains and Kastner 2010, 2011), these studies tested stimuli within the same visual quadrant. In contrast, the stimuli in our study do not land in overlapping receptive fields in V1 and V2, and the redundancy gain effect we observe consists of a higher response when multiple (identical) stimuli are presented compared with a single stimulus. Thus the redundancy gain effect reported here is a new phenomenon that is not related to simultaneous suppression.

Attentional enhancement of responses in retinotopic cortex.

It is well established that attention can affect visual responses throughout retinotopic cortex, including V1 (Brefczynski and DeYoe 1999; Ress et al. 2000; Somers et al. 1999). Might the redundancy gain reported here reflect top-down attentional enhancement for identical stimuli? Indeed, the biased competition model predicts that interactions between stimuli can be modulated by top-down attention. Although possible, this account faces empirical challenges.

First, subjects were engaged in a task at the fovea, which would discourage attention to the peripheral stimuli. This central fixation task was demanding: although the dimming occurred on only 20% of the trials, participants could not predict when it would occur and therefore had to monitor all trials. Their behavioral performance was below ceiling (88.7% in experiment 1 and 80.5% in experiment 2), suggesting that the task was moderately difficult. Importantly, performance in the fixation task did not differ significantly across conditions, suggesting that attention was not more attracted to the peripheral stimuli when they were identical than when they were different. Second, postexperiment interviews showed that most participants never noticed the fact that the stimuli were identical in some blocks and different in others, a feature of this experiment that should have been very salient if attention had strayed to the stimuli. Furthermore, if participants had attended to the stimuli, the results should have resembled findings from studies in which participants were required to attend to multiple stimuli. In one such study (Xu 2010), participants attended to and memorized either multiple identical stimuli or multiple stimuli in different colors or shapes. Presumably because of differences in attention and working memory demands, activity in the LOC was lower in the identical-stimuli condition than in the different-stimuli condition. Although we do not think that the redundancy gain reflects greater attention to identical stimuli, future studies that manipulate attention and task demands are needed to further test this hypothesis.

Feedback to retinotopic cortex.

A variety of fMRI findings have been reported that appear to demonstrate feedback of high-level information to retinotopic cortex including V1. For example, an object that appears farther and larger (because of scene context cues) activates a larger area in V1 than an object of equal angular size that is perceived to be closer and smaller (Murray et al. 2006). Similarly, cortical activity in retinotopic areas including V1 is correlated with perceived (rather than actual) brightness (Boyaci et al. 2007). In a different vein, Williams et al. (2008) showed that the category of an object presented in the periphery could be decoded from the pattern of response in foveal cortex, an effect attributed to feedback projection from higher visual areas. Finally, Smith and Muckli (2010) demonstrated that when one quadrant of a picture is occluded by blank space, the pattern of response in a region of V1 that corresponded to the empty quadrant can predict the picture shown in the remaining quadrants. The results presented here are broadly consistent with these prior findings that high-level information is apparently fed back to early retinotopic cortex including V1. The same pattern of results from category-selective ventrotemporal areas (FFA/PPA/LOC) suggests that they could be the source of the feedback. The novel finding of the present study is that the pattern of V1 and V2 activity depends on the relationship between the distant stimuli: whereas visually different stimuli have little influence on V1 and V2, visually identical stimuli enhance activity in V1 and V2, demonstrating redundancy gains across wide distances.

Predictive coding.

Our finding appears to conflict with prior evidence for “predictive coding,” in which responses in LOC have been found to be higher for a bistable shape when it produces a coherent shape percept than when it produces an incoherent percept, whereas the opposite was found in V1, with lower responses to the percept of the coherent shape (Murray et al. 2002; see also Fang et al. 2008). These results have been interpreted as showing that inferences made in high-level areas are essentially subtracted from incoming sensory information in lower areas. In these examples of predictive coding, a more coherent higher-level representation goes along with a lower response in retinotopic cortex, whereas in our case the presumably stronger higher-level representation goes with a higher response in retinotopic cortex. Thus our effects are clearly distinct from predictive coding effects.

An important difference between these phenomena may be the involvement of perceptual grouping. In previous studies, activity in V1 was reduced when individual stimuli induced strong perceptual grouping. In contrast, the redundant stimuli used in our study are highly complex and are unlikely to form a perceptual group. Prior evidence that multiple faces do not form perceptual groups comes from the finding that searching for repeated objects or faces is an attentionally demanding, laborious process (Cavanagh and Parkman 1972; Hayes et al. 2010), whereas search would be efficient if these stimuli were grouped. We suspect that the long-distance redundancy gain we observe more likely results from the extraction of global visual statistics (Alvarez and Oliva 2009) over the entire visual field, including redundant visual features and identities.

Redundancy gains.

If our effect is not explainable in terms of prior-reported effects such as simultaneous suppression, top-down attentional enhancement, or predictive coding, how are we to understand it? We suggest that the effect reported here arises from feedback from high-level visual areas, especially object- or category-selective ventrotemporal areas. Furthermore, these redundancy gain effects in both retinotopic cortex and extrastriate areas closely parallel behavioral work showing perceptual benefits under the same circumstances. Compared with a single item, a display containing four identical stimuli results in enhanced perceptual representations and more robust visual short-term and long-term memory (Jiang et al. 2010). The fMRI data reported here may provide a neural basis for perceptual averaging (Sweeny et al. 2009) and other ensemble coding in behavior.

A parsimonious interpretation of all of these effects is that information from disparate visual locations is pooled through ensemble-coding mechanisms (Alvarez 2011) at higher levels of processing, which increases the robustness of the resulting representation when the stimuli in those locations are identical, and this enhanced higher-level representation produces increased feedback to the sources of that information in retinotopic cortex. When object information in high-level category-selective areas is fed back to early visual cortex, it may increase the population response of neurons that are tuned to low-level featural correlates of that object, presumably via re-entrant processing as proposed by the reverse-hierarchy theory (Hochstein and Ahissar 2002). Previous behavioral work suggests that visual perception is initially dominated by global gist, which facilitates the subsequent processing of local details (Hochstein and Ahissar 2002, Sweeny et al. 2011). Thus ensemble-coding mechanisms in high-level visual areas may rapidly extract summary statistics of simultaneously presented stimuli by neural averaging (Zoccolan et al. 2005) and send this information back to the early visual cortex, strengthening representations of low-level features of the stimuli.
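
As a toy illustration of this averaging-plus-feedback idea (not the authors' model, and intended only to show why the pooled high-level drive is larger for identical than for different items), suppose a category-selective population responds strongly to one preferred exemplar and weakly to others, and that its response to a multi-item display approximates the average of the single-item responses (Zoccolan et al. 2005); a feedback signal proportional to that pooled response would then boost early visual activity more in the 4-same than in the 4-different condition.

```python
import numpy as np

# Hypothetical single-item responses of a category-selective population
# (arbitrary units): strong to its preferred exemplar, weaker to three others.
r_preferred = 1.0
r_others = np.array([0.30, 0.25, 0.35])

# Response averaging over a multi-item display (after Zoccolan et al. 2005).
pooled_4same = np.mean([r_preferred] * 4)          # 1.00: redundancy preserves the drive
pooled_4diff = np.mean([r_preferred, *r_others])   # ~0.48: different items dilute it

# Toy feedback rule: early visual response scales with the pooled high-level
# drive; the feedback weight k is arbitrary and purely illustrative.
k = 0.5
v1_single = 1.0
v1_4same = v1_single * (1 + k * pooled_4same)      # strongest enhancement
v1_4diff = v1_single * (1 + k * pooled_4diff)      # weaker enhancement
```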

Our data suggest that ensemble coding is not simply a means of representing visual stimuli more economically. By enhancing rather than reducing BOLD activity, ensemble coding may serve to improve the representational precision of multiple objects, given that the representation of each individual item may be too noisy. Consistent with behavioral findings and theories (Alvarez 2011), the redundancy gain we have reported here may reflect the enhancement of perceptual representation that results from ensemble coding.

A speculation about the limited representational capacity of each category-selective region.

The pattern of response we observe in category-selective regions in the ventral visual pathway (see Fig. 3B) suggests possible limits in the representational capacity of each of these regions. Specifically, consider the somewhat higher response in each of these regions when four different stimuli are presented, compared with just one. A plausible interpretation of this result is that partly separate populations of neurons respond to the stimuli presented in each quadrant such that more neurons are activated when four stimuli are presented than when just one stimulus is presented. The challenge for the account is to explain why responses are lower in the 4-different than the 4-same condition. If the higher response for multiple vs. single stimuli arose because of nonoverlapping spatial receptive fields, it should make no difference whether the stimuli in those receptive fields were the same as or different from each other. The fact that it does make a difference and the response is overall higher when identical rather than different stimuli are presented suggests a different interpretation of our data. Namely, each of these regions may be limited in representational capacity not by the overlap in spatial receptive fields contained within each region, but instead by the overlap in higher-level category representations themselves. In particular, if each of these regions can only represent one or two exemplars of that category at a time, then we might expect that representation to degrade from cross talk (and the BOLD response to become consequently lower) when four different exemplars of the same category are presented at once, compared with when four instances of the same item are presented. Further consistent with this view is the fact that responses in FFA, PPA, and LOC are higher for sequential than simultaneous presentation, fitting the standard pattern of simultaneous suppression (Kastner et al. 1998), but importantly this effect is greater when the items are four different exemplars than when they are identical, a finding that further suggests capacity limits in the simultaneous representation of multiple different exemplars of the same category. This speculation that the representational capacity of these regions is limited to only one or two items at a time is consistent with recent findings that working memory capacity is greater when to-be-remembered items come from different categories rather than the same category (Cohen et al. 2012). It also fits naturally with the idea that these regions can encode summary statistics, by essentially averaging over multiple simultaneously-presented stimuli from the same category.

To conclude, we have demonstrated a novel context effect in which the representation of a stimulus in retinotopic cortex is enhanced when identical copies of that stimulus appear far away in other visual quadrants. This redundancy gain in retinotopic cortex is distinct from other kinds of context effects reported previously and may reflect the strengthening of perceptual representations when high-level information is pooled across spatially disparate instances of that same stimulus, and the resulting strengthened representation produces feedback to the corresponding source representations in retinotopic cortex. Future studies should directly test the feedback account and characterize visual properties that yield redundancy gain.

GRANTS

This study was supported by National Institutes of Health Grants EY13455 (to N. Kanwisher) and MH071788 (to Y. V. Jiang) and by a Minnesota Grant-in-Aid (to Y. V. Jiang).

DISCLOSURES

No conflicts of interest, financial or otherwise, are declared by the authors.

AUTHOR CONTRIBUTIONS

W.M.S., Y.V.J., and N.K. conception and design of research; W.M.S. performed experiments; W.M.S. and Y.V.J. analyzed data; W.M.S., Y.V.J., and N.K. interpreted results of experiments; W.M.S. and Y.V.J. prepared figures; W.M.S., Y.V.J., and N.K. drafted manuscript; W.M.S., Y.V.J., and N.K. edited and revised manuscript; W.M.S., Y.V.J., and N.K. approved final version of manuscript.

REFERENCES

  • Alvarez GA. Representing multiple objects as an ensemble enhances visual cognition. Trends Cogn Sci 15: 122–131, 2011
  • Alvarez GA, Oliva A. The representation of simple ensemble visual features outside the focus of attention. Psychol Sci 19: 392–398, 2008
  • Alvarez GA, Oliva A. Spatial ensemble statistics are efficient codes that can be represented with reduced attention. Proc Natl Acad Sci USA 106: 7345–7350, 2009
  • Ariely D. Seeing sets: representation by statistical properties. Psychol Sci 12: 157–162, 2001
  • Beck DM, Kastner S. Stimulus context modulates competition in human extrastriate cortex. Nat Neurosci 8: 1110–1116, 2005
  • Beck DM, Kastner S. Stimulus similarity modulates competitive interactions in human visual cortex. J Vis 7: 1–12, 2007
  • Beck DM, Kastner S. Top-down and bottom-up mechanisms in biasing competition in the human brain. Vision Res 49: 1154–1165, 2009
  • Boyaci H, Fang F, Murray SO, Kersten D. Responses to lightness variations in early human visual cortex. Curr Biol 17: 989–993, 2007
  • Brady TF, Tenenbaum JB. A probabilistic model of visual working memory: incorporating higher-order regularities into working memory capacity estimates. Psychol Rev 120: 85–109, 2013
  • Brefczynski JA, DeYoe EA. A physiological correlate of the ‘spotlight’ of visual attention. Nat Neurosci 2: 370–374, 1999
  • Cavanagh JP, Parkman JM. Search processes for detecting repeated items in a visual display. Percept Psychophys 11: 43–45, 1972
  • Chong SC, Treisman A. Representation of statistical properties. Vision Res 43: 393–404, 2003
  • Cohen MA, Konkle T, Rhee J, Nakayama K, Alvarez GA. High-level neural similarity predicts perceptual encoding of different object categories. J Vis 12: 9, 2012
  • Desimone R, Albright TD, Gross CG, Bruce C. Stimulus-selective properties of inferior temporal neurons in the macaque. J Neurosci 4: 2051–2062, 1984
  • Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annu Rev Neurosci 18: 193–222, 1995
  • Fang F, Kersten D, Murray SO. Perceptual grouping and inverse fMRI activity patterns in human visual cortex. J Vis 8: 2.1–2.9, 2008
  • Fischl B, Sereno MI, Dale AM. Cortical surface-based analysis. II. Inflation, flattening, and a surface-based coordinate system. Neuroimage 9: 195–207, 1999
  • Fischl B, Liu A, Dale AM. Automated manifold surgery: constructing geometrically accurate and topologically correct models of the human cerebral cortex. IEEE Trans Med Imaging 20: 70–80, 2001
  • Grill-Spector K, Kushnir T, Edelman S, Avidan G, Itzchak Y, Malach R. Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron 24: 187–203, 1999
  • Grill-Spector K, Malach R. fMR-adaptation: a tool for studying the functional properties of human cortical neurons. Acta Psychol (Amst) 107: 293–321, 2001
  • Haberman J, Whitney D. Rapid extraction of mean emotion and gender from sets of faces. Curr Biol 17: 751–753, 2007
  • Haberman J, Whitney D. Seeing the mean: ensemble coding for sets of faces. J Exp Psychol Hum Percept Perform 35: 718, 2009
  • Hayes MT, Swallow KM, Jiang YV. The unilateral field advantage in repetition detection: effects of perceptual grouping and task demands. Atten Percept Psychophys 72: 583–590, 2010
  • Hochstein S, Ahissar M. View from the top: hierarchies and reverse hierarchies in the visual system. Neuron 36: 791–804, 2002
  • Huang L, Treisman A, Pashler H. Characterizing the limits of human visual awareness. Science 317: 823–825, 2007
  • Jiang YV, Kwon MY, Shim WM, Won BY. Redundancy effects in the perception and memory of visual objects. Vis Cogn 18: 1233–1252, 2010
  • Joo SJ, Boynton GM, Murray SO. Long-range, pattern-dependent contextual effects in early human visual cortex. Curr Biol 22: 781–786, 2012
  • Kastner S, De Weerd P, Desimone R, Ungerleider LG. Mechanisms of directed attention in the human extrastriate cortex as revealed by functional MRI. Science 282: 108–111, 1998
  • Kastner S, De Weerd P, Pinsk MA, Elizondo MI, Desimone R, Ungerleider LG. Modulation of sensory suppression: implications for receptive field sizes in the human visual cortex. J Neurophysiol 86: 1398–1411, 2001
  • Luck SJ, Vogel EK. The capacity of visual working memory for features and conjunctions. Nature 390: 279–281, 1997
  • McMains S, Kastner S. Defining the units of competition: influences of perceptual organization on competitive interactions in human visual cortex. J Cogn Neurosci 22: 2417–2426, 2010
  • McMains S, Kastner S. Interactions of top-down and bottom-up mechanisms in human visual cortex. J Neurosci 31: 587–597, 2011
  • Miller EK, Gochin PM, Gross CG. Suppression of visual responses of neurons in inferior temporal cortex of the awake macaque by addition of a second stimulus. Brain Res 616: 25–29, 1993
  • Moran J, Desimone R. Selective attention gates visual processing in the extrastriate cortex. Science 229: 782–784, 1985
  • Murray SO, Kersten D, Olshausen BA, Schrater P, Woods DL. Shape perception reduces activity in human primary visual cortex. Proc Natl Acad Sci USA 99: 15164–15169, 2002
  • Murray SO, Boyaci H, Kersten D. The representation of perceived angular size in human primary visual cortex. Nat Neurosci 9: 429–434, 2006
  • Oliva A, Torralba A. Modeling the shape of the scene: a holistic representation of the spatial envelope. Int J Comput Vis 42: 145–175, 2001
  • Parkes L, Lund J, Angelucci A, Solomon JA, Morgan M. Compulsory averaging of crowded orientation signals in human vision. Nat Neurosci 4: 739–744, 2001
  • Pashler H. The Psychology of Attention. Cambridge, MA: MIT Press, 1998
  • Potter MC. Short-term conceptual memory for pictures. J Exp Psychol Hum Learn 2: 509–522, 1976
  • Potter MC. Recognition and memory for briefly presented scenes. Front Psychol 3: 32, 2012
  • Pylyshyn ZW, Storm RW. Tracking multiple independent targets: evidence for a parallel tracking mechanism. Spat Vis 3: 179–197, 1988
  • Ress D, Backus BT, Heeger DJ. Activity in primary visual cortex predicts performance in a visual detection task. Nat Neurosci 3: 940–945, 2000
  • Reynolds JH, Chelazzi L, Desimone R. Competitive mechanisms subserve attention in macaque areas V2 and V4. J Neurosci 19: 1736–1753, 1999
  • Schwarzlose RF, Swisher JD, Dang S, Kanwisher N. The distribution of category and location information across object-selective regions of visual cortex. Proc Natl Acad Sci USA 105: 4447–4452, 2008
  • Shim WM, Vickery TJ, Alvarez GA, Jiang YV. The number of attentional foci and their precision are dissociated in the posterior parietal cortex. Cereb Cortex 20: 1341–1349, 2010
  • Smith FW, Muckli L. Nonstimulated early visual areas carry information about surrounding context. Proc Natl Acad Sci USA 107: 20099–20103, 2010
  • Somers DC, Dale AM, Seiffert AE, Tootell RB. Functional MRI reveals spatially specific attentional modulation in human primary visual cortex. Proc Natl Acad Sci USA 96: 1663–1668, 1999
  • Sweeny TD, Grabowecky M, Paller KA, Suzuki S. Within-hemifield perceptual averaging of facial expressions predicted by neural averaging. J Vis 9: 1–11, 2009
  • Sweeny TD, Grabowecky M, Suzuki S. Simultaneous shape repulsion and global assimilation in the perception of aspect ratio. J Vis 11: 1–16, 2011
  • Sweeny TD, Haroz S, Whitney D. Perceiving group behavior: sensitive ensemble coding mechanisms for biological motion of human crowds. J Exp Psychol Hum Percept Perform 39: 329–337, 2013
  • Wandell BA, Dumoulin SO, Brewer AA. Visual field maps in human cortex. Neuron 56: 366–383, 2007
  • Williams MA, Baker CI, Op de Beeck HP, Shim WM, Dang S, Triantafyllou C, Kanwisher N. Feedback of visual object information to foveal retinotopic cortex. Nat Neurosci 11: 1439–1445, 2008
  • Xu Y. The neural fate of task-irrelevant features in object-based processing. J Neurosci 30: 14020–14028, 2010
  • Zoccolan D, Cox DD, DiCarlo JJ. Multiple object response normalization in monkey inferotemporal cortex. J Neurosci 25: 8150–8164, 2005
