Crowding is the inability to recognize peripheral objects in clutter, and is usually considered a fundamental low-level bottleneck to object recognition. Here we advance and test an alternative hypothesis: that crowding, like “serial dependence”, results from optimizing strategies that exploit redundancies in natural scenes. This notion leads to several testable predictions: (1) crowding should be greatest for unreliable targets and reliable flankers; (2) crowding-induced biases should be maximal when target and flankers have similar orientations, falling off for orientation differences beyond about 20°; (3) flanker interference should be associated with higher precision in orientation judgements, leading to a lower overall error rate; (4) effects should be maximal when the orientation of the target is near the average orientation of the flankers, rather than that of individual flankers. All these predictions were verified, and well simulated with ideal-observer models that maximize performance. The results suggest that while crowding can strongly affect object recognition, it is best understood not as a processing bottleneck, but as a consequence of efficient exploitation of the spatial redundancies of the natural world.
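The intuition behind prediction (1) can be sketched with a minimal precision-weighted (inverse-variance) averaging model, a standard form of optimal cue combination; this is an illustrative sketch under that assumption, not the specific ideal-observer model used in the study, and the function name and parameter values are hypothetical:

```python
def precision_weighted_estimate(target_ori, flanker_ori, target_sd, flanker_sd):
    """Combine the target orientation with the mean flanker orientation,
    weighting each cue by its reliability (inverse variance).
    Orientations and standard deviations are in degrees."""
    w_t = 1.0 / target_sd ** 2   # reliability of the target signal
    w_f = 1.0 / flanker_sd ** 2  # reliability of the flanker signal
    return (w_t * target_ori + w_f * flanker_ori) / (w_t + w_f)

# Unreliable target (sd = 10 deg), reliable flankers (sd = 2 deg):
# the combined estimate is pulled strongly toward the flankers.
biased = precision_weighted_estimate(0.0, 15.0, 10.0, 2.0)

# Reliable target, unreliable flankers: little flanker-induced bias.
unbiased = precision_weighted_estimate(0.0, 15.0, 2.0, 10.0)
```

Under this scheme, the same combination rule that reduces variance in cluttered natural scenes produces large biases (crowding) exactly when the target signal is noisy and the flankers are reliable, consistent with the first prediction.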