Visual perception must balance two competing goals: invariance and sensitivity. One can recognize a bird despite significant variation in its pose, color, or texture, yet can also describe those identity-orthogonal features. How do our brains achieve this balance? We test a theory that the brain learns an equivariant representation of objects, in which identity-preserving transformations, specifically 3D rotation, are encoded by a common, predictable transformation of the neural response. Using a stimulus set of 3D objects rendered from spherically sampled viewpoints, we develop a metric to assess rotational equivariance and apply it to neural activity from primate inferior temporal (IT) cortex as well as to features from ImageNet-trained deep neural networks. Although category and identity information are evident in IT cortical responses, evidence for rotational equivariance is weak. We find an optimal subspace of IT cortex that possesses more equivariance than would be expected by chance, but no more than a deep neural network model or the pixel space of the images. Our results indicate that IT cortex lacks rotation-equivariant representations and suggest the need to explore cortical systems downstream of IT that may serve as the basis for equivariant visual object perception.
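
The abstract does not spell out the metric itself, but the core idea, that a rotation of the object should correspond to a common, predictable transformation of the response vector, can be illustrated with a minimal sketch. Below, we fit a single linear map shared across training objects that carries responses at one viewpoint to the next, then score how well it transfers to held-out objects. All names and shapes (`responses`, `n_objects`, `n_views`, `n_units`) are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_objects, n_views, n_units = 20, 8, 100

# Placeholder data: in practice, substitute IT responses or DNN features,
# indexed as responses[object, viewpoint] -> (n_units,) vector.
responses = rng.standard_normal((n_objects, n_views, n_units))

def equivariance_score(responses, train_frac=0.5):
    """Fit one linear map M taking responses at view v to view v+1,
    shared across training objects, and report held-out-object R^2.
    A more equivariant code yields a higher transfer score."""
    n_obj = responses.shape[0]
    split = int(n_obj * train_frac)
    train, test = responses[:split], responses[split:]

    # Stack (source, target) pairs over consecutive viewpoints.
    X = train[:, :-1].reshape(-1, train.shape[-1])  # responses at view v
    Y = train[:, 1:].reshape(-1, train.shape[-1])   # responses at view v+1
    M, *_ = lstsq(X, Y, rcond=None)                 # common transform

    # Evaluate the same transform on objects never seen during fitting.
    Xt = test[:, :-1].reshape(-1, test.shape[-1])
    Yt = test[:, 1:].reshape(-1, test.shape[-1])
    pred = Xt @ M
    ss_res = np.sum((Yt - pred) ** 2)
    ss_tot = np.sum((Yt - Yt.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

print(f"equivariance R^2 (held-out objects): {equivariance_score(responses):.3f}")
```

On random data this score hovers near or below zero; comparing it against shuffled controls and against the same score computed on pixel space or DNN features mirrors the baselines described in the abstract.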