Due to their negligible cost, small energy footprint, compact size and passive nature, cameras are emerging as one of the most appealing sensing modalities for the realization of fully autonomous intelligent mobile platforms. In defence contexts, passive sensors such as cameras represent an important asset due to the absence of a detectable external operational signature, with at most some radiation generated by their internal components. This characteristic, however, makes targeting them a daunting task, as their active neutralization requires hitting a target of small angular diameter moving at high speed. In this paper we introduce an interpretational countermeasure acting against autonomous platforms relying on feature-based optical workflows. We classify our approach as an interpretational disruption because it exploits the heuristics of the model used by the on-board artificial intelligence to interpret the available data. To avoid the need to accurately pinpoint such an imperceptible target, our approach consists of passively corrupting, from a perception point of view, the whole environment with a small, sparse set of physical observables. The concrete design of these systems is derived from the response of a feature detector of interest. We define an optical attractor as the collection of pixels inducing an exceptionally strong response in a target feature detector. We also define a physical object inducing these pixel structures for defence purposes as a CLOAK: Countermeasure Leveraging Optical Attractor Kits. Using optical attractors, any optics-based algorithm relying on feature extraction can potentially be disrupted, in a completely passive and non-destructive fashion.
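To make the notion of an "exceptionally strong response" concrete, the following sketch uses a Harris corner detector as a hypothetical stand-in for the unspecified target detector (the paper does not fix a particular one): it computes the detector's per-pixel response over an image and flags the pixels whose response falls in the top quantile, i.e. candidate optical-attractor locations. The function names and the quantile threshold are illustrative choices, not part of the method as stated.

```python
import numpy as np

def harris_response(img, k=0.05, radius=2):
    """Per-pixel Harris corner response R = det(M) - k * trace(M)^2,
    where M is the box-smoothed structure tensor of the image."""
    img = img.astype(float)
    # Image gradients via central differences.
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)

    def box_smooth(a):
        # Simple box filter over a (2*radius+1)^2 window.
        h, w = a.shape
        p = np.pad(a, radius, mode="edge")
        out = np.zeros_like(a)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += p[radius + dy:radius + dy + h,
                         radius + dx:radius + dx + w]
        return out / (2 * radius + 1) ** 2

    # Structure-tensor entries, locally averaged.
    Sxx = box_smooth(Ix * Ix)
    Syy = box_smooth(Iy * Iy)
    Sxy = box_smooth(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

def attractor_candidates(img, quantile=0.999):
    """Boolean mask of pixels whose detector response is in the top quantile:
    the 'collection of pixels inducing an exceptionally strong response'."""
    R = harris_response(img)
    return R >= np.quantile(R, quantile)

# Usage: a synthetic frame containing one bright square; the square's
# corners dominate the Harris response and surface as attractor candidates.
frame = np.zeros((64, 64))
frame[20:40, 20:40] = 1.0
mask = attractor_candidates(frame)
```

In an actual CLOAK design the detector would be the one used by the targeted perception pipeline, and the goal is the inverse problem: shaping a physical observable so that its image projects onto such high-response pixel structures.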