Machine Learning (ML) has an increasing role within many mission areas across the Laboratory. Yet, it remains to be seen how robust and secure these algorithms are against inputs that are intentionally designed to cause an ML model to make a mistake, i.e., adversarial examples. Many potential issues arise from the existence of adversarial examples. For instance, an adversary could bias the training data of an Automatic Target Recognition (ATR) system via data poisoning, or attack a cybersecurity system by inserting malicious content that appears legitimate. Building a framework to evaluate and build robustness into ML algorithms will become increasingly important as the USG invests in new capabilities. The unique set of mission areas across the Laboratory offers a set of challenging problems for generating attacks; namely, the adversary is often limited in its ability to fully understand the defense's capabilities. Yet, the adversary may have a good understanding of the training data used for any given ML model; e.g., for ATR radar systems such as those needed for Ballistic Missile Defense, the adversary controls what the defense observes during system capability tests, opening the door to data poisoning attacks. This project aims to adapt and expand on existing approaches to effectively red team adversarial attacks for evaluating the robustness of Laboratory ML algorithms, while also developing techniques to build resiliency into the models.

Radio frequency (RF) sensors are used alongside other sensing modalities to provide rich representations of the world. Given the high variability of complex-valued target responses, RF systems are susceptible to attacks that mask true target characteristics and prevent accurate identification. In this work, we evaluate different techniques for building robust classification architectures that exploit learned physical structure in received synthetic aperture radar signals of simulated 3D targets.
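To make the notion of an adversarial example concrete, the sketch below applies the Fast Gradient Sign Method (FGSM), a standard attack from the literature, to a toy logistic-regression classifier. This is purely illustrative and is not the project's attack methodology; the weights, input, and perturbation budget `eps` are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    For p = sigmoid(w.x + b) and cross-entropy loss, the gradient of the
    loss w.r.t. the input x is (p - y) * w. FGSM perturbs x one eps-sized
    step in the sign direction of that gradient, increasing the loss.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and a confidently classified clean input (all values invented).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, -1.0])  # clean input with true label y = 1
y = 1.0

clean_pred = sigmoid(w @ x + b) > 0.5      # classified as class 1
x_adv = fgsm_attack(x, y, w, b, eps=2.0)   # adversarially perturbed input
adv_pred = sigmoid(w @ x_adv + b) > 0.5    # prediction flips to class 0
```

The same gradient-sign principle scales to deep networks, where the gradient with respect to the input is obtained by backpropagation; defenses such as adversarial training fold examples like `x_adv` back into the training set.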