Deploying multiple autonomous systems that coordinate as a cohesive swarm on the battlefield is no longer science fiction. As new technologies disrupt the character of war, the American military is investing in algorithms that allow its drone forces to conduct swarm tactics across all domains. However, the current frameworks in development for drone swarm tactics rely on centralized control. These frameworks limit the speed and flexibility of the swarm by depending on perfect communication and by overtasking the centralized human controller. To overcome these limitations, the American Way of War must adapt: the military should explore novel strategic frameworks that can rapidly train drone algorithms for effective decentralized execution, thereby rebalancing the workload of the resulting human-autonomy teams. This thesis proposes that training decentralized swarming algorithms through the synergy of wargames and machine learning techniques provides a powerful framework for optimizing drone decision making. The research uses a genetic algorithm that iteratively plays a base-defense wargame to train the local drone interaction rules of a decentralized swarm so that a desired global behavior emerges. The results show a 78.82% reduction in average base damage (p < 0.001) when comparing the mission effectiveness of the pre-trained and post-trained defensive drone swarm against a baseline adversary.
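The training loop described above can be sketched with a minimal genetic algorithm. This is an illustrative sketch only: `simulate_wargame` is a hypothetical stand-in for the base-defense wargame, and the population size, rule encoding, and operator rates are assumptions, not the thesis's actual parameters.

```python
import random

def simulate_wargame(rules):
    """Hypothetical stand-in for the base-defense wargame: returns base damage.
    The thesis plays a full wargame; here a toy quadratic loss is used so the
    loop is runnable, with a notional optimum at weight 0.5 for every rule."""
    return sum((w - 0.5) ** 2 for w in rules)

def evolve(pop_size=20, n_rules=4, generations=30, mutation_rate=0.1):
    """Evolve local interaction-rule weights that minimize base damage
    over repeated plays of the (simulated) wargame."""
    # Each individual is one candidate set of local interaction-rule weights.
    pop = [[random.random() for _ in range(n_rules)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate_wargame)          # lower damage = fitter
        survivors = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_rules)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:  # point mutation
                child[random.randrange(n_rules)] = random.random()
            children.append(child)
        pop = survivors + children
    return min(pop, key=simulate_wargame)       # best-trained rule set

best = evolve()
print(simulate_wargame(best))  # post-training base damage
```

Because the fittest individuals always survive to the next generation, the best observed base damage is non-increasing across generations, mirroring the pre-trained versus post-trained comparison reported in the results.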