Artificial Intelligence is rapidly reshaping how information is gathered, analyzed, and acted upon in policing, military operations, and even the private sector. In this episode, we examine the practical implications of AI tools—especially large language models (LLMs)—and how they are already influencing intelligence analysis, operational planning, and day-to-day decision-making across our professions.
Mike and Jim explore the promise and limitations of human–machine teaming, the risks associated with data security and data poisoning, and how adversaries can exploit AI systems to manipulate information environments, accelerate decision cycles, and disrupt traditional OODA loops. As tacticians, we also have to consider the threats that come with AI teaming: deepfakes, synthetic media, and automated influence campaigns can distort perception and undermine trust during critical incidents or conflicts.
We focus on practical leadership considerations: when to trust automated tools, how to validate AI-generated information, and how organizations can integrate these technologies without surrendering judgment, sovereignty, or operational advantage. The ultimate goal is to help practitioners understand how AI changes the competitive landscape—and how professionals can adapt without becoming dependent on systems they don’t fully control.
Links:
Great article from Red Beard Tactical on how to use AI to write better OPORDs: https://www.patreon.com/posts/tactics-ai-opord-150850002
Like what we’re doing? Head over to Patreon and give us a buck for each new episode. You can also make a one-time contribution at GoFundMe.
Intro music credit: Bensound.com