We are pleased to offer five in-person courses (three of which have a virtual option) and one virtual course during the 2025 RAAINS Workshop, covering a wide variety of AI topics. When registering for the RAAINS Workshop, please specify your course preference(s) and we will do our best to accommodate them based on space and time availability. Descriptions of the available courses can be found below.
In-Person Courses
AI System Architecture and Deployment Guidelines
Instructor: David R. Martinez
Duration: 4 hours
Course Format: In-person and Webcast
AI is revolutionizing many sectors, including national security, energy, automotive, healthcare, and climate science. But too often, organizations take a limited view of AI, focusing almost exclusively on machine learning (ML) methods. AI technologies are, in fact, key enablers of complex systems. They require not only ML technologies, but also trustworthy data sensors and sources, appropriate data conditioning processes, responsible governance frameworks, and a balance between human and machine interactions. In short, organizations must adopt a systems engineering mindset to optimize their AI investments. This course will equip professionals to lead, develop, and deploy AI systems in responsible ways that augment human capabilities. Taking a broader, holistic perspective, it emphasizes an AI systems architecture approach applied to products and services and provides techniques for transitioning from development into deployment. The course will also provide a background on Large Language Models (LLMs) and Agentic AI. Upon completion of this course, participants will understand the AI fundamentals necessary to develop end-to-end systems, lead AI teams, and deploy AI capabilities successfully.
Those participating in this course in-person will receive a copy of the book, Artificial Intelligence: A Systems Approach from Architecture Principles to Deployment, co-authored by David R. Martinez and published by MIT Press.
Safe Reinforcement Learning for Autonomous Systems
Instructor: Dr. Trevor T. Ashley
Duration: 2 hours
Course Format: In-person only
Deep reinforcement learning (DRL) has gained remarkable popularity over the past decade, driven by its impressive successes across multiplayer games and decision-making tasks. However, successfully deploying these methods in real-world autonomous systems remains challenging, as they often lack formal guarantees on critical properties such as stability, safety, and collision avoidance. To address this gap, researchers have begun developing techniques that provide formal certificates for the policies that DRL produces. These advances are making it possible for autonomous systems to exhibit sophisticated behaviors while still operating safely.
This course is divided into two parts. The first half will survey the state of the art in safe DRL, highlighting both current achievements and open research challenges. The second half will feature a hands-on tutorial demonstrating how control barrier functions can enforce safety guarantees in practice. Upon completion, participants will have gained theoretical insights and practical skills for applying deep reinforcement learning to robotic systems in a safety-critical context.
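To give a flavor of the hands-on portion, the sketch below shows the core idea behind a control-barrier-function (CBF) safety filter: project the RL policy's proposed action onto the safe set before it reaches the actuators. This is a minimal illustration rather than course material; it assumes a single-integrator robot, one circular keep-out region, and the standard single-constraint CBF quadratic program, which has a closed-form solution.

```python
import numpy as np

def cbf_safety_filter(x, u_rl, x_obs, radius, alpha=1.0):
    """Project an RL action onto the safe set defined by a control
    barrier function, for single-integrator dynamics x_dot = u.

    Barrier: h(x) = ||x - x_obs||^2 - radius^2 >= 0 (outside obstacle).
    Safety condition: h_dot + alpha * h >= 0, i.e. a^T u + b >= 0
    with a = 2 * (x - x_obs) and b = alpha * h(x).
    """
    a = 2.0 * (x - x_obs)                          # gradient of h at x
    b = alpha * ((x - x_obs) @ (x - x_obs) - radius**2)
    margin = a @ u_rl + b
    if margin >= 0.0:
        return u_rl                                # nominal action is safe
    # Closed-form solution of min ||u - u_rl||^2 s.t. a^T u + b >= 0:
    # shift u_rl just enough to land on the constraint boundary.
    return u_rl - (margin / (a @ a)) * a

# The RL policy drives straight at an obstacle; the filter slows the
# approach so the safety condition holds (corrected action: [0.75, 0.]).
x = np.array([0.0, 0.0])
u_safe = cbf_safety_filter(x, u_rl=np.array([1.0, 0.0]),
                           x_obs=np.array([2.0, 0.0]), radius=1.0)
print(u_safe)
```

In practice, multiple constraints and control-affine dynamics turn this projection into a small quadratic program solved at every control step, which is the setting the tutorial addresses.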
Operator-Focused Explainable AI: From Concept to Comparison to Deployment
Instructors: Ngaire Underhill, Izzy Hurley, Melanie Platt, Harry Li, Dr. Erfaun Noorani, Daniel Stabile, Dr. Edward Kao
Duration: 3 hours
Course Format: In-person and Webcast
Explainability is critical throughout an AI system's life cycle: in the earliest beta implementations, it helps ensure outcomes are calculated correctly; in operational deployment, it enables users to check determinations, identify prediction factors, and audit outcomes. This course explores the concepts of transparency, traceability, interpretability, and explainability, along with tools designed to provide insight into the inner workings of AI models. These tools assist users in evaluating outcomes by identifying when the AI may be wrong (outliers), when outcomes may need adjustment (exceptions), and when implementations may no longer align with the problem (deprecation).
Upon completion of this course, participants will understand the core principles of Explainable AI (XAI), including the distinctions between transparency, interpretability, and explainability, and their importance for both AI operators and users. The course will cover major AI model types (e.g., black-box vs. interpretable), key XAI methods and concepts (e.g., global vs. local explanations), and essential XAI tools (e.g., LIME, SHAP, and “inherently” interpretable models). Through examples and best practices, attendees will evaluate the quality and effectiveness of AI explanations, with special consideration of the end user's task and context. This course is tailored for professionals, researchers, and decision-makers focused on designing, implementing, or deploying AI systems with transparency and usability in mind.
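As a taste of the tooling the course covers, the sketch below produces a local explanation with SHAP for a scikit-learn model. It is an illustrative example rather than course material; the dataset and model are assumptions chosen for brevity.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a black-box model on a standard tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles: each value is
# one feature's additive contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

# A "local" explanation: per-feature contributions for a single prediction.
for name, contribution in zip(X.columns, shap_values[0]):
    print(f"{name:>5s}  {contribution:+7.2f}")
```

Aggregating these per-prediction values across a dataset yields a global explanation, one of the distinctions the course draws out.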
The course will also feature a series of talks demonstrating XAI techniques applied to current challenges across a variety of domains. These detailed examples illustrate real-world AI implementations and their associated XAI challenges and solutions. The featured talks include:
- “Robust Counterfactual Explanations for Neural Networks with Probabilistic Guarantees” - Dr. Erfaun Noorani
- “AI-Assisted Causal Explanations of Network Processes” - Dr. Edward Kao
- “Human-Agent Teaming Capabilities for Cyber Operations” - Harry Li
- “Agentic Drone Mission Planning with AI and Human Review” - Daniel Stabile
Note that this course does not cover production-level engineering, advanced mathematical theory, or custom AI model architecture design.
Introduction to Human-AI Integration: Core Concepts and Practical Considerations
Instructors: Dr. Vincent Mancuso, Dr. Sarah McGuire
Duration: 4 hours
Course Format: In-person only
As artificial intelligence and machine learning (AI/ML) technologies become increasingly embedded in critical systems and mission-oriented operations, the need for effective, robust, and safe Human-AI Integration (HAI) is more evident than ever. This four-hour workshop introduces the core concepts, methods, and practical considerations of HAI as they relate to integrating AI into national security applications. We will provide a brief overview of the theories and methodologies that underpin successful human-AI integration, along with resources for better understanding integration needs. We will also introduce Human-AI Interfaces, the technology through which humans and AI collaborate. To demonstrate these areas, MIT LL researchers will give a series of lightning talks on how they have applied Human-AI Integration for national security sponsors. Upon completion, participants will understand the key HAI considerations in designing AI systems, along with methods they can apply during the design process. This introductory workshop is intended for AI researchers, data scientists, systems engineers, and technology practitioners interested in applying HAI and related concepts to their work. No prior background in human factors, psychology, or human-centered design is required; essential principles will be introduced and reinforced throughout the session.
A Practical Guide to Applied Generative AI
Instructors: Dr. Pooya Khorrami, Evan Young, Dr. Charlie Dagli, Kenneth Alperin, Trang Nguyen, Ashok Kumar
Duration: 4 hours
Course Format: In-person and Webcast
Generative AI allows machines to create new and original content, and within a short span it has changed the technology landscape, promising transformational use cases in commercial and national security arenas. This course will introduce participants to the basics of Generative AI, including the various types of models and how they work across different modalities. Participants will first learn about Generative AI for image and video generation: the types of models and architectures used (e.g., diffusion models), how the models are applied, and the areas they have impacted. Participants will then be introduced to Large Language Models (LLMs), covering topics such as LLM architecture and training, how LLMs work in practice, and their application to real-world cybersecurity scenarios. The final section covers emerging agentic frameworks that leverage these generative technologies to perform sophisticated tasks. Upon completion of this course, participants will move past buzz-worthy headlines to gain a deeper technical understanding of Generative AI, discover applications across diverse domains, and become familiar with the challenges and risks posed by this transformative technology.
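For a sense of how approachable these models have become, the sketch below generates text with a small open LLM through the Hugging Face transformers pipeline. It is an illustrative example rather than course material; the model choice and sampling settings are assumptions.

```python
from transformers import pipeline

# Load a small open checkpoint; any causal language model works the same way.
generator = pipeline("text-generation", model="gpt2")

# Autoregressive generation: the model repeatedly predicts the next token
# conditioned on the prompt plus everything generated so far.
result = generator(
    "In a security operations center, a language model can help analysts",
    max_new_tokens=40,
    do_sample=True,     # sample from the predicted token distribution
    temperature=0.8,    # lower values make the output more deterministic
)
print(result[0]["generated_text"])
```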
Virtual Course
Hands-On Amazon Web Services (AWS) for AI
Instructor: Brian McCarthy
Duration: 4 hours
Course Format: Webcast
Join us for this four-hour, hands-on training on a unified data and AI development platform that streamlines machine learning model development, generative AI applications, and secure data processing in one integrated environment. Through live demonstrations and practical workshops, research and IT professionals will learn to build custom AI models, deploy intelligent applications with foundation models, and create secure collaborative research workflows while maintaining defense-grade security standards. Participants will gain practical experience developing end-to-end AI/ML solutions, from data preparation through model deployment, enabling faster research timelines without compromising data protection or compliance requirements.
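As a preview of the kind of workflow the training covers, the sketch below uses boto3 to call a model that has already been deployed behind an Amazon SageMaker endpoint. It is a minimal illustration rather than course material; the endpoint name and the request payload are hypothetical and depend on the model you deploy.

```python
import json
import boto3

# Client for invoking models already deployed to SageMaker endpoints.
runtime = boto3.client("sagemaker-runtime")

# The endpoint name and JSON schema below are placeholders; substitute the
# name and input format of your own deployed model.
response = runtime.invoke_endpoint(
    EndpointName="my-research-endpoint",
    ContentType="application/json",
    Body=json.dumps({"inputs": "Summarize this dataset in one sentence."}),
)
print(json.loads(response["Body"].read()))
```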
