Courses

We are pleased to offer seven in-person courses during the 2024 RAAINS Workshop, covering a wide variety of AI topics. When registering for the RAAINS Workshop, please specify your course preference(s) and we will do our best to accommodate them based on space and time availability. Descriptions of the available courses can be found below.

In addition, all RAAINS participants are highly encouraged to take our online course, Artificial Intelligence Foundations, prior to attending the RAAINS Workshop. More information on this course can be found below; details on the ConductorOS Developer Workshop are at the bottom of this page.

 

Virtual Course (optional prerequisite)

The Artificial Intelligence Foundations course provides valuable background for the in-person RAAINS courses and technical sessions. It is self-paced and available to all participants free of charge. The course contains nine modules that introduce learners to various concepts in the world of AI, including data requirements and conditioning, deep learning, computer vision, computing and hardware requirements, and human-machine teaming. To access the course, click on the Register button at the top right to create an account and complete the enrollment process (email verification is required; please use an email address that is not likely to block messages from [email protected]). For any questions, please reach out to [email protected].

 

In-Person Courses

AI System Architecture and Deployment Guidelines

Instructor: David R. Martinez, Laboratory Fellow, MIT Lincoln Laboratory

Duration: 4 hours

Course Format: In-person and Webcast

AI is revolutionizing many industries, including national security, energy, automotive, climate change, healthcare, and many others. But too often, organizations take a limited view of AI, focusing almost exclusively on machine learning (ML) methods. AI technologies are, in fact, key enablers of complex systems. They require not only ML technologies, but also trustworthy data sensors and sources, appropriate data conditioning processes, responsible governance frameworks, and a balance between human and machine interactions. In short, organizations must adopt a systems engineering mindset to optimize their AI investments. This course will equip professionals to lead, develop, and deploy AI systems in responsible ways that augment human capabilities. Taking a broader, holistic perspective, it emphasizes an AI systems architecture approach applied to products and services and provides techniques for transitioning from development into deployment. Upon completion of this program, participants will understand the AI fundamentals necessary to develop end-to-end systems, lead AI teams, and successfully deploy AI capabilities.

Those participating in this course in-person will receive a copy of the book, Artificial Intelligence: A Systems Approach from Architecture Principles to Deployment, co-authored by David R. Martinez and published by MIT Press.

 

A Practical Guide to Applied Generative AI

Instructors: Dr. Pooya Khorrami, Evan Young, Dr. Swaroop Vattam, Dr. Olga Simek, Trang Nguyen, Dr. Miriam Cha, Keegan Quigley

Duration: 4 hours

Course Format: In-person and Webcast

Generative AI is the key to unlocking the full potential of artificial intelligence, allowing machines to create new and original content. Within a short span, generative AI has changed the technology landscape and promises to unlock transformational use cases in commercial and national security arenas. This course will introduce participants to the basics of generative AI, including various types of models and how they work across different modalities. It will begin with participants learning about generative AI for image and video generation—specifically, the types of models/architectures used (e.g., generative adversarial networks [GANs], diffusion models), how the models are applied, and what areas they have impacted. Participants will also be introduced to large language models (LLMs), covering topics such as architecture and training of LLMs, how LLMs work in practice, and the risks and evaluation of LLMs. With a foundation in visual and textual generative modeling, participants will lastly be introduced to cross-modal generative AI with a look at the range of input and output modalities, common techniques for cross-modal generation, and a deeper dive into both image-to-image and image captioning models. Upon completion of this course, participants will move past buzz-worthy headlines to gain a deeper technical understanding of generative AI, discover applications in diverse domains, and become familiar with challenges and risks posed by this transformative technology.

 

Counter Influence Operations Using AI and Causal Inference

Instructors: Dr. Edward Kao and Dr. Erika Mackin

Duration: 2 hours

Course Format: In-person and Webcast

Effective monitoring of and response to disinformation campaigns waged by nation-states require accurate influence assessments and automated analytics on vast amounts of (social) media data. Lincoln Laboratory has developed a data-driven, human–machine teaming framework to address this technology need. This framework leverages: 1) recent advancements in AI models to rapidly discover influence operation narratives, and 2) a novel extension of causal inference on social networks to quantify the influence of actors and pathways between communities. These analytical products can inform mission planners of emerging threat narratives and prominent actors, as well as effective channels for counter messaging. This course will present and demonstrate: 1) novel applications of natural language processing techniques, with a focus on transformer models, to detect and summarize narrative content within a large text corpus, and 2) the technical foundation of network causal inference and its applications to provide more accurate measures of effectiveness (MOEs) in influence campaigns. Upon completion, participants will gain a technical understanding of, and exposure to real-world applications of, key technologies for counter-influence operations.

 

Artificial Intelligence for Cyber

Instructors: Jensen Dempsey, Dr. Timothy Reid, Dr. William Stephenson, Dr. Ashley Suh

Duration: 3 hours

Course Format: In-person and Webcast

Cyber security professionals are overburdened with repetitive and time-consuming tasks. Artificial intelligence (AI) systems show promise at automating and speeding up repetitive tasks in many industries; however, there are unique challenges in applying AI to cyber problems. These challenges can include combating adversarial attacks, ensuring data integrity, enabling real-time responsiveness, and facilitating human–machine collaboration while promoting trust in the AI. This course will begin with an overview of the potential capabilities and obstacles of using AI for the cyber domain. We will then provide insights into a selection of current state-of-the-art AI techniques, such as reinforcement learning (RL), large language models (LLMs), and explainable AI (XAI), that developers, evaluators, and cyber operators can use to augment current practices in cyber defense. Finally, we will solidify these concepts with a case study from the national security regime. Upon completion, participants will have knowledge of how state-of-the-art AI technologies are applied to cyber security problems and awareness of the inherent complexities associated with their integration. This course is not a deep dive into any particular technology; it is intended as a higher-level survey suitable for a general audience, familiarizing attendees with the need for AI in the cyber domain.

 

Test and Evaluation of AI Systems

Instructors: Dr. Michael Yee and Dr. Lei Hamilton

Duration: 4 hours

Course Format: In-person and Webcast

Artificial intelligence (AI) is powering advances in many different domains such as computer vision, natural language, and autonomous systems. Although AI systems (using deep neural networks) often surpass human-level performance, they have been found to be vulnerable to both naturally arising challenges and malicious attacks. For example, they can fail when the environment changes (e.g., when new objects are sensed under different lighting conditions) or when inputs are corrupted by noise. They are also susceptible to small intentional input modifications that drastically change system output despite being imperceptible to humans (known as digital “adversarial examples”) as well as their physical world analogues. Moreover, the confidence scores reported by AI systems along with their predictions can be poorly calibrated, leading to issues with user trust. Although many tools exist that can be used for various test and evaluation (T&E) tasks, integrating them into reliable T&E workflows can be challenging.
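To make the idea of adversarial examples concrete, here is a minimal sketch (with made-up numbers; this is an illustration, not material from the course) of how a small, deliberately chosen perturbation can flip a model's prediction while barely changing the input:

```python
# Toy illustration: a fixed two-feature linear classifier, and a small
# perturbation chosen to push the input against the decision boundary --
# the core idea behind digital "adversarial examples".

w = [1.0, -1.0]   # weights of a toy linear classifier (hypothetical)
b = 0.0           # bias term

def predict(x):
    """Return class 1 if the linear score w.x + b is positive, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

x = [0.1, 0.0]     # clean input, classified as class 1
eps = 0.2          # perturbation budget (small relative to the input)

# Step each feature against the weight direction (a sign-gradient step,
# in the spirit of the fast gradient sign method).
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print(predict(x))      # 1 on the clean input
print(predict(x_adv))  # 0 after a small, targeted change
```

Real attacks operate on deep networks and high-dimensional inputs such as images, where a perturbation of this relative size can be imperceptible to humans, but the mechanism is the same: a small step aligned with the model's sensitivities changes the output.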

Through a combination of lectures, Jupyter notebook walkthroughs, and discussions, this course will provide an introductory overview of core concepts in AI T&E. Topic areas will include: what can go wrong when applying trained models to new input data, what can go wrong with test data and the T&E process itself, and T&E tools available from the Joint AI Test Infrastructure Capability (JATIC) program. No prior experience is necessary, although basic familiarity with AI and Python will be helpful for understanding some of the course content. Upon completion, participants will be able to explain the importance of evaluating AI systems for vulnerabilities, describe common types of vulnerabilities and various methods for identifying them, and identify several tools for evaluating AI models and datasets. This understanding is critical for building robust AI systems that can be trusted to perform as expected upon deployment.

 

Introduction to Human-Machine Teaming for Systems Engineering

Instructors: Kimberlee Chang, Dr. Vincent Mancuso, Dr. Sarah McGuire

Duration: 4 hours

Course Format: In-person only

There is growing recognition of the importance of human–machine teaming (HMT) to enable effective AI/ML technologies as they are incorporated into increasingly critical systems. HMT bridges the gap between humans and AI to enable human and system partners to efficiently communicate, coordinate, and adapt in complex scenarios. In this course, we will provide an introduction to HMT as it relates to AI development and testing. We will provide an overview of research methods and theories that contribute to human–machine teams and provide resources to help better understand your problem space. We will also introduce the concept of human–AI teaming testbeds and the importance of applying multidimensional metrics that allow performance benchmarking and inform further development. Upon completion, participants will have an understanding of key HMT considerations when designing AI systems and methods that can be utilized during the design process. This course is designed for AI researchers and practitioners who are interested in applying HMT and related concepts to their work, and is intended for people with limited background in areas such as human factors, psychology, or human-centered design.

 

Ethical and Responsible AI in 2024: Definitions, Expectations, Standards, Techniques, and Failures

Instructors: Ngaire Underhill and Isabelle Hurley

Duration: 3 hours

Course Format: In-person and Webcast

Machine learning (ML) and artificial intelligence (AI) systems are increasingly being applied to national security problems. However, many programs focus on maximizing an AI model’s accuracy. This singular measure of success is insufficient, as many AI programs have found when their resulting model is biased, unfair, unexplainable, invalid, or unusable. As AI/ML is continually improved, expanded, and implemented, the standards for what is expected and what is possible also evolve. This course provides a condensed and accelerated curriculum for participants to gain a foundational understanding of ethical and responsible AI, understand common AI challenges and resulting outcomes, and gain the tools, techniques, and know-how to apply this knowledge to their own programs. Real-world examples will be used heavily throughout. The course will cover:

  1. A comprehensive introduction to ethical and responsible AI
  2. Early critical considerations for the design of AI systems
  3. Responsibilities, tools, and techniques for PMs, developers, and users
  4. Common AI myths, misconceptions, and impacts
  5. Current ongoing and unresolved ethical and responsible AI challenges

The course will have several opportunities for Q&A. The end of the course will feature in-depth quick-looks into a participant-selected set of available AI subtopics.

Upon completion, participants will have an understanding of ethical and responsible AI; early, critical AI design considerations; common AI challenges, misconceptions, problems, and failures; and the available tools and techniques to assist them in their own AI development.

 

ConductorOS Developer Workshop: Wednesday 20 November

Operationalizing AI at the Edge (Limited Seating)

Instructors: Tori McCaffrey (BigBear.ai), Chris Wilhite (BigBear.ai), Arnav Kaul (BigBear.ai), Grace Williamson (BigBear.ai)

Duration: 6 hours

Course Format: In-person only

In this hands-on workshop, participants will gain deep technical insight into ConductorOS (cOS), a cutting-edge MLOps technology designed for deploying and managing machine learning algorithms and data across heterogeneous edge environments.

Over the course of the six-hour workshop, attendees will leverage cOS to deploy and manage machine learning models in real time. Key topics covered in the workshop will include:

  • Overview of ConductorOS: Setup, features, core functionalities, and best practices
  • Managing machine learning models and data on edge devices using cOS
  • Best practices for monitoring, scaling, and securing machine learning models in dynamic edge environments

If interested in participating in this course, please contact [email protected].