RAAINS Courses 2022

This year the RAAINS Courses will include Robust and Explainable AI for National Security, Design Assurance for Autonomous Systems, Human–Machine Teaming for Systems Engineering, Early Critical Considerations of Ethical AI Systems, and AI for Advanced Cyber Capability Development. Each course blends online preparation material with a two-hour live virtual session led by Lincoln Laboratory staff, and the sessions will be held on Tuesday, November 8; Wednesday, November 9; and Thursday, November 10, 2022.

 

Title: Human–Machine Teaming for Systems Engineering

Instructors: Dr. Sarah McGuire, Ms. Kimberlee Chang, and Dr. Vincent Mancuso, MIT LL 

Description: In this course, we will provide an overview of human–machine teaming (HMT) as it relates to AI development and testing. We will discuss the concept of HMT and how to select the right level of autonomy for the intended application. We will also introduce the concept of human–AI teaming testbeds and the importance of applying multidimensional metrics that allow performance benchmarking and inform further development. We will then discuss both established and emerging methods for obtaining those metrics. 

Participants: 100 

Time: Tuesday, 8 November, 10:00 a.m. – 12:00 p.m.

 

Title: Robust and Explainable AI for National Security

Instructors: Mr. Andrew Curtis and Mr. Adam Kern 

Description: As machine learning (ML) and artificial intelligence (AI) systems are increasingly applied to national security problems, it is vital to consider whether these systems will behave as expected in operational settings. These robustness concerns may include issues such as changes in environment, degradation or loss of sensors, or threat uncertainty. In addition, these systems must be explainable to both leaders and warfighters in order to engender trust in the system outputs and facilitate its use in uncertain environments. This course will provide an overview of potential robustness and interpretability issues for ML and AI systems and discuss methods for dealing with and mitigating these challenges. A core focus of the course will be on case studies that provide insight into some of these challenges. Participants will consider the design, evaluation, and operation of AI/ML systems through the use of these case studies, working in groups to generate critical questions that need to be addressed for successful use of the proposed systems. 

Participants: 40–50 

Time: Tuesday, 8 November, 2:00 – 4:00 p.m.

 

Title: AI for Advanced Cyber Capability Development 

Instructor: Dr. Dennis Ross 

Description: Cybersecurity and cyber operations are critical components of our national security posture. In an increasingly interconnected, digital age, cyber threats can quickly outpace traditional approaches to data security, which are often inundated with false alarms or unable to recognize new attacks. Organizations need to be able to evolve and adapt to this changing landscape. While AI and ML have shown an astounding ability to automatically analyze and classify large amounts of data in complex scenarios, these techniques are still not widely adopted in real-world security settings, especially in cyber systems.

The course will address several of these technologies and their applications in security, such as machine learning, natural language processing, knowledge representation, automated and assistive reasoning, and human–machine interaction. Attendees will also participate in a guided brainstorming session to develop an AI cyber prototype plan. 

Participants: 30

Time: Wednesday, 9 November, 10:00 a.m. – 12:00 p.m.

 

Title: Early Critical Considerations of Ethical AI Systems 

Instructors: Ms. Ngaire Underhill and Ms. Consuelo Cuevas 

Description: Machine learning (ML) and artificial intelligence (AI) systems are increasingly applied to national security problems. Advance, careful, and robust consideration of AI/ML design and implementation, however, is critical. These considerations include the potential misuse of AI, AI system deprecation, AI design nuances, and the role of human–machine teaming in the AI pipeline. Strategic thought about these issues during the inception and design phase of an AI/ML system will save resources, time, and energy, and enable the creation of a more accurate and well-designed system. Disregarding these critical design questions can lead not only to mission failure but also to unintended and undesired moral, legal, and financial consequences.

This course will walk participants through a guided example in which these types of strategic questions regarding system design and implementation are identified, discussed, and evaluated. Course participants will then be placed into teams and challenged to perform the same exercise on a variety of potential AI/ML applications across different domains. Additionally, participants will strategically identify, articulate, and address the potential impacts and considerations required for the AI/ML system that their case study targets. Upon completion, participants will have experience in thinking strategically about the design and implementation of AI/ML systems. 

Participants: 60

Time: Wednesday, 9 November, 2:00 – 4:00 p.m.

 

Title: Design Assurance for Autonomous Systems 

Instructors: Dr. Kevin Leahy, Mr. Andrew Heier, and Dr. Makai Mann 

Description: This course focuses on design principles for assured autonomy. It will cover challenges to assured autonomy, how to anticipate specific challenges, and how to design a system to mitigate their impact. The course will discuss design paradigms for developing requirements for assured autonomy, as well as for verifying that a system meets those requirements, including state-of-the-art tools from formal methods for autonomous systems. The course will be anchored by a case study. 

Participants: 20

Time: Thursday, 10 November, 10:00 a.m. – 12:00 p.m.