AI-Guided Vascular Access for the Battlefield
Three applications within this broad biomedical domain will be highlighted: axon tracing in microscopy of tissue-cleared brain samples, liver fibrosis staging based on ultrasound shear-wave elastography, and semi-automated vascular access based on portable ultrasound. For the brain image processing, the need is to process large-scale, high-resolution image volumes automatically. A 3D CNN has been used to process 250 GB or more of axon data on the Lincoln Laboratory Supercomputing Center, and its performance is being compared to that of manual tracing. For liver fibrosis staging, the need is to reduce the subjective variability in manual image acquisition and interpretation and to improve accuracy. Deep learning has been used to significantly improve specificity, from 5% for a baseline manual method to 71%, at a sensitivity of 95%. For semi-automated vascular access, the need is to detect the femoral vein and artery automatically with near-perfect accuracy in order to aid a non-expert. Deep learning has resulted in 90% recall with 100% precision, with further improvements underway. Future work to broaden these applications will also be outlined.
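The recall and precision figures quoted above follow directly from counts of correct, spurious, and missed detections; a minimal sketch (the function and the counts below are illustrative, not from the actual system):

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Compute precision and recall from detection counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Example: 90 correct vessel detections, 0 spurious, 10 missed
p, r = precision_recall(90, 0, 10)
print(p, r)  # 1.0 0.9
```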
Applications of Model Predictive Control
We will present a conceptual overview of model predictive control (MPC), emphasizing how MPC makes an autonomous or AI agent's reasoning less myopic and more robust. National defense case studies are highlighted (with video demonstrations). Attendees will acquire a clear picture of the level of autonomy needed by cutting-edge defense CONOPS and learn how MPC is being employed to meet those needs.
Assuring Tactile Intelligence in Motion
Intelligent robotic manipulators have the potential to transform multiple DoD operations, such as explosive ordnance disposal (EOD) and search-and-rescue missions. We describe advances in runtime monitoring and imitation learning developed to improve the reliability of mobile manipulation platforms. These results include a fully autonomous demonstration in which a mobile robotic manipulator uses logic-based value iteration networks (LVINs) to accomplish complex behaviors, such as object retrieval from closed containers. The developmental robotic testbed (a Pioneer 3 base + Jaco2 arm) will also be present for demonstration. Attendees will be invited to try a seemingly simple action, picking up and placing an object, in order to experience the task's unseen complexities.
Augmented Annotation—Crowdless Training Data for Machine Learning
The Augmented Annotation System applies computer vision and machine learning to automate the manual process of generating annotated datasets for machine learning applications, producing these datasets 25 times faster than current methods.
A Brain–Computer Interface for Hearing Enhancement
Hearing-impaired and normal-hearing individuals alike are confronted daily with the complex auditory challenge of separating out one source from among many distractors. This research describes the development of a brain–computer interface to control a hearing assistive device for such situations. We leverage artificial intelligence to decode, from brain signals, the acoustic source to which a person wishes to attend.
DARPA-D3M Benchmark Corpus for AutoML
AutoML technology promises methods and tools that make machine learning accessible to non-experts by automating the difficult parts of building a machine-learning pipeline from scratch. To aid the development of robust and flexible AutoML frameworks, there is a need for a benchmark corpus that is comprehensive, diverse, and representative of real-world domains. MIT Lincoln Laboratory, through a DARPA-sponsored research initiative, is leading an effort to build such an enabling data infrastructure. In this poster, we present details of this data infrastructure.
Decentralized Multi-Agent Coordination
Teams of autonomous systems are being developed for both military and non-military use and require the ability to cooperate in uncertain, complex, and dynamic environments. Decentralized control is critical to maintaining performance in communication-constrained or -contested environments. A significant hurdle to realizing teams of autonomous systems that can operate in these environments is the lack of coordination methods that reason about uncertainty, communication limitations, and adversarial opposition, and that also scale to problems of national interest. This research is developing principled methods for multi-robot coordination. We use a generic model that represents multi-robot cooperative planning in the presence of stochasticity, uncertain sensing, and communication limitations. Using a generic model enables the framework to be applied to a wide variety of cooperative multi-robot problems. Our specific contributions include developing advanced methods of coordination that scale better to large problems than previous approaches and demonstrating these methods on problems of national interest.
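The generic cooperative planning model described above is commonly formalized as a decentralized POMDP; the sketch below shows one plausible way to represent its components (all field names and toy values are assumptions for illustration, not the Laboratory's actual framework):

```python
from dataclasses import dataclass
from typing import Callable, Dict, Sequence

@dataclass
class DecPOMDP:
    """Generic multi-robot cooperative planning model capturing stochastic
    dynamics, uncertain sensing, and limited communication."""
    agents: Sequence[str]        # robot identifiers
    states: Sequence[str]        # joint environment states
    actions: Dict[str, list]     # agent -> available actions
    observations: Dict[str, list]  # agent -> possible (noisy) observations
    transition: Callable         # P(s' | s, joint_action)
    observation_fn: Callable     # P(o_i | s', joint_action)
    reward: Callable             # shared team reward R(s, joint_action)
    discount: float = 0.95

# Toy two-robot instance: uniform dynamics, noisy obstacle sensing.
model = DecPOMDP(
    agents=["robot1", "robot2"],
    states=["s0", "s1"],
    actions={"robot1": ["move", "wait"], "robot2": ["move", "wait"]},
    observations={"robot1": ["clear", "blocked"], "robot2": ["clear", "blocked"]},
    transition=lambda s, a: {"s0": 0.5, "s1": 0.5},
    observation_fn=lambda s, a: {"clear": 0.8, "blocked": 0.2},
    reward=lambda s, a: 1.0 if s == "s1" else 0.0,
)
print(len(model.agents))  # 2
```

Using one generic model object like this is what lets a single planning framework be applied across many cooperative multi-robot problems: only the fields change, not the solver.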
Deep Reinforcement Learning for Discovery of Incomplete Networks
Complex networks are often too large for full exploration, only partially accessible, or only partially observed. Downstream learning tasks on such incomplete networks can produce low-quality results, and reducing a network's incompleteness can be costly and nontrivial. As a result, network discovery algorithms optimized for specific downstream learning tasks under given resource collection constraints are of great interest. Here, we formulate the task-specific network discovery problem in an incomplete network setting as a sequential decision-making problem; our downstream task is vertex classification. We propose a framework, called Network Actor Critic (NAC), that learns concepts of policy and reward in an offline setting via a deep reinforcement learning algorithm. A quantitative study is presented on several synthetic and real benchmarks. We show that offline models of reward and network discovery policies lead to significantly improved performance compared to competitive online discovery algorithms.
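The abstract does not detail NAC's internals, but the sequential decision-making formulation can be illustrated with a toy actor-critic sketch: a softmax policy chooses which frontier vertex to query next, and the reward is the number of target-class vertices discovered. Everything below, from the ring graph to the feature design, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden graph: a ring of 10 vertices; odd vertices are the "target" class.
graph = {v: [(v - 1) % 10, (v + 1) % 10] for v in range(10)}
labels = {v: v % 2 for v in range(10)}

def features(v, discovered):
    # Observed in-degree of v from already-discovered vertices, plus a bias term.
    return np.array([sum(v in graph[u] for u in discovered), 1.0])

def run_episode(weights, budget=4):
    """One discovery episode: expand the known network from a seed vertex,
    sampling frontier vertices with a softmax policy; the reward is the
    number of target-class vertices found (the downstream task)."""
    discovered, reward, grads = {0}, 0, []
    for _ in range(budget):
        frontier = sorted({n for u in discovered for n in graph[u]} - discovered)
        if not frontier:
            break
        phi = np.array([features(v, discovered) for v in frontier])
        scores = phi @ weights
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        i = rng.choice(len(frontier), p=probs)
        grads.append(phi[i] - probs @ phi)    # gradient of log pi(action | state)
        discovered.add(frontier[i])
        reward += labels[frontier[i]]
    return reward, grads

weights, baseline = np.zeros(2), 0.0
for _ in range(100):                          # offline training loop
    r, grads = run_episode(weights)
    advantage = r - baseline                  # critic: subtract value baseline
    baseline += 0.1 * (r - baseline)          # update running value estimate
    for g in grads:
        weights += 0.05 * advantage * g       # actor: policy-gradient step
```

The offline setting matters here: the policy and the reward model are learned before deployment, so at discovery time the agent only evaluates its learned policy rather than paying for online exploration.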
End-to-End UAV Simulation Framework
The Laboratory has been developing advanced Unmanned Aerial Vehicles (UAVs) for a wide range of applications, including reconnaissance, surveillance, target acquisition, and force protection. As applications demand more sophisticated levels of autonomy, testing complex algorithms in the field becomes costly, introduces risk, and often yields confounding results. This demo will showcase an internally developed end-to-end UAV simulation framework that enables quicker software development cycles and the discovery of issues before costly field tests. The demo will consist of a hardware-in-the-loop (HIL) UAV flying in a simulated outdoor virtual environment.
Exploiting Risk Taking in Group Operations
Recent advancements in deep reinforcement learning (RL) have enabled AI to solve previously intractable problems such as the game of Go. The next frontier for deep RL is applications to multi-agent decision making in real-world scenarios that may feature partial observability and hazardous or adversarial environments. To this end, we develop a new multi-agent RL algorithm that exploits risk taking at a fundamental level in order to improve the learning process. We demonstrate this algorithm by training decentralized, multi-agent systems to solve persistent surveillance and ad hoc communication networking problems in simple simulated environments.
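The abstract does not specify how risk taking enters the learning process; one common way to make risk explicit in an RL objective is conditional value-at-risk (CVaR), so the sketch below is only an illustration of that general idea, not the algorithm described above:

```python
import numpy as np

def cvar(returns, alpha=0.2):
    """Conditional value-at-risk: the mean of the worst alpha-fraction of
    returns. Optimizing CVaR (rather than the plain mean) makes an agent's
    attitude toward rare, low-return outcomes an explicit part of the
    learning objective."""
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

episode_returns = [10.0, 9.5, 8.0, -2.0, 11.0]  # one crash among good runs
print(cvar(episode_returns, alpha=0.2))  # -2.0 : dominated by the worst run
print(np.mean(episode_returns))          # 7.3  : the crash is averaged away
```

In a hazardous or adversarial environment, training against a risk-sensitive statistic like this penalizes policies whose average performance hides occasional catastrophic episodes.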
FALCONS: LowSWaP Imaging Sensor with On-Board Processing for Future UAV Autonomy Algorithms
We will present the architecture and test data from an MIT Lincoln Laboratory-developed UAV sensor with high temporal resolution and integrated on-chip processing. The concept is to execute basic image processing algorithms on-chip, freeing the platform processor or mission planner from image processing tasks so it can spend more time on complex planning tasks. The developed ASIC is a 256×256 Geiger-mode APD array with 256 SIMD cores on-chip.
Global Synthetic Weather Radar
The U.S. Air Force operates manned and unmanned flight missions all over the world. Accurate weather information during both pre-flight planning and flight execution is critical to mission success. The U.S. Air Force is partnering with MIT Lincoln Laboratory to develop a machine-learning application that generates global radar-like mosaics and radar-forward forecasts in support of U.S. Air Force flight operations. The capability takes a number of data sources as input, including global lightning, the Global Air-Land Weather Exploitation Model (GALWEM) numerical model, and global weather satellite images. It fuses these data sources with a convolutional neural network to create global synthetic weather radar mosaics, and these mosaics are used in conjunction with GALWEM to produce radar-forward forecasts out to 12 hours.
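At its core, the fusion step amounts to convolving stacked input channels into a synthetic reflectivity field. The single-layer sketch below is a heavy simplification (the real system is a deep network with calibrated inputs; all shapes, kernels, and values here are toy assumptions):

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2D convolution of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def fuse(channels, kernels, bias=0.0):
    """One convolutional layer that fuses input channels (e.g., lightning
    density, satellite imagery, NWP model fields) into a synthetic
    reflectivity map; the ReLU keeps output non-negative, like radar dBZ."""
    out = sum(conv2d(c, k) for c, k in zip(channels, kernels))
    return np.maximum(out + bias, 0.0)

rng = np.random.default_rng(1)
lightning, satellite, galwem = (rng.random((8, 8)) for _ in range(3))
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(3)]
mosaic = fuse([lightning, satellite, galwem], kernels)
print(mosaic.shape)  # (6, 6)
```

The key design point is that channels from very different sources (point lightning observations, gridded NWP fields, satellite radiances) are regridded to a common raster so a single network can learn the mapping to radar-like reflectivity.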
HALEStorm—Automatic Target Recognition Using Active Learning on SAR Imagery
HALEStorm is a user-in-the-loop system in which analyst expertise is used to improve the system’s AI algorithms while the AI makes the analyst more efficient. The system proposes detections to analysts, which allows them to quickly focus on areas of interest in imagery products. Analysts can then confirm or deny the validity of the system’s proposals, shift the detections to improve accuracy, and create new detections that may have been missed. This feedback is then incorporated back into the system to adjust weights in the AI algorithms, which improves automated detection results on subsequently processed imagery products.
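The confirm/deny feedback loop can be sketched as online weight updates to a simple detector. The toy logistic model below illustrates the general pattern only; it is not HALEStorm's actual algorithm, and all names are invented:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Detector:
    """Toy detector whose weights are adjusted from analyst feedback:
    confirmed detections become positive labels, denied ones negative."""
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def propose(self, candidates):
        """Rank candidate detections so the analyst reviews the
        highest-confidence proposals first."""
        scores = sigmoid(candidates @ self.w)
        return np.argsort(-scores)

    def incorporate_feedback(self, x, confirmed, lr=0.5):
        """One logistic-regression gradient step per analyst decision."""
        y = 1.0 if confirmed else 0.0
        self.w += lr * (y - sigmoid(x @ self.w)) * x

det = Detector(n_features=3)
det.incorporate_feedback(np.array([1.0, 0.0, 0.0]), confirmed=True)
det.incorporate_feedback(np.array([0.0, 1.0, 0.0]), confirmed=False)
print(det.w)  # first weight pushed up, second pushed down
order = det.propose(np.eye(3))  # re-ranked proposals after feedback
```

Each analyst decision immediately reshapes what the system proposes next, which is the essence of the user-in-the-loop design described above.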
Joint Cognitive Operational Research Environment (JCORE)
AI is often developed without consideration of how the technology will affect the user or how to adapt it to the user. JCORE is a human-focused virtual platform that combines simulation, a game-like UI, and data collection capabilities. The platform captures data on human decision making while assessing the effects of AI technology on performance. With it, we are able to test, assess, and adapt AI algorithms in DoD-relevant scenarios. We will discuss the JCORE platform and some of the AI algorithms that have been tested to date.
Learning Dynamic Complex Distributions with Recurrent Autoregressive Flow Models
Modeling decision-making processes is challenging because of hidden temporal and contextual relationships. In this work, we propose a novel generative model that accurately captures the dynamics of these relationships.
Neural Control of Exoskeletons
The Neural Control of Exoskeletons line program is developing an intuitive, closed-loop, fluent ankle exoskeleton that an operator can easily embody for use in real-world environments. The system includes elements of movement prediction, personalized control mappings, and sensory feedback. The program seeks to significantly expand the range of environments in which lower-extremity exoskeletons can operate and to provide a quantitative basis for optimizing the performance of a human–exoskeleton system.
Neural Network Techniques for Profiling of Cloudy Atmospheres
In recent decades, space-borne microwave and hyperspectral infrared (IR) sounding instruments have led to significant enhancements in the accuracy of numerical weather prediction (NWP), including short-term NWP forecasts of severe weather such as tornado outbreaks. MIT Lincoln Laboratory has pioneered the use of neural networks in retrieving the three-dimensional distribution of atmospheric temperature and water vapor from these observations, and, in recent operational use by NASA, this work has significantly improved accuracy and yield in cloud-covered scenes versus previous efforts. Current research efforts are focused on applying deep learning and sensor fusion techniques to improve sounding performance in the planetary boundary layer, the region closest to the Earth’s surface.
Proactive Cyber Situational Awareness via High-Performance Computing
Cyber situation awareness technologies have largely been focused on present-state conditions, with limited abilities to forward-project nominal conditions in a contested environment. We demonstrate an approach that uses data-driven, high-performance computing (HPC) simulations of attacker/defender activities in a logically connected network environment that enables this capability for interactive, operational decision making in real time. Our contributions are three-fold: (1) we link live cyber data to inform the parameters of a cybersecurity model, (2) we perform HPC simulations and optimizations with a genetic algorithm to evaluate and recommend risk remediation strategies that inhibit attacker lateral movement, and (3) we provide a prototype platform to allow cyber defenders to assess the value of their own alternative risk reduction strategies on a relevant timeline. We present an overview of the data and software architectures, and results are presented that demonstrate operational utility alongside HPC-enabled runtimes.
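The genetic-algorithm step in contribution (2) can be illustrated with a toy version of the optimization: choose a small set of hosts to harden so that an attacker entering the network can laterally reach as few hosts as possible. The graph, fitness function, and GA parameters below are all invented for illustration and bear no relation to the actual model:

```python
import random

random.seed(0)

# Toy enterprise network: attacker enters at host 0; edges allow lateral movement.
edges = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}

def reachable(blocked):
    """Count hosts the attacker can reach from host 0 when `blocked` are hardened."""
    seen, stack = set(), [0]
    while stack:
        v = stack.pop()
        if v in seen or v in blocked:
            continue
        seen.add(v)
        stack.extend(edges[v])
    return len(seen)

def evolve(k=2, pop_size=20, generations=30):
    """Genetic algorithm: find k hosts to harden that minimize attacker reach."""
    hosts = list(edges)
    pop = [random.sample(hosts, k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: reachable(set(s)))       # fitness: fewer hosts reached
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = set(random.sample(a + b, k))        # crossover
            while len(child) < k:                       # repair duplicate genes
                child.add(random.choice(hosts))
            child = list(child)
            if random.random() < 0.3:                   # mutation
                child[random.randrange(k)] = random.choice(hosts)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda s: reachable(set(s)))
    return sorted(best), reachable(set(best))

strategy, reached = evolve()
print(strategy, reached)
```

In the real setting, the fitness evaluation is the expensive part (simulating attacker/defender activity), which is why the HPC resources above matter: each generation requires many independent simulations that parallelize naturally.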
RACECAR: Human Races Against Autonomously Driven Racecar
We will present a hardware-based demo utilizing the MIT Lincoln Lab RACECAR and remote driving station to compare the lap times of a participant to those of an AI-driven vehicle.
Strike Group Defender
The successful development and transition of AI for DoD human–machine teaming must incorporate feedback from the Warfighter throughout the entire technology development cycle. Strike Group Defender is an instrumented virtual environment that enables the Warfighter to team with AI in realistic, mission-relevant scenarios for the purpose of soliciting Warfighter feedback and evaluating teaming performance. We will demonstrate an implementation of an AI algorithm for electronic warfare and some of the scenarios that were created to measure teaming performance.
sUAS Swarm Demonstration
We will present a hardware-based demo showcasing centralized control of an autonomous swarm utilizing the Autonomous Systems Development Facility’s Motion Capture System.
Unity
Start bringing your vision to life today. Unity’s real-time 3D development platform empowers you with all you need to create, operate, and monetize. From games to aerospace, medical to manufacturing and beyond, Unity is the go-to solution for creating world-class interactive and immersive real-time experiences that bring products and ideas to life.
Virtual–Physical Environment for Autonomy Research
In collaboration with the MIT Aero/Astro department, Lincoln Laboratory has developed a mixed-reality training environment for testing and developing perception and control algorithms for autonomous vehicles. The system allows users to control a physical vehicle in the real world while generating synthetic visual data from a virtual environment. The use of synthetic data enables researchers to vary testing conditions in ways that the physical environment in which they conduct their experiments cannot match. The use of physical vehicles generates real inertial measurements and real actuator noise, improving the accuracy of the synthetic imagery. We will provide an overview of the system architecture, examples of previous experiments, and an opportunity to interact with the system in real time.
Visual Inertial Odometry for non-GPS AI Applications
Visual inertial odometry (VIO) has the potential to play a large role in many DoD mission areas; it offers passive, low-SWaP relative motion estimation in non-GPS mission environments. Engineering a general-purpose, domain-independent VIO system for these environments is challenging for many reasons, including the identification of reliable features, scale estimation, and robustness to varied scene geometry. We present an evaluation of a candidate visual odometry (VO) system that could serve as the basis for a general-purpose VIO system.
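One of the challenges named above, scale estimation, arises because monocular visual odometry recovers translation only up to an unknown scale factor; fusing with inertial measurements supplies the missing metric scale. A minimal sketch of one standard approach, a least-squares fit of VO displacements to inertially derived displacements (all data values below are invented):

```python
import numpy as np

def estimate_scale(vo_disp, imu_disp):
    """Recover the metric scale factor s minimizing ||s * vo - imu||^2,
    given per-frame displacement magnitudes from visual odometry (unit-less)
    and from integrated inertial measurements (metric)."""
    vo = np.asarray(vo_disp, dtype=float).ravel()
    imu = np.asarray(imu_disp, dtype=float).ravel()
    return float(vo @ imu / (vo @ vo))

# VO reports unit-less steps; the IMU says the platform moved in meters.
vo_steps = [0.10, 0.11, 0.09, 0.10]
imu_steps = [0.50, 0.55, 0.45, 0.50]   # exactly 5x the VO steps
print(estimate_scale(vo_steps, imu_steps))  # ~5.0 meters per VO unit
```

Real VIO systems estimate scale jointly with biases and gravity inside a filter or bundle adjustment; this sketch isolates only the scale-ambiguity idea.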