Laboratory for Autonomy in Data-Driven and Complex Systems: Research
NRI 3.0: Innovations in Integration of Robotics
Integration of Autonomous UAS in Wildland Fire Management
Mechanical and Aerospace Engineering: The Ohio State University
School of Environment and Natural Resources: The Ohio State University
Mechanical and Aerospace Engineering: Syracuse University
Division of Forestry: Ohio Department of Natural Resources
The wildland-urban interface (WUI) is now the fastest-growing land use in the United States. Future fire probability forecasting shows dramatic increases in the probability of catastrophic fire in parts of the eastern U.S. due to climate change, combined with rapidly increasing WUI. More attention is needed to understand the factors driving fire behavior and spread in eastern forests. This multi-disciplinary research program is focused on the integration of autonomous unmanned aerial systems (UAS) into prescribed wildland burn projects to understand how topographic, atmospheric and forest fuel factors in temperate hardwood forests influence fire intensity and rate of spread. Experts from the areas of forest management and ecology, uncertainty quantification, sensor fusion and data-driven modeling and control will collaborate to deploy autonomous aerial robotic systems in unstructured, uncertain, and hazardous fire environments. In the long term, this work will aid in the management of the wildland-urban interface, monitoring and suppression activities of unplanned wildfires as well as other hazardous phenomena. Multi-disciplinary partnerships in research and education will highlight the right context for autonomous UAS, namely, taking humans out of missions that involve dangerous and repetitive actions. Research activities will strengthen and diversify stakeholder involvement, including state departments of natural resources, firefighting communities, and K-12 education.
The scientific merit of this research is anchored in the advancement and integration of autonomous unmanned aerial systems with wildland fire management projects. Theoretical, computational, and experimental methods and materials developed in this work will enhance situational awareness and enable autonomous risk-aware decision making in the face of unstructured uncertainty in a hazardous environment. UAS path planning will formulate and solve novel resource chance-constrained optimization problems. UAS will bypass heavy offline computation and generate in-time, micro-level local conditions through physics-informed learning based on Koopman operator theory. New sensor belief functions will be designed that accurately reflect the sensing ignorance contained in hypotheses related to the fire environment. Evidential information fusion will effectively handle sensor epistemic uncertainty and allow reliable integration in an environment where not all data is trustworthy. Data-driven control will enable efficient and reliable real-time operation of autonomous vehicles with uncertain dynamics by using available knowledge of applied inputs and observed outputs to learn the unknown inputs, even without prior training data or persistent excitation. Real-time estimates of disturbance forces and torques acting on a UAS, obtained by the disturbance observer, will provide information on the turbulence and air flow around a wildland fire region. Finally, most research on wildland fire behavior has focused on western forests, with far less attention to eastern forests. Differences in forest composition and structure, and in fuel composition and characteristics, may translate into differences in fire behavior. This work will help delineate the differences as well as the similarities in fire behavior between eastern and western forests.
Autonomous Path Planning
- Participation of UAVs, personal air vehicles, and other aerial public transportation vehicles (e.g., air taxis) in the US national airspace system is steadily increasing and will continue to do so. The DOD has placed special emphasis on the continued advancement of Small Unmanned Aircraft Systems (SUAS) because they are positioned to replace human agents in dangerous and/or repetitive missions.
- We are developing a framework for the safe operation of such vehicles, in terms of path planning problems, as they seek to share the national airspace in cluttered, uncertain, and dynamically changing environments, e.g., an urban setting (see Figure).
- Path planning in a cluttered environment must tackle complex no-fly/keep-out zones which, when analyzed in a deterministic framework, can reduce the domain of meaningful solutions to a vanishingly small set, often with high cost.
- Chance constraints offer a rigorous framework for recognizing that no-fly zones in complex environments are ultimately approximations, so that there may be merit in treating their boundaries as soft, “probabilistic barriers” rather than hard constraints. While this implies greater risk than the deterministic approach, the chance-constrained (CC) framework gives the decision maker (an autonomous agent or a supervisor) the flexibility to prescribe application-appropriate risk. One can generate a family of solutions parameterized by different levels of assigned risk, giving the decision maker the ability to undertake missions with a known risk-versus-return trade-off.
- We employ the formalism of chance constraints to pose risk-aware path planning problems that assimilate environmental uncertainty into the design process while expanding the solution space in a cluttered environment, including the creation of potential keyhole trajectories (see figures).
- We employ pseudospectral discretization to achieve rapid transcription of the chance-constrained optimal control problem into standardized nonlinear programming (NLP) forms.
- We are working towards employing recent developments in geometric techniques based on constrained Delaunay triangulation to generate high-quality initial trajectories and speed up the process of optimal path generation.
- Finally, we are working towards a hierarchical structure that separates computational responsibilities between ground-based and onboard computing platforms. The former will process more accurate probabilistic representations of flight constraints, while the latter will be equipped to rapidly generate and update optimal trajectories using simplified representations of chance constraints.
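As a minimal sketch of the chance-constraint tightening idea (not our actual planner), the example below plans waypoints around a circular no-fly zone whose center is Gaussian-uncertain: the probabilistic constraint P(inside zone) ≤ ε is replaced by a deterministically inflated radius via a Gaussian quantile. All geometry and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy setup: N free waypoints from start to goal, one circular no-fly zone
# whose center has isotropic Gaussian uncertainty (std dev sigma).
N = 8
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
center, radius, sigma = np.array([5.0, 0.0]), 1.5, 0.3
eps = 0.05  # allowed violation probability per waypoint

# Tighten the hard radius by a Gaussian quantile: d >= r + z_{1-eps} * sigma.
margin = radius + norm.ppf(1.0 - eps) * sigma

def cost(x):
    # Sum of squared segment lengths (a simple smoothness/length surrogate).
    pts = np.vstack([start, x.reshape(N, 2), goal])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1) ** 2)

cons = [{"type": "ineq",
         "fun": lambda x: np.linalg.norm(x.reshape(N, 2) - center, axis=1) - margin}]

# Straight-line seed, nudged off the zone center to avoid a degenerate gradient.
x0 = (np.linspace(start, goal, N + 2)[1:-1] + np.array([0.0, 0.1])).ravel()
res = minimize(cost, x0, constraints=cons)
path = np.vstack([start, res.x.reshape(N, 2), goal])
```

Lower ε (less tolerated risk) inflates the margin and pushes the path further from the zone, producing exactly the risk-parameterized family of solutions described above.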
Evidential Sensor Fusion
The recent rise in both computational and technological capabilities has corresponded with the emergence of autonomous applications for complex systems. We are interested in the development of an autonomous application in the form of a collaborating network of sensing, controlling, and computing agents that work to plan and execute mission tasks (such as tracking a maneuvering target or monitoring a growing wildfire), based on the information flow throughout the network.
Consider a physical system that evolves as a stochastic process. Because the current state of the system is never truly known, it must instead be estimated. The uncertainty inherent in this estimate may be characterized by a probability density function, also known as its belief state. This system is indirectly observed by one or more noisy sensors that output a measurement of the system. In the Bayesian recursive process shown in the diagram above, this measurement is fused with an estimate of the belief (prior distribution) to produce an updated belief state (posterior distribution). The goal when characterizing this belief is to reduce the amount of uncertainty that springs from two sources: (i) the stochastic processes inherent in the system’s state evolution, and (ii) the imperfections of the sensors that measure the system, i.e., the lens through which the true state can be observed.
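The prior-times-likelihood fusion step can be sketched on a one-dimensional grid; the Gaussian prior and sensor model below are illustrative assumptions, not tied to any particular system.

```python
import numpy as np

# Discrete Bayes update on a 1-D grid of candidate states.
grid = np.linspace(0.0, 10.0, 101)

# Forecast belief (prior): broad Gaussian centered at 4.
prior = np.exp(-0.5 * ((grid - 4.0) / 2.0) ** 2)
prior /= prior.sum()

# Noisy sensor reports z = 6 with std dev 0.5 (illustrative numbers).
z, sensor_std = 6.0, 0.5
likelihood = np.exp(-0.5 * ((z - grid) / sensor_std) ** 2)

# Bayes: posterior is proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()
```

The fused belief shifts toward the measurement and tightens, since the sensor here is more informative than the forecast.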
We are interested in establishing bi-directional feedback between sensing and controlling agents in the form of posterior belief state-based input actions (red arrow in the diagram above), with the ultimate goal of improving the characterization of a complex system’s belief state. Furthermore, we must account for the fact that this belief state will be derived from multiple sources of information (e.g., forecasted dynamics and heterogeneous sensors), and thus we strive to combine the beliefs of these sources via evidential sensor fusion.
This evidential fusion is done within the framework of evidential reasoning, which requires the beliefs in particular propositions to be rooted in the available evidence supporting those propositions. In the absence of evidence, it is allowable to admit ignorance for subsequent decision-making. We have utilized well-known methods within the Dempster-Shafer theory of evidence as well as more recent approaches such as the Dezert-Smarandache theory of plausible and paradoxical reasoning.
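As a small, self-contained sketch of evidential combination, the following implements Dempster's rule on a two-hypothesis frame (F = fire, N = no fire), with explicit mass on the ignorance hypothesis {F, N}; the mass values are illustrative.

```python
from itertools import product

# Basic belief assignments from two sources; mass on frozenset("FN")
# represents admitted ignorance (evidence for neither hypothesis alone).
m1 = {frozenset("F"): 0.6, frozenset("N"): 0.1, frozenset("FN"): 0.3}
m2 = {frozenset("F"): 0.5, frozenset("N"): 0.2, frozenset("FN"): 0.3}

def dempster(m1, m2):
    """Dempster's rule: multiply masses over intersections, renormalize
    by the mass not assigned to contradictory (empty-intersection) pairs."""
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {h: v / (1.0 - conflict) for h, v in fused.items()}

m = dempster(m1, m2)
```

Because both sources lean toward fire, the fused mass on {F} exceeds either individual mass, while the ignorance mass shrinks as evidence accumulates.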
Control actions sent to sensing agents can be enacted in multiple forms:
1) Sensor selection: In a target tracking scenario (depicted in the image below), there may be many available sources that can observe the target, but only a few can be integrated due to computational limits. Thus, the most advantageous measurements (judged, e.g., by Mahalanobis distance, information routing time, etc.) are incorporated. This is accomplished through sensor selection: in the image below, a single acoustic range sensor (blue dot) was selected based on its relative location to both the target ('X') and the data acquisition server (star).
2) Sensor realignment: In the case of an evolving wildfire (certainly a stochastic system), the bi-directional feedback between the estimator/controller and the sensing set can be implemented as an optimized path-planning procedure for aerial vision sensors to continually track the wildfire system as it develops - this is shown in the graphic below. Evidence-based fusion occurs between multiple agents: 1) A wildfire belief state forecaster based on the available environmental information (surface wind velocities and local topography), 2) Embedded temperature sensors that are triggered by surrounding heat and 3) Aerial drones that confirm the fire's presence visually from onboard cameras.
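A toy version of the sensor-selection rule in 1) can be sketched as follows: among candidate readings, pick the one closest to the predicted measurement in Mahalanobis distance. The sensor names and numbers are hypothetical, chosen only to make the selection concrete.

```python
import numpy as np

# Predicted range to the target and the innovation covariance (illustrative).
pred_z = np.array([10.0])
S = np.array([[4.0]])
S_inv = np.linalg.inv(S)

# Candidate sensor readings (hypothetical sources).
readings = {"acoustic": np.array([11.0]),
            "radar":    np.array([16.0]),
            "ir":       np.array([3.0])}

def mahalanobis(z):
    # Distance of a reading from the prediction, scaled by the covariance.
    v = z - pred_z
    return float(np.sqrt(v @ S_inv @ v))

# Select the reading most consistent with the predicted measurement.
best = min(readings, key=lambda k: mahalanobis(readings[k]))
```

A fuller selection criterion would also fold in routing time and other costs, as noted above; this sketch isolates the statistical-consistency term.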
Space Situational Awareness
The foremost purpose of space situational awareness (SSA) is to provide decision-making processes with a quantifiable and timely body of evidence of behavior(s), be it predictive, imminent, or forensic, attributable to specific space domain threats and hazards. Continued sustainable access to and utilization of space relies on awareness of its environment, both from the perspective of human operators on the ground and of autonomous spacecraft during flight. This is especially true as strategic orbit regimes, such as the geostationary (GEO) belt, become increasingly crowded, with growing potential for intentional or inadvertent conjunctions.
LADDCS is building a fundamental research program for accurate uncertainty quantification and generation of resultant actionable intelligence for space situational awareness (SSA). We impact the following elements of SSA:
- detection,
- tracking,
- decision, and
- control/action (sensor tasking).
Detection and tracking modules grow out of a novel closed-loop (self-monitoring) and adaptive particle platform that allows Monte Carlo state estimation with performance guarantees. See more on this closed-loop platform below under "Prognostics".
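A bare-bones bootstrap particle filter conveys the flavor of Monte Carlo state estimation underlying the platform, without the adaptivity or performance guarantees; the 1-D random-walk model below is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden 1-D random walk observed through additive Gaussian noise.
n, steps = 2000, 30
proc_std, meas_std = 0.3, 0.5
true_x = 0.0
particles = rng.normal(0.0, 1.0, n)
weights = np.full(n, 1.0 / n)

for _ in range(steps):
    true_x += rng.normal(0.0, proc_std)        # hidden state evolves
    z = true_x + rng.normal(0.0, meas_std)     # noisy measurement
    particles += rng.normal(0.0, proc_std, n)  # forecast each particle
    weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()                   # fuse measurement (reweight)
    idx = rng.choice(n, n, p=weights)          # resample to fight degeneracy
    particles, weights = particles[idx], np.full(n, 1.0 / n)

estimate = particles.mean()
```

An adaptive scheme would additionally monitor the particle approximation error online and adjust the ensemble to meet a prescribed accuracy, which is the self-monitoring aspect referenced above.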
Prognostics
Complex engineering systems, such as aircraft engines and communication satellites, operate under extremely harsh conditions. Their current state of health determines how well they will continue to operate in the future. Our inability to accurately forecast their future state translates into an inability to take appropriate corrective and/or maintenance actions at the present time.
Current technology lacks the ability to provide forecasts with guaranteed levels of accuracy, which, ideally, should be defined by the practitioners of each relevant technology.
In other words, our existing prognostics systems are not sufficiently robust; this induces poor decision making, in turn leading to circumstances that severely reduce the life and performance of complex engineering systems.
LADDCS has developed a robust, scalable computational forecasting platform based on our adaptive particle uncertainty quantification framework. Our solution provides guaranteed performance in terms of precisely defined quantities of interest that are specific to the target technology. For example, we can help answer the following questions:
“Is it likely that my engine will encounter flame instability issues over the next year? Can I forecast such an occurrence within a 5% error tolerance?”
“Does my satellite stand a risk of collision with space debris over the next two weeks? Can I compute the probability of collision within three decimal points of accuracy?”
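To illustrate how a sample budget can certify an error tolerance, the sketch below sizes a plain Monte Carlo run via the Hoeffding bound and estimates a toy collision probability. The miss-distance distribution and threshold are invented for illustration; this is not the adaptive framework itself, which achieves such guarantees far more efficiently.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hoeffding bound: n >= ln(2/delta) / (2 * eps^2) samples guarantee that a
# Monte Carlo probability estimate is within eps of truth with prob. 1-delta.
eps, delta = 0.005, 1e-3
n = int(np.ceil(np.log(2.0 / delta) / (2.0 * eps ** 2)))

# Toy "collision" event: miss distance (km) below a threshold under
# Gaussian position uncertainty (illustrative numbers only).
miss = rng.normal(2.0, 1.0, n)
p_hat = float(np.mean(miss < 0.5))
```

The bound is distribution-free but conservative; variance-reduction or adaptive sampling can meet the same tolerance with far fewer samples.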
Tensor Data Association
The goal of target tracking is to estimate the state of one or many targets, achieved via a two-step recursion: (i) uncertainty forecasting, in which the joint target state pdf is propagated using physical motion models; and (ii) information fusion, in which sensor measurements are incorporated into the forecast state uncertainty under some sense of optimality.
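The two-step recursion has a closed form in the linear-Gaussian special case (the Kalman filter), sketched here for a 1-D constant-velocity target; the noise values are illustrative.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
Q = 0.01 * np.eye(2)                   # process noise covariance
H = np.array([[1.0, 0.0]])             # position-only sensor
R = np.array([[0.25]])                 # measurement noise covariance

x, P = np.array([0.0, 1.0]), np.eye(2) # prior mean (pos, vel) and covariance
z = np.array([1.2])                    # measurement after one step

# (i) uncertainty forecasting: propagate the state pdf through the model.
x, P = F @ x, F @ P @ F.T + Q

# (ii) information fusion: the Kalman update blends forecast and measurement.
S = H @ P @ H.T + R                    # innovation covariance
K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
x = x + K @ (z - H @ x)
P = (np.eye(2) - K @ H) @ P
```

The posterior position lands between the forecast (1.0) and the measurement (1.2), and the position variance shrinks, which is the fusion step doing its job.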
In the fusion step, if the targets do not voluntarily reveal their identity, an important and difficult question must be addressed: for any target of interest, which measurement should be used to perform the update? This problem is commonly referred to as data association (DA).
The Bayesian paradigm for DA is dominated by the Joint Probabilistic Data Association (JPDA) filter, but exact computation of its association probabilities is known to be NP-hard.
At LADDCS, we are employing tensor decomposition techniques to tackle the curse of dimensionality in DA. At the fundamental level, tensors are multi-dimensional arrays. Tensors by themselves can be used to hold large sets of related data, like sensor measurements. However, they become significantly more interesting when tensor decomposition approaches are applied.
We are developing the Dynamic JPDA (DJPDA) filter, which is constructed by developing the JPDA filter in the framework of Dynamic Tensor Analysis (DTA). Each scan is treated as a tensor, such that an ordered sequence of scans can be analyzed in the framework of DTA. The resulting tensor cores are then used as input to the JPDA problem.
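As a simplified sketch of the tensor machinery (standard HOSVD on synthetic data, not the DJPDA pipeline itself), a sequence of scans can be stacked into a three-way tensor and reduced to a small core via per-mode SVDs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: scans x measurements x features.
T = rng.normal(size=(10, 8, 6))
ranks = (4, 3, 2)  # target multilinear (Tucker) ranks, chosen arbitrarily

def unfold(X, mode):
    # Matricize the tensor along one mode: rows index that mode.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

# HOSVD: leading left singular vectors of each unfolding are factor matrices.
U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
     for m, r in enumerate(ranks)]

# Core tensor: contract T with each transposed factor along its mode.
core = T
for m, Um in enumerate(U):
    core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
```

The compressed core (here 4x3x2 instead of 10x8x6) retains the dominant multilinear structure, which is the quantity a JPDA-style association step would then consume.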