Special sessions

Distributed Tracking:
Virtually every visionary document and perspective on the future information environment, for both military/defense applications and many civil applications, depicts a highly distributed but highly connected information-processing infrastructure. Most researchers and implementers in the information fusion community, however, are typically concerned with, and have responsibility for, the nodal processing in such environments, and have little or no responsibility for the internodal communication infrastructure. In the distributed fusion case, however, any integrated, multi-nodal approach to architecting a coherent information fusion process over the nodes involves a highly coupled dependence on the network structure and its characteristics, including the (dynamic) network topology, internodal communication/data-exchange strategies, and specific link characteristics. Relatively little ongoing research appears to address these interrelated issues. Since object tracking is among the most mature application areas for data fusion, this session attempts to open a dialog on the macro and micro issues of architecting rigorously correct solutions to the distributed tracking problem, as a case study of the larger issues of distributed information fusion. Topics of interest include:
  1. A taxonomy of viable approaches to Distributed Tracking: many taxonomic variants have been proposed over many papers; is there a way to categorize them that makes sense so that they can be examined in an orderly way at a macro/architectural-type level in terms of features, benefits, and disadvantages--this is toward organizing the "solution-space"
  2. Tradeoffs in dealing with the double-counting problem: tracklets, pedigrees, covariance intersection, and other methods (see the covariance intersection sketch after this list)
  3. Facing the interdependencies between network characteristics, including topology, and nodal, fusion-based tracking approaches
  4. How to test and evaluate nominated Distributed Tracking techniques: methods, metrics, etc
  5. Discussion of "Information-Sharing Strategies" (ISS), these being the specifics of strategies for how to share information to enable efficient and effective (optimal?) Distributed Tracking; the focus is on "specific"--meaning what information to send, when/how often, to whom, etc.--there are two questions here: how is an ISS specified, and how is it then optimized?
  6. How to establish a working connection between the Information Fusion community and the organizations that specify and build the network infrastructures
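Covariance intersection, one of the methods named in item 2, fuses two node-level estimates whose cross-correlation is unknown without double-counting shared information. The following is a minimal sketch under assumed state estimates and covariances, choosing the weight omega by minimizing the trace of the fused covariance.

# Hypothetical sketch of covariance intersection (CI), one approach named in
# item 2: fusing two estimates whose cross-correlation is unknown without
# double-counting shared information. The example states and covariances are
# invented for illustration; omega is chosen by minimizing the fused trace.
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(xa, Pa, xb, Pb):
    """Fuse estimates (xa, Pa) and (xb, Pb) with unknown cross-correlation."""
    def fused_trace(omega):
        info = omega * np.linalg.inv(Pa) + (1 - omega) * np.linalg.inv(Pb)
        return np.trace(np.linalg.inv(info))
    omega = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
    info = omega * np.linalg.inv(Pa) + (1 - omega) * np.linalg.inv(Pb)
    P = np.linalg.inv(info)
    x = P @ (omega * np.linalg.inv(Pa) @ xa + (1 - omega) * np.linalg.inv(Pb) @ xb)
    return x, P, omega

# two node-level track estimates of the same target (invented numbers)
xa, Pa = np.array([1.0, 2.0]), np.array([[2.0, 0.3], [0.3, 1.0]])
xb, Pb = np.array([1.4, 1.7]), np.array([[1.0, -0.2], [-0.2, 3.0]])
x, P, omega = covariance_intersection(xa, Pa, xb, Pb)
print("omega =", round(omega, 3))
print("fused x =", x.round(3))
print("fused P =", P.round(3))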
Image Fusion & Exploitation:
With the success of a day-long set of sessions on "Image Fusion & Exploitation" at Fusion 2000 in Paris, we look forward to another set of interesting presentations and discussions at Fusion 2001 in Montreal. We plan to address a variety of important issues in multisensor, multi-modality, and multi-aspect image fusion, as enumerated below. There are challenges at every stage of combining imagery and extracting information, and the methods draw on everything from conventional image and signal processing to biologically inspired approaches and algorithms. The issues range from algorithms to systems to implementations, and to how well the machine or human performs with the fused output. We hope to discuss many of these issues at the invited session; a simple pixel-level fusion sketch follows the topic list below.
  1. Cross-sensor registration and cross-image mosaicking
  2. 3D site modeling from multiple image sources
  3. Fusion for visualization, exploitation, and data mining
  4. Applications to remote sensing, defense, intelligence, and medicine
  5. Machine and human performance with fused image products
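As one low-level illustration of the kind of algorithm the session covers (not a method prescribed by the session), pixel-level fusion of two co-registered images can be done by selecting, at each location, the source with higher local activity. The synthetic images, the variance-based activity measure, and the window size below are assumptions; practical systems would first solve cross-sensor registration and would typically use multiresolution fusion.

# Hypothetical illustration of simple pixel-level fusion of two co-registered
# images: at each pixel, keep the value from whichever source has the higher
# local variance (a crude "activity" measure). The synthetic inputs and the
# window size are assumptions made for this example.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=7):
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean * mean

def activity_select_fusion(img_a, img_b, size=7):
    """Per-pixel selection of the more 'active' (higher local variance) source."""
    mask = local_variance(img_a, size) >= local_variance(img_b, size)
    return np.where(mask, img_a, img_b)

# synthetic co-registered inputs: one image sharp on the left, the other sharp
# on the right (invented so each source carries complementary detail)
rng = np.random.default_rng(2)
base = rng.normal(size=(128, 128))
img_a, img_b = base.copy(), base.copy()
img_a[:, 64:] = uniform_filter(img_a[:, 64:], 9)   # blur right half of A
img_b[:, :64] = uniform_filter(img_b[:, :64], 9)   # blur left half of B

fused = activity_select_fusion(img_a, img_b)
print("fused image shape:", fused.shape)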
Situation Analysis and Situational Awareness:
Situation Analysis (SA) is defined as a process, the examination of a situation, its elements, and their relations, whose purpose is to provide and maintain a product, i.e., a state of Situation Awareness (SAW), in support of dynamic human decision-making activities. The integration of all refinement levels of data fusion (specifically situation and impact refinement) is seen as a key enabler for situation analysis. Enhancing SAW improves the probability of making the right decision and selecting the appropriate course of action in most situations. This session is meant as a forum to discuss a wide range of issues, among them:
  1. Concepts and definitions of SA & SAW
  2. Knowledge engineering and knowledge elicitation principles
    • Integration of the human element
    • Architectures
  3. Enabling Technologies/Fields
    • Spatial and temporal reasoning
    • Pattern recognition and analysis
    • Knowledge management
    • Reasoning under uncertainty
    • Multi-agent systems
    • Human-computer interfaces
    • Control theory
  4. Test beds
    • Scenarios
    • Modelling & simulation
    • Performance Evaluation (MOPs/MOEs, SAW measurements)
  5. Implementations & lessons learned
Non-linear filtering:
Keith Kastella (Veridian Systems), together with Stan Musick (AFRL/WPAFB) and Subash Challa (Univ. of Melbourne), proposes to organize a Fusion 2001 session on nonlinear filtering methods for tracking. The session will provide a forum for technical interaction and will introduce a Challenge Problem (CP) developed in collaboration between the Sensors Directorate of AFRL, the AF Office of Scientific Research, and the University of Melbourne. We hope that this problem will stimulate interest within the research community and encourage investigations into this important technical area.

AFRL is sponsoring a workshop at the Bergamo Conference Center in Kettering, OH, on February 21-22 to formulate this challenge problem. The Bergamo Workshop includes several invited talks and a presentation of the CP attributes, with a discussion period allocated after each talk. Currently the CP is focused on scenarios involving a single target moving in a two-dimensional space, where an idealized imaging sensor produces intensity proportional to target or clutter strength. Straightforward scenarios were chosen to maintain focus on the numerical filtering aspects of the nonlinear tracking problem. The attachment develops some of these ideas in more detail, and a minimal filtering sketch for a scenario of this kind appears below.
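To make the flavor of such a scenario concrete, the following is a minimal sketch of a bootstrap particle filter for a single target in 2D observed through a pixelated sensor whose intensities are proportional to target strength plus clutter noise. This is an illustration only, not the official CP; the grid size, noise levels, and point-spread model are assumptions.

# Hypothetical illustration only: a bootstrap particle filter for a single
# target in 2D, observed through an idealized imaging sensor whose pixel
# intensities are proportional to target strength plus clutter noise.
# Grid size, noise levels, and the point-spread model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

GRID = 20          # sensor is a GRID x GRID pixel array
N_PART = 2000      # number of particles
Q = 0.05           # process-noise std-dev (per axis, per step)
SIGMA_PSF = 0.7    # point-spread width of the target signature (pixels)
AMPLITUDE = 10.0   # target strength
CLUTTER_STD = 1.0  # additive clutter/noise std-dev per pixel

def expected_image(pos):
    """Mean sensor image for a target at continuous position pos = (x, y)."""
    xs, ys = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    d2 = (xs - pos[0]) ** 2 + (ys - pos[1]) ** 2
    return AMPLITUDE * np.exp(-0.5 * d2 / SIGMA_PSF ** 2)

def log_likelihood(image, pos):
    """Gaussian pixel-noise log-likelihood of an observed image given pos."""
    resid = image - expected_image(pos)
    return -0.5 * np.sum(resid ** 2) / CLUTTER_STD ** 2

# simulate truth and measurements (target follows a slow random walk)
truth = np.array([10.0, 10.0])
particles = rng.uniform(0, GRID, size=(N_PART, 2))   # diffuse prior
weights = np.full(N_PART, 1.0 / N_PART)

for step in range(20):
    truth += rng.normal(0, Q, size=2)
    image = expected_image(truth) + rng.normal(0, CLUTTER_STD, (GRID, GRID))

    # predict: propagate particles through the motion model
    particles += rng.normal(0, Q, size=particles.shape)

    # update: weight particles by the image likelihood
    logw = np.array([log_likelihood(image, p) for p in particles])
    logw -= logw.max()                                # numerical safety
    weights = np.exp(logw)
    weights /= weights.sum()

    # resample (multinomial) to avoid weight degeneracy
    idx = rng.choice(N_PART, size=N_PART, p=weights)
    particles = particles[idx]

    estimate = particles.mean(axis=0)
    print(f"step {step:2d}  truth {truth.round(2)}  estimate {estimate.round(2)}")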

One goal of the Bergamo Workshop is to sharpen or broaden the CP, as required, so that it:
  1. Uses scenarios that do not bias it in favor of one technique over another,
  2. Addresses questions that reflect current issues and are genuinely interesting,
  3. Achieves a balance between abstraction and realism that avoids confounding diverse effects while producing useful research results.
The proposed special session will provide a forum for initial presentation of results flowing out of the Bergamo Workshop.

Formal Methods:
The document stating the need for an information fusion system is most likely first written in natural language; the end result is an encoding of a number of algorithms in a programming language. The process that leads from the natural language description (the start point) to the code (the end point) may differ quite significantly depending on the engineering practice of the developer. However, most research in the area of information fusion focuses on the latter end of this life cycle, i.e., on the algorithms of information fusion. The focus of this session is on the early part of the development process. Additionally, the scope is narrowed to formal approaches to specification and software development, i.e., research that investigates the specification of fusion systems in various ways, from natural language to formal languages. In particular, we solicit contributions that incorporate expert knowledge into the fusion process by means of high-level specifications, something that is known to be difficult for a low-level, statistical approach. This includes the application of linguistic descriptions to specify `fusion plans', i.e., what is to be fused, when, and how (a small illustrative sketch of such a plan appears after this paragraph). Finally, contributions are welcome that describe research on the formal development of information fusion systems, in which high-level specifications are transformed in a rigorous way into code.
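As a purely illustrative sketch (not a notation endorsed by the session), a 'fusion plan' might be captured as a small declarative structure stating what is fused, when, and how, which a rigorous development process would later refine into executable fusion code. All field names and values here are assumptions.

# Hypothetical sketch of a declarative "fusion plan": a high-level
# specification of what is to be fused, when, and how, which a rigorous
# development process could later refine into executable fusion code.
# All field names and values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FusionRule:
    sources: List[str]        # which inputs are combined
    trigger: str              # when fusion is performed
    method: str               # how the inputs are combined
    output: str               # the product of this rule

@dataclass
class FusionPlan:
    name: str
    rules: List[FusionRule] = field(default_factory=list)

# Example: fuse radar and EO tracks on every radar scan using covariance
# intersection, then fuse the result with ESM reports once per second.
plan = FusionPlan(
    name="air_picture_demo",
    rules=[
        FusionRule(sources=["radar_tracks", "eo_tracks"],
                   trigger="on radar scan",
                   method="covariance_intersection",
                   output="kinematic_tracks"),
        FusionRule(sources=["kinematic_tracks", "esm_reports"],
                   trigger="every 1 s",
                   method="bayesian_identity_update",
                   output="fused_air_picture"),
    ],
)

for rule in plan.rules:
    print(f"{rule.output}: fuse {rule.sources} {rule.trigger} via {rule.method}")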

Knowledge Base Role in Information Fusion:
Today's decision maker is caught in the middle because neither humans nor computers can effectively handle sophisticated knowledge. Complex problem domains, large search spaces, lots of data, and little information all contribute to the problem. Traditional database technology cannot handle this problem. To improve decision making and realize productivity gains, we need to change the kinds of things we do because of computers. This requires that we rethink our information processes and make them more intelligent, while continuing to assure the integrity and consistency of data, information, and knowledge. We must be able to access and use information and develop innovative intelligent information systems (IISs) that adapt and respond to changing environments and situations, and that do so in a way that allows maximum mobility, adequate performance, and information validity.

Knowledge base technology encompasses the use of computer programs that actually "understand" (represent explicitly and respond logically to) conceptual content. A knowledge base consists of declarative knowledge (facts, ontologies, implication rules). The structure and representation of an ontology is a key research question: for instance, when developing knowledge bases, should a single, unified ontology be used, or should smaller, more mobile ontologies be used to support specific applications? Efforts are now underway to address these and other knowledge base concerns; a minimal sketch of a knowledge base of facts and implication rules appears below.
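The following minimal sketch illustrates what a declarative knowledge base of facts and implication rules can look like, with naive forward chaining to derive new facts. The domain, predicates, and rules are invented for illustration and are not taken from the session text.

# Hypothetical illustration: a tiny knowledge base of declarative facts and
# implication rules with naive forward chaining. The domain, predicates, and
# rules are invented for this example.
facts = {
    ("is_a", "track_17", "fast_mover"),
    ("emitting", "track_17", "fire_control_radar"),
}

# Each rule: (list of premise patterns, conclusion pattern). "?x" is a variable.
rules = [
    ([("is_a", "?x", "fast_mover"), ("emitting", "?x", "fire_control_radar")],
     ("is_a", "?x", "potential_threat")),
    ([("is_a", "?x", "potential_threat")],
     ("requires", "?x", "operator_review")),
]

def match(pattern, fact, bindings):
    """Try to unify a single pattern with a fact under the current bindings."""
    new = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if p in new and new[p] != f:
                return None
            new[p] = f
        elif p != f:
            return None
    return new

def substitute(pattern, bindings):
    return tuple(bindings.get(p, p) for p in pattern)

def forward_chain(kb, rules):
    """Apply rules until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # find all variable bindings that satisfy every premise
            candidates = [{}]
            for premise in premises:
                candidates = [b2 for b in candidates for f in kb
                              if (b2 := match(premise, f, b)) is not None]
            for b in candidates:
                new_fact = substitute(conclusion, b)
                if new_fact not in kb:
                    kb.add(new_fact)
                    changed = True
    return kb

for fact in sorted(forward_chain(set(facts), rules)):
    print(fact)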

Data fusion is an increasingly important element of diverse weapon, intelligence, and commercial systems. Data fusion uses overlapping information to determine relationships among data; it involves combining information, in the broadest sense, to estimate or predict the state of some aspect of the universe, and these states may be represented in terms of attributive and relational states. Developing cost-effective multi-source information systems requires a standard method for specifying data fusion processing and control functions, interfaces, and associated data/knowledge bases.

This session will explore new techniques that can help the development of future Information Fusion Systems using Knowledge Base technology. This includes innovative techniques in Ontologies, Data Mining, Knowledge Acquisition, Knowledge Discovery, Knowledge Representation, Knowledge Base Fusion, Knowledge Tracking, Knowledge Scalability, and Knowledge-Intensive Problem Solving.

Papers will in large part be solicited from several ongoing projects supporting this technology, sponsored by both AFRL and DARPA. Papers from other authors performing research in this area are also welcome.

Probabilistic Multi-Hypothesis Tracking:
Automatic tracking of multiple targets in the presence of interferers and clutter is a very difficult problem because of the necessity of correctly associating sensor measurements with the various targets under track. The assignment problem as it is traditionally formulated, in its multidimensional (multi-scan) form, is known to be NP-hard. Assignment is the central theoretical problem, for the multi-target tracking problem reduces to a collection of independent single-target tracking problems if the assignments are known. Given the assignments, multi-target tracking has computational complexity that is only linearly proportional to the number of targets under track (a small single-scan assignment sketch appears below).
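To illustrate the role of assignment, the sketch below solves only the single-scan (two-dimensional) version of the problem, which is solvable in polynomial time, by associating one scan of measurements with predicted track positions so as to minimize total squared distance; once the assignment is fixed, each track can be updated by an independent single-target filter. The positions and noise values are invented for the example.

# Hypothetical illustration: single-scan measurement-to-track assignment by
# minimizing total squared distance (2D assignment, polynomial time). Once
# assignments are fixed, each target can be updated by an independent
# single-target filter. Positions and noise values are invented for the example.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

predicted = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])   # track predictions
truth_order = rng.permutation(len(predicted))                  # unknown ordering
measurements = predicted[truth_order] + rng.normal(0, 0.3, predicted.shape)

# cost[i, j] = squared distance from track i's prediction to measurement j
cost = ((predicted[:, None, :] - measurements[None, :, :]) ** 2).sum(axis=-1)

track_idx, meas_idx = linear_sum_assignment(cost)              # optimal 2D assignment
for t, m in zip(track_idx, meas_idx):
    print(f"track {t} <- measurement {m} (cost {cost[t, m]:.3f})")
    # ...each track would now be updated independently with its measurement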

The Probabilistic Multi-Hypothesis Tracking (PMHT) approach to tracking is related in a natural way to Bayesian classification methods that use Gaussian mixtures and derive mixture parameter estimates via the now well known Expectation-Maximization (EM) method. This close relationship between classification and multi-target tracking becomes more evident if the class probability density functions (PDFs) are intrinsically nonstationary and must be updated as they evolve over time. The analogy exploited by PMHT is that targets are classes, target measurements are sample data, and target posterior densities are class PDF estimates. Thus, the assignment of measurements to targets is little different, statistically speaking, from assigning sample data to components in a Gaussian mixture. Great flexibility in the PMHT algorithm is afforded by these connections to Gaussian mixtures and Bayesian classification.

The importance of the PMHT approach is that, by altering the definition of the assignment problem to allow targets to be assigned more than one measurement in a given scan, it solves the multi-target tracking problem with computational effort that is linearly proportional to the number of targets under track. In other words, PMHT solves the multi-target tracking problem with unknown assignments with the same computational complexity that would be required if the assignments were given.
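The following is a minimal, hedged sketch of one PMHT-style EM iteration for a single scan under linear-Gaussian assumptions: soft measurement-to-target weights are computed, collapsed into one synthetic measurement per target, and each target is then updated by an independent Kalman step, so the cost per iteration grows linearly with the number of targets. All numerical values are invented for illustration.

# Hypothetical sketch of one PMHT-style EM iteration for a single scan with
# linear-Gaussian position measurements (H = I). Soft assignment weights
# replace hard assignments, measurements are collapsed into one synthetic
# measurement per target, and each target is updated independently.
# All numbers (states, covariances, noise) are invented for illustration.
import numpy as np

R = np.eye(2) * 0.5                                    # measurement noise covariance
targets = np.array([[0.0, 0.0], [6.0, 1.0]])           # predicted target positions
P = [np.eye(2) * 2.0 for _ in targets]                 # predicted covariances
priors = np.array([0.5, 0.5])                          # mixing proportions
scan = np.array([[0.3, -0.2], [5.7, 1.4], [6.4, 0.8]]) # one scan of measurements

def gauss(z, mean, cov):
    d = z - mean
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(
        (2 * np.pi) ** len(z) * np.linalg.det(cov))

# E-step: posterior probability w[r, m] that measurement r came from target m
w = np.array([[priors[m] * gauss(z, targets[m], R)
               for m in range(len(targets))] for z in scan])
w /= w.sum(axis=1, keepdims=True)

# M-step: synthetic measurement and Kalman update, independently per target
for m in range(len(targets)):
    wm = w[:, m].sum()
    z_syn = (w[:, m, None] * scan).sum(axis=0) / wm     # weighted centroid
    R_syn = R / wm                                       # reduced effective noise
    S = P[m] + R_syn
    K = P[m] @ np.linalg.inv(S)                          # Kalman gain (H = I)
    targets[m] = targets[m] + K @ (z_syn - targets[m])
    P[m] = (np.eye(2) - K) @ P[m]
    print(f"target {m}: weight mass {wm:.2f}, updated position {targets[m].round(2)}")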

There is no free lunch, however, because relaxing the condition that no target can be assigned more than one measurement makes the "classic" PMHT algorithm difficult to initialize. This difficulty is shared by other algorithms as well, but it seems especially acute with PMHT methods. One fundamental difficulty stems from what might be termed "data poverty," but there are other factors as well that are the subject of active research. The purpose of this Special Session is to provide a forum for discussion of new PMHT-related work.

Computationally Intensive Distributed Sensor Networks:
Dr. SriKumar (DARPA - Sensor IT Program), together with Dr. S. S. Iyengar (Louisiana State University) and Dr. J. Chandra (GWU), proposes to organize a Fusion 2001 session on "Computationally Intensive Distributed Sensor Networks". The session will provide a technical forum on real-time grid-based collaborative signal processing, mobile agents in distributed signal processing and integration, adaptive clustering in dynamic sensor networks, sensor/user/application scalability, parallel algorithms for visual sensor data mining, self-organization in large-scale distributed sensor networks, localized routing in ad hoc wireless sensor networks, distributed data queries for sensor data mining, neural networks as tools for image sensor data mining, and many other topics.

This special session aims to bring together researchers from the distributed signal processing and low-power wireless sensor network communities to identify and address challenge problems in collaborative signal processing (CSP). Recent advances in MEMS, wireless networking, and distributed signal processing have enabled a new generation of sensor networks. However, unlike more traditional centralized ATR systems, distributed sensor nets are characterized by limited battery power, frequent node attrition, and variable data communication quality.

