A sketch of a notional sensor and information-processing string, showing how one might use fixed data structures and XML within a system, and DAML going out to multi-sensor, multi-level fusion by a C2 client (fusion processes may be clients and/or services).
A processing string within a sensor or sensor type may have a range of markup requirements, from hard-wired formats (no markup) to XML to DAML. At the low levels, where the sensor communicates with the sensor processor, it may be more important to save bandwidth and communicate with fixed, efficient formats; the processor always knows exactly what data to expect. Just where this break point occurs is a matter for research and experimentation.
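The bandwidth tradeoff above can be sketched concretely. This is a minimal illustration, not any actual ESG format: the contact fields (bearing, range, timestamp) and the element names are hypothetical, chosen only to compare a fixed binary layout against self-describing markup.

```python
import struct
import xml.etree.ElementTree as ET

# Hypothetical contact report: bearing (deg), range (m), timestamp (s).
bearing, rng, ts = 47.5, 1200.0, 1625000000.0

# Fixed format: the processor knows the layout in advance
# (three network-order doubles, exactly 24 bytes).
fixed = struct.pack("!ddd", bearing, rng, ts)

# XML markup: self-describing, but several times larger on the wire.
root = ET.Element("contact")
ET.SubElement(root, "bearing").text = str(bearing)
ET.SubElement(root, "range").text = str(rng)
ET.SubElement(root, "timestamp").text = str(ts)
marked_up = ET.tostring(root)

print(len(fixed), len(marked_up))  # the markup is several times larger
```

The fixed form wins on bandwidth precisely because the layout is agreed in advance, which is why it suits the sensor-to-processor link; the markup form pays in bytes for carrying its own description.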
When the processor is fixed but the data may take a variety of forms, XML may be the answer. All known types of data are accommodated, and there is no need to redesign the processor when a new data type is added (this maintenance and development benefit is a valuable side effect of DAML/XML markup).
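A brief sketch of why no redesign is needed, under assumed element names (`range`, `rangeExtent` are illustrative, not from any real schema): a reader that walks tagged fields generically simply picks up a newly added field, where a fixed-layout parser would have required a code change.

```python
import xml.etree.ElementTree as ET

def read_fields(xml_text):
    """Generic reader: collect whatever tagged fields arrive.
    Adding a new field requires no parser redesign."""
    return {child.tag: child.text for child in ET.fromstring(xml_text)}

old_msg = "<contact><range>1200</range></contact>"
# A later sensor version adds a field; the same reader handles it.
new_msg = "<contact><range>1200</range><rangeExtent>35</rangeExtent></contact>"

print(read_fields(old_msg))
print(read_fields(new_msg))
```

The point is that the markup, not the processor's compiled-in layout, carries the structure, so data types can be added without touching deployed readers.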
It is important to recall that within ESG, sensors are not stand-alone but embedded in a networked environment where many heterogeneous sensors and information sources, including structured databases, are available. DAML and Semantic Resources are essential to achieving the higher levels of fusion and inference necessary for decision support (products are indicated in red: objects, situations, impacts, etc.).
When we move to the kind of interoperability among heterogeneous systems that we are achieving with a Jini-based environment (the use of the CoABS Grid that ESG is experimenting with), the client needs to be given enough information to learn how to use a data-producing service. This requires more than the API, which allows the client to *read* the data but does not tell it what the data means. With XML, I can tell the client that a certain piece of data is "Range Extent". But to be useful to a classifier, we need to know how range extent is calculated. DAML offers the added capability of passing a reference (the Semantic Resources shown in the diagram) to much more detailed information about the algorithm used to calculate range extent than it would be practical to pass as part of the data stream itself. Or it might pass references to known distributions of range extent for various types of targets, against which the data value may be compared for decision-making purposes. This type of semantic content (what the fusion community is calling "pedigree and context") is what will allow the general benefits of markup to be extended so that clients can learn how to use the data.
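The pass-a-reference idea can be sketched as follows. This is a schematic illustration only: the URIs, field names, target classes, and distribution values are all invented for the example, and the "registry" stands in for dereferencing a networked semantic resource, which a real client would fetch rather than hold locally.

```python
# A datum carries references to semantic resources rather than the full
# algorithm description. All URIs and values here are hypothetical.
datum = {
    "value": 35.0,
    "property": "RangeExtent",
    "algorithm": "http://example.org/onto/rangeExtent#calcMethod",
    "distributions": "http://example.org/onto/rangeExtent#classDistributions",
}

# Stand-in for dereferencing the semantic resource: known range-extent
# intervals (min, max in metres) for illustrative target classes.
registry = {
    "http://example.org/onto/rangeExtent#classDistributions": {
        "fishing_vessel": (10.0, 40.0),
        "cargo_ship": (100.0, 300.0),
    }
}

def classify(datum):
    """Compare the data value against the referenced distributions
    and return the target classes consistent with it."""
    dists = registry[datum["distributions"]]
    return [cls for cls, (lo, hi) in dists.items() if lo <= datum["value"] <= hi]

print(classify(datum))
```

The design point is that the data stream stays small: only the value and the references travel with the measurement, while the pedigree and context live in resources the client consults when it needs them.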
Key points are:
1) Semantic resources must be available to allow automated fusion of sensed information at the C2 fusion client or fusion service level.
2) Markup offers a development and maintenance benefit over hard-wired, hard-coded data structures.
3) It is important to retain a top-level decision-context view, and to experiment to understand where context needs to be, and can be, retained in markup vs. other mechanisms for retention.