BBN: DAML Homework Assignment 2
* draft of $Date: 2001/01/10 23:50:09 $ *
This page contains BBN integration team results for
DAML Homework Assignment 2.
1. What organizations are involved in the DAML program?
2. What organizations are involved in the DAML program as part of
   more than 1 project? [Answer: BBN, Kestrel]
3. What people involved in DAML work for more than 1 organization?
4. What U.S. states include DAML participants?
5. What individuals participating in the DAML program have degrees from
   universities also participating in the DAML program?
Distributed Processing Model
Even if they could be readily identified,
retrieving every potentially relevant DAML page for every query
is likely to incur unacceptable latencies.
We expect that DAML statements will be cached at four or more levels,
according to the accessibility of the pages and/or servers containing them:
- personal cache
- 1 or more workgroup caches
- organizational intranet cache
- public Internet cache
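One way to picture this hierarchy is a lookup that consults the most local cache first. This is only a sketch under our own assumptions: the cache names, the triple format, and the lookup interface are invented for illustration, not part of any DAML specification.

```python
# Sketch: consult DAML statement caches from most to least local.
# Cache levels and the statement format here are hypothetical.

def lookup(matches, caches):
    """Return statements satisfying `matches` from the first cache that has any."""
    for cache in caches:
        hits = [s for s in cache["statements"] if matches(s)]
        if hits:
            return cache["name"], hits
    return None, []

caches = [
    {"name": "personal", "statements": []},
    {"name": "workgroup", "statements": [("BBN", "memberOf", "DAML")]},
    {"name": "intranet", "statements": []},
    {"name": "internet", "statements": [("Kestrel", "memberOf", "DAML")]},
]

source, hits = lookup(lambda s: s[1] == "memberOf", caches)
# The first cache holding a match wins: here, the workgroup cache.
```

A real system would of course fall through to less local caches to gather *all* relevant statements, not stop at the first hit; stopping early is just the simplest policy to show.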
In traditional distributed database query processing,
we would expect to use some metric, like the number of relevant statements
in each cache, to determine the execution site for the query
(typically to minimize overall bandwidth consumption).
This assumes that each server is trusted to handle all of the data.
In an Internet environment,
we would generally not assume secure execution or transmission.
In general, this moves execution to the client or to its proxy on a
better-connected server machine.
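The placement decision above could be sketched as follows. The statement counts, trust flags, and site names are all illustrative assumptions of ours, not measurements:

```python
# Sketch: pick a query execution site. Prefer the trusted site holding
# the most relevant statements (to minimize data movement); fall back
# to the client when no server is trusted to see all of the data.

def choose_site(sites, client="client"):
    trusted = [s for s in sites if s["trusted"]]
    if not trusted:
        return client  # untrusted servers: execute at the client (or its proxy)
    return max(trusted, key=lambda s: s["relevant_statements"])["name"]

sites = [
    {"name": "workgroup", "relevant_statements": 120, "trusted": True},
    {"name": "internet", "relevant_statements": 5000, "trusted": False},
]
# Only trusted sites are candidates, so execution goes to the workgroup cache
# even though the public cache holds far more relevant statements.
```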
Queries will be stated in terms of a subject ontology, e.g. the
ontology we used for our homework submission.
Results will be returned in the subject ontology.
Content expressed in other ontologies will
be translated to the subject ontology.
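A minimal sketch of such translation, assuming it can be done with a per-predicate mapping table (the ontology prefixes, predicate names, and instance names below are invented for illustration; real ontology translation may need more than predicate renaming):

```python
# Sketch: translate statements from a foreign ontology into the subject
# ontology via a predicate mapping. Unmapped predicates pass through.

MAPPING = {
    "cmu:worksAt": "bbn:employedBy",
    "cmu:partOf":  "bbn:memberOf",
}

def translate(statements, mapping):
    return [(subj, mapping.get(pred, pred), obj)
            for subj, pred, obj in statements]

foreign = [("person1", "cmu:worksAt", "CMU")]
translated = translate(foreign, MAPPING)
# The cmu:worksAt predicate is rewritten as bbn:employedBy.
```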
Fundamentally, we think
of queries as graphs containing constants, variables, and possibly
other constraints that match against portions of a larger logical
DAML/RDF graph constructed by collecting DAML statements.
For example, Query 1 could be represented by such a graph.
Persistent knowledge bases like PARKA,
augmented with additional inference capabilities,
could be used to implement such graph matching.
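The graph-matching idea can be sketched as a naive matcher over triple patterns, where variables (written here with a leading `?`) bind to graph terms. This is our own toy backtracking matcher, not PARKA's actual algorithm, and the example triples are illustrative:

```python
# Sketch: match a query graph (triple patterns with variables) against
# a collected set of DAML/RDF statements, yielding variable bindings.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def match(patterns, graph, binding=None):
    binding = binding or {}
    if not patterns:
        yield binding
        return
    first, rest = patterns[0], patterns[1:]
    for triple in graph:
        b = dict(binding)
        ok = True
        for p, t in zip(first, triple):
            p = b.get(p, p)          # substitute an existing binding, if any
            if is_var(p):
                b[p] = t             # bind a free variable to this term
            elif p != t:
                ok = False           # constant mismatch: try the next triple
                break
        if ok:
            yield from match(rest, graph, b)

graph = [
    ("BBN", "memberOf", "DAML"),
    ("Kestrel", "memberOf", "DAML"),
    ("BBN", "locatedIn", "MA"),
]
# Query 1 analogue: which organizations are involved in the DAML program?
results = [b["?org"] for b in match([("?org", "memberOf", "DAML")], graph)]
# Yields a binding for each matching organization: BBN and Kestrel.
```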
Since we're involved in the DAML program, we assume that some facts about
DAML will be stored in our local (project and/or personal) cache.
This might include the directory
(manually constructed from the Homework Assignment 1 results)
which contains pointers to the instances representing projects and individuals
involved in DAML.
The directory's person and project properties
refer to the Person and Contract instances from our own
ontology (the subject ontology for our query).
It might be reasonable to infer that the other instances are "similar" --
it might also be desirable to have some way to directly express this.
Answering a query will typically involve the use of cached statements.
It may be useful to have a mechanism whereby a result can be "verified"
against the current versions of the pages involved.
This could be done by the client if all of the source URIs
were returned with the query results.
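A sketch of such verification, assuming each result statement can be re-checked against a fresh fetch of its source pages (the `fetch` function, the URIs, and the page contents below are stand-ins, not a real retrieval interface):

```python
# Sketch: verify query results against the current contents of their
# source pages. `fetch(uri)` stands in for retrieving and parsing a
# page's DAML statements.

def verify(result_statements, source_uris, fetch):
    """True if every result statement still appears on some source page."""
    current = set()
    for uri in source_uris:
        current.update(fetch(uri))
    return all(s in current for s in result_statements)

pages = {
    "http://example.org/daml/bbn": {("BBN", "memberOf", "DAML")},
    "http://example.org/daml/kestrel": {("Kestrel", "memberOf", "DAML")},
}
ok = verify([("BBN", "memberOf", "DAML")], pages.keys(), lambda u: pages[u])
# ok is True: the statement is still present on a source page.
```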
The following are additional tools that may aid in query processing:
- server-side interfaces that can return summary statistics about the
DAML content at a given site: number of instances of a given class,
number of statements associated with a given ontology, etc.
These statistics could be used for query optimization,
though there may be security issues even with such aggregate information.
- query execution trace tools that optionally record and/or display the
steps and decisions involved in processing a query.
These could be used for tuning, debugging, explanation, etc.
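The server-side statistics interface described above might look something like the following sketch. The statement format, the `rdf:type` convention for class membership, and the statistic names are our assumptions for illustration:

```python
# Sketch: compute per-site summary statistics over DAML content, of the
# sort a query optimizer might request before choosing an execution site.

from collections import Counter

def summarize(statements):
    """Count instances per class and statements per ontology prefix."""
    instances = Counter(obj for _, pred, obj in statements
                        if pred == "rdf:type")
    ontologies = Counter(pred.split(":")[0] for _, pred, _ in statements
                         if ":" in pred)
    return {"instances_per_class": dict(instances),
            "statements_per_ontology": dict(ontologies)}

site = [
    ("BBN", "rdf:type", "daml:Organization"),
    ("Kestrel", "rdf:type", "daml:Organization"),
    ("proj1", "rdf:type", "daml:Contract"),
    ("BBN", "daml:memberOf", "DAML"),
]
stats = summarize(site)
# e.g. two daml:Organization instances, three rdf-prefixed statements.
```

Returning only aggregates like these, rather than the statements themselves, is one way to limit the security exposure noted above, though even aggregates can leak information.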
$Id: bbn2.html,v 1.4 2001/01/10 23:50:09 mdean Exp $