Re: DAML-Time

From: pat hayes (phayes@ai.uwf.edu)
Date: 03/11/03

    Some comments in-line below.
    
    A quick overall comment: as a 'standard' ontology, this sometimes 
    seems to me to fall between two stools. On the one hand it goes to 
    considerable lengths to support a variety of rather arcane and 
    academic alternatives, eg the two treatments of infinity and the 
    nonlinear time options; but considered as a general theory it often 
    cuts corners and seems if anything too simple in places. On the other 
    hand, considered as a pragmatically useful ontology it doesn't pay 
    enough attention to existing standards and is probably way too 
    complicated for most purposes, particularly where it gets into axioms 
    with three or four nested quantifiers. I suspect that most users 
    would be happier with a quick-and-dirty version which just took a 
    position on infinity and linearity and density, one way or the other, 
    was aligned with things like ISO and XSD, and allowed people to get 
    simple calendaring up and running quickly; and had in the background, 
    as it were, a more 'advanced' version which allowed one to handle 
    leap seconds and make all kinds of subtle distinctions without 
    getting confused. But then the latter had better REALLY be 
    general-purpose, so that it wouldn't break when considering BC dates, 
    or tree-rings, or relativistic corrections for GPS transmissions, or 
    Ramadan.
    
    Pat
    ----------
    
    >Attached is the latest writeup on the DAML-Time ontology.  The
    >principal differences from previous versions are as follows:
    >
    >1.  A more careful treatment of the two approaches to infinite time --
    >     points at infinity or not.  Some changes elsewhere to accommodate
    >     that.
    >
    >2.  Selection of options in the ontology is now accomplished simply by
    >     asserting a single proposition -- e.g., Total-Order() to get total
    >     ordering.  The axioms for that option are conditionalized on these
    >     propositions.
    
    Neat.
    
    >3.  There is a treatment of granularity.  Comments solicited.
    
    See below. I think it is too simplistic as it stands.
    
    >4.  There is a treatment of temporal aggregates.  Comments solicited.
    
    On the other hand, this seems too complicated. Maybe I'm not seeing 
    the reasons for the complexity, but using set theory seems like 
    overkill right at the start. BTW, the 'clocks and calendars' section 
    4 of my old time-survey memo has a treatment of some of these issues.
    
    BTW, a general point: some of the complexity of parts of this can be 
    eliminated by using CL rather than a conventional FO syntax.
    
    >
    >The things that remain to be done in the abstract ontology are as
    >follows:
    
    I am not happy with the account here of the relationship between 
    durations and calendars. On the one hand, it repeats a common mistake 
    in treating months as duration measures (its being a 'standard' 
    doesn't make it right, and repeating an error doesn't make it 
    better), and in not properly handling leap seconds and other 
    awkward-but-real stuff; on the other hand, it seems overly 
    complicated.
    
    I really like your treatment of time zones, though, best I've ever seen.
    
    >1.  Temporal arithmetic.
    
    That ought to just be arithmetic, if durations were done properly.
    
    >2.  Rates.
    
    Similarly.
    
    >
    >3.  Deictics like "now", "today", "ago", etc.
    >
    >4.  Vague terms, like "short", "soon", "recent", etc.
    
    Both of these seem like much more general issues than just to do with time.
    
    >
    >The aim in the last two areas is to arrive at an ontology that will
    >support useful annotation of natural language in web pages.
    >
    >You might also look at the DAML-Time web page:
    >
    >     http://www.cs.rochester.edu/~ferguson/daml/
    >
    >-- Jerry
    >
    >                        A DAML Ontology of Time
    >
    >
    >                              Jerry R. Hobbs
    >
    >                        with contributions from
    >
    >        George Ferguson, James Allen, Richard Fikes, Pat Hayes,
    >    Drew McDermott, Ian Niles, Adam Pease, Austin Tate, Mabry Tyson,
    >			and Richard Waldinger
    >
    >                              November 2002
    >
    >
    >                            1. Introduction
    >
    >A number of sites, DAML contractors and others, have developed
    >ontologies of time (e.g., DAML-S, Cycorp, CMU, Kestrel, Teknowledge).
    >A group of us have decided to collaborate to develop a representative
    >ontology of time for DAML, which could then be used as is or
    >elaborated on by others needing such an ontology.  It is hoped that
    >this collaboration will result in an ontology that will be adopted
    >much more widely than any single site's product would be.
    >
    >We envision three aspects to this effort:
    >
    >     1.  An abstract characterization of the concepts and their
    >         properties, expressed in first-order predicate calculus.
    >
    >     2.  A translation of the abstract ontology into DAML code, to
    >         whatever extent possible given the current state of DAML
    >         expressivity.
    >
    >     3.  Mappings between the DAML ontology and individual sites'
    >         ontologies.
    >
    >DAML is under development and is thus a moving target, and that is why
    >separating 1 and 2 is desirable.  Level 1 can stabilize before DAML
    >does.  A mapping in 3 may be an isomorphism, or it may be something
    >more complicated.  The reason for 3 is so DAML users can exploit the
    >wide variety of resources for temporal reasoning that are available.
    >Moreover, it will aid the widespread use of the ontology if it can be
    >linked easily to, for example, the temporal portion of Teknowledge's
    >IEEE Standard Upper Ontology effort or to Cycorp's soon-to-be widely
    >used knowledge base.
    
    Miaou?
    
    >
    >The purposes of the temporal ontology are both for expressing temporal
    >aspects of the contents of web resources and for expressing
    >time-related properties of web services.
    >
    >The following document outlines the principal features of a
    >representative DAML ontology of time.  It is informed by ontology
    >efforts at a number of sites and reflects but elaborates on a
    >tentative consensus during discussions at the last DAML meeting.  The
    >first three areas are spelled out in significant detail.  The last three
    >are just sketches of work to be done.
    >
    >There are a number of places where it is stated that the ontology is
    >silent about some issue.  This is done to avoid controversial choices
    >in the ontology where more than one treatment would be reasonable and
    >consistent.  Often these issues involve identifying a one-dimensional
    >entity and a zero-dimensional entity with one another.
    
    Nice observation. I wonder if we can make that apply to the 
    points-at-infinity case?
    
    >
    >In general, functions are used where they are total and have a unique
    >value; predicates are used otherwise.  The order of arguments usually
    >follows the subject-object-object of preposition order in the most
    >natural use in an English sentence (except for "Hath", where
    >topicalization applies).
    >
    >A note on notation: Conjunction (&) takes precedence over
    >implication (-->) and equivalence (<-->).  Formulas are assumed to be
    >universally quantified on the variables appearing in the antecedent of
    >the highest-level implication.  Thus,
    >
    >         p1(x) & p2(y) --> q1(x,y) & q2(y)
    >
    >is to be interpreted as
    >
    >         (A x,y)[[p1(x) & p2(y)] --> [q1(x,y) & q2(y)]]
    >
    >At the end of each section there is a subsection on MAPPINGS.  These
    >are sketches of the relations between some highly developed temporal
    >ontologies and the one outlined here.
    >
    >                   2. Topological Temporal Relations
    >
    >2.1.  Instants and Intervals:
    >
    >There are two subclasses of temporal-entity:  instant and interval.
    >
    ><axiom id="2.1-1">
    >         instant(t) --> temporal-entity(t)
    ></axiom>
    >
    ><axiom id="2.1-2">
    >         interval(T) --> temporal-entity(T)
    ></axiom>
    >
    >These are the only two subclasses of temporal entities.
    ><axiom id="2.1-3">
    >         (A T)[temporal-entity(T) --> [instant(T) v interval(T)]]
    ></axiom>
    >(In what follows, lower case t is used for instants, upper case T for
    >intervals and for temporal-entities unspecified as to subtype.  This
    >is strictly for the reader's convenience, and has no formal
    >significance.)
    
    You might remark that intervals are, intuitively, things with 
    'extent' and instants are, intuitively, point-like in that they have 
    no interior points.
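    For concreteness, here is a rough Python sketch of the 2.1 sort 
    structure (class and attribute names are mine, not part of the 
    ontology; a None endpoint stands in for a beginning or end that 
    does not exist):

```python
# A rough sketch of the Section 2.1 sort structure.  Class and
# attribute names are mine, not part of the ontology; a None endpoint
# stands in for a beginning or end that does not exist.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Instant:
    coord: float                    # stand-in timeline coordinate
    @property
    def begin(self): return self    # axiom 2.1-6: an instant begins itself
    @property
    def end(self): return self      # axiom 2.1-7: ...and ends itself

@dataclass(frozen=True)
class Interval:
    begin: Optional[Instant]
    end: Optional[Instant]
    def is_proper(self) -> bool:
        # axiom 2.1-14: the beginning and end, where they both exist, differ;
        # so a half-infinite or infinite interval comes out proper, as stated.
        return self.begin is None or self.end is None or self.begin != self.end

T = Interval(Instant(0.0), Instant(5.0))
print(T.is_proper())                        # True
print(Instant(1.0).begin == Instant(1.0))   # True: pointlike
```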
    
    >_begins_ and _ends_ are relations between instants and temporal
    >entities.
    ><axiom id="2.1-4">
    >         begins(t,T) --> instant(t) & temporal-entity(T)
    ></axiom>
    >
    ><axiom id="2.1-5">
    >         ends(t,T) --> instant(t) & temporal-entity(T)
    ></axiom>
    >
    >For convenience, we can say that the beginning and end of an instant is
    >itself.
    
    Slight danger lurks here when defining Allen-style interval relations 
    in terms of endpoints, eg instants meet themselves. I expect you 
    already noticed that.
    
    ><axiom id="2.1-6">
    >         instant(t) --> begins(t,t)
    ></axiom>
    >
    ><axiom id="2.1-7">
    >         instant(t) --> ends(t,t)
    ></axiom>
    
    Hmmm. This is going to bite you, I suspect. Might be more agnostic to 
    reverse these implications. Then if something is self-ending, it must 
    be pointlike, but you don't insist on the self-ending property for 
    all points.  Basically, if it's not a point then its begin is 
    different from its end.
    
    >The beginnings and ends of temporal entities, if they exist, are
    >unique. 
    ><axiom id="2.1-8">
    >         temporal-entity(T) & begins(t1,T) & begins(t2,T) --> t1=t2
    ></axiom>
    >
    ><axiom id="2.1-9">
    >         temporal-entity(T) & ends(t1,T) & ends(t2,T) --> t1=t2
    ></axiom>
    >As will be seen in Section 2.4, in one approach to infinite intervals,
    >a positively infinite interval has no end, and a negatively infinite
    >interval has no beginning.  Hence, we use the relations "begins" and
    >"ends" in the core ontology, rather than defining functions
    >"beginning-of" and "end-of", since the functions would not be total.
    >They can be defined in an extension of the core ontology that posits
    >instants at positive and negative infinity.
    >
    >_inside_ is a relation between an instant and an interval.
    >
    ><axiom id="2.1-10">
    >         inside(t,T) --> instant(t) & interval(T)
    ></axiom>
    >
    >This concept of inside is not intended to include beginnings and ends of
    >intervals, as will be seen below.
    >
    >It will be useful in characterizing clock and calendar terms to have a
    >relation between instants and intervals that says that the instant is
    >inside or the beginning of the interval.
    ><axiom id="2.1-11">
    >         (A t,T)[begins-or-in(t,T) <--> [begins(t,T) v inside(t,T)]]
    ></axiom>
    >
    >_time-between_ is a relation among a temporal entity and two
    >instants.
    ><axiom id="2.1-12">
    >         time-between(T,t1,t2)
    >             --> temporal-entity(T) & instant(t1) & instant(t2)
    ></axiom>
    >The two instants are the beginning and end points of the temporal entity.
    ><axiom id="2.1-13">
    >         time-between(T,t1,t2) <--> begins(t1,T) & ends(t2,T)
    ></axiom>
    >
    >The ontology is silent about whether the time from t to t, if it
    >exists, is identical to the instant t.
    
    I don't think it is quite silent. Suppose t is an instant, then by 
    2.1-6 and 7, begins(t,t) and ends(t,t) , so by 2.1-13, 
    time-between(t,t,t). See comment above.
    
    >
    >The ontology is silent about whether intervals _consist of_ instants.
    >
    >The core ontology is silent about whether intervals are uniquely
    >determined by their beginnings and ends.  This issue is dealt with in
    >Section 2.4.
    >
    >We can define a proper-interval as one whose beginning and end are not
    >identical.
    ><axiom id="2.1-14">
    >         (A T)[proper-interval(T)
    >               <--> interval(T)
    >                    & (A t1,t2)[begins(t1,T) & ends(t2,T) --> t1 =/= t2]]
    ></axiom>
    >A half-infinite or infinite interval, by this definition, is proper.
    >
    >The ontology is silent about whether there are any intervals that are
    >not proper intervals.
    >
    >2.2.  Before:
    >
    >There is a before relation on temporal entities, which gives
    >directionality to time.  If temporal-entity T1 is before
    >temporal-entity T2, then the end of T1 is before the beginning of T2.
    >Thus, before can be considered to be basic to instants and derived for
    >intervals.
    >
    ><axiom id="2.2-1">
    >	(A T1,T2)[before(T1,T2)
    >	    <--> (E t1,t2)[ends(t1,T1) & begins(t2,T2) & before(t1,t2)]]
    ></axiom>
    >
    >The before relation is anti-reflexive, anti-symmetric and transitive.
    
    What about circular (seasonal) time? This is a coherent way of 
    thinking which is ruled out right at the top, which seems a pity.
    
    In many ways, the circular time model is more general than the linear 
    one, eg there are 16 (not 13) interval relations with some very 
    pretty symmetries between them (a power of 2, not a prime) ; and you 
    get the linear time model just by removing a single point from the 
    circle, thereby forbidding intervals to contain it.
    
    >
    ><axiom id="2.2-2">
    >         before(T1,T2) --> T1 =/= T2
    ></axiom>
    ><axiom id="2.2-3">
    >         before(T1,T2) --> ~before(T2,T1)
    ></axiom>
    ><axiom id="2.2-4">
    >         before(T1,T2) & before(T2,T3) --> before(T1,T3)
    ></axiom>
    >
    >The end of an interval is not before the beginning of the interval.
    ><axiom id="2.2-5">
    >         interval(T) & begins(t1,T) & ends(t2,T) --> ~before(t2,t1)
    ></axiom>
    >
    >The beginning of a proper interval is before the end of the interval.
    ><axiom id="2.2-6">
    >         proper-interval(T) & begins(t1,T) & ends(t2,T)
    >	    --> before(t1,t2)
    ></axiom>
    >
    >The converse of this is a theorem.
    >
    >	begins(t1,T) & ends(t2,T) & before(t1,t2)
    >	    --> proper-interval(T)
    >
    >If one instant is before another, there is a time between them.
    >
    ><axiom id="2.2-7">
    >         instant(t1) & instant(t2) & before(t1,t2)
    >             --> (E T) time-between(T,t1,t2)
    ></axiom>
    >
    >The ontology is silent about whether there is a time from t to t.
    
    t is from t to t. See above.
    
    >
    >If an instant is inside a proper interval, then the beginning of the
    >interval is before the instant, which is before the end of the
    >interval.  This is the principal property of "inside".
    ><axiom id="2.2-8">
    >         inside(t,T) & begins(t1,T) & ends(t2,T)
    >	     --> before(t1,t) & before(t,t2)
    ></axiom>
    >
    >The converse of this condition is called Convexity and is discussed in
    >Section 2.4.
    >
    >The relation "after" is defined in terms of "before".
    >
    ><axiom id="2.2-9">
    >         after(T1,T2) <--> before(T2,T1)
    ></axiom>
    >
    >The basic ontology is silent about whether time is linearly ordered.
    >Thus it supports theories of time, such as the branching futures
    >theory, which conflate time and possibility or knowledge.
    
    Bad play, IMO. Confuses temporal with other issues, and repeats an old error.
    
    >  This issue
    >is discussed further in Section 2.4.
    >
    >The basic ontology is silent about whether time is dense, that is,
    >whether between any two instants there is a third instant.  Thus it
    >supports theories in which time consists of discrete instants.  This
    >issue is discussed further in Section 2.4.
    >
    >2.3.  Interval Relations:
    >
    >The relations between intervals defined in Allen's temporal interval
    >calculus (Allen, 1984; Allen and Kautz, 1985; Allen and Hayes, 1989;
    >Allen and Ferguson, 1997) can be defined in a relatively
    >straightforward fashion in terms of "before" and identity on the
    >beginning and end points.  It is a bit more complicated than the
    >reader might at first suspect, because allowance has to be made for
    >the possibility of infinite intervals.  Where one of the intervals
    >could be infinite, the relation between the end points has to be
    >conditionalized on their existence.
    
    Well, hey, it also falls totally apart in branching time, and looks 
    rather different in circular time, so you ought to point out that all 
    these definitions rely for their intuitive correctness on having time 
    be totally ordered and infinite.
    
    >The standard interval calculus assumes all intervals are proper, and
    >we will do that here too. The definitions of the interval relations in
    >terms of "before" relations among their beginning and end points, when
    >they exist, are given by the following axioms.  In these axioms, t1
    >and t2 are the beginning and end of interval T1; t3 and t4 are the
    >beginning and end of T2.
    >
    ><axiom id="2.3-1">
    >         (A T1,T2)[int-equals(T1,T2)
    >                       <--> [proper-interval(T1) & proper-interval(T2)
    >                             & (A t1)[begins(t1,T1) <--> begins(t1,T2)]
    >			    & (A t2)[ends(t2,T1) <--> ends(t2,T2)]]]
    ></axiom>
    >
    ><axiom id="2.3-2">
    >         int-before(T1,T2) <--> proper-interval(T1) & proper-interval(T2)
    >                                & before(T1,T2)
    ></axiom>
    >
    ><axiom id="2.3-3">
    >         (A T1,T2)[int-meets(T1,T2)
    >                       <--> [proper-interval(T1) & proper-interval(T2)
    >                             & (E t)[ends(t,T1) & begins(t,T2)]]]
    ></axiom>
    >
    ><axiom id="2.3-4">
    >         (A T1,T2)[int-overlaps(T1,T2)
    >                       <--> [proper-interval(T1) & proper-interval(T2)
    >			    & (E t2,t3)[ends(t2,T1) & begins(t3,T2)
    >			           & before(t3,t2)
    >                                    & (A t1)[begins(t1,T1) --> before(t1,t3)]
    >                                    & (A t4)[ends(t4,T2) --> before(t2,t4)]]]]
    ></axiom>
    >
    ><axiom id="2.3-5">
    >         (A T1,T2)[int-starts(T1,T2)
    >                       <--> [proper-interval(T1) & proper-interval(T2)
    >			    & (E t2)[ends(t2,T1)
    >                                    & (A t1)[begins(t1,T1) <--> begins(t1,T2)]
    >                                    & (A t4)[ends(t4,T2) --> before(t2,t4)]]]]
    ></axiom>
    >
    ><axiom id="2.3-6">
    >         (A T1,T2)[int-during(T1,T2)
    >                       <--> [proper-interval(T1) & proper-interval(T2)
    >			    & (E t1,t2)[begins(t1,T1) & ends(t2,T1)
    >                                    & (A t3)[begins(t3,T2) --> before(t3,t1)]
    >                                    & (A t4)[ends(t4,T2) --> before(t2,t4)]]]]
    ></axiom>
    ><axiom id="2.3-7">
    >         (A T1,T2)[int-finishes(T1,T2)
    >                       <--> [proper-interval(T1) & proper-interval(T2)
    >			    & (E t1)[begins(t1,T1)
    >                                    & (A t3)[begins(t3,T2) --> before(t3,t1)]
    >                                    & (A t4)[ends(t4,T2) <--> ends(t4,T1)]]]]
    ></axiom>
    >
    >The inverse interval relations can be defined in terms of these relations.
    ><axiom id="2.3-8">
    >         int-after(T1,T2) <--> int-before(T2,T1)
    ></axiom>
    ><axiom id="2.3-9">
    >         int-met-by(T1,T2) <--> int-meets(T2,T1)
    ></axiom>
    ><axiom id="2.3-10">
    >         int-overlapped-by(T1,T2) <--> int-overlaps(T2,T1)
    ></axiom>
    ><axiom id="2.3-11">
    >         int-started-by(T1,T2) <--> int-starts(T2,T1)
    ></axiom>
    ><axiom id="2.3-12">
    >         int-contains(T1,T2) <--> int-during(T2,T1)
    ></axiom>
    ><axiom id="2.3-13">
    >         int-finished-by(T1,T2) <--> int-finishes(T2,T1)
    ></axiom>
    
    Presumably it ought to be a theorem that these 12 plus equality are 
    exclusive *and exhaustive*. Any idea if it in fact is (a theorem, 
    that is)?
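    For what it's worth, the claim is at least machine-checkable by brute 
    force in one small finite, totally ordered model (a Python sketch; it 
    checks one model, it proves no theorem):

```python
# Brute-force model check: over finite proper intervals with integer
# endpoints, the 13 Allen relations (the 6 above, their 6 inverses,
# plus int-equals) are pairwise exclusive and jointly exhaustive.
# This checks one small totally ordered model; it proves no theorem.

from itertools import product

def relations(a, b):
    """Names of all Allen relations holding between proper intervals
    a = (s1, e1) and b = (s2, e2)."""
    (s1, e1), (s2, e2) = a, b
    rels = {
        "equals":        s1 == s2 and e1 == e2,
        "before":        e1 < s2,
        "after":         e2 < s1,
        "meets":         e1 == s2,
        "met-by":        e2 == s1,
        "overlaps":      s1 < s2 < e1 < e2,
        "overlapped-by": s2 < s1 < e2 < e1,
        "starts":        s1 == s2 and e1 < e2,
        "started-by":    s1 == s2 and e2 < e1,
        "during":        s2 < s1 and e1 < e2,
        "contains":      s1 < s2 and e2 < e1,
        "finishes":      e1 == e2 and s2 < s1,
        "finished-by":   e1 == e2 and s1 < s2,
    }
    return [name for name, holds in rels.items() if holds]

pairs = [(s, e) for s, e in product(range(6), repeat=2) if s < e]
for a, b in product(pairs, pairs):
    assert len(relations(a, b)) == 1, (a, b, relations(a, b))
print("exclusive and exhaustive over this finite model")
```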
    
    >In addition, it will be useful below to have a single predicate for
    >"starts or is during".  This is called "starts-or-during".
    >
    ><axiom id="2.3-14">
    >         starts-or-during(T1,T2)
    >             <--> [int-starts(T1,T2) v int-during(T1,T2)]
    ></axiom>
    >
    >It will also be useful to have a single predicate for intervals
    >intersecting in at most an instant.
    >
    ><axiom id="2.3-15">
    >         nonoverlap(T1,T2)
    >             <--> [int-before(T1,T2) v int-after(T1,T2) v int-meets(T1,T2)
    >                      v int-met-by(T1,T2)]
    ></axiom>
    
    You might point out that these have alternative definitions in terms 
    of endpoint orderings. EG starts-or-during is just (begins-no-earlier 
    and ends-before), which isn't even a disjunction, and nonoverlap is 
    (end-before-or-equal-beginning) OR (the other way round).
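    A quick enumeration over finite proper intervals bears the endpoint 
    reformulations out (Python; function names are mine):

```python
# Enumeration check of the endpoint reformulations over finite proper
# intervals (function names are mine): starts-or-during(T1,T2) is just
# "T1 begins no earlier than T2 and ends strictly before it", and
# nonoverlap is "one interval ends at or before the other begins".

from itertools import product

def starts_or_during(a, b):
    (s1, e1), (s2, e2) = a, b
    starts = s1 == s2 and e1 < e2      # int-starts, axiom 2.3-5
    during = s2 < s1 and e1 < e2       # int-during, axiom 2.3-6
    return starts or during            # axiom 2.3-14

def nonoverlap(a, b):
    (s1, e1), (s2, e2) = a, b
    # axiom 2.3-15: before v after v meets v met-by
    return e1 < s2 or e2 < s1 or e1 == s2 or e2 == s1

pairs = [(s, e) for s, e in product(range(6), repeat=2) if s < e]
for a, b in product(pairs, pairs):
    (s1, e1), (s2, e2) = a, b
    assert starts_or_during(a, b) == (s2 <= s1 and e1 < e2)
    assert nonoverlap(a, b) == (e1 <= s2 or e2 <= s1)
print("endpoint reformulations agree on this finite model")
```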
    
    >
    >So far, the concepts and axioms in the ontology of time would be
    >appropriate for scalar phenomena in general.
    >
    >2.4.  Optional Extensions:
    >
    >In the basic ontology we have tried to remain neutral with respect to
    >controversial issues, while producing a consistent and useable
    >axiomatization.  In specific applications one may want to have
    >stronger properties and thus take a stand on some of these issues.  In
    >this section, we describe some options, with the axioms that would
    >implement them.  These axioms and any subsequent theorems depending on
    >them are prefaced with a 0-argument proposition that says the option
    >is being exercised.
    
    Neat trick.
    
    >Thus the axiom for total ordering is prefaced by
    >the proposition
    >
    >	Total-Order() -->
    >
    >Then to adopt the option of total ordering, one merely has to assert
    >
    >	Total-Order()
    >
    >Total or Linear Ordering: In many applications, if not most, it will
    >be useful to assume that time is linearly or totally ordered.  The
    >axiom that expresses this is as follows:
    >
    ><axiom id="2.4-1">
    >       Total-Order() -->
    >       (A t1,t2)[instant(t1) & instant(t2)
    >                    --> [before(t1,t2) v t1 = t2 v before(t2,t1)]]
    ></axiom>
    >
    >This eliminates models of time with branching futures and other
    >conflations of time and possibility or limited knowledge.
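    The flag trick has a direct analogue in code, by the way: the 
    0-argument proposition is just a boolean, and a conditioned axiom is 
    only enforced when the flag is asserted. A toy Python sketch (all 
    names mine, purely for illustration):

```python
# The option-flag trick, in miniature: a 0-argument proposition is just
# a boolean, and an axiom conditioned on it is only enforced when the
# flag is asserted.  (All names here are mine, purely for illustration.)

def total_order_holds(instants, before):
    """The consequent of axiom 2.4-1: trichotomy over a finite model."""
    return all(before(t1, t2) or t1 == t2 or before(t2, t1)
               for t1 in instants for t2 in instants)

def check(conditioned_axioms, options):
    """Enforce only those axioms whose option flag has been asserted."""
    return all(axiom() for flag, axiom in conditioned_axioms if flag in options)

axioms = [("Total-Order", lambda: total_order_holds([1, 2, 3], lambda a, b: a < b))]
print(check(axioms, set()))              # True: option not asserted, axiom idle
print(check(axioms, {"Total-Order"}))    # True: the integers are totally ordered
```

    (Running the same check with set inclusion as "before" fails, of 
    course: a genuinely partial order falsifies the Total-Order axiom 
    once the flag is asserted.)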
    >
    >Infinity: There are two common ways of allowing infinitely long
    >intervals. Both are common enough that it is worth a little effort to
    >construct the time ontology in a way that accommodates both.  The
    >statements of the axioms have been complicated modestly in order to
    >localize the difference between the two approaches to the choice
    >between two pairs of simple existence axioms, which are themselves
    >conditioned on 0-argument propositions indicating the choice of that
    >option.
    >
    >In the first approach, one posits time instants at positive and
    >negative infinity.  Half-infinite intervals are then intervals that
    >have one of these as an endpoint.  Rather than introduce constants for
    >these in the core ontology, we will have two predicates -- "posinf"
    >and "neginf" -- which are true of only these points.  The 0-argument
    >proposition corresponding to the choice of this approach will be
    >
    >	Pts-at-Inf()
    >
    >In the second approach, there are intervals that have no beginning
    >and/or end.  "posinf-interval(T)" says that T is a half-infinite
    >interval with no end.  "neginf-interval(T)" says that T is a
    >half-infinite interval with no beginning.  The 0-argument proposition
    >corresponding to this option will be
    >
    >	No-Pts-at-Inf()
    >
    >In the first approach, "posinf-interval" and "neginf-interval" will
    >not be true of anything.  In the second approach "posinf" and "neginf"
    >will not be true of anything.
    >
    >The axioms that specify the properties of posinf, neginf,
    >posinf-interval, and neginf-interval will be conditioned on the
    >existence of such temporal entities.  Thus, if an approach does not
    >include them, the condition will never be satisfied.  These axioms
    >can thus be part of the core theory.
    >
    >Axioms in the core theory will make the two approaches mutually
    >exclusive.
    >
    >Then which approach one takes amounts to which of the two pairs of
    >existence axioms one uses.  This choice is further localized to the
    >decision between asserting "Pts-at-Inf()" and asserting
    >"No-Pts-at-Inf()".
    >
    >The arguments of the predicates "posinf" and "neginf", if they exist,
    >are instants. 
    ><axiom id="2.4-2">
    >	posinf(t) --> instant(t)
    ></axiom>
    ><axiom id="2.4-3">
    >	neginf(t) --> instant(t)
    ></axiom>
    >
    >The principal property of the point at positive infinity is that every
    >other instant is before it. 
    ><axiom id="2.4-4">
    >          (A t,t1)[instant(t1) & posinf(t) --> [before(t1,t) v t1 = t]]
    ></axiom>
    >
    >The next axiom entails that there are infinitely many instants after
    >any given instant other than the point at positive infinity.
    ><axiom id="2.4-5">
    >         (A t1)[instant(t1) & ~posinf(t1)
    >                    --> (E t2)[instant(t2) & before(t1,t2)]]
    ></axiom>
    >Note that these two axioms are valid in an approach that does not
    >admit a point at positive infinity; the antecedent of Axiom 2.4-4 will
    >never be satisfied, and the second conjunct in the antecedent of Axiom
    >2.4-5 will always be satisfied, guaranteeing that after every instant
    >there will be another instant.
    >
    >The principal property of the point at negative infinity is that it is
    >before every other instant. 
    ><axiom id="2.4-6">
    >          (A t,t1)[instant(t1) & neginf(t) --> [before(t,t1) v t1 = t]]
    ></axiom>
    >
    >The next axiom entails that there are infinitely many instants before
    >any given instant other than the point at negative infinity.
    ><axiom id="2.4-7">
    >         (A t1)[instant(t1) & ~neginf(t1)
    >                    --> (E t2)[instant(t2) & before(t2,t1)]]
    ></axiom>
    >Likewise these axioms are valid in an approach that does not admit
    >a point at negative infinity.
    >
    >In the second approach instants at positive and negative infinity are
    >not posited, but intervals can have the properties "posinf-interval"
    >and "neginf-interval".  Because of Axiom 2.1-4, if an interval has an
    >end, it is not a positive infinite interval.  Thus, a positive
    >infinite interval cannot have an end.
    >
    >An instant inside a positive half-infinite interval has infinitely many
    >instants after it.
    ><axiom id="2.4-8">
    >	(A t1,T)[posinf-interval(T) & inside(t1,T)
    >	         --> (E t2)[before(t1,t2) & inside(t2,T)]]
    ></axiom>
    >This axiom is valid in the first approach as well, since
    >"posinf-interval" will never be true and the antecedent will never be
    >satisfied.
    >
    >Because of Axiom 2.1-3, if an interval has a beginning, it is not a
    >negative infinite interval.  Thus, a negative infinite interval cannot
    >have a beginning.
    >
    >Corresponding to Axiom 2.4-8 is the following axiom for
    >"neginf-interval":
    ><axiom id="2.4-9">
    >	(A t1,T)[neginf-interval(T) & inside(t1,T)
    >	         --> (E t2)[before(t2,t1) & inside(t2,T)]]
    ></axiom>
    >
    >It may be useful to have two more predicates.  An interval is (at
    >least) a half-infinite interval if either "posinf-interval" or
    >"neginf-interval" is true of it.
    ><axiom id="2.4-10">
    >	(A T)[halfinf-interval(T)
    >	      <--> [posinf-interval(T) v neginf-interval(T)]]
    ></axiom>
    >An interval is an infinite interval if it is both positively and
    >negatively infinite.
    ><axiom id="2.4-11">
    >	(A T)[inf-interval(T)
    >	      <--> [posinf-interval(T) & neginf-interval(T)]]
    ></axiom>
    >Again these axioms are valid in the first approach because the
    >antecedents will never be true.
    >
    >Finally for the core ontology, we probably want to stipulate that one
    >either uses the "posinf" approach or the "posinf-interval" approach.
    >This is accomplished by the following axiom. 
    ><axiom id="2.4-12">
    >	[(E t) posinf(t)] <--> ~[(E T) posinf-interval(T)]
    ></axiom>
    >Similarly,
    ><axiom id="2.4-13">
    >	[(E t) neginf(t)] <--> ~[(E T) neginf-interval(T)]
    ></axiom>
    >Note that one could use one approach for negative infinity and
    >the other for positive infinity, although this development does not
    >support it.
    
    The other point that seems relevant is that one can systematically 
    *translate between* these two alternatives, which is kind of 
    important in practice, and also incidentally shows that in some sense 
    (tricky to state precisely) they are equivalent in expressiveness.
    Later: OK, I see you do this.
    
    >
    >This completes the treatment of infinite time in the core ontology.
    >
    >The following two axioms give points at infinity if we are using the
    >first approach, indicated by the proposition "Pts-at-Inf()" in the
    >antecedent.
    ><axiom id="2.4-14a">
    >	Pts-at-Inf() --> (E t) posinf(t)
    ></axiom>
    ><axiom id="2.4-15a">
    >	Pts-at-Inf() --> (E t) neginf(t)
    ></axiom>
    >That is, there are instants out at positive and negative infinity,
    >respectively, when the Points at Infinity approach is taken.  Again,
    >to adopt this approach, simply assert
    >
    >	 Pts-at-Inf()
    >
    >When one adopts this approach, one can also, for convenience, extend
    >the language to include the two constants, PositiveInfinity and
    >NegativeInfinity, where
    >
    >	posinf(PositiveInfinity)
    >	neginf(NegativeInfinity)
    >
    >One can also extend the language to include the functions
    >"beginning-of" and "end-of", defined as follows:
    >
    >	beginning-of(T) = t <--> begins(t,T)
    >	end-of(T) = t <--> ends(t,T)
    >
    >We stipulated the uniqueness of "begins" and "ends" in Section 2.1,
    >and Axioms 2.4-12 and 2.4-13 rule out intervals with no beginnings or
    >ends, so the functions will be total.
    >
    >The following two axioms guarantee the existence of half infinite
    >intervals if one takes the "No Points at Infinity" approach.
    ><axiom id="2.4-14b">
    >	No-Pts-at-Inf() -->
    >	(A t)[instant(t)
    >	      --> (E T)[posinf-interval(T) & begins(t,T)]]
    ></axiom>
    ><axiom id="2.4-15b">
    >	No-Pts-at-Inf() -->
    >	(A t)[instant(t)
    >	      --> (E T)[neginf-interval(T) & ends(t,T)]]
    ></axiom>
    >To specify that we are using the second approach, we would assert
    >
    >	No-Pts-at-Inf()
    >
    >Suppose we wish to map between the two ontologies.  Suppose the
    >predicates and constants in the theory using the first approach are
    >subscripted with 1 and the predicates in the theory using the second
    >approach are subscripted with 2.  The domains of the two theories are
    >the same.
    
    Well, not *exactly* the same.
    
    >  All predicates and functions of the two theories are
    >equivalent with the exception of "begins", "ends", "beginning-of",
    >"end-of", "posinf", "neginf", "posinf-interval", and
    >"neginf-interval".  These are related by the following two
    >articulation axioms.
    >
    >	posinf1(end-of1(T)) <--> posinf-interval2(T)
    >
    >	neginf1(beginning-of1(T)) <--> neginf-interval2(T)
    >
    >Density: In some applications it is useful to have the property of
    >density, that is, the property that between any two distinct instants
    >there is a third distinct instant.  The axiom for this is as follows,
    >where the 0-argument predicate indicating the exercising of this
    >option is "Dense()":
    >
    ><axiom id="2.4-16">
    >	Dense() -->
    >         (A t1,t2)[instant(t1) & instant(t2) & before(t1,t2)
    >                     --> (E t)[instant(t) & before(t1,t) & before(t,t2)]]
    ></axiom>
    >
    >This is weaker than the mathematical property of continuity, which we
    >will not axiomatize here.
    
    Say why not? (Inherently second-order.)
    
    >If time is totally ordered and continuous,
    >it is isomorphic to the real numbers.
    
    Wow, are you sure? In fact I'm pretty sure that is not the case; e.g.,
    consider the structure made of uncountably many copies of R joined 
    end-to-end by intervening points-at-infinity. That is totally ordered 
    and continuous but not isomorphic to R.  (Not my example; see van 
    Benthem, 'The Logic of Time', p. 31.)
    
    >Convexity:  In Section 2.2 we gave the axiom 2.2-8:
    >
    >         inside(t,T) & begins(t1,T) & ends(t2,T)
    >	     --> before(t1,t) & before(t,t2)
    >
    >The converse of this condition is called Convexity and may be stronger
    >than some users will want if they are modeling time as a partial
    >ordering.  (See Esoteric Note below.)  To choose the option of
    >Convexity, simply assert the 0-argument proposition "Convex()".
    >
    ><axiom id="2.4-17">
    >	Convex() -->
    >         [begins(t1,T) & ends(t2,T) & before(t1,t) & before(t,t2)
    >	    --> inside(t,T)]
    ></axiom>
    >
    >In the rest of this development any property that depends on
    >Convexity will be conditioned on the proposition "Convex()".
    >
    >Convexity implies that intervals are contiguous with respect to the
    >before relation, in that an instant between two other instants inside
    >an interval is also inside the interval.
    >
    >	Convex() -->
    >         [before(t1,t2) & before(t2,t3) & inside(t1,T) & inside(t3,T)
    >             --> inside(t2,T)]
    >
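    The contiguity consequence is easy to check in a toy model.  Below is
    a minimal Python sketch; the representation (instants as numbers,
    proper intervals as half-open pairs, "inside" as strict betweenness)
    is an assumption of the sketch, not part of the ontology.

```python
# Toy model, not the ontology: instants are floats, a proper interval T
# is a pair (beginning, end), and inside(t, T) is strict betweenness.
def inside(t, T):
    return T[0] < t < T[1]

T = (0.0, 10.0)
cases = [(1.0, 2.0, 3.0), (0.5, 5.0, 9.5)]
for t1, t2, t3 in cases:
    # antecedent: before(t1,t2) & before(t2,t3) & inside(t1,T) & inside(t3,T)
    assert t1 < t2 < t3 and inside(t1, T) and inside(t3, T)
    # consequent of the Convexity consequence above: inside(t2,T)
    assert inside(t2, T)
```

    In this linear model the property is immediate from transitivity of <;
    the interesting cases are the nonlinear graph models of the Esoteric
    Note below.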
    >Extensional Collapse: In the standard development of interval
    >calculus, it is assumed that any intervals that are int-equals are
    >identical.  That is, intervals are uniquely determined by their
    >beginning and end points.  We can call this the property of
    >Extensional Collapse, and indicate it by the 0-argument proposition
    >"Ext-Collapse()".
    >
    ><axiom id="2.4-18">
    >         Ext-Collapse() --> (A T1,T2)[int-equals(T1,T2) --> T1 = T2]
    ></axiom>
    >
    >If we think of different intervals between the end points as being
    >different ways the beginning can lead to the end, then Extensional
    >Collapse can be seen as collapsing all these into a single "before"
    >relation.
    >
    >In the rest of this development we will point it out whenever any
    >concept or property depends on Extensional Collapse.
    >
    >Esoteric Note: Convexity, Extensional Collapse, and Total Ordering are
    >independent properties.  This can be seen by considering the following
    >four models based on directed graphs, where the arcs define the before
    >relation:
    >
    >     1.  An interval is any subset of the paths between two nodes.
    >         (For example, time is partially ordered and an interval is
    >         any path from one node to another.)
    >
    >     2.  An interval is the complete set of paths between two nodes.
    >
    >     3.  An interval consists of the beginning and end nodes and all the
    >         arcs between the beginning and end nodes but no intermediate
    >         nodes.  So inside(t,T) is never true.  (This is a hard model
    >         to motivate.)
    >
    >     4.  The instants are a set of discrete, linearly ordered nodes.
    >         There are multiple arcs between the nodes.  The intervals are
    >         paths from one node to another, including the nodes.  (For
    >         example, the instants may be the successive states in the
    >         situation calculus and the intervals sequences of actions
    >         mapping one state into the next.  Different actions can have
    >         the same start and end states.)
    >
    >Model 1 has none of the three properties.  Model 2 has Convexity and
    >Extensional Collapse, but is not Totally Ordered.  Model 3 is Totally
    >Ordered and has Extensional Collapse but not Convexity.  Model 4 is
    >Totally Ordered and Convex, but lacks Extensional Collapse.
    >
    >2.5.  Linking Time and Events:
    >
    >The time ontology links to other things in the world through four
    >predicates -- at-time, during, holds, and time-span.  We assume
    >that another ontology provides for the description of events -- either
    >a general ontology of event structure abstractly conceived, or
    >specific, domain-dependent ontologies for specific domains.
    >
    >The term "eventuality" will be used to cover events, states,
    >processes, propositions, states of affairs, and anything else that can
    >be located with respect to time.  The possible natures of
    >eventualities would be spelled out in the event ontologies.  The term
    >"eventuality" in this document is only an expositional convenience and
    >has no formal role in the time ontology.
    >
    >The predicate at-time relates an eventuality to an instant, and is
    >intended to say that the eventuality holds, obtains, or is taking
    >place at that time.
    >
    ><axiom id="2.5-1">
    >         at-time(e,t) --> instant(t)
    ></axiom>
    >
    >The predicate during relates an eventuality to an interval, and is
    >intended to say that the eventuality holds, obtains, or is taking
    >place during that interval.
    
    Well, that is ambiguous, as I'm sure you know. I.e., does it mean
    holding throughout, continuously, or only somewhere inside?
    
    >
    ><axiom id="2.5-2">
    >         during(e,T) --> interval(T)
    ></axiom>
    >
    >If an eventuality obtains during an interval, it obtains at every
    >instant inside the interval.
    
    Ah, OK. So then if I take a trip lasting five days, am I travelling 
    while I'm sleeping? (Or if it takes me a couple of years to write a 
    book, am I writing while I'm having a shower? Etc.)
    
    You maybe ought to at least *say* something about this issue.
    
    >
    ><axiom id="2.5-3">
    >         during(e,T) & inside(t,T) --> at-time(e,t)
    ></axiom>
    
    What about during sub-intervals?
    
    >Whether a particular process is viewed as instantaneous or as occurring
    >over an interval is a granularity decision that may vary according to
    >the context of use, and is assumed to be provided by the event
    >ontology.
    >
    >Often the eventualities in the event ontology are best thought of as
    >propositions,
    
    What other kinds of eventualities are there? (Why does one need this 
    2-stage approach? Seems needlessly complicated.)  Whatever they are, 
    by the way, they can be subsumed under the proposition "so-and-so 
    exists" for temporal purposes. That is, temporal entities can be 
    considered to be exemplifications of existential propositions.
    
    >and the relation between these and times is most
    >naturally called "holds".  "holds(e,T)" would say that e holds at
    >instant T or during interval T.  The predicate "holds" would be part
    >of the event ontology, not part of the time ontology, although its
    >second argument would be provided by the time ontology.  The
    >designers of the event ontology may or may not want to relate "holds"
    >to "at-time" and "during" by axioms such as the following:
    >
    >         holds(e,t) & instant(t) <--> at-time(e,t)
    >         holds(e,T) & interval(T) <--> during(e,T)
    >
    >Similarly, the event ontology may provide other ways of linking events
    >with times, for example, by including a time parameter in
    >predications.
    >
    >         p(x,t)
    >
    >The time ontology provides ways of reasoning about the t's; their use
    >as arguments of predicates from another domain would be a feature of
    >the ontology of the other domain.
    
    Right. Good move.
    
    Incidentally, this seems to me to be a useful heuristic: the temporal 
    theory ought to be preserved under all variations on these other 
    theories, eg whether you include times as arguments or relativize 
    truth to them or use them as modal possible-world indices, it ought 
    to come out the same.
    
    >
    >The predicate time-span relates eventualities to instants or
    >intervals.  For contiguous states and processes, it tells the entire
    >instant or interval for which the state or process obtains or takes
    >place.  In Section 6 we will develop a treatment of discontinuous
    >temporal sequences, and it will be useful to remain open to having
    >these as time spans of eventualities as well.
    >
    ><axiom id="2.5-4">
    >         time-span(T,e) --> temporal-entity(T) & tseq(T)
    ></axiom>
    ><axiom id="2.5-5">
    >         time-span(T,e) & interval(T) --> during(e,T)
    ></axiom>
    ><axiom id="2.5-6">
    >         time-span(t,e) & instant(t) --> at-time(e,t)
    ></axiom>
    ><axiom id="2.5-7">
    >         time-span(T,e) & interval(T) & ~inside(t,T)
    >                 & ~begins(t,T) & ~ends(t,T)
    >             --> ~at-time(e,t)
    ></axiom>
    ><axiom id="2.5-8">
    >         time-span(t,e) & instant(t) & t1 =/= t --> ~at-time(e,t1)
    ></axiom>
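    As a quick illustration of how axioms 2.5-4 through 2.5-8 constrain
    at-time, here is a Python sketch over a toy model.  The span table
    and helper names are inventions of the sketch; note that it
    arbitrarily excludes endpoints, a choice the text deliberately leaves
    to the event ontology.

```python
# Toy model: a time-span is either an instant (a number) or a proper
# interval (a half-open pair).  at_time is derived from the span.
def at_time(e, t, span_of):
    span = span_of[e]
    if isinstance(span, tuple):      # interval span
        # axioms 2.5-5/2.5-7: e obtains inside its span and nowhere
        # outside it; endpoints are left unspecified (False here).
        return span[0] < t < span[1]
    return t == span                 # instant span: axioms 2.5-6/2.5-8

spans = {"meeting": (9.0, 10.0), "flash": 9.5}
assert at_time("meeting", 9.5, spans)
assert not at_time("meeting", 11.0, spans)   # outside the span (2.5-7)
assert at_time("flash", 9.5, spans)
assert not at_time("flash", 9.6, spans)      # a different instant (2.5-8)
```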
    >
    >Whether the eventuality obtains at the beginning and end points of its
    >time span is a matter for the event ontology to specify.  The silence
    >here on this issue is the reason "time-span" is not defined in terms
    >of necessary and sufficient conditions.
    >
    >The event ontology could extend temporal functions and predicates to
    >apply to events in the obvious way, e.g.,
    >
    >         ev-begins(t,e) <--> time-span(T,e) & begins(t,T)
    >
    >This would not be part of the time ontology, but would be consistent
    >with it.
    >
    >Different communities have different ways of representing the times
    >and durations of states and events (processes).  In one approach,
    >states and events can both have durations, and at least events can be
    >instantaneous.  In another approach, events can only be instantaneous
    >and only states can have durations.  In the latter approach, events
    >that one might consider as having duration (e.g., heating water) are
    >modeled as a state of the system that is initiated and terminated by
    >instantaneous events.  That is, there is the instantaneous event of
    >the beginning of the heating at the beginning of an interval, that
    >transitions the system into a state in which the water is heating.
    >The state continues until another instantaneous event occurs---the
    >stopping of the heating at the end of the interval.  These two
    >perspectives on events are straightforwardly interdefinable in terms
    >of the ontology we have provided.  This is a matter for the event
    >ontology to specify.  This time ontology is neutral with respect to
    >the choice.
    >
    >MAPPINGS:
    >
    >Teknowledge's SUMO has pretty much the same ontology as presented
    >here, though the names are slightly different.  An instant is a
    >TimePoint, an interval is a TimeInterval, beginning-of is BeginFn, and
    >so on.  SUMO implements the Allen calculus.
    >
    >Cyc has functions #startingPoint and #endingPoint that apply to
    >intervals, but also to eventualities.  Cyc implements the Allen
    >calculus.  Cyc uses a holdIn predicate to relate events to times, but
    >to other events as well.  Cyc defines a very rich set of derived
    >concepts that are not defined here, but could be.
    >
    >For instant Kestral uses Time-Point, for interval they use
    >Time-Interval, for beginning-of they use start-time-point, and so on.
    >
    >PSL axiomatizes before as a total ordering.
    >
    >                        3.  Measuring Durations
    >
    >3.1.  Temporal Units:
    >
    >This development assumes ordinary arithmetic is available.
    >
    >There are at least two approaches that can be taken toward measuring
    >intervals.
    
    intervals = durations?
    
    >  The first is to consider units of time as functions from
    >Intervals to Reals.  Because of infinite intervals, the range must
    >also include Infinity.
    >
    >         minutes: Intervals --> Reals U {Infinity}
    >         minutes([5:14,5:17)) = 3
    >
    >The other approach is to consider temporal units to constitute a set
    >of entities -- call it TemporalUnits -- and have a single function
    >_duration_ mapping Intervals x TemporalUnits into the Reals.
    >
    >         duration: Intervals x TemporalUnits --> Reals U {Infinity}
    >         duration([5:14,5:17), *Minute*) = 3
    >
    >The two approaches are interdefinable:
    >
    ><axiom id="3.1-1">
    >         seconds(T) = duration(T,*Second*)
    ></axiom>
    ><axiom id="3.1-2">
    >         minutes(T) = duration(T,*Minute*)
    ></axiom>
    ><axiom id="3.1-3">
    >         hours(T) = duration(T,*Hour*)
    ></axiom>
    ><axiom id="3.1-4">
    >         days(T) = duration(T,*Day*)
    ></axiom>
    ><axiom id="3.1-5">
    >         weeks(T) = duration(T,*Week*)
    ></axiom>
    ><axiom id="3.1-6">
    >         months(T) = duration(T,*Month*)
    ></axiom>
    
    Months are a *really* bad thing to have as a duration measure.
    
    ><axiom id="3.1-7">
    >         years(T) = duration(T,*Year*)
    ></axiom>
    >
    >Ordinarily, the first is more convenient for stating specific facts
    >about particular units.  The second is more convenient for stating
    >general facts about all units.
    >
    >The constraints on the arguments of duration are as follows:
    >
    ><axiom id="3.1-8">
    >         duration(T,u) --> proper-interval(T) & temporal-unit(u)
    ></axiom>
    >
    >The temporal units are as follows:
    >
    ><axiom id="4.2-17">
    >         temporal-unit(*Second*) & temporal-unit(*Minute*)
    >	& temporal-unit(*Hour*) & temporal-unit(*Day*)
    >	& temporal-unit(*Week*) & temporal-unit(*Month*)
    >	& temporal-unit(*Year*)
    ></axiom>
    >
    >The arithmetic relations among the various units are as follows:
    >
    ><axiom id="3.1-9">
    >         seconds(T) = 60 * minutes(T)
    ></axiom>
    ><axiom id="3.1-10">
    >         minutes(T) = 60 * hours(T)
    ></axiom>
    ><axiom id="3.1-11">
    >         hours(T) = 24 * days(T)
    ></axiom>
    ><axiom id="3.1-12">
    >         days(T) = 7 * weeks(T)
    ></axiom>
    ><axiom id="3.1-13">
    >         months(T) = 12 * years(T)
    ></axiom>
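    The fixed multiplicative relations (3.1-9 through 3.1-12) can be
    sketched directly.  The table and helper below are illustrative
    assumptions, not the ontology's machinery; months and years are
    deliberately omitted, since no fixed number of seconds defines them.

```python
# Sketch only: an interval is a (start, end) pair measured in seconds,
# and each fixed-length unit is a multiplier.  *Month* and *Year* are
# omitted -- their relation to days belongs to the calendar ontology.
UNIT_SECONDS = {"*Second*": 1, "*Minute*": 60, "*Hour*": 3600,
                "*Day*": 86400, "*Week*": 604800}

def duration(T, u):
    """duration(T,u): length of proper interval T measured in unit u."""
    return (T[1] - T[0]) / UNIT_SECONDS[u]

T = (5 * 3600 + 14 * 60, 5 * 3600 + 17 * 60)   # [5:14, 5:17)
assert duration(T, "*Minute*") == 3            # minutes([5:14,5:17)) = 3
# Axioms 3.1-9 and 3.1-10 in this model:
assert duration(T, "*Second*") == 60 * duration(T, "*Minute*")
assert duration(T, "*Minute*") == 60 * duration(T, "*Hour*")
```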
    >
    >The relation between days and months (and, to a lesser extent, years)
    >will be specified as part of the ontology of clock and calendar below.
    >On their own, however, month and year are legitimate temporal units.
    
    Hmmm. Seems to me that we need to distinguish units of duration on 
    the one hand, and calendering devices on the other. What 'three 
    months' means is something like 'starting now, go on until the same 
    calendar date three months in the future'. But that is NOT an 
    interval of time: it's a class of intervals with *varying* durations.
    
    With your definitions, nothing can be said about any relation between 
    years and seconds. But even with leap seconds, one ought to be able 
    to put some upper and lower bounds on the number of seconds in a year.
    
    >
    >In this development durations are treated as functions on intervals
    >and units, and not as first class entities on their own, as in some
    >approaches.  In the latter approach, durations are essentially
    >equivalence classes of intervals of the same length, and the length of
    >the duration is the length of the members of the class.  The relation
    >between an approach of this sort (indicated by prefix D-) and the one
    >presented here is straightforward.
    >
    >         (A T,u,n)[duration(T,u) = n
    >             <--> (E d)[D-duration-of(T) = d & D-duration(d,u) = n]]
    
    Exactly, and this does not work for months. Varying-length calendar 
    'units' break this mapping when treated as being durations. Strictly 
    speaking, things like leap seconds break it. This isn't just being 
    academic: these breakdowns are causing real problems in, for example, 
    the W3C calendaring effort and the XML Schema WG (which has tried to 
    define durations as a datatype and is churning itself into knots 
    trying to deal with months and seconds).
    
    Things get even nastier when you have to deal with time zones and so 
    on. But all of this mess belongs in calendaring, not in the basic 
    time ontology. Allowing months to be a duration measure gets this 
    boundary confused.
    
    >At the present level of development of the temporal ontology, this
    >extra layer of representation seems superfluous.  It may be more
    >compelling, however, when the ontology is extended to deal with the
    >combined durations of noncontiguous aggregates of intervals.
    >
    >3.2.  Concatenation and Hath:
    >
    >The multiplicative relations above don't tell the whole story of the
    >relations among temporal units.  Temporal units are _composed of_
    >smaller temporal units.  A larger temporal unit is a concatenation of
    >smaller temporal units.
    
    Surely the unit is the *addition* of the smaller ones. Units are like 
    numbers, not like intervals.
    
    >We will first define a general relation of
    >concatenation between an interval and a set of smaller intervals.
    >Then we will introduce a predicate "Hath" that specifies the number of
    >smaller unit intervals that concatenate to a larger interval.
    >
    >Concatenation: A proper interval x is a concatenation of a set S of
    >proper intervals if and only if S covers all of x, and all members of
    >S are subintervals of x and are mutually disjoint.  (The third
    >conjunct on the right side of <--> is because begins-or-in covers only
    >beginning-of and inside.)
    >
    ><axiom id="3.2-1">
    >     concatenation(x,S)
    >         <--> proper-interval(x)
    >              & (A z)[begins-or-in(z,x)
    >                         --> (E y)[member(y,S) & begins-or-in(z,y)]]
    >              & (A z)[end-of(x) = z
    >                         --> (E y)[member(y,S) & end-of(y) = z]]
    >              & (A y)[member(y,S)
    >                         --> [int-starts(y,x) v int-during(y,x)
    >                                              v int-finishes(y,x)]]
    >              & (A y1,y2)[member(y1,S) & member(y2,S)
    >                         --> [y1=y2 v nonoverlap(y1,y2)]]
    ></axiom>
    >
    >The following properties of "concatenation" can be proved as theorems:
    >
    >There are elements in S that start and finish x:
    >
    >         concatenation(x,S) --> (E! y1)[member(y1,S) & int-starts(y1,x)]
    >
    >         concatenation(x,S) --> (E! y2)[member(y2,S) & int-finishes(y2,x)]
    >
    >Except for the first and last elements of S, every element of S has
    >elements that precede and follow it.  These theorems depend on the
    >property of Convexity.
    >
    >	Convex() -->
    >         [concatenation(x,S)
    >             --> (A y1)[member(y1,S)
    >                         --> [int-finishes(y1,x)
    >                                 v (E! y2)[member(y2,S) & int-meets(y1,y2)]]]]
    >
    >	Convex() -->
    >         [concatenation(x,S)
    >             --> (A y2)[member(y2,S)
    >                         --> [int-starts(y2,x)
    >                                 v (E! y1)[member(y1,S) & int-meets(y1,y2)]]]]
    >
    >The uniqueness (E!) follows from nonoverlap.
    
    I hate to think what these would do to most reasoning engines.
    
    >
    >Hath: The basic predicate used here for expressing the composition of
    >larger intervals out of smaller temporal intervals of unit length is
    >"Hath", from statements like "30 days hath September" and "60 minutes
    >hath an hour."  Its structure is
    >
    >         Hath(N,u,x)
    >
    >meaning "N proper intervals of duration one unit u hath the proper
    >interval x."  That is, if Hath(N,u,x) holds, then x is the
    >concatenation of N unit intervals where the unit is u.  For example,
    >if x is some month of September then "Hath(30,*Day*,x)" would be true.
    >
    >"Hath" is defined as follows:
    >
    ><axiom id="3.2-2">
    >         Hath(N,u,x)
    >             <--> (E S)[card(S) = N
    >                        & (A z)[member(z,S) --> duration(z,u) = 1]
    >                        & concatenation(x,S)]
    ></axiom>
    >
    >That is, x is the concatenation of a set S of N proper intervals of
    >duration one unit u.
    >
    >The type constraints on its arguments can be proved as a theorem: N is
    >an integer (assuming that is the constraint on the value of card), u
    >is a temporal unit, and x is a proper interval:
    >
    ><axiom id="3.2-3">
    >         Hath(N,u,x) --> integer(N) & temporal-unit(u) & proper-interval(x)
    ></axiom>
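    To make 3.2-1 and 3.2-2 concrete, here is a Python sketch over
    half-open integer intervals -- a simplification, since the real
    definitions range over the ontology's interval structure rather than
    numbers.

```python
# Sketch: x and the members of S are half-open (start, end) pairs.
def concatenation(x, S):
    """True if the intervals in S tile x exactly: they cover all of x,
    lie within x, and are mutually disjoint."""
    S = sorted(S)
    if not S or S[0][0] != x[0] or S[-1][1] != x[1]:
        return False
    return all(a[1] == b[0] for a, b in zip(S, S[1:]))  # meet, no gaps

def hath(N, unit_len, x):
    """Hath(N,u,x): x is a concatenation of N unit-length intervals."""
    S = [(x[0] + i * unit_len, x[0] + (i + 1) * unit_len) for i in range(N)]
    return concatenation(x, S)

assert hath(60, 60, (0, 3600))        # "60 minutes hath an hour"
assert hath(30, 86400, (0, 2592000))  # "30 days hath September" (in seconds)
assert not hath(31, 86400, (0, 2592000))
```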
    >
    >This treatment of concatenation will work for scalar phenomena in
    >general.  This treatment of Hath will work for measurable quantities
    >in general.
    
    True, but it is way overly complicated for most of them. Most scalar 
    measuring schemes have been designed to be multiplicative, so that 
    you don't need to get into which particular collection of inches you 
    are concatenating, you can just say one foot = 12 inches, and then 
    multiply. Months break this because, basically, months are determined 
    by the moon and all the other units are determined by the sun, and 
    the sun and the moon have got pretty much nothing to do with one 
    another.
    
    I really think that the only effective way to handle months is in 
    terms of a repeating pattern of units (the 30/31/29 pattern for the 
    months in a year) and being explicit about this. The same machinery 
    also enables you to handle leap years, leap seconds and calendar 
    corrections.
    
    >
    >3.3.  The Structure of Temporal Units:
    >
    >We now define predicates true of intervals that are one temporal unit
    >long.  For example, "week" is a predicate true of intervals whose
    >duration is one week.
    >
    ><axiom id="3.3-1">
    >         second(T) <--> seconds(T) = 1
    ></axiom>
    ><axiom id="3.3-2">
    >         minute(T) <--> minutes(T) = 1
    ></axiom>
    ><axiom id="3.3-3">
    >         hour(T) <--> hours(T) = 1
    ></axiom>
    ><axiom id="3.3-4">
    >         day(T) <--> days(T) = 1
    ></axiom>
    ><axiom id="3.3-5">
    >         week(T) <--> weeks(T) = 1
    ></axiom>
    ><axiom id="3.3-6">
    >         month(T) <--> months(T) = 1
    ></axiom>
    ><axiom id="3.3-7">
    >         year(T) <--> years(T) = 1
    ></axiom>
    >
    >We are now in a position to state the relations between successive
    >temporal units.
    >
    ><axiom id="3.3-8">
    >         minute(T) --> Hath(60,*Second*,T)
    ></axiom>
    ><axiom id="3.3-9">
    >         hour(T) --> Hath(60,*Minute*,T)
    ></axiom>
    ><axiom id="3.3-10">
    >         day(T) --> Hath(24,*Hour*,T)
    ></axiom>
    ><axiom id="3.3-11">
    >         week(T) --> Hath(7,*Day*,T)
    ></axiom>
    ><axiom id="3.3-12">
    >         year(T) --> Hath(12,*Month*,T)
    ></axiom>
    >
    >The relations between months and days are dealt with in Section 4.4.
    >
    >MAPPINGS:
    >
    >Teknowledge's SUMO has some facts about the lengths of temporal units
    >in terms of smaller units.
    >
    >Cyc reifies durations.  Cyc's notion of time covering subsets aims at
    >the same concept dealt with here with Hath.
    >
    >Kestrel uses temporal units to specify the granularity of the time
    >representation.
    >
    >PSL reifies and axiomatizes durations.  PSL includes a treatment of
    >delays between events.  A delay is the interval between the instants
    >at which two events occur.
    >
    >                         4.  Clock and Calendar
    >
    >4.1.  Time Zones:
    >
    >What hour of the day an instant is in is relative to the time zone.
    >This is also true of minutes, since there are regions in the world,
    >e.g., central Australia, where the hours are not aligned with GMT
    >hours, but are, e.g., offset half an hour.  Probably seconds are not
    >relative to the time zone.
    
    Leap seconds are. (The XML Schema WG are all tied up with these 
    issues right now, which is where I found out about them, BTW. Just to 
    point out that we are not alone....)
    
    >
    >Days, weeks, months and years are also relative to the time zone,
    >since, e.g., 2002 began in the Eastern Standard time zone three hours
    >before it began in the Pacific Standard time zone.  Thus, predications
    >about all clock and calendar intervals except seconds are relative to
    >a time zone.
    
    In the Islamic calendar, months begin at the time the new moon is 
    first *seen* by two witnesses. This gets much hairier than normal 
    time-zoning.
    
    >
    >This can be carried to what seems like a ridiculous extreme, but turns
    >out to yield a very concise treatment.  The Common Era (C.E. or A.D.) is
    >also relative to a time zone, since 2002 years ago, it began three
    >hours earlier in what is now the Eastern Standard time zone than in
    >what is now the Pacific Standard time zone.  What we think of as the
    >Common Era is in fact 24 (or more) slightly displaced half-infinite
    >intervals.  (We leave B.C.E. to specialized ontologies.)
    >
    >The principal functions and predicates will specify a clock or
    >calendar unit interval to be the nth such unit in a larger interval.
    >The time zone need not be specified in this predication if it is
    >already built into the nature of the larger interval.  That means that
    >the time zone only needs to be specified in the largest interval, that
    >is, the Common Era; that time zone will be inherited by all smaller
    >intervals.  Thus, the Common Era can be considered as a function from
    >time zones (or "time standards", see below) to intervals.
    >
    >         CE(z) = T
    >
    >Fortunately, this counterintuitive conceptualization will usually be
    >invisible and, for example, will not be evident in the most useful
    >expressions for time, in Section 4.5 below.  In fact, the CE
    >predication functions as a good place to hide considerations of time
    >zone when they are not relevant.  (The BCE era is similarly time zone
    >dependent, although this will almost never be relevant.)
    >
    >Esoteric Aside: Strictly speaking, the use of CE as a function depends
    >on Extensional Collapse.  If we don't want to assume that, then we can
    >use a corresponding predicate -- CEPred(e,z) -- to mean era e is the
    >Common Era in time zone z.
    >
    >We have been referring to time _zones_, but in fact it is more
    >convenient to work in terms of what we might call the "time standard"
    >that is used in a time zone.  That is, it is better to work with *PST*
    >as a legal entity than with the *PST* zone as a geographical region.
    >A time standard is a way of computing the time, relative to a
    >world-wide system of computing time.  For each time standard, there is
    >a zone, or geographical region, and a time of the year in which it is
    >used for describing local times.  Where and when a time standard is
    >used have to be axiomatized, and this involves interrelating a time
    >ontology and a geographical ontology.  These relations can be quite
    >complex.  Only the entities like *PST* and *EDT*, the time standards,
    >are part of the _time_ ontology.
    >
    >If we were to conflate time zones (i.e., geographical regions) and
    >time standards, it would likely result in problems in several
    >situations.  For example, the Eastern Standard zone and the Eastern
    >Daylight zone are not identical, since most of Indiana is on Eastern
    >Standard time all year.  The state of Arizona and the Navajo Indian
    >Reservation, two overlapping geopolitical regions, have different time
    >standards -- one is Pacific and one is Mountain.
    >
    >Time standards that seem equivalent, like Eastern Standard and Central
    >Daylight, should be thought of as separate entities.  Whereas they
    >function the same in the time ontology, they do not function the same
    >in the ontology that articulates time and geography.  For example, it
    >would be false to say those parts of Indiana shift in April from
    >Eastern Standard to Central Daylight time.
    >
    >In this treatment it will be assumed there is a set of entities called
    >time standards.  Some relations among time standards are discussed in
    >Section 4.5.
    >
    >4.2.  Clock and Calendar Units:
    
    This is where months belong.
    
    >The aim of this section is to explicate the various standard clock and
    >calendar intervals.  A day as a calendar interval begins at and
    >includes midnight and goes until but does not include the next
    >midnight.  By contrast, a day as a duration is any interval that is 24
    >hours in length.  The day as a duration was dealt with in Section 3.
    >This section deals with the day as a calendar interval.
    >
    >Including the beginning but not the end of a calendar interval in the
    >interval may strike some as arbitrary.  But we get a cleaner treatment
    >if, for example, all times of the form 12:xx a.m., including 12:00
    >a.m. are part of the same hour and day, and all times of the form
    >10:15:xx, including 10:15:00, are part of the same minute.
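    The payoff of the half-open convention is that consecutive calendar
    intervals tile the timeline with no gaps and no double-counting.  A
    small Python sketch (hours as floats, purely illustrative):

```python
# Half-open convention: a calendar day is [midnight, next midnight).
def in_day(t, day_start, day_len=24.0):
    return day_start <= t < day_start + day_len

# Midnight belongs to the day it begins, not to the preceding day.
assert in_day(0.0, 0.0)
assert not in_day(24.0, 0.0)
# Every instant falls in exactly one of two consecutive days.
for t in (0.0, 12.0, 23.999, 24.0, 36.0):
    assert in_day(t, 0.0) + in_day(t, 24.0) == 1
```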
    >
    >It is useful to have three ways of saying the same thing: the clock or
    >calendar interval y is the nth clock or calendar interval of type u in
    >a larger interval x.  This can be expressed as follows for minutes:
    >
    >         minit(y,n,x)
    >
    >If the property of Extensional Collapse holds, then y is uniquely
    >determined by n and x, so it can also be expressed as follows:
    >
    >         minitFn(n,x) = y
    >
    >For stating general properties about clock intervals, it is useful
    >also to have the following way to express the same thing:
    >
    >         clock-int(y,n,u,x)
    >
    >This expression says that y is the nth clock interval of type u in x.
    >For example, the proposition "clock-int(10:03,3,*Minute*,[10:00,11:00))"
    >holds.
    >
    >Here u can be a member of the set of clock units, that is, one of
    >*Second*, *Minute*, or *Hour*.
    >
    >In addition, there is a calendar unit function with similar structure:
    >
    >         cal-int(y,n,u,x)
    >
    >This says that y is the nth calendar interval of type u in x.  For
    >example, the proposition "cal-int(12Mar2002,12,*Day*,Mar2002)" holds.
    >Here u can be one of the calendar units *Day*, *Week*, *Month*, and
    >*Year*.
    >
    >The unit *DayOfWeek* will be introduced below in Section 4.3.
    >
    >The relations among these modes of expression are as follows:
    >
    ><axiom id="4.2-1">
    >         sec(y,n,x) <--> secFn(n,x) = y
    ></axiom>
    ><axiom id="4.2-2">
    >         sec(y,n,x) <--> clock-int(y,n,*Second*,x)
    ></axiom>
    ><axiom id="4.2-3">
    >         minit(y,n,x) <--> minitFn(n,x) = y
    ></axiom>
    ><axiom id="4.2-4">
    >         minit(y,n,x) <--> clock-int(y,n,*Minute*,x)
    ></axiom>
    ><axiom id="4.2-5">
    >         hr(y,n,x)  <--> hrFn(n,x) = y 
    ></axiom>
    ><axiom id="4.2-6">
    >         hr(y,n,x)  <--> clock-int(y,n,*Hour*,x)
    ></axiom>
    ><axiom id="4.2-7">
    >         da(y,n,x)  <--> daFn(n,x) = y 
    ></axiom>
    ><axiom id="4.2-8">
    >         da(y,n,x)  <--> cal-int(y,n,*Day*,x)
    ></axiom>
    ><axiom id="4.2-9">
    >         mon(y,n,x) <--> monFn(n,x) = y
    ></axiom>
    ><axiom id="4.2-10">
    >         mon(y,n,x) <--> cal-int(y,n,*Month*,x)
    ></axiom>
    ><axiom id="4.2-11">
    >         yr(y,n,x)  <--> yrFn(n,x) = y 
    ></axiom>
    ><axiom id="4.2-12">
    >         yr(y,n,x)  <--> cal-int(y,n,*Year*,x)
    ></axiom>
    >
    >Weeks and months are dealt with separately below.
    >
    >The am/pm designation of hours is represented by the function hr12.
    >
    ><axiom id="4.2-13">
    >         hr12(y,n,*am*,x) <--> hr(y,n,x)
    ></axiom>
    ><axiom id="4.2-14">
    >         hr12(y,n,*pm*,x) <--> hr(y,n+12,x)
    ></axiom>
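Axioms 4.2-13 and 4.2-14 amount to a simple arithmetic rule. A Python transcription (note that, as written, the axioms do not special-case 12 a.m. or 12 p.m.):

```python
def hr24(n, ampm):
    """24-hour index of the hour written n am/pm, per axioms 4.2-13/14:
    am maps n to n, pm maps n to n+12.  The 12 a.m./12 p.m. edge cases
    are not addressed by the axioms as given."""
    assert ampm in ("am", "pm")
    return n if ampm == "am" else n + 12

assert hr24(9, "am") == 9
assert hr24(5, "pm") == 17
```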
    >
    >A distinction is made above between clocks and calendars because they
    >differ in how they number their unit intervals.  The first minute of
    >an hour is labelled with 0; for example, the first minute of the hour
    >[10:00,11:00) is 10:00.  The first day of a month is labelled with 1;
    >the first day of March is March 1.  We number minutes for the number
    >just completed; we number days for the day we are working on.
    
    Suggestion: number the *points* and then have alternative ways of 
    associating intervals to endpoints. Then a single ordering concept 
    works for both.
    
    >Thus,
    >if the larger unit has N smaller units, the argument n in clock-int
    >runs from 0 to N-1, whereas in cal-int n runs from 1 to N.  To state
    >properties true of both clock and calendar intervals, we can use the
    >predicate cal-int and relate the two notions with the axiom
    >
    ><axiom id="4.2-15">
    >         cal-int(y,n,u,x) <--> clock-int(y,n-1,u,x)
    ></axiom>
    >
    >Note that the Common Era is a calendar interval in this sense, since
    >it begins with 1 C.E. and not 0 C.E.
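The 0-based/1-based relation of axiom 4.2-15 is just an index shift. A sketch:

```python
def clock_index(cal_n):
    """0-based clock index for a 1-based calendar index (axiom 4.2-15)."""
    return cal_n - 1

def cal_index(clock_n):
    """1-based calendar index for a 0-based clock index."""
    return clock_n + 1

# If the larger unit has N smaller units, clock indices run 0..N-1
# and calendar indices run 1..N.
assert clock_index(1) == 0     # the first minute of an hour is minute 0
assert cal_index(0) == 1       # the first day of a month is day 1
```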
    >
    >The type constraints on the arguments of cal-int are as follows:
    >
    ><axiom id="4.2-16">
    >         cal-int(y,n,u,x) --> interval(y) & integer(n) & temporal-unit(u)
    >                                 & interval(x)
    ></axiom>
    >
    >We allow x to be any interval, not just a calendar interval.  When x
    >does not begin at the beginning of a calendar unit of type u, we take
    >y to be the nth _full_ interval of type u in x.  Thus, the first year
    >of World War II, in this sense, is 1940, the first full year, and not
    >1939, the year it began.  The first week of the year will be the first
    >full week.  We can express this constraint as follows:
    ><axiom id="4.2-17">
    >	cal-int(y,n,u,x) --> starts-or-during(y,x)
    ></axiom>
    >
    >Each of the calendar intervals is that unit long; for example, a
    >calendar year is a year long.
    ><axiom id="4.2-18">
    >         cal-int(y,n,u,x) --> duration(y,u) = 1
    ></axiom>
    >
    >There are properties relating to the labelling of clock and calendar
    >intervals.  If N u's hath x and y is the nth u in x, then n is between
    >1 and N.
    >
    ><axiom id="4.2-19">
    >         cal-int(y,n,u,x) & Hath(N,u,x)  --> 0 < n <= N
    ></axiom>
    >
    >There is a 1st small interval, and it starts the large interval.
    >
    ><axiom id="4.2-20">
    >         Hath(N,u,x) --> (E! y) cal-int(y,1,u,x)
    ></axiom>
    ><axiom id="4.2-21">
    >         Hath(N,u,x) & cal-int(y,1,u,x) --> int-starts(y,x)
    ></axiom>
    >
    >There is an Nth small interval, and it finishes the large interval.
    >
    ><axiom id="4.2-22">
    >         Hath(N,u,x) --> (E! y) cal-int(y,N,u,x)
    ></axiom>
    ><axiom id="4.2-23">
    >         Hath(N,u,x) & cal-int(y,N,u,x) --> int-finishes(y,x)
    ></axiom>
    >
    >All but the last small interval have a small interval that succeeds
    >and is met by it.
    >
    ><axiom id="4.2-24">
    >         cal-int(y1,n,u,x) & Hath(N,u,x) & n < N
    >             --> (E! y2)[cal-int(y2,n+1,u,x) & int-meets(y1,y2)]
    ></axiom>
    >
    >All but the first small interval have a small interval that precedes
    >and meets it.
    >
    ><axiom id="4.2-25">
    >         cal-int(y2,n,u,x) & Hath(N,u,x) & 1 < n
    >             --> (E! y1)[cal-int(y1,n - 1,u,x) & int-meets(y1,y2)]
    ></axiom>
    >
    >4.3.  Weeks
    >
    >A week is any seven consecutive days.  A calendar week, by contrast,
    >according to a commonly adopted convention, starts at midnight,
    >Saturday night, and goes to the next midnight, Saturday night.  There
    >are 52 weeks in a year, but there are not usually 52 calendar weeks in
    >a year.
    >
    >Weeks are independent of months and years.  However, we can still talk
    >about the nth week in some larger period of time, e.g., the third week
    >of the month or the fifth week of the semester.  So the same three
    >modes of representation are appropriate for weeks as well.
    >
    ><axiom id="4.3-1">
    >         wk(y,n,x)  <--> wkFn(n,x) = y 
    ></axiom>
    ><axiom id="4.3-2">
    >         wk(y,n,x)  <--> cal-int(y,n,*Week*,x)
    ></axiom>
    >
    >As it happens, the n and x arguments will often be irrelevant, when we
    >only want to say that some period is a calendar week.
    >
    >The day of the week is a calendar interval of type *Day*.  The nth
    >day-of-the-week in a week is the nth day in that interval.
    >
    ><axiom id="4.3-3">
    >         dayofweek(y,n,x) <--> da(y,n,x) & (E n1,x1) wk(x,n1,x1)
    ></axiom>
    >
    >The days of the week have special names in English. 
    >
    ><axiom id="4.3-4">
    >         dayofweek(y,1,x) <--> Sunday(y,x)
    ></axiom>
    ><axiom id="4.3-5">
    >         dayofweek(y,2,x) <--> Monday(y,x)
    ></axiom>
    ><axiom id="4.3-6">
    >         dayofweek(y,3,x) <--> Tuesday(y,x)
    ></axiom>
    ><axiom id="4.3-7">
    >         dayofweek(y,4,x) <--> Wednesday(y,x)
    ></axiom>
    ><axiom id="4.3-8">
    >         dayofweek(y,5,x) <--> Thursday(y,x)
    ></axiom>
    ><axiom id="4.3-9">
    >         dayofweek(y,6,x) <--> Friday(y,x)
    ></axiom>
    ><axiom id="4.3-10">
    >         dayofweek(y,7,x) <--> Saturday(y,x)
    ></axiom>
    >
    >For example, Sunday(y,x) says that y is the Sunday of week x.
    >
    >Since a day of the week is also a calendar day, it is a theorem that
    >it is a day long. 
    >
    >	dayofweek(y,n,x) --> day(y)
    
    That seems like a mistake. Consider the last day of a year with a 
    leap second. That particular day is a single, whole, calendar day 
    which lasts 24 hours plus one second, ie a day (duration) plus a 
    second, which is not the same as a day-duration.
    
    >
    >One correspondence will anchor the cycle of weeks to the rest of the
    >calendar, for example, saying that January 1, 2002 was the Tuesday of
    >some week x.
    >
    ><axiom id="4.3-11">
    >         (A z)(E x) Tuesday(dayFn(1,monFn(1,yrFn(2002,CE(z)))),x)
    ></axiom>
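Axiom 4.3-11 can be checked against Python's own calendar arithmetic; datetime numbers weekdays Monday=0..Sunday=6, while the ontology uses Sunday=1..Saturday=7, so a small remapping is needed:

```python
from datetime import date

# January 1, 2002 was indeed a Tuesday (datetime: Monday=0, so Tuesday=1).
assert date(2002, 1, 1).weekday() == 1

def ontology_dayofweek(d):
    """Map a date to the ontology's day-of-week numbering,
    Sunday=1 .. Saturday=7 (axioms 4.3-4..10)."""
    return (d.weekday() + 1) % 7 + 1

assert ontology_dayofweek(date(2002, 1, 1)) == 3   # Tuesday is day 3
```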
    >
    >We can define weekdays and weekend days as follows:
    >
    ><axiom id="4.3-12">
    >         weekday(y,x) <--> [Monday(y,x) v Tuesday(y,x) v Wednesday(y,x)
    >                                 v Thursday(y,x) v Friday(y,x)]
    ></axiom>
    ><axiom id="4.3-13">
    >         weekendday(y,x) <--> [Saturday(y,x) v Sunday(y,x)]
    ></axiom>
    >
    >As before, the use of the functions wkFn and dayofweekFn depends on
    >Extensional Collapse.
    >        
    >4.4.  Months and Years
    >
    >The months have special names in English.
    >
    ><axiom id="4.4-1">
    >         mon(y,1,x) <--> January(y,x)
    ></axiom>
    ><axiom id="4.4-2">
    >         mon(y,2,x) <--> February(y,x)
    ></axiom>
    ><axiom id="4.4-3">
    >         mon(y,3,x) <--> March(y,x)
    ></axiom>
    ><axiom id="4.4-4">
    >         mon(y,4,x) <--> April(y,x)
    ></axiom>
    ><axiom id="4.4-5">
    >         mon(y,5,x) <--> May(y,x)
    ></axiom>
    ><axiom id="4.4-6">
    >         mon(y,6,x) <--> June(y,x)
    ></axiom>
    ><axiom id="4.4-7">
    >         mon(y,7,x) <--> July(y,x)
    ></axiom>
    ><axiom id="4.4-8">
    >         mon(y,8,x) <--> August(y,x)
    ></axiom>
    ><axiom id="4.4-9">
    >         mon(y,9,x) <--> September(y,x)
    ></axiom>
    ><axiom id="4.4-10">
    >         mon(y,10,x) <--> October(y,x)
    ></axiom>
    ><axiom id="4.4-11">
    >         mon(y,11,x) <--> November(y,x)
    ></axiom>
    ><axiom id="4.4-12">
    >         mon(y,12,x) <--> December(y,x)
    ></axiom>
    >
    >The number of days in a month has to be spelled out for individual
    >months.
    >
    ><axiom id="4.4-13">
    >         January(m,y) --> Hath(31,*Day*,m)
    ></axiom>
    ><axiom id="4.4-14">
    >         March(m,y) --> Hath(31,*Day*,m)
    ></axiom>
    ><axiom id="4.4-15">
    >         April(m,y) --> Hath(30,*Day*,m)
    ></axiom>
    ><axiom id="4.4-16">
    >         May(m,y) --> Hath(31,*Day*,m)
    ></axiom>
    ><axiom id="4.4-17">
    >         June(m,y) --> Hath(30,*Day*,m)
    ></axiom>
    ><axiom id="4.4-18">
    >         July(m,y) --> Hath(31,*Day*,m)
    ></axiom>
    ><axiom id="4.4-19">
    >         August(m,y) --> Hath(31,*Day*,m)
    ></axiom>
    ><axiom id="4.4-20">
    >         September(m,y) --> Hath(30,*Day*,m)
    ></axiom>
    ><axiom id="4.4-21">
    >         October(m,y) --> Hath(31,*Day*,m)
    ></axiom>
    ><axiom id="4.4-22">
    >         November(m,y) --> Hath(30,*Day*,m)
    ></axiom>
    ><axiom id="4.4-23">
    >         December(m,y) --> Hath(31,*Day*,m)
    ></axiom>
    >
    >The definition of a leap year is as follows:
    >
    ><axiom id="4.4-24">
    >         (A z)[leap-year(y)
    >             <--> (E n,x)[yr(y,n,CE(z))
    >                           & [divides(400,n)
    >                              v [divides(4,n) & ~divides(100,n)]]]]
    ></axiom>
    >
    >We leave leap seconds to specialized ontologies.
    
    Not good enough: a workable uniform treatment of time NEEDS to 
    consider leap seconds; if ignored, they do things like breaking GPS 
    systems.
    
    All you need to say is that some calendar years are not a year long, 
    but a year plus a second long. And all you need in order to say that 
    is to maintain a clear conceptual distinction between durations 
    (which really are on a scale) and calendaric entities (which are 
    really not). A year-duration is a different thing from a calendar 
    year.
    
    You can always have a context in which they get identified, but 
    people who use that will have trouble if they try to export. Tough 
    for them, but trying to solve their problems pre-emptively just 
    spreads shit all over the landscape.
    
    Another unifying strategy would be to allow a nonmonotonic assumption 
    that all day, year, etc. durations were the duration of a 'standard' 
    day, year, etc., and treat the leap cases as nonmonotonic exceptions 
    to the standard. BTW, for several hundred years long-case clocks were 
    built with a 29.5-day 'standard' lunar month built into the moon-dial 
    mechanism.
    
    >
    >Now the number of days in February can be specified.
    >
    ><axiom id="4.4-25">
    >         February(m,y) & leap-year(y) --> Hath(29,*Day*,m)
    ></axiom>
    ><axiom id="4.4-26">
    >         February(m,y) & ~leap-year(y) --> Hath(28,*Day*,m)
    ></axiom>
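Axioms 4.4-13 through 4.4-26 together determine the length of every month once the leap-year predicate is fixed. A direct Python transcription:

```python
def leap_year(n):
    """Axiom 4.4-24: divides(400,n) v [divides(4,n) & ~divides(100,n)]."""
    return n % 400 == 0 or (n % 4 == 0 and n % 100 != 0)

def hath_days(month, year):
    """How many days the nth month of the given C.E. year hath
    (axioms 4.4-13..26); month is the calendar index 1..12."""
    if month == 2:
        return 29 if leap_year(year) else 28
    return 31 if month in (1, 3, 5, 7, 8, 10, 12) else 30

assert leap_year(2000) and leap_year(1996) and not leap_year(1900)
assert hath_days(2, 2000) == 29 and hath_days(2, 1900) == 28
assert hath_days(9, 2002) == 30 and hath_days(12, 2002) == 31
```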
    >
    >A reasonable approach to defining month as a unit of temporal measure
    >would be to specify that the beginning and end points have to be on the
    >same days of successive months.
    
    That doesn't work. Nothing works, because months really are not a 
    measure. It's like having a stretchable tape measure and trying to do 
    surveying: it just breaks, no matter how much you try to wriggle.
    
    >  The following rather ugly axiom
    >captures this.
    >
    ><axiom id="4.4-27">
    >         month(T)
    >             <--> (E t1,t2,d1,d2,n,m1,m2,n1,y1,n2,e)
    >                  [begins(t1,T) & ends(t2,T)
    >                   & begins-or-in(t1,d1) & begins-or-in(t2,d2)
    >                   & da(d1,n,m1) & mon(m1,n1,y1) & yr(y1,n2,e)
    >                   & da(d2,n,m2)
    >                   & [mon(m2,n1+1,y1)
    >                       v (E y2)[n1=12 & mon(m2,1,y2) & yr(y2,n2+1,e)]]]
    ></axiom>
    >
    >The last disjunct takes care of months spanning December and January.
    >So the month as a measure of duration would be related to days as a
    >measure of duration only indirectly, mediated by the calendar.
    
    Great.
    
    >  It is
    >possible to prove that months are between 28 and 31 days.
    >
    >To say that July 4 is a holiday in the United States one could write
    >
    >         (A d,m,y)[da(d,4,m) & July(m,y) --> holiday(d,USA)]
    >
    >Holidays like Easter can be defined in terms of this ontology coupled
    >with an ontology of the phases of the moon.
    >
    >Other calendar systems could be axiomatized similarly
    
    Not the Islamic one, for example.
    
    >.  and the BCE
    >era could also be axiomatized in this framework.  These are left as
    >exercises for interested developers.
    >
    >4.5.  Time Stamps:
    >
    >Standard notation for times lists the year, month, day, hour, minute,
    >and second.  It is useful to define a predication for this.
    >
    ><axiom id="4.5-1">
    >         time-of(t,y,m,d,h,n,s,z)
    >             <--> begins-or-in(t,secFn(s,minitFn(n,hrFn(h,daFn(d,
    >                                             monFn(m,yrFn(y,CE(z))))))))
    ></axiom>
    >
    >Alternatively (and not assuming Extensional Collapse),
    >
    ><axiom id="4.5-2">
    >         time-of(t,y,m,d,h,n,s,z)
    >             <--> (E s1,n1,h1,d1,m1,y1,e)
    >                      [begins-or-in(t,s1) & sec(s1,s,n1) & minit(n1,n,h1)
    >                      & hr(h1,h,d1) & da(d1,d,m1) & mon(m1,m,y1)
    >                      & yr(y1,y,e) & CEPred(e,z)]
    ></axiom>
    >
    >For example, an instant t has the time
    >
    >         5:14:35pm PST, Wednesday, February 6, 2002
    >
    >if the following properties hold for t:
    >
    >         time-of(t,2002,2,6,17,14,35,*PST*)
    >         (E w,x)[begins-or-in(t,w) & Wednesday(w,x)]
    >
    >The second line says that t is in the Wednesday w of some week x.
    >
    >The relations among time zones can be expressed in terms of the
    >"time-of" predicate.  Two examples are as follows:
    >
    >         h < 8 --> [time-of(t,y,m,d,h,n,s,*GMT*)
    >                         <--> time-of(t,y,m,d-1,h+16,n,s,*PST*)]
    >         h >= 8 --> [time-of(t,y,m,d,h,n,s,*GMT*)
    >                         <--> time-of(t,y,m,d,h-8,n,s,*PST*)]
    >
    >         time-of(t,y,m,d,h,n,s,*EST*) <--> time-of(t,y,m,d,h,n,s,*CDT*)
    >
    >The "time-of" predicate will be convenient for doing temporal
    >arithmetic.
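The GMT/PST rules above, with their day rollover, reduce to arithmetic on a (day, hour) pair. A sketch that, like the axioms, leaves month-boundary rollover (d-1 = 0) unhandled:

```python
def gmt_to_pst(d, h):
    """(day, hour) in GMT -> (day, hour) in PST (UTC-8), per the two
    time-zone rules above.  Month boundaries are not handled, just as
    the axioms leave them implicit."""
    if h < 8:
        return (d - 1, h + 16)
    return (d, h - 8)

assert gmt_to_pst(6, 17) == (6, 9)   # 17:00 GMT on the 6th = 09:00 PST, 6th
assert gmt_to_pst(6, 3) == (5, 19)   # 03:00 GMT on the 6th = 19:00 PST, 5th
```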
    >
    >The predicate "time-of" has 8 arguments.  It will be convenient in
    >cases where exact times are not known or don't need to be specified to
    >have functions that identify each of the slots of a time description.
    >These functions are also useful for partial implementations of the
    >time ontology in versions of DAML based on description logic, such as
    >DAML+OIL.  We introduce functions that allow "time-of" to be expressed
    >as a collection of values of the functions: "second-of", "minute-of",
    >etc.  However, these functions cannot be applied to instants directly,
    >since an instant can have many "time-of" predications, one for each
    >time zone and alternate equivalent descriptions involving, for
    >example, 90 minutes versus 1 hour and 30 minutes.  Thus, we need an
    >intervening "temporal description".  An instant can have many temporal
    >descriptions, and each temporal description has a unique value for
    >"second-of", "minute-of", etc.Thus, "time-of(t,2002,2,6,17,14,35,PST)",
    >or 5:14:35pm PST, February 6, 2002, would be expressed by asserting of
    >the instant t the property "temporal-description(d,t)", meaning that d
    >is a temporal description of t, and asserting for d the properties
    >"year-of(d) = 2002", "month-of(d) = 2", etc. Coarser granularities on
    >times can be expressed by leaving the finer-grained units unspecified.
    >
    >These functions can be defined by the following axiom:
    ><axiom id="4.5-3">
    >
    >	(A t,y,m,d,h,n,s,z)[time-of(t,y,m,d,h,n,s,z)
    >	    <--> (E d1)[temporal-description(d1,t)
    >		        & year-of(d1) = y & month-of(d1) = m
    >		        & day-of(d1) = d & hour-of(d1) = h
    >		        & minute-of(d1) = n & second-of(d1) = s
    >		        & time-zone-of(d1) = z]]
    >
    ></axiom>
    >The domain of the functions is an entity of type "temporal
    >description".
    
    Description??? Yuck. There are enough use/mention confusions on the 
    Web already, please don't make another one official.
    
    >  The range of the functions is inherited from the
    >constraints on the arguments of "time of".
    ><axiom id="4.5-4">
    >	(A d1,y)[year-of(d1) = y --> (E t)[temporal-description(d1,t)]]
    ></axiom>
    ><axiom id="4.5-5">
    >	(A d1,m)[month-of(d1) = m --> (E t)[temporal-description(d1,t)]]
    ></axiom>
    ><axiom id="4.5-6">
    >	(A d1,d)[day-of(d1) = d --> (E t)[temporal-description(d1,t)]]
    ></axiom>
    ><axiom id="4.5-7">
    >	(A d1,h)[hour-of(d1) = h --> (E t)[temporal-description(d1,t)]]
    ></axiom>
    ><axiom id="4.5-8">
    >	(A d1,n)[minute-of(d1) = n --> (E t)[temporal-description(d1,t)]]
    ></axiom>
    ><axiom id="4.5-9">
    >	(A d1,s)[second-of(d1) = s --> (E t)[temporal-description(d1,t)]]
    ></axiom>
    ><axiom id="4.5-10">
    >	(A d1,z)[time-zone-of(d1) = z --> (E t)[temporal-description(d1,t)]]
    ></axiom>
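A temporal description with one optional slot per time-of argument can be modeled directly; leaving fields unset gives the coarser granularities discussed above. A sketch (the class and field names are illustrative, not part of the ontology):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TemporalDescription:
    """One temporal description of an instant: each slot of time-of as an
    optional field, so coarser granularities leave finer units unset."""
    year: Optional[int] = None
    month: Optional[int] = None
    day: Optional[int] = None
    hour: Optional[int] = None
    minute: Optional[int] = None
    second: Optional[int] = None
    time_zone: Optional[str] = None

# 5:14:35pm PST, February 6, 2002:
full = TemporalDescription(2002, 2, 6, 17, 14, 35, "PST")
# An event known only to the day:
coarse = TemporalDescription(year=2002, month=11, day=5)
assert full.second == 35 and coarse.hour is None
```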
    >
    >MAPPINGS:
    >
    >Teknowledge's SUMO distinguishes between durations (e.g., HourFn) and
    >clock and calendar intervals (e.g., Hour).
    
    Right, good idea.
    
    >  Time zones are treated as
    >geographical regions.
    >
    >The treatment of dates and times via functions follows Cyc's treatment. 
    >
    >Kestrel's roundabout attempts to state rather straightforward facts
    >about the clock and calendar are an excellent illustration of the lack
    >of expressivity in DAML+OIL.
    >
    >The ISO standard for dates and times can be represented
    >straightforwardly with the time-of predicate or the unitFn functions.
    >
    >
    >		       5. Temporal Granularity
    >
    >Useful background reading for this note includes Bettini et
    >al. (2002), Fikes and Zhou, and Hobbs (1985).
    >
    >Very often in reasoning about the world, we would like to treat an
    >event that has extent as instantaneous, and we would like to express
    >its time only down to a certain level of granularity.  For example, we
    >might want to say that the election occurs on November 5, 2002,
    >without specifying the hours, minutes, or seconds.  We might want to
    >say that the Thirty Years' War ended in 1648, without specifying the
    >month and day.
    >
    >For the most part, this can be done simply by being silent about the
    >more detailed temporal properties.  In Section 2.5 we introduced the 
    >predication
    >"time-span(T,e)" relating events to temporal entities, the relation
    >"temporal-description(d,t)" relating a temporal entity to a
    >description of the clock and calendar intervals it is included in, and
    >the functions "second-of(d)", "minute-of(d)", "hour-of(d)",
    >"day-of(d)", "month-of(d)", and "year-of(d)".  Suppose we know that an
    >event occurs on a specific day, but we don't know the hour, or it is
    >inappropriate to specify the hour.  Then we can specify the day-of,
    >month-of, and year-of properties, but not the hour-of, minute-of, or
    >second-of properties.
    
    But we had better be very careful about saying that it holds.
    
    >  For example, for the election e, we can say
    >
    >    time-span(t,e), temporal-description(d,t), day-of(d) = 5,
    >       month-of(d) = 11, year-of(d) = 2002
    >
    >and no more.  We can even remain silent about whether t is an instant
    >or an interval.
    >
    >Sometimes it may be necessary to talk explicitly about the granularity
    >at which we are viewing the world.  For that we need to become clear
    >about what a granularity is, and how it functions in a reasoning
    >system.
    >
    >A granularity G on a set of entities S is defined by an
    >indistinguishability relation, or equivalently, a cover of S, i.e. a
    >set of sets of elements of S such that every element of S is an
    >element of at least one element of the cover.  We will identify the
    >granularity G with the cover. 
    >
    >    (A G,S)[cover(G,S)
    >            <--> (A x)[member(x,S)
    >                       <--> (E s)[member(s,G) & member(x,s)]]]
    >
    >Two elements of S are indistinguishable with respect to G if they are
    >in the same element of G.
    >
    >    (A x1,x2,G)[indisting(x1,x2,G)
    >                <--> (E s)[member(s,G) & member(x1,s) & member(x2,s)]]
    >
    >A granularity can be a partition of S, in which case every element of
    >G is an equivalence class.  The indistinguishability relation is
    >transitive in this case.  A common case of this is where the classes
    >are defined by the values of some given function f.
    >
    >    (A G,S)[G = f-gran(S,f)
    >               <--> [cover(G,S) & (A x1,x2)[indisting(x1,x2,G)
    >                                              <--> f(x1) = f(x2)]]]
    >
    >For example, if S is the set of descriptions of instants and f is the
    >function "year-of", then G will be a granularity on the time line that
    >does not distinguish between two instants within the same calendar
    >year.
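The f-gran construction is just grouping by the value of f. A sketch with instants represented as (year, month, day) triples:

```python
from collections import defaultdict

def f_gran(S, f):
    """The partition granularity f-gran(S, f): cells are the equivalence
    classes 'same value of f'."""
    cells = defaultdict(set)
    for x in S:
        cells[f(x)].add(x)
    return list(cells.values())

def indisting(x1, x2, G):
    """x1, x2 are indistinguishable w.r.t. G iff some cell contains both."""
    return any(x1 in s and x2 in s for s in G)

S = {(2002, 3, 11), (2002, 7, 4), (2001, 12, 31)}
G = f_gran(S, lambda t: t[0])          # f is year-of
assert indisting((2002, 3, 11), (2002, 7, 4), G)       # same calendar year
assert not indisting((2002, 3, 11), (2001, 12, 31), G)
```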
    >
    >The granularities defined by Bettini et al. (2002) are essentially of
    >this nature.  They will be discussed further after we have introduced
    >temporal aggregates in Section 6 below.
    >
    >A granularity can also consist of overlapping sets, in which case the
    >indistinguishability relation is not transitive.  A common example of
    >this is in domains where there is some distance function d, and any
    >two elements that are closer than a given distance a to each other are
    >indistinguishable.  We will suppose d takes two entities and a unit u
    >as its arguments and returns a real number.
    >
    >    (A G,S)[G = d-gran(S,a)
    >               <--> [cover(G,S) & (A x1,x2)[indisting(x1,x2,G)
    >                                              <--> d(x1,x2,u) < a]]]
    >
    >For example, suppose S is the set of instants, d is duration of the
    >interval between the two instants, the unit u is *Minute*, and a is 1.
    >Then G will be the granularity on the time line that does not
    >distinguish between instants that are less than a minute apart.  Note
    >that this is not transitive, because 9:34:10 is indistinguishable from
    >9:34:50, which is indistinguishable from 9:35:30, but the first and
    >last are more than a minute apart and are thus distinguishable.
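The non-transitivity is easy to exhibit numerically, with times as seconds since midnight and the tolerance a set to one minute:

```python
def indisting_within(t1, t2, a):
    """Distance granularity: indistinguishable iff less than a apart."""
    return abs(t1 - t2) < a

t1 = 9*3600 + 34*60 + 10   # 9:34:10
t2 = 9*3600 + 34*60 + 50   # 9:34:50
t3 = 9*3600 + 35*60 + 30   # 9:35:30

assert indisting_within(t1, t2, 60)       # 9:34:10 ~ 9:34:50
assert indisting_within(t2, t3, 60)       # 9:34:50 ~ 9:35:30
assert not indisting_within(t1, t3, 60)   # but not 9:34:10 ~ 9:35:30
```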
    
    You might mention that there is an existing mathematical field 
    discussing these things in a fairly general way. They are called 
    'tolerance spaces'.
    
    >Both of these granularities are uniform over the set, but we can
    >imagine wanting variable granularities.  Suppose we are planning a
    >robbery.  Before the week preceding the robbery, we may not care what
    >time any events occur.  All times are indistinguishable.  The week
    >preceding the robbery we may care only what day events take place on.
    >On the day of the robbery we may care about the hour in which an event
    >occurs, and during the robbery itself we may want to time the events
    >down to ten-second intervals.  Such a granularity could be defined as
    >above; the formula would only be more complex.
    >
    >The utility of viewing the world under some granularity is that the
    >task at hand becomes easier to reason about, because distinctions that
    >are possible in the world at large can be ignored in the task.  One
    >way of cashing this out in a theorem-proving framework is to treat the
    >relevant indistinguishability relation as equality.  This in effect
    >reduces the number of entities in the universe of discourse and makes
    >available rapid theorem-proving techniques for equality such as
    >paramodulation.
    
    But it instantly produces contradictions unless the tolerance is an 
    equivalence relation (and even sometimes if it is). You really ought 
    to point this out explicitly, as this trick is most definitely not a 
    safe general-purpose technique.
    
    >
    >We can express this assumption with the axiom
    >
    >(5.1)   (A x1,x2)[indisting(x1,x2,G) --> x1 = x2]
    >
    >for the relevant G.  For a temporal ontology, if 0-length intervals
    >are instants, this axiom has the effect of collapsing some intervals
    >into instants.
    >
    >There are several nearly equivalent ways of viewing the addition of
    >such an axiom -- as a context shift, as a theory mapping, or as an
    >extra antecedent condition.
    >
    >Context shift: In some formalisms, contexts are explicitly
    >represented.  A context can be viewed as a set of sentences that are
    >true in that context.  Adding axiom (5.1) to that set of sentences
    >shifts us to a new context.
    >
    >Theory mapping: We can view each granularity as coinciding with a
    >theory.  Within each theory, entities that are indistinguishable with
    >respect to that granularity are viewed as equal, so that, for example,
    >paramodulation can replace equals with equals.  To reason about
    >different granularities, there would be a "mediator theory" in which
    >all the constant, function and predicate symbols of the granular
    >theories are subscripted with their granularities.  So equality in a
    >granular theory G would appear as the predicate "=_G" in the mediator
    >theory.  In the mediator theory paramodulation is allowed with "true"
    >equality, but not with the granular equality relations =_G.  However,
    >invariances such as
    >
    >    if x =_G y, then [p_G(x) implies p_G(y)]
    >
    >hold in the mediator theory.
    >
    >Extra antecedent condition: Suppose we have a predicate
    >"under-granularity" that takes a granularity as its one argument and
    >is defined as follows:
    >
    >    (A g)[under-granularity(g)
    >            <--> (A x1,x2)[indisting(x1,x2,g) --> x1 = x2]]
    >
    >Then we can remain in the theory of the world at large, rather than
    >moving to a subtheory.  If we are using a granularity G, rather than
    >proving a theorem P, we prove the theorem
    >
    >      under-granularity(G) --> P
    >
    >If the granularity G is transitive, and thus partitions S, adding
    >axiom (5.1) should not get us into any trouble.
    
    That is being VERY optimistic. You should say: and if the theory has 
    nothing in it that could possibly distinguish things which are within 
    tolerance. Even perceptual granularities produce 'anomalies' which 
    would technically be contradictions, eg three small dots in a line 
    'blur' to produce one seen thing, but it's not a dot.
    
    >  However, if G is not
    >transitive and consists of overlapping sets, such as the epsilon
    >neighborhood granularity, then contradictions can result.  When we use
    >(5.1) with such a granularity, we are risking contradiction in the
    >hopes of efficiency gains.  Such a tradeoff must be judged on a case
    >by case basis, depending on the task and on the reasoning engine used.
    
    But if we are talking practical matters here, how can you tell if the 
    contradictions which arise do so from making that very optimistic 
    assumption?
    
    Seriously, I think this is VERY bad practice for a working ontology, 
    and that it is kind of irresponsible to give readers the impression 
    that this is a good strategy for ontology design without some pretty 
    sophisticated backup (eg a context mechanism) to avoid untraceable 
    and fatal contradictions being spawned.
    
    >
    >
    >                   6.  Aggregates of Temporal Entities
    >
    >6.1.  Describing Aggregates of Temporal Entities
    >
    >In annotating temporal expressions in newspapers, Laurie Gerber
    >encountered a number of problematic examples of temporal aggregates,
    >including expressions like "every 3rd Monday in 2001", "every morning
    >for the last 4 years", "4 consecutive Sundays", "the 1st 9 months of
    >1997", "3 weekdays after today", "the 1st full day of competition",
    >and "the 4th of 6 days of voting".  We have taken these as challenge
    >problems for the representation of temporal aggregates, and attempted
    >to develop convenient means for expressing the possible referents of
    >these expressions.
    >
    >In this section, we will assume the notation of set theory.  Sets and
    >elements of sets will be ordinary individuals, and relations such as
    >"member" will be relations between such individuals. In particular, we
    >will use the relation "member" between an element of a set and the
    >set, and the relation "subset" between two sets.  We will use "Phi" to
    >refer to the empty set.  We will use the notation "{x}" for the
    >singleton set containing the element x.  We will use the symbol "U" to
    >refer to the union operation between two sets.  The function "card"
    >will map a set into its cardinality.
    >
    >In addition, for convenience, we will make moderate use of
    >second-order formulations, and quantify over predicate symbols.
    
    Careful. I bet you really want to be quantifying over predicates, not 
    predicate SYMBOLS.  And are you SURE this is really second-order? 
    Just quantifying over predicates doesn't automatically put you into a 
    higher-order logic.
    
    >  This
    >could be eliminated with the use of an "apply" predicate and axiom
    >schemas systematically relating predicate symbols to corresponding
    >individuals, e.g., the axiom schema for unary predicates p,
    >
    >     (A x)[apply(*p*,x) <--> p(x)]
    
    If that is a genuine assumption here, then you are certainly not in 
    second-order logic. You can write this as a genuine axiom (no need 
    for the *'s) in CL, which is completely first-order.
    
    The key semantic point is that there is nothing in FOL which says 
    that the things in the universe are *not* relations. So a relation 
    can simply BE an individual:
    
    (A x p)[apply(p,x) <--> p(x)]
    
    All of the axioms in this section then get considerably 
    simpler-looking, and you probably don't need to use set theory. 
    (Treat a set as a predication.)
    
    >
    >It will be convenient to have a relation "ibefore" that generalizes
    >over several interval and instant relations, covering both
    >"int-before" and "int-meets" for proper intervals.
    ><axiom id="6.1-1">
    >
    >     (A T1,T2)[ibefore(T1,T2)
    >	      <--> [before(T1,T2)
    >                     v [proper-interval(T1) & proper-interval(T2)
    >                        & int-meets(T1,T2)]]]
    >
    ></axiom>
    >
    >It will also be useful to have a relation "iinside" that generalizes
    >over all temporal entities and aggregates.  We first define a
    >predicate "iinside-1" that generalizes over instants and intervals and
    >covers "int-starts", "int-finishes" and "int-equals" as well as
    >"int-during" for intervals.  We break the definition into several
    >cases. 
    ><axiom id="6.1-2">
    >
    >     (A T1,T2)[iinside-1(T1,T2)
    >               <--> [T1=T2
    >                     v [instant(T1) & proper-interval(T2) & inside(T1,T2)]
    >                     v [(E t) begins(t,T1) & ends(t,T1)
    >                              & proper-interval(T2) & inside(t,T2)]
    >                     v [proper-interval(T1) & proper-interval(T2)
    >                        & [int-starts(T1,T2) v int-during(T1,T2)
    >                           v int-finishes(T1,T2) v int-equals(T1,T2)]]]]
    >
    ></axiom>
    >The third disjunct in the definition is for the case of 0-length
    >intervals, should they be allowed and distinct from the corresponding
    >instants.
    >
    >A temporal aggregate is first of all a set of temporal entities, but
    >it has further structure.  The relation "ibefore" imposes a natural
    >order on some sets of temporal entities, and we will use the
    >predicate "tseq" to describe those sets.
    ><axiom id="6.1-3">
    >
    >     (A s)[tseq(s) <--> (A t)[member(t,s) --> temporal-entity(t)]
    >                        & (A t1,t2)[member(t1,s) & member(t2,s)
    >                                    --> [t1 = t2 v ibefore(t1,t2)
    >                                         v ibefore(t2,t1)]]]
    >
    ></axiom>
    >That is, a temporal sequence is a set of temporal entities totally
    >ordered by the "ibefore" relation.  A temporal sequence has no
    >overlapping temporal entities.
    
    Seems to me it would be easier and more transparent to assume the 
    integers as given, and treat tseqs as being entities like RDF 
    sequences, ie as having 'members' which are ordered by integers, 
    maybe using an nth-member-of relation. Then you don't need to invoke 
    the spectre of Set Theory and you get a lot of the ordering stuff 
    effectively for free.
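To make that suggestion concrete: a sketch in which a tseq is just a begin-sorted list of (begin, end) pairs, so the ordering and nth-member machinery come from list structure (names and encoding are mine, not the ontology's):

```python
def make_tseq(intervals):
    # sort by begin; reject overlaps so "ibefore" totally orders the members
    s = sorted(intervals)
    for a, b in zip(s, s[1:]):
        if a[1] > b[0]:
            raise ValueError("intervals overlap")
    return s

def nth_member(s, n):
    # 1-based nth element, as in axiom 6.1-14
    return s[n - 1]
```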
    
    >
    >It will be useful to have the notion of a temporal sequence whose
    >elements all have a property p.
    ><axiom id="6.1-4">
    >
    >     (A s,p)[tseqp(s,p) <--> tseq(s) & (A t)[member(t,s) --> p(t)]]
    >
    ></axiom>
    >
    >A uniform temporal sequence is one all of whose members are of equal
    >duration.
    ><axiom id="6.1-5">
    >
    >     (A s)[uniform-tseq(s)
    >           <--> (A t1,t2,u)[member(t1,s) & member(t2,s) & temporal-unit(u)
    >                            --> duration(t1,u) = duration(t2,u)]]
    >
    ></axiom>
    >
    >The same temporal aggregate can be broken up into a set of intervals
    >in many different ways.  Thus it is useful to be able to talk about
    >temporal sequences that are equivalent in the sense that they cover
    >the same regions of time.
    
    Right, exactly; and then once you have done that, the other defs get 
    easier (see above).
    
    ><axiom id="6.1-6">
    >
    >     (A s1,s2)[equiv-tseq(s1,s2)
    >               <--> tseq(s1) & tseq(s2)
    >                    & (A t)[temporal-entity(t)
    >                            --> [(E t1)[member(t1,s1) & iinside-1(t,t1)]
    >                                 <--> (E t2)[member(t2,s2)
    >                                             & iinside-1(t,t2)]]]]
    ></axiom>
    >That is, s1 and s2 are equivalent temporal sequences when any temporal
    >entity inside one is also inside the other.
    >
    >A minimal temporal sequence is one whose intervals are maximal, so
    >that the number of intervals is minimal.  We
    >can view a week as a week or as 7 individual successive days; the
    >first would be minimal.  We can go from a nonminimal to a minimal
    >temporal sequence by concatenating intervals that meet.
    ><axiom id="6.1-7">
    >
    >     (A s)[min-tseq(s)
    >           <--> (A t1,t2)[member(t1,s) & member(t2,s)
    >                          --> [t1 = t2
    >                               v (E t)[ibefore(t1,t) & ibefore(t,t2)
    >                                       & ~member(t,s)]]]]
    >
    ></axiom>
    >That is, s is a minimal temporal sequence when any two distinct
    >intervals in s have a temporal entity not in s between them.
    >
    >A temporal sequence s1 is a minimal equivalent temporal sequence to
    >temporal sequence s if s1 is minimal and equivalent to s.
    ><axiom id="6.1-8">
    >
    >     (A s1,s)[min-equiv-tseq(s1,s)
    >              <--> min-tseq(s1) & equiv-tseq(s1,s)]
    >
    ></axiom>
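A compact sketch of axioms 6.1-6 through 6.1-8, assuming a tseq is encoded as a sorted list of non-overlapping (begin, end) pairs (my encoding, not the ontology's): concatenating members that meet yields the minimal equivalent sequence, and two sequences are equivalent when their minimal forms coincide.

```python
def min_equiv_tseq(s):
    # merge consecutive intervals that meet (end of one == begin of next)
    out = []
    for b, e in sorted(s):
        if out and out[-1][1] == b:
            out[-1] = (out[-1][0], e)   # extend the previous interval
        else:
            out.append((b, e))
    return out

def equiv_tseq(s1, s2):
    # equivalent iff they cover the same regions of time
    return min_equiv_tseq(s1) == min_equiv_tseq(s2)
```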
    >
    >We can now generalize "iinside-1" to the predicate "iinside", which
    >covers both temporal entities and temporal sequences.  A temporal
    >entity is "iinside" a temporal sequence if it is "iinside-1" one of
    >the elements of its minimal equivalent temporal sequence.
    ><axiom id="6.1-9">
    >
    >     (A t,s)[iinside(t,s)
    >             <--> [temporal-entity(t) & temporal-entity(s)
    >                     & iinside-1(t,s)]
    >                  v [temporal-entity(t) & tseq(s)
    >                       & (E s1,t1)[min-equiv-tseq(s1,s) & member(t1,s1)
    >                                   & iinside-1(t,t1)]]]
    >
    ></axiom>
    >
    >We can define a notion of "isubset" on the basis of "iinside".
    ><axiom id="6.1-10">
    >
    >     (A s,s0)[isubset(s,s0)
    >              <--> [tseq(s) & tseq(s0)
    >                    & (A t)[member(t,s) --> iinside(t,s0)]]]
    >
    ></axiom>
    >That is, every element of temporal sequence s is inside some element
    >of the minimal equivalent temporal sequence of s0.
    >
    >We can also define a relation of "idisjoint" between two temporal
    >sequences.
    ><axiom id="6.1-11">
    >
    >     (A s1,s2)[idisjoint(s1,s2)
    >               <--> [tseq(s1) & tseq(s2)
    >                     & ~(E t,t1,t2)[member(t1,s1) & member(t2,s2)
    >                                    & iinside(t,t1) & iinside(t,t2)]]]
    >
    ></axiom>
    >That is, temporal sequences s1 and s2 are disjoint if there is no
    >overlap between the elements of one and the elements of the other.
    >
    >
    >The first temporal entity in a temporal sequence is the one that is
    >"ibefore" any of the others.
    ><axiom id="6.1-12">
    >
    >     (A t,s)[first(t,s)
    >             <--> [tseq(s) & member(t,s)
    >                   & (A t1)[member(t1,s) --> [t1 = t v ibefore(t,t1)]]]]
    >
    ></axiom>
    >The predicate "last" is defined similarly.
    ><axiom id="6.1-13">
    >
    >     (A t,s)[last(t,s)
    >             <--> [tseq(s) & member(t,s)
    >                   & (A t1)[member(t1,s) --> [t1 = t v ibefore(t1,t)]]]]
    >
    ></axiom>
    >
    >More generally, we can talk about the nth element of temporal
    >sequence.
    ><axiom id="6.1-14">
    >
    >     (A t,n,s)[nth(t,n,s)
    >             <--> [tseq(s) & member(t,s) & natnum(n)
    >                   & (E s1)[(A t1)[member(t1,s1)
    >                                   <--> [member(t1,s) & ibefore(t1,t)]]
    >                            & card(s1) = n-1]]]
    >
    ></axiom>
    >That is, the nth element of a temporal sequence has n-1 elements
    >before it.
    >
    >It is a theorem that the first is the nth when n is 1, and that the
    >last is the nth when n is the cardinality of s.
    >
    >     (A t,s)[first(t,s) <--> nth(t,1,s)]
    >     (A t,s)[last(t,s) <--> nth(t,card(s),s)]
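Assuming a tseq is encoded as a begin-sorted list of (begin, end) pairs (an assumption of mine, not the ontology's), the theorem is immediate:

```python
def first(s):  return s[0]
def last(s):   return s[-1]
def nth(s, n): return s[n - 1]      # n-1 members precede the nth

s = [(0, 1), (2, 3), (5, 7)]        # a sample tseq, sorted and non-overlapping
assert first(s) == nth(s, 1)        # first is the nth for n = 1
assert last(s) == nth(s, len(s))    # last is the nth for n = card(s)
```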
    >
    >Later in this development it will be convenient to have a predicate
    >"nbetw" that says there are n elements in a sequence between two given
    >elements.
    ><axiom id="6.1-15">
    >
    >	(A t1,t2,s,n)[nbetw(t1,t2,s,n)
    >            <--> (E s1)[card(s1) = n
    >                        & (A t)[member(t,s1)
    >                             <--> ibefore(t1,t) & ibefore(t,t2)
    >                                  & member(t,s)]]]
    >
    ></axiom>
    >
    >It may sometimes be of use to talk about the convex hull of a temporal
    >sequence.
    ><axiom id="6.1-16">
    >
    >     (A t,s)[convex-hull(t,s)
    >             <--> [tseq(s) & interval(t)
    >                   & (A t1)[first(t1,s) --> int-starts(t1,t)]
    >                   & (A t2)[last(t2,s) --> int-finishes(t2,t)]]]
    >
    ></axiom>
    >Note, however, that we cannot simply dispense with temporal sequences
    >and talk only about their convex hulls.  "Every Monday in 2003" has as
    >its convex hull the interval from January 6 to December 29, 2003, but
    >if we use that interval to represent the phrase, we lose all the
    >important information in the notice "The group will meet every Monday
    >in 2003."
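A quick illustration of the point using Python's datetime (my machinery, not the ontology's): the Mondays of 2003 as a sequence of days, versus their convex hull.

```python
from datetime import date, timedelta

# every Monday in 2003, as a sequence of 52 days
mondays = [date(2003, 1, 6) + timedelta(weeks=k) for k in range(52)]
assert all(d.year == 2003 and d.weekday() == 0 for d in mondays)

# the convex hull keeps only the endpoints and loses the weekly structure
hull = (mondays[0], mondays[-1])    # January 6 to December 29, 2003
```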
    >
    >The predicate "ngap" will enable us to define "everynthp" below.
    >Essentially, we are after the idea of a temporal sequence s containing
    >every nth element of s0 for which p is true.  The predicate "ngap"
    >holds between two elements of s and says that there are n-1 elements
    >between them that are in s0 and not in s for which p is true.
    ><axiom id="6.1-17">
    >
    >     (A t1,t2,s,s0,p,n)
    >        [ngap(t1,t2,s,s0,p,n)
    >         <--> [member(t1,s) & member(t2,s) & tseqp(s,p) & tseq(s0)
    >               & isubset(s,s0) & natnum(n)
    >               & (E s1)[card(s1) = n-1 & idisjoint(s,s1)
    >                        & (A t)[member(t,s1)
    >                                <--> [iinside(t,s0) & p(t)
    >                                      & ibefore(t1,t) & ibefore(t,t2)]]]]]
    >
    ></axiom>
    >
    >The predicate "everynthp" says that a temporal sequence s consists of
    >every nth element of the temporal sequence s0 for which property p is
    >true.  It will be useful in describing temporal aggregates like "every
    >third Monday in 2001".
    ><axiom id="6.1-18">
    >
    >     (A s,s0,p,n)[everynthp(s,s0,p,n)
    >                  <--> [tseqp(s,p) & tseq(s0) & natnum(n)
    >                        & (E t1)[first(t1,s)
    >                                 & ~(E t)[iinside(t,s0) & ngap(t,t1,s,s0,p,n)]]
    >                        & (E t2)[last(t2,s)
    >                                 & ~(E t)[iinside(t,s0) & ngap(t2,t,s,s0,p,n)]]
    >                        & (A t1)[member(t1,s)
    >                                 --> [last(t1,s)
    >                                      v (E t2) ngap(t1,t2,s,s0,p,n)]]]]
    >
    ></axiom>
    >That is, the first element in s has no p element n elements before it
    >in s0, the last element in s has no p element n elements after it, and
    >every element but the last has a p element n elements after it.
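Procedurally, and ignoring the ontology's set machinery, the intent of "everynthp" can be sketched as a simple selection; the offset argument reflects the fact that "every third Monday" can denote any of three sequences (function name and signature are mine):

```python
def everynthp(s0, p, n, offset=0):
    # from the members of s0 satisfying p, keep every nth one
    candidates = [t for t in s0 if p(t)]
    return candidates[offset::n]
```

For example, `everynthp(range(1, 11), lambda x: x % 2 == 0, 3)` returns [2, 8]: every third even number from 1 to 10, starting with the first.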
    >
    >The variable for the temporal sequence s0 is, in a sense, a context
    >parameter.  When we say "every other Monday", we are unlikely to mean
    >every other Monday in the history of the Universe.  The parameter s0
    >constrains us to some particular segment of time.  (Of course, that
    >segment could in principle be the entire time line.)
    >
    >The definition of "everyp" is simpler.
    ><axiom id="6.1-19">
    >
    >     (A s,s0,p)[everyp(s,s0,p)
    >                <--> (A t)[member(t,s) <--> [iinside(t,s0) & p(t)]]]
    >           
    ></axiom>
    >It is a theorem that every p is equivalent to every first p.
    >
    >     (A s,s0,p)[everyp(s,s0,p) <--> everynthp(s,s0,p,1)]
    >
    >We could similarly define "every-other-p", but the resulting
    >simplification from "everynthp(s,s0,p,2)" would not be sufficient to
    >justify it.
    >
    >Now we will consider a number of English expressions for temporal
    >aggregates and show how they would be represented with the machinery
    >we have built up.
    >
    >"Every third Monday in 2001":  In Section 4.3, "Monday" is a predicate
    >with two arguments, the second being for the week it is in.  Let us
    >define "Monday1" as describing a Monday in any week.
    >
    >     (A d)[Monday1(d) <--> (E w) Monday(d,w)]
    >
    >Then the phrase "every third Monday in 2001" describes a temporal
    >sequence S for which the following is true.
    >
    >     (E y,z)[yr(y,2001,CE(z)) & everynthp(S,{y},Monday1,3)]
    >
    >Note that this could describe any of three temporal sequences,
    >depending on the offset determining the first element of the set.
    >
    >"Every morning for the last four years": Suppose "nowfn" maps a
    >document d into the instant or interval viewed as "now" from the point
    >of view of that document, and suppose D is the document this phrase
    >occurs in.  Suppose also the predicate "morning" describes that part
    >of each day that is designated a "morning".  Then the phrase describes
    >a temporal sequence S for which the following is true.
    >
    >     (E T,t1)[duration(T,*Year*) = 4 & ends(t1,T) & iinside(t1,nowfn(D))
    >             & everyp(S,{T},morning)]
    >
    >"Four consecutive Mondays":  This describes a temporal sequence S for
    >which the following is true.
    >
    >     (E s0)[everyp(S,s0,Monday1) & card(S) = 4]
    >
    >"The first nine months of 1997": This describes the temporal sequence
    >S for which the following is true.
    >
    >     (E z)(A m)[member(m,S)
    >                <--> (E n)[month(m,n,yrFn(1997,CE(z))) & 1 =< n =< 9]]
    >
    >Note that this expression is ambiguous between the set of nine
    >individual months, and the interval spanning the nine months.  This is
    >a harmless ambiguity because the minimal equivalent temporal sequence
    >of the first is the singleton set consisting of the second.
    >
    >"The first full day of competition": For the convenience of this
    >example, let us assume an ontology where "competition" is a substance
    >or activity type and "full" relates intervals to such types.  Then the
    >phrase describes an interval D for which the following is true.
    >
    >     (E s)[(A d)[member(d,s)
    >                 <--> (E n,T,c)[day(d,n,T) & competition(c) & full(d,c)]]
    >           & first(D,s)]
    >
    >"Three weekdays after January 10": Suppose the predicate "weekday1"
    >describes any weekday of any week, similar to "Monday1".  Then this
    >phrase describes the temporal aggregate S for which the following is
    >true.
    >
    >     (E d,y,T)[da(d,11,moFn(1,y)) & everyp(S,{T},weekday1)
    >               & int-starts(d,T) & card(S) = 3]
    >
    >That is, January 11, the day after January 10, starts the interval
    >from which the three successive weekdays are to be taken.  The last of
    >these weekdays is the day D for which "last(D,S)" is true.  If we know
    >that January 10 is a Friday, we can deduce that the end of S is
    >Wednesday, January 15.
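As a sanity check with Python's datetime, assuming the year is 2003 (where January 10 does fall on a Friday):

```python
from datetime import date, timedelta

assert date(2003, 1, 10).weekday() == 4    # Friday

start = date(2003, 1, 11)                  # the day after January 10
weekdays = []
d = start
while len(weekdays) < 3:
    if d.weekday() < 5:                    # Monday..Friday
        weekdays.append(d)
    d += timedelta(days=1)

# the three weekdays are Monday 13, Tuesday 14, and Wednesday 15 January
```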
    >
    >"The fourth of six days of voting":  Let us, for the sake of the
    >example, say that voting is a substance/activity type and can appear
    >as the first argument of the predicate "during".  Then a voting day
    >can be defined as follows:
    >
    >    (A d)[voting-day(d)
    >          <--> (E v,n,T)[da(d,n,T) & voting(v) & during(v,d)]]
    >
    >Then the phrase describes an interval D for which the following is
    >true.
    >
    >     (E s,s0)[everyp(s,s0,voting-day) & card(s) = 6 & nth(D,4,s)]
    >
    >Bettini et al.'s (2002) concept of granularity is simply a temporal
    >sequence in our terminology.  All of the examples they give are
    >uniform temporal sequences.  For example, their "hour" granularity
    >within an interval T is the set S such that "everyp(S,{T},hr1)", where
    >"hr1" is to "hr" as "Monday1" is to "Monday".
    >Their notion of one granularity "grouping into" another can be defined
    >for temporal sequences. 
    ><axiom id="6.1-20">
    >
    >     (A s1,s2)[groups-into(s1,s2)
    >               <--> tseq(s1) & tseq(s2) & iinside(s1,s2)
    >                    & (A t)[member(t,s2)
    >                            --> (E s)[subset(s,s1) & min-equiv-tseq({t},s)]]]
    >           
    ></axiom>
    >That is, temporal sequence s1 groups into temporal sequence s2 if
    >every element of s2 is made up of a concatenation of elements of s1
    >and nothing else is in s1.
    >
    >Bettini et al. also define a notion of "groups-periodically-into",
    >relative to a period characterized by an integer r.  Essentially,
    >coarse granules that are r apart group the same number of granules
    >of the finer granularity.
    ><axiom id="6.1-21">
    >
    >     (A s1,s2,r)[groups-periodically-into(s1,s2,r)
    >        <--> groups-into(s1,s2) & natnum(r)
    >             & (A t1,t2,s3)[member(t1,s2) & member(t2,s2)
    >                            & nbetw(t1,t2,s2,r-1) & subset(s3,s1)
    >                            & groups-into(s3,{t1})
    >                   --> (E s4)[subset(s4,s1) & groups-into(s4,{t2})
    >                              & card(s3) = card(s4)]]]
    >
    ></axiom>
    >
    >To know the time of an event down to a granularity of one clock hour
    >(in the context of S0) is to know which element it occurred during in
    >the set S such that "everyp(S,S0,hr1)". 
    >
    >A transitive granularity, as defined in Section 5, is a temporal
    >sequence.
    >
    >6.2.  Durations of Temporal Aggregates
    >
    >The concept of "duration", defined in Section 3, can be extended to
    >temporal sequences in a straightforward manner.  If a temporal
    >sequence is the empty set, Phi, its duration is zero.
    ><axiom id="6.2-1">
    >
    >     (A u)[temporal-unit(u) --> duration(Phi,u) = 0]
    >           
    ></axiom>
    >The duration of a singleton set is the duration of the temporal entity
    >in it.
    ><axiom id="6.2-2">
    >
    >     (A t,u)[temporal-entity(t) & temporal-unit(u)
    >             --> duration({t},u) = duration(t,u)]
    >           
    ></axiom>
    >The duration of the union of two disjoint temporal sequences is the
    >sum of their durations.
    ><axiom id="6.2-3">
    >
    >     (A s1,s2,u)[tseq(s1) & tseq(s2) & idisjoint(s1,s2) & temporal-unit(u)
    >                 --> duration(s1 U s2,u) = duration(s1,u) + duration(s2,u)]
    >
    ></axiom>
    >We need to use the predicate "idisjoint" to ensure that there is no
    >overlap between intervals in s1 and intervals in s2.
    >
    >The duration of the convex hull of a temporal sequence is of course
    >not the same as the duration of the temporal sequence.  Sometimes one
    >notion is appropriate, sometimes the other.  For determining what
    >workers hired on an hourly basis should be paid, we want to know the
    >duration of the temporal sequence of the hours that they worked,
    >whereas for someone on an annual salary, the appropriate measure is
    >the duration of its convex hull.
    >
    >It is a theorem that the duration of the convex hull of a temporal
    >sequence is at least as great as that of the temporal sequence.
    >
    >	(A t,s,u)[convex-hull(t,s) --> duration(t,u) >= duration(s,u)]
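The contrast is easy to see numerically. A sketch with intervals as (begin, end) numbers, say hours of the day (my encoding, not the ontology's):

```python
def duration(s):
    # sum of member durations, per axioms 6.2-1 through 6.2-3
    return sum(e - b for b, e in s)

def hull_duration(s):
    # duration of the convex hull: last end minus first begin
    s = sorted(s)
    return s[-1][1] - s[0][0]

shifts = [(9, 12), (13, 17)]    # an hourly worker's day with a lunch gap
# duration(shifts) is 7 hours of paid work; hull_duration(shifts) is 8,
# consistent with the theorem above.
```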
    >
    >
    >6.3.  Duration Arithmetic
    >
    >five business days after January 8, 2003.
    
    Is that an example, or the due date for it being written?
    
    >
    >6.4.  Rates
    >
    >                            7.  Deictic Time
    >
    >Deictic temporal concepts, such as ``now'', ``today'', ``tomorrow
    >night'', and ``last year'', are more common in natural language texts
    >than they will be in descriptions of Web resources, and for that
    >reason we are postponing a development of this domain until the first
    >three are in place.  But since most of the content on the Web is in
    >natural language, ultimately it will be necessary for this ontology to
    >be developed.  It should, as well, mesh well with the annotation
    >standards used in automatic tagging of text.
    >
    >We expect that the key concept in this area will be a relation
    >_now_ between an instant or interval and an utterance or document.
    >
    >         now(t,d)
    >
    >The concept of "today" would also be relative to a document, and would
    >be defined as follows:
    >
    >         today(T,d) <--> (E t,n,x)[now(t,d) & begins-or-in(t,T) & da(T,n,x)]
    >
    >That is, T is today with respect to document d if and only if there is
    >an instant t in T that is now with respect to the document and T is a
    >calendar day (and thus the nth calendar day in some interval x).
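A minimal sketch of how now/today might look operationally, assuming a document is modelled as carrying a timestamp (a hypothetical attribute, not from the ontology):

```python
from datetime import datetime, date

def now_fn(doc_timestamp):
    # the instant viewed as "now" from the document's point of view
    return doc_timestamp

def today(doc_timestamp):
    # the calendar day containing the document's "now"
    return now_fn(doc_timestamp).date()
```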
    >
    >Present, past and future can be defined in the obvious way in terms of
    >now and before.
    >
    >Another feature of a treatment of deictic time would be an
    >axiomatization of the concepts of last, this, and next on anchored
    >sequences of temporal entities.
    >
    >                      8.  Vague Temporal Concepts
    >
    >In natural language a very important class of temporal expressions is
    >inherently vague.  Included in this category are such terms as "soon",
    >"recently", and "a little while".  These require an underlying theory
    >of vagueness, and in any case are probably not immediately critical
    >for the Semantic Web.  This area will be postponed for a little
    >while.
    >
    >                               References
    >
    >Allen, J.F. (1984). Towards a general theory of action and time.
    >Artificial Intelligence 23, pp. 123-154.
    >
    >Allen, J.F., and H.A. Kautz (1985). A model of naive temporal
    >reasoning. In Jerry R. Hobbs and Robert C. Moore (eds.), Formal
    >Theories of the Commonsense World, Ablex Publishing Corp., pp. 251-268.
    >
    >Allen, J.F. and P.J. Hayes (1989). Moments and points in an
    >interval-based temporal logic. Computational Intelligence 5, pp.
    >225-238.
    >
    >Allen, J.F. and G. Ferguson (1997). Actions and events in interval
    >temporal logic. In Oliviero Stock (ed.), Spatial and Temporal
    >Reasoning, Kluwer Academic Publishers, pp. 205-245.
    >
    >Bettini, C., X.S. Wang, and S. Jajodia (2002). Solving
    >multi-granularity temporal constraint networks. Artificial
    >Intelligence 140, pp. 107-152.
    >
    >Fikes, R., and Q. Zhou. A Reusable Time Ontology.
    >
    >Hobbs, J.R. (1985). Granularity. IJCAI-85.
    >http://www.ai.sri.com/~hobbs/granularity-web.pdf
    
    
    -- 
    ---------------------------------------------------------------------
    IHMC					(850)434 8903 or (650)494 3973   home
    40 South Alcaniz St.			(850)202 4416   office
    Pensacola              			(850)202 4440   fax
    FL 32501           				(850)291 0667    cell
    phayes@ai.uwf.edu	          http://www.coginst.uwf.edu/~phayes
    s.pam@ai.uwf.edu   for spam
    


    This archive was generated by hypermail 2.1.4 : 03/11/03 EST