Re: Action Items

From: pat hayes
Date: 02/25/03

    >thanks a lot for your remarks!
    >At 03:16 PM 2/12/2003, pat hayes wrote:
    >>>Pat:     Sending his concerns about the notion of  "rules"
    >>This probably deserves a book, but I will try to summarize.
    >>Basically, I no longer know what this discussion is talking about, 
    >>since the word "rules" has become so all-encompassing as to be 
    >>effectively meaningless; it seems that any computation whatsoever 
    >>can be classified as a system of 'rules' in some sense or other. 
    >>So "rules", "rule system", etc., have become simply words meaning 
    >>'computational specification' or 'program'.
    >>So what? Well, there are several consequences.
    >>First, admitting that this is so (if it in fact is) frees up the 
    >>discussion: to introduce "rules" it is not necessary to use 
    >>something that would be classified as a 'rules language', for 
    >>example: any computational formalism will do. Since in practice it 
    >>is rather hard to stop people writing code in their favorite 
    >>programming language, this would also have the effect of 
    >>corresponding to social reality. In fact, the overall picture that 
    >>one is then left with is this: the mechanisms available to the SW 
    >>consist of content languages for expressing and transmitting 
    >>propositional content (RDF, DAML, OWL etc) and code which is used 
    >>to manipulate it. Any code, written in any way anyone wants to. 
    >>That is a free-for-all kind of picture which I am reasonably happy 
    >>with. It goes along with the let-a-thousand-flowers-bloom kind of 
    >>attitude in the W3C community, which I find quite refreshing. 
    >>However, if this is in fact the case, then let us be open and 
    >>honest about it. In particular, any discussion of semantic 
    >>relationships between the program code and the content is then out 
    >>of place, or at least should be conducted entirely in terms of 
    >>entailments in the semantics of the content language, which need 
    >>have no relationship whatever to the semantics, if there is any, of 
    >>the programming language used to specify the processes which act on 
    >>the content.
    >I didn't have the impression so far that "rules" is used synonymously
    >with "any computational formalism".
    Well, I can't see any place to put the boundary that is supposed to 
    differentiate the 'rules' from everything else.  Allowing 'reactive 
    rules' into the mix seems to allow arbitrary production rule 
    systems in - and Harold (I think it was) said on a telecon that 
    indeed this was intended. Well, you have Turing completeness right 
    there; and since production rules have been used to implement things 
    ranging from battlefield simulations (complete with graphics) to 
    neuro and psychological modelling, that covers a hell of a lot of 
    ground even when considered purely pragmatically.
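    A minimal sketch of that point: even a toy forward-chaining production
    system is a general-purpose engine. (Illustrative Python, not any actual
    rule language; all names below are made up.)

```python
# Toy forward-chaining production system: repeatedly fire the first rule
# whose condition matches, until no rule can add anything (quiescence).
def run(facts, rules, max_cycles=1000):
    facts = set(facts)
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            if condition(facts):
                new = action(facts)
                if not new <= facts:   # the rule adds something new
                    facts |= new
                    fired = True
                    break              # conflict resolution: first match wins
        if not fired:
            break                      # quiescence: no rule fired
    return facts

# Example "program": count up to 5 purely by rule firings.
rules = [
    (lambda f: max(i for (_, i) in f) < 5,
     lambda f: {("n", max(i for (_, i) in f) + 1)}),
]
print(sorted(run({("n", 0)}, rules)))
# → [('n', 0), ('n', 1), ('n', 2), ('n', 3), ('n', 4), ('n', 5)]
```

    With a richer condition/action vocabulary this is already enough to
    encode arbitrary computation, which is the "Turing completeness right
    there" of the paragraph above.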
    So let me respond: what is a NON-rule computational formalism? Give 
    me an example of one (C++? Java?) and tell me why it isn't a rule system. 
    And BTW, given the example of a 'reactive rule' in your document (the 
    one about buying stock), are the following also 'reactive rules'? 
    (all real examples from various domains)
    1. Before starting engine, operate blower for five minutes.
    2. Break glass in case of fire.
    3. In case of fire, do not use elevator.
    4. Turn off power before opening cover.
    5. Proceed with caution when light is flashing.
    >But it is unclear to me how the different "rule styles" fit together.
    >Maybe the RuleML hierarchy helps:
    >                      rules
    >                     /     \
    >                 1. /       \ 2.
    >                   /         \
    >        reaction rules   transformation rules
    >                                  |
    >                               3. |
    >                                  |
    >                           derivation rules
    >                             |          |
    >                          4. |       5. |
    >                             |          |
    >                            facts   queries
    >                                        |
    >                                     6. |
    >                                        |
    >                                    integrity constraints
    >But it still leaves open which computational formalisms are useful 
    >for which part of the tree
    >(and how the different rule types are exactly defined).
    >>Second, I have the impression - ignore this remark if this is a 
    >>mistaken impression - that some members feel that "rules" or maybe 
    >>"rule systems" comprise a(nother) distinctive fundamental approach 
    >>to, or paradigm for, computational descriptions, one that can 
    >>perhaps be classified with the Turing 
    >>machine/sequential-instruction paradigm or the OOP paradigm or the 
    >>recursive-equations paradigm, and might be seen as an extension or 
    >>generalization of logic programming and production rules. On this 
    >>view, it is not entirely vacuous to claim that 'rules' are an 
    >>appropriate programming style for application to the SW; but in 
    >>this case, I would like to see this claim defended, as I for one do 
    >>not feel inclined to accept it, and certainly not without some 
    >>detailed discussion of the reasons for it and the claims that this 
    >>particular paradigm is supposed to have over many others. Certainly 
    >>it cannot claim to have wider influence or acceptability than, say, 
    >>OOP programming, so presumably the case must be based on some 
    >>perceived technical advantage. To make this case would require that 
    >>the nature of the paradigm be spelled out in more detail, so that 
    >>any technical case can be adequately evaluated. In particular, I 
    >>would want to know why the DAML committee is spending so much time 
    >>on this topic, which seems to have nothing particularly to do with 
    >My reply to your point goes as follows:
    >DAML+OIL  created the prerequisites for data on the Web.
    >Once we have a lot of data on the Web, we want to deploy it.
    >Deployment requires the combination, aggregation, and transformation of data
    >(which is instance data complying with DAML+OIL ontologies).
    OK so far.
    >Using conventional programming languages (Java, Perl, whatever) writing
    >the necessary programs for combination, aggregation, and 
    >transformation of data
    >becomes a very time consuming and thus costly task.
    Why is that? I don't see a shred of evidence for this claim or any 
    theoretical justification for it. People in the W3C orbit seem to do 
    quite nicely using things like Perl, for example. We have people here 
    who tried to use DAML inference 'correctly', gave up in frustration, 
    and now use Java happily to manipulate DAML/RDF. The Jena tools make 
    it trivial to import an RDF graph into Java: an undergraduate 
    (admittedly a very talented UG) wrote the code in an afternoon.
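    To make the "trivial to import" point concrete without Jena: a few lines
    of plain Python handle simple N-Triples. (A toy sketch with made-up
    example URIs; real data needs a proper RDF library.)

```python
# Toy N-Triples reader: enough to turn simple triples into Python tuples.
# (Ignores literals, blank nodes, and escaping -- illustration only.)
NTRIPLES = """\
<http://ex.org/pat> <http://ex.org/worksAt> <http://ex.org/IHMC> .
<http://ex.org/IHMC> <http://ex.org/locatedIn> <http://ex.org/Pensacola> .
"""

def parse(nt):
    triples = []
    for line in nt.splitlines():
        if not line.strip():
            continue
        s, p, o = line.rstrip(" .").split()[:3]
        triples.append((s.strip("<>"), p.strip("<>"), o.strip("<>")))
    return triples

graph = parse(NTRIPLES)
# Query: everything said about pat.
print([(p, o) for s, p, o in graph if s.endswith("/pat")])
# → [('http://ex.org/worksAt', 'http://ex.org/IHMC')]
```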
    >Rules for data manipulation offer a promising way
    My point is that the term "Rules" is UNDEFINED. So it is impossible
    even to evaluate this claim. Right now it is, literally, meaningless.
    It's like saying that Frobs offer a promising way... without saying
    what a Frob is. Might be true, might not: no way to tell.
    >to reduce the cost, since
    >they offer an abstract computational model, which saves much of the 
    >effort that would
    >otherwise go into conventional programs.
    That last sentence strikes me as completely meaningless. In what 
    sense is, say, Prolog or ACT any more 'abstract' than, say, LISP or 
    even Java for that matter? In fact, OOP programming seems to have a 
    better claim to offer an 'abstract' computational model. And what I 
    know about practical LP suggests strongly that debugging real LP 
    programs can be at least as complicated as debugging, say, Java. 
    Prolog programmers often write code that is almost completely opaque 
    since it depends on run-time phenomena in the backtracker which 
    aren't reflected in the surface code anywhere; it's more like a kind 
    of high-level assembly programming than OOP.
    BTW, it gets even hairier if we allow 'reactive' rules which can 
    trigger actions which have knock-on effects on the application of 
    other rules, eg by altering priority levels or tweaking default 
    settings. Systems like this are more like computationally-fractal 
    than 'abstract'; they can behave in highly unpredictable ways, 
    exhibit dynamic instabilities, etc., etc. This is why AI folk are so 
    fascinated by them, since you can set them up to do things like 
    simulate paranoia or be reactive planners which get 'interested' in 
    new things when they crop up. But this kind of unpredictable behavior 
    is the last kind of thing you tend to want in a structured 
    programming environment.
    Seems to me that all this boils down to is that there is a bunch of 
    guys, of whom you are one, who just *like* a certain programming 
    style, which they have decided to give a neat-sounding name to, and 
    are giving this style a certain cachet by claiming that it is in some 
    vague way connected with Logic, and hence is (1) modular (2) 
    intuitive and natural (3) semantically clean (4) abstract (5) you 
    name it, as long as it sounds good; and to ignore the known dangers 
    and problems that this style can have. To me this sounds about as 
    convincing as eating 'natural' vitamins to help cure cancer.
    >I don't know if there has been a formal evaluation of the above 
    >mentioned claim.
    >It seems to be plausible, but a formal evaluation would be useful.
    >So much for "transformation". I'm not claiming it is the only 
    >possible application of rules, but it is one.
    >>Third, there seems to be a presumption that 'rules' are somehow 
    >>closer to logic than other programming paradigms. Words like 
    >>'assertional' were being used in the telecon discussion, for 
    >>example, with an approving tone of voice. But if the term "rule" is 
    >>this wide in its scope, then there is no particular relationship 
    >>between rules and logic at all, any more than between rules and 
    >>numerical simulation or rules and graphics programming.  Prolog is 
    >>a perfectly general-purpose programming language. So I would like 
    >>to invite anyone who feels that there is some special connection 
    >>between 'rules' and logic, particularly SW logic, to spell this 
    >>connection out in enough detail that it can be examined critically.
    >The connection, from a logic programming point of view, is the 
    >existence of a model theory
    >for logic programs.
    There is a formal semantics for LP; it is not however a classical 
    model theory, because it is based on the assumption that all the 
    domains are recursive (minimal models). There are formal semantics 
    for almost all programming languages, however. There were formal 
    semantics for Algol 60.  So what?
    >Ignoring the non-monotonic negation for a moment, the main
    >differences to classical FOL is the focus on least Herbrand models.
    >"Least" because every Herbrand model that subsumes the least one
    >is also a model, and Herbrand model since Herbrand models suffice
    >(a set of clauses has a model iff it also has a Herbrand model).
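    For a ground Horn program, the least Herbrand model under discussion can
    be computed by iterating the immediate-consequence operator to a
    fixpoint. A sketch (illustrative atom names, ground atoms only):

```python
# Bottom-up fixpoint computation of the least Herbrand model of a ground
# Horn program: start from the facts and add a rule's head whenever every
# atom in its body has already been derived.
def least_herbrand_model(facts, rules):
    """rules: list of (head, [body atoms]); returns the least model."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

facts = {"edge(a,b)", "edge(b,c)"}
rules = [
    ("path(a,b)", ["edge(a,b)"]),
    ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["path(a,b)", "path(b,c)"]),
]
print(sorted(least_herbrand_model(facts, rules)))
# → ['edge(a,b)', 'edge(b,c)', 'path(a,b)', 'path(a,c)', 'path(b,c)']
```

    Minimality is visible here: nothing gets into the model unless the rules
    force it in, which is exactly the "least" that the discussion below
    identifies as the semantic shift away from classical FOL.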
    >>I think that the presumption of such a connection is based on a 
    >>series of historical misunderstandings.
    >>First, the word 'rule' is already used in logical metatheory but in 
    >>quite a different sense, as in 'inference rule'. It might be worth 
    >>expanding on this a little. The "rules" used in typical logic 
    >>programming systems - which can be loosely identified with Horn 
    >>clauses, let us suppose for the moment to simplify the discussion - 
    >>are like logical sentences, NOT logical inference rules. The thing 
    >>in a 'rule system' which corresponds most closely to a logical 
    >>inference rule would be the actual code of the unification 
    >>algorithm and the backward-chaining search process. Inference rules 
    >>are not logical statements: to identify them is a category error. 
    >>(Lewis Carroll wrote a famous imaginary dialog which illustrated the 
    >>error over a century ago, and it was considered a typical 
    >>elementary student error even then. )
    >>Second, the logic programming tradition has unfortunately confused 
    >>what is in fact a very clear semantic distinction. Contrary to 
    >>standard LP doctrine, logic programming is not logic. Programs are 
    >>not assertions, and algorithms are not logic plus control. I know 
    >>this goes against what Kowalski said; at the time I also said that, 
    >>but we were both wrong. The analogy is seductive but fundamentally 
    >>misleading, and it is misleading in a way that matters centrally to 
    >>the semantic web. What Kowalski should have said was, algorithms 
    >>are logic plus control modified by a closed-world assumption. This 
    >>is why I do not feel that logical programming should have any 
    >>central place in the SW technology, and why its presumptive claim 
    >>to do so is based on a semantic mistake.
    >I don't see your point.
    >Logic Programming consists of a model theory (least Herbrand models) and
    >multiple possible inference procedures trying to implement the
    >model theory (this is very clear if we are not adding
    >the closed world negation to our language).
    >Prolog is just one particular example of a possible implementation - and
    >maybe not the one most useful for the SW.
    >But even if we are adding closed world negation to our language
    >we still have nice model theoretic characterizations of
    >eg., stable and well-founded model semantics.
    >So I'm not seeing your point here.
    Apparently not. To me there is a very basic and fundamental 
    distinction in two styles of semantic theories, according to what 
    kind of assumptions are made about the underlying domains 
    ('universes') in the interpretations.
    The classical FOL assumption is that NO assumptions are made about 
    the things in the domain. This really is very important: it is where 
    a lot of the fundamental properties of logic come from, and explain 
    its wide utility.
    Another frequent kind of assumption however is that all domains are 
    recursively enumerable, and in fact more than that, they are closed 
    under recursive definitions. The mathematical sign of this is that 
    they are required to be 'minimal', to be solutions to an equation 
    involving a minimal-fixed-point operation (like the Y combinator used 
    in semantics for functional languages) , to satisfy some induction 
    principle, or to only contain finitely generated entities.
    Most (all?) programming languages make this second kind of semantic 
    assumption, since all domains of data structures have this 
    finitely-generate-able character. Almost all assertional logics do 
    not. LP does; that word 'stable' (or 'least') is the critical shift 
    in meaning which makes Horn clauses a subset of logic but LP a 
    programming language. There are superficial similarities between Horn 
    clauses and LP, but they really are superficial: the semantic 
    distinction is deep.  For example, try working out what kinds of 
    entities would be in the range of the quantifiers in a stable model 
    of axiomatic set theory.
    >>One can characterize the semantic distinction quite precisely and 
    >>in very general terms: Ask the question, is the domain of discourse 
    >>of the language required to satisfy the second recursion theorem 
    >>(ie to be closed under minimal fixed-points)? If so, the language 
    >>is a programming language of some kind, and it talks about domains 
    >>which are in some sense computable. If not, it is an assertional 
    >>logic and suffers from the expressive limitations enforced by 
    >>Goedel's theorem. This is a *fundamental* semantic distinction. 
    >>Logic is the latter; logic programming, like all other programming, 
    >>the former. Closed-world assumptions, negation-by-failure and 
    >>variety of other nonmonotonic devices belong naturally in the 
    >>former kind of semantics framework, but are directly and fatally 
    >>invalid in the latter. Minimal model semantics belong naturally in 
    >>the former but are impossible to adequately formalize in the latter.
    >My theory knowledge is rusty. (I looked up Kleene's 2nd
    >recursion theorem, but didn't get the relationship
    >to least Herbrand models.)
    >What are the consequences of the difference?
    >And why can minimal model semantics not be adequately formalized?
    Essentially this is a corollary of Goedel's theorem. There is no FO 
    set of axioms such that their models are the 'minimal' models. Or, in 
    a nutshell, there is no FO axiomatization of the notion of 'finite'.
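    The standard compactness argument behind that nutshell, sketched:

```latex
\textit{Sketch:} suppose a first-order theory $T$ had exactly the finite
structures as its models. Introduce fresh constants $c_1, c_2, \dots$ and
axioms
\[
  \sigma_{ij} :\; c_i \neq c_j \qquad (i < j).
\]
Every finite subset of $T \cup \{\sigma_{ij}\}$ mentions only finitely many
of the $c_i$, so a sufficiently large finite model of $T$ satisfies it. By
compactness the whole set has a model, which must be infinite,
contradicting the choice of $T$. Hence ``finite'' has no first-order
axiomatization.
```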
    >What is "adequately formalized"?
    >And why is it important?
    I don't know how to answer that. No doubt in some larger scheme of 
    things, no mathematics is truly 'important'.
    >>So, are rule languages in the former or latter category? It seems 
    >>that some are in one (Horn clauses), some in another (Production 
    >>systems, logic programming). To me, this makes the broad 
    >>categorization of 'rule system' worthless and potentially 
    >>dangerous, since it blurs a centrally important semantic 
    >>distinction.  Content languages for the interchange of 
    >>propositional content on the semantic web cannot be restricted to 
    >>computable domains; to do so would be fatal for the intended uses.
    >>Fourth, while I am not opposed to attempts to classify kinds of 
    >>"rules" or to give a general characterization of what "rules" are 
    >>or are not, I do not see how any of this is centrally relevant to 
    >>the DAML committee or to the SWeb more generally. I wish 
    >>enthusiasts of the "rules" paradigm well, and am happy that "rules" 
    >>interest groups, working consortia, etc. are being formed, I guess; 
    >>but I do not see any particular reason why their business should be 
    >>conducted in this forum.
    >DAML+OIL enabled data on the web - know we need a way to process it declaratively
    Nice typo (?)
    If you meant 'declaratively', this term is either meaningless or 
    false when applied to "Rules"  as described by Gerhard and Harold. 
    And why not just tell people: here's the data; here's what it means: 
    now, process it any way you want. If you pay attention to the 
    semantics you won't go wrong, if you ignore them you might. It's a 
    risk you will have to take. But other than that, no holds barred, 
    write any code you like, use Perl or Java or Prolog, whatever.
    Seems to me that this would be preferable to telling them that Rules 
    somehow give them a kind of semantic security, being 'declarative' 
    and all, while at the same time letting "Rules" mean anything. This 
    is like selling people umbrellas made out of fishing net.
    >>Amplifying this point, I think that there is an argument which has 
    >>been implicitly accepted which goes roughly as follows: OWL is not 
    >>expressive enough; it cannot express quite a lot of logical 
    >>statements, for example. In order to provide the needed 
    >>expressivity, we need to add the ability to express these 'missing' 
    >>logical forms, and that is what a Rules Language is for. On this 
    >>view, then, Rules are something like the next step on a process of 
    >>discovery which aims to creep up Tim's famous layer cake. The 
    >>trouble with this idea is that we do not need to inch along this 
    >>slowly and carefully. That is like using rock-climbing tools to 
    >>walk along a highway: we know where it goes, it has been bearing 
    >>motorized traffic for years, there is no need to pretend to explore 
    >>it carefully. It leads to first-order logic; and so why not just 
    >>say this at once and stop pussyfooting around? On *this* view of 
    >>"rules" , note, they really are just Horn clauses. They are not 
    >>logic programs or a new paradigm for computation; they are not 
    >>inference rules or production systems or anything else: they do not 
    >>involve minimal model semantics or closed worlds or negation by 
    >>failure. Their semantics is Tarskian, simple, obvious and 
    >>thoroughly understood, and they are part of the content language, 
    >>not code for manipulating the content language. If this is what 
    >>"rules" are, then I have no problem with them, but there is no need 
    >>for us to spend time to define them; they are already defined.
    >The problem is, what can you actually do with first order logic?
    Just about anything, in a sense. What do you mean, "do" ? Horn 
    clauses are first-order logic.
    >If you have a mission critical application, would you rely on a 
    >first-order theorem prover?
    Maybe not. It depends on how big, how complicated, etc. the problem 
    was.  But again, what do you mean? Ian's OIL reasoner *is* a 
    first-order theorem prover; it's just incomplete in a rather elegant way.
    (I'll tell you one thing: if it was mission critical I *certainly* 
    would not want any engine using nonmonotonic reasoning.)
    >So I guess part of the answer is, that FOL is actually more than we 
    >have asked for.
    OK, fine: if by 'rules' one means only a little less logic than full 
    FOL but a little more logic than DLs, then I agree this is a topic we 
    can usefully discuss and maybe even come to some rational decisions 
    about. But then let us agree to put aside 'reactive rules' and 
    'integrity constraints' and all the other stuff that is getting in 
    the way here; and even let us agree to treat LP with some care, at 
    the very least, since LP in many ways goes way *beyond* FOL.
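    One way to see the "beyond FOL" point concretely: negation as failure is
    nonmonotonic, so adding a fact can retract a conclusion, which classical
    entailment never does. A minimal sketch with made-up predicate names:

```python
# Negation as failure: not(atom) holds iff atom is not derivable.
def naf_holds(atom, facts):
    return atom not in facts

def can_fly(bird, facts):
    # rule: flies(X) :- bird(X), not abnormal(X).
    return f"bird({bird})" in facts and naf_holds(f"abnormal({bird})", facts)

facts = {"bird(tweety)"}
print(can_fly("tweety", facts))   # → True
facts.add("abnormal(tweety)")     # learning MORE retracts the conclusion
print(can_fly("tweety", facts))   # → False
```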
    >Also is the open world semantics suitable for all applications?
    >(If yes, why were so many resources wasted exploring 
    >non-monotonic reasoning)?
    >>So, it seems to me, either "Rules" refers to a rather small thing 
    >>which is already done, and which we can simply agree on, stop 
    >>discussing and then move forward; or else it refers to something so 
    >>big and so underdefined as to be useless and more of a distraction 
    >>than of utility; or else it needs to be defined, and argued for, 
    >>much more carefully than I have seen so far.
    >If it is undefined (or better, there exist multiple viewpoints) we 
    >should list those viewpoints.
    >As a summary:
    >The "rule" notion is still unclear, and it seems that many
    >different interpretations are possible.
    >The question is which one we adopt.
    >I don't think it should be pure FOL, since FOL is lacking
    >certain properties important for industrial deployment, like
    >for starters, a terminating implementation.
    I wasn't meaning to suggest that it would be all of FOL. It might be 
    Horn clauses only used in a forward direction, or some such highly 
    restricted subset. But if it is a subset of FOL we can immediately 
    put a number of issues aside, eg its semantics is fixed. We don't 
    need another model theory for subsets of logic; they already have a 
    model theory. DLs plus Horn clauses is a topic worth getting down to 
    work on; DLs plus "Rules" is meaningless.
    IHMC					(850)434 8903 or (650)494 3973   home
    40 South Alcaniz St.			(850)202 4416   office
    Pensacola              			(850)202 4440   fax
    FL 32501           				(850)291 0667    cell

    This archive was generated by hypermail 2.1.4 : 02/25/03 EST