comments on summary v0.95 [250 lines]
*** p.ii, and then p.15: I do not like the terminology "modular
specifications" in this context, especially since it later leads to
"modularity concepts". Both relate too much to the notion of a module,
which for many people would link more with a programming module (or
unit) than with a bit of a specification. "Modular specifications"
come very close to ACT TWO's "module specifications", about which I
never really learned whether they are "specifications of programming
modules" or "requirements specifications in a modular form". In our
case we would mean the latter, which is probably a less likely reading
for an average user.
[I've tried to avoid using the word "module" precisely because it
means so many different things to different people - or maybe even to
the same people... :-) However, I cannot agree that we should avoid
using the words "modular" and "modularity" in connection with
algebraic specifications, on the basis of a potential confusion about
the relation between CASL and existing frameworks! --PDM]
What was so wrong with "structured specifications"?
[Some seemed to find structure too close for comfort to
architecture. --PDM]
Or maybe "requirements specifications"?
[Basic specs are also about requirements... --PDM]
Or just "specifications" (with basic specs and arch specs reserved
for something special)?
[Not usable as a part or section heading. --PDM]
[Don supports Andrzej's view, whereas Bernd was happy with "modular".
Unless a better proposal comes up, I propose to stick to "modular".
--PDM]
*** We use freeness and initiality later on in the note, so somewhere
around sect.1.2 (and perhaps earlier in sect.1 when introducing
institutions) we have to say what the (homo)morphisms between models
are. Otherwise it makes little sense to talk about freeness. And
the notion of a (homo)morphism is not entirely obvious, there are a
number of possible choices, as usual for first-order structures and
for partiality. What we want are "weak" homomorphisms that preserve
but not necessarily reflect in any way definedness of operations and
truth of predicates.
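For concreteness, one possible formulation of the condition I have in
mind (just a sketch, with the details of course open to discussion):
a weak homomorphism h : A -> B is a family of total functions
h_s : |A|_s -> |B|_s, one for each sort s, such that whenever
f_A(a1,...,an) is defined, f_B(h(a1),...,h(an)) is defined and equal
to h(f_A(a1,...,an)), and whenever p_A(a1,...,an) holds, so does
p_B(h(a1),...,h(an)); definedness and truth need not be reflected.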
*** p.5, sect.2.1: reading the last sentence here, I keep wondering if
this really was what we decided in Edinburgh. At the moment it beats
me why overloading between functions and predicates is allowed and
between them and sorts is not... Maybe we should just keep all three
distinct?
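(The situation in question: the same name used both as a predicate,
say even : Nat, and as an operation even : Nat -> Bool --- hypothetical
names, just to fix an example --- is allowed, whereas reusing the name
of a sort for an operation or predicate is not.)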
[Bernd and I pondered the pragmatics of this after everyone had
left... :-) We thought that some users might want to define Boolean
functions corresponding to existing predicates, without having to
invent new names. But if one allows the same name to be used for a
sort and a (constant) function, this puts a constraint on the syntax
for ALTERNATIVE in TYPE-DEFN. This kind of detail is clearly open to
adjustment on the grounds of methodology, tools, or semantics, so I'd
rather leave things unconstrained except when there are some reasons
against. --PDM]
*** p.6, sect.2.2: just making sure: so we decided in the end that all
the axioms are implicitly universally quantified over all the declared
variables, not only over those which actually occur free in each
given axiom, right?
[I thought the decision was that it didn't make any difference, now
that models with empty carrier sets are not allowed, so there was no
need to emphasise the one or the other in the summary? --PDM]
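(Indeed: for a declared variable x : s that does not occur in an axiom
F, the two readings differ only by a quantifier, and with empty
carriers excluded, forall x : s . F and F are equivalent.)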
*** p.9/10, sect.2.5: I really find the meaning of the whole type
definition group construct less and less clear --- or rather, full of
ever new surprises for me. Reading this subsection I could not help
thinking that we have thrown in there basically anything that came to
our minds. At this reading I realised that such a type definition
group may be inconsistent, which is rather surprising for a "type
definition". Of course, this is obvious once axioms are allowed,
since the axioms may be inconsistent. What surprised me, however,
was to realise only now that the mixture of (perfectly sensible)
axioms with apparently innocuous selectors may lead to inconsistencies
(when values built by different constructors are forced to be
"confused" by the axioms, selectors on them may be ambiguous).
[An obvious example is that when plus(E1,E2) = plus(E2,E1), it would
be inadvisable to consider left and right selectors for such a construct.
--PDM]
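(To spell out the inconsistency, with hypothetical selectors left and
right axiomatised as usual by left(plus(x,y)) = x and
right(plus(x,y)) = y: applying left to both sides of
plus(E1,E2) = plus(E2,E1) yields E1 = E2 for arbitrary E1 and E2,
which collapses the carrier to a single value and contradicts any
requirement that two values be distinct.)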
Of course, this can be swallowed as well, but first it broke my
informal (and wrong) understanding that "selectors are always there,
except that we do not always bother to name them". I start wondering
whether similar and even less obvious things may arise in combination
with subsorting...
[I can't see any potential problems there. If you can, please give us
some hints. --PDM]
In a word, I think I would go back to the original proposal and remove
axioms from type definition groups.
[I do not think that your suspicion of possible confusion is
sufficient grounds for a restriction that would require abandoning the
use of type-defns for specifying, for example, abstract syntax in
relational semantics in Milner's abbreviated style. To have to
abandon this concise way of declaring constructors when one decides to
impose some axioms would be quite annoying when developing larger
specs. Surely it is a matter of methodological guidelines to warn
users against mixing selectors and confusing axioms? --PDM]
Also I started wondering what the meaning is of two (or more) type
definition groups in the same basic spec in the case when they refer
to each other (which is possible, since within basic specs we adopt
non-linear visibility). What does "persistent free extension" mean
then? Sorry, no obvious answer from me as yet, so I am just marking
a problem to think about.
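(A schematic example of what I mean, with hypothetical names and
ad-hoc notation for the two groups:
  type S ::= c | f(T)
  type T ::= d | g(S)
With non-linear visibility each group refers to a sort introduced by
the other, so it is unclear over which intermediate signature each of
the two "persistent free extensions" is supposed to be taken.)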
[My personal view has always been that a TYPE-DEFN-GROUP (with or
without AXIOMS) is best regarded as a SPEC, not a BASIC-ITEM...
A compromise might be to allow a single TYPE-DEFN as a BASIC-ITEM, but
take a TYPE-DEFN-GROUP as a SPEC? Then (perhaps) a TYPE-DEFN in such
a context really would correspond to some declarations, some
disjointness axioms, and a sort-generation constraint? Clearly, one
wants to be able to abbreviate enumerations and sort unions as
BASIC-ITEMs. --PDM]
*** p.12, sect.4.2: can F contain free variables other than x? That
is, does it get universally quantified over the variables declared in
the surrounding basic specification? (I guess either way is okay, but
some decision has to be made.)
[Good question... My understanding was that x was the ONLY free
variable in F. If one uses y without an explicit quantification, it
was probably a typo. :-)
BTW, perhaps VAR-DECL should be replaced here by VAR SORT, to avoid a
well-formedness check for only a single VAR? --PDM]
*** p.13, sect.4.3: I must say that at this stage the descriptions of
the equivalence of two expansions and then of well-sortedness are far
from understandable for me. This gets better with further reading,
but I do not think it will ever become clear without going through
some extra analysis in the study note. How much of it should be
imported into the summary?
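(For instance, if I now read it right: with Nat < Int and an operation
+ overloaded with profiles Nat x Nat -> Nat and Int x Int -> Int, a
term x + y with x, y : Nat has two fully qualified expansions, one
using the Nat profile and one embedding the arguments into Int and
using the Int profile; these are meant to count as equivalent, since
the two profiles are related by the subsort relation, and so the term
is well-sorted.)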
[The study note is about to be installed. Since the summary refers to
the study note, I propose to leave the summary as it is. --PDM]
Small things about this:
Where do these embeddings mentioned in the items come from? After all,
we cannot write them in terms, can we?
[Yes, using SORTED-TERM. But this should be explained. --PDM]
In the last items for terms and formulae, I presume you meant
"provided that the profiles are related by the subsorting relation",
right?
[Yes. --PDM]
*** p.17, the paragraph in the middle, about the injectivity requirement:
As I mentioned in an earlier message, I think this "injectivity" is
very imprecise here. What we must require is that the function built
amounts to a signature morphism, and then the idea might be to require
that this signature morphism is an isomorphism between the original
signature and the one formed as its image. This means that "unrelated
symbols cannot be mapped to related symbols". The semantic reason
behind this is that the reduct functor is surjective (it is an
isomorphism in fact) and the translation is nothing more than a
renaming of the symbols used. Otherwise the reduct need not be
surjective, and so the translation would potentially introduce some
extra "logical" requirements. Of course, the meaning is still clear:
identifying two names (or making them related) just corresponds to
introducing an axiom which would assert that.
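(A minimal example: take a signature with two constants a, b : s and a
translation mapping both to a single constant c : s. Any model of the
translated specification interprets c just once, so its reduct along
the translation satisfies a = b; only such models arise as reducts,
the reduct functor is not surjective, and the translation behaves as
if the axiom a = b had been added.)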
(I can even imagine that in some contexts, after combining some
specifications, the result may imply that some, say, constants are
indeed equal, and one may want to get rid of one of the superfluous
names.)
Rather than going into the details of what "injective" should mean
here, I would propose simply to remove this restriction
--- and discuss all the possible consequences and ways to avoid them
in some accompanying methodology note.
*** p.17, the last paragraph:
I guess it goes without saying by now that if an overloaded symbol is
hidden or revealed, this means that all its instances are hidden or
revealed (with all the consequences for the revealing of the sorts in
all the profiles).
I would also add somewhere here when describing the signature that the
subsorting relation on sorts is inherited exactly from the original
signature (otherwise the use of the term "smallest sub-signature"
might be misunderstood to mean "with the smallest ordering", which
clearly is not meant).
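(For instance, if the original signature has sorts s < t and one
reveals just an operation f : s -> t, the resulting sub-signature
contains s, t and f and keeps the relation s < t, rather than
acquiring some fresh, smaller ordering on these sorts.)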
*** p.19, SPEC-INST, FITTING-ARG
When looking at this, Michel and I realised that perhaps the
case when a number of parameters share some part need not be as rare
as I originally thought. Consequently, it might be a pain to have to
repeat the same mapping in all the fitting morphisms for each
parameter separately. A way out is that perhaps, instead of a fitting
morphism (or rather, its description as a SIG-MORPH) for each
argument separately, we might have a single description SIG-MORPH for
all the fitting morphisms together, leading to the following syntax:
SPEC-INST ::= spec-inst SPEC-NAME FITTING-ARG
FITTING-ARG ::= fitting-arg SPEC* SIG-MORPH
This would make more explicit what I think is in fact already the case:
a number of parameters is just a pretence for a single parameter which
is their union.
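(So an instantiation would look schematically like
  spec-inst SN (fitting-arg SP1 ... SPn SM)
with the single SIG-MORPH SM describing the fitting of the union of
the parameters --- hypothetical names, just to illustrate the shape.)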
[From a pragmatic point of view, it might be best to allow both:
SPEC-INST ::= spec-inst SPEC-NAME FITTING-ARG* SIG-MORPH?
FITTING-ARG ::= fitting-arg SPEC SIG-MORPH?
where omitted SIG-MORPHs are identity. The first outer one would
apply to common parts, the inner ones to particular parameters.
--PDM]
*** p.20, top line:
"may" is very right here. In what Michel and me proposed there is no
extra assumption that a component of a compound identifier is a name
within the given signature.
*** p.23, about "A UNIT-TYPE is well-formed only when the signature of
the union of the argument specifications is included in the signature
of the result."
Is this meant literally? Then it would mean that what we adopt
here is a quite different convention from what I thought. Namely, I
presumed that this inclusion of the argument signatures may happen
implicitly, so that a unit type
SPEC1,...,SPECn -> SPEC
in a sense "really" stands for
SPEC1,...,SPECn -> extend SPEC1,...,SPECn by SPEC
So the parameter specifications are of the same kind as OF-SPECs in
extensions (except that persistency does not make sense here, as it is
always required), and SPEC is of the same kind as BY-SPEC in extensions
(except that BY-SPEC is now simply SPEC :-) This would yield some
uniformity w.r.t. extension and generic specs.
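(Concretely, with hypothetical names: a unit type NAT -> LIST_OF_NAT
would then stand for NAT -> extend NAT by LIST_OF_NAT, so that
LIST_OF_NAT itself need not repeat the declarations of NAT.)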
Of course, in practice both SPECi's and SPEC will be externally
defined and grabbed from the library, in which case SPEC would
typically have to include the SPECi's anyway for completeness --- but
the rule as
sketched above does not do any harm in this case, as far as I can
see.
Best regards,
Andrzej
[Andrzej provided also a lot of helpful comments for improving the
wording and explanations in the Summary. Thanks! --PDM]