Formula:Formula CR 1

From XBRL

Feedback and resolutions

Issues requiring resolution

None.

Responses pending approval by FWG

None.

New features

None.

Removed features

None.

Changed features

18/09/08 Geoff Shuetrim: The definition of s-equal2 is not sufficient for determining matches for dimension aspects

The variable specification defines a match for the values of XDT dimension aspects using definitions in the XDT and XBRL 2.1 specifications.

The XDT specification states:

 Two facts have the same dimension if both have a dimension container whose content of the 
 dimension attribute are s-equal2 and both refers to the same dimension declaration [Def, 8] (sic).

I am not clear on what the last part adds, but it is clear that the content of the dimension attribute is subject to testing to see if it is s-equal2. The content of a dimension attribute is a sequence containing one text node.

 Two dimension values [Def, 18] are the same when they are s-equal2.

Again, it is clear that we need to compare dimension values to see if they are s-equal2. A dimension value is a singleton sequence containing a single XML element node.

 The s-equal2 operation is the same operation defined in section 4.10 of the XBRL 2.1 specification 
 replacing XPath 1.0 with XPath 2.0 in the definition of the x-equal operation and with the "XPath 1.0 
 compatibility mode" property set to false in the static context.

This takes us back to the XBRL 2.1 specification.

There, the relevant parts of the s-equal definition are:

 Two sequences are s-equal if every node in one sequence is s-equal to the node in the same position in the other sequence.

I am guessing that this is in error in that it does not require the two sequences to be of the same length.

 Two elements are s-equal if they are not identical 
 and their element local names and namespaces are both s-equal text, and the set of their attributes 
 are s-equal, and the sequence of text and sub-element contents are s-equal.

I am guessing that this is in error in that it does not require the sequence of text and sub-element contents to be in document order. I am guessing that s-equal text is the same thing as s-equal text strings.

 Two attributes are s-equal if their local names and namespaces are s-equal text strings if their values are x-equal.
 Two text strings are s-equal if they are x-equal.
 An XML object A is x-equal to an XML object B if the [XPATH] expression A = B 
 returns the value true (see http://www.w3.org/TR/xpath.html#booleans). In the case 
 of element and attribute values, those whose type are xsd:decimal, xsd:float, or xsd:double, 
 or derived from one of these types MUST be treated as numbers for the purposes of interpretation 
 of http://www.w3.org/TR/xpath.html#booleans. If a value has type xsd:boolean (or a type derived from xsd:boolean), 
 then it MUST be converted to an [XPATH] Boolean with '1' and 'true' being converted to true and '0' and 'false' being 
 converted to false. Values with any other XML Schema type are treated as [XPATH] strings.

Thus, matching dimension values are determined based upon a series of tests that reduce to x-equality tests of values that can have any data type, primitive or derived, allowed by XML Schema. These x-equality tests are not particularly helpful, though, for data types other than some numeric data types (decimal, float or double), boolean data types and text strings.

Any other data type in which equal values can have more than one lexical representation will cause problems.
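A minimal sketch of this problem, in Python rather than XPath: for data types with several lexical forms, a string-based comparison (which is what an XPath 1.0 "x-equal" reduces to for non-numeric, non-boolean types) diverges from typed-value equality. The `typed` helper and the sample pairs are illustrative, not part of any specification.

```python
from decimal import Decimal
from datetime import time

# (type, lexical form 1, lexical form 2) -- each pair denotes equal values.
pairs = [
    ("integer", "1", "01"),                   # leading zero
    ("decimal", "1.5", "1.50"),               # trailing zero
    ("time", "13:20:00Z", "13:20:00+00:00"),  # equivalent zone designators
]

def typed(kind, lex):
    """Map a lexical representation to its typed value."""
    if kind == "integer":
        return int(lex)
    if kind == "decimal":
        return Decimal(lex)
    if kind == "time":
        return time.fromisoformat(lex.replace("Z", "+00:00"))

for kind, a, b in pairs:
    # String comparison says "different"; typed comparison says "equal".
    print(kind, a == b, typed(kind, a) == typed(kind, b))
```

Each row prints `False True`: the lexical forms differ as strings even though the values they denote are equal.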

The XDT specification acknowledges this for xs:QNames, where it notes:

 The XBRL 2.1 specification is based on XPath 1.0. According to section 5.3 of the XPath 1.0 specification, 
 the content of attributes is always a string-value rather than a QName. XBRL APIs implementing dimensions 
 should take care of this and normalize the prefixes of QNames that appear in the dimension attribute of the 
 dimension container and in the content of the dimension container within the instance document.

Examples of these problems are shown in the test case variations compiled for the function registry function for testing for s-equal2 dimension values. The examples cover QNames, integers, some types based on text strings, and time data types.
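The QName caveat can be made concrete with a small sketch: two dimension attributes can name the same dimension with different prefixes, so comparing their string values fails unless each prefix is resolved against its in-scope namespace bindings. The namespace maps and prefixes here are hypothetical.

```python
def resolve_qname(lexical: str, nsmap: dict) -> tuple:
    """Expand a lexical QName into (namespace URI, local name)."""
    prefix, _, local = lexical.rpartition(":")
    return (nsmap[prefix], local)

ns = "http://example.com/dims"   # hypothetical dimension namespace
a = resolve_qname("d:ExplDim1", {"d": ns})
b = resolve_qname("dims:ExplDim1", {"dims": ns})

print("d:ExplDim1" == "dims:ExplDim1")   # False: raw string comparison
print(a == b)                            # True: expanded-name comparison
```

This is exactly the prefix normalisation the XDT note asks implementations to take care of.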

Possible resolutions include:

  1. Continuing to use s-equal2 and requiring a preprocessing transformation of XBRL instances to normalise all values for data types that are compared as strings but where equal values have several lexical representations. This would also require us to define the normalisations to use.
  2. Define our own notion of matched dimension values. For explicit dimensions, this is trivial. The QNames that are the dimension values must be equal based on the XPath 2.0 test of equality, taking their XML Schema data type into account. For typed dimensions, we need to define a notion of equality that respects the XML Schema data types of the information that constitutes the dimension values being matched. This would involve drafting our own modified version of the equality predicates in the XBRL 2.1 specification.
  3. Allow typed dimension declarations to provide their own aspect test. This would leave the notion of typed dimension equality in the hands of those defining the typed dimensions. Specifically, we could define a resource that could be associated (using XLink) with typed dimensions. That resource would contain an XPath 2.0 expression that was the aspect test for that aspect. If there was more than one such resource for a typed dimension, an error would be thrown. If there were no such resources, we would also throw an error. The aspect test would then embody the definition of dimension value equality for that typed dimension.
  • Resolution: Typed dimension declarations will have a default aspect test based upon option 2, where we define a notion of equality based upon a node-by-node assessment that node names match and that node values are equal in the sense implied by the XPath 2.0 eq operator. It will also be possible for a typed dimension to override this default aspect test by defining its own notion of equality through an XPath expression contained in a resource that is linked to the typed dimension's domain declaration via an XLink arc in a generic extended link. This override aspect test will cater to situations where the notion of equality does not align with the mapping from strings to typed values for nodes, as defined in XPath 2.0. An error will be thrown if a typed dimension declaration has more than one aspect test associated with it. Note that this determination will take into account all networks of relationships.
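The default aspect test in the resolution can be sketched as a node-by-node comparison: corresponding nodes must have the same expanded name, and leaf nodes must have equal typed values (standing in for the XPath 2.0 `eq` test). The `Node` representation and the namespace are illustrative assumptions, not spec API.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Node:
    name: tuple              # (namespace URI, local name)
    value: object = None     # typed value, already converted from the lexical form
    children: list = field(default_factory=list)

def default_aspect_match(a: Node, b: Node) -> bool:
    """Node-by-node match: names equal and typed values equal."""
    if a.name != b.name or len(a.children) != len(b.children):
        return False
    if not a.children:
        return a.value == b.value   # stands in for the XPath 2.0 'eq' test
    return all(default_aspect_match(x, y)
               for x, y in zip(a.children, b.children))

ns = "http://example.com/dims"   # hypothetical namespace
left = Node((ns, "restatementDate"), date(2008, 9, 18))
right = Node((ns, "restatementDate"), date(2008, 9, 18))
print(default_aspect_match(left, right))   # True: names and typed values agree
```

Because the comparison is on typed values rather than lexical strings, `2008-09-18` would also match an equivalent alternative lexical form of the same date.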

Muramoto:
[Question] Does above aspect test replace 'xfi:fact-dimension-s-equal2' of the following specifications?

Variables 1.0
  2 Aspects
    Dimension aspects are XBRL dimensions that are reported in the fact's segment or scenario. 
    The aspect test for this aspect is: xfi:fact-dimension-s-equal2($a,$b,#dimension) 
    where #dimension is the QName of the dimension for which the aspect is defined. 

[Comment] For the two reasons below, I think that the 'custom aspect test' should be expressed as options (the on/off switches 1.-5. below) on the 'default aspect test', rather than described from scratch using an XPath expression. In other words, it is better to limit the customisable range, since the new approach has a lot of flexibility.

  • A 'custom aspect test' will end up close to the 'default aspect test'. I think it is difficult to describe the whole 'DEFINITION:' above using an XPath expression. The settings that a user wants to customise reduce to the following points (they are only the ones I have noticed).
    1. Is the 'id' attribute compared or not?
    2. Are namespace declaration attributes compared or not?
    3. Is whitespace trimmed or not?
    4. Is the position of an element or attribute checked or not?
    5. Are comments and processing instructions checked or not?
  • All sorts of different 'custom aspect tests' will appear. Without a guideline, results will vary.

Geoff Shuetrim 2008-10-15: I am not sure that I have understood everything you are saying, but I do have concerns with the proposal: specifically, that it does not meet the identified requirement that users be able to provide their own determination of which lexical representations match. Thus, the proposal above would not deal with, for example, dateUnion data types in typed dimension values.

Muramoto: Is your point the following? When comparing xs:date to xs:dateTime with the XPath 2.0 eq operator, the XPTY0004 error is raised, but there is a requirement that users want to compare xs:date to xs:dateTime. Since I don't know how important that requirement is, I honestly can't answer. However, I think that we should proceed step by step: the first step is the 'default aspect test', the second step is the 'custom aspect test'.

Geoff Shuetrim 2008-10-16: This dateUnion example is just a simple and familiar situation showing that it can sometimes not be sufficient to define typed dimension aspect value equality in terms of the XML Schema mappings from the lexical representation to a given semantic meaning of the value. It is certainly not the only such example that I can think of. Another example would be of a typed dimension where the semantic meaning is not affected by capitalisation of the value of the typed dimension. That said, I agree with you that the definition of that default aspect test is a separate issue to defining custom aspect tests that override the default. In what sense do you see us getting the two issues mixed up?

Herm: My opinion is that this is an important base spec issue (because dateUnion does not indicate whether it is a start date or an instant/end date). A use case is restatements. If there is no base spec resolution, or no change to two distinct dateUnion data types, then only a human who knows the usage of the dateUnion can decide whether xs:date converts to a dateTime at the following or the preceding midnight, and a custom aspect test is required.

Andy Harris: UBmatrix is in agreement.

Muramoto: I understood the situation. Please move the discussion forward in sets with 'default aspect test' and 'custom aspect test'.

Andy Harris 2008-10-22: There are a few questions concerning the custom aspect test:

  • How will the custom aspect test resource be associated with a formula? Please specify the processing model as well as syntactical details (arcs and arc roles). Geoff Shuetrim 2008-10-23: The custom aspect test resources will be expressed in generic extended links. There will be generic arcs from them to the schema element declarations that are the associated typed dimensions' domain declarations. When processing a formula and operating on typed dimensions, it will be necessary to determine the aspect test for the typed dimension. This will be possible because you know the QName of the typed dimension; that gives you the XML Schema element declaration defining that QName, and that then enables discovery of any custom aspect test resource.
  • Is there additional metadata required on the arc that associates a formula with a custom aspect test? Geoff Shuetrim 2008-10-23: Not that I have recognised.
  • Is there a way to have a default association of a custom aspect test with a formula aspect model, such that an arc is not required for every formula where the custom aspect test is used? Geoff Shuetrim 2008-10-23: See the explanation above. No such proliferation of arcs is required.
  • Should the formula author be able to override any aspect test of an aspect model? Geoff Shuetrim 2008-10-23: No. The aspect tests are integral features of the dimensions, and dimensions should not have different interpretations depending on the formula that uses them, at least within a given aspect model.

Herm 2008-10-23: So should there be some way to say: I don't have any reason to make custom aspect tests for my typed dimensions, but please use some supplied XPath expression just for dateUnion comparisons wherever dateUnion objects appear in any of my typed dimensions? That would save me a whole lot of coding for the rest. Maybe a function my:elements-correspond(..., "dateUnionTest='blah blah'", "someOtherPatternTest='blah blah'"). I am not clear on how, but I am just pondering specifying the custom tests by type of object instead of by typed dimension.

Geoff Shuetrim 2008-10-23: No. I do not think the whole default test needs to be reimplemented except for the treatment of dateUnion values. When someone defines a custom test, they can rely on the dimension being valid (so that the content models match) and only really need to worry about the comparisons of specific semantic values. Try writing out the custom aspect test for a restatement-date typed dimension and think carefully about testing only what is not already given to you by XDT validation of the input instance.
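The restatement-date exercise can be sketched in Python rather than XPath. A hypothetical custom aspect test for a dateUnion-valued typed dimension must make exactly the judgement call the discussion identifies: whether a bare xs:date promotes to the preceding or the following midnight before comparison. Everything here is illustrative.

```python
from datetime import date, datetime

def promote(value, *, end_of_day: bool) -> datetime:
    """Map a dateUnion value (xs:date or xs:dateTime) onto one dateTime."""
    if isinstance(value, datetime):
        return value
    # The dimension author's choice: a bare date means the following midnight
    # (end/instant date) or the preceding midnight (start date). XDT
    # validation cannot make this decision; only the author can.
    d = date.fromordinal(value.toordinal() + 1) if end_of_day else value
    return datetime(d.year, d.month, d.day)

def date_union_equal(a, b, *, end_of_day=True) -> bool:
    """Custom aspect test: compare dateUnion values after promotion."""
    return promote(a, end_of_day=end_of_day) == promote(b, end_of_day=end_of_day)

# A bare 2008-12-31, read as an end date, matches midnight at 2009-01-01.
print(date_union_equal(date(2008, 12, 31), datetime(2009, 1, 1)))   # True
```

Note how little the custom test has to do beyond the semantic comparison: content-model checking is already guaranteed by XDT validation of the input instance.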

19/09/08 Victor: Consistency assertions of formulae returning values that are invalid according to their schema type

Here's a topic that is not clear in the spec, or at least we don't have use cases in the conformance suite covering it. Let's suppose we have three positive numeric items A, B and C (constrained by their schema definitions), and the following formula:

 A = B - C

If C is greater than B, the formula will return a fact that is invalid according to the XML Schema constraints. Now, let's assume that we have a consistency assertion for this formula with an absolute acceptance radius of 10. What should happen for these three values?

 A = 0
 B = 0
 C = 100

1) The consistency assertion should be evaluated and deemed not satisfied (the result is -100 which is not within the acceptance radius)

2) The consistency assertion should not be evaluated.

3) The consistency assertion should be evaluated and deemed not satisfied because the produced output is illegal

And for these three values?

 A = 0
 B = 0
 C = 5

1) The consistency assertion should be evaluated and deemed satisfied (the result is -5 which is within the acceptance radius)

2) The consistency assertion should not be evaluated.

3) The consistency assertion should be evaluated and deemed not satisfied because the produced output is illegal

Herm: It is my impression that we've attempted to separate formula processing from XML/XBRL validation (for example, in applying rules to the aspects of resulting facts). If we continue in that direction, then the formula processor is not tied to XML and XBRL validation of each aspect (such as value); a negative value gets put into the output instance document and processed by the formula processor for consistency, independently of any separate XML/XBRL validation of the resulting fact items. Thus the answer in the above cases would be (1). We've refrained from providing formula tests that depend on invoking XML and XBRL validation (on the resulting item), but we can rethink that if it doesn't delay CR2.

  • Resolution: The assertion specification will be updated to require the output fact to be valid XBRL before testing it against the assertion. If it is not valid, then the assertion is deemed to not be satisfied. This is in line with option 3.
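The resolution (option 3) can be sketched as follows: an output fact is tested against the absolute acceptance radius only if it is schema-valid, and an invalid output makes the assertion not satisfied. The function and parameter names are illustrative, not spec syntax.

```python
def consistency_satisfied(computed, reported, radius, *, schema_valid) -> bool:
    """Absolute-acceptance-radius test, gated on output validity (option 3)."""
    if not schema_valid:
        return False            # invalid output fact => assertion not satisfied
    return abs(reported - computed) <= radius

# Victor's first example: A=0 reported, B-C computes -100, radius 10.
# -100 violates the positive-number schema constraint, so not satisfied.
print(consistency_satisfied(-100, 0, 10, schema_valid=False))  # False
# Second example: B-C computes -5, which is within the radius of A=0, but
# -5 is still schema-invalid for a positive item, so still not satisfied.
print(consistency_satisfied(-5, 0, 10, schema_valid=False))    # False
```

Under option 1, by contrast, the second example would have been deemed satisfied, since |-5 - 0| is within the radius.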

18/09/08 Geoff Shuetrim: The XBRL 2.1 definition of duplicates is not sufficient for our purposes

The definition of duplicates in the XBRL 2.1 specification attempts to capture the notion of two facts having the same set of aspects and matching values for their corresponding aspects. We updated the variable specification to partition sequences of facts for variables that bind as a sequence, so that XBRL 2.1-style duplicates are not included in the same evaluation sequence. This is not sufficient, because XBRL 2.1 duplicates will not always be identified for all XDT-based facts, even where those facts are matched based upon their aspects.

  • Resolution: We have updated the variable specification to define a new term: "aspect-matched facts", which refers to two facts with the same set of aspects and with matching aspect values for all of those aspects. We then require partitioning of evaluation results for variables that bind as a sequence to partition aspect-matched facts rather than XBRL 2.1 duplicates.
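The partitioning in the resolution can be sketched as grouping facts by their full set of aspects (everything except the reported value). The dict-based fact representation is an assumption for illustration only.

```python
from itertools import groupby

def aspect_key(fact: dict) -> tuple:
    """All aspects of a fact: concept, period, dimensions -- not the value."""
    return tuple(sorted((k, str(v)) for k, v in fact.items() if k != "value"))

def partition_aspect_matched(facts):
    """Group facts so that aspect-matched facts share one partition."""
    ordered = sorted(facts, key=aspect_key)
    return [list(g) for _, g in groupby(ordered, key=aspect_key)]

facts = [
    {"concept": "Sales", "period": "2008", "dim:Region": "EU", "value": 10},
    {"concept": "Sales", "period": "2008", "dim:Region": "EU", "value": 12},
    {"concept": "Sales", "period": "2008", "dim:Region": "US", "value": 7},
]
print(len(partition_aspect_matched(facts)))   # 2: the EU facts are aspect-matched
```

A real implementation would compare dimension values with the aspect tests discussed above rather than by string key, but the grouping structure is the same.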

05/09/08 Andy Harris: Lost way to assemble a typed dimension from a rule

CR2 defines a new error code:

"Error code xbrlfe:badSubsequentOCCValue MUST be thrown if a subsequent OCC value contains information that implies a value for any other aspect in the aspect model of the formula than the OCC aspect whose value is being determined by the OCC rule being processed.

Thus, for example, ( xbrlfe:badSubsequentOCCValue ) would be thrown for a formula with a dimensional aspect model if an OCC rule produced a subsequent OCC value that included content that implies a dimension aspect value."

We still need a way to add the fragment or nodes for a typed dimension using explicit rules. Maybe the second quoted paragraph above should say 'that implies an explicit dimension value' instead of including typed dimensions in that error?

Victor: I think the place to give the value for a typed dimension is the dimension rule. I think we should change the schema to allow this kind of content. Or maybe have two different rules: one for explicit dimensions and another one for typed ones.

Geoff Shuetrim: I agree with Victor. We should just add a new dimension rule for typed dimensions.

  • Resolution: A new typed dimension rule has been added to the formula specification to provide the previously available capability of defining output typed dimension values using SAVs, XPath expressions and explicitly provided markup.

20/08/08 Víctor Morilla: Limited usability of dimension filters because of the recognised domain for a dimension definition

The current definition of the recognized domain of a dimension in the dimension filter states:

“A recognised domain for a dimension is the set of domain members allowed as values for that dimension by consecutive relationships that include a has-hypercube relationship expressed by an arc in an extended link with an @xlink:role attribute that is equal to the filter linkrole."

This definition is problematic as the following use cases show:

Use case 1: a taxonomy with an open hypercube without dimensions

 ConceptA
 |--> Hypercube (open)
 DimensionX
 |--> x0
   |-->x1
     |-->x11

The dimension X is allowed for the concept A. There is a hierarchy of members defined for X, but there are no consecutive relationships including the has-hypercube arc: A dimension filter is unusable!!!

Use case 2: an extension uses an all hypercube to constrain the set of members of its "parent" taxonomy that doesn't respect the original hierarchy

 (extended link role 1)
 |--> Hypercube 1 (all) 
   |--> Dim D
     |--> d
       |--> d1
       |--> d2
         |--> d21
         |--> d22
 (extended link role 2)
 |--> Hypercube 2 (all) (extension) 
   |--> Dim D
     |--> d
     |--> d2
     |--> d21

What are the children of d? d appears twice in the set of consecutive relationships!

Note that two hypercubes in the same extended link role are equivalent to an intersection of those hypercubes. Here I'm doing the intersection with a hypercube that is a subset of the previous one. As a consequence, this is equivalent to a single hypercube:

 |--> Hypercube intersection (all) 
   |--> Dim D
     |--> d
       |--> d2
         |--> d21

Herm and Victor on conference call: This child ambiguity makes the filter operate unreliably from a domain-member child-selection perspective.

Geoff Shuetrim: We should not be forced to use domain-member relationships to drive aggregation. They are already doing too much work for us in some situations. If we bring back the aggregator-contributor relationship and let that do the work of defining aggregation structures through dimension domains then this problem is more manageable.

Use case 3: Primary items whose domain for a dimension are different subsets of a common domain.

Given a dimension D with a member d0 (a total) and some children d1, d2, d3, d4, and three primary items a, b and c, we model a statement like this (the "x" marks invalid dimensional combinations):

     a   b   c
 d0
 d1  x
 d2      x
 d3          x
 d4

Now, let's define a formula to calculate the total of dimension D given its breakdown (d1, d2, d3, ...). I wish a single formula were enough for the three primary items (dTotal = sum(childrenOfdTotal)), but if the definition linkbase is like this:

 (extended link role 1)
 a
 |--> Hyp 1
   |--> D
     |--> d2
     |--> d3
     |--> d4
 (extended link role 2)
 b
 |--> Hyp 2
   |--> D
     |--> d1
     |--> d3
     |--> d4
 (extended link role 3)
 c
 |--> Hyp 3
   |--> D
     |--> d1
     |--> d2
     |--> d4

...I would need three different formulas! This could be solved if we were able to express more than one link role in the filter. But that's hard to maintain!!!


As a final part of this issue, the dimension filter specification states:

 If the explicit dimension filter has no filter members then 
 all domain members in the recognised domain for the filter 
 dimension meet the filter criteria.

The idea of a dimension filter without members is to filter any fact that has a value for the dimension. I don't see the need to refer to the members in a recognised domain (the linkrole element should not be specified in these cases).

Here's a use case. A taxonomy defines a formula with a group filter for all those facts reported for the dimension X. The definition linkbase is like this:

 (extended link role 1)
 A
 |-> B
 |-> C
 |-> Hypercube
   |->Dim X
      |-> x1
      |-> x2

Then, an extension includes additional values for dim X

 (extended link role 2)
 A
 |-> B
 |-> C
 |-> Hypercube
   |->Dim X
      |-> x3
      |-> x4

For the formula to work with the new set of members, we need to remove the existing filter and create a boolean 'or' of the previous dimension filter and a new one. But if the extension includes the new members in the same extended linkrole (an equivalent solution from the XDT point of view), the formula will work without changes. We are creating an unnecessary dependency between dimension filters and the hypercubes in the definition linkbase!!!

  • Resolution: These issues are resolved by modifying the dimension filter specification to reflect the view that the actual domain of a dimension is any domain member that can be validly reported for the dimension, for any of the primary items with which it can be associated, in any of the hypercube conjunctions in the DTS. This simplifies the dimension filter specification considerably. The linkrole on the dimension filter no longer identifies a domain that is specific to the item being filtered; the domain is independent of such considerations. Control over the subset of the domain accepted by a filter is instead provided by an enhancement of the filter member structure, allowing filter members to specify the network of relationships between domain members to use in the filtration, and to specify which parts of that network are acceptable and which are cause for filtering out facts. This approach solves all of the problematic use cases explained above and allows FINREP and COREP hierarchical dimensional formulae to be constructed very easily using the domain-member arcrole and the default extended linkrole. Traversal among relationship networks via the targetRole attribute is not necessary.

20/08/08 Geoff Shuetrim: Ambiguity can occur in relation to segment vs scenario dimension aspects.

XDT dimension values can be reported for facts if those facts are for primary items with open hypercubes and if those facts are supported by a DTS that includes explicit dimensions with default values that are not part of those open hypercubes. In such cases, the facts apparently have default values for the explicit dimensions with default values but there is no information about whether they are values in the segment or scenario. That allocation does not matter from an XDT perspective (with segment and scenario distinctions being ignored) even though the XDT spec goes to some lengths for all other dimension values to be clear about whether they are values for segments or scenarios.

This is important for the formula and related specifications because those specifications distinguish segment aspects from scenario aspects.

For example, implicit filtering does not treat matching values for a segment dimension and a scenario dimension as actually matching. Likewise for relative filtering. Similarly, OCC rules make a distinction, so if a dimension value is picked up from a fact with a default dimension value in either segment or scenario (we are not sure which), then the output OCC is unclear.

  • Resolution: The ambiguity is resolved by redesigning the treatment of dimension aspects to not distinguish between segment and scenario dimensions. This simplification was not feasible earlier because the XDT had not been changed to prohibit reporting a dimension value in both the segment and the scenario for the one fact. Now that a fact is not allowed to report a dimension value in both the segment and scenario this is a more acceptable approach and it is the only approach that adequately handles the issue raised here. This change has the following consequences:
  1. Various functions in the function registry no longer select dimension values based on the dimension being reported in the segment or scenario.
  2. The dimensional aspect model simplifies to just have dimension aspects rather than segment and scenario dimension aspects.
  3. The variable specification imposes more structure on the features required of an aspect model to ensure that OCC rules work in a consistent manner across all aspect models.
  4. Dimensional filters lose the attribute distinguishing whether they operate on segments or scenarios. This was introduced based on feedback to PWD4 but is unnecessary after simplifying the dimensional aspect model.
  5. The dimension OCC rule becomes a standard aspect rule, with dimensions being treated outside the treatment of the now more narrowly defined OCC content.

Alternative but rejected proposals to address the ambiguity include:

  • Assume that these kinds of ambiguous dimension values are in both segment and scenario. Doing so would interfere with filtrations that match the relevant dimension across facts.
  • If all dimensions in the universe of DTSs are being used in segment or all are being used in scenario then we could just assign all ambiguous dimensions to the container that everyone is using. After reviewing many DTSs, we cannot make assumptions about which is being used.
  • We could arbitrarily assign ambiguous dimension values to segment. That would again interact in unacceptable ways with filtering.
  • We could interpret default dimension values as dimensions that are not aspects of the facts that are reported with them. This interpretation of default dimension values as an absence of the dimension does not hold up consistently across XDT implementations or DTSs that use XDT dimension constructs.

21/08/08 Michele Romanelli: Is a source sequence containing a single nil fact assumed to be an empty sequence?

If not, this seems not to align with the treatment of such facts in things like the XBRL 2.1 calculation linkbase.

  • Response: Nil facts are currently treated just like any other fact. This means that a nil fact can prevent a fact variable from evaluating to a fallback value because it is present, even though it is not providing a fact value. This is a problem that needs to be addressed. We could address it by pre-filtering all facts in instances to exclude nil facts. The variable specification will be changed to make this the default behaviour of all fact variables. That is not a sufficient response, however, because there are some situations in which formula authors may want fact variables to evaluate to sequences containing nil facts. In such situations, those fact variables will identify that they do not exclude nil facts from the set of facts that are subject to filtering. For the majority of existing formulae, this will have no syntactic consequences while making their behaviour robust to the presence of nil facts.
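The resolved default behaviour can be sketched as a pre-filter that a fact variable applies before binding, with an opt-in flag for authors who want nil facts retained. The dict-based fact model and the `nils_allowed` name are illustrative assumptions.

```python
def bindable_facts(facts, *, nils_allowed=False):
    """Facts a fact variable may bind to; nil facts are excluded by default."""
    if nils_allowed:
        return list(facts)
    return [f for f in facts if not f.get("nil", False)]

facts = [{"concept": "A", "value": 5},
         {"concept": "A", "nil": True}]

print(len(bindable_facts(facts)))                     # 1: nil fact excluded
print(len(bindable_facts(facts, nils_allowed=True)))  # 2: author opted in
```

With the default, an instance containing only the nil fact would leave the variable with an empty sequence, allowing it to fall back as intended.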

13/08/08 Mark Goodhand: Duplicates

Extracted from an email to the base specification working group:

 On the subject of crazy things that should never have been allowed,  
 how does the formula spec cope with 'duplicate facts' (FRIS 2.8.1)?

Geoff Shuetrim: The current design provides no special treatment. Thus, for fact variables that do not bind as a sequence, duplicates will result in separate fact variable evaluations if they are both in the source sequence underpinning that fact variable evaluation. That seems fine to me. For fact variables that do bind as a sequence, duplicates currently both turn up in the one sequence evaluation for the fact variable. This can clearly cause trouble when using such variables to sum across the facts that the fact variable has evaluated to.

  • Resolution: The difficulty with duplicates for fact variables that bind as a sequence is not universal. It can obstruct usage of formulae to aggregate across dimensions. However, it can be extremely useful if you are performing some kind of assertion about the nature/number of duplicates in the instance. To cater to both situations the evaluation process for fact variables that bind as a sequence will be enhanced so that the author of the fact variable can control whether duplicates of facts in the source sequence cause additional partitionings of the source sequence and thus additional variable evaluations or whether duplicates are all placed in the one partitioning of the source sequence. Default behaviour for fact variables will involve additional partitionings because this would seem to be the more common desired behaviour and it aligns best with the evaluation model for fact variables that do not bind as a sequence. Fact variables where duplicates are to be included in the one evaluation sequence will explicitly identify that they require the alternative treatment of duplicates.

Víctor Morilla: I don't see how we can enhance the current solution. If we provide syntax to get only one fact of the set of duplicates, what criterion is to be followed to get one fact or another? For COREP/FINREP we have created an existence assertion that verifies that there are no duplicates in the instance document (just using a variable that binds as a sequence, with a tuple filter and a precondition that tests that the size of the sequence is greater than one); the rest of the formulae assume that there are no duplicates.

Geoff Shuetrim: The specification already has the feature you describe, where duplicates lead to a situation in which one of the duplicates needs to be chosen at the evaluation stage. This occurs for fact variables that do not bind as a sequence. The changes only bring the variables that do bind as a sequence into line with this behaviour, limiting the potential for duplicates to be hidden from users of formulae while still distorting their formula calculations.

Víctor Morilla: Ok, I get it now. I agree with the solution and with the default behaviour. However, I find it difficult to express. A variable set evaluates every possible combination of the different values for each variable that meet all filters. Those different values are different facts (if the variable doesn't bind as a sequence) and different partitions, which are disjoint sets of facts (if the variable binds as a sequence). With this approach we must allow different partitions with overlapping sets (e.g. partition1 A, B, C, D and partition2 A', B, C, D). Another issue is that a partition could have more than one duplicated fact. In that case, it stands to reason to produce the cartesian product of all of them (e.g. if A and B have duplicates: A, B, C, D - A', B, C, D - A, B', C, D - A', B', C, D). This is giving me the creeps...

Geoff Shuetrim: The creepiness derives from the potential for duplicates. Eliminate them and there is no creepiness. Have them in the instance, and we need to respect them.

31/07/08 Masatomo Goto: Dimension-defaults may cause ambiguity for the aspects of facts in the dimensional aspect model

See the relevant conformance suite test: 49210 V04.

An example of the ambiguity is in the SharePoint area.

  • [1] (this is not in the public domain.)

This example is set up as follows:

Formula definition:

<Formula> @value= $v1
 |
 |-- $v1<Fact Variable>
 |    +--<General Filter>  @test=".ne 'ghi'"
 |
 |-- $v2<Fact Variable>
      +--<General Filter>  @test=".eq 'ghi'"
      +--<Relative Filter> refers to $v1

XDT dimension definitions in the taxonomy:

Role1:  http://xbrl.org/formula/conformance/example/role/linkFor3DimsInSeg

 PriItem1
   HyperCube1 (closed)
     ExplDim1
        ExplDim1Dom 
           ExplDim1DomMbr1
           ExplDim1DomMbr2
     ExplDim2
        ExplDim2Dom
           ExplDim2DomMbr1
           ExplDim2DomMbr2
     ExplDim3
        ExplDim3Dom
           ExplDim3DomMbr1
           ExplDim3DomMbr2 (*defined as dimension-default in http://www.xbrl.org/2003/role/link )

Role2: http://xbrl.org/formula/conformance/example/role/linkFor2DimsInSeg

 PriItem1
   HyperCube1 (closed)
     ExplDim1
        ExplDim1Dom
           ExplDim1DomMbr1
           ExplDim1DomMbr2
     ExplDim2
        ExplDim2Dom
           ExplDim2DomMbr1
           ExplDim2DomMbr2

Role3: http://www.xbrl.org/2003/role/link

 ExplDim3
    ExplDim3Mbr2 (dimension-default)

input instance (three facts with contexts):


(fact1)
 <context id="context-1">
 ...
   <segment>
     <explicitMember dimension="ExplDim1">ExplDim1Mbr1</explicitMember> 
     <explicitMember dimension="ExplDim2">ExplDim2Mbr1</explicitMember> 
     <explicitMember dimension="ExplDim3">ExplDim3Mbr1</explicitMember> 
   </segment>
 ...
 </context>
 <PriItem1 contextRef="context-1">abc</PriItem1> 

(fact2)
 <context id="context-2">
 ...
   <segment>
     <explicitMember dimension="ExplDim1">ExplDim1Mbr1</explicitMember> 
     <explicitMember dimension="ExplDim2">ExplDim2Mbr1</explicitMember> 
   </segment>
 ...
 </context>
 <PriItem1 contextRef="context-2">def</PriItem1> 

(fact3)
 <context id="context-3">
 ...
   <segment>
     <explicitMember dimension="ExplDim1">ExplDim1Mbr1</explicitMember> 
     <explicitMember dimension="ExplDim2">ExplDim2Mbr1</explicitMember> 
   </segment>
 ...
 </context>
 <PriItem1 contextRef="context-3">ghi</PriItem1>

With this test, we think there is an ambiguity about the aspects of fact3.

Here is the process of the formula evaluation:

1. bind facts for $V1 with general filter(@test=".ne 'ghi'"): fact1, fact2
2. bind facts for $V2 with 1st filter[general filter(@test=".eq 'ghi'")]: fact3
3. bind facts for $V2 with 2nd filter[relative filter]: UNCLEAR! Two possible bindings. 

Because of default dimensions, the context for fact3 can match for both of the closed hypercubes.

Role1: http://xbrl.org/formula/conformance/example/role/linkFor3DimsInSeg
Role2: http://xbrl.org/formula/conformance/example/role/linkFor2DimsInSeg


  • Response: Formulae require that there be no ambiguity inherent in the aspects of facts in an instance. Otherwise we can obtain situations in which a single fact in the instance could be reporting two quite different pieces of information, one qualified by one set of aspects and the other by another set. This is intolerable from a reporter's perspective, because it means that a fact can only be reported if its value is ALWAYS the same regardless of the configuration of aspects that qualify it, and that cannot be relied upon. Rather than changing the formula specifications to accommodate such fact ambiguity, we will require a formula processor to throw an error if the set of aspects and their values is not uniquely identified for every fact in an XBRL instance. Note that this is not specific to the XDT specification, which motivated this concern. It will be a quality control on all aspect models.
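The proposed quality control could be sketched as follows (purely illustrative; the function and error text are not drawn from any specification). For each fact we enumerate the aspect configurations it could carry, e.g. one per hypercube it validates against, and signal an error when they disagree:

```python
def check_aspect_uniqueness(fact_aspect_sets):
    """Raise an error if any fact admits more than one distinct set of
    aspect values, as fact3 does under the 2- and 3-dimensional hypercubes."""
    for fact, aspect_sets in fact_aspect_sets.items():
        distinct = {frozenset(aspects.items()) for aspects in aspect_sets}
        if len(distinct) > 1:
            raise ValueError(f"ambiguous aspects for fact {fact!r}")

# fact3 matches Role2's hypercube directly, and Role1's hypercube via the
# ExplDim3 dimension-default, yielding two competing aspect sets:
ambiguous = {
    "fact3": [
        {"ExplDim1": "Mbr1", "ExplDim2": "Mbr1"},
        {"ExplDim1": "Mbr1", "ExplDim2": "Mbr1", "ExplDim3": "DefaultMbr"},
    ],
}
try:
    check_aspect_uniqueness(ambiguous)
except ValueError as error:
    print(error)  # ambiguous aspects for fact 'fact3'
```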

Herm Fischer: Test case variations 43230 v-06 and v-07 have been updated to match FINREP production facts. The n1 fact that is intended to come from the 1-dimensional link role is not alone; it has n1 fact siblings with double-dimension contexts clearly belonging to the 2-dimensional link role.

06/8/08 Hitoshi Okumura: The augment attribute is inadequately explained in the specification of OCC rules

The relation between the xbrlfe:innappropriateAugmentAttribute error and OCC dimension rules is too vague. If an OCC dimension rule is not the first OCC aspect rule to be processed and it has @augment=false, how will the OCC dimension rule be processed?

According to the current spec, an OCC fragment rule or an OCC XPath rule in that position triggers the xbrlfe:innappropriateAugmentAttribute error, but an OCC dimension rule seems not to trigger it. Is this interpretation correct?

>>> Spec definition

 Error code xbrlfe:innappropriateAugmentAttribute MUST be thrown
 if an OCC fragment or XPath rule is not the first OCC fragment or XPath rule
 to be processed and has an @augment attribute.

>>> Spec definition

If correct, I think we need the test case for this non-error situation of OCC dimension rules.

For example, in 12061 V-13,

OCC dimension rules are defined as ....

 <formula:aspects source="v1">
   <formula:occDimension occ="segment" augment="false" dimension="eg:TypedDim1" />
 </formula:aspects>
 <formula:aspects source="v2">
   <formula:occDimension occ="segment" dimension="eg:ExplDim2" />
 </formula:aspects>
 <formula:aspects source="v3">
   <formula:occDimension occ="segment" dimension="eg:ExplDim1" />
 </formula:aspects>

...

Expected result is defined as ...

 <segment>
   <xbrldi:typedMember dimension="eg:TypedDim1">
     <eg:typedDim1Value>dimVal1</eg:typedDim1Value>
   </xbrldi:typedMember>
   <xbrldi:explicitMember dimension="eg:ExplDim2">eg:ExplDim2Mbr2</xbrldi:explicitMember>
   <xbrldi:explicitMember dimension="eg:ExplDim1">eg:ExplDim1Mbr2</xbrldi:explicitMember>
 </segment>

...

If these rules are changed as follows, what result will be expected?

* only 3rd OCC rule is changed to have @augment='false'.

...

 <formula:aspects source="v1">
   <formula:occDimension occ="segment" augment="false" dimension="eg:TypedDim1" />
 </formula:aspects>
 <formula:aspects source="v2">
   <formula:occDimension occ="segment" dimension="eg:ExplDim2" />
 </formula:aspects>
 <formula:aspects source="v3">
   <formula:occDimension occ="segment" augment="false" dimension="eg:ExplDim1" />
 </formula:aspects>

...

Is the following result expected? (However, if this is correct, I feel something is wrong.) ...

 <segment>
   <xbrldi:explicitMember dimension="eg:ExplDim1">eg:ExplDim1Mbr2</xbrldi:explicitMember>
 </segment>

...

  • Response: The augment attribute is proving to be a syntactic disaster, making this section of the specification quite hard to follow. The purpose of the augment attribute is to enable a segment or scenario OCC rule set to indicate whether the first aspect rule in the set is intended to use the SAV's OCC as its original OCC or to use the empty set of aspect values instead. We have replaced the augment attribute with a new kind of OCC rule that always produces an empty subsequent OCC regardless of the original OCC.
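A minimal model of the revised rule chaining (names invented for the sketch; 'clear' stands in for the new rule kind and 'add' for ordinary OCC rules that contribute fragments):

```python
def apply_occ_rules(original_occ, rules):
    """Fold OCC rules left to right. A 'clear' rule always yields an empty
    subsequent OCC, replacing augment='false' on the first rule of a set;
    an 'add' rule appends its fragments to the OCC it receives."""
    occ = list(original_occ)
    for kind, fragments in rules:
        if kind == "clear":
            occ = []
        else:  # "add"
            occ = occ + fragments
    return occ

# Mirrors the effect intended in 12061 V-13: the OCC inherited from the
# SAV is discarded, then the three dimension fragments are appended.
result = apply_occ_rules(
    ["inherited-from-SAV"],
    [("clear", None),
     ("add", ["TypedDim1"]),
     ("add", ["ExplDim2"]),
     ("add", ["ExplDim1"])])
print(result)  # ['TypedDim1', 'ExplDim2', 'ExplDim1']
```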

10/7/08: Andy Harris: Interactions Between Fallback Values and Application Dependent Evaluation Orders

Variable evaluation orderings are partially processor dependent because evaluation ordering is only constrained by explicit dependencies between variables and there can be many different orderings that satisfy those dependency constraints.

Different variable orderings can lead to different allowed variable-set evaluations if some of the fact variables are allowed to evaluate to fallback values. See target instance 2 for example 2 below.

This dependency of possible evaluation outcomes on evaluation ordering, which is partially application dependent, is unacceptable. It arises because the variable specification has defined fact variable evaluation to only use the fallback value if the fact variable source sequence is empty. This prevents us from testing to see if the fallback value sufficiently relaxes the constraints on the source sequence for subsequently evaluated variables to enable them to evaluate also.

  • Resolution

The resolution needs to ensure that discretion over variable evaluation order does not impact upon the set of legal variable set evaluations.

The resolution also needs to ensure that the legal variable-set evaluations interpret fallback values as the values to use for facts that are missing from the instance.

The proposed resolution involves first defining a set of potential variable-set evaluations that extends the current set of legal variable-set evaluations by allowing fact variables that can evaluate to fallback values to do so, even when they could also evaluate to a sequence of facts. This set of potential evaluations then needs to be restricted to ensure that fallback values do not override and contradict facts that exist in the target instance.

Fact variables with fallback values are allowed to evaluate either to a value based upon their source sequence or to a value based upon their @fallbackValue attribute. However, any variable-set evaluation in which a fact variable, V, has evaluated to a fallback value is ruled out if that fact variable could have evaluated to a non-fallback value without forcing a change in the value of any of the fact variables in the variable set that have not evaluated to a fallback value and that do not have a dependency on fact variable V.

Expressing this more formally:

Given the set of "potential evaluations of a variable-set" (the current definition of variable-set evaluation, including fallback values as alternatives), a potential evaluation "a" of the variable-set "V" is a legal variable-set evaluation if there does not exist a variable "Vi" such that "ai" (the value given to that variable in the evaluation "a") is the fallback value and there exists another potential evaluation "b" such that "bi" is not the fallback value and, for every variable "Vj" (where i != j) in the variable set, at least one of the following conditions is satisfied:

  • "Vj" has a dependency upon "Vi"
  • "aj" has been evaluated to the fallback value
  • "bj" has not been evaluated to the fallback value and it is identical to "aj"

Clarification on the notation:

  • "V" is the definition of a variable-set. It is "static" information; it is part of the DTS
  • "V1" is the "first" variable in the variable set (given an arbitrary order, it doesn't matter; it could be the order of evaluation, but it could be a different one). "V1" is static information as well; it is part of the DTS.
  • So, V is composed of a sequence of variables [V1, V2, ..., Vn]
  • If "Vi" has a filter whose XPath expression has a reference to "Vj", then "Vi" has a dependency on "Vj". If "Vi" has a dependency on "Vj", and "Vj" has a dependency on "Vk", then "Vi" has a dependency on "Vk".
  • "a" and "b" are potential evaluations of the variable set "V". This is "runtime" information: given an instance, a formula processor produces a set of potential evaluations.
  • "ai" is the value given to the variable "Vi" in the evaluation "a". So, "a" is composed of a sequence [a1, a2, ..., an]
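The rule above can be sketched in Python (a hand-rolled illustration; FALLBACK is a sentinel value and depends_on supplies the static dependency relation - none of this is specification syntax):

```python
FALLBACK = object()  # sentinel marking "this variable fell back"

def legal_evaluations(potential, depends_on):
    """Filter the potential evaluations: strike out any evaluation a in
    which some variable Vi fell back although another potential evaluation
    b binds Vi to facts without disturbing the non-fallback variables that
    do not depend on Vi."""
    def ruled_out(a):
        for i, ai in enumerate(a):
            if ai is not FALLBACK:
                continue
            for b in potential:
                if b[i] is FALLBACK:
                    continue
                if all(depends_on(j, i)                      # Vj depends on Vi
                       or a[j] is FALLBACK                   # aj also fell back
                       or (b[j] is not FALLBACK and b[j] == a[j])
                       for j in range(len(a)) if j != i):
                    return True
        return False
    return [a for a in potential if not ruled_out(a)]

# With no inter-variable dependencies, a gratuitous fallback for the third
# variable in 2008 is ruled out, while the genuine one for 2006 survives:
pots = [("A 2008", "B 2008", "C 2008"),
        ("A 2007", "B 2007", "C 2007"),
        ("A 2006", "B 2006", FALLBACK),
        ("A 2008", "B 2008", FALLBACK)]
print(legal_evaluations(pots, lambda j, i: False))
```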

Examples

  • Example 1

Variables:

  $a: filters concept A and period 2008 
   (concept and period aspects are covered)
  $b: filters concept B
   (concept is covered)
   (Defines a fallback value)
  $c: filters concept C 
   (concept is covered)

Facts in the target instance:

  A year 2008
  B year 2007
  C year 2006

Desired evaluations:

Evaluation 1.

  $a = A 2008
  $b = fallback value
  $c = C 2006

Evaluations given the current specification:

 If the evaluation order is $a, $b, $c then there will be no evaluation outcome
 because $b will evaluate after $a, equalling the B 2007 fact and then implicit
 filtering for $c will cause $c not to evaluate because of the period mismatch. 
 If the evaluation order is $a, $c, $b then we get the desired evaluation outcome (evaluation 1)
 because $c evaluates to C 2006 and then B has an empty source sequence due to implicit filter
 matching of the period aspect so it falls back.

Evaluations using the proposed resolution:

Evaluation 1.

  $a = A 2008
  $b = fallback value
  $c = C 2006

Thus the proposed resolution works for this example.

  • Example 2

Variables:

  $a: filters concept A
   (concept aspect is covered)
  $b: filters concept B
   (concept aspect is covered)
  $c: filters concept C 
   (concept aspect is covered)
   (Defines a fallback value)

Facts in the target instance:

  A year 2008
  B year 2008
  C year 2008
  A year 2007
  B year 2007
  C year 2007
  A year 2006
  B year 2006
  (Note: no fact for concept C for period 2006)

Desired evaluations:

Evaluation 1.

  $a = A 2008
  $b = B 2008
  $c = C 2008

Evaluation 2.

  $a = A 2007
  $b = B 2007
  $c = C 2007

Evaluation 3.

  $a = A 2006
  $b = B 2006
  $c = fallback value


Evaluations given the current specification:

If the variable evaluation ordering is $a, $b, $c, then the application can produce
evaluations 1 and 2 and 3.
If the variable evaluation ordering is $a, $c, $b, then the application can produce
evaluations 1 and 2 and 3.
If the variable evaluation ordering is $c, $a, $b, then the application can produce
evaluations 1 and 2 but not 3 because $c never falls back to the defined fallback value.

Evaluations using the resolution:

The following evaluations are possible regardless of the evaluation order for the variables.

Evaluation 1.

  $a = A 2008
  $b = B 2008
  $c = C 2008

Evaluation 2.

  $a = A 2007
  $b = B 2007
  $c = C 2007

Evaluation 3.

  $a = A 2006
  $b = B 2006
  $c = fallback value

Thus the proposal works for this example also.

  • Example 3

Variables:

  $a: filters concept A
   (concept aspect is covered)
  $b: filters concept B
   (concept aspect is covered)
   (Falls back to zero)
  $c: filters concept C 
   (concept and period aspects are covered)
   (period filter matches the year before $b's period or, if $b has fallen back, matches 2008)

Facts in the target instances:

Target instance 1:

 A 2008
 B 2008
 C 2007

Target instance 2:

 A 2008
 B 2007
 C 2008

Target instance 3:

 A 2008
 B 2008
 C 2008
 C 2007

Desired evaluations:

For instance 1:

 Evaluation 1.
   $a = A 2008
   $b = B 2008
   $c = C 2007

For instance 2:

 Evaluation 1.
   $a = A 2008
   $b = 0
   $c = C 2008

For instance 3:

 Evaluation 1.
   $a = A 2008
   $b = B 2008
   $c = C 2007

Evaluations given the current specification:

Note that any evaluation order where $c is evaluated after $b is conformant with the specification.

Target instance 1: Regardless of evaluation order, the one desired evaluation will obtain.

Target instance 2: Evaluating $a first, we would get evaluation 1 as desired. Evaluating $b first, however, we would get no evaluations, because $b would not evaluate to a fallback value and so $c would not evaluate. Thus the legal evaluations for the variable set depend on the processor-dependent evaluation order.

Target instance 3: Using any evaluation order, we would get the desired evaluation 1.

Evaluations using the resolution:

For instance 1:

 Evaluation 1.
   $a = A 2008
   $b = B 2008
   $c = C 2007

For instance 2:

 Evaluation 1.
   $a = A 2008
   $b = 0
   $c = C 2008

For instance 3:

 Evaluation 1.
   $a = A 2008
   $b = B 2008
   $c = C 2007
  • Example 4

Variables:

$a: filters concept A
 (covers the concept aspect)  
 (fallbackValue=0)
$b: filters concept B
 Period filter with XPath expression "if $a != 0 then period = period($a) - 1 else 2008" (cover = true)
 (covers the concept and period aspect)  

Target Instance:

 A 2008
 B 2007
 B 2008


Desired evaluations:

 Evaluation 1.
   $a = A 2008
   $b = B 2007


Evaluations given the current specification:

 Evaluation 1.
   $a = A 2008
   $b = B 2007

This is because $b has a dependency on $a so it evaluates second. $a evaluates to a non-empty source sequence so it does not fall back. Thus, $b evaluates to have a period lagged by one year.

Evaluations using the resolution:

 Evaluation 1.
   $a = A 2008
   $b = B 2007
  • Example 5

Variables:

$a: filters concept A
 (covers the concept aspect)  
 (fallbackValue=0)
$b: filters concept B
 (covers the concept aspect)  
 (fallbackValue=0)
$c: filters concept C
 (covers the concept aspect)  
 (fallbackValue=0)

Target Instance:

 A 2008
 B 2008
 C 2008

Desired evaluations:

 Evaluation 1.
   $a = A 2008
   $b = B 2008
   $c = C 2008

Evaluations given the current specification:

 Evaluation 1.
   $a = A 2008
   $b = B 2008
   $c = C 2008

Evaluations using the resolution:

 Evaluation 1.
   $a = A 2008
   $b = B 2008
   $c = C 2008
  • Relevant test cases

13/7/08: Herm: a set of test cases contributed by Victor has been added to 22180 in subdirectory 22180-implFilter-cases, 22180 v-30 - v-34.

14/7/08: Herm: Added test cases 22180 v-10 & v-11 as the simplest illustration of the preceding algorithm. The dynamism of the algorithm in v-10 is that three fact variables, any combination of which can be absent, still produce a result. V-11 adds a simple interdependency, so its fact variable and dependency are treated by the algorithm for both the present and fallen-back cases.

15/7/08: Herm: Added test cases 22180 v-12 & v-13 for a subbranch (as above) that has a variable dependent on the subbranch result. Bound or fallen-back variables may be depended on by another variable (not just by the formula construct), requiring the algorithm to evaluate fallen-back values used by later dependent variables (here a generalVariable). V-13 has tandem dependencies (a fact variable on the preceding bound or fallen-back variables, and then a general variable on that fact as well as on the other bound or fallen-back variables).

23/7/08: Herm: Added test cases 22180 v-15 - v-19 to correspond to example 3 above, with v-18/v-19 handling example 4 above. In v-18, $b is made the source variable, so that the period for the produced fact can grab $b's period. But for v-19, the source is formula:uncovered; when $a has fallen back (and is, I believe, aspectless and atomic), no factVariable can contribute an uncovered period, and I'd expect xbrlfe:undefinedSAV to be raised (on the fallen-back evaluation). So I show one fact produced in the result (where $a does not fall back) and none in the second case (where $a falls back and causes the exception to be raised).

08/08/08: Herm: Existing test case 22180 v-10 corresponds to example 5 above. Cases v-18/v-19, per above note corresponding to example 4, have been revised to new behavior of example 4 as above.

10/08/08: Herm: New test case variations 22180 v-51 to v-57 correspond to Finrep formulas I'm working on (here abstracted from financial liabilities associated with transferred financial assets, total carrying amount, original assets, available for sale). These have fallback values, and different formula terms have somewhat similar dimensions with different primary item link roles; the variations also explore the impact of dimension defaults. Without the recent clarification, many superfluous and financially silly fallback values would have occurred.

29/5/08: Víctor: In scope variables of acceptance radius XPath expression in Consistency Assertions

The specification establishes the following context of evaluation of the XPath expression of the acceptance radius in consistency assertions:

  • Have as the context item an atomic value equal to the numeric value of the derived fact
  • Include assertion parameters among the in-scope variables

So, variables in the variable set of the formula being checked are not required to be part of the expression's in-scope variables. I suggest including an additional point so that these variables are included (maybe in the next CR?). The reason is that the current approach prevents the definition of an acceptance radius that depends on the number of facts used to calculate the derived fact. E.g.:

 TotalIncomes = sum($IncomesByCountry)

where $IncomesByCountry is a sequence of each value of the concept incomes reported for a value of the dimension country.

The accumulated error due to the precision is proportional to the number of facts that have been included in the sum (this was actually an important flaw of the calculation linkbase). So, it is sensible to define an acceptance radius like this:

 count($IncomesByCountry) * $margin

Resolved: Added the variables in the variable set evaluation for the formula being used for consistency assertion testing to the set of in-scope variables used when computing the acceptance radius for the consistency assertion. This means that acceptance radii need to be computed once for each evaluation of a consistency assertion formula.
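A rough numeric analogue of such an assertion (all names invented; the spec-level mechanics of binding $IncomesByCountry are elided):

```python
def consistency_check(derived, reported, contributing_facts, margin):
    """Accept the reported total when it lies within an acceptance radius
    that grows with the number of facts contributing to the derived value,
    i.e. count($IncomesByCountry) * $margin."""
    radius = len(contributing_facts) * margin
    return abs(derived - reported) <= radius

incomes = [10.01, 19.99, 30.02, 40.01]   # incomes for four countries
derived = sum(incomes)                   # roughly 100.03
print(consistency_check(derived, 100.00, incomes, margin=0.01))   # True
print(consistency_check(derived, 100.00, incomes, margin=0.001))  # False
```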

13/6/08: Andy Harris: The specification is incomplete in regard to fallback values that include sequences of facts

A formula author has the ability to define a fallback value using an XPath expression that references other fact variables, thus potentially causing the fallback value to be a sequence of facts. The variable specification is silent on how such sequences impact specification features like implicit filtering and relative filtering. The specification is also silent on how to handle a fallback value that is a sequence containing a mix of facts and atomic values.

  • Resolved: As noted by Victor, this does not cost us functionality. Note that this change still leaves formula authors responsible for ensuring that fallback values do not throw unintended type checking errors in XPath 2.0.


  • Resolved: Variables (but not parameters) will be out of scope for the evaluation of fallback values. The specification will remain silent with regard to fallback values that are sequences that include facts as well as atomic nodes. With this change, an XPath error will be thrown if the fallback value includes references to variables (but not parameters) in the same variable set because they will not be in scope when evaluating the fallback expression.

The kind of flexibility that causes the problems that Andy has raised can be achieved without this kind of usage of fallback values. Specifically, every fallback value that uses a fact variable can be replaced by an empty sequence plus a condition in the evaluation of the formula / assertion. For example:

 factVariable A
 factVariable B
 factVariable C fallbackValue("$B")
 test="$A = $C"

can be replaced by

 factVariable A
 factVariable B
 factVariable C fallbackValue("()")
 test="$A = if (empty($C)) then $B else $C"

16/6/08: Víctor Morilla: Complement filters should be allowed to cover aspects.

Reviewing the specification I've found a small sentence that had gone unnoticed by me in previous reviews. Chapter 3.4.1 of the variable specification (filters) states: "A filter complement never covers an aspect". I don't see the need for this. I'd expect the same behaviour from complemented and non-complemented filters. Whether I'm filtering for "TotalIncomes" or for any concept other than "TotalIncomes", I'm using a filter that works on the concept aspect.

A motivating use case follows:

A rule to check that consolidated incomes are greater than the incomes of the branches in different countries. The country dimension is composed of a "Total" member (that does not follow this rule) and a set of countries. The concept ConsolidatedIncomes is never reported for the dimension country whereas the concept IncomesByCountry always is.

A first version of this validation can be expressed like this (no complemented filters):

 Fact variable (consolidatedIncomes):
   Concept filter: prefix:ConsolidatedIncomes (cover = true)
 Fact variable (incomesByCountry):
   Concept filter: prefix:IncomesByCountry (cover = true)
   Dimension filter:  Country is a descendant of TotalCountry (cover = true)  (note that I want to exclude the TotalCountry member)
 test="$consolidatedIncomes >= $incomesByCountry"

This rule works as expected. Now let’s suppose that we realized there is already a dimension filter to pick the total country and we want to reuse it:

 Fact variable (consolidatedIncomes):
   Concept filter: prefix:ConsolidatedIncomes (cover = true)
 Fact variable (incomesByCountry):
   Concept filter: prefix:IncomesByCountry (cover = true)
   Dimension filter:  complement of Country = TotalCountry (cover=  true)
 test="$consolidatedIncomes >= $incomesByCountry"

According to the current specification, the second validation will behave differently from the first. Actually, it won't produce any results, as ConsolidatedIncomes is not reported for the country dimension.

These two rules should behave the same. Moreover, note that every filter explicitly states whether the aspect is or is not to be covered. So why leave uncovered something that the author has explicitly covered?

  • Resolved: Complemented filters will now be coverable. Coverage will be the same as for non-complemented filters. Coverage will be determined by appropriate setting of the relevant attribute.
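A toy model of the resolved behaviour (class and attribute names are invented): complementing inverts which facts pass, while the coverage flag is honoured either way.

```python
class AspectFilter:
    """A filter over one aspect: a predicate, a cover flag, and an
    optional complement flag."""
    def __init__(self, predicate, cover=True, complement=False):
        self.predicate = predicate
        self.cover = cover
        self.complement = complement

    def matches(self, fact):
        hit = self.predicate(fact)
        return (not hit) if self.complement else hit

    def covers(self):
        # Resolved behaviour: coverage follows the cover attribute,
        # whether or not the filter is complemented.
        return self.cover

# "Complement of Country = TotalCountry", still covering the dimension:
not_total = AspectFilter(lambda country: country == "TotalCountry",
                         cover=True, complement=True)
print(not_total.matches("Spain"))         # True  - branch countries pass
print(not_total.matches("TotalCountry"))  # False - the total is excluded
print(not_total.covers())                 # True  - the aspect stays covered
```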

Syntax changes

26/7/08 Andy Harris: Covering properties and filtering properties of filters are not always able to coincide.

It appears that there is no specified way for the formula author to force the covering of an aspect using a filter, independent of aspect value usage in the target instance. Most, if not all, filters have implied XPath expressions that require aspect values to exist in the target instance before they can be covered.

For example, the implied XPath expressions of the segment and scenario (ssf) filters are:

xfi:scenario(.)[#test]
xfi:segment(.)[#test]

The #test=true() means the aspect value must exist (and gets covered), so an ssf filter with a test of true() still requires that the fact being filtered has a segment/scenario (e.g., ()[true()] fails).

A context from the target instance:

<context id="context-1">
  <entity>
     <identifier scheme="http://xbrl.org/entity/identification/scheme">01</identifier>
  </entity>
  <period>
     <instant>2007-12-31</instant>
  </period>
 </context>

The only way to force the aspects to get covered is by putting extra fragments in the scenario/segment:

 <context id="context-1">
   <entity>
     <identifier scheme="http://xbrl.org/entity/identification/scheme">01</identifier>
     <segment>
        <anything />
     </segment>
   </entity>
   <period>
      <instant>2007-12-31</instant>
   </period>
   <scenario>
      <anything />	
   </scenario>
 </context>

It does not seem reasonable to ask submitters to alter their instances in order for a formula to evaluate correctly for aspect covering functionality (implicit filtering, sequence partitioning, relative filters, consistency assertions, etc). It would be very convenient to author the following filters and have them cover, independent of aspect value usage in the target instance:

<ssf:segment xlink:type="resource" xlink:label="anycontext" test="true()" />
<ssf:scenario xlink:type="resource" xlink:label="anycontext" test="true()" />

This is a common usage pattern when authoring a formula.

Furthermore, the processing requirements of this class of filters become greatly reduced for this special case (#test=true(), #test=false()).

  • Response: Coverage of an aspect requires usage of a filter that can cover that aspect. Some filters cannot be expressed in a way that makes the filter accept all facts. The segment and scenario filters are examples of such filters. This means that, with the current set of filter specifications, it is not possible to cover some aspects without also restricting the set of facts that can be matched by the fact variables whose filters cover those aspects. In essence, this is a request that we allow aspect coverage to be separable from filtration. For segment filters, this separation of coverage and filtration would enable a fact variable to be expressed in a way that ensures that the segments of the facts that the fact variable evaluates to do not impact on implicit filtering for the containing variable set. This seems like a reasonable feature to expect.

Right now, what is requested is possible but somewhat contrived. What is required is a boolean OR filter that ORs together the filter of interest and a general filter that accepts any fact. If the filter that can cover the aspect of interest is covering and the boolean filter is covering, then the combined filter does not exclude any facts and yet its covering properties apply.


The problem is most severe for segment filters and scenario filters because they filter content that may or may not appear in the instance. For this reason, those two filters have been adapted to allow them to explicitly specify that they always bind - regardless of segment/scenario presence and content. This is done by omitting the test attribute from the filters.

For the other filters, if this issue manifests, formula authors will need to use some variant on the boolean filter approach described here.
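The boolean OR workaround described above can be modelled like this (dictionary shapes invented for the sketch): OR a covering filter with an accept-everything general filter, so no fact is excluded yet the aspect is still covered.

```python
def or_filter(filters):
    """Boolean OR filter: a fact passes if any sub-filter passes, and the
    combined filter covers every aspect its sub-filters cover."""
    return {
        "matches": lambda fact: any(f["matches"](fact) for f in filters),
        "covers": set().union(*(f["covers"] for f in filters)),
    }

# A segment filter that only matches facts carrying a segment...
segment_filter = {"matches": lambda f: f.get("segment") is not None,
                  "covers": {"segment"}}
# ...ORed with a general filter that accepts anything:
accept_all = {"matches": lambda f: True, "covers": set()}

combined = or_filter([segment_filter, accept_all])
print(combined["matches"]({"segment": None}))  # True - nothing is excluded
print(combined["covers"])                      # {'segment'} - still covered
```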

Normative Corrections

27/11/08 Nathan Summers: Problems will arise with using the eq operator with xs:untypedAtomic values

According to the xfi functions specification, functions such as xfi:facts-in-instance return a sequence of elements without specifying the runtime type. The conformance suite has some example formulae whose value expressions compare facts obtained through xfi:facts-in-instance using the "eq" operator. The "eq" operator casts expressions of type xs:untypedAtomic (such as would be generated for fact variables created using the XML infoset instead of the PSVI) to xs:string and does not permit the comparison of strings with numeric types; thus, value expressions such as "$var1 eq $var2 + 1" result in evaluation-time errors. The "=" operator, however, causes no problems. We either need to specify the runtime types returned by those xfi functions or restrict the use of "eq" in numeric-to-fact comparisons.

  • Resolution: The specifications allow too much flexibility in how the necessary XQuery/XPath data models are constructed. The function registry now locks this down by requiring functions to return XML Schema typed nodes.

27/11/08 Nathan Summers: Errors in implied XPath expressions for Unit filters

In the unit filter specification, the implied XPath for the two filters has problems -- either the custom xfi functions have changed and the unit filter spec has not kept pace or they were wrong to begin with. Namely, for the single-measure unit filter, the implied XPath is

xfi:is-numeric(.) and count(xfi:unit-numerator(.)) eq 1 and count(xfi:unit-denominator(.)) eq 0 and xfi:measure-name(xfi:unit-numerator(.)[1]) eq #measure.

The is-numeric, unit-numerator and unit-denominator functions are all being passed the fact element as their single argument; however, the function definitions require a QName, a unit element, and a unit element respectively. The general unit filter implies a similar XPath expression using the xfi:is-numeric function with a fact element argument. We presume the expressions for the single-measure and general unit filters should be, respectively:

xfi:is-numeric(fn:node-name(.)) and count(xfi:unit-numerator(xfi:unit(.))) eq 1 and count(xfi:unit-denominator(xfi:unit(.))) eq 0 and xfi:measure-name(xfi:unit-numerator(xfi:unit(.))[1]) eq #measure

and

xfi:is-numeric(fn:node-name(.)) and xfi:unit(.)[#test]

  • Resolution: The implied XPath expressions have been adjusted as suggested, bringing them into line with the functions in the function registry. The conformance suite for the unit filter specification has also been reviewed for completeness, given how long this issue took to surface.

12/11/08 Herm: invalidDimensionForFact error codes should be eliminated from function definitions

Reviewing the 90300, 90301, 90302, 90303, etc., functions, I think all the error codes need tweaking or removal, given the resolution above from the Base Spec WG in DC (some error codes still imply DTS validation of parameters, whereas the Base Spec WG direction is that only whatever is in the context contributes to the dimensional space of the primary item, regardless of the DTS).

  • Resolution: We have removed the xfie:invalidDimensionForFact error code because any such problems are expected to be detected by pre-processing validation of the instance containing the fact. Pushing responsibility for this validation to validation software rather than the function implementation ensures the functions are consistent with the XDT specification.

14/11/08 Andy Harris: The abstract dimension aspect element in the formula schema should be abstract

  • Resolution: This has now been corrected.

24/09/08 Muramoto: For the attribute that specifies a QName as a variable's name, xsl:QName is a better data type than xs:QName.

I found the following problem, caused by the 'xs:QName' data type of the 'name' attribute, which holds the variable name used in XPath expressions.

 <element id="xml-variable-arc"
 name="variableArc" substitutionGroup="gen:arc">
   <complexType>
     <complexContent>
       <extension base="gen:genericArcType">
         <attribute name="name" type="QName" use="required"/>
       </extension>
     </complexContent>
   </complexType>
 </element>

The following formula linkbase (22010-noNamespaceVariableName-formula.xml) is contained in the conformance tests. Its creator produced this test data with a full understanding of the formula specification.

original

 <link:linkbase
     xmlns:eg="http://xbrl.org/formula/conformance/example"
     xmlns="http://xbrl.org/formula/conformance/example">
   <variable:variableArc xlink:to="genvar3" xlink:from="formula1"
       name="eg:c2" />
   <variable:variableArc xlink:to="genvar2" xlink:from="formula1"
       name="c2" />
 </link:linkbase>

A third party who does not know the formula specification looked at this. Since the two namespace declarations (xmlns:eg="..." and xmlns="...") seemed redundant, he wished to unify them. When he checked the data type of the 'name' attribute in the schema file, he confirmed that it was 'xs:QName'. Two possible changes were then considered.

change1

 <link:linkbase
     xmlns:eg="http://xbrl.org/formula/conformance/example">
   <variable:variableArc xlink:to="genvar3" xlink:from="formula1"
       name="eg:c2" />
   <variable:variableArc xlink:to="genvar2" xlink:from="formula1"
       name="eg:c2" />
 </link:linkbase>

change2

 <link:linkbase
     xmlns="http://xbrl.org/formula/conformance/example">
   <variable:variableArc xlink:to="genvar3" xlink:from="formula1"
       name="c2" />
   <variable:variableArc xlink:to="genvar2" xlink:from="formula1"
       name="c2" />
 </link:linkbase>

Now consider the namespace to which the 'name' attribute value belongs, from both positions (the creator's and the third party's).

The creator, who understands the formula specification, recognizes that 'c2' is in no namespace, according to the formula specification:

 "an unprefixed variable reference is in no namespace"

Therefore, from this perspective, the namespace bindings change between the original and the modified versions.

[original]
"eg:c2" -> "http://xbrl.org/formula/conformance/example"
"c2" -> no namespace
[change1]
"eg:c2" -> "http://xbrl.org/formula/conformance/example"
"eg:c2" -> "http://xbrl.org/formula/conformance/example"
[change2]
"c2" -> no namespace
"c2" -> no namespace

On the other hand, the third party, who does not know the formula specification, recognizes that 'c2' takes the default namespace, according to the Namespaces in XML and XML Schema specifications:

 The mapping between literals in the lexical space and 
 values in the value space of QName requires a namespace 
 declaration to be in scope for the context in which QName 
 is used. (XML Schema Part 2)
 If there is a default namespace declaration in scope, 
 the expanded name corresponding to an unprefixed element 
 name has the URI of the default namespace as its namespace 
 name. (Namespaces in XML 1.0)

Therefore, from this perspective, the namespace bindings do not change between the original and the modified versions.

[original]
"eg:c2" -> "http://xbrl.org/formula/conformance/example"
"c2" -> http://xbrl.org/formula/conformance/example"
[change1]
"eg:c2" -> "http://xbrl.org/formula/conformance/example"
"eg:c2" -> "http://xbrl.org/formula/conformance/example"
[change2]
"c2" -> "http://xbrl.org/formula/conformance/example"
"c2" -> "http://xbrl.org/formula/conformance/example"


The interpretation of which namespace an unprefixed QName belongs to is therefore not consistent across the published specifications. As a result, the original information cannot be recovered after such a change.

In practice, when a formula linkbase is saved as an XML file by an editor with a built-in XML Schema engine, it is likely that the redundancy above would be edited away automatically.
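The two competing resolution rules can be sketched in Python (a simplified model, with prefix bindings given as a dict; this is an illustration, not part of any specification):

```python
def resolve_qname(value, prefixes, default_ns, schema_rules):
    # Resolve a lexical QName to a (namespace, local-name) pair.
    # schema_rules=True models XML Schema / Namespaces in XML, where an
    # unprefixed QName takes the default namespace; schema_rules=False
    # models the XPath 2.0 rule that an unprefixed variable reference
    # is in no namespace.
    if ":" in value:
        prefix, local = value.split(":", 1)
        return (prefixes[prefix], local)
    return (default_ns if schema_rules else None, value)
```

With a default namespace in scope, "c2" resolves differently under the two rules, which is exactly the divergence described above.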


The handling of unprefixed variable names was discussed on the Formula WG conference call of 2008-06-19. The conclusion was that the schema would not be changed and that the specification would be updated.

Meeting notes 2008-06-19

f) No-qname variable prefixes.  Incorporation of xsl:QName 
   definition.  Update of any tests (parameters with no prefix).

  HF: Would require changing xsd files to xsl:Qname so schema 
      validation won't find resolve based on xmlns=.  
  GS: developers have to roll their own qname resolution 
      for attribute values except for no prefix 
      variables.  In those cases it will be a qname with no 
      namespace.  The default namespace on the defining element 
      will not be taken into account.  
  GS: use standard schema resolution and then catch resolution 
      for those situations to the default namespace.  xs:Qname 
      is not sufficient as XPath forces us to not use the 
      containing variable default qname.   
  AH: UBmatrix already took the route of using the standard 
      xs:Qname and catch the errors, but that could be an 
      integration problem with 3rd party xpath software to 
      change the schema.  
  GS: what if we allow xs:Qname, not xsl:Qname, and if there 
      is no namespace prefix, resolution is not based on default 
      namespace prefix for containing element.  
  Call for objections:  None.
  HF: so the resolution is no change to xsd's and update spec.

Variables 1.0

1.7 XPath usage

 As required by Section 3.1.2 of the XPath 2.0, an unprefixed 
 variable reference is in no namespace. Care needs to be taken 
 to ensure that the default namespace for element and attribute 
 names is not used when determining the QName for variables 
 with no namespace prefix. 


However, as this shows, some cases cannot be covered by a specification update alone. XSLT, in order to avoid resolving unprefixed QNames against the default namespace, defines the following special 'QName' data type:

XSL Transformations (XSLT) Version 2.0

G Schema for XSLT Stylesheets (Non-Normative)

<xs:simpleType name="QName">
 <xs:annotation>
   <xs:documentation>
     A QName.
     This schema does not use the built-in type xs:QName, 
     but rather defines its own QName type. Although xs:QName 
     would define the correct validation on these attributes,
     a schema processor would expand unprefixed QNames incorrectly 
     when constructing the PSVI, because (as defined in XML Schema 
     errata) an unprefixed xs:QName is assumed to be in the default 
     namespace, which is not the correct assumption for XSLT.
     The data type is defined as a restriction of the built-in type 
     Name, restricted so that it can only contain one colon which 
     must not be the first or last character.
   </xs:documentation>
 </xs:annotation>
 <xs:restriction base="xs:Name">
   <xs:pattern value="([^:]+:)?[^:]+"/>      
 </xs:restriction>        
</xs:simpleType>


Please consider changing the data type definition to one equivalent to xsl:QName.
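For illustration, the effect of that xs:pattern facet can be checked in Python (XML Schema patterns are implicitly anchored, hence fullmatch; note that the real type also inherits the constraints of its xs:Name base, which this sketch ignores):

```python
import re

# The xs:pattern from the XSLT 2.0 schema's QName simpleType:
# an optional colon-free prefix followed by a colon-free local part.
XSL_QNAME = re.compile(r"([^:]+:)?[^:]+")

def is_xsl_qname(value):
    return XSL_QNAME.fullmatch(value) is not None
```

Both "eg:c2" and "c2" are lexically valid, but because the type is not xs:QName, a schema processor never expands "c2" against the default namespace.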

  • Resolution: The name attribute on variableArc elements now has a data type that is not an XML Schema QName. This ensures that it will not be treated as a standard QName by software that does not take the XBRL variables specification into account in its processing. This change has required the definition of a new error code to be thrown when the namespace prefix on a variable name cannot be resolved. Note also, that this change has been applied to all attributes that contain QNames representing XPath 2.0 variable names. Specifically, source attributes in formulae, and variable attributes in filters have QNames with the QName data type defined in the schema for the variable specification rather than the XML Schema QName data type.

07/09/2008 Geoff Shuetrim: cannotOmitDimensionValue error code should not be part of the formula specification

This error should be omitted because such problems will be caught by validation of the output instance - something we rely on for everything else like this.

Resolution: This error has been deleted.

02/09/08 Herm Fischer: fact-has-dimension functions are not specific enough for typed dimension filters.

We have used a fact-has-explicit-dimension function in the explicit dimension filters so that the function will throw an error if the dimension is of the wrong type. The same logic applies to typed dimensions, suggesting we use a fact-has-typed-dimension function rather than a fact-has-dimension function in their implied XPath expressions.

Resolution: The suggested fix has been implemented.

7/8/08 Geoff Shuetrim: The OCC implied by an omitted segment or scenario element fails to take into account default dimension values.

If a context is used by a primary item with a single segment hypercube of XDT dimensions, all of which have a default value defined for them, then the omission of the segment from the context should imply an OCC that includes the default dimension values for the dimensions in the hypercube.

This means that we need to change the following statements in the formula specification:

 If the segment is omitted from a context then the context's segment OCC is an empty sequence.
 If the scenario is omitted from a context then the context's scenario OCC is an empty sequence.
  • Resolution: The problematic sentences in the formula specification have been replaced with:
 If the segment is omitted from an item's context then the segment 
 OCC for that item is the sequence of OCC aspect values, if any, 
 implied by the missing segment, given the item and the formula's aspect model.
 If the scenario is omitted from an item's context then the scenario 
 OCC for that item is the sequence of OCC aspect values, if any, 
 implied by the missing scenario, given the item and the formula's aspect model.

4/8/08 Herm: Sharing variables and filters requires allowing undirected cycles

Finrep has formulas where paired existence assertion and consistency assertion formulas have the same variables. This seems to be spec-legal, but formula processors are unhappy. New test case 34110 v01 explores this. The consistency assertion formula is a=b+c+d, each term of which must exist and be non-nil; the formula and the existence assertion have the same label, so they can share arcs to fact variables a, b, c, and d. The formula has source=a and value=$b+$c+$d.

Filters and fact variables are intuitively expected to be sharable. Muramoto-san pointed out to me that wherever we do this, such as in test cases 12020 v-01, 02, and 03, where I shared two or more filters between fact variables, an unintentional undirected cycle results. I think sharing is a good idea; I'm not sure that it is good to prohibit it, and I think only directed cycles are worth disallowing. The sharing of two or more fact variables between different formulas, or, as below in 34110, between an assertion and a formula, also makes such an inadvertent but acceptable undirected cycle, and I think it should be allowed. Most egregious of all, two or more parameters, expected to be consumed by multiple formulas or assertions, do the same (31430 v01/02). I suggest changing all formula arcs to worry only about directed cycles.

  • Resolution: Only directed cycles will be ruled out in the role type declarations.
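The distinction can be sketched with standard three-colour DFS over hypothetical (source, target) arc pairs: two variable sets sharing arcs to the same variable form only an undirected cycle and pass, while a genuine directed cycle is flagged.

```python
from collections import defaultdict

def has_directed_cycle(edges):
    # DFS three-colour cycle detection: flags directed cycles only, so
    # arcs from two variable sets to a shared variable (an undirected
    # cycle) are accepted.
    graph = defaultdict(list)
    nodes = set()
    for src, dst in edges:
        graph[src].append(dst)
        nodes.update((src, dst))
    WHITE, GRAY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)
    def visit(n):
        color[n] = GRAY
        for m in graph[n]:
            if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in nodes)
```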

20/7/08 Geoff Shuetrim Definition of identical evaluations of a fact variable does not handle sequences of atomic values

The definition of identical fact-variable evaluations is:

 Two evaluations of a fact variable are identical fact-variable evaluations if: 
 both evaluations are empty sequences; or both evaluations are sequences of nodes 
 and for each node in the sequence for each evaluation, there is an identical node 
 in the sequence for the other evaluation. The two evaluations of the fact variable 
 MUST also be sequences of equal length if they are identical.

This does not cover the case where the fallback value is anything other than an empty sequence or a sequence of nodes. What about a sequence of atomic values, for example?

  • Resolution: The definition of identical fact variable evaluations is now:
 Two evaluations of a fact variable are identical fact-variable evaluations if: 
 both evaluations are sequences of the same length and 
 for each item in one of the sequences, there is an identical 
 item in the other sequence.
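A sketch of the revised test, modelling "identical" with Python object identity (how identity should apply to atomic fallback items is left open here, as it is in the definition):

```python
def identical_evaluations(e1, e2):
    # Revised definition: the sequences have the same length, and every
    # item in either sequence has an identical item in the other.
    # Empty sequences are therefore trivially identical.
    if len(e1) != len(e2):
        return False
    return (all(any(a is b for b in e2) for a in e1)
            and all(any(a is b for a in e1) for b in e2))
```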

17/07/08: Herm Fischer: Extraneous quotes have been included in some of the implied XPath expressions.

Specifications have included terms in implied XPath expressions like '#param'. In some cases the quotes are inappropriate and would imply an argument of the wrong data type. For example, the period filters wrap #date as xs:date('#date') inside fn:dateTime(...), but that should mean only that #date is substituted as an atomic value, not forced lexically into a string. So if #date is any expression yielding a value of type xs:date, xs:date(atomic date expression) still gives a date; and if #date is an xs:string, xs:date(atomic string expression) also gives a date.

  • Resolution: Quotes that appear in implied XPath expressions are intended to be included verbatim in those expressions. Where this is erroneous, the implied XPath expressions need modification. The period start, period end and period instant filters use such quotes; replacements will be made to eliminate them as follows:
 xs:date('#date') will become: #date

and

 xs:time('#time') will become: #time

Thus, #date will be expected to be an XPath expression that evaluates to an xs:date value, and #time will be expected to be an XPath expression that evaluates to an xs:time value.
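The intended behaviour can be sketched as plain verbatim token substitution (the 'substitute' helper is hypothetical, not part of any specification):

```python
def substitute(template, params):
    # '#name' tokens in an implied XPath template are replaced verbatim
    # by the parameter's XPath expression. Quotes left in the template
    # would turn any substituted expression into a string literal,
    # which is why they were removed.
    for name, expr in params.items():
        template = template.replace("#" + name, expr)
    return template
```

The second assertion below shows the original bug: with quotes in the template, a variable reference would be swallowed into a string literal.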

1/4/08: Geoff Shuetrim: The URL for the XBRL 2.1 link schema is incorrect in the generic reference schema.

The incorrect URL was not picked up during testing of the conformance suite or by TrueNorth's testing of the schema itself.

  • Resolved: The URL is now the right absolute URL.

9/4/08: Takahide Muramoto: Validation specification: There are two typos in the assertion definition.

  • Resolved: The typos have been corrected.

9/4/08: Takahide Muramoto: Consistency assertion specification

There is one typo in the explanation of the proportional radius XPath expression.

  • Resolved: The typo has been corrected.

10/4/08 Herm Fischer: Variable Specification: Error code xbrlve:missingImplicitFilters should be renamed and better explained

Error code xbrlve:missingImplicitFilters MUST be thrown if the processing software does not know the implicit filtering system to be used for a variable set's aspect model and the @implicitFiltering attribute equals true. This should be renamed and re-explained to make clear what it is intended to catch.

  • Resolved: The xbrlve:missingImplicitFilters error code will be dropped from the variable specification because it would only catch situations in which the specification of an aspect model was flawed.

14/4/08: Takahide Muramoto: Period filters specification: The xfi:instant function signature needs to be xfi:period-instant.

  • Resolved: The typos have been corrected.

14/4/08: Takahide Muramoto: Value filters specification: The xfi:non-numeric function signature needs to be xfi:is-non-numeric.

  • Resolved: The typo has been corrected.

14/4/08: Takahide Muramoto: Variables specification: Some function names are out of sync with the function registry

The xfi:facts-scenario-dimension-s-equal2, xfi:facts-segment-dimension-s-equal2, xfi:scenario-remainder, and xfi:segment-remainder function names are not in line with the function registry.

  • Resolved: The typos have been corrected.

16/4/08 Herm Fischer: Matching filter coverage of aspects to the relevant aspect model should be enforced

The specification does not prohibit the usage of filters by variables in a variable set when they can cover aspects that are not defined in the aspect model of the variable set in question.

  • Resolved: A new error code has been defined xbrlve:filterAspectModelMismatch to cover exactly the situation described by Herm.


13/5/08: Herm Fischer: How should the specification treat variables with no QName prefix?

The XPath 2.0 specification states in section 3.1.2:

 "An unprefixed variable reference is in no namespace."  

This must be taken as a given by the formula and related specifications. To work with this feature of the XPath 2.0 specification we must ensure that an unprefixed QName when given as the value of an element or attribute (eg a @name attribute or a @source attribute) does not get expanded using the default namespace for the defining element.

  • Resolved: Wording has been added to the XPath section of the variable specification to ensure that this is the case. Note that the XML schema data type for attributes that contain variable QNames remains xs:QName, so care will need to be taken by developers to ensure that variable names with no namespace prefix are not resolved to the default namespace that is defined for the attribute's containing element.

4/6/08: Takahide Muramoto: Ambiguities caused by duplicate named variables in a single variable set.

If two variables share the same variable-set arc, they have the same name. In the existing specification, the ordering of variables is determined by dependency analysis, and dependencies are expressed in terms of variable names, so processors cannot determine which variable a dependency is on if two or more variables in a variable set have the same name.

  • Resolved: Added a new error code to be thrown whenever two variable/parameter names in a single variable set are identical.
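A sketch of the corresponding check (the helper name is invented; the specification only defines the error condition):

```python
from collections import Counter

def duplicate_variable_names(names):
    # Report any variable/parameter QName bound more than once within a
    # single variable set; a non-empty result would trigger the new
    # error code.
    return sorted(n for n, c in Counter(names).items() if c > 1)
```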

16/6/08: Víctor Morilla: Identical fact-variable evaluations definition is too loose.

We need a rewording in the definition of identical fact-variable evaluations. The variable specification states:

 "Two evaluations of a fact variable are identical fact-variable evaluations if: both evaluations are empty sequences; 
 or both evaluations are sequences of nodes and for each node in the sequence for each evaluation, there is an identical
 node in the sequence for the other evaluation. The two evaluations of the fact variable MUST also be sequences of equal 
 length if they are identical."

This paragraph was meant to state that if two evaluations of a fact variable are bound to the empty sequence, then they are considered identical (no matter the result of the fallbackValue expression). If they are bound to sequences of facts, those facts must be the same nodes.

But given current wording, what happens if a fallbackValue evaluation is a node? And what happens if the fallbackValue is a reference to another variable? This is turning very thorny: we just wanted to avoid expressions like:

 (if (empty($a)) then 0 else $a) + (if (empty($b)) then 0 else $b) + ...

in favor of

 $a + $b + $c
  • Resolved: The text has been reworded to:
 "Two evaluations of a fact variable are identical fact-variable evaluations if: both evaluations are empty sequences; 
 or both evaluations are fallback values; or both evaluations are sequences of nodes and for each node in the sequence 
 for each evaluation, there is an identical node in the sequence for the other evaluation.  The two evaluations of the 
 fact variable MUST also be sequences of equal length if they are identical."


8/5/08 Herm Fischer: Type checking needs to be improved in explicit dimension filters to eliminate inappropriate dimension-related evaluation exceptions.

For explicit dimensions, if there's an explicit dimension filter and the fact being filtered cannot have a value for the dimension being filtered on, then an xfie:invalidDimensionForFact exception has to be raised when the xfi:fact-explicit-segment/scenario-dimension-value() function is executed. This causes an XPath runtime exception that halts the variable set evaluation process. This kind of exception easily arises in unintended circumstances when the required behaviour is just to not select such facts using the filter rather than to halt variable set evaluation entirely. The new test case variations addressing this issue are: 43210 v-02a, v-02b.

  • Resolved: The implied XPath expressions for explicit segment dimension filters have changed from:
     some $member in 
          xfi:concept-select-explicit-segment-dimension-values(fn:node-name(),#dimension,#linkrole), 
          $value in xfi:fact-explicit-segment-dimension-value(.,#dimension) 
     satisfies ($value eq $member)

and:

     some $member in 
          xfi:concept-select-explicit-segment-dimension-values(fn:node-name(),#dimension,#linkrole,#member,#axis), 
          $value in xfi:fact-explicit-segment-dimension-value(.,#dimension) 
     satisfies ($value eq $member)

to:

    (
      if (xfi:concept-has-segment-dimension(fn:node-name(.),#dimension)) 
      then (
         some $member in 
         xfi:concept-select-explicit-segment-dimension-values(fn:node-name(),#dimension,#linkrole), 
              $value in xfi:fact-explicit-segment-dimension-value(.,#dimension) 
         satisfies ($value eq $member)
      ) 
      else false() 
    )

and:

    (
      if (xfi:concept-has-segment-dimension(fn:node-name(.),#dimension)) 
      then (
         some $member in 
         xfi:concept-select-explicit-segment-dimension-values(fn:node-name(),#dimension,#linkrole,#member,#axis), 
              $value in xfi:fact-explicit-segment-dimension-value(.,#dimension) 
         satisfies ($value eq $member)
      ) 
      else false() 
    )

Analogous changes have been made for scenario dimension filtering.
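The effect of the guard can be sketched with a hypothetical dict-based fact model (the function and key names are invented; only the control flow mirrors the revised implied XPath):

```python
def explicit_dimension_value(fact, dimension):
    # Models the value-extraction function, which raises
    # xfie:invalidDimensionForFact when the fact's concept cannot carry
    # the dimension.
    if dimension not in fact["dimensions"]:
        raise ValueError("xfie:invalidDimensionForFact")
    return fact["dimensions"][dimension]

def guarded_filter(fact, dimension, members):
    # The revised implied XPath wraps the value lookup in an
    # applicability test, so inapplicable facts are simply not selected
    # instead of aborting the whole variable-set evaluation.
    if dimension not in fact["dimensions"]:
        return False
    return explicit_dimension_value(fact, dimension) in members
```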

8/5/08 Herm Fischer: Type checking needs to be improved in typed dimension filters to eliminate inappropriate type mismatch evaluation exceptions.

If the dimension is typed and the wrong primary item is compared to the wrong typed content (or even segment/scenario), then data type mismatch exceptions are thrown, as required by the XPath 2.0 specification. This can prevent a variable set evaluation when, instead, you simply want the fact causing the data type mismatch not to be selected by the filter.

  • Resolved: The implied XPath expression for typed dimension filters has changed from:
     xfi:concept-has-segment-dimension(fn:node-name(.),#dimension) and 
     xfi:fact-typed-segment-dimension-value(.,#dimension)[#test] 

to:

    (
      if (xfi:concept-has-segment-dimension(fn:node-name(.),#dimension)) 
        then xfi:fact-typed-segment-dimension-value(.,#dimension)[#test] 
        else false() 
    )


3/6/08: Víctor Morilla: Consistency assertion parameters definition incorrectly implies that consistency assertions are variable-set resources

The consistency assertion specification defines consistency assertion parameters as follows: "A consistency assertion parameter is any parameter that has a variable-set relationship to it from a consistency assertion." The definition of the variable-set relationship states that the source of such relationships must be a variable-set resource. But consistency assertions are NOT variable-set resources. So either we need to relax the definition of the variable-set relationship or, better, define a new relationship for consistency assertion parameters. See the section on variable-set relationships.

  • Resolved: A new kind of relationship (from consistency assertions to parameters) is defined in the consistency assertion specification. The existing variable:variableArc arc continues to be used to express the new kind of relationship. This correction to the specification does not alter the capabilities of the specification.

Non-normative Corrections

1/4/08: Herm Fischer and Takahide Muramoto: There is an error in the first example instant (a date without a time) in the formula specification

In the formula specification, the first example instant (a date without a time) has to produce an instant without a time; or, if it has a time, that time must be the first instant of the next day.

  • Resolved: The example now reflects Herm's correction.
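On my reading of the correction, a date-only instant denotes the end of that day, which is the same moment as the first instant of the next day; a sketch under that assumption:

```python
from datetime import date, datetime, timedelta

def instant_from_date(d):
    # Assumption: an instant given as a date with no time part is
    # interpreted as the end of that day, i.e. midnight at the start of
    # the following day.
    return datetime.combine(d + timedelta(days=1), datetime.min.time())
```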

14/4/08: Takahide Muramoto: Parentheses and function arguments are wrong in the last part of the first example for general filters.

General filter specification: Parentheses and function arguments are wrong in the last part of the first example relating to a filter based upon a comparison of period information and the restatement date in a scenario.

  • Resolution: The typos have been corrected in the example general filter expression.

17/4/08 Herm Fischer: The QName constants in the unit filter example using kilometer and hour measures are wrong.

Unit filters: The QName constants in the example using kilometer and hour measures need to be changed to use the fn:QName() function rather than a string representation of the QName.

  • Resolved: This has been done.

18/06/08: Herm: Should the xfi namespace be 2008? It currently says 2005.

  • Resolved: This has been updated as suggested by Herm Fischer throughout the CVS repository.

Drafting enhancements

27/11/08 Nathan Summers: The wording of the specifications regarding variable QName usage needs clarification

QNames are specified in attributes and text nodes throughout the formula spec suite. The Variable spec in section 1.7 states: "QNames in each XPath expression MUST be resolvable using the specific resolution rules set out in this specification and the namespace declarations that are in scope for the element that contains the XPath expression. See Namespaces in XML [XML NAMES] for more information on namespace declarations and their scope." We're certain this means that we use the XML Namespace specification to resolve these QNames appropriately (through xmlns attributes), however, this sentence seems to imply that there are specific resolution rules within the Variable spec (i.e. "this specification"). Could this sentence be reworded to avoid that implication? Or are there specific rules within the Variable spec that we haven't been able to find yet?

  • Resolution: A paragraph has been added to the section on XPath usage to explain the usage of the variable:QName data type.

16/09/08 Victor Morilla: Period filters examples error?

Period filters use a time-period type that defines two attributes (date and time). According to the type of these attributes in the schema (variable:expression), it seems that they are meant to hold an XPath expression.

However, in the examples, it seems that the expected value for these attributes is a string.

  • Resolution: The examples have been updated to reflect the modified interpretation of these attributes.

08/09/2008 Geoff Shuetrim: New error code required for an existing MUST constraint for the XPath OCC rule

The XPath OCC rule requires the result of evaluating the XPath expression to be a sequence of element nodes. When this requirement is violated an error needs to be thrown but none is currently defined in the specification.

Resolution: A new error code has been added to the formula specification.

06/09/08 Herm: Definition 'explicit dimension domain' given open hypercubes

The definition wording is probably OK given deep experience and great wisdom, but not at first glance.

If a dimension filter is applied to a fact whose primary item has an open hypercube, then "set of all domain members allowed as values for that dimension by any of the conjunctions of hypercubes declared in that DTS" is sort of unclear.

Does hypercube conjunction only apply to candidate facts whose primary items have no open hypercube?

Does conjunction apply to notAll cubes? If so, is each notAll applied just to the all(s) with which it sits in a hypercube (with the all-notAll result added to the member set)? Or, if treated as an isolated notAll, it brings in the galactic universe of elements, so if that is added to the set from the all, the set of elements explodes...

With an open hypercube, a dimension unaffiliated with a hypercube but having consecutive relationships to members does contribute to the explicit dimension domain. But it appears to me (I hope I'm wrong) that the dimension specification only constrains members (of open hypercube dimensions) to "be a QName whose global element definition can be found in the taxonomy schema", e.g., any random non-XBRL element can sneak in as a member. (Dim spec 3.1.4.5.3)

In such a case, is it restricted to all members having consecutive relationships from that dimension, even in the absence of a hypercube-dimension relationship to the dimension at the source of the consecutive relationships to the member? (I would be comfortable with that.) Or any QName, even if not an XBRL item?

So should the definition have separate clauses for closed and for open situations?

Víctor: I daresay that the intent was to define it as any usable member linked to the dimension through dimension-domain / domain-member relationships, but the wording is certainly not clear to me: it refers to any valid member of the explicit dimension, but this definition contains a reference to the dimensional relationship set, which is rooted at primary items.

I'm afraid we are running into unnecessary problems in this part of the specification. Why don't we replace all references to the explicit dimension domain, by references to the XDT definition of valid members of the explicit dimension? This way we can move this kind of discussions to the XDT specification and move on.

Geoff Shuetrim: The XDT spec provides the following definition:

 A domain of valid members of a explicit dimension is the set of QNames of all usable elements (see 2.5.3.3 below) 
 in the dimensional relationship set [Def, 3] for the domain-member relation rooted at one domain member [Def, 11]

The set of domain members that we are trying to define in this specification is somewhat broader. We are trying to define the set in a way that includes the members that are allowed as values for the dimension under the rules for expressing such permissions in the XDT specification. Thus, if a member is allowed by one conjunction of hypercubes but not by another, it can be in one domain of valid members but not in another. That is not acceptable for us. We want the union of the domains of valid members, across all such domains for the dimension.

  • Resolution: The explicit dimension domain is now defined in terms of 'the union of all domains of valid members for the dimension'.

7/8/08 Geoff Shuetrim: OCC aspect rules section requires better linking to term definitions.

  • Resolution: The various terms used in the section of the formula specification on OCC aspect rules now include more comprehensive links to definitions of the terminology being used. All usages of "OCC aspect rule" have also been replaced with the defined term "OCC rule".

22/7/08: Geoff Shuetrim: Relative filters and fallback value interactions are not made clear

Relative filters do not state how to handle matching to a fact variable that has evaluated to a fallback value.

  • Resolution: The following wording has been added to the relative filter specification:
 If the fact variable identified by the @variable attribute on the relative filter 
 has evaluated to a fallback value then the implied XPath expression for the relative filter is: 
 fn:false()

Other minor drafting modifications have been made to integrate this into the wording of the relative filter specification.

07/04/08 Andy Harris: The term "formula source" in Error code xbrlfe:nonexistentSourceVariable and Error code xbrlfe:bindEmptySourceVariable is not clear.

Does it refer to the source attribute on a formula:formula element or the attributes on aspect rules etc also?

  • Resolution: Added a paragraph clarifying that a source in a formula can be the source defined by a source attribute on a formula:formula element or on any of its descendant elements.

17/4/08 Herm Fischer: Clarify that a no-variable variable-set evaluates once if its preconditions are satisfied.

Variable specification: We need to clarify that a no-variable variable-set evaluates once if all of its preconditions (if any) are satisfied. If there are no preconditions, then it evaluates once.

  • Resolution: Done in a footnote.

4/6/08: Geoff Shuetrim: factVariablesNotAllowed error code in consistency assertions spec is misnamed.

The error code should be variablesNotAllowed because both general and fact variables are now not allowed.

  • Resolved: Rename the error code to variablesNotAllowed.

17/4/08 Andy Harris: Formula specification: OCC XPath and Fragment rules need to be clarified.

When a set of output OCC content has been identified by XPath or Fragment OCC rules and that content contains XDT dimensions, those dimensions need to be taken into account when determining what XDT dimension output aspect values to determine from source aspect values. Is this interpretation correct?

  • Resolved: The formula specification has been clarified to ensure that any XDT dimension value that is included in the output OCC by an XPath or fragment OCC rule MUST over-ride any XDT values for that XDT dimension from the original OCC. The added paragraph reads:
 Any complete aspect value, specified by an OCC aspect rule as being included in its subsequent OCC, 
 replaces any value for that aspect in the subsequent OCC.  Thus, for example, if an OCC XPath or 
 fragment rule specified a value for an explicit dimension that was to be added to the subsequent OCC, 
 then that explicit dimension value would replace any value for the explicit dimension contained in the original OCC.

Test case variations 12061 V-11a and V-11b have been added to test this.

Other comments

2008-12-03 cgh_chen: New requirement to only run assertion sets if some preconditions are met

cgh_chen has situations where it would be useful to be able to run assertion sets only when one or more preconditions are met.

  • Resolution: This new requirement is currently outside the scope of the requirements and is not sufficiently well motivated or explained by use cases. Addressing the requirement would require information to be shared across variable sets in ways that are not currently supported. It is expected that this kind of information sharing will be better supported once formula chaining style use cases have been addressed.

2008-11-17 Herm: aspect tests between tuple and item

I got stuck on a production formula that binds one-by-one to tuples in an instance and processes the items in the tuple. In the real world case the tuple children can fall back (optional address lines depending on the country).

I added location filtering 51240-locationByParent-* and bind-empty 22180-fallback-in-tuple-*

The aspect testing is now that a fact variable representing the tuple binds one by one to each tuple and, for it, two fact variables representing items in the tuple (with a covering location filter to "..") bind to the two items to test inside the tuple.

The aspect question is that the tuple does not have period, entity, and so on, but the items do. I assumed that I didn't need to fake covering of these aspects for the tuple (to prevent the items from trying to implicitly match the absent aspects on the tuple). And by leaving them implicitly binding, the items in the tuple do match context to each other (in case there were items of multiple contexts inside the tuple).

Is this correct?

Geoff Shuetrim 2008-11-18

I presume that the formula involves implicit filtering and:

  1. Fact variable $tuple just uses a concept-name filter
  2. Fact variable $itemA uses a concept-name filter and a location filter requiring that it is a child of the tuple that $tuple has evaluated to.
  3. Fact variable $itemB uses another concept-name filter and a location filter requiring that it is a child of the tuple that $tuple has evaluated to.

Implicit filtering requires that all non-covered aspects must be matched across all variables. At face value that would prevent us from having fact variable $tuple evaluate and that would be a show stopper for the suggested design - you would need to fake covering of the necessary aspects.

However, you could express the formula somewhat differently, avoiding the use of $tuple and just having:

  1. Fact variable $itemA uses a concept-name filter and a parent filter requiring the parent to have the tuple's QName.
  2. Fact variable $itemB uses another concept-name filter and a sibling filter requiring that it is a sibling of $itemA.

Something similar could also be done using the location filter - no fact variables that iterate the tuples are needed.
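For concreteness, a sketch of the second design in formula linkbase syntax. The tuple filter element names are assumed from the tuple filter specification, and my:tuple and the labels are placeholders (the concept-name filters are omitted):

```xml
<!-- $itemA must be a child of an element named my:tuple. -->
<variable:factVariable xlink:type="resource" xlink:label="itemA"/>
<tf:parentFilter xlink:type="resource" xlink:label="parentIsTuple">
  <tf:qname>my:tuple</tf:qname>
</tf:parentFilter>

<!-- $itemB must be a sibling of whatever fact $itemA bound to,
     so both items come from the same tuple instance. -->
<variable:factVariable xlink:type="resource" xlink:label="itemB"/>
<tf:siblingFilter xlink:type="resource" xlink:label="siblingOfA"
    variable="itemA"/>
```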

You could also just use relative filters rather than implicit filtering so that fact variable $tuple is as before and fact variable $itemB has a relative filter that points to $itemA. That might run into problems with handling of fallback values but it does avoid the forced matching of missing aspects for the tuple facts.

That should deal with the problem that is being raised. The question then arises: should we tailor the implicit filtering treatment of tuple facts to not require matching of aspects that tuples cannot have? My inclination is to say no, because this is just a special case of more general situations involving XDT dimensions where different facts have different sets of dimension aspects. I think that the set of filters we have, together with the ability to use relative filters rather than implicit filtering, is sufficient.

Herm 2008-11-17 response

The requirement here is to process each tuple for a set of rules, where many of its contents are optional and the assertion is to relate presence or absence of optional contents.

Fake covering filters aren't possible with the current constructs (there is no way to write one for the non-XDT segment aspect, for example).

I removed implicit filtering from 51240-locationByParent-* and bind-empty 22180-fallback-in-tuple-*.

Geoff and I discussed adding two extension filter specs:

  • "this aspect must be absent", which if covered, can be used to cover the unit on a string factVariable, or most things on a tuple
  • "this aspect can match anything", which if covered, can be used like a period filter with test="true()" in many of my test cases.
  • Resolution: The extension filter specs, should they be written, will be IWD for now, not part of CR2.

08/11/08 Herm: dimension filters open hypercubes & xfi:filter-member-network-selection

Based on the Washington DC base spec hypercube wording tweaks, the 'explicit dimension domain' on any open or unvalidated hypercube is the same as the dimension infoset, i.e., all XML elements in the DTS.

(Geoff, note, the html for the spec has some glitch displaying this definition.)

This causes no spec change, but I believe I have to change the function 'xfi:filter-member-network-selection' under error 'xfie:unrecognisedExplicitDimensionValueQName' (this error MUST be thrown if the member is not in the recognised domain for the specified dimension). It now should say in addition that if the hypercube is open then the error isn't thrown at all. This is to allow test 43230 v-09 which has a random xml element in an explicit dimension member, which isn't part of explicit domain-member validated explicit choices (and doesn't have to be), to be legal here. Thoughts?

Geoff Shuetrim: Herm, please send me a copy of the updated dimensions specification and references to the relevant sections to review. Regardless of those changes however, I am a bit puzzled by the issue being raised because the unrecognised dimension value error relates to actual XDT dimension values, not non-XDT markup. That goes in the "remainder" that has its own aspect in the dimensional aspect model. That means that 43230 V-09 should not throw the error regardless of the change in the XDT specification. A somewhat different example would be required to trigger the error. Recall also that the xfi:filter-member-network-selection function is intended to give us some degree of independence from the issues afflicting hypercubes. Specifically its parameters are:

  1. The name of the dimension
  2. The name of a dimension member (no need to analyse hypercubes to determine that the QName given is the QName of a dimension member)
  3. A linkrole and an arcrole to determine the network(s) of relationships between dimension members to use in applying the selection criteria
  4. An axis to use in selecting the relevant subset of nodes in the chosen network(s).

Note that all of that is completely independent of any semantics that the XDT spec wraps around hypercubes. That said, some of the networks that are produced for XDT validation purposes may be used by this function, when those networks are designed in ways that make them amenable for such usage.

Herm 2008-11-10: the change to dimensions is not in the spec, just in the meeting minutes from the base spec WG in DC, where it caused substantial discussion, in particular:

  • Dimensions explicitly present in a context always contribute to the dimensional space of a fact, regardless of whether the fact is associated with hypercubes.

This "revelation" or "epiphany" considerably alters our previous understanding that a dimension value not defined in the hypercube goes to the non-XDT aspect, because with this change, any dimension value that is present, whether validated (because of a closed hypercube) or not validated, is indeed in the dimensional (not non-XDT) space.

Thus I believe the filter-member-network-selection, though not changed in behavior at all, no longer throws the error above if the hypercube is open (or maybe never throws it at all), and certainly never throws an error when a context has an explicit dimension when no hypercubes are associated with the primary item.

Geoff Shuetrim 2008-11-11: I see the variation in terms of what the aspects of facts are but not in terms of the error code for the function. That error code is intended to relate to validation of the member parameter of the function, checking that the specified member is in the relevant set of members. Looking just now at the function definition for this function, I note that the explanation of the error code is quite out of date, still referring to the "recognised domain". There is no such thing in the dimension specification. The error code should simply be thrown when the specified member is not in any of the networks of relationships between dimension members that are identified by the arcrole and linkrole parameters to the function. I have tried to make the necessary changes in the wording of the function definition to bring it up to date with the dimensions specification. Please confirm the wording changes.

  • Resolution: No specification changes are required. The circumstances under which the xfie:unrecognisedExplicitDimensionValueQName error code is thrown have been brought into line with the dimension filter specification.

03/11/08 Herm: general filters and xsi:nil

We previously noted that general filters operate independently (no inter-filter dependency ordering), so if an instance has a mix of numeric and non-numeric items, a general filter with a computation has to have an "if (xfi:is-numeric(node-name(.))) then calculation else false()". But this will give an xpath error on a nil fact. Although nilness affects binding of a fact variable, it is applied independently of filtering actions.

So a safe general filter expression would be "if (xfi:is-numeric(node-name(.)) and not(nilled(.))) then calculation else false()".
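Packaged as a general filter resource, the safe expression might look like the following sketch. The gf:general element with a @test attribute is assumed from the general filter specification, the label is a placeholder, and the comparison `. gt 1000` stands in for the real calculation:

```xml
<!-- Guard the calculation so non-numeric and nil facts
     simply fail the filter instead of raising an XPath error. -->
<gf:general xlink:type="resource" xlink:label="numericNotNil"
    test="if (xfi:is-numeric(node-name(.)) and not(nilled(.)))
          then . gt 1000
          else false()"/>
```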

Andy Harris and I wonder if fact variables that would avoid nil items should somehow have general filters that 'know' this, but to do that would defeat optimization of general filters shared across multiple fact variables.

I can make some test cases for this situation if it is warranted. I don't think we should change the spec.

  • Resolution: No changes are required to the specifications.

28/10/08 Andy Harris: Target Instance inconsistencies caused by Fallback Values needs clarity.

A previous resolution for the topic "Interactions Between Fallback Values and Application Dependent Evaluation Orders" states that "This set of potential evaluations then needs to be restricted to ensure that fallback values are not over-riding and contradicting facts that exist in the target instance."

It is very clear that fallback values cannot override reported facts in the target instance (according to matched aspects).

What exactly is a contradicting fact defined to be when measured against the target instance? A fallback value can cause any number of potential semantic errors, ranging from dimensionally invalid primary items to summation-item inconsistencies to non-u-equal essence-alias items.

Geoff Shuetrim 2008-10-29: The idea is that fallback values that do override (get used in place of) a fact from the target instance that would otherwise be used could have different values from the facts that they override, and so would contradict them.

Certainly the formula processor cannot be expected to perform full base spec and dimensional validation when checking for potential target instance inconsistencies.

Geoff Shuetrim 2008-10-29: I do not see how the statement quoted above would lead to the inference that full XBRL 2.1 and XDT validation is to be done as part of checking for inconsistencies. I hope the clarification above helps. I do not see that we need to clarify the wording of any specifications based on this feedback, but if you have suggestions, fire away!

Is this requirement being over-analyzed outside the scope of intent?

Geoff Shuetrim 2008-10-29: I think so.

  • Resolution: No changes are required to the specifications.

24/10/08 Geoff Shuetrim: We should define helper functions in the function registry for dateUnion comparisons.

They should be xfi:equal-starts(first as dateUnion,second as dateUnion) and xfi:equal-ends(first as dateUnion,second as dateUnion).

They should return boolean values that reflect whether the two arguments to the functions are equal start moments or equal end moments respectively, as defined in the XBRL 2.1 specification.

These functions should not be required to reach recommendation status before the formula specification does. They are simply helper functions to ease working with the dateUnion data type.


Herm 2008-10-30 I suggest naming in a similar manner to x-equal or u-equal, so what about start-equal and end-equal. Note that end-equal is really end-or-instant-date-equal, but short names make code more readable. Added function registry directories 80217 & 80218.

  • Resolution: There are no changes to the specifications. The functions will be named start-equal, end-equal and instant-equal. end-equal and instant-equal will have the same semantics.
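A hedged sketch of the intended semantics in XPath 2.0 terms, under the XBRL 2.1 reading that a date with no time part starts at the midnight beginning the day and ends at the midnight ending the day; $v is a dateUnion value (this is illustrative, not normative wording):

```
(: start moment of $v: casting xs:date to xs:dateTime
   yields T00:00:00 at the start of the day :)
if ($v instance of xs:dateTime) then $v else xs:dateTime($v)

(: end moment of $v: a plain date runs to the midnight that
   ends the day, i.e. the start of the next day :)
if ($v instance of xs:dateTime) then $v
else xs:dateTime($v) + xs:dayTimeDuration('P1D')
```

start-equal($a, $b) would then just compare the two start moments with eq, and similarly for end-equal and instant-equal.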

18/09/08 Herm: Selective formula/assertion firing

An agency requires selective firing of assertion rules (and not some private ad hoc solution). Previously, extended link roles were used to select the firing of rule sets. Today that can only be done by carving up the sea of formulas (20 to 2000) into separate formula linkbases of formulas that are fired as groups together (as no parameter can block an existence assertion). Is this the only way? Is it desired?

22/09/08 Víctor: Spanish COREP experience

The reporting process in Spain requires some subsets of assertions to be fired depending on some information in the instance document (a manifest). These subsets of assertions have been grouped using assertion sets (see the validation specification). Each assertion set has a generic reference that identifies a set of properties.

The API we are using allows us to specify whether we want to pass all assertions in the DTS or just the assertions under some specific assertion sets. This feature allows us to select only those assertion sets that meet some properties given the information in the manifest. This selection is done at application level.

Selecting a subset of assertions using assertion sets rather than extended link roles has two important advantages:

  • One assertion can belong to zero, one or more than one assertion set, whereas one assertion belongs to one and only one extended link role. This gives us better flexibility.
  • Having all our relationships in a single extended link role gives us the possibility of reusing fact variables (a fact variable cannot be reused from an assertion/formula in another extended link role).

In any case, using assertion sets, extended link roles, or a combination of the two to define sets and partitions of assertions is not incompatible.

So the formula specification meets this goal perfectly. Maybe we can propose some guidance to improve software interoperability. Another possibility is to use formula chaining to define sets of assertions and/or formulae to be fired if some other assertions are verified. This way, the logic that we have included in Spain at the application level could be defined using formulae.

  • Resolution: No changes will be made to the specifications. As useful metadata about sets of variable sets/assertions/formulae to evaluate and the required ordering of evaluation are identified, future specifications can capture this using a standardised syntax. The formula chaining usage pattern is an example of this approach.

18/09/08 Herm: Messages on assertion success or not

An agency requires human readable messages to be associated with assertion results. Andy made the following up but it is not acceptable to the agency until and unless it is in the to-do list of this group (some status other than on infinite hold or one guy's idea in a vacuum).

Andy proposes generic labels, with arcs from the existence/value/consistency assertion using the arcrole xlink:arcrole="http://xbrl.org/arcrole/2008/element-label", with one label resource for each of the roles xlink:role="http://www.xbrl.org/2008/role/label/satisfied" and xlink:role="http://www.xbrl.org/2008/role/label/notSatisfied", and an xml:lang attribute on each label. The contents of the label are an XPath expression, e.g., 'Your blah blah is blah blah', or concat('blah blah',$fooVariable) etc.

21/09/08 Herm, second thoughts: Having XPath expressions means customers have to edit geeky XPath code, concats and string-joins. What about either (a) in-line {XPath} within text, like "your blah blah of {$foo} is too late for date {$fooDate}", or (b) mixed content like "your <b>blah blah</b> of <i><xxx:expression>$foo</xxx:expression></i> is too late for ..." etc.?

22/09/08 Victor: The Spanish COREP project is using a very similar solution to the one suggested by Andy. There are only two differences:

  • We are using plain text messages, not XPath expressions. However, using XPath is obviously an improvement; Herm's in-line XPath sounds good.
  • We are also using labels attached to formula resources. This way, we are grouping formulae in consistency assertions with a common acceptance radius, but the error sent to the user depends on the formula that has failed.

Regarding a set of link roles to choose, I can think of different situations to consider:

  • A message for each evaluation of an assertion that is satisfied
  • A message for each evaluation of an assertion that is not satisfied
  • A message for assertions where every evaluation has been satisfied
  • A message for assertions where at least one evaluation has not been satisfied
  • A message for assertion sets where every evaluation of every assertion has been satisfied
  • A message for assertion sets where at least one evaluation has not been satisfied

It is also important to note that the core formula specification supports all these approaches. We just need to provide some guidance at the best-practices level, or maybe include it as part of a usage pattern specification.

21/09/08 Andy: In agreement with the inline XPath approach and with expanding the set of initial role semantics for messaging.

  • Resolution: No changes will be made to existing specifications. This is clearly an area which needs development. Associating message resources with the formulae/assertions etc seems like a sensible design approach. Using labels to contain executable code is less coherent with current label designs. This is already on the radar for the FWG as an area to be developed using an extension specification but it will not be part of the existing set of specifications. Today, the plan is to specify a format for reports produced by execution runs, much like the reports generated by Schematron, where the human readable messages are produced from labels and other executable resources, depending on execution outcomes and where those messages are augmented by information about the facts, formulae etc, that contributed to generation of the message.

16/09/08 Victor Morilla: Overriding Group filters proposal

I'd like to propose a change in the variable specification regarding group filters. The paragraph:

  A filter participating in a variable-set-filter relationship is, by definition, associated with each of the fact variables 
  in variable set defined by the resource that it is related to.

To:

 A filter participating in a variable-set-filter relationship is, by definition, associated with each of the fact variables 
 in variable set defined by the resource that it is related to, except with those fact variables that have one or more
 explicit filters  covering any of the aspects that can be covered by the former.

In other words: the explicit filter overrides the group filter for the same aspect. For instance, if a variable set has a group filter for concepts A, B and C, and one fact variable has another concept filter for concept D, then the group filter won't have any effect on that variable. According to CR-1, both filters are combined using a logical AND operator.

The motivation for this change is a use case I've come across: the value for every monetary concept in a table for dimension D = dt can be calculated as the sum of the same concept for every other value of dimension D, given that the value of another item (a ratio) for that dimension is above a threshold. For instance:

 Concept:  A     B     C     D     R
 D = d1   10€   20€   20€   20€    10%
 D = d2   10€   20€   20€   20€    20%
 D = d3   10€   20€   20€   20€    30%
 D = d4   10€   20€   20€   20€    40%
 D = d5   10€   20€   20€   20€    50%
 ---------------------------------------
 D = dt   20€   40€   40€   40€ 

Given a threshold of 35%, only rows D=d4 and D=d5 are taken into account in the aggregation.

My first try for this formula was:

 Group filter: Concept monetary
 Fact variable (bindAsSequence=true) $ratio
 - Concept filter : R
 - Dimension filter D = child(dt)
 Fact variable (bindAsSequence=true) $aboveThreshold
 - Dimension filter D (expression) some $i in $ratio[. gt 0.35] satisfies . eq xfi:fact-dimension(D) (not accurate)
 result = "sum($aboveThreshold)"
 Dimension D = dt

But it won't work, as R is not a monetary concept. The only solution I have found for this kind of formula is either using a general variable to iterate through the set of monetary concepts (which is a dirty solution), or writing a formula for each monetary concept (which is awful).

In my opinion, this is a small change at the specification and conformance suite level (I don't think we have a test case with group and explicit filters for the same aspect). If it does not have a significant impact on software developers, I think it is worth changing.

  • Resolution: The specification will not be changed. The example can be handled simply by relating the concept filter to the individual fact variables rather than using it as a group filter.
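The resolution's alternative can be sketched as follows. The arc element, attributes and arcrole URI are as I recall them from the variable specification, and the labels are placeholders:

```xml
<!-- Attach the monetary concept filter directly to each fact
     variable with a variable-filter arc instead of a group
     (variable-set) filter, so it applies only where wanted. -->
<variable:variableFilterArc xlink:type="arc"
    xlink:arcrole="http://xbrl.org/arcrole/2008/variable-filter"
    xlink:from="aboveThreshold" xlink:to="monetaryConceptFilter"
    cover="true" complement="false"/>
```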

06/09/08 Herm Fischer: Var spec Error code xbrlve:ambiguousAspects testing

Is this error condition only checked when an evaluation is grabbing dimension aspects and finds an ambiguity in that process?

I wrote the test case (22161) to be doing an evaluation involving the suspect aspect. I'm adding a variation v-02a where there is no evaluation involving the suspect aspect (facts must be checked for such aspects even if not involved in any formula).

  • Resolution: No changes to the specifications are required. The specifications only require errors to be checked when processing variable sets.

04/09/08 Andy Harris: Define a new term for the filter-member network that intersects the explicit dimension domain network for a filter dimension

From the dimension filter specification: "the filter-member network that belong to the explicit dimension domain for the filter dimension." I'm not sure what to call it. It is not a recognized or effective domain.

Suggestions:

  • linkrole-arcrole-dimension-filter-member-network
  • filter-dimension's filter-member network

Both are a mouthful. Both are slightly better than repeating "the filter-member network that intersects the explicit dimension domain network for the filter dimension" or "the filter-member network that belong to the explicit dimension domain for the filter dimension."

It would be easier to talk about if there were a single term.

Geoff Shuetrim: I have tried to add in the necessary definition but it is not clear to me where it would be used in a way that makes the specification clearer. It only seems to impact on the explanation of the invalidDomainMember error code and that explanation does not seem to become more transparent by defining a new term. Are there other areas of the specification where this term would also be useful? See also the filter-member set definition. In most cases, in referring to these filters, I expect that to be the set of domain members that interests people.

  • Resolution: No changes will be made to the text of the specification.

02/09/08 Herm: Adding linkrole/arcrole to concept name filter

The dimension filter member function has a flexible way of specifying aggregation using either a qname or a variable reference, with linkrole, arcrole, and axis. This would be greatly helpful for formula aggregation in non-dimensional cases, such as a requirement by a Finrep user for formulas to aggregate calculation linkbase roll-ups (with formula filter and threshold features) and to do movement roll-ups.

One could either extend the conceptName or make something separate (but coverability of the concept aspect is important).

Concept Name (or equivalent) could then have, like dimension member:

   qname or qnameExpression (as now, for the usual match-this-concept) or variableName
   linkrole
   arcrole
   axis (sibling is required in addition to child/descendant)

For movement patterns it is important to also be able to filter on patterns of text in labels, so then instead of arcrole & axis, labelrole and labelregularexpression would be helpful.

   qname or qnameExpression or variableName 
   linkrole
   labelrole
   labelregularexpression

Thoughts?

Geoff Shuetrim: Not for this release, but filters like those described above should be defined and released over time.

Víctor: That's an interesting point. In a meeting with the IASCF we came across the possibility of being able to express a formula in terms of a calculation linkbase, which could be addressed by this kind of filter. The only problem is how to deal with the weight attribute. What I don't understand in Herm's example is the label regular expression part. At first glance it doesn't look very clean; I mean, a formula defined in terms of a label.

  • Resolution: No changes will be made to this release candidate because such new filters can be defined in separate specifications should they be deemed useful.

26/08/08 Víctor Morilla: XPath expression in custom function signatures

This is a very simple feature that, given our experience with COREP, can greatly enhance the readability and maintainability of formulas. The idea is to include an optional attribute in function signatures for an XPath expression to be used as the definition of the external function, in a similar way to XSLT 2.0 functions.

For instance:

 <function name="weighted-average" type="xsd:number" output="($a + $b*2 + $c*2 + $d)/6">
   <input type="xsd:number" name="a"/>
   <input type="xsd:number" name="b"/>
   <input type="xsd:number" name="c"/>
   <input type="xsd:number" name="d"/>
 </function>

If the @output attribute is not included, the function will be out of the scope of the variable specification (no changes so far). But if the attribute is included, then the function should be evaluated using the XPath expression in the output attribute, treating the inputs as in-scope variables.

Note that I have replaced the original output attribute with another named type, and used the original output attribute as the placeholder for the XPath expression. A new optional attribute (name) on input elements is used in order to reference them from the XPath expression.

Why is this such an enhancement? New functions were agreed to be part of the function registry process. This proposal is not meant to change that. However, in my opinion, the function registry should focus on common-interest functions that cannot be implemented (or cannot be easily implemented) using XPath expressions (e.g. logarithms). There are, however, several occasions where formula expressions can be simplified using custom functions that are not of general interest. For instance:

Bank of Spain uses a function to work out the tolerance of certain validations that depends on the number of operands: ($number_of_operands div 2 + 1) * 1000. This function is not really of common interest; nobody wants the function registry populated with functions like "Bank of Spain's tolerance for validation of type A".

With the CR-1 I see only three possible solutions:

1.- Repeat the function on each validation.

  Validation test="$a - $b < (2 div 2 + 1) * 1000"
    factVar: a
    factVar: b

2.- Using a shared general variable, that depends on another general variable with the number of operands. Something like this:

  Validation test="$a - $b < $tolerance"
    factVar: a
    factVar: b
    generalVariable: number_of_operands = 2 (defined for each validation)
    generalVariable (shared): tolerance = ($number_of_operands div 2 + 1) * 1000

This solution is clumsy and dirty: the scope of variables of a general variable is defined by its variable set.

3.- Using one parameter for each different number (this is our current solution for COREP Spain). If we have validations with 1, 2, 3, ..., 10 operands, then we create 10 different parameters (Tolerance1, Tolerance2, ..., Tolerance10):

  Validation test="$a - $b < $tolerance"
    factVar: a
    factVar: b
    parameter: tolerance ==> Tolerance2

Again, this is a dirty solution and cannot be considered for general purposes.

However, using external functions the validation is simplified:

  Validation test="$a - $b < be:tolerance(2)"
    factVar: a
    factVar: b

Geoff Shuetrim: Let's discuss this on the conference call. I can see why it is a simple modification, but I have two reservations:

  1. Fixing problems with what we have is causing enough instability without incrementing the set of features we are asking software vendors to support.
  2. It begs the question of why these are not just XQuery function implementations. Having a single XPath expression is not particularly adequate for handling more complex functions and in the context of those more complex cases, the solution looks somewhat piecemeal.

Why not use a custom function but just not define its implementation in the DTS? That is what is expected by the current specification.

Herm: Agree with idea of XQuery to implement functions, this is what XQuery is intended for, and how Saxon is sort of expecting things to work. I'd suggest a standard way of providing XQuery function implementations.

Geoff Shuetrim: My point was slightly different. I do not want to bring in XQuery at this late stage of the process. What about handling this as follows?

1. Make sure that the custom function declaration has a more open content model so that it can be extended at some later point to provide links to implementation details for the custom function (and possibly links to files like those defining functions in the function registry).

2. For now, use custom functions, include declarations as they are, but do not provide the custom function implementation in the declaration of the custom function. That does limit the ability to run the formulae using the custom functions on non-customised formula processors but such limitations have been accepted by this WG for a long while.

  • Resolution: No changes will be made to the variable specification. The content model for custom functions is already sufficiently open, allowing extension both by the use of additional attributes (because its data type is derived from that of the abstract variable:resource element) and, should it be necessary, by the use of elements in the substitution group for custom functions. It would also be feasible to link a custom function declaration resource to a custom function implementation resource (in a DTS or outside of a DTS) should that be deemed useful.


24/7/08: Andy Harris and Herm Fischer: Can we find some way to have all fallback values be facts, with complete sets of aspect values?

Fact variables that have evaluated to a fallback value imply an evaluation result that does not have aspects. If such a fact variable is referenced by a source attribute in a formula, then, for the variable-set evaluation where the fact variable has evaluated to a fallback value, any aspect rule that attempts to draw upon the source aspect value from that source will trigger an xbrlfe:undefinedSAV error. Test case 22180 v-19 addresses this possibility by having fact variable $a evaluate to a fallback value and then using the uncovered QName in a source attribute to force the formula to attempt to obtain implied SAVs from variable $a. The xbrlfe:undefinedSAV error is expected for that variable-set evaluation.

  • Response: No changes are required to the specification. The interpretation of the specification is correct and appears to be unambiguous. The only question is: should we try to catch and prevent this kind of dependency on variables that can evaluate to fallback values prior to formula evaluation? The answer is clear: we cannot, because this would rule out determination of output aspect values using implicit aspect values for formulae where all facts can evaluate to fallback values.

17/07/08: Herm Fischer: Can we allow attributes with the dateUnion data type in period filters and can we use them in typed dimension filters?

We should also allow dateUnions (xbrli:dateTimeItemType or dateUnion typed dimensions) as arguments for period instant, period start, and period end filters and handle them appropriately (without forcing calling code to deconstruct date unions into date and time). This could be done by suggested XPath code that tests whether the date expression object is a dateTime object and handles it differently from a string or date object.

  • Resolution: This modification will not be implemented in the period filter specification. The change is purely syntactic on the part of the affected period filters and does not impact the functionality of the filters.

17/07/08: Herm Fischer: Using the dateUnion data type in typed dimension filters?

Can we use the dateUnion data type in typed dimension filters?

Andy Harris: Not only should dateUnion data types be allowed, but types that are derived from, or in the same substitution group as, that data type should be allowed. That said, all valid present and future schema types should be accessible in the XPath expressions.

  • Resolution: Such data types can be used in the test attribute of typed dimension filters. No changes are required to the specifications to facilitate this.

27/6/08: Herm Fischer: There is no way to grab precision/decimals from an uncovered SAV.

A use case that is processing each input fact to do some dimension hacking (leaving values and units alone), such as transferring all these items (some numeric and some not) from the LosAngeles office dimension to the SanFrancisco office dimension, might want to output each input SAV (formula:uncovered) with whatever decimals and precision the bound item has, and just hack the OCC dimension. But there is no notation for grabbing accuracy from the formula:uncovered SAV (when it is numeric; accuracy does not apply when it is non-numeric).

  • Resolution: No changes are required to the specifications. As noted by Victor Morilla (14/7/08), the key behind the uncovered SAV is that the implicit filter guarantees that there is a unique value for each uncovered aspect. However, the precision/decimals attribute is not an aspect in any of the current aspect models, so trying to access precision or decimals information through an uncovered SAV is not appropriate. Access that information from the fact variable itself.

16/6/08: Víctor Morilla: There is no clear definition of the behavior of the implicit filter regarding fact variables that have been bound to the empty sequence

The implicit filter should skip such variables (regardless of whether they end up evaluating to fallback values) when doing implicit filtering.

  • Resolution: The suggested behaviour is sensible and is already enforced in the specification by the definition of preceding variables:
 A preceding variable for the current variable is a fact variable in the same variable set 
 as the current variable that has already been evaluated to a sequence of one or more facts. 

Requiring the preceding variable to have evaluated to a sequence of one or more facts was intended to imply that the preceding variable has not bound to the empty sequence. The wording will be changed to make this more explicit:

 A preceding variable for the current variable is a fact variable in the same variable set 
 as the current variable that has already been evaluated and has not evaluated to a fallback value.
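The effect of the reworded definition can be sketched in Python. This is an illustration only: facts are modelled as plain dicts mapping aspect names to values, and none of the names below come from the specification.

```python
# FALLBACK marks a variable that evaluated to its fallback value.
FALLBACK = object()

def implicit_filter(candidates, preceding, uncovered):
    """Keep the candidate facts whose uncovered aspects match every
    preceding variable that actually bound to facts; variables that
    fell back impose no constraint, per the reworded definition."""
    result = []
    for fact in candidates:
        ok = all(
            fact.get(aspect) == bound.get(aspect)
            for bound in preceding.values()
            if bound is not FALLBACK   # skipped: not a "preceding variable"
            for aspect in uncovered
        )
        if ok:
            result.append(fact)
    return result

facts = [{"concept": "Assets", "period": "2008"},
         {"concept": "Assets", "period": "2007"}]
preceding = {"a": {"concept": "Liabilities", "period": "2008"},
             "b": FALLBACK}  # $b fell back, so only $a constrains the match
assert implicit_filter(facts, preceding, ["period"]) == [facts[0]]
```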

13/6/08: Andy Harris: Are sequences of atomic values partitioned into additional sub-sequences based on aspect matching?

As an obscure, but possible example, a future aspect model might define rules for covering atomic values by their xml schema facets.

  • Resolution: The specification only deals with partitioning into sub-sequences for fact variables. Only facts can have aspects.

22/5/08: Herm: Can parameters have sequence values?

Per the WG call, reflecting the Victor-initiated discussion from Eindhoven, supported by formulas written by RH/HF for the Dutch tax office, it is desirable to allow parameters to hold sequences (or nodes or anything else). For example, a sequence of rates or codes, e.g., '(0.35, 0.22, 0.17)', or dates "(xs:date('2007-01-01'), xs:date('2007-03-30'), ...)", or nodes loaded from an ordinary XML document, should all be allowed in a parameter. Then they can be used in lookups from general variables and as otherwise needed in other XPath expressions. They should not be subject to bind-as-sequence or iteration, just act as a typed object (whether atomic value, sequence of atomic values, or nodes from a document).

  • Resolved: No changes are required to the existing specifications. Parameters are not subject to bind-as-sequence or iteration. They can be evaluated to sequences. The final paragraph in the section on parameters states:
 "Unlike parameters in the XSLT 2.0 specification [XSLT 2.0], the parameters defined in this specification cannot contain sequence constructors."

The reference to sequence constructors needs to be understood by reading the material at the link to the definition of a sequence constructor in the XSLT 2.0 specification. Sequence constructors are markup nested within parameter declaration elements. Ruling out such sequence constructors does not rule out sequence-valued parameters.
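A minimal illustration in Python of what a sequence-valued parameter amounts to, using the rate and date sequences from the example above. The names are illustrative only, and XPath's 1-based sequence indexing is mirrored explicitly:

```python
# Parameters bound to sequences, per the example in the feedback.
rates = (0.35, 0.22, 0.17)
dates = ("2007-01-01", "2007-03-30")

def rate_for_band(band: int) -> float:
    """Positional lookup into the parameter sequence, the kind of use a
    general variable might make of it. XPath sequences are 1-based, so
    subtract one for Python's 0-based tuples."""
    return rates[band - 1]
```

The parameter is a single typed object: it is looked up into, never iterated over by the variable-set evaluation machinery.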

22/5/08: Herm Fischer: Is it possible to use a lookup table in an external XML document?

Lookup tables have been discussed at Eindhoven, used for the Dutch Tax Office, and discussed in COREP examples. While such tables can start out as constants in formula expressions, production use may expect them in external, maintainable XML files. VM suggested the use of fn:doc to load such tables (possibly into parameters for convenience, per the wiki note above). Any "ordinary XML document" nodes (tax table, rate table, etc.) will appear as validated, typed nodes according to their own XML schemas, so that ordinary XPath 2 atomization applies to their nodes just as to any fact items from the input instance document. This applies to any document loaded by the fn:doc function, whether specified once in a parameter or in scattered separate expressions. Michael Kay confirms by e-mail that when an XPath processor has its principal document schema-validated (e.g., the source instance document has its nodes schema-validated, as with formula processors), then every document loaded by fn:doc() is likewise schema-validated. This is the behaviour with XSLT 2. (He suggests a "lax" parameter to protect oneself when a fn:doc-loaded document has no schema for its elements. Test 12060-V09 is an example that tests this situation.)

Test case variation 22010 V-12 is such an example. An external XML file's table, loaded by fn:doc, contains rates (decimal) and categories (QNames), each with lower and upper number thresholds for lookup, rather like a primitive tax-rate step table. The instance provides values for each step, and the formula ensures that the nodes are delivered with the proper type in atomization for the value expression.
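The lookup pattern can be sketched in Python, with an invented rate-table layout standing in for the external file. The element and attribute names here are assumptions for illustration, and fn:doc is replaced by parsing an inline string:

```python
import xml.etree.ElementTree as ET

# Hypothetical step table: each step covers the half-open range
# [lower, upper) and carries a decimal rate.
RATE_TABLE = """
<rates>
  <step lower="0"     upper="10000"  rate="0.17"/>
  <step lower="10000" upper="50000"  rate="0.22"/>
  <step lower="50000" upper="999999" rate="0.35"/>
</rates>
"""

def rate_for(value: float) -> float:
    """Find the rate of the step whose range contains the value,
    mimicking the threshold lookup described for 22010 V-12."""
    root = ET.fromstring(RATE_TABLE)
    for step in root.findall("step"):
        if float(step.get("lower")) <= value < float(step.get("upper")):
            return float(step.get("rate"))
    raise ValueError("no step covers %r" % value)
```

In an XPath 2 setting the same predicate would be applied to the schema-validated, typed nodes returned by fn:doc, so no explicit casting would be needed.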

  • Resolution: No changes are required in the specification. It should be noted that using the fn:doc function to load values into a parameter ignores the fact that parameters are designed to avoid the need for such fn:doc function calls. They provide a means for the processing application to supply the necessary parameter value in an application determined way. One application may obtain the parameter value from a specific file. Another may obtain the parameter from the command line. The use of parameters enables the formula to be written in a manner that is independent of the representation of the parameter value outside of the formula itself.

22/5/08 Victor Morilla: Can we clarify the set of OCC SAVs when the relevant source equals the uncovered QName?

For example:

 Formula implicitFiltering=true aspectModel=dimensional source="uncovered" (no OCC rules specified)
 Variable $a:
    ExplicitDimensionFilter scenario dimension=Country member=Japan cover=true
 For the input fact: assets (scenario country=Japan): which of the following results is to be expected?
   A) An output fact with an empty scenario?
   B) Error undefinedSAV?
   C) Error missingSAVForDimensionRule?

I would suggest adding the following to the formula specification: "The set of OCC SAVs of the uncovered QName is the set of uncovered aspects of each fact variable in the variable set." This modification clarifies that the output fact for the example above has an empty scenario.

  • Resolution: No change is required to the specification. The specification defines the term "implied SAV" and there is an implied SAV for each aspect that is uncovered for a fact variable that has evaluated to a non-empty sequence of facts. This includes OCC aspects. For the example provided with the feedback, the fact variable $a would bind to the input fact that is described. Given that variable $a is the only fact variable, and assuming that the input fact has nothing in the scenario except for the country dimension, there would be no implied OCC SAVs. However, there are also no aspect rules in the formula for constructing an output value for the country dimension so no undefinedSAV error would need to be thrown. Likewise, without an OCC aspect rule, it is not appropriate for conforming software to throw a missingSAVForDimensionRule error. That leaves us with producing an output fact. Because the country dimension is covered for the only fact variable in the variable set, and because there is no aspect rule for that dimension, the output fact would not have a country dimension value in its scenario. Because the input had no other content in the scenario, this means that the output fact scenario would be empty.

21/5/08: Herm Fischer: How should the xbrlfe:undefinedSAV error be handled?

See the relevant section of the formula specification for details. That section of the formula specification is intended to convey that the error is required to be thrown when we have:

  1. the uncovered qname as the value of the source being used to determine SAVs;
  2. an aspect rule that determines the value for a specific output aspect when the formula is evaluated and that relies on the SAV for that value determination;
  3. no fact variables in the formula's variable set that meet the following conditions: the variable does not have a filter covering the aspect whose value will be determined by the aspect rule being processed; and it has been evaluated to a non-empty sequence.
  • Resolution: No changes to the drafting of the formula specification are required.

16/5/08: Herm: Period filter date attribute inconsistencies

The usage of separate date and time attributes in period filters is inconsistent with formula period rule expressions and the rest of XBRL 2.1. Instead of separate @date and @time attributes, it may be more consistent to use dateUnions. However, attributes can only have simple types, so such a dateUnion would have to be in sub-elements (instead of attributes). With date and time separate, as now, an if statement is needed to grab a date and time from a concept or typed element that is a dateUnion (instead of a simple reference to the source dateUnion). Even references to period dates should use the xfi:period-xx functions to assure proper timeless-date-to-time conversion.

E.g., if $foo is a factVariable dateUnion (as in a GL instance), or a typed dimension dateUnion (as from a restatement date), one must code something ugly like:

 date="if (data($foo) instance of xs:date) then $foo else ($foo cast as xs:date)"
 time="if (data($foo) instance of xs:date) then xs:time('00:00:00') else ($foo cast as xs:time)"
  • Resolved: No changes required to the specification. The example is somewhat confusing. The discussion is in relation to period filters, yet the motivating examples draw on dateUnion content that is not in the context period but in fact values or in dimension values. The design of the period filter is intended to make the specification of parameter values simple and intuitive, separating the period into a date and an optional time. This seems to provide as much precision as is needed to filter any period that can be used in a context period based on the XBRL 2.1 specification. Merging these two pieces of information into a single value in the filter seems to be a syntactic issue only, and a step toward a less intuitive structure (merging the two values into a single combined value). Perhaps I am missing something.

15/5/08: Herm Fischer: The conformance suite needs extra tests

  • Resolved: No changes are required to the specifications. There are now new test case variations. Some deal with errors resulting from a type mismatch arising during XPath expression evaluation. Others deal with errors that can be thrown when functions in the XBRL function registry encounter a problem. Yet others attempt to deal with issues relating to the XBRL validity of output facts (e.g: test 48210 v-04). The tests attempting to deal with issues of XBRL validity of output facts are out of scope for the conformance suite. Such issues are required to be identified by post processing validation of output XBRL instances.

13/5/08: Herm Fischer: Should the variable specification define an equivalent to the xsl:xpath-default-namespace attribute in XSLT 2.0?

For element and type expressions that are unprefixed QNames in XPath 2 expressions, XSLT 2.0 does not expand them using the default namespace from the defining element. Instead it has an [xsl:]xpath-default-namespace attribute (http://www.w3.org/TR/xslt20/#unprefixed-qnames). We should consider adopting something similar. This would be useful for schemas with no namespace.

  • Resolved: The motivating use case is not sufficiently convincing to warrant the specification change.

8/5/08 Herm: Whitespace trimming. Spec WG has an open bug on trimming of whitespace for s-equality testing.

Formula filters need the same issue addressed. The built-in operations for normalizing space not only trim exterior whitespace but also collapse spaces between words, thus destroying grammatically required double spaces after sentence-ending periods in text, and spaces needed for formatting number strings (e.g., the two numbers 1234567 and 789 may be localized as 1 234 567 and 789, but once the whitespace separating them is normalized they run together and read as a single number under the local thousands-separator semantics).

For string value purposes

  <eg:a>foo</eg:a>

and

  <eg:a>
     foo
  </eg:a>

should be filterable and processable as the same. A prior decision was to trim whitespace. Should this be in the spec?
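The distinction between trimming and normalize-space-style collapsing can be sketched in Python; fn:normalize-space behaviour is approximated here with a regular expression:

```python
import re

def trim(s: str) -> str:
    """Remove only exterior whitespace, leaving interior spacing intact."""
    return s.strip()

def normalize_space(s: str) -> str:
    """Approximation of XPath fn:normalize-space: trim exterior
    whitespace AND collapse each interior whitespace run to one space."""
    return re.sub(r"\s+", " ", s).strip()

# Both treatments make the two eg:a examples above compare equal:
raw = "\n   foo\n"
assert trim(raw) == normalize_space(raw) == "foo"

# But collapsing destroys meaningful interior spacing: two localized
# numbers separated by a double space become one string of digit groups.
two_numbers = "1 234 567  789"
assert normalize_space(two_numbers) == "1 234 567 789"
assert trim(two_numbers) == two_numbers
```

This is why trimming, rather than full normalization, was the prior decision.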

  • Resolved: No specification changes are required. We inherit appropriate whitespace handling via the usage of the functions in the function registry, which need to be designed to reflect the comparison operators defined in the XBRL 2.1 and XDT specifications. It would worry me if we started having to address this issue in specific formula or filter specifications. If some function definition requires refinement, we will do that instead.

8/5/08 Herm Fischer and Geoff Shuetrim: Could we use an author specified ordering of filter execution to simplify type checking in filters?

The specifications have been designed with the following principle in mind: Filters MUST be independent. In other words, when designing a filter, users should not need to condition their understanding of the behaviour of the filter based upon the other filters that will be used. Likewise, programmers should not need to build such conditionality into their code, examining the full set of filters to be applied before determining the logic (read XPath expression) that will implement a given filter.

That said, as the specifications now stand, variable-set evaluation can result in evaluation exceptions that are caused by filters applying tests to facts without first checking that the data type of the information being used by the filter matches the tests being applied by the filter. To avoid such evaluation exceptions today, we are doing two things. For some of the filters, we are building the type checking directly into the implied XPath expressions for the filters. That is simple. For other filters, and in particular for the general filters, such type checking is the responsibility of formula authors. We could simplify this for formula authors by modifying the specification of fact variable evaluation such that:

  1. Authors had control over the evaluation ordering of filters.
  2. Filtering was not done in a single XPath expression evaluation but as a sequence of XPath expression evaluations, one per filtration.

These changes would allow a fact variable definition to be constructed in a way that did things like concept data type filtering prior to general filtering based upon some test of the fact value that relies on the fact value being of the required data type. Then the general filter does not need to incorporate the data type testing. The author of the fact variable could control filter evaluation order by using the order of the arcs from the fact variable to the filters (though we would have to introduce some new constraints requiring the relationships to the filters to all be in the same network, plus some additional rules to handle those filters related directly to the fact variable versus those related to the variable set as a whole). This new feature would mean that the impact of a filtration depends on the preceding filters, and that may have considerable adverse implications for some kinds of software optimisations, such as caching of filtration results.

An example of the issue follows:

 If an instance has 3 facts, one each of a string, integer, and QName data type, and a fact variable has a general filter ". eq 'foo'"
 and a concept data type filter that selects only facts with string data types, then we will encounter XPath evaluation exceptions when comparing
 . to 'foo' for both the integer and QName typed facts. See test case 45210 v-04 and v-07 for complete examples highlighting the nature of this issue.
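A Python sketch of the ordering issue, with "<" standing in for an XPath comparison that fails on mismatched types (the names and data are invented for illustration):

```python
# Three facts of different data types, as in the example above.
facts = ["foo", 7, "bar"]

def value_filter(v):
    """Stand-in for the general filter's value test; comparing an int
    to a string with "<" raises TypeError, mimicking an XPath type
    mismatch exception."""
    return v < "foo"

def type_then_value(items):
    """Apply the data-type filter first, as the rejected suggestion
    would have allowed, so the value test only sees strings."""
    strings = [v for v in items if isinstance(v, str)]
    return [v for v in strings if value_filter(v)]

def guarded(items):
    """Build the type guard into a single predicate, which is what the
    current specification expects of general-filter authors."""
    return [v for v in items if isinstance(v, str) and value_filter(v)]
```

Both approaches select the same facts; the difference is only where the type guard lives, which is why the working group judged the ordering feature unnecessary relative to its optimisation cost.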
  • Resolved: The suggestion has been rejected by the working group for the following reasons:
    • it introduces an important limitation on the ability of processors to improve the performance of filtration operations (e.g. see the RETE algorithm);
    • it places too much of an onus on formula authors.

22/4/08 Suguru Washio: How should we handle role types for generic labels?

The role "http://www.xbrl.org/2003/role/label" cannot be used for label:label, because there are no link:roleType elements which define the role for label:label.

I think it is OK to use the XBRL standard label and reference roles for generic labels and generic references without defining link:roleType elements, but I could not find any description of this in the GNL spec, and I found only the following descriptions.

[GNL - 1 Introduction]

 This specification also does not define any XLink resource roles for use with 
 generic labels.

[XBRL 2.1]

 There MUST NOT be more than one roleType element with the same roleURI attribute value within a taxonomy schema. Within a 
 DTS, there MAY be more than one roleType element with the same roleURI attribute value. However, all roleType elements with 
 the same roleURI attribute value MUST be s-equal.
  • Resolution: There are no specification implications. Generic label and reference roles need to be defined in the link role registry. We will use different role values than those used for the XBRL 2.1 role values. The conformance suite tests require additional label role type definitions to be provided and those role types to be used in the linkbases containing generic labels.

7/4/08 Herm Fischer: Should we add a new OCC rule?

Formula specification: OCC rules: I would think that a generalized formula, that could apply to concepts with and without dimensions, thus contexts with and without dimensions, but emit formula results of nondimensional context, could sort of generalize, with an OCC dimension augment=false (and no omit) simply to wipe away any OCC aspects including (if there are any) dimensions. Or maybe I'm dreaming too far off the edge of edge cases?

  • Resolution: If such an OCC rule is deemed to be useful, it will be defined in a future specification. Right now it is out of scope.

17/4/08 Andy Harris: Should we add exceptions for when QName arguments for a concept or data type match no concept or data type in the supporting DTS?

  • Resolution: Such errors will be caught at the XBRL instance validation stage and, in keeping with our treatment of all other aspects of output fact validity, that is where we should leave it in the specifications themselves. That said, warnings from software for those limited number of situations where such output validity problems can be detected in advance are likely to be valuable to users.

16/4/08 Herm Fischer: Uncovered aspect matching of facts in sequences that fact variables have evaluated to - should this occur regardless of implicit filtering?

Variable Specification: An evaluation of sequence fact variables always requires matching of the uncovered aspects of the facts in the evaluation sequence. This is regardless of whether implicit filtering is being used for the variable set containing the sequence variables being evaluated. Is this desired?

  • Resolved: No changes to the specification are required. The described feature is intended. Implicit filtering is optional only because other methods of matching uncovered aspects may be deemed more useful by formula authors (eg: relative filters which permit matching of uncovered aspects across fact variables to be somewhat more granular). It may be helpful for people to view the ability to turn implicit filtering on or off as a means of switching between uncovered aspect matching approaches, not as a means of switching between matching uncovered aspects and not matching uncovered aspects.