The section on rational numbers for expressions in the v1 DSDL specification gives me pause. I’m concerned that platform differences in floating point representations could lead to the sort of subtle, hard-to-find bugs that result in a spacecraft impacting planetary bodies at speed (http://exploration.esa.int/mars/59176-exomars-2016-schiaparelli-anomaly-inquiry/). The current draft says that only IEEE 754 numbers can be used, but IEEE 754 allows for several different levels of precision, and many dynamic languages (like Python) provide no guarantee for the bit depth used on a given platform.

One may simply say “smart developers should understand precision loss in floating point calculations,” which is true. My concern, raised by this response, is that these calculations are performed quietly by several different DSDL processors, which most developers will take for granted. However, my larger concern is that we are performing independent calculations on both sides of a wire to generate constants. The concept of a constant implies that it abstractly exists as an immutable, singular value (i.e. the implication is stronger than equality, suggesting that two representations of a given constant physically represent the same object). Because we do not transmit the constant itself, and because we declare a given DSDL type of the same version to be bit-compatible, most developers will (with apologies to Leibniz) simply assume REMOTE_CONSTANT_A == LOCAL_CONSTANT_A without expecting the two to differ due to resolution loss.

Because of this, and because UAVCAN is targeting systems where this sort of thing may be more than academic (i.e. Mars is very far away and spacecraft go very fast), I propose we require that constants always be calculated by a given DSDL processor in a way that provides a deterministic binary result that will always be == between any two processors.

The easy way to do this is to disallow floating point numbers in expressions. The more complex way is to force a given bit depth for the IEEE floating point values used internally. This latter approach may make building compliant Python processors difficult.

Thoughts?

Footnote

From Section 3.3.1 in the v1 draft:

The associated loss of information inherent to IEEE 754 representations is assumed to be acceptable.

There may be wording/clarity issues, but the intention of the current specification draft is to require just that.

Rational numbers store exact values. Having entered a real literal in a DSDL definition, say, 123.456e-123, the developer will be certain that any compliant DSDL processor will use that exact value internally without any possibility for information loss:
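As a sketch of what this looks like in practice, Python’s fractions.Fraction can hold the value exactly (this is illustrative only; the names below are not part of any real DSDL processor API):

```python
from fractions import Fraction

# The literal 123.456e-123 parsed into an exact rational:
# significand 123456/1000, scaled by 10**-123.
value = Fraction(123456, 1000) / Fraction(10) ** 123

# No information has been lost; the exact decimal round-trips.
assert value == Fraction('123.456e-123')
```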

Any further transformation performed in the process of a DSDL expression evaluation must be exact (excepting the case of exponentiation with a non-integer power – in that case, the spec says that the accuracy is to be implementation-defined; if desired, we can prohibit non-integer power completely or define specific accuracy goals, although I find neither of the alternatives desirable).

From the above follows that the end result of a DSDL expression evaluation is a well-defined deterministic number which is invariant to the platform’s floating point representation because a spec-compliant DSDL processor does not internally use floating point at all. The first and only instance of information loss occurs when the result of a DSDL expression is assigned to an IEEE 754 constant value; in this case, the spec demands that the nearest valid value is to be used, which is deterministic.
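In Python terms (a sketch: float() applied to a Fraction performs correctly rounded conversion to the nearest binary64 value, matching the “nearest valid value” rule):

```python
from fractions import Fraction

# Rounding an exact rational to the nearest binary64 is deterministic
# and matches what the target language's literal parser produces:
assert float(Fraction(1, 10)) == 0.1
assert float(Fraction(355, 113)) == 355 / 113
```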

The associated loss of information inherent to IEEE 754 representations is assumed to be acceptable.

The distortions are well-defined and are guaranteed to be identical across all spec-compliant DSDL processors because they:

1. Evaluate DSDL expressions exactly (unless they contain exponentiation with a non-integer power).

2. Always use the nearest valid IEEE 754 value when converting the exact value of the initialization expression into the final constant value.

But they aren’t == if the information loss differs across the wire: one side converts to an IEEE representation with 64 bits of precision while the other side uses 32 bits.
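The mismatch is easy to demonstrate (a sketch using Python’s struct module to round the same value to binary32):

```python
import struct

# The same value rounded to two different IEEE 754 widths:
as_binary64 = 0.1                                             # nearest binary64
as_binary32 = struct.unpack('<f', struct.pack('<f', 0.1))[0]  # nearest binary32

# Widened back to binary64 for comparison, the two results differ:
assert as_binary64 != as_binary32
```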

Again, I agree that this is a common programming concern that engineers must know how to deal with and handle correctly. However (and perhaps this is really my concern), we are defining a sneaky little programming language in expressions but mislabeling the results as CONSTANTS. While they are CONST, they aren’t mathematical constants in the sense that they are the same value wherever they are encountered. I fear engineers will have a blind spot when consuming constants that are calculated expressions: they carefully account for precision in their C code but forget (or just don’t realize) that there is another program running on a different part of their system (at compile time) that also has precision concerns.

Perhaps I should just simplify my concern to: are expressions too much complexity for the DSDL language to take on especially in this first version?

Would you also apply this reasoning to fundamental physical constants? Suppose one platform defines pi as #define M_PI 3.14159265358979323846, another as boost::math::constants::pi<float>(); even if both rely on IEEE 754, their runtime representations will likely be different.

The part about compile-time precision concerns is not exactly true. A DSDL expression is guaranteed to yield the exact same value regardless of any particular properties of the DSDL processor, as long as it is spec-compliant (excepting the issue with exponentiation I mentioned earlier). Platform-specific variations appear only when the exact value is converted to a platform-specific storage format, but such variations also affect fundamental physical constants. Engineers must be prepared to deal with precision loss caused by imperfect modeling of real numbers by IEEE 754 or whatever native floating point format they use; we can’t change that, and disallowing constant expressions in DSDL will not affect this problem in any way.

While I agree this is completely reasonable and even expected, what is not expected is that the exact binary value of CONST FOO will depend on the system where the DSDL generator was executed. This means the binary value may vary between build runs if an organization is using a heterogeneous build fleet, and may vary between two different systems when generating the same code from the same DSDL*.

To illustrate my concern, imagine if the C preprocessor supported our version of expressions. If this was the case then, when cross compiling, the floating-point bit-depth would leak from the compiler’s host platform onto the platform being targeted. I want to avoid this complication with DSDL expressions**.

A Counter Proposal

In section 3.3 we state:

Expression types are a special category of data types whose instances can only exist and be operated upon at the time of DSDL definition processing.

If we require that expressions are actually expanded into native expressions then we will use the target platform and its properties to calculate the final value. This will keep all platform-specific limitations consistent for a given code base.

With this proposal, any true constant will be expanded simply as a literal for a target platform to compile, whereas expressions will be expanded into statements to be executed either by a compiler (e.g. C++ constexpr) or at runtime. This avoids adding the complexity of running a program in the DSDL generator and pushes platform-specific resolution loss out to the target compiler.

* This might be a complication for aviation systems that are certified at higher DAL levels.

** Of course C++ now has this very problem, which is another reason why I’m pushing so hard on this. I don’t want to add a third way floating point values can be calculated when, for libuavcan in particular, I already have two ways to consider.

This is not true. The output of the DSDL generator is only a function of the expression itself; no property of the platform may affect the final result of any expression.

Say, a DSDL expression 0.1 + 0.1 + 0.1 - 0.3 will always yield an integer zero because all computations are exact. This would not be true if we were relying on native floats or some other platform-specific representation. Likewise, 1.1 + 2.2 would yield exactly 3.3 rather than:

>>> 1.1+2.2
3.3000000000000003

So a DSDL expression is equivalent to a raw literal as far as precision is concerned.
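The same computations carried out in exact rational arithmetic behave as described (a sketch using Python’s fractions module, which is one way a spec-compliant processor could implement this):

```python
from fractions import Fraction

# Evaluated exactly, the classic float pitfalls disappear:
assert Fraction('0.1') + Fraction('0.1') + Fraction('0.1') - Fraction('0.3') == 0
assert Fraction('1.1') + Fraction('2.2') == Fraction('3.3')

# Whereas native binary floats accumulate representation error:
assert 0.1 + 0.1 + 0.1 - 0.3 != 0
```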

But we’ll be emitting a floating point token from the generator, won’t we (sorry, I feel like I’m frustrating you with this conversation)? How do we determine the precision of that final token?

Could we not actually emit a function where a floating point value would be required? Say we get to the end and we have 355/113 in our internal representation. Instead of pasting ‘3.14159292035’, we would emit a representation that does not lose precision and require the generator to provide a function that can be a compile-time expression (e.g. C++ constexpr) or a runtime expression, allowing the system to decide how and when to convert to an IEEE format.
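A minimal sketch of such an emitter (the helper name is hypothetical, not part of pydsdl or any real generator):

```python
from fractions import Fraction

def emit_constant_expression(value: Fraction) -> str:
    """Emit an integer division expression instead of a rounded literal,
    deferring the float conversion to the target compiler or runtime."""
    return f"{value.numerator} / {value.denominator}"

assert emit_constant_expression(Fraction(355, 113)) == '355 / 113'
```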

I think we’re close to agreeing but I’m still confused by statements like this:

Conversion from a rational number into an IEEE 754 floating point representation is not allowed if the source number exceeds the finite range of the target floating point representation

But how does the DSDL “compiler” know the target floating point representation if it is generating source code?

I guess this might also be me needing to RTFM for pydsdl and see how pydsdlgen is given the result of the expression. Does it get a string token, a python float, or some other representation?

This is sensible. It also seems to relieve spec-adhering implementations from dealing with rounding to the nearest IEEE 754, which is some hairy business.

In the pyuavcan code generator that I am building, floating point constants are currently initialized with a float division expression; the example below evaluates to the first dozen digits of pi:

VALUE: float = 314159265358979 / 100000000000000

It is a wording issue. The “target floating point representation” refers to the particular IEEE 754 format used in the context at hand, which is known to the DSDL processor. The objective of this constraint is to ensure that we are not trying to initialize a float16 constant with 10e+300. Platforms that use floating point representations other than IEEE 754 may face issues here, particularly if the range of a floating point type specified in DSDL exceeds that of the native non-IEEE floating point type. Edge cases like this are to be resolved by the implementation (for example, by picking a wider or emulated floating point representation).
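The range check itself is straightforward because the exact value is available as a rational (a sketch; the constant and helper names are hypothetical, with 65504 being the largest finite IEEE 754 binary16 value):

```python
from fractions import Fraction

FLOAT16_MAX = Fraction(65504)  # largest finite IEEE 754 binary16 value

def check_fits_float16(value: Fraction) -> None:
    # Reject values outside the finite range of the target format,
    # e.g. trying to initialize a float16 constant with 10e+300.
    if abs(value) > FLOAT16_MAX:
        raise ValueError('value exceeds the finite range of float16')

check_fits_float16(Fraction(355, 113))  # within range, no exception
```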

It gets a rational. The constant in the above example is emitted by the following template expression: