Data type extensibility and composition

The problem is that we are trading data coupling for control coupling. In a hard real-time system the latter is less desirable and can even be fatal. As such, I wonder if we should invert this to be “deterministic by default” and require data type authors to explicitly mark a type as not-final or mutable to enable this behavior:

# Y.1.0
@mutable
bool b
bool c
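
In runtime terms, “deterministic by default” could mean something like the following. This is a minimal sketch in Python, not any real pyuavcan API; the names are hypothetical and fields are treated as byte-granular for simplicity (real DSDL is bit-level). A final type rejects any size mismatch outright, while a type that opted in via @mutable tolerates data appended by a newer revision:

from dataclasses import dataclass

@dataclass
class Field:
    name: str
    size_bytes: int  # byte-granular for simplicity; real DSDL is bit-level

def deserialize(fields, is_mutable, payload):
    expected = sum(f.size_bytes for f in fields)
    if len(payload) < expected:
        raise ValueError("payload shorter than the known fields")
    if len(payload) > expected and not is_mutable:
        # Deterministic by default: a final type must match its layout exactly.
        raise ValueError("unexpected trailing bytes on a final type")
    # A mutable type ignores bytes appended by a newer version of the sender.
    out, offset = {}, 0
    for f in fields:
        out[f.name] = payload[offset:offset + f.size_bytes]
        offset += f.size_bytes
    return out

# A Y.1.0 receiver tolerates an extended sender only because Y opted in:
deserialize([Field("b", 1), Field("c", 1)], is_mutable=True, payload=bytes(3))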

The type of control coupling I’m describing already exists in UAVCAN in the form of variable-length arrays, but those are bounded, so their effects on a remote system are also bounded and a provable analysis of the system remains possible. Allowing up to 9 petabytes of additional data would throw any marginal analysis of a system right out the window. It also raises the specter of the protocol splitting into deterministic and non-deterministic flavors, which I’m solidly against. This suggests that we must take the additional step of requiring type authors to provide an upper bound:

# Y.1.0
@mutable[<=16]
bool b
bool c
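
The payoff is that worst-case analysis stays trivial: the bound simply adds to the type’s nominal footprint. A sketch, assuming the [<=16] argument is a byte count (the unit would need to be pinned down in the spec):

def worst_case_size(nominal_bytes, mutable_bound=0):
    # A final type never grows; a mutable type grows by at most its bound,
    # so schedulability and latency analysis only need these two numbers.
    return nominal_bytes + mutable_bound

# Y.1.0 above: two bools (one byte each, for simplicity) + up to 16 bytes.
assert worst_case_size(2, 16) == 18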

This complicates serialization, since we must now have the previous version of a type available when serializing the new one, in order to verify that its mutability bounds have not been violated:

# X.1.0
@mutable[<=16]
bool a

# X.1.1
@mutable[<=16] # cannot ever change after being defined in 1.0
bool a
bool d

# Y.1.0
bool b
bool c

# Y.1.1
bool b
bool c
bool e # <- illegal since 1.0 was not mutable

# Z.1.0
# implicitly, this is now @mutable[<=16]
X.1.0 x
Y.1.0 y
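
To illustrate, here is a rough sketch, in Python with entirely hypothetical names (this is not pydsdl), of the definition-time check this would require, plus one possible rule for how composition could derive the implicit bound on Z:

from dataclasses import dataclass

@dataclass
class TypeDef:
    field_sizes: list   # ordered (name, size_in_bytes) pairs
    mutable_bound: int  # 0 means the type is final

def check_new_version(old, new):
    # The mutability declaration is frozen at 1.0 (X.1.1 above).
    if new.mutable_bound != old.mutable_bound:
        raise ValueError("mutability bound cannot change after 1.0")
    # Existing fields must be preserved verbatim.
    n = len(old.field_sizes)
    if new.field_sizes[:n] != old.field_sizes:
        raise ValueError("existing fields may not be changed")
    appended = sum(size for _, size in new.field_sizes[n:])
    if appended and old.mutable_bound == 0:
        raise ValueError("cannot append fields: 1.0 was not @mutable")  # Y.1.1
    if appended > old.mutable_bound:
        raise ValueError("appended fields exceed the declared bound")

def implicit_bound(members):
    # One possible composition rule: a composite inherits the sum of its
    # members' bounds, which is how Z.1.0 above ends up @mutable[<=16].
    return sum(m.mutable_bound for m in members)

Running check_new_version over Y.1.0 and Y.1.1 would reject bool e exactly as annotated above, and implicit_bound over Z’s members yields 16 because only X contributes.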

I’d be interested to hear how you would handle these issues.