It’s no secret that a pillar of a standard’s adoption is the availability of well-tested libraries in the desired software/technology stack, something that’s obviously very difficult to produce and maintain. Perhaps some tooling could help encourage adoption.
Has any thought been put into the development of a test set (or test bed) to validate Cyphal implementations? By that I mean a piece of code that actually sends and receives data through the wire and can be used as a form of black box testing by library developers.
This would also be very helpful in ensuring interoperability of the different implementations and detecting anomalies when changes to the standards arise.
Food for thought,
It is indeed true that having an automatic conformance testing suite would help, and it is something that has been discussed on this forum in the past; however, no such thing has been built to date. When conformance testing is required, it is performed manually with the help of the following tools:
(Manual conformance testing is performed on the products featured in the Cyphal.Store).
I’m glad the idea is well received.
I’m late to arrive at this show, but I’ve worked in the software business for a long while and I can say this with a very high degree of confidence:
- The idea behind Cyphal is very clever as it embraces change. Change is inevitable, and a requirement for progress.
- Change is also very hard; humans are creatures of habit.
- Automation is key to success.
Other standards such as J1939 and NMEA2000 are, put bluntly, detrimental to progress and only serve to enforce the monopoly of a few big players.
That being said, I believe Cyphal has the potential to become the de facto standard across a broad range of industries, not just in the UAV world. The reality is that all modern vehicles are becoming flying/rolling/floating/etc. computers, and human interaction is going to diminish, or even disappear, over time.
I’m willing to donate some of my time to the development of such tooling, as it would 1) help me absorb the material you put together and 2) ease development (let’s face it, we all hate manual testing… error-prone and boring).
Can you point me to any relevant conversations/ideas/proposals so I can get up to speed? I’ll re-read the guide and specifications in the meantime. It’s important to fully understand a problem before trying to solve it…
Again, my compliments on your work, very impressive.
Thank you for the kind words.
I’ve looked through the forum but failed to find useful topics for you because this idea was mostly discussed in fairly vague terms. The general idea is to make a highly configurable Python script that can interrogate a node using any of the supported transports (PyCyphal supports all of them) and verify that the transport layer operates correctly and that the application-level functions behave per the Specification. The list of application-level functions to test should be configurable, for example via command-line options (e.g., one option for testing the register API, another for testing the uavcan.node.GetInfo service, etc.). Obviously, it’s going to be difficult to achieve full state-space exploration with a general-purpose script like this; the objective instead should be to provide sensible minimal coverage for the most common use cases.
It might be a good idea to build this tool on the pytest framework.
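Just to make the idea concrete, here is a rough sketch of how such a configurable conformance runner could be organized. Everything here is hypothetical: the real version would talk to the device under test through PyCyphal, while this sketch substitutes a fake in-memory node so it is self-contained; the check names and the `FakeNode` methods are invented for illustration.

```python
# Hypothetical skeleton of a configurable conformance-test runner.
# A real version would interrogate the device under test via PyCyphal;
# a fake in-memory node stands in here so the sketch is self-contained.
import argparse
import asyncio


class FakeNode:
    """Stands in for a PyCyphal node connected to the device under test."""

    async def get_info(self):
        return {"name": "com.example.dut", "protocol_version": (1, 0)}

    async def register_list(self):
        return ["uavcan.node.id", "uavcan.node.description"]


async def check_get_info(node):
    info = await node.get_info()
    assert info["protocol_version"][0] == 1, "DUT must report protocol v1.x"


async def check_registers(node):
    names = await node.register_list()
    assert "uavcan.node.id" in names, "register API must expose uavcan.node.id"


# Each application-level function gets its own named check, selectable
# from the command line, e.g.: --check get_info --check registers
CHECKS = {"get_info": check_get_info, "registers": check_registers}


def main(argv=None):
    parser = argparse.ArgumentParser(description="Cyphal conformance sketch")
    parser.add_argument("--check", action="append", choices=CHECKS, default=None)
    args = parser.parse_args(argv)
    selected = args.check or list(CHECKS)  # default: run everything
    node = FakeNode()
    for name in selected:
        asyncio.run(CHECKS[name](node))
        print(f"{name}: OK")
    return selected


if __name__ == "__main__":
    main()
```

The same check table maps naturally onto pytest parametrization, with one test per application-level function and the selection driven by command-line options.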
Just thinking out loud here, by all means correct me if you think I’m off the mark.
A recurring problem I see with specifications is that they’re written in natural language, which is ambiguous and open to multiple interpretations. HTML is a perfect example.
Source code on the other hand doesn’t leave room for ambiguity. Each bit is either a 0 or a 1, there are no 0.5s.
Another important aspect is having a single source of truth.
It seems to me, from what I’ve seen so far, that your C implementations are the simplest and the most likely to be accurate. Those C libs could be the “reference implementation” and treated as an annex to the source of truth, which is the specs.
Wouldn’t starting with CAN/libcanard be a good idea, seeing as it’s what’s most likely to be used in production? It’s also a good choice, I think, because unlike Python, C code can be called via FFI from pretty much any programming language in existence.
Again, thinking out loud… Thoughts?
Basing the test suite on libcanard is not the most sensible choice: interfacing it with higher-level languages would require an amount of glue code comparable to a full reimplementation in that language from scratch, because the library itself is very simple (~1k LoC).
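To illustrate the kind of per-function glue that FFI entails, here is a minimal ctypes sketch. It deliberately wraps libc’s `strlen` as a stand-in, since wrapping actual libcanard functions would additionally require mirroring its frame and transfer structs in ctypes, multiplying this boilerplate for every call.

```python
# The kind of per-function glue FFI requires: load the library, declare
# each function's argument and return types, convert data at the boundary.
# libc's strlen stands in here for a libcanard function (POSIX only).
import ctypes

libc = ctypes.CDLL(None)  # handle to the already-loaded C library
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t


def string_length(s: str) -> int:
    # Every call crossing the FFI boundary needs conversion like this;
    # for libcanard, frames and transfer metadata would also need C
    # struct definitions replicated on the Python side.
    return libc.strlen(s.encode("utf-8"))
```

Multiply this pattern by every exported function plus the struct definitions, and the glue quickly approaches the size of the ~1k-LoC library itself.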
Understood. I’ll look at the various options and see what I can come up with.
Thanks for your input, much appreciated,
“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”
— Antoine de Saint-Exupéry
Another possible approach: do the heavy lifting in Rust, and expose the functionality to multiple languages.
Signal is a very good example of how to do so: GitHub - signalapp/libsignal: Home to the Signal Protocol as well as other cryptographic primitives which make Signal possible.
I’m not sure there is anything in Cyphal transport-layer implementations that qualifies as heavy lifting. As I said, the core can be implemented in ~1k LoC, which makes one question whether it’s easier to reimplement it from scratch than to deal with FFI.
Then I’m not clear on why the Canadensis implementation is so complex. And in its current state, it only supports Linux and bxCAN devices. C is very permissive (dangerously so), but most other languages are not. The reality is that when it comes time to hire help, higher-level languages offer more options. Very qualified low-level developers are few and far between.
I’m going out on a limb here, but assuming there is a wide adoption of Cyphal, don’t you think it’s safe to assume a large percentage of the workloads would land on full blown operating systems?
If the Cyphal specifications were going to be carved in stone for all eternity, then I would agree. What do you think are the chances of that happening? Keeping multiple code bases in sync when they do the same thing, are owned by multiple teams, and track a moving target is hell… been there.
Not trying to be argumentative, just trying to avoid the age-old software mistake: coming up with a great solution to the wrong problem. Hope this helps,
This is (tangentially) related to your question regarding testing, but perhaps you’ll find it interesting.
Since most Zubax Robotics products rely on OpenCyphal for IO, there are currently 2 setups (Myxa and EPM) which use integration testing to make sure that the Cyphal interface works correctly.
As far as testing goes, I think there’s a good chance that errors would have already been detected if either of the following were not up to spec:
If you’re interested, here’s a good start that shows how our test pipeline works: Devops for Zephyr RTOS - Blog - Zubax Forum (see Section 4 which covers Integration tests).
In the case of EPM, we can then do a test as follows:
pub_command = ctx.node.make_publisher(Integer8, DEFAULT_CONFIG["uavcan.sub.command.id"])
assert await pub_command.publish(Integer8(value=1))  # 1 = ON
feedback_msg_meta = await sub_feedback.receive_for(60)  # Wait for feedback confirming the magnet is on
assert feedback_msg_meta is not None, "No feedback received within the timeout"
feedback_msg, meta = feedback_msg_meta
assert isinstance(feedback_msg, Feedback)
assert feedback_msg.cycles_on_off == 1, "Cycle count should be one after a single ON command"
This publishes a Cyphal message to the magnet, then verifies whether the command has been processed and the relevant counters updated correctly. (EPM is connected to a desktop using Babel)
As far as I can tell this is not what you’re after? (You’re not interested in testing some actual hardware, just the libraries themselves?)
PS: At some point I will try to make a post about how to do this integration testing of Cyphal hardware, since I suspect this might be useful for other developers as well.
Thanks for sharing, very nice. Indeed, I’m sure a blog post would be well received.
I don’t think complete code coverage is possible other than via unit tests (although bugs can exist in unit tests too!), but a good set of integration tests is very likely to catch a bad commit early.
Like they say: If you’re going to fail, fail early.
Looking forward to your blog post!
I’ve collected a few pytest tests for the Cyphal application-level functions (section 5.3 of the Specification). I periodically add new tests when manual testing gets tedious. I hope they can be useful for someone: https://github.com/PonomarevDA/tools/tree/main/cyphal