A few months back I said that I am slowly working on a Yukon-related proposal. A recent post by @bbworld1 on GitHub made me realize that I had better make my work slightly more visible to enable collaboration. Here we go:
At this very moment, there is virtually nothing you could run and poke at (aside from a silly demo script that doesn't do anything interesting), but I am working on putting all the core blocks in place to make that possible. Once the groundwork around the DCS and the basic networking is finished, it should be easy to distribute the remaining tasks across a larger group of people.
For now, I would like people to validate the design proposal and submit architectural criticism. There are no specifics to argue about yet, but they do not appear to be required for a high-level discussion. If the general direction is considered sensible, I would like to tag the current master and merge my work on top of that to continue cowboy-style development upstream.
I will continue to focus on the core business logic and the nitty-gritty networking details. In the meantime, I would really appreciate help on the following fronts:
The DSDL definitions of Yukon take 10 minutes to compile. This is despite the fact that some of the messages are intentionally truncated to speed up compilation. The reason is this old combinatorial explosion problem in PyDSDL, which I expect to be fixable by replacing the current brute force algorithm (exponential complexity) with something closer to linear:
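To give a feel for the scale of the problem, here is a toy model of it (an assumed illustration, not PyDSDL's actual code):

```python
# Toy model of the bit-length-set computation. Naively, a composite type's
# set of possible serialized lengths is the Cartesian product of the
# per-field possibilities, so the work grows exponentially with the number
# of variable-length fields.
import itertools

def naive_bit_length_set(fields):
    """Brute force: enumerate every combination of per-field lengths."""
    return {sum(combo) for combo in itertools.product(*fields)}

def envelope_bit_length_set(fields):
    """Closer to linear: track only the (min, max) envelope, which
    composes additively, one field at a time."""
    return sum(min(f) for f in fields), sum(max(f) for f in fields)

fields = [{0, 8, 16, 24, 32, 40, 48, 56}] * 10   # ten variable-length fields
print(envelope_bit_length_set(fields))           # (0, 560) -- instant
# naive_bit_length_set(fields) would grind through 8**10 > 10**9 combinations.
```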
Build a simple discardable demo based on Dear ImGui (better) or Electron (slightly less interesting) that subscribes to a UAVCAN subject with GUI messages and renders them on screen. When the user clicks an interactive GUI element, the demo should send a UAVCAN service request to the node that published the element. You can use the new PyUAVCAN tutorial to bootstrap the UAVCAN side of the demo. The GUI UAVCAN types, as I imagine them, should be built roughly like this:
A top-level message that contains a unique ID of the element (say, a uint64 hash) and the element itself.
The "element itself" is a union of several options, such as a button or a text label.
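To make the intended shape concrete, here is a hypothetical Python-side mirror of those two bullets (in the real project these would be DSDL definitions compiled by PyDSDL; all names below are placeholders):

```python
# Hypothetical mirror of the imagined GUI types; every name is a placeholder.
from dataclasses import dataclass
from typing import Union

@dataclass
class Button:
    text: str  # the button caption

@dataclass
class Label:
    text: str  # static text to display

@dataclass
class Element:
    unique_id: int                # the uint64 hash identifying this element
    widget: Union[Button, Label]  # the union of element options
```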
When the user clicks on an interactive element (like a button) or enters text, the event is reported back to the publisher via a service call along with the unique ID of the affected element and some payload (like entered text). A stateless immediate-mode framework like Dear ImGui is probably a better fit for this task because you can just periodically scan the most recently received GUI objects and render them in a single pass, then start anew on the next iteration.
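As a sketch of that single-pass rendering, assuming the pyimgui bindings and the placeholder types from the sketch above (send_event() stands in for the UAVCAN service request and is not a real API):

```python
import imgui  # the pyimgui bindings for Dear ImGui

def send_event(uid):
    # Placeholder: the real demo would issue a UAVCAN service request to
    # the node that published the element (e.g. via PyUAVCAN).
    print(f"element {uid:#x} activated")

def render(elements):
    """One immediate-mode pass over the most recently received elements,
    keyed by their uint64 ID; called once per frame, then started anew."""
    imgui.begin("Remote GUI")
    for uid, element in elements.items():
        if isinstance(element.widget, Button):
            # imgui.button() returns True on the frame the button is clicked.
            if imgui.button(element.widget.text):
                send_event(uid)
        else:
            imgui.text(element.widget.text)
    imgui.end()
```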
Am I making sense here? Would anybody like to join this effort? Maybe this time we will drive it to some conclusion.
I'd be interested in trying out building the demo for the GUI; it seems like a cool project, and using UAVCAN to render UI elements is a very interesting possibility that I haven't really considered before. While I haven't had much experience with Dear ImGui itself, I've used somewhat similar frameworks and the API seems intuitive. I will need to brush up a bit on PyUAVCAN; I'm still a bit of a beginner with UAVCAN in general, so you might have to bear with a couple of my idiotic questions (apologies ahead of time).
Speaking of questions, one major question I had was about using Dear ImGui in this project together with a UAVCAN implementation. The library is written in C++; the two open-source UAVCAN implementations listed on uavcan.org are libcanard (C11, intended for embedded environments) and pyuavcan (Python; libuavcan, the C++ implementation, seems to still be in progress), with pyuavcan being recommended for HMI applications such as this. Is it fine to write this using libcanard? Should we use Python bindings for ImGui?
We should use Python. Libcanard has two problems here:
1. It supports only UAVCAN/CAN, whereas for Yukon it makes sense to rely on a more capable transport like UAVCAN/UDP.
2. Life is short, use Python.
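For illustration, bringing up the UDP transport looks roughly like this (a sketch against the PyUAVCAN v1.x API as I remember it; check the current docs before relying on the details):

```python
import pyuavcan.transport.udp
import pyuavcan.presentation

# In PyUAVCAN's UDP transport the local node-ID is derived from the local
# IP address, so no separate node-ID argument is needed.
transport = pyuavcan.transport.udp.UDPTransport("127.0.0.42")
presentation = pyuavcan.presentation.Presentation(transport)
# presentation.make_publisher(...), make_subscriber(...), and make_client(...)
# would then provide the pub/sub and service endpoints for the GUI demo.
```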
Perhaps we should sit down and draft basic requirements for the UAVCAN UI interface? Semi-related: you may want to consider joining our weekly call tomorrow if you prefer to discuss things in real time:
Thanks for the response; we'll use Python for the GUI then.
Definitely true
Absolutely. I'd like to ask a few (minor) questions about this at the conference call; after that we might want to start discussing a list of requirements on this forum thread or another thread.
As for the conference call itself, that's at 9 AM for me; my classes start at 9:45. However, this shouldn't be a major issue in terms of scheduling - as long as everything that needs to be discussed regarding the UI interface takes place in the first 45 minutes of the call, it should be fine. If not, I will just have to drop off the call at 9:45.
I'll just voice my opinion that this rather exasperating project would be best built as a VSCode extension, so that we avoid spending time re-solving all the problems of GUI toolkits, cross-platform support, packaging, installation, updating, distribution, windowing-system integration, configuration, customization, etc, etc, etc. It would also allow Yukon features to integrate with the IDE features of VSCode, so that we could relate debugging of code on a µC (see the Cortex-Debug extension, for example) with observation of a bus.
I wouldn't support this because, while VSCode does help with the GUI and distribution, it is hardly helpful with the underwater part of the iceberg. If the Dear ImGui experiment is a success, we would end up with a pure Python application that can initially be distributed like Yakut, which is: `pip install yukon`. It's not the most user-friendly solution for an average Windows user, but many developers already have Python installed, so it should be acceptable for an early stage.
Even if we did go the VSCode way, we would still need the backend, so the rest of the story holds.
Hey, sorry for the lack of recent updates. It's been a little busy on my side, but I got a chance to sit down and finish the basics of the demo today:
The Dear ImGui-based UI is capable of rendering a window UI description with buttons and text right now. Other primitive UI elements can be added very easily. It's more or less implemented the way Pavel suggested: the Button and Text elements each have a uuid and a text attribute; these are unioned into an Element, which is stored in an array inside a UIDescription published on subject-ID 420. When a new description is published, the UI is automatically updated. The node will be removed from the UI automatically when its heartbeat is lost, although this behavior is easily modifiable.
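For reference, the heartbeat-driven removal amounts to something like this (a simplified sketch, assuming a PyUAVCAN presentation-layer subscriber; the 3-second timeout follows the OFFLINE_TIMEOUT convention of uavcan.node.Heartbeat):

```python
import time

last_seen = {}  # node-ID -> monotonic timestamp of the last heartbeat

def on_heartbeat(msg, transfer):
    # Handler for a uavcan.node.Heartbeat subscription; `transfer` carries
    # the transfer metadata, including the source node-ID.
    last_seen[transfer.source_node_id] = time.monotonic()

def prune_stale_nodes(ui_by_node):
    # Called once per frame: drop the UI of any node whose last heartbeat
    # is older than the 3 s OFFLINE_TIMEOUT from uavcan.node.Heartbeat.
    now = time.monotonic()
    for node_id in list(ui_by_node):
        if now - last_seen.get(node_id, 0.0) > 3.0:
            del ui_by_node[node_id]

# Wiring (PyUAVCAN v1.x, with the standard DSDL namespace compiled):
#   sub = presentation.make_subscriber_with_fixed_subject_id(uavcan.node.Heartbeat_1_0)
#   sub.receive_in_background(on_heartbeat)
```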
UI event callback sending seems to work flawlessly (at least from my admittedly non-rigorous testing). We can use this to update the UI description on events, allowing us to build very robust and dynamic node UIs.
After writing a basic UI based on this distributed concept, it seems worthwhile to continue with this idea; the extensibility of the design allows for a lot of exciting possibilities.