Time abstractions for libuavcan v1

In the latest commit to the v1 branch of libuavcan we have ported over the old time.hpp abstraction but have yet to define system time more precisely. We do have monotonic timestamps on received frames, documented as a “hardware supplied monotonic time value”, but no further definition has been provided.

The questions for this thread are:

  1. Do we want to continue to provide system clock and time value abstractions or do we remove time.hpp and go all-in with the C++ chrono abstractions?
  2. Should we require that rx timestamps be related to system time in some manner?


The argument for using this abstraction is that it’s a proven one (it’s about 8 years old now?) designed by professionals to be portable and efficient. The only reason I can think of for not using it is the unknown difficulty of supporting it on embedded systems. I personally have never used chrono on a microcontroller, so before I form an opinion I need to look at the support for it in nanolib and newlib. What other outliers are people aware of that could cause people to avoid libuavcan because of chrono support? Does anyone have experience implementing chrono clock abstractions?

RX timestamps

My current inclination is to avoid any requirements other than that the timestamps on received frames are monotonic and that they don’t roll over (i.e. 64-bit == geologic timescales == rollover cannot happen, because the hardware itself will have disintegrated into its constituent components before this would occur). Is there any other reason to put further constraints on these timestamps? Remember that requiring correlation between system time and hardware timestamps could severely limit the ability to support some platforms, like Linux, where the application using libuavcan has very little control over this.

AFAIK the chrono library is header-only and can be trivially leveraged in a bare-metal implementation. We use it on some of our resource-limited systems (512 KB flash, 128 KB RAM) with GCC/C++17, but I am not sure whether this experience is transferable to other embedded C++ compilers.

The main question, as I see it, is whether we should consider the system time a completely independent quantity, or model it as a function of monotonic time with defined rate and phase offsets. The latter seems sensible because it greatly simplifies time management in the library, requiring us to deal with just one time value (monotonic) and adding the system time only at the highest layers of abstraction (and only if necessary). There may be a problem with GNU/Linux, though, because it does not seem to be possible to accurately timestamp frames using monotonic time (only system time), and it is not possible to reliably determine the rate and phase differences between the clocks. So in the interest of platform compatibility we might want to keep the old two-clock approach, unless someone has an idea how to reconcile the one-clock solution with high-level OSes.