Big-endian vs. little-endian in the context of bit-level encoding

FWIW, GCC and Clang on AMD64 allocate bit-fields starting from the least significant bit of the storage unit, i.e. little-endian bit order:

#include <stdio.h>

struct A
{
    unsigned a : 2;   /* bits 0-1 with LSB-first allocation */
    unsigned b : 3;   /* bits 2-4 */
    unsigned c : 1;   /* bit 5 */
    unsigned d : 16;  /* bits 6-21 */
};

union U
{
    unsigned bits;
    struct A a;
};

int main()
{
    union U u = {0};
    u.a.a = 3;
    u.a.c = 1;
    u.a.d = 0b1111110011001100;
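    /* With fields packed from the least significant bit (as GCC/Clang do here),
       bits should equal 3 | (1u << 5) | (0xFCCCu << 6) == 0x3F3323. */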
    printf("%x\n", (unsigned)u.bits);
    return 0;
}

This prints 3f3323, i.e. 0b1111110011001100100011, which reads from the most significant bit down as d=1111110011001100, c=1, b=000, a=11: the first-declared field a ends up in the least significant bits.

By contrast, in Python, numpy.packbits() (and its counterpart numpy.unpackbits()) defaults to big-endian bit order.
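
A minimal sketch of that default and the bitorder parameter that overrides it (assuming NumPy 1.17+, where bitorder is available):

import numpy as np

bits = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=np.uint8)

# Default bitorder='big': the first array element becomes the most significant bit.
print(np.packbits(bits))                     # [192]  == 0b11000000
# bitorder='little': the first array element becomes the least significant bit.
print(np.packbits(bits, bitorder='little'))  # [3]    == 0b00000011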