This is the big change needed to allow for parallel revision
processing. Previously, a lock was used to prevent this, since parallel
transactions could deadlock if each inserted data that the other then
went to insert.
By defining the order in which inserts happen, both in terms of the
order of tables and the order of rows within each table, this change
should guarantee that there won't be deadlocks.
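Roughly, the idea is something like the following Guile sketch. It's
not the actual code: the table order, table names and insert-row! are
invented stand-ins, just to show the ordering.

  ;; Every transaction writes to tables in this fixed order, and sorts
  ;; the rows it inserts within each table, so two concurrent
  ;; transactions always take locks in a consistent order.
  (define tables-in-insert-order
    '(packages derivations derivation_outputs))

  (define (insert-row! table row)
    ;; Stand-in for the real database insert.
    (format #t "INSERT INTO ~a: ~a\n" table row))

  (define (insert-data data)
    ;; DATA is an alist mapping a table name to the rows to insert.
    (for-each
     (lambda (table)
       (for-each (lambda (row)
                   (insert-row! table row))
                 (sort (or (assq-ref data table) '())
                       string<?)))
     tables-in-insert-order))

  (insert-data
   '((derivations . ("bar.drv" "foo.drv"))
     (packages . ("guile" "emacs"))))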
I'm also hoping this change will address whatever issue was causing some
derivation data to be missing from the database.
This needs rethinking: it's not feasible to maintain tests if you have
to struggle to get backtraces when they fail, and they rely on fragile
and broken mocking.
While processing a revision, it would be good to also record what
systems and targets are in the platforms, so it's clear what data is
missing, but that can be added later.
Just have one fiber at the moment, but this will enable using fibers for
parallelism in the future.
Fibers seemed to cause problems with the logging setup, which was a bit
odd in the first place. So move logging to the parent process, which is
better anyway.
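The shape this takes is roughly the sketch below, assuming Guile
Fibers; process-jobs, process-job and the channel handling are
illustrative names, not the real implementation.

  (use-modules (fibers)
               (fibers channels))

  (define (process-job job)
    ;; Stand-in for the real per-job work.
    (format #t "processing ~a\n" job))

  (define (process-jobs jobs)
    (run-fibers
     (lambda ()
       (let ((channel (make-channel)))
         ;; Just one worker fiber for now; spawning several of these is
         ;; all that should be needed to process jobs in parallel later.
         (spawn-fiber
          (lambda ()
            (let loop ()
              (let ((job (get-message channel)))
                (unless (eq? job 'stop)
                  (process-job job)
                  (loop))))))
         (for-each (lambda (job)
                     (put-message channel job))
                   jobs)
         (put-message channel 'stop)))))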
Generating system test derivations is difficult, since you generally
need to do potentially expensive builds for the system you're
generating the system tests for. You might not want to disable grafts,
for instance, because you might be trying to test whatever the test is
testing in the context of grafts being enabled.
I'm looking at skipping the system tests on data.guix.gnu.org, because
they're not used and are quite expensive to compute.
This should speed up processing new revisions and reduce the latency
between finding out about new revisions and processing them, as well as
help manage memory usage by processing each job in a process that then
exits.
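The per-job part is roughly this shape: a hedged sketch only, where
process-job and process-job-in-child are hypothetical names rather than
the actual code.

  (use-modules (ice-9 match))

  (define (process-job job)
    ;; Stand-in for the real job processing.
    (format #t "child ~a processing ~a\n" (getpid) job))

  (define (process-job-in-child job)
    ;; Run JOB in a child process so the memory it allocates is given
    ;; back to the operating system as soon as the job is done.
    (match (primitive-fork)
      (0
       ;; Child: do the work, then exit.
       (process-job job)
       (primitive-exit 0))
      (pid
       ;; Parent: wait for the child before picking up the next job.
       (waitpid pid))))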