The packages comparison was getting confused by differences in the
derivations, so split the data that's used in order to make the
comparison more sensible. This resolves an issue comparing 8dd723f5…
and 365892e9…, which coincided with the fix for importing foreign
architecture derivations, meaning that a whole lot of new derivations
appeared in the database. Prior to these changes, it appeared as
though every package was new; with these changes, the list is more
sensible.

This seems to work better for generating both the non-cross and cross
derivations. Previously, the package-transitive-supported-systems
approach didn't generate some cross derivations.

Use the second argument to package-transitive-supported-systems to
correctly identify the different bootstrap path for systems other than
x86_64-linux and i686-linux. The previous implementation did work, but
only up until a merge of core-updates changed the bootstrap approach.
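
For illustration, a call passing the system as the second argument
might look like this (just a sketch; the hello package and the
aarch64-linux system are examples, not the actual call sites):

  (use-modules (guix packages)
               (gnu packages base))

  ;; Compute the supported systems using the bootstrap path for the
  ;; given system, rather than for (%current-system).
  (package-transitive-supported-systems hello "aarch64-linux")
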
If the file exists in the local store, then read it and add an entry
to the derivation_source_file_nars table. This will help to fill in
the missing entries, as currently entries are only added when the
derivation source file isn't already in the database at the point the
load new revision job runs.
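
As a rough sketch of the reading side (the helper name and the use of
write-file from (guix serialization) are assumptions, not the exact
code):

  (use-modules (guix serialization)
               (ice-9 binary-ports)
               (srfi srfi-11))

  ;; Hypothetical helper: serialize FILE from the local store as a
  ;; nar, returning the bytes to record in the
  ;; derivation_source_file_nars table, or #f if the file is missing.
  (define (file->nar-bytevector file)
    (and (file-exists? file)
         (let-values (((port get-bytevector)
                       (open-bytevector-output-port)))
           (write-file file port)
           (get-bytevector))))
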
It's more accurate to describe the specifics of the relevant data here
through terms like "matching" and "not matching", since a statement
that something built reproducibly needs to be made alongside the test
conditions. So just say that the build outputs matched, or didn't
match, as this is more descriptive of the data available.

This is to aid rendering of narinfo files. They're requested with the
path /HASH.narinfo, so this index can be used to quickly find the
relevant derivation.
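
The exact index definition isn't reproduced here, but since store file
names have the form /gnu/store/HASH-name.drv, with the 32 character
hash starting at character 12, a lookup along these lines could make
use of an expression index over that substring (an illustrative sketch
using squee; the query and function name are assumptions):

  (use-modules (squee))

  ;; Hypothetical lookup: find the derivation whose hash part matches
  ;; the HASH from a /HASH.narinfo request.
  (define (find-derivation-by-hash conn hash)
    (exec-query
     conn
     "SELECT id FROM derivations
      WHERE substring(file_name FROM 12 FOR 32) = $1"
     (list hash)))
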
This effectively duplicates the behaviour in Guix for serializing
derivations, but uses the database representation in the Guix Data
Service rather than the records Guix uses.
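
As a rough illustration of the textual format being produced (the
list-based row representation here is hypothetical; Guix itself works
from <derivation> records):

  (use-modules (ice-9 match))

  ;; Hypothetical sketch: serialize one output entry in the
  ;; "Derive(...)" text format used by .drv files, starting from a
  ;; list as it might come out of a database query.
  (define (serialize-derivation-output row)
    (match row
      ((name path hash-algo hash)
       (string-append
        "(\"" name "\",\"" path "\",\"" hash-algo "\",\"" hash "\")"))))
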
Previously, the hashes were stored nix-base32 encoded, but the
representation in the derivations is base16, so it doesn't make sense
to use a different representation in the database.
Therefore, add some code that runs before the start of each job to convert the
data in the database. It was easier to do this in Guile with the existing
support for working with these bytevector representations. After some
migration period, the code for converting the old hashes can be removed.
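
The core of the conversion might look something like this (a minimal
sketch, assuming the old values are nix-base32 strings; the function
name is made up, but the (guix base32) and (guix base16) modules do
provide these conversions):

  (use-modules (guix base16)
               (guix base32))

  ;; Decode the old nix-base32 string to a bytevector, then re-encode
  ;; it as base16 to match the representation in the derivations.
  (define (nix-base32->base16 str)
    (bytevector->base16-string
     (nix-base32-string->bytevector str)))
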
This applies both to the code fetching the data from the database, and
to the formatted and detail outputs. It corrects an error in the
formatted output for derivations where inputs would be duplicated.
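
Purely as an illustration of the kind of fix involved (the actual
change may differ), duplicate entries can be dropped from the inputs
list before rendering:

  (use-modules (srfi srfi-1))

  ;; Illustrative only: remove duplicate input entries prior to
  ;; formatting the output.
  (define (deduplicate-inputs inputs)
    (delete-duplicates inputs))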