These changes were motivated by switching to a mechanism for loading data that
doesn't depend on the big advisory lock that prevents more than one revision
from being processed at a time.
Since INSERT ... RETURNING id; is used, an insert can block if another
transaction inserts the same data, and then fail with an error when that
transaction commits. The solution is to use ON CONFLICT DO NOTHING, but then
you have to handle the case where the INSERT doesn't return an id because the
other transaction already inserted the row.
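
A minimal sketch of this pattern, assuming the database is accessed through
guile-squee's exec-query; the table and column names here are made up for
illustration:

  (use-modules (ice-9 match)
               (squee))

  (define (insert-or-find-id conn name)
    ;; Try to insert; with ON CONFLICT DO NOTHING no row (and no id) comes
    ;; back if another transaction already inserted the same data.
    (match (exec-query
            conn
            "INSERT INTO names (name) VALUES ($1)
             ON CONFLICT DO NOTHING RETURNING id"
            (list name))
      (((id)) id)
      (()
       ;; No id was returned, so the row already exists; look it up instead.
       (match (exec-query conn "SELECT id FROM names WHERE name = $1"
                          (list name))
         (((id)) id)))))
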
This commit rewrites insert-missing-data-and-return-all-ids to work as
described above, to be more efficient in detecting existing data, and to use
more vectors. Other utilities for inserting data are added as well.
This is currently happening because some linters try to evaluate parts of
packages for cross-building to aarch64-linux-gnu, but not all packages support
that and some crash in this case.
I'm not quite sure what the correct behaviour should be, but maybe the data
service needs to try to handle these crashes rather than failing to process
the entire revision.
This took a while to find, as process-job would just get stuck; it wasn't
directly related to any particular change, just that more fibers increased the
chance of hitting it.
This commit includes many of the changes I made while debugging.
If there's lots of contention for the resource pool, there will be many
waiters, so telling all of them to retry whenever a resource is returned seems
wasteful. This commit adds a new option (assume-reliable-waiters?) which has
the resource pool try to give a returned resource to the oldest waiter; if
this fails, it falls back to the old behaviour of telling all waiters to
retry.
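
The following is a conceptual sketch of that behaviour, not the real resource
pool code: waiters are modelled as a FIFO queue of reply channels, and all of
the procedure names are illustrative.

  (use-modules (ice-9 q)
               (fibers channels)
               (fibers operations)
               (fibers timers))

  (define (handle-returned-resource resource waiters assume-reliable-waiters?
                                    make-resource-available
                                    tell-all-waiters-to-retry)
    (define (try-oldest-waiter)
      (and (not (q-empty? waiters))
           (let ((reply-channel (deq! waiters)))
             (or (perform-operation
                  (choice-operation
                   ;; Hand the resource straight to the waiter if it's ready
                   (wrap-operation (put-operation reply-channel resource)
                                   (lambda () #t))
                   ;; ... otherwise give up more or less immediately
                   (wrap-operation (sleep-operation 0)
                                   (lambda () #f))))
                 (begin
                   ;; Put the waiter back at the front of the queue
                   (q-push! waiters reply-channel)
                   #f)))))

    (if (and assume-reliable-waiters?
             (try-oldest-waiter))
        #t
        (begin
          ;; Old behaviour: make the resource available again and tell every
          ;; waiter to retry its checkout.
          (make-resource-available resource)
          (tell-all-waiters-to-retry))))
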
I think this was happening when the checkout missed the
resource-pool-retry-checkout reply from the resource pool. Handle this case by
periodically retrying, with a configurable timeout.
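
A rough sketch of that kind of retry loop using Fibers operations; the
procedure names and the default timeout value are assumptions, not the actual
code.

  (use-modules (ice-9 match)
               (fibers channels)
               (fibers operations)
               (fibers timers))

  (define* (checkout-with-retry send-checkout-request reply-channel
                                #:key (retry-timeout 5))
    ;; Ask for a resource, and if no reply arrives within retry-timeout
    ;; seconds, assume the reply was missed and ask again.
    (let loop ()
      (send-checkout-request)
      (match (perform-operation
              (choice-operation
               (wrap-operation (get-operation reply-channel)
                               (lambda (reply) reply))
               (wrap-operation (sleep-operation retry-timeout)
                               (lambda () 'timeout))))
        ('timeout (loop))
        (reply reply))))
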
The new fibers-map uses the same batching approach that fibers-for-each uses,
and fibers-map-with-progress allows progress to be tracked while the map is
happening.
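
This isn't the actual fibers-map implementation, just an illustration of the
batching idea built on plain Fibers primitives; the names (example-fibers-map,
split-into-batches) and the default batch size are made up.

  (use-modules (fibers)
               (fibers channels)
               (srfi srfi-1))

  (define (split-into-batches lst n)
    ;; Split LST into sublists of at most N elements.
    (if (null? lst)
        '()
        (call-with-values
            (lambda () (split-at lst (min n (length lst))))
          (lambda (batch rest)
            (cons batch (split-into-batches rest n))))))

  (define* (example-fibers-map proc lst #:key (batch-size 20))
    ;; Spawn one fiber per item in the current batch and collect the results
    ;; over channels, so at most batch-size fibers run at once and the result
    ;; order matches the input order.
    (append-map
     (lambda (batch)
       (map get-message
            (map (lambda (item)
                   (let ((reply (make-channel)))
                     (spawn-fiber
                      (lambda ()
                        (put-message reply (proc item))))
                     reply))
                 batch)))
     (split-into-batches lst batch-size)))

Like anything using Fibers, this would be called from within run-fibers or an
existing fiber.
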