This was motivated by wanting to be able to completely clean up
resource pools, which meant removing their use of fiberize, since
fiberize currently has no destroy mechanism.
As part of this, there's a new parallelism limiter mechanism that uses
resource pools rather than fibers, along with a fixed-size resource
pool.
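
As a rough illustration of the idea (not the actual implementation,
and using only plain Fibers channels and operations rather than the
new resource pool code), a parallelism limiter can be thought of as a
fixed pool of slots managed by a single fiber: callers block until a
slot is free, so at most that many pieces of work run at once. All
names below are made up for the sketch, and it has to run inside
run-fibers.

  (use-modules (fibers)
               (fibers channels)
               (fibers operations))

  (define (make-parallelism-limiter slots)
    (define acquire-ch (make-channel))
    (define release-ch (make-channel))
    ;; One fiber owns the count of free slots and serves both channels
    (spawn-fiber
     (lambda ()
       (let loop ((free slots))
         (loop
          (+ free
             (if (zero? free)
                 ;; every slot is taken, so only accept releases
                 (begin (get-message release-ch) 1)
                 ;; otherwise hand out a slot or accept a release
                 (perform-operation
                  (choice-operation
                   (wrap-operation (put-operation acquire-ch 'slot)
                                   (lambda args -1))
                   (wrap-operation (get-operation release-ch)
                                   (lambda args 1))))))))))
    ;; Returned procedure: take a slot, run the thunk, give the slot back
    (lambda (thunk)
      (get-message acquire-ch)
      (dynamic-wind
        (lambda () #t)
        thunk
        (lambda () (put-message release-ch 'slot)))))

Something like ((make-parallelism-limiter 4) (lambda () (do-work)))
would then only run do-work once one of the four slots is available.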
The tests now drain? and destroy the resource pools to check that
cleanup works.
Instead of batching the list items, the batch size becomes a
parallelism limit and up to that many fibers run at once. When the
processing of one list item finishes, another starts immediately,
rather than waiting for the whole batch to finish.
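
To make the behavioural difference concrete, here's a sketch of a
parallelism-limited for-each built on the limiter sketched above
(again made-up names, not the real fibers-for-each). Unlike the real
change, which caps the number of fibers, this version spawns a fiber
per item and only caps how many are processing at once, but the
scheduling effect is the same: as soon as one item's slot is released,
the next blocked item proceeds, with no batch boundary.

  ;; Depends on make-parallelism-limiter from the sketch above
  (define (limited-for-each proc items parallelism-limit)
    (let ((limit (make-parallelism-limiter parallelism-limit))
          (done  (make-channel)))
      (for-each
       (lambda (item)
         (spawn-fiber
          (lambda ()
            ;; blocks here until one of the slots is free
            (limit (lambda () (proc item)))
            (put-message done item))))
       items)
      ;; block until every item has been processed
      (for-each (lambda (ignored) (get-message done)) items)))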
These changes also make the fibers-map and fibers-for-each operations
work with vectors as well as lists.
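
For example (argument order and the result type for the vector case
are assumptions about the fibers-map interface, not something stated
above):

  (fibers-map 1+ (list 1 2 3))    ;; list in, list out (assumed)
  (fibers-map 1+ (vector 1 2 3))  ;; vector in, vector out (assumed)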