this improves upload performance on s3-backed volumes
noktuas reported on discord that upload performance was
unexpectedly poor when writing to an s3 bucket through a JuiceFS
fuse-mount; copyparty only reached 1.5 MiB/s, while a regular
filecopy averaged 30+ MiB/s
the issue was that s3 does not support sparse files, so copyparty
would fall back to sequential uploading and also disable fpool
(file-handle pooling), causing JuiceFS to repeatedly commit the
same 5 MiB range to the storage provider as each chunk arrived
from the client
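
for context, a minimal sketch (not copyparty's actual detection
code; the function name is made up) of how a filesystem can be
probed for sparse-file support on posix:

  import os, tempfile

  def supports_sparse(dirpath, size=1 << 20):
      # hypothetical probe: extend a temp file without writing any
      # data, then check whether the fs actually allocated blocks
      fd, path = tempfile.mkstemp(dir=dirpath)
      try:
          os.ftruncate(fd, size)  # grows the file; no data written
          st = os.fstat(fd)
          # st_blocks is in 512-byte units (posix-only); a sparse fs
          # leaves the hole unallocated, so far fewer blocks than size
          return st.st_blocks * 512 < size
      finally:
          os.close(fd)
          os.unlink(path)

the new volflag sidesteps this kind of detection entirely and
forces sparse mode on regardless of what the fs claims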
by forcing the use of sparse files, s3 adapters such as JuiceFS and
geesefs will "only" write the entire file to s3 *twice*: first the
full filesize of zerobytes (depending on the adapter, hopefully
gzip-compressed to reduce the necessary bandwidth), and then the
actual file data in an adapter-specific chunksize
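
as a rough illustration of the resulting write pattern (a minimal
sketch, not copyparty's actual up2k code; names are made up):

  import os

  def upload(path, total_size, chunks):
      # chunks: iterable of (offset, bytes), possibly out of order
      fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
      try:
          # preallocate the final filesize up-front; a sparse fs does
          # this for free, but an s3 adapter materializes it as the
          # full-filesize pass of zerobytes described above
          os.ftruncate(fd, total_size)
          for ofs, data in chunks:
              # each chunk lands at its final offset exactly once,
              # instead of re-committing the same range repeatedly
              os.pwrite(fd, data, ofs)
      finally:
          os.close(fd)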
with this volflag, copyparty appears to reach the full expected speed
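
usage sketch (the volflag name and the paths here are assumptions;
see --help-flags for the authoritative list), enabling it on a
volume backed by a JuiceFS mount:

  python3 copyparty-sfx.py -v /mnt/jfs/pub:pub:rw:c,sparse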