author     Anatol Pomozov <anatol.pomozov@gmail.com>    2020-05-12 15:26:38 -0700
committer  Allan McRae <allan@archlinux.org>            2020-07-07 21:35:35 +1000
commit     f078c2d3bcb72bafda0dce5fe2c9418ca462bb1a (patch)
tree       4442ea1911716f399f514f1b7d404f0ce62f788f /scripts/libmakepkg/util.sh.in
parent     6b9c1b4d54225b4c2808b5fadc2b6e779ae1916a (diff)
Move signature payload creation to download engine
Until now the caller of the ALPM download functionality has been in charge of payload creation for both the main file (e.g. *.pkg) and the accompanying *.sig file. One advantage of this approach is that all payloads are independent and can be fetched in parallel, exploiting the maximum level of download parallelism.

To build the *.sig file URL we have been using simple string concatenation: $requested_url + ".sig". Unfortunately there are cases where this does not work. For example, an archlinux.org "Download From Mirror" link looks like https://www.archlinux.org/packages/core/x86_64/bash/download/ and gets redirected to some mirror. But if we append ".sig" to the end of that link and try to download it, archlinux.org returns a 404 error.

To overcome this issue we need to follow redirects for the main payload first, find the final URL and only then append the '.sig' suffix. This implies two things:

- the signature payload initialization needs to be moved to dload.c, as that is the place where we have access to the resolved URL
- *.sig is downloaded serially with the main payload, which reduces the level of parallelism

Move *.sig payload creation to dload.c. Once the main payload is fetched successfully, we check whether the caller asked to download the accompanying signature. If so, create a new payload and add it to mcurl. The *.sig payload does not use the server list of the main payload and thus does not support mirror failover; the *.sig file comes from the same server as the main payload.

Refactor the event loop in curl_multi_download_internal() a bit. Instead of relying on curl_multi_check_finished_download() to return the number of new payloads, we simply rerun the loop iteration one more time to check whether there are any active downloads left.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
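The following is a minimal, self-contained sketch of the technique the commit describes, written against plain libcurl rather than pacman's actual dload.c code: follow the redirect chain for the main payload, read the resolved URL via CURLINFO_EFFECTIVE_URL, and only then append ".sig" to build the signature URL. The helper name fetch_with_sig_url() and the use of a HEAD request are illustrative assumptions, not code from the commit.

    /* Sketch only: follow redirects for the main URL, then build the
     * signature URL from the resolved URL instead of the requested one.
     * fetch_with_sig_url() is a hypothetical helper, not pacman code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <curl/curl.h>

    static int fetch_with_sig_url(const char *url)
    {
        CURL *curl = curl_easy_init();
        char *effective = NULL;
        char *sig_url = NULL;
        CURLcode rc;

        if(!curl) {
            return -1;
        }

        curl_easy_setopt(curl, CURLOPT_URL, url);
        /* follow the redirect chain so we end up at the real mirror URL */
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        /* for this sketch a HEAD request is enough; real code downloads the body */
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);

        rc = curl_easy_perform(curl);
        if(rc != CURLE_OK) {
            fprintf(stderr, "download failed: %s\n", curl_easy_strerror(rc));
            curl_easy_cleanup(curl);
            return -1;
        }

        /* the final URL after all redirects have been followed */
        curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &effective);

        /* append ".sig" to the resolved URL, not the requested one */
        sig_url = malloc(strlen(effective) + sizeof(".sig"));
        if(!sig_url) {
            curl_easy_cleanup(curl);
            return -1;
        }
        sprintf(sig_url, "%s.sig", effective);

        printf("signature would be fetched from: %s\n", sig_url);

        free(sig_url);
        curl_easy_cleanup(curl);
        return 0;
    }

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        fetch_with_sig_url(
            "https://www.archlinux.org/packages/core/x86_64/bash/download/");
        curl_global_cleanup();
        return EXIT_SUCCESS;
    }

In the real implementation the signature payload is created inside dload.c after the main transfer completes and is then added to the same curl multi handle, which is why it is fetched serially and from the same server as the main payload.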
Diffstat (limited to 'scripts/libmakepkg/util.sh.in')
0 files changed, 0 insertions, 0 deletions