Compare commits

...

17 Commits

Author SHA1 Message Date
ed
7781e0529d nogil: remove -j (multiprocessing option)
in case cpython's free-threading / nogil performance proves to
be good enough that the multiprocessing option can be removed,
this is roughly how you'd do it  (not everything's been tested)

but while you'd expect this change to improve performance,
it somehow doesn't, not even measurably so

as the performance gain is negligible, the only win here is
simplifying the code, and maybe less chance of bugs in the future

as a result, this probably won't get merged anytime soon
(if at all), and would probably warrant bumping to v2
2024-03-24 21:53:28 +00:00
ed
cb99fbf442 update pkgs to 1.11.2 2024-03-23 17:53:19 +00:00
ed
bccc44dc21 v1.11.2 2024-03-23 17:24:36 +00:00
ed
2f20d29edd idp: mention lack of volume persistence 2024-03-23 16:35:45 +00:00
ed
c6acd3a904 add option --s-rd-sz (socket read size):
counterpart of `--s-wr-sz` which existed already

the default (256 KiB) appears optimal in the most popular scenario
(linux host with storage on local physical disk, usually NVMe)

was previously 32 KiB, so large uploads should now use 17% less CPU

also adds sanity-checks for the values of `--iobuf`, `--s-rd-sz`, `--s-wr-sz`

also adds file-overwrite feature for multipart posts
2024-03-23 16:35:14 +00:00
ed
2b24c50eb7 add option --iobuf (file r/w buffersize):
the default (256 KiB) appears optimal in the most popular scenario
(linux host with storage on local physical disk, usually NVMe)

was previously a mix of 64 and 512 KiB;
now the same value is enforced everywhere

download-as-tar is now 20% faster with the default value
2024-03-23 16:17:40 +00:00
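the effect of the two buffer-size commits above can be sketched with a minimal chunked-copy loop (illustrative only, not copyparty's actual I/O code): larger chunks mean fewer read/write calls for the same payload, which is where the CPU saving comes from

```python
import io

def copy_stream(src, dst, iobuf=256 * 1024):
    """copy src to dst in iobuf-sized chunks; returns the number of reads"""
    reads = 0
    while True:
        buf = src.read(iobuf)
        if not buf:
            break
        dst.write(buf)
        reads += 1
    return reads

data = b"x" * (1024 * 1024)  # 1 MiB payload
small = copy_stream(io.BytesIO(data), io.BytesIO(), 32 * 1024)   # old default
large = copy_stream(io.BytesIO(data), io.BytesIO(), 256 * 1024)  # new default
# 8x larger chunks -> 8x fewer read/write calls (32 vs 4 for this payload)
```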
ed
d30ae8453d idp: precise expansion of ${u} (fixes #79);
it is now possible to grant access to users other than `${u}`
(the user which the volume belongs to)

previously, permissions did not apply correctly to IdP volumes due to
the way `${u}` and `${g}` were expanded, which was a funky iteration
over all known users/groups instead of... just expanding them?

also adds another sanity-check: a volume's URL must contain a
`${u}` to be allowed to mention `${u}` in the accs list, and
similarly for `${g}` / `@${g}` since users can be in multiple groups
2024-03-21 20:10:27 +00:00
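the new sanity-check described above can be sketched like this (a hypothetical standalone function, not copyparty's actual API): a volume may only grant access to `${u}` / `@${g}` if its URL actually contains that placeholder, since there is otherwise nothing to expand it from

```python
def check_idp_accs(vol_url, accs):
    """reject ${u} / @${g} in an accs list unless the volume URL has the placeholder"""
    for uname in accs:
        if uname == "${u}" and "${u}" not in vol_url:
            raise ValueError("volume url must contain ${u} to use ${u} in accs")
        if uname == "@${g}" and "@${g}" not in vol_url:
            raise ValueError("volume url must contain @${g} to use @${g} in accs")

check_idp_accs("/home/${u}", ["${u}", "ed"])  # ok: url contains ${u}
```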
ed
8e5c436bef black + isort 2024-03-21 18:51:23 +00:00
ed
f500e55e68 update pkgs to 1.11.1 2024-03-18 17:41:43 +00:00
ed
9700a12366 v1.11.1 2024-03-18 17:09:56 +00:00
ed
2b6a34dc5c sfx: lexically comparable git-build versions
if building from an untagged git commit, the third value in the
VERSION tuple (in __version__.py) was a string instead of an int,
causing the version to compare and sort incorrectly
2024-03-18 17:04:49 +00:00
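the sorting bug above is easy to reproduce: with a string in the third slot of the VERSION tuple (the exact suffix format here is illustrative), python3 refuses to order the tuples at all, whereas all-int tuples compare correctly

```python
def cmp_ok(a, b):
    # returns False if the tuples cannot be ordered at all
    try:
        return (a < b) or (a >= b)
    except TypeError:
        return False

# mixed int/str tuples raise TypeError on comparison -> unsortable
assert not cmp_ok((1, 11, "1"), (1, 11, 2))

# all-int tuples (the fix) compare and sort as expected
assert (1, 11, 1) < (1, 11, 2) < (1, 12, 0)
```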
ed
ee80cdb9cf docs: real-ip (with or without cloudflare) 2024-03-18 16:30:51 +00:00
ed
2def4cd248 fix linter warnings + a test 2024-03-18 15:25:10 +00:00
ed
0287c7baa5 fix unpost when there is no rootfs;
the volflags of `/` were used to determine if e2d was enabled,
which is wrong in two ways:

* if there is no `/` volume, it would be globally disabled

* if `/` has e2d, but another volume doesn't, it would
   erroneously think unpost was available, which is not an
   issue unless that volume used to have e2d enabled AND
   there is stale data matching the client's IP

3f05b665 (v1.11.0) had an incomplete fix for the stale-data part of
the above, which also introduced the other issue
2024-03-18 06:15:32 +01:00
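the corrected logic can be sketched roughly like this (names and the flat dict-of-volumes layout are illustrative, not copyparty's actual data structures): decide unpost availability from the volflags of the volume that owns the path, instead of whatever `/` happens to have

```python
def unpost_allowed(volumes, path):
    """volumes: {mountpoint: volflags}; True if the owning volume has e2d"""
    best = None
    for vpath, flags in volumes.items():
        # match the most specific mountpoint that prefixes the path
        pre = vpath.rstrip("/") + "/"
        if (path + "/").startswith(pre):
            if best is None or len(vpath) > len(best[0]):
                best = (vpath, flags)
    # no matching volume (e.g. no rootfs) -> no unpost, instead of a crash
    return bool(best and best[1].get("e2d"))

vols = {"/pub": {"e2d": True}, "/tmp": {}}
```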
ed
51d31588e6 parse xff before deciding to reject a connection
this commit partially fixes the following issue:
if a client manages to escape real-ip detection, copyparty will
try to ban the reverse-proxy instead, effectively banning all clients

this can happen if the configuration says to obtain client real-ip
from a cloudflare header, but the server is not configured to reject
connections from non-cloudflare IPs, so a scanner will eventually
hit the server IP with malicious-looking requests and trigger a ban

copyparty will now continue to process requests from banned IPs until
the header has been parsed and the real-ip has been obtained (or not),
causing an increased server load from malicious clients

assuming the `--xff-src` and `--xff-hdr` config is correct,
this issue should no longer be hitting innocent clients

the old behavior of immediately rejecting a banned IP address
can be re-enabled with the new option `--early-ban`
2024-03-17 02:36:03 +00:00
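the new ordering can be sketched like so (a rough illustration, not copyparty's implementation): resolve the real client IP from the forwarded header first, but only when the connection really comes from the trusted proxy range, and only then consult the ban list

```python
import ipaddress

def effective_ip(conn_ip, headers, xff_src, xff_hdr="x-forwarded-for"):
    # only trust the forwarded header if the connection comes from the
    # reverse-proxy network; otherwise anyone could spoof the header
    if ipaddress.ip_address(conn_ip) in ipaddress.ip_network(xff_src):
        fwd = headers.get(xff_hdr)
        if fwd:
            return fwd.split(",")[0].strip()
    return conn_ip

def allowed(conn_ip, headers, banned, xff_src):
    # parse xff BEFORE consulting the ban list, so bans apply to the
    # real client instead of the reverse-proxy address
    return effective_ip(conn_ip, headers, xff_src) not in banned

banned = {"203.0.113.7"}
# proxy at 10.0.0.2 forwarding for a banned scanner: ban hits the client
assert not allowed("10.0.0.2", {"x-forwarded-for": "203.0.113.7"}, banned, "10.0.0.0/8")
# same proxy forwarding for an innocent client: still allowed
assert allowed("10.0.0.2", {"x-forwarded-for": "198.51.100.5"}, banned, "10.0.0.0/8")
```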
ed
32553e4520 fix building mtp deps on python 3.12 2024-03-16 13:59:08 +00:00
ed
211a30da38 update pkgs to 1.11.0 2024-03-15 21:34:29 +00:00
42 changed files with 601 additions and 714 deletions

View File

@@ -75,6 +75,7 @@ turn almost any device into a file server with resumable uploads/downloads using
 * [themes](#themes)
 * [complete examples](#complete-examples)
 * [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
+* [real-ip](#real-ip) - teaching copyparty how to see client IPs
 * [prometheus](#prometheus) - metrics/stats can be enabled
 * [packages](#packages) - the party might be closer than you think
 * [arch package](#arch-package) - now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes)
@@ -357,6 +358,9 @@ upgrade notes
 * firefox refuses to connect over https, saying "Secure Connection Failed" or "SEC_ERROR_BAD_SIGNATURE", but the usual button to "Accept the Risk and Continue" is not shown
   * firefox has corrupted its certstore; fix this by exiting firefox, then find and delete the file named `cert9.db` somewhere in your firefox profile folder
+* the server keeps saying `thank you for playing` when I try to access the website
+  * you've gotten banned for malicious traffic! if this happens by mistake, and you're running a reverse-proxy and/or something like cloudflare, see [real-ip](#real-ip) on how to fix this
 * copyparty seems to think I am using http, even though the URL is https
   * your reverse-proxy is not sending the `X-Forwarded-Proto: https` header; this could be because your reverse-proxy itself is confused. Ensure that none of the intermediates (such as cloudflare) are terminating https before the traffic hits your entrypoint
@@ -597,7 +601,7 @@ this initiates an upload using `up2k`; there are two uploaders available:
 * `[🎈] bup`, the basic uploader, supports almost every browser since netscape 4.0
 * `[🚀] up2k`, the good / fancy one

-NB: you can undo/delete your own uploads with `[🧯]` [unpost](#unpost)
+NB: you can undo/delete your own uploads with `[🧯]` [unpost](#unpost) (and this is also where you abort unfinished uploads, but you have to refresh the page first)

 up2k has several advantages:
 * you can drop folders into the browser (files are added recursively)
@@ -1287,6 +1291,8 @@ you may experience poor upload performance this way, but that can sometimes be f
 someone has also tested geesefs in combination with [gocryptfs](https://nuetzlich.net/gocryptfs/) with surprisingly good results, getting 60 MiB/s upload speeds on a gbit line, but JuiceFS won with 80 MiB/s using its built-in encryption

+you may improve performance by specifying larger values for `--iobuf` / `--s-rd-sz` / `--s-wr-sz`

 ## hiding from google
@@ -1383,6 +1389,15 @@ example webserver configs:
 * [apache2 config](contrib/apache/copyparty.conf) -- location-based

+### real-ip
+
+teaching copyparty how to see client IPs when running behind a reverse-proxy, or a WAF, or another protection service such as cloudflare
+
+if you (and maybe everybody else) keep getting a message that says `thank you for playing`, then you've gotten banned for malicious traffic. This ban applies to the IP address that copyparty *thinks* identifies the shady client -- so, depending on your setup, you might have to tell copyparty where to find the correct IP
+
+for most common setups, there should be a helpful message in the server-log explaining what to do, but see [docs/xff.md](docs/xff.md) if you want to learn more, including a quick hack to **just make it work** (which is **not** recommended, but hey...)

 ## prometheus

 metrics/stats can be enabled at URL `/.cpr/metrics` for grafana / prometheus / etc (openmetrics 1.0.0)
@@ -1727,6 +1742,7 @@ below are some tweaks roughly ordered by usefulness:
 * `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
   * and also makes thumbnails load faster, regardless of e2d/e2t
 * `--no-hash .` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
+* if your volumes are on a network-disk such as NFS / SMB / s3, specifying larger values for `--iobuf` and/or `--s-rd-sz` and/or `--s-wr-sz` may help; try setting all of them to `524288` or `1048576` or `4194304`
 * `--no-htp --hash-mt=0 --mtag-mt=1 --th-mt=1` minimizes the number of threads; can help in some eccentric environments (like the vscode debugger)
 * `-j0` enables multiprocessing (actual multithreading), can reduce latency to `20+80/numCores` percent and generally improve performance in cpu-intensive workloads, for example:
   * lots of connections (many users or heavy clients)

View File

@@ -223,7 +223,10 @@ install_vamp() {
     # use msys2 in mingw-w64 mode
     # pacman -S --needed mingw-w64-x86_64-{ffmpeg,python,python-pip,vamp-plugin-sdk}

-    $pybin -m pip install --user vamp
+    $pybin -m pip install --user vamp || {
+        printf '\n\033[7malright, trying something else...\033[0m\n'
+        $pybin -m pip install --user --no-build-isolation vamp
+    }

     cd "$td"
     echo '#include <vamp-sdk/Plugin.h>' | g++ -x c++ -c -o /dev/null - || [ -e ~/pe/vamp-sdk ] || {

View File

@@ -11,6 +11,14 @@
 # (5'000 requests per second, or 20gbps upload/download in parallel)
 #
 # on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
+#
+# if you are behind cloudflare (or another protection service),
+# remember to reject all connections which are not coming from your
+# protection service -- for cloudflare in particular, you can
+# generate the list of permitted IP ranges like so:
+# (curl -s https://www.cloudflare.com/ips-v{4,6} | sed 's/^/allow /; s/$/;/'; echo; echo "deny all;") > /etc/nginx/cloudflare-only.conf
+#
+# and then enable it below by uncommenting the cloudflare-only.conf line

 upstream cpp {
     server 127.0.0.1:3923 fail_timeout=1s;
@@ -21,7 +29,10 @@ server {
     listen [::]:443 ssl;
     server_name fs.example.com;

+    # uncomment the following line to reject non-cloudflare connections, ensuring client IPs cannot be spoofed:
+    #include /etc/nginx/cloudflare-only.conf;

     location / {
         proxy_pass http://cpp;
         proxy_redirect off;

View File

@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.10.2"
+pkgver="1.11.2"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("001be979a0fdd8ace7d48cab79a137c13b87b78be35fc9633430f45a2831c3ed")
+sha256sums=("0b37641746d698681691ea9e7070096404afc64a42d3d4e96cc4e036074fded9")

 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"

View File

@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.10.2/copyparty-sfx.py",
-  "version": "1.10.2",
-  "hash": "sha256-O9lkN30gy3kwIH+39O4dN7agZPkuH36BDTk8mEsQCVg="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.11.2/copyparty-sfx.py",
+  "version": "1.11.2",
+  "hash": "sha256-3nIHLM4xJ9RQH3ExSGvBckHuS40IdzyREAtMfpJmfug="
 }

View File

@@ -841,7 +841,6 @@ def add_general(ap, nc, srvname):
     ap2 = ap.add_argument_group('general options')
     ap2.add_argument("-c", metavar="PATH", type=u, action="append", help="add config file")
     ap2.add_argument("-nc", metavar="NUM", type=int, default=nc, help="max num clients")
-    ap2.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores, 0=all")
     ap2.add_argument("-a", metavar="ACCT", type=u, action="append", help="add account, \033[33mUSER\033[0m:\033[33mPASS\033[0m; example [\033[32med:wark\033[0m]")
     ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, \033[33mSRC\033[0m:\033[33mDST\033[0m:\033[33mFLAG\033[0m; examples [\033[32m.::r\033[0m], [\033[32m/mnt/nas/music:/music:r:aed\033[0m], see --help-accounts")
     ap2.add_argument("--grp", metavar="G:N,N", type=u, action="append", help="add group, \033[33mNAME\033[0m:\033[33mUSER1\033[0m,\033[33mUSER2\033[0m,\033[33m...\033[0m; example [\033[32madmins:ed,foo,bar\033[0m]")
@@ -869,6 +868,7 @@ def add_fs(ap):
     ap2 = ap.add_argument_group("filesystem options")
     rm_re_def = "5/0.1" if ANYWIN else "0/0"
     ap2.add_argument("--rm-retry", metavar="T/R", type=u, default=rm_re_def, help="if a file cannot be deleted because it is busy, continue trying for \033[33mT\033[0m seconds, retry every \033[33mR\033[0m seconds; disable with 0/0 (volflag=rm_retry)")
+    ap2.add_argument("--iobuf", metavar="BYTES", type=int, default=256*1024, help="file I/O buffer-size; if your volumes are on a network drive, try increasing to \033[32m524288\033[0m or even \033[32m4194304\033[0m (and let me know if that improves your performance)")

 def add_upload(ap):
@@ -880,7 +880,7 @@ def add_upload(ap):
     ap2.add_argument("--blank-wt", metavar="SEC", type=int, default=300, help="file write grace period (any client can write to a blank file last-modified more recently than \033[33mSEC\033[0m seconds ago)")
     ap2.add_argument("--reg-cap", metavar="N", type=int, default=38400, help="max number of uploads to keep in memory when running without \033[33m-e2d\033[0m; roughly 1 MiB RAM per 600")
     ap2.add_argument("--no-fpool", action="store_true", help="disable file-handle pooling -- instead, repeatedly close and reopen files during upload (bad idea to enable this on windows and/or cow filesystems)")
-    ap2.add_argument("--use-fpool", action="store_true", help="force file-handle pooling, even when it might be dangerous (multiprocessing, filesystems lacking sparse-files support, ...)")
+    ap2.add_argument("--use-fpool", action="store_true", help="force file-handle pooling, even when it might be dangerous (filesystems lacking sparse-files support, ...)")
     ap2.add_argument("--hardlink", action="store_true", help="prefer hardlinks instead of symlinks when possible (within same filesystem) (volflag=hardlink)")
     ap2.add_argument("--never-symlink", action="store_true", help="do not fallback to symlinks when a hardlink cannot be made (volflag=neversymlink)")
     ap2.add_argument("--no-dedup", action="store_true", help="disable symlink/hardlink creation; copy file contents instead (volflag=copydupes)")
@@ -916,6 +916,7 @@ def add_network(ap):
     ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
     ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
     ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
+    ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
     ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
     ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0, help="debug: socket write delay in seconds")
     ap2.add_argument("--rsp-slp", metavar="SEC", type=float, default=0, help="debug: response delay in seconds")
@@ -1122,6 +1123,7 @@ def add_safety(ap):
     ap2.add_argument("--ban-url", metavar="N,W,B", type=u, default="9,2,1440", help="hitting more than \033[33mN\033[0m sus URL's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; applies only to permissions g/G/h (decent replacement for \033[33m--ban-404\033[0m if that can't be used)")
     ap2.add_argument("--sus-urls", metavar="R", type=u, default=r"\.php$|(^|/)wp-(admin|content|includes)/", help="URLs which are considered sus / eligible for banning; disable with blank or [\033[32mno\033[0m]")
     ap2.add_argument("--nonsus-urls", metavar="R", type=u, default=r"^(favicon\.ico|robots\.txt)$|^apple-touch-icon|^\.well-known", help="harmless URLs ignored from 404-bans; disable with blank or [\033[32mno\033[0m]")
+    ap2.add_argument("--early-ban", action="store_true", help="if a client is banned, reject its connection as soon as possible; not a good idea to enable when proxied behind cloudflare since it could ban your reverse-proxy")
     ap2.add_argument("--aclose", metavar="MIN", type=int, default=10, help="if a client maxes out the server connection limit, downgrade it from connection:keep-alive to connection:close for \033[33mMIN\033[0m minutes (and also kill its active connections) -- disable with 0")
     ap2.add_argument("--loris", metavar="B", type=int, default=60, help="if a client maxes out the server connection limit without sending headers, ban it for \033[33mB\033[0m minutes; disable with [\033[32m0\033[0m]")
     ap2.add_argument("--acao", metavar="V[,V]", type=u, default="*", help="Access-Control-Allow-Origin; list of origins (domains/IPs without port) to accept requests from; [\033[32mhttps://1.2.3.4\033[0m]. Default [\033[32m*\033[0m] allows requests from all sites but removes cookies and http-auth; only ?pw=hunter2 survives")

View File

@@ -1,8 +1,8 @@
 # coding: utf-8

-VERSION = (1, 11, 0)
+VERSION = (1, 11, 2)
 CODENAME = "You Can (Not) Proceed"
-BUILD_DT = (2024, 3, 15)
+BUILD_DT = (2024, 3, 23)

 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

View File

@@ -44,9 +44,7 @@ if True:  # pylint: disable=using-constant-test
     from .util import NamedLogger, RootLogger

 if TYPE_CHECKING:
-    from .broker_mp import BrokerMp
-    from .broker_thr import BrokerThr
-    from .broker_util import BrokerCli
+    from .svchub import SvcHub

 # Vflags: TypeAlias = dict[str, str | bool | float | list[str]]
 # Vflags: TypeAlias = dict[str, Any]
@@ -141,9 +139,9 @@ class Lim(object):
         sz: int,
         ptop: str,
         abspath: str,
-        broker: Optional[Union["BrokerCli", "BrokerMp", "BrokerThr"]] = None,
+        hub: Optional["SvcHub"] = None,
         reg: Optional[dict[str, dict[str, Any]]] = None,
-        volgetter: str = "up2k.get_volsize",
+        volgetter: str = "get_volsize",
     ) -> tuple[str, str]:
         if reg is not None and self.reg is None:
             self.reg = reg
@@ -154,7 +152,7 @@ class Lim(object):
         self.chk_rem(rem)
         if sz != -1:
             self.chk_sz(sz)
-            self.chk_vsz(broker, ptop, sz, volgetter)
+            self.chk_vsz(hub, ptop, sz, volgetter)
             self.chk_df(abspath, sz)  # side effects; keep last-ish

         ap2, vp2 = self.rot(abspath)
@@ -172,16 +170,15 @@ class Lim(object):
     def chk_vsz(
         self,
-        broker: Optional[Union["BrokerCli", "BrokerMp", "BrokerThr"]],
+        hub: Optional["SvcHub"],
         ptop: str,
         sz: int,
         volgetter: str = "up2k.get_volsize",
     ) -> None:
-        if not broker or not self.vbmax + self.vnmax:
+        if not hub or not self.vbmax + self.vnmax:
             return

-        x = broker.ask(volgetter, ptop)
-        nbytes, nfiles = x.get()
+        nbytes, nfiles = hub.up2k.getattr(volgetter)(ptop)

         if self.vbmax and self.vbmax < nbytes + sz:
             raise Pebkac(400, "volume has exceeded max size")
@@ -815,9 +812,7 @@ class AuthSrv(object):
         yield prev, True

-    def idp_checkin(
-        self, broker: Optional["BrokerCli"], uname: str, gname: str
-    ) -> bool:
+    def idp_checkin(self, hub: Optional["SvcHub"], uname: str, gname: str) -> bool:
         if uname in self.acct:
             return False
@@ -837,12 +832,12 @@ class AuthSrv(object):
         t = "reinitializing due to new user from IdP: [%s:%s]"
         self.log(t % (uname, gnames), 3)

-        if not broker:
+        if not hub:
             # only true for tests
             self._reload()
             return True

-        broker.ask("_reload_blocking", False).get()
+        hub._reload_blocking(False)
         return True

     def _map_volume_idp(
@@ -1224,7 +1219,9 @@ class AuthSrv(object):
             if un.startswith("@"):
                 grp = un[1:]
                 uns = [x[0] for x in un_gns.items() if grp in x[1]]
-                if not uns and grp != "${g}" and not self.args.idp_h_grp:
+                if grp == "${g}":
+                    unames.append(un)
+                elif not uns and not self.args.idp_h_grp:
                     t = "group [%s] must be defined with --grp argument (or in a [groups] config section)"
                     raise CfgEx(t % (grp,))
@@ -1234,31 +1231,28 @@ class AuthSrv(object):
         # unames may still contain ${u} and ${g} so now expand those;
         un_gn = [(un, gn) for un, gns in un_gns.items() for gn in gns]
-        if "*" not in un_gns:
-            # need ("*","") to match "*" in unames
-            un_gn.append(("*", ""))

-        for _, dst, vu, vg in vols:
-            unames2 = set()
-            for un, gn in un_gn:
-                # if vu/vg (volume user/group) is non-null,
-                # then each non-null value corresponds to
-                # ${u}/${g}; consider this a filter to
-                # apply to unames, as well as un_gn
-                if (vu and vu != un) or (vg and vg != gn):
-                    continue
-
-                for uname in unames + ([un] if vu or vg else []):
-                    if uname == "${u}":
-                        uname = vu or un
-                    elif uname in ("${g}", "@${g}"):
-                        uname = vg or gn
-
-                    if vu and vu != uname:
-                        continue
-
-                    if uname:
-                        unames2.add(uname)
+        for src, dst, vu, vg in vols:
+            unames2 = set(unames)
+
+            if "${u}" in unames:
+                if not vu:
+                    t = "cannot use ${u} in accs of volume [%s] because the volume url does not contain ${u}"
+                    raise CfgEx(t % (src,))
+                unames2.add(vu)
+
+            if "@${g}" in unames:
+                if not vg:
+                    t = "cannot use @${g} in accs of volume [%s] because the volume url does not contain @${g}"
+                    raise CfgEx(t % (src,))
+                unames2.update([un for un, gn in un_gn if gn == vg])
+
+            if "${g}" in unames:
+                t = 'the accs of volume [%s] contains "${g}" but the only supported way of specifying that is "@${g}"'
+                raise CfgEx(t % (src,))
+
+            unames2.discard("${u}")
+            unames2.discard("@${g}")

             self._read_vol_str(lvl, list(unames2), axs[dst])

View File

@@ -1,141 +0,0 @@
-# coding: utf-8
-from __future__ import print_function, unicode_literals
-
-import threading
-import time
-import traceback
-
-import queue
-
-from .__init__ import CORES, TYPE_CHECKING
-from .broker_mpw import MpWorker
-from .broker_util import ExceptionalQueue, try_exec
-from .util import Daemon, mp
-
-if TYPE_CHECKING:
-    from .svchub import SvcHub
-
-if True:  # pylint: disable=using-constant-test
-    from typing import Any
-
-
-class MProcess(mp.Process):
-    def __init__(
-        self,
-        q_pend: queue.Queue[tuple[int, str, list[Any]]],
-        q_yield: queue.Queue[tuple[int, str, list[Any]]],
-        target: Any,
-        args: Any,
-    ) -> None:
-        super(MProcess, self).__init__(target=target, args=args)
-        self.q_pend = q_pend
-        self.q_yield = q_yield
-
-
-class BrokerMp(object):
-    """external api; manages MpWorkers"""
-
-    def __init__(self, hub: "SvcHub") -> None:
-        self.hub = hub
-        self.log = hub.log
-        self.args = hub.args
-
-        self.procs = []
-        self.mutex = threading.Lock()
-
-        self.num_workers = self.args.j or CORES
-        self.log("broker", "booting {} subprocesses".format(self.num_workers))
-        for n in range(1, self.num_workers + 1):
-            q_pend: queue.Queue[tuple[int, str, list[Any]]] = mp.Queue(1)  # type: ignore
-            q_yield: queue.Queue[tuple[int, str, list[Any]]] = mp.Queue(64)  # type: ignore
-
-            proc = MProcess(q_pend, q_yield, MpWorker, (q_pend, q_yield, self.args, n))
-            Daemon(self.collector, "mp-sink-{}".format(n), (proc,))
-
-            self.procs.append(proc)
-            proc.start()
-
-    def shutdown(self) -> None:
-        self.log("broker", "shutting down")
-        for n, proc in enumerate(self.procs):
-            thr = threading.Thread(
-                target=proc.q_pend.put((0, "shutdown", [])),
-                name="mp-shutdown-{}-{}".format(n, len(self.procs)),
-            )
-            thr.start()
-
-        with self.mutex:
-            procs = self.procs
-            self.procs = []
-
-        while procs:
-            if procs[-1].is_alive():
-                time.sleep(0.05)
-                continue
-
-            procs.pop()
-
-    def reload(self) -> None:
-        self.log("broker", "reloading")
-        for _, proc in enumerate(self.procs):
-            proc.q_pend.put((0, "reload", []))
-
-    def collector(self, proc: MProcess) -> None:
-        """receive message from hub in other process"""
-        while True:
-            msg = proc.q_yield.get()
-            retq_id, dest, args = msg
-
-            if dest == "log":
-                self.log(*args)
-
-            elif dest == "retq":
-                # response from previous ipc call
-                raise Exception("invalid broker_mp usage")
-
-            else:
-                # new ipc invoking managed service in hub
-                try:
-                    obj = self.hub
-                    for node in dest.split("."):
-                        obj = getattr(obj, node)
-
-                    # TODO will deadlock if dest performs another ipc
-                    rv = try_exec(retq_id, obj, *args)
-                except:
-                    rv = ["exception", "stack", traceback.format_exc()]
-
-                if retq_id:
-                    proc.q_pend.put((retq_id, "retq", rv))
-
-    def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
-        # new non-ipc invoking managed service in hub
-        obj = self.hub
-        for node in dest.split("."):
-            obj = getattr(obj, node)
-
-        rv = try_exec(True, obj, *args)
retq = ExceptionalQueue(1)
retq.put(rv)
return retq
def say(self, dest: str, *args: Any) -> None:
"""
send message to non-hub component in other process,
returns a Queue object which eventually contains the response if want_retval
(not-impl here since nothing uses it yet)
"""
if dest == "listen":
for p in self.procs:
p.q_pend.put((0, dest, [args[0], len(self.procs)]))
elif dest == "set_netdevs":
for p in self.procs:
p.q_pend.put((0, dest, list(args)))
elif dest == "cb_httpsrv_up":
self.hub.cb_httpsrv_up()
else:
raise Exception("what is " + str(dest))
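The `dest.split(".")` walk used by `collector` and `ask` above is a generic attribute-path dispatch: a dotted string names a method on the hub, and each segment is resolved with `getattr`. A minimal standalone sketch (the `Hub`/`Up2k` classes here are illustrative stubs, not the real ones):

```python
class Up2k:
    def hash_file(self, path):
        return "hashed:" + path

class Hub:
    def __init__(self):
        self.up2k = Up2k()

def dispatch(hub, dest, *args):
    # walk "a.b" down to the bound method, then call it
    obj = hub
    for node in dest.split("."):
        obj = getattr(obj, node)
    return obj(*args)

print(dispatch(Hub(), "up2k.hash_file", "/srv/x"))  # hashed:/srv/x
```

This is what lets the broker ship a plain string like `"up2k.hash_file"` across a process boundary instead of a function object, which would not pickle.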


@@ -1,123 +0,0 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import argparse
import os
import signal
import sys
import threading
import queue
from .__init__ import ANYWIN
from .authsrv import AuthSrv
from .broker_util import BrokerCli, ExceptionalQueue
from .httpsrv import HttpSrv
from .util import FAKE_MP, Daemon, HMaccas
if True: # pylint: disable=using-constant-test
from types import FrameType
from typing import Any, Optional, Union
class MpWorker(BrokerCli):
"""one single mp instance"""
def __init__(
self,
q_pend: queue.Queue[tuple[int, str, list[Any]]],
q_yield: queue.Queue[tuple[int, str, list[Any]]],
args: argparse.Namespace,
n: int,
) -> None:
super(MpWorker, self).__init__()
self.q_pend = q_pend
self.q_yield = q_yield
self.args = args
self.n = n
self.log = self._log_disabled if args.q and not args.lo else self._log_enabled
self.retpend: dict[int, Any] = {}
self.retpend_mutex = threading.Lock()
self.mutex = threading.Lock()
# we inherited signal_handler from parent,
# replace it with something harmless
if not FAKE_MP:
sigs = [signal.SIGINT, signal.SIGTERM]
if not ANYWIN:
sigs.append(signal.SIGUSR1)
for sig in sigs:
signal.signal(sig, self.signal_handler)
# starting to look like a good idea
self.asrv = AuthSrv(args, None, False)
# instantiate all services here (TODO: inheritance?)
self.iphash = HMaccas(os.path.join(self.args.E.cfg, "iphash"), 8)
self.httpsrv = HttpSrv(self, n)
# on winxp and some other platforms,
# use thr.join() to block all signals
Daemon(self.main, "mpw-main").join()
def signal_handler(self, sig: Optional[int], frame: Optional[FrameType]) -> None:
# print('k')
pass
def _log_enabled(self, src: str, msg: str, c: Union[int, str] = 0) -> None:
self.q_yield.put((0, "log", [src, msg, c]))
def _log_disabled(self, src: str, msg: str, c: Union[int, str] = 0) -> None:
pass
def logw(self, msg: str, c: Union[int, str] = 0) -> None:
self.log("mp%d" % (self.n,), msg, c)
def main(self) -> None:
while True:
retq_id, dest, args = self.q_pend.get()
# self.logw("work: [{}]".format(d[0]))
if dest == "shutdown":
self.httpsrv.shutdown()
self.logw("ok bye")
sys.exit(0)
return
elif dest == "reload":
self.logw("mpw.asrv reloading")
self.asrv.reload()
self.logw("mpw.asrv reloaded")
elif dest == "listen":
self.httpsrv.listen(args[0], args[1])
elif dest == "set_netdevs":
self.httpsrv.set_netdevs(args[0])
elif dest == "retq":
# response from previous ipc call
with self.retpend_mutex:
retq = self.retpend.pop(retq_id)
retq.put(args)
else:
raise Exception("what is " + str(dest))
def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
retq = ExceptionalQueue(1)
retq_id = id(retq)
with self.retpend_mutex:
self.retpend[retq_id] = retq
self.q_yield.put((retq_id, dest, list(args)))
return retq
def say(self, dest: str, *args: Any) -> None:
self.q_yield.put((0, dest, list(args)))
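`MpWorker.ask` above correlates each response with its caller by keying a per-request reply queue on `id(retq)`. The same pattern in miniature, single-process and standard-library only (the `hub`/`pump` threads stand in for the other process and the collector):

```python
import queue
import threading

retpend = {}
q_yield = queue.Queue()  # worker -> hub
q_pend = queue.Queue()   # hub -> worker

def hub():
    # answer each request by echoing it back, tagged with the caller's id
    while True:
        retq_id, dest, args = q_yield.get()
        if dest == "shutdown":
            return
        q_pend.put((retq_id, "retq", ["reply to %r" % (args,)]))

def pump():
    # route each response to the queue that is waiting for it
    while True:
        retq_id, dest, args = q_pend.get()
        if dest == "shutdown":
            return
        retpend.pop(retq_id).put(args)

def ask(dest, *args):
    retq = queue.Queue(1)
    retpend[id(retq)] = retq
    q_yield.put((id(retq), dest, list(args)))
    return retq

t1 = threading.Thread(target=hub); t1.start()
t2 = threading.Thread(target=pump); t2.start()
res = ask("up2k.x", 1).get()
print(res)  # ['reply to [1]']
q_yield.put((0, "shutdown", [])); q_pend.put((0, "shutdown", []))
t1.join(); t2.join()
```

Because `id(retq)` is unique while the queue is alive and is popped from `retpend` on delivery, concurrent `ask` calls cannot receive each other's replies.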


@@ -1,73 +0,0 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import os
import threading
from .__init__ import TYPE_CHECKING
from .broker_util import BrokerCli, ExceptionalQueue, try_exec
from .httpsrv import HttpSrv
from .util import HMaccas
if TYPE_CHECKING:
from .svchub import SvcHub
if True: # pylint: disable=using-constant-test
from typing import Any
class BrokerThr(BrokerCli):
"""external api; behaves like BrokerMP but using plain threads"""
def __init__(self, hub: "SvcHub") -> None:
super(BrokerThr, self).__init__()
self.hub = hub
self.log = hub.log
self.args = hub.args
self.asrv = hub.asrv
self.mutex = threading.Lock()
self.num_workers = 1
# instantiate all services here (TODO: inheritance?)
self.iphash = HMaccas(os.path.join(self.args.E.cfg, "iphash"), 8)
self.httpsrv = HttpSrv(self, None)
self.reload = self.noop
def shutdown(self) -> None:
# self.log("broker", "shutting down")
self.httpsrv.shutdown()
def noop(self) -> None:
pass
def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
# new ipc invoking managed service in hub
obj = self.hub
for node in dest.split("."):
obj = getattr(obj, node)
rv = try_exec(True, obj, *args)
# pretend we're broker_mp
retq = ExceptionalQueue(1)
retq.put(rv)
return retq
def say(self, dest: str, *args: Any) -> None:
if dest == "listen":
self.httpsrv.listen(args[0], 1)
return
if dest == "set_netdevs":
self.httpsrv.set_netdevs(args[0])
return
# new ipc invoking managed service in hub
obj = self.hub
for node in dest.split("."):
obj = getattr(obj, node)
try_exec(False, obj, *args)


@@ -1,72 +0,0 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import argparse
import traceback
from queue import Queue
from .__init__ import TYPE_CHECKING
from .authsrv import AuthSrv
from .util import HMaccas, Pebkac
if True: # pylint: disable=using-constant-test
from typing import Any, Optional, Union
from .util import RootLogger
if TYPE_CHECKING:
from .httpsrv import HttpSrv
class ExceptionalQueue(Queue, object):
def get(self, block: bool = True, timeout: Optional[float] = None) -> Any:
rv = super(ExceptionalQueue, self).get(block, timeout)
if isinstance(rv, list):
if rv[0] == "exception":
if rv[1] == "pebkac":
raise Pebkac(*rv[2:])
else:
raise Exception(rv[2])
return rv
class BrokerCli(object):
"""
helps mypy understand httpsrv.broker but still fails a few levels deeper,
for example resolving httpconn.* in httpcli -- see lines tagged #mypy404
"""
log: "RootLogger"
args: argparse.Namespace
asrv: AuthSrv
httpsrv: "HttpSrv"
iphash: HMaccas
def __init__(self) -> None:
pass
def ask(self, dest: str, *args: Any) -> ExceptionalQueue:
return ExceptionalQueue(1)
def say(self, dest: str, *args: Any) -> None:
pass
def try_exec(want_retval: Union[bool, int], func: Any, *args: list[Any]) -> Any:
try:
return func(*args)
except Pebkac as ex:
if not want_retval:
raise
return ["exception", "pebkac", ex.code, str(ex)]
except:
if not want_retval:
raise
return ["exception", "stack", traceback.format_exc()]
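`try_exec` above turns exceptions into a tagged list so they survive the trip through a pickled queue, and `ExceptionalQueue.get` re-raises them on the consumer side. A round-trip sketch of the consumer half, with `Pebkac` stubbed for self-containment:

```python
from queue import Queue

class Pebkac(Exception):
    def __init__(self, code, msg=None):
        super().__init__(msg or "")
        self.code = code

class ExceptionalQueue(Queue, object):
    # re-raise marshalled exceptions on the consumer side
    def get(self, block=True, timeout=None):
        rv = super(ExceptionalQueue, self).get(block, timeout)
        if isinstance(rv, list) and rv and rv[0] == "exception":
            if rv[1] == "pebkac":
                raise Pebkac(*rv[2:])
            raise Exception(rv[2])
        return rv

q = ExceptionalQueue(1)
q.put(["exception", "pebkac", 403, "nope"])  # what try_exec would produce
raised = None
try:
    q.get()
except Pebkac as ex:
    raised = ex
print(raised.code)  # 403
```

The tagged-list encoding is what allows the worker process to report an HTTP-style error (`Pebkac` with a status code) without needing the exception object itself to be picklable.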


@@ -88,12 +88,8 @@ class FtpAuth(DummyAuthorizer):
                 if bonk:
                     logging.warning("client banned: invalid passwords")
                     bans[ip] = bonk
-                    try:
-                        # only possible if multiprocessing disabled
-                        self.hub.broker.httpsrv.bans[ip] = bonk  # type: ignore
-                        self.hub.broker.httpsrv.nban += 1  # type: ignore
-                    except:
-                        pass
+                    self.hub.httpsrv.bans[ip] = bonk
+                    self.hub.httpsrv.nban += 1

             raise AuthenticationFailed("Authentication failed.")

@@ -218,7 +214,7 @@ class FtpFs(AbstractedFS):
             raise FSE("Cannot open existing file for writing")

         self.validpath(ap)
-        return open(fsenc(ap), mode)
+        return open(fsenc(ap), mode, self.args.iobuf)

     def chdir(self, path: str) -> None:
         nwd = join(self.cwd, path)


@@ -36,6 +36,7 @@ from .bos import bos
 from .star import StreamTar
 from .sutil import StreamArc, gfilter
 from .szip import StreamZip
+from .util import unquote  # type: ignore
 from .util import (
@@ -84,7 +85,6 @@ from .util import (
     sendfile_py,
     undot,
     unescape_cookie,
-    unquote,  # type: ignore
     unquotep,
     vjoin,
     vol_san,
@@ -115,6 +115,7 @@ class HttpCli(object):
         self.t0 = time.time()
         self.conn = conn
+        self.hub = conn.hsrv.hub
         self.u2mutex = conn.u2mutex  # mypy404
         self.s = conn.s
         self.sr = conn.sr
@@ -174,7 +175,6 @@ class HttpCli(object):
         self.parser: Optional[MultipartParser] = None
         # end placeholders
-        self.bufsz = 1024 * 32
         self.html_head = ""

     def log(self, msg: str, c: Union[int, str] = 0) -> None:
@@ -228,7 +228,7 @@ class HttpCli(object):
             "Cache-Control": "no-store, max-age=0",
         }

-        if self.is_banned():
+        if self.args.early_ban and self.is_banned():
             return False

         if self.conn.ipa_nm and not self.conn.ipa_nm.map(self.conn.addr[0]):
@@ -323,9 +323,7 @@ class HttpCli(object):
                     if "." in pip
                     else ":".join(pip.split(":")[:4]) + ":"
                 ) + "0.0/16"
-                zs2 = (
-                    ' or "--xff-src=lan"' if self.conn.hsrv.xff_lan.map(pip) else ""
-                )
+                zs2 = ' or "--xff-src=lan"' if self.conn.xff_lan.map(pip) else ""
                 self.log(t % (self.args.xff_hdr, pip, cli_ip, zso, zs, zs2), 3)
             else:
                 self.ip = cli_ip
@@ -478,7 +476,7 @@ class HttpCli(object):
         ) or self.args.idp_h_key in self.headers

         if trusted_key and trusted_xff:
-            self.asrv.idp_checkin(self.conn.hsrv.broker, idp_usr, idp_grp)
+            self.asrv.idp_checkin(self.hub, idp_usr, idp_grp)
         else:
             if not trusted_key:
                 t = 'the idp-h-key header ("%s") is not present in the request; will NOT trust the other headers saying that the client\'s username is "%s" and group is "%s"'
@@ -496,9 +494,7 @@ class HttpCli(object):
                     else ":".join(pip.split(":")[:4]) + ":"
                 ) + "0.0/16"
                 zs2 = (
-                    ' or "--xff-src=lan"'
-                    if self.conn.hsrv.xff_lan.map(pip)
-                    else ""
+                    ' or "--xff-src=lan"' if self.conn.xff_lan.map(pip) else ""
                 )
                 self.log(t % (pip, idp_usr, idp_grp, zs, zs2), 3)
@@ -631,7 +627,7 @@ class HttpCli(object):
             msg += "hint: %s\r\n" % (self.hint,)

         if "database is locked" in em:
-            self.conn.hsrv.broker.say("log_stacks")
+            self.hub.log_stacks()
             msg += "hint: important info in the server log\r\n"

         zb = b"<pre>" + html_escape(msg).encode("utf-8", "replace")
@@ -1615,15 +1611,16 @@ class HttpCli(object):
         return enc or "utf-8"

     def get_body_reader(self) -> tuple[Generator[bytes, None, None], int]:
+        bufsz = self.args.s_rd_sz
         if "chunked" in self.headers.get("transfer-encoding", "").lower():
-            return read_socket_chunked(self.sr), -1
+            return read_socket_chunked(self.sr, bufsz), -1

         remains = int(self.headers.get("content-length", -1))
         if remains == -1:
             self.keepalive = False
-            return read_socket_unbounded(self.sr), remains
+            return read_socket_unbounded(self.sr, bufsz), remains
         else:
-            return read_socket(self.sr, remains), remains
+            return read_socket(self.sr, bufsz, remains), remains

     def dump_to_file(self, is_put: bool) -> tuple[int, str, str, int, str, str]:
         # post_sz, sha_hex, sha_b64, remains, path, url
@@ -1633,9 +1630,7 @@ class HttpCli(object):
         lim = vfs.get_dbv(rem)[0].lim
         fdir = vfs.canonical(rem)
         if lim:
-            fdir, rem = lim.all(
-                self.ip, rem, remains, vfs.realpath, fdir, self.conn.hsrv.broker
-            )
+            fdir, rem = lim.all(self.ip, rem, remains, vfs.realpath, fdir, self.hub)

         fn = None
         if rem and not self.trailing_slash and not bos.path.isdir(fdir):
@@ -1645,7 +1640,7 @@ class HttpCli(object):
             bos.makedirs(fdir)

         open_ka: dict[str, Any] = {"fun": open}
-        open_a = ["wb", 512 * 1024]
+        open_a = ["wb", self.args.iobuf]

         # user-request || config-force
         if ("gz" in vfs.flags or "xz" in vfs.flags) and (
@@ -1768,7 +1763,7 @@ class HttpCli(object):
             lim.bup(self.ip, post_sz)
             try:
                 lim.chk_sz(post_sz)
-                lim.chk_vsz(self.conn.hsrv.broker, vfs.realpath, post_sz)
+                lim.chk_vsz(self.hub, vfs.realpath, post_sz)
             except:
                 wunlink(self.log, path, vfs.flags)
                 raise
@@ -1827,8 +1822,7 @@ class HttpCli(object):
                 raise Pebkac(403, t)

             vfs, rem = vfs.get_dbv(rem)
-            self.conn.hsrv.broker.say(
-                "up2k.hash_file",
+            self.hub.up2k.hash_file(
                 vfs.realpath,
                 vfs.vpath,
                 vfs.flags,
@@ -1904,7 +1898,7 @@ class HttpCli(object):
                     f.seek(ofs)
                     with open(fp, "wb") as fo:
                         while nrem:
-                            buf = f.read(min(nrem, 512 * 1024))
+                            buf = f.read(min(nrem, self.args.iobuf))
                             if not buf:
                                 break
@@ -1926,7 +1920,7 @@ class HttpCli(object):
         return "%s %s n%s" % (spd1, spd2, self.conn.nreq)

     def handle_post_multipart(self) -> bool:
-        self.parser = MultipartParser(self.log, self.sr, self.headers)
+        self.parser = MultipartParser(self.log, self.args, self.sr, self.headers)
         self.parser.parse()

         file0: list[tuple[str, Optional[str], Generator[bytes, None, None]]] = []
@@ -2053,8 +2047,7 @@ class HttpCli(object):
         # not to protect u2fh, but to prevent handshakes while files are closing
         with self.u2mutex:
-            x = self.conn.hsrv.broker.ask("up2k.handle_json", body, self.u2fh.aps)
-            ret = x.get()
+            ret = self.hub.up2k.handle_json(body, self.u2fh.aps)

         if self.is_vproxied:
             if "purl" in ret:
@@ -2142,7 +2135,7 @@ class HttpCli(object):
         vfs, _ = self.asrv.vfs.get(self.vpath, self.uname, False, True)
         ptop = (vfs.dbv or vfs).realpath

-        x = self.conn.hsrv.broker.ask("up2k.handle_chunk", ptop, wark, chash)
+        x = self.hub.up2k.handle_chunk(ptop, wark, chash)
         response = x.get()
         chunksize, cstart, path, lastmod, sprs = response
@@ -2155,7 +2148,7 @@ class HttpCli(object):
         self.log("writing {} #{} @{} len {}".format(path, chash, cstart, remains))

-        reader = read_socket(self.sr, remains)
+        reader = read_socket(self.sr, self.args.s_rd_sz, remains)

         f = None
         fpool = not self.args.no_fpool and sprs
@@ -2166,7 +2159,7 @@ class HttpCli(object):
             except:
                 pass

-        f = f or open(fsenc(path), "rb+", 512 * 1024)
+        f = f or open(fsenc(path), "rb+", self.args.iobuf)

         try:
             f.seek(cstart[0])
@@ -2189,7 +2182,8 @@ class HttpCli(object):
                 )
                 ofs = 0
                 while ofs < chunksize:
-                    bufsz = min(chunksize - ofs, 4 * 1024 * 1024)
+                    bufsz = max(4 * 1024 * 1024, self.args.iobuf)
+                    bufsz = min(chunksize - ofs, bufsz)
                     f.seek(cstart[0] + ofs)
                     buf = f.read(bufsz)
                     for wofs in cstart[1:]:
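The loop in the hunk above deduplicates identical chunks by reading the uploaded bytes once and replaying them at every other offset where the same chunk occurs in the file. A standalone sketch of that scatter-write (`scatter_write` is a hypothetical helper; the real code works on a file opened `"rb+"`):

```python
import io

def scatter_write(f, cstart, chunksize, iobuf=4 * 1024 * 1024):
    # cstart[0] already holds the data; copy it to every offset in cstart[1:]
    ofs = 0
    while ofs < chunksize:
        bufsz = min(chunksize - ofs, iobuf)
        f.seek(cstart[0] + ofs)
        buf = f.read(bufsz)
        for wofs in cstart[1:]:
            f.seek(wofs + ofs)
            f.write(buf)
        ofs += len(buf)

# a 4-byte chunk at offset 0, duplicated to offsets 4 and 8
f = io.BytesIO(b"ABCD" + b"\x00" * 8)
scatter_write(f, [0, 4, 8], 4)
print(f.getvalue())  # b'ABCDABCDABCD'
```

Capping `bufsz` keeps memory bounded for large chunks, which is why the diff ties it to `--iobuf` instead of the old hardcoded 4 MiB.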
@@ -2210,11 +2204,9 @@ class HttpCli(object):
                     f.close()
                 raise
         finally:
-            x = self.conn.hsrv.broker.ask("up2k.release_chunk", ptop, wark, chash)
-            x.get()  # block client until released
+            self.hub.up2k.release_chunk(ptop, wark, chash)

-        x = self.conn.hsrv.broker.ask("up2k.confirm_chunk", ptop, wark, chash)
-        ztis = x.get()
+        ztis = self.hub.up2k.confirm_chunk(ptop, wark, chash)

         try:
             num_left, fin_path = ztis
         except:
@@ -2226,9 +2218,7 @@ class HttpCli(object):
             self.u2fh.close(path)

         if not num_left and not self.args.nw:
-            self.conn.hsrv.broker.ask(
-                "up2k.finish_upload", ptop, wark, self.u2fh.aps
-            ).get()
+            self.hub.up2k.finish_upload(ptop, wark, self.u2fh.aps)

         cinf = self.headers.get("x-up2k-stat", "")
@@ -2406,7 +2396,7 @@ class HttpCli(object):
         fdir_base = vfs.canonical(rem)
         if lim:
             fdir_base, rem = lim.all(
-                self.ip, rem, -1, vfs.realpath, fdir_base, self.conn.hsrv.broker
+                self.ip, rem, -1, vfs.realpath, fdir_base, self.hub
             )
             upload_vpath = "{}/{}".format(vfs.vpath, rem).strip("/")
         if not nullwrite:
@@ -2442,6 +2432,18 @@ class HttpCli(object):
                 suffix = "-{:.6f}-{}".format(time.time(), dip)
                 open_args = {"fdir": fdir, "suffix": suffix}

+                if "replace" in self.uparam:
+                    abspath = os.path.join(fdir, fname)
+                    if not self.can_delete:
+                        self.log("user not allowed to overwrite with ?replace")
+                    elif bos.path.exists(abspath):
+                        try:
+                            bos.unlink(abspath)
+                            t = "overwriting file with new upload: %s"
+                        except:
+                            t = "toctou while deleting for ?replace: %s"
+                        self.log(t % (abspath,))
+
                 # reserve destination filename
                 with ren_open(fname, "wb", fdir=fdir, suffix=suffix) as zfw:
                     fname = zfw["orz"][1]
@@ -2486,7 +2488,7 @@ class HttpCli(object):
                     v2 = lim.dfv - lim.dfl
                     max_sz = min(v1, v2) if v1 and v2 else v1 or v2

-                with ren_open(tnam, "wb", 512 * 1024, **open_args) as zfw:
+                with ren_open(tnam, "wb", self.args.iobuf, **open_args) as zfw:
                     f, tnam = zfw["orz"]
                     tabspath = os.path.join(fdir, tnam)
                     self.log("writing to {}".format(tabspath))
@@ -2502,7 +2504,7 @@ class HttpCli(object):
                     try:
                         lim.chk_df(tabspath, sz, True)
                         lim.chk_sz(sz)
-                        lim.chk_vsz(self.conn.hsrv.broker, vfs.realpath, sz)
+                        lim.chk_vsz(self.hub, vfs.realpath, sz)
                         lim.chk_bup(self.ip)
                         lim.chk_nup(self.ip)
                     except:
@@ -2540,8 +2542,7 @@ class HttpCli(object):
                         raise Pebkac(403, t)

                     dbv, vrem = vfs.get_dbv(rem)
-                    self.conn.hsrv.broker.say(
-                        "up2k.hash_file",
+                    self.hub.up2k.hash_file(
                         dbv.realpath,
                         vfs.vpath,
                         dbv.flags,
@@ -2688,7 +2689,7 @@ class HttpCli(object):
         fp = vfs.canonical(rp)
         lim = vfs.get_dbv(rem)[0].lim
         if lim:
-            fp, rp = lim.all(self.ip, rp, clen, vfs.realpath, fp, self.conn.hsrv.broker)
+            fp, rp = lim.all(self.ip, rp, clen, vfs.realpath, fp, self.hub)

         bos.makedirs(fp)
         fp = os.path.join(fp, fn)
@@ -2782,7 +2783,7 @@ class HttpCli(object):
         if bos.path.exists(fp):
             wunlink(self.log, fp, vfs.flags)

-        with open(fsenc(fp), "wb", 512 * 1024) as f:
+        with open(fsenc(fp), "wb", self.args.iobuf) as f:
             sz, sha512, _ = hashcopy(p_data, f, self.args.s_wr_slp)

         if lim:
@@ -2790,7 +2791,7 @@ class HttpCli(object):
             lim.bup(self.ip, sz)
             try:
                 lim.chk_sz(sz)
-                lim.chk_vsz(self.conn.hsrv.broker, vfs.realpath, sz)
+                lim.chk_vsz(self.hub, vfs.realpath, sz)
             except:
                 wunlink(self.log, fp, vfs.flags)
                 raise
@@ -2819,8 +2820,7 @@ class HttpCli(object):
             raise Pebkac(403, t)

         vfs, rem = vfs.get_dbv(rem)
-        self.conn.hsrv.broker.say(
-            "up2k.hash_file",
+        self.hub.up2k.hash_file(
             vfs.realpath,
             vfs.vpath,
             vfs.flags,
@@ -3014,8 +3014,7 @@ class HttpCli(object):
             upper = gzip_orig_sz(fs_path)
         else:
             open_func = open
-            # 512 kB is optimal for huge files, use 64k
-            open_args = [fsenc(fs_path), "rb", 64 * 1024]
+            open_args = [fsenc(fs_path), "rb", self.args.iobuf]

         use_sendfile = (
             # fmt: off
             not self.tls
@@ -3150,6 +3149,7 @@ class HttpCli(object):
         bgen = packer(
             self.log,
+            self.args,
             fgen,
             utf8="utf" in uarg,
             pre_crc="crc" in uarg,
@@ -3227,7 +3227,7 @@ class HttpCli(object):
         sz_md = 0
         lead = b""
         fullfile = b""
-        for buf in yieldfile(fs_path):
+        for buf in yieldfile(fs_path, self.args.iobuf):
             if sz_md < max_sz:
                 fullfile += buf
             else:
@@ -3300,7 +3300,7 @@ class HttpCli(object):
         if fullfile:
             self.s.sendall(fullfile)
         else:
-            for buf in yieldfile(fs_path):
+            for buf in yieldfile(fs_path, self.args.iobuf):
                 self.s.sendall(html_bescape(buf))

         self.s.sendall(html[1])
@@ -3354,8 +3354,8 @@ class HttpCli(object):
         ]

         if self.avol and not self.args.no_rescan:
-            x = self.conn.hsrv.broker.ask("up2k.get_state")
-            vs = json.loads(x.get())
+            zs = self.hub.up2k.get_state()
+            vs = json.loads(zs)
             vstate = {("/" + k).rstrip("/") + "/": v for k, v in vs["volstate"].items()}
         else:
             vstate = {}
@@ -3499,10 +3499,8 @@ class HttpCli(object):
         vn, _ = self.asrv.vfs.get(self.vpath, self.uname, True, True)

-        args = [self.asrv.vfs.all_vols, [vn.vpath], False, True]
-        x = self.conn.hsrv.broker.ask("up2k.rescan", *args)
-        err = x.get()
+        err = self.hub.up2k.rescan(self.asrv.vfs.all_vols, [vn.vpath], False, True)

         if not err:
             self.redirect("", "?h")
             return True
@@ -3520,8 +3518,8 @@ class HttpCli(object):
         if self.args.no_reload:
             raise Pebkac(403, "the reload feature is disabled in server config")

-        x = self.conn.hsrv.broker.ask("reload")
-        return self.redirect("", "?h", x.get(), "return to", False)
+        zs = self.hub.reload()
+        return self.redirect("", "?h", zs, "return to", False)

     def tx_stack(self) -> bool:
         if not self.avol and not [x for x in self.wvol if x in self.rvol]:
@@ -3605,8 +3603,6 @@ class HttpCli(object):
         return ret

     def tx_ups(self) -> bool:
-        have_unpost = self.args.unpost and "e2d" in self.vn.flags
-
         idx = self.conn.get_u2idx()
         if not idx or not hasattr(idx, "p_end"):
             raise Pebkac(500, "sqlite3 is not available on the server; cannot unpost")
@@ -3625,13 +3621,16 @@ class HttpCli(object):
             and (self.uname in vol.axs.uread or self.uname in vol.axs.upget)
         }

-        x = self.conn.hsrv.broker.ask(
-            "up2k.get_unfinished_by_user", self.uname, self.ip
-        )
-        uret = x.get()
+        uret = self.hub.up2k.get_unfinished_by_user(self.uname, self.ip)

-        allvols = self.asrv.vfs.all_vols if have_unpost else {}
-        for vol in allvols.values():
+        if not self.args.unpost:
+            allvols = []
+        else:
+            allvols = list(self.asrv.vfs.all_vols.values())
+
+        allvols = [x for x in allvols if "e2d" in x.flags]
+
+        for vol in allvols:
             cur = idx.get_cur(vol.realpath)
             if not cur:
                 continue
@@ -3683,7 +3682,7 @@ class HttpCli(object):
             for v in ret:
                 v["vp"] = self.args.SR + v["vp"]

-        if not have_unpost:
+        if not allvols:
             ret = [{"kinshi": 1}]

         jtxt = '{"u":%s,"c":%s}' % (uret, json.dumps(ret, indent=0))
@@ -3708,10 +3707,8 @@ class HttpCli(object):
         nlim = int(self.uparam.get("lim") or 0)
         lim = [nlim, nlim] if nlim else []

-        x = self.conn.hsrv.broker.ask(
-            "up2k.handle_rm", self.uname, self.ip, req, lim, False, unpost
-        )
-        self.loud_reply(x.get())
+        zs = self.hub.up2k.handle_rm(self.uname, self.ip, req, lim, False, unpost)
+        self.loud_reply(zs)

         return True

     def handle_mv(self) -> bool:
@@ -3733,8 +3730,8 @@ class HttpCli(object):
         if self.args.no_mv:
             raise Pebkac(403, "the rename/move feature is disabled in server config")

-        x = self.conn.hsrv.broker.ask("up2k.handle_mv", self.uname, vsrc, vdst)
-        self.loud_reply(x.get(), status=201)
+        zs = self.hub.up2k.handle_mv(self.uname, vsrc, vdst)
+        self.loud_reply(zs, status=201)

         return True

     def tx_ls(self, ls: dict[str, Any]) -> bool:


@@ -55,9 +55,10 @@ class HttpConn(object):
         self.E: EnvParams = self.args.E
         self.asrv: AuthSrv = hsrv.asrv  # mypy404
         self.u2fh: Util.FHC = hsrv.u2fh  # mypy404
-        self.ipa_nm: NetMap = hsrv.ipa_nm
-        self.xff_nm: NetMap = hsrv.xff_nm
-        self.iphash: HMaccas = hsrv.broker.iphash
+        self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
+        self.xff_nm: Optional[NetMap] = hsrv.xff_nm
+        self.xff_lan: NetMap = hsrv.xff_lan  # type: ignore
+        self.iphash: HMaccas = hsrv.hub.iphash
         self.bans: dict[str, int] = hsrv.bans
         self.aclose: dict[str, int] = hsrv.aclose


@@ -77,7 +77,7 @@ from .util import (
 )

 if TYPE_CHECKING:
-    from .broker_util import BrokerCli
+    from .svchub import SvcHub
     from .ssdp import SSDPr

 if True:  # pylint: disable=using-constant-test
@@ -90,21 +90,18 @@ class HttpSrv(object):
     relying on MpSrv for performance (HttpSrv is just plain threads)
     """

-    def __init__(self, broker: "BrokerCli", nid: Optional[int]) -> None:
-        self.broker = broker
+    def __init__(self, hub: "SvcHub", nid: Optional[int]) -> None:
+        self.hub = hub
         self.nid = nid
-        self.args = broker.args
+        self.args = hub.args
         self.E: EnvParams = self.args.E
-        self.log = broker.log
-        self.asrv = broker.asrv
+        self.log = hub.log
+        self.asrv = hub.asrv

-        # redefine in case of multiprocessing
-        socket.setdefaulttimeout(120)
         self.t0 = time.time()
         nsuf = "-n{}-i{:x}".format(nid, os.getpid()) if nid else ""
         self.magician = Magician()
-        self.nm = NetMap([], {})
+        self.nm = NetMap([], [])
         self.ssdp: Optional["SSDPr"] = None
         self.gpwd = Garda(self.args.ban_pw)
         self.g404 = Garda(self.args.ban_404)
@@ -169,7 +166,7 @@ class HttpSrv(object):
         if self.args.zs:
             from .ssdp import SSDPr

-            self.ssdp = SSDPr(broker)
+            self.ssdp = SSDPr(hub)

         if self.tp_q:
             self.start_threads(4)
@@ -186,8 +183,7 @@ class HttpSrv(object):
     def post_init(self) -> None:
         try:
-            x = self.broker.ask("thumbsrv.getcfg")
-            self.th_cfg = x.get()
+            self.th_cfg = self.hub.thumbsrv.getcfg()
         except:
             pass
@@ -237,19 +233,11 @@ class HttpSrv(object):
             self.t_periodic = None
             return

-    def listen(self, sck: socket.socket, nlisteners: int) -> None:
-        if self.args.j != 1:
-            # lost in the pickle; redefine
-            if not ANYWIN or self.args.reuseaddr:
-                sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
-
-            sck.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
-            sck.settimeout(None)  # < does not inherit, ^ opts above do
-
+    def listen(self, sck: socket.socket) -> None:
         ip, port = sck.getsockname()[:2]
         self.srvs.append(sck)
         self.bound.add((ip, port))
-        self.nclimax = math.ceil(self.args.nc * 1.0 / nlisteners)
+        self.nclimax = self.args.nc
         Daemon(
             self.thr_listen,
             "httpsrv-n{}-listen-{}-{}".format(self.nid or "0", ip, port),
@@ -265,7 +253,7 @@ class HttpSrv(object):
         self.log(self.name, msg)

         def fun() -> None:
-            self.broker.say("cb_httpsrv_up")
+            self.hub.cb_httpsrv_up()

         threading.Thread(target=fun, name="sig-hsrv-up1").start()


@@ -15,6 +15,7 @@ if TYPE_CHECKING:
class Metrics(object): class Metrics(object):
def __init__(self, hsrv: "HttpSrv") -> None: def __init__(self, hsrv: "HttpSrv") -> None:
self.hsrv = hsrv self.hsrv = hsrv
self.hub = hsrv.hub
def tx(self, cli: "HttpCli") -> bool: def tx(self, cli: "HttpCli") -> bool:
if not cli.avol: if not cli.avol:
@@ -88,8 +89,8 @@ class Metrics(object):
addg("cpp_total_bans", str(self.hsrv.nban), t) addg("cpp_total_bans", str(self.hsrv.nban), t)
if not args.nos_vst: if not args.nos_vst:
x = self.hsrv.broker.ask("up2k.get_state") zs = self.hub.up2k.get_state()
vs = json.loads(x.get()) vs = json.loads(zs)
nvidle = 0 nvidle = 0
nvbusy = 0 nvbusy = 0
@@ -146,8 +147,7 @@ class Metrics(object):
volsizes = [] volsizes = []
try: try:
ptops = [x.realpath for _, x in allvols] ptops = [x.realpath for _, x in allvols]
x = self.hsrv.broker.ask("up2k.get_volsizes", ptops) volsizes = self.hub.up2k.get_volsizes(ptops)
volsizes = x.get()
except Exception as ex: except Exception as ex:
cli.log("tx_stats get_volsizes: {!r}".format(ex), 3) cli.log("tx_stats get_volsizes: {!r}".format(ex), 3)
@@ -204,8 +204,7 @@ class Metrics(object):
tnbytes = 0 tnbytes = 0
tnfiles = 0 tnfiles = 0
try: try:
x = self.hsrv.broker.ask("up2k.get_unfinished") xs = self.hub.up2k.get_unfinished()
xs = x.get()
if not xs: if not xs:
raise Exception("up2k mutex acquisition timed out") raise Exception("up2k mutex acquisition timed out")


@@ -12,7 +12,6 @@ from .multicast import MC_Sck, MCast
from .util import CachedSet, html_escape, min_ex from .util import CachedSet, html_escape, min_ex
if TYPE_CHECKING: if TYPE_CHECKING:
from .broker_util import BrokerCli
from .httpcli import HttpCli from .httpcli import HttpCli
from .svchub import SvcHub from .svchub import SvcHub
@@ -32,9 +31,9 @@ class SSDP_Sck(MC_Sck):
class SSDPr(object): class SSDPr(object):
"""generates http responses for httpcli""" """generates http responses for httpcli"""
def __init__(self, broker: "BrokerCli") -> None: def __init__(self, hub: "SvcHub") -> None:
self.broker = broker self.hub = hub
self.args = broker.args self.args = hub.args
def reply(self, hc: "HttpCli") -> bool: def reply(self, hc: "HttpCli") -> bool:
if hc.vpath.endswith("device.xml"): if hc.vpath.endswith("device.xml"):


@@ -1,6 +1,7 @@
# coding: utf-8 # coding: utf-8
from __future__ import print_function, unicode_literals from __future__ import print_function, unicode_literals
import argparse
import re import re
import stat import stat
import tarfile import tarfile
@@ -44,11 +45,12 @@ class StreamTar(StreamArc):
def __init__( def __init__(
self, self,
log: "NamedLogger", log: "NamedLogger",
args: argparse.Namespace,
fgen: Generator[dict[str, Any], None, None], fgen: Generator[dict[str, Any], None, None],
cmp: str = "", cmp: str = "",
**kwargs: Any **kwargs: Any
): ):
super(StreamTar, self).__init__(log, fgen) super(StreamTar, self).__init__(log, args, fgen)
self.ci = 0 self.ci = 0
self.co = 0 self.co = 0
@@ -126,7 +128,7 @@ class StreamTar(StreamArc):
inf.gid = 0 inf.gid = 0
self.ci += inf.size self.ci += inf.size
with open(fsenc(src), "rb", 512 * 1024) as fo: with open(fsenc(src), "rb", self.args.iobuf) as fo:
self.tar.addfile(inf, fo) self.tar.addfile(inf, fo)
def _gen(self) -> None: def _gen(self) -> None:


@@ -1,6 +1,7 @@
# coding: utf-8 # coding: utf-8
from __future__ import print_function, unicode_literals from __future__ import print_function, unicode_literals
import argparse
import os import os
import tempfile import tempfile
from datetime import datetime from datetime import datetime
@@ -20,10 +21,12 @@ class StreamArc(object):
def __init__( def __init__(
self, self,
log: "NamedLogger", log: "NamedLogger",
args: argparse.Namespace,
fgen: Generator[dict[str, Any], None, None], fgen: Generator[dict[str, Any], None, None],
**kwargs: Any **kwargs: Any
): ):
self.log = log self.log = log
self.args = args
self.fgen = fgen self.fgen = fgen
self.stopped = False self.stopped = False


@@ -28,9 +28,10 @@ if True: # pylint: disable=using-constant-test
import typing import typing
from typing import Any, Optional, Union from typing import Any, Optional, Union
from .__init__ import ANYWIN, E, EXE, MACOS, TYPE_CHECKING, EnvParams, unicode from .__init__ import ANYWIN, EXE, TYPE_CHECKING, E, EnvParams, unicode
from .authsrv import BAD_CFG, AuthSrv from .authsrv import BAD_CFG, AuthSrv
from .cert import ensure_cert from .cert import ensure_cert
from .httpsrv import HttpSrv
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE from .mtag import HAVE_FFMPEG, HAVE_FFPROBE
from .tcpsrv import TcpSrv from .tcpsrv import TcpSrv
from .th_srv import HAVE_PIL, HAVE_VIPS, HAVE_WEBP, ThumbSrv from .th_srv import HAVE_PIL, HAVE_VIPS, HAVE_WEBP, ThumbSrv
@@ -51,7 +52,6 @@ from .util import (
ansi_re, ansi_re,
build_netmap, build_netmap,
min_ex, min_ex,
mp,
odfusion, odfusion,
pybin, pybin,
start_log_thrs, start_log_thrs,
@@ -67,16 +67,6 @@ if TYPE_CHECKING:
class SvcHub(object): class SvcHub(object):
"""
Hosts all services which cannot be parallelized due to reliance on monolithic resources.
Creates a Broker which does most of the heavy stuff; hosted services can use this to perform work:
hub.broker.<say|ask>(destination, args_list).
Either BrokerThr (plain threads) or BrokerMP (multiprocessing) is used depending on configuration.
Nothing is returned synchronously; if you want any value returned from the call,
put() can return a queue (if want_reply=True) which has a blocking get() with the response.
"""
def __init__( def __init__(
self, self,
args: argparse.Namespace, args: argparse.Namespace,
@@ -163,15 +153,25 @@ class SvcHub(object):
if args.log_thrs: if args.log_thrs:
start_log_thrs(self.log, args.log_thrs, 0) start_log_thrs(self.log, args.log_thrs, 0)
if not args.use_fpool and args.j != 1: for name, arg in (
args.no_fpool = True ("iobuf", "iobuf"),
t = "multithreading enabled with -j {}, so disabling fpool -- this can reduce upload performance on some filesystems" ("s-rd-sz", "s_rd_sz"),
self.log("root", t.format(args.j)) ("s-wr-sz", "s_wr_sz"),
):
zi = getattr(args, arg)
if zi < 32768:
t = "WARNING: expect very poor performance because you specified a very low value (%d) for --%s"
self.log("root", t % (zi, name), 3)
zi2 = 2 ** (zi - 1).bit_length()
if zi != zi2:
zi3 = 2 ** ((zi - 1).bit_length() - 1)
t = "WARNING: expect poor performance because --%s is not a power-of-two; consider using %d or %d instead of %d"
self.log("root", t % (name, zi2, zi3, zi), 3)
if not args.no_fpool and args.j != 1: if args.s_rd_sz > args.iobuf:
t = "WARNING: ignoring --use-fpool because multithreading (-j{}) is enabled" t = "WARNING: --s-rd-sz (%d) is larger than --iobuf (%d); this may lead to reduced performance"
self.log("root", t.format(args.j), c=3) self.log("root", t % (args.s_rd_sz, args.iobuf), 3)
args.no_fpool = True
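the sancheck added here hinges on `int.bit_length()`; a quick standalone sketch of how the two suggested replacement sizes bracket a non-power-of-two buffer size:

```python
# the two candidate sizes suggested by the warning above: the nearest
# powers of two on either side of a given buffer size n
def pow2_suggestions(n: int) -> tuple[int, int]:
    hi = 2 ** (n - 1).bit_length()        # smallest power of two >= n
    lo = 2 ** ((n - 1).bit_length() - 1)  # half of that, i.e. below n
    return hi, lo

print(pow2_suggestions(262144))  # (262144, 131072): already a power of two, no warning
print(pow2_suggestions(200000))  # (262144, 131072): warn, suggest either neighbor
```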
bri = "zy"[args.theme % 2 :][:1] bri = "zy"[args.theme % 2 :][:1]
ch = "abcdefghijklmnopqrstuvwx"[int(args.theme / 2)] ch = "abcdefghijklmnopqrstuvwx"[int(args.theme / 2)]
@@ -296,13 +296,7 @@ class SvcHub(object):
self.mdns: Optional["MDNS"] = None self.mdns: Optional["MDNS"] = None
self.ssdp: Optional["SSDPd"] = None self.ssdp: Optional["SSDPd"] = None
# decide which worker impl to use self.httpsrv = HttpSrv(self, None)
if self.check_mp_enable():
from .broker_mp import BrokerMp as Broker
else:
from .broker_thr import BrokerThr as Broker # type: ignore
self.broker = Broker(self)
def start_ftpd(self) -> None: def start_ftpd(self) -> None:
time.sleep(30) time.sleep(30)
@@ -341,15 +335,14 @@ class SvcHub(object):
def thr_httpsrv_up(self) -> None: def thr_httpsrv_up(self) -> None:
time.sleep(1 if self.args.ign_ebind_all else 5) time.sleep(1 if self.args.ign_ebind_all else 5)
expected = self.broker.num_workers * self.tcpsrv.nsrv expected = self.tcpsrv.nsrv
failed = expected - self.httpsrv_up failed = expected - self.httpsrv_up
if not failed: if not failed:
return return
if self.args.ign_ebind_all: if self.args.ign_ebind_all:
if not self.tcpsrv.srv: if not self.tcpsrv.srv:
for _ in range(self.broker.num_workers): self.cb_httpsrv_up()
self.broker.say("cb_httpsrv_up")
return return
if self.args.ign_ebind and self.tcpsrv.srv: if self.args.ign_ebind and self.tcpsrv.srv:
@@ -367,8 +360,6 @@ class SvcHub(object):
def cb_httpsrv_up(self) -> None: def cb_httpsrv_up(self) -> None:
self.httpsrv_up += 1 self.httpsrv_up += 1
if self.httpsrv_up != self.broker.num_workers:
return
ar = self.args ar = self.args
for _ in range(10 if ar.ftp or ar.ftps else 0): for _ in range(10 if ar.ftp or ar.ftps else 0):
@@ -703,7 +694,6 @@ class SvcHub(object):
self.log("root", "reloading config") self.log("root", "reloading config")
self.asrv.reload() self.asrv.reload()
self.up2k.reload(rescan_all_vols) self.up2k.reload(rescan_all_vols)
self.broker.reload()
self.reloading = 0 self.reloading = 0
def _reload_blocking(self, rescan_all_vols: bool = True) -> None: def _reload_blocking(self, rescan_all_vols: bool = True) -> None:
@@ -788,7 +778,7 @@ class SvcHub(object):
tasks.append(Daemon(self.ssdp.stop, "ssdp")) tasks.append(Daemon(self.ssdp.stop, "ssdp"))
slp = time.time() + 0.5 slp = time.time() + 0.5
self.broker.shutdown() self.httpsrv.shutdown()
self.tcpsrv.shutdown() self.tcpsrv.shutdown()
self.up2k.shutdown() self.up2k.shutdown()
@@ -950,48 +940,6 @@ class SvcHub(object):
if ex.errno != errno.EPIPE: if ex.errno != errno.EPIPE:
raise raise
def check_mp_support(self) -> str:
if MACOS:
return "multiprocessing is wonky on mac osx;"
elif sys.version_info < (3, 3):
return "need python 3.3 or newer for multiprocessing;"
try:
x: mp.Queue[tuple[str, str]] = mp.Queue(1)
x.put(("foo", "bar"))
if x.get()[0] != "foo":
raise Exception()
except:
return "multiprocessing is not supported on your platform;"
return ""
def check_mp_enable(self) -> bool:
if self.args.j == 1:
return False
try:
if mp.cpu_count() <= 1:
raise Exception()
except:
self.log("svchub", "only one CPU detected; multiprocessing disabled")
return False
try:
# support vscode debugger (bonus: same behavior as on windows)
mp.set_start_method("spawn", True)
except AttributeError:
# py2.7 probably, anyways dontcare
pass
err = self.check_mp_support()
if not err:
return True
else:
self.log("svchub", err)
self.log("svchub", "cannot efficiently use multiple CPU cores")
return False
def sd_notify(self) -> None: def sd_notify(self) -> None:
try: try:
zb = os.getenv("NOTIFY_SOCKET") zb = os.getenv("NOTIFY_SOCKET")


@@ -1,6 +1,7 @@
# coding: utf-8 # coding: utf-8
from __future__ import print_function, unicode_literals from __future__ import print_function, unicode_literals
import argparse
import calendar import calendar
import stat import stat
import time import time
@@ -218,12 +219,13 @@ class StreamZip(StreamArc):
def __init__( def __init__(
self, self,
log: "NamedLogger", log: "NamedLogger",
args: argparse.Namespace,
fgen: Generator[dict[str, Any], None, None], fgen: Generator[dict[str, Any], None, None],
utf8: bool = False, utf8: bool = False,
pre_crc: bool = False, pre_crc: bool = False,
**kwargs: Any **kwargs: Any
) -> None: ) -> None:
super(StreamZip, self).__init__(log, fgen) super(StreamZip, self).__init__(log, args, fgen)
self.utf8 = utf8 self.utf8 = utf8
self.pre_crc = pre_crc self.pre_crc = pre_crc
@@ -248,7 +250,7 @@ class StreamZip(StreamArc):
crc = 0 crc = 0
if self.pre_crc: if self.pre_crc:
for buf in yieldfile(src): for buf in yieldfile(src, self.args.iobuf):
crc = zlib.crc32(buf, crc) crc = zlib.crc32(buf, crc)
crc &= 0xFFFFFFFF crc &= 0xFFFFFFFF
@@ -257,7 +259,7 @@ class StreamZip(StreamArc):
buf = gen_hdr(None, name, sz, ts, self.utf8, crc, self.pre_crc) buf = gen_hdr(None, name, sz, ts, self.utf8, crc, self.pre_crc)
yield self._ct(buf) yield self._ct(buf)
for buf in yieldfile(src): for buf in yieldfile(src, self.args.iobuf):
if not self.pre_crc: if not self.pre_crc:
crc = zlib.crc32(buf, crc) crc = zlib.crc32(buf, crc)


@@ -297,7 +297,7 @@ class TcpSrv(object):
if self.args.q: if self.args.q:
print(msg) print(msg)
self.hub.broker.say("listen", srv) self.hub.httpsrv.listen(srv)
self.srv = srvs self.srv = srvs
self.bound = bound self.bound = bound
@@ -305,7 +305,7 @@ class TcpSrv(object):
self._distribute_netdevs() self._distribute_netdevs()
def _distribute_netdevs(self): def _distribute_netdevs(self):
self.hub.broker.say("set_netdevs", self.netdevs) self.hub.httpsrv.set_netdevs(self.netdevs)
self.hub.start_zeroconf() self.hub.start_zeroconf()
gencert(self.log, self.args, self.netdevs) gencert(self.log, self.args, self.netdevs)
self.hub.restart_ftpd() self.hub.restart_ftpd()


@@ -97,8 +97,6 @@ class Tftpd(object):
cbak = [] cbak = []
if not self.args.tftp_no_fast and not EXE: if not self.args.tftp_no_fast and not EXE:
try: try:
import inspect
ptn = re.compile(r"(^\s*)log\.debug\(.*\)$") ptn = re.compile(r"(^\s*)log\.debug\(.*\)$")
for C in Cs: for C in Cs:
cbak.append(C.__dict__) cbak.append(C.__dict__)
@@ -342,6 +340,9 @@ class Tftpd(object):
if not self.args.tftp_nols and bos.path.isdir(ap): if not self.args.tftp_nols and bos.path.isdir(ap):
return self._ls(vpath, "", 0, True) return self._ls(vpath, "", 0, True)
if not a:
a = [self.args.iobuf]
return open(ap, mode, *a, **ka) return open(ap, mode, *a, **ka)
def _mkdir(self, vpath: str, *a) -> None: def _mkdir(self, vpath: str, *a) -> None:


@@ -7,7 +7,6 @@ from .__init__ import TYPE_CHECKING
from .authsrv import VFS from .authsrv import VFS
from .bos import bos from .bos import bos
from .th_srv import HAVE_WEBP, thumb_path from .th_srv import HAVE_WEBP, thumb_path
from .util import Cooldown
if True: # pylint: disable=using-constant-test if True: # pylint: disable=using-constant-test
from typing import Optional, Union from typing import Optional, Union
@@ -18,14 +17,11 @@ if TYPE_CHECKING:
class ThumbCli(object): class ThumbCli(object):
def __init__(self, hsrv: "HttpSrv") -> None: def __init__(self, hsrv: "HttpSrv") -> None:
self.broker = hsrv.broker self.hub = hsrv.hub
self.log_func = hsrv.log self.log_func = hsrv.log
self.args = hsrv.args self.args = hsrv.args
self.asrv = hsrv.asrv self.asrv = hsrv.asrv
# cache on both sides for less broker spam
self.cooldown = Cooldown(self.args.th_poke)
try: try:
c = hsrv.th_cfg c = hsrv.th_cfg
if not c: if not c:
@@ -134,13 +130,11 @@ class ThumbCli(object):
if ret: if ret:
tdir = os.path.dirname(tpath) tdir = os.path.dirname(tpath)
if self.cooldown.poke(tdir): self.hub.thumbsrv.poke(tdir)
self.broker.say("thumbsrv.poke", tdir)
if want_opus: if want_opus:
# audio files expire individually # audio files expire individually
if self.cooldown.poke(tpath): self.hub.thumbsrv.poke(tpath)
self.broker.say("thumbsrv.poke", tpath)
return ret return ret
@@ -150,5 +144,4 @@ class ThumbCli(object):
if not bos.path.getsize(os.path.join(ptop, rem)): if not bos.path.getsize(os.path.join(ptop, rem)):
return None return None
x = self.broker.ask("thumbsrv.get", ptop, rem, mtime, fmt) return self.hub.thumbsrv.get(ptop, rem, mtime, fmt)
return x.get() # type: ignore


@@ -16,9 +16,9 @@ from .__init__ import ANYWIN, TYPE_CHECKING
from .authsrv import VFS from .authsrv import VFS
from .bos import bos from .bos import bos
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, ffprobe from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, ffprobe
from .util import BytesIO # type: ignore
from .util import ( from .util import (
FFMPEG_URL, FFMPEG_URL,
BytesIO, # type: ignore
Cooldown, Cooldown,
Daemon, Daemon,
Pebkac, Pebkac,


@@ -2745,9 +2745,9 @@ class Up2k(object):
cj["size"], cj["size"],
cj["ptop"], cj["ptop"],
ap1, ap1,
self.hub.broker, self.hub,
reg, reg,
"up2k._get_volsize", "_get_volsize",
) )
bos.makedirs(ap2) bos.makedirs(ap2)
vfs.lim.nup(cj["addr"]) vfs.lim.nup(cj["addr"])
@@ -3920,7 +3920,7 @@ class Up2k(object):
csz = up2k_chunksize(fsz) csz = up2k_chunksize(fsz)
ret = [] ret = []
suffix = " MB, {}".format(path) suffix = " MB, {}".format(path)
with open(fsenc(path), "rb", 512 * 1024) as f: with open(fsenc(path), "rb", self.args.iobuf) as f:
if self.mth and fsz >= 1024 * 512: if self.mth and fsz >= 1024 * 512:
tlt = self.mth.hash(f, fsz, csz, self.pp, prefix, suffix) tlt = self.mth.hash(f, fsz, csz, self.pp, prefix, suffix)
ret = [x[0] for x in tlt] ret = [x[0] for x in tlt]


@@ -1400,10 +1400,15 @@ def ren_open(
class MultipartParser(object): class MultipartParser(object):
def __init__( def __init__(
self, log_func: "NamedLogger", sr: Unrecv, http_headers: dict[str, str] self,
log_func: "NamedLogger",
args: argparse.Namespace,
sr: Unrecv,
http_headers: dict[str, str],
): ):
self.sr = sr self.sr = sr
self.log = log_func self.log = log_func
self.args = args
self.headers = http_headers self.headers = http_headers
self.re_ctype = re.compile(r"^content-type: *([^; ]+)", re.IGNORECASE) self.re_ctype = re.compile(r"^content-type: *([^; ]+)", re.IGNORECASE)
@@ -1502,7 +1507,7 @@ class MultipartParser(object):
def _read_data(self) -> Generator[bytes, None, None]: def _read_data(self) -> Generator[bytes, None, None]:
blen = len(self.boundary) blen = len(self.boundary)
bufsz = 32 * 1024 bufsz = self.args.s_rd_sz
while True: while True:
try: try:
buf = self.sr.recv(bufsz) buf = self.sr.recv(bufsz)
@@ -2243,10 +2248,11 @@ def shut_socket(log: "NamedLogger", sck: socket.socket, timeout: int = 3) -> Non
sck.close() sck.close()
def read_socket(sr: Unrecv, total_size: int) -> Generator[bytes, None, None]: def read_socket(
sr: Unrecv, bufsz: int, total_size: int
) -> Generator[bytes, None, None]:
remains = total_size remains = total_size
while remains > 0: while remains > 0:
bufsz = 32 * 1024
if bufsz > remains: if bufsz > remains:
bufsz = remains bufsz = remains
@@ -2260,16 +2266,16 @@ def read_socket(sr: Unrecv, total_size: int) -> Generator[bytes, None, None]:
yield buf yield buf
def read_socket_unbounded(sr: Unrecv) -> Generator[bytes, None, None]: def read_socket_unbounded(sr: Unrecv, bufsz: int) -> Generator[bytes, None, None]:
try: try:
while True: while True:
yield sr.recv(32 * 1024) yield sr.recv(bufsz)
except: except:
return return
def read_socket_chunked( def read_socket_chunked(
sr: Unrecv, log: Optional["NamedLogger"] = None sr: Unrecv, bufsz: int, log: Optional["NamedLogger"] = None
) -> Generator[bytes, None, None]: ) -> Generator[bytes, None, None]:
err = "upload aborted: expected chunk length, got [{}] |{}| instead" err = "upload aborted: expected chunk length, got [{}] |{}| instead"
while True: while True:
@@ -2303,7 +2309,7 @@ def read_socket_chunked(
if log: if log:
log("receiving %d byte chunk" % (chunklen,)) log("receiving %d byte chunk" % (chunklen,))
for chunk in read_socket(sr, chunklen): for chunk in read_socket(sr, bufsz, chunklen):
yield chunk yield chunk
x = sr.recv_ex(2, False) x = sr.recv_ex(2, False)
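the reworked `read_socket` above now takes its read size from `--s-rd-sz` instead of a hardcoded 32 KiB; a self-contained sketch (with a trivial stand-in for Unrecv, and a plain Exception where copyparty would raise its own error type):

```python
# parameterized socket reader: read bufsz-sized pieces until total_size
# bytes have been consumed, shrinking the final read to fit
import io
from typing import Generator

class RecvStub:
    def __init__(self, data: bytes) -> None:
        self.f = io.BytesIO(data)

    def recv(self, n: int) -> bytes:
        return self.f.read(n)

def read_socket(sr: RecvStub, bufsz: int, total_size: int) -> Generator[bytes, None, None]:
    remains = total_size
    while remains > 0:
        if bufsz > remains:
            bufsz = remains  # never read past the request body
        buf = sr.recv(bufsz)
        if not buf:
            raise Exception("premature eof")
        remains -= len(buf)
        yield buf

chunks = list(read_socket(RecvStub(b"x" * 700000), 262144, 700000))
print([len(c) for c in chunks])  # [262144, 262144, 175712]
```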
@@ -2361,10 +2367,11 @@ def build_netmap(csv: str):
return NetMap(ips, cidrs, True) return NetMap(ips, cidrs, True)
def yieldfile(fn: str) -> Generator[bytes, None, None]: def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
with open(fsenc(fn), "rb", 512 * 1024) as f: readsz = min(bufsz, 128 * 1024)
with open(fsenc(fn), "rb", bufsz) as f:
while True: while True:
buf = f.read(128 * 1024) buf = f.read(readsz)
if not buf: if not buf:
break break
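the new `yieldfile` signature sketched standalone (filesystem-encoding helper omitted): `bufsz` (the `--iobuf` value) sets the stdio buffer on the file, while each yielded chunk stays at or below 128 KiB:

```python
# buffered file generator: OS-level buffering is bufsz, yielded chunks
# are capped at 128 KiB so downstream consumers get cache-friendly pieces
import os
import tempfile
from typing import Generator

def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
    readsz = min(bufsz, 128 * 1024)
    with open(fn, "rb", bufsz) as f:
        while True:
            buf = f.read(readsz)
            if not buf:
                break
            yield buf

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"a" * 300000)
sizes = [len(b) for b in yieldfile(tf.name, 256 * 1024)]
os.unlink(tf.name)
print(sizes)  # [131072, 131072, 37856]
```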

docs/bufsize.txt Normal file

@@ -0,0 +1,96 @@
notes from testing various buffer sizes of files and sockets
summary:
download-folder-as-tar: would be 7% faster with --iobuf 65536 (but got 20% faster in v1.11.2)
download-folder-as-zip: optimal with default --iobuf 262144
download-file-over-https: optimal with default --iobuf 262144
put-large-file: optimal with default --iobuf 262144, --s-rd-sz 262144 (and got 14% faster in v1.11.2)
post-large-file: optimal with default --iobuf 262144, --s-rd-sz 262144 (and got 18% faster in v1.11.2)
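a rough local equivalent of these measurements, for file reads only (hypothetical helper; absolute numbers will vary wildly by OS, disk, and page cache):

```python
# micro-benchmark in the spirit of the oha runs below: time one full
# sequential read of a scratch file at various buffer sizes
import os
import tempfile
import time

def bench_read(path: str, bufsz: int) -> float:
    t0 = time.perf_counter()
    with open(path, "rb", bufsz) as f:
        while f.read(bufsz):
            pass
    return time.perf_counter() - t0

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(4 * 1024 * 1024))  # 4 MiB of scratch data
for kib in (32, 256, 512):
    dt = bench_read(tf.name, kib * 1024)
    print("%4d KiB: %.4fs" % (kib, dt))
os.unlink(tf.name)
```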
----
oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/bigs/?tar
3.3 req/s 1.11.1
4.3 4.0 3.3 req/s 1.12.2
64 256 512 --iobuf 256 (prefer smaller)
32 32 32 --s-rd-sz
oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/bigs/?zip
2.9 req/s 1.11.1
2.5 2.9 2.9 req/s 1.12.2
64 256 512 --iobuf 256 (prefer bigger)
32 32 32 --s-rd-sz
oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/pairdupes/?tar
8.3 req/s 1.11.1
8.4 8.4 8.5 req/s 1.12.2
64 256 512 --iobuf 256 (prefer bigger)
32 32 32 --s-rd-sz
oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/pairdupes/?zip
13.9 req/s 1.11.1
14.1 14.0 13.8 req/s 1.12.2
64 256 512 --iobuf 256 (prefer smaller)
32 32 32 --s-rd-sz
oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/pairdupes/987a
5260 req/s 1.11.1
5246 5246 5280 5268 req/s 1.12.2
64 256 512 256 --iobuf dontcare
32 32 32 512 --s-rd-sz dontcare
oha -z10s -c1 --ipv4 --insecure https://127.0.0.1:3923/pairdupes/987a
4445 req/s 1.11.1
4462 4494 4444 req/s 1.12.2
64 256 512 --iobuf dontcare
32 32 32 --s-rd-sz
oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/bigs/gssc-02-cannonball-skydrift/track10.cdda.flac
95 req/s 1.11.1
95 97 req/s 1.12.2
64 512 --iobuf dontcare
32 32 --s-rd-sz
oha -z10s -c1 --ipv4 --insecure https://127.0.0.1:3923/bigs/gssc-02-cannonball-skydrift/track10.cdda.flac
15.4 req/s 1.11.1
15.4 15.3 14.9 15.4 req/s 1.12.2
64 256 512 512 --iobuf 256 (prefer smaller, and smaller than s-wr-sz)
32 32 32 32 --s-rd-sz
256 256 256 512 --s-wr-sz
----
python3 ~/dev/old/copyparty\ v1.11.1\ dont\ ban\ the\ pipes.py -q -i 127.0.0.1 -v .::A --daw
python3 ~/dev/copyparty/dist/copyparty-sfx.py -q -i 127.0.0.1 -v .::A --daw --iobuf $((1024*512))
oha -z10s -c1 --ipv4 --insecure -mPUT -r0 -D ~/Music/gssc-02-cannonball-skydrift/track10.cdda.flac http://127.0.0.1:3923/a.bin
10.8 req/s 1.11.1
10.8 11.5 11.8 12.1 12.2 12.3 req/s new
512 512 512 512 512 256 --iobuf 256
32 64 128 256 512 256 --s-rd-sz 256 (prefer bigger)
----
buildpost() {
b=--jeg-er-grensestaven;
printf -- "$b\r\nContent-Disposition: form-data; name=\"act\"\r\n\r\nbput\r\n$b\r\nContent-Disposition: form-data; name=\"f\"; filename=\"a.bin\"\r\nContent-Type: audio/mpeg\r\n\r\n"
cat "$1"
printf -- "\r\n${b}--\r\n"
}
buildpost ~/Music/gssc-02-cannonball-skydrift/track10.cdda.flac >big.post
buildpost ~/Music/bottomtext.txt >smol.post
oha -z10s -c1 --ipv4 --insecure -mPOST -r0 -T 'multipart/form-data; boundary=jeg-er-grensestaven' -D big.post http://127.0.0.1:3923/?replace
9.6 11.2 11.3 11.1 10.9 req/s v1.11.2
512 512 256 128 256 --iobuf 256
32 512 256 128 128 --s-rd-sz 256
oha -z10s -c1 --ipv4 --insecure -mPOST -r0 -T 'multipart/form-data; boundary=jeg-er-grensestaven' -D smol.post http://127.0.0.1:3923/?replace
2445 2414 2401 2437
256 128 256 256 --iobuf 256
128 128 256 64 --s-rd-sz 128 (but use 256 since big posts are more important)
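the buildpost() shell helper above can be written in python too (same boundary, same two form parts); handy for generating the .post files without coreutils:

```python
# python twin of the buildpost() shell function: an act=bput field
# followed by the file payload, closed with the final boundary
def buildpost(data: bytes, fname: str = "a.bin") -> bytes:
    b = b"--jeg-er-grensestaven"
    out = b + b'\r\nContent-Disposition: form-data; name="act"\r\n\r\nbput\r\n'
    out += b + b'\r\nContent-Disposition: form-data; name="f"; filename="' + fname.encode() + b'"\r\n'
    out += b"Content-Type: audio/mpeg\r\n\r\n"
    out += data
    out += b"\r\n" + b + b"--\r\n"
    return out

body = buildpost(b"hello")
print(body.count(b"--jeg-er-grensestaven"))  # 3 (two parts + closing boundary)
```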


@@ -1,3 +1,103 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0318-1709 `v1.11.1` dont ban the pipes
the [previous release](https://github.com/9001/copyparty/releases/tag/v1.11.0) had all the fun new features... this one's just bugfixes
## bugfixes
* less aggressive rejection of requests from banned IPs 51d31588
* clients would get kicked before the request headers were parsed (including the xff header), meaning the server could become inaccessible to everyone if the reverse-proxy itself were to "somehow" get banned
* ...which can happen if a server behind cloudflare also accepts non-cloudflare connections, meaning the client IP would not be resolved, and it'll ban the LAN IP instead heh
* that part still happens, but now it won't affect legit clients through the intended route
* the old behavior can be restored with `--early-ban` to save some cycles, and/or avoid slowloris somewhat
* the unpost feature could appear to be disabled on servers where no volume was mapped to `/` 0287c7ba
* python 3.12 support for [compiling the dependencies](https://github.com/9001/copyparty/tree/hovudstraum/bin/mtag#dependencies) necessary to detect bpm/key in audio files 32553e45
## other changes
* mention [real-ip configuration](https://github.com/9001/copyparty?tab=readme-ov-file#real-ip) in the readme ee80cdb9
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0315-2047 `v1.11.0` You Can (Not) Proceed
this release was made possible by [stoltzekleiven, kvikklunsj, and tako](https://a.ocv.me/pub/g/nerd-stuff/2024-0310-stoltzekleiven.jpg)
## new features
* #62 support for [identity providers](https://github.com/9001/copyparty#identity-providers) and automatically creating volumes for each user/group ("home folders")
* login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions
* [documentation](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md) and [examples](https://github.com/9001/copyparty/tree/hovudstraum/docs/examples/docker/idp-authelia-traefik) could still use some help (I did my best)
* #77 UI to cancel unfinished uploads (available in the 🧯 unpost tab) 3f05b665
* the user's IP and username must match the upload by default; can be changed with global-option / volflag `u2abort`
* new volflag `sparse` to pretend sparse files are supported even if the filesystem doesn't 8785d2f9
* gives drastically better performance when writing to s3 buckets through juicefs/geesefs
* only for when you know the filesystem can deal with it (so juicefs/geesefs is OK, but **definitely not** fat32)
* `--xff-src` and `--ipa` now support CIDR notation (but the old syntax still works) b377791b
* ux:
* #74 option to use [custom fonts](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice) 263adec7 6cc7101d 8016e671
* option to disable autoplay when page url contains a song hash 8413ed6d
* good if you're using copyparty to listen to music at the office and the office policy is to have the webbrowser automatically restart to install updates, meaning your coworkers are suddenly and involuntarily enjoying some loud af jcore while you're asleep at home
## bugfixes
* don't panic if cloudflare (or another reverse-proxy) decides to hijack json responses and replace them with html 7741870d
* #73 the fancy markdown editor was incompatible with caddy (a reverse-proxy) ac96fd9c
* media player could get confused if neighboring folders had songs with the same filenames 206af8f1
* benign race condition in the config reloader (could only be triggered by admins and/or SIGUSR1) 096de508
* running tftp with optimizations enabled would cause issues for `--ipa` b377791b
* cosmetic tftp bugs 115020ba
* ux:
* up2k rendering glitch if the last couple uploads were dupes 547a4863
* up2k rendering glitch when switching between readonly/writeonly folders 51a83b04
* markdown editor preview was glitchy on tiny screens e5582605
## other changes
* add a [sharex v12.1](https://github.com/9001/copyparty/tree/hovudstraum/contrib#sharexsxcu) config example 2527e903
* make it easier to discover/diagnose issues with docker and/or reverse-proxy config d744f3ff
* stop recommending the use of `--xff-src=any` in the log messages 7f08f10c
* ux:
* remove the `k304` togglebutton in the controlpanel by default 1c011ff0
* mention that a full restart is required for `[global]` config changes to take effect 0c039219
* docs e78af022
* [how to use copyparty with amazon aws s3](https://github.com/9001/copyparty#using-the-cloud-as-storage)
* faq: http/https confusion caused by incorrectly configured cloudflare
* #76 docker: ftp-server howto
* copyparty.exe: updated pyinstaller to 6.5.0 bdbcbbb0
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0221-2132 `v1.10.2` tall thumbs
## new features
* thumbnails can be way taller when centercrop is disabled in the browser UI 5026b212
* good for folders with lots of portrait pics (no more letterboxing)
* more thumbnail stuff:
* zoom levels are twice as granular 5026b212
* write-only folders get an "upload-only" icon 89c6c2e0
* inaccessible files/folders get a 403/404 icon 8a38101e
## bugfixes
* tftp fixes d07859e8
* server could crash if a nic disappeared / got restarted mid-transfer
* tiny resource leak if dualstack causes ipv4 bind to fail
* thumbnails:
* when behind a caching proxy (cloudflare), icons in folders would be a random mix of png and svg 43ee6b9f
* produce valid folder icons when thumbnails are disabled 14af136f
* trailing newline in html responses d39a99c9
## other changes
* webdeps: update dompurify 13e77777
* copyparty.exe: update jinja2, markupsafe, pyinstaller, upx 13e77777
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀ ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0218-1554 `v1.10.1` big thumbs # 2024-0218-1554 `v1.10.1` big thumbs


@@ -164,6 +164,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| PUT | `?xz` | (binary data) | compress with xz and write into file at URL | | PUT | `?xz` | (binary data) | compress with xz and write into file at URL |
| mPOST | | `f=FILE` | upload `FILE` into the folder at URL | | mPOST | | `f=FILE` | upload `FILE` into the folder at URL |
| mPOST | `?j` | `f=FILE` | ...and reply with json | | mPOST | `?j` | `f=FILE` | ...and reply with json |
| mPOST | `?replace` | `f=FILE` | ...and overwrite existing files |
| mPOST | | `act=mkdir`, `name=foo` | create directory `foo` at URL | | mPOST | | `act=mkdir`, `name=foo` | create directory `foo` at URL |
| POST | `?delete` | | delete URL recursively | | POST | `?delete` | | delete URL recursively |
| jPOST | `?delete` | `["/foo","/bar"]` | delete `/foo` and `/bar` recursively | | jPOST | `?delete` | `["/foo","/bar"]` | delete `/foo` and `/bar` recursively |


@@ -5,3 +5,18 @@ to configure IdP from scratch, you must place copyparty behind a reverse-proxy w
in the copyparty `[global]` config, specify which headers to read client info from; username is required (`idp-h-usr: X-Authooley-User`), group(s) are optional (`idp-h-grp: X-Authooley-Groups`) in the copyparty `[global]` config, specify which headers to read client info from; username is required (`idp-h-usr: X-Authooley-User`), group(s) are optional (`idp-h-grp: X-Authooley-Groups`)
* it is also required to specify the subnet that legit requests will be coming from, for example `--xff-src=10.88.0.0/24` to allow 10.88.x.x (or `--xff-src=lan` for all private IPs), and it is recommended to configure the reverseproxy to include a secret header as proof that the other headers are also legit (and not smuggled in by a malicious client), telling copyparty the headername to expect with `idp-h-key: shangala-bangala` * it is also required to specify the subnet that legit requests will be coming from, for example `--xff-src=10.88.0.0/24` to allow 10.88.x.x (or `--xff-src=lan` for all private IPs), and it is recommended to configure the reverseproxy to include a secret header as proof that the other headers are also legit (and not smuggled in by a malicious client), telling copyparty the headername to expect with `idp-h-key: shangala-bangala`
# important notes
## IdP volumes are forgotten on shutdown
IdP volumes, meaning dynamically-created volumes, meaning volumes that contain `${u}` or `${g}` in their URL, will be forgotten during a server restart and then "revived" when the volume's owner sends their first request after the restart
until each IdP volume is revived, it will inherit the permissions of its parent volume (if any)
this means that, if an IdP volume is located inside a folder that is readable by anyone, then each of those IdP volumes will **also become readable by anyone** until the volume is revived
and likewise -- if the IdP volume is inside a folder that is only accessible by certain users, but the IdP volume is configured to allow access from unauthenticated users, then the contents of the volume will NOT be accessible until it is revived
until this limitation is fixed (if ever), it is recommended to place IdP volumes inside an appropriate parent volume, so they can inherit acceptable permissions until their revival; see the "strategic volumes" at the bottom of [./examples/docker/idp/copyparty.conf](./examples/docker/idp/copyparty.conf)

docs/xff.md Normal file

@@ -0,0 +1,45 @@
when running behind a reverse-proxy, or a WAF, or another protection service such as cloudflare:
if you (and maybe everybody else) keep getting a message that says `thank you for playing`, then you have been banned for malicious traffic; this ban applies to the IP-address that copyparty *thinks* identifies the shady client -- so, depending on your setup, you might have to tell copyparty where to find the correct IP
knowing the correct IP is also crucial for some other features, such as the unpost feature which lets you delete your own recent uploads -- but if everybody has the same IP, well...
----
for most common setups, there should be a helpful message in the server-log explaining what to do, something like `--xff-src=10.88.0.0/16` or `--xff-src=lan` to accept the `X-Forwarded-For` header from your reverse-proxy with a LAN IP of `10.88.x.y`
if you are behind cloudflare, it is recommended to also set `--xff-hdr=cf-connecting-ip` to use a more trustworthy source of info, but then it's also very important to ensure your reverse-proxy does not accept connections from anything BUT cloudflare; you can do this by generating an ip-address allowlist and rejecting all other connections
* if you are using nginx as your reverse-proxy, see the [example nginx config](https://github.com/9001/copyparty/blob/hovudstraum/contrib/nginx/copyparty.conf) on how the cloudflare allowlist can be done
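one way to build such an allowlist (a sketch, assuming nginx; cloudflare publishes its ranges at cloudflare.com/ips-v4 and cloudflare.com/ips-v6) is to turn each trusted CIDR into an `allow` directive and end with `deny all;`:

```shell
# sketch: convert a list of trusted CIDRs into nginx allow-directives;
# the two ranges below are just examples -- feed in cloudflare's full
# published lists instead
printf '%s\n' 173.245.48.0/20 2400:cb00::/32 |
  sed 's/^/allow /; s/$/;/'
# append "deny all;" and include the resulting file in your server block
```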
----
the server-log will give recommendations in the form of commandline arguments;
to do the same thing using config files, take the options that are suggested in the serverlog and put them into the `[global]` section in your `copyparty.conf` like so:
```yaml
[global]
xff-src: lan
xff-hdr: cf-connecting-ip
```
----
# but if you just want to get it working:
...and don't care about security, you can optionally disable the bot-detectors, either by specifying commandline-args `--ban-404=no --ban-403=no --ban-422=no --ban-url=no --ban-pw=no`
or by adding these lines inside the `[global]` section in your `copyparty.conf`:
```yaml
[global]
ban-404: no
ban-403: no
ban-422: no
ban-url: no
ban-pw: no
```
but remember that this will make other features insecure as well, such as unpost

```diff
@@ -1,4 +1,4 @@
-FROM fedora:38
+FROM fedora:39
 WORKDIR /z
 LABEL org.opencontainers.image.url="https://github.com/9001/copyparty" \
     org.opencontainers.image.source="https://github.com/9001/copyparty/tree/hovudstraum/scripts/docker" \
@@ -21,7 +21,7 @@ RUN dnf install -y \
     vips vips-jxl vips-poppler vips-magick \
     python3-numpy fftw libsndfile \
     gcc gcc-c++ make cmake patchelf jq \
-    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools \
+    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools python3-wheel \
     vamp-plugin-sdk qm-vamp-plugins \
     vamp-plugin-sdk-devel vamp-plugin-sdk-static \
     && rm -f /usr/lib/python3*/EXTERNALLY-MANAGED \
@@ -29,7 +29,7 @@ RUN dnf install -y \
     && bash install-deps.sh \
     && dnf erase -y \
     gcc gcc-c++ make cmake patchelf jq \
-    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools \
+    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools python3-wheel \
     vamp-plugin-sdk-devel vamp-plugin-sdk-static \
     && dnf clean all \
     && find /usr/ -name __pycache__ | xargs rm -rf \
```

```diff
@@ -368,7 +368,7 @@ git describe --tags >/dev/null 2>/dev/null && {
 printf '%s\n' "$git_ver" | grep -qE '^v[0-9\.]+-[0-9]+-g[0-9a-f]+$' && {
     # long format (unreleased commit)
-    t_ver="$(printf '%s\n' "$ver" | sed -r 's/\./, /g; s/(.*) (.*)/\1 "\2"/')"
+    t_ver="$(printf '%s\n' "$ver" | sed -r 's/[-.]/, /g; s/(.*) (.*)/\1 "\2"/')"
 }
 [ -z "$t_ver" ] && {
```

```diff
@@ -69,8 +69,6 @@ sed -ri s/copyparty.exe/copyparty$esuf.exe/ loader.rc2
 excl=(
     asyncio
-    copyparty.broker_mp
-    copyparty.broker_mpw
     copyparty.smbd
     ctypes.macholib
     curses
```

```diff
@@ -7,10 +7,6 @@ copyparty/bos,
 copyparty/bos/__init__.py,
 copyparty/bos/bos.py,
 copyparty/bos/path.py,
-copyparty/broker_mp.py,
-copyparty/broker_mpw.py,
-copyparty/broker_thr.py,
-copyparty/broker_util.py,
 copyparty/cert.py,
 copyparty/cfg.py,
 copyparty/dxml.py,
```

```diff
@@ -234,8 +234,9 @@ def u8(gen):
 def yieldfile(fn):
-    with open(fn, "rb") as f:
-        for block in iter(lambda: f.read(64 * 1024), b""):
+    s = 64 * 1024
+    with open(fn, "rb", s * 4) as f:
+        for block in iter(lambda: f.read(s), b""):
             yield block
```
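the new shape of `yieldfile` -- one tunable read size, with the stdlib `open()` buffer set to 4x that size -- can be sketched as a standalone generator; `read_file_blocks` is a made-up name for illustration, not the repo's API:

```python
import os
import tempfile

def read_file_blocks(fn, s=256 * 1024):
    # read the file in fixed-size blocks; the open() buffer is 4x the
    # block size, mirroring the open(fn, "rb", s * 4) in the diff
    with open(fn, "rb", s * 4) as f:
        for block in iter(lambda: f.read(s), b""):
            yield block

# quick demo on a 300 KiB temporary file
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (300 * 1024))
os.close(fd)
blocks = list(read_file_blocks(path, s=256 * 1024))
print([len(b) for b in blocks])  # one full 256 KiB block, then the remainder
os.remove(path)
```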

tests/res/idp/6.conf (new file, +24 lines)

```yaml
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[/get/${u}]
  /get/${u}
  accs:
    g: *
    r: ${u}, @su
    m: @su

[/priv/${u}]
  /priv/${u}
  accs:
    r: ${u}, @su
    m: @su

[/team/${g}/${u}]
  /team/${g}/${u}
  accs:
    r: @${g}
```

```diff
@@ -49,11 +49,7 @@ class TestHttpCli(unittest.TestCase):
         with open(filepath, "wb") as f:
             f.write(filepath.encode("utf-8"))
-        vcfg = [
-            ".::r,u1:r.,u2",
-            "a:a:r,u1:r,u2",
-            ".b:.b:r.,u1:r,u2"
-        ]
+        vcfg = [".::r,u1:r.,u2", "a:a:r,u1:r,u2", ".b:.b:r.,u1:r,u2"]
         self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"], e2dsa=True)
         self.asrv = AuthSrv(self.args, self.log)
@@ -96,7 +92,7 @@ class TestHttpCli(unittest.TestCase):
         tar = tarfile.open(fileobj=io.BytesIO(b), mode="r|").getnames()
         top = ("top" if not url else url.lstrip(".").split("/")[0]) + "/"
         assert len(tar) == len([x for x in tar if x.startswith(top)])
-        return " ".join([x[len(top):] for x in tar])
+        return " ".join([x[len(top) :] for x in tar])

     def curl(self, url, uname, binary=False):
         conn = tu.VHttpConn(self.args, self.asrv, self.log, hdr(url, uname))
```

```diff
@@ -15,6 +15,16 @@ class TestVFS(unittest.TestCase):
         print(json.dumps(vfs, indent=4, sort_keys=True, default=lambda o: o.__dict__))

     def log(self, src, msg, c=0):
+        m = "%s" % (msg,)
+        if (
+            "warning: filesystem-path does not exist:" in m
+            or "you are sharing a system directory:" in m
+            or "reinitializing due to new user from IdP:" in m
+            or m.startswith("hint: argument")
+            or (m.startswith("loaded ") and " config files:" in m)
+        ):
+            return
         print(("[%s] %s" % (src, msg)).encode("ascii", "replace").decode("ascii"))

     def nav(self, au, vp):
@@ -30,21 +40,23 @@ class TestVFS(unittest.TestCase):
         self.assertEqual(unpacked, expected + [[]] * pad)

     def assertAxsAt(self, au, vp, expected):
-        self.assertAxs(self.nav(au, vp).axs, expected)
+        vn = self.nav(au, vp)
+        self.assertAxs(vn.axs, expected)

     def assertNodes(self, vfs, expected):
         got = list(sorted(vfs.nodes.keys()))
         self.assertEqual(got, expected)

     def assertNodesAt(self, au, vp, expected):
-        self.assertNodes(self.nav(au, vp), expected)
+        vn = self.nav(au, vp)
+        self.assertNodes(vn, expected)

     def prep(self):
         here = os.path.abspath(os.path.dirname(__file__))
         cfgdir = os.path.join(here, "res", "idp")
         # globals are applied by main so need to cheat a little
-        xcfg = { "idp_h_usr": "x-idp-user", "idp_h_grp": "x-idp-group" }
+        xcfg = {"idp_h_usr": "x-idp-user", "idp_h_grp": "x-idp-group"}
         return here, cfgdir, xcfg
@@ -140,6 +152,11 @@ class TestVFS(unittest.TestCase):
         self.assertEqual(self.nav(au, "vg/iga1").realpath, "/g1-iga")
         self.assertEqual(self.nav(au, "vg/iga2").realpath, "/g2-iga")
+        au.idp_checkin(None, "iub", "iga")
+        self.assertAxsAt(au, "vu/iua", [["iua"]])
+        self.assertAxsAt(au, "vg/iga1", [["iua", "iub"]])
+        self.assertAxsAt(au, "vg/iga2", [["iua", "iub", "ua"]])

     def test_5(self):
         """
         one IdP user in multiple groups
@@ -169,3 +186,44 @@ class TestVFS(unittest.TestCase):
         self.assertAxsAt(au, "g", [["iua"]])
         self.assertAxsAt(au, "ga", [["iua"]])
         self.assertAxsAt(au, "gb", [["iua"]])
+
+    def test_6(self):
+        """
+        IdP volumes with anon-get and other users/groups (github#79)
+        """
+        _, cfgdir, xcfg = self.prep()
+        au = AuthSrv(Cfg(c=[cfgdir + "/6.conf"], **xcfg), self.log)
+        self.assertAxs(au.vfs.axs, [])
+        self.assertEqual(au.vfs.vpath, "")
+        self.assertEqual(au.vfs.realpath, "")
+        self.assertNodes(au.vfs, [])
+
+        au.idp_checkin(None, "iua", "")
+        star = ["*", "iua"]
+        self.assertNodes(au.vfs, ["get", "priv"])
+        self.assertAxsAt(au, "get/iua", [["iua"], [], [], [], star])
+        self.assertAxsAt(au, "priv/iua", [["iua"], [], []])
+
+        au.idp_checkin(None, "iub", "")
+        star = ["*", "iua", "iub"]
+        self.assertNodes(au.vfs, ["get", "priv"])
+        self.assertAxsAt(au, "get/iua", [["iua"], [], [], [], star])
+        self.assertAxsAt(au, "get/iub", [["iub"], [], [], [], star])
+        self.assertAxsAt(au, "priv/iua", [["iua"], [], []])
+        self.assertAxsAt(au, "priv/iub", [["iub"], [], []])
+
+        au.idp_checkin(None, "iuc", "su")
+        star = ["*", "iua", "iub", "iuc"]
+        self.assertNodes(au.vfs, ["get", "priv", "team"])
+        self.assertAxsAt(au, "get/iua", [["iua", "iuc"], [], ["iuc"], [], star])
+        self.assertAxsAt(au, "get/iub", [["iub", "iuc"], [], ["iuc"], [], star])
+        self.assertAxsAt(au, "get/iuc", [["iuc"], [], ["iuc"], [], star])
+        self.assertAxsAt(au, "priv/iua", [["iua", "iuc"], [], ["iuc"]])
+        self.assertAxsAt(au, "priv/iub", [["iub", "iuc"], [], ["iuc"]])
+        self.assertAxsAt(au, "priv/iuc", [["iuc"], [], ["iuc"]])
+        self.assertAxsAt(au, "team/su/iuc", [["iuc"]])
+
+        au.idp_checkin(None, "iud", "su")
+        self.assertAxsAt(au, "team/su/iuc", [["iuc", "iud"]])
+        self.assertAxsAt(au, "team/su/iud", [["iuc", "iud"]])
```

```diff
@@ -110,7 +110,7 @@ class Cfg(Namespace):
     def __init__(self, a=None, v=None, c=None, **ka0):
         ka = {}
-        ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw q rand smb srch_dbg stats vague_403 vc ver xdev xlink xvol"
+        ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw q rand smb srch_dbg stats vague_403 vc ver xdev xlink xvol"
         ka.update(**{k: False for k in ex.split()})
         ex = "dotpart dotsrch no_dhash no_fastboot no_rescan no_sendfile no_voldump re_dhash plain_ip"
@@ -147,6 +147,7 @@ class Cfg(Namespace):
             dbd="wal",
             fk_salt="a" * 16,
             idp_gsep=re.compile("[|:;+,]"),
+            iobuf=256 * 1024,
             lang="eng",
             log_badpwd=1,
             logout=573,
@@ -154,7 +155,8 @@ class Cfg(Namespace):
             mth={},
             mtp=[],
             rm_retry="0/0",
-            s_wr_sz=512 * 1024,
+            s_rd_sz=256 * 1024,
+            s_wr_sz=256 * 1024,
             sort="href",
             srch_hits=99999,
             th_crop="y",
@@ -168,12 +170,14 @@ class Cfg(Namespace):
         )

-class NullBroker(object):
-    def say(self, *args):
-        pass
-
-    def ask(self, *args):
-        pass
+class NullUp2k(object):
+    def hash_file(*a):
+        pass
+
+class NullHub(object):
+    def __init__(self):
+        self.up2k = NullUp2k()

 class VSock(object):
@@ -204,7 +208,7 @@ class VHttpSrv(object):
         self.asrv = asrv
         self.log = log
-        self.broker = NullBroker()
+        self.hub = NullHub()
         self.prism = None
         self.bans = {}
         self.nreq = 0
@@ -254,4 +258,4 @@ class VHttpConn(object):
         self.thumbcli = None
         self.u2fh = FHC()
         self.get_u2idx = self.hsrv.get_u2idx
```