Compare commits
92 Commits
| SHA1 |
|---|
| a0022805d1 |
| 853adb5d04 |
| 7744226b5c |
| d94b5b3fc9 |
| e6ba065bc2 |
| 59a53ba9ac |
| b88cc7b5ce |
| 5ab54763c6 |
| 59f815ff8c |
| 9c42cbec6f |
| f471b05aa4 |
| 34c32e3e89 |
| a080759a03 |
| 0ae12868e5 |
| ef52e2c06c |
| 32c912bb16 |
| 20870fda79 |
| bdfe2c1a5f |
| cb99fbf442 |
| bccc44dc21 |
| 2f20d29edd |
| c6acd3a904 |
| 2b24c50eb7 |
| d30ae8453d |
| 8e5c436bef |
| f500e55e68 |
| 9700a12366 |
| 2b6a34dc5c |
| ee80cdb9cf |
| 2def4cd248 |
| 0287c7baa5 |
| 51d31588e6 |
| 32553e4520 |
| 211a30da38 |
| bdbcbbb002 |
| e78af02241 |
| 115020ba60 |
| 66abf17bae |
| b377791be7 |
| 78919e65d6 |
| 84b52ea8c5 |
| fd89f7ecb9 |
| 2ebfdc2562 |
| dbf1cbc8af |
| a259704596 |
| 04b55f1a1d |
| 206af8f151 |
| 645bb5c990 |
| f8966222e4 |
| d71f844b43 |
| e8b7f65f82 |
| f193f398c1 |
| b6554a7f8c |
| 3f05b6655c |
| 51a83b04a0 |
| 0c03921965 |
| 2527e90325 |
| 7f08f10c37 |
| 1c011ff0bb |
| a1ad608267 |
| 547a486387 |
| 7741870dc7 |
| 8785d2f9fe |
| d744f3ff8f |
| 8ca996e2f7 |
| 096de50889 |
| bec3fee9ee |
| 8413ed6d1f |
| 055302b5be |
| 8016e6711b |
| c8ea4066b1 |
| 6cc7101d31 |
| 263adec70a |
| ac96fd9c96 |
| e5582605cd |
| 1b52ef1f8a |
| 503face974 |
| 13e77777d7 |
| 89c6c2e0d9 |
| 14af136fcd |
| d39a99c929 |
| 43ee6b9f5b |
| 8a38101e48 |
| 5026b21226 |
| d07859e8e6 |
| df7219d3b6 |
| ad9be54f55 |
| a96d9ac6cb |
| 643e222986 |
| 35165f8472 |
| caf7e93f5e |
| 10bc2d9205 |
111 README.md
@@ -70,14 +70,16 @@ turn almost any device into a file server with resumable uploads/downloads using
  * [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/))
  * [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/))
  * [identity providers](#identity-providers) - replace copyparty passwords with oauth and such
  * [using the cloud as storage](#using-the-cloud-as-storage) - connecting to an aws s3 bucket and similar
  * [hiding from google](#hiding-from-google) - tell search engines you dont wanna be indexed
  * [themes](#themes)
  * [complete examples](#complete-examples)
  * [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
  * [real-ip](#real-ip) - teaching copyparty how to see client IPs
  * [prometheus](#prometheus) - metrics/stats can be enabled
  * [packages](#packages) - the party might be closer than you think
  * [arch package](#arch-package) - now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes)
- * [fedora package](#fedora-package) - currently **NOT** available on [copr-pypi](https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/)
+ * [fedora package](#fedora-package) - does not exist yet
  * [nix package](#nix-package) - `nix profile install github:9001/copyparty`
  * [nixos module](#nixos-module)
  * [browser support](#browser-support) - TLDR: yes
@@ -92,6 +94,7 @@ turn almost any device into a file server with resumable uploads/downloads using
  * [gotchas](#gotchas) - behavior that might be unexpected
  * [cors](#cors) - cross-site request config
  * [filekeys](#filekeys) - prevent filename bruteforcing
+ * [dirkeys](#dirkeys) - share specific folders in a volume
  * [password hashing](#password-hashing) - you can hash passwords
  * [https](#https) - both HTTP and HTTPS are accepted
  * [recovering from crashes](#recovering-from-crashes)
@@ -104,7 +107,7 @@ turn almost any device into a file server with resumable uploads/downloads using
  * [sfx](#sfx) - the self-contained "binary"
  * [copyparty.exe](#copypartyexe) - download [copyparty.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) (win8+) or [copyparty32.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty32.exe) (win7+)
  * [install on android](#install-on-android)
- * [reporting bugs](#reporting-bugs) - ideas for context to include in bug reports
+ * [reporting bugs](#reporting-bugs) - ideas for context to include, and where to submit them
  * [devnotes](#devnotes) - for build instructions etc, see [./docs/devnotes.md](./docs/devnotes.md)

@@ -197,7 +200,7 @@ firewall-cmd --reload
* browser
  * ☑ [navpane](#navpane) (directory tree sidebar)
  * ☑ file manager (cut/paste, delete, [batch-rename](#batch-rename))
- * ☑ audio player (with [OS media controls](https://user-images.githubusercontent.com/241032/215347492-b4250797-6c90-4e09-9a4c-721edf2fb15c.png) and opus transcoding)
+ * ☑ audio player (with [OS media controls](https://user-images.githubusercontent.com/241032/215347492-b4250797-6c90-4e09-9a4c-721edf2fb15c.png) and opus/mp3 transcoding)
  * ☑ image gallery with webm player
  * ☑ textfile browser with syntax highlighting
  * ☑ [thumbnails](#thumbnails)
@@ -286,6 +289,9 @@ roughly sorted by chance of encounter
* cannot index non-ascii filenames with `-e2d`
* cannot handle filenames with mojibake

if you have a new exciting bug to share, see [reporting bugs](#reporting-bugs)

## not my bugs

same order here too
@@ -341,9 +347,24 @@ upgrade notes
* yes, using the [`g` permission](#accounts-and-volumes), see the examples there
  * you can also do this with linux filesystem permissions; `chmod 111 music` will make it possible to access files and folders inside the `music` folder but not list the immediate contents -- also works with other software, not just copyparty

* can I link someone to a password-protected volume/file by including the password in the URL?
  * yes, by adding `?pw=hunter2` to the end; replace `?` with `&` if there are parameters in the URL already, meaning it contains a `?` near the end
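for example, assuming a hypothetical server at `fs.example.com`, the two URL shapes would look like this:

```bash
# no other url-params yet, so start with "?"
curl -JO "https://fs.example.com/music/song.flac?pw=hunter2"

# the URL already contains a "?", so chain with "&"
curl -o album.zip "https://fs.example.com/music/album/?zip&pw=hunter2"
```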
* how do I stop `.hist` folders from appearing everywhere on my HDD?
  * by default, a `.hist` folder is created inside each volume for the filesystem index, thumbnails, audio transcodes, and markdown document history. Use the `--hist` global-option or the `hist` volflag to move it somewhere else; see [database location](#database-location)
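a minimal sketch of both approaches (paths invented for the example):

```bash
# global-option: keep every volume's .hist data under /srv/hist instead
copyparty --hist /srv/hist -v /mnt/nas/music:music:r

# volflag: override the location for one volume only
copyparty -v "/mnt/nas/music:music:r:c,hist=/srv/hist/music"
```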
* can I make copyparty download a file to my server if I give it a URL?
  * yes, using [hooks](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/wget.py)
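wiring that hook up could look roughly like this; the exact argument string is documented at the top of wget.py itself, so treat the details here as assumptions:

```bash
# run wget.py whenever a 📟 message is posted to the volume;
# the message body is the URL to download into that folder
copyparty -v "/srv/dl:dl:rw,ed:c,xm=f,j,t3600,bin/hooks/wget.py"
```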
* firefox refuses to connect over https, saying "Secure Connection Failed" or "SEC_ERROR_BAD_SIGNATURE", but the usual button to "Accept the Risk and Continue" is not shown
  * firefox has corrupted its certstore; fix this by exiting firefox, then find and delete the file named `cert9.db` somewhere in your firefox profile folder

* the server keeps saying `thank you for playing` when I try to access the website
  * you've gotten banned for malicious traffic! if this happens by mistake, and you're running a reverse-proxy and/or something like cloudflare, see [real-ip](#real-ip) on how to fix this

* copyparty seems to think I am using http, even though the URL is https
  * your reverse-proxy is not sending the `X-Forwarded-Proto: https` header; this could be because your reverse-proxy itself is confused. Ensure that none of the intermediates (such as cloudflare) are terminating https before the traffic hits your entrypoint
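with nginx, for example, the usual fix is one line inside the `location` block that proxies to copyparty (a sketch, assuming an otherwise-working proxy config):

```
proxy_set_header X-Forwarded-Proto $scheme;
```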
* i want to learn python and/or programming and am considering looking at the copyparty source code on that occasion
  * ```bash
    _| _ __ _ _|_
@@ -567,7 +588,7 @@ you can also zip a selection of files or folders by clicking them in the browser

- cool trick: download a folder by appending url-params `?tar&opus` to transcode all audio files (except aac|m4a|mp3|ogg|opus|wma) to opus before they're added to the archive
+ cool trick: download a folder by appending url-params `?tar&opus` or `?tar&mp3` to transcode all audio files (except aac|m4a|mp3|ogg|opus|wma) to opus/mp3 before they're added to the archive
* super useful if you're 5 minutes away from takeoff and realize you don't have any music on your phone but your server only has flac files and downloading those will burn through all your data + there wouldn't be enough time anyways
* and url-params `&j` / `&w` produce jpeg/webm thumbnails/spectrograms instead of the original audio/video/images
* can also be used to pregenerate thumbnails; combine with `--th-maxage=9999999` or `--th-clean=0`
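as a concrete example of the trick above (hypothetical host and folder):

```bash
# everything in the folder, with flac/wav/etc transcoded to opus
curl -o album.tar "https://fs.example.com/music/album/?tar&opus"

# same, but mp3 for devices that can't play opus
curl -o album.tar "https://fs.example.com/music/album/?tar&mp3"
```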
@@ -581,7 +602,7 @@ this initiates an upload using `up2k`; there are two uploaders available:
* `[🎈] bup`, the basic uploader, supports almost every browser since netscape 4.0
* `[🚀] up2k`, the good / fancy one

- NB: you can undo/delete your own uploads with `[🧯]` [unpost](#unpost)
+ NB: you can undo/delete your own uploads with `[🧯]` [unpost](#unpost) (and this is also where you abort unfinished uploads, but you have to refresh the page first)

up2k has several advantages:
* you can drop folders into the browser (files are added recursively)
@@ -758,9 +779,9 @@ open the `[🎺]` media-player-settings tab to configure it,
* `[loop]` keeps looping the folder
* `[next]` plays into the next folder
* "transcode":
-  * `[flac]` converts `flac` and `wav` files into opus
-  * `[aac]` converts `aac` and `m4a` files into opus
-  * `[oth]` converts all other known formats into opus
+  * `[flac]` converts `flac` and `wav` files into opus (if supported by browser) or mp3
+  * `[aac]` converts `aac` and `m4a` files into opus (if supported by browser) or mp3
+  * `[oth]` converts all other known formats into opus (if supported by browser) or mp3
    * `aac|ac3|aif|aiff|alac|alaw|amr|ape|au|dfpwm|dts|flac|gsm|it|m4a|mo3|mod|mp2|mp3|mpc|mptm|mt2|mulaw|ogg|okt|opus|ra|s3m|tak|tta|ulaw|wav|wma|wv|xm|xpk`
* "tint" reduces the contrast of the playback bar

@@ -1252,11 +1273,26 @@ replace 404 and 403 errors with something completely different (that's it for no

replace copyparty passwords with oauth and such

- work is [ongoing](https://github.com/9001/copyparty/issues/62) to support authenticating / authorizing users based on a separate authentication proxy, which makes it possible to support oauth, single-sign-on, etc.
+ you can disable the built-in password-based login system, and instead replace it with a separate piece of software (an identity provider) which will then handle authenticating / authorizing of users; this makes it possible to login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions

- it is currently possible to specify `--idp-h-usr x-username`; copyparty will then skip password validation and blindly trust the username specified in the `X-Username` request header
+ a popular choice is [Authelia](https://www.authelia.com/) (config-file based), another one is [authentik](https://goauthentik.io/) (GUI-based, more complex)

- the remaining stuff (accepting user groups through another header, creating volumes on the fly) are still to-do; configuration will probably [look like this](./docs/examples/docker/idp/copyparty.conf)
+ there is a [docker-compose example](./docs/examples/docker/idp-authelia-traefik) which is hopefully a good starting point (alternatively see [./docs/idp.md](./docs/idp.md) if you're the DIY type)
+
+ a more complete example of the copyparty configuration options [looks like this](./docs/examples/docker/idp/copyparty.conf)
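the raw mechanism that these setups build on is a pair of trusted request headers; a minimal sketch (the header names are whatever your auth proxy sends, `x-authentik-username` being just an assumption):

```bash
# trust the username/group headers injected by the auth proxy,
# but only when the connection comes from the proxy itself
copyparty --idp-h-usr x-authentik-username \
          --idp-h-grp x-authentik-groups \
          --xff-src 127.0.0.0/8
```

make sure clients can never supply these headers themselves; the reverse-proxy must strip and replace them.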

## using the cloud as storage

connecting to an aws s3 bucket and similar

there is no built-in support for this, but you can use FUSE-software such as [rclone](https://rclone.org/) / [geesefs](https://github.com/yandex-cloud/geesefs) / [JuiceFS](https://juicefs.com/en/) to first mount your cloud storage as a local disk, and then let copyparty use (a folder in) that disk as a volume

you may experience poor upload performance this way, but that can sometimes be fixed by specifying the volflag `sparse` to force the use of sparse files; this has improved the upload speeds from `1.5 MiB/s` to over `80 MiB/s` in one case, but note that you are also more likely to discover funny bugs in your FUSE software this way, so buckle up

someone has also tested geesefs in combination with [gocryptfs](https://nuetzlich.net/gocryptfs/) with surprisingly good results, getting 60 MiB/s upload speeds on a gbit line, but JuiceFS won with 80 MiB/s using its built-in encryption

+ you may improve performance by specifying larger values for `--iobuf` / `--s-rd-sz` / `--s-wr-sz`
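a rough sketch of the rclone approach (remote name and paths are hypothetical):

```bash
# mount the bucket, then serve a folder from it as a volume
rclone mount s3remote:mybucket /mnt/s3 --daemon --vfs-cache-mode writes
copyparty --iobuf 1048576 -v "/mnt/s3/share:share:rw,ed:c,sparse"
```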

## hiding from google

@@ -1292,6 +1328,8 @@ the classname of the HTML tag is set according to the selected theme, which is u

see the top of [./copyparty/web/browser.css](./copyparty/web/browser.css) where the color variables are set, and there's layout-specific stuff near the bottom

+ if you want to change the fonts, see [./docs/rice/](./docs/rice/)

## complete examples

@@ -1352,6 +1390,15 @@ example webserver configs:

* [apache2 config](contrib/apache/copyparty.conf) -- location-based

+ ### real-ip
+
+ teaching copyparty how to see client IPs when running behind a reverse-proxy, or a WAF, or another protection service such as cloudflare
+
+ if you (and maybe everybody else) keep getting a message that says `thank you for playing`, then you've gotten banned for malicious traffic. This ban applies to the IP address that copyparty *thinks* identifies the shady client -- so, depending on your setup, you might have to tell copyparty where to find the correct IP
+
+ for most common setups, there should be a helpful message in the server-log explaining what to do, but see [docs/xff.md](docs/xff.md) if you want to learn more, including a quick hack to **just make it work** (which is **not** recommended, but hey...)
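as a sketch, the cloudflare case usually boils down to the following; see [docs/xff.md](docs/xff.md) for the current recommendation:

```bash
# read the client IP from cloudflare's header instead of x-forwarded-for;
# "--xff-src=any" trusts that header from anyone, which is only safe if
# your firewall already rejects traffic that isn't from cloudflare
copyparty --xff-hdr cf-connecting-ip --xff-src=any
```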

## prometheus

metrics/stats can be enabled at URL `/.cpr/metrics` for grafana / prometheus / etc (openmetrics 1.0.0)
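a quick way to eyeball the output, assuming metrics have been enabled and copyparty is listening on its default port:

```bash
curl -s http://127.0.0.1:3923/.cpr/metrics | head
```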
@@ -1431,17 +1478,7 @@ it comes with a [systemd service](./contrib/package/arch/copyparty.service) and

## fedora package

- currently **NOT** available on [copr-pypi](https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/) , fedora is having issues with their build servers and won't be fixed for several months
-
- if you previously installed copyparty from copr, you may run one of the following commands to upgrade to a more recent version:
-
- ```bash
- dnf install https://ocv.me/copyparty/fedora/37/python3-copyparty.fc37.noarch.rpm
- dnf install https://ocv.me/copyparty/fedora/38/python3-copyparty.fc38.noarch.rpm
- dnf install https://ocv.me/copyparty/fedora/39/python3-copyparty.fc39.noarch.rpm
- ```
-
- to run copyparty as a service, use the [systemd service scripts](https://github.com/9001/copyparty/tree/hovudstraum/contrib/systemd), just replace `/usr/bin/python3 /usr/local/bin/copyparty-sfx.py` with `/usr/bin/copyparty`
+ does not exist yet; using the [copr-pypi](https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/) builds is **NOT recommended** because updates can be delayed by [several months](https://github.com/fedora-copr/copr/issues/3056)

## nix package

@@ -1706,6 +1743,7 @@ below are some tweaks roughly ordered by usefulness:

* `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
  * and also makes thumbnails load faster, regardless of e2d/e2t
* `--no-hash .` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
+ * if your volumes are on a network-disk such as NFS / SMB / s3, specifying larger values for `--iobuf` and/or `--s-rd-sz` and/or `--s-wr-sz` may help; try setting all of them to `524288` or `1048576` or `4194304`
* `--no-htp --hash-mt=0 --mtag-mt=1 --th-mt=1` minimizes the number of threads; can help in some eccentric environments (like the vscode debugger)
* `-j0` enables multiprocessing (actual multithreading), can reduce latency to `20+80/numCores` percent and generally improve performance in cpu-intensive workloads, for example:
  * lots of connections (many users or heavy clients)
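pulling a few of those together into one hypothetical invocation for a NAS-backed volume:

```bash
copyparty -j0 --no-hash . --hist /ssd/cpp-hist \
  --iobuf 1048576 --s-rd-sz 1048576 --s-wr-sz 1048576 \
  -v /mnt/nas/media:media:r
```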
@@ -1802,12 +1840,29 @@ cors can be configured with `--acao` and `--acam`, or the protections entirely d

prevent filename bruteforcing

- volflag `c,fk` generates filekeys (per-file accesskeys) for all files; users which have full read-access (permission `r`) will then see URLs with the correct filekey `?k=...` appended to the end, and `g` users must provide that URL including the correct key to avoid a 404
+ volflag `fk` generates filekeys (per-file accesskeys) for all files; users which have full read-access (permission `r`) will then see URLs with the correct filekey `?k=...` appended to the end, and `g` users must provide that URL including the correct key to avoid a 404

by default, filekeys are generated based on salt (`--fk-salt`) + filesystem-path + file-size + inode (if not windows); add volflag `fka` to generate slightly weaker filekeys which will not be invalidated if the file is edited (only salt + path)

permissions `wG` (write + upget) lets users upload files and receive their own filekeys, still without being able to see other uploads
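a minimal filekey setup could look like this (account name and paths invented for the example):

```bash
# "ed" has full read-access and sees the ?k=... URLs;
# anonymous visitors only have "g" and need the complete link
copyparty -a ed:hunter2 -v "/srv/pub:pub:r,ed:g:c,fk"
```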

+ ### dirkeys
+
+ share specific folders in a volume without giving away full read-access to the rest -- the visitor only needs the `g` (get) permission to view the link
+
+ volflag `dk` generates dirkeys (per-directory accesskeys) for all folders, granting read-access to that folder; by default only that folder itself, no subfolders
+
+ volflag `dky` disables the actual key-check, meaning anyone can see the contents of a folder where they have `g` access, but not its subdirectories
+
+ * `dk` + `dky` gives the same behavior as if all users with `g` access have full read-access, but subfolders are hidden files (their names start with a dot), so `dky` is an alternative to renaming all the folders for that purpose, maybe just for some users
+
+ volflag `dks` lets people enter subfolders as well, and also enables download-as-zip/tar
+
+ dirkeys are generated based on another salt (`--dk-salt`) + filesystem-path and have a few limitations:
+ * the key does not change if the contents of the folder is modified
+   * if you need a new dirkey, either change the salt or rename the folder
+ * linking to a textfile (so it opens in the textfile viewer) is not possible if recipient doesn't have read-access
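extending the filekey sketch above with dirkeys (same hypothetical volume):

```bash
# folders get unpredictable ?k=... links granting read-access
copyparty -a ed:hunter2 -v "/srv/pub:pub:r,ed:g:c,dk"

# same, but the dirkey also unlocks subfolders and zip/tar download
copyparty -a ed:hunter2 -v "/srv/pub:pub:r,ed:g:c,dks"
```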

## password hashing

@@ -1943,7 +1998,12 @@ if you want thumbnails (photos+videos) and you're okay with spending another 132

# reporting bugs

- ideas for context to include in bug reports
+ ideas for context to include, and where to submit them
+
+ please get in touch using any of the following URLs:
+ * https://github.com/9001/copyparty/ **(primary)**
+ * https://gitlab.com/9001/copyparty/ *(mirror)*
+ * https://codeberg.org/9001/copyparty *(mirror)*

in general, commandline arguments (and config file if any)

@@ -1958,3 +2018,6 @@ if there's a wall of base64 in the log (thread stacks) then please include that,

# devnotes

for build instructions etc, see [./docs/devnotes.md](./docs/devnotes.md)

+ see [./docs/TODO.md](./docs/TODO.md) for planned features / fixes / changes

@@ -223,7 +223,10 @@ install_vamp() {
    # use msys2 in mingw-w64 mode
    # pacman -S --needed mingw-w64-x86_64-{ffmpeg,python,python-pip,vamp-plugin-sdk}

-   $pybin -m pip install --user vamp
+   $pybin -m pip install --user vamp || {
+       printf '\n\033[7malright, trying something else...\033[0m\n'
+       $pybin -m pip install --user --no-build-isolation vamp
+   }

    cd "$td"
    echo '#include <vamp-sdk/Plugin.h>' | g++ -x c++ -c -o /dev/null - || [ -e ~/pe/vamp-sdk ] || {
@@ -16,6 +16,8 @@
* sharex config file to upload screenshots and grab the URL
  * `RequestURL`: full URL to the target folder
  * `pw`: password (remove the `pw` line if anon-write)
+  * the `act:bput` thing is optional since copyparty v1.9.29
+  * using an older sharex version, maybe sharex v12.1.1 for example? dw fam i got your back 👉😎👉 [`sharex12.sxcu`](sharex12.sxcu)

### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
* browser integration, kind of? custom rightclick actions and stuff
@@ -11,6 +11,14 @@
# (5'000 requests per second, or 20gbps upload/download in parallel)
#
# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
#
+# if you are behind cloudflare (or another protection service),
+# remember to reject all connections which are not coming from your
+# protection service -- for cloudflare in particular, you can
+# generate the list of permitted IP ranges like so:
+# (curl -s https://www.cloudflare.com/ips-v{4,6} | sed 's/^/allow /; s/$/;/'; echo; echo "deny all;") > /etc/nginx/cloudflare-only.conf
+#
+# and then enable it below by uncommenting the cloudflare-only.conf line

upstream cpp {
    server 127.0.0.1:3923 fail_timeout=1s;

@@ -21,7 +29,10 @@ server {
    listen [::]:443 ssl;

    server_name fs.example.com;

+   # uncomment the following line to reject non-cloudflare connections, ensuring client IPs cannot be spoofed:
+   #include /etc/nginx/cloudflare-only.conf;

    location / {
        proxy_pass http://cpp;
        proxy_redirect off;
@@ -1,6 +1,6 @@
# Maintainer: icxes <dev.null@need.moe>
pkgname=copyparty
-pkgver="1.10.0"
+pkgver="1.11.2"
pkgrel=1
pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
arch=("any")

@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
)
source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
backup=("etc/${pkgname}.d/init" )
-sha256sums=("a44338b24d28fd6962504ec2c0e466e16144ddd4c5af44eb3ae493534152fd07")
+sha256sums=("0b37641746d698681691ea9e7070096404afc64a42d3d4e96cc4e036074fded9")

build() {
    cd "${srcdir}/${pkgname}-${pkgver}"
@@ -1,5 +1,5 @@
{
-  "url": "https://github.com/9001/copyparty/releases/download/v1.10.0/copyparty-sfx.py",
-  "version": "1.10.0",
-  "hash": "sha256-pPkiDKXEv7P1zPB7/BSo85St0FNnhUpb30sxIi7mn1c="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.11.2/copyparty-sfx.py",
+  "version": "1.11.2",
+  "hash": "sha256-3nIHLM4xJ9RQH3ExSGvBckHuS40IdzyREAtMfpJmfug="
}
13 contrib/sharex12.sxcu (new file)
@@ -0,0 +1,13 @@
+{
+  "Name": "copyparty",
+  "DestinationType": "ImageUploader, TextUploader, FileUploader",
+  "RequestURL": "http://127.0.0.1:3923/sharex",
+  "FileFormName": "f",
+  "Arguments": {
+    "act": "bput"
+  },
+  "Headers": {
+    "accept": "url",
+    "pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE"
+  }
+}
@@ -56,7 +56,6 @@ class EnvParams(object):
        self.t0 = time.time()
        self.mod = ""
        self.cfg = ""
-       self.ox = getattr(sys, "oxidized", None)


E = EnvParams()
@@ -157,7 +157,8 @@ def warn(msg: str) -> None:


def init_E(EE: EnvParams) -> None:
-   # __init__ runs 18 times when oxidized; do expensive stuff here
+   # some cpython alternatives (such as pyoxidizer) can
+   # __init__ several times, so do expensive stuff here

    E = EE  # pylint: disable=redefined-outer-name
@@ -190,34 +191,9 @@ def init_E(EE: EnvParams) -> None:

        raise Exception("could not find a writable path for config")

-   def _unpack() -> str:
-       import atexit
-       import tarfile
-       import tempfile
-       from importlib.resources import open_binary
-
-       td = tempfile.TemporaryDirectory(prefix="")
-       atexit.register(td.cleanup)
-       tdn = td.name
-
-       with open_binary("copyparty", "z.tar") as tgz:
-           with tarfile.open(fileobj=tgz) as tf:
-               try:
-                   tf.extractall(tdn, filter="tar")
-               except TypeError:
-                   tf.extractall(tdn)  # nosec (archive is safe)
-
-       return tdn
-
-   try:
-       E.mod = os.path.dirname(os.path.realpath(__file__))
-       if E.mod.endswith("__init__"):
-           E.mod = os.path.dirname(E.mod)
-   except:
-       if not E.ox:
-           raise
-
-       E.mod = _unpack()
+   E.mod = os.path.dirname(os.path.realpath(__file__))
+   if E.mod.endswith("__init__"):
+       E.mod = os.path.dirname(E.mod)

    if sys.platform == "win32":
        bdir = os.environ.get("APPDATA") or os.environ.get("TEMP") or "."
@@ -274,6 +250,19 @@ def get_fk_salt() -> str:
    return ret.decode("utf-8")


+def get_dk_salt() -> str:
+    fp = os.path.join(E.cfg, "dk-salt.txt")
+    try:
+        with open(fp, "rb") as f:
+            ret = f.read().strip()
+    except:
+        ret = base64.b64encode(os.urandom(30))
+        with open(fp, "wb") as f:
+            f.write(ret + b"\n")
+
+    return ret.decode("utf-8")


def get_ah_salt() -> str:
    fp = os.path.join(E.cfg, "ah-salt.txt")
    try:
@@ -395,7 +384,7 @@ def configure_ssl_ciphers(al: argparse.Namespace) -> None:


def args_from_cfg(cfg_path: str) -> list[str]:
    lines: list[str] = []
-   expand_config_file(lines, cfg_path, "")
+   expand_config_file(None, lines, cfg_path, "")
    lines = upgrade_cfg_fmt(None, argparse.Namespace(vc=False), lines, "")

    ret: list[str] = []
@@ -503,6 +492,10 @@ def get_sects():
    * "\033[33mperm\033[0m" is "permissions,username1,username2,..."
    * "\033[32mvolflag\033[0m" is config flags to set on this volume

+   --grp takes groupname:username1,username2,...
+   and groupnames can be used instead of usernames in -v
+   by prefixing the groupname with %

    list of permissions:
    "r" (read): list folder contents, download files
    "w" (write): upload files; need "r" to see the uploads
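a hypothetical invocation combining these (usernames and paths invented):

```bash
# members of "admins" get read/write/move/delete on /srv/pub
copyparty --grp admins:ed,foo -a ed:p1 -a foo:p2 -v "/srv/pub:pub:rwmd,%admins"
```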
@@ -840,6 +833,7 @@ def add_general(ap, nc, srvname):
    ap2.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores, 0=all")
    ap2.add_argument("-a", metavar="ACCT", type=u, action="append", help="add account, \033[33mUSER\033[0m:\033[33mPASS\033[0m; example [\033[32med:wark\033[0m]")
    ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, \033[33mSRC\033[0m:\033[33mDST\033[0m:\033[33mFLAG\033[0m; examples [\033[32m.::r\033[0m], [\033[32m/mnt/nas/music:/music:r:aed\033[0m], see --help-accounts")
+   ap2.add_argument("--grp", metavar="G:N,N", type=u, action="append", help="add group, \033[33mNAME\033[0m:\033[33mUSER1\033[0m,\033[33mUSER2\033[0m,\033[33m...\033[0m; example [\033[32madmins:ed,foo,bar\033[0m]")
    ap2.add_argument("-ed", action="store_true", help="enable the ?dots url parameter / client option which allows clients to see dotfiles / hidden files (volflag=dots)")
    ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-form POSTs; see \033[33m--help-urlform\033[0m")
    ap2.add_argument("--wintitle", metavar="TXT", type=u, default="cpp @ $pub", help="server terminal title, for example [\033[32m$ip-10.1.2.\033[0m] or [\033[32m$ip-]")
@@ -864,6 +858,7 @@ def add_fs(ap):
    ap2 = ap.add_argument_group("filesystem options")
    rm_re_def = "5/0.1" if ANYWIN else "0/0"
    ap2.add_argument("--rm-retry", metavar="T/R", type=u, default=rm_re_def, help="if a file cannot be deleted because it is busy, continue trying for \033[33mT\033[0m seconds, retry every \033[33mR\033[0m seconds; disable with 0/0 (volflag=rm_retry)")
+   ap2.add_argument("--iobuf", metavar="BYTES", type=int, default=256*1024, help="file I/O buffer-size; if your volumes are on a network drive, try increasing to \033[32m524288\033[0m or even \033[32m4194304\033[0m (and let me know if that improves your performance)")


def add_upload(ap):
@@ -871,6 +866,7 @@ def add_upload(ap):
    ap2.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads, hiding them from clients unless \033[33m-ed\033[0m")
    ap2.add_argument("--plain-ip", action="store_true", help="when avoiding filename collisions by appending the uploader's ip to the filename: append the plaintext ip instead of salting and hashing the ip")
    ap2.add_argument("--unpost", metavar="SEC", type=int, default=3600*12, help="grace period where uploads can be deleted by the uploader, even without delete permissions; 0=disabled, default=12h")
+   ap2.add_argument("--u2abort", metavar="NUM", type=int, default=1, help="clients can abort incomplete uploads by using the unpost tab (requires \033[33m-e2d\033[0m). [\033[32m0\033[0m] = never allowed (disable feature), [\033[32m1\033[0m] = allow if client has the same IP as the upload AND is using the same account, [\033[32m2\033[0m] = just check the IP, [\033[32m3\033[0m] = just check account-name (volflag=u2abort)")
    ap2.add_argument("--blank-wt", metavar="SEC", type=int, default=300, help="file write grace period (any client can write to a blank file last-modified more recently than \033[33mSEC\033[0m seconds ago)")
    ap2.add_argument("--reg-cap", metavar="N", type=int, default=38400, help="max number of uploads to keep in memory when running without \033[33m-e2d\033[0m; roughly 1 MiB RAM per 600")
    ap2.add_argument("--no-fpool", action="store_true", help="disable file-handle pooling -- instead, repeatedly close and reopen files during upload (bad idea to enable this on windows and/or cow filesystems)")
@@ -901,8 +897,8 @@ def add_network(ap):
    ap2.add_argument("--ll", action="store_true", help="include link-local IPv4/IPv6 in mDNS replies, even if the NIC has routable IPs (breaks some mDNS clients)")
    ap2.add_argument("--rproxy", metavar="DEPTH", type=int, default=1, help="which ip to associate clients with; [\033[32m0\033[0m]=tcp, [\033[32m1\033[0m]=origin (first x-fwd, unsafe), [\033[32m2\033[0m]=outermost-proxy, [\033[32m3\033[0m]=second-proxy, [\033[32m-1\033[0m]=closest-proxy")
    ap2.add_argument("--xff-hdr", metavar="NAME", type=u, default="x-forwarded-for", help="if reverse-proxied, which http header to read the client's real ip from")
-   ap2.add_argument("--xff-src", metavar="IP", type=u, default="127., ::1", help="comma-separated list of trusted reverse-proxy IPs; only accept the real-ip header (\033[33m--xff-hdr\033[0m) if the incoming connection is from an IP starting with either of these. Can be disabled with [\033[32many\033[0m] if you are behind cloudflare (or similar) and are using \033[32m--xff-hdr=cf-connecting-ip\033[0m (or similar)")
-   ap2.add_argument("--ipa", metavar="PREFIX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPREFIX\033[0m; example: [\033[32m127., 10.89., 192.168.\033[0m]")
+   ap2.add_argument("--xff-src", metavar="CIDR", type=u, default="127.0.0.0/8, ::1/128", help="comma-separated list of trusted reverse-proxy CIDRs; only accept the real-ip header (\033[33m--xff-hdr\033[0m) and IdP headers if the incoming connection is from an IP within either of these subnets. Specify [\033[32mlan\033[0m] to allow all LAN / private / non-internet IPs. Can be disabled with [\033[32many\033[0m] if you are behind cloudflare (or similar) and are using \033[32m--xff-hdr=cf-connecting-ip\033[0m (or similar)")
+   ap2.add_argument("--ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
    ap2.add_argument("--rp-loc", metavar="PATH", type=u, default="", help="if reverse-proxying on a location instead of a dedicated domain/subdomain, provide the base location here; example: [\033[32m/foo/bar\033[0m]")
    if ANYWIN:
        ap2.add_argument("--reuseaddr", action="store_true", help="set reuseaddr on listening sockets on windows; allows rapid restart of copyparty at the expense of being able to accidentally start multiple instances")
@@ -910,6 +906,7 @@ def add_network(ap):
    ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
    ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
    ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
+   ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
    ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
    ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0, help="debug: socket write delay in seconds")
    ap2.add_argument("--rsp-slp", metavar="SEC", type=float, default=0, help="debug: response delay in seconds")
@@ -949,8 +946,9 @@ def add_cert(ap, cert_path):
def add_auth(ap):
    ap2 = ap.add_argument_group('IdP / identity provider / user authentication options')
    ap2.add_argument("--idp-h-usr", metavar="HN", type=u, default="", help="bypass the copyparty authentication checks and assume the request-header \033[33mHN\033[0m contains the username of the requesting user (for use with authentik/oauth/...)\n\033[1;31mWARNING:\033[0m if you enable this, make sure clients are unable to specify this header themselves; must be washed away and replaced by a reverse-proxy")
-   return
+   ap2.add_argument("--idp-h-grp", metavar="HN", type=u, default="", help="assume the request-header \033[33mHN\033[0m contains the groupname of the requesting user; can be referenced in config files for group-based access control")
+   ap2.add_argument("--idp-h-key", metavar="HN", type=u, default="", help="optional but recommended safeguard; your reverse-proxy will insert a secret header named \033[33mHN\033[0m into all requests, and the other IdP headers will be ignored if this header is not present")
+   ap2.add_argument("--idp-gsep", metavar="RE", type=u, default="|:;+,", help="if there are multiple groups in \033[33m--idp-h-grp\033[0m, they are separated by one of the characters in \033[33mRE\033[0m")


def add_zeroconf(ap):
@@ -999,7 +997,7 @@ def add_ftp(ap):
    ap2.add_argument("--ftps", metavar="PORT", type=int, help="enable FTPS server on \033[33mPORT\033[0m, for example \033[32m3990")
    ap2.add_argument("--ftpv", action="store_true", help="verbose")
    ap2.add_argument("--ftp4", action="store_true", help="only listen on IPv4")
-   ap2.add_argument("--ftp-ipa", metavar="PFX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPFX\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Example: [\033[32m127., 10.89., 192.168.\033[0m]")
+   ap2.add_argument("--ftp-ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
    ap2.add_argument("--ftp-wt", metavar="SEC", type=int, default=7, help="grace period for resuming interrupted uploads (any client can write to any file last-modified more recently than \033[33mSEC\033[0m seconds ago)")
    ap2.add_argument("--ftp-nat", metavar="ADDR", type=u, help="the NAT address to use for passive connections")
    ap2.add_argument("--ftp-pr", metavar="P-P", type=u, help="the range of TCP ports to use for passive connections, for example \033[32m12000-13000")
@@ -1022,7 +1020,7 @@ def add_tftp(ap):
    ap2.add_argument("--tftp-no-fast", action="store_true", help="debug: disable optimizations")
    ap2.add_argument("--tftp-lsf", metavar="PTN", type=u, default="\\.?(dir|ls)(\\.txt)?", help="return a directory listing if a file with this name is requested and it does not exist; defaults matches .ls, dir, .dir.txt, ls.txt, ...")
    ap2.add_argument("--tftp-nols", action="store_true", help="if someone tries to download a directory, return an error instead of showing its directory listing")
-   ap2.add_argument("--tftp-ipa", metavar="PFX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPFX\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Example: [\033[32m127., 10.89., 192.168.\033[0m]")
+   ap2.add_argument("--tftp-ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
    ap2.add_argument("--tftp-pr", metavar="P-P", type=u, help="the range of UDP ports to use for data transfer, for example \033[32m12000-13000")
@@ -1115,19 +1113,21 @@ def add_safety(ap):
    ap2.add_argument("--ban-url", metavar="N,W,B", type=u, default="9,2,1440", help="hitting more than \033[33mN\033[0m sus URL's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; applies only to permissions g/G/h (decent replacement for \033[33m--ban-404\033[0m if that can't be used)")
    ap2.add_argument("--sus-urls", metavar="R", type=u, default=r"\.php$|(^|/)wp-(admin|content|includes)/", help="URLs which are considered sus / eligible for banning; disable with blank or [\033[32mno\033[0m]")
    ap2.add_argument("--nonsus-urls", metavar="R", type=u, default=r"^(favicon\.ico|robots\.txt)$|^apple-touch-icon|^\.well-known", help="harmless URLs ignored from 404-bans; disable with blank or [\033[32mno\033[0m]")
+   ap2.add_argument("--early-ban", action="store_true", help="if a client is banned, reject its connection as soon as possible; not a good idea to enable when proxied behind cloudflare since it could ban your reverse-proxy")
    ap2.add_argument("--aclose", metavar="MIN", type=int, default=10, help="if a client maxes out the server connection limit, downgrade it from connection:keep-alive to connection:close for \033[33mMIN\033[0m minutes (and also kill its active connections) -- disable with 0")
    ap2.add_argument("--loris", metavar="B", type=int, default=60, help="if a client maxes out the server connection limit without sending headers, ban it for \033[33mB\033[0m minutes; disable with [\033[32m0\033[0m]")
    ap2.add_argument("--acao", metavar="V[,V]", type=u, default="*", help="Access-Control-Allow-Origin; list of origins (domains/IPs without port) to accept requests from; [\033[32mhttps://1.2.3.4\033[0m]. Default [\033[32m*\033[0m] allows requests from all sites but removes cookies and http-auth; only ?pw=hunter2 survives")
    ap2.add_argument("--acam", metavar="V[,V]", type=u, default="GET,HEAD", help="Access-Control-Allow-Methods; list of methods to accept from offsite ('*' behaves like \033[33m--acao\033[0m's description)")


-def add_salt(ap, fk_salt, ah_salt):
+def add_salt(ap, fk_salt, dk_salt, ah_salt):
    ap2 = ap.add_argument_group('salting options')
    ap2.add_argument("--ah-alg", metavar="ALG", type=u, default="none", help="account-pw hashing algorithm; one of these, best to worst: \033[32margon2 scrypt sha2 none\033[0m (each optionally followed by alg-specific comma-sep. config)")
    ap2.add_argument("--ah-salt", metavar="SALT", type=u, default=ah_salt, help="account-pw salt; ignored if \033[33m--ah-alg\033[0m is none (default)")
    ap2.add_argument("--ah-gen", metavar="PW", type=u, default="", help="generate hashed password for \033[33mPW\033[0m, or read passwords from STDIN if \033[33mPW\033[0m is [\033[32m-\033[0m]")
    ap2.add_argument("--ah-cli", action="store_true", help="launch an interactive shell which hashes passwords without ever storing or displaying the original passwords")
    ap2.add_argument("--fk-salt", metavar="SALT", type=u, default=fk_salt, help="per-file accesskey salt; used to generate unpredictable URLs for hidden files")
+   ap2.add_argument("--dk-salt", metavar="SALT", type=u, default=dk_salt, help="per-directory accesskey salt; used to generate unpredictable URLs to share folders with users who only have the 'get' permission")
    ap2.add_argument("--warksalt", metavar="SALT", type=u, default="hunter2", help="up2k file-hash salt; serves no purpose, no reason to change this (but delete all databases if you do)")
@@ -1193,6 +1193,8 @@ def add_thumbnail(ap):


def add_transcoding(ap):
    ap2 = ap.add_argument_group('transcoding options')
    ap2.add_argument("--q-opus", metavar="KBPS", type=int, default=128, help="target bitrate for transcoding to opus; set 0 to disable")
+   ap2.add_argument("--q-mp3", metavar="QUALITY", type=u, default="q2", help="target quality for transcoding to mp3, for example [\033[32m192k\033[0m] (CBR) or [\033[32mq0\033[0m] (CQ/CRF, q0=maxquality, q9=smallest); set 0 to disable")
    ap2.add_argument("--no-acode", action="store_true", help="disable audio transcoding")
    ap2.add_argument("--no-bacode", action="store_true", help="disable batch audio transcoding by folder download (zip/tar)")
    ap2.add_argument("--ac-maxage", metavar="SEC", type=int, default=86400, help="delete cached transcode output after \033[33mSEC\033[0m seconds")

@@ -1269,6 +1271,7 @@ def add_ui(ap, retry):
    ap2.add_argument("--bname", metavar="TXT", type=u, default="--name", help="server name (displayed in filebrowser document title)")
    ap2.add_argument("--pb-url", metavar="URL", type=u, default="https://github.com/9001/copyparty", help="powered-by link; disable with \033[33m-np\033[0m")
    ap2.add_argument("--ver", action="store_true", help="show version on the control panel (incompatible with \033[33m-nb\033[0m)")
+   ap2.add_argument("--k304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable k304 on the controlpanel (workaround for buggy reverse-proxies); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
    ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox")
    ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for prologue/epilogue docs (volflag=lg_sbf)")
    ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README.md documents (volflags: no_sb_md | sb_md)")
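tying the two transcoding knobs together (values picked arbitrarily):

```bash
# opus at 96 kbps; mp3 as 192k CBR instead of the default VBR q2
copyparty --q-opus 96 --q-mp3 192k
```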
@@ -1308,6 +1311,7 @@ def run_argparse(
    cert_path = os.path.join(E.cfg, "cert.pem")

    fk_salt = get_fk_salt()
+   dk_salt = get_dk_salt()
    ah_salt = get_ah_salt()

    # alpine peaks at 5 threads for some reason,

@@ -1339,7 +1343,7 @@ def run_argparse(
    add_tftp(ap)
    add_smb(ap)
    add_safety(ap)
-   add_salt(ap, fk_salt, ah_salt)
+   add_salt(ap, fk_salt, dk_salt, ah_salt)
    add_optouts(ap)
    add_shutdown(ap)
    add_yolo(ap)
@@ -1431,6 +1435,7 @@ def main(argv: Optional[list[str]] = None) -> None:
    deprecated: list[tuple[str, str]] = [
        ("--salt", "--warksalt"),
        ("--hdr-au-usr", "--idp-h-usr"),
+       ("--idp-h-sep", "--idp-gsep"),
        ("--th-no-crop", "--th-crop=n"),
    ]
    for dk, nk in deprecated:
@@ -1,8 +1,8 @@
# coding: utf-8

-VERSION = (1, 10, 1)
-CODENAME = "tftp"
-BUILD_DT = (2024, 2, 18)
+VERSION = (1, 12, 0)
+CODENAME = "locksmith"
+BUILD_DT = (2024, 4, 6)

S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
@@ -18,7 +18,6 @@ from .cfg import flagdescs, permdescs, vf_bmap, vf_cmap, vf_vmap
from .pwhash import PWHash
from .util import (
    IMPLICATIONS,
-   META_NOBOTS,
    SQLITE_VER,
    UNPLICATIONS,
    UTC,

@@ -34,6 +33,7 @@ from .util import (
    uncyg,
    undot,
    unhumanize,
+   vsplit,
)

if True:  # pylint: disable=using-constant-test
@@ -61,6 +61,10 @@ BAD_CFG = "invalid config; {}".format(SEE_LOG)
SBADCFG = " ({})".format(BAD_CFG)


+class CfgEx(Exception):
+    pass


class AXS(object):
    def __init__(
        self,

@@ -551,7 +555,12 @@ class VFS(object):
        # no vfs nodes in the list of real inodes
        real = [x for x in real if x[0] not in self.nodes]

+       dbv = self.dbv or self
        for name, vn2 in sorted(self.nodes.items()):
+           if vn2.dbv == dbv and self.flags.get("dk"):
+               virt_vis[name] = vn2
+               continue
+
            ok = False
            zx = vn2.axs
            axs = [zx.uread, zx.uwrite, zx.umove, zx.udel, zx.uget]
@@ -780,6 +789,20 @@ class AuthSrv(object):
        self.line_ctr = 0
        self.indent = ""

        # fwd-decl
        self.vfs = VFS(log_func, "", "", AXS(), {})
        self.acct: dict[str, str] = {}
        self.iacct: dict[str, str] = {}
+       self.grps: dict[str, list[str]] = {}
        self.re_pwd: Optional[re.Pattern] = None

+       # all volumes observed since last restart
+       self.idp_vols: dict[str, str] = {}  # vpath->abspath
+
+       # all users/groups observed since last restart
+       self.idp_accs: dict[str, list[str]] = {}  # username->groupnames
+       self.idp_usr_gh: dict[str, str] = {}  # username->group-header-value (cache)

        self.mutex = threading.Lock()
        self.reload()
@@ -797,6 +820,86 @@ class AuthSrv(object):

        yield prev, True

+   def idp_checkin(
+       self, broker: Optional["BrokerCli"], uname: str, gname: str
+   ) -> bool:
+       if uname in self.acct:
+           return False
+
+       if self.idp_usr_gh.get(uname) == gname:
+           return False
+
+       gnames = [x.strip() for x in self.args.idp_gsep.split(gname)]
+       gnames.sort()
+
+       with self.mutex:
+           self.idp_usr_gh[uname] = gname
+           if self.idp_accs.get(uname) == gnames:
+               return False
+
+           self.idp_accs[uname] = gnames
+
+           t = "reinitializing due to new user from IdP: [%s:%s]"
+           self.log(t % (uname, gnames), 3)
+
+       if not broker:
+           # only true for tests
+           self._reload()
+           return True
+
+       broker.ask("_reload_blocking", False).get()
+       return True
+   def _map_volume_idp(
+       self,
+       src: str,
+       dst: str,
+       mount: dict[str, str],
+       daxs: dict[str, AXS],
+       mflags: dict[str, dict[str, Any]],
+       un_gns: dict[str, list[str]],
+   ) -> list[tuple[str, str, str, str]]:
+       ret: list[tuple[str, str, str, str]] = []
+       visited = set()
+       src0 = src  # abspath
+       dst0 = dst  # vpath
+
+       un_gn = [(un, gn) for un, gns in un_gns.items() for gn in gns]
+       if not un_gn:
+           # ensure volume creation if there's no users
+           un_gn = [("", "")]
+
+       for un, gn in un_gn:
+           # if ap/vp has a user/group placeholder, make sure to keep
+           # track so the same user/group is mapped when setting perms;
+           # otherwise clear un/gn to indicate it's a regular volume
+
+           src1 = src0.replace("${u}", un or "\n")
+           dst1 = dst0.replace("${u}", un or "\n")
+           if src0 == src1 and dst0 == dst1:
+               un = ""
+
+           src = src1.replace("${g}", gn or "\n")
+           dst = dst1.replace("${g}", gn or "\n")
+           if src == src1 and dst == dst1:
+               gn = ""
+
+           if "\n" in (src + dst):
+               continue
+
+           label = "%s\n%s" % (src, dst)
+           if label in visited:
+               continue
+           visited.add(label)
+
+           src, dst = self._map_volume(src, dst, mount, daxs, mflags)
+           if src:
+               ret.append((src, dst, un, gn))
+               if un or gn:
+                   self.idp_vols[dst] = src
+
+       return ret
    def _map_volume(
        self,
        src: str,
@@ -804,7 +907,11 @@ class AuthSrv(object):
        mount: dict[str, str],
        daxs: dict[str, AXS],
        mflags: dict[str, dict[str, Any]],
-   ) -> None:
+   ) -> tuple[str, str]:
        src = os.path.expandvars(os.path.expanduser(src))
        src = absreal(src)
        dst = dst.strip("/")

        if dst in mount:
            t = "multiple filesystem-paths mounted at [/{}]:\n  [{}]\n  [{}]"
            self.log(t.format(dst, mount[dst], src), c=1)

@@ -825,6 +932,7 @@ class AuthSrv(object):
        mount[dst] = src
        daxs[dst] = AXS()
        mflags[dst] = {}
+       return (src, dst)

    def _e(self, desc: Optional[str] = None) -> None:
        if not self.args.vc or not self.line_ctr:
@@ -852,31 +960,76 @@ class AuthSrv(object):

        self.log(t.format(self.line_ctr, c, self.indent, ln, desc))

+   def _all_un_gn(
+       self,
+       acct: dict[str, str],
+       grps: dict[str, list[str]],
+   ) -> dict[str, list[str]]:
+       """
+       generate list of all confirmed pairs of username/groupname seen since last restart;
+       in case of conflicting group memberships then it is selected as follows:
+       * any non-zero value from IdP group header
+       * otherwise take --grps / [groups]
+       """
+       ret = {un: gns[:] for un, gns in self.idp_accs.items()}
+       ret.update({zs: [""] for zs in acct if zs not in ret})
+       for gn, uns in grps.items():
+           for un in uns:
+               try:
+                   ret[un].append(gn)
+               except:
+                   ret[un] = [gn]
+
+       return ret

    def _parse_config_file(
        self,
        fp: str,
        cfg_lines: list[str],
        acct: dict[str, str],
        grps: dict[str, list[str]],
        daxs: dict[str, AXS],
        mflags: dict[str, dict[str, Any]],
        mount: dict[str, str],
    ) -> None:
        self.line_ctr = 0

-       expand_config_file(cfg_lines, fp, "")
+       expand_config_file(self.log, cfg_lines, fp, "")
        if self.args.vc:
            lns = ["{:4}: {}".format(n, s) for n, s in enumerate(cfg_lines, 1)]
            self.log("expanded config file (unprocessed):\n" + "\n".join(lns))

        cfg_lines = upgrade_cfg_fmt(self.log, self.args, cfg_lines, fp)

+       # due to IdP, volumes must be parsed after users and groups;
+       # do volumes in a 2nd pass to allow arbitrary order in config files
+       for npass in range(1, 3):
+           if self.args.vc:
+               self.log("parsing config files; pass %d/%d" % (npass, 2))
+           self._parse_config_file_2(cfg_lines, acct, grps, daxs, mflags, mount, npass)
+   def _parse_config_file_2(
+       self,
+       cfg_lines: list[str],
+       acct: dict[str, str],
+       grps: dict[str, list[str]],
+       daxs: dict[str, AXS],
+       mflags: dict[str, dict[str, Any]],
+       mount: dict[str, str],
+       npass: int,
+   ) -> None:
        self.line_ctr = 0
+       all_un_gn = self._all_un_gn(acct, grps)

        cat = ""
        catg = "[global]"
        cata = "[accounts]"
+       catgrp = "[groups]"
        catx = "accs:"
        catf = "flags:"
        ap: Optional[str] = None
        vp: Optional[str] = None
+       vols: list[tuple[str, str, str, str]] = []
        for ln in cfg_lines:
            self.line_ctr += 1
            ln = ln.split(" #")[0].strip()

@@ -889,7 +1042,7 @@ class AuthSrv(object):
            subsection = ln in (catx, catf)
            if ln.startswith("[") or subsection:
                self._e()
-               if ap is None and vp is not None:
+               if npass > 1 and ap is None and vp is not None:
                    t = "the first line after [/{}] must be a filesystem path to share on that volume"
                    raise Exception(t.format(vp))
@@ -905,6 +1058,8 @@ class AuthSrv(object):
                self._l(ln, 6, t)
            elif ln == cata:
                self._l(ln, 5, "begin user-accounts section")
+           elif ln == catgrp:
+               self._l(ln, 5, "begin user-groups section")
            elif ln.startswith("[/"):
                vp = ln[1:-1].strip("/")
                self._l(ln, 2, "define volume at URL [/{}]".format(vp))
@@ -941,15 +1096,39 @@ class AuthSrv(object):
                    raise Exception(t + SBADCFG)
                continue

+           if cat == catgrp:
+               try:
+                   gn, zs1 = [zs.strip() for zs in ln.split(":", 1)]
+                   uns = [zs.strip() for zs in zs1.split(",")]
+                   t = "group [%s] = " % (gn,)
+                   t += ", ".join("user [%s]" % (x,) for x in uns)
+                   self._l(ln, 5, t)
+                   grps[gn] = uns
+               except:
+                   t = 'lines inside the [groups] section must be "groupname: user1, user2, user..."'
+                   raise Exception(t + SBADCFG)
+               continue

            if vp is not None and ap is None:
+               if npass != 2:
+                   continue

                ap = ln
                ap = os.path.expandvars(os.path.expanduser(ap))
                ap = absreal(ap)
                self._l(ln, 2, "bound to filesystem-path [{}]".format(ap))
-               self._map_volume(ap, vp, mount, daxs, mflags)
+               vols = self._map_volume_idp(ap, vp, mount, daxs, mflags, all_un_gn)
+               if not vols:
+                   ap = vp = None
+                   self._l(ln, 2, "└─no users/groups known; was not mapped")
+               elif len(vols) > 1:
+                   for vol in vols:
+                       self._l(ln, 2, "└─mapping: [%s] => [%s]" % (vol[1], vol[0]))
                continue
            if cat == catx:
+               if npass != 2 or not ap:
+                   # not stage2, or unmapped ${u}/${g}
+                   continue

                err = ""
                try:
                    self._l(ln, 5, "volume access config:")
@@ -960,14 +1139,20 @@ class AuthSrv(object):
                    if " " in re.sub(", *", "", sv).strip():
                        err = "list of users is not comma-separated; "
                        raise Exception(err)
-                   assert vp is not None
-                   self._read_vol_str(sk, sv.replace(" ", ""), daxs[vp], mflags[vp])
+                   sv = sv.replace(" ", "")
+                   self._read_vol_str_idp(sk, sv, vols, all_un_gn, daxs, mflags)
                    continue
+               except CfgEx:
+                   raise
                except:
                    err += "accs entries must be 'rwmdgGhaA.: user1, user2, ...'"
-                   raise Exception(err + SBADCFG)
+                   raise CfgEx(err + SBADCFG)

            if cat == catf:
+               if npass != 2 or not ap:
+                   # not stage2, or unmapped ${u}/${g}
+                   continue

                err = ""
                try:
                    self._l(ln, 6, "volume-specific config:")
@@ -984,11 +1169,14 @@ class AuthSrv(object):
|
||||
else:
|
||||
fstr += ",{}={}".format(sk, sv)
|
||||
assert vp is not None
|
||||
self._read_vol_str("c", fstr[1:], daxs[vp], mflags[vp])
|
||||
self._read_vol_str_idp(
|
||||
"c", fstr[1:], vols, all_un_gn, daxs, mflags
|
||||
)
|
||||
fstr = ""
|
||||
if fstr:
|
||||
assert vp is not None
|
||||
self._read_vol_str("c", fstr[1:], daxs[vp], mflags[vp])
|
||||
self._read_vol_str_idp(
|
||||
"c", fstr[1:], vols, all_un_gn, daxs, mflags
|
||||
)
|
||||
continue
|
||||
except:
|
||||
err += "flags entries (volflags) must be one of the following:\n 'flag1, flag2, ...'\n 'key: value'\n 'flag1, flag2, key: value'"
|
||||
@@ -999,12 +1187,18 @@ class AuthSrv(object):
|
||||
self._e()
|
||||
self.line_ctr = 0
|
||||
|
||||
def _read_vol_str(
|
||||
self, lvl: str, uname: str, axs: AXS, flags: dict[str, Any]
|
||||
def _read_vol_str_idp(
|
||||
self,
|
||||
lvl: str,
|
||||
uname: str,
|
||||
vols: list[tuple[str, str, str, str]],
|
||||
un_gns: dict[str, list[str]],
|
||||
axs: dict[str, AXS],
|
||||
flags: dict[str, dict[str, Any]],
|
||||
) -> None:
|
||||
if lvl.strip("crwmdgGhaA."):
|
||||
t = "%s,%s" % (lvl, uname) if uname else lvl
|
||||
raise Exception("invalid config value (volume or volflag): %s" % (t,))
|
||||
raise CfgEx("invalid config value (volume or volflag): %s" % (t,))
|
||||
|
||||
if lvl == "c":
|
||||
# here, 'uname' is not a username; it is a volflag name... sorry
|
||||
@@ -1019,16 +1213,62 @@ class AuthSrv(object):
|
||||
while "," in uname:
|
||||
# one or more bools before the final flag; eat them
|
||||
n1, uname = uname.split(",", 1)
|
||||
self._read_volflag(flags, n1, True, False)
|
||||
for _, vp, _, _ in vols:
|
||||
self._read_volflag(flags[vp], n1, True, False)
|
||||
|
||||
for _, vp, _, _ in vols:
|
||||
self._read_volflag(flags[vp], uname, cval, False)
|
||||
|
||||
self._read_volflag(flags, uname, cval, False)
|
||||
return
|
||||
|
||||
if uname == "":
|
||||
uname = "*"
|
||||
|
||||
junkset = set()
|
||||
unames = []
|
||||
for un in uname.replace(",", " ").strip().split():
|
||||
if un.startswith("@"):
|
||||
grp = un[1:]
|
||||
uns = [x[0] for x in un_gns.items() if grp in x[1]]
|
||||
if grp == "${g}":
|
||||
unames.append(un)
|
||||
elif not uns and not self.args.idp_h_grp:
|
||||
t = "group [%s] must be defined with --grp argument (or in a [groups] config section)"
|
||||
raise CfgEx(t % (grp,))
|
||||
|
||||
unames.extend(uns)
|
||||
else:
|
||||
unames.append(un)
|
||||
|
||||
# unames may still contain ${u} and ${g} so now expand those;
|
||||
un_gn = [(un, gn) for un, gns in un_gns.items() for gn in gns]
|
||||
|
||||
for src, dst, vu, vg in vols:
|
||||
unames2 = set(unames)
|
||||
|
||||
if "${u}" in unames:
|
||||
if not vu:
|
||||
t = "cannot use ${u} in accs of volume [%s] because the volume url does not contain ${u}"
|
||||
raise CfgEx(t % (src,))
|
||||
unames2.add(vu)
|
||||
|
||||
if "@${g}" in unames:
|
||||
if not vg:
|
||||
t = "cannot use @${g} in accs of volume [%s] because the volume url does not contain @${g}"
|
||||
raise CfgEx(t % (src,))
|
||||
unames2.update([un for un, gn in un_gn if gn == vg])
|
||||
|
||||
if "${g}" in unames:
|
||||
t = 'the accs of volume [%s] contains "${g}" but the only supported way of specifying that is "@${g}"'
|
||||
raise CfgEx(t % (src,))
|
||||
|
||||
unames2.discard("${u}")
|
||||
unames2.discard("@${g}")
|
||||
|
||||
self._read_vol_str(lvl, list(unames2), axs[dst])
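# [editor's note] worked example (hypothetical data) for the expansion
# above, given vols entries shaped as (src, dst, vu, vg):
#
#   un_gns = {"ed": ["su"], "alice": ["su"]}
#   vols = [("/srv/ed", "u/ed", "ed", "su")]
#
# an accs line "rw: @${g}" then resolves, for that volume, to every user
# whose group list contains vg ("su"), i.e. {"ed", "alice"}.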

def _read_vol_str(self, lvl: str, unames: list[str], axs: AXS) -> None:
junkset = set()
for un in unames:
for alias, mapping in [
("h", "gh"),
("G", "gG"),
@@ -1105,12 +1345,18 @@ class AuthSrv(object):
then supplementing with config files
before finally building the VFS
"""
with self.mutex:
self._reload()

def _reload(self) -> None:
acct: dict[str, str] = {} # username:password
grps: dict[str, list[str]] = {} # groupname:usernames
daxs: dict[str, AXS] = {}
mflags: dict[str, dict[str, Any]] = {} # moutpoint:flags
mount: dict[str, str] = {} # dst:src (mountpoint:realpath)

self.idp_vols = {} # yolo

if self.args.a:
# list of username:password
for x in self.args.a:
@@ -1121,9 +1367,22 @@ class AuthSrv(object):
t = '\n invalid value "{}" for argument -a, must be username:password'
raise Exception(t.format(x))

if self.args.grp:
# list of groupname:username,username,...
for x in self.args.grp:
try:
# accept both = and : as separator between groupname and usernames,
# accept both , and : as separators between usernames
zs1, zs2 = x.replace("=", ":").split(":", 1)
grps[zs1] = zs2.replace(":", ",").split(",")
except:
t = '\n invalid value "{}" for argument --grp, must be groupname:username1,username2,...'
raise Exception(t.format(x))
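# [editor's note] usage example for the --grp parsing above; since "=" is
# folded into ":" and ":" into ",", all of these are equivalent and yield
# grps == {"su": ["ed", "alice"]}:
#
#   --grp su:ed,alice
#   --grp su=ed,alice
#   --grp su=ed:alice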

if self.args.v:
# list of src:dst:permset:permset:...
# permset is <rwmdgGhaA.>[,username][,username] or <c>,<flag>[=args]
all_un_gn = self._all_un_gn(acct, grps)
for v_str in self.args.v:
m = re_vol.match(v_str)
if not m:
@@ -1133,20 +1392,19 @@ class AuthSrv(object):
if WINDOWS:
src = uncyg(src)

# print("\n".join([src, dst, perms]))
src = absreal(src)
dst = dst.strip("/")
self._map_volume(src, dst, mount, daxs, mflags)
vols = self._map_volume_idp(src, dst, mount, daxs, mflags, all_un_gn)

for x in perms.split(":"):
lvl, uname = x.split(",", 1) if "," in x else [x, ""]
self._read_vol_str(lvl, uname, daxs[dst], mflags[dst])
self._read_vol_str_idp(lvl, uname, vols, all_un_gn, daxs, mflags)

if self.args.c:
for cfg_fn in self.args.c:
lns: list[str] = []
try:
self._parse_config_file(cfg_fn, lns, acct, daxs, mflags, mount)
self._parse_config_file(
cfg_fn, lns, acct, grps, daxs, mflags, mount
)

zs = "#\033[36m cfg files in "
zst = [x[len(zs) :] for x in lns if x.startswith(zs)]
@@ -1177,7 +1435,7 @@ class AuthSrv(object):

mount = cased

if not mount:
if not mount and not self.args.idp_h_usr:
# -h says our defaults are CWD at root and read/write for everyone
axs = AXS(["*"], ["*"], None, None)
vfs = VFS(self.log_func, absreal("."), "", axs, {})
@@ -1213,9 +1471,13 @@ class AuthSrv(object):
vol.all_vps.sort(key=lambda x: len(x[0]), reverse=True)
vol.root = vfs

zss = set(acct)
zss.update(self.idp_accs)
zss.discard("*")
unames = ["*"] + list(sorted(zss))

for perm in "read write move del get pget html admin dot".split():
axs_key = "u" + perm
unames = ["*"] + list(acct.keys())
for vp, vol in vfs.all_vols.items():
zx = getattr(vol.axs, axs_key)
if "*" in zx:
@@ -1249,18 +1511,20 @@ class AuthSrv(object):
]:
for usr in d:
all_users[usr] = 1
if usr != "*" and usr not in acct:
if usr != "*" and usr not in acct and usr not in self.idp_accs:
missing_users[usr] = 1
if "*" not in d:
associated_users[usr] = 1

if missing_users:
self.log(
"you must -a the following users: "
+ ", ".join(k for k in sorted(missing_users)),
c=1,
)
raise Exception(BAD_CFG)
zs = ", ".join(k for k in sorted(missing_users))
if self.args.idp_h_usr:
t = "the following users are unknown, and assumed to come from IdP: "
self.log(t + zs, c=6)
else:
t = "you must -a the following users: "
self.log(t + zs, c=1)
raise Exception(BAD_CFG)
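# [editor's note] net behavior of the branch above: with --idp-h-usr set,
# users referenced in volume accs but absent from -a/[accounts] are
# assumed to arrive later via the IdP header, so startup only warns (c=6)
# instead of aborting with BAD_CFG.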

if LEELOO_DALLAS in all_users:
raise Exception("sorry, reserved username: " + LEELOO_DALLAS)
@@ -1401,13 +1665,6 @@ class AuthSrv(object):
if not vol.flags.get("robots"):
vol.flags["norobots"] = True

for vol in vfs.all_vols.values():
h = [vol.flags.get("html_head", self.args.html_head)]
if vol.flags.get("norobots"):
h.insert(0, META_NOBOTS)

vol.flags["html_head"] = "\n".join([x for x in h if x])

for vol in vfs.all_vols.values():
if self.args.no_vthumb:
vol.flags["dvthumb"] = True
@@ -1429,6 +1686,20 @@ class AuthSrv(object):
vol.flags["fk"] = int(fk) if fk is not True else 8
have_fk = True

dk = vol.flags.get("dk")
dks = vol.flags.get("dks")
dky = vol.flags.get("dky")
if dks is not None and dky is not None:
t = "WARNING: volume /%s has both dks and dky enabled; this is too yolo and not permitted"
raise Exception(t % (vol.vpath,))

if dks and not dk:
dk = dks
if dky and not dk:
dk = dky
if dk:
vol.flags["dk"] = int(dk) if dk is not True else 8
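# [editor's note] hedged reading of the dk/dks/dky volflags handled above:
# "dk" enables directory-keys N chars long (8 if unspecified), "dks" lets
# a key also unlock subfolders, and "dky" accepts the folder without any
# ?k= parameter; dks+dky together is rejected, and either one implies a
# "dk" length when none was given explicitly.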

if have_fk and re.match(r"^[0-9\.]+$", self.args.fk_salt):
self.log("filekey salt: {}".format(self.args.fk_salt))

@@ -1485,7 +1756,7 @@ class AuthSrv(object):
if k not in vol.flags:
vol.flags[k] = getattr(self.args, k)

for k in ("nrand",):
for k in ("nrand", "u2abort"):
if k in vol.flags:
vol.flags[k] = int(vol.flags[k])

@@ -1749,20 +2020,37 @@ class AuthSrv(object):
except Pebkac:
self.warn_anonwrite = True

with self.mutex:
self.vfs = vfs
self.acct = acct
self.iacct = {v: k for k, v in acct.items()}
idp_err = "WARNING! The following IdP volumes are mounted directly below another volume where anonymous users can read and/or write files. This is a SECURITY HAZARD!! When copyparty is restarted, it will not know about these IdP volumes yet. These volumes will then be accessible by anonymous users UNTIL one of the users associated with their volume sends a request to the server. RECOMMENDATION: You should create a restricted volume where nobody can read/write files, and make sure that all IdP volumes are configured to appear somewhere below that volume."
for idp_vp in self.idp_vols:
parent_vp = vsplit(idp_vp)[0]
vn, _ = vfs.get(parent_vp, "*", False, False)
zs = (
"READABLE"
if "*" in vn.axs.uread
else "WRITABLE"
if "*" in vn.axs.uwrite
else ""
)
if zs:
t = '\nWARNING: Volume "/%s" appears below "/%s" and would be WORLD-%s'
idp_err += t % (idp_vp, vn.vpath, zs)
if "\n" in idp_err:
self.log(idp_err, 1)

self.re_pwd = None
pwds = [re.escape(x) for x in self.iacct.keys()]
if pwds:
if self.ah.on:
zs = r"(\[H\] pw:.*|[?&]pw=)([^&]+)"
else:
zs = r"(\[H\] pw:.*|=)(" + "|".join(pwds) + r")([]&; ]|$)"
self.vfs = vfs
self.acct = acct
self.grps = grps
self.iacct = {v: k for k, v in acct.items()}

self.re_pwd = re.compile(zs)
self.re_pwd = None
pwds = [re.escape(x) for x in self.iacct.keys()]
if pwds:
if self.ah.on:
zs = r"(\[H\] pw:.*|[?&]pw=)([^&]+)"
else:
zs = r"(\[H\] pw:.*|=)(" + "|".join(pwds) + r")([]&; ]|$)"

self.re_pwd = re.compile(zs)

def setup_pwhash(self, acct: dict[str, str]) -> None:
self.ah = PWHash(self.args)
@@ -2004,6 +2292,12 @@ class AuthSrv(object):
ret.append(" {}: {}".format(u, p))
ret.append("")

if self.grps:
ret.append("[groups]")
for gn, uns in self.grps.items():
ret.append(" %s: %s" % (gn, ", ".join(uns)))
ret.append("")

for vol in self.vfs.all_vols.values():
ret.append("[/{}]".format(vol.vpath))
ret.append(" " + vol.realpath)
@@ -2101,27 +2395,50 @@ def split_cfg_ln(ln: str) -> dict[str, Any]:
return ret


def expand_config_file(ret: list[str], fp: str, ipath: str) -> None:
def expand_config_file(
log: Optional["NamedLogger"], ret: list[str], fp: str, ipath: str
) -> None:
"""expand all % file includes"""
fp = absreal(fp)
if len(ipath.split(" -> ")) > 64:
raise Exception("hit max depth of 64 includes")

if os.path.isdir(fp):
names = os.listdir(fp)
crumb = "#\033[36m cfg files in {} => {}\033[0m".format(fp, names)
ret.append(crumb)
for fn in sorted(names):
names = list(sorted(os.listdir(fp)))
cnames = [x for x in names if x.lower().endswith(".conf")]
if not cnames:
t = "warning: tried to read config-files from folder '%s' but it does not contain any "
if names:
t += ".conf files; the following files were ignored: %s"
t = t % (fp, ", ".join(names[:8]))
else:
t += "files at all"
t = t % (fp,)

if log:
log(t, 3)

ret.append("#\033[33m %s\033[0m" % (t,))
else:
zs = "#\033[36m cfg files in %s => %s\033[0m" % (fp, cnames)
ret.append(zs)

for fn in cnames:
fp2 = os.path.join(fp, fn)
if not fp2.endswith(".conf") or fp2 in ipath:
if fp2 in ipath:
continue

expand_config_file(ret, fp2, ipath)
expand_config_file(log, ret, fp2, ipath)

if ret[-1] == crumb:
# no config files below; remove breadcrumb
ret.pop()
return

if not os.path.exists(fp):
t = "warning: tried to read config from '%s' but the file/folder does not exist"
t = t % (fp,)
if log:
log(t, 3)

ret.append("#\033[31m %s\033[0m" % (t,))
return

ipath += " -> " + fp
@@ -2135,7 +2452,7 @@ def expand_config_file(ret: list[str], fp: str, ipath: str) -> None:
fp2 = ln[1:].strip()
fp2 = os.path.join(os.path.dirname(fp), fp2)
ofs = len(ret)
expand_config_file(ret, fp2, ipath)
expand_config_file(log, ret, fp2, ipath)
for n in range(ofs, len(ret)):
ret[n] = pad + ret[n]
continue
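# [editor's note] minimal sketch of the include mechanism above: a line
# starting with "%" splices the referenced file (relative to the current
# one) into the flat line list, capped at 64 nested includes, and a
# dir-include now picks up only the *.conf files inside. Hypothetical
# example:
#
#   # main.conf
#   % accounts.conf
#   [/share]
#   /srv/share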

@@ -66,6 +66,7 @@ def vf_vmap() -> dict[str, str]:
"rm_retry",
"sort",
"unlist",
"u2abort",
"u2ts",
):
ret[k] = k
@@ -116,6 +117,7 @@ flagcats = {
"hardlink": "does dedup with hardlinks instead of symlinks",
"neversymlink": "disables symlink fallback; full copy instead",
"copydupes": "disables dedup, always saves full copies of dupes",
"sparse": "force use of sparse files, mainly for s3-backed storage",
"daw": "enable full WebDAV write support (dangerous);\nPUT-operations will now \033[1;31mOVERWRITE\033[0;35m existing files",
"nosub": "forces all uploads into the top folder of the vfs",
"magic": "enables filetype detection for nameless uploads",
@@ -130,6 +132,7 @@ flagcats = {
"rand": "force randomized filenames, 9 chars long by default",
"nrand=N": "randomized filenames are N chars long",
"u2ts=fc": "[f]orce [c]lient-last-modified or [u]pload-time",
"u2abort=1": "allow aborting unfinished uploads? 0=no 1=strict 2=ip-chk 3=acct-chk",
"sz=1k-3m": "allow filesizes between 1 KiB and 3MiB",
"df=1g": "ensure 1 GiB free disk space",
},

@@ -20,6 +20,7 @@ from .authsrv import VFS
from .bos import bos
from .util import (
Daemon,
ODict,
Pebkac,
exclude_dotfiles,
fsenc,
@@ -217,7 +218,7 @@ class FtpFs(AbstractedFS):
raise FSE("Cannot open existing file for writing")

self.validpath(ap)
return open(fsenc(ap), mode)
return open(fsenc(ap), mode, self.args.iobuf)

def chdir(self, path: str) -> None:
nwd = join(self.cwd, path)
@@ -299,7 +300,7 @@ class FtpFs(AbstractedFS):

vp = join(self.cwd, path).lstrip("/")
try:
self.hub.up2k.handle_rm(self.uname, self.h.cli_ip, [vp], [], False)
self.hub.up2k.handle_rm(self.uname, self.h.cli_ip, [vp], [], False, False)
except Exception as ex:
raise FSE(str(ex))

@@ -409,7 +410,7 @@ class FtpHandler(FTPHandler):
if cip.startswith("::ffff:"):
cip = cip[7:]

if self.args.ftp_ipa_re and not self.args.ftp_ipa_re.match(cip):
if self.args.ftp_ipa_nm and not self.args.ftp_ipa_nm.map(cip):
logging.warning("client rejected (--ftp-ipa): %s", cip)
self.connected = False
conn.close()
@@ -545,6 +546,8 @@ class Ftpd(object):
if self.args.ftp4:
ips = [x for x in ips if ":" not in x]

ips = list(ODict.fromkeys(ips)) # dedup

ioloop = IOLoop()
for ip in ips:
for h, lp in hs:

@@ -36,6 +36,7 @@ from .bos import bos
from .star import StreamTar
from .sutil import StreamArc, gfilter
from .szip import StreamZip
from .util import unquote # type: ignore
from .util import (
APPLESAN_RE,
BITNESS,
@@ -84,7 +85,6 @@ from .util import (
sendfile_py,
undot,
unescape_cookie,
unquote, # type: ignore
unquotep,
vjoin,
vol_san,
@@ -170,16 +170,11 @@ class HttpCli(object):
self.can_dot = False
self.out_headerlist: list[tuple[str, str]] = []
self.out_headers: dict[str, str] = {}
self.html_head = " "
# post
self.parser: Optional[MultipartParser] = None
# end placeholders

self.bufsz = 1024 * 32
h = self.args.html_head
if self.args.no_robots:
h = META_NOBOTS + (("\n" + h) if h else "")
self.html_head = h
self.html_head = ""

def log(self, msg: str, c: Union[int, str] = 0) -> None:
ptn = self.asrv.re_pwd
@@ -231,13 +226,11 @@ class HttpCli(object):
"Vary": "Origin, PW, Cookie",
"Cache-Control": "no-store, max-age=0",
}
if self.args.no_robots:
self.out_headers["X-Robots-Tag"] = "noindex, nofollow"

if self.is_banned():
if self.args.early_ban and self.is_banned():
return False

if self.args.ipa_re and not self.args.ipa_re.match(self.conn.addr[0]):
if self.conn.ipa_nm and not self.conn.ipa_nm.map(self.conn.addr[0]):
self.log("client rejected (--ipa)", 3)
self.terse_reply(b"", 500)
return False
@@ -300,6 +293,7 @@ class HttpCli(object):
zs = "%s:%s" % self.s.getsockname()[:2]
self.host = zs[7:] if zs.startswith("::ffff:") else zs

trusted_xff = False
n = self.args.rproxy
if n:
zso = self.headers.get(self.args.xff_hdr)
@@ -316,21 +310,26 @@ class HttpCli(object):
self.log(t.format(self.args.rproxy, zso), c=3)

pip = self.conn.addr[0]
if self.args.xff_re and not self.args.xff_re.match(pip):
t = 'got header "%s" from untrusted source "%s" claiming the true client ip is "%s" (raw value: "%s"); if you trust this, you must allowlist this proxy with "--xff-src=%s"'
xffs = self.conn.xff_nm
if xffs and not xffs.map(pip):
t = 'got header "%s" from untrusted source "%s" claiming the true client ip is "%s" (raw value: "%s"); if you trust this, you must allowlist this proxy with "--xff-src=%s"%s'
if self.headers.get("cf-connecting-ip"):
t += " Alternatively, if you are behind cloudflare, it is better to specify these two instead: --xff-hdr=cf-connecting-ip --xff-src=any"
t += ' Note: if you are behind cloudflare, then this default header is not a good choice; please first make sure your local reverse-proxy (if any) does not allow non-cloudflare IPs from providing cf-* headers, and then add this additional global setting: "--xff-hdr=cf-connecting-ip"'
else:
t += ' Note: depending on your reverse-proxy, and/or WAF, and/or other intermediates, you may want to read the true client IP from another header by also specifying "--xff-hdr=SomeOtherHeader"'
zs = (
".".join(pip.split(".")[:2]) + "."
if "." in pip
else ":".join(pip.split(":")[:4]) + ":"
)
self.log(t % (self.args.xff_hdr, pip, cli_ip, zso, zs), 3)
) + "0.0/16"
zs2 = ' or "--xff-src=lan"' if self.conn.xff_lan.map(pip) else ""
self.log(t % (self.args.xff_hdr, pip, cli_ip, zso, zs, zs2), 3)
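# [editor's note] hedged summary of the change above: the regex check
# (xff_re.match) is replaced by a NetMap lookup (xff_nm.map), so
# --xff-src takes CIDR ranges (or "lan") instead of an ip-regex, and the
# suggested allowlist is derived from the proxy's own address, e.g. a
# proxy at 10.1.2.3 produces the hint "--xff-src=10.1.0.0/16".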
else:
self.ip = cli_ip
self.is_vproxied = bool(self.args.R)
self.log_src = self.conn.set_rproxy(self.ip)
self.host = self.headers.get("x-forwarded-host") or self.host
trusted_xff = True

if self.is_banned():
return False
@@ -458,9 +457,56 @@ class HttpCli(object):

if self.args.idp_h_usr:
self.pw = ""
self.uname = self.headers.get(self.args.idp_h_usr) or "*"
if self.uname not in self.asrv.vfs.aread:
self.log("unknown username: [%s]" % (self.uname), 1)
idp_usr = self.headers.get(self.args.idp_h_usr) or ""
if idp_usr:
idp_grp = (
self.headers.get(self.args.idp_h_grp) or ""
if self.args.idp_h_grp
else ""
)

if not trusted_xff:
pip = self.conn.addr[0]
xffs = self.conn.xff_nm
trusted_xff = xffs and xffs.map(pip)

trusted_key = (
not self.args.idp_h_key
) or self.args.idp_h_key in self.headers

if trusted_key and trusted_xff:
self.asrv.idp_checkin(self.conn.hsrv.broker, idp_usr, idp_grp)
else:
if not trusted_key:
t = 'the idp-h-key header ("%s") is not present in the request; will NOT trust the other headers saying that the client\'s username is "%s" and group is "%s"'
self.log(t % (self.args.idp_h_key, idp_usr, idp_grp), 3)

if not trusted_xff:
t = 'got IdP headers from untrusted source "%s" claiming the client\'s username is "%s" and group is "%s"; if you trust this, you must allowlist this proxy with "--xff-src=%s"%s'
if not self.args.idp_h_key:
t += " Note: you probably also want to specify --idp-h-key <SECRET-HEADER-NAME> for additional security"

pip = self.conn.addr[0]
zs = (
".".join(pip.split(".")[:2]) + "."
if "." in pip
else ":".join(pip.split(":")[:4]) + ":"
) + "0.0/16"
zs2 = (
' or "--xff-src=lan"' if self.conn.xff_lan.map(pip) else ""
)
self.log(t % (pip, idp_usr, idp_grp, zs, zs2), 3)

idp_usr = "*"
idp_grp = ""

if idp_usr in self.asrv.vfs.aread:
self.uname = idp_usr
self.html_head += "<script>var is_idp=1</script>\n"
else:
self.log("unknown username: [%s]" % (idp_usr), 1)
self.uname = "*"
else:
self.uname = "*"
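# [editor's note] decision table for the IdP handling above, paraphrased:
# the claimed username/group is accepted only when the request arrived
# through an allowlisted proxy (trusted_xff) AND --idp-h-key is either
# unset or its secret header is present (trusted_key); any other
# combination logs a warning and downgrades the request to the anonymous
# user "*".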
else:
self.pw = uparam.get("pw") or self.headers.get("pw") or bauth or cookie_pw
@@ -510,6 +556,10 @@ class HttpCli(object):

self.s.settimeout(self.args.s_tbody or None)

if "norobots" in vn.flags:
self.html_head += META_NOBOTS
self.out_headers["X-Robots-Tag"] = "noindex, nofollow"

try:
cors_k = self._cors()
if self.mode in ("GET", "HEAD"):
@@ -518,9 +568,13 @@ class HttpCli(object):
return self.handle_options() and self.keepalive

if not cors_k:
host = self.headers.get("host", "<?>")
origin = self.headers.get("origin", "<?>")
self.log("cors-reject {} from {}".format(self.mode, origin), 3)
raise Pebkac(403, "no surfing")
proto = "https://" if self.is_https else "http://"
guess = "modifying" if (origin and host) else "stripping"
t = "cors-reject %s because request-header Origin='%s' does not match request-protocol '%s' and host '%s' based on request-header Host='%s' (note: if this request is not malicious, check if your reverse-proxy is accidentally %s request headers, in particular 'Origin', for example by running copyparty with --ihead='*' to show all request headers)"
self.log(t % (self.mode, origin, proto, self.host, host, guess), 3)
raise Pebkac(403, "rejected by cors-check")

# getattr(self.mode) is not yet faster than this
if self.mode == "POST":
@@ -651,7 +705,11 @@ class HttpCli(object):

def k304(self) -> bool:
k304 = self.cookies.get("k304")
return k304 == "y" or ("; Trident/" in self.ua and not k304)
return (
k304 == "y"
or (self.args.k304 == 2 and k304 != "n")
or ("; Trident/" in self.ua and not k304)
)
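# [editor's note] resulting k304 truth table, as read from the code above:
#   cookie "y"                        -> True  (user opt-in)
#   cookie "n"                        -> False (overrides --k304=2)
#   no cookie, --k304=2               -> True  (server default)
#   no cookie, MSIE ("; Trident/" UA) -> True  (browser workaround)
#   otherwise                         -> False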

def send_headers(
self,
@@ -1552,15 +1610,16 @@ class HttpCli(object):
return enc or "utf-8"

def get_body_reader(self) -> tuple[Generator[bytes, None, None], int]:
bufsz = self.args.s_rd_sz
if "chunked" in self.headers.get("transfer-encoding", "").lower():
return read_socket_chunked(self.sr), -1
return read_socket_chunked(self.sr, bufsz), -1

remains = int(self.headers.get("content-length", -1))
if remains == -1:
self.keepalive = False
return read_socket_unbounded(self.sr), remains
return read_socket_unbounded(self.sr, bufsz), remains
else:
return read_socket(self.sr, remains), remains
return read_socket(self.sr, bufsz, remains), remains

def dump_to_file(self, is_put: bool) -> tuple[int, str, str, int, str, str]:
# post_sz, sha_hex, sha_b64, remains, path, url
@@ -1582,7 +1641,7 @@ class HttpCli(object):
bos.makedirs(fdir)

open_ka: dict[str, Any] = {"fun": open}
open_a = ["wb", 512 * 1024]
open_a = ["wb", self.args.iobuf]

# user-request || config-force
if ("gz" in vfs.flags or "xz" in vfs.flags) and (
@@ -1841,7 +1900,7 @@ class HttpCli(object):
f.seek(ofs)
with open(fp, "wb") as fo:
while nrem:
buf = f.read(min(nrem, 512 * 1024))
buf = f.read(min(nrem, self.args.iobuf))
if not buf:
break

@@ -1863,7 +1922,7 @@ class HttpCli(object):
return "%s %s n%s" % (spd1, spd2, self.conn.nreq)

def handle_post_multipart(self) -> bool:
self.parser = MultipartParser(self.log, self.sr, self.headers)
self.parser = MultipartParser(self.log, self.args, self.sr, self.headers)
self.parser.parse()

file0: list[tuple[str, Optional[str], Generator[bytes, None, None]]] = []
@@ -1907,7 +1966,12 @@ class HttpCli(object):

v = self.uparam[k]

vn, rem = self.asrv.vfs.get(self.vpath, self.uname, True, False)
if self._use_dirkey():
vn = self.vn
rem = self.rem
else:
vn, rem = self.asrv.vfs.get(self.vpath, self.uname, True, False)

zs = self.parser.require("files", 1024 * 1024)
if not zs:
raise Pebkac(422, "need files list")
@@ -2092,7 +2156,7 @@ class HttpCli(object):

self.log("writing {} #{} @{} len {}".format(path, chash, cstart, remains))

reader = read_socket(self.sr, remains)
reader = read_socket(self.sr, self.args.s_rd_sz, remains)

f = None
fpool = not self.args.no_fpool and sprs
@@ -2103,7 +2167,7 @@ class HttpCli(object):
except:
pass

f = f or open(fsenc(path), "rb+", 512 * 1024)
f = f or open(fsenc(path), "rb+", self.args.iobuf)

try:
f.seek(cstart[0])
@@ -2126,7 +2190,8 @@ class HttpCli(object):
)
ofs = 0
while ofs < chunksize:
bufsz = min(chunksize - ofs, 4 * 1024 * 1024)
bufsz = max(4 * 1024 * 1024, self.args.iobuf)
bufsz = min(chunksize - ofs, bufsz)
f.seek(cstart[0] + ofs)
buf = f.read(bufsz)
for wofs in cstart[1:]:
@@ -2379,6 +2444,18 @@ class HttpCli(object):
suffix = "-{:.6f}-{}".format(time.time(), dip)
open_args = {"fdir": fdir, "suffix": suffix}

if "replace" in self.uparam:
abspath = os.path.join(fdir, fname)
if not self.can_delete:
self.log("user not allowed to overwrite with ?replace")
elif bos.path.exists(abspath):
try:
wunlink(self.log, abspath, vfs.flags)
t = "overwriting file with new upload: %s"
except:
t = "toctou while deleting for ?replace: %s"
self.log(t % (abspath,))

# reserve destination filename
with ren_open(fname, "wb", fdir=fdir, suffix=suffix) as zfw:
fname = zfw["orz"][1]
@@ -2423,7 +2500,7 @@ class HttpCli(object):
v2 = lim.dfv - lim.dfl
max_sz = min(v1, v2) if v1 and v2 else v1 or v2

with ren_open(tnam, "wb", 512 * 1024, **open_args) as zfw:
with ren_open(tnam, "wb", self.args.iobuf, **open_args) as zfw:
f, tnam = zfw["orz"]
tabspath = os.path.join(fdir, tnam)
self.log("writing to {}".format(tabspath))
@@ -2719,7 +2796,7 @@ class HttpCli(object):
if bos.path.exists(fp):
wunlink(self.log, fp, vfs.flags)

with open(fsenc(fp), "wb", 512 * 1024) as f:
with open(fsenc(fp), "wb", self.args.iobuf) as f:
sz, sha512, _ = hashcopy(p_data, f, self.args.s_wr_slp)

if lim:
@@ -2798,6 +2875,30 @@ class HttpCli(object):

return file_lastmod, True

def _use_dirkey(self, ap: str = "") -> bool:
if self.can_read or not self.can_get:
return False

if self.vn.flags.get("dky"):
return True

req = self.uparam.get("k") or ""
if not req:
return False

dk_len = self.vn.flags.get("dk")
if not dk_len:
return False

ap = ap or self.vn.canonical(self.rem)
zs = self.gen_fk(2, self.args.dk_salt, ap, 0, 0)[:dk_len]
if req == zs:
return True

t = "wrong dirkey, want %s, got %s\n vp: %s\n ap: %s"
self.log(t % (zs, req, self.req, ap), 6)
return False
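# [editor's note] essence of _use_dirkey above as a hedged sketch: a
# dirkey is the first dk_len chars of the salted filekey-hash of the
# folder's absolute path, with mtime/size pinned to 0 so it stays stable
# until --dk-salt changes:
#
#   expect = self.gen_fk(2, self.args.dk_salt, ap, 0, 0)[:dk_len]
#   granted = (self.uparam.get("k") == expect)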

def _expand(self, txt: str, phs: list[str]) -> str:
for ph in phs:
if ph.startswith("hdr."):
@@ -2827,11 +2928,11 @@ class HttpCli(object):
logtail = ""

#
# if request is for foo.js, check if we have foo.js.{gz,br}
# if request is for foo.js, check if we have foo.js.gz

file_ts = 0.0
editions: dict[str, tuple[str, int]] = {}
for ext in ["", ".gz", ".br"]:
for ext in ("", ".gz"):
try:
fs_path = req_path + ext
st = bos.stat(fs_path)
@@ -2876,12 +2977,7 @@ class HttpCli(object):
x.strip()
for x in self.headers.get("accept-encoding", "").lower().split(",")
]
if ".br" in editions and "br" in supported_editions:
is_compressed = True
selected_edition = ".br"
fs_path, file_sz = editions[".br"]
self.out_headers["Content-Encoding"] = "br"
elif ".gz" in editions:
if ".gz" in editions:
is_compressed = True
selected_edition = ".gz"
fs_path, file_sz = editions[".gz"]
@@ -2897,13 +2993,8 @@ class HttpCli(object):
is_compressed = False
selected_edition = "plain"

try:
fs_path, file_sz = editions[selected_edition]
logmsg += "{} ".format(selected_edition.lstrip("."))
except:
# client is old and we only have .br
# (could make brotli a dep to fix this but it's not worth)
raise Pebkac(404)
fs_path, file_sz = editions[selected_edition]
logmsg += "{} ".format(selected_edition.lstrip("."))

#
# partial
@@ -2961,8 +3052,7 @@ class HttpCli(object):
upper = gzip_orig_sz(fs_path)
else:
open_func = open
# 512 kB is optimal for huge files, use 64k
open_args = [fsenc(fs_path), "rb", 64 * 1024]
open_args = [fsenc(fs_path), "rb", self.args.iobuf]
use_sendfile = (
# fmt: off
not self.tls
@@ -3087,7 +3177,7 @@ class HttpCli(object):
# for f in fgen: print(repr({k: f[k] for k in ["vp", "ap"]}))
cfmt = ""
if self.thumbcli and not self.args.no_bacode:
for zs in ("opus", "w", "j"):
for zs in ("opus", "mp3", "w", "j"):
if zs in self.ouparam or uarg == zs:
cfmt = zs

@@ -3097,6 +3187,7 @@ class HttpCli(object):

bgen = packer(
self.log,
self.args,
fgen,
utf8="utf" in uarg,
pre_crc="crc" in uarg,
@@ -3139,11 +3230,15 @@ class HttpCli(object):

ext = ext.rstrip(".") or "unk"
if len(ext) > 11:
ext = "⋯" + ext[-9:]
ext = "~" + ext[-9:]

return self.tx_svg(ext, exact)

def tx_svg(self, txt: str, small: bool = False) -> bool:
# chrome cannot handle more than ~2000 unique SVGs
chrome = " rv:" not in self.ua
mime, ico = self.ico.get(ext, not exact, chrome)
# so url-param "raster" returns a png/webp instead
# (useragent-sniffing kinshi due to caching proxies)
mime, ico = self.ico.get(txt, not small, "raster" in self.uparam)

lm = formatdate(self.E.t0, usegmt=True)
self.reply(ico, mime=mime, headers={"Last-Modified": lm})
@@ -3170,7 +3265,7 @@ class HttpCli(object):
sz_md = 0
lead = b""
fullfile = b""
for buf in yieldfile(fs_path):
for buf in yieldfile(fs_path, self.args.iobuf):
if sz_md < max_sz:
fullfile += buf
else:
@@ -3243,7 +3338,7 @@ class HttpCli(object):
if fullfile:
self.s.sendall(fullfile)
else:
for buf in yieldfile(fs_path):
for buf in yieldfile(fs_path, self.args.iobuf):
self.s.sendall(html_bescape(buf))

self.s.sendall(html[1])
@@ -3339,6 +3434,8 @@ class HttpCli(object):
self.reply(zb, mime="text/plain; charset=utf-8")
return True

self.html_head += self.vn.flags.get("html_head", "")

html = self.j2s(
"splash",
this=self,
@@ -3354,6 +3451,7 @@ class HttpCli(object):
dbwt=vs["dbwt"],
url_suf=suf,
k304=self.k304(),
k304vis=self.args.k304 > 0,
ver=S_VERSION if self.args.ver else "",
ahttps="" if self.is_https else "https://" + self.host + self.req,
)
@@ -3362,7 +3460,7 @@ class HttpCli(object):

def set_k304(self) -> bool:
v = self.uparam["k304"].lower()
if v == "y":
if v in "yn":
dur = 86400 * 299
else:
dur = 0
@@ -3407,6 +3505,9 @@ class HttpCli(object):
self.reply(pt.encode("utf-8"), status=rc)
return True

if "th" in self.ouparam:
return self.tx_svg("e" + pt[:3])

t = t.format(self.args.SR)
qv = quotep(self.vpaths) + self.ourlq()
html = self.j2s("splash", this=self, qvpath=qv, msg=t)
@@ -3485,7 +3586,7 @@ class HttpCli(object):

dst = dst[len(top) + 1 :]

ret = self.gen_tree(top, dst)
ret = self.gen_tree(top, dst, self.uparam.get("k", ""))
if self.is_vproxied:
parents = self.args.R.split("/")
for parent in reversed(parents):
@@ -3495,18 +3596,25 @@ class HttpCli(object):
self.reply(zs.encode("utf-8"), mime="application/json")
return True

def gen_tree(self, top: str, target: str) -> dict[str, Any]:
def gen_tree(self, top: str, target: str, dk: str) -> dict[str, Any]:
ret: dict[str, Any] = {}
excl = None
if target:
excl, target = (target.split("/", 1) + [""])[:2]
sub = self.gen_tree("/".join([top, excl]).strip("/"), target)
sub = self.gen_tree("/".join([top, excl]).strip("/"), target, dk)
ret["k" + quotep(excl)] = sub

vfs = self.asrv.vfs
dk_sz = False
if dk:
vn, rem = vfs.get(top, self.uname, False, False)
if vn.flags.get("dks") and self._use_dirkey(vn.canonical(rem)):
dk_sz = vn.flags.get("dk")

dots = False
fsroot = ""
try:
vn, rem = vfs.get(top, self.uname, True, False)
vn, rem = vfs.get(top, self.uname, not dk_sz, False)
fsroot, vfs_ls, vfs_virt = vn.ls(
rem,
self.uname,
@@ -3514,7 +3622,9 @@ class HttpCli(object):
[[True, False], [False, True]],
)
dots = self.uname in vn.axs.udot
dk_sz = vn.flags.get("dk")
except:
dk_sz = None
vfs_ls = []
vfs_virt = {}
for v in self.rvol:
@@ -3529,6 +3639,14 @@ class HttpCli(object):

dirs = [quotep(x) for x in dirs if x != excl]

if dk_sz and fsroot:
kdirs = []
for dn in dirs:
ap = os.path.join(fsroot, dn)
zs = self.gen_fk(2, self.args.dk_salt, ap, 0, 0)[:dk_sz]
kdirs.append(dn + "?k=" + zs)
dirs = kdirs
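# [editor's note] illustration (made-up key) of the loop above: when the
# parent folder was unlocked via dks, each child dir in the nav-tree gets
# its own key appended, e.g. "photos" -> "photos?k=3fk9qxu2", letting the
# client keep browsing without ever seeing the salt.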

for x in vfs_virt:
if x != excl:
try:
@@ -3542,9 +3660,6 @@ class HttpCli(object):
return ret

def tx_ups(self) -> bool:
if not self.args.unpost:
raise Pebkac(403, "the unpost feature is disabled in server config")

idx = self.conn.get_u2idx()
if not idx or not hasattr(idx, "p_end"):
raise Pebkac(500, "sqlite3 is not available on the server; cannot unpost")
@@ -3562,7 +3677,20 @@ class HttpCli(object):
if "fk" in vol.flags
and (self.uname in vol.axs.uread or self.uname in vol.axs.upget)
}
for vol in self.asrv.vfs.all_vols.values():

x = self.conn.hsrv.broker.ask(
"up2k.get_unfinished_by_user", self.uname, self.ip
)
uret = x.get()

if not self.args.unpost:
allvols = []
else:
allvols = list(self.asrv.vfs.all_vols.values())

allvols = [x for x in allvols if "e2d" in x.flags]

for vol in allvols:
cur = idx.get_cur(vol.realpath)
if not cur:
continue
@@ -3614,9 +3742,13 @@ class HttpCli(object):
for v in ret:
v["vp"] = self.args.SR + v["vp"]

jtxt = json.dumps(ret, indent=2, sort_keys=True).encode("utf-8", "replace")
self.log("{} #{} {:.2f}sec".format(lm, len(ret), time.time() - t0))
self.reply(jtxt, mime="application/json")
if not allvols:
ret = [{"kinshi": 1}]

jtxt = '{"u":%s,"c":%s}' % (uret, json.dumps(ret, indent=0))
zi = len(uret.split('\n"pd":')) - 1
self.log("%s #%d+%d %.2fsec" % (lm, zi, len(ret), time.time() - t0))
self.reply(jtxt.encode("utf-8", "replace"), mime="application/json")
return True

def handle_rm(self, req: list[str]) -> bool:
@@ -3631,11 +3763,12 @@ class HttpCli(object):
elif self.is_vproxied:
req = [x[len(self.args.SR) :] for x in req]

unpost = "unpost" in self.uparam
nlim = int(self.uparam.get("lim") or 0)
lim = [nlim, nlim] if nlim else []

x = self.conn.hsrv.broker.ask(
"up2k.handle_rm", self.uname, self.ip, req, lim, False
"up2k.handle_rm", self.uname, self.ip, req, lim, False, unpost
)
self.loud_reply(x.get())
return True
@@ -3773,13 +3906,12 @@ class HttpCli(object):
e2d = "e2d" in vn.flags
e2t = "e2t" in vn.flags

self.html_head = vn.flags.get("html_head", "")
if vn.flags.get("norobots") or "b" in self.uparam:
self.html_head += vn.flags.get("html_head", "")
if "b" in self.uparam:
self.out_headers["X-Robots-Tag"] = "noindex, nofollow"
else:
self.out_headers.pop("X-Robots-Tag", None)

is_dir = stat.S_ISDIR(st.st_mode)
is_dk = False
fk_pass = False
icur = None
if is_dir and (e2t or e2d):
@@ -3787,12 +3919,15 @@ class HttpCli(object):
if idx and hasattr(idx, "p_end"):
icur = idx.get_cur(dbv.realpath)

if self.can_read:
th_fmt = self.uparam.get("th")
th_fmt = self.uparam.get("th")
if self.can_read or (self.can_get and vn.flags.get("dk")):
if th_fmt is not None:
nothumb = "dthumb" in dbv.flags
if is_dir:
vrem = vrem.rstrip("/")
if icur and vrem:
if nothumb:
pass
elif icur and vrem:
q = "select fn from cv where rd=? and dn=?"
crd, cdn = vrem.rsplit("/", 1) if "/" in vrem else ("", vrem)
# no mojibake support:
@@ -3815,10 +3950,10 @@ class HttpCli(object):
break

if is_dir:
return self.tx_ico("a.folder")
return self.tx_svg("folder")

thp = None
if self.thumbcli:
if self.thumbcli and not nothumb:
thp = self.thumbcli.get(dbv, vrem, int(st.st_mtime), th_fmt)

if thp:
@@ -3829,6 +3964,9 @@ class HttpCli(object):

return self.tx_ico(rem)

elif self.can_write and th_fmt is not None:
return self.tx_svg("upload\nonly")

elif self.can_get and self.avn:
axs = self.avn.axs
if self.uname not in axs.uhtml:
@@ -3888,8 +4026,11 @@ class HttpCli(object):

return self.tx_file(abspath)

elif is_dir and not self.can_read and not self.can_write:
return self.tx_404(True)
elif is_dir and not self.can_read:
if self._use_dirkey(abspath):
is_dk = True
elif not self.can_write:
return self.tx_404(True)

srv_info = []

@@ -3911,7 +4052,7 @@ class HttpCli(object):
srv_infot = "</span> // <span>".join(srv_info)

perms = []
if self.can_read:
if self.can_read or is_dk:
perms.append("read")
if self.can_write:
perms.append("write")
@@ -4039,7 +4180,7 @@ class HttpCli(object):
if not self.conn.hsrv.prism:
j2a["no_prism"] = True

if not self.can_read:
if not self.can_read and not is_dk:
if is_ls:
return self.tx_ls(ls_ret)

@@ -4092,8 +4233,15 @@ class HttpCli(object):
):
ls_names = exclude_dotfiles(ls_names)

add_dk = vf.get("dk")
add_fk = vf.get("fk")
fk_alg = 2 if "fka" in vf else 1
if add_dk:
if vf.get("dky"):
add_dk = False
else:
zs = self.gen_fk(2, self.args.dk_salt, abspath, 0, 0)[:add_dk]
ls_ret["dk"] = cgv["dk"] = zs

dirs = []
files = []
@@ -4121,6 +4269,12 @@ class HttpCli(object):
href += "/"
if self.args.no_zip:
margin = "DIR"
elif add_dk:
zs = absreal(fspath)
margin = '<a href="%s?k=%s&zip" rel="nofollow">zip</a>' % (
quotep(href),
self.gen_fk(2, self.args.dk_salt, zs, 0, 0)[:add_dk],
)
else:
margin = '<a href="%s?zip" rel="nofollow">zip</a>' % (quotep(href),)
elif fn in hist:
@@ -4161,6 +4315,11 @@ class HttpCli(object):
0 if ANYWIN else inf.st_ino,
)[:add_fk],
)
elif add_dk and is_dir:
href = "%s?k=%s" % (
quotep(href),
self.gen_fk(2, self.args.dk_salt, fspath, 0, 0)[:add_dk],
)
else:
href = quotep(href)
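# [editor's note] net effect of the href branches above, with made-up
# values: in a dk-enabled listing, folder links carry their key
# ("sub/?k=ab12cd34") while files keep their filekey; with dky the keys
# are omitted entirely since none are required for access.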

@@ -4179,6 +4338,9 @@ class HttpCli(object):
files.append(item)
item["rd"] = rem

if is_dk and not vf.get("dks"):
dirs = []

if (
self.cookies.get("idxh") == "y"
and "ls" not in self.uparam

@@ -23,7 +23,7 @@ from .mtag import HAVE_FFMPEG
from .th_cli import ThumbCli
from .th_srv import HAVE_PIL, HAVE_VIPS
from .u2idx import U2idx
from .util import HMaccas, shut_socket
from .util import HMaccas, NetMap, shut_socket

if True: # pylint: disable=using-constant-test
from typing import Optional, Pattern, Union
@@ -55,6 +55,9 @@ class HttpConn(object):
self.E: EnvParams = self.args.E
self.asrv: AuthSrv = hsrv.asrv # mypy404
self.u2fh: Util.FHC = hsrv.u2fh # mypy404
self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
self.xff_nm: Optional[NetMap] = hsrv.xff_nm
self.xff_lan: NetMap = hsrv.xff_lan # type: ignore
self.iphash: HMaccas = hsrv.broker.iphash
self.bans: dict[str, int] = hsrv.bans
self.aclose: dict[str, int] = hsrv.aclose

@@ -67,6 +67,7 @@ from .util import (
Netdev,
NetMap,
absreal,
build_netmap,
ipnorm,
min_ex,
shut_socket,
@@ -103,7 +104,7 @@ class HttpSrv(object):
self.t0 = time.time()
nsuf = "-n{}-i{:x}".format(nid, os.getpid()) if nid else ""
self.magician = Magician()
self.nm = NetMap([], {})
self.nm = NetMap([], [])
self.ssdp: Optional["SSDPr"] = None
self.gpwd = Garda(self.args.ban_pw)
self.g404 = Garda(self.args.ban_404)
@@ -150,6 +151,10 @@ class HttpSrv(object):
zs = os.path.join(self.E.mod, "web", "deps", "prism.js.gz")
self.prism = os.path.exists(zs)

self.ipa_nm = build_netmap(self.args.ipa)
self.xff_nm = build_netmap(self.args.xff_src)
self.xff_lan = build_netmap("lan")

self.statics: set[str] = set()
self._build_statics()

@@ -191,7 +196,7 @@ class HttpSrv(object):
for fn in df:
ap = absreal(os.path.join(dp, fn))
self.statics.add(ap)
if ap.endswith(".gz") or ap.endswith(".br"):
if ap.endswith(".gz"):
self.statics.add(ap[:-3])

def set_netdevs(self, netdevs: dict[str, Netdev]) -> None:
@@ -199,7 +204,7 @@ class HttpSrv(object):
for ip, _ in self.bound:
ips.add(ip)

self.nm = NetMap(list(ips), netdevs)
self.nm = NetMap(list(ips), list(netdevs))

def start_threads(self, n: int) -> None:
self.tp_nthr += n

@@ -8,7 +8,7 @@ import re

from .__init__ import PY2
from .th_srv import HAVE_PIL, HAVE_PILF
from .util import BytesIO # type: ignore
from .util import BytesIO, html_escape # type: ignore


class Ico(object):
@@ -31,10 +31,9 @@ class Ico(object):

w = 100
h = 30
if "n" in self.args.th_crop and as_thumb:
if as_thumb:
sw, sh = self.args.th_size.split("x")
h = int(100.0 / (float(sw) / float(sh)))
w = 100

if chrome:
# cannot handle more than ~2000 unique SVGs
@@ -99,6 +98,6 @@ class Ico(object):
fill="#{}" font-family="monospace" font-size="14px" style="letter-spacing:.5px">{}</text>
</g></svg>
"""
svg = svg.format(h, c[:6], c[6:], ext)
svg = svg.format(h, c[:6], c[6:], html_escape(ext, True))

return "image/svg+xml", svg.encode("utf-8")
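# [editor's note] the html_escape() added above closes a markup-injection
# gap: the icon text is user-influenced (derived from the requested
# filename's extension) and is embedded in SVG/XML, so angle brackets and
# quotes must arrive entity-encoded; the True flag is presumably the
# quote-escaping mode.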

@@ -206,6 +206,9 @@ class Metrics(object):
try:
x = self.hsrv.broker.ask("up2k.get_unfinished")
xs = x.get()
if not xs:
raise Exception("up2k mutex acquisition timed out")

xj = json.loads(xs)
for ptop, (nbytes, nfiles) in xj.items():
tnbytes += nbytes

@@ -551,8 +551,7 @@ class MTag(object):
pypath = str(os.pathsep.join(zsl))
env["PYTHONPATH"] = pypath
except:
if not E.ox and not EXE:
raise
raise # might be expected outside cpython

ret: dict[str, Any] = {}
for tagname, parser in sorted(parsers.items(), key=lambda x: (x[1].pri, x[0])):

@@ -110,7 +110,7 @@ class MCast(object):
)

ips = [x for x in ips if x not in ("::1", "127.0.0.1")]
ips = find_prefix(ips, netdevs)
ips = find_prefix(ips, list(netdevs))

on = self.on[:]
off = self.off[:]

@@ -340,7 +340,7 @@ class SMB(object):
yeet("blocked delete (no-del-acc): " + vpath)

vpath = vpath.replace("\\", "/").lstrip("/")
self.hub.up2k.handle_rm(uname, "1.7.6.2", [vpath], [], False)
self.hub.up2k.handle_rm(uname, "1.7.6.2", [vpath], [], False, False)

def _utime(self, vpath: str, times: tuple[float, float]) -> None:
if not self.args.smbw:

@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

import argparse
import re
import stat
import tarfile
@@ -44,11 +45,12 @@ class StreamTar(StreamArc):
def __init__(
self,
log: "NamedLogger",
args: argparse.Namespace,
fgen: Generator[dict[str, Any], None, None],
cmp: str = "",
**kwargs: Any
):
super(StreamTar, self).__init__(log, fgen)
super(StreamTar, self).__init__(log, args, fgen)

self.ci = 0
self.co = 0
@@ -126,7 +128,7 @@ class StreamTar(StreamArc):
inf.gid = 0

self.ci += inf.size
with open(fsenc(src), "rb", 512 * 1024) as fo:
with open(fsenc(src), "rb", self.args.iobuf) as fo:
self.tar.addfile(inf, fo)

def _gen(self) -> None:

@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

import argparse
import os
import tempfile
from datetime import datetime
@@ -20,10 +21,12 @@ class StreamArc(object):
def __init__(
self,
log: "NamedLogger",
args: argparse.Namespace,
fgen: Generator[dict[str, Any], None, None],
**kwargs: Any
):
self.log = log
self.args = args
self.fgen = fgen
self.stopped = False

@@ -78,7 +81,9 @@ def enthumb(
) -> dict[str, Any]:
rem = f["vp"]
ext = rem.rsplit(".", 1)[-1].lower()
if fmt == "opus" and ext in "aac|m4a|mp3|ogg|opus|wma".split("|"):
if (fmt == "mp3" and ext == "mp3") or (
fmt == "opus" and ext in "aac|m4a|mp3|ogg|opus|wma".split("|")
):
raise Exception()

vp = vjoin(vtop, rem.split("/", 1)[1])

@@ -28,7 +28,7 @@ if True: # pylint: disable=using-constant-test
import typing
from typing import Any, Optional, Union

from .__init__ import ANYWIN, EXE, MACOS, TYPE_CHECKING, EnvParams, unicode
from .__init__ import ANYWIN, EXE, MACOS, TYPE_CHECKING, E, EnvParams, unicode
from .authsrv import BAD_CFG, AuthSrv
from .cert import ensure_cert
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE
@@ -49,6 +49,7 @@ from .util import (
ODict,
alltrace,
ansi_re,
build_netmap,
min_ex,
mp,
odfusion,
@@ -94,7 +95,7 @@ class SvcHub(object):
self.stopping = False
self.stopped = False
self.reload_req = False
self.reloading = False
self.reloading = 0
self.stop_cond = threading.Condition()
self.nsigs = 3
self.retcode = 0
@@ -154,6 +155,8 @@ class SvcHub(object):
lg.handlers = [lh]
lg.setLevel(logging.DEBUG)

self._check_env()

if args.stackmon:
start_stackmon(args.stackmon, 0)

@@ -170,6 +173,26 @@ class SvcHub(object):
self.log("root", t.format(args.j), c=3)
args.no_fpool = True

for name, arg in (
("iobuf", "iobuf"),
("s-rd-sz", "s_rd_sz"),
("s-wr-sz", "s_wr_sz"),
):
zi = getattr(args, arg)
if zi < 32768:
t = "WARNING: expect very poor performance because you specified a very low value (%d) for --%s"
self.log("root", t % (zi, name), 3)
zi = 2
zi2 = 2 ** (zi - 1).bit_length()
if zi != zi2:
zi3 = 2 ** ((zi - 1).bit_length() - 1)
t = "WARNING: expect poor performance because --%s is not a power-of-two; consider using %d or %d instead of %d"
self.log("root", t % (name, zi2, zi3, zi), 3)

if args.s_rd_sz > args.iobuf:
t = "WARNING: --s-rd-sz (%d) is larger than --iobuf (%d); this may lead to reduced performance"
self.log("root", t % (args.s_rd_sz, args.iobuf), 3)
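# [editor's note] what the rounding hints above compute, e.g. for a
# requested --iobuf of 49152: zi2 = 2 ** (49151).bit_length() == 65536
# and zi3 == 32768, so the warning offers the nearest power-of-two on
# either side of the given value.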

bri = "zy"[args.theme % 2 :][:1]
ch = "abcdefghijklmnopqrstuvwx"[int(args.theme / 2)]
args.theme = "{0}{1} {0} {1}".format(ch, bri)
@@ -253,6 +276,11 @@ class SvcHub(object):
if want_ff and ANYWIN:
self.log("thumb", "download FFmpeg to fix it:\033[0m " + FFMPEG_URL, 3)

if not args.no_acode:
if not re.match("^(0|[qv][0-9]|[0-9]{2,3}k)$", args.q_mp3.lower()):
t = "invalid mp3 transcoding quality [%s] specified; only supports [0] to disable, a CBR value such as [192k], or a CQ/CRF value such as [v2]"
raise Exception(t % (args.q_mp3,))

args.th_poke = min(args.th_poke, args.th_maxage, args.ac_maxage)

zms = ""
@@ -385,6 +413,17 @@ class SvcHub(object):

Daemon(self.sd_notify, "sd-notify")

def _check_env(self) -> None:
try:
files = os.listdir(E.cfg)
except:
files = []

hits = [x for x in files if x.lower().endswith(".conf")]
if hits:
t = "WARNING: found config files in [%s]: %s\n config files are not expected here, and will NOT be loaded (unless your setup is intentionally hella funky)"
self.log("root", t % (E.cfg, ", ".join(hits)), 3)

def _process_config(self) -> bool:
al = self.args

@@ -465,12 +504,11 @@ class SvcHub(object):

al.xff_hdr = al.xff_hdr.lower()
al.idp_h_usr = al.idp_h_usr.lower()
# al.idp_h_grp = al.idp_h_grp.lower()
al.idp_h_grp = al.idp_h_grp.lower()
al.idp_h_key = al.idp_h_key.lower()

al.xff_re = self._ipa2re(al.xff_src)
al.ipa_re = self._ipa2re(al.ipa)
al.ftp_ipa_re = self._ipa2re(al.ftp_ipa or al.ipa)
al.tftp_ipa_re = self._ipa2re(al.tftp_ipa or al.ipa)
al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa)
al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa)

mte = ODict.fromkeys(DEF_MTE.split(","), True)
al.mte = odfusion(mte, al.mte)
@@ -487,6 +525,18 @@ class SvcHub(object):
if ptn:
setattr(self.args, k, re.compile(ptn))

for k in ["idp_gsep"]:
ptn = getattr(self.args, k)
if "]" in ptn:
ptn = "]" + ptn.replace("]", "")
if "[" in ptn:
ptn = ptn.replace("[", "") + "["
if "-" in ptn:
ptn = ptn.replace("-", "") + "-"

ptn = ptn.replace("\\", "\\\\").replace("^", "\\^")
setattr(self.args, k, re.compile("[%s]" % (ptn,)))

try:
zf1, zf2 = self.args.rm_retry.split("/")
self.args.rm_re_t = float(zf1)
@@ -662,21 +712,37 @@ class SvcHub(object):
self.log("root", "ssdp startup failed;\n" + min_ex(), 3)

def reload(self) -> str:
if self.reloading:
return "cannot reload; already in progress"
with self.up2k.mutex:
if self.reloading:
return "cannot reload; already in progress"
self.reloading = 1

self.reloading = True
Daemon(self._reload, "reloading")
return "reload initiated"

def _reload(self) -> None:
self.log("root", "reload scheduled")
def _reload(self, rescan_all_vols: bool = True) -> None:
with self.up2k.mutex:
if self.reloading != 1:
return
self.reloading = 2
self.log("root", "reloading config")
self.asrv.reload()
self.up2k.reload()
self.up2k.reload(rescan_all_vols)
self.broker.reload()
self.reloading = 0

self.reloading = False
def _reload_blocking(self, rescan_all_vols: bool = True) -> None:
while True:
with self.up2k.mutex:
if self.reloading < 2:
self.reloading = 1
break
time.sleep(0.05)

# try to handle multiple pending IdP reloads at once:
time.sleep(0.2)

self._reload(rescan_all_vols=rescan_all_vols)
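# [editor's note] reading of the reload state machine above: self.reloading
# is 0 (idle), 1 (scheduled), or 2 (in progress), always mutated under
# up2k.mutex; _reload_blocking spins until it can claim the "scheduled"
# slot, and the 0.2s sleep batches bursts of IdP-triggered reloads into a
# single config reload.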
|
||||
|
||||
def stop_thr(self) -> None:
|
||||
while not self.stop_req:
|
||||
|
||||
@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

+import argparse
import calendar
import stat
import time

@@ -218,12 +219,13 @@ class StreamZip(StreamArc):

    def __init__(
        self,
        log: "NamedLogger",
+        args: argparse.Namespace,
        fgen: Generator[dict[str, Any], None, None],
        utf8: bool = False,
        pre_crc: bool = False,
        **kwargs: Any
    ) -> None:
-        super(StreamZip, self).__init__(log, fgen)
+        super(StreamZip, self).__init__(log, args, fgen)

        self.utf8 = utf8
        self.pre_crc = pre_crc

@@ -248,7 +250,7 @@ class StreamZip(StreamArc):

        crc = 0
        if self.pre_crc:
-            for buf in yieldfile(src):
+            for buf in yieldfile(src, self.args.iobuf):
                crc = zlib.crc32(buf, crc)

            crc &= 0xFFFFFFFF

@@ -257,7 +259,7 @@ class StreamZip(StreamArc):

        buf = gen_hdr(None, name, sz, ts, self.utf8, crc, self.pre_crc)
        yield self._ct(buf)

-        for buf in yieldfile(src):
+        for buf in yieldfile(src, self.args.iobuf):
            if not self.pre_crc:
                crc = zlib.crc32(buf, crc)
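The `pre_crc` path needs the file's CRC32 before its zip header is emitted, so it streams the file once up front; masking with `0xFFFFFFFF` keeps the running value an unsigned 32-bit int. A tiny self-contained version of that loop:

import zlib

def crc32_of(path: str, bufsz: int = 256 * 1024) -> int:
    # stream the file through zlib.crc32, carrying the running crc
    crc = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(bufsz)
            if not buf:
                break
            crc = zlib.crc32(buf, crc)
    return crc & 0xFFFFFFFF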
@@ -36,7 +36,7 @@ from partftpy.TftpShared import TftpException

from .__init__ import EXE, TYPE_CHECKING
from .authsrv import VFS
from .bos import bos
-from .util import BytesIO, Daemon, exclude_dotfiles, min_ex, runhook, undot
+from .util import BytesIO, Daemon, ODict, exclude_dotfiles, min_ex, runhook, undot

if True:  # pylint: disable=using-constant-test
    from typing import Any, Union

@@ -55,17 +55,16 @@

def _serverInitial(self, pkt: Any, raddress: str, rport: int) -> bool:
    info("connection from %s:%s", raddress, rport)
-    ret = _orig_serverInitial(self, pkt, raddress, rport)
-    ptn = _hub[0].args.tftp_ipa_re
-    if ptn and not ptn.match(raddress):
+    ret = _sinitial[0](self, pkt, raddress, rport)
+    nm = _hub[0].args.tftp_ipa_nm
+    if nm and not nm.map(raddress):
        yeet("client rejected (--tftp-ipa): %s" % (raddress,))
    return ret


-# patch ipa-check into partftpd
+# patch ipa-check into partftpd (part 1/2)
_hub: list["SvcHub"] = []
-_orig_serverInitial = TftpStates.TftpServerState.serverInitial
-TftpStates.TftpServerState.serverInitial = _serverInitial
+_sinitial: list[Any] = []


class Tftpd(object):

@@ -98,8 +97,6 @@ class Tftpd(object):

        cbak = []
        if not self.args.tftp_no_fast and not EXE:
            try:
-                import inspect
-
                ptn = re.compile(r"(^\s*)log\.debug\(.*\)$")
                for C in Cs:
                    cbak.append(C.__dict__)

@@ -116,6 +113,11 @@ class Tftpd(object):

            for C in Cs:
                C.log.debug = noop

+        # patch ipa-check into partftpd (part 2/2)
+        _sinitial[:] = []
+        _sinitial.append(TftpStates.TftpServerState.serverInitial)
+        TftpStates.TftpServerState.serverInitial = _serverInitial
+
        # patch vfs into partftpy
        TftpContexts.open = self._open
        TftpStates.open = self._open

@@ -169,6 +171,8 @@ class Tftpd(object):

        if self.args.ftp4:
            ips = [x for x in ips if ":" not in x]

+        ips = list(ODict.fromkeys(ips))  # dedup
+
        for ip in ips:
            name = "tftp_%s" % (ip,)
            Daemon(self._start, name, [ip, ports])

@@ -179,18 +183,54 @@ class Tftpd(object):

    def _start(self, ip, ports):
        fam = socket.AF_INET6 if ":" in ip else socket.AF_INET
-        srv = TftpServer.TftpServer("/", self._ls)
-        with self.mutex:
-            self.srv.append(srv)
-            self.ips.append(ip)
-        try:
-            srv.listen(ip, self.port, af_family=fam, ports=ports)
-        except OSError:
-            with self.mutex:
-                self.srv.remove(srv)
-                self.ips.remove(ip)
-            if ip != "0.0.0.0" or "::" not in self.ips:
-                raise
+        have_been_alive = False
+        while True:
+            srv = TftpServer.TftpServer("/", self._ls)
+            with self.mutex:
+                self.srv.append(srv)
+                self.ips.append(ip)
+
+            try:
+                # this is the listen loop; it should block forever
+                srv.listen(ip, self.port, af_family=fam, ports=ports)
+            except:
+                with self.mutex:
+                    self.srv.remove(srv)
+                    self.ips.remove(ip)
+
+                try:
+                    srv.sock.close()
+                except:
+                    pass
+
+                try:
+                    bound = bool(srv.listenport)
+                except:
+                    bound = False
+
+                if bound:
+                    # this instance has managed to bind at least once
+                    have_been_alive = True
+
+                if have_been_alive:
+                    t = "tftp server [%s]:%d crashed; restarting in 3 sec:\n%s"
+                    error(t, ip, self.port, min_ex())
+                    time.sleep(3)
+                    continue
+
+                # server failed to start; could be due to dualstack (ipv6 managed to bind and this is ipv4)
+                if ip != "0.0.0.0" or "::" not in self.ips:
+                    # nope, it's fatal
+                    t = "tftp server [%s]:%d failed to start:\n%s"
+                    error(t, ip, self.port, min_ex())
+
+                # yep; ignore
+                # (TODO: move the "listening @ ..." infolog in partftpy to
+                #  after the bind attempt so it doesn't print twice)
+                return
+
+            info("tftp server [%s]:%d terminated", ip, self.port)
+            break

    def stop(self):
        with self.mutex:

@@ -300,6 +340,9 @@ class Tftpd(object):

        if not self.args.tftp_nols and bos.path.isdir(ap):
            return self._ls(vpath, "", 0, True)

+        if not a:
+            a = [self.args.iobuf]
+
        return open(ap, mode, *a, **ka)

    def _mkdir(self, vpath: str, *a) -> None:

@@ -322,7 +365,7 @@ class Tftpd(object):

            yeet("attempted delete of non-empty file")

        vpath = vpath.replace("\\", "/").lstrip("/")
-        self.hub.up2k.handle_rm("*", "8.3.8.7", [vpath], [], False)
+        self.hub.up2k.handle_rm("*", "8.3.8.7", [vpath], [], False, False)

    def _access(self, *a: Any) -> bool:
        return True
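The retry loop above treats a failed ipv4 wildcard bind as benign when the ipv6 wildcard is already up, since a dualstack `::` socket usually covers `0.0.0.0` on the same port. A standalone sketch of that decision (hypothetical helper, not part of copyparty):

import errno
import socket
from typing import Optional

def try_bind_v4(port: int, have_v6_wildcard: bool) -> Optional[socket.socket]:
    # if the ipv6 wildcard socket already covers ipv4 (dualstack),
    # a failed ipv4 bind on the same port is harmless; anything else is fatal
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind(("0.0.0.0", port))
        return s
    except OSError as ex:
        s.close()
        if have_v6_wildcard and ex.errno == errno.EADDRINUSE:
            return None  # the dualstack ipv6 socket is already listening
        raise  # nope, it's fatal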
@@ -57,7 +57,7 @@ class ThumbCli(object):

        if is_vid and "dvthumb" in dbv.flags:
            return None

-        want_opus = fmt in ("opus", "caf")
+        want_opus = fmt in ("opus", "caf", "mp3")
        is_au = ext in self.fmt_ffa
        if is_au:
            if want_opus:
@@ -16,9 +16,9 @@ from .__init__ import ANYWIN, TYPE_CHECKING

from .authsrv import VFS
from .bos import bos
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, ffprobe
-from .util import BytesIO  # type: ignore
from .util import (
+    FFMPEG_URL,
+    BytesIO,  # type: ignore
    Cooldown,
    Daemon,
    Pebkac,

@@ -109,7 +109,7 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -

    h = hashlib.sha512(afsenc(fn)).digest()
    fn = base64.urlsafe_b64encode(h).decode("ascii")[:24]

-    if fmt in ("opus", "caf"):
+    if fmt in ("opus", "caf", "mp3"):
        cat = "ac"
    else:
        fc = fmt[:1]
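For reference, the cache-file naming in `thumb_path` boils down to a truncated urlsafe-base64 sha512 of the filename; a minimal sketch, assuming plain utf-8 in place of copyparty's `afsenc`:

import base64, hashlib

def cache_name(fn: str) -> str:
    # hash the (filesystem-encoded) filename and keep 24 urlsafe-base64
    # chars: short enough for tidy paths, long enough to dodge collisions
    h = hashlib.sha512(fn.encode("utf-8", "replace")).digest()
    return base64.urlsafe_b64encode(h).decode("ascii")[:24]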
@@ -307,6 +307,8 @@ class ThumbSrv(object):

            elif lib == "ff" and ext in self.fmt_ffa:
                if tpath.endswith(".opus") or tpath.endswith(".caf"):
                    funs.append(self.conv_opus)
+                elif tpath.endswith(".mp3"):
+                    funs.append(self.conv_mp3)
                elif tpath.endswith(".png"):
                    funs.append(self.conv_waves)
                    png_ok = True

@@ -637,8 +639,47 @@ class ThumbSrv(object):

        cmd += [fsenc(tpath)]
        self._run_ff(cmd, vn)

+    def conv_mp3(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
+        quality = self.args.q_mp3.lower()
+        if self.args.no_acode or not quality:
+            raise Exception("disabled in server config")
+
+        self.wait4ram(0.2, tpath)
+        ret, _ = ffprobe(abspath, int(vn.flags["convt"] / 2))
+        if "ac" not in ret:
+            raise Exception("not audio")
+
+        if quality.endswith("k"):
+            qk = b"-b:a"
+            qv = quality.encode("ascii")
+        else:
+            qk = b"-q:a"
+            qv = quality[1:].encode("ascii")
+
+        # extremely conservative choices for output format
+        # (always 2ch 44k1) because if a device is old enough
+        # to not support opus then it's probably also super picky
+
+        # fmt: off
+        cmd = [
+            b"ffmpeg",
+            b"-nostdin",
+            b"-v", b"error",
+            b"-hide_banner",
+            b"-i", fsenc(abspath),
+            b"-map_metadata", b"-1",
+            b"-map", b"0:a:0",
+            b"-ar", b"44100",
+            b"-ac", b"2",
+            b"-c:a", b"libmp3lame",
+            qk, qv,
+            fsenc(tpath)
+        ]
+        # fmt: on
+        self._run_ff(cmd, vn, oom=300)
+
    def conv_opus(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
-        if self.args.no_acode:
+        if self.args.no_acode or not self.args.q_opus:
            raise Exception("disabled in server config")

        self.wait4ram(0.2, tpath)
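A sketch of the two quality modes parsed by `conv_mp3` above (assumption: values ending in `k` such as `128k` select CBR via `-b:a`, anything else such as `q2` selects VBR and drops its first character for `-q:a`):

def mp3_quality_args(quality: str) -> list[bytes]:
    # mirrors the branch above; returns the ffmpeg args for the chosen mode
    quality = quality.lower()
    if quality.endswith("k"):
        return [b"-b:a", quality.encode("ascii")]   # e.g. "128k" -> CBR
    return [b"-q:a", quality[1:].encode("ascii")]   # e.g. "q2" -> VBR level 2

assert mp3_quality_args("128k") == [b"-b:a", b"128k"]
assert mp3_quality_args("q0") == [b"-q:a", b"0"]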
@@ -662,6 +703,7 @@ class ThumbSrv(object):

            pass

        caf_src = abspath if src_opus else tmp_opus
+        bq = ("%dk" % (self.args.q_opus,)).encode("ascii")

        if not want_caf or not src_opus:
            # fmt: off

@@ -674,7 +716,7 @@ class ThumbSrv(object):

                b"-map_metadata", b"-1",
                b"-map", b"0:a:0",
                b"-c:a", b"libopus",
-                b"-b:a", b"128k",
+                b"-b:a", bq,
                fsenc(tmp_opus)
            ]
            # fmt: on

@@ -697,7 +739,7 @@ class ThumbSrv(object):

                b"-map_metadata", b"-1",
                b"-ac", b"2",
                b"-c:a", b"libopus",
-                b"-b:a", b"128k",
+                b"-b:a", bq,
                b"-f", b"caf",
                fsenc(tpath)
            ]

@@ -771,7 +813,7 @@ class ThumbSrv(object):

    def _clean(self, cat: str, thumbpath: str) -> int:
        # self.log("cln {}".format(thumbpath))
-        exts = ["jpg", "webp", "png"] if cat == "th" else ["opus", "caf"]
+        exts = ["jpg", "webp", "png"] if cat == "th" else ["opus", "caf", "mp3"]
        maxage = getattr(self.args, cat + "_maxage")
        now = time.time()
        prev_b64 = None
@@ -199,11 +199,16 @@ class Up2k(object):

        Daemon(self.deferred_init, "up2k-deferred-init")

-    def reload(self) -> None:
-        self.gid += 1
-        self.log("reload #{} initiated".format(self.gid))
+    def reload(self, rescan_all_vols: bool) -> None:
+        """mutex me"""
+        self.log("reload #{} scheduled".format(self.gid + 1))
        all_vols = self.asrv.vfs.all_vols
-        self.rescan(all_vols, list(all_vols.keys()), True, False)
+
+        scan_vols = [k for k, v in all_vols.items() if v.realpath not in self.registry]
+        if rescan_all_vols:
+            scan_vols = list(all_vols.keys())
+
+        self._rescan(all_vols, scan_vols, True, False)

    def deferred_init(self) -> None:
        all_vols = self.asrv.vfs.all_vols

@@ -232,8 +237,6 @@ class Up2k(object):

        for n in range(max(1, self.args.mtag_mt)):
            Daemon(self._tagger, "tagger-{}".format(n))

-        Daemon(self._run_all_mtp, "up2k-mtp-init")
-
    def log(self, msg: str, c: Union[int, str] = 0) -> None:
        if self.pp:
            msg += "\033[K"

@@ -282,9 +285,48 @@ class Up2k(object):

        }
        return json.dumps(ret, indent=4)

+    def get_unfinished_by_user(self, uname, ip) -> str:
+        if PY2 or not self.mutex.acquire(timeout=2):
+            return '[{"timeout":1}]'
+
+        ret: list[tuple[int, str, int, int, int]] = []
+        try:
+            for ptop, tab2 in self.registry.items():
+                cfg = self.flags.get(ptop, {}).get("u2abort", 1)
+                if not cfg:
+                    continue
+                addr = (ip or "\n") if cfg in (1, 2) else ""
+                user = (uname or "\n") if cfg in (1, 3) else ""
+                drp = self.droppable.get(ptop, {})
+                for wark, job in tab2.items():
+                    if (
+                        wark in drp
+                        or (user and user != job["user"])
+                        or (addr and addr != job["addr"])
+                    ):
+                        continue
+
+                    zt5 = (
+                        int(job["t0"]),
+                        djoin(job["vtop"], job["prel"], job["name"]),
+                        job["size"],
+                        len(job["need"]),
+                        len(job["hash"]),
+                    )
+                    ret.append(zt5)
+        finally:
+            self.mutex.release()
+
+        ret.sort(reverse=True)
+        ret2 = [
+            {"at": at, "vp": "/" + vp, "pd": 100 - ((nn * 100) // (nh or 1)), "sz": sz}
+            for (at, vp, sz, nn, nh) in ret
+        ]
+        return json.dumps(ret2, indent=0)
+
    def get_unfinished(self) -> str:
        if PY2 or not self.mutex.acquire(timeout=0.5):
-            return "{}"
+            return ""

        ret: dict[str, tuple[int, int]] = {}
        try:
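How the `pd` (percent done) field in the JSON above is derived, for a hypothetical upload split into 12 chunks (`nh` hashed) with 3 still needed (`nn`):

nh, nn = 12, 3
pd = 100 - ((nn * 100) // (nh or 1))  # "or 1" guards the zero-chunk edge case
print(pd)  # 75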
@@ -337,14 +379,21 @@ class Up2k(object):

    def rescan(
        self, all_vols: dict[str, VFS], scan_vols: list[str], wait: bool, fscan: bool
    ) -> str:
+        with self.mutex:
+            return self._rescan(all_vols, scan_vols, wait, fscan)
+
+    def _rescan(
+        self, all_vols: dict[str, VFS], scan_vols: list[str], wait: bool, fscan: bool
+    ) -> str:
+        """mutex me"""
        if not wait and self.pp:
            return "cannot initiate; scan is already in progress"

-        args = (all_vols, scan_vols, fscan)
        self.gid += 1
        Daemon(
            self.init_indexes,
            "up2k-rescan-{}".format(scan_vols[0] if scan_vols else "all"),
-            args,
+            (all_vols, scan_vols, fscan, self.gid),
        )
        return ""

@@ -456,7 +505,7 @@ class Up2k(object):

            if vp:
                fvp = "%s/%s" % (vp, fvp)

-            self._handle_rm(LEELOO_DALLAS, "", fvp, [], True)
+            self._handle_rm(LEELOO_DALLAS, "", fvp, [], True, False)
            nrm += 1

        if nrm:

@@ -575,19 +624,32 @@ class Up2k(object):

        return True, ret

    def init_indexes(
-        self, all_vols: dict[str, VFS], scan_vols: list[str], fscan: bool
+        self, all_vols: dict[str, VFS], scan_vols: list[str], fscan: bool, gid: int = 0
    ) -> bool:
-        gid = self.gid
-        while self.pp and gid == self.gid:
-            time.sleep(0.1)
-
-        if gid != self.gid:
-            return False
+        if not gid:
+            with self.mutex:
+                gid = self.gid
+
+        nspin = 0
+        while True:
+            nspin += 1
+            if nspin > 1:
+                time.sleep(0.1)
+
+            with self.mutex:
+                if gid != self.gid:
+                    return False
+
+                if self.pp:
+                    continue
+
+                self.pp = ProgressPrinter(self.log, self.args)
+
+            break

        if gid:
-            self.log("reload #{} running".format(self.gid))
+            self.log("reload #%d running" % (gid,))

-        self.pp = ProgressPrinter(self.log, self.args)
        vols = list(all_vols.values())
        t0 = time.time()
        have_e2d = False
@@ -771,20 +833,14 @@ class Up2k(object):

            msg = "could not read tags because no backends are available (Mutagen or FFprobe)"
            self.log(msg, c=1)

-        thr = None
-        if self.mtag:
-            t = "online (running mtp)"
-            if scan_vols:
-                thr = Daemon(self._run_all_mtp, "up2k-mtp-scan", r=False)
-        else:
-            self.pp = None
-            t = "online, idle"
-
+        t = "online (running mtp)" if self.mtag else "online, idle"
        for vol in vols:
            self.volstate[vol.vpath] = t

-        if thr:
-            thr.start()
+        if self.mtag:
+            Daemon(self._run_all_mtp, "up2k-mtp-scan", (gid,))
+        else:
+            self.pp = None

        return have_e2d

@@ -1809,26 +1865,28 @@ class Up2k(object):

        self.pending_tags = []
        return ret

-    def _run_all_mtp(self) -> None:
-        gid = self.gid
+    def _run_all_mtp(self, gid: int) -> None:
        t0 = time.time()
        for ptop, flags in self.flags.items():
            if "mtp" in flags:
                if ptop not in self.entags:
                    t = "skipping mtp for unavailable volume {}"
                    self.log(t.format(ptop), 1)
-                    continue
-                self._run_one_mtp(ptop, gid)
+                else:
+                    self._run_one_mtp(ptop, gid)
+
+                vtop = "\n"
+                for vol in self.asrv.vfs.all_vols.values():
+                    if vol.realpath == ptop:
+                        vtop = vol.vpath
+                if "running mtp" in self.volstate.get(vtop, ""):
+                    self.volstate[vtop] = "online, idle"

        td = time.time() - t0
        msg = "mtp finished in {:.2f} sec ({})"
        self.log(msg.format(td, s2hms(td, True)))

        self.pp = None
-        for k in list(self.volstate.keys()):
-            if "OFFLINE" not in self.volstate[k]:
-                self.volstate[k] = "online, idle"
-
        if self.args.exit == "idx":
            self.hub.sigterm()

@@ -2671,6 +2729,9 @@ class Up2k(object):

                    a = [job[x] for x in zs.split()]
                    self.db_add(cur, vfs.flags, *a)
                    cur.connection.commit()
+            elif wark in reg:
+                # checks out, but client may have hopped IPs
+                job["addr"] = cj["addr"]

        if not job:
            ap1 = djoin(cj["ptop"], cj["prel"])

@@ -3207,7 +3268,13 @@ class Up2k(object):

            pass

    def handle_rm(
-        self, uname: str, ip: str, vpaths: list[str], lim: list[int], rm_up: bool
+        self,
+        uname: str,
+        ip: str,
+        vpaths: list[str],
+        lim: list[int],
+        rm_up: bool,
+        unpost: bool,
    ) -> str:
        n_files = 0
        ok = {}

@@ -3217,7 +3284,7 @@ class Up2k(object):

                self.log("hit delete limit of {} files".format(lim[1]), 3)
                break

-            a, b, c = self._handle_rm(uname, ip, vp, lim, rm_up)
+            a, b, c = self._handle_rm(uname, ip, vp, lim, rm_up, unpost)
            n_files += a
            for k in b:
                ok[k] = 1

@@ -3231,25 +3298,43 @@ class Up2k(object):

        return "deleted {} files (and {}/{} folders)".format(n_files, iok, iok + ing)

    def _handle_rm(
-        self, uname: str, ip: str, vpath: str, lim: list[int], rm_up: bool
+        self, uname: str, ip: str, vpath: str, lim: list[int], rm_up: bool, unpost: bool
    ) -> tuple[int, list[str], list[str]]:
        self.db_act = time.time()
-        try:
+        partial = ""
+        if not unpost:
            permsets = [[True, False, False, True]]
            vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
            vn, rem = vn.get_dbv(rem)
-            unpost = False
-        except:
+        else:
            # unpost with missing permissions? verify with db
-            if not self.args.unpost:
-                raise Pebkac(400, "the unpost feature is disabled in server config")
-
-            unpost = True
            permsets = [[False, True]]
            vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
            vn, rem = vn.get_dbv(rem)
-
-            _, _, _, _, dip, dat = self._find_from_vpath(vn.realpath, rem)
+            ptop = vn.realpath
+            with self.mutex:
+                abrt_cfg = self.flags.get(ptop, {}).get("u2abort", 1)
+                addr = (ip or "\n") if abrt_cfg in (1, 2) else ""
+                user = (uname or "\n") if abrt_cfg in (1, 3) else ""
+                reg = self.registry.get(ptop, {}) if abrt_cfg else {}
+                for wark, job in reg.items():
+                    if (user and user != job["user"]) or (addr and addr != job["addr"]):
+                        continue
+                    if djoin(job["prel"], job["name"]) == rem:
+                        if job["ptop"] != ptop:
+                            t = "job.ptop [%s] != vol.ptop [%s] ??"
+                            raise Exception(t % (job["ptop"], ptop))
+                        partial = vn.canonical(vjoin(job["prel"], job["tnam"]))
+                        break
+            if partial:
+                dip = ip
+                dat = time.time()
+            else:
+                if not self.args.unpost:
+                    t = "the unpost feature is disabled in server config"
+                    raise Pebkac(400, t)
+
+                _, _, _, _, dip, dat = self._find_from_vpath(ptop, rem)

        t = "you cannot delete this: "
        if not dip:
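A sketch of the `u2abort` matching rules applied above (assumption: 1 = must match both ip and user, 2 = ip only, 3 = user only, 0 = disabled; the `"\n"` sentinel guarantees an empty ip or username can never match a real job):

def match_filters(cfg: int, ip: str, uname: str) -> tuple[str, str]:
    # returns (addr, user); an empty string means "don't filter on this"
    addr = (ip or "\n") if cfg in (1, 2) else ""
    user = (uname or "\n") if cfg in (1, 3) else ""
    return addr, user

assert match_filters(2, "10.0.0.5", "ed") == ("10.0.0.5", "")
assert match_filters(3, "10.0.0.5", "ed") == ("", "ed")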
@@ -3342,6 +3427,9 @@ class Up2k(object):

                cur.connection.commit()

            wunlink(self.log, abspath, dbv.flags)
+            if partial:
+                wunlink(self.log, partial, dbv.flags)
+                partial = ""
            if xad:
                runhook(
                    self.log,

@@ -3832,7 +3920,7 @@ class Up2k(object):

        csz = up2k_chunksize(fsz)
        ret = []
        suffix = " MB, {}".format(path)
-        with open(fsenc(path), "rb", 512 * 1024) as f:
+        with open(fsenc(path), "rb", self.args.iobuf) as f:
            if self.mth and fsz >= 1024 * 512:
                tlt = self.mth.hash(f, fsz, csz, self.pp, prefix, suffix)
                ret = [x[0] for x in tlt]

@@ -3937,7 +4025,13 @@ class Up2k(object):

        if not ANYWIN and sprs and sz > 1024 * 1024:
            fs = self.fstab.get(pdir)
-            if fs != "ok":
+            if fs == "ok":
+                pass
+            elif "sparse" in self.flags[job["ptop"]]:
+                t = "volflag 'sparse' is forcing use of sparse files for uploads to [%s]"
+                self.log(t % (job["ptop"],))
+                relabel = True
+            else:
                relabel = True
                f.seek(1024 * 1024 - 1)
                f.write(b"e")
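Background on the seek-and-write trick in that last hunk: writing a single byte past EOF leaves a hole, which most unix filesystems store as unallocated blocks. A quick standalone demo of the underlying behavior (linux/mac; not NTFS):

import os

with open("/tmp/sparse.bin", "wb") as f:
    f.seek(1024 * 1024 - 1)  # jump past EOF
    f.write(b"e")            # one real byte at the end

st = os.stat("/tmp/sparse.bin")
print(st.st_size)          # 1048576 (apparent size)
print(st.st_blocks * 512)  # usually far smaller (actually allocated)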
@@ -186,7 +186,7 @@ else:

SYMTIME = sys.version_info > (3, 6) and os.utime in os.supports_follow_symlinks

-META_NOBOTS = '<meta name="robots" content="noindex, nofollow">'
+META_NOBOTS = '<meta name="robots" content="noindex, nofollow">\n'

FFMPEG_URL = "https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z"

@@ -559,20 +559,26 @@ class HLog(logging.Handler):


class NetMap(object):
-    def __init__(self, ips: list[str], netdevs: dict[str, Netdev]) -> None:
+    def __init__(self, ips: list[str], cidrs: list[str], keep_lo=False) -> None:
+        """
+        ips: list of plain ipv4/ipv6 IPs, not cidr
+        cidrs: list of cidr-notation IPs (ip/prefix)
+        """
        if "::" in ips:
            ips = [x for x in ips if x != "::"] + list(
-                [x.split("/")[0] for x in netdevs if ":" in x]
+                [x.split("/")[0] for x in cidrs if ":" in x]
            )
            ips.append("0.0.0.0")

        if "0.0.0.0" in ips:
            ips = [x for x in ips if x != "0.0.0.0"] + list(
-                [x.split("/")[0] for x in netdevs if ":" not in x]
+                [x.split("/")[0] for x in cidrs if ":" not in x]
            )

-        ips = [x for x in ips if x not in ("::1", "127.0.0.1")]
-        ips = find_prefix(ips, netdevs)
+        if not keep_lo:
+            ips = [x for x in ips if x not in ("::1", "127.0.0.1")]
+
+        ips = find_prefix(ips, cidrs)

        self.cache: dict[str, str] = {}
        self.b2sip: dict[bytes, str] = {}

@@ -589,6 +595,9 @@ class NetMap(object):

        self.bip.sort(reverse=True)

    def map(self, ip: str) -> str:
+        if ip.startswith("::ffff:"):
+            ip = ip[7:]
+
        try:
            return self.cache[ip]
        except:
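What the `::ffff:` strip handles: dualstack sockets report ipv4 clients as ipv4-mapped ipv6 addresses. A stdlib sketch of the same normalization plus a longest-prefix cidr match (NetMap itself precomputes sorted binary prefixes instead of parsing on every lookup):

import ipaddress

def map_ip(ip: str, cidrs: list[str]) -> str:
    if ip.startswith("::ffff:"):
        ip = ip[7:]  # "::ffff:10.0.0.5" -> "10.0.0.5"
    a = ipaddress.ip_address(ip)
    nets = sorted((ipaddress.ip_network(c, strict=False) for c in cidrs),
                  key=lambda n: n.prefixlen, reverse=True)
    for net in nets:
        if a.version == net.version and a in net:
            return str(net)  # longest matching prefix wins
    return ""

print(map_ip("::ffff:10.1.2.3", ["10.0.0.0/8", "10.1.0.0/16"]))  # 10.1.0.0/16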
@@ -1391,10 +1400,15 @@ def ren_open(


class MultipartParser(object):
    def __init__(
-        self, log_func: "NamedLogger", sr: Unrecv, http_headers: dict[str, str]
+        self,
+        log_func: "NamedLogger",
+        args: argparse.Namespace,
+        sr: Unrecv,
+        http_headers: dict[str, str],
    ):
        self.sr = sr
        self.log = log_func
+        self.args = args
        self.headers = http_headers

        self.re_ctype = re.compile(r"^content-type: *([^; ]+)", re.IGNORECASE)

@@ -1493,7 +1507,7 @@ class MultipartParser(object):

    def _read_data(self) -> Generator[bytes, None, None]:
        blen = len(self.boundary)
-        bufsz = 32 * 1024
+        bufsz = self.args.s_rd_sz
        while True:
            try:
                buf = self.sr.recv(bufsz)

@@ -1920,10 +1934,10 @@ def ipnorm(ip: str) -> str:

    return ip


-def find_prefix(ips: list[str], netdevs: dict[str, Netdev]) -> list[str]:
+def find_prefix(ips: list[str], cidrs: list[str]) -> list[str]:
    ret = []
    for ip in ips:
-        hit = next((x for x in netdevs if x.startswith(ip + "/")), None)
+        hit = next((x for x in cidrs if x.startswith(ip + "/") or ip == x), None)
        if hit:
            ret.append(hit)
    return ret

@@ -2234,10 +2248,11 @@ def shut_socket(log: "NamedLogger", sck: socket.socket, timeout: int = 3) -> Non

    sck.close()


-def read_socket(sr: Unrecv, total_size: int) -> Generator[bytes, None, None]:
+def read_socket(
+    sr: Unrecv, bufsz: int, total_size: int
+) -> Generator[bytes, None, None]:
    remains = total_size
    while remains > 0:
-        bufsz = 32 * 1024
        if bufsz > remains:
            bufsz = remains

@@ -2251,16 +2266,16 @@ def read_socket(

        yield buf


-def read_socket_unbounded(sr: Unrecv) -> Generator[bytes, None, None]:
+def read_socket_unbounded(sr: Unrecv, bufsz: int) -> Generator[bytes, None, None]:
    try:
        while True:
-            yield sr.recv(32 * 1024)
+            yield sr.recv(bufsz)
    except:
        return


def read_socket_chunked(
-    sr: Unrecv, log: Optional["NamedLogger"] = None
+    sr: Unrecv, bufsz: int, log: Optional["NamedLogger"] = None
) -> Generator[bytes, None, None]:
    err = "upload aborted: expected chunk length, got [{}] |{}| instead"
    while True:

@@ -2294,7 +2309,7 @@ def read_socket_chunked(

        if log:
            log("receiving %d byte chunk" % (chunklen,))

-        for chunk in read_socket(sr, chunklen):
+        for chunk in read_socket(sr, bufsz, chunklen):
            yield chunk

        x = sr.recv_ex(2, False)
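All three readers above now take `bufsz` from config instead of a hardcoded 32 KiB; a minimal standalone equivalent over a plain socket (`Unrecv` is copyparty's buffered reader, `sock.recv` stands in for it here):

import socket
from typing import Generator

def read_exact(sock: socket.socket, bufsz: int, total: int) -> Generator[bytes, None, None]:
    # yield exactly `total` bytes in chunks of at most `bufsz`
    remains = total
    while remains > 0:
        buf = sock.recv(min(bufsz, remains))
        if not buf:
            raise EOFError("disconnected while reading body")
        remains -= len(buf)
        yield buf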
@@ -2317,10 +2332,46 @@ def list_ips() -> list[str]:

    return list(ret)


-def yieldfile(fn: str) -> Generator[bytes, None, None]:
-    with open(fsenc(fn), "rb", 512 * 1024) as f:
+def build_netmap(csv: str):
+    csv = csv.lower().strip()
+
+    if csv in ("any", "all", "no", ",", ""):
+        return None
+
+    if csv in ("lan", "local", "private", "prvt"):
+        csv = "10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, fd00::/8"  # lan
+        csv += ", 169.254.0.0/16, fe80::/10"  # link-local
+        csv += ", 127.0.0.0/8, ::1/128"  # loopback
+
+    srcs = [x.strip() for x in csv.split(",") if x.strip()]
+    cidrs = []
+    for zs in srcs:
+        if not zs.endswith("."):
+            cidrs.append(zs)
+            continue
+
+        # translate old syntax "172.19." => "172.19.0.0/16"
+        words = len(zs.rstrip(".").split("."))
+        if words == 1:
+            zs += "0.0.0/8"
+        elif words == 2:
+            zs += "0.0/16"
+        elif words == 3:
+            zs += "0/24"
+        else:
+            raise Exception("invalid config value [%s]" % (zs,))
+
+        cidrs.append(zs)
+
+    ips = [x.split("/")[0] for x in cidrs]
+    return NetMap(ips, cidrs, True)
+
+
+def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
+    readsz = min(bufsz, 128 * 1024)
+    with open(fsenc(fn), "rb", bufsz) as f:
        while True:
-            buf = f.read(128 * 1024)
+            buf = f.read(readsz)
            if not buf:
                break
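How the shorthand values accepted by `build_netmap` expand, assuming `NetMap.map` returns the matching cidr (or an empty string when nothing matches):

# hypothetical inputs:
#   build_netmap("any")      -> None (no filtering)
#   build_netmap("lan")      -> rfc1918 + link-local + loopback ranges
#   build_netmap("172.19.")  -> old prefix syntax, becomes 172.19.0.0/16
nm = build_netmap("lan, 203.0.113.0/24")
print(nm.map("192.168.1.9"))   # hits the lan expansion
print(nm.map("203.0.113.7"))   # hits the explicit cidr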
@@ -394,8 +394,7 @@ window.baguetteBox = (function () {

    }

    function dlpic() {
-        var url = findfile()[3].href;
-        url += (url.indexOf('?') < 0 ? '?' : '&') + 'cache';
+        var url = addq(findfile()[3].href, 'cache');
        dl_file(url);
    }

@@ -592,12 +591,13 @@ window.baguetteBox = (function () {

            document.documentElement.style.overflowY = 'auto';
            document.body.style.overflowY = 'auto';
        }
-        if (overlay.style.display === 'none')
-            return;

        if (options.duringHide)
            options.duringHide();

+        if (overlay.style.display === 'none')
+            return;
+
        sethash('');
        unbindEvents();
        try {

@@ -682,7 +682,7 @@ window.baguetteBox = (function () {

            options.captions.call(currentGallery, imageElement) :
            imageElement.getAttribute('data-caption') || imageElement.title;

-        imageSrc += imageSrc.indexOf('?') < 0 ? '?cache' : '&cache';
+        imageSrc = addq(imageSrc, 'cache');

        if (is_vid && index != currentIndex)
            return; // no preload

@@ -720,6 +720,9 @@ window.baguetteBox = (function () {

        figure.appendChild(image);

+        if (is_vid && window.afilt)
+            afilt.apply(undefined, image);
+
        if (options.async && callback)
            callback();
    }
@@ -494,6 +494,7 @@ html.dz {

    text-shadow: none;
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
}
html.dy {
    --fg: #000;

@@ -603,6 +604,7 @@ html {

    color: var(--fg);
    background: var(--bgg);
    font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
    text-shadow: 1px 1px 0px var(--bg-max);
}
html, body {

@@ -611,6 +613,7 @@ html, body {

}
pre, code, tt, #doc, #doc>code {
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
}
.ayjump {
    position: fixed;

@@ -759,6 +762,7 @@ html #files.hhpick thead th {

}
#files tbody td:nth-child(3) {
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
    text-align: right;
    padding-right: 1em;
    white-space: nowrap;

@@ -821,6 +825,7 @@ html.y #path a:hover {

.logue.raw {
    white-space: pre;
    font-family: 'scp', 'consolas', monospace;
+    font-family: var(--font-mono), 'scp', 'consolas', monospace;
}
#doc>iframe,
.logue>iframe {

@@ -985,6 +990,10 @@ html.y #path a:hover {

    margin: 0 auto;
    display: block;
}
+#ggrid.nocrop>a img {
+    max-height: 20em;
+    max-height: calc(var(--grid-sz)*2);
+}
#ggrid>a.dir:before {
    content: '📂';
}

@@ -1413,6 +1422,7 @@ input[type="checkbox"]:checked+label {

}
html.dz input {
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
}
.opwide div>span>input+label {
    padding: .3em 0 .3em .3em;

@@ -1698,6 +1708,7 @@ html.y #tree.nowrap .ntree a+a:hover {

}
.ntree a:first-child {
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
    font-size: 1.2em;
    line-height: 0;
}

@@ -1828,6 +1839,10 @@ html.y #tree.nowrap .ntree a+a:hover {

    margin: 0;
    padding: 0;
}
+#unpost td:nth-child(3),
+#unpost td:nth-child(4) {
+    text-align: right;
+}
#rui {
    background: #fff;
    background: var(--bg);

@@ -1855,6 +1870,7 @@ html.y #tree.nowrap .ntree a+a:hover {

}
#rn_vadv input {
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
}
#rui td+td,
#rui td input[type="text"] {

@@ -1918,6 +1934,7 @@ html.y #doc {

#doc.mdo {
    white-space: normal;
    font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
}
#doc.prism * {
    line-height: 1.5em;

@@ -1977,6 +1994,7 @@ a.btn,

}
#hkhelp td:first-child {
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
}
html.noscroll,
html.noscroll .sbar {

@@ -2486,6 +2504,7 @@ html.y #bbox-overlay figcaption a {

}
#op_up2k.srch td.prog {
    font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
    font-size: 1em;
    width: auto;
}

@@ -2500,6 +2519,7 @@ html.y #bbox-overlay figcaption a {

    white-space: nowrap;
    display: inline-block;
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
}
#u2etas.o {
    width: 20em;

@@ -2569,6 +2589,7 @@ html.y #bbox-overlay figcaption a {

#u2cards span {
    color: var(--fg-max);
    font-family: 'scp', monospace;
+    font-family: var(--font-mono), 'scp', monospace;
}
#u2cards > a:nth-child(4) > span {
    display: inline-block;

@@ -2734,6 +2755,7 @@ html.b #u2conf a.b:hover {

}
.prog {
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
}
#u2tab span.inf,
#u2tab span.ok,
@@ -7,9 +7,9 @@

    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=0.8, minimum-scale=0.6">
    <meta name="theme-color" content="#333">
-    {{ html_head }}
    <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
    <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/browser.css?_={{ ts }}">
+    {{ html_head }}
    {%- if css %}
    <link rel="stylesheet" media="screen" href="{{ css }}_={{ ts }}">
    {%- endif %}

@@ -161,3 +161,4 @@

</body>
+
</html>

File diff suppressed because it is too large
@@ -6,12 +6,12 @@

    <title>{{ title }}</title>
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=0.8">
-    {{ html_head }}
    <style>
        html{font-family:sans-serif}
        td{border:1px solid #999;border-width:1px 1px 0 0;padding:0 5px}
        a{display:block}
    </style>
+    {{ html_head }}
</head>

<body>

@@ -61,3 +61,4 @@

</body>
+
</html>

@@ -25,3 +25,4 @@

</body>
+
</html>

@@ -2,6 +2,7 @@ html, body {

    color: #333;
    background: #eee;
    font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
    line-height: 1.5em;
}
html.y #helpbox a {

@@ -67,6 +68,7 @@ a {

    position: relative;
    display: inline-block;
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
    font-weight: bold;
    font-size: 1.3em;
    line-height: .1em;

@@ -4,12 +4,12 @@

    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=0.7">
    <meta name="theme-color" content="#333">
-    {{ html_head }}
    <link rel="stylesheet" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
    <link rel="stylesheet" href="{{ r }}/.cpr/md.css?_={{ ts }}">
    {%- if edit %}
    <link rel="stylesheet" href="{{ r }}/.cpr/md2.css?_={{ ts }}">
    {%- endif %}
+    {{ html_head }}
</head>
<body>
    <div id="mn"></div>

@@ -160,3 +160,4 @@ try { l.light = drk? 0:1; } catch (ex) { }

<script src="{{ r }}/.cpr/md2.js?_={{ ts }}"></script>
{%- endif %}
+
</body></html>

@@ -512,13 +512,6 @@ dom_navtgl.onclick = function () {

    redraw();
};

-if (!HTTPS && location.hostname != '127.0.0.1') try {
-    ebi('edit2').onclick = function (e) {
-        toast.err(0, "the fancy editor is only available over https");
-        return ev(e);
-    }
-} catch (ex) { }
-
if (sread('hidenav') == 1)
    dom_navtgl.onclick();

@@ -9,7 +9,7 @@

    width: calc(100% - 56em);
}
#mw {
-    left: calc(100% - 55em);
+    left: max(0em, calc(100% - 55em));
    overflow-y: auto;
    position: fixed;
    bottom: 0;

@@ -56,6 +56,7 @@

    padding: 0;
    margin: 0;
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
    white-space: pre-wrap;
    word-break: break-word;
    overflow-wrap: break-word;

@@ -368,14 +368,14 @@ function save(e) {

function save_cb() {
    if (this.status !== 200)
-        return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
+        return toast.err(0, 'Error! The file was NOT saved.\n\nError ' + this.status + ":\n" + unpre(this.responseText));

    var r;
    try {
        r = JSON.parse(this.responseText);
    }
    catch (ex) {
-        return toast.err(0, 'Failed to parse reply from server:\n\n' + this.responseText);
+        return toast.err(0, 'Error! The file was likely NOT saved.\n\nFailed to parse reply from server:\n\n' + unpre(this.responseText));
    }

    if (!r.ok) {

@@ -418,7 +418,7 @@ function run_savechk(lastmod, txt, btn, ntry) {

function savechk_cb() {
    if (this.status !== 200)
-        return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
+        return toast.err(0, 'Error! The file was NOT saved.\n\nError ' + this.status + ":\n" + unpre(this.responseText));

    var doc1 = this.txt.replace(/\r\n/g, "\n");
    var doc2 = this.responseText.replace(/\r\n/g, "\n");

@@ -17,6 +17,7 @@ html, body {

    padding: 0;
    min-height: 100%;
    font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
    background: #f7f7f7;
    color: #333;
}

@@ -4,11 +4,11 @@

    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=0.7">
    <meta name="theme-color" content="#333">
-    {{ html_head }}
    <link rel="stylesheet" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
    <link rel="stylesheet" href="{{ r }}/.cpr/mde.css?_={{ ts }}">
    <link rel="stylesheet" href="{{ r }}/.cpr/deps/mini-fa.css?_={{ ts }}">
    <link rel="stylesheet" href="{{ r }}/.cpr/deps/easymde.css?_={{ ts }}">
+    {{ html_head }}
</head>
<body>
    <div id="mw">

@@ -54,3 +54,4 @@ try { l.light = drk? 0:1; } catch (ex) { }

<script src="{{ r }}/.cpr/deps/easymde.js?_={{ ts }}"></script>
<script src="{{ r }}/.cpr/mde.js?_={{ ts }}"></script>
+
</body></html>

@@ -134,14 +134,14 @@ function save(mde) {

function save_cb() {
    if (this.status !== 200)
-        return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
+        return toast.err(0, 'Error! The file was NOT saved.\n\nError ' + this.status + ":\n" + unpre(this.responseText));

    var r;
    try {
        r = JSON.parse(this.responseText);
    }
    catch (ex) {
-        return toast.err(0, 'Failed to parse reply from server:\n\n' + this.responseText);
+        return toast.err(0, 'Error! The file was likely NOT saved.\n\nFailed to parse reply from server:\n\n' + unpre(this.responseText));
    }

    if (!r.ok) {

@@ -180,7 +180,7 @@ function save_cb() {

function save_chk() {
    if (this.status !== 200)
-        return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
+        return toast.err(0, 'Error! The file was NOT saved.\n\nError ' + this.status + ":\n" + unpre(this.responseText));

    var doc1 = this.txt.replace(/\r\n/g, "\n");
    var doc2 = this.responseText.replace(/\r\n/g, "\n");
@@ -1,3 +1,8 @@

+:root {
+    --font-main: sans-serif;
+    --font-serif: serif;
+    --font-mono: 'scp';
+}
html,body,tr,th,td,#files,a {
    color: inherit;
    background: none;

@@ -10,6 +15,7 @@ html {

    color: #ccc;
    background: #333;
    font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
    text-shadow: 1px 1px 0px #000;
    touch-action: manipulation;
}

@@ -23,6 +29,7 @@ html, body {

}
pre {
    font-family: monospace, monospace;
+    font-family: var(--font-mono), monospace, monospace;
}
a {
    color: #fc5;

@@ -7,8 +7,8 @@

    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=0.8">
    <meta name="theme-color" content="#333">
-    {{ html_head }}
    <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/msg.css?_={{ ts }}">
+    {{ html_head }}
</head>

<body>

@@ -48,4 +48,5 @@

    {%- endif %}
</body>
+
</html>

@@ -2,6 +2,7 @@ html {

    color: #333;
    background: #f7f7f7;
    font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
    touch-action: manipulation;
}
#wrap {

@@ -127,6 +128,7 @@ pre, code {

    color: #480;
    background: #fff;
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
    border: 1px solid rgba(128,128,128,0.3);
    border-radius: .2em;
    padding: .15em .2em;

@@ -7,9 +7,9 @@

    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=0.8">
    <meta name="theme-color" content="#333">
-    {{ html_head }}
    <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/splash.css?_={{ ts }}">
    <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
+    {{ html_head }}
</head>

<body>

@@ -78,13 +78,15 @@

    <h1 id="cc">client config:</h1>
    <ul>
+        {% if k304 or k304vis %}
        {% if k304 %}
            <li><a id="h" href="{{ r }}/?k304=n">disable k304</a> (currently enabled)
        {%- else %}
            <li><a id="i" href="{{ r }}/?k304=y" class="r">enable k304</a> (currently disabled)
        {% endif %}
        <blockquote id="j">enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>
+        {% endif %}

        <li><a id="k" href="{{ r }}/?reset" class="r" onclick="localStorage.clear();return true">reset client settings</a></li>
    </ul>

@@ -118,3 +120,4 @@ document.documentElement.className = (STG && STG.cpp_thm) || "{{ this.args.theme

<script src="{{ r }}/.cpr/splash.js?_={{ ts }}"></script>
</body>
+
</html>

@@ -6,7 +6,7 @@ var Ls = {

    "d1": "tilstand",
    "d2": "vis tilstanden til alle tråder",
    "e1": "last innst.",
-    "e2": "leser inn konfigurasjonsfiler på nytt$N(kontoer, volumer, volumbrytere)$Nog kartlegger alle e2ds-volumer",
+    "e2": "leser inn konfigurasjonsfiler på nytt$N(kontoer, volumer, volumbrytere)$Nog kartlegger alle e2ds-volumer$N$Nmerk: endringer i globale parametere$Nkrever en full restart for å ta gjenge",
    "f1": "du kan betrakte:",
    "g1": "du kan laste opp til:",
    "cc1": "klient-konfigurasjon",

@@ -30,7 +30,7 @@ var Ls = {

    },
    "eng": {
        "d2": "shows the state of all active threads",
-        "e2": "reload config files (accounts/volumes/volflags),$Nand rescan all e2ds volumes",
+        "e2": "reload config files (accounts/volumes/volflags),$Nand rescan all e2ds volumes$N$Nnote: any changes to global settings$Nrequire a full restart to take effect",
        "u2": "time since the last server write$N( upload / rename / ... )$N$N17d = 17 days$N1h23 = 1 hour 23 minutes$N4m56 = 4 minutes 56 seconds",
        "v2": "use this server as a local HDD$N$NWARNING: this will show your password!",
    }

@@ -49,6 +49,15 @@

for (var k in (d || {})) {
    o[a].setAttribute("tt", d[k]);
}

+try {
+    if (is_idp) {
+        var z = ['#l+div', '#l', '#c'];
+        for (var a = 0; a < z.length; a++)
+            QS(z[a]).style.display = 'none';
+    }
+}
+catch (ex) { }
+
tt.init();
var o = QS('input[name="cppwd"]');
if (!ebi('c') && o.offsetTop + o.offsetHeight < window.innerHeight)
@@ -7,10 +7,10 @@

    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=0.8">
    <meta name="theme-color" content="#333">
-    {{ html_head }}
    <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/splash.css?_={{ ts }}">
    <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
    <style>ul{padding-left:1.3em}li{margin:.4em 0}</style>
+    {{ html_head }}
</head>

<body>

@@ -246,3 +246,4 @@ document.documentElement.className = (STG && STG.cpp_thm) || "{{ args.theme }}";

<script src="{{ r }}/.cpr/svcs.js?_={{ ts }}"></script>
</body>
+
</html>

@@ -1,4 +1,8 @@

:root {
+    --font-main: sans-serif;
+    --font-serif: serif;
+    --font-mono: 'scp';
+
    --fg: #ccc;
    --fg-max: #fff;
    --bg-u2: #2b2b2b;

@@ -378,6 +382,7 @@ html.y textarea:focus {

.mdo code,
.mdo tt {
    font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
    white-space: pre-wrap;
    word-break: break-all;
}

@@ -447,6 +452,7 @@ html.y textarea:focus {

}
.mdo blockquote {
    font-family: serif;
+    font-family: var(--font-serif), serif;
    background: #f7f7f7;
    border: .07em dashed #ccc;
    padding: 0 2em;

@@ -17,7 +17,7 @@ function goto_up2k() {

var up2k = null,
    up2k_hooks = [],
    hws = [],
-    sha_js = window.WebAssembly ? 'hw' : 'ac', // ff53,c57,sa11
+    sha_js = WebAssembly ? 'hw' : 'ac', // ff53,c57,sa11
    m = 'will use ' + sha_js + ' instead of native sha512 due to';

try {

@@ -717,7 +717,7 @@ function Donut(uc, st) {

        sfx();

        // firefox may forget that filedrops are user-gestures so it can skip this:
-        if (uc.upnag && window.Notification && Notification.permission == 'granted')
+        if (uc.upnag && Notification && Notification.permission == 'granted')
            new Notification(uc.nagtxt);
    }

@@ -779,8 +779,8 @@ function up2k_init(subtle) {

    };

    setTimeout(function () {
-        if (window.WebAssembly && !hws.length)
-            fetch(SR + '/.cpr/w.hash.js' + CB);
+        if (WebAssembly && !hws.length)
+            fetch(SR + '/.cpr/w.hash.js?_=' + TS);
    }, 1000);

    function showmodal(msg) {

@@ -869,7 +869,7 @@ function up2k_init(subtle) {

    bcfg_bind(uc, 'turbo', 'u2turbo', turbolvl > 1, draw_turbo);
    bcfg_bind(uc, 'datechk', 'u2tdate', turbolvl < 3, null);
    bcfg_bind(uc, 'az', 'u2sort', u2sort.indexOf('n') + 1, set_u2sort);
-    bcfg_bind(uc, 'hashw', 'hashw', !!window.WebAssembly && (!subtle || !CHROME || MOBILE || VCHROME >= 107), set_hashw);
+    bcfg_bind(uc, 'hashw', 'hashw', !!WebAssembly && (!subtle || !CHROME || MOBILE || VCHROME >= 107), set_hashw);
    bcfg_bind(uc, 'upnag', 'upnag', false, set_upnag);
    bcfg_bind(uc, 'upsfx', 'upsfx', false, set_upsfx);

@@ -1347,9 +1347,9 @@ function up2k_init(subtle) {

    var evpath = get_evpath(),
        draw_each = good_files.length < 50;

-    if (window.WebAssembly && !hws.length) {
+    if (WebAssembly && !hws.length) {
        for (var a = 0; a < Math.min(navigator.hardwareConcurrency || 4, 16); a++)
-            hws.push(new Worker(SR + '/.cpr/w.hash.js' + CB));
+            hws.push(new Worker(SR + '/.cpr/w.hash.js?_=' + TS));

        console.log(hws.length + " hashers");
    }

@@ -1722,8 +1722,6 @@ function up2k_init(subtle) {

            ebi('u2etas').style.textAlign = 'left';
        }
        etafun();
-        if (pvis.act == 'bz')
-            pvis.changecard('bz');
    }

    if (flag) {

@@ -1859,6 +1857,9 @@ function up2k_init(subtle) {

        timer.rm(donut.do);
        ebi('u2tabw').style.minHeight = '0px';
        utw_minh = 0;
+
+        if (pvis.act == 'bz')
+            pvis.changecard('bz');
    }

    function chill(t) {

@@ -2256,6 +2257,7 @@ function up2k_init(subtle) {

            console.log('handshake onerror, retrying', t.name, t);
            apop(st.busy.handshake, t);
            st.todo.handshake.unshift(t);
            t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
+            t.keepalive = keepalive;
        };
        var orz = function (e) {

@@ -2263,16 +2265,26 @@ function up2k_init(subtle) {

                return console.log('zombie handshake onload', t.name, t);

            if (xhr.status == 200) {
+                try {
+                    var response = JSON.parse(xhr.responseText);
+                }
+                catch (ex) {
+                    apop(st.busy.handshake, t);
+                    st.todo.handshake.unshift(t);
+                    t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
+                    return toast.err(0, 'Handshake error; will retry...\n\n' + L.badreply + ':\n\n' + unpre(xhr.responseText));
+                }
+
                t.t_handshake = Date.now();
                if (keepalive) {
                    apop(st.busy.handshake, t);
                    tasker();
                    return;
                }

                if (toast.tag === t)
                    toast.ok(5, L.u_fixed);

-                var response = JSON.parse(xhr.responseText);
                if (!response.name) {
                    var msg = '',
                        smsg = '';

@@ -2856,6 +2868,8 @@ function up2k_init(subtle) {

            new_state = false;
            fixed = true;
        }
+        if (new_state === undefined)
+            new_state = can_write ? false : have_up2k_idx ? true : undefined;
    }

    if (new_state === undefined)

@@ -2936,7 +2950,7 @@ function up2k_init(subtle) {

    }

    function set_hashw() {
-        if (!window.WebAssembly) {
+        if (!WebAssembly) {
            bcfg_set('hashw', uc.hashw = false);
            toast.err(10, L.u_nowork);
        }

@@ -2953,7 +2967,7 @@ function up2k_init(subtle) {

            nopenag();
        }

-        if (!window.Notification || !HTTPS)
+        if (!Notification || !HTTPS)
            return nopenag();

        if (en && Notification.permission == 'default')

@@ -2975,7 +2989,7 @@ function up2k_init(subtle) {

        };
    }

-    if (uc.upnag && (!window.Notification || Notification.permission != 'granted'))
+    if (uc.upnag && (!Notification || Notification.permission != 'granted'))
        bcfg_set('upnag', uc.upnag = false);

    ebi('nthread_add').onclick = function (e) {

@@ -16,7 +16,6 @@ var wah = '',

    NOAC = 'autocorrect="off" autocapitalize="off"',
    L, tt, treectl, thegrid, up2k, asmCrypto, hashwasm, vbar, marked,
    T0 = Date.now(),
-    CB = '?_=' + Math.floor(T0 / 1000).toString(36),
    R = SR.slice(1),
    RS = R ? "/" + R : "",
    HALFMAX = 8192 * 8192 * 8192 * 8192,

@@ -52,8 +51,6 @@ catch (ex) {

}

try {
-    CB = '?' + document.currentScript.src.split('?').pop();
-
    if (navigator.userAgentData.mobile)
        MOBILE = true;

@@ -130,7 +127,7 @@ if ((document.location + '').indexOf(',rej,') + 1)

try {
    console.hist = [];
-    var CMAXHIST = 100;
+    var CMAXHIST = 1000;
    var hook = function (t) {
        var orig = console[t].bind(console),
            cfun = function () {

@@ -151,8 +148,6 @@ try {

    hook('error');
}
catch (ex) {
-    if (console.stdlog)
-        console.log = console.stdlog;
    console.log('console capture failed', ex);
}
var crashed = false, ignexd = {}, evalex_fatal = false;

@@ -182,7 +177,7 @@ function vis_exh(msg, url, lineNo, columnNo, error) {

    if (url.indexOf('easymde.js') + 1)
        return; // clicking the preview pane

-    if (url.indexOf('deps/marked.js') + 1 && !window.WebAssembly)
+    if (url.indexOf('deps/marked.js') + 1 && !WebAssembly)
        return; // ff<52

    crashed = true;

@@ -740,6 +735,11 @@ function vjoin(p1, p2) {

}

+function addq(url, q) {
+    return url + (url.indexOf('?') < 0 ? '?' : '&') + (q === undefined ? '' : q);
+}
+
function uricom_enc(txt, do_fb_enc) {
    try {
        return encodeURIComponent(txt);

@@ -1417,9 +1417,12 @@ function lf2br(txt) {

}

-function unpre(txt) {
+function hunpre(txt) {
    return ('' + txt).replace(/^<pre>/, '');
}
+function unpre(txt) {
+    return esc(hunpre(txt));
+}

var toast = (function () {

@@ -1466,7 +1469,7 @@ var toast = (function () {

        clmod(obj, 'vis');
        r.visible = false;
        r.tag = obj;
-        if (!window.WebAssembly)
+        if (!WebAssembly)
            te = setTimeout(function () {
                obj.className = 'hide';
            }, 500);

@@ -1882,7 +1885,7 @@ function md_thumbs(md) {

        float = has(flags, 'l') ? 'left' : has(flags, 'r') ? 'right' : '';

    if (!/[?&]cache/.exec(url))
-        url += (url.indexOf('?') < 0 ? '?' : '&') + 'cache=i';
+        url = addq(url, 'cache=i');

    md[a] = '<a href="' + url + '" class="mdth mdth' + float.slice(0, 1) + '"><img src="' + url + '&th=w" alt="' + alt + '" /></a>' + md[a].slice(o2 + 1);
}

@@ -1995,15 +1998,21 @@ function xhrchk(xhr, prefix, e404, lvl, tag) {

    if (tag === undefined)
        tag = prefix;

-    var errtxt = (xhr.response && xhr.response.err) || xhr.responseText,
+    var errtxt = ((xhr.response && xhr.response.err) || xhr.responseText) || '',
+        suf = '',
        fun = toast[lvl || 'err'],
        is_cf = /[Cc]loud[f]lare|>Just a mo[m]ent|#cf-b[u]bbles|Chec[k]ing your br[o]wser|\/chall[e]nge-platform|"chall[e]nge-error|nable Ja[v]aScript and cook/.test(errtxt);

+    if (errtxt.startsWith('<pre>'))
+        suf = '\n\nerror-details: «' + unpre(errtxt).split('\n')[0].trim() + '»';
+    else
+        errtxt = esc(errtxt).slice(0, 32768);
+
    if (xhr.status == 403 && !is_cf)
-        return toast.err(0, prefix + (L && L.xhr403 || "403: access denied\n\ntry pressing F5, maybe you got logged out"), tag);
+        return toast.err(0, prefix + (L && L.xhr403 || "403: access denied\n\ntry pressing F5, maybe you got logged out") + suf, tag);

    if (xhr.status == 404)
-        return toast.err(0, prefix + e404, tag);
+        return toast.err(0, prefix + e404 + suf, tag);

    if (is_cf && (xhr.status == 403 || xhr.status == 503)) {
        var now = Date.now(), td = now - cf_cha_t;
@@ -13,6 +13,9 @@

# other stuff

+## [`TODO.md`](TODO.md)
+* planned features / fixes / changes
+
## [`example.conf`](example.conf)
* example config file for `-c`


15  docs/TODO.md  Normal file

@@ -0,0 +1,15 @@
+a living list of upcoming features / fixes / changes, very roughly in order of priority
+
+* download accelerator
+  * definitely download chunks in parallel
+  * maybe resumable downloads (chrome-only, jank api)
+  * maybe checksum validation (return sha512 of requested range in responses, and probably also warks)
+
+* [github issue #37](https://github.com/9001/copyparty/issues/37) - upload PWA
+  * or [maybe not](https://arstechnica.com/tech-policy/2024/02/apple-under-fire-for-disabling-iphone-web-apps-eu-asks-developers-to-weigh-in/), or [maybe](https://arstechnica.com/gadgets/2024/03/apple-changes-course-will-keep-iphone-eu-web-apps-how-they-are-in-ios-17-4/)
+
+* [github issue #57](https://github.com/9001/copyparty/issues/57) - config GUI
+  * configs given to -c can be ordered with numerical prefix
+  * autorevert settings if it fails to apply
+  * countdown until session invalidates in settings gui, with refresh-button

96  docs/bufsize.txt  Normal file
@@ -0,0 +1,96 @@
+notes from testing various buffer sizes of files and sockets
+
+summary:
+
+download-folder-as-tar: would be 7% faster with --iobuf 65536 (but got 20% faster in v1.11.2)
+
+download-folder-as-zip: optimal with default --iobuf 262144
+
+download-file-over-https: optimal with default --iobuf 262144
+
+put-large-file: optimal with default --iobuf 262144, --s-rd-sz 262144 (and got 14% faster in v1.11.2)
+
+post-large-file: optimal with default --iobuf 262144, --s-rd-sz 262144 (and got 18% faster in v1.11.2)
+
+----
+
+oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/bigs/?tar
+3.3 req/s 1.11.1
+4.3   4.0   3.3   req/s 1.12.2
+64    256   512   --iobuf 256 (prefer smaller)
+32    32    32    --s-rd-sz
+
+oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/bigs/?zip
+2.9 req/s 1.11.1
+2.5   2.9   2.9   req/s 1.12.2
+64    256   512   --iobuf 256 (prefer bigger)
+32    32    32    --s-rd-sz
+
+oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/pairdupes/?tar
+8.3 req/s 1.11.1
+8.4   8.4   8.5   req/s 1.12.2
+64    256   512   --iobuf 256 (prefer bigger)
+32    32    32    --s-rd-sz
+
+oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/pairdupes/?zip
+13.9 req/s 1.11.1
+14.1  14.0  13.8  req/s 1.12.2
+64    256   512   --iobuf 256 (prefer smaller)
+32    32    32    --s-rd-sz
+
+oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/pairdupes/987a
+5260 req/s 1.11.1
+5246  5246  5280  5268  req/s 1.12.2
+64    256   512   256   --iobuf dontcare
+32    32    32    512   --s-rd-sz dontcare
+
+oha -z10s -c1 --ipv4 --insecure https://127.0.0.1:3923/pairdupes/987a
+4445 req/s 1.11.1
+4462  4494  4444  req/s 1.12.2
+64    256   512   --iobuf dontcare
+32    32    32    --s-rd-sz
+
+oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/bigs/gssc-02-cannonball-skydrift/track10.cdda.flac
+95 req/s 1.11.1
+95    97    req/s 1.12.2
+64    512   --iobuf dontcare
+32    32    --s-rd-sz
+
+oha -z10s -c1 --ipv4 --insecure https://127.0.0.1:3923/bigs/gssc-02-cannonball-skydrift/track10.cdda.flac
+15.4 req/s 1.11.1
+15.4  15.3  14.9  15.4  req/s 1.12.2
+64    256   512   512   --iobuf 256 (prefer smaller, and smaller than s-wr-sz)
+32    32    32    32    --s-rd-sz
+256   256   256   512   --s-wr-sz
+
+----
+
+python3 ~/dev/old/copyparty\ v1.11.1\ dont\ ban\ the\ pipes.py -q -i 127.0.0.1 -v .::A --daw
+python3 ~/dev/copyparty/dist/copyparty-sfx.py -q -i 127.0.0.1 -v .::A --daw --iobuf $((1024*512))
+
+oha -z10s -c1 --ipv4 --insecure -mPUT -r0 -D ~/Music/gssc-02-cannonball-skydrift/track10.cdda.flac http://127.0.0.1:3923/a.bin
+10.8 req/s 1.11.1
+10.8  11.5  11.8  12.1  12.2  12.3  req/s new
+512   512   512   512   512   256   --iobuf 256
+32    64    128   256   512   256   --s-rd-sz 256 (prefer bigger)
+
+----
+
+buildpost() {
+  b=--jeg-er-grensestaven;
+  printf -- "$b\r\nContent-Disposition: form-data; name=\"act\"\r\n\r\nbput\r\n$b\r\nContent-Disposition: form-data; name=\"f\"; filename=\"a.bin\"\r\nContent-Type: audio/mpeg\r\n\r\n"
+  cat "$1"
+  printf -- "\r\n${b}--\r\n"
+}
+buildpost ~/Music/gssc-02-cannonball-skydrift/track10.cdda.flac >big.post
+buildpost ~/Music/bottomtext.txt >smol.post
+
+oha -z10s -c1 --ipv4 --insecure -mPOST -r0 -T 'multipart/form-data; boundary=jeg-er-grensestaven' -D big.post http://127.0.0.1:3923/?replace
+9.6   11.2  11.3  11.1  10.9  req/s v1.11.2
+512   512   256   128   256   --iobuf 256
+32    512   256   128   128   --s-rd-sz 256
+
+oha -z10s -c1 --ipv4 --insecure -mPOST -r0 -T 'multipart/form-data; boundary=jeg-er-grensestaven' -D smol.post http://127.0.0.1:3923/?replace
+2445  2414  2401  2437
+256   128   256   256   --iobuf 256
+128   128   256   64    --s-rd-sz 128 (but use 256 since big posts are more important)
@@ -1,3 +1,177 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0323-1724 `v1.11.2` public idp volumes

* read-only demo server at https://a.ocv.me/pub/demo/
* [docker image](https://github.com/9001/copyparty/tree/hovudstraum/scripts/docker) ╱ [similar software](https://github.com/9001/copyparty/blob/hovudstraum/docs/versus.md) ╱ [client testbed](https://cd.ocv.me/b/)

there is a [discord server](https://discord.gg/25J8CdTT6G) with an `@everyone` in case of future important updates, such as [vulnerabilities](https://github.com/9001/copyparty/security) (most recently 2023-07-23)

## new features

* global-option `--iobuf` to set a custom I/O buffer size 2b24c50e
  * changes the default buffer size to 256 KiB everywhere (was a mix of 64 and 512)
  * may improve performance of networked volumes (s3 etc.) if increased
  * on gbit networks: download-as-tar is now up to 20% faster
  * slightly faster FTP and TFTP too
* global-option `--s-rd-sz` to set a custom read-size for sockets c6acd3a9
  * changes the default from 32 to 256 KiB
  * may improve performance of networked volumes (s3 etc.) if increased
  * on 10gbit networks: uploading large files is now up to 17% faster
* add url parameter `?replace` to overwrite any existing files with a multipart-post c6acd3a9
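for example, overwriting an existing `a.bin` with a multipart-post (a sketch; assumes the default port, and that `/inc/` is a folder you have write-access to):

```bash
curl -F f=@a.bin 'http://127.0.0.1:3923/inc/?replace'
```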
## bugfixes

* #79 idp volumes (introduced in [v1.11.0](https://github.com/9001/copyparty/releases/tag/v1.11.0)) would only accept permissions for the user that owned the volume, making it impossible to grant read/write-access to other users d30ae845

## other changes

* mention the [lack of persistence for idp volumes](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#important-notes) in the IdP docs 2f20d29e



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0318-1709 `v1.11.1` dont ban the pipes

the [previous release](https://github.com/9001/copyparty/releases/tag/v1.11.0) had all the fun new features... this one's just bugfixes

* read-only demo server at https://a.ocv.me/pub/demo/
* [docker image](https://github.com/9001/copyparty/tree/hovudstraum/scripts/docker) ╱ [similar software](https://github.com/9001/copyparty/blob/hovudstraum/docs/versus.md) ╱ [client testbed](https://cd.ocv.me/b/)

### no vulnerabilities since 2023-07-23
* there is a [discord server](https://discord.gg/25J8CdTT6G) with an `@everyone` in case of future important updates
* [v1.8.7](https://github.com/9001/copyparty/releases/tag/v1.8.7) (2023-07-23) - [CVE-2023-38501](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-38501) - reflected XSS
* [v1.8.2](https://github.com/9001/copyparty/releases/tag/v1.8.2) (2023-07-14) - [CVE-2023-37474](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-37474) - path traversal (first CVE)

## bugfixes

* less aggressive rejection of requests from banned IPs 51d31588
  * clients would get kicked before the header was parsed (which contains the xff header), meaning the server could become inaccessible to everyone if the reverse-proxy itself were to "somehow" get banned
    * ...which can happen if a server behind cloudflare also accepts non-cloudflare connections, meaning the client IP would not be resolved, and it'll ban the LAN IP instead heh
    * that part still happens, but now it won't affect legit clients through the intended route
  * the old behavior can be restored with `--early-ban` to save some cycles, and/or avoid slowloris somewhat
* the unpost feature could appear to be disabled on servers where no volume was mapped to `/` 0287c7ba
* python 3.12 support for [compiling the dependencies](https://github.com/9001/copyparty/tree/hovudstraum/bin/mtag#dependencies) necessary to detect bpm/key in audio files 32553e45

## other changes

* mention [real-ip configuration](https://github.com/9001/copyparty?tab=readme-ov-file#real-ip) in the readme ee80cdb9
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0315-2047 `v1.11.0` You Can (Not) Proceed

this release was made possible by [stoltzekleiven, kvikklunsj, and tako](https://a.ocv.me/pub/g/nerd-stuff/2024-0310-stoltzekleiven.jpg)

## new features

* #62 support for [identity providers](https://github.com/9001/copyparty#identity-providers) and automatically creating volumes for each user/group ("home folders")
  * login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions
  * [documentation](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md) and [examples](https://github.com/9001/copyparty/tree/hovudstraum/docs/examples/docker/idp-authelia-traefik) could still use some help (I did my best)
* #77 UI to cancel unfinished uploads (available in the 🧯 unpost tab) 3f05b665
  * the user's IP and username must match the upload by default; can be changed with global-option / volflag `u2abort`
* new volflag `sparse` to pretend sparse files are supported even if the filesystem doesn't 8785d2f9
  * gives drastically better performance when writing to s3 buckets through juicefs/geesefs
  * only for when you know the filesystem can deal with it (so juicefs/geesefs is OK, but **definitely not** fat32)
* `--xff-src` and `--ipa` now support CIDR notation (but the old syntax still works) b377791b
* ux:
  * #74 option to use [custom fonts](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice) 263adec7 6cc7101d 8016e671
  * option to disable autoplay when page url contains a song hash 8413ed6d
    * good if you're using copyparty to listen to music at the office, and the office policy is to have the webbrowser automatically restart to install updates, meaning your coworkers are suddenly and involuntarily enjoying some loud af jcore while you're asleep at home

## bugfixes

* don't panic if cloudflare (or another reverse-proxy) decides to hijack json responses and replace them with html 7741870d
* #73 the fancy markdown editor was incompatible with caddy (a reverse-proxy) ac96fd9c
* media player could get confused if neighboring folders had songs with the same filenames 206af8f1
* benign race condition in the config reloader (could only be triggered by admins and/or SIGUSR1) 096de508
* running tftp with optimizations enabled would cause issues for `--ipa` b377791b
* cosmetic tftp bugs 115020ba
* ux:
  * up2k rendering glitch if the last couple uploads were dupes 547a4863
  * up2k rendering glitch when switching between readonly/writeonly folders 51a83b04
  * markdown editor preview was glitchy on tiny screens e5582605

## other changes

* add a [sharex v12.1](https://github.com/9001/copyparty/tree/hovudstraum/contrib#sharexsxcu) config example 2527e903
* make it easier to discover/diagnose issues with docker and/or reverse-proxy config d744f3ff
* stop recommending the use of `--xff-src=any` in the log messages 7f08f10c
* ux:
  * remove the `k304` togglebutton in the controlpanel by default 1c011ff0
  * mention that a full restart is required for `[global]` config changes to take effect 0c039219
* docs e78af022
  * [how to use copyparty with amazon aws s3](https://github.com/9001/copyparty#using-the-cloud-as-storage)
  * faq: http/https confusion caused by incorrectly configured cloudflare
  * #76 docker: ftp-server howto
* copyparty.exe: updated pyinstaller to 6.5.0 bdbcbbb0
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0221-2132 `v1.10.2` tall thumbs

## new features

* thumbnails can be way taller when centercrop is disabled in the browser UI 5026b212
  * good for folders with lots of portrait pics (no more letterboxing)
* more thumbnail stuff:
  * zoom levels are twice as granular 5026b212
  * write-only folders get an "upload-only" icon 89c6c2e0
  * inaccessible files/folders get a 403/404 icon 8a38101e

## bugfixes

* tftp fixes d07859e8
  * server could crash if a nic disappeared / got restarted mid-transfer
  * tiny resource leak if dualstack causes ipv4 bind to fail
* thumbnails:
  * when behind a caching proxy (cloudflare), icons in folders would be a random mix of png and svg 43ee6b9f
  * produce valid folder icons when thumbnails are disabled 14af136f
* trailing newline in html responses d39a99c9

## other changes

* webdeps: update dompurify 13e77777
* copyparty.exe: update jinja2, markupsafe, pyinstaller, upx 13e77777
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0218-1554 `v1.10.1` big thumbs

## new features

* button to enable hi-res thumbnails 33f41f3e 58ae38c6
  * enable with the `3x` button in the gridview
  * can be force-enabled/disabled serverside with `--th-x3` or volflag `th3x`
* tftp: IPv6 support and UTF-8 filenames + optimizations 0504b010
* ux:
  * when closing the image viewer, scroll to the last viewed pic bbc37990
  * respect `prefers-reduced-motion` in some more places fbfdd833

## bugfixes

* #72 impossible to delete recently uploaded zerobyte files if database was disabled 6bd087dd
* tftp now works in `copyparty.exe`, `copyparty32.exe`, `copyparty-winpe64.exe`
* the [sharex config example](https://github.com/9001/copyparty/tree/hovudstraum/contrib#sharexsxcu) was still using cookie-auth 8ff7094e
* ux:
  * prevent scrolling while a pic is open 7f1c9926
  * fix gridview in older firefox versions 7f1c9926

## other changes

* thumbnail center-cropping can be force-enabled/disabled serverside with `--th-crop` or volflag `crop`
  * replaces `--th-no-crop` which is now deprecated (but will continue to work)

----

this release contains a build of `copyparty-winpe64.exe` which is almost **entirely useless,** except for in *extremely specific scenarios*, namely the kind where a TFTP server could also be useful -- the [previous build](https://github.com/9001/copyparty/releases/download/v1.8.7/copyparty-winpe64.exe) was from [version 1.8.7](https://github.com/9001/copyparty/releases/tag/v1.8.7) (2023-07-23)



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0215-0000 `v1.10.0` tftp
@@ -20,6 +20,8 @@
* [just the sfx](#just-the-sfx)
* [build from release tarball](#build-from-release-tarball) - uses the included prebuilt webdeps
* [complete release](#complete-release)
* [debugging](#debugging)
  * [music playback halting on phones](#music-playback-halting-on-phones) - mostly fine on android
* [todo](#todo) - roughly sorted by priority
  * [discarded ideas](#discarded-ideas)
@@ -164,6 +166,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| PUT | `?xz` | (binary data) | compress with xz and write into file at URL |
| mPOST | | `f=FILE` | upload `FILE` into the folder at URL |
| mPOST | `?j` | `f=FILE` | ...and reply with json |
| mPOST | `?replace` | `f=FILE` | ...and overwrite existing files |
| mPOST | | `act=mkdir`, `name=foo` | create directory `foo` at URL |
| POST | `?delete` | | delete URL recursively |
| jPOST | `?delete` | `["/foo","/bar"]` | delete `/foo` and `/bar` recursively |
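hedged curl equivalents of a few of the rows above (host, folder, and file names are placeholders):

```bash
curl -F f=@a.bin 'http://127.0.0.1:3923/inc/?j'       # upload a.bin, reply with json
curl -F act=mkdir -F name=foo http://127.0.0.1:3923/  # create directory /foo
curl -X POST 'http://127.0.0.1:3923/foo/?delete'      # delete /foo recursively
```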
@@ -218,7 +221,7 @@ if you don't need all the features, you can repack the sfx and save a bunch of s
* `269k` after `./scripts/make-sfx.sh re no-cm no-hl`

the features you can opt to drop are
-* `cm`/easymde, the "fancy" markdown editor, saves ~82k
+* `cm`/easymde, the "fancy" markdown editor, saves ~89k
* `hl`, prism, the syntax highlighter, saves ~41k
* `fnt`, source-code-pro, the monospace font, saves ~9k
* `dd`, the custom mouse cursor for the media player tray tab, saves ~2k
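assuming the drop-flags follow the same `no-<feature>` naming as the two shown above, a maximal repack would look something like this (unverified sketch):

```bash
./scripts/make-sfx.sh re no-cm no-hl no-fnt no-dd
```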
@@ -300,6 +303,19 @@ in the `scripts` folder:
* run `./rls.sh 1.2.3` which uploads to pypi + creates github release + sfx


# debugging

## music playback halting on phones

mostly fine on android, but still haven't found a way to massage iphones into behaving well

* conditionally starting/stopping mp.fau according to mp.au.readyState <3 or <4 doesn't help
* loop=true doesn't work, and manually looping mp.fau from an onended also doesn't work (it does nothing)
* assigning fau.currentTime in a timer doesn't work, as safari merely pretends to assign it

can be reproduced with `--no-sendfile --s-wr-sz 8192 --s-wr-slp 0.3 --rsp-slp 6` and then playing a collection of small audio files with the screen off, `ffmpeg -i track01.cdda.flac -c:a libopus -b:a 128k -segment_time 12 -f segment smol-%02d.opus`
# todo

roughly sorted by priority

@@ -309,6 +325,10 @@ roughly sorted by priority

## discarded ideas

* optimization attempts which didn't improve performance
  * remove brokers / multiprocessing stuff; https://github.com/9001/copyparty/tree/no-broker
  * reduce the nesting / indirections in `HttpCli` / `httpcli.py`
    * nearly zero benefit from stuff like replacing all the `self.conn.hsrv` with a local `hsrv` variable
  * reduce up2k roundtrips
    * start from a chunk index and just go
    * terminate client on bad data
@@ -10,7 +10,6 @@

# q, lo: /cfg/log/%Y-%m%d.log # log to file instead of docker

# ftp: 3921 # enable ftp server on port 3921
# p: 3939 # listen on another port
# ipa: 10.89. # only allow connections from 10.89.*
# df: 16 # stop accepting uploads if less than 16 GB free disk space
docs/examples/docker/idp-authelia-traefik/README.md (new file, 50 lines)
@@ -0,0 +1,50 @@
> [!WARNING]
> I am unable to guarantee the quality, safety, and security of anything in this folder; it is a combination of examples I found online. Please submit corrections or improvements 🙏

to try this out with minimal adjustments:
* specify what filesystem-path to share with copyparty, replacing the default/example value `/srv/pub` in `docker-compose.yml`
* add `127.0.0.1 fs.example.com traefik.example.com authelia.example.com` to your `/etc/hosts`
* `sudo docker-compose up`
* login to https://fs.example.com/ with username `authelia` password `authelia`

to use this in a safe and secure manner:
* follow a guide on setting up authelia properly (TODO:link) and use the copyparty-specific parts of this folder as inspiration for your own config; namely the `cpp` subfolder and the `copyparty` service in `docker-compose.yml`

this folder is based on:
* https://github.com/authelia/authelia/tree/39763aaed24c4abdecd884b47357a052b235942d/examples/compose/lite

incomplete list of modifications made:
* support for running with podman as root on fedora (`:z` volumes, `label:disable`)
* explicitly using authelia `v4.38.0-beta3` because config syntax changed since last stable release
* disabled automatic letsencrypt certificate signing
* reduced logging from debug to info
* added a warning that traefik is given access to the docker socket (as recommended by traefik docs) which means traefik is able to break out of the container and has full root access on the host machine


# security

there is probably/definitely room for improvement in this example setup. Some ideas taken from [github issue #62](https://github.com/9001/copyparty/issues/62):

* Add in a redis password to limit attacker lateral movement in the system
* Move redis to a private network shared with just authelia
* Pin to image hashes (or go all in on updates and add `watchtower`)
* Drop bridge networking for just exposing traefik's public ports
* Configure docker for non-root access to docker socket and then move traefik to use [non-root perms](https://docs.docker.com/engine/security/rootless/)

if you manage to improve on any of this, especially in a way that might be useful for other people, consider sending a PR :>


# performance

currently **not optimal,** at least when compared to running the python sfx outside of docker... some numbers from my laptop (ryzen4500u/fedora39):

| req/s | https D/L | http D/L | approach |
| -----:| ----------:|:--------:| -------- |
| 5200 | 1294 MiB/s | 5+ GiB/s | [copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py) running on host |
| 4370 | 725 MiB/s | 4+ GiB/s | `docker run copyparty/ac` |
| 2420 | 694 MiB/s | n/a | `copyparty/ac` behind traefik |
| 75 | 694 MiB/s | n/a | traefik and authelia **(you are here)** |

authelia is behaving strangely, handling 340 requests per second for a while, but then it suddenly drops to 75 and stays there...

I'm assuming all of the performance issues are due to a misconfiguration of authelia/traefik/docker on my end, but I don't really know where to start
@@ -0,0 +1,66 @@
# based on https://github.com/authelia/authelia/blob/39763aaed24c4abdecd884b47357a052b235942d/examples/compose/lite/authelia/configuration.yml

# Authelia configuration

# This secret can also be set using the env variables AUTHELIA_JWT_SECRET_FILE
jwt_secret: a_very_important_secret

server:
  address: 'tcp://:9091'

log:
  level: info # debug

totp:
  issuer: authelia.com

authentication_backend:
  file:
    path: /config/users_database.yml

access_control:
  default_policy: deny
  rules:
    # Rules applied to everyone
    - domain: traefik.example.com
      policy: one_factor
    - domain: fs.example.com
      policy: one_factor

session:
  # This secret can also be set using the env variables AUTHELIA_SESSION_SECRET_FILE
  secret: unsecure_session_secret
  cookies:
    - name: authelia_session
      domain: example.com # Should match whatever your root protected domain is
      default_redirection_url: https://fs.example.com
      authelia_url: https://authelia.example.com/
      expiration: 3600 # 1 hour
      inactivity: 300 # 5 minutes

redis:
  host: redis
  port: 6379
  # This secret can also be set using the env variables AUTHELIA_SESSION_REDIS_PASSWORD_FILE
  # password: authelia

regulation:
  max_retries: 3
  find_time: 120
  ban_time: 300

storage:
  encryption_key: you_must_generate_a_random_string_of_more_than_twenty_chars_and_configure_this
  local:
    path: /config/db.sqlite3

notifier:
  disable_startup_check: true
  smtp:
    username: test
    # This secret can also be set using the env variables AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE
    password: password
    host: mail.example.com
    port: 25
    sender: admin@example.com
@@ -0,0 +1,18 @@
# based on https://github.com/authelia/authelia/blob/39763aaed24c4abdecd884b47357a052b235942d/examples/compose/lite/authelia/users_database.yml

# Users Database

# This file can be used if you do not have an LDAP set up.

# List of users
users:
  authelia:
    disabled: false
    displayname: "Authelia User"
    # Password is authelia
    password: "$6$rounds=50000$BpLnfgDsc2WD8F2q$Zis.ixdg9s/UOJYrs56b5QEZFiZECu0qZVNsIYxBaNJ7ucIL.nlxVCT5tqh8KHG8X4tlwCFm5r6NTOZZ5qRFN/"
    email: authelia@authelia.com
    groups:
      - admins
      - dev
      - su
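to replace the demo credentials, the authelia CLI can generate a compatible `$6$` digest; roughly like below (a sketch -- the subcommand names vary between authelia versions, so double-check against yours):

```bash
docker run --rm authelia/authelia:v4.38.0-beta3 authelia crypto hash generate sha2crypt --password 'your-new-password'
```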
docs/examples/docker/idp-authelia-traefik/cpp/copyparty.conf (new file, 82 lines)
@@ -0,0 +1,82 @@
# not actually YAML but let's pretend:
# -*- mode: yaml -*-
# vim: ft=yaml:


# example config for how authelia can be used to replace
# copyparty's built-in authentication/authorization mechanism,
# providing copyparty with HTTP headers through traefik to
# signify who the user is, and what groups they belong to
#
# the filesystem-path that will be shared with copyparty is
# specified in the docker-compose in the parent folder, where
# a real filesystem-path is mapped onto this container's path `/w`,
# meaning `/w` in this config-file is actually `/srv/pub` in the
# outside world (assuming you didn't modify that value)


[global]
  e2dsa  # enable file indexing and filesystem scanning
  e2ts   # enable multimedia indexing
  ansi   # enable colors in log messages
  #q     # disable logging for more performance

  # if we are confident that we got the docker-network config correct
  # (meaning copyparty is only accessible through traefik, and
  # traefik makes sure that all requests go through authelia),
  # then accept X-Forwarded-For and IdP headers from any private IP:
  xff-src: lan

  # enable IdP support by expecting username/groupname in
  # http-headers provided by the reverse-proxy; header "X-IdP-User"
  # will contain the username, "X-IdP-Group" the groupname
  idp-h-usr: remote-user
  idp-h-grp: remote-groups

  # DEBUG: show all incoming request headers from traefik/authelia
  #ihead: *


[/]            # create a volume at "/" (the webroot), which will
  /w           # share /w (the docker data volume, which is mapped to /srv/pub on the host in docker-compose.yml)
  accs:
    rw: *      # everyone gets read-access, but
    rwmda: @su # the group "su" gets read-write-move-delete-admin


[/u/${u}]      # each user gets their own home-folder at /u/username
  /w/u/${u}    # which will be "u/username" in the docker data volume
  accs:
    r: *              # read-access for anyone, and
    rwmda: ${u}, @su  # read-write-move-delete-admin for that username + the "su" group


[/u/${u}/priv]    # each user also gets a private area at /u/username/priv
  /w/u/${u}/priv  # stored at DATAVOLUME/u/username/priv
  accs:
    rwmda: ${u}, @su  # read-write-move-delete-admin for that username + the "su" group


[/lounge/${g}]    # each group gets their own shared volume
  /w/lounge/${g}  # stored at DATAVOLUME/lounge/groupname
  accs:
    r: *               # read-access for anyone, and
    rwmda: @${g}, @su  # read-write-move-delete-admin for that group + the "su" group


[/lounge/${g}/priv]    # and a private area for each group too
  /w/lounge/${g}/priv  # stored at DATAVOLUME/lounge/groupname/priv
  accs:
    rwmda: @${g}, @su  # read-write-move-delete-admin for that group + the "su" group


# and create some strategic volumes to prevent anyone from gaining
# unintended access to priv folders if the users/groups db is lost
[/u]
  /w/u
  accs:
    rwmda: @su
[/lounge]
  /w/lounge
  accs:
    rwmda: @su
docs/examples/docker/idp-authelia-traefik/docker-compose.yml (new file, 99 lines)
@@ -0,0 +1,99 @@
version: '3.3'

networks:
  net:
    driver: bridge

services:
  copyparty:
    image: copyparty/ac
    container_name: idp_copyparty
    user: "1000:1000" # should match the user/group of your fileshare volumes
    volumes:
      - ./cpp/:/cfg:z  # the copyparty config folder
      - /srv/pub:/w:z  # this is where we declare that "/srv/pub" is the filesystem-path on the server that shall be shared online
    networks:
      - net
    expose:
      - 3923
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.copyparty.rule=Host(`fs.example.com`)'
      - 'traefik.http.routers.copyparty.entrypoints=https'
      - 'traefik.http.routers.copyparty.tls=true'
      - 'traefik.http.routers.copyparty.middlewares=authelia@docker'
    stop_grace_period: 15s # thumbnailer is allowed to continue finishing up for 10s after the shutdown signal

  authelia:
    image: authelia/authelia:v4.38.0-beta3 # the config files in the authelia folder use the new syntax
    container_name: idp_authelia
    volumes:
      - ./authelia:/config:z
    networks:
      - net
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.authelia.rule=Host(`authelia.example.com`)'
      - 'traefik.http.routers.authelia.entrypoints=https'
      - 'traefik.http.routers.authelia.tls=true'
      #- 'traefik.http.routers.authelia.tls.certresolver=letsencrypt' # uncomment this to enable automatic certificate signing (1/2)
      - 'traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/authz/forward-auth?authelia_url=https://authelia.example.com'
      - 'traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true'
      - 'traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email'
    expose:
      - 9091
    restart: unless-stopped
    healthcheck:
      disable: true
    environment:
      - TZ=Etc/UTC

  redis:
    image: redis:7.2.4-alpine3.19
    container_name: idp_redis
    volumes:
      - ./redis:/data:z
    networks:
      - net
    expose:
      - 6379
    restart: unless-stopped
    environment:
      - TZ=Etc/UTC

  traefik:
    image: traefik:2.11.0
    container_name: idp_traefik
    volumes:
      - ./traefik:/etc/traefik:z
      - /var/run/docker.sock:/var/run/docker.sock # WARNING: this gives traefik full root-access to the host OS, but is recommended/required(?) by traefik
    security_opt:
      - label:disable # disable selinux because it (rightly) blocks access to docker.sock
    networks:
      - net
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.api.rule=Host(`traefik.example.com`)'
      - 'traefik.http.routers.api.entrypoints=https'
      - 'traefik.http.routers.api.service=api@internal'
      - 'traefik.http.routers.api.tls=true'
      #- 'traefik.http.routers.api.tls.certresolver=letsencrypt' # uncomment this to enable automatic certificate signing (2/2)
      - 'traefik.http.routers.api.middlewares=authelia@docker'
    ports:
      - '80:80'
      - '443:443'
    command:
      - '--api'
      - '--providers.docker=true'
      - '--providers.docker.exposedByDefault=false'
      - '--entrypoints.http=true'
      - '--entrypoints.http.address=:80'
      - '--entrypoints.http.http.redirections.entrypoint.to=https'
      - '--entrypoints.http.http.redirections.entrypoint.scheme=https'
      - '--entrypoints.https=true'
      - '--entrypoints.https.address=:443'
      - '--certificatesResolvers.letsencrypt.acme.email=your-email@your-domain.com'
      - '--certificatesResolvers.letsencrypt.acme.storage=/etc/traefik/acme.json'
      - '--certificatesResolvers.letsencrypt.acme.httpChallenge.entryPoint=http'
      - '--log=true'
      - '--log.level=WARNING' # DEBUG
docs/examples/docker/idp-authentik-traefik/README.md (new file, 12 lines)
@@ -0,0 +1,12 @@
> [!WARNING]
> I am unable to guarantee the quality, safety, and security of anything in this folder; it is a combination of examples I found online. Please submit corrections or improvements 🙏

> [!WARNING]
> does not work yet... if you are able to fix this, please do!

this is based on:
* https://goauthentik.io/docker-compose.yml
* https://goauthentik.io/docs/providers/proxy/server_traefik

incomplete list of modifications made:
* support for running with podman as root on fedora (`:z` volumes, `label:disable`)
@@ -0,0 +1,88 @@
# https://goauthentik.io/docker-compose.yml
---
version: "3.4"

services:
  postgresql:
    image: docker.io/library/postgres:12-alpine
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - database:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${PG_PASS:?database password required}
      POSTGRES_USER: ${PG_USER:-authentik}
      POSTGRES_DB: ${PG_DB:-authentik}
    env_file:
      - .env
  redis:
    image: docker.io/library/redis:alpine
    command: --save 60 1 --loglevel warning
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
    volumes:
      - redis:/data
  server:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.2.1}
    restart: unless-stopped
    command: server
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    volumes:
      - ./media:/media
      - ./custom-templates:/templates
    env_file:
      - .env
    ports:
      - "${COMPOSE_PORT_HTTP:-9000}:9000"
      - "${COMPOSE_PORT_HTTPS:-9443}:9443"
    depends_on:
      - postgresql
      - redis
  worker:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.2.1}
    restart: unless-stopped
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    # `user: root` and the docker socket volume are optional.
    # See more for the docker socket integration here:
    # https://goauthentik.io/docs/outposts/integrations/docker
    # Removing `user: root` also prevents the worker from fixing the permissions
    # on the mounted folders, so when removing this make sure the folders have the correct UID/GID
    # (1000:1000 by default)
    user: root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./media:/media
      - ./certs:/certs
      - ./custom-templates:/templates
    env_file:
      - .env
    depends_on:
      - postgresql
      - redis

volumes:
  database:
    driver: local
  redis:
    driver: local
@@ -0,0 +1,46 @@
# https://goauthentik.io/docs/providers/proxy/server_traefik
---
version: "3.7"
services:
  traefik:
    image: traefik:v2.2
    container_name: traefik
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    command:
      - "--api"
      - "--providers.docker=true"
      - "--providers.docker.exposedByDefault=false"
      - "--entrypoints.web.address=:80"

  authentik-proxy:
    image: ghcr.io/goauthentik/proxy
    ports:
      - 9000:9000
      - 9443:9443
    environment:
      AUTHENTIK_HOST: https://your-authentik.tld
      AUTHENTIK_INSECURE: "false"
      AUTHENTIK_TOKEN: token-generated-by-authentik
      # Starting with 2021.9, you can optionally set this too
      # when authentik_host for internal communication doesn't match the public URL
      # AUTHENTIK_HOST_BROWSER: https://external-domain.tld
    labels:
      traefik.enable: true
      traefik.port: 9000
      traefik.http.routers.authentik.rule: Host(`app.company`) && PathPrefix(`/outpost.goauthentik.io/`)
      # `authentik-proxy` refers to the service name in the compose file.
      traefik.http.middlewares.authentik.forwardauth.address: http://authentik-proxy:9000/outpost.goauthentik.io/auth/traefik
      traefik.http.middlewares.authentik.forwardauth.trustForwardHeader: true
      traefik.http.middlewares.authentik.forwardauth.authResponseHeaders: X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid,X-authentik-jwt,X-authentik-meta-jwks,X-authentik-meta-outpost,X-authentik-meta-provider,X-authentik-meta-app,X-authentik-meta-version
    restart: unless-stopped

  whoami:
    image: containous/whoami
    labels:
      traefik.enable: true
      traefik.http.routers.whoami.rule: Host(`app.company`)
      traefik.http.routers.whoami.middlewares: authentik@docker
    restart: unless-stopped
@@ -0,0 +1,72 @@
# not actually YAML but let's pretend:
# -*- mode: yaml -*-
# vim: ft=yaml:


# example config for how copyparty can be used with an identity
# provider, replacing the built-in authentication/authorization
# mechanism, and instead expecting the reverse-proxy to provide
# the requester's username (and possibly a group-name, for
# optional group-based access control)
#
# the filesystem-path `/w` is used as the storage location
# because that is the data-volume in the docker containers,
# because a deployment like this (with an IdP) is more commonly
# seen in containerized environments -- but this is not required


[global]
  e2dsa  # enable file indexing and filesystem scanning
  e2ts   # enable multimedia indexing
  ansi   # enable colors in log messages

  # enable IdP support by expecting username/groupname in
  # http-headers provided by the reverse-proxy; header "X-IdP-User"
  # will contain the username, "X-IdP-Group" the groupname
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group


[/]            # create a volume at "/" (the webroot), which will
  /w           # share /w (the docker data volume, which is mapped to /srv/pub on the host in docker-compose.yml)
  accs:
    rw: *      # everyone gets read-access, but
    rwmda: @su # the group "su" gets read-write-move-delete-admin


[/u/${u}]      # each user gets their own home-folder at /u/username
  /w/u/${u}    # which will be "u/username" in the docker data volume
  accs:
    r: *              # read-access for anyone, and
    rwmda: ${u}, @su  # read-write-move-delete-admin for that username + the "su" group


[/u/${u}/priv]    # each user also gets a private area at /u/username/priv
  /w/u/${u}/priv  # stored at DATAVOLUME/u/username/priv
  accs:
    rwmda: ${u}, @su  # read-write-move-delete-admin for that username + the "su" group


[/lounge/${g}]    # each group gets their own shared volume
  /w/lounge/${g}  # stored at DATAVOLUME/lounge/groupname
  accs:
    r: *               # read-access for anyone, and
    rwmda: @${g}, @su  # read-write-move-delete-admin for that group + the "su" group


[/lounge/${g}/priv]    # and a private area for each group too
  /w/lounge/${g}/priv  # stored at DATAVOLUME/lounge/groupname/priv
  accs:
    rwmda: @${g}, @su  # read-write-move-delete-admin for that group + the "su" group


# and create some strategic volumes to prevent anyone from gaining
# unintended access to priv folders if the users/groups db is lost
[/u]
  /w/u
  accs:
    rwmda: @su
[/lounge]
  /w/lounge
  accs:
    rwmda: @su
docs/examples/docker/idp-authentik-traefik/docker-compose.yml (new file, 131 lines)
@@ -0,0 +1,131 @@
version: "3.4"
|
||||
|
||||
volumes:
|
||||
database:
|
||||
driver: local
|
||||
redis:
|
||||
driver: local
|
||||
|
||||
services:
|
||||
copyparty:
|
||||
image: copyparty/ac
|
||||
container_name: idp_copyparty
|
||||
restart: unless-stopped
|
||||
user: "1000:1000" # should match the user/group of your fileshare volumes
|
||||
volumes:
|
||||
- ./cpp/:/cfg:z # the copyparty config folder
|
||||
- /srv/pub:/w:z # this is where we declare that "/srv/pub" is the filesystem-path on the server that shall be shared online
|
||||
ports:
|
||||
- 3923
|
||||
labels:
|
||||
- 'traefik.enable=true'
|
||||
- 'traefik.http.routers.fs.rule=Host(`fs.example.com`)'
|
||||
- 'traefik.http.routers.fs.entrypoints=http'
|
||||
#- 'traefik.http.routers.fs.middlewares=authelia@docker' # TODO: ???
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "wget --spider -q 127.0.0.1:3923/?reset"]
|
||||
interval: 1m
|
||||
timeout: 2s
|
||||
retries: 5
|
||||
start_period: 15s
|
||||
stop_grace_period: 15s # thumbnailer is allowed to continue finishing up for 10s after the shutdown signal
|
||||
|
||||
traefik:
|
||||
image: traefik:v2.11
|
||||
container_name: traefik
|
||||
volumes:
|
||||
- /var/run/docker.sock:/var/run/docker.sock # WARNING: this gives traefik full root-access to the host OS, but is recommended/required(?) by traefik
|
||||
security_opt:
|
||||
- label:disable # disable selinux because it (rightly) blocks access to docker.sock
|
||||
ports:
|
||||
- 80:80
|
||||
command:
|
||||
- '--api'
|
||||
- '--providers.docker=true'
|
||||
- '--providers.docker.exposedByDefault=false'
|
||||
- '--entrypoints.web.address=:80'
|
||||
|
||||
postgresql:
|
||||
image: docker.io/library/postgres:12-alpine
|
||||
container_name: idp_postgresql
|
||||
restart: unless-stopped
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
|
||||
start_period: 20s
|
||||
interval: 30s
|
||||
retries: 5
|
||||
timeout: 5s
|
||||
volumes:
|
||||
- database:/var/lib/postgresql/data:z
|
||||
environment:
|
||||
POSTGRES_PASSWORD: postgrass
|
||||
POSTGRES_USER: authentik
|
||||
POSTGRES_DB: authentik
|
||||
env_file:
|
||||
- .env
|
||||
|
||||
redis:
|
||||
image: docker.io/library/redis:alpine
|
||||
command: --save 60 1 --loglevel warning
|
||||
container_name: idp_redis
|
||||
restart: unless-stopped
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
|
||||
start_period: 20s
|
||||
interval: 30s
|
||||
retries: 5
|
||||
timeout: 3s
|
||||
volumes:
|
||||
- redis:/data:z
|
||||
|
||||
authentik_server:
|
||||
image: ghcr.io/goauthentik/server:2024.2.1
|
||||
container_name: idp_authentik_server
|
||||
restart: unless-stopped
|
||||
command: server
|
||||
environment:
|
||||
AUTHENTIK_REDIS__HOST: redis
|
||||
AUTHENTIK_POSTGRESQL__HOST: postgresql
|
||||
AUTHENTIK_POSTGRESQL__USER: authentik
|
||||
AUTHENTIK_POSTGRESQL__NAME: authentik
|
||||
AUTHENTIK_POSTGRESQL__PASSWORD: postgrass
|
||||
volumes:
|
||||
- ./media:/media:z
|
||||
- ./custom-templates:/templates:z
|
||||
env_file:
|
||||
- .env
|
||||
ports:
|
||||
- 9000
|
||||
- 9443
|
||||
depends_on:
|
||||
- postgresql
|
||||
- redis
|
||||
|
||||
authentik_worker:
|
||||
image: ghcr.io/goauthentik/server:2024.2.1
|
||||
container_name: idp_authentik_worker
|
||||
restart: unless-stopped
|
||||
command: worker
|
||||
environment:
|
||||
AUTHENTIK_REDIS__HOST: redis
|
||||
AUTHENTIK_POSTGRESQL__HOST: postgresql
|
||||
AUTHENTIK_POSTGRESQL__USER: authentik
|
||||
AUTHENTIK_POSTGRESQL__NAME: authentik
|
||||
AUTHENTIK_POSTGRESQL__PASSWORD: postgrass
|
||||
# `user: root` and the docker socket volume are optional.
|
||||
# See more for the docker socket integration here:
|
||||
# https://goauthentik.io/docs/outposts/integrations/docker
|
||||
# Removing `user: root` also prevents the worker from fixing the permissions
|
||||
# on the mounted folders, so when removing this make sure the folders have the correct UID/GID
|
||||
# (1000:1000 by default)
|
||||
user: root
|
||||
volumes:
|
||||
- /var/run/docker.sock:/var/run/docker.sock
|
||||
- ./media:/media:z
|
||||
- ./certs:/certs:z
|
||||
- ./custom-templates:/templates:z
|
||||
env_file:
|
||||
- .env
|
||||
depends_on:
|
||||
- postgresql
|
||||
- redis
|
||||
@@ -26,6 +26,24 @@
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

  # but copyparty will refuse to accept those headers unless you
  # tell it the LAN IP of the reverse-proxy to expect them from,
  # preventing malicious users from pretending to be the proxy;
  # pay attention to the warning message in the logs and then
  # adjust the following config option accordingly:
  xff-src: 192.168.0.0/16

  # or just allow all LAN / private IPs (probably good enough):
  xff-src: lan

  # an additional, optional security measure is to expect a
  # secret header name from the reverse-proxy; you can enable
  # this feature by setting the header-name to expect here:
  #idp-h-key: shangala-bangala

  # convenient debug option:
  # log all incoming request headers from the proxy
  #ihead: *

[/]    # create a volume at "/" (the webroot), which will
  /w   # share /w (the docker data volume)
docs/idp.md (new file, 22 lines)
@@ -0,0 +1,22 @@
there is a [docker-compose example](./examples/docker/idp-authelia-traefik) which is hopefully a good starting point (meaning you can skip the steps below) -- but if you want to set this up from scratch yourself (or learn about how it works), keep reading:

to configure IdP from scratch, you must place copyparty behind a reverse-proxy which sends all requests through a middleware (the IdP / identity-provider service) which will inject a set of headers into the requests, telling copyparty who the user is

in the copyparty `[global]` config, specify which headers to read client info from; username is required (`idp-h-usr: X-Authooley-User`), group(s) are optional (`idp-h-grp: X-Authooley-Groups`)

* it is also required to specify the subnet that legit requests will be coming from, for example `--xff-src=10.88.0.0/24` to allow 10.88.x.x (or `--xff-src=lan` for all private IPs)
* it is recommended to configure the reverse-proxy to include a secret header as proof that the other headers are also legit (and not smuggled in by a malicious client), telling copyparty the header name to expect with `idp-h-key: shangala-bangala`
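put together, a minimal `[global]` sketch reusing the example header names from above (`shangala-bangala` is just a placeholder secret):

```yaml
[global]
  idp-h-usr: X-Authooley-User
  idp-h-grp: X-Authooley-Groups
  idp-h-key: shangala-bangala  # optional proof-header from the proxy
  xff-src: 10.88.0.0/24        # or "lan" to trust all private IPs
```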
# important notes

## IdP volumes are forgotten on shutdown

IdP volumes, meaning dynamically-created volumes, meaning volumes that contain `${u}` or `${g}` in their URL, will be forgotten during a server restart and then "revived" when the volume's owner sends their first request after the restart

until each IdP volume is revived, it will inherit the permissions of its parent volume (if any)

this means that, if an IdP volume is located inside a folder that is readable by anyone, then each of those IdP volumes will **also become readable by anyone** until the volume is revived

and likewise -- if the IdP volume is inside a folder that is only accessible by certain users, but the IdP volume is configured to allow access from unauthenticated users, then the contents of the volume will NOT be accessible until it is revived

until this limitation is fixed (if ever), it is recommended to place IdP volumes inside an appropriate parent volume, so they can inherit acceptable permissions until their revival; see the "strategic volumes" at the bottom of [./examples/docker/idp/copyparty.conf](./examples/docker/idp/copyparty.conf)
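the "strategic volumes" trick boils down to wrapping the dynamic volumes in a locked-down parent, roughly like this (a sketch; matches the conf examples referenced above):

```yaml
[/u]             # parent of all the /u/${u} home-folders
  /w/u
  accs:
    rwmda: @su   # nothing under /u is world-readable while the IdP volumes are still asleep
```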
@@ -1,52 +0,0 @@
pyoxidizer doesn't crosscompile yet so need to build in a windows vm,
luckily possible to do mostly airgapped (https-proxy for crates)

none of this is version-specific but doing absolute links just in case
(only exception is py3.8 which is the final win7 ver)

# deps (download on linux host):
https://www.python.org/ftp/python/3.10.7/python-3.10.7-amd64.exe
https://github.com/indygreg/PyOxidizer/releases/download/pyoxidizer%2F0.22.0/pyoxidizer-0.22.0-x86_64-pc-windows-msvc.zip
https://github.com/upx/upx/releases/download/v3.96/upx-3.96-win64.zip
https://static.rust-lang.org/dist/rust-1.61.0-x86_64-pc-windows-msvc.msi
https://github.com/indygreg/python-build-standalone/releases/download/20220528/cpython-3.8.13%2B20220528-i686-pc-windows-msvc-static-noopt-full.tar.zst

# need cl.exe, prefer 2017 -- download on linux host:
https://visualstudio.microsoft.com/downloads/?q=build+tools
https://docs.microsoft.com/en-us/visualstudio/releases/2022/release-history#release-dates-and-build-numbers
https://aka.ms/vs/15/release/vs_buildtools.exe # 2017
https://aka.ms/vs/16/release/vs_buildtools.exe # 2019
https://aka.ms/vs/17/release/vs_buildtools.exe # 2022
https://docs.microsoft.com/en-us/visualstudio/install/workload-component-id-vs-build-tools?view=vs-2017

# use disposable w10 vm to prep offline installer; xfer to linux host with firefox to copyparty
vs_buildtools-2017.exe --add Microsoft.VisualStudio.Workload.MSBuildTools --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Component.Windows10SDK.17763 --layout c:\msbt2017 --lang en-us

# need two proxies on host; s5s or ssh for msys2(socks5), and tinyproxy for rust(http)
UP=- python3 socks5server.py 192.168.123.1 4321
ssh -vND 192.168.123.1:4321 localhost
git clone https://github.com/tinyproxy/tinyproxy.git
./autogen.sh
./configure --prefix=/home/ed/pe/tinyproxy
make -j24 install
printf '%s\n' >cfg "Port 4380" "Listen 192.168.123.1"
./tinyproxy -dccfg

https://github.com/msys2/msys2-installer/releases/download/2022-09-04/msys2-x86_64-20220904.exe
export all_proxy=socks5h://192.168.123.1:4321
# if chat dies after auth (2 messages) it probably failed dns, note the h in socks5h to tunnel dns
pacman -Syuu
pacman -S git patch mingw64/mingw-w64-x86_64-zopfli
cd /c && curl -k https://192.168.123.1:3923/ro/ox/msbt2017/?tar | tar -xv

first install certs from msbt/certificates then admin-cmd `vs_buildtools.exe --noweb`,
default selection (vc++2017-v15.9-v14.16, vc++redist, vc++bt-core) += win10sdk (for io.h)

install rust without documentation, python 3.10, put upx and pyoxidizer into ~/bin,
[cmd.exe] python -m pip install --user -U wheel-0.37.1.tar.gz strip-hints-0.1.10.tar.gz
p=192.168.123.1:4380; export https_proxy=$p; export http_proxy=$p

# and with all of the one-time-setup out of the way,
mkdir /c/d; cd /c/d && curl -k https://192.168.123.1:3923/cpp/gb?pw=wark > gb && git clone gb copyparty
cd /c/d/copyparty/ && curl -k https://192.168.123.1:3923/cpp/patch?pw=wark | patch -p1
cd /c/d/copyparty/scripts && CARGO_HTTP_CHECK_REVOKE=false PATH=/c/Users/$USER/AppData/Local/Programs/Python/Python310:/c/Users/$USER/bin:"$(cygpath "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.16.27023\bin\Hostx86\x86"):$PATH" ./make-sfx.sh ox ultra
docs/rice/README.md (new file, 49 lines)
@@ -0,0 +1,49 @@
# custom fonts

to change the fonts in the web-UI, first save the following text (the default font-config) to a new css file, for example named `customfonts.css` in your webroot:

```css
:root {
    --font-main: sans-serif;
    --font-serif: serif;
    --font-mono: 'scp';
}
```

add this to your copyparty config so the css file gets loaded: `--html-head='<link rel="stylesheet" href="/customfonts.css">'`

alternatively, if you are using a config file instead of commandline args:

```yaml
[global]
  html-head: <link rel="stylesheet" href="/customfonts.css">
```

restart copyparty for the config change to take effect

edit the css file you made and press `ctrl`-`shift`-`R` in the browser to see the changes as you go (no need to restart copyparty for each change)

if you are introducing a new ttf/woff font, don't forget to declare the font itself in the css file; here's one of the default fonts from `ui.css`:

```css
@font-face {
    font-family: 'scp';
    font-display: swap;
    src: local('Source Code Pro Regular'), local('SourceCodePro-Regular'), url(deps/scp.woff2) format('woff2');
}
```

and because textboxes don't inherit fonts by default, you can force it like this:

```css
input[type=text], input[type=submit], input[type=button] { font-family: var(--font-main) }
```

and if you want to have a monospace font in the fancy markdown editor, do this:

```css
.EasyMDEContainer .CodeMirror { font-family: var(--font-mono) }
```

NB: `<textarea id="mt">` and `<div id="mtr">` in the regular markdown editor must have the same font; none of the suggestions above will cause any issues but keep it in mind if you're getting creative
@@ -248,9 +248,9 @@ symbol legend,
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - |
| accounts | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
| per-account chroot | | | | | | | | | | | | █ |
-| single-sign-on | | | | █ | █ | | | | • | | | |
-| token auth | | | | █ | █ | | | █ | | | | |
-| 2fa | | | | █ | █ | | | | | | | █ |
+| single-sign-on | ╱ | | | █ | █ | | | | • | | | |
+| token auth | ╱ | | | █ | █ | | | █ | | | | |
+| 2fa | ╱ | | | █ | █ | | | | | | | █ |
| per-volume permissions | █ | █ | █ | █ | █ | █ | █ | | █ | █ | ╱ | █ |
| per-folder permissions | ╱ | | | █ | █ | | █ | | █ | █ | ╱ | █ |
| per-file permissions | | | | █ | █ | | █ | | █ | | | |
@@ -289,6 +289,7 @@ symbol legend,
* `curl-friendly ls` = returns a [sortable plaintext folder listing](https://user-images.githubusercontent.com/241032/215322619-ea5fd606-3654-40ad-94ee-2bc058647bb2.png) when curled
* `curl-friendly upload` = uploading with curl is just `curl -T some.bin http://.../`
* `a`/copyparty remarks:
  * single-sign-on, token-auth, and 2fa are *possible* through authelia/authentik or similar, but nobody's made an example yet
  * one-way folder sync from local to server can be done efficiently with [u2c.py](https://github.com/9001/copyparty/tree/hovudstraum/bin#u2cpy), or with webdav and conventional rsync
  * can hot-reload config files (with just a few exceptions)
  * can set per-folder permissions if that folder is made into a separate volume, so there is configuration overhead
docs/xff.md (new file, 45 lines)
@@ -0,0 +1,45 @@
when running behind a reverse-proxy, or a WAF, or another protection service such as cloudflare:
|
||||
|
||||
if you (and maybe everybody else) keep getting a message that says `thank you for playing`, then you've gotten banned for malicious traffic. This ban applies to the IP-address that copyparty *thinks* identifies the shady client -- so, depending on your setup, you might have to tell copyparty where to find the correct IP
|
||||
|
||||
knowing the correct IP is also crucial for some other features, such as the unpost feature which lets you delete your own recent uploads -- but if everybody has the same IP, well...
|
||||
|
||||
----
|
||||
|
||||
for most common setups, there should be a helpful message in the server-log explaining what to do, something like `--xff-src=10.88.0.0/16` or `--xff-src=lan` to accept the `X-Forwarded-For` header from your reverse-proxy with a LAN IP of `10.88.x.y`
|
||||
|
||||
if you are behind cloudflare, it is recommended to also set `--xff-hdr=cf-connecting-ip` to use a more trustworthy source of info, but then it's also very important to ensure your reverse-proxy does not accept connections from anything BUT cloudflare; you can do this by generating an ip-address allowlist and reject all other connections
|
||||
|
||||
* if you are using nginx as your reverse-proxy, see the [example nginx config](https://github.com/9001/copyparty/blob/hovudstraum/contrib/nginx/copyparty.conf) on how the cloudflare allowlist can be done

----

the server-log will give recommendations in the form of commandline arguments;

to do the same thing using config files, take the options that are suggested in the server-log and put them into the `[global]` section in your `copyparty.conf` like so:

```yaml
[global]
  xff-src: lan
  xff-hdr: cf-connecting-ip
```
----

# but if you just want to get it working:

...and don't care about security, you can disable the bot-detectors, either by specifying the commandline-args `--ban-404=no --ban-403=no --ban-422=no --ban-url=no --ban-pw=no`

or by adding these lines inside the `[global]` section in your `copyparty.conf`:

```yaml
[global]
  ban-404: no
  ban-403: no
  ban-422: no
  ban-url: no
  ban-pw: no
```

but remember that this will make other features insecure as well, such as unpost

@@ -1,48 +0,0 @@
# builds win7-i386 exe on win10-ltsc-1809(17763.316)
# see docs/pyoxidizer.txt

def make_exe():
    dist = default_python_distribution(flavor="standalone_static", python_version="3.8")
    policy = dist.make_python_packaging_policy()
    policy.allow_files = True
    policy.allow_in_memory_shared_library_loading = True
    #policy.bytecode_optimize_level_zero = True
    #policy.include_distribution_sources = False  # error instantiating embedded Python interpreter: during initializing Python main: init_fs_encoding: failed to get the Python codec of the filesystem encoding
    policy.include_distribution_resources = False
    policy.include_non_distribution_sources = False
    policy.include_test = False
    python_config = dist.make_python_interpreter_config()
    #python_config.module_search_paths = ["$ORIGIN/lib"]

    python_config.run_module = "copyparty"
    exe = dist.to_python_executable(
        name="copyparty",
        config=python_config,
        packaging_policy=policy,
    )
    exe.windows_runtime_dlls_mode = "never"
    exe.windows_subsystem = "console"
    exe.add_python_resources(exe.read_package_root(
        path="sfx",
        packages=[
            "copyparty",
            "jinja2",
            "markupsafe",
            "pyftpdlib",
            "python-magic",
        ]
    ))
    return exe

def make_embedded_resources(exe):
    return exe.to_embedded_resources()

def make_install(exe):
    files = FileManifest()
    files.add_python_resource("copyparty", exe)
    return files

register_target("exe", make_exe)
register_target("resources", make_embedded_resources, depends=["exe"], default_build_script=True)
register_target("install", make_install, depends=["exe"], default=True)
resolve_targets()

@@ -49,7 +49,7 @@ thumbnails2 = ["pyvips"]
audiotags = ["mutagen"]
ftpd = ["pyftpdlib"]
ftps = ["pyftpdlib", "pyopenssl"]
tftpd = ["partftpy>=0.3.0"]
tftpd = ["partftpy>=0.3.1"]
pwhash = ["argon2-cffi"]

[project.scripts]

@@ -3,7 +3,7 @@ WORKDIR /z
ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
    ver_hashwasm=4.10.0 \
    ver_marked=4.3.0 \
    ver_dompf=3.0.8 \
    ver_dompf=3.0.11 \
    ver_mde=2.18.0 \
    ver_codemirror=5.65.16 \
    ver_fontawesome=5.13.0 \

@@ -24,7 +24,9 @@ ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
# the scp url is regular latin from https://fonts.googleapis.com/css2?family=Source+Code+Pro&display=swap
RUN mkdir -p /z/dist/no-pk \
    && wget https://fonts.gstatic.com/s/sourcecodepro/v11/HI_SiYsKILxRpg3hIP6sJ7fM7PqlPevW.woff2 -O scp.woff2 \
    && apk add cmake make g++ git bash npm patch wget tar pigz brotli gzip unzip python3 python3-dev brotli py3-brotli \
    && apk add \
        bash brotli cmake make g++ git gzip lame npm patch pigz \
        python3 python3-dev py3-brotli sox tar unzip wget \
    && rm -f /usr/lib/python3*/EXTERNALLY-MANAGED \
    && wget https://github.com/openpgpjs/asmcrypto.js/archive/$ver_asmcrypto.tar.gz -O asmcrypto.tgz \
    && wget https://github.com/markedjs/marked/archive/v$ver_marked.tar.gz -O marked.tgz \

@@ -58,6 +60,11 @@ RUN mkdir -p /z/dist/no-pk \
    && tar -xf zopfli.tgz


COPY busy-mp3.sh /z/
RUN /z/busy-mp3.sh \
    && mv -v /dev/shm/busy.mp3.gz /z/dist


# build fonttools (which needs zopfli)
RUN tar -xf zopfli.tgz \
    && cd zopfli* \

@@ -143,9 +150,8 @@ RUN ./genprism.sh $ver_prism


# compress
COPY brotli.makefile zopfli.makefile /z/dist/
COPY zopfli.makefile /z/dist/
RUN cd /z/dist \
    && make -j$(nproc) -f brotli.makefile \
    && make -j$(nproc) -f zopfli.makefile \
    && rm *.makefile \
    && mv no-pk/* . \

@@ -1,4 +0,0 @@
all: $(addsuffix .br, $(wildcard easymde*))

%.br: %
	brotli -jZ $<

scripts/deps-docker/busy-mp3.sh (new executable file, 61 lines)
@@ -0,0 +1,61 @@
#!/bin/bash
set -e

cat >/dev/null <<EOF
a frame is 1152 samples
1sec @ 48000 = 41.66 frames
11 frames = 12672 samples = 0.264 sec
22 frames = 25344 samples = 0.528 sec
EOF

fast=1
fast=

echo
mkdir -p /dev/shm/$1
cd /dev/shm/$1
find -maxdepth 1 -type f -iname 'a.*.mp3*' -delete
min=99999999

for freq in 425; do # {400..500}
for vol in 0; do # {10..30}
for kbps in 32; do
for fdur in 1124; do # {800..1200}
for fdu2 in 1042; do # {800..1200}
for ftyp in h; do # q h t l p
for ofs1 in 9214; do # {0..32768}
for ofs2 in 0; do # {0..4096}
for ofs3 in 0; do # {0..4096}
for nores in --nores; do # '' --nores

f=a.b$kbps$nores-f$freq-v$vol-$ftyp$fdur-$fdu2-o$ofs1-$ofs2-$ofs3
sox -r48000 -Dn -r48000 -b16 -c2 -t raw s1.pcm synth 25344s sin $freq vol 0.$vol fade $ftyp ${fdur}s 25344s ${fdu2}s
sox -r48000 -Dn -r48000 -b16 -c2 -t raw s0.pcm synth 12672s sin $freq vol 0
tail -c +$ofs1 s0.pcm >s0a.pcm
tail -c +$ofs2 s0.pcm >s0b.pcm
tail -c +$ofs3 s0.pcm >s0c.pcm
cat s{0a,1,0,0b,1,0c}.pcm > a.pcm
lame --silent -r -s 48 --bitwidth 16 --signed a.pcm -m j --resample 48 -b $kbps -q 0 $nores $f.mp3
if [ $fast ]
then gzip -c9 <$f.mp3 >$f.mp3.gz
else pigz -c11 -I1 <$f.mp3 >$f.mp3.gz
fi
sz=$(wc -c <$f.mp3.gz)
printf '\033[A%d %s\033[K\n' $sz $f
[ $sz -le $((min+10)) ] && echo
[ $sz -le $min ] && echo && min=$sz

done;done;done;done;done;done;done;done;done;done
true

f=a.b32--nores-f425-v0-h1124-1042-o9214-0-0.mp3
[ $fast ] &&
pigz -c11 -I1 <$f >busy.mp3.gz ||
mv $f.gz busy.mp3.gz

sz=$(wc -c <busy.mp3.gz)
[ "$sz" -eq 106 ] &&
echo busy.mp3 built successfully ||
echo WARNING: unexpected size of busy.mp3

find -maxdepth 1 -type f -iname 'a.*.mp3*' -delete

@@ -1,4 +1,4 @@
FROM fedora:38
FROM fedora:39
WORKDIR /z
LABEL org.opencontainers.image.url="https://github.com/9001/copyparty" \
    org.opencontainers.image.source="https://github.com/9001/copyparty/tree/hovudstraum/scripts/docker" \

@@ -21,7 +21,7 @@ RUN dnf install -y \
    vips vips-jxl vips-poppler vips-magick \
    python3-numpy fftw libsndfile \
    gcc gcc-c++ make cmake patchelf jq \
    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools \
    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools python3-wheel \
    vamp-plugin-sdk qm-vamp-plugins \
    vamp-plugin-sdk-devel vamp-plugin-sdk-static \
    && rm -f /usr/lib/python3*/EXTERNALLY-MANAGED \

@@ -29,7 +29,7 @@ RUN dnf install -y \
    && bash install-deps.sh \
    && dnf erase -y \
    gcc gcc-c++ make cmake patchelf jq \
    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools \
    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools python3-wheel \
    vamp-plugin-sdk-devel vamp-plugin-sdk-static \
    && dnf clean all \
    && find /usr/ -name __pycache__ | xargs rm -rf \

@@ -33,6 +33,8 @@ the recommended way to configure copyparty inside a container is to mount a fold
* but you can also provide arguments to the docker command if you prefer that
* config files must be named `something.conf` to get picked up

also see [docker-specific recommendations](#docker-specific-recommendations)


## editions

@@ -88,6 +90,26 @@ the following advice is best-effort and not guaranteed to be entirely correct



# docker-specific recommendations

* copyparty will generally create a `.hist` folder at the top of each volume, which contains the filesystem index, thumbnails and such. For performance reasons, but also just to keep things tidy, it might be convenient to store these inside the config folder instead. Add the line `hist: /cfg/hists/` inside the `[global]` section of your `copyparty.conf` to do this
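in config-file form, that is just (a minimal sketch):

```yaml
[global]
  hist: /cfg/hists/  # keep each volume's index/thumbnails inside the config folder
```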

## enabling the ftp server

...is tricky because ftp is a weird protocol and docker is making it worse 🎉

add the following three config entries into the `[global]` section of your `copyparty.conf` (see the combined sketch below):

* `ftp: 3921` to enable the service, listening for connections on port 3921

* `ftp-nat: 127.0.0.1` but replace `127.0.0.1` with the actual external IP of your server; the clients will only be able to connect to this IP, even if the server has multiple IPs

* `ftp-pr: 12000-12099` to restrict the [passive-mode](http://slacksite.com/other/ftp.html#passive) port selection range; this allows up to 100 simultaneous file transfers

then finally update your docker config so that the port-range you specified (12000-12099) is exposed to the internet
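taken together, the `[global]` section would look something like this (a minimal sketch; swap in your server's real external IP):

```yaml
[global]
  ftp: 3921            # enable the ftp service on port 3921
  ftp-nat: 127.0.0.1   # the external IP that clients will connect to
  ftp-pr: 12000-12099  # passive-mode port range (100 ports)
```

and in the docker invocation, publishing the ports could look like `-p 3921:3921 -p 12000-12099:12000-12099`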


# build the images yourself

basically `./make.sh hclean pull img push` but see [devnotes.md](./devnotes.md)

@@ -16,8 +16,6 @@ help() { exec cat <<'EOF'
# `re` does a repack of an sfx which you already executed once
#   (grabs files from the sfx-created tempdir), overrides `clean`
#
# `ox` builds a pyoxidizer exe instead of py
#
# `gz` creates a gzip-compressed python sfx instead of bzip2
#
# `lang` limits which languages/translations to include,

@@ -37,7 +35,7 @@ help() { exec cat <<'EOF'
# _____________________________________________________________________
# web features:
#
# `no-cm` saves ~82k by removing easymde/codemirror
# `no-cm` saves ~89k by removing easymde/codemirror
#   (the fancy markdown editor)
#
# `no-hl` saves ~41k by removing syntax hilighting in the text viewer

@@ -111,7 +109,6 @@ while [ ! -z "$1" ]; do
	case $1 in
	clean) clean=1 ; ;;
	re) repack=1 ; ;;
	ox) use_ox=1 ; ;;
	gz) use_gz=1 ; ;;
	gzz) shift;use_gzz=$1;use_gz=1; ;;
	no-ftp) no_ftp=1 ; ;;

@@ -225,9 +222,9 @@ necho() {
	mv pyftpdlib ftp/

	necho collecting partftpy
	f="../build/partftpy-0.3.0.tar.gz"
	f="../build/partftpy-0.3.1.tar.gz"
	[ -e "$f" ] ||
		(url=https://files.pythonhosted.org/packages/06/ce/531978c831c47f79bc72d5bbb3f12757daf1602d1fffad012305f2d270f6/partftpy-0.3.0.tar.gz;
		(url=https://files.pythonhosted.org/packages/37/79/1a1de1d3fdf27ddc9c2d55fec6552e7b8ed115258fedac6120679898b83d/partftpy-0.3.1.tar.gz;
		wget -O$f "$url" || curl -L "$url" >$f)

	tar -zxf $f

@@ -368,7 +365,7 @@ git describe --tags >/dev/null 2>/dev/null && {

printf '%s\n' "$git_ver" | grep -qE '^v[0-9\.]+-[0-9]+-g[0-9a-f]+$' && {
	# long format (unreleased commit)
	t_ver="$(printf '%s\n' "$ver" | sed -r 's/\./, /g; s/(.*) (.*)/\1 "\2"/')"
	t_ver="$(printf '%s\n' "$ver" | sed -r 's/[-.]/, /g; s/(.*) (.*)/\1 "\2"/')"
}

[ -z "$t_ver" ] && {

@@ -406,7 +403,7 @@ find -type f -name ._\* | while IFS= read -r f; do cmp <(printf '\x00\x05\x16')

rm -f copyparty/web/deps/*.full.* copyparty/web/dbg-* copyparty/web/Makefile

find copyparty | LC_ALL=C sort | sed -r 's/\.(gz|br)$//;s/$/,/' > have
find copyparty | LC_ALL=C sort | sed -r 's/\.gz$//;s/$/,/' > have
cat have | while IFS= read -r x; do
	grep -qF -- "$x" ../scripts/sfx.ls || {
		echo "unexpected file: $x"

@@ -461,8 +458,8 @@ rm -f ftp/pyftpdlib/{__main__,prefork}.py
	iawk '/^\}/{l=0} !l; /^var Ls =/{l=1;next} o; /^\t["}]/{o=0} /^\t"'"$langs"'"/{o=1;print}' $f
done

[ ! $repack ] && [ ! $use_ox ] && {
	# uncomment; oxidized drops 45 KiB but becomes undebuggable
[ ! $repack ] && {
	# uncomment
	find | grep -E '\.py$' |
	grep -vE '__version__' |
	tr '\n' '\0' |

@@ -570,40 +567,13 @@ nf=$(ls -1 "$zdir"/arc.* 2>/dev/null | wc -l)
}


[ $use_ox ] && {
	tgt=x86_64-pc-windows-msvc
	tgt=i686-pc-windows-msvc # 2M smaller (770k after upx)
	bdir=build/$tgt/release/install/copyparty

	t="res web"
	(printf "\n\n\nBUT WAIT! THERE'S MORE!!\n\n";
	cat ../$bdir/COPYING.txt) >> copyparty/res/COPYING.txt ||
	echo "copying.txt 404 pls rebuild"

	mv ftp/* j2/* .
	rm -rf ftp j2 py2 py37
	(cd copyparty; tar -cvf z.tar $t; rm -rf $t)
	cd ..
	pyoxidizer build --release --target-triple $tgt
	mv $bdir/copyparty.exe dist/
	cp -pv "$(for d in '/c/Program Files (x86)/Microsoft Visual Studio/'*'/BuildTools/VC/Redist/MSVC'; do
		find "$d" -name vcruntime140.dll; done | sort | grep -vE '/x64/|/onecore/' | head -n 1)" dist/
	dist/copyparty.exe --version
	cp -pv dist/copyparty{,.orig}.exe
	[ $ultra ] && a="--best --lzma" || a=-1
	/bin/time -f %es upx $a dist/copyparty.exe >/dev/null
	ls -al dist/copyparty{,.orig}.exe
	exit 0
}


echo gen tarlist
for d in copyparty partftpy magic j2 py2 py37 ftp; do find $d -type f || true; done | # strip_hints
	sed -r 's/(.*)\.(.*)/\2 \1/' | LC_ALL=C sort |
	sed -r 's/([^ ]*) (.*)/\2.\1/' | grep -vE '/list1?$' > list1

for n in {1..50}; do
	(grep -vE '\.(gz|br)$' list1; grep -E '\.(gz|br)$' list1 | (shuf||gshuf) ) >list || true
	(grep -vE '\.gz$' list1; grep -E '\.gz$' list1 | (shuf||gshuf) ) >list || true
	s=$( (sha1sum||shasum) < list | cut -c-16)
	grep -q $s "$zdir/h" 2>/dev/null && continue
	echo $s >> "$zdir/h"

@@ -1,13 +1,10 @@
f117016b1e6a7d7e745db30d3e67f1acf7957c443a0dd301b6c5e10b8368f2aa4db6be9782d2d3f84beadd139bfeef4982e40f21ca5d9065cb794eeb0e473e82 altgraph-0.17.4-py2.py3-none-any.whl
eda6c38fc4d813fee897e969ff9ecc5acc613df755ae63df0392217bbd67408b5c1f6c676f2bf5497b772a3eb4e1a360e1245e1c16ee83f0af555f1ab82c3977 Git-2.39.1-32-bit.exe
17ce52ba50692a9d964f57a23ac163fb74c77fdeb2ca988a6d439ae1fe91955ff43730c073af97a7b3223093ffea3479a996b9b50ee7fba0869247a56f74baa6 pefile-2023.2.7-py3-none-any.whl
f298e34356b5590dde7477d7b3a88ad39c622a2bcf3fcd7c53870ce8384dd510f690af81b8f42e121a22d3968a767d2e07595036b2ed7049c8ef4d112bcf3a61 pyinstaller-5.13.2-py3-none-win32.whl
f23615c522ed58b9a05978ba4c69c06224590f3a6adbd8e89b31838b181a57160739ceff1fc2ba6f4239b8fee46f92ce02910b2debda2710558ed42cff1ce3f1 pyinstaller-6.1.0-py3-none-win_amd64.whl
5747b3b119629c4cf956f0eaa85f29218bb3680d3a4a262fa6e976e56b35067302e153d2c0a001505f2cb642b1f78752567889b3b82e342d6cd29aac8b70e92e pyinstaller_hooks_contrib-2023.10-py2.py3-none-any.whl
126ca016c00256f4ff13c88707ead21b3b98f3c665ae57a5bcbb80c8be3004bff36d9c7f9a1cc9d20551019708f2b195154f302d80a1e5a2026d6d0fe9f3d5f4 pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
749a473646c6d4c7939989649733d4c7699fd1c359c27046bf5bc9c070d1a4b8b986bbc65f60d7da725baf16dbfdd75a4c2f5bb8335f2cb5685073f5fee5c2d1 pywin32_ctypes-0.2.2-py3-none-any.whl
6e0d854040baff861e1647d2bece7d090bc793b2bd9819c56105b94090df54881a6a9b43ebd82578cd7c76d47181571b671e60672afd9def389d03c9dae84fcf setuptools-68.2.2-py3-none-any.whl
3c5adf0a36516d284a2ede363051edc1bcc9df925c5a8a9fa2e03cab579dd8d847fdad42f7fd5ba35992e08234c97d2dbfec40a9d12eec61c8dc03758f2bd88e typing_extensions-4.4.0-py3-none-any.whl
8d16a967a0a7872a7575b1005cf66915deacda6ee8611fbb52f42fc3e3beb2f901a5140c942a5d146bd412b92bfa9cbadd82beeba83df6d70930c6dc26608a5b upx-4.1.0-win32.zip
# u2c (win7)
f3390290b896019b2fa169932390e4930d1c03c014e1f6db2405ca2eb1f51f5f5213f725885853805b742997b0edb369787e5c0069d217bc4e8b957f847f58b6 certifi-2023.11.17-py3-none-any.whl
904eb57b13bea80aea861de86987e618665d37fa9ea0856e0125a9ba767a53e5064de0b9c4735435a2ddf4f16f7f7d2c75a682e1de83d9f57922bdca8e29988c charset_normalizer-3.3.0-cp37-cp37m-win32.whl

@@ -18,15 +15,19 @@ b795abb26ba2f04f1afcfb196f21f638014b26c8186f8f488f1c2d91e8e0220962fbd259dbc9c387
91c025f7d94bcdf93df838fab67053165a414fc84e8496f92ecbb910dd55f6b6af5e360bbd051444066880c5a6877e75157bd95e150ead46e5c605930dfc50f2 future-0.18.2.tar.gz
c06b3295d1d0b0f0a6f9a6cd0be861b9b643b4a5ea37857f0bd41c45deaf27bb927b71922dab74e633e43d75d04a9bd0d1c4ad875569740b0f2a98dd2bfa5113 importlib_metadata-5.0.0-py3-none-any.whl
016a8cbd09384f1a9a44cb0e8274df75a8bcb2f3966bb5d708c62145289efaa5db98f75256c97e4f8046735ce2e529fbb076f284a46cdb716e89a75660200ad9 pip-23.2.1-py3-none-any.whl
f298e34356b5590dde7477d7b3a88ad39c622a2bcf3fcd7c53870ce8384dd510f690af81b8f42e121a22d3968a767d2e07595036b2ed7049c8ef4d112bcf3a61 pyinstaller-5.13.2-py3-none-win32.whl
6bb73cc2db795c59c92f2115727f5c173cacc9465af7710db9ff2f2aec2d73130d0992d0f16dcb3fac222dc15c0916562d0813b2337401022020673a4461df3d python-3.7.9-amd64.exe
500747651c87f59f2436c5ab91207b5b657856e43d10083f3ce27efb196a2580fadd199a4209519b409920c562aaaa7dcbdfb83ed2072a43eaccae6e2d056f31 python-3.7.9.exe
2e04acff170ca3bbceeeb18489c687126c951ec0bfd53cccfb389ba8d29a4576c1a9e8f2e5ea26c84dd21bfa2912f4e71fa72c1e2653b71e34afc0e65f1722d4 upx-4.2.2-win32.zip
68e1b618d988be56aaae4e2eb92bc0093627a00441c1074ebe680c41aa98a6161e52733ad0c59888c643a33fe56884e4f935178b2557fbbdd105e92e0d993df6 windows6.1-kb2533623-x64.msu
479a63e14586ab2f2228208116fc149ed8ee7b1e4ff360754f5bda4bf765c61af2e04b5ef123976623d04df4976b7886e0445647269da81436bd0a7b5671d361 windows6.1-kb2533623-x86.msu
ba91ab0518c61eff13e5612d9e6b532940813f6b56e6ed81ea6c7c4d45acee4d98136a383a25067512b8f75538c67c987cf3944bfa0229e3cb677e2fb81e763e zipp-3.10.0-py3-none-any.whl
# win10
00558cca2e0ac813d404252f6e5aeacb50546822ecb5d0570228b8ddd29d94e059fbeb6b90393dee5abcddaca1370aca784dc9b095cbb74e980b3c024767fb24 Jinja2-3.1.2-py3-none-any.whl
7f8f4daa4f4f2dbf24cdd534b2952ee3fba6334eb42b37465ccda3aa1cccc3d6204aa6bfffb8a83bf42ec59c702b5b5247d4c8ee0d4df906334ae53072ef8c4c MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl
e3e2e6bd511dec484dd0292f4c46c55c88a885eabf15413d53edea2dd4a4dbae1571735b9424f78c0cd7f1082476a8259f31fd3f63990f726175470f636df2b3 Jinja2-3.1.3-py3-none-any.whl
e21495f1d473d855103fb4a243095b498ec90eb68776b0f9b48e994990534f7286c0292448e129c507e5d70409f8a05cca58b98d59ce2a815993d0a873dfc480 MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
656015f5cc2c04aa0653ee5609c39a7e5f0b6a58c84fe26b20bd070c52d20b4effb810132f7fb771168483e9fd975cc3302837dd7a1a687ee058b0460c857cc4 packaging-23.2-py3-none-any.whl
424e20dc7263a31d524307bc39ed755a9dd82f538086fff68d98dd97e236c9b00777a8ac2e3853081b532b0e93cef44983e74d0ab274877440e8b7341b19358a pillow-10.2.0-cp311-cp311-win_amd64.whl
e6bdbae1affd161e62fc87407c912462dfe875f535ba9f344d0c4ade13715c947cd3ae832eff60f1bad4161938311d06ac8bc9b52ef203f7b0d9de1409f052a5 python-3.11.8-amd64.exe
1dfe6f66bef5c9d62c9028a964196b902772ec9e19db215f3f41acb8d2d563586988d81b455fa6b895b434e9e1e9d57e4d271d1b1214483bdb3eadffcbba6a33 pillow-10.3.0-cp311-cp311-win_amd64.whl
8760eab271e79256ae3bfb4af8ccc59010cb5d2eccdd74b325d1a533ae25eb127d51c2ec28ff90d449afed32dd7d6af62934fe9caaf1ae1f4d4831e948e912da pyinstaller-6.5.0-py3-none-win_amd64.whl
897a14d5ee5cbc6781a0f48beffc27807a4f789d58c4329d899233f615d168a5dcceddf7f8f2d5bb52212ddcf3eba4664590d9f1fdb25bb5201f44899e03b2f7 python-3.11.9-amd64.exe
729dc52f1a02bc6274d012ce33f534102975a828cba11f6029600ea40e2d23aefeb07bf4ae19f9621d0565dd03eb2635bbb97d45fb692c1f756315e8c86c5255 upx-4.2.2-win64.zip

@@ -17,19 +17,19 @@ uname -s | grep NT-10 && w10=1 || {
fns=(
	altgraph-0.17.4-py2.py3-none-any.whl
	pefile-2023.2.7-py3-none-any.whl
	pyinstaller_hooks_contrib-2023.10-py2.py3-none-any.whl
	pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
	pywin32_ctypes-0.2.2-py3-none-any.whl
	setuptools-68.2.2-py3-none-any.whl
	upx-4.1.0-win32.zip
)
[ $w10 ] && fns+=(
	pyinstaller-6.1.0-py3-none-win_amd64.whl
	Jinja2-3.1.2-py3-none-any.whl
	MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl
	pyinstaller-6.5.0-py3-none-win_amd64.whl
	Jinja2-3.1.3-py3-none-any.whl
	MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
	mutagen-1.47.0-py3-none-any.whl
	packaging-23.2-py3-none-any.whl
	pillow-10.2.0-cp311-cp311-win_amd64.whl
	python-3.11.8-amd64.exe
	pillow-10.3.0-cp311-cp311-win_amd64.whl
	python-3.11.9-amd64.exe
	upx-4.2.2-win64.zip
)
[ $w7 ] && fns+=(
	pyinstaller-5.13.2-py3-none-win32.whl

@@ -38,6 +38,7 @@ fns=(
	idna-3.4-py3-none-any.whl
	requests-2.28.2-py3-none-any.whl
	urllib3-1.26.14-py2.py3-none-any.whl
	upx-4.2.2-win32.zip
)
[ $w7 ] && fns+=(
	future-0.18.2.tar.gz

@@ -81,6 +81,7 @@ copyparty/web/dd/5.png,
copyparty/web/dd/__init__.py,
copyparty/web/deps,
copyparty/web/deps/__init__.py,
copyparty/web/deps/busy.mp3,
copyparty/web/deps/easymde.css,
copyparty/web/deps/easymde.js,
copyparty/web/deps/marked.js,

@@ -234,8 +234,9 @@ def u8(gen):


def yieldfile(fn):
    with open(fn, "rb") as f:
        for block in iter(lambda: f.read(64 * 1024), b""):
    s = 64 * 1024
    with open(fn, "rb", s * 4) as f:
        for block in iter(lambda: f.read(s), b""):
            yield block

setup.py
@@ -141,7 +141,7 @@ args = {
        "audiotags": ["mutagen"],
        "ftpd": ["pyftpdlib"],
        "ftps": ["pyftpdlib", "pyopenssl"],
        "tftpd": ["partftpy>=0.3.0"],
        "tftpd": ["partftpy>=0.3.1"],
        "pwhash": ["argon2-cffi"],
    },
    "entry_points": {"console_scripts": ["copyparty = copyparty.__main__:main"]},

tests/res/idp/1.conf (new file, 17 lines)
@@ -0,0 +1,17 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[accounts]
  ua: pa

[/]
  /
  accs:
    r: ua

[/vb]
  /b

tests/res/idp/2.conf (new file, 29 lines)
@@ -0,0 +1,29 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[accounts]
  ua: pa
  ub: pb
  uc: pc

[groups]
  ga: ua, ub

[/]
  /
  accs:
    r: @ga

[/vb]
  /b
  accs:
    r: @ga, ua

[/vc]
  /c
  accs:
    r: @ga, uc

tests/res/idp/3.conf (new file, 16 lines)
@@ -0,0 +1,16 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[/vu/${u}]
  /
  accs:
    r: ${u}

[/vg/${g}]
  /b
  accs:
    r: @${g}

tests/res/idp/4.conf (new file, 25 lines)
@@ -0,0 +1,25 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[accounts]
  ua: pa
  ub: pb

[/vu/${u}]
  /u-${u}
  accs:
    r: ${u}

[/vg/${g}1]
  /g1-${g}
  accs:
    r: @${g}

[/vg/${g}2]
  /g2-${g}
  accs:
    r: @${g}, ua

tests/res/idp/5.conf (new file, 21 lines)
@@ -0,0 +1,21 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[/ga]
  /ga
  accs:
    r: @ga

[/gb]
  /gb
  accs:
    r: @gb

[/g]
  /g
  accs:
    r: @ga, @gb

tests/res/idp/6.conf (new file, 24 lines)
@@ -0,0 +1,24 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[/get/${u}]
  /get/${u}
  accs:
    g: *
    r: ${u}, @su
    m: @su

[/priv/${u}]
  /priv/${u}
  accs:
    r: ${u}, @su
    m: @su

[/team/${g}/${u}]
  /team/${g}/${u}
  accs:
    r: @${g}

@@ -4,6 +4,9 @@ from __future__ import print_function, unicode_literals

import io
import os
import time
import json
import pprint
import shutil
import tarfile
import tempfile

@@ -49,11 +52,7 @@ class TestHttpCli(unittest.TestCase):
        with open(filepath, "wb") as f:
            f.write(filepath.encode("utf-8"))

        vcfg = [
            ".::r,u1:r.,u2",
            "a:a:r,u1:r,u2",
            ".b:.b:r.,u1:r,u2"
        ]
        vcfg = [".::r,u1:r.,u2", "a:a:r,u1:r,u2", ".b:.b:r.,u1:r,u2"]
        self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"], e2dsa=True)
        self.asrv = AuthSrv(self.args, self.log)

@@ -66,7 +65,9 @@ class TestHttpCli(unittest.TestCase):

        self.assertEqual(self.curl("?tar", "x")[1][:17], "\nJ2EOT")

        # search
        ##
        ## search

        up2k = Up2k(self)
        u2idx = U2idx(self)
        allvols = list(self.asrv.vfs.all_vols.values())

@@ -91,15 +92,55 @@ class TestHttpCli(unittest.TestCase):
        xe = "a/da/f4 a/f3 f0 t/f1"
        self.assertEqual(x, xe)

    def tardir(self, url, uname):
        h, b = self.curl("/" + url + "?tar", uname, True)
        tar = tarfile.open(fileobj=io.BytesIO(b), mode="r|").getnames()
        top = ("top" if not url else url.lstrip(".").split("/")[0]) + "/"
        assert len(tar) == len([x for x in tar if x.startswith(top)])
        return " ".join([x[len(top):] for x in tar])
        ##
        ## dirkeys

    def curl(self, url, uname, binary=False):
        conn = tu.VHttpConn(self.args, self.asrv, self.log, hdr(url, uname))
        os.mkdir("v")
        with open("v/f1.txt", "wb") as f:
            f.write(b"a")
        os.rename("a", "v/a")
        os.rename(".b", "v/.b")

        vcfg = [
            ".::r.,u1:g,u2:c,dk",
            "v/a:v/a:r.,u1:g,u2:c,dk",
            "v/.b:v/.b:r.,u1:g,u2:c,dk",
        ]
        self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"])
        self.asrv = AuthSrv(self.args, self.log)
        zj = json.loads(self.curl("?ls", "u1")[1])
        url = "?k=" + zj["dk"]
        # should descend into folders, but not other volumes:
        self.assertEqual(self.tardir(url, "u2"), "f0 t/f1 v/f1.txt")

        zj = json.loads(self.curl("v?ls", "u1")[1])
        url = "v?k=" + zj["dk"]
        self.assertEqual(self.tarsel(url, "u2", ["f1.txt", "a", ".b"]), "f1.txt")

    def tardir(self, url, uname):
        top = url.split("?")[0]
        top = ("top" if not top else top.lstrip(".").split("/")[0]) + "/"
        url += ("&" if "?" in url else "?") + "tar"
        h, b = self.curl(url, uname, True)
        tar = tarfile.open(fileobj=io.BytesIO(b), mode="r|").getnames()
        if len(tar) != len([x for x in tar if x.startswith(top)]):
            raise Exception("bad-prefix:", tar)
        return " ".join([x[len(top) :] for x in tar])

    def tarsel(self, url, uname, sel):
        url += ("&" if "?" in url else "?") + "tar"
        zs = '--XD\r\nContent-Disposition: form-data; name="act"\r\n\r\nzip\r\n--XD\r\nContent-Disposition: form-data; name="files"\r\n\r\n'
        zs += "\r\n".join(sel) + "\r\n--XD--\r\n"
        zb = zs.encode("utf-8")
        hdr = "POST /%s HTTP/1.1\r\nPW: %s\r\nConnection: close\r\nContent-Type: multipart/form-data; boundary=XD\r\nContent-Length: %d\r\n\r\n"
        req = (hdr % (url, uname, len(zb))).encode("utf-8") + zb
        h, b = self.curl("/" + url, uname, True, req)
        tar = tarfile.open(fileobj=io.BytesIO(b), mode="r|").getnames()
        return " ".join(tar)

    def curl(self, url, uname, binary=False, req=b""):
        req = req or hdr(url, uname)
        conn = tu.VHttpConn(self.args, self.asrv, self.log, req)
        HttpCli(conn).run()
        if binary:
            h, b = conn.s._reply.split(b"\r\n\r\n", 1)

tests/test_idp.py (new file, 229 lines)
@@ -0,0 +1,229 @@
#!/usr/bin/env python3
# coding: utf-8
from __future__ import print_function, unicode_literals

import json
import os
import unittest

from copyparty.authsrv import AuthSrv
from tests.util import Cfg


class TestVFS(unittest.TestCase):
    def dump(self, vfs):
        print(json.dumps(vfs, indent=4, sort_keys=True, default=lambda o: o.__dict__))

    def log(self, src, msg, c=0):
        m = "%s" % (msg,)
        if (
            "warning: filesystem-path does not exist:" in m
            or "you are sharing a system directory:" in m
            or "reinitializing due to new user from IdP:" in m
            or m.startswith("hint: argument")
            or (m.startswith("loaded ") and " config files:" in m)
        ):
            return

        print(("[%s] %s" % (src, msg)).encode("ascii", "replace").decode("ascii"))

    def nav(self, au, vp):
        return au.vfs.get(vp, "", False, False)[0]

    def assertAxs(self, axs, expected):
        unpacked = []
        zs = "uread uwrite umove udel uget upget uhtml uadmin udot"
        for k in zs.split():
            unpacked.append(list(sorted(getattr(axs, k))))

        pad = len(unpacked) - len(expected)
        self.assertEqual(unpacked, expected + [[]] * pad)

    def assertAxsAt(self, au, vp, expected):
        vn = self.nav(au, vp)
        self.assertAxs(vn.axs, expected)

    def assertNodes(self, vfs, expected):
        got = list(sorted(vfs.nodes.keys()))
        self.assertEqual(got, expected)

    def assertNodesAt(self, au, vp, expected):
        vn = self.nav(au, vp)
        self.assertNodes(vn, expected)

    def prep(self):
        here = os.path.abspath(os.path.dirname(__file__))
        cfgdir = os.path.join(here, "res", "idp")

        # globals are applied by main so need to cheat a little
        xcfg = {"idp_h_usr": "x-idp-user", "idp_h_grp": "x-idp-group"}

        return here, cfgdir, xcfg

    # buckle up...

    def test_1(self):
        """
        trivial; volumes [/] and [/vb] with one user in [/] only
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/1.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "/")
        self.assertNodes(au.vfs, ["vb"])
        self.assertNodes(au.vfs.nodes["vb"], [])

        self.assertAxs(au.vfs.axs, [["ua"]])
        self.assertAxs(au.vfs.nodes["vb"].axs, [])

    def test_2(self):
        """
        users ua/ub/uc, group ga (ua+ub) in basic combinations
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/2.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "/")
        self.assertNodes(au.vfs, ["vb", "vc"])
        self.assertNodes(au.vfs.nodes["vb"], [])
        self.assertNodes(au.vfs.nodes["vc"], [])

        self.assertAxs(au.vfs.axs, [["ua", "ub"]])
        self.assertAxsAt(au, "vb", [["ua", "ub"]])  # same as:
        self.assertAxs(au.vfs.nodes["vb"].axs, [["ua", "ub"]])
        self.assertAxs(au.vfs.nodes["vc"].axs, [["ua", "ub", "uc"]])

    def test_3(self):
        """
        IdP-only; dynamically created volumes for users/groups
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/3.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "")
        self.assertNodes(au.vfs, [])
        self.assertAxs(au.vfs.axs, [])

        au.idp_checkin(None, "iua", "iga")
        self.assertNodes(au.vfs, ["vg", "vu"])
        self.assertNodesAt(au, "vu", ["iua"])  # same as:
        self.assertNodes(au.vfs.nodes["vu"], ["iua"])
        self.assertNodes(au.vfs.nodes["vg"], ["iga"])
        self.assertEqual(au.vfs.nodes["vu"].realpath, "")
        self.assertEqual(au.vfs.nodes["vg"].realpath, "")
        self.assertAxs(au.vfs.axs, [])
        self.assertAxsAt(au, "vu/iua", [["iua"]])  # same as:
        self.assertAxs(self.nav(au, "vu/iua").axs, [["iua"]])
        self.assertAxs(self.nav(au, "vg/iga").axs, [["iua"]])  # axs is unames

    def test_4(self):
        """
        IdP mixed with regular users
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/4.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "")
        self.assertNodes(au.vfs, ["vu"])
        self.assertNodesAt(au, "vu", ["ua", "ub"])
        self.assertAxs(au.vfs.axs, [])
        self.assertAxsAt(au, "vu", [])
        self.assertAxsAt(au, "vu/ua", [["ua"]])
        self.assertAxsAt(au, "vu/ub", [["ub"]])

        au.idp_checkin(None, "iua", "iga")
        self.assertNodes(au.vfs, ["vg", "vu"])
        self.assertNodesAt(au, "vu", ["iua", "ua", "ub"])
        self.assertNodesAt(au, "vg", ["iga1", "iga2"])
        self.assertAxs(au.vfs.axs, [])
        self.assertAxsAt(au, "vu", [])
        self.assertAxsAt(au, "vu/iua", [["iua"]])
        self.assertAxsAt(au, "vu/ua", [["ua"]])
        self.assertAxsAt(au, "vu/ub", [["ub"]])
        self.assertAxsAt(au, "vg", [])
        self.assertAxsAt(au, "vg/iga1", [["iua"]])
        self.assertAxsAt(au, "vg/iga2", [["iua", "ua"]])
        self.assertEqual(self.nav(au, "vu/ua").realpath, "/u-ua")
        self.assertEqual(self.nav(au, "vu/iua").realpath, "/u-iua")
        self.assertEqual(self.nav(au, "vg/iga1").realpath, "/g1-iga")
        self.assertEqual(self.nav(au, "vg/iga2").realpath, "/g2-iga")

        au.idp_checkin(None, "iub", "iga")
        self.assertAxsAt(au, "vu/iua", [["iua"]])
        self.assertAxsAt(au, "vg/iga1", [["iua", "iub"]])
        self.assertAxsAt(au, "vg/iga2", [["iua", "iub", "ua"]])

    def test_5(self):
        """
        one IdP user in multiple groups
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/5.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "")
        self.assertNodes(au.vfs, ["g", "ga", "gb"])
        self.assertAxs(au.vfs.axs, [])

        au.idp_checkin(None, "iua", "ga")
        self.assertNodes(au.vfs, ["g", "ga", "gb"])
        self.assertAxsAt(au, "g", [["iua"]])
        self.assertAxsAt(au, "ga", [["iua"]])
        self.assertAxsAt(au, "gb", [])

        au.idp_checkin(None, "iua", "gb")
        self.assertNodes(au.vfs, ["g", "ga", "gb"])
        self.assertAxsAt(au, "g", [["iua"]])
        self.assertAxsAt(au, "ga", [])
        self.assertAxsAt(au, "gb", [["iua"]])

        au.idp_checkin(None, "iua", "ga|gb")
        self.assertNodes(au.vfs, ["g", "ga", "gb"])
        self.assertAxsAt(au, "g", [["iua"]])
        self.assertAxsAt(au, "ga", [["iua"]])
        self.assertAxsAt(au, "gb", [["iua"]])

    def test_6(self):
        """
        IdP volumes with anon-get and other users/groups (github#79)
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/6.conf"], **xcfg), self.log)

        self.assertAxs(au.vfs.axs, [])
        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "")
        self.assertNodes(au.vfs, [])

        au.idp_checkin(None, "iua", "")
        star = ["*", "iua"]
        self.assertNodes(au.vfs, ["get", "priv"])
        self.assertAxsAt(au, "get/iua", [["iua"], [], [], [], star])
        self.assertAxsAt(au, "priv/iua", [["iua"], [], []])

        au.idp_checkin(None, "iub", "")
        star = ["*", "iua", "iub"]
        self.assertNodes(au.vfs, ["get", "priv"])
        self.assertAxsAt(au, "get/iua", [["iua"], [], [], [], star])
        self.assertAxsAt(au, "get/iub", [["iub"], [], [], [], star])
        self.assertAxsAt(au, "priv/iua", [["iua"], [], []])
        self.assertAxsAt(au, "priv/iub", [["iub"], [], []])

        au.idp_checkin(None, "iuc", "su")
        star = ["*", "iua", "iub", "iuc"]
        self.assertNodes(au.vfs, ["get", "priv", "team"])
        self.assertAxsAt(au, "get/iua", [["iua", "iuc"], [], ["iuc"], [], star])
        self.assertAxsAt(au, "get/iub", [["iub", "iuc"], [], ["iuc"], [], star])
        self.assertAxsAt(au, "get/iuc", [["iuc"], [], ["iuc"], [], star])
        self.assertAxsAt(au, "priv/iua", [["iua", "iuc"], [], ["iuc"]])
        self.assertAxsAt(au, "priv/iub", [["iub", "iuc"], [], ["iuc"]])
        self.assertAxsAt(au, "priv/iuc", [["iuc"], [], ["iuc"]])
        self.assertAxsAt(au, "team/su/iuc", [["iuc"]])

        au.idp_checkin(None, "iud", "su")
        self.assertAxsAt(au, "team/su/iuc", [["iuc", "iud"]])
        self.assertAxsAt(au, "team/su/iud", [["iuc", "iud"]])

@@ -110,28 +110,28 @@ class Cfg(Namespace):
    def __init__(self, a=None, v=None, c=None, **ka0):
        ka = {}

        ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw q rand smb srch_dbg stats vague_403 vc ver xdev xlink xvol"
        ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw q rand smb srch_dbg stats vague_403 vc ver xdev xlink xvol"
        ka.update(**{k: False for k in ex.split()})

        ex = "dotpart dotsrch no_dhash no_fastboot no_rescan no_sendfile no_voldump re_dhash plain_ip"
        ka.update(**{k: True for k in ex.split()})

        ex = "ah_cli ah_gen css_browser hist ipa_re js_browser no_forget no_hash no_idx nonsus_urls"
        ex = "ah_cli ah_gen css_browser hist js_browser no_forget no_hash no_idx nonsus_urls"
        ka.update(**{k: None for k in ex.split()})

        ex = "hash_mt srch_time u2j"
        ex = "hash_mt srch_time u2abort u2j"
        ka.update(**{k: 1 for k in ex.split()})

        ex = "reg_cap s_thead s_tbody th_convt"
        ka.update(**{k: 9 for k in ex.split()})

        ex = "db_act df loris re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
        ex = "db_act df k304 loris re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
        ka.update(**{k: 0 for k in ex.split()})

        ex = "ah_alg bname doctitle exit favico idp_h_usr html_head lg_sbf log_fk md_sbf name textfiles unlist vname R RS SR"
        ka.update(**{k: "" for k in ex.split()})

        ex = "on403 on404 xad xar xau xban xbd xbr xbu xiu xm"
        ex = "grp on403 on404 xad xar xau xban xbd xbr xbu xiu xm"
        ka.update(**{k: [] for k in ex.split()})

        ex = "exp_lg exp_md th_coversd"

@@ -145,7 +148,10 @@ class Cfg(Namespace):
            c=c,
            E=E,
            dbd="wal",
            dk_salt="b" * 16,
            fk_salt="a" * 16,
            idp_gsep=re.compile("[|:;+,]"),
            iobuf=256 * 1024,
            lang="eng",
            log_badpwd=1,
            logout=573,

@@ -153,7 +156,8 @@ class Cfg(Namespace):
            mth={},
            mtp=[],
            rm_retry="0/0",
            s_wr_sz=512 * 1024,
            s_rd_sz=256 * 1024,
            s_wr_sz=256 * 1024,
            sort="href",
            srch_hits=99999,
            th_crop="y",

@@ -241,6 +245,7 @@ class VHttpConn(object):
        self.freshen_pwd = 0.0
        self.hsrv = VHttpSrv(args, asrv, log)
        self.ico = None
        self.ipa_nm = None
        self.lf_url = None
        self.log_func = log
        self.log_src = "a"

@@ -252,4 +257,4 @@ class VHttpConn(object):
        self.thumbcli = None
        self.u2fh = FHC()

        self.get_u2idx = self.hsrv.get_u2idx
        self.get_u2idx = self.hsrv.get_u2idx