Compare commits
84 Commits
bccc44dc21, 2f20d29edd, c6acd3a904, 2b24c50eb7, d30ae8453d, 8e5c436bef,
f500e55e68, 9700a12366, 2b6a34dc5c, ee80cdb9cf, 2def4cd248, 0287c7baa5,
51d31588e6, 32553e4520, 211a30da38, bdbcbbb002, e78af02241, 115020ba60,
66abf17bae, b377791be7, 78919e65d6, 84b52ea8c5, fd89f7ecb9, 2ebfdc2562,
dbf1cbc8af, a259704596, 04b55f1a1d, 206af8f151, 645bb5c990, f8966222e4,
d71f844b43, e8b7f65f82, f193f398c1, b6554a7f8c, 3f05b6655c, 51a83b04a0,
0c03921965, 2527e90325, 7f08f10c37, 1c011ff0bb, a1ad608267, 547a486387,
7741870dc7, 8785d2f9fe, d744f3ff8f, 8ca996e2f7, 096de50889, bec3fee9ee,
8413ed6d1f, 055302b5be, 8016e6711b, c8ea4066b1, 6cc7101d31, 263adec70a,
ac96fd9c96, e5582605cd, 1b52ef1f8a, 503face974, 13e77777d7, 89c6c2e0d9,
14af136fcd, d39a99c929, 43ee6b9f5b, 8a38101e48, 5026b21226, d07859e8e6,
df7219d3b6, ad9be54f55, eeecc50757, 8ff7094e4d, 58ae38c613, 7f1c992601,
fbfdd8338b, bbc379906a, 33f41f3e61, 655f6d00f8, fd552842d4, 6bd087ddc5,
0504b010a1, 39cc92d4bc, a96d9ac6cb, 643e222986, 35165f8472, caf7e93f5e
README.md: 98 changed lines
@@ -70,14 +70,16 @@ turn almost any device into a file server with resumable uploads/downloads using
   * [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/))
   * [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/))
+  * [identity providers](#identity-providers) - replace copyparty passwords with oauth and such
+  * [using the cloud as storage](#using-the-cloud-as-storage) - connecting to an aws s3 bucket and similar
 * [hiding from google](#hiding-from-google) - tell search engines you dont wanna be indexed
 * [themes](#themes)
 * [complete examples](#complete-examples)
 * [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
   * [real-ip](#real-ip) - teaching copyparty how to see client IPs
 * [prometheus](#prometheus) - metrics/stats can be enabled
 * [packages](#packages) - the party might be closer than you think
   * [arch package](#arch-package) - now [available on aur](https://aur.archlinux.org/packages/copyparty) maintained by [@icxes](https://github.com/icxes)
-  * [fedora package](#fedora-package) - currently **NOT** available on [copr-pypi](https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/)
+  * [fedora package](#fedora-package) - does not exist yet
   * [nix package](#nix-package) - `nix profile install github:9001/copyparty`
   * [nixos module](#nixos-module)
 * [browser support](#browser-support) - TLDR: yes
@@ -104,7 +106,7 @@ turn almost any device into a file server with resumable uploads/downloads using
 * [sfx](#sfx) - the self-contained "binary"
   * [copyparty.exe](#copypartyexe) - download [copyparty.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) (win8+) or [copyparty32.exe](https://github.com/9001/copyparty/releases/latest/download/copyparty32.exe) (win7+)
 * [install on android](#install-on-android)
-* [reporting bugs](#reporting-bugs) - ideas for context to include in bug reports
+* [reporting bugs](#reporting-bugs) - ideas for context to include, and where to submit them
 * [devnotes](#devnotes) - for build instructions etc, see [./docs/devnotes.md](./docs/devnotes.md)
 
 
@@ -286,6 +288,9 @@ roughly sorted by chance of encounter
 * cannot index non-ascii filenames with `-e2d`
+* cannot handle filenames with mojibake
+
+if you have a new exciting bug to share, see [reporting bugs](#reporting-bugs)
 
 
 ## not my bugs
 
 same order here too
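The mojibake bug mentioned above is easy to reproduce from Python; a minimal sketch (the filename is just an example) decodes UTF-8 bytes with the wrong codec, producing the kind of filename that trips up indexers:

```python
# -*- coding: utf-8 -*-
# a utf-8 filename decoded as latin-1 becomes mojibake:
# "smörgås.flac" turns into "smÃ¶rgÃ¥s.flac"
name = "smörgås.flac"
garbled = name.encode("utf-8").decode("latin-1")
print(garbled)  # smÃ¶rgÃ¥s.flac -- the bytes survived, the meaning did not
```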
@@ -341,9 +346,24 @@ upgrade notes
 * yes, using the [`g` permission](#accounts-and-volumes), see the examples there
   * you can also do this with linux filesystem permissions; `chmod 111 music` will make it possible to access files and folders inside the `music` folder but not list the immediate contents -- also works with other software, not just copyparty
 
 * can I link someone to a password-protected volume/file by including the password in the URL?
   * yes, by adding `?pw=hunter2` to the end; replace `?` with `&` if there are parameters in the URL already, meaning it contains a `?` near the end
 
+* how do I stop `.hist` folders from appearing everywhere on my HDD?
+  * by default, a `.hist` folder is created inside each volume for the filesystem index, thumbnails, audio transcodes, and markdown document history. Use the `--hist` global-option or the `hist` volflag to move it somewhere else; see [database location](#database-location)
+
+* can I make copyparty download a file to my server if I give it a URL?
+  * yes, using [hooks](https://github.com/9001/copyparty/blob/hovudstraum/bin/hooks/wget.py)
+
+* firefox refuses to connect over https, saying "Secure Connection Failed" or "SEC_ERROR_BAD_SIGNATURE", but the usual button to "Accept the Risk and Continue" is not shown
+  * firefox has corrupted its certstore; fix this by exiting firefox, then find and delete the file named `cert9.db` somewhere in your firefox profile folder
+
+* the server keeps saying `thank you for playing` when I try to access the website
+  * you've gotten banned for malicious traffic! if this happens by mistake, and you're running a reverse-proxy and/or something like cloudflare, see [real-ip](#real-ip) on how to fix this
+
+* copyparty seems to think I am using http, even though the URL is https
+  * your reverse-proxy is not sending the `X-Forwarded-Proto: https` header; this could be because your reverse-proxy itself is confused. Ensure that none of the intermediates (such as cloudflare) are terminating https before the traffic hits your entrypoint
+
 * i want to learn python and/or programming and am considering looking at the copyparty source code in that occasion
   * ```bash
    _| _ __ _  _|_
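A quick sketch of the `?pw=` trick from the FAQ above, using Python with the `requests` library (the hostname and filename here are placeholders):

```python
import requests

# fetch a password-protected file; "?pw=" carries the account password,
# and requests appends it as "&pw=" if the URL already has a query string
url = "https://fs.example.com/music/song.flac"
r = requests.get(url, params={"pw": "hunter2"})
r.raise_for_status()
with open("song.flac", "wb") as f:
    f.write(r.content)
```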
@@ -581,7 +601,7 @@ this initiates an upload using `up2k`; there are two uploaders available:
 * `[🎈] bup`, the basic uploader, supports almost every browser since netscape 4.0
 * `[🚀] up2k`, the good / fancy one
 
-NB: you can undo/delete your own uploads with `[🧯]` [unpost](#unpost)
+NB: you can undo/delete your own uploads with `[🧯]` [unpost](#unpost) (and this is also where you abort unfinished uploads, but you have to refresh the page first)
 
 up2k has several advantages:
 * you can drop folders into the browser (files are added recursively)
@@ -954,17 +974,24 @@ a TFTP server (read/write) can be started using `--tftp 3969` (you probably wan
 * based on [partftpy](https://github.com/9001/partftpy)
 * no accounts; read from world-readable folders, write to world-writable, overwrite in world-deletable
 * needs a dedicated port (cannot share with the HTTP/HTTPS API)
-* run as root to use the spec-recommended port `69` (nice)
+* run as root (or see below) to use the spec-recommended port `69` (nice)
+* can reply from a predefined portrange (good for firewalls)
 * only supports the binary/octet/image transfer mode (no netascii)
 * [RFC 7440](https://datatracker.ietf.org/doc/html/rfc7440) is **not** supported, so will be extremely slow over WAN
-  * expect 1100 KiB/s over 1000BASE-T, 400-500 KiB/s over wifi, 200 on bad wifi
+  * assuming default blksize (512), expect 1100 KiB/s over 100BASE-T, 400-500 KiB/s over wifi, 200 on bad wifi
 
+most clients expect to find TFTP on port 69, but on linux and macos you need to be root to listen on that. Alternatively, listen on 3969 and use NAT on the server to forward 69 to that port;
+* on linux: `iptables -t nat -A PREROUTING -i eth0 -p udp --dport 69 -j REDIRECT --to-port 3969`
+
 some recommended TFTP clients:
+* curl (cross-platform, read/write)
+  * get: `curl --tftp-blksize 1428 tftp://127.0.0.1:3969/firmware.bin`
+  * put: `curl --tftp-blksize 1428 -T firmware.bin tftp://127.0.0.1:3969/`
 * windows: `tftp.exe` (you probably already have it)
   * `tftp -i 127.0.0.1 put firmware.bin`
 * linux: `tftp-hpa`, `atftp`
-  * `curl tftp://127.0.0.1:3969/firmware.bin` (read-only)
-  * `tftp -v -m binary 127.0.0.1 3969 -c put firmware.bin`
+  * `tftp 127.0.0.1 3969 -v -m binary -c put firmware.bin`
+  * `atftp --option "blksize 1428" 127.0.0.1 3969 -p -l firmware.bin -r firmware.bin`
 
 
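For a sense of what these clients put on the wire, here is a minimal Python sketch of the read-request (RRQ) packet from RFC 1350; it asks for a file in octet mode, the only transfer mode copyparty's TFTP server accepts, and reads the first DATA block (a full client would also ACK each block):

```python
import socket
import struct

# RFC 1350 RRQ: opcode 1, then filename and mode as NUL-terminated strings
def rrq(filename: str, mode: str = "octet") -> bytes:
    return struct.pack("!H", 1) + filename.encode() + b"\0" + mode.encode() + b"\0"

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(2)
s.sendto(rrq("firmware.bin"), ("127.0.0.1", 3969))
data, addr = s.recvfrom(4 + 512)  # first DATA block: opcode 3, block number, <=512 bytes
opcode, block = struct.unpack("!HH", data[:4])
print(opcode, block, len(data) - 4)
```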
## smb server

@@ -997,7 +1024,7 @@ known client bugs:
 * however smb1 is buggy and is not enabled by default on win10 onwards
 * windows cannot access folders which contain filenames with invalid unicode or forbidden characters (`<>:"/\|?*`), or names ending with `.`
 
-the smb protocol listens on TCP port 445, which is a privileged port on linux and macos, which would require running copyparty as root. However, this can be avoided by listening on another port using `--smb-port 3945` and then using NAT to forward the traffic from 445 to there;
+the smb protocol listens on TCP port 445, which is a privileged port on linux and macos, which would require running copyparty as root. However, this can be avoided by listening on another port using `--smb-port 3945` and then using NAT on the server to forward the traffic from 445 to there;
 * on linux: `iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 445 -j REDIRECT --to-port 3945`
 
 authenticate with one of the following:
@@ -1245,11 +1272,26 @@ replace 404 and 403 errors with something completely different (that's it for no
 
 replace copyparty passwords with oauth and such
 
-work is [ongoing](https://github.com/9001/copyparty/issues/62) to support authenticating / authorizing users based on a separate authentication proxy, which makes it possible to support oauth, single-sign-on, etc.
+you can disable the built-in password-based login system, and instead replace it with a separate piece of software (an identity provider) which will then handle authenticating / authorizing of users; this makes it possible to login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions
 
-it is currently possible to specify `--idp-h-usr x-username`; copyparty will then skip password validation and blindly trust the username specified in the `X-Username` request header
+a popular choice is [Authelia](https://www.authelia.com/) (config-file based), another one is [authentik](https://goauthentik.io/) (GUI-based, more complex)
 
-the remaining stuff (accepting user groups through another header, creating volumes on the fly) are still to-do; configuration will probably [look like this](./docs/examples/docker/idp/copyparty.conf)
+there is a [docker-compose example](./docs/examples/docker/idp-authelia-traefik) which is hopefully a good starting point (alternatively see [./docs/idp.md](./docs/idp.md) if you're the DIY type)
+
+a more complete example of the copyparty configuration options [looks like this](./docs/examples/docker/idp/copyparty.conf)
+
+
+## using the cloud as storage
+
+connecting to an aws s3 bucket and similar
+
+there is no built-in support for this, but you can use FUSE-software such as [rclone](https://rclone.org/) / [geesefs](https://github.com/yandex-cloud/geesefs) / [JuiceFS](https://juicefs.com/en/) to first mount your cloud storage as a local disk, and then let copyparty use (a folder in) that disk as a volume
+
+you may experience poor upload performance this way, but that can sometimes be fixed by specifying the volflag `sparse` to force the use of sparse files; this has improved the upload speeds from `1.5 MiB/s` to over `80 MiB/s` in one case, but note that you are also more likely to discover funny bugs in your FUSE software this way, so buckle up
+
+someone has also tested geesefs in combination with [gocryptfs](https://nuetzlich.net/gocryptfs/) with surprisingly good results, getting 60 MiB/s upload speeds on a gbit line, but JuiceFS won with 80 MiB/s using its built-in encryption
+
+you may improve performance by specifying larger values for `--iobuf` / `--s-rd-sz` / `--s-wr-sz`
 
 
 ## hiding from google
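To make the trust model behind `--idp-h-usr x-username` concrete: copyparty believes whatever arrives in that header, so the reverse-proxy must strip any client-supplied copy and inject its own on every request. A minimal sketch of that contract in Python; the `X-Username` name comes from the docs, the key-header name and upstream address are illustrative placeholders:

```python
import requests

# what the reverse-proxy must do: drop any client-supplied identity
# headers, then inject the username it actually authenticated
def forward(client_headers: dict, authed_user: str) -> requests.Response:
    headers = {k: v for k, v in client_headers.items()
               if k.lower() not in ("x-username", "x-idp-key")}  # x-idp-key: hypothetical --idp-h-key name
    headers["X-Username"] = authed_user  # set by the proxy, never by the client
    return requests.get("http://127.0.0.1:3923/", headers=headers)
```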
@@ -1285,6 +1327,8 @@ the classname of the HTML tag is set according to the selected theme, which is u
 
 see the top of [./copyparty/web/browser.css](./copyparty/web/browser.css) where the color variables are set, and there's layout-specific stuff near the bottom
 
+if you want to change the fonts, see [./docs/rice/](./docs/rice/)
+
 
 ## complete examples
 
@@ -1345,6 +1389,15 @@ example webserver configs:
 * [apache2 config](contrib/apache/copyparty.conf) -- location-based
 
 
+### real-ip
+
+teaching copyparty how to see client IPs when running behind a reverse-proxy, or a WAF, or another protection service such as cloudflare
+
+if you (and maybe everybody else) keep getting a message that says `thank you for playing`, then you've gotten banned for malicious traffic. This ban applies to the IP address that copyparty *thinks* identifies the shady client -- so, depending on your setup, you might have to tell copyparty where to find the correct IP
+
+for most common setups, there should be a helpful message in the server-log explaining what to do, but see [docs/xff.md](docs/xff.md) if you want to learn more, including a quick hack to **just make it work** (which is **not** recommended, but hey...)
+
+
 ## prometheus
 
 metrics/stats can be enabled at URL `/.cpr/metrics` for grafana / prometheus / etc (openmetrics 1.0.0)
@@ -1424,17 +1477,7 @@ it comes with a [systemd service](./contrib/package/arch/copyparty.service) and
 
 ## fedora package
 
-currently **NOT** available on [copr-pypi](https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/) , fedora is having issues with their build servers and won't be fixed for several months
-
-if you previously installed copyparty from copr, you may run one of the following commands to upgrade to a more recent version:
-
-```bash
-dnf install https://ocv.me/copyparty/fedora/37/python3-copyparty.fc37.noarch.rpm
-dnf install https://ocv.me/copyparty/fedora/38/python3-copyparty.fc38.noarch.rpm
-dnf install https://ocv.me/copyparty/fedora/39/python3-copyparty.fc39.noarch.rpm
-```
-
-to run copyparty as a service, use the [systemd service scripts](https://github.com/9001/copyparty/tree/hovudstraum/contrib/systemd), just replace `/usr/bin/python3 /usr/local/bin/copyparty-sfx.py` with `/usr/bin/copyparty`
+does not exist yet; using the [copr-pypi](https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/) builds is **NOT recommended** because updates can be delayed by [several months](https://github.com/fedora-copr/copr/issues/3056)
 
 
 ## nix package
@@ -1699,6 +1742,7 @@ below are some tweaks roughly ordered by usefulness:
 * `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
   * and also makes thumbnails load faster, regardless of e2d/e2t
 * `--no-hash .` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
+* if your volumes are on a network-disk such as NFS / SMB / s3, specifying larger values for `--iobuf` and/or `--s-rd-sz` and/or `--s-wr-sz` may help; try setting all of them to `524288` or `1048576` or `4194304`
 * `--no-htp --hash-mt=0 --mtag-mt=1 --th-mt=1` minimizes the number of threads; can help in some eccentric environments (like the vscode debugger)
 * `-j0` enables multiprocessing (actual multithreading), can reduce latency to `20+80/numCores` percent and generally improve performance in cpu-intensive workloads, for example:
   * lots of connections (many users or heavy clients)
@@ -1936,7 +1980,12 @@ if you want thumbnails (photos+videos) and you're okay with spending another 132
 
 # reporting bugs
 
-ideas for context to include in bug reports
+ideas for context to include, and where to submit them
+
+please get in touch using any of the following URLs:
+* https://github.com/9001/copyparty/ **(primary)**
+* https://gitlab.com/9001/copyparty/ *(mirror)*
+* https://codeberg.org/9001/copyparty *(mirror)*
 
 in general, commandline arguments (and config file if any)
 
@@ -1951,3 +2000,6 @@ if there's a wall of base64 in the log (thread stacks) then please include that,
 # devnotes
 
 for build instructions etc, see [./docs/devnotes.md](./docs/devnotes.md)
+
+see [./docs/TODO.md](./docs/TODO.md) for planned features / fixes / changes
+
@@ -223,7 +223,10 @@ install_vamp() {
 	# use msys2 in mingw-w64 mode
 	# pacman -S --needed mingw-w64-x86_64-{ffmpeg,python,python-pip,vamp-plugin-sdk}
 
-	$pybin -m pip install --user vamp
+	$pybin -m pip install --user vamp || {
+		printf '\n\033[7malright, trying something else...\033[0m\n'
+		$pybin -m pip install --user --no-build-isolation vamp
+	}
 
 	cd "$td"
 	echo '#include <vamp-sdk/Plugin.h>' | g++ -x c++ -c -o /dev/null - || [ -e ~/pe/vamp-sdk ] || {
bin/u2c.py: 14 changed lines
@@ -1,8 +1,8 @@
 #!/usr/bin/env python3
 from __future__ import print_function, unicode_literals
 
-S_VERSION = "1.14"
-S_BUILD_DT = "2024-01-27"
+S_VERSION = "1.15"
+S_BUILD_DT = "2024-02-18"
 
 """
 u2c.py: upload to copyparty
@@ -29,7 +29,7 @@ import platform
 import threading
 import datetime
 
-EXE = sys.executable.endswith("exe")
+EXE = bool(getattr(sys, "frozen", False))
 
 try:
     import argparse
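The old check misfires because `sys.executable` ends with "exe" for any regular CPython on windows, not just for a frozen bundle. PyInstaller (and similar bundlers) instead set a `frozen` attribute on `sys`, which is what the new line tests; a quick sketch:

```python
import sys

# regular interpreter: sys has no "frozen" attribute -> False
# pyinstaller-style bundle: sys.frozen is truthy -> True
if bool(getattr(sys, "frozen", False)):
    print("running from a frozen executable")
else:
    print("running under a normal python interpreter:", sys.executable)
```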
@@ -846,12 +846,12 @@ class Ctl(object):
             txt = " "
 
             if not self.up_br:
-                spd = self.hash_b / (time.time() - self.t0)
-                eta = (self.nbytes - self.hash_b) / (spd + 1)
+                spd = self.hash_b / ((time.time() - self.t0) or 1)
+                eta = (self.nbytes - self.hash_b) / (spd or 1)
             else:
-                spd = self.up_br / (time.time() - self.t0_up)
+                spd = self.up_br / ((time.time() - self.t0_up) or 1)
                 spd = self.spd = (self.spd or spd) * 0.9 + spd * 0.1
-                eta = (self.nbytes - self.up_b) / (spd + 1)
+                eta = (self.nbytes - self.up_b) / (spd or 1)
 
             spd = humansize(spd)
             self.eta = str(datetime.timedelta(seconds=int(eta)))
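The pattern in that hunk is worth spelling out: dividing by elapsed time raises ZeroDivisionError if the clock has not advanced yet, and the old `spd + 1` workaround skewed the ETA for slow transfers. `(x or 1)` substitutes 1 only when the value is exactly zero; a sketch of the difference:

```python
# the "or 1" guard only kicks in when the denominator is exactly 0
def eta_old(remaining: float, spd: float) -> float:
    return remaining / (spd + 1)   # always off by a little; way off when spd is tiny

def eta_new(remaining: float, spd: float) -> float:
    return remaining / (spd or 1)  # exact for any nonzero speed

print(eta_old(1000, 10))  # 90.9... seconds, understates the real 100
print(eta_new(1000, 10))  # 100.0 seconds
print(eta_new(1000, 0))   # 1000.0 -- no crash when no bytes have moved yet
```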
@@ -16,11 +16,8 @@
 * sharex config file to upload screenshots and grab the URL
   * `RequestURL`: full URL to the target folder
-  * `pw`: password (remove the `pw` line if anon-write)
-
-however if your copyparty is behind a reverse-proxy, you may want to use [`sharex-html.sxcu`](sharex-html.sxcu) instead:
-  * `RequestURL`: full URL to the target folder
-  * `URL`: full URL to the root folder (with trailing slash) followed by `$regex:1|1$`
+  * `pw`: password (remove `Parameters` if anon-write)
+  * the `act:bput` thing is optional since copyparty v1.9.29
+* using an older sharex version, maybe sharex v12.1.1 for example? dw fam i got your back 👉😎👉 [`sharex12.sxcu`](sharex12.sxcu)
 
 ### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
 * browser integration, kind of? custom rightclick actions and stuff
@@ -11,6 +11,14 @@
 # (5'000 requests per second, or 20gbps upload/download in parallel)
 #
 # on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
+#
+# if you are behind cloudflare (or another protection service),
+# remember to reject all connections which are not coming from your
+# protection service -- for cloudflare in particular, you can
+# generate the list of permitted IP ranges like so:
+# (curl -s https://www.cloudflare.com/ips-v{4,6} | sed 's/^/allow /; s/$/;/'; echo; echo "deny all;") > /etc/nginx/cloudflare-only.conf
+#
+# and then enable it below by uncommenting the cloudflare-only.conf line
 
 upstream cpp {
     server 127.0.0.1:3923 fail_timeout=1s;
@@ -21,7 +29,10 @@ server {
     listen [::]:443 ssl;
 
     server_name fs.example.com;
 
+    # uncomment the following line to reject non-cloudflare connections, ensuring client IPs cannot be spoofed:
+    #include /etc/nginx/cloudflare-only.conf;
+
     location / {
         proxy_pass http://cpp;
         proxy_redirect off;
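The curl-and-sed one-liner in that nginx comment can also be written as a small Python script, handy for regenerating the allowlist from cron without a shell pipeline; a sketch using the same cloudflare endpoints and output format:

```python
import urllib.request

# fetch cloudflare's published ranges and emit an nginx allowlist
urls = ["https://www.cloudflare.com/ips-v4", "https://www.cloudflare.com/ips-v6"]
lines = []
for url in urls:
    with urllib.request.urlopen(url) as r:
        for cidr in r.read().decode().split():
            lines.append("allow %s;" % (cidr,))
lines += ["", "deny all;"]
with open("/etc/nginx/cloudflare-only.conf", "w") as f:
    f.write("\n".join(lines) + "\n")
```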
@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.9.31"
+pkgver="1.11.1"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("a8ec1faf8cb224515355226882fdb2d1ab1de42d96ff78e148b930318867a71e")
+sha256sums=("13e4a65d1854f4f95308fa91c00bd8a5f5977b3ea4fa844ed08c7e1cb1c4bf29")
 
 build() {
     cd "${srcdir}/${pkgname}-${pkgver}"
@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.9.31/copyparty-sfx.py",
-  "version": "1.9.31",
-  "hash": "sha256-yp7qoiW5yzm2M7qVmYY7R+SyhZXlqL+JxsXV22aS+MM="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.11.1/copyparty-sfx.py",
+  "version": "1.11.1",
+  "hash": "sha256-q7RiaB5yo1EDTwdPeMCNFnmcNj0TsKzBsbsddMSqTH4="
 }
@@ -1,19 +0,0 @@
-{
-  "Version": "13.5.0",
-  "Name": "copyparty-html",
-  "DestinationType": "ImageUploader",
-  "RequestMethod": "POST",
-  "RequestURL": "http://127.0.0.1:3923/sharex",
-  "Parameters": {
-    "pw": "wark"
-  },
-  "Body": "MultipartFormData",
-  "Arguments": {
-    "act": "bput"
-  },
-  "FileFormName": "f",
-  "RegexList": [
-    "bytes // <a href=\"/([^\"]+)\""
-  ],
-  "URL": "http://127.0.0.1:3923/$regex:1|1$"
-}
@@ -1,17 +1,19 @@
 {
-  "Version": "13.5.0",
+  "Version": "15.0.0",
   "Name": "copyparty",
   "DestinationType": "ImageUploader",
   "RequestMethod": "POST",
   "RequestURL": "http://127.0.0.1:3923/sharex",
   "Parameters": {
-    "pw": "wark",
     "j": null
   },
+  "Headers": {
+    "pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE"
+  },
   "Body": "MultipartFormData",
   "Arguments": {
     "act": "bput"
   },
   "FileFormName": "f",
-  "URL": "$json:files[0].url$"
+  "URL": "{json:files[0].url}"
 }
contrib/sharex12.sxcu: 13 lines (new file)

@@ -0,0 +1,13 @@
+{
+  "Name": "copyparty",
+  "DestinationType": "ImageUploader, TextUploader, FileUploader",
+  "RequestURL": "http://127.0.0.1:3923/sharex",
+  "FileFormName": "f",
+  "Arguments": {
+    "act": "bput"
+  },
+  "Headers": {
+    "accept": "url",
+    "pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE"
+  }
+}
@@ -395,7 +395,7 @@ def configure_ssl_ciphers(al: argparse.Namespace) -> None:
 
 def args_from_cfg(cfg_path: str) -> list[str]:
     lines: list[str] = []
-    expand_config_file(lines, cfg_path, "")
+    expand_config_file(None, lines, cfg_path, "")
     lines = upgrade_cfg_fmt(None, argparse.Namespace(vc=False), lines, "")
 
     ret: list[str] = []
@@ -503,6 +503,10 @@ def get_sects():
          * "\033[33mperm\033[0m" is "permissions,username1,username2,..."
          * "\033[32mvolflag\033[0m" is config flags to set on this volume
 
+        --grp takes groupname:username1,username2,...
+        and groupnames can be used instead of usernames in -v
+        by prefixing the groupname with %
+
         list of permissions:
           "r" (read): list folder contents, download files
           "w" (write): upload files; need "r" to see the uploads
@@ -840,6 +844,7 @@ def add_general(ap, nc, srvname):
     ap2.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores, 0=all")
     ap2.add_argument("-a", metavar="ACCT", type=u, action="append", help="add account, \033[33mUSER\033[0m:\033[33mPASS\033[0m; example [\033[32med:wark\033[0m]")
     ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, \033[33mSRC\033[0m:\033[33mDST\033[0m:\033[33mFLAG\033[0m; examples [\033[32m.::r\033[0m], [\033[32m/mnt/nas/music:/music:r:aed\033[0m], see --help-accounts")
+    ap2.add_argument("--grp", metavar="G:N,N", type=u, action="append", help="add group, \033[33mNAME\033[0m:\033[33mUSER1\033[0m,\033[33mUSER2\033[0m,\033[33m...\033[0m; example [\033[32madmins:ed,foo,bar\033[0m]")
     ap2.add_argument("-ed", action="store_true", help="enable the ?dots url parameter / client option which allows clients to see dotfiles / hidden files (volflag=dots)")
     ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-form POSTs; see \033[33m--help-urlform\033[0m")
     ap2.add_argument("--wintitle", metavar="TXT", type=u, default="cpp @ $pub", help="server terminal title, for example [\033[32m$ip-10.1.2.\033[0m] or [\033[32m$ip-]")
@@ -864,6 +869,7 @@ def add_fs(ap):
     ap2 = ap.add_argument_group("filesystem options")
     rm_re_def = "5/0.1" if ANYWIN else "0/0"
     ap2.add_argument("--rm-retry", metavar="T/R", type=u, default=rm_re_def, help="if a file cannot be deleted because it is busy, continue trying for \033[33mT\033[0m seconds, retry every \033[33mR\033[0m seconds; disable with 0/0 (volflag=rm_retry)")
+    ap2.add_argument("--iobuf", metavar="BYTES", type=int, default=256*1024, help="file I/O buffer-size; if your volumes are on a network drive, try increasing to \033[32m524288\033[0m or even \033[32m4194304\033[0m (and let me know if that improves your performance)")
 
 
 def add_upload(ap):
@@ -871,6 +877,7 @@ def add_upload(ap):
     ap2.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads, hiding them from clients unless \033[33m-ed\033[0m")
     ap2.add_argument("--plain-ip", action="store_true", help="when avoiding filename collisions by appending the uploader's ip to the filename: append the plaintext ip instead of salting and hashing the ip")
     ap2.add_argument("--unpost", metavar="SEC", type=int, default=3600*12, help="grace period where uploads can be deleted by the uploader, even without delete permissions; 0=disabled, default=12h")
+    ap2.add_argument("--u2abort", metavar="NUM", type=int, default=1, help="clients can abort incomplete uploads by using the unpost tab (requires \033[33m-e2d\033[0m). [\033[32m0\033[0m] = never allowed (disable feature), [\033[32m1\033[0m] = allow if client has the same IP as the upload AND is using the same account, [\033[32m2\033[0m] = just check the IP, [\033[32m3\033[0m] = just check account-name (volflag=u2abort)")
     ap2.add_argument("--blank-wt", metavar="SEC", type=int, default=300, help="file write grace period (any client can write to a blank file last-modified more recently than \033[33mSEC\033[0m seconds ago)")
     ap2.add_argument("--reg-cap", metavar="N", type=int, default=38400, help="max number of uploads to keep in memory when running without \033[33m-e2d\033[0m; roughly 1 MiB RAM per 600")
     ap2.add_argument("--no-fpool", action="store_true", help="disable file-handle pooling -- instead, repeatedly close and reopen files during upload (bad idea to enable this on windows and/or cow filesystems)")
@@ -901,8 +908,8 @@ def add_network(ap):
     ap2.add_argument("--ll", action="store_true", help="include link-local IPv4/IPv6 in mDNS replies, even if the NIC has routable IPs (breaks some mDNS clients)")
     ap2.add_argument("--rproxy", metavar="DEPTH", type=int, default=1, help="which ip to associate clients with; [\033[32m0\033[0m]=tcp, [\033[32m1\033[0m]=origin (first x-fwd, unsafe), [\033[32m2\033[0m]=outermost-proxy, [\033[32m3\033[0m]=second-proxy, [\033[32m-1\033[0m]=closest-proxy")
     ap2.add_argument("--xff-hdr", metavar="NAME", type=u, default="x-forwarded-for", help="if reverse-proxied, which http header to read the client's real ip from")
-    ap2.add_argument("--xff-src", metavar="IP", type=u, default="127., ::1", help="comma-separated list of trusted reverse-proxy IPs; only accept the real-ip header (\033[33m--xff-hdr\033[0m) if the incoming connection is from an IP starting with either of these. Can be disabled with [\033[32many\033[0m] if you are behind cloudflare (or similar) and are using \033[32m--xff-hdr=cf-connecting-ip\033[0m (or similar)")
-    ap2.add_argument("--ipa", metavar="PREFIX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPREFIX\033[0m; example: [\033[32m127., 10.89., 192.168.\033[0m]")
+    ap2.add_argument("--xff-src", metavar="CIDR", type=u, default="127.0.0.0/8, ::1/128", help="comma-separated list of trusted reverse-proxy CIDRs; only accept the real-ip header (\033[33m--xff-hdr\033[0m) and IdP headers if the incoming connection is from an IP within either of these subnets. Specify [\033[32mlan\033[0m] to allow all LAN / private / non-internet IPs. Can be disabled with [\033[32many\033[0m] if you are behind cloudflare (or similar) and are using \033[32m--xff-hdr=cf-connecting-ip\033[0m (or similar)")
+    ap2.add_argument("--ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
     ap2.add_argument("--rp-loc", metavar="PATH", type=u, default="", help="if reverse-proxying on a location instead of a dedicated domain/subdomain, provide the base location here; example: [\033[32m/foo/bar\033[0m]")
     if ANYWIN:
         ap2.add_argument("--reuseaddr", action="store_true", help="set reuseaddr on listening sockets on windows; allows rapid restart of copyparty at the expense of being able to accidentally start multiple instances")
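The `--rproxy` depth and the `--xff-src` trust check work together: the X-Forwarded-For header is a comma-separated chain that any client can prepend to, so the header is only honored when the TCP peer is a trusted proxy, and then an entry is picked by depth. A simplified sketch of that selection logic (the real parsing lives elsewhere in copyparty):

```python
import ipaddress

def client_ip(tcp_peer: str, xff: str, trusted: list, depth: int = 1) -> str:
    # only believe the header if the direct connection is a trusted proxy
    nets = [ipaddress.ip_network(n) for n in trusted]
    if not any(ipaddress.ip_address(tcp_peer) in n for n in nets):
        return tcp_peer
    chain = [h.strip() for h in xff.split(",")]
    # depth 1 = first entry (the origin; spoofable), -1 = closest proxy
    return chain[depth - 1] if depth > 0 else chain[depth]

print(client_ip("127.0.0.1", "1.2.3.4, 10.0.0.2", ["127.0.0.0/8"]))  # 1.2.3.4
print(client_ip("8.8.8.8", "1.2.3.4", ["127.0.0.0/8"]))  # 8.8.8.8 (peer untrusted)
```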
@@ -910,6 +917,7 @@ def add_network(ap):
     ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
     ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
     ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
+    ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
     ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
     ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0, help="debug: socket write delay in seconds")
     ap2.add_argument("--rsp-slp", metavar="SEC", type=float, default=0, help="debug: response delay in seconds")
@@ -949,8 +957,9 @@ def add_cert(ap, cert_path):
 def add_auth(ap):
     ap2 = ap.add_argument_group('IdP / identity provider / user authentication options')
     ap2.add_argument("--idp-h-usr", metavar="HN", type=u, default="", help="bypass the copyparty authentication checks and assume the request-header \033[33mHN\033[0m contains the username of the requesting user (for use with authentik/oauth/...)\n\033[1;31mWARNING:\033[0m if you enable this, make sure clients are unable to specify this header themselves; must be washed away and replaced by a reverse-proxy")
-    return
     ap2.add_argument("--idp-h-grp", metavar="HN", type=u, default="", help="assume the request-header \033[33mHN\033[0m contains the groupname of the requesting user; can be referenced in config files for group-based access control")
+    ap2.add_argument("--idp-h-key", metavar="HN", type=u, default="", help="optional but recommended safeguard; your reverse-proxy will insert a secret header named \033[33mHN\033[0m into all requests, and the other IdP headers will be ignored if this header is not present")
+    ap2.add_argument("--idp-gsep", metavar="RE", type=u, default="|:;+,", help="if there are multiple groups in \033[33m--idp-h-grp\033[0m, they are separated by one of the characters in \033[33mRE\033[0m")
 
 
 def add_zeroconf(ap):
@@ -999,7 +1008,7 @@ def add_ftp(ap):
     ap2.add_argument("--ftps", metavar="PORT", type=int, help="enable FTPS server on \033[33mPORT\033[0m, for example \033[32m3990")
     ap2.add_argument("--ftpv", action="store_true", help="verbose")
     ap2.add_argument("--ftp4", action="store_true", help="only listen on IPv4")
-    ap2.add_argument("--ftp-ipa", metavar="PFX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPFX\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Example: [\033[32m127., 10.89., 192.168.\033[0m]")
+    ap2.add_argument("--ftp-ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
     ap2.add_argument("--ftp-wt", metavar="SEC", type=int, default=7, help="grace period for resuming interrupted uploads (any client can write to any file last-modified more recently than \033[33mSEC\033[0m seconds ago)")
     ap2.add_argument("--ftp-nat", metavar="ADDR", type=u, help="the NAT address to use for passive connections")
     ap2.add_argument("--ftp-pr", metavar="P-P", type=u, help="the range of TCP ports to use for passive connections, for example \033[32m12000-13000")
@@ -1019,9 +1028,10 @@ def add_tftp(ap):
     ap2.add_argument("--tftp", metavar="PORT", type=int, help="enable TFTP server on \033[33mPORT\033[0m, for example \033[32m69 \033[0mor \033[32m3969")
     ap2.add_argument("--tftpv", action="store_true", help="verbose")
     ap2.add_argument("--tftpvv", action="store_true", help="verboser")
+    ap2.add_argument("--tftp-no-fast", action="store_true", help="debug: disable optimizations")
     ap2.add_argument("--tftp-lsf", metavar="PTN", type=u, default="\\.?(dir|ls)(\\.txt)?", help="return a directory listing if a file with this name is requested and it does not exist; defaults matches .ls, dir, .dir.txt, ls.txt, ...")
     ap2.add_argument("--tftp-nols", action="store_true", help="if someone tries to download a directory, return an error instead of showing its directory listing")
-    ap2.add_argument("--tftp-ipa", metavar="PFX", type=u, default="", help="only accept connections from IP-addresses starting with \033[33mPFX\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Example: [\033[32m127., 10.89., 192.168.\033[0m]")
+    ap2.add_argument("--tftp-ipa", metavar="CIDR", type=u, default="", help="only accept connections from IP-addresses inside \033[33mCIDR\033[0m; specify [\033[32many\033[0m] to disable inheriting \033[33m--ipa\033[0m. Examples: [\033[32mlan\033[0m] or [\033[32m10.89.0.0/16, 192.168.33.0/24\033[0m]")
     ap2.add_argument("--tftp-pr", metavar="P-P", type=u, help="the range of UDP ports to use for data transfer, for example \033[32m12000-13000")
@@ -1114,6 +1124,7 @@ def add_safety(ap):
     ap2.add_argument("--ban-url", metavar="N,W,B", type=u, default="9,2,1440", help="hitting more than \033[33mN\033[0m sus URL's in \033[33mW\033[0m minutes = ban for \033[33mB\033[0m minutes; applies only to permissions g/G/h (decent replacement for \033[33m--ban-404\033[0m if that can't be used)")
     ap2.add_argument("--sus-urls", metavar="R", type=u, default=r"\.php$|(^|/)wp-(admin|content|includes)/", help="URLs which are considered sus / eligible for banning; disable with blank or [\033[32mno\033[0m]")
     ap2.add_argument("--nonsus-urls", metavar="R", type=u, default=r"^(favicon\.ico|robots\.txt)$|^apple-touch-icon|^\.well-known", help="harmless URLs ignored from 404-bans; disable with blank or [\033[32mno\033[0m]")
+    ap2.add_argument("--early-ban", action="store_true", help="if a client is banned, reject its connection as soon as possible; not a good idea to enable when proxied behind cloudflare since it could ban your reverse-proxy")
     ap2.add_argument("--aclose", metavar="MIN", type=int, default=10, help="if a client maxes out the server connection limit, downgrade it from connection:keep-alive to connection:close for \033[33mMIN\033[0m minutes (and also kill its active connections) -- disable with 0")
     ap2.add_argument("--loris", metavar="B", type=int, default=60, help="if a client maxes out the server connection limit without sending headers, ban it for \033[33mB\033[0m minutes; disable with [\033[32m0\033[0m]")
     ap2.add_argument("--acao", metavar="V[,V]", type=u, default="*", help="Access-Control-Allow-Origin; list of origins (domains/IPs without port) to accept requests from; [\033[32mhttps://1.2.3.4\033[0m]. Default [\033[32m*\033[0m] allows requests from all sites but removes cookies and http-auth; only ?pw=hunter2 survives")
@@ -1169,7 +1180,8 @@ def add_thumbnail(ap):
     ap2.add_argument("--th-mt", metavar="CORES", type=int, default=CORES, help="num cpu cores to use for generating thumbnails")
     ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60, help="conversion timeout in seconds (volflag=convt)")
     ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
-    ap2.add_argument("--th-no-crop", action="store_true", help="dynamic height; show full image by default (client can override in UI) (volflag=nocrop)")
+    ap2.add_argument("--th-crop", metavar="TXT", type=u, default="y", help="crop thumbnails to 4:3 or keep dynamic height; client can override in UI unless force. [\033[32my\033[0m]=crop, [\033[32mn\033[0m]=nocrop, [\033[32mfy\033[0m]=force-y, [\033[32mfn\033[0m]=force-n (volflag=crop)")
+    ap2.add_argument("--th-x3", metavar="TXT", type=u, default="n", help="show thumbs at 3x resolution; client can override in UI unless force. [\033[32my\033[0m]=yes, [\033[32mn\033[0m]=no, [\033[32mfy\033[0m]=force-yes, [\033[32mfn\033[0m]=force-no (volflag=th3x)")
     ap2.add_argument("--th-dec", metavar="LIBS", default="vips,pil,ff", help="image decoders, in order of preference")
     ap2.add_argument("--th-no-jpg", action="store_true", help="disable jpg output")
     ap2.add_argument("--th-no-webp", action="store_true", help="disable webp output")
@@ -1267,6 +1279,7 @@ def add_ui(ap, retry):
     ap2.add_argument("--bname", metavar="TXT", type=u, default="--name", help="server name (displayed in filebrowser document title)")
     ap2.add_argument("--pb-url", metavar="URL", type=u, default="https://github.com/9001/copyparty", help="powered-by link; disable with \033[33m-np\033[0m")
     ap2.add_argument("--ver", action="store_true", help="show version on the control panel (incompatible with \033[33m-nb\033[0m)")
+    ap2.add_argument("--k304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable k304 on the controlpanel (workaround for buggy reverse-proxies); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
     ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox")
     ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for prologue/epilogue docs (volflag=lg_sbf)")
     ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README.md documents (volflags: no_sb_md | sb_md)")
@@ -1429,6 +1442,8 @@ def main(argv: Optional[list[str]] = None) -> None:
     deprecated: list[tuple[str, str]] = [
         ("--salt", "--warksalt"),
         ("--hdr-au-usr", "--idp-h-usr"),
+        ("--idp-h-sep", "--idp-gsep"),
+        ("--th-no-crop", "--th-crop=n"),
     ]
     for dk, nk in deprecated:
         idx = -1
@@ -1,8 +1,8 @@
 # coding: utf-8
 
-VERSION = (1, 10, 0)
-CODENAME = "tftp"
-BUILD_DT = (2024, 2, 15)
+VERSION = (1, 11, 2)
+CODENAME = "You Can (Not) Proceed"
+BUILD_DT = (2024, 3, 23)
 
 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
@@ -18,7 +18,6 @@ from .cfg import flagdescs, permdescs, vf_bmap, vf_cmap, vf_vmap
 from .pwhash import PWHash
 from .util import (
     IMPLICATIONS,
-    META_NOBOTS,
     SQLITE_VER,
     UNPLICATIONS,
     UTC,
@@ -34,6 +33,7 @@ from .util import (
     uncyg,
     undot,
     unhumanize,
+    vsplit,
 )
 
 if True:  # pylint: disable=using-constant-test
@@ -61,6 +61,10 @@ BAD_CFG = "invalid config; {}".format(SEE_LOG)
 SBADCFG = " ({})".format(BAD_CFG)
 
 
+class CfgEx(Exception):
+    pass
+
+
 class AXS(object):
     def __init__(
         self,
@@ -193,7 +197,7 @@ class Lim(object):
         self.dft = int(time.time()) + 300
         self.dfv = get_df(abspath)[0] or 0
         for j in list(self.reg.values()) if self.reg else []:
-            self.dfv -= int(j["size"] / len(j["hash"]) * len(j["need"]))
+            self.dfv -= int(j["size"] / (len(j["hash"]) or 999) * len(j["need"]))
 
         if already_written:
             sz = 0
@@ -780,6 +784,20 @@ class AuthSrv(object):
         self.line_ctr = 0
         self.indent = ""
 
+        # fwd-decl
+        self.vfs = VFS(log_func, "", "", AXS(), {})
+        self.acct: dict[str, str] = {}
+        self.iacct: dict[str, str] = {}
+        self.grps: dict[str, list[str]] = {}
+        self.re_pwd: Optional[re.Pattern] = None
+
+        # all volumes observed since last restart
+        self.idp_vols: dict[str, str] = {}  # vpath->abspath
+
+        # all users/groups observed since last restart
+        self.idp_accs: dict[str, list[str]] = {}  # username->groupnames
+        self.idp_usr_gh: dict[str, str] = {}  # username->group-header-value (cache)
+
         self.mutex = threading.Lock()
         self.reload()
@@ -797,6 +815,86 @@ class AuthSrv(object):
 
             yield prev, True
 
+    def idp_checkin(
+        self, broker: Optional["BrokerCli"], uname: str, gname: str
+    ) -> bool:
+        if uname in self.acct:
+            return False
+
+        if self.idp_usr_gh.get(uname) == gname:
+            return False
+
+        gnames = [x.strip() for x in self.args.idp_gsep.split(gname)]
+        gnames.sort()
+
+        with self.mutex:
+            self.idp_usr_gh[uname] = gname
+            if self.idp_accs.get(uname) == gnames:
+                return False
+
+            self.idp_accs[uname] = gnames
+
+            t = "reinitializing due to new user from IdP: [%s:%s]"
+            self.log(t % (uname, gnames), 3)
+
+            if not broker:
+                # only true for tests
+                self._reload()
+                return True
+
+        broker.ask("_reload_blocking", False).get()
+        return True
+
+    def _map_volume_idp(
+        self,
+        src: str,
+        dst: str,
+        mount: dict[str, str],
+        daxs: dict[str, AXS],
+        mflags: dict[str, dict[str, Any]],
+        un_gns: dict[str, list[str]],
+    ) -> list[tuple[str, str, str, str]]:
+        ret: list[tuple[str, str, str, str]] = []
+        visited = set()
+        src0 = src  # abspath
+        dst0 = dst  # vpath
+
+        un_gn = [(un, gn) for un, gns in un_gns.items() for gn in gns]
+        if not un_gn:
+            # ensure volume creation if there's no users
+            un_gn = [("", "")]
+
+        for un, gn in un_gn:
+            # if ap/vp has a user/group placeholder, make sure to keep
+            # track so the same user/group is mapped when setting perms;
+            # otherwise clear un/gn to indicate it's a regular volume
+
+            src1 = src0.replace("${u}", un or "\n")
+            dst1 = dst0.replace("${u}", un or "\n")
+            if src0 == src1 and dst0 == dst1:
+                un = ""
+
+            src = src1.replace("${g}", gn or "\n")
+            dst = dst1.replace("${g}", gn or "\n")
+            if src == src1 and dst == dst1:
+                gn = ""
+
+            if "\n" in (src + dst):
+                continue
+
+            label = "%s\n%s" % (src, dst)
+            if label in visited:
+                continue
+            visited.add(label)
+
+            src, dst = self._map_volume(src, dst, mount, daxs, mflags)
+            if src:
+                ret.append((src, dst, un, gn))
+                if un or gn:
+                    self.idp_vols[dst] = src
+
+        return ret
+
     def _map_volume(
         self,
         src: str,
@@ -804,7 +902,11 @@ class AuthSrv(object):
         mount: dict[str, str],
         daxs: dict[str, AXS],
         mflags: dict[str, dict[str, Any]],
-    ) -> None:
+    ) -> tuple[str, str]:
+        src = os.path.expandvars(os.path.expanduser(src))
+        src = absreal(src)
+        dst = dst.strip("/")
+
         if dst in mount:
             t = "multiple filesystem-paths mounted at [/{}]:\n  [{}]\n  [{}]"
             self.log(t.format(dst, mount[dst], src), c=1)
@@ -825,6 +927,7 @@ class AuthSrv(object):
         mount[dst] = src
         daxs[dst] = AXS()
         mflags[dst] = {}
+        return (src, dst)
 
     def _e(self, desc: Optional[str] = None) -> None:
         if not self.args.vc or not self.line_ctr:
@@ -852,31 +955,76 @@ class AuthSrv(object):
 
         self.log(t.format(self.line_ctr, c, self.indent, ln, desc))
 
+    def _all_un_gn(
+        self,
+        acct: dict[str, str],
+        grps: dict[str, list[str]],
+    ) -> dict[str, list[str]]:
+        """
+        generate list of all confirmed pairs of username/groupname seen since last restart;
+        in case of conflicting group memberships then it is selected as follows:
+          * any non-zero value from IdP group header
+          * otherwise take --grps / [groups]
+        """
+        ret = {un: gns[:] for un, gns in self.idp_accs.items()}
+        ret.update({zs: [""] for zs in acct if zs not in ret})
+        for gn, uns in grps.items():
+            for un in uns:
+                try:
+                    ret[un].append(gn)
+                except:
+                    ret[un] = [gn]
+
+        return ret
+
     def _parse_config_file(
         self,
         fp: str,
         cfg_lines: list[str],
         acct: dict[str, str],
+        grps: dict[str, list[str]],
         daxs: dict[str, AXS],
         mflags: dict[str, dict[str, Any]],
         mount: dict[str, str],
     ) -> None:
         self.line_ctr = 0
 
-        expand_config_file(cfg_lines, fp, "")
+        expand_config_file(self.log, cfg_lines, fp, "")
         if self.args.vc:
             lns = ["{:4}: {}".format(n, s) for n, s in enumerate(cfg_lines, 1)]
             self.log("expanded config file (unprocessed):\n" + "\n".join(lns))
 
         cfg_lines = upgrade_cfg_fmt(self.log, self.args, cfg_lines, fp)
 
+        # due to IdP, volumes must be parsed after users and groups;
+        # do volumes in a 2nd pass to allow arbitrary order in config files
+        for npass in range(1, 3):
+            if self.args.vc:
+                self.log("parsing config files; pass %d/%d" % (npass, 2))
+            self._parse_config_file_2(cfg_lines, acct, grps, daxs, mflags, mount, npass)
+
+    def _parse_config_file_2(
+        self,
+        cfg_lines: list[str],
+        acct: dict[str, str],
+        grps: dict[str, list[str]],
+        daxs: dict[str, AXS],
+        mflags: dict[str, dict[str, Any]],
+        mount: dict[str, str],
+        npass: int,
+    ) -> None:
+        self.line_ctr = 0
+        all_un_gn = self._all_un_gn(acct, grps)
+
         cat = ""
         catg = "[global]"
         cata = "[accounts]"
+        catgrp = "[groups]"
         catx = "accs:"
         catf = "flags:"
         ap: Optional[str] = None
         vp: Optional[str] = None
+        vols: list[tuple[str, str, str, str]] = []
         for ln in cfg_lines:
             self.line_ctr += 1
             ln = ln.split(" #")[0].strip()
@@ -889,7 +1037,7 @@ class AuthSrv(object):
             subsection = ln in (catx, catf)
             if ln.startswith("[") or subsection:
                 self._e()
-                if ap is None and vp is not None:
+                if npass > 1 and ap is None and vp is not None:
                     t = "the first line after [/{}] must be a filesystem path to share on that volume"
                     raise Exception(t.format(vp))
 
@@ -905,6 +1053,8 @@ class AuthSrv(object):
                 self._l(ln, 6, t)
             elif ln == cata:
                 self._l(ln, 5, "begin user-accounts section")
+            elif ln == catgrp:
+                self._l(ln, 5, "begin user-groups section")
             elif ln.startswith("[/"):
                 vp = ln[1:-1].strip("/")
                 self._l(ln, 2, "define volume at URL [/{}]".format(vp))
@@ -941,15 +1091,39 @@ class AuthSrv(object):
                     raise Exception(t + SBADCFG)
                 continue
 
+            if cat == catgrp:
+                try:
+                    gn, zs1 = [zs.strip() for zs in ln.split(":", 1)]
+                    uns = [zs.strip() for zs in zs1.split(",")]
+                    t = "group [%s] = " % (gn,)
+                    t += ", ".join("user [%s]" % (x,) for x in uns)
+                    self._l(ln, 5, t)
+                    grps[gn] = uns
+                except:
+                    t = 'lines inside the [groups] section must be "groupname: user1, user2, user..."'
+                    raise Exception(t + SBADCFG)
+                continue
+
             if vp is not None and ap is None:
+                if npass != 2:
+                    continue
+
                 ap = ln
-                ap = os.path.expandvars(os.path.expanduser(ap))
-                ap = absreal(ap)
                 self._l(ln, 2, "bound to filesystem-path [{}]".format(ap))
-                self._map_volume(ap, vp, mount, daxs, mflags)
+                vols = self._map_volume_idp(ap, vp, mount, daxs, mflags, all_un_gn)
+                if not vols:
+                    ap = vp = None
+                    self._l(ln, 2, "└─no users/groups known; was not mapped")
+                elif len(vols) > 1:
+                    for vol in vols:
+                        self._l(ln, 2, "└─mapping: [%s] => [%s]" % (vol[1], vol[0]))
                 continue
 
             if cat == catx:
+                if npass != 2 or not ap:
+                    # not stage2, or unmapped ${u}/${g}
+                    continue
+
                 err = ""
                 try:
                     self._l(ln, 5, "volume access config:")
@@ -960,14 +1134,20 @@ class AuthSrv(object):
                     if " " in re.sub(", *", "", sv).strip():
                         err = "list of users is not comma-separated; "
                         raise Exception(err)
-                    assert vp is not None
-                    self._read_vol_str(sk, sv.replace(" ", ""), daxs[vp], mflags[vp])
+                    sv = sv.replace(" ", "")
+                    self._read_vol_str_idp(sk, sv, vols, all_un_gn, daxs, mflags)
                     continue
+                except CfgEx:
+                    raise
                 except:
                     err += "accs entries must be 'rwmdgGhaA.: user1, user2, ...'"
-                    raise Exception(err + SBADCFG)
+                    raise CfgEx(err + SBADCFG)
 
             if cat == catf:
+                if npass != 2 or not ap:
+                    # not stage2, or unmapped ${u}/${g}
+                    continue
+
                 err = ""
                 try:
                     self._l(ln, 6, "volume-specific config:")
@@ -984,11 +1164,14 @@ class AuthSrv(object):
                         else:
                             fstr += ",{}={}".format(sk, sv)
-                            assert vp is not None
-                            self._read_vol_str("c", fstr[1:], daxs[vp], mflags[vp])
+                            self._read_vol_str_idp(
+                                "c", fstr[1:], vols, all_un_gn, daxs, mflags
+                            )
                             fstr = ""
                     if fstr:
-                        assert vp is not None
-                        self._read_vol_str("c", fstr[1:], daxs[vp], mflags[vp])
+                        self._read_vol_str_idp(
+                            "c", fstr[1:], vols, all_un_gn, daxs, mflags
+                        )
                     continue
                 except:
                     err += "flags entries (volflags) must be one of the following:\n  'flag1, flag2, ...'\n  'key: value'\n  'flag1, flag2, key: value'"
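The `except CfgEx: raise` two hunks up is the point of the new exception class: a bare `except:` used to swallow the precise config error and replace it with a generic message, so specific errors now ride an exception type that the catch-all re-raises untouched. A distilled sketch of the pattern:

```python
class CfgEx(Exception):
    pass

def parse_entry(ln: str) -> None:
    try:
        if ln.startswith("@"):
            raise CfgEx("group [%s] is not defined" % (ln[1:],))
        int(ln)  # stand-in for the actual parsing work
    except CfgEx:
        raise  # the precise message survives
    except:
        raise CfgEx("accs entries must be 'rwmdgGhaA.: user1, user2, ...'")

try:
    parse_entry("@admins")
except CfgEx as ex:
    print(ex)  # group [admins] is not defined, not the generic fallback
```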
@@ -999,12 +1182,18 @@ class AuthSrv(object):
         self._e()
         self.line_ctr = 0
 
-    def _read_vol_str(
-        self, lvl: str, uname: str, axs: AXS, flags: dict[str, Any]
+    def _read_vol_str_idp(
+        self,
+        lvl: str,
+        uname: str,
+        vols: list[tuple[str, str, str, str]],
+        un_gns: dict[str, list[str]],
+        axs: dict[str, AXS],
+        flags: dict[str, dict[str, Any]],
     ) -> None:
         if lvl.strip("crwmdgGhaA."):
             t = "%s,%s" % (lvl, uname) if uname else lvl
-            raise Exception("invalid config value (volume or volflag): %s" % (t,))
+            raise CfgEx("invalid config value (volume or volflag): %s" % (t,))
 
         if lvl == "c":
             # here, 'uname' is not a username; it is a volflag name... sorry
@@ -1019,16 +1208,62 @@ class AuthSrv(object):
|
||||
while "," in uname:
|
||||
# one or more bools before the final flag; eat them
|
||||
n1, uname = uname.split(",", 1)
|
||||
self._read_volflag(flags, n1, True, False)
|
||||
for _, vp, _, _ in vols:
|
||||
self._read_volflag(flags[vp], n1, True, False)
|
||||
|
||||
for _, vp, _, _ in vols:
|
||||
self._read_volflag(flags[vp], uname, cval, False)
|
||||
|
||||
self._read_volflag(flags, uname, cval, False)
|
||||
return
|
||||
|
||||
if uname == "":
|
||||
uname = "*"
|
||||
|
||||
junkset = set()
|
||||
unames = []
|
||||
for un in uname.replace(",", " ").strip().split():
|
||||
if un.startswith("@"):
|
||||
grp = un[1:]
|
||||
uns = [x[0] for x in un_gns.items() if grp in x[1]]
|
||||
if grp == "${g}":
|
||||
unames.append(un)
|
||||
elif not uns and not self.args.idp_h_grp:
|
||||
t = "group [%s] must be defined with --grp argument (or in a [groups] config section)"
|
||||
raise CfgEx(t % (grp,))
|
||||
|
||||
unames.extend(uns)
|
||||
else:
|
||||
unames.append(un)
|
||||
|
||||
# unames may still contain ${u} and ${g} so now expand those;
|
||||
un_gn = [(un, gn) for un, gns in un_gns.items() for gn in gns]
|
||||
|
||||
for src, dst, vu, vg in vols:
|
||||
unames2 = set(unames)
|
||||
|
||||
if "${u}" in unames:
|
||||
if not vu:
|
||||
t = "cannot use ${u} in accs of volume [%s] because the volume url does not contain ${u}"
|
||||
raise CfgEx(t % (src,))
|
||||
unames2.add(vu)
|
||||
|
||||
if "@${g}" in unames:
|
||||
if not vg:
|
||||
t = "cannot use @${g} in accs of volume [%s] because the volume url does not contain @${g}"
|
||||
raise CfgEx(t % (src,))
|
||||
unames2.update([un for un, gn in un_gn if gn == vg])
|
||||
|
||||
if "${g}" in unames:
|
||||
t = 'the accs of volume [%s] contains "${g}" but the only supported way of specifying that is "@${g}"'
|
||||
raise CfgEx(t % (src,))
|
||||
|
||||
unames2.discard("${u}")
|
||||
unames2.discard("@${g}")
|
||||
|
||||
self._read_vol_str(lvl, list(unames2), axs[dst])
|
||||
|
||||
def _read_vol_str(self, lvl: str, unames: list[str], axs: AXS) -> None:
|
||||
junkset = set()
|
||||
for un in unames:
|
||||
for alias, mapping in [
|
||||
("h", "gh"),
|
||||
("G", "gG"),
|
||||
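A minimal standalone sketch of the group-expansion rule introduced above: an `@name` token expands to every user whose group list contains `name`, while `@${g}` survives as a placeholder until the per-volume pass (function name and shapes here are illustrative, not the real API):

def expand_group_tokens(tokens, un_gns):
    # un_gns: {username: [groupname, ...]}, shaped like the all_un_gn mapping above
    unames = []
    for un in tokens:
        if un.startswith("@"):
            grp = un[1:]
            if grp == "${g}":
                unames.append(un)  # resolved later, per volume
            else:
                unames.extend(u for u, gs in un_gns.items() if grp in gs)
        else:
            unames.append(un)
    return unames

# expand_group_tokens(["ed", "@su"], {"ed": [], "root": ["su"]}) -> ["ed", "root"]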
@@ -1105,12 +1340,18 @@ class AuthSrv(object):
then supplementing with config files
before finally building the VFS
"""
with self.mutex:
self._reload()

def _reload(self) -> None:
acct: dict[str, str] = {}  # username:password
grps: dict[str, list[str]] = {}  # groupname:usernames
daxs: dict[str, AXS] = {}
mflags: dict[str, dict[str, Any]] = {}  # moutpoint:flags
mount: dict[str, str] = {}  # dst:src (mountpoint:realpath)

self.idp_vols = {}  # yolo

if self.args.a:
# list of username:password
for x in self.args.a:
@@ -1121,9 +1362,22 @@ class AuthSrv(object):
t = '\n  invalid value "{}" for argument -a, must be username:password'
raise Exception(t.format(x))

if self.args.grp:
# list of groupname:username,username,...
for x in self.args.grp:
try:
# accept both = and : as separator between groupname and usernames,
# accept both , and : as separators between usernames
zs1, zs2 = x.replace("=", ":").split(":", 1)
grps[zs1] = zs2.replace(":", ",").split(",")
except:
t = '\n  invalid value "{}" for argument --grp, must be groupname:username1,username2,...'
raise Exception(t.format(x))

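The separator normalization above means the `=`/`:` and `,`/`:` spellings of --grp parse identically; a quick self-contained illustration:

def parse_grp(x):
    # "--grp su=ed:root" and "--grp su:ed,root" are equivalent
    gn, users = x.replace("=", ":").split(":", 1)
    return gn, users.replace(":", ",").split(",")

assert parse_grp("su=ed:root") == parse_grp("su:ed,root") == ("su", ["ed", "root"])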
if self.args.v:
# list of src:dst:permset:permset:...
# permset is <rwmdgGhaA.>[,username][,username] or <c>,<flag>[=args]
all_un_gn = self._all_un_gn(acct, grps)
for v_str in self.args.v:
m = re_vol.match(v_str)
if not m:
@@ -1133,20 +1387,19 @@ class AuthSrv(object):
if WINDOWS:
src = uncyg(src)

# print("\n".join([src, dst, perms]))
src = absreal(src)
dst = dst.strip("/")
self._map_volume(src, dst, mount, daxs, mflags)
vols = self._map_volume_idp(src, dst, mount, daxs, mflags, all_un_gn)

for x in perms.split(":"):
lvl, uname = x.split(",", 1) if "," in x else [x, ""]
self._read_vol_str(lvl, uname, daxs[dst], mflags[dst])
self._read_vol_str_idp(lvl, uname, vols, all_un_gn, daxs, mflags)

if self.args.c:
for cfg_fn in self.args.c:
lns: list[str] = []
try:
self._parse_config_file(cfg_fn, lns, acct, daxs, mflags, mount)
self._parse_config_file(
cfg_fn, lns, acct, grps, daxs, mflags, mount
)

zs = "#\033[36m cfg files in "
zst = [x[len(zs) :] for x in lns if x.startswith(zs)]
@@ -1177,7 +1430,7 @@ class AuthSrv(object):

mount = cased

if not mount:
if not mount and not self.args.idp_h_usr:
# -h says our defaults are CWD at root and read/write for everyone
axs = AXS(["*"], ["*"], None, None)
vfs = VFS(self.log_func, absreal("."), "", axs, {})
@@ -1213,9 +1466,13 @@ class AuthSrv(object):
vol.all_vps.sort(key=lambda x: len(x[0]), reverse=True)
vol.root = vfs

zss = set(acct)
zss.update(self.idp_accs)
zss.discard("*")
unames = ["*"] + list(sorted(zss))

for perm in "read write move del get pget html admin dot".split():
axs_key = "u" + perm
unames = ["*"] + list(acct.keys())
for vp, vol in vfs.all_vols.items():
zx = getattr(vol.axs, axs_key)
if "*" in zx:
@@ -1249,18 +1506,20 @@ class AuthSrv(object):
]:
for usr in d:
all_users[usr] = 1
if usr != "*" and usr not in acct:
if usr != "*" and usr not in acct and usr not in self.idp_accs:
missing_users[usr] = 1
if "*" not in d:
associated_users[usr] = 1

if missing_users:
self.log(
"you must -a the following users: "
+ ", ".join(k for k in sorted(missing_users)),
c=1,
)
raise Exception(BAD_CFG)
zs = ", ".join(k for k in sorted(missing_users))
if self.args.idp_h_usr:
t = "the following users are unknown, and assumed to come from IdP: "
self.log(t + zs, c=6)
else:
t = "you must -a the following users: "
self.log(t + zs, c=1)
raise Exception(BAD_CFG)

if LEELOO_DALLAS in all_users:
raise Exception("sorry, reserved username: " + LEELOO_DALLAS)
@@ -1401,13 +1660,6 @@ class AuthSrv(object):
if not vol.flags.get("robots"):
vol.flags["norobots"] = True

for vol in vfs.all_vols.values():
h = [vol.flags.get("html_head", self.args.html_head)]
if vol.flags.get("norobots"):
h.insert(0, META_NOBOTS)

vol.flags["html_head"] = "\n".join([x for x in h if x])

for vol in vfs.all_vols.values():
if self.args.no_vthumb:
vol.flags["dvthumb"] = True
@@ -1485,7 +1737,7 @@ class AuthSrv(object):
if k not in vol.flags:
vol.flags[k] = getattr(self.args, k)

for k in ("nrand",):
for k in ("nrand", "u2abort"):
if k in vol.flags:
vol.flags[k] = int(vol.flags[k])

@@ -1749,20 +2001,37 @@ class AuthSrv(object):
except Pebkac:
self.warn_anonwrite = True

with self.mutex:
self.vfs = vfs
self.acct = acct
self.iacct = {v: k for k, v in acct.items()}
idp_err = "WARNING! The following IdP volumes are mounted directly below another volume where anonymous users can read and/or write files. This is a SECURITY HAZARD!! When copyparty is restarted, it will not know about these IdP volumes yet. These volumes will then be accessible by anonymous users UNTIL one of the users associated with their volume sends a request to the server. RECOMMENDATION: You should create a restricted volume where nobody can read/write files, and make sure that all IdP volumes are configured to appear somewhere below that volume."
for idp_vp in self.idp_vols:
parent_vp = vsplit(idp_vp)[0]
vn, _ = vfs.get(parent_vp, "*", False, False)
zs = (
"READABLE"
if "*" in vn.axs.uread
else "WRITABLE"
if "*" in vn.axs.uwrite
else ""
)
if zs:
t = '\nWARNING: Volume "/%s" appears below "/%s" and would be WORLD-%s'
idp_err += t % (idp_vp, vn.vpath, zs)
if "\n" in idp_err:
self.log(idp_err, 1)

self.re_pwd = None
pwds = [re.escape(x) for x in self.iacct.keys()]
if pwds:
if self.ah.on:
zs = r"(\[H\] pw:.*|[?&]pw=)([^&]+)"
else:
zs = r"(\[H\] pw:.*|=)(" + "|".join(pwds) + r")([]&; ]|$)"
self.vfs = vfs
self.acct = acct
self.grps = grps
self.iacct = {v: k for k, v in acct.items()}

self.re_pwd = re.compile(zs)
self.re_pwd = None
pwds = [re.escape(x) for x in self.iacct.keys()]
if pwds:
if self.ah.on:
zs = r"(\[H\] pw:.*|[?&]pw=)([^&]+)"
else:
zs = r"(\[H\] pw:.*|=)(" + "|".join(pwds) + r")([]&; ]|$)"

self.re_pwd = re.compile(zs)

def setup_pwhash(self, acct: dict[str, str]) -> None:
self.ah = PWHash(self.args)
@@ -2004,6 +2273,12 @@ class AuthSrv(object):
ret.append("  {}: {}".format(u, p))
ret.append("")

if self.grps:
ret.append("[groups]")
for gn, uns in self.grps.items():
ret.append("  %s: %s" % (gn, ", ".join(uns)))
ret.append("")

for vol in self.vfs.all_vols.values():
ret.append("[/{}]".format(vol.vpath))
ret.append("  " + vol.realpath)
@@ -2101,27 +2376,50 @@ def split_cfg_ln(ln: str) -> dict[str, Any]:
return ret


def expand_config_file(ret: list[str], fp: str, ipath: str) -> None:
def expand_config_file(
log: Optional["NamedLogger"], ret: list[str], fp: str, ipath: str
) -> None:
"""expand all % file includes"""
fp = absreal(fp)
if len(ipath.split(" -> ")) > 64:
raise Exception("hit max depth of 64 includes")

if os.path.isdir(fp):
names = os.listdir(fp)
crumb = "#\033[36m cfg files in {} => {}\033[0m".format(fp, names)
ret.append(crumb)
for fn in sorted(names):
names = list(sorted(os.listdir(fp)))
cnames = [x for x in names if x.lower().endswith(".conf")]
if not cnames:
t = "warning: tried to read config-files from folder '%s' but it does not contain any "
if names:
t += ".conf files; the following files were ignored: %s"
t = t % (fp, ", ".join(names[:8]))
else:
t += "files at all"
t = t % (fp,)

if log:
log(t, 3)

ret.append("#\033[33m %s\033[0m" % (t,))
else:
zs = "#\033[36m cfg files in %s => %s\033[0m" % (fp, cnames)
ret.append(zs)

for fn in cnames:
fp2 = os.path.join(fp, fn)
if not fp2.endswith(".conf") or fp2 in ipath:
if fp2 in ipath:
continue

expand_config_file(ret, fp2, ipath)
expand_config_file(log, ret, fp2, ipath)

if ret[-1] == crumb:
# no config files below; remove breadcrumb
ret.pop()
return

if not os.path.exists(fp):
t = "warning: tried to read config from '%s' but the file/folder does not exist"
t = t % (fp,)
if log:
log(t, 3)

ret.append("#\033[31m %s\033[0m" % (t,))
return

ipath += " -> " + fp
@@ -2135,7 +2433,7 @@ def expand_config_file(ret: list[str], fp: str, ipath: str) -> None:
fp2 = ln[1:].strip()
fp2 = os.path.join(os.path.dirname(fp), fp2)
ofs = len(ret)
expand_config_file(ret, fp2, ipath)
expand_config_file(log, ret, fp2, ipath)
for n in range(ofs, len(ret)):
ret[n] = pad + ret[n]
continue

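The include expansion above is plain depth-limited recursion; a minimal sketch of the same shape (hypothetical helper; the real function also handles directories, missing files, and indentation-preserving includes):

import os

def expand_includes(out, fp, ipath=""):
    # same 64-include depth guard as above
    if len(ipath.split(" -> ")) > 64:
        raise Exception("hit max depth of 64 includes")
    fp = os.path.abspath(fp)
    ipath += " -> " + fp
    with open(fp, encoding="utf-8") as f:
        for ln in f:
            ln = ln.rstrip("\n")
            if ln.strip().startswith("%"):
                # recurse into the referenced file, relative to this one
                fp2 = ln.strip()[1:].strip()
                expand_includes(out, os.path.join(os.path.dirname(fp), fp2), ipath)
            else:
                out.append(ln)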
@@ -20,7 +20,6 @@ def vf_bmap() -> dict[str, str]:
"no_thumb": "dthumb",
"no_vthumb": "dvthumb",
"no_athumb": "dathumb",
"th_no_crop": "nocrop",
}
for k in (
"dotsrch",
@@ -56,6 +55,8 @@ def vf_vmap() -> dict[str, str]:
"re_maxage": "scan",
"th_convt": "convt",
"th_size": "thsize",
"th_crop": "crop",
"th_x3": "th3x",
}
for k in (
"dbd",
@@ -65,6 +66,7 @@ def vf_vmap() -> dict[str, str]:
"rm_retry",
"sort",
"unlist",
"u2abort",
"u2ts",
):
ret[k] = k
@@ -115,6 +117,7 @@ flagcats = {
"hardlink": "does dedup with hardlinks instead of symlinks",
"neversymlink": "disables symlink fallback; full copy instead",
"copydupes": "disables dedup, always saves full copies of dupes",
"sparse": "force use of sparse files, mainly for s3-backed storage",
"daw": "enable full WebDAV write support (dangerous);\nPUT-operations will now \033[1;31mOVERWRITE\033[0;35m existing files",
"nosub": "forces all uploads into the top folder of the vfs",
"magic": "enables filetype detection for nameless uploads",
@@ -129,6 +132,7 @@ flagcats = {
"rand": "force randomized filenames, 9 chars long by default",
"nrand=N": "randomized filenames are N chars long",
"u2ts=fc": "[f]orce [c]lient-last-modified or [u]pload-time",
"u2abort=1": "allow aborting unfinished uploads? 0=no 1=strict 2=ip-chk 3=acct-chk",
"sz=1k-3m": "allow filesizes between 1 KiB and 3MiB",
"df=1g": "ensure 1 GiB free disk space",
},
@@ -172,7 +176,8 @@ flagcats = {
"dathumb": "disables audio thumbnails (spectrograms)",
"dithumb": "disables image thumbnails",
"thsize": "thumbnail res; WxH",
"nocrop": "disable center-cropping by default",
"crop": "center-cropping (y/n/fy/fn)",
"th3x": "3x resolution (y/n/fy/fn)",
"convt": "conversion timeout in seconds",
},
"handlers\n(better explained in --help-handlers)": {

@@ -20,6 +20,7 @@ from .authsrv import VFS
from .bos import bos
from .util import (
Daemon,
ODict,
Pebkac,
exclude_dotfiles,
fsenc,
@@ -217,7 +218,7 @@ class FtpFs(AbstractedFS):
raise FSE("Cannot open existing file for writing")

self.validpath(ap)
return open(fsenc(ap), mode)
return open(fsenc(ap), mode, self.args.iobuf)

def chdir(self, path: str) -> None:
nwd = join(self.cwd, path)
@@ -299,7 +300,7 @@ class FtpFs(AbstractedFS):

vp = join(self.cwd, path).lstrip("/")
try:
self.hub.up2k.handle_rm(self.uname, self.h.cli_ip, [vp], [], False)
self.hub.up2k.handle_rm(self.uname, self.h.cli_ip, [vp], [], False, False)
except Exception as ex:
raise FSE(str(ex))

@@ -409,7 +410,7 @@ class FtpHandler(FTPHandler):
if cip.startswith("::ffff:"):
cip = cip[7:]

if self.args.ftp_ipa_re and not self.args.ftp_ipa_re.match(cip):
if self.args.ftp_ipa_nm and not self.args.ftp_ipa_nm.map(cip):
logging.warning("client rejected (--ftp-ipa): %s", cip)
self.connected = False
conn.close()
@@ -545,6 +546,8 @@ class Ftpd(object):
if self.args.ftp4:
ips = [x for x in ips if ":" not in x]

ips = list(ODict.fromkeys(ips))  # dedup

ioloop = IOLoop()
for ip in ips:
for h, lp in hs:

@@ -36,6 +36,7 @@ from .bos import bos
from .star import StreamTar
from .sutil import StreamArc, gfilter
from .szip import StreamZip
from .util import unquote  # type: ignore
from .util import (
APPLESAN_RE,
BITNESS,
@@ -84,7 +85,6 @@ from .util import (
sendfile_py,
undot,
unescape_cookie,
unquote,  # type: ignore
unquotep,
vjoin,
vol_san,
@@ -170,16 +170,11 @@ class HttpCli(object):
self.can_dot = False
self.out_headerlist: list[tuple[str, str]] = []
self.out_headers: dict[str, str] = {}
self.html_head = " "
# post
self.parser: Optional[MultipartParser] = None
# end placeholders

self.bufsz = 1024 * 32
h = self.args.html_head
if self.args.no_robots:
h = META_NOBOTS + (("\n" + h) if h else "")
self.html_head = h
self.html_head = ""

def log(self, msg: str, c: Union[int, str] = 0) -> None:
ptn = self.asrv.re_pwd
@@ -231,13 +226,11 @@ class HttpCli(object):
"Vary": "Origin, PW, Cookie",
"Cache-Control": "no-store, max-age=0",
}
if self.args.no_robots:
self.out_headers["X-Robots-Tag"] = "noindex, nofollow"

if self.is_banned():
if self.args.early_ban and self.is_banned():
return False

if self.args.ipa_re and not self.args.ipa_re.match(self.conn.addr[0]):
if self.conn.ipa_nm and not self.conn.ipa_nm.map(self.conn.addr[0]):
self.log("client rejected (--ipa)", 3)
self.terse_reply(b"", 500)
return False
@@ -300,6 +293,7 @@ class HttpCli(object):
zs = "%s:%s" % self.s.getsockname()[:2]
self.host = zs[7:] if zs.startswith("::ffff:") else zs

trusted_xff = False
n = self.args.rproxy
if n:
zso = self.headers.get(self.args.xff_hdr)
@@ -316,21 +310,26 @@ class HttpCli(object):
self.log(t.format(self.args.rproxy, zso), c=3)

pip = self.conn.addr[0]
if self.args.xff_re and not self.args.xff_re.match(pip):
t = 'got header "%s" from untrusted source "%s" claiming the true client ip is "%s" (raw value: "%s"); if you trust this, you must allowlist this proxy with "--xff-src=%s"'
xffs = self.conn.xff_nm
if xffs and not xffs.map(pip):
t = 'got header "%s" from untrusted source "%s" claiming the true client ip is "%s" (raw value: "%s"); if you trust this, you must allowlist this proxy with "--xff-src=%s"%s'
if self.headers.get("cf-connecting-ip"):
t += " Alternatively, if you are behind cloudflare, it is better to specify these two instead: --xff-hdr=cf-connecting-ip --xff-src=any"
t += ' Note: if you are behind cloudflare, then this default header is not a good choice; please first make sure your local reverse-proxy (if any) does not allow non-cloudflare IPs from providing cf-* headers, and then add this additional global setting: "--xff-hdr=cf-connecting-ip"'
else:
t += ' Note: depending on your reverse-proxy, and/or WAF, and/or other intermediates, you may want to read the true client IP from another header by also specifying "--xff-hdr=SomeOtherHeader"'
zs = (
".".join(pip.split(".")[:2]) + "."
if "." in pip
else ":".join(pip.split(":")[:4]) + ":"
)
self.log(t % (self.args.xff_hdr, pip, cli_ip, zso, zs), 3)
) + "0.0/16"
zs2 = ' or "--xff-src=lan"' if self.conn.xff_lan.map(pip) else ""
self.log(t % (self.args.xff_hdr, pip, cli_ip, zso, zs, zs2), 3)
else:
self.ip = cli_ip
self.is_vproxied = bool(self.args.R)
self.log_src = self.conn.set_rproxy(self.ip)
self.host = self.headers.get("x-forwarded-host") or self.host
trusted_xff = True

if self.is_banned():
return False
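The `xff_nm.map(pip)` check above is, at its core, a CIDR membership test against the --xff-src allowlist; a rough standalone equivalent using only the stdlib (the real NetMap class differs):

import ipaddress

def trusted_proxy(pip, xff_src_cidrs):
    # True if the peer IP falls inside any allowlisted network
    ip = ipaddress.ip_address(pip)
    return any(ip in ipaddress.ip_network(c) for c in xff_src_cidrs)

# trusted_proxy("10.0.0.2", ["10.0.0.0/8", "::1/128"]) -> True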
@@ -458,9 +457,56 @@ class HttpCli(object):

if self.args.idp_h_usr:
self.pw = ""
self.uname = self.headers.get(self.args.idp_h_usr) or "*"
if self.uname not in self.asrv.vfs.aread:
self.log("unknown username: [%s]" % (self.uname), 1)
idp_usr = self.headers.get(self.args.idp_h_usr) or ""
if idp_usr:
idp_grp = (
self.headers.get(self.args.idp_h_grp) or ""
if self.args.idp_h_grp
else ""
)

if not trusted_xff:
pip = self.conn.addr[0]
xffs = self.conn.xff_nm
trusted_xff = xffs and xffs.map(pip)

trusted_key = (
not self.args.idp_h_key
) or self.args.idp_h_key in self.headers

if trusted_key and trusted_xff:
self.asrv.idp_checkin(self.conn.hsrv.broker, idp_usr, idp_grp)
else:
if not trusted_key:
t = 'the idp-h-key header ("%s") is not present in the request; will NOT trust the other headers saying that the client\'s username is "%s" and group is "%s"'
self.log(t % (self.args.idp_h_key, idp_usr, idp_grp), 3)

if not trusted_xff:
t = 'got IdP headers from untrusted source "%s" claiming the client\'s username is "%s" and group is "%s"; if you trust this, you must allowlist this proxy with "--xff-src=%s"%s'
if not self.args.idp_h_key:
t += " Note: you probably also want to specify --idp-h-key <SECRET-HEADER-NAME> for additional security"

pip = self.conn.addr[0]
zs = (
".".join(pip.split(".")[:2]) + "."
if "." in pip
else ":".join(pip.split(":")[:4]) + ":"
) + "0.0/16"
zs2 = (
' or "--xff-src=lan"' if self.conn.xff_lan.map(pip) else ""
)
self.log(t % (pip, idp_usr, idp_grp, zs, zs2), 3)

idp_usr = "*"
idp_grp = ""

if idp_usr in self.asrv.vfs.aread:
self.uname = idp_usr
self.html_head += "<script>var is_idp=1</script>\n"
else:
self.log("unknown username: [%s]" % (idp_usr), 1)
self.uname = "*"
else:
self.uname = "*"
else:
self.pw = uparam.get("pw") or self.headers.get("pw") or bauth or cookie_pw
@@ -510,6 +556,10 @@ class HttpCli(object):

self.s.settimeout(self.args.s_tbody or None)

if "norobots" in vn.flags:
self.html_head += META_NOBOTS
self.out_headers["X-Robots-Tag"] = "noindex, nofollow"

try:
cors_k = self._cors()
if self.mode in ("GET", "HEAD"):
@@ -518,9 +568,13 @@ class HttpCli(object):
return self.handle_options() and self.keepalive

if not cors_k:
host = self.headers.get("host", "<?>")
origin = self.headers.get("origin", "<?>")
self.log("cors-reject {} from {}".format(self.mode, origin), 3)
raise Pebkac(403, "no surfing")
proto = "https://" if self.is_https else "http://"
guess = "modifying" if (origin and host) else "stripping"
t = "cors-reject %s because request-header Origin='%s' does not match request-protocol '%s' and host '%s' based on request-header Host='%s' (note: if this request is not malicious, check if your reverse-proxy is accidentally %s request headers, in particular 'Origin', for example by running copyparty with --ihead='*' to show all request headers)"
self.log(t % (self.mode, origin, proto, self.host, host, guess), 3)
raise Pebkac(403, "rejected by cors-check")

# getattr(self.mode) is not yet faster than this
if self.mode == "POST":
@@ -651,7 +705,11 @@ class HttpCli(object):

def k304(self) -> bool:
k304 = self.cookies.get("k304")
return k304 == "y" or ("; Trident/" in self.ua and not k304)
return (
k304 == "y"
or (self.args.k304 == 2 and k304 != "n")
or ("; Trident/" in self.ua and not k304)
)

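The updated k304 decision above is effectively a tri-state: the per-client cookie wins, then the server default (--k304 2 meaning on-unless-opted-out), with the old MSIE special case kept; as a standalone truth-table sketch:

def k304(cookie, server_k304, ua):
    # cookie: "y", "n", or "" (unset); server_k304: 0/1/2
    return (
        cookie == "y"
        or (server_k304 == 2 and cookie != "n")
        or ("; Trident/" in ua and not cookie)
    )

# k304("", 2, "Mozilla/5.0") -> True   (server default: enabled)
# k304("n", 2, "Mozilla/5.0") -> False (client opted out)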
def send_headers(
self,
@@ -1552,15 +1610,16 @@ class HttpCli(object):
return enc or "utf-8"

def get_body_reader(self) -> tuple[Generator[bytes, None, None], int]:
bufsz = self.args.s_rd_sz
if "chunked" in self.headers.get("transfer-encoding", "").lower():
return read_socket_chunked(self.sr), -1
return read_socket_chunked(self.sr, bufsz), -1

remains = int(self.headers.get("content-length", -1))
if remains == -1:
self.keepalive = False
return read_socket_unbounded(self.sr), remains
return read_socket_unbounded(self.sr, bufsz), remains
else:
return read_socket(self.sr, remains), remains
return read_socket(self.sr, bufsz, remains), remains

def dump_to_file(self, is_put: bool) -> tuple[int, str, str, int, str, str]:
# post_sz, sha_hex, sha_b64, remains, path, url
@@ -1582,7 +1641,7 @@ class HttpCli(object):
bos.makedirs(fdir)

open_ka: dict[str, Any] = {"fun": open}
open_a = ["wb", 512 * 1024]
open_a = ["wb", self.args.iobuf]

# user-request || config-force
if ("gz" in vfs.flags or "xz" in vfs.flags) and (
@@ -1841,7 +1900,7 @@ class HttpCli(object):
f.seek(ofs)
with open(fp, "wb") as fo:
while nrem:
buf = f.read(min(nrem, 512 * 1024))
buf = f.read(min(nrem, self.args.iobuf))
if not buf:
break

@@ -1863,7 +1922,7 @@ class HttpCli(object):
return "%s %s n%s" % (spd1, spd2, self.conn.nreq)

def handle_post_multipart(self) -> bool:
self.parser = MultipartParser(self.log, self.sr, self.headers)
self.parser = MultipartParser(self.log, self.args, self.sr, self.headers)
self.parser.parse()

file0: list[tuple[str, Optional[str], Generator[bytes, None, None]]] = []
@@ -2092,7 +2151,7 @@ class HttpCli(object):

self.log("writing {} #{} @{} len {}".format(path, chash, cstart, remains))

reader = read_socket(self.sr, remains)
reader = read_socket(self.sr, self.args.s_rd_sz, remains)

f = None
fpool = not self.args.no_fpool and sprs
@@ -2103,7 +2162,7 @@ class HttpCli(object):
except:
pass

f = f or open(fsenc(path), "rb+", 512 * 1024)
f = f or open(fsenc(path), "rb+", self.args.iobuf)

try:
f.seek(cstart[0])
@@ -2126,7 +2185,8 @@ class HttpCli(object):
)
ofs = 0
while ofs < chunksize:
bufsz = min(chunksize - ofs, 4 * 1024 * 1024)
bufsz = max(4 * 1024 * 1024, self.args.iobuf)
bufsz = min(chunksize - ofs, bufsz)
f.seek(cstart[0] + ofs)
buf = f.read(bufsz)
for wofs in cstart[1:]:
@@ -2379,6 +2439,18 @@ class HttpCli(object):
suffix = "-{:.6f}-{}".format(time.time(), dip)
open_args = {"fdir": fdir, "suffix": suffix}

if "replace" in self.uparam:
abspath = os.path.join(fdir, fname)
if not self.can_delete:
self.log("user not allowed to overwrite with ?replace")
elif bos.path.exists(abspath):
try:
bos.unlink(abspath)
t = "overwriting file with new upload: %s"
except:
t = "toctou while deleting for ?replace: %s"
self.log(t % (abspath,))

# reserve destination filename
with ren_open(fname, "wb", fdir=fdir, suffix=suffix) as zfw:
fname = zfw["orz"][1]
@@ -2423,7 +2495,7 @@ class HttpCli(object):
v2 = lim.dfv - lim.dfl
max_sz = min(v1, v2) if v1 and v2 else v1 or v2

with ren_open(tnam, "wb", 512 * 1024, **open_args) as zfw:
with ren_open(tnam, "wb", self.args.iobuf, **open_args) as zfw:
f, tnam = zfw["orz"]
tabspath = os.path.join(fdir, tnam)
self.log("writing to {}".format(tabspath))
@@ -2719,7 +2791,7 @@ class HttpCli(object):
if bos.path.exists(fp):
wunlink(self.log, fp, vfs.flags)

with open(fsenc(fp), "wb", 512 * 1024) as f:
with open(fsenc(fp), "wb", self.args.iobuf) as f:
sz, sha512, _ = hashcopy(p_data, f, self.args.s_wr_slp)

if lim:
@@ -2827,11 +2899,11 @@ class HttpCli(object):
logtail = ""

#
# if request is for foo.js, check if we have foo.js.{gz,br}
# if request is for foo.js, check if we have foo.js.gz

file_ts = 0.0
editions: dict[str, tuple[str, int]] = {}
for ext in ["", ".gz", ".br"]:
for ext in ("", ".gz"):
try:
fs_path = req_path + ext
st = bos.stat(fs_path)
@@ -2876,12 +2948,7 @@ class HttpCli(object):
x.strip()
for x in self.headers.get("accept-encoding", "").lower().split(",")
]
if ".br" in editions and "br" in supported_editions:
is_compressed = True
selected_edition = ".br"
fs_path, file_sz = editions[".br"]
self.out_headers["Content-Encoding"] = "br"
elif ".gz" in editions:
if ".gz" in editions:
is_compressed = True
selected_edition = ".gz"
fs_path, file_sz = editions[".gz"]
@@ -2897,13 +2964,8 @@ class HttpCli(object):
is_compressed = False
selected_edition = "plain"

try:
fs_path, file_sz = editions[selected_edition]
logmsg += "{} ".format(selected_edition.lstrip("."))
except:
# client is old and we only have .br
# (could make brotli a dep to fix this but it's not worth)
raise Pebkac(404)
fs_path, file_sz = editions[selected_edition]
logmsg += "{} ".format(selected_edition.lstrip("."))

#
# partial
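With brotli support dropped above, edition selection reduces to "serve foo.js.gz if it exists, else the plain file"; a condensed sketch of the idea (hypothetical helper; the real code also handles caching headers and decompression for clients without gzip support):

def pick_edition(editions, accept_encoding):
    # editions: {"": (path, size), ".gz": (path, size)}, whichever exist on disk
    if ".gz" in editions:
        client_ok = "gzip" in [x.strip() for x in accept_encoding.lower().split(",")]
        return editions[".gz"], client_ok  # caller decompresses if not client_ok
    return editions[""], False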
@@ -2961,8 +3023,7 @@ class HttpCli(object):
upper = gzip_orig_sz(fs_path)
else:
open_func = open
# 512 kB is optimal for huge files, use 64k
open_args = [fsenc(fs_path), "rb", 64 * 1024]
open_args = [fsenc(fs_path), "rb", self.args.iobuf]
use_sendfile = (
# fmt: off
not self.tls
@@ -3097,6 +3158,7 @@ class HttpCli(object):

bgen = packer(
self.log,
self.args,
fgen,
utf8="utf" in uarg,
pre_crc="crc" in uarg,
@@ -3139,11 +3201,15 @@ class HttpCli(object):

ext = ext.rstrip(".") or "unk"
if len(ext) > 11:
ext = "⋯" + ext[-9:]
ext = "~" + ext[-9:]

return self.tx_svg(ext, exact)

def tx_svg(self, txt: str, small: bool = False) -> bool:
# chrome cannot handle more than ~2000 unique SVGs
chrome = " rv:" not in self.ua
mime, ico = self.ico.get(ext, not exact, chrome)
# so url-param "raster" returns a png/webp instead
# (useragent-sniffing kinshi due to caching proxies)
mime, ico = self.ico.get(txt, not small, "raster" in self.uparam)

lm = formatdate(self.E.t0, usegmt=True)
self.reply(ico, mime=mime, headers={"Last-Modified": lm})
@@ -3170,7 +3236,7 @@ class HttpCli(object):
sz_md = 0
lead = b""
fullfile = b""
for buf in yieldfile(fs_path):
for buf in yieldfile(fs_path, self.args.iobuf):
if sz_md < max_sz:
fullfile += buf
else:
@@ -3243,7 +3309,7 @@ class HttpCli(object):
if fullfile:
self.s.sendall(fullfile)
else:
for buf in yieldfile(fs_path):
for buf in yieldfile(fs_path, self.args.iobuf):
self.s.sendall(html_bescape(buf))

self.s.sendall(html[1])
@@ -3339,6 +3405,8 @@ class HttpCli(object):
self.reply(zb, mime="text/plain; charset=utf-8")
return True

self.html_head += self.vn.flags.get("html_head", "")

html = self.j2s(
"splash",
this=self,
@@ -3354,6 +3422,7 @@ class HttpCli(object):
dbwt=vs["dbwt"],
url_suf=suf,
k304=self.k304(),
k304vis=self.args.k304 > 0,
ver=S_VERSION if self.args.ver else "",
ahttps="" if self.is_https else "https://" + self.host + self.req,
)
@@ -3362,7 +3431,7 @@ class HttpCli(object):

def set_k304(self) -> bool:
v = self.uparam["k304"].lower()
if v == "y":
if v in "yn":
dur = 86400 * 299
else:
dur = 0
@@ -3407,6 +3476,9 @@ class HttpCli(object):
self.reply(pt.encode("utf-8"), status=rc)
return True

if "th" in self.ouparam:
return self.tx_svg("e" + pt[:3])

t = t.format(self.args.SR)
qv = quotep(self.vpaths) + self.ourlq()
html = self.j2s("splash", this=self, qvpath=qv, msg=t)
@@ -3542,9 +3614,6 @@ class HttpCli(object):
return ret

def tx_ups(self) -> bool:
if not self.args.unpost:
raise Pebkac(403, "the unpost feature is disabled in server config")

idx = self.conn.get_u2idx()
if not idx or not hasattr(idx, "p_end"):
raise Pebkac(500, "sqlite3 is not available on the server; cannot unpost")
@@ -3562,7 +3631,20 @@ class HttpCli(object):
if "fk" in vol.flags
and (self.uname in vol.axs.uread or self.uname in vol.axs.upget)
}
for vol in self.asrv.vfs.all_vols.values():

x = self.conn.hsrv.broker.ask(
"up2k.get_unfinished_by_user", self.uname, self.ip
)
uret = x.get()

if not self.args.unpost:
allvols = []
else:
allvols = list(self.asrv.vfs.all_vols.values())

allvols = [x for x in allvols if "e2d" in x.flags]

for vol in allvols:
cur = idx.get_cur(vol.realpath)
if not cur:
continue
@@ -3614,9 +3696,13 @@ class HttpCli(object):
for v in ret:
v["vp"] = self.args.SR + v["vp"]

jtxt = json.dumps(ret, indent=2, sort_keys=True).encode("utf-8", "replace")
self.log("{} #{} {:.2f}sec".format(lm, len(ret), time.time() - t0))
self.reply(jtxt, mime="application/json")
if not allvols:
ret = [{"kinshi": 1}]

jtxt = '{"u":%s,"c":%s}' % (uret, json.dumps(ret, indent=0))
zi = len(uret.split('\n"pd":')) - 1
self.log("%s #%d+%d %.2fsec" % (lm, zi, len(ret), time.time() - t0))
self.reply(jtxt.encode("utf-8", "replace"), mime="application/json")
return True

def handle_rm(self, req: list[str]) -> bool:
@@ -3631,11 +3717,12 @@ class HttpCli(object):
elif self.is_vproxied:
req = [x[len(self.args.SR) :] for x in req]

unpost = "unpost" in self.uparam
nlim = int(self.uparam.get("lim") or 0)
lim = [nlim, nlim] if nlim else []

x = self.conn.hsrv.broker.ask(
"up2k.handle_rm", self.uname, self.ip, req, lim, False
"up2k.handle_rm", self.uname, self.ip, req, lim, False, unpost
)
self.loud_reply(x.get())
return True
@@ -3773,11 +3860,9 @@ class HttpCli(object):
e2d = "e2d" in vn.flags
e2t = "e2t" in vn.flags

self.html_head = vn.flags.get("html_head", "")
if vn.flags.get("norobots") or "b" in self.uparam:
self.html_head += vn.flags.get("html_head", "")
if "b" in self.uparam:
self.out_headers["X-Robots-Tag"] = "noindex, nofollow"
else:
self.out_headers.pop("X-Robots-Tag", None)

is_dir = stat.S_ISDIR(st.st_mode)
fk_pass = False
@@ -3787,12 +3872,15 @@ class HttpCli(object):
if idx and hasattr(idx, "p_end"):
icur = idx.get_cur(dbv.realpath)

th_fmt = self.uparam.get("th")
if self.can_read:
th_fmt = self.uparam.get("th")
if th_fmt is not None:
nothumb = "dthumb" in dbv.flags
if is_dir:
vrem = vrem.rstrip("/")
if icur and vrem:
if nothumb:
pass
elif icur and vrem:
q = "select fn from cv where rd=? and dn=?"
crd, cdn = vrem.rsplit("/", 1) if "/" in vrem else ("", vrem)
# no mojibake support:
@@ -3815,10 +3903,10 @@ class HttpCli(object):
break

if is_dir:
return self.tx_ico("a.folder")
return self.tx_svg("folder")

thp = None
if self.thumbcli:
if self.thumbcli and not nothumb:
thp = self.thumbcli.get(dbv, vrem, int(st.st_mtime), th_fmt)

if thp:
@@ -3829,6 +3917,9 @@ class HttpCli(object):

return self.tx_ico(rem)

elif self.can_write and th_fmt is not None:
return self.tx_svg("upload\nonly")

elif self.can_get and self.avn:
axs = self.avn.axs
if self.uname not in axs.uhtml:
@@ -3973,7 +4064,8 @@ class HttpCli(object):
"idx": e2d,
"itag": e2t,
"dsort": vf["sort"],
"dfull": "nocrop" in vf,
"dcrop": vf["crop"],
"dth3x": vf["th3x"],
"u2ts": vf["u2ts"],
"lifetime": vn.flags.get("lifetime") or 0,
"frand": bool(vn.flags.get("rand")),
@@ -4000,8 +4092,9 @@ class HttpCli(object):
"sb_md": "" if "no_sb_md" in vf else (vf.get("md_sbf") or "y"),
"readme": readme,
"dgrid": "grid" in vf,
"dfull": "nocrop" in vf,
"dsort": vf["sort"],
"dcrop": vf["crop"],
"dth3x": vf["th3x"],
"themes": self.args.themes,
"turbolvl": self.args.turbo,
"u2j": self.args.u2j,

@@ -23,7 +23,7 @@ from .mtag import HAVE_FFMPEG
from .th_cli import ThumbCli
from .th_srv import HAVE_PIL, HAVE_VIPS
from .u2idx import U2idx
from .util import HMaccas, shut_socket
from .util import HMaccas, NetMap, shut_socket

if True:  # pylint: disable=using-constant-test
from typing import Optional, Pattern, Union
@@ -55,6 +55,9 @@ class HttpConn(object):
self.E: EnvParams = self.args.E
self.asrv: AuthSrv = hsrv.asrv  # mypy404
self.u2fh: Util.FHC = hsrv.u2fh  # mypy404
self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
self.xff_nm: Optional[NetMap] = hsrv.xff_nm
self.xff_lan: NetMap = hsrv.xff_lan  # type: ignore
self.iphash: HMaccas = hsrv.broker.iphash
self.bans: dict[str, int] = hsrv.bans
self.aclose: dict[str, int] = hsrv.aclose

@@ -67,6 +67,7 @@ from .util import (
Netdev,
NetMap,
absreal,
build_netmap,
ipnorm,
min_ex,
shut_socket,
@@ -103,7 +104,7 @@ class HttpSrv(object):
self.t0 = time.time()
nsuf = "-n{}-i{:x}".format(nid, os.getpid()) if nid else ""
self.magician = Magician()
self.nm = NetMap([], {})
self.nm = NetMap([], [])
self.ssdp: Optional["SSDPr"] = None
self.gpwd = Garda(self.args.ban_pw)
self.g404 = Garda(self.args.ban_404)
@@ -150,6 +151,10 @@ class HttpSrv(object):
zs = os.path.join(self.E.mod, "web", "deps", "prism.js.gz")
self.prism = os.path.exists(zs)

self.ipa_nm = build_netmap(self.args.ipa)
self.xff_nm = build_netmap(self.args.xff_src)
self.xff_lan = build_netmap("lan")

self.statics: set[str] = set()
self._build_statics()

@@ -191,7 +196,7 @@ class HttpSrv(object):
for fn in df:
ap = absreal(os.path.join(dp, fn))
self.statics.add(ap)
if ap.endswith(".gz") or ap.endswith(".br"):
if ap.endswith(".gz"):
self.statics.add(ap[:-3])

def set_netdevs(self, netdevs: dict[str, Netdev]) -> None:
@@ -199,7 +204,7 @@ class HttpSrv(object):
for ip, _ in self.bound:
ips.add(ip)

self.nm = NetMap(list(ips), netdevs)
self.nm = NetMap(list(ips), list(netdevs))

def start_threads(self, n: int) -> None:
self.tp_nthr += n

@@ -8,7 +8,7 @@ import re

from .__init__ import PY2
from .th_srv import HAVE_PIL, HAVE_PILF
from .util import BytesIO  # type: ignore
from .util import BytesIO, html_escape  # type: ignore


class Ico(object):
@@ -31,10 +31,9 @@ class Ico(object):

w = 100
h = 30
if not self.args.th_no_crop and as_thumb:
if as_thumb:
sw, sh = self.args.th_size.split("x")
h = int(100.0 / (float(sw) / float(sh)))
w = 100

if chrome:
# cannot handle more than ~2000 unique SVGs
@@ -99,6 +98,6 @@ class Ico(object):
fill="#{}" font-family="monospace" font-size="14px" style="letter-spacing:.5px">{}</text>
</g></svg>
"""
svg = svg.format(h, c[:6], c[6:], ext)
svg = svg.format(h, c[:6], c[6:], html_escape(ext, True))

return "image/svg+xml", svg.encode("utf-8")

@@ -206,6 +206,9 @@ class Metrics(object):
try:
x = self.hsrv.broker.ask("up2k.get_unfinished")
xs = x.get()
if not xs:
raise Exception("up2k mutex acquisition timed out")

xj = json.loads(xs)
for ptop, (nbytes, nfiles) in xj.items():
tnbytes += nbytes

@@ -110,7 +110,7 @@ class MCast(object):
)

ips = [x for x in ips if x not in ("::1", "127.0.0.1")]
ips = find_prefix(ips, netdevs)
ips = find_prefix(ips, list(netdevs))

on = self.on[:]
off = self.off[:]

@@ -340,7 +340,7 @@ class SMB(object):
yeet("blocked delete (no-del-acc): " + vpath)

vpath = vpath.replace("\\", "/").lstrip("/")
self.hub.up2k.handle_rm(uname, "1.7.6.2", [vpath], [], False)
self.hub.up2k.handle_rm(uname, "1.7.6.2", [vpath], [], False, False)

def _utime(self, vpath: str, times: tuple[float, float]) -> None:
if not self.args.smbw:

@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

import argparse
import re
import stat
import tarfile
@@ -44,11 +45,12 @@ class StreamTar(StreamArc):
def __init__(
self,
log: "NamedLogger",
args: argparse.Namespace,
fgen: Generator[dict[str, Any], None, None],
cmp: str = "",
**kwargs: Any
):
super(StreamTar, self).__init__(log, fgen)
super(StreamTar, self).__init__(log, args, fgen)

self.ci = 0
self.co = 0
@@ -126,7 +128,7 @@ class StreamTar(StreamArc):
inf.gid = 0

self.ci += inf.size
with open(fsenc(src), "rb", 512 * 1024) as fo:
with open(fsenc(src), "rb", self.args.iobuf) as fo:
self.tar.addfile(inf, fo)

def _gen(self) -> None:

@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

import argparse
import os
import tempfile
from datetime import datetime
@@ -20,10 +21,12 @@ class StreamArc(object):
def __init__(
self,
log: "NamedLogger",
args: argparse.Namespace,
fgen: Generator[dict[str, Any], None, None],
**kwargs: Any
):
self.log = log
self.args = args
self.fgen = fgen
self.stopped = False

@@ -28,7 +28,7 @@ if True:  # pylint: disable=using-constant-test
import typing
from typing import Any, Optional, Union

from .__init__ import ANYWIN, EXE, MACOS, TYPE_CHECKING, EnvParams, unicode
from .__init__ import ANYWIN, EXE, MACOS, TYPE_CHECKING, E, EnvParams, unicode
from .authsrv import BAD_CFG, AuthSrv
from .cert import ensure_cert
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE
@@ -49,6 +49,7 @@ from .util import (
ODict,
alltrace,
ansi_re,
build_netmap,
min_ex,
mp,
odfusion,
@@ -94,7 +95,7 @@ class SvcHub(object):
self.stopping = False
self.stopped = False
self.reload_req = False
self.reloading = False
self.reloading = 0
self.stop_cond = threading.Condition()
self.nsigs = 3
self.retcode = 0
@@ -154,6 +155,8 @@ class SvcHub(object):
lg.handlers = [lh]
lg.setLevel(logging.DEBUG)

self._check_env()

if args.stackmon:
start_stackmon(args.stackmon, 0)

@@ -170,6 +173,26 @@ class SvcHub(object):
self.log("root", t.format(args.j), c=3)
args.no_fpool = True

for name, arg in (
("iobuf", "iobuf"),
("s-rd-sz", "s_rd_sz"),
("s-wr-sz", "s_wr_sz"),
):
zi = getattr(args, arg)
if zi < 32768:
t = "WARNING: expect very poor performance because you specified a very low value (%d) for --%s"
self.log("root", t % (zi, name), 3)
zi = 2
zi2 = 2 ** (zi - 1).bit_length()
if zi != zi2:
zi3 = 2 ** ((zi - 1).bit_length() - 1)
t = "WARNING: expect poor performance because --%s is not a power-of-two; consider using %d or %d instead of %d"
self.log("root", t % (name, zi2, zi3, zi), 3)

if args.s_rd_sz > args.iobuf:
t = "WARNING: --s-rd-sz (%d) is larger than --iobuf (%d); this may lead to reduced performance"
self.log("root", t % (args.s_rd_sz, args.iobuf), 3)

bri = "zy"[args.theme % 2 :][:1]
ch = "abcdefghijklmnopqrstuvwx"[int(args.theme / 2)]
args.theme = "{0}{1} {0} {1}".format(ch, bri)
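The power-of-two check above relies on a bit trick: 2 ** (n - 1).bit_length() rounds n up to the next power of two, and halving that exponent gives the nearest one below; for example:

def pow2_suggestions(n):
    up = 2 ** (n - 1).bit_length()          # smallest power of two >= n
    down = 2 ** ((n - 1).bit_length() - 1)  # largest power of two below `up`
    return up, down

# pow2_suggestions(300000) -> (524288, 262144)
# n is already a power of two exactly when up == n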
@@ -385,6 +408,17 @@ class SvcHub(object):

Daemon(self.sd_notify, "sd-notify")

def _check_env(self) -> None:
try:
files = os.listdir(E.cfg)
except:
files = []

hits = [x for x in files if x.lower().endswith(".conf")]
if hits:
t = "WARNING: found config files in [%s]: %s\n  config files are not expected here, and will NOT be loaded (unless your setup is intentionally hella funky)"
self.log("root", t % (E.cfg, ", ".join(hits)), 3)

def _process_config(self) -> bool:
al = self.args

@@ -465,12 +499,11 @@ class SvcHub(object):

al.xff_hdr = al.xff_hdr.lower()
al.idp_h_usr = al.idp_h_usr.lower()
# al.idp_h_grp = al.idp_h_grp.lower()
al.idp_h_grp = al.idp_h_grp.lower()
al.idp_h_key = al.idp_h_key.lower()

al.xff_re = self._ipa2re(al.xff_src)
al.ipa_re = self._ipa2re(al.ipa)
al.ftp_ipa_re = self._ipa2re(al.ftp_ipa or al.ipa)
al.tftp_ipa_re = self._ipa2re(al.tftp_ipa or al.ipa)
al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa)
al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa)

mte = ODict.fromkeys(DEF_MTE.split(","), True)
al.mte = odfusion(mte, al.mte)
@@ -487,6 +520,18 @@ class SvcHub(object):
if ptn:
setattr(self.args, k, re.compile(ptn))

for k in ["idp_gsep"]:
ptn = getattr(self.args, k)
if "]" in ptn:
ptn = "]" + ptn.replace("]", "")
if "[" in ptn:
ptn = ptn.replace("[", "") + "["
if "-" in ptn:
ptn = ptn.replace("-", "") + "-"

ptn = ptn.replace("\\", "\\\\").replace("^", "\\^")
setattr(self.args, k, re.compile("[%s]" % (ptn,)))

try:
zf1, zf2 = self.args.rm_retry.split("/")
self.args.rm_re_t = float(zf1)
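The idp_gsep munging above exists to build a safe regex character class from arbitrary separator characters: "]" must come first, "[" and "-" must come last, and backslash/caret need escaping; a standalone sketch of the same steps:

import re

def gsep_re(chars):
    # reorder/escape so the characters are safe inside [...]
    if "]" in chars:
        chars = "]" + chars.replace("]", "")
    if "[" in chars:
        chars = chars.replace("[", "") + "["
    if "-" in chars:
        chars = chars.replace("-", "") + "-"
    chars = chars.replace("\\", "\\\\").replace("^", "\\^")
    return re.compile("[%s]" % (chars,))

# gsep_re("|,;").split("admin|staff") -> ["admin", "staff"]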
@@ -662,21 +707,37 @@ class SvcHub(object):
self.log("root", "ssdp startup failed;\n" + min_ex(), 3)

def reload(self) -> str:
if self.reloading:
return "cannot reload; already in progress"
with self.up2k.mutex:
if self.reloading:
return "cannot reload; already in progress"
self.reloading = 1

self.reloading = True
Daemon(self._reload, "reloading")
return "reload initiated"

def _reload(self) -> None:
self.log("root", "reload scheduled")
def _reload(self, rescan_all_vols: bool = True) -> None:
with self.up2k.mutex:
if self.reloading != 1:
return
self.reloading = 2
self.log("root", "reloading config")
self.asrv.reload()
self.up2k.reload()
self.up2k.reload(rescan_all_vols)
self.broker.reload()
self.reloading = 0

self.reloading = False
def _reload_blocking(self, rescan_all_vols: bool = True) -> None:
while True:
with self.up2k.mutex:
if self.reloading < 2:
self.reloading = 1
break
time.sleep(0.05)

# try to handle multiple pending IdP reloads at once:
time.sleep(0.2)

self._reload(rescan_all_vols=rescan_all_vols)

def stop_thr(self) -> None:
while not self.stop_req:

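The reload rework above replaces a boolean with a small state counter guarded by the up2k mutex (0 = idle, 1 = scheduled, 2 = running) so concurrent IdP logins can coalesce into one reload; the shape of it, heavily simplified:

import threading

class ReloadGate:
    def __init__(self):
        self.mutex = threading.Lock()
        self.state = 0  # 0=idle, 1=scheduled, 2=running

    def request(self):
        with self.mutex:
            if self.state:
                return "cannot reload; already in progress"
            self.state = 1
        threading.Thread(target=self._run, daemon=True).start()
        return "reload initiated"

    def _run(self):
        with self.mutex:
            if self.state != 1:
                return  # someone else got here first
            self.state = 2
            # ... perform the actual reload here ...
            self.state = 0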
@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

import argparse
import calendar
import stat
import time
@@ -218,12 +219,13 @@ class StreamZip(StreamArc):
def __init__(
self,
log: "NamedLogger",
args: argparse.Namespace,
fgen: Generator[dict[str, Any], None, None],
utf8: bool = False,
pre_crc: bool = False,
**kwargs: Any
) -> None:
super(StreamZip, self).__init__(log, fgen)
super(StreamZip, self).__init__(log, args, fgen)

self.utf8 = utf8
self.pre_crc = pre_crc
@@ -248,7 +250,7 @@ class StreamZip(StreamArc):

crc = 0
if self.pre_crc:
for buf in yieldfile(src):
for buf in yieldfile(src, self.args.iobuf):
crc = zlib.crc32(buf, crc)

crc &= 0xFFFFFFFF
@@ -257,7 +259,7 @@ class StreamZip(StreamArc):
buf = gen_hdr(None, name, sz, ts, self.utf8, crc, self.pre_crc)
yield self._ct(buf)

for buf in yieldfile(src):
for buf in yieldfile(src, self.args.iobuf):
if not self.pre_crc:
crc = zlib.crc32(buf, crc)

@@ -10,19 +10,33 @@ except:
self.__dict__.update(attr)


import inspect
import logging
import os
import re
import socket
import stat
import threading
import time
from datetime import datetime

from partftpy import TftpContexts, TftpServer, TftpStates
try:
import inspect
except:
pass

from partftpy import (
TftpContexts,
TftpPacketFactory,
TftpPacketTypes,
TftpServer,
TftpStates,
)
from partftpy.TftpShared import TftpException

from .__init__ import PY2, TYPE_CHECKING
from .__init__ import EXE, TYPE_CHECKING
from .authsrv import VFS
from .bos import bos
from .util import BytesIO, Daemon, exclude_dotfiles, runhook, undot
from .util import BytesIO, Daemon, ODict, exclude_dotfiles, min_ex, runhook, undot

if True:  # pylint: disable=using-constant-test
from typing import Any, Union
@@ -35,19 +49,22 @@ lg = logging.getLogger("tftp")
debug, info, warning, error = (lg.debug, lg.info, lg.warning, lg.error)


def noop(*a, **ka) -> None:
pass


def _serverInitial(self, pkt: Any, raddress: str, rport: int) -> bool:
info("connection from %s:%s", raddress, rport)
ret = _orig_serverInitial(self, pkt, raddress, rport)
ptn = _hub[0].args.tftp_ipa_re
if ptn and not ptn.match(raddress):
ret = _sinitial[0](self, pkt, raddress, rport)
nm = _hub[0].args.tftp_ipa_nm
if nm and not nm.map(raddress):
yeet("client rejected (--tftp-ipa): %s" % (raddress,))
return ret


# patch ipa-check into partftpd
# patch ipa-check into partftpd (part 1/2)
_hub: list["SvcHub"] = []
_orig_serverInitial = TftpStates.TftpServerState.serverInitial
TftpStates.TftpServerState.serverInitial = _serverInitial
_sinitial: list[Any] = []


class Tftpd(object):
@@ -56,6 +73,7 @@ class Tftpd(object):
self.args = hub.args
self.asrv = hub.asrv
self.log = hub.log
self.mutex = threading.Lock()

_hub[:] = []
_hub.append(hub)
@@ -65,6 +83,41 @@ class Tftpd(object):
lgr = logging.getLogger(x)
lgr.setLevel(logging.DEBUG if self.args.tftpv else logging.INFO)

if not self.args.tftpv and not self.args.tftpvv:
# contexts -> states -> packettypes -> shared
# contexts -> packetfactory
# packetfactory -> packettypes
Cs = [
TftpPacketTypes,
TftpPacketFactory,
TftpStates,
TftpContexts,
TftpServer,
]
cbak = []
if not self.args.tftp_no_fast and not EXE:
try:
ptn = re.compile(r"(^\s*)log\.debug\(.*\)$")
for C in Cs:
cbak.append(C.__dict__)
src1 = inspect.getsource(C).split("\n")
src2 = "\n".join([ptn.sub("\\1pass", ln) for ln in src1])
cfn = C.__spec__.origin
exec (compile(src2, filename=cfn, mode="exec"), C.__dict__)
except Exception:
t = "failed to optimize tftp code; run with --tftp-noopt if there are issues:\n"
self.log("tftp", t + min_ex(), 3)
for n, zd in enumerate(cbak):
Cs[n].__dict__ = zd

for C in Cs:
C.log.debug = noop

# patch ipa-check into partftpd (part 2/2)
_sinitial[:] = []
_sinitial.append(TftpStates.TftpServerState.serverInitial)
TftpStates.TftpServerState.serverInitial = _serverInitial

# patch vfs into partftpy
TftpContexts.open = self._open
TftpStates.open = self._open
@@ -102,21 +155,90 @@ class Tftpd(object):
self.log("tftp", "IPv6 not supported for tftp; listening on 0.0.0.0", 3)
ip = "0.0.0.0"

self.ip = ip
self.port = int(self.args.tftp)
self.srv = TftpServer.TftpServer("/", self._ls)
self.stop = self.srv.stop
self.srv = []
self.ips = []

ports = []
if self.args.tftp_pr:
p1, p2 = [int(x) for x in self.args.tftp_pr.split("-")]
ports = list(range(p1, p2 + 1))

Daemon(self.srv.listen, "tftp", [self.ip, self.port], ka={"ports": ports})
ips = self.args.i
if "::" in ips:
ips.append("0.0.0.0")

if self.args.ftp4:
ips = [x for x in ips if ":" not in x]

ips = list(ODict.fromkeys(ips))  # dedup

for ip in ips:
name = "tftp_%s" % (ip,)
Daemon(self._start, name, [ip, ports])
time.sleep(0.2)  # give dualstack a chance

def nlog(self, msg: str, c: Union[int, str] = 0) -> None:
self.log("tftp", msg, c)

def _start(self, ip, ports):
fam = socket.AF_INET6 if ":" in ip else socket.AF_INET
have_been_alive = False
while True:
srv = TftpServer.TftpServer("/", self._ls)
with self.mutex:
self.srv.append(srv)
self.ips.append(ip)

try:
# this is the listen loop; it should block forever
srv.listen(ip, self.port, af_family=fam, ports=ports)
except:
with self.mutex:
self.srv.remove(srv)
self.ips.remove(ip)

try:
srv.sock.close()
except:
pass

try:
bound = bool(srv.listenport)
except:
bound = False

if bound:
# this instance has managed to bind at least once
have_been_alive = True

if have_been_alive:
t = "tftp server [%s]:%d crashed; restarting in 3 sec:\n%s"
error(t, ip, self.port, min_ex())
time.sleep(3)
continue

# server failed to start; could be due to dualstack (ipv6 managed to bind and this is ipv4)
if ip != "0.0.0.0" or "::" not in self.ips:
# nope, it's fatal
t = "tftp server [%s]:%d failed to start:\n%s"
error(t, ip, self.port, min_ex())

# yep; ignore
# (TODO: move the "listening @ ..." infolog in partftpy to
#  after the bind attempt so it doesn't print twice)
return

info("tftp server [%s]:%d terminated", ip, self.port)
break

def stop(self):
with self.mutex:
srvs = self.srv[:]

for srv in srvs:
srv.stop()

def _v2a(self, caller: str, vpath: str, perms: list, *a: Any) -> tuple[VFS, str]:
vpath = vpath.replace("\\", "/").lstrip("/")
if not perms:
@@ -190,7 +312,7 @@ class Tftpd(object):
retl = ["# permissions: %s" % (", ".join(perms),)]
retl += [fmt.format(*x) for x in ls]
ret = "\n".join(retl).encode("utf-8", "replace")
return BytesIO(ret)
return BytesIO(ret + b"\n")

def _open(self, vpath: str, mode: str, *a: Any, **ka: Any) -> Any:
rd = wr = False
@@ -218,6 +340,9 @@ class Tftpd(object):
if not self.args.tftp_nols and bos.path.isdir(ap):
return self._ls(vpath, "", 0, True)

if not a:
a = [self.args.iobuf]

return open(ap, mode, *a, **ka)

def _mkdir(self, vpath: str, *a) -> None:
@@ -240,7 +365,7 @@ class Tftpd(object):
yeet("attempted delete of non-empty file")

vpath = vpath.replace("\\", "/").lstrip("/")
self.hub.up2k.handle_rm("*", "8.3.8.7", [vpath], [], False)
self.hub.up2k.handle_rm("*", "8.3.8.7", [vpath], [], False, False)

def _access(self, *a: Any) -> bool:
return True

@@ -78,16 +78,34 @@ class ThumbCli(object):
if rem.startswith(".hist/th/") and rem.split(".")[-1] in ["webp", "jpg", "png"]:
return os.path.join(ptop, rem)

if fmt == "j" and self.args.th_no_jpg:
fmt = "w"
if fmt[:1] in "jw":
sfmt = fmt[:1]

if fmt == "w":
if (
self.args.th_no_webp
or (is_img and not self.can_webp)
or (self.args.th_ff_jpg and (not is_img or preferred == "ff"))
):
fmt = "j"
if sfmt == "j" and self.args.th_no_jpg:
sfmt = "w"

if sfmt == "w":
if (
self.args.th_no_webp
or (is_img and not self.can_webp)
or (self.args.th_ff_jpg and (not is_img or preferred == "ff"))
):
sfmt = "j"

vf_crop = dbv.flags["crop"]
vf_th3x = dbv.flags["th3x"]

if "f" in vf_crop:
sfmt += "f" if "n" in vf_crop else ""
else:
sfmt += "f" if "f" in fmt else ""

if "f" in vf_th3x:
sfmt += "3" if "y" in vf_th3x else ""
else:
sfmt += "3" if "3" in fmt else ""

fmt = sfmt

histpath = self.asrv.vfs.histtab.get(ptop)
if not histpath:

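The format string being assembled above is a compact flag set: the first letter picks the codec (w=webp, j=jpeg), then optional "f" (full-size, no crop) and "3" (3x resolution), with the per-volume crop/th3x flags able to force or forbid each; a sketch of the suffix resolution alone:

def resolve_fmt(fmt, vf_crop, vf_th3x):
    # fmt: client request, e.g. "w", "jf", "w3"; vf_*: volume flags like "y"/"n"/"fy"/"fn"
    sfmt = fmt[:1]  # codec fallback logic handled separately
    if "f" in vf_crop:          # volume forces cropping on or off
        sfmt += "f" if "n" in vf_crop else ""
    else:                       # client decides
        sfmt += "f" if "f" in fmt else ""
    if "f" in vf_th3x:          # volume forces 3x on or off
        sfmt += "3" if "y" in vf_th3x else ""
    else:
        sfmt += "3" if "3" in fmt else ""
    return sfmt

# resolve_fmt("w3", "fn", "y") -> "wf3"  (crop forced off; client asked for 3x)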
@@ -16,9 +16,9 @@ from .__init__ import ANYWIN, TYPE_CHECKING
from .authsrv import VFS
from .bos import bos
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, ffprobe
from .util import BytesIO  # type: ignore
from .util import (
FFMPEG_URL,
BytesIO,  # type: ignore
Cooldown,
Daemon,
Pebkac,
@@ -97,8 +97,8 @@ def thumb_path(histpath: str, rem: str, mtime: float, fmt: str, ffa: set[str]) -

# spectrograms are never cropped; strip fullsize flag
ext = rem.split(".")[-1].lower()
if ext in ffa and fmt in ("wf", "jf"):
fmt = fmt[:1]
if ext in ffa and fmt[:2] in ("wf", "jf"):
fmt = fmt.replace("f", "")

rd += "\n" + fmt
h = hashlib.sha512(afsenc(rd)).digest()
@@ -200,9 +200,10 @@ class ThumbSrv(object):
with self.mutex:
return not self.nthr

def getres(self, vn: VFS) -> tuple[int, int]:
def getres(self, vn: VFS, fmt: str) -> tuple[int, int]:
mul = 3 if "3" in fmt else 1
w, h = vn.flags["thsize"].split("x")
return int(w), int(h)
return int(w) * mul, int(h) * mul

def get(self, ptop: str, rem: str, mtime: float, fmt: str) -> Optional[str]:
histpath = self.asrv.vfs.histtab.get(ptop)
@@ -364,7 +365,7 @@ class ThumbSrv(object):
|
||||
|
||||
def fancy_pillow(self, im: "Image.Image", fmt: str, vn: VFS) -> "Image.Image":
|
||||
# exif_transpose is expensive (loads full image + unconditional copy)
|
||||
res = self.getres(vn)
|
||||
res = self.getres(vn, fmt)
|
||||
r = max(*res) * 2
|
||||
im.thumbnail((r, r), resample=Image.LANCZOS)
|
||||
try:
|
||||
@@ -379,7 +380,7 @@ class ThumbSrv(object):
|
||||
if rot in rots:
|
||||
im = im.transpose(rots[rot])
|
||||
|
||||
if fmt.endswith("f"):
|
||||
if "f" in fmt:
|
||||
im.thumbnail(res, resample=Image.LANCZOS)
|
||||
else:
|
||||
iw, ih = im.size
|
||||
@@ -396,7 +397,7 @@ class ThumbSrv(object):
|
||||
im = self.fancy_pillow(im, fmt, vn)
|
||||
except Exception as ex:
|
||||
self.log("fancy_pillow {}".format(ex), "90")
|
||||
im.thumbnail(self.getres(vn))
|
||||
im.thumbnail(self.getres(vn, fmt))
|
||||
|
||||
fmts = ["RGB", "L"]
|
||||
args = {"quality": 40}
|
||||
@@ -422,10 +423,10 @@ class ThumbSrv(object):
|
||||
def conv_vips(self, abspath: str, tpath: str, fmt: str, vn: VFS) -> None:
|
||||
self.wait4ram(0.2, tpath)
|
||||
crops = ["centre", "none"]
|
||||
if fmt.endswith("f"):
|
||||
if "f" in fmt:
|
||||
crops = ["none"]
|
||||
|
||||
w, h = self.getres(vn)
|
||||
w, h = self.getres(vn, fmt)
|
||||
kw = {"height": h, "size": "down", "intent": "relative"}
|
||||
|
||||
for c in crops:
|
||||
@@ -454,12 +455,12 @@ class ThumbSrv(object):
|
||||
seek = [b"-ss", "{:.0f}".format(dur / 3).encode("utf-8")]
|
||||
|
||||
scale = "scale={0}:{1}:force_original_aspect_ratio="
|
||||
if fmt.endswith("f"):
|
||||
if "f" in fmt:
|
||||
scale += "decrease,setsar=1:1"
|
||||
else:
|
||||
scale += "increase,crop={0}:{1},setsar=1:1"
|
||||
|
||||
res = self.getres(vn)
|
||||
res = self.getres(vn, fmt)
|
||||
bscale = scale.format(*list(res)).encode("utf-8")
|
||||
# fmt: off
|
||||
cmd = [
|
||||
@@ -594,7 +595,11 @@ class ThumbSrv(object):
|
||||
need = 0.2 + dur / coeff
|
||||
self.wait4ram(need, tpath)
|
||||
|
||||
fc = "[0:a:0]aresample=48000{},showspectrumpic=s=640x512,crop=780:544:70:50[o]"
|
||||
fc = "[0:a:0]aresample=48000{},showspectrumpic=s="
|
||||
if "3" in fmt:
|
||||
fc += "1280x1024,crop=1420:1056:70:48[o]"
|
||||
else:
|
||||
fc += "640x512,crop=780:544:70:48[o]"
|
||||
|
||||
if self.args.th_ff_swr:
|
||||
fco = ":filter_size=128:cutoff=0.877"
|
||||
|
||||
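A worked example of the new `getres` behavior, assuming a volume with `thsize` set to `320x256`: a `3` anywhere in the format string triples both dimensions, so the 3x grid toggle gets genuinely hi-res thumbnails rather than upscaled ones.

```python
# the new getres in isolation: a "3" in the format string triples
# both dimensions of the volume's configured thumbnail size
def getres(thsize, fmt):
    mul = 3 if "3" in fmt else 1
    w, h = thsize.split("x")
    return int(w) * mul, int(h) * mul

assert getres("320x256", "wf") == (320, 256)
assert getres("320x256", "w3") == (960, 768)
```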
@@ -154,7 +154,7 @@ class Up2k(object):
         self.hashq: Queue[
             tuple[str, str, dict[str, Any], str, str, str, float, str, bool]
         ] = Queue()
-        self.tagq: Queue[tuple[str, str, str, str, str, float]] = Queue()
+        self.tagq: Queue[tuple[str, str, str, str, int, str, float]] = Queue()
         self.tag_event = threading.Condition()
         self.hashq_mutex = threading.Lock()
         self.n_hashq = 0
@@ -199,11 +199,16 @@ class Up2k(object):
 
         Daemon(self.deferred_init, "up2k-deferred-init")
 
-    def reload(self) -> None:
-        self.gid += 1
-        self.log("reload #{} initiated".format(self.gid))
+    def reload(self, rescan_all_vols: bool) -> None:
+        """mutex me"""
+        self.log("reload #{} scheduled".format(self.gid + 1))
         all_vols = self.asrv.vfs.all_vols
-        self.rescan(all_vols, list(all_vols.keys()), True, False)
+
+        scan_vols = [k for k, v in all_vols.items() if v.realpath not in self.registry]
+        if rescan_all_vols:
+            scan_vols = list(all_vols.keys())
+
+        self._rescan(all_vols, scan_vols, True, False)
 
     def deferred_init(self) -> None:
         all_vols = self.asrv.vfs.all_vols
@@ -232,8 +237,6 @@ class Up2k(object):
         for n in range(max(1, self.args.mtag_mt)):
             Daemon(self._tagger, "tagger-{}".format(n))
 
-        Daemon(self._run_all_mtp, "up2k-mtp-init")
-
     def log(self, msg: str, c: Union[int, str] = 0) -> None:
         if self.pp:
             msg += "\033[K"
@@ -282,9 +285,48 @@ class Up2k(object):
         }
         return json.dumps(ret, indent=4)
 
+    def get_unfinished_by_user(self, uname, ip) -> str:
+        if PY2 or not self.mutex.acquire(timeout=2):
+            return '[{"timeout":1}]'
+
+        ret: list[tuple[int, str, int, int, int]] = []
+        try:
+            for ptop, tab2 in self.registry.items():
+                cfg = self.flags.get(ptop, {}).get("u2abort", 1)
+                if not cfg:
+                    continue
+                addr = (ip or "\n") if cfg in (1, 2) else ""
+                user = (uname or "\n") if cfg in (1, 3) else ""
+                drp = self.droppable.get(ptop, {})
+                for wark, job in tab2.items():
+                    if (
+                        wark in drp
+                        or (user and user != job["user"])
+                        or (addr and addr != job["addr"])
+                    ):
+                        continue
+
+                    zt5 = (
+                        int(job["t0"]),
+                        djoin(job["vtop"], job["prel"], job["name"]),
+                        job["size"],
+                        len(job["need"]),
+                        len(job["hash"]),
+                    )
+                    ret.append(zt5)
+        finally:
+            self.mutex.release()
+
+        ret.sort(reverse=True)
+        ret2 = [
+            {"at": at, "vp": "/" + vp, "pd": 100 - ((nn * 100) // (nh or 1)), "sz": sz}
+            for (at, vp, sz, nn, nh) in ret
+        ]
+        return json.dumps(ret2, indent=0)
+
     def get_unfinished(self) -> str:
         if PY2 or not self.mutex.acquire(timeout=0.5):
-            return "{}"
+            return ""
 
         ret: dict[str, tuple[int, int]] = {}
         try:
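For reference, a sketch of the payload that `get_unfinished_by_user` above appears to produce: one entry per unfinished upload, where `pd` is the percent-done derived from chunks still needed (`nn`) out of total chunks (`nh`). The helper name is hypothetical; the list comprehension mirrors the one in the hunk.

```python
import json

def fmt_unfinished(jobs):  # hypothetical wrapper around the logic above
    jobs.sort(reverse=True)  # newest first (t0 descending)
    return json.dumps([
        {"at": at, "vp": "/" + vp, "pd": 100 - ((nn * 100) // (nh or 1)), "sz": sz}
        for (at, vp, sz, nn, nh) in jobs
    ], indent=0)

jobs = [(1700000000, "inc/big.iso", 4096, 3, 8)]
print(fmt_unfinished(jobs))  # pd = 100 - (3*100)//8 = 63 percent done
```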
@@ -337,14 +379,21 @@ class Up2k(object):
     def rescan(
         self, all_vols: dict[str, VFS], scan_vols: list[str], wait: bool, fscan: bool
     ) -> str:
+        with self.mutex:
+            return self._rescan(all_vols, scan_vols, wait, fscan)
+
+    def _rescan(
+        self, all_vols: dict[str, VFS], scan_vols: list[str], wait: bool, fscan: bool
+    ) -> str:
+        """mutex me"""
         if not wait and self.pp:
             return "cannot initiate; scan is already in progress"
 
-        args = (all_vols, scan_vols, fscan)
         self.gid += 1
         Daemon(
             self.init_indexes,
             "up2k-rescan-{}".format(scan_vols[0] if scan_vols else "all"),
-            args,
+            (all_vols, scan_vols, fscan, self.gid),
         )
         return ""
@@ -456,7 +505,7 @@ class Up2k(object):
                 if vp:
                     fvp = "%s/%s" % (vp, fvp)
 
-                self._handle_rm(LEELOO_DALLAS, "", fvp, [], True)
+                self._handle_rm(LEELOO_DALLAS, "", fvp, [], True, False)
                 nrm += 1
 
         if nrm:
@@ -552,7 +601,7 @@ class Up2k(object):
             runihook(self.log, cmd, vol, ups)
 
     def _vis_job_progress(self, job: dict[str, Any]) -> str:
-        perc = 100 - (len(job["need"]) * 100.0 / len(job["hash"]))
+        perc = 100 - (len(job["need"]) * 100.0 / (len(job["hash"]) or 1))
         path = djoin(job["ptop"], job["prel"], job["name"])
         return "{:5.1f}% {}".format(perc, path)
 
@@ -575,19 +624,32 @@ class Up2k(object):
         return True, ret
 
     def init_indexes(
-        self, all_vols: dict[str, VFS], scan_vols: list[str], fscan: bool
+        self, all_vols: dict[str, VFS], scan_vols: list[str], fscan: bool, gid: int = 0
     ) -> bool:
-        gid = self.gid
-        while self.pp and gid == self.gid:
-            time.sleep(0.1)
-
-        if gid != self.gid:
-            return False
-
-        self.log("reload #{} running".format(self.gid))
+        if not gid:
+            with self.mutex:
+                gid = self.gid
+
+        nspin = 0
+        while True:
+            nspin += 1
+            if nspin > 1:
+                time.sleep(0.1)
+
+            with self.mutex:
+                if gid != self.gid:
+                    return False
+
+                if self.pp:
+                    continue
+
+                self.pp = ProgressPrinter(self.log, self.args)
+
+            break
+
+        if gid:
+            self.log("reload #%d running" % (gid,))
 
-        self.pp = ProgressPrinter(self.log, self.args)
         vols = list(all_vols.values())
         t0 = time.time()
         have_e2d = False
@@ -771,20 +833,14 @@ class Up2k(object):
             msg = "could not read tags because no backends are available (Mutagen or FFprobe)"
             self.log(msg, c=1)
 
-        thr = None
-        if self.mtag:
-            t = "online (running mtp)"
-            if scan_vols:
-                thr = Daemon(self._run_all_mtp, "up2k-mtp-scan", r=False)
-        else:
-            self.pp = None
-            t = "online, idle"
-
+        t = "online (running mtp)" if self.mtag else "online, idle"
         for vol in vols:
            self.volstate[vol.vpath] = t
 
-        if thr:
-            thr.start()
+        if self.mtag:
+            Daemon(self._run_all_mtp, "up2k-mtp-scan", (gid,))
+        else:
+            self.pp = None
 
         return have_e2d
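The rescan rework above replaces the old spin-wait with a generation counter: every rescan bumps `self.gid` under the mutex, each worker carries the `gid` it was started with, and any worker that notices a newer generation gives up instead of racing it. A minimal standalone sketch of that pattern (illustrative names, not copyparty code):

```python
import threading, time

class Scanner:  # illustrative, not the copyparty implementation
    def __init__(self):
        self.mutex = threading.Lock()
        self.gid = 0    # bumped under mutex for every new rescan
        self.pp = None  # progress printer; truthy while a scan runs

    def worker(self, gid):
        while True:
            with self.mutex:
                if gid != self.gid:
                    return False        # superseded by a newer generation
                if not self.pp:
                    self.pp = object()  # claim the scanner and proceed
                    break
            time.sleep(0.1)             # older generation still winding down
        # ... perform the scan, then clear self.pp ...
        return True
```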
@@ -1809,26 +1865,28 @@ class Up2k(object):
             self.pending_tags = []
         return ret
 
-    def _run_all_mtp(self) -> None:
-        gid = self.gid
+    def _run_all_mtp(self, gid: int) -> None:
         t0 = time.time()
         for ptop, flags in self.flags.items():
             if "mtp" in flags:
                 if ptop not in self.entags:
                     t = "skipping mtp for unavailable volume {}"
                     self.log(t.format(ptop), 1)
-                    continue
-                self._run_one_mtp(ptop, gid)
+                else:
+                    self._run_one_mtp(ptop, gid)
+
+                vtop = "\n"
+                for vol in self.asrv.vfs.all_vols.values():
+                    if vol.realpath == ptop:
+                        vtop = vol.vpath
+                if "running mtp" in self.volstate.get(vtop, ""):
+                    self.volstate[vtop] = "online, idle"
 
         td = time.time() - t0
         msg = "mtp finished in {:.2f} sec ({})"
         self.log(msg.format(td, s2hms(td, True)))
 
         self.pp = None
-        for k in list(self.volstate.keys()):
-            if "OFFLINE" not in self.volstate[k]:
-                self.volstate[k] = "online, idle"
 
         if self.args.exit == "idx":
             self.hub.sigterm()
@@ -2055,12 +2113,13 @@ class Up2k(object):
                 return
 
             try:
+                st = bos.stat(qe.abspath)
                 if not qe.mtp:
                     if self.args.mtag_vv:
                         t = "tag-thr: {}({})"
                         self.log(t.format(self.mtag.backend, qe.abspath), "90")
 
-                    tags = self.mtag.get(qe.abspath)
+                    tags = self.mtag.get(qe.abspath) if st.st_size else {}
                 else:
                     if self.args.mtag_vv:
                         t = "tag-thr: {}({})"
@@ -2101,11 +2160,16 @@ class Up2k(object):
         """will mutex"""
         assert self.mtag
 
-        if not bos.path.isfile(abspath):
+        try:
+            st = bos.stat(abspath)
+        except:
+            return 0
+
+        if not stat.S_ISREG(st.st_mode):
             return 0
 
         try:
-            tags = self.mtag.get(abspath)
+            tags = self.mtag.get(abspath) if st.st_size else {}
         except Exception as ex:
             self._log_tag_err("", abspath, ex)
             return 0
@@ -2665,6 +2729,9 @@ class Up2k(object):
                 a = [job[x] for x in zs.split()]
                 self.db_add(cur, vfs.flags, *a)
                 cur.connection.commit()
+            elif wark in reg:
+                # checks out, but client may have hopped IPs
+                job["addr"] = cj["addr"]
 
         if not job:
             ap1 = djoin(cj["ptop"], cj["prel"])
@@ -3098,7 +3165,7 @@ class Up2k(object):
                 raise
 
         if "e2t" in self.flags[ptop]:
-            self.tagq.put((ptop, wark, rd, fn, ip, at))
+            self.tagq.put((ptop, wark, rd, fn, sz, ip, at))
            self.n_tagq += 1
 
         return True
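The tag-scanner hunks above switch from `isfile()` plus a later read to a single `stat()` call: `S_ISREG` rejects anything that is not a regular file, and the size from the same call lets zero-byte files skip the tag parser entirely. A minimal sketch of that guard (function name is hypothetical):

```python
import os, stat

def tag_scan_ok(abspath):  # hypothetical name
    try:
        st = os.stat(abspath)
    except OSError:
        return False
    # one stat() provides both the regular-file check (previously
    # isfile) and the size, so empty files can skip tag parsing
    return stat.S_ISREG(st.st_mode) and st.st_size > 0
```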
@@ -3201,7 +3268,13 @@ class Up2k(object):
             pass
 
     def handle_rm(
-        self, uname: str, ip: str, vpaths: list[str], lim: list[int], rm_up: bool
+        self,
+        uname: str,
+        ip: str,
+        vpaths: list[str],
+        lim: list[int],
+        rm_up: bool,
+        unpost: bool,
     ) -> str:
         n_files = 0
         ok = {}
@@ -3211,7 +3284,7 @@ class Up2k(object):
                 self.log("hit delete limit of {} files".format(lim[1]), 3)
                 break
 
-            a, b, c = self._handle_rm(uname, ip, vp, lim, rm_up)
+            a, b, c = self._handle_rm(uname, ip, vp, lim, rm_up, unpost)
             n_files += a
             for k in b:
                 ok[k] = 1
@@ -3225,25 +3298,43 @@ class Up2k(object):
         return "deleted {} files (and {}/{} folders)".format(n_files, iok, iok + ing)
 
     def _handle_rm(
-        self, uname: str, ip: str, vpath: str, lim: list[int], rm_up: bool
+        self, uname: str, ip: str, vpath: str, lim: list[int], rm_up: bool, unpost: bool
     ) -> tuple[int, list[str], list[str]]:
         self.db_act = time.time()
-        try:
+        partial = ""
+        if not unpost:
             permsets = [[True, False, False, True]]
             vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
             vn, rem = vn.get_dbv(rem)
-            unpost = False
-        except:
+        else:
             # unpost with missing permissions? verify with db
-            if not self.args.unpost:
-                raise Pebkac(400, "the unpost feature is disabled in server config")
-
-            unpost = True
             permsets = [[False, True]]
             vn, rem = self.asrv.vfs.get(vpath, uname, *permsets[0])
             vn, rem = vn.get_dbv(rem)
-            _, _, _, _, dip, dat = self._find_from_vpath(vn.realpath, rem)
+            ptop = vn.realpath
+            with self.mutex:
+                abrt_cfg = self.flags.get(ptop, {}).get("u2abort", 1)
+                addr = (ip or "\n") if abrt_cfg in (1, 2) else ""
+                user = (uname or "\n") if abrt_cfg in (1, 3) else ""
+                reg = self.registry.get(ptop, {}) if abrt_cfg else {}
+                for wark, job in reg.items():
+                    if (user and user != job["user"]) or (addr and addr != job["addr"]):
+                        continue
+                    if djoin(job["prel"], job["name"]) == rem:
+                        if job["ptop"] != ptop:
+                            t = "job.ptop [%s] != vol.ptop [%s] ??"
+                            raise Exception(t % (job["ptop"] != ptop))
+                        partial = vn.canonical(vjoin(job["prel"], job["tnam"]))
+                        break
+            if partial:
+                dip = ip
+                dat = time.time()
+            else:
+                if not self.args.unpost:
+                    t = "the unpost feature is disabled in server config"
+                    raise Pebkac(400, t)
+
+                _, _, _, _, dip, dat = self._find_from_vpath(ptop, rem)
 
             t = "you cannot delete this: "
             if not dip:
@@ -3336,6 +3427,9 @@ class Up2k(object):
                     cur.connection.commit()
 
                 wunlink(self.log, abspath, dbv.flags)
+                if partial:
+                    wunlink(self.log, partial, dbv.flags)
+                    partial = ""
                 if xad:
                     runhook(
                         self.log,
@@ -3680,9 +3774,10 @@ class Up2k(object):
             )
             job = reg.get(wark) if wark else None
             if job:
-                t = "forgetting partial upload {} ({})"
-                p = self._vis_job_progress(job)
-                self.log(t.format(wark, p))
+                if job["need"]:
+                    t = "forgetting partial upload {} ({})"
+                    p = self._vis_job_progress(job)
+                    self.log(t.format(wark, p))
                 assert wark
                 del reg[wark]
 
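With these changes, removing an unfinished upload also has to clean up its in-progress temp file ("partial", the job's `tnam`) alongside the reserved destination file. A rough sketch of that flow; names are hypothetical, and copyparty itself routes the deletes through `wunlink` with volume flags as shown above:

```python
import os

def abort_upload(abspath, partial):  # hypothetical names
    os.unlink(abspath)      # the reserved destination file
    if partial:
        os.unlink(partial)  # the in-progress temp holding received chunks
```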
@@ -3825,7 +3920,7 @@ class Up2k(object):
         csz = up2k_chunksize(fsz)
         ret = []
         suffix = " MB, {}".format(path)
-        with open(fsenc(path), "rb", 512 * 1024) as f:
+        with open(fsenc(path), "rb", self.args.iobuf) as f:
             if self.mth and fsz >= 1024 * 512:
                 tlt = self.mth.hash(f, fsz, csz, self.pp, prefix, suffix)
                 ret = [x[0] for x in tlt]
@@ -3930,7 +4025,13 @@ class Up2k(object):
 
         if not ANYWIN and sprs and sz > 1024 * 1024:
             fs = self.fstab.get(pdir)
-            if fs != "ok":
+            if fs == "ok":
+                pass
+            elif "sparse" in self.flags[job["ptop"]]:
+                t = "volflag 'sparse' is forcing use of sparse files for uploads to [%s]"
+                self.log(t % (job["ptop"],))
+                relabel = True
+            else:
                 relabel = True
                 f.seek(1024 * 1024 - 1)
                 f.write(b"e")
@@ -4055,14 +4156,14 @@ class Up2k(object):
             with self.mutex:
                 self.n_tagq -= 1
 
-            ptop, wark, rd, fn, ip, at = self.tagq.get()
+            ptop, wark, rd, fn, sz, ip, at = self.tagq.get()
             if "e2t" not in self.flags[ptop]:
                 continue
 
             # self.log("\n " + repr([ptop, rd, fn]))
             abspath = djoin(ptop, rd, fn)
             try:
-                tags = self.mtag.get(abspath)
+                tags = self.mtag.get(abspath) if sz else {}
                 ntags1 = len(tags)
                 parsers = self._get_parsers(ptop, tags, abspath)
                 if self.args.mtag_vv:
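A sketch of the `u2abort` volflag semantics implied by the up2k code above: `1` (the default) requires both the uploader's IP and username to match, `2` checks only the IP, `3` checks only the username, and `0` disables aborting altogether (the function name here is hypothetical):

```python
def may_abort(cfg, ip, uname, job):  # hypothetical name
    if not cfg:
        return False  # u2abort=0: aborting disabled
    addr = (ip or "\n") if cfg in (1, 2) else ""
    user = (uname or "\n") if cfg in (1, 3) else ""
    if user and user != job["user"]:
        return False
    if addr and addr != job["addr"]:
        return False
    return True
```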
@@ -186,7 +186,7 @@ else:
 
 SYMTIME = sys.version_info > (3, 6) and os.utime in os.supports_follow_symlinks
 
-META_NOBOTS = '<meta name="robots" content="noindex, nofollow">'
+META_NOBOTS = '<meta name="robots" content="noindex, nofollow">\n'
 
 FFMPEG_URL = "https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z"
 
@@ -559,20 +559,26 @@ class HLog(logging.Handler):
 
 
 class NetMap(object):
-    def __init__(self, ips: list[str], netdevs: dict[str, Netdev]) -> None:
+    def __init__(self, ips: list[str], cidrs: list[str], keep_lo=False) -> None:
+        """
+        ips: list of plain ipv4/ipv6 IPs, not cidr
+        cidrs: list of cidr-notation IPs (ip/prefix)
+        """
         if "::" in ips:
             ips = [x for x in ips if x != "::"] + list(
-                [x.split("/")[0] for x in netdevs if ":" in x]
+                [x.split("/")[0] for x in cidrs if ":" in x]
             )
             ips.append("0.0.0.0")
 
         if "0.0.0.0" in ips:
             ips = [x for x in ips if x != "0.0.0.0"] + list(
-                [x.split("/")[0] for x in netdevs if ":" not in x]
+                [x.split("/")[0] for x in cidrs if ":" not in x]
             )
 
-        ips = [x for x in ips if x not in ("::1", "127.0.0.1")]
-        ips = find_prefix(ips, netdevs)
+        if not keep_lo:
+            ips = [x for x in ips if x not in ("::1", "127.0.0.1")]
+
+        ips = find_prefix(ips, cidrs)
 
         self.cache: dict[str, str] = {}
         self.b2sip: dict[bytes, str] = {}
@@ -589,6 +595,9 @@ class NetMap(object):
         self.bip.sort(reverse=True)
 
     def map(self, ip: str) -> str:
+        if ip.startswith("::ffff:"):
+            ip = ip[7:]
+
         try:
             return self.cache[ip]
         except:
@@ -1391,10 +1400,15 @@ def ren_open(
 
 class MultipartParser(object):
     def __init__(
-        self, log_func: "NamedLogger", sr: Unrecv, http_headers: dict[str, str]
+        self,
+        log_func: "NamedLogger",
+        args: argparse.Namespace,
+        sr: Unrecv,
+        http_headers: dict[str, str],
     ):
         self.sr = sr
         self.log = log_func
+        self.args = args
         self.headers = http_headers
 
         self.re_ctype = re.compile(r"^content-type: *([^; ]+)", re.IGNORECASE)
@@ -1493,7 +1507,7 @@ class MultipartParser(object):
 
     def _read_data(self) -> Generator[bytes, None, None]:
         blen = len(self.boundary)
-        bufsz = 32 * 1024
+        bufsz = self.args.s_rd_sz
         while True:
             try:
                 buf = self.sr.recv(bufsz)
@@ -1768,7 +1782,7 @@ def get_spd(nbyte: int, t0: float, t: Optional[float] = None) -> str:
     if t is None:
         t = time.time()
 
-    bps = nbyte / ((t - t0) + 0.001)
+    bps = nbyte / ((t - t0) or 0.001)
     s1 = humansize(nbyte).replace(" ", "\033[33m").replace("iB", "")
     s2 = humansize(bps).replace(" ", "\033[35m").replace("iB", "")
     return "%s \033[0m%s/s\033[0m" % (s1, s2)
@@ -1920,10 +1934,10 @@ def ipnorm(ip: str) -> str:
     return ip
 
 
-def find_prefix(ips: list[str], netdevs: dict[str, Netdev]) -> list[str]:
+def find_prefix(ips: list[str], cidrs: list[str]) -> list[str]:
     ret = []
     for ip in ips:
-        hit = next((x for x in netdevs if x.startswith(ip + "/")), None)
+        hit = next((x for x in cidrs if x.startswith(ip + "/") or ip == x), None)
         if hit:
             ret.append(hit)
     return ret
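A worked example of the updated `find_prefix`: an IP now also matches a bare non-cidr entry equal to itself, in addition to `ip/prefix` entries, which is what lets NetMap be built from plain config strings instead of OS netdevs.

```python
def find_prefix(ips, cidrs):  # as in the hunk above
    ret = []
    for ip in ips:
        hit = next((x for x in cidrs if x.startswith(ip + "/") or ip == x), None)
        if hit:
            ret.append(hit)
    return ret

print(find_prefix(["10.0.0.1", "172.16.1.1"], ["10.0.0.1/8", "172.16.1.1"]))
# ['10.0.0.1/8', '172.16.1.1']
```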
@@ -2234,10 +2248,11 @@ def shut_socket(log: "NamedLogger", sck: socket.socket, timeout: int = 3) -> Non
     sck.close()
 
 
-def read_socket(sr: Unrecv, total_size: int) -> Generator[bytes, None, None]:
+def read_socket(
+    sr: Unrecv, bufsz: int, total_size: int
+) -> Generator[bytes, None, None]:
     remains = total_size
     while remains > 0:
-        bufsz = 32 * 1024
         if bufsz > remains:
             bufsz = remains
 
@@ -2251,16 +2266,16 @@ def read_socket(sr: Unrecv, total_size: int) -> Generator[bytes, None, None]:
         yield buf
 
 
-def read_socket_unbounded(sr: Unrecv) -> Generator[bytes, None, None]:
+def read_socket_unbounded(sr: Unrecv, bufsz: int) -> Generator[bytes, None, None]:
     try:
         while True:
-            yield sr.recv(32 * 1024)
+            yield sr.recv(bufsz)
     except:
         return
 
 
 def read_socket_chunked(
-    sr: Unrecv, log: Optional["NamedLogger"] = None
+    sr: Unrecv, bufsz: int, log: Optional["NamedLogger"] = None
 ) -> Generator[bytes, None, None]:
     err = "upload aborted: expected chunk length, got [{}] |{}| instead"
     while True:
@@ -2294,7 +2309,7 @@ def read_socket_chunked(
         if log:
             log("receiving %d byte chunk" % (chunklen,))
 
-        for chunk in read_socket(sr, chunklen):
+        for chunk in read_socket(sr, bufsz, chunklen):
             yield chunk
 
         x = sr.recv_ex(2, False)
@@ -2317,10 +2332,46 @@ def list_ips() -> list[str]:
     return list(ret)
 
 
-def yieldfile(fn: str) -> Generator[bytes, None, None]:
-    with open(fsenc(fn), "rb", 512 * 1024) as f:
+def build_netmap(csv: str):
+    csv = csv.lower().strip()
+
+    if csv in ("any", "all", "no", ",", ""):
+        return None
+
+    if csv in ("lan", "local", "private", "prvt"):
+        csv = "10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, fd00::/8"  # lan
+        csv += ", 169.254.0.0/16, fe80::/10"  # link-local
+        csv += ", 127.0.0.0/8, ::1/128"  # loopback
+
+    srcs = [x.strip() for x in csv.split(",") if x.strip()]
+    cidrs = []
+    for zs in srcs:
+        if not zs.endswith("."):
+            cidrs.append(zs)
+            continue
+
+        # translate old syntax "172.19." => "172.19.0.0/16"
+        words = len(zs.rstrip(".").split("."))
+        if words == 1:
+            zs += "0.0.0/8"
+        elif words == 2:
+            zs += "0.0/16"
+        elif words == 3:
+            zs += "0/24"
+        else:
+            raise Exception("invalid config value [%s]" % (zs,))
+
+        cidrs.append(zs)
+
+    ips = [x.split("/")[0] for x in cidrs]
+    return NetMap(ips, cidrs, True)
+
+
+def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
+    readsz = min(bufsz, 128 * 1024)
+    with open(fsenc(fn), "rb", bufsz) as f:
         while True:
-            buf = f.read(128 * 1024)
+            buf = f.read(readsz)
             if not buf:
                 break
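Worked examples of the legacy-prefix translation in `build_netmap` above, where a trailing dot marks the old shorthand (`expand` is a hypothetical name for the inline logic):

```python
def expand(zs):  # hypothetical name for the inline translation above
    words = len(zs.rstrip(".").split("."))
    return zs + {1: "0.0.0/8", 2: "0.0/16", 3: "0/24"}[words]

assert expand("10.") == "10.0.0.0/8"
assert expand("172.19.") == "172.19.0.0/16"
assert expand("192.168.1.") == "192.168.1.0/24"
```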
@@ -17,8 +17,10 @@ window.baguetteBox = (function () {
             titleTag: false,
             async: false,
             preload: 2,
+            refocus: true,
             afterShow: null,
             afterHide: null,
+            duringHide: null,
             onChange: null,
         },
         overlay, slider, btnPrev, btnNext, btnHelp, btnAnim, btnRotL, btnRotR, btnSel, btnFull, btnVmode, btnClose,
@@ -144,7 +146,7 @@ window.baguetteBox = (function () {
         selectorData.galleries.push(gallery);
     });
 
-    return selectorData.galleries;
+    return [selectorData.galleries, options];
 }
 
 function clearCachedData() {
@@ -593,6 +595,9 @@ window.baguetteBox = (function () {
     if (overlay.style.display === 'none')
         return;
 
+    if (options.duringHide)
+        options.duringHide();
+
     sethash('');
     unbindEvents();
     try {
@@ -613,7 +618,7 @@ window.baguetteBox = (function () {
         if (options.afterHide)
            options.afterHide();
 
-        documentLastFocus && documentLastFocus.focus();
+        options.refocus && documentLastFocus && documentLastFocus.focus();
         isOverlayVisible = false;
         unvid();
         unfig();
@@ -494,6 +494,7 @@ html.dz {
 
     text-shadow: none;
     font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
 }
 html.dy {
     --fg: #000;
@@ -603,6 +604,7 @@ html {
     color: var(--fg);
     background: var(--bgg);
     font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
     text-shadow: 1px 1px 0px var(--bg-max);
 }
 html, body {
@@ -611,6 +613,7 @@ html, body {
 }
 pre, code, tt, #doc, #doc>code {
     font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
 }
 .ayjump {
     position: fixed;
@@ -759,6 +762,7 @@ html #files.hhpick thead th {
 }
 #files tbody td:nth-child(3) {
     font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
     text-align: right;
     padding-right: 1em;
     white-space: nowrap;
@@ -821,6 +825,7 @@ html.y #path a:hover {
 .logue.raw {
     white-space: pre;
     font-family: 'scp', 'consolas', monospace;
+    font-family: var(--font-mono), 'scp', 'consolas', monospace;
 }
 #doc>iframe,
 .logue>iframe {
@@ -985,6 +990,10 @@ html.y #path a:hover {
     margin: 0 auto;
     display: block;
 }
+#ggrid.nocrop>a img {
+    max-height: 20em;
+    max-height: calc(var(--grid-sz)*2);
+}
 #ggrid>a.dir:before {
     content: '📂';
 }
@@ -1151,9 +1160,6 @@ html.y #widget.open {
 @keyframes spin {
     100% {transform: rotate(360deg)}
 }
-@media (prefers-reduced-motion) {
-    @keyframes spin { }
-}
 @keyframes fadein {
     0% {opacity: 0}
     100% {opacity: 1}
@@ -1247,6 +1253,13 @@ html.y #widget.open {
     0% {opacity:0}
     100% {opacity:1}
 }
+#ggrid>a.glow {
+    animation: gexit .6s ease-out;
+}
+@keyframes gexit {
+    0% {box-shadow: 0 0 0 2em var(--a)}
+    100% {box-shadow: 0 0 0em 0em var(--a)}
+}
 #wzip a {
     font-size: .4em;
     margin: -.3em .1em;
@@ -1409,6 +1422,7 @@ input[type="checkbox"]:checked+label {
 }
 html.dz input {
     font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
 }
 .opwide div>span>input+label {
     padding: .3em 0 .3em .3em;
@@ -1694,6 +1708,7 @@ html.y #tree.nowrap .ntree a+a:hover {
 }
 .ntree a:first-child {
     font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
     font-size: 1.2em;
     line-height: 0;
 }
@@ -1824,6 +1839,10 @@ html.y #tree.nowrap .ntree a+a:hover {
     margin: 0;
     padding: 0;
 }
+#unpost td:nth-child(3),
+#unpost td:nth-child(4) {
+    text-align: right;
+}
 #rui {
     background: #fff;
     background: var(--bg);
@@ -1851,6 +1870,7 @@ html.y #tree.nowrap .ntree a+a:hover {
 }
 #rn_vadv input {
     font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
 }
 #rui td+td,
 #rui td input[type="text"] {
@@ -1914,6 +1934,7 @@ html.y #doc {
 #doc.mdo {
     white-space: normal;
     font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
 }
 #doc.prism * {
     line-height: 1.5em;
@@ -1973,6 +1994,7 @@ a.btn,
 }
 #hkhelp td:first-child {
     font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
 }
 html.noscroll,
 html.noscroll .sbar {
@@ -2482,6 +2504,7 @@ html.y #bbox-overlay figcaption a {
 }
 #op_up2k.srch td.prog {
     font-family: sans-serif;
+    font-family: var(--font-main), sans-serif;
     font-size: 1em;
     width: auto;
 }
@@ -2496,6 +2519,7 @@ html.y #bbox-overlay figcaption a {
     white-space: nowrap;
     display: inline-block;
     font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
 }
 #u2etas.o {
     width: 20em;
@@ -2565,6 +2589,7 @@ html.y #bbox-overlay figcaption a {
 #u2cards span {
     color: var(--fg-max);
     font-family: 'scp', monospace;
+    font-family: var(--font-mono), 'scp', monospace;
 }
 #u2cards > a:nth-child(4) > span {
     display: inline-block;
@@ -2730,6 +2755,7 @@ html.b #u2conf a.b:hover {
 }
 .prog {
     font-family: 'scp', monospace, monospace;
+    font-family: var(--font-mono), 'scp', monospace, monospace;
 }
 #u2tab span.inf,
 #u2tab span.ok,
@@ -3138,7 +3164,7 @@ html.d #treepar {
         margin-top: 1.7em;
     }
 }
-@supports (display: grid) {
+@supports (display: grid) and (gap: 1em) {
 #ggrid {
     display: grid;
     margin: 0em 0.25em;
@@ -3163,3 +3189,24 @@ html.d #treepar {
         padding: 0.2em;
     }
 }
+
+
+
+
+
+@media (prefers-reduced-motion) {
+    @keyframes spin { }
+    @keyframes gexit { }
+    @keyframes bounce { }
+    @keyframes bounceFromLeft { }
+    @keyframes bounceFromRight { }
+
+    #ggrid>a:before,
+    #widget.anim,
+    #u2tabw,
+    .dropdesc,
+    .dropdesc b,
+    .dropdesc>div>div {
+        transition: none;
+    }
+}
@@ -7,9 +7,9 @@
     <meta http-equiv="X-UA-Compatible" content="IE=edge">
     <meta name="viewport" content="width=device-width, initial-scale=0.8, minimum-scale=0.6">
     <meta name="theme-color" content="#333">
-    {{ html_head }}
     <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
     <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/browser.css?_={{ ts }}">
+    {{ html_head }}
     {%- if css %}
     <link rel="stylesheet" media="screen" href="{{ css }}_={{ ts }}">
     {%- endif %}
@@ -161,3 +161,4 @@
 </body>
 
 </html>
+
@@ -102,7 +102,7 @@ var Ls = {
     "access": " access",
     "ot_close": "close submenu",
     "ot_search": "search for files by attributes, path / name, music tags, or any combination of those$N$N<code>foo bar</code> = must contain both «foo» and «bar»,$N<code>foo -bar</code> = must contain «foo» but not «bar»,$N<code>^yana .opus$</code> = start with «yana» and be an «opus» file$N<code>\"try unite\"</code> = contain exactly «try unite»$N$Nthe date format is iso-8601, like$N<code>2009-12-31</code> or <code>2020-09-12 23:30:00</code>",
-    "ot_unpost": "unpost: delete your recent uploads",
+    "ot_unpost": "unpost: delete your recent uploads, or abort unfinished ones",
     "ot_bup": "bup: basic uploader, even supports netscape 4.0",
     "ot_mkdir": "mkdir: create a new directory",
     "ot_md": "new-md: create a new markdown document",
@@ -194,6 +194,7 @@ var Ls = {
 
     "ct_thumb": "in grid-view, toggle icons or thumbnails$NHotkey: T",
     "ct_csel": "use CTRL and SHIFT for file selection in grid-view",
+    "ct_ihop": "when the image viewer is closed, scroll down to the last viewed file",
     "ct_dots": "show hidden files (if server permits)",
     "ct_dir1st": "sort folders before files",
     "ct_readme": "show README.md in folder listings",
@@ -239,13 +240,14 @@ var Ls = {
     "ml_drc": "dynamic range compressor",
 
     "mt_shuf": "shuffle the songs in each folder\">🔀",
+    "mt_aplay": "autoplay if there is a song-ID in the link you clicked to access the server$N$Ndisabling this will also stop the page URL from being updated with song-IDs when playing music, to prevent autoplay if these settings are lost but the URL remains\">a▶",
     "mt_preload": "start loading the next song near the end for gapless playback\">preload",
     "mt_prescan": "go to the next folder before the last song$Nends, keeping the webbrowser happy$Nso it doesn't stop the playback\">nav",
     "mt_fullpre": "try to preload the entire song;$N✅ enable on <b>unreliable</b> connections,$N❌ <b>disable</b> on slow connections probably\">full",
     "mt_waves": "waveform seekbar:$Nshow audio amplitude in the scrubber\">~s",
     "mt_npclip": "show buttons for clipboarding the currently playing song\">/np",
     "mt_octl": "os integration (media hotkeys / osd)\">os-ctl",
-    "mt_oseek": "allow seeking through os integration\">seek",
+    "mt_oseek": "allow seeking through os integration$N$Nnote: on some devices (iPhones),$Nthis replaces the next-song button\">seek",
     "mt_oscv": "show album cover in osd\">art",
     "mt_follow": "keep the playing track scrolled into view\">🎯",
     "mt_compact": "compact controls\">⟎",
@@ -276,8 +278,6 @@ var Ls = {
     "mm_prescan": "Looking for music to play next...",
     "mm_scank": "Found the next song:",
     "mm_uncache": "cache cleared; all songs will redownload on next playback",
-    "mm_pwrsv": "<p>it looks like playback is being interrupted by your phone's power-saving settings!</p>" + '<p>please go to <a target="_blank" href="https://user-images.githubusercontent.com/241032/235262121-2ffc51ae-7821-4310-a322-c3b7a507890c.png">the app settings of your browser</a> and then <a target="_blank" href="https://user-images.githubusercontent.com/241032/235262123-c328cca9-3930-4948-bd18-3949b9fd3fcf.png">allow unrestricted battery usage</a> to fix it.</p><p><em>however,</em> it could also be due to the browser\'s autoplay settings;</p><p>Firefox: tap the icon on the left side of the address bar, then select "autoplay" and "allow audio"</p><p>Chrome: the problem will gradually dissipate as you play more music on this site</p>',
-    "mm_iosblk": "<p>your web browser thinks the audio playback is unwanted, and it decided to block playback until you start another track manually... unfortunately we are both powerless in telling it otherwise</p><p>supposedly this will get better as you continue playing music on this site, but I'm unfamiliar with apple devices so idk if that's true</p><p>you could try another browser, maybe firefox or chrome?</p>",
     "mm_hnf": "that song no longer exists",
 
     "im_hnf": "that image no longer exists",
@@ -349,7 +349,8 @@ var Ls = {
     "tvt_edit": "open file in text editor$NHotkey: E\">✏️ edit",
 
     "gt_msel": "enable file selection; ctrl-click a file to override$N$N<em>when active: doubleclick a file / folder to open it</em>$N$NHotkey: S\">multiselect",
-    "gt_full": "show uncropped thumbnails\">full",
+    "gt_crop": "center-crop thumbnails\">crop",
+    "gt_3x": "hi-res thumbnails\">3x",
     "gt_zoom": "zoom",
     "gt_chop": "chop",
     "gt_sort": "sort by",
@@ -386,6 +387,8 @@ var Ls = {
     "md_eshow": "cannot render ",
     "md_off": "[📜<em>readme</em>] disabled in [⚙️] -- document hidden",
 
+    "badreply": "Failed to parse reply from server",
+
     "xhr403": "403: Access denied\n\ntry pressing F5, maybe you got logged out",
     "cf_ok": "sorry about that -- DD" + wah + "oS protection kicked in\n\nthings should resume in about 30 sec\n\nif nothing happens, hit F5 to reload the page",
     "tl_xe1": "could not list subfolders:\n\nerror ",
@@ -407,7 +410,7 @@ var Ls = {
     "fz_zipd": "zip with traditional cp437 filenames, for really old software",
     "fz_zipc": "cp437 with crc32 computed early,$Nfor MS-DOS PKZIP v2.04g (october 1993)$N(takes longer to process before download can start)",
 
-    "un_m1": "you can delete your recent uploads below",
+    "un_m1": "you can delete your recent uploads (or abort unfinished ones) below",
     "un_upd": "refresh",
     "un_m4": "or share the files visible below:",
     "un_ulist": "show",
@@ -416,12 +419,15 @@ var Ls = {
     "un_fclr": "clear filter",
     "un_derr": 'unpost-delete failed:\n',
     "un_f5": 'something broke, please try a refresh or hit F5',
+    "un_nou": '<b>warning:</b> server too busy to show unfinished uploads; click the "refresh" link in a bit',
+    "un_noc": '<b>warning:</b> unpost of fully uploaded files is not enabled/permitted in server config',
     "un_max": "showing first 2000 files (use the filter)",
-    "un_avail": "{0} uploads can be deleted",
-    "un_m2": "sorted by upload time – most recent first:",
+    "un_avail": "{0} recent uploads can be deleted<br />{1} unfinished ones can be aborted",
+    "un_m2": "sorted by upload time; most recent first:",
     "un_no1": "sike! no uploads are sufficiently recent",
     "un_no2": "sike! no uploads matching that filter are sufficiently recent",
     "un_next": "delete the next {0} files below",
+    "un_abrt": "abort",
     "un_del": "delete",
     "un_m3": "loading your recent uploads...",
     "un_busy": "deleting {0} files...",
@@ -689,6 +695,7 @@ var Ls = {
 
     "ct_thumb": "vis miniatyrbilder istedenfor ikoner$NSnarvei: T",
     "ct_csel": "bruk tastene CTRL og SHIFT for markering av filer i ikonvisning",
+    "ct_ihop": "bla ned til sist viste bilde når bildeviseren lukkes",
     "ct_dots": "vis skjulte filer (gitt at serveren tillater det)",
     "ct_dir1st": "sorter slik at mapper kommer foran filer",
     "ct_readme": "vis README.md nedenfor filene",
@@ -734,13 +741,14 @@ var Ls = {
     "ml_drc": "compressor (volum-utjevning)",
 
     "mt_shuf": "sangene i hver mappe$Nspilles i tilfeldig rekkefølge\">🔀",
+    "mt_aplay": "forsøk å starte avspilling hvis linken du klikket på for å åpne nettsiden inneholder en sang-ID$N$Nhvis denne deaktiveres så vil heller ikke nettside-URLen bli oppdatert med sang-ID'er når musikk spilles, i tilfelle innstillingene skulle gå tapt og nettsiden lastes på ny\">a▶",
     "mt_preload": "hent ned litt av neste sang i forkant,$Nslik at pausen i overgangen blir mindre\">forles",
     "mt_prescan": "ved behov, bla til neste mappe$Nslik at nettleseren lar oss$Nfortsette å spille musikk\">bla",
     "mt_fullpre": "hent ned hele neste sang, ikke bare litt:$N✅ skru på hvis nettet ditt er <b>ustabilt</b>,$N❌ skru av hvis nettet ditt er <b>tregt</b>\">full",
     "mt_waves": "waveform seekbar:$Nvis volumkurve i avspillingsfeltet\">~s",
     "mt_npclip": "vis knapper for å kopiere info om sangen du hører på\">/np",
     "mt_octl": "integrering med operativsystemet (fjernkontroll, info-skjerm)\">os-ctl",
-    "mt_oseek": "tillat spoling med fjernkontroll\">spoling",
+    "mt_oseek": "tillat spoling med fjernkontroll$N$Nmerk: på noen enheter (iPhones) så vil$Ndette erstatte knappen for neste sang\">spoling",
     "mt_oscv": "vis album-cover på infoskjermen\">bilde",
     "mt_follow": "bla slik at sangen som spilles alltid er synlig\">🎯",
     "mt_compact": "tettpakket avspillerpanel\">⟎",
@@ -771,8 +779,6 @@ var Ls = {
     "mm_prescan": "Leter etter neste sang...",
     "mm_scank": "Fant neste sang:",
     "mm_uncache": "alle sanger vil lastes på nytt ved neste avspilling",
-    "mm_pwrsv": "<p>det ser ut som musikken ble avbrutt av telefonen sine strømsparings-innstillinger!</p>" + '<p>ta en tur innom <a target="_blank" href="https://user-images.githubusercontent.com/241032/235262121-2ffc51ae-7821-4310-a322-c3b7a507890c.png">app-innstillingene til nettleseren din</a> og så <a target="_blank" href="https://user-images.githubusercontent.com/241032/235262123-c328cca9-3930-4948-bd18-3949b9fd3fcf.png">tillat ubegrenset batteriforbruk</a></p><p>NB: det kan også være pga. autoplay-innstillingene, så prøv dette:</p><p>Firefox: klikk på ikonet i venstre side av addressefeltet, velg "autoplay" og "tillat lyd"</p><p>Chrome: problemet vil minske gradvis jo mer musikk du spiller på denne siden</p>',
-    "mm_iosblk": "<p>nettleseren din tror at musikken er uønsket, og den bestemte seg for å stoppe avspillingen slik at du manuelt må velge en ny sang... dessverre er både du og jeg maktesløse når den har bestemt seg.</p><p>det ryktes at problemet vil minske jo mer musikk du spiller på denne siden, men jeg er ikke godt kjent med apple-dingser så jeg er ikke sikker.</p><p>kanskje firefox eller chrome fungerer bedre?</p>",
     "mm_hnf": "sangen finnes ikke lenger",
 
     "im_hnf": "bildet finnes ikke lenger",
@@ -844,7 +850,8 @@ var Ls = {
     "tvt_edit": "redigér filen$NSnarvei: E\">✏️ endre",
 
     "gt_msel": "markér filer istedenfor å åpne dem; ctrl-klikk filer for å overstyre$N$N<em>når aktiv: dobbelklikk en fil / mappe for å åpne</em>$N$NSnarvei: S\">markering",
-    "gt_full": "ikke beskjær bildene\">full",
+    "gt_crop": "beskjær ikonene så de passer bedre\">✂",
+    "gt_3x": "høyere oppløsning på ikoner\">3x",
     "gt_zoom": "zoom",
     "gt_chop": "trim",
     "gt_sort": "sorter",
@@ -881,6 +888,8 @@ var Ls = {
     "md_eshow": "viser forenklet ",
     "md_off": "[📜<em>readme</em>] er avskrudd i [⚙️] -- dokument skjult",
 
+    "badreply": "Ugyldig svar ifra serveren",
+
     "xhr403": "403: Tilgang nektet\n\nkanskje du ble logget ut? prøv å trykk F5",
     "cf_ok": "beklager -- liten tilfeldig kontroll, alt OK\n\nting skal fortsette om ca. 30 sekunder\n\nhvis ikkeno skjer, trykk F5 for å laste siden på nytt",
     "tl_xe1": "kunne ikke hente undermapper:\n\nfeil ",
@@ -902,7 +911,7 @@ var Ls = {
     "fz_zipd": "zip med filnavn i cp437, for høggamle maskiner",
     "fz_zipc": "cp437 med tidlig crc32,$Nfor MS-DOS PKZIP v2.04g (oktober 1993)$N(øker behandlingstid på server)",
 
-    "un_m1": "nedenfor kan du angre / slette filer som du nylig har lastet opp",
+    "un_m1": "nedenfor kan du angre / slette filer som du nylig har lastet opp, eller avbryte ufullstendige opplastninger",
     "un_upd": "oppdater",
     "un_m4": "eller hvis du vil dele nedlastnings-lenkene:",
     "un_ulist": "vis",
@@ -911,12 +920,15 @@ var Ls = {
     "un_fclr": "nullstill filter",
     "un_derr": 'unpost-sletting feilet:\n',
     "un_f5": 'noe gikk galt, prøv å oppdatere listen eller trykk F5',
+    "un_nou": '<b>advarsel:</b> kan ikke vise ufullstendige opplastninger akkurat nå; klikk på oppdater-linken om litt',
+    "un_noc": '<b>advarsel:</b> angring av fullførte opplastninger er deaktivert i serverkonfigurasjonen',
     "un_max": "viser de første 2000 filene (bruk filteret for å innsnevre)",
-    "un_avail": "{0} filer kan slettes",
-    "un_m2": "sortert etter opplastningstid – nyeste først:",
+    "un_avail": "{0} nylig opplastede filer kan slettes<br />{1} ufullstendige opplastninger kan avbrytes",
+    "un_m2": "sortert etter opplastningstid; nyeste først:",
     "un_no1": "men nei, her var det jaggu ikkeno som slettes kan",
     "un_no2": "men nei, her var det jaggu ingenting som passet overens med filteret",
     "un_next": "slett de neste {0} filene nedenfor",
+    "un_abrt": "avbryt",
     "un_del": "slett",
     "un_m3": "henter listen med nylig opplastede filer...",
     "un_busy": "sletter {0} filer...",
@@ -963,7 +975,7 @@ var Ls = {
     "u_emtleakf": 'prøver følgende:\n<ul><li>trykk F5 for å laste siden på nytt</li><li>så skru på <code>🥔</code> ("enkelt UI") i opplasteren</li><li>og forsøk den samme opplastningen igjen</li></ul>\nPS: Firefox <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1790500">fikser forhåpentligvis feilen</a> en eller annen gang',
     "u_s404": "ikke funnet på serveren",
     "u_expl": "forklar",
-    "u_maxconn": "de fleste nettlesere tillater ikke mer enn 6, men firefox lar deg øke grensen med <code>connections-per-server</code> in <code>about:config</code>",
+    "u_maxconn": "de fleste nettlesere tillater ikke mer enn 6, men firefox lar deg øke grensen med <code>connections-per-server</code> i <code>about:config</code>",
     "u_tu": '<p class="warn">ADVARSEL: turbo er på, <span> avbrutte opplastninger vil muligens ikke oppdages og gjenopptas; hold musepekeren over turbo-knappen for mer info</span></p>',
     "u_ts": '<p class="warn">ADVARSEL: turbo er på, <span> søkeresultater kan være feil; hold musepekeren over turbo-knappen for mer info</span></p>',
     "u_turbo_c": "turbo er deaktivert i serverkonfigurasjonen",
@@ -1020,7 +1032,7 @@ modal.load();
|
||||
ebi('ops').innerHTML = (
|
||||
'<a href="#" data-dest="" tt="' + L.ot_close + '">--</a>' +
|
||||
'<a href="#" data-perm="read" data-dep="idx" data-dest="search" tt="' + L.ot_search + '">🔎</a>' +
|
||||
(have_del && have_unpost ? '<a href="#" data-dest="unpost" data-dep="idx" tt="' + L.ot_unpost + '">🧯</a>' : '') +
|
||||
(have_del ? '<a href="#" data-dest="unpost" tt="' + L.ot_unpost + '">🧯</a>' : '') +
|
||||
'<a href="#" data-dest="up2k">🚀</a>' +
|
||||
'<a href="#" data-perm="write" data-dest="bup" tt="' + L.ot_bup + '">🎈</a>' +
|
||||
'<a href="#" data-perm="write" data-dest="mkdir" tt="' + L.ot_mkdir + '">📂</a>' +
|
||||
@@ -1183,6 +1195,7 @@ ebi('op_cfg').innerHTML = (
|
||||
' <a id="griden" class="tgl btn" href="#" tt="' + L.wt_grid + '">田 the grid</a>\n' +
|
||||
' <a id="thumbs" class="tgl btn" href="#" tt="' + L.ct_thumb + '">🖼️ thumbs</a>\n' +
|
||||
' <a id="csel" class="tgl btn" href="#" tt="' + L.ct_csel + '">sel</a>\n' +
|
||||
' <a id="ihop" class="tgl btn" href="#" tt="' + L.ct_ihop + '">g⮯</a>\n' +
|
||||
' <a id="dotfiles" class="tgl btn" href="#" tt="' + L.ct_dots + '">dotfiles</a>\n' +
|
||||
' <a id="dir1st" class="tgl btn" href="#" tt="' + L.ct_dir1st + '">📁 first</a>\n' +
|
||||
' <a id="ireadme" class="tgl btn" href="#" tt="' + L.ct_readme + '">📜 readme</a>\n' +
|
||||
@@ -1396,6 +1409,7 @@ var mpl = (function () {
|
||||
ebi('op_player').innerHTML = (
|
||||
'<div><h3>' + L.cl_opts + '</h3><div>' +
|
||||
'<a href="#" class="tgl btn" id="au_shuf" tt="' + L.mt_shuf + '</a>' +
|
||||
'<a href="#" class="tgl btn" id="au_aplay" tt="' + L.mt_aplay + '</a>' +
|
||||
'<a href="#" class="tgl btn" id="au_preload" tt="' + L.mt_preload + '</a>' +
|
||||
'<a href="#" class="tgl btn" id="au_prescan" tt="' + L.mt_prescan + '</a>' +
|
||||
'<a href="#" class="tgl btn" id="au_fullpre" tt="' + L.mt_fullpre + '</a>' +
|
||||
@@ -1441,6 +1455,7 @@ var mpl = (function () {
|
||||
bcfg_bind(r, 'shuf', 'au_shuf', false, function () {
|
||||
mp.read_order(); // don't bind
|
||||
});
|
||||
bcfg_bind(r, 'aplay', 'au_aplay', true);
|
||||
bcfg_bind(r, 'preload', 'au_preload', true);
|
||||
bcfg_bind(r, 'prescan', 'au_prescan', true);
|
||||
bcfg_bind(r, 'fullpre', 'au_fullpre', false);
|
||||
@@ -1649,7 +1664,6 @@ var re_au_native = can_ogg ? /\.(aac|flac|m4a|mp3|ogg|opus|wav)$/i :
|
||||
|
||||
// extract songs + add play column
|
||||
var mpo = { "au": null, "au2": null, "acs": null };
|
||||
var t_fchg = 0;
|
||||
function MPlayer() {
|
||||
var r = this;
|
||||
r.id = Date.now();
|
||||
@@ -2304,7 +2318,7 @@ function seek_au_sec(seek) {
|
||||
|
||||
|
||||
function song_skip(n, dirskip) {
|
||||
var tid = mp.au ? mp.au.tid : null,
|
||||
var tid = mp.au && mp.au.evp == get_evpath() ? mp.au.tid : null,
|
||||
ofs = tid ? mp.order.indexOf(tid) : -1;
|
||||
|
||||
if (dirskip && ofs + 1 && ofs > mp.order.length - 2) {
|
||||
@@ -2319,15 +2333,7 @@ function song_skip(n, dirskip) {
|
||||
else
|
||||
play(mp.order[n == -1 ? mp.order.length - 1 : 0]);
|
||||
}
|
||||
function next_song_sig(e) {
|
||||
t_fchg = document.hasFocus() ? 0 : Date.now();
|
||||
return next_song_cmn(e);
|
||||
}
|
||||
function next_song(e) {
|
||||
t_fchg = 0;
|
||||
return next_song_cmn(e);
|
||||
}
|
||||
function next_song_cmn(e) {
|
||||
ev(e);
|
||||
if (mp.order.length) {
|
||||
var dirskip = mpl.traversals;
|
||||
@@ -2335,17 +2341,12 @@ function next_song_cmn(e) {
|
||||
return song_skip(1, dirskip);
|
||||
}
|
||||
if (mpl.traversals++ < 5) {
|
||||
if (MOBILE && t_fchg && Date.now() - t_fchg > 30 * 1000)
|
||||
modal.alert(IPHONE ? L.mm_iosblk : L.mm_pwrsv);
|
||||
|
||||
t_fchg = document.hasFocus() ? 0 : Date.now();
|
||||
treectl.ls_cb = next_song_cmn;
|
||||
treectl.ls_cb = next_song;
|
||||
return tree_neigh(1);
|
||||
}
|
||||
toast.inf(10, L.mm_nof);
|
||||
console.log("mm_nof2");
|
||||
mpl.traversals = 0;
|
||||
t_fchg = 0;
|
||||
}
|
||||
function last_song(e) {
|
||||
ev(e);
|
||||
@@ -2360,7 +2361,6 @@ function last_song(e) {
|
||||
toast.inf(10, L.mm_nof);
|
||||
console.log("mm_nof2");
|
||||
mpl.traversals = 0;
|
||||
t_fchg = 0;
|
||||
}
|
||||
function prev_song(e) {
|
||||
ev(e);
|
||||
@@ -2508,15 +2508,6 @@ var mpui = (function () {
|
||||
pbar.drawbuf();
|
||||
}
|
||||
|
||||
if (pos > 0.3 && t_fchg) {
|
||||
// cannot check document.hasFocus to avoid false positives;
|
||||
// it continues on power-on, doesn't need to be in-browser
|
||||
if (MOBILE && Date.now() - t_fchg > 30 * 1000)
|
||||
modal.alert(IPHONE ? L.mm_iosblk : L.mm_pwrsv);
|
||||
|
||||
t_fchg = 0;
|
||||
}
|
||||
|
||||
// preload next song
|
||||
if (mpl.preload && preloaded != mp.au.rsrc) {
|
||||
var len = mp.au.duration,
|
||||
@@ -3018,7 +3009,6 @@ function play(tid, is_ev, seek) {
|
||||
tn = 0;
|
||||
}
|
||||
else if (mpl.pb_mode == 'next') {
|
||||
t_fchg = document.hasFocus() ? 0 : Date.now();
|
||||
treectl.ls_cb = next_song;
|
||||
return tree_neigh(1);
|
||||
}
|
||||
@@ -3048,7 +3038,7 @@ function play(tid, is_ev, seek) {
|
||||
mp.au.onerror = evau_error;
|
||||
mp.au.onprogress = pbar.drawpos;
|
||||
mp.au.onplaying = mpui.progress_updater;
|
||||
mp.au.onended = next_song_sig;
|
||||
mp.au.onended = next_song;
|
||||
widget.open();
|
||||
}
|
||||
|
||||
@@ -3065,7 +3055,7 @@ function play(tid, is_ev, seek) {
|
||||
mp.au.onerror = evau_error;
|
||||
mp.au.onprogress = pbar.drawpos;
|
||||
mp.au.onplaying = mpui.progress_updater;
|
||||
mp.au.onended = next_song_sig;
|
||||
mp.au.onended = next_song;
|
||||
t = mp.au.currentTime;
|
||||
if (isNum(t) && t > 0.1)
|
||||
mp.au.currentTime = 0;
|
||||
@@ -3098,7 +3088,9 @@ function play(tid, is_ev, seek) {
|
||||
|
||||
try {
|
||||
mp.nopause();
|
||||
mp.au.play();
|
||||
if (mpl.aplay || is_ev !== -1)
|
||||
mp.au.play();
|
||||
|
||||
if (mp.au.paused)
|
||||
autoplay_blocked(seek);
|
||||
else if (seek) {
|
||||
@@ -3108,7 +3100,8 @@ function play(tid, is_ev, seek) {
|
||||
if (!seek && !ebi('unsearch')) {
|
||||
var o = ebi(oid);
|
||||
o.setAttribute('id', 'thx_js');
|
||||
sethash(oid);
|
||||
if (mpl.aplay)
|
||||
sethash(oid);
|
||||
o.setAttribute('id', oid);
|
||||
}
|
||||
|
||||
@@ -3126,7 +3119,7 @@ function play(tid, is_ev, seek) {
|
||||
toast.err(0, esc(L.mm_playerr + basenames(ex)));
|
||||
}
|
||||
clmod(ebi(oid), 'act');
|
||||
setTimeout(next_song_sig, 5000);
|
||||
setTimeout(next_song, 5000);
|
||||
}
|
||||
|
||||
|
||||
@@ -3270,9 +3263,9 @@ function eval_hash() {
|
||||
|
||||
if (mtype == 'a') {
|
||||
if (!ts)
|
||||
return play(id);
|
||||
return play(id, -1);
|
||||
|
||||
return play(id, false, ts);
|
||||
return play(id, -1, ts);
|
||||
}
|
||||
|
||||
if (mtype == 'g') {
|
||||
@@ -4515,9 +4508,11 @@ var thegrid = (function () {
|
||||
gfiles.innerHTML = (
|
||||
'<div id="ghead" class="ghead">' +
|
||||
'<a href="#" class="tgl btn" id="gridsel" tt="' + L.gt_msel + '</a> ' +
|
||||
'<a href="#" class="tgl btn" id="gridfull" tt="' + L.gt_full + '</a> <span>' + L.gt_zoom + ': ' +
|
||||
'<a href="#" class="btn" z="-1.2" tt="Hotkey: shift-A">–</a> ' +
|
||||
'<a href="#" class="btn" z="1.2" tt="Hotkey: shift-D">+</a></span> <span>' + L.gt_chop + ': ' +
|
||||
'<a href="#" class="tgl btn" id="gridcrop" tt="' + L.gt_crop + '</a> ' +
|
||||
'<a href="#" class="tgl btn" id="grid3x" tt="' + L.gt_3x + '</a> ' +
|
||||
'<span>' + L.gt_zoom + ': ' +
|
||||
'<a href="#" class="btn" z="-1.1" tt="Hotkey: shift-A">–</a> ' +
|
||||
'<a href="#" class="btn" z="1.1" tt="Hotkey: shift-D">+</a></span> <span>' + L.gt_chop + ': ' +
|
||||
'<a href="#" class="btn" l="-1" tt="' + L.gt_c1 + '">–</a> ' +
|
||||
'<a href="#" class="btn" l="1" tt="' + L.gt_c2 + '">+</a></span> <span>' + L.gt_sort + ': ' +
|
||||
'<a href="#" s="href">' + L.gt_name + '</a> ' +
|
||||
@@ -4528,9 +4523,10 @@ var thegrid = (function () {
|
||||
'<div id="ggrid"></div>'
|
||||
);
|
||||
lfiles.parentNode.insertBefore(gfiles, lfiles);
|
||||
var ggrid = ebi('ggrid');
|
||||
|
||||
var r = {
|
||||
'sz': clamp(fcfg_get('gridsz', 10), 4, 40),
|
||||
'sz': clamp(fcfg_get('gridsz', 10), 4, 80),
|
||||
'ln': clamp(icfg_get('gridln', 3), 1, 7),
|
||||
'isdirty': true,
|
||||
'bbox': null
|
||||
@@ -4548,7 +4544,7 @@ var thegrid = (function () {
|
||||
if (l)
|
||||
return setln(parseInt(l));
|
||||
|
||||
var t = ebi('files').tHead.rows[0].cells;
|
||||
var t = lfiles.tHead.rows[0].cells;
|
||||
for (var a = 0; a < t.length; a++)
|
||||
if (t[a].getAttribute('name') == s) {
|
||||
t[a].click();
|
||||
@@ -4573,10 +4569,13 @@ var thegrid = (function () {
|
||||
|
||||
lfiles = ebi('files');
|
||||
gfiles = ebi('gfiles');
|
||||
ggrid = ebi('ggrid');
|
||||
|
||||
var vis = has(perms, "read");
|
||||
gfiles.style.display = vis && r.en ? '' : 'none';
|
||||
lfiles.style.display = vis && !r.en ? '' : 'none';
|
||||
clmod(ggrid, 'crop', r.crop);
|
||||
clmod(ggrid, 'nocrop', !r.crop);
|
||||
ebi('pro').style.display = ebi('epi').style.display = ebi('lazy').style.display = ebi('treeul').style.display = ebi('treepar').style.display = '';
|
||||
ebi('bdoc').style.display = 'none';
|
||||
clmod(ebi('wrap'), 'doc');
|
||||
@@ -4593,10 +4592,10 @@ var thegrid = (function () {
|
||||
|
||||
r.setdirty = function () {
|
||||
r.dirty = true;
|
||||
if (r.en) {
|
||||
if (r.en)
|
||||
loadgrid();
|
||||
}
|
||||
r.setvis();
|
||||
else
|
||||
r.setvis();
|
||||
};
|
||||
|
||||
function setln(v) {
|
||||
@@ -4616,7 +4615,7 @@ var thegrid = (function () {
|
||||
|
||||
function setsz(v) {
|
||||
if (v !== undefined) {
|
||||
r.sz = clamp(v, 4, 40);
|
||||
r.sz = clamp(v, 4, 80);
|
||||
swrite('gridsz', r.sz);
|
||||
setTimeout(r.tippen, 20);
|
||||
}
|
||||
@@ -4624,6 +4623,7 @@ var thegrid = (function () {
|
||||
document.documentElement.style.setProperty('--grid-sz', r.sz + 'em');
|
||||
}
|
||||
catch (ex) { }
|
||||
aligngriditems();
|
||||
}
|
||||
setsz();
|
||||
|
||||
@@ -4765,7 +4765,7 @@ var thegrid = (function () {
|
||||
pels[a].removeAttribute('tt');
|
||||
}
|
||||
|
||||
tt.att(ebi('ggrid'));
|
||||
tt.att(ggrid);
|
||||
};
|
||||
|
||||
function loadgrid() {
|
||||
@@ -4776,8 +4776,11 @@ var thegrid = (function () {
|
||||
if (!r.dirty)
|
||||
return r.loadsel();
|
||||
|
||||
if (dfull != r.full && !sread('gridfull'))
|
||||
bcfg_upd_ui('gridfull', r.full = dfull);
|
||||
if (dcrop.startsWith('f') || !sread('gridcrop'))
|
||||
bcfg_upd_ui('gridcrop', r.crop = ('y' == dcrop.slice(-1)));
|
||||
|
||||
if (dth3x.startsWith('f') || !sread('grid3x'))
|
||||
bcfg_upd_ui('grid3x', r.x3 = ('y' == dth3x.slice(-1)));
|
||||
|
||||
var html = [],
|
||||
svgs = new Set(),
|
||||
@@ -4796,8 +4799,10 @@ var thegrid = (function () {
|
||||
|
||||
if (r.thumbs) {
|
||||
ihref += '?th=' + (have_webp ? 'w' : 'j');
|
||||
if (r.full)
|
||||
ihref += 'f'
|
||||
if (!r.crop)
|
||||
ihref += 'f';
|
||||
if (r.x3)
|
||||
ihref += '3';
|
||||
if (href == "#")
|
||||
ihref = SR + '/.cpr/ico/' + (ref == 'moar' ? '++' : 'exit');
|
||||
}
@@ -4830,13 +4835,17 @@ var thegrid = (function () {
ihref = SR + '/.cpr/ico/' + ext;
}
ihref += (ihref.indexOf('?') > 0 ? '&' : '?') + 'cache=i&_=' + ACB;
if (CHROME)
ihref += "&raster";

html.push('<a href="' + ohref + '" ref="' + ref +
'"' + ac + ' ttt="' + esc(name) + '"><img style="height:' +
(r.sz / 1.25) + 'em" onload="th_onload(this)" src="' +
(r.sz / 1.25) + 'em" loading="lazy" onload="th_onload(this)" src="' +
ihref + '" /><span' + ac + '>' + ao.innerHTML + '</span></a>');
}
ebi('ggrid').innerHTML = html.join('\n');
ggrid.innerHTML = html.join('\n');
clmod(ggrid, 'crop', r.crop);
clmod(ggrid, 'nocrop', !r.crop);

var srch = ebi('unsearch'),
gsel = ebi('gridsel');

@@ -4865,7 +4874,12 @@ var thegrid = (function () {
if (r.bbox)
baguetteBox.destroy();

r.bbox = baguetteBox.run(isrc, {
var br = baguetteBox.run(isrc, {
duringHide: r.onhide,
afterShow: function () {
r.bbox_opts.refocus = true;
document.body.style.overflow = 'hidden';
},
captions: function (g) {
var idx = -1,
h = '' + g;

@@ -4881,11 +4895,71 @@ var thegrid = (function () {
onChange: function (i) {
sethash('g' + r.bbox[i].imageElement.getAttribute('ref'));
}
})[0];
});
r.bbox = br[0][0];
r.bbox_opts = br[1];
};

r.onhide = function () {
document.body.style.overflow = '';
if (!thegrid.ihop)
return;

try {
var el = QS('#ggrid a[ref="' + location.hash.slice(2) + '"]'),
f = function () {
try {
el.focus();
}
catch (ex) { }
};

f();
setTimeout(f, 10);
setTimeout(f, 100);
setTimeout(f, 200);
// thx fullscreen api

if (ANIM) {
clmod(el, 'glow', 1);
setTimeout(function () {
try {
clmod(el, 'glow');
}
catch (ex) { }
}, 600);
}
r.bbox_opts.refocus = false;
}
catch (ex) {
console.log('ihop:', ex);
}
};

r.set_crop = function (en) {
if (!dcrop.startsWith('f'))
return r.setdirty();

r.crop = dcrop.endsWith('y');
bcfg_upd_ui('gridcrop', r.crop);
if (r.crop != en)
toast.warn(10, L.ul_btnlk);
};

r.set_x3 = function (en) {
if (!dth3x.startsWith('f'))
return r.setdirty();

r.x3 = dth3x.endsWith('y');
bcfg_upd_ui('grid3x', r.x3);
if (r.x3 != en)
toast.warn(10, L.ul_btnlk);
};

bcfg_bind(r, 'thumbs', 'thumbs', true, r.setdirty);
bcfg_bind(r, 'full', 'gridfull', false, r.setdirty);
bcfg_bind(r, 'ihop', 'ihop', true);
bcfg_bind(r, 'crop', 'gridcrop', !dcrop.endsWith('n'), r.set_crop);
bcfg_bind(r, 'x3', 'grid3x', dth3x.endsWith('y'), r.set_x3);
bcfg_bind(r, 'sel', 'gridsel', false, r.loadsel);
bcfg_bind(r, 'en', 'griden', dgrid, function (v) {
v ? loadgrid() : r.setvis(true);

@@ -5461,7 +5535,7 @@ document.onkeydown = function (e) {

function xhr_search_results() {
if (this.status !== 200) {
var msg = unpre(this.responseText);
var msg = hunpre(this.responseText);
srch_msg(true, "http " + this.status + ": " + msg);
search_in_progress = 0;
return;

@@ -5563,23 +5637,29 @@ function aligngriditems() {
if (!treectl)
return;

var em2px = parseFloat(getComputedStyle(ebi('ggrid')).fontSize);
var gridsz = 10;
var ggrid = ebi('ggrid'),
em2px = parseFloat(getComputedStyle(ggrid).fontSize),
gridsz = 10;
try {
gridsz = cprop('--grid-sz').slice(0, -2);
}
catch (ex) { }
var gridwidth = ebi('ggrid').clientWidth;
var griditemcount = ebi('ggrid').children.length;
var totalgapwidth = em2px * griditemcount;
var gridwidth = ggrid.clientWidth,
griditemcount = ggrid.children.length,
totalgapwidth = em2px * griditemcount;

if (/b/.test(themen + ''))
totalgapwidth *= 2.8;

var val, st = ggrid.style;

if (((griditemcount * em2px) * gridsz) + totalgapwidth < gridwidth) {
ebi('ggrid').style.justifyContent = 'left';
val = 'left';
} else {
ebi('ggrid').style.justifyContent = treectl.hidden ? 'center' : 'space-between';
val = treectl.hidden ? 'center' : 'space-between';
}
if (st.justifyContent != val)
st.justifyContent = val;
}
onresize100.add(aligngriditems);

@@ -6084,13 +6164,14 @@ var treectl = (function () {

r.nextdir = null;
var cdir = get_evpath(),
cur = ebi('files').getAttribute('ts');
lfiles = ebi('files'),
cur = lfiles.getAttribute('ts');

if (cur && parseInt(cur) > this.ts) {
console.log("reject ls");
return;
}
ebi('files').setAttribute('ts', this.ts);
lfiles.setAttribute('ts', this.ts);

try {
var res = JSON.parse(this.responseText);

@@ -6110,7 +6191,8 @@ var treectl = (function () {
res.files[a].tags = {};

read_dsort(res.dsort);
dfull = res.dfull;
dcrop = res.dcrop;
dth3x = res.dth3x;

srvinf = res.srvinf;
try {

@@ -6552,7 +6634,7 @@ function apply_perms(res) {

ebi('acc_info').innerHTML = '<span id="srv_info2"><span>' + srvinf +
'</span></span><span' + aclass + axs + L.access + '</span>' + (acct != '*' ?
'<a href="' + SR + '/?pw=x">' + L.logout + acct + '</a>' :
'<a href="' + SR + '/?pw=x">' + (window.is_idp ? '' : L.logout) + acct + '</a>' :
'<a href="?h">Login</a>');

var o = QSA('#ops>a[data-perm]');

@@ -7393,7 +7475,7 @@ var msel = (function () {
xhrchk(this, L.fd_xe1, L.fd_xe2);

if (this.status !== 201) {
sf.textContent = 'error: ' + unpre(this.responseText);
sf.textContent = 'error: ' + hunpre(this.responseText);
return;
}

@@ -7441,7 +7523,7 @@ var msel = (function () {
xhrchk(this, L.fsm_xe1, L.fsm_xe2);

if (this.status < 200 || this.status > 201) {
sf.textContent = 'error: ' + unpre(this.responseText);
sf.textContent = 'error: ' + hunpre(this.responseText);
return;
}

@@ -7475,12 +7557,25 @@ var globalcss = (function () {
var css = ds[b].cssText.split(/\burl\(/g);
ret += css[0];
for (var c = 1; c < css.length; c++) {
var delim = (/^["']/.exec(css[c])) ? css[c].slice(0, 1) : '';
ret += 'url(' + delim + ((css[c].slice(0, 8).indexOf('://') + 1 || css[c].startsWith('/')) ? '' : base) +
css[c].slice(delim ? 1 : 0);
var m = /(^ *["']?)(.*)/.exec(css[c]),
delim = m[1],
ctxt = m[2],
is_abs = /^\/|[^)/:]+:\/\//.exec(ctxt);

ret += 'url(' + delim + (is_abs ? '' : base) + ctxt;
}
ret += '\n';
}
if (ret.indexOf('\n@import') + 1) {
var c0 = ret.split('\n'),
c1 = [],
c2 = [];

for (var a = 0; a < c0.length; a++)
(c0[a].startsWith('@import') ? c1 : c2).push(c0[a]);

ret = c1.concat(c2).join('\n');
}
}
catch (ex) {
console.log('could not read css', a, base);
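The regex swap deserves a note: instead of only checking the first 8 characters for `://`, a css `url()` target now counts as absolute if it starts with `/` or contains a scheme-like `xyz://`. A small demonstration of the new `is_abs` test (the sample targets are invented):

```js
var is_abs = /^\/|[^)/:]+:\/\//;  // same pattern as in the hunk above

console.log(!!is_abs.exec('https://cdn.example.com/x.woff2')); // true  (scheme)
console.log(!!is_abs.exec('/static/x.woff2'));                 // true  (rooted path)
console.log(!!is_abs.exec('deps/x.woff2'));                    // false (base gets prepended)
```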
@@ -7764,15 +7859,39 @@ var unpost = (function () {
if (!xhrchk(this, L.fu_xe1, L.fu_xe2))
return ebi('op_unpost').innerHTML = L.fu_xe1;

var res = JSON.parse(this.responseText);
try {
var ores = JSON.parse(this.responseText);
}
catch (ex) {
return ebi('op_unpost').innerHTML = '<p>' + L.badreply + ':</p>' + unpre(this.responseText);
}

if (ores.u.length == 1 && ores.u[0].timeout) {
html.push('<p>' + L.un_nou + '</p>');
ores.u = [];
}

if (ores.c.length == 1 && ores.c[0].kinshi) {
html.push('<p>' + L.un_noc + '</p>');
ores.c = [];
}

for (var a = 0; a < ores.u.length; a++)
ores.u[a].k = 'u';

for (var a = 0; a < ores.c.length; a++)
ores.c[a].k = 'c';

var res = ores.u.concat(ores.c);

if (res.length) {
if (res.length == 2000)
html.push("<p>" + L.un_max);
else
html.push("<p>" + L.un_avail.format(res.length));
html.push("<p>" + L.un_avail.format(ores.c.length, ores.u.length));

html.push(" – " + L.un_m2 + "</p>");
html.push("<table><thead><tr><td></td><td>time</td><td>size</td><td>file</td></tr></thead><tbody>");
html.push("<br />" + L.un_m2 + "</p>");
html.push("<table><thead><tr><td></td><td>time</td><td>size</td><td>done</td><td>file</td></tr></thead><tbody>");
}
else
html.push('-- <em>' + (filt.value ? L.un_no2 : L.un_no1) + '</em>');
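For orientation while reading this hunk: judging from the fields the code touches, the server reply now carries separate lists for unfinished and completed uploads. A rough sketch of the assumed shape (field meanings inferred from the code above and the v1.11.0 changelog; the concrete values are made up):

```js
// "u": unfinished uploads (abortable), "c": completed uploads (unpostable);
// "at" = upload time, "sz" = size, "pd" = percent done, "vp" = virtual path
var ores = {
    u: [{ at: 1710771600, sz: 104857600, pd: 37, vp: '/inc/big.iso' }],
    c: [{ at: 1710771234, sz: 4096, vp: '/inc/notes.txt' }]
};

// single-entry sentinels handled by the code above:
//   { u: [{ timeout: 1 }], ... }  ->  unfinished-upload info unavailable
//   { c: [{ kinshi: 1 }], ... }   ->  unposting of completed files disabled
```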
@@ -7785,10 +7904,13 @@ var unpost = (function () {
'<tr><td></td><td colspan="3" style="padding:.5em">' +
'<a me="' + me + '" class="n' + a + '" n2="' + (a + mods[b]) +
'" href="#">' + L.un_next.format(Math.min(mods[b], res.length - a)) + '</a></td></tr>');

var done = res[a].k == 'c';
html.push(
'<tr><td><a me="' + me + '" class="n' + a + '" href="#">' + L.un_del + '</a></td>' +
'<tr><td><a me="' + me + '" class="n' + a + '" href="#">' + (done ? L.un_del : L.un_abrt) + '</a></td>' +
'<td>' + unix2iso(res[a].at) + '</td>' +
'<td>' + res[a].sz + '</td>' +
'<td>' + ('' + res[a].sz).replace(/\B(?=(\d{3})+(?!\d))/g, " ") + '</td>' +
(done ? '<td>100%</td>' : '<td>' + res[a].pd + '%</td>') +
'<td>' + linksplit(res[a].vp).join('<span> / </span>') + '</td></tr>');
}

@@ -7874,7 +7996,7 @@ var unpost = (function () {
var xhr = new XHR();
xhr.n = n;
xhr.n2 = n2;
xhr.open('POST', SR + '/?delete&lim=' + req.length, true);
xhr.open('POST', SR + '/?delete&unpost&lim=' + req.length, true);
xhr.onload = xhr.onerror = unpost_delete_cb;
xhr.send(JSON.stringify(req));
};

@@ -8024,7 +8146,7 @@ function reload_mp() {
plays[a].parentNode.innerHTML = '-';

mp = new MPlayer();
if (mp.au && mp.au.tid) {
if (mp.au && mp.au.tid && mp.au.evp == get_evpath()) {
var el = QS('a#a' + mp.au.tid);
if (el)
clmod(el, 'act', 1);
@@ -6,12 +6,12 @@
<title>{{ title }}</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
{{ html_head }}
<style>
html{font-family:sans-serif}
td{border:1px solid #999;border-width:1px 1px 0 0;padding:0 5px}
a{display:block}
</style>
{{ html_head }}
</head>

<body>

@@ -61,3 +61,4 @@

</body>
</html>

@@ -25,3 +25,4 @@
</body>

</html>

@@ -2,6 +2,7 @@ html, body {
color: #333;
background: #eee;
font-family: sans-serif;
font-family: var(--font-main), sans-serif;
line-height: 1.5em;
}
html.y #helpbox a {

@@ -67,6 +68,7 @@ a {
position: relative;
display: inline-block;
font-family: 'scp', monospace, monospace;
font-family: var(--font-mono), 'scp', monospace, monospace;
font-weight: bold;
font-size: 1.3em;
line-height: .1em;

@@ -4,12 +4,12 @@
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.7">
<meta name="theme-color" content="#333">
{{ html_head }}
<link rel="stylesheet" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
<link rel="stylesheet" href="{{ r }}/.cpr/md.css?_={{ ts }}">
{%- if edit %}
<link rel="stylesheet" href="{{ r }}/.cpr/md2.css?_={{ ts }}">
{%- endif %}
{{ html_head }}
</head>
<body>
<div id="mn"></div>

@@ -160,3 +160,4 @@ try { l.light = drk? 0:1; } catch (ex) { }
<script src="{{ r }}/.cpr/md2.js?_={{ ts }}"></script>
{%- endif %}
</body></html>

@@ -512,13 +512,6 @@ dom_navtgl.onclick = function () {
redraw();
};

if (!HTTPS && location.hostname != '127.0.0.1') try {
ebi('edit2').onclick = function (e) {
toast.err(0, "the fancy editor is only available over https");
return ev(e);
}
} catch (ex) { }

if (sread('hidenav') == 1)
dom_navtgl.onclick();

@@ -9,7 +9,7 @@
width: calc(100% - 56em);
}
#mw {
left: calc(100% - 55em);
left: max(0em, calc(100% - 55em));
overflow-y: auto;
position: fixed;
bottom: 0;

@@ -56,6 +56,7 @@
padding: 0;
margin: 0;
font-family: 'scp', monospace, monospace;
font-family: var(--font-mono), 'scp', monospace, monospace;
white-space: pre-wrap;
word-break: break-word;
overflow-wrap: break-word;

@@ -368,14 +368,14 @@ function save(e) {

function save_cb() {
if (this.status !== 200)
return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
return toast.err(0, 'Error! The file was NOT saved.\n\nError ' + this.status + ":\n" + unpre(this.responseText));

var r;
try {
r = JSON.parse(this.responseText);
}
catch (ex) {
return toast.err(0, 'Failed to parse reply from server:\n\n' + this.responseText);
return toast.err(0, 'Error! The file was likely NOT saved.\n\nFailed to parse reply from server:\n\n' + unpre(this.responseText));
}

if (!r.ok) {

@@ -418,7 +418,7 @@ function run_savechk(lastmod, txt, btn, ntry) {

function savechk_cb() {
if (this.status !== 200)
return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
return toast.err(0, 'Error! The file was NOT saved.\n\nError ' + this.status + ":\n" + unpre(this.responseText));

var doc1 = this.txt.replace(/\r\n/g, "\n");
var doc2 = this.responseText.replace(/\r\n/g, "\n");

@@ -17,6 +17,7 @@ html, body {
padding: 0;
min-height: 100%;
font-family: sans-serif;
font-family: var(--font-main), sans-serif;
background: #f7f7f7;
color: #333;
}

@@ -4,11 +4,11 @@
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.7">
<meta name="theme-color" content="#333">
{{ html_head }}
<link rel="stylesheet" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
<link rel="stylesheet" href="{{ r }}/.cpr/mde.css?_={{ ts }}">
<link rel="stylesheet" href="{{ r }}/.cpr/deps/mini-fa.css?_={{ ts }}">
<link rel="stylesheet" href="{{ r }}/.cpr/deps/easymde.css?_={{ ts }}">
{{ html_head }}
</head>
<body>
<div id="mw">

@@ -54,3 +54,4 @@ try { l.light = drk? 0:1; } catch (ex) { }
<script src="{{ r }}/.cpr/deps/easymde.js?_={{ ts }}"></script>
<script src="{{ r }}/.cpr/mde.js?_={{ ts }}"></script>
</body></html>

@@ -134,14 +134,14 @@ function save(mde) {

function save_cb() {
if (this.status !== 200)
return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
return toast.err(0, 'Error! The file was NOT saved.\n\nError ' + this.status + ":\n" + unpre(this.responseText));

var r;
try {
r = JSON.parse(this.responseText);
}
catch (ex) {
return toast.err(0, 'Failed to parse reply from server:\n\n' + this.responseText);
return toast.err(0, 'Error! The file was likely NOT saved.\n\nFailed to parse reply from server:\n\n' + unpre(this.responseText));
}

if (!r.ok) {

@@ -180,7 +180,7 @@ function save_cb() {

function save_chk() {
if (this.status !== 200)
return toast.err(0, 'Error! The file was NOT saved.\n\n' + this.status + ": " + (this.responseText + '').replace(/^<pre>/, ""));
return toast.err(0, 'Error! The file was NOT saved.\n\nError ' + this.status + ":\n" + unpre(this.responseText));

var doc1 = this.txt.replace(/\r\n/g, "\n");
var doc2 = this.responseText.replace(/\r\n/g, "\n");

@@ -1,3 +1,8 @@
:root {
--font-main: sans-serif;
--font-serif: serif;
--font-mono: 'scp';
}
html,body,tr,th,td,#files,a {
color: inherit;
background: none;

@@ -10,6 +15,7 @@ html {
color: #ccc;
background: #333;
font-family: sans-serif;
font-family: var(--font-main), sans-serif;
text-shadow: 1px 1px 0px #000;
touch-action: manipulation;
}

@@ -23,6 +29,7 @@ html, body {
}
pre {
font-family: monospace, monospace;
font-family: var(--font-mono), monospace, monospace;
}
a {
color: #fc5;

@@ -7,8 +7,8 @@
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<meta name="theme-color" content="#333">
{{ html_head }}
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/msg.css?_={{ ts }}">
{{ html_head }}
</head>

<body>

@@ -48,4 +48,5 @@
{%- endif %}
</body>

</html>
</html>

@@ -2,6 +2,7 @@ html {
color: #333;
background: #f7f7f7;
font-family: sans-serif;
font-family: var(--font-main), sans-serif;
touch-action: manipulation;
}
#wrap {

@@ -127,6 +128,7 @@ pre, code {
color: #480;
background: #fff;
font-family: 'scp', monospace, monospace;
font-family: var(--font-mono), 'scp', monospace, monospace;
border: 1px solid rgba(128,128,128,0.3);
border-radius: .2em;
padding: .15em .2em;

@@ -7,9 +7,9 @@
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<meta name="theme-color" content="#333">
{{ html_head }}
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/splash.css?_={{ ts }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
{{ html_head }}
</head>

<body>

@@ -78,13 +78,15 @@

<h1 id="cc">client config:</h1>
<ul>
{% if k304 or k304vis %}
{% if k304 %}
<li><a id="h" href="{{ r }}/?k304=n">disable k304</a> (currently enabled)
{%- else %}
<li><a id="i" href="{{ r }}/?k304=y" class="r">enable k304</a> (currently disabled)
{% endif %}
<blockquote id="j">enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>

{% endif %}

<li><a id="k" href="{{ r }}/?reset" class="r" onclick="localStorage.clear();return true">reset client settings</a></li>
</ul>

@@ -118,3 +120,4 @@ document.documentElement.className = (STG && STG.cpp_thm) || "{{ this.args.theme
<script src="{{ r }}/.cpr/splash.js?_={{ ts }}"></script>
</body>
</html>

@@ -6,7 +6,7 @@ var Ls = {
"d1": "tilstand",
"d2": "vis tilstanden til alle tråder",
"e1": "last innst.",
"e2": "leser inn konfigurasjonsfiler på nytt$N(kontoer, volumer, volumbrytere)$Nog kartlegger alle e2ds-volumer",
"e2": "leser inn konfigurasjonsfiler på nytt$N(kontoer, volumer, volumbrytere)$Nog kartlegger alle e2ds-volumer$N$Nmerk: endringer i globale parametere$Nkrever en full restart for å ta gjenge",
"f1": "du kan betrakte:",
"g1": "du kan laste opp til:",
"cc1": "klient-konfigurasjon",

@@ -30,7 +30,7 @@ var Ls = {
},
"eng": {
"d2": "shows the state of all active threads",
"e2": "reload config files (accounts/volumes/volflags),$Nand rescan all e2ds volumes",
"e2": "reload config files (accounts/volumes/volflags),$Nand rescan all e2ds volumes$N$Nnote: any changes to global settings$Nrequire a full restart to take effect",
"u2": "time since the last server write$N( upload / rename / ... )$N$N17d = 17 days$N1h23 = 1 hour 23 minutes$N4m56 = 4 minutes 56 seconds",
"v2": "use this server as a local HDD$N$NWARNING: this will show your password!",
}

@@ -49,6 +49,15 @@ for (var k in (d || {})) {
o[a].setAttribute("tt", d[k]);
}

try {
if (is_idp) {
var z = ['#l+div', '#l', '#c'];
for (var a = 0; a < z.length; a++)
QS(z[a]).style.display = 'none';
}
}
catch (ex) { }

tt.init();
var o = QS('input[name="cppwd"]');
if (!ebi('c') && o.offsetTop + o.offsetHeight < window.innerHeight)

@@ -7,10 +7,10 @@
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<meta name="theme-color" content="#333">
{{ html_head }}
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/splash.css?_={{ ts }}">
<link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
<style>ul{padding-left:1.3em}li{margin:.4em 0}</style>
{{ html_head }}
</head>

<body>

@@ -246,3 +246,4 @@ document.documentElement.className = (STG && STG.cpp_thm) || "{{ args.theme }}";
<script src="{{ r }}/.cpr/svcs.js?_={{ ts }}"></script>
</body>
</html>

@@ -1,4 +1,8 @@
:root {
--font-main: sans-serif;
--font-serif: serif;
--font-mono: 'scp';

--fg: #ccc;
--fg-max: #fff;
--bg-u2: #2b2b2b;

@@ -378,6 +382,7 @@ html.y textarea:focus {
.mdo code,
.mdo tt {
font-family: 'scp', monospace, monospace;
font-family: var(--font-mono), 'scp', monospace, monospace;
white-space: pre-wrap;
word-break: break-all;
}

@@ -447,6 +452,7 @@ html.y textarea:focus {
}
.mdo blockquote {
font-family: serif;
font-family: var(--font-serif), serif;
background: #f7f7f7;
border: .07em dashed #ccc;
padding: 0 2em;

@@ -580,3 +586,11 @@ hr {
border: .07em dashed #444;
}
}

@media (prefers-reduced-motion) {
#toast,
#toast a#toastc,
#tt {
transition: none;
}
}
@@ -1722,8 +1722,6 @@ function up2k_init(subtle) {
ebi('u2etas').style.textAlign = 'left';
}
etafun();
if (pvis.act == 'bz')
pvis.changecard('bz');
}

if (flag) {

@@ -1859,6 +1857,9 @@ function up2k_init(subtle) {
timer.rm(donut.do);
ebi('u2tabw').style.minHeight = '0px';
utw_minh = 0;

if (pvis.act == 'bz')
pvis.changecard('bz');
}

function chill(t) {

@@ -2256,6 +2257,7 @@ function up2k_init(subtle) {
console.log('handshake onerror, retrying', t.name, t);
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
t.keepalive = keepalive;
};
var orz = function (e) {

@@ -2263,16 +2265,26 @@ function up2k_init(subtle) {
return console.log('zombie handshake onload', t.name, t);

if (xhr.status == 200) {
try {
var response = JSON.parse(xhr.responseText);
}
catch (ex) {
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
return toast.err(0, 'Handshake error; will retry...\n\n' + L.badreply + ':\n\n' + unpre(xhr.responseText));
}

t.t_handshake = Date.now();
if (keepalive) {
apop(st.busy.handshake, t);
tasker();
return;
}

if (toast.tag === t)
toast.ok(5, L.u_fixed);

var response = JSON.parse(xhr.responseText);
if (!response.name) {
var msg = '',
smsg = '';

@@ -2856,6 +2868,8 @@ function up2k_init(subtle) {
new_state = false;
fixed = true;
}
if (new_state === undefined)
new_state = can_write ? false : have_up2k_idx ? true : undefined;
}

if (new_state === undefined)
@@ -1417,9 +1417,12 @@ function lf2br(txt) {
}

function unpre(txt) {
function hunpre(txt) {
return ('' + txt).replace(/^<pre>/, '');
}
function unpre(txt) {
return esc(hunpre(txt));
}
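`unpre` was previously just the `<pre>`-stripper; it now escapes as well, and the raw variant lives on as `hunpre` (used in the hunks above wherever the result feeds a `textContent` sink). A quick illustration, assuming `esc()` is this file's usual html-escaper (the sample response body is invented):

```js
// server error bodies begin with "<pre>"
var body = '<pre>403 forbidden: <volume> is read-only';

hunpre(body); // '403 forbidden: <volume> is read-only'       (ok for textContent)
unpre(body);  // '403 forbidden: &lt;volume&gt; is read-only' (safe for innerHTML)
```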
var toast = (function () {

@@ -1995,15 +1998,21 @@ function xhrchk(xhr, prefix, e404, lvl, tag) {
if (tag === undefined)
tag = prefix;

var errtxt = (xhr.response && xhr.response.err) || xhr.responseText,
var errtxt = ((xhr.response && xhr.response.err) || xhr.responseText) || '',
suf = '',
fun = toast[lvl || 'err'],
is_cf = /[Cc]loud[f]lare|>Just a mo[m]ent|#cf-b[u]bbles|Chec[k]ing your br[o]wser|\/chall[e]nge-platform|"chall[e]nge-error|nable Ja[v]aScript and cook/.test(errtxt);

if (errtxt.startsWith('<pre>'))
suf = '\n\nerror-details: «' + unpre(errtxt).split('\n')[0].trim() + '»';
else
errtxt = esc(errtxt).slice(0, 32768);

if (xhr.status == 403 && !is_cf)
return toast.err(0, prefix + (L && L.xhr403 || "403: access denied\n\ntry pressing F5, maybe you got logged out"), tag);
return toast.err(0, prefix + (L && L.xhr403 || "403: access denied\n\ntry pressing F5, maybe you got logged out") + suf, tag);

if (xhr.status == 404)
return toast.err(0, prefix + e404, tag);
return toast.err(0, prefix + e404 + suf, tag);

if (is_cf && (xhr.status == 403 || xhr.status == 503)) {
var now = Date.now(), td = now - cf_cha_t;
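To make the new `suf` logic concrete: for `<pre>`-prefixed server errors, the first line of the body now gets appended to the toast. A minimal sketch (the error text is invented):

```js
var errtxt = '<pre>408 you were too slow; this upload session has expired',
    suf = '';

if (errtxt.startsWith('<pre>'))
    // first line of the server error, stripped of <pre> and html-escaped
    suf = '\n\nerror-details: «' + unpre(errtxt).split('\n')[0].trim() + '»';

// a 404 toast would then read:
//   prefix + e404 + '\n\nerror-details: «408 you were too slow; ...»'
```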
@@ -13,6 +13,9 @@

# other stuff

## [`TODO.md`](TODO.md)
* planned features / fixes / changes

## [`example.conf`](example.conf)
* example config file for `-c`
docs/TODO.md (new file, 18 lines)
@@ -0,0 +1,18 @@
a living list of upcoming features / fixes / changes, very roughly in order of priority

* download accelerator
  * definitely download chunks in parallel
  * maybe resumable downloads (chrome-only, jank api)
  * maybe checksum validation (return sha512 of requested range in responses, and probably also warks)

* [github issue #64](https://github.com/9001/copyparty/issues/64) - dirkeys 2nd season
  * popular feature request, finally time to refactor browser.js i suppose...

* [github issue #37](https://github.com/9001/copyparty/issues/37) - upload PWA
  * or [maybe not](https://arstechnica.com/tech-policy/2024/02/apple-under-fire-for-disabling-iphone-web-apps-eu-asks-developers-to-weigh-in/), or [maybe](https://arstechnica.com/gadgets/2024/03/apple-changes-course-will-keep-iphone-eu-web-apps-how-they-are-in-ios-17-4/)

* [github issue #57](https://github.com/9001/copyparty/issues/57) - config GUI
  * configs given to -c can be ordered with numerical prefix
  * autorevert settings if it fails to apply
  * countdown until session invalidates in settings gui, with refresh-button
docs/bufsize.txt (new file, 96 lines)
@@ -0,0 +1,96 @@
notes from testing various buffer sizes of files and sockets

summary:

download-folder-as-tar: would be 7% faster with --iobuf 65536 (but got 20% faster in v1.11.2)

download-folder-as-zip: optimal with default --iobuf 262144

download-file-over-https: optimal with default --iobuf 262144

put-large-file: optimal with default --iobuf 262144, --s-rd-sz 262144 (and got 14% faster in v1.11.2)

post-large-file: optimal with default --iobuf 262144, --s-rd-sz 262144 (and got 18% faster in v1.11.2)

----

oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/bigs/?tar
3.3            req/s  1.11.1
4.3  4.0  3.3  req/s  1.12.2
64   256  512  --iobuf 256 (prefer smaller)
32   32   32   --s-rd-sz

oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/bigs/?zip
2.9            req/s  1.11.1
2.5  2.9  2.9  req/s  1.12.2
64   256  512  --iobuf 256 (prefer bigger)
32   32   32   --s-rd-sz

oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/pairdupes/?tar
8.3            req/s  1.11.1
8.4  8.4  8.5  req/s  1.12.2
64   256  512  --iobuf 256 (prefer bigger)
32   32   32   --s-rd-sz

oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/pairdupes/?zip
13.9               req/s  1.11.1
14.1  14.0  13.8   req/s  1.12.2
64    256   512    --iobuf 256 (prefer smaller)
32    32    32     --s-rd-sz

oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/pairdupes/987a
5260                    req/s  1.11.1
5246  5246  5280  5268  req/s  1.12.2
64    256   512   256   --iobuf dontcare
32    32    32    512   --s-rd-sz dontcare

oha -z10s -c1 --ipv4 --insecure https://127.0.0.1:3923/pairdupes/987a
4445              req/s  1.11.1
4462  4494  4444  req/s  1.12.2
64    256   512   --iobuf dontcare
32    32    32    --s-rd-sz

oha -z10s -c1 --ipv4 --insecure http://127.0.0.1:3923/bigs/gssc-02-cannonball-skydrift/track10.cdda.flac
95        req/s  1.11.1
95   97   req/s  1.12.2
64   512  --iobuf dontcare
32   32   --s-rd-sz

oha -z10s -c1 --ipv4 --insecure https://127.0.0.1:3923/bigs/gssc-02-cannonball-skydrift/track10.cdda.flac
15.4                    req/s  1.11.1
15.4  15.3  14.9  15.4  req/s  1.12.2
64    256   512   512   --iobuf 256 (prefer smaller, and smaller than s-wr-sz)
32    32    32    32    --s-rd-sz
256   256   256   512   --s-wr-sz

----

python3 ~/dev/old/copyparty\ v1.11.1\ dont\ ban\ the\ pipes.py -q -i 127.0.0.1 -v .::A --daw
python3 ~/dev/copyparty/dist/copyparty-sfx.py -q -i 127.0.0.1 -v .::A --daw --iobuf $((1024*512))

oha -z10s -c1 --ipv4 --insecure -mPUT -r0 -D ~/Music/gssc-02-cannonball-skydrift/track10.cdda.flac http://127.0.0.1:3923/a.bin
10.8                                req/s  1.11.1
10.8  11.5  11.8  12.1  12.2  12.3  req/s  new
512   512   512   512   512   256   --iobuf 256
32    64    128   256   512   256   --s-rd-sz 256 (prefer bigger)

----

buildpost() {
    b=--jeg-er-grensestaven;
    printf -- "$b\r\nContent-Disposition: form-data; name=\"act\"\r\n\r\nbput\r\n$b\r\nContent-Disposition: form-data; name=\"f\"; filename=\"a.bin\"\r\nContent-Type: audio/mpeg\r\n\r\n"
    cat "$1"
    printf -- "\r\n${b}--\r\n"
}
buildpost ~/Music/gssc-02-cannonball-skydrift/track10.cdda.flac >big.post
buildpost ~/Music/bottomtext.txt >smol.post

oha -z10s -c1 --ipv4 --insecure -mPOST -r0 -T 'multipart/form-data; boundary=jeg-er-grensestaven' -D big.post http://127.0.0.1:3923/?replace
9.6   11.2  11.3  11.1  10.9  req/s  v1.11.2
512   512   256   128   256   --iobuf 256
32    512   256   128   128   --s-rd-sz 256

oha -z10s -c1 --ipv4 --insecure -mPOST -r0 -T 'multipart/form-data; boundary=jeg-er-grensestaven' -D smol.post http://127.0.0.1:3923/?replace
2445  2414  2401  2437
256   128   256   256   --iobuf 256
128   128   256   64    --s-rd-sz 128 (but use 256 since big posts are more important)
@@ -1,3 +1,168 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0318-1709 `v1.11.1` dont ban the pipes

the [previous release](https://github.com/9001/copyparty/releases/tag/v1.11.0) had all the fun new features... this one's just bugfixes

## bugfixes

* less aggressive rejection of requests from banned IPs 51d31588
  * clients would get kicked before the header was parsed (which contains the xff header), meaning the server could become inaccessible to everyone if the reverse-proxy itself were to "somehow" get banned
  * ...which can happen if a server behind cloudflare also accepts non-cloudflare connections, meaning the client IP would not be resolved, and it'll ban the LAN IP instead heh
  * that part still happens, but now it won't affect legit clients through the intended route
  * the old behavior can be restored with `--early-ban` to save some cycles, and/or avoid slowloris somewhat
* the unpost feature could appear to be disabled on servers where no volume was mapped to `/` 0287c7ba
* python 3.12 support for [compiling the dependencies](https://github.com/9001/copyparty/tree/hovudstraum/bin/mtag#dependencies) necessary to detect bpm/key in audio files 32553e45

## other changes

* mention [real-ip configuration](https://github.com/9001/copyparty?tab=readme-ov-file#real-ip) in the readme ee80cdb9



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0315-2047 `v1.11.0` You Can (Not) Proceed

this release was made possible by [stoltzekleiven, kvikklunsj, and tako](https://a.ocv.me/pub/g/nerd-stuff/2024-0310-stoltzekleiven.jpg)

## new features

* #62 support for [identity providers](https://github.com/9001/copyparty#identity-providers) and automatically creating volumes for each user/group ("home folders")
  * login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions
  * [documentation](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md) and [examples](https://github.com/9001/copyparty/tree/hovudstraum/docs/examples/docker/idp-authelia-traefik) could still use some help (I did my best)
* #77 UI to cancel unfinished uploads (available in the 🧯 unpost tab) 3f05b665
  * the user's IP and username must match the upload by default; can be changed with global-option / volflag `u2abort`
* new volflag `sparse` to pretend sparse files are supported even if the filesystem doesn't 8785d2f9
  * gives drastically better performance when writing to s3 buckets through juicefs/geesefs
  * only for when you know the filesystem can deal with it (so juicefs/geesefs is OK, but **definitely not** fat32)
* `--xff-src` and `--ipa` now support CIDR notation (but the old syntax still works) b377791b
* ux:
  * #74 option to use [custom fonts](https://github.com/9001/copyparty/tree/hovudstraum/docs/rice) 263adec7 6cc7101d 8016e671
  * option to disable autoplay when page url contains a song hash 8413ed6d
    * good if you're using copyparty to listen to music at the office and the office policy is to have the webbrowser automatically restart to install updates, meaning your coworkers are suddenly and involuntarily enjoying some loud af jcore while you're asleep at home

## bugfixes

* don't panic if cloudflare (or another reverse-proxy) decides to hijack json responses and replace them with html 7741870d
* #73 the fancy markdown editor was incompatible with caddy (a reverse-proxy) ac96fd9c
* media player could get confused if neighboring folders had songs with the same filenames 206af8f1
* benign race condition in the config reloader (could only be triggered by admins and/or SIGUSR1) 096de508
* running tftp with optimizations enabled would cause issues for `--ipa` b377791b
* cosmetic tftp bugs 115020ba
* ux:
  * up2k rendering glitch if the last couple uploads were dupes 547a4863
  * up2k rendering glitch when switching between readonly/writeonly folders 51a83b04
  * markdown editor preview was glitchy on tiny screens e5582605

## other changes

* add a [sharex v12.1](https://github.com/9001/copyparty/tree/hovudstraum/contrib#sharexsxcu) config example 2527e903
* make it easier to discover/diagnose issues with docker and/or reverse-proxy config d744f3ff
* stop recommending the use of `--xff-src=any` in the log messages 7f08f10c
* ux:
  * remove the `k304` togglebutton in the controlpanel by default 1c011ff0
  * mention that a full restart is required for `[global]` config changes to take effect 0c039219
* docs e78af022
  * [how to use copyparty with amazon aws s3](https://github.com/9001/copyparty#using-the-cloud-as-storage)
  * faq: http/https confusion caused by incorrectly configured cloudflare
  * #76 docker: ftp-server howto
* copyparty.exe: updated pyinstaller to 6.5.0 bdbcbbb0



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0221-2132 `v1.10.2` tall thumbs

## new features

* thumbnails can be way taller when centercrop is disabled in the browser UI 5026b212
  * good for folders with lots of portrait pics (no more letterboxing)
* more thumbnail stuff:
  * zoom levels are twice as granular 5026b212
  * write-only folders get an "upload-only" icon 89c6c2e0
  * inaccessible files/folders get a 403/404 icon 8a38101e

## bugfixes

* tftp fixes d07859e8
  * server could crash if a nic disappeared / got restarted mid-transfer
  * tiny resource leak if dualstack causes ipv4 bind to fail
* thumbnails:
  * when behind a caching proxy (cloudflare), icons in folders would be a random mix of png and svg 43ee6b9f
  * produce valid folder icons when thumbnails are disabled 14af136f
* trailing newline in html responses d39a99c9

## other changes

* webdeps: update dompurify 13e77777
* copyparty.exe: update jinja2, markupsafe, pyinstaller, upx 13e77777



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0218-1554 `v1.10.1` big thumbs

## new features

* button to enable hi-res thumbnails 33f41f3e 58ae38c6
  * enable with the `3x` button in the gridview
  * can be force-enabled/disabled serverside with `--th-x3` or volflag `th3x`
* tftp: IPv6 support and UTF-8 filenames + optimizations 0504b010
* ux:
  * when closing the image viewer, scroll to the last viewed pic bbc37990
  * respect `prefers-reduced-motion` some more places fbfdd833

## bugfixes

* #72 impossible to delete recently uploaded zerobyte files if database was disabled 6bd087dd
* tftp now works in `copyparty.exe`, `copyparty32.exe`, `copyparty-winpe64.exe`
* the [sharex config example](https://github.com/9001/copyparty/tree/hovudstraum/contrib#sharexsxcu) was still using cookie-auth 8ff7094e
* ux:
  * prevent scrolling while a pic is open 7f1c9926
  * fix gridview in older firefox versions 7f1c9926

## other changes

* thumbnail center-cropping can be force-enabled/disabled serverside with `--th-crop` or volflag `crop`
  * replaces `--th-no-crop` which is now deprecated (but will continue to work)

----

this release contains a build of `copyparty-winpe64.exe` which is almost **entirely useless,** except for in *extremely specific scenarios*, namely the kind where a TFTP server could also be useful -- the [previous build](https://github.com/9001/copyparty/releases/download/v1.8.7/copyparty-winpe64.exe) was from [version 1.8.7](https://github.com/9001/copyparty/releases/tag/v1.8.7) (2023-07-23)



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0215-0000 `v1.10.0` tftp

## new features

* TFTP server d636316a 8796c09f acbb8267 02879713
  * based on [partftpy](https://github.com/9001/partftpy), has most essential features EXCEPT for [rfc7440](https://datatracker.ietf.org/doc/html/rfc7440) so WAN will be slow
  * is already doing real work out in the wild! see the fantastic quote in the [readme](https://github.com/9001/copyparty?tab=readme-ov-file#tftp-server)
* detect some (un)common configuration mistakes
  * buggy reverse-proxy which strips away all URL parameters 136c0fdc
    * could cause the browser to get stuck in a refresh-loop
  * a volume on an sqlite-incompatible filesystem (a remote cifs server or such) and an up2k volume inside d4da3861
    * sqlite could deadlock or randomly throw exceptions; serverlog will now explain how to fix it
* ie11: file selection with shift-up/down 64ad5853

## bugfixes

* prevent music playback from stopping at the end of a folder f262aee8
  * preloader will now proactively hunt for the next file to play as the last song is ending
* in very specific scenarios, clients could be told their upload had finished processing a tiny bit too early, while the HDD was still busy taking in the last couple bytes 6f8a588c
  * so if you expected to find the complete file on the server HDD immediately as the final chunk got confirmed, that was not necessarily the case if your server HDD was severely overloaded to the point where closing a file takes half a minute
  * huge thx to friend with said overloaded server for finding all the crazy edge cases
* ignore harmless javascript errors from easymde 879e83e2

## other changes

* the "copy currently playing song info to clipboard" button now excludes the uploader IP ed524d84
* mention that enabling `-j0` can improve HDD load during uploads 5d92f4df
* mention a debian-specific docker bug which prevents starting most containers (not just copyparty) 4e797a71



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0203-1533 `v1.9.31` eject
@@ -164,6 +164,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| PUT | `?xz` | (binary data) | compress with xz and write into file at URL |
| mPOST | | `f=FILE` | upload `FILE` into the folder at URL |
| mPOST | `?j` | `f=FILE` | ...and reply with json |
| mPOST | `?replace` | `f=FILE` | ...and overwrite existing files |
| mPOST | | `act=mkdir`, `name=foo` | create directory `foo` at URL |
| POST | `?delete` | | delete URL recursively |
| jPOST | `?delete` | `["/foo","/bar"]` | delete `/foo` and `/bar` recursively |
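As a quick illustration of the `act=mkdir` row above, a browser-side sketch of the same mPOST call (the folder path, directory name, and password are placeholders):

```js
// create the directory "foo" inside /inc/ via a multipart form-POST;
// auth can be the cppwd cookie or, as here, the ?pw= url parameter
const fd = new FormData();
fd.append('act', 'mkdir');
fd.append('name', 'foo');

fetch('/inc/?pw=hunter2', { method: 'POST', body: fd })
    .then(r => console.log('mkdir:', r.status));
```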
@@ -218,7 +219,7 @@ if you don't need all the features, you can repack the sfx and save a bunch of s
* `269k` after `./scripts/make-sfx.sh re no-cm no-hl`

the features you can opt to drop are
* `cm`/easymde, the "fancy" markdown editor, saves ~82k
* `cm`/easymde, the "fancy" markdown editor, saves ~89k
* `hl`, prism, the syntax highlighter, saves ~41k
* `fnt`, source-code-pro, the monospace font, saves ~9k
* `dd`, the custom mouse cursor for the media player tray tab, saves ~2k
@@ -10,7 +10,6 @@

# q, lo: /cfg/log/%Y-%m%d.log  # log to file instead of docker

# ftp: 3921     # enable ftp server on port 3921
# p: 3939       # listen on another port
# ipa: 10.89.   # only allow connections from 10.89.*
# df: 16        # stop accepting uploads if less than 16 GB free disk space
docs/examples/docker/idp-authelia-traefik/README.md (new file, 50 lines)
@@ -0,0 +1,50 @@
> [!WARNING]
> I am unable to guarantee the quality, safety, and security of anything in this folder; it is a combination of examples I found online. Please submit corrections or improvements 🙏

to try this out with minimal adjustments:
* specify what filesystem-path to share with copyparty, replacing the default/example value `/srv/pub` in `docker-compose.yml`
* add `127.0.0.1 fs.example.com traefik.example.com authelia.example.com` to your `/etc/hosts`
* `sudo docker-compose up`
* login to https://fs.example.com/ with username `authelia` password `authelia`

to use this in a safe and secure manner:
* follow a guide on setting up authelia properly (TODO:link) and use the copyparty-specific parts of this folder as inspiration for your own config; namely the `cpp` subfolder and the `copyparty` service in `docker-compose.yml`

this folder is based on:
* https://github.com/authelia/authelia/tree/39763aaed24c4abdecd884b47357a052b235942d/examples/compose/lite

incomplete list of modifications made:
* support for running with podman as root on fedora (`:z` volumes, `label:disable`)
* explicitly using authelia `v4.38.0-beta3` because config syntax changed since last stable release
* disabled automatic letsencrypt certificate signing
* reduced logging from debug to info
* added a warning that traefik is given access to the docker socket (as recommended by traefik docs) which means traefik is able to break out of the container and has full root access on the host machine


# security

there is probably/definitely room for improvement in this example setup. Some ideas taken from [github issue #62](https://github.com/9001/copyparty/issues/62):

* Add in a redis password to limit attacker lateral movement in the system
* Move redis to a private network shared with just authelia
* Pin to image hashes (or go all in on updates and add `watchtower`)
* Drop bridge networking for just exposing traefik's public ports
* Configure docker for non-root access to docker socket and then move traefik to use [non-root perms](https://docs.docker.com/engine/security/rootless/)

if you manage to improve on any of this, especially in a way that might be useful for other people, consider sending a PR :>


# performance

currently **not optimal,** at least when compared to running the python sfx outside of docker... some numbers from my laptop (ryzen4500u/fedora39):

| req/s | https D/L | http D/L | approach |
| -----:| ----------:|:--------:| -------- |
| 5200 | 1294 MiB/s | 5+ GiB/s | [copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py) running on host |
| 4370 | 725 MiB/s | 4+ GiB/s | `docker run copyparty/ac` |
| 2420 | 694 MiB/s | n/a | `copyparty/ac` behind traefik |
| 75 | 694 MiB/s | n/a | traefik and authelia **(you are here)** |

authelia is behaving strangely, handling 340 requests per second for a while, but then it suddenly drops to 75 and stays there...

I'm assuming all of the performance issues are due to a misconfiguration of authelia/traefik/docker on my end, but I don't really know where to start
@@ -0,0 +1,66 @@
# based on https://github.com/authelia/authelia/blob/39763aaed24c4abdecd884b47357a052b235942d/examples/compose/lite/authelia/configuration.yml

# Authelia configuration

# This secret can also be set using the env variables AUTHELIA_JWT_SECRET_FILE
jwt_secret: a_very_important_secret

server:
  address: 'tcp://:9091'

log:
  level: info # debug

totp:
  issuer: authelia.com

authentication_backend:
  file:
    path: /config/users_database.yml

access_control:
  default_policy: deny
  rules:
    # Rules applied to everyone
    - domain: traefik.example.com
      policy: one_factor
    - domain: fs.example.com
      policy: one_factor

session:
  # This secret can also be set using the env variables AUTHELIA_SESSION_SECRET_FILE
  secret: unsecure_session_secret

  cookies:
    - name: authelia_session
      domain: example.com # Should match whatever your root protected domain is
      default_redirection_url: https://fs.example.com
      authelia_url: https://authelia.example.com/
      expiration: 3600 # 1 hour
      inactivity: 300 # 5 minutes

redis:
  host: redis
  port: 6379
  # This secret can also be set using the env variables AUTHELIA_SESSION_REDIS_PASSWORD_FILE
  # password: authelia

regulation:
  max_retries: 3
  find_time: 120
  ban_time: 300

storage:
  encryption_key: you_must_generate_a_random_string_of_more_than_twenty_chars_and_configure_this
  local:
    path: /config/db.sqlite3

notifier:
  disable_startup_check: true
  smtp:
    username: test
    # This secret can also be set using the env variables AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE
    password: password
    host: mail.example.com
    port: 25
    sender: admin@example.com
@@ -0,0 +1,18 @@
# based on https://github.com/authelia/authelia/blob/39763aaed24c4abdecd884b47357a052b235942d/examples/compose/lite/authelia/users_database.yml

# Users Database

# This file can be used if you do not have an LDAP set up.

# List of users
users:
  authelia:
    disabled: false
    displayname: "Authelia User"
    # Password is authelia
    password: "$6$rounds=50000$BpLnfgDsc2WD8F2q$Zis.ixdg9s/UOJYrs56b5QEZFiZECu0qZVNsIYxBaNJ7ucIL.nlxVCT5tqh8KHG8X4tlwCFm5r6NTOZZ5qRFN/"
    email: authelia@authelia.com
    groups:
      - admins
      - dev
      - su
docs/examples/docker/idp-authelia-traefik/cpp/copyparty.conf (new file, 82 lines)
@@ -0,0 +1,82 @@
# not actually YAML but lets pretend:
# -*- mode: yaml -*-
# vim: ft=yaml:


# example config for how authelia can be used to replace
# copyparty's built-in authentication/authorization mechanism,
# providing copyparty with HTTP headers through traefik to
# signify who the user is, and what groups they belong to
#
# the filesystem-path that will be shared with copyparty is
# specified in the docker-compose in the parent folder, where
# a real filesystem-path is mapped onto this container's path `/w`,
# meaning `/w` in this config-file is actually `/srv/pub` in the
# outside world (assuming you didn't modify that value)


[global]
  e2dsa  # enable file indexing and filesystem scanning
  e2ts   # enable multimedia indexing
  ansi   # enable colors in log messages
  #q     # disable logging for more performance

  # if we are confident that we got the docker-network config correct
  # (meaning copyparty is only accessible through traefik, and
  # traefik makes sure that all requests go through authelia),
  # then accept X-Forwarded-For and IdP headers from any private IP:
  xff-src: lan

  # enable IdP support by expecting username/groupname in
  # http-headers provided by the reverse-proxy; header "X-IdP-User"
  # will contain the username, "X-IdP-Group" the groupname
  idp-h-usr: remote-user
  idp-h-grp: remote-groups

  # DEBUG: show all incoming request headers from traefik/authelia
  #ihead: *


[/]      # create a volume at "/" (the webroot), which will
  /w     # share /w (the docker data volume, which is mapped to /srv/pub on the host in docker-compose.yml)
  accs:
    rw: *        # everyone gets read-access, but
    rwmda: @su   # the group "su" gets read-write-move-delete-admin

[/u/${u}]    # each user gets their own home-folder at /u/username
  /w/u/${u}  # which will be "u/username" in the docker data volume
  accs:
    r: *               # read-access for anyone, and
    rwmda: ${u}, @su   # read-write-move-delete-admin for that username + the "su" group

[/u/${u}/priv]    # each user also gets a private area at /u/username/priv
  /w/u/${u}/priv  # stored at DATAVOLUME/u/username/priv
  accs:
    rwmda: ${u}, @su   # read-write-move-delete-admin for that username + the "su" group

[/lounge/${g}]    # each group gets their own shared volume
  /w/lounge/${g}  # stored at DATAVOLUME/lounge/groupname
  accs:
    r: *                # read-access for anyone, and
    rwmda: @${g}, @su   # read-write-move-delete-admin for that group + the "su" group

[/lounge/${g}/priv]    # and a private area for each group too
  /w/lounge/${g}/priv  # stored at DATAVOLUME/lounge/groupname/priv
  accs:
    rwmda: @${g}, @su   # read-write-move-delete-admin for that group + the "su" group


# and create some strategic volumes to prevent anyone from gaining
# unintended access to priv folders if the users/groups db is lost
[/u]
  /w/u
  accs:
    rwmda: @su

[/lounge]
  /w/lounge
  accs:
    rwmda: @su
99
docs/examples/docker/idp-authelia-traefik/docker-compose.yml
Normal file
99
docs/examples/docker/idp-authelia-traefik/docker-compose.yml
Normal file
@@ -0,0 +1,99 @@
version: '3.3'

networks:
  net:
    driver: bridge

services:
  copyparty:
    image: copyparty/ac
    container_name: idp_copyparty
    user: "1000:1000"  # should match the user/group of your fileshare volumes
    volumes:
      - ./cpp/:/cfg:z   # the copyparty config folder
      - /srv/pub:/w:z   # this is where we declare that "/srv/pub" is the filesystem-path on the server that shall be shared online
    networks:
      - net
    expose:
      - 3923
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.copyparty.rule=Host(`fs.example.com`)'
      - 'traefik.http.routers.copyparty.entrypoints=https'
      - 'traefik.http.routers.copyparty.tls=true'
      - 'traefik.http.routers.copyparty.middlewares=authelia@docker'
    stop_grace_period: 15s  # thumbnailer is allowed to continue finishing up for 10s after the shutdown signal

  authelia:
    image: authelia/authelia:v4.38.0-beta3  # the config files in the authelia folder use the new syntax
    container_name: idp_authelia
    volumes:
      - ./authelia:/config:z
    networks:
      - net
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.authelia.rule=Host(`authelia.example.com`)'
      - 'traefik.http.routers.authelia.entrypoints=https'
      - 'traefik.http.routers.authelia.tls=true'
      #- 'traefik.http.routers.authelia.tls.certresolver=letsencrypt'  # uncomment this to enable automatic certificate signing (1/2)
      - 'traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/authz/forward-auth?authelia_url=https://authelia.example.com'
      - 'traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true'
      - 'traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email'
    expose:
      - 9091
    restart: unless-stopped
    healthcheck:
      disable: true
    environment:
      - TZ=Etc/UTC

  redis:
    image: redis:7.2.4-alpine3.19
    container_name: idp_redis
    volumes:
      - ./redis:/data:z
    networks:
      - net
    expose:
      - 6379
    restart: unless-stopped
    environment:
      - TZ=Etc/UTC

  traefik:
    image: traefik:2.11.0
    container_name: idp_traefik
    volumes:
      - ./traefik:/etc/traefik:z
      - /var/run/docker.sock:/var/run/docker.sock  # WARNING: this gives traefik full root-access to the host OS, but is recommended/required(?) by traefik
    security_opt:
      - label:disable  # disable selinux because it (rightly) blocks access to docker.sock
    networks:
      - net
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.api.rule=Host(`traefik.example.com`)'
      - 'traefik.http.routers.api.entrypoints=https'
      - 'traefik.http.routers.api.service=api@internal'
      - 'traefik.http.routers.api.tls=true'
      #- 'traefik.http.routers.api.tls.certresolver=letsencrypt'  # uncomment this to enable automatic certificate signing (2/2)
      - 'traefik.http.routers.api.middlewares=authelia@docker'
    ports:
      - '80:80'
      - '443:443'
    command:
      - '--api'
      - '--providers.docker=true'
      - '--providers.docker.exposedByDefault=false'
      - '--entrypoints.http=true'
      - '--entrypoints.http.address=:80'
      - '--entrypoints.http.http.redirections.entrypoint.to=https'
      - '--entrypoints.http.http.redirections.entrypoint.scheme=https'
      - '--entrypoints.https=true'
      - '--entrypoints.https.address=:443'
      - '--certificatesResolvers.letsencrypt.acme.email=your-email@your-domain.com'
      - '--certificatesResolvers.letsencrypt.acme.storage=/etc/traefik/acme.json'
      - '--certificatesResolvers.letsencrypt.acme.httpChallenge.entryPoint=http'
      - '--log=true'
      - '--log.level=WARNING'  # DEBUG
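for reference -- a minimal sketch of what the `./cpp/copyparty.conf` mounted above could contain, consuming the `Remote-User` / `Remote-Groups` headers which the authelia forwardauth middleware injects (header names taken from the `authResponseHeaders` label; the actual config shipped in the example folder may differ):

```yaml
# hypothetical cpp/copyparty.conf matching the compose file above
[global]
  idp-h-usr: remote-user    # authelia's Remote-User header
  idp-h-grp: remote-groups  # authelia's Remote-Groups header
  xff-src: lan              # only trust the headers when they arrive from LAN / private IPs
```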
12 docs/examples/docker/idp-authentik-traefik/README.md Normal file
@@ -0,0 +1,12 @@
> [!WARNING]
> I am unable to guarantee the quality, safety, and security of anything in this folder; it is a combination of examples I found online. Please submit corrections or improvements 🙏

> [!WARNING]
> does not work yet... if you are able to fix this, please do!

this is based on:
* https://goauthentik.io/docker-compose.yml
* https://goauthentik.io/docs/providers/proxy/server_traefik

incomplete list of modifications made:
* support for running with podman as root on fedora (`:z` volumes, `label:disable`)
@@ -0,0 +1,88 @@
# https://goauthentik.io/docker-compose.yml
---
version: "3.4"

services:
  postgresql:
    image: docker.io/library/postgres:12-alpine
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - database:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${PG_PASS:?database password required}
      POSTGRES_USER: ${PG_USER:-authentik}
      POSTGRES_DB: ${PG_DB:-authentik}
    env_file:
      - .env
  redis:
    image: docker.io/library/redis:alpine
    command: --save 60 1 --loglevel warning
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
    volumes:
      - redis:/data
  server:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.2.1}
    restart: unless-stopped
    command: server
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    volumes:
      - ./media:/media
      - ./custom-templates:/templates
    env_file:
      - .env
    ports:
      - "${COMPOSE_PORT_HTTP:-9000}:9000"
      - "${COMPOSE_PORT_HTTPS:-9443}:9443"
    depends_on:
      - postgresql
      - redis
  worker:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.2.1}
    restart: unless-stopped
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    # `user: root` and the docker socket volume are optional.
    # See more for the docker socket integration here:
    # https://goauthentik.io/docs/outposts/integrations/docker
    # Removing `user: root` also prevents the worker from fixing the permissions
    # on the mounted folders, so when removing this make sure the folders have the correct UID/GID
    # (1000:1000 by default)
    user: root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./media:/media
      - ./certs:/certs
      - ./custom-templates:/templates
    env_file:
      - .env
    depends_on:
      - postgresql
      - redis

volumes:
  database:
    driver: local
  redis:
    driver: local
@@ -0,0 +1,46 @@
# https://goauthentik.io/docs/providers/proxy/server_traefik
---
version: "3.7"
services:
  traefik:
    image: traefik:v2.2
    container_name: traefik
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    command:
      - "--api"
      - "--providers.docker=true"
      - "--providers.docker.exposedByDefault=false"
      - "--entrypoints.web.address=:80"

  authentik-proxy:
    image: ghcr.io/goauthentik/proxy
    ports:
      - 9000:9000
      - 9443:9443
    environment:
      AUTHENTIK_HOST: https://your-authentik.tld
      AUTHENTIK_INSECURE: "false"
      AUTHENTIK_TOKEN: token-generated-by-authentik
      # Starting with 2021.9, you can optionally set this too
      # when authentik_host for internal communication doesn't match the public URL
      # AUTHENTIK_HOST_BROWSER: https://external-domain.tld
    labels:
      traefik.enable: true
      traefik.port: 9000
      traefik.http.routers.authentik.rule: Host(`app.company`) && PathPrefix(`/outpost.goauthentik.io/`)
      # `authentik-proxy` refers to the service name in the compose file.
      traefik.http.middlewares.authentik.forwardauth.address: http://authentik-proxy:9000/outpost.goauthentik.io/auth/traefik
      traefik.http.middlewares.authentik.forwardauth.trustForwardHeader: true
      traefik.http.middlewares.authentik.forwardauth.authResponseHeaders: X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid,X-authentik-jwt,X-authentik-meta-jwks,X-authentik-meta-outpost,X-authentik-meta-provider,X-authentik-meta-app,X-authentik-meta-version
    restart: unless-stopped

  whoami:
    image: containous/whoami
    labels:
      traefik.enable: true
      traefik.http.routers.whoami.rule: Host(`app.company`)
      traefik.http.routers.whoami.middlewares: authentik@docker
    restart: unless-stopped
@@ -0,0 +1,72 @@
# not actually YAML but lets pretend:
# -*- mode: yaml -*-
# vim: ft=yaml:


# example config for how copyparty can be used with an identity
# provider, replacing the built-in authentication/authorization
# mechanism, and instead expecting the reverse-proxy to provide
# the requester's username (and possibly a group-name, for
# optional group-based access control)
#
# the filesystem-path `/w` is used as the storage location
# because that is the data-volume in the docker containers,
# because a deployment like this (with an IdP) is more commonly
# seen in containerized environments -- but this is not required


[global]
  e2dsa  # enable file indexing and filesystem scanning
  e2ts   # enable multimedia indexing
  ansi   # enable colors in log messages

  # enable IdP support by expecting username/groupname in
  # http-headers provided by the reverse-proxy; header "X-IdP-User"
  # will contain the username, "X-IdP-Group" the groupname
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group


[/]      # create a volume at "/" (the webroot), which will
  /w     # share /w (the docker data volume, which is mapped to /srv/pub on the host in docker-compose.yml)
  accs:
    rw: *       # everyone gets read-access, but
    rwmda: @su  # the group "su" gets read-write-move-delete-admin


[/u/${u}]     # each user gets their own home-folder at /u/username
  /w/u/${u}   # which will be "u/username" in the docker data volume
  accs:
    r: *              # read-access for anyone, and
    rwmda: ${u}, @su  # read-write-move-delete-admin for that username + the "su" group


[/u/${u}/priv]     # each user also gets a private area at /u/username/priv
  /w/u/${u}/priv   # stored at DATAVOLUME/u/username/priv
  accs:
    rwmda: ${u}, @su  # read-write-move-delete-admin for that username + the "su" group


[/lounge/${g}]     # each group gets their own shared volume
  /w/lounge/${g}   # stored at DATAVOLUME/lounge/groupname
  accs:
    r: *               # read-access for anyone, and
    rwmda: @${g}, @su  # read-write-move-delete-admin for that group + the "su" group


[/lounge/${g}/priv]     # and a private area for each group too
  /w/lounge/${g}/priv   # stored at DATAVOLUME/lounge/groupname/priv
  accs:
    rwmda: @${g}, @su  # read-write-move-delete-admin for that group + the "su" group


# and create some strategic volumes to prevent anyone from gaining
# unintended access to priv folders if the users/groups db is lost
[/u]
  /w/u
  accs:
    rwmda: @su
[/lounge]
  /w/lounge
  accs:
    rwmda: @su
131 docs/examples/docker/idp-authentik-traefik/docker-compose.yml Normal file
@@ -0,0 +1,131 @@
version: "3.4"

volumes:
  database:
    driver: local
  redis:
    driver: local

services:
  copyparty:
    image: copyparty/ac
    container_name: idp_copyparty
    restart: unless-stopped
    user: "1000:1000"  # should match the user/group of your fileshare volumes
    volumes:
      - ./cpp/:/cfg:z   # the copyparty config folder
      - /srv/pub:/w:z   # this is where we declare that "/srv/pub" is the filesystem-path on the server that shall be shared online
    ports:
      - 3923
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.fs.rule=Host(`fs.example.com`)'
      - 'traefik.http.routers.fs.entrypoints=http'
      #- 'traefik.http.routers.fs.middlewares=authelia@docker'  # TODO: ???
    healthcheck:
      test: ["CMD-SHELL", "wget --spider -q 127.0.0.1:3923/?reset"]
      interval: 1m
      timeout: 2s
      retries: 5
      start_period: 15s
    stop_grace_period: 15s  # thumbnailer is allowed to continue finishing up for 10s after the shutdown signal

  traefik:
    image: traefik:v2.11
    container_name: traefik
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # WARNING: this gives traefik full root-access to the host OS, but is recommended/required(?) by traefik
    security_opt:
      - label:disable  # disable selinux because it (rightly) blocks access to docker.sock
    ports:
      - 80:80
    command:
      - '--api'
      - '--providers.docker=true'
      - '--providers.docker.exposedByDefault=false'
      - '--entrypoints.web.address=:80'

  postgresql:
    image: docker.io/library/postgres:12-alpine
    container_name: idp_postgresql
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - database:/var/lib/postgresql/data:z
    environment:
      POSTGRES_PASSWORD: postgrass
      POSTGRES_USER: authentik
      POSTGRES_DB: authentik
    env_file:
      - .env

  redis:
    image: docker.io/library/redis:alpine
    command: --save 60 1 --loglevel warning
    container_name: idp_redis
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
    volumes:
      - redis:/data:z

  authentik_server:
    image: ghcr.io/goauthentik/server:2024.2.1
    container_name: idp_authentik_server
    restart: unless-stopped
    command: server
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: postgrass
    volumes:
      - ./media:/media:z
      - ./custom-templates:/templates:z
    env_file:
      - .env
    ports:
      - 9000
      - 9443
    depends_on:
      - postgresql
      - redis

  authentik_worker:
    image: ghcr.io/goauthentik/server:2024.2.1
    container_name: idp_authentik_worker
    restart: unless-stopped
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: postgrass
    # `user: root` and the docker socket volume are optional.
    # See more for the docker socket integration here:
    # https://goauthentik.io/docs/outposts/integrations/docker
    # Removing `user: root` also prevents the worker from fixing the permissions
    # on the mounted folders, so when removing this make sure the folders have the correct UID/GID
    # (1000:1000 by default)
    user: root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./media:/media:z
      - ./certs:/certs:z
      - ./custom-templates:/templates:z
    env_file:
      - .env
    depends_on:
      - postgresql
      - redis
@@ -26,6 +26,24 @@
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

  # but copyparty will refuse to accept those headers unless you
  # tell it the LAN IP of the reverse-proxy to expect them from,
  # preventing malicious users from pretending to be the proxy;
  # pay attention to the warning message in the logs and then
  # adjust the following config option accordingly:
  xff-src: 192.168.0.0/16

  # or just allow all LAN / private IPs (probably good enough):
  xff-src: lan

  # an additional, optional security measure is to expect a
  # secret header name from the reverse-proxy; you can enable
  # this feature by setting the header-name to expect here:
  #idp-h-key: shangala-bangala

  # convenient debug option:
  # log all incoming request headers from the proxy
  #ihead: *

[/]      # create a volume at "/" (the webroot), which will
  /w     # share /w (the docker data volume)
22 docs/idp.md Normal file
@@ -0,0 +1,22 @@
there is a [docker-compose example](./examples/docker/idp-authelia-traefik) which is hopefully a good starting point (meaning you can skip the steps below) -- but if you want to set this up from scratch yourself (or learn about how it works), keep reading:

to configure IdP from scratch, you must place copyparty behind a reverse-proxy which sends all requests through a middleware (the IdP / identity-provider service) which will inject a set of headers into the requests, telling copyparty who the user is

in the copyparty `[global]` config, specify which headers to read client info from; username is required (`idp-h-usr: X-Authooley-User`), group(s) are optional (`idp-h-grp: X-Authooley-Groups`)

* it is also required to specify the subnet that legit requests will be coming from, for example `--xff-src=10.88.0.0/16` to allow 10.88.x.x (or `--xff-src=lan` for all private IPs), and it is recommended to configure the reverse-proxy to include a secret header as proof that the other headers are also legit (and not smuggled in by a malicious client), telling copyparty the header-name to expect with `idp-h-key: shangala-bangala` -- see the config sketch below
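collecting the options above into a config-file sketch (the `X-Authooley` header names are the hypothetical examples from this paragraph; substitute whatever headers your IdP actually injects):

```yaml
[global]
  idp-h-usr: X-Authooley-User    # required; header containing the username
  idp-h-grp: X-Authooley-Groups  # optional; enables group-based access control
  xff-src: 10.88.0.0/16          # only accept the headers from this subnet
  idp-h-key: shangala-bangala    # optional; secret header injected by the proxy
```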

# important notes

## IdP volumes are forgotten on shutdown

IdP volumes, meaning dynamically-created volumes, meaning volumes that contain `${u}` or `${g}` in their URL, will be forgotten during a server restart and then "revived" when the volume's owner sends their first request after the restart

until each IdP volume is revived, it will inherit the permissions of its parent volume (if any)

this means that, if an IdP volume is located inside a folder that is readable by anyone, then each of those IdP volumes will **also become readable by anyone** until the volume is revived

and likewise -- if the IdP volume is inside a folder that is only accessible by certain users, but the IdP volume is configured to allow access from unauthenticated users, then the contents of the volume will NOT be accessible until it is revived

until this limitation is fixed (if ever), it is recommended to place IdP volumes inside an appropriate parent volume, so they can inherit acceptable permissions until their revival; see the "strategic volumes" at the bottom of [./examples/docker/idp/copyparty.conf](./examples/docker/idp/copyparty.conf)
49 docs/rice/README.md Normal file
@@ -0,0 +1,49 @@
# custom fonts

to change the fonts in the web-UI, first save the following text (the default font-config) to a new css file, for example named `customfonts.css` in your webroot:

```css
:root {
    --font-main: sans-serif;
    --font-serif: serif;
    --font-mono: 'scp';
}
```

add this to your copyparty config so the css file gets loaded: `--html-head='<link rel="stylesheet" href="/customfonts.css">'`

alternatively, if you are using a config file instead of commandline args:

```yaml
[global]
  html-head: <link rel="stylesheet" href="/customfonts.css">
```

restart copyparty for the config change to take effect

edit the css file you made and press `ctrl`-`shift`-`R` in the browser to see the changes as you go (no need to restart copyparty for each change)

if you are introducing a new ttf/woff font, don't forget to declare the font itself in the css file; here's one of the default fonts from `ui.css`:

```css
@font-face {
    font-family: 'scp';
    font-display: swap;
    src: local('Source Code Pro Regular'), local('SourceCodePro-Regular'), url(deps/scp.woff2) format('woff2');
}
```

and because textboxes don't inherit fonts by default, you can force it like this:

```css
input[type=text], input[type=submit], input[type=button] { font-family: var(--font-main) }
```

and if you want to have a monospace font in the fancy markdown editor, do this:

```css
.EasyMDEContainer .CodeMirror { font-family: var(--font-mono) }
```

NB: `<textarea id="mt">` and `<div id="mtr">` in the regular markdown editor must have the same font; none of the suggestions above will cause any issues but keep it in mind if you're getting creative
@@ -248,9 +248,9 @@ symbol legend,
| ----------------------- | - | - | - | - | - | - | - | - | - | - | - | - |
| accounts | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ | █ |
| per-account chroot | | | | | | | | | | | | █ |
| single-sign-on | | | | █ | █ | | | | • | | | |
| token auth | | | | █ | █ | | | █ | | | | |
| 2fa | | | | █ | █ | | | | | | | █ |
| single-sign-on | ╱ | | | █ | █ | | | | • | | | |
| token auth | ╱ | | | █ | █ | | | █ | | | | |
| 2fa | ╱ | | | █ | █ | | | | | | | █ |
| per-volume permissions | █ | █ | █ | █ | █ | █ | █ | | █ | █ | ╱ | █ |
| per-folder permissions | ╱ | | | █ | █ | | █ | | █ | █ | ╱ | █ |
| per-file permissions | | | | █ | █ | | █ | | █ | | | |
@@ -289,6 +289,7 @@ symbol legend,
* `curl-friendly ls` = returns a [sortable plaintext folder listing](https://user-images.githubusercontent.com/241032/215322619-ea5fd606-3654-40ad-94ee-2bc058647bb2.png) when curled
* `curl-friendly upload` = uploading with curl is just `curl -T some.bin http://.../`
* `a`/copyparty remarks:
  * single-sign-on, token-auth, and 2fa is *possible* through authelia/authentik or similar, but nobody's made an example yet
  * one-way folder sync from local to server can be done efficiently with [u2c.py](https://github.com/9001/copyparty/tree/hovudstraum/bin#u2cpy), or with webdav and conventional rsync
  * can hot-reload config files (with just a few exceptions)
  * can set per-folder permissions if that folder is made into a separate volume, so there is configuration overhead
45 docs/xff.md Normal file
@@ -0,0 +1,45 @@
when running behind a reverse-proxy, or a WAF, or another protection service such as cloudflare:

if you (and maybe everybody else) keep getting a message that says `thank you for playing`, then you've gotten banned for malicious traffic. This ban applies to the IP-address that copyparty *thinks* identifies the shady client -- so, depending on your setup, you might have to tell copyparty where to find the correct IP

knowing the correct IP is also crucial for some other features, such as the unpost feature which lets you delete your own recent uploads -- but if everybody has the same IP, well...

----

for most common setups, there should be a helpful message in the server-log explaining what to do, something like `--xff-src=10.88.0.0/16` or `--xff-src=lan` to accept the `X-Forwarded-For` header from your reverse-proxy with a LAN IP of `10.88.x.y`

if you are behind cloudflare, it is recommended to also set `--xff-hdr=cf-connecting-ip` to use a more trustworthy source of info, but then it's also very important to ensure your reverse-proxy does not accept connections from anything BUT cloudflare; you can do this by generating an ip-address allowlist and rejecting all other connections

* if you are using nginx as your reverse-proxy, see the [example nginx config](https://github.com/9001/copyparty/blob/hovudstraum/contrib/nginx/copyparty.conf) on how the cloudflare allowlist can be done

----

the server-log will give recommendations in the form of commandline arguments;

to do the same thing using config files, take the options that are suggested in the serverlog and put them into the `[global]` section in your `copyparty.conf` like so:

```yaml
[global]
  xff-src: lan
  xff-hdr: cf-connecting-ip
```

----

# but if you just want to get it working:

...and don't care about security, you can optionally disable the bot-detectors, either by specifying commandline-args `--ban-404=no --ban-403=no --ban-422=no --ban-url=no --ban-pw=no`

or by adding these lines inside the `[global]` section in your `copyparty.conf`:

```yaml
[global]
  ban-404: no
  ban-403: no
  ban-422: no
  ban-url: no
  ban-pw: no
```

but remember that this will make other features insecure as well, such as unpost
@@ -49,7 +49,7 @@ thumbnails2 = ["pyvips"]
audiotags = ["mutagen"]
ftpd = ["pyftpdlib"]
ftps = ["pyftpdlib", "pyopenssl"]
tftpd = ["partftpy>=0.2.0"]
tftpd = ["partftpy>=0.3.1"]
pwhash = ["argon2-cffi"]

[project.scripts]
@@ -3,7 +3,7 @@ WORKDIR /z
ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
    ver_hashwasm=4.10.0 \
    ver_marked=4.3.0 \
    ver_dompf=3.0.8 \
    ver_dompf=3.0.9 \
    ver_mde=2.18.0 \
    ver_codemirror=5.65.16 \
    ver_fontawesome=5.13.0 \
@@ -24,7 +24,7 @@ ENV ver_asmcrypto=c72492f4a66e17a0e5dd8ad7874de354f3ccdaa5 \
# the scp url is regular latin from https://fonts.googleapis.com/css2?family=Source+Code+Pro&display=swap
RUN mkdir -p /z/dist/no-pk \
    && wget https://fonts.gstatic.com/s/sourcecodepro/v11/HI_SiYsKILxRpg3hIP6sJ7fM7PqlPevW.woff2 -O scp.woff2 \
    && apk add cmake make g++ git bash npm patch wget tar pigz brotli gzip unzip python3 python3-dev brotli py3-brotli \
    && apk add cmake make g++ git bash npm patch wget tar pigz brotli gzip unzip python3 python3-dev py3-brotli \
    && rm -f /usr/lib/python3*/EXTERNALLY-MANAGED \
    && wget https://github.com/openpgpjs/asmcrypto.js/archive/$ver_asmcrypto.tar.gz -O asmcrypto.tgz \
    && wget https://github.com/markedjs/marked/archive/v$ver_marked.tar.gz -O marked.tgz \
@@ -143,9 +143,8 @@ RUN ./genprism.sh $ver_prism
# compress
COPY brotli.makefile zopfli.makefile /z/dist/
COPY zopfli.makefile /z/dist/
RUN cd /z/dist \
    && make -j$(nproc) -f brotli.makefile \
    && make -j$(nproc) -f zopfli.makefile \
    && rm *.makefile \
    && mv no-pk/* . \
@@ -1,4 +0,0 @@
all: $(addsuffix .br, $(wildcard easymde*))

%.br: %
	brotli -jZ $<
@@ -1,4 +1,4 @@
FROM fedora:38
FROM fedora:39
WORKDIR /z
LABEL org.opencontainers.image.url="https://github.com/9001/copyparty" \
    org.opencontainers.image.source="https://github.com/9001/copyparty/tree/hovudstraum/scripts/docker" \
@@ -21,7 +21,7 @@ RUN dnf install -y \
    vips vips-jxl vips-poppler vips-magick \
    python3-numpy fftw libsndfile \
    gcc gcc-c++ make cmake patchelf jq \
    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools \
    python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools python3-wheel \
    vamp-plugin-sdk qm-vamp-plugins \
    vamp-plugin-sdk-devel vamp-plugin-sdk-static \
    && rm -f /usr/lib/python3*/EXTERNALLY-MANAGED \
@@ -29,7 +29,7 @@ RUN dnf install -y \
    && bash install-deps.sh \
    && dnf erase -y \
        gcc gcc-c++ make cmake patchelf jq \
        python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools \
        python3-devel ffmpeg-devel fftw-devel libsndfile-devel python3-setuptools python3-wheel \
        vamp-plugin-sdk-devel vamp-plugin-sdk-static \
    && dnf clean all \
    && find /usr/ -name __pycache__ | xargs rm -rf \
@@ -33,6 +33,8 @@ the recommended way to configure copyparty inside a container is to mount a fold
* but you can also provide arguments to the docker command if you prefer that
* config files must be named `something.conf` to get picked up

also see [docker-specific recommendations](#docker-specific-recommendations)


## editions
@@ -88,6 +90,26 @@ the following advice is best-effort and not guaranteed to be entirely correct

# docker-specific recommendations

* copyparty will generally create a `.hist` folder at the top of each volume, which contains the filesystem index, thumbnails and such. For performance reasons, but also just to keep things tidy, it might be convenient to store these inside the config folder instead. Add the line `hist: /cfg/hists/` inside the `[global]` section of your `copyparty.conf` to do this


## enabling the ftp server

...is tricky because ftp is a weird protocol and docker is making it worse 🎉

add the following three config entries into the `[global]` section of your `copyparty.conf` (collected into a sketch after this list):

* `ftp: 3921` to enable the service, listening for connections on port 3921

* `ftp-nat: 127.0.0.1` but replace `127.0.0.1` with the actual external IP of your server; the clients will only be able to connect to this IP, even if the server has multiple IPs

* `ftp-pr: 12000-12099` to restrict the [passive-mode](http://slacksite.com/other/ftp.html#passive) port selection range; this allows up to 100 simultaneous file transfers

then finally update your docker config so that the port-range you specified (12000-12099) is exposed to the internet
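a sketch collecting the three entries above into the `[global]` section (the external IP is a placeholder; replace it with your server's actual IP):

```yaml
# inside your copyparty.conf
[global]
  ftp: 3921             # enable the ftp service on port 3921
  ftp-nat: 203.0.113.1  # placeholder; the actual external IP of your server
  ftp-pr: 12000-12099   # passive-mode port range; up to 100 transfers
```

and on the docker side, a hedged guess at the matching compose snippet, publishing both the control port and the passive range:

```yaml
services:
  copyparty:
    ports:
      - "3921:3921"
      - "12000-12099:12000-12099"
```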

# build the images yourself

basically `./make.sh hclean pull img push` but see [devnotes.md](./devnotes.md)
@@ -37,7 +37,7 @@ help() { exec cat <<'EOF'
# _____________________________________________________________________
# web features:
#
# `no-cm` saves ~82k by removing easymde/codemirror
# `no-cm` saves ~89k by removing easymde/codemirror
#   (the fancy markdown editor)
#
# `no-hl` saves ~41k by removing syntax hilighting in the text viewer
@@ -225,9 +225,9 @@ necho() {
	mv pyftpdlib ftp/

	necho collecting partftpy
	f="../build/partftpy-0.2.0.tar.gz"
	f="../build/partftpy-0.3.1.tar.gz"
	[ -e "$f" ] ||
		(url=https://files.pythonhosted.org/packages/64/4a/360dde1e7277758a4ccb0d6434ec661042d9d745aa6c3baa9ec0699df3e9/partftpy-0.2.0.tar.gz;
		(url=https://files.pythonhosted.org/packages/37/79/1a1de1d3fdf27ddc9c2d55fec6552e7b8ed115258fedac6120679898b83d/partftpy-0.3.1.tar.gz;
		wget -O$f "$url" || curl -L "$url" >$f)

	tar -zxf $f
@@ -368,7 +368,7 @@ git describe --tags >/dev/null 2>/dev/null && {
printf '%s\n' "$git_ver" | grep -qE '^v[0-9\.]+-[0-9]+-g[0-9a-f]+$' && {
	# long format (unreleased commit)
	t_ver="$(printf '%s\n' "$ver" | sed -r 's/\./, /g; s/(.*) (.*)/\1 "\2"/')"
	t_ver="$(printf '%s\n' "$ver" | sed -r 's/[-.]/, /g; s/(.*) (.*)/\1 "\2"/')"
}

[ -z "$t_ver" ] && {
@@ -406,7 +406,7 @@ find -type f -name ._\* | while IFS= read -r f; do cmp <(printf '\x00\x05\x16')
rm -f copyparty/web/deps/*.full.* copyparty/web/dbg-* copyparty/web/Makefile

find copyparty | LC_ALL=C sort | sed -r 's/\.(gz|br)$//;s/$/,/' > have
find copyparty | LC_ALL=C sort | sed -r 's/\.gz$//;s/$/,/' > have
cat have | while IFS= read -r x; do
	grep -qF -- "$x" ../scripts/sfx.ls || {
		echo "unexpected file: $x"
@@ -603,7 +603,7 @@ sed -r 's/(.*)\.(.*)/\2 \1/' | LC_ALL=C sort |
sed -r 's/([^ ]*) (.*)/\2.\1/' | grep -vE '/list1?$' > list1

for n in {1..50}; do
	(grep -vE '\.(gz|br)$' list1; grep -E '\.(gz|br)$' list1 | (shuf||gshuf) ) >list || true
	(grep -vE '\.gz$' list1; grep -E '\.gz$' list1 | (shuf||gshuf) ) >list || true
	s=$( (sha1sum||shasum) < list | cut -c-16)
	grep -q $s "$zdir/h" 2>/dev/null && continue
	echo $s >> "$zdir/h"
@@ -37,7 +37,7 @@ rm -rf $TEMP/pe-copyparty*
python copyparty-sfx.py --version

rm -rf mods; mkdir mods
cp -pR $TEMP/pe-copyparty/copyparty/ $TEMP/pe-copyparty/{ftp,j2}/* mods/
cp -pR $TEMP/pe-copyparty/{copyparty,partftpy}/ $TEMP/pe-copyparty/{ftp,j2}/* mods/
[ $w10 ] && rm -rf mods/{jinja2,markupsafe}

af() { awk "$1" <$2 >tf; mv tf "$2"; }
@@ -1,13 +1,10 @@
f117016b1e6a7d7e745db30d3e67f1acf7957c443a0dd301b6c5e10b8368f2aa4db6be9782d2d3f84beadd139bfeef4982e40f21ca5d9065cb794eeb0e473e82 altgraph-0.17.4-py2.py3-none-any.whl
eda6c38fc4d813fee897e969ff9ecc5acc613df755ae63df0392217bbd67408b5c1f6c676f2bf5497b772a3eb4e1a360e1245e1c16ee83f0af555f1ab82c3977 Git-2.39.1-32-bit.exe
17ce52ba50692a9d964f57a23ac163fb74c77fdeb2ca988a6d439ae1fe91955ff43730c073af97a7b3223093ffea3479a996b9b50ee7fba0869247a56f74baa6 pefile-2023.2.7-py3-none-any.whl
f298e34356b5590dde7477d7b3a88ad39c622a2bcf3fcd7c53870ce8384dd510f690af81b8f42e121a22d3968a767d2e07595036b2ed7049c8ef4d112bcf3a61 pyinstaller-5.13.2-py3-none-win32.whl
f23615c522ed58b9a05978ba4c69c06224590f3a6adbd8e89b31838b181a57160739ceff1fc2ba6f4239b8fee46f92ce02910b2debda2710558ed42cff1ce3f1 pyinstaller-6.1.0-py3-none-win_amd64.whl
5747b3b119629c4cf956f0eaa85f29218bb3680d3a4a262fa6e976e56b35067302e153d2c0a001505f2cb642b1f78752567889b3b82e342d6cd29aac8b70e92e pyinstaller_hooks_contrib-2023.10-py2.py3-none-any.whl
126ca016c00256f4ff13c88707ead21b3b98f3c665ae57a5bcbb80c8be3004bff36d9c7f9a1cc9d20551019708f2b195154f302d80a1e5a2026d6d0fe9f3d5f4 pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
749a473646c6d4c7939989649733d4c7699fd1c359c27046bf5bc9c070d1a4b8b986bbc65f60d7da725baf16dbfdd75a4c2f5bb8335f2cb5685073f5fee5c2d1 pywin32_ctypes-0.2.2-py3-none-any.whl
6e0d854040baff861e1647d2bece7d090bc793b2bd9819c56105b94090df54881a6a9b43ebd82578cd7c76d47181571b671e60672afd9def389d03c9dae84fcf setuptools-68.2.2-py3-none-any.whl
3c5adf0a36516d284a2ede363051edc1bcc9df925c5a8a9fa2e03cab579dd8d847fdad42f7fd5ba35992e08234c97d2dbfec40a9d12eec61c8dc03758f2bd88e typing_extensions-4.4.0-py3-none-any.whl
8d16a967a0a7872a7575b1005cf66915deacda6ee8611fbb52f42fc3e3beb2f901a5140c942a5d146bd412b92bfa9cbadd82beeba83df6d70930c6dc26608a5b upx-4.1.0-win32.zip
# u2c (win7)
f3390290b896019b2fa169932390e4930d1c03c014e1f6db2405ca2eb1f51f5f5213f725885853805b742997b0edb369787e5c0069d217bc4e8b957f847f58b6 certifi-2023.11.17-py3-none-any.whl
904eb57b13bea80aea861de86987e618665d37fa9ea0856e0125a9ba767a53e5064de0b9c4735435a2ddf4f16f7f7d2c75a682e1de83d9f57922bdca8e29988c charset_normalizer-3.3.0-cp37-cp37m-win32.whl
@@ -18,15 +15,19 @@ b795abb26ba2f04f1afcfb196f21f638014b26c8186f8f488f1c2d91e8e0220962fbd259dbc9c387
91c025f7d94bcdf93df838fab67053165a414fc84e8496f92ecbb910dd55f6b6af5e360bbd051444066880c5a6877e75157bd95e150ead46e5c605930dfc50f2 future-0.18.2.tar.gz
c06b3295d1d0b0f0a6f9a6cd0be861b9b643b4a5ea37857f0bd41c45deaf27bb927b71922dab74e633e43d75d04a9bd0d1c4ad875569740b0f2a98dd2bfa5113 importlib_metadata-5.0.0-py3-none-any.whl
016a8cbd09384f1a9a44cb0e8274df75a8bcb2f3966bb5d708c62145289efaa5db98f75256c97e4f8046735ce2e529fbb076f284a46cdb716e89a75660200ad9 pip-23.2.1-py3-none-any.whl
f298e34356b5590dde7477d7b3a88ad39c622a2bcf3fcd7c53870ce8384dd510f690af81b8f42e121a22d3968a767d2e07595036b2ed7049c8ef4d112bcf3a61 pyinstaller-5.13.2-py3-none-win32.whl
6bb73cc2db795c59c92f2115727f5c173cacc9465af7710db9ff2f2aec2d73130d0992d0f16dcb3fac222dc15c0916562d0813b2337401022020673a4461df3d python-3.7.9-amd64.exe
500747651c87f59f2436c5ab91207b5b657856e43d10083f3ce27efb196a2580fadd199a4209519b409920c562aaaa7dcbdfb83ed2072a43eaccae6e2d056f31 python-3.7.9.exe
2e04acff170ca3bbceeeb18489c687126c951ec0bfd53cccfb389ba8d29a4576c1a9e8f2e5ea26c84dd21bfa2912f4e71fa72c1e2653b71e34afc0e65f1722d4 upx-4.2.2-win32.zip
68e1b618d988be56aaae4e2eb92bc0093627a00441c1074ebe680c41aa98a6161e52733ad0c59888c643a33fe56884e4f935178b2557fbbdd105e92e0d993df6 windows6.1-kb2533623-x64.msu
479a63e14586ab2f2228208116fc149ed8ee7b1e4ff360754f5bda4bf765c61af2e04b5ef123976623d04df4976b7886e0445647269da81436bd0a7b5671d361 windows6.1-kb2533623-x86.msu
ba91ab0518c61eff13e5612d9e6b532940813f6b56e6ed81ea6c7c4d45acee4d98136a383a25067512b8f75538c67c987cf3944bfa0229e3cb677e2fb81e763e zipp-3.10.0-py3-none-any.whl
# win10
00558cca2e0ac813d404252f6e5aeacb50546822ecb5d0570228b8ddd29d94e059fbeb6b90393dee5abcddaca1370aca784dc9b095cbb74e980b3c024767fb24 Jinja2-3.1.2-py3-none-any.whl
7f8f4daa4f4f2dbf24cdd534b2952ee3fba6334eb42b37465ccda3aa1cccc3d6204aa6bfffb8a83bf42ec59c702b5b5247d4c8ee0d4df906334ae53072ef8c4c MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl
e3e2e6bd511dec484dd0292f4c46c55c88a885eabf15413d53edea2dd4a4dbae1571735b9424f78c0cd7f1082476a8259f31fd3f63990f726175470f636df2b3 Jinja2-3.1.3-py3-none-any.whl
e21495f1d473d855103fb4a243095b498ec90eb68776b0f9b48e994990534f7286c0292448e129c507e5d70409f8a05cca58b98d59ce2a815993d0a873dfc480 MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
656015f5cc2c04aa0653ee5609c39a7e5f0b6a58c84fe26b20bd070c52d20b4effb810132f7fb771168483e9fd975cc3302837dd7a1a687ee058b0460c857cc4 packaging-23.2-py3-none-any.whl
424e20dc7263a31d524307bc39ed755a9dd82f538086fff68d98dd97e236c9b00777a8ac2e3853081b532b0e93cef44983e74d0ab274877440e8b7341b19358a pillow-10.2.0-cp311-cp311-win_amd64.whl
8760eab271e79256ae3bfb4af8ccc59010cb5d2eccdd74b325d1a533ae25eb127d51c2ec28ff90d449afed32dd7d6af62934fe9caaf1ae1f4d4831e948e912da pyinstaller-6.5.0-py3-none-win_amd64.whl
e6bdbae1affd161e62fc87407c912462dfe875f535ba9f344d0c4ade13715c947cd3ae832eff60f1bad4161938311d06ac8bc9b52ef203f7b0d9de1409f052a5 python-3.11.8-amd64.exe
729dc52f1a02bc6274d012ce33f534102975a828cba11f6029600ea40e2d23aefeb07bf4ae19f9621d0565dd03eb2635bbb97d45fb692c1f756315e8c86c5255 upx-4.2.2-win64.zip
@@ -17,19 +17,19 @@ uname -s | grep NT-10 && w10=1 || {
fns=(
	altgraph-0.17.4-py2.py3-none-any.whl
	pefile-2023.2.7-py3-none-any.whl
	pyinstaller_hooks_contrib-2023.10-py2.py3-none-any.whl
	pyinstaller_hooks_contrib-2024.3-py2.py3-none-any.whl
	pywin32_ctypes-0.2.2-py3-none-any.whl
	setuptools-68.2.2-py3-none-any.whl
	upx-4.1.0-win32.zip
)
[ $w10 ] && fns+=(
	pyinstaller-6.1.0-py3-none-win_amd64.whl
	Jinja2-3.1.2-py3-none-any.whl
	MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl
	pyinstaller-6.5.0-py3-none-win_amd64.whl
	Jinja2-3.1.3-py3-none-any.whl
	MarkupSafe-2.1.5-cp311-cp311-win_amd64.whl
	mutagen-1.47.0-py3-none-any.whl
	packaging-23.2-py3-none-any.whl
	pillow-10.2.0-cp311-cp311-win_amd64.whl
	python-3.11.8-amd64.exe
	upx-4.2.2-win64.zip
)
[ $w7 ] && fns+=(
	pyinstaller-5.13.2-py3-none-win32.whl
@@ -38,6 +38,7 @@ fns=(
	idna-3.4-py3-none-any.whl
	requests-2.28.2-py3-none-any.whl
	urllib3-1.26.14-py2.py3-none-any.whl
	upx-4.2.2-win32.zip
)
[ $w7 ] && fns+=(
	future-0.18.2.tar.gz
@@ -234,8 +234,9 @@ def u8(gen):
def yieldfile(fn):
    with open(fn, "rb") as f:
        for block in iter(lambda: f.read(64 * 1024), b""):
    s = 64 * 1024
    with open(fn, "rb", s * 4) as f:
        for block in iter(lambda: f.read(s), b""):
            yield block
36 scripts/test/tftp.sh Executable file
@@ -0,0 +1,36 @@
#!/bin/bash
set -ex

# PYTHONPATH=.:~/dev/partftpy/ taskset -c 0 python3 -m copyparty -v srv::r -v srv/junk:junk:A --tftp 3969

get_src=~/dev/copyparty/srv/palette.flac
get_fn=${get_src##*/}

put_src=~/Downloads/102.zip
put_dst=~/dev/copyparty/srv/junk/102.zip

cd /dev/shm

echo curl get 1428 v4; curl --tftp-blksize 1428 tftp://127.0.0.1:3969/$get_fn | cmp $get_src || exit 1
echo curl get 1428 v6; curl --tftp-blksize 1428 tftp://[::1]:3969/$get_fn | cmp $get_src || exit 1

echo curl put 1428 v4; rm -f $put_dst && curl --tftp-blksize 1428 -T $put_src tftp://127.0.0.1:3969/junk/ && cmp $put_src $put_dst || exit 1
echo curl put 1428 v6; rm -f $put_dst && curl --tftp-blksize 1428 -T $put_src tftp://[::1]:3969/junk/ && cmp $put_src $put_dst || exit 1

echo atftp get 1428; rm -f $get_fn && ~/src/atftp/atftp --option "blksize 1428" -g -r $get_fn 127.0.0.1 3969 && cmp $get_fn $get_src || exit 1

echo atftp put 1428; rm -f $put_dst && ~/src/atftp/atftp --option "blksize 1428" 127.0.0.1 3969 -p -l $put_src -r junk/102.zip && cmp $put_src $put_dst || exit 1

echo tftp-hpa get; rm -f $put_dst && tftp -v -m binary 127.0.0.1 3969 -c get $get_fn && cmp $get_src $get_fn || exit 1

echo tftp-hpa put; rm -f $put_dst && tftp -v -m binary 127.0.0.1 3969 -c put $put_src junk/102.zip && cmp $put_src $put_dst || exit 1

echo curl get 512; curl tftp://127.0.0.1:3969/$get_fn | cmp $get_src || exit 1

echo curl put 512; rm -f $put_dst && curl -T $put_src tftp://127.0.0.1:3969/junk/ && cmp $put_src $put_dst || exit 1

echo atftp get 512; rm -f $get_fn && ~/src/atftp/atftp -g -r $get_fn 127.0.0.1 3969 && cmp $get_fn $get_src || exit 1

echo atftp put 512; rm -f $put_dst && ~/src/atftp/atftp 127.0.0.1 3969 -p -l $put_src -r junk/102.zip && cmp $put_src $put_dst || exit 1

echo nice
2 setup.py
@@ -141,7 +141,7 @@ args = {
    "audiotags": ["mutagen"],
    "ftpd": ["pyftpdlib"],
    "ftps": ["pyftpdlib", "pyopenssl"],
    "tftpd": ["partftpy>=0.2.0"],
    "tftpd": ["partftpy>=0.3.1"],
    "pwhash": ["argon2-cffi"],
},
"entry_points": {"console_scripts": ["copyparty = copyparty.__main__:main"]},
17 tests/res/idp/1.conf Normal file
@@ -0,0 +1,17 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[accounts]
  ua: pa

[/]
  /
  accs:
    r: ua

[/vb]
  /b
29 tests/res/idp/2.conf Normal file
@@ -0,0 +1,29 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[accounts]
  ua: pa
  ub: pb
  uc: pc

[groups]
  ga: ua, ub

[/]
  /
  accs:
    r: @ga

[/vb]
  /b
  accs:
    r: @ga, ua

[/vc]
  /c
  accs:
    r: @ga, uc
16 tests/res/idp/3.conf Normal file
@@ -0,0 +1,16 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[/vu/${u}]
  /
  accs:
    r: ${u}

[/vg/${g}]
  /b
  accs:
    r: @${g}
25 tests/res/idp/4.conf Normal file
@@ -0,0 +1,25 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[accounts]
  ua: pa
  ub: pb

[/vu/${u}]
  /u-${u}
  accs:
    r: ${u}

[/vg/${g}1]
  /g1-${g}
  accs:
    r: @${g}

[/vg/${g}2]
  /g2-${g}
  accs:
    r: @${g}, ua
21 tests/res/idp/5.conf Normal file
@@ -0,0 +1,21 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[/ga]
  /ga
  accs:
    r: @ga

[/gb]
  /gb
  accs:
    r: @gb

[/g]
  /g
  accs:
    r: @ga, @gb
24 tests/res/idp/6.conf Normal file
@@ -0,0 +1,24 @@
# -*- mode: yaml -*-
# vim: ft=yaml:

[global]
  idp-h-usr: x-idp-user
  idp-h-grp: x-idp-group

[/get/${u}]
  /get/${u}
  accs:
    g: *
    r: ${u}, @su
    m: @su

[/priv/${u}]
  /priv/${u}
  accs:
    r: ${u}, @su
    m: @su

[/team/${g}/${u}]
  /team/${g}/${u}
  accs:
    r: @${g}
@@ -49,11 +49,7 @@ class TestHttpCli(unittest.TestCase):
        with open(filepath, "wb") as f:
            f.write(filepath.encode("utf-8"))

        vcfg = [
            ".::r,u1:r.,u2",
            "a:a:r,u1:r,u2",
            ".b:.b:r.,u1:r,u2"
        ]
        vcfg = [".::r,u1:r.,u2", "a:a:r,u1:r,u2", ".b:.b:r.,u1:r,u2"]
        self.args = Cfg(v=vcfg, a=["u1:u1", "u2:u2"], e2dsa=True)
        self.asrv = AuthSrv(self.args, self.log)
@@ -96,7 +92,7 @@ class TestHttpCli(unittest.TestCase):
        tar = tarfile.open(fileobj=io.BytesIO(b), mode="r|").getnames()
        top = ("top" if not url else url.lstrip(".").split("/")[0]) + "/"
        assert len(tar) == len([x for x in tar if x.startswith(top)])
        return " ".join([x[len(top):] for x in tar])
        return " ".join([x[len(top) :] for x in tar])

    def curl(self, url, uname, binary=False):
        conn = tu.VHttpConn(self.args, self.asrv, self.log, hdr(url, uname))
229 tests/test_idp.py Normal file
@@ -0,0 +1,229 @@
#!/usr/bin/env python3
# coding: utf-8
from __future__ import print_function, unicode_literals

import json
import os
import unittest

from copyparty.authsrv import AuthSrv
from tests.util import Cfg


class TestVFS(unittest.TestCase):
    def dump(self, vfs):
        print(json.dumps(vfs, indent=4, sort_keys=True, default=lambda o: o.__dict__))

    def log(self, src, msg, c=0):
        m = "%s" % (msg,)
        if (
            "warning: filesystem-path does not exist:" in m
            or "you are sharing a system directory:" in m
            or "reinitializing due to new user from IdP:" in m
            or m.startswith("hint: argument")
            or (m.startswith("loaded ") and " config files:" in m)
        ):
            return

        print(("[%s] %s" % (src, msg)).encode("ascii", "replace").decode("ascii"))

    def nav(self, au, vp):
        return au.vfs.get(vp, "", False, False)[0]

    def assertAxs(self, axs, expected):
        unpacked = []
        zs = "uread uwrite umove udel uget upget uhtml uadmin udot"
        for k in zs.split():
            unpacked.append(list(sorted(getattr(axs, k))))

        pad = len(unpacked) - len(expected)
        self.assertEqual(unpacked, expected + [[]] * pad)

    def assertAxsAt(self, au, vp, expected):
        vn = self.nav(au, vp)
        self.assertAxs(vn.axs, expected)

    def assertNodes(self, vfs, expected):
        got = list(sorted(vfs.nodes.keys()))
        self.assertEqual(got, expected)

    def assertNodesAt(self, au, vp, expected):
        vn = self.nav(au, vp)
        self.assertNodes(vn, expected)

    def prep(self):
        here = os.path.abspath(os.path.dirname(__file__))
        cfgdir = os.path.join(here, "res", "idp")

        # globals are applied by main so need to cheat a little
        xcfg = {"idp_h_usr": "x-idp-user", "idp_h_grp": "x-idp-group"}

        return here, cfgdir, xcfg

    # buckle up...

    def test_1(self):
        """
        trivial; volumes [/] and [/vb] with one user in [/] only
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/1.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "/")
        self.assertNodes(au.vfs, ["vb"])
        self.assertNodes(au.vfs.nodes["vb"], [])

        self.assertAxs(au.vfs.axs, [["ua"]])
        self.assertAxs(au.vfs.nodes["vb"].axs, [])

    def test_2(self):
        """
        users ua/ub/uc, group ga (ua+ub) in basic combinations
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/2.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "/")
        self.assertNodes(au.vfs, ["vb", "vc"])
        self.assertNodes(au.vfs.nodes["vb"], [])
        self.assertNodes(au.vfs.nodes["vc"], [])

        self.assertAxs(au.vfs.axs, [["ua", "ub"]])
        self.assertAxsAt(au, "vb", [["ua", "ub"]])  # same as:
        self.assertAxs(au.vfs.nodes["vb"].axs, [["ua", "ub"]])
        self.assertAxs(au.vfs.nodes["vc"].axs, [["ua", "ub", "uc"]])

    def test_3(self):
        """
        IdP-only; dynamically created volumes for users/groups
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/3.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "")
        self.assertNodes(au.vfs, [])
        self.assertAxs(au.vfs.axs, [])

        au.idp_checkin(None, "iua", "iga")
        self.assertNodes(au.vfs, ["vg", "vu"])
        self.assertNodesAt(au, "vu", ["iua"])  # same as:
        self.assertNodes(au.vfs.nodes["vu"], ["iua"])
        self.assertNodes(au.vfs.nodes["vg"], ["iga"])
        self.assertEqual(au.vfs.nodes["vu"].realpath, "")
        self.assertEqual(au.vfs.nodes["vg"].realpath, "")
        self.assertAxs(au.vfs.axs, [])
        self.assertAxsAt(au, "vu/iua", [["iua"]])  # same as:
        self.assertAxs(self.nav(au, "vu/iua").axs, [["iua"]])
        self.assertAxs(self.nav(au, "vg/iga").axs, [["iua"]])  # axs is unames

    def test_4(self):
        """
        IdP mixed with regular users
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/4.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "")
        self.assertNodes(au.vfs, ["vu"])
        self.assertNodesAt(au, "vu", ["ua", "ub"])
        self.assertAxs(au.vfs.axs, [])
        self.assertAxsAt(au, "vu", [])
        self.assertAxsAt(au, "vu/ua", [["ua"]])
        self.assertAxsAt(au, "vu/ub", [["ub"]])

        au.idp_checkin(None, "iua", "iga")
        self.assertNodes(au.vfs, ["vg", "vu"])
        self.assertNodesAt(au, "vu", ["iua", "ua", "ub"])
        self.assertNodesAt(au, "vg", ["iga1", "iga2"])
        self.assertAxs(au.vfs.axs, [])
        self.assertAxsAt(au, "vu", [])
        self.assertAxsAt(au, "vu/iua", [["iua"]])
        self.assertAxsAt(au, "vu/ua", [["ua"]])
        self.assertAxsAt(au, "vu/ub", [["ub"]])
        self.assertAxsAt(au, "vg", [])
        self.assertAxsAt(au, "vg/iga1", [["iua"]])
        self.assertAxsAt(au, "vg/iga2", [["iua", "ua"]])
        self.assertEqual(self.nav(au, "vu/ua").realpath, "/u-ua")
        self.assertEqual(self.nav(au, "vu/iua").realpath, "/u-iua")
        self.assertEqual(self.nav(au, "vg/iga1").realpath, "/g1-iga")
        self.assertEqual(self.nav(au, "vg/iga2").realpath, "/g2-iga")

        au.idp_checkin(None, "iub", "iga")
        self.assertAxsAt(au, "vu/iua", [["iua"]])
        self.assertAxsAt(au, "vg/iga1", [["iua", "iub"]])
        self.assertAxsAt(au, "vg/iga2", [["iua", "iub", "ua"]])

    def test_5(self):
        """
        one IdP user in multiple groups
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/5.conf"], **xcfg), self.log)

        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "")
        self.assertNodes(au.vfs, ["g", "ga", "gb"])
        self.assertAxs(au.vfs.axs, [])

        au.idp_checkin(None, "iua", "ga")
        self.assertNodes(au.vfs, ["g", "ga", "gb"])
        self.assertAxsAt(au, "g", [["iua"]])
        self.assertAxsAt(au, "ga", [["iua"]])
        self.assertAxsAt(au, "gb", [])

        au.idp_checkin(None, "iua", "gb")
        self.assertNodes(au.vfs, ["g", "ga", "gb"])
        self.assertAxsAt(au, "g", [["iua"]])
        self.assertAxsAt(au, "ga", [])
        self.assertAxsAt(au, "gb", [["iua"]])

        au.idp_checkin(None, "iua", "ga|gb")
        self.assertNodes(au.vfs, ["g", "ga", "gb"])
        self.assertAxsAt(au, "g", [["iua"]])
        self.assertAxsAt(au, "ga", [["iua"]])
        self.assertAxsAt(au, "gb", [["iua"]])

    def test_6(self):
        """
        IdP volumes with anon-get and other users/groups (github#79)
        """
        _, cfgdir, xcfg = self.prep()
        au = AuthSrv(Cfg(c=[cfgdir + "/6.conf"], **xcfg), self.log)

        self.assertAxs(au.vfs.axs, [])
        self.assertEqual(au.vfs.vpath, "")
        self.assertEqual(au.vfs.realpath, "")
        self.assertNodes(au.vfs, [])

        au.idp_checkin(None, "iua", "")
        star = ["*", "iua"]
        self.assertNodes(au.vfs, ["get", "priv"])
        self.assertAxsAt(au, "get/iua", [["iua"], [], [], [], star])
        self.assertAxsAt(au, "priv/iua", [["iua"], [], []])

        au.idp_checkin(None, "iub", "")
        star = ["*", "iua", "iub"]
        self.assertNodes(au.vfs, ["get", "priv"])
        self.assertAxsAt(au, "get/iua", [["iua"], [], [], [], star])
        self.assertAxsAt(au, "get/iub", [["iub"], [], [], [], star])
        self.assertAxsAt(au, "priv/iua", [["iua"], [], []])
        self.assertAxsAt(au, "priv/iub", [["iub"], [], []])

        au.idp_checkin(None, "iuc", "su")
        star = ["*", "iua", "iub", "iuc"]
        self.assertNodes(au.vfs, ["get", "priv", "team"])
        self.assertAxsAt(au, "get/iua", [["iua", "iuc"], [], ["iuc"], [], star])
        self.assertAxsAt(au, "get/iub", [["iub", "iuc"], [], ["iuc"], [], star])
        self.assertAxsAt(au, "get/iuc", [["iuc"], [], ["iuc"], [], star])
        self.assertAxsAt(au, "priv/iua", [["iua", "iuc"], [], ["iuc"]])
        self.assertAxsAt(au, "priv/iub", [["iub", "iuc"], [], ["iuc"]])
        self.assertAxsAt(au, "priv/iuc", [["iuc"], [], ["iuc"]])
        self.assertAxsAt(au, "team/su/iuc", [["iuc"]])

        au.idp_checkin(None, "iud", "su")
        self.assertAxsAt(au, "team/su/iuc", [["iuc", "iud"]])
        self.assertAxsAt(au, "team/su/iud", [["iuc", "iud"]])
@@ -110,28 +110,28 @@ class Cfg(Namespace):
    def __init__(self, a=None, v=None, c=None, **ka0):
        ka = {}

        ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw q rand smb srch_dbg stats th_no_crop vague_403 vc ver xdev xlink xvol"
        ex = "daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid hardlink ih ihead magic never_symlink nid nih no_acode no_athumb no_dav no_dedup no_del no_dupe no_lifetime no_logues no_mv no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw q rand smb srch_dbg stats vague_403 vc ver xdev xlink xvol"
        ka.update(**{k: False for k in ex.split()})

        ex = "dotpart dotsrch no_dhash no_fastboot no_rescan no_sendfile no_voldump re_dhash plain_ip"
        ka.update(**{k: True for k in ex.split()})

        ex = "ah_cli ah_gen css_browser hist ipa_re js_browser no_forget no_hash no_idx nonsus_urls"
        ex = "ah_cli ah_gen css_browser hist js_browser no_forget no_hash no_idx nonsus_urls"
        ka.update(**{k: None for k in ex.split()})

        ex = "hash_mt srch_time u2j"
        ex = "hash_mt srch_time u2abort u2j"
        ka.update(**{k: 1 for k in ex.split()})

        ex = "reg_cap s_thead s_tbody th_convt"
        ka.update(**{k: 9 for k in ex.split()})

        ex = "db_act df loris re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
        ex = "db_act df k304 loris re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
        ka.update(**{k: 0 for k in ex.split()})

        ex = "ah_alg bname doctitle exit favico idp_h_usr html_head lg_sbf log_fk md_sbf name textfiles unlist vname R RS SR"
        ka.update(**{k: "" for k in ex.split()})

        ex = "on403 on404 xad xar xau xban xbd xbr xbu xiu xm"
        ex = "grp on403 on404 xad xar xau xban xbd xbr xbu xiu xm"
        ka.update(**{k: [] for k in ex.split()})

        ex = "exp_lg exp_md th_coversd"
@@ -146,6 +146,8 @@ class Cfg(Namespace):
            E=E,
            dbd="wal",
            fk_salt="a" * 16,
            idp_gsep=re.compile("[|:;+,]"),
            iobuf=256 * 1024,
            lang="eng",
            log_badpwd=1,
            logout=573,
@@ -153,10 +155,13 @@ class Cfg(Namespace):
            mth={},
            mtp=[],
            rm_retry="0/0",
            s_wr_sz=512 * 1024,
            s_rd_sz=256 * 1024,
            s_wr_sz=256 * 1024,
            sort="href",
            srch_hits=99999,
            th_crop="y",
            th_size="320x256",
            th_x3="n",
            u2sort="s",
            u2ts="c",
            unpost=600,
@@ -239,6 +244,7 @@ class VHttpConn(object):
        self.freshen_pwd = 0.0
        self.hsrv = VHttpSrv(args, asrv, log)
        self.ico = None
        self.ipa_nm = None
        self.lf_url = None
        self.log_func = log
        self.log_src = "a"
@@ -250,4 +256,4 @@ class VHttpConn(object):
        self.thumbcli = None
        self.u2fh = FHC()

        self.get_u2idx = self.hsrv.get_u2idx
        self.get_u2idx = self.hsrv.get_u2idx